path (string, length 7–265) | concatenated_notebook (string, length 46–17M)
---|---
content/lessons/11/Now-You-Code/NYC3-Historical-Weather.ipynb | ###Markdown
Now You Code 3: Syracuse Historical WeatherWrite a program to prompt a user to input a date and time string, and then it should report the temperature and weather conditions on that day and time for Syracuse, NY.To look up the weather you will need to use the Dark Sky Time Machine: https://darksky.net/dev/docs/time-machine The date and time that the user inputs must be in the following format: `YYYY-MM-DDThh:mm:ss` For example January 7, 2016 at 4:30pm would be: `2016-01-07T16:30:00`Be sure to use the GPS coordinates for Syracuse, NY which are (lat=43.048122,lng=-76.147424)Example Run:```Syracuse, NY Historical WeatherEnter a date and time in the following format: YYYY-MM-DDThh:mm:ss => 2016-07-23T14:30:00On 2016-07-23T14:30:00 Syracuse, NY was Partly Cloudy with a temperature of 85```
###Code
# Write your plan (todo list) here
# 1. input a date and time string in the format YYYY-MM-DDThh:mm:ss
# 2. build the Dark Sky Time Machine URL with the Syracuse, NY coordinates and that time
# 3. call requests.get on the URL and parse the JSON response
# 4. print the 'summary' and 'temperature' from the 'currently' section
# todo: write code here (a sketch follows below).
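# --- A minimal sketch of one possible solution (added; this empty cell was never run). ---
# Assumptions: YOUR_API_KEY is a placeholder for a real Dark Sky key, and the response
# layout ('currently' -> 'summary' / 'temperature') follows the Time Machine docs above.
import requests

key = 'YOUR_API_KEY'  # placeholder: sign up at https://darksky.net/dev
date_time = input("Enter a date and time in the following format: YYYY-MM-DDThh:mm:ss => ")
url = 'https://api.darksky.net/forecast/%s/43.048122,-76.147424,%s' % (key, date_time)
response = requests.get(url)          # call the Time Machine endpoint
weather = response.json()             # parse the JSON body into a dict
currently = weather['currently']      # conditions at the requested time
print("On %s Syracuse, NY was %s with a temperature of %s" %
      (date_time, currently['summary'], currently['temperature']))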
###Output
_____no_output_____
###Markdown
Now You Code 3: Syracuse Historical WeatherWrite a program to prompt a user to input a date and time string, and then it should **report the temperature and weather conditions on that day and time for Syracuse, NY.**To accomplish this, use the Dark Sky Time Machine API, here: https://darksky.net/dev/docs/time-machine You're going to have to read about the API and understand how it works before you can write this program, which is the point of the exercise.The date and time that the user inputs must be in the following format: `YYYY-MM-DDThh:mm:ss` For example January 7, 2016 at 4:30pm would be: `2016-01-07T16:30:00`Be sure to use the GPS coordinates for Syracuse, NY which are (lat=43.048122,lng=-76.147424)Example Run (Based on the exchange rates from 2017-03-06:```Syracuse, NY Historical WeatherEnter a date and time in the following format: YYYY-MM-DDThh:mm:ss => 2016-07-23T14:30:00On 2016-07-23T14:30:00 Syracuse, NY was Partly Cloudy with a temperature of 85``` Step 1: Problem AnalysisInputs: A date in the format `YYYY-MM-DDThh:mm:ss` Outputs: the current weather conditions (sunny, rain, partly cloudy) and the temperature.Algorithm (Steps in Program):- input date and time string- set url to https://api.darksky.net/forecast/[key]/43.048122,76.147424,[time]- use requests.get to obtain the site data- format the response into JSON- print the output using the time (variable), weather summary, and temperature (dictionary entries)
###Code
print("Syracuse Historical Weather (Powered by Dark Sky)")
import requests
import json
# step 2: write code here
time = input("In the format 'YYYY-MM-DDThh:mm:ss', enter the date and time: ")
url = 'https://api.darksky.net/forecast/443488a736f9e3a0dcbd670aa8d3b401/43.048122,-76.147424,' + time
try:
response = requests.get(url)
response = response.json()
print("On %s, Syracuse, NY, was %s with a temperature of %s°F." % (time, response['currently']['summary'], response['currently']['temperature']))
except json.decoder.JSONDecodeError as e:
print("Error: Could not decode the response into JSON (Make sure the date is in the correct format!)\nDetails:",e)
except requests.exceptions.RequestException as e:
print("Error: Could not connect to the site\nDetails:",e)
###Output
Syracuse Historical Weather (Powered by Dark Sky)
In the format 'YYYY-MM-DDThh:mm:ss', enter the date and time: 2018-01-01T12:00:00
On 2018-01-01T12:00:00, Syracuse, NY, was Clear with a temperature of 10.06°F.
###Markdown
Now You Code 3: Syracuse Historical WeatherWrite a program to prompt a user to input a date and time string, and then it should **report the temperature and weather conditions on that day and time for Syracuse, NY.**To accomplish this, use the Dark Sky Time Machine API, here: https://darksky.net/dev/docs/time-machine You're going to have to read about the API and understand how it works before you can write this program, which is the point of the exercise.The date and time that the user inputs must be in the following format: `YYYY-MM-DDThh:mm:ss` For example January 7, 2016 at 4:30pm would be: `2016-01-07T16:30:00`Be sure to use the GPS coordinates for Syracuse, NY which are (lat=43.048122,lng=-76.147424)Example Run (Based on the exchange rates from 2017-03-06:```Syracuse, NY Historical WeatherEnter a date and time in the following format: YYYY-MM-DDThh:mm:ss => 2016-07-23T14:30:00On 2016-07-23T14:30:00 Syracuse, NY was Partly Cloudy with a temperature of 85``` Step 1: Problem AnalysisInputs: A date in the format `YYYY-MM-DDThh:mm:ss` Outputs: the current weather conditions (sunny, rain, partly cloudy) and the temperature.Algorithm (Steps in Program):```todo write algorithm here```
###Code
import requests
try:
print("Syracuse, NY Historical Weather")
datetime = input("Enter date in the following format: YYYY-MM-DDThh:mm:ss")
key = '3489a0a9c979f039feefca0ffcb95cdb'
url = 'https://api.darksky.net/forecast/%s/43.048122,-76.147424,%s' % (key,datetime)
response = requests.get(url)
conditions = response.json()
currently = conditions['currently']
print("On %s Syracuse, NY was %s with a temperature of %.0f" % (datetime, currently['summary'], currently['apparentTemperature']))
except:
print("There was an issue calling the DarkSky SPI:" ,url)
###Output
Syracuse, NY Historical Weather
Enter date in the following format: YYYY-MM-DDThh:mm:ss2012-07-13T01:22:22
On 2012-07-13T01:22:22 Syracuse, NY was Mostly Cloudy with a temperature of 69
###Markdown
Step 6: Questions1. What happens when you enter `1/1/2017` as date input? Which error to you get? Fix the program in step 2 so that it handles this error.2. Put your laptop in Airplane mode (disable the wifi) and then run the program. What happens? Fix the program in step 4 so that it handles this error. Reminder of Evaluation Criteria1. What the problem attempted (analysis, code, and answered questions) ?2. What the problem analysis thought out? (does the program match the plan?)3. Does the code execute without syntax error?4. Does the code solve the intended problem?5. Is the code well written? (easy to understand, modular, and self-documenting, handles errors)
###Code
# 1. An error occurs: a date like 1/1/2017 is not in the format the URL expects, so the call to the Dark Sky API fails and the except message prints.
# 2. Without a network connection the request cannot reach the API, so the program again falls into the except and prints the error message.
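# --- A hedged sketch of one possible fix for the two questions above (not the author's required solution). ---
# Question 1: validate the format before calling the API; question 2: catch the
# requests exception raised when there is no network connection.
import requests
from datetime import datetime as dt

date_text = input("Enter date in the following format: YYYY-MM-DDThh:mm:ss => ")
key = '3489a0a9c979f039feefca0ffcb95cdb'   # key taken from the cell above
try:
    dt.strptime(date_text, "%Y-%m-%dT%H:%M:%S")   # raises ValueError for inputs like 1/1/2017
    url = 'https://api.darksky.net/forecast/%s/43.048122,-76.147424,%s' % (key, date_text)
    response = requests.get(url)
    currently = response.json()['currently']
    print("On %s Syracuse, NY was %s with a temperature of %.0f" %
          (date_text, currently['summary'], currently['apparentTemperature']))
except ValueError:
    print("Please use the format YYYY-MM-DDThh:mm:ss, e.g. 2016-01-07T16:30:00")
except requests.exceptions.RequestException as e:
    print("Could not reach the Dark Sky API (are you offline?):", e)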
###Output
_____no_output_____
###Markdown
Now You Code 3: Syracuse Historical WeatherWrite a program to prompt a user to input a date and time string, and then it should **report the temperature and weather conditions on that day and time for Syracuse, NY.**To accomplish this, use the Dark Sky Time Machine API, here: https://darksky.net/dev/docs/time-machine You're going to have to read about the API and understand how it works before you can write this program, which is the point of the exercise.The date and time that the user inputs must be in the following format: `YYYY-MM-DDThh:mm:ss` For example January 7, 2016 at 4:30pm would be: `2016-01-07T16:30:00`Be sure to use the GPS coordinates for Syracuse, NY which are (lat=43.048122,lng=-76.147424)Example Run (Based on the exchange rates from 2017-03-06:```Syracuse, NY Historical WeatherEnter a date and time in the following format: YYYY-MM-DDThh:mm:ss => 2016-07-23T14:30:00On 2016-07-23T14:30:00 Syracuse, NY was Partly Cloudy with a temperature of 85``` Step 1: Problem AnalysisInputs: A date in the format `YYYY-MM-DDThh:mm:ss` Outputs: the current weather conditions (sunny, rain, partly cloudy) and the temperature.Algorithm (Steps in Program):```todo write algorithm here```
###Code
#https://api.darksky.net/forecast/[key]/[latitude],[longitude],[time]
lat = '43.048122'
lng = '-76.147424'
key = 'f401535c6d98772f61738417ac33e0c5'
key2 = '8bbd2056a66915d251dd59abdf906bd2'
time = '2016-01-07T16:30:00'
print('https://api.darksky.net/forecast/%s/%s,%s,%s' % (key, lat, lng, time))
# step 2: write code here
import requests
import json
day = input('Enter a Day in Format(yyyy-mm-dd): ')
day_time = input('Enter a Time for in Date Format(HH:MM:SS): ')
time = day + 'T' + day_time
print(time)
url = ('https://api.darksky.net/forecast/%s/%s,%s,%s' % (key, lat, lng, time))
response = requests.get(url)
if response.ok:
weather = response.json()
temp = weather['currently']['temperature']
conditions = weather['currently']['summary']
print(temp, conditions)
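else:
    # Hedged addition, not exercised in the recorded run below (response.ok was True):
    # the original cell silently ignores failed requests; status_code is a standard
    # requests.Response attribute.
    print("Request failed with HTTP status", response.status_code)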
###Output
Enter a Day in Format(yyyy-mm-dd): 2016-06-23
Enter a Time for in Date Format(HH:MM:SS): 14:30:00
2016-06-23T14:30:00
72.96 Partly Cloudy
###Markdown
Now You Code 3: Syracuse Historical WeatherWrite a program to prompt a user to input a date and time string, and then it should **report the temperature and weather conditions on that day and time for Syracuse, NY.**To accomplish this, use the Dark Sky Time Machine API, here: https://darksky.net/dev/docs/time-machine You're going to have to read about the API and understand how it works before you can write this program, which is the point of the exercise.The date and time that the user inputs must be in the following format: `YYYY-MM-DDThh:mm:ss` For example January 7, 2016 at 4:30pm would be: `2016-01-07T16:30:00`Be sure to use the GPS coordinates for Syracuse, NY which are (lat=43.048122,lng=-76.147424)Example Run (Based on the exchange rates from 2017-03-06:```Syracuse, NY Historical WeatherEnter a date and time in the following format: YYYY-MM-DDThh:mm:ss => 2016-07-23T14:30:00On 2016-07-23T14:30:00 Syracuse, NY was Partly Cloudy with a temperature of 85``` Step 1: Problem AnalysisInputs: A date in the format `YYYY-MM-DDThh:mm:ss` Outputs: the current weather conditions (sunny, rain, partly cloudy) and the temperature.Algorithm (Steps in Program):```todo write algorithm here```
###Code
import requests
try:
print("Syracuse, NY Weather")
date = input("Enter a date and time - YYYY-MMDDThh:mm:ss")
key = '04651eb3d270ddd5d15c2420efdf3c7b'
url = 'https://api.darksky.net/forecast/06e97b238ec0e2056b8a6a4d5f53b8ed/43.048122,-76.147424,%s'%(date)
response = requests.get(url)
conditions = response.json()
currently = conditions['currently']
print(date, currently['summary'], currently['apparentTemperature'])
except:
print("Invaid")
###Output
Syracuse, NY Weather
Enter a date and time - YYYY-MM-DDThh:mm:ss2016-01-07T16:30:00
2016-01-07T16:30:00 Mostly Cloudy 35.92
###Markdown
Step 6: Questions1. What happens when you enter `1/1/2017` as date input? Which error to you get? Fix the program in step 2 so that it handles this error.2. Put your laptop in Airplane mode (disable the wifi) and then run the program. What happens? Fix the program in step 4 so that it handles this error.
###Code
# 1) Error. I put a try/except around the code to handle any errors associated with invalid inputs and/or API call issues.
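# --- A hedged sketch for question 2 (one way to handle airplane mode; not the author's required fix). ---
# With no network, requests raises requests.exceptions.ConnectionError, which the
# bare except in the cell above reports only as a generic error message.
import requests

date = input("Enter a date and time - YYYY-MM-DDThh:mm:ss => ")
url = 'https://api.darksky.net/forecast/06e97b238ec0e2056b8a6a4d5f53b8ed/43.048122,-76.147424,%s' % (date)
try:
    currently = requests.get(url).json()['currently']
    print(date, currently['summary'], currently['apparentTemperature'])
except requests.exceptions.ConnectionError:
    print("No network connection: could not reach the Dark Sky API.")
except Exception as e:
    print("Something else went wrong:", e)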
###Output
_____no_output_____
###Markdown
Now You Code 3: Syracuse Historical WeatherWrite a program to prompt a user to input a date and time string, and then it should **report the temperature and weather conditions on that day and time for Syracuse, NY.**To accomplish this, use the Dark Sky Time Machine API, here: https://darksky.net/dev/docs/time-machine You're going to have to read about the API and understand how it works before you can write this program, which is the point of the exercise.The date and time that the user inputs must be in the following format: `YYYY-MM-DDThh:mm:ss` For example January 7, 2016 at 4:30pm would be: `2016-01-07T16:30:00`Be sure to use the GPS coordinates for Syracuse, NY which are (lat=43.048122,lng=-76.147424)Example Run (Based on the exchange rates from 2017-03-06:```Syracuse, NY Historical WeatherEnter a date and time in the following format: YYYY-MM-DDThh:mm:ss => 2016-07-23T14:30:00On 2016-07-23T14:30:00 Syracuse, NY was Partly Cloudy with a temperature of 85``` Step 1: Problem AnalysisInputs: A date in the format `YYYY-MM-DDThh:mm:ss` Outputs: the current weather conditions (sunny, rain, partly cloudy) and the temperature.Algorithm (Steps in Program):```todo write algorithm here```
###Code
# step 2: write code here
import requests
try:
date_time = input("Enter date and time in the following format: YYYY-MM-DDThh:mm:ss => ")
key = "691910a478cdb6c87658c57022e88fe4"
url = "https://api.darksky.net/forecast/[key]/[43.048122],[-76.147424],[date_time]"
response = requests.get(url)
conditions = response.json()
currently = conditions["currently"]
print(type(currently))
print("On",datetime,"Sycracuse, NY was",currently,conditions)
except:
print("aaaaaaah")
###Output
Enter date and time in the following format: YYYY-MM-DDThh:mm:ss => 2016-07-23T14:30:00
aaaaaaah
###Markdown
Now You Code 3: Syracuse Historical WeatherWrite a program to prompt a user to input a date and time string, and then it should **report the temperature and weather conditions on that day and time for Syracuse, NY.**To accomplish this, use the Dark Sky Time Machine API, here: https://darksky.net/dev/docs/time-machine You're going to have to read about the API and understand how it works before you can write this program, which is the point of the exercise.The date and time that the user inputs must be in the following format: `YYYY-MM-DDThh:mm:ss` For example January 7, 2016 at 4:30pm would be: `2016-01-07T16:30:00`Be sure to use the GPS coordinates for Syracuse, NY which are (lat=43.048122,lng=-76.147424)Example Run (Based on the exchange rates from 2017-03-06:```Syracuse, NY Historical WeatherEnter a date and time in the following format: YYYY-MM-DDThh:mm:ss => 2016-07-23T14:30:00On 2016-07-23T14:30:00 Syracuse, NY was Partly Cloudy with a temperature of 85```
###Code
#secret key: 087ae10e6a98da069cd508f620f290e9
###Output
_____no_output_____
###Markdown
Step 1: Problem AnalysisInputs: A date in the format `YYYY-MM-DDThh:mm:ss` Outputs: the current weather conditions (sunny, rain, partly cloudy) and the temperature.Algorithm (Steps in Program):```todo write algorithm here```
###Code
!pip install -q requests
import requests
import json
# step 2: write code here
print("Syracuse, NY Historical Weather")
key='087ae10e6a98da069cd508f620f290e9'
month=input("Enter a month as 2 digits ie 02 ")
day=input("Enter a day as 2 digits ie 02 ")
year=input("Enter year as 4 digits ie 2017 ")
hour=input("Enter the hour as two digits, in 24 hour format ")
minute=input("Ender the minute as two digits ie 03 ")
timeDay=(year+'-'+month+'-'+day+'T'+hour+':'+minute+':00')
url='https://api.darksky.net/forecast/%s/43.048122,-76.147424,%s' %(key, timeDay)
response=requests.get(url)   # one request is enough; its JSON is parsed below
conditions=response.json()
currently=conditions['currently']
print(currently)
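# Hedged addition (not part of the original cell): format the output the way the
# example run in the instructions shows, using the same 'currently' fields.
print("On %s Syracuse, NY was %s with a temperature of %.0f" %
      (timeDay, currently['summary'], currently['temperature']))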
###Output
_____no_output_____
###Markdown
Step 6: Questions1. What happens when you enter `1/1/2017` as date input? Which error to you get? Fix the program in step 2 so that it handles this error.2. Put your laptop in Airplane mode (disable the wifi) and then run the program. What happens? Fix the program in step 4 so that it handles this error.
###Code
#1. You get an error: a date entered as month/day/year (1/1/2017) does not match the format the URL expects, so the API call cannot return data (see the sketch below for one way to convert it).
#2. The cell didn't run successfully; without wifi you can't reach the website to gather the info.
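# --- A hedged sketch for answer 1 (an illustration, not the assignment's required fix): ---
# convert a month/day/year date such as 1/1/2017 into the YYYY-MM-DDThh:mm:ss form
# the URL expects, assuming midnight when no time is given.
from datetime import datetime

raw = "1/1/2017"                                   # sample input in the wrong format
parsed = datetime.strptime(raw, "%m/%d/%Y")        # raises ValueError if it does not match
timeDay = parsed.strftime("%Y-%m-%dT%H:%M:%S")     # -> '2017-01-01T00:00:00'
print(timeDay)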
###Output
_____no_output_____
###Markdown
Reminder of Evaluation Criteria1. What the problem attempted (analysis, code, and answered questions) ?2. What the problem analysis thought out? (does the program match the plan?)3. Does the code execute without syntax error?4. Does the code solve the intended problem?5. Is the code well written? (easy to understand, modular, and self-documenting, handles errors)
###Code
#1. The problem attempted was to write a program that calls the Dark Sky API to report the weather at a given date and time in Syracuse, NY.
#2. yes
#3. yes
#4. yes
#5. It is written only for Syracuse, NY, but I could have taken the time to allow the user to enter any location.
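# --- A hedged sketch of the improvement mentioned in answer 5 (an illustration only): ---
# let the user supply any coordinates instead of hard-coding Syracuse, NY.
import requests

key = '087ae10e6a98da069cd508f620f290e9'           # key from the cell near the top of this notebook
lat = input("Enter a latitude (e.g. 43.048122): ")
lng = input("Enter a longitude (e.g. -76.147424): ")
when = input("Enter a date and time (YYYY-MM-DDThh:mm:ss): ")
url = 'https://api.darksky.net/forecast/%s/%s,%s,%s' % (key, lat, lng, when)
currently = requests.get(url).json()['currently']
print("On %s (%s,%s) it was %s with a temperature of %s" %
      (when, lat, lng, currently['summary'], currently['temperature']))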
###Output
_____no_output_____
###Markdown
Now You Code 3: Syracuse Historical WeatherWrite a program to prompt a user to input a date and time string, and then it should **report the temperature and weather conditions on that day and time for Syracuse, NY.**To accomplish this, use the Dark Sky Time Machine API, here: https://darksky.net/dev/docs/time-machine You're going to have to read about the API and understand how it works before you can write this program, which is the point of the exercise.The date and time that the user inputs must be in the following format: `YYYY-MM-DDThh:mm:ss` For example January 7, 2016 at 4:30pm would be: `2016-01-07T16:30:00`Be sure to use the GPS coordinates for Syracuse, NY which are (lat=43.048122,lng=-76.147424)Example Run (Based on the exchange rates from 2017-03-06:```Syracuse, NY Historical WeatherEnter a date and time in the following format: YYYY-MM-DDThh:mm:ss => 2016-07-23T14:30:00On 2016-07-23T14:30:00 Syracuse, NY was Partly Cloudy with a temperature of 85``` Step 1: Problem AnalysisInputs: Outputs: Algorithm (Steps in Program):```todo write algorithm here```
###Code
# step 2: write code here
###Output
_____no_output_____
###Markdown
Now You Code 3: Syracuse Historical WeatherWrite a program to prompt a user to input a date and time string, and then it should **report the temperature and weather conditions on that day and time for Syracuse, NY.**To accomplish this, use the Dark Sky Time Machine API, here: https://darksky.net/dev/docs/time-machine You're going to have to read about the API and understand how it works before you can write this program, which is the point of the exercise.The date and time that the user inputs must be in the following format: `YYYY-MM-DDThh:mm:ss` For example January 7, 2016 at 4:30pm would be: `2016-01-07T16:30:00`Be sure to use the GPS coordinates for Syracuse, NY which are (lat=43.048122,lng=-76.147424)Example Run (Based on the exchange rates from 2017-03-06:```Syracuse, NY Historical WeatherEnter a date and time in the following format: YYYY-MM-DDThh:mm:ss => 2016-07-23T14:30:00On 2016-07-23T14:30:00 Syracuse, NY was Partly Cloudy with a temperature of 85``` Step 1: Problem AnalysisInputs: A date in the format `YYYY-MM-DDThh:mm:ss` Outputs: the current weather conditions (sunny, rain, partly cloudy) and the temperature.Algorithm (Steps in Program):```todo write algorithm here```
###Code
# step 2: write code here
###Output
_____no_output_____
###Markdown
Now You Code 3: Syracuse Historical WeatherWrite a program to prompt a user to input a date and time string, and then it should **report the temperature and weather conditions on that day and time for Syracuse, NY.**To accomplish this, use the Dark Sky Time Machine API, here: https://darksky.net/dev/docs/time-machine You're going to have to read about the API and understand how it works before you can write this program, which is the point of the exercise.The date and time that the user inputs must be in the following format: `YYYY-MM-DDThh:mm:ss` For example January 7, 2016 at 4:30pm would be: `2016-01-07T16:30:00`Be sure to use the GPS coordinates for Syracuse, NY which are (lat=43.048122,lng=-76.147424)Example Run (Based on the exchange rates from 2017-03-06:```Syracuse, NY Historical WeatherEnter a date and time in the following format: YYYY-MM-DDThh:mm:ss => 2016-07-23T14:30:00On 2016-07-23T14:30:00 Syracuse, NY was Partly Cloudy with a temperature of 85``` Step 1: Problem AnalysisInputs: A date in the format `YYYY-MM-DDThh:mm:ss` Outputs: the current weather conditions (sunny, rain, partly cloudy) and the temperature.Algorithm (Steps in Program): use the url, request the url, use coordinates, enter date and time.```todo write algorithm here```
###Code
# step 2: write code here
import requests

# the docs page https://darksky.net/dev/docs/time-machine explains the request;
# the actual API endpoint is api.darksky.net
key = 'YOUR_API_KEY'                               # placeholder: sign up for a key at https://darksky.net/dev
location = "Syracuse, NY"
coords = { 'lat' : 43.048122, 'lng' : -76.147424 } # required Syracuse coordinates
date_time = input("Enter date and time in this format, YYYY-MM-DDThh:mm:ss => ")
url = 'https://api.darksky.net/forecast/%s/%f,%f,%s' % (key, coords['lat'], coords['lng'], date_time)
response = requests.get(url)
geodata = response.json()
weather = geodata['currently']['summary']
temperature = geodata['currently']['temperature']
print("On %s %s was %s with a temperature of %s" % (date_time, location, weather, temperature))
###Output
_____no_output_____
###Markdown
Now You Code 3: Syracuse Historical WeatherWrite a program to prompt a user to input a date and time string, and then it should **report the temperature and weather conditions on that day and time for Syracuse, NY.**To accomplish this, use the Dark Sky Time Machine API, here: https://darksky.net/dev/docs/time-machine You're going to have to read about the API and understand how it works before you can write this program, which is the point of the exercise.The date and time that the user inputs must be in the following format: `YYYY-MM-DDThh:mm:ss` For example January 7, 2016 at 4:30pm would be: `2016-01-07T16:30:00`Be sure to use the GPS coordinates for Syracuse, NY which are (lat=43.048122,lng=-76.147424)Example Run (Based on the exchange rates from 2017-03-06:```Syracuse, NY Historical WeatherEnter a date and time in the following format: YYYY-MM-DDThh:mm:ss => 2016-07-23T14:30:00On 2016-07-23T14:30:00 Syracuse, NY was Partly Cloudy with a temperature of 85``` Step 1: Problem AnalysisInputs: A date in the format `YYYY-MM-DDThh:mm:ss` Outputs: the current weather conditions (sunny, rain, partly cloudy) and the temperature.Algorithm (Steps in Program):```todo write algorithm here```
###Code
# step 2: write code here
###Output
_____no_output_____
###Markdown
Now You Code 3: Syracuse Historical WeatherWrite a program to prompt a user to input a date and time string, and then it should **report the temperature and weather conditions on that day and time for Syracuse, NY.**To accomplish this, use the Dark Sky Time Machine API, here: https://darksky.net/dev/docs/time-machine You're going to have to read about the API and understand how it works before you can write this program, which is the point of the exercise.The date and time that the user inputs must be in the following format: `YYYY-MM-DDThh:mm:ss` For example January 7, 2016 at 4:30pm would be: `2016-01-07T16:30:00`Be sure to use the GPS coordinates for Syracuse, NY which are (lat=43.048122,lng=-76.147424)Example Run (Based on the exchange rates from 2017-03-06:```Syracuse, NY Historical WeatherEnter a date and time in the following format: YYYY-MM-DDThh:mm:ss => 2016-07-23T14:30:00On 2016-07-23T14:30:00 Syracuse, NY was Partly Cloudy with a temperature of 85```
###Code
# once you make a GET request to this url
# you get all of this data (the data you see in the example request)
# two things to pull out of the response (see the sketch below):
# - the 'currently' summary
# - the 'currently' temperature
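# --- A hedged sketch of pulling those two things out of the response (illustration only): ---
import requests

key = '67fb6248744159aa45f51831736aa1fc'           # key used in the next cell
url = 'https://api.darksky.net/forecast/%s/43.048122,-76.147424,2016-01-07T16:30:00' % key
data = requests.get(url).json()
print(list(data.keys()))                           # top-level sections, e.g. 'currently', 'hourly', 'daily'
print(data['currently']['summary'], data['currently']['temperature'])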
###Output
_____no_output_____
###Markdown
Step 1: Problem AnalysisInputs: A date in the format `YYYY-MM-DDThh:mm:ss` Outputs: the current weather conditions (sunny, rain, partly cloudy) and the temperature.Algorithm (Steps in Program):```import requestsimport jsonassign KEYassign url```
###Code
import requests
import json
# step 2: write code here
key = '67fb6248744159aa45f51831736aa1fc'
url = 'https://api.darksky.net/forecast/67fb6248744159aa45f51831736aa1fc/37.8267,-122.4233'
response = requests.get(url)
print(response)
response.ok
response.text
geodata = response.json()
print(geodata)
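# --- Hedged next step (not executed as part of the recorded output below): the same kind of
# request pointed at the Syracuse coordinates with a Time Machine timestamp appended. ---
syr_url = 'https://api.darksky.net/forecast/%s/43.048122,-76.147424,2016-01-07T16:30:00' % key
syr_currently = requests.get(syr_url).json()['currently']
print(syr_currently['summary'], syr_currently['temperature'])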
###Output
<Response [200]>
{'latitude': 37.8267, 'longitude': -122.4233, 'timezone': 'America/Los_Angeles', 'currently': {'time': 1523452001, 'summary': 'Partly Cloudy', 'icon': 'partly-cloudy-night', 'nearestStormDistance': 17, 'nearestStormBearing': 58, 'precipIntensity': 0, 'precipProbability': 0, 'temperature': 51.1, 'apparentTemperature': 51.1, 'dewPoint': 45.62, 'humidity': 0.81, 'pressure': 1017.22, 'windSpeed': 7.33, 'windGust': 10.5, 'windBearing': 268, 'cloudCover': 0.44, 'uvIndex': 0, 'visibility': 9.92, 'ozone': 306.77}, 'minutely': {'summary': 'Partly cloudy for the hour.', 'icon': 'partly-cloudy-night', 'data': [{'time': 1523451960, 'precipIntensity': 0, 'precipProbability': 0}, {'time': 1523452020, 'precipIntensity': 0, 'precipProbability': 0}, {'time': 1523452080, 'precipIntensity': 0, 'precipProbability': 0}, {'time': 1523452140, 'precipIntensity': 0, 'precipProbability': 0}, {'time': 1523452200, 'precipIntensity': 0, 'precipProbability': 0}, {'time': 1523452260, 'precipIntensity': 0, 'precipProbability': 0}, {'time': 1523452320, 'precipIntensity': 0, 'precipProbability': 0}, {'time': 1523452380, 'precipIntensity': 0, 'precipProbability': 0}, {'time': 1523452440, 'precipIntensity': 0, 'precipProbability': 0}, {'time': 1523452500, 'precipIntensity': 0, 'precipProbability': 0}, {'time': 1523452560, 'precipIntensity': 0, 'precipProbability': 0}, {'time': 1523452620, 'precipIntensity': 0, 'precipProbability': 0}, {'time': 1523452680, 'precipIntensity': 0, 'precipProbability': 0}, {'time': 1523452740, 'precipIntensity': 0, 'precipProbability': 0}, {'time': 1523452800, 'precipIntensity': 0, 'precipProbability': 0}, {'time': 1523452860, 'precipIntensity': 0, 'precipProbability': 0}, {'time': 1523452920, 'precipIntensity': 0, 'precipProbability': 0}, {'time': 1523452980, 'precipIntensity': 0, 'precipProbability': 0}, {'time': 1523453040, 'precipIntensity': 0, 'precipProbability': 0}, {'time': 1523453100, 'precipIntensity': 0, 'precipProbability': 0}, {'time': 1523453160, 'precipIntensity': 0, 'precipProbability': 0}, {'time': 1523453220, 'precipIntensity': 0, 'precipProbability': 0}, {'time': 1523453280, 'precipIntensity': 0, 'precipProbability': 0}, {'time': 1523453340, 'precipIntensity': 0, 'precipProbability': 0}, {'time': 1523453400, 'precipIntensity': 0, 'precipProbability': 0}, {'time': 1523453460, 'precipIntensity': 0, 'precipProbability': 0}, {'time': 1523453520, 'precipIntensity': 0, 'precipProbability': 0}, {'time': 1523453580, 'precipIntensity': 0, 'precipProbability': 0}, {'time': 1523453640, 'precipIntensity': 0, 'precipProbability': 0}, {'time': 1523453700, 'precipIntensity': 0, 'precipProbability': 0}, {'time': 1523453760, 'precipIntensity': 0, 'precipProbability': 0}, {'time': 1523453820, 'precipIntensity': 0, 'precipProbability': 0}, {'time': 1523453880, 'precipIntensity': 0, 'precipProbability': 0}, {'time': 1523453940, 'precipIntensity': 0, 'precipProbability': 0}, {'time': 1523454000, 'precipIntensity': 0, 'precipProbability': 0}, {'time': 1523454060, 'precipIntensity': 0, 'precipProbability': 0}, {'time': 1523454120, 'precipIntensity': 0, 'precipProbability': 0}, {'time': 1523454180, 'precipIntensity': 0, 'precipProbability': 0}, {'time': 1523454240, 'precipIntensity': 0, 'precipProbability': 0}, {'time': 1523454300, 'precipIntensity': 0, 'precipProbability': 0}, {'time': 1523454360, 'precipIntensity': 0, 'precipProbability': 0}, {'time': 1523454420, 'precipIntensity': 0, 'precipProbability': 0}, {'time': 1523454480, 'precipIntensity': 0, 'precipProbability': 0}, {'time': 1523454540, 
'precipIntensity': 0, 'precipProbability': 0}, {'time': 1523454600, 'precipIntensity': 0, 'precipProbability': 0}, {'time': 1523454660, 'precipIntensity': 0, 'precipProbability': 0}, {'time': 1523454720, 'precipIntensity': 0, 'precipProbability': 0}, {'time': 1523454780, 'precipIntensity': 0, 'precipProbability': 0}, {'time': 1523454840, 'precipIntensity': 0, 'precipProbability': 0}, {'time': 1523454900, 'precipIntensity': 0, 'precipProbability': 0}, {'time': 1523454960, 'precipIntensity': 0, 'precipProbability': 0}, {'time': 1523455020, 'precipIntensity': 0, 'precipProbability': 0}, {'time': 1523455080, 'precipIntensity': 0, 'precipProbability': 0}, {'time': 1523455140, 'precipIntensity': 0, 'precipProbability': 0}, {'time': 1523455200, 'precipIntensity': 0, 'precipProbability': 0}, {'time': 1523455260, 'precipIntensity': 0, 'precipProbability': 0}, {'time': 1523455320, 'precipIntensity': 0, 'precipProbability': 0}, {'time': 1523455380, 'precipIntensity': 0, 'precipProbability': 0}, {'time': 1523455440, 'precipIntensity': 0, 'precipProbability': 0}, {'time': 1523455500, 'precipIntensity': 0, 'precipProbability': 0}, {'time': 1523455560, 'precipIntensity': 0, 'precipProbability': 0}]}, 'hourly': {'summary': 'Rain and breezy starting this evening.', 'icon': 'rain', 'data': [{'time': 1523451600, 'summary': 'Partly Cloudy', 'icon': 'partly-cloudy-night', 'precipIntensity': 0.0005, 'precipProbability': 0.02, 'precipType': 'rain', 'temperature': 51.02, 'apparentTemperature': 51.02, 'dewPoint': 45.51, 'humidity': 0.81, 'pressure': 1017.22, 'windSpeed': 7.28, 'windGust': 10.26, 'windBearing': 268, 'cloudCover': 0.4, 'uvIndex': 0, 'visibility': 9.92, 'ozone': 306.81}, {'time': 1523455200, 'summary': 'Mostly Cloudy', 'icon': 'partly-cloudy-day', 'precipIntensity': 0, 'precipProbability': 0, 'temperature': 51.72, 'apparentTemperature': 51.72, 'dewPoint': 46.46, 'humidity': 0.82, 'pressure': 1017.14, 'windSpeed': 7.79, 'windGust': 12.39, 'windBearing': 269, 'cloudCover': 0.74, 'uvIndex': 0, 'visibility': 9.92, 'ozone': 306.45}, {'time': 1523458800, 'summary': 'Mostly Cloudy', 'icon': 'partly-cloudy-day', 'precipIntensity': 0.0007, 'precipProbability': 0.03, 'precipType': 'rain', 'temperature': 52.86, 'apparentTemperature': 52.86, 'dewPoint': 47.5, 'humidity': 0.82, 'pressure': 1017.37, 'windSpeed': 7.66, 'windGust': 13.46, 'windBearing': 276, 'cloudCover': 0.9, 'uvIndex': 0, 'visibility': 10, 'ozone': 305.94}, {'time': 1523462400, 'summary': 'Overcast', 'icon': 'cloudy', 'precipIntensity': 0, 'precipProbability': 0, 'temperature': 54.16, 'apparentTemperature': 54.16, 'dewPoint': 47.65, 'humidity': 0.79, 'pressure': 1017.34, 'windSpeed': 5.99, 'windGust': 14.12, 'windBearing': 315, 'cloudCover': 0.97, 'uvIndex': 1, 'visibility': 10, 'ozone': 305.01}, {'time': 1523466000, 'summary': 'Mostly Cloudy', 'icon': 'partly-cloudy-day', 'precipIntensity': 0.0012, 'precipProbability': 0.04, 'precipType': 'rain', 'temperature': 55.56, 'apparentTemperature': 55.56, 'dewPoint': 47.23, 'humidity': 0.74, 'pressure': 1017.27, 'windSpeed': 6.12, 'windGust': 11.86, 'windBearing': 213, 'cloudCover': 0.76, 'uvIndex': 2, 'visibility': 10, 'ozone': 303.97}, {'time': 1523469600, 'summary': 'Mostly Cloudy', 'icon': 'partly-cloudy-day', 'precipIntensity': 0, 'precipProbability': 0, 'temperature': 56.87, 'apparentTemperature': 56.87, 'dewPoint': 46.97, 'humidity': 0.69, 'pressure': 1017.17, 'windSpeed': 6.23, 'windGust': 10.46, 'windBearing': 251, 'cloudCover': 0.71, 'uvIndex': 3, 'visibility': 10, 'ozone': 303.5}, {'time': 
1523473200, 'summary': 'Overcast', 'icon': 'cloudy', 'precipIntensity': 0, 'precipProbability': 0, 'temperature': 58.15, 'apparentTemperature': 58.15, 'dewPoint': 46.73, 'humidity': 0.66, 'pressure': 1016.89, 'windSpeed': 7.53, 'windGust': 11.14, 'windBearing': 253, 'cloudCover': 1, 'uvIndex': 4, 'visibility': 10, 'ozone': 303.59}, {'time': 1523476800, 'summary': 'Overcast', 'icon': 'cloudy', 'precipIntensity': 0, 'precipProbability': 0, 'temperature': 59.2, 'apparentTemperature': 59.2, 'dewPoint': 46.33, 'humidity': 0.62, 'pressure': 1016.9, 'windSpeed': 10.02, 'windGust': 14.08, 'windBearing': 246, 'cloudCover': 1, 'uvIndex': 5, 'visibility': 10, 'ozone': 304.19}, {'time': 1523480400, 'summary': 'Overcast', 'icon': 'cloudy', 'precipIntensity': 0, 'precipProbability': 0, 'temperature': 59.94, 'apparentTemperature': 59.94, 'dewPoint': 46.13, 'humidity': 0.6, 'pressure': 1016.44, 'windSpeed': 12.04, 'windGust': 15.98, 'windBearing': 237, 'cloudCover': 0.99, 'uvIndex': 5, 'visibility': 10, 'ozone': 305.96}, {'time': 1523484000, 'summary': 'Overcast', 'icon': 'cloudy', 'precipIntensity': 0.0013, 'precipProbability': 0.04, 'precipType': 'rain', 'temperature': 60.3, 'apparentTemperature': 60.3, 'dewPoint': 46.24, 'humidity': 0.6, 'pressure': 1015.83, 'windSpeed': 13.52, 'windGust': 17.93, 'windBearing': 235, 'cloudCover': 0.96, 'uvIndex': 3, 'visibility': 10, 'ozone': 308.67}, {'time': 1523487600, 'summary': 'Mostly Cloudy', 'icon': 'partly-cloudy-day', 'precipIntensity': 0.002, 'precipProbability': 0.06, 'precipType': 'rain', 'temperature': 60.37, 'apparentTemperature': 60.37, 'dewPoint': 46.64, 'humidity': 0.6, 'pressure': 1015.2, 'windSpeed': 14.7, 'windGust': 20.02, 'windBearing': 233, 'cloudCover': 0.84, 'uvIndex': 2, 'visibility': 10, 'ozone': 312.52}, {'time': 1523491200, 'summary': 'Mostly Cloudy', 'icon': 'partly-cloudy-day', 'precipIntensity': 0.0019, 'precipProbability': 0.07, 'precipType': 'rain', 'temperature': 59.77, 'apparentTemperature': 59.77, 'dewPoint': 47.12, 'humidity': 0.63, 'pressure': 1014.7, 'windSpeed': 15.6, 'windGust': 22.31, 'windBearing': 216, 'cloudCover': 0.76, 'uvIndex': 1, 'visibility': 10, 'ozone': 319.06}, {'time': 1523494800, 'summary': 'Mostly Cloudy', 'icon': 'partly-cloudy-day', 'precipIntensity': 0.0172, 'precipProbability': 0.36, 'precipType': 'rain', 'temperature': 58.35, 'apparentTemperature': 58.35, 'dewPoint': 48.05, 'humidity': 0.69, 'pressure': 1014.2, 'windSpeed': 16.66, 'windGust': 25.21, 'windBearing': 163, 'cloudCover': 0.84, 'uvIndex': 0, 'visibility': 8.61, 'ozone': 329.76}, {'time': 1523498400, 'summary': 'Rain and Breezy', 'icon': 'rain', 'precipIntensity': 0.0804, 'precipProbability': 0.7, 'precipType': 'rain', 'temperature': 56.7, 'apparentTemperature': 56.7, 'dewPoint': 48.8, 'humidity': 0.75, 'pressure': 1013.75, 'windSpeed': 17.66, 'windGust': 26.81, 'windBearing': 342, 'cloudCover': 0.87, 'uvIndex': 0, 'visibility': 5.18, 'ozone': 343.21}, {'time': 1523502000, 'summary': 'Rain and Breezy', 'icon': 'rain', 'precipIntensity': 0.0942, 'precipProbability': 0.76, 'precipType': 'rain', 'temperature': 55.1, 'apparentTemperature': 55.1, 'dewPoint': 48.36, 'humidity': 0.78, 'pressure': 1013.61, 'windSpeed': 18.41, 'windGust': 27.66, 'windBearing': 281, 'cloudCover': 0.89, 'uvIndex': 0, 'visibility': 5.47, 'ozone': 356.75}, {'time': 1523505600, 'summary': 'Rain and Breezy', 'icon': 'rain', 'precipIntensity': 0.07, 'precipProbability': 0.71, 'precipType': 'rain', 'temperature': 54.04, 'apparentTemperature': 54.04, 'dewPoint': 47.06, 'humidity': 
0.77, 'pressure': 1014.41, 'windSpeed': 18.7, 'windGust': 28.17, 'windBearing': 256, 'cloudCover': 0.85, 'uvIndex': 0, 'visibility': 7.61, 'ozone': 370.71}, {'time': 1523509200, 'summary': 'Light Rain and Breezy', 'icon': 'rain', 'precipIntensity': 0.0283, 'precipProbability': 0.51, 'precipType': 'rain', 'temperature': 53.23, 'apparentTemperature': 53.23, 'dewPoint': 45.06, 'humidity': 0.74, 'pressure': 1015.7, 'windSpeed': 18.41, 'windGust': 28.18, 'windBearing': 308, 'cloudCover': 0.79, 'uvIndex': 0, 'visibility': 10, 'ozone': 384.66}, {'time': 1523512800, 'summary': 'Breezy and Mostly Cloudy', 'icon': 'wind', 'precipIntensity': 0.0085, 'precipProbability': 0.26, 'precipType': 'rain', 'temperature': 52.09, 'apparentTemperature': 52.09, 'dewPoint': 43.32, 'humidity': 0.72, 'pressure': 1016.9, 'windSpeed': 18.22, 'windGust': 28.08, 'windBearing': 296, 'cloudCover': 0.69, 'uvIndex': 0, 'visibility': 10, 'ozone': 394.83}, {'time': 1523516400, 'summary': 'Breezy and Partly Cloudy', 'icon': 'wind', 'precipIntensity': 0.0029, 'precipProbability': 0.13, 'precipType': 'rain', 'temperature': 50.84, 'apparentTemperature': 50.84, 'dewPoint': 42.23, 'humidity': 0.72, 'pressure': 1017.79, 'windSpeed': 17.67, 'windGust': 28.07, 'windBearing': 292, 'cloudCover': 0.53, 'uvIndex': 0, 'visibility': 10, 'ozone': 398.62}, {'time': 1523520000, 'summary': 'Breezy and Partly Cloudy', 'icon': 'wind', 'precipIntensity': 0.0008, 'precipProbability': 0.06, 'precipType': 'rain', 'temperature': 49.72, 'apparentTemperature': 43.87, 'dewPoint': 41.39, 'humidity': 0.73, 'pressure': 1018.57, 'windSpeed': 16.83, 'windGust': 27.96, 'windBearing': 297, 'cloudCover': 0.33, 'uvIndex': 0, 'visibility': 10, 'ozone': 398.58}, {'time': 1523523600, 'summary': 'Breezy', 'icon': 'wind', 'precipIntensity': 0.0005, 'precipProbability': 0.03, 'precipType': 'rain', 'temperature': 48.76, 'apparentTemperature': 42.81, 'dewPoint': 40.68, 'humidity': 0.74, 'pressure': 1019.27, 'windSpeed': 16.01, 'windGust': 27.55, 'windBearing': 304, 'cloudCover': 0.17, 'uvIndex': 0, 'visibility': 10, 'ozone': 398.32}, {'time': 1523527200, 'summary': 'Breezy', 'icon': 'wind', 'precipIntensity': 0.0004, 'precipProbability': 0.02, 'precipType': 'rain', 'temperature': 48.29, 'apparentTemperature': 42.38, 'dewPoint': 40.15, 'humidity': 0.73, 'pressure': 1019.81, 'windSpeed': 15.29, 'windGust': 26.73, 'windBearing': 305, 'cloudCover': 0.1, 'uvIndex': 0, 'visibility': 10, 'ozone': 398.99}, {'time': 1523530800, 'summary': 'Clear', 'icon': 'clear-night', 'precipIntensity': 0.0003, 'precipProbability': 0.03, 'precipType': 'rain', 'temperature': 48.11, 'apparentTemperature': 42.32, 'dewPoint': 39.79, 'humidity': 0.73, 'pressure': 1020.24, 'windSpeed': 14.61, 'windGust': 25.62, 'windBearing': 305, 'cloudCover': 0.06, 'uvIndex': 0, 'visibility': 10, 'ozone': 399.41}, {'time': 1523534400, 'summary': 'Clear', 'icon': 'clear-night', 'precipIntensity': 0.0004, 'precipProbability': 0.03, 'precipType': 'rain', 'temperature': 48.51, 'apparentTemperature': 43.01, 'dewPoint': 39.57, 'humidity': 0.71, 'pressure': 1020.8, 'windSpeed': 13.94, 'windGust': 24.47, 'windBearing': 306, 'cloudCover': 0.04, 'uvIndex': 0, 'visibility': 10, 'ozone': 399.71}, {'time': 1523538000, 'summary': 'Clear', 'icon': 'clear-night', 'precipIntensity': 0.0004, 'precipProbability': 0.03, 'precipType': 'rain', 'temperature': 48.99, 'apparentTemperature': 43.8, 'dewPoint': 39.56, 'humidity': 0.7, 'pressure': 1021.6, 'windSpeed': 13.24, 'windGust': 23.41, 'windBearing': 307, 'cloudCover': 0.03, 'uvIndex': 
0, 'visibility': 10, 'ozone': 400.26}, {'time': 1523541600, 'summary': 'Clear', 'icon': 'clear-day', 'precipIntensity': 0.0004, 'precipProbability': 0.03, 'precipType': 'rain', 'temperature': 49.83, 'apparentTemperature': 45.05, 'dewPoint': 39.66, 'humidity': 0.68, 'pressure': 1022.51, 'windSpeed': 12.56, 'windGust': 22.29, 'windBearing': 309, 'cloudCover': 0.03, 'uvIndex': 0, 'visibility': 10, 'ozone': 400.74}, {'time': 1523545200, 'summary': 'Clear', 'icon': 'clear-day', 'precipIntensity': 0.0003, 'precipProbability': 0.03, 'precipType': 'rain', 'temperature': 50.79, 'apparentTemperature': 50.79, 'dewPoint': 39.74, 'humidity': 0.66, 'pressure': 1023.35, 'windSpeed': 12.07, 'windGust': 21.11, 'windBearing': 309, 'cloudCover': 0.03, 'uvIndex': 1, 'visibility': 10, 'ozone': 400.13}, {'time': 1523548800, 'summary': 'Clear', 'icon': 'clear-day', 'precipIntensity': 0.0002, 'precipProbability': 0.02, 'precipType': 'rain', 'temperature': 52.12, 'apparentTemperature': 52.12, 'dewPoint': 39.6, 'humidity': 0.62, 'pressure': 1024.05, 'windSpeed': 11.73, 'windGust': 19.71, 'windBearing': 307, 'cloudCover': 0.02, 'uvIndex': 1, 'visibility': 10, 'ozone': 398.22}, {'time': 1523552400, 'summary': 'Clear', 'icon': 'clear-day', 'precipIntensity': 0, 'precipProbability': 0, 'temperature': 53.67, 'apparentTemperature': 53.67, 'dewPoint': 39.39, 'humidity': 0.58, 'pressure': 1024.68, 'windSpeed': 11.54, 'windGust': 18.24, 'windBearing': 305, 'cloudCover': 0.01, 'uvIndex': 2, 'visibility': 10, 'ozone': 395.26}, {'time': 1523556000, 'summary': 'Clear', 'icon': 'clear-day', 'precipIntensity': 0, 'precipProbability': 0, 'temperature': 55.08, 'apparentTemperature': 55.08, 'dewPoint': 39.36, 'humidity': 0.55, 'pressure': 1025.26, 'windSpeed': 11.74, 'windGust': 17.31, 'windBearing': 301, 'cloudCover': 0, 'uvIndex': 4, 'visibility': 10, 'ozone': 391.13}, {'time': 1523559600, 'summary': 'Clear', 'icon': 'clear-day', 'precipIntensity': 0, 'precipProbability': 0, 'temperature': 56.16, 'apparentTemperature': 56.16, 'dewPoint': 39.65, 'humidity': 0.54, 'pressure': 1025.84, 'windSpeed': 12.56, 'windGust': 17.28, 'windBearing': 297, 'cloudCover': 0.02, 'uvIndex': 6, 'visibility': 10, 'ozone': 385.26}, {'time': 1523563200, 'summary': 'Clear', 'icon': 'clear-day', 'precipIntensity': 0, 'precipProbability': 0, 'temperature': 57.24, 'apparentTemperature': 57.24, 'dewPoint': 40.06, 'humidity': 0.53, 'pressure': 1026.32, 'windSpeed': 13.81, 'windGust': 17.82, 'windBearing': 290, 'cloudCover': 0.05, 'uvIndex': 7, 'visibility': 10, 'ozone': 378.26}, {'time': 1523566800, 'summary': 'Clear', 'icon': 'clear-day', 'precipIntensity': 0, 'precipProbability': 0, 'temperature': 57.99, 'apparentTemperature': 57.99, 'dewPoint': 40.43, 'humidity': 0.52, 'pressure': 1026.65, 'windSpeed': 14.9, 'windGust': 18.46, 'windBearing': 287, 'cloudCover': 0.06, 'uvIndex': 7, 'visibility': 10, 'ozone': 371.72}, {'time': 1523570400, 'summary': 'Clear', 'icon': 'clear-day', 'precipIntensity': 0, 'precipProbability': 0, 'temperature': 58.46, 'apparentTemperature': 58.46, 'dewPoint': 40.69, 'humidity': 0.52, 'pressure': 1026.62, 'windSpeed': 15.48, 'windGust': 19.14, 'windBearing': 286, 'cloudCover': 0.05, 'uvIndex': 5, 'visibility': 10, 'ozone': 366.26}, {'time': 1523574000, 'summary': 'Clear', 'icon': 'clear-day', 'precipIntensity': 0, 'precipProbability': 0, 'temperature': 58.61, 'apparentTemperature': 58.61, 'dewPoint': 40.9, 'humidity': 0.52, 'pressure': 1026.39, 'windSpeed': 15.91, 'windGust': 19.93, 'windBearing': 293, 'cloudCover': 0.03, 'uvIndex': 
3, 'visibility': 10, 'ozone': 361.34}, {'time': 1523577600, 'summary': 'Clear', 'icon': 'clear-day', 'precipIntensity': 0, 'precipProbability': 0, 'temperature': 58.4, 'apparentTemperature': 58.4, 'dewPoint': 41.04, 'humidity': 0.52, 'pressure': 1026.31, 'windSpeed': 16.05, 'windGust': 20.67, 'windBearing': 294, 'cloudCover': 0.01, 'uvIndex': 2, 'visibility': 10, 'ozone': 356.92}, {'time': 1523581200, 'summary': 'Clear', 'icon': 'clear-day', 'precipIntensity': 0, 'precipProbability': 0, 'temperature': 57.76, 'apparentTemperature': 57.76, 'dewPoint': 41.09, 'humidity': 0.54, 'pressure': 1026.54, 'windSpeed': 15.71, 'windGust': 21.42, 'windBearing': 296, 'cloudCover': 0.02, 'uvIndex': 1, 'visibility': 10, 'ozone': 353.13}, {'time': 1523584800, 'summary': 'Clear', 'icon': 'clear-day', 'precipIntensity': 0, 'precipProbability': 0, 'temperature': 57.09, 'apparentTemperature': 57.09, 'dewPoint': 41.08, 'humidity': 0.55, 'pressure': 1026.93, 'windSpeed': 15.12, 'windGust': 22.14, 'windBearing': 299, 'cloudCover': 0.03, 'uvIndex': 0, 'visibility': 10, 'ozone': 349.85}, {'time': 1523588400, 'summary': 'Clear', 'icon': 'clear-night', 'precipIntensity': 0, 'precipProbability': 0, 'temperature': 56.25, 'apparentTemperature': 56.25, 'dewPoint': 41.03, 'humidity': 0.57, 'pressure': 1027.35, 'windSpeed': 14.49, 'windGust': 22.65, 'windBearing': 302, 'cloudCover': 0.04, 'uvIndex': 0, 'visibility': 10, 'ozone': 347.01}, {'time': 1523592000, 'summary': 'Clear', 'icon': 'clear-night', 'precipIntensity': 0, 'precipProbability': 0, 'temperature': 55.74, 'apparentTemperature': 55.74, 'dewPoint': 41.05, 'humidity': 0.58, 'pressure': 1027.91, 'windSpeed': 13.91, 'windGust': 22.92, 'windBearing': 305, 'cloudCover': 0.03, 'uvIndex': 0, 'visibility': 10, 'ozone': 345}, {'time': 1523595600, 'summary': 'Clear', 'icon': 'clear-night', 'precipIntensity': 0, 'precipProbability': 0, 'temperature': 55.16, 'apparentTemperature': 55.16, 'dewPoint': 41.05, 'humidity': 0.59, 'pressure': 1028.55, 'windSpeed': 13.3, 'windGust': 22.98, 'windBearing': 308, 'cloudCover': 0.03, 'uvIndex': 0, 'visibility': 10, 'ozone': 343.36}, {'time': 1523599200, 'summary': 'Clear', 'icon': 'clear-night', 'precipIntensity': 0, 'precipProbability': 0, 'temperature': 54.32, 'apparentTemperature': 54.32, 'dewPoint': 40.98, 'humidity': 0.61, 'pressure': 1029.01, 'windSpeed': 12.69, 'windGust': 22.56, 'windBearing': 312, 'cloudCover': 0.02, 'uvIndex': 0, 'visibility': 10, 'ozone': 340.72}, {'time': 1523602800, 'summary': 'Clear', 'icon': 'clear-night', 'precipIntensity': 0, 'precipProbability': 0, 'temperature': 53.27, 'apparentTemperature': 53.27, 'dewPoint': 40.76, 'humidity': 0.62, 'pressure': 1029.21, 'windSpeed': 12.14, 'windGust': 21.7, 'windBearing': 317, 'cloudCover': 0.02, 'uvIndex': 0, 'visibility': 10, 'ozone': 336}, {'time': 1523606400, 'summary': 'Clear', 'icon': 'clear-night', 'precipIntensity': 0, 'precipProbability': 0, 'temperature': 52.15, 'apparentTemperature': 52.15, 'dewPoint': 40.49, 'humidity': 0.64, 'pressure': 1029.23, 'windSpeed': 11.59, 'windGust': 20.46, 'windBearing': 322, 'cloudCover': 0, 'uvIndex': 0, 'visibility': 10, 'ozone': 330.17}, {'time': 1523610000, 'summary': 'Clear', 'icon': 'clear-night', 'precipIntensity': 0, 'precipProbability': 0, 'temperature': 51.19, 'apparentTemperature': 51.19, 'dewPoint': 40.17, 'humidity': 0.66, 'pressure': 1029.21, 'windSpeed': 10.86, 'windGust': 18.57, 'windBearing': 327, 'cloudCover': 0, 'uvIndex': 0, 'visibility': 10, 'ozone': 325.09}, {'time': 1523613600, 'summary': 'Clear', 
'icon': 'clear-night', 'precipIntensity': 0, 'precipProbability': 0, 'temperature': 50.26, 'apparentTemperature': 50.26, 'dewPoint': 39.81, 'humidity': 0.67, 'pressure': 1029.12, 'windSpeed': 9.77, 'windGust': 15.55, 'windBearing': 333, 'cloudCover': 0.01, 'uvIndex': 0, 'visibility': 10, 'ozone': 321.52}, {'time': 1523617200, 'summary': 'Clear', 'icon': 'clear-night', 'precipIntensity': 0, 'precipProbability': 0, 'temperature': 49.6, 'apparentTemperature': 46.07, 'dewPoint': 39.4, 'humidity': 0.68, 'pressure': 1029.01, 'windSpeed': 8.53, 'windGust': 11.96, 'windBearing': 339, 'cloudCover': 0.03, 'uvIndex': 0, 'visibility': 10, 'ozone': 318.72}, {'time': 1523620800, 'summary': 'Clear', 'icon': 'clear-night', 'precipIntensity': 0, 'precipProbability': 0, 'temperature': 49.29, 'apparentTemperature': 46.13, 'dewPoint': 39.06, 'humidity': 0.68, 'pressure': 1028.97, 'windSpeed': 7.47, 'windGust': 9.07, 'windBearing': 344, 'cloudCover': 0.04, 'uvIndex': 0, 'visibility': 10, 'ozone': 315.99}, {'time': 1523624400, 'summary': 'Clear', 'icon': 'clear-night', 'precipIntensity': 0, 'precipProbability': 0, 'temperature': 49.18, 'apparentTemperature': 46.36, 'dewPoint': 38.82, 'humidity': 0.67, 'pressure': 1029.18, 'windSpeed': 6.67, 'windGust': 7.43, 'windBearing': 349, 'cloudCover': 0.05, 'uvIndex': 0, 'visibility': 10, 'ozone': 313.13}]}, 'daily': {'summary': 'Rain today, with high temperatures peaking at 67°F on Saturday.', 'icon': 'rain', 'data': [{'time': 1523430000, 'summary': 'Rain and breezy starting in the evening.', 'icon': 'rain', 'sunriseTime': 1523454042, 'sunsetTime': 1523500960, 'moonPhase': 0.86, 'precipIntensity': 0.0128, 'precipIntensityMax': 0.0942, 'precipIntensityMaxTime': 1523502000, 'precipProbability': 0.71, 'precipType': 'rain', 'temperatureHigh': 60.37, 'temperatureHighTime': 1523487600, 'temperatureLow': 48.11, 'temperatureLowTime': 1523530800, 'apparentTemperatureHigh': 60.37, 'apparentTemperatureHighTime': 1523487600, 'apparentTemperatureLow': 42.32, 'apparentTemperatureLowTime': 1523530800, 'dewPoint': 46.39, 'humidity': 0.73, 'pressure': 1016.63, 'windSpeed': 8.08, 'windGust': 28.18, 'windGustTime': 1523509200, 'windBearing': 261, 'cloudCover': 0.66, 'uvIndex': 5, 'uvIndexTime': 1523476800, 'visibility': 10, 'ozone': 321.99, 'temperatureMin': 50.43, 'temperatureMinTime': 1523444400, 'temperatureMax': 60.37, 'temperatureMaxTime': 1523487600, 'apparentTemperatureMin': 50.43, 'apparentTemperatureMinTime': 1523444400, 'apparentTemperatureMax': 60.37, 'apparentTemperatureMaxTime': 1523487600}, {'time': 1523516400, 'summary': 'Clear throughout the day.', 'icon': 'clear-day', 'sunriseTime': 1523540356, 'sunsetTime': 1523587415, 'moonPhase': 0.89, 'precipIntensity': 0.0003, 'precipIntensityMax': 0.0029, 'precipIntensityMaxTime': 1523516400, 'precipProbability': 0.21, 'precipType': 'rain', 'temperatureHigh': 58.61, 'temperatureHighTime': 1523574000, 'temperatureLow': 49.18, 'temperatureLowTime': 1523628000, 'apparentTemperatureHigh': 58.61, 'apparentTemperatureHighTime': 1523574000, 'apparentTemperatureLow': 46.07, 'apparentTemperatureLowTime': 1523617200, 'dewPoint': 40.42, 'humidity': 0.61, 'pressure': 1024.26, 'windSpeed': 14.1, 'windGust': 28.07, 'windGustTime': 1523516400, 'windBearing': 300, 'cloudCover': 0.07, 'uvIndex': 7, 'uvIndexTime': 1523563200, 'visibility': 10, 'ozone': 378.26, 'temperatureMin': 48.11, 'temperatureMinTime': 1523530800, 'temperatureMax': 58.61, 'temperatureMaxTime': 1523574000, 'apparentTemperatureMin': 42.32, 'apparentTemperatureMinTime': 1523530800, 
'apparentTemperatureMax': 58.61, 'apparentTemperatureMaxTime': 1523574000}, {'time': 1523602800, 'summary': 'Clear throughout the day.', 'icon': 'clear-day', 'sunriseTime': 1523626671, 'sunsetTime': 1523673870, 'moonPhase': 0.92, 'precipIntensity': 0.0001, 'precipIntensityMax': 0.0002, 'precipIntensityMaxTime': 1523653200, 'precipProbability': 0.07, 'precipType': 'rain', 'temperatureHigh': 65.64, 'temperatureHighTime': 1523660400, 'temperatureLow': 51, 'temperatureLowTime': 1523707200, 'apparentTemperatureHigh': 65.64, 'apparentTemperatureHighTime': 1523660400, 'apparentTemperatureLow': 51, 'apparentTemperatureLowTime': 1523707200, 'dewPoint': 42.22, 'humidity': 0.59, 'pressure': 1028.94, 'windSpeed': 7.46, 'windGust': 21.7, 'windGustTime': 1523602800, 'windBearing': 315, 'cloudCover': 0.06, 'uvIndex': 9, 'uvIndexTime': 1523649600, 'visibility': 10, 'ozone': 308.79, 'temperatureMin': 49.18, 'temperatureMinTime': 1523628000, 'temperatureMax': 65.64, 'temperatureMaxTime': 1523660400, 'apparentTemperatureMin': 46.07, 'apparentTemperatureMinTime': 1523617200, 'apparentTemperatureMax': 65.64, 'apparentTemperatureMaxTime': 1523660400}, {'time': 1523689200, 'summary': 'Partly cloudy in the morning.', 'icon': 'partly-cloudy-day', 'sunriseTime': 1523712986, 'sunsetTime': 1523760325, 'moonPhase': 0.95, 'precipIntensity': 0.0003, 'precipIntensityMax': 0.0011, 'precipIntensityMaxTime': 1523718000, 'precipProbability': 0.05, 'precipType': 'rain', 'temperatureHigh': 66.52, 'temperatureHighTime': 1523743200, 'temperatureLow': 49.76, 'temperatureLowTime': 1523786400, 'apparentTemperatureHigh': 66.52, 'apparentTemperatureHighTime': 1523743200, 'apparentTemperatureLow': 48.29, 'apparentTemperatureLowTime': 1523786400, 'dewPoint': 47.14, 'humidity': 0.7, 'pressure': 1025.58, 'windSpeed': 4.7, 'windGust': 15.46, 'windGustTime': 1523750400, 'windBearing': 266, 'cloudCover': 0.13, 'uvIndex': 9, 'uvIndexTime': 1523736000, 'visibility': 10, 'ozone': 308.12, 'temperatureMin': 51, 'temperatureMinTime': 1523707200, 'temperatureMax': 66.52, 'temperatureMaxTime': 1523743200, 'apparentTemperatureMin': 51, 'apparentTemperatureMinTime': 1523707200, 'apparentTemperatureMax': 66.52, 'apparentTemperatureMaxTime': 1523743200}, {'time': 1523775600, 'summary': 'Mostly cloudy until evening.', 'icon': 'partly-cloudy-day', 'sunriseTime': 1523799302, 'sunsetTime': 1523846781, 'moonPhase': 0.98, 'precipIntensity': 0.0033, 'precipIntensityMax': 0.0163, 'precipIntensityMaxTime': 1523851200, 'precipProbability': 0.29, 'precipType': 'rain', 'temperatureHigh': 60.27, 'temperatureHighTime': 1523833200, 'temperatureLow': 48.65, 'temperatureLowTime': 1523876400, 'apparentTemperatureHigh': 60.27, 'apparentTemperatureHighTime': 1523833200, 'apparentTemperatureLow': 45.02, 'apparentTemperatureLowTime': 1523880000, 'dewPoint': 46.98, 'humidity': 0.75, 'pressure': 1017.59, 'windSpeed': 7.8, 'windGust': 18.5, 'windGustTime': 1523818800, 'windBearing': 240, 'cloudCover': 0.48, 'uvIndex': 5, 'uvIndexTime': 1523818800, 'ozone': 345.81, 'temperatureMin': 49.76, 'temperatureMinTime': 1523786400, 'temperatureMax': 60.27, 'temperatureMaxTime': 1523833200, 'apparentTemperatureMin': 48.29, 'apparentTemperatureMinTime': 1523786400, 'apparentTemperatureMax': 60.27, 'apparentTemperatureMaxTime': 1523833200}, {'time': 1523862000, 'summary': 'Mostly cloudy until evening.', 'icon': 'partly-cloudy-day', 'sunriseTime': 1523885619, 'sunsetTime': 1523933236, 'moonPhase': 0.03, 'precipIntensity': 0.0119, 'precipIntensityMax': 0.0269, 'precipIntensityMaxTime': 
1523901600, 'precipProbability': 0.68, 'precipType': 'rain', 'temperatureHigh': 54.8, 'temperatureHighTime': 1523923200, 'temperatureLow': 48.52, 'temperatureLowTime': 1523962800, 'apparentTemperatureHigh': 54.8, 'apparentTemperatureHighTime': 1523923200, 'apparentTemperatureLow': 45.22, 'apparentTemperatureLowTime': 1523966400, 'dewPoint': 42.21, 'humidity': 0.7, 'pressure': 1014.61, 'windSpeed': 7.68, 'windGust': 19.84, 'windGustTime': 1523930400, 'windBearing': 249, 'cloudCover': 0.46, 'uvIndex': 4, 'uvIndexTime': 1523905200, 'ozone': 432.28, 'temperatureMin': 48.65, 'temperatureMinTime': 1523876400, 'temperatureMax': 54.8, 'temperatureMaxTime': 1523923200, 'apparentTemperatureMin': 45.02, 'apparentTemperatureMinTime': 1523880000, 'apparentTemperatureMax': 54.8, 'apparentTemperatureMaxTime': 1523923200}, {'time': 1523948400, 'summary': 'Mostly cloudy overnight.', 'icon': 'partly-cloudy-night', 'sunriseTime': 1523971936, 'sunsetTime': 1524019692, 'moonPhase': 0.06, 'precipIntensity': 0.0001, 'precipIntensityMax': 0.0011, 'precipIntensityMaxTime': 1523955600, 'precipProbability': 0.02, 'precipType': 'rain', 'temperatureHigh': 59.76, 'temperatureHighTime': 1524006000, 'temperatureLow': 47.84, 'temperatureLowTime': 1524049200, 'apparentTemperatureHigh': 59.76, 'apparentTemperatureHighTime': 1524006000, 'apparentTemperatureLow': 47.41, 'apparentTemperatureLowTime': 1524045600, 'dewPoint': 38.75, 'humidity': 0.58, 'pressure': 1022.88, 'windSpeed': 6.44, 'windGust': 14.85, 'windGustTime': 1523948400, 'windBearing': 291, 'cloudCover': 0.01, 'uvIndex': 8, 'uvIndexTime': 1523995200, 'ozone': 380.22, 'temperatureMin': 48.52, 'temperatureMinTime': 1523962800, 'temperatureMax': 59.76, 'temperatureMaxTime': 1524006000, 'apparentTemperatureMin': 45.22, 'apparentTemperatureMinTime': 1523966400, 'apparentTemperatureMax': 59.76, 'apparentTemperatureMaxTime': 1524006000}, {'time': 1524034800, 'summary': 'Overcast throughout the day.', 'icon': 'cloudy', 'sunriseTime': 1524058255, 'sunsetTime': 1524106147, 'moonPhase': 0.1, 'precipIntensity': 0, 'precipIntensityMax': 0, 'precipProbability': 0, 'temperatureHigh': 64.1, 'temperatureHighTime': 1524092400, 'temperatureLow': 50.14, 'temperatureLowTime': 1524139200, 'apparentTemperatureHigh': 64.1, 'apparentTemperatureHighTime': 1524092400, 'apparentTemperatureLow': 50.14, 'apparentTemperatureLowTime': 1524139200, 'dewPoint': 39.58, 'humidity': 0.56, 'pressure': 1018.86, 'windSpeed': 0.49, 'windGust': 11.91, 'windGustTime': 1524074400, 'windBearing': 1, 'cloudCover': 0.92, 'uvIndex': 4, 'uvIndexTime': 1524078000, 'ozone': 383.93, 'temperatureMin': 47.84, 'temperatureMinTime': 1524049200, 'temperatureMax': 64.1, 'temperatureMaxTime': 1524092400, 'apparentTemperatureMin': 47.41, 'apparentTemperatureMinTime': 1524045600, 'apparentTemperatureMax': 64.1, 'apparentTemperatureMaxTime': 1524092400}]}, 'flags': {'sources': ['isd', 'nearest-precip', 'nwspa', 'cmc', 'gfs', 'hrrr', 'madis', 'nam', 'sref', 'darksky'], 'isd-stations': ['724943-99999', '745039-99999', '745045-99999', '745060-23239', '745065-99999', '994016-99999', '994033-99999', '994036-99999', '997734-99999', '998197-99999', '998476-99999', '998477-99999', '998479-99999', '998496-99999', '999999-23239', '999999-23272'], 'units': 'us'}, 'offset': -7}
###Markdown
Now You Code 3: Syracuse Historical WeatherWrite a program to prompt a user to input a date and time string, and then it should report the temperature and weather conditions on that day and time for Syracuse, NY.To look up the weather you will need to use the Dark Sky Time Machine: https://darksky.net/dev/docs/time-machine The date and time that the user inputs must be in the following format: `YYYY-MM-DDThh:mm:ss` For example January 7, 2016 at 4:30pm would be: `2016-01-07T16:30:00`Be sure to use the GPS coordinates for Syracuse, NY which are (lat=43.048122,lng=-76.147424)Example Run:```Syracuse, NY Historical WeatherEnter a date and time in the following format: YYYY-MM-DDThh:mm:ss => 2016-07-23T14:30:00On 2016-07-23T14:30:00 Syracuse, NY was Partly Cloudy with a temperature of 85```
###Code
# Write your plan (todo list) here
#import requests
#set function and call the function
#take out the information from the provided url
#todo write code here.
import requests
#first I write these
def darksky_weather(time):
    key = '3038a571280c82aa87bc9ea715500d8c' # sign up for your own key at https://darksky.net/dev
    url = 'https://api.darksky.net/forecast/%s/43.048122,-76.147424,%s' % (key, time)
    response = requests.get(url)
    weather = response.json()
    return weather

time = input("Enter a time in the future or the past to check the weather in Syracuse (e.g. YYYY-MM-DDThh:mm:ss)")
history_weather = darksky_weather(time)['currently']
print("The weather was or is:", history_weather['summary'])
print("temperature:", history_weather['temperature'])
###Output
Enter a time in the future or the past to check the weather in Syracuse (e.g. YYYY-MM-DDThh:mm:ss)2016-01-07T16:30:00
The weather was or is: Partly Cloudy
temperature: 34.96
###Markdown
Now You Code 3: Syracuse Historical WeatherWrite a program to prompt a user to input a date and time string, and then it should **report the temperature and weather conditions on that day and time for Syracuse, NY.**To accomplish this, use the Dark Sky Time Machine API, here: https://darksky.net/dev/docs/time-machine You're going to have to read about the API and understand how it works before you can write this program, which is the point of the exercise.The date and time that the user inputs must be in the following format: `YYYY-MM-DDThh:mm:ss` For example January 7, 2016 at 4:30pm would be: `2016-01-07T16:30:00`Be sure to use the GPS coordinates for Syracuse, NY which are (lat=43.048122,lng=-76.147424)Example Run (Based on the exchange rates from 2017-03-06:```Syracuse, NY Historical WeatherEnter a date and time in the following format: YYYY-MM-DDThh:mm:ss => 2016-07-23T14:30:00On 2016-07-23T14:30:00 Syracuse, NY was Partly Cloudy with a temperature of 85``` Step 1: Problem AnalysisInputs: A date in the format `YYYY-MM-DDThh:mm:ss` Outputs: the current weather conditions (sunny, rain, partly cloudy) and the temperature.Algorithm (Steps in Program):```todo write algorithm here```
###Code
# step 2: write code here
import requests
print("Syracuse, NY Historical weather")
datetime = input("enter the date in this format; YYYY-MM-DDThh:mm:ss")
key="77dd557fe61bd2a048a28a11d3500612"
url ="https://api.darksky.net/forecast/%s/43.03821555,-76.1333456417294,%s" %(key,datetime)
response = requests.get(url)
conditions = response.json()
currently=conditions['currently']
print("On the day" (datetime), "the weather for syracuse is"(currently))
###Output
_____no_output_____
###Markdown
Now You Code 3: Syracuse Historical WeatherWrite a program to prompt a user to input a date and time string, and then it should **report the temperature and weather conditions on that day and time for Syracuse, NY.**To accomplish this, use the Dark Sky Time Machine API, here: https://darksky.net/dev/docs/time-machine You're going to have to read about the API and understand how it works before you can write this program, which is the point of the exercise.The date and time that the user inputs must be in the following format: `YYYY-MM-DDThh:mm:ss` For example January 7, 2016 at 4:30pm would be: `2016-01-07T16:30:00`Be sure to use the GPS coordinates for Syracuse, NY which are (lat=43.048122,lng=-76.147424)Example Run (Based on the exchange rates from 2017-03-06:```Syracuse, NY Historical WeatherEnter a date and time in the following format: YYYY-MM-DDThh:mm:ss => 2016-07-23T14:30:00On 2016-07-23T14:30:00 Syracuse, NY was Partly Cloudy with a temperature of 85``` Step 1: Problem AnalysisInputs: A date in the format `YYYY-MM-DDThh:mm:ss` Outputs: the current weather conditions (sunny, rain, partly cloudy) and the temperature.Algorithm (Steps in Program):```todo write algorithm here```- print title of program- get the date and time- load in the darksky website program - get the weather and temperature from the user inputted date and time
###Code
# step 2: write code here
import requests
import json
print('Syracuse, NY Historical Weather')
datetime = input("Enter a date and time in the following format: YYYY-MM-DDThh:mm:ss : ")
coords = { 'lat' : 43.048122, 'lng' : -76.147424 }
key = 'b559f6aa7586d6d6e8073e9761dba259'
url= 'https://api.darksky.net/forecast/%s/%f,%f,%s' % (key, coords['lat'], coords['lng'], datetime)
response = requests.get(url)
weather = response.json()
currently = weather['currently']
print("On %s Syracuse, NY was %s with a temperature of %.0f" %
(datetime, currently['summary'], currently['temperature']))
###Output
Syracuse, NY Historical Weather
###Markdown
Now You Code 3: Syracuse Historical WeatherWrite a program to prompt a user to input a date and time string, and then it should **report the temperature and weather conditions on that day and time for Syracuse, NY.**To accomplish this, use the Dark Sky Time Machine API, here: https://darksky.net/dev/docs/time-machine You're going to have to read about the API and understand how it works before you can write this program, which is the point of the exercise.The date and time that the user inputs must be in the following format: `YYYY-MM-DDThh:mm:ss` For example January 7, 2016 at 4:30pm would be: `2016-01-07T16:30:00`Be sure to use the GPS coordinates for Syracuse, NY which are (lat=43.048122,lng=-76.147424)Example Run (Based on the exchange rates from 2017-03-06:```Syracuse, NY Historical WeatherEnter a date and time in the following format: YYYY-MM-DDThh:mm:ss => 2016-07-23T14:30:00On 2016-07-23T14:30:00 Syracuse, NY was Partly Cloudy with a temperature of 85``` Step 1: Problem AnalysisInputs: A date in the format `YYYY-MM-DDThh:mm:ss` Outputs: the current weather conditions (sunny, rain, partly cloudy) and the temperature.Algorithm (Steps in Program):```todo write algorithm here```
###Code
import requests
try:
answer=input("Wanna see weather from the past? ")
if(answer.lower()!='no'):
datetime=input("Enter a date and time using this format: 'YYYY-MM-DDThh:mm:ss'")
key = '6c307ee6e23e16e635c0c66105feec90'
url = 'https://api.darksky.net/forecast/%s/43.048122,-76.147424,%s'%(key,datetime)
response = requests.get(url)
conditions = response.json()
currently = conditions['currently']
#print(currently) shows the dictionary 'currently'
print("On %s Syracuse, NY was %s with a temperature of %.2f"%(datetime,currently['summary'],currently['apparentTemperature']))
elif(answer.lower()=='no'):
print("okay, see ya later")
except Exception as e:
    print("Didn't work, big guy. Details:", e)
###Output
Wanna see weather from the past? yes
Enter a date and time using this format: 'YYYY-MM-DDThh:mm:ss'2019-01-01T09:09:00
On 2019-01-01T09:09:00 Syracuse, NY was Overcast with a temperature of 28.95
###Markdown
Now You Code 3: Syracuse Historical WeatherWrite a program to prompt a user to input a date and time string, and then it should **report the temperature and weather conditions on that day and time for Syracuse, NY.**To accomplish this, use the Dark Sky Time Machine API, here: https://darksky.net/dev/docs/time-machine You're going to have to read about the API and understand how it works before you can write this program, which is the point of the exercise.The date and time that the user inputs must be in the following format: `YYYY-MM-DDThh:mm:ss` For example January 7, 2016 at 4:30pm would be: `2016-01-07T16:30:00`Be sure to use the GPS coordinates for Syracuse, NY which are (lat=43.048122,lng=-76.147424)Example Run (Based on the exchange rates from 2017-03-06:```Syracuse, NY Historical WeatherEnter a date and time in the following format: YYYY-MM-DDThh:mm:ss => 2016-07-23T14:30:00On 2016-07-23T14:30:00 Syracuse, NY was Partly Cloudy with a temperature of 85``` Step 1: Problem AnalysisInputs: Outputs: Algorithm (Steps in Program):```todo write algorithm here```
###Code
# step 2: write code here
###Output
_____no_output_____
###Markdown
Now You Code 3: Syracuse Historical WeatherWrite a program to prompt a user to input a date and time string, and then it should **report the temperature and weather conditions on that day and time for Syracuse, NY.**To accomplish this, use the Dark Sky Time Machine API, here: https://darksky.net/dev/docs/time-machine You're going to have to read about the API and understand how it works before you can write this program, which is the point of the exercise.The date and time that the user inputs must be in the following format: `YYYY-MM-DDThh:mm:ss` For example January 7, 2016 at 4:30pm would be: `2016-01-07T16:30:00`Be sure to use the GPS coordinates for Syracuse, NY which are (lat=43.048122,lng=-76.147424)Example Run (Based on the exchange rates from 2017-03-06:```Syracuse, NY Historical WeatherEnter a date and time in the following format: YYYY-MM-DDThh:mm:ss => 2016-07-23T14:30:00On 2016-07-23T14:30:00 Syracuse, NY was Partly Cloudy with a temperature of 85``` Step 1: Problem AnalysisInputs: A date in the format `YYYY-MM-DDThh:mm:ss` Outputs: the current weather conditions (sunny, rain, partly cloudy) and the temperature.Algorithm (Steps in Program):```todo write algorithm here```
###Code
###Output
_____no_output_____ |
03_Machine_Learning/sol/[HW10]_Simple_Linear_Regression.ipynb | ###Markdown
[HW10] Simple Linear Regression 1. Linear regression Linear regression models the linear relationship between a dependent variable $y$ and one or more independent variables $X$. The independent variables represent the inputs or causes, while the dependent variable is the variable affected by them and usually represents the outcome. Modeling a linear relationship means finding a first-order straight line: by finding the optimal line that best explains our data, we derive the relationship between the independent and dependent variables. In this exercise we will carry out simple linear regression, which has a single independent variable. Let us define a line with one variable: $$f(x_i) = wx_i + b$$ <img src="https://nbviewer.jupyter.org/github/engineersCode/EngComp6_deeplearning/blob/master/images/residuals.png" width="400" height="300" /> The line that best explains our data is the one whose predictions are as close as possible to the actual data values. As shown above, the value predicted by our model is $f(x_i)$, and the actual data is $y$. Our goal is to reduce the difference between the actual data (the red points in the figure above) and the line. Based on this, let us define the cost function as follows: $$\text{cost function} = \frac{1}{N}\sum_{i=1}^n (y_i - f(x_i))^2$$ We need to find the $w$ and $b$ that minimize the cost function. Our cost function is quadratic, and we learned how to find the minimum of a quadratic function in high-school math! Let us revisit that method, and then look at the new gradient descent method as well. 1.1 Analytically How can we find the minimum of the following function? $$f(w) = w^2 + 3w -5$$ The method we learned in high school is to find the point where the derivative is zero. Solving this by hand is probably familiar, so instead let us work through it in code using the sympy and numpy packages.
###Code
import sympy
import numpy
from matplotlib import pyplot
%matplotlib inline
sympy.init_printing()
w = sympy.Symbol('w', real=True)
f = w**2 + 3*w - 5
f
sympy.plotting.plot(f);
###Output
_____no_output_____
###Markdown
The first derivative can be obtained as follows.
###Code
fprime = f.diff(w)
fprime
###Output
_____no_output_____
###Markdown
And the root of that equation can be found as follows.
###Code
sympy.solve(fprime, w)
###Output
_____no_output_____
###Markdown
1.2 Gradient Descent The second method is the gradient descent approach we covered today: instead of reaching the answer in a single step, it approaches the answer iteratively. Let us understand this through code as well. <img src="https://nbviewer.jupyter.org/github/engineersCode/EngComp6_deeplearning/blob/master/images/descent.png" width="400" height="300" /> First, let us create a function that evaluates the gradient (slope).
###Code
fpnum = sympy.lambdify(w, fprime)
type(fpnum)
###Output
_____no_output_____
###Markdown
Next, we set an initial value of $w$ and then iteratively move toward the minimum.
###Code
w = 10.0 # starting guess for the min
for i in range(1000):
w = w - fpnum(w)*0.01 # with 0.01 the step size
print(w)
###Output
-1.4999999806458753
###Markdown
As you can see, the first and the second method give the same value. Now let us generate some data ourselves and apply gradient descent to it. 1.3 Linear regression To use a dataset with a truly linear relationship, we will create the data ourselves, adding a little noise with the normal distribution function from the numpy package.
###Code
x_data = numpy.linspace(-5, 5, 100)
w_true = 2
b_true = 20
y_data = w_true*x_data + b_true + numpy.random.normal(size=len(x_data))
pyplot.scatter(x_data,y_data);
x_data.shape
y_data.shape
###Output
_____no_output_____
###Markdown
We generated a total of 100 data points. Now let us approach the problem in code. First, let us express the cost function.
###Code
w, b, x, y = sympy.symbols('w b x y')
cost_function = (w*x + b - y)**2
cost_function
###Output
_____no_output_____
###Markdown
As in the gradient descent example above, we define the gradient functions.
###Code
grad_b = sympy.lambdify([w,b,x,y], cost_function.diff(b), 'numpy')
grad_w = sympy.lambdify([w,b,x,y], cost_function.diff(w), 'numpy')
###Output
_____no_output_____
###Markdown
Now we define initial values of $w$ and $b$ and apply gradient descent to find the $w$ and $b$ that minimize the cost function.
###Code
w = 0
b = 0
for i in range(1000):
descent_b = numpy.sum(grad_b(w,b,x_data,y_data))/len(x_data)
descent_w = numpy.sum(grad_w(w,b,x_data,y_data))/len(x_data)
w = w - descent_w*0.01 # with 0.01 the step size
b = b - descent_b*0.01
print(w)
print(b)
###Output
2.0303170198038307
19.90701393429327
###Markdown
We obtained values very close to the $w, b$ values we defined when generating the data.
###Code
pyplot.scatter(x_data,y_data)
pyplot.plot(x_data, w*x_data + b, '-r');
###Output
_____no_output_____
###Markdown
We can see that the line we found fits the data well. Next, let us run linear regression on real data. 2. Earth temperature over time Using the linear regression method we learned today, we will analyze how the Earth's temperature has changed over time, through an indicator called the global temperature anomaly. A temperature anomaly is the difference from some chosen reference temperature: a large positive temperature anomaly means it was warmer than usual, while a negative value means it was colder than usual. Since temperatures differ from region to region around the world, we will use the global temperature anomaly for the analysis. You can find more details at the link below. https://www.ncdc.noaa.gov/monitoring-references/faq/anomalies.php
###Code
from IPython.display import YouTubeVideo
YouTubeVideo('gGOzHVUQCw0')
###Output
_____no_output_____
###Markdown
The video above shows that temperatures are steadily rising. From here on, let us fetch the actual data and analyze it. Step 1 : Read a data file We will take the data from the NOAA (National Oceanic and Atmospheric Administration) website and download it with the command below.
###Code
from urllib.request import urlretrieve
URL = 'http://go.gwu.edu/engcomp1data5?accessType=DOWNLOAD'
urlretrieve(URL, 'land_global_temperature_anomaly-1880-2016.csv')
###Output
_____no_output_____
###Markdown
Let us load the downloaded data using the numpy package.
###Code
import numpy
fname = '/content/land_global_temperature_anomaly-1880-2016.csv'
year, temp_anomaly = numpy.loadtxt(fname, delimiter=',', skiprows=5, unpack=True)
###Output
_____no_output_____
###Markdown
Step 2 : Plot the data Let us draw a 2D plot using pyplot from the Matplotlib package.
###Code
from matplotlib import pyplot
%matplotlib inline
pyplot.plot(year, temp_anomaly);
###Output
_____no_output_____
###Markdown
Let us add some more information to the plot and make the output easier to read.
###Code
pyplot.rc('font', family='serif', size='18')
#You can set the size of the figure by doing:
pyplot.figure(figsize=(10,5))
#Plotting
pyplot.plot(year, temp_anomaly, color='#2929a3', linestyle='-', linewidth=1)
pyplot.title('Land global temperature anomalies. \n')
pyplot.xlabel('Year')
pyplot.ylabel('Land temperature anomaly [°C]')
pyplot.grid();
###Output
_____no_output_____
###Markdown
Step 3 : Analytically To perform linear regression, we first define a line: $$f(x_i) = wx_i + b$$ Next we define the cost function we learned in class. The cost function we need to minimize is: $$\frac{1}{n} \sum_{i=1}^n (y_i - f(x_i))^2 = \frac{1}{n} \sum_{i=1}^n (y_i - (wx_i + b))^2$$ Now we differentiate the cost function with respect to each parameter and find the values that make the derivative zero. First, differentiating with respect to $b$: $$\frac{\partial{J(w,b)}}{\partial{b}} = \frac{1}{n}\sum_{i=1}^n -2(y_i - (wx_i+b)) = \frac{2}{n}\left(nb + w\sum_{i=1}^n x_i -\sum_{i=1}^n y_i\right) = 0$$ Solving this equation for $b$ gives $$b = \bar{y} - w\bar{x}$$ where $\bar{x} = \frac{\sum_{i=1}^n x_i}{n}$ and $\bar{y} = \frac{\sum_{i=1}^n y_i}{n}$. Now differentiating with respect to $w$: $$\frac{\partial{J(w,b)}}{\partial{w}} = \frac{1}{n}\sum_{i=1}^n -2(y_i - (wx_i+b))x_i = \frac{2}{n}\left(b\sum_{i=1}^nx_i + w\sum_{i=1}^n x_i^2 - \sum_{i=1}^n x_iy_i\right)$$ Substituting the $b$ found above and solving for the $w$ that makes this zero, we get $$w = \frac{\sum_{i=1}^ny_i(x_i-\bar{x})}{\sum_{i=1}^nx_i(x_i-\bar{x})}$$ We have now computed $w$ and $b$ analytically. Let us apply this in code.
###Code
w = numpy.sum(temp_anomaly*(year - year.mean())) / numpy.sum(year*(year - year.mean()))
b = temp_anomaly.mean() - w*year.mean()
print(w)
print(b)
###Output
0.01037028394347266
-20.148685384658464
###Markdown
Now let us plot the result to check it.
###Code
reg = b + w * year
pyplot.figure(figsize=(10, 5))
pyplot.plot(year, temp_anomaly, color='#2929a3', linestyle='-', linewidth=1, alpha=0.5)
pyplot.plot(year, reg, 'k--', linewidth=2, label='Linear regression')
pyplot.xlabel('Year')
pyplot.ylabel('Land temperature anomaly [°C]')
pyplot.legend(loc='best', fontsize=15)
pyplot.grid();
###Output
_____no_output_____ |
6_feature_engineering_selection/homework.ipynb | ###Markdown
Train a model (for example, `LogisticRegression(solver='liblinear', penalty='l1')`) on the raw `wine_train` data; then train the same model after data scaling; then add feature selection (and train the model again on scaled data). For each experiment, all required preprocessing steps (if any) should be wrapped into an sklearn pipeline. Measure the `accuracy` of all 3 approaches on the `wine_val` dataset. Describe and explain the results. - Let us run the experiment with the parameters specified in the task. - We use StandardScaler for scaling. - We use RandomForestClassifier to select the informative features.
###Code
result_score = pd.DataFrame(index=['original','scaled','scaled_selected'], columns=['score'])
logit = LogisticRegression(solver='liblinear', penalty='l1')
#1) prediction based on original data.
pipe_original = make_pipeline(logit)
pipe_original.fit(wine_train, wine_labels_train)
result_score.at['original', 'score'] = pipe_original.score(wine_val, wine_labels_val)
#2) prediction based on scaled (Standard) data.
pipe_scaled = make_pipeline(StandardScaler(), logit)
pipe_scaled.fit(wine_train, wine_labels_train)
result_score.at['scaled', 'score'] = pipe_scaled.score(wine_val, wine_labels_val)
#3) prediction based on scaled (MinMax) data with feature selection.
rf = RandomForestClassifier(n_estimators=15, random_state=17)
pipe_scaled_selected = make_pipeline(StandardScaler(), SelectFromModel(estimator=rf), logit)
pipe_scaled_selected.fit(wine_train, wine_labels_train)
result_score.at['scaled_selected', 'score'] = pipe_scaled_selected.score(wine_val, wine_labels_val)
result_score.head()
###Output
_____no_output_____
###Markdown
- Let us repeat the experiment using cross-validation.
###Code
result_sv_score = pd.DataFrame(index=['original','scaled','scaled_selected'], columns=['score'])
logit = LogisticRegression(solver='liblinear', penalty='l1')
#1) prediction based on original data.
pipe_original = make_pipeline(logit)
result_sv_score.at['original', 'score'] = cross_val_score(pipe_original,
wine_data,
wine_labels,
scoring="accuracy",
cv=5
).mean()
#2) prediction based on scaled (Standard) data.
pipe_scaled = make_pipeline(StandardScaler(), logit)
result_sv_score.at['scaled', 'score'] = cross_val_score(pipe_scaled,
wine_data,
wine_labels,
scoring="accuracy",
cv=5
).mean()
#3) prediction based on scaled (MinMax) data with feature selection.
rf = RandomForestClassifier(n_estimators=15, random_state=17)
pipe_scaled_selected = make_pipeline(StandardScaler(), SelectFromModel(estimator=rf), logit)
result_sv_score.at['scaled_selected', 'score'] = cross_val_score(pipe_scaled_selected,
wine_data,
wine_labels,
scoring="accuracy",
cv=5
).mean()
result_sv_score.head()
###Output
_____no_output_____
###Markdown
**Conclusions:** - Using feature scaling and selection of the most informative features significantly improves recognition accuracy. This is demonstrated most clearly when cross-validation is used (to make the obtained results more reliable). Exercise 4 - manual PCA (5 points) The task is to solve PCA as an optimization problem, without explicitly doing eigenvalue decomposition. In the most general setting PCA is the minimization of the reconstruction error of a projection of given rank $q$ $$\min_{\mu, \lambda_1,\ldots, \lambda_n, \mathbf{V}_q} \sum_{i=1}^n ||x_i - \mu - \mathbf{V}_q \lambda_i||^2$$ With a number of steps that can be found here https://stats.stackexchange.com/a/10260 this task transforms to $$\max_{u_i} \sum_{i=1}^q u_i^T \mathbf{S} u_i$$ where $\mathbf{S}$ is the sample covariance matrix (after standardization) and $u_1, \ldots, u_q$ are the $q$ orthonormal columns of $\mathbf{V}_q$. Let us solve this optimization problem with the `scipy.optimize` library. An additional 2 points are given for visualization of the results. PCA (3 points)
###Code
wine_data, wine_labels = wine_sklearn['data'], wine_sklearn['target']
###Output
_____no_output_____
###Markdown
Find the covariance matrix of the standardized data and assign it to S.
###Code
standard_scaler = StandardScaler()
wine_data_standard = standard_scaler.fit_transform(wine_data)
mean_vec=np.mean(wine_data_standard,axis=0)
cov_matrix=(wine_data_standard-mean_vec).T.dot((wine_data_standard-mean_vec))/(wine_data_standard.shape[0]-1)
S = cov_matrix
###Output
_____no_output_____
###Markdown
If your code is correct, the following assert should be Ok.
###Code
assert np.allclose(np.linalg.norm(S), 5.787241159764733)
from scipy.optimize import minimize
def objective(x):
return -(x.T@S@x)
def norm_constraint(x):
norm = np.linalg.norm(x)
return 0 if norm == 1 else (norm-1)
con1 = {'type': 'eq', 'fun': norm_constraint}
x0 = np.zeros(13)
sol = minimize(objective,
x0,
constraints = [con1]
)
x0 = sol.x
###Output
_____no_output_____
###Markdown
Hurray! We have the first vector! Let's find another one.
###Code
def orthogonality_constraint(x):
return x.T@x0
con2 = {'type': 'eq', 'fun': orthogonality_constraint}
x1 = np.zeros(13)
sol = minimize(objective,
x1,
constraints = [con1, con2]
)
x1 = sol.x
###Output
_____no_output_____
###Markdown
If your solution is correct, the following asserts should be Ok.
###Code
assert np.allclose(x0@S@x0, 4.732436977583595)
assert np.allclose(x1@S@x1, 2.5110809296451233)
###Output
_____no_output_____
###Markdown
Visualization (2 points) Visualize the points after applying custom dimension reduction with 2 components.
###Code
# Project data to x0 and x1 vectors.
# class_0
data_x0_0 = np.dot(wine_data_standard[wine_labels==0,:], x0)
data_x1_0 = np.dot(wine_data_standard[wine_labels==0,:], x1)
# class_1
data_x0_1 = np.dot(wine_data_standard[wine_labels==1,:], x0)
data_x1_1 = np.dot(wine_data_standard[wine_labels==1,:], x1)
# class_2
data_x0_2 = np.dot(wine_data_standard[wine_labels==2,:], x0)
data_x1_2 = np.dot(wine_data_standard[wine_labels==2,:], x1)
# Plot results.
plt.figure(figsize=(8,6))
plt.scatter(data_x0_0, data_x1_0)
plt.scatter(data_x0_1, data_x1_1)
plt.scatter(data_x0_2, data_x1_2)
plt.xlabel('x0')
plt.ylabel('x1')
plt.title('Data projection onto PCA vectors (x0, x1)')
plt.legend(('class_0','class_1','class_2'))
plt.show()
###Output
_____no_output_____
###Markdown
**Conclusions:** - After transforming the features with PCA and extracting the two principal components (x0 and x1), it became possible to display the data on a 2D plot. With this representation of the data, we can also see that the classes defined in the dataset (class_0, class_1, class_2) are almost linearly separable. Exercise 5 - Boruta (3 points) Let us classify the handwritten digits 0, 1 and 2. To make the task not so easy, the images are binarized (no shades of gray present), as happens with photocopied documents. Let us also find out which parts of an image there is no need to look at in order to classify the three digits of interest.
###Code
X, y = load_digits(n_class=3, return_X_y=True, as_frame=True)
X = (X>10).astype(int)
f, ax = plt.subplots(1,3,figsize=(10,4))
for i in range(3):
ax[i].imshow(X.iloc[i].values.reshape(8,8))
ax[i].set_title(f"This is digit {y[i]}.")
plt.suptitle("First three images.")
plt.show()
###Output
_____no_output_____
###Markdown
Split data into train and test, let test size be 30% of the dataset and fix random state to 42:
###Code
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3, random_state=42)
assert y_val.shape[0] == 162
assert y_val.sum() == 169
###Output
_____no_output_____
###Markdown
Fit a RandomForestClassifier with max_depth=13 and evaluate it's performance:
###Code
clf = RandomForestClassifier(max_depth=13)
clf.fit(X_train, y_train)
acc = clf.score(X_val, y_val)
print(acc)
assert acc > 0.98
###Output
_____no_output_____
###Markdown
Now we will use Boruta to find redundand pixels. If the package is not installed in your system, uncomment and run the following cell.
###Code
# ! pip install boruta
from boruta import BorutaPy
feat_selector = BorutaPy(RandomForestClassifier(max_depth=13),
n_estimators='auto',
verbose=0,
max_iter=100,
random_state=42)
feat_selector.fit(np.array(X_train), np.array(y_train))
###Output
_____no_output_____
###Markdown
Let us print the redundant pixels as a mask. Does the result look similar to mine (or to Among Us characters)?
###Code
mask = np.array(feat_selector.support_).reshape(8,8)
plt.imshow(mask);
###Output
_____no_output_____
###Markdown
At the end let us redo classification but only with selected features
###Code
clf = RandomForestClassifier(max_depth=13)
clf.fit(X_train.iloc[:,feat_selector.support_], y_train)
acc = clf.score(X_val.iloc[:,feat_selector.support_], y_val)
print(acc)
assert acc > 0.99
###Output
_____no_output_____
###Markdown
Homework Exercise 1 - Scaling (2 points) Perform standardization for wine dataset (`wine_data`) using only basic python, numpy and pandas (without using `StandardScaler` and sklearn at all). Implementation of function (or class) that can get dataset as input and return standardized dataset as output is preferrable, but not necessary.Compare you results (output) with `StandardScaler`.**NOTE:**- 1 point for functional version, 2 points for implementing scaling as sklearn pipeline compartible class. - Maximum for the exercise is 2 points. Simple version (1 point)
###Code
# 1 point
def scale(X):
return (X - np.mean(X, axis=0))/np.std(X, axis=0)
assert np.allclose(np.array(scale(wine_data)), StandardScaler().fit_transform(wine_data))
###Output
_____no_output_____
###Markdown
Pipeline Version (2 points)
###Code
# 2 points
from sklearn.base import BaseEstimator, TransformerMixin
class CustomScaler(BaseEstimator, TransformerMixin):
def __init__(self, copy=True, with_mean=True, with_std=True):
self.copy_ = copy
self.with_mean_ = with_mean
self.with_std_ = with_std
def fit(self, X, y=None):
X = np.array(X)
self.mean_ = np.zeros(X.shape[1])
if (self.with_mean_ == True):
self.mean_ = np.mean(X, axis=0)
self.std_ = np.ones(X.shape[1])
if (self.with_std_ == True):
self.std_ = np.std(X, axis=0)
return self
def transform(self, X, y=None, copy=None):
return (np.array(X) - self.mean_)/self.std_
def fit_transform(self, X, y=None, **fit_params):
self.fit(X, fit_params)
return self.transform(X)
assert np.allclose(CustomScaler().fit_transform(wine_data), StandardScaler().fit_transform(wine_data))
###Output
_____no_output_____
###Markdown
Exercise 2 - Visualization (3 points) As noted earlier, standardization/normalization of data can be crucial for some distance-based ML methods.Let’s generate some toy example of unnormalized data and visualize the importance of this process once more:
###Code
feature_0 = np.random.randn(1000) * 10
feature_1 = np.concatenate([np.random.randn(500), np.random.randn(500) + 5])
data = np.column_stack([feature_0, feature_1])
data
plot_scatter(data[:, 0], data[:, 1], auto_scaled=True, title='Data (different axes units!)')
###Output
_____no_output_____
###Markdown
**NOTE:** on the plot above the axes are scaled differently and we can clearly see two potential *classes/clusters*. In fact `matplotlib` performed `autoscaling` (which can basically be considered a `MinMaxScaling` of the original data) just for better visualization purposes. Let's turn this feature off and visualize the original data on a plot with equally scaled axes:
###Code
plot_scatter(data[:, 0], data[:, 1], auto_scaled=False , title='Data (equal axes units!)')
###Output
_____no_output_____
###Markdown
This picture is clearly less interpretable, but much closer to "how distance-based algorithm see the original data": separability of data is hardly noticable only because the variation (std) of x-feature is much bigger in absolute numbers. Perform `StandardScaling` and `MinMaxScaling` of original data; visualize results for each case (**use `plot_scatter` with `auto_scaled=False`**): MinMaxScaling (0.5 point)
###Code
min_max_scaler = MinMaxScaler()
data_minmax = min_max_scaler.fit_transform(data)
plot_scatter(data_minmax[:,0], data_minmax[:,1], auto_scaled=False, title='MinMax Data (different axes units!)')
###Output
_____no_output_____
###Markdown
StandardScaler (0.5 point)
###Code
standard_scaler = StandardScaler()
data_standard = standard_scaler.fit_transform(data)
plot_scatter(data_standard[:,0], data_standard[:,1], auto_scaled=False, title='Standard Data (different axes units!)')
###Output
_____no_output_____
###Markdown
(Bonus) K-means (2 points) Illustrate the impact of scaling on basic distance-based clustering algorithm [K-means](https://towardsdatascience.com/understanding-k-means-clustering-in-machine-learning-6a6e67336aa1) using `data` generated above.**NOTE:** basically, you don't need understanding K-means algorithm here, you just need to:1) run algorithm (with k=2, k - number of clusters/classes) on unscaled data 2) run algorithm (with k=2) on scaled data 3) plot results: highlight different clusters using different colors.You can use this [question](https://stats.stackexchange.com/questions/89809/is-it-important-to-scale-data-before-clustering/89813) as a hint, but I recommend you to plot results using `plot_scatter` with `equal_scaled=True`: it might help you to intuitively understand the reasons of such scaling impact.
###Code
from sklearn.cluster import KMeans
kmeans = KMeans(n_clusters= 2)
#1) get clusters from the original unscaled data.
labels = kmeans.fit_predict(data)
filtered_labels_0 = data[labels == 0]
filtered_labels_1 = data[labels == 1]
#2) get clusters from the scaled data (MinMax).
labels_minmax = kmeans.fit_predict(data_minmax)
filtered_labels_minmax_0 = data_minmax[labels_minmax == 0]
filtered_labels_minmax_1 = data_minmax[labels_minmax == 1]
#3) get clusters from the scaled data (Standard).
labels_standard = kmeans.fit_predict(data_standard)
filtered_labels_standard_0 = data_standard[labels_standard == 0]
filtered_labels_standard_1 = data_standard[labels_standard == 1]
#4) plotting the results
fig = plt.figure(figsize=(16,4))
ax1 = fig.add_subplot(131)
ax2 = fig.add_subplot(132)
ax3 = fig.add_subplot(133)
ax1.scatter(filtered_labels_0[:,0] , filtered_labels_0[:,1])
ax1.scatter(filtered_labels_1[:,0] , filtered_labels_1[:,1])
ax1.axis('square')
ax1.set_title('Clustering on unscaled data')
ax1.set_xlabel('feature_0')
ax1.set_ylabel('feature_1')
ax2.scatter(filtered_labels_minmax_0[:,0] , filtered_labels_minmax_0[:,1])
ax2.scatter(filtered_labels_minmax_1[:,0] , filtered_labels_minmax_1[:,1])
ax2.axis('square')
ax2.set_title('Clustering on MinMax-scaled data')
ax2.set_xlabel('feature_0')
ax2.set_ylabel('feature_1')
ax3.scatter(filtered_labels_standard_0[:,0] , filtered_labels_standard_0[:,1])
ax3.scatter(filtered_labels_standard_1[:,0] , filtered_labels_standard_1[:,1])
ax3.axis('square')
ax3.set_title('Clustering on Standard-scaled data')
ax3.set_xlabel('feature_0')
ax3.set_ylabel('feature_1')
plt.show()
###Output
_____no_output_____
###Markdown
**Conclusions:** - Clustering of the unscaled data was performed incorrectly because the distances between features of different scales were computed wrongly: the larger-scale features carried a disproportionately large weight in the resulting Euclidean distance. - In contrast, clustering of the equally scaled features (MinMax or Standard) was performed correctly. Exercise 3 - Preprocessing Pipeline (2 points)
###Code
wine_train, wine_val, wine_labels_train, wine_labels_val = train_test_split(wine_data, wine_labels,
test_size=0.3, random_state=42)
###Output
_____no_output_____ |
ml-foundations/notebooks/week-4/Document Retrieval.ipynb | ###Markdown
Document retrieval from wikipedia data Fire up GraphLab Create(See [Getting Started with SFrames](../Week%201/Getting%20Started%20with%20SFrames.ipynb) for setup instructions)
###Code
import graphlab
# Limit number of worker processes. This preserves system memory, which prevents hosted notebooks from crashing.
graphlab.set_runtime_config('GRAPHLAB_DEFAULT_NUM_PYLAMBDA_WORKERS', 4)
###Output
This non-commercial license of GraphLab Create is assigned to [email protected] and will expire on June 05, 2017. For commercial licensing options, visit https://turi.com/buy/.
###Markdown
Load some text data - from wikipedia, pages on people
###Code
people = graphlab.SFrame('people_wiki.gl/')
###Output
_____no_output_____
###Markdown
Data contains: link to wikipedia article, name of person, text of article.
###Code
people.head()
len(people)
###Output
_____no_output_____
###Markdown
Explore the dataset and checkout the text it contains Exploring the entry for president Obama
###Code
obama = people[people['name'] == 'Barack Obama']
obama
obama['text']
###Output
_____no_output_____
###Markdown
Exploring the entry for actor George Clooney
###Code
clooney = people[people['name'] == 'George Clooney']
clooney['text']
###Output
_____no_output_____
###Markdown
Get the word counts for Obama article
###Code
obama['word_count'] = graphlab.text_analytics.count_words(obama['text'])
print obama['word_count']
###Output
[{'operations': 1, 'represent': 1, 'office': 2, 'unemployment': 1, 'is': 2, 'doddfrank': 1, 'over': 1, 'unconstitutional': 1, 'domestic': 2, 'named': 1, 'ending': 1, 'ended': 1, 'proposition': 1, 'seats': 1, 'graduate': 1, 'worked': 1, 'before': 1, 'death': 1, '20': 2, 'taxpayer': 1, 'inaugurated': 1, 'obamacare': 1, 'civil': 1, 'mccain': 1, 'to': 14, '4': 1, 'policy': 2, '8': 1, 'has': 4, '2011': 3, '2010': 2, '2013': 1, '2012': 1, 'bin': 1, 'then': 1, 'his': 11, 'march': 1, 'gains': 1, 'cuba': 1, 'californias': 1, '1992': 1, 'new': 1, 'not': 1, 'during': 2, 'years': 1, 'continued': 1, 'presidential': 2, 'husen': 1, 'osama': 1, 'term': 3, 'equality': 1, 'prize': 1, 'lost': 1, 'stimulus': 1, 'january': 3, 'university': 2, 'rights': 1, 'gun': 1, 'republican': 2, 'rodham': 1, 'troop': 1, 'withdrawal': 1, 'involvement': 3, 'response': 3, 'where': 1, 'referred': 1, 'affordable': 1, 'attorney': 1, 'school': 3, 'senate': 3, 'house': 2, 'national': 2, 'creation': 1, 'related': 1, 'hawaii': 1, 'born': 2, 'second': 2, 'street': 1, 'election': 3, 'close': 1, 'operation': 1, 'insurance': 1, 'sandy': 1, 'afghanistan': 2, 'initiatives': 1, 'for': 4, 'reform': 1, 'federal': 1, 'review': 1, 'representatives': 2, 'debate': 1, 'current': 1, 'state': 1, 'won': 1, 'marriage': 1, 'victory': 1, 'unsuccessfully': 1, 'reauthorization': 1, 'keynote': 1, 'full': 1, 'patient': 1, 'august': 1, 'degree': 1, '44th': 1, 'bm': 1, 'mitt': 1, 'attention': 1, 'delegates': 1, 'lgbt': 1, 'job': 1, 'protection': 2, 'address': 1, 'ask': 1, 'november': 2, 'debt': 1, 'by': 1, 'care': 1, 'on': 2, 'great': 1, 'defense': 1, 'signed': 3, 'libya': 1, 'receive': 1, 'of': 18, 'months': 1, 'against': 1, 'foreign': 2, 'spending': 1, 'american': 3, 'harvard': 2, 'act': 8, 'military': 4, 'hussein': 1, 'or': 1, 'first': 3, 'and': 21, 'major': 1, 'clinton': 1, '1997': 1, 'campaign': 3, 'russia': 1, 'wall': 1, 'legislation': 1, 'into': 1, 'primary': 2, 'community': 1, 'three': 1, 'down': 1, 'hook': 1, 'ii': 1, '63': 1, 'americans': 1, 'elementary': 1, 'total': 1, 'earning': 1, 'often': 1, 'barack': 1, 'law': 6, 'from': 3, 'raise': 1, 'district': 1, 'representing': 1, 'nine': 1, 'reinvestment': 1, 'arms': 1, 'relations': 1, 'nobel': 1, 'start': 1, 'dont': 2, 'tell': 1, 'iraq': 4, 'convention': 1, 'strike': 1, 'served': 2, 'john': 1, 'was': 5, 'war': 1, 'form': 1, 'that': 1, 'tax': 1, 'sufficient': 1, 'republicans': 1, 'resulted': 1, 'hillary': 1, 'taught': 1, 'honolulu': 1, 'filed': 1, 'regained': 1, 'july': 1, 'hold': 1, 'with': 3, 'he': 7, '13th': 1, 'made': 1, 'brk': 1, '1996': 1, 'whether': 1, 'reelected': 1, 'budget': 1, 'us': 6, 'nations': 1, 'recession': 1, 'while': 1, 'economic': 1, 'limit': 1, 'policies': 1, 'promoted': 1, 'called': 1, 'at': 2, 'control': 4, 'supreme': 1, 'ordered': 3, 'nominee': 2, 'process': 1, '2000in': 1, '2012obama': 1, 'received': 1, 'romney': 1, 'briefs': 1, 'defeated': 1, 'general': 1, 'states': 3, 'as': 6, 'urged': 1, 'in': 30, 'sought': 1, 'organizer': 1, 'shooting': 1, 'increased': 1, 'normalize': 1, 'lengthy': 1, 'united': 3, 'court': 1, 'recovery': 1, 'laden': 1, 'laureateduring': 1, 'peace': 1, 'administration': 1, '1961': 1, 'illinois': 2, 'other': 1, 'which': 1, 'party': 3, 'primaries': 1, 'sworn': 1, '2007': 1, 'obama': 9, 'columbia': 1, 'combat': 1, 'after': 4, 'islamic': 1, 'running': 1, 'levels': 1, 'two': 1, 'included': 1, 'president': 4, 'repeal': 1, 'nomination': 1, 'the': 40, 'a': 7, '2009': 3, 'chicago': 2, 'constitutional': 1, 'defeating': 1, 'treaty': 1, 'relief': 2, '2004': 3, 'african': 1, 
'2008': 1, 'democratic': 4, 'consumer': 1, 'began': 1, 'terms': 1}]
###Markdown
Sort the word counts for the Obama article Turning the dictionary of word counts into a table
###Code
obama_word_count_table = obama[['word_count']].stack('word_count', new_column_name = ['word','count'])
###Output
_____no_output_____
###Markdown
Sorting the word counts to show most common words at the top
###Code
obama_word_count_table.head()
obama_word_count_table.sort('count',ascending=False)
###Output
_____no_output_____
###Markdown
Most common words include uninformative words like "the", "in", "and",... Compute TF-IDF for the corpus To give more weight to informative words, we weigh them by their TF-IDF scores.
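For intuition, one common form of the TF-IDF weight of a word $w$ in a document $d$ is $tf(w, d) \cdot \log(N / df(w))$; GraphLab Create's `tf_idf` may differ in details, so treat the following plain-Python sketch (with a made-up two-document corpus) as illustrative only:
```python
import math

corpus = [['the', 'president', 'obama'], ['the', 'footballer', 'beckham']]  # toy corpus (assumption)
N = len(corpus)

def tf_idf(word, doc):
    tf = doc.count(word)                              # term frequency in this document
    df = sum(1 for d in corpus if word in d)          # number of documents containing the word
    return tf * math.log(N / df)

print(tf_idf('the', corpus[0]))    # 0.0   -> uninformative, appears in every document
print(tf_idf('obama', corpus[0]))  # ~0.69 -> informative, appears in only one document
```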
###Code
people['word_count'] = graphlab.text_analytics.count_words(people['text'])
people.head()
tfidf = graphlab.text_analytics.tf_idf(people['word_count'])
# Earlier versions of GraphLab Create returned an SFrame rather than a single SArray
# This notebook was created using Graphlab Create version 1.7.1
if graphlab.version <= '1.6.1':
tfidf = tfidf['docs']
tfidf.head()
people['tfidf'] = tfidf
###Output
_____no_output_____
###Markdown
Examine the TF-IDF for the Obama article
###Code
obama = people[people['name'] == 'Barack Obama']
obama[['tfidf']].stack('tfidf',new_column_name=['word','tfidf']).sort('tfidf',ascending=False)
###Output
_____no_output_____
###Markdown
Words with highest TF-IDF are much more informative. Manually compute distances between a few people Let's manually compare the distances between the articles for a few famous people.
###Code
clinton = people[people['name'] == 'Bill Clinton']
beckham = people[people['name'] == 'David Beckham']
###Output
_____no_output_____
###Markdown
Is Obama closer to Clinton than to Beckham? We will use cosine distance, which is given by (1 - cosine_similarity), and find that the article about president Obama is closer to the one about former president Clinton than to that of footballer David Beckham.
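For reference, the cosine distance between two vectors is 1 minus their cosine similarity. A minimal NumPy sketch (using made-up dense vectors rather than the TF-IDF dictionaries that `graphlab.distances.cosine` consumes):
```python
import numpy as np

def cosine_distance(a, b):
    # 1 - (a . b) / (||a|| * ||b||)
    return 1 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

print(cosine_distance(np.array([1.0, 0.0]), np.array([1.0, 0.0])))  # 0.0 -> same direction
print(cosine_distance(np.array([1.0, 0.0]), np.array([0.0, 1.0])))  # 1.0 -> orthogonal
```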
###Code
graphlab.distances.cosine(obama['tfidf'][0],clinton['tfidf'][0])
graphlab.distances.cosine(obama['tfidf'][0],beckham['tfidf'][0])
###Output
_____no_output_____
###Markdown
Build a nearest neighbor model for document retrievalWe now create a nearest-neighbors model and apply it to document retrieval.
###Code
knn_model = graphlab.nearest_neighbors.create(people,features=['tfidf'],label='name')
###Output
_____no_output_____
###Markdown
Applying the nearest-neighbors model for retrieval Who is closest to Obama?
###Code
knn_model.query(obama)
###Output
_____no_output_____
###Markdown
As we can see, president Obama's article is closest to the one about his vice-president Biden, and those of other politicians. Other examples of document retrieval
###Code
swift = people[people['name'] == 'Taylor Swift']
knn_model.query(swift)
jolie = people[people['name'] == 'Angelina Jolie']
knn_model.query(jolie)
arnold = people[people['name'] == 'Arnold Schwarzenegger']
knn_model.query(arnold)
###Output
_____no_output_____
###Markdown
1. Compare top words according to word counts to TF-IDFIn the notebook we covered in the module, we explored two document representations: word counts and TF-IDF. Now, take a particular famous person, 'Elton John'. What are the 3 words in his articles with highest word counts?
###Code
elton = people[people['name'] == 'Elton John']
elton
elton[['word_count']].stack('word_count',new_column_name=['word','word_count']).sort('word_count',ascending=False)
###Output
_____no_output_____
###Markdown
The 3 words with the highest word counts are:the, in, and What are the 3 words in his articles with highest TF-IDF?
###Code
elton[['tfidf']].stack('tfidf', new_column_name=['word', 'tfidf']).sort('tfidf', ascending = False)
###Output
_____no_output_____
###Markdown
The 3 words with highest TF-IDF are:furnish, elton, billboard 2. Measuring distanceElton John is a famous singer; let’s compute the distance between his article and those of two other famous singers. In this assignment, you will use the cosine distance, which one measure of similarity between vectors, similar to the one discussed in the lectures. You can compute this distance using the graphlab.distances.cosine function. What’s the cosine distance between the articles on ‘Elton John’ and ‘Victoria Beckham’?
###Code
victoria = people[people['name'] == 'Victoria Beckham']
graphlab.distances.cosine(elton['tfidf'][0], victoria['tfidf'][0])
###Output
_____no_output_____
###Markdown
What’s the cosine distance between the articles on ‘Elton John’ and Paul McCartney’?
###Code
paul = people[people['name'] == 'Paul McCartney']
graphlab.distances.cosine(elton['tfidf'][0], paul['tfidf'][0])
###Output
_____no_output_____
###Markdown
Which one of the two is closest to Elton John? Paul McCartney is closer to Elton John than Victoria Beckham. Does this result make sense to you? Yes, it does since, even though all of them are singers, Paul McCartney and Elton John are more contemporaries than Elton John and Victoria Beckham. 3. Building nearest neighbors models with different input features and setting the distance metric
###Code
word_count_cosine_model = graphlab.nearest_neighbors.create(people,features=['word_count'], label='name', distance='cosine')
tfidf_cosine_model = graphlab.nearest_neighbors.create(people,features=['tfidf'],label='name', distance='cosine')
###Output
_____no_output_____
###Markdown
What’s the most similar article, other than itself, to the one on ‘Elton John’ using word count features?
###Code
word_count_cosine_model.query(elton)
###Output
_____no_output_____
###Markdown
What’s the most similar article, other than itself, to the one on ‘Elton John’ using TF-IDF features?
###Code
tfidf_cosine_model.query(elton)
###Output
_____no_output_____
###Markdown
What’s the most similar article, other than itself, to the one on ‘Victoria Beckham’ using word count features?
###Code
word_count_cosine_model.query(victoria)
###Output
_____no_output_____
###Markdown
What’s the most similar article, other than itself, to the one on ‘Victoria Beckham’ using TF-IDF features?
###Code
tfidf_cosine_model.query(victoria)
###Output
_____no_output_____ |
LinearRegressionTensorflow.ipynb | ###Markdown
Linear Regression In this IPython notebook we will focus on the $\textit{Linear Regression}$ task. We will solve the problem using the Tensorflow machine learning framework. Problem This type of problem is called a $\textit{Supervised learning}$ problem. The training set contains $N$ data points $(x_i, y_i) \: \mid \: x_i \in \mathbb{R}^{d_1}, y_i \in \mathbb{R}^{d_2}, i = 1, \dots, N $. For a given set $D = \{(x_n, y_n)\}_{n=1}^N$, predict the $\textit{y}$ for a new $\textbf{x}$. Example A flat of size $x_n$ has the price $y_n$. We have the following data:

| Flat size $[m^2]$ | Price $[k \$]$ |
| --- | --- |
| 20 | 100 |
| 30 | 180 |
| 55 | 320 |
| 80 | 450 |

We want to predict the price $\textit{y}$ for a new flat $\textbf{x}$ based on its size.

| Flat size $[m^2]$ | Price $[k \$]$ |
| --- | --- |
| 45 | ? |
| 70 | ? |

Solution Our goal is to find the parameters $\textit{W}$ of the function $f_{W} \colon \mathbb{R}^{d_1} \rightarrow \mathbb{R}^{d_2}$ given by $$ f_W(x) = Wx $$ where the matrix $W \in \mathbb{R}^{d_2 \times d_1}$ holds the _parameters_ of the model. To find the best parameters $\textit{W}$ we need to define a $\textit{Learning problem}$. $\textit{Learning problem}$: find a $\textit{W}$ that minimizes the residuals $\Delta$: $\hat{w} = arg min_{w}\lVert \Delta \rVert$, where the residuals are defined by $\Delta_{i} = y_i - \hat{y}(x_i, w)$ [1]
###Code
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
np.random.seed(42)
tf.random.set_random_seed(42)
###Output
_____no_output_____
###Markdown
Generate Data To check how our model works we need some data, so let's generate it. We will generate 2 data sets: a ground-truth data set and an observations data set. The observations (like in the real world) will contain some noise.
###Code
def get_generated_data(x):
"""
:param x: vector of observation, x values
:return: y value for a given vector of observation
"""
return np.sin(1.1*x) - np.cos(1.9 * x)
observed_samples_number = 50
noise_ratio = 0.3
observation_data = np.random.uniform(low=-np.pi, high=np.pi, size=(observed_samples_number, 1))
values_for_observation = get_generated_data(observation_data) + noise_ratio * np.random.normal(
size=observation_data.shape)
xs = np.linspace(-np.pi, np.pi, 50)[:, np.newaxis]
ys = get_generated_data(xs)
###Output
_____no_output_____
###Markdown
Plot Genrated Data
###Code
plt.plot(xs, ys, label="ground truth")
plt.scatter(observation_data, values_for_observation, marker="x", c="r", label="samples")
plt.legend()
###Output
_____no_output_____
###Markdown
Finding parameters W Our model will be simple Linear Regression model such as: $\hat{y}(x, W, b) = x * W + b$
###Code
data_input = tf.placeholder(dtype=tf.float32, shape=(None, 1), name="data_input")
target_output = tf.placeholder(dtype=tf.float32, shape=(None, 1), name="target_input")
with tf.variable_scope("linear_model", reuse=tf.AUTO_REUSE):
W = tf.get_variable(name="w", shape=(1, 1), dtype=tf.float32,
initializer=tf.initializers.random_normal())
b = tf.get_variable(name="b", shape=(1, 1), dtype=tf.float32,
initializer=tf.initializers.zeros())
model_output = W * data_input + b
print(model_output)
print(data_input)
###Output
WARNING:tensorflow:From c:\users\piotr\appdata\local\programs\python\python36\lib\site-packages\tensorflow\python\framework\op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
Tensor("linear_model/add:0", shape=(?, 1), dtype=float32)
Tensor("data_input:0", shape=(?, 1), dtype=float32)
###Markdown
Defining the loss function To measure how good our model is we will use the cost function called Mean Squared Error: $E = \frac{1}{2N}\lVert\overline{\textbf{y}} - \textbf{y}\rVert_2^2$ Here $\lVert \overline{\textbf{y}} - \textbf{y}\rVert_2$ is called the 2-norm. In linear algebra, functional analysis, and related areas of mathematics, a norm is a function that assigns a strictly positive length or size to each non-zero vector in a vector space: $\lVert\textbf{x}\rVert_2 = \sqrt[2]{\sum_{n=1}^{N} x_n^2}$
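As a quick sanity check, the same quantity can be computed directly with NumPy. This is a minimal sketch with made-up values; note that `tf.losses.mean_squared_error` used below averages the squared errors without the extra factor of $\frac{1}{2}$:
```python
import numpy as np

y_hat = np.array([1.0, 2.0, 3.5])   # model predictions (illustrative values)
y     = np.array([1.0, 2.5, 3.0])   # targets (illustrative values)

E = np.sum((y_hat - y) ** 2) / (2 * len(y))   # E = (1/2N) * ||y_hat - y||_2^2
print(E)                                      # 0.0833...
```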
###Code
with tf.name_scope("cost_function_linear"):
cost_function = tf.losses.mean_squared_error(model_output, target_output)
###Output
WARNING:tensorflow:From c:\users\piotr\appdata\local\programs\python\python36\lib\site-packages\tensorflow\python\ops\losses\losses_impl.py:667: to_float (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
###Markdown
Model optimization To find the best-fitting $W$ and $b$ parameters we have to find the minimum of the cost function. To do so we will use the Gradient Descent optimization method.
###Code
with tf.name_scope("optimizer_linear"):
optimizer = tf.train.GradientDescentOptimizer(learning_rate=1e-3)
optimization_step = optimizer.minimize(cost_function)
###Output
_____no_output_____
###Markdown
Train the model
###Code
n_iteration_steps = 10_000
init = tf.global_variables_initializer()
with tf.Session() as sess:
sess.run(init)
for i in range(n_iteration_steps):
current_loss, _ = sess.run([cost_function, optimization_step], feed_dict={
data_input: observation_data,
target_output: values_for_observation
})
if i % 2000 == 0:
print(f"iteration: {i}, loss: {current_loss}")
predictions = sess.run(model_output, feed_dict={data_input: xs})
plt.plot(xs, ys, label="ground truth")
plt.plot(xs, predictions, c="r", label="predictions")
plt.scatter(observation_data, values_for_observation, marker="x", c="g", label="observations")
plt.legend()
###Output
_____no_output_____
###Markdown
As we can see, our model doesn't perform really well. To find a better solution we will have to use a more sophisticated model, e.g. polynomial regression. Polynomial Regression In statistics, polynomial regression is a form of regression analysis in which the relationship between the independent variable x and the dependent variable y is modelled as an nth degree polynomial in x.[2] E.g.: $\hat{y}(x, W, b) = w_1 * x + w_2 * x^2 + \dots + w_n*x^n$ Design matrix $\Phi$ The design matrix $\Phi$ is the matrix of values of the explanatory variables for a set of objects. Depending on the definition it takes different forms, e.g.: $\phi(x) = \begin{pmatrix}x & x^2 & x^3\\\end{pmatrix}^T$ $\phi(x) = \begin{pmatrix}1 & e^x & e^{2x} \\\end{pmatrix}^T$ where $x$ is the vector of observations $x_n$. Example: $ x = \begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix} $ $\phi(x) = \begin{pmatrix}x & x^2 & x^3\\\end{pmatrix}^T \quad \; \; \Phi=\begin{bmatrix} 1 & 1 & 1\\ 2 & 4 & 8\\ 3 & 9 & 27\\ \end{bmatrix}$ $\phi(x) = \begin{pmatrix}1 & e^x & e^{2x} \\\end{pmatrix}^T \quad \Phi=\begin{bmatrix} 1 & e^1 & e^2 \\ 1 & e^2 & e^4 \\ 1 & e^3 & e^6 \\ \end{bmatrix}$ In our example we will use only the first form of the design matrix.
###Code
def design_matrix(x_train, degree_of_polynomial):
'''
:param x_train: vector of input values Nx1
:param degree_of_polynomial: degree of polynomial e.g. 1,2,...
:return: Design Matrix X_train for polynomial degree of M
'''
matrix = np.array([x_train ** i for i in range(1, degree_of_polynomial + 1)]).T
return matrix[0]
###Output
_____no_output_____
###Markdown
Example
###Code
a = np.array([1,2,3])
a = np.reshape(a, (3,1))
print("Input vector: \n", a)
print()
print ("Design matrix: \n", design_matrix(a, degree_of_polynomial = 4))
print ("Design shape: \n", design_matrix(a, 4).shape)
###Output
Input vector:
[[1]
[2]
[3]]
Design matrix:
[[ 1 1 1 1]
[ 2 4 8 16]
[ 3 9 27 81]]
Design shape:
(3, 4)
###Markdown
Hyperparameters
###Code
degree_of_polynomial = 4
lambda_parameter = 1e-4
learning_rate = 1e-4
n_iteration_steps = 50_000
###Output
_____no_output_____
###Markdown
Transforming dataOur new model will be polynomial (non-linear) model, so we need to transform our data. Insead of signle dim data, now we will expand it using Design Matrix $\Phi$
###Code
observation_data_design_matrix = design_matrix(observation_data, degree_of_polynomial)
xs_design_matrix = design_matrix(xs, degree_of_polynomial)
###Output
_____no_output_____
###Markdown
Finding parameters WOur new model will be Polynomial regression Regression model such as $X * W = \textbf{Y}$Where:$X$ is Design matrix of size (N,$d_1$)$W$ is matrix of weights, $\colon \mathbb{R}^{d_1} \rightarrow \mathbb{R}^{d_2}$$Y$ is result matrix of a size (N, $d_2$), in our example $d_2$ will be just 1
###Code
data_input = tf.placeholder(dtype=tf.float32, shape=(None, degree_of_polynomial), name="data_input")
target_output = tf.placeholder(dtype=tf.float32, shape=(None, 1), name="target_input")
with tf.variable_scope("polynomial_model", reuse=tf.AUTO_REUSE):
W = tf.get_variable(name="w", shape=(degree_of_polynomial, 1), dtype=tf.float32,
initializer=tf.initializers.random_normal())
b = tf.get_variable(name="b", shape=(1, 1), dtype=tf.float32,
initializer=tf.initializers.random_normal())
model_output = tf.linalg.matmul(data_input, W) + b
###Output
_____no_output_____
###Markdown
Cost function As a cost function we will again use MSE, but this time to avoid overfitting we will add Tikhonov Regularization [3]
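For reference, assuming the standard L2 (ridge) form of Tikhonov regularization, the regularized objective being minimized is $$E_{reg} = \frac{1}{N}\lVert\overline{\textbf{y}} - \textbf{y}\rVert_2^2 + \lambda \lVert W \rVert_2^2$$ where $\lambda$ is the `lambda_parameter` defined above.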
###Code
tikhonov_regularization = lambda_parameter * tf.math.reduce_sum(tf.math.square(W))  # L2 penalty: lambda * ||W||_2^2
with tf.name_scope("cost_function_polynomial"):
cost_function = tf.losses.mean_squared_error(model_output, target_output) + tikhonov_regularization
###Output
_____no_output_____
###Markdown
OptimizationAgain we will optimize parameters using Gradient Descent
###Code
with tf.name_scope("optimizer_polynomial"):
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
optimization_step = optimizer.minimize(cost_function)
init = tf.global_variables_initializer()
with tf.Session() as sess:
sess.run(init)
for i in range(n_iteration_steps):
current_loss, _ = sess.run([cost_function, optimization_step],
feed_dict={
data_input: observation_data_design_matrix,
target_output: values_for_observation
})
if i % 5000 == 0:
print(f"iteration: {i}, loss: {current_loss}")
predictions = sess.run(model_output, feed_dict={data_input: xs_design_matrix})
plt.scatter(observation_data, values_for_observation, c="g", label="observations")
plt.plot(xs, ys, c="r", label="ground truth")
plt.plot(xs, predictions, label="predictions")
plt.legend()
plt.show()
###Output
_____no_output_____ |
content/ch-labs/Lab01_QuantumCircuits.ipynb | ###Markdown
<h1 style="font-size:35px; color:black; ">Lab 1 Quantum Circuits Prerequisite- [Qiskit basics](https://qiskit.org/documentation/tutorials/circuits/1_getting_started_with_qiskit.html)- [Ch.1.2 The Atoms of Computation](https://qiskit.org/textbook/ch-states/atoms-computation.html)Other relevant materials- [Access IBM Quantum Systems](https://qiskit.org/documentation/install.htmlaccess-ibm-quantum-systems)- [IBM Quantum Systems Configuration](https://quantum-computing.ibm.com/docs/manage/backends/configuration)- [Transpile](https://qiskit.org/documentation/apidoc/transpiler.html)- [IBM Quantum account](https://quantum-computing.ibm.com/docs/manage/account/ibmq)- [Quantum Circuits](https://qiskit.org/documentation/apidoc/circuit.html)
###Code
from qiskit import *
from qiskit.visualization import plot_histogram
import numpy as np
###Output
_____no_output_____
###Markdown
Part 1: Classical logic gates with quantum circuits<div style="background: E8E7EB; border-radius: 5px;-moz-border-radius: 5px;"> <p style="background: 800080; border-radius: 5px 5px 0px 0px; padding: 10px 0px 10px 10px; font-size:18px; color:white; ">Goal <p style=" padding: 0px 0px 10px 10px; font-size:16px;">Create quantum circuit functions that can compute the XOR, AND, NAND and OR gates using the NOT gate (expressed as x in Qiskit), the CNOT gate (expressed as cx in Qiskit) and the Toffoli gate (expressed as ccx in Qiskit) .An implementation of the `NOT` gate is provided as an example.
###Code
def NOT(inp):
"""An NOT gate.
Parameters:
inp (str): Input, encoded in qubit 0.
Returns:
QuantumCircuit: Output NOT circuit.
str: Output value measured from qubit 0.
"""
qc = QuantumCircuit(1, 1) # A quantum circuit with a single qubit and a single classical bit
qc.reset(0)
# We encode '0' as the qubit state |0⟩, and '1' as |1⟩
# Since the qubit is initially |0⟩, we don't need to do anything for an input of '0'
# For an input of '1', we do an x to rotate the |0⟩ to |1⟩
if inp=='1':
qc.x(0)
# barrier between input state and gate operation
qc.barrier()
# Now we've encoded the input, we can do a NOT on it using x
qc.x(0)
#barrier between gate operation and measurement
qc.barrier()
# Finally, we extract the |0⟩/|1⟩ output of the qubit and encode it in the bit c[0]
qc.measure(0,0)
qc.draw('mpl')
# We'll run the program on a simulator
backend = Aer.get_backend('qasm_simulator')
# Since the output will be deterministic, we can use just a single shot to get it
job = execute(qc, backend, shots=1, memory=True)
output = job.result().get_memory()[0]
return qc, output
## Test the function
for inp in ['0', '1']:
qc, out = NOT(inp)
print('NOT with input',inp,'gives output',out)
display(qc.draw())
print('\n')
###Output
NOT with input 0 gives output 1
###Markdown
&#128211; XOR gate Takes two binary strings as input and gives one as output. The output is '0' when the inputs are equal and '1' otherwise.
###Code
def XOR(inp1,inp2):
"""An XOR gate.
Parameters:
inpt1 (str): Input 1, encoded in qubit 0.
inpt2 (str): Input 2, encoded in qubit 1.
Returns:
QuantumCircuit: Output XOR circuit.
str: Output value measured from qubit 1.
"""
qc = QuantumCircuit(2, 1)
qc.reset(range(2))
if inp1=='1':
qc.x(0)
if inp2=='1':
qc.x(1)
# barrier between input state and gate operation
qc.barrier()
# this is where your program for quantum XOR gate goes
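    # One possible implementation (a sketch, not the only valid answer):
    # a CNOT with qubit 0 as control and qubit 1 as target leaves inp1 XOR inp2 on qubit 1.
    qc.cx(0, 1)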
# barrier between input state and gate operation
qc.barrier()
qc.measure(1,0) # output from qubit 1 is measured
#We'll run the program on a simulator
backend = Aer.get_backend('qasm_simulator')
#Since the output will be deterministic, we can use just a single shot to get it
job = execute(qc, backend, shots=1, memory=True)
output = job.result().get_memory()[0]
return qc, output
## Test the function
for inp1 in ['0', '1']:
for inp2 in ['0', '1']:
qc, output = XOR(inp1, inp2)
print('XOR with inputs',inp1,inp2,'gives output',output)
display(qc.draw())
print('\n')
###Output
_____no_output_____
###Markdown
&#128211; AND gate Takes two binary strings as input and gives one as output. The output is `'1'` only when both the inputs are `'1'`.
###Code
def AND(inp1,inp2):
"""An AND gate.
Parameters:
inpt1 (str): Input 1, encoded in qubit 0.
inpt2 (str): Input 2, encoded in qubit 1.
Returns:
QuantumCircuit: Output XOR circuit.
str: Output value measured from qubit 2.
"""
qc = QuantumCircuit(3, 1)
qc.reset(range(2))
if inp1=='1':
qc.x(0)
if inp2=='1':
qc.x(1)
qc.barrier()
# this is where your program for quantum AND gate goes
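    # One possible implementation (a sketch, not the only valid answer):
    # a Toffoli gate writes inp1 AND inp2 onto qubit 2, which starts in |0>.
    qc.ccx(0, 1, 2)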
qc.barrier()
qc.measure(2, 0) # output from qubit 2 is measured
# We'll run the program on a simulator
backend = Aer.get_backend('qasm_simulator')
# Since the output will be deterministic, we can use just a single shot to get it
job = execute(qc, backend, shots=1, memory=True)
output = job.result().get_memory()[0]
return qc, output
## Test the function
for inp1 in ['0', '1']:
for inp2 in ['0', '1']:
qc, output = AND(inp1, inp2)
print('AND with inputs',inp1,inp2,'gives output',output)
display(qc.draw())
print('\n')
###Output
_____no_output_____
###Markdown
&#128211; NAND gate Takes two binary strings as input and gives one as output. The output is `'0'` only when both the inputs are `'1'`.
###Code
def NAND(inp1,inp2):
"""An NAND gate.
Parameters:
inpt1 (str): Input 1, encoded in qubit 0.
inpt2 (str): Input 2, encoded in qubit 1.
Returns:
QuantumCircuit: Output NAND circuit.
str: Output value measured from qubit 2.
"""
qc = QuantumCircuit(3, 1)
qc.reset(range(3))
if inp1=='1':
qc.x(0)
if inp2=='1':
qc.x(1)
qc.barrier()
# this is where your program for quantum NAND gate goes
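    # One possible implementation (a sketch, not the only valid answer):
    # compute AND onto qubit 2 with a Toffoli gate, then invert it with an X gate.
    qc.ccx(0, 1, 2)
    qc.x(2)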
qc.barrier()
qc.measure(2, 0) # output from qubit 2 is measured
# We'll run the program on a simulator
backend = Aer.get_backend('qasm_simulator')
# Since the output will be deterministic, we can use just a single shot to get it
job = execute(qc,backend,shots=1,memory=True)
output = job.result().get_memory()[0]
return qc, output
## Test the function
for inp1 in ['0', '1']:
for inp2 in ['0', '1']:
qc, output = NAND(inp1, inp2)
print('NAND with inputs',inp1,inp2,'gives output',output)
display(qc.draw())
print('\n')
###Output
_____no_output_____
###Markdown
&#128211; OR gate: Takes two binary strings as input and gives one as output. The output is '1' if either input is '1'.
###Code
def OR(inp1,inp2):
"""An OR gate.
Parameters:
inpt1 (str): Input 1, encoded in qubit 0.
inpt2 (str): Input 2, encoded in qubit 1.
Returns:
        QuantumCircuit: Output OR circuit.
str: Output value measured from qubit 2.
"""
qc = QuantumCircuit(3, 1)
qc.reset(range(3))
if inp1=='1':
qc.x(0)
if inp2=='1':
qc.x(1)
qc.barrier()
# this is where your program for quantum OR gate goes
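    # One possible implementation (a sketch, not the only valid answer), using
    # De Morgan's law OR(a,b) = NOT(AND(NOT a, NOT b)): flip both inputs, apply a
    # Toffoli onto qubit 2, invert the output, then flip the inputs back.
    qc.x(0)
    qc.x(1)
    qc.ccx(0, 1, 2)
    qc.x(2)
    qc.x(0)
    qc.x(1)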
qc.barrier()
qc.measure(2, 0) # output from qubit 2 is measured
# We'll run the program on a simulator
backend = Aer.get_backend('qasm_simulator')
# Since the output will be deterministic, we can use just a single shot to get it
job = execute(qc,backend,shots=1,memory=True)
output = job.result().get_memory()[0]
return qc, output
## Test the function
for inp1 in ['0', '1']:
for inp2 in ['0', '1']:
qc, output = OR(inp1, inp2)
print('OR with inputs',inp1,inp2,'gives output',output)
display(qc.draw())
print('\n')
###Output
_____no_output_____
###Markdown
Part 2: AND gate on Quantum Computer<div style="background: #E8E7EB; border-radius: 5px;-moz-border-radius: 5px;"> <p style="background: #800080; border-radius: 5px 5px 0px 0px; padding: 10px 0px 10px 10px; font-size:18px; color:white; ">Goal <p style=" padding: 0px 0px 10px 10px; font-size:16px;">Execute the AND gate on two quantum systems and learn how the different circuit properties affect the result. In Part 1 you made an `AND` gate from quantum gates, and executed it on the simulator. Here in Part 2 you will do it again, but instead run the circuits on a real quantum computer. When using a real quantum system, one thing you should keep in mind is that present-day quantum computers are not fault tolerant; they are noisy. The 'noise' in a quantum system is the collective effect of all the things that should not happen, but nevertheless do. Noise results in outputs that are not always what we would expect. There is noise associated with all processes in a quantum circuit: preparing the initial state, applying gates, and qubit measurement. For the gates, noise levels can vary between different gates and between different qubits. `cx` gates are typically more noisy than any single-qubit gate. Here we will use the quantum systems from the IBM Quantum Experience. If you do not have access, you can get it [here](https://qiskit.org/documentation/install.html#access-ibm-quantum-systems). Now that you are ready to use the real quantum computer, let's begin. Step 1. Choosing a device First load the account from the credentials saved on disk by running the following cell:
###Code
IBMQ.load_account()
###Output
_____no_output_____
###Markdown
After your account is loaded, you can see the list of providers that you have access to by running the cell below. Each provider offers different systems for use. For open users, there is typically only one provider, `ibm-q/open/main`:
###Code
IBMQ.providers()
###Output
_____no_output_____
###Markdown
Let us grab the provider using `get_provider`. The command `provider.backends()` shows you the list of backends that are available to you from the selected provider.
###Code
provider = IBMQ.get_provider('ibm-q')
provider.backends()
###Output
_____no_output_____
###Markdown
Among these options, you may pick one of the systems to run your circuits on. All except the `ibmq_qasm_simulator` are real quantum computers that you can use. The differences among these systems reside in the number of qubits, their connectivity, and the system error rates. Upon executing the following cell you will be presented with a widget that displays all of the information about your choice of the backend. You can obtain information that you need by clicking on the tabs. For example, backend status, number of qubits and the connectivity are under the `configuration` tab, whereas the `Error Map` tab will reveal the latest noise information for the system.
###Code
import qiskit.tools.jupyter
backend_ex = provider.get_backend('ibmq_16_melbourne')
backend_ex
###Output
_____no_output_____
###Markdown
For our AND gate circuit, we need a backend with three or more qubits, which is true for all the real systems except for `ibmq_armonk`. Below is an example of how to filter backends, where we filter for number of qubits, and remove simulators:
###Code
backends = provider.backends(filters = lambda x:x.configuration().n_qubits >= 2 and not x.configuration().simulator
and x.status().operational==True)
backends
###Output
_____no_output_____
###Markdown
One convenient way to choose a system is using the `least_busy` function to get the backend with the lowest number of jobs in queue. The downside is that the result might have relatively poor accuracy because, not surprisingly, the lowest error rate systems are the most popular.
###Code
from qiskit.providers.ibmq import least_busy
backend = least_busy(provider.backends(filters=lambda x: x.configuration().n_qubits >= 2 and
not x.configuration().simulator and x.status().operational==True))
backend
###Output
_____no_output_____
###Markdown
Real quantum computers need to be recalibrated regularly, and the fidelity of a specific qubit or gate can change over time. Therefore, which system will produce results with the least error can vary. `ibmq_athens` tends to show relatively low error rates. In this exercise, we select two systems: `ibmq_athens` for its low error rates, and `ibmqx2` for its additional connectivity, in particular its triangular connectivity, which will be useful for circuits with Toffoli gates.
###Code
# run this cell
backend1 = provider.get_backend('ibmqx2')
backend2 = provider.get_backend('ibmq_athens')
###Output
_____no_output_____
###Markdown
Step 2. Define AND function for a real device We now define the AND function. We choose 8192 as the number of shots, the maximum number of shots for open IBM systems, to reduce the variance in the final result. Related information is well explained [here](https://quantum-computing.ibm.com/docs/manage/backends/configuration). Qiskit Transpiler It is important to know that when running a circuit on a real quantum computer, circuits typically need to be transpiled for the backend that you select so that the circuit contains only those gates that the quantum computer can actually perform. Primarily this involves the addition of swap gates so that two-qubit gates in the circuit map to those pairs of qubits on the device that can actually perform these gates. The following cell shows the AND gate represented as a Toffoli gate decomposed into single- and two-qubit gates, which are the only types of gate that can be run on IBM hardware. Provided that CNOT gates can be performed between all three qubits, a triangle topology, no other gates are required.
###Code
qc_and = QuantumCircuit(3)
qc_and.ccx(0,1,2)
print('AND gate')
display(qc_and.draw())
print('\n\nTranspiled AND gate with all the required connectivity')
qc_and.decompose().draw()
###Output
AND gate
###Markdown
In addition, there are often optimizations that the transpiler can perform that reduce the overall gate count, and thus the total length of the input circuits. Note that the addition of swaps to match the device topology, and optimizations for reducing the length of a circuit, are at odds with each other. In what follows we will make use of `initial_layout`, which allows us to pick the qubits on a device used for the computation, and `optimization_level`, an argument that allows selecting from internal defaults for circuit swap mapping and optimization methods to perform. You can learn more about the transpile function in depth [here](https://qiskit.org/documentation/apidoc/transpiler.html). Let's modify the AND function from Part 1 for the real system, with the transpile step included.
###Code
from qiskit.tools.monitor import job_monitor
# run the cell to define AND gate for real quantum system
def AND(inp1, inp2, backend, layout):
qc = QuantumCircuit(3, 1)
qc.reset(range(3))
if inp1=='1':
qc.x(0)
if inp2=='1':
qc.x(1)
qc.barrier()
qc.ccx(0, 1, 2)
qc.barrier()
qc.measure(2, 0)
qc_trans = transpile(qc, backend, initial_layout=layout, optimization_level=3)
job = execute(qc_trans, backend, shots=8192)
print(job.job_id())
job_monitor(job)
output = job.result().get_counts()
return qc_trans, output
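# Example usage (a sketch; assumes a loaded provider, a backend object such as
# backend1, and a valid three-qubit layout chosen below):
# qc_trans, counts = AND('1', '1', backend1, [0, 1, 2])
# 'counts' is a dictionary of measured bitstrings, e.g. {'0': ..., '1': ...}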
###Output
_____no_output_____
###Markdown
When you submit jobs to quantum systems, `job_monitor` will start tracking where your submitted job is in the pipeline. Case A) Three qubits on ibmqx2 with the triangle connectivity First, examine `ibmqx2` using the widget introduced earlier. Find a group of three qubits with triangle conntection and determine your initial layout.
###Code
# run this cell for the widget
backend1
###Output
_____no_output_____
###Markdown
&#128211; Assign your choice of layout to the list variable layout1 in the cell below
###Code
# Assign your choice of the initial_layout to the variable layout1 as a list
# ex) layout1 = [0,2,4]
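# A hint (not the only valid answer): the device connectivity can also be read
# programmatically from backend1.configuration().coupling_map; on ibmqx2 the
# qubit triples (0, 1, 2) and (2, 3, 4) are typically connected as triangles.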
layout1 =
###Output
_____no_output_____
###Markdown
&#128211; Describe the reason for your choice of initial layout. Execute `AND` gate on `ibmqx2` by running the cell below.
###Code
output1_all = []
qc_trans1_all = []
prob1_all = []
worst = 1
best = 0
for input1 in ['0','1']:
for input2 in ['0','1']:
qc_trans1, output1 = AND(input1, input2, backend1, layout1)
output1_all.append(output1)
qc_trans1_all.append(qc_trans1)
prob = output1[str(int( input1=='1' and input2=='1' ))]/8192
prob1_all.append(prob)
print('\nProbability of correct answer for inputs',input1,input2)
print( '{:.2f}'.format(prob) )
print('---------------------------------')
worst = min(worst,prob)
best = max(best, prob)
print('')
print('\nThe highest of these probabilities was {:.2f}'.format(best))
print('The lowest of these probabilities was {:.2f}'.format(worst))
###Output
_____no_output_____
###Markdown
Once your job has finished running, you can easily access the results via `results = backend.retrieve_job('JOB_ID').result()`. Your job_ids will be printed out by the `AND` function defined above. You can also find the job_ids from the results under your `IQX` account. More information can be found [here](https://quantum-computing.ibm.com/docs/manage/account/ibmq). Case B) Three qubits on ibmq_athens for the linear nearest neighbor connectivity Examine `ibmq_athens` through the widget by running the cell below.
###Code
backend2
###Output
_____no_output_____
###Markdown
&#128211; Find three qubits with the linear nearest neighbor connectivity. Determine the initial layout considering the error map and assign it to the list variable layout2.
###Code
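# A hint (not the only valid answer): ibmq_athens is a 5-qubit device whose
# qubits form a linear chain 0-1-2-3-4, so any three adjacent qubits
# (e.g. [0, 1, 2]) give a linear-nearest-neighbor layout; compare their error
# rates on the Error Map before choosing.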
layout2 = []
###Output
_____no_output_____
###Markdown
&#128211; Describe the reason for your choice of initial layout. Execute `AND` gate on `ibmq_athens` by running the cell below.
###Code
output2_all = []
qc_trans2_all = []
prob2_all = []
worst = 1
best = 0
for input1 in ['0','1']:
for input2 in ['0','1']:
qc_trans2, output2 = AND(input1, input2, backend2, layout2)
output2_all.append(output2)
qc_trans2_all.append(qc_trans2)
prob = output2[str(int( input1=='1' and input2=='1' ))]/8192
prob2_all.append(prob)
print('\nProbability of correct answer for inputs',input1,input2)
print('{:.2f}'.format(prob) )
print('---------------------------------')
worst = min(worst,prob)
best = max(best, prob)
print('')
print('\nThe highest of these probabilities was {:.2f}'.format(best))
print('The lowest of these probabilities was {:.2f}'.format(worst))
###Output
_____no_output_____
###Markdown
Step 3. Interpret the result There are several quantities that distinguish the circuits. Chief among them is the **circuit depth**. Circuit depth is defined in detail [here](https://qiskit.org/documentation/apidoc/circuit.html) (See the Supplementary Information and click the Quantum Circuit Properties tab). Circuit depth is proportional to the number of gates in a circuit, and loosely corresponds to the runtime of the circuit on hardware. Therefore, circuit depth is an easy-to-compute metric that can be used to estimate the fidelity of an executed circuit. A second important value is the number of **nonlocal** (multi-qubit) **gates** in a circuit. On IBM Quantum systems, the only nonlocal gate that can physically be performed is the CNOT gate. Recall that CNOT gates are the most expensive gates to perform, and thus the total number of these gates also serves as a good benchmark for the accuracy of the final output. A) Circuit depth and result accuracy Running the cells below will display the four transpiled AND gate circuit diagrams with the corresponding inputs that were executed on `ibmq_athens`, together with their circuit depths and the success probability of producing the correct answer.
###Code
print('Transpiled AND gate circuit for ibmq_athens with input 0 0')
print('\nThe circuit depth : {}'.format (qc_trans2_all[0].depth()))
print('# of nonlocal gates : {}'.format (qc_trans2_all[0].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob2_all[0]) )
qc_trans2_all[0].draw()
print('Transpiled AND gate circuit for ibmq_athens with input 0 1')
print('\nThe circuit depth : {}'.format (qc_trans2_all[1].depth()))
print('# of nonlocal gates : {}'.format (qc_trans2_all[1].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob2_all[1]) )
qc_trans2_all[1].draw()
print('Transpiled AND gate circuit for ibmq_athens with input 1 0')
print('\nThe circuit depth : {}'.format (qc_trans2_all[2].depth()))
print('# of nonlocal gates : {}'.format (qc_trans2_all[2].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob2_all[2]) )
qc_trans2_all[2].draw()
print('Transpiled AND gate circuit for ibmq_athens with input 1 1')
print('\nThe circuit depth : {}'.format (qc_trans2_all[3].depth()))
print('# of nonlocal gates : {}'.format (qc_trans2_all[3].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob2_all[3]) )
qc_trans2_all[3].draw()
###Output
_____no_output_____
###Markdown
&#128211; Explain the reason for the dissimilarity of the circuits. Describe the relation between the properties of the circuit and the accuracy of the outcomes. B) Qubit connectivity and circuit depth Investigate the transpiled circuits for `ibmqx2` by running the cells below.
###Code
print('Transpiled AND gate circuit for ibmqx2 with input 0 0')
print('\nThe circuit depth : {}'.format (qc_trans1_all[0].depth()))
print('# of nonlocal gates : {}'.format (qc_trans1_all[0].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob1_all[0]) )
qc_trans1_all[0].draw()
print('Transpiled AND gate circuit for ibmqx2 with input 0 1')
print('\nThe circuit depth : {}'.format (qc_trans1_all[1].depth()))
print('# of nonlocal gates : {}'.format (qc_trans1_all[1].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob1_all[1]) )
qc_trans1_all[1].draw()
print('Transpiled AND gate circuit for ibmqx2 with input 1 0')
print('\nThe circuit depth : {}'.format (qc_trans1_all[2].depth()))
print('# of nonlocal gates : {}'.format (qc_trans1_all[2].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob1_all[2]) )
qc_trans1_all[2].draw()
print('Transpiled AND gate circuit for ibmqx2 with input 1 1')
print('\nThe circuit depth : {}'.format (qc_trans1_all[3].depth()))
print('# of nonlocal gates : {}'.format (qc_trans1_all[3].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob1_all[3]) )
qc_trans1_all[3].draw()
###Output
_____no_output_____
###Markdown
<h1 style="font-size:35px; color:black; ">Lab 1 Quantum Circuits Prerequisite- [Qiskit basics](https://qiskit.org/documentation/tutorials/circuits/1_getting_started_with_qiskit.html)- [Ch.1.2 The Atoms of Computation](https://qiskit.org/textbook/ch-states/atoms-computation.html)Other relevant materials- [Access IBM Quantum Systems](https://qiskit.org/documentation/install.html#access-ibm-quantum-systems)- [IBM Quantum Systems Configuration](https://quantum-computing.ibm.com/docs/manage/backends/configuration)- [Transpile](https://qiskit.org/documentation/apidoc/transpiler.html)- [IBM Quantum account](https://quantum-computing.ibm.com/docs/manage/account/ibmq)- [Quantum Circuits](https://qiskit.org/documentation/apidoc/circuit.html)
###Code
from qiskit import *
from qiskit.visualization import plot_histogram
import numpy as np
###Output
_____no_output_____
###Markdown
Part 1: Classical logic gates with quantum circuits<div style="background: E8E7EB; border-radius: 5px;-moz-border-radius: 5px;"> <p style="background: 800080; border-radius: 5px 5px 0px 0px; padding: 10px 0px 10px 10px; font-size:18px; color:white; ">Goal <p style=" padding: 0px 0px 10px 10px; font-size:16px;">Create quantum circuit functions that can compute the XOR, AND, NAND and OR gates using the NOT gate (expressed as x in Qiskit), the CNOT gate (expressed as cx in Qiskit) and the Toffoli gate (expressed as ccx in Qiskit) .An implementation of the `NOT` gate is provided as an example.
###Code
def NOT(inp):
"""An NOT gate.
Parameters:
inp (str): Input, encoded in qubit 0.
Returns:
QuantumCircuit: Output NOT circuit.
str: Output value measured from qubit 0.
"""
qc = QuantumCircuit(1, 1) # A quantum circuit with a single qubit and a single classical bit
qc.reset(0)
# We encode '0' as the qubit state |0⟩, and '1' as |1⟩
# Since the qubit is initially |0⟩, we don't need to do anything for an input of '0'
# For an input of '1', we do an x to rotate the |0⟩ to |1⟩
if inp=='1':
qc.x(0)
# barrier between input state and gate operation
qc.barrier()
# Now we've encoded the input, we can do a NOT on it using x
qc.x(0)
#barrier between gate operation and measurement
qc.barrier()
# Finally, we extract the |0⟩/|1⟩ output of the qubit and encode it in the bit c[0]
qc.measure(0,0)
qc.draw('mpl')
# We'll run the program on a simulator
backend = Aer.get_backend('aer_simulator')
# Since the output will be deterministic, we can use just a single shot to get it
job = backend.run(qc, shots=1, memory=True)
output = job.result().get_memory()[0]
return qc, output
## Test the function
for inp in ['0', '1']:
qc, out = NOT(inp)
print('NOT with input',inp,'gives output',out)
display(qc.draw())
print('\n')
###Output
NOT with input 0 gives output 1
###Markdown
&#128211; XOR gate: Takes two binary strings as input and gives one as output. The output is '0' when the inputs are equal and '1' otherwise.
###Code
def XOR(inp1,inp2):
"""An XOR gate.
Parameters:
inpt1 (str): Input 1, encoded in qubit 0.
inpt2 (str): Input 2, encoded in qubit 1.
Returns:
QuantumCircuit: Output XOR circuit.
str: Output value measured from qubit 1.
"""
qc = QuantumCircuit(2, 1)
qc.reset(range(2))
if inp1=='1':
qc.x(0)
if inp2=='1':
qc.x(1)
# barrier between input state and gate operation
qc.barrier()
# this is where your program for quantum XOR gate goes
# barrier between input state and gate operation
qc.barrier()
qc.measure(1,0) # output from qubit 1 is measured
#We'll run the program on a simulator
backend = Aer.get_backend('aer_simulator')
#Since the output will be deterministic, we can use just a single shot to get it
job = backend.run(qc, shots=1, memory=True)
output = job.result().get_memory()[0]
return qc, output
## Test the function
for inp1 in ['0', '1']:
for inp2 in ['0', '1']:
qc, output = XOR(inp1, inp2)
print('XOR with inputs',inp1,inp2,'gives output',output)
display(qc.draw())
print('\n')
###Output
_____no_output_____
###Markdown
&#128211; AND gate: Takes two binary strings as input and gives one as output. The output is `'1'` only when both the inputs are `'1'`.
###Code
def AND(inp1,inp2):
"""An AND gate.
Parameters:
inpt1 (str): Input 1, encoded in qubit 0.
inpt2 (str): Input 2, encoded in qubit 1.
Returns:
        QuantumCircuit: Output AND circuit.
str: Output value measured from qubit 2.
"""
qc = QuantumCircuit(3, 1)
qc.reset(range(2))
if inp1=='1':
qc.x(0)
if inp2=='1':
qc.x(1)
qc.barrier()
# this is where your program for quantum AND gate goes
qc.barrier()
qc.measure(2, 0) # output from qubit 2 is measured
# We'll run the program on a simulator
backend = Aer.get_backend('aer_simulator')
# Since the output will be deterministic, we can use just a single shot to get it
job = backend.run(qc, shots=1, memory=True)
output = job.result().get_memory()[0]
return qc, output
## Test the function
for inp1 in ['0', '1']:
for inp2 in ['0', '1']:
qc, output = AND(inp1, inp2)
print('AND with inputs',inp1,inp2,'gives output',output)
display(qc.draw())
print('\n')
###Output
_____no_output_____
###Markdown
&#128211; NAND gate: Takes two binary strings as input and gives one as output. The output is `'0'` only when both the inputs are `'1'`.
###Code
def NAND(inp1,inp2):
"""An NAND gate.
Parameters:
inpt1 (str): Input 1, encoded in qubit 0.
inpt2 (str): Input 2, encoded in qubit 1.
Returns:
QuantumCircuit: Output NAND circuit.
str: Output value measured from qubit 2.
"""
qc = QuantumCircuit(3, 1)
qc.reset(range(3))
if inp1=='1':
qc.x(0)
if inp2=='1':
qc.x(1)
qc.barrier()
# this is where your program for quantum NAND gate goes
qc.barrier()
qc.measure(2, 0) # output from qubit 2 is measured
# We'll run the program on a simulator
backend = Aer.get_backend('aer_simulator')
# Since the output will be deterministic, we can use just a single shot to get it
job = backend.run(qc,shots=1,memory=True)
output = job.result().get_memory()[0]
return qc, output
## Test the function
for inp1 in ['0', '1']:
for inp2 in ['0', '1']:
qc, output = NAND(inp1, inp2)
print('NAND with inputs',inp1,inp2,'gives output',output)
display(qc.draw())
print('\n')
###Output
_____no_output_____
###Markdown
&#128211; OR gate: Takes two binary strings as input and gives one as output. The output is '1' if either input is '1'.
###Code
def OR(inp1,inp2):
"""An OR gate.
Parameters:
inpt1 (str): Input 1, encoded in qubit 0.
inpt2 (str): Input 2, encoded in qubit 1.
Returns:
        QuantumCircuit: Output OR circuit.
str: Output value measured from qubit 2.
"""
qc = QuantumCircuit(3, 1)
qc.reset(range(3))
if inp1=='1':
qc.x(0)
if inp2=='1':
qc.x(1)
qc.barrier()
# this is where your program for quantum OR gate goes
qc.barrier()
qc.measure(2, 0) # output from qubit 2 is measured
# We'll run the program on a simulator
backend = Aer.get_backend('aer_simulator')
# Since the output will be deterministic, we can use just a single shot to get it
job = backend.run(qc,shots=1,memory=True)
output = job.result().get_memory()[0]
return qc, output
## Test the function
for inp1 in ['0', '1']:
for inp2 in ['0', '1']:
qc, output = OR(inp1, inp2)
print('OR with inputs',inp1,inp2,'gives output',output)
display(qc.draw())
print('\n')
###Output
_____no_output_____
###Markdown
Part 2: AND gate on Quantum Computer<div style="background: #E8E7EB; border-radius: 5px;-moz-border-radius: 5px;"> <p style="background: #800080; border-radius: 5px 5px 0px 0px; padding: 10px 0px 10px 10px; font-size:18px; color:white; ">Goal <p style=" padding: 0px 0px 10px 10px; font-size:16px;">Execute the AND gate on a real quantum system and learn how the noise properties affect the result. In Part 1 you made an `AND` gate from quantum gates, and executed it on the simulator. Here in Part 2 you will do it again, but instead run the circuits on a real quantum computer. When using a real quantum system, one thing you should keep in mind is that present-day quantum computers are not fault tolerant; they are noisy. The 'noise' in a quantum system is the collective effect of all the things that should not happen, but nevertheless do. Noise results in outputs that are not always what we would expect. There is noise associated with all processes in a quantum circuit: preparing the initial state, applying gates, and qubit measurement. For the gates, noise levels can vary between different gates and between different qubits. `cx` gates are typically more noisy than any single-qubit gate. Here we will use the quantum systems from the IBM Quantum Experience. If you do not have access, you can get it [here](https://qiskit.org/documentation/install.html#access-ibm-quantum-systems). Now that you are ready to use the real quantum computer, let's begin. Step 1. Choosing a device First load the account from the credentials saved on disk by running the following cell:
###Code
IBMQ.load_account()
###Output
_____no_output_____
###Markdown
After your account is loaded, you can see the list of providers that you have access to by running the cell below. Each provider offers different systems for use. For open users, there is typically only one provider `ibm-q/open/main`:
###Code
IBMQ.providers()
###Output
_____no_output_____
###Markdown
Let us grab the provider using `get_provider`. The command `provider.backends()` shows you the list of backends that are available to you from the selected provider.
###Code
provider = IBMQ.get_provider('ibm-q')
provider.backends()
###Output
_____no_output_____
###Markdown
Among these options, you may pick one of the systems to run your circuits on. All except the `ibmq_qasm_simulator` are real quantum computers that you can use. The differences among these systems reside in the number of qubits, their connectivity, and the system error rates. Upon executing the following cell you will be presented with a widget that displays all of the information about your choice of the backend. You can obtain information that you need by clicking on the tabs. For example, backend status, number of qubits and the connectivity are under the `configuration` tab, whereas the `Error Map` tab will reveal the latest noise information for the system.
###Code
import qiskit.tools.jupyter
backend_ex = provider.get_backend('ibmq_lima')
backend_ex
###Output
_____no_output_____
###Markdown
For our AND gate circuit, we need a backend with three or more qubits, which is true for all the real systems except for `ibmq_armonk`. Below is an example of how to filter backends, where we filter for number of qubits, and remove simulators:
###Code
backends = provider.backends(filters = lambda x:x.configuration().n_qubits >= 2 and not x.configuration().simulator
and x.status().operational==True)
backends
###Output
_____no_output_____
###Markdown
One convenient way to choose a system is using the `least_busy` function to get the backend with the lowest number of jobs in queue. The downside is that the result might have relatively poor accuracy because, not surprisingly, the lowest error rate systems are the most popular.
###Code
from qiskit.providers.ibmq import least_busy
backend = least_busy(provider.backends(filters=lambda x: x.configuration().n_qubits >= 2 and
not x.configuration().simulator and x.status().operational==True))
backend
###Output
_____no_output_____
###Markdown
Real quantum computers need to be recalibrated regularly, and the fidelity of a specific qubit or gate can change over time. Therefore, which system will produce results with the least error can vary. In this exercise, we select one of the IBM Quantum systems: `ibmq_quito`.
###Code
# run this cell
backend = provider.get_backend('ibmq_quito')
###Output
_____no_output_____
###Markdown
Step 2. Define AND function for a real device We now define the AND function. We choose 8192 as the number of shots, the maximum number of shots for open IBM systems, to reduce the variance in the final result. Related information is well explained [here](https://quantum-computing.ibm.com/docs/manage/backends/configuration). Qiskit Transpiler It is important to know that when running a circuit on a real quantum computer, circuits typically need to be transpiled for the backend that you select so that the circuit contains only those gates that the quantum computer can actually perform. Primarily this involves the addition of swap gates so that two-qubit gates in the circuit map to those pairs of qubits on the device that can actually perform these gates. The following cell shows the AND gate represented as a Toffoli gate decomposed into single- and two-qubit gates, which are the only types of gate that can be run on IBM hardware.
###Code
qc_and = QuantumCircuit(3)
qc_and.ccx(0,1,2)
print('AND gate')
display(qc_and.draw())
print('\n\nTranspiled AND gate with all the required connectivity')
qc_and.decompose().draw()
###Output
AND gate
###Markdown
In addition, there are often optimizations that the transpiler can perform that reduce the overall gate count, and thus the total length of the input circuits. Note that the addition of swaps to match the device topology, and optimizations for reducing the length of a circuit, are at odds with each other. In what follows we will make use of `initial_layout`, which allows us to pick the qubits on a device used for the computation, and `optimization_level`, an argument that allows selecting from internal defaults for circuit swap mapping and optimization methods to perform. You can learn more about the transpile function in depth [here](https://qiskit.org/documentation/apidoc/transpiler.html). Let's modify the AND function from Part 1 for the real system, with the transpile step included.
###Code
from qiskit.tools.monitor import job_monitor
# run the cell to define AND gate for real quantum system
def AND(inp1, inp2, backend, layout):
qc = QuantumCircuit(3, 1)
qc.reset(range(3))
if inp1=='1':
qc.x(0)
if inp2=='1':
qc.x(1)
qc.barrier()
qc.ccx(0, 1, 2)
qc.barrier()
qc.measure(2, 0)
qc_trans = transpile(qc, backend, initial_layout=layout, optimization_level=3)
job = backend.run(qc_trans, shots=8192)
print(job.job_id())
job_monitor(job)
output = job.result().get_counts()
return qc_trans, output
###Output
_____no_output_____
###Markdown
When you submit jobs to quantum systems, `job_monitor` will start tracking where your submitted job is in the pipeline. First, examine `ibmq_quito` through the widget by running the cell below.
###Code
backend
###Output
_____no_output_____
###Markdown
&#128211; Determine a three-qubit initial layout considering the error map and assign it to the list variable `layout`.
###Code
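# A hint (not the only valid answer): the coupling map is available as
# backend.configuration().coupling_map; on ibmq_quito's T-shaped map, pick a
# connected triple (e.g. [0, 1, 2]) with low CNOT and readout error.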
layout =
###Output
_____no_output_____
###Markdown
&#128211; Describe the reason for your choice of initial layout. **your answer:** Execute `AND` gate on `ibmq_quito` by running the cell below.
###Code
output_all = []
qc_trans_all = []
prob_all = []
worst = 1
best = 0
for input1 in ['0','1']:
for input2 in ['0','1']:
qc_trans, output = AND(input1, input2, backend, layout)
output_all.append(output)
qc_trans_all.append(qc_trans)
prob = output[str(int( input1=='1' and input2=='1' ))]/8192
prob_all.append(prob)
print('\nProbability of correct answer for inputs',input1,input2)
print('{:.2f}'.format(prob) )
print('---------------------------------')
worst = min(worst,prob)
best = max(best, prob)
print('')
print('\nThe highest of these probabilities was {:.2f}'.format(best))
print('The lowest of these probabilities was {:.2f}'.format(worst))
###Output
_____no_output_____
###Markdown
Step 3. Interpret the result There are several quantities that distinguish the circuits. Chief among them is the **circuit depth**. Circuit depth is defined in detail [here](https://qiskit.org/documentation/apidoc/circuit.html) (See the Supplementary Information and click the Quantum Circuit Properties tab). Circuit depth is proportional to the number of gates in a circuit, and loosely corresponds to the runtime of the circuit on hardware. Therefore, circuit depth is an easy-to-compute metric that can be used to estimate the fidelity of an executed circuit. A second important value is the number of **nonlocal** (multi-qubit) **gates** in a circuit. On IBM Quantum systems, the only nonlocal gate that can physically be performed is the CNOT gate. Recall that CNOT gates are the most expensive gates to perform, and thus the total number of these gates also serves as a good benchmark for the accuracy of the final output. Circuit depth and result accuracy Running the cells below will display the four transpiled AND gate circuit diagrams with the corresponding inputs that were executed on `ibmq_quito`, together with their circuit depths and the success probability of producing the correct answer.
###Code
print('Transpiled AND gate circuit for ibmq_quito with input 0 0')
print('\nThe circuit depth : {}'.format (qc_trans_all[0].depth()))
print('# of nonlocal gates : {}'.format (qc_trans_all[0].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob_all[0]) )
qc_trans_all[0].draw('mpl')
print('Transpiled AND gate circuit for ibmq_quito with input 0 1')
print('\nThe circuit depth : {}'.format (qc_trans_all[1].depth()))
print('# of nonlocal gates : {}'.format (qc_trans_all[1].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob_all[1]) )
qc_trans_all[1].draw('mpl')
print('Transpiled AND gate circuit for ibmq_quito with input 1 0')
print('\nThe circuit depth : {}'.format (qc_trans_all[2].depth()))
print('# of nonlocal gates : {}'.format (qc_trans_all[2].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob_all[2]) )
qc_trans_all[2].draw('mpl')
print('Transpiled AND gate circuit for ibmq_quito with input 1 1')
print('\nThe circuit depth : {}'.format (qc_trans_all[3].depth()))
print('# of nonlocal gates : {}'.format (qc_trans_all[3].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob_all[3]) )
qc_trans_all[3].draw('mpl')
###Output
_____no_output_____
###Markdown
<h1 style="font-size:35px; color:black; ">Lab 1 Quantum Circuits Prerequisite- [Qiskit basics](https://qiskit.org/documentation/tutorials/circuits/1_getting_started_with_qiskit.html)- [Ch.1.2 The Atoms of Computation](https://qiskit.org/textbook/ch-states/atoms-computation.html)Other relevant materials- [Access IBM Quantum Systems](https://qiskit.org/documentation/install.html#access-ibm-quantum-systems)- [IBM Quantum Systems Configuration](https://quantum-computing.ibm.com/docs/manage/backends/configuration)- [Transpile](https://qiskit.org/documentation/apidoc/transpiler.html)- [IBM Quantum account](https://quantum-computing.ibm.com/docs/manage/account/ibmq)- [Quantum Circuits](https://qiskit.org/documentation/apidoc/circuit.html)
###Code
from qiskit import *
from qiskit.visualization import plot_histogram
import numpy as np
###Output
_____no_output_____
###Markdown
Part 1: Classical logic gates with quantum circuits<div style="background: E8E7EB; border-radius: 5px;-moz-border-radius: 5px;"> <p style="background: 800080; border-radius: 5px 5px 0px 0px; padding: 10px 0px 10px 10px; font-size:18px; color:white; ">Goal <p style=" padding: 0px 0px 10px 10px; font-size:16px;">Create quantum circuit functions that can compute the XOR, AND, NAND and OR gates using the NOT gate (expressed as x in Qiskit), the CNOT gate (expressed as cx in Qiskit) and the Toffoli gate (expressed as ccx in Qiskit) .An implementation of the `NOT` gate is provided as an example.
###Code
def NOT(inp):
"""An NOT gate.
Parameters:
inp (str): Input, encoded in qubit 0.
Returns:
QuantumCircuit: Output NOT circuit.
str: Output value measured from qubit 0.
"""
qc = QuantumCircuit(1, 1) # A quantum circuit with a single qubit and a single classical bit
qc.reset(0)
# We encode '0' as the qubit state |0⟩, and '1' as |1⟩
# Since the qubit is initially |0⟩, we don't need to do anything for an input of '0'
# For an input of '1', we do an x to rotate the |0⟩ to |1⟩
if inp=='1':
qc.x(0)
# barrier between input state and gate operation
qc.barrier()
# Now we've encoded the input, we can do a NOT on it using x
qc.x(0)
#barrier between gate operation and measurement
qc.barrier()
# Finally, we extract the |0⟩/|1⟩ output of the qubit and encode it in the bit c[0]
qc.measure(0,0)
qc.draw('mpl')
# We'll run the program on a simulator
backend = Aer.get_backend('qasm_simulator')
# Since the output will be deterministic, we can use just a single shot to get it
job = execute(qc, backend, shots=1, memory=True)
output = job.result().get_memory()[0]
return qc, output
## Test the function
for inp in ['0', '1']:
qc, out = NOT(inp)
print('NOT with input',inp,'gives output',out)
display(qc.draw())
print('\n')
###Output
NOT with input 0 gives output 1
###Markdown
&#128211; XOR gate: Takes two binary strings as input and gives one as output. The output is '0' when the inputs are equal and '1' otherwise.
###Code
def XOR(inp1,inp2):
"""An XOR gate.
Parameters:
inpt1 (str): Input 1, encoded in qubit 0.
inpt2 (str): Input 2, encoded in qubit 1.
Returns:
QuantumCircuit: Output XOR circuit.
str: Output value measured from qubit 1.
"""
qc = QuantumCircuit(2, 1)
qc.reset(range(2))
if inp1=='1':
qc.x(0)
if inp2=='1':
qc.x(1)
# barrier between input state and gate operation
qc.barrier()
# this is where your program for quantum XOR gate goes
# barrier between input state and gate operation
qc.barrier()
qc.measure(1,0) # output from qubit 1 is measured
#We'll run the program on a simulator
backend = Aer.get_backend('qasm_simulator')
#Since the output will be deterministic, we can use just a single shot to get it
job = execute(qc, backend, shots=1, memory=True)
output = job.result().get_memory()[0]
return qc, output
## Test the function
for inp1 in ['0', '1']:
for inp2 in ['0', '1']:
qc, output = XOR(inp1, inp2)
print('XOR with inputs',inp1,inp2,'gives output',output)
display(qc.draw())
print('\n')
###Output
_____no_output_____
###Markdown
&#128211; AND gate: Takes two binary strings as input and gives one as output. The output is `'1'` only when both the inputs are `'1'`.
###Code
def AND(inp1,inp2):
"""An AND gate.
Parameters:
inpt1 (str): Input 1, encoded in qubit 0.
inpt2 (str): Input 2, encoded in qubit 1.
Returns:
        QuantumCircuit: Output AND circuit.
str: Output value measured from qubit 2.
"""
qc = QuantumCircuit(3, 1)
qc.reset(range(2))
if inp1=='1':
qc.x(0)
if inp2=='1':
qc.x(1)
qc.barrier()
# this is where your program for quantum AND gate goes
qc.barrier()
qc.measure(2, 0) # output from qubit 2 is measured
# We'll run the program on a simulator
backend = Aer.get_backend('qasm_simulator')
# Since the output will be deterministic, we can use just a single shot to get it
job = execute(qc, backend, shots=1, memory=True)
output = job.result().get_memory()[0]
return qc, output
## Test the function
for inp1 in ['0', '1']:
for inp2 in ['0', '1']:
qc, output = AND(inp1, inp2)
print('AND with inputs',inp1,inp2,'gives output',output)
display(qc.draw())
print('\n')
###Output
_____no_output_____
###Markdown
&#128211; NAND gate: Takes two binary strings as input and gives one as output. The output is `'0'` only when both the inputs are `'1'`.
###Code
def NAND(inp1,inp2):
"""An NAND gate.
Parameters:
inpt1 (str): Input 1, encoded in qubit 0.
inpt2 (str): Input 2, encoded in qubit 1.
Returns:
QuantumCircuit: Output NAND circuit.
str: Output value measured from qubit 2.
"""
qc = QuantumCircuit(3, 1)
qc.reset(range(3))
if inp1=='1':
qc.x(0)
if inp2=='1':
qc.x(1)
qc.barrier()
# this is where your program for quantum NAND gate goes
qc.barrier()
qc.measure(2, 0) # output from qubit 2 is measured
# We'll run the program on a simulator
backend = Aer.get_backend('qasm_simulator')
# Since the output will be deterministic, we can use just a single shot to get it
job = execute(qc,backend,shots=1,memory=True)
output = job.result().get_memory()[0]
return qc, output
## Test the function
for inp1 in ['0', '1']:
for inp2 in ['0', '1']:
qc, output = NAND(inp1, inp2)
print('NAND with inputs',inp1,inp2,'gives output',output)
display(qc.draw())
print('\n')
###Output
_____no_output_____
###Markdown
&#128211; OR gate: Takes two binary strings as input and gives one as output. The output is '1' if either input is '1'.
###Code
def OR(inp1,inp2):
"""An OR gate.
Parameters:
inpt1 (str): Input 1, encoded in qubit 0.
inpt2 (str): Input 2, encoded in qubit 1.
Returns:
        QuantumCircuit: Output OR circuit.
str: Output value measured from qubit 2.
"""
qc = QuantumCircuit(3, 1)
qc.reset(range(3))
if inp1=='1':
qc.x(0)
if inp2=='1':
qc.x(1)
qc.barrier()
# this is where your program for quantum OR gate goes
qc.barrier()
qc.measure(2, 0) # output from qubit 2 is measured
# We'll run the program on a simulator
backend = Aer.get_backend('qasm_simulator')
# Since the output will be deterministic, we can use just a single shot to get it
job = execute(qc,backend,shots=1,memory=True)
output = job.result().get_memory()[0]
return qc, output
## Test the function
for inp1 in ['0', '1']:
for inp2 in ['0', '1']:
qc, output = OR(inp1, inp2)
print('OR with inputs',inp1,inp2,'gives output',output)
display(qc.draw())
print('\n')
###Output
_____no_output_____
###Markdown
Part 2: AND gate on Quantum Computer<div style="background: #E8E7EB; border-radius: 5px;-moz-border-radius: 5px;"> <p style="background: #800080; border-radius: 5px 5px 0px 0px; padding: 10px 0px 10px 10px; font-size:18px; color:white; ">Goal <p style=" padding: 0px 0px 10px 10px; font-size:16px;">Execute the AND gate on two quantum systems and learn how the different circuit properties affect the result. In Part 1 you made an `AND` gate from quantum gates, and executed it on the simulator. Here in Part 2 you will do it again, but instead run the circuits on a real quantum computer. When using a real quantum system, one thing you should keep in mind is that present-day quantum computers are not fault tolerant; they are noisy. The 'noise' in a quantum system is the collective effect of all the things that should not happen, but nevertheless do. Noise results in outputs that are not always what we would expect. There is noise associated with all processes in a quantum circuit: preparing the initial state, applying gates, and qubit measurement. For the gates, noise levels can vary between different gates and between different qubits. `cx` gates are typically more noisy than any single-qubit gate. Here we will use the quantum systems from the IBM Quantum Experience. If you do not have access, you can get it [here](https://qiskit.org/documentation/install.html#access-ibm-quantum-systems). Now that you are ready to use the real quantum computer, let's begin. Step 1. Choosing a device First load the account from the credentials saved on disk by running the following cell:
###Code
IBMQ.load_account()
###Output
_____no_output_____
###Markdown
After your account is loaded, you can see the list of providers that you have access to by running the cell below. Each provider offers different systems for use. For open users, there is typically only one provider, `ibm-q/open/main`:
###Code
IBMQ.providers()
###Output
_____no_output_____
###Markdown
Let us grab the provider using `get_provider`. The command `provider.backends()` shows you the list of backends that are available to you from the selected provider.
###Code
provider = IBMQ.get_provider('ibm-q')
provider.backends()
###Output
_____no_output_____
###Markdown
Among these options, you may pick one of the systems to run your circuits on. All except the `ibmq_qasm_simulator` are real quantum computers that you can use. The differences among these systems reside in the number of qubits, their connectivity, and the system error rates. Upon executing the following cell you will be presented with a widget that displays all of the information about your choice of the backend. You can obtain information that you need by clicking on the tabs. For example, backend status, number of qubits and the connectivity are under the `configuration` tab, whereas the `Error Map` tab will reveal the latest noise information for the system.
###Code
import qiskit.tools.jupyter
backend_ex = provider.get_backend('ibmq_16_melbourne')
backend_ex
###Output
_____no_output_____
###Markdown
For our AND gate circuit, we need a backend with three or more qubits, which is true for all the real systems except for `ibmq_armonk`. Below is an example of how to filter backends, where we filter for number of qubits, and remove simulators:
###Code
backends = provider.backends(filters = lambda x:x.configuration().n_qubits >= 2 and not x.configuration().simulator
and x.status().operational==True)
backends
###Output
_____no_output_____
###Markdown
One convenient way to choose a system is using the `least_busy` function to get the backend with the lowest number of jobs in queue. The downside is that the result might have relatively poor accuracy because, not surprisingly, the lowest error rate systems are the most popular.
###Code
from qiskit.providers.ibmq import least_busy
backend = least_busy(provider.backends(filters=lambda x: x.configuration().n_qubits >= 2 and
not x.configuration().simulator and x.status().operational==True))
backend
###Output
_____no_output_____
###Markdown
Real quantum computers need to be recalibrated regularly, and the fidelity of a specific qubit or gate can change over time. Therefore, which system will produce results with the least error can vary. `ibmq_athens` tends to show relatively low error rates. In this exercise, we select two systems: `ibmq_athens` for its low error rates, and `ibmqx2` for its additional connectivity, in particular its triangular connectivity, which will be useful for circuits with Toffoli gates.
###Code
# run this cell
backend1 = provider.get_backend('ibmqx2')
backend2 = provider.get_backend('ibmq_athens')
###Output
_____no_output_____
###Markdown
Step 2. Define AND function for a real device We now define the AND function. We choose 8192 as the number of shots, the maximum number of shots for open IBM systems, to reduce the variance in the final result. Related information is well explained [here](https://quantum-computing.ibm.com/docs/manage/backends/configuration). Qiskit Transpiler It is important to know that when running a circuit on a real quantum computer, circuits typically need to be transpiled for the backend that you select so that the circuit contains only those gates that the quantum computer can actually perform. Primarily this involves the addition of swap gates so that two-qubit gates in the circuit map to those pairs of qubits on the device that can actually perform these gates. The following cell shows the AND gate represented as a Toffoli gate decomposed into single- and two-qubit gates, which are the only types of gate that can be run on IBM hardware. Provided that CNOT gates can be performed between all three qubits, a triangle topology, no other gates are required.
###Code
qc_and = QuantumCircuit(3)
qc_and.ccx(0,1,2)
print('AND gate')
display(qc_and.draw())
print('\n\nTranspiled AND gate with all the required connectivity')
qc_and.decompose().draw()
###Output
AND gate
###Markdown
In addition, there are often optimizations that the transpiler can perform that reduce the overall gate count, and thus the total length of the input circuits. Note that the addition of swaps to match the device topology, and optimizations for reducing the length of a circuit, are at odds with each other. In what follows we will make use of `initial_layout`, which allows us to pick the qubits on a device used for the computation, and `optimization_level`, an argument that allows selecting from internal defaults for circuit swap mapping and optimization methods to perform. You can learn more about the transpile function in depth [here](https://qiskit.org/documentation/apidoc/transpiler.html). Let's modify the AND function from Part 1 for the real system, with the transpile step included.
###Code
from qiskit.tools.monitor import job_monitor
# run the cell to define AND gate for real quantum system
def AND(inp1, inp2, backend, layout):
qc = QuantumCircuit(3, 1)
qc.reset(range(3))
if inp1=='1':
qc.x(0)
if inp2=='1':
qc.x(1)
qc.barrier()
qc.ccx(0, 1, 2)
qc.barrier()
qc.measure(2, 0)
qc_trans = transpile(qc, backend, initial_layout=layout, optimization_level=3)
job = execute(qc_trans, backend, shots=8192)
print(job.job_id())
job_monitor(job)
output = job.result().get_counts()
return qc_trans, output
###Output
_____no_output_____
###Markdown
When you submit jobs to quantum systems, `job_monitor` will start tracking where your submitted job is in the pipeline. Case A) Three qubits on ibmqx2 with the triangle connectivity First, examine `ibmqx2` using the widget introduced earlier. Find a group of three qubits with triangle conntection and determine your initial layout.
###Code
# run this cell for the widget
backend1
###Output
_____no_output_____
###Markdown
&#128211; Assign your choice of layout to the list variable layout1 in the cell below
###Code
# Assign your choice of the initial_layout to the variable layout1 as a list
# ex) layout1 = [0,2,4]
layout1 =
###Output
_____no_output_____
###Markdown
&#128211; Describe the reason for your choice of initial layout. Execute `AND` gate on `ibmqx2` by running the cell below.
###Code
output1_all = []
qc_trans1_all = []
prob1_all = []
worst = 1
best = 0
for input1 in ['0','1']:
for input2 in ['0','1']:
qc_trans1, output1 = AND(input1, input2, backend1, layout1)
output1_all.append(output1)
qc_trans1_all.append(qc_trans1)
prob = output1[str(int( input1=='1' and input2=='1' ))]/8192
prob1_all.append(prob)
print('\nProbability of correct answer for inputs',input1,input2)
print( '{:.2f}'.format(prob) )
print('---------------------------------')
worst = min(worst,prob)
best = max(best, prob)
print('')
print('\nThe highest of these probabilities was {:.2f}'.format(best))
print('The lowest of these probabilities was {:.2f}'.format(worst))
###Output
_____no_output_____
###Markdown
Once your job has finished running, you can easily access the results via `results = backend.retrieve_job('JOB_ID').result()`. Your job_ids will be printed out by the `AND` function defined above. You can also find the job_ids from the results under your `IQX` account. More information can be found [here](https://quantum-computing.ibm.com/docs/manage/account/ibmq). Case B) Three qubits on ibmq_athens for the linear nearest neighbor connectivity Examine `ibmq_athens` through the widget by running the cell below.
###Code
backend2
###Output
_____no_output_____
###Markdown
&#128211; Find three qubits with the linear nearest neighbor connectivity. Determine the initial layout considering the error map and assign it to the list variable layout2.
###Code
layout2 = []
###Output
_____no_output_____
###Markdown
&#128211; Describe the reason for your choice of initial layout. Execute `AND` gate on `ibmq_athens` by running the cell below.
###Code
output2_all = []
qc_trans2_all = []
prob2_all = []
worst = 1
best = 0
for input1 in ['0','1']:
for input2 in ['0','1']:
qc_trans2, output2 = AND(input1, input2, backend2, layout2)
output2_all.append(output2)
qc_trans2_all.append(qc_trans2)
prob = output2[str(int( input1=='1' and input2=='1' ))]/8192
prob2_all.append(prob)
print('\nProbability of correct answer for inputs',input1,input2)
print('{:.2f}'.format(prob) )
print('---------------------------------')
worst = min(worst,prob)
best = max(best, prob)
print('')
print('\nThe highest of these probabilities was {:.2f}'.format(best))
print('The lowest of these probabilities was {:.2f}'.format(worst))
###Output
_____no_output_____
###Markdown
Step 3. Interpret the result There are several quantities that distinguish the circuits. Chief among them is the **circuit depth**. Circuit depth is defined in detail [here](https://qiskit.org/documentation/apidoc/circuit.html) (See the Supplementary Information and click the Quantum Circuit Properties tab). Circuit depth is proportional to the number of gates in a circuit, and loosely corresponds to the runtime of the circuit on hardware. Therefore, circuit depth is an easy-to-compute metric that can be used to estimate the fidelity of an executed circuit. A second important value is the number of **nonlocal** (multi-qubit) **gates** in a circuit. On IBM Quantum systems, the only nonlocal gate that can physically be performed is the CNOT gate. Recall that CNOT gates are the most expensive gates to perform, and thus the total number of these gates also serves as a good benchmark for the accuracy of the final output. A) Circuit depth and result accuracy Running the cells below will display the four transpiled AND gate circuit diagrams with the corresponding inputs that were executed on `ibmq_athens`, together with their circuit depths and the success probability of producing the correct answer.
###Code
print('Transpiled AND gate circuit for ibmq_athens with input 0 0')
print('\nThe circuit depth : {}'.format (qc_trans2_all[0].depth()))
print('# of nonlocal gates : {}'.format (qc_trans2_all[0].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob2_all[0]) )
qc_trans2_all[0].draw()
print('Transpiled AND gate circuit for ibmq_athens with input 0 1')
print('\nThe circuit depth : {}'.format (qc_trans2_all[1].depth()))
print('# of nonlocal gates : {}'.format (qc_trans2_all[1].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob2_all[1]) )
qc_trans2_all[1].draw()
print('Transpiled AND gate circuit for ibmq_athens with input 1 0')
print('\nThe circuit depth : {}'.format (qc_trans2_all[2].depth()))
print('# of nonlocal gates : {}'.format (qc_trans2_all[2].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob2_all[2]) )
qc_trans2_all[2].draw()
print('Transpiled AND gate circuit for ibmq_athens with input 1 1')
print('\nThe circuit depth : {}'.format (qc_trans2_all[3].depth()))
print('# of nonlocal gates : {}'.format (qc_trans2_all[3].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob2_all[3]) )
qc_trans2_all[3].draw()
###Output
_____no_output_____
###Markdown
&#128211; Explain the reason for the dissimilarity of the circuits. Describe the relation between the properties of the circuit and the accuracy of the outcomes. B) Qubit connectivity and circuit depth Investigate the transpiled circuits for `ibmqx2` by running the cells below.
###Code
print('Transpiled AND gate circuit for ibmqx2 with input 0 0')
print('\nThe circuit depth : {}'.format (qc_trans1_all[0].depth()))
print('# of nonlocal gates : {}'.format (qc_trans1_all[0].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob1_all[0]) )
qc_trans1_all[0].draw()
print('Transpiled AND gate circuit for ibmqx2 with input 0 1')
print('\nThe circuit depth : {}'.format (qc_trans1_all[1].depth()))
print('# of nonlocal gates : {}'.format (qc_trans1_all[1].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob1_all[1]) )
qc_trans1_all[1].draw()
print('Transpiled AND gate circuit for ibmqx2 with input 1 0')
print('\nThe circuit depth : {}'.format (qc_trans1_all[2].depth()))
print('# of nonlocal gates : {}'.format (qc_trans1_all[2].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob1_all[2]) )
qc_trans1_all[2].draw()
print('Transpiled AND gate circuit for ibmqx2 with input 1 1')
print('\nThe circuit depth : {}'.format (qc_trans1_all[3].depth()))
print('# of nonlocal gates : {}'.format (qc_trans1_all[3].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob1_all[3]) )
qc_trans1_all[3].draw()
###Output
_____no_output_____
###Markdown
<h1 style="font-size:35px; color:black; ">Lab 1 Quantum Circuits Prerequisite- [Qiskit basics](https://qiskit.org/documentation/tutorials/circuits/1_getting_started_with_qiskit.html)- [Ch.1.2 The Atoms of Computation](https://qiskit.org/textbook/ch-states/atoms-computation.html)Other relevant materials- [Access IBM Quantum Systems](https://qiskit.org/documentation/install.html#access-ibm-quantum-systems)- [IBM Quantum Systems Configuration](https://quantum-computing.ibm.com/docs/manage/backends/configuration)- [Transpile](https://qiskit.org/documentation/apidoc/transpiler.html)- [IBM Quantum account](https://quantum-computing.ibm.com/docs/manage/account/ibmq)- [Quantum Circuits](https://qiskit.org/documentation/apidoc/circuit.html)
###Code
from qiskit import *
from qiskit.visualization import plot_histogram
import numpy as np
###Output
_____no_output_____
###Markdown
Part 1: Classical logic gates with quantum circuits<div style="background: E8E7EB; border-radius: 5px;-moz-border-radius: 5px;"> <p style="background: 800080; border-radius: 5px 5px 0px 0px; padding: 10px 0px 10px 10px; font-size:18px; color:white; ">Goal <p style=" padding: 0px 0px 10px 10px; font-size:16px;">Create quantum circuit functions that can compute the XOR, AND, NAND and OR gates using the NOT gate (expressed as x in Qiskit), the CNOT gate (expressed as cx in Qiskit) and the Toffoli gate (expressed as ccx in Qiskit) .An implementation of the `NOT` gate is provided as an example.
###Code
def NOT(inp):
"""An NOT gate.
Parameters:
inp (str): Input, encoded in qubit 0.
Returns:
QuantumCircuit: Output NOT circuit.
str: Output value measured from qubit 0.
"""
qc = QuantumCircuit(1, 1) # A quantum circuit with a single qubit and a single classical bit
qc.reset(0)
# We encode '0' as the qubit state |0⟩, and '1' as |1⟩
# Since the qubit is initially |0⟩, we don't need to do anything for an input of '0'
# For an input of '1', we do an x to rotate the |0⟩ to |1⟩
if inp=='1':
qc.x(0)
# barrier between input state and gate operation
qc.barrier()
# Now we've encoded the input, we can do a NOT on it using x
qc.x(0)
#barrier between gate operation and measurement
qc.barrier()
# Finally, we extract the |0⟩/|1⟩ output of the qubit and encode it in the bit c[0]
qc.measure(0,0)
qc.draw('mpl')
# We'll run the program on a simulator
backend = Aer.get_backend('qasm_simulator')
# Since the output will be deterministic, we can use just a single shot to get it
job = backend.run(qc, shots=1, memory=True)
output = job.result().get_memory()[0]
return qc, output
## Test the function
for inp in ['0', '1']:
qc, out = NOT(inp)
print('NOT with input',inp,'gives output',out)
display(qc.draw())
print('\n')
###Output
NOT with input 0 gives output 1
###Markdown
&128211; XOR gateTakes two binary strings as input and gives one as output.The output is '0' when the inputs are equal and '1' otherwise.
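The key fact to use here is that a CNOT flips its target qubit exactly when the control qubit is |1⟩, which is the XOR truth table written onto the target. Below is a minimal illustration of that behaviour on the simulator (separate from the exercise cell that follows):
```python
from qiskit import QuantumCircuit, Aer

backend = Aer.get_backend('qasm_simulator')
for a in '01':
    for b in '01':
        qc = QuantumCircuit(2, 1)
        if a == '1':
            qc.x(0)
        if b == '1':
            qc.x(1)
        qc.cx(0, 1)  # target qubit 1 becomes a XOR b
        qc.measure(1, 0)
        out = backend.run(qc, shots=1, memory=True).result().get_memory()[0]
        print(a, 'XOR', b, '->', out)
```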
###Code
def XOR(inp1,inp2):
"""An XOR gate.
Parameters:
inpt1 (str): Input 1, encoded in qubit 0.
inpt2 (str): Input 2, encoded in qubit 1.
Returns:
QuantumCircuit: Output XOR circuit.
str: Output value measured from qubit 1.
"""
qc = QuantumCircuit(2, 1)
qc.reset(range(2))
if inp1=='1':
qc.x(0)
if inp2=='1':
qc.x(1)
# barrier between input state and gate operation
qc.barrier()
# this is where your program for quantum XOR gate goes
# barrier between input state and gate operation
qc.barrier()
qc.measure(1,0) # output from qubit 1 is measured
#We'll run the program on a simulator
backend = Aer.get_backend('qasm_simulator')
#Since the output will be deterministic, we can use just a single shot to get it
job = backend.run(qc, shots=1, memory=True)
output = job.result().get_memory()[0]
return qc, output
## Test the function
for inp1 in ['0', '1']:
for inp2 in ['0', '1']:
qc, output = XOR(inp1, inp2)
print('XOR with inputs',inp1,inp2,'gives output',output)
display(qc.draw())
print('\n')
###Output
_____no_output_____
###Markdown
&128211; AND gateTakes two binary strings as input and gives one as output.The output is `'1'` only when both the inputs are `'1'`.
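Likewise, the Toffoli gate flips its target only when both controls are |1⟩, so a target that starts in |0⟩ ends up holding the AND of the two controls. A small statevector check of that action (an illustration, not the exercise cell):
```python
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

for a in '01':
    for b in '01':
        qc = QuantumCircuit(3)
        if a == '1':
            qc.x(0)
        if b == '1':
            qc.x(1)
        qc.ccx(0, 1, 2)  # qubit 2 (initially |0>) picks up a AND b
        # keys of the dictionary are bit strings ordered q2 q1 q0
        print('inputs', a, b, '->', Statevector.from_instruction(qc).probabilities_dict())
```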
###Code
def AND(inp1,inp2):
"""An AND gate.
Parameters:
inpt1 (str): Input 1, encoded in qubit 0.
inpt2 (str): Input 2, encoded in qubit 1.
Returns:
QuantumCircuit: Output AND circuit.
str: Output value measured from qubit 2.
"""
qc = QuantumCircuit(3, 1)
qc.reset(range(2))
if inp1=='1':
qc.x(0)
if inp2=='1':
qc.x(1)
qc.barrier()
# this is where your program for quantum AND gate goes
qc.barrier()
qc.measure(2, 0) # output from qubit 2 is measured
# We'll run the program on a simulator
backend = Aer.get_backend('qasm_simulator')
# Since the output will be deterministic, we can use just a single shot to get it
job = backend.run(qc, shots=1, memory=True)
output = job.result().get_memory()[0]
return qc, output
## Test the function
for inp1 in ['0', '1']:
for inp2 in ['0', '1']:
qc, output = AND(inp1, inp2)
print('AND with inputs',inp1,inp2,'gives output',output)
display(qc.draw())
print('\n')
###Output
_____no_output_____
###Markdown
&128211; NAND gateTakes two binary strings as input and gives one as output.The output is `'0'` only when both the inputs are `'1'`.
###Code
def NAND(inp1,inp2):
"""An NAND gate.
Parameters:
inpt1 (str): Input 1, encoded in qubit 0.
inpt2 (str): Input 2, encoded in qubit 1.
Returns:
QuantumCircuit: Output NAND circuit.
str: Output value measured from qubit 2.
"""
qc = QuantumCircuit(3, 1)
qc.reset(range(3))
if inp1=='1':
qc.x(0)
if inp2=='1':
qc.x(1)
qc.barrier()
# this is where your program for quantum NAND gate goes
qc.barrier()
qc.measure(2, 0) # output from qubit 2 is measured
# We'll run the program on a simulator
backend = Aer.get_backend('qasm_simulator')
# Since the output will be deterministic, we can use just a single shot to get it
job = backend.run(qc,shots=1,memory=True)
output = job.result().get_memory()[0]
return qc, output
## Test the function
for inp1 in ['0', '1']:
for inp2 in ['0', '1']:
qc, output = NAND(inp1, inp2)
print('NAND with inputs',inp1,inp2,'gives output',output)
display(qc.draw())
print('\n')
###Output
_____no_output_____
###Markdown
&128211; OR gateTakes two binary strings as input and gives one as output.The output is '1' if either input is '1'.
###Code
def OR(inp1,inp2):
"""An OR gate.
Parameters:
inpt1 (str): Input 1, encoded in qubit 0.
inpt2 (str): Input 2, encoded in qubit 1.
Returns:
QuantumCircuit: Output OR circuit.
str: Output value measured from qubit 2.
"""
qc = QuantumCircuit(3, 1)
qc.reset(range(3))
if inp1=='1':
qc.x(0)
if inp2=='1':
qc.x(1)
qc.barrier()
# this is where your program for quantum OR gate goes
qc.barrier()
qc.measure(2, 0) # output from qubit 2 is measured
# We'll run the program on a simulator
backend = Aer.get_backend('qasm_simulator')
# Since the output will be deterministic, we can use just a single shot to get it
job = backend.run(qc,shots=1,memory=True)
output = job.result().get_memory()[0]
return qc, output
## Test the function
for inp1 in ['0', '1']:
for inp2 in ['0', '1']:
qc, output = OR(inp1, inp2)
print('OR with inputs',inp1,inp2,'gives output',output)
display(qc.draw())
print('\n')
###Output
_____no_output_____
###Markdown
Part 2: AND gate on Quantum Computer<div style="background: E8E7EB; border-radius: 5px;-moz-border-radius: 5px;"> <p style="background: 800080; border-radius: 5px 5px 0px 0px; padding: 10px 0px 10px 10px; font-size:18px; color:white; ">Goal <p style=" padding: 0px 0px 10px 10px; font-size:16px;">Execute AND gate on two quantum systems and learn how the different circuit properties affect the result.In Part 1 you made an `AND` gate from quantum gates, and executed it on the simulator. Here in Part 2 you will do it again, but instead run the circuits on a real quantum computer. When using a real quantum system, one thing you should keep in mind is that present-day quantum computers are not fault tolerant; they are noisy.The 'noise' in a quantum system is the collective effect of all the things that should not happen, but nevertheless do. Noise results in outputs that are not always what we would expect. There is noise associated with all processes in a quantum circuit: preparing the initial state, applying gates, and qubit measurement. For the gates, noise levels can vary between different gates and between different qubits. `cx` gates are typically more noisy than any single qubit gate.Here we will use the quantum systems from the IBM Quantum Experience. If you do not have access, you can get it [here](https://qiskit.org/documentation/install.htmlaccess-ibm-quantum-systems).Now that you are ready to use the real quantum computer, let's begin. Step 1. Choosing a device First load the account from the credentials saved on disk by running the following cell:
###Code
IBMQ.load_account()
###Output
_____no_output_____
###Markdown
After your account is loaded, you can see the list of providers that you have access to by running the cell below. Each provider offers different systems for use. For open users, there is typically only one provider `ibm-q/open/main`:
###Code
IBMQ.providers()
###Output
_____no_output_____
###Markdown
Let us grab the provider using `get_provider`. The command, `provider.backends( )` shows you the list of backends that are available to you from the selected provider.
###Code
provider = IBMQ.get_provider('ibm-q')
provider.backends()
###Output
_____no_output_____
###Markdown
Among these options, you may pick one of the systems to run your circuits on. All except the `ibmq_qasm_simulator` are real quantum computers that you can use. The differences among these systems reside in the number of qubits, their connectivity, and the system error rates. Upon executing the following cell you will be presented with a widget that displays all of the information about your choice of the backend. You can obtain the information that you need by clicking on the tabs. For example, backend status, number of qubits and the connectivity are under the `configuration` tab, whereas the `Error Map` tab will reveal the latest noise information for the system.
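If you prefer to read the same information programmatically rather than from the widget tabs, the backend's configuration and properties objects expose it directly. A rough sketch (it reuses `provider` from an earlier cell; the attribute and method names are those of the IBMQ provider used in this lab):
```python
b = provider.get_backend('ibmq_16_melbourne')  # same system as the widget cell below
config = b.configuration()
props = b.properties()

print('name         :', config.backend_name)
print('n_qubits     :', config.n_qubits)
print('coupling map :', config.coupling_map)
# Example calibration figures; these change at every recalibration
print('readout error of qubit 0 :', props.readout_error(0))
print('cx error on first pair   :', props.gate_error('cx', config.coupling_map[0]))
```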
###Code
import qiskit.tools.jupyter
backend_ex = provider.get_backend('ibmq_16_melbourne')
backend_ex
###Output
_____no_output_____
###Markdown
For our AND gate circuit, we need a backend with three or more qubits, which is true for all the real systems except for `ibmq_armonk`. Below is an example of how to filter backends, where we filter for number of qubits, and remove simulators:
###Code
backends = provider.backends(filters = lambda x:x.configuration().n_qubits >= 2 and not x.configuration().simulator
and x.status().operational==True)
backends
###Output
_____no_output_____
###Markdown
One convenient way to choose a system is using the `least_busy` function to get the backend with the lowest number of jobs in queue. The downside is that the result might have relatively poor accuracy because, not surprisingly, the lowest error rate systems are the most popular.
###Code
from qiskit.providers.ibmq import least_busy
backend = least_busy(provider.backends(filters=lambda x: x.configuration().n_qubits >= 2 and
not x.configuration().simulator and x.status().operational==True))
backend
###Output
_____no_output_____
###Markdown
Real quantum computers need to be recalibrated regularly, and the fidelity of a specific qubit or gate can change over time. Therefore, which system would produce results with less error can vary. `ibmq_athens` tends to show relatively low error rates.In this exercise, we select two systems: `ibmq_athens` for its low error rates, and `ibmqx2` for its additional connectivity, in particular its triangular connectivity, that will be useful for circuits with Toffoli gates.
###Code
# run this cell
backend1 = provider.get_backend('ibmqx2')
backend2 = provider.get_backend('ibmq_athens')
###Output
_____no_output_____
###Markdown
Step 2. Define AND function for a real deviceWe now define the AND function. We choose 8192 as the number of shots, the maximum number of shots for open IBM systems, to reduce the variance in the final result. Related information is well explained [here](https://quantum-computing.ibm.com/docs/manage/backends/configuration). Qiskit Transpiler It is important to know that when running a circuit on a real quantum computer, circuits typically need to be transpiled for the backend that you select so that the circuit contains only those gates that the quantum computer can actually perform. Primarily this involves the addition of swap gates so that two-qubit gates in the circuit map to those pairs of qubits on the device that can actually perform these gates. The following cell shows the AND gate represented as a Toffoli gate decomposed into single- and two-qubit gates, which are the only types of gate that can be run on IBM hardware. Provided that CNOT gates can be performed between all three qubits, a triangle topology, no other gates are required.
###Code
qc_and = QuantumCircuit(3)
qc_and.ccx(0,1,2)
print('AND gate')
display(qc_and.draw())
print('\n\nTranspiled AND gate with all the required connectivity')
qc_and.decompose().draw()
###Output
AND gate
###Markdown
In addition, there are often optimizations that the transpiler can perform that reduce the overall gate count, and thus total length of the input circuits. Note that the addition of swaps to match the device topology, and optimizations for reducing the length of a circuit are at odds with each other. In what follows we will make use of `initial_layout` that allows us to pick the qubits on a device used for the computation and `optimization_level`, an argument that allows selecting from internal defaults for circuit swap mapping and optimization methods to perform.You can learn more about transpile function in depth [here](https://qiskit.org/documentation/apidoc/transpiler.html). Let's modify AND function in Part1 properly for the real system with the transpile step included.
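To get a feel for what these two arguments do before they are wrapped into the `AND` function, the sketch below transpiles the bare Toffoli circuit `qc_and` from the cell above against `ibmq_athens` at each optimization level and prints the resulting depth and CNOT count (the exact numbers can vary between transpiler runs and versions):
```python
for level in range(4):
    tqc = transpile(qc_and, backend2, optimization_level=level, seed_transpiler=42)
    ops = tqc.count_ops()
    print('optimization_level', level,
          '| depth', tqc.depth(),
          '| cx count', ops.get('cx', 0))
```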
###Code
from qiskit.tools.monitor import job_monitor
# run the cell to define AND gate for real quantum system
def AND(inp1, inp2, backend, layout):
qc = QuantumCircuit(3, 1)
qc.reset(range(3))
if inp1=='1':
qc.x(0)
if inp2=='1':
qc.x(1)
qc.barrier()
qc.ccx(0, 1, 2)
qc.barrier()
qc.measure(2, 0)
qc_trans = transpile(qc, backend, initial_layout=layout, optimization_level=3)
job = backend.run(qc_trans, shots=8192)
print(job.job_id())
job_monitor(job)
output = job.result().get_counts()
return qc_trans, output
###Output
_____no_output_____
###Markdown
When you submit jobs to quantum systems, `job_monitor` will start tracking where your submitted job is in the pipeline. Case A) Three qubits on ibmqx2 with the triangle connectivity First, examine `ibmqx2` using the widget introduced earlier. Find a group of three qubits with triangle connection and determine your initial layout.
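Besides eyeballing the widget, you can search the coupling map directly for a set of three qubits in which every pair is connected. A small helper sketch (it reuses `backend1` from the earlier cell):
```python
from itertools import combinations

cfg = backend1.configuration()
edges = {tuple(sorted(edge)) for edge in cfg.coupling_map}

triangles = [trio for trio in combinations(range(cfg.n_qubits), 3)
             if all(tuple(sorted(pair)) in edges for pair in combinations(trio, 2))]
print('Fully connected triples of qubits:', triangles)
```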
###Code
# run this cell for the widget
backend1
###Output
_____no_output_____
###Markdown
&128211; Assign your choice of layout to the list variable layout1 in the cell below
###Code
# Assign your choice of the initial_layout to the variable layout1 as a list
# ex) layout1 = [0,2,4]
layout1 = []
###Output
_____no_output_____
###Markdown
&128211; Describe the reason for your choice of initial layout. Execute `AND` gate on `ibmqx2` by running the cell below.
###Code
output1_all = []
qc_trans1_all = []
prob1_all = []
worst = 1
best = 0
for input1 in ['0','1']:
for input2 in ['0','1']:
qc_trans1, output1 = AND(input1, input2, backend1, layout1)
output1_all.append(output1)
qc_trans1_all.append(qc_trans1)
prob = output1[str(int( input1=='1' and input2=='1' ))]/8192
prob1_all.append(prob)
print('\nProbability of correct answer for inputs',input1,input2)
print( '{:.2f}'.format(prob) )
print('---------------------------------')
worst = min(worst,prob)
best = max(best, prob)
print('')
print('\nThe highest of these probabilities was {:.2f}'.format(best))
print('The lowest of these probabilities was {:.2f}'.format(worst))
###Output
_____no_output_____
###Markdown
Once your job has finished running, you can easily access the results via:```pythonresults = backend.retrieve_job('JOB_ID').result()```Your job_ids will be printed out through the `AND` function defined above. You can also find the job_ids from the results under your `IQX` account. More information can be found [here](https://quantum-computing.ibm.com/docs/manage/account/ibmq). Case B) Three qubits on ibmq_athens for the linear nearest neighbor connectivity Examine `ibmq_athens` through the widget by running the cell below.
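For example, to pull one of these runs back later and plot its counts (the job id below is only a placeholder to replace with one of the ids printed by `AND`):
```python
# 'PASTE_JOB_ID_HERE' is a placeholder, not a real job id
old_job = backend1.retrieve_job('PASTE_JOB_ID_HERE')
plot_histogram(old_job.result().get_counts())
```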
###Code
backend2
###Output
_____no_output_____
###Markdown
&128211; Find three qubits with the linear nearest neighbor connectivity. Determine the initial layout considering the error map and assign it to the list variable layout2.
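One way to use the error map programmatically is to print the CNOT error of every edge of `ibmq_athens` and then pick a chain of three qubits whose two links are both low-error; a rough sketch:
```python
props2 = backend2.properties()
for edge in backend2.configuration().coupling_map:
    print('cx on qubits {} : error {:.4f}'.format(edge, props2.gate_error('cx', edge)))
# If, say, the 1-2 and 2-3 links both look good, layout2 = [1, 2, 3] would be a reasonable pick
```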
###Code
layout2 = []
###Output
_____no_output_____
###Markdown
&128211; Describe the reason for your choice of initial layout. Execute `AND` gate on `ibmq_athens` by running the cell below.
###Code
output2_all = []
qc_trans2_all = []
prob2_all = []
worst = 1
best = 0
for input1 in ['0','1']:
for input2 in ['0','1']:
qc_trans2, output2 = AND(input1, input2, backend2, layout2)
output2_all.append(output2)
qc_trans2_all.append(qc_trans2)
prob = output2[str(int( input1=='1' and input2=='1' ))]/8192
prob2_all.append(prob)
print('\nProbability of correct answer for inputs',input1,input2)
print('{:.2f}'.format(prob) )
print('---------------------------------')
worst = min(worst,prob)
best = max(best, prob)
print('')
print('\nThe highest of these probabilities was {:.2f}'.format(best))
print('The lowest of these probabilities was {:.2f}'.format(worst))
###Output
_____no_output_____
###Markdown
Step 3. Interpret the result There are several quantities that distinguish the circuits. Chief among them is the **circuit depth**. Circuit depth is defined in detail [here](https://qiskit.org/documentation/apidoc/circuit.htmlsupplementary-information) (See the Supplementary Information and click the Quantum Circuit Properties tab). Circuit depth is proportional to the number of gates in a circuit, and loosely corresponds to the runtime of the circuit on hardware. Therefore, circuit depth is an easy to compute metric that can be used to estimate the fidelity of an executed circuit.A second important value is the number of **nonlocal** (multi-qubit) **gates** in a circuit. On IBM Quantum systems, the only nonlocal gate that can physically be performed is the CNOT gate. Recall that CNOT gates are the most expensive gates to perform, and thus the total number of these gates also serves as a good benchmark for the accuracy of the final output. A) Circuit depth and result accuracy Running the cells below will display the four transpiled AND gate circuit diagrams with the corresponding inputs that executed on `ibmq_athens` and their circuit depths with the success probability for producing correct answer.
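As a quick illustration of both quantities, compare the Toffoli-based circuit `qc_and` from Step 2 with its decomposition into one- and two-qubit gates:
```python
for name, circ in [('ccx as a single gate', qc_and),
                   ('ccx decomposed      ', qc_and.decompose())]:
    print(name, '| depth', circ.depth(),
          '| nonlocal gates', circ.num_nonlocal_gates())
```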
###Code
print('Transpiled AND gate circuit for ibmq_athens with input 0 0')
print('\nThe circuit depth : {}'.format (qc_trans2_all[0].depth()))
print('# of nonlocal gates : {}'.format (qc_trans2_all[0].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob2_all[0]) )
qc_trans2_all[0].draw()
print('Transpiled AND gate circuit for ibmq_athens with input 0 1')
print('\nThe circuit depth : {}'.format (qc_trans2_all[1].depth()))
print('# of nonlocal gates : {}'.format (qc_trans2_all[1].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob2_all[1]) )
qc_trans2_all[1].draw()
print('Transpiled AND gate circuit for ibmq_athens with input 1 0')
print('\nThe circuit depth : {}'.format (qc_trans2_all[2].depth()))
print('# of nonlocal gates : {}'.format (qc_trans2_all[2].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob2_all[2]) )
qc_trans2_all[2].draw()
print('Transpiled AND gate circuit for ibmq_athens with input 1 1')
print('\nThe circuit depth : {}'.format (qc_trans2_all[3].depth()))
print('# of nonlocal gates : {}'.format (qc_trans2_all[3].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob2_all[3]) )
qc_trans2_all[3].draw()
###Output
_____no_output_____
###Markdown
&128211; Explain reason for the dissimilarity of the circuits. Describe the relations between the property of the circuit and the accuracy of the outcomes. B) Qubit connectivity and circuit depth Investigate the transpiled circuits for `ibmqx2` by running the cells below.
###Code
print('Transpiled AND gate circuit for ibmqx2 with input 0 0')
print('\nThe circuit depth : {}'.format (qc_trans1_all[0].depth()))
print('# of nonlocal gates : {}'.format (qc_trans1_all[0].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob1_all[0]) )
qc_trans1_all[0].draw()
print('Transpiled AND gate circuit for ibmqx2 with input 0 1')
print('\nThe circuit depth : {}'.format (qc_trans1_all[1].depth()))
print('# of nonlocal gates : {}'.format (qc_trans1_all[1].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob1_all[1]) )
qc_trans1_all[1].draw()
print('Transpiled AND gate circuit for ibmqx2 with input 1 0')
print('\nThe circuit depth : {}'.format (qc_trans1_all[2].depth()))
print('# of nonlocal gates : {}'.format (qc_trans1_all[2].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob1_all[2]) )
qc_trans1_all[2].draw()
print('Transpiled AND gate circuit for ibmqx2 with input 1 1')
print('\nThe circuit depth : {}'.format (qc_trans1_all[3].depth()))
print('# of nonlocal gates : {}'.format (qc_trans1_all[3].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob1_all[3]) )
qc_trans1_all[3].draw()
###Output
_____no_output_____
###Markdown
<h1 style="font-size:35px; color:black; ">Lab 1 Quantum Circuits Prerequisite- [Qiskit basics](https://qiskit.org/documentation/tutorials/circuits/1_getting_started_with_qiskit.html)- [Ch.1.2 The Atoms of Computation](https://qiskit.org/textbook/ch-states/atoms-computation.html)Other relevant materials- [Access IBM Quantum Systems](https://qiskit.org/documentation/install.htmlaccess-ibm-quantum-systems)- [IBM Quantum Systems Configuration](https://quantum-computing.ibm.com/docs/manage/backends/configuration)- [Transpile](https://qiskit.org/documentation/apidoc/transpiler.html)- [IBM Quantum account](https://quantum-computing.ibm.com/docs/manage/account/ibmq)- [Quantum Circuits](https://qiskit.org/documentation/apidoc/circuit.html)
###Code
from qiskit import *
from qiskit.visualization import plot_histogram
import numpy as np
###Output
_____no_output_____
###Markdown
Part 1: Classical logic gates with quantum circuits<div style="background: E8E7EB; border-radius: 5px;-moz-border-radius: 5px;"> <p style="background: 800080; border-radius: 5px 5px 0px 0px; padding: 10px 0px 10px 10px; font-size:18px; color:white; ">Goal <p style=" padding: 0px 0px 10px 10px; font-size:16px;">Create quantum circuit functions that can compute the XOR, AND, NAND and OR gates using the NOT gate (expressed as x in Qiskit), the CNOT gate (expressed as cx in Qiskit) and the Toffoli gate (expressed as ccx in Qiskit) .An implementation of the `NOT` gate is provided as an example.
###Code
def NOT(inp):
"""An NOT gate.
Parameters:
inp (str): Input, encoded in qubit 0.
Returns:
QuantumCircuit: Output NOT circuit.
str: Output value measured from qubit 0.
"""
qc = QuantumCircuit(1, 1) # A quantum circuit with a single qubit and a single classical bit
qc.reset(0)
# We encode '0' as the qubit state |0⟩, and '1' as |1⟩
# Since the qubit is initially |0⟩, we don't need to do anything for an input of '0'
# For an input of '1', we do an x to rotate the |0⟩ to |1⟩
if inp=='1':
qc.x(0)
# barrier between input state and gate operation
qc.barrier()
# Now we've encoded the input, we can do a NOT on it using x
qc.x(0)
#barrier between gate operation and measurement
qc.barrier()
# Finally, we extract the |0⟩/|1⟩ output of the qubit and encode it in the bit c[0]
qc.measure(0,0)
qc.draw('mpl')
# We'll run the program on a simulator
backend = Aer.get_backend('qasm_simulator')
# Since the output will be deterministic, we can use just a single shot to get it
job = execute(qc, backend, shots=1, memory=True)
output = job.result().get_memory()[0]
return qc, output
## Test the function
for inp in ['0', '1']:
qc, out = NOT(inp)
print('NOT with input',inp,'gives output',out)
display(qc.draw())
print('\n')
###Output
NOT with input 0 gives output 1
###Markdown
&128211; XOR gateTakes two binary strings as input and gives one as output.The output is '0' when the inputs are equal and '1' otherwise.
###Code
def XOR(inp1,inp2):
"""An XOR gate.
Parameters:
inpt1 (str): Input 1, encoded in qubit 0.
inpt2 (str): Input 2, encoded in qubit 1.
Returns:
QuantumCircuit: Output XOR circuit.
str: Output value measured from qubit 1.
"""
qc = QuantumCircuit(2, 1)
qc.reset(range(2))
if inp1=='1':
qc.x(0)
if inp2=='1':
qc.x(1)
# barrier between input state and gate operation
qc.barrier()
# this is where your program for quantum XOR gate goes
# barrier between input state and gate operation
qc.barrier()
qc.measure(1,0) # output from qubit 1 is measured
#We'll run the program on a simulator
backend = Aer.get_backend('qasm_simulator')
#Since the output will be deterministic, we can use just a single shot to get it
job = execute(qc, backend, shots=1, memory=True)
output = job.result().get_memory()[0]
return qc, output
## Test the function
for inp1 in ['0', '1']:
for inp2 in ['0', '1']:
qc, output = XOR(inp1, inp2)
print('XOR with inputs',inp1,inp2,'gives output',output)
display(qc.draw())
print('\n')
###Output
_____no_output_____
###Markdown
&128211; AND gateTakes two binary strings as input and gives one as output.The output is `'1'` only when both the inputs are `'1'`.
###Code
def AND(inp1,inp2):
"""An AND gate.
Parameters:
inpt1 (str): Input 1, encoded in qubit 0.
inpt2 (str): Input 2, encoded in qubit 1.
Returns:
QuantumCircuit: Output AND circuit.
str: Output value measured from qubit 2.
"""
qc = QuantumCircuit(3, 1)
qc.reset(range(2))
if inp1=='1':
qc.x(0)
if inp2=='1':
qc.x(1)
qc.barrier()
# this is where your program for quantum AND gate goes
qc.barrier()
qc.measure(2, 0) # output from qubit 2 is measured
# We'll run the program on a simulator
backend = Aer.get_backend('qasm_simulator')
# Since the output will be deterministic, we can use just a single shot to get it
job = execute(qc, backend, shots=1, memory=True)
output = job.result().get_memory()[0]
return qc, output
## Test the function
for inp1 in ['0', '1']:
for inp2 in ['0', '1']:
qc, output = AND(inp1, inp2)
print('AND with inputs',inp1,inp2,'gives output',output)
display(qc.draw())
print('\n')
###Output
_____no_output_____
###Markdown
&128211; NAND gateTakes two binary strings as input and gives one as output.The output is `'0'` only when both the inputs are `'1'`.
###Code
def NAND(inp1,inp2):
"""An NAND gate.
Parameters:
inpt1 (str): Input 1, encoded in qubit 0.
inpt2 (str): Input 2, encoded in qubit 1.
Returns:
QuantumCircuit: Output NAND circuit.
str: Output value measured from qubit 2.
"""
qc = QuantumCircuit(3, 1)
qc.reset(range(3))
if inp1=='1':
qc.x(0)
if inp2=='1':
qc.x(1)
qc.barrier()
# this is where your program for quantum NAND gate goes
qc.barrier()
qc.measure(2, 0) # output from qubit 2 is measured
# We'll run the program on a simulator
backend = Aer.get_backend('qasm_simulator')
# Since the output will be deterministic, we can use just a single shot to get it
job = execute(qc,backend,shots=1,memory=True)
output = job.result().get_memory()[0]
return qc, output
## Test the function
for inp1 in ['0', '1']:
for inp2 in ['0', '1']:
qc, output = NAND(inp1, inp2)
print('NAND with inputs',inp1,inp2,'gives output',output)
display(qc.draw())
print('\n')
###Output
_____no_output_____
###Markdown
&128211; OR gateTakes two binary strings as input and gives one as output.The output is '1' if either input is '1'.
###Code
def OR(inp1,inp2):
"""An OR gate.
Parameters:
inpt1 (str): Input 1, encoded in qubit 0.
inpt2 (str): Input 2, encoded in qubit 1.
Returns:
QuantumCircuit: Output OR circuit.
str: Output value measured from qubit 2.
"""
qc = QuantumCircuit(3, 1)
qc.reset(range(3))
if inp1=='1':
qc.x(0)
if inp2=='1':
qc.x(1)
qc.barrier()
# this is where your program for quantum OR gate goes
qc.barrier()
qc.measure(2, 0) # output from qubit 2 is measured
# We'll run the program on a simulator
backend = Aer.get_backend('qasm_simulator')
# Since the output will be deterministic, we can use just a single shot to get it
job = execute(qc,backend,shots=1,memory=True)
output = job.result().get_memory()[0]
return qc, output
## Test the function
for inp1 in ['0', '1']:
for inp2 in ['0', '1']:
qc, output = OR(inp1, inp2)
print('OR with inputs',inp1,inp2,'gives output',output)
display(qc.draw())
print('\n')
###Output
_____no_output_____
###Markdown
Part 2: AND gate on Quantum Computer<div style="background: E8E7EB; border-radius: 5px;-moz-border-radius: 5px;"> <p style="background: 800080; border-radius: 5px 5px 0px 0px; padding: 10px 0px 10px 10px; font-size:18px; color:white; ">Goal <p style=" padding: 0px 0px 10px 10px; font-size:16px;">Execute AND gate on two quantum systems and learn how the different circuit properties affect the result.In Part 1 you made an `AND` gate from quantum gates, and executed it on the simulator. Here in Part 2 you will do it again, but instead run the circuits on a real quantum computer. When using a real quantum system, one thing you should keep in mind is that present-day quantum computers are not fault tolerant; they are noisy.The 'noise' in a quantum system is the collective effect of all the things that should not happen, but nevertheless do. Noise results in outputs that are not always what we would expect. There is noise associated with all processes in a quantum circuit: preparing the initial state, applying gates, and qubit measurement. For the gates, noise levels can vary between different gates and between different qubits. `cx` gates are typically more noisy than any single qubit gate.Here we will use the quantum systems from the IBM Quantum Experience. If you do not have access, you can get it [here](https://qiskit.org/documentation/install.htmlaccess-ibm-quantum-systems).Now that you are ready to use the real quantum computer, let's begin. Step 1. Choosing a device First load the account from the credentials saved on disk by running the following cell:
###Code
IBMQ.load_account()
###Output
_____no_output_____
###Markdown
After your account is loaded, you can see the list of providers that you have access to by running the cell below. Each provider offers different systems for use. For open users, there is typically only one provider `ibm-q/open/main`:
###Code
IBMQ.providers()
###Output
_____no_output_____
###Markdown
Let us grab the provider using `get_provider`. The command, `provider.backends( )` shows you the list of backends that are available to you from the selected provider.
###Code
provider = IBMQ.get_provider('ibm-q')
provider.backends()
###Output
_____no_output_____
###Markdown
Among these options, you may pick one of the systems to run your circuits on. All except the `ibmq_qasm_simulator` are real quantum computers that you can use. The differences among these systems reside in the number of qubits, their connectivity, and the system error rates. Upon executing the following cell you will be presented with a widget that displays all of the information about your choice of the backend. You can obtain the information that you need by clicking on the tabs. For example, backend status, number of qubits and the connectivity are under the `configuration` tab, whereas the `Error Map` tab will reveal the latest noise information for the system.
###Code
import qiskit.tools.jupyter
backend_ex = provider.get_backend('ibmq_16_melbourne')
backend_ex
###Output
_____no_output_____
###Markdown
For our AND gate circuit, we need a backend with three or more qubits, which is true for all the real systems except for `ibmq_armonk`. Below is an example of how to filter backends, where we filter for number of qubits, and remove simulators:
###Code
backends = provider.backends(filters = lambda x:x.configuration().n_qubits >= 2 and not x.configuration().simulator
and x.status().operational==True)
backends
###Output
_____no_output_____
###Markdown
One convenient way to choose a system is using the `least_busy` function to get the backend with the lowest number of jobs in queue. The downside is that the result might have relatively poor accuracy because, not surprisingly, the lowest error rate systems are the most popular.
###Code
from qiskit.providers.ibmq import least_busy
backend = least_busy(provider.backends(filters=lambda x: x.configuration().n_qubits >= 2 and
not x.configuration().simulator and x.status().operational==True))
backend
###Output
_____no_output_____
###Markdown
Real quantum computers need to be recalibrated regularly, and the fidelity of a specific qubit or gate can change over time. Therefore, which system would produce results with less error can vary. `ibmq_athens` tends to show relatively low error rates.In this exercise, we select two systems: `ibmq_athens` for its low error rates, and `ibmqx2` for its additional connectivity, in particular its triangular connectivity, that will be useful for circuits with Toffoli gates.
###Code
# run this cell
backend1 = provider.get_backend('ibmqx2')
backend2 = provider.get_backend('ibmq_athens')
###Output
_____no_output_____
###Markdown
Step 2. Define AND function for a real deviceWe now define the AND function. We choose 8192 as the number of shots, the maximum number of shots for open IBM systems, to reduce the variance in the final result. Related information is well explained [here](https://quantum-computing.ibm.com/docs/manage/backends/configuration). Qiskit Transpiler It is important to know that when running a circuit on a real quantum computer, circuits typically need to be transpiled for the backend that you select so that the circuit contains only those gates that the quantum computer can actually perform. Primarily this involves the addition of swap gates so that two-qubit gates in the circuit map to those pairs of qubits on the device that can actually perform these gates. The following cell shows the AND gate represented as a Toffoli gate decomposed into single- and two-qubit gates, which are the only types of gate that can be run on IBM hardware. Provided that CNOT gates can be performed between all three qubits, a triangle topology, no other gates are required.
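To see why a large shot count reduces the variance, note that a probability estimated from counts carries a statistical uncertainty of roughly sqrt(p(1-p)/shots); a quick back-of-the-envelope check:
```python
import math

shots = 8192
for p in (0.60, 0.80, 0.95):
    sigma = math.sqrt(p * (1 - p) / shots)
    print('true p = {:.2f} -> statistical uncertainty of the estimate ~ {:.3f}'.format(p, sigma))
```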
###Code
qc_and = QuantumCircuit(3)
qc_and.ccx(0,1,2)
print('AND gate')
display(qc_and.draw())
print('\n\nTranspiled AND gate with all the required connectivity')
qc_and.decompose().draw()
###Output
AND gate
###Markdown
In addition, there are often optimizations that the transpiler can perform that reduce the overall gate count, and thus total length of the input circuits. Note that the addition of swaps to match the device topology, and optimizations for reducing the length of a circuit are at odds with each other. In what follows we will make use of `initial_layout` that allows us to pick the qubits on a device used for the computation and `optimization_level`, an argument that allows selecting from internal defaults for circuit swap mapping and optimization methods to perform.You can learn more about transpile function in depth [here](https://qiskit.org/documentation/apidoc/transpiler.html). Let's modify AND function in Part1 properly for the real system with the transpile step included.
###Code
from qiskit.tools.monitor import job_monitor
# run the cell to define AND gate for real quantum system
def AND(inp1, inp2, backend, layout):
qc = QuantumCircuit(3, 1)
qc.reset(range(3))
if inp1=='1':
qc.x(0)
if inp2=='1':
qc.x(1)
qc.barrier()
qc.ccx(0, 1, 2)
qc.barrier()
qc.measure(2, 0)
qc_trans = transpile(qc, backend, initial_layout=layout, optimization_level=3)
job = execute(qc_trans, backend, shots=8192)
print(job.job_id())
job_monitor(job)
output = job.result().get_counts()
return qc_trans, output
###Output
_____no_output_____
###Markdown
When you submit jobs to quantum systems, `job_monitor` will start tracking where your submitted job is in the pipeline. Case A) Three qubits on ibmqx2 with the triangle connectivity First, examine `ibmqx2` using the widget introduced earlier. Find a group of three qubits with triangle connection and determine your initial layout.
###Code
# run this cell for the widget
backend1
###Output
_____no_output_____
###Markdown
&128211; Assign your choice of layout to the list variable layout1 in the cell below
###Code
# Assign your choice of the initial_layout to the variable layout1 as a list
# ex) layout1 = [0,2,4]
layout1 = []
###Output
_____no_output_____
###Markdown
&128211; Describe the reason for your choice of initial layout. Execute `AND` gate on `ibmqx2` by running the cell below.
###Code
output1_all = []
qc_trans1_all = []
prob1_all = []
worst = 1
best = 0
for input1 in ['0','1']:
for input2 in ['0','1']:
qc_trans1, output1 = AND(input1, input2, backend1, layout1)
output1_all.append(output1)
qc_trans1_all.append(qc_trans1)
prob = output1[str(int( input1=='1' and input2=='1' ))]/8192
prob1_all.append(prob)
print('\nProbability of correct answer for inputs',input1,input2)
print( '{:.2f}'.format(prob) )
print('---------------------------------')
worst = min(worst,prob)
best = max(best, prob)
print('')
print('\nThe highest of these probabilities was {:.2f}'.format(best))
print('The lowest of these probabilities was {:.2f}'.format(worst))
###Output
_____no_output_____
###Markdown
Once your job has finished running, you can easily access the results via:```pythonresults = backend.retrieve_job('JOB_ID').result()```Your job_ids will be printed out through the `AND` function defined above. You can also find the job_ids from the results under your `IQX` account. More information can be found [here](https://quantum-computing.ibm.com/docs/manage/account/ibmq). Case B) Three qubits on ibmq_athens for the linear nearest neighbor connectivity Examine `ibmq_athens` through the widget by running the cell below.
###Code
backend2
###Output
_____no_output_____
###Markdown
&128211; Find three qubits with the linear nearest neighbor connectivity. Determine the initial layout considering the error map and assign it to the list variable layout2.
###Code
layout2 = []
###Output
_____no_output_____
###Markdown
&128211; Describe the reason for your choice of initial layout. Execute `AND` gate on `ibmq_athens` by running the cell below.
###Code
output2_all = []
qc_trans2_all = []
prob2_all = []
worst = 1
best = 0
for input1 in ['0','1']:
for input2 in ['0','1']:
qc_trans2, output2 = AND(input1, input2, backend2, layout2)
output2_all.append(output2)
qc_trans2_all.append(qc_trans2)
prob = output2[str(int( input1=='1' and input2=='1' ))]/8192
prob2_all.append(prob)
print('\nProbability of correct answer for inputs',input1,input2)
print('{:.2f}'.format(prob) )
print('---------------------------------')
worst = min(worst,prob)
best = max(best, prob)
print('')
print('\nThe highest of these probabilities was {:.2f}'.format(best))
print('The lowest of these probabilities was {:.2f}'.format(worst))
###Output
_____no_output_____
###Markdown
Step 3. Interpret the result There are several quantities that distinguish the circuits. Chief among them is the **circuit depth**. Circuit depth is defined in detail [here](https://qiskit.org/documentation/apidoc/circuit.html) (See the Supplementary Information and click the Quantum Circuit Properties tab). Circuit depth is proportional to the number of gates in a circuit, and loosely corresponds to the runtime of the circuit on hardware. Therefore, circuit depth is an easy to compute metric that can be used to estimate the fidelity of an executed circuit.A second important value is the number of **nonlocal** (multi-qubit) **gates** in a circuit. On IBM Quantum systems, the only nonlocal gate that can physically be performed is the CNOT gate. Recall that CNOT gates are the most expensive gates to perform, and thus the total number of these gates also serves as a good benchmark for the accuracy of the final output. A) Circuit depth and result accuracy Running the cells below will display the four transpiled AND gate circuit diagrams with the corresponding inputs that executed on `ibmq_athens` and their circuit depths with the success probability for producing correct answer.
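If you want a fuller breakdown than depth and CNOT count alone, `count_ops()` lists every gate type in a circuit; for example, applied to the first transpiled circuit from Case B:
```python
tqc = qc_trans2_all[0]
print('gate counts    :', dict(tqc.count_ops()))
print('depth          :', tqc.depth())
print('nonlocal gates :', tqc.num_nonlocal_gates())
```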
###Code
print('Transpiled AND gate circuit for ibmq_athens with input 0 0')
print('\nThe circuit depth : {}'.format (qc_trans2_all[0].depth()))
print('# of nonlocal gates : {}'.format (qc_trans2_all[0].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob2_all[0]) )
qc_trans2_all[0].draw()
print('Transpiled AND gate circuit for ibmq_athens with input 0 1')
print('\nThe circuit depth : {}'.format (qc_trans2_all[1].depth()))
print('# of nonlocal gates : {}'.format (qc_trans2_all[1].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob2_all[1]) )
qc_trans2_all[1].draw()
print('Transpiled AND gate circuit for ibmq_athens with input 1 0')
print('\nThe circuit depth : {}'.format (qc_trans2_all[2].depth()))
print('# of nonlocal gates : {}'.format (qc_trans2_all[2].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob2_all[2]) )
qc_trans2_all[2].draw()
print('Transpiled AND gate circuit for ibmq_athens with input 1 1')
print('\nThe circuit depth : {}'.format (qc_trans2_all[3].depth()))
print('# of nonlocal gates : {}'.format (qc_trans2_all[3].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob2_all[3]) )
qc_trans2_all[3].draw()
###Output
_____no_output_____
###Markdown
&128211; Explain the reason for the dissimilarity of the circuits. Describe the relation between the properties of the circuits and the accuracy of the outcomes. B) Qubit connectivity and circuit depth Investigate the transpiled circuits for `ibmqx2` by running the cells below.
###Code
print('Transpiled AND gate circuit for ibmqx2 with input 0 0')
print('\nThe circuit depth : {}'.format (qc_trans1_all[0].depth()))
print('# of nonlocal gates : {}'.format (qc_trans1_all[0].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob1_all[0]) )
qc_trans1_all[0].draw()
print('Transpiled AND gate circuit for ibmqx2 with input 0 1')
print('\nThe circuit depth : {}'.format (qc_trans1_all[1].depth()))
print('# of nonlocal gates : {}'.format (qc_trans1_all[1].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob1_all[1]) )
qc_trans1_all[1].draw()
print('Transpiled AND gate circuit for ibmqx2 with input 1 0')
print('\nThe circuit depth : {}'.format (qc_trans1_all[2].depth()))
print('# of nonlocal gates : {}'.format (qc_trans1_all[2].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob1_all[2]) )
qc_trans1_all[2].draw()
print('Transpiled AND gate circuit for ibmqx2 with input 1 1')
print('\nThe circuit depth : {}'.format (qc_trans1_all[3].depth()))
print('# of nonlocal gates : {}'.format (qc_trans1_all[3].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob1_all[3]) )
qc_trans1_all[3].draw()
###Output
_____no_output_____
###Markdown
<h1 style="font-size:35px; color:black; ">Lab 1 Quantum Circuits Prerequisite- [Qiskit basics](https://libket.ewi.tudelft.nl/documentation/tutorials/circuits/1_getting_started_with_qiskit.html)- [Ch.1.2 The Atoms of Computation](https://libket.ewi.tudelft.nl/textbook/ch-states/atoms-computation.html)Other relevant materials- [Access IBM Quantum Systems](https://libket.ewi.tudelft.nl/documentation/install.htmlaccess-ibm-quantum-systems)- [IBM Quantum Systems Configuration](https://quantum-computing.ibm.com/docs/manage/backends/configuration)- [Transpile](https://libket.ewi.tudelft.nl/documentation/apidoc/transpiler.html)- [IBM Quantum account](https://quantum-computing.ibm.com/docs/manage/account/ibmq)- [Quantum Circuits](https://libket.ewi.tudelft.nl/documentation/apidoc/circuit.html)
###Code
from qiskit import *
from qiskit.visualization import plot_histogram
import numpy as np
###Output
_____no_output_____
###Markdown
Part 1: Classical logic gates with quantum circuits<div style="background: E8E7EB; border-radius: 5px;-moz-border-radius: 5px;"> <p style="background: 800080; border-radius: 5px 5px 0px 0px; padding: 10px 0px 10px 10px; font-size:18px; color:white; ">Goal <p style=" padding: 0px 0px 10px 10px; font-size:16px;">Create quantum circuit functions that can compute the XOR, AND, NAND and OR gates using the NOT gate (expressed as x in Qiskit), the CNOT gate (expressed as cx in Qiskit) and the Toffoli gate (expressed as ccx in Qiskit) .An implementation of the `NOT` gate is provided as an example.
###Code
def NOT(inp):
"""An NOT gate.
Parameters:
inp (str): Input, encoded in qubit 0.
Returns:
QuantumCircuit: Output NOT circuit.
str: Output value measured from qubit 0.
"""
qc = QuantumCircuit(1, 1) # A quantum circuit with a single qubit and a single classical bit
qc.reset(0)
# We encode '0' as the qubit state |0⟩, and '1' as |1⟩
# Since the qubit is initially |0⟩, we don't need to do anything for an input of '0'
# For an input of '1', we do an x to rotate the |0⟩ to |1⟩
if inp=='1':
qc.x(0)
# barrier between input state and gate operation
qc.barrier()
# Now we've encoded the input, we can do a NOT on it using x
qc.x(0)
#barrier between gate operation and measurement
qc.barrier()
# Finally, we extract the |0⟩/|1⟩ output of the qubit and encode it in the bit c[0]
qc.measure(0,0)
qc.draw('mpl')
# We'll run the program on a simulator
backend = Aer.get_backend('qasm_simulator')
# Since the output will be deterministic, we can use just a single shot to get it
job = execute(qc, backend, shots=1, memory=True)
output = job.result().get_memory()[0]
return qc, output
## Test the function
for inp in ['0', '1']:
qc, out = NOT(inp)
print('NOT with input',inp,'gives output',out)
display(qc.draw())
print('\n')
###Output
NOT with input 0 gives output 1
###Markdown
&128211; XOR gateTakes two binary strings as input and gives one as output.The output is '0' when the inputs are equal and '1' otherwise.
###Code
def XOR(inp1,inp2):
"""An XOR gate.
Parameters:
inpt1 (str): Input 1, encoded in qubit 0.
inpt2 (str): Input 2, encoded in qubit 1.
Returns:
QuantumCircuit: Output XOR circuit.
str: Output value measured from qubit 1.
"""
qc = QuantumCircuit(2, 1)
qc.reset(range(2))
if inp1=='1':
qc.x(0)
if inp2=='1':
qc.x(1)
# barrier between input state and gate operation
qc.barrier()
# this is where your program for quantum XOR gate goes
# barrier between input state and gate operation
qc.barrier()
qc.measure(1,0) # output from qubit 1 is measured
#We'll run the program on a simulator
backend = Aer.get_backend('qasm_simulator')
#Since the output will be deterministic, we can use just a single shot to get it
job = execute(qc, backend, shots=1, memory=True)
output = job.result().get_memory()[0]
return qc, output
## Test the function
for inp1 in ['0', '1']:
for inp2 in ['0', '1']:
qc, output = XOR(inp1, inp2)
print('XOR with inputs',inp1,inp2,'gives output',output)
display(qc.draw())
print('\n')
###Output
_____no_output_____
###Markdown
&128211; AND gateTakes two binary strings as input and gives one as output.The output is `'1'` only when both the inputs are `'1'`.
###Code
def AND(inp1,inp2):
"""An AND gate.
Parameters:
inpt1 (str): Input 1, encoded in qubit 0.
inpt2 (str): Input 2, encoded in qubit 1.
Returns:
QuantumCircuit: Output AND circuit.
str: Output value measured from qubit 2.
"""
qc = QuantumCircuit(3, 1)
qc.reset(range(2))
if inp1=='1':
qc.x(0)
if inp2=='1':
qc.x(1)
qc.barrier()
# this is where your program for quantum AND gate goes
qc.barrier()
qc.measure(2, 0) # output from qubit 2 is measured
# We'll run the program on a simulator
backend = Aer.get_backend('qasm_simulator')
# Since the output will be deterministic, we can use just a single shot to get it
job = execute(qc, backend, shots=1, memory=True)
output = job.result().get_memory()[0]
return qc, output
## Test the function
for inp1 in ['0', '1']:
for inp2 in ['0', '1']:
qc, output = AND(inp1, inp2)
print('AND with inputs',inp1,inp2,'gives output',output)
display(qc.draw())
print('\n')
###Output
_____no_output_____
###Markdown
&128211; NAND gateTakes two binary strings as input and gives one as output.The output is `'0'` only when both the inputs are `'1'`.
###Code
def NAND(inp1,inp2):
"""An NAND gate.
Parameters:
inpt1 (str): Input 1, encoded in qubit 0.
inpt2 (str): Input 2, encoded in qubit 1.
Returns:
QuantumCircuit: Output NAND circuit.
str: Output value measured from qubit 2.
"""
qc = QuantumCircuit(3, 1)
qc.reset(range(3))
if inp1=='1':
qc.x(0)
if inp2=='1':
qc.x(1)
qc.barrier()
# this is where your program for quantum NAND gate goes
qc.barrier()
qc.measure(2, 0) # output from qubit 2 is measured
# We'll run the program on a simulator
backend = Aer.get_backend('qasm_simulator')
# Since the output will be deterministic, we can use just a single shot to get it
job = execute(qc,backend,shots=1,memory=True)
output = job.result().get_memory()[0]
return qc, output
## Test the function
for inp1 in ['0', '1']:
for inp2 in ['0', '1']:
qc, output = NAND(inp1, inp2)
print('NAND with inputs',inp1,inp2,'gives output',output)
display(qc.draw())
print('\n')
###Output
_____no_output_____
###Markdown
&128211; OR gateTakes two binary strings as input and gives one as output.The output is '1' if either input is '1'.
###Code
def OR(inp1,inp2):
"""An OR gate.
Parameters:
inpt1 (str): Input 1, encoded in qubit 0.
inpt2 (str): Input 2, encoded in qubit 1.
Returns:
QuantumCircuit: Output OR circuit.
str: Output value measured from qubit 2.
"""
qc = QuantumCircuit(3, 1)
qc.reset(range(3))
if inp1=='1':
qc.x(0)
if inp2=='1':
qc.x(1)
qc.barrier()
# this is where your program for quantum OR gate goes
qc.barrier()
qc.measure(2, 0) # output from qubit 2 is measured
# We'll run the program on a simulator
backend = Aer.get_backend('qasm_simulator')
# Since the output will be deterministic, we can use just a single shot to get it
job = execute(qc,backend,shots=1,memory=True)
output = job.result().get_memory()[0]
return qc, output
## Test the function
for inp1 in ['0', '1']:
for inp2 in ['0', '1']:
qc, output = OR(inp1, inp2)
print('OR with inputs',inp1,inp2,'gives output',output)
display(qc.draw())
print('\n')
###Output
_____no_output_____
###Markdown
Part 2: AND gate on Quantum Computer<div style="background: E8E7EB; border-radius: 5px;-moz-border-radius: 5px;"> <p style="background: 800080; border-radius: 5px 5px 0px 0px; padding: 10px 0px 10px 10px; font-size:18px; color:white; ">Goal <p style=" padding: 0px 0px 10px 10px; font-size:16px;">Execute AND gate on two quantum systems and learn how the different circuit properties affect the result.In Part 1 you made an `AND` gate from quantum gates, and executed it on the simulator. Here in Part 2 you will do it again, but instead run the circuits on a real quantum computer. When using a real quantum system, one thing you should keep in mind is that present-day quantum computers are not fault tolerant; they are noisy.The 'noise' in a quantum system is the collective effect of all the things that should not happen, but nevertheless do. Noise results in outputs that are not always what we would expect. There is noise associated with all processes in a quantum circuit: preparing the initial state, applying gates, and qubit measurement. For the gates, noise levels can vary between different gates and between different qubits. `cx` gates are typically more noisy than any single qubit gate.Here we will use the quantum systems from the IBM Quantum Experience. If you do not have access, you can get it [here](https://libket.ewi.tudelft.nl/documentation/install.htmlaccess-ibm-quantum-systems).Now that you are ready to use the real quantum computer, let's begin. Step 1. Choosing a device First load the account from the credentials saved on disk by running the following cell:
###Code
IBMQ.load_account()
###Output
_____no_output_____
###Markdown
After your account is loaded, you can see the list of providers that you have access to by running the cell below. Each provider offers different systems for use. For open users, there is typically only one provider `ibm-q/open/main`:
###Code
IBMQ.providers()
###Output
_____no_output_____
###Markdown
Let us grab the provider using `get_provider`. The command, `provider.backends( )` shows you the list of backends that are available to you from the selected provider.
###Code
provider = IBMQ.get_provider('ibm-q')
provider.backends()
###Output
_____no_output_____
###Markdown
Among these options, you may pick one of the systems to run your circuits on. All except the `ibmq_qasm_simulator` are real quantum computers that you can use. The differences among these systems reside in the number of qubits, their connectivity, and the system error rates. Upon executing the following cell you will be presented with a widget that displays all of the information about your choice of the backend. You can obtain the information that you need by clicking on the tabs. For example, backend status, number of qubits and the connectivity are under the `configuration` tab, whereas the `Error Map` tab will reveal the latest noise information for the system.
###Code
import qiskit.tools.jupyter
backend_ex = provider.get_backend('ibmq_16_melbourne')
backend_ex
###Output
_____no_output_____
###Markdown
For our AND gate circuit, we need a backend with three or more qubits, which is true for all the real systems except for `ibmq_armonk`. Below is an example of how to filter backends, where we filter for number of qubits, and remove simulators:
###Code
backends = provider.backends(filters = lambda x:x.configuration().n_qubits >= 2 and not x.configuration().simulator
and x.status().operational==True)
backends
###Output
_____no_output_____
###Markdown
One convenient way to choose a system is using the `least_busy` function to get the backend with the lowest number of jobs in queue. The downside is that the result might have relatively poor accuracy because, not surprisingly, the lowest error rate systems are the most popular.
###Code
from qiskit.providers.ibmq import least_busy
backend = least_busy(provider.backends(filters=lambda x: x.configuration().n_qubits >= 2 and
not x.configuration().simulator and x.status().operational==True))
backend
###Output
_____no_output_____
###Markdown
Real quantum computers need to be recalibrated regularly, and the fidelity of a specific qubit or gate can change over time. Therefore, which system would produce results with less error can vary. `ibmq_athens` tends to show relatively low error rates.In this exercise, we select two systems: `ibmq_athens` for its low error rates, and `ibmqx2` for its additional connectivity, in particular its triangular connectivity, that will be useful for circuits with Toffoli gates.
###Code
# run this cell
backend1 = provider.get_backend('ibmqx2')
backend2 = provider.get_backend('ibmq_athens')
###Output
_____no_output_____
###Markdown
Step 2. Define AND function for a real deviceWe now define the AND function. We choose 8192 as the number of shots, the maximum number of shots for open IBM systems, to reduce the variance in the final result. Related information is well explained [here](https://quantum-computing.ibm.com/docs/manage/backends/configuration). Qiskit Transpiler It is important to know that when running a circuit on a real quantum computer, circuits typically need to be transpiled for the backend that you select so that the circuit contains only those gates that the quantum computer can actually perform. Primarily this involves the addition of swap gates so that two-qubit gates in the circuit map to those pairs of qubits on the device that can actually perform these gates. The following cell shows the AND gate represented as a Toffoli gate decomposed into single- and two-qubit gates, which are the only types of gate that can be run on IBM hardware. Provided that CNOT gates can be performed between all three qubits, a triangle topology, no other gates are required.
###Code
qc_and = QuantumCircuit(3)
qc_and.ccx(0,1,2)
print('AND gate')
display(qc_and.draw())
print('\n\nTranspiled AND gate with all the required connectivity')
qc_and.decompose().draw()
###Output
AND gate
###Markdown
In addition, there are often optimizations that the transpiler can perform that reduce the overall gate count, and thus total length of the input circuits. Note that the addition of swaps to match the device topology, and optimizations for reducing the length of a circuit are at odds with each other. In what follows we will make use of `initial_layout` that allows us to pick the qubits on a device used for the computation and `optimization_level`, an argument that allows selecting from internal defaults for circuit swap mapping and optimization methods to perform.You can learn more about transpile function in depth [here](https://libket.ewi.tudelft.nl/documentation/apidoc/transpiler.html). Let's modify AND function in Part1 properly for the real system with the transpile step included.
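The effect of connectivity alone can also be previewed without hardware by transpiling the Toffoli circuit `qc_and` from above against two toy coupling maps, a triangle and a line (a sketch using `qiskit.transpiler.CouplingMap`; the exact depths depend on the transpiler version):
```python
from qiskit.transpiler import CouplingMap

maps = {'triangle': CouplingMap([[0, 1], [1, 2], [2, 0]]),
        'line':     CouplingMap([[0, 1], [1, 2]])}

for name, cmap in maps.items():
    tqc = transpile(qc_and, coupling_map=cmap,
                    basis_gates=['u1', 'u2', 'u3', 'cx'],
                    optimization_level=3, seed_transpiler=11)
    print('{:8s} -> depth {:2d}, cx count {}'.format(
        name, tqc.depth(), tqc.count_ops().get('cx', 0)))
```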
###Code
from qiskit.tools.monitor import job_monitor
# run the cell to define AND gate for real quantum system
def AND(inp1, inp2, backend, layout):
qc = QuantumCircuit(3, 1)
qc.reset(range(3))
if inp1=='1':
qc.x(0)
if inp2=='1':
qc.x(1)
qc.barrier()
qc.ccx(0, 1, 2)
qc.barrier()
qc.measure(2, 0)
qc_trans = transpile(qc, backend, initial_layout=layout, optimization_level=3)
job = execute(qc_trans, backend, shots=8192)
print(job.job_id())
job_monitor(job)
output = job.result().get_counts()
return qc_trans, output
###Output
_____no_output_____
###Markdown
When you submit jobs to quantum systems, `job_monitor` will start tracking where your submitted job is in the pipeline. Case A) Three qubits on ibmqx2 with the triangle connectivity First, examine `ibmqx2` using the widget introduced earlier. Find a group of three qubits with triangle connection and determine your initial layout.
###Code
# run this cell for the widget
backend1
###Output
_____no_output_____
###Markdown
&128211; Assign your choice of layout to the list variable layout1 in the cell below
###Code
# Assign your choice of the initial_layout to the variable layout1 as a list
# ex) layout1 = [0,2,4]
layout1 =
###Output
_____no_output_____
###Markdown
&128211; Describe the reason for your choice of initial layout. Execute `AND` gate on `ibmqx2` by running the cell below.
###Code
output1_all = []
qc_trans1_all = []
prob1_all = []
worst = 1
best = 0
for input1 in ['0','1']:
for input2 in ['0','1']:
qc_trans1, output1 = AND(input1, input2, backend1, layout1)
output1_all.append(output1)
qc_trans1_all.append(qc_trans1)
prob = output1[str(int( input1=='1' and input2=='1' ))]/8192
prob1_all.append(prob)
print('\nProbability of correct answer for inputs',input1,input2)
print( '{:.2f}'.format(prob) )
print('---------------------------------')
worst = min(worst,prob)
best = max(best, prob)
print('')
print('\nThe highest of these probabilities was {:.2f}'.format(best))
print('The lowest of these probabilities was {:.2f}'.format(worst))
###Output
_____no_output_____
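###Markdown
The cell above prints a job id for each run. As noted in the next cell, those ids can be used to fetch the results again later; a minimal sketch (the id is left blank here as a placeholder):
###Code
# Optional sketch: re-fetch counts from an earlier run using its job id.
job_id = ''   # paste one of the job ids printed above (placeholder left blank)
if job_id:
    old_counts = backend1.retrieve_job(job_id).result().get_counts()
    print(old_counts)
###Output
_____no_output_____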
###Markdown
Once your job has finished running, you can then easily access the results via:```pythonresults = backend.retrieve_job('JOB_ID').result()```Your job_ids will be printed out through the `AND` function defined above. You can also find the job_ids from the results under your `IQX` account. More information can be found [here](https://quantum-computing.ibm.com/docs/manage/account/ibmq). Case B) Three qubits on ibmq_athens for the linear nearest neighbor connectivity Examine `ibmq_athens` through the widget by running the cell below.
###Code
backend2
###Output
_____no_output_____
###Markdown
&128211; Find three qubits with the linear nearest neighbor connectivity. Determine the initial layout considering the error map and assign it to the list variable layout2.
###Code
layout2 = []
###Output
_____no_output_____
###Markdown
&128211; Describe the reason for your choice of initial layout. Execute `AND` gate on `ibmq_athens` by running the cell below.
###Code
output2_all = []
qc_trans2_all = []
prob2_all = []
worst = 1
best = 0
for input1 in ['0','1']:
for input2 in ['0','1']:
qc_trans2, output2 = AND(input1, input2, backend2, layout2)
output2_all.append(output2)
qc_trans2_all.append(qc_trans2)
prob = output2[str(int( input1=='1' and input2=='1' ))]/8192
prob2_all.append(prob)
print('\nProbability of correct answer for inputs',input1,input2)
print('{:.2f}'.format(prob) )
print('---------------------------------')
worst = min(worst,prob)
best = max(best, prob)
print('')
print('\nThe highest of these probabilities was {:.2f}'.format(best))
print('The lowest of these probabilities was {:.2f}'.format(worst))
###Output
_____no_output_____
###Markdown
Step 3. Interpret the result There are several quantities that distinguish the circuits. Chief among them is the **circuit depth**. Circuit depth is defined in detail [here](https://libket.ewi.tudelft.nl/documentation/apidoc/circuit.html) (See the Supplementary Information and click the Quantum Circuit Properties tab). Circuit depth is proportional to the number of gates in a circuit, and loosely corresponds to the runtime of the circuit on hardware. Therefore, circuit depth is an easy-to-compute metric that can be used to estimate the fidelity of an executed circuit. A second important value is the number of **nonlocal** (multi-qubit) **gates** in a circuit. On IBM Quantum systems, the only nonlocal gate that can physically be performed is the CNOT gate. Recall that CNOT gates are the most expensive gates to perform, and thus the total number of these gates also serves as a good benchmark for the accuracy of the final output. A) Circuit depth and result accuracy Running the cells below will display the four transpiled AND gate circuit diagrams with the corresponding inputs that were executed on `ibmq_athens`, together with their circuit depths and the success probability of producing the correct answer.
###Code
print('Transpiled AND gate circuit for ibmq_athens with input 0 0')
print('\nThe circuit depth : {}'.format (qc_trans2_all[0].depth()))
print('# of nonlocal gates : {}'.format (qc_trans2_all[0].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob2_all[0]) )
qc_trans2_all[0].draw()
print('Transpiled AND gate circuit for ibmq_athens with input 0 1')
print('\nThe circuit depth : {}'.format (qc_trans2_all[1].depth()))
print('# of nonlocal gates : {}'.format (qc_trans2_all[1].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob2_all[1]) )
qc_trans2_all[1].draw()
print('Transpiled AND gate circuit for ibmq_athens with input 1 0')
print('\nThe circuit depth : {}'.format (qc_trans2_all[2].depth()))
print('# of nonlocal gates : {}'.format (qc_trans2_all[2].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob2_all[2]) )
qc_trans2_all[2].draw()
print('Transpiled AND gate circuit for ibmq_athens with input 1 1')
print('\nThe circuit depth : {}'.format (qc_trans2_all[3].depth()))
print('# of nonlocal gates : {}'.format (qc_trans2_all[3].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob2_all[3]) )
qc_trans2_all[3].draw()
###Output
_____no_output_____
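###Markdown
For comparison with the hardware-transpiled circuits above, the ideal (un-transpiled) AND circuit defined earlier is far shallower — a quick sketch using `qc_and`:
###Code
# Optional sketch: metrics of the ideal AND circuit versus its basis-gate decomposition.
print('Ideal AND (Toffoli)          : depth = {}, nonlocal gates = {}'.format(
    qc_and.depth(), qc_and.num_nonlocal_gates()))
qc_and_basis = qc_and.decompose()
print('Decomposed to 1-2 qubit gates: depth = {}, nonlocal gates = {}'.format(
    qc_and_basis.depth(), qc_and_basis.num_nonlocal_gates()))
###Output
_____no_output_____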
###Markdown
&128211; Explain the reason for the dissimilarity of the circuits. Describe the relation between the properties of a circuit and the accuracy of its outcomes. B) Qubit connectivity and circuit depth Investigate the transpiled circuits for `ibmqx2` by running the cells below.
###Code
print('Transpiled AND gate circuit for ibmqx2 with input 0 0')
print('\nThe circuit depth : {}'.format (qc_trans1_all[0].depth()))
print('# of nonlocal gates : {}'.format (qc_trans1_all[0].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob1_all[0]) )
qc_trans1_all[0].draw()
print('Transpiled AND gate circuit for ibmqx2 with input 0 1')
print('\nThe circuit depth : {}'.format (qc_trans1_all[1].depth()))
print('# of nonlocal gates : {}'.format (qc_trans1_all[1].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob1_all[1]) )
qc_trans1_all[1].draw()
print('Transpiled AND gate circuit for ibmqx2 with input 1 0')
print('\nThe circuit depth : {}'.format (qc_trans1_all[2].depth()))
print('# of nonlocal gates : {}'.format (qc_trans1_all[2].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob1_all[2]) )
qc_trans1_all[2].draw()
print('Transpiled AND gate circuit for ibmqx2 with input 1 1')
print('\nThe circuit depth : {}'.format (qc_trans1_all[3].depth()))
print('# of nonlocal gates : {}'.format (qc_trans1_all[3].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob1_all[3]) )
qc_trans1_all[3].draw()
###Output
_____no_output_____
###Markdown
<h1 style="font-size:35px; color:black; ">Lab 1 Quantum Circuits Prerequisite- [Qiskit basics](https://qiskit.org/documentation/tutorials/circuits/1_getting_started_with_qiskit.html)- [Ch.1.2 The Atoms of Computation](https://qiskit.org/textbook/ch-states/atoms-computation.html)Other relevant materials- [Access IBM Quantum Systems](https://qiskit.org/documentation/install.htmlaccess-ibm-quantum-systems)- [IBM Quantum Systems Configuration](https://quantum-computing.ibm.com/docs/manage/backends/configuration)- [Transpile](https://qiskit.org/documentation/apidoc/transpiler.html)- [IBM Quantum account](https://quantum-computing.ibm.com/docs/manage/account/ibmq)- [Quantum Circuits](https://qiskit.org/documentation/apidoc/circuit.html)
###Code
from qiskit import *
from qiskit.visualization import plot_histogram
import numpy as np
###Output
_____no_output_____
###Markdown
Part 1: Classical logic gates with quantum circuits<div style="background: E8E7EB; border-radius: 5px;-moz-border-radius: 5px;"> <p style="background: 800080; border-radius: 5px 5px 0px 0px; padding: 10px 0px 10px 10px; font-size:18px; color:white; ">Goal <p style=" padding: 0px 0px 10px 10px; font-size:16px;">Create quantum circuit functions that can compute the XOR, AND, NAND and OR gates using the NOT gate (expressed as x in Qiskit), the CNOT gate (expressed as cx in Qiskit) and the Toffoli gate (expressed as ccx in Qiskit) .An implementation of the `NOT` gate is provided as an example.
###Code
def NOT(inp):
"""An NOT gate.
Parameters:
inp (str): Input, encoded in qubit 0.
Returns:
QuantumCircuit: Output NOT circuit.
str: Output value measured from qubit 0.
"""
qc = QuantumCircuit(1, 1) # A quantum circuit with a single qubit and a single classical bit
qc.reset(0)
# We encode '0' as the qubit state |0⟩, and '1' as |1⟩
# Since the qubit is initially |0⟩, we don't need to do anything for an input of '0'
# For an input of '1', we do an x to rotate the |0⟩ to |1⟩
if inp=='1':
qc.x(0)
# barrier between input state and gate operation
qc.barrier()
# Now we've encoded the input, we can do a NOT on it using x
qc.x(0)
#barrier between gate operation and measurement
qc.barrier()
# Finally, we extract the |0⟩/|1⟩ output of the qubit and encode it in the bit c[0]
qc.measure(0,0)
qc.draw('mpl')
# We'll run the program on a simulator
backend = Aer.get_backend('qasm_simulator')
# Since the output will be deterministic, we can use just a single shot to get it
job = execute(qc, backend, shots=1, memory=True)
output = job.result().get_memory()[0]
return qc, output
## Test the function
for inp in ['0', '1']:
qc, out = NOT(inp)
print('NOT with input',inp,'gives output',out)
display(qc.draw())
print('\n')
###Output
NOT with input 0 gives output 1
###Markdown
&128211; XOR gateTakes two binary strings as input and gives one as output.The output is '0' when the inputs are equal and '1' otherwise.
###Code
def XOR(inp1,inp2):
"""An XOR gate.
Parameters:
inpt1 (str): Input 1, encoded in qubit 0.
inpt2 (str): Input 2, encoded in qubit 1.
Returns:
QuantumCircuit: Output XOR circuit.
str: Output value measured from qubit 1.
"""
qc = QuantumCircuit(2, 1)
qc.reset(range(2))
if inp1=='1':
qc.x(0)
if inp2=='1':
qc.x(1)
# barrier between input state and gate operation
qc.barrier()
# this is where your program for quantum XOR gate goes
# barrier between input state and gate operation
qc.barrier()
qc.measure(1,0) # output from qubit 1 is measured
#We'll run the program on a simulator
backend = Aer.get_backend('qasm_simulator')
#Since the output will be deterministic, we can use just a single shot to get it
job = execute(qc, backend, shots=1, memory=True)
output = job.result().get_memory()[0]
return qc, output
## Test the function
for inp1 in ['0', '1']:
for inp2 in ['0', '1']:
qc, output = XOR(inp1, inp2)
print('XOR with inputs',inp1,inp2,'gives output',output)
display(qc.draw())
print('\n')
###Output
_____no_output_____
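###Markdown
If you want to check your construction, one possible completion (a sketch, not the only valid answer) uses a single CNOT: with qubit 0 as control and qubit 1 as target, qubit 1 ends up holding inp1 XOR inp2. The standalone demo below encodes the inputs '1' and '0' and should print '1'.
###Code
# A possible XOR construction (sketch): CNOT writes inp1 XOR inp2 onto qubit 1.
qc_xor_demo = QuantumCircuit(2, 1)
qc_xor_demo.x(0)              # encode inp1 = '1' (inp2 = '0' needs no gate)
qc_xor_demo.cx(0, 1)          # qubit 1 <- inp1 XOR inp2
qc_xor_demo.measure(1, 0)
backend_demo = Aer.get_backend('qasm_simulator')
print('XOR demo output:', execute(qc_xor_demo, backend_demo, shots=1, memory=True).result().get_memory()[0])
###Output
_____no_output_____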
###Markdown
&128211; AND gateTakes two binary strings as input and gives one as output.The output is `'1'` only when both the inputs are `'1'`.
###Code
def AND(inp1,inp2):
"""An AND gate.
Parameters:
inpt1 (str): Input 1, encoded in qubit 0.
inpt2 (str): Input 2, encoded in qubit 1.
Returns:
QuantumCircuit: Output XOR circuit.
str: Output value measured from qubit 2.
"""
qc = QuantumCircuit(3, 1)
qc.reset(range(2))
if inp1=='1':
qc.x(0)
if inp2=='1':
qc.x(1)
qc.barrier()
# this is where your program for quantum AND gate goes
qc.barrier()
qc.measure(2, 0) # output from qubit 2 is measured
# We'll run the program on a simulator
backend = Aer.get_backend('qasm_simulator')
# Since the output will be deterministic, we can use just a single shot to get it
job = execute(qc, backend, shots=1, memory=True)
output = job.result().get_memory()[0]
return qc, output
## Test the function
for inp1 in ['0', '1']:
for inp2 in ['0', '1']:
qc, output = AND(inp1, inp2)
print('AND with inputs',inp1,inp2,'gives output',output)
display(qc.draw())
print('\n')
###Output
_____no_output_____
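###Markdown
One possible completion (a sketch, not the only valid answer): a Toffoli gate with qubits 0 and 1 as controls writes inp1 AND inp2 onto qubit 2, exactly as used again in Part 2. The demo below encodes '1' and '1' and should print '1'.
###Code
# A possible AND construction (sketch): Toffoli writes inp1 AND inp2 onto qubit 2.
qc_and_demo = QuantumCircuit(3, 1)
qc_and_demo.x([0, 1])         # encode inp1 = inp2 = '1'
qc_and_demo.ccx(0, 1, 2)      # qubit 2 <- inp1 AND inp2
qc_and_demo.measure(2, 0)
print('AND demo output:', execute(qc_and_demo, Aer.get_backend('qasm_simulator'), shots=1, memory=True).result().get_memory()[0])
###Output
_____no_output_____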
###Markdown
&128211; NAND gateTakes two binary strings as input and gives one as output.The output is `'0'` only when both the inputs are `'1'`.
###Code
def NAND(inp1,inp2):
"""An NAND gate.
Parameters:
inpt1 (str): Input 1, encoded in qubit 0.
inpt2 (str): Input 2, encoded in qubit 1.
Returns:
QuantumCircuit: Output NAND circuit.
str: Output value measured from qubit 2.
"""
qc = QuantumCircuit(3, 1)
qc.reset(range(3))
if inp1=='1':
qc.x(0)
if inp2=='1':
qc.x(1)
qc.barrier()
# this is where your program for quantum NAND gate goes
qc.barrier()
qc.measure(2, 0) # output from qubit 2 is measured
# We'll run the program on a simulator
backend = Aer.get_backend('qasm_simulator')
# Since the output will be deterministic, we can use just a single shot to get it
job = execute(qc,backend,shots=1,memory=True)
output = job.result().get_memory()[0]
return qc, output
## Test the function
for inp1 in ['0', '1']:
for inp2 in ['0', '1']:
qc, output = NAND(inp1, inp2)
print('NAND with inputs',inp1,inp2,'gives output',output)
display(qc.draw())
print('\n')
###Output
_____no_output_____
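###Markdown
One possible completion (a sketch, not the only valid answer): compute AND with a Toffoli and then invert the result with an X on qubit 2. The demo below encodes '1' and '1' and should print '0'.
###Code
# A possible NAND construction (sketch): Toffoli followed by X on the target qubit.
qc_nand_demo = QuantumCircuit(3, 1)
qc_nand_demo.x([0, 1])        # encode inp1 = inp2 = '1'
qc_nand_demo.ccx(0, 1, 2)     # qubit 2 <- inp1 AND inp2
qc_nand_demo.x(2)             # invert: qubit 2 <- inp1 NAND inp2
qc_nand_demo.measure(2, 0)
print('NAND demo output:', execute(qc_nand_demo, Aer.get_backend('qasm_simulator'), shots=1, memory=True).result().get_memory()[0])
###Output
_____no_output_____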
###Markdown
&128211; OR gateTakes two binary strings as input and gives one as output.The output is '1' if either input is '1'.
###Code
def OR(inp1,inp2):
"""An OR gate.
Parameters:
inpt1 (str): Input 1, encoded in qubit 0.
inpt2 (str): Input 2, encoded in qubit 1.
Returns:
QuantumCircuit: Output XOR circuit.
str: Output value measured from qubit 2.
"""
qc = QuantumCircuit(3, 1)
qc.reset(range(3))
if inp1=='1':
qc.x(0)
if inp2=='1':
qc.x(1)
qc.barrier()
# this is where your program for quantum OR gate goes
qc.barrier()
qc.measure(2, 0) # output from qubit 2 is measured
# We'll run the program on a simulator
backend = Aer.get_backend('qasm_simulator')
# Since the output will be deterministic, we can use just a single shot to get it
job = execute(qc,backend,shots=1,memory=True)
output = job.result().get_memory()[0]
return qc, output
## Test the function
for inp1 in ['0', '1']:
for inp2 in ['0', '1']:
qc, output = OR(inp1, inp2)
print('OR with inputs',inp1,inp2,'gives output',output)
display(qc.draw())
print('\n')
###Output
_____no_output_____
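###Markdown
One possible completion (a sketch, not the only valid answer) uses De Morgan's law, a OR b = NOT(NOT a AND NOT b): flip both inputs, apply a Toffoli onto qubit 2, then flip qubit 2. The demo below encodes '1' and '0' and should print '1'.
###Code
# A possible OR construction (sketch) via De Morgan's law.
qc_or_demo = QuantumCircuit(3, 1)
qc_or_demo.x(0)               # encode inp1 = '1', inp2 = '0'
qc_or_demo.x([0, 1])          # complement both inputs
qc_or_demo.ccx(0, 1, 2)       # qubit 2 <- (NOT inp1) AND (NOT inp2)
qc_or_demo.x(2)               # complement the result: qubit 2 <- inp1 OR inp2
qc_or_demo.measure(2, 0)
print('OR demo output:', execute(qc_or_demo, Aer.get_backend('qasm_simulator'), shots=1, memory=True).result().get_memory()[0])
###Output
_____no_output_____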
###Markdown
Part 2: AND gate on Quantum Computer<div style="background: E8E7EB; border-radius: 5px;-moz-border-radius: 5px;"> <p style="background: 800080; border-radius: 5px 5px 0px 0px; padding: 10px 0px 10px 10px; font-size:18px; color:white; ">Goal <p style=" padding: 0px 0px 10px 10px; font-size:16px;">Execute AND gate on two quantum systems and learn how the different circuit properties affect the result.In Part 1 you made an `AND` gate from quantum gates, and executed it on the simulator. Here in Part 2 you will do it again, but instead run the circuits on a real quantum computer. When using a real quantum system, one thing you should keep in mind is that present day quantum computers are not fault tolerant; they are noisy.The 'noise' in a quantum system is the collective effects of all the things that should not happen, but nevertheless do. Noise results in outputs that are not always what we would expect. There is noise associated with all processes in a quantum circuit: preparing the initial state, applying gates, and qubit measurement. For the gates, noise levels can vary between different gates and between different qubits. `cx` gates are typically more noisy than any single qubit gate.Here we will use the quantum systems from the IBM Quantum Experience. If you do not have access, you can get it [here](https://qiskit.org/documentation/install.htmlaccess-ibm-quantum-systems).Now that you are ready to use the real quantum computer, let's begin. Step 1. Choosing a device First load the account from the credentials saved on disk by running the following cell:
###Code
IBMQ.load_account()
###Output
_____no_output_____
###Markdown
After your account is loaded, you can see the list of providers that you have access to by running the cell below. Each provider offers different systems for use. For open users, there is typically only one provider `ibm-q/open/main`:
###Code
IBMQ.providers()
###Output
_____no_output_____
###Markdown
Let us grab the provider using `get_provider`. The command, `provider.backends( )` shows you the list of backends that are available to you from the selected provider.
###Code
provider = IBMQ.get_provider('ibm-q')
provider.backends()
###Output
_____no_output_____
###Markdown
Among these options, you may pick one of the systems to run your circuits on. All except the `ibmq_qasm_simulator` are real quantum computers that you can use. The differences among these systems reside in the number of qubits, their connectivity, and the system error rates. Upon executing the following cell you will be presented with a widget that displays all of the information about your choice of backend. You can obtain the information that you need by clicking on the tabs. For example, backend status, number of qubits and the connectivity are under the `configuration` tab, whereas the `Error Map` tab will reveal the latest noise information for the system.
###Code
import qiskit.tools.jupyter
backend_ex = provider.get_backend('ibmq_16_melbourne')
backend_ex
###Output
_____no_output_____
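###Markdown
The same configuration details shown by the widget can also be read programmatically, which is handy when scripting backend selection. A small sketch (the printed fields are just examples of what is available):
###Code
# Optional sketch: inspect a backend's configuration without the widget.
config_ex = backend_ex.configuration()
print('backend          :', backend_ex.name())
print('number of qubits :', config_ex.n_qubits)
print('coupling map     :', config_ex.coupling_map)
print('basis gates      :', config_ex.basis_gates)
###Output
_____no_output_____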
###Markdown
For our AND gate circuit, we need a backend with three or more qubits, which is true for all the real systems except for `ibmq_armonk`. Below is an example of how to filter backends, where we filter for number of qubits, and remove simulators:
###Code
backends = provider.backends(filters = lambda x:x.configuration().n_qubits >= 2 and not x.configuration().simulator
and x.status().operational==True)
backends
###Output
_____no_output_____
###Markdown
One convenient way to choose a system is using the `least_busy` function to get the backend with the lowest number of jobs in queue. The downside is that the result might have relatively poor accuracy because, not surprisingly, the lowest error rate systems are the most popular.
###Code
from qiskit.providers.ibmq import least_busy
backend = least_busy(provider.backends(filters=lambda x: x.configuration().n_qubits >= 2 and
not x.configuration().simulator and x.status().operational==True))
backend
###Output
_____no_output_____
###Markdown
Real quantum computers need to be recalibrated regularly, and the fidelity of a specific qubit or gate can change over time. Therefore, which system will produce results with less error can vary. `ibmq_athens` tends to show relatively low error rates. In this exercise, we select two systems: `ibmq_athens` for its low error rates, and `ibmqx2` for its additional connectivity, in particular its triangular connectivity, which will be useful for circuits with Toffoli gates.
###Code
# run this cell
backend1 = provider.get_backend('ibmqx2')
backend2 = provider.get_backend('ibmq_athens')
###Output
_____no_output_____
###Markdown
Step 2. Define AND function for a real deviceWe now define the AND function. We choose 8192 as the number of shots, the maximum number of shots for open IBM systems, to reduce the variance in the final result. Related information is well explained [here](https://quantum-computing.ibm.com/docs/manage/backends/configuration) Qiskit Transpiler It is important to know that when running a circuit on a real quantum computer, circuits typically need to be transpiled for the backend that you select so that the circuit contains only those gates that the quantum computer can actually perform. Primarily this involves the addition of swap gates so that two-qubit gates in the circuit map to those pairs of qubits on the device that can actually perform these gates. The following cell shows the AND gate represented as a Toffoli gate decomposed into single- and two-qubit gates, which are the only types of gate that can be run on IBM hardware. Provided that CNOT gates can be performed between all three qubits (a triangle topology), no other gates are required.
###Code
qc_and = QuantumCircuit(3)
qc_and.ccx(0,1,2)
print('AND gate')
display(qc_and.draw())
print('\n\nTranspiled AND gate with all the required connectivity')
qc_and.decompose().draw()
###Output
AND gate
###Markdown
In addition, there are often optimizations that the transpiler can perform that reduce the overall gate count, and thus total length of the input circuits. Note that the addition of swaps to match the device topology, and optimizations for reducing the length of a circuit, are at odds with each other. In what follows we will make use of `initial_layout`, which allows us to pick the qubits on a device used for the computation, and `optimization_level`, an argument that allows selecting from internal defaults for circuit swap mapping and optimization methods to perform. You can learn more about the transpile function in depth [here](https://qiskit.org/documentation/apidoc/transpiler.html). Let's modify the AND function from Part 1 for the real system, with the transpile step included.
###Code
from qiskit.tools.monitor import job_monitor
# run the cell to define AND gate for real quantum system
def AND(inp1, inp2, backend, layout):
qc = QuantumCircuit(3, 1)
qc.reset(range(3))
if inp1=='1':
qc.x(0)
if inp2=='1':
qc.x(1)
qc.barrier()
qc.ccx(0, 1, 2)
qc.barrier()
qc.measure(2, 0)
qc_trans = transpile(qc, backend, initial_layout=layout, optimization_level=3)
job = execute(qc_trans, backend, shots=8192)
print(job.job_id())
job_monitor(job)
output = job.result().get_counts()
return qc_trans, output
###Output
_____no_output_____
###Markdown
When you submit jobs to quantum systems, `job_monitor` will start tracking where your submitted job is in the pipeline. Case A) Three qubits on ibmqx2 with the triangle connectivity First, examine `ibmqx2` using the widget introduced earlier. Find a group of three qubits with triangle connection and determine your initial layout.
###Code
# run this cell for the widget
backend1
###Output
_____no_output_____
###Markdown
&128211; Assign your choice of layout to the list variable layout1 in the cell below
###Code
# Assign your choice of the initial_layout to the variable layout1 as a list
# ex) layout1 = [0,2,4]
layout1 =
###Output
_____no_output_____
###Markdown
&128211; Describe the reason for your choice of initial layout. Execute `AND` gate on `ibmqx2` by running the cell below.
###Code
output1_all = []
qc_trans1_all = []
prob1_all = []
worst = 1
best = 0
for input1 in ['0','1']:
for input2 in ['0','1']:
qc_trans1, output1 = AND(input1, input2, backend1, layout1)
output1_all.append(output1)
qc_trans1_all.append(qc_trans1)
prob = output1[str(int( input1=='1' and input2=='1' ))]/8192
prob1_all.append(prob)
print('\nProbability of correct answer for inputs',input1,input2)
print( '{:.2f}'.format(prob) )
print('---------------------------------')
worst = min(worst,prob)
best = max(best, prob)
print('')
print('\nThe highest of these probabilities was {:.2f}'.format(best))
print('The lowest of these probabilities was {:.2f}'.format(worst))
###Output
_____no_output_____
###Markdown
Once your job has finished running, you can then easily access the results via:```pythonresults = backend.retrieve_job('JOB_ID').result()```Your job_ids will be printed out through the `AND` function defined above. You can also find the job_ids from the results under your `IQX` account. More information can be found [here](https://quantum-computing.ibm.com/docs/manage/account/ibmq). Case B) Three qubits on ibmq_athens for the linear nearest neighbor connectivity Examine `ibmq_athens` through the widget by running the cell below.
###Code
backend2
###Output
_____no_output_____
###Markdown
&128211; Find three qubits with the linear nearest neighbor connectivity. Determine the initial layout considering the error map and assign it to the list variable layout2.
###Code
layout2 = []
###Output
_____no_output_____
###Markdown
&128211; Describe the reason for your choice of initial layout. Execute `AND` gate on `ibmq_athens` by running the cell below.
###Code
output2_all = []
qc_trans2_all = []
prob2_all = []
worst = 1
best = 0
for input1 in ['0','1']:
for input2 in ['0','1']:
qc_trans2, output2 = AND(input1, input2, backend2, layout2)
output2_all.append(output2)
qc_trans2_all.append(qc_trans2)
prob = output2[str(int( input1=='1' and input2=='1' ))]/8192
prob2_all.append(prob)
print('\nProbability of correct answer for inputs',input1,input2)
print('{:.2f}'.format(prob) )
print('---------------------------------')
worst = min(worst,prob)
best = max(best, prob)
print('')
print('\nThe highest of these probabilities was {:.2f}'.format(best))
print('The lowest of these probabilities was {:.2f}'.format(worst))
###Output
_____no_output_____
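###Markdown
Optionally, the raw counts from the last run can be visualized with `plot_histogram`, which was imported at the top of the notebook — a quick sketch using the final (inputs 1 1) result:
###Code
# Optional sketch: visualize the measured counts from the last ibmq_athens run.
plot_histogram(output2_all[-1])
###Output
_____no_output_____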
###Markdown
Step 3. Interpret the result There are several quantities that distinguish the circuits. Chief among them is the **circuit depth**. Circuit depth is defined in detail [here](https://qiskit.org/documentation/apidoc/circuit.html) (See the Supplementary Information and click the Quantum Circuit Properties tab). Circuit depth is proportional to the number of gates in a circuit, and loosely corresponds to the runtime of the circuit on hardware. Therefore, circuit depth is an easy-to-compute metric that can be used to estimate the fidelity of an executed circuit. A second important value is the number of **nonlocal** (multi-qubit) **gates** in a circuit. On IBM Quantum systems, the only nonlocal gate that can physically be performed is the CNOT gate. Recall that CNOT gates are the most expensive gates to perform, and thus the total number of these gates also serves as a good benchmark for the accuracy of the final output. A) Circuit depth and result accuracy Running the cells below will display the four transpiled AND gate circuit diagrams with the corresponding inputs that were executed on `ibmq_athens`, together with their circuit depths and the success probability of producing the correct answer.
###Code
print('Transpiled AND gate circuit for ibmq_athens with input 0 0')
print('\nThe circuit depth : {}'.format (qc_trans2_all[0].depth()))
print('# of nonlocal gates : {}'.format (qc_trans2_all[0].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob2_all[0]) )
qc_trans2_all[0].draw()
print('Transpiled AND gate circuit for ibmq_athens with input 0 1')
print('\nThe circuit depth : {}'.format (qc_trans2_all[1].depth()))
print('# of nonlocal gates : {}'.format (qc_trans2_all[1].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob2_all[1]) )
qc_trans2_all[1].draw()
print('Transpiled AND gate circuit for ibmq_athens with input 1 0')
print('\nThe circuit depth : {}'.format (qc_trans2_all[2].depth()))
print('# of nonlocal gates : {}'.format (qc_trans2_all[2].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob2_all[2]) )
qc_trans2_all[2].draw()
print('Transpiled AND gate circuit for ibmq_athens with input 1 1')
print('\nThe circuit depth : {}'.format (qc_trans2_all[3].depth()))
print('# of nonlocal gates : {}'.format (qc_trans2_all[3].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob2_all[3]) )
qc_trans2_all[3].draw()
###Output
_____no_output_____
###Markdown
&128211; Explain the reason for the dissimilarity of the circuits. Describe the relation between the properties of a circuit and the accuracy of its outcomes. B) Qubit connectivity and circuit depth Investigate the transpiled circuits for `ibmqx2` by running the cells below.
###Code
print('Transpiled AND gate circuit for ibmqx2 with input 0 0')
print('\nThe circuit depth : {}'.format (qc_trans1_all[0].depth()))
print('# of nonlocal gates : {}'.format (qc_trans1_all[0].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob1_all[0]) )
qc_trans1_all[0].draw()
print('Transpiled AND gate circuit for ibmqx2 with input 0 1')
print('\nThe circuit depth : {}'.format (qc_trans1_all[1].depth()))
print('# of nonlocal gates : {}'.format (qc_trans1_all[1].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob1_all[1]) )
qc_trans1_all[1].draw()
print('Transpiled AND gate circuit for ibmqx2 with input 1 0')
print('\nThe circuit depth : {}'.format (qc_trans1_all[2].depth()))
print('# of nonlocal gates : {}'.format (qc_trans1_all[2].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob1_all[2]) )
qc_trans1_all[2].draw()
print('Transpiled AND gate circuit for ibmqx2 with input 1 1')
print('\nThe circuit depth : {}'.format (qc_trans1_all[3].depth()))
print('# of nonlocal gates : {}'.format (qc_trans1_all[3].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob1_all[3]) )
qc_trans1_all[3].draw()
###Output
_____no_output_____
###Markdown
<h1 style="font-size:35px; color:black; ">Lab 1 Quantum Circuits Prerequisite- [Qiskit basics](https://qiskit.org/documentation/tutorials/circuits/1_getting_started_with_qiskit.html)- [Ch.1.2 The Atoms of Computation](https://qiskit.org/textbook/ch-states/atoms-computation.html)Other relevant materials- [Access IBM Quantum Systems](https://qiskit.org/documentation/install.htmlaccess-ibm-quantum-systems)- [IBM Quantum Systems Configuration](https://quantum-computing.ibm.com/docs/manage/backends/configuration)- [Transpile](https://qiskit.org/documentation/apidoc/transpiler.html)- [IBM Quantum account](https://quantum-computing.ibm.com/docs/manage/account/ibmq)- [Quantum Circuits](https://qiskit.org/documentation/apidoc/circuit.html)
###Code
from qiskit import *
from qiskit.visualization import plot_histogram
import numpy as np
###Output
_____no_output_____
###Markdown
Part 1: Classical logic gates with quantum circuits<div style="background: E8E7EB; border-radius: 5px;-moz-border-radius: 5px;"> <p style="background: 800080; border-radius: 5px 5px 0px 0px; padding: 10px 0px 10px 10px; font-size:18px; color:white; ">Goal <p style=" padding: 0px 0px 10px 10px; font-size:16px;">Create quantum circuit functions that can compute the XOR, AND, NAND and OR gates using the NOT gate (expressed as x in Qiskit), the CNOT gate (expressed as cx in Qiskit) and the Toffoli gate (expressed as ccx in Qiskit) .An implementation of the `NOT` gate is provided as an example.
###Code
def NOT(inp):
"""An NOT gate.
Parameters:
inp (str): Input, encoded in qubit 0.
Returns:
QuantumCircuit: Output NOT circuit.
str: Output value measured from qubit 0.
"""
qc = QuantumCircuit(1, 1) # A quantum circuit with a single qubit and a single classical bit
qc.reset(0)
# We encode '0' as the qubit state |0⟩, and '1' as |1⟩
# Since the qubit is initially |0⟩, we don't need to do anything for an input of '0'
# For an input of '1', we do an x to rotate the |0⟩ to |1⟩
if inp=='1':
qc.x(0)
# barrier between input state and gate operation
qc.barrier()
# Now we've encoded the input, we can do a NOT on it using x
qc.x(0)
#barrier between gate operation and measurement
qc.barrier()
# Finally, we extract the |0⟩/|1⟩ output of the qubit and encode it in the bit c[0]
qc.measure(0,0)
qc.draw('mpl')
# We'll run the program on a simulator
backend = Aer.get_backend('aer_simulator')
# Since the output will be deterministic, we can use just a single shot to get it
job = backend.run(qc, shots=1, memory=True)
output = job.result().get_memory()[0]
return qc, output
## Test the function
for inp in ['0', '1']:
qc, out = NOT(inp)
print('NOT with input',inp,'gives output',out)
display(qc.draw())
print('\n')
###Output
NOT with input 0 gives output 1
###Markdown
&128211; XOR gateTakes two binary strings as input and gives one as output.The output is '0' when the inputs are equal and '1' otherwise.
###Code
def XOR(inp1,inp2):
"""An XOR gate.
Parameters:
inpt1 (str): Input 1, encoded in qubit 0.
inpt2 (str): Input 2, encoded in qubit 1.
Returns:
QuantumCircuit: Output XOR circuit.
str: Output value measured from qubit 1.
"""
qc = QuantumCircuit(2, 1)
qc.reset(range(2))
if inp1=='1':
qc.x(0)
if inp2=='1':
qc.x(1)
# barrier between input state and gate operation
qc.barrier()
# this is where your program for quantum XOR gate goes
# barrier between input state and gate operation
qc.barrier()
qc.measure(1,0) # output from qubit 1 is measured
#We'll run the program on a simulator
backend = Aer.get_backend('aer_simulator')
#Since the output will be deterministic, we can use just a single shot to get it
job = backend.run(qc, shots=1, memory=True)
output = job.result().get_memory()[0]
return qc, output
## Test the function
for inp1 in ['0', '1']:
for inp2 in ['0', '1']:
qc, output = XOR(inp1, inp2)
print('XOR with inputs',inp1,inp2,'gives output',output)
display(qc.draw())
print('\n')
###Output
_____no_output_____
###Markdown
&128211; AND gateTakes two binary strings as input and gives one as output.The output is `'1'` only when both the inputs are `'1'`.
###Code
def AND(inp1,inp2):
"""An AND gate.
Parameters:
inpt1 (str): Input 1, encoded in qubit 0.
inpt2 (str): Input 2, encoded in qubit 1.
Returns:
QuantumCircuit: Output XOR circuit.
str: Output value measured from qubit 2.
"""
qc = QuantumCircuit(3, 1)
qc.reset(range(2))
if inp1=='1':
qc.x(0)
if inp2=='1':
qc.x(1)
qc.barrier()
# this is where your program for quantum AND gate goes
qc.barrier()
qc.measure(2, 0) # output from qubit 2 is measured
# We'll run the program on a simulator
backend = Aer.get_backend('aer_simulator')
# Since the output will be deterministic, we can use just a single shot to get it
job = backend.run(qc, shots=1, memory=True)
output = job.result().get_memory()[0]
return qc, output
## Test the function
for inp1 in ['0', '1']:
for inp2 in ['0', '1']:
qc, output = AND(inp1, inp2)
print('AND with inputs',inp1,inp2,'gives output',output)
display(qc.draw())
print('\n')
###Output
_____no_output_____
###Markdown
&128211; NAND gateTakes two binary strings as input and gives one as output.The output is `'0'` only when both the inputs are `'1'`.
###Code
def NAND(inp1,inp2):
"""An NAND gate.
Parameters:
inpt1 (str): Input 1, encoded in qubit 0.
inpt2 (str): Input 2, encoded in qubit 1.
Returns:
QuantumCircuit: Output NAND circuit.
str: Output value measured from qubit 2.
"""
qc = QuantumCircuit(3, 1)
qc.reset(range(3))
if inp1=='1':
qc.x(0)
if inp2=='1':
qc.x(1)
qc.barrier()
# this is where your program for quantum NAND gate goes
qc.barrier()
qc.measure(2, 0) # output from qubit 2 is measured
# We'll run the program on a simulator
backend = Aer.get_backend('aer_simulator')
# Since the output will be deterministic, we can use just a single shot to get it
job = backend.run(qc,shots=1,memory=True)
output = job.result().get_memory()[0]
return qc, output
## Test the function
for inp1 in ['0', '1']:
for inp2 in ['0', '1']:
qc, output = NAND(inp1, inp2)
print('NAND with inputs',inp1,inp2,'gives output',output)
display(qc.draw())
print('\n')
###Output
_____no_output_____
###Markdown
&128211; OR gateTakes two binary strings as input and gives one as output.The output is '1' if either input is '1'.
###Code
def OR(inp1,inp2):
"""An OR gate.
Parameters:
inpt1 (str): Input 1, encoded in qubit 0.
inpt2 (str): Input 2, encoded in qubit 1.
Returns:
QuantumCircuit: Output XOR circuit.
str: Output value measured from qubit 2.
"""
qc = QuantumCircuit(3, 1)
qc.reset(range(3))
if inp1=='1':
qc.x(0)
if inp2=='1':
qc.x(1)
qc.barrier()
# this is where your program for quantum OR gate goes
qc.barrier()
qc.measure(2, 0) # output from qubit 2 is measured
# We'll run the program on a simulator
backend = Aer.get_backend('aer_simulator')
# Since the output will be deterministic, we can use just a single shot to get it
job = backend.run(qc,shots=1,memory=True)
output = job.result().get_memory()[0]
return qc, output
## Test the function
for inp1 in ['0', '1']:
for inp2 in ['0', '1']:
qc, output = OR(inp1, inp2)
print('OR with inputs',inp1,inp2,'gives output',output)
display(qc.draw())
print('\n')
###Output
_____no_output_____
###Markdown
Part 2: AND gate on Quantum Computer<div style="background: E8E7EB; border-radius: 5px;-moz-border-radius: 5px;"> <p style="background: 800080; border-radius: 5px 5px 0px 0px; padding: 10px 0px 10px 10px; font-size:18px; color:white; ">Goal <p style=" padding: 0px 0px 10px 10px; font-size:16px;">Execute AND gate on a real quantum system and learn how the noise properties affect the result.In Part 1 you made an `AND` gate from quantum gates, and executed it on the simulator. Here in Part 2 you will do it again, but instead run the circuits on a real quantum computer. When using a real quantum system, one thing you should keep in mind is that present day quantum computers are not fault tolerant; they are noisy.The 'noise' in a quantum system is the collective effects of all the things that should not happen, but nevertheless do. Noise results in outputs that are not always what we would expect. There is noise associated with all processes in a quantum circuit: preparing the initial state, applying gates, and qubit measurement. For the gates, noise levels can vary between different gates and between different qubits. `cx` gates are typically more noisy than any single qubit gate.Here we will use the quantum systems from the IBM Quantum Experience. If you do not have access, you can get it [here](https://qiskit.org/documentation/install.htmlaccess-ibm-quantum-systems).Now that you are ready to use the real quantum computer, let's begin. Step 1. Choosing a device First load the account from the credentials saved on disk by running the following cell:
###Code
IBMQ.load_account()
###Output
_____no_output_____
###Markdown
After your account is loaded, you can see the list of providers that you have access to by running the cell below. Each provider offers different systems for use. For open users, there is typically only one provider `ibm-q/open/main`:
###Code
IBMQ.providers()
###Output
_____no_output_____
###Markdown
Let us grab the provider using `get_provider`. The command, `provider.backends( )` shows you the list of backends that are available to you from the selected provider.
###Code
provider = IBMQ.get_provider('ibm-q')
provider.backends()
###Output
_____no_output_____
###Markdown
Among these options, you may pick one of the systems to run your circuits on. All except the `ibmq_qasm_simulator` are real quantum computers that you can use. The differences among these systems reside in the number of qubits, their connectivity, and the system error rates. Upon executing the following cell you will be presented with a widget that displays all of the information about your choice of backend. You can obtain the information that you need by clicking on the tabs. For example, backend status, number of qubits and the connectivity are under the `configuration` tab, whereas the `Error Map` tab will reveal the latest noise information for the system.
###Code
import qiskit.tools.jupyter
backend_ex = provider.get_backend('ibmq_lima')
backend_ex
###Output
_____no_output_____
###Markdown
For our AND gate circuit, we need a backend with three or more qubits, which is true for all the real systems except for `ibmq_armonk`. Below is an example of how to filter backends, where we filter for number of qubits, and remove simulators:
###Code
backends = provider.backends(filters = lambda x:x.configuration().n_qubits >= 2 and not x.configuration().simulator
and x.status().operational==True)
backends
###Output
_____no_output_____
###Markdown
One convenient way to choose a system is using the `least_busy` function to get the backend with the lowest number of jobs in queue. The downside is that the result might have relatively poor accuracy because, not surprisingly, the lowest error rate systems are the most popular.
###Code
from qiskit.providers.ibmq import least_busy
backend = least_busy(provider.backends(filters=lambda x: x.configuration().n_qubits >= 2 and
not x.configuration().simulator and x.status().operational==True))
backend
###Output
_____no_output_____
###Markdown
Real quantum computers need to be recalibrated regularly, and the fidelity of a specific qubit or gate can change over time. Therefore, which system would produce results with less error can vary. In this exercise, we select one of the IBM Quantum systems: `ibmq_quito`.
###Code
# run this cell
backend = provider.get_backend('ibmq_quito')
###Output
_____no_output_____
###Markdown
Step 2. Define AND function for a real deviceWe now define the AND function. We choose 8192 as the number of shots, the maximum number of shots for open IBM systems, to reduce the variance in the final result. Related information is well explained [here](https://quantum-computing.ibm.com/docs/manage/backends/configuration). Qiskit Transpiler It is important to know that when running a circuit on a real quantum computer, circuits typically need to be transpiled for the backend that you select so that the circuit contains only those gates that the quantum computer can actually perform. Primarily this involves the addition of swap gates so that two-qubit gates in the circuit map to those pairs of qubits on the device that can actually perform these gates. The following cell shows the AND gate represented as a Toffoli gate decomposed into single- and two-qubit gates, which are the only types of gate that can be run on IBM hardware.
###Code
qc_and = QuantumCircuit(3)
qc_and.ccx(0,1,2)
print('AND gate')
display(qc_and.draw())
print('\n\nTranspiled AND gate with all the required connectivity')
qc_and.decompose().draw()
###Output
AND gate
###Markdown
In addition, there are often optimizations that the transpiler can perform that reduce the overall gate count, and thus total length of the input circuits. Note that the addition of swaps to match the device topology, and optimizations for reducing the length of a circuit, are at odds with each other. In what follows we will make use of `initial_layout`, which allows us to pick the qubits on a device used for the computation, and `optimization_level`, an argument that allows selecting from internal defaults for circuit swap mapping and optimization methods to perform. You can learn more about the transpile function in depth [here](https://qiskit.org/documentation/apidoc/transpiler.html). Let's modify the AND function from Part 1 for the real system, with the transpile step included.
###Code
from qiskit.tools.monitor import job_monitor
# run the cell to define AND gate for real quantum system
def AND(inp1, inp2, backend, layout):
qc = QuantumCircuit(3, 1)
qc.reset(range(3))
if inp1=='1':
qc.x(0)
if inp2=='1':
qc.x(1)
qc.barrier()
qc.ccx(0, 1, 2)
qc.barrier()
qc.measure(2, 0)
qc_trans = transpile(qc, backend, initial_layout=layout, optimization_level=3)
job = backend.run(qc_trans, shots=8192)
print(job.job_id())
job_monitor(job)
output = job.result().get_counts()
return qc_trans, output
###Output
_____no_output_____
###Markdown
When you submit jobs to quantum systems, `job_monitor` will start tracking where your submitted job is in the pipeline. First, examine `ibmq_quito` through the widget by running the cell below.
###Code
backend
###Output
_____no_output_____
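###Markdown
Besides the widget's `Error Map` tab, the calibration data can also be pulled programmatically, which may help when picking the layout. A small sketch (the qubit pair `[0, 1]` is just an example; use pairs that appear in the printed coupling map):
###Code
# Optional sketch: read a few error-map numbers directly from the backend.
config = backend.configuration()
props = backend.properties()
print('coupling map:', config.coupling_map)
for q in range(config.n_qubits):
    print('qubit {} readout error: {:.3f}'.format(q, props.readout_error(q)))
# Example two-qubit gate error; the pair must exist in the coupling map above.
print('cx error on pair [0, 1]: {:.3f}'.format(props.gate_error('cx', [0, 1])))
###Output
_____no_output_____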
###Markdown
&128211; Determine a three-qubit initial layout considering the error map and assign it to the list variable layout.
###Code
layout =
###Output
_____no_output_____
###Markdown
&128211; Describe the reason for your choice of initial layout.**your answer:** Execute `AND` gate on `ibmq_quito` by running the cell below.
###Code
output_all = []
qc_trans_all = []
prob_all = []
worst = 1
best = 0
for input1 in ['0','1']:
for input2 in ['0','1']:
qc_trans, output = AND(input1, input2, backend, layout)
output_all.append(output)
qc_trans_all.append(qc_trans)
prob = output[str(int( input1=='1' and input2=='1' ))]/8192
prob_all.append(prob)
print('\nProbability of correct answer for inputs',input1,input2)
print('{:.2f}'.format(prob) )
print('---------------------------------')
worst = min(worst,prob)
best = max(best, prob)
print('')
print('\nThe highest of these probabilities was {:.2f}'.format(best))
print('The lowest of these probabilities was {:.2f}'.format(worst))
###Output
_____no_output_____
###Markdown
Step 3. Interpret the result There are several quantities that distinguish the circuits. Chief among them is the **circuit depth**. Circuit depth is defined in detail [here](https://qiskit.org/documentation/apidoc/circuit.html) (See the Supplementary Information and click the Quantum Circuit Properties tab). Circuit depth is proportional to the number of gates in a circuit, and loosely corresponds to the runtime of the circuit on hardware. Therefore, circuit depth is an easy-to-compute metric that can be used to estimate the fidelity of an executed circuit. A second important value is the number of **nonlocal** (multi-qubit) **gates** in a circuit. On IBM Quantum systems, the only nonlocal gate that can physically be performed is the CNOT gate. Recall that CNOT gates are the most expensive gates to perform, and thus the total number of these gates also serves as a good benchmark for the accuracy of the final output. Circuit depth and result accuracy Running the cells below will display the four transpiled AND gate circuit diagrams with the corresponding inputs that were executed on `ibmq_quito`, together with their circuit depths and the success probability of producing the correct answer.
###Code
print('Transpiled AND gate circuit for ibmq_quito with input 0 0')
print('\nThe circuit depth : {}'.format (qc_trans_all[0].depth()))
print('# of nonlocal gates : {}'.format (qc_trans_all[0].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob_all[0]) )
qc_trans_all[0].draw('mpl')
print('Transpiled AND gate circuit for ibmq_quito with input 0 1')
print('\nThe circuit depth : {}'.format (qc_trans_all[1].depth()))
print('# of nonlocal gates : {}'.format (qc_trans_all[1].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob_all[1]) )
qc_trans_all[1].draw('mpl')
print('Transpiled AND gate circuit for ibmq_quito with input 1 0')
print('\nThe circuit depth : {}'.format (qc_trans_all[2].depth()))
print('# of nonlocal gates : {}'.format (qc_trans_all[2].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob_all[2]) )
qc_trans_all[2].draw('mpl')
print('Transpiled AND gate circuit for ibmq_quito with input 1 1')
print('\nThe circuit depth : {}'.format (qc_trans_all[3].depth()))
print('# of nonlocal gates : {}'.format (qc_trans_all[3].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob_all[3]) )
qc_trans_all[3].draw('mpl')
###Output
_____no_output_____
###Markdown
<h1 style="font-size:35px; color:black; ">Lab 1 Quantum Circuits Prerequisite- [Qiskit basics](https://qiskit.org/documentation/tutorials/circuits/1_getting_started_with_qiskit.html)- [Ch.1.2 The Atoms of Computation](https://qiskit.org/textbook/ch-states/atoms-computation.html)Other relevant materials- [Access IBM Quantum Systems](https://qiskit.org/documentation/install.htmlaccess-ibm-quantum-systems)- [IBM Quantum Systems Configuration](https://quantum-computing.ibm.com/docs/manage/backends/configuration)- [Transpile](https://qiskit.org/documentation/apidoc/transpiler.html)- [IBM Quantum account](https://quantum-computing.ibm.com/docs/manage/account/ibmq)- [Quantum Circuits](https://qiskit.org/documentation/apidoc/circuit.html)
###Code
from qiskit import *
from qiskit.visualization import plot_histogram
import numpy as np
###Output
_____no_output_____
###Markdown
Part 1: Classical logic gates with quantum circuits<div style="background: E8E7EB; border-radius: 5px;-moz-border-radius: 5px;"> <p style="background: 800080; border-radius: 5px 5px 0px 0px; padding: 10px 0px 10px 10px; font-size:18px; color:white; ">Goal <p style=" padding: 0px 0px 10px 10px; font-size:16px;">Create quantum circuit functions that can compute the XOR, AND, NAND and OR gates using the NOT gate (expressed as x in Qiskit), the CNOT gate (expressed as cx in Qiskit) and the Toffoli gate (expressed as ccx in Qiskit) .An implementation of the `NOT` gate is provided as an example.
###Code
def NOT(inp):
"""An NOT gate.
Parameters:
inp (str): Input, encoded in qubit 0.
Returns:
QuantumCircuit: Output NOT circuit.
str: Output value measured from qubit 0.
"""
qc = QuantumCircuit(1, 1) # A quantum circuit with a single qubit and a single classical bit
qc.reset(0)
# We encode '0' as the qubit state |0⟩, and '1' as |1⟩
# Since the qubit is initially |0⟩, we don't need to do anything for an input of '0'
# For an input of '1', we do an x to rotate the |0⟩ to |1⟩
if inp=='1':
qc.x(0)
# barrier between input state and gate operation
qc.barrier()
# Now we've encoded the input, we can do a NOT on it using x
qc.x(0)
#barrier between gate operation and measurement
qc.barrier()
# Finally, we extract the |0⟩/|1⟩ output of the qubit and encode it in the bit c[0]
qc.measure(0,0)
qc.draw('mpl')
# We'll run the program on a simulator
backend = Aer.get_backend('qasm_simulator')
# Since the output will be deterministic, we can use just a single shot to get it
job = execute(qc, backend, shots=1, memory=True)
output = job.result().get_memory()[0]
return qc, output
## Test the function
for inp in ['0', '1']:
qc, out = NOT(inp)
print('NOT with input',inp,'gives output',out)
display(qc.draw())
print('\n')
###Output
NOT with input 0 gives output 1
###Markdown
&128211; XOR gateTakes two binary strings as input and gives one as output.The output is '0' when the inputs are equal and '1' otherwise.
###Code
def XOR(inp1,inp2):
"""An XOR gate.
Parameters:
inpt1 (str): Input 1, encoded in qubit 0.
inpt2 (str): Input 2, encoded in qubit 1.
Returns:
QuantumCircuit: Output XOR circuit.
str: Output value measured from qubit 1.
"""
qc = QuantumCircuit(2, 1)
qc.reset(range(2))
if inp1=='1':
qc.x(0)
if inp2=='1':
qc.x(1)
# barrier between input state and gate operation
qc.barrier()
# this is where your program for quantum XOR gate goes
# barrier between input state and gate operation
qc.barrier()
qc.measure(1,0) # output from qubit 1 is measured
#We'll run the program on a simulator
backend = Aer.get_backend('qasm_simulator')
#Since the output will be deterministic, we can use just a single shot to get it
job = execute(qc, backend, shots=1, memory=True)
output = job.result().get_memory()[0]
return qc, output
## Test the function
for inp1 in ['0', '1']:
for inp2 in ['0', '1']:
qc, output = XOR(inp1, inp2)
print('XOR with inputs',inp1,inp2,'gives output',output)
display(qc.draw())
print('\n')
###Output
_____no_output_____
###Markdown
&128211; AND gateTakes two binary strings as input and gives one as output.The output is `'1'` only when both the inputs are `'1'`.
###Code
def AND(inp1,inp2):
"""An AND gate.
Parameters:
inpt1 (str): Input 1, encoded in qubit 0.
inpt2 (str): Input 2, encoded in qubit 1.
Returns:
QuantumCircuit: Output XOR circuit.
str: Output value measured from qubit 2.
"""
qc = QuantumCircuit(3, 1)
qc.reset(range(2))
if inp1=='1':
qc.x(0)
if inp2=='1':
qc.x(1)
qc.barrier()
# this is where your program for quantum AND gate goes
qc.barrier()
qc.measure(2, 0) # output from qubit 2 is measured
# We'll run the program on a simulator
backend = Aer.get_backend('qasm_simulator')
# Since the output will be deterministic, we can use just a single shot to get it
job = execute(qc, backend, shots=1, memory=True)
output = job.result().get_memory()[0]
return qc, output
## Test the function
for inp1 in ['0', '1']:
for inp2 in ['0', '1']:
qc, output = AND(inp1, inp2)
print('AND with inputs',inp1,inp2,'gives output',output)
display(qc.draw())
print('\n')
###Output
_____no_output_____
###Markdown
&128211; NAND gateTakes two binary strings as input and gives one as output.The output is `'0'` only when both the inputs are `'1'`.
###Code
def NAND(inp1,inp2):
"""An NAND gate.
Parameters:
inpt1 (str): Input 1, encoded in qubit 0.
inpt2 (str): Input 2, encoded in qubit 1.
Returns:
QuantumCircuit: Output NAND circuit.
str: Output value measured from qubit 2.
"""
qc = QuantumCircuit(3, 1)
qc.reset(range(3))
if inp1=='1':
qc.x(0)
if inp2=='1':
qc.x(1)
qc.barrier()
# this is where your program for quantum NAND gate goes
qc.barrier()
qc.measure(2, 0) # output from qubit 2 is measured
# We'll run the program on a simulator
backend = Aer.get_backend('qasm_simulator')
# Since the output will be deterministic, we can use just a single shot to get it
job = execute(qc,backend,shots=1,memory=True)
output = job.result().get_memory()[0]
return qc, output
## Test the function
for inp1 in ['0', '1']:
for inp2 in ['0', '1']:
qc, output = NAND(inp1, inp2)
print('NAND with inputs',inp1,inp2,'gives output',output)
display(qc.draw())
print('\n')
###Output
_____no_output_____
###Markdown
&128211; OR gateTakes two binary strings as input and gives one as output.The output is '1' if either input is '1'.
###Code
def OR(inp1,inp2):
"""An OR gate.
Parameters:
inpt1 (str): Input 1, encoded in qubit 0.
inpt2 (str): Input 2, encoded in qubit 1.
Returns:
QuantumCircuit: Output XOR circuit.
str: Output value measured from qubit 2.
"""
qc = QuantumCircuit(3, 1)
qc.reset(range(3))
if inp1=='1':
qc.x(0)
if inp2=='1':
qc.x(1)
qc.barrier()
# this is where your program for quantum OR gate goes
qc.barrier()
qc.measure(2, 0) # output from qubit 2 is measured
# We'll run the program on a simulator
backend = Aer.get_backend('qasm_simulator')
# Since the output will be deterministic, we can use just a single shot to get it
job = execute(qc,backend,shots=1,memory=True)
output = job.result().get_memory()[0]
return qc, output
## Test the function
for inp1 in ['0', '1']:
for inp2 in ['0', '1']:
qc, output = OR(inp1, inp2)
print('OR with inputs',inp1,inp2,'gives output',output)
display(qc.draw())
print('\n')
###Output
_____no_output_____
###Markdown
Part 2: AND gate on Quantum Computer<div style="background: E8E7EB; border-radius: 5px;-moz-border-radius: 5px;"> <p style="background: 800080; border-radius: 5px 5px 0px 0px; padding: 10px 0px 10px 10px; font-size:18px; color:white; ">Goal <p style=" padding: 0px 0px 10px 10px; font-size:16px;">Execute AND gate on two quantum systems and learn how the different circuit properties affect the result.In Part 1 you made an `AND` gate from quantum gates, and executed it on the simulator. Here in Part 2 you will do it again, but instead run the circuits on a real quantum computer. When using a real quantum system, one thing you should keep in mind is that present day quantum computers are not fault tolerant; they are noisy.The 'noise' in a quantum system is the collective effects of all the things that should not happen, but nevertheless do. Noise results in outputs that are not always what we would expect. There is noise associated with all processes in a quantum circuit: preparing the initial state, applying gates, and qubit measurement. For the gates, noise levels can vary between different gates and between different qubits. `cx` gates are typically more noisy than any single qubit gate.Here we will use the quantum systems from the IBM Quantum Experience. If you do not have access, you can get it [here](https://qiskit.org/documentation/install.htmlaccess-ibm-quantum-systems).Now that you are ready to use the real quantum computer, let's begin. Step 1. Choosing a device First load the account from the credentials saved on disk by running the following cell:
###Code
IBMQ.load_account()
###Output
_____no_output_____
###Markdown
After your account is loaded, you can see the list of providers that you have access to by running the cell below. Each provider offers different systems for use. For open users, there is typically only one provider `ibm-q/open/main`:
###Code
IBMQ.providers()
###Output
_____no_output_____
###Markdown
Let us grab the provider using `get_provider`. The command, `provider.backends( )` shows you the list of backends that are available to you from the selected provider.
###Code
provider = IBMQ.get_provider('ibm-q')
provider.backends()
###Output
_____no_output_____
###Markdown
Among these options, you may pick one of the systems to run your circuits on. All except the `ibmq_qasm_simulator` are real quantum computers that you can use. The differences among these systems reside in the number of qubits, their connectivity, and the system error rates. Upon executing the following cell you will be presented with a widget that displays all of the information about your choice of backend. You can obtain the information that you need by clicking on the tabs. For example, backend status, number of qubits and the connectivity are under the `configuration` tab, whereas the `Error Map` tab will reveal the latest noise information for the system.
###Code
import qiskit.tools.jupyter
backend_ex = provider.get_backend('ibmq_16_melbourne')
backend_ex
###Output
_____no_output_____
###Markdown
For our AND gate circuit, we need a backend with three or more qubits, which is true for all the real systems except for `ibmq_armonk`. Below is an example of how to filter backends, where we filter for number of qubits, and remove simulators:
###Code
backends = provider.backends(filters = lambda x:x.configuration().n_qubits >= 2 and not x.configuration().simulator
and x.status().operational==True)
backends
###Output
_____no_output_____
###Markdown
One convenient way to choose a system is using the `least_busy` function to get the backend with the lowest number of jobs in queue. The downside is that the result might have relatively poor accuracy because, not surprisingly, the lowest error rate systems are the most popular.
###Code
from qiskit.providers.ibmq import least_busy
backend = least_busy(provider.backends(filters=lambda x: x.configuration().n_qubits >= 2 and
not x.configuration().simulator and x.status().operational==True))
backend
###Output
_____no_output_____
###Markdown
Real quantum computers need to be recalibrated regularly, and the fidelity of a specific qubit or gate can change over time. Therefore, which system produces results with the least error can vary. `ibmq_athens` tends to show relatively low error rates. In this exercise, we select two systems: `ibmq_athens` for its low error rates, and `ibmqx2` for its additional connectivity, in particular its triangular connectivity, which will be useful for circuits with Toffoli gates.
###Code
# run this cell
backend1 = provider.get_backend('ibmqx2')
backend2 = provider.get_backend('ibmq_athens')
###Output
_____no_output_____
###Markdown
Step 2. Define AND function for a real device We now define the AND function. We choose 8192 as the number of shots, the maximum number of shots for open IBM systems, to reduce the variance in the final result. Related information is well explained [here](https://quantum-computing.ibm.com/docs/manage/backends/configuration). Qiskit Transpiler It is important to know that when running a circuit on a real quantum computer, circuits typically need to be transpiled for the backend that you select so that the circuit contains only those gates that the quantum computer can actually perform. Primarily this involves the addition of swap gates so that two-qubit gates in the circuit map to those pairs of qubits on the device that can actually perform these gates. The following cell shows the AND gate represented as a Toffoli gate decomposed into single- and two-qubit gates, which are the only types of gate that can be run on IBM hardware. Provided that CNOT gates can be performed between all three qubits (a triangle topology), no other gates are required.
###Code
qc_and = QuantumCircuit(3)
qc_and.ccx(0,1,2)
print('AND gate')
display(qc_and.draw())
print('\n\nTranspiled AND gate with all the required connectivity')
qc_and.decompose().draw()
###Output
AND gate
###Markdown
In addition, there are often optimizations that the transpiler can perform that reduce the overall gate count, and thus the total length, of the input circuits. Note that the addition of swaps to match the device topology and the optimizations for reducing the length of a circuit are at odds with each other. In what follows we will make use of `initial_layout`, which allows us to pick the qubits on a device used for the computation, and `optimization_level`, an argument that selects among internal defaults for circuit swap mapping and optimization methods. You can learn more about the transpile function in depth [here](https://qiskit.org/documentation/apidoc/transpiler.html). Let's modify the AND function from Part 1 for the real system, with the transpile step included.
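To get a feel for this trade-off, the short sketch below (an illustration, not part of the lab template) transpiles the bare Toffoli circuit from the previous cell at each optimization level and prints the resulting depth and CNOT count. It assumes `backend2` from the earlier cell; the exact numbers will depend on the backend and its current calibration.

```python
from qiskit import QuantumCircuit, transpile

# Bare AND (Toffoli) circuit, as in the cell above
qc_demo = QuantumCircuit(3)
qc_demo.ccx(0, 1, 2)

# Compare the four optimization levels on one of the backends selected earlier.
# Depth and CNOT count vary with the backend topology and calibration data.
for level in range(4):
    transpiled = transpile(qc_demo, backend2, optimization_level=level)
    print('optimization_level =', level,
          '| depth =', transpiled.depth(),
          '| cx gates =', transpiled.count_ops().get('cx', 0))
```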
###Code
from qiskit.tools.monitor import job_monitor
# run the cell to define AND gate for real quantum system
def AND(inp1, inp2, backend, layout):
qc = QuantumCircuit(3, 1)
qc.reset(range(3))
if inp1=='1':
qc.x(0)
if inp2=='1':
qc.x(1)
qc.barrier()
qc.ccx(0, 1, 2)
qc.barrier()
qc.measure(2, 0)
qc_trans = transpile(qc, backend, initial_layout=layout, optimization_level=3)
job = execute(qc_trans, backend, shots=8192)
print(job.job_id())
job_monitor(job)
output = job.result().get_counts()
return qc_trans, output
###Output
_____no_output_____
###Markdown
When you submit jobs to quantum systems, `job_monitor` will start tracking where your submitted job is in the pipeline. Case A) Three qubits on ibmqx2 with the triangle connectivity First, examine `ibmqx2` using the widget introduced earlier. Find a group of three qubits with a triangle connection and determine your initial layout.
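If you would rather inspect the connectivity programmatically than read it off the widget, the backend's coupling map lists the qubit pairs that support a CNOT. Below is a small sketch (using the `backend1` object loaded earlier and plain `itertools`; the variable names are ours) that lists all triangles of pairwise-connected qubits:

```python
from itertools import combinations

# Directed pairs that support a CNOT on this device
coupling = backend1.configuration().coupling_map
pairs = {tuple(sorted(edge)) for edge in coupling}
n_qubits = backend1.configuration().n_qubits

# A triangle is any set of three qubits that are pairwise connected
triangles = [trio for trio in combinations(range(n_qubits), 3)
             if all(tuple(sorted(p)) in pairs for p in combinations(trio, 2))]
print('Triangles of connected qubits:', triangles)
```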
###Code
# run this cell for the widget
backend1
###Output
_____no_output_____
###Markdown
&128211; Assign your choice of layout to the list variable layout1 in the cell below
###Code
# Assign your choice of the initial_layout to the variable layout1 as a list
# ex) layout1 = [0,2,4]
layout1 =
###Output
_____no_output_____
###Markdown
&128211; Describe the reason for your choice of initial layout. Execute `AND` gate on `ibmqx2` by running the cell below.
###Code
output1_all = []
qc_trans1_all = []
prob1_all = []
worst = 1
best = 0
for input1 in ['0','1']:
for input2 in ['0','1']:
qc_trans1, output1 = AND(input1, input2, backend1, layout1)
output1_all.append(output1)
qc_trans1_all.append(qc_trans1)
prob = output1[str(int( input1=='1' and input2=='1' ))]/8192
prob1_all.append(prob)
print('\nProbability of correct answer for inputs',input1,input2)
print( '{:.2f}'.format(prob) )
print('---------------------------------')
worst = min(worst,prob)
best = max(best, prob)
print('')
print('\nThe highest of these probabilities was {:.2f}'.format(best))
print('The lowest of these probabilities was {:.2f}'.format(worst))
###Output
_____no_output_____
###Markdown
Once your job has finished running, you can easily access the results via `results = backend.retrieve_job('JOB_ID').result()`. Your job_ids will be printed out by the `AND` function defined above. You can also find the job_ids from the results under your `IQX` account. More information can be found [here](https://quantum-computing.ibm.com/docs/manage/account/ibmq). Case B) Three qubits on ibmq_athens for the linear nearest neighbor connectivity Examine `ibmq_athens` through the widget by running the cell below.
###Code
backend2
###Output
_____no_output_____
###Markdown
&128211; Find three qubits with the linear nearest neighbor connectivity. Determine the initial layout considering the error map and assign it to the list variable layout2.
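The calibration data behind the error map can also be queried directly. The sketch below (assuming the usual `BackendProperties` accessors are available on the provider you loaded) prints the readout error of each qubit and the CNOT error of each connected pair on `backend2`, which you can use to compare candidate chains of three neighbouring qubits:

```python
# Latest calibration data for the selected backend
props = backend2.properties()
config = backend2.configuration()

# Readout error of each qubit
for q in range(config.n_qubits):
    print('qubit {} readout error: {:.3f}'.format(q, props.readout_error(q)))

# CNOT error for each connected pair
for pair in config.coupling_map:
    print('cx {} error: {:.3f}'.format(pair, props.gate_error('cx', pair)))
```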
###Code
layout2 = []
###Output
_____no_output_____
###Markdown
&128211; Describe the reason for choice of initial layout. Execute `AND` gate on `ibmq_athens` by running the cell below.
###Code
output2_all = []
qc_trans2_all = []
prob2_all = []
worst = 1
best = 0
for input1 in ['0','1']:
for input2 in ['0','1']:
qc_trans2, output2 = AND(input1, input2, backend2, layout2)
output2_all.append(output2)
qc_trans2_all.append(qc_trans2)
prob = output2[str(int( input1=='1' and input2=='1' ))]/8192
prob2_all.append(prob)
print('\nProbability of correct answer for inputs',input1,input2)
print('{:.2f}'.format(prob) )
print('---------------------------------')
worst = min(worst,prob)
best = max(best, prob)
print('')
print('\nThe highest of these probabilities was {:.2f}'.format(best))
print('The lowest of these probabilities was {:.2f}'.format(worst))
###Output
_____no_output_____
###Markdown
Step 3. Interpret the result There are several quantities that distinguish the circuits. Chief among them is the **circuit depth**. Circuit depth is defined in detail [here](https://qiskit.org/documentation/apidoc/circuit.html) (see the Supplementary Information and click the Quantum Circuit Properties tab). Circuit depth scales with the number of gates in a circuit, and loosely corresponds to the runtime of the circuit on hardware. Therefore, circuit depth is an easy-to-compute metric that can be used to estimate the fidelity of an executed circuit. A second important value is the number of **nonlocal** (multi-qubit) **gates** in a circuit. On IBM Quantum systems, the only nonlocal gate that can physically be performed is the CNOT gate. Recall that CNOT gates are the most expensive gates to perform, and thus the total number of these gates also serves as a good benchmark for the accuracy of the final output. A) Circuit depth and result accuracy Running the cells below will display the four transpiled AND gate circuit diagrams, with the corresponding inputs, that were executed on `ibmq_athens`, together with their circuit depths and the probability of producing the correct answer.
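As a quick check before turning to the hardware results, both quantities can be computed directly on the ideal circuit from Step 2. A minimal sketch using the `qc_and` circuit defined earlier (the `decomposed` name is ours, for illustration only):

```python
# Depth and two-qubit gate count of the abstract Toffoli-based AND circuit
print('ccx form   : depth =', qc_and.depth(),
      '| nonlocal gates =', qc_and.num_nonlocal_gates())

# The same circuit decomposed into single- and two-qubit gates
decomposed = qc_and.decompose()
print('decomposed : depth =', decomposed.depth(),
      '| nonlocal gates =', decomposed.num_nonlocal_gates())
```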
###Code
print('Transpiled AND gate circuit for ibmq_athens with input 0 0')
print('\nThe circuit depth : {}'.format (qc_trans2_all[0].depth()))
print('# of nonlocal gates : {}'.format (qc_trans2_all[0].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob2_all[0]) )
qc_trans2_all[0].draw()
print('Transpiled AND gate circuit for ibmq_athens with input 0 1')
print('\nThe circuit depth : {}'.format (qc_trans2_all[1].depth()))
print('# of nonlocal gates : {}'.format (qc_trans2_all[1].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob2_all[1]) )
qc_trans2_all[1].draw()
print('Transpiled AND gate circuit for ibmq_athens with input 1 0')
print('\nThe circuit depth : {}'.format (qc_trans2_all[2].depth()))
print('# of nonlocal gates : {}'.format (qc_trans2_all[2].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob2_all[2]) )
qc_trans2_all[2].draw()
print('Transpiled AND gate circuit for ibmq_athens with input 1 1')
print('\nThe circuit depth : {}'.format (qc_trans2_all[3].depth()))
print('# of nonlocal gates : {}'.format (qc_trans2_all[3].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob2_all[3]) )
qc_trans2_all[3].draw()
###Output
_____no_output_____
###Markdown
&128211; Explain the reason for the dissimilarity of the circuits. Describe the relation between the properties of the circuits and the accuracy of the outcomes. B) Qubit connectivity and circuit depth Investigate the transpiled circuits for `ibmqx2` by running the cells below.
###Code
print('Transpiled AND gate circuit for ibmqx2 with input 0 0')
print('\nThe circuit depth : {}'.format (qc_trans1_all[0].depth()))
print('# of nonlocal gates : {}'.format (qc_trans1_all[0].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob1_all[0]) )
qc_trans1_all[0].draw()
print('Transpiled AND gate circuit for ibmqx2 with input 0 1')
print('\nThe circuit depth : {}'.format (qc_trans1_all[1].depth()))
print('# of nonlocal gates : {}'.format (qc_trans1_all[1].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob1_all[1]) )
qc_trans1_all[1].draw()
print('Transpiled AND gate circuit for ibmqx2 with input 1 0')
print('\nThe circuit depth : {}'.format (qc_trans1_all[2].depth()))
print('# of nonlocal gates : {}'.format (qc_trans1_all[2].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob1_all[2]) )
qc_trans1_all[2].draw()
print('Transpiled AND gate circuit for ibmqx2 with input 1 1')
print('\nThe circuit depth : {}'.format (qc_trans1_all[3].depth()))
print('# of nonlocal gates : {}'.format (qc_trans1_all[3].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob1_all[3]) )
qc_trans1_all[3].draw()
###Output
_____no_output_____
###Markdown
<h1 style="font-size:35px; color:black; ">Lab 1 Quantum Circuits Prerequisite- [Qiskit basics](https://qiskit.org/documentation/tutorials/circuits/1_getting_started_with_qiskit.html)- [Ch.1.2 The Atoms of Computation](https://qiskit.org/textbook/ch-states/atoms-computation.html)Other relevant materials- [Access IBM Quantum Systems](https://qiskit.org/documentation/install.htmlaccess-ibm-quantum-systems)- [IBM Quantum Systems Configuration](https://quantum-computing.ibm.com/docs/manage/backends/configuration)- [Transpile](https://qiskit.org/documentation/apidoc/transpiler.html)- [IBM Quantum account](https://quantum-computing.ibm.com/docs/manage/account/ibmq)- [Quantum Circuits](https://qiskit.org/documentation/apidoc/circuit.html)
###Code
from qiskit import *
from qiskit.visualization import plot_histogram
import numpy as np
###Output
_____no_output_____
###Markdown
Part 1: Classical logic gates with quantum circuits<div style="background: E8E7EB; border-radius: 5px;-moz-border-radius: 5px;"> <p style="background: 800080; border-radius: 5px 5px 0px 0px; padding: 10px 0px 10px 10px; font-size:18px; color:white; ">Goal <p style=" padding: 0px 0px 10px 10px; font-size:16px;">Create quantum circuit functions that can compute the XOR, AND, NAND and OR gates using the NOT gate (expressed as x in Qiskit), the CNOT gate (expressed as cx in Qiskit) and the Toffoli gate (expressed as ccx in Qiskit) .An implementation of the `NOT` gate is provided as an example.
###Code
def NOT(inp):
"""An NOT gate.
Parameters:
inp (str): Input, encoded in qubit 0.
Returns:
QuantumCircuit: Output NOT circuit.
str: Output value measured from qubit 0.
"""
qc = QuantumCircuit(1, 1) # A quantum circuit with a single qubit and a single classical bit
qc.reset(0)
# We encode '0' as the qubit state |0⟩, and '1' as |1⟩
# Since the qubit is initially |0⟩, we don't need to do anything for an input of '0'
# For an input of '1', we do an x to rotate the |0⟩ to |1⟩
if inp=='1':
qc.x(0)
# barrier between input state and gate operation
qc.barrier()
# Now we've encoded the input, we can do a NOT on it using x
qc.x(0)
#barrier between gate operation and measurement
qc.barrier()
# Finally, we extract the |0⟩/|1⟩ output of the qubit and encode it in the bit c[0]
qc.measure(0,0)
qc.draw('mpl')
# We'll run the program on a simulator
backend = Aer.get_backend('qasm_simulator')
# Since the output will be deterministic, we can use just a single shot to get it
job = execute(qc, backend, shots=1, memory=True)
output = job.result().get_memory()[0]
return qc, output
## Test the function
for inp in ['0', '1']:
qc, out = NOT(inp)
print('NOT with input',inp,'gives output',out)
display(qc.draw())
print('\n')
###Output
NOT with input 0 gives output 1
###Markdown
&128211; XOR gateTakes two binary strings as input and gives one as output.The output is '0' when the inputs are equal and '1' otherwise.
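One possible construction (a sketch of a gate sequence that fits the template below; other sequences also work): a single CNOT with qubit 0 as control and qubit 1 as target leaves qubit 1 holding `inp1 XOR inp2`. A standalone illustration, with a circuit name of our own choosing:

```python
from qiskit import QuantumCircuit

# Sketch of an XOR built from a CNOT: qubit 1 ends up holding inp1 XOR inp2.
# Here the inputs '1' and '0' are encoded with an X on qubit 0 only.
qc_xor_sketch = QuantumCircuit(2, 1)
qc_xor_sketch.x(0)           # encode inp1 = '1' (omit for '0')
qc_xor_sketch.cx(0, 1)       # qubit 1 <- inp1 XOR inp2
qc_xor_sketch.measure(1, 0)
```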
###Code
def XOR(inp1,inp2):
"""An XOR gate.
Parameters:
inpt1 (str): Input 1, encoded in qubit 0.
inpt2 (str): Input 2, encoded in qubit 1.
Returns:
QuantumCircuit: Output XOR circuit.
str: Output value measured from qubit 1.
"""
qc = QuantumCircuit(2, 1)
qc.reset(range(2))
if inp1=='1':
qc.x(0)
if inp2=='1':
qc.x(1)
# barrier between input state and gate operation
qc.barrier()
# this is where your program for quantum XOR gate goes
# barrier between input state and gate operation
qc.barrier()
qc.measure(1,0) # output from qubit 1 is measured
#We'll run the program on a simulator
backend = Aer.get_backend('qasm_simulator')
#Since the output will be deterministic, we can use just a single shot to get it
job = execute(qc, backend, shots=1, memory=True)
output = job.result().get_memory()[0]
return qc, output
## Test the function
for inp1 in ['0', '1']:
for inp2 in ['0', '1']:
qc, output = XOR(inp1, inp2)
print('XOR with inputs',inp1,inp2,'gives output',output)
display(qc.draw())
print('\n')
###Output
_____no_output_____
###Markdown
&128211; AND gateTakes two binary strings as input and gives one as output.The output is `'1'` only when both the inputs are `'1'`.
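One possible construction (a sketch; it is the same Toffoli-based construction used again in Part 2): a Toffoli gate with qubits 0 and 1 as controls and qubit 2 as target writes `inp1 AND inp2` onto qubit 2. A standalone illustration, with a circuit name of our own choosing:

```python
from qiskit import QuantumCircuit

# Sketch of an AND built from a Toffoli: qubit 2 ends up holding inp1 AND inp2.
qc_and_sketch = QuantumCircuit(3, 1)
qc_and_sketch.x(0)             # encode inp1 = '1' (omit for '0')
qc_and_sketch.x(1)             # encode inp2 = '1' (omit for '0')
qc_and_sketch.ccx(0, 1, 2)     # qubit 2 <- inp1 AND inp2
qc_and_sketch.measure(2, 0)
```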
###Code
def AND(inp1,inp2):
"""An AND gate.
Parameters:
inpt1 (str): Input 1, encoded in qubit 0.
inpt2 (str): Input 2, encoded in qubit 1.
Returns:
        QuantumCircuit: Output AND circuit.
str: Output value measured from qubit 2.
"""
qc = QuantumCircuit(3, 1)
qc.reset(range(2))
if inp1=='1':
qc.x(0)
if inp2=='1':
qc.x(1)
qc.barrier()
# this is where your program for quantum AND gate goes
qc.barrier()
qc.measure(2, 0) # output from qubit 2 is measured
# We'll run the program on a simulator
backend = Aer.get_backend('qasm_simulator')
# Since the output will be deterministic, we can use just a single shot to get it
job = execute(qc, backend, shots=1, memory=True)
output = job.result().get_memory()[0]
return qc, output
## Test the function
for inp1 in ['0', '1']:
for inp2 in ['0', '1']:
qc, output = AND(inp1, inp2)
print('AND with inputs',inp1,inp2,'gives output',output)
display(qc.draw())
print('\n')
###Output
_____no_output_____
###Markdown
&128211; NAND gateTakes two binary strings as input and gives one as output.The output is `'0'` only when both the inputs are `'1'`.
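One possible construction (a sketch): compute the AND with a Toffoli as above, then flip the target with an X so that qubit 2 holds `NOT (inp1 AND inp2)`. A standalone illustration, with a circuit name of our own choosing:

```python
from qiskit import QuantumCircuit

# Sketch of a NAND: a Toffoli computes the AND, an X on the target inverts it.
qc_nand_sketch = QuantumCircuit(3, 1)
qc_nand_sketch.ccx(0, 1, 2)    # qubit 2 <- inp1 AND inp2
qc_nand_sketch.x(2)            # qubit 2 <- NOT (inp1 AND inp2)
qc_nand_sketch.measure(2, 0)
```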
###Code
def NAND(inp1,inp2):
"""An NAND gate.
Parameters:
inpt1 (str): Input 1, encoded in qubit 0.
inpt2 (str): Input 2, encoded in qubit 1.
Returns:
QuantumCircuit: Output NAND circuit.
str: Output value measured from qubit 2.
"""
qc = QuantumCircuit(3, 1)
qc.reset(range(3))
if inp1=='1':
qc.x(0)
if inp2=='1':
qc.x(1)
qc.barrier()
# this is where your program for quantum NAND gate goes
qc.barrier()
qc.measure(2, 0) # output from qubit 2 is measured
# We'll run the program on a simulator
backend = Aer.get_backend('qasm_simulator')
# Since the output will be deterministic, we can use just a single shot to get it
job = execute(qc,backend,shots=1,memory=True)
output = job.result().get_memory()[0]
return qc, output
## Test the function
for inp1 in ['0', '1']:
for inp2 in ['0', '1']:
qc, output = NAND(inp1, inp2)
print('NAND with inputs',inp1,inp2,'gives output',output)
display(qc.draw())
print('\n')
###Output
_____no_output_____
###Markdown
&128211; OR gateTakes two binary strings as input and gives one as output.The output is '1' if either input is '1'.
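One possible construction (a sketch), using the identity `a OR b = a XOR b XOR (a AND b)`: two CNOTs write the XOR of the inputs onto qubit 2, and a Toffoli adds the AND term. A standalone illustration, with a circuit name of our own choosing:

```python
from qiskit import QuantumCircuit

# Sketch of an OR via a OR b = a XOR b XOR (a AND b).
qc_or_sketch = QuantumCircuit(3, 1)
qc_or_sketch.cx(0, 2)          # qubit 2 <- inp1
qc_or_sketch.cx(1, 2)          # qubit 2 <- inp1 XOR inp2
qc_or_sketch.ccx(0, 1, 2)      # qubit 2 <- inp1 OR inp2
qc_or_sketch.measure(2, 0)
```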
###Code
def OR(inp1,inp2):
"""An OR gate.
Parameters:
inpt1 (str): Input 1, encoded in qubit 0.
inpt2 (str): Input 2, encoded in qubit 1.
Returns:
        QuantumCircuit: Output OR circuit.
str: Output value measured from qubit 2.
"""
qc = QuantumCircuit(3, 1)
qc.reset(range(3))
if inp1=='1':
qc.x(0)
if inp2=='1':
qc.x(1)
qc.barrier()
# this is where your program for quantum OR gate goes
qc.barrier()
qc.measure(2, 0) # output from qubit 2 is measured
# We'll run the program on a simulator
backend = Aer.get_backend('qasm_simulator')
# Since the output will be deterministic, we can use just a single shot to get it
job = execute(qc,backend,shots=1,memory=True)
output = job.result().get_memory()[0]
return qc, output
## Test the function
for inp1 in ['0', '1']:
for inp2 in ['0', '1']:
qc, output = OR(inp1, inp2)
print('OR with inputs',inp1,inp2,'gives output',output)
display(qc.draw())
print('\n')
###Output
_____no_output_____
###Markdown
Part 2: AND gate on Quantum Computer<div style="background: #E8E7EB; border-radius: 5px;-moz-border-radius: 5px;"> <p style="background: #800080; border-radius: 5px 5px 0px 0px; padding: 10px 0px 10px 10px; font-size:18px; color:white; ">Goal <p style=" padding: 0px 0px 10px 10px; font-size:16px;">Execute the AND gate on two quantum systems and learn how the different circuit properties affect the result. In Part 1 you made an `AND` gate from quantum gates, and executed it on the simulator. Here in Part 2 you will do it again, but instead run the circuits on a real quantum computer. When using a real quantum system, one thing you should keep in mind is that present-day quantum computers are not fault tolerant; they are noisy. The 'noise' in a quantum system is the collective effect of all the things that should not happen, but nevertheless do. Noise results in outputs that are not always what we would expect. There is noise associated with all processes in a quantum circuit: preparing the initial state, applying gates, and qubit measurement. For the gates, noise levels can vary between different gates and between different qubits. `cx` gates are typically more noisy than any single-qubit gate. Here we will use the quantum systems from the IBM Quantum Experience. If you do not have access yet, you can get it [here](https://qiskit.org/documentation/install.html#access-ibm-quantum-systems). Now that you are ready to use the real quantum computer, let's begin. Step 1. Choosing a device First load the account from the credentials saved on disk by running the following cell:
###Code
IBMQ.load_account()
###Output
_____no_output_____
###Markdown
After your account is loaded, you can see the list of providers that you have access to by running the cell below. Each provider offers different systems for use. For open users, there is typically only one provider `ibm-q/open/main`:
###Code
IBMQ.providers()
###Output
_____no_output_____
###Markdown
Let us grab the provider using `get_provider`. The command, `provider.backends( )` shows you the list of backends that are available to you from the selected provider.
###Code
provider = IBMQ.get_provider('ibm-q')
provider.backends()
###Output
_____no_output_____
###Markdown
Among these options, you may pick one of the systems to run your circuits on. All except the `ibmq_qasm_simulator` are real quantum computers that you can use. The differences among these systems reside in the number of qubits, their connectivity, and the system error rates. Upon executing the following cell you will be presented with a widget that displays all of the information about your chosen backend. You can obtain the information that you need by clicking on the tabs. For example, the backend status, number of qubits and the connectivity are under the `configuration` tab, whereas the `Error Map` tab will reveal the latest noise information for the system.
###Code
import qiskit.tools.jupyter
backend_ex = provider.get_backend('ibmq_16_melbourne')
backend_ex
###Output
_____no_output_____
###Markdown
For our AND gate circuit, we need a backend with three or more qubits, which is true for all the real systems except for `ibmq_armonk`. Below is an example of how to filter backends, where we filter for number of qubits, and remove simulators:
###Code
backends = provider.backends(filters = lambda x:x.configuration().n_qubits >= 2 and not x.configuration().simulator
and x.status().operational==True)
backends
###Output
_____no_output_____
###Markdown
One convenient way to choose a system is using the `least_busy` function to get the backend with the lowest number of jobs in queue. The downside is that the result might have relatively poor accuracy because, not surprisingly, the lowest error rate systems are the most popular.
###Code
from qiskit.providers.ibmq import least_busy
backend = least_busy(provider.backends(filters=lambda x: x.configuration().n_qubits >= 2 and
not x.configuration().simulator and x.status().operational==True))
backend
###Output
_____no_output_____
###Markdown
Real quantum computers need to be recalibrated regularly, and the fidelity of a specific qubit or gate can change over time. Therefore, which system produces results with the least error can vary. `ibmq_athens` tends to show relatively low error rates. In this exercise, we select two systems: `ibmq_athens` for its low error rates, and `ibmqx2` for its additional connectivity, in particular its triangular connectivity, which will be useful for circuits with Toffoli gates.
###Code
# run this cell
backend1 = provider.get_backend('ibmqx2')
backend2 = provider.get_backend('ibmq_athens')
###Output
_____no_output_____
###Markdown
Step 2. Define AND function for a real deviceWe now define the AND function. We choose 8192 as the number of shots, the maximum number of shots for open IBM systems, to reduce the variance in the final result. Related information is well explained [here](https://quantum-computing.ibm.com/docs/manage/backends/configuration). Qiskit Transpiler It is important to know that when running a circuit on a real quantum computer, circuits typically need to be transpiled for the backend that you select so that the circuit contains only those gates that the quantum computer can actually perform. Primarily this involves the addition of swap gates so that two-qubit gates in the circuit map to those pairs of qubits on the device that can actually perform these gates. The following cell shows the AND gate represented as a Toffoli gate decomposed into single- and two-qubit gates, which are the only types of gate that can be run on IBM hardware. Provided that CNOT gates can be performed between all three qubits, a triangle topology, no other gates are required.
###Code
qc_and = QuantumCircuit(3)
qc_and.ccx(0,1,2)
print('AND gate')
display(qc_and.draw())
print('\n\nTranspiled AND gate with all the required connectivity')
qc_and.decompose().draw()
###Output
AND gate
###Markdown
In addition, there are often optimizations that the transpiler can perform that reduce the overall gate count, and thus the total length, of the input circuits. Note that the addition of swaps to match the device topology and the optimizations for reducing the length of a circuit are at odds with each other. In what follows we will make use of `initial_layout`, which allows us to pick the qubits on a device used for the computation, and `optimization_level`, an argument that selects among internal defaults for circuit swap mapping and optimization methods. You can learn more about the transpile function in depth [here](https://qiskit.org/documentation/apidoc/transpiler.html). Let's modify the AND function from Part 1 for the real system, with the transpile step included.
###Code
from qiskit.tools.monitor import job_monitor
# run the cell to define AND gate for real quantum system
def AND(inp1, inp2, backend, layout):
qc = QuantumCircuit(3, 1)
qc.reset(range(3))
if inp1=='1':
qc.x(0)
if inp2=='1':
qc.x(1)
qc.barrier()
qc.ccx(0, 1, 2)
qc.barrier()
qc.measure(2, 0)
qc_trans = transpile(qc, backend, initial_layout=layout, optimization_level=3)
job = execute(qc_trans, backend, shots=8192)
print(job.job_id())
job_monitor(job)
output = job.result().get_counts()
return qc_trans, output
###Output
_____no_output_____
###Markdown
When you submit jobs to quantum systems, `job_monitor` will start tracking where your submitted job is in the pipeline. Case A) Three qubits on ibmqx2 with the triangle connectivity First, examine `ibmqx2` using the widget introduced earlier. Find a group of three qubits with triangle connection and determine your initial layout.
###Code
# run this cell for the widget
backend1
###Output
_____no_output_____
###Markdown
&128211; Assign your choice of layout to the list variable layout1 in the cell below
###Code
# Assign your choice of the initial_layout to the variable layout1 as a list
# ex) layout1 = [0,2,4]
layout1 =
###Output
_____no_output_____
###Markdown
&128211; Describe the reason for your choice of initial layout. Execute `AND` gate on `ibmqx2` by running the cell below.
###Code
output1_all = []
qc_trans1_all = []
prob1_all = []
worst = 1
best = 0
for input1 in ['0','1']:
for input2 in ['0','1']:
qc_trans1, output1 = AND(input1, input2, backend1, layout1)
output1_all.append(output1)
qc_trans1_all.append(qc_trans1)
prob = output1[str(int( input1=='1' and input2=='1' ))]/8192
prob1_all.append(prob)
print('\nProbability of correct answer for inputs',input1,input2)
print( '{:.2f}'.format(prob) )
print('---------------------------------')
worst = min(worst,prob)
best = max(best, prob)
print('')
print('\nThe highest of these probabilities was {:.2f}'.format(best))
print('The lowest of these probabilities was {:.2f}'.format(worst))
###Output
_____no_output_____
###Markdown
Once your job has finished running, you can easily access the results via `results = backend.retrieve_job('JOB_ID').result()`. Your job_ids will be printed out by the `AND` function defined above. You can also find the job_ids from the results under your `IQX` account. More information can be found [here](https://quantum-computing.ibm.com/docs/manage/account/ibmq). Case B) Three qubits on ibmq_athens for the linear nearest neighbor connectivity Examine `ibmq_athens` through the widget by running the cell below.
###Code
backend2
###Output
_____no_output_____
###Markdown
&128211; Find three qubits with the linear nearest neighbor connectivity. Determine the initial layout considering the error map and assign it to the list variable layout2.
###Code
layout2 = []
###Output
_____no_output_____
###Markdown
&128211; Describe the reason for choice of initial layout. Execute `AND` gate on `ibmq_athens` by running the cell below.
###Code
output2_all = []
qc_trans2_all = []
prob2_all = []
worst = 1
best = 0
for input1 in ['0','1']:
for input2 in ['0','1']:
qc_trans2, output2 = AND(input1, input2, backend2, layout2)
output2_all.append(output2)
qc_trans2_all.append(qc_trans2)
prob = output2[str(int( input1=='1' and input2=='1' ))]/8192
prob2_all.append(prob)
print('\nProbability of correct answer for inputs',input1,input2)
print('{:.2f}'.format(prob) )
print('---------------------------------')
worst = min(worst,prob)
best = max(best, prob)
print('')
print('\nThe highest of these probabilities was {:.2f}'.format(best))
print('The lowest of these probabilities was {:.2f}'.format(worst))
###Output
_____no_output_____
###Markdown
Step 3. Interpret the result There are several quantities that distinguish the circuits. Chief among them is the **circuit depth**. Circuit depth is defined in detail [here](https://qiskit.org/documentation/apidoc/circuit.html#supplementary-information) (see the Supplementary Information and click the Quantum Circuit Properties tab). Circuit depth scales with the number of gates in a circuit, and loosely corresponds to the runtime of the circuit on hardware. Therefore, circuit depth is an easy-to-compute metric that can be used to estimate the fidelity of an executed circuit. A second important value is the number of **nonlocal** (multi-qubit) **gates** in a circuit. On IBM Quantum systems, the only nonlocal gate that can physically be performed is the CNOT gate. Recall that CNOT gates are the most expensive gates to perform, and thus the total number of these gates also serves as a good benchmark for the accuracy of the final output. A) Circuit depth and result accuracy Running the cells below will display the four transpiled AND gate circuit diagrams, with the corresponding inputs, that were executed on `ibmq_athens`, together with their circuit depths and the probability of producing the correct answer.
###Code
print('Transpiled AND gate circuit for ibmq_athens with input 0 0')
print('\nThe circuit depth : {}'.format (qc_trans2_all[0].depth()))
print('# of nonlocal gates : {}'.format (qc_trans2_all[0].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob2_all[0]) )
qc_trans2_all[0].draw()
print('Transpiled AND gate circuit for ibmq_athens with input 0 1')
print('\nThe circuit depth : {}'.format (qc_trans2_all[1].depth()))
print('# of nonlocal gates : {}'.format (qc_trans2_all[1].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob2_all[1]) )
qc_trans2_all[1].draw()
print('Transpiled AND gate circuit for ibmq_athens with input 1 0')
print('\nThe circuit depth : {}'.format (qc_trans2_all[2].depth()))
print('# of nonlocal gates : {}'.format (qc_trans2_all[2].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob2_all[2]) )
qc_trans2_all[2].draw()
print('Transpiled AND gate circuit for ibmq_athens with input 1 1')
print('\nThe circuit depth : {}'.format (qc_trans2_all[3].depth()))
print('# of nonlocal gates : {}'.format (qc_trans2_all[3].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob2_all[3]) )
qc_trans2_all[3].draw()
###Output
_____no_output_____
###Markdown
&128211; Explain the reason for the dissimilarity of the circuits. Describe the relation between the properties of the circuits and the accuracy of the outcomes. B) Qubit connectivity and circuit depth Investigate the transpiled circuits for `ibmqx2` by running the cells below.
###Code
print('Transpiled AND gate circuit for ibmqx2 with input 0 0')
print('\nThe circuit depth : {}'.format (qc_trans1_all[0].depth()))
print('# of nonlocal gates : {}'.format (qc_trans1_all[0].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob1_all[0]) )
qc_trans1_all[0].draw()
print('Transpiled AND gate circuit for ibmqx2 with input 0 1')
print('\nThe circuit depth : {}'.format (qc_trans1_all[1].depth()))
print('# of nonlocal gates : {}'.format (qc_trans1_all[1].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob1_all[1]) )
qc_trans1_all[1].draw()
print('Transpiled AND gate circuit for ibmqx2 with input 1 0')
print('\nThe circuit depth : {}'.format (qc_trans1_all[2].depth()))
print('# of nonlocal gates : {}'.format (qc_trans1_all[2].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob1_all[2]) )
qc_trans1_all[2].draw()
print('Transpiled AND gate circuit for ibmqx2 with input 1 1')
print('\nThe circuit depth : {}'.format (qc_trans1_all[3].depth()))
print('# of nonlocal gates : {}'.format (qc_trans1_all[3].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob1_all[3]) )
qc_trans1_all[3].draw()
###Output
_____no_output_____
###Markdown
<h1 style="font-size:35px; color:black; ">Lab 1 Quantum Circuits Prerequisite- [Qiskit basics](https://qiskit.org/documentation/tutorials/circuits/1_getting_started_with_qiskit.html)- [Ch.1.2 The Atoms of Computation](https://qiskit.org/textbook/ch-states/atoms-computation.html)Other relevant materials- [Access IBM Quantum Systems](https://qiskit.org/documentation/install.htmlaccess-ibm-quantum-systems)- [IBM Quantum Systems Configuration](https://quantum-computing.ibm.com/docs/manage/backends/configuration)- [Transpile](https://qiskit.org/documentation/apidoc/transpiler.html)- [IBM Quantum account](https://quantum-computing.ibm.com/docs/manage/account/ibmq)- [Quantum Circuits](https://qiskit.org/documentation/apidoc/circuit.html)
###Code
from qiskit import *
from qiskit.visualization import plot_histogram
import numpy as np
###Output
_____no_output_____
###Markdown
Part 1: Classical logic gates with quantum circuits<div style="background: E8E7EB; border-radius: 5px;-moz-border-radius: 5px;"> <p style="background: 800080; border-radius: 5px 5px 0px 0px; padding: 10px 0px 10px 10px; font-size:18px; color:white; ">Goal <p style=" padding: 0px 0px 10px 10px; font-size:16px;">Create quantum circuit functions that can compute the XOR, AND, NAND and OR gates using the NOT gate (expressed as x in Qiskit), the CNOT gate (expressed as cx in Qiskit) and the Toffoli gate (expressed as ccx in Qiskit) .An implementation of the `NOT` gate is provided as an example.
###Code
def NOT(inp):
"""An NOT gate.
Parameters:
inp (str): Input, encoded in qubit 0.
Returns:
QuantumCircuit: Output NOT circuit.
str: Output value measured from qubit 0.
"""
qc = QuantumCircuit(1, 1) # A quantum circuit with a single qubit and a single classical bit
qc.reset(0)
# We encode '0' as the qubit state |0⟩, and '1' as |1⟩
# Since the qubit is initially |0⟩, we don't need to do anything for an input of '0'
# For an input of '1', we do an x to rotate the |0⟩ to |1⟩
if inp=='1':
qc.x(0)
# barrier between input state and gate operation
qc.barrier()
# Now we've encoded the input, we can do a NOT on it using x
qc.x(0)
#barrier between gate operation and measurement
qc.barrier()
# Finally, we extract the |0⟩/|1⟩ output of the qubit and encode it in the bit c[0]
qc.measure(0,0)
qc.draw('mpl')
# We'll run the program on a simulator
backend = Aer.get_backend('qasm_simulator')
# Since the output will be deterministic, we can use just a single shot to get it
job = execute(qc, backend, shots=1, memory=True)
output = job.result().get_memory()[0]
return qc, output
## Test the function
for inp in ['0', '1']:
qc, out = NOT(inp)
print('NOT with input',inp,'gives output',out)
display(qc.draw())
print('\n')
###Output
NOT with input 0 gives output 1
###Markdown
&128211; XOR gateTakes two binary strings as input and gives one as output.The output is '0' when the inputs are equal and '1' otherwise.
###Code
def XOR(inp1,inp2):
"""An XOR gate.
Parameters:
inpt1 (str): Input 1, encoded in qubit 0.
inpt2 (str): Input 2, encoded in qubit 1.
Returns:
QuantumCircuit: Output XOR circuit.
str: Output value measured from qubit 1.
"""
qc = QuantumCircuit(2, 1)
qc.reset(range(2))
if inp1=='1':
qc.x(0)
if inp2=='1':
qc.x(1)
# barrier between input state and gate operation
qc.barrier()
# this is where your program for quantum XOR gate goes
# barrier between input state and gate operation
qc.barrier()
qc.measure(1,0) # output from qubit 1 is measured
#We'll run the program on a simulator
backend = Aer.get_backend('qasm_simulator')
#Since the output will be deterministic, we can use just a single shot to get it
job = execute(qc, backend, shots=1, memory=True)
output = job.result().get_memory()[0]
return qc, output
## Test the function
for inp1 in ['0', '1']:
for inp2 in ['0', '1']:
qc, output = XOR(inp1, inp2)
print('XOR with inputs',inp1,inp2,'gives output',output)
display(qc.draw())
print('\n')
###Output
_____no_output_____
###Markdown
&128211; AND gateTakes two binary strings as input and gives one as output.The output is `'1'` only when both the inputs are `'1'`.
###Code
def AND(inp1,inp2):
"""An AND gate.
Parameters:
inpt1 (str): Input 1, encoded in qubit 0.
inpt2 (str): Input 2, encoded in qubit 1.
Returns:
        QuantumCircuit: Output AND circuit.
str: Output value measured from qubit 2.
"""
qc = QuantumCircuit(3, 1)
qc.reset(range(2))
if inp1=='1':
qc.x(0)
if inp2=='1':
qc.x(1)
qc.barrier()
# this is where your program for quantum AND gate goes
qc.barrier()
qc.measure(2, 0) # output from qubit 2 is measured
# We'll run the program on a simulator
backend = Aer.get_backend('qasm_simulator')
# Since the output will be deterministic, we can use just a single shot to get it
job = execute(qc, backend, shots=1, memory=True)
output = job.result().get_memory()[0]
return qc, output
## Test the function
for inp1 in ['0', '1']:
for inp2 in ['0', '1']:
qc, output = AND(inp1, inp2)
print('AND with inputs',inp1,inp2,'gives output',output)
display(qc.draw())
print('\n')
###Output
_____no_output_____
###Markdown
&128211; NAND gateTakes two binary strings as input and gives one as output.The output is `'0'` only when both the inputs are `'1'`.
###Code
def NAND(inp1,inp2):
"""An NAND gate.
Parameters:
inpt1 (str): Input 1, encoded in qubit 0.
inpt2 (str): Input 2, encoded in qubit 1.
Returns:
QuantumCircuit: Output NAND circuit.
str: Output value measured from qubit 2.
"""
qc = QuantumCircuit(3, 1)
qc.reset(range(3))
if inp1=='1':
qc.x(0)
if inp2=='1':
qc.x(1)
qc.barrier()
# this is where your program for quantum NAND gate goes
qc.barrier()
qc.measure(2, 0) # output from qubit 2 is measured
# We'll run the program on a simulator
backend = Aer.get_backend('qasm_simulator')
# Since the output will be deterministic, we can use just a single shot to get it
job = execute(qc,backend,shots=1,memory=True)
output = job.result().get_memory()[0]
return qc, output
## Test the function
for inp1 in ['0', '1']:
for inp2 in ['0', '1']:
qc, output = NAND(inp1, inp2)
print('NAND with inputs',inp1,inp2,'gives output',output)
display(qc.draw())
print('\n')
###Output
_____no_output_____
###Markdown
&128211; OR gateTakes two binary strings as input and gives one as output.The output is '1' if either input is '1'.
###Code
def OR(inp1,inp2):
"""An OR gate.
Parameters:
inpt1 (str): Input 1, encoded in qubit 0.
inpt2 (str): Input 2, encoded in qubit 1.
Returns:
        QuantumCircuit: Output OR circuit.
str: Output value measured from qubit 2.
"""
qc = QuantumCircuit(3, 1)
qc.reset(range(3))
    if inp1=='1':
qc.x(0)
    if inp2=='1':
qc.x(1)
qc.barrier()
# this is where your program for quantum OR gate goes
qc.barrier()
qc.measure(2, 0) # output from qubit 2 is measured
# We'll run the program on a simulator
backend = Aer.get_backend('qasm_simulator')
# Since the output will be deterministic, we can use just a single shot to get it
job = execute(qc,backend,shots=1,memory=True)
output = job.result().get_memory()[0]
return qc, output
## Test the function
for inp1 in ['0', '1']:
for inp2 in ['0', '1']:
qc, output = OR(inp1, inp2)
print('OR with inputs',inp1,inp2,'gives output',output)
display(qc.draw())
print('\n')
###Output
_____no_output_____
###Markdown
Part 2: AND gate on Quantum Computer<div style="background: #E8E7EB; border-radius: 5px;-moz-border-radius: 5px;"> <p style="background: #800080; border-radius: 5px 5px 0px 0px; padding: 10px 0px 10px 10px; font-size:18px; color:white; ">Goal <p style=" padding: 0px 0px 10px 10px; font-size:16px;">Execute the AND gate on two quantum systems and learn how the different circuit properties affect the result. In Part 1 you made an `AND` gate from quantum gates, and executed it on the simulator. Here in Part 2 you will do it again, but instead run the circuits on a real quantum computer. When using a real quantum system, one thing you should keep in mind is that present-day quantum computers are not fault tolerant; they are noisy. The 'noise' in a quantum system is the collective effect of all the things that should not happen, but nevertheless do. Noise results in outputs that are not always what we would expect. There is noise associated with all processes in a quantum circuit: preparing the initial state, applying gates, and qubit measurement. For the gates, noise levels can vary between different gates and between different qubits. `cx` gates are typically more noisy than any single-qubit gate. Here we will use the quantum systems from the IBM Quantum Experience. If you do not have access yet, you can get it [here](https://qiskit.org/documentation/install.html#access-ibm-quantum-systems). Now that you are ready to use the real quantum computer, let's begin. Step 1. Choosing a device First load the account from the credentials saved on disk by running the following cell:
###Code
IBMQ.load_account()
###Output
_____no_output_____
###Markdown
After your account is loaded, you can see the list of providers that you have access to by running the cell below. Each provider offers different systems for use. For open users, there is typically only one provider, `ibm-q/open/main`:
###Code
IBMQ.providers()
###Output
_____no_output_____
###Markdown
Let us grab the provider using `get_provider`. The command, `provider.backends( )` shows you the list of backends that are available to you from the selected provider.
###Code
provider = IBMQ.get_provider('ibm-q')
provider.backends()
###Output
_____no_output_____
###Markdown
Among these options, you may pick one of the systems to run your circuits on. All except the `ibmq_qasm_simulator` are real quantum computers that you can use. The differences among these systems reside in the number of qubits, their connectivity, and the system error rates. Upon executing the following cell you will be presented with a widget that displays all of the information about your chosen backend. You can obtain the information that you need by clicking on the tabs. For example, the backend status, number of qubits and the connectivity are under the `configuration` tab, whereas the `Error Map` tab will reveal the latest noise information for the system.
###Code
import qiskit.tools.jupyter
backend_ex = provider.get_backend('ibmq_16_melbourne')
backend_ex
###Output
_____no_output_____
###Markdown
For our AND gate circuit, we need a backend with three or more qubits, which is true for all the real systems except for `ibmq_armonk`. Below is an example of how to filter backends, where we filter for number of qubits, and remove simulators:
###Code
backends = provider.backends(filters = lambda x:x.configuration().n_qubits >= 2 and not x.configuration().simulator
and x.status().operational==True)
backends
###Output
_____no_output_____
###Markdown
One convenient way to choose a system is using the `least_busy` function to get the backend with the lowest number of jobs in queue. The downside is that the result might have relatively poor accuracy because, not surprisingly, the lowest error rate systems are the most popular.
###Code
from qiskit.providers.ibmq import least_busy
backend = least_busy(provider.backends(filters=lambda x: x.configuration().n_qubits >= 2 and
not x.configuration().simulator and x.status().operational==True))
backend
###Output
_____no_output_____
###Markdown
Real quantum computers need to be recalibrated regularly, and the fidelity of a specific qubit or gate can change over time. Therefore, which system produces results with the least error can vary. `ibmq_athens` tends to show relatively low error rates. In this exercise, we select two systems: `ibmq_athens` for its low error rates, and `ibmqx2` for its additional connectivity, in particular its triangular connectivity, which will be useful for circuits with Toffoli gates.
###Code
# run this cell
backend1 = provider.get_backend('ibmqx2')
backend2 = provider.get_backend('ibmq_athens')
###Output
_____no_output_____
###Markdown
Step 2. Define AND function for a real device We now define the AND function. We choose 8192 as the number of shots, the maximum number of shots for open IBM systems, to reduce the variance in the final result. Related information is well explained [here](https://quantum-computing.ibm.com/docs/manage/backends/configuration). Qiskit Transpiler It is important to know that when running a circuit on a real quantum computer, circuits typically need to be transpiled for the backend that you select so that the circuit contains only those gates that the quantum computer can actually perform. Primarily this involves the addition of swap gates so that two-qubit gates in the circuit map to those pairs of qubits on the device that can actually perform these gates. The following cell shows the AND gate represented as a Toffoli gate decomposed into single- and two-qubit gates, which are the only types of gate that can be run on IBM hardware. Provided that CNOT gates can be performed between all three qubits (a triangle topology), no other gates are required.
###Code
qc_and = QuantumCircuit(3)
qc_and.ccx(0,1,2)
print('AND gate')
display(qc_and.draw())
print('\n\nTranspiled AND gate with all the required connectivity')
qc_and.decompose().draw()
###Output
AND gate
###Markdown
In addition, there are often optimizations that the transpiler can perform that reduce the overall gate count, and thus the total length, of the input circuits. Note that the addition of swaps to match the device topology and the optimizations for reducing the length of a circuit are at odds with each other. In what follows we will make use of `initial_layout`, which allows us to pick the qubits on a device used for the computation, and `optimization_level`, an argument that selects among internal defaults for circuit swap mapping and optimization methods. You can learn more about the transpile function in depth [here](https://qiskit.org/documentation/apidoc/transpiler.html). Let's modify the AND function from Part 1 for the real system, with the transpile step included.
###Code
from qiskit.tools.monitor import job_monitor
# run the cell to define AND gate for real quantum system
def AND(inp1, inp2, backend, layout):
qc = QuantumCircuit(3, 1)
qc.reset(range(3))
if inp1=='1':
qc.x(0)
if inp2=='1':
qc.x(1)
qc.barrier()
qc.ccx(0, 1, 2)
qc.barrier()
qc.measure(2, 0)
qc_trans = transpile(qc, backend, initial_layout=layout, optimization_level=3)
job = execute(qc_trans, backend, shots=8192)
print(job.job_id())
job_monitor(job)
output = job.result().get_counts()
return qc_trans, output
###Output
_____no_output_____
###Markdown
When you submit jobs to quantum systems, `job_monitor` will start tracking where your submitted job is in the pipeline. Case A) Three qubits on ibmqx2 with the triangle connectivity First, examine `ibmqx2` using the widget introduced earlier. Find a group of three qubits with a triangle connection and determine your initial layout.
###Code
# run this cell for the widget
backend1
###Output
_____no_output_____
###Markdown
&128211; Assign your choice of layout to the list variable layout1 in the cell below
###Code
# Assign your choice of the initial_layout to the variable layout1 as a list
# ex) layout1 = [0,2,4]
layout1 =
###Output
_____no_output_____
###Markdown
&128211; Describe the reason for your choice of initial layout. Execute `AND` gate on `ibmqx2` by running the cell below.
###Code
output1_all = []
qc_trans1_all = []
prob1_all = []
worst = 1
best = 0
for input1 in ['0','1']:
for input2 in ['0','1']:
qc_trans1, output1 = AND(input1, input2, backend1, layout1)
output1_all.append(output1)
qc_trans1_all.append(qc_trans1)
prob = output1[str(int( input1=='1' and input2=='1' ))]/8192
prob1_all.append(prob)
print('\nProbability of correct answer for inputs',input1,input2)
print( '{:.2f}'.format(prob) )
print('---------------------------------')
worst = min(worst,prob)
best = max(best, prob)
print('')
print('\nThe highest of these probabilities was {:.2f}'.format(best))
print('The lowest of these probabilities was {:.2f}'.format(worst))
###Output
_____no_output_____
###Markdown
Once your job has finished running, you can easily access the results via `results = backend.retrieve_job('JOB_ID').result()`. Your job_ids will be printed out by the `AND` function defined above. You can also find the job_ids from the results under your `IQX` account. More information can be found [here](https://quantum-computing.ibm.com/docs/manage/account/ibmq). Case B) Three qubits on ibmq_athens for the linear nearest neighbor connectivity Examine `ibmq_athens` through the widget by running the cell below.
###Code
backend2
###Output
_____no_output_____
###Markdown
&128211; Find three qubits with the linear nearest neighbor connectivity. Determine the initial layout considering the error map and assign it to the list variable layout2.
###Code
layout2 = []
###Output
_____no_output_____
###Markdown
&128211; Describe the reason for your choice of initial layout. Execute the `AND` gate on `ibmq_athens` by running the cell below.
###Code
output2_all = []
qc_trans2_all = []
prob2_all = []
worst = 1
best = 0
for input1 in ['0','1']:
for input2 in ['0','1']:
qc_trans2, output2 = AND(input1, input2, backend2, layout2)
output2_all.append(output2)
qc_trans2_all.append(qc_trans2)
prob = output2[str(int( input1=='1' and input2=='1' ))]/8192
prob2_all.append(prob)
print('\nProbability of correct answer for inputs',input1,input2)
print('{:.2f}'.format(prob) )
print('---------------------------------')
worst = min(worst,prob)
best = max(best, prob)
print('')
print('\nThe highest of these probabilities was {:.2f}'.format(best))
print('The lowest of these probabilities was {:.2f}'.format(worst))
###Output
_____no_output_____
###Markdown
Step 3. Interpret the result There are several quantities that distinguish the circuits. Chief among them is the **circuit depth**. Circuit depth is defined in detail [here](https://qiskit.org/documentation/apidoc/circuit.html) (see the Supplementary Information and click the Quantum Circuit Properties tab). Circuit depth scales with the number of gates in a circuit, and loosely corresponds to the runtime of the circuit on hardware. Therefore, circuit depth is an easy-to-compute metric that can be used to estimate the fidelity of an executed circuit. A second important value is the number of **nonlocal** (multi-qubit) **gates** in a circuit. On IBM Quantum systems, the only nonlocal gate that can physically be performed is the CNOT gate. Recall that CNOT gates are the most expensive gates to perform, and thus the total number of these gates also serves as a good benchmark for the accuracy of the final output. A) Circuit depth and result accuracy Running the cells below will display the four transpiled AND gate circuit diagrams, with the corresponding inputs, that were executed on `ibmq_athens`, together with their circuit depths and the probability of producing the correct answer.
###Code
print('Transpiled AND gate circuit for ibmq_athens with input 0 0')
print('\nThe circuit depth : {}'.format (qc_trans2_all[0].depth()))
print('# of nonlocal gates : {}'.format (qc_trans2_all[0].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob2_all[0]) )
qc_trans2_all[0].draw()
print('Transpiled AND gate circuit for ibmq_athens with input 0 1')
print('\nThe circuit depth : {}'.format (qc_trans2_all[1].depth()))
print('# of nonlocal gates : {}'.format (qc_trans2_all[1].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob2_all[1]) )
qc_trans2_all[1].draw()
print('Transpiled AND gate circuit for ibmq_athens with input 1 0')
print('\nThe circuit depth : {}'.format (qc_trans2_all[2].depth()))
print('# of nonlocal gates : {}'.format (qc_trans2_all[2].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob2_all[2]) )
qc_trans2_all[2].draw()
print('Transpiled AND gate circuit for ibmq_athens with input 1 1')
print('\nThe circuit depth : {}'.format (qc_trans2_all[3].depth()))
print('# of nonlocal gates : {}'.format (qc_trans2_all[3].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob2_all[3]) )
qc_trans2_all[3].draw()
###Output
_____no_output_____
###Markdown
&128211; Explain the reason for the dissimilarity of the circuits. Describe the relation between the properties of the circuits and the accuracy of the outcomes. B) Qubit connectivity and circuit depth Investigate the transpiled circuits for `ibmqx2` by running the cells below.
###Code
print('Transpiled AND gate circuit for ibmqx2 with input 0 0')
print('\nThe circuit depth : {}'.format (qc_trans1_all[0].depth()))
print('# of nonlocal gates : {}'.format (qc_trans1_all[0].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob1_all[0]) )
qc_trans1_all[0].draw()
print('Transpiled AND gate circuit for ibmqx2 with input 0 1')
print('\nThe circuit depth : {}'.format (qc_trans1_all[1].depth()))
print('# of nonlocal gates : {}'.format (qc_trans1_all[1].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob1_all[1]) )
qc_trans1_all[1].draw()
print('Transpiled AND gate circuit for ibmqx2 with input 1 0')
print('\nThe circuit depth : {}'.format (qc_trans1_all[2].depth()))
print('# of nonlocal gates : {}'.format (qc_trans1_all[2].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob1_all[2]) )
qc_trans1_all[2].draw()
print('Transpiled AND gate circuit for ibmqx2 with input 1 1')
print('\nThe circuit depth : {}'.format (qc_trans1_all[3].depth()))
print('# of nonlocal gates : {}'.format (qc_trans1_all[3].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob1_all[3]) )
qc_trans1_all[3].draw()
###Output
_____no_output_____
###Markdown
<h1 style="font-size:35px; color:black; ">Lab 1 Quantum Circuits Prerequisite- [Qiskit basics](https://qiskit.org/documentation/tutorials/circuits/1_getting_started_with_qiskit.html)- [Ch.1.2 The Atoms of Computation](https://qiskit.org/textbook/ch-states/atoms-computation.html)Other relevant materials- [Access IBM Quantum Systems](https://qiskit.org/documentation/install.htmlaccess-ibm-quantum-systems)- [IBM Quantum Systems Configuration](https://quantum-computing.ibm.com/docs/manage/backends/configuration)- [Transpile](https://qiskit.org/documentation/apidoc/transpiler.html)- [IBM Quantum account](https://quantum-computing.ibm.com/docs/manage/account/ibmq)- [Quantum Circuits](https://qiskit.org/documentation/apidoc/circuit.html)
###Code
from qiskit import *
from qiskit.visualization import plot_histogram
import numpy as np
###Output
_____no_output_____
###Markdown
Part 1: Classical logic gates with quantum circuits<div style="background: E8E7EB; border-radius: 5px;-moz-border-radius: 5px;"> <p style="background: 800080; border-radius: 5px 5px 0px 0px; padding: 10px 0px 10px 10px; font-size:18px; color:white; ">Goal <p style=" padding: 0px 0px 10px 10px; font-size:16px;">Create quantum circuit functions that can compute the XOR, AND, NAND and OR gates using the NOT gate (expressed as x in Qiskit), the CNOT gate (expressed as cx in Qiskit) and the Toffoli gate (expressed as ccx in Qiskit) .An implementation of the `NOT` gate is provided as an example.
###Code
def NOT(inp):
"""An NOT gate.
Parameters:
inp (str): Input, encoded in qubit 0.
Returns:
QuantumCircuit: Output NOT circuit.
str: Output value measured from qubit 0.
"""
qc = QuantumCircuit(1, 1) # A quantum circuit with a single qubit and a single classical bit
qc.reset(0)
# We encode '0' as the qubit state |0⟩, and '1' as |1⟩
# Since the qubit is initially |0⟩, we don't need to do anything for an input of '0'
# For an input of '1', we do an x to rotate the |0⟩ to |1⟩
if inp=='1':
qc.x(0)
# barrier between input state and gate operation
qc.barrier()
# Now we've encoded the input, we can do a NOT on it using x
qc.x(0)
#barrier between gate operation and measurement
qc.barrier()
# Finally, we extract the |0⟩/|1⟩ output of the qubit and encode it in the bit c[0]
qc.measure(0,0)
qc.draw('mpl')
# We'll run the program on a simulator
backend = Aer.get_backend('aer_simulator')
# Since the output will be deterministic, we can use just a single shot to get it
job = backend.run(qc, shots=1, memory=True)
output = job.result().get_memory()[0]
return qc, output
## Test the function
for inp in ['0', '1']:
qc, out = NOT(inp)
print('NOT with input',inp,'gives output',out)
display(qc.draw())
print('\n')
###Output
NOT with input 0 gives output 1
###Markdown
&128211; XOR gateTakes two binary strings as input and gives one as output.The output is '0' when the inputs are equal and '1' otherwise.
###Code
def XOR(inp1,inp2):
"""An XOR gate.
Parameters:
inp1 (str): Input 1, encoded in qubit 0.
inp2 (str): Input 2, encoded in qubit 1.
Returns:
QuantumCircuit: Output XOR circuit.
str: Output value measured from qubit 1.
"""
qc = QuantumCircuit(2, 1)
qc.reset(range(2))
if inp1=='1':
qc.x(0)
if inp2=='1':
qc.x(1)
# barrier between input state and gate operation
qc.barrier()
# this is where your program for quantum XOR gate goes
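# A possible implementation (one of several equivalent constructions):
# a CNOT with qubit 0 as control leaves inp1 XOR inp2 on qubit 1
qc.cx(0, 1)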
# barrier between gate operation and measurement
qc.barrier()
qc.measure(1,0) # output from qubit 1 is measured
#We'll run the program on a simulator
backend = Aer.get_backend('aer_simulator')
#Since the output will be deterministic, we can use just a single shot to get it
job = backend.run(qc, shots=1, memory=True)
output = job.result().get_memory()[0]
return qc, output
## Test the function
for inp1 in ['0', '1']:
for inp2 in ['0', '1']:
qc, output = XOR(inp1, inp2)
print('XOR with inputs',inp1,inp2,'gives output',output)
display(qc.draw())
print('\n')
###Output
_____no_output_____
###Markdown
&#128211; AND gateTakes two binary strings as input and gives one as output.The output is `'1'` only when both the inputs are `'1'`.
###Code
def AND(inp1,inp2):
"""An AND gate.
Parameters:
inp1 (str): Input 1, encoded in qubit 0.
inp2 (str): Input 2, encoded in qubit 1.
Returns:
QuantumCircuit: Output AND circuit.
str: Output value measured from qubit 2.
"""
qc = QuantumCircuit(3, 1)
qc.reset(range(3))
if inp1=='1':
qc.x(0)
if inp2=='1':
qc.x(1)
qc.barrier()
# this is where your program for quantum AND gate goes
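# A possible implementation: a Toffoli writes inp1 AND inp2 onto qubit 2,
# matching the ccx(0, 1, 2) used in the real-device AND function later in this lab
qc.ccx(0, 1, 2)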
qc.barrier()
qc.measure(2, 0) # output from qubit 2 is measured
# We'll run the program on a simulator
backend = Aer.get_backend('aer_simulator')
# Since the output will be deterministic, we can use just a single shot to get it
job = backend.run(qc, shots=1, memory=True)
output = job.result().get_memory()[0]
return qc, output
## Test the function
for inp1 in ['0', '1']:
for inp2 in ['0', '1']:
qc, output = AND(inp1, inp2)
print('AND with inputs',inp1,inp2,'gives output',output)
display(qc.draw())
print('\n')
###Output
_____no_output_____
###Markdown
&#128211; NAND gateTakes two binary strings as input and gives one as output.The output is `'0'` only when both the inputs are `'1'`.
###Code
def NAND(inp1,inp2):
"""A NAND gate.
Parameters:
inp1 (str): Input 1, encoded in qubit 0.
inp2 (str): Input 2, encoded in qubit 1.
Returns:
QuantumCircuit: Output NAND circuit.
str: Output value measured from qubit 2.
"""
qc = QuantumCircuit(3, 1)
qc.reset(range(3))
if inp1=='1':
qc.x(0)
if inp2=='1':
qc.x(1)
qc.barrier()
# this is where your program for quantum NAND gate goes
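# A possible implementation (assumption: any equivalent construction is fine):
# compute AND with a Toffoli, then invert the result on qubit 2
qc.ccx(0, 1, 2)
qc.x(2)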
qc.barrier()
qc.measure(2, 0) # output from qubit 2 is measured
# We'll run the program on a simulator
backend = Aer.get_backend('aer_simulator')
# Since the output will be deterministic, we can use just a single shot to get it
job = backend.run(qc,shots=1,memory=True)
output = job.result().get_memory()[0]
return qc, output
## Test the function
for inp1 in ['0', '1']:
for inp2 in ['0', '1']:
qc, output = NAND(inp1, inp2)
print('NAND with inputs',inp1,inp2,'gives output',output)
display(qc.draw())
print('\n')
###Output
_____no_output_____
###Markdown
&#128211; OR gateTakes two binary strings as input and gives one as output.The output is '1' if either input is '1'.
###Code
def OR(inp1,inp2):
"""An OR gate.
Parameters:
inp1 (str): Input 1, encoded in qubit 0.
inp2 (str): Input 2, encoded in qubit 1.
Returns:
QuantumCircuit: Output OR circuit.
str: Output value measured from qubit 2.
"""
qc = QuantumCircuit(3, 1)
qc.reset(range(3))
if inp1=='1':
qc.x(0)
if inp2=='1':
qc.x(1)
qc.barrier()
# this is where your program for quantum OR gate goes
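# A possible implementation (assumption: any equivalent construction is fine):
# copy each input onto qubit 2 with CNOTs, then a Toffoli fixes the 1,1 case
qc.cx(0, 2)
qc.cx(1, 2)
qc.ccx(0, 1, 2)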
qc.barrier()
qc.measure(2, 0) # output from qubit 2 is measured
# We'll run the program on a simulator
backend = Aer.get_backend('aer_simulator')
# Since the output will be deterministic, we can use just a single shot to get it
job = backend.run(qc,shots=1,memory=True)
output = job.result().get_memory()[0]
return qc, output
## Test the function
for inp1 in ['0', '1']:
for inp2 in ['0', '1']:
qc, output = OR(inp1, inp2)
print('OR with inputs',inp1,inp2,'gives output',output)
display(qc.draw())
print('\n')
###Output
_____no_output_____
###Markdown
Part 2: AND gate on Quantum Computer<div style="background: #E8E7EB; border-radius: 5px;-moz-border-radius: 5px;"> <p style="background: #800080; border-radius: 5px 5px 0px 0px; padding: 10px 0px 10px 10px; font-size:18px; color:white; ">Goal <p style=" padding: 0px 0px 10px 10px; font-size:16px;">Execute the AND gate on two quantum systems and learn how the different circuit properties affect the result.In Part 1 you made an `AND` gate from quantum gates, and executed it on the simulator. Here in Part 2 you will do it again, but instead run the circuits on a real quantum computer. When using a real quantum system, one thing you should keep in mind is that present day quantum computers are not fault tolerant; they are noisy.The 'noise' in a quantum system is the collective effect of all the things that should not happen, but nevertheless do. Noise results in outputs that are not always what we would expect. There is noise associated with all processes in a quantum circuit: preparing the initial state, applying gates, and qubit measurement. For the gates, noise levels can vary between different gates and between different qubits. `cx` gates are typically more noisy than any single-qubit gate.Here we will use the quantum systems from the IBM Quantum Experience. If you do not have access, you can request it [here](https://qiskit.org/documentation/install.html#access-ibm-quantum-systems).Now that you are ready to use the real quantum computer, let's begin. Step 1. Choosing a device First load the account from the credentials saved on disk by running the following cell:
###Code
IBMQ.load_account()
###Output
_____no_output_____
###Markdown
After your account is loaded, you can see the list of providers that you have access to by running the cell below. Each provider offers different systems for use. For open users, there is typically only one provider `ibm-q/open/main`:
###Code
IBMQ.providers()
###Output
_____no_output_____
###Markdown
Let us grab the provider using `get_provider`. The command, `provider.backends( )` shows you the list of backends that are available to you from the selected provider.
###Code
provider = IBMQ.get_provider('ibm-q')
provider.backends()
###Output
_____no_output_____
###Markdown
Among these options, you may pick one of the systems to run your circuits on. All except the `ibmq_qasm_simulator` are real quantum computers that you can use. The differences among these systems reside in the number of qubits, their connectivity, and the system error rates. Upon executing the following cell you will be presented with a widget that displays all of the information about your choice of backend. You can obtain the information that you need by clicking on the tabs. For example, backend status, number of qubits and the connectivity are under the `configuration` tab, whereas the `Error Map` tab will reveal the latest noise information for the system.
###Code
import qiskit.tools.jupyter
backend_ex = provider.get_backend('ibmq_16_melbourne')
backend_ex
###Output
_____no_output_____
###Markdown
For our AND gate circuit, we need a backend with three or more qubits, which is true for all the real systems except for `ibmq_armonk`. Below is an example of how to filter backends, where we filter for number of qubits, and remove simulators:
###Code
backends = provider.backends(filters = lambda x:x.configuration().n_qubits >= 2 and not x.configuration().simulator
and x.status().operational==True)
backends
###Output
_____no_output_____
###Markdown
One convenient way to choose a system is to use the `least_busy` function to get the backend with the lowest number of jobs in the queue. The downside is that the result might have relatively poor accuracy because, not surprisingly, the lowest error rate systems are the most popular.
###Code
from qiskit.providers.ibmq import least_busy
backend = least_busy(provider.backends(filters=lambda x: x.configuration().n_qubits >= 2 and
not x.configuration().simulator and x.status().operational==True))
backend
###Output
_____no_output_____
###Markdown
Real quantum computers need to be recalibrated regularly, and the fidelity of a specific qubit or gate can change over time. Therefore, which system would produce results with less error can vary. `ibmq_athens` tends to show relatively low error rates.In this exercise, we select two systems: `ibmq_athens` for its low error rates, and `ibmqx2` for its additional connectivity, in particular its triangular connectivity, that will be useful for circuits with Toffoli gates.
###Code
# run this cell
backend1 = provider.get_backend('ibmqx2')
backend2 = provider.get_backend('ibmq_athens')
###Output
_____no_output_____
###Markdown
Step 2. Define AND function for a real deviceWe now define the AND function. We choose 8192 as the number of shots, the maximum number of shots for open IBM systems, to reduce the variance in the final result. Related information is well explained [here](https://quantum-computing.ibm.com/docs/manage/backends/configuration). Qiskit Transpiler It is important to know that when running a circuit on a real quantum computer, circuits typically need to be transpiled for the backend that you select so that the circuit contains only those gates that the quantum computer can actually perform. Primarily this involves the addition of swap gates so that two-qubit gates in the circuit map to those pairs of qubits on the device that can actually perform these gates. The following cell shows the AND gate represented as a Toffoli gate decomposed into single- and two-qubit gates, which are the only types of gate that can be run on IBM hardware. Provided that CNOT gates can be performed between all three qubits, a triangle topology, no other gates are required.
###Code
qc_and = QuantumCircuit(3)
qc_and.ccx(0,1,2)
print('AND gate')
display(qc_and.draw())
print('\n\nTranspiled AND gate with all the required connectivity')
qc_and.decompose().draw()
###Output
AND gate
###Markdown
In addition, there are often optimizations that the transpiler can perform that reduce the overall gate count, and thus the total length of the input circuits. Note that the addition of swaps to match the device topology, and optimizations for reducing the length of a circuit, are at odds with each other. In what follows we will make use of `initial_layout`, which allows us to pick the qubits on a device used for the computation, and `optimization_level`, an argument that allows selecting from internal defaults for circuit swap mapping and optimization methods to perform.You can learn more about the transpile function in depth [here](https://qiskit.org/documentation/apidoc/transpiler.html). Let's modify the AND function from Part 1 for the real system, with the transpile step included.
###Code
from qiskit.tools.monitor import job_monitor
# run the cell to define AND gate for real quantum system
def AND(inp1, inp2, backend, layout):
qc = QuantumCircuit(3, 1)
qc.reset(range(3))
if inp1=='1':
qc.x(0)
if inp2=='1':
qc.x(1)
qc.barrier()
qc.ccx(0, 1, 2)
qc.barrier()
qc.measure(2, 0)
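# transpile maps the circuit onto the chosen physical qubits (initial_layout)
# and applies the heaviest optimization preset (optimization_level=3)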
qc_trans = transpile(qc, backend, initial_layout=layout, optimization_level=3)
job = backend.run(qc_trans, shots=8192)
print(job.job_id())
job_monitor(job)
output = job.result().get_counts()
return qc_trans, output
###Output
_____no_output_____
###Markdown
When you submit jobs to quantum systems, `job_monitor` will start tracking where your submitted job is in the pipeline. Case A) Three qubits on ibmqx2 with the triangle connectivity First, examine `ibmqx2` using the widget introduced earlier. Find a group of three qubits with triangle connection and determine your initial layout.
###Code
# run this cell for the widget
backend1
###Output
_____no_output_____
###Markdown
&#128211; Assign your choice of layout to the list variable layout1 in the cell below
###Code
# Assign your choice of the initial_layout to the variable layout1 as a list
# ex) layout1 = [0,2,4]
layout1 = [0, 1, 2]  # example choice (assumption): qubits 0, 1 and 2 form a connected triangle on ibmqx2
###Output
_____no_output_____
###Markdown
&#128211; Describe the reason for your choice of initial layout. Execute the `AND` gate on `ibmqx2` by running the cell below.
###Code
output1_all = []
qc_trans1_all = []
prob1_all = []
worst = 1
best = 0
for input1 in ['0','1']:
for input2 in ['0','1']:
qc_trans1, output1 = AND(input1, input2, backend1, layout1)
output1_all.append(output1)
qc_trans1_all.append(qc_trans1)
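# the correct output is '1' only for input 1 1 (and '0' otherwise);
# dividing its counts by the 8192 shots gives the success probability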
prob = output1[str(int( input1=='1' and input2=='1' ))]/8192
prob1_all.append(prob)
print('\nProbability of correct answer for inputs',input1,input2)
print( '{:.2f}'.format(prob) )
print('---------------------------------')
worst = min(worst,prob)
best = max(best, prob)
print('')
print('\nThe highest of these probabilities was {:.2f}'.format(best))
print('The lowest of these probabilities was {:.2f}'.format(worst))
###Output
_____no_output_____
###Markdown
Once your job has finished running, you can then easily access the results via:```pythonresults = backend.retrieve_job('JOB_ID').result()```Your job_ids will be printed out through the `AND` function defined above. You can also find the job_ids from the results under your `IQX` account. More information can be found [here](https://quantum-computing.ibm.com/docs/manage/account/ibmq). Case B) Three qubits on ibmq_athens with linear nearest-neighbor connectivity Examine `ibmq_athens` through the widget by running the cell below.
###Code
backend2
###Output
_____no_output_____
###Markdown
&#128211; Find three qubits with linear nearest-neighbor connectivity. Determine the initial layout considering the error map and assign it to the list variable layout2.
###Code
layout2 = [0, 1, 2]  # example choice (assumption): three consecutive qubits on the ibmq_athens linear chain
###Output
_____no_output_____
###Markdown
&#128211; Describe the reason for your choice of initial layout. Execute the `AND` gate on `ibmq_athens` by running the cell below.
###Code
output2_all = []
qc_trans2_all = []
prob2_all = []
worst = 1
best = 0
for input1 in ['0','1']:
for input2 in ['0','1']:
qc_trans2, output2 = AND(input1, input2, backend2, layout2)
output2_all.append(output2)
qc_trans2_all.append(qc_trans2)
prob = output2[str(int( input1=='1' and input2=='1' ))]/8192
prob2_all.append(prob)
print('\nProbability of correct answer for inputs',input1,input2)
print('{:.2f}'.format(prob) )
print('---------------------------------')
worst = min(worst,prob)
best = max(best, prob)
print('')
print('\nThe highest of these probabilities was {:.2f}'.format(best))
print('The lowest of these probabilities was {:.2f}'.format(worst))
###Output
_____no_output_____
###Markdown
Step 3. Interpret the result There are several quantities that distinguish the circuits. Chief among them is the **circuit depth**. Circuit depth is defined in detail [here](https://qiskit.org/documentation/apidoc/circuit.html#supplementary-information) (see the Supplementary Information and click the Quantum Circuit Properties tab). Circuit depth is proportional to the number of gates in a circuit, and loosely corresponds to the runtime of the circuit on hardware. Therefore, circuit depth is an easy-to-compute metric that can be used to estimate the fidelity of an executed circuit.A second important value is the number of **nonlocal** (multi-qubit) **gates** in a circuit. On IBM Quantum systems, the only nonlocal gate that can physically be performed is the CNOT gate. Recall that CNOT gates are the most expensive gates to perform, and thus the total number of these gates also serves as a good benchmark for the accuracy of the final output. A) Circuit depth and result accuracy Running the cells below will display the four transpiled AND gate circuit diagrams, with the corresponding inputs, that were executed on `ibmq_athens`, together with their circuit depths and the success probability of producing the correct answer.
###Code
print('Transpiled AND gate circuit for ibmq_athens with input 0 0')
print('\nThe circuit depth : {}'.format (qc_trans2_all[0].depth()))
print('# of nonlocal gates : {}'.format (qc_trans2_all[0].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob2_all[0]) )
qc_trans2_all[0].draw()
print('Transpiled AND gate circuit for ibmq_athens with input 0 1')
print('\nThe circuit depth : {}'.format (qc_trans2_all[1].depth()))
print('# of nonlocal gates : {}'.format (qc_trans2_all[1].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob2_all[1]) )
qc_trans2_all[1].draw()
print('Transpiled AND gate circuit for ibmq_athens with input 1 0')
print('\nThe circuit depth : {}'.format (qc_trans2_all[2].depth()))
print('# of nonlocal gates : {}'.format (qc_trans2_all[2].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob2_all[2]) )
qc_trans2_all[2].draw()
print('Transpiled AND gate circuit for ibmq_athens with input 1 1')
print('\nThe circuit depth : {}'.format (qc_trans2_all[3].depth()))
print('# of nonlocal gates : {}'.format (qc_trans2_all[3].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob2_all[3]) )
qc_trans2_all[3].draw()
###Output
_____no_output_____
###Markdown
&#128211; Explain the reason for the dissimilarity of the circuits. Describe the relation between the properties of the circuits and the accuracy of the outcomes. B) Qubit connectivity and circuit depth Investigate the transpiled circuits for `ibmqx2` by running the cells below.
###Code
print('Transpiled AND gate circuit for ibmqx2 with input 0 0')
print('\nThe circuit depth : {}'.format (qc_trans1_all[0].depth()))
print('# of nonlocal gates : {}'.format (qc_trans1_all[0].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob1_all[0]) )
qc_trans1_all[0].draw()
print('Transpiled AND gate circuit for ibmqx2 with input 0 1')
print('\nThe circuit depth : {}'.format (qc_trans1_all[1].depth()))
print('# of nonlocal gates : {}'.format (qc_trans1_all[1].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob1_all[1]) )
qc_trans1_all[1].draw()
print('Transpiled AND gate circuit for ibmqx2 with input 1 0')
print('\nThe circuit depth : {}'.format (qc_trans1_all[2].depth()))
print('# of nonlocal gates : {}'.format (qc_trans1_all[2].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob1_all[2]) )
qc_trans1_all[2].draw()
print('Transpiled AND gate circuit for ibmqx2 with input 1 1')
print('\nThe circuit depth : {}'.format (qc_trans1_all[3].depth()))
print('# of nonlocal gates : {}'.format (qc_trans1_all[3].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob1_all[3]) )
qc_trans1_all[3].draw()
###Output
_____no_output_____ |
PY0101EN-2-4-Sets.ipynb | ###Markdown
Sets in PythonEstimated time needed: **20** minutes ObjectivesAfter completing this lab you will be able to:- Work with sets in Python, including operations and logic operations. Table of Contents Sets Set Content Set Operations Sets Logic Operations Quiz on Sets Sets Set Content A set is a unique collection of objects in Python. You can denote a set with a curly bracket {}. Python will automatically remove duplicate items:
###Code
# Create a set
set1 = {"pop", "rock", "soul", "hard rock", "rock", "R&B", "rock", "disco"}
set1
###Output
_____no_output_____
###Markdown
The process of mapping is illustrated in the figure: You can also create a set from a list as follows:
###Code
# Convert list to set
album_list = [ "Michael Jackson", "Thriller", 1982, "00:42:19", \
"Pop, Rock, R&B", 46.0, 65, "30-Nov-82", None, 10.0]
album_set = set(album_list)
album_set
###Output
_____no_output_____
###Markdown
Now let us create a set of genres:
###Code
# Convert list to set
music_genres = set(["pop", "pop", "rock", "folk rock", "hard rock", "soul", \
"progressive rock", "soft rock", "R&B", "disco"])
music_genres
###Output
_____no_output_____
###Markdown
Set Operations Let us go over set operations, as these can be used to change the set. Consider the set A:
###Code
# Sample set
A = set(["Thriller", "Back in Black", "AC/DC"])
A
###Output
_____no_output_____
###Markdown
We can add an element to a set using the add() method:
###Code
# Add element to set
A.add("NSYNC")
A
###Output
_____no_output_____
###Markdown
If we add the same element twice, nothing will happen as there can be no duplicates in a set:
###Code
# Try to add duplicate element to the set
A.add("NSYNC")
A
###Output
_____no_output_____
###Markdown
We can remove an item from a set using the remove method:
###Code
# Remove the element from set
A.remove("NSYNC")
A
###Output
_____no_output_____
###Markdown
We can verify if an element is in the set using the in command:
###Code
# Verify if the element is in the set
"AC/DC" in A
###Output
_____no_output_____
###Markdown
Sets Logic Operations Remember that with sets you can check the difference between sets, as well as the symmetric difference, intersection, and union: Consider the following two sets:
###Code
# Sample Sets
album_set1 = set(["Thriller", 'AC/DC', 'Back in Black'])
album_set2 = set([ "AC/DC", "Back in Black", "The Dark Side of the Moon"])
###Output
_____no_output_____
###Markdown
###Code
# Print two sets
album_set1, album_set2
###Output
_____no_output_____
###Markdown
As both sets contain AC/DC and Back in Black we represent these common elements with the intersection of two circles. You can find the intersection of two sets as follows using &:
###Code
# Find the intersections
intersection = album_set1 & album_set2
intersection
###Output
_____no_output_____
###Markdown
You can find all the elements that are only contained in album_set1 using the difference method:
###Code
# Find the difference in set1 but not set2
album_set1.difference(album_set2)
###Output
_____no_output_____
###Markdown
You only need to consider elements in album_set1; all the elements in album_set2, including the intersection, are not included. The elements in album_set2 but not in album_set1 is given by:
###Code
album_set2.difference(album_set1)
###Output
_____no_output_____
###Markdown
You can also find the intersection of album_set1 and album_set2, using the intersection method:
###Code
# Use intersection method to find the intersection of album_list1 and album_list2
album_set1.intersection(album_set2)
###Output
_____no_output_____
###Markdown
This corresponds to the intersection of the two circles: The union corresponds to all the elements in both sets, which is represented by coloring both circles: The union is given by:
###Code
# Find the union of two sets
album_set1.union(album_set2)
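# The notes above also mention the symmetric difference (elements in exactly one of the two sets);
# it can be computed with the ^ operator or the symmetric_difference method
album_set1.symmetric_difference(album_set2)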
###Output
_____no_output_____
###Markdown
And you can check if a set is a superset or subset of another set, respectively, like this:
###Code
# Check if superset
set(album_set1).issuperset(album_set2)
# Check if subset
set(album_set2).issubset(album_set1)
###Output
_____no_output_____
###Markdown
Here is an example where issubset() and issuperset() return true:
###Code
# Check if subset
set({"Back in Black", "AC/DC"}).issubset(album_set1)
# Check if superset
album_set1.issuperset({"Back in Black", "AC/DC"})
###Output
_____no_output_____
###Markdown
Quiz on Sets Convert the list ['rap','house','electronic music', 'rap'] to a set:
###Code
# Write your code below and press Shift+Enter to execute
L=['rap','house','electronic music', 'rap']
Set=set(L)
print(Set)
set(['rap','house','electronic music', 'rap'])
###Output
{'electronic music', 'house', 'rap'}
###Markdown
Click here for the solution```pythonset(['rap','house','electronic music','rap'])``` Consider the list A = [1, 2, 2, 1] and set B = set([1, 2, 2, 1]), does sum(A) = sum(B)
###Code
# Write your code below and press Shift+Enter to execute
A = [1, 2, 2, 1]
B = set([1, 2, 2, 1])
print("the sum of A is:", sum(A))
print("the sum of B is:", sum(B))
print(A)
print(B)
###Output
the sum of A is: 6
the sum of B is: 3
[1, 2, 2, 1]
{1, 2}
###Markdown
Click here for the solution```pythonA = [1, 2, 2, 1] B = set([1, 2, 2, 1])print("the sum of A is:", sum(A))print("the sum of B is:", sum(B))``` Create a new set album_set3 that is the union of album_set1 and album_set2:
###Code
# Write your code below and press Shift+Enter to execute
album_set1 = set(["Thriller", 'AC/DC', 'Back in Black'])
album_set2 = set([ "AC/DC", "Back in Black", "The Dark Side of the Moon"])
album_set3 = album_set1.union(album_set2)
print(album_set3)
###Output
{'Back in Black', 'Thriller', 'The Dark Side of the Moon', 'AC/DC'}
###Markdown
Click here for the solution```pythonalbum_set3 = album_set1.union(album_set2)album_set3``` Find out if album_set1 is a subset of album_set3:
###Code
# Write your code below and press Shift+Enter to execute
album_set1 = set(["Thriller", 'AC/DC', 'Back in Black'])
album_set2 = set([ "AC/DC", "Back in Black", "The Dark Side of the Moon"])
album_set3 = album_set1.union(album_set2)
print(album_set3)
album_set1.issubset(album_set3)
###Output
{'Back in Black', 'Thriller', 'The Dark Side of the Moon', 'AC/DC'}
###Markdown
Sets in Python Welcome! This notebook will teach you about sets in the Python Programming Language. By the end of this lab, you'll know the basic set operations in Python, including what a set is, set operations and logic operations. Table of Contents Sets Set Content Set Operations Sets Logic Operations Quiz on Sets Estimated time needed: 20 min Sets Set Content A set is a unique collection of objects in Python. You can denote a set with a pair of curly brackets {}. Python will automatically remove duplicate items:
###Code
# Create a set
set1 = {"pop", "rock", "soul", "hard rock", "rock", "R&B", "rock", "disco"}
set1
###Output
_____no_output_____
###Markdown
The process of mapping is illustrated in the figure: You can also create a set from a list as follows:
###Code
# Convert list to set
album_list = [ "Michael Jackson", "Thriller", 1982, "00:42:19", \
"Pop, Rock, R&B", 46.0, 65, "30-Nov-82", None, 10.0]
album_set = set(album_list)
album_set
###Output
_____no_output_____
###Markdown
Now let us create a set of genres:
###Code
# Convert list to set
music_genres = set(["pop", "pop", "rock", "folk rock", "hard rock", "soul", \
"progressive rock", "soft rock", "R&B", "disco"])
music_genres
###Output
_____no_output_____
###Markdown
Set Operations Let us go over set operations, as these can be used to change the set. Consider the set A:
###Code
# Sample set
A = set(["Thriller", "Back in Black", "AC/DC"])
A
###Output
_____no_output_____
###Markdown
We can add an element to a set using the add() method:
###Code
# Add element to set
A.add("NSYNC")
A
###Output
_____no_output_____
###Markdown
If we add the same element twice, nothing will happen as there can be no duplicates in a set:
###Code
# Try to add duplicate element to the set
A.add("NSYNC")
A
###Output
_____no_output_____
###Markdown
We can remove an item from a set using the remove method:
###Code
# Remove the element from set
A.remove("NSYNC")
A
###Output
_____no_output_____
###Markdown
We can verify if an element is in the set using the in command:
###Code
# Verify if the element is in the set
"AC/DC" in A
###Output
_____no_output_____
###Markdown
Sets Logic Operations Remember that with sets you can check the difference between sets, as well as the symmetric difference, intersection, and union: Consider the following two sets:
###Code
# Sample Sets
album_set1 = set(["Thriller", 'AC/DC', 'Back in Black'])
album_set2 = set([ "AC/DC", "Back in Black", "The Dark Side of the Moon"])
###Output
_____no_output_____
###Markdown
###Code
# Print two sets
album_set1, album_set2
###Output
_____no_output_____
###Markdown
As both sets contain AC/DC and Back in Black we represent these common elements with the intersection of two circles. You can find the intersection of two sets as follows using &:
###Code
# Find the intersections
intersection = album_set1 & album_set2
intersection
###Output
_____no_output_____
###Markdown
You can find all the elements that are only contained in album_set1 using the difference method:
###Code
# Find the difference in set1 but not set2
album_set1.difference(album_set2)
###Output
_____no_output_____
###Markdown
You only need to consider elements in album_set1; all the elements in album_set2, including the intersection, are not included. The elements in album_set2 but not in album_set1 is given by:
###Code
album_set2.difference(album_set1)
###Output
_____no_output_____
###Markdown
You can also find the intersection of album_set1 and album_set2, using the intersection method:
###Code
# Use intersection method to find the intersection of album_list1 and album_list2
album_set1.intersection(album_set2)
###Output
_____no_output_____
###Markdown
This corresponds to the intersection of the two circles: The union corresponds to all the elements in both sets, which is represented by coloring both circles: The union is given by:
###Code
# Find the union of two sets
album_set1.union(album_set2)
###Output
_____no_output_____
###Markdown
And you can check if a set is a superset or subset of another set, respectively, like this:
###Code
# Check if superset
set(album_set1).issuperset(album_set2)
# Check if subset
set(album_set2).issubset(album_set1)
###Output
_____no_output_____
###Markdown
Here is an example where issubset() and issuperset() return true:
###Code
# Check if subset
set({"Back in Black", "AC/DC"}).issubset(album_set1)
# Check if superset
album_set1.issuperset({"Back in Black", "AC/DC"})
###Output
_____no_output_____
###Markdown
Quiz on Sets Convert the list ['rap','house','electronic music', 'rap'] to a set:
###Code
# Write your code below and press Shift+Enter to execute
set(['rap','house','electronic music', 'rap'])
###Output
_____no_output_____
###Markdown
Double-click __here__ for the solution.<!-- Your answer is below:set(['rap','house','electronic music','rap'])--> Consider the list A = [1, 2, 2, 1] and set B = set([1, 2, 2, 1]), does sum(A) = sum(B)
###Code
# Write your code below and press Shift+Enter to execute
A = [1, 2, 2, 1]
B = set([1, 2, 2, 1])
print("the sum of A is:", sum(A))
print("the sum of B is:", sum(B))
###Output
the sum of A is: 6
the sum of B is: 3
###Markdown
Double-click __here__ for the solution.<!-- Your answer is below:A = [1, 2, 2, 1] B = set([1, 2, 2, 1])print("the sum of A is:", sum(A))print("the sum of B is:", sum(B))--> Create a new set album_set3 that is the union of album_set1 and album_set2:
###Code
# Write your code below and press Shift+Enter to execute
album_set1 = set(["Thriller", 'AC/DC', 'Back in Black'])
album_set2 = set([ "AC/DC", "Back in Black", "The Dark Side of the Moon"])
album_set3 = album_set1.union(album_set2)
album_set3
###Output
_____no_output_____
###Markdown
Double-click __here__ for the solution.<!-- Your answer is below:album_set3 = album_set1.union(album_set2)album_set3--> Find out if album_set1 is a subset of album_set3:
###Code
# Write your code below and press Shift+Enter to execute
album_set1.issubset(album_set3)
###Output
_____no_output_____
###Markdown
Sets in Python Welcome! This notebook will teach you about sets in the Python Programming Language. By the end of this lab, you'll know the basic set operations in Python, including what a set is, set operations and logic operations. Table of Contents Sets Set Content Set Operations Sets Logic Operations Quiz on Sets Estimated time needed: 20 min Sets Set Content A set is a unique collection of objects in Python. You can denote a set with a pair of curly brackets {}. Python will automatically remove duplicate items:
###Code
# Create a set
set1 = {"pop", "rock", "soul", "hard rock", "rock", "R&B", "rock", "disco"}
set1
###Output
_____no_output_____
###Markdown
The process of mapping is illustrated in the figure: You can also create a set from a list as follows:
###Code
# Convert list to set
album_list = [ "Michael Jackson", "Thriller", 1982, "00:42:19", \
"Pop, Rock, R&B", 46.0, 65, "30-Nov-82", None, 10.0]
album_set = set(album_list)
album_set
###Output
_____no_output_____
###Markdown
Now let us create a set of genres:
###Code
# Convert list to set
music_genres = set(["pop", "pop", "rock", "folk rock", "hard rock", "soul", \
"progressive rock", "soft rock", "R&B", "disco"])
music_genres
###Output
_____no_output_____
###Markdown
Set Operations Let us go over set operations, as these can be used to change the set. Consider the set A:
###Code
# Sample set
A = set(["Thriller", "Back in Black", "AC/DC"])
A
###Output
_____no_output_____
###Markdown
We can add an element to a set using the add() method:
###Code
# Add element to set
A.add("NSYNC")
A
###Output
_____no_output_____
###Markdown
If we add the same element twice, nothing will happen as there can be no duplicates in a set:
###Code
# Try to add duplicate element to the set
A.add("NSYNC")
A
###Output
_____no_output_____
###Markdown
We can remove an item from a set using the remove method:
###Code
# Remove the element from set
A.remove("NSYNC")
A
###Output
_____no_output_____
###Markdown
We can verify if an element is in the set using the in command:
###Code
# Verify if the element is in the set
"AC/DC" in A
###Output
_____no_output_____
###Markdown
Sets Logic Operations Remember that with sets you can check the difference between sets, as well as the symmetric difference, intersection, and union: Consider the following two sets:
###Code
# Sample Sets
album_set1 = set(["Thriller", 'AC/DC', 'Back in Black'])
album_set2 = set([ "AC/DC", "Back in Black", "The Dark Side of the Moon"])
###Output
_____no_output_____
###Markdown
###Code
# Print two sets
album_set1, album_set2
###Output
_____no_output_____
###Markdown
As both sets contain AC/DC and Back in Black we represent these common elements with the intersection of two circles. You can find the intersection of two sets as follows using &:
###Code
# Find the intersections
intersection = album_set1 & album_set2
intersection
###Output
_____no_output_____
###Markdown
You can find all the elements that are only contained in album_set1 using the difference method:
###Code
# Find the difference in set1 but not set2
album_set1.difference(album_set2)
###Output
_____no_output_____
###Markdown
You only need to consider elements in album_set1; all the elements in album_set2, including the intersection, are not included. The elements in album_set2 but not in album_set1 is given by:
###Code
album_set2.difference(album_set1)
###Output
_____no_output_____
###Markdown
You can also find the intersection of album_set1 and album_set2, using the intersection method:
###Code
# Use intersection method to find the intersection of album_list1 and album_list2
album_set1.intersection(album_set2)
###Output
_____no_output_____
###Markdown
This corresponds to the intersection of the two circles: The union corresponds to all the elements in both sets, which is represented by coloring both circles: The union is given by:
###Code
# Find the union of two sets
album_set1.union(album_set2)
###Output
_____no_output_____
###Markdown
And you can check if a set is a superset or subset of another set, respectively, like this:
###Code
# Check if superset
set(album_set1).issuperset(album_set2)
# Check if subset
set(album_set2).issubset(album_set1)
###Output
_____no_output_____
###Markdown
Here is an example where issubset() and issuperset() return true:
###Code
# Check if subset
set({"Back in Black", "AC/DC"}).issubset(album_set1)
# Check if superset
album_set1.issuperset({"Back in Black", "AC/DC"})
###Output
_____no_output_____
###Markdown
Quiz on Sets Convert the list ['rap','house','electronic music', 'rap'] to a set:
###Code
# Write your code below and press Shift+Enter to execute
set(['rap','house','electronic music', 'rap'])
###Output
_____no_output_____
###Markdown
Double-click __here__ for the solution.<!-- Your answer is below:set(['rap','house','electronic music','rap'])--> Consider the list A = [1, 2, 2, 1] and set B = set([1, 2, 2, 1]), does sum(A) = sum(B)
###Code
# Write your code below and press Shift+Enter to execute
A = sum([1, 2, 2, 1])
B = sum(set([1, 2, 2, 1]))
if (A==B):
print("The sum of A is equal to B")
else:
print("The sum of A is not equal to B")
###Output
The sum of A is not equal to B
###Markdown
Double-click __here__ for the solution.<!-- Your answer is below:A = [1, 2, 2, 1] B = set([1, 2, 2, 1])print("the sum of A is:", sum(A))print("the sum of B is:", sum(B))--> Create a new set album_set3 that is the union of album_set1 and album_set2:
###Code
# Write your code below and press Shift+Enter to execute
album_set1 = set(["Thriller", 'AC/DC', 'Back in Black'])
album_set2 = set([ "AC/DC", "Back in Black", "The Dark Side of the Moon"])
album_set3 = album_set1.union(album_set2)
album_set3
###Output
_____no_output_____
###Markdown
Double-click __here__ for the solution.<!-- Your answer is below:album_set3 = album_set1.union(album_set2)album_set3--> Find out if album_set1 is a subset of album_set3:
###Code
# Write your code below and press Shift+Enter to execute
set(album_set1).issubset(album_set3)
###Output
_____no_output_____
###Markdown
Sets in Python Welcome! This notebook will teach you about sets in the Python Programming Language. By the end of this lab, you'll know the basic set operations in Python, including what a set is, set operations and logic operations. Table of Contents Sets Set Content Set Operations Sets Logic Operations Quiz on Sets Estimated time needed: 20 min Sets Set Content A set is a unique collection of objects in Python. You can denote a set with a pair of curly brackets {}. Python will automatically remove duplicate items:
###Code
# Create a set
set1 = {"pop", "rock", "soul", "hard rock", "rock", "R&B", "rock", "disco"}
set1
###Output
_____no_output_____
###Markdown
The process of mapping is illustrated in the figure: You can also create a set from a list as follows:
###Code
# Convert list to set
album_list = [ "Michael Jackson", "Thriller", 1982, "00:42:19", \
"Pop, Rock, R&B", 46.0, 65, "30-Nov-82", None, 10.0]
album_set = set(album_list)
album_set
###Output
_____no_output_____
###Markdown
Now let us create a set of genres:
###Code
# Convert list to set
music_genres = set(["pop", "pop", "rock", "folk rock", "hard rock", "soul", \
"progressive rock", "soft rock", "R&B", "disco"])
music_genres
###Output
_____no_output_____
###Markdown
Set Operations Let us go over set operations, as these can be used to change the set. Consider the set A:
###Code
# Sample set
A = set(["Thriller", "Back in Black", "AC/DC"])
A
###Output
_____no_output_____
###Markdown
We can add an element to a set using the add() method:
###Code
# Add element to set
A.add("NSYNC")
A
###Output
_____no_output_____
###Markdown
If we add the same element twice, nothing will happen as there can be no duplicates in a set:
###Code
# Try to add duplicate element to the set
A.add("NSYNC")
A
###Output
_____no_output_____
###Markdown
We can remove an item from a set using the remove method:
###Code
# Remove the element from set
A.remove("NSYNC")
A
###Output
_____no_output_____
###Markdown
We can verify if an element is in the set using the in command:
###Code
# Verify if the element is in the set
"AC/DC" in A
###Output
_____no_output_____
###Markdown
Sets Logic Operations Remember that with sets you can check the difference between sets, as well as the symmetric difference, intersection, and union: Consider the following two sets:
###Code
# Sample Sets
album_set1 = set(["Thriller", 'AC/DC', 'Back in Black'])
album_set2 = set([ "AC/DC", "Back in Black", "The Dark Side of the Moon"])
###Output
_____no_output_____
###Markdown
###Code
# Print two sets
album_set1, album_set2
###Output
_____no_output_____
###Markdown
As both sets contain AC/DC and Back in Black we represent these common elements with the intersection of two circles. You can find the intersection of two sets as follows using &:
###Code
# Find the intersections
intersection = album_set1 & album_set2
intersection
###Output
_____no_output_____
###Markdown
You can find all the elements that are only contained in album_set1 using the difference method:
###Code
# Find the difference in set1 but not set2
album_set1.difference(album_set2)
###Output
_____no_output_____
###Markdown
You only need to consider elements in album_set1; all the elements in album_set2, including the intersection, are not included. The elements in album_set2 but not in album_set1 is given by:
###Code
album_set2.difference(album_set1)
###Output
_____no_output_____
###Markdown
You can also find the intersection of album_set1 and album_set2, using the intersection method:
###Code
# Use intersection method to find the intersection of album_list1 and album_list2
album_set1.intersection(album_set2)
###Output
_____no_output_____
###Markdown
This corresponds to the intersection of the two circles: The union corresponds to all the elements in both sets, which is represented by coloring both circles: The union is given by:
###Code
# Find the union of two sets
album_set1.union(album_set2)
###Output
_____no_output_____
###Markdown
And you can check if a set is a superset or subset of another set, respectively, like this:
###Code
# Check if superset
set(album_set1).issuperset(album_set2)
# Check if subset
set(album_set2).issubset(album_set1)
###Output
_____no_output_____
###Markdown
Here is an example where issubset() and issuperset() return true:
###Code
# Check if subset
set({"Back in Black", "AC/DC"}).issubset(album_set1)
# Check if superset
album_set1.issuperset({"Back in Black", "AC/DC"})
###Output
_____no_output_____
###Markdown
Quiz on Sets Convert the list ['rap','house','electronic music', 'rap'] to a set:
###Code
# Write your code below and press Shift+Enter to execute
genre_list = ['rap','house','electronic music', 'rap']
genre_set = set(genre_list)
genre_set
###Output
_____no_output_____
###Markdown
Double-click __here__ for the solution.<!-- Your answer is below:set(['rap','house','electronic music','rap'])--> Consider the list A = [1, 2, 2, 1] and set B = set([1, 2, 2, 1]), does sum(A) = sum(B)
###Code
# Write your code below and press Shift+Enter to execute
A = [1, 2, 2, 1]
B = set([1, 2, 2, 1])
print("the sum of A is:", sum(A))
print("the sum of B is:", sum(B))
###Output
the sum of A is: 6
the sum of B is: 3
###Markdown
Double-click __here__ for the solution.<!-- Your answer is below:A = [1, 2, 2, 1] B = set([1, 2, 2, 1])print("the sum of A is:", sum(A))print("the sum of B is:", sum(B))--> Create a new set album_set3 that is the union of album_set1 and album_set2:
###Code
# Write your code below and press Shift+Enter to execute
album_set1 = set(["Thriller", 'AC/DC', 'Back in Black'])
album_set2 = set([ "AC/DC", "Back in Black", "The Dark Side of the Moon"])
album_set3 = album_set1.union(album_set2)
album_set3
###Output
_____no_output_____
###Markdown
Double-click __here__ for the solution.<!-- Your answer is below:album_set3 = album_set1.union(album_set2)album_set3--> Find out if album_set1 is a subset of album_set3:
###Code
# Write your code below and press Shift+Enter to execute
album_set1.issubset(album_set3)
###Output
_____no_output_____
###Markdown
Sets in PythonEstimated time needed: **20** minutes ObjectivesAfter completing this lab you will be able to:* Work with sets in Python, including operations and logic operations. Table of Contents Sets Set Content Set Operations Sets Logic Operations Quiz on Sets Sets Set Content A set is a unique collection of objects in Python. You can denote a set with a pair of curly brackets {}. Python will automatically remove duplicate items:
###Code
# Create a set
set1 = {"pop", "rock", "soul", "hard rock", "rock", "R&B", "rock", "disco"}
set1
###Output
_____no_output_____
###Markdown
The process of mapping is illustrated in the figure: You can also create a set from a list as follows:
###Code
# Convert list to set
album_list = [ "Michael Jackson", "Thriller", 1982, "00:42:19", \
"Pop, Rock, R&B", 46.0, 65, "30-Nov-82", None, 10.0]
album_set = set(album_list)
album_set
###Output
_____no_output_____
###Markdown
Now let us create a set of genres:
###Code
# Convert list to set
music_genres = set(["pop", "pop", "rock", "folk rock", "hard rock", "soul", \
"progressive rock", "soft rock", "R&B", "disco"])
music_genres
###Output
_____no_output_____
###Markdown
Set Operations Let us go over set operations, as these can be used to change the set. Consider the set A:
###Code
# Sample set
A = set(["Thriller", "Back in Black", "AC/DC"])
A
###Output
_____no_output_____
###Markdown
We can add an element to a set using the add() method:
###Code
# Add element to set
A.add("NSYNC")
A
###Output
_____no_output_____
###Markdown
If we add the same element twice, nothing will happen as there can be no duplicates in a set:
###Code
# Try to add duplicate element to the set
A.add("NSYNC")
A
###Output
_____no_output_____
###Markdown
We can remove an item from a set using the remove method:
###Code
# Remove the element from set
A.remove("NSYNC")
A
###Output
_____no_output_____
###Markdown
We can verify if an element is in the set using the in command:
###Code
# Verify if the element is in the set
"AC/DC" in A
###Output
_____no_output_____
###Markdown
Sets Logic Operations Remember that with sets you can check the difference between sets, as well as the symmetric difference, intersection, and union: Consider the following two sets:
###Code
# Sample Sets
album_set1 = set(["Thriller", 'AC/DC', 'Back in Black'])
album_set2 = set([ "AC/DC", "Back in Black", "The Dark Side of the Moon"])
###Output
_____no_output_____
###Markdown
###Code
# Print two sets
album_set1, album_set2
###Output
_____no_output_____
###Markdown
As both sets contain AC/DC and Back in Black we represent these common elements with the intersection of two circles. You can find the intersection of two sets as follows using &:
###Code
# Find the intersections
intersection = album_set1 & album_set2
intersection
###Output
_____no_output_____
###Markdown
You can find all the elements that are only contained in album_set1 using the difference method:
###Code
# Find the difference in set1 but not set2
album_set1.difference(album_set2)
###Output
_____no_output_____
###Markdown
You only need to consider elements in album_set1; all the elements in album_set2, including the intersection, are not included. The elements in album_set2 but not in album_set1 is given by:
###Code
album_set2.difference(album_set1)
###Output
_____no_output_____
###Markdown
You can also find the intersection of album_set1 and album_set2, using the intersection method:
###Code
# Use intersection method to find the intersection of album_list1 and album_list2
album_set1.intersection(album_set2)
###Output
_____no_output_____
###Markdown
This corresponds to the intersection of the two circles: The union corresponds to all the elements in both sets, which is represented by coloring both circles: The union is given by:
###Code
# Find the union of two sets
album_set1.union(album_set2)
###Output
_____no_output_____
###Markdown
And you can check if a set is a superset or subset of another set, respectively, like this:
###Code
# Check if superset
set(album_set1).issuperset(album_set2)
# Check if subset
set(album_set2).issubset(album_set1)
###Output
_____no_output_____
###Markdown
Here is an example where issubset() and issuperset() return true:
###Code
# Check if subset
set({"Back in Black", "AC/DC"}).issubset(album_set1)
# Check if superset
album_set1.issuperset({"Back in Black", "AC/DC"})
###Output
_____no_output_____
###Markdown
Quiz on Sets Convert the list \['rap','house','electronic music', 'rap'] to a set:
###Code
# Write your code below and press Shift+Enter to execute
set(['rap', 'house', 'electronic music', 'rap'])
###Output
_____no_output_____
###Markdown
Click here for the solution```pythonset(['rap','house','electronic music','rap'])``` Consider the list A = \[1, 2, 2, 1] and set B = set(\[1, 2, 2, 1]), does sum(A) == sum(B)?
###Code
# Write your code below and press Shift+Enter to execute
A = [1, 2, 2, 1]
B = set([1, 2, 2, 1])
print("the sum of A is:", sum(A))
print("the sum of B is:", sum(B))
###Output
the sum of A is: 6
the sum of B is: 3
###Markdown
Click here for the solution```pythonA = [1, 2, 2, 1] B = set([1, 2, 2, 1])print("the sum of A is:", sum(A))print("the sum of B is:", sum(B))``` Create a new set album_set3 that is the union of album_set1 and album_set2:
###Code
# Write your code below and press Shift+Enter to execute
album_set1 = set(["Thriller", 'AC/DC', 'Back in Black'])
album_set2 = set([ "AC/DC", "Back in Black", "The Dark Side of the Moon"])
album_set3 = album_set1.union(album_set2)
album_set3
###Output
_____no_output_____
###Markdown
Click here for the solution```pythonalbum_set3 = album_set1.union(album_set2)album_set3``` Find out if album_set1 is a subset of album_set3:
###Code
# Write your code below and press Shift+Enter to execute
album_set1.issubset(album_set3)
###Output
_____no_output_____
###Markdown
Sets in PythonEstimated time needed: **20** minutes ObjectivesAfter completing this lab you will be able to:* Work with sets in Python, including operations and logic operations. Table of Contents Sets Set Content Set Operations Sets Logic Operations Quiz on Sets Sets Set Content A set is a unique collection of objects in Python. You can denote a set with a pair of curly brackets {}. Python will automatically remove duplicate items:
###Code
# Create a set
set1 = {"pop", "rock", "soul", "hard rock", "rock", "R&B", "rock", "disco"}
set1
###Output
_____no_output_____
###Markdown
The process of mapping is illustrated in the figure: You can also create a set from a list as follows:
###Code
# Convert list to set
album_list = [ "Michael Jackson", "Thriller", 1982, "00:42:19", \
"Pop, Rock, R&B", 46.0, 65, "30-Nov-82", None, 10.0]
album_set = set(album_list)
album_set
###Output
_____no_output_____
###Markdown
Now let us create a set of genres:
###Code
# Convert list to set
music_genres = set(["pop", "pop", "rock", "folk rock", "hard rock", "soul", \
"progressive rock", "soft rock", "R&B", "disco"])
music_genres
###Output
_____no_output_____
###Markdown
Set Operations Let us go over set operations, as these can be used to change the set. Consider the set A:
###Code
# Sample set
A = set(["Thriller", "Back in Black", "AC/DC"])
A
###Output
_____no_output_____
###Markdown
We can add an element to a set using the add() method:
###Code
# Add element to set
A.add("NSYNC")
A
###Output
_____no_output_____
###Markdown
If we add the same element twice, nothing will happen as there can be no duplicates in a set:
###Code
# Try to add duplicate element to the set
A.add("NSYNC")
A
###Output
_____no_output_____
###Markdown
We can remove an item from a set using the remove method:
###Code
# Remove the element from set
A.remove("NSYNC")
A
###Output
_____no_output_____
###Markdown
We can verify if an element is in the set using the in command:
###Code
# Verify if the element is in the set
"AC/DC" in A
###Output
_____no_output_____
###Markdown
Sets Logic Operations Remember that with sets you can check the difference between sets, as well as the symmetric difference, intersection, and union: Consider the following two sets:
###Code
# Sample Sets
album_set1 = set(["Thriller", 'AC/DC', 'Back in Black'])
album_set2 = set([ "AC/DC", "Back in Black", "The Dark Side of the Moon"])
###Output
_____no_output_____
###Markdown
###Code
# Print two sets
album_set1, album_set2
###Output
_____no_output_____
###Markdown
As both sets contain AC/DC and Back in Black we represent these common elements with the intersection of two circles. You can find the intersection of two sets as follows using &:
###Code
# Find the intersections
intersection = album_set1 & album_set2
intersection
###Output
_____no_output_____
###Markdown
You can find all the elements that are only contained in album_set1 using the difference method:
###Code
# Find the difference in set1 but not set2
album_set1.difference(album_set2)
###Output
_____no_output_____
###Markdown
You only need to consider elements in album_set1; all the elements in album_set2, including the intersection, are not included. The elements in album_set2 but not in album_set1 is given by:
###Code
album_set2.difference(album_set1)
###Output
_____no_output_____
###Markdown
You can also find the intersection of album_set1 and album_set2, using the intersection method:
###Code
# Use intersection method to find the intersection of album_list1 and album_list2
album_set1.intersection(album_set2)
###Output
_____no_output_____
###Markdown
This corresponds to the intersection of the two circles: The union corresponds to all the elements in both sets, which is represented by coloring both circles: The union is given by:
###Code
# Find the union of two sets
album_set1.union(album_set2)
###Output
_____no_output_____
###Markdown
And you can check if a set is a superset or subset of another set, respectively, like this:
###Code
# Check if superset
set(album_set1).issuperset(album_set2)
# Check if subset
set(album_set2).issubset(album_set1)
###Output
_____no_output_____
###Markdown
Here is an example where issubset() and issuperset() return true:
###Code
# Check if subset
set({"Back in Black", "AC/DC"}).issubset(album_set1)
# Check if superset
album_set1.issuperset({"Back in Black", "AC/DC"})
###Output
_____no_output_____
###Markdown
Quiz on Sets Convert the list \['rap','house','electronic music', 'rap'] to a set:
###Code
# Write your code below and press Shift+Enter to execute
set(['rap','house','electronic music','rap'])
###Output
_____no_output_____
###Markdown
Click here for the solution```pythonset(['rap','house','electronic music','rap'])``` Consider the list A = \[1, 2, 2, 1] and set B = set(\[1, 2, 2, 1]), does sum(A) == sum(B)?
###Code
# Write your code below and press Shift+Enter to execute
A = [1, 2, 2, 1]
B = set([1, 2, 2, 1])
print("the sum of A is:", sum(A))
print("the sum of B is:", sum(B))
###Output
the sum of A is: 6
the sum of B is: 3
###Markdown
Click here for the solution```pythonA = [1, 2, 2, 1] B = set([1, 2, 2, 1])print("the sum of A is:", sum(A))print("the sum of B is:", sum(B))``` Create a new set album_set3 that is the union of album_set1 and album_set2:
###Code
# Write your code below and press Shift+Enter to execute
album_set1 = set(["Thriller", 'AC/DC', 'Back in Black'])
album_set2 = set([ "AC/DC", "Back in Black", "The Dark Side of the Moon"])
album_set3 = album_set1.union(album_set2)
album_set3
###Output
_____no_output_____
###Markdown
Click here for the solution```pythonalbum_set3 = album_set1.union(album_set2)album_set3``` Find out if album_set1 is a subset of album_set3:
###Code
# Write your code below and press Shift+Enter to execute
album_set1.issubset(album_set3)
###Output
_____no_output_____ |
PhD.ipynb | ###Markdown
###Code
from google.colab import drive
drive.mount('/content/drive/')
%%time
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import seaborn as sns
from sklearn.preprocessing import StandardScaler
from sklearn.compose import make_column_transformer
from sklearn.preprocessing import OneHotEncoder
%matplotlib inline
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor
np.random.seed(42)
import tensorflow as tf
from tensorflow import keras
import keras
from keras import Sequential, Model
from keras.layers import Dense,Input,Activation
CDT=pd.read_excel('/content/drive/MyDrive/PhD Project/Data/CDT8.xlsx')
CDT.isnull().sum()
CDT.shape
CDT.info()
CDT.head()
target = pd.DataFrame(CDT[["DWELL_DAYS"]])
target
x_categorical=CDT.drop(columns=["DWELL_DAYS","Transaction_ID","MANIFESTED_WEIGHT","CNEE_CODE","IMP_CODE","AGENT_OPERATIVE_ID_NO","IS_REEFER","BL_VERSION","BOE_VERSION","HZ_STATUS","ADDITIONAL"])
x_categorical.info()
dummied_cat=pd.get_dummies(x_categorical)
dummied_cat
#x_categorical.CNEE_CODE.value_counts().to_dict()
x_numeric=CDT[["MANIFESTED_WEIGHT","IS_REEFER","BL_VERSION","BOE_VERSION","HZ_STATUS","ADDITIONAL"]]
x_numeric
scaling=StandardScaler()
scaled_x_numeric_Manifested=scaling.fit_transform(x_numeric[["MANIFESTED_WEIGHT"]])  # standardize the manifested weight column only
scaled_x_numeric_Manifested=pd.DataFrame(data=scaled_x_numeric_Manifested,columns=["MANIFESTED_WEIGHT"],index=x_numeric.index)
scaled_x_numeric_Manifested
x_non_scaled= x_numeric.drop(columns=["MANIFESTED_WEIGHT"])
x_non_scaled
frames = [scaled_x_numeric_Manifested,dummied_cat,x_non_scaled,target]
combined=pd.concat(frames,axis=1)
combined
X=combined.drop(columns=["DWELL_DAYS"])
y=combined["DWELL_DAYS"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20, random_state=42)
##model=RandomForestRegressor()
#model.fit(X_train,y_train)
model=Sequential()
model.add(Dense(10, activation='relu', input_dim = X.shape[1]))
model.add(Dense(10, activation='relu'))
model.add(Dense(10, activation="relu"))
model.add(Dense(10, activation="relu"))
model.add(Dense(10, activation="relu"))
model.add(Dense(1))  # linear output for the regression target (DWELL_DAYS)
model.compile(optimizer='RMSprop', loss = 'mse',metrics=['accuracy'])
%%time
model.fit(X_train, y_train, batch_size = 20, epochs = 10, verbose = 1,validation_data=(X_test,y_test))
# evaluate the model
scores = model.evaluate(X, y)
print("\n%s: %.2f%%" % (model.metrics_names[1], scores[1]*100))
!pip3 install graphviz
import graphviz
!pip3 install ann_visualizer
from ann_visualizer.visualize import ann_viz
ann_viz(model, view=True, filename = "network.gv")
###Output
_____no_output_____ |
deep-learning/NLP/Exploring Recipes - Part 1.ipynb | ###Markdown
Step 1: Get and load the dataGo to Project Gutenberg at https://www.gutenberg.org/ebooks/24407, get all the data and put it into your /data/recipes folder.
###Code
import os
data_folder = os.path.join('data/recipes')
all_recipe_files = [os.path.join(data_folder, fname)
for fname in os.listdir(data_folder)]
documents = {}
for recipe_fname in all_recipe_files:
bname = os.path.basename(recipe_fname)
recipe_number = os.path.splitext(bname)[0]
with open(recipe_fname, 'r') as f:
documents[recipe_number] = f.read()
corpus_all_in_one = ' '.join([doc for doc in documents.values()])
print("Number of docs: {}".format(len(documents)))
print("Corpus size (char): {}".format(len(corpus_all_in_one)))
###Output
Number of docs: 220
Corpus size (char): 161146
###Markdown
Step 2: Let's tokenizeWhat this actually means is that we will be splitting the raw string into a list of tokens, where a "token" is essentially a meaningful unit of text such as **words, phrases, punctuation, numbers, dates,...**
###Code
from nltk.tokenize import word_tokenize
all_tokens = [token for token in word_tokenize(corpus_all_in_one)]
print("Total number of tokens: {}".format(len(all_tokens)))
###Output
Total number of tokens: 33719
###Markdown
Step 3: Let's do a word countWe start with a simple word count using the `collections.Counter` function. Why are we doing this?We want to know the number of times a word occurs in the whole corpus and in how many docs it occurs.
###Code
from collections import Counter
total_word_freq = Counter(all_tokens)
for word, freq in total_word_freq.most_common(20):
    # Let's look at the top 20 words in descending order
print("{}\t{}".format(word, freq))
###Output
the 1933
, 1726
. 1568
and 1435
a 1076
of 988
in 811
with 726
it 537
to 452
or 389
is 337
( 295
) 295
be 266
them 248
butter 231
on 220
water 205
little 198
###Markdown
Step 4: Stop wordsObviously you can see that a lot of the words above were expected. They are also quite boring: a comma, parentheses or a full stop is exactly what one would expect (if it were a scary novel, a lot of ! would appear).We call these types of words **stop words**, and they are pretty meaningless in themselves, right?Also, you will see that there is no universal list of stop words, *and* removing them can have a desirable or undesirable effect, right?So let's import stop words from the big and mighty nltk library
###Code
from nltk.corpus import stopwords
import string
print(stopwords.words('english'))
print(len(stopwords.words('english')))
print(string.punctuation)
###Output
['i', 'me', 'my', 'myself', 'we', 'our', 'ours', 'ourselves', 'you', 'your', 'yours', 'yourself', 'yourselves', 'he', 'him', 'his', 'himself', 'she', 'her', 'hers', 'herself', 'it', 'its', 'itself', 'they', 'them', 'their', 'theirs', 'themselves', 'what', 'which', 'who', 'whom', 'this', 'that', 'these', 'those', 'am', 'is', 'are', 'was', 'were', 'be', 'been', 'being', 'have', 'has', 'had', 'having', 'do', 'does', 'did', 'doing', 'a', 'an', 'the', 'and', 'but', 'if', 'or', 'because', 'as', 'until', 'while', 'of', 'at', 'by', 'for', 'with', 'about', 'against', 'between', 'into', 'through', 'during', 'before', 'after', 'above', 'below', 'to', 'from', 'up', 'down', 'in', 'out', 'on', 'off', 'over', 'under', 'again', 'further', 'then', 'once', 'here', 'there', 'when', 'where', 'why', 'how', 'all', 'any', 'both', 'each', 'few', 'more', 'most', 'other', 'some', 'such', 'no', 'nor', 'not', 'only', 'own', 'same', 'so', 'than', 'too', 'very', 's', 't', 'can', 'will', 'just', 'don', 'should', 'now', 'd', 'll', 'm', 'o', 're', 've', 'y', 'ain', 'aren', 'couldn', 'didn', 'doesn', 'hadn', 'hasn', 'haven', 'isn', 'ma', 'mightn', 'mustn', 'needn', 'shan', 'shouldn', 'wasn', 'weren', 'won', 'wouldn']
153
!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~
###Markdown
Tip: A little bit about strings and digits btwThere is a more pythonic way to do this as well, but that's for another time. You can play a little game by creating a password generator and checking out all kinds of modules in string as well as crypt (there is cryptography as well)
###Code
string.ascii_letters
string.ascii_lowercase
string.ascii_uppercase
# How to get them all including symbols and make a cool password
import random
char_set = string.ascii_letters + string.digits + string.punctuation
print("".join(random.sample(char_set*9, 9)))
import crypt
passwd = input("Enter your email: ")
value = '$1$' + ''.join([random.choice(string.ascii_letters + string.digits) for _ in range(16)])
# print("%s" % value)
print(crypt.crypt(passwd, value))
###Output
Enter your email: [email protected]
$1ocjj.wZDJpw
###Markdown
OK, we got distracted a bit, so we're back 😅 **So, back to where we were...**
###Code
stop_list = stopwords.words('english') + list(string.punctuation)
tokens_no_stop = [token for token in all_tokens if token not in stop_list]
total_term_freq_no_stop = Counter(tokens_no_stop)
for word, freq in total_term_freq_no_stop.most_common(25):
print("{}\t{}".format(word, freq))
###Output
butter 231
water 205
little 198
put 197
one 186
salt 185
fire 169
half 169
two 157
When 132
sauce 128
pepper 128
add 125
cut 125
flour 116
piece 116
The 111
sugar 100
saucepan 100
oil 99
pieces 95
well 94
meat 90
brown 88
small 87
###Markdown
Do you see capitalized When and The?
###Code
print(total_term_freq_no_stop['olive'])
print(total_term_freq_no_stop['olives'])
print(total_term_freq_no_stop['Olive'])
print(total_term_freq_no_stop['Olives'])
print(total_term_freq_no_stop['OLIVE'])
print(total_term_freq_no_stop['OLIVES'])
###Output
27
3
1
0
0
1
###Markdown
Step 5: Text NormalizationReplacing tokens with a canonical form lets us group together different spellings / variations of the same word:- lowercasing- stemming- US-to-GB mapping- synonym mappingStemming, btw, is a process of reducing words -- generally modified or derived forms -- to their word stem or root form. The main goal of stemming is to reduce related words to the same stem even when the stem isn't a dictionary word.As a simple example:1. handsome and handsomely would be stemmed as "handsom" - so it does not end up being a word you know!2. Nice, cool, awesome would be stemmed as nice, cool and awesome- You must also be careful with one-way transformations such as lowercasing (these you should be able to improve after your training/epochs and loading the computation graph when done)Let's take a deeper look at this; the next cell starts with a quick sanity check of the stemmer on the toy words above...
###Code
from nltk.stem import PorterStemmer
stemmer = PorterStemmer()
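# Quick sanity check of the stemmer on the toy words from the markdown above
# (illustrative only -- the exact stems depend on the Porter algorithm's rules)
for w in ['handsome', 'handsomely', 'nice', 'cool', 'awesome']:
    print(w, '->', stemmer.stem(w))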
all_tokens_lowercase = [token.lower() for token in all_tokens]
tokens_normalized = [stemmer.stem(token) for token in all_tokens_lowercase if token not in stop_list]
total_term_freq_normalized = Counter(tokens_normalized)
for word, freq in total_term_freq_normalized.most_common(25):
print("{}\t{}".format(word, freq))
###Output
put 286
butter 245
salt 215
piec 211
one 210
water 209
cook 208
littl 198
cut 175
half 170
brown 169
fire 169
egg 163
two 162
add 160
boil 154
sauc 152
pepper 130
serv 128
remov 127
flour 123
season 123
sugar 116
slice 102
saucepan 101
###Markdown
Clearly you see the effect we just discussed above, such as **"littl"** and so on... n-grams -- What are they?An n-gram is a sequence of n items from a given sequence of text or speech. The items can be phonemes, syllables, letters, words or base pairs.n-grams of text are used quite heavily in text mining and NLP tasks. They basically are sets of words that co-occur within a given sentence, typically moving one word forward. For instance, take `the dog jumps over the car`: if `N=2` (a bi-gram), then the n-grams would be:- the dog- dog jumps- jumps over- over the- the carSo we have 5 n-grams in this case.And if `N = 3` (tri-gram), then you have four n-grams and so on...- the dog jumps- dog jumps over- jumps over the- over the carSo, how many n-grams can be in a sentence?If `X = number of words in a sentence K`, then the number of n-grams for sentence K is:$$N_{gramsK} = X - (N - 1)$$(the next code cell includes a quick check of this counting rule on the toy sentence)Two popular uses of n-grams:- For building language models (unigram, bigram, trigram). Google, Yahoo, Microsoft, Amazon, Netflix, etc. use web-scale n-gram models for things like spelling correction, word breaking and text summarization- For developing features for supervised machine learning models such as SVM, MaxEnt and Naive Bayes**OK, enough lecture**, we move on to the next...
###Code
from nltk import ngrams
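# Quick check of the counting rule from the markdown above (illustrative only):
# a sentence of X words yields X - (N - 1) n-grams.
toy_sentence = "the dog jumps over the car".split()
for n in (2, 3):
    grams = list(ngrams(toy_sentence, n))
    print(n, len(grams), grams)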
phrases = Counter(ngrams(all_tokens_lowercase, 2)) # N = 2
for phrase, freq in phrases.most_common(25):
    print(phrase, freq)
    # Sorry, I know it's more elegant to write this as print("{}\t{}".format(phrase, freq)), but that's less intuitive!
phrases = Counter(ngrams(tokens_no_stop, 3)) # N = 3
for phrase, freq in phrases.most_common(25):
print(phrase, freq)
###Output
('season', 'salt', 'pepper') 28
('Season', 'salt', 'pepper') 16
('pinch', 'grated', 'cheese') 11
('bread', 'crumbs', 'ground') 11
('cut', 'thin', 'slices') 11
('good', 'olive', 'oil') 10
('saucepan', 'piece', 'butter') 9
('another', 'piece', 'butter') 9
('cut', 'small', 'pieces') 9
('salt', 'pepper', 'When') 9
('half', 'inch', 'thick') 9
('greased', 'butter', 'sprinkled') 9
('small', 'piece', 'butter') 9
('tomato', 'sauce', 'No') 8
('sauce', 'No', '12') 8
('medium', 'sized', 'onion') 8
('ounces', 'Sweet', 'almonds') 8
('three', 'half', 'ounces') 8
('piece', 'butter', 'When') 7
('seasoning', 'salt', 'pepper') 7
('put', 'back', 'fire') 7
('oil', 'salt', 'pepper') 7
('butter', 'salt', 'pepper') 7
('tomato', 'paste', 'diluted') 7
('crumbs', 'ground', 'fine') 7
|
notebooks/nm_05_LinEqns_PolynomialFitting.ipynb | ###Markdown
Content:1. [Data modeling](1.-Data-modeling)2. [Polynomial fitting explained using a quick implementation](2.-Polynomial-fitting-explained-using-a-quick-implementation)3. [Polyfit and Polyval](3.-Polyfit-and-Polyval)4. [Numpy's polyfit, polyval, and poly1d](4.-Numpy's-polyfit,-polyval,-and-poly1d) 1. Data modeling  _Note:_ Instead of matrix inversion, we will directly solve the least-squares system (written out below) using a linear solver. Since $\left[ {\bf X}^{\rm T}{\bf X}\right]$ is a symmetric matrix, we can use Cholesky decomposition. 2. Polynomial fitting explained using a quick implementation Let's write a general program to fit a set of $N$ points to a $D$ degree polynomial. Vandermonde matrix The first step is to calculate the Vandermonde matrix. Let's calculate it for a set of x-values. Suppose we want to fit 4 points to a straight line, then $N=4$ and $D=1$. However, remember, in python the index starts with 0. So, we have to assign the variables accordingly.
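For reference, the system the code below sets up and solves is the standard least-squares normal equations $$\left[ {\bf X}^{\rm T}{\bf X}\right]{\bf c} = {\bf X}^{\rm T}{\bf y},$$ where ${\bf X}$ is the Vandermonde matrix built from the sampled $x$ values, ${\bf c}$ is the vector of polynomial coefficients, and ${\bf y}$ holds the sampled function values. For distinct $x$ values, ${\bf X}^{\rm T}{\bf X}$ is symmetric positive definite, which is why Cholesky factorization applies.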
###Code
import numpy as np
x = np.array([1, 2, 3, 5],float)
N = x.shape[0]
D=1
# Initialize the X-matrix
X = np.ones([N,D+1]) # Note that we are using D+1 here
# Add columns of x, x^2, ..., x^N-1 to build the Vandermonde matrix
#X[:,1]=x[:]
#X[:,2]=x[:]**2
#X[:,3]=x[:]**3
for i in range(1,D+1): # Note that we are using D+1 here
X[:,i]=x[:]**i
print(X)
###Output
[[1. 1.]
[1. 2.]
[1. 3.]
[1. 5.]]
###Markdown
Even though it is easy to calculate the Vandermonde matrix ourselves, note that numpy already has a function for it. We can check that our results agree with numpy.
###Code
np.vander(x, D+1,increasing=True) #If the last argument is not given, you get the orders of columns reversed!
###Output
_____no_output_____
###Markdown
Now let's solve a problem Let's use the known form of a parabola, say, $y=-0.4x^2$. We can sample some points of $x$ and fit to the known values of $y$. After fitting the data we can check whether the polynomial coefficients come out as expected.
###Code
x=np.arange(-5, 6, 1, float) # start, stop, step, dtype
print("x-vector is:\n", x)
y=-0.4*x**2
print("y-vector is:\n", y)
D=2 # for a parabola
X=np.vander(x, D+1, increasing=True) # V is the Vandermonde matrix.
print(X)
XT=np.transpose(X)
A=np.matmul(XT,X)
print(A)
###Output
[[ 11. 0. 110.]
[ 0. 110. 0.]
[ 110. 0. 1958.]]
###Markdown
Now, all we have to do is solve ${\bf A}{\bf c}={\bf b}$, where ${\bf b}={\bf X}^{\rm T}{\bf y}$.
###Code
import numpy as np
from scipy.linalg import cho_factor, cho_solve
b=np.matmul(XT,y)
c=np.zeros(D+1,float)
L, low = cho_factor(A)
c = cho_solve((L, low), b)
print('\nThe solution is\n')
print(c)
###Output
The solution is
[ 0.00000000e+00 6.45947942e-17 -4.00000000e-01]
###Markdown
We see that the coefficients for the $x^0$ and $x^1$ terms are 0.0. For the quadratic term ($x^2$), the coefficient is $-0.4$, matching the parabola we started with. Now, suppose you want to find the value of the function at a new value of $x$; all you have to do is evaluate the polynomial.
###Code
xnew=0.5
ynew=c[0]*xnew**0 + c[1]*xnew**1 + c[2]*xnew**2
print("Value of y at x=", xnew, " is ", ynew)
###Output
Value of y at x= 0.5 is -0.09999999999999996
###Markdown
The result is what is expected $y(0.5)=-0.4 \times 0.5^2=-0.1$. 3. Polyfit and Polyval What we have done so far is to fit a set of points to a polynomial (polyfit) and evaluate the polynomial at new points (polyval). We can write general functions for these two steps.
###Code
def chol(A,b):
from scipy.linalg import cho_factor, cho_solve
D=b.shape[0]
c=np.zeros(D,float)
L, low = cho_factor(A)
c = cho_solve((L, low), b)
return c
def polyfit(x,y,D):
'''
Fits a given set of data x,y to a polynomial of degree D
'''
import numpy as np
X=np.vander(x, D+1, increasing=True)
XT=np.transpose(X)
A=np.matmul(XT,X)
b=np.matmul(XT,y)
c=chol(A,b)
return(c)
#=== Let's fit to a parabola
x=np.arange(-5, 6, 1, float)
y=-0.4*x**2
D=2 # for a parabola
c=polyfit(x,y,D)
for i in range(D+1):
print("coefficient of x^",i," is ",c[i])
###Output
coefficient of x^ 0 is 0.0
coefficient of x^ 1 is 6.459479416000912e-17
coefficient of x^ 2 is -0.39999999999999997
###Markdown
Now, let's see what happens if we fit the same data to a higher-degree polynomial.
###Code
D=5
c=polyfit(x,y,D)
for i in range(D+1):
print("coefficient of x^",i," is ",c[i])
###Output
coefficient of x^ 0 is 8.033876086462141e-15
coefficient of x^ 1 is -9.08310463879971e-15
coefficient of x^ 2 is -0.4000000000000023
coefficient of x^ 3 is 1.3349935850729272e-15
coefficient of x^ 4 is 8.83347612444574e-17
coefficient of x^ 5 is -3.970233448744883e-17
###Markdown
Only the quadratic term survives, all other coefficients are zero! How nice! To evaluate the polynomial, i.e., the estimated values of y, one can write another function, called polyval.
###Code
def polyval(a,x):
'''
Determines the value of the polynomial using x and the coefficient vector a
'''
import numpy as np
D=a.shape[0]
N=x.shape
y=np.zeros(N)
for i in range(D):
y=y+a[i]*x**i
return(y)
xnew=np.array([-0.5,0.5]) # we will make the new x-values as an array
ynew=polyval(c,xnew)
print(ynew)
###Output
[-0.1 -0.1]
###Markdown
4. Numpy's polyfit, polyval, and poly1d Again, since we have learned the basics of polynomial fitting _from scratch_, we can use numpy's in-built routines for production runs. But, before that we need to test if numpy's results agree with our own values!
###Code
x=np.arange(-5, 6, 1, float)
y=-0.4*x**2
D=4 # some polynomial degree
c=np.polyfit(x, y, D)
xnew=np.array([-0.5,0.5])
ynew=np.polyval(c,xnew)
print("Estimated value of y at new points of x is: \n",ynew)
###Output
Estimated value of y at new points of x is:
[-0.1 -0.1]
###Markdown
There's also a cool function in numpy to print the polynomial as an expression.
###Code
p = np.poly1d(c)
print(p)
###Output
4 3 2
-4.122e-18 x + 1.754e-17 x - 0.4 x + 6.324e-17 x + 1.071e-15
|
ICA/Group2_ICA2_DataMining.ipynb | ###Markdown
**GROUP 2** - Name 1: Taylor Bonar- Name 2: Robert Burigo- Name 3: Rashmi Patel- Name 4: Scott Englerth ________ Live Session Assignment TwoIn the following assignment you will be asked to fill in python code and derivations for a number of different problems. Please read all instructions carefully and turn in the rendered notebook (.ipynb file, remember to save it!!) or HTML of the rendered notebook before the end of class. Contents* Loading the Classification Data* Using Decision Trees - Gini* Using Decision Trees - Entropy* Multi-way Splits* Decision Trees in Scikit-Learn________________________________________________________________________________________________________Back to Top Loading the Classification DataPlease run the following code to read in the "digits" dataset from sklearn's data loading module. This is identical to the first in class assignment for loading the data into matrices. `ds.data` is a matrix of feature values and `ds.target` is a column vector of the class output (in our case, the hand written digit we want to classify). Each class is a number (0 through 9) that we want to classify as one of ten hand written digits.
###Code
from __future__ import print_function
import numpy as np
from sklearn.datasets import load_digits
ds = load_digits()
# this holds the continuous feature data
print('features shape:', ds.data.shape) # there are 1797 instances and 64 features per instance
print('target shape:', ds.target.shape )
print('range of target:', np.min(ds.target),np.max(ds.target))
###Output
features shape: (1797, 64)
target shape: (1797,)
range of target: 0 9
###Markdown
________________________________________________________________________________________________________Back to Top Using Decision TreesIn the videos, we talked about the splitting conditions for different attributes. Specifically, we discussed the number of ways in which it is possible to split a node, depending on the attribute types. To understand the possible splits, we need to understand the attributes. For the question below, you might find the description in the `ds['DESCR']` field to be useful. You can see the field using `print(ds['DESCR'])`**Question 1:** For the digits dataset, what are the type(s) of the attributes? How many attributes are there? What do they represent?
###Code
## Enter your comments here
print(ds['DESCR'])
## Enter comments here
###Output
.. _digits_dataset:
Optical recognition of handwritten digits dataset
--------------------------------------------------
**Data Set Characteristics:**
:Number of Instances: 1797
:Number of Attributes: 64
:Attribute Information: 8x8 image of integer pixels in the range 0..16.
:Missing Attribute Values: None
:Creator: E. Alpaydin (alpaydin '@' boun.edu.tr)
:Date: July; 1998
This is a copy of the test set of the UCI ML hand-written digits datasets
https://archive.ics.uci.edu/ml/datasets/Optical+Recognition+of+Handwritten+Digits
The data set contains images of hand-written digits: 10 classes where
each class refers to a digit.
Preprocessing programs made available by NIST were used to extract
normalized bitmaps of handwritten digits from a preprinted form. From a
total of 43 people, 30 contributed to the training set and different 13
to the test set. 32x32 bitmaps are divided into nonoverlapping blocks of
4x4 and the number of on pixels are counted in each block. This generates
an input matrix of 8x8 where each element is an integer in the range
0..16. This reduces dimensionality and gives invariance to small
distortions.
For info on NIST preprocessing routines, see M. D. Garris, J. L. Blue, G.
T. Candela, D. L. Dimmick, J. Geist, P. J. Grother, S. A. Janet, and C.
L. Wilson, NIST Form-Based Handprint Recognition System, NISTIR 5469,
1994.
.. topic:: References
- C. Kaynak (1995) Methods of Combining Multiple Classifiers and Their
Applications to Handwritten Digit Recognition, MSc Thesis, Institute of
Graduate Studies in Science and Engineering, Bogazici University.
- E. Alpaydin, C. Kaynak (1998) Cascading Classifiers, Kybernetika.
- Ken Tang and Ponnuthurai N. Suganthan and Xi Yao and A. Kai Qin.
Linear dimensionalityreduction using relevance weighted LDA. School of
Electrical and Electronic Engineering Nanyang Technological University.
2005.
- Claudio Gentile. A New Approximate Maximal Margin Classification
Algorithm. NIPS. 2000.
###Markdown
**Number of Attributes**: 64 **What are the Attributes' Type?** An integer between 0-16. **What is it representing?** Each attribute is representing a cell/square (8x8 square = 64 cells) of a handwritten number containing a value between 0-16 to create a bitmap. ___ Using the gini coefficientWe talked about the gini index in the videos. The gini coefficient for a **given split** is given by:$$Gini=\sum_{t=1}^T \frac{n_t}{N}gini(t)$$where $T$ is the total number of splits (2 for binary attributes), $n_t$ is the number of instances in node $t$ after splitting, and $N$ is the total number of instances in the parent node. $gini(t)$ is the **gini index for each individual node that is created by the split** and is given by:$$gini(t)=1-\sum_{j=0}^{C-1} p(j|t)^2$$where $C$ is the total number of possible classes and $p(j|t)$ is the probability of class $j$ in node $t$ (i.e., $n_j==$ the count of instances belonging to class $j$ in node $t$, normalized by the total number of instances in node $t$).$$ p(j|t) = \frac{n_j}{n_t}$$ For the given dataset, $gini(t)$ has been programmed for you in the function `gini_index`. * `def gini_index(classes_in_split):` * To use the function, pass in a `numpy` array of the class labels for a node as (i.e., pass in the rows from `ds.target` that make up a node in the tree) and the gini will be returned for that node.
###Code
# compute the gini of several examples for the starting dataset
# This function "gini_index" is written for you. Once you run this block, you
# will have access to the function for the notebook. You do not need to know
# how this function works--only what it returns
# This function returns the gini index for an array of classes in a node.
def gini_index(classes_in_split):
# pay no attention to this code in the function-- it just computes the gini for a given split
classes_in_split = np.reshape(classes_in_split,(len(classes_in_split),-1))
unique_classes = np.unique(classes_in_split)
gini = 1
for c in unique_classes:
gini -= (np.sum(classes_in_split==c) / float(len(classes_in_split)))**2
return gini
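# Quick sanity check (illustrative): a perfectly mixed two-class node has gini 1 - 2*(0.5**2) = 0.5
print('gini of a 50/50 node:', gini_index(np.array([0, 0, 1, 1])))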
###Output
_____no_output_____
###Markdown
In the example below, the function is used calculate the gini for splitting the dataset on feature 28, with value 2.5. In this example, we need to create two separate tree nodes: the first node has all the `ds.target` labels when feature 28 is greater than 2.5, the second node has all the rows from `ds.target` where feature 28 is less than 2.5. The steps are outlined below. **Read this carefully to understand what the code does below in the block following this.**- Feature 28 is saved into a separate variable `feature28 = ds.data[:,28]`- First all the target classes for the first node are calculated using `numpy` indexing `ds.target[feature28>2.5]` - Note: this grabs all the rows in `ds.target` (the classes) which have feature 28 greater than 2.5 (similar to indexing in pandas)- Second, those classes are passed into the function to get the gini for the right node in this split (i.e., feature 28 being greater than the threshold 2.5). - `gini_r = gini_index(ds.target[feature28>2.5])`- Third, the gini is calculated for the left node in the tree. This grabs only the rows in `ds.target` where feature 28 is less than 2.5. - `gini_l = gini_index(ds.target[feature28<=2.5])`- Combining the gini indices is left as an exercise in the next section
###Code
#==========================Use the gini_index Example===============
# get the value for this feature as a column vector
# (this is like grabbing one column of the record table)
feature28 = ds.data[:,28]
# if we split on the value of 2.5, then this is the gini for each resulting node:
gini_r = gini_index(ds.target[feature28>2.5]) # just like in pandas, we are sending in the rows where feature28>2.5
gini_l = gini_index(ds.target[feature28<=2.5]) # and sending the rows where feature28<=2.5
# compute gini example. This splits on attribute '28' with a value of 2.5
print('gini for right node of split:', gini_r)
print('gini for left node of split:', gini_l)
###Output
gini for right node of split: 0.8845857867667073
gini for left node of split: 0.7115407566535388
###Markdown
**Question 2:** Now, using the above values `gini_r` and `gini_l`. Calculate the combined Gini for the entire split. You will need to write the weighted summation (based upon the number of instances inside each node). To count the number of instances greater than a value using numpy, you can use broadcasting, which is a special way of indexing into a numpy array. For example, the code `some_array>5` will return a new numpy array of true/false elements. It is the same size as `some_array` and is marked true where the array is greater than `5`, and false otherwise. By taking the `sum` of this array, we can count how many times `some_array` is greater than `5`. `counts = sum(some_array>5)` You will need to use this syntax to count the values in each node as a result of splitting.
###Code
## Enter your code here
count_r = sum(feature28>2.5)
print(f'{count_r}')
count_l = sum(feature28<=2.5)
print(f'{count_l}')
total_weights = count_r+count_l
right_weight = count_r/total_weights
print(f'{right_weight}')
left_weight = count_l/total_weights
print(left_weight)
gini_total = left_weight*gini_l + right_weight * gini_r
## Enter your code here
print('The total gini of the split for a threshold of 2.5 is:',f'{gini_total}')
###Output
1398
399
0.7779632721202003
0.22203672787979967
The total gini of the split for a threshold of 2.5 is: 0.8461634345045179
###Markdown
___ Start of Live Session Coding**Question 3:** Now we want to know which is a better split:- `feature28` split on a value of `2.5` - `feature28` split on a value of `10`. Enter your code to find the total gini of splitting on the threshold of 10 and compare it to the total gini of splitting on threshold of 2.5 (for feature 28 only). According to gini, which threshold is better for spliting on feature 28, `threshold=2.5` or `threshold=10.0`?
###Code
# Enter your code here
feature28 = ds.data[:,28]
# if we split on the value of 10, then this is the gini for each resulting node:
gini_r = gini_index(ds.target[feature28>10]) # just like in pandas, we are sending in the rows where feature28>10
gini_l = gini_index(ds.target[feature28<=10]) # and sending the rows where feature28<=10
# compute gini example. This splits on attribute '28' with a value of 10
print('gini for right node of split:', gini_r)
print('gini for left node of split:', gini_l)
count_r = sum(feature28>10)
print(f'{count_r}')
count_l = sum(feature28<=10)
print(f'{count_l}')
total_weights = count_r+count_l
right_weight = count_r/total_weights
print(f'{right_weight}')
left_weight = count_l/total_weights
print(left_weight)
gini_10_total = left_weight*gini_l + right_weight * gini_r
if(gini_10_total > gini_total):
print("2.5 is better")
else:
print("10 is better")
# Enter your code here
print('The total gini of the split for a threshold of 10 is:',f'{gini_10_total}')
print('This is not better than the split on 2.5')
###Output
gini for right node of split: 0.8737186870604284
gini for left node of split: 0.8496295618768864
1043
754
0.7779632721202003
0.22203672787979967
2.5 is better
The total gini of the split for a threshold of 10 is: 0.8636111743234276
This is not better than the split on 2.5
###Markdown
___Back to Top Entropy based splittingWe discussed entropy as well in the video as another means of splitting. We calculated entropy for a node $t$ by:$$ Entropy(t) = -\sum p(j|t) \log p(j|t) $$where $p(j|t)$ is the same as above. To combine Entropy measures from a set of nodes, t = {1,...,T} we use: $$Entropy_{split}=\sum_{t=1}^T \frac{n_t}{N}Entropy(t)$$ where $n_t$ and $N$ are the same as defined above for the $Gini$. Information gain is calculated by subtracting the Entropy of the split from the Entropy of the parent node before splitting:$$InfoGain = Entropy(p)-Entropy_{split}$$where $p$ is the parent node before splitting. You are given an equation for calculating the $Entropy(t)$ of node $t$. It works exactly like the `gini_index` function above, but is named `entropy_value` and returns the entropy for a node. You simply send in an array of the feature values for the node you want to calculate the entropy value for.
###Code
def entropy_value(classes_in_split):
    # pay no attention to this code -- it just computes the entropy for a given split
classes_in_split = np.reshape(classes_in_split,(len(classes_in_split),-1))
unique_classes = np.unique(classes_in_split)
ent = 0
for c in unique_classes:
p = (np.sum(classes_in_split==c) / float(len(classes_in_split)))
ent += p * np.log(p)
return -ent
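# Quick sanity check (illustrative): a perfectly mixed two-class node has entropy ln(2), about 0.693
print('entropy of a 50/50 node:', entropy_value(np.array([0, 0, 1, 1])))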
ent_r = entropy_value(ds.target[feature28>2.5])
ent_l = entropy_value(ds.target[feature28<=2.5])
# compute entropy example. This splits on attribute '28' with a value of 2.5
print('entropy for right node of split:', ent_r)
print('entropy for left node of split:', ent_l)
###Output
entropy for right node of split: 2.1836975378213057
entropy for left node of split: 1.4898881412786364
###Markdown
___**Question 4:** Calculate the **information gain** of the split when the threshold is 2.5 on `feature28`. What is the value of the information gain?
###Code
# Enter your code here
ent_p = entropy_value(ds.target)
# recompute the node weights for the 2.5 threshold (left_weight/right_weight were last set for threshold 10)
right_weight_25 = sum(feature28>2.5)/len(feature28)
left_weight_25 = sum(feature28<=2.5)/len(feature28)
info_gained = ent_p - (left_weight_25*ent_l + right_weight_25*ent_r)
# Enter your code here
print('The information gain of the split for threshold of 2.5:',f'{info_gained}')
###Output
The information gain of the split for threshold of 2.5: 0.40989592076102355
###Markdown
**Question 5:** What is the information gain if the threshold is 10.0 on `feature28`? According to information gain, is it better to split on a threshold of 2.5 or 10? Does entropy give the same decision as gini for this example?
###Code
# Enter your code here
ent_r = entropy_value(ds.target[feature28>10])
ent_l = entropy_value(ds.target[feature28<=10])
# compute entropy example. This splits on attribute '28' with a value of 10
print('entropy for right node of split:', ent_r)
print('entropy for left node of split:', ent_l)
right_weight_10 = sum(feature28>10)/len(feature28)
left_weight_10 = sum(feature28<=10)/len(feature28)
info_gained_10 = ent_p - (left_weight_10*ent_l + right_weight_10*ent_r)
# Enter your code here
print('The information gain of the split for threshold of 10:',f'{info_gained_10}')
print('This is not better than the split on 2.5')
print('This is the same as gini')
###Output
entropy for right node of split: 2.112391791714538
entropy for left node of split: 2.066003576622626
The information gain of the split for threshold of 10: 0.20955137704371163
This is not better than the split on 2.5
This is the same as gini
###Markdown
___Back to Top Information gain and multi-way splittingNow assume that we can use not just a binary split, but a three way split. **Question 6** What is the information gain if we split feature28 on two thesholds (three separate nodes corresponding to three branches from one node) - node left: `feature28<2.5`, - node middle: `2.5<=feature28<10`, and - node right: `10<=feature28`? Is the information gain better? ***Note***: You can index into a `numpy` array for the middle node with the following notation: `some_array[(2.5<=feature28) & (feature28<10.0)]`
###Code
# Enter your code here
ent_r = entropy_value(ds.target[feature28>=10])
ent_m = entropy_value(ds.target[(2.5<=feature28) & (feature28<10.0)])
ent_l = entropy_value(ds.target[feature28<2.5])
# compute entropy example. This splits on attribute '28' with a value of 10
print('entropy for right node of split:', ent_r)
print('entropy for middle node of split:', ent_m)
print('entropy for left node of split:', ent_l)
count_r = sum(feature28>=10)
print(f'{count_r}')
count_m = sum((2.5<=feature28) & (feature28<10.0))
print(f'{count_m}')
count_l = sum(feature28<2.5)
print(f'{count_l}')
total_weights = count_r+count_l+count_m
right_weight = count_r/total_weights
middle_weight = count_m/total_weights
left_weight = count_l/total_weights
info_gained = ent_p - (left_weight*ent_l + middle_weight*ent_m + right_weight * ent_r)
# Enter your code here
print('The information gain of the three way split is:',f'{info_gained}')
###Output
entropy for right node of split: 2.118750287884169
entropy for middle node of split: 2.1558341564612853
entropy for left node of split: 1.4898881412786364
1099
299
399
The information gain of the three way split is: 0.3171890999123379
###Markdown
**Question 7**: Should we normalize the quantity that we just calculated if we want to compare it to the information gain of a binary split? Why or Why not? Yes, we should normalize: information gain is biased toward splits with more branches, so to compare the three-way split fairly against a binary split we should divide the gain by the split information, $SplitInfo=-\sum_{t=1}^{T}\frac{n_t}{N}\log\frac{n_t}{N}$ (i.e., use the gain ratio). ___Back to Top Decision Trees in scikit-learnScikit-learn also has an implementation of decision trees. It's available here:- http://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeClassifier.htmlsklearn.tree.DecisionTreeClassifier**Question 8**: What algorithm does scikit-learn use for creating decision trees (i.e., ID3, C4.5, C5.0, CART, MARS, CHAID, etc.)? CART ___**Question 9**: Using the documentation, use scikit-learn to train a decision tree on the digits data. Calculate the accuracy on the training data. What is the accuracy? Did you expect the decision tree to have this kind of accuracy? Why or Why not?
###Code
# use scikit learn to train a decision tree
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import StratifiedShuffleSplit
from sklearn import metrics as mt
# enter your code below here to train and predict on the same data
cv = StratifiedShuffleSplit(n_splits=1,train_size=0.5)
print (cv)
dt_clf = DecisionTreeClassifier()
# now get the training and testing
for train, test in cv.split(ds.data,ds.target):
print ('Training Once:')
# train the decision tree algorithm
%time dt_clf.fit(ds.data[train],ds.target[train])
yhat = dt_clf.predict(ds.data[test])
# enter your code above here
# enter your code below here to calculate accuracy
accuracy = mt.accuracy_score(ds.target[test],yhat)
print('accuracy:', f'{accuracy}')
print('I did/did not expect... Rashmi - b/w 85; Robert - 80; Scott - 77; Taylor - 75')
print('ROBERT WON!')
# enter your code above here
###Output
StratifiedShuffleSplit(n_splits=1, random_state=None, test_size=None,
train_size=0.5)
Training Once:
Wall time: 7 ms
accuracy: 0.7997775305895439
I did/did not expect... Rashmi - b/w 85; Robert - 80; Scott - 77; Taylor - 75
ROBERT WON!
|
Data-Science-Projects-master/ProjectMovieRating/Exploration.ipynb | ###Markdown
Awards and Gross
###Code
d = df['nb_awards'].sort_values(ascending=False)[:15]
plt.figure(figsize=(20,5))
plot = sns.barplot(x=d.index, y=d)
_ = plot.set_xticklabels([elem[:17] for elem in d.index], rotation=15)
_ = plot.set_title('Most awarded (won and nominated) movies')
_ = plot.set_ylabel('Number of awards')
d = df.worldwide_gross.sort_values(ascending=False)[:17]
plt.figure(figsize=(20,5))
plot = sns.barplot(x=d.index, y=d)
_ = plot.set_xticklabels([elem[:20] for elem in d.index], rotation=15)
_ = plot.set_title('Most prolific movies')
_ = plot.set_ylabel('Gross (B$)')
sns.set()
d = df.worldwide_gross.sort_values(ascending=False)[:20]
e = df_awards[df_awards.index.isin(d.index)].isnull().sum(axis=1)
e = len(awards_columns) - e[~e.index.duplicated(keep='first')].reindex(d.index)
margin = 0.05
width = 4*(1.-2.*margin)/15
fig = plt.figure(figsize=(20,5))
ax = fig.add_subplot(111)
ax2 = ax.twinx()
d.plot(kind='bar', color='green', ax=ax, width=width, position=0)
e.plot(kind='bar', color='blue', ax=ax2, width=width, position=1)
ax.set_ylabel('Worldwide Gross (GREEN)')
ax2.set_ylabel('Awards (BLUE)')
ax.set_xlabel('')
ax.set_title('Comparison between Worldwide Gross and Awards')
_ = ax.set_xticklabels([elem[:17] for elem in d.index], rotation = 30, ha='right')
ax2.grid(False)
###Output
_____no_output_____
###Markdown
Facebook likes
###Code
d = df['total_cast_fb_likes'].sort_values(ascending=False)[:15]
e = df[df.index.isin(d.index)].num_facebook_like
plt.figure(figsize=(20,5))
plot = sns.barplot(x=d.index, y=d)
_ = plot.set_xticklabels([elem[:17] for elem in d.index], rotation=15)
_ = plot.set_title('Movies with the most "famous" casting')
_ = plot.set_ylabel('Total Facebook Likes (casting + director)')
sns.set()
d = df['total_cast_fb_likes'].sort_values(ascending=False)[:20]
e = df[df.index.isin(d.index)].num_facebook_like.reindex(d.index)
margin = 0.05
width = 4*(1.-2.*margin)/15
fig = plt.figure(figsize=(20,5))
ax = fig.add_subplot(111)
ax2 = ax.twinx()
d.plot(kind='bar', color='green', ax=ax, width=width, position=0)
e.plot(kind='bar', color='blue', ax=ax2, width=width, position=1)
ax.set_ylabel('Casting Likes (GREEN)')
ax2.set_ylabel('Movie Likes (BLUE)')
ax.set_xlabel('')
ax.set_title('Comparison between Casting Likes and Movie Likes')
_ = ax.set_xticklabels([elem[:17] for elem in d.index], rotation = 30, ha='right')
ax2.grid(False)
###Output
_____no_output_____
###Markdown
Best Actors Actor in movie
###Code
all_actors = [actor for actor in list(set(list(df.actor_1_name) + list(df.actor_2_name) + list(df.actor_3_name))) if pd.notnull(actor)]
imdb_score_per_actor = {}
for actor in all_actors:
imdb_score_per_actor[actor] = df[(df.actor_1_name == actor) | (df.actor_2_name == actor) | (df.actor_3_name == actor)].idmb_score.mean()
millnames = ['',' K',' M',' B']
def millify(n):
if pd.notnull(n):
n = float(n)
millidx = max(0,min(len(millnames)-1,
int(math.floor(0 if n == 0 else math.log10(abs(n))/3))))
return '{:.1f}{}'.format(n / 10**(3 * millidx), millnames[millidx])
else:
return n
gross_per_actor = {}
for actor in all_actors:
gross_per_actor[actor] = df[(df.actor_1_name == actor) | (df.actor_2_name == actor) | (df.actor_3_name == actor)].worldwide_gross.mean()
mini_movie = 3
top_k = 3
best_mini_gross = sorted([(k,v) for k,v in sorted(gross_per_actor.items(), key=lambda x:x[1], reverse=True) if len(df[(df.actor_1_name == k)
| (df.actor_2_name == k)
| (df.actor_3_name == k)]) >= mini_movie], key=lambda x:x[1], reverse=True)[:20]
best_mini_gross_str = [elem[0]+ ', %s (%s movie.s)' % (millify(elem[1]),len(df[(df.actor_1_name == elem[0])
| (df.actor_2_name == elem[0])
| (df.actor_3_name == elem[0])])) for elem in best_mini_gross][:top_k]
best_mini = [(k,v) for k,v in sorted(imdb_score_per_actor.items(), key=lambda x:x[1], reverse=True) if len(df[(df.actor_1_name == k)
| (df.actor_2_name == k)
| (df.actor_3_name == k)]) >= mini_movie][:20]
best_mini_str = [elem[0]+ ', %s (%s movie.s)' % (round(elem[1], 2),len(df[(df.actor_1_name == elem[0])
| (df.actor_2_name == elem[0])
| (df.actor_3_name == elem[0])])) for elem in best_mini][:top_k]
print('The {} best actors are (with minimum {} movies) : \n{}'.format(top_k, mini_movie,
'\n'.join(best_mini_str)))
print('\nThe {} most prolific actors are (with minimum {} movies) : \n{}'.format(top_k, mini_movie,
'\n'.join(best_mini_gross_str)))
plt.figure(figsize=(23,5))
plot = sns.barplot([elem[0] for elem in best_mini], [elem[1] for elem in best_mini])
_ = plot.set_xticklabels([elem[0] for elem in best_mini], rotation=15)
_ = plot.set_title('Most beneficial (IMDB score) actors')
_ = plot.set_ylabel('IMDB score')
plt.figure(figsize=(23,5))
plot = sns.barplot([elem[0] for elem in best_mini_gross], [elem[1] for elem in best_mini_gross])
_ = plot.set_xticklabels([elem[0] for elem in best_mini_gross], rotation=15)
_ = plot.set_title('Most prolific actors')
_ = plot.set_ylabel('Worldwide gross')
###Output
_____no_output_____
###Markdown
First star in movie
###Code
big_star = df.groupby(['actor_1_name'])[['idmb_score', 'worldwide_gross']].mean().sort_values(['idmb_score', 'worldwide_gross'], ascending=False)
big_star['nb_movies'] = big_star.index
big_star['nb_movies'] = big_star['nb_movies'].map(df.groupby(['actor_1_name'])['movie_title'].count().to_dict())
big_star['worldwide_gross'] = big_star['worldwide_gross'].apply(millify)
top_k = 7
print('The {} best actors as most famous actor are :'.format(top_k))
big_star[big_star.nb_movies >= 3].head(top_k)
big_star = df.groupby(['actor_1_name'])[['idmb_score', 'worldwide_gross']].mean().sort_values(['worldwide_gross', 'idmb_score'], ascending=False)
big_star['nb_movies'] = big_star.index
big_star['nb_movies'] = big_star['nb_movies'].map(df.groupby(['actor_1_name'])['movie_title'].count().to_dict())
big_star['worldwide_gross'] = big_star['worldwide_gross'].apply(millify)
top_k = 7
print('The {} most prolific actors as most famous actor are :'.format(top_k))
big_star[big_star.nb_movies >= 3].head(top_k)
###Output
The 7 most prolific actors as most famous actor are :
###Markdown
IMDB rating and other variables
###Code
d = df['idmb_score'].apply(float).sort_values(ascending=False)[:12]
e = df[df.index.isin(d.index)].num_facebook_like.reindex(d.index)
f = df[df.index.isin(d.index)].worldwide_gross.reindex(d.index)
margin = 0.05
width = 4*(1.-2.*margin)/15
fig = plt.figure(figsize=(20,5))
ax = fig.add_subplot(111)
ax2 = ax.twinx()
ax3= ax2.twinx()
d.plot(kind='bar', color='green', ax=ax, width=width, position=0)
e.plot(kind='bar', color='blue', ax=ax2, width=width, position=1)
f.plot(kind='bar', color='purple', ax=ax3, width=width, position=2)
ax.set_ylabel('IMDB Score (GREEN)')
ax2.set_ylabel('Movie Likes(BLUE) and Gross(PURPLE)')
ax3.set_yticklabels('')
ax2.set_yticklabels('')
ax.set_xlabel('')
_ = ax.set_xticklabels([elem[:17] for elem in d.index], rotation = 30, ha='right')
ax3.grid(False)
ax2.grid(False)
ax.set_title('Gross and Movie Likes compared to IMDB score')
# Correlation Matrix
corr = df[['nb_awards', 'domestic_gross','worldwide_gross',
'total_cast_fb_likes','director_fb_likes', 'production_budget',
'num_critic_for_reviews', 'idmb_score', 'actor_1_fb_likes', 'actor_2_fb_likes', 'actor_3_fb_likes']].corr()
plt.figure(figsize=(8,8))
sns.heatmap(corr, mask=np.zeros_like(corr, dtype=bool), cmap=sns.diverging_palette(250, 10, as_cmap=True),
square=True)
plt.title('Correlation matrix for 7 variables and the IMDB Score')
corr
###Output
_____no_output_____
###Markdown
Genres
###Code
with open('genre.json', 'r') as f:
genres = json.load(f)
imdb_score_per_genre = {}
gross_per_genre = {}
genre_columns = [col for col in df.columns if 'genre_' in col]
df_genres = df[genre_columns]
for genre, value in genres.items():
mask = np.column_stack([df_genres[col] == value for col in df_genres])
df_specific_genre = df.loc[mask.any(axis=1)][['genres', 'idmb_score', 'worldwide_gross']]
imdb_score_per_genre[genre] = df_specific_genre.idmb_score.mean()
gross_per_genre[genre] = df_specific_genre.worldwide_gross.mean()
gross_per_genre = {k:v for k,v in gross_per_genre.items() if pd.notnull(v)}
top_k = 5
print('The {} best genres (in terms of IMDB score) are : \n{}'.format(top_k,
'\n'.join(['%s (%s)' % (elem[0], round(elem[1], 1)) for elem in sorted(imdb_score_per_genre.items(), key=lambda x:x[1], reverse=True)][:top_k])))
print('\nThe {} most prolific genres are : \n{}'.format(top_k,
'\n'.join(['%s (%s)' % (elem[0], millify(elem[1])) for elem in sorted(gross_per_genre.items(), key=lambda x:x[1], reverse=True)][:top_k])))
margin = 0.05
width = 4*(1.-2.*margin)/15
fig = plt.figure(figsize=(20,5))
ax = fig.add_subplot(111)
ax2 = ax.twinx()
df_combine = pd.concat([pd.Series(gross_per_genre), pd.Series(imdb_score_per_genre)], axis=1)
df_combine = df_combine.sort_values(1, ascending=False)
df_combine.columns = ['Gross', 'Score']
df_combine.Gross.plot(kind='bar', color='green', ax=ax, width=width, position=0)
df_combine.Score.plot(kind='bar', color='blue', ax=ax2, width=width, position=1)
ax.set_ylabel('Worldwide Gross in M$ (green)')
ax2.set_ylabel('IMDB Score (blue)')
ax.set_xlabel('')
ax.set_title('Comparison between Worldwide Gross and IMDB score per genre')
_ = ax.set_xticklabels(pd.Series(imdb_score_per_genre).index, rotation = 30)
ax2.grid(False)
###Output
_____no_output_____
###Markdown
Prediction Preprocessing
###Code
## Fill NA for genres
df.genres = df.genres.fillna('')
## Mean Inputer
col_to_impute = ['actor_1_fb_likes', 'actor_2_fb_likes', 'actor_3_fb_likes',
'domestic_gross', 'duration_sec', 'num_critic_for_reviews', 'num_facebook_like', 'num_user_for_reviews',
'production_budget', 'total_cast_fb_likes', 'worldwide_gross', 'director_fb_likes']
for col in col_to_impute:
column = np.array(df[col]).reshape(1, -1)
imp = Imputer(missing_values='NaN', strategy='mean', axis=1)
df[col] = imp.fit_transform(column)[0]
numerical_cols = list(df.dtypes[df.dtypes != 'object'].index)
not_wanted_cols = ['title_year', 'storyline', 'release_date', 'image_urls', 'movie_title', 'keywords', 'movie_imdb_link', 'num_voted_users'] + genre_columns
df.country = df.country.apply(lambda x:x.split('|'))
df.language = df.language.apply(lambda x:x.split('|'))
list_cols = ['country', 'genres', 'language']
cols_to_transform = [cols for cols in df.columns if cols not in numerical_cols + not_wanted_cols + list_cols]
df2 = df[cols_to_transform]
## Dummies for columns with list
df_col_list = pd.DataFrame()
for col in list_cols:
df_col_list = pd.concat([df_col_list, pd.get_dummies(df[col].apply(pd.Series).stack()).sum(level=0)], axis=1)
## Dummies for columns with string
df_col_string = pd.get_dummies(df2, columns=cols_to_transform)
X_raw = pd.concat([df[numerical_cols], df_col_string, df_col_list], axis=1)
print('Columns dtypes :', Counter(X_raw.dtypes))
y = list(X_raw.idmb_score)
X = X_raw.drop('idmb_score', axis=1)
X_train, X_test, Y_train, Y_test = train_test_split(
X, y, test_size=0.20, random_state=42)
print('Train', X_train.shape, 'Test', X_test.shape)
###Output
Train (4089, 13017) Test (1023, 13017)
###Markdown
Choosing ML algorithm
###Code
gbr = ensemble.GradientBoostingRegressor(n_estimators=1000)
gbr.fit(X_train,Y_train)
print ("Training Score GradientBoosting: ", str(gbr.score(X_train,Y_train)))
print ("Test Score GradientBoosting: " , str(gbr.score(X_test,Y_test)))
abr = ensemble.AdaBoostRegressor(n_estimators=10, learning_rate=0.4, loss='linear')
abr.fit(X_train,Y_train)
print ("Training Score AdaBoostRegressor: ", str(abr.score(X_train,Y_train)))
print ("Test Score AdaBoostRegressor: " , str(abr.score(X_test,Y_test)))
rf=ensemble.RandomForestRegressor(n_estimators=500,oob_score=True, )
rf.fit(X,y)
print ("Training Score RandomForest: ", str(rf.score(X,y)))
print ("Cross Validation (10 fold) Score: " , np.mean(cross_val_score(rf, X_train, Y_train, cv=10)))
###Output
Training Score RandomForest: 0.933556405649
OOB Score RandomForest: 0.514455022206
###Markdown
Tuning Cross Validation to choose n_estimators
###Code
rfs = {}
for k in [10, 20, 50, 70, 100, 120, 150, 200]:
rf=ensemble.RandomForestRegressor(n_estimators=k, oob_score=True)
rf.fit(X,y)
rfs[k] = np.mean(cross_val_score(rf, X_train, Y_train, cv=5))
x_plot = list(rfs.keys())
y_plot = list(rfs.values())
f, ax = plt.subplots()
ax.scatter(x_plot, y_plot)
ax.set_title('Variation of the Cross Validation score as a function of the number of estimators')
ax.set_xlabel('Number of estimators')
ax.set_ylabel('Cross Validation score')
###Output
_____no_output_____
###Markdown
Min leaf
###Code
rfs2 = {}
for k in tqdm(list(range(1, 11, 2))+list(range(11,25,4))):
rf = ensemble.RandomForestRegressor(n_estimators=120, oob_score=True, min_samples_leaf=k)
rf.fit(X,y)
rfs2[k] = rf.oob_score_
x_plot = list(rfs2.keys())
y_plot = list(rfs2.values())
f, ax = plt.subplots()
ax.scatter(x_plot, y_plot)
ax.set_title('Variation of the OOB score as a function of the minimum samples per leaf')
ax.set_xlabel('Minimum samples per leaf')
ax.set_ylabel('OOB score')
###Output
_____no_output_____
###Markdown
max_features
###Code
rfs2 = {}
for k in ["log2", "auto", "sqrt", 0.2, 0.1, 0.3] :
rf = ensemble.RandomForestRegressor(n_estimators=120, oob_score=True, min_samples_leaf= 1, max_features = k)
rf.fit(X,y)
rfs2[k] = rf.oob_score_
x_plot = range(len(rfs2))# list(rfs2.keys())
y_plot = list(rfs2.values())
print(list(rfs2.keys()))
f, ax = plt.subplots()
ax.scatter(x_plot, y_plot)
ax.set_title('Variation of the OOB score as a function of max_features')
ax.set_xlabel('max_features setting')
ax.set_ylabel('OOB score')
###Output
[0.2, 0.1, 0.3, 'log2', 'auto', 'sqrt']
###Markdown
Learning
###Code
rf = ensemble.RandomForestRegressor(n_estimators=120, oob_score=True, max_features=0.2, min_samples_leaf=5)
rf.fit(X,y)
print ("Training Score RandomForest: ", str(rf.score(X,y)))
print ("OOB Score RandomForest: " , str(rf.oob_score_))
###Output
Training Score RandomForest: 0.676602290011
OOB Score RandomForest: 0.472550704314
###Markdown
Most important features
###Code
top_k = 15
plt.figure(figsize=(20,5))
names = X_train.columns[np.argsort(rf.feature_importances_)[::-1][:top_k]]
values = np.sort(rf.feature_importances_)[::-1][:top_k]
plot = sns.barplot(x = names, y = values, order=names)
_ = plot.set_xticklabels(names, rotation=15)
_ = plot.set_title('Most important features')
###Output
_____no_output_____ |
notebooks/Surface_Data/Advanced StationPlots with Mesonet Data.ipynb | ###Markdown
Advanced Surface Observations: Working with Mesonet DataUnidata Python Workshop Overview:* **Teaching:** 30 minutes* **Exercises:** 35 minutes Questions1. How do I read in complicated mesonet data with Pandas?1. How do I merge multiple Pandas DataFrames?1. What's the best way to make a station plot of data?1. How can I make a time series of data from one station? Objectives1. Read Mesonet data with Pandas2. Merge multiple Pandas DataFrames together 3. Plot mesonet data with MetPy and CartoPy4. Create time series plots of station data Reading Mesonet Data In this notebook, we're going to use the Pandas library to read text-based data. Pandas is excellent at handling text, csv, and other files. However, you have to help Pandas figure out how your data is formatted sometimes. Lucky for you, mesonet data frequently comes in forms that are not the most user-friendly. Through this notebook, we'll see how these complicated datasets can be handled nicely by Pandas to create useful station plots for hand analysis or publication.
###Code
# Import Pandas
import pandas as pd
###Output
_____no_output_____
###Markdown
West Texas Mesonet The [West Texas Mesonet](http://www.depts.ttu.edu/nwi/research/facilities/wtm/index.php) is a wonderful data source for researchers and storm chasers alike! We have some 5-minute observations from the entire network on 22 March 2019 that we'll analyze in this notebook. Pandas can parse time into a nice internal storage format as we read in the file. If the time is specified in the file in a somewhat standard form, pandas will even guess at the format if you tell it which column to use. However, in this case the time is reported in a horrible format: between one and four characters that, if there are four characters, represent hours and minutes as HHMM. Let's take a character string, turn it into an integer, and then use integer string formatting to write out a four-character string.
###Code
for t in ['0', '05', '100', '1005']:
print('{0:04d}'.format(int(t)))
###Output
_____no_output_____
###Markdown
Pandas can be told how to parse non-standard date formats by writing an arbitrary function that takes a string and returns a datetime. Here's what that function looks like in this case. We can use timedelta to convert hours and minutes, and then add them to the start date using date math.
###Code
def parse_tx_date(v, start_date=None):
s = '{0:04d}'.format(int(v)) # regularize the data to a four character string
hour = pd.to_timedelta(int(s[0:2]), 'hour')
minute = pd.to_timedelta(int(s[2:4]), 'minute')
return start_date + hour + minute
# Read in the data and handle the lines that cause issues
# Get a nice date variable cooresponding to the start time
start_date = pd.datetime.strptime('2019-03-22', '%Y-%m-%d')
print(start_date)
# Pre-apply the start date to our date parsing function, so that pandas only passes one value
from functools import partial
date_parser = partial(parse_tx_date, start_date=start_date)
filename = 'West_Texas_data/FIVEMIN_82.txt'
tx_data = pd.read_csv(filename, delimiter=',', header=None, error_bad_lines=False, warn_bad_lines=False,
parse_dates=[2], date_parser=date_parser
)
tx_data
# Rename columns to be understandable
tx_data.columns = ['Array_ID', 'QC_flag', 'Time', 'Station_ID', '10m_scalar_wind_speed',
'10m_vector_wind_speed', '10m_wind_direction',
'10m_wind_direction_std', '10m_wind_speed_std',
'10m_gust_wind_speed', '1.5m_temperature',
'9m_temperature', '2m_temperature',
'1.5m_relative_humidity', 'station_pressure', 'rainfall',
'dewpoint', '2m_wind_speed', 'solar_radiation']
tx_data
###Output
_____no_output_____
###Markdown
The West Texas mesonet provides data on weather, agriculture, and radiation. These different observations are encoded 1, 2, and 3, respectively in the Array ID column. Let's parse out only the meteorological data for this exercise.
###Code
# Remove non-meteorological rows
tx_data = tx_data[tx_data['Array_ID'] == 1]
tx_data
###Output
_____no_output_____
###Markdown
Station pressure is 600 hPa lower than it should be, so let's correct that as well!
###Code
# Correct presssure
tx_data['station_pressure'] += 600
tx_data['station_pressure']
###Output
_____no_output_____
###Markdown
Finally, let's read in the station metadata file for the West Texas mesonet, so that we can have coordinates to plot data later on.
###Code
tx_stations = pd.read_csv('WestTexas_stations.csv')
tx_stations
###Output
_____no_output_____
###Markdown
Oklahoma Data Try reading in the Oklahoma Mesonet data located in the `201903222300.mdf` file using Pandas. Check out the documentation on Pandas if you run into issues! Make sure to handle missing values as well. Also read in the Oklahoma station data from the `Oklahoma_stations.csv` file. Only read in the station ID, latitude, and longitude columns from that file.
###Code
# Your code here
def parse_ok_date(v, start_date=None):
s = '{0:04d}'.format(int(v)) # regularize the data to a four character string
minute = pd.to_timedelta(int(s), 'minute')
return start_date + minute
# %load solutions/read_ok.py
###Output
_____no_output_____
###Markdown
Merging DataFrames We now have two data files per mesonet - one for the data itself and one for the metadata. It would be really nice to combine these DataFrames together into one for each mesonet. Pandas has some built in methods to do this - see [here](https://pandas.pydata.org/pandas-docs/stable/user_guide/merging.html). For this example, we'll be using the `merge` method. First, let's rename columns in the Oklahoma station DataFrame to be more understandable.
###Code
# Rename columns so merging can occur
ok_stations.columns = ['STID', 'LAT', 'LON']
###Output
_____no_output_____
###Markdown
Conveniently, we have a `STID` column in both DataFrames. Let's base our merge on that and see what we get!
###Code
# Merge the two data frames based on the Station ID
ok_data = pd.merge(ok_data, ok_stations, on='STID')
ok_data
###Output
_____no_output_____
###Markdown
That was nice! But what if our DataFrames don't have the same column name, and we want to avoid renaming columns? Check out the documentation for `pd.merge` and see how we can merge the West Texas DataFrames together. Also, subset the data to only be from 2300 UTC, which is when our Oklahoma data was taken. Call the new DataFrame `tx_one_time`.
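As a generic illustration of the pattern only (a toy example with made-up column names, not the mesonet files), `pd.merge` accepts `left_on`/`right_on` when the key columns are named differently:
```python
import pandas as pd

left = pd.DataFrame({'Station_ID': ['A', 'B'], 'temp': [20.1, 18.4]})
right = pd.DataFrame({'ID': ['A', 'B'], 'lat': [33.6, 35.2]})
print(pd.merge(left, right, left_on='Station_ID', right_on='ID'))
```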
###Code
# Your code here
# %load solutions/merge_texas.py
###Output
_____no_output_____
###Markdown
Creating a Station Plot Let's say we want to plot temperature, dewpoint, and wind barbs. Given our data from the two mesonets, do we have what we need? If not, use MetPy to calculate what you need!
###Code
import metpy.calc as mpcalc
from metpy.units import units
# Your code here
# %load solutions/data_conversion.py
###Output
_____no_output_____
###Markdown
Now, let's make a Station Plot with our data using MetPy and CartoPy.
###Code
from metpy.plots import StationPlot
import cartopy.crs as ccrs
import cartopy.feature as cfeature
import matplotlib.pyplot as plt
# Set up a plot with map features
fig = plt.figure(figsize=(12, 12))
proj = ccrs.Stereographic(central_longitude=-100, central_latitude=35)
ax = fig.add_subplot(1, 1, 1, projection=proj)
ax.add_feature(cfeature.STATES.with_scale('50m'), edgecolor='black')
ax.gridlines()
# Create a station plot pointing to an Axes to draw on as well as the location of points
stationplot = StationPlot(ax, ok_data['LON'].values, ok_data['LAT'].values, transform=ccrs.PlateCarree(),
fontsize=10)
stationplot.plot_parameter('NW', ok_data['TAIR'], color='red')
stationplot.plot_parameter('SW', ok_dewpoint, color='green')
stationplot.plot_barb(ok_u, ok_v)
# Texas Data
stationplot = StationPlot(ax, tx_one_time['Long'].values, tx_one_time['Lat'].values, transform=ccrs.PlateCarree(),
fontsize=10)
stationplot.plot_parameter('NW', tx_one_time['2m_temperature'], color='red')
stationplot.plot_parameter('SW', tx_one_time['dewpoint'], color='green')
stationplot.plot_barb(tx_u, tx_v)
###Output
_____no_output_____
###Markdown
This is an informative plot, but is rather crowded. Using MetPy's `reduce_point_density` function, try cleaning up this plot to something that would be presentable/publishable. This function will return a mask, which you'll apply to all arrays in the plotting commands to filter down the data.
###Code
# Oklahoma
xy = proj.transform_points(ccrs.PlateCarree(), ok_data['LON'].values, ok_data['LAT'].values)
# Reduce point density so that there's only one point within a 50km circle
ok_mask = mpcalc.reduce_point_density(xy, 50000)
# Texas
# Your code here
# Plot
# Your code here
# %load solutions/reduce_and_plot.py
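# A sketch for the Texas part (assumes tx_one_time and tx_u/tx_v from the exercises above).
# transform_points puts the lon/lat pairs into the map projection, which is the space
# reduce_point_density works in.
tx_xy = proj.transform_points(ccrs.PlateCarree(), tx_one_time['Long'].values, tx_one_time['Lat'].values)
tx_mask = mpcalc.reduce_point_density(tx_xy, 50000)
# For the plot, index every array handed to StationPlot with its mask, e.g.
# ok_data['LON'].values[ok_mask], ok_data['TAIR'].values[ok_mask], ok_u[ok_mask],
# tx_one_time['Long'].values[tx_mask], tx_u[tx_mask], and so on, so that only the
# thinned-out stations are drawn.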
###Output
_____no_output_____
###Markdown
Creating Time Series for Stations What if we want to take data from all times from a single station to make a time series (or meteogram) plot? How can we easily do that with Pandas without having to aggregate the data by hand?
###Code
import numpy as np
# Select daylight hours
tx_daytime = tx_data[(tx_data['Time'] >= '2019-03-22 06:00') & (tx_data['Time'] <= '2019-03-22 20:00')]
# Create sub-tables for each station
tx_grp = tx_daytime.groupby('Station_ID')
# Get data from station DIMM
station_data = tx_grp.get_group('DIMM')
# Create hourly averaged data
# time_bins = pd.cut(station_data['Time'], np.arange(600, 2100, 100))
# xarray has groupby_bins, but pandas has cut
station_data.index=station_data['Time']
station_hourly = station_data.resample('H')
# station_hourly = station_data.groupby(time_bins)
station_hourly_mean = station_hourly.mean()
station_hourly_mean = station_hourly_mean.reset_index() # no longer index by time so that we get it back as a regular variable.
# The times are reported at the beginning of the interval, but really represent
# the mean symmetric about the half hour. Let's fix that.
# from datetime import timedelta timedelta(minutes=30) #
station_hourly_mean['Time'] += pd.to_timedelta(30, 'minutes')
print(station_hourly_mean['Time'])
print(station_data['Time'])
###Output
_____no_output_____
###Markdown
Use the data above to make a time series plot of the instantaneous data and the hourly averaged data:
###Code
# Your code here
# %load solutions/mesonet_timeseries.py
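# A minimal sketch: plot the 5-minute observations and the hourly means for one variable
# ('1.5m_temperature' comes from the column renaming above; swap in any other numeric column).
fig, ax = plt.subplots(figsize=(10, 6))
ax.plot(station_data['Time'], station_data['1.5m_temperature'], '.', label='5-minute observations')
ax.plot(station_hourly_mean['Time'], station_hourly_mean['1.5m_temperature'], 'o-', label='hourly mean')
ax.set_xlabel('Time (UTC)')
ax.set_ylabel('1.5 m temperature')
ax.set_title('Station DIMM, 22 March 2019')
ax.legend()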
###Output
_____no_output_____
###Markdown
Advanced Surface Observations: Working with Mesonet DataUnidata Python Workshop Overview:* **Teaching:** 30 minutes* **Exercises:** 35 minutes Questions1. How do I read in complicated mesonet data with Pandas?1. How do I merge multiple Pandas DataFrames?1. What's the best way to make a station plot of data?1. How can I make a time series of data from one station? Objectives1. Read Mesonet data with Pandas2. Merge multiple Pandas DataFrames together 3. Plot mesonet data with MetPy and CartoPy4. Create time series plots of station data Reading Mesonet Data In this notebook, we're going to use the Pandas library to read text-based data. Pandas is excellent at handling text, csv, and other files. However, you have to help Pandas figure out how your data is formatted sometimes. Lucky for you, mesonet data frequently comes in forms that are not the most user-friendly. Through this notebook, we'll see how these complicated datasets can be handled nicely by Pandas to create useful station plots for hand analysis or publication.
###Code
# Import Pandas
import pandas as pd
###Output
_____no_output_____
###Markdown
West Texas Mesonet The [West Texas Mesonet](http://www.depts.ttu.edu/nwi/research/facilities/wtm/index.php) is a wonderful data source for researchers and storm chasers alike! We have some 5-minute observations from the entire network on 22 March 2019 that we'll analyze in this notebook. Pandas can parse time into a nice internal storage format as we read in the file. If the time is specified in the file in a somewhat standard form, pandas will even guess at the format if you tell it which column to use. However, in this case the time is reported in a horrible format: between one and four characters that, if there are four characters, represent hours and minutes as HHMM. Let's take a character string, turn it into an integer, and then use integer string formatting to write out a four-character string.
###Code
for t in ['0', '05', '100', '1005']:
print('{0:04d}'.format(int(t)))
###Output
_____no_output_____
###Markdown
Pandas can be told how to parse non-standard date formats by writing an arbitrary function that takes a string and returns a datetime. Here's what that function looks like in this case. We can use timedelta to convert hours and minutes, and then add them to the start date using date math.
###Code
def parse_tx_date(v, start_date=None):
s = '{0:04d}'.format(int(v)) # regularize the data to a four character string
hour = pd.to_timedelta(int(s[0:2]), 'hour')
minute = pd.to_timedelta(int(s[2:4]), 'minute')
return start_date + hour + minute
# Read in the data and handle the lines that cause issues
# Get a nice date variable corresponding to the start time
start_date = pd.datetime.strptime('2019-03-22', '%Y-%m-%d')
print(start_date)
# Pre-apply the start date to our date parsing function, so that pandas only passes one value
from functools import partial
date_parser = partial(parse_tx_date, start_date=start_date)
filename = 'West_Texas_data/FIVEMIN_82.txt'
tx_data = pd.read_csv(filename, delimiter=',', header=None, error_bad_lines=False, warn_bad_lines=False,
parse_dates=[2], date_parser=date_parser
)
tx_data
# Rename columns to be understandable
tx_data.columns = ['Array_ID', 'QC_flag', 'Time', 'Station_ID', '10m_scalar_wind_speed',
'10m_vector_wind_speed', '10m_wind_direction',
'10m_wind_direction_std', '10m_wind_speed_std',
'10m_gust_wind_speed', '1.5m_temperature',
'9m_temperature', '2m_temperature',
'1.5m_relative_humidity', 'station_pressure', 'rainfall',
'dewpoint', '2m_wind_speed', 'solar_radiation']
tx_data
###Output
_____no_output_____
###Markdown
The West Texas mesonet provides data on weather, agriculture, and radiation. These different observations are encoded 1, 2, and 3, respectively in the Array ID column. Let's parse out only the meteorological data for this exercise.
###Code
# Remove non-meteorological rows
tx_data = tx_data[tx_data['Array_ID'] == 1]
tx_data
###Output
_____no_output_____
###Markdown
Station pressure is 600 hPa lower than it should be, so let's correct that as well!
###Code
# Correct pressure
tx_data['station_pressure'] += 600
tx_data['station_pressure']
###Output
_____no_output_____
###Markdown
Finally, let's read in the station metadata file for the West Texas mesonet, so that we can have coordinates to plot data later on.
###Code
tx_stations = pd.read_csv('WestTexas_stations.csv')
tx_stations
###Output
_____no_output_____
###Markdown
Oklahoma Data Try reading in the Oklahoma Mesonet data located in the `201903222300.mdf` file using Pandas. Check out the documentation on Pandas if you run into issues! Make sure to handle missing values as well. Also read in the Oklahoma station data from the `Oklahoma_stations.csv` file. Only read in the station ID, latitude, and longitude columns from that file.
###Code
# Your code here
def parse_ok_date(v, start_date=None):
s = '{0:04d}'.format(int(v)) # regularize the data to a four character string
minute = pd.to_timedelta(int(s), 'minute')
return start_date + minute
# %load solutions/read_ok.py
###Output
_____no_output_____
###Markdown
Merging DataFrames We now have two data files per mesonet - one for the data itself and one for the metadata. It would be really nice to combine these DataFrames together into one for each mesonet. Pandas has some built in methods to do this - see [here](https://pandas.pydata.org/pandas-docs/stable/user_guide/merging.html). For this example, we'll be using the `merge` method. First, let's rename columns in the Oklahoma station DataFrame to be more understandable.
###Code
# Rename columns so merging can occur
ok_stations.columns = ['STID', 'LAT', 'LON']
###Output
_____no_output_____
###Markdown
Conveniently, we have a `STID` column in both DataFrames. Let's base our merge on that and see what we get!
###Code
# Merge the two data frames based on the Station ID
ok_data = pd.merge(ok_data, ok_stations, on='STID')
ok_data
###Output
_____no_output_____
###Markdown
That was nice! But what if our DataFrames don't have the same column name, and we want to avoid renaming columns? Check out the documentation for `pd.merge` and see how we can merge the West Texas DataFrames together. Also, subset the data to only be from 2300 UTC, which is when our Oklahoma data was taken. Call the new DataFrame `tx_one_time`.
###Code
# Your code here
# %load solutions/merge_texas.py
###Output
_____no_output_____
###Markdown
Creating a Station Plot Let's say we want to plot temperature, dewpoint, and wind barbs. Given our data from the two mesonets, do we have what we need? If not, use MetPy to calculate what you need!
###Code
import metpy.calc as mpcalc
from metpy.units import units
# Your code here
# %load solutions/data_conversion.py
###Output
_____no_output_____
###Markdown
Now, let's make a Station Plot with our data using MetPy and CartoPy.
###Code
from metpy.plots import StationPlot
import cartopy.crs as ccrs
import cartopy.feature as cfeature
import matplotlib.pyplot as plt
# Set up a plot with map features
fig = plt.figure(figsize=(12, 12))
proj = ccrs.Stereographic(central_longitude=-100, central_latitude=35)
ax = fig.add_subplot(1, 1, 1, projection=proj)
ax.add_feature(cfeature.STATES.with_scale('50m'), edgecolor='black')
ax.gridlines()
# Create a station plot pointing to an Axes to draw on as well as the location of points
stationplot = StationPlot(ax, ok_data['LON'].values, ok_data['LAT'].values, transform=ccrs.PlateCarree(),
fontsize=10)
stationplot.plot_parameter('NW', ok_data['TAIR'], color='red')
stationplot.plot_parameter('SW', ok_dewpoint, color='green')
stationplot.plot_barb(ok_u, ok_v)
# Texas Data
stationplot = StationPlot(ax, tx_one_time['Long'].values, tx_one_time['Lat'].values, transform=ccrs.PlateCarree(),
fontsize=10)
stationplot.plot_parameter('NW', tx_one_time['2m_temperature'], color='red')
stationplot.plot_parameter('SW', tx_one_time['dewpoint'], color='green')
stationplot.plot_barb(tx_u, tx_v)
###Output
_____no_output_____
###Markdown
This is an informative plot, but is rather crowded. Using MetPy's `reduce_point_density` function, try cleaning up this plot to something that would be presentable/publishable. This function will return a mask, which you'll apply to all arrays in the plotting commands to filter down the data.
###Code
# Oklahoma
xy = proj.transform_points(ccrs.PlateCarree(), ok_data['LON'].values, ok_data['LAT'].values)
# Reduce point density so that there's only one point within a 50km circle
ok_mask = mpcalc.reduce_point_density(xy, 50000)
# Texas
# Your code here
# Plot
# Your code here
# %load solutions/reduce_and_plot.py
###Output
_____no_output_____
###Markdown
Creating Time Series for Stations What if we want to take data from all times from a single station to make a time series (or meteogram) plot? How can we easily do that with Pandas without having to aggregate the data by hand?
###Code
import numpy as np
# Select daylight hours
tx_daytime = tx_data[(tx_data['Time'] >= '2019-03-22 06:00') & (tx_data['Time'] <= '2019-03-22 20:00')]
# Create sub-tables for each station
tx_grp = tx_daytime.groupby('Station_ID')
# Get data from station DIMM
station_data = tx_grp.get_group('DIMM')
# Create hourly averaged data
# time_bins = pd.cut(station_data['Time'], np.arange(600, 2100, 100))
# xarray has groupby_bins, but pandas has cut
station_data.index=station_data['Time']
station_hourly = station_data.resample('H')
# station_hourly = station_data.groupby(time_bins)
station_hourly_mean = station_hourly.mean()
station_hourly_mean = station_hourly_mean.reset_index() # no longer index by time so that we get it back as a regular variable.
# The times are reported at the beginning of the interval, but really represent
# the mean symmetric about the half hour. Let's fix that.
# from datetime import timedelta timedelta(minutes=30) #
station_hourly_mean['Time'] += pd.to_timedelta(30, 'minutes')
print(station_hourly_mean['Time'])
print(station_data['Time'])
###Output
_____no_output_____
###Markdown
Use the data above to make a time series plot of the instantaneous data and the hourly averaged data:
###Code
# Your code here
# %load solutions/mesonet_timeseries.py
###Output
_____no_output_____
###Markdown
Advanced Surface Observations: Working with Mesonet DataUnidata Python Workshop Overview:* **Teaching:** 30 minutes* **Exercises:** 35 minutes Questions1. How do I read in complicated mesonet data with Pandas?1. How do I merge multiple Pandas DataFrames?1. What's the best way to make a station plot of data?1. How can I make a time series of data from one station? Objectives1. Read Mesonet data with Pandas2. Merge multiple Pandas DataFrames together 3. Plot mesonet data with MetPy and CartoPy4. Create time series plots of station data Reading Mesonet Data In this notebook, we're going to use the Pandas library to read text-based data. Pandas is excellent at handling text, csv, and other files. However, you have to help Pandas figure out how your data is formatted sometimes. Lucky for you, mesonet data frequently comes in forms that are not the most user-friendly. Through this notebook, we'll see how these complicated datasets can be handled nicely by Pandas to create useful station plots for hand analysis or publication.
###Code
# Import Pandas
import pandas as pd
###Output
_____no_output_____
###Markdown
West Texas Mesonet The [West Texas Mesonet](http://www.depts.ttu.edu/nwi/research/facilities/wtm/index.php) is a wonderful data source for researchers and storm chasers alike! We have some 5-minute observations from the entire network on 22 March 2019 that we'll analyze in this notebook.
###Code
# Read in the data and handle the lines that cause issues
filename = 'West_Texas_data/FIVEMIN_82.txt'
tx_data = pd.read_csv(filename, delimiter=',', header=None, error_bad_lines=False, warn_bad_lines=False)
tx_data
# Rename columns to be understandable
tx_data.columns = ['Array_ID', 'QC_flag', 'Time', 'Station_ID', '10m_scalar_wind_speed',
'10m_vector_wind_speed', '10m_wind_direction',
'10m_wind_direction_std', '10m_wind_speed_std',
'10m_gust_wind_speed', '1.5m_temperature',
'9m_temperature', '2m_temperature',
'1.5m_relative_humidity', 'station_pressure', 'rainfall',
'dewpoint', '2m_wind_speed', 'solar_radiation']
tx_data
###Output
_____no_output_____
###Markdown
The West Texas mesonet provides data on weather, agriculture, and radiation. These different observations are encoded 1, 2, and 3, respectively in the Array ID column. Let's parse out only the meteorological data for this exercise.
###Code
# Remove non-meteorological rows
tx_data = tx_data[tx_data['Array_ID'] == 1]
tx_data
###Output
_____no_output_____
###Markdown
Station pressure is 600 hPa lower than it should be, so let's correct that as well!
###Code
# Correct pressure
tx_data['station_pressure'] += 600
tx_data['station_pressure']
###Output
_____no_output_____
###Markdown
Convert TimeTime is given as HHMM in this file, but it would be great to have a ```datetime``` object to work with, so that time is continuous for plotting and not just an array of scalar values.
###Code
# Something like this should work but isn't for me
# tx_data['Time'] = pd.to_datetime(tx_data['Time'].apply(str), format='%H%d', origin='2019-03-22')
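# A working sketch: zero-pad the HHMM integers to four characters, then add the hours and
# minutes to the date as timedeltas. The result goes into a new (hypothetical) 'Datetime'
# column so the later cells that expect the original integer 'Time' are unaffected.
padded = tx_data['Time'].astype(int).astype(str).str.zfill(4)
tx_data['Datetime'] = (pd.Timestamp('2019-03-22')
                       + pd.to_timedelta(padded.str[:2].astype(int), 'hour')
                       + pd.to_timedelta(padded.str[2:].astype(int), 'minute'))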
tx_data
###Output
_____no_output_____
###Markdown
Finally, let's read in the station metadata file for the West Texas mesonet, so that we can have coordinates to plot data later on.
###Code
tx_stations = pd.read_csv('WestTexas_stations.csv')
tx_stations
###Output
_____no_output_____
###Markdown
Oklahoma Data Try reading in the Oklahoma Mesonet data located in the `201903222300.mdf` file using Pandas. Check out the documentation on Pandas if you run into issues! Make sure to handle missing values as well. Also read in the Oklahoma station data from the `Oklahoma_stations.csv` file. Only read in the station ID, latitude, and longitude columns from that file.
###Code
# Your code here
# %load solutions/read_ok.py
###Output
_____no_output_____
###Markdown
Merging DataFrames We now have two data files per mesonet - one for the data itself and one for the metadata. It would be really nice to combine these DataFrames together into one for each mesonet. Pandas has some built in methods to do this - see [here](https://pandas.pydata.org/pandas-docs/stable/user_guide/merging.html). For this example, we'll be using the `merge` method. First, let's rename columns in the Oklahoma station DataFrame to be more understandable.
###Code
# Rename columns so merging can occur
ok_stations.columns = ['STID', 'LAT', 'LON']
###Output
_____no_output_____
###Markdown
Conveniently, we have a `STID` column in both DataFrames. Let's base our merge on that and see what we get!
###Code
# Merge the two data frames based on the Station ID
ok_data = pd.merge(ok_data, ok_stations, on='STID')
ok_data
###Output
_____no_output_____
###Markdown
That was nice! But what if our DataFrames don't have the same column name, and we want to avoid renaming columns? Check out the documentation for `pd.merge` and see how we can merge the West Texas DataFrames together. Also, subset the data to only be from 2300 UTC, which is when our Oklahoma data was taken. Call the new DataFrame `tx_one_time`.
###Code
# Your code here
# %load solutions/merge_texas.py
###Output
_____no_output_____
###Markdown
Creating a Station Plot Let's say we want to plot temperature, dewpoint, and wind barbs. Given our data from the two mesonets, do we have what we need? If not, use MetPy to calculate what you need!
###Code
import metpy.calc as mpcalc
from metpy.units import units
# Your code here
# %load solutions/data_conversion.py
###Output
_____no_output_____
###Markdown
Now, let's make a Station Plot with our data using MetPy and CartoPy.
###Code
from metpy.plots import StationPlot
import cartopy.crs as ccrs
import cartopy.feature as cfeature
import matplotlib.pyplot as plt
# Set up a plot with map features
fig = plt.figure(figsize=(12, 12))
proj = ccrs.Stereographic(central_longitude=-100, central_latitude=35)
ax = fig.add_subplot(1, 1, 1, projection=proj)
ax.add_feature(cfeature.STATES.with_scale('50m'), edgecolor='black')
ax.gridlines()
# Create a station plot pointing to an Axes to draw on as well as the location of points
stationplot = StationPlot(ax, ok_data['LON'].values, ok_data['LAT'].values, transform=ccrs.PlateCarree(),
fontsize=10)
stationplot.plot_parameter('NW', ok_data['TAIR'], color='red')
stationplot.plot_parameter('SW', ok_dewpoint, color='green')
stationplot.plot_barb(ok_u, ok_v)
# Texas Data
stationplot = StationPlot(ax, tx_one_time['Long'].values, tx_one_time['Lat'].values, transform=ccrs.PlateCarree(),
fontsize=10)
stationplot.plot_parameter('NW', tx_one_time['2m_temperature'], color='red')
stationplot.plot_parameter('SW', tx_one_time['dewpoint'], color='green')
stationplot.plot_barb(tx_u, tx_v)
###Output
_____no_output_____
###Markdown
This is an informative plot, but is rather crowded. Using MetPy's `reduce_point_density` function, try cleaning up this plot to something that would be presentable/publishable. This function will return a mask, which you'll apply to all arrays in the plotting commands to filter down the data.
###Code
# Oklahoma
xy = proj.transform_points(ccrs.PlateCarree(), ok_data['LON'].values, ok_data['LAT'].values)
# Reduce point density so that there's only one point within a 50km circle
ok_mask = mpcalc.reduce_point_density(xy, 50000)
# Texas
# Your code here
# Plot
# Your code here
# %load solutions/reduce_and_plot.py
###Output
_____no_output_____
###Markdown
Creating Time Series for Stations What if we want to take data from all times from a single station to make a time series (or meteogram) plot? How can we easily do that with Pandas without having to aggregate the data by hand?
###Code
import numpy as np
# Select daylight hours
tx_daytime = tx_data[(tx_data['Time'] >= 600) & (tx_data['Time'] <= 2000)]
# Create sub-tables for each station
tx_grp = tx_daytime.groupby('Station_ID')
# Get data from station DIMM
station_data = tx_grp.get_group('DIMM')
# Create hourly averaged data
time_bins = pd.cut(station_data['Time'], np.arange(600, 2100, 100))
# xarray has groupby_bins, but pandas has cut
station_hourly = station_data.groupby(time_bins)
station_hourly_mean = station_hourly.mean()
###Output
_____no_output_____
###Markdown
Use the data above to make a time series plot of the instantaneous data and the hourly averaged data:
###Code
# Your code here
# %load solutions/mesonet_timeseries.py
###Output
_____no_output_____ |
PageRank-LA.ipynb | ###Markdown
This is our micro-internet. Imagine we have 100 Procrastinating Pats on our micro-internet, each viewing a single website at a time. Each minute the Pats follow a link on their website to another site on the micro-internet. After a while, the websites that are most linked to will have more Pats visiting them, and in the long run, each minute for every Pat that leaves a website, another will enter, keeping the total number of Pats on each website constant. The PageRank is simply the ranking of websites by how many Pats they have on them at the end of this process.
###Code
L = np.array([[0, 1/3, 1/3, 1/3],[1/2 ,0, 0, 1/2],[0, 0, 0, 1],[0, 1/2 , 1/2, 0]]).T
L
eVals, eVecs = la.eig(L) # Gets the eigenvalues and vectors
eVals #we only care about the principal eigenvector (the one with the largest eigenvalue, which will be 1 in this case)
order = np.absolute(eVals).argsort()[::-1] # Orders them by their eigenvalues
eVals = eVals[order]
eVecs = eVecs[:,order]
eVals
eVecs
r = eVecs[:, 0] # Sets r to be the principal eigenvector
100 * np.real(r / np.sum(r)) # Make this eigenvector sum to one, then multiply by 100 Procrastinating Pats
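# Sanity checks (a quick sketch): every column of L sums to 1 (each Pat always goes
# somewhere), and the normalised principal eigenvector is a steady state, i.e. L @ r = r.
pats = 100 * np.real(r / np.sum(r))
print('columns sum to 1:', np.allclose(L.sum(axis=0), 1))
print('steady state L @ r = r:', np.allclose(L @ pats, pats))
pats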
###Output
_____no_output_____
###Markdown
We can see from this list the number of Procrastinating Pats that we expect to find on each website after long times. Putting them in order of popularity (based on this metric), the PageRank of this micro-internet is: D => 40 Pats, C & B => 24 each, A => 12. In principle, we could use a linear algebra library, as above, to calculate the eigenvalues and vectors, and this would work for a small system. But this gets unmanageable for large systems. Since we only care about the principal eigenvector (the one with the largest eigenvalue, which will be 1 in this case), we can use the power iteration method, which scales better and is faster for large systems.
###Code
d = 0.5
n = 4
M = d * L + (1-d)/n * np.ones([n, n])
M
r = 100 * np.ones(n) / n # Sets up this vector (n entries of 1/n × 100 each; here, 4 entries of 25)
lastR = r
r = M @ r
i = 0
while la.norm(lastR - r) > 0.01 :
lastR = r
r = M @ r
i += 1
print(str(i) + " iterations to convergence.")
r
np.sum(r)
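# The same idea wrapped in a small reusable function (a sketch; the default d=0.85 is the
# damping value usually quoted for PageRank, not the d=0.5 used above).
def pagerank(L, d=0.85, tol=0.01):
    n = L.shape[0]
    M = d * L + (1 - d) / n * np.ones([n, n])
    r = 100 * np.ones(n) / n # start with the Pats spread evenly
    last_r = r + 10 * tol # force at least one iteration
    while la.norm(last_r - r) > tol:
        last_r = r
        r = M @ r
    return r
pagerank(L, d=0.5) # should be close to the r computed above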
###Output
_____no_output_____ |
18-Day-Kaggle-Competition-Automobile/concrete-baseline.ipynb | ###Markdown
Concrete Feature Engineering--- Reference> [What Is Feature Engineering](https://www.kaggle.com/ryanholbrook/what-is-feature-engineering)> [Data Source](https://www.kaggle.com/ryanholbrook/fe-course-data)--- Dependencies
###Code
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score
%load_ext autotime
###Output
time: 160 µs (started: 2021-08-15 15:29:08 -07:00)
###Markdown
--- Import Dataset To illustrate these ideas we'll see how adding a few synthetic features to a dataset can improve the predictive performance of a random forest model.The [Concrete](https://www.kaggle.com/sinamhd9/concrete-comprehensive-strength) dataset contains a variety of concrete formulations and the resulting product's *compressive strength*, which is a measure of how much load that kind of concrete can bear. The task for this dataset is to predict a concrete's compressive strength given its formulation.
###Code
df = pd.read_csv("data/concrete.csv")
df.head()
###Output
_____no_output_____
###Markdown
--- Baseline You can see here the various ingredients going into each variety of concrete. We'll see in a moment how adding some additional synthetic features derived from these can help a model to learn important relationships among them.We'll first establish a baseline by training the model on the un-augmented dataset. This will help us determine whether our new features are actually useful.Establishing baselines like this is good practice at the start of the feature engineering process. A baseline score can help you decide whether your new features are worth keeping, or whether you should discard them and possibly try something else.
###Code
X = df.copy()
y = X.pop("CompressiveStrength")
# Train and score baseline model
baseline = RandomForestRegressor(criterion="mae", random_state=0)
baseline_score = cross_val_score(
baseline, X, y, cv=5, scoring="neg_mean_absolute_error"
)
baseline_score = -1 * baseline_score.mean()
print(f"MAE Baseline Score: {baseline_score:.4}")
###Output
MAE Baseline Score: 8.232
time: 8.81 s (started: 2021-08-15 15:29:08 -07:00)
###Markdown
If you ever cook at home, you might know that the ratio of ingredients in a recipe is usually a better predictor of how the recipe turns out than their absolute amounts. We might reason then that ratios of the features above would be a good predictor of `CompressiveStrength`.The cell below adds three new ratio features to the dataset.
###Code
X = df.copy()
y = X.pop("CompressiveStrength")
# Create synthetic features
X["FCRatio"] = X["FineAggregate"] / X["CoarseAggregate"]
X["AggCmtRatio"] = (X["CoarseAggregate"] + X["FineAggregate"]) / X["Cement"]
X["WtrCmtRatio"] = X["Water"] / X["Cement"]
# Train and score model on dataset with additional ratio features
model = RandomForestRegressor(criterion="mae", random_state=0)
score = cross_val_score(
model, X, y, cv=5, scoring="neg_mean_absolute_error"
)
score = -1 * score.mean()
print(f"MAE Score with Ratio Features: {score:.4}")
###Output
MAE Score with Ratio Features: 7.948
time: 12.9 s (started: 2021-08-15 15:29:17 -07:00)
###Markdown
And sure enough, performance improved! This is evidence that these new ratio features exposed important information to the model that it wasn't detecting before.
###Code
X.head()
###Output
_____no_output_____ |
C4.Classification_SVM/svm_professor.ipynb | ###Markdown
Support Vector Machines Authors: Jesús Cid Sueiro ([email protected]) Jerónimo Arenas García ([email protected])This notebook is a compilation of material taken from several sources: - The [sklearn documentation](href = http://scikit-learn.org/stable/modules/svm.html>)- A [notebook](https://github.com/jakevdp/sklearn_pycon2015/blob/master/notebooks/03.1-Classification-SVMs.ipynb) by [Jake Vanderplas](https://github.com/jakevdp>)- [Wikipedia](https://en.wikipedia.org/wiki/Support_vector_machine) Notebook version: 1.0 (Oct 28, 2015) 1.1 (Oct 27, 2016) 2.0 (Nov 2, 2017) 2.1 (Oct 20, 2018) Changes: v.1.0 - First version v.1.1 - Typo correction and illustrative figures for linear SVM v.2.0 - Compatibility with Python 3 (backcompatible with Python 2.7) v.2.1 - Minor corrections on the notation v.2.2 - Minor equation errors. Reformatted hyperlinks. Restoring broken visualization of images in some Jupyter versions.
###Code
from __future__ import print_function
# To visualize plots in the notebook
%matplotlib inline
# Imported libraries
#import csv
#import random
#import matplotlib
import matplotlib.pyplot as plt
from mpl_toolkits import mplot3d
#import pylab
import numpy as np
#from sklearn.preprocessing import PolynomialFeatures
from sklearn import svm
from sklearn.datasets.samples_generator import make_blobs
from sklearn.datasets.samples_generator import make_circles
from ipywidgets import interact
###Output
_____no_output_____
###Markdown
1. Introduction [Source: [sklearn documentation](http://scikit-learn.org/stable/modules/svm.html) ] Support vector machines (SVMs) are a set of supervised learning methods used for classification, regression and outliers detection.The advantages of support vector machines are:- Effective in high dimensional spaces.- Still effective in cases where number of dimensions is greater than the number of samples.- Uses a subset of training points in the decision function (called support vectors), so it is also memory efficient.- Versatile: different Kernel functions can be specified for the decision function.The disadvantages of support vector machines include:- SVMs do not directly provide probability estimates. 2. Motivating Support Vector Machines [Source: A [notebook](https://github.com/jakevdp/sklearn_pycon2015/blob/master/notebooks/03.1-Classification-SVMs.ipynb) by [Jake Vanderplas](https://github.com/jakevdp>)] Support Vector Machines (SVMs) are a kind of ***discriminative*** classifiers: that is, they draw a boundary between clusters of data without making any explicit assumption about the probability model underlying the data generation process.Let's show a quick example of support vector classification. First we need to create a dataset:
###Code
X, y = make_blobs(n_samples=50, centers=2, random_state=0, cluster_std=0.60)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='copper')
plt.xlabel("$x_0$", fontsize=14)
plt.ylabel("$x_1$", fontsize=14)
plt.axis('equal')
plt.show()
###Output
_____no_output_____
###Markdown
A discriminative classifier attempts to draw a line between the two sets of data. Immediately we see an inconvenience: such a problem is ill-posed! For example, we could come up with several possibilities which perfectly discriminate between the classes in this example:
###Code
xfit = np.linspace(-1, 3.5)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='copper')
for m, b in [(1, 0.65), (0.5, 1.6), (-0.2, 2.9)]:
plt.plot(xfit, m * xfit + b, '-k')
plt.xlim(-1, 3.5);
plt.xlabel("$x_0$", fontsize=14)
plt.ylabel("$x_1$", fontsize=14)
plt.axis('equal')
plt.show()
###Output
_____no_output_____
###Markdown
These are three very different separators which perfectly discriminate between these samples. Depending on which you choose, a new data point will be classified almost entirely differently! How can we improve on this?Support Vector Machines (SVM) select the boundary decision maximizing the ***margin***. The margin of a classifier is defined as twice the maximum signed distance between the decision boundary and the training data. By *signed* we mean that the distance to misclassified samples is counted negatively. Thus, if the classification problem is "separable" (i.e. if there exist a decision boundary with zero errors in the training set), the SVM will choose the zero-error decision boundary that is "as far as possible" from the training data.In summary, what an SVM does is to not only draw a line, but consider the "sample free" region about the line. Here's an example of what it might look like:
###Code
xfit = np.linspace(-1, 3.5)
plt.scatter(X[:,0], X[:,1], c=y, s=50, cmap='copper')
for m, b, d in [(1, 0.65, 0.33), (0.5, 1.6, 0.55), (-0.2, 2.9, 0.2)]:
yfit = m * xfit + b
plt.plot(xfit, yfit, '-k')
plt.fill_between(xfit, yfit-d, yfit+d, edgecolor='none',
color='#AAAAAA', alpha=0.4)
plt.xlim(-1, 3.5)
plt.xlabel("$x_0$", fontsize=14)
plt.ylabel("$x_1$", fontsize=14)
plt.axis('equal')
plt.show()
###Output
_____no_output_____
###Markdown
Notice here that if we want to maximize this width, the middle fit is clearly the best. This is the intuition of the SVM, which optimizes a linear discriminant model in conjunction with a margin representing the perpendicular distance between the datasets. 3. Linear SVM [Source: adapted from [Wikipedia](https://en.wikipedia.org/wiki/Support_vector_machine)] In order to present the SVM in a formal way, consider a training dataset $\mathcal{D} = \left\{ (\mathbf{x}^{(k)}, y^{(k)}) \mid \mathbf{x}^{(k)}\in \Re^N,\, y^{(k)} \in \{-1,1\}, k=0,\ldots, {K-1}\right\}$, where the binary symmetric label $y^{(k)}\in \{-1,1\}$ indicates the class to which the point $\mathbf{x}^{(k)}$ belongs. Each $\mathbf{x}^{(k)}$ is a $p$-dimensional real vector. We want to find the maximum-margin hyperplane that divides the points having $y^{(k)}=1$ from those having $y^{(k)}=-1$. Any hyperplane can be written as the set of points $\mathbf{x}$ satisfying$$\mathbf{w}^\intercal \mathbf{x} - b=0,$$where ${\mathbf{w}}$ denotes the (not necessarily normalized) normal vector to the hyperplane. The parameter $\tfrac{b}{\|\mathbf{w}\|}$ determines the offset of the hyperplane from the origin along the normal vector ${\mathbf{w}}$.If the training data are linearly separable, we can select two parallel hyperplanes in a way that they separate the data and there are no points between them, and then try to maximize their distance. The region bounded by them is called "the margin". These hyperplanes can be described by the equations$$\mathbf{w}^\intercal \mathbf{x} - b=1$$and$$\mathbf{w}^\intercal \mathbf{x} - b=-1.$$Note that the two equations above can represent any two parallel hyperplanes in $\Re^N$. Essentially, the direction of vector $\mathbf{w}$ determines the orientation of the hyperplanes, whereas parameter $b$ and the norm of $\mathbf{w}$ can be used to select their exact location. To compute the distance between the hyperplanes, we can obtain the projection of vector ${\mathbf x}_1 - {\mathbf x}_2$, where ${\mathbf x}_1$ and ${\mathbf x}_2$ are points from each of the hyperplanes, onto a unitary vector orthonormal to the hyperplanes:$$\text{Distance between hyperplanes} = \left[\frac{\mathbf{w}}{\|\mathbf{w}\|}\right]^\top ({\mathbf x}_1 - {\mathbf x}_2) = \frac{\mathbf{w}^\top {\mathbf x}_1 - \mathbf{w}^\top {\mathbf x}_2}{\|\mathbf{w}\|} = \frac{2}{\|\mathbf{w}\|}.$$Therefore, to maximize the distance between the planes we want to minimize $\|\mathbf{w}\|$. As we also have to prevent data points from falling into the margin, we add the following constraints: for each $k$ either\begin{align}\mathbf{w}^\top \mathbf{x}^{(k)} - b &\ge +1, \qquad\text{ if } y^{(k)}=1, \qquad \text{or} \\\mathbf{w}^\top \mathbf{x}^{(k)} - b &\le -1, \qquad\text{ if } y^{(k)}=-1.\end{align}This can be rewritten as:$$y^{(k)}(\mathbf{w}^\top \mathbf{x}^{(k)} - b) \ge 1, \quad \text{ for all } 1 \le k \le K.$$We can put this together to get the optimization problem:$$(\mathbf{w}^*,b^*) = \arg\min_{(\mathbf{w},b)} \|\mathbf{w}\| \\\text{subject to: } y^{(k)}(\mathbf{w}^\top \mathbf{x}^{(k)} - b) \ge 1, \, \text{ for any } k = 0, \dots, {K-1}$$ This optimization problem is difficult to solve because it depends on $\|\mathbf{w}\|$, the norm of $\mathbf{w}$, which involves a square root. 
Fortunately it is possible to alter the minimization objective $\|\mathbf{w}\|$ by substituting it with $\tfrac{1}{2}\|\mathbf{w}\|^2$ (the factor of $\frac{1}{2}$ being used for mathematical convenience) without changing the solution (the minimum of the original and the modified equation have the same $\mathbf{w}$ and $b$):$$(\mathbf{w}^*,b^*) = \arg\min_{(\mathbf{w},b)} \frac{1}{2}\|\mathbf{w}\|^2 \\\text{subject to: } y^{(k)}(\mathbf{w}^\intercal \mathbf{x}^{(k)} - b) \ge 1, \, \text{ for any } k = 0, \dots, {K-1}$$This is a particular case of a *quadratic programming* problem. 3.1. Primal formThe optimization problem stated in the preceding section can be solved by means of a generalization of the Lagrange method of multipliers for inequality constraints, using the so called Karush–Kuhn–Tucker (KKT) multipliers $\boldsymbol{\alpha}$. According to it, the constrained problem can be expressed as$$(\mathbf{w}^*,b^*, \boldsymbol{\alpha}^*) = \arg\min_{\mathbf{w},b } \max_{\boldsymbol{\alpha}\geq 0 } \left\{ \frac{1}{2}\|\mathbf{w}\|^2 - \sum_{k=0}^{K-1}{\alpha^{(k)}\left[y^{(k)}(\mathbf{w}^\intercal \mathbf{x}^{(k)} - b)-1\right]} \right\}$$that is, we look for a *saddle point*.A key result in convex optimization theory is that, for the kind of optimization problems discussed here (see [here](http://www.onmyphd.com/?p=kkt.karush.kuhn.tucker&ckattempt=1), for instance), the *max* and *min* operators are interchangeable, so that$$(\mathbf{w}^*,b^*, \boldsymbol{\alpha}^*) = \arg\max_{\boldsymbol{\alpha}\geq 0 } \min_{\mathbf{w},b } \left\{ \frac{1}{2}\|\mathbf{w}\|^2 - \sum_{k=0}^{K-1}{\alpha^{(k)}\left[y^{(k)}(\mathbf{w}^\intercal \mathbf{x}^{(k)} - b)-1\right]} \right\}$$Note that the inner minimization problem is now quadratic in $\mathbf{w}$ and, thus, the minimum can be found by differentiation:$$\mathbf{w}^* = \sum_{k=0}^{K-1}{\alpha^{(k)} y^{(k)}\mathbf{x}^{(k)}}.$$ 3.1.1. Support VectorsIn view of the optimization problem, we can check that all the points which can be separated as $y^{(k)}(\mathbf{w}^\intercal \mathbf{x}^{(k)} - b) - 1 > 0 $ do not matter since we must set the corresponding $\alpha^{(k)}$ to zero. Therefore, only a few $\alpha^{(k)}$ will be greater than zero. The corresponding $\mathbf{x}^{(k)}$ are known as `support vectors`.It can be seen that the optimum parameter vector $\mathbf{w}^\ast$ can be expressed in terms of the support vectors only:$$\mathbf{w}^* = \sum_{k\in {\cal{S}}_{SV}}{\alpha^{(k)} y^{(k)}\mathbf{x}^{(k)}}.$$where ${\cal{S}}_{SV}$ is the set of indexes associated to support vectors. 3.1.2. The computation of $b$Support vectors lie on the margin and satisfy $y^{(k)}(\mathbf{w}^\intercal \mathbf{x}^{(k)} - b) = 1$. From this condition, we can obtain the value of $b$, since for any support vector:$$\mathbf{w}^\intercal\mathbf{x}^{(k)} - b = \frac{1}{y^{(k)}} = y^{(k)} \iff b = \mathbf{w}^\intercal\mathbf{x}^{(k)} - y^{(k)}$$This estimate of $b$, the centerpoint of the division, depends only on a single pair $y^{(k)}$ and $x^{(k)}$. We may get a more robust estimate of the center by averaging over all of the $N_{SV}$ support vectors, if we believe the population mean is a good estimate of the midpoint, so in practice, $b$ is often computed as:$$b = \frac{1}{N_{SV}} \sum_{\mathbf{x}^{(k)}\in {\cal{S}}_{SV}}{(\mathbf{w}^\intercal\mathbf{x}^{(k)} - y^{(k)})}$$ 3.2. 
Dual formWriting the classification rule in its unconstrained dual form reveals that the *maximum-margin hyperplane* and therefore the classification task is only a function of the *support vectors*, the subset of the training data that lie on the margin.Using the fact that $\|\mathbf{w}\|^2 = \mathbf{w}^\intercal \mathbf{w}$ and substituting $\mathbf{w} = \sum_{k=0}^{K-1}{\alpha^{(k)} y^{(k)}\mathbf{x}^{(k)}}$, we obtain\begin{align}(b^*, \boldsymbol{\alpha}^*) &= \arg\max_{\boldsymbol{\alpha}\geq 0 } \min_b \left\{ \sum_{k=0}^{K-1}\alpha^{(k)} - \frac{1}{2} \sum_{k=0}^{K-1} \sum_{j=0}^{K-1} {\alpha^{(k)} \alpha^{(j)} y^{(k)} y^{(j)} (\mathbf{x}^{(k)})^\intercal\mathbf{x}^{(j)}} + b \sum_{k=0}^{K-1}\alpha^{(k)}y^{(k)}\right\} \end{align}Note that, if $\sum_{k=0}^{K-1}\alpha^{(k)}y^{(k)} \neq 0$ the optimal value of $b$ is $+\infty$ of $-\infty$, and \begin{align}\min_b \left\{\sum_{k=0}^{K-1}\alpha^{(k)} - \frac{1}{2}\sum_{k=0}^{K-1} \sum_{j=0}^{K-1} {\alpha^{(k)} \alpha^{(j)} y^{(k)} y^{(j)} (\mathbf{x}^{(k)})^\intercal\mathbf{x}^{(j)}}+ b \sum_{k=0}^{K-1}\alpha^{(k)}y^{(k)}\right\} = -\infty.\end{align}Therefore, any $\boldsymbol{\alpha}$ satifying $\sum_{k=0}^{K-1}\alpha^{(k)}y^{(k)} \neq 0$ is suboptimal, so that the optimal multipliers must satisfy the condition $\sum_{k=0}^{K-1}\alpha^{(k)}y^{(k)} = 0$. Summarizing, the dual formulation of the optimization problem is$$\boldsymbol{\alpha}^* = \arg\max_{\boldsymbol{\alpha}\geq 0} \sum_{k=0}^{K-1} \alpha^{(k)} - \frac12 \sum_{k,j} \alpha^{(k)} \alpha^{(j)} y^{(k)} y^{(j)} k(\mathbf{x}^{(k)}, \mathbf{x}^{(j)}) \\\text{subject to: } \qquad \sum_{k=0}^{K-1} \alpha^{(k)} y^{(k)} = 0.$$where the *kernel* $k(\cdot)$ is defined by $k(\mathbf{x}^{(k)},\mathbf{x}^{(j)})=(\mathbf{x}^{(k)})^\intercal\mathbf{x}^{(j)}$.Many implementations of the SVM use this dual formulation. They proceed in three steps:1. Solve the dual problem to obtain $\boldsymbol{\alpha}^*$. Usually, only a small number of $\alpha^{*(k)}$ are nonzero. The corresponding values of ${\bf x}^{(k)}$ are called the *support vectors*.2. Compute $\mathbf{w}^* = \sum_{k=0}^{K-1}{\alpha^{*(k)} y^{(k)}\mathbf{x}^{(k)}}$3. Compute $b^* = \frac{1}{N_{SV}} \sum_{\alpha^{*(k)}\neq 0}{(\mathbf{w}^{*\intercal}\mathbf{x}^{(k)} - y^{(k)})}$ 4. Fitting a Support Vector Machine [Source: A [notebook](https://github.com/jakevdp/sklearn_pycon2015/blob/master/notebooks/03.1-Classification-SVMs.ipynb) by [Jake Vanderplas](https://github.com/jakevdp>)] Now we'll fit a Support Vector Machine Classifier to these points.
###Code
clf = svm.SVC(kernel='linear')
clf.fit(X, y)
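# A quick numerical check of the relations from Section 3 (a sketch). sklearn's decision
# function is w^T x + intercept_, so the b used in the notation above corresponds to -intercept_.
w = clf.coef_[0]
b = -clf.intercept_[0]
print('margin width 2/||w|| =', 2 / np.linalg.norm(w))
# dual_coef_ holds alpha_k * y_k (with the labels mapped to +/-1), so w can be rebuilt
# from the support vectors alone:
w_from_sv = clf.dual_coef_ @ clf.support_vectors_
print('w recovered from the dual solution:', np.allclose(w, w_from_sv))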
###Output
_____no_output_____
###Markdown
To better visualize what's happening here, let's create a quick convenience function that will plot SVM decision boundaries for us:
###Code
def plot_svc_decision_function(clf, ax=None):
"""Plot the decision function for a 2D SVC"""
if ax is None:
ax = plt.gca()
x = np.linspace(plt.xlim()[0], plt.xlim()[1], 30)
y = np.linspace(plt.ylim()[0], plt.ylim()[1], 30)
Y, X = np.meshgrid(y, x)
P = np.zeros_like(X)
for i, xi in enumerate(x):
for j, yj in enumerate(y):
P[i, j] = clf.decision_function(np.array([xi, yj]).reshape(1,-1))
# plot the margins
ax.contour(X, Y, P, colors='k',
levels=[-1, 0, 1], alpha=0.5,
linestyles=['--', '-', '--'])
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='copper')
plot_svc_decision_function(clf);
###Output
_____no_output_____
###Markdown
Notice that the dashed lines touch a couple of the points: these points are the *support vectors*. In scikit-learn, these are stored in the ``support_vectors_`` attribute of the classifier:
###Code
plt.scatter(clf.support_vectors_[:, 0], clf.support_vectors_[:, 1],
s=200, marker='s');
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='copper')
plot_svc_decision_function(clf)
###Output
_____no_output_____
###Markdown
Let's use IPython's interact functionality to explore how the distribution of points affects the support vectors and the discriminative fit. (This is only available in IPython 2.0+, and will not work in a static view)
###Code
def plot_svm(N=10):
X, y = make_blobs(n_samples=200, centers=2,
random_state=0, cluster_std=0.60)
X = X[:N]
y = y[:N]
clf = svm.SVC(kernel='linear')
clf.fit(X, y)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='copper')
plt.xlim(-1, 4)
plt.ylim(-1, 6)
plot_svc_decision_function(clf, plt.gca())
plt.scatter(clf.support_vectors_[:, 0], clf.support_vectors_[:, 1],
s=200, facecolors='none')
interact(plot_svm, N=[10, 200])
###Output
_____no_output_____
###Markdown
Notice the unique thing about SVM is that only the support vectors matter: that is, if you moved any of the other points without letting them cross the decision boundaries, they would have no effect on the classification results! 5. Non-separable problems. [Source: adapted from [Wikipedia](https://en.wikipedia.org/wiki/Support_vector_machine)] In 1995, Corinna Cortes and Vladimir N. Vapnik suggested a modified maximum margin idea that allows for mislabeled examples. If there exists no hyperplane that can split the `positive` and `negative` samples, the `Soft Margin` method will choose a hyperplane that splits the examples as cleanly as possible, while still maximizing the distance to the nearest cleanly split examples. The method introduces non-negative slack variables, $\xi^{(k)}$, which measure the degree of misclassification of the data $\mathbf{x}^{(k)}$$$y^{(k)}(\mathbf{w}^\intercal\mathbf{x}^{(k)} - b) \ge 1 - \xi^{(k)} \quad k=0,\ldots, K-1.$$The objective function is then increased by a function which penalizes non-zero $\xi^{(k)}$, and the optimization becomes a trade off between a large margin and a small error penalty. If the penalty function is linear, the optimization problem becomes:$$(\mathbf{w}^*,\mathbf{\xi}^*, b^*) = \arg\min_{\mathbf{w},\mathbf{\xi}, b } \left\{\frac{1}{2} \|\mathbf{w}\|^2 + C \sum_{k=0}^{K-1} \xi^{(k)} \right\} \\\text{subject to: } \quad y^{(k)}(\mathbf{w}^\intercal\mathbf{x}^{(k)} - b) \ge 1 - \xi^{(k)}, \quad \xi^{(k)} \ge 0, \quad k=0,\ldots, K-1.$$ This constraint along with the objective of minimizing $\|\mathbf{w}\|$ can be solved using KKT multipliers as done above. One then has to solve the following problem:$$\arg\min_{\mathbf{w}, \mathbf{\xi}, b } \max_{\boldsymbol{\alpha}, \boldsymbol{\beta} }\left\{ \frac{1}{2}\|\mathbf{w}\|^2 + C \sum_{k=0}^{K-1} \xi^{(k)}- \sum_{k=0}^{K-1} {\alpha^{(k)}\left[y^{(k)}(\mathbf{w}^\intercal \mathbf{x}^{(k)} - b) -1 + \xi^{(k)}\right]}- \sum_{k=0}^{K-1} \beta^{(k)} \xi^{(k)} \right \}\\\text{subject to: } \quad \alpha^{(k)}, \beta^{(k)} \ge 0.$$A similar analysis to that in the separable case can be applied to show that the dual formulation of the optimization problem is$$ \boldsymbol{\alpha}^* = \arg\max_{0 \leq \alpha^{(k)} \leq C} \sum_{k=0}^{K-1} \alpha^{(k)} - \frac12 \sum_{k,j} \alpha^{(k)} \alpha^{(j)} y^{(k)} y^{(j)} k(\mathbf{x}^{(k)}, \mathbf{x}^{(j)}) \\\text{subject to: } \qquad \sum_{k=0}^{K-1} \alpha^{(k)} y^{(k)} = 0.$$Note that the only difference with the separable case is given by the constraints $\alpha^{(k)} \leq C$. 6. Nonlinear classification [Source: adapted from [Wikipedia](https://en.wikipedia.org/wiki/Support_vector_machine)] The original optimal hyperplane algorithm proposed by Vapnik in 1963 was a linear classifier. However, in 1992, Bernhard E. Boser, Isabelle M. Guyon and Vladimir N. Vapnik suggested a way to create nonlinear classifiers by applying the *kernel trick* to maximum-margin hyperplanes. The resulting algorithm is formally similar, except that every dot product is replaced by a nonlinear kernel function. This allows the algorithm to fit the maximum-margin hyperplane in a transformed feature space. The transformation may be nonlinear and the transformed space high dimensional; thus though the classifier is a hyperplane in the high-dimensional feature space, it may be nonlinear in the original input space.The kernel is related to the transform $\phi(\mathbf{x})$ by the equation $k(\mathbf{x}, \mathbf{x}') = \phi(\mathbf{x})^\intercal \phi(\mathbf{x}')$. 
However, note that we do not need to explicitly compute $\phi(\mathbf{x})$, as long as we can express all necessary calculations in terms of the kernel function only, as it is the case for the optimization problem in the dual case. The predictions of the SVM classifier can also be expressed in terms of kernels only, so that we never need to explicitely compute $\phi(\mathbf{x})$.$$\begin{align}\hat y({\mathbf{x}}) & = {\mathbf {w^\ast}}^\intercal \phi(\mathbf{x}) - b^\ast \\ \\& = \left[\sum_{k \in {\cal{S}}_{SV}} \alpha^{(k)^*} y^{(k)} \phi(\mathbf{x}^{(k)})\right]^\intercal {\phi(\mathbf{x})} - b^\ast \\ \\& = - b^\ast + \sum_{k \in {\cal{S}}_{SV}} \alpha^{(k)^*} y^{(k)} k(\mathbf{x}^{(k)}, {\mathbf{x}})\end{align}$$ Some common kernels include:* **Gaussian**: $k(\mathbf{x},\mathbf{x}')=\exp(-\gamma \|\mathbf{x} - \mathbf{x}'\|^2)$, for $\gamma > 0$. Sometimes parametrized using $\gamma=\dfrac{1}{2 \sigma^2}$. This is by far the most widely used kernel.* Polynomial (homogeneous): $k(\mathbf{x},\mathbf{x}')=(\mathbf{x}^\intercal \mathbf{x}')^d$* Polynomial (inhomogeneous): $k(\mathbf{x},\mathbf{x}') = (\mathbf{x}^\intercal \mathbf{x}' + 1)^d$* Hyperbolic tangent: $k(\mathbf{x},\mathbf{x}') = \tanh(\kappa \mathbf{x}^\intercal \mathbf{x}'+c)$, for some (not every) $\kappa > 0$ and $c < 0$. 6.1. Example. [Source: A [notebook](https://github.com/jakevdp/sklearn_pycon2015/blob/master/notebooks/03.1-Classification-SVMs.ipynb) by [Jake Vanderplas](https://github.com/jakevdp>)] Where SVM gets incredibly exciting is when it is used in conjunction with *kernels*.To motivate the need for kernels, let's look at some data which is not linearly separable:
###Code
X, y = make_circles(100, factor=.1, noise=.1)
clf = svm.SVC(kernel='linear').fit(X, y)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='copper')
plot_svc_decision_function(clf);
###Output
_____no_output_____
###Markdown
Clearly, no linear discrimination will ever separate these data.One way we can adjust this is to apply a **kernel**, which is some functional transformation of the input data.For example, one simple model we could use is a **radial basis function**
###Code
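# One hand-built feature: a Gaussian bump centred on the origin, r = exp(-||x||^2),
# which is large for the inner cluster and small for the outer ring.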
r = np.exp(-(X[:, 0] ** 2 + X[:, 1] ** 2))
###Output
_____no_output_____
###Markdown
If we plot this along with our data, we can see the effect of it:
###Code
def plot_3D(elev=30, azim=30):
ax = plt.subplot(projection='3d')
ax.scatter3D(X[:, 0], X[:, 1], r, c=y, s=50, cmap='spring')
ax.view_init(elev=elev, azim=azim)
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_zlabel('r')
interact(plot_3D, elev=[-90, 90], azim=(-180, 180));
###Output
_____no_output_____
###Markdown
We can see that with this additional dimension, the data becomes trivially linearly separable!This is a relatively simple kernel; SVM has a more sophisticated version of this kernel built-in to the process. This is accomplished by using the Gaussian kernel (``kernel='rbf'``), short for *radial basis function*:
###Code
clf = svm.SVC(kernel='rbf', C=10)
clf.fit(X, y)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='copper')
plot_svc_decision_function(clf)
plt.scatter(clf.support_vectors_[:, 0], clf.support_vectors_[:, 1],
s=200, facecolors='none');
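# A sketch verifying the kernel-expansion form of the decision function from Section 6,
# sum_k alpha_k y_k kappa(x_k, x) - b (sklearn stores -b as intercept_). gamma is fixed
# explicitly here because the default value sklearn uses depends on the library version.
from sklearn.metrics.pairwise import rbf_kernel
clf_chk = svm.SVC(kernel='rbf', C=10, gamma=0.5).fit(X, y)
manual = clf_chk.dual_coef_ @ rbf_kernel(clf_chk.support_vectors_, X, gamma=0.5) + clf_chk.intercept_
print('kernel expansion matches decision_function:', np.allclose(manual.ravel(), clf_chk.decision_function(X)))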
###Output
_____no_output_____
###Markdown
Support Vector Machines Authors: Jesús Cid Sueiro ([email protected]) Jerónimo Arenas García ([email protected])This notebook is a compilation of material taken from several sources: - The [sklearn documentation](href = http://scikit-learn.org/stable/modules/svm.html>)- A [notebook](https://github.com/jakevdp/sklearn_pycon2015/blob/master/notebooks/03.1-Classification-SVMs.ipynb) by [Jake Vanderplas](https://github.com/jakevdp>)- [Wikipedia](https://en.wikipedia.org/wiki/Support_vector_machine) Notebook version: 1.0 (Oct 28, 2015) 1.1 (Oct 27, 2016) 2.0 (Nov 2, 2017) 2.1 (Oct 20, 2018) 2.2 (Oct 20, 2019) Changes: v.1.0 - First version v.1.1 - Typo correction and illustrative figures for linear SVM v.2.0 - Compatibility with Python 3 (backcompatible with Python 2.7) v.2.1 - Minor corrections on the notation v.2.2 - Minor equation errors. Reformatted hyperlinks. Restoring broken visualization of images in some Jupyter versions. v.2.3 - Notation revision
###Code
from __future__ import print_function
# To visualize plots in the notebook
%matplotlib inline
# Imported libraries
#import csv
#import random
#import matplotlib
import matplotlib.pyplot as plt
from mpl_toolkits import mplot3d
#import pylab
import numpy as np
#from sklearn.preprocessing import PolynomialFeatures
from sklearn import svm
from sklearn.datasets.samples_generator import make_blobs
from sklearn.datasets.samples_generator import make_circles
from ipywidgets import interact
###Output
_____no_output_____
###Markdown
1. Introduction [Source: [sklearn documentation](http://scikit-learn.org/stable/modules/svm.html) ] Support vector machines (SVMs) are a set of supervised learning methods used for classification, regression and outliers detection.The advantages of support vector machines are:- Effective in high dimensional spaces.- Still effective in cases where number of dimensions is greater than the number of samples.- Uses a subset of training points in the decision function (called support vectors), so it is also memory efficient.- Versatile: different Kernel functions can be specified for the decision function.The disadvantages of support vector machines include:- SVMs do not directly provide probability estimates. 2. Motivating Support Vector Machines [Source: A [notebook](https://github.com/jakevdp/sklearn_pycon2015/blob/master/notebooks/03.1-Classification-SVMs.ipynb) by [Jake Vanderplas](https://github.com/jakevdp>)] Support Vector Machines (SVMs) are a kind of ***discriminative*** classifiers: that is, they draw a boundary between clusters of data without making any explicit assumption about the probability model underlying the data generation process.Let's show a quick example of support vector classification. First we need to create a dataset:
###Code
X, y = make_blobs(n_samples=50, centers=2, random_state=0, cluster_std=0.60)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='copper')
plt.xlabel("$x_0$", fontsize=14)
plt.ylabel("$x_1$", fontsize=14)
plt.axis('equal')
plt.show()
###Output
_____no_output_____
###Markdown
A discriminative classifier attempts to draw a line between the two sets of data. Immediately we see an inconvenience: such a problem is ill-posed! For example, we could come up with several possibilities which perfectly discriminate between the classes in this example:
###Code
xfit = np.linspace(-1, 3.5)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='copper')
for m, b in [(1, 0.65), (0.5, 1.6), (-0.2, 2.9)]:
plt.plot(xfit, m * xfit + b, '-k')
plt.xlim(-1, 3.5);
plt.xlabel("$x_0$", fontsize=14)
plt.ylabel("$x_1$", fontsize=14)
plt.axis('equal')
plt.show()
###Output
_____no_output_____
###Markdown
These are three very different separators which perfectly discriminate between these samples. Depending on which you choose, a new data point will be classified almost entirely differently! How can we improve on this?Support Vector Machines (SVM) select the boundary decision maximizing the ***margin***. The margin of a classifier is defined as twice the maximum signed distance between the decision boundary and the training data. By *signed* we mean that the distance to misclassified samples is counted negatively. Thus, if the classification problem is "separable" (i.e. if there exist a decision boundary with zero errors in the training set), the SVM will choose the zero-error decision boundary that is "as far as possible" from the training data.In summary, what an SVM does is to not only draw a line, but consider the "sample free" region about the line. Here's an example of what it might look like:
###Code
xfit = np.linspace(-1, 3.5)
plt.scatter(X[:,0], X[:,1], c=y, s=50, cmap='copper')
for m, b, d in [(1, 0.65, 0.33), (0.5, 1.6, 0.55), (-0.2, 2.9, 0.2)]:
yfit = m * xfit + b
plt.plot(xfit, yfit, '-k')
plt.fill_between(xfit, yfit-d, yfit+d, edgecolor='none',
color='#AAAAAA', alpha=0.4)
plt.xlim(-1, 3.5)
plt.xlabel("$x_0$", fontsize=14)
plt.ylabel("$x_1$", fontsize=14)
plt.axis('equal')
plt.show()
###Output
_____no_output_____
###Markdown
Notice here that if we want to maximize this width, the middle fit is clearly the best. This is the intuition of the SVM, which optimizes a linear discriminant model in conjunction with a margin representing the perpendicular distance between the datasets. 3. Linear SVM [Source: adapted from [Wikipedia](https://en.wikipedia.org/wiki/Support_vector_machine)] In order to present the SVM in a formal way, consider a training dataset $\mathcal{D} = \left\{ (\mathbf{x}_k, y_k) \mid \mathbf{x}_k\in \Re^M,\, y_k \in \{-1,1\}, k=0,\ldots, {K-1}\right\}$, where the binary symmetric label $y_k\in \{-1,1\}$ indicates the class to which the point $\mathbf{x}_k$ belongs. Each $\mathbf{x}_k$ is an $M$-dimensional real vector. We want to find the maximum-margin hyperplane that divides the points having $y_k=1$ from those having $y_k=-1$. Any hyperplane can be written as the set of points $\mathbf{x}$ satisfying$$\mathbf{w}^\intercal \mathbf{x} - b=0,$$where ${\mathbf{w}}$ denotes the (not necessarily normalized) normal vector to the hyperplane. The parameter $\tfrac{b}{\|\mathbf{w}\|}$ determines the offset of the hyperplane from the origin along the normal vector ${\mathbf{w}}$.If the training data are linearly separable, we can select two parallel hyperplanes in a way that they separate the data and there are no points between them, and then try to maximize their distance. The region bounded by them is called "the margin". These hyperplanes can be described by the equations$$\mathbf{w}^\intercal \mathbf{x} - b=1$$and$$\mathbf{w}^\intercal \mathbf{x} - b=-1.$$Note that the two equations above can represent any two parallel hyperplanes in $\Re^M$. Essentially, the direction of vector $\mathbf{w}$ determines the orientation of the hyperplanes, whereas parameter $b$ and the norm of $\mathbf{w}$ can be used to select their exact location. To compute the distance between the hyperplanes, we can obtain the projection of vector ${\mathbf x}_1 - {\mathbf x}_2$, where ${\mathbf x}_1$ and ${\mathbf x}_2$ are points from each of the hyperplanes, onto a unitary vector orthonormal to the hyperplanes:$$\text{Distance between hyperplanes} = \left[\frac{\mathbf{w}}{\|\mathbf{w}\|}\right]^\top ({\mathbf x}_1 - {\mathbf x}_2) = \frac{\mathbf{w}^\top {\mathbf x}_1 - \mathbf{w}^\top {\mathbf x}_2}{\|\mathbf{w}\|} = \frac{2}{\|\mathbf{w}\|}.$$Therefore, to maximize the distance between the planes we want to minimize $\|\mathbf{w}\|$. As we also have to prevent data points from falling into the margin, we add the following constraints: for each $k$ either\begin{align}\mathbf{w}^\top \mathbf{x}_k - b &\ge +1, \qquad\text{ if }\;\;y_k=1, \qquad \text{or} \\\mathbf{w}^\top \mathbf{x}_k - b &\le -1, \qquad\text{ if }\;\;y_k=-1.\end{align}This can be rewritten as:$$y_k(\mathbf{w}^\top \mathbf{x}_k - b) \ge 1, \quad \text{ for all } 0 \le k \le K-1.$$We can put this together to get the optimization problem:$$(\mathbf{w}^*,b^*) = \arg\min_{(\mathbf{w},b)} \|\mathbf{w}\| \\\text{subject to: } y_k(\mathbf{w}^\top \mathbf{x}_k - b) \ge 1, \, \text{ for any } k = 0, \dots, {K-1}$$ This optimization problem is difficult to solve because it depends on $\|\mathbf{w}\|$, the norm of $\mathbf{w}$, which involves a square root. 
Fortunately it is possible to alter the minimization objective $\|\mathbf{w}\|$ by substituting it with $\tfrac{1}{2}\|\mathbf{w}\|^2$ (the factor of $\frac{1}{2}$ being used for mathematical convenience) without changing the solution (the minimum of the original and the modified equation have the same $\mathbf{w}$ and $b$):$$(\mathbf{w}^*,b^*) = \arg\min_{(\mathbf{w},b)} \frac{1}{2}\|\mathbf{w}\|^2 \\\text{subject to: } y_k(\mathbf{w}^\top \mathbf{x}_k - b) \ge 1, \, \text{ for any } k = 0, \dots, {K-1}$$This is a particular case of a *quadratic programming* problem. 3.1. Primal formThe optimization problem stated in the preceding section can be solved by means of a generalization of the Lagrange method of multipliers for inequality constraints, using the so called Karush–Kuhn–Tucker (KKT) multipliers $\boldsymbol{\alpha}$. According to it, the constrained problem can be expressed as$$(\mathbf{w}^*,b^*, \boldsymbol{\alpha}^*) = \arg\min_{\mathbf{w},b } \max_{\boldsymbol{\alpha}\geq 0 } \left\{ \frac{1}{2}\|\mathbf{w}\|^2 - \sum_{k=0}^{K-1}{\alpha_k\left[y_k(\mathbf{w}^\top \mathbf{x}_k - b)-1\right]} \right\}$$that is, we look for a *saddle point*.A key result in convex optimization theory is that, for the kind of optimization problems discussed here (see [here](http://www.onmyphd.com/?p=kkt.karush.kuhn.tucker&ckattempt=1), for instance), the *max* and *min* operators are interchangeable, so that$$(\mathbf{w}^*,b^*, \boldsymbol{\alpha}^*) = \arg\max_{\boldsymbol{\alpha}\geq 0 } \min_{\mathbf{w},b } \left\{ \frac{1}{2}\|\mathbf{w}\|^2 - \sum_{k=0}^{K-1}{\alpha_k\left[y_k(\mathbf{w}^\top \mathbf{x}_k - b)-1\right]} \right\}$$Note that the inner minimization problem is now quadratic in $\mathbf{w}$ and, thus, the minimum can be found by differentiation:$$\mathbf{w}^* = \sum_{k=0}^{K-1}{\alpha_k y_k\mathbf{x}_k}.$$ 3.1.1. Support VectorsIn view of the optimization problem, we can check that all the points which can be separated as $y_k(\mathbf{w}^\top \mathbf{x}_k - b) - 1 > 0 $ do not matter since we must set the corresponding $\alpha_k$ to zero. Therefore, only a few $\alpha_k$ will be greater than zero. The corresponding $\mathbf{x}_k$ are known as `support vectors`.It can be seen that the optimum parameter vector $\mathbf{w}^\ast$ can be expressed in terms of the support vectors only:$$\mathbf{w}^* = \sum_{k\in {\cal{S}}_{SV}}{\alpha_k y_k\mathbf{x}_k}.$$where ${\cal{S}}_{SV}$ is the set of indexes associated to support vectors. 3.1.2. The computation of $b$Support vectors lie on the margin and satisfy $y_k(\mathbf{w}^\top \mathbf{x}_k - b) = 1$. From this condition, we can obtain the value of $b$, since for any support vector:$$\mathbf{w}^\top\mathbf{x}_k - b = \frac{1}{y_k} = y_k \iff b = \mathbf{w}^\top\mathbf{x}_k - y_k$$This estimate of $b$, the centerpoint of the division, depends only on a single pair $y_k$ and $x_k$. We may get a more robust estimate of the center by averaging over all of the $N_{SV}$ support vectors, if we believe the population mean is a good estimate of the midpoint, so in practice, $b$ is often computed as:$$b = \frac{1}{N_{SV}} \sum_{\mathbf{x}_k\in {\cal{S}}_{SV}}{(\mathbf{w}^\top\mathbf{x}_k - y_k)}$$ 3.2. 
Dual formWriting the classification rule in its unconstrained dual form reveals that the *maximum-margin hyperplane* and therefore the classification task is only a function of the *support vectors*, the subset of the training data that lie on the margin.Using the fact that $\|\mathbf{w}\|^2 = \mathbf{w}^\top \mathbf{w}$ and substituting $\mathbf{w} = \sum_{k=0}^{K-1}{\alpha_k y_k\mathbf{x}_k}$, we obtain\begin{align}(b^*, \boldsymbol{\alpha}^*) &= \arg\max_{\boldsymbol{\alpha}\geq 0 } \min_b \left\{ \sum_{k=0}^{K-1}\alpha_k - \frac{1}{2} \sum_{k=0}^{K-1} \sum_{j=0}^{K-1} {\alpha_k \alpha_j y_k y_j \mathbf{x}_k^\top\mathbf{x}_j} + b \sum_{k=0}^{K-1}\alpha_k y_k\right\} \end{align}Note that, if $\sum_{k=0}^{K-1}\alpha_k y_k \neq 0$ the optimal value of $b$ is $+\infty$ of $-\infty$, and \begin{align}\min_b \left\{\sum_{k=0}^{K-1}\alpha_k - \frac{1}{2}\sum_{k=0}^{K-1} \sum_{j=0}^{K-1} {\alpha_k \alpha_j y_k y_j \mathbf{x}_k^\top\mathbf{x}_j}+ b \sum_{k=0}^{K-1}\alpha_k y_k\right\} = -\infty.\end{align}Therefore, any $\boldsymbol{\alpha}$ satifying $\sum_{k=0}^{K-1}\alpha_k y_k \neq 0$ is suboptimal, so that the optimal multipliers must satisfy the condition $\sum_{k=0}^{K-1}\alpha_k y_k = 0$. Summarizing, the dual formulation of the optimization problem is$$\boldsymbol{\alpha}^* = \arg\max_{\boldsymbol{\alpha}\geq 0} \sum_{k=0}^{K-1} \alpha_k - \frac12 \sum_{k,j} \alpha_k \alpha_j y_k y_j\;\kappa(\mathbf{x}_k, \mathbf{x}_j) \\\text{subject to: } \qquad \sum_{k=0}^{K-1} \alpha_k y_k = 0.$$where the *kernel* $\kappa(\cdot)$ is defined by $\kappa(\mathbf{x}_k,\mathbf{x}_j)=\mathbf{x}_k^\top\mathbf{x}_j$.Many implementations of the SVM use this dual formulation. They proceed in three steps:1. Solve the dual problem to obtain $\boldsymbol{\alpha}^*$. Usually, only a small number of $\alpha_k^*$ are nonzero. The corresponding values of ${\bf x}_k$ are called the *support vectors*.2. Compute $\mathbf{w}^* = \sum_{k=0}^{K-1} \alpha_k^* y_k\mathbf{x}_k$3. Compute $b^* = \frac{1}{N_{SV}} \sum_{\alpha_k^*\neq 0}{(\mathbf{w}^{*\top}\mathbf{x}_k - y_k)}$ 4. Fitting a Support Vector Machine [Source: A [notebook](https://github.com/jakevdp/sklearn_pycon2015/blob/master/notebooks/03.1-Classification-SVMs.ipynb) by [Jake Vanderplas](https://github.com/jakevdp>)] Now we'll fit a Support Vector Machine Classifier to these points.
###Code
clf = svm.SVC(kernel='linear')
clf.fit(X, y)
###Output
_____no_output_____
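###Markdown
As a quick sanity check of the relations derived above, scikit-learn's fitted ``SVC`` exposes ``dual_coef_`` (the products $\alpha_k y_k$ for the support vectors) and ``support_vectors_``, so $\mathbf{w}^*$ can be rebuilt as a weighted sum of support vectors and compared with ``coef_``. This is only an illustrative sketch (the ``_chk`` names are introduced here); note that scikit-learn writes the hyperplane as $\mathbf{w}^\top\mathbf{x} + \text{intercept} = 0$, so ``intercept_`` plays the role of $-b$ in the notation used in this notebook.
###Code
# Illustrative check: rebuild w* from the dual coefficients of a freshly fitted SVC.
import numpy as np
from sklearn import svm
from sklearn.datasets import make_blobs

X_chk, y_chk = make_blobs(n_samples=50, centers=2, random_state=0, cluster_std=0.60)
clf_chk = svm.SVC(kernel='linear', C=1e6).fit(X_chk, y_chk)   # large C approximates the hard margin

w_manual = clf_chk.dual_coef_ @ clf_chk.support_vectors_      # sum_k alpha_k y_k x_k
print("w from dual coefficients:", w_manual.ravel())
print("w from sklearn coef_:    ", clf_chk.coef_.ravel())
print("margin width 2/||w||:    ", 2 / np.linalg.norm(clf_chk.coef_))
###Output
_____no_output_____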
###Markdown
To better visualize what's happening here, let's create a quick convenience function that will plot SVM decision boundaries for us:
###Code
def plot_svc_decision_function(clf, ax=None):
"""Plot the decision function for a 2D SVC"""
if ax is None:
ax = plt.gca()
x = np.linspace(plt.xlim()[0], plt.xlim()[1], 30)
y = np.linspace(plt.ylim()[0], plt.ylim()[1], 30)
Y, X = np.meshgrid(y, x)
P = np.zeros_like(X)
for i, xi in enumerate(x):
for j, yj in enumerate(y):
P[i, j] = clf.decision_function(np.array([xi, yj]).reshape(1,-1))
# plot the margins
ax.contour(X, Y, P, colors='k',
levels=[-1, 0, 1], alpha=0.5,
linestyles=['--', '-', '--'])
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='copper')
plot_svc_decision_function(clf);
###Output
_____no_output_____
###Markdown
Notice that the dashed lines touch a couple of the points: these points are the *support vectors*. In scikit-learn, these are stored in the ``support_vectors_`` attribute of the classifier:
###Code
plt.scatter(clf.support_vectors_[:, 0], clf.support_vectors_[:, 1],
s=200, marker='s');
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='copper')
plot_svc_decision_function(clf)
###Output
_____no_output_____
###Markdown
Let's use IPython's interact functionality to explore how the distribution of points affects the support vectors and the discriminative fit.
###Code
def plot_svm(N=10):
X, y = make_blobs(n_samples=200, centers=2,
random_state=0, cluster_std=0.60)
X = X[:N]
y = y[:N]
clf = svm.SVC(kernel='linear')
clf.fit(X, y)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='copper')
plt.xlim(-1, 4)
plt.ylim(-1, 6)
plot_svc_decision_function(clf, plt.gca())
plt.scatter(clf.support_vectors_[:, 0], clf.support_vectors_[:, 1],
s=200, facecolors='none')
interact(plot_svm, N=[10, 200])
###Output
_____no_output_____
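###Markdown
The point made in the next cell, that only the support vectors determine the fit, can also be checked directly. The sketch below (illustrative only; the ``_sv`` names are introduced here) refits the classifier on its support vectors alone and should recover essentially the same $\mathbf{w}$ and $b$.
###Code
# Illustrative check: refit on the support vectors only and compare the boundaries.
from sklearn import svm
from sklearn.datasets import make_blobs

X_sv, y_sv = make_blobs(n_samples=50, centers=2, random_state=0, cluster_std=0.60)
clf_full = svm.SVC(kernel='linear').fit(X_sv, y_sv)
idx = clf_full.support_                                   # indices of the support vectors
clf_sv_only = svm.SVC(kernel='linear').fit(X_sv[idx], y_sv[idx])

print("w, intercept (all points):          ", clf_full.coef_.ravel(), clf_full.intercept_)
print("w, intercept (support vectors only):", clf_sv_only.coef_.ravel(), clf_sv_only.intercept_)
###Output
_____no_output_____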
###Markdown
Notice the unique thing about SVM is that only the support vectors matter: that is, if you moved any of the other points without letting them cross the decision boundaries, they would have no effect on the classification results! 5. Non-separable problems. [Source: adapted from [Wikipedia](https://en.wikipedia.org/wiki/Support_vector_machine)] In 1995, Corinna Cortes and Vladimir N. Vapnik suggested a modified maximum margin idea that allows for mislabeled examples. If there exists no hyperplane that can split the `positive` and `negative` samples, the `Soft Margin` method will choose a hyperplane that splits the examples as cleanly as possible, while still maximizing the distance to the nearest cleanly split examples. The method introduces non-negative slack variables, $\xi_k$, which measure the degree of misclassification of the data $\mathbf{x}_k$$$y_k(\mathbf{w}^\top\mathbf{x}_k - b) \ge 1 - \xi_k \quad k=0,\ldots, K-1.$$The objective function is then increased by a function which penalizes non-zero $\xi_k$, and the optimization becomes a trade off between a large margin and a small error penalty. If the penalty function is linear, the optimization problem becomes:$$(\mathbf{w}^*,\mathbf{\xi}^*, b^*) = \arg\min_{\mathbf{w},\mathbf{\xi}, b } \left\{\frac{1}{2} \|\mathbf{w}\|^2 + C \sum_{k=0}^{K-1} \xi_k \right\} \\\text{subject to: } \quad y_k(\mathbf{w}^\intercal\mathbf{x}_k - b) \ge 1 - \xi_k, \quad \xi_k \ge 0, \quad k=0,\ldots, K-1.$$ This constraint along with the objective of minimizing $\|\mathbf{w}\|$ can be solved using KKT multipliers as done above. One then has to solve the following problem:$$\arg\min_{\mathbf{w}, \mathbf{\xi}, b } \max_{\boldsymbol{\alpha}, \boldsymbol{\beta} }\left\{ \frac{1}{2}\|\mathbf{w}\|^2 + C \sum_{k=0}^{K-1} \xi_k- \sum_{k=0}^{K-1} {\alpha_k\left[y_k(\mathbf{w}^\top \mathbf{x}_k - b) -1 + \xi_k\right]}- \sum_{k=0}^{K-1} \beta_k \xi_k \right \}\\\text{subject to: } \quad \alpha_k, \beta_k \ge 0.$$A similar analysis to that in the separable case can be applied to show that the dual formulation of the optimization problem is$$ \boldsymbol{\alpha}^* = \arg\max_{0 \leq \alpha_k \leq C} \sum_{k=0}^{K-1} \alpha_k - \frac12 \sum_{k,j} \alpha_k \alpha_j y_k y_j \;\kappa(\mathbf{x}_k, \mathbf{x}_j) \\\text{subject to: } \qquad \sum_{k=0}^{K-1} \alpha_k y_k = 0.$$Note that the only difference with the separable case is given by the constraints $\alpha_k \leq C$. 6. Nonlinear classification [Source: adapted from [Wikipedia](https://en.wikipedia.org/wiki/Support_vector_machine)] The original optimal hyperplane algorithm proposed by Vapnik in 1963 was a linear classifier. However, in 1992, Bernhard E. Boser, Isabelle M. Guyon and Vladimir N. Vapnik suggested a way to create nonlinear classifiers by applying the *kernel trick* to maximum-margin hyperplanes. The resulting algorithm is formally similar, except that every dot product is replaced by a nonlinear kernel function. This allows the algorithm to fit the maximum-margin hyperplane in a transformed feature space. The transformation may be nonlinear and the transformed space high dimensional; thus though the classifier is a hyperplane in the high-dimensional feature space, it may be nonlinear in the original input space.The kernel is related to the transform $\phi(\mathbf{x})$ by the equation $\kappa(\mathbf{x}, \mathbf{x}') = \phi(\mathbf{x})^\top \phi(\mathbf{x}')$. 
However, note that we do not need to explicitly compute $\phi(\mathbf{x})$, as long as we can express all necessary calculations in terms of the kernel function only, as it is the case for the optimization problem in the dual case. The predictions of the SVM classifier can also be expressed in terms of kernels only, so that we never need to explicitely compute $\phi(\mathbf{x})$.$$\begin{align}\hat y({\mathbf{x}}) & = {\mathbf {w^\ast}}^\top \phi(\mathbf{x}) - b^\ast \\ \\& = \left[\sum_{k \in {\cal{S}}_{SV}} \alpha_k^* y_k \phi(\mathbf{x}_k)\right]^\top {\phi(\mathbf{x})} - b^\ast \\ \\& = - b^\ast + \sum_{k \in {\cal{S}}_{SV}} \alpha_k^* y_k \; \kappa(\mathbf{x}_k, {\mathbf{x}})\end{align}$$ Some common kernels include:* **Gaussian**: $\kappa(\mathbf{x},\mathbf{x}')=\exp(-\gamma \|\mathbf{x} - \mathbf{x}'\|^2)$, for $\gamma > 0$. Sometimes parametrized using $\gamma=\dfrac{1}{2 \sigma^2}$. This is by far the most widely used kernel.* Polynomial (homogeneous): $\kappa(\mathbf{x},\mathbf{x}')=(\mathbf{x}^\top \mathbf{x}')^d$* Polynomial (inhomogeneous): $\kappa(\mathbf{x},\mathbf{x}') = (\mathbf{x}^\top \mathbf{x}' + 1)^d$* Hyperbolic tangent: $\kappa(\mathbf{x},\mathbf{x}') = \tanh(\gamma \mathbf{x}^\top \mathbf{x}'+c)$, for some (not every) $\gamma > 0$ and $c < 0$. 6.1. Example. [Source: A [notebook](https://github.com/jakevdp/sklearn_pycon2015/blob/master/notebooks/03.1-Classification-SVMs.ipynb) by [Jake Vanderplas](https://github.com/jakevdp>)] Where SVM gets incredibly exciting is when it is used in conjunction with *kernels*.To motivate the need for kernels, let's look at some data which is not linearly separable:
###Code
X, y = make_circles(100, factor=.1, noise=.1)
clf = svm.SVC(kernel='linear').fit(X, y)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='copper')
plot_svc_decision_function(clf);
###Output
_____no_output_____
###Markdown
Clearly, no linear discrimination will ever separate these data.One way we can adjust this is to apply a **kernel**, which is some functional transformation of the input data.For example, one simple model we could use is a **radial basis function**
###Code
r = np.exp(-(X[:, 0] ** 2 + X[:, 1] ** 2))
###Output
_____no_output_____
###Markdown
If we plot this along with our data, we can see the effect of it:
###Code
def plot_3D(elev=30, azim=30):
ax = plt.subplot(projection='3d')
ax.scatter3D(X[:, 0], X[:, 1], r, c=y, s=50, cmap='spring')
ax.view_init(elev=elev, azim=azim)
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_zlabel('r')
interact(plot_3D, elev=[-90, 90], azim=(-180, 180));
###Output
_____no_output_____
###Markdown
We can see that with this additional dimension, the data becomes trivially linearly separable!This is a relatively simple kernel; SVM has a more sophisticated version of this kernel built-in to the process. This is accomplished by using the Gaussian kernel (``kernel='rbf'``), short for *radial basis function*:
###Code
clf = svm.SVC(kernel='rbf', C=10)
clf.fit(X, y)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='copper')
plot_svc_decision_function(clf)
plt.scatter(clf.support_vectors_[:, 0], clf.support_vectors_[:, 1],
s=200, facecolors='none');
###Output
_____no_output_____
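###Markdown
As a further illustrative check, the kernelized prediction formula of Section 6 can be reproduced by hand: ``dual_coef_`` holds $\alpha_k y_k$ and ``support_vectors_`` the corresponding $\mathbf{x}_k$, so the decision function is a kernel-weighted sum over the support vectors plus the intercept. The sketch below fixes ``gamma`` explicitly only so the same value can be passed to ``rbf_kernel``.
###Code
# Illustrative check: recompute the RBF SVC decision function from its dual parameters.
import numpy as np
from sklearn import svm
from sklearn.datasets import make_circles
from sklearn.metrics.pairwise import rbf_kernel

X_rbf, y_rbf = make_circles(100, factor=.1, noise=.1)
gamma = 0.5
clf_rbf = svm.SVC(kernel='rbf', gamma=gamma, C=10).fit(X_rbf, y_rbf)

K = rbf_kernel(X_rbf[:5], clf_rbf.support_vectors_, gamma=gamma)   # kappa(x, x_k)
manual = K @ clf_rbf.dual_coef_.ravel() + clf_rbf.intercept_       # sum_k alpha_k y_k kappa(x_k, x) + intercept
print(np.allclose(manual, clf_rbf.decision_function(X_rbf[:5])))   # expected: True
###Output
_____no_output_____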
###Markdown
Support Vector Machines Notebook version: 1.0 (Oct 28, 2015) 1.1 (Oct 27, 2016) 2.0 (Nov 2, 2017) Authors: Jesús Cid Sueiro ([email protected]) Jerónimo Arenas García ([email protected]) Changes: v.1.0 - First version v.1.1 - Typo correction and illustrative figures for linear SVM v.2.0 - Compatibility with Python 3 (backcompatible with Python 2.7) This notebook is a compilation of material taken from several sources: - The sklearn documentation - A notebook by Jake Vanderplas- Wikipedia
###Code
from __future__ import print_function
# To visualize plots in the notebook
%matplotlib inline
# Imported libraries
#import csv
#import random
#import matplotlib
import matplotlib.pyplot as plt
from mpl_toolkits import mplot3d
#import pylab
import numpy as np
#from sklearn.preprocessing import PolynomialFeatures
from sklearn import svm
from sklearn.datasets import make_blobs
from sklearn.datasets import make_circles
from ipywidgets import interact
###Output
_____no_output_____
###Markdown
1. Introduction [Source: sklearn documentation ] Support vector machines (SVMs) are a set of supervised learning methods used for classification, regression and outliers detection.The advantages of support vector machines are:- Effective in high dimensional spaces.- Still effective in cases where number of dimensions is greater than the number of samples.- Uses a subset of training points in the decision function (called support vectors), so it is also memory efficient.- Versatile: different Kernel functions can be specified for the decision function.The disadvantages of support vector machines include:- SVMs do not directly provide probability estimates. 2. Motivating Support Vector Machines [Source: notebook by Jake Vanderplas] Support Vector Machines (SVMs) are a kind of ***discriminative*** classifiers: that is, they draw a boundary between clusters of data without making any explicit assumption about the probability model underlying the data generation process.Let's show a quick example of support vector classification. First we need to create a dataset:
###Code
X, y = make_blobs(n_samples=50, centers=2, random_state=0, cluster_std=0.60)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='copper')
plt.xlabel("$x_0$", fontsize=14)
plt.ylabel("$x_1$", fontsize=14)
plt.axis('equal')
plt.show()
###Output
_____no_output_____
###Markdown
A discriminative classifier attempts to draw a line between the two sets of data. Immediately we see an inconvenience: such problem is ill-posed! For example, we could come up with several possibilities which perfectly discriminate between the classes in this example:
###Code
xfit = np.linspace(-1, 3.5)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='copper')
for m, b in [(1, 0.65), (0.5, 1.6), (-0.2, 2.9)]:
plt.plot(xfit, m * xfit + b, '-k')
plt.xlim(-1, 3.5);
plt.xlabel("$x_0$", fontsize=14)
plt.ylabel("$x_1$", fontsize=14)
plt.axis('equal')
plt.show()
###Output
_____no_output_____
###Markdown
These are three very different separators which perfectly discriminate between these samples. Depending on which you choose, a new data point will be classified almost entirely differently! How can we improve on this?Support Vector Machines (SVM) select the boundary decision maximizing the ***margin***. The margin of a classifier is defined as twice the maximum signed distance between the decision boundary and the training data. By *signed* we mean that the distance to misclassified samples is counted negatively. Thus, if the classification problem is "separable" (i.d. if there exist a decision boundary with zero errors in the training set), the SVM will choose the zero-error decision boundary that is "as far as possible" from the training data.In summary, what an SVM does is to not only draw a line, but consider the "sample free" region about the line. Here's an example of what it might look like:
###Code
xfit = np.linspace(-1, 3.5)
plt.scatter(X[:,0], X[:,1], c=y, s=50, cmap='copper')
for m, b, d in [(1, 0.65, 0.33), (0.5, 1.6, 0.55), (-0.2, 2.9, 0.2)]:
yfit = m * xfit + b
plt.plot(xfit, yfit, '-k')
plt.fill_between(xfit, yfit-d, yfit+d, edgecolor='none',
color='#AAAAAA', alpha=0.4)
plt.xlim(-1, 3.5)
plt.xlabel("$x_0$", fontsize=14)
plt.ylabel("$x_1$", fontsize=14)
plt.axis('equal')
plt.show()
###Output
_____no_output_____
###Markdown
Notice here that if we want to maximize this width, the middle fit is clearly the best. This is the intuition of the SVM, which optimizes a linear discriminant model in conjunction with a margin representing the perpendicular distance between the datasets. 3. Linear SVM [Source: adapted from Wikipedia ] In order to present the SVM in a formal way, consider a training dataset $\mathcal{S} = \left\{ (\mathbf{x}^{(k)}, y^{(k)}) \mid \mathbf{x}^{(k)}\in \Re^N,\, y^{(k)} \in \{-1,1\}, k=1,\ldots, K\right\}$, where the binary symmetric label $y^{(k)}\in \{-1,1\}$ indicates the class to which the point $\mathbf{x}^{(k)}$ belongs. Each $\mathbf{x}^{(k)}$ is a $p$-dimensional real vector. We want to find the maximum-margin hyperplane that divides the points having $y^{(k)}=1$ from those having $y^{(k)}=-1$. Any hyperplane can be written as the set of points $\mathbf{x}$ satisfying$$\mathbf{w}^\intercal \mathbf{x} - b=0,$$where ${\mathbf{w}}$ denotes the (not necessarily normalized) normal vector to the hyperplane. The parameter $\tfrac{b}{\|\mathbf{w}\|}$ determines the offset of the hyperplane from the origin along the normal vector ${\mathbf{w}}$.If the training data are linearly separable, we can select two parallel hyperplanes in a way that they separate the data and there are no points between them, and then try to maximize their distance. The region bounded by them is called "the margin". These hyperplanes can be described by the equations$$\mathbf{w}^\intercal \mathbf{x} - b=1$$and$$\mathbf{w}^\intercal \mathbf{x} - b=-1.$$Note that the two equations above can represent any two parallel hyperplanes in $\Re^N$. Essentially, the direction of vector $\mathbf{w}$ determines the orientation of the hyperplanes, whereas parameter $b$ and the norm of $\mathbf{w}$ can be used to select their exact location. To compute the distance between the hyperplanes, we can obtain the projection of vector ${\mathbf x}_1 - {\mathbf x}_2$, where ${\mathbf x}_1$ and ${\mathbf x}_2$ are points from each of the hyperplanes, onto a unitary vector orthonormal to the hyperplanes:$$\text{Distance between hyperplanes} = \left[\frac{\mathbf{w}}{\|\mathbf{w}\|}\right]^\intercal ({\mathbf x}_1 - {\mathbf x}_2) = \frac{\mathbf{w}^\intercal {\mathbf x}_1 - \mathbf{w}^\intercal {\mathbf x}_2}{\|\mathbf{w}\|} = \frac{2}{\|\mathbf{w}\|}.$$Therefore, to maximize the distance between the planes we want to minimize $\|\mathbf{w}\|$. As we also have to prevent data points from falling into the margin, we add the following constraints: for each $k$ either\begin{align}\mathbf{w}^\intercal \mathbf{x}^{(k)} - b &\ge +1, \qquad\text{ if } y^{(k)}=1, \qquad \text{or} \\\mathbf{w}^\intercal \mathbf{x}^{(k)} - b &\le -1, \qquad\text{ if } y^{(k)}=-1.\end{align}This can be rewritten as:$$y^{(k)}(\mathbf{w}^\intercal \mathbf{x}^{(k)} - b) \ge 1, \quad \text{ for all } 1 \le k \le K.$$We can put this together to get the optimization problem:$$(\mathbf{w}^*,b^*) = \arg\min_{(\mathbf{w},b)} \|\mathbf{w}\| \\\text{subject to: } y^{(k)}(\mathbf{w}^\intercal \mathbf{x}^{(k)} - b) \ge 1, \, \text{ for any } k = 1, \dots, K$$ This optimization problem is difficult to solve because it depends on $\|\mathbf{w}\|$, the norm of $\mathbf{w}$, which involves a square root. 
Fortunately it is possible to alter the minimization objective $\|\mathbf{w}\|$ by substituting it with $\tfrac{1}{2}\|\mathbf{w}\|^2$ (the factor of $\frac{1}{2}$ being used for mathematical convenience) without changing the solution (the minimum of the original and the modified equation have the same $\mathbf{w}$ and $b$):$$(\mathbf{w}^*,b^*) = \arg\min_{(\mathbf{w},b)} \frac{1}{2}\|\mathbf{w}\|^2 \\\text{subject to: } y^{(k)}(\mathbf{w}^\intercal \mathbf{x}^{(k)} - b) \ge 1, \, \text{ for any } k = 1, \dots, K$$This is a particular case of a *quadratic programming* problem. 3.1. Primal formThe optimization problem stated in the preceding section can be solved by means of a generalization of the Lagrange method of multipliers for inequality constraints, using the so called Karush–Kuhn–Tucker (KKT) multipliers $\boldsymbol{\alpha}$. According to it, the constrained problem can be expressed as$$(\mathbf{w}^*,b^*, \boldsymbol{\alpha}^*) = \arg\min_{\mathbf{w},b } \max_{\boldsymbol{\alpha}\geq 0 } \left\{ \frac{1}{2}\|\mathbf{w}\|^2 - \sum_{k=1}^{K}{\alpha^{(k)}[y^{(k)}(\mathbf{w}^\intercal \mathbf{x}^{(k)} - b)-1]} \right\}$$that is, we look for a *saddle point*.A key result in convex optimization theory is that, for the kind of optimization problems discussed here (see here, for instance), the *max* and *min* operators are interchangeable, so that$$(\mathbf{w}^*,b^*, \boldsymbol{\alpha}^*) = \arg\max_{\boldsymbol{\alpha}\geq 0 } \min_{\mathbf{w},b } \left\{ \frac{1}{2}\|\mathbf{w}\|^2 - \sum_{k=1}^{K}{\alpha^{(k)}[y^{(k)}(\mathbf{w}^\intercal \mathbf{x}^{(k)} - b)-1]} \right\}$$Note that the inner minimization problem is now quadratic in $\mathbf{w}$ and, thus, the minimum can be found by differentiation:$$\mathbf{w}^* = \sum_{k=1}^K{\alpha^{(k)} y^{(k)}\mathbf{x}^{(k)}}.$$ 3.1.1. Support VectorsIn view of the optimization problem, we can check that all the points which can be separated as $y^{(k)}(\mathbf{w}^\intercal \mathbf{x}^{(k)} - b) - 1 > 0 $ do not matter since we must set the corresponding $\alpha^{(k)}$ to zero. Therefore, only a few $\alpha^{(k)}$ will be greater than zero. The corresponding $\mathbf{x}^{(k)}$ are known as `support vectors`.It can be seen that the optimum parameter vector $\mathbf{w}^\ast$ can be expressed in terms of the support vectors only:$$\mathbf{w}^* = \sum_{k\in {\cal{S}}_{SV}}{\alpha^{(k)} y^{(k)}\mathbf{x}^{(k)}}.$$where ${\cal{S}}_{SV}$ is the set of indexes associated to support vectors. 3.1.2. The computation of $b$Support vectors lie on the margin and satisfy $y^{(k)}(\mathbf{w}^\intercal \mathbf{x}^{(k)} - b) = 1$. From this condition, we can obtain the value of $b$, since for any support vector:$$\mathbf{w}^\intercal\mathbf{x}^{(k)} - b = \frac{1}{y^{(k)}} = y^{(k)} \iff b = \mathbf{w}^\intercal\mathbf{x}^{(k)} - y^{(k)}$$This estimate of $b$, the centerpoint of the division, depends only on a single pair $y^{(k)}$ and $x^{(k)}$. We may get a more robust estimate of the center by averaging over all of the $N_{SV}$ support vectors, if we believe the population mean is a good estimate of the midpoint, so in practice, $b$ is often computed as:$$b = \frac{1}{N_{SV}} \sum_{\mathbf{x}^{(k)}\in {\cal{S}}_{SV}}{(\mathbf{w}^\intercal\mathbf{x}^{(k)} - y^{(k)})}$$ 3.2. 
Dual formWriting the classification rule in its unconstrained dual form reveals that the *maximum-margin hyperplane* and therefore the classification task is only a function of the *support vectors*, the subset of the training data that lie on the margin.Using the fact that $\|\mathbf{w}\|^2 = \mathbf{w}^\intercal \mathbf{w}$ and substituting $\mathbf{w} = \sum_{k=1}^K{\alpha^{(k)} y^{(k)}\mathbf{x}^{(k)}}$, we obtain\begin{align}(b^*, \boldsymbol{\alpha}^*) &= \arg\max_{\boldsymbol{\alpha}\geq 0 } \min_b \left\{ \sum_{k=1}^{K}\alpha^{(k)} - \frac{1}{2} \sum_{k=1}^K \sum_{j=1}^K {\alpha^{(k)} \alpha^{(j)} y^{(k)} y^{(j)} (\mathbf{x}^{(k)})^\intercal\mathbf{x}^{(j)}} + b \sum_{k=1}^{K}\alpha^{(k)}y^{(k)}\right\} \end{align}Note that, if $\sum_{k=1}^{K}\alpha^{(k)}y^{(k)} \neq 0$ the optimal value of $b$ is $+\infty$ of $-\infty$, and \begin{align}\min_b \left\{\sum_{k=1}^{K}\alpha^{(k)} - \frac{1}{2}\sum_{k=1}^K \sum_{j=1}^K {\alpha^{(k)} \alpha^{(j)} y^{(k)} y^{(j)} (\mathbf{x}^{(k)})^\intercal\mathbf{x}^{(j)}}+ b \sum_{k=1}^{K}\alpha^{(k)}y^{(k)}\right\} = -\infty.\end{align}Therefore, any $\boldsymbol{\alpha}$ satifying $\sum_{k=1}^{K}\alpha^{(k)}y^{(k)} \neq 0$ is suboptimal, so that the optimal multipliers must satisfy the condition $\sum_{k=1}^{K}\alpha^{(k)}y^{(k)} = 0$. Summarizing, the dual formulation of the optimization problem is$$\boldsymbol{\alpha}^* = \arg\max_{\boldsymbol{\alpha}\geq 0} \sum_{k=1}^K \alpha^{(k)} - \frac12 \sum_{k,j} \alpha^{(k)} \alpha^{(j)} y^{(k)} y^{(j)} k(\mathbf{x}^{(k)}, \mathbf{x}^{(j)}) \\\text{subject to: } \qquad \sum_{k=1}^K \alpha^{(k)} y^{(k)} = 0.$$where the *kernel* $k(\cdot)$ is defined by $k(\mathbf{x}^{(k)},\mathbf{x}^{(j)})=(\mathbf{x}^{(k)})^\intercal\mathbf{x}^{(j)}$.Many implementations of the SVM use this dual formulation. They proceed in three steps:1. Solve the dual problem to obtain $\boldsymbol{\alpha}^*$. Usually, only a small number of $\alpha^{*(k)}$ are nonzero. The corresponding values of ${\bf x}^{(k)}$ are called the *support vectors*.2. Compute $\mathbf{w}^* = \sum_{k=1}^K{\alpha^{*(k)} y^{(k)}\mathbf{x}^{(k)}}$3. Compute $b^* = \frac{1}{N_{SV}} \sum_{\alpha^{*(k)}\neq 0}{(\mathbf{w}^{*\intercal}\mathbf{x}^{(k)} - y^{(k)})}$ 4. Fitting a Support Vector Machine [Source: notebook by Jake Vanderplas] Now we'll fit a Support Vector Machine Classifier to these points.
###Code
clf = svm.SVC(kernel='linear')
clf.fit(X, y)
###Output
_____no_output_____
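###Markdown
A short aside on the margin derived above (an illustrative sketch using the ``clf`` fitted in the previous cell): the margin width is $\tfrac{2}{\|\mathbf{w}\|}$, and the support vectors should sit at decision values of roughly $\pm 1$. Scikit-learn's ``decision_function`` computes $\mathbf{w}^\intercal\mathbf{x} + \text{intercept}$, with ``intercept_`` playing the role of $-b$ in the notation used here.
###Code
# Illustrative check: margin width and decision values at the support vectors.
import numpy as np

print("margin width 2/||w|| =", 2 / np.linalg.norm(clf.coef_))
print("decision function at the support vectors (expected to be close to +-1):")
print(clf.decision_function(clf.support_vectors_))
###Output
_____no_output_____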
###Markdown
To better visualize what's happening here, let's create a quick convenience function that will plot SVM decision boundaries for us:
###Code
def plot_svc_decision_function(clf, ax=None):
"""Plot the decision function for a 2D SVC"""
if ax is None:
ax = plt.gca()
x = np.linspace(plt.xlim()[0], plt.xlim()[1], 30)
y = np.linspace(plt.ylim()[0], plt.ylim()[1], 30)
Y, X = np.meshgrid(y, x)
P = np.zeros_like(X)
for i, xi in enumerate(x):
for j, yj in enumerate(y):
P[i, j] = clf.decision_function(np.array([xi, yj]).reshape(1,-1))
# plot the margins
ax.contour(X, Y, P, colors='k',
levels=[-1, 0, 1], alpha=0.5,
linestyles=['--', '-', '--'])
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='copper')
plot_svc_decision_function(clf);
###Output
_____no_output_____
###Markdown
Notice that the dashed lines touch a couple of the points: these points are the *support vectors*. In scikit-learn, these are stored in the ``support_vectors_`` attribute of the classifier:
###Code
plt.scatter(clf.support_vectors_[:, 0], clf.support_vectors_[:, 1],
s=200, marker='s');
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='copper')
plot_svc_decision_function(clf)
###Output
_____no_output_____
###Markdown
Let's use IPython's interact functionality to explore how the distribution of points affects the support vectors and the discriminative fit. (This is only available in IPython 2.0+, and will not work in a static view)
###Code
def plot_svm(N=10):
X, y = make_blobs(n_samples=200, centers=2,
random_state=0, cluster_std=0.60)
X = X[:N]
y = y[:N]
clf = svm.SVC(kernel='linear')
clf.fit(X, y)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='copper')
plt.xlim(-1, 4)
plt.ylim(-1, 6)
plot_svc_decision_function(clf, plt.gca())
plt.scatter(clf.support_vectors_[:, 0], clf.support_vectors_[:, 1],
s=200, facecolors='none')
interact(plot_svm, N=[10, 200])
###Output
_____no_output_____
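###Markdown
Before the soft-margin formulation discussed below, here is a small illustrative sketch of the role of the penalty parameter $C$ when the classes overlap: a small $C$ tolerates more margin violations (typically yielding more support vectors), while a large $C$ penalizes slack heavily. The ``_c`` names and the larger cluster spread are introduced here only to make the classes overlap.
###Code
# Illustrative sketch: number of support vectors as a function of the penalty C.
from sklearn import svm
from sklearn.datasets import make_blobs

X_c, y_c = make_blobs(n_samples=100, centers=2, random_state=0, cluster_std=1.2)
for C in [0.01, 1, 100]:
    clf_c = svm.SVC(kernel='linear', C=C).fit(X_c, y_c)
    print("C = %6.2f -> %d support vectors" % (C, clf_c.support_vectors_.shape[0]))
###Output
_____no_output_____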
###Markdown
Notice the unique thing about SVM is that only the support vectors matter: that is, if you moved any of the other points without letting them cross the decision boundaries, they would have no effect on the classification results! 5. Non-separable problems. [Source: adapted from Wikipedia ] In 1995, Corinna Cortes and Vladimir N. Vapnik suggested a modified maximum margin idea that allows for mislabeled examples. If there exists no hyperplane that can split the `positive` and `negative` samples, the `Soft Margin` method will choose a hyperplane that splits the examples as cleanly as possible, while still maximizing the distance to the nearest cleanly split examples. The method introduces non-negative slack variables, $\xi^{(k)}$, which measure the degree of misclassification of the data $\mathbf{x}^{(k)}$$$y^{(k)}(\mathbf{w}^\intercal\mathbf{x}^{(k)} - b) \ge 1 - \xi^{(k)} \quad 1 \le k \le K.$$The objective function is then increased by a function which penalizes non-zero $\xi^{(k)}$, and the optimization becomes a trade off between a large margin and a small error penalty. If the penalty function is linear, the optimization problem becomes:$$(\mathbf{w}^*,\mathbf{\xi}^*, b^*) = \arg\min_{\mathbf{w},\mathbf{\xi}, b } \left\{\frac{1}{2} \|\mathbf{w}\|^2 + C \sum_{k=1}^K \xi^{(k)} \right\} \\\text{subject to: } \quad y^{(k)}(\mathbf{w}\cdot\mathbf{x_i} - b) \ge 1 - \xi^{(k)}, \quad \xi^{(k)} \ge 0, \quad k=1,\ldots, K.$$ This constraint along with the objective of minimizing $\|\mathbf{w}\|$ can be solved using KKT multipliers as done above. One then has to solve the following problem:$$\arg\min_{\mathbf{w}, \mathbf{\xi}, b } \max_{\boldsymbol{\alpha}, \boldsymbol{\beta} }\left\{ \frac{1}{2}\|\mathbf{w}\|^2 + C \sum_{k=1}^K \xi^{(k)}- \sum_{k=1}^K {\alpha^{(k)}[y^{(k)}(\mathbf{w}^\intercal \mathbf{x}^{(k)} - b) -1 + \xi^{(k)}]}- \sum_{k=1}^K \beta^{(k)} \xi^{(k)} \right \}\\\text{subject to: } \quad \alpha^{(k)}, \beta^{(k)} \ge 0.$$A similar analysis to that in the separable case can be applied to show that the dual formulation of the optimization problem is$$ \boldsymbol{\alpha}^* = \arg\max_{0 \leq \alpha^{(k)} \leq C , \, k=1,\ldots,K} \sum_{k=1}^K \alpha^{(k)} - \frac12 \sum_{k,j} \alpha^{(k)} \alpha^{(j)} y^{(k)} y^{(j)} k(\mathbf{x}^{(k)}, \mathbf{x}^{(j)}) \\\text{subject to: } \qquad \sum_{k=1}^K \alpha^{(k)} y^{(k)} = 0.$$Note that the only difference with the separable case is given by the constraints $\alpha^{(k)} \leq C$. 6. Nonlinear classification [Source: adapted from Wikipedia ] The original optimal hyperplane algorithm proposed by Vapnik in 1963 was a linear classifier. However, in 1992, Bernhard E. Boser, Isabelle M. Guyon and Vladimir N. Vapnik suggested a way to create nonlinear classifiers by applying the *kernel trick* to maximum-margin hyperplanes. The resulting algorithm is formally similar, except that every dot product is replaced by a nonlinear kernel function. This allows the algorithm to fit the maximum-margin hyperplane in a transformed feature space. The transformation may be nonlinear and the transformed space high dimensional; thus though the classifier is a hyperplane in the high-dimensional feature space, it may be nonlinear in the original input space.The kernel is related to the transform $\phi(\mathbf{x})$ by the equation $k(\mathbf{x}, \mathbf{x}') = \phi(\mathbf{x})^\intercal \phi(\mathbf{x}')$. 
However, note that we do not need to explicitly compute $\phi(\mathbf{x})$, as long as we can express all necessary calculations in terms of the kernel function only, as it is the case for the optimization problem in the dual case. The predictions of the SVM classifier can also be expressed in terms of kernels only, so that we never need to explicitely compute $\phi(\mathbf{x})$.$$\begin{align}\hat y({\mathbf{x}}) & = {\mathbf {w^\ast}}^\intercal {\mathbf{x}} + b^\ast \\ \\& = \left[\sum_{k \in {\cal{S}}_{SV}} \alpha^{(k)^*} y^{(k)} \phi(\mathbf{x}^{(k)})\right]^\intercal {\mathbf{x}} + b^\ast \\ \\& = b^\ast + \sum_{k \in {\cal{S}}_{SV}} \alpha^{(k)^*} y^{(k)} k(\mathbf{x}^{(k)}, {\mathbf{x}})\end{align}$$ Some common kernels include:* **Gaussian**: $k(\mathbf{x},\mathbf{x}')=\exp(-\gamma \|\mathbf{x} - \mathbf{x}'\|^2)$, for $\gamma > 0$. Sometimes parametrized using $\gamma=1/{2 \sigma^2}$. This is by far the most widely used kernel.* Polynomial (homogeneous): $k(\mathbf{x},\mathbf{x}')=(\mathbf{x}^\intercal \mathbf{x}')^d$* Polynomial (inhomogeneous): $k(\mathbf{x},\mathbf{x}') = (\mathbf{x}^\intercal \mathbf{x}' + 1)^d$* Hyperbolic tangent: $k(\mathbf{x},\mathbf{x}') = \tanh(\kappa \mathbf{x}^\intercal \mathbf{x}'+c)$, for some (not every) $\kappa > 0$ and $c < 0$. 6.1. Example. [Source: notebook by Jake Vanderplas] Where SVM gets incredibly exciting is when it is used in conjunction with *kernels*.To motivate the need for kernels, let's look at some data which is not linearly separable:
###Code
X, y = make_circles(100, factor=.1, noise=.1)
clf = svm.SVC(kernel='linear').fit(X, y)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='copper')
plot_svc_decision_function(clf);
###Output
_____no_output_____
###Markdown
Clearly, no linear discrimination will ever separate these data.One way we can adjust this is to apply a **kernel**, which is some functional transformation of the input data.For example, one simple model we could use is a **radial basis function**
###Code
r = np.exp(-(X[:, 0] ** 2 + X[:, 1] ** 2))
###Output
_____no_output_____
###Markdown
If we plot this along with our data, we can see the effect of it:
###Code
def plot_3D(elev=30, azim=30):
ax = plt.subplot(projection='3d')
ax.scatter3D(X[:, 0], X[:, 1], r, c=y, s=50, cmap='spring')
ax.view_init(elev=elev, azim=azim)
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_zlabel('r')
interact(plot_3D, elev=[-90, 90], azim=(-180, 180));
###Output
_____no_output_____
###Markdown
We can see that with this additional dimension, the data becomes trivially linearly separable!This is a relatively simple kernel; SVM has a more sophisticated version of this kernel built-in to the process. This is accomplished by using the Gaussian kernel (``kernel='rbf'``), short for *radial basis function*:
###Code
clf = svm.SVC(kernel='rbf')
clf.fit(X, y)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='copper')
plot_svc_decision_function(clf)
plt.scatter(clf.support_vectors_[:, 0], clf.support_vectors_[:, 1],
s=200, facecolors='none');
###Output
_____no_output_____ |
Python_Stock/Portfolio_Strategies/Ventilator_Manufacturer_Portfolio.ipynb | ###Markdown
Ventilator Manufacturer Portfolio Risk and Returns
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import math
import warnings
warnings.filterwarnings("ignore")
# yfinance is used to fetch data
import yfinance as yf
yf.pdr_override()
# input
# Ventilator Manufacturer
symbols = ['MDT','RMD','AHPI']
start = '2019-12-01'
end = '2020-04-14'
df = pd.DataFrame()
for s in symbols:
df[s] = yf.download(s,start,end)['Adj Close']
from datetime import datetime
from dateutil import relativedelta
d1 = datetime.strptime(start, "%Y-%m-%d")
d2 = datetime.strptime(end, "%Y-%m-%d")
delta = relativedelta.relativedelta(d2,d1)
print('How many years of investing?')
print('%s years' % delta.years)
number_of_years = delta.years
days = (df.index[-1] - df.index[0]).days
days
df.head()
df.tail()
plt.figure(figsize=(12,8))
plt.plot(df)
plt.title('Ventilator Manufacturer Stocks Closing Price')
plt.legend(labels=df.columns)
# Normalize the data
normalize = (df - df.min())/ (df.max() - df.min())
plt.figure(figsize=(18,12))
plt.plot(normalize)
plt.title('Ventilator Manufacturer Stocks Normalize')
plt.legend(labels=normalize.columns)
stock_rets = df.pct_change().dropna()
plt.figure(figsize=(12,8))
plt.plot(stock_rets)
plt.title('Ventilator Manufacturer Stocks Returns')
plt.legend(labels=stock_rets.columns)
plt.figure(figsize=(12,8))
plt.plot(stock_rets.cumsum())
plt.title('Ventilator Manufacturer Stocks Returns Cumulative Sum')
plt.legend(labels=stock_rets.columns)
sns.set(style='ticks')
ax = sns.pairplot(stock_rets, diag_kind='hist')
nplot = len(stock_rets.columns)
for i in range(nplot) :
for j in range(nplot) :
ax.axes[i, j].locator_params(axis='x', nbins=6, tight=True)
ax = sns.PairGrid(stock_rets)
ax.map_upper(plt.scatter, color='purple')
ax.map_lower(sns.kdeplot, color='blue')
ax.map_diag(plt.hist, bins=30)
for i in range(nplot) :
for j in range(nplot) :
ax.axes[i, j].locator_params(axis='x', nbins=6, tight=True)
plt.figure(figsize=(7,7))
corr = stock_rets.corr()
# plot the heatmap
sns.heatmap(corr,
xticklabels=corr.columns,
yticklabels=corr.columns,
cmap="Reds")
# Box plot
stock_rets.plot(kind='box',figsize=(12,8))
rets = stock_rets.dropna()
plt.figure(figsize=(12,8))
plt.scatter(rets.mean(), rets.std(),alpha = 0.5)
plt.title('Stocks Risk & Returns')
plt.xlabel('Expected returns')
plt.ylabel('Risk')
plt.grid(which='major')
for label, x, y in zip(rets.columns, rets.mean(), rets.std()):
plt.annotate(
label,
xy = (x, y), xytext = (50, 50),
textcoords = 'offset points', ha = 'right', va = 'bottom',
arrowprops = dict(arrowstyle = '-', connectionstyle = 'arc3,rad=-0.3'))
rets = stock_rets.dropna()
area = np.pi*20.0
sns.set(style='darkgrid')
plt.figure(figsize=(12,8))
plt.scatter(rets.mean(), rets.std(), s=area)
plt.xlabel("Expected Return", fontsize=15)
plt.ylabel("Risk", fontsize=15)
plt.title("Return vs. Risk for Stocks", fontsize=20)
for label, x, y in zip(rets.columns, rets.mean(), rets.std()) :
plt.annotate(label, xy=(x,y), xytext=(50, 0), textcoords='offset points',
arrowprops=dict(arrowstyle='-', connectionstyle='bar,angle=180,fraction=-0.2'),
bbox=dict(boxstyle="round", fc="w"))
rest_rets = rets.corr()
pair_value = rest_rets.abs().unstack()
pair_value.sort_values(ascending = False)
# Normalized Returns Data
Normalized_Value = ((rets[:] - rets[:].min()) /(rets[:].max() - rets[:].min()))
Normalized_Value.head()
Normalized_Value.corr()
normalized_rets = Normalized_Value.corr()
normalized_pair_value = normalized_rets.abs().unstack()
normalized_pair_value.sort_values(ascending = False)
print("Stock returns: ")
print(rets.mean())
print('-' * 50)
print("Stock risks:")
print(rets.std())
table = pd.DataFrame()
table['Returns'] = rets.mean()
table['Risk'] = rets.std()
table.sort_values(by='Returns')
table.sort_values(by='Risk')
rf = 0.01
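# Note: 'Returns' and 'Risk' are daily figures and rf is a flat assumed rate,
# so the Sharpe ratio below is per-period and not annualized.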
table['Sharpe Ratio'] = (table['Returns'] - rf) / table['Risk']
table
table['Max Returns'] = rets.max()
table['Min Returns'] = rets.min()
table['Median Returns'] = rets.median()
total_return = stock_rets[-1:].transpose()
table['Total Return'] = 100 * total_return
table
table['Average Return Days'] = (1 + total_return)**(1 / days) - 1
table
initial_value = df.iloc[0]
ending_value = df.iloc[-1]
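# CAGR: annualize the holding-period price change (the exponent uses 252 trading days
# over the calendar-day count computed above).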
table['CAGR'] = ((ending_value / initial_value) ** (252.0 / days)) -1
table
table.sort_values(by='Average Return Days')
###Output
_____no_output_____
###Markdown
Ventilator Manufacturer Portfolio Risk and Returns
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import math
import warnings
warnings.filterwarnings("ignore")
# yfinance is used to fetch data
import yfinance as yf
yf.pdr_override()
# input
# Ventilator Manufacturer
symbols = ['MDT','RMD','AHPI']
start = '2019-12-01'
end = '2020-04-14'
df = pd.DataFrame()
for s in symbols:
df[s] = yf.download(s,start,end)['Adj Close']
from datetime import datetime
from dateutil import relativedelta
d1 = datetime.strptime(start, "%Y-%m-%d")
d2 = datetime.strptime(end, "%Y-%m-%d")
delta = relativedelta.relativedelta(d2,d1)
print('How many years of investing?')
print('%s years' % delta.years)
number_of_years = delta.years
days = (df.index[-1] - df.index[0]).days
days
df.head()
df.tail()
plt.figure(figsize=(12,8))
plt.plot(df)
plt.title('Ventilator Manufacturer Stocks Closing Price')
plt.legend(labels=df.columns)
# Normalize the data
normalize = (df - df.min())/ (df.max() - df.min())
plt.figure(figsize=(18,12))
plt.plot(normalize)
plt.title('Ventilator Manufacturer Stocks Normalize')
plt.legend(labels=normalize.columns)
stock_rets = df.pct_change().dropna()
plt.figure(figsize=(12,8))
plt.plot(stock_rets)
plt.title('Ventilator Manufacturer Stocks Returns')
plt.legend(labels=stock_rets.columns)
plt.figure(figsize=(12,8))
plt.plot(stock_rets.cumsum())
plt.title('Ventilator Manufacturer Stocks Returns Cumulative Sum')
plt.legend(labels=stock_rets.columns)
sns.set(style='ticks')
ax = sns.pairplot(stock_rets, diag_kind='hist')
nplot = len(stock_rets.columns)
for i in range(nplot) :
for j in range(nplot) :
ax.axes[i, j].locator_params(axis='x', nbins=6, tight=True)
ax = sns.PairGrid(stock_rets)
ax.map_upper(plt.scatter, color='purple')
ax.map_lower(sns.kdeplot, color='blue')
ax.map_diag(plt.hist, bins=30)
for i in range(nplot) :
for j in range(nplot) :
ax.axes[i, j].locator_params(axis='x', nbins=6, tight=True)
plt.figure(figsize=(7,7))
corr = stock_rets.corr()
# plot the heatmap
sns.heatmap(corr,
xticklabels=corr.columns,
yticklabels=corr.columns,
cmap="Reds")
# Box plot
stock_rets.plot(kind='box',figsize=(12,8))
rets = stock_rets.dropna()
plt.figure(figsize=(12,8))
plt.scatter(rets.mean(), rets.std(),alpha = 0.5)
plt.title('Stocks Risk & Returns')
plt.xlabel('Expected returns')
plt.ylabel('Risk')
plt.grid(which='major')
for label, x, y in zip(rets.columns, rets.mean(), rets.std()):
plt.annotate(
label,
xy = (x, y), xytext = (50, 50),
textcoords = 'offset points', ha = 'right', va = 'bottom',
arrowprops = dict(arrowstyle = '-', connectionstyle = 'arc3,rad=-0.3'))
rets = stock_rets.dropna()
area = np.pi*20.0
sns.set(style='darkgrid')
plt.figure(figsize=(12,8))
plt.scatter(rets.mean(), rets.std(), s=area)
plt.xlabel("Expected Return", fontsize=15)
plt.ylabel("Risk", fontsize=15)
plt.title("Return vs. Risk for Stocks", fontsize=20)
for label, x, y in zip(rets.columns, rets.mean(), rets.std()) :
plt.annotate(label, xy=(x,y), xytext=(50, 0), textcoords='offset points',
arrowprops=dict(arrowstyle='-', connectionstyle='bar,angle=180,fraction=-0.2'),
bbox=dict(boxstyle="round", fc="w"))
rest_rets = rets.corr()
pair_value = rest_rets.abs().unstack()
pair_value.sort_values(ascending = False)
# Normalized Returns Data
Normalized_Value = ((rets[:] - rets[:].min()) /(rets[:].max() - rets[:].min()))
Normalized_Value.head()
Normalized_Value.corr()
normalized_rets = Normalized_Value.corr()
normalized_pair_value = normalized_rets.abs().unstack()
normalized_pair_value.sort_values(ascending = False)
print("Stock returns: ")
print(rets.mean())
print('-' * 50)
print("Stock risks:")
print(rets.std())
table = pd.DataFrame()
table['Returns'] = rets.mean()
table['Risk'] = rets.std()
table.sort_values(by='Returns')
table.sort_values(by='Risk')
rf = 0.01
table['Sharpe Ratio'] = (table['Returns'] - rf) / table['Risk']
table
table['Max Returns'] = rets.max()
table['Min Returns'] = rets.min()
table['Median Returns'] = rets.median()
total_return = stock_rets[-1:].transpose()
table['Total Return'] = 100 * total_return
table
table['Average Return Days'] = (1 + total_return)**(1 / days) - 1
table
initial_value = df.iloc[0]
ending_value = df.iloc[-1]
table['CAGR'] = ((ending_value / initial_value) ** (252.0 / days)) -1
table
table.sort_values(by='Average Return Days')
###Output
_____no_output_____ |
assignment2_colab/assignment2/Ex/Experiment+utils.ipynb | ###Markdown
Testing PSMNet model training w/ Basic Block Rewriting model dictionary
###Code
import torch

# Path of the checkpoint to rewrite; the source path here is an assumption
# (only the destination path appears in the original notebook), adjust as needed.
loadmodel = './kitti_3d/pretrained_sceneflow.tar'

# The checkpoint was saved from an nn.DataParallel model, so every key carries a
# 'module.' prefix (7 characters). Strip it so the weights load into a plain model.
model_state = torch.load(loadmodel)
new_model_state = {}
for key in model_state.keys():
    new_model_state[key[7:]] = model_state[key]
torch.save(new_model_state, './kitti_3d/pretrained_sceneflow2.tar')
###Output
_____no_output_____ |
midi_processing_matrices.ipynb | ###Markdown
MIDI processingThis notebook will handle processing the midi files into the state matrix we want for the network, and back from a network-produced state matrix to a midi.
###Code
import midi
import pandas as pd
import numpy as np
import pickle
import random
import os
def read_midis(filenames):
'''
A function that takes a list of filenames and returns the corresponding midi Patterns as a list.
'''
pattern_list = []
for f in filenames:
pattern = midi.read_midifile(f)
pattern_list.append(pattern)
return pattern_list
def patterns_to_matrices(pattern_list):
'''
Takes a list of midi Patterns and converts them to note-matrix format.
'''
for pattern in pattern_list:
if pattern.tick_relative:
pattern.make_ticks_abs() # convert to absolute times in order to sort by time.
# constructs a dataframe
df = pd.DataFrame({'sixteenths':[], 'notes':[], 'velocities':[]})
for track in pattern[1:3]:
for event in track:
if event.name == 'Note On':
df = df.append({'sixteenths': event.tick*4/pattern.resolution, 'notes': event.data[0],
'velocities':event.data[1]}, ignore_index=True)
df = df.sort_values('sixteenths')
matrix = np.zeros(((int(max(df['sixteenths'].values)) + 1), 174))
for event in df.iterrows():
timing = int(event[1][1])
note = int(event[1][0]) - 21
velocity = int(event[1][2])
if velocity != 0:
matrix[timing, note * 2] = 1
matrix[timing:, note * 2 + 1] = 1
if velocity == 0:
matrix[timing:, note * 2 + 1] = 0
yield matrix
def matrix_to_midi(matrix, save_path):
'''
Takes a sample in the note-matrix format and saves it as a midi to save_path.
Returns a copy of the pattern that is written out.
'''
# create a pattern and add a track
pattern = midi.Pattern()
pattern.resolution = 480
track = midi.Track()
pattern.append(track)
pattern.make_ticks_abs()
prev_step = np.zeros(174)
for time, step in enumerate(matrix):
for note, strike in enumerate(step[::2]):
if strike == 1:
track.append(midi.NoteOnEvent(tick = time * pattern.resolution / 4, data = [note + 21, 100]))
for note, (sustain, last_sustain) in enumerate(zip(step[1::2], prev_step[1::2])):
if last_sustain - sustain > 0:
track.append(midi.NoteOnEvent(tick = time * pattern.resolution / 4, data = [note + 21, 0]))
prev_step = step
pattern.make_ticks_rel()
track.append(midi.EndOfTrackEvent())
midi.write_midifile(save_path, pattern)
return pattern
###Output
_____no_output_____
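###Markdown
Before running the conversion, here is a small illustration of the note-matrix encoding as the functions above use it (my reading of the code, added for clarity): each row is one sixteenth-note time step, and each of the 87 pitches (MIDI notes 21-107) gets two columns, an even "strike" column that is 1 only on the step where the note is struck and an odd "sustain" column that stays 1 while the note is held.
###Code
# Illustration of the 174-column note-matrix encoding (87 pitches x 2 columns).
import numpy as np
toy = np.zeros((4, 174))
note = 60 - 21               # middle C (MIDI note 60) maps to pitch index 39
toy[1, note * 2] = 1         # struck on time step 1
toy[1:3, note * 2 + 1] = 1   # held for two sixteenth-note steps
print(toy[:, note * 2:note * 2 + 2])
###Output
_____no_output_____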
###Markdown
Processing MIDI filesRun this code to convert midi files in the folder specified by the `path` variable below to note-matrix format and save them to disk.
###Code
path = 'C:/Users/Emerson/Documents/bigdata/midis/format_1/'
filepaths = [path + f for f in os.listdir(path)]
pattern_list = read_midis(filepaths)
np.random.shuffle(pattern_list)
master_matrix = np.concatenate([m for m in patterns_to_matrices(pattern_list[:int(len(pattern_list)*.9)])])
test_matrix = np.concatenate([m for m in patterns_to_matrices(pattern_list[int(len(pattern_list)*.9):])])
np.save('C:/Users/Emerson/Documents/bigdata/midis/processed/mastermatrix.npy', master_matrix)
np.save('C:/Users/Emerson/Documents/bigdata/midis/processed/testmatrix.npy', test_matrix)
###Output
_____no_output_____
###Markdown
Processing generated matricesRun this code to convert the specified note-matrix sample into a .mid file.
###Code
sample = 'epoch 10 sample 1'
matrix = np.load('C:/Users/Emerson/Documents/bigdata/midis/generated/matrices/' + sample + '.npy')
save_path = 'C:/Users/Emerson/Documents/bigdata/midis/generated/midis/' + sample + '.mid'
pattern = matrix_to_midi(matrix, save_path)
###Output
_____no_output_____ |
MG_sklearn_topic_modelling2.ipynb | ###Markdown
###Code
# install if not available
%%capture
!pip install pyLDAvis
# import libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import sklearn
import numpy as np
import re
import nltk
from nltk.tokenize import RegexpTokenizer
from nltk.corpus import stopwords
from nltk.stem.wordnet import WordNetLemmatizer
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import LatentDirichletAllocation
import pickle
import pyLDAvis
import pyLDAvis.sklearn
pyLDAvis.enable_notebook()
###Output
/usr/local/lib/python3.7/dist-packages/past/types/oldstr.py:5: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3,and in 3.9 it will stop working
from collections import Iterable
/usr/local/lib/python3.7/dist-packages/past/builtins/misc.py:4: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3,and in 3.9 it will stop working
from collections import Mapping
###Markdown
Preprocessing
###Code
# download stopwords
!python -m nltk.downloader stopwords
def clean2(text):
"""cleans the text to prepare for NLP"""
text = str(text).lower()
text = re.sub(r'@\w+', ' ', text)
text = re.sub('https?://\S+|www\.\S+', '', text)
text = re.sub(r'[^a-z A-Z]', ' ',text)
text = re.sub(r'\b\w{1,2}\b', '', text)
text = re.sub(r'[^\w\s]','',text)
text = re.sub(r'^RT[\s]+', '', text)
text = re.sub('\[.*?\]', '', text)
text = re.sub('<.*?>+', '', text)
text = re.sub('\n', '', text)
text = re.sub('\w*\d\w*', '', text)
text = re.sub(r'#', '', text)
text = re.sub(r'[^\w\s]','',text)
    text = re.sub(r'@[A-Za-z0-9]+', '', text)
text = re.sub(r' +', ' ', text)
return text
# download the tweet dataset
!wget https://dsiwork.s3.amazonaws.com/dataset.csv
data = pd.read_csv("dataset.csv", parse_dates=["date_created"], encoding="ISO-8859-1")
data.head()
data['clean_tweet'] = data.tweet.apply(clean2)
data.head()
# Remove stopwords
stop_words = set(stopwords.words("english"))
data["clean_tweet"] = data["clean_tweet"].apply(lambda x : " ".join([w.lower() for w in x.split() if w not in stop_words and len(w) > 3]))
#Tokenize tweet
tweets = data["clean_tweet"].apply(lambda x : x.split())
data.head()
###Output
_____no_output_____
###Markdown
**Lemmatization**
###Code
%%capture
!python -m spacy download en_core_web_sm
%%capture
import spacy
def lemmatization(texts, allowed_postags=['NOUN', 'ADJ', 'VERB', 'ADV']):
texts_out = []
for sent in texts:
doc = nlp(" ".join(sent))
texts_out.append(" ".join([token.lemma_ if token.lemma_ not in ['-PRON-'] else '' for token in doc if token.pos_ in allowed_postags]))
return texts_out
# Initialize spacy 'en' model, keeping only tagger component (for efficiency)
nlp = spacy.load('en_core_web_sm', disable=['parser', 'ner'])
# Do lemmatization keeping only Noun, Adj, Verb, Adverb
data_lemmatized = lemmatization(tweets, allowed_postags=['NOUN', 'ADJ', 'VERB', 'ADV'])
vectorizer = TfidfVectorizer(ngram_range=(2,3))
data_vectorized = vectorizer.fit_transform(data_lemmatized)
data_vectorized
###Output
_____no_output_____
###Markdown
Modelling
###Code
# LDA Implementation
number_of_topics = 10
model = LatentDirichletAllocation(n_components=number_of_topics, random_state=0)
model.fit(data_vectorized)
def display_topics(model, feature_names, no_top_words):
"""
creates dataframe showing top words for each topic from the model
Parameters
----------
model: object instance of the topic model
feature_names: output feature names from vectorizer e.g CountVectorizer.get_feature_names()
no_top_words:
returns
--------
dataframe showing topics and the weight for the top words specified
"""
topic_dict = {}
for topic_idx, topic in enumerate(model.components_):
topic_dict["Topic %d words" % (topic_idx)]= ['{}'.format(feature_names[i])
for i in topic.argsort()[:-no_top_words - 1:-1]]
topic_dict["Topic %d weights" % (topic_idx)]= ['{:.1f}'.format(topic[i])
for i in topic.argsort()[:-no_top_words - 1:-1]]
return pd.DataFrame(topic_dict)
# get the feature names from the vectorization
tf_feature_names = vectorizer.get_feature_names()
no_top_words = 20
display_topics(model, tf_feature_names, no_top_words)
#df.to_excel("topics_output.xlsx")
###Output
_____no_output_____
###Markdown
**Model Performance Metrics**
###Code
# log-likelihood
print(model.score(data_vectorized))
# perplexity
print(model.perplexity(data_vectorized))
###Output
-50035.89601931713
12972.267981287418
###Markdown
pyLDAVis
###Code
pyLDAvis.sklearn.prepare(model, data_vectorized, vectorizer)
###Output
/usr/local/lib/python3.7/dist-packages/sklearn/utils/deprecation.py:87: FutureWarning: Function get_feature_names is deprecated; get_feature_names is deprecated in 1.0 and will be removed in 1.2. Please use get_feature_names_out instead.
warnings.warn(msg, category=FutureWarning)
/usr/local/lib/python3.7/dist-packages/pyLDAvis/_prepare.py:247: FutureWarning: In a future version of pandas all arguments of DataFrame.drop except for the argument 'labels' will be keyword-only
by='saliency', ascending=False).head(R).drop('saliency', 1)
###Markdown
**Hyperparameter Tuning** **How to GridSearch the best LDA model**
###Code
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.model_selection import GridSearchCV
# Define Search Param
search_params = {'n_components': [10, 15, 20, 25, 30], 'learning_decay': [.5, .7, .9]}
# Init the Model
lda = LatentDirichletAllocation()
# Init Grid Search Class
model2 = GridSearchCV(lda, param_grid=search_params)
# Do the Grid Search
model2.fit(data_vectorized)
###Output
_____no_output_____
###Markdown
**How to see the best topic model and its parameters?**
###Code
# Best Model
best_lda_model = model2.best_estimator_
# Model Parameters
print("Best Model's Params: ", model2.best_params_)
# Log Likelihood Score
print("Best Log Likelihood Score: ", model2.best_score_)
# Perplexity
print("Model Perplexity: ", best_lda_model.perplexity(data_vectorized))
###Output
Best Model's Params: {'learning_decay': 0.5, 'n_components': 10}
Best Log Likelihood Score: -18443.739474704387
Model Perplexity: 12978.496776555903
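###Markdown
As an extra illustrative step (it assumes ``model``, ``data_vectorized`` and ``data`` from the cells above are available), the fitted model's ``transform`` gives the per-tweet topic distribution, from which a dominant topic per document can be read off.
###Code
# Illustrative sketch: dominant topic for each tweet from the document-topic matrix.
doc_topic = model.transform(data_vectorized)      # shape: (n_documents, n_topics)
data["dominant_topic"] = doc_topic.argmax(axis=1)
data["dominant_topic"].value_counts()
###Output
_____no_output_____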
###Markdown
*This shows us that the best model is obtained with 10 topics as done above* Inference
###Code
def get_inference(model, vectorizer, topics, text, threshold):
"""
runs inference on text input
paramaters
----------
model: loaded model to use to transform the input
vectorizer: instance of the vectorizer e.g TfidfVectorizer(ngram_range=(2, 3))
topics: the list of topics in the model
text: input string to be classified
threshold: float of threshold to use to output a topic
returns
-------
    tuple => (the most likely topic, the scores for each topic)
"""
v_text = vectorizer.transform([text])
score = model.transform(v_text)
labels = set()
for i in range(len(score[0])):
if score[0][i] > threshold:
labels.add(topics[i])
    if not labels:
        return 'None', score
    return topics[np.argmax(score)], score
# test the model with some text
topics = list(np.arange(0,10))
result = get_inference(model, vectorizer, topics, "operation dudula", 0 )
result
###Output
_____no_output_____
###Markdown
Testing inference from loading the model
###Code
# Save the model then test it by loading it
with open("lda_model.pk","wb") as f:
pickle.dump(model, f)
f.close()
# then reload it
with open("lda_model.pk","rb") as f:
lda_model = pickle.load(f)
# test example text
result = get_inference(lda_model, vectorizer, topics, "operation dudula", 0 )
result
pickle.dump(vectorizer, open("vectorizer.pickle", "wb"))
#pickle.load(open("models/vectorizer.pickle", 'rb')) // Load vectorizer
###Output
_____no_output_____ |
Interfaces/Notebooks/Arabic_nmt_attention.ipynb | ###Markdown
This is a modified version of the code published by TensorFlow https://www.tensorflow.org/alpha/tutorials/text/nmt_with_attention Copyright 2018 The TensorFlow Authors.Licensed under the Apache License, Version 2.0 (the "License"). Neural Machine Translation with Attention
###Code
from __future__ import absolute_import, division, print_function
import tensorflow as tf
tf.enable_eager_execution()
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
import unicodedata
import re
import numpy as np
import os
import time
print(tf.__version__)
#download the dataset
!wget http://www.manythings.org/anki/ara-eng.zip
!unzip ara-eng.zip
!cat ara.txt | head -5
path_to_file = "ara.txt"
###Output
_____no_output_____
###Markdown
Prepare the datasetEach line in the dataset contains a pair of sentences, one in English and one in Arabic:`Call me. اتصل بي.`After downloading the dataset, here are the steps we'll take to prepare the data:1. Add a *start* and *end* token to each sentence.2. Clean the sentences by removing special characters.3. Create a word index and reverse word index (dictionaries mapping from word → id and id → word).4. Pad each sentence to a maximum length.
###Code
# Converts the unicode file to ascii
def unicode_to_ascii(s):
return ''.join(c for c in unicodedata.normalize('NFD', s)
if unicodedata.category(c) != 'Mn')
def preprocess_sentence(w):
w = unicode_to_ascii(w.lower().strip())
# creating a space between a word and the punctuation following it
# eg: "he is a boy." => "he is a boy ."
# Reference:- https://stackoverflow.com/questions/3645931/python-padding-punctuation-with-white-spaces-keeping-punctuation
w = re.sub(r"([?.!,¿])", r" \1 ", w)
w = re.sub(r'[" "]+', " ", w)
# replacing everything with space except (a-z, A-Z, ".", "?", "!", ",")
#w = re.sub(r"[^a-zA-Z?.!,¿]+", " ", w)
w = w.rstrip().strip()
# adding a start and an end token to the sentence
# so that the model know when to start and stop predicting.
w = '<start> ' + w + ' <end>'
return w
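# Worked example (illustrative): preprocess_sentence("Call me.") lowercases the text,
# pads the '.' with spaces, and wraps it in the special tokens, returning
# '<start> call me . <end>'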
# 1. Remove the accents
# 2. Clean the sentences
# 3. Return word pairs in the format: [ENGLISH, ARABIC]
def create_dataset(path, num_examples):
lines = open(path, encoding='UTF-8').read().strip().split('\n')
print(lines[0].split('\t'))
word_pairs = [[preprocess_sentence(w) for w in l.split('\t')] for l in lines[:num_examples]]
return word_pairs
# This class creates a word -> index mapping (e.g,. "dad" -> 5) and vice-versa
# (e.g., 5 -> "dad") for each language,
class LanguageIndex():
def __init__(self, lang):
self.lang = lang
self.word2idx = {}
self.idx2word = {}
self.vocab = set()
self.create_index()
def create_index(self):
for phrase in self.lang:
self.vocab.update(phrase.split(' '))
self.vocab = sorted(self.vocab)
self.word2idx['<pad>'] = 0
for index, word in enumerate(self.vocab):
self.word2idx[word] = index + 1
for word, index in self.word2idx.items():
self.idx2word[index] = word
def max_length(tensor):
return max(len(t) for t in tensor)
def load_dataset(path, num_examples):
# creating cleaned input, output pairs
pairs = create_dataset(path, num_examples)
# index language using the class defined above
inp_lang = LanguageIndex(sp for en, sp in pairs)
targ_lang = LanguageIndex(en for en, sp in pairs)
# Vectorize the input and target languages
    # Arabic sentences (used as the model input)
input_tensor = [[inp_lang.word2idx[s] for s in sp.split(' ')] for en, sp in pairs]
# English sentences
target_tensor = [[targ_lang.word2idx[s] for s in en.split(' ')] for en, sp in pairs]
# Calculate max_length of input and output tensor
# Here, we'll set those to the longest sentence in the dataset
max_length_inp, max_length_tar = max_length(input_tensor), max_length(target_tensor)
# Padding the input and output tensor to the maximum length
input_tensor = tf.keras.preprocessing.sequence.pad_sequences(input_tensor,
maxlen=max_length_inp,
padding='post')
target_tensor = tf.keras.preprocessing.sequence.pad_sequences(target_tensor,
maxlen=max_length_tar,
padding='post')
return input_tensor, target_tensor, inp_lang, targ_lang, max_length_inp, max_length_tar
###Output
_____no_output_____
###Markdown
Limit the size of the dataset to experiment faster (optional)You can reduce the num_examples to train faster
###Code
# Try experimenting with the size of that dataset
num_examples = 30000
input_tensor, target_tensor, inp_lang, targ_lang, max_length_inp, max_length_targ = load_dataset(path_to_file, num_examples)
# Creating training and validation sets using an 80-20 split
input_tensor_train, input_tensor_val, target_tensor_train, target_tensor_val = train_test_split(input_tensor, target_tensor, test_size=0.01)
# Show length
len(input_tensor_train), len(target_tensor_train), len(input_tensor_val), len(target_tensor_val)
###Output
_____no_output_____
###Markdown
Create a tf.data dataset
###Code
BUFFER_SIZE = len(input_tensor_train)
BATCH_SIZE = 64
N_BATCH = BUFFER_SIZE//BATCH_SIZE
embedding_dim = 256
units = 256
vocab_inp_size = len(inp_lang.word2idx)
vocab_tar_size = len(targ_lang.word2idx)
dataset = tf.data.Dataset.from_tensor_slices((input_tensor_train, target_tensor_train)).shuffle(BUFFER_SIZE)
dataset = dataset.batch(BATCH_SIZE, drop_remainder=True)
###Output
_____no_output_____
###Markdown
Write the encoder and decoder modelHere, we'll implement an encoder-decoder model with attention which you can read about in the TensorFlow [Neural Machine Translation (seq2seq) tutorial](https://github.com/tensorflow/nmt). This example uses a more recent set of APIs. This notebook implements the [attention equations](https://github.com/tensorflow/nmt#background-on-the-attention-mechanism) from the seq2seq tutorial. The following diagram shows that each input word is assigned a weight by the attention mechanism which is then used by the decoder to predict the next word in the sentence.The input is put through an encoder model which gives us the encoder output of shape *(batch_size, max_length, hidden_size)* and the encoder hidden state of shape *(batch_size, hidden_size)*. Here are the equations that are implemented:We're using *Bahdanau attention*. Let's decide on notation before writing the simplified form:* FC = Fully connected (dense) layer* EO = Encoder output* H = hidden state* X = input to the decoderAnd the pseudo-code:* `score = FC(tanh(FC(EO) + FC(H)))`* `attention weights = softmax(score, axis = 1)`. Softmax by default is applied on the last axis but here we want to apply it on the *1st axis*, since the shape of score is *(batch_size, max_length, 1)*. `Max_length` is the length of our input. Since we are trying to assign a weight to each input, softmax should be applied on that axis.* `context vector = sum(attention weights * EO, axis = 1)`. Same reason as above for choosing axis as 1.* `embedding output` = The input to the decoder X is passed through an embedding layer.* `merged vector = concat(embedding output, context vector)`* This merged vector is then given to the GRU The shapes of all the vectors at each step have been specified in the comments in the code:
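As a compact restatement of the pseudo-code above (these are just the standard Bahdanau attention equations, with $W_1$, $W_2$, $v_a$ naming the three dense layers): $score_t = v_a^{\top}\tanh(W_1\,EO_t + W_2\,H)$, $\alpha_t = \mathrm{softmax}_t(score_t)$ taken over the input positions, and the context vector $c = \sum_t \alpha_t\,EO_t$, where $EO_t$ is the encoder output at input position $t$ and $H$ is the decoder hidden state.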
###Code
def gru(units):
    # If you have a GPU, CuDNNGRU provides roughly a 3x speedup over GRU;
    # this modified version always uses the plain GRU layer below.
return tf.keras.layers.GRU(units,
return_sequences=True,
return_state=True,
recurrent_activation='sigmoid',
recurrent_initializer='glorot_uniform')
def get_encoder(vocab_size, embedding_dim, enc_units, batch_sz):
hidden = tf.zeros((batch_sz, enc_units))
input = tf.keras.layers.Input((max_length_inp,))
x = tf.keras.layers.Embedding(vocab_size, embedding_dim)(input)
x = gru(units)(x)
return tf.keras.models.Model(inputs = input, outputs = x)
def get_decoder(vocab_size, embedding_dim, units, batch_sz):
#define the inputs to the decoder
enc_output = tf.keras.layers.Input((max_length_inp, embedding_dim))
enc_hidden = tf.keras.layers.Input((embedding_dim,))
dec_input = tf.keras.layers.Input((1,))
hidden_with_time_axis = tf.keras.layers.Reshape((1, embedding_dim))(enc_hidden)
# we get 1 at the last axis because we are applying tanh(FC(EO) + FC(H)) to self.V
W1 = tf.keras.layers.Dense(units)
W2 = tf.keras.layers.Dense(units)
V = tf.keras.layers.Dense(1, activation = 'softmax')
attention_weights = V(tf.keras.layers.Activation(activation = "tanh")(tf.keras.layers.Add()([W1(enc_output), W2(hidden_with_time_axis)])))
# context_vector shape after sum == (batch_size, hidden_size)
context_vector = tf.keras.layers.Multiply()([attention_weights, enc_output])
    # reshape the context_vector to concatenate with the output of the first input
context_vector = tf.keras.layers.Permute((2, 1))(context_vector)
context_vector = tf.keras.layers.Dense(1)(context_vector)
context_vector = tf.keras.layers.Permute((2, 1))(context_vector)
x = tf.keras.layers.Embedding(vocab_size, embedding_dim)(dec_input)
# x shape after concatenation == (batch_size, 1, embedding_dim + hidden_size)
x = tf.keras.layers.Concatenate(axis = -1)([context_vector, x])
# passing the concatenated vector to the GRU
output, state = gru(units)(x)
# output shape == (batch_size * 1, hidden_size)
output = tf.keras.layers.Reshape((output.shape[2],))(output)
# output shape == (batch_size * 1, vocab)
x = tf.keras.layers.Dense(vocab_size)(output)
return tf.keras.models.Model(inputs = [dec_input, enc_hidden, enc_output], outputs = x)
encoder = get_encoder(vocab_inp_size, embedding_dim, units, BATCH_SIZE)
decoder = get_decoder(vocab_tar_size, embedding_dim, units, BATCH_SIZE)
###Output
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/embedding_ops.py:132: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
###Markdown
Define the optimizer and the loss function
###Code
optimizer = tf.train.AdamOptimizer()
def loss_function(real, pred):
mask = 1 - np.equal(real, 0)
loss_ = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=real, logits=pred) * mask
return tf.reduce_mean(loss_)
###Output
_____no_output_____
###Markdown
Checkpoints (Object-based saving)
###Code
checkpoint_dir = './training_checkpoints'
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
checkpoint = tf.train.Checkpoint(optimizer=optimizer,
encoder=encoder,
decoder=decoder)
###Output
_____no_output_____
###Markdown
Training1. Pass the *input* through the *encoder* which returns the *encoder output* and the *encoder hidden state*.2. The encoder output, encoder hidden state and the decoder input (which is the *start token*) are passed to the decoder.3. The decoder returns the *predictions* and the *decoder hidden state*.4. The decoder hidden state is then passed back into the model and the predictions are used to calculate the loss.5. Use *teacher forcing* to decide the next input to the decoder.6. *Teacher forcing* is the technique where the *target word* is passed as the *next input* to the decoder.7. The final step is to calculate the gradients, apply them to the optimizer and backpropagate.
###Code
EPOCHS = 30
for epoch in range(EPOCHS):
start = time.time()
total_loss = 0
for (batch, (inp, targ)) in enumerate(dataset):
loss = 0
with tf.GradientTape() as tape:
enc_output, enc_hidden = encoder(inp)
#print(tf.reduce_sum(enc_output))
dec_hidden = enc_hidden
dec_input = tf.expand_dims([targ_lang.word2idx['<start>']] * BATCH_SIZE, 1)
# Teacher forcing - feeding the target as the next input
for t in range(1, targ.shape[1]):
# passing enc_output to the decoder
#print(dec_input.shape, dec_hidden.shape, enc_output.shape)
predictions = decoder([dec_input, dec_hidden, enc_output])
loss += loss_function(targ[:, t], predictions)
# using teacher forcing
dec_input = tf.expand_dims(targ[:, t], 1)
batch_loss = (loss / int(targ.shape[1]))
total_loss += batch_loss
variables = encoder.variables + decoder.variables
gradients = tape.gradient(loss, variables)
optimizer.apply_gradients(zip(gradients, variables))
if batch % 100 == 0:
print('Epoch {} Batch {} Loss {:.4f}'.format(epoch + 1,
batch,
batch_loss.numpy()))
# saving (checkpoint) the model every 2 epochs
if (epoch + 1) % 2 == 0:
checkpoint.save(file_prefix = checkpoint_prefix)
print('Epoch {} Loss {:.4f}'.format(epoch + 1,
total_loss / N_BATCH))
print('Time taken for 1 epoch {} sec\n'.format(time.time() - start))
###Output
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/math_grad.py:102: div (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Deprecated in favor of operator or tf.math.divide.
Epoch 1 Batch 0 Loss 1.5737
Epoch 1 Batch 100 Loss 0.9952
Epoch 1 Loss 1.0801
Time taken for 1 epoch 184.59089255332947 sec
Epoch 2 Batch 0 Loss 0.9520
Epoch 2 Batch 100 Loss 0.9073
Epoch 2 Loss 0.9059
Time taken for 1 epoch 185.6432590484619 sec
Epoch 3 Batch 0 Loss 0.8685
Epoch 3 Batch 100 Loss 0.8189
Epoch 3 Loss 0.8228
Time taken for 1 epoch 184.95331001281738 sec
Epoch 4 Batch 0 Loss 0.7762
Epoch 4 Batch 100 Loss 0.7495
Epoch 4 Loss 0.7631
Time taken for 1 epoch 184.47491669654846 sec
Epoch 5 Batch 0 Loss 0.7334
Epoch 5 Batch 100 Loss 0.7252
Epoch 5 Loss 0.7156
Time taken for 1 epoch 183.40274953842163 sec
Epoch 6 Batch 0 Loss 0.6917
Epoch 6 Batch 100 Loss 0.7698
Epoch 6 Loss 0.6743
Time taken for 1 epoch 185.21108555793762 sec
Epoch 7 Batch 0 Loss 0.6076
Epoch 7 Batch 100 Loss 0.7121
Epoch 7 Loss 0.6360
Time taken for 1 epoch 183.29096484184265 sec
Epoch 8 Batch 0 Loss 0.6804
Epoch 8 Batch 100 Loss 0.5700
Epoch 8 Loss 0.5996
Time taken for 1 epoch 183.7897927761078 sec
Epoch 9 Batch 0 Loss 0.6281
Epoch 9 Batch 100 Loss 0.5707
Epoch 9 Loss 0.5638
Time taken for 1 epoch 183.74300241470337 sec
Epoch 10 Batch 0 Loss 0.5120
Epoch 10 Batch 100 Loss 0.5701
Epoch 10 Loss 0.5279
Time taken for 1 epoch 183.26960945129395 sec
Epoch 11 Batch 0 Loss 0.5019
Epoch 11 Batch 100 Loss 0.4268
Epoch 11 Loss 0.4933
Time taken for 1 epoch 183.0635199546814 sec
Epoch 12 Batch 0 Loss 0.4919
Epoch 12 Batch 100 Loss 0.4477
Epoch 12 Loss 0.4590
Time taken for 1 epoch 182.36777782440186 sec
Epoch 13 Batch 0 Loss 0.3958
Epoch 13 Batch 100 Loss 0.4551
Epoch 13 Loss 0.4253
Time taken for 1 epoch 184.3013105392456 sec
Epoch 14 Batch 0 Loss 0.3796
Epoch 14 Batch 100 Loss 0.3660
Epoch 14 Loss 0.3935
Time taken for 1 epoch 183.02555990219116 sec
Epoch 15 Batch 0 Loss 0.3373
Epoch 15 Batch 100 Loss 0.3465
Epoch 15 Loss 0.3624
Time taken for 1 epoch 180.61645650863647 sec
Epoch 16 Batch 0 Loss 0.2803
Epoch 16 Batch 100 Loss 0.3393
Epoch 16 Loss 0.3323
Time taken for 1 epoch 181.69418787956238 sec
Epoch 17 Batch 0 Loss 0.3148
Epoch 17 Batch 100 Loss 0.2400
Epoch 17 Loss 0.3031
Time taken for 1 epoch 182.0835063457489 sec
Epoch 18 Batch 0 Loss 0.2892
Epoch 18 Batch 100 Loss 0.2624
Epoch 18 Loss 0.2753
Time taken for 1 epoch 182.76570463180542 sec
Epoch 19 Batch 0 Loss 0.2201
Epoch 19 Batch 100 Loss 0.2476
Epoch 19 Loss 0.2500
Time taken for 1 epoch 182.3220407962799 sec
Epoch 20 Batch 0 Loss 0.2662
Epoch 20 Batch 100 Loss 0.2456
Epoch 20 Loss 0.2258
Time taken for 1 epoch 181.4635727405548 sec
Epoch 21 Batch 0 Loss 0.1932
Epoch 21 Batch 100 Loss 0.2100
Epoch 21 Loss 0.2031
Time taken for 1 epoch 183.69462966918945 sec
Epoch 22 Batch 0 Loss 0.1938
Epoch 22 Batch 100 Loss 0.2069
Epoch 22 Loss 0.1826
Time taken for 1 epoch 181.63858151435852 sec
Epoch 23 Batch 0 Loss 0.1478
Epoch 23 Batch 100 Loss 0.1644
Epoch 23 Loss 0.1639
Time taken for 1 epoch 182.15275311470032 sec
Epoch 24 Batch 0 Loss 0.1215
Epoch 24 Batch 100 Loss 0.1390
Epoch 24 Loss 0.1460
Time taken for 1 epoch 181.0750732421875 sec
Epoch 25 Batch 0 Loss 0.1328
Epoch 25 Batch 100 Loss 0.1351
Epoch 25 Loss 0.1296
Time taken for 1 epoch 183.3895823955536 sec
Epoch 26 Batch 0 Loss 0.1455
Epoch 26 Batch 100 Loss 0.0939
Epoch 26 Loss 0.1151
Time taken for 1 epoch 182.7859992980957 sec
Epoch 27 Batch 0 Loss 0.0952
Epoch 27 Batch 100 Loss 0.1030
Epoch 27 Loss 0.1019
Time taken for 1 epoch 181.32154488563538 sec
Epoch 28 Batch 0 Loss 0.0853
Epoch 28 Batch 100 Loss 0.0747
Epoch 28 Loss 0.0903
Time taken for 1 epoch 185.0141680240631 sec
Epoch 29 Batch 0 Loss 0.0848
Epoch 29 Batch 100 Loss 0.0685
Epoch 29 Loss 0.0799
Time taken for 1 epoch 180.88364839553833 sec
Epoch 30 Batch 0 Loss 0.0672
Epoch 30 Batch 100 Loss 0.0570
Epoch 30 Loss 0.0708
Time taken for 1 epoch 183.36261177062988 sec
###Markdown
Translate* The evaluate function is similar to the training loop, except we don't use *teacher forcing* here. The input to the decoder at each time step is its previous predictions along with the hidden state and the encoder output.* Stop predicting when the model predicts the *end token*.* The original tutorial also stores the *attention weights for every time step* for plotting; this modified version skips that.Note: The encoder output is calculated only once for one input.
###Code
# restoring the latest checkpoint in checkpoint_dir
checkpoint.restore(tf.train.latest_checkpoint(checkpoint_dir))
def evaluate(sentence, encoder, decoder, inp_lang, targ_lang, max_length_inp, max_length_targ):
sentence = preprocess_sentence(sentence)
inputs = [inp_lang.word2idx[i] for i in sentence.split(' ')]
inputs = tf.keras.preprocessing.sequence.pad_sequences([inputs], maxlen=max_length_inp, padding='post')
inputs = tf.convert_to_tensor(inputs)
result = ''
enc_out, enc_hidden = encoder(inputs)
dec_hidden = enc_hidden
dec_input = tf.expand_dims([targ_lang.word2idx['<start>']], 0)
for t in range(max_length_targ):
predictions = decoder([dec_input, dec_hidden, enc_out])
predicted_id = tf.argmax(predictions[0]).numpy()
result += targ_lang.idx2word[predicted_id] + ' '
if targ_lang.idx2word[predicted_id] == '<end>':
return result, sentence
# the predicted ID is fed back into the model
dec_input = tf.expand_dims([predicted_id], 0)
return result, sentence
def translate(sentence, encoder, decoder, inp_lang, targ_lang, max_length_inp, max_length_targ):
result, sentence = evaluate(sentence, encoder, decoder, inp_lang, targ_lang, max_length_inp, max_length_targ)
print('Input: {}'.format(sentence))
print('Predicted translation: {}'.format(result))
###Output
_____no_output_____
###Markdown
Restore the latest checkpoint and test
###Code
translate(u'اين انت ؟', encoder, decoder, inp_lang, targ_lang, max_length_inp, max_length_targ)
translate(u'من أين أنت', encoder, decoder, inp_lang, targ_lang, max_length_inp, max_length_targ)
# wrong translation
translate(u'اهلا', encoder, decoder, inp_lang, targ_lang, max_length_inp, max_length_targ)
translate(u'اذهب إلى المدرسة', encoder, decoder, inp_lang, targ_lang, max_length_inp, max_length_targ)
translate(u'اصمت', encoder, decoder, inp_lang, targ_lang, max_length_inp, max_length_targ)
translate(u'اخرج من هنا', encoder, decoder, inp_lang, targ_lang, max_length_inp, max_length_targ)
translate(u'هل ستذهب الى المسجد', encoder, decoder, inp_lang, targ_lang, max_length_inp, max_length_targ)
translate(u'الهزيمة كانت قاسية', encoder, decoder, inp_lang, targ_lang, max_length_inp, max_length_targ)
translate(u'اركض', encoder, decoder, inp_lang, targ_lang, max_length_inp, max_length_targ)
###Output
Input: <start> اركض <end>
Predicted translation: i turned down . <end>
###Markdown
Save the models and the dictionaries
###Code
encoder.save('encoder.h5')
decoder.save('decoder.h5')
import csv
def create_csv(file, dict):
with open(file, 'w') as csvfile:
writer = csv.writer(csvfile)
for key in dict.keys():
writer.writerow([key,dict[key]])
create_csv('idx2word.csv', targ_lang.idx2word)
create_csv('word2idx.csv', inp_lang.word2idx)
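# Round-trip sketch (illustrative, not part of the original notebook): the saved CSVs can be
# read back into plain dictionaries; note that keys and values come back as strings.
def load_csv_dict(file):
    with open(file) as csvfile:
        return {row[0]: row[1] for row in csv.reader(csvfile) if row}
# e.g. loaded_word2idx = load_csv_dict('word2idx.csv')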
###Output
_____no_output_____ |
Sort/Bubble Sort.ipynb | ###Markdown
Bubble SortBubble sort is one of the simplest sorting algorithms to understand; however, it is also one of the most inefficient. In the worst case the time complexity is O(n²)
###Code
import random
def bubblesort(input_array):
length = len(input_array)
for i in range(length):
for j in range(length - 1):
if input_array[j] > input_array[j + 1]:
input_array[j], input_array[j + 1] = input_array[j + 1], input_array[j]
return input_array
bubblesort([random.random() for i in range(10)])
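# Optional refinement (a sketch, not part of the original exercise): a common optimisation
# stops as soon as a full pass makes no swaps, which helps on nearly-sorted input while
# keeping the O(n^2) worst case described above.
def bubblesort_early_exit(input_array):
    length = len(input_array)
    for i in range(length):
        swapped = False
        # after i passes, the last i elements are already in their final positions
        for j in range(length - 1 - i):
            if input_array[j] > input_array[j + 1]:
                input_array[j], input_array[j + 1] = input_array[j + 1], input_array[j]
                swapped = True
        if not swapped:  # no swaps means the list is already sorted
            break
    return input_array
bubblesort_early_exit([random.random() for i in range(10)])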
###Output
_____no_output_____ |
01-1. Pandas DA (200709).ipynb | ###Markdown
■ Machine Learning Data Analysis with Pandas ■ Chapter 1. Contents 1. Why data scientists learn pandas 2. Pandas data structures 3. Working with indexes 4. Arithmetic operations ■ Why data scientists learn pandas 1. Data is the crude oil of artificial intelligence, and pandas is a tool optimized for collecting and organizing that data 2. It is free 3. It is built on Python, the easiest language to learn Example: the COVID-19 Kaggle data analysis case ■ The 5 steps of machine learning data analysis 1. Data collection and description: Pandas 2. Data exploration and visualization: Pandas, matplotlib, seaborn 3. Machine learning model training: sklearn 4. Machine learning model evaluation: Pandas 5. Machine learning model performance improvement: Pandas (creating derived variables) ■ 2. Pandas data structures 1. pandas Series 2. pandas DataFrame ■ Understanding the pandas Series A Series is a one-dimensional array in which the data is arranged sequentially. It has a structure similar to the dictionary, Python's built-in data structure. Python dictionary structure index0 ----> data0 index1 ----> data1 index2 ----> data2 index3 ----> data3 : : indexN ----> dataN As shown above, a Series likewise consists of an index and data values.
###Code
import pandas as pd
dict_data = {'a': 1, 'b': 2, 'c': 3}
sr = pd.Series(dict_data) # convert the dictionary to a Series
print(sr)
print(type(sr)) # type is a function that checks an object's class/structure
###Output
a 1
b 2
c 3
dtype: int64
<class 'pandas.core.series.Series'>
###Markdown
※ Exercise 1. Load Case.csv from the COVID-19 data with pd.read_csv to create a DataFrame named case, and check the type of the column called city
###Code
import pandas as pd
case = pd.read_csv('Case.csv')
print(type(case['city']))
print(type(case.city))
###Output
<class 'pandas.core.series.Series'>
<class 'pandas.core.series.Series'>
###Markdown
※ Exercise 2. Print all of the data in the COVID-19 case DataFrame
###Code
import pandas as pd
case = pd.read_csv('Case.csv')
print(case)
###Output
case_id province city group \
0 1000001 Seoul Yongsan-gu True
1 1000002 Seoul Gwanak-gu True
2 1000003 Seoul Guro-gu True
3 1000004 Seoul Yangcheon-gu True
4 1000005 Seoul Dobong-gu True
.. ... ... ... ...
169 6100012 Gyeongsangnam-do - False
170 7000001 Jeju-do - False
171 7000002 Jeju-do - False
172 7000003 Jeju-do - False
173 7000004 Jeju-do from other city True
infection_case confirmed latitude longitude
0 Itaewon Clubs 139 37.538621 126.992652
1 Richway 119 37.48208 126.901384
2 Guro-gu Call Center 95 37.508163 126.884387
3 Yangcheon Table Tennis Club 43 37.546061 126.874209
4 Day Care Center 43 37.679422 127.044374
.. ... ... ... ...
169 etc 20 - -
170 overseas inflow 14 - -
171 contact with patient 0 - -
172 etc 4 - -
173 Itaewon Clubs 1 - -
[174 rows x 8 columns]
###Markdown
※ Exercise 3. Check which columns the COVID-19 case DataFrame has
###Code
import pandas as pd
case = pd.read_csv('Case.csv')
print(case.columns)
###Output
Index([' case_id', 'province', 'city', 'group', 'infection_case', 'confirmed',
'latitude', 'longitude'],
dtype='object')
###Markdown
Column descriptions case_id : case identification number province : province/city name city : district name group : whether the infection was a group infection infection_case : route of infection confirmed : cumulative number of confirmed cases in the area ※ Exercise 4. When printing the DataFrame, the columns are not all shown; some are elided with ... in the middle. What should we do to make them all appear without elision?
###Code
import pandas as pd
case = pd.read_csv('Case.csv')
pd.set_option('display.max_columns',100)
pd.set_option('display.max_rows',200)
print(case)
###Output
case_id province city group \
0 1000001 Seoul Yongsan-gu True
1 1000002 Seoul Gwanak-gu True
2 1000003 Seoul Guro-gu True
3 1000004 Seoul Yangcheon-gu True
4 1000005 Seoul Dobong-gu True
5 1000006 Seoul Guro-gu True
6 1000007 Seoul from other city True
7 1000008 Seoul Dongdaemun-gu True
8 1000009 Seoul from other city True
9 1000010 Seoul Gwanak-gu True
10 1000011 Seoul Eunpyeong-gu True
11 1000012 Seoul Seongdong-gu True
12 1000013 Seoul Jongno-gu True
13 1000014 Seoul Gangnam-gu True
14 1000015 Seoul Jung-gu True
15 1000016 Seoul Seodaemun-gu True
16 1000017 Seoul Jongno-gu True
17 1000018 Seoul Gangnam-gu True
18 1000019 Seoul from other city True
19 1000020 Seoul Geumcheon-gu True
20 1000021 Seoul from other city True
21 1000022 Seoul from other city True
22 1000023 Seoul Jung-gu True
23 1000024 Seoul Yeongdeungpo-gu True
24 1000025 Seoul Gangnam-gu True
25 1000026 Seoul Yangcheon-gu True
26 1000027 Seoul Seocho-gu True
27 1000028 Seoul from other city True
28 1000029 Seoul Gangnam-gu True
29 1000030 Seoul Gangseo-gu True
30 1000031 Seoul from other city True
31 1000032 Seoul Jung-gu True
32 1000033 Seoul from other city True
33 1000034 Seoul - True
34 1000035 Seoul Guro-gu True
35 1000036 Seoul - False
36 1000037 Seoul - False
37 1000038 Seoul - False
38 1100001 Busan Dongnae-gu True
39 1100002 Busan from other city True
40 1100003 Busan Suyeong-gu True
41 1100004 Busan Haeundae-gu True
42 1100005 Busan Jin-gu True
43 1100006 Busan from other city True
44 1100007 Busan from other city True
45 1100008 Busan - False
46 1100009 Busan - False
47 1100010 Busan - False
48 1200001 Daegu Nam-gu True
49 1200002 Daegu Dalseong-gun True
50 1200003 Daegu Seo-gu True
51 1200004 Daegu Dalseong-gun True
52 1200005 Daegu Dong-gu True
53 1200006 Daegu from other city True
54 1200007 Daegu from other city True
55 1200008 Daegu - False
56 1200009 Daegu - False
57 1200010 Daegu - False
58 1300001 Gwangju Dong-gu True
59 1300002 Gwangju from other city True
60 1300003 Gwangju - False
61 1300004 Gwangju - False
62 1300005 Gwangju - False
63 1400001 Incheon from other city True
64 1400002 Incheon from other city True
65 1400003 Incheon from other city True
66 1400004 Incheon from other city True
67 1400005 Incheon - False
68 1400006 Incheon - False
69 1400007 Incheon - False
70 1500001 Daejeon - True
71 1500002 Daejeon Seo-gu True
72 1500003 Daejeon Seo-gu True
73 1500004 Daejeon Seo-gu True
74 1500005 Daejeon Seo-gu True
75 1500006 Daejeon from other city True
76 1500007 Daejeon from other city True
77 1500008 Daejeon - False
78 1500009 Daejeon - False
79 1500010 Daejeon - False
80 1600001 Ulsan from other city True
81 1600002 Ulsan - False
82 1600003 Ulsan - False
83 1600004 Ulsan - False
84 1700001 Sejong Sejong True
85 1700002 Sejong Sejong True
86 1700003 Sejong from other city True
87 1700004 Sejong - False
88 1700005 Sejong - False
89 1700006 Sejong - False
90 2000001 Gyeonggi-do Seongnam-si True
91 2000002 Gyeonggi-do Bucheon-si True
92 2000003 Gyeonggi-do from other city True
93 2000004 Gyeonggi-do from other city True
94 2000005 Gyeonggi-do Uijeongbu-si True
95 2000006 Gyeonggi-do from other city True
96 2000007 Gyeonggi-do from other city True
97 2000008 Gyeonggi-do from other city True
98 2000009 Gyeonggi-do - True
99 2000010 Gyeonggi-do Seongnam-si True
100 2000011 Gyeonggi-do Anyang-si True
101 2000012 Gyeonggi-do Suwon-si True
102 2000013 Gyeonggi-do Anyang-si True
103 2000014 Gyeonggi-do Suwon-si True
104 2000015 Gyeonggi-do from other city True
105 2000016 Gyeonggi-do from other city True
106 2000017 Gyeonggi-do from other city True
107 2000018 Gyeonggi-do from other city True
108 2000019 Gyeonggi-do Seongnam-si True
109 2000020 Gyeonggi-do - False
110 2000021 Gyeonggi-do - False
111 2000022 Gyeonggi-do - False
112 3000001 Gangwon-do from other city True
113 3000002 Gangwon-do from other city True
114 3000003 Gangwon-do Wonju-si True
115 3000004 Gangwon-do from other city True
116 3000005 Gangwon-do from other city True
117 3000006 Gangwon-do - False
118 3000007 Gangwon-do - False
119 3000008 Gangwon-do - False
120 4000001 Chungcheongbuk-do Goesan-gun True
121 4000002 Chungcheongbuk-do from other city True
122 4000003 Chungcheongbuk-do from other city True
123 4000004 Chungcheongbuk-do from other city True
124 4000005 Chungcheongbuk-do - False
125 4000006 Chungcheongbuk-do - False
126 4000007 Chungcheongbuk-do - False
127 4100001 Chungcheongnam-do Cheonan-si True
128 4100002 Chungcheongnam-do from other city True
129 4100003 Chungcheongnam-do Seosan-si True
130 4100004 Chungcheongnam-do from other city True
131 4100005 Chungcheongnam-do from other city True
132 4100006 Chungcheongnam-do - False
133 4100007 Chungcheongnam-do - False
134 4100008 Chungcheongnam-do - False
135 5000001 Jeollabuk-do from other city True
136 5000002 Jeollabuk-do from other city True
137 5000003 Jeollabuk-do from other city True
138 5000004 Jeollabuk-do - False
139 5000005 Jeollabuk-do - False
140 5100001 Jeollanam-do Muan-gun True
141 5100002 Jeollanam-do from other city True
142 5100003 Jeollanam-do - False
143 5100004 Jeollanam-do - False
144 5100005 Jeollanam-do - False
145 6000001 Gyeongsangbuk-do from other city True
146 6000002 Gyeongsangbuk-do Cheongdo-gun True
147 6000003 Gyeongsangbuk-do Bonghwa-gun True
148 6000004 Gyeongsangbuk-do Gyeongsan-si True
149 6000005 Gyeongsangbuk-do from other city True
150 6000006 Gyeongsangbuk-do Yechun-gun True
151 6000007 Gyeongsangbuk-do Chilgok-gun True
152 6000008 Gyeongsangbuk-do Gyeongsan-si True
153 6000009 Gyeongsangbuk-do Gyeongsan-si True
154 6000010 Gyeongsangbuk-do Gumi-si True
155 6000011 Gyeongsangbuk-do - False
156 6000012 Gyeongsangbuk-do - False
157 6000013 Gyeongsangbuk-do - False
158 6100001 Gyeongsangnam-do from other city True
159 6100002 Gyeongsangnam-do Geochang-gun True
160 6100003 Gyeongsangnam-do Jinju-si True
161 6100004 Gyeongsangnam-do Geochang-gun True
162 6100005 Gyeongsangnam-do Changwon-si True
163 6100006 Gyeongsangnam-do Changnyeong-gun True
164 6100007 Gyeongsangnam-do Yangsan-si True
165 6100008 Gyeongsangnam-do from other city True
166 6100009 Gyeongsangnam-do from other city True
167 6100010 Gyeongsangnam-do - False
168 6100011 Gyeongsangnam-do - False
169 6100012 Gyeongsangnam-do - False
170 7000001 Jeju-do - False
171 7000002 Jeju-do - False
172 7000003 Jeju-do - False
173 7000004 Jeju-do from other city True
infection_case confirmed latitude \
0 Itaewon Clubs 139 37.538621
1 Richway 119 37.48208
2 Guro-gu Call Center 95 37.508163
3 Yangcheon Table Tennis Club 43 37.546061
4 Day Care Center 43 37.679422
5 Manmin Central Church 41 37.481059
6 SMR Newly Planted Churches Group 36 -
7 Dongan Church 17 37.592888
8 Coupang Logistics Center 25 -
9 Wangsung Church 30 37.481735
10 Eunpyeong St. Mary's Hospital 14 37.63369
11 Seongdong-gu APT 13 37.55713
12 Jongno Community Center 10 37.57681
13 Samsung Medical Center 7 37.48825
14 Jung-gu Fashion Company 7 37.562405
15 Yeonana News Class 5 37.558147
16 Korea Campus Crusade of Christ 7 37.594782
17 Gangnam Yeoksam-dong gathering 6 -
18 Daejeon door-to-door sales 1 -
19 Geumcheon-gu rice milling machine manufacture 6 -
20 Shincheonji Church 8 -
21 Guri Collective Infection 5 -
22 KB Life Insurance 13 37.560899
23 Yeongdeungpo Learning Institute 3 37.520846
24 Gangnam Dongin Church 1 37.522331
25 Biblical Language study meeting 3 37.524623
26 Seocho Family 5 -
27 Anyang Gunpo Pastors Group 1 -
28 Samsung Fire & Marine Insurance 4 37.498279
29 SJ Investment Call Center 0 37.559649
30 Yongin Brothers 4 -
31 Seoul City Hall Station safety worker 3 37.565699
32 Uiwang Logistics Center 2 -
33 Orange Life 1 -
34 Daezayeon Korea 3 37.486837
35 overseas inflow 298 -
36 contact with patient 162 -
37 etc 100 -
38 Onchun Church 39 35.21628
39 Shincheonji Church 12 -
40 Suyeong-gu Kindergarten 5 35.16708
41 Haeundae-gu Catholic Church 6 35.20599
42 Jin-gu Academy 4 35.17371
43 Itaewon Clubs 4 -
44 Cheongdo Daenam Hospital 1 -
45 overseas inflow 36 -
46 contact with patient 19 -
47 etc 30 -
48 Shincheonji Church 4511 35.84008
49 Second Mi-Ju Hospital 196 35.857375
50 Hansarang Convalescent Hospital 124 35.885592
51 Daesil Convalescent Hospital 101 35.857393
52 Fatima Hospital 39 35.88395
53 Itaewon Clubs 2 -
54 Cheongdo Daenam Hospital 2 -
55 overseas inflow 41 -
56 contact with patient 917 -
57 etc 747 -
58 Gwangneuksa Temple 5 35.136035
59 Shincheonji Church 9 -
60 overseas inflow 23 -
61 contact with patient 5 -
62 etc 1 -
63 Itaewon Clubs 53 -
64 Coupang Logistics Center 42 -
65 Guro-gu Call Center 20 -
66 Shincheonji Church 2 -
67 overseas inflow 68 -
68 contact with patient 6 -
69 etc 11 -
70 Door-to-door sales in Daejeon 55 -
71 Dunsan Electronics Town 13 36.3400973
72 Orange Town 7 36.3398739
73 Dreaming Church 4 36.346869
74 Korea Forest Engineer Institute 3 36.358123
75 Shincheonji Church 2 -
76 Seosan-si Laboratory 2 -
77 overseas inflow 15 -
78 contact with patient 15 -
79 etc 15 -
80 Shincheonji Church 16 -
81 overseas inflow 25 -
82 contact with patient 3 -
83 etc 7 -
84 Ministry of Oceans and Fisheries 31 36.504713
85 gym facility in Sejong 8 36.48025
86 Shincheonji Church 1 -
87 overseas inflow 5 -
88 contact with patient 3 -
89 etc 1 -
90 River of Grace Community Church 67 37.455687
91 Coupang Logistics Center 67 37.530579
92 Itaewon Clubs 59 -
93 Richway 58 -
94 Uijeongbu St. Mary’s Hospital 50 37.758635
95 Guro-gu Call Center 50 -
96 Shincheonji Church 29 -
97 Yangcheon Table Tennis Club 28 -
98 SMR Newly Planted Churches Group 25 -
99 Bundang Jesaeng Hospital 22 37.38833
100 Anyang Gunpo Pastors Group 22 37.381784
101 Lotte Confectionery logistics center 15 37.287356
102 Lord Glory Church 17 37.403722
103 Suwon Saeng Myeong Saem Church 10 37.2376
104 Korea Campus Crusade of Christ 7 -
105 Geumcheon-gu rice milling machine manufacture 6 -
106 Wangsung Church 6 -
107 Seoul City Hall Station safety worker 5 -
108 Seongnam neighbors gathering 5 -
109 overseas inflow 305 -
110 contact with patient 63 -
111 etc 84 -
112 Shincheonji Church 17 -
113 Uijeongbu St. Mary’s Hospital 10 -
114 Wonju-si Apartments 4 37.342762
115 Richway 4 -
116 Geumcheon-gu rice milling machine manufacture 4 -
117 overseas inflow 16 -
118 contact with patient 0 -
119 etc 7 -
120 Goesan-gun Jangyeon-myeon 11 36.82422
121 Itaewon Clubs 9 -
122 Guro-gu Call Center 2 -
123 Shincheonji Church 6 -
124 overseas inflow 13 -
125 contact with patient 8 -
126 etc 11 -
127 gym facility in Cheonan 103 36.81503
128 Door-to-door sales in Daejeon 10 -
129 Seosan-si Laboratory 9 37.000354
130 Richway 3 -
131 Eunpyeong-Boksagol culture center 3 -
132 overseas inflow 16 -
133 contact with patient 2 -
134 etc 12 -
135 Itaewon Clubs 2 -
136 Door-to-door sales in Daejeon 3 -
137 Shincheonji Church 1 -
138 overseas inflow 12 -
139 etc 5 -
140 Manmin Central Church 2 35.078825
141 Shincheonji Church 1 -
142 overseas inflow 14 -
143 contact with patient 4 -
144 etc 4 -
145 Shincheonji Church 566 -
146 Cheongdo Daenam Hospital 119 35.64887
147 Bonghwa Pureun Nursing Home 68 36.92757
148 Gyeongsan Seorin Nursing Home 66 35.782149
149 Pilgrimage to Israel 41 -
150 Yechun-gun 40 36.646845
151 Milal Shelter 36 36.0581
152 Gyeongsan Jeil Silver Town 17 35.84819
153 Gyeongsan Cham Joeun Community Center 16 35.82558
154 Gumi Elim Church 10 -
155 overseas inflow 22 -
156 contact with patient 190 -
157 etc 133 -
158 Shincheonji Church 32 -
159 Geochang Church 10 35.68556
160 Wings Tower 9 35.164845
161 Geochang-gun Woongyang-myeon 8 35.805681
162 Hanmaeum Changwon Hospital 7 35.22115
163 Changnyeong Coin Karaoke 7 35.54127
164 Soso Seowon 3 35.338811
165 Itaewon Clubs 2 -
166 Onchun Church 2 -
167 overseas inflow 26 -
168 contact with patient 6 -
169 etc 20 -
170 overseas inflow 14 -
171 contact with patient 0 -
172 etc 4 -
173 Itaewon Clubs 1 -
longitude
0 126.992652
1 126.901384
2 126.884387
3 126.874209
4 127.044374
5 126.894343
6 -
7 127.056766
8 -
9 126.930121
10 126.9165
11 127.0403
12 127.006
13 127.08559
14 126.984377
15 126.943799
16 126.968022
17 -
18 -
19 -
20 -
21 -
22 126.966998
23 126.931278
24 127.057388
25 126.843118
26 -
27 -
28 127.030139
29 126.835102
30 -
31 126.977079
32 -
33 -
34 126.893163
35 -
36 -
37 -
38 129.0771
39 -
40 129.1124
41 129.1256
42 129.0633
43 -
44 -
45 -
46 -
47 -
48 128.5667
49 128.466651
50 128.556649
51 128.466653
52 128.624059
53 -
54 -
55 -
56 -
57 -
58 126.956405
59 -
60 -
61 -
62 -
63 -
64 -
65 -
66 -
67 -
68 -
69 -
70 -
71 127.3927099
72 127.3819744
73 127.368594
74 127.388856
75 -
76 -
77 -
78 -
79 -
80 -
81 -
82 -
83 -
84 127.265172
85 127.289
86 -
87 -
88 -
89 -
90 127.161627
91 126.775254
92 -
93 -
94 127.077716
95 -
96 -
97 -
98 -
99 127.1218
100 126.93615
101 127.013827
102 126.954939
103 127.0517
104 -
105 -
106 -
107 -
108 -
109 -
110 -
111 -
112 -
113 -
114 127.983815
115 -
116 -
117 -
118 -
119 -
120 127.9552
121 -
122 -
123 -
124 -
125 -
126 -
127 127.1139
128 -
129 126.354443
130 -
131 -
132 -
133 -
134 -
135 -
136 -
137 -
138 -
139 -
140 126.316746
141 -
142 -
143 -
144 -
145 -
146 128.7368
147 128.9099
148 128.801498
149 -
150 128.437416
151 128.4941
152 128.7621
153 128.7373
154 -
155 -
156 -
157 -
158 -
159 127.9127
160 128.126969
161 127.917805
162 128.6866
163 128.5008
164 129.017508
165 -
166 -
167 -
168 -
169 -
170 -
171 -
172 -
173 -
###Markdown
※ Exercise 5. Load emp3.csv, create an emp DataFrame, and print the whole emp table so that every column is displayed
###Code
import pandas as pd
emp = pd.read_csv('emp3.csv')
print(emp)
###Output
index empno ename job mgr hiredate sal comm \
0 1 7839 KING PRESIDENT NaN 1981-11-17 0:00 5000 NaN
1 2 7698 BLAKE MANAGER 7839.0 1981-05-01 0:00 2850 NaN
2 3 7782 CLARK MANAGER 7839.0 1981-05-09 0:00 2450 NaN
3 4 7566 JONES MANAGER 7839.0 1981-04-01 0:00 2975 NaN
4 5 7654 MARTIN SALESMAN 7698.0 1981-09-10 0:00 1250 1400.0
5 6 7499 ALLEN SALESMAN 7698.0 1981-02-11 0:00 1600 300.0
6 7 7844 TURNER SALESMAN 7698.0 1981-08-21 0:00 1500 0.0
7 8 7900 JAMES CLERK 7698.0 1981-12-11 0:00 950 NaN
8 9 7521 WARD SALESMAN 7698.0 1981-02-23 0:00 1250 500.0
9 10 7902 FORD ANALYST 7566.0 1981-12-11 0:00 3000 NaN
10 11 7369 SMITH CLERK 7902.0 1980-12-09 0:00 800 NaN
11 12 7788 SCOTT ANALYST 7566.0 1982-12-22 0:00 3000 NaN
12 13 7876 ADAMS CLERK 7788.0 1983-01-15 0:00 1100 NaN
13 14 7934 MILLER CLERK 7782.0 1982-01-11 0:00 1300 NaN
deptno
0 10
1 30
2 10
3 20
4 30
5 30
6 30
7 30
8 30
9 20
10 20
11 20
12 20
13 10
###Markdown
■ 2.2 index and values of a pandas Series The index array alone can be selected using the Series class's index attribute. Syntax: Series_object.index The array of values alone can be selected using the Series class's values attribute. Syntax: Series_object.values
###Code
# 예제
import pandas as pd
list_data = ['a', 'b', 'c', 'd']
sr = pd.Series(list_data)
print(sr)
print(sr.index)
print(sr.values)
###Output
0 a
1 b
2 c
3 d
dtype: object
RangeIndex(start=0, stop=4, step=1)
['a' 'b' 'c' 'd']
###Markdown
※ Problem 6. From the COVID-19 Case DataFrame, put the values of confirmed (the cumulative number of confirmed cases) into an empty list called cf_num
###Code
import pandas as pd
case = pd.read_csv('Case.csv')
cf_num = list(case['confirmed'])
cf_num
###Output
_____no_output_____
###Markdown
Problem 7. Print the mean of the cumulative confirmed case counts
###Code
import numpy as np
print(sum(cf_num)/len(cf_num))
print(np.mean(cf_num))
###Output
65.48850574712644
65.48850574712644
###Markdown
■ 2.3 Selecting Series elements. You can select the elements of a Series using the index, which acts as an address for each element's position. You can select a single element or several elements at once.
###Code
# Example
tup_data = ('영인', '2020-01-28', '여', True)
sr = pd.Series(tup_data, index=['이름', '생년월일', '성별', '학생여부'])
print(sr)
print(sr['이름']) # select a single element
print(sr[['이름','생년월일']]) # select several elements by passing a list
###Output
이름 영인
생년월일 2020-01-28
성별 여
학생여부 True
dtype: object
영인
이름 영인
생년월일 2020-01-28
dtype: object
###Markdown
■ 2.4 Pandas DataFrame. A pandas DataFrame is a two-dimensional array. This row-and-column structure is used in many places, such as Excel and RDBMSs like Oracle. The pandas DataFrame data structure originates from R's data frame. If a Series is a column vector, a DataFrame is a two-dimensional matrix in which column vectors sharing the same row index are lined up side by side.
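A small sketch (toy data) of the idea above that each DataFrame column is a Series sharing the common row index:

```python
import pandas as pd

df_demo = pd.DataFrame({'c0': [1, 2, 3], 'c1': [4, 5, 6]}, index=['r0', 'r1', 'r2'])
col = df_demo['c0']                     # each column is a Series
print(type(col))                        # pandas.core.series.Series
print(col.index.equals(df_demo.index))  # True: the column shares the row index
```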
###Code
# Example
import pandas as pd
dict_data = {'c0':[1,2,3],'c1':[4,5,6],'c2':[6,7,8]}
df = pd.DataFrame(dict_data)
df
###Output
_____no_output_____
###Markdown
※ Problem 8. Load the Titanic data downloaded from Kaggle (train.csv) into a variable named titanic and print which columns it contains
###Code
import pandas as pd
titanic = pd.read_csv('train.csv')
titanic.columns
###Output
_____no_output_____
###Markdown
※ Problem 9. (lunchtime exercise) Create the DataFrame shown below```pythonprint(df2) Passengerid Survived Pclass age 1 0 3 22 2 1 1 38 3 1 3 26```
###Code
import pandas as pd
titanic = pd.read_csv('train.csv')
data={'Passengerid':[1,2,3],'Survived':[0,1,1],'Pclass':[3,1,3],'Age':[22,38,26]}
df2 = pd.DataFrame(data)
df2
###Output
_____no_output_____
###Markdown
■ How to create a DataFrame from arbitrarily generated data
###Code
import pandas as pd
dates=pd.date_range('20190101',periods=6)
dates
import numpy as np
np.random.randn(6,4) # produces a 6x4 matrix of random numbers
df = pd.DataFrame(np.random.randn(6,4), index=dates, columns=list('ABCD'))
df
# Add a label column to the df DataFrame above marking each row as a poisonous (p) or normal (n) mushroom
k = pd.Categorical(['p','p','n','n','p','n'])
k
###Output
_____no_output_____
###Markdown
※ Problem 10. Using the code below, add the labels to the end of the df DataFrame above as a column named F
###Code
k = pd.Categorical(['P','P','N','N','P','N'])
df = pd.DataFrame(np.random.randn(6,4), index=dates, columns=list('ABCD'))
df['F']=k
df
print(type(df['F']))
df['F']
###Output
<class 'pandas.core.series.Series'>
###Markdown
■ 2.6 How to specify the row index / column names of a pandas DataFrame```python Syntax: pd.DataFrame(2-D array, index=row index array, columns=column name array)```
###Code
# Example
df = pd.DataFrame([[15,'남','역삼중'],[14,'여','역삼중']], index=['준서','예은'], columns=['나이','성별','학교'])
df
###Output
_____no_output_____
###Markdown
※ Problem 11. Use the rename() method to change the column names and row names of df as follows. Column names: 나이, 성별, 학교 -> 연령, 남녀, 소속; Row names: 준서, 예은 -> 학생1, 학생2
###Code
df = pd.DataFrame([[15, '남', '역삼중'], [14, '여', '역삼중']], index=[
'준서', '예은'], columns=['나이', '성별', '학교'])
df.rename(columns={'나이': '연령', '성별': '남녀', '학교': '소속'},
index={'준서': '학생1', '예은': '학생2'}, inplace=True)
# inplace=True : modify the original object directly
# inplace=False : return a new object and leave the original data unchanged
df
###Output
_____no_output_____
###Markdown
■ 2.7 How to delete rows/columns from a pandas DataFrame. Rows and columns are deleted with the drop() method: axis=0 deletes rows, axis=1 deletes columns```python Syntax: DataFrame_object.drop(row index or array of row indexes, axis=0) : delete rows / DataFrame_object.drop(column name or array of column names, axis=1) : delete columns``` ※ Problem 12. Create the DataFrame below 수학 영어 음악 체육 서준 90 98 85 100 우현 80 89 95 90 인아 70 95 100 90
###Code
df = pd.DataFrame([[90, 98, 85, 100], [80, 89, 95, 90], [70, 95, 100, 90]], index=['서준', '우현', '인아'], columns=['수학', '영어', '음악', '체육'])
df
###Output
_____no_output_____
###Markdown
※ Problem 13. Copy the df DataFrame above into a DataFrame named df2
###Code
df2 = df[:]
df2
###Output
_____no_output_____
###Markdown
※ Problem 14. Delete the row for 우현 from the df2 DataFrame above
###Code
df2.drop('우현', axis=0, inplace=True)
df2
###Output
C:\Users\knitwill\anaconda3\lib\site-packages\pandas\core\frame.py:3997: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
errors=errors,
###Markdown
※ Problem 15. Delete the 영어 (English) column from the df2 DataFrame
###Code
df2.drop('영어',axis=1,inplace=True)
df2
###Output
C:\Users\knitwill\anaconda3\lib\site-packages\pandas\core\frame.py:3997: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
errors=errors,
###Markdown
※ Problem 16. Delete the employees whose job is SALESMAN from the emp DataFrame
###Code
import pandas as pd
emp = pd.read_csv('emp3.csv',index_col='job') # create the DataFrame with job as the (row) index
emp_c = emp[:]
emp_c.drop('SALESMAN',axis=0,inplace=True)
emp_c
###Output
C:\Users\knitwill\anaconda3\lib\site-packages\pandas\core\frame.py:3997: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
errors=errors,
###Markdown
■ 2.8 Selecting rows in pandas. To select row data from a DataFrame, use the loc and iloc indexers: use loc to select rows by index label and iloc to select rows by integer position. Comparison — loc: looks up index labels, and a range selection includes the end of the range (e.g. ['a':'c'] -> 'a','b','c'); iloc: looks up integer positions, and a range selection excludes the end of the range (e.g. [3:7] -> 3,4,5,6, 7 excluded). ※ Problem 17. From the df DataFrame below, look up the row for 서준 수학 영어 음악 체육 서준 90 98 85 100 우현 80 89 95 90 인아 70 95 100 90
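Before the solution below, a quick toy illustration of the slicing rule summarized above — label slices with loc include the end label, while integer slices with iloc exclude the stop position:

```python
import pandas as pd

s = pd.Series([10, 20, 30, 40], index=['a', 'b', 'c', 'd'])
print(s.loc['a':'c'])   # labels a, b, c  (end label included)
print(s.iloc[0:2])      # positions 0, 1  (stop position excluded)
```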
###Code
data = {'수학':[90,80,70],'영어':[98,89,95],'음악':[85,95,100],'체육':[100,90,90]}
df = pd.DataFrame(data, index=['서준','우현','인아'])
df.loc['서준'], df.iloc[0]
###Output
_____no_output_____
###Markdown
※ Problem 18. From the COVID-19 data, query only the rows whose infection_case is Itaewon Clubs
###Code
import pandas as pd
case = pd.read_csv('Case.csv',index_col='infection_case')
case.loc['Itaewon Clubs']
###Output
_____no_output_____
###Markdown
※ Problem 19. Query Itaewon Clubs and Richway together
###Code
import pandas as pd
case = pd.read_csv('Case.csv',index_col='infection_case')
case.loc[['Itaewon Clubs','Richway']]
###Output
_____no_output_____
###Markdown
※ Problem 20. Print only the top 3 rows of the COVID-19 case data
###Code
import pandas as pd
case = pd.read_csv('Case.csv')
case.iloc[0:3]
case.head()
###Output
_____no_output_____
###Markdown
※ Problem 21. Print only the bottom 3 rows of the COVID-19 case data
###Code
import pandas as pd
case = pd.read_csv('Case.csv')
case.tail(3)
case.iloc[-3:]
###Output
_____no_output_____
###Markdown
■ 2.9 Selecting columns in pandas. When selecting columns by name: 1. select one column (returns a Series): DataFrame['column name'] / DataFrame_object.column_name; 2. select several columns (returns a DataFrame): DataFrame[['col1', 'col2', 'col3', ...]] ※ Problem 22. From the COVID-19 data, fetch the case_id and group columns
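Before the solution below, a toy sketch of the single- vs. double-bracket behaviour described above:

```python
import pandas as pd

df_demo = pd.DataFrame({'a': [1, 2], 'b': [3, 4]})
print(type(df_demo['a']))          # one column name  -> Series
print(type(df_demo[['a', 'b']]))   # list of columns  -> DataFrame
```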
###Code
import pandas as pd
case = pd.read_csv('Case.csv')
case[[' case_id','group']]
###Output
_____no_output_____
###Markdown
※ Problem 23. Rename the case_id column of the case DataFrame so that the leading space in ' case_id' is removed
###Code
import pandas as pd
case = pd.read_csv('Case.csv')
case.rename(columns={' case_id':'case_id'},inplace=True)
case['case_id']
###Output
_____no_output_____
###Markdown
■ 2.10 Advanced use of range slicing. DataFrame_object.iloc[start index : stop index : step]. The slice includes the start index and runs up to the index one less than the stop index; if the step is not specified, it increases by 1
###Code
# Example: from the COVID-19 case data, print every other row among the first 10 rows
case.iloc[0:10:2]
###Output
_____no_output_____
###Markdown
※ Problem 24. Print the COVID-19 data in reverse order, starting from the last row
###Code
case[::-1]
###Output
_____no_output_____
###Markdown
Load PatientInfo.csv
###Code
import pandas as pd
pf = pd.read_csv('PatientInfo.csv')
pf
###Output
_____no_output_____
###Markdown
■ 2.11 How to select a specific element from a DataFrame```pythonSpecify the element's position as 2-D [row, column] coordinates using the DataFrame's row index and column names. Syntax: 1. look up by label: DataFrame_object.loc[row label, column label] 2. look up by position: DataFrame_object.iloc[row number, column number]``` 수학 영어 음악 체육 서준 90 98 85 100 우현 80 89 95 90 인아 70 95 100 90
###Code
# Example
import pandas as pd
data = {'수학':[90,80,70],'영어':[98,89,95],'음악':[85,95,100],'체육':[100,90,90]}
df = pd.DataFrame(data, index=['서준','우현','인아'])
df.loc['서준','음악'], df.iloc[0,2]
###Output
_____no_output_____
###Markdown
※ Problem 25. In PatientInfo.csv, by which patient number was patient 1000000010 infected with COVID-19?
###Code
import pandas as pd
pf = pd.read_csv('PatientInfo.csv')
pf.iloc[9,7]
###Output
_____no_output_____
###Markdown
■ 2.12 Adding columns (feature engineering) (essential to know in order to create derived variables). Syntax: DataFrame_object['new column name'] = data value 수학 영어 음악 체육 국어 서준 90 98 85 100 80 우현 80 89 95 90 80 인아 70 95 100 90 80
###Code
import pandas as pd
data = {'수학':[90,80,70],'영어':[98,89,95],'음악':[85,95,100],'체육':[100,90,90]}
df = pd.DataFrame(data, index=['서준','우현','인아'])
df['국어']=80
df
###Output
_____no_output_____
###Markdown
※ Problem 26. When building a machine learning model that classifies Titanic survivors and non-survivors, a derived variable based on the age column turned out to be important. Add the square of Age to the Titanic DataFrame: create a derived variable named age2 that holds the squared Age values
###Code
import pandas as pd
tit = pd.read_csv('train.csv')
tit['age2']=tit['Age']**2
tit
###Output
_____no_output_____
###Markdown
※ Problem 27. Check how many rows and columns the Titanic DataFrame has
###Code
tit.shape
###Output
_____no_output_____
###Markdown
■ 2.13 Adding rows. Syntax: DataFrame_object.loc['new row name'] = data value 수학 영어 음악 체육 국어 서준 90 98 85 100 80 우현 80 89 95 90 80 인아 70 95 100 90 80 동규 0 0 0 0 0 ※ Problem 28. Insert the following data into the df DataFrame as a new row named 현수 수학 영어 음악 체육 국어 서준 90 98 85 100 80 우현 80 89 95 90 80 인아 70 95 100 90 80 동규 0 0 0 0 0 현수 90 99 97 93 89
###Code
import pandas as pd
data = {'수학':[90,80,70],'영어':[98,89,95],'음악':[85,95,100],'체육':[100,90,90]}
df = pd.DataFrame(data, index=['서준','우현','인아'])
df['국어']=80
df.loc['동규']=0
df.loc['현수']=[90,99,97,93,89]
df
###Output
_____no_output_____
###Markdown
■ 2.14 Changing an element's value. If you select a specific element of the DataFrame and assign a new value to it, the element is updated. Syntax: df.loc['서준']['국어']=100 수학 영어 음악 체육 국어 서준 90 98 85 100 100 우현 80 89 95 90 80 인아 70 95 100 90 80 동규 0 0 0 0 0 현수 90 99 97 93 89
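Note that the `df.loc['row']['col'] = value` form above is chained indexing, which can raise a `SettingWithCopyWarning` and may silently fail on a copy; a single `.loc` call with both labels is the safer equivalent (toy sketch):

```python
import pandas as pd

df_demo = pd.DataFrame({'국어': [80, 80]}, index=['서준', '우현'])
df_demo.loc['서준', '국어'] = 100   # row and column labels in one .loc call
print(df_demo)
```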
###Code
df.loc['서준']['국어']=100
df
###Output
_____no_output_____
###Markdown
※ Problem 29. Change 동규's 체육 (PE) score to 90
###Code
df.loc['동규']['체육']=90
df
###Output
_____no_output_____
###Markdown
■ 2.15 Swapping the rows and columns of a DataFrame. Syntax: df = df.transpose() or df.T 수학 영어 음악 체육 국어 서준 90 98 85 100 100 우현 80 89 95 90 80 인아 70 95 100 90 80 동규 0 0 0 0 0 현수 90 99 97 93 89
###Code
df.transpose()
###Output
_____no_output_____
###Markdown
■ 2.16 Checking for missing values (NaN)
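A minimal sketch (toy frame) of the pattern used in the cells below:

```python
import numpy as np
import pandas as pd

toy = pd.DataFrame({'a': [1, np.nan, 3], 'b': [np.nan, np.nan, 6]})
print(toy.isnull())              # element-wise True/False mask
print(toy.isnull().sum())        # missing-value count per column
print(toy.isnull().sum().sum())  # total number of missing values
```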
###Code
import pandas as pd
emp = pd.read_csv('emp3.csv')
emp.isnull()
emp['comm'].isnull()
###Output
_____no_output_____
###Markdown
※ Problem 30. Print the total number of missing values in the emp DataFrame
###Code
print(sum(emp.isnull().sum()))
print(emp.comm.isnull().sum())
###Output
11
10
###Markdown
※ Problem 31. Replace the missing values in the comm (commission) column of the emp DataFrame with 0
###Code
emp['comm']=emp['comm'].fillna(0)
emp
###Output
_____no_output_____
###Markdown
■ 2.17 How to sort data by a specific column. Syntax: df_sort = df.sort_values(by='column name', ascending=True/False). Example: print the emp DataFrame sorted by salary in descending order
###Code
import pandas as pd
emp = pd.read_csv('emp3.csv')
emp = emp.sort_values(by='sal',ascending=False)
emp
###Output
_____no_output_____
###Markdown
※ Problem 32. (last problem of the day) From PatientInfo.csv, print infected_by and patient_id, sorted by infected_by in ascending order
###Code
import pandas as pd
pf = pd.read_csv('PatientInfo.csv')
pf = pf.sort_values(by='infected_by',ascending=True)
pf[['patient_id','infected_by']]
###Output
_____no_output_____ |
adam_asmaca.ipynb | ###Markdown
###Code
import random
kelimeler = ["dolap", "kitap", "lamba"]
kelime = random.choice(kelimeler)
tahminsayisi = 4
harfler = []
a = len(kelime)
b = list('_'*a)
print(''.join(b), end='\n')
while tahminsayisi > 0:
c = input("Harf tahmininde bulunun! :")
if c in harfler:
print("lütfen farklı kelime tahmin ediniz!")
continue
elif len(c) > 1:
print("sadece bir harf giriniz!")
continue
else:
for d in range(len(kelime)):
if c == kelime[d]:
print("doğru sonuç!")
b[d] = c
harfler.append(c)
print(''.join(b), end='\n')
cevap = input("kelimeyi tahmin edebilirsiniz ['k' veya 'h'] : ")
if cevap == "k":
tahmin = input("kelimeyi yazabilirsiniz! : ")
if tahmin == kelime:
print("tebrikler doğru!")
break
else:
tahminsayisi -= 1
print("yanlıs tahmin! {} tane hakkınız kaldı".format(tahminsayisi))
if tahminsayisi == 0:
print("tahmin hakkınız bitmiştir. Adam asıldı.")
break
###Output
_____no_output_____
###Markdown
###Code
import random
kelimeler = ["menekşe" , "yasemin" , "zambak" , "papatya" , "orkide" ]
kelime = random.choice(kelimeler)
canSayisi = 3
harfler = []
x = len(kelime)
z = list('_' * x)
print("******ADAM ASMACA OYUNUNA HOŞGELDİNİZ!*****")
print("Tahmin edeceğiniz kelime bir çiçek çeşitidir.")
print(' '.join(z), end='\n')
while canSayisi > 0:
kullanıcı = input("Bir harf tahmin edin : ")
if kullanıcı in harfler:
print("Lutfen daha once tahmin ettiginiz harfleri tekrar tahmin etmeyin...")
continue
elif len(kullanıcı) > 1:
print("Lütfen sadece bir harf giriniz.")
continue
elif kullanıcı not in kelime:
canSayisi -= 1
print("Yanlış tahmin ettiniz!. {} tane tahmin hakkiniz kaldi.".format(canSayisi))
else:
for i in range(len(kelime)):
if kullanıcı == kelime[i]:
z[i] = kullanıcı
harfler.append(kullanıcı)
print(' '.join(z), end='\n')
cevap = input("Kelimenin tamamini tahmin etmek istiyor musunuz? ['e' veya 'h'] : ")
if cevap == "e":
tahmin = input("Kelimenin tamamini tahmin edebilirsiniz : ")
if tahmin == kelime:
print("Tebrikler bildiniz...")
break
else:
canSayisi -= 1
print("Yanlis tahmin ettiniz. {} tane tahmin hakkiniz kaldi.".format(canSayisi))
if canSayisi == 0:
print("Tahmin hakkiniz kalmadi. Kaybettiniz!")
break
###Output
******ADAM ASMACA OYUNUNA HOŞGELDİNİZ!*****
Tahmin edeceğiniz kelime bir çiçek çeşitidir.
_ _ _ _ _ _
Bir harf tahmin edin : a
_ a _ _ a _
Kelimenin tamamini tahmin etmek istiyor musunuz? ['e' veya 'h'] : e
Kelimenin tamamini tahmin edebilirsiniz : zambak
Tebrikler bildiniz...
###Markdown
###Code
from random import randint
suRunGenler = ["yılan","kertenkele","timsah","kaplumbağa"]
sec = randint(0,3)
print("3 yanlış tahmin hakkına sahipsiniz!")
print("Seçilen kelimenin harf sayısı: ",len(suRunGenler[sec]))
szck= []
for i in suRunGenler[sec]:
szck.append("_")
print(szck)
sayac=0
yanlis = 1
while yanlis <=3:
liste=[]
for k in suRunGenler[sec]:
liste.append(k)
harf = input(" Bir harf yazınız: ")
if harf in suRunGenler[sec]:
print(harf + " doğru harf tahmini ")
for j in range(0,len(liste)):
if harf==liste[j]:
szck[j]=harf
sayac=sayac+1
print(szck)
if sayac==len(liste):
print("Kazandınız")
break
else:
print(harf + " Yanlış harf tahmini ")
yanlis = yanlis+1
can=4-yanlis
if can>0:
print(can,"canınız kaldı.")
if yanlis>3:
print("Kaybettin.Hata sınırı aşıldı.")
###Output
3 yanlış tahmin hakkına sahipsiniz!
Seçilen kelimenin harf sayısı: 10
['_', '_', '_', '_', '_', '_', '_', '_', '_', '_']
###Markdown
###Code
print("Adam asmaca oyununa hoşgeldiniz !!!!!toplamda 5 deneme hakkınız bulunmaktadır...")
kelimeler = ["ruj", "maskara", "ananas",]
for kelime in kelimeler:
can = 5
bulunan = ""
while True:
for i in kelime:
if i not in bulunan:
print("-", end = " ")
else:
print(i, end =" ")
print("\n{} deneme hakkınız kaldı".format(can))
x = input("Lütfen bir harf girin: ")
if (x in kelime) & (x not in bulunan):
if x not in bulunan:
bulunan = bulunan + x
else:
can -= 1
finished = True
for i in kelime:
if i not in bulunan:
finished = False
if finished:
print("\nTebrikler Bildiniz :)")
tekrar = input("\nTekrar oynamak ister misiniz ? (evet / hayır): ")
if tekrar == "evet":
break
else:
exit()
if can == 0:
print("\nAdam asıldı.. Kaybettiniz :(")
tekrar = input("\nTekrar oynamak ister misiniz ? (evet / hayır): ")
if tekrar == "evet":
break
###Output
Adam asmaca oyununa hoşgeldiniz !!!!!toplamda 5 deneme hakkınız bulunmaktadır...
- - -
5 deneme hakkınız kaldı
Lütfen bir harf girin: u
- u -
5 deneme hakkınız kaldı
Lütfen bir harf girin: j
- u j
5 deneme hakkınız kaldı
Lütfen bir harf girin: r
Tebrikler Bildiniz :)
###Markdown
###Code
__author__="CANSEL KUNDUKAN"
print("ADAM ASMACA OYUNUNA HOŞGELDİNİZ...")
print("ip ucu=Oyunumuz da ülke isimlerini bulmaya çalışıyoruz")
from random import choice
while True:
kelime = choice (["ispanya", "almanya","japonya","ingiltere","brezilya","mısır","macaristan","hindistan"])
kelime = kelime.upper()
harfsayisi = len(kelime)
print("Kelimemiz {} harflidir.\n".format(harfsayisi))
tahminler = []
hata = []
KalanCan = 3
while KalanCan > 0:
bos = ""
for girilenharf in kelime:
if girilenharf in tahminler:
bos = bos + girilenharf
else:
bos = bos + " _ "
if bos == kelime:
print("Tebrikler!")
break
print("Kelimeyi Tahmin Ediniz", bos)
print(KalanCan, "Canınız Kaldı")
Tahmin = input("Bir Harf Giriniz :")
Tahmin = Tahmin.upper()
if Tahmin == kelime:
print("\n\n Tebrikler\n\n")
break
elif Tahmin in kelime:
rpt = kelime.count(Tahmin)
print("Dogru.{0} Harfi Kelimemiz İçerisinde {1} Kere Geçiyor".format(Tahmin, rpt))
tahminler.append(Tahmin)
else:
print("Yanlış.")
hata.append(Tahmin)
KalanCan = KalanCan - 1
if KalanCan == 0:
print("\n\nHiç Hakkınız Kalmadı.")
print("Kelimemiz {}\n\n".format(kelime))
print("Oyundan Çıkmak İstiyorsanız\n'X' Tuşuna Basınız\nDevam Etmek İçin -> ENTER. ")
devam = input(":")
devam = devam.upper()
if devam == "X":
break
else:
continue
###Output
ADAM ASMACA OYUNUNA HOŞGELDİNİZ...
ip ucu=Oyunumuz da ülke isimlerini bulmaya çalışıyoruz
Kelimemiz 9 harflidir.
Kelimeyi Tahmin Ediniz _ _ _ _ _ _ _ _ _
3 Canınız Kaldı
Bir Harf Giriniz :b
Yanlış.
Kelimeyi Tahmin Ediniz _ _ _ _ _ _ _ _ _
2 Canınız Kaldı
Bir Harf Giriniz :a
Dogru.A Harfi Kelimemiz İçerisinde 1 Kere Geçiyor
Kelimeyi Tahmin Ediniz _ _ _ _ _ _ _ A _
2 Canınız Kaldı
Bir Harf Giriniz :i
Dogru.I Harfi Kelimemiz İçerisinde 2 Kere Geçiyor
Kelimeyi Tahmin Ediniz _ I _ _ I _ _ A _
2 Canınız Kaldı
Bir Harf Giriniz :m
Yanlış.
Kelimeyi Tahmin Ediniz _ I _ _ I _ _ A _
1 Canınız Kaldı
Bir Harf Giriniz :b
Yanlış.
Hiç Hakkınız Kalmadı.
Kelimemiz HINDISTAN
Oyundan Çıkmak İstiyorsanız
'X' Tuşuna Basınız
Devam Etmek İçin -> ENTER.
|
ipynb/Austria.ipynb | ###Markdown
Austria* Homepage of project: https://oscovida.github.io* Plots are explained at http://oscovida.github.io/plots.html* [Execute this Jupyter Notebook using myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Austria.ipynb)
###Code
import datetime
import time
start = datetime.datetime.now()
print(f"Notebook executed on: {start.strftime('%d/%m/%Y %H:%M:%S%Z')} {time.tzname[time.daylight]}")
%config InlineBackend.figure_formats = ['svg']
from oscovida import *
overview("Austria", weeks=5);
overview("Austria");
compare_plot("Austria", normalise=True);
# load the data
cases, deaths = get_country_data("Austria")
# compose into one table
table = compose_dataframe_summary(cases, deaths)
# show tables with up to 500 rows
pd.set_option("max_rows", 500)
# display the table
table
###Output
_____no_output_____
###Markdown
Explore the data in your web browser- If you want to execute this notebook, [click here to use myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Austria.ipynb)- and wait (~1 to 2 minutes)- Then press SHIFT+RETURN to advance code cell to code cell- See http://jupyter.org for more details on how to use Jupyter Notebook Acknowledgements:- Johns Hopkins University provides data for countries- Robert Koch Institute provides data for within Germany- Atlo Team for gathering and providing data from Hungary (https://atlo.team/koronamonitor/)- Open source and scientific computing community for the data tools- Github for hosting repository and html files- Project Jupyter for the Notebook and binder service- The H2020 project Photon and Neutron Open Science Cloud ([PaNOSC](https://www.panosc.eu/))--------------------
###Code
print(f"Download of data from Johns Hopkins university: cases at {fetch_cases_last_execution()} and "
f"deaths at {fetch_deaths_last_execution()}.")
# to force a fresh download of data, run "clear_cache()"
print(f"Notebook execution took: {datetime.datetime.now()-start}")
###Output
_____no_output_____
###Markdown
Austria* Homepage of project: https://oscovida.github.io* Plots are explained at http://oscovida.github.io/plots.html* [Execute this Jupyter Notebook using myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Austria.ipynb)
###Code
import datetime
import time
start = datetime.datetime.now()
print(f"Notebook executed on: {start.strftime('%d/%m/%Y %H:%M:%S%Z')} {time.tzname[time.daylight]}")
%config InlineBackend.figure_formats = ['svg']
from oscovida import *
overview("Austria", weeks=5);
overview("Austria");
compare_plot("Austria", normalise=True);
# load the data
cases, deaths = get_country_data("Austria")
# get population of the region for future normalisation:
inhabitants = population("Austria")
print(f'Population of "Austria": {inhabitants} people')
# compose into one table
table = compose_dataframe_summary(cases, deaths)
# show tables with up to 1000 rows
pd.set_option("max_rows", 1000)
# display the table
table
###Output
_____no_output_____
###Markdown
Explore the data in your web browser- If you want to execute this notebook, [click here to use myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Austria.ipynb)- and wait (~1 to 2 minutes)- Then press SHIFT+RETURN to advance code cell to code cell- See http://jupyter.org for more details on how to use Jupyter Notebook Acknowledgements:- Johns Hopkins University provides data for countries- Robert Koch Institute provides data for within Germany- Atlo Team for gathering and providing data from Hungary (https://atlo.team/koronamonitor/)- Open source and scientific computing community for the data tools- Github for hosting repository and html files- Project Jupyter for the Notebook and binder service- The H2020 project Photon and Neutron Open Science Cloud ([PaNOSC](https://www.panosc.eu/))--------------------
###Code
print(f"Download of data from Johns Hopkins university: cases at {fetch_cases_last_execution()} and "
f"deaths at {fetch_deaths_last_execution()}.")
# to force a fresh download of data, run "clear_cache()"
print(f"Notebook execution took: {datetime.datetime.now()-start}")
###Output
_____no_output_____
###Markdown
Austria* Homepage of project: https://oscovida.github.io* [Execute this Jupyter Notebook using myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Austria.ipynb)
###Code
import datetime
import time
start = datetime.datetime.now()
print(f"Notebook executed on: {start.strftime('%d/%m/%Y %H:%M:%S%Z')} {time.tzname[time.daylight]}")
%config InlineBackend.figure_formats = ['svg']
from oscovida import *
overview("Austria");
# load the data
cases, deaths, region_label = get_country_data("Austria")
# compose into one table
table = compose_dataframe_summary(cases, deaths)
# show tables with up to 500 rows
pd.set_option("max_rows", 500)
# display the table
table
###Output
_____no_output_____
###Markdown
Explore the data in your web browser- If you want to execute this notebook, [click here to use myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Austria.ipynb)- and wait (~1 to 2 minutes)- Then press SHIFT+RETURN to advance code cell to code cell- See http://jupyter.org for more details on how to use Jupyter Notebook Acknowledgements:- Johns Hopkins University provides data for countries- Robert Koch Institute provides data for within Germany- Open source and scientific computing community for the data tools- Github for hosting repository and html files- Project Jupyter for the Notebook and binder service- The H2020 project Photon and Neutron Open Science Cloud ([PaNOSC](https://www.panosc.eu/))--------------------
###Code
print(f"Download of data from Johns Hopkins university: cases at {fetch_cases_last_execution()} and "
f"deaths at {fetch_deaths_last_execution()}.")
# to force a fresh download of data, run "clear_cache()"
print(f"Notebook execution took: {datetime.datetime.now()-start}")
###Output
_____no_output_____ |
0.process-data.ipynb | ###Markdown
Process CoMMpass DataIn the following notebook, we process input RNAseq gene expression matrices for downstream machine learning applications.Prior to processing, the input expression matrix was FPKM normalized.We first calculate and visualize the per gene variability in the CoMMpass gene expression dataset.We use Median Absolute Deviation ([MAD](https://en.wikipedia.org/wiki/median_absolute_deviation)) to measure gene expression variability.We output this measurement to a file and recommend subsetting to 8,000 genes before input to machine learning models. This captures the majority of the variation in the data in raw gene space. This is a similar observation as seen previously in other experiments (see [Way et al. 2018](https://doi.org/10.1016/j.celrep.2018.03.046 "Machine Learning Detects Pan-cancer Ras Pathway Activation in The Cancer Genome Atlas") and [this discussion](https://github.com/cognoma/machine-learning/pull/18issuecomment-236265506)).We next subset the training and testing X matrices by these MAD genes and scale their measurements to a range of (0,1) by gene. We also process Y matrices into an `sklearn` ingestible format. Importantly, we process the dual Ras mutated samples (~3%) separately.
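As a quick illustration of the MAD measure used below (a toy two-gene matrix; the notebook itself relies on `statsmodels.robust.scale.mad`):

```python
import numpy as np
from statsmodels.robust.scale import mad

toy = np.array([[1.0, 1.1, 0.9, 10.0],   # a single outlier barely moves the MAD
                [5.0, 5.0, 5.0, 5.0]])   # a constant gene has a MAD of 0
print(mad(toy, axis=1))
```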
###Code
import os
import pandas as pd
from statsmodels.robust.scale import mad
from sklearn.preprocessing import MinMaxScaler
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
num_genes = 8000
###Output
_____no_output_____
###Markdown
Load and Subset Training X MatrixInput to sklearn classifiers requires data to be in a bit different format than what was provided.
###Code
# Load the CoMMpass Training Data
file = os.path.join('data', 'raw', 'CoMMpass_train_set.csv')
train_df = pd.read_csv(file, index_col=0).drop("Location", axis='columns')
print(train_df.shape)
# Reorder genes based on MAD and retain higher variable genes
train_df = (
train_df
.assign(mad_genes = mad(train_df, axis=1))
.sort_values(by='mad_genes', ascending=False)
.drop('mad_genes', axis='columns')
)
# Remove duplicate ENSEMBL genes - retain genes with higher variability
train_df = train_df[~train_df.index.duplicated(keep='first')]
print(train_df.shape)
train_df.head(2)
###Output
(57981, 706)
###Markdown
Explore the variability of gene expression in the training set
###Code
# Get MAD genes again and sort
mad_genes = mad(train_df, axis=1)
mad_gene_df = (
pd.DataFrame(mad_genes, index=train_df.index, columns=['mad_genes'])
.sort_values(by='mad_genes', ascending=False)
)
# How many genes have no variance
(mad_gene_df['mad_genes'] == 0).value_counts()
# Remove genes lacking variance
mad_gene_df = mad_gene_df.query("mad_genes > 0")
print(mad_gene_df.shape)
mad_gene_df.head()
###Output
_____no_output_____
###Markdown
It looks like the most highly variable gene is a large outlierThe gene is [B2M](http://useast.ensembl.org/Homo_sapiens/Gene/Summary?g=ENSG00000166710;r=15:44711477-44718877).
###Code
# Distribution of gene expression variability even after removing zeros
sns.distplot(mad_gene_df['mad_genes']);
# Distribution of genes with high gene expression variability
sns.distplot(mad_gene_df.query("mad_genes > 100")['mad_genes']);
# Get the proportion of total MAD variance for each gene
total_mad = mad_gene_df['mad_genes'].sum()
mad_gene_df = (
mad_gene_df
.assign(variance_proportion = mad_gene_df['mad_genes'].cumsum() / total_mad)
)
# Visualize the proportion of MAD variance against all non-zero genes
sns.regplot(x='index',
y='variance_proportion',
ci=None,
fit_reg=False,
data=mad_gene_df.reset_index().reset_index())
plt.xlabel('Number of Genes')
plt.ylabel('Proportion of Variance')
plt.axvline(x=5000, color='r', linestyle='--')
plt.axvline(x=10000, color='r', linestyle='--')
plt.axvline(x=num_genes, color='g', linestyle='--');
# Use only the top `num_genes` in the classifier
mad_gene_df = mad_gene_df.assign(use_in_classifier = 0)
mad_gene_df['use_in_classifier'].iloc[range(0, num_genes)] = 1
mad_gene_df.head()
# Write to file
file = os.path.join('data', 'mad_genes.tsv')
mad_gene_df.to_csv(file, sep='\t')
###Output
_____no_output_____
###Markdown
Subset and Scale Training X Matrix
###Code
use_genes = mad_gene_df.query('use_in_classifier == 1')
train_df = (
train_df
.reindex(use_genes.index)
.sort_index(axis='columns')
.sort_index(axis='rows')
.transpose()
)
# Scale between range of (0, 1) by gene
# matrix must be sample x gene
fitted_scaler = MinMaxScaler().fit(train_df)
train_df = pd.DataFrame(fitted_scaler.transform(train_df),
columns=train_df.columns,
index=train_df.index)
print(train_df.shape)
# Output Training X Matrix
file = os.path.join('data', 'compass_x_train.tsv.gz')
train_df.to_csv(file, compression='gzip', sep='\t')
###Output
_____no_output_____
###Markdown
Load and Process Testing X Matrix**Note that the testing matrix includes samples with both _KRAS_ and _NRAS_ mutations.**Remove these samples from the testing matrix and set aside for a separate test phase.The X matrix is written to file _after_ processing the Y matrix in order to separate dual _KRAS_/_NRAS_ samples from processing.
###Code
# Load and process test data
file = os.path.join('data', 'raw', 'CoMMpass_test_set.csv')
test_df = pd.read_csv(file, index_col=0).drop("Location", axis='columns')
# Reorder genes based on MAD and retain higher variable genes
test_df = (
test_df
.assign(mad_genes = mad(test_df, axis=1))
.sort_values(by='mad_genes', ascending=False)
.drop('mad_genes', axis='columns')
)
# Remove duplicate ENSEMBL genes - retain genes with higher variability
test_df = test_df[~test_df.index.duplicated(keep='first')]
test_df = (
test_df.reindex(use_genes.index)
.sort_index(axis='columns')
.sort_index(axis='rows')
.transpose()
)
print(test_df.shape)
# Confirm that the genes are the same between training and testing
assert (test_df.columns == train_df.columns).all(), 'The genes between training and testing are not aligned!'
###Output
_____no_output_____
###Markdown
Process Y MatricesThe Y represents mutation status for samples. Note that there are 26 samples (3.2%) that have dual _KRAS_ and _NRAS_ mutations. Split these samples into a different X and Y matrices.Also, sklearn expects a single array of values for multiclass classifiers. Set the following assignments:| Mutation | Assignment || -------- | ---------- || Wild-type | 0 || _KRAS_ | 1 || _NRAS_ | 2 | Training Y Matrix
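A toy illustration (hypothetical three-sample frame) of collapsing the two indicator columns into the single class column described in the table — this mirrors what the cells below do on the real Y matrices:

```python
import pandas as pd

toy_y = pd.DataFrame({'KRAS_mut': [0, 1, 0], 'NRAS_mut': [0, 0, 1]})
toy_y.loc[toy_y['NRAS_mut'] == 1, 'KRAS_mut'] = 2             # NRAS -> class 2
toy_y = toy_y[['KRAS_mut']].rename(columns={'KRAS_mut': 'ras_status'})
print(toy_y)  # 0 = wild-type, 1 = KRAS, 2 = NRAS
```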
###Code
# Load the training labels from the Y matrix
file = os.path.join('data', 'raw', 'CoMMpass_train_set_labels.csv')
y_train_df = pd.read_csv(file, index_col=0)
y_train_df = (
y_train_df
.reindex(train_df.index)
.astype(int)
)
print(y_train_df.sum())
y_train_df.head(2)
# Observe the proportion of KRAS/NRAS mutations in the training set
y_train_df.sum() / y_train_df.shape[0]
# sklearn expects a single column with classes separate 0, 1, 2
# Set NRAS mutations equal to 2
y_train_df.loc[y_train_df['NRAS_mut'] == 1, 'KRAS_mut'] = 2
y_train_df = y_train_df.drop(['NRAS_mut', 'dual_RAS_mut'], axis='columns')
y_train_df.columns = ['ras_status']
# Confirm that the samples are the same between training and testing
assert (y_train_df.index == train_df.index).all(), 'The samples between X and Y training matrices are not aligned!'
file = os.path.join('data', 'compass_y_train.tsv')
y_train_df.to_csv(file, sep='\t')
###Output
_____no_output_____
###Markdown
Testing Y Matrix
###Code
file = os.path.join('data', 'raw', 'CoMMpass_test_set_labels.csv')
y_test_df = pd.read_csv(file, index_col=0)
y_test_df = (
y_test_df
.reindex(test_df.index)
.astype(int)
)
y_test_df.head(3)
###Output
_____no_output_____
###Markdown
Split off dual Ras from normal testing
###Code
y_dual_df = y_test_df.query('dual_RAS_mut == 1')
y_test_df = y_test_df.query('dual_RAS_mut == 0')
print(y_dual_df.shape)
print(y_test_df.shape)
# How many KRAS/NRAS mutations in testing set
# After removal of dual mutants
y_test_df.sum()
# What is the proportion of KRAS/NRAS mutations in testing set
# After removal of dual mutants
y_test_df.sum() / y_test_df.shape[0]
# How many KRAS/NRAS mutations in the dual set
y_dual_df.sum()
# sklearn expects a single column with classes separate 0, 1, 2
# Set NRAS mutations equal to 2
y_test_df.loc[y_test_df['NRAS_mut'] == 1, 'KRAS_mut'] = 2
y_test_df = y_test_df.drop(['NRAS_mut', 'dual_RAS_mut'], axis='columns')
y_test_df.columns = ['ras_status']
###Output
_____no_output_____
###Markdown
Subset and Output Testing X MatrixBecause the testing set includes samples with dual Ras mutations, split it before scaling by gene. Then use the scale fit on the non-dual test set to scale the dual samples. This is done in order for the testing set to be independent from the training set, but also to not be influenced by an overabundance of dual Ras mutant samples.
###Code
# Now, process X matrix for both testing and dual sets
x_test_df = test_df.reindex(y_test_df.index, axis='rows')
x_dual_df = test_df.reindex(y_dual_df.index, axis='rows')
# Fit the data using the testing set
# The dual mutated samples are filtered
fitted_scaler = MinMaxScaler().fit(x_test_df)
# Transform the testing matrix with testing matrix fit
x_test_df = pd.DataFrame(fitted_scaler.transform(x_test_df),
columns=x_test_df.columns,
index=x_test_df.index)
# Transform the dual matrix with testing matrix fit
x_dual_df = pd.DataFrame(fitted_scaler.transform(x_dual_df),
columns=x_dual_df.columns,
index=x_dual_df.index)
print(x_test_df.shape)
print(x_dual_df.shape)
# Before writing to file, confirm that the samples are aligned in the testing set
assert (x_test_df.index == y_test_df.index).all(), 'The samples between X and Y testing matrices are not aligned!'
file = os.path.join('data', 'compass_y_test.tsv')
y_test_df.to_csv(file, sep='\t')
file = os.path.join('data', 'compass_x_test.tsv.gz')
x_test_df.to_csv(file, compression='gzip', sep='\t')
###Output
_____no_output_____
###Markdown
Process and Output both X and Y Matrix for Dual Ras mutated samples
###Code
percent_dual = y_dual_df.shape[0] / (train_df.shape[0] + test_df.shape[0])
print('{0:.1f}% of the samples have mutations in both KRAS and NRAS'.format(percent_dual * 100))
y_dual_df.head()
# Before writing to file, confirm that the samples are aligned in the dual set
# This does not actually matter because we will not use the dual Y matrix ever
# The dual Y matrix is implied
assert (x_dual_df.index == y_dual_df.index).all(), 'The samples between X and Y dual matrices are not aligned!'
# Also, confirm that the dual genes are aligned to other X matrices
assert (x_dual_df.columns == train_df.columns).all(), 'The genes between dual and other X matrices are not aligned!'
file = os.path.join('data', 'compass_x_dual.tsv.gz')
x_dual_df.to_csv(file, compression='gzip', sep='\t')
file = os.path.join('data', 'compass_y_dual.tsv')
y_dual_df.drop('dual_RAS_mut', axis='columns').to_csv(file, sep='\t')
###Output
_____no_output_____ |
_notebooks/us-counties-cancer-death-rates-prediction.ipynb | ###Markdown
Cancer Mortality rate prediction for US counties using feedforward neural networksUsing data aggregated from the American Community Survey (https://census.gov), https://hclinicaltrials.gov, and https://cancer.gov. The link to the aggregated data can be found here: https://data.world/nrippner/ols-regression-challenge ABSTRACTThis project aims to predict the cancer mortality rates of US counties using feedforward neural networks. We'll first start by downloading and cleaning the data. Then we'll perform some exploratory data analyses to look for trends. We'll then build a model with the dataset, which takes a set of inputs and returns predictions for the target death rate. Finally, we'll examine how well our model performed. Type of data: tabular data Model used: Regression with feed-forward neural networks IntroductionCancer is the second leading cause of death in the United States. Various types of cancer have been associated with modifiable risk factors like smoking, lack of physical activity, obesity, etc. Because modifiable risk factors have a very large association with the risk of developing cancer, half of cancers may be preventable (Colditz et al, 1996).However, individual risk factors might not be the only modifiable risk factors for cancer. Parameters like poor access to health care, low income, etc., should also be considered as potential indirect risk factors. It is no wonder that areas with better cancer detection centers tend to have lower cancer mortality rates, thanks to early detection.The dataset used in this project mainly contains population features of US counties, which will be fed into our model in order to predict how these features might affect the cancer mortality rates of US counties. NoteA copy of the dataset on my local drive will be used. However, I'll leave the download URL to the dataset below. If you wish to explore on your own machine, simply uncomment the download_url function and replace the read_csv path with your path
###Code
# Uncomment and run the commands below if imports fail
# !conda install numpy pytorch torchvision cpuonly -c pytorch -y
# !pip install matplotlib --upgrade --quiet
!pip install jovian --upgrade --quiet
# Imports
import torch
import jovian
import torchvision
import torch.nn as nn
import pandas as pd
import matplotlib.pyplot as plt
import torch.nn.functional as F
from torchvision.datasets.utils import download_url
from torch.utils.data import DataLoader, TensorDataset, random_split
# Hyperparameters
batch_size=16
learning_rate=5e-7
# Other constants
DATASET_URL = "https://data.world/nrippner/ols-regression-challenge/file/cancer_reg.csv"
DATA_FILENAME = "cancer.csv"
TARGET_COLUMN = 'target_deathrate'
input_size=13
output_size=1
###Output
_____no_output_____
###Markdown
Dataset loading and cleaningHere, We'll load our datasets into to dataframe variable and clean the data by searching for null cells
###Code
# Download the data
#download_url(DATASET_URL, '.')
dataframe = pd.read_csv(r'C:\Users\User\Desktop\drdata\cancer.csv', encoding = "ISO-8859-1")
dataframe = dataframe[['avgAnnCount', 'avgDeathsPerYear', 'TARGET_deathRate', 'incidenceRate', 'povertyPercent', 'MedianAge', 'PctEmpPrivCoverage', 'PctPublicCoverage', 'PctWhite', 'PctBlack', 'PctAsian', 'PctOtherRace', 'PctMarriedHouseholds', 'BirthRate']]
dataframe.isnull().sum().sum()
###Output
_____no_output_____
###Markdown
As shown above, We'll be working with 14 columns from our data, 13 of which will be input columns for our model.Also, our data has no null values, since it had been cleaned beforehand Exploratory Data AnalysesLet's try finding correlations between the input columns and Target Death rateTo avoid clogging up this notebook, I have plotted only a handful of graphs. Correlation between povertyPercent and TARGET deathrateThe plot below, shows a positive correlation between the percentage of poor people in individual US counties, and target death rate.
###Code
#correlation between povertyPercent and TARGET_deathRate
plt.scatter(dataframe['povertyPercent'], dataframe['TARGET_deathRate'])
plt.xlabel('povertyPercent')
plt.ylabel('TARGET_deathRate')
plt.show()
###Output
_____no_output_____
###Markdown
Correlation between percentage empirc coverage and TARGET deathrateThe plot below, shows a negative correlation between the percentage of Empiric coverage for individual US counties, and target death rate. Generally, counties that have higher percentage of empiric coverage tend to experience lower death rates.
###Code
#correlation between median age and TARGET_deathRate
plt.scatter(dataframe['PctEmpPrivCoverage'], dataframe['TARGET_deathRate'])
plt.xlabel('percentage empiric coverage')
plt.ylabel('TARGET_deathRate')
plt.xscale('log')
plt.show()
###Output
_____no_output_____
###Markdown
Correlation between percentage public coverage and TARGET deathrateThe plot below shows that there might be a positive correlation between the percentage of public health coverage in individual US counties and the target death rate. As shown, counties with a higher percentage of public coverage tend to experience higher death rates. What could be a reasonable explanation for this?
###Code
#correlation between percentage public coverage and TARGET_deathRate
plt.scatter(dataframe['PctPublicCoverage'], dataframe['TARGET_deathRate'])
plt.xlabel('percentage public coverage')
plt.ylabel('TARGET_deathRate')
plt.xscale('log')
plt.show()
###Output
_____no_output_____
###Markdown
Converting data to PyTorch datasetLet's now convert our dataset to a PyTorch dataset so we can feed it to our models
###Code
# Convert from Pandas dataframe to numpy arrays
inputs = dataframe.drop('TARGET_deathRate', axis=1).values
targets = dataframe[['TARGET_deathRate']].values
inputs.shape, targets.shape
rown = dataframe.shape[0]
0.8 * rown, 0.2 * rown
## convert to pytorch dataset
dataset = TensorDataset(torch.tensor(inputs, dtype=torch.float32), torch.tensor(targets, dtype=torch.float32))
train_ds, val_ds = random_split(dataset, [2437, 610])
train_loader = DataLoader(train_ds, batch_size, shuffle=True)
val_loader = DataLoader(val_ds, batch_size*2)
input_size = 13
output_size = 1
!pip install jovian --upgrade --quiet
import jovian
jovian.commit(project='us-counties-cancer-death-rate-prediction')
###Output
_____no_output_____
###Markdown
ModelTo train our model, we'll be using a feed-forward neural network with 3 hidden layers, using the ReLU function as the non-linear activation for the hidden layers. Let's now define our model.
###Code
class LifeExpectancyModel(nn.Module):
"""Feedfoward neural network with 3 hidden layers"""
def __init__(self, in_size, hidden_size, out_size):
super().__init__()
# hidden layer
self.linear1 = nn.Linear(in_size, hidden_size)
self.linear2 = nn.Linear(hidden_size, 64)
self.linear3 = nn.Linear(64, 128)
# output layer
self.linear4 = nn.Linear(128, out_size)
def forward(self, xb):
# Get intermediate outputs using hidden layer
out = self.linear1(xb)
# Apply activation function
out = F.relu(out)
# Get predictions using output layer
out = self.linear2(out)
out = F.relu(out)
out = self.linear3(out)
out = F.relu(out)
out = self.linear4(out)
return out
def training_step(self, batch):
inputs, targets = batch
out = self(inputs) # Generate predictions
loss = F.l1_loss(out, targets) # Calculate loss
return loss
def validation_step(self, batch):
inputs, targets = batch
out = self(inputs) # Generate predictions
loss = F.l1_loss(out, targets) # Calculate loss
acc = accuracy(out, targets) # Calculate accuracy
return {'val_loss': loss, 'val_acc': acc}
def validation_epoch_end(self, outputs):
batch_losses = [x['val_loss'] for x in outputs]
epoch_loss = torch.stack(batch_losses).mean() # Combine losses
return {'val_loss': epoch_loss.item()}
def epoch_end(self, epoch, result, num_epochs):
# Print result every 20th epoch
if (epoch+1) % 20 == 0 or epoch == num_epochs-1:
print("Epoch [{}], val_loss: {:.4f}".format(epoch+1, result['val_loss']))
def accuracy(outputs, labels):
_, preds = torch.max(outputs, dim=1)
return torch.tensor(torch.sum(preds == labels).item() / len(preds))
model = LifeExpectancyModel(13, 32, 1)
###Output
_____no_output_____
###Markdown
TrainingLet's now define our evaluate and fit functions. We'll then train our model and improve it by varying our hyperparameters.
###Code
def evaluate(model, val_loader):
outputs = [model.validation_step(batch) for batch in val_loader]
return model.validation_epoch_end(outputs)
def fit(epochs, lr, model, train_loader, val_loader, opt_func=torch.optim.SGD):
history = []
optimizer = opt_func(model.parameters(), lr)
for epoch in range(epochs):
# Training Phase
for batch in train_loader:
loss = model.training_step(batch)
loss.backward()
optimizer.step()
optimizer.zero_grad()
# Validation phase
result = evaluate(model, val_loader)
model.epoch_end(epoch, result, epochs)
history.append(result)
return history
result = evaluate(model, val_loader)
result
epochs = 100
lr = 1e-5
history1 = fit(epochs, lr, model, train_loader, val_loader)
epochs = 200
lr = 1e-5
history2 = fit(epochs, lr, model, train_loader, val_loader)
epochs = 300
lr = 1e-7
history3 = fit(epochs, lr, model, train_loader, val_loader)
loss_final = 13.5614
losses = [r['val_loss'] for r in [result] + history1 + history2 + history3]
plt.plot(losses, '-x')
plt.xlabel('epoch')
plt.ylabel('val_loss')
plt.title('val_loss vs. epochs');
###Output
_____no_output_____
###Markdown
PredictionLet's now take a look at a couple of predictions to see how well our model did
###Code
def predict_single(x, model):
xb = x.unsqueeze(0)
return model(x).item()
x, target = val_ds[10]
pred = predict_single(x, model)
print("Input: ", x)
print("Target: ", target.item())
print("Prediction:", pred)
for i in range(100, 200):
x, target = val_ds[i]
pred = predict_single(x, model)
print("Input: ", x)
print("Target: ", target.item())
print("Prediction:", pred)
print('\n\n')
torch.save(model.state_dict(), 'cancer-rate-feedforward.pth')
###Output
_____no_output_____
###Markdown
ConclusionIn this project, we analyzed the US counties dataset and tried to find links between various parameters and the death rate. We also built a model using a feed-forward neural network that predicts the target cancer death rate from the given parameters. However, there is still a wealth of information that can be dug out of this dataset. Also, we did not measure the accuracy of our predictions, because accuracy in this regression setting is subjective (we are predicting continuous values) and would have to be based on individually defined criteria. A good improvement to this model would be a function that calculates accuracy based on how far the predictions are from the actual target death rate; for example, you could decide that any prediction within 20 of the actual death rate counts as accurate. That way, you'll have an objective measure of exactly how accurate the model is. Referencehttps://data.world/nrippner/ols-regression-challengehttps://www.ncbi.nlm.nih.gov/pmc/articles/PMC2410150/ Save and uploadLet's save and upload our notebook, model and metrics. Let's log our metrics.
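As a sketch of that suggested improvement (the 20-point tolerance is an arbitrary choice, and `model`/`val_ds` refer to the objects defined earlier in this notebook):

```python
def threshold_accuracy(model, dataset, tolerance=20.0):
    """Fraction of predictions within `tolerance` of the true death rate."""
    hits = 0
    for x, target in dataset:
        pred = model(x.unsqueeze(0)).item()
        if abs(pred - target.item()) <= tolerance:
            hits += 1
    return hits / len(dataset)

# e.g. threshold_accuracy(model, val_ds, tolerance=20.0)
```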
###Code
jovian.log_metrics(final_loss = loss_final, hidden_layers = 3, lr1 = 1e-5, epoch1 = 100, lr2 = 1e-5, epoch2 = 200, lr3 = 1e-7, epoch3 =300)
jovian.commit(project='us-counties-cancer-death-rate-prediction',
environment=None,
outputs=['cancer-rate-feedforward.pth'])
###Output
_____no_output_____ |
Seaborn/seaborn_building_structured_dataset.ipynb | ###Markdown
Building structured multiplot grids Conditional small multiples
###Code
import seaborn as sns
import matplotlib.pyplot as plt
sns.set(style='ticks')
# building multiplot grids needs initializing FacetGrid object
# with a dataframe and the names of variables to form: rows, columns, hue
# variables should be categorical or discrete
tips = sns.load_dataset('tips')
g = sns.FacetGrid(tips, col='time')
# it sets up the matplotlib figure and axes but does not draw anything
# visualizing data on this grid with the FacetGrid.map() method
g = sns.FacetGrid(tips, col='time')
g.map(plt.hist, 'tip'); # needed plotting func and variable to plot
# to make relational plot pass multiple variable names
# or/and keyword arguments
g = sns.FacetGrid(tips, col='sex', hue='smoker')
g.map(plt.scatter, 'total_bill', 'tip', alpha=.7)
g.add_legend();
# controlling the look of the grid with a few options
# passed to the class constructor
g = sns.FacetGrid(tips, row='smoker', col='time',
margin_titles=True)
g.map(sns.regplot, 'size', 'total_bill', color='.3',
fit_reg=False, x_jitter=.1);
# the size of figure is set by providing the height
#of each facet along with the aspect ratio
g = sns.FacetGrid(tips, col='day', height=4, aspect=.5)
g.map(sns.barplot, 'sex', 'total_bill');
sns.relplot?
###Output
_____no_output_____ |
notebooks/solutions/9_s_ligthfm.ipynb | ###Markdown
Unit 9: LightFM You almost made it - this is the final lesson and it is also going to be the easiest one.As you may already assume - there are a lot of recommender packages in Python out there. In this lesson we will look at LightFM - an easy to use and lightweight implementation of different approaches and algorithms (FM, BPR, WARP, ...) to perform CF, CBF and hybrid recommenders.Within a few lines of code we set-up, train and use a recommender for recommendations.* [LightFM on GitHub](https://github.com/lyst/lightfm)* [LightFM documentation](https://making.lyst.com/lightfm/docs/home.html)
###Code
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from scipy.sparse import coo_matrix
from recsys_training.data import Dataset, genres
from lightfm.datasets import fetch_movielens
from lightfm.evaluation import precision_at_k
from lightfm import LightFM
ml100k_ratings_filepath = '../../data/raw/ml-100k/u.data'
ml100k_item_filepath = '../../data/raw/ml-100k/u.item'
ml100k_user_filepath = '../../data/raw/ml-100k/u.user'
###Output
_____no_output_____
###Markdown
Load Data You may easily load Movielens Data ...
###Code
data = fetch_movielens(min_rating=4.0, genre_features=True)
data
###Output
_____no_output_____
###Markdown
But, we want to use the exact same data and split that we used in the lessons before
###Code
data = Dataset(ml100k_ratings_filepath)
data.filter(min_rating=4.0)
data.rating_split(seed=42)
###Output
_____no_output_____
###Markdown
Transform our training and testing data into sparse matrices
###Code
# Train DataFrame to Train COO Matrix
ratings = data.train_ratings["rating"].values
# We subtract 1 to make user/item ids 0-index-based
rows = data.train_ratings["user"].values - 1
cols = data.train_ratings["item"].values - 1
train_mat = coo_matrix((ratings, (rows, cols)),
shape=(data.n_users, data.n_items))
# Test DataFrame to Test COO Matrix
ratings = data.test_ratings["rating"].values
# We subtract 1 to make user/item ids 0-index-based
rows = data.test_ratings["user"].values - 1
cols = data.test_ratings["item"].values - 1
test_mat = coo_matrix((ratings, (rows, cols)),
shape=(data.n_users, data.n_items))
train_mat
test_mat
###Output
_____no_output_____
###Markdown
Collaborative Filtering
###Code
params = {
'no_components': 10,
'loss': 'bpr',
'learning_rate': 0.07,
'random_state': 42,
'user_alpha': 0.0002,
'item_alpha': 0.0002
}
epochs = 10
N = 10
cf_model = LightFM(**params)
cf_model.fit(train_mat, epochs=epochs, verbose=True)
###Output
Epoch: 100%|██████████| 10/10 [00:00<00:00, 48.66it/s]
###Markdown
Evaluate the mean `precision@10` on test dataIf we provide the training data to the evaluation, known positives will be removed.
###Code
prec_at_N = precision_at_k(cf_model, test_mat, train_mat, k=N)
prec_at_N.mean()
###Output
_____no_output_____
###Markdown
Evaluate the mean `precision@10` on train data
###Code
prec_at_N = precision_at_k(cf_model, train_mat, k=N)
prec_at_N.mean()
###Output
_____no_output_____
###Markdown
Maybe try adding some regularization to improve the recommendation relevancy - simply add `user_alpha` and `item_alpha` to the `params` dictionary and find appropriate values. Hybrid (CF + CBF) Load user and item features
###Code
def min_max_scale(val, bounds):
min_max_range = bounds['max']-bounds['min']
return (val-bounds['min'])/min_max_range
def user_profiler(group):
genre_dist = group[genres].mean()
year_dist = group['release_year'].describe()[['mean', 'std', '50%']]
return pd.concat((genre_dist, year_dist), axis=0)
def get_user_profiles(ratings: pd.DataFrame,
item_feat: pd.DataFrame,
min_rating: float = 4.0) -> pd.DataFrame:
ratings = ratings[ratings.rating >= min_rating]
ratings = ratings[['user', 'item']]
ratings = ratings.merge(item_feat, on='item', how='left')
ratings.drop(['item'], axis=1, inplace=True)
grouped = ratings.groupby('user')
profiles = grouped.apply(user_profiler).reset_index()
profiles.rename(columns={'50%': 'median'}, inplace=True)
return profiles
item_feat = pd.read_csv(ml100k_item_filepath, sep='|', header=None,
names=['item', 'title', 'release', 'video_release', 'imdb_url']+genres,
engine='python')
user_feat = pd.read_csv(ml100k_user_filepath, sep='|', header=None,
names=['user', 'age', 'gender', 'occupation', 'zip'])
# Infer the release year
idxs = item_feat[item_feat['release'].notnull()].index
item_feat.loc[idxs, 'release_year'] = item_feat.loc[idxs, 'release'].str.split('-')
item_feat.loc[idxs, 'release_year'] = item_feat.loc[idxs, 'release_year'].apply(lambda val: val[2]).astype(int)
# Impute median release year value for the items with missing release year
top_year = item_feat.loc[idxs, 'release_year'].astype(int).describe()['50%']
idx = item_feat[item_feat['release'].isnull()].index
item_feat.loc[idx, 'release_year'] = top_year
# Min-max scale the release year
item_year_bounds = {'min': item_feat['release_year'].min(),
'max': item_feat['release_year'].max()}
item_feat['release_year'] = item_feat['release_year'].apply(
lambda year: min_max_scale(year, item_year_bounds))
# Drop other columns
item_feat.drop(['title', 'release', 'video_release', 'imdb_url'], axis=1, inplace=True)
# Min-max scale the age
user_age_bounds = {'min': user_feat['age'].min(),
'max': user_feat['age'].max()}
user_feat['age'] = user_feat['age'].apply(lambda age: min_max_scale(age, user_age_bounds))
# Transform gender characters to numerical values (categories)
genders = sorted(user_feat['gender'].unique())
user_gender_map = dict(zip(genders, range(len(genders))))
user_feat['gender'] = user_feat['gender'].map(user_gender_map)
# Transform occupation strings to numerical values (categories)
occupations = sorted(user_feat['occupation'].unique())
user_occupation_map = dict(zip(occupations, range(len(occupations))))
user_feat['occupation'] = user_feat['occupation'].map(user_occupation_map)
# Transform the zip codes to categories keeping the first three digits and impute for missing
idxs = user_feat[~user_feat['zip'].str.isnumeric()].index
user_feat.loc[idxs, 'zip'] = '00000'
zip_digits_to_cut = 3
user_feat['zip'] = user_feat['zip'].apply(lambda val: int(val) // 10 ** zip_digits_to_cut)
profiles = get_user_profiles(data.train_ratings, item_feat)
user_feat = user_feat.merge(profiles, on='user', how='left')
occupation_1H = pd.get_dummies(user_feat['occupation'], prefix='occupation')
zip_1H = pd.get_dummies(user_feat['zip'], prefix='zip')
user_feat.drop(['occupation', 'zip', ], axis=1, inplace=True)
user_feat = pd.concat([user_feat, occupation_1H, zip_1H], axis=1)
user_feat.fillna(0, inplace=True)
user_feat.index = user_feat['user'].values
user_feat.drop('user', axis=1, inplace=True)
item_feat.index = item_feat['item'].values
item_feat.drop('item', axis=1, inplace=True)
(user_feat==0).sum().sum()/user_feat.size
(item_feat==0).sum().sum()/item_feat.size
# Create User Feature COO Matrix
# user_feat_mat = coo_matrix(np.eye(data.n_users))
user_feat_mat = coo_matrix(np.concatenate((user_feat.values, np.eye(data.n_users)), axis=1))
# Create Item Feature COO Matrix
# item_feat_mat = coo_matrix(np.eye(data.n_items))
item_feat_mat = coo_matrix(np.concatenate((item_feat.values, np.eye(data.n_items)), axis=1))
user_feat_mat
item_feat_mat
###Output
_____no_output_____
###Markdown
Model Training
###Code
params = {
'no_components': 10,
'loss': 'warp',
'learning_rate': 0.07,
'random_state': 42,
'user_alpha': 0.0002,
'item_alpha': 0.0002
}
epochs = 10
N = 10
hybrid_model = LightFM(**params)
hybrid_model.fit(train_mat,
user_features=user_feat_mat,
item_features=item_feat_mat,
epochs=epochs,
verbose=True)
prec_at_N = precision_at_k(hybrid_model,
test_mat,
train_mat,
k=N,
user_features=user_feat_mat,
item_features=item_feat_mat)
prec_at_N.mean()
###Output
_____no_output_____
###Markdown
Unit 9: LightFM You almost made it - this is the final lesson and it is also going to be the easiest one.As you might assume there are a lot of recommender packages in Python out there. And yes, there are. In this lesson we will look at LightFM - an easy-to-use and lightweight implementation of different factorization machine algorithms to implement CF, CBF and hybrid recommenders.Within a few lines of code we set up, train and use a recommender for recommendations.* [LightFM on GitHub](https://github.com/lyst/lightfm)* [LightFM documentation](https://making.lyst.com/lightfm/docs/home.html)
###Code
import matplotlib.pyplot as plt
from lightfm.datasets import fetch_movielens
from lightfm.evaluation import precision_at_k
from lightfm import LightFM
###Output
/anaconda3/envs/recsys_training/lib/python3.7/site-packages/lightfm/_lightfm_fast.py:9: UserWarning: LightFM was compiled without OpenMP support. Only a single thread will be used.
warnings.warn('LightFM was compiled without OpenMP support. '
###Markdown
Load Data
###Code
data = fetch_movielens(min_rating=4.0)
data
###Output
_____no_output_____
###Markdown
Collaborative Filtering
###Code
params = {
'no_components': 16,
'loss': 'bpr',
'learning_rate': 0.05,
'random_state': 42
}
epochs = 10
N = 10
cf_model = LightFM(**params)
data['train']
cf_model.fit(data['train'], epochs=epochs, verbose=True)
# if we provide training data with evaluation, known positives will be removed
prec_at_N = precision_at_k(cf_model, data['test'], data['train'], k=N)
prec_at_N.mean()
###Output
_____no_output_____
###Markdown
Hybrid (CF + CBF)
###Code
hybrid_model = LightFM(**params)
hybrid_model.fit(data['train'], item_features=data['item_features'],
epochs=epochs, verbose=True)
prec_at_N = precision_at_k(hybrid_model, data['test'], data['train'], k=N,
item_features=data['item_features'])
prec_at_N.mean()
hybrid_model.user_embeddings
hybrid_model.item_embeddings
hybrid_model.user_biases
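# Using the trained hybrid model to actually produce recommendations:
# a minimal sketch based on LightFM's `predict` API. The example user id (0)
# and the top-N listing below are illustrative additions, not part of the
# original lesson. Because the model was fit with item features, the same
# item features must be passed to `predict` as well.
import numpy as np
user_id = 0
n_items = data['train'].shape[1]
scores = hybrid_model.predict(user_id, np.arange(n_items),
                              item_features=data['item_features'])
top_items = np.argsort(-scores)[:N]
data['item_labels'][top_items]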
###Output
_____no_output_____ |
photometry/photometry_tutorial.ipynb | ###Markdown
Fink case study: ZTF photometry
###Code
import io
import requests
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_context('talk')
###Output
_____no_output_____
###Markdown
Difference image PSF-fit magnitudeLet's start with the ZTF alerts processed by Fink. By default, each alert packet includes the difference image PSF-fit magnitude:
###Code
r = requests.post(
'https://fink-portal.org/api/v1/objects',
json={
'objectId': 'ZTF21abfaohe',
'withupperlim': 'True',
}
)
# Format output in a DataFrame
pdf_magpsf = pd.read_json(io.BytesIO(r.content))
mjd = pdf_magpsf['i:jd'].apply(lambda x: x - 2400000.5)
fig = plt.figure(figsize=(15, 5))
colordic = {1: 'C0', 2: 'C1'}
filtdic = {1: 'g', 2: 'r'}
for filt in np.unique(pdf_magpsf['i:fid']):
maskFilt = pdf_magpsf['i:fid'] == filt
# The column `d:tag` is used to check data type
maskValid = pdf_magpsf['d:tag'] == 'valid'
plt.errorbar(
pdf_magpsf[maskValid & maskFilt]['i:jd'].apply(lambda x: x - 2400000.5),
pdf_magpsf[maskValid & maskFilt]['i:magpsf'],
pdf_magpsf[maskValid & maskFilt]['i:sigmapsf'],
ls = '', marker='o', color=colordic[filt], label='{} band'.format(filtdic[filt])
)
maskUpper = pdf_magpsf['d:tag'] == 'upperlim'
plt.plot(
pdf_magpsf[maskUpper & maskFilt]['i:jd'].apply(lambda x: x - 2400000.5),
pdf_magpsf[maskUpper & maskFilt]['i:diffmaglim'],
ls='', marker='v', color=colordic[filt], markerfacecolor='none'
)
maskBadquality = pdf_magpsf['d:tag'] == 'badquality'
plt.errorbar(
pdf_magpsf[maskBadquality & maskFilt]['i:jd'].apply(lambda x: x - 2400000.5),
pdf_magpsf[maskBadquality & maskFilt]['i:magpsf'],
pdf_magpsf[maskBadquality & maskFilt]['i:sigmapsf'],
ls='', marker='^', color=colordic[filt]
)
plt.ylim(12, 22)
plt.gca().invert_yaxis()
plt.legend()
plt.title('Difference image PSF-fit magnitude')
plt.xlabel('Modified Julian Date [UTC]')
plt.ylabel('Difference Magnitude');
###Output
_____no_output_____
###Markdown
_Circles (●) with error bars show valid alerts that pass the Fink quality cuts. Upper triangles with error bars (▲) represent alert measurements that do not satisfy the Fink quality cuts, but are nevertheless contained in the history of valid alerts and used by Fink science modules. Lower triangles (▽) represent the 5-sigma magnitude limit in the difference image based on PSF-fit photometry, contained in the history of valid alerts._ DC mag
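Before computing it, here is the conversion in equations, as implemented by the `apparent_flux`/`dc_mag` helpers defined later in this notebook (for a positive sci-minus-ref subtraction). With zero point $zp$ (`magzpsci`), reference-source magnitude $m_{\rm nr}$ (`magnr`) and PSF-fit difference magnitude $m_{\rm psf}$ (`magpsf`), they essentially compute
$$F_{\rm ref} = 10^{0.4\,(zp - m_{\rm nr})}, \qquad F_{\rm diff} = 10^{0.4\,(zp - m_{\rm psf})}, \qquad F_{\rm DC} = F_{\rm ref} + F_{\rm diff},$$
$$m_{\rm DC} = zp - 2.5\log_{10} F_{\rm DC}, \qquad \sigma_{m_{\rm DC}} = 1.0857\,\frac{\sqrt{\sigma_{F_{\rm diff}}^2 + \sigma_{F_{\rm ref}}^2}}{F_{\rm DC}},$$
where $1.0857 \approx 2.5/\ln 10$ and each flux error follows from the corresponding magnitude error via $\sigma_F = (\sigma_m/1.0857)\,F$. The `dc_mag` imported from `fink_utils` below performs this same kind of conversion.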
###Code
# from fink_science.conversion import dc_mag
from fink_utils.photometry.conversion import dc_mag
# Take only valid measurements
pdf_magpsf_valid = pdf_magpsf[pdf_magpsf['d:tag'] == 'valid'].sort_values('i:jd', ascending=False)
# Use DC magnitude instead of difference mag
mag_dc, err_dc = np.transpose(
[
dc_mag(*args) for args in zip(
pdf_magpsf_valid['i:fid'].astype(int).values,
pdf_magpsf_valid['i:magpsf'].astype(float).values,
pdf_magpsf_valid['i:sigmapsf'].astype(float).values,
pdf_magpsf_valid['i:magnr'].astype(float).values,
pdf_magpsf_valid['i:sigmagnr'].astype(float).values,
pdf_magpsf_valid['i:magzpsci'].astype(float).values,
pdf_magpsf_valid['i:isdiffpos'].values
)
]
)
pdf_magpsf_valid['i:mag_dc'] = mag_dc
pdf_magpsf_valid['i:err_dc'] = err_dc
fig = plt.figure(figsize=(15, 5))
colordic = {1: 'C0', 2: 'C1'}
filtdic = {1: 'g', 2: 'r'}
for filt in np.unique(pdf_magpsf_valid['i:fid']):
maskFilt = pdf_magpsf_valid['i:fid'] == filt
plt.errorbar(
pdf_magpsf_valid[maskFilt]['i:jd'].apply(lambda x: x - 2400000.5),
pdf_magpsf_valid[maskFilt]['i:magpsf'],
pdf_magpsf_valid[maskFilt]['i:sigmapsf'],
ls = '', marker='x',
color=colordic[filt],
label='{} band (PSF-fit)'.format(filtdic[filt]),
)
plt.errorbar(
pdf_magpsf_valid[maskFilt]['i:jd'].apply(lambda x: x - 2400000.5),
pdf_magpsf_valid[maskFilt]['i:mag_dc'],
pdf_magpsf_valid[maskFilt]['i:err_dc'],
ls = '', marker='o',
color=colordic[filt],
label='{} band (DC)'.format(filtdic[filt]),
)
plt.gca().invert_yaxis()
plt.legend()
plt.title('Comparison of PSF-fit and DC magnitudes')
plt.xlabel('Modified Julian Date [UTC]')
plt.ylabel('Magnitude');
###Output
_____no_output_____
###Markdown
ZTF data release data
###Code
ra0 = np.mean(pdf_magpsf_valid['i:ra'].values)
dec0 = np.mean(pdf_magpsf_valid['i:dec'].values)
r = requests.post(
'https://irsa.ipac.caltech.edu/cgi-bin/ZTF/nph_light_curves',
data={'POS': 'CIRCLE {} {} 0.0004'.format(ra0, dec0),
'BAD_CATFLAGS_MASK': 32768,
'FORMAT': 'csv'
}
)
pdf_ZTF = pd.read_csv(io.StringIO(r.text))
pdf_ZTF
fig = plt.figure(figsize=(15, 6))
colordic = {1: 'C0', 2: 'C1'}
filtdic = {1: 'g', 2: 'r'}
for filt in np.unique(pdf_magpsf_valid['i:fid']):
maskFilt = pdf_magpsf_valid['i:fid'] == filt
plt.errorbar(
pdf_magpsf_valid[maskFilt]['i:jd'].apply(lambda x: x - 2400000.5),
pdf_magpsf_valid[maskFilt]['i:mag_dc'],
pdf_magpsf_valid[maskFilt]['i:err_dc'],
ls = '', marker='o',
color=colordic[filt],
label='{} band (DC)'.format(filtdic[filt]),
)
f = pdf_ZTF['catflags'] == 0
colordic = {'zg': 'C0', 'zr': 'C1', 'zi': 'C2'}
for filt in np.unique(pdf_ZTF[f]['filtercode']):
maskFilt = pdf_ZTF[f]['filtercode'] == filt
plt.errorbar(
pdf_ZTF[f][maskFilt]['mjd'],
pdf_ZTF[f][maskFilt]['mag'],
pdf_ZTF[f][maskFilt]['magerr'],
ls='', color=colordic[filt],
label='ZTF DR {} band'.format(filt))
plt.gca().invert_yaxis()
plt.legend(ncol=3)
plt.title('DC mag from alert vs ZTF DR photometry')
plt.xlabel('Modified Julian Date [UTC]')
plt.ylabel('Magnitude');
###Output
_____no_output_____
###Markdown
Let's zoom on the peak:
###Code
fig = plt.figure(figsize=(15, 6))
colordic = {1: 'C0', 2: 'C1'}
filtdic = {1: 'g', 2: 'r'}
for filt in np.unique(pdf_magpsf_valid['i:fid']):
maskFilt = pdf_magpsf_valid['i:fid'] == filt
plt.errorbar(
pdf_magpsf_valid[maskFilt]['i:jd'].apply(lambda x: x - 2400000.5),
pdf_magpsf_valid[maskFilt]['i:mag_dc'],
pdf_magpsf_valid[maskFilt]['i:err_dc'],
ls = '', marker='o',
color=colordic[filt],
label='{} band (DC)'.format(filtdic[filt]),
)
f = pdf_ZTF['catflags'] == 0
colordic = {'zg': 'C0', 'zr': 'C1', 'zi': 'C2'}
for filt in np.unique(pdf_ZTF[f]['filtercode']):
maskFilt = pdf_ZTF[f]['filtercode'] == filt
plt.errorbar(
pdf_ZTF[f][maskFilt]['mjd'],
pdf_ZTF[f][maskFilt]['mag'],
pdf_ZTF[f][maskFilt]['magerr'],
ls='', color=colordic[filt], marker='x',
label='ZTF DR {} band'.format(filt))
plt.gca().invert_yaxis()
plt.legend(ncol=2)
plt.title('DC mag from alert vs ZTF DR photometry')
plt.xlabel('Modified Julian Date [UTC]')
plt.ylabel('Magnitude')
plt.xlim(59275, 59700);
###Output
_____no_output_____
###Markdown
Forced photometryhttps://web.ipac.caltech.edu/staff/fmasci/ztf/forcedphot.pdf
###Code
pdf = pd.read_csv('forcedphotometry_req00149191_lc.txt', comment='#', sep=' ')
pdf = pdf\
.drop(columns=['Unnamed: 0'])\
.rename(lambda x: x.split(',')[0], axis='columns')
pdf.columns
###Output
_____no_output_____
###Markdown
The default data is difference image PSF-fit flux (similar to alerts)
###Code
fig = plt.figure(figsize=(15, 7))
for filt in np.unique(pdf['filter']):
mask = pdf['filter'] == filt
sub = pdf[mask]
plt.errorbar(
sub['jd'].apply(lambda x: x - 2400000.5),
sub['forcediffimflux'],
sub['forcediffimfluxunc'],
ls='',
marker='.',
label=filt
)
plt.legend()
plt.title('Difference image PSF-fit flux')
plt.xlabel('Modified Julian Date [UTC]')
plt.ylabel('Difference flux');
# plt.xlim(59350, 59550)
###Output
_____no_output_____
###Markdown
Generating absolute-photometry lightcurves for variable sources:
###Code
def diff_phot(forcediffimflux, forcediffimfluxunc, zpdiff, SNT=3, SNU=5, set_to_nan=True):
"""
"""
if (forcediffimflux / forcediffimfluxunc) > SNT:
# we have a confident detection, compute and plot mag with error bar:
mag = zpdiff - 2.5 * np.log10(forcediffimflux)
err = 1.0857 * forcediffimfluxunc / forcediffimflux
else:
# compute flux upper limit and plot as arrow:
if not set_to_nan:
mag = zpdiff - 2.5 * np.log10(SNU * forcediffimfluxunc)
else:
mag = np.nan
err = np.nan
return mag, err
def apparent_flux(magpsf, sigmapsf, magnr, sigmagnr, magzpsci):
""" Compute apparent flux from difference magnitude supplied by ZTF
This was heavily influenced by the computation provided by Lasair:
https://github.com/lsst-uk/lasair/blob/master/src/alert_stream_ztf/common/mag.py
    Parameters
    ----------
    magpsf, sigmapsf: floats
        magnitude from PSF-fit photometry, and 1-sigma error
    magnr, sigmagnr: floats
        magnitude of nearest source in reference image PSF-catalog
        within 30 arcsec and 1-sigma error
    magzpsci: float
        Magnitude zero point for photometry estimates
    Returns
    -------
    dc_flux: float
        Apparent (DC) flux
    dc_sigflux: float
        Error on the apparent (DC) flux
"""
if magpsf is None:
return None, None
# reference flux and its error
magdiff = magzpsci - magnr
if magdiff > 12.0:
magdiff = 12.0
ref_flux = 10**(0.4 * magdiff)
ref_sigflux = (sigmagnr / 1.0857) * ref_flux
magdiff = magzpsci - magpsf
if magdiff > 12.0:
magdiff = 12.0
difference_flux = 10**(0.4 * magdiff)
difference_sigflux = (sigmapsf / 1.0857) * difference_flux
dc_flux = ref_flux + difference_flux
# assumes errors are independent. Maybe too conservative.
dc_sigflux = np.sqrt(difference_sigflux**2 + ref_sigflux**2)
return dc_flux, dc_sigflux
def dc_mag(magpsf, sigmapsf, magnr, sigmagnr, magzpsci):
""" Compute apparent magnitude from difference magnitude supplied by ZTF
Parameters
Stolen from Lasair.
----------
fid
filter, 1 for green and 2 for red
magpsf,sigmapsf
magnitude from PSF-fit photometry, and 1-sigma error
magnr,sigmagnr
magnitude of nearest source in reference image PSF-catalog
within 30 arcsec and 1-sigma error
magzpsci
Magnitude zero point for photometry estimates
isdiffpos
t or 1 => candidate is from positive (sci minus ref) subtraction;
f or 0 => candidate is from negative (ref minus sci) subtraction
"""
dc_flux, dc_sigflux = apparent_flux(
magpsf, sigmapsf, magnr, sigmagnr, magzpsci
)
# apparent mag and its error from fluxes
    if (dc_flux == dc_flux) and dc_flux > 0.0:  # dc_flux == dc_flux is False only for NaN
dc_mag = magzpsci - 2.5 * np.log10(dc_flux)
dc_sigmag = dc_sigflux / dc_flux * 1.0857
else:
dc_mag = np.nan
dc_sigmag = np.nan
return dc_mag, dc_sigmag
###Output
_____no_output_____
###Markdown
difference image PSF-fit flux to difference image PSF-fit magnitude:
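In equations, this is what `diff_phot` above does: for a confident detection, i.e. $F/\sigma_F > \mathrm{SNT}$,
$$m = \mathrm{zpdiff} - 2.5\log_{10} F, \qquad \sigma_m = 1.0857\,\frac{\sigma_F}{F},$$
and otherwise the point is replaced by the $\mathrm{SNU}$-sigma upper limit $\mathrm{zpdiff} - 2.5\log_{10}(\mathrm{SNU}\,\sigma_F)$, or by NaN as configured here.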
###Code
magpsf, sigmapsf = np.transpose(
[
diff_phot(*args) for args in zip(
pdf['forcediffimflux'],
pdf['forcediffimfluxunc'].values,
pdf['zpdiff'].values,
)
]
)
###Output
_____no_output_____
###Markdown
difference image PSF-fit magnitude to absolute magnitude (DC):
###Code
mag_dc, err_dc = np.transpose(
[
dc_mag(*args) for args in zip(
magpsf,
sigmapsf,
pdf['nearestrefmag'].values,
pdf['nearestrefmagunc'].values,
pdf['zpmaginpsci'].values,
)
]
)
fig = plt.figure(figsize=(15, 7))
for filt in np.unique(pdf['filter']):
mask = pdf['filter'] == filt
sub = pdf[mask]
plt.errorbar(
sub['jd'].apply(lambda x: x - 2400000.5),
mag_dc[mask],
err_dc[mask],
ls='',
marker='o',
label=filt
)
fig.gca().invert_yaxis()
plt.legend();
plt.title('Magnitude from forced photometry')
plt.xlabel('Modified Julian Date [UTC]')
plt.ylabel('DC magnitude');
# plt.xlim(59350, 59550)
###Output
_____no_output_____
###Markdown
We can further clean the data by taking only points with no flag:
###Code
fig = plt.figure(figsize=(15, 7))
for filt in np.unique(pdf['filter']):
mask = pdf['filter'] == filt
    # Keep only measurements with flag = 0
mask *= pdf['infobitssci'] == 0
sub = pdf[mask]
plt.errorbar(
sub['jd'].apply(lambda x: x - 2400000.5),
mag_dc[mask],
err_dc[mask],
ls='',
marker='o',
label=filt
)
fig.gca().invert_yaxis()
plt.legend();
plt.title('Magnitude from forced photometry')
plt.xlabel('Modified Julian Date [UTC]')
plt.ylabel('DC magnitude');
# plt.xlim(59350, 59550)
###Output
_____no_output_____
###Markdown
Comparing all DC measurements
###Code
from matplotlib.gridspec import GridSpec
fig = plt.figure(figsize=(15, 10))
gs = GridSpec(3, 1, hspace=0.05)
ax1 = fig.add_subplot(gs[0])
ax2 = fig.add_subplot(gs[1], sharex=ax1)
ax3 = fig.add_subplot(gs[2], sharex=ax1)
axes = [ax1, ax2, ax3]
filt_DR = {1: 'zg', 2: 'zr', 3: 'zi'}
filt_forced = {1: 'ZTF_g', 2: 'ZTF_r', 3: 'ZTF_i'}
names = ['g', 'r', 'i']
for index in range(len(axes)):
# forced photometry
mask = pdf['filter'] == filt_forced[index + 1]
mask *= pdf['infobitssci'] == 0
axes[index].errorbar(
pdf[mask]['jd'].apply(lambda x: x - 2400000.5),
mag_dc[mask],
err_dc[mask],
ls='',
marker='s',
label='Forced photometry ({})'.format(names[index])
)
# Data release
f = pdf_ZTF['catflags'] == 0
maskFilt = pdf_ZTF[f]['filtercode'] == filt_DR[index + 1]
axes[index].errorbar(
pdf_ZTF[f][maskFilt]['mjd'],
pdf_ZTF[f][maskFilt]['mag'],
pdf_ZTF[f][maskFilt]['magerr'],
ls='', marker='x',
label='DR10 ({})'.format(names[index])
)
# Alert
maskFilt = pdf_magpsf_valid['i:fid'] == index + 1
axes[index].errorbar(
pdf_magpsf_valid[maskFilt]['i:jd'].apply(lambda x: x - 2400000.5),
pdf_magpsf_valid[maskFilt]['i:mag_dc'],
pdf_magpsf_valid[maskFilt]['i:err_dc'],
ls = '', marker='+',
label='Alert data ({})'.format(names[index])
)
axes[index].legend()
axes[index].invert_yaxis()
axes[index].set_xlim(59250, 59650)
###Output
_____no_output_____ |
ROQ/Reduced Basis generation for a simple function.ipynb | ###Markdown
Brief summary In this example we will generate a reduced basis set for a simple two-parameter oscillating function - which corresponds to a simple (gravitational) waveform. The idea of writing a function in terms of an orthonormal basis will be familiar. Perhaps the most common example is the Fourier series.For many applications a Fourier series is a very convenient representation. However, for certain applications the Fourier series might be very cumbersome: the number of terms in the series might be very large for the desired accuracy, or the series might not converge, for example. In such cases it would be useful to have a way of computing some kind of "well adapted" basis for the specific problem of interest. A common approach to this sort of problem is to use a singular value decomposition, or principal component analysis, to represent the space of functions. In these cases, the bases generated by the SVD or PCA can lead to very compact and accurate representations of a space of functions. However, such approaches are not guaranteed to be globally optimal, and hence the representations may be more accurate in some parts of the function space than others. To generate a globally optimal basis (that is, where the representation errors are minimized everywhere over the function space) we will consider yet another approach, known as the 'reduced basis method'. Before proceeding, I'll give a short summary of how the method works. Essentially we want to find a highly accurate, compact representation of the space of waveforms - in practice we will construct the basis so that the reduced basis representation suffers from essentially no loss in precision. Not only does this method produce highly accurate bases, but the algorithm used to generate them converges exponentially, which is a neat feature. It is also totally application specific, so the bases produced are very well adapted to particular problems.The space of waveforms is parameterized by two parameters: a mass and a frequency. The reduced basis method works by first constructing a dense space of waveforms, distributed on the mass parameter, and discretely sampled in frequency. This "training space" is used in a greedy algorithm, which will select waveforms in this space to make up the basis: unlike SVD, Fourier, PCA etc..., the bases used in the reduced basis method are directly related to the functions we're trying to represent, and are therefore somewhat less abstract. In this case the bases will be an orthonormalized set of waveforms. The greedy algorithm selects the basis elements iteratively, and works as follows. On the zeroth iteration, one selects an arbitrary seed waveform: at this iteration, the reduced basis consists of one waveform - the seed waveform. For the first iteration, one then computes the projection error of this basis element with every waveform in the training space. The waveform in the training space which has the worst projection error with the basis is then added to the basis, and the basis is orthonormalized. On the second iteration, one computes the projection error of the two basis elements with every waveform in the training space, finds the waveform with the worst projection error, adds it to the basis and orthonormalizes. And so on for the third, fourth, ... iterations. The algorithm is terminated once the projection error of the $n^{th}$ iteration reaches some user-defined threshold. The input to the algorithm is a training space of $m$ waveforms $\{h_j(M_c;f)\}_{j=1}^{m}$.
The output is an orthonormal basis $\{e_i(f)\}_{i=1}^{n}$. The result is that we will be able to write the waveform as an expansion $h(M_c;f) = \sum_{i=1}^{n} \langle h(M_c;f), e_i(f) \rangle\,e_i(f)$. The coefficients $\langle h(M_c;f), e_i(f) \rangle$ are the inner products of $h$ with the basis elements $e_i$.In this note, I'll show how to build the reduced basis in practice. The waveform is shown in the cell directly below. It corresponds to a post-Newtonian gravitational waveform, but the details are not important and I want to stress the generality of the approach: it can be applied to any parameterized function.
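To restate the greedy step described above in symbols: at iteration $n$, with current basis $\{e_i\}_{i=1}^{n}$, the projection of a training waveform is $\mathcal{P}_n h_j = \sum_{i=1}^{n}\langle h_j, e_i\rangle\, e_i$, and its projection error is
$$\sigma_n(h_j) = \big\langle h_j - \mathcal{P}_n h_j,\; h_j - \mathcal{P}_n h_j \big\rangle .$$
The algorithm picks the worst-represented waveform, $j^{*} = \operatorname{arg\,max}_j\, \sigma_n(h_j)$, Gram-Schmidt orthonormalizes it against the current basis,
$$e_{n+1} = \frac{h_{j^{*}} - \mathcal{P}_n h_{j^{*}}}{\big\| h_{j^{*}} - \mathcal{P}_n h_{j^{*}} \big\|},$$
and stops once $\max_j \sigma_n(h_j)$ falls below the chosen tolerance. This is exactly the loop implemented in the code further down.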
###Code
import numpy as np
import matplotlib.pyplot as plt

# Assumed setup (not defined in this notebook extract): the chirp mass is given
# in solar masses and converted to seconds via GM_sun/c^3.
solar_mass_in_seconds = 4.92549095e-6

def phase(f, Mc):
phase = -np.pi/4. + ( 3./( 128. * pow(Mc*np.pi*f, 5./3.) ) )
return phase
def htilde(f, Mc):
Mc *= solar_mass_in_seconds
htilde = pow(f, -7./6.) * pow(Mc,5./6.) * np.exp(1j*phase(f,Mc))
return htilde
###Output
_____no_output_____
###Markdown
The parameters (f, Mc) are frequency and chirp mass. Before we generate the basis we need to decide on the range in parameter space that we will work in. The chirp mass range I'll work in will be $1.5 \leq M_c \leq 3$. I won't explain the exact choice of these values.
###Code
Mc_min = 1.5
Mc_max = 3
###Output
_____no_output_____
###Markdown
Next I'll define the upper and lower frequencies of the waveforms: $f_{min} =40Hz$ and $f_{max} = 1024Hz$. Rather than have a uniformly sampled waveforms in this range, I've opted to create a frequency series at the Chebyshev-Gauss-Lobatto nodes in the frequency interval. The only reason for doing this is to make the greedy more efficient, but don't dwell on this as it will probably just obscure the main point of the example.
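The helper `chebyshev_gauss_lobatto_nodes_and_weights` is not defined in this notebook extract. The sketch below is one possible implementation (an assumption, not necessarily the original): Chebyshev-Gauss-Lobatto nodes mapped to $[f_{\rm min}, f_{\rm max}]$, with quadrature weights chosen so that weighted sums over the nodes approximate integrals over frequency.
```
import numpy as np

def chebyshev_gauss_lobatto_nodes_and_weights(a, b, n):
    """One possible CGL rule on [a, b] (assumed implementation).

    Nodes are x_j = cos(pi*j/n) mapped to [a, b]; the weights are chosen so
    that sum_j w_j * f(x_j) approximates the integral of f over [a, b].
    """
    j = np.arange(n + 1)
    x = np.cos(np.pi * j / n)                      # CGL nodes on [-1, 1] (descending)
    lam = np.full(n + 1, np.pi / n)                # CGL weights for the 1/sqrt(1-x^2) measure
    lam[0] = lam[-1] = np.pi / (2 * n)
    w = 0.5 * (b - a) * lam * np.sqrt(1.0 - x**2)  # undo the Chebyshev weight, map to [a, b]
    nodes = a + 0.5 * (b - a) * (x + 1.0)
    order = np.argsort(nodes)                      # return in ascending frequency
    return nodes[order], w[order]
```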
###Code
fmin = 40.
fmax = 1024.
fseries, df = chebyshev_gauss_lobatto_nodes_and_weights(fmin, fmax, 5000)
np.savetxt("fseries.dat", np.matrix(fseries))
np.savetxt("df.dat", np.matrix(df))
###Output
_____no_output_____
###Markdown
Next I'll define the parameters for the training space. I'll make a training space with 2000 waveforms. Rather than distribute the waveforms uniformly between $Mc_{min}$ and $Mc_{max}$, I'll distribute them uniformly between $Mc_{min}^{5/3}$ and $Mc_{max}^{5/3}$. This is because $Mc^{5/3}$ appears in the phase, and it will turn out to be a much more judicious way to make the training space. In particular, it will make this script run much faster on your laptop.
###Code
TS_size = 2000 # training space of TS_size number of waveforms
Mcs_5_over_3 = np.linspace(Mc_min**(5./3.), Mc_max**(5./3.), TS_size)
Mcs = Mcs_5_over_3**(3./5.)
###Output
_____no_output_____
###Markdown
Now I'll actually make the training space. For storage purposes I'll allocate it as a matrix whose rows correspond to waveforms distributed on $M_c$. The columns of the matrix are the frequency samples of the waveforms. In addition, I'll normalize all the waveforms: You don't have to do this last step, but it makes computing the projection errors in the next step more simple.
###Code
#### allocate memory for training space ####
TS = np.zeros(TS_size*len(fseries), dtype=complex).reshape(TS_size, len(fseries)) # store training space in TS_size X len(fseries) array
for i in range(TS_size):
TS[i] = htilde(fseries, Mcs[i])
# normalize
TS[i] /= np.sqrt(abs(dot_product(df, TS[i], TS[i])))
plt.plot(fseries, TS[0], 'b', fseries, TS[345], 'r', fseries, TS[999], 'k')
plt.show()
###Output
_____no_output_____
###Markdown
The projection operation and the projection errors are defined as follows.$\textbf{Projection}$: for a basis set $\{e_i\}_{i=1}^{n}$, the projection of $h$ onto the basis is defined as $\mathcal{P}h = \sum_{i=1}^{n}\langle h,e_i \rangle e_i$, where $\langle a, b \rangle$ is an inner product.$\textbf{Projection coefficient}$: the coefficients $\langle h,e_i \rangle$ are the projection coefficients.$\textbf{Projection error}$: the projection error $\sigma$ is the inner product of the residual of $h$ and it's projection: $\sigma = \langle (h - \mathcal{P}h), (h - \mathcal{P}h) \rangle.$The stuff below is just some convenient storage for all the projections and projection coefficients.
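The helpers `dot_product` and `project_onto_basis` used below are likewise not defined in this extract. A minimal sketch that is consistent with how they are called in the greedy loop (again an assumption about the original implementation, not the original code):
```
import numpy as np

def dot_product(weights, a, b):
    # discrete weighted inner product <a, b> = sum_k w_k * conj(a_k) * b_k,
    # using the quadrature weights of the frequency grid
    return np.sum(weights * np.conjugate(a) * b)

def project_onto_basis(weights, basis, training_space, projections, proj_coefficients, idx):
    # add the contribution of the newest (idx-th) orthonormal basis element to the
    # running projection of every training-space waveform
    for j in range(len(training_space)):
        proj_coefficients[idx][j] = dot_product(weights, basis[idx], training_space[j])
        projections[j] += proj_coefficients[idx][j] * basis[idx]
    return projections
```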
###Code
#### Set up stuff for greedy####
# Allocate storage for projection coefficients of training space waveforms onto the reduced basis elements
proj_coefficients = np.zeros(TS_size*TS_size, dtype=complex).reshape(TS_size, TS_size)
# Allocate matrix to store the projection of training space waveforms onto the reduced basis
projections = np.zeros(TS_size*len(fseries), dtype=complex).reshape(TS_size, len(fseries))
rb_errors = []
###Output
_____no_output_____
###Markdown
Now we will start the greedy algorithm to find the bases. We start by seeding the algorithm with the first basis element, chosen arbitrarily as the first waveform in the training set. This is stored in RB_matrix. For this example, I've set tolerance = 1e-12, which will be the target error of the complete basis to represent the training spcae, i.e., the waveforms written as an expansion in terms of the bases should be accurate to one part in $10^{12}$. The greedy algorithm will terminate once the maximum projection error - of the waveforms in the training space onto the basis - is less than or equal to the tolerance. Once the algorithm is done, the real and imaginary parts of the basis are stored in basis_re.dat and basis_im.dat respectively.
###Code
#### Begin greedy: see Field et al. arXiv:1308.3565v2 ####
tolerance = 1e-12 # set maximum RB projection error
sigma = 1 # (2) of Algorithm 1. (projection error at 0th iteration)
rb_errors.append(sigma)
RB_matrix = [TS[0]] # (3) of Algorithm 1. (seed greedy algorithm (arbitrary))
iter = 0
while sigma >= tolerance: # (5) of Algorithm 1.
# project the whole training set onto the reduced basis set
projections = project_onto_basis(df, RB_matrix, TS, projections, proj_coefficients, iter)
residual = TS - projections
# Find projection errors
projection_errors = [dot_product(df, residual[i], residual[i]) for i in range(len(residual))]
sigma = abs(max(projection_errors)) # (7) of Algorithm 1. (Find largest projection error)
    print(sigma, iter)
index = np.argmax(projection_errors) # Find Training-space index of waveform with largest proj. error
rb_errors.append(sigma)
#Gram-Schmidt to get the next basis and normalize
next_basis = TS[index] - projections[index] # (9) of Algorithm 1. (Gram-Schmidt)
next_basis /= np.sqrt(abs(dot_product(df, next_basis, next_basis))) #(10) of Alg 1. (normalize)
RB_matrix.append(next_basis) # (11) of Algorithm 1. (append reduced basis set)
iter += 1
np.savetxt("basis_re.dat", np.matrix(RB_matrix).real)
np.savetxt("basis_im.dat", np.matrix(RB_matrix).imag)
plt.plot(rb_errors)
plt.yscale('log')
plt.xlabel('greedy iteration')
plt.ylabel('projection error')
plt.show()
###Output
_____no_output_____
###Markdown
The above plot shows the projection error as a function of the greedy iteration. Notice that the errors hover around 1 for most of the algorithm and at some point decrease rapidly in only a few iterations. This feature is common, and corresponds to the exponential convergence promised earlier.We should now check that the basis is as good as we hope: while the basis is already accurate to the 1e-12 level for approximating the training set, we should also check that it's accurate at describing waveforms which are in the Mc interval we considered, but which were not in the training space. To do this, I'll generate a new random training space in the Mc interval, and look at the projection errors of the reduced basis on the random training space.
###Code
#### Error check ####
TS_rand_size = 2000
TS_rand = np.zeros(TS_rand_size*len(fseries), dtype=complex).reshape(TS_rand_size, len(fseries)) # Allocate random training space
Mcs_5_over_3_rand = Mc_min**(5./3.) + np.random.rand(TS_rand_size) * ( Mc_max**(5./3.) - Mc_min**(5./3.) )
Mcs_rand = pow(Mcs_5_over_3_rand, 3./5.)
for i in range(TS_rand_size):
TS_rand[i] = htilde(fseries, Mcs_rand[i])
# normalize
TS_rand[i] /= np.sqrt(abs(dot_product(df, TS_rand[i], TS_rand[i])))
### find projection errors ###
iter = 0
proj_rand = np.zeros(len(fseries), dtype=complex)
proj_error = []
for h in TS_rand:
while iter < len(RB_matrix):
proj_coefficients_rand = dot_product(df, RB_matrix[iter], h)
proj_rand += proj_coefficients_rand*RB_matrix[iter]
iter += 1
residual = h - proj_rand
projection_errors = abs(dot_product(df, residual, residual))
proj_error.append(projection_errors)
proj_rand = np.zeros(len(fseries), dtype=complex)
iter = 0
plt.scatter(np.linspace(0, len(proj_error), len(proj_error)), np.log10(proj_error))
plt.ylabel('log10 projection error')
plt.show()
###Output
_____no_output_____ |
Tutorial-ETK_thorn-IDScalarWaveNRPy.ipynb | ###Markdown
window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'UA-59152712-8'); IDScalarWaveNRPy: An Einstein Toolkit Initial Data Thorn for the Scalar Wave Equation Author: Terrence Pierre Jacques & Zach Etienne Formatting improvements courtesy Brandon Clark[comment]: (Abstract: TODO)[comment]: (Notebook Status and Validation Notes: TODO) NRPy+ Source Code for this module: [ScalarWave/InitialData.py](../edit/ScalarWave/InitialData.py) [\[**tutorial**\]](Tutorial-ScalarWave.ipynb) Contructs the SymPy expressions for spherical gaussian and plane-wave initial data Introduction:In this part of the tutorial, we will construct an Einstein Toolkit (ETK) thorn (module) that will set up *initial data* for the scalar wave initial value problem. In a [previous tutorial notebook](Tutorial-ScalarWave.ipynb), we used NRPy+ to contruct the SymPy expressions for either spherical gaussian or plane-wave initial data. This thorn is largely based on and should function similarly to the $\text{IDScalarWaveC}$ thorn included in the Einstein Toolkit (ETK) $\text{CactusWave}$ arrangement.We will construct this thorn in two steps.1. Call on NRPy+ to convert the SymPy expressions for the initial data into one C-code kernel.1. Write the C code and linkages to the Einstein Toolkit infrastructure (i.e., the .ccl files) to complete this Einstein Toolkit module. Table of Contents$$\label{toc}$$ This notebook is organized as follows1. [Step 1](initializenrpy): Call on NRPy+ to convert the SymPy expression for the scalar wave initial data into a C-code kernel1. [Step 2](einstein): Interfacing with the Einstein Toolkit 1. [Step 2.a](einstein_c): Constructing the Einstein Toolkit C-code calling functions that include the C code kernels 1. [Step 2.b](einstein_ccl): CCL files - Define how this module interacts and interfaces with the larger Einstein Toolkit infrastructure 1. [Step 2.c](einstein_list): Add the C code to the Einstein Toolkit compilation list1. [Step 3](latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file Step 1: Initialize needed Python/NRPy+ modules \[Back to [top](toc)\]$$\label{initializenrpy}$$
###Code
# Step 1: Import needed core NRPy+ modules
from outputC import lhrh # NRPy+: Core C code output module
import finite_difference as fin # NRPy+: Finite difference C code generation module
import NRPy_param_funcs as par # NRPy+: Parameter interface
import grid as gri # NRPy+: Functions having to do with numerical grids
import loop as lp # NRPy+: Generate C code loops
import indexedexp as ixp # NRPy+: Symbolic indexed expression (e.g., tensors, vectors, etc.) support
import reference_metric as rfm # NRPy+: Reference metric support
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
import os, sys # Standard Python modules for multiplatform OS-level functions
import time # Standard Python module; useful for benchmarking
# Step 1a: Create directories for the thorn if they don't exist.
# Create directory for WaveToyNRPy thorn & subdirectories in case they don't exist.
outrootdir = "IDScalarWaveNRPy/"
cmd.mkdir(os.path.join(outrootdir))
outdir = os.path.join(outrootdir,"src") # Main C code output directory
cmd.mkdir(outdir)
# Step 1b: This is an Einstein Toolkit (ETK) thorn. Here we
# tell NRPy+ that gridfunction memory access will
# therefore be in the "ETK" style.
par.set_parval_from_str("grid::GridFuncMemAccess","ETK")
###Output
_____no_output_____
###Markdown
Step 2: Interfacing with the Einstein Toolkit \[Back to [top](toc)\]$$\label{einstein}$$ Step 2.a: Constructing the Einstein Toolkit C-code calling functions that include the C code kernels \[Back to [top](toc)\]$$\label{einstein_c}$$Using sympy, we construct the exact expressions for all scalar wave initial data currently supported in NRPy, documented in [Tutorial-ScalarWave.ipynb](Tutorial-ScalarWave.ipynb). We write the generated C codes into different C files, corresponding to the type of initial data the user may want to choose at run time. Note that the code below can easily be extended to include other types of initial data.
###Code
# Step 1c: Call the InitialData() function from within the
# ScalarWave/InitialData.py module.
import ScalarWave.InitialData as swid
# Step 1e: Call the InitialData() function to set up initial data.
# Options include:
# "PlaneWave": monochromatic (single frequency/wavelength) plane wave
# "SphericalGaussian": spherically symmetric Gaussian, with default stdev=3
ID_options = ["PlaneWave", "SphericalGaussian"]
for ID in ID_options:
gri.glb_gridfcs_list = []
# Within the ETK, the 3D gridfunctions x, y, and z store the
# Cartesian grid coordinates. Setting the gri.xx[] arrays
# to point to these gridfunctions forces NRPy+ to treat
# the Cartesian coordinate gridfunctions properly --
# reading them from memory as needed.
x,y,z = gri.register_gridfunctions("AUX",["x","y","z"])
rfm.xx[0] = x
rfm.xx[1] = y
rfm.xx[2] = z
swid.InitialData(Type=ID,
default_sigma=0.25,
default_k0=1.0,
default_k1=0.,
default_k2=0.)
# Step 1f: Register uu and vv gridfunctions so they can be written to by NRPy.
uu,vv = gri.register_gridfunctions("EVOL",["uu","vv"])
# Step 1g: Set the uu and vv gridfunctions to the uu_ID & vv_ID variables
# defined by InitialData_PlaneWave().
uu = swid.uu_ID
vv = swid.vv_ID
# Step 1h: Create the C code output kernel.
ScalarWave_ID_SymbExpressions = [\
lhrh(lhs=gri.gfaccess("out_gfs","uu"),rhs=uu),\
lhrh(lhs=gri.gfaccess("out_gfs","vv"),rhs=vv),]
ScalarWave_ID_CcodeKernel = fin.FD_outputC("returnstring",ScalarWave_ID_SymbExpressions)
ScalarWave_ID_looped = lp.loop(["i2","i1","i0"],["0","0","0"],["cctk_lsh[2]","cctk_lsh[1]","cctk_lsh[0]"],\
["1","1","1"],["#pragma omp parallel for","",""],"",\
ScalarWave_ID_CcodeKernel.replace("time","cctk_time"))
# Write the C code kernel to file.
with open(os.path.join(outdir,"ScalarWave_"+ID+"ID.h"), "w") as file:
file.write(str(ScalarWave_ID_looped))
###Output
_____no_output_____
###Markdown
Step 2. b: CCL files - Define how this module interacts and interfaces with the larger Einstein Toolkit infrastructure \[Back to [top](toc)\]$$\label{einstein_ccl}$$Writing a module ("thorn") within the Einstein Toolkit requires that three "ccl" files be constructed, all in the root directory of the thorn:1. `interface.ccl`: defines the gridfunction groups needed, and provides keywords denoting what this thorn provides and what it should inherit from other thorns. Specifically, this file governs the interaction between this thorn and others; more information can be found in the [official Einstein Toolkit documentation](https://einsteintoolkit.org/usersguide/UsersGuide.htmlx1-179000D2.2). With "implements", we give our thorn its unique name. By "inheriting" other thorns, we tell the Toolkit that we will rely on variables that exist and are declared "public" within those functions.
###Code
evol_gfs_list = []
for i in range(len(gri.glb_gridfcs_list)):
if gri.glb_gridfcs_list[i].gftype == "EVOL":
evol_gfs_list.append( gri.glb_gridfcs_list[i].name+"GF")
# NRPy+'s finite-difference code generator assumes gridfunctions
# are alphabetized; not sorting may result in unnecessary
# cache misses.
evol_gfs_list.sort()
with open(os.path.join(outrootdir,"interface.ccl"), "w") as file:
file.write("""
# With "implements", we give our thorn its unique name.
implements: IDScalarWaveNRPy
# By "inheriting" other thorns, we tell the Toolkit that we
# will rely on variables/function that exist within those
# functions.
inherits: WaveToyNRPy grid
""")
###Output
_____no_output_____
###Markdown
2. `param.ccl`: specifies free parameters within the thorn, enabling them to be set at runtime. It is required to provide allowed ranges and default values for each parameter. More information on this file's syntax can be found in the [official Einstein Toolkit documentation](https://einsteintoolkit.org/usersguide/UsersGuide.htmlx1-184000D2.3).
###Code
def keep_param__return_type(paramtuple):
keep_param = True # We'll not set some parameters in param.ccl;
# e.g., those that should be #define'd like M_PI.
typestring = ""
# Separate thorns within the ETK take care of grid/coordinate parameters;
# thus we ignore NRPy+ grid/coordinate parameters:
if paramtuple.module == "grid" or paramtuple.module == "reference_metric" or paramtuple.parname == "wavespeed":
keep_param = False
partype = paramtuple.type
if partype == "bool":
typestring += "BOOLEAN "
elif partype == "REAL":
if paramtuple.defaultval != 1e300: # 1e300 is a magic value indicating that the C parameter should be mutable
typestring += "CCTK_REAL "
else:
keep_param = False
elif partype == "int":
typestring += "CCTK_INT "
elif partype == "#define":
keep_param = False
elif partype == "char":
# FIXME: char/string parameter types should in principle be supported
print("Error: parameter "+paramtuple.module+"::"+paramtuple.parname+
" has unsupported type: \""+ paramtuple.type + "\"")
sys.exit(1)
else:
print("Error: parameter "+paramtuple.module+"::"+paramtuple.parname+
" has unsupported type: \""+ paramtuple.type + "\"")
sys.exit(1)
return keep_param, typestring
paramccl_str="""
# This param.ccl file was automatically generated by NRPy+.
# You are advised against modifying it directly; instead
# modify the Python code that generates it.
shares: grid
USES KEYWORD type
shares: WaveToyNRPy
USES REAL wavespeed
restricted:
CCTK_KEYWORD initial_data "Type of initial data"
{"""
for ID in ID_options:
paramccl_str +='''
"'''+ID+'''" :: "'''+ID+'"'
paramccl_str +='''
} "'''+ID+'''"
'''
paramccl_str +="""
restricted:
"""
for i in range(len(par.glb_Cparams_list)):
# keep_param is a boolean indicating whether we should accept or reject
# the parameter. singleparstring will contain the string indicating
# the variable type.
keep_param, singleparstring = keep_param__return_type(par.glb_Cparams_list[i])
if keep_param:
parname = par.glb_Cparams_list[i].parname
partype = par.glb_Cparams_list[i].type
singleparstring += parname + " \""+ parname +" (see NRPy+ for parameter definition)\"\n"
singleparstring += "{\n"
if partype != "bool":
singleparstring += " *:* :: \"All values accepted. NRPy+ does not restrict the allowed ranges of parameters yet.\"\n"
singleparstring += "} "+str(par.glb_Cparams_list[i].defaultval)+"\n\n"
paramccl_str += singleparstring
with open(os.path.join(outrootdir,"param.ccl"), "w") as file:
file.write(paramccl_str)
###Output
_____no_output_____
###Markdown
3. `schedule.ccl`: allocates storage for gridfunctions, defines how the thorn's functions should be scheduled in a broader simulation, and specifies the regions of memory written to or read from gridfunctions. More information on this file's syntax can be found in the [official Einstein Toolkit documentation](https://einsteintoolkit.org/usersguide/UsersGuide.htmlx1-187000D2.4). We specify here the standardized ETK "scheduling bins" in which we want each of our thorn's functions to run.
###Code
with open(os.path.join(outrootdir,"schedule.ccl"), "w") as file:
file.write("""
# This schedule.ccl file was automatically generated by NRPy+.
# You are advised against modifying it directly; instead
# modify the Python code that generates it.
if (CCTK_EQUALS (initial_data, "PlaneWave"))
{
schedule IDScalarWaveNRPy_param_check at CCTK_PARAMCHECK
{
LANG: C
OPTIONS: global
} "Check sanity of parameters"
}
schedule IDScalarWaveNRPy_InitialData at CCTK_INITIAL as WaveToy_InitialData
{
STORAGE: WaveToyNRPy::scalar_fields[3]
LANG: C
} "Initial data for 3D wave equation"
""")
###Output
_____no_output_____
###Markdown
Step 2.c: Add the C code to the Einstein Toolkit compilation list \[Back to [top](toc)\]$$\label{einstein_list}$$We will also need `make.code.defn`, which indicates the list of files that need to be compiled. This thorn only has the one C file to compile.
###Code
make_code_defn_list = []
def append_to_make_code_defn_list(filename):
if filename not in make_code_defn_list:
make_code_defn_list.append(filename)
return os.path.join(outdir,filename)
with open(append_to_make_code_defn_list("InitialData.c"),"w") as file:
file.write("""
#include <math.h>
#include <stdio.h>
#include <string.h>
#include "cctk.h"
#include "cctk_Parameters.h"
#include "cctk_Arguments.h"
void IDScalarWaveNRPy_param_check(CCTK_ARGUMENTS) {
DECLARE_CCTK_ARGUMENTS;
DECLARE_CCTK_PARAMETERS;
if (kk0 == 0 && kk1 == 0 && kk2 == 0) {
CCTK_WARN(0,"kk0==kk1==kk2==0: Zero wave vector cannot be normalized. Set one of the kk's to be != 0.");
}
}
void IDScalarWaveNRPy_InitialData(CCTK_ARGUMENTS)
{
DECLARE_CCTK_ARGUMENTS
DECLARE_CCTK_PARAMETERS
const CCTK_REAL *xGF = x;
const CCTK_REAL *yGF = y;
const CCTK_REAL *zGF = z;
if (CCTK_EQUALS (initial_data, "PlaneWave")) {
#include "ScalarWave_PlaneWaveID.h"
} else if (CCTK_EQUALS (initial_data, "SphericalGaussian")) {
#include "ScalarWave_SphericalGaussianID.h"
}
}
""")
with open(os.path.join(outdir,"make.code.defn"), "w") as file:
file.write("""
# Main make.code.defn file for thorn WaveToyNRPy
# Source files in this directory
SRCS =""")
filestring = ""
for i in range(len(make_code_defn_list)):
filestring += " "+make_code_defn_list[i]
if i != len(make_code_defn_list)-1:
filestring += " \\\n"
else:
filestring += "\n"
file.write(filestring)
###Output
_____no_output_____
###Markdown
Step 3: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](toc)\]$$\label{latex_pdf_output}$$The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename[Tutorial-ETK_thorn-IDScalarWaveNRPy.pdf](Tutorial-ETK_thorn-IDScalarWaveNRPy.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
###Code
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
cmd.output_Jupyter_notebook_to_LaTeXed_PDF("Tutorial-ETK_thorn-IDScalarWaveNRPy.ipynb")
###Output
[NbConvertApp] WARNING | pattern 'Tutorial-ETK_thorn-IDScalarWaveNRPy.ipynb.ipynb' matched no files
Created Tutorial-ETK_thorn-IDScalarWaveNRPy.ipynb.tex, and compiled LaTeX
file to PDF file Tutorial-ETK_thorn-IDScalarWaveNRPy.ipynb.pdf
###Markdown
window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'UA-59152712-8'); IDScalarWaveNRPy: An Einstein Toolkit Initial Data Thorn for the Scalar Wave Equation Author: Zach Etienne Formatting improvements courtesy Brandon Clark[comment]: (Abstract: TODO)[comment]: (Notebook Status and Validation Notes: TODO) NRPy+ Source Code for this module: [ScalarWave/InitialData_PlaneWave.py](../edit/ScalarWave/InitialData_PlaneWave.py) [\[**tutorial**\]](Tutorial-ScalarWave.ipynb) Contructs the SymPy expressions for plane-wave initial data Introduction:In this part of the tutorial, we will construct an Einstein Toolkit (ETK) thorn (module) that will set up *initial data* for the scalar wave initial value problem. In a [previous tutorial notebook](Tutorial-ScalarWave.ipynb), we used NRPy+ to contruct the SymPy expressions for plane-wave initial data. This thorn is largely based on and should function similarly to the $\text{IDScalarWaveC}$ thorn included in the Einstein Toolkit (ETK) $\text{CactusWave}$ arrangement.We will construct this thorn in two steps.1. Call on NRPy+ to convert the SymPy expressions for the initial data into one C-code kernel.1. Write the C code and linkages to the Einstein Toolkit infrastructure (i.e., the .ccl files) to complete this Einstein Toolkit module. Table of Contents$$\label{toc}$$ This notebook is organized as follows1. [Step 1](initializenrpy): Call on NRPy+ to convert the SymPy expression for the scalar wave initial data into a C-code kernel1. [Step 2](einstein): Interfacing with the Einstein Toolkit 1. [Step 2.a](einstein_c): Constructing the Einstein Toolkit C-code calling functions that include the C code kernels 1. [Step 2.b](einstein_ccl): CCL files - Define how this module interacts and interfaces with the larger Einstein Toolkit infrastructure 1. [Step 2.c](einstein_list): Add the C code to the Einstein Toolkit compilation list1. [Step 3](latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file Step 1: Call on NRPy+ to convert the SymPy expression for the scalar wave initial data into a C-code kernel \[Back to [top](toc)\]$$\label{initializenrpy}$$After importing the core modules, since we are writing an ETK thorn, we'll need to set `"grid::GridFuncMemAccess"` to `"ETK"`. SymPy expressions for plane wave initial data are written inside [ScalarWave/InitialData_PlaneWave.py](../edit/ScalarWave/InitialData_PlaneWave.py), and we simply import them for use here.
###Code
# Step 1: Call on NRPy+ to convert the SymPy expression
# for the scalar wave initial data into a C-code kernel
# Step 1a: Import needed NRPy+ core modules:
import NRPy_param_funcs as par
import indexedexp as ixp
import grid as gri
import finite_difference as fin
from outputC import *
import loop
# Step 1b: This is an Einstein Toolkit (ETK) thorn. Here we
# tell NRPy+ that gridfunction memory access will
# therefore be in the "ETK" style.
par.set_parval_from_str("grid::GridFuncMemAccess","ETK")
# Step 1c: Call the InitialData_PlaneWave() function from within the
# ScalarWave/InitialData_PlaneWave.py module.
import ScalarWave.InitialData_PlaneWave as swid
# Step 1d: Within the ETK, the 3D gridfunctions x, y, and z store the
# Cartesian grid coordinates. Setting the gri.xx[] arrays
# to point to these gridfunctions forces NRPy+ to treat
# the Cartesian coordinate gridfunctions properly --
# reading them from memory as needed.
x,y,z = gri.register_gridfunctions("AUX",["x","y","z"])
gri.xx[0] = x
gri.xx[1] = y
gri.xx[2] = z
# Step 1e: Set up the plane wave initial data. This sets uu_ID and vv_ID.
swid.InitialData_PlaneWave()
# Step 1f: Register uu and vv gridfunctions so they can be written to by NRPy.
uu,vv = gri.register_gridfunctions("EVOL",["uu","vv"])
# Step 1g: Set the uu and vv gridfunctions to the uu_ID & vv_ID variables
# defined by InitialData_PlaneWave().
uu = swid.uu_ID
vv = swid.vv_ID
# Step 1h: Create the C code output kernel.
scalar_PWID_to_print = [\
lhrh(lhs=gri.gfaccess("out_gfs","uu"),rhs=uu),\
lhrh(lhs=gri.gfaccess("out_gfs","vv"),rhs=vv),]
scalar_PWID_CcodeKernel = fin.FD_outputC("returnstring",scalar_PWID_to_print)
scalar_PWID_looped = loop.loop(["i2","i1","i0"],["1","1","1"],["cctk_lsh[2]-1","cctk_lsh[1]-1","cctk_lsh[0]-1"],\
["1","1","1"],["#pragma omp parallel for","",""],"",\
scalar_PWID_CcodeKernel.replace("time","cctk_time"))
# Step 1i: Create directories for the thorn if they don't exist.
!mkdir IDScalarWaveNRPy 2>/dev/null # 2>/dev/null: Don't throw an error if the directory already exists.
!mkdir IDScalarWaveNRPy/src 2>/dev/null # 2>/dev/null: Don't throw an error if the directory already exists.
# Step 1j: Write the C code kernel to file.
with open("IDScalarWaveNRPy/src/ScalarWave_PWID.h", "w") as file:
file.write(str(scalar_PWID_looped))
###Output
_____no_output_____
###Markdown
Step 2: Interfacing with the Einstein Toolkit \[Back to [top](toc)\]$$\label{einstein}$$ Step 2.a: Constructing the Einstein Toolkit C-code calling functions that include the C code kernels \[Back to [top](toc)\]$$\label{einstein_c}$$We will write another C file with the functions we need here.
###Code
%%writefile IDScalarWaveNRPy/src/InitialData.c
#include <math.h>
#include <stdio.h>
#include "cctk.h"
#include "cctk_Parameters.h"
#include "cctk_Arguments.h"
void IDScalarWaveNRPy_param_check(CCTK_ARGUMENTS) {
DECLARE_CCTK_ARGUMENTS;
DECLARE_CCTK_PARAMETERS;
if (kk0 == 0 && kk1 == 0 && kk2 == 0) {
CCTK_WARN(0,"kk0==kk1==kk2==0: Zero wave vector cannot be normalized. Set one of the kk's to be != 0.");
}
}
void IDScalarWaveNRPy_InitialData(CCTK_ARGUMENTS)
{
DECLARE_CCTK_ARGUMENTS
DECLARE_CCTK_PARAMETERS
const CCTK_REAL *xGF = x;
const CCTK_REAL *yGF = y;
const CCTK_REAL *zGF = z;
#include "ScalarWave_PWID.h"
}
###Output
Writing IDScalarWaveNRPy/src/InitialData.c
###Markdown
Step 2. b: CCL files - Define how this module interacts and interfaces with the larger Einstein Toolkit infrastructure \[Back to [top](toc)\]$$\label{einstein_ccl}$$Writing a module ("thorn") within the Einstein Toolkit requires that three "ccl" files be constructed, all in the root directory of the thorn:1. `interface.ccl`: defines the gridfunction groups needed, and provides keywords denoting what this thorn provides and what it should inherit from other thorns. Specifically, this file governs the interaction between this thorn and others; more information can be found in the [official Einstein Toolkit documentation](http://einsteintoolkit.org/usersguide/UsersGuidech12.htmlx17-178000D2.2). With "implements", we give our thorn its unique name. By "inheriting" other thorns, we tell the Toolkit that we will rely on variables that exist and are declared "public" within those functions.
###Code
%%writefile IDScalarWaveNRPy/interface.ccl
implements: IDScalarWaveNRPy
inherits: WaveToyNRPy grid
###Output
Writing IDScalarWaveNRPy/interface.ccl
###Markdown
2. `param.ccl`: specifies free parameters within the thorn, enabling them to be set at runtime. It is required to provide allowed ranges and default values for each parameter. More information on this file's syntax can be found in the [official Einstein Toolkit documentation](http://einsteintoolkit.org/usersguide/UsersGuidech12.htmlx17-183000D2.3).
###Code
%%writefile IDScalarWaveNRPy/param.ccl
shares: grid
USES KEYWORD type
shares: WaveToyNRPy
USES REAL wavespeed
restricted:
CCTK_KEYWORD initial_data "Type of initial data"
{
"plane" :: "Plane wave"
} "plane"
restricted:
CCTK_REAL kk0 "The wave number in the x-direction"
{
*:* :: "No restriction"
} 4.0
restricted:
CCTK_REAL kk1 "The wave number in the y-direction"
{
*:* :: "No restriction"
} 0.0
restricted:
CCTK_REAL kk2 "The wave number in the z-direction"
{
*:* :: "No restriction"
} 0.0
###Output
Writing IDScalarWaveNRPy/param.ccl
###Markdown
3. `schedule.ccl`: allocates storage for gridfunctions, defines how the thorn's functions should be scheduled in a broader simulation, and specifies the regions of memory written to or read from gridfunctions. `schedule.ccl`'s official documentation may be found [here](http://einsteintoolkit.org/usersguide/UsersGuidech12.htmlx17-186000D2.4). We specify here the standardized ETK "scheduling bins" in which we want each of our thorn's functions to run.
###Code
%%writefile IDScalarWaveNRPy/schedule.ccl
schedule IDScalarWaveNRPy_param_check at CCTK_PARAMCHECK
{
LANG: C
OPTIONS: global
} "Check sanity of parameters"
schedule IDScalarWaveNRPy_InitialData at CCTK_INITIAL as WaveToy_InitialData
{
STORAGE: WaveToyNRPy::scalar_fields[3]
LANG: C
READS: grid::x(Everywhere)
READS: grid::y(Everywhere)
READS: grid::y(Everywhere)
WRITES: uuGF(Everywhere)
WRITES: vvGF(Everywhere)
} "Initial data for 3D wave equation"
###Output
Writing IDScalarWaveNRPy/schedule.ccl
###Markdown
Step 2.c: Add the C code to the Einstein Toolkit compilation list \[Back to [top](toc)\]$$\label{einstein_list}$$We will also need `make.code.defn`, which indicates the list of files that need to be compiled. This thorn only has the one C file to compile.
###Code
%%writefile IDScalarWaveNRPy/src/make.code.defn
SRCS = InitialData.c
###Output
Writing IDScalarWaveNRPy/src/make.code.defn
###Markdown
Step 3: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](toc)\]$$\label{latex_pdf_output}$$The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename[Tutorial-ETK_thorn-IDScalarWaveNRPy.pdf](Tutorial-ETK_thorn-IDScalarWaveNRPy.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
###Code
!jupyter nbconvert --to latex --template latex_nrpy_style.tplx --log-level='WARN' Tutorial-ETK_thorn-IDScalarWaveNRPy.ipynb
!pdflatex -interaction=batchmode Tutorial-ETK_thorn-IDScalarWaveNRPy.tex
!pdflatex -interaction=batchmode Tutorial-ETK_thorn-IDScalarWaveNRPy.tex
!pdflatex -interaction=batchmode Tutorial-ETK_thorn-IDScalarWaveNRPy.tex
!rm -f Tut*.out Tut*.aux Tut*.log
###Output
[pandoc warning] Duplicate link reference `[comment]' "source" (line 17, column 1)
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
###Markdown
IDScalarWaveNRPy: An Einstein Toolkit Initial Data Thorn for the Scalar Wave Equation Author: Zach Etienne Formatting improvements courtesy Brandon Clark NRPy+ Source Code for this module: [ScalarWave/InitialData_PlaneWave.py](../edit/ScalarWave/InitialData_PlaneWave.py)[\[**tutorial**\]](Tutorial-ScalarWave.ipynb) Contructs the SymPy expressions for plane-wave initial data Introduction:In this part of the tutorial, we will construct an Einstein Toolkit (ETK) thorn (module) that will set up *initial data* for the scalar wave initial value problem. In a [previous tutorial module](Tutorial-ScalarWave.ipynb), we used NRPy+ to contruct the SymPy expressions for plane-wave initial data. This thorn is largely based on and should function similarly to the $\text{IDScalarWaveC}$ thorn included in the Einstein Toolkit (ETK) $\text{CactusWave}$ arrangement.We will construct this thorn in two steps.1. Call on NRPy+ to convert the SymPy expressions for the initial data into one C-code kernel.1. Write the C code and linkages to the Einstein Toolkit infrastructure (i.e., the .ccl files) to complete this Einstein Toolkit module. Table of Contents$$\label{toc}$$ This module is organized as follows1. [Step 1](initializenrpy): Call on NRPy+ to convert the SymPy expression for the scalar wave initial data into a C-code kernel1. [Step 2](einstein): Interfacing with the Einstein Toolkit 1. [Step 2.a](einstein_c): Constructing the Einstein Toolkit C-code calling functions that include the C code kernels 1. [Step 2.b](einstein_ccl): CCL files - Define how this module interacts and interfaces with the larger Einstein Toolkit infrastructure 1. [Step 2.c](einstein_list): Add the C code to the Einstein Toolkit compilation list1. [Step 3](latex_pdf_output): Output this module to $\LaTeX$-formatted PDF Step 1: Call on NRPy+ to convert the SymPy expression for the scalar wave initial data into a C-code kernel \[Back to [top](toc)\]$$\label{initializenrpy}$$After importing the core modules, we will set $\text{GridFuncMemAccess}$ to $\text{ETK}$. SymPy expressions for plane wave initial data are written inside [ScalarWave/InitialData_PlaneWave.py](../edit/ScalarWave/InitialData_PlaneWave.py), and we simply import them for use here.
###Code
# Step 1: Call on NRPy+ to convert the SymPy expression
# for the scalar wave initial data into a C-code kernel
# Step 1a: Import needed NRPy+ core modules:
import NRPy_param_funcs as par
import indexedexp as ixp
import grid as gri
import finite_difference as fin
from outputC import *
import loop
# Step 1b: This is an Einstein Toolkit (ETK) thorn. Here we
# tell NRPy+ that gridfunction memory access will
# therefore be in the "ETK" style.
par.set_parval_from_str("grid::GridFuncMemAccess","ETK")
# Step 1c: Call the InitialData_PlaneWave() function from within the
# ScalarWave/InitialData_PlaneWave.py module.
import ScalarWave.InitialData_PlaneWave as swid
# Step 1d: Within the ETK, the 3D gridfunctions x, y, and z store the
# Cartesian grid coordinates. Setting the gri.xx[] arrays
# to point to these gridfunctions forces NRPy+ to treat
# the Cartesian coordinate gridfunctions properly --
# reading them from memory as needed.
x,y,z = gri.register_gridfunctions("AUX",["x","y","z"])
gri.xx[0] = x
gri.xx[1] = y
gri.xx[2] = z
# Step 1e: Set up the plane wave initial data. This sets uu_ID and vv_ID.
swid.InitialData_PlaneWave()
# Step 1f: Register uu and vv gridfunctions so they can be written to by NRPy.
uu,vv = gri.register_gridfunctions("EVOL",["uu","vv"])
# Step 1g: Set the uu and vv gridfunctions to the uu_ID & vv_ID variables
# defined by InitialData_PlaneWave().
uu = swid.uu_ID
vv = swid.vv_ID
# Step 1h: Create the C code output kernel.
scalar_PWID_to_print = [\
lhrh(lhs=gri.gfaccess("out_gfs","uu"),rhs=uu),\
lhrh(lhs=gri.gfaccess("out_gfs","vv"),rhs=vv),]
scalar_PWID_CcodeKernel = fin.FD_outputC("returnstring",scalar_PWID_to_print)
scalar_PWID_looped = loop.loop(["i2","i1","i0"],["1","1","1"],["cctk_lsh[2]-1","cctk_lsh[1]-1","cctk_lsh[0]-1"],\
["1","1","1"],["#pragma omp parallel for","",""],"",\
scalar_PWID_CcodeKernel.replace("time","cctk_time"))
# Step 1i: Create directories for the thorn if they don't exist.
!mkdir IDScalarWaveNRPy 2>/dev/null # 2>/dev/null: Don't throw an error if the directory already exists.
!mkdir IDScalarWaveNRPy/src 2>/dev/null # 2>/dev/null: Don't throw an error if the directory already exists.
# Step 1j: Write the C code kernel to file.
with open("IDScalarWaveNRPy/src/ScalarWave_PWID.h", "w") as file:
file.write(str(scalar_PWID_looped))
###Output
_____no_output_____
###Markdown
Step 2: Interfacing with the Einstein Toolkit \[Back to [top](toc)\]$$\label{einstein}$$ Step 2.a: Constructing the Einstein Toolkit C-code calling functions that include the C code kernels \[Back to [top](toc)\]$$\label{einstein_c}$$We will write another C file with the functions we need here.
###Code
%%writefile IDScalarWaveNRPy/src/InitialData.c
#include <math.h>
#include <stdio.h>
#include "cctk.h"
#include "cctk_Parameters.h"
#include "cctk_Arguments.h"
void IDScalarWaveNRPy_param_check(CCTK_ARGUMENTS) {
DECLARE_CCTK_ARGUMENTS;
DECLARE_CCTK_PARAMETERS;
if (kk0 == 0 && kk1 == 0 && kk2 == 0) {
CCTK_WARN(0,"kk0==kk1==kk2==0: Zero wave vector cannot be normalized. Set one of the kk's to be != 0.");
}
}
void IDScalarWaveNRPy_InitialData(CCTK_ARGUMENTS)
{
DECLARE_CCTK_ARGUMENTS
DECLARE_CCTK_PARAMETERS
const CCTK_REAL *xGF = x;
const CCTK_REAL *yGF = y;
const CCTK_REAL *zGF = z;
#include "ScalarWave_PWID.h"
}
###Output
Overwriting IDScalarWaveNRPy/src/InitialData.c
###Markdown
Step 2. b: CCL files - Define how this module interacts and interfaces with the larger Einstein Toolkit infrastructure \[Back to [top](toc)\]$$\label{einstein_ccl}$$Writing a module ("thorn") within the Einstein Toolkit requires that three "ccl" files be constructed, all in the root directory of the thorn:1. $\text{interface.ccl}$: defines the gridfunction groups needed, and provides keywords denoting what this thorn provides and what it should inherit from other thorns. Specifically, this file governs the interaction between this thorn and others; more information can be found in the [official Einstein Toolkit documentation](http://cactuscode.org/documentation/referencemanual/ReferenceManualch8.htmlx12-260000C2.2). With "implements", we give our thorn its unique name. By "inheriting" other thorns, we tell the Toolkit that we will rely on variables that exist and are declared "public" within those functions.
###Code
%%writefile IDScalarWaveNRPy/interface.ccl
implements: IDScalarWaveNRPy
inherits: WaveToyNRPy grid
###Output
Overwriting IDScalarWaveNRPy/interface.ccl
###Markdown
2. $\text{param.ccl}$: specifies free parameters within the thorn, enabling them to be set at runtime. It is required to provide allowed ranges and default values for each parameter. More information on this file's syntax can be found in the [official Einstein Toolkit documentation](http://cactuscode.org/documentation/referencemanual/ReferenceManualch8.htmlx12-265000C2.3).
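For orientation, once the thorn is compiled into an ETK executable, these parameters are set at runtime in a Cactus parameter (.par) file. The fragment below is a hypothetical, untested sketch; the thorn list and numerical values are illustrative only and are not generated by this notebook:

```
# Illustrative .par fragment (not part of this thorn's sources)
ActiveThorns = "IDScalarWaveNRPy WaveToyNRPy"   # plus the usual driver/grid/IO thorns
IDScalarWaveNRPy::initial_data = "plane"
IDScalarWaveNRPy::kk0          = 1.0
IDScalarWaveNRPy::kk1          = 0.0
IDScalarWaveNRPy::kk2          = 0.0
WaveToyNRPy::wavespeed         = 1.0
```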
###Code
%%writefile IDScalarWaveNRPy/param.ccl
shares: grid
USES KEYWORD type
shares: WaveToyNRPy
USES REAL wavespeed
restricted:
CCTK_KEYWORD initial_data "Type of initial data"
{
"plane" :: "Plane wave"
} "plane"
restricted:
CCTK_REAL kk0 "The wave number in the x-direction"
{
*:* :: "No restriction"
} 4.0
restricted:
CCTK_REAL kk1 "The wave number in the y-direction"
{
*:* :: "No restriction"
} 0.0
restricted:
CCTK_REAL kk2 "The wave number in the z-direction"
{
*:* :: "No restriction"
} 0.0
###Output
Overwriting IDScalarWaveNRPy/param.ccl
###Markdown
3. $\text{schedule.ccl}$: allocates storage for gridfunctions, defines how the thorn's functions should be scheduled in a broader simulation, and specifies the regions of memory written to or read from gridfunctions. $\text{schedule.ccl}$'s official documentation may be found [here](http://cactuscode.org/documentation/referencemanual/ReferenceManualch8.htmlx12-268000C2.4). We specify here the standardized ETK "scheduling bins" in which we want each of our thorn's functions to run.
###Code
%%writefile IDScalarWaveNRPy/schedule.ccl
schedule IDScalarWaveNRPy_param_check at CCTK_PARAMCHECK
{
LANG: C
OPTIONS: global
} "Check sanity of parameters"
schedule IDScalarWaveNRPy_InitialData at CCTK_INITIAL as WaveToy_InitialData
{
STORAGE: WaveToyNRPy::scalar_fields[3]
LANG: C
READS: grid::x(Everywhere)
READS: grid::y(Everywhere)
READS: grid::z(Everywhere)
WRITES: uuGF(Everywhere)
WRITES: vvGF(Everywhere)
} "Initial data for 3D wave equation"
###Output
Overwriting IDScalarWaveNRPy/schedule.ccl
###Markdown
Step 2.c: Add the C code to the Einstein Toolkit compilation list \[Back to [top](toc)\]$$\label{einstein_list}$$We will also need $\text{make.code.defn}$, which indicates the list of files that need to be compiled. This thorn only has the one C file to compile.
###Code
%%writefile IDScalarWaveNRPy/src/make.code.defn
SRCS = InitialData.c
###Output
Overwriting IDScalarWaveNRPy/src/make.code.defn
###Markdown
Step 3: Output this module to $\LaTeX$-formatted PDF file \[Back to [top](toc)\]$$\label{latex_pdf_output}$$The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename[Tutorial-ETK_thorn-IDScalarWaveNRPy.pdf](Tutorial-ETK_thorn-IDScalarWaveNRPy.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
###Code
!jupyter nbconvert --to latex --template latex_nrpy_style.tplx Tutorial-ETK_thorn-IDScalarWaveNRPy.ipynb
!pdflatex -interaction=batchmode Tutorial-ETK_thorn-IDScalarWaveNRPy.tex
!pdflatex -interaction=batchmode Tutorial-ETK_thorn-IDScalarWaveNRPy.tex
!pdflatex -interaction=batchmode Tutorial-ETK_thorn-IDScalarWaveNRPy.tex
!rm -f Tut*.out Tut*.aux Tut*.log
###Output
[NbConvertApp] Converting notebook Tutorial-ETK_thorn-IDScalarWaveNRPy.ipynb to latex
[NbConvertApp] Writing 43836 bytes to Tutorial-ETK_thorn-IDScalarWaveNRPy.tex
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
###Markdown
IDScalarWaveNRPy: An Einstein Toolkit Initial Data Thorn for the Scalar Wave Equation Author: Zach Etienne Formatting improvements courtesy Brandon Clark**Notebook Status and Validation Notes: TODO** NRPy+ Source Code for this module: [ScalarWave/InitialData.py](../edit/ScalarWave/InitialData.py) [\[**tutorial**\]](Tutorial-ScalarWave.ipynb) Constructs the SymPy expressions for plane-wave initial data Introduction:In this part of the tutorial, we will construct an Einstein Toolkit (ETK) thorn (module) that will set up *initial data* for the scalar wave initial value problem. In a [previous tutorial notebook](Tutorial-ScalarWave.ipynb), we used NRPy+ to construct the SymPy expressions for plane-wave initial data. This thorn is largely based on and should function similarly to the $\text{IDScalarWaveC}$ thorn included in the Einstein Toolkit (ETK) $\text{CactusWave}$ arrangement.We will construct this thorn in two steps.1. Call on NRPy+ to convert the SymPy expressions for the initial data into one C-code kernel.1. Write the C code and linkages to the Einstein Toolkit infrastructure (i.e., the .ccl files) to complete this Einstein Toolkit module. Table of Contents$$\label{toc}$$ This notebook is organized as follows1. [Step 1](initializenrpy): Call on NRPy+ to convert the SymPy expression for the scalar wave initial data into a C-code kernel1. [Step 2](einstein): Interfacing with the Einstein Toolkit 1. [Step 2.a](einstein_c): Constructing the Einstein Toolkit C-code calling functions that include the C code kernels 1. [Step 2.b](einstein_ccl): CCL files - Define how this module interacts and interfaces with the larger Einstein Toolkit infrastructure 1. [Step 2.c](einstein_list): Add the C code to the Einstein Toolkit compilation list1. [Step 3](latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file Step 1: Call on NRPy+ to convert the SymPy expression for the scalar wave initial data into a C-code kernel \[Back to [top](toc)\]$$\label{initializenrpy}$$After importing the core modules, since we are writing an ETK thorn, we'll need to set `"grid::GridFuncMemAccess"` to `"ETK"`. SymPy expressions for plane wave initial data are written inside [ScalarWave/InitialData.py](../edit/ScalarWave/InitialData.py), and we simply import them for use here.
###Code
# Step 1: Call on NRPy+ to convert the SymPy expression
# for the scalar wave initial data into a C-code kernel
# Step 1a: Import needed NRPy+ core modules:
import NRPy_param_funcs as par # NRPy+: Parameter interface
import indexedexp as ixp # NRPy+: Symbolic indexed expression (e.g., tensors, vectors, etc.) support
import grid as gri # NRPy+: Functionality for handling numerical grids
import finite_difference as fin # NRPy+: Finite difference C code generation module
from outputC import lhrh # NRPy+: Core C code output module
import loop # NRPy+: Generate C code loops
# Step 1b: This is an Einstein Toolkit (ETK) thorn. Here we
# tell NRPy+ that gridfunction memory access will
# therefore be in the "ETK" style.
par.set_parval_from_str("grid::GridFuncMemAccess","ETK")
# Step 1c: Call the InitialData() function from within the
# ScalarWave/InitialData.py module.
import ScalarWave.InitialData as swid
# Step 1d: Within the ETK, the 3D gridfunctions x, y, and z store the
# Cartesian grid coordinates. Setting the gri.xx[] arrays
# to point to these gridfunctions forces NRPy+ to treat
# the Cartesian coordinate gridfunctions properly --
# reading them from memory as needed.
x,y,z = gri.register_gridfunctions("AUX",["x","y","z"])
gri.xx[0] = x
gri.xx[1] = y
gri.xx[2] = z
# Step 1e: Set up the plane wave initial data. This sets uu_ID and vv_ID.
swid.InitialData(CoordSystem="Cartesian",Type="PlaneWave")
# Step 1f: Register uu and vv gridfunctions so they can be written to by NRPy.
uu,vv = gri.register_gridfunctions("EVOL",["uu","vv"])
# Step 1g: Set the uu and vv gridfunctions to the uu_ID & vv_ID variables
# defined by InitialData().
uu = swid.uu_ID
vv = swid.vv_ID
# Step 1h: Create the C code output kernel.
scalar_PWID_to_print = [\
lhrh(lhs=gri.gfaccess("out_gfs","uu"),rhs=uu),\
lhrh(lhs=gri.gfaccess("out_gfs","vv"),rhs=vv),]
scalar_PWID_CcodeKernel = fin.FD_outputC("returnstring",scalar_PWID_to_print)
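# Note (added for clarity): the bounds below run from 0 up to cctk_lsh[] in each direction,
#   so this kernel fills every point of the local grid patch, boundary points included;
#   the "#pragma omp parallel for" is applied to the outermost (i2) loop.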
scalar_PWID_looped = loop.loop(["i2","i1","i0"],["0","0","0"],["cctk_lsh[2]","cctk_lsh[1]","cctk_lsh[0]"],\
["1","1","1"],["#pragma omp parallel for","",""],"",\
scalar_PWID_CcodeKernel.replace("time","cctk_time"))
# Step 1i: Create directories for the thorn if they don't exist.
!mkdir IDScalarWaveNRPy 2>/dev/null # 2>/dev/null: Don't throw an error if the directory already exists.
!mkdir IDScalarWaveNRPy/src 2>/dev/null # 2>/dev/null: Don't throw an error if the directory already exists.
# Step 1j: Write the C code kernel to file.
with open("IDScalarWaveNRPy/src/ScalarWave_PWID.h", "w") as file:
file.write(str(scalar_PWID_looped))
###Output
_____no_output_____
###Markdown
Step 2: Interfacing with the Einstein Toolkit \[Back to [top](toc)\]$$\label{einstein}$$ Step 2.a: Constructing the Einstein Toolkit C-code calling functions that include the C code kernels \[Back to [top](toc)\]$$\label{einstein_c}$$We will write another C file with the functions we need here.
###Code
%%writefile IDScalarWaveNRPy/src/InitialData.c
#include <math.h>
#include <stdio.h>
#include "cctk.h"
#include "cctk_Parameters.h"
#include "cctk_Arguments.h"
void IDScalarWaveNRPy_param_check(CCTK_ARGUMENTS) {
DECLARE_CCTK_ARGUMENTS;
DECLARE_CCTK_PARAMETERS;
if (kk0 == 0 && kk1 == 0 && kk2 == 0) {
CCTK_WARN(0,"kk0==kk1==kk2==0: Zero wave vector cannot be normalized. Set one of the kk's to be != 0.");
}
}
void IDScalarWaveNRPy_InitialData(CCTK_ARGUMENTS)
{
DECLARE_CCTK_ARGUMENTS
DECLARE_CCTK_PARAMETERS
const CCTK_REAL *xGF = x;
const CCTK_REAL *yGF = y;
const CCTK_REAL *zGF = z;
#include "ScalarWave_PWID.h"
}
###Output
Writing IDScalarWaveNRPy/src/InitialData.c
###Markdown
Step 2. b: CCL files - Define how this module interacts and interfaces with the larger Einstein Toolkit infrastructure \[Back to [top](toc)\]$$\label{einstein_ccl}$$Writing a module ("thorn") within the Einstein Toolkit requires that three "ccl" files be constructed, all in the root directory of the thorn:1. `interface.ccl`: defines the gridfunction groups needed, and provides keywords denoting what this thorn provides and what it should inherit from other thorns. Specifically, this file governs the interaction between this thorn and others; more information can be found in the [official Einstein Toolkit documentation](http://einsteintoolkit.org/usersguide/UsersGuidech12.htmlx17-178000D2.2). With "implements", we give our thorn its unique name. By "inheriting" other thorns, we tell the Toolkit that we will rely on variables that exist and are declared "public" within those functions.
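For reference only: because this thorn inherits its gridfunctions from WaveToyNRPy, its interface.ccl stays minimal. A thorn that owned its own gridfunctions would instead declare a group here, roughly along these lines (hypothetical sketch, not part of this thorn):

```
# Hypothetical group declaration -- this thorn does NOT need it,
# since the group below actually lives in WaveToyNRPy.
CCTK_REAL scalar_fields type=GF timelevels=3
{
  uuGF, vvGF
} "Evolved scalar wave gridfunctions"
```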
###Code
%%writefile IDScalarWaveNRPy/interface.ccl
implements: IDScalarWaveNRPy
inherits: WaveToyNRPy grid
###Output
Writing IDScalarWaveNRPy/interface.ccl
###Markdown
2. `param.ccl`: specifies free parameters within the thorn, enabling them to be set at runtime. It is required to provide allowed ranges and default values for each parameter. More information on this file's syntax can be found in the [official Einstein Toolkit documentation](http://einsteintoolkit.org/usersguide/UsersGuidech12.htmlx17-183000D2.3).
###Code
%%writefile IDScalarWaveNRPy/param.ccl
shares: grid
USES KEYWORD type
shares: WaveToyNRPy
USES REAL wavespeed
restricted:
CCTK_KEYWORD initial_data "Type of initial data"
{
"plane" :: "Plane wave"
} "plane"
restricted:
CCTK_REAL kk0 "The wave number in the x-direction"
{
*:* :: "No restriction"
} 4.0
restricted:
CCTK_REAL kk1 "The wave number in the y-direction"
{
*:* :: "No restriction"
} 0.0
restricted:
CCTK_REAL kk2 "The wave number in the z-direction"
{
*:* :: "No restriction"
} 0.0
###Output
Writing IDScalarWaveNRPy/param.ccl
###Markdown
3. `schedule.ccl`: allocates storage for gridfunctions, defines how the thorn's functions should be scheduled in a broader simulation, and specifies the regions of memory written to or read from gridfunctions. `schedule.ccl`'s official documentation may be found [here](http://einsteintoolkit.org/usersguide/UsersGuidech12.htmlx17-186000D2.4). We specify here the standardized ETK "scheduling bins" in which we want each of our thorn's functions to run.
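One detail worth noting: scheduling the routine "as WaveToy_InitialData" gives it an alias that other thorns can order themselves against. A hypothetical example of such ordering (the thorn and function names below are invented for illustration):

```
# Hypothetical fragment from some other thorn's schedule.ccl
schedule MyDiagnostics_Init at CCTK_INITIAL after WaveToy_InitialData
{
  LANG: C
} "Runs only after the scalar wave initial data has been set"
```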
###Code
%%writefile IDScalarWaveNRPy/schedule.ccl
schedule IDScalarWaveNRPy_param_check at CCTK_PARAMCHECK
{
LANG: C
OPTIONS: global
} "Check sanity of parameters"
schedule IDScalarWaveNRPy_InitialData at CCTK_INITIAL as WaveToy_InitialData
{
STORAGE: WaveToyNRPy::scalar_fields[3]
LANG: C
READS: grid::x(Everywhere)
READS: grid::y(Everywhere)
READS: grid::z(Everywhere)
WRITES: uuGF(Everywhere)
WRITES: vvGF(Everywhere)
} "Initial data for 3D wave equation"
###Output
Writing IDScalarWaveNRPy/schedule.ccl
###Markdown
Step 2.c: Add the C code to the Einstein Toolkit compilation list \[Back to [top](toc)\]$$\label{einstein_list}$$We will also need `make.code.defn`, which indicates the list of files that need to be compiled. This thorn only has the one C file to compile.
###Code
%%writefile IDScalarWaveNRPy/src/make.code.defn
SRCS = InitialData.c
###Output
Writing IDScalarWaveNRPy/src/make.code.defn
###Markdown
Step 3: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](toc)\]$$\label{latex_pdf_output}$$The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename[Tutorial-ETK_thorn-IDScalarWaveNRPy.pdf](Tutorial-ETK_thorn-IDScalarWaveNRPy.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
###Code
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
cmd.output_Jupyter_notebook_to_LaTeXed_PDF("Tutorial-ETK_thorn-IDScalarWaveNRPy")
###Output
Created Tutorial-ETK_thorn-IDScalarWaveNRPy.tex, and compiled LaTeX file to
PDF file Tutorial-ETK_thorn-IDScalarWaveNRPy.pdf
###Markdown
IDScalarWaveNRPy: An Einstein Toolkit Initial Data Thorn for the Scalar Wave Equation Author: Zach Etienne Formatting improvements courtesy Brandon Clark[comment]: (Abstract: TODO)[comment]: (Notebook Status and Validation Notes: TODO) NRPy+ Source Code for this module: [ScalarWave/InitialData_PlaneWave.py](../edit/ScalarWave/InitialData_PlaneWave.py) [\[**tutorial**\]](Tutorial-ScalarWave.ipynb) Constructs the SymPy expressions for plane-wave initial data Introduction:In this part of the tutorial, we will construct an Einstein Toolkit (ETK) thorn (module) that will set up *initial data* for the scalar wave initial value problem. In a [previous tutorial notebook](Tutorial-ScalarWave.ipynb), we used NRPy+ to construct the SymPy expressions for plane-wave initial data. This thorn is largely based on and should function similarly to the $\text{IDScalarWaveC}$ thorn included in the Einstein Toolkit (ETK) $\text{CactusWave}$ arrangement.We will construct this thorn in two steps.1. Call on NRPy+ to convert the SymPy expressions for the initial data into one C-code kernel.1. Write the C code and linkages to the Einstein Toolkit infrastructure (i.e., the .ccl files) to complete this Einstein Toolkit module. Table of Contents$$\label{toc}$$ This notebook is organized as follows1. [Step 1](initializenrpy): Call on NRPy+ to convert the SymPy expression for the scalar wave initial data into a C-code kernel1. [Step 2](einstein): Interfacing with the Einstein Toolkit 1. [Step 2.a](einstein_c): Constructing the Einstein Toolkit C-code calling functions that include the C code kernels 1. [Step 2.b](einstein_ccl): CCL files - Define how this module interacts and interfaces with the larger Einstein Toolkit infrastructure 1. [Step 2.c](einstein_list): Add the C code to the Einstein Toolkit compilation list1. [Step 3](latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file Step 1: Call on NRPy+ to convert the SymPy expression for the scalar wave initial data into a C-code kernel \[Back to [top](toc)\]$$\label{initializenrpy}$$After importing the core modules, since we are writing an ETK thorn, we'll need to set `"grid::GridFuncMemAccess"` to `"ETK"`. SymPy expressions for plane wave initial data are written inside [ScalarWave/InitialData_PlaneWave.py](../edit/ScalarWave/InitialData_PlaneWave.py), and we simply import them for use here.
###Code
# Step 1: Call on NRPy+ to convert the SymPy expression
# for the scalar wave initial data into a C-code kernel
# Step 1a: Import needed NRPy+ core modules:
import NRPy_param_funcs as par   # NRPy+: Parameter interface
import indexedexp as ixp         # NRPy+: Symbolic indexed expression (e.g., tensors, vectors, etc.) support
import grid as gri               # NRPy+: Functionality for handling numerical grids
import finite_difference as fin  # NRPy+: Finite difference C code generation module
from outputC import *            # NRPy+: Core C code output module (provides lhrh, used below)
import loop                      # NRPy+: Generate C code loops
# Step 1b: This is an Einstein Toolkit (ETK) thorn. Here we
# tell NRPy+ that gridfunction memory access will
# therefore be in the "ETK" style.
par.set_parval_from_str("grid::GridFuncMemAccess","ETK")
# Step 1c: Call the InitialData_PlaneWave() function from within the
# ScalarWave/InitialData_PlaneWave.py module.
import ScalarWave.InitialData_PlaneWave as swid
# Step 1d: Within the ETK, the 3D gridfunctions x, y, and z store the
# Cartesian grid coordinates. Setting the gri.xx[] arrays
# to point to these gridfunctions forces NRPy+ to treat
# the Cartesian coordinate gridfunctions properly --
# reading them from memory as needed.
x,y,z = gri.register_gridfunctions("AUX",["x","y","z"])
gri.xx[0] = x
gri.xx[1] = y
gri.xx[2] = z
# Step 1e: Set up the plane wave initial data. This sets uu_ID and vv_ID.
swid.InitialData_PlaneWave()
# Step 1f: Register uu and vv gridfunctions so they can be written to by NRPy.
uu,vv = gri.register_gridfunctions("EVOL",["uu","vv"])
# Step 1g: Set the uu and vv gridfunctions to the uu_ID & vv_ID variables
# defined by InitialData_PlaneWave().
uu = swid.uu_ID
vv = swid.vv_ID
# Step 1h: Create the C code output kernel.
scalar_PWID_to_print = [\
lhrh(lhs=gri.gfaccess("out_gfs","uu"),rhs=uu),\
lhrh(lhs=gri.gfaccess("out_gfs","vv"),rhs=vv),]
scalar_PWID_CcodeKernel = fin.FD_outputC("returnstring",scalar_PWID_to_print)
scalar_PWID_looped = loop.loop(["i2","i1","i0"],["1","1","1"],["cctk_lsh[2]-1","cctk_lsh[1]-1","cctk_lsh[0]-1"],\
["1","1","1"],["#pragma omp parallel for","",""],"",\
scalar_PWID_CcodeKernel.replace("time","cctk_time"))
# Step 1i: Create directories for the thorn if they don't exist.
!mkdir IDScalarWaveNRPy 2>/dev/null # 2>/dev/null: Don't throw an error if the directory already exists.
!mkdir IDScalarWaveNRPy/src 2>/dev/null # 2>/dev/null: Don't throw an error if the directory already exists.
# Step 1j: Write the C code kernel to file.
with open("IDScalarWaveNRPy/src/ScalarWave_PWID.h", "w") as file:
file.write(str(scalar_PWID_looped))
###Output
_____no_output_____
###Markdown
Step 2: Interfacing with the Einstein Toolkit \[Back to [top](toc)\]$$\label{einstein}$$ Step 2.a: Constructing the Einstein Toolkit C-code calling functions that include the C code kernels \[Back to [top](toc)\]$$\label{einstein_c}$$We will write another C file with the functions we need here.
###Code
%%writefile IDScalarWaveNRPy/src/InitialData.c
#include <math.h>
#include <stdio.h>
#include "cctk.h"
#include "cctk_Parameters.h"
#include "cctk_Arguments.h"
void IDScalarWaveNRPy_param_check(CCTK_ARGUMENTS) {
DECLARE_CCTK_ARGUMENTS;
DECLARE_CCTK_PARAMETERS;
if (kk0 == 0 && kk1 == 0 && kk2 == 0) {
CCTK_WARN(0,"kk0==kk1==kk2==0: Zero wave vector cannot be normalized. Set one of the kk's to be != 0.");
}
}
void IDScalarWaveNRPy_InitialData(CCTK_ARGUMENTS)
{
DECLARE_CCTK_ARGUMENTS
DECLARE_CCTK_PARAMETERS
const CCTK_REAL *xGF = x;
const CCTK_REAL *yGF = y;
const CCTK_REAL *zGF = z;
#include "ScalarWave_PWID.h"
}
###Output
Writing IDScalarWaveNRPy/src/InitialData.c
###Markdown
Step 2. b: CCL files - Define how this module interacts and interfaces with the larger Einstein Toolkit infrastructure \[Back to [top](toc)\]$$\label{einstein_ccl}$$Writing a module ("thorn") within the Einstein Toolkit requires that three "ccl" files be constructed, all in the root directory of the thorn:1. `interface.ccl`: defines the gridfunction groups needed, and provides keywords denoting what this thorn provides and what it should inherit from other thorns. Specifically, this file governs the interaction between this thorn and others; more information can be found in the [official Einstein Toolkit documentation](http://cactuscode.org/documentation/referencemanual/ReferenceManualch8.htmlx12-260000C2.2). With "implements", we give our thorn its unique name. By "inheriting" other thorns, we tell the Toolkit that we will rely on variables that exist and are declared "public" within those functions.
###Code
%%writefile IDScalarWaveNRPy/interface.ccl
implements: IDScalarWaveNRPy
inherits: WaveToyNRPy grid
###Output
Writing IDScalarWaveNRPy/interface.ccl
###Markdown
2. `param.ccl`: specifies free parameters within the thorn, enabling them to be set at runtime. It is required to provide allowed ranges and default values for each parameter. More information on this file's syntax can be found in the [official Einstein Toolkit documentation](http://cactuscode.org/documentation/referencemanual/ReferenceManualch8.htmlx12-265000C2.3).
###Code
%%writefile IDScalarWaveNRPy/param.ccl
shares: grid
USES KEYWORD type
shares: WaveToyNRPy
USES REAL wavespeed
restricted:
CCTK_KEYWORD initial_data "Type of initial data"
{
"plane" :: "Plane wave"
} "plane"
restricted:
CCTK_REAL kk0 "The wave number in the x-direction"
{
*:* :: "No restriction"
} 4.0
restricted:
CCTK_REAL kk1 "The wave number in the y-direction"
{
*:* :: "No restriction"
} 0.0
restricted:
CCTK_REAL kk2 "The wave number in the z-direction"
{
*:* :: "No restriction"
} 0.0
###Output
Writing IDScalarWaveNRPy/param.ccl
###Markdown
3. `schedule.ccl`: allocates storage for gridfunctions, defines how the thorn's functions should be scheduled in a broader simulation, and specifies the regions of memory written to or read from gridfunctions. `schedule.ccl`'s official documentation may be found [here](http://cactuscode.org/documentation/referencemanual/ReferenceManualch8.htmlx12-268000C2.4). We specify here the standardized ETK "scheduling bins" in which we want each of our thorn's functions to run.
###Code
%%writefile IDScalarWaveNRPy/schedule.ccl
schedule IDScalarWaveNRPy_param_check at CCTK_PARAMCHECK
{
LANG: C
OPTIONS: global
} "Check sanity of parameters"
schedule IDScalarWaveNRPy_InitialData at CCTK_INITIAL as WaveToy_InitialData
{
STORAGE: WaveToyNRPy::scalar_fields[3]
LANG: C
READS: grid::x(Everywhere)
READS: grid::y(Everywhere)
READS: grid::z(Everywhere)
WRITES: uuGF(Everywhere)
WRITES: vvGF(Everywhere)
} "Initial data for 3D wave equation"
###Output
Writing IDScalarWaveNRPy/schedule.ccl
###Markdown
Step 2.c: Add the C code to the Einstein Toolkit compilation list \[Back to [top](toc)\]$$\label{einstein_list}$$We will also need `make.code.defn`, which indicates the list of files that need to be compiled. This thorn only has the one C file to compile.
###Code
%%writefile IDScalarWaveNRPy/src/make.code.defn
SRCS = InitialData.c
###Output
Writing IDScalarWaveNRPy/src/make.code.defn
###Markdown
Step 3: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](toc)\]$$\label{latex_pdf_output}$$The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename[Tutorial-ETK_thorn-IDScalarWaveNRPy.pdf](Tutorial-ETK_thorn-IDScalarWaveNRPy.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
###Code
!jupyter nbconvert --to latex --template latex_nrpy_style.tplx --log-level='WARN' Tutorial-ETK_thorn-IDScalarWaveNRPy.ipynb
!pdflatex -interaction=batchmode Tutorial-ETK_thorn-IDScalarWaveNRPy.tex
!pdflatex -interaction=batchmode Tutorial-ETK_thorn-IDScalarWaveNRPy.tex
!pdflatex -interaction=batchmode Tutorial-ETK_thorn-IDScalarWaveNRPy.tex
!rm -f Tut*.out Tut*.aux Tut*.log
###Output
[pandoc warning] Duplicate link reference `[comment]' "source" (line 17, column 1)
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
###Markdown
IDScalarWaveNRPy: An Einstein Toolkit Initial Data Thorn for the Scalar Wave Equation Author: Zach Etienne Formatting improvements courtesy Brandon Clark[comment]: (Abstract: TODO)[comment]: (Notebook Status and Validation Notes: TODO) NRPy+ Source Code for this module: [ScalarWave/InitialData_PlaneWave.py](../edit/ScalarWave/InitialData_PlaneWave.py) [\[**tutorial**\]](Tutorial-ScalarWave.ipynb) Constructs the SymPy expressions for plane-wave initial data Introduction:In this part of the tutorial, we will construct an Einstein Toolkit (ETK) thorn (module) that will set up *initial data* for the scalar wave initial value problem. In a [previous tutorial notebook](Tutorial-ScalarWave.ipynb), we used NRPy+ to construct the SymPy expressions for plane-wave initial data. This thorn is largely based on and should function similarly to the $\text{IDScalarWaveC}$ thorn included in the Einstein Toolkit (ETK) $\text{CactusWave}$ arrangement.We will construct this thorn in two steps.1. Call on NRPy+ to convert the SymPy expressions for the initial data into one C-code kernel.1. Write the C code and linkages to the Einstein Toolkit infrastructure (i.e., the .ccl files) to complete this Einstein Toolkit module. Table of Contents$$\label{toc}$$ This notebook is organized as follows1. [Step 1](initializenrpy): Call on NRPy+ to convert the SymPy expression for the scalar wave initial data into a C-code kernel1. [Step 2](einstein): Interfacing with the Einstein Toolkit 1. [Step 2.a](einstein_c): Constructing the Einstein Toolkit C-code calling functions that include the C code kernels 1. [Step 2.b](einstein_ccl): CCL files - Define how this module interacts and interfaces with the larger Einstein Toolkit infrastructure 1. [Step 2.c](einstein_list): Add the C code to the Einstein Toolkit compilation list1. [Step 3](latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file Step 1: Call on NRPy+ to convert the SymPy expression for the scalar wave initial data into a C-code kernel \[Back to [top](toc)\]$$\label{initializenrpy}$$After importing the core modules, since we are writing an ETK thorn, we'll need to set `"grid::GridFuncMemAccess"` to `"ETK"`. SymPy expressions for plane wave initial data are written inside [ScalarWave/InitialData_PlaneWave.py](../edit/ScalarWave/InitialData_PlaneWave.py), and we simply import them for use here.
###Code
# Step 1: Call on NRPy+ to convert the SymPy expression
# for the scalar wave initial data into a C-code kernel
# Step 1a: Import needed NRPy+ core modules:
import NRPy_param_funcs as par
import indexedexp as ixp
import grid as gri
import finite_difference as fin
from outputC import *
import loop
# Step 1b: This is an Einstein Toolkit (ETK) thorn. Here we
# tell NRPy+ that gridfunction memory access will
# therefore be in the "ETK" style.
par.set_parval_from_str("grid::GridFuncMemAccess","ETK")
# Step 1c: Call the InitialData_PlaneWave() function from within the
# ScalarWave/InitialData_PlaneWave.py module.
import ScalarWave.InitialData_PlaneWave as swid
# Step 1d: Within the ETK, the 3D gridfunctions x, y, and z store the
# Cartesian grid coordinates. Setting the gri.xx[] arrays
# to point to these gridfunctions forces NRPy+ to treat
# the Cartesian coordinate gridfunctions properly --
# reading them from memory as needed.
x,y,z = gri.register_gridfunctions("AUX",["x","y","z"])
gri.xx[0] = x
gri.xx[1] = y
gri.xx[2] = z
# Step 1e: Set up the plane wave initial data. This sets uu_ID and vv_ID.
swid.InitialData_PlaneWave()
# Step 1f: Register uu and vv gridfunctions so they can be written to by NRPy.
uu,vv = gri.register_gridfunctions("EVOL",["uu","vv"])
# Step 1g: Set the uu and vv gridfunctions to the uu_ID & vv_ID variables
# defined by InitialData_PlaneWave().
uu = swid.uu_ID
vv = swid.vv_ID
# Step 1h: Create the C code output kernel.
scalar_PWID_to_print = [\
lhrh(lhs=gri.gfaccess("out_gfs","uu"),rhs=uu),\
lhrh(lhs=gri.gfaccess("out_gfs","vv"),rhs=vv),]
scalar_PWID_CcodeKernel = fin.FD_outputC("returnstring",scalar_PWID_to_print)
scalar_PWID_looped = loop.loop(["i2","i1","i0"],["0","0","0"],["cctk_lsh[2]","cctk_lsh[1]","cctk_lsh[0]"],\
["1","1","1"],["#pragma omp parallel for","",""],"",\
scalar_PWID_CcodeKernel.replace("time","cctk_time"))
# Step 1i: Create directories for the thorn if they don't exist.
!mkdir IDScalarWaveNRPy 2>/dev/null # 2>/dev/null: Don't throw an error if the directory already exists.
!mkdir IDScalarWaveNRPy/src 2>/dev/null # 2>/dev/null: Don't throw an error if the directory already exists.
# Step 1j: Write the C code kernel to file.
with open("IDScalarWaveNRPy/src/ScalarWave_PWID.h", "w") as file:
file.write(str(scalar_PWID_looped))
###Output
_____no_output_____
###Markdown
Step 2: Interfacing with the Einstein Toolkit \[Back to [top](toc)\]$$\label{einstein}$$ Step 2.a: Constructing the Einstein Toolkit C-code calling functions that include the C code kernels \[Back to [top](toc)\]$$\label{einstein_c}$$We will write another C file with the functions we need here.
###Code
%%writefile IDScalarWaveNRPy/src/InitialData.c
#include <math.h>
#include <stdio.h>
#include "cctk.h"
#include "cctk_Parameters.h"
#include "cctk_Arguments.h"
void IDScalarWaveNRPy_param_check(CCTK_ARGUMENTS) {
DECLARE_CCTK_ARGUMENTS;
DECLARE_CCTK_PARAMETERS;
if (kk0 == 0 && kk1 == 0 && kk2 == 0) {
CCTK_WARN(0,"kk0==kk1==kk2==0: Zero wave vector cannot be normalized. Set one of the kk's to be != 0.");
}
}
void IDScalarWaveNRPy_InitialData(CCTK_ARGUMENTS)
{
DECLARE_CCTK_ARGUMENTS
DECLARE_CCTK_PARAMETERS
const CCTK_REAL *xGF = x;
const CCTK_REAL *yGF = y;
const CCTK_REAL *zGF = z;
#include "ScalarWave_PWID.h"
}
###Output
Writing IDScalarWaveNRPy/src/InitialData.c
###Markdown
Step 2. b: CCL files - Define how this module interacts and interfaces with the larger Einstein Toolkit infrastructure \[Back to [top](toc)\]$$\label{einstein_ccl}$$Writing a module ("thorn") within the Einstein Toolkit requires that three "ccl" files be constructed, all in the root directory of the thorn:1. `interface.ccl`: defines the gridfunction groups needed, and provides keywords denoting what this thorn provides and what it should inherit from other thorns. Specifically, this file governs the interaction between this thorn and others; more information can be found in the [official Einstein Toolkit documentation](http://einsteintoolkit.org/usersguide/UsersGuidech12.htmlx17-178000D2.2). With "implements", we give our thorn its unique name. By "inheriting" other thorns, we tell the Toolkit that we will rely on variables that exist and are declared "public" within those functions.
###Code
%%writefile IDScalarWaveNRPy/interface.ccl
implements: IDScalarWaveNRPy
inherits: WaveToyNRPy grid
###Output
Writing IDScalarWaveNRPy/interface.ccl
###Markdown
2. `param.ccl`: specifies free parameters within the thorn, enabling them to be set at runtime. It is required to provide allowed ranges and default values for each parameter. More information on this file's syntax can be found in the [official Einstein Toolkit documentation](http://einsteintoolkit.org/usersguide/UsersGuidech12.htmlx17-183000D2.3).
###Code
%%writefile IDScalarWaveNRPy/param.ccl
shares: grid
USES KEYWORD type
shares: WaveToyNRPy
USES REAL wavespeed
restricted:
CCTK_KEYWORD initial_data "Type of initial data"
{
"plane" :: "Plane wave"
} "plane"
restricted:
CCTK_REAL kk0 "The wave number in the x-direction"
{
*:* :: "No restriction"
} 4.0
restricted:
CCTK_REAL kk1 "The wave number in the y-direction"
{
*:* :: "No restriction"
} 0.0
restricted:
CCTK_REAL kk2 "The wave number in the z-direction"
{
*:* :: "No restriction"
} 0.0
###Output
Writing IDScalarWaveNRPy/param.ccl
###Markdown
3. `schedule.ccl`: allocates storage for gridfunctions, defines how the thorn's functions should be scheduled in a broader simulation, and specifies the regions of memory written to or read from gridfunctions. `schedule.ccl`'s official documentation may be found [here](http://einsteintoolkit.org/usersguide/UsersGuidech12.htmlx17-186000D2.4). We specify here the standardized ETK "scheduling bins" in which we want each of our thorn's functions to run.
###Code
%%writefile IDScalarWaveNRPy/schedule.ccl
schedule IDScalarWaveNRPy_param_check at CCTK_PARAMCHECK
{
LANG: C
OPTIONS: global
} "Check sanity of parameters"
schedule IDScalarWaveNRPy_InitialData at CCTK_INITIAL as WaveToy_InitialData
{
STORAGE: WaveToyNRPy::scalar_fields[3]
LANG: C
READS: grid::x(Everywhere)
READS: grid::y(Everywhere)
READS: grid::z(Everywhere)
WRITES: uuGF(Everywhere)
WRITES: vvGF(Everywhere)
} "Initial data for 3D wave equation"
###Output
Writing IDScalarWaveNRPy/schedule.ccl
###Markdown
Step 2.c: Add the C code to the Einstein Toolkit compilation list \[Back to [top](toc)\]$$\label{einstein_list}$$We will also need `make.code.defn`, which indicates the list of files that need to be compiled. This thorn only has the one C file to compile.
###Code
%%writefile IDScalarWaveNRPy/src/make.code.defn
SRCS = InitialData.c
###Output
Writing IDScalarWaveNRPy/src/make.code.defn
###Markdown
Step 3: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](toc)\]$$\label{latex_pdf_output}$$The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename[Tutorial-ETK_thorn-IDScalarWaveNRPy.pdf](Tutorial-ETK_thorn-IDScalarWaveNRPy.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
###Code
!jupyter nbconvert --to latex --template latex_nrpy_style.tplx --log-level='WARN' Tutorial-ETK_thorn-IDScalarWaveNRPy.ipynb
!pdflatex -interaction=batchmode Tutorial-ETK_thorn-IDScalarWaveNRPy.tex
!pdflatex -interaction=batchmode Tutorial-ETK_thorn-IDScalarWaveNRPy.tex
!pdflatex -interaction=batchmode Tutorial-ETK_thorn-IDScalarWaveNRPy.tex
!rm -f Tut*.out Tut*.aux Tut*.log
###Output
[pandoc warning] Duplicate link reference `[comment]' "source" (line 17, column 1)
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
###Markdown
IDScalarWaveNRPy: An Einstein Toolkit Initial Data Thorn for the Scalar Wave Equation Author: Zach Etienne Formatting improvements courtesy Brandon Clark[comment]: (Abstract: TODO)[comment]: (Module Status and Validation Notes: TODO) NRPy+ Source Code for this module: [ScalarWave/InitialData_PlaneWave.py](../edit/ScalarWave/InitialData_PlaneWave.py) [\[**tutorial**\]](Tutorial-ScalarWave.ipynb) Constructs the SymPy expressions for plane-wave initial data Introduction:In this part of the tutorial, we will construct an Einstein Toolkit (ETK) thorn (module) that will set up *initial data* for the scalar wave initial value problem. In a [previous tutorial module](Tutorial-ScalarWave.ipynb), we used NRPy+ to construct the SymPy expressions for plane-wave initial data. This thorn is largely based on and should function similarly to the $\text{IDScalarWaveC}$ thorn included in the Einstein Toolkit (ETK) $\text{CactusWave}$ arrangement.We will construct this thorn in two steps.1. Call on NRPy+ to convert the SymPy expressions for the initial data into one C-code kernel.1. Write the C code and linkages to the Einstein Toolkit infrastructure (i.e., the .ccl files) to complete this Einstein Toolkit module. Table of Contents$$\label{toc}$$ This module is organized as follows1. [Step 1](initializenrpy): Call on NRPy+ to convert the SymPy expression for the scalar wave initial data into a C-code kernel1. [Step 2](einstein): Interfacing with the Einstein Toolkit 1. [Step 2.a](einstein_c): Constructing the Einstein Toolkit C-code calling functions that include the C code kernels 1. [Step 2.b](einstein_ccl): CCL files - Define how this module interacts and interfaces with the larger Einstein Toolkit infrastructure 1. [Step 2.c](einstein_list): Add the C code to the Einstein Toolkit compilation list1. [Step 3](latex_pdf_output): Output this module to $\LaTeX$-formatted PDF Step 1: Call on NRPy+ to convert the SymPy expression for the scalar wave initial data into a C-code kernel \[Back to [top](toc)\]$$\label{initializenrpy}$$After importing the core modules, since we are writing an ETK thorn, we'll need to set `"grid::GridFuncMemAccess"` to `"ETK"`. SymPy expressions for plane wave initial data are written inside [ScalarWave/InitialData_PlaneWave.py](../edit/ScalarWave/InitialData_PlaneWave.py), and we simply import them for use here.
###Code
# Step 1: Call on NRPy+ to convert the SymPy expression
# for the scalar wave initial data into a C-code kernel
# Step 1a: Import needed NRPy+ core modules:
import NRPy_param_funcs as par
import indexedexp as ixp
import grid as gri
import finite_difference as fin
from outputC import *
import loop
# Step 1b: This is an Einstein Toolkit (ETK) thorn. Here we
# tell NRPy+ that gridfunction memory access will
# therefore be in the "ETK" style.
par.set_parval_from_str("grid::GridFuncMemAccess","ETK")
# Step 1c: Call the InitialData_PlaneWave() function from within the
# ScalarWave/InitialData_PlaneWave.py module.
import ScalarWave.InitialData_PlaneWave as swid
# Step 1d: Within the ETK, the 3D gridfunctions x, y, and z store the
# Cartesian grid coordinates. Setting the gri.xx[] arrays
# to point to these gridfunctions forces NRPy+ to treat
# the Cartesian coordinate gridfunctions properly --
# reading them from memory as needed.
x,y,z = gri.register_gridfunctions("AUX",["x","y","z"])
gri.xx[0] = x
gri.xx[1] = y
gri.xx[2] = z
# Step 1e: Set up the plane wave initial data. This sets uu_ID and vv_ID.
swid.InitialData_PlaneWave()
# Step 1f: Register uu and vv gridfunctions so they can be written to by NRPy.
uu,vv = gri.register_gridfunctions("EVOL",["uu","vv"])
# Step 1g: Set the uu and vv gridfunctions to the uu_ID & vv_ID variables
# defined by InitialData_PlaneWave().
uu = swid.uu_ID
vv = swid.vv_ID
# Step 1h: Create the C code output kernel.
scalar_PWID_to_print = [\
lhrh(lhs=gri.gfaccess("out_gfs","uu"),rhs=uu),\
lhrh(lhs=gri.gfaccess("out_gfs","vv"),rhs=vv),]
scalar_PWID_CcodeKernel = fin.FD_outputC("returnstring",scalar_PWID_to_print)
scalar_PWID_looped = loop.loop(["i2","i1","i0"],["1","1","1"],["cctk_lsh[2]-1","cctk_lsh[1]-1","cctk_lsh[0]-1"],\
["1","1","1"],["#pragma omp parallel for","",""],"",\
scalar_PWID_CcodeKernel.replace("time","cctk_time"))
# Step 1i: Create directories for the thorn if they don't exist.
!mkdir IDScalarWaveNRPy 2>/dev/null # 2>/dev/null: Don't throw an error if the directory already exists.
!mkdir IDScalarWaveNRPy/src 2>/dev/null # 2>/dev/null: Don't throw an error if the directory already exists.
# Step 1j: Write the C code kernel to file.
with open("IDScalarWaveNRPy/src/ScalarWave_PWID.h", "w") as file:
file.write(str(scalar_PWID_looped))
###Output
_____no_output_____
###Markdown
Step 2: Interfacing with the Einstein Toolkit \[Back to [top](toc)\]$$\label{einstein}$$ Step 2.a: Constructing the Einstein Toolkit C-code calling functions that include the C code kernels \[Back to [top](toc)\]$$\label{einstein_c}$$We will write another C file with the functions we need here.
###Code
%%writefile IDScalarWaveNRPy/src/InitialData.c
#include <math.h>
#include <stdio.h>
#include "cctk.h"
#include "cctk_Parameters.h"
#include "cctk_Arguments.h"
void IDScalarWaveNRPy_param_check(CCTK_ARGUMENTS) {
DECLARE_CCTK_ARGUMENTS;
DECLARE_CCTK_PARAMETERS;
if (kk0 == 0 && kk1 == 0 && kk2 == 0) {
CCTK_WARN(0,"kk0==kk1==kk2==0: Zero wave vector cannot be normalized. Set one of the kk's to be != 0.");
}
}
void IDScalarWaveNRPy_InitialData(CCTK_ARGUMENTS)
{
DECLARE_CCTK_ARGUMENTS
DECLARE_CCTK_PARAMETERS
const CCTK_REAL *xGF = x;
const CCTK_REAL *yGF = y;
const CCTK_REAL *zGF = z;
#include "ScalarWave_PWID.h"
}
###Output
Overwriting IDScalarWaveNRPy/src/InitialData.c
###Markdown
Step 2. b: CCL files - Define how this module interacts and interfaces with the larger Einstein Toolkit infrastructure \[Back to [top](toc)\]$$\label{einstein_ccl}$$Writing a module ("thorn") within the Einstein Toolkit requires that three "ccl" files be constructed, all in the root directory of the thorn:1. `interface.ccl`: defines the gridfunction groups needed, and provides keywords denoting what this thorn provides and what it should inherit from other thorns. Specifically, this file governs the interaction between this thorn and others; more information can be found in the [official Einstein Toolkit documentation](http://cactuscode.org/documentation/referencemanual/ReferenceManualch8.htmlx12-260000C2.2). With "implements", we give our thorn its unique name. By "inheriting" other thorns, we tell the Toolkit that we will rely on variables that exist and are declared "public" within those functions.
###Code
%%writefile IDScalarWaveNRPy/interface.ccl
implements: IDScalarWaveNRPy
inherits: WaveToyNRPy grid
###Output
Overwriting IDScalarWaveNRPy/interface.ccl
###Markdown
2. `param.ccl`: specifies free parameters within the thorn, enabling them to be set at runtime. It is required to provide allowed ranges and default values for each parameter. More information on this file's syntax can be found in the [official Einstein Toolkit documentation](http://cactuscode.org/documentation/referencemanual/ReferenceManualch8.htmlx12-265000C2.3).
###Code
%%writefile IDScalarWaveNRPy/param.ccl
shares: grid
USES KEYWORD type
shares: WaveToyNRPy
USES REAL wavespeed
restricted:
CCTK_KEYWORD initial_data "Type of initial data"
{
"plane" :: "Plane wave"
} "plane"
restricted:
CCTK_REAL kk0 "The wave number in the x-direction"
{
*:* :: "No restriction"
} 4.0
restricted:
CCTK_REAL kk1 "The wave number in the y-direction"
{
*:* :: "No restriction"
} 0.0
restricted:
CCTK_REAL kk2 "The wave number in the z-direction"
{
*:* :: "No restriction"
} 0.0
###Output
Overwriting IDScalarWaveNRPy/param.ccl
###Markdown
3. `schedule.ccl`: allocates storage for gridfunctions, defines how the thorn's functions should be scheduled in a broader simulation, and specifies the regions of memory written to or read from gridfunctions. `schedule.ccl`'s official documentation may be found [here](http://cactuscode.org/documentation/referencemanual/ReferenceManualch8.htmlx12-268000C2.4). We specify here the standardized ETK "scheduling bins" in which we want each of our thorn's functions to run.
###Code
%%writefile IDScalarWaveNRPy/schedule.ccl
schedule IDScalarWaveNRPy_param_check at CCTK_PARAMCHECK
{
LANG: C
OPTIONS: global
} "Check sanity of parameters"
schedule IDScalarWaveNRPy_InitialData at CCTK_INITIAL as WaveToy_InitialData
{
STORAGE: WaveToyNRPy::scalar_fields[3]
LANG: C
READS: grid::x(Everywhere)
READS: grid::y(Everywhere)
READS: grid::z(Everywhere)
WRITES: uuGF(Everywhere)
WRITES: vvGF(Everywhere)
} "Initial data for 3D wave equation"
###Output
Overwriting IDScalarWaveNRPy/schedule.ccl
###Markdown
Step 2.c: Add the C code to the Einstein Toolkit compilation list \[Back to [top](toc)\]$$\label{einstein_list}$$We will also need `make.code.defn`, which indicates the list of files that need to be compiled. This thorn only has the one C file to compile.
###Code
%%writefile IDScalarWaveNRPy/src/make.code.defn
SRCS = InitialData.c
###Output
Overwriting IDScalarWaveNRPy/src/make.code.defn
###Markdown
Step 3: Output this module to $\LaTeX$-formatted PDF file \[Back to [top](toc)\]$$\label{latex_pdf_output}$$The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename[Tutorial-ETK_thorn-IDScalarWaveNRPy.pdf](Tutorial-ETK_thorn-IDScalarWaveNRPy.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
###Code
!jupyter nbconvert --to latex --template latex_nrpy_style.tplx Tutorial-ETK_thorn-IDScalarWaveNRPy.ipynb
!pdflatex -interaction=batchmode Tutorial-ETK_thorn-IDScalarWaveNRPy.tex
!pdflatex -interaction=batchmode Tutorial-ETK_thorn-IDScalarWaveNRPy.tex
!pdflatex -interaction=batchmode Tutorial-ETK_thorn-IDScalarWaveNRPy.tex
!rm -f Tut*.out Tut*.aux Tut*.log
###Output
[NbConvertApp] Converting notebook Tutorial-ETK_thorn-IDScalarWaveNRPy.ipynb to latex
[NbConvertApp] Writing 43836 bytes to Tutorial-ETK_thorn-IDScalarWaveNRPy.tex
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
###Markdown
IDScalarWaveNRPy: An Einstein Toolkit Initial Data Thorn for the Scalar Wave Equation Author: Zach Etienne Formatting improvements courtesy Brandon Clark[comment]: (Abstract: TODO)[comment]: (Notebook Status and Validation Notes: TODO) NRPy+ Source Code for this module: [ScalarWave/InitialData_PlaneWave.py](../edit/ScalarWave/InitialData_PlaneWave.py) [\[**tutorial**\]](Tutorial-ScalarWave.ipynb) Constructs the SymPy expressions for plane-wave initial data Introduction:In this part of the tutorial, we will construct an Einstein Toolkit (ETK) thorn (module) that will set up *initial data* for the scalar wave initial value problem. In a [previous tutorial notebook](Tutorial-ScalarWave.ipynb), we used NRPy+ to construct the SymPy expressions for plane-wave initial data. This thorn is largely based on and should function similarly to the $\text{IDScalarWaveC}$ thorn included in the Einstein Toolkit (ETK) $\text{CactusWave}$ arrangement.We will construct this thorn in two steps.1. Call on NRPy+ to convert the SymPy expressions for the initial data into one C-code kernel.1. Write the C code and linkages to the Einstein Toolkit infrastructure (i.e., the .ccl files) to complete this Einstein Toolkit module. Table of Contents$$\label{toc}$$ This notebook is organized as follows1. [Step 1](initializenrpy): Call on NRPy+ to convert the SymPy expression for the scalar wave initial data into a C-code kernel1. [Step 2](einstein): Interfacing with the Einstein Toolkit 1. [Step 2.a](einstein_c): Constructing the Einstein Toolkit C-code calling functions that include the C code kernels 1. [Step 2.b](einstein_ccl): CCL files - Define how this module interacts and interfaces with the larger Einstein Toolkit infrastructure 1. [Step 2.c](einstein_list): Add the C code to the Einstein Toolkit compilation list1. [Step 3](latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file Step 1: Call on NRPy+ to convert the SymPy expression for the scalar wave initial data into a C-code kernel \[Back to [top](toc)\]$$\label{initializenrpy}$$After importing the core modules, since we are writing an ETK thorn, we'll need to set `"grid::GridFuncMemAccess"` to `"ETK"`. SymPy expressions for plane wave initial data are written inside [ScalarWave/InitialData_PlaneWave.py](../edit/ScalarWave/InitialData_PlaneWave.py), and we simply import them for use here.
###Code
# Step 1: Call on NRPy+ to convert the SymPy expression
# for the scalar wave initial data into a C-code kernel
# Step 1a: Import needed NRPy+ core modules:
import NRPy_param_funcs as par
import indexedexp as ixp
import grid as gri
import finite_difference as fin
from outputC import *
import loop
# Step 1b: This is an Einstein Toolkit (ETK) thorn. Here we
# tell NRPy+ that gridfunction memory access will
# therefore be in the "ETK" style.
par.set_parval_from_str("grid::GridFuncMemAccess","ETK")
# Step 1c: Call the InitialData_PlaneWave() function from within the
# ScalarWave/InitialData_PlaneWave.py module.
import ScalarWave.InitialData_PlaneWave as swid
# Step 1d: Within the ETK, the 3D gridfunctions x, y, and z store the
# Cartesian grid coordinates. Setting the gri.xx[] arrays
# to point to these gridfunctions forces NRPy+ to treat
# the Cartesian coordinate gridfunctions properly --
# reading them from memory as needed.
x,y,z = gri.register_gridfunctions("AUX",["x","y","z"])
gri.xx[0] = x
gri.xx[1] = y
gri.xx[2] = z
# Step 1e: Set up the plane wave initial data. This sets uu_ID and vv_ID.
swid.InitialData_PlaneWave()
# Step 1f: Register uu and vv gridfunctions so they can be written to by NRPy.
uu,vv = gri.register_gridfunctions("EVOL",["uu","vv"])
# Step 1g: Set the uu and vv gridfunctions to the uu_ID & vv_ID variables
# defined by InitialData_PlaneWave().
uu = swid.uu_ID
vv = swid.vv_ID
# Step 1h: Create the C code output kernel.
scalar_PWID_to_print = [\
lhrh(lhs=gri.gfaccess("out_gfs","uu"),rhs=uu),\
lhrh(lhs=gri.gfaccess("out_gfs","vv"),rhs=vv),]
scalar_PWID_CcodeKernel = fin.FD_outputC("returnstring",scalar_PWID_to_print)
scalar_PWID_looped = loop.loop(["i2","i1","i0"],["1","1","1"],["cctk_lsh[2]-1","cctk_lsh[1]-1","cctk_lsh[0]-1"],\
["1","1","1"],["#pragma omp parallel for","",""],"",\
scalar_PWID_CcodeKernel.replace("time","cctk_time"))
# Step 1i: Create directories for the thorn if they don't exist.
!mkdir IDScalarWaveNRPy 2>/dev/null # 2>/dev/null: Don't throw an error if the directory already exists.
!mkdir IDScalarWaveNRPy/src 2>/dev/null # 2>/dev/null: Don't throw an error if the directory already exists.
# Step 1j: Write the C code kernel to file.
with open("IDScalarWaveNRPy/src/ScalarWave_PWID.h", "w") as file:
file.write(str(scalar_PWID_looped))
###Output
_____no_output_____
###Markdown
Step 2: Interfacing with the Einstein Toolkit \[Back to [top](toc)\]$$\label{einstein}$$ Step 2.a: Constructing the Einstein Toolkit C-code calling functions that include the C code kernels \[Back to [top](toc)\]$$\label{einstein_c}$$We will write another C file with the functions we need here.
###Code
%%writefile IDScalarWaveNRPy/src/InitialData.c
#include <math.h>
#include <stdio.h>
#include "cctk.h"
#include "cctk_Parameters.h"
#include "cctk_Arguments.h"
void IDScalarWaveNRPy_param_check(CCTK_ARGUMENTS) {
DECLARE_CCTK_ARGUMENTS;
DECLARE_CCTK_PARAMETERS;
if (kk0 == 0 && kk1 == 0 && kk2 == 0) {
CCTK_WARN(0,"kk0==kk1==kk2==0: Zero wave vector cannot be normalized. Set one of the kk's to be != 0.");
}
}
void IDScalarWaveNRPy_InitialData(CCTK_ARGUMENTS)
{
DECLARE_CCTK_ARGUMENTS
DECLARE_CCTK_PARAMETERS
const CCTK_REAL *xGF = x;
const CCTK_REAL *yGF = y;
const CCTK_REAL *zGF = z;
#include "ScalarWave_PWID.h"
}
###Output
Writing IDScalarWaveNRPy/src/InitialData.c
###Markdown
Step 2. b: CCL files - Define how this module interacts and interfaces with the larger Einstein Toolkit infrastructure \[Back to [top](toc)\]$$\label{einstein_ccl}$$Writing a module ("thorn") within the Einstein Toolkit requires that three "ccl" files be constructed, all in the root directory of the thorn:1. `interface.ccl`: defines the gridfunction groups needed, and provides keywords denoting what this thorn provides and what it should inherit from other thorns. Specifically, this file governs the interaction between this thorn and others; more information can be found in the [official Einstein Toolkit documentation](http://cactuscode.org/documentation/referencemanual/ReferenceManualch8.htmlx12-260000C2.2). With "implements", we give our thorn its unique name. By "inheriting" other thorns, we tell the Toolkit that we will rely on variables that exist and are declared "public" within those functions.
###Code
%%writefile IDScalarWaveNRPy/interface.ccl
implements: IDScalarWaveNRPy
inherits: WaveToyNRPy grid
###Output
Writing IDScalarWaveNRPy/interface.ccl
###Markdown
2. `param.ccl`: specifies free parameters within the thorn, enabling them to be set at runtime. It is required to provide allowed ranges and default values for each parameter. More information on this file's syntax can be found in the [official Einstein Toolkit documentation](http://cactuscode.org/documentation/referencemanual/ReferenceManualch8.htmlx12-265000C2.3).
###Code
%%writefile IDScalarWaveNRPy/param.ccl
shares: grid
USES KEYWORD type
shares: WaveToyNRPy
USES REAL wavespeed
restricted:
CCTK_KEYWORD initial_data "Type of initial data"
{
"plane" :: "Plane wave"
} "plane"
restricted:
CCTK_REAL kk0 "The wave number in the x-direction"
{
*:* :: "No restriction"
} 4.0
restricted:
CCTK_REAL kk1 "The wave number in the y-direction"
{
*:* :: "No restriction"
} 0.0
restricted:
CCTK_REAL kk2 "The wave number in the z-direction"
{
*:* :: "No restriction"
} 0.0
###Output
Writing IDScalarWaveNRPy/param.ccl
###Markdown
3. `schedule.ccl`: allocates storage for gridfunctions, defines how the thorn's functions should be scheduled in a broader simulation, and specifies the regions of memory written to or read from gridfunctions. `schedule.ccl`'s official documentation may be found [here](http://cactuscode.org/documentation/referencemanual/ReferenceManualch8.htmlx12-268000C2.4). We specify here the standardized ETK "scheduling bins" in which we want each of our thorn's functions to run.
###Code
%%writefile IDScalarWaveNRPy/schedule.ccl
schedule IDScalarWaveNRPy_param_check at CCTK_PARAMCHECK
{
LANG: C
OPTIONS: global
} "Check sanity of parameters"
schedule IDScalarWaveNRPy_InitialData at CCTK_INITIAL as WaveToy_InitialData
{
STORAGE: WaveToyNRPy::scalar_fields[3]
LANG: C
READS: grid::x(Everywhere)
READS: grid::y(Everywhere)
READS: grid::z(Everywhere)
WRITES: uuGF(Everywhere)
WRITES: vvGF(Everywhere)
} "Initial data for 3D wave equation"
###Output
Writing IDScalarWaveNRPy/schedule.ccl
###Markdown
Step 2.c: Add the C code to the Einstein Toolkit compilation list \[Back to [top](toc)\]$$\label{einstein_list}$$We will also need `make.code.defn`, which indicates the list of files that need to be compiled. This thorn only has the one C file to compile.
###Code
%%writefile IDScalarWaveNRPy/src/make.code.defn
SRCS = InitialData.c
###Output
Writing IDScalarWaveNRPy/src/make.code.defn
###Markdown
Step 3: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](toc)\]$$\label{latex_pdf_output}$$The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename[Tutorial-ETK_thorn-IDScalarWaveNRPy.pdf](Tutorial-ETK_thorn-IDScalarWaveNRPy.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
###Code
!jupyter nbconvert --to latex --template latex_nrpy_style.tplx --log-level='WARN' Tutorial-ETK_thorn-IDScalarWaveNRPy.ipynb
!pdflatex -interaction=batchmode Tutorial-ETK_thorn-IDScalarWaveNRPy.tex
!pdflatex -interaction=batchmode Tutorial-ETK_thorn-IDScalarWaveNRPy.tex
!pdflatex -interaction=batchmode Tutorial-ETK_thorn-IDScalarWaveNRPy.tex
!rm -f Tut*.out Tut*.aux Tut*.log
###Output
[NbConvertApp] Converting notebook Tutorial-ETK_thorn-IDScalarWaveNRPy.ipynb to latex
[pandoc warning] Duplicate link reference `[comment]' "source" (line 17, column 1)
[NbConvertApp] Writing 43991 bytes to Tutorial-ETK_thorn-IDScalarWaveNRPy.tex
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
###Markdown
IDScalarWaveNRPy: An Einstein Toolkit Initial Data Thorn for the Scalar Wave Equation Author: Terrence Pierre Jacques & Zach Etienne Formatting improvements courtesy Brandon Clark[comment]: (Abstract: TODO)[comment]: (Notebook Status and Validation Notes: TODO) NRPy+ Source Code for this module: [ScalarWave/InitialData.py](../edit/ScalarWave/InitialData.py) [\[**tutorial**\]](Tutorial-ScalarWave.ipynb) Constructs the SymPy expressions for spherical Gaussian and plane-wave initial data Introduction:In this part of the tutorial, we will construct an Einstein Toolkit (ETK) thorn (module) that will set up *initial data* for the scalar wave initial value problem. In a [previous tutorial notebook](Tutorial-ScalarWave.ipynb), we used NRPy+ to construct the SymPy expressions for either spherical Gaussian or plane-wave initial data. This thorn is largely based on and should function similarly to the $\text{IDScalarWaveC}$ thorn included in the Einstein Toolkit (ETK) $\text{CactusWave}$ arrangement.We will construct this thorn in two steps.1. Call on NRPy+ to convert the SymPy expressions for the initial data into one C-code kernel.1. Write the C code and linkages to the Einstein Toolkit infrastructure (i.e., the .ccl files) to complete this Einstein Toolkit module. Table of Contents$$\label{toc}$$ This notebook is organized as follows1. [Step 1](initializenrpy): Call on NRPy+ to convert the SymPy expression for the scalar wave initial data into a C-code kernel1. [Step 2](einstein): Interfacing with the Einstein Toolkit 1. [Step 2.a](einstein_c): Constructing the Einstein Toolkit C-code calling functions that include the C code kernels 1. [Step 2.b](einstein_ccl): CCL files - Define how this module interacts and interfaces with the larger Einstein Toolkit infrastructure 1. [Step 2.c](einstein_list): Add the C code to the Einstein Toolkit compilation list1. [Step 3](latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file Step 1: Initialize needed Python/NRPy+ modules \[Back to [top](toc)\]$$\label{initializenrpy}$$
###Code
# Step 1: Import needed core NRPy+ modules
from outputC import lhrh # NRPy+: Core C code output module
import finite_difference as fin # NRPy+: Finite difference C code generation module
import NRPy_param_funcs as par # NRPy+: Parameter interface
import grid as gri # NRPy+: Functions having to do with numerical grids
import loop as lp # NRPy+: Generate C code loops
import indexedexp as ixp # NRPy+: Symbolic indexed expression (e.g., tensors, vectors, etc.) support
import reference_metric as rfm # NRPy+: Reference metric support
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
import os, sys # Standard Python modules for multiplatform OS-level functions
import time # Standard Python module; useful for benchmarking
# Step 1a: Create directories for the thorn if they don't exist.
# Create directory for WaveToyNRPy thorn & subdirectories in case they don't exist.
outrootdir = "IDScalarWaveNRPy/"
cmd.mkdir(os.path.join(outrootdir))
outdir = os.path.join(outrootdir,"src") # Main C code output directory
cmd.mkdir(outdir)
# Step 1b: This is an Einstein Toolkit (ETK) thorn. Here we
# tell NRPy+ that gridfunction memory access will
# therefore be in the "ETK" style.
par.set_parval_from_str("grid::GridFuncMemAccess","ETK")
###Output
_____no_output_____
###Markdown
Step 2: Interfacing with the Einstein Toolkit \[Back to [top](toc)\]$$\label{einstein}$$ Step 2.a: Constructing the Einstein Toolkit C-code calling functions that include the C code kernels \[Back to [top](toc)\]$$\label{einstein_c}$$Using SymPy, we construct the exact expressions for all scalar wave initial data currently supported in NRPy+, documented in [Tutorial-ScalarWave.ipynb](Tutorial-ScalarWave.ipynb). We write the generated C codes into different C files, corresponding to the type of initial data the user may want to choose at run time. Note that the code below can easily be extended to include other types of initial data.
###Code
# Step 1c: Call the InitialData() function from within the
# ScalarWave/InitialData.py module.
import ScalarWave.InitialData as swid
# Step 1e: Call the InitialData() function to set up initial data.
# Options include:
# "PlaneWave": monochromatic (single frequency/wavelength) plane wave
# "SphericalGaussian": spherically symmetric Gaussian, with default stdev=3
ID_options = ["PlaneWave", "SphericalGaussian"]
for ID in ID_options:
gri.glb_gridfcs_list = []
# Within the ETK, the 3D gridfunctions x, y, and z store the
# Cartesian grid coordinates. Setting the gri.xx[] arrays
# to point to these gridfunctions forces NRPy+ to treat
# the Cartesian coordinate gridfunctions properly --
# reading them from memory as needed.
x,y,z = gri.register_gridfunctions("AUX",["x","y","z"])
rfm.xx[0] = x
rfm.xx[1] = y
rfm.xx[2] = z
swid.InitialData(Type=ID,
default_sigma=0.25,
default_k0=1.0,
default_k1=0.,
default_k2=0.)
# Step 1f: Register uu and vv gridfunctions so they can be written to by NRPy.
uu,vv = gri.register_gridfunctions("EVOL",["uu","vv"])
# Step 1g: Set the uu and vv gridfunctions to the uu_ID & vv_ID variables
# defined by InitialData_PlaneWave().
uu = swid.uu_ID
vv = swid.vv_ID
# Step 1h: Create the C code output kernel.
ScalarWave_ID_SymbExpressions = [\
lhrh(lhs=gri.gfaccess("out_gfs","uu"),rhs=uu),\
lhrh(lhs=gri.gfaccess("out_gfs","vv"),rhs=vv),]
ScalarWave_ID_CcodeKernel = fin.FD_outputC("returnstring",ScalarWave_ID_SymbExpressions)
ScalarWave_ID_looped = lp.loop(["i2","i1","i0"],["0","0","0"],["cctk_lsh[2]","cctk_lsh[1]","cctk_lsh[0]"],\
["1","1","1"],["#pragma omp parallel for","",""],"",\
ScalarWave_ID_CcodeKernel.replace("time","cctk_time"))
# Write the C code kernel to file.
with open(os.path.join(outdir,"ScalarWave_"+ID+"ID.h"), "w") as file:
file.write(str(ScalarWave_ID_looped))
###Output
_____no_output_____
###Markdown
Step 2.b: CCL files - Define how this module interacts and interfaces with the larger Einstein Toolkit infrastructure \[Back to [top](toc)\]$$\label{einstein_ccl}$$Writing a module ("thorn") within the Einstein Toolkit requires that three "ccl" files be constructed, all in the root directory of the thorn:1. `interface.ccl`: defines the gridfunction groups needed, and provides keywords denoting what this thorn provides and what it should inherit from other thorns. Specifically, this file governs the interaction between this thorn and others; more information can be found in the [official Einstein Toolkit documentation](https://einsteintoolkit.org/usersguide/UsersGuide.html#x1-179000D2.2). With "implements", we give our thorn its unique name. By "inheriting" other thorns, we tell the Toolkit that we will rely on variables that exist and are declared "public" within those thorns.
###Code
evol_gfs_list = []
for i in range(len(gri.glb_gridfcs_list)):
if gri.glb_gridfcs_list[i].gftype == "EVOL":
evol_gfs_list.append( gri.glb_gridfcs_list[i].name+"GF")
# NRPy+'s finite-difference code generator assumes gridfunctions
# are alphabetized; not sorting may result in unnecessary
# cache misses.
evol_gfs_list.sort()
with open(os.path.join(outrootdir,"interface.ccl"), "w") as file:
file.write("""
# With "implements", we give our thorn its unique name.
implements: IDScalarWaveNRPy
# By "inheriting" other thorns, we tell the Toolkit that we
# will rely on variables/function that exist within those
# functions.
inherits: WaveToyNRPy grid
""")
###Output
_____no_output_____
###Markdown
2. `param.ccl`: specifies free parameters within the thorn, enabling them to be set at runtime. It is required to provide allowed ranges and default values for each parameter. More information on this file's syntax can be found in the [official Einstein Toolkit documentation](https://einsteintoolkit.org/usersguide/UsersGuide.html#x1-184000D2.3).
###Code
def keep_param__return_type(paramtuple):
keep_param = True # We'll not set some parameters in param.ccl;
# e.g., those that should be #define'd like M_PI.
typestring = ""
# Separate thorns within the ETK take care of grid/coordinate parameters;
# thus we ignore NRPy+ grid/coordinate parameters:
if paramtuple.module == "grid" or paramtuple.module == "reference_metric" or paramtuple.parname == "wavespeed":
keep_param = False
partype = paramtuple.type
if partype == "bool":
typestring += "BOOLEAN "
elif partype == "REAL":
if paramtuple.defaultval != 1e300: # 1e300 is a magic value indicating that the C parameter should be mutable
typestring += "CCTK_REAL "
else:
keep_param = False
elif partype == "int":
typestring += "CCTK_INT "
elif partype == "#define":
keep_param = False
elif partype == "char":
# FIXME: char/string parameter types should in principle be supported
print("Error: parameter "+paramtuple.module+"::"+paramtuple.parname+
" has unsupported type: \""+ paramtuple.type + "\"")
sys.exit(1)
else:
print("Error: parameter "+paramtuple.module+"::"+paramtuple.parname+
" has unsupported type: \""+ paramtuple.type + "\"")
sys.exit(1)
return keep_param, typestring
paramccl_str="""
# This param.ccl file was automatically generated by NRPy+.
# You are advised against modifying it directly; instead
# modify the Python code that generates it.
shares: grid
USES KEYWORD type
shares: WaveToyNRPy
USES REAL wavespeed
restricted:
CCTK_KEYWORD initial_data "Type of initial data"
{"""
for ID in ID_options:
paramccl_str +='''
"'''+ID+'''" :: "'''+ID+'"'
paramccl_str +='''
} "'''+ID+'''"
'''
paramccl_str +="""
restricted:
"""
for i in range(len(par.glb_Cparams_list)):
# keep_param is a boolean indicating whether we should accept or reject
# the parameter. singleparstring will contain the string indicating
# the variable type.
keep_param, singleparstring = keep_param__return_type(par.glb_Cparams_list[i])
if keep_param:
parname = par.glb_Cparams_list[i].parname
partype = par.glb_Cparams_list[i].type
singleparstring += parname + " \""+ parname +" (see NRPy+ for parameter definition)\"\n"
singleparstring += "{\n"
if partype != "bool":
singleparstring += " *:* :: \"All values accepted. NRPy+ does not restrict the allowed ranges of parameters yet.\"\n"
singleparstring += "} "+str(par.glb_Cparams_list[i].defaultval)+"\n\n"
paramccl_str += singleparstring
with open(os.path.join(outrootdir,"param.ccl"), "w") as file:
file.write(paramccl_str)
###Output
_____no_output_____
###Markdown
3. `schedule.ccl`: allocates storage for gridfunctions, defines how the thorn's functions should be scheduled in a broader simulation, and specifies the regions of memory written to or read from gridfunctions. More information on this file's syntax can be found in the [official Einstein Toolkit documentation](https://einsteintoolkit.org/usersguide/UsersGuide.html#x1-187000D2.4). We specify here the standardized ETK "scheduling bins" in which we want each of our thorn's functions to run.
###Code
with open(os.path.join(outrootdir,"schedule.ccl"), "w") as file:
file.write("""
# This schedule.ccl file was automatically generated by NRPy+.
# You are advised against modifying it directly; instead
# modify the Python code that generates it.
if (CCTK_EQUALS (initial_data, "PlaneWave"))
{
schedule IDScalarWaveNRPy_param_check at CCTK_PARAMCHECK
{
LANG: C
OPTIONS: global
} "Check sanity of parameters"
}
schedule IDScalarWaveNRPy_InitialData at CCTK_INITIAL as WaveToy_InitialData
{
STORAGE: WaveToyNRPy::scalar_fields[3]
LANG: C
} "Initial data for 3D wave equation"
""")
###Output
_____no_output_____
###Markdown
Step 2.c: Add the C code to the Einstein Toolkit compilation list \[Back to [top](toc)\]$$\label{einstein_list}$$We will also need `make.code.defn`, which indicates the list of files that need to be compiled. This thorn only has the one C file to compile.
###Code
make_code_defn_list = []
def append_to_make_code_defn_list(filename):
if filename not in make_code_defn_list:
make_code_defn_list.append(filename)
return os.path.join(outdir,filename)
with open(append_to_make_code_defn_list("InitialData.c"),"w") as file:
file.write("""
#include <math.h>
#include <stdio.h>
#include <string.h>
#include "cctk.h"
#include "cctk_Parameters.h"
#include "cctk_Arguments.h"
void IDScalarWaveNRPy_param_check(CCTK_ARGUMENTS) {
DECLARE_CCTK_ARGUMENTS;
DECLARE_CCTK_PARAMETERS;
if (kk0 == 0 && kk1 == 0 && kk2 == 0) {
CCTK_WARN(0,"kk0==kk1==kk2==0: Zero wave vector cannot be normalized. Set one of the kk's to be != 0.");
}
}
void IDScalarWaveNRPy_InitialData(CCTK_ARGUMENTS)
{
DECLARE_CCTK_ARGUMENTS
DECLARE_CCTK_PARAMETERS
const CCTK_REAL *xGF = x;
const CCTK_REAL *yGF = y;
const CCTK_REAL *zGF = z;
if (CCTK_EQUALS (initial_data, "PlaneWave")) {
#include "ScalarWave_PlaneWaveID.h"
} else if (CCTK_EQUALS (initial_data, "SphericalGaussian")) {
#include "ScalarWave_SphericalGaussianID.h"
}
}
""")
with open(os.path.join(outdir,"make.code.defn"), "w") as file:
file.write("""
# Main make.code.defn file for thorn IDScalarWaveNRPy
# Source files in this directory
SRCS =""")
filestring = ""
for i in range(len(make_code_defn_list)):
filestring += " "+make_code_defn_list[i]
if i != len(make_code_defn_list)-1:
filestring += " \\\n"
else:
filestring += "\n"
file.write(filestring)
###Output
_____no_output_____
###Markdown
Step 3: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](toc)\]$$\label{latex_pdf_output}$$The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename[Tutorial-ETK_thorn-IDScalarWaveNRPy.pdf](Tutorial-ETK_thorn-IDScalarWaveNRPy.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
###Code
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
cmd.output_Jupyter_notebook_to_LaTeXed_PDF("Tutorial-ETK_thorn-IDScalarWaveNRPy.ipynb")
###Output
[NbConvertApp] WARNING | pattern 'Tutorial-ETK_thorn-IDScalarWaveNRPy.ipynb.ipynb' matched no files
Created Tutorial-ETK_thorn-IDScalarWaveNRPy.ipynb.tex, and compiled LaTeX
file to PDF file Tutorial-ETK_thorn-IDScalarWaveNRPy.ipynb.pdf
|
Cap_01_Error.ipynb | ###Markdown
School of Basic Sciences, Technology and Engineering (ECBTI)
Course: Numerical Methods
Unit 1: Error
February 28, 2020
***
> **Tutor:** Carlos Alberto Álvarez Henao, I.C. D.Sc.
> **skype:** carlos.alberto.alvarez.henao
> **Tool:** [Jupyter](http://jupyter.org/)
> **Kernel:** Python 3.7
***
***Comment:*** these notes are based on the course by professor [Kyle T. Mandli](https://github.com/mandli/intro-numerical-methods) (in English)

Sources of error
Numerical computations, which involve the use of machines (analog or digital), exhibit a series of errors that come from different sources:
- the model
- the data
- truncation
- the representation of numbers (floating point)
- $\ldots$

***Goal:*** categorize and understand each type of error and explore some simple approaches for analyzing them.

Error in the model and the data
Errors in the fundamental formulation
- Error in the data: imprecision in the measurements or uncertainty in the parameters
Unfortunately we have no direct control over the errors in the data and in the model, but we can use methods that are more robust in the presence of these kinds of errors.

Truncation error
These errors arise from approximating a function by a simpler one, for example, $sin(x) \approx x$ for $|x|\approx0$.

Floating-point representation error
These errors arise from approximating real numbers with the finite-precision representation of numbers used by the computer.

Basic definitions
Given a true value of a function $f$ and an approximate solution $F$, we define:
- Absolute error
$$e_a=|f-F|$$
- Relative error
$$e_r = \frac{e_a}{|f|}=\frac{|f-F|}{|f|}$$

$\text{Big}-\mathcal{O}$ notation
Let
$$f(x)= \mathcal{O}(g(x)) \text{ as } x \rightarrow a$$
if and only if
$$|f(x)|\leq M|g(x)| \text{ when } |x-a| < \delta$$
for some $M$ and $\delta > 0$.
In practice, we use $\text{Big}-\mathcal{O}$ notation to say something about how the terms we may have left out of a series can behave.

Let us look at the following example of the Taylor series approximation:

***Example:*** let $f(x) = \sin x$ with $x_0 = 0$, then
$$T_N(x) = \sum^N_{n=0} (-1)^{n} \frac{x^{2n+1}}{(2n+1)!}$$
We can write $f(x)$ as
$$f(x) = x - \frac{x^3}{6} + \frac{x^5}{120} + \mathcal{O}(x^7)$$
This becomes more useful when we view it as we did before in terms of $\Delta x$:
$$f(x) = \Delta x - \frac{\Delta x^3}{6} + \frac{\Delta x^5}{120} + \mathcal{O}(\Delta x^7)$$

Rules for error propagation based on $\text{Big}-\mathcal{O}$ notation
In general, there are two theorems that need no proof and hold when the value of $x$ is large:
Let
$$\begin{aligned} f(x) &= p(x) + \mathcal{O}(x^n) \\ g(x) &= q(x) + \mathcal{O}(x^m) \\ k &= \max(n, m)\end{aligned}$$
Then
$$ f+g = p + q + \mathcal{O}(x^k)$$
and
\begin{align} f \cdot g &= p \cdot q + p \mathcal{O}(x^m) + q \mathcal{O}(x^n) + O(x^{n + m}) \\ &= p \cdot q + \mathcal{O}(x^{n+m})\end{align}

On the other hand, if we are interested in small values of $x$, $\Delta x$, the expression can be modified as follows:
\begin{align} f(\Delta x) &= p(\Delta x) + \mathcal{O}(\Delta x^n) \\ g(\Delta x) &= q(\Delta x) + \mathcal{O}(\Delta x^m) \\ r &= \min(n, m)\end{align}
then
$$ f+g = p + q + O(\Delta x^r)$$
and
\begin{align} f \cdot g &= p \cdot q + p \cdot \mathcal{O}(\Delta x^m) + q \cdot \mathcal{O}(\Delta x^n) + \mathcal{O}(\Delta x^{n+m}) \\ &= p \cdot q + \mathcal{O}(\Delta x^r)\end{align}

***Note:*** in this case, we assume that at least the polynomial with $k=max(n,m)$ has the following form:
$$ p(\Delta x) = 1 + p_1 \Delta x + p_2 \Delta x^2 + \ldots$$
or
$$ q(\Delta x) = 1 + q_1 \Delta x + q_2 \Delta x^2 + \ldots$$
so that there is an $\mathcal{O}(1)$ term that guarantees the existence of $\mathcal{O}(\Delta x^r)$ in the final product.

To get an idea of why the power of $\Delta x$ matters most when considering convergence, the following figure shows how different powers in the convergence rate can affect how quickly our solution converges. Note that here we are plotting the same data in two different ways. Plotting the error as a function of $\Delta x$ is a common way to show that a numerical method is doing what we expect and exhibits the correct convergence behavior. Since the errors can shrink quickly, it is very common to draw this kind of plot on a log-log scale to visualize the results easily. Note that if a method were truly of order $n$, it would be a linear function in log-log space with slope $n$.
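As a quick illustration of the definitions above (a minimal sketch added here, not part of the original notes), the absolute and relative error of the small-angle approximation $\sin x \approx x$ can be computed directly:

```python
import numpy as np

x = 0.1
f = np.sin(x)       # "true" value
F = x               # small-angle approximation sin(x) ~ x
e_a = abs(f - F)    # absolute error, roughly x**3/6
e_r = e_a / abs(f)  # relative error, roughly x**2/6
print(e_a, e_r)
```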
###Code
import numpy as np
import matplotlib.pyplot as plt
dx = np.linspace(1.0, 1e-4, 100)
fig = plt.figure()
fig.set_figwidth(fig.get_figwidth() * 2.0)
axes = []
axes.append(fig.add_subplot(1, 2, 1))
axes.append(fig.add_subplot(1, 2, 2))
for n in range(1, 5):
axes[0].plot(dx, dx**n, label="$\Delta x^%s$" % n)
axes[1].loglog(dx, dx**n, label="$\Delta x^%s$" % n)
axes[0].legend(loc=2)
axes[1].set_xticks([10.0**(-n) for n in range(5)])
axes[1].set_yticks([10.0**(-n) for n in range(16)])
axes[1].legend(loc=4)
for n in range(2):
axes[n].set_title("Crecimiento del Error vs. $\Delta x^n$")
axes[n].set_xlabel("$\Delta x$")
axes[n].set_ylabel("Error Estimado")
axes[n].set_title("Crecimiento de las diferencias")
axes[n].set_xlabel("$\Delta x$")
axes[n].set_ylabel("Error Estimado")
plt.show()
###Output
_____no_output_____
###Markdown
Truncation error
***Taylor's theorem:*** Let $f(x) \in C^{m+1}[a,b]$ and $x_0 \in [a,b]$; for every $x \in (a,b)$ there exists a number $c = c(x)$ lying between $x_0$ and $x$ such that
$$ f(x) = T_N(x) + R_N(x)$$
where $T_N(x)$ is the Taylor polynomial approximation
$$T_N(x) = \sum^N_{n=0} \frac{f^{(n)}(x_0)\times(x-x_0)^n}{n!}$$
and $R_N(x)$ is the remainder (the part of the series we leave out)
$$R_N(x) = \frac{f^{(n+1)}(c) \times (x - x_0)^{n+1}}{(n+1)!}$$

Another way to think about these results is to replace $x - x_0$ with $\Delta x$. The main idea is that the remainder $R_N(x)$ becomes smaller as $\Delta x \rightarrow 0$.
$$T_N(x) = \sum^N_{n=0} \frac{f^{(n)}(x_0)\times \Delta x^n}{n!}$$
and $R_N(x)$ is the remainder (the part of the series we leave out)
$$ R_N(x) = \frac{f^{(n+1)}(c) \times \Delta x^{n+1}}{(n+1)!} \leq M \Delta x^{n+1}$$

***Example 1:*** $f(x) = e^x$ with $x_0 = 0$
Using this we can find expressions for the relative and absolute error as a function of $x$, assuming $N=2$.

Derivatives:
$$\begin{aligned} f'(x) &= e^x \\ f''(x) &= e^x \\ f^{(n)}(x) &= e^x\end{aligned}$$
Taylor polynomial:
$$\begin{aligned} T_N(x) &= \sum^N_{n=0} e^0 \frac{x^n}{n!} \Rightarrow \\ T_2(x) &= 1 + x + \frac{x^2}{2}\end{aligned}$$
Remainder:
$$\begin{aligned} R_N(x) &= e^c \frac{x^{n+1}}{(n+1)!} = e^c \times \frac{x^3}{6} \quad \Rightarrow \\ R_2(x) &\leq \frac{e^1}{6} \approx 0.5\end{aligned}$$
Accuracy:
$$ e^1 = 2.718\ldots \\ T_2(1) = 2.5 \Rightarrow e \approx 0.2 ~~ r \approx 0.1$$

We can also use the `sympy` package, which has the ability to compute the *Taylor* polynomial built in!
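A quick numerical check of the accuracy claim above (a small sketch added here) confirms the absolute error of roughly $0.2$ and the relative error of roughly $0.1$ at $x = 1$:

```python
import numpy as np

x = 1.0
T2 = 1.0 + x + x**2 / 2.0       # T_2(1) = 2.5
abs_err = abs(np.exp(x) - T2)   # ~0.218, consistent with e ~ 0.2
rel_err = abs_err / np.exp(x)   # ~0.08, consistent with r ~ 0.1
print(T2, abs_err, rel_err)
```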
###Code
import sympy
x = sympy.symbols('x')
f = sympy.symbols('f', cls=sympy.Function)
f = sympy.exp(x)
f.series(x0=0, n=5)
###Output
_____no_output_____
###Markdown
Plotting
###Code
x = np.linspace(-1, 1, 100)
T_N = 1.0 + x + x**2 / 2.0
R_N = np.exp(1) * x**3 / 6.0
plt.plot(x, T_N, 'r', x, np.exp(x), 'k', x, R_N, 'b')
plt.plot(0.0, 1.0, 'o', markersize=10)
plt.grid(True)
plt.xlabel("x")
plt.ylabel("$f(x)$, $T_N(x)$, $R_N(x)$")
plt.legend(["$T_N(x)$", "$f(x)$", "$R_N(x)$"], loc=2)
plt.show()
###Output
_____no_output_____
###Markdown
***Example 2:*** Approximate
$$ f(x) = \frac{1}{x} \quad x_0 = 1,$$
using $x_0 = 1$ and keeping the Taylor series through the third term.
$$\begin{aligned} f'(x) &= -\frac{1}{x^2} \\ f''(x) &= \frac{2}{x^3} \\ f^{(n)}(x) &= \frac{(-1)^n n!}{x^{n+1}}\end{aligned}$$
$$\begin{aligned} T_N(x) &= \sum^N_{n=0} (-1)^n (x-1)^n \Rightarrow \\ T_2(x) &= 1 - (x - 1) + (x - 1)^2\end{aligned}$$
$$\begin{aligned} R_N(x) &= \frac{(-1)^{n+1}(x - 1)^{n+1}}{c^{n+2}} \Rightarrow \\ R_2(x) &= \frac{-(x - 1)^{3}}{c^{4}}\end{aligned}$$
###Code
x = np.linspace(0.8, 2, 100)
T_N = 1.0 - (x-1) + (x-1)**2
R_N = -(x-1.0)**3 / (1.1**4)
plt.plot(x, T_N, 'r', x, 1.0 / x, 'k', x, R_N, 'b')
plt.plot(1.0, 1.0, 'o', markersize=10)
plt.grid(True)
plt.xlabel("x")
plt.ylabel("$f(x)$, $T_N(x)$, $R_N(x)$")
plt.legend(["$T_N(x)$", "$f(x)$", "$R_N(x)$"], loc=8)
plt.show()
###Output
_____no_output_____
###Markdown
Write your comments in this cell.

Floating-point error
Errors arise from approximating real numbers with finite-precision numbers
$$\pi \approx 3.14$$
or $\frac{1}{3} \approx 0.333333333$ in decimal; the results form a finite number of registers to represent each number.

Floating-point systems
Numbers in floating-point systems are represented as a series of bits that represent different parts of a number. In normalized floating-point systems there are some standard conventions for how these bits are used. In general, numbers are stored by splitting them into the form
$$F = \pm d_1 . d_2 d_3 d_4 \ldots d_p \times \beta^E$$
where
1. $\pm$ is a single bit and represents the sign of the number.
2. $d_1 . d_2 d_3 d_4 \ldots d_p$ is the *mantissa*. Note that, technically, the decimal point can be moved, but in general, using scientific notation, the decimal point can always be placed at this location. The digits $d_2 d_3 d_4 \ldots d_p$ are called the *fraction*, with $p$ digits of precision. Normalized systems specifically put the decimal point at the front and assume $d_1 \neq 0$ unless the number is exactly $0$.
3. $\beta$ is the *base*. For the binary system $\beta = 2$, for decimal $\beta = 10$, etc.
4. $E$ is the *exponent*, an integer in the range $[E_{\min}, E_{\max}]$

The important points about any floating-point system are
1. There is a discrete and finite set of representable numbers.
2. These representable numbers are not uniformly distributed on the real line.
3. Arithmetic in floating-point systems produces results that differ from infinite-precision arithmetic (i.e., "real" mathematics).

Properties of floating-point systems
All floating-point systems are characterized by several important numbers
- Smallest normalized number (underflow below it, related to sub-normal numbers around zero)
- Largest normalized number (overflow)
- Zero
- $\epsilon$ or $\epsilon_{mach}$
- `Inf` and `nan`

***Example: toy system***
Consider the normalized decimal system with 2 digits of precision
$$f = \pm d_1 . d_2 \times 10^E$$
with $E \in [-2, 0]$.
**Number and distribution of numbers**
1. How many numbers can be represented in this system?
2. What is their distribution on the real line?
3. What are the underflow and overflow limits?

How many numbers can be represented in this system?
$$f = \pm d_1 . d_2 \times 10^E ~~~ \text{with} ~~~ E \in [-2, 0]$$
$$2 \times 9 \times 10 \times 3 + 1 = 541$$

What is their distribution on the real line?
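A brute-force enumeration (a small check added here, not part of the original notes) confirms the count of 541 representable numbers in this toy system:

```python
# Enumerate f = ±d1.d2 × 10^E with d1 in 1..9, d2 in 0..9, E in {-2, -1, 0}, plus zero
values = {0.0}
for sign in (1.0, -1.0):
    for E in (-2, -1, 0):
        for d1 in range(1, 10):
            for d2 in range(10):
                values.add(sign * (d1 + d2 / 10.0) * 10.0**E)
print(len(values))  # 541
```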
###Code
d_1_values = [1, 2, 3, 4, 5, 6, 7, 8, 9]
d_2_values = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
E_values = [0, -1, -2]
fig = plt.figure(figsize=(10.0, 1.0))
axes = fig.add_subplot(1, 1, 1)
for E in E_values:
for d1 in d_1_values:
for d2 in d_2_values:
axes.plot( (d1 + d2 * 0.1) * 10**E, 0.0, 'r+', markersize=20)
axes.plot(-(d1 + d2 * 0.1) * 10**E, 0.0, 'r+', markersize=20)
axes.plot(0.0, 0.0, '+', markersize=20)
axes.plot([-10.0, 10.0], [0.0, 0.0], 'k')
axes.set_title("Distribución de Valores")
axes.set_yticks([])
axes.set_xlabel("x")
axes.set_ylabel("")
axes.set_xlim([-0.1, 0.1])
plt.show()
###Output
_____no_output_____
###Markdown
What are the upper (overflow) and lower (underflow) limits?
- The smallest number that can be represented (underflow) is: $1.0 \times 10^{-2} = 0.01$
- The largest number that can be represented (overflow) is: $9.9 \times 10^0 = 9.9$

Binary system
Consider the base-2 system with 2 digits of precision
$$f=\pm d_1 . d_2 \times 2^E \quad \text{with} \quad E \in [-1, 1]$$
**Number and distribution of numbers**
1. How many numbers can be represented in this system?
2. What is their distribution on the real line?
3. What are the underflow and overflow limits?

How many numbers can be represented in this system?
$$f=\pm d_1 . d_2 \times 2^E ~~~~ \text{with} ~~~~ E \in [-1, 1]$$
$$ 2 \times 1 \times 2 \times 3 + 1 = 13$$

What is their distribution on the real line?
###Code
d_1_values = [1]
d_2_values = [0, 1]
E_values = [1, 0, -1]
fig = plt.figure(figsize=(10.0, 1.0))
axes = fig.add_subplot(1, 1, 1)
for E in E_values:
for d1 in d_1_values:
for d2 in d_2_values:
axes.plot( (d1 + d2 * 0.5) * 2**E, 0.0, 'r+', markersize=20)
axes.plot(-(d1 + d2 * 0.5) * 2**E, 0.0, 'r+', markersize=20)
axes.plot(0.0, 0.0, 'r+', markersize=20)
axes.plot([-4.5, 4.5], [0.0, 0.0], 'k')
axes.set_title("Distribución de Valores")
axes.set_yticks([])
axes.set_xlabel("x")
axes.set_ylabel("")
axes.set_xlim([-3.5, 3.5])
plt.show()
###Output
_____no_output_____
###Markdown
What are the upper (*overflow*) and lower (*underflow*) limits?
- The smallest number that can be represented (*underflow*) is: $1.0 \times 2^{-1} = 0.5$
- The largest number that can be represented (*overflow*) is: $1.1 \times 2^1 = 3$
Note that these numbers are written in binary. A quick rule of thumb:
$$2^3 2^2 2^1 2^0 . 2^{-1} 2^{-2} 2^{-3}$$
corresponds to
8s, 4s, 2s, 1s . halves, quarters, eighths, $\ldots$

Real system - IEEE 754 binary floating point

Single precision
- Total storage is 32 bits
- 8-bit exponent $\Rightarrow E \in [-126, 127]$
- 23-bit fraction ($p = 24$)
```
s EEEEEEEE FFFFFFFFFFFFFFFFFFFFFFF
0 1      8 9                     31
```
Overflow $= 2^{127} \approx 3.4 \times 10^{38}$
Underflow $= 2^{-126} \approx 1.2 \times 10^{-38}$
$\epsilon_{\text{machine}} = 2^{-23} \approx 1.2 \times 10^{-7}$

Double precision
- Total allocated storage is 64 bits
- 11-bit exponent $\Rightarrow E \in [-1022, 1024]$
- 52-bit fraction ($p = 53$)
```
s EEEEEEEEEEE FFFFFFFFFF FFFFFFFFFF FFFFFFFFFF FFFFFFFFFF FFFFFFFFFF FF
0 1        11 12                                                     63
```
Overflow $= 2^{1024} \approx 1.8 \times 10^{308}$
Underflow $= 2^{-1022} \approx 2.2 \times 10^{-308}$
$\epsilon_{\text{machine}} = 2^{-52} \approx 2.2 \times 10^{-16}$

Python access to IEEE numbers
Access to many important parameters, such as machine epsilon
```python
import numpy
numpy.finfo(float).eps
```
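As a small illustration (added here; it assumes the standard IEEE 754 binary64 layout just described), the sign, exponent, and fraction fields of a Python float can be unpacked directly from its bit pattern:

```python
import struct

x = -0.15625                                         # = -1.01_2 × 2^{-3}
bits = struct.unpack('>Q', struct.pack('>d', x))[0]  # raw 64-bit pattern
sign     = bits >> 63                                # 1 sign bit
exponent = (bits >> 52) & 0x7FF                      # 11 exponent bits, biased by 1023
fraction = bits & ((1 << 52) - 1)                    # 52 fraction bits
print(sign, exponent - 1023, fraction)               # 1 -3 1125899906842624
```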
###Code
import numpy
numpy.finfo(float).eps
print(numpy.finfo(numpy.float16))
print(numpy.finfo(numpy.float32))
print(numpy.finfo(float))
print(numpy.finfo(numpy.float128))
###Output
Machine parameters for float16
---------------------------------------------------------------
precision = 3 resolution = 1.00040e-03
machep = -10 eps = 9.76562e-04
negep = -11 epsneg = 4.88281e-04
minexp = -14 tiny = 6.10352e-05
maxexp = 16 max = 6.55040e+04
nexp = 5 min = -max
---------------------------------------------------------------
Machine parameters for float32
---------------------------------------------------------------
precision = 6 resolution = 1.0000000e-06
machep = -23 eps = 1.1920929e-07
negep = -24 epsneg = 5.9604645e-08
minexp = -126 tiny = 1.1754944e-38
maxexp = 128 max = 3.4028235e+38
nexp = 8 min = -max
---------------------------------------------------------------
Machine parameters for float64
---------------------------------------------------------------
precision = 15 resolution = 1.0000000000000001e-15
machep = -52 eps = 2.2204460492503131e-16
negep = -53 epsneg = 1.1102230246251565e-16
minexp = -1022 tiny = 2.2250738585072014e-308
maxexp = 1024 max = 1.7976931348623157e+308
nexp = 11 min = -max
---------------------------------------------------------------
###Markdown
Why should we care about this?
- Floating-point arithmetic is not associative (unlike exact arithmetic)
- Floating-point errors compound; do not assume that double precision is enough
- Mixing precisions is very dangerous

Example 1: simple arithmetic
Simple arithmetic with $\delta < \epsilon_{\text{machine}}$
$$(1+\delta) - 1 = 1 - 1 = 0$$
$$1 - 1 + \delta = \delta$$

Example 2: catastrophic cancellation
Let us look at what happens when we add two numbers $x$ and $y$ with $x+y \neq 0$. In fact, we can estimate these bounds by doing an error analysis. Here we need to introduce the idea that every floating-point operation introduces an error, such that
$$ \text{fl}(x ~\text{op}~ y) = (x ~\text{op}~ y) (1 + \delta)$$
where $\text{fl}(\cdot)$ is a function that returns the floating-point representation of the enclosed expression, $\text{op}$ is some operation (e.g. $+, -, \times, /$), and $\delta$ is the floating-point error due to $\text{op}$.

Back to the problem at hand. The floating-point error due to addition is
$$\text{fl}(x + y) = (x + y) (1 + \delta).$$
Comparing this with the true solution using a relative error we have
$$\begin{aligned} \frac{(x + y) - \text{fl}(x + y)}{x + y} &= \frac{(x + y) - (x + y) (1 + \delta)}{x + y} = \delta.\end{aligned}$$
so if $\delta = \mathcal{O}(\epsilon_{\text{machine}})$ we will not be very worried.

What happens if we consider a floating-point error in the representation of $x$ and $y$, $x \neq y$, and say that $\delta_x$ and $\delta_y$ are the magnitudes of the errors in their representation? We will assume that this constitutes the floating-point error, rather than associating the error with the operation itself.
Given all this, we would have
$$\begin{aligned} \text{fl}(x + y) &= x (1 + \delta_x) + y (1 + \delta_y) \\ &= x + y + x \delta_x + y \delta_y \\ &= (x + y) \left(1 + \frac{x \delta_x + y \delta_y}{x + y}\right)\end{aligned}$$

Computing the relative error again, we will have
$$\begin{aligned} \frac{x + y - (x + y) \left(1 + \frac{x \delta_x + y \delta_y}{x + y}\right)}{x + y} &= 1 - \left(1 + \frac{x \delta_x + y \delta_y}{x + y}\right) \\ &= \frac{x}{x + y} \delta_x + \frac{y}{x + y} \delta_y \\ &= \frac{1}{x + y} (x \delta_x + y \delta_y)\end{aligned}$$
The important thing here is that now the error depends on the values of $x$ and $y$, and more importantly, on their sum. Of particular concern is the relative size of $x + y$. As it approaches zero relative to the magnitudes of $x$ and $y$, the error can become arbitrarily large. This is known as ***catastrophic cancellation***.
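A two-line check (added here) reproduces Example 1 in double precision, with $\delta$ taken just below $\epsilon_{\text{machine}}$:

```python
import numpy as np

delta = np.finfo(float).eps / 2.0  # delta smaller than machine epsilon
print((1.0 + delta) - 1.0)         # 0.0: delta is rounded away
print(1.0 - 1.0 + delta)           # 1.1102230246251565e-16: delta survives
```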
###Code
dx = numpy.array([10**(-n) for n in range(1, 16)])
x = 1.0 + dx
y = -numpy.ones(x.shape)
error = numpy.abs(x + y - dx) / (dx)
fig = plt.figure()
fig.set_figwidth(fig.get_figwidth() * 2)
axes = fig.add_subplot(1, 2, 1)
axes.loglog(dx, x + y, 'o-')
axes.set_xlabel("$\Delta x$")
axes.set_ylabel("$x + y$")
axes.set_title("$\Delta x$ vs. $x+y$")
axes = fig.add_subplot(1, 2, 2)
axes.loglog(dx, error, 'o-')
axes.set_xlabel("$\Delta x$")
axes.set_ylabel("$|x + y - \Delta x| / \Delta x$")
axes.set_title("Diferencia entre $x$ y $y$ vs. Error relativo")
plt.show()
###Output
_____no_output_____
###Markdown
Example 3: evaluating a function
Consider the function
$$ f(x) = \frac{1 - \cos x}{x^2}$$
with $x\in[-10^{-4}, 10^{-4}]$. Taking the limit as $x \rightarrow 0$ we can see what behavior we would expect when evaluating this function:
$$ \lim_{x \rightarrow 0} \frac{1 - \cos x}{x^2} = \lim_{x \rightarrow 0} \frac{\sin x}{2 x} = \lim_{x \rightarrow 0} \frac{\cos x}{2} = \frac{1}{2}.$$
What does the floating-point representation do?
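A standard remedy, added here as a side note (it is not part of the original notes), is to rewrite the numerator with the half-angle identity $1-\cos x = 2\sin^2(x/2)$, which avoids subtracting two nearly equal numbers:

```python
import numpy as np

x = np.float32(1e-4)
naive  = (np.float32(1.0) - np.cos(x)) / x**2  # cancellation: cos(x) rounds to 1 in float32
stable = 2.0 * np.sin(x / 2.0)**2 / x**2       # half-angle form stays near the limit 1/2
print(naive, stable)                           # 0.0 vs. 0.5
```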
###Code
x = numpy.linspace(-1e-3, 1e-3, 100, dtype=numpy.float32)
error = (0.5 - (1.0 - numpy.cos(x)) / x**2) / 0.5
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(x, error, 'o')
axes.set_xlabel("x")
axes.set_ylabel("Error Relativo")
###Output
_____no_output_____
###Markdown
Example 4: evaluating a polynomial $$f(x) = x^7 - 7x^6 + 21 x^5 - 35 x^4 + 35x^3-21x^2 + 7x - 1$$ (this is the expanded form of $(x-1)^7$, so near $x=1$ the terms nearly cancel).
###Code
x = numpy.linspace(0.988, 1.012, 1000, dtype=numpy.float16)
y = x**7 - 7.0 * x**6 + 21.0 * x**5 - 35.0 * x**4 + 35.0 * x**3 - 21.0 * x**2 + 7.0 * x - 1.0
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(x, y, 'r')
axes.set_xlabel("x")
axes.set_ylabel("y")
axes.set_ylim((-0.1, 0.1))
axes.set_xlim((x[0], x[-1]))
plt.show()
###Output
_____no_output_____
###Markdown
Example 5: evaluating a rational function
Compute $f(x) = x + 1$ via the function $$F(x) = \frac{x^2 - 1}{x - 1}$$
What behavior would you expect to find?
###Code
x = numpy.linspace(0.5, 1.5, 101, dtype=numpy.float16)
f_hat = (x**2 - 1.0) / (x - 1.0)
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(x, numpy.abs(f_hat - (x + 1.0)))
axes.set_xlabel("$x$")
axes.set_ylabel("Error Absoluto")
plt.show()
###Output
_____no_output_____
###Markdown
Combination of error
In general, we have to deal with the combination of truncation error and floating-point error.
- Truncation error: errors that arise from approximating a function, truncating a series.
$$\sin x \approx x - \frac{x^3}{3!} + \frac{x^5}{5!} + O(x^7)$$
- Floating-point error: errors resulting from approximating real numbers with finite-precision numbers
$$\pi \approx 3.14$$
or $\frac{1}{3} \approx 0.333333333$ in decimal; the results form a finite number of registers to represent each number.

Example 1:
Consider the finite-difference approximation where $f(x) = e^x$ and we are evaluating at $x=1$
$$f'(x) \approx \frac{f(x + \Delta x) - f(x)}{\Delta x}$$
Compare the error, as $\Delta x$ decreases, against the true solution $f'(1) = e$
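A short aside (added here as a rule of thumb): for the one-sided difference the truncation error scales like $\Delta x$ while the rounding error scales like $\epsilon_{\text{machine}}/\Delta x$, so the total error is roughly minimized near $\Delta x \approx \sqrt{\epsilon_{\text{machine}}}$, which is where the curves in the plot below bottom out.

```python
import numpy as np

eps = np.finfo(float).eps
dx = np.sqrt(eps)                                # ~1.5e-8, near the optimum for the one-sided formula
approx = (np.exp(1.0 + dx) - np.exp(1.0)) / dx
print(dx, abs(approx - np.exp(1.0)))             # error ~1e-8, close to the best achievable here
```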
###Code
delta_x = numpy.linspace(1e-20, 5.0, 100)
delta_x = numpy.array([2.0**(-n) for n in range(1, 60)])
x = 1.0
f_hat_1 = (numpy.exp(x + delta_x) - numpy.exp(x)) / (delta_x)
f_hat_2 = (numpy.exp(x + delta_x) - numpy.exp(x - delta_x)) / (2.0 * delta_x)
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.loglog(delta_x, numpy.abs(f_hat_1 - numpy.exp(1)), 'o-', label="Unilateral")
axes.loglog(delta_x, numpy.abs(f_hat_2 - numpy.exp(1)), 's-', label="Centrado")
axes.legend(loc=3)
axes.set_xlabel("$\Delta x$")
axes.set_ylabel("Error Absoluto")
plt.show()
###Output
_____no_output_____
###Markdown
Example 2:
Evaluate $e^x$ with the *Taylor* series
$$e^x = \sum^\infty_{n=0} \frac{x^n}{n!}$$
Can we choose $n< \infty$ so that it approximates $e^x$ over a given range $x \in [a,b]$ such that the relative error $E$ satisfies $E<8 \cdot \varepsilon_{\text{machine}}$?
What might be a better way than simply evaluating the Taylor polynomial directly for various $N$?
###Code
import scipy.special
def my_exp(x, N=10):
value = 0.0
for n in range(N + 1):
value += x**n / scipy.special.factorial(n)
return value
x = numpy.linspace(-2, 2, 100, dtype=numpy.float32)
for N in range(1, 50):
error = numpy.abs((numpy.exp(x) - my_exp(x, N=N)) / numpy.exp(x))
if numpy.all(error < 8.0 * numpy.finfo(float).eps):
break
print(N)
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(x, error)
axes.set_xlabel("x")
axes.set_ylabel("Error Relativo")
plt.show()
###Output
_____no_output_____
###Markdown
Example 3: relative error
Say we want to compute the relative error of two values $x$ and $y$, using $x$ as the normalization value; then
$$ E = \frac{x - y}{x}$$
and
$$ E = 1 - \frac{y}{x}$$
are equivalent. In finite precision, which form might be expected to be more accurate, and why?
Example taken from a [blog](https://nickhigham.wordpress.com/2017/08/14/how-and-how-not-to-compute-a-relative-error/) post by Nick Higham.

Using this model, the original definition contains two floating-point operations, so that
$$\begin{aligned} E_1 = \text{fl}\left(\frac{x - y}{x}\right) &= \text{fl}(\text{fl}(x - y) / x) \\ &= \left[ \frac{(x - y) (1 + \delta_+)}{x} \right ] (1 + \delta_/) \\ &= \frac{x - y}{x} (1 + \delta_+) (1 + \delta_/)\end{aligned}$$

For the other formulation we have
$$\begin{aligned} E_2 = \text{fl}\left( 1 - \frac{y}{x} \right ) &= \text{fl}\left(1 - \text{fl}\left(\frac{y}{x}\right) \right) \\ &= \left(1 - \frac{y}{x} (1 + \delta_/) \right) (1 + \delta_-)\end{aligned}$$

If we assume that all the $\text{op}$s have error magnitudes of similar size, then we can simplify things by letting
$$ |\delta_\ast| \le \epsilon.$$
To compare the two formulations, we again use the relative error between the true relative error $e_i$ and our computed versions $E_i$.

Original definition:
$$\begin{aligned} \frac{e - E_1}{e} &= \frac{\frac{x - y}{x} - \frac{x - y}{x} (1 + \delta_+) (1 + \delta_/)}{\frac{x - y}{x}} \\ &\le 1 - (1 + \epsilon) (1 + \epsilon) = 2 \epsilon + \epsilon^2\end{aligned}$$

Manipulated definition:
$$\begin{aligned} \frac{e - E_2}{e} &= \frac{e - \left[1 - \frac{y}{x}(1 + \delta_/) \right] (1 + \delta_-)}{e} \\ &= \frac{e - \left[e - \frac{y}{x} \delta_/ \right] (1 + \delta_-)}{e} \\ &= \frac{e - \left[e + e\delta_- - \frac{y}{x} \delta_/ - \frac{y}{x} \delta_/ \delta_- \right] }{e} \\ &= - \delta_- + \frac{1}{e} \frac{y}{x} \left(\delta_/ + \delta_/ \delta_- \right) \\ &= - \delta_- + \frac{1 -e}{e} \left(\delta_/ + \delta_/ \delta_- \right) \\ &\le \epsilon + \left |\frac{1 - e}{e}\right | (\epsilon + \epsilon^2)\end{aligned}$$
We see then that our floating-point error will depend on the relative magnitude of $e$.
###Code
# Based on the code by Nick Higham
# https://gist.github.com/higham/6f2ce1cdde0aae83697bca8577d22a6e
# Compares relative error formulations using single precision and compared to double precision
N = 501 # Note: Use 501 instead of 500 to avoid the zero value
d = numpy.finfo(numpy.float32).eps * 1e4
a = 3.0
x = a * numpy.ones(N, dtype=numpy.float32)
y = [x[i] + numpy.multiply((i - numpy.divide(N, 2.0, dtype=numpy.float32)), d, dtype=numpy.float32) for i in range(N)]
# Compute errors and "true" error
relative_error = numpy.empty((2, N), dtype=numpy.float32)
relative_error[0, :] = numpy.abs(x - y) / x
relative_error[1, :] = numpy.abs(1.0 - y / x)
exact = numpy.abs( (numpy.float64(x) - numpy.float64(y)) / numpy.float64(x))
# Compute differences between error calculations
error = numpy.empty((2, N))
for i in range(2):
error[i, :] = numpy.abs((relative_error[i, :] - exact) / numpy.abs(exact))
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.semilogy(y, error[0, :], '.', markersize=10, label="$|x-y|/|x|$")
axes.semilogy(y, error[1, :], '.', markersize=10, label="$|1-y/x|$")
axes.grid(True)
axes.set_xlabel("y")
axes.set_ylabel("Error Relativo")
axes.set_xlim((numpy.min(y), numpy.max(y)))
axes.set_ylim((5e-9, numpy.max(error[1, :])))
axes.set_title("Comparasión Error Relativo")
axes.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Some useful links regarding IEEE floating point:
- [What Every Computer Scientist Should Know About Floating-Point Arithmetic](http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html)
- [IEEE 754 Floating Point Calculator](http://babbage.cs.qc.edu/courses/cs341/IEEE-754.html)
- [Numerical Computing with IEEE Floating Point Arithmetic](http://epubs.siam.org/doi/book/10.1137/1.9780898718072)

Counting operations
- ***Truncation error:*** *why not use more terms in the Taylor series?*
- ***Floating-point error:*** *why not use the highest precision possible?*

Example 1: matrix-vector multiplication
Let $A, B \in \mathbb{R}^{N \times N}$ and $x \in \mathbb{R}^N$.
1. Count the approximate number of operations it will take to compute $Ax$
2. Do the same for $AB$

***Matrix-vector product:*** defining $[A]_i$ as the $i$-th row of $A$ and $A_{ij}$ as the $i$,$j$-th entry, then
$$ A x = \sum^N_{i=1} [A]_i \cdot x = \sum^N_{i=1} \sum^N_{j=1} A_{ij} x_j$$
Taking a particular case, with $N=3$, the operation count is
$$ A x = [A]_1 \cdot v + [A]_2 \cdot v + [A]_3 \cdot v = \begin{bmatrix} A_{11} \times v_1 + A_{12} \times v_2 + A_{13} \times v_3 \\ A_{21} \times v_1 + A_{22} \times v_2 + A_{23} \times v_3 \\ A_{31} \times v_1 + A_{32} \times v_2 + A_{33} \times v_3 \end{bmatrix}$$
This is 15 operations (6 additions and 9 multiplications).

Taking another case, with $N=4$, the operation count is
$$ A x = [A]_1 \cdot v + [A]_2 \cdot v + [A]_3 \cdot v + [A]_4 \cdot v = \begin{bmatrix} A_{11} \times v_1 + A_{12} \times v_2 + A_{13} \times v_3 + A_{14} \times v_4 \\ A_{21} \times v_1 + A_{22} \times v_2 + A_{23} \times v_3 + A_{24} \times v_4 \\ A_{31} \times v_1 + A_{32} \times v_2 + A_{33} \times v_3 + A_{34} \times v_4 \\ A_{41} \times v_1 + A_{42} \times v_2 + A_{43} \times v_3 + A_{44} \times v_4 \\ \end{bmatrix}$$
This leads to 28 operations (12 additions and 16 multiplications).
Generalizing, there are $N^2$ multiplications and $N(N-1)$ additions, for a total of
$$ \text{operations} = N (N - 1) + N^2 = \mathcal{O}(N^2).$$

***Matrix-matrix product ($AB$):*** defining $[B]_j$ as the $j$-th column of $B$, then
$$ (A B)_{ij} = \sum^N_{i=1} \sum^N_{j=1} [A]_i \cdot [B]_j$$
The inner product of two vectors, represented by
$$ a \cdot b = \sum^N_{i=1} a_i b_i$$
leads to $\mathcal{O}(3N)$ operations. Since there are $N^2$ entries in the resulting matrix, we would have $\mathcal{O}(N^3)$ operations.

There are methods to perform matrix-matrix multiplication faster; a small operation-count check is sketched below.
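The helper `matvec_opcount` below is purely illustrative (added here, not part of the original notes); a naive matrix-vector product with an explicit counter reproduces the $N(N-1) + N^2$ operation count:

```python
import numpy as np

def matvec_opcount(A, v):
    """Naive matrix-vector product that counts multiplications and additions."""
    N = len(v)
    y = np.zeros(N)
    mults = adds = 0
    for i in range(N):
        for j in range(N):
            y[i] += A[i, j] * v[j]
            mults += 1
            if j > 0:
                adds += 1
    return y, mults, adds

A = np.random.rand(4, 4); v = np.random.rand(4)
y, mults, adds = matvec_opcount(A, v)
print(mults, adds, mults + adds)   # 16 12 28, matching the N = 4 case above
```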
In the following figure we see a collection of algorithms over time that have been able to limit the number of operations in certain circumstances to
$$ \mathcal{O}(N^\omega)$$


Example 2: Horner's method for evaluating polynomials
Given
$$P_N(x) = a_0 + a_1 x + a_2 x^2 + \ldots + a_N x^N$$
or
$$P_N(x) = p_1 x^N + p_2 x^{N-1} + p_3 x^{N-2} + \ldots + p_{N+1}$$
we want to find the best way to evaluate $P_N(x)$.

First consider two ways of writing $P_3$:
$$ P_3(x) = p_1 x^3 + p_2 x^2 + p_3 x + p_4$$
and, using nested multiplication,
$$ P_3(x) = ((p_1 x + p_2) x + p_3) x + p_4$$

Consider how many operations are needed for each...
$$ P_3(x) = p_1 x^3 + p_2 x^2 + p_3 x + p_4$$
$$P_3(x) = \overbrace{p_1 \cdot x \cdot x \cdot x}^3 + \overbrace{p_2 \cdot x \cdot x}^2 + \overbrace{p_3 \cdot x}^1 + p_4$$
Adding up all the operations, in general we can think of this as a pyramid; we can estimate in this way that the algorithm written like this will take approximately $\mathcal{O}(N^2/2)$ operations to complete.

Looking at our other means of evaluation,
$$ P_3(x) = ((p_1 x + p_2) x + p_3) x + p_4$$
here we find that the method is $\mathcal{O}(N)$ (the 2 is usually ignored in these cases). The important thing is that the first evaluation is $\mathcal{O}(N^2)$ and the second $\mathcal{O}(N)$!

Algorithm
Complete the function and implement *Horner's* method
```python
def eval_poly(p, x):
    """Evaluates polynomial given coefficients p at x

    Function to evaluate a polynomial in order N operations. The polynomial is defined as

        P(x) = p[0] x**n + p[1] x**(n-1) + ... + p[n-1] x + p[n]

    The value x should be a float.
    """
    pass
```
###Code
def eval_poly(p, x):
"""Evaluates polynomial given coefficients p at x
Function to evaluate a polynomial in order N operations. The polynomial is defined as
P(x) = p[0] x**n + p[1] x**(n-1) + ... + p[n-1] x + p[n]
The value x should be a float.
"""
### ADD CODE HERE
pass
# Scalar version
def eval_poly(p, x):
"""Evaluates polynomial given coefficients p at x
Function to evaluate a polynomial in order N operations. The polynomial is defined as
P(x) = p[0] x**n + p[1] x**(n-1) + ... + p[n-1] x + p[n]
The value x should be a float.
"""
y = p[0]
for coefficient in p[1:]:
y = y * x + coefficient
return y
# Vectorized version
def eval_poly(p, x):
"""Evaluates polynomial given coefficients p at x
Function to evaluate a polynomial in order N operations. The polynomial is defined as
P(x) = p[0] x**n + p[1] x**(n-1) + ... + p[n-1] x + p[n]
The value x can by a NumPy ndarray.
"""
y = numpy.ones(x.shape) * p[0]
for coefficient in p[1:]:
y = y * x + coefficient
return y
p = [1, -3, 10, 4, 5, 5]
x = numpy.linspace(-10, 10, 100)
plt.plot(x, eval_poly(p, x))
plt.show()
###Output
_____no_output_____ |
notebooks/losses_evaluation/Dstripes/basic/ellwlb/convolutional/AE/DstripesAE_Convolutional_reconst_1ellwlb_05sharpdiff.ipynb | ###Markdown
Settings
###Code
%env TF_KERAS = 1
import os
sep_local = os.path.sep
import sys
sys.path.append('..'+sep_local+'..')
print(sep_local)
os.chdir('..'+sep_local+'..'+sep_local+'..'+sep_local+'..'+sep_local+'..')
print(os.getcwd())
import tensorflow as tf
print(tf.__version__)
###Output
_____no_output_____
###Markdown
Dataset loading
###Code
dataset_name='Dstripes'
import tensorflow as tf
train_ds = tf.data.Dataset.from_generator(
lambda: training_generator,
output_types=tf.float32 ,
output_shapes=tf.TensorShape((batch_size, ) + image_size)
)
test_ds = tf.data.Dataset.from_generator(
lambda: testing_generator,
output_types=tf.float32 ,
output_shapes=tf.TensorShape((batch_size, ) + image_size)
)
_instance_scale=1.0
for data in train_ds:
_instance_scale = float(data[0].numpy().max())
break
_instance_scale
import numpy as np
from collections.abc import Iterable
if isinstance(inputs_shape, Iterable):
_outputs_shape = np.prod(inputs_shape)
_outputs_shape
###Output
_____no_output_____
###Markdown
Model's Layers definition
###Code
units=20
c=50
enc_lays = [
tf.keras.layers.Conv2D(filters=units, kernel_size=3, strides=(2, 2), activation='relu'),
tf.keras.layers.Conv2D(filters=units*9, kernel_size=3, strides=(2, 2), activation='relu'),
tf.keras.layers.Flatten(),
# No activation
tf.keras.layers.Dense(latents_dim)
]
dec_lays = [
tf.keras.layers.Dense(units=c*c*units, activation=tf.nn.relu),
tf.keras.layers.Reshape(target_shape=(c , c, units)),
tf.keras.layers.Conv2DTranspose(filters=units, kernel_size=3, strides=(2, 2), padding="SAME", activation='relu'),
tf.keras.layers.Conv2DTranspose(filters=units*3, kernel_size=3, strides=(2, 2), padding="SAME", activation='relu'),
# No activation
tf.keras.layers.Conv2DTranspose(filters=3, kernel_size=3, strides=(1, 1), padding="SAME")
]
###Output
_____no_output_____
###Markdown
Model definition
###Code
model_name = dataset_name+'AE_Convolutional_reconst_1ellwlb_05sharpdiff'
experiments_dir='experiments'+sep_local+model_name
from training.autoencoding_basic.autoencoders.autoencoder import autoencoder as AE
inputs_shape=image_size
variables_params = \
[
{
'name': 'inference',
'inputs_shape':inputs_shape,
'outputs_shape':latents_dim,
'layers': enc_lays
}
,
{
'name': 'generative',
'inputs_shape':latents_dim,
'outputs_shape':inputs_shape,
'layers':dec_lays
}
]
from utils.data_and_files.file_utils import create_if_not_exist
_restore = os.path.join(experiments_dir, 'var_save_dir')
create_if_not_exist(_restore)
_restore
#to restore trained model, set filepath=_restore
ae = AE(
name=model_name,
latents_dim=latents_dim,
batch_size=batch_size,
variables_params=variables_params,
filepath=None
)
from evaluation.quantitive_metrics.sharp_difference import prepare_sharpdiff
from statistical.losses_utilities import similarity_to_distance
from statistical.ae_losses import expected_loglikelihood_with_lower_bound as ellwlb
ae.compile(loss={'x_logits': lambda x_true, x_logits: ellwlb(x_true, x_logits)+ 0.5*similarity_to_distance(prepare_sharpdiff([ae.batch_size]+ae.get_inputs_shape()))(x_true, x_logits)})
###Output
_____no_output_____
###Markdown
Callbacks
###Code
from training.callbacks.sample_generation import SampleGeneration
from training.callbacks.save_model import ModelSaver
es = tf.keras.callbacks.EarlyStopping(
monitor='loss',
min_delta=1e-12,
patience=12,
verbose=1,
restore_best_weights=False
)
ms = ModelSaver(filepath=_restore)
csv_dir = os.path.join(experiments_dir, 'csv_dir')
create_if_not_exist(csv_dir)
csv_dir = os.path.join(csv_dir, ae.name+'.csv')
csv_log = tf.keras.callbacks.CSVLogger(csv_dir, append=True)
csv_dir
image_gen_dir = os.path.join(experiments_dir, 'image_gen_dir')
create_if_not_exist(image_gen_dir)
sg = SampleGeneration(latents_shape=latents_dim, filepath=image_gen_dir, gen_freq=5, save_img=True, gray_plot=False)
###Output
_____no_output_____
###Markdown
Model Training
###Code
from training.callbacks.disentangle_supervied import DisentanglementSuperviedMetrics
from training.callbacks.disentangle_unsupervied import DisentanglementUnsuperviedMetrics
gts_mertics = DisentanglementSuperviedMetrics(
ground_truth_data=eval_dataset,
representation_fn=lambda x: ae.encode(x),
random_state=np.random.RandomState(0),
file_Name=gts_csv,
num_train=10000,
num_test=100,
batch_size=batch_size,
continuous_factors=False,
gt_freq=10
)
gtu_mertics = DisentanglementUnsuperviedMetrics(
ground_truth_data=eval_dataset,
representation_fn=lambda x: ae.encode(x),
random_state=np.random.RandomState(0),
file_Name=gtu_csv,
num_train=20000,
num_test=500,
batch_size=batch_size,
gt_freq=10
)
ae.fit(
x=train_ds,
input_kw=None,
steps_per_epoch=int(1e4),
epochs=int(1e6),
verbose=2,
callbacks=[ es, ms, csv_log, sg, gts_mertics, gtu_mertics],
workers=-1,
use_multiprocessing=True,
validation_data=test_ds,
validation_steps=int(1e4)
)
###Output
_____no_output_____
###Markdown
Model Evaluation inception_score
###Code
from evaluation.generativity_metrics.inception_metrics import inception_score
is_mean, is_sigma = inception_score(ae, tolerance_threshold=1e-6, max_iteration=200)
print(f'inception_score mean: {is_mean}, sigma: {is_sigma}')
###Output
_____no_output_____
###Markdown
Frechet_inception_distance
###Code
from evaluation.generativity_metrics.inception_metrics import frechet_inception_distance
fis_score = frechet_inception_distance(ae, training_generator, tolerance_threshold=1e-6, max_iteration=10, batch_size=32)
print(f'frechet inception distance: {fis_score}')
###Output
_____no_output_____
###Markdown
perceptual_path_length_score
###Code
from evaluation.generativity_metrics.perceptual_path_length import perceptual_path_length_score
ppl_mean_score = perceptual_path_length_score(ae, training_generator, tolerance_threshold=1e-6, max_iteration=200, batch_size=32)
print(f'perceptual path length score: {ppl_mean_score}')
###Output
_____no_output_____
###Markdown
precision score
###Code
from evaluation.generativity_metrics.precision_recall import precision_score
_precision_score = precision_score(ae, training_generator, tolerance_threshold=1e-6, max_iteration=200)
print(f'precision score: {_precision_score}')
###Output
_____no_output_____
###Markdown
recall score
###Code
from evaluation.generativity_metrics.precision_recall import recall_score
_recall_score = recall_score(ae, training_generator, tolerance_threshold=1e-6, max_iteration=200)
print(f'recall score: {_recall_score}')
###Output
_____no_output_____
###Markdown
Image Generation image reconstruction Training dataset
###Code
%load_ext autoreload
%autoreload 2
from training.generators.image_generation_testing import reconstruct_from_a_batch
from utils.data_and_files.file_utils import create_if_not_exist
save_dir = os.path.join(experiments_dir, 'reconstruct_training_images_like_a_batch_dir')
create_if_not_exist(save_dir)
reconstruct_from_a_batch(ae, training_generator, save_dir)
from utils.data_and_files.file_utils import create_if_not_exist
save_dir = os.path.join(experiments_dir, 'reconstruct_testing_images_like_a_batch_dir')
create_if_not_exist(save_dir)
reconstruct_from_a_batch(ae, testing_generator, save_dir)
###Output
_____no_output_____
###Markdown
with Randomness
###Code
from training.generators.image_generation_testing import generate_images_like_a_batch
from utils.data_and_files.file_utils import create_if_not_exist
save_dir = os.path.join(experiments_dir, 'generate_training_images_like_a_batch_dir')
create_if_not_exist(save_dir)
generate_images_like_a_batch(ae, training_generator, save_dir)
from utils.data_and_files.file_utils import create_if_not_exist
save_dir = os.path.join(experiments_dir, 'generate_testing_images_like_a_batch_dir')
create_if_not_exist(save_dir)
generate_images_like_a_batch(ae, testing_generator, save_dir)
###Output
_____no_output_____
###Markdown
Complete Randomness
###Code
from training.generators.image_generation_testing import generate_images_randomly
from utils.data_and_files.file_utils import create_if_not_exist
save_dir = os.path.join(experiments_dir, 'random_synthetic_dir')
create_if_not_exist(save_dir)
generate_images_randomly(ae, save_dir)
from training.generators.image_generation_testing import interpolate_a_batch
from utils.data_and_files.file_utils import create_if_not_exist
save_dir = os.path.join(experiments_dir, 'interpolate_dir')
create_if_not_exist(save_dir)
interpolate_a_batch(ae, testing_generator, save_dir)
###Output
100%|██████████| 15/15 [00:00<00:00, 19.90it/s]
|
lab05-Logistic Classifier.ipynb | ###Markdown
Logistic Classifier
> Mainly used when the outcome must be decided as Pass or Fail
> e.g.
> - telling spam mail (1) apart from normal mail (0)
> - Facebook feeds you are likely to find interesting (1) vs. not interesting (0)
> - whether a card transaction matches the usual spending pattern (0) or not (1)

sigmoid
$g(z) = \frac{1}{(1 + e^{-z})}$

Logistic Hypothesis
$H(X) = \frac{1}{1 + e^{-W^{T}X}}$

New cost function for logistic
$cost(W) = \frac{1}{m}\sum_{i=1}^{m}c(H(x),y)$
$c(H(x),y) = \left(\begin{array}{c} -\log(H(x)) : y = 1 \\ -\log(1 - H(x)) : y = 0 \end{array}\right)$
y == 1:
- H(x) = 1 -> -log(H(x)) = 0
- H(x) = 0 -> -log(H(x)) = infinity

y == 0:
- H(x) = 0 -> -log(1 - H(x)) = 0
- H(x) = 1 -> -log(1 - H(x)) = infinity

$c(H(x),y) = -y\log(H(x))-(1-y)\log(1 - H(x))$
$cost(W) = -\frac{1}{m}\sum_{i=1}^{m}\left[y\log(H(x))+(1-y)\log(1 - H(x))\right]$
```python
cost = tf.reduce_mean(-tf.reduce_sum(Y*tf.log(hypothesis) + (1-Y)*tf.log(1-hypothesis)))
```

Minimize
$W := W - \alpha\frac{\partial}{\partial W}cost(W)$
```python
a = tf.Variable(0.1)
optimizer = tf.train.GradientDescentOptimizer(a)
train = optimizer.minimize(cost)
```
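As a quick numeric illustration (added here, separate from the TensorFlow graph built below), the per-example cost $-y\log(h)-(1-y)\log(1-h)$ is near zero for a confident correct prediction and very large for a confident wrong one:

```python
import numpy as np

def cost(h, y):
    # per-example logistic loss, h = H(x) in (0, 1)
    return -y * np.log(h) - (1 - y) * np.log(1 - h)

print(cost(0.999, 1), cost(0.001, 1))  # ~0.001 vs ~6.9: a confident wrong prediction is punished hard
print(cost(0.001, 0), cost(0.999, 0))  # same behavior for y = 0
```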
###Code
import tensorflow as tf
###Output
_____no_output_____
###Markdown
Initialize Variables
###Code
x_data = [[1, 2], [2, 3], [3, 1], [4, 3], [5, 3], [6, 2]]
y_data = [[0], [0], [0], [1], [1], [1]]
X = tf.placeholder(tf.float32, shape=[None, 2])
Y = tf.placeholder(tf.float32, shape=[None, 1])
W = tf.Variable(tf.random_normal([2, 1]), name='weight')
b = tf.Variable(tf.random_normal([1], name='bias'))
###Output
_____no_output_____
###Markdown
Hypothesis $$g(z) = \frac{1}{(1 + e^{-z})}$$
###Code
hypothesis = tf.sigmoid(tf.matmul(X, W) + b)
###Output
_____no_output_____
###Markdown
Cost $$cost(W) = -\frac{1}{m}\sum_{i=1}^{m}\left[y\log(H(x))+(1-y)\log(1 - H(x))\right]$$
###Code
cost = -tf.reduce_mean(Y * tf.log(hypothesis) + (1 - Y) * tf.log(1 - hypothesis))
###Output
_____no_output_____
###Markdown
Minimize $W := W - \alpha\frac{\partial}{\partial W}cost(W)$
###Code
train = tf.train.GradientDescentOptimizer(learning_rate=0.01).minimize(cost)
###Output
_____no_output_____
###Markdown
Accuracy computation
True if hypothesis > 0.5, else False
###Code
predicted = tf.cast(hypothesis > 0.5, dtype=tf.float32)
accuracy = tf.reduce_mean(tf.cast(tf.equal(predicted, Y), dtype=tf.float32))
###Output
_____no_output_____
###Markdown
Launch graph
###Code
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for step in range(10001):
cost_val, _ = sess.run([cost, train], feed_dict={X: x_data, Y: y_data})
if step % 200 == 0:
print(step, cost_val)
h, c, a = sess.run([hypothesis, predicted, accuracy],
feed_dict={X: x_data, Y: y_data})
print("\nHypothesis: \n", h, "\nCorrect (Y): \n", c, "\nAccuracy: \n", a)
###Output
0 1.49585
200 0.791276
400 0.660343
600 0.62799
800 0.609996
1000 0.595703
1200 0.583445
1400 0.572775
1600 0.563442
1800 0.555251
2000 0.548038
2200 0.541665
2400 0.536015
2600 0.530987
2800 0.526497
3000 0.522474
3200 0.518858
3400 0.515597
3600 0.512647
3800 0.50997
4000 0.507535
4200 0.505313
4400 0.503281
4600 0.501418
4800 0.499706
5000 0.49813
5200 0.496675
5400 0.49533
5600 0.494083
5800 0.492927
6000 0.491852
6200 0.490851
6400 0.489917
6600 0.489046
6800 0.488231
7000 0.487467
7200 0.486752
7400 0.48608
7600 0.485449
7800 0.484855
8000 0.484296
8200 0.483769
8400 0.483272
8600 0.482802
8800 0.482358
9000 0.481938
9200 0.48154
9400 0.481163
9600 0.480806
9800 0.480467
10000 0.480145
Hypothesis:
[[ 0.41688105]
[ 0.92780554]
[ 0.23023723]
[ 0.94172376]
[ 0.18945585]
[ 0.76685083]
[ 0.93775189]
[ 0.58884984]
[ 0.26096523]
[ 0.53974587]
[ 0.71116656]
[ 0.17219089]
[ 0.28684005]
[ 0.29494804]
[ 0.75023347]
[ 0.45227915]
[ 0.73633605]
[ 0.85055715]
[ 0.81413311]
[ 0.5944863 ]
[ 0.68127936]
[ 0.10647288]
[ 0.6457361 ]
[ 0.64767402]
[ 0.36581579]
[ 0.93220043]
[ 0.52154273]
[ 0.65540928]
[ 0.70753211]
[ 0.44544899]
[ 0.94613391]
[ 0.87687635]
[ 0.5818994 ]
[ 0.83210534]
[ 0.36582404]
[ 0.65295577]
[ 0.8315475 ]
[ 0.58469158]
[ 0.42596531]
[ 0.37955987]
[ 0.82568109]
[ 0.14640802]
[ 0.41131675]
[ 0.06419802]
[ 0.60116422]
[ 0.93377 ]
[ 0.70040369]
[ 0.71719563]
[ 0.93373656]
[ 0.92876303]
[ 0.92602545]
[ 0.23068863]
[ 0.36059928]
[ 0.96465999]
[ 0.19774053]
[ 0.49510455]
[ 0.14952551]
[ 0.69030654]
[ 0.88374609]
[ 0.50717384]
[ 0.95442462]
[ 0.71199924]
[ 0.66868567]
[ 0.84752405]
[ 0.60102111]
[ 0.61586398]
[ 0.953704 ]
[ 0.67344397]
[ 0.85285628]
[ 0.65797621]
[ 0.26553506]
[ 0.69858801]
[ 0.9194811 ]
[ 0.92445129]
[ 0.88727981]
[ 0.78719646]
[ 0.41099364]
[ 0.86356157]
[ 0.88895428]
[ 0.92236131]
[ 0.86361796]
[ 0.81536293]
[ 0.3695488 ]
[ 0.81395257]
[ 0.53031045]
[ 0.86269563]
[ 0.37603334]
[ 0.89474708]
[ 0.9486714 ]
[ 0.75579691]
[ 0.79857856]
[ 0.68727791]
[ 0.72039884]
[ 0.55935055]
[ 0.90085405]
[ 0.97511351]
[ 0.88452196]
[ 0.60451883]
[ 0.2780287 ]
[ 0.66095662]
[ 0.63736236]
[ 0.95662051]
[ 0.79364526]
[ 0.77384228]
[ 0.89705855]
[ 0.67419505]
[ 0.91762608]
[ 0.79621458]
[ 0.4728826 ]
[ 0.34243372]
[ 0.92973763]
[ 0.88001245]
[ 0.41719416]
[ 0.47349927]
[ 0.64158934]
[ 0.84770697]
[ 0.8636961 ]
[ 0.91863286]
[ 0.13758755]
[ 0.7205714 ]
[ 0.84761429]
[ 0.65321475]
[ 0.62845397]
[ 0.80545974]
[ 0.68755829]
[ 0.83487433]
[ 0.8103947 ]
[ 0.64923054]
[ 0.47755244]
[ 0.43228 ]
[ 0.4209125 ]
[ 0.7716518 ]
[ 0.93131751]
[ 0.81632668]
[ 0.78177309]
[ 0.83836132]
[ 0.46655175]
[ 0.78673708]
[ 0.76342887]
[ 0.72205687]
[ 0.87274277]
[ 0.63704199]
[ 0.54937005]
[ 0.69729871]
[ 0.90687478]
[ 0.76730865]
[ 0.47537273]
[ 0.92738354]
[ 0.64251423]
[ 0.79953361]
[ 0.2606729 ]
[ 0.40188769]
[ 0.1078181 ]
[ 0.2345321 ]
[ 0.9129644 ]
[ 0.87585253]
[ 0.93689966]
[ 0.10250381]
[ 0.52664429]
[ 0.76684248]
[ 0.5804764 ]
[ 0.87140125]
[ 0.43466336]
[ 0.79702759]
[ 0.59995192]
[ 0.66741455]
[ 0.74043024]
[ 0.85479999]
[ 0.77565068]
[ 0.59889734]
[ 0.89007121]
[ 0.88060027]
[ 0.94889635]
[ 0.22037254]
[ 0.82936317]
[ 0.22137755]
[ 0.39365697]
[ 0.42324334]
[ 0.8874923 ]
[ 0.63698387]
[ 0.92210776]
[ 0.91408855]
[ 0.61357868]
[ 0.13696407]
[ 0.1994752 ]
[ 0.63420999]
[ 0.73333013]
[ 0.62060595]
[ 0.84908366]
[ 0.60839683]
[ 0.36654106]
[ 0.20685983]
[ 0.89521915]
[ 0.37489223]
[ 0.88293797]
[ 0.89232886]
[ 0.72728157]
[ 0.62568337]
[ 0.61747289]
[ 0.57982975]
[ 0.70287895]
[ 0.94215912]
[ 0.76238394]
[ 0.81216806]
[ 0.12779729]
[ 0.3269749 ]
[ 0.90582794]
[ 0.20096722]
[ 0.93331271]
[ 0.27322212]
[ 0.25023347]
[ 0.43161604]
[ 0.68655264]
[ 0.19985256]
[ 0.74940163]
[ 0.70891494]
[ 0.81639183]
[ 0.68044204]
[ 0.15308915]
[ 0.39564654]
[ 0.71900153]
[ 0.53045684]
[ 0.92157561]
[ 0.93155694]
[ 0.69254947]
[ 0.3731491 ]
[ 0.04688985]
[ 0.61255288]
[ 0.36612865]
[ 0.44457796]
[ 0.94912106]
[ 0.64980388]
[ 0.94612008]
[ 0.23031169]
[ 0.12675631]
[ 0.26495376]
[ 0.77176148]
[ 0.91699558]
[ 0.87638974]
[ 0.66641921]
[ 0.6706084 ]
[ 0.58202505]
[ 0.14522186]
[ 0.55485344]
[ 0.12255114]
[ 0.55768824]
[ 0.86173689]
[ 0.67953467]
[ 0.72190583]
[ 0.94505256]
[ 0.79951066]
[ 0.74786639]
[ 0.7802974 ]
[ 0.78031576]
[ 0.85344601]
[ 0.3839184 ]
[ 0.382397 ]
[ 0.54623765]
[ 0.81984138]
[ 0.64839602]
[ 0.69853848]
[ 0.80710524]
[ 0.3335292 ]
[ 0.51706284]
[ 0.64693677]
[ 0.64430833]
[ 0.42076701]
[ 0.90489894]
[ 0.7834264 ]
[ 0.93580425]
[ 0.55330044]
[ 0.78563762]
[ 0.80612719]
[ 0.8033421 ]
[ 0.71002251]
[ 0.85861319]
[ 0.34726936]
[ 0.55010706]
[ 0.64924526]
[ 0.36856201]
[ 0.83132565]
[ 0.30253497]
[ 0.61978745]
[ 0.93162006]
[ 0.77877843]
[ 0.84149718]
[ 0.67384273]
[ 0.50646508]
[ 0.66269612]
[ 0.42688239]
[ 0.44598848]
[ 0.65178233]
[ 0.6038453 ]
[ 0.61002326]
[ 0.65314889]
[ 0.21612601]
[ 0.67610306]
[ 0.902879 ]
[ 0.49203655]
[ 0.64948922]
[ 0.75616175]
[ 0.4403995 ]
[ 0.69198573]
[ 0.49258527]
[ 0.71173233]
[ 0.89914507]
[ 0.64397264]
[ 0.67787135]
[ 0.84915948]
[ 0.56416035]
[ 0.84542942]
[ 0.93650901]
[ 0.31568989]
[ 0.78027081]
[ 0.26168489]
[ 0.75909156]
[ 0.80733192]
[ 0.67655456]
[ 0.33369035]
[ 0.79355007]
[ 0.72245938]
[ 0.75183249]
[ 0.17805234]
[ 0.8057009 ]
[ 0.8399297 ]
[ 0.58497745]
[ 0.93601179]
[ 0.27209291]
[ 0.69534093]
[ 0.94548762]
[ 0.20090157]
[ 0.49780881]
[ 0.70371372]
[ 0.31853789]
[ 0.17897026]
[ 0.83336806]
[ 0.91696846]
[ 0.86269218]
[ 0.62958264]
[ 0.68592596]
[ 0.55863142]
[ 0.76118618]
[ 0.82508034]
[ 0.92914057]
[ 0.73622084]
[ 0.77286351]
[ 0.59750003]
[ 0.93698525]
[ 0.93702942]
[ 0.75756758]
[ 0.2765592 ]
[ 0.71002805]
[ 0.33479643]
[ 0.74327731]
[ 0.21152282]
[ 0.24605475]
[ 0.44803137]
[ 0.70435941]
[ 0.40894157]
[ 0.5519411 ]
[ 0.836604 ]
[ 0.66166413]
[ 0.84491599]
[ 0.94167536]
[ 0.75036323]
[ 0.11279627]
[ 0.48117578]
[ 0.83066708]
[ 0.8584168 ]
[ 0.69428927]
[ 0.27839118]
[ 0.86131275]
[ 0.89783347]
[ 0.3011477 ]
[ 0.61115003]
[ 0.84431779]
[ 0.83354867]
[ 0.87007898]
[ 0.907085 ]
[ 0.86790776]
[ 0.91230869]
[ 0.68439883]
[ 0.60255933]
[ 0.55237943]
[ 0.83599937]
[ 0.88098174]
[ 0.23427413]
[ 0.8124221 ]
[ 0.86504394]
[ 0.35120279]
[ 0.64275694]
[ 0.87521744]
[ 0.53467786]
[ 0.92107582]
[ 0.26258454]
[ 0.8422935 ]
[ 0.6134848 ]
[ 0.86919576]
[ 0.36920413]
[ 0.69723743]
[ 0.74249184]
[ 0.79580474]
[ 0.10980438]
[ 0.24644786]
[ 0.68605316]
[ 0.81904554]
[ 0.462547 ]
[ 0.78275895]
[ 0.48475188]
[ 0.3523204 ]
[ 0.855093 ]
[ 0.45878774]
[ 0.93192923]
[ 0.80811137]
[ 0.67173809]
[ 0.91825545]
[ 0.66637123]
[ 0.80446851]
[ 0.31623855]
[ 0.27412194]
[ 0.75298202]
[ 0.42610976]
[ 0.44294885]
[ 0.8953141 ]
[ 0.90438122]
[ 0.90698284]
[ 0.94568402]
[ 0.70592976]
[ 0.89633161]
[ 0.36112753]
[ 0.33763385]
[ 0.49343121]
[ 0.9421382 ]
[ 0.6158734 ]
[ 0.15736508]
[ 0.92570537]
[ 0.80076534]
[ 0.60092765]
[ 0.7914353 ]
[ 0.02436835]
[ 0.91601098]
[ 0.75903177]
[ 0.73981732]
[ 0.73022985]
[ 0.96220386]
[ 0.65815645]
[ 0.75761187]
[ 0.78027958]
[ 0.85192829]
[ 0.16893421]
[ 0.67745399]
[ 0.90236926]
[ 0.60926139]
[ 0.73533267]
[ 0.95066208]
[ 0.84537268]
[ 0.88164985]
[ 0.53562814]
[ 0.76623923]
[ 0.9303515 ]
[ 0.73596132]
[ 0.64909261]
[ 0.30384105]
[ 0.48502284]
[ 0.50920367]
[ 0.58813566]
[ 0.53680295]
[ 0.78570533]
[ 0.56433439]
[ 0.79321194]
[ 0.81072724]
[ 0.72353715]
[ 0.66395468]
[ 0.48516154]
[ 0.56809711]
[ 0.93053412]
[ 0.82343119]
[ 0.26125535]
[ 0.42131215]
[ 0.53494793]
[ 0.11029893]
[ 0.88119888]
[ 0.16111086]
[ 0.8989386 ]
[ 0.87855375]
[ 0.83489221]
[ 0.68732679]
[ 0.89583039]
[ 0.38139057]
[ 0.78650033]
[ 0.93474704]
[ 0.29442632]
[ 0.43835008]
[ 0.8683064 ]
[ 0.87305379]
[ 0.6718027 ]
[ 0.80572456]
[ 0.81910944]
[ 0.80527675]
[ 0.24340804]
[ 0.77131063]
[ 0.90593702]
[ 0.64564538]
[ 0.7799058 ]
[ 0.68395036]
[ 0.82140595]
[ 0.87892538]
[ 0.92485666]
[ 0.57439685]
[ 0.40991479]
[ 0.77133894]
[ 0.74630344]
[ 0.96621341]
[ 0.75717497]
[ 0.70146859]
[ 0.42650709]
[ 0.72715157]
[ 0.9256168 ]
[ 0.94998199]
[ 0.87900072]
[ 0.68014032]
[ 0.67568374]
[ 0.80110192]
[ 0.4788157 ]
[ 0.83401626]
[ 0.81153512]
[ 0.9027068 ]
[ 0.62552541]
[ 0.70974666]
[ 0.91050637]
[ 0.4795897 ]
[ 0.55643672]
[ 0.66060889]
[ 0.7201274 ]
[ 0.66369474]
[ 0.89124542]
[ 0.92153531]
[ 0.19450144]
[ 0.14080302]
[ 0.73907274]
[ 0.53481293]
[ 0.23555659]
[ 0.84665084]
[ 0.90262914]
[ 0.71773005]
[ 0.9335112 ]
[ 0.91407567]
[ 0.74846113]
[ 0.84562498]
[ 0.70856261]
[ 0.54385203]
[ 0.76347214]
[ 0.62175417]
[ 0.11068618]
[ 0.90224892]
[ 0.87110531]
[ 0.72347856]
[ 0.91417146]
[ 0.86645764]
[ 0.87759155]
[ 0.57212168]
[ 0.66680706]
[ 0.89509618]
[ 0.75082403]
[ 0.85109812]
[ 0.90016586]
[ 0.60209113]
[ 0.81139016]
[ 0.82554722]
[ 0.58624405]
[ 0.525473 ]
[ 0.13391536]
[ 0.26182422]
[ 0.82423967]
[ 0.61279625]
[ 0.67976874]
[ 0.5608412 ]
[ 0.93427044]
[ 0.44219014]
[ 0.81831837]
[ 0.28668204]
[ 0.89924109]
[ 0.3428492 ]
[ 0.77880251]
[ 0.60014766]
[ 0.8730334 ]
[ 0.59602767]
[ 0.20751537]
[ 0.79357302]
[ 0.94068676]
[ 0.39674586]
[ 0.91776347]
[ 0.86636245]
[ 0.85630351]
[ 0.81361735]
[ 0.41365433]
[ 0.33237925]
[ 0.71462619]
[ 0.20210628]
[ 0.9491685 ]
[ 0.32214037]
[ 0.9177193 ]
[ 0.87226182]
[ 0.41259056]
[ 0.20904005]
[ 0.69616568]
[ 0.41225082]
[ 0.84340352]
[ 0.70706499]
[ 0.97696459]
[ 0.55870044]
[ 0.62946641]
[ 0.78207392]
[ 0.82117403]
[ 0.08562216]
[ 0.73197067]
[ 0.82493764]
[ 0.84265095]
[ 0.64471585]
[ 0.47835499]
[ 0.61431897]
[ 0.90328753]
[ 0.6549809 ]
[ 0.75893563]
[ 0.82994372]
[ 0.8450138 ]
[ 0.80035049]
[ 0.56688702]
[ 0.8030982 ]
[ 0.89416397]
[ 0.69577032]
[ 0.95171291]
[ 0.77344936]
[ 0.63087052]
[ 0.50281918]
[ 0.84642887]
[ 0.84946364]
[ 0.4766202 ]
[ 0.64413607]
[ 0.22275889]
[ 0.5350703 ]
[ 0.79343444]
[ 0.94550925]
[ 0.8298769 ]
[ 0.71859682]
[ 0.77663803]
[ 0.88062626]
[ 0.49750641]
[ 0.92903239]
[ 0.61374396]
[ 0.85308069]
[ 0.31404302]
[ 0.09966533]
[ 0.26833397]
[ 0.34568974]
[ 0.7027576 ]
[ 0.82879227]
[ 0.58765155]
[ 0.72177035]
[ 0.8095367 ]
[ 0.46866333]
[ 0.37043333]
[ 0.90406108]
[ 0.87452424]
[ 0.40021741]
[ 0.69759929]
[ 0.18904211]
[ 0.39721888]
[ 0.74394727]
[ 0.71887022]
[ 0.89894772]
[ 0.97663373]
[ 0.18111059]
[ 0.70540541]
[ 0.62719041]
[ 0.48221904]
[ 0.71841627]
[ 0.74645579]
[ 0.8913874 ]
[ 0.72877866]
[ 0.5077492 ]
[ 0.67301393]
[ 0.1485285 ]
[ 0.68338221]
[ 0.53615022]
[ 0.9163515 ]
[ 0.54702967]
[ 0.54043573]
[ 0.79322654]
[ 0.6983813 ]
[ 0.4974699 ]
[ 0.74604422]
[ 0.65266776]
[ 0.35060769]
[ 0.63399243]
[ 0.87009937]
[ 0.83354288]
[ 0.58678347]
[ 0.77647477]
[ 0.28912219]
[ 0.84779984]
[ 0.57109576]
[ 0.74321699]
[ 0.40400079]
[ 0.6362564 ]
[ 0.84326041]
[ 0.1734062 ]
[ 0.30902317]
[ 0.79938996]
[ 0.82515514]
[ 0.79369205]
[ 0.90167701]
[ 0.79753965]
[ 0.69758874]
[ 0.71392387]
[ 0.79016072]
[ 0.7016052 ]
[ 0.79089254]
[ 0.46103302]
[ 0.4580946 ]
[ 0.89238876]
[ 0.78310847]
[ 0.63906658]
[ 0.2825163 ]
[ 0.87661988]
[ 0.84001672]
[ 0.83404917]
[ 0.65071118]
[ 0.88138038]
[ 0.86539263]
[ 0.79304379]
[ 0.40781942]
[ 0.88669872]
[ 0.90431464]
[ 0.34388387]
[ 0.15318789]
[ 0.70428842]
[ 0.40796256]
[ 0.81705213]
[ 0.31098726]
[ 0.46660894]
[ 0.46828592]
[ 0.77056372]
[ 0.86519831]
[ 0.13462779]
[ 0.36926594]
[ 0.63014525]
[ 0.49891067]
[ 0.52685046]
[ 0.80040377]
[ 0.16972142]
[ 0.91649461]
[ 0.17601216]
[ 0.85063279]
[ 0.72190869]
[ 0.72615618]
[ 0.81306463]
[ 0.74427438]
[ 0.89711392]]
Correct (Y):
[[ 0.]
[ 1.]
[ 0.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 0.]
[ 0.]
[ 0.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 0.]
[ 1.]
[ 0.]
[ 0.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 0.]
[ 1.]
[ 0.]
[ 0.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 0.]
[ 1.]
[ 1.]
[ 0.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 0.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 0.]
[ 0.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 0.]
[ 0.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 0.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 0.]
[ 1.]
[ 0.]
[ 1.]
[ 0.]
[ 0.]
[ 0.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 0.]
[ 1.]
[ 0.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 0.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 0.]
[ 0.]
[ 1.]
[ 0.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 0.]
[ 1.]
[ 0.]
[ 0.]
[ 0.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 0.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 0.]
[ 0.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 0.]
[ 1.]
[ 0.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 0.]
[ 0.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 0.]
[ 1.]
[ 0.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 0.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 0.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 0.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 0.]
[ 1.]
[ 0.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 0.]
[ 0.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 0.]
[ 0.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 0.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 0.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 0.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 0.]
[ 0.]
[ 1.]
[ 0.]
[ 1.]
[ 0.]
[ 0.]
[ 0.]
[ 1.]
[ 1.]
[ 0.]
[ 0.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]]
Accuracy:
0.768116
###Markdown
Classifying diabetes
###Code
import tensorflow as tf
import numpy as np
xy = np.loadtxt('data/data-03-diabetes.csv', delimiter=',', dtype=np.float32)
x_data = xy[:, 0:-1]
y_data = xy[:, [-1]]
X = tf.placeholder(tf.float32, shape=[None, 8])
Y = tf.placeholder(tf.float32, shape=[None, 1])
W = tf.Variable(tf.random_normal([8, 1]), name='weight')
b = tf.Variable(tf.random_normal([1]), name='bias')
hypothesis = tf.sigmoid(tf.matmul(X, W) + b)
cost = -tf.reduce_mean(Y * tf.log(hypothesis) + (1 - Y) * tf.log(1 - hypothesis))
train = tf.train.GradientDescentOptimizer(learning_rate=0.01).minimize(cost)
predicted = tf.cast(hypothesis > 0.5, dtype=tf.float32)
accuracy = tf.reduce_mean(tf.cast(tf.equal(predicted, Y), dtype=tf.float32))
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for step in range(10001):
cost_val, _ = sess.run([cost, train], feed_dict={X: x_data, Y: y_data})
if step % 200 == 0:
print(step, cost_val)
h, c, a = sess.run([hypothesis, predicted, accuracy],
feed_dict={X: x_data, Y: y_data})
print("\nHypothesis: \n", h, "\nCorrect (Y): \n", c, "\nAccuracy: \n", a)
###Output
0 0.628972
200 0.604527
400 0.592711
600 0.583717
800 0.575783
1000 0.568538
1200 0.561873
1400 0.55573
1600 0.550063
1800 0.544833
2000 0.540003
2200 0.535541
2400 0.531415
2600 0.527597
2800 0.524062
3000 0.520787
3200 0.51775
3400 0.514931
3600 0.512313
3800 0.50988
4000 0.507617
4200 0.50551
4400 0.503547
4600 0.501717
4800 0.50001
5000 0.498416
5200 0.496926
5400 0.495533
5600 0.494229
5800 0.493009
6000 0.491865
6200 0.490792
6400 0.489786
6600 0.488841
6800 0.487953
7000 0.487118
7200 0.486334
7400 0.485595
7600 0.484899
7800 0.484243
8000 0.483625
8200 0.483042
8400 0.482491
8600 0.481972
8800 0.481481
9000 0.481017
9200 0.480578
9400 0.480163
9600 0.479769
9800 0.479397
10000 0.479045
Hypothesis:
[[ 0.38338163]
[ 0.93063539]
[ 0.19092147]
[ 0.94940913]
[ 0.10516249]
[ 0.7806353 ]
[ 0.92639726]
[ 0.49094453]
[ 0.22233228]
[ 0.61301565]
[ 0.77007383]
[ 0.13922971]
[ 0.2986398 ]
[ 0.19893201]
[ 0.7044872 ]
[ 0.46593416]
[ 0.73689806]
[ 0.7357102 ]
[ 0.8349548 ]
[ 0.66893315]
[ 0.70270658]
[ 0.11045787]
[ 0.65909219]
[ 0.63481885]
[ 0.33070722]
[ 0.94458032]
[ 0.56384349]
[ 0.72283584]
[ 0.69775867]
[ 0.47818315]
[ 0.94838119]
[ 0.92236477]
[ 0.58267605]
[ 0.83329904]
[ 0.33525923]
[ 0.64425147]
[ 0.82119751]
[ 0.59894592]
[ 0.40078875]
[ 0.38818845]
[ 0.875916 ]
[ 0.20856522]
[ 0.35180947]
[ 0.04491293]
[ 0.5370115 ]
[ 0.94095159]
[ 0.64210099]
[ 0.65680146]
[ 0.96055222]
[ 0.92460138]
[ 0.93718994]
[ 0.26070401]
[ 0.30810344]
[ 0.95567465]
[ 0.15506102]
[ 0.55289465]
[ 0.14716135]
[ 0.64119399]
[ 0.86502635]
[ 0.47064105]
[ 0.95358747]
[ 0.70492071]
[ 0.62010515]
[ 0.85401636]
[ 0.65004736]
[ 0.68225086]
[ 0.96534747]
[ 0.75004971]
[ 0.86586463]
[ 0.64911866]
[ 0.26272294]
[ 0.75107038]
[ 0.93128425]
[ 0.90013534]
[ 0.89779562]
[ 0.77498585]
[ 0.30235431]
[ 0.86361724]
[ 0.84000993]
[ 0.91233653]
[ 0.89541924]
[ 0.79455656]
[ 0.46835315]
[ 0.84656376]
[ 0.46462288]
[ 0.870583 ]
[ 0.31898507]
[ 0.89864814]
[ 0.94443983]
[ 0.79698181]
[ 0.81152499]
[ 0.69285792]
[ 0.7984497 ]
[ 0.54717922]
[ 0.88866252]
[ 0.9744153 ]
[ 0.86001325]
[ 0.66756964]
[ 0.26385742]
[ 0.62458599]
[ 0.72974783]
[ 0.96915394]
[ 0.77389598]
[ 0.75462025]
[ 0.96521294]
[ 0.63525486]
[ 0.91490626]
[ 0.83970648]
[ 0.48360777]
[ 0.25314283]
[ 0.95030993]
[ 0.86263663]
[ 0.33415604]
[ 0.55085474]
[ 0.63135034]
[ 0.78309864]
[ 0.84324604]
[ 0.9407531 ]
[ 0.11820281]
[ 0.6725964 ]
[ 0.87464261]
[ 0.67792827]
[ 0.62006283]
[ 0.64605814]
[ 0.68143833]
[ 0.82459635]
[ 0.86969286]
[ 0.68781513]
[ 0.4764688 ]
[ 0.33377686]
[ 0.38110825]
[ 0.75412583]
[ 0.94848818]
[ 0.79767448]
[ 0.79151028]
[ 0.82581019]
[ 0.47975922]
[ 0.77977419]
[ 0.79921848]
[ 0.73467827]
[ 0.8527177 ]
[ 0.57479709]
[ 0.49724072]
[ 0.68211651]
[ 0.91436183]
[ 0.78446543]
[ 0.4679859 ]
[ 0.93222094]
[ 0.66787159]
[ 0.79604608]
[ 0.30232963]
[ 0.41866186]
[ 0.08127707]
[ 0.22500233]
[ 0.90495479]
[ 0.87574583]
[ 0.95357132]
[ 0.07667268]
[ 0.59857363]
[ 0.77641773]
[ 0.57025051]
[ 0.85953063]
[ 0.48347804]
[ 0.82051563]
[ 0.55899096]
[ 0.65567607]
[ 0.71917415]
[ 0.91224438]
[ 0.80261695]
[ 0.59433246]
[ 0.88162249]
[ 0.87491554]
[ 0.95530385]
[ 0.21150824]
[ 0.84437603]
[ 0.28145275]
[ 0.36008289]
[ 0.44046211]
[ 0.90937608]
[ 0.60727626]
[ 0.942779 ]
[ 0.90334624]
[ 0.61995566]
[ 0.10263854]
[ 0.15383688]
[ 0.78575575]
[ 0.77105159]
[ 0.67083734]
[ 0.83729672]
[ 0.60168195]
[ 0.31221437]
[ 0.1199839 ]
[ 0.86172086]
[ 0.40976807]
[ 0.86853266]
[ 0.90465295]
[ 0.72736996]
[ 0.56706554]
[ 0.62876642]
[ 0.60611892]
[ 0.67937839]
[ 0.9578557 ]
[ 0.75834602]
[ 0.82270962]
[ 0.11003106]
[ 0.42192164]
[ 0.90989137]
[ 0.19864871]
[ 0.92681611]
[ 0.26770043]
[ 0.28762978]
[ 0.36850506]
[ 0.71973658]
[ 0.15422536]
[ 0.72944444]
[ 0.71399587]
[ 0.83302373]
[ 0.62936825]
[ 0.09940259]
[ 0.46829423]
[ 0.67249441]
[ 0.47623482]
[ 0.93786669]
[ 0.95144832]
[ 0.68122536]
[ 0.26630539]
[ 0.0346668 ]
[ 0.62550128]
[ 0.38771078]
[ 0.41931275]
[ 0.9698599 ]
[ 0.61973262]
[ 0.95596051]
[ 0.19221719]
[ 0.12700622]
[ 0.27608889]
[ 0.80019426]
[ 0.91171789]
[ 0.89162785]
[ 0.60128134]
[ 0.60819972]
[ 0.60441589]
[ 0.1513886 ]
[ 0.49759224]
[ 0.10155413]
[ 0.50447452]
[ 0.89721763]
[ 0.56611955]
[ 0.77644229]
[ 0.96552706]
[ 0.76747054]
[ 0.70644385]
[ 0.75618988]
[ 0.73246461]
[ 0.85093808]
[ 0.29698536]
[ 0.37088686]
[ 0.49791968]
[ 0.81881666]
[ 0.66182178]
[ 0.64951396]
[ 0.83564973]
[ 0.25622258]
[ 0.46658528]
[ 0.55955946]
[ 0.60361308]
[ 0.42265376]
[ 0.9114365 ]
[ 0.81719708]
[ 0.95581228]
[ 0.48381847]
[ 0.80496949]
[ 0.76682037]
[ 0.7974146 ]
[ 0.72719628]
[ 0.86098635]
[ 0.29040614]
[ 0.54937905]
[ 0.71535063]
[ 0.39597663]
[ 0.86142468]
[ 0.30229881]
[ 0.68065923]
[ 0.9332428 ]
[ 0.79460227]
[ 0.88415158]
[ 0.63953042]
[ 0.54163396]
[ 0.5685696 ]
[ 0.30745339]
[ 0.40108395]
[ 0.65666288]
[ 0.65764189]
[ 0.59895062]
[ 0.65425098]
[ 0.15861303]
[ 0.64661223]
[ 0.92938131]
[ 0.54432499]
[ 0.68378949]
[ 0.78379714]
[ 0.41730615]
[ 0.68334097]
[ 0.48583964]
[ 0.72244245]
[ 0.89333475]
[ 0.62817478]
[ 0.72556233]
[ 0.82038617]
[ 0.57727927]
[ 0.85677397]
[ 0.96063161]
[ 0.30793202]
[ 0.77203256]
[ 0.26194754]
[ 0.73606038]
[ 0.81499958]
[ 0.64843023]
[ 0.41297379]
[ 0.79541218]
[ 0.75637001]
[ 0.75037348]
[ 0.13497207]
[ 0.84847403]
[ 0.85189193]
[ 0.5914641 ]
[ 0.93402714]
[ 0.19855151]
[ 0.70938462]
[ 0.95167071]
[ 0.16783412]
[ 0.41611505]
[ 0.71227884]
[ 0.33001798]
[ 0.17152151]
[ 0.86193693]
[ 0.94100446]
[ 0.86348069]
[ 0.67242604]
[ 0.65423113]
[ 0.61808062]
[ 0.70978427]
[ 0.8109073 ]
[ 0.93729371]
[ 0.74050361]
[ 0.78207958]
[ 0.64272326]
[ 0.94921696]
[ 0.94527525]
[ 0.76187617]
[ 0.29894933]
[ 0.65676868]
[ 0.22596633]
[ 0.7476424 ]
[ 0.22033548]
[ 0.20441313]
[ 0.41100416]
[ 0.80437893]
[ 0.39585587]
[ 0.55138522]
[ 0.80605173]
[ 0.65948999]
[ 0.82512373]
[ 0.96576202]
[ 0.85834056]
[ 0.12268721]
[ 0.45851955]
[ 0.81542653]
[ 0.84652013]
[ 0.63638651]
[ 0.26037616]
[ 0.89859295]
[ 0.89682299]
[ 0.25657842]
[ 0.72581232]
[ 0.8765763 ]
[ 0.82957488]
[ 0.87213093]
[ 0.92251259]
[ 0.89289981]
[ 0.91152543]
[ 0.6846922 ]
[ 0.67032003]
[ 0.58843309]
[ 0.85555577]
[ 0.88696563]
[ 0.20055243]
[ 0.80367935]
[ 0.89231938]
[ 0.36170122]
[ 0.62413806]
[ 0.87381113]
[ 0.46593237]
[ 0.93240684]
[ 0.22689694]
[ 0.82292795]
[ 0.67842591]
[ 0.87474889]
[ 0.30258715]
[ 0.59960747]
[ 0.74504846]
[ 0.77114344]
[ 0.10181306]
[ 0.1951617 ]
[ 0.74312341]
[ 0.84120196]
[ 0.51176429]
[ 0.8237142 ]
[ 0.41146296]
[ 0.36999047]
[ 0.86622393]
[ 0.47134957]
[ 0.94427502]
[ 0.81502849]
[ 0.73928529]
[ 0.932558 ]
[ 0.62332046]
[ 0.78193206]
[ 0.30113089]
[ 0.27381781]
[ 0.6932646 ]
[ 0.39544466]
[ 0.52234411]
[ 0.92260706]
[ 0.90136021]
[ 0.92697704]
[ 0.96116477]
[ 0.73115849]
[ 0.90673882]
[ 0.25751764]
[ 0.32439622]
[ 0.49551997]
[ 0.95054448]
[ 0.66755539]
[ 0.20834474]
[ 0.93569303]
[ 0.79452819]
[ 0.58173108]
[ 0.75791681]
[ 0.02163063]
[ 0.93837112]
[ 0.79236698]
[ 0.72579807]
[ 0.75124645]
[ 0.97008371]
[ 0.66249651]
[ 0.73196137]
[ 0.77054691]
[ 0.82528454]
[ 0.10477228]
[ 0.6143716 ]
[ 0.91349995]
[ 0.63047373]
[ 0.7688657 ]
[ 0.95364678]
[ 0.85555637]
[ 0.89743257]
[ 0.65187949]
[ 0.73273289]
[ 0.91676652]
[ 0.71255565]
[ 0.57721919]
[ 0.29947287]
[ 0.5090481 ]
[ 0.46004102]
[ 0.51033151]
[ 0.59504604]
[ 0.76382411]
[ 0.58891952]
[ 0.83982927]
[ 0.84610701]
[ 0.75276309]
[ 0.71525866]
[ 0.45911908]
[ 0.60182095]
[ 0.93021095]
[ 0.85099584]
[ 0.17653745]
[ 0.36209369]
[ 0.45326146]
[ 0.09353124]
[ 0.8892135 ]
[ 0.15776426]
[ 0.905797 ]
[ 0.91341186]
[ 0.82253706]
[ 0.71407616]
[ 0.87991649]
[ 0.36535841]
[ 0.7818253 ]
[ 0.94992959]
[ 0.28242394]
[ 0.45702097]
[ 0.92279005]
[ 0.8718003 ]
[ 0.62908554]
[ 0.79873705]
[ 0.82963383]
[ 0.81914401]
[ 0.28811377]
[ 0.76203722]
[ 0.89188451]
[ 0.66035903]
[ 0.7567969 ]
[ 0.65020436]
[ 0.80709195]
[ 0.87397957]
[ 0.91925788]
[ 0.58519632]
[ 0.4786756 ]
[ 0.69403577]
[ 0.81863159]
[ 0.97531736]
[ 0.80446941]
[ 0.63935953]
[ 0.37683114]
[ 0.67077488]
[ 0.91976678]
[ 0.95980823]
[ 0.90372306]
[ 0.66133654]
[ 0.64673251]
[ 0.79895175]
[ 0.44006929]
[ 0.84454459]
[ 0.78053373]
[ 0.90374035]
[ 0.56496698]
[ 0.76066375]
[ 0.89849073]
[ 0.50570381]
[ 0.66920352]
[ 0.65534389]
[ 0.740408 ]
[ 0.68593448]
[ 0.92853934]
[ 0.94485343]
[ 0.22686143]
[ 0.12824658]
[ 0.73593467]
[ 0.61084569]
[ 0.36941612]
[ 0.85079134]
[ 0.91509092]
[ 0.76827788]
[ 0.94052768]
[ 0.91710669]
[ 0.75481653]
[ 0.84046608]
[ 0.71008104]
[ 0.44037017]
[ 0.75326961]
[ 0.61043644]
[ 0.07768288]
[ 0.91298413]
[ 0.86878109]
[ 0.74236941]
[ 0.91162765]
[ 0.88517052]
[ 0.86694831]
[ 0.58860397]
[ 0.63323402]
[ 0.89642721]
[ 0.84049416]
[ 0.84763563]
[ 0.89417446]
[ 0.68659484]
[ 0.75195873]
[ 0.77231783]
[ 0.59640056]
[ 0.47766829]
[ 0.11562827]
[ 0.26901141]
[ 0.79336125]
[ 0.59834456]
[ 0.65884125]
[ 0.53593582]
[ 0.92734224]
[ 0.36770019]
[ 0.80982155]
[ 0.35196266]
[ 0.88881862]
[ 0.32120648]
[ 0.81187785]
[ 0.62321264]
[ 0.86733091]
[ 0.60151637]
[ 0.22359447]
[ 0.77015966]
[ 0.89673048]
[ 0.37848917]
[ 0.88858455]
[ 0.90853006]
[ 0.83888292]
[ 0.83019489]
[ 0.4318569 ]
[ 0.27026826]
[ 0.65897352]
[ 0.21476029]
[ 0.95965332]
[ 0.31829849]
[ 0.92271554]
[ 0.85049772]
[ 0.35226649]
[ 0.23127452]
[ 0.72268611]
[ 0.39036956]
[ 0.85615861]
[ 0.78900075]
[ 0.98198557]
[ 0.59148282]
[ 0.55011433]
[ 0.83640772]
[ 0.8597126 ]
[ 0.10144567]
[ 0.70569932]
[ 0.80627769]
[ 0.89382708]
[ 0.61288333]
[ 0.46793097]
[ 0.62521219]
[ 0.8974694 ]
[ 0.61902934]
[ 0.78757435]
[ 0.8136456 ]
[ 0.88886136]
[ 0.76183999]
[ 0.52318954]
[ 0.80894142]
[ 0.92169732]
[ 0.71678191]
[ 0.96813273]
[ 0.83284169]
[ 0.60943824]
[ 0.51607317]
[ 0.83012086]
[ 0.87663519]
[ 0.44939911]
[ 0.6970132 ]
[ 0.14060655]
[ 0.58578849]
[ 0.74625534]
[ 0.94461602]
[ 0.8231715 ]
[ 0.74533242]
[ 0.73081273]
[ 0.89428389]
[ 0.36052632]
[ 0.93359232]
[ 0.67111385]
[ 0.90049964]
[ 0.32239446]
[ 0.09238458]
[ 0.37183908]
[ 0.36040202]
[ 0.63142908]
[ 0.86986101]
[ 0.60353214]
[ 0.69194221]
[ 0.7734046 ]
[ 0.48821923]
[ 0.34297279]
[ 0.89119935]
[ 0.93344915]
[ 0.48472461]
[ 0.71047121]
[ 0.15957622]
[ 0.43269509]
[ 0.6745435 ]
[ 0.62384766]
[ 0.89571315]
[ 0.97884893]
[ 0.13851734]
[ 0.65836394]
[ 0.65855283]
[ 0.50228631]
[ 0.74741715]
[ 0.73715377]
[ 0.8551423 ]
[ 0.78183234]
[ 0.54022133]
[ 0.71141559]
[ 0.20514016]
[ 0.70146286]
[ 0.50805104]
[ 0.91214073]
[ 0.59229445]
[ 0.5699392 ]
[ 0.75856155]
[ 0.77125776]
[ 0.45367438]
[ 0.77550095]
[ 0.67092496]
[ 0.40494433]
[ 0.56400138]
[ 0.8967191 ]
[ 0.84048367]
[ 0.49525532]
[ 0.68213773]
[ 0.26657701]
[ 0.84603792]
[ 0.52280134]
[ 0.79270381]
[ 0.29683214]
[ 0.56301409]
[ 0.85484338]
[ 0.10250711]
[ 0.34252101]
[ 0.79502881]
[ 0.81382275]
[ 0.79339433]
[ 0.9268328 ]
[ 0.78880095]
[ 0.71457642]
[ 0.76757729]
[ 0.85567635]
[ 0.70220971]
[ 0.82801497]
[ 0.48099971]
[ 0.54884344]
[ 0.86124712]
[ 0.829826 ]
[ 0.6871298 ]
[ 0.3376472 ]
[ 0.86627781]
[ 0.86489189]
[ 0.79531747]
[ 0.72092807]
[ 0.87830126]
[ 0.84954655]
[ 0.79429799]
[ 0.40572709]
[ 0.84872818]
[ 0.90253091]
[ 0.40442362]
[ 0.16686068]
[ 0.75140506]
[ 0.46060428]
[ 0.8426252 ]
[ 0.2899245 ]
[ 0.40466154]
[ 0.46188676]
[ 0.81057042]
[ 0.85221988]
[ 0.12473804]
[ 0.35398498]
[ 0.72966623]
[ 0.55509305]
[ 0.47764382]
[ 0.81243271]
[ 0.18860394]
[ 0.92171514]
[ 0.13298267]
[ 0.82895029]
[ 0.72764015]
[ 0.70952886]
[ 0.83735538]
[ 0.70708048]
[ 0.90327692]]
Correct (Y):
[[ 0.]
[ 1.]
[ 0.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 0.]
[ 0.]
[ 1.]
[ 1.]
[ 0.]
[ 0.]
[ 0.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 0.]
[ 1.]
[ 0.]
[ 0.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 0.]
[ 1.]
[ 0.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 0.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 0.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 0.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 0.]
[ 0.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 0.]
[ 0.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 0.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 0.]
[ 1.]
[ 0.]
[ 1.]
[ 0.]
[ 0.]
[ 0.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 0.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 0.]
[ 1.]
[ 0.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 0.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 0.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 0.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 0.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 0.]
[ 0.]
[ 1.]
[ 0.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 0.]
[ 1.]
[ 0.]
[ 0.]
[ 0.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 0.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 0.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 0.]
[ 0.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 0.]
[ 0.]
[ 0.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 0.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 0.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 0.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 0.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 0.]
[ 1.]
[ 0.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 0.]
[ 0.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 0.]
[ 0.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 0.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 0.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 0.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 0.]
[ 0.]
[ 1.]
[ 0.]
[ 1.]
[ 0.]
[ 0.]
[ 0.]
[ 1.]
[ 1.]
[ 0.]
[ 0.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[ 0.]
[ 1.]
[ 0.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]
[ 1.]]
Accuracy:
0.768116
|
Notebooks/Heart_Disease_Prediction.ipynb | ###Markdown
Hyperparameter Tuning
###Code
from sklearn.model_selection import RandomizedSearchCV
classifier = RandomForestClassifier(n_jobs = -1)
from scipy.stats import randint
param_dist={'max_depth':[3,5,10,None],
'n_estimators':[10,100,200,300,400,500],
'max_features':randint(1,31),
'criterion':['gini','entropy'],
'bootstrap':[True,False],
'min_samples_leaf':randint(1,31),
}
search_clfr = RandomizedSearchCV(classifier, param_distributions = param_dist, n_jobs=-1, n_iter = 40, cv = 9)
search_clfr.fit(X_train, y_train)
params = search_clfr.best_params_
score = search_clfr.best_score_
print(params)
print(score)
# Refit a random forest using the hyperparameters found by the randomized search above.
classifier = RandomForestClassifier(n_jobs=-1, n_estimators=400, bootstrap=False, criterion='gini',
                                    max_depth=5, max_features=3, min_samples_leaf=7)
classifier.fit(X_train, y_train)
confusion_matrix(y_test, classifier.predict(X_test))
print(f"Accuracy is {round(accuracy_score(y_test, classifier.predict(X_test))*100,2)}%")
import pickle
pickle.dump(classifier, open('heart.pkl', 'wb'))
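# Illustrative usage sketch (kept commented out, not part of the original run):
# the saved model could later be reloaded and used for prediction like this.
# loaded_clf = pickle.load(open('heart.pkl', 'rb'))
# predictions = loaded_clf.predict(X_test)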
###Output
_____no_output_____ |
Python/GoldBagDetector/gold_bag_detector_gcolab.ipynb | ###Markdown
Setup
**For full setup read all 'NOTE:' comments and that should be enough**

NOTE: press ctrl+shift+I and paste the following into the browser console to keep the Colab session connected:

function ConnectButton(){ console.log("Connect pushed"); document.querySelector("top-toolbar > colab-connect-button").shadowRoot.querySelector("connect").click() }
setInterval(ConnectButton, 60000);

NOTE: AFTER RESETTING THE RUNTIME, ENABLE GPU ACCELERATION!
###Code
# Mount Drive
from google.colab import drive
drive.mount('/content/drive')
######### GLOBAL VIARIABLES SETUP ##########
##### NOTE: change directories to right ones #####
DIR_DETECTOR = "/content/drive/My Drive/Colab Content/GoldBagDetector/" # Will cd here
DIR_DATASET = "dataset-combined/"
PRETRAINED_MODEL = DIR_DATASET + "models/detection_model-ex-020--loss-0001.557.h5"
# PRETRAINED_MODEL = DIR_DATASET + "models/detection_model-ex-104--loss-2.93.h5"
global_id = 0  # Changed at the end of the program to stop the worker threads
print("Version should be 3.12.2 or higher. Otherwise upgrade protobuf")
!pip show protobuf | grep Version
%cd $DIR_DETECTOR
print("Dir check below. Good is no output:")
import os
if not os.path.exists(DIR_DETECTOR):
print("Path DIR_DETECTOR does not exist")
if not os.path.exists(DIR_DATASET):
print("Path DIR_DATASET does not exist")
if not os.path.exists(PRETRAINED_MODEL):
print("Path PRETRAINED_MODEL does not exist")
###Output
Drive already mounted at /content/drive; to attempt to forcibly remount, call drive.mount("/content/drive", force_remount=True).
Version should be 3.12.2 or higher. Otherwise upgrade protobuf
Version: 4.0.0rc1
/content/drive/My Drive/Colab Content/GoldBagDetector
Dir check below. Good is no output:
###Markdown
On Error: __init__() got an unexpected keyword argument 'read_only_collections'
If validation does not work, reinstall protobuf and restart the runtime.
###Code
!pip uninstall --yes protobuf
!pip install 'protobuf>=3.0.0a3'
###Output
Uninstalling protobuf-3.12.2:
Successfully uninstalled protobuf-3.12.2
Collecting protobuf>=3.0.0a3
[?25l Downloading https://files.pythonhosted.org/packages/99/3c/1ed0b084a4d69e6d95972a1ecdafe015707503372f3e54a51a7f2dfba272/protobuf-4.0.0rc1-cp36-cp36m-manylinux1_x86_64.whl (1.3MB)
[K |████████████████████████████████| 1.3MB 3.5MB/s
[?25hRequirement already satisfied: six>=1.9 in /usr/local/lib/python3.6/dist-packages (from protobuf>=3.0.0a3) (1.12.0)
Requirement already satisfied: setuptools in /usr/local/lib/python3.6/dist-packages (from protobuf>=3.0.0a3) (49.1.0)
[31mERROR: tensorflow-metadata 0.22.2 has requirement protobuf<4,>=3.7, but you'll have protobuf 4.0.0rc1 which is incompatible.[0m
Installing collected packages: protobuf
Successfully installed protobuf-4.0.0rc1
###Markdown
Auth *pydrive* and install tensorflow
###Code
%cd $DIR_DETECTOR
# Init Drive autocleaner
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from google.colab import auth
from oauth2client.client import GoogleCredentials
auth.authenticate_user()
gauth = GoogleAuth()
gauth.credentials = GoogleCredentials.get_application_default()
my_drive = GoogleDrive(gauth)
!pip3 install tensorflow-gpu==1.13.1
!pip3 install imageai --upgrade
import tensorflow as tf
if tf.test.gpu_device_name() != "/device:GPU:0":
raise EnvironmentError("NO GPU ACCELERATION ENABLED")
else:
print("GPU GOOD")
###Output
/content/drive/My Drive/Colab Content/GoldBagDetector
Collecting tensorflow-gpu==1.13.1
[?25l Downloading https://files.pythonhosted.org/packages/7b/b1/0ad4ae02e17ddd62109cd54c291e311c4b5fd09b4d0678d3d6ce4159b0f0/tensorflow_gpu-1.13.1-cp36-cp36m-manylinux1_x86_64.whl (345.2MB)
[K |████████████████████████████████| 345.2MB 47kB/s
[?25hRequirement already satisfied: keras-applications>=1.0.6 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==1.13.1) (1.0.8)
Requirement already satisfied: absl-py>=0.1.6 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==1.13.1) (0.9.0)
Requirement already satisfied: termcolor>=1.1.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==1.13.1) (1.1.0)
Collecting tensorboard<1.14.0,>=1.13.0
[?25l Downloading https://files.pythonhosted.org/packages/0f/39/bdd75b08a6fba41f098b6cb091b9e8c7a80e1b4d679a581a0ccd17b10373/tensorboard-1.13.1-py3-none-any.whl (3.2MB)
[K |████████████████████████████████| 3.2MB 27.7MB/s
[?25hRequirement already satisfied: grpcio>=1.8.6 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==1.13.1) (1.30.0)
Requirement already satisfied: keras-preprocessing>=1.0.5 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==1.13.1) (1.1.2)
Requirement already satisfied: protobuf>=3.6.1 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==1.13.1) (4.0.0rc1)
Requirement already satisfied: numpy>=1.13.3 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==1.13.1) (1.18.5)
Collecting tensorflow-estimator<1.14.0rc0,>=1.13.0
[?25l Downloading https://files.pythonhosted.org/packages/bb/48/13f49fc3fa0fdf916aa1419013bb8f2ad09674c275b4046d5ee669a46873/tensorflow_estimator-1.13.0-py2.py3-none-any.whl (367kB)
[K |████████████████████████████████| 368kB 39.8MB/s
[?25hRequirement already satisfied: six>=1.10.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==1.13.1) (1.12.0)
Requirement already satisfied: gast>=0.2.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==1.13.1) (0.3.3)
Requirement already satisfied: astor>=0.6.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==1.13.1) (0.8.1)
Requirement already satisfied: wheel>=0.26 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==1.13.1) (0.34.2)
Requirement already satisfied: h5py in /usr/local/lib/python3.6/dist-packages (from keras-applications>=1.0.6->tensorflow-gpu==1.13.1) (2.10.0)
Requirement already satisfied: werkzeug>=0.11.15 in /usr/local/lib/python3.6/dist-packages (from tensorboard<1.14.0,>=1.13.0->tensorflow-gpu==1.13.1) (1.0.1)
Requirement already satisfied: markdown>=2.6.8 in /usr/local/lib/python3.6/dist-packages (from tensorboard<1.14.0,>=1.13.0->tensorflow-gpu==1.13.1) (3.2.2)
Requirement already satisfied: setuptools in /usr/local/lib/python3.6/dist-packages (from protobuf>=3.6.1->tensorflow-gpu==1.13.1) (49.1.0)
Collecting mock>=2.0.0
Downloading https://files.pythonhosted.org/packages/cd/74/d72daf8dff5b6566db857cfd088907bb0355f5dd2914c4b3ef065c790735/mock-4.0.2-py3-none-any.whl
Requirement already satisfied: importlib-metadata; python_version < "3.8" in /usr/local/lib/python3.6/dist-packages (from markdown>=2.6.8->tensorboard<1.14.0,>=1.13.0->tensorflow-gpu==1.13.1) (1.7.0)
Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.6/dist-packages (from importlib-metadata; python_version < "3.8"->markdown>=2.6.8->tensorboard<1.14.0,>=1.13.0->tensorflow-gpu==1.13.1) (3.1.0)
[31mERROR: tensorflow 2.2.0 has requirement tensorboard<2.3.0,>=2.2.0, but you'll have tensorboard 1.13.1 which is incompatible.[0m
[31mERROR: tensorflow 2.2.0 has requirement tensorflow-estimator<2.3.0,>=2.2.0, but you'll have tensorflow-estimator 1.13.0 which is incompatible.[0m
Installing collected packages: tensorboard, mock, tensorflow-estimator, tensorflow-gpu
Found existing installation: tensorboard 2.2.2
Uninstalling tensorboard-2.2.2:
Successfully uninstalled tensorboard-2.2.2
Found existing installation: tensorflow-estimator 2.2.0
Uninstalling tensorflow-estimator-2.2.0:
Successfully uninstalled tensorflow-estimator-2.2.0
Successfully installed mock-4.0.2 tensorboard-1.13.1 tensorflow-estimator-1.13.0 tensorflow-gpu-1.13.1
Collecting imageai
[?25l Downloading https://files.pythonhosted.org/packages/09/99/4023e191a343fb23f01ae02ac57a5ca58037c310e8d8c62f87638a3bafc7/imageai-2.1.5-py3-none-any.whl (180kB)
[K |████████████████████████████████| 184kB 3.5MB/s
[?25hRequirement already satisfied, skipping upgrade: pillow in /usr/local/lib/python3.6/dist-packages (from imageai) (7.0.0)
Requirement already satisfied, skipping upgrade: numpy in /usr/local/lib/python3.6/dist-packages (from imageai) (1.18.5)
Requirement already satisfied, skipping upgrade: matplotlib in /usr/local/lib/python3.6/dist-packages (from imageai) (3.2.2)
Requirement already satisfied, skipping upgrade: h5py in /usr/local/lib/python3.6/dist-packages (from imageai) (2.10.0)
Requirement already satisfied, skipping upgrade: scipy in /usr/local/lib/python3.6/dist-packages (from imageai) (1.4.1)
Requirement already satisfied, skipping upgrade: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib->imageai) (2.4.7)
Requirement already satisfied, skipping upgrade: python-dateutil>=2.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib->imageai) (2.8.1)
Requirement already satisfied, skipping upgrade: kiwisolver>=1.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib->imageai) (1.2.0)
Requirement already satisfied, skipping upgrade: cycler>=0.10 in /usr/local/lib/python3.6/dist-packages (from matplotlib->imageai) (0.10.0)
Requirement already satisfied, skipping upgrade: six in /usr/local/lib/python3.6/dist-packages (from h5py->imageai) (1.12.0)
Installing collected packages: imageai
Successfully installed imageai-2.1.5
###Markdown
Training
###Code
import tensorflow as tf
if tf.test.gpu_device_name() != "/device:GPU:0":
raise EnvironmentError("NO GPU ACCELERATION ENABLED")
# %cd /
%cd $DIR_DETECTOR
import os
import time
import glob
import random
import threading
from imageai.Detection.Custom import DetectionModelTrainer
######## VARIABLES ########
RUN_THREADS = True
JOB_DONE = False
THREAD_SLEEP = 300 # 300 = 5 min
global_id = 0
MODELS_TO_KEEP = 8 # NOTE: change depending on free space on drive
######## FUNCTIONS ########
def get_sorted_models_list():
model_list = glob.glob(DIR_DATASET + "models/*.h5")
# Sort by time modified oldest first
model_list = sorted(model_list, key=os.path.getmtime)
return model_list
def auto_clean_drive_trash():
global my_drive
try:
files_list = my_drive.ListFile({'q': "trashed = true"}).GetList()
print("Trash files to delete ({}): ".format(len(files_list)))
for file in files_list:
print("Trash \"{}\" deleted permanently.".format(file["title"]))
file.Delete()
except Exception:
pass
def manage_model_files(ID):
'''
Keep only best performing models
And autoclean trash
'''
global RUN_THREADS, THREAD_SLEEP, global_id
while True:
model_list = get_sorted_models_list()
# Shuffle worst performers
tmp_list = model_list[:-3]
random.shuffle(tmp_list)
model_list[:-3] = tmp_list
if len(model_list) > MODELS_TO_KEEP:
print("TOO MANY MODELS. LIST:")
print(model_list)
# Delete one bad model (shuffled)
print("Deleting: %s" % (model_list[0]))
os.remove(model_list[0])
# Autoclean Drive trash
auto_clean_drive_trash()
# Sleep
for i in range(THREAD_SLEEP):
if RUN_THREADS is False or ID != global_id:
return
time.sleep(1)
def log_time(ID):
global RUN_THREADS, THREAD_SLEEP, global_id
while True:
print (time.ctime())
filename = "my_log.txt"
if os.path.exists(filename):
append_write = 'a' # append if already exists
else:
append_write = 'w' # make a new file if not
f = open(filename, append_write)
f.write("{}\n{}\n".format(time.ctime(), get_sorted_models_list()))
f.close()
# Sleep
for i in range(THREAD_SLEEP):
if RUN_THREADS is False or ID != global_id:
return
time.sleep(1)
######## MAIN PROGRAM ########
# Run thread for file management
global_id = time.time()
t1 = threading.Thread(target=manage_model_files, kwargs=dict(ID=global_id))
t2 = threading.Thread(target=log_time, kwargs=dict(ID=global_id))
t1.start()
t2.start()
#Train model
##### NOTE: change train_from_pretrained_model to newest model every rerun #####
##### NOTE: change object_names_array= if data class have changed #####
trainer = DetectionModelTrainer()
trainer.setModelTypeAsYOLOv3()
trainer.setDataDirectory(data_directory=DIR_DATASET)
trainer.setTrainConfig(object_names_array=["goldbag"], batch_size=4,
num_experiments=150,
train_from_pretrained_model=PRETRAINED_MODEL)
trainer.trainModel()
print("Execution done. Going to sleep for 1 min")
RUN_THREADS = False
global_id = 0
time.sleep(60)
###Output
/content/drive/My Drive/Colab Content/GoldBagDetector
###Markdown
Evaluation
###Code
%cd $DIR_DETECTOR
from imageai.Detection.Custom import DetectionModelTrainer
trainer = DetectionModelTrainer()
trainer.setModelTypeAsYOLOv3()
trainer.setDataDirectory(data_directory=DIR_DATASET)
trainer.evaluateModel(model_path=DIR_DATASET + "models", json_path=DIR_DATASET + "json/detection_config.json", iou_threshold=0.8, object_threshold=0.9, nms_threshold=0.5)
###Output
/content/drive/My Drive/Colab Content/GoldBagDetector
###Markdown
Test Model
###Code
%cd $DIR_DETECTOR
from imageai.Detection.Custom import CustomObjectDetection
import os
import cv2
from google.colab.patches import cv2_imshow
import glob
import time
import random
detector = CustomObjectDetection()
detector.setModelTypeAsYOLOv3()
detector.setJsonPath(DIR_DATASET + "json/detection_config.json")
models = glob.glob(DIR_DATASET + "models/*.h5")
models = sorted(models, key=os.path.getmtime, reverse=True) # Sort by time modified best first
print("All models: {}".format(models))
validation_imgs = glob.glob(DIR_DATASET + "validation/images/*.jpg")
random.shuffle(validation_imgs)
print("Validation images: {}".format(validation_imgs))
for model in models: # [0:2]:
print("#################################")
print("#################################")
print("##############MODEL##############")
print("Validating model: {}".format(model))
print("#################################")
print("#################################")
print("Showing max 25 random images per model")
detector.setModelPath(model)
detector.loadModel()
count_detections = 0
count_img = 0
for img in validation_imgs: # [:25]:
count_img += 1
frame = cv2.imread(img)
frame_out, detections = detector.detectObjectsFromImage(input_type="array", input_image=frame,
output_type="array", minimum_percentage_probability=10)
save_path = DIR_DATASET + "evaluation-images/{}/{}".format(model[-21:], img[-10:])
print("Save path: {}".format(save_path))
# if not os.path.exists(save_path):
# os.makedirs(save_path)
# cv2.imwrite(frame_out, save_path)
for eachObject in detections:
count_detections += 1
print(eachObject["name"] , " : ", eachObject["percentage_probability"], " : ", eachObject["box_points"] )
print("Detected: {}/{}".format(count_detections, count_img))
cv2_imshow(frame_out)
###Output
Save path: dataset-combined/evaluation-images/014--loss-0001.504.h5/IMG375.jpg
goldbag : 55.435603857040405 : [530, 137, 634, 203]
Detected: 44/45
|
docs/examples/multibranch_trajectory/multibranch_trajectory.ipynb | ###Markdown
Multibranch Trajectory

This example demonstrates the use of a Trajectory to encapsulate a series of branching phases.

Overview

For this example, we build a system that contains two components: the first component represents a battery pack that contains multiple cells in parallel, and the second component represents a bank of DC electric motors (also in parallel) driving a gearbox to achieve a desired power output. The battery cells have a state of charge that decays as current is drawn from the battery. The open circuit voltage of the battery is a function of the state of charge. At any point in time, the coupling between the battery and the motor component is solved with a Newton solver in the containing group for a line current that satisfies the equations.

Both the battery and the motor models allow the number of cells and the number of motors to be modified by setting the _n\_parallel_ option in their respective options dictionaries. For this model, we start with 3 cells and 3 motors. We will simulate failure of a battery cell or a motor by setting _n\_parallel_ to 2.

Branching phases are a set of linked phases in a trajectory where the input ends of multiple phases are connected to the output of a single phase. This way you can simulate alternative trajectory paths in the same model. For this example, we will start with a single phase (_phase0_) that simulates the model for one hour. Three follow-on phases will be linked to the output of the first phase: _phase1_ will run as normal, _phase1\_bfail_ will fail one of the battery cells, and _phase1\_mfail_ will fail a motor. All three of these phases start where _phase0_ leaves off, so they share the same initial time and state of charge.

Battery and Motor models

The models are loosely based on the work done in Chin {cite}`chin2019battery`.
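Restating the relations implemented in the battery component below, using the same symbols as the code, the terminal voltage, state-of-charge rate, and pack power are

$$
V_L = V_{oc}(SOC) - I_{Li}\,R_0, \qquad
\frac{d\,SOC}{dt} = -\frac{I_{Li}}{3600\,Q_{max}}, \qquad
P_{pack} = \left(n_{parallel}\,I_{Li}\right)\left(n_{series}\,V_L\right),
$$

where $V_{oc}(SOC)$ is the open-circuit voltage interpolated from the tabulated training data.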
###Code
"""
Simple dynamic model of a LI battery.
"""
import numpy as np
from scipy.interpolate import Akima1DInterpolator
import openmdao.api as om
# Data for open circuit voltage model.
train_SOC = np.array([0., 0.1, 0.25, 0.5, 0.75, 0.9, 1.0])
train_V_oc = np.array([3.5, 3.55, 3.65, 3.75, 3.9, 4.1, 4.2])
class Battery(om.ExplicitComponent):
"""
Model of a Lithium Ion battery.
"""
def initialize(self):
self.options.declare('num_nodes', default=1)
self.options.declare('n_series', default=1, desc='number of cells in series')
self.options.declare('n_parallel', default=3, desc='number of cells in parallel')
self.options.declare('Q_max', default=1.05,
desc='Max Energy Capacity of a battery cell in A*h')
self.options.declare('R_0', default=.025,
desc='Internal resistance of the battery (ohms)')
def setup(self):
num_nodes = self.options['num_nodes']
# Inputs
self.add_input('I_Li', val=np.ones(num_nodes), units='A',
desc='Current demanded per cell')
# State Variables
self.add_input('SOC', val=np.ones(num_nodes), units=None, desc='State of charge')
# Outputs
self.add_output('V_L',
val=np.ones(num_nodes),
units='V',
desc='Terminal voltage of the battery')
self.add_output('dXdt:SOC',
val=np.ones(num_nodes),
units='1/s',
desc='Time derivative of state of charge')
self.add_output('V_oc', val=np.ones(num_nodes), units='V',
desc='Open Circuit Voltage')
self.add_output('I_pack', val=0.1*np.ones(num_nodes), units='A',
desc='Total Pack Current')
self.add_output('V_pack', val=9.0*np.ones(num_nodes), units='V',
desc='Total Pack Voltage')
self.add_output('P_pack', val=1.0*np.ones(num_nodes), units='W',
desc='Total Pack Power')
# Derivatives
row_col = np.arange(num_nodes)
self.declare_partials(of='V_oc', wrt=['SOC'], rows=row_col, cols=row_col)
self.declare_partials(of='V_L', wrt=['SOC'], rows=row_col, cols=row_col)
self.declare_partials(of='V_L', wrt=['I_Li'], rows=row_col, cols=row_col)
self.declare_partials(of='dXdt:SOC', wrt=['I_Li'], rows=row_col, cols=row_col)
self.declare_partials(of='I_pack', wrt=['I_Li'], rows=row_col, cols=row_col)
self.declare_partials(of='V_pack', wrt=['SOC', 'I_Li'], rows=row_col, cols=row_col)
self.declare_partials(of='P_pack', wrt=['SOC', 'I_Li'], rows=row_col, cols=row_col)
self.voltage_model = Akima1DInterpolator(train_SOC, train_V_oc)
self.voltage_model_derivative = self.voltage_model.derivative()
def compute(self, inputs, outputs):
opt = self.options
I_Li = inputs['I_Li']
SOC = inputs['SOC']
V_oc = self.voltage_model(SOC, extrapolate=True)
outputs['V_oc'] = V_oc
outputs['V_L'] = V_oc - (I_Li * opt['R_0'])
outputs['dXdt:SOC'] = -I_Li / (3600.0 * opt['Q_max'])
outputs['I_pack'] = I_Li * opt['n_parallel']
outputs['V_pack'] = outputs['V_L'] * opt['n_series']
outputs['P_pack'] = outputs['I_pack'] * outputs['V_pack']
def compute_partials(self, inputs, partials):
opt = self.options
I_Li = inputs['I_Li']
SOC = inputs['SOC']
dV_dSOC = self.voltage_model_derivative(SOC, extrapolate=True)
partials['V_oc', 'SOC'] = dV_dSOC
partials['V_L', 'SOC'] = dV_dSOC
partials['V_L', 'I_Li'] = -opt['R_0']
partials['dXdt:SOC', 'I_Li'] = -1./(3600.0*opt['Q_max'])
n_parallel = opt['n_parallel']
n_series = opt['n_series']
V_oc = self.voltage_model(SOC, extrapolate=True)
V_L = V_oc - (I_Li * opt['R_0'])
partials['I_pack', 'I_Li'] = n_parallel
partials['V_pack', 'I_Li'] = -opt['R_0']
partials['V_pack', 'SOC'] = n_series * dV_dSOC
partials['P_pack', 'I_Li'] = n_parallel * n_series * (V_L - I_Li * opt['R_0'])
partials['P_pack', 'SOC'] = n_parallel * I_Li * n_series * dV_dSOC
# num_nodes = 1
# prob = om.Problem(model=Battery(num_nodes=num_nodes))
# model = prob.model
# prob.setup()
# prob.set_solver_print(level=2)
# prob.run_model()
# derivs = prob.check_partials(compact_print=True)
"""
Simple model for a set of motors in parallel where efficiency is a function of current.
"""
import numpy as np
import openmdao.api as om
class Motors(om.ExplicitComponent):
"""
Model for motors in parallel.
"""
def initialize(self):
self.options.declare('num_nodes', default=1)
self.options.declare('n_parallel', default=3, desc='number of motors in parallel')
def setup(self):
num_nodes = self.options['num_nodes']
# Inputs
self.add_input('power_out_gearbox', val=3.6*np.ones(num_nodes), units='W',
desc='Power at gearbox output')
self.add_input('current_in_motor', val=np.ones(num_nodes), units='A',
desc='Total current demanded')
# Outputs
self.add_output('power_in_motor', val=np.ones(num_nodes), units='W',
desc='Power required at motor input')
# Derivatives
row_col = np.arange(num_nodes)
self.declare_partials(of='power_in_motor', wrt=['*'], rows=row_col, cols=row_col)
def compute(self, inputs, outputs):
current = inputs['current_in_motor']
power_out = inputs['power_out_gearbox']
n_parallel = self.options['n_parallel']
# Simple linear curve fit for efficiency.
eff = 0.9 - 0.3 * current / n_parallel
outputs['power_in_motor'] = power_out / eff
def compute_partials(self, inputs, partials):
current = inputs['current_in_motor']
power_out = inputs['power_out_gearbox']
n_parallel = self.options['n_parallel']
eff = 0.9 - 0.3 * current / n_parallel
partials['power_in_motor', 'power_out_gearbox'] = 1.0 / eff
partials['power_in_motor', 'current_in_motor'] = 0.3 * power_out / (n_parallel * eff**2)
# num_nodes = 1
# prob = om.Problem(model=Motors(num_nodes=num_nodes))
# model = prob.model
# prob.setup()
# prob.run_model()
# derivs = prob.check_partials(compact_print=True)
"""
ODE for example that shows how to use multiple phases in Dymos to model failure of a battery cell
in a simple electrical system.
"""
import numpy as np
import openmdao.api as om
class BatteryODE(om.Group):
def initialize(self):
self.options.declare('num_nodes', default=1)
self.options.declare('num_battery', default=3)
self.options.declare('num_motor', default=3)
def setup(self):
num_nodes = self.options['num_nodes']
num_battery = self.options['num_battery']
num_motor = self.options['num_motor']
self.add_subsystem(name='pwr_balance',
subsys=om.BalanceComp(name='I_Li', val=1.0*np.ones(num_nodes),
rhs_name='pwr_out_batt',
lhs_name='P_pack',
units='A', eq_units='W', lower=0.0, upper=50.))
self.add_subsystem('battery', Battery(num_nodes=num_nodes, n_parallel=num_battery),
promotes_inputs=['SOC'],
promotes_outputs=['dXdt:SOC'])
self.add_subsystem('motors', Motors(num_nodes=num_nodes, n_parallel=num_motor))
self.connect('battery.P_pack', 'pwr_balance.P_pack')
self.connect('motors.power_in_motor', 'pwr_balance.pwr_out_batt')
self.connect('pwr_balance.I_Li', 'battery.I_Li')
self.connect('battery.I_pack', 'motors.current_in_motor')
self.nonlinear_solver = om.NewtonSolver(solve_subsystems=False, maxiter=20)
self.linear_solver = om.DirectSolver()
###Output
_____no_output_____
###Markdown
Building and running the problem
###Code
import matplotlib.pyplot as plt
import openmdao.api as om
import dymos as dm
from dymos.examples.battery_multibranch.battery_multibranch_ode import BatteryODE
from dymos.utils.lgl import lgl
prob = om.Problem()
opt = prob.driver = om.ScipyOptimizeDriver()
opt.declare_coloring()
opt.options['optimizer'] = 'SLSQP'
num_seg = 5
seg_ends, _ = lgl(num_seg + 1)
traj = prob.model.add_subsystem('traj', dm.Trajectory())
# First phase: normal operation.
transcription = dm.Radau(num_segments=num_seg, order=5, segment_ends=seg_ends, compressed=False)
phase0 = dm.Phase(ode_class=BatteryODE, transcription=transcription)
traj_p0 = traj.add_phase('phase0', phase0)
traj_p0.set_time_options(fix_initial=True, fix_duration=True)
traj_p0.add_state('state_of_charge', fix_initial=True, fix_final=False,
targets=['SOC'], rate_source='dXdt:SOC')
# Second phase: normal operation.
phase1 = dm.Phase(ode_class=BatteryODE, transcription=transcription)
traj_p1 = traj.add_phase('phase1', phase1)
traj_p1.set_time_options(fix_initial=False, fix_duration=True)
traj_p1.add_state('state_of_charge', fix_initial=False, fix_final=False,
targets=['SOC'], rate_source='dXdt:SOC')
traj_p1.add_objective('time', loc='final')
# Second phase, but with battery failure.
phase1_bfail = dm.Phase(ode_class=BatteryODE, ode_init_kwargs={'num_battery': 2},
transcription=transcription)
traj_p1_bfail = traj.add_phase('phase1_bfail', phase1_bfail)
traj_p1_bfail.set_time_options(fix_initial=False, fix_duration=True)
traj_p1_bfail.add_state('state_of_charge', fix_initial=False, fix_final=False,
targets=['SOC'], rate_source='dXdt:SOC')
# Second phase, but with motor failure.
phase1_mfail = dm.Phase(ode_class=BatteryODE, ode_init_kwargs={'num_motor': 2},
transcription=transcription)
traj_p1_mfail = traj.add_phase('phase1_mfail', phase1_mfail)
traj_p1_mfail.set_time_options(fix_initial=False, fix_duration=True)
traj_p1_mfail.add_state('state_of_charge', fix_initial=False, fix_final=False,
targets=['SOC'], rate_source='dXdt:SOC')
traj.link_phases(phases=['phase0', 'phase1'], vars=['state_of_charge', 'time'])
traj.link_phases(phases=['phase0', 'phase1_bfail'], vars=['state_of_charge', 'time'])
traj.link_phases(phases=['phase0', 'phase1_mfail'], vars=['state_of_charge', 'time'])
prob.model.options['assembled_jac_type'] = 'csc'
prob.model.linear_solver = om.DirectSolver(assemble_jac=True)
prob.setup()
prob['traj.phase0.t_initial'] = 0
prob['traj.phase0.t_duration'] = 1.0*3600
prob['traj.phase1.t_initial'] = 1.0*3600
prob['traj.phase1.t_duration'] = 1.0*3600
prob['traj.phase1_bfail.t_initial'] = 1.0*3600
prob['traj.phase1_bfail.t_duration'] = 1.0*3600
prob['traj.phase1_mfail.t_initial'] = 1.0*3600
prob['traj.phase1_mfail.t_duration'] = 1.0*3600
prob.set_solver_print(level=0)
dm.run_problem(prob)
soc0 = prob['traj.phase0.states:state_of_charge']
soc1 = prob['traj.phase1.states:state_of_charge']
soc1b = prob['traj.phase1_bfail.states:state_of_charge']
soc1m = prob['traj.phase1_mfail.states:state_of_charge']
# Plot Results
t0 = prob['traj.phases.phase0.time.time']/3600
t1 = prob['traj.phases.phase1.time.time']/3600
t1b = prob['traj.phases.phase1_bfail.time.time']/3600
t1m = prob['traj.phases.phase1_mfail.time.time']/3600
plt.subplot(2, 1, 1)
plt.plot(t0, soc0, 'b')
plt.plot(t1, soc1, 'b')
plt.plot(t1b, soc1b, 'r')
plt.plot(t1m, soc1m, 'c')
plt.xlabel('Time (hour)')
plt.ylabel('State of Charge (percent)')
I_Li0 = prob['traj.phases.phase0.rhs_all.pwr_balance.I_Li']
I_Li1 = prob['traj.phases.phase1.rhs_all.pwr_balance.I_Li']
I_Li1b = prob['traj.phases.phase1_bfail.rhs_all.pwr_balance.I_Li']
I_Li1m = prob['traj.phases.phase1_mfail.rhs_all.pwr_balance.I_Li']
plt.subplot(2, 1, 2)
plt.plot(t0, I_Li0, 'b')
plt.plot(t1, I_Li1, 'b')
plt.plot(t1b, I_Li1b, 'r')
plt.plot(t1m, I_Li1m, 'c')
plt.xlabel('Time (hour)')
plt.ylabel('Line Current (A)')
plt.legend(['Phase 1', 'Phase 2', 'Phase 2 Battery Fail', 'Phase 2 Motor Fail'], loc=2)
plt.show()
from openmdao.utils.assert_utils import assert_near_equal
# Final value for State of Charge in each segment should be a good test.
print('State of Charge after 1 hour')
assert_near_equal(soc0[-1], 0.63464982, 1e-6)
print('State of Charge after 2 hours')
assert_near_equal(soc1[-1], 0.23794217, 1e-6)
print('State of Charge after 2 hours, battery fails at 1 hour')
assert_near_equal(soc1b[-1], 0.0281523, 1e-6)
print('State of Charge after 2 hours, motor fails at 1 hour')
assert_near_equal(soc1m[-1], 0.18625395, 1e-6)
###Output
_____no_output_____ |
src/main/ipynb/relational-python.ipynb | ###Markdown
The Relational Model- The relational model is a mathematical *abstraction* used to describehow data can be structured, queried and updated.- It is based on *set theory*.- It can be *implemented* in many different ways.- When it is implemented by storing relations on disk files, we have a *relational database*.- Functional programming languages such as Python naturally express many aspects of the relational model.- This is one of the reasons they are very useful for data science. Overview1. The formal definition of the relational model2. Representing relations in Python using collections of tuples3. Querying relational data using Python set comprehensions An Example Relational DatasetThe following slides use relations to describe:- students, - the courses they are taking - the prerequisites of courses- their grades- which department they are in
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Concepts- In a relational model, the data is a collection of *relations*.- Informally, a relation resembles a table of values.- When relations are stored on disk, they are called tables.- Each row represents a *fact* about a real-world entity or relationship.- The table name and column names are used to help interpret the meaning of the values.- A relational model is defined formally in terms of: - tuples - attributes - relations - domains TuplesA tuple is a mathematical abstraction which:- contains several other values- has a well-defined ordering over the values- can contain duplicate values- can contain values of different types- can contain the special value `None` or `Null`- is immutable; the values contained in the tuple cannot change over time The size of a tuple- We often restrict attention to tuples of a particular size or *degree*.- An $n-$tuple contains $n$ values. Attributes- An attribute refers to the value in a particular index of a tuple. Atomic values- Atomic values are values which are not stored in collections.- Atomic values cannot be further decomposed into other values.- A tuple is therefore *not* atomic.- A tuple that contains only atomic values is called a *flat tuple*. Domain- A *domain* $D_i$ is a set of atomic values.- Each attribute within a relation has the *same* domain.- Intuitively, a domain specifies the allowable values in a column $i$.- Examples:$D_1 = \mathbb{Z}$ $D_2 = \{ 15, 16, \ldots, 80 \}$ $D_3 = \{ "CS", \; "ECON", \; "PHYS" \}$ Relation schema- A *relation schema* is denoted by $R(A_1, A_2, \ldots, A_n)$.- Each *attribute* $A_i$ is the name of a role played by some domain $D_i$ in $R$.- $D_i$ is the *domain* of $A_i$ and is denoted by $\operatorname{dom}(A_i)$.- The *degree* or *arity* of a relation is the number of attributes $n$. Example- What is the arity of the following relation schema?~~~STUDENT(Name, Ssn, Home_phone, Address, Office_phone, Age, Gpa)~~~- Answer: 7- What is the name of the relation?- Answer: `STUDENT` Example of a Domain$\operatorname{dom}(Gpa) = [0, 4]$ Relations- The schema represents the structure of a *relation*.- A relation contains the actual data.- It is sometimes called the *relation state*, *relation intension* or *relation extension*.- Let $r(R)$ denote the relation $r$ of a relation schema $R$.- The relation $r$ consists of a set of $n$-tuples $r = \{t_1, t_2, \ldots, t_m\}$.- The $i^{th}$ value in tuple $t$ corresponds to the attribute $A_i$and is denoted $t.A_i$ or $t[i]$. Constraints- Domain constraints- Key constraints- NULL values Relational Datasets- So far we have discussed single relations.- A typical data-set will comprise many relations.- A relational dataset schema $(S, IC)$ comprises: - a set of relation schemas $S = \{ R_1, R_2, \ldots, R_k \}$ - a set of integrity constraints $IC$ - A relational dataset state $DB$ is a set of relation states $DB = \{ r_1, r_2, \ldots r_m \}$ - such that every $r_i$ satisfies every constraint in $IC$. Data definition language- The data definition language (DDL) provides a concrete syntax and semantics for describing a relational schema.- Most commonly we use *SQL* - Structured Query Language. Data query language- The data query language provides a concrete syntax and semantics for querying the relational dataset.- Formally a query is a function mapping from existing relation states to new relations.- That is, we map from one set of tuples to another set of tuples.- Typically the mapping involves some combination of set-theoretic functions, e.g. 
- subset of tuples that satisfy a predicate $p$ - $\{ x: x \in X \wedge p(x) \}$ - set union $X \cup Y$, difference $X - Y$, intersection $X \cap Y$ - Cartesian product $X \times Y$ - The most common data query language for relational databases is again SQL.. . .- Mathematically, there is nothing stopping us from using e.g. Python as a query language. Tuples in Python- Tuples in Python can be written by writing a sequence of values separated bycommas and surrounded by round brackets. For example:
###Code
tuple1 = (50, 6.5)
tuple2 = (1, 2, 'hello')
professor = ('Steve', 'Phelps', 'S6.18')
student = ('John', 'Doe', None)
###Output
_____no_output_____
###Markdown
- The individual values contained within a tuple can be obtained by indexing their position (counting from zero). To find the office number of the professor:
###Code
professor[2]
###Output
_____no_output_____
###Markdown
- Tuples are a very flexible way to represent single pieces of data.- We only allow *flat tuples*. The following is not allowed in a relational model:
###Code
this_is_not_allowed = (1, 3, (50, 6.5))
###Output
_____no_output_____
###Markdown
Sets of Tuples- How can we use tuples to represent data-*sets* and relations?- We can use collections of tuples, e.g. a set of tuples.- So now we can represent one or more students:
###Code
# Student tuples
smith = ('Smith', 17, 1, 'CS')
brown = ('Brown', 8, 2, 'CS')
# The student relation
students = {smith, brown}
###Output
_____no_output_____
###Markdown
Relational attributes in Python- Attributes are names for particular positions within a tuple.- We can use Python functions to represent relational attributes:
###Code
# The attributes of a student
def student_name(s):
return s[0]
def student_student_number(s):
return s[1]
###Output
_____no_output_____
###Markdown
- Note that different relations can have the same attribute.- Therefore we need some convention to distinguish attributes from different relations.- In the above code, `student_student_number` refers to the `student_number` attribute of the `student` relation. Queries in Python- We need some way to extract data from our data-set; i.e. to *query* the data.- A query will e.g.: - Take a subset of the tuples of a relation that satisfy a predicate. - *Join* two or more relations using a Cartesian product. - Take the intersection of tuples from two or more relations. - Take the union of tuples from two or more relations. - Python list comprehensions or set comprehensions provide all of this functionality. Relational queries in Python - The set of students whose name is "Smith":
###Code
{s for s in students if student_name(s) == 'Smith'}
###Output
_____no_output_____
###Markdown
This is equivalent to the SQL query:~~~SQLSELECT * FROM students WHERE students.name = "SMITH";~~~ Joining relations- Now let's create another relation called `grades` which has tuples of the form `(ssn, course-name, mark)`:
###Code
grades = { (17, 'python', 'A'), (17, 'algebra', 'B'), (17, 'algebra', 'A')}
###Output
_____no_output_____
###Markdown
and a function to return the mark for a given grade tuple:
###Code
def grade_mark(g):
return g[2]
###Output
_____no_output_____
###Markdown
Now we can join the two relations using a Cartesian product:
###Code
{(student_name(s), grade_mark(g)) for s in students for g in grades}
###Output
_____no_output_____
###Markdown
this is equivalent to the following SQL:~~~SQLSELECT students.name, grades.mark FROM students, grades;~~~ - We can also combine this with a predicate:
###Code
{(student_name(s), grade_mark(g)) for s in students for g in grades if student_name(s) == 'Smith'}
###Output
_____no_output_____
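###Markdown
Relational set operations in Python- Because relations here are just Python sets of tuples, the set-theoretic operations listed earlier (union, intersection, difference) map directly onto the built-in set operators. A minimal sketch, reusing `students` and `student_name` from above; the `econ_students` relation is invented purely for illustration:
~~~python
econ_students = {('Jones', 9, 1, 'ECON'), ('Brown', 8, 2, 'CS')}

all_students = students | econ_students    # set union
shared = students & econ_students          # set intersection
only_original = students - econ_students   # set difference

# projection onto the name attribute is just another comprehension
names = {student_name(s) for s in students}
~~~
These correspond to the SQL operators `UNION`, `INTERSECT` and `EXCEPT`, and to `SELECT name FROM students`, respectively.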
###Markdown
The Relational Model- The relational model is a mathematical *abstraction* used to describehow data can be structured, queried and updated.- It is based on *set theory*.- It can be *implemented* in many different ways.- When it is implemented by storing relations on disk files, we have a *relational database*.- Functional programming languages such as Python naturally express many aspects of the relational model.- This is one of the reasons they are very useful for data science. Overview1. The formal definition of the relational model2. Representing relations in Python using collections of tuples3. Querying relational data using Python set comprehensions An Example Relational DatasetThe following slides use relations to describe:- students, - the courses they are taking - the prerequisites of courses- their grades- which department they are in
###Code
import pandas as pd
df = pd.DataFrame({'name': ['Smith', 'Brown'], 'student_number': [17, 18], 'course_id': [1, 2], 'department': ['CS', 'CS']})
df
###Output
_____no_output_____
###Markdown
Concepts- In a relational model, the data is a collection of *relations*.- Informally, a relation resembles a table of values.- When relations are stored on disk, they are called tables.- Each row represents a *fact* about a real-world entity or relationship.- The table name and column names are used to help interpret the meaning of the values.- A relational model is defined formally in terms of: - tuples - attributes - relations - domains Illustration of a relationImage courtesy of [AutumnSnow](https://commons.wikimedia.org/wiki/User:AutumnSnow) TuplesA tuple is a mathematical abstraction which:- contains several other values- has a well-defined ordering over the values- can contain duplicate values- can contain values of different types- can contain the special value `None` or `Null`- is immutable; the values contained in the tuple cannot change over time The size of a tuple- We often restrict attention to tuples of a particular size or *degree*.- An $n-$tuple contains $n$ values. Attributes- An attribute refers to the value in a particular index of a tuple. Atomic values- Atomic values are values which are not stored in collections.- Atomic values cannot be further decomposed into other values.- A tuple is therefore *not* atomic.- A tuple that contains only atomic values is called a *flat tuple*. Domain- A *domain* $D_i$ is a set of atomic values.- Each attribute within a relation has the *same* domain.- Intuitively, a domain specifies the allowable values in a column $i$.- Examples:$D_1 = \mathbb{Z}$ $D_2 = \{ 15, 16, \ldots, 80 \}$ $D_3 = \{ "CS", \; "ECON", \; "PHYS" \}$ Relation schema- A *relation schema* is denoted by $R(A_1, A_2, \ldots, A_n)$.- Each *attribute* $A_i$ is the name of a role played by some domain $D_i$ in $R$.- $D_i$ is the *domain* of $A_i$ and is denoted by $\operatorname{dom}(A_i)$.- The *degree* or *arity* of a relation is the number of attributes $n$. Example- What is the arity of the following relation schema?~~~STUDENT(Name, Ssn, Home_phone, Address, Office_phone, Age, Gpa)~~~- Answer: 7- What is the name of the relation?- Answer: `STUDENT` Example of a Domain$\operatorname{dom}(Gpa) = [0, 4]$ Relations- The schema represents the structure of a *relation*.- A relation contains the actual data.- It is sometimes called the *relation state*, *relation intension* or *relation extension*.- Let $r(R)$ denote the relation $r$ of a relation schema $R$.- The relation $r$ consists of a set of $n$-tuples $r = \{t_1, t_2, \ldots, t_m\}$.- The $i^{th}$ value in tuple $t$ corresponds to the attribute $A_i$and is denoted $t.A_i$ or $t[i]$. Constraints- Domain constraints- Key constraints- NULL values Relational Datasets- So far we have discussed single relations.- A typical data-set will comprise many relations.- A relational dataset schema $(S, IC)$ comprises: - a set of relation schemas $S = \{ R_1, R_2, \ldots, R_k \}$ - a set of integrity constraints $IC$ - A relational dataset state $DB$ is a set of relation states $DB = \{ r_1, r_2, \ldots r_m \}$ - such that every $r_i$ satisfies every constraint in $IC$. Data definition language- The data definition language (DDL) provides a concrete syntax and semantics for describing a relational schema.- Most commonly we use *SQL* - Structured Query Language. 
Data query language- The data query language provides a concrete syntax and semantics for querying the relational dataset.- Formally a query is a function mapping from existing relation states to new relations.- That is, we map from one set of tuples to another set of tuples.- Typically the mapping involves some combination of set-theoretic functions, e.g. - subset of tuples that satisfy a predicate $p$ - $\{ x: x \in X \wedge p(x) \}$ - set union $X \cup Y$, difference $X - Y$, intersection $X \cap Y$ - Cartesian product $X \times Y$ - The most common data query language for relational databases is again SQL.. . .- Mathematically, there is nothing stopping us from using e.g. Python as a query language. Tuples in Python- Tuples in Python can be written by writing a sequence of values separated bycommas and surrounded by round brackets. For example:
###Code
tuple1 = (50, 6.5)
tuple2 = (1, 2, 'hello')
professor = ('Steve', 'Phelps', 'S6.18')
student = ('John', 'Doe', None)
###Output
_____no_output_____
###Markdown
- The individual values contained within a tuple can be obtained by indexing their position (counting from zero). To find the office number of the professor:
###Code
professor[2]
###Output
_____no_output_____
###Markdown
- Tuples are a very flexible way to represent single pieces of data.- We only allow *flat tuples*. The following is not allowed in a relational model:
###Code
this_is_not_allowed = (1, 3, (50, 6.5))
###Output
_____no_output_____
###Markdown
Sets of Tuples- How can we use tuples to represent data-*sets* and relations?- We can use collections of tuples, e.g. a set of tuples.- So now we can represent one or more students:
###Code
# Student tuples
smith = ('Smith', 17, 1, 'CS')
brown = ('Brown', 8, 2, 'CS')
# The student relation
students = {smith, brown}
###Output
_____no_output_____
###Markdown
Relational attributes in Python- Attributes are names for particular positions within a tuple.- We can use Python functions to represent relational attributes:
###Code
# The attributes of a student
def student_name(s):
return s[0]
def student_student_number(s):
return s[1]
###Output
_____no_output_____
###Markdown
- Note that different relations can have the same attribute.- Therefore we need some convention to distinguish attributes from different relations.- In the above code, `student_student_number` refers to the `student_number` attribute of the `student` relation. Queries in Python- We need some way to extract data from our data-set; i.e. to *query* the data.- A query will e.g.: - Take a subset of the tuples of a relation that satisfy a predicate. - *Join* two or more relations using a Cartesian product. - Take the intersection of tuples from two or more relations. - Take the union of tuples from two or more relations. - Python list comprehensions or set comprehensions provide all of this functionality. Relational queries in Python - The set of students whose name is "Smith":
###Code
{s for s in students if student_name(s) == 'Smith'}
###Output
_____no_output_____
###Markdown
This is equivalent to the SQL query:~~~SQLSELECT * FROM students WHERE students.name = "SMITH";~~~ Joining relations- Now let's create another relation called `grades` which has tuples of the form `(ssn, course-name, mark)`:
###Code
grades = { (17, 'python', 'A'), (17, 'algebra', 'B'), (17, 'algebra', 'A')}
###Output
_____no_output_____
###Markdown
and a function to return the mark for a given grade tuple:
###Code
def grade_mark(g):
return g[2]
###Output
_____no_output_____
###Markdown
Now we can join the two relations using a Cartesian product:
###Code
{(student_name(s), grade_mark(g)) for s in students for g in grades}
###Output
_____no_output_____
###Markdown
this is equivalent to the following SQL:~~~SQLSELECT students.name, grades.mark FROM students, grades;~~~ - We can also combine this with a predicate:
###Code
{(student_name(s), grade_mark(g)) for s in students for g in grades if student_name(s) == 'Smith'}
###Output
_____no_output_____ |
03_sequential_models.ipynb | ###Markdown
Import Data
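The notebook's original import cell is not shown above, so the following cell is a hedged reconstruction of the imports the later cells appear to rely on; the exact library layout (for example standalone Keras 2.x rather than `tensorflow.keras`) is an assumption.
###Code
# Assumed imports for the cells that follow -- adjust to match your environment.
import re
import pickle
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib.colors import rgb2hex
from sklearn.model_selection import train_test_split
from sklearn.metrics import (accuracy_score, roc_auc_score,
                             confusion_matrix, classification_report)
from keras.preprocessing import text, sequence
from keras.models import Model, load_model
from keras.layers import (Input, Embedding, Dropout, Conv1D, Dense, Flatten,
                          TimeDistributed, BatchNormalization, Bidirectional,
                          LSTM, GlobalMaxPool1D, Layer)
from keras.optimizers import Adam, SGD
from keras.callbacks import ModelCheckpoint, EarlyStopping
from keras.regularizers import l2
from keras import backend as K
from keras import initializers, regularizers, constraints
from keras.utils.vis_utils import model_to_dot
from IPython.display import SVG
from gensim.models import Word2Vec
import spacy
###Output
_____no_output_____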
###Code
train = pd.read_csv('data/labeledTrainData.tsv', sep='\t')
print(train.shape)
test = pd.read_csv('data/testData.tsv', sep='\t')
print(test.shape)
###Output
(25000, 2)
###Markdown
Pre-process Data
###Code
MAX_FEATURES = 25000
MAX_LEN = 350
list_sentences_train = train['review'].fillna("UNKNOWN").values.tolist()
list_sentences_test = test['review'].fillna("UNKNOWN").values.tolist()
tokenizer = text.Tokenizer(num_words=MAX_FEATURES)
tokenizer.fit_on_texts(list_sentences_train)
list_tokenized_train = tokenizer.texts_to_sequences(list_sentences_train)
list_tokenized_test = tokenizer.texts_to_sequences(list_sentences_test)
X = sequence.pad_sequences(list_tokenized_train, maxlen=MAX_LEN)
X_test = sequence.pad_sequences(list_tokenized_test, maxlen=MAX_LEN)
y = train['sentiment'].values.reshape(-1,1)
y_softmax = np.array([np.array([0,1]) if x == 1 else np.array([1,0]) for x in y])
N_CLASSES = 2
X_train, X_val, y_train, y_val = train_test_split(X, y_softmax, test_size=0.1, random_state=42)
###Output
_____no_output_____
###Markdown
Create Model - No External Knowledge
###Code
EMBED_SIZE = 8
CNN_FILTER_SIZE = 8
CNN_KERNEL_SIZE = 3
def create_model():
input_sequence = Input(shape=(MAX_LEN, ))
x = Embedding(input_dim=MAX_FEATURES, output_dim=EMBED_SIZE)(input_sequence)
x = Dropout(0.5)(x)
x = Conv1D(filters=CNN_FILTER_SIZE, kernel_size=CNN_KERNEL_SIZE, padding='same', kernel_regularizer=l2(0.0001))(x)
#x = Bidirectional(LSTM(32,
# return_sequences=True,
# kernel_regularizer=l2(0.0001)))(x)
#x = GlobalMaxPool1D()(x)
#x = AttentionWithContext()(x)
x = TimeDistributed(Dense(1, activation="elu", kernel_regularizer=l2(0.0001)))(x)
x = Flatten()(x)
x = BatchNormalization()(x)
x = Dense(8, activation="elu", kernel_regularizer=l2(0.0001))(x)
prediction = Dense(N_CLASSES, activation="softmax")(x)
opt = Adam(lr=0.0001, beta_1=0.9, beta_2=0.999, epsilon=None, decay=0.0, amsgrad=False)
#opt = SGD(lr=0.001, momentum=0.0, decay=0.0, nesterov=False)
model = Model(inputs=input_sequence, outputs=prediction)
model.compile(loss='binary_crossentropy',
optimizer=opt,
metrics=['accuracy'])
return model
model = create_model()
BATCH_SIZE = 512
EPOCHS = 50
FILE_PATH = "models/keras_model_weights.hdf5"
checkpoint = ModelCheckpoint(FILE_PATH, monitor='val_loss', verbose=1, save_best_only=True, mode='min')
early = EarlyStopping(monitor="val_loss", mode="min", patience=15)
callbacks_list = [checkpoint, early]
model.summary()
SVG(model_to_dot(model, show_shapes=True).create(prog='dot', format='svg'))
model.fit(X_train, y_train, batch_size=BATCH_SIZE, epochs=EPOCHS, validation_data=[X_val, y_val], callbacks=callbacks_list)
pd.DataFrame(model.history.history).plot(figsize=(12,8))
model = load_model(filepath=FILE_PATH, custom_objects={'AttentionWithContext': AttentionWithContext})
y_val_hat = model.predict(X_val)
print(accuracy_score(y_val, y_val_hat > 0.5))
print(roc_auc_score(y_val, y_val_hat))
print(confusion_matrix(y_val, y_val_hat > 0.5))
print(classification_report(y_val, y_val_hat > 0.5))
###Output
0.8848
0.955568316842
###Markdown
Useful tutorials:* http://konukoii.com/blog/2018/02/19/twitter-sentiment-analysis-using-combined-lstm-cnn-models/ Extract Activations
###Code
from IPython.display import display, HTML
def create_get_activation_function(model, output_layer_int):
inp = model.input
output = model.layers[output_layer_int].output
get_activations = K.function([inp]+ [K.learning_phase()], [output])
return get_activations
act_model = load_model(filepath=FILE_PATH, custom_objects={'AttentionWithContext': AttentionWithContext})
get_activations = create_get_activation_function(act_model, 3)
word_to_hash = tokenizer.word_index
hash_to_word = {v:k for k,v in word_to_hash.items()}
hash_to_word[0] = ''
cmap = plt.cm.get_cmap('RdYlGn')
example = 4505
html_string = '<p>For training example: ' + str(example) + '</p>'
for node in range(CNN_FILTER_SIZE):
activations = get_activations([[X_train[example]], 0.])[0]
text = [hash_to_word[x] for x in X_train[example]]
scaled_activations = activations[0,:,node] - activations[0,:,node].min()
scaled_activations = scaled_activations / scaled_activations.max()
scaled_activations = pd.rolling_mean(scaled_activations, CNN_KERNEL_SIZE, min_periods=1)
new_string = ''
for i, t in enumerate(text):
new_string += '<span style="background-color: ' + str(rgb2hex(cmap(scaled_activations[i]))) + '">' + t + '</span>' + ' '
html_string += '<p>CNN Filter: ' + str(node) + '</p><p>' + new_string + '</p>'
h = HTML(html_string)
display(h)
get_word_activations = create_get_activation_function(act_model, 5)
example = 4505
html_string = '<p>For training example: ' + str(example) + '</p>'
activations = get_word_activations([[X_train[example]], 0.])[0]
text = [hash_to_word[x] for x in X_train[example]]
scaled_activations = activations[0,:] - activations[0,:].min()
scaled_activations = scaled_activations / scaled_activations.max()
new_string = ''
for i, t in enumerate(text):
new_string += '<span style="background-color: ' + str(rgb2hex(cmap(scaled_activations[i]))) + '">' + t + '</span>' + ' '
html_string += '<p>Time Distributed Dense Output: <p>' + new_string + '</p>'
h = HTML(html_string)
display(h)
###Output
_____no_output_____
###Markdown
Word Embeddings
###Code
from scipy.spatial.distance import pdist, squareform
emb_layer = model.layers[1]
emb_layer_weights = emb_layer.get_weights()[0]
emb_layer_weights.shape
x_sq = squareform(pdist(emb_layer_weights[0:10000,:], metric='cosine'))
df_x_sq = pd.DataFrame(x_sq)
df_x_edge = df_x_sq.where(np.triu(np.ones(df_x_sq.shape)).astype(np.bool)).stack().reset_index()
df_x_edge.columns = ['source','target','weight']
df_x_edge['weight'].hist(bins=50)
df_x_edge = df_x_edge[df_x_edge['weight'] < 0.1]
df_x_edge = df_x_edge[df_x_edge.source != df_x_edge.target]
df_x_edge.shape
df_x_edge['source_word'] = df_x_edge['source'].apply(lambda x: hash_to_word[x])
df_x_edge['target_word'] = df_x_edge['target'].apply(lambda x: hash_to_word[x])
df_x_edge.sort_values(by='weight')
df_x_edge.to_csv('../data/combine_activation_sim.csv', index=False)
df_node_text = pd.DataFrame(df['text'], columns=['text'])
df_node_text['Id'] = df_node_text.index
df_node_text = df_node_text[['Id', 'text']]
from IPython.core.display import display, HTML
from string import Template
import json, random
random.seed(42)
n_nodes = 40
n_edges = 200
graph_data = { 'nodes': [], 'edges': [] }
for i in range(n_nodes):
graph_data['nodes'].append({
"id": "n" + str(i),
"label": "n" + str(i),
"x": random.uniform(0,1),
"y": random.uniform(0,1),
"size": random.uniform(0.2,1)
})
for j in range(n_edges):
x_center = random.uniform(0,1)
y_center = random.uniform(0,1)
x_dist = random.uniform(0.1,0.5)
y_dist = random.uniform(0.2,0.5)
neighborhood = []
for node in graph_data['nodes']:
if abs(node['x'] - x_center) < x_dist:
if abs(node['y'] - y_center) < y_dist:
neighborhood.append(int(node['id'].replace('n','')))
if len(neighborhood) >= 2:
ends = random.sample(neighborhood,2)
graph_data['edges'].append({
"id": "e" + str(j),
"source": "n" + str(ends[0]),
"target": "n" + str(ends[1])
})
js_text_template = Template('''
var g = $graph_data ;
s = new sigma({graph: g, container: '$container', settings: { defaultNodeColor: '#ec5148'} });
s.graph.nodes().forEach(function(n) {
n.originalColor = n.color;
});
s.graph.edges().forEach(function(e) {
e.originalColor = e.color;
});
s.bind('clickNode', function(e) {
var nodeId = e.data.node.id,
toKeep = s.graph.neighbors(nodeId);
toKeep[nodeId] = e.data.node;
s.graph.nodes().forEach(function(n) {
if (toKeep[n.id])
n.color = n.originalColor;
else
n.color = '#eee';
});
s.graph.edges().forEach(function(e) {
if (toKeep[e.source] && toKeep[e.target])
e.color = e.originalColor;
else
e.color = '#eee';
});
s.refresh();
});
s.bind('clickStage', function(e) {
s.graph.nodes().forEach(function(n) {
n.color = n.originalColor;
});
s.graph.edges().forEach(function(e) {
e.color = e.originalColor;
});
s.refresh();
});
''')
js_text = js_text_template.substitute({'graph_data': json.dumps(graph_data),
'container': 'graph-div'})
'../ml-notebooks/js/sigma.min.js'
html_template = Template('''
<script src="../ml-notebooks/js/sigma.min.js"></script>
<div id="graph-div" style="height:800px"></div>
<script> $js_text </script>
''')
HTML(html_template.substitute({'js_text': js_text}))
###Output
_____no_output_____
###Markdown
Create Model - Use Pretrained Embeddings, PoS Parsing Prep Data
###Code
MAX_FEATURES = 25000
MAX_LEN = 350
list_sentences_train = train['review'].fillna("UNKNOWN").values.tolist()
list_sentences_test = test['review'].fillna("UNKNOWN").values.tolist()
list_sentences_train_parsed = [transform_doc(x, MAX_LEN=1000) for x in list_sentences_train]
list_sentences_test_parsed = [transform_doc(x, MAX_LEN=1000) for x in list_sentences_test]
with open('data/list_sentences_train_parsed.pkl', 'wb') as f:
pickle.dump(list_sentences_train_parsed, f)
with open('data/list_sentences_test_parsed.pkl', 'wb') as f:
pickle.dump(list_sentences_test_parsed, f)
tokenizer = text.Tokenizer(num_words=MAX_FEATURES, filters='!"#$%&()*+,-/:;<=>?@[\\]^`{}~\t\n', lower=False)
tokenizer.fit_on_texts(list_sentences_train_parsed)
list_tokenized_train = tokenizer.texts_to_sequences(list_sentences_train_parsed)
list_tokenized_test = tokenizer.texts_to_sequences(list_sentences_test_parsed)
X_train = sequence.pad_sequences(list_tokenized_train, maxlen=MAX_LEN)
X_test = sequence.pad_sequences(list_tokenized_test, maxlen=MAX_LEN)
y = train['sentiment'].values.reshape(-1,1)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.1, random_state=42)
###Output
_____no_output_____
###Markdown
Inspect
###Code
list_sentences_train_parsed[0]
list_tokenized_train[0]
index_word_dict = {v:k for k, v in tokenizer.word_index.items()}
word_index_dict = tokenizer.word_index
for w in list_tokenized_train[0]:
print(index_word_dict[w])
###Output
With|ADP
going|VERB
down|PART
at|ADP
the_moment|NOUN
with|ADP
MJ|ENT
i|PRON
've|VERB
started|VERB
listening|VERB
to|ADP
his_music|NOUN
|PUNCT
watching|VERB
here|ADV
and|CCONJ
there|ADV
|PUNCT
watched|VERB
and|CCONJ
watched|VERB
Moonwalker|ENT
again|ADV
.|PUNCT
Maybe|ADV
i|PRON
just|ADV
want|VERB
to|PART
get|VERB
into|ADP
this_guy|NOUN
who|NOUN
i|PRON
thought|VERB
was|VERB
really|ADV
cool|ADJ
in|ADP
the_eighties|DATE
just|ADV
to|PART
maybe|ADV
make|VERB
up|PART
my_mind|NOUN
whether|ADP
he|PRON
is|VERB
guilty|ADJ
or|CCONJ
innocent|ADJ
.|PUNCT
Moonwalker|ENT
is|VERB
|PUNCT
which|ADJ
i|PRON
remember|VERB
going|VERB
to|PART
see|VERB
at|ADP
the_cinema|NOUN
when|ADV
it|PRON
was|VERB
originally|ADV
released|VERB
.|PUNCT
Some|DET
of|ADP
it|PRON
has|VERB
subtle_messages|NOUN
about|ADP
towards|ADP
the_press|NOUN
and|CCONJ
also|ADV
of|ADP
drugs|NOUN
are|VERB
br|NOUN
br|SYM
Visually|ADV
impressive|ADJ
but|CCONJ
of|ADP
course|NOUN
this|DET
is|VERB
all|DET
about|ADP
Michael_Jackson|ENT
so|ADP
unless|ADP
you|PRON
remotely|ADV
like|ADP
MJ|ENT
in|ADP
anyway|ADV
then|ADV
you|PRON
are|VERB
going|VERB
to|PART
hate|VERB
this|DET
and|CCONJ
find|VERB
it|PRON
boring|ADJ
.|PUNCT
Some|DET
may|VERB
call|VERB
MJ|ENT
an|DET
for|ADP
consenting|VERB
to|ADP
the_making|NOUN
of|ADP
this_movie|NOUN
BUT|CCONJ
MJ|ENT
and|CCONJ
most|ADJ
of|ADP
his_fans|NOUN
would|VERB
say|VERB
that|ADP
he|PRON
made|VERB
it|PRON
for|ADP
the_fans|NOUN
which|ADJ
if|ADP
true|ADJ
is|VERB
really|ADV
nice|ADJ
of|ADP
him.
br|CARDINAL
br|PUNCT
The|SYM
actual|ADJ
feature|NOUN
film|NOUN
bit|NOUN
when|ADV
it|PRON
finally|ADV
starts|VERB
is|VERB
only|ADV
on|ADP
for|ADP
20_minutes|TIME
or|CCONJ
so|ADV
excluding|VERB
and|CCONJ
Joe_Pesci|ENT
is|VERB
convincing|ADJ
as|ADP
.|PUNCT
Why|ADV
he|PRON
wants|VERB
MJ|ENT
dead|ADJ
so|ADV
bad|ADJ
is|VERB
beyond|ADP
me|PRON
.|PUNCT
Because|ADP
MJ|ENT
overheard|VERB
his_plans|NOUN
|PUNCT
Nah|ENT
|PUNCT
ranted|VERB
that|ADP
he|PRON
wanted|VERB
people|NOUN
to|PART
know|VERB
it|PRON
is|VERB
he|PRON
who|NOUN
is|VERB
supplying|VERB
drugs|NOUN
etc|X
so|ADV
i|PRON
dunno|VERB
|PUNCT
maybe|ADV
he|PRON
just|ADV
hates|VERB
br|ENT
br|PUNCT
Lots|NOUN
of|ADP
in|ADP
this|DET
like|ADP
MJ|ENT
turning|VERB
into|ADP
a_car|NOUN
and|CCONJ
a_robot|NOUN
and|CCONJ
.|PUNCT
Also|ADV
|PUNCT
the_director|NOUN
must|VERB
have|VERB
had|VERB
the_patience|NOUN
of|ADP
a_saint|NOUN
when|ADV
it|PRON
came|VERB
to|ADP
filming|VERB
as|ADP
usually|ADV
directors|NOUN
hate|VERB
working|VERB
with|ADP
one_kid|NOUN
let|VERB
alone|ADV
a_whole_bunch|NOUN
of|ADP
them|PRON
performing|VERB
scene.
br|ENT
br|SYM
Bottom|NOUN
line|NOUN
|PUNCT
this_movie|NOUN
is|VERB
for|ADP
people|NOUN
who|NOUN
like|VERB
MJ|ENT
on|ADP
one_level|NOUN
or|CCONJ
another|DET
|PUNCT
which|ADJ
i|PRON
think|VERB
is|VERB
most_people|NOUN
|PUNCT
.|PUNCT
If|ADP
not|ADV
|PUNCT
then|ADV
stay|VERB
away|ADV
.|PUNCT
It|PRON
does|VERB
try|VERB
and|CCONJ
give|VERB
off|PART
and|CCONJ
in|ADP
this_movie|NOUN
is|VERB
a_girl|NOUN
|PUNCT
Michael_Jackson|ENT
is|VERB
truly|ADV
one|CARDINAL
of|ADP
ever|ADV
to|PART
grace|VERB
this_planet|NOUN
but|CCONJ
is|VERB
he|PRON
guilty|ADJ
|PUNCT
Well|INTJ
|PUNCT
with|ADP
all_the_attention|NOUN
i|PRON
've|VERB
gave|VERB
this_subject|NOUN
....|PUNCT
hmmm|INTJ
well|INTJ
i|PRON
do|VERB
n't|ADV
know|VERB
because|ADP
people|NOUN
can|VERB
be|VERB
different|ADJ
behind|ADP
closed_doors|NOUN
|PUNCT
i|PRON
know|VERB
this|DET
for|ADP
a_fact|NOUN
.|PUNCT
He|PRON
is|VERB
or|CCONJ
one|CARDINAL
of|ADP
.|PUNCT
I|PRON
hope|VERB
he|PRON
is|VERB
not|ADV
the|DET
latter|ADJ
.|PUNCT
###Markdown
Create Embedding Matrix(or load one if you have it)
###Code
w2v_model = Word2Vec.load('models/w2v_model_32_plaintext')
EMBED_SIZE = w2v_model.vector_size
print('The size of the gensim word2vec vocab is: {}'.format(len(w2v_model.wv.vocab.items())))
unknown_word_count = 0
def choose_embedded_vector(word, unknown_word_count, verbose=False):
if word in w2v_model.wv.vocab:
return w2v_model.wv.word_vec(word), unknown_word_count
else:
if verbose:
print('Unknown word: {}'.format(word))
return np.random.rand(EMBED_SIZE), (unknown_word_count+1)
index_word_dict = {v:k for k, v in tokenizer.word_index.items()}
word_index_dict = tokenizer.word_index
num_words = tokenizer.num_words + 1
print('The size of the keras token vocab is: {}'.format(len(index_word_dict)))
print('The tokenizer vocab is limited to: {}'.format(tokenizer.num_words))
embedding_weights = np.zeros((num_words, EMBED_SIZE))
for word, index in word_index_dict.items():
if index < num_words:
embedding_weights[index,:], unknown_word_count = choose_embedded_vector(word, unknown_word_count)
print('Total amount of words not found in gensim word2vec model: {}'.format(unknown_word_count))
print('Embedding matrix shape: {}'.format(embedding_weights.shape))
EMBED_SIZE
###Output
_____no_output_____
###Markdown
Train Model
###Code
CNN_FILTER_SIZE = 32
CNN_KERNEL_SIZE = 3
def create_model():
input_sequence = Input(shape=(MAX_LEN, ))
x = Embedding(input_dim=num_words, output_dim=EMBED_SIZE, input_length=MAX_LEN, mask_zero=False, weights=[embedding_weights], trainable=True)(input_sequence)
x = Dropout(0.5)(x)
x = Conv1D(filters=CNN_FILTER_SIZE, kernel_size=CNN_KERNEL_SIZE, padding='same', kernel_regularizer=l2(0.0001))(x)
#x = Bidirectional(LSTM(32,
# return_sequences=True,
# kernel_regularizer=l2(0.0001)))(x)
#x = GlobalMaxPool1D()(x)
x = AttentionWithContext()(x)
x = BatchNormalization()(x)
x = Dense(32, activation="elu", kernel_regularizer=l2(0.0001))(x)
prediction = Dense(N_CLASSES, activation="sigmoid")(x)
opt = Adam(lr=0.0001, beta_1=0.9, beta_2=0.999, epsilon=None, decay=0.0, amsgrad=False)
#opt = SGD(lr=0.001, momentum=0.0, decay=0.0, nesterov=False)
model = Model(inputs=input_sequence, outputs=prediction)
model.compile(loss='binary_crossentropy',
optimizer=opt,
metrics=['accuracy'])
return model
model = create_model()
BATCH_SIZE = 512
EPOCHS = 50
FILE_PATH = "models/keras_model_weights.hdf5"
checkpoint = ModelCheckpoint(FILE_PATH, monitor='val_loss', verbose=1, save_best_only=True, mode='min')
early = EarlyStopping(monitor="val_loss", mode="min", patience=15)
callbacks_list = [checkpoint, early]
model.summary()
model.fit(X_train, y_train, batch_size=BATCH_SIZE, epochs=EPOCHS, validation_data=[X_val, y_val], callbacks=callbacks_list)
pd.DataFrame(model.history.history).plot(figsize=(12,8))
model = load_model(filepath=FILE_PATH, custom_objects={'AttentionWithContext':AttentionWithContext})
y_hat = model.predict(X_val)
print(accuracy_score(y_val, y_hat > 0.5))
print(roc_auc_score(y_val, y_hat))
print(confusion_matrix(y_val, y_hat > 0.5))
print(classification_report(y_val, y_hat > 0.5))
###Output
0.8772
0.951000501915
[[1038 190]
[ 117 1155]]
precision recall f1-score support
0 0.90 0.85 0.87 1228
1 0.86 0.91 0.88 1272
avg / total 0.88 0.88 0.88 2500
###Markdown
Helper Functions A lot of the spacy code was pulled from examples: https://github.com/explosion
###Code
nlp = spacy.load('en_core_web_sm')
LABELS = {
'ENT': 'ENT',
'PERSON': 'ENT',
'NORP': 'ENT',
'FAC': 'ENT',
'ORG': 'ENT',
'GPE': 'ENT',
'LOC': 'ENT',
'LAW': 'ENT',
'PRODUCT': 'ENT',
'EVENT': 'ENT',
'WORK_OF_ART': 'ENT',
'LANGUAGE': 'ENT',
'DATE': 'DATE',
'TIME': 'TIME',
'PERCENT': 'PERCENT',
'MONEY': 'MONEY',
'QUANTITY': 'QUANTITY',
'ORDINAL': 'ORDINAL',
'CARDINAL': 'CARDINAL'
}
pre_format_re = re.compile(r'^[\`\*\~]')
post_format_re = re.compile(r'[\`\*\~]$')
url_re = re.compile(r'\[([^]]+)\]\(%%URL\)')
link_re = re.compile(r'\[([^]]+)\]\(https?://[^\)]+\)')
def strip_meta(text):
if type(text) == str:
text = link_re.sub(r'\1', text)
text = text.replace('>', '>').replace('<', '<')
text = pre_format_re.sub('', text)
text = post_format_re.sub('', text)
return text
else:
return ''
def represent_word(word):
if word.like_url:
return '%%URL|X'
text = re.sub(r'\s', '_', word.text)
tag = LABELS.get(word.ent_type_, word.pos_)
if not tag:
tag = '?'
return text + '|' + tag
def merge_clean_sentence(nlp, text, collapse_punctuation=True, collapse_phrases=True):
doc = nlp(text)
if collapse_punctuation:
spans = []
for word in doc[:-1]:
if word.is_punct:
continue
if not word.nbor(1).is_punct:
continue
start = word.i
end = word.i + 1
while end < len(doc) and doc[end].is_punct:
end += 1
span = doc[start : end]
spans.append(
(span.start_char, span.end_char,
{'tag': word.tag_, 'lemma': word.lemma_, 'ent_type': word.ent_type_})
)
for start, end, attrs in spans:
doc.merge(start, end, **attrs)
if collapse_phrases:
for np in list(doc.noun_chunks):
np.merge(tag=np.root.tag_, lemma=np.root.lemma_, ent_type=np.root.ent_type_)
return doc
def transform_doc(text, MAX_LEN):
d = merge_clean_sentence(nlp, text, collapse_punctuation=False, collapse_phrases=True)
strings = []
for sent in d.sents:
if sent.text.strip():
for w in sent:
if not w.is_space:
strings.append(represent_word(w))
if strings:
return ' '.join(strings[0:MAX_LEN])
else:
return ' '.join(['' for x in range(MAX_LEN)])
###Output
_____no_output_____
###Markdown
Attention adapted from: https://gist.github.com/cbaziotis/6428df359af27d58078ca5ed9792bd6d
###Code
def dot_product(x, kernel):
"""
Wrapper for dot product operation, in order to be compatible with both
Theano and Tensorflow
Args:
x (): input
kernel (): weights
Returns:
"""
if K.backend() == 'tensorflow':
return K.squeeze(K.dot(x, K.expand_dims(kernel)), axis=-1)
else:
return K.dot(x, kernel)
class AttentionWithContext(Layer):
"""
Attention operation, with a context/query vector, for temporal data.
Supports Masking.
Follows the work of Yang et al. [https://www.cs.cmu.edu/~diyiy/docs/naacl16.pdf]
"Hierarchical Attention Networks for Document Classification"
by using a context vector to assist the attention
# Input shape
3D tensor with shape: `(samples, steps, features)`.
# Output shape
2D tensor with shape: `(samples, features)`.
How to use:
Just put it on top of an RNN Layer (GRU/LSTM/SimpleRNN) with return_sequences=True.
The dimensions are inferred based on the output shape of the RNN.
Note: The layer has been tested with Keras 2.0.6
Example:
model.add(LSTM(64, return_sequences=True))
model.add(AttentionWithContext())
# next add a Dense layer (for classification/regression) or whatever...
"""
def __init__(self,
W_regularizer=None, u_regularizer=None, b_regularizer=None,
W_constraint=None, u_constraint=None, b_constraint=None,
bias=True, **kwargs):
self.supports_masking = True
self.init = initializers.get('glorot_uniform')
self.W_regularizer = regularizers.get(W_regularizer)
self.u_regularizer = regularizers.get(u_regularizer)
self.b_regularizer = regularizers.get(b_regularizer)
self.W_constraint = constraints.get(W_constraint)
self.u_constraint = constraints.get(u_constraint)
self.b_constraint = constraints.get(b_constraint)
self.bias = bias
super(AttentionWithContext, self).__init__(**kwargs)
def build(self, input_shape):
assert len(input_shape) == 3
self.W = self.add_weight((input_shape[-1], input_shape[-1],),
initializer=self.init,
name='{}_W'.format(self.name),
regularizer=self.W_regularizer,
constraint=self.W_constraint)
if self.bias:
self.b = self.add_weight((input_shape[-1],),
initializer='zero',
name='{}_b'.format(self.name),
regularizer=self.b_regularizer,
constraint=self.b_constraint)
self.u = self.add_weight((input_shape[-1],),
initializer=self.init,
name='{}_u'.format(self.name),
regularizer=self.u_regularizer,
constraint=self.u_constraint)
super(AttentionWithContext, self).build(input_shape)
def compute_mask(self, input, input_mask=None):
# do not pass the mask to the next layers
return None
def call(self, x, mask=None):
uit = dot_product(x, self.W)
if self.bias:
uit += self.b
uit = K.tanh(uit)
ait = dot_product(uit, self.u)
a = K.exp(ait)
# apply mask after the exp. will be re-normalized next
if mask is not None:
# Cast the mask to floatX to avoid float64 upcasting in theano
a *= K.cast(mask, K.floatx())
# in some cases especially in the early stages of training the sum may be almost zero
# and this results in NaN's. A workaround is to add a very small positive number ε to the sum.
# a /= K.cast(K.sum(a, axis=1, keepdims=True), K.floatx())
a /= K.cast(K.sum(a, axis=1, keepdims=True) + K.epsilon(), K.floatx())
a = K.expand_dims(a)
weighted_input = x * a
return K.sum(weighted_input, axis=1)
def compute_output_shape(self, input_shape):
return input_shape[0], input_shape[-1]
###Output
_____no_output_____ |
community_model_approaches_26.ipynb | ###Markdown
Community modeling. In this notebook we will implement a method to create community models of two or more species-specific metabolic models using cobrapy.
###Code
# imports needed by the cells below
import cobra
import pandas as pd
import matplotlib.pyplot as plt

model_DP = cobra.io.read_sbml_model("models/consistent_DP_SNM.xml")
model_SA = cobra.io.read_sbml_model("models/consistent_iYS854_SNM.xml")
print("Growth: ", model_DP.slim_optimize())
print("Growth: ", model_SA.slim_optimize())
for rec in model_SA.reactions:
rec.lower_bound = max(rec.lower_bound, -1000)
rec.upper_bound = min(rec.upper_bound, 1000)
snm3 = pd.read_csv("SNM3.csv", sep =";")
snm3.head()
BIOMASS_DP = "Growth"
BIOMASS_SA = "BIOMASS_iYS_wild_type"
models = [model_DP.copy(), model_SA.copy()]
from community_models import *
import json
compm_SA = json.loads(open("compm_SA.json").read())
compm_DP = json.loads(open("compm_DP.json").read())
model_DP.medium = compm_DP
model_SA.medium = compm_SA
model1 = Model(model_DP, BIOMASS_DP)
model2 = Model(model_SA, BIOMASS_SA)
community_model1 = model1 + model2
community_model2 = MIP_community_model(model1, model2)
community_model3_1_1 = create_bag_of_react_model([model_DP, model_SA],[BIOMASS_DP, BIOMASS_SA], [1,1])
community_model3_10_1 = create_bag_of_react_model([model_DP, model_SA],[BIOMASS_DP, BIOMASS_SA], [10,1])
community_model1.set_weights([1,1])
print("MBR Weights 1:1: ", community_model1.slim_optimize())
single_growth = community_model1.optimize().x[community_model1.objective_c != 0]
print("DP growth: ", single_growth[0])
print("SA growth: ", single_growth[1])
community_model1.set_weights([10,1])
print("MBR Weights 10:1: ", community_model1.slim_optimize())
single_growth = community_model1.optimize().x[community_model1.objective_c != 0]
print("DP growth: ", single_growth[0])
print("SA growth: ", single_growth[1])
community_model2.weights = [1,1]
print("MBR Weights 1:1: ", community_model2.optimize())
print("SA growth: ", community_model2.x2[community_model2.obj2].x)
print("DP growth: ", community_model2.x1[community_model2.obj1].x)
community_model2.weights = [10,1]
print("MBR Weights 10:1: ", community_model2.optimize())
print("SA growth: ", community_model2.x2[community_model2.obj2].x)
print("DP growth: ", community_model2.x1[community_model2.obj1].x)
print("MBR Weights 1:1: ", community_model3_1_1.slim_optimize())
print("SA growth: " + str(community_model3_1_1.reactions.get_by_id(BIOMASS_SA).flux))
print("DP growth: " + str(community_model3_1_1.reactions.get_by_id(BIOMASS_DP).flux))
print("MBR Weights 10:1: ", community_model3_10_1.slim_optimize())
print("SA growth: " + str(community_model3_10_1.reactions.get_by_id(BIOMASS_SA).flux))
print("DP growth: " + str(community_model3_10_1.reactions.get_by_id(BIOMASS_DP).flux))
coopm = community_model2.compute_coopm()
coopm2 = optimize_coopm_community(community_model3_1_1, community_model3_1_1.slim_optimize(), [BIOMASS_DP, BIOMASS_SA], [1,1])
coopm
coopm2
community_model2.set_medium(coopm)
community_model2.weights = [1,1]
print("MBR Weights 1:1: ", community_model2.optimize())
print("SA growth: ", community_model2.x2[community_model2.obj2].x)
print("DP growth: ", community_model2.x1[community_model2.obj1].x)
community_model2.weights = [10,1]
print("MBR Weights 10:1: ", community_model2.optimize())
print("SA growth: ", community_model2.x2[community_model2.obj2].x)
print("DP growth: ", community_model2.x1[community_model2.obj1].x)
community_model3_1_1.medium = coopm
print("MBR Weights 1:1: ", community_model3_1_1.slim_optimize())
print("SA growth: " + str(community_model3_1_1.reactions.get_by_id(BIOMASS_SA).flux))
print("DP growth: " + str(community_model3_1_1.reactions.get_by_id(BIOMASS_DP).flux))
community_model3_10_1.medium = coopm
print("MBR Weights 10:1: ", community_model3_10_1.slim_optimize())
print("SA growth: " + str(community_model3_10_1.reactions.get_by_id(BIOMASS_SA).flux))
print("DP growth: " + str(community_model3_10_1.reactions.get_by_id(BIOMASS_DP).flux))
community_model2.set_medium(coopm2)
community_model2.weights = [1,1]
print("MBR Weights 1:1: ", community_model2.optimize())
print("SA growth: ", community_model2.x2[community_model2.obj2].x)
print("DP growth: ", community_model2.x1[community_model2.obj1].x)
community_model2.weights = [10,1]
print("MBR Weights 10:1: ", community_model2.optimize())
print("SA growth: ", community_model2.x2[community_model2.obj2].x)
print("DP growth: ", community_model2.x1[community_model2.obj1].x)
community_model3_1_1.medium = coopm2
print("MBR Weights 1:1: ", community_model3_1_1.slim_optimize())
print("SA growth: " + str(community_model3_1_1.reactions.get_by_id(BIOMASS_SA).flux))
print("DP growth: " + str(community_model3_1_1.reactions.get_by_id(BIOMASS_DP).flux))
community_model3_10_1.medium = coopm2
print("MBR Weights 10:1: ", community_model3_10_1.slim_optimize())
print("SA growth: " + str(community_model3_10_1.reactions.get_by_id(BIOMASS_SA).flux))
print("DP growth: " + str(community_model3_10_1.reactions.get_by_id(BIOMASS_DP).flux))
###Output
MBR Weights 1:1: 0.5117389225411352
SA growth: 0.2558694612705676
DP growth: 0.2558694612705676
MBR Weights 10:1: 2.8145640739762436
SA growth: 0.2558694612705676
DP growth: 0.2558694612705676
###Markdown
COOPM alphas model. Here is a collection of COOPM media for different alpha values.
###Code
community_model2 = MIP_community_model(model1, model2)
alphas = [0.,0.01,0.1,0.2,0.5,0.8,0.9,0.99,1.]
coopms = []
for alpha in alphas:
coopms.append(community_model2.compute_alpha_coopm(alpha))
df = pd.DataFrame(coopms)
df.index = alphas
df.T.plot.bar(figsize=(20,10))
plt.yscale("log")
plt.ylabel("COOPM medium flux")
plt.xlabel("COOPM medium")
plt.savefig("COOPM_alpha_plot.pdf")
###Output
_____no_output_____ |
content/exercises/exercise1/exercise1.ipynb | ###Markdown
APCOMP 295 Advanced Practical Data Science Homework 1 - Google Cloud Setup, Docker & Flask**Harvard University****Fall 2020****Instructors**: Pavlos Protopapas **Instructions**: - Each assignment is graded out of 5 points. - Submit this homework individually at Canvas. We illustrated two very simple applications of containers during the demo in class and demonstrated them step by step. Now we should learn how to deploy a more complex application on the cloud. Real-world applications are often composed of many different components. Putting it altogether comprises many steps, which we walk you through below. It is not unusual that the tutorial instructions available on the internet, or in this case the assignment instructions, do not match exactly with what you will face - this is due to the quick development cycles of the libraries and user interfaces. Therefore, it is one of the learning objectives of this homework to learn how to become more comfortable with all the components, follow tutorials and how all connect together. You should use all resources available, including the class forum, or other forums and do not hesitate to ask the teaching staff. We will be using Google Cloud, and one of the goals of this assignment is to get you set up with Google Cloud. The final goal is to run your first Docker application with Flask on the cloud ! Remember to delete your virtual machines/clusters. Question 1: Create Accounts at Cloud Services (0.5 point)If you have not already done so, create accounts at the following cloud services we will be using in the course:- GitHub: www.github.com- Google Cloud: https://cloud.google.com/ (new users may be offered \$300 free trial - valid for 3 months - See this for more https://cloud.google.com/free/docs/gcp-free-tier ) - DockerHub: https://hub.docker.com/ Submit:1. your username for GitHub and Google Cloud2. a screenshot of the “profile” page of your GitHub account and the Google Cloud console3. A screenshot of the “Account settings” (top half) page of Docker Hub account. Example Submission:GitHub username = rashmigb Google username = [email protected] (Your screenshot may look different, if you are exising user) Docker Hub:  1.2 SubmissionGithub Username: simonwarcholGoogle username = [email protected]Dockerhub username = simonwarchol Question 2: Create project on Google Cloud and Install Google cloud sdk (1 point) 1. Redeem your Google Cloud credit (You will need @g.harvard.edu email account, look for announcement on Ed before proceeding) (this should create a billing account “AC295 Advance Practical Data Science”) Every project on Google cloud must be associated with a billing account. 2. Create a project on Google Cloud Console (top left -> “Select a project” -> New project OR Dashboard -> Create Project) (if you are using .harvard you may select the .harvard organization).Project name: ac295-data-science3. We will be using gcloud command line. Please follow the instructions on this page to install https://cloud.google.com/sdk/docs/quickstarts . Submit: 1. Screen shot of your new project2. output from `gcloud config list`. Example Submission:```(base) Rashmis-MBP-2:~ rashmi$ gcloud config list[compute]region = us-east1zone = us-east1-b[core]account = [email protected]_usage_reporting = Falseproject = ac295datascienceYour active configuration is: [default](base) Rashmis-MBP-2:~ rashmi$ ``` 2.1 Submission Question 3: Set up SSH Keys (0.5 points)Many tasks in deploying apps to the cloud are done at the command line. 
SSH is an essential tool for secure use of command line interfaces to remote systems. SSH is also more convenient than password authentication once you’ve set it up.If you have not already done so, create a default SSH key on your main computer.Configure GitHub so that you can access your repositories using SSH. You can follow this tutorial:https://docs.github.com/en/github/authenticating-to-github/adding-a-new-ssh-key-to-your-github-account .Try to clone one of your repositories at the command line using the SSH interface. Submit:1. a screenshot of your GitHub account showing at least one SSH key.2. a screenshot of a terminal where you clone any repo with the SSH. Please note that it is always safe to share the public part of your SSH key as in submission 1, but you should **never** share a private RSA key with anyone!As an example of cloning by SSH, when I type: ` git clone [email protected]:Harvard-IACS/2020F-AC295-private.git` this command works "by magic" on a computer that has one of the SSH keys I’ve configured on GitHub in the default key location `~/.ssh/id_rsa` (linux and Mac) or `C:\Users\\.ssh\id_rsa` (Windows 10) Example Submission:   3.1 Submission Flask Flask is a python based web framework. There are many others for e.g. Django, FastApi and next month will be more. We’ll stick to Flask in this course for many of the examples we will be doing. Though this class is not about teaching web developement, it is advisable to familiarize yourself with it. In the next question we will be using a very simple flask appecho.py. We found the following video to be very instructive (aka a little long): Youtube - https://youtu.be/Z1RJmh_OqeA Question 4: Your First Docker App on Google Cloud (2 points) We are going to create an echo service, which displays back what we send. We will do this on Google Cloud and also create a docker image on Google Cloud.Follow these steps (watch the demo video https://youtu.be/aI6jTjwxWVI and replicate yourself):- Add a firewall rule (the Compute Engine API must first be enabled before firewall rules can be created). If you do not see the firewall at the top of the pull down menu, look for "Networking". - Create a Virtual Machine on google cloud.- Install pip3, pandas, flask. - Copy echo.py and Dockerfile from https://github.com/bgweber/StartupDataScience/tree/master/containers/echo to the VM- Install Docker => https://docs.docker.com/engine/install/debian/ - Create Dockerfile, Docker Image and run - **Must delete VM after you are done.** Submit (i) screenshot displaying a running VM on google cloud (ii) screenshot of the browser where echo service displays results (with url) (iii) screenshot of docker container running and docker images. (iv) log/notes of commands used. Also, don’t hesitate to post questions on the Ed class discussion board. One of your classmates or the teaching staff can help.
###Code
from IPython.display import YouTubeVideo
YouTubeVideo('aI6jTjwxWVI') #https://youtu.be/aI6jTjwxWVI
###Output
_____no_output_____
###Markdown
4.2 Submission Question 5: Echo Translation (1 point) Now that we have learned all the mechanics, let's create a system similar to the one in Question 4, but this time it should display the input text translated into a language of your choice. **Submit:** the modified echo.py, i.e., copy and paste the code in a cell below (please use cell -> Raw NBConvert), and (i) a screenshot displaying a running VM on Google Cloud (ii) a screenshot of the browser where the echo service displays results (with URL) (iii) a screenshot of the Docker container running and the Docker images. (iv) a log/notes of the commands used. **Hints:** - Make sure you can display characters that are not ASCII. Try different languages. - Hint: `pip install googletrans`  5.1 Submission
###Code
# load Flask
import flask
from googletrans import Translator
app = flask.Flask(__name__)
# define a predict function as an endpoint
@app.route("/predict", methods=["GET", "POST"])
def predict():
data = {"success": False}
# get the request parameters
params = flask.request.json
if (params == None):
params = flask.request.args
# if parameters are found, echo the msg parameter
if (params != None):
translator = Translator()
data["language"] = "Spanish"
data["response"] = params.get("msg")
# converting from english to spanish
translated_text = translator.translate(data["response"], src='en', dest="es")
data["translated_text"] = translated_text.text
data["success"] = True
# return a response in json format
return flask.jsonify(data)
# This allows for unicode characters in the response
app.config['JSON_AS_ASCII'] = False
# start the flask app, allow remote connections
app.run(host='0.0.0.0')
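
# example client call once the service is running (the IP below is a placeholder):
#   import requests
#   print(requests.get("http://<vm-external-ip>:5000/predict", params={"msg": "hello"}).json())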
###Output
_____no_output_____ |
lab2/astr3400_lab2.ipynb | ###Markdown
Lab 2: set up a numerical hydrostatic equilibriumFollow all of the steps below. You will need to write your own code where prompted (e.g., to calculate enclosed mass and the pressure profile). Please answer all questions, and make plots where requested. For grading, it would be useful to write short comments like those provided to explain what your code is doing.Collaborate in teams, ask your classmates, and reach out to the teaching team when stuck! define a set of zone locations in radius
###Code
import numpy as np
import matplotlib.pyplot as plt

# number of zone centers
n_zones = 128
# inner and outer radius in cm
r_inner = 1e8
r_outer = 1e12
# calculate the radius of each zone *interface*, with the innermost interface at r_inner and the outermost at r_outer
r_I = r_inner*10**(np.arange(n_zones+1)/n_zones*np.log10(r_outer/r_inner))
# now use the interface locations to calculate the zone centers, halfway in between inner/outer interface
# this is the size of each zone
Delta_r = r_I[1:]-r_I[:-1]
# this is the set of zone centers
r_zones = r_I[:-1]+Delta_r/2.
# let's visualize the grid, choosing every 4th point
# note that the plot is on a log scale in x, while the zone centers are supposed to be midway between the interfaces
# as defined on a linear scale
for rr in r_I[::4]:
plt.semilogx(rr+np.zeros(50),np.arange(50)/49.,linestyle='-',color='k')
plt.semilogx(r_zones[1::4],np.zeros_like(r_zones[1::4])+0.5,marker='o',linestyle='',markersize=10)
###Output
_____no_output_____
###Markdown
set a "power law" density profile $\rho(r) \propto r^{-2}$
###Code
# let the inner density be some arbitrary value in g cm^-3, here a value typical of Sun-like stars on the main sequence
rho0 = 1e2
# calculate the density profile at zone centers
rho_zones = rho0*(r_zones/r_inner)**(-2.)
###Output
_____no_output_____
###Markdown
1. calculate the mass enclosed in each zone and the initial net velocity
###Code
# one possible completion: the mass in each zone is dm = rho * (4*pi/3)*(r_out^3 - r_in^3),
# and the enclosed mass is the cumulative sum of the zone masses
dm = rho_zones * 4./3.*np.pi*(r_I[1:]**3 - r_I[:-1]**3)
m_zones = np.cumsum(dm)
v_zones = np.zeros_like(rho_zones)
###Output
_____no_output_____
###Markdown
2. use the discretized hydrostatic equilibrium equation to calculate the pressure at each interface--think about how to do the calculation one zone at a time, going backwards from the outer zone--what is the pressure at the outer boundary? (the "boundary condition" needed to solve the differential equation)
###Code
# solve for P_I
P_I = np.zeros_like(r_I)
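
# one possible discretization (an assumption -- other differencing choices work too):
# integrate dP/dr = -G*m(r)*rho/r^2 inward, starting from P = 0 at the outermost interface
G = 6.674e-8  # Newton's constant in cgs units
for i in range(n_zones-1, -1, -1):
    P_I[i] = P_I[i+1] + G*m_zones[i]*rho_zones[i]*Delta_r[i]/r_zones[i]**2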
###Output
_____no_output_____
###Markdown
3. test how well our differenced equation worksCompare the left hand side and right hand side of the numerical hydrostatic equilibrium equation. Measure the error as for example |left hand side - right hand side| / |left hand side|
###Code
# enter code here
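# a possible check, assuming the pressure sketch above was used: compare the
# finite-difference pressure gradient (left hand side) with -G*m*rho/r^2 (right hand side)
lhs = (P_I[1:] - P_I[:-1])/Delta_r
rhs = -G*m_zones*rho_zones/r_zones**2
error = np.abs(lhs - rhs)/np.abs(lhs)
print("maximum fractional error:", error.max())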
###Output
_____no_output_____
###Markdown
4. calculate P at zone centers
###Code
# one simple choice: average the pressures at the two interfaces bounding each zone
P_zones = 0.5*(P_I[:-1] + P_I[1:])
###Output
_____no_output_____
###Markdown
now we want to put this set of fluid variables (along with v = 0) into the time-dependent hydro code! 5. make a prediction: when we do this, what do you expect for the behavior of rho(r), P(r), v(r) as functions of time? write your prediction here in 1-2 sentences now let's setup a hydro problem using the data generated abovethe cell below is defining a hydrodynamics problem using our hydro3.py code, defining initial and boundary conditions, and then replacing its own initial data with what we have generated above.you do not need to edit this cell, but please reach out with questions if you're wondering what it does.
###Code
import hydro3
# define a dictionary of arguments that we will pass to the hydrodynamics code specifying the problem to run
args = {'nz':4000,'ut0':3e5,'udens':1e-5,'utslope':0.,'pin':0,'piston_eexp':1e51,'v_piston':1e9,'piston_stop':10,'r_outer':5e13,'rmin':1e7,'t_stop':1e6,'noplot':1}
# define the variable h which is a "lagrange_hydro_1d" object (instance of a class)
h = hydro3.lagrange_hydro_1d(**args)
# variables stored within our object h are accessed by h.variable_name
h.bctype=[h.INFLOW, h.OUTFLOW]
h.itype=h.POWERLAW
h.setup_initial_conditions()
# here we replace the code's initial conditions data with our own
# (no need to edit these lines!)
# number of zones
h.nz = n_zones
# zones.r are the outer interface positions
h.r_inner = r_I[0]/2.
h.zones.r = r_I[1:]
h.zones.dr = r_I[1:]-r_I[:-1]
# v = 0 everywhere initially
h.zones.v = np.zeros_like(h.zones.r)
# density, mass, pressure at zone centers
h.zones.mass = dm
h.zones.mcum = m_zones
h.zones.d = rho_zones
h.zones.p = P_zones
# equation of state to compute u/rho from p
h.zones.e = 1./(h.gamma-1.)*h.zones.p/h.zones.d
# there's no mass inside the inner boundary
h.mass_r_inner = h.zones.mass[0]
# artificial viscosity (ignore for now!)
h.zones.q = hydro3.get_viscosity(h.zones,h.v_inner,h.C_q)
h.initialize_boundary_conditions()
###Output
_____no_output_____
###Markdown
let's run the code and see what happens!
###Code
h.run()
###Output
_____no_output_____
###Markdown
6. make plots of:--mass density vs radius (log-log "plt.loglog")--velocity vs radius (linear-log "plt.semilogx"). Qualitatively, what does it look like has happened? Does this match your expectations? Why or why not?
###Code
# plotting code here
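# a possible set of plots, using the zone arrays from the hydro object h
# (attribute names follow the setup cell above)
plt.loglog(h.zones.r, h.zones.d)
plt.xlabel('r [cm]')
plt.ylabel('density [g cm$^{-3}$]')
plt.show()
plt.semilogx(h.zones.r, h.zones.v)
plt.xlabel('r [cm]')
plt.ylabel('velocity [cm s$^{-1}$]')
plt.show()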
###Output
_____no_output_____
###Markdown
7. finally, measure a few global quantities: $E_{\rm tot}$, $E_k$ ($K$), $E_{\rm int}$ ($U$), $E_{\rm grav}$ ($W$)What do you think the kinetic energy should be in hydrostatic equilibrium?Then use the Virial theorem to calculate (or look up or ask about) expected relationships between the thermal (internal) energy and the gravitational energy, and in turn the total (kinetic+gravitational+thermal) energy.How well does your result agree with expectations? You can measure fractional errors for example as |expected value - numerical value| / |expected value|. When the expected value is zero, you could instead use something like we did above: |left hand side - right hand side| / |left hand side|.
###Code
# calculate quantities and errors here
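# one possible approach (a sketch -- attribute names follow the setup cell above,
# and G is the value defined earlier):
E_k = np.sum(0.5*h.zones.mass*h.zones.v**2)                 # kinetic energy
E_int = np.sum(h.zones.mass*h.zones.e)                      # internal energy (e is u/rho)
E_grav = -np.sum(G*h.zones.mcum*h.zones.mass/h.zones.r)     # approximate gravitational energy
E_tot = E_k + E_int + E_grav
print("E_k = %.3e erg, E_int = %.3e erg, E_grav = %.3e erg, E_tot = %.3e erg"
      % (E_k, E_int, E_grav, E_tot))
# virial check: 3*(gamma-1)*E_int + E_grav should be close to 0 in hydrostatic equilibrium
print("virial error:", np.abs(3.*(h.gamma - 1.)*E_int + E_grav)/np.abs(E_grav))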
###Output
_____no_output_____ |
notebooks/basic_joinnodes.ipynb | ###Markdown
JoinNodeJoinNode has the opposite effect of [iterables](basic_iteration.ipynb). Where `iterables` split up the execution workflow into many different branches, a JoinNode merges them back into one node. For a more detailed explanation, check out [JoinNode, synchronize and itersource](http://nipype.readthedocs.io/en/latest/users/joinnode_and_itersource.html) from the main homepage. Simple exampleLet's consider the very simple example depicted at the top of this page: ```pythonfrom nipype import Node, JoinNode, Workflow Specify fake input node Aa = Node(interface=A(), name="a") Iterate over fake node B's input 'in_file'b = Node(interface=B(), name="b")b.iterables = ('in_file', [file1, file2]) Pass results on to fake node Cc = Node(interface=C(), name="c") Join forked execution workflow in fake node Dd = JoinNode(interface=D(), joinsource="b", joinfield="in_files", name="d") Put everything into a workflow as usualworkflow = Workflow(name="workflow")workflow.connect([(a, b, [('subject', 'subject')]), (b, c, [('out_file', 'in_file')]) (c, d, [('out_file', 'in_files')]) ])``` As you can see, setting up a ``JoinNode`` is rather simple. The only differences to a normal ``Node`` are the ``joinsource`` and the ``joinfield``. ``joinsource`` specifies from which node the information to join is coming and the ``joinfield`` specifies the input field of the JoinNode where the information to join will be entering the node. More realistic exampleLet's consider another example where we have one node that iterates over 3 different numbers and generates random numbers. Another node joins those three different numbers (each coming from a separate branch of the workflow) into one list. To make the whole thing a bit more realistic, the second node will use the ``Function`` interface to do something with those numbers, before we spit them out again.
###Code
from nipype import JoinNode, Node, Workflow
from nipype.interfaces.utility import Function, IdentityInterface
def get_data_from_id(id):
"""Generate a random number based on id"""
import numpy as np
return id + np.random.rand()
def merge_and_scale_data(data2):
"""Scale the input list by 1000"""
import numpy as np
return (np.array(data2) * 1000).tolist()
node1 = Node(Function(input_names=['id'],
output_names=['data1'],
function=get_data_from_id),
name='get_data')
node1.iterables = ('id', [1, 2, 3])
node2 = JoinNode(Function(input_names=['data2'],
output_names=['data_scaled'],
function=merge_and_scale_data),
name='scale_data',
joinsource=node1,
joinfield=['data2'])
wf = Workflow(name='testjoin')
wf.connect(node1, 'data1', node2, 'data2')
eg = wf.run()
wf.write_graph(graph2use='exec')
from IPython.display import Image
Image(filename='graph_detailed.png')
###Output
_____no_output_____
###Markdown
Now, let's look at the input and output of the joinnode:
###Code
res = [node for node in eg.nodes() if 'scale_data' in node.name][0].result
res.outputs
res.inputs
###Output
_____no_output_____
###Markdown
Extending to multiple nodesWe extend the workflow by using three nodes. Note that even this workflow, the joinsource corresponds to the node containing iterables and the joinfield corresponds to the input port of the JoinNode that aggregates the iterable branches. As before the graph below shows how the execution process is setup.
###Code
def get_data_from_id(id):
import numpy as np
return id + np.random.rand()
def scale_data(data2):
import numpy as np
return data2
def replicate(data3, nreps=2):
return data3 * nreps
node1 = Node(Function(input_names=['id'],
output_names=['data1'],
function=get_data_from_id),
name='get_data')
node1.iterables = ('id', [1, 2, 3])
node2 = Node(Function(input_names=['data2'],
output_names=['data_scaled'],
function=scale_data),
name='scale_data')
node3 = JoinNode(Function(input_names=['data3'],
output_names=['data_repeated'],
function=replicate),
name='replicate_data',
joinsource=node1,
joinfield=['data3'])
wf = Workflow(name='testjoin')
wf.connect(node1, 'data1', node2, 'data2')
wf.connect(node2, 'data_scaled', node3, 'data3')
eg = wf.run()
wf.write_graph(graph2use='exec')
Image(filename='graph_detailed.png')
###Output
_____no_output_____
###Markdown
Exercise 1You have a list of the subjects' dates of birth in a few different formats: ``["10 February 1984", "March 5 1990", "April 2 1782", "June 6, 1988", "12 May 1992"]``, and you want to sort the list. You can use ``Node`` with ``iterables`` to extract day, month and year, use [datetime.datetime](https://docs.python.org/2/library/datetime.html) to unify the format so the dates can be compared, and use ``JoinNode`` to sort the list.
###Code
# write your solution here
# the list of all DOB
dob_subjects = ["10 February 1984", "March 5 1990", "April 2 1782", "June 6, 1988", "12 May 1992"]
# let's start from creating Node with iterable to split all strings from the list
from nipype import Node, JoinNode, Function, Workflow
def split_dob(dob_string):
return dob_string.split()
split_node = Node(Function(input_names=["dob_string"],
output_names=["split_list"],
function=split_dob),
name="splitting")
#split_node.inputs.dob_string = "10 February 1984"
split_node.iterables = ("dob_string", dob_subjects)
# and now let's work on the date format more, independently for every element
# sometimes the second element has an extra "," that we should remove
def remove_comma(str_list):
str_list[1] = str_list[1].replace(",", "")
return str_list
cleaning_node = Node(Function(input_names=["str_list"],
output_names=["str_list_clean"],
function=remove_comma),
name="cleaning")
# now we can extract year, month, day from our list and create a ``datetime.datetime`` object
def datetime_format(date_list):
import datetime
# year is always the last
year = int(date_list[2])
#day and month can be in the first or second position
# we can use datetime.datetime.strptime to convert name of the month to integer
try:
day = int(date_list[0])
month = datetime.datetime.strptime(date_list[1], "%B").month
except(ValueError):
day = int(date_list[1])
month = datetime.datetime.strptime(date_list[0], "%B").month
# and create datetime.datetime format
return datetime.datetime(year, month, day)
datetime_node = Node(Function(input_names=["date_list"],
output_names=["datetime"],
function=datetime_format),
name="datetime")
# now we are ready to create JoinNode and sort the list of DOB
def sorting_dob(datetime_list):
datetime_list.sort()
return datetime_list
sorting_node = JoinNode(Function(input_names=["datetime_list"],
output_names=["dob_sorted"],
function=sorting_dob),
joinsource=split_node, # this is the node that used iterables for x
joinfield=['datetime_list'],
name="sorting")
# and we're ready to create workflow
ex1_wf = Workflow(name="sorting_dob")
ex1_wf.connect(split_node, "split_list", cleaning_node, "str_list")
ex1_wf.connect(cleaning_node, "str_list_clean", datetime_node, "date_list")
ex1_wf.connect(datetime_node, "datetime", sorting_node, "datetime_list")
# you can check the graph
from IPython.display import Image
ex1_wf.write_graph(graph2use='exec')
Image(filename='graph_detailed.png')
# and run the workflow
ex1_res = ex1_wf.run()
# you can check list of all nodes
ex1_res.nodes()
# and check the results from sorting_dob.sorting
list(ex1_res.nodes())[0].result.outputs
###Output
_____no_output_____
###Markdown
JoinNode, synchronize and itersourceJoinNode has the opposite effect of [iterables](basic_iteration.ipynb). Where `iterables` split up the execution workflow into many different branches, a `JoinNode` merges them back into one node. A `JoinNode` generalizes `MapNode` to operate in conjunction with an upstream `iterable` node to reassemble downstream results, e.g.: Simple exampleLet's consider the very simple example depicted at the top of this page: ```pythonfrom nipype import Node, JoinNode, Workflow Specify fake input node Aa = Node(interface=A(), name="a") Iterate over fake node B's input 'in_file'b = Node(interface=B(), name="b")b.iterables = ('in_file', [file1, file2]) Pass results on to fake node Cc = Node(interface=C(), name="c") Join forked execution workflow in fake node Dd = JoinNode(interface=D(), joinsource="b", joinfield="in_files", name="d") Put everything into a workflow as usualworkflow = Workflow(name="workflow")workflow.connect([(a, b, [('subject', 'subject')]), (b, c, [('out_file', 'in_file')]) (c, d, [('out_file', 'in_files')]) ])``` As you can see, setting up a ``JoinNode`` is rather simple. The only difference to a normal ``Node`` is the ``joinsource`` and the ``joinfield``. ``joinsource`` specifies from which node the information to join is coming and the ``joinfield`` specifies the input field of the `JoinNode` where the information to join will be entering the node. This example assumes that interface `A` has one output *subject*, interface `B` has two inputs *subject* and *in_file* and one output *out_file*, interface `C` has one input *in_file* and one output *out_file*, and interface `D` has one list input *in_files*. The *images* variable is a list of three input image file names.As with *iterables* and the `MapNode` *iterfield*, the *joinfield* can be a list of fields. Thus, the declaration in the previous example is equivalent to the following: ```pythond = JoinNode(interface=D(), joinsource="b", joinfield=["in_files"], name="d")``` The *joinfield* defaults to all of the JoinNode input fields, so the declaration is also equivalent to the following: ```pythond = JoinNode(interface=D(), joinsource="b", name="d")``` In this example, the node `C` *out_file* outputs are collected into the `JoinNode` `D` *in_files* input list. The *in_files* order is the same as the upstream `B` node iterables order.The `JoinNode` input can be filtered for unique values by specifying the *unique* flag, e.g.: ```pythond = JoinNode(interface=D(), joinsource="b", unique=True, name="d")``` `synchronize`The `Node` `iterables` parameter can be a single field or a list of fields. If it is a list, then execution is performed over all permutations of the list items. For example: ```pythonb.iterables = [("m", [1, 2]), ("n", [3, 4])]``` results in the execution graph:where `B13` has inputs *m* = 1, *n* = 3, `B14` has inputs *m* = 1, *n* = 4, etc.The `synchronize` parameter synchronizes the iterables lists, e.g.: ```pythonb.iterables = [("m", [1, 2]), ("n", [3, 4])]b.synchronize = True``` results in the execution graph:where the iterable inputs are selected in lock-step by index, i.e.: (*m*, *n*) = (1, 3) and (2, 4)for `B13` and `B24`, resp. `itersource`The `itersource` feature allows you to expand a downstream `iterable` based on a mapping of an upstream `iterable`.
For example: ```pythona = Node(interface=A(), name="a")b = Node(interface=B(), name="b")b.iterables = ("m", [1, 2])c = Node(interface=C(), name="c")d = Node(interface=D(), name="d")d.itersource = ("b", "m")d.iterables = [("n", {1:[3,4], 2:[5,6]})]my_workflow = Workflow(name="my_workflow")my_workflow.connect([(a,b,[('out_file','in_file')]), (b,c,[('out_file','in_file')]) (c,d,[('out_file','in_file')]) ])``` results in the execution graph:In this example, all interfaces have input `in_file` and output `out_file`. In addition, interface `B` has input *m* and interface `D` has input *n*. A Python dictionary associates the `B` node input value with the downstream `D` node *n* iterable values.This example can be extended with a summary `JoinNode`:```pythone = JoinNode(interface=E(), joinsource="d", joinfield="in_files", name="e")my_workflow.connect(d, 'out_file', e, 'in_files')``` resulting in the graph:The combination of `iterables`, `MapNode`, `JoinNode`, `synchronize` and `itersource` enables the creation of arbitrarily complex workflow graphs. The astute workflow builder will recognize that this flexibility is both a blessing and a curse. These advanced features are handy additions to the Nipype toolkit when used judiciously. More realistic `JoinNode` exampleLet's consider another example where we have one node that iterates over 3 different numbers and generates random numbers. Another node joins those three different numbers (each coming from a separate branch of the workflow) into one list. To make the whole thing a bit more realistic, the second node will use the ``Function`` interface to do something with those numbers, before we spit them out again.
###Code
from nipype import JoinNode, Node, Workflow
from nipype.interfaces.utility import Function, IdentityInterface
def get_data_from_id(id):
"""Generate a random number based on id"""
import numpy as np
return id + np.random.rand()
def merge_and_scale_data(data2):
"""Scale the input list by 1000"""
import numpy as np
return (np.array(data2) * 1000).tolist()
node1 = Node(Function(input_names=['id'],
output_names=['data1'],
function=get_data_from_id),
name='get_data')
node1.iterables = ('id', [1, 2, 3])
node2 = JoinNode(Function(input_names=['data2'],
output_names=['data_scaled'],
function=merge_and_scale_data),
name='scale_data',
joinsource=node1,
joinfield=['data2'])
wf = Workflow(name='testjoin')
wf.connect(node1, 'data1', node2, 'data2')
eg = wf.run()
wf.write_graph(graph2use='exec')
from IPython.display import Image
Image(filename='graph_detailed.png')
###Output
_____no_output_____
###Markdown
Now, let's look at the input and output of the joinnode:
###Code
res = [node for node in eg.nodes() if 'scale_data' in node.name][0].result
res.outputs
res.inputs
###Output
_____no_output_____
###Markdown
Extending to multiple nodesWe extend the workflow by using three nodes. Note that even in this workflow, the joinsource corresponds to the node containing iterables and the joinfield corresponds to the input port of the JoinNode that aggregates the iterable branches. As before, the graph below shows how the execution process is set up.
###Code
def get_data_from_id(id):
import numpy as np
return id + np.random.rand()
def scale_data(data2):
import numpy as np
return data2
def replicate(data3, nreps=2):
return data3 * nreps
node1 = Node(Function(input_names=['id'],
output_names=['data1'],
function=get_data_from_id),
name='get_data')
node1.iterables = ('id', [1, 2, 3])
node2 = Node(Function(input_names=['data2'],
output_names=['data_scaled'],
function=scale_data),
name='scale_data')
node3 = JoinNode(Function(input_names=['data3'],
output_names=['data_repeated'],
function=replicate),
name='replicate_data',
joinsource=node1,
joinfield=['data3'])
wf = Workflow(name='testjoin')
wf.connect(node1, 'data1', node2, 'data2')
wf.connect(node2, 'data_scaled', node3, 'data3')
eg = wf.run()
wf.write_graph(graph2use='exec')
Image(filename='graph_detailed.png')
###Output
_____no_output_____
###Markdown
Exercise 1You have a list of the subjects' dates of birth in a few different formats: ``["10 February 1984", "March 5 1990", "April 2 1782", "June 6, 1988", "12 May 1992"]``, and you want to sort the list. You can use ``Node`` with ``iterables`` to extract day, month and year, use [datetime.datetime](https://docs.python.org/2/library/datetime.html) to unify the format so the dates can be compared, and use ``JoinNode`` to sort the list.
###Code
# write your solution here
# the list of all DOB
dob_subjects = ["10 February 1984", "March 5 1990", "April 2 1782", "June 6, 1988", "12 May 1992"]
# let's start from creating Node with iterable to split all strings from the list
from nipype import Node, JoinNode, Function, Workflow
def split_dob(dob_string):
return dob_string.split()
split_node = Node(Function(input_names=["dob_string"],
output_names=["split_list"],
function=split_dob),
name="splitting")
#split_node.inputs.dob_string = "10 February 1984"
split_node.iterables = ("dob_string", dob_subjects)
# and now let's work on the date format more, independently for every element
# sometimes the second element has an extra "," that we should remove
def remove_comma(str_list):
str_list[1] = str_list[1].replace(",", "")
return str_list
cleaning_node = Node(Function(input_names=["str_list"],
output_names=["str_list_clean"],
function=remove_comma),
name="cleaning")
# now we can extract year, month, day from our list and create a ``datetime.datetime`` object
def datetime_format(date_list):
import datetime
# year is always the last
year = int(date_list[2])
#day and month can be in the first or second position
# we can use datetime.datetime.strptime to convert name of the month to integer
try:
day = int(date_list[0])
month = datetime.datetime.strptime(date_list[1], "%B").month
except(ValueError):
day = int(date_list[1])
month = datetime.datetime.strptime(date_list[0], "%B").month
# and create datetime.datetime format
return datetime.datetime(year, month, day)
datetime_node = Node(Function(input_names=["date_list"],
output_names=["datetime"],
function=datetime_format),
name="datetime")
# now we are ready to create JoinNode and sort the list of DOB
def sorting_dob(datetime_list):
datetime_list.sort()
return datetime_list
sorting_node = JoinNode(Function(input_names=["datetime_list"],
output_names=["dob_sorted"],
function=sorting_dob),
joinsource=split_node, # this is the node that used iterables for x
joinfield=['datetime_list'],
name="sorting")
# and we're ready to create workflow
ex1_wf = Workflow(name="sorting_dob")
ex1_wf.connect(split_node, "split_list", cleaning_node, "str_list")
ex1_wf.connect(cleaning_node, "str_list_clean", datetime_node, "date_list")
ex1_wf.connect(datetime_node, "datetime", sorting_node, "datetime_list")
# you can check the graph
from IPython.display import Image
ex1_wf.write_graph(graph2use='exec')
Image(filename='graph_detailed.png')
# and run the workflow
ex1_res = ex1_wf.run()
# you can check list of all nodes
ex1_res.nodes()
# and check the results from sorting_dob.sorting
list(ex1_res.nodes())[0].result.outputs
###Output
_____no_output_____
###Markdown
JoinNode, synchronize and itersourceJoinNode has the opposite effect of [iterables](basic_iteration.ipynb). Where `iterables` split up the execution workflow into many different branches, a `JoinNode` merges them back into one node. A `JoinNode` generalizes `MapNode` to operate in conjunction with an upstream `iterable` node to reassemble downstream results, e.g.: Simple exampleLet's consider the very simple example depicted at the top of this page: ```pythonfrom nipype import Node, JoinNode, Workflow Specify fake input node Aa = Node(interface=A(), name="a") Iterate over fake node B's input 'in_file'b = Node(interface=B(), name="b")b.iterables = ('in_file', [file1, file2]) Pass results on to fake node Cc = Node(interface=C(), name="c") Join forked execution workflow in fake node Dd = JoinNode(interface=D(), joinsource="b", joinfield="in_files", name="d") Put everything into a workflow as usualworkflow = Workflow(name="workflow")workflow.connect([(a, b, [('subject', 'subject')]), (b, c, [('out_file', 'in_file')]) (c, d, [('out_file', 'in_files')]) ])``` As you can see, setting up a ``JoinNode`` is rather simple. The only differences to a normal ``Node`` are the ``joinsource`` and the ``joinfield``. ``joinsource`` specifies from which node the information to join is coming and the ``joinfield`` specifies the input field of the `JoinNode` where the information to join will be entering the node. This example assumes that interface `A` has one output *subject*, interface `B` has two inputs *subject* and *in_file* and one output *out_file*, interface `C` has one input *in_file* and one output *out_file*, and interface `D` has one list input *in_files*. The *images* variable is a list of three input image file names.As with *iterables* and the `MapNode` *iterfield*, the *joinfield* can be a list of fields. Thus, the declaration in the previous example is equivalent to the following: ```pythond = JoinNode(interface=D(), joinsource="b", joinfield=["in_files"], name="d")``` The *joinfield* defaults to all of the JoinNode input fields, so the declaration is also equivalent to the following: ```pythond = JoinNode(interface=D(), joinsource="b", name="d")``` In this example, the node `C` *out_file* outputs are collected into the `JoinNode` `D` *in_files* input list. The *in_files* order is the same as the upstream `B` node iterables order.The `JoinNode` input can be filtered for unique values by specifying the *unique* flag, e.g.: ```pythond = JoinNode(interface=D(), joinsource="b", unique=True, name="d")``` `synchronize`The `Node` `iterables` parameter can be a single field or a list of fields. If it is a list, then execution is performed over all permutations of the list items. For example: ```pythonb.iterables = [("m", [1, 2]), ("n", [3, 4])]``` results in the execution graph:where `B13` has inputs *m* = 1, *n* = 3, `B14` has inputs *m* = 1, *n* = 4, etc.The `synchronize` parameter synchronizes the iterables lists, e.g.: ```pythonb.iterables = [("m", [1, 2]), ("n", [3, 4])]b.synchronize = True``` results in the execution graph:where the iterable inputs are selected in lock-step by index, i.e.: (*m*, *n*) = (1, 3) and (2, 4)for `B13` and `B24`, resp. `itersource`The `itersource` feature allows you to expand a downstream `iterable` based on a mapping of an upstream `iterable`.
For example: ```pythona = Node(interface=A(), name="a")b = Node(interface=B(), name="b")b.iterables = ("m", [1, 2])c = Node(interface=C(), name="c")d = Node(interface=D(), name="d")d.itersource = ("b", "m")d.iterables = [("n", {1:[3,4], 2:[5,6]})]my_workflow = Workflow(name="my_workflow")my_workflow.connect([(a,b,[('out_file','in_file')]), (b,c,[('out_file','in_file')]) (c,d,[('out_file','in_file')]) ])``` results in the execution graph:In this example, all interfaces have input `in_file` and output `out_file`. In addition, interface `B` has input *m* and interface `D` has input *n*. A Python dictionary associates the `B` node input value with the downstream `D` node *n* iterable values.This example can be extended with a summary `JoinNode`:```pythone = JoinNode(interface=E(), joinsource="d", joinfield="in_files", name="e")my_workflow.connect(d, 'out_file', e, 'in_files')``` resulting in the graph:The combination of `iterables`, `MapNode`, `JoinNode`, `synchronize` and `itersource` enables the creation of arbitrarily complex workflow graphs. The astute workflow builder will recognize that this flexibility is both a blessing and a curse. These advanced features are handy additions to the Nipype toolkit when used judiciously. More realistic `JoinNode` exampleLet's consider another example where we have one node that iterates over 3 different numbers and generates random numbers. Another node joins those three different numbers (each coming from a separate branch of the workflow) into one list. To make the whole thing a bit more realistic, the second node will use the ``Function`` interface to do something with those numbers, before we spit them out again.
###Code
from nipype import JoinNode, Node, Workflow
from nipype.interfaces.utility import Function, IdentityInterface
def get_data_from_id(id):
"""Generate a random number based on id"""
import numpy as np
return id + np.random.rand()
def merge_and_scale_data(data2):
"""Scale the input list by 1000"""
import numpy as np
return (np.array(data2) * 1000).tolist()
node1 = Node(Function(input_names=['id'],
output_names=['data1'],
function=get_data_from_id),
name='get_data')
node1.iterables = ('id', [1, 2, 3])
node2 = JoinNode(Function(input_names=['data2'],
output_names=['data_scaled'],
function=merge_and_scale_data),
name='scale_data',
joinsource=node1,
joinfield=['data2'])
wf = Workflow(name='testjoin')
wf.connect(node1, 'data1', node2, 'data2')
eg = wf.run()
wf.write_graph(graph2use='exec')
from IPython.display import Image
Image(filename='graph_detailed.png')
###Output
_____no_output_____
###Markdown
Now, let's look at the input and output of the joinnode:
###Code
res = [node for node in eg.nodes() if 'scale_data' in node.name][0].result
res.outputs
res.inputs
###Output
_____no_output_____
###Markdown
Extending to multiple nodesWe extend the workflow by using three nodes. Note that even in this workflow, the joinsource corresponds to the node containing iterables and the joinfield corresponds to the input port of the JoinNode that aggregates the iterable branches. As before, the graph below shows how the execution process is set up.
###Code
def get_data_from_id(id):
import numpy as np
return id + np.random.rand()
def scale_data(data2):
import numpy as np
return data2
def replicate(data3, nreps=2):
return data3 * nreps
node1 = Node(Function(input_names=['id'],
output_names=['data1'],
function=get_data_from_id),
name='get_data')
node1.iterables = ('id', [1, 2, 3])
node2 = Node(Function(input_names=['data2'],
output_names=['data_scaled'],
function=scale_data),
name='scale_data')
node3 = JoinNode(Function(input_names=['data3'],
output_names=['data_repeated'],
function=replicate),
name='replicate_data',
joinsource=node1,
joinfield=['data3'])
wf = Workflow(name='testjoin')
wf.connect(node1, 'data1', node2, 'data2')
wf.connect(node2, 'data_scaled', node3, 'data3')
eg = wf.run()
wf.write_graph(graph2use='exec')
Image(filename='graph_detailed.png')
###Output
_____no_output_____
###Markdown
Exercise 1You have a list of the subjects' dates of birth in a few different formats: ``["10 February 1984", "March 5 1990", "April 2 1782", "June 6, 1988", "12 May 1992"]``, and you want to sort the list. You can use ``Node`` with ``iterables`` to extract day, month and year, use [datetime.datetime](https://docs.python.org/2/library/datetime.html) to unify the format so the dates can be compared, and use ``JoinNode`` to sort the list.
###Code
# write your solution here
# the list of all DOB
dob_subjects = ["10 February 1984", "March 5 1990", "April 2 1782", "June 6, 1988", "12 May 1992"]
# let's start from creating Node with iterable to split all strings from the list
from nipype import Node, JoinNode, Function, Workflow
def split_dob(dob_string):
return dob_string.split()
split_node = Node(Function(input_names=["dob_string"],
output_names=["split_list"],
function=split_dob),
name="splitting")
#split_node.inputs.dob_string = "10 February 1984"
split_node.iterables = ("dob_string", dob_subjects)
# and now let's work on the date format more, independently for every element
# sometimes the second element has an extra "," that we should remove
def remove_comma(str_list):
str_list[1] = str_list[1].replace(",", "")
return str_list
cleaning_node = Node(Function(input_names=["str_list"],
output_names=["str_list_clean"],
function=remove_comma),
name="cleaning")
# now we can extract year, month, day from our list and create a ``datetime.datetime`` object
def datetime_format(date_list):
import datetime
# year is always the last
year = int(date_list[2])
#day and month can be in the first or second position
# we can use datetime.datetime.strptime to convert name of the month to integer
try:
day = int(date_list[0])
month = datetime.datetime.strptime(date_list[1], "%B").month
except(ValueError):
day = int(date_list[1])
month = datetime.datetime.strptime(date_list[0], "%B").month
# and create datetime.datetime format
return datetime.datetime(year, month, day)
datetime_node = Node(Function(input_names=["date_list"],
output_names=["datetime"],
function=datetime_format),
name="datetime")
# now we are ready to create JoinNode and sort the list of DOB
def sorting_dob(datetime_list):
datetime_list.sort()
return datetime_list
sorting_node = JoinNode(Function(input_names=["datetime_list"],
output_names=["dob_sorted"],
function=sorting_dob),
joinsource=split_node, # this is the node that used iterables for x
joinfield=['datetime_list'],
name="sorting")
# and we're ready to create workflow
ex1_wf = Workflow(name="sorting_dob")
ex1_wf.connect(split_node, "split_list", cleaning_node, "str_list")
ex1_wf.connect(cleaning_node, "str_list_clean", datetime_node, "date_list")
ex1_wf.connect(datetime_node, "datetime", sorting_node, "datetime_list")
# you can check the graph
from IPython.display import Image
ex1_wf.write_graph(graph2use='exec')
Image(filename='graph_detailed.png')
# and run the workflow
ex1_res = ex1_wf.run()
# you can check list of all nodes
ex1_res.nodes()
# and check the results from sorting_dob.sorting
list(ex1_res.nodes())[0].result.outputs
###Output
_____no_output_____
###Markdown
JoinNodeJoinNode has the opposite effect of [iterables](basic_iteration.ipynb). Where `iterables` split up the execution workflow into many different branches, a JoinNode merges them back into one node. For a more detailed explanation, check out [JoinNode, synchronize and itersource](http://nipype.readthedocs.io/en/latest/users/joinnode_and_itersource.html) from the main homepage. Simple exampleLet's consider the very simple example depicted at the top of this page: ```pythonfrom nipype import Node, JoinNode, Workflow Specify fake input node Aa = Node(interface=A(), name="a") Iterate over fake node B's input 'in_file'b = Node(interface=B(), name="b")b.iterables = ('in_file', [file1, file2]) Pass results on to fake node Cc = Node(interface=C(), name="c") Join forked execution workflow in fake node Dd = JoinNode(interface=D(), joinsource="b", joinfield="in_files", name="d") Put everything into a workflow as usualworkflow = Workflow(name="workflow")workflow.connect([(a, b, [('subject', 'subject')]), (b, c, [('out_file', 'in_file')]) (c, d, [('out_file', 'in_files')]) ])``` As you can see, setting up a ``JoinNode`` is rather simple. The only differences to a normal ``Node`` are the ``joinsource`` and the ``joinfield``. ``joinsource`` specifies from which node the information to join is coming and the ``joinfield`` specifies the input field of the JoinNode where the information to join will be entering the node. More realistic exampleLet's consider another example where we have one node that iterates over 3 different numbers and generates random numbers. Another node joins those three different numbers (each coming from a separate branch of the workflow) into one list. To make the whole thing a bit more realistic, the second node will use the ``Function`` interface to do something with those numbers, before we spit them out again.
###Code
from nipype import JoinNode, Node, Workflow
from nipype.interfaces.utility import Function, IdentityInterface
def get_data_from_id(id):
"""Generate a random number based on id"""
import numpy as np
return id + np.random.rand()
def merge_and_scale_data(data2):
"""Scale the input list by 1000"""
import numpy as np
return (np.array(data2) * 1000).tolist()
node1 = Node(Function(input_names=['id'],
output_names=['data1'],
function=get_data_from_id),
name='get_data')
node1.iterables = ('id', [1, 2, 3])
node2 = JoinNode(Function(input_names=['data2'],
output_names=['data_scaled'],
function=merge_and_scale_data),
name='scale_data',
joinsource=node1,
joinfield=['data2'])
wf = Workflow(name='testjoin')
wf.connect(node1, 'data1', node2, 'data2')
eg = wf.run()
wf.write_graph(graph2use='exec')
from IPython.display import Image
Image(filename='graph_detailed.dot.png')
###Output
_____no_output_____
###Markdown
Now, let's look at the input and output of the joinnode:
###Code
res = [node for node in eg.nodes() if 'scale_data' in node.name][0].result
res.outputs
res.inputs
###Output
_____no_output_____
###Markdown
Extending to multiple nodesWe extend the workflow by using three nodes. Note that even in this workflow, the joinsource corresponds to the node containing iterables and the joinfield corresponds to the input port of the JoinNode that aggregates the iterable branches. As before, the graph below shows how the execution process is set up.
###Code
def get_data_from_id(id):
import numpy as np
return id + np.random.rand()
def scale_data(data2):
import numpy as np
return data2
def replicate(data3, nreps=2):
return data3 * nreps
node1 = Node(Function(input_names=['id'],
output_names=['data1'],
function=get_data_from_id),
name='get_data')
node1.iterables = ('id', [1, 2, 3])
node2 = Node(Function(input_names=['data2'],
output_names=['data_scaled'],
function=scale_data),
name='scale_data')
node3 = JoinNode(Function(input_names=['data3'],
output_names=['data_repeated'],
function=replicate),
name='replicate_data',
joinsource=node1,
joinfield=['data3'])
wf = Workflow(name='testjoin')
wf.connect(node1, 'data1', node2, 'data2')
wf.connect(node2, 'data_scaled', node3, 'data3')
eg = wf.run()
wf.write_graph(graph2use='exec')
Image(filename='graph_detailed.dot.png')
###Output
_____no_output_____ |
MNIST - DeepNN/deep-neural-network-mnist-tuned-hyperparameters.ipynb | ###Markdown
Exercises 10. Combine all the methods and try to achieve 98.5%+ accuracy. Achieving 98.5% accuracy with the methodology we've seen so far is extremely hard. A more realistic exercise would be to achieve 98%+ accuracy. However, being pushed to the limit (trying to achieve 98.5%), you have probably learned a whole lot about the machine learning process. Here is a link where you can check the results that some leading academics got on the MNIST (using different methodologies): https://rodrigob.github.io/are_we_there_yet/build/classification_datasets_results **Solution** After some fine-tuning, I decided to brute-force the algorithm and created 10 hidden layers with 5000 hidden units each. hidden_layer_size = 50 batch_size = 150 NUM_EPOCHS = 10 All activation functions are ReLU. There are better solutions using this methodology; this one is just superior to the one in the lessons. Due to the width and the depth of the algorithm, it took my computer 3 hours and 50 mins to train it. Also, we defined a custom optimizer. We create the custom optimizer with: custom_optimizer = tf.keras.optimizers.Adam(learning_rate=0.0001) Then we change the respective argument in model.compile to reflect this: model.compile(optimizer=custom_optimizer, loss='sparse_categorical_crossentropy', metrics=['accuracy']) While Adam adapts to the problem, if the orders of magnitude are too different, it may not have time to adjust accordingly. We start overfitting before we can reach a neat solution. Deep Neural Network for MNIST ClassificationWe'll apply all the knowledge from the lectures in this section to write a deep neural network. The problem we've chosen is referred to as the "Hello World" of deep learning because for most students it is the first deep learning algorithm they see. The dataset is called MNIST and refers to handwritten digit recognition. You can find more about it on Yann LeCun's website (Director of AI Research, Facebook). He is one of the pioneers of what we've been talking about and of more complex approaches that are widely used today, such as convolutional neural networks (CNNs). The dataset provides 70,000 images (28x28 pixels) of handwritten digits (1 digit per image). The goal is to write an algorithm that detects which digit is written. Since there are only 10 digits (0, 1, 2, 3, 4, 5, 6, 7, 8, 9), this is a classification problem with 10 classes. Our goal would be to build a neural network with 2 hidden layers. Import the relevant packages
###Code
import numpy as np
import tensorflow as tf
# TensorFLow includes a data provider for MNIST that we'll use.
# It comes with the tensorflow-datasets module, therefore, if you haven't please install the package using
# pip install tensorflow-datasets
# or
# conda install tensorflow-datasets
import tensorflow_datasets as tfds
# these datasets will be stored in C:\Users\*USERNAME*\tensorflow_datasets\...
# the first time you download a dataset, it is stored in the respective folder
# every other time, it is automatically loading the copy on your computer
###Output
C:\Users\Asus\anaconda3\envs\vision-project\lib\site-packages\tqdm\auto.py:22: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html
from .autonotebook import tqdm as notebook_tqdm
###Markdown
DataThat's where we load and preprocess our data.
###Code
# remember the comment from above
# these datasets will be stored in C:\Users\*USERNAME*\tensorflow_datasets\...
# the first time you download a dataset, it is stored in the respective folder
# every other time, it is automatically loading the copy on your computer
# tfds.load actually loads a dataset (or downloads and then loads if that's the first time you use it)
# in our case, we are interesteed in the MNIST; the name of the dataset is the only mandatory argument
# there are other arguments we can specify, which we can find useful
# mnist_dataset = tfds.load(name='mnist', as_supervised=True)
mnist_dataset, mnist_info = tfds.load(name='mnist', with_info=True, as_supervised=True)
# with_info=True will also provide us with a tuple containing information about the version, features, number of samples
# we will use this information a bit below and we will store it in mnist_info
# as_supervised=True will load the dataset in a 2-tuple structure (input, target)
# alternatively, as_supervised=False, would return a dictionary
# obviously we prefer to have our inputs and targets separated
# once we have loaded the dataset, we can easily extract the training and testing dataset with the built references
mnist_train, mnist_test = mnist_dataset['train'], mnist_dataset['test']
# by default, TF has training and testing datasets, but no validation sets
# thus we must split it on our own
# we start by defining the number of validation samples as a % of the train samples
# this is also where we make use of mnist_info (we don't have to count the observations)
num_validation_samples = 0.1 * mnist_info.splits['train'].num_examples
# let's cast this number to an integer, as a float may cause an error along the way
num_validation_samples = tf.cast(num_validation_samples, tf.int64)
# let's also store the number of test samples in a dedicated variable (instead of using the mnist_info one)
num_test_samples = mnist_info.splits['test'].num_examples
# once more, we'd prefer an integer (rather than the default float)
num_test_samples = tf.cast(num_test_samples, tf.int64)
# normally, we would like to scale our data in some way to make the result more numerically stable
# in this case we will simply prefer to have inputs between 0 and 1
# let's define a function called: scale, that will take an MNIST image and its label
def scale(image, label):
# we make sure the value is a float
image = tf.cast(image, tf.float32)
# since the possible values for the inputs are 0 to 255 (256 different shades of grey)
# if we divide each element by 255, we would get the desired result -> all elements will be between 0 and 1
image /= 255.
return image, label
# the method .map() allows us to apply a custom transformation to a given dataset
# we have already decided that we will get the validation data from mnist_train, so
scaled_train_and_validation_data = mnist_train.map(scale)
# finally, we scale and batch the test data
# we scale it so it has the same magnitude as the train and validation
# there is no need to shuffle it, because we won't be training on the test data
# there would be a single batch, equal to the size of the test data
test_data = mnist_test.map(scale)
# let's also shuffle the data
BUFFER_SIZE = 10000
# this BUFFER_SIZE parameter is here for cases when we're dealing with enormous datasets
# then we can't shuffle the whole dataset in one go because we can't fit it all in memory
# so instead TF only stores BUFFER_SIZE samples in memory at a time and shuffles them
# if BUFFER_SIZE=1 => no shuffling will actually happen
# if BUFFER_SIZE >= num samples => shuffling is uniform
# BUFFER_SIZE in between - a computational optimization to approximate uniform shuffling
# luckily for us, there is a shuffle method readily available and we just need to specify the buffer size
shuffled_train_and_validation_data = scaled_train_and_validation_data.shuffle(BUFFER_SIZE)
# once we have scaled and shuffled the data, we can proceed to actually extracting the train and validation
# our validation data would be equal to 10% of the training set, which we've already calculated
# we use the .take() method to take that many samples
# finally, we create a batch with a batch size equal to the total number of validation samples
validation_data = shuffled_train_and_validation_data.take(num_validation_samples)
# similarly, the train_data is everything else, so we skip as many samples as there are in the validation dataset
train_data = shuffled_train_and_validation_data.skip(num_validation_samples)
# determine the batch size
BATCH_SIZE = 150
# we can also take advantage of the occasion to batch the train data
# this would be very helpful when we train, as we would be able to iterate over the different batches
train_data = train_data.batch(BATCH_SIZE)
validation_data = validation_data.batch(num_validation_samples)
# batch the test data
test_data = test_data.batch(num_test_samples)
# takes next batch (it is the only batch)
# because as_supervized=True, we've got a 2-tuple structure
validation_inputs, validation_targets = next(iter(validation_data))
###Output
_____no_output_____
###Markdown
Model Outline the modelWhen thinking about a deep learning algorithm, we mostly imagine building the model. So, let's do it :)
###Code
input_size = 784
output_size = 10
# Use same hidden layer size for both hidden layers. Not a necessity.
hidden_layer_size = 50
# define how the model will look like
model = tf.keras.Sequential([
# the first layer (the input layer)
# each observation is 28x28x1 pixels, therefore it is a tensor of rank 3
# since we don't know CNNs yet, we don't know how to feed such input into our net, so we must flatten the images
# there is a convenient method 'Flatten' that simply takes our 28x28x1 tensor and orders it into a (None,)
# or (28x28x1,) = (784,) vector
# this allows us to actually create a feed forward neural network
tf.keras.layers.Flatten(input_shape=(28, 28, 1)), # input layer
# tf.keras.layers.Dense is basically implementing: output = activation(dot(input, weight) + bias)
# it takes several arguments, but the most important ones for us are the hidden_layer_size and the activation function
tf.keras.layers.Dense(hidden_layer_size, activation='relu'), # 1st hidden layer
tf.keras.layers.Dense(hidden_layer_size, activation='relu'), # 2nd hidden layer
tf.keras.layers.Dense(hidden_layer_size, activation='relu'), # 3rd hidden layer
tf.keras.layers.Dense(hidden_layer_size, activation='relu'), # 4th hidden layer
tf.keras.layers.Dense(hidden_layer_size, activation='relu'), # 5th hidden layer
tf.keras.layers.Dense(hidden_layer_size, activation='relu'), # 6th hidden layer
tf.keras.layers.Dense(hidden_layer_size, activation='relu'), # 7th hidden layer
tf.keras.layers.Dense(hidden_layer_size, activation='relu'), # 8th hidden layer
tf.keras.layers.Dense(hidden_layer_size, activation='relu'), # 9th hidden layer
tf.keras.layers.Dense(hidden_layer_size, activation='relu'), # 10th hidden layer
# the final layer is no different, we just make sure to activate it with softmax
tf.keras.layers.Dense(output_size, activation='softmax') # output layer
])
###Output
_____no_output_____
###Markdown
Choose the optimizer and the loss function
###Code
# we define the optimizer we'd like to use,
# the loss function,
# and the metrics we are interested in obtaining at each iteration
custom_optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)
model.compile(optimizer=custom_optimizer, loss='sparse_categorical_crossentropy', metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
TrainingThat's where we train the model we have built.
###Code
# determine the maximum number of epochs
NUM_EPOCHS = 10
# we fit the model, specifying the
# training data
# the total number of epochs
# and the validation data we just created ourselves in the format: (inputs,targets)
model.fit(train_data, epochs=NUM_EPOCHS, validation_data=(validation_inputs, validation_targets), verbose=2)
###Output
Epoch 1/10
360/360 - 3s - loss: 0.6483 - accuracy: 0.7809 - val_loss: 0.3038 - val_accuracy: 0.9115
Epoch 2/10
360/360 - 3s - loss: 0.2094 - accuracy: 0.9376 - val_loss: 0.1610 - val_accuracy: 0.9535
Epoch 3/10
360/360 - 3s - loss: 0.1562 - accuracy: 0.9530 - val_loss: 0.1442 - val_accuracy: 0.9593
Epoch 4/10
360/360 - 3s - loss: 0.1273 - accuracy: 0.9626 - val_loss: 0.1410 - val_accuracy: 0.9590
Epoch 5/10
360/360 - 3s - loss: 0.1119 - accuracy: 0.9668 - val_loss: 0.1294 - val_accuracy: 0.9640
Epoch 6/10
360/360 - 3s - loss: 0.0994 - accuracy: 0.9694 - val_loss: 0.1131 - val_accuracy: 0.9688
Epoch 7/10
360/360 - 3s - loss: 0.0860 - accuracy: 0.9742 - val_loss: 0.1080 - val_accuracy: 0.9688
Epoch 8/10
360/360 - 3s - loss: 0.0801 - accuracy: 0.9757 - val_loss: 0.0928 - val_accuracy: 0.9727
Epoch 9/10
360/360 - 3s - loss: 0.0715 - accuracy: 0.9785 - val_loss: 0.0794 - val_accuracy: 0.9757
Epoch 10/10
360/360 - 3s - loss: 0.0649 - accuracy: 0.9803 - val_loss: 0.0746 - val_accuracy: 0.9785
###Markdown
Test the modelAs we discussed in the lectures, after training on the training data and validating on the validation data, we test the final prediction power of our model by running it on the test dataset that the algorithm has NEVER seen before.It is very important to realize that fiddling with the hyperparameters overfits the validation dataset. The test is the absolute final instance. You should not test before you are completely done with adjusting your model.If you adjust your model after testing, you will start overfitting the test dataset, which will defeat its purpose.
###Code
test_loss, test_accuracy = model.evaluate(test_data)
# We can apply some nice formatting if we want to
print('Test loss: {0:.2f}. Test accuracy: {1:.2f}%'.format(test_loss, test_accuracy*100.))
###Output
Test loss: 0.12. Test accuracy: 96.65%
|
Assesment.ipynb | ###Markdown
###Code
print('Start')
#function to convert first letter of word to lowercase and rest to uppercase
def convertWord(word):
return word[0].lower() + word[1:].upper()
# The function is expected to return a string.
# The function accepts string as parameter.
def logic(my_input):
# Write your code here and remove pass statement
# Don't print anything. Just return the intended output
# You can create other functions and call from here
input_list = my_input.split()
output_str = ""
for word in input_list:
output_str = output_str+convertWord(word)
return output_str
# Do not edit below
# Get the input
my_input = input()
# Print output returned from the logic function
print(logic(my_input))
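# example behaviour (hypothetical input): "hello world" -> "hELLOwORLD"
# (first letter of each word lowercased, the rest uppercased, words joined together)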
# The function is expected to return an integer.
# The function accepts an string array(list of input) and an integer(length of input) as parameters.
def logic(inputs, input_length):
# Write your code here and remove pass statement
# You can create other functions and call from here
# Don't print anything. Just return the intended output
#both primary and secondary are arrays of length 26 for 26 alphabets
#primary array for common characters assuming all chars are common
primary = [True]*26
for i in range(input_length):
        #secondary array for common characters, assuming none are common
        secondary = [False]*26
        #for every character in each string
for j in range(len(inputs[i])):
if(primary[ord(inputs[i][j]) - ord('a')]):
#if the character is present in all strings we will mark it common in secondary
secondary[ord(inputs[i][j])-ord('a')] = True
#copy whole secondary array to primary
for i in range(26):
primary[i] = secondary[i]
#list to store common characters in string
common_chars = []
for i in range(26):
if(primary[i]):
            # append the character corresponding to this alphabet index (via its ASCII code)
common_chars.append("%c " % (i+ord('a')))
return len(common_chars)
# Do not edit below
# Get the input
input_length = int(input())
inputs = []
for x in range(input_length):
inputs.append(input())
# Print output returned from the logic function
print(logic(inputs, input_length))
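# example behaviour (hypothetical input): for the 2 strings "abc" and "bcd"
# the function returns 2, since only 'b' and 'c' appear in every string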
#function to check even
def isEven(number):
return True if number%2 == 0 else False
# The function is expected to return a string.
# The function accepts string as parameter.
def logic(my_input):
# Write your code here and remove pass statement
# Don't print anything. Just return the intended output
# You can create other functions and call from here
zero_count = 0
one_count = 0
max_zero_count = 0
max_one_count = 0
for i in range(len(my_input)):
if my_input[i] == '0':
one_count = 0
zero_count += 1
max_zero_count = max(zero_count,max_zero_count)
elif my_input[i] == '1':
zero_count = 0
one_count += 1
max_one_count = max(one_count,max_one_count)
if isEven(max_zero_count) and not isEven(max_one_count):
return 'yes'
else:
return 'no'
# Do not edit below
# Get the input
my_input = input()
# Print output returned from the logic function
print(logic(my_input))
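# example behaviour (hypothetical inputs): "0011100" -> "yes" (longest run of 0s is 2, even,
# and longest run of 1s is 3, odd); "000111" -> "no" (longest run of 0s is 3, which is odd)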
print('End')
###Output
_____no_output_____ |
astropy_fits_tutorial.ipynb | ###Markdown
This sets up the components that we need for the program
###Code
import numpy as np
import sep
import matplotlib.pyplot as plt
from astropy.io import fits
%matplotlib inline
from matplotlib import rcParams
rcParams['figure.figsize'] = [10., 8.]
###Output
_____no_output_____
###Markdown
This gets the path of the test FITS file and stores it in fname
###Code
fname = fits.util.get_testdata_filepath('image.fits')
hdul = fits.open(fname)
hdul.info()
###Output
_____no_output_____
###Markdown
This reads the image data from the FITS file and stores it in "data"
###Code
data = fits.getdata(fname)
print(type(data))
print(data.shape)
###Output
_____no_output_____
###Markdown
Plots the FITS image data
###Code
plt.imshow(data, cmap='gray')
plt.colorbar()
###Output
_____no_output_____
###Markdown
Estimates the spatially varying background of the image and stores it in bkg
###Code
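# note: SEP expects native-byte-order, C-contiguous arrays; FITS data is often big-endian,
# so if the next line complains about byte order, converting first may help, e.g.
# data = data.byteswap().newbyteorder()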
bkg = sep.Background(data)
###Output
_____no_output_____
###Markdown
Prints the global background level and the global background RMS
###Code
print(bkg.globalback)
print(bkg.globalrms)
###Output
_____no_output_____
###Markdown
Stores the background image in bkg_image
###Code
bkg_image = bkg.back()
###Output
_____no_output_____
###Markdown
Plots the background image returned by bkg.back()
###Code
plt.imshow(bkg_image, interpolation='nearest', cmap='gray', origin='lower')
plt.colorbar();
###Output
_____no_output_____
###Markdown
Stores the background noise (RMS) image in bkg_rms
###Code
bkg_rms = bkg.rms()
###Output
_____no_output_____
###Markdown
plots the bkg_rms image
###Code
plt.imshow(bkg_rms, interpolation='nearest', cmap='gray', origin='lower')
plt.colorbar();
###Output
_____no_output_____
###Markdown
This subtracts the background from the original image
###Code
data_sub = data - bkg
###Output
_____no_output_____
###Markdown
This detects the objects in the background-subtracted image
###Code
objects = sep.extract(data_sub, 1.5, err=bkg.globalrms)
###Output
_____no_output_____
###Markdown
Prints out the number of objects found in the picture
###Code
len(objects)
###Output
_____no_output_____
###Markdown
This plots the background-subtracted image, locates the centre of each detected object, and circles each object in red using an Ellipse
###Code
from matplotlib.patches import Ellipse
# plot background-subtracted image
fig, ax = plt.subplots()
m, s = np.mean(data_sub), np.std(data_sub)
im = ax.imshow(data_sub, interpolation='nearest', cmap='gray',
vmin=m-s, vmax=m+s, origin='lower')
# plot an ellipse for each object
for i in range(len(objects)):
e = Ellipse(xy=(objects['x'][i], objects['y'][i]),
width=6*objects['a'][i],
height=6*objects['b'][i],
angle=objects['theta'][i] * 180. / np.pi)
e.set_facecolor('none')
e.set_edgecolor('red')
ax.add_artist(e)
###Output
_____no_output_____
###Markdown
Name of available fields
###Code
objects.dtype.names
###Output
_____no_output_____
###Markdown
This performs circular aperture photometry with a 3 pixel radius at the locations of the objects
###Code
flux, fluxerr, flag = sep.sum_circle(data_sub, objects['x'], objects['y'],
3.0, err=bkg.globalrms, gain=1.0)
###Output
_____no_output_____
###Markdown
This shows the result of the first 10 objects found
###Code
for i in range(10):
print("object {:d}: flux = {:f} +/- {:f}".format(i, flux[i], fluxerr[i]))
###Output
_____no_output_____
###Markdown
Saves the figures to png files
###Code
plt.imsave("figure1.png", data)
plt.imsave("figure2.png", bkg_image)
plt.imsave("figure3.png", bkg_rms)
plt.imsave("figure4.png", data_sub)
###Output
_____no_output_____ |
Crash Course on Python/.ipynb_checkpoints/Module 6-checkpoint.ipynb | ###Markdown
Practice Notebook - Putting It All Together Hello, coders! Below we have code similar to what we wrote in the last video. Go ahead and run the following cell that defines our `get_event_date`, `current_users` and `generate_report` methods.
###Code
def get_event_date(event):
return event.date
def current_users(events):
events.sort(key=get_event_date)
machines = {}
for event in events:
if event.machine not in machines:
machines[event.machine] = set()
if event.type == "login":
machines[event.machine].add(event.user)
elif event.type == "logout" and event.user in machines[event.machine]:
machines[event.machine].remove(event.user)
return machines
def generate_report(machines):
for machine, users in machines.items():
if len(users) > 0:
user_list = ", ".join(users)
print("{}: {}".format(machine, user_list))
###Output
_____no_output_____
###Markdown
No output should be generated from running the custom function definitions above. To check that our code is doing everything it's supposed to do, we need an `Event` class. The code in the next cell below initializes our `Event` class. Go ahead and run this cell next.
###Code
class Event:
def __init__(self, event_date, event_type, machine_name, user):
self.date = event_date
self.type = event_type
self.machine = machine_name
self.user = user
###Output
_____no_output_____
###Markdown
Ok, we have an `Event` class that has a constructor and sets the necessary attributes. Next let's create some events and add them to a list by running the following cell.
###Code
events = [
Event('2020-01-21 12:45:56', 'login', 'myworkstation.local', 'jordan'),
Event('2020-01-22 15:53:42', 'logout', 'webserver.local', 'jordan'),
Event('2020-01-21 18:53:21', 'login', 'webserver.local', 'lane'),
Event('2020-01-22 10:25:34', 'logout', 'myworkstation.local', 'jordan'),
Event('2020-01-21 08:20:01', 'login', 'webserver.local', 'jordan'),
Event('2020-01-23 11:24:35', 'logout', 'mailserver.local', 'chris'),
]
###Output
_____no_output_____
###Markdown
Now we've got a bunch of events. Let's feed these events into our `current_users` function and see what happens.
###Code
users = current_users(events)
print(users)
###Output
{'webserver.local': {'lane'}, 'myworkstation.local': set(), 'mailserver.local': set()}
###Markdown
Uh oh. The code in the previous cell produces an error message. This is because we have a user in our `events` list that was logged out of a machine he was not logged into. Do you see which user this is? Make edits to the first cell containing our custom function definitions to see if you can fix this error message. There may be more than one way to do so. Remember, when you have finished making your edits, rerun that cell as well as the cell that feeds the `events` list into our `current_users` function to see whether the error message has been fixed. Once the error message has been cleared and you have correctly outputted a dictionary with machine names as keys, your custom functions are properly finished. Great! Now try generating the report by running the next cell.
###Code
generate_report(users)
###Output
webserver.local: lane
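###Markdown
 As noted above, there is more than one way to fix `current_users`. One alternative (a sketch for comparison only, not required for the exercise) is to use `set.discard`, which removes a user if present and silently does nothing otherwise, so no membership check is needed when handling logouts.
###Code
# Hypothetical alternative version of current_users using set.discard
def current_users_discard(events):
    events.sort(key=get_event_date)
    machines = {}
    for event in events:
        if event.machine not in machines:
            machines[event.machine] = set()
        if event.type == "login":
            machines[event.machine].add(event.user)
        elif event.type == "logout":
            # discard() never raises, even if the user never logged in here
            machines[event.machine].discard(event.user)
    return machines
###Output
_____no_output_____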
###Markdown
Whoop whoop! Success! The error message has been cleared and the desired output is produced. You are all done with this practice notebook. Way to go! Final Project - Word Cloud For this project, you'll create a "word cloud" from a text by writing a script. This script needs to process the text, remove punctuation, ignore case and words that do not contain all alphabets, count the frequencies, and ignore uninteresting or irrelevant words. A dictionary is the output of the `calculate_frequencies` function. The `wordcloud` module will then generate the image from your dictionary. For the input text of your script, you will need to provide a file that contains text only. For the text itself, you can copy and paste the contents of a website you like. Or you can use a site like [Project Gutenberg](https://www.gutenberg.org/) to find books that are available online. You could see what word clouds you can get from famous books, like a Shakespeare play or a novel by Jane Austen. Save this as a .txt file somewhere on your computer.Now you will need to upload your input file here so that your script will be able to process it. To do the upload, you will need an uploader widget. Run the following cell to perform all the installs and imports for your word cloud script and uploader widget. It may take a minute for all of this to run and there will be a lot of output messages. But, be patient. Once you get the following final line of output, the code is done executing. Then you can continue on with the rest of the instructions for this notebook.**Enabling notebook extension fileupload/extension...****- Validating: OK**
###Code
# Here are all the installs and imports you will need for your word cloud script and uploader widget
!pip install wordcloud
!pip install fileupload
!pip install ipywidgets
!jupyter nbextension install --py --user fileupload
!jupyter nbextension enable --py fileupload
import wordcloud
import numpy as np
from matplotlib import pyplot as plt
from IPython.display import display
import fileupload
import io
import sys
###Output
Requirement already satisfied: wordcloud in /Users/halston/opt/anaconda3/lib/python3.7/site-packages (1.7.0)
Requirement already satisfied: pillow in /Users/halston/opt/anaconda3/lib/python3.7/site-packages (from wordcloud) (7.0.0)
Requirement already satisfied: numpy>=1.6.1 in /Users/halston/opt/anaconda3/lib/python3.7/site-packages (from wordcloud) (1.18.1)
Requirement already satisfied: matplotlib in /Users/halston/opt/anaconda3/lib/python3.7/site-packages (from wordcloud) (3.1.3)
Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /Users/halston/opt/anaconda3/lib/python3.7/site-packages (from matplotlib->wordcloud) (2.4.6)
Requirement already satisfied: python-dateutil>=2.1 in /Users/halston/opt/anaconda3/lib/python3.7/site-packages (from matplotlib->wordcloud) (2.8.1)
Requirement already satisfied: kiwisolver>=1.0.1 in /Users/halston/opt/anaconda3/lib/python3.7/site-packages (from matplotlib->wordcloud) (1.1.0)
Requirement already satisfied: cycler>=0.10 in /Users/halston/opt/anaconda3/lib/python3.7/site-packages (from matplotlib->wordcloud) (0.10.0)
Requirement already satisfied: six>=1.5 in /Users/halston/opt/anaconda3/lib/python3.7/site-packages (from python-dateutil>=2.1->matplotlib->wordcloud) (1.14.0)
Requirement already satisfied: setuptools in /Users/halston/opt/anaconda3/lib/python3.7/site-packages (from kiwisolver>=1.0.1->matplotlib->wordcloud) (46.0.0.post20200309)
Requirement already satisfied: fileupload in /Users/halston/opt/anaconda3/lib/python3.7/site-packages (0.1.5)
Requirement already satisfied: notebook>=4.2 in /Users/halston/opt/anaconda3/lib/python3.7/site-packages (from fileupload) (6.0.3)
Requirement already satisfied: traitlets>=4.2 in /Users/halston/opt/anaconda3/lib/python3.7/site-packages (from fileupload) (4.3.3)
Requirement already satisfied: ipywidgets>=5.1 in /Users/halston/opt/anaconda3/lib/python3.7/site-packages (from fileupload) (7.5.1)
Requirement already satisfied: jupyter-core>=4.6.1 in /Users/halston/opt/anaconda3/lib/python3.7/site-packages (from notebook>=4.2->fileupload) (4.6.1)
Requirement already satisfied: ipykernel in /Users/halston/opt/anaconda3/lib/python3.7/site-packages (from notebook>=4.2->fileupload) (5.1.4)
Requirement already satisfied: prometheus-client in /Users/halston/opt/anaconda3/lib/python3.7/site-packages (from notebook>=4.2->fileupload) (0.7.1)
Requirement already satisfied: jinja2 in /Users/halston/opt/anaconda3/lib/python3.7/site-packages (from notebook>=4.2->fileupload) (2.11.1)
Requirement already satisfied: Send2Trash in /Users/halston/opt/anaconda3/lib/python3.7/site-packages (from notebook>=4.2->fileupload) (1.5.0)
Requirement already satisfied: tornado>=5.0 in /Users/halston/opt/anaconda3/lib/python3.7/site-packages (from notebook>=4.2->fileupload) (6.0.3)
Requirement already satisfied: jupyter-client>=5.3.4 in /Users/halston/opt/anaconda3/lib/python3.7/site-packages (from notebook>=4.2->fileupload) (5.3.4)
Requirement already satisfied: pyzmq>=17 in /Users/halston/opt/anaconda3/lib/python3.7/site-packages (from notebook>=4.2->fileupload) (18.1.1)
Requirement already satisfied: terminado>=0.8.1 in /Users/halston/opt/anaconda3/lib/python3.7/site-packages (from notebook>=4.2->fileupload) (0.8.3)
Requirement already satisfied: nbformat in /Users/halston/opt/anaconda3/lib/python3.7/site-packages (from notebook>=4.2->fileupload) (5.0.4)
Requirement already satisfied: ipython-genutils in /Users/halston/opt/anaconda3/lib/python3.7/site-packages (from notebook>=4.2->fileupload) (0.2.0)
Requirement already satisfied: nbconvert in /Users/halston/opt/anaconda3/lib/python3.7/site-packages (from notebook>=4.2->fileupload) (5.6.1)
Requirement already satisfied: decorator in /Users/halston/opt/anaconda3/lib/python3.7/site-packages (from traitlets>=4.2->fileupload) (4.4.1)
Requirement already satisfied: six in /Users/halston/opt/anaconda3/lib/python3.7/site-packages (from traitlets>=4.2->fileupload) (1.14.0)
Requirement already satisfied: ipython>=4.0.0; python_version >= "3.3" in /Users/halston/opt/anaconda3/lib/python3.7/site-packages (from ipywidgets>=5.1->fileupload) (7.12.0)
Requirement already satisfied: widgetsnbextension~=3.5.0 in /Users/halston/opt/anaconda3/lib/python3.7/site-packages (from ipywidgets>=5.1->fileupload) (3.5.1)
Requirement already satisfied: appnope; platform_system == "Darwin" in /Users/halston/opt/anaconda3/lib/python3.7/site-packages (from ipykernel->notebook>=4.2->fileupload) (0.1.0)
Requirement already satisfied: MarkupSafe>=0.23 in /Users/halston/opt/anaconda3/lib/python3.7/site-packages (from jinja2->notebook>=4.2->fileupload) (1.1.1)
Requirement already satisfied: python-dateutil>=2.1 in /Users/halston/opt/anaconda3/lib/python3.7/site-packages (from jupyter-client>=5.3.4->notebook>=4.2->fileupload) (2.8.1)
Requirement already satisfied: jsonschema!=2.5.0,>=2.4 in /Users/halston/opt/anaconda3/lib/python3.7/site-packages (from nbformat->notebook>=4.2->fileupload) (3.2.0)
Requirement already satisfied: entrypoints>=0.2.2 in /Users/halston/opt/anaconda3/lib/python3.7/site-packages (from nbconvert->notebook>=4.2->fileupload) (0.3)
Requirement already satisfied: mistune<2,>=0.8.1 in /Users/halston/opt/anaconda3/lib/python3.7/site-packages (from nbconvert->notebook>=4.2->fileupload) (0.8.4)
Requirement already satisfied: pandocfilters>=1.4.1 in /Users/halston/opt/anaconda3/lib/python3.7/site-packages (from nbconvert->notebook>=4.2->fileupload) (1.4.2)
Requirement already satisfied: testpath in /Users/halston/opt/anaconda3/lib/python3.7/site-packages (from nbconvert->notebook>=4.2->fileupload) (0.4.4)
Requirement already satisfied: defusedxml in /Users/halston/opt/anaconda3/lib/python3.7/site-packages (from nbconvert->notebook>=4.2->fileupload) (0.6.0)
Requirement already satisfied: pygments in /Users/halston/opt/anaconda3/lib/python3.7/site-packages (from nbconvert->notebook>=4.2->fileupload) (2.5.2)
Requirement already satisfied: bleach in /Users/halston/opt/anaconda3/lib/python3.7/site-packages (from nbconvert->notebook>=4.2->fileupload) (3.1.0)
Requirement already satisfied: pickleshare in /Users/halston/opt/anaconda3/lib/python3.7/site-packages (from ipython>=4.0.0; python_version >= "3.3"->ipywidgets>=5.1->fileupload) (0.7.5)
Requirement already satisfied: setuptools>=18.5 in /Users/halston/opt/anaconda3/lib/python3.7/site-packages (from ipython>=4.0.0; python_version >= "3.3"->ipywidgets>=5.1->fileupload) (46.0.0.post20200309)
Requirement already satisfied: backcall in /Users/halston/opt/anaconda3/lib/python3.7/site-packages (from ipython>=4.0.0; python_version >= "3.3"->ipywidgets>=5.1->fileupload) (0.1.0)
Requirement already satisfied: jedi>=0.10 in /Users/halston/opt/anaconda3/lib/python3.7/site-packages (from ipython>=4.0.0; python_version >= "3.3"->ipywidgets>=5.1->fileupload) (0.14.1)
Requirement already satisfied: prompt-toolkit!=3.0.0,!=3.0.1,<3.1.0,>=2.0.0 in /Users/halston/opt/anaconda3/lib/python3.7/site-packages (from ipython>=4.0.0; python_version >= "3.3"->ipywidgets>=5.1->fileupload) (3.0.3)
Requirement already satisfied: pexpect; sys_platform != "win32" in /Users/halston/opt/anaconda3/lib/python3.7/site-packages (from ipython>=4.0.0; python_version >= "3.3"->ipywidgets>=5.1->fileupload) (4.8.0)
Requirement already satisfied: importlib-metadata; python_version < "3.8" in /Users/halston/opt/anaconda3/lib/python3.7/site-packages (from jsonschema!=2.5.0,>=2.4->nbformat->notebook>=4.2->fileupload) (1.5.0)
Requirement already satisfied: pyrsistent>=0.14.0 in /Users/halston/opt/anaconda3/lib/python3.7/site-packages (from jsonschema!=2.5.0,>=2.4->nbformat->notebook>=4.2->fileupload) (0.15.7)
Requirement already satisfied: attrs>=17.4.0 in /Users/halston/opt/anaconda3/lib/python3.7/site-packages (from jsonschema!=2.5.0,>=2.4->nbformat->notebook>=4.2->fileupload) (19.3.0)
Requirement already satisfied: webencodings in /Users/halston/opt/anaconda3/lib/python3.7/site-packages (from bleach->nbconvert->notebook>=4.2->fileupload) (0.5.1)
Requirement already satisfied: parso>=0.5.0 in /Users/halston/opt/anaconda3/lib/python3.7/site-packages (from jedi>=0.10->ipython>=4.0.0; python_version >= "3.3"->ipywidgets>=5.1->fileupload) (0.5.2)
Requirement already satisfied: wcwidth in /Users/halston/opt/anaconda3/lib/python3.7/site-packages (from prompt-toolkit!=3.0.0,!=3.0.1,<3.1.0,>=2.0.0->ipython>=4.0.0; python_version >= "3.3"->ipywidgets>=5.1->fileupload) (0.1.8)
Requirement already satisfied: ptyprocess>=0.5 in /Users/halston/opt/anaconda3/lib/python3.7/site-packages (from pexpect; sys_platform != "win32"->ipython>=4.0.0; python_version >= "3.3"->ipywidgets>=5.1->fileupload) (0.6.0)
Requirement already satisfied: zipp>=0.5 in /Users/halston/opt/anaconda3/lib/python3.7/site-packages (from importlib-metadata; python_version < "3.8"->jsonschema!=2.5.0,>=2.4->nbformat->notebook>=4.2->fileupload) (2.2.0)
###Markdown
Whew! That was a lot. All of the installs and imports for your word cloud script and uploader widget have been completed. **IMPORTANT!** If this was your first time running the above cell containing the installs and imports, you will need to save this notebook now. Then under the File menu above, select Close and Halt. When the notebook has completely shut down, reopen it. This is the only way the necessary changes will take effect.To upload your text file, run the following cell that contains all the code for a custom uploader widget. Once you run this cell, a "Browse" button should appear below it. Click this button and navigate the window to locate your saved text file.
###Code
# This is the uploader widget
def _upload():
_upload_widget = fileupload.FileUploadWidget()
def _cb(change):
global file_contents
decoded = io.StringIO(change['owner'].data.decode('utf-8'))
filename = change['owner'].filename
print('Uploaded `{}` ({:.2f} kB)'.format(
filename, len(decoded.read()) / 2 **10))
file_contents = decoded.getvalue()
_upload_widget.observe(_cb, names='data')
display(_upload_widget)
_upload()
###Output
_____no_output_____
###Markdown
The uploader widget saved the contents of your uploaded file into a string object named *file_contents* that your word cloud script can process. This was a lot of preliminary work, but you are now ready to begin your script. Write a function in the cell below that iterates through the words in *file_contents*, removes punctuation, and counts the frequency of each word. Oh, and be sure to make it ignore word case, words that do not contain all alphabets and boring words like "and" or "the". Then use it in the `generate_from_frequencies` function to generate your very own word cloud!**Hint:** Try storing the results of your iteration in a dictionary before passing them into wordcloud via the `generate_from_frequencies` function.
###Code
def calculate_frequencies(file_contents):
# Here is a list of punctuations and uninteresting words you can use to process your text
punctuations = '''!()-[]{};:'"\,<>./?@#$%^&*_~'''
uninteresting_words = ["the", "a", "to", "if", "is", "it", "of", "and", "or", "an", "as", "i", "me", "my", \
"we", "our", "ours", "you", "your", "yours", "he", "she", "him", "his", "her", "hers", "its", "they", "them", \
"their", "what", "which", "who", "whom", "this", "that", "am", "are", "was", "were", "be", "been", "being", \
"have", "has", "had", "do", "does", "did", "but", "at", "by", "with", "from", "here", "when", "where", "how", \
"all", "any", "both", "each", "few", "more", "some", "such", "no", "nor", "too", "very", "can", "will", "just"]
# LEARNER CODE START HERE
file_contents2 = ""
for index, char in enumerate(file_contents):
if char.isalpha() == True or char.isspace():
file_contents2 += char
file_contents2 = file_contents2.split()
file_without_uninteresting_words = []
for word in file_contents2:
if word.lower() not in uninteresting_words and word.isalpha() == True:
file_without_uninteresting_words.append(word)
frequencies = {}
for word in file_without_uninteresting_words:
if word.lower() not in frequencies:
frequencies[word.lower()] = 1
else:
frequencies[word.lower()] += 1
#wordcloud
cloud = wordcloud.WordCloud()
cloud.generate_from_frequencies(frequencies)
return cloud.to_array()
###Output
_____no_output_____
###Markdown
If you have done everything correctly, your word cloud image should appear after running the cell below. Fingers crossed!
###Code
# Display your wordcloud image
myimage = calculate_frequencies(file_contents)
plt.imshow(myimage, interpolation = 'nearest')
plt.axis('off')
plt.show()
###Output
_____no_output_____ |
experiments/3_incident_points_9_ambo/06_qambo_prioritised_replay_noisy_3dqn.ipynb | ###Markdown
Prioritised Replay Noisy Duelling Double Deep Q Learning - A simple ambulance dispatch point allocation model Reinforcement learning introduction RL involves:* Trial and error search* Receiving and maximising reward (often delayed)* Linking state -> action -> reward* Must be able to sense something of their environment* Involves uncertainty in sensing and linking action to reward* Learning -> improved choice of actions over time* All models find a way to balance best predicted action vs. exploration Elements of RL* *Environment*: all observable and unobservable information relevant to us* *Observation*: sensing the environment* *State*: the perceived (or perceivable) environment * *Agent*: senses environment, decides on action, receives and monitors rewards* *Action*: may be discrete (e.g. turn left) or continuous (accelerator pedal)* *Policy* (how to link state to action; often based on probabilities)* *Reward signal*: aim is to accumulate maximum reward over time* *Value function* of a state: prediction of likely/possible long-term reward* *Q*: prediction of likely/possible long-term reward of an *action** *Advantage*: The difference in Q between actions in a given state (sums to zero for all actions)* *Model* (optional): a simulation of the environment Types of model* *Model-based*: have model of environment (e.g. a board game)* *Model-free*: used when environment not fully known* *Policy-based*: identify best policy directly* *Value-based*: estimate value of a decision* *Off-policy*: can learn from historic data from other agent* *On-policy*: requires active learning from current decisions Duelling Deep Q Networks for Reinforcement LearningQ = The expected future rewards discounted over time. This is what we are trying to maximise.The aim is to teach a network to take the current state observations and recommend the action with greatest Q.Duelling is very similar to Double DQN, except that the policy net splits into two. One component reduces to a single value, which will model the state *value*. The other component models the *advantage*, the difference in Q between different actions (the mean value is subtracted from all values, so that the advantage always sums to zero). These are aggregated to produce Q for each action. Q is learned through the Bellman equation, where the Q of any state and action is the immediate reward achieved + the discounted maximum Q value (the best action taken) of the next best action, where gamma is the discount rate.$$Q(s,a)=r + \gamma \cdot \max_{a'} Q(s',a')$$ Key DQN components General method for Q learning:Overall aim is to create a neural network that predicts Q. Improvement comes from improved accuracy in predicting 'current' understood Q, and in revealing more about Q as knowledge is gained (some rewards only discovered after time). Target networks are used to stabilise models, and are only updated at intervals. Changes to Q values may lead to changes in closely related states (i.e. states close to the one we are in at the time) and as the network tries to correct for errors it can become unstable and suddenly lose significant performance. Target networks (e.g. to assess Q) are updated only infrequently (or gradually), so do not have this instability problem. Training networksDouble DQN contains two networks. 
This amendment, from simple DQN, is to decouple training of Q for the current state and target Q derived from the next state, which are closely correlated when comparing input features.The *policy network* is used to select action (action with best predicted Q) when playing the game.When training, the predicted best *action* (best predicted Q) is taken from the *policy network*, but the *policy network* is updated using the predicted Q value of the next state from the *target network* (which is updated from the policy network less frequently). So, when training, the action is selected using Q values from the *policy network*, but the *policy network* is updated to better predict the Q value of that action from the *target network*. The *policy network* is copied across to the *target network* every *n* steps (e.g. 1000). Noisy layersNoisy layers are an alternative to epsilon-greedy exploration (here, we leave the epsilon-greedy code in the model, but set it to reduce to zero immediately after the period of fully random action choice).For every weight in the layer we have a random value that we draw from the normal distribution. This random value is used to add noise to the output. The parameters for the extent of noise for each weight, sigma, are stored within the layer and get trained as part of the standard back-propagation.A modification to normal noisy layers is to use layers with ‘factorized gaussian noise’. This reduces the number of random numbers to be sampled (so is less computationally expensive). There are two random vectors, one with the size of the input, and the other with the size of the output. A random matrix is created by calculating the outer product of the two vectors. Prioritised replayIn standard DQN samples are taken randomly from the memory (replay buffer). In *prioritised replay* samples are taken in proportion to their loss when training the network; where the network has the greatest error in predicting the target value of a state/action, those samples will be sampled more frequently (which will reduce the error in the network until the sample is not prioritised). In other words, the training focuses more heavily on samples it gets most wrong, and spends less time training on samples that it can accurately predict already.This priority may also be used as a weight for training the network, but this is not implemented here; we use loss just for sampling.When we use the loss for priority we add a small value (1e-5) to the loss. This avoids any sample having zero priority (and never having a chance of being sampled). For frequency of sampling we also raise the loss to the power of 'alpha' (default value of 0.6). Smaller values of alpha will compress the differences between samples, making the priority weighting less significant in the frequency of sampling. ReferencesDouble DQN: van Hasselt H, Guez A, Silver D. (2015) Deep Reinforcement Learning with Double Q-learning. arXiv:150906461 http://arxiv.org/abs/1509.06461Duelling DDQN:Wang Z, Schaul T, Hessel M, et al. (2016) Dueling Network Architectures for Deep Reinforcement Learning. arXiv:151106581 http://arxiv.org/abs/1511.06581Noisy networks:Fortunato M, Azar MG, Piot B, et al. (2019) Noisy Networks for Exploration. arXiv:170610295 http://arxiv.org/abs/1706.10295Prioritised replay:Schaul T, Quan J, Antonoglou I, et al (2016). Prioritized Experience Replay. arXiv:151105952 http://arxiv.org/abs/1511.05952Code for the noisy layers comes from:Lapan, M. (2020). 
Deep Reinforcement Learning Hands-On: Apply modern RL methods to practical problems of chatbots, robotics, discrete optimization, web automation, and more, 2nd Edition. Packt Publishing. Code structure
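A minimal sketch (illustrative only, with made-up numbers) of three ideas described above: the duelling aggregation of value and advantage into Q, the Bellman target used to update Q, and the prioritised sampling probabilities. The full program below implements these in the DQN class, the optimize function and the NaivePrioritizedBuffer class.
###Code
# Illustrative sketch with hypothetical numbers - not part of the model below
import numpy as np

# Duelling aggregation: Q(s,a) = V(s) + (A(s,a) - mean(A)), so that the
# advantages sum to zero across actions
value = 2.0                              # V(s) from the value stream
advantage = np.array([0.5, -0.2, -0.3])  # A(s,a) from the advantage stream
q_values = value + (advantage - advantage.mean())

# Bellman target for the action taken: r + gamma * max_a' Q(s', a')
reward = -10.0
gamma = 0.99
next_q = np.array([-5.0, -2.0, -7.0])    # Q(s',a') from the target network
target_q = reward + gamma * next_q.max() # -10 + 0.99 * -2 = -11.98

# Prioritised sampling: frequency proportional to (loss + 1e-5) ** alpha
losses = np.array([0.5, 0.01, 2.0])
alpha = 0.6
probs = (losses + 1e-5) ** alpha
probs /= probs.sum()                     # normalised sampling probabilities
###Output
_____no_output_____
###Markdown
The complete program, including the simulation environment set-up, training loop and testing of the best model, follows.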
###Code
################################################################################
# 1 Import packages #
################################################################################
from amboworld.environment import Env
import math
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import random
import torch
import torch.nn as nn
import torch.optim as optim
from torch.nn import functional as F
# Use a double ended queue (deque) for memory
# When memory is full, this will replace the oldest value with the new one
from collections import deque
# Supress all warnings (e.g. deprecation warnings) for regular use
import warnings
warnings.filterwarnings("ignore")
################################################################################
# 2 Define model parameters #
################################################################################
# Set whether to display on screen (slows model)
DISPLAY_ON_SCREEN = False
# Discount rate of future rewards
GAMMA = 0.99
# Learning rate for neural network
LEARNING_RATE = 0.003
# Maximum number of game steps (state, action, reward, next state) to keep
MEMORY_SIZE = 10000000
# Sample batch size for policy network update
BATCH_SIZE = 5
# Number of game steps to play before starting training (all random actions)
REPLAY_START_SIZE = 50000
# Number of steps between policy -> target network update
SYNC_TARGET_STEPS = 1000
# Exploration rate (epsilon) is probability of choosing a random action
EXPLORATION_MAX = 1.0
EXPLORATION_MIN = 0.0
# Reduction in epsilon with each game step
EXPLORATION_DECAY = 0.0
# Training episodes
TRAINING_EPISODES = 50
# Save results
RESULTS_NAME = 'pr_noisy_d3qn'
# SIM PARAMETERS
RANDOM_SEED = 42
SIM_DURATION = 5000
NUMBER_AMBULANCES = 9
NUMBER_INCIDENT_POINTS = 3
INCIDENT_RADIUS = 2
NUMBER_DISPTACH_POINTS = 25
AMBOWORLD_SIZE = 50
INCIDENT_INTERVAL = 20
EPOCHS = 2
AMBO_SPEED = 60
AMBO_FREE_FROM_HOSPITAL = False
################################################################################
# 3 Define DQN (Duelling Deep Q Network) class #
# (Used for both policy and target nets) #
################################################################################
"""
Code for noisy layers comes from:
Lapan, M. (2020). Deep Reinforcement Learning Hands-On: Apply modern RL methods
to practical problems of chatbots, robotics, discrete optimization,
web automation, and more, 2nd Edition. Packt Publishing.
"""
class NoisyLinear(nn.Linear):
"""
Noisy layer for network.
For every weight in the layer we have a random value that we draw from the
    normal distribution. Parameters for the noise, sigma, are stored within the
    layer and get trained as part of the standard back-propagation.
    'register_buffer' is used to create tensors in the network that are not
    updated during back-propagation. They are used to create normal
    distributions to add noise (multiplied by sigma, which is a parameter in the
network).
"""
def __init__(self, in_features, out_features,
sigma_init=0.017, bias=True):
super(NoisyLinear, self).__init__(
in_features, out_features, bias=bias)
w = torch.full((out_features, in_features), sigma_init)
self.sigma_weight = nn.Parameter(w)
z = torch.zeros(out_features, in_features)
self.register_buffer("epsilon_weight", z)
if bias:
w = torch.full((out_features,), sigma_init)
self.sigma_bias = nn.Parameter(w)
z = torch.zeros(out_features)
self.register_buffer("epsilon_bias", z)
self.reset_parameters()
def reset_parameters(self):
std = math.sqrt(3 / self.in_features)
self.weight.data.uniform_(-std, std)
self.bias.data.uniform_(-std, std)
def forward(self, input):
self.epsilon_weight.normal_()
bias = self.bias
if bias is not None:
self.epsilon_bias.normal_()
bias = bias + self.sigma_bias * \
self.epsilon_bias.data
v = self.sigma_weight * self.epsilon_weight.data + self.weight
return F.linear(input, v, bias)
class NoisyFactorizedLinear(nn.Linear):
"""
NoisyNet layer with factorized gaussian noise. This reduces the number of
random numbers to be sampled (so less computationally expensive). There are
two random vectors. One with the size of the input, and the other with the
    size of the output. A random matrix is created by calculating the outer
    product of the two vectors.
    'register_buffer' is used to create tensors in the network that are not
    updated during back-propagation. They are used to create normal
    distributions to add noise (multiplied by sigma, which is a parameter in the
network).
"""
def __init__(self, in_features, out_features,
sigma_zero=0.4, bias=True):
super(NoisyFactorizedLinear, self).__init__(
in_features, out_features, bias=bias)
sigma_init = sigma_zero / math.sqrt(in_features)
w = torch.full((out_features, in_features), sigma_init)
self.sigma_weight = nn.Parameter(w)
z1 = torch.zeros(1, in_features)
self.register_buffer("epsilon_input", z1)
z2 = torch.zeros(out_features, 1)
self.register_buffer("epsilon_output", z2)
if bias:
w = torch.full((out_features,), sigma_init)
self.sigma_bias = nn.Parameter(w)
def forward(self, input):
self.epsilon_input.normal_()
self.epsilon_output.normal_()
func = lambda x: torch.sign(x) * torch.sqrt(torch.abs(x))
eps_in = func(self.epsilon_input.data)
eps_out = func(self.epsilon_output.data)
bias = self.bias
if bias is not None:
bias = bias + self.sigma_bias * eps_out.t()
noise_v = torch.mul(eps_in, eps_out)
v = self.weight + self.sigma_weight * noise_v
return F.linear(input, v, bias)
class DQN(nn.Module):
"""Deep Q Network. Udes for both policy (action) and target (Q) networks."""
def __init__(self, observation_space, action_space):
"""Constructor method. Set up neural nets."""
        # neurons per hidden layer = 2 * max of observations or actions
neurons_per_layer = 2 * max(observation_space, action_space)
# Set starting exploration rate
self.exploration_rate = EXPLORATION_MAX
# Set up action space (choice of possible actions)
self.action_space = action_space
        # First layers will be common to both advantage and value streams
super(DQN, self).__init__()
self.feature = nn.Sequential(
nn.Linear(observation_space, neurons_per_layer),
nn.ReLU()
)
# Advantage has same number of outputs as the action space
self.advantage = nn.Sequential(
NoisyFactorizedLinear(neurons_per_layer, neurons_per_layer),
nn.ReLU(),
NoisyFactorizedLinear(neurons_per_layer, action_space)
)
# State value has only one output (one value per state)
self.value = nn.Sequential(
nn.Linear(neurons_per_layer, neurons_per_layer),
nn.ReLU(),
nn.Linear(neurons_per_layer, 1)
)
def act(self, state):
"""Act either randomly or by redicting action that gives max Q"""
# Act randomly if random number < exploration rate
if np.random.rand() < self.exploration_rate:
action = random.randrange(self.action_space)
else:
# Otherwise get predicted Q values of actions
q_values = self.forward(torch.FloatTensor(state))
# Get index of action with best Q
action = np.argmax(q_values.detach().numpy()[0])
return action
def forward(self, x):
x = self.feature(x)
advantage = self.advantage(x)
value = self.value(x)
action_q = value + advantage - advantage.mean()
return action_q
################################################################################
# 4 Define policy net training function #
################################################################################
def optimize(policy_net, target_net, memory):
"""
Update model by sampling from memory.
Uses policy network to predict best action (best Q).
Uses target network to provide target of Q for the selected next action.
"""
    # Do not try to train model if memory is less than required batch size
if len(memory) < BATCH_SIZE:
return
# Reduce exploration rate (exploration rate is stored in policy net)
policy_net.exploration_rate *= EXPLORATION_DECAY
policy_net.exploration_rate = max(EXPLORATION_MIN,
policy_net.exploration_rate)
# Sample a random batch from memory
batch = memory.sample(BATCH_SIZE)
for state, action, reward, state_next, terminal, index in batch:
state_action_values = policy_net(torch.FloatTensor(state))
# Get target Q for policy net update
if not terminal:
# For non-terminal actions get Q from policy net
expected_state_action_values = policy_net(torch.FloatTensor(state))
# Detach next state values from gradients to prevent updates
expected_state_action_values = expected_state_action_values.detach()
# Get next state action with best Q from the policy net (double DQN)
policy_next_state_values = policy_net(torch.FloatTensor(state_next))
policy_next_state_values = policy_next_state_values.detach()
best_action = np.argmax(policy_next_state_values[0].numpy())
# Get target net next state
next_state_action_values = target_net(torch.FloatTensor(state_next))
# Use detach again to prevent target net gradients being updated
next_state_action_values = next_state_action_values.detach()
best_next_q = next_state_action_values[0][best_action].numpy()
updated_q = reward + (GAMMA * best_next_q)
expected_state_action_values[0][action] = updated_q
else:
            # For terminal actions Q = reward (-1)
expected_state_action_values = policy_net(torch.FloatTensor(state))
# Detach values from gradients to prevent gradient update
expected_state_action_values = expected_state_action_values.detach()
# Set Q for all actions to reward (-1)
expected_state_action_values[0] = reward
# Set net to training mode
policy_net.train()
# Reset net gradients
policy_net.optimizer.zero_grad()
# calculate loss
loss_v = nn.MSELoss()(state_action_values, expected_state_action_values)
# Backpropogate loss
loss_v.backward()
# Update replay buffer (add 1e-5 to loss to avoid zero priority with no
# chance of being sampled).
loss_numpy = loss_v.data.numpy()
memory.update_priorities(index, loss_numpy + 1e-5)
# Update network gradients
policy_net.optimizer.step()
return
################################################################################
# 5 Define prioritised replay memory class #
################################################################################
class NaivePrioritizedBuffer():
"""
Based on code from https://github.com/higgsfield/RL-Adventure
Each sample (state, action, reward, next_state, done) has an associated
priority, which is the loss from training the policy network. The priority
is used to adjust the frequency of sampling.
"""
def __init__(self, capacity=MEMORY_SIZE, prob_alpha=0.6):
self.prob_alpha = prob_alpha
self.capacity = capacity
self.buffer = []
self.pos = 0
self.priorities = np.zeros((capacity,), dtype=np.float32)
def remember(self, state, action, reward, next_state, done):
"""
Add sample (state, action, reward, next_state, done) to memory, or
replace oldest sample if memory full"""
max_prio = self.priorities.max() if self.buffer else 1.0
if len(self.buffer) < self.capacity:
# Add new sample when room in memory
self.buffer.append((state, action, reward, next_state, done))
else:
# Replace sample when memory full
self.buffer[self.pos] = (state, action, reward, next_state, done)
# Set maximum priority present
self.priorities[self.pos] = max_prio
# Increment replacement position
self.pos = (self.pos + 1) % self.capacity
def sample(self, batch_size, beta=0.4):
# Get priorities
if len(self.buffer) == self.capacity:
prios = self.priorities
else:
prios = self.priorities[:self.pos]
        # Raise priorities to the power of 'alpha'
# (lower alpha compresses differences)
probs = prios ** self.prob_alpha
        # Normalise priorities
probs /= probs.sum()
# Sample using priorities for relative sampling frequency
indices = np.random.choice(len(self.buffer), batch_size, p=probs)
samples = [self.buffer[idx] for idx in indices]
# Add index to sample (used to update priority after getting new loss)
batch = []
for index, sample in enumerate(samples):
sample = list(sample)
sample.append(indices[index])
batch.append(sample)
return batch
def update_priorities(self, index, priority):
"""Update sample priority with new loss"""
self.priorities[index] = priority
def __len__(self):
return len(self.buffer)
################################################################################
# 6 Define results plotting function #
################################################################################
def plot_results(run, exploration, score, mean_call_to_arrival,
mean_assignment_to_arrival):
"""Plot and report results at end of run"""
# Set up chart (ax1 and ax2 share x-axis to combine two plots on one graph)
fig = plt.figure(figsize=(6,6))
ax1 = fig.add_subplot(111)
ax2 = ax1.twinx()
# Plot results
lns1 = ax1.plot(
run, exploration, label='exploration', color='g', linestyle=':')
lns2 = ax2.plot(run, mean_call_to_arrival,
label='call to arrival', color='r')
lns3 = ax2.plot(run, mean_assignment_to_arrival,
label='assignment to arrival', color='b', linestyle='--')
# Get combined legend
lns = lns1 + lns2 + lns3
labs = [l.get_label() for l in lns]
ax1.legend(lns, labs, loc='upper center', bbox_to_anchor=(0.5, -0.1), ncol=3)
# Set axes
ax1.set_xlabel('run')
ax1.set_ylabel('exploration')
ax2.set_ylabel('Response time')
filename = 'output/' + RESULTS_NAME +'.png'
plt.savefig(filename, dpi=300)
plt.show()
################################################################################
# 7 Main program #
################################################################################
def qambo():
"""Main program loop"""
############################################################################
# 8 Set up environment #
############################################################################
    # Set up game environment
sim = Env(
random_seed = RANDOM_SEED,
duration_incidents = SIM_DURATION,
number_ambulances = NUMBER_AMBULANCES,
number_incident_points = NUMBER_INCIDENT_POINTS,
incident_interval = INCIDENT_INTERVAL,
number_epochs = EPOCHS,
number_dispatch_points = NUMBER_DISPTACH_POINTS,
incident_range = INCIDENT_RADIUS,
max_size = AMBOWORLD_SIZE,
ambo_kph = AMBO_SPEED,
ambo_free_from_hospital = AMBO_FREE_FROM_HOSPITAL
)
# Get number of observations returned for state
observation_space = sim.observation_size
# Get number of actions possible
action_space = sim.action_number
############################################################################
# 9 Set up policy and target nets #
############################################################################
# Set up policy and target neural nets (and keep best net performance)
policy_net = DQN(observation_space, action_space)
target_net = DQN(observation_space, action_space)
best_net = DQN(observation_space, action_space)
# Set loss function and optimizer
policy_net.optimizer = optim.Adam(
params=policy_net.parameters(), lr=LEARNING_RATE)
# Copy weights from policy_net to target
target_net.load_state_dict(policy_net.state_dict())
# Set target net to eval rather than training mode
    # We do not train target net - it is copied from policy net at intervals
target_net.eval()
############################################################################
# 10 Set up memory #
############################################################################
    # Set up memory
memory = NaivePrioritizedBuffer()
############################################################################
# 11 Set up + start training loop #
############################################################################
# Set up run counter and learning loop
run = 0
all_steps = 0
continue_learning = True
best_reward = -np.inf
# Set up list for results
results_run = []
results_exploration = []
results_score = []
results_mean_call_to_arrival = []
results_mean_assignment_to_arrival = []
# Continue repeating games (episodes) until target complete
while continue_learning:
########################################################################
# 12 Play episode #
########################################################################
# Increment run (episode) counter
run += 1
########################################################################
# 13 Reset game #
########################################################################
# Reset game environment and get first state observations
state = sim.reset()
# Reset total reward and rewards list
total_reward = 0
rewards = []
        # Reshape state into 2D array with state observations as first 'row'
state = np.reshape(state, [1, observation_space])
# Continue loop until episode complete
while True:
####################################################################
# 14 Game episode loop #
####################################################################
####################################################################
# 15 Get action #
####################################################################
            # Get action to take (set eval mode to avoid dropout layers)
policy_net.eval()
action = policy_net.act(state)
####################################################################
# 16 Play action (get S', R, T) #
####################################################################
# Act
state_next, reward, terminal, info = sim.step(action)
total_reward += reward
# Update trackers
rewards.append(reward)
# Reshape state into 2D array with state observations as first 'row'
state_next = np.reshape(state_next, [1, observation_space])
# Update display if needed
if DISPLAY_ON_SCREEN:
sim.render()
####################################################################
# 17 Add S/A/R/S/T to memory #
####################################################################
# Record state, action, reward, new state & terminal
memory.remember(state, action, reward, state_next, terminal)
# Update state
state = state_next
####################################################################
# 18 Check for end of episode #
####################################################################
# Actions to take if end of game episode
if terminal:
# Get exploration rate
exploration = policy_net.exploration_rate
# Clear print row content
clear_row = '\r' + ' ' * 79 + '\r'
print(clear_row, end='')
print(f'Run: {run}, ', end='')
print(f'Exploration: {exploration: .3f}, ', end='')
average_reward = np.mean(rewards)
print(f'Average reward: {average_reward:4.1f}, ', end='')
mean_assignment_to_arrival = np.mean(info['assignment_to_arrival'])
print(f'Mean assignment to arrival: {mean_assignment_to_arrival:4.1f}, ', end='')
mean_call_to_arrival = np.mean(info['call_to_arrival'])
print(f'Mean call to arrival: {mean_call_to_arrival:4.1f}, ', end='')
demand_met = info['fraction_demand_met']
print(f'Demand met {demand_met:0.3f}')
# Add to results lists
results_run.append(run)
results_exploration.append(exploration)
results_score.append(total_reward)
results_mean_call_to_arrival.append(mean_call_to_arrival)
results_mean_assignment_to_arrival.append(mean_assignment_to_arrival)
# Save model if best reward
total_reward = np.sum(rewards)
if total_reward > best_reward:
best_reward = total_reward
# Copy weights to best net
best_net.load_state_dict(policy_net.state_dict())
################################################################
# 18b Check for end of learning #
################################################################
if run == TRAINING_EPISODES:
continue_learning = False
# End episode loop
break
####################################################################
# 19 Update policy net #
####################################################################
# Avoid training model if memory is not of sufficient length
if len(memory) > REPLAY_START_SIZE:
# Update policy net
optimize(policy_net, target_net, memory)
################################################################
# 20 Update target net periodically #
################################################################
# Use load_state_dict method to copy weights from policy net
if all_steps % SYNC_TARGET_STEPS == 0:
target_net.load_state_dict(policy_net.state_dict())
############################################################################
# 21 Learning complete - plot and save results #
############################################################################
    # Learning complete. Plot results
plot_results(results_run, results_exploration, results_score,
results_mean_call_to_arrival, results_mean_assignment_to_arrival)
# SAVE RESULTS
run_details = pd.DataFrame()
run_details['run'] = results_run
run_details['exploration '] = results_exploration
run_details['mean_call_to_arrival'] = results_mean_call_to_arrival
run_details['mean_assignment_to_arrival'] = results_mean_assignment_to_arrival
filename = 'output/' + RESULTS_NAME + '.csv'
run_details.to_csv(filename, index=False)
############################################################################
# Test best model #
############################################################################
print()
print('Test Model')
print('----------')
best_net.exploration_rate = 0
best_net.eval()
# Set up results dictionary
results = dict()
results['call_to_arrival'] = []
results['assign_to_arrival'] = []
results['demand_met'] = []
# Replicate model runs
for run in range(30):
# Reset game environment and get first state observations
state = sim.reset()
state = np.reshape(state, [1, observation_space])
# Continue loop until episode complete
while True:
            # Get action to take (set eval mode to avoid dropout layers)
best_net.eval()
action = best_net.act(state)
# Act
state_next, reward, terminal, info = sim.step(action)
# Reshape state into 2D array with state observations as first 'row'
state_next = np.reshape(state_next, [1, observation_space])
# Update state
state = state_next
if terminal:
print(f'Run: {run}, ', end='')
mean_assignment_to_arrival = np.mean(info['assignment_to_arrival'])
print(f'Mean assignment to arrival: {mean_assignment_to_arrival:4.1f}, ', end='')
mean_call_to_arrival = np.mean(info['call_to_arrival'])
print(f'Mean call to arrival: {mean_call_to_arrival:4.1f}, ', end='')
demand_met = info['fraction_demand_met']
print(f'Demand met: {demand_met:0.3f}')
# Add to results
results['call_to_arrival'].append(mean_call_to_arrival)
results['assign_to_arrival'].append(mean_assignment_to_arrival)
results['demand_met'].append(demand_met)
# End episode loop
break
results = pd.DataFrame(results)
filename = './output/results_' + RESULTS_NAME +'.csv'
results.to_csv(filename, index=False)
print()
print(results.describe())
return run_details
######################## MODEL ENTRY POINT #####################################
# Run model and return last run results
last_run = qambo()
###Output
Run: 1, Exploration: 1.000, Average reward: -445.2, Mean assignment to arrival: 18.4, Mean call to arrival: 19.0, Demand met 1.000
Run: 2, Exploration: 1.000, Average reward: -445.7, Mean assignment to arrival: 18.4, Mean call to arrival: 18.9, Demand met 1.000
Run: 3, Exploration: 1.000, Average reward: -448.0, Mean assignment to arrival: 18.6, Mean call to arrival: 19.1, Demand met 1.000
Run: 4, Exploration: 1.000, Average reward: -440.7, Mean assignment to arrival: 18.4, Mean call to arrival: 18.9, Demand met 0.999
Run: 5, Exploration: 1.000, Average reward: -448.9, Mean assignment to arrival: 18.6, Mean call to arrival: 19.1, Demand met 0.999
Run: 6, Exploration: 1.000, Average reward: -455.0, Mean assignment to arrival: 18.7, Mean call to arrival: 19.2, Demand met 1.000
Run: 7, Exploration: 1.000, Average reward: -438.2, Mean assignment to arrival: 18.3, Mean call to arrival: 18.8, Demand met 1.000
Run: 8, Exploration: 1.000, Average reward: -444.2, Mean assignment to arrival: 18.4, Mean call to arrival: 18.9, Demand met 1.000
Run: 9, Exploration: 1.000, Average reward: -454.8, Mean assignment to arrival: 18.7, Mean call to arrival: 19.2, Demand met 1.000
Run: 10, Exploration: 1.000, Average reward: -458.7, Mean assignment to arrival: 18.7, Mean call to arrival: 19.2, Demand met 1.000
Run: 11, Exploration: 0.000, Average reward: -284.8, Mean assignment to arrival: 14.7, Mean call to arrival: 15.2, Demand met 1.000
Run: 12, Exploration: 0.000, Average reward: -212.0, Mean assignment to arrival: 13.3, Mean call to arrival: 13.8, Demand met 1.000
Run: 13, Exploration: 0.000, Average reward: -308.6, Mean assignment to arrival: 15.2, Mean call to arrival: 15.7, Demand met 1.000
Run: 14, Exploration: 0.000, Average reward: -247.1, Mean assignment to arrival: 13.8, Mean call to arrival: 14.3, Demand met 1.000
Run: 15, Exploration: 0.000, Average reward: -299.2, Mean assignment to arrival: 15.0, Mean call to arrival: 15.5, Demand met 1.000
Run: 16, Exploration: 0.000, Average reward: -335.2, Mean assignment to arrival: 15.9, Mean call to arrival: 16.4, Demand met 1.000
Run: 17, Exploration: 0.000, Average reward: -257.1, Mean assignment to arrival: 14.3, Mean call to arrival: 14.8, Demand met 1.000
Run: 18, Exploration: 0.000, Average reward: -173.5, Mean assignment to arrival: 11.8, Mean call to arrival: 12.3, Demand met 1.000
Run: 19, Exploration: 0.000, Average reward: -165.3, Mean assignment to arrival: 11.5, Mean call to arrival: 12.0, Demand met 1.000
Run: 20, Exploration: 0.000, Average reward: -204.8, Mean assignment to arrival: 12.8, Mean call to arrival: 13.3, Demand met 1.000
Run: 21, Exploration: 0.000, Average reward: -212.4, Mean assignment to arrival: 12.9, Mean call to arrival: 13.4, Demand met 1.000
Run: 22, Exploration: 0.000, Average reward: -233.6, Mean assignment to arrival: 13.8, Mean call to arrival: 14.3, Demand met 1.000
Run: 23, Exploration: 0.000, Average reward: -221.4, Mean assignment to arrival: 13.1, Mean call to arrival: 13.6, Demand met 1.000
Run: 24, Exploration: 0.000, Average reward: -222.2, Mean assignment to arrival: 13.3, Mean call to arrival: 13.8, Demand met 1.000
Run: 25, Exploration: 0.000, Average reward: -214.5, Mean assignment to arrival: 13.1, Mean call to arrival: 13.6, Demand met 1.000
Run: 26, Exploration: 0.000, Average reward: -219.8, Mean assignment to arrival: 13.1, Mean call to arrival: 13.6, Demand met 1.000
Run: 27, Exploration: 0.000, Average reward: -272.4, Mean assignment to arrival: 14.3, Mean call to arrival: 14.8, Demand met 1.000
Run: 28, Exploration: 0.000, Average reward: -222.6, Mean assignment to arrival: 12.5, Mean call to arrival: 13.0, Demand met 1.000
Run: 29, Exploration: 0.000, Average reward: -296.4, Mean assignment to arrival: 15.3, Mean call to arrival: 15.8, Demand met 1.000
Run: 30, Exploration: 0.000, Average reward: -205.5, Mean assignment to arrival: 12.8, Mean call to arrival: 13.3, Demand met 1.000
Run: 31, Exploration: 0.000, Average reward: -172.4, Mean assignment to arrival: 11.5, Mean call to arrival: 12.0, Demand met 1.000
Run: 32, Exploration: 0.000, Average reward: -251.0, Mean assignment to arrival: 13.9, Mean call to arrival: 14.4, Demand met 1.000
Run: 33, Exploration: 0.000, Average reward: -234.2, Mean assignment to arrival: 13.3, Mean call to arrival: 13.8, Demand met 1.000
Run: 34, Exploration: 0.000, Average reward: -241.8, Mean assignment to arrival: 13.8, Mean call to arrival: 14.3, Demand met 1.000
Run: 35, Exploration: 0.000, Average reward: -254.4, Mean assignment to arrival: 13.9, Mean call to arrival: 14.4, Demand met 1.000
Run: 36, Exploration: 0.000, Average reward: -260.9, Mean assignment to arrival: 14.0, Mean call to arrival: 14.5, Demand met 1.000
Run: 37, Exploration: 0.000, Average reward: -246.3, Mean assignment to arrival: 13.6, Mean call to arrival: 14.1, Demand met 1.000
Run: 38, Exploration: 0.000, Average reward: -236.1, Mean assignment to arrival: 13.3, Mean call to arrival: 13.8, Demand met 1.000
Run: 39, Exploration: 0.000, Average reward: -285.7, Mean assignment to arrival: 14.9, Mean call to arrival: 15.4, Demand met 1.000
Run: 40, Exploration: 0.000, Average reward: -287.6, Mean assignment to arrival: 14.9, Mean call to arrival: 15.4, Demand met 1.000
Run: 41, Exploration: 0.000, Average reward: -256.3, Mean assignment to arrival: 14.1, Mean call to arrival: 14.6, Demand met 1.000
Run: 42, Exploration: 0.000, Average reward: -276.6, Mean assignment to arrival: 14.6, Mean call to arrival: 15.1, Demand met 1.000
Run: 43, Exploration: 0.000, Average reward: -260.9, Mean assignment to arrival: 14.4, Mean call to arrival: 14.9, Demand met 1.000
Run: 44, Exploration: 0.000, Average reward: -261.0, Mean assignment to arrival: 14.1, Mean call to arrival: 14.6, Demand met 1.000
Run: 45, Exploration: 0.000, Average reward: -309.2, Mean assignment to arrival: 15.5, Mean call to arrival: 16.0, Demand met 1.000
Run: 46, Exploration: 0.000, Average reward: -227.7, Mean assignment to arrival: 13.3, Mean call to arrival: 13.8, Demand met 1.000
Run: 47, Exploration: 0.000, Average reward: -262.1, Mean assignment to arrival: 14.4, Mean call to arrival: 14.9, Demand met 1.000
Run: 48, Exploration: 0.000, Average reward: -230.3, Mean assignment to arrival: 13.4, Mean call to arrival: 13.9, Demand met 1.000
Run: 49, Exploration: 0.000, Average reward: -294.3, Mean assignment to arrival: 15.3, Mean call to arrival: 15.8, Demand met 1.000
Run: 50, Exploration: 0.000, Average reward: -298.1, Mean assignment to arrival: 15.3, Mean call to arrival: 15.8, Demand met 1.000
|
lijin-THU:notes-python/04-scipy/04.06-integration-in-python.ipynb | ###Markdown
Integration Symbolic integration The relationship between integration and differentiation: $$\frac{d}{dx} F(x) = f(x)\Rightarrow F(x) = \int f(x) dx$$ Symbolic computation can be done with the `sympy` module. First import `init_printing` so that results are displayed nicely:
###Code
from sympy import init_printing
init_printing()
from sympy import symbols, integrate
import sympy
###Output
_____no_output_____
###Markdown
Create two symbolic variables, x and y, and do some computation with them:
###Code
x, y = symbols('x y')
sympy.sqrt(x ** 2 + y ** 2)
###Output
_____no_output_____
###Markdown
For the resulting symbolic variable `z`, we use the `subs` method to substitute `3` for `x`:
###Code
z = sympy.sqrt(x ** 2 + y ** 2)
z.subs(x, 3)
###Output
_____no_output_____
###Markdown
Then substitute for `y` as well:
###Code
z.subs(x, 3).subs(y, 4)
###Output
_____no_output_____
###Markdown
Ready-made symbolic variables can also be imported from `sympy.abc`:
###Code
from sympy.abc import theta
y = sympy.sin(theta) ** 2
y
###Output
_____no_output_____
###Markdown
Integrate y:
###Code
Y = integrate(y)
Y
###Output
_____no_output_____
###Markdown
Compute $Y(\pi) - Y(0)$:
###Code
import numpy as np
np.set_printoptions(precision=3)
Y.subs(theta, np.pi) - Y.subs(theta, 0)
###Output
_____no_output_____
###Markdown
Compute $\int_0^\pi y d\theta$:
###Code
integrate(y, (theta, 0, sympy.pi))
###Output
_____no_output_____
###Markdown
The result is displayed as a symbolic expression; to see its numerical value, use the `evalf()` method, or pass in `numpy.pi` instead of `sympy.pi`:
###Code
integrate(y, (theta, 0, sympy.pi)).evalf()
integrate(y, (theta, 0, np.pi))
###Output
_____no_output_____
###Markdown
By the Newton-Leibniz formula (the fundamental theorem of calculus), these two values should be equal. Create an indefinite integral object:
###Code
Y_indef = sympy.Integral(y)
Y_indef
print type(Y_indef)
###Output
<class 'sympy.integrals.integrals.Integral'>
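A quick sketch (added here, not part of the original notebook): an unevaluated `Integral` object can be evaluated later with its `doit` method, and the result agrees with calling `integrate` directly:
```python
# evaluate the unevaluated Integral object created above
Y_eval = Y_indef.doit()
# the difference with the direct integration simplifies to zero
assert sympy.simplify(Y_eval - integrate(y)) == 0
```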
###Markdown
A definite integral:
###Code
Y_def = sympy.Integral(y, (theta, 0, sympy.pi))
Y_def
###Output
_____no_output_____
###Markdown
Define the function $Y(x) = \int_0^x sin^2(\theta) d\theta$ and vectorize it:
###Code
Y_raw = lambda x: integrate(y, (theta, 0, x))
Y = np.vectorize(Y_raw)
%matplotlib inline
import matplotlib.pyplot as plt
x = np.linspace(0, 2 * np.pi)
p = plt.plot(x, Y(x))
t = plt.title(r'$Y(x) = \int_0^x sin^2(\theta) d\theta$')
###Output
_____no_output_____
###Markdown
Numerical integration. The idea of numerical integration: $$F(x) = \lim_{n \rightarrow \infty} \sum_{i=0}^{n-1} f(x_i)(x_{i+1}-x_i) \Rightarrow F(x) = \int_{x_0}^{x_n} f(x) dx$$ Import the Bessel function:
###Code
from scipy.special import jv
def f(x):
return jv(2.5, x)
x = np.linspace(0, 10)
p = plt.plot(x, f(x), 'k-')
###Output
_____no_output_____
###Markdown
The `quad` function. The principle behind quadrature integration is described at http://en.wikipedia.org/wiki/Numerical_integration#Quadrature_rules_based_on_interpolating_functions. `quad` returns a tuple of (integral value, error estimate):
###Code
from scipy.integrate import quad
interval = [0, 6.5]
value, max_err = quad(f, *interval)
###Output
_____no_output_____
###Markdown
The integral value:
###Code
print value
###Output
1.28474297234
###Markdown
The maximum error:
###Code
print max_err
###Output
2.34181853668e-09
###Markdown
A plot of the integration interval, with positive regions in blue and negative regions in red:
###Code
print "integral = {:.9f}".format(value)
print "upper bound on error: {:.2e}".format(max_err)
x = np.linspace(0, 10, 100)
p = plt.plot(x, f(x), 'k-')
x = np.linspace(0, 6.5, 45)
p = plt.fill_between(x, f(x), where=f(x)>0, color="blue")
p = plt.fill_between(x, f(x), where=f(x)<0, color="red", interpolate=True)
###Output
integral = 1.284742972
upper bound on error: 2.34e-09
###Markdown
Integrating to infinity
###Code
from numpy import inf
interval = [0., inf]
def g(x):
return np.exp(-x ** 1/2)
value, max_err = quad(g, *interval)
x = np.linspace(0, 10, 50)
fig = plt.figure(figsize=(10,3))
p = plt.plot(x, g(x), 'k-')
p = plt.fill_between(x, g(x))
plt.annotate(r"$\int_0^{\infty}e^{-x^1/2}dx = $" + "{}".format(value), (4, 0.6),
fontsize=16)
print "upper bound on error: {:.1e}".format(max_err)
###Output
upper bound on error: 7.2e-11
###Markdown
Double integrals. Suppose we want to evaluate the following integral: $$ I_n = \int \limits_0^{\infty} \int \limits_1^{\infty} \frac{e^{-xt}}{t^n}dt dx = \frac{1}{n}$$
###Code
def h(x, t, n):
"""core function, takes x, t, n"""
return np.exp(-x * t) / (t ** n)
###Output
_____no_output_____
###Markdown
One approach is to call the `quad` function twice. However, `quad`'s return value cannot be vectorized directly, so the `vectorize` decorator is used to vectorize it:
###Code
from numpy import vectorize
@vectorize
def int_h_dx(t, n):
"""Time integrand of h(x)."""
return quad(h, 0, np.inf, args=(t, n))[0]
@vectorize
def I_n(n):
return quad(int_h_dx, 1, np.inf, args=(n))
I_n([0.5, 1.0, 2.0, 5])
###Output
_____no_output_____
###Markdown
Alternatively, call the `dblquad` function directly and pass in the integration limits; there are several ways to pass them, and the limits passed last are integrated first:
###Code
from scipy.integrate import dblquad
@vectorize
def I(n):
    """Same as I_n, but using the built-in dblquad"""
    # outer integral: t from 1 to inf; inner integral: x from 0 to inf
    return dblquad(h,
                   1, np.inf,
                   lambda t: 0, lambda t: np.inf,
                   args=(n,))[0]
I([0.5, 1.0, 2.0, 5])
###Output
_____no_output_____
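As an alternative sketch (assuming `scipy.integrate.nquad` is available in the installed SciPy version), the same double integral can be written with `nquad`, whose ranges are listed from the innermost variable outwards:
```python
from scipy.integrate import nquad

# x is integrated over [0, inf) first (innermost), then t over [1, inf)
value, err = nquad(h, [[0, np.inf], [1, np.inf]], args=(2.0,))
# for n = 2 the value should be close to 1/2
```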
###Markdown
Integration from sampled points: the trapz method and the simps method
###Code
from scipy.integrate import trapz, simps
###Output
_____no_output_____
###Markdown
The `sin` function, sampled at `100` points and at `5` points:
###Code
x_s = np.linspace(0, np.pi, 5)
y_s = np.sin(x_s)
x = np.linspace(0, np.pi, 100)
y = np.sin(x)
p = plt.plot(x, y, 'k:')
p = plt.plot(x_s, y_s, 'k+-')
p = plt.fill_between(x_s, y_s, color="gray")
###Output
_____no_output_____
###Markdown
Integrate these sample points using the [trapezoidal rule](https://en.wikipedia.org/wiki/Trapezoidal_rule) and [Simpson's rule](https://en.wikipedia.org/wiki/Simpson%27s_rule) (the exact value of the integral is 2):
###Code
result_s = trapz(y_s, x_s)
result_s_s = simps(y_s, x_s)
result = trapz(y, x)
print "Trapezoidal Integration over 5 points : {:.3f}".format(result_s)
print "Simpson Integration over 5 points : {:.3f}".format(result_s_s)
print "Trapezoidal Integration over 100 points : {:.3f}".format(result)
###Output
Trapezoidal Integration over 5 points : 1.896
Simpson Integration over 5 points : 2.005
Trapezoidal Integration over 100 points : 2.000
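As a small cross-check (not in the original notebook), the composite trapezoidal rule $\sum_i \frac{1}{2}(y_i + y_{i+1})(x_{i+1} - x_i)$ can be written directly with numpy and reproduces the `trapz` results above:
```python
def manual_trapz(y_vals, x_vals):
    # sum the areas of the trapezoids between consecutive sample points
    return np.sum((y_vals[1:] + y_vals[:-1]) / 2.0 * (x_vals[1:] - x_vals[:-1]))

assert np.allclose(manual_trapz(y_s, x_s), trapz(y_s, x_s))
assert np.allclose(manual_trapz(y, x), trapz(y, x))
```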
###Markdown
Integration using ufuncs. `Numpy` provides many `ufunc` objects:
###Code
type(np.add)
np.info(np.add.accumulate)
###Output
accumulate(array, axis=0, dtype=None, out=None)
Accumulate the result of applying the operator to all elements.
For a one-dimensional array, accumulate produces results equivalent to::
r = np.empty(len(A))
t = op.identity # op = the ufunc being applied to A's elements
for i in range(len(A)):
t = op(t, A[i])
r[i] = t
return r
For example, add.accumulate() is equivalent to np.cumsum().
For a multi-dimensional array, accumulate is applied along only one
axis (axis zero by default; see Examples below) so repeated use is
necessary if one wants to accumulate over multiple axes.
Parameters
----------
array : array_like
The array to act on.
axis : int, optional
The axis along which to apply the accumulation; default is zero.
dtype : data-type code, optional
The data-type used to represent the intermediate results. Defaults
to the data-type of the output array if such is provided, or the
the data-type of the input array if no output array is provided.
out : ndarray, optional
A location into which the result is stored. If not provided a
freshly-allocated array is returned.
Returns
-------
r : ndarray
The accumulated values. If `out` was supplied, `r` is a reference to
`out`.
Examples
--------
1-D array examples:
>>> np.add.accumulate([2, 3, 5])
array([ 2, 5, 10])
>>> np.multiply.accumulate([2, 3, 5])
array([ 2, 6, 30])
2-D array examples:
>>> I = np.eye(2)
>>> I
array([[ 1., 0.],
[ 0., 1.]])
Accumulate along axis 0 (rows), down columns:
>>> np.add.accumulate(I, 0)
array([[ 1., 0.],
[ 1., 1.]])
>>> np.add.accumulate(I) # no axis specified = axis zero
array([[ 1., 0.],
[ 1., 1.]])
Accumulate along axis 1 (columns), through rows:
>>> np.add.accumulate(I, 1)
array([[ 1., 1.],
[ 0., 1.]])
###Markdown
`np.add.accumulate` is equivalent to `cumsum`:
###Code
result_np = np.add.accumulate(y) * (x[1] - x[0]) - (x[1] - x[0]) / 2
p = plt.plot(x, - np.cos(x) + np.cos(0), 'rx')
p = plt.plot(x, result_np)
###Output
_____no_output_____
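A one-line check, added for clarity, that the two formulations really agree:
```python
assert np.allclose(np.add.accumulate(y), np.cumsum(y))
```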
###Markdown
Speed comparison. Compute the integral: $$\int_0^x sin \theta d\theta$$
###Code
import sympy
from sympy.abc import x, theta
sympy_x = x
x = np.linspace(0, 20 * np.pi, 1e+4)
y = np.sin(x)
sympy_y = vectorize(lambda x: sympy.integrate(sympy.sin(theta), (theta, 0, x)))
###Output
_____no_output_____
###Markdown
The `numpy` method:
###Code
%timeit np.add.accumulate(y) * (x[1] - x[0])
y0 = np.add.accumulate(y) * (x[1] - x[0])
print y0[-1]
###Output
The slowest run took 4.32 times longer than the fastest. This could mean that an intermediate result is being cached
10000 loops, best of 3: 56.2 µs per loop
-2.34138044756e-17
###Markdown
The `quad` method:
###Code
%timeit quad(np.sin, 0, 20 * np.pi)
y2 = quad(np.sin, 0, 20 * np.pi, full_output=True)
print "result = ", y2[0]
print "number of evaluations", y2[-1]['neval']
###Output
10000 loops, best of 3: 40.5 µs per loop
result = 3.43781337153e-15
number of evaluations 21
###Markdown
The `trapz` method:
###Code
%timeit trapz(y, x)
y1 = trapz(y, x)
print y1
###Output
10000 loops, best of 3: 105 µs per loop
-4.4408920985e-16
###Markdown
The `simps` method:
###Code
%timeit simps(y, x)
y3 = simps(y, x)
print y3
###Output
1000 loops, best of 3: 801 µs per loop
3.28428554968e-16
###Markdown
The `sympy` integration method:
###Code
%timeit sympy_y(20 * np.pi)
y4 = sympy_y(20 * np.pi)
print y4
###Output
100 loops, best of 3: 6.86 ms per loop
0
|
voila/notebooks/bqplot.ipynb | ###Markdown
So easy, *voilà*!In this example notebook, we demonstrate how voila can render custom Jupyter widgets such as [bqplot](https://github.com/bloomberg/bqplot).
###Code
!pip install bqplot
import warnings
warnings.filterwarnings('ignore')
import numpy as np
from bqplot import pyplot as plt
plt.figure(1, title='Line Chart')
np.random.seed(0)
n = 200
x = np.linspace(0.0, 10.0, n)
y = np.cumsum(np.random.randn(n))
plt.plot(x, y)
plt.show()
###Output
_____no_output_____ |
examples/climate-resilience/notebooks/scratch.ipynb | ###Markdown
---
###Code
import geopandas as gpd
site_json_file_path = "../../data/LMsites.json"
sites = gpd.read_file(site_json_file_path)
print(sites.shape)
sites
import importlib
from climate_resilience import downloader as dd
importlib.reload(dd)
output_dir = "gee_downloader_testing" # Folder located in Google Drive. Will be created if not already present.
site_json_file_path = "../../../data/LMsites.json"
yaml_path = "../scripts/download_params.yml"
sd_obj = dd.SitesDownloader(
folder=output_dir,
site_json_file_path=site_json_file_path,
# latitude_range=(30, 50),
longitude_range=(-150, -120),
)
print(sd_obj.sites.shape)
sd_obj.sites
sd_obj.sites
'1' + 3
###Output
_____no_output_____ |
tutorials/W3D2_HiddenDynamics/student/W3D2_Outro.ipynb | ###Markdown
Outro **Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs** Video
###Code
# @markdown
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1Qk4y127kB", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"5yKREz2kchE", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
Daily surveyDon't forget to complete your reflections and content check in the daily survey! Please be patient after logging in as there isa small delay before you will be redirected to the survey. Slides
###Code
# @markdown
from IPython.display import IFrame
IFrame(src=f"https://mfr.ca-1.osf.io/render?url=https://osf.io/nqcy3/?direct%26mode=render%26action=download%26mode=render", width=854, height=480)
###Output
_____no_output_____
###Markdown
Outro **Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs** Video
###Code
# @markdown
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1Qk4y127kB", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"5yKREz2kchE", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
ReflectionsDon't forget to complete your reflections and content checks! Slides
###Code
# @markdown
from IPython.display import IFrame
IFrame(src=f"https://mfr.ca-1.osf.io/render?url=https://osf.io/nqcy3/?direct%26mode=render%26action=download%26mode=render", width=854, height=480)
###Output
_____no_output_____
###Markdown
Outro **Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs** Video
###Code
# @markdown
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1Qk4y127kB", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"5yKREz2kchE", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
Daily surveyDon't forget to complete your reflections and content check in the daily survey! Please be patient after logging in as there isa small delay before you will be redirected to the survey. Slides
###Code
# @markdown
from IPython.display import IFrame
IFrame(src=f"https://mfr.ca-1.osf.io/render?url=https://osf.io/nqcy3/?direct%26mode=render%26action=download%26mode=render", width=854, height=480)
###Output
_____no_output_____
###Markdown
[](https://kaggle.com/kernels/welcome?src=https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/tutorials/W3D2_HiddenDynamics/student/W3D2_Outro.ipynb) Outro **Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs** Video
###Code
# @markdown
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1Qk4y127kB", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"5yKREz2kchE", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
ReflectionsDon't forget to complete your reflections and content checks! Slides
###Code
# @markdown
from IPython.display import IFrame
IFrame(src=f"https://mfr.ca-1.osf.io/render?url=https://osf.io/nqcy3/?direct%26mode=render%26action=download%26mode=render", width=854, height=480)
###Output
_____no_output_____ |
my_attempts/model_4.ipynb | ###Markdown
Non-linear Regression Analysis. To study different forms of non-linear regression. Import the required modules:
###Code
import numpy as np
import pandas as pd
from scipy.optimize import curve_fit
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
There are 6 main forms of regression, represented algebraically as:Linear: $$y = mx + c$$Quadratic: $$y = ax^2 + bx + c$$Cubic: $$y = ax^3 + bx^2 + cx + d$$Exponential: $$y = e^x$$Logarithmic: $$y = \log (x)$$Sigmoidal: $$y = a + \frac{b}{1 + c^{(x -d)}}$$Note that the polynomial functions can continue to arbitrarily high degree, but those aren't very useful in most machine learning cases.These equations can be graphically represented as:
###Code
x = np.arange(-5, 5, 0.1)
y_lin = 2*x + 3
y_sq = np.power(x, 2)
y_cub = x**3 + x**2 + x + 3
y_exp = np.exp(x)
y_log = np.log(x)
y_sig = 1 - 4/(1 + np.power(3, x-2))
y_noise = 2*np.random.normal(size=x.size)
fig, axes = plt.subplots(2, 3, figsize=(14, 10))
axes[0, 0].plot(x, y_lin + y_noise) + axes[0, 0].plot(x, y_lin, color='red')
axes[0, 0].set_title('Linear function')
axes[0, 1].plot(x, y_sq + y_noise) + axes[0, 1].plot(x, y_sq, color='red')
axes[0, 1].set_title('Quadratic function')
axes[0, 2].plot(x, y_cub + 10*y_noise) + axes[0, 2].plot(x, y_cub, color='red')
axes[0, 2].set_title('Cubic function')
axes[1, 0].plot(x, y_exp + 3*y_noise) + axes[1, 0].plot(x, y_exp, color='red')
axes[1, 0].set_title('Exponential function')
axes[1, 1].plot(x, y_log + y_noise/10) + axes[1, 1].plot(x, y_log, color='red')
axes[1, 1].set_title('Logarithmic function')
axes[1, 2].plot(x, y_sig + y_noise/10) + axes[1, 2].plot(x, y_sig, color='red')
axes[1, 2].set_title('Sigmoidal function')
plt.show()
###Output
<ipython-input-2-cce48272129f>:6: RuntimeWarning: invalid value encountered in log
y_log = np.log(x)
###Markdown
To see a more practical example, let's look at China's GDP data between 1960 and 2015:
###Code
df = pd.read_csv("../datasets/china_gdp.csv")
df.head()
###Output
_____no_output_____
###Markdown
Let's see how this data looks:
###Code
plt.scatter(df['Year'].values, df['Value'].values)
plt.show()
###Output
_____no_output_____
###Markdown
Even though the graph looks like an exponential function, it is known that exponential functions observed in real life are almost always just sigmoidal functions (since no resource can be exhausted to infinity, so every exponential graph must flatten at some point). Thus, we will create a sigmoid model to fit to this data:
###Code
def sig_func(x, b_1, b_2):
return 1/(1 + np.exp(-b_1*(x - b_2)))
y_model = sig_func(df['Year'].values, 0.1, 1990)
###Output
_____no_output_____
###Markdown
Let's look at the sigmoid equation (also called the logistic equation) again, but in a different form this time:$$\hat{y} = \frac{1}{1 + e^{-\beta_1(x - \beta_2)}}$$The parameter $\beta_1$ determines the steepness of the curve, i.e. how fast the values rise from ~ 0 to ~ 1. The parameter $\beta_2$ determines the position of the curve on the x-axis, i.e. at what value of x the value of y begins to rise. Now, let's apply this sigmoid function to the GDP data:
###Code
plt.plot(df['Year'].values, y_model*15000000000000)
plt.scatter(df['Year'].values, df['Value'].values, color='red')
plt.show()
###Output
_____no_output_____
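To illustrate the role of the two parameters described above (a small sketch added for clarity, not part of the original analysis), we can plot the same logistic curve for a few values of $\beta_1$ with $\beta_2$ fixed at 1990; larger $\beta_1$ gives a steeper transition around $x = \beta_2$:
```python
x_demo = np.linspace(1960, 2020, 200)
for b_1 in [0.05, 0.1, 0.5]:
    # steeper rise around beta_2 = 1990 as b_1 grows
    plt.plot(x_demo, sig_func(x_demo, b_1, 1990), label='b_1 = {}'.format(b_1))
plt.legend()
plt.show()
```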
###Markdown
As is quite visible, the two graphs don't match up. To make this work, we need to find out the values of $\beta_1$ and $\beta_2$. This is essentially what machine learning does: it finds out the value of these parameters so that the resulting graph overlaps with the given data to a good extent (the phrase 'good extent' is what data scientists have to define for their context). To begin this, we need to normalize the x and y values for the algorithm to work:
###Code
x, y = df['Year'].values/max(df['Year'].values), df['Value'].values/max(df['Value'].values)
###Output
_____no_output_____
###Markdown
Now, the `curve_fit` function from `scipy` can determine the best values for the function we provide to fit the model to the data we provide:
###Code
beta, _ = curve_fit(sig_func, x, y)
###Output
_____no_output_____
###Markdown
The values of $\beta_1$ and $\beta_2$ are determined to be:
###Code
from IPython.display import display, Latex, Markdown
display(Markdown("{} = {}, {} = {}".format(r"$\beta_1$", beta[0], r"$\beta _2$", beta[1])))
###Output
_____no_output_____
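Before plotting, we can also quantify the quality of the fit numerically. This is a small addition (not in the original notebook) that computes the coefficient of determination $R^2$ on the normalized data used for fitting:
```python
y_pred = sig_func(x, *beta)                 # model prediction on the normalized inputs
ss_res = np.sum((y - y_pred) ** 2)          # residual sum of squares
ss_tot = np.sum((y - np.mean(y)) ** 2)      # total sum of squares
r_squared = 1 - ss_res / ss_tot
print("R^2 = {:.4f}".format(r_squared))
```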
###Markdown
To see how good our model is, let's plot it on the original dataset:
###Code
x_new = np.linspace(1960, 2015, 55)
x_new /= max(x_new)
y_new = sig_func(x_new, *beta)
plt.figure(figsize=(8, 6))
plt.scatter(x, y, label='data')
plt.plot(x_new, y_new, label='model', color='red')
plt.xlim((0.9725, 1.0005))
plt.xlabel('Year')
plt.ylabel('GDP')
plt.show()
###Output
_____no_output_____ |
1.4. SEPP Tree, Alpha, Beta, Rarefy (Qiime2).ipynb | ###Markdown
This notebook uses the Qiime2-2020.2 environment to construct a phylogenetic tree, assign taxonomy, perform rarefaction, calculate alpha and beta diversities, and generate PCoA plots
###Code
### 1. import files
qiime tools import \
--input-path ../data/57316_mros_deblur_otus_unrare.biom \
--type 'FeatureTable[Frequency]' \
--input-format BIOMV210Format \
--output-path ../data/57316_mros_deblur_otus_unrare.qza
qiime tools import \
--input-path ../Qiita_Study11274_ID57316/57316_reference-hit.seqs.fa \
--output-path ../data/57316_reference-hit.seqs.qza \
--type "FeatureData[Sequence]"
qiime tools import \
--input-path ../Qiita_Study11274_ID57316/57316_insertion_tree.relabelled.tre \
--output-path ../data/57316_sepp_tree.qza \
--type 'Phylogeny[Rooted]'
# filter feature table that only contains fragments that are in the insertion tree
qiime fragment-insertion filter-features \
--i-table ../data/57316_mros_deblur_otus_unrare.qza \
--i-tree ../data/57316_sepp_tree.qza \
--o-filtered-table ../data/57316_filtered-table-deblur.qza \
--o-removed-table ../data/57316_removed-table.qza
### 2. assign taxonomy
wget https://github.com/BenKaehler/readytowear/raw/master/data/gg_13_8/515f-806r/human-stool.qza
wget https://github.com/BenKaehler/readytowear/raw/master/data/gg_13_8/515f-806r/ref-seqs-v4.qza
wget https://github.com/BenKaehler/readytowear/raw/master/data/gg_13_8/515f-806r/ref-tax.qza
qiime feature-classifier fit-classifier-naive-bayes \
--i-reference-reads ../data/ref-seqs-v4.qza \
--i-reference-taxonomy ../data/ref-tax.qza \
--i-class-weight ../data/human-stool.qza \
--o-classifier ../data/gg138_v4_human-stool_classifier.qza
qiime feature-classifier classify-sklearn \
--i-reads ../data/57316_reference-hit.seqs.qza \
--i-classifier ../data/gg138_v4_human-stool_classifier.qza \
--o-classification ../data/57316_bespoke-taxonomy.qza
qiime metadata tabulate \
--m-input-file ../data/57316_bespoke-taxonomy.qza \
--m-input-file ../data/57316_reference-hit.seqs.qza \
--o-visualization ../visu/57316_bespoke-taxonomy.qzv
qiime tools export \
--input-path ../data/57316_bespoke-taxonomy.qza \
--output-path ../data/57316_deblur_taxonomy
### 3. Alpha rarefaction
qiime diversity alpha-rarefaction \
--i-table ../data/57316_filtered-table-deblur.qza \
--i-phylogeny ../data/57316_sepp_tree.qza \
--p-max-depth 50000 \
--m-metadata-file ../data/mapping_MrOS_add.txt \
--o-visualization ../visu/57316-alpha-rarefaction.qzv
### 4. Compute alpha and beta diversities (lose one sample at 11111)
qiime diversity core-metrics-phylogenetic \
--i-table ../data/57316_filtered-table-deblur.qza \
--i-phylogeny ../data/57316_sepp_tree.qza \
--p-sampling-depth 11111 \
--m-metadata-file ../data/mapping_MrOS_add.txt \
--p-n-jobs 1 \
--output-dir ../data/57316-core-metrics-results
# export alpha and beta diversities
qiime tools export \
--input-path ../data/57316-core-metrics-results/faith_pd_vector.qza \
--output-path ../data/57316-alpha_PD
qiime tools export \
--input-path ../data/57316-core-metrics-results/unweighted_unifrac_pcoa_results.qza \
--output-path ../data/57316-unweighted_unifrac_pcoa_results
qiime tools export \
--input-path ../data/57316-core-metrics-results/unweighted_unifrac_distance_matrix.qza \
--output-path ../data/57316-unweighted_unifrac_distance
# rarefy feature table
qiime feature-table rarefy \
--i-table ../data/57316_filtered-table-deblur.qza \
--p-sampling-depth 11111 \
--o-rarefied-table ../data/57316_mros_deblur_otus_rare.qza
# convert qza to biom
qiime tools export \
--input-path ../data/57316_mros_deblur_otus_rare.qza \
--output-path ../data/57316_mros_otus_rare_exp
# convert biom to txt
biom convert -i ../data/57316_mros_otus_rare_exp/feature-table.biom \
-o ../data/57316_mros_otus_rare_exp/57316_feature-table-rare.txt \
--to-tsv
###Output
_____no_output_____ |
silx/processing/histogram/solution/histogram.ipynb | ###Markdown
Histogram vs Histogram_lut
###Code
import numpy
from silx.math.histogram import Histogramnd, HistogramndLut
from silx.gui.plot import Plot1D, Plot2D
%gui qt
###Output
_____no_output_____
###Markdown
This function creates some data with noise.
###Code
def createDataSet():
shape = (1000, 1000)
xcenter = shape[0]/2
ycenter = shape[1]/2
t = numpy.zeros(shape)
y, x=numpy.ogrid[:t.shape[0], :t.shape[1]]
r=1.0+numpy.sin(numpy.sqrt((x-xcenter)**2+(y-ycenter)**2)/20.0)
return r + numpy.random.rand(shape[0], shape[1])
data = createDataSet()
###Output
_____no_output_____
###Markdown
Simple display of the first element of the list
###Code
p = Plot2D()
p.addImage(legend='dataExample', data=data)
p.show()
###Output
_____no_output_____
###Markdown
Exercise: use Histogramnd to compute an azimuthal integration. We compute the radius to the center for each pixel
###Code
def computeradius(data):
xcenter=data.shape[0]/2
ycenter=data.shape[1]/2
y, x=numpy.ogrid[:data.shape[0], :data.shape[1]]
r=numpy.sqrt((x-xcenter)**2+(y-ycenter)**2)
return r
radii = computeradius(data)
plotRadii = Plot2D()
plotRadii.addImage(radii)
plotRadii.show()
###Output
_____no_output_____
###Markdown
Plot the histogram of the radii. Documentation: - http://pythonhosted.org/silx/modules/math/histogram.html
###Code
nb_bins = int(numpy.ceil(radii.max()))
histo_range = [0, nb_bins]
histogram=Histogramnd(sample=radii.ravel(),
n_bins=nb_bins,
histo_range=histo_range)
plotHisto = Plot1D()
plotHisto.addCurve(x=range(nb_bins), y=histogram.histo, color='red')
plotHisto.show()
###Output
_____no_output_____
###Markdown
Compute the azimuthal integration. Goal: get the mean contribution of each pixel for each radius. Step 1: get the contribution of each pixel for each radius
###Code
nb_bins = int(numpy.ceil(radii.max()))
histo_range = [0, nb_bins]
histogram=Histogramnd(sample=radii.ravel(),
n_bins=nb_bins,
histo_range=histo_range,
weights=data.ravel())
###Output
_____no_output_____
###Markdown
Step 2: get the mean and plot it
###Code
plotHisto = Plot1D()
binscenter=(histogram.edges[0][1:] + histogram.edges[0][0:-1]) / 2.0
plotHisto.addCurve(x=binscenter, y=histogram.histo, legend='h unweighted')
plotHisto.addCurve(x=binscenter, y=histogram.weighted_histo, legend='h weighted')
normalization=histogram.weighted_histo/histogram.histo
plotHisto.addCurve(x=binscenter, y=normalization, legend='integration')
plotHisto.show()
###Output
_____no_output_____
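As an independent cross-check (a sketch added here, not part of the original exercise), the same radial mean can be computed with plain `numpy.bincount`, using the integer part of each radius as the bin index; up to small effects at the outermost bins it should match the `Histogramnd`-based integration above:
```python
bin_idx = radii.ravel().astype(int)                       # bin index = integer part of the radius
counts = np.bincount(bin_idx, minlength=nb_bins)
sums = np.bincount(bin_idx, weights=data.ravel(), minlength=nb_bins)
radial_mean = sums[:nb_bins] / counts[:nb_bins]
# compare with the Histogramnd-based result stored in `normalization`
```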
###Markdown
Exercise: compute the azimuthal integration over n images. We want to reproduce the same operation but over a stack of images: - the pixel distance to the center does not change - only the pixel values do
###Code
dataset = [ createDataSet() for i in range(10) ]
###Output
_____no_output_____
###Markdown
First way : using Histogramnd
###Code
def computeDataSetHisto():
histogram=None
for d in dataset:
if histogram is None:
histogram=Histogramnd(radii.ravel(),
n_bins=nb_bins,
histo_range=histo_range,
weights=d.ravel())
else:
histogram.accumulate(radii.ravel(), weights=d.ravel())
return histogram
# plot It
plotDataSetHistoNd = Plot1D()
histogramDS = computeDataSetHisto()
binscenter=(histogramDS.edges[0][1:] + histogramDS.edges[0][0:-1]) / 2.0
normalization=histogramDS.weighted_histo/histogramDS.histo
plotDataSetHistoNd.addCurve(x=binscenter, y=normalization, color='red')
plotDataSetHistoNd.show()
###Output
_____no_output_____
###Markdown
second way : using HistogramndLut
###Code
def computeDataSetHistoLut():
histogram=HistogramndLut(radii.ravel(),
n_bins=nb_bins,
histo_range=histo_range)
for d in dataset:
histogram.accumulate(d.ravel())
return histogram
# plot It
plotDataSetHistoLut = Plot1D()
histogramLut = computeDataSetHistoLut()
normalization=histogramLut.weighted_histo()/histogramDS.histo
plotDataSetHistoLut.addCurve(binscenter, y=normalization, color='red')
plotDataSetHistoLut.show()
###Output
_____no_output_____
###Markdown
Compare results
###Code
numpy.array_equal(histogramLut.weighted_histo(), histogramDS.weighted_histo)
###Output
_____no_output_____
###Markdown
Compare execution time
###Code
%timeit computeDataSetHisto()
%timeit computeDataSetHistoLut()
###Output
_____no_output_____
###Markdown
Histogram vs Histogram_lut
###Code
import numpy
from silx.math.histogram import Histogramnd, HistogramndLut
from silx.gui.plot import Plot1D, Plot2D
%gui qt
###Output
_____no_output_____
###Markdown
This function creates some data with noise.
###Code
def createDataSet():
shape = (1000, 1000)
xcenter = shape[0]/2
ycenter = shape[1]/2
t = numpy.zeros(shape)
y, x = numpy.ogrid[:t.shape[0], :t.shape[1]]
r = 1.0 + numpy.sin(numpy.sqrt((x-xcenter)**2+(y-ycenter)**2)/20.0)
return r + numpy.random.rand(shape[0], shape[1])
data = createDataSet()
###Output
_____no_output_____
###Markdown
Simple display of the first element of the list
###Code
p = Plot2D()
p.addImage(legend='dataExample', data=data)
p.show()
###Output
_____no_output_____
###Markdown
Exercise: use Histogramnd to compute an azimuthal integration. We compute the radius to the center for each pixel
###Code
def computeradius(data):
xcenter = data.shape[0] / 2
ycenter = data.shape[1] / 2
y, x = numpy.ogrid[:data.shape[0], :data.shape[1]]
r=numpy.sqrt((x-xcenter)**2+(y-ycenter)**2)
return r
radii = computeradius(data)
plotRadii = Plot2D()
plotRadii.addImage(radii)
plotRadii.show()
###Output
_____no_output_____
###Markdown
Plot the histogram of the radii. Documentation: - http://www.silx.org/doc/silx/dev/modules/math/histogram.html
###Code
nb_bins = int(numpy.ceil(radii.max()))
histo_range = [0, nb_bins]
histogram=Histogramnd(sample=radii.ravel(),
n_bins=nb_bins,
histo_range=histo_range)
plotHisto = Plot1D()
plotHisto.addCurve(x=range(nb_bins), y=histogram.histo, color='red')
plotHisto.show()
###Output
_____no_output_____
###Markdown
Compute the azimuthal integration. Goal: get the mean contribution of each pixel for each radius. Step 1: get the contribution of each pixel for each radius
###Code
nb_bins = int(numpy.ceil(radii.max()))
histo_range = [0, nb_bins]
histogram = Histogramnd(sample=radii.ravel(),
n_bins=nb_bins,
histo_range=histo_range,
weights=data.ravel())
###Output
_____no_output_____
###Markdown
Step 2: get the mean and plot it
###Code
plotHisto = Plot1D()
binscenter = (histogram.edges[0][1:] + histogram.edges[0][0:-1]) / 2.0
plotHisto.addCurve(x=binscenter,
y=histogram.histo,
legend='h unweighted')
plotHisto.addCurve(x=binscenter,
y=histogram.weighted_histo,
legend='h weighted')
normalization = histogram.weighted_histo / histogram.histo
plotHisto.addCurve(x=binscenter,
y=normalization,
legend='integration')
plotHisto.show()
###Output
_____no_output_____
###Markdown
Exercise: compute the azimuthal integration over n images. We want to reproduce the same operation but over a stack of images: - the pixel distance to the center does not change - only the pixel values do
###Code
dataset = [createDataSet() for i in range(10)]
###Output
_____no_output_____
###Markdown
First way : using Histogramnd
###Code
def computeDataSetHisto():
histogram = None
for d in dataset:
if histogram is None:
histogram = Histogramnd(radii.ravel(),
n_bins=nb_bins,
histo_range=histo_range,
weights=d.ravel())
else:
histogram.accumulate(radii.ravel(), weights=d.ravel())
return histogram
# plot It
plotDataSetHistoNd = Plot1D()
histogramDS = computeDataSetHisto()
binscenter = (histogramDS.edges[0][1:] + histogramDS.edges[0][0:-1]) / 2.0
normalization = histogramDS.weighted_histo / histogramDS.histo
plotDataSetHistoNd.addCurve(x=binscenter, y=normalization, color='red')
plotDataSetHistoNd.show()
###Output
_____no_output_____
###Markdown
second way : using HistogramndLut
###Code
def computeDataSetHistoLut():
histogram = HistogramndLut(radii.ravel(),
n_bins=nb_bins,
histo_range=histo_range)
for d in dataset:
histogram.accumulate(d.ravel())
return histogram
# plot It
plotDataSetHistoLut = Plot1D()
histogramLut = computeDataSetHistoLut()
normalization = histogramLut.weighted_histo() / histogramDS.histo
plotDataSetHistoLut.addCurve(binscenter,
y=normalization,
color='red')
plotDataSetHistoLut.show()
###Output
_____no_output_____
###Markdown
Compare results
###Code
numpy.array_equal(histogramLut.weighted_histo(), histogramDS.weighted_histo)
###Output
_____no_output_____
###Markdown
Compare execution time
###Code
%timeit computeDataSetHisto()
%timeit computeDataSetHistoLut()
###Output
_____no_output_____ |
examples/tutorial/08_Advanced_Dashboards.ipynb | ###Markdown
Tutorial 8. Advanced Dashboards At this point we have learned how to build interactive apps and dashboards with Panel, how to quickly build visualizations with hvPlot, and how to add custom interactivity by using HoloViews. In this section we will work on putting all of this together to build complex and efficient data processing pipelines, controlled by Panel widgets.
###Code
import colorcet as cc
import dask.dataframe as dd
import holoviews as hv
import numpy as np
import panel as pn
import xarray as xr
import hvplot.pandas # noqa: API import
import hvplot.xarray # noqa: API import
pn.extension()
###Output
_____no_output_____
###Markdown
Before we get started let's once again load the earthquake and population data and define the basic plots, which we will build the dashboard around.
###Code
df = dd.read_parquet('../data/earthquakes.parq').repartition(npartitions=4).persist()
most_severe = df[df.mag >= 7].compute()
ds = xr.open_dataarray('../data/gpw_v4_population_density_rev11_2010_2pt5_min.nc')
cleaned_ds = ds.where(ds.values != ds.nodatavals).sel(band=1)
cleaned_ds.name = 'population'
mag_cmap = cc.CET_L4[::-1]
high_mag_points = most_severe.hvplot.points(
x='longitude', y='latitude', c='mag', hover_cols=['place', 'time'],
cmap=mag_cmap, tools=['tap']).opts(selection_line_color='black')
rasterized_pop = cleaned_ds.hvplot.image(
rasterize=True, cmap='kbc_r', height=500, width=833,
xaxis=None, yaxis=None).opts(logz=True)
###Output
_____no_output_____
###Markdown
Building Pipelines In the previous sections we built a little function to cache the closest earthquakes since the computation can take a little while. An alternative to this approach is to start building a pipeline in HoloViews to do this very thing. Instead of writing a function that operates directly on the data, we rewrite the function to accept a Dataset and the index. This function again filters the closest earthquakes within the region and returns a new Dataset:
###Code
from holoviews.streams import Selection1D
def earthquakes_around_point(ds, index, degrees_dist=0.5):
if not index:
return ds.iloc[[]]
row = high_mag_points.data.iloc[index[0]]
half_dist = degrees_dist / 2.0
df = ds.data
nearest = df[((df['latitude'] - row.latitude).abs() < half_dist)
& ((df['longitude'] - row.longitude).abs() < half_dist)].compute()
return hv.Dataset(nearest)
###Output
_____no_output_____
###Markdown
Now we declare a HoloViews ``Dataset``, an ``Selection1D`` stream and use the ``apply`` method to apply the function to the dataset. The most important part is that we can now provide the selection stream's index parameter to this apply method. This sets up a pipeline which filters the Dataset based on the current index:
###Code
dataset = hv.Dataset(df)
index_stream = Selection1D(source=high_mag_points, index=[-3])
filtered_ds = dataset.apply(earthquakes_around_point, index=index_stream.param.index)
###Output
_____no_output_____
###Markdown
The filtered Dataset object itself doesn't actually display anything but it provides an intermediate pipeline stage which will feed the actual visualizations. The next step therefore is to extend this pipeline to build the visualizations from this filtered dataset. For this purpose we define some functions which take the dataset as input and then generate a plot:
###Code
hv.opts.defaults(
hv.opts.Histogram(toolbar=None),
hv.opts.Scatter(toolbar=None)
)
def histogram(ds):
return ds.data.hvplot.hist(y='mag', bin_range=(0,10), bins=20, color='red', width=400, height=250)
def scatter(ds):
return ds.data.hvplot.scatter('time', 'mag', color='green', width=400, height=250, padding=0.1)
# We also redefine the VLine
def vline_callback(index):
if not index:
return hv.VLine(0)
row = most_severe.iloc[index[0]]
return hv.VLine(row.time).opts(line_width=1, color='black')
temporal_vline = hv.DynamicMap(vline_callback, streams=[index_stream])
dynamic_scatter = filtered_ds.apply(scatter)
dynamic_histogram = filtered_ds.apply(histogram)
###Output
_____no_output_____
###Markdown
Now that we have defined our visualizations using lazily evaluated pipelines we can start visualizing it. This time we will use Panel to lay out the plots:
###Code
pn.Column(rasterized_pop * high_mag_points, pn.Row(dynamic_scatter * temporal_vline, dynamic_histogram))
###Output
_____no_output_____
###Markdown
ExerciseDefine another function like the ``histogram`` or ``scatter`` function and then ``apply`` it to the ``filtered_ds``. Observe how this too will respond to changes in the selected earthquake. Solution```pythondef bivariate(ds): return ds.data.hvplot.bivariate('mag', 'depth') filtered_ds.apply(bivariate)``` Connecting widgets to the pipelineAt this point you may be thinking that we haven't done anything we haven't already seen in the previous sections. However, apart from automatically handling the caching of computations, building visualization pipelines in this way provides one major benefit - we can inject parameters at any stage of the pipeline. These parameters can come from anywhere including from Panel widgets, allowing us to expose control over any aspect of our pipeline. You may have noticed that the ``earthquakes_around_point`` function takes two arguments, the ``index`` of the point **and** the ``degrees_dist``, which defines the size of the region around the selected earthquake we will select points in. Using ``.apply`` we can declare a ``FloatSlider`` widget and then inject its ``value`` parameter into the pipeline (ensure that an earthquake is selected in the map above):
###Code
dist_slider = pn.widgets.FloatSlider(name='Degree Distance', value=0.5, start=0.1, end=2)
filtered_ds = dataset.apply(earthquakes_around_point, index=index_stream.param.index,
degrees_dist=dist_slider.param.value)
pn.Column(dist_slider, pn.Row(filtered_ds.apply(histogram), filtered_ds.apply(scatter)))
###Output
_____no_output_____
###Markdown
When the widget value changes the pipeline will re-execute the part of the pipeline downstream from the function and update the plot. This ensures that only the parts of the pipeline that are actually needed are re-executed.The ``.apply`` method can also be used to apply options depending on some widget value, e.g. we can create a colormap selector and then use ``.apply.opts`` to connect it to the ``rasterized_pop`` plot:
###Code
cmaps = {n: cc.palette[n][::-1] for n in ['kbc', 'fire', 'bgy', 'bgyw', 'bmy', 'gray', 'kbc']}
cmap_selector = pn.widgets.Select(name='Colormap', options=cmaps)
rasterized_pop_cmapped = rasterized_pop.apply.opts(cmap=cmap_selector.param.value)
pn.Column(cmap_selector, rasterized_pop_cmapped)
###Output
_____no_output_____
###Markdown
ExerciseUse the ``.apply.opts`` method to control the style of some existing component, e.g. the ``size`` of the points in the ``dynamic_scatter`` plot or the ``color`` of the ``dynamic_histogram``.HintUse a ``ColorPicker`` widget to control the ``color`` or a ``FloatSlider`` widget to control the ``size``. Solution```pythoncolor_picker = pn.widgets.ColorPicker(name='Color', value='00f300')size_slider = pn.widgets.FloatSlider(name='Size', value=5, start=1, end=30) color_histogram = dynamic_histogram.apply.opts(color=color_picker.param.value)size_scatter = dynamic_scatter.apply.opts(size=size_slider.param.value) pn.Column( pn.Row(color_picker, size_slider), pn.Row(color_histogram, size_scatter))``` Connecting panels to streamsAt this point we have learned how to connect parameters on Panel objects to a pipeline and we earlier learned how we can use parameters to declare dynamic Panel components. So, this section should be nothing new;, we will simply try to connect the index parameter of the selection stream to a panel to try to compute the number of people in the region around an earthquake.Since we have a population density dataset we can approximate how many people are affected by a particular earthquake. Of course, this value is only a rough approximation, as it ignores the curvature of the earth, assumes isotropic spreading of the earthquake, and assumes that the population did not change between the measurement and the earthquake.
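The approximation used in the code below can be spelled out as follows (an explanatory note added here, not from the original tutorial): with $\Delta$ the selected box size in degrees, one degree of latitude is taken as $\approx 111.32$ km and one degree of longitude as $\approx 111.32 \cos(\mathrm{lat})$ km, so$$\text{population} \approx \bar{\rho} \cdot (111.32\,\Delta) \cdot (111.32 \cos(\mathrm{lat})\,\Delta),$$where $\bar{\rho}$ is the mean population density over the selected box.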
###Code
@pn.depends(index_stream.param.index, dist_slider.param.value)
def affected_population(index, distance):
if not index:
return "No earthquake was selected."
sel = most_severe.iloc[index[0]]
lon, lat = sel.longitude, sel.latitude
lon_dist = (np.cos(np.deg2rad(lat)) * 111.321543) * distance
lat_dist = 111.321543 * distance
hdist = distance / 2.
mean_density = cleaned_ds.sel(x=slice(lon-hdist, lon+hdist), y=slice(lat+hdist, lat-hdist)).mean().item()
population = (lat_dist * lon_dist) * mean_density
return 'Approximate population around {place}, where a magnitude {mag} earthquake hit on {date} is {pop:.0f}.'.format(
pop=population, mag=sel.mag, place=sel.place, date=sel.time)
def bounds(index, value):
if not index:
return hv.Bounds((0, 0, 0, 0))
sel = most_severe.iloc[index[0]]
hdist = value / 2.
lon, lat = sel.longitude, sel.latitude
return hv.Bounds((lon-hdist, lat-hdist, lon+hdist, lat+hdist))
dynamic_bounds = hv.DynamicMap(bounds, streams=[index_stream, dist_slider.param.value])
pn.Column(pn.panel(affected_population, width=400), rasterized_pop * high_mag_points * dynamic_bounds, dist_slider)
###Output
_____no_output_____
###Markdown
The full dashboardFinally let us put all these components together into an overall dashboard, which we will mark as ``servable`` so we can ``panel serve`` this notebook.
###Code
title = '## Major Earthquakes 2000-2018'
logo = pn.panel('../assets/usgs_logo.png', width=200)
widgets = pn.WidgetBox(dist_slider, cmap_selector)
header = pn.Row(pn.Column(title, pn.panel(affected_population, width=400)),
pn.layout.Spacer(width=10), widgets, pn.layout.HSpacer(), logo)
dynamic_scatter = filtered_ds.apply(scatter)
dynamic_histogram = filtered_ds.apply(histogram)
temporal_vline = hv.DynamicMap(vline_callback, streams=[index_stream])
rasterized_pop_cmapped = rasterized_pop.apply.opts(cmap=cmap_selector.param.value)
dynamic_bounds = hv.DynamicMap(bounds, streams=[index_stream, dist_slider.param.value])
body = pn.Row(
rasterized_pop_cmapped * high_mag_points * dynamic_bounds,
pn.Column(dynamic_scatter * temporal_vline, dynamic_histogram),
)
pn.Column(header, body).servable()
###Output
_____no_output_____
###Markdown
Tutorial 8. Advanced Dashboards At this point we have learned how to build interactive apps and dashboards with Panel, how to quickly build visualizations with hvPlot, and how to add custom interactivity by using HoloViews. In this section we will work on putting all of this together to build complex and efficient data processing pipelines, controlled by Panel widgets.
###Code
import colorcet as cc
import dask.dataframe as dd
import holoviews as hv
import numpy as np
import panel as pn
import xarray as xr
import hvplot.pandas # noqa: API import
import hvplot.xarray # noqa: API import
pn.extension()
###Output
_____no_output_____
###Markdown
Before we get started let's once again load the earthquake and population data and define the basic plots, which we will build the dashboard around.
###Code
df = dd.read_parquet('../data/earthquakes.parq').repartition(npartitions=4).persist()
most_severe = df[df.mag >= 7].compute()
ds = xr.open_dataarray('../data/gpw_v4_population_density_rev11_2010_2pt5_min.nc')
cleaned_ds = ds.where(ds.values != ds.nodatavals).sel(band=1)
cleaned_ds.name = 'population'
mag_cmap = cc.CET_L4[::-1]
high_mag_points = most_severe.hvplot.points(
x='longitude', y='latitude', c='mag', hover_cols=['place', 'time'],
cmap=mag_cmap, tools=['tap'], selection_line_color='black')
rasterized_pop = cleaned_ds.hvplot.image(
rasterize=True, cmap='kbc', logz=True, clim=(1, np.nan),
height=500, width=833, xaxis=None, yaxis=None).opts(bgcolor='black')
###Output
_____no_output_____
###Markdown
Building Pipelines In the previous sections we built a little function to cache the closest earthquakes since the computation can take a little while. An alternative to this approach is to start building a pipeline in HoloViews to do this very thing. Instead of writing a function that operates directly on the data, we rewrite the function to accept a Dataset and the index. This function again filters the closest earthquakes within the region and returns a new Dataset:
###Code
from holoviews.streams import Selection1D
def earthquakes_around_point(ds, index, degrees_dist=0.5):
if not index:
return ds.iloc[[]]
row = high_mag_points.data.iloc[index[0]]
half_dist = degrees_dist / 2.0
df = ds.data
nearest = df[((df['latitude'] - row.latitude).abs() < half_dist)
& ((df['longitude'] - row.longitude).abs() < half_dist)].compute()
return hv.Dataset(nearest)
###Output
_____no_output_____
###Markdown
Now we declare a HoloViews ``Dataset``, an ``Selection1D`` stream and use the ``apply`` method to apply the function to the dataset. The most important part is that we can now provide the selection stream's index parameter to this apply method. This sets up a pipeline which filters the Dataset based on the current index:
###Code
dataset = hv.Dataset(df)
index_stream = Selection1D(source=high_mag_points, index=[-3])
filtered_ds = dataset.apply(earthquakes_around_point, index=index_stream.param.index)
###Output
_____no_output_____
###Markdown
The filtered Dataset object itself doesn't actually display anything but it provides an intermediate pipeline stage which will feed the actual visualizations. The next step therefore is to extend this pipeline to build the visualizations from this filtered dataset. For this purpose we define some functions which take the dataset as input and then generate a plot:
###Code
hv.opts.defaults(
hv.opts.Histogram(toolbar=None),
hv.opts.Scatter(toolbar=None)
)
def histogram(ds):
return ds.data.hvplot.hist(y='mag', bin_range=(0, 10), bins=20, color='red', width=400, height=250)
def scatter(ds):
return ds.data.hvplot.scatter('time', 'mag', color='green', width=400, height=250, padding=0.1)
# We also redefine the VLine
def vline_callback(index):
if not index:
return hv.VLine(0)
row = most_severe.iloc[index[0]]
return hv.VLine(row.time).opts(line_width=1, color='black')
temporal_vline = hv.DynamicMap(vline_callback, streams=[index_stream])
dynamic_scatter = filtered_ds.apply(scatter)
dynamic_histogram = filtered_ds.apply(histogram)
###Output
_____no_output_____
###Markdown
Now that we have defined our visualizations using lazily evaluated pipelines we can start visualizing it. This time we will use Panel to lay out the plots:
###Code
pn.Column(
rasterized_pop * high_mag_points,
pn.Row(
dynamic_scatter * temporal_vline,
dynamic_histogram
)
)
###Output
_____no_output_____
###Markdown
ExerciseDefine another function like the ``histogram`` or ``scatter`` function and then ``apply`` it to the ``filtered_ds``. Observe how this too will respond to changes in the selected earthquake. Solution```pythondef bivariate(ds): return ds.data.hvplot.bivariate('mag', 'depth') filtered_ds.apply(bivariate)``` Connecting widgets to the pipelineAt this point you may be thinking that we haven't done anything we haven't already seen in the previous sections. However, apart from automatically handling the caching of computations, building visualization pipelines in this way provides one major benefit - we can inject parameters at any stage of the pipeline. These parameters can come from anywhere including from Panel widgets, allowing us to expose control over any aspect of our pipeline. You may have noticed that the ``earthquakes_around_point`` function takes two arguments, the ``index`` of the point **and** the ``degrees_dist``, which defines the size of the region around the selected earthquake we will select points in. Using ``.apply`` we can declare a ``FloatSlider`` widget and then inject its ``value`` parameter into the pipeline (ensure that an earthquake is selected in the map above):
###Code
dist_slider = pn.widgets.FloatSlider(name='Degree Distance', value=0.5, start=0.1, end=2)
filtered_ds = dataset.apply(earthquakes_around_point, index=index_stream.param.index,
degrees_dist=dist_slider)
pn.Column(
dist_slider,
pn.Row(
filtered_ds.apply(histogram),
filtered_ds.apply(scatter)
)
)
###Output
_____no_output_____
###Markdown
When the widget value changes the pipeline will re-execute the part of the pipeline downstream from the function and update the plot. This ensures that only the parts of the pipeline that are actually needed are re-executed.The ``.apply`` method can also be used to apply options depending on some widget value, e.g. we can create a colormap selector and then use ``.apply.opts`` to connect it to the ``rasterized_pop`` plot:
###Code
cmaps = {n: cc.palette[n] for n in ['kbc', 'fire', 'bgy', 'bgyw', 'bmy', 'gray', 'kbc']}
cmap_selector = pn.widgets.Select(name='Colormap', options=cmaps)
rasterized_pop_cmapped = rasterized_pop.apply.opts(cmap=cmap_selector)
pn.Column(cmap_selector, rasterized_pop_cmapped)
###Output
_____no_output_____
###Markdown
ExerciseUse the ``.apply.opts`` method to control the style of some existing component, e.g. the ``size`` of the points in the ``dynamic_scatter`` plot or the ``color`` of the ``dynamic_histogram``.HintUse a ``ColorPicker`` widget to control the ``color`` or a ``FloatSlider`` widget to control the ``size``. Solution```pythoncolor_picker = pn.widgets.ColorPicker(name='Color', value='00f300')size_slider = pn.widgets.FloatSlider(name='Size', value=5, start=1, end=30) color_histogram = dynamic_histogram.apply.opts(color=color_picker.param.value)size_scatter = dynamic_scatter.apply.opts(size=size_slider.param.value) pn.Column( pn.Row(color_picker, size_slider), pn.Row(color_histogram, size_scatter))``` Connecting panels to streamsAt this point we have learned how to connect parameters on Panel objects to a pipeline and we earlier learned how we can use parameters to declare dynamic Panel components. So, this section should be nothing new;, we will simply try to connect the index parameter of the selection stream to a panel to try to compute the number of people in the region around an earthquake.Since we have a population density dataset we can approximate how many people are affected by a particular earthquake. Of course, this value is only a rough approximation, as it ignores the curvature of the earth, assumes isotropic spreading of the earthquake, and assumes that the population did not change between the measurement and the earthquake.
###Code
@pn.depends(index_stream.param.index, dist_slider)
def affected_population(index, distance):
if not index:
return "No earthquake was selected."
sel = most_severe.iloc[index[0]]
lon, lat = sel.longitude, sel.latitude
lon_dist = (np.cos(np.deg2rad(lat)) * 111.321543) * distance
lat_dist = 111.321543 * distance
hdist = distance / 2.
mean_density = cleaned_ds.sel(x=slice(lon-hdist, lon+hdist), y=slice(lat+hdist, lat-hdist)).mean().item()
population = (lat_dist * lon_dist) * mean_density
return 'Approximate population around {place}, where a magnitude {mag} earthquake hit on {date} is {pop:.0f}.'.format(
pop=population, mag=sel.mag, place=sel.place, date=sel.time)
def bounds(index, value):
if not index:
return hv.Bounds((0, 0, 0, 0))
sel = most_severe.iloc[index[0]]
hdist = value / 2.
lon, lat = sel.longitude, sel.latitude
return hv.Bounds((lon-hdist, lat-hdist, lon+hdist, lat+hdist))
dynamic_bounds = hv.DynamicMap(bounds, streams=[index_stream, dist_slider.param.value])
pn.Column(pn.panel(affected_population, width=400), rasterized_pop * high_mag_points * dynamic_bounds, dist_slider)
###Output
_____no_output_____
###Markdown
The full dashboardFinally let us put all these components together into an overall dashboard, which we will mark as ``servable`` so we can ``panel serve`` this notebook.
###Code
title = '## Major Earthquakes 2000-2018'
logo = pn.panel('../assets/usgs_logo.png', width=200, align='center')
widgets = pn.WidgetBox(dist_slider, cmap_selector, margin=5)
header = pn.Row(pn.Column(title, pn.panel(affected_population, width=400)),
pn.layout.Spacer(width=10), logo, pn.layout.HSpacer(), widgets)
dynamic_scatter = filtered_ds.apply(scatter)
dynamic_histogram = filtered_ds.apply(histogram)
temporal_vline = hv.DynamicMap(vline_callback, streams=[index_stream])
rasterized_pop_cmapped = rasterized_pop.apply.opts(cmap=cmap_selector.param.value)
dynamic_bounds = hv.DynamicMap(bounds, streams=[index_stream, dist_slider.param.value])
body = pn.Row(
rasterized_pop_cmapped * high_mag_points * dynamic_bounds,
pn.Column(dynamic_scatter * temporal_vline, dynamic_histogram),
)
pn.Column(header, body).servable()
###Output
_____no_output_____ |
ImageLabeler/ImageLabeler_v1.ipynb | ###Markdown
Pasta Image Labeler. The purpose of this code is to quickly accept / reject training images by eye. It will only work on macOS. Goal: - open a preview of an image - press enter to accept -- leaving the image in place - press any other key -- moving the image to a "reject" folder (a full loop is sketched after the first test cell below)
###Code
# imports
import numpy as np
import subprocess as sp
from os import path
import getch
###Output
_____no_output_____
###Markdown
open a single test image
###Code
test_image_path = '/Users/jarredgreen/Downloads/instalooter/carbonara/2081350632616431848.jpg'
image_name = path.basename(test_image_path)
with sp.Popen(["qlmanage", "-p", test_image_path]) as pp:
x = input("%s ?" % image_name)
if x == '':
print("YES: the image %s is accepted" % image_name)
else:
print("NO : the image %s is rejected" % image_name)
# then close the image with:
pp.terminate()
###Output
2081350632616431848.jpg ?
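A minimal sketch of the full accept/reject loop described at the top (added here as an illustration, not from the original notebook; the reject-folder layout and the use of the `getch` package's `getch()` together with `shutil.move` are assumptions):
```python
import glob
import shutil

def label_folder(folder, reject_folder):
    """Preview each jpg; Enter keeps it in place, any other key moves it to reject_folder."""
    for image_path in glob.glob(path.join(folder, '*.jpg')):
        with sp.Popen(["qlmanage", "-p", image_path]) as pp:
            print('%s -- Enter to accept, any other key to reject' % path.basename(image_path))
            key = getch.getch()
            pp.terminate()
        if key not in ('\r', '\n'):
            shutil.move(image_path, reject_folder)
```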
###Markdown
Try with Getch. This works better!! Resize image
###Code
import imageio
import skimage
import matplotlib.pyplot as plt
import math
test_image_notSquare = '/Users/jarredgreen/Documents/Deepasta/deepasta/ImageLabeler/test_images/carbonara/2147274122770605680.jpg'
# im = imageio.imread(test_image_path)
im = imageio.imread(test_image_notSquare)
# resize
# im2 = skimage.transform.resize(im, [128, 128])
# rotate
#im2 = skimage.transform.rotate(im, [90])
def resize_image(im, dim, quiet=True):
'''resizes image im to square with dimensions dim'''
if not quiet: print('rescaling to %s x %s' % (dim, dim))
return skimage.transform.resize(im, [dim, dim])
###Output
_____no_output_____
###Markdown
center crop
###Code
def crop_center(im, quiet=True):
'''Crops an image to square about the center using skimage
needs: imageio, skimage, math
'''
if not quiet: print('cropping to a square')
if not quiet: print('old shape: %s' % str(im.shape))
photo_dim = np.array(im.shape)[:2]
bigger_dim, smaller_dim = np.amax(photo_dim), np.amin(photo_dim)
height, width = photo_dim[0], photo_dim[1]
diff1 = math.ceil((bigger_dim - smaller_dim ) / 2)
diff2 = math.floor((bigger_dim - smaller_dim ) / 2)
if width == height:
if not quiet: print('already square!')
elif width > height:
im = skimage.util.crop(im, ((0,0),(diff1,diff2),(0,0)))
if not quiet: print('new shape: %s' % str(im.shape))
else:
im = skimage.util.crop(im, ((diff1,diff2),(0,0),(0,0)))
if not quiet: print('new shape: %s' % str(im.shape))
return im
import warnings
with warnings.catch_warnings():
warnings.simplefilter("ignore")
im = crop_center(im, quiet=False)
im = skimage.img_as_ubyte(resize_image(im, 128, quiet=False))
skimage.io.imshow(im)
skimage.io.show()
###Output
cropping to a square
old shape: (128, 128, 3)
already square!
rescaling to 128 x 128
###Markdown
write the image
###Code
imageio.imwrite('~/Downloads/newimage.jpg', skimage.img_as_ubyte(im))
def write_image(image, path, filename, quiet=True):
'''writes the new image!'''
    imageio.imwrite('%s/%s' % (path, filename), skimage.img_as_ubyte(image))  # write the passed-in image as 8-bit
if not quiet: print('image saved to %s/%s' % (path, filename))
write_image(im, '~/Downloads', 'newimage2.jpg', quiet=False)
###Output
image saved to ~/Downloads/newimage2.jpg
###Markdown
entire processing chain
###Code
def process_image(path, image_name, destination_folder='~/Downloads', quiet=False):
    '''opens, square-crops, resizes and saves the image'''
    im = imageio.imread(path)
    im = crop_center(im, quiet=quiet)
    im = resize_image(im, 128, quiet=quiet)
    write_image(im, destination_folder, image_name, quiet=quiet)
process_image(test_image_notSquare, 'newimage2.jpg', quiet=False)
###Output
cropping to a square
old shape: (543, 750, 3)
new shape: (543, 543, 3)
rescaling to 128 x 128
image saved to ~/Downloads/newimage2.jpg
###Markdown
Instaloader
###Code
import instaloader
# Get instance
L = instaloader.Instaloader()
L.get_hashtag_posts('cat')
###Output
_____no_output_____ |
examples/00_quick_start/sequential_recsys_amazondataset.ipynb | ###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. Sequential Recommender Quick Start Example: SLi_Rec : Adaptive User Modeling with Long and Short-Term Preferences for Personalized RecommendationUnlike a general recommender such as Matrix Factorization or xDeepFM (in the repo) which doesn't consider the order of the user's activities, sequential recommender systems take the sequence of the user behaviors as context and the goal is to predict the items that the user will interact with in a short time (in an extreme case, the item that the user will interact with next).This notebook aims to give you a quick example of how to train a sequential model based on a public Amazon dataset. Currently, we support NextItNet \[4\], GRU4Rec \[2\], Caser \[3\], A2SVD \[1\], SLi_Rec \[1\], and SUM \[5\]. Without loss of generality, this notebook takes the [SLi_Rec model](https://www.microsoft.com/en-us/research/uploads/prod/2019/07/IJCAI19-ready_v1.pdf) as an example.SLi_Rec \[1\] is a deep learning-based model that aims at capturing both long and short-term user preferences for precise recommender systems. To summarize, SLi_Rec has the following key properties:* It adopts the attentive "Asymmetric-SVD" paradigm for long-term modeling;* It takes both time irregularity and semantic irregularity into consideration by modifying the gating logic in LSTM.* It uses an attention mechanism to dynamically fuse the long-term component and short-term component.In this notebook, we test SLi_Rec on a subset of the public dataset: [Amazon_reviews](http://snap.stanford.edu/data/amazon/productGraph/categoryFiles/reviews_Movies_and_TV_5.json.gz) and [Amazon_metadata](http://snap.stanford.edu/data/amazon/productGraph/categoryFiles/meta_Movies_and_TV.json.gz)This notebook is well tested under TF 1.15.0. 0. Global Settings and Imports
###Code
import sys
import os
import logging
import papermill as pm
import scrapbook as sb
from tempfile import TemporaryDirectory
import numpy as np
import tensorflow as tf
tf.get_logger().setLevel('ERROR') # only show error messages
from reco_utils.common.timer import Timer
from reco_utils.common.constants import SEED
from reco_utils.recommender.deeprec.deeprec_utils import (
prepare_hparams
)
from reco_utils.dataset.amazon_reviews import download_and_extract, data_preprocessing
from reco_utils.dataset.download_utils import maybe_download
from reco_utils.recommender.deeprec.models.sequential.sli_rec import SLI_RECModel as SeqModel
#### to use the other model, use one of the following lines:
# from reco_utils.recommender.deeprec.models.sequential.asvd import A2SVDModel as SeqModel
# from reco_utils.recommender.deeprec.models.sequential.caser import CaserModel as SeqModel
# from reco_utils.recommender.deeprec.models.sequential.gru4rec import GRU4RecModel as SeqModel
# from reco_utils.recommender.deeprec.models.sequential.sum import SUMModel as SeqModel
#from reco_utils.recommender.deeprec.models.sequential.nextitnet import NextItNetModel
from reco_utils.recommender.deeprec.io.sequential_iterator import SequentialIterator
#from reco_utils.recommender.deeprec.io.nextitnet_iterator import NextItNetIterator
print("System version: {}".format(sys.version))
print("Tensorflow version: {}".format(tf.__version__))
## ATTENTION: change to the corresponding config file, e.g., caser.yaml for CaserModel, sum.yaml for SUMModel
yaml_file = '../../reco_utils/recommender/deeprec/config/sli_rec.yaml'
###Output
_____no_output_____
###Markdown
Parameters
###Code
EPOCHS = 10
BATCH_SIZE = 400
RANDOM_SEED = SEED # Set None for non-deterministic result
data_path = os.path.join("..", "..", "tests", "resources", "deeprec", "slirec")
###Output
_____no_output_____
###Markdown
1. Input data format The input data contains 8 tab-separated columns, i.e., `<label> <user_id> <item_id> <category_id> <timestamp> <history_item_ids> <history_category_ids> <history_timestamps>`. item_id and category_id denote the target item and category, which means that for this instance we want to guess whether user user_id will interact with item_id at timestamp. The `<history_*>` columns record the user behavior list up to `<timestamp>`, with elements separated by commas. `<label>` is a binary value with 1 for positive instances and 0 for negative instances. One example of an instance is: `1	A1QQ86H5M2LVW2	B0059XTU1S	Movies	1377561600	B002ZG97WE,B004IK30PA,B000BNX3AU,B0017ANB08,B005LAIHW2	Movies,Movies,Movies,Movies,Movies	1304294400,1304812800,1315785600,1316304000,1356998400`. In the data preprocessing stage, we have a script that generates ID mapping dictionaries, so user_id, item_id and category_id will be mapped to integer indices starting from 1, and you need to tell the input iterator where the ID mapping files are. (For example, in the next section, we have mapping files like user_vocab, item_vocab, and cate_vocab.) The data preprocessing script is at [reco_utils/dataset/amazon_reviews.py](../../reco_utils/dataset/amazon_reviews.py); you need to call the `_create_vocab(train_file, user_vocab, item_vocab, cate_vocab)` function. Note that the ID vocabulary is created only from the train_file, so new IDs in valid_file or test_file will be regarded as unknown IDs and assigned the default index 0. Only the SLi_Rec model is time-aware. For the other models, you can just pad some meaningless timestamps into the data files to fill up the format; the models will ignore these columns. We use Softmax as the loss function. In the training and evaluation stages, we group 1 positive instance with num_ngs negative instances. Pair-wise ranking can be regarded as a special case of Softmax ranking, where num_ngs is set to 1. More specifically, for training and evaluation, you need to organize the data file such that each positive instance is followed by num_ngs negative instances. Our program will take 1+num_ngs lines as a unit for the Softmax calculation. num_ngs is a parameter you need to pass to the `prepare_hparams`, `fit` and `run_eval` functions. `train_num_ngs` in `prepare_hparams` denotes the number of negative instances for training, where a recommended number is 4. `valid_num_ngs` and `num_ngs` in `fit` and `run_eval` denote the number used in evaluation; in evaluation, the model calculates metrics among the 1+num_ngs instances. For the `predict` function, since we only need to calculate a score for each individual instance, there is no need to set num_ngs. More details and examples will be provided in the following sections. For the training stage, if you don't want to prepare negative instances, you can just provide positive instances and set the parameters `need_sample=True, train_num_ngs=train_num_ngs` for the function `prepare_hparams`; our model will dynamically sample `train_num_ngs` instances as negative samples in each mini-batch. Amazon dataset Now let's start with a public dataset containing product reviews and metadata from Amazon, which is widely used as a benchmark dataset in the recommendation systems field.
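As a concrete illustration of the format above, here is a minimal sketch (not part of the library; the field names are my own labels) that parses one tab-separated instance line into its 8 fields:
```
def parse_instance(line):
    """Split one tab-separated instance of the format described above."""
    (label, user_id, item_id, category_id, timestamp,
     hist_items, hist_cates, hist_times) = line.rstrip("\n").split("\t")
    return {
        "label": int(label),                          # 1 = positive, 0 = negative
        "user_id": user_id,
        "item_id": item_id,
        "category_id": category_id,
        "timestamp": int(timestamp),
        "history_items": hist_items.split(","),       # behavior list up to <timestamp>
        "history_cates": hist_cates.split(","),
        "history_timestamps": [int(t) for t in hist_times.split(",")],
    }

example = ("1\tA1QQ86H5M2LVW2\tB0059XTU1S\tMovies\t1377561600\t"
           "B002ZG97WE,B004IK30PA,B000BNX3AU,B0017ANB08,B005LAIHW2\t"
           "Movies,Movies,Movies,Movies,Movies\t"
           "1304294400,1304812800,1315785600,1316304000,1356998400")
print(parse_instance(example)["history_items"][:2])  # ['B002ZG97WE', 'B004IK30PA']
```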
###Code
# for test
train_file = os.path.join(data_path, r'train_data')
valid_file = os.path.join(data_path, r'valid_data')
test_file = os.path.join(data_path, r'test_data')
user_vocab = os.path.join(data_path, r'user_vocab.pkl')
item_vocab = os.path.join(data_path, r'item_vocab.pkl')
cate_vocab = os.path.join(data_path, r'category_vocab.pkl')
output_file = os.path.join(data_path, r'output.txt')
reviews_name = 'reviews_Movies_and_TV_5.json'
meta_name = 'meta_Movies_and_TV.json'
reviews_file = os.path.join(data_path, reviews_name)
meta_file = os.path.join(data_path, meta_name)
train_num_ngs = 4 # number of negative instances with a positive instance for training
valid_num_ngs = 4 # number of negative instances with a positive instance for validation
test_num_ngs = 9 # number of negative instances with a positive instance for testing
sample_rate = 0.01 # sample a small item set for training and testing here for fast example
input_files = [reviews_file, meta_file, train_file, valid_file, test_file, user_vocab, item_vocab, cate_vocab]
if not os.path.exists(train_file):
download_and_extract(reviews_name, reviews_file)
download_and_extract(meta_name, meta_file)
data_preprocessing(*input_files, sample_rate=sample_rate, valid_num_ngs=valid_num_ngs, test_num_ngs=test_num_ngs)
#### uncomment this for the NextItNet model, because it does not need to unfold the user history
# data_preprocessing(*input_files, sample_rate=sample_rate, valid_num_ngs=valid_num_ngs, test_num_ngs=test_num_ngs, is_history_expanding=False)
###Output
_____no_output_____
###Markdown
1.1 Prepare hyper-parameters prepare_hparams() will create a full set of hyper-parameters for model training, such as learning rate, feature number, and dropout ratio. We can put those parameters in a yaml file (a complete list of parameters can be found under our config folder), or pass them as the function's arguments (which will overwrite the yaml settings). Parameter hints: `need_sample` controls whether to perform dynamic negative sampling in the mini-batch. `train_num_ngs` indicates how many negative instances follow each positive instance. Examples: (1) `need_sample=True and train_num_ngs=4`: There are only positive instances in your training file. Our model will dynamically sample 4 negative instances for each positive instance in the mini-batch. Note that if need_sample is set to True, train_num_ngs should be greater than zero. (2) `need_sample=False and train_num_ngs=4`: In your training file, each positive line is followed by 4 negative lines. Note that if need_sample is set to False, you must provide a training file with negative instances, and train_num_ngs should match the number of negatives per positive instance in your training file.
###Code
### NOTE:
### remember to use `_create_vocab(train_file, user_vocab, item_vocab, cate_vocab)` to generate the user_vocab, item_vocab and cate_vocab files, if you are using your own dataset rather than using our demo Amazon dataset.
hparams = prepare_hparams(yaml_file,
embed_l2=0.,
layer_l2=0.,
learning_rate=0.001,  # set to 0.01 if batch normalization is disabled
epochs=EPOCHS,
batch_size=BATCH_SIZE,
show_step=20,
MODEL_DIR=os.path.join(data_path, "model/"),
SUMMARIES_DIR=os.path.join(data_path, "summary/"),
user_vocab=user_vocab,
item_vocab=item_vocab,
cate_vocab=cate_vocab,
need_sample=True,
train_num_ngs=train_num_ngs, # provides the number of negative instances for each positive instance for loss computation.
)
###Output
_____no_output_____
###Markdown
1.2 Create data loader Designate a data iterator for the model. All our sequential models use SequentialIterator. The data format is introduced above. Validation and testing data are files produced by offline negative sampling, with `<valid_num_ngs>` and `<test_num_ngs>` negatives per positive instance, respectively.
###Code
input_creator = SequentialIterator
#### uncomment this for the NextItNet model, because it needs a special data iterator for training
#input_creator = NextItNetIterator
###Output
_____no_output_____
###Markdown
2. Create modelWhen both hyper-parameters and data iterator are ready, we can create a model:
###Code
model = SeqModel(hparams, input_creator, seed=RANDOM_SEED)
## sometimes we don't want to train a model from scratch
## then we can load a pre-trained model like this:
#model.load_model(r'your_model_path')
###Output
_____no_output_____
###Markdown
Now let's see what the model's performance is at this point (before any training):
###Code
# test_num_ngs is the number of negative lines after each positive line in your test_file
print(model.run_eval(test_file, num_ngs=test_num_ngs))
###Output
{'auc': 0.4857, 'logloss': 0.6931, 'mean_mrr': 0.2665, 'ndcg@2': 0.1357, 'ndcg@4': 0.2186, 'ndcg@6': 0.2905, 'group_auc': 0.4849}
###Markdown
AUC=0.5 corresponds to random guessing. We can see that before training, the model behaves like a random guesser. 2.1 Train model Next we want to train the model on a training set and check its performance on a validation dataset. Training the model is as simple as a function call:
###Code
with Timer() as train_time:
model = model.fit(train_file, valid_file, valid_num_ngs=valid_num_ngs)
# valid_num_ngs is the number of negative lines after each positive line in your valid_file
# we will evaluate the performance of model on valid_file every epoch
print('Time cost for training is {0:.2f} mins'.format(train_time.interval/60.0))
###Output
step 20 , total_loss: 1.6078, data_loss: 1.6078
step 40 , total_loss: 1.6054, data_loss: 1.6054
eval valid at epoch 1: auc:0.4975,logloss:0.6929,mean_mrr:0.4592,ndcg@2:0.3292,ndcg@4:0.5125,ndcg@6:0.5915,group_auc:0.4994
step 20 , total_loss: 1.5786, data_loss: 1.5786
step 40 , total_loss: 1.4193, data_loss: 1.4193
eval valid at epoch 2: auc:0.6486,logloss:0.6946,mean_mrr:0.5567,ndcg@2:0.472,ndcg@4:0.6292,ndcg@6:0.6669,group_auc:0.6363
step 20 , total_loss: 1.3229, data_loss: 1.3229
step 40 , total_loss: 1.3079, data_loss: 1.3079
eval valid at epoch 3: auc:0.6887,logloss:0.8454,mean_mrr:0.6032,ndcg@2:0.537,ndcg@4:0.6705,ndcg@6:0.7022,group_auc:0.683
step 20 , total_loss: 1.3521, data_loss: 1.3521
step 40 , total_loss: 1.2250, data_loss: 1.2250
eval valid at epoch 4: auc:0.6978,logloss:0.7005,mean_mrr:0.6236,ndcg@2:0.5622,ndcg@4:0.6881,ndcg@6:0.7175,group_auc:0.699
step 20 , total_loss: 1.2826, data_loss: 1.2826
step 40 , total_loss: 1.2795, data_loss: 1.2795
eval valid at epoch 5: auc:0.7152,logloss:0.6695,mean_mrr:0.6382,ndcg@2:0.582,ndcg@4:0.7009,ndcg@6:0.7286,group_auc:0.7139
step 20 , total_loss: 1.2214, data_loss: 1.2214
step 40 , total_loss: 1.2521, data_loss: 1.2521
eval valid at epoch 6: auc:0.722,logloss:0.6141,mean_mrr:0.637,ndcg@2:0.5796,ndcg@4:0.6993,ndcg@6:0.7276,group_auc:0.7116
step 20 , total_loss: 1.1884, data_loss: 1.1884
step 40 , total_loss: 1.1957, data_loss: 1.1957
eval valid at epoch 7: auc:0.7287,logloss:0.6183,mean_mrr:0.6417,ndcg@2:0.5875,ndcg@4:0.7031,ndcg@6:0.7312,group_auc:0.7167
step 20 , total_loss: 1.1779, data_loss: 1.1779
step 40 , total_loss: 1.1616, data_loss: 1.1616
eval valid at epoch 8: auc:0.7342,logloss:0.6584,mean_mrr:0.6538,ndcg@2:0.6006,ndcg@4:0.7121,ndcg@6:0.7402,group_auc:0.7248
step 20 , total_loss: 1.1299, data_loss: 1.1299
step 40 , total_loss: 1.2055, data_loss: 1.2055
eval valid at epoch 9: auc:0.7324,logloss:0.6268,mean_mrr:0.6541,ndcg@2:0.5981,ndcg@4:0.7129,ndcg@6:0.7404,group_auc:0.7239
step 20 , total_loss: 1.1927, data_loss: 1.1927
step 40 , total_loss: 1.1909, data_loss: 1.1909
eval valid at epoch 10: auc:0.7369,logloss:0.6122,mean_mrr:0.6611,ndcg@2:0.6087,ndcg@4:0.7181,ndcg@6:0.7457,group_auc:0.731
[(1, {'auc': 0.4975, 'logloss': 0.6929, 'mean_mrr': 0.4592, 'ndcg@2': 0.3292, 'ndcg@4': 0.5125, 'ndcg@6': 0.5915, 'group_auc': 0.4994}), (2, {'auc': 0.6486, 'logloss': 0.6946, 'mean_mrr': 0.5567, 'ndcg@2': 0.472, 'ndcg@4': 0.6292, 'ndcg@6': 0.6669, 'group_auc': 0.6363}), (3, {'auc': 0.6887, 'logloss': 0.8454, 'mean_mrr': 0.6032, 'ndcg@2': 0.537, 'ndcg@4': 0.6705, 'ndcg@6': 0.7022, 'group_auc': 0.683}), (4, {'auc': 0.6978, 'logloss': 0.7005, 'mean_mrr': 0.6236, 'ndcg@2': 0.5622, 'ndcg@4': 0.6881, 'ndcg@6': 0.7175, 'group_auc': 0.699}), (5, {'auc': 0.7152, 'logloss': 0.6695, 'mean_mrr': 0.6382, 'ndcg@2': 0.582, 'ndcg@4': 0.7009, 'ndcg@6': 0.7286, 'group_auc': 0.7139}), (6, {'auc': 0.722, 'logloss': 0.6141, 'mean_mrr': 0.637, 'ndcg@2': 0.5796, 'ndcg@4': 0.6993, 'ndcg@6': 0.7276, 'group_auc': 0.7116}), (7, {'auc': 0.7287, 'logloss': 0.6183, 'mean_mrr': 0.6417, 'ndcg@2': 0.5875, 'ndcg@4': 0.7031, 'ndcg@6': 0.7312, 'group_auc': 0.7167}), (8, {'auc': 0.7342, 'logloss': 0.6584, 'mean_mrr': 0.6538, 'ndcg@2': 0.6006, 'ndcg@4': 0.7121, 'ndcg@6': 0.7402, 'group_auc': 0.7248}), (9, {'auc': 0.7324, 'logloss': 0.6268, 'mean_mrr': 0.6541, 'ndcg@2': 0.5981, 'ndcg@4': 0.7129, 'ndcg@6': 0.7404, 'group_auc': 0.7239}), (10, {'auc': 0.7369, 'logloss': 0.6122, 'mean_mrr': 0.6611, 'ndcg@2': 0.6087, 'ndcg@4': 0.7181, 'ndcg@6': 0.7457, 'group_auc': 0.731})]
best epoch: 10
Time cost for training is 3.22 mins
###Markdown
2.2 Evaluate model Again, let's see what the model's performance is now (after training):
###Code
res_syn = model.run_eval(test_file, num_ngs=test_num_ngs)
print(res_syn)
sb.glue("res_syn", res_syn)
###Output
_____no_output_____
###Markdown
If we want to get the full prediction scores rather than evaluation metrics, we can do this:
###Code
model = model.predict(test_file, output_file)
# The data was downloaded to the tmpdir folder. You can delete it manually if you do not need it any more.
###Output
_____no_output_____
###Markdown
2.3 Running models with a large dataset Here are the performances of popular sequential models on the whole Amazon dataset, which has 1,697,533 positive instances. Settings for reproducing the results: `learning_rate=0.001, dropout=0.3, item_embedding_dim=32, cate_embedding_dim=8, l2_norm=0, batch_size=400, train_num_ngs=4, valid_num_ngs=4, test_num_ngs=49`. We compare the running time with CPU only and with GPU on the larger dataset. It appears that the GPU can significantly accelerate training. Hardware specification for running the large dataset: GPU: Tesla P100-PCIE-16GB; CPU: 6 cores Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz | Models | AUC | g-AUC | NDCG@2 | NDCG@10 | seconds per epoch on GPU | seconds per epoch on CPU | config || :------| :------: | :------: | :------: | :------: | :------: | :------: | :------ || A2SVD | 0.8251 | 0.8178 | 0.2922 | 0.4264 | 249.5 | 440.0 | N/A || GRU4Rec | 0.8411 | 0.8332 | 0.3213 | 0.4547 | 439.0 | 4285.0 | max_seq_length=50, hidden_size=40 || Caser | 0.8244 | 0.8171 | 0.283 | 0.4194 | 314.3 | 5369.9 | T=1, n_v=128, n_h=128, L=3, min_seq_length=5 || SLi_Rec | 0.8631 | 0.8519 | 0.3491 | 0.4842 | 549.6 | 5014.0 | attention_size=40, max_seq_length=50, hidden_size=40 || NextItNet* | 0.6793 | 0.6769 | 0.0602 | 0.1733 | 112.0 | 214.5 | min_seq_length=3, dilations=\[1,2,4,1,2,4\], kernel_size=3 || SUM | 0.8481 | 0.8406 | 0.3394 | 0.4774 | 1005.0 | 9427.0 | hidden_size=40, slots=4, dropout=0 | Note 1: The models are grid searched with a coarse granularity and the results are for reference only. Note 2: The NextItNet model requires a dataset with a strong sequential property, but the Amazon dataset used in this notebook does not meet that requirement, so the NextItNet model may not perform well. If you use other datasets with a strong sequential property, NextItNet is recommended. Note 3: The time cost of the NextItNet model is significantly shorter than that of the other models because it does not need history expansion of the training data. 3. Online serving In this section, we provide a simple example to illustrate how we can use the trained model to serve production demand. Suppose we are in a new session. First, let's load a previously trained model:
###Code
model_best_trained = SeqModel(hparams, input_creator, seed=RANDOM_SEED)
path_best_trained = os.path.join(hparams.MODEL_DIR, "best_model")
print('loading saved model in {0}'.format(path_best_trained))
model_best_trained.load_model(path_best_trained)
###Output
loading saved model in ../../tests/resources/deeprec/slirec/model/best_model
INFO:tensorflow:Restoring parameters from ../../tests/resources/deeprec/slirec/model/best_model
###Markdown
Let's see if we have loaded the model correctly. The testing metrics should be close to the numbers we got in the training stage.
###Code
model_best_trained.run_eval(test_file, num_ngs=test_num_ngs)
###Output
_____no_output_____
###Markdown
And we make predictions using this model. In the next step, we will make predictions using a serving model. Then we can check if the two result files are consistent.
###Code
model_best_trained.predict(test_file, output_file)
###Output
_____no_output_____
###Markdown
Exciting. Now let's start our quick journey of online serving. For efficient and flexible serving, we usually keep only the necessary computation nodes and freeze the TF model into a single pb file, so that we can easily compute scores with this unified pb file in either Python or Java:
###Code
with model_best_trained.sess as sess:
graph_def = model_best_trained.graph.as_graph_def()
output_graph_def = tf.graph_util.convert_variables_to_constants(
sess,
graph_def,
["pred"]
)
outfilepath = os.path.join(hparams.MODEL_DIR, "serving_model.pb")
with tf.gfile.GFile(outfilepath, 'wb') as f:
f.write(output_graph_def.SerializeToString())
###Output
_____no_output_____
###Markdown
The serving logic is as simple as feeding the feature values to the corresponding input nodes and fetching the score from the output node. In our model, the input nodes are placeholders and control variables (such as is_training and layer_keeps). We can get the nodes by their names:
###Code
class LoadFrozedPredModel:
def __init__(self, graph):
self.pred = graph.get_tensor_by_name('import/pred:0')
self.items = graph.get_tensor_by_name('import/items:0')
self.cates = graph.get_tensor_by_name('import/cates:0')
self.item_history = graph.get_tensor_by_name('import/item_history:0')
self.item_cate_history = graph.get_tensor_by_name('import/item_cate_history:0')
self.mask = graph.get_tensor_by_name('import/mask:0')
self.time_from_first_action = graph.get_tensor_by_name('import/time_from_first_action:0')
self.time_to_now = graph.get_tensor_by_name('import/time_to_now:0')
self.layer_keeps = graph.get_tensor_by_name('import/layer_keeps:0')
self.is_training = graph.get_tensor_by_name('import/is_training:0')
def infer_as_serving(model, infile, outfile, hparams, iterator, sess):
preds = []
for batch_data_input in iterator.load_data_from_file(infile, batch_num_ngs=0):
if batch_data_input:
feed_dict = {
model.layer_keeps:np.ones(3, dtype=np.float32),
model.is_training:False,
model.items: batch_data_input[iterator.items],
model.cates: batch_data_input[iterator.cates],
model.item_history: batch_data_input[iterator.item_history],
model.item_cate_history: batch_data_input[iterator.item_cate_history],
model.mask: batch_data_input[iterator.mask],
model.time_from_first_action: batch_data_input[iterator.time_from_first_action],
model.time_to_now: batch_data_input[iterator.time_to_now]
}
step_pred = sess.run(model.pred, feed_dict=feed_dict)
preds.extend(np.reshape(step_pred, -1))
with open(outfile, "w") as wt:
for line in preds:
wt.write('{0}\n'.format(line))
###Output
_____no_output_____
###Markdown
Here is the main pipeline for inference in an online serving manner. You can compare 'output_serving.txt' with 'output.txt' to see if the results are consistent. The input file format is the same as introduced in Section 1 'Input data format'. In the serving stage we do not need a ground-truth label, so for the label column you can simply place any number, such as zero. The iterator will parse the input file and convert it into the required format for the model's feed dictionary.
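After the serving cell below has produced 'output_serving.txt', a minimal consistency check could look like this (a sketch of my own, assuming both files hold one floating-point score per line):
```
import os
with open(os.path.join(data_path, 'output.txt')) as f1, \
     open(os.path.join(data_path, 'output_serving.txt')) as f2:
    scores = [float(x) for x in f1]
    serving_scores = [float(x) for x in f2]
assert len(scores) == len(serving_scores)
print('max abs diff:', max(abs(a - b) for a, b in zip(scores, serving_scores)))
```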
###Code
G = tf.Graph()
with tf.gfile.GFile(
os.path.join(hparams.MODEL_DIR, "serving_model.pb"),
'rb'
) as f, G.as_default():
graph_def_optimized = tf.GraphDef()
graph_def_optimized.ParseFromString(f.read())
#### uncomment this line if you want to check what content is included in the graph
#print('graph_def_optimized = ' + str(graph_def_optimized))
with tf.Session(graph=G) as sess:
tf.import_graph_def(graph_def_optimized)
model = LoadFrozedPredModel(sess.graph)
serving_output_file = os.path.join(data_path, r'output_serving.txt')
iterator = input_creator(hparams, tf.Graph())
infer_as_serving(model, test_file, serving_output_file, hparams, iterator, sess)
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License. Sequential Recommender Quick Start Example: SLi_Rec: Adaptive User Modeling with Long and Short-Term Preferences for Personalized Recommendation Unlike a general recommender such as Matrix Factorization or xDeepFM (in the repo), which doesn't consider the order of the user's activities, sequential recommender systems take the sequence of user behaviors as context, and the goal is to predict the items that the user will interact with in a short time (in an extreme case, the item that the user will interact with next). This notebook aims to give you a quick example of how to train a sequential model based on a public Amazon dataset. Currently, we support NextItNet \[4\], GRU4Rec \[2\], Caser \[3\], A2SVD \[1\] and SLi_Rec \[1\]. Without loss of generality, this notebook takes the [SLi_Rec model](https://www.microsoft.com/en-us/research/uploads/prod/2019/07/IJCAI19-ready_v1.pdf) as an example. SLi_Rec \[1\] is a deep learning-based model that aims at capturing both long and short-term user preferences for precise recommender systems. To summarize, SLi_Rec has the following key properties: * It adopts the attentive "Asymmetric-SVD" paradigm for long-term modeling; * It takes both time irregularity and semantic irregularity into consideration by modifying the gating logic in LSTM; * It uses an attention mechanism to dynamically fuse the long-term component and the short-term component. In this notebook, we test SLi_Rec on a subset of the public dataset: [Amazon_reviews](http://snap.stanford.edu/data/amazon/productGraph/categoryFiles/reviews_Movies_and_TV_5.json.gz) and [Amazon_metadata](http://snap.stanford.edu/data/amazon/productGraph/categoryFiles/meta_Movies_and_TV.json.gz). This notebook is well tested under TF 1.15.0. 0. Global Settings and Imports
###Code
import sys
sys.path.append("../../")
import os
import logging
import papermill as pm
import scrapbook as sb
from tempfile import TemporaryDirectory
import numpy as np
import tensorflow as tf
tf.get_logger().setLevel('ERROR') # only show error messages
from reco_utils.common.timer import Timer
from reco_utils.common.constants import SEED
from reco_utils.recommender.deeprec.deeprec_utils import (
prepare_hparams
)
from reco_utils.dataset.amazon_reviews import download_and_extract, data_preprocessing
from reco_utils.dataset.download_utils import maybe_download
from reco_utils.recommender.deeprec.models.sequential.sli_rec import SLI_RECModel as SeqModel
#### to use the other model, use one of the following lines:
# from reco_utils.recommender.deeprec.models.sequential.asvd import A2SVDModel as SeqModel
# from reco_utils.recommender.deeprec.models.sequential.caser import CaserModel as SeqModel
# from reco_utils.recommender.deeprec.models.sequential.gru4rec import GRU4RecModel as SeqModel
#from reco_utils.recommender.deeprec.models.sequential.nextitnet import NextItNetModel
from reco_utils.recommender.deeprec.io.sequential_iterator import SequentialIterator
#from reco_utils.recommender.deeprec.io.nextitnet_iterator import NextItNetIterator
print("System version: {}".format(sys.version))
print("Tensorflow version: {}".format(tf.__version__))
## ATTENTION: change to the corresponding config file, e.g., caser.yaml for CaserModel
yaml_file = '../../reco_utils/recommender/deeprec/config/sli_rec.yaml'
###Output
_____no_output_____
###Markdown
Parameters
###Code
EPOCHS = 10
BATCH_SIZE = 400
RANDOM_SEED = SEED # Set None for non-deterministic result
data_path = os.path.join("..", "..", "tests", "resources", "deeprec", "slirec")
###Output
_____no_output_____
###Markdown
1. Input data format The input data contains 8 tab-separated columns, i.e., `<label> <user_id> <item_id> <category_id> <timestamp> <history_item_ids> <history_category_ids> <history_timestamps>`. item_id and category_id denote the target item and category, which means that for this instance we want to guess whether user user_id will interact with item_id at timestamp. The `<history_*>` columns record the user behavior list up to `<timestamp>`, with elements separated by commas. `<label>` is a binary value with 1 for positive instances and 0 for negative instances. One example of an instance is: `1	A1QQ86H5M2LVW2	B0059XTU1S	Movies	1377561600	B002ZG97WE,B004IK30PA,B000BNX3AU,B0017ANB08,B005LAIHW2	Movies,Movies,Movies,Movies,Movies	1304294400,1304812800,1315785600,1316304000,1356998400`. In the data preprocessing stage, we have a script that generates ID mapping dictionaries, so user_id, item_id and category_id will be mapped to integer indices starting from 1, and you need to tell the input iterator where the ID mapping files are. (For example, in the next section, we have mapping files like user_vocab, item_vocab, and cate_vocab.) The data preprocessing script is at [reco_utils/dataset/amazon_reviews.py](../../reco_utils/dataset/amazon_reviews.py); you need to call the `_create_vocab(train_file, user_vocab, item_vocab, cate_vocab)` function. Note that the ID vocabulary is created only from the train_file, so new IDs in valid_file or test_file will be regarded as unknown IDs and assigned the default index 0. Only the SLi_Rec model is time-aware. For the other models, you can just pad some meaningless timestamps into the data files to fill up the format; the models will ignore these columns. We use Softmax as the loss function. In the training and evaluation stages, we group 1 positive instance with num_ngs negative instances. Pair-wise ranking can be regarded as a special case of Softmax ranking, where num_ngs is set to 1. More specifically, for training and evaluation, you need to organize the data file such that each positive instance is followed by num_ngs negative instances. Our program will take 1+num_ngs lines as a unit for the Softmax calculation. num_ngs is a parameter you need to pass to the `prepare_hparams`, `fit` and `run_eval` functions. `train_num_ngs` in `prepare_hparams` denotes the number of negative instances for training, where a recommended number is 4. `valid_num_ngs` and `num_ngs` in `fit` and `run_eval` denote the number used in evaluation; in evaluation, the model calculates metrics among the 1+num_ngs instances. For the `predict` function, since we only need to calculate a score for each individual instance, there is no need to set num_ngs. More details and examples will be provided in the following sections. For the training stage, if you don't want to prepare negative instances, you can just provide positive instances and set the parameters `need_sample=True, train_num_ngs=train_num_ngs` for the function `prepare_hparams`; our model will dynamically sample `train_num_ngs` instances as negative samples in each mini-batch. Amazon dataset Now let's start with a public dataset containing product reviews and metadata from Amazon, which is widely used as a benchmark dataset in the recommendation systems field.
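To make the "1 positive followed by num_ngs negatives" grouping concrete, here is a minimal sketch (a helper of my own, not part of reco_utils) that walks an evaluation file in units of 1+num_ngs lines:
```
def iter_groups(filename, num_ngs):
    """Yield lists of 1 positive line followed by num_ngs negative lines."""
    group = []
    with open(filename) as f:
        for line in f:
            group.append(line.rstrip("\n"))
            if len(group) == 1 + num_ngs:
                yield group
                group = []

# e.g. each group from the test file should hold 1 positive and 9 negatives:
# for group in iter_groups(test_file, test_num_ngs):
#     assert group[0].split("\t")[0] == "1"
#     break
```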
###Code
# for test
train_file = os.path.join(data_path, r'train_data')
valid_file = os.path.join(data_path, r'valid_data')
test_file = os.path.join(data_path, r'test_data')
user_vocab = os.path.join(data_path, r'user_vocab.pkl')
item_vocab = os.path.join(data_path, r'item_vocab.pkl')
cate_vocab = os.path.join(data_path, r'category_vocab.pkl')
output_file = os.path.join(data_path, r'output.txt')
reviews_name = 'reviews_Movies_and_TV_5.json'
meta_name = 'meta_Movies_and_TV.json'
reviews_file = os.path.join(data_path, reviews_name)
meta_file = os.path.join(data_path, meta_name)
train_num_ngs = 4 # number of negative instances with a positive instance for training
valid_num_ngs = 4 # number of negative instances with a positive instance for validation
test_num_ngs = 9 # number of negative instances with a positive instance for testing
sample_rate = 0.01 # sample a small item set for training and testing here for fast example
input_files = [reviews_file, meta_file, train_file, valid_file, test_file, user_vocab, item_vocab, cate_vocab]
if not os.path.exists(train_file):
download_and_extract(reviews_name, reviews_file)
download_and_extract(meta_name, meta_file)
data_preprocessing(*input_files, sample_rate=sample_rate, valid_num_ngs=valid_num_ngs, test_num_ngs=test_num_ngs)
#### uncomment this for the NextItNet model, because it does not need to unfold the user history
# data_preprocessing(*input_files, sample_rate=sample_rate, valid_num_ngs=valid_num_ngs, test_num_ngs=test_num_ngs, is_history_expanding=False)
###Output
_____no_output_____
###Markdown
1.1 Prepare hyper-parameters prepare_hparams() will create a full set of hyper-parameters for model training, such as learning rate, feature number, and dropout ratio. We can put those parameters in a yaml file (a complete list of parameters can be found under our config folder), or pass them as the function's arguments (which will overwrite the yaml settings). Parameter hints: `need_sample` controls whether to perform dynamic negative sampling in the mini-batch. `train_num_ngs` indicates how many negative instances follow each positive instance. Examples: (1) `need_sample=True and train_num_ngs=4`: There are only positive instances in your training file. Our model will dynamically sample 4 negative instances for each positive instance in the mini-batch. Note that if need_sample is set to True, train_num_ngs should be greater than zero. (2) `need_sample=False and train_num_ngs=4`: In your training file, each positive line is followed by 4 negative lines. Note that if need_sample is set to False, you must provide a training file with negative instances, and train_num_ngs should match the number of negatives per positive instance in your training file.
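As an illustration of option (2), here is a minimal sketch (my own helper; `sample_negative_items` is a hypothetical function returning train_num_ngs (item, category) pairs) of how such a training file could be laid out:
```
def write_training_groups(positives, out_path, train_num_ngs, sample_negative_items):
    """Write each positive line followed by train_num_ngs negative lines (label 0)."""
    with open(out_path, "w") as out:
        for pos_line in positives:
            out.write(pos_line + "\n")
            fields = pos_line.split("\t")
            for neg_item, neg_cate in sample_negative_items(fields, train_num_ngs):
                neg_fields = fields[:]          # copy the positive instance
                neg_fields[0] = "0"             # flip the label
                neg_fields[2] = neg_item        # replace the target item
                neg_fields[3] = neg_cate        # and its category
                out.write("\t".join(neg_fields) + "\n")
```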
###Code
### NOTE:
### remember to use `_create_vocab(train_file, user_vocab, item_vocab, cate_vocab)` to generate the user_vocab, item_vocab and cate_vocab files, if you are using your own dataset rather than using our demo Amazon dataset.
hparams = prepare_hparams(yaml_file,
embed_l2=0.,
layer_l2=0.,
learning_rate=0.001,  # set to 0.01 if batch normalization is disabled
epochs=EPOCHS,
batch_size=BATCH_SIZE,
show_step=20,
MODEL_DIR=os.path.join(data_path, "model/"),
SUMMARIES_DIR=os.path.join(data_path, "summary/"),
user_vocab=user_vocab,
item_vocab=item_vocab,
cate_vocab=cate_vocab,
need_sample=True,
train_num_ngs=train_num_ngs, # provides the number of negative instances for each positive instance for loss computation.
)
###Output
_____no_output_____
###Markdown
1.2 Create data loader Designate a data iterator for the model. All our sequential models use SequentialIterator. The data format is introduced above. Validation and testing data are files produced by offline negative sampling, with `<valid_num_ngs>` and `<test_num_ngs>` negatives per positive instance, respectively.
###Code
input_creator = SequentialIterator
#### uncomment this for the NextItNet model, because it needs a special data iterator for training
#input_creator = NextItNetIterator
###Output
_____no_output_____
###Markdown
2. Create modelWhen both hyper-parameters and data iterator are ready, we can create a model:
###Code
model = SeqModel(hparams, input_creator, seed=RANDOM_SEED)
## sometimes we don't want to train a model from scratch
## then we can load a pre-trained model like this:
#model.load_model(r'your_model_path')
###Output
_____no_output_____
###Markdown
Now let's see what the model's performance is at this point (before any training):
###Code
# test_num_ngs is the number of negative lines after each positive line in your test_file
print(model.run_eval(test_file, num_ngs=test_num_ngs))
###Output
{'auc': 0.4857, 'logloss': 0.6931, 'mean_mrr': 0.2665, 'ndcg@2': 0.1357, 'ndcg@4': 0.2186, 'ndcg@6': 0.2905, 'group_auc': 0.4849}
###Markdown
AUC=0.5 corresponds to random guessing. We can see that before training, the model behaves like a random guesser. 2.1 Train model Next we want to train the model on a training set and check its performance on a validation dataset. Training the model is as simple as a function call:
###Code
with Timer() as train_time:
model = model.fit(train_file, valid_file, valid_num_ngs=valid_num_ngs)
# valid_num_ngs is the number of negative lines after each positive line in your valid_file
# we will evaluate the performance of model on valid_file every epoch
print('Time cost for training is {0:.2f} mins'.format(train_time.interval/60.0))
###Output
step 20 , total_loss: 1.6078, data_loss: 1.6078
step 40 , total_loss: 1.6054, data_loss: 1.6054
eval valid at epoch 1: auc:0.4975,logloss:0.6929,mean_mrr:0.4592,ndcg@2:0.3292,ndcg@4:0.5125,ndcg@6:0.5915,group_auc:0.4994
step 20 , total_loss: 1.5786, data_loss: 1.5786
step 40 , total_loss: 1.4193, data_loss: 1.4193
eval valid at epoch 2: auc:0.6486,logloss:0.6946,mean_mrr:0.5567,ndcg@2:0.472,ndcg@4:0.6292,ndcg@6:0.6669,group_auc:0.6363
step 20 , total_loss: 1.3229, data_loss: 1.3229
step 40 , total_loss: 1.3079, data_loss: 1.3079
eval valid at epoch 3: auc:0.6887,logloss:0.8454,mean_mrr:0.6032,ndcg@2:0.537,ndcg@4:0.6705,ndcg@6:0.7022,group_auc:0.683
step 20 , total_loss: 1.3521, data_loss: 1.3521
step 40 , total_loss: 1.2250, data_loss: 1.2250
eval valid at epoch 4: auc:0.6978,logloss:0.7005,mean_mrr:0.6236,ndcg@2:0.5622,ndcg@4:0.6881,ndcg@6:0.7175,group_auc:0.699
step 20 , total_loss: 1.2826, data_loss: 1.2826
step 40 , total_loss: 1.2795, data_loss: 1.2795
eval valid at epoch 5: auc:0.7152,logloss:0.6695,mean_mrr:0.6382,ndcg@2:0.582,ndcg@4:0.7009,ndcg@6:0.7286,group_auc:0.7139
step 20 , total_loss: 1.2214, data_loss: 1.2214
step 40 , total_loss: 1.2521, data_loss: 1.2521
eval valid at epoch 6: auc:0.722,logloss:0.6141,mean_mrr:0.637,ndcg@2:0.5796,ndcg@4:0.6993,ndcg@6:0.7276,group_auc:0.7116
step 20 , total_loss: 1.1884, data_loss: 1.1884
step 40 , total_loss: 1.1957, data_loss: 1.1957
eval valid at epoch 7: auc:0.7287,logloss:0.6183,mean_mrr:0.6417,ndcg@2:0.5875,ndcg@4:0.7031,ndcg@6:0.7312,group_auc:0.7167
step 20 , total_loss: 1.1779, data_loss: 1.1779
step 40 , total_loss: 1.1616, data_loss: 1.1616
eval valid at epoch 8: auc:0.7342,logloss:0.6584,mean_mrr:0.6538,ndcg@2:0.6006,ndcg@4:0.7121,ndcg@6:0.7402,group_auc:0.7248
step 20 , total_loss: 1.1299, data_loss: 1.1299
step 40 , total_loss: 1.2055, data_loss: 1.2055
eval valid at epoch 9: auc:0.7324,logloss:0.6268,mean_mrr:0.6541,ndcg@2:0.5981,ndcg@4:0.7129,ndcg@6:0.7404,group_auc:0.7239
step 20 , total_loss: 1.1927, data_loss: 1.1927
step 40 , total_loss: 1.1909, data_loss: 1.1909
eval valid at epoch 10: auc:0.7369,logloss:0.6122,mean_mrr:0.6611,ndcg@2:0.6087,ndcg@4:0.7181,ndcg@6:0.7457,group_auc:0.731
[(1, {'auc': 0.4975, 'logloss': 0.6929, 'mean_mrr': 0.4592, 'ndcg@2': 0.3292, 'ndcg@4': 0.5125, 'ndcg@6': 0.5915, 'group_auc': 0.4994}), (2, {'auc': 0.6486, 'logloss': 0.6946, 'mean_mrr': 0.5567, 'ndcg@2': 0.472, 'ndcg@4': 0.6292, 'ndcg@6': 0.6669, 'group_auc': 0.6363}), (3, {'auc': 0.6887, 'logloss': 0.8454, 'mean_mrr': 0.6032, 'ndcg@2': 0.537, 'ndcg@4': 0.6705, 'ndcg@6': 0.7022, 'group_auc': 0.683}), (4, {'auc': 0.6978, 'logloss': 0.7005, 'mean_mrr': 0.6236, 'ndcg@2': 0.5622, 'ndcg@4': 0.6881, 'ndcg@6': 0.7175, 'group_auc': 0.699}), (5, {'auc': 0.7152, 'logloss': 0.6695, 'mean_mrr': 0.6382, 'ndcg@2': 0.582, 'ndcg@4': 0.7009, 'ndcg@6': 0.7286, 'group_auc': 0.7139}), (6, {'auc': 0.722, 'logloss': 0.6141, 'mean_mrr': 0.637, 'ndcg@2': 0.5796, 'ndcg@4': 0.6993, 'ndcg@6': 0.7276, 'group_auc': 0.7116}), (7, {'auc': 0.7287, 'logloss': 0.6183, 'mean_mrr': 0.6417, 'ndcg@2': 0.5875, 'ndcg@4': 0.7031, 'ndcg@6': 0.7312, 'group_auc': 0.7167}), (8, {'auc': 0.7342, 'logloss': 0.6584, 'mean_mrr': 0.6538, 'ndcg@2': 0.6006, 'ndcg@4': 0.7121, 'ndcg@6': 0.7402, 'group_auc': 0.7248}), (9, {'auc': 0.7324, 'logloss': 0.6268, 'mean_mrr': 0.6541, 'ndcg@2': 0.5981, 'ndcg@4': 0.7129, 'ndcg@6': 0.7404, 'group_auc': 0.7239}), (10, {'auc': 0.7369, 'logloss': 0.6122, 'mean_mrr': 0.6611, 'ndcg@2': 0.6087, 'ndcg@4': 0.7181, 'ndcg@6': 0.7457, 'group_auc': 0.731})]
best epoch: 10
Time cost for training is 3.22 mins
###Markdown
2.2 Evaluate model Again, let's see what the model's performance is now (after training):
###Code
res_syn = model.run_eval(test_file, num_ngs=test_num_ngs)
print(res_syn)
sb.glue("res_syn", res_syn)
###Output
_____no_output_____
###Markdown
If we want to get the full prediction scores rather than evaluation metrics, we can do this:
###Code
model = model.predict(test_file, output_file)
# The data was downloaded to the tmpdir folder. You can delete it manually if you do not need it any more.
###Output
_____no_output_____
###Markdown
2.3 Running models with a large dataset Here are the performances of popular sequential models on the whole Amazon dataset, which has 1,697,533 positive instances. Settings for reproducing the results: `learning_rate=0.001, dropout=0.3, item_embedding_dim=32, cate_embedding_dim=8, l2_norm=0, batch_size=400, train_num_ngs=4, valid_num_ngs=4, test_num_ngs=49`. We compare the running time with CPU only and with GPU on the larger dataset. It appears that the GPU can significantly accelerate training. Hardware specification for running the large dataset: GPU: Tesla P100-PCIE-16GB; CPU: 6 cores Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz | Models | AUC | g-AUC | NDCG@2 | NDCG@10 | seconds per epoch on GPU | seconds per epoch on CPU | config || :------| :------: | :------: | :------: | :------: | :------: | :------: | :------ || A2SVD | 0.8251 | 0.8178 | 0.2922 | 0.4264 | 249.5 | 440.0 | N/A || GRU4Rec | 0.8411 | 0.8332 | 0.3213 | 0.4547 | 439.0 | 4285.0 | max_seq_length=50, hidden_size=40 || Caser | 0.8244 | 0.8171 | 0.283 | 0.4194 | 314.3 | 5369.9 | T=1, n_v=128, n_h=128, L=3, min_seq_length=5 || SLi_Rec | 0.8631 | 0.8519 | 0.3491 | 0.4842 | 549.6 | 5014.0 | attention_size=40, max_seq_length=50, hidden_size=40 || NextItNet* | 0.6793 | 0.6769 | 0.0602 | 0.1733 | 112.0 | 214.5 | min_seq_length=3, dilations=\[1,2,4,1,2,4\], kernel_size=3 | Note 1: The five models are grid searched with a coarse granularity and the results are for reference only. Note 2: The NextItNet model requires a dataset with a strong sequential property, but the Amazon dataset used in this notebook does not meet that requirement, so the NextItNet model may not perform well. If you use other datasets with a strong sequential property, NextItNet is recommended. Note 3: The time cost of the NextItNet model is significantly shorter than that of the other models because it does not need history expansion of the training data. 3. Online serving In this section, we provide a simple example to illustrate how we can use the trained model to serve production demand. Suppose we are in a new session. First, let's load a previously trained model:
###Code
model_best_trained = SeqModel(hparams, input_creator, seed=RANDOM_SEED)
path_best_trained = os.path.join(hparams.MODEL_DIR, "best_model")
print('loading saved model in {0}'.format(path_best_trained))
model_best_trained.load_model(path_best_trained)
###Output
loading saved model in ../../tests/resources/deeprec/slirec/model/best_model
INFO:tensorflow:Restoring parameters from ../../tests/resources/deeprec/slirec/model/best_model
###Markdown
Let's see if we have loaded the model correctly. The testing metrics should be close to the numbers we got in the training stage.
###Code
model_best_trained.run_eval(test_file, num_ngs=test_num_ngs)
###Output
_____no_output_____
###Markdown
And we make predictions using this model. In the next step, we will make predictions using a serving model. Then we can check if the two result files are consistent.
###Code
model_best_trained.predict(test_file, output_file)
###Output
_____no_output_____
###Markdown
Exciting. Now let's start our quick journey of online serving. For efficient and flexible serving, we usually keep only the necessary computation nodes and freeze the TF model into a single pb file, so that we can easily compute scores with this unified pb file in either Python or Java:
###Code
with model_best_trained.sess as sess:
graph_def = model_best_trained.graph.as_graph_def()
output_graph_def = tf.graph_util.convert_variables_to_constants(
sess,
graph_def,
["pred"]
)
outfilepath = os.path.join(hparams.MODEL_DIR, "serving_model.pb")
with tf.gfile.GFile(outfilepath, 'wb') as f:
f.write(output_graph_def.SerializeToString())
###Output
_____no_output_____
###Markdown
The serving logic is as simple as feeding the feature values to the corresponding input nodes and fetching the score from the output node. In our model, the input nodes are placeholders and control variables (such as is_training and layer_keeps). We can get the nodes by their names:
###Code
class LoadFrozedPredModel:
def __init__(self, graph):
self.pred = graph.get_tensor_by_name('import/pred:0')
self.items = graph.get_tensor_by_name('import/items:0')
self.cates = graph.get_tensor_by_name('import/cates:0')
self.item_history = graph.get_tensor_by_name('import/item_history:0')
self.item_cate_history = graph.get_tensor_by_name('import/item_cate_history:0')
self.mask = graph.get_tensor_by_name('import/mask:0')
self.time_from_first_action = graph.get_tensor_by_name('import/time_from_first_action:0')
self.time_to_now = graph.get_tensor_by_name('import/time_to_now:0')
self.layer_keeps = graph.get_tensor_by_name('import/layer_keeps:0')
self.is_training = graph.get_tensor_by_name('import/is_training:0')
def infer_as_serving(model, infile, outfile, hparams, iterator, sess):
preds = []
for batch_data_input in iterator.load_data_from_file(infile, batch_num_ngs=0):
if batch_data_input:
feed_dict = {
model.layer_keeps:np.ones(3, dtype=np.float32),
model.is_training:False,
model.items: batch_data_input[iterator.items],
model.cates: batch_data_input[iterator.cates],
model.item_history: batch_data_input[iterator.item_history],
model.item_cate_history: batch_data_input[iterator.item_cate_history],
model.mask: batch_data_input[iterator.mask],
model.time_from_first_action: batch_data_input[iterator.time_from_first_action],
model.time_to_now: batch_data_input[iterator.time_to_now]
}
step_pred = sess.run(model.pred, feed_dict=feed_dict)
preds.extend(np.reshape(step_pred, -1))
with open(outfile, "w") as wt:
for line in preds:
wt.write('{0}\n'.format(line))
###Output
_____no_output_____
###Markdown
Here is the main pipeline for inference in an online serving manner. You can compare 'output_serving.txt' with 'output.txt' to see if the results are consistent. The input file format is the same as introduced in Section 1 'Input data format'. In the serving stage we do not need a ground-truth label, so for the label column you can simply place any number, such as zero. The iterator will parse the input file and convert it into the required format for the model's feed dictionary.
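For example, a serving request line can be assembled from a user's current history with a placeholder label of 0 (a sketch following the format from Section 1; the helper and variable names are my own):
```
def make_serving_line(user_id, item_id, category_id, timestamp,
                      hist_items, hist_cates, hist_times):
    """Build one serving-format line; the label column is a placeholder 0."""
    return "\t".join([
        "0", user_id, item_id, category_id, str(timestamp),
        ",".join(hist_items), ",".join(hist_cates),
        ",".join(str(t) for t in hist_times),
    ])

line = make_serving_line("A1QQ86H5M2LVW2", "B0059XTU1S", "Movies", 1377561600,
                         ["B002ZG97WE", "B004IK30PA"], ["Movies", "Movies"],
                         [1304294400, 1304812800])
```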
###Code
G = tf.Graph()
with tf.gfile.GFile(
os.path.join(hparams.MODEL_DIR, "serving_model.pb"),
'rb'
) as f, G.as_default():
graph_def_optimized = tf.GraphDef()
graph_def_optimized.ParseFromString(f.read())
#### uncomment this line if you want to check what content is included in the graph
#print('graph_def_optimized = ' + str(graph_def_optimized))
with tf.Session(graph=G) as sess:
tf.import_graph_def(graph_def_optimized)
model = LoadFrozedPredModel(sess.graph)
serving_output_file = os.path.join(data_path, r'output_serving.txt')
iterator = input_creator(hparams, tf.Graph())
infer_as_serving(model, test_file, serving_output_file, hparams, iterator, sess)
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License. Sequential Recommender Quick Start Example: SLi_Rec: Adaptive User Modeling with Long and Short-Term Preferences for Personalized Recommendation Unlike a general recommender such as Matrix Factorization or xDeepFM (in the repo), which doesn't consider the order of the user's activities, sequential recommender systems take the sequence of user behaviors as context, and the goal is to predict the items that the user will interact with in a short time (in an extreme case, the item that the user will interact with next). This notebook aims to give you a quick example of how to train a sequential model based on a public Amazon dataset. Currently, we support NextItNet \[4\], GRU4Rec \[2\], Caser \[3\], A2SVD \[1\], SLi_Rec \[1\], and SUM \[5\]. Without loss of generality, this notebook takes the [SLi_Rec model](https://www.microsoft.com/en-us/research/uploads/prod/2019/07/IJCAI19-ready_v1.pdf) as an example. SLi_Rec \[1\] is a deep learning-based model that aims at capturing both long and short-term user preferences for precise recommender systems. To summarize, SLi_Rec has the following key properties: * It adopts the attentive "Asymmetric-SVD" paradigm for long-term modeling; * It takes both time irregularity and semantic irregularity into consideration by modifying the gating logic in LSTM; * It uses an attention mechanism to dynamically fuse the long-term component and the short-term component. In this notebook, we test SLi_Rec on a subset of the public dataset: [Amazon_reviews](http://snap.stanford.edu/data/amazon/productGraph/categoryFiles/reviews_Movies_and_TV_5.json.gz) and [Amazon_metadata](http://snap.stanford.edu/data/amazon/productGraph/categoryFiles/meta_Movies_and_TV.json.gz). This notebook is well tested under TF 1.15.0. 0. Global Settings and Imports
###Code
import sys
sys.path.append("../../")
import os
import logging
import papermill as pm
import scrapbook as sb
from tempfile import TemporaryDirectory
import numpy as np
import tensorflow as tf
tf.get_logger().setLevel('ERROR') # only show error messages
from reco_utils.common.timer import Timer
from reco_utils.common.constants import SEED
from reco_utils.recommender.deeprec.deeprec_utils import (
prepare_hparams
)
from reco_utils.dataset.amazon_reviews import download_and_extract, data_preprocessing
from reco_utils.dataset.download_utils import maybe_download
from reco_utils.recommender.deeprec.models.sequential.sli_rec import SLI_RECModel as SeqModel
#### to use the other model, use one of the following lines:
# from reco_utils.recommender.deeprec.models.sequential.asvd import A2SVDModel as SeqModel
# from reco_utils.recommender.deeprec.models.sequential.caser import CaserModel as SeqModel
# from reco_utils.recommender.deeprec.models.sequential.gru4rec import GRU4RecModel as SeqModel
# from reco_utils.recommender.deeprec.models.sequential.sum import SUMModel as SeqModel
#from reco_utils.recommender.deeprec.models.sequential.nextitnet import NextItNetModel
from reco_utils.recommender.deeprec.io.sequential_iterator import SequentialIterator
#from reco_utils.recommender.deeprec.io.nextitnet_iterator import NextItNetIterator
print("System version: {}".format(sys.version))
print("Tensorflow version: {}".format(tf.__version__))
## ATTENTION: change to the corresponding config file, e.g., caser.yaml for CaserModel, sum.yaml for SUMModel
yaml_file = '../../reco_utils/recommender/deeprec/config/sli_rec.yaml'
###Output
_____no_output_____
###Markdown
Parameters
###Code
EPOCHS = 10
BATCH_SIZE = 400
RANDOM_SEED = SEED # Set None for non-deterministic result
data_path = os.path.join("..", "..", "tests", "resources", "deeprec", "slirec")
###Output
_____no_output_____
###Markdown
1. Input data format The input data contains 8 tab-separated columns, i.e., `<label> <user_id> <item_id> <category_id> <timestamp> <history_item_ids> <history_category_ids> <history_timestamps>`. item_id and category_id denote the target item and category, which means that for this instance we want to guess whether user user_id will interact with item_id at timestamp. The `<history_*>` columns record the user behavior list up to `<timestamp>`, with elements separated by commas. `<label>` is a binary value with 1 for positive instances and 0 for negative instances. One example of an instance is: `1	A1QQ86H5M2LVW2	B0059XTU1S	Movies	1377561600	B002ZG97WE,B004IK30PA,B000BNX3AU,B0017ANB08,B005LAIHW2	Movies,Movies,Movies,Movies,Movies	1304294400,1304812800,1315785600,1316304000,1356998400`. In the data preprocessing stage, we have a script that generates ID mapping dictionaries, so user_id, item_id and category_id will be mapped to integer indices starting from 1, and you need to tell the input iterator where the ID mapping files are. (For example, in the next section, we have mapping files like user_vocab, item_vocab, and cate_vocab.) The data preprocessing script is at [reco_utils/dataset/amazon_reviews.py](../../reco_utils/dataset/amazon_reviews.py); you need to call the `_create_vocab(train_file, user_vocab, item_vocab, cate_vocab)` function. Note that the ID vocabulary is created only from the train_file, so new IDs in valid_file or test_file will be regarded as unknown IDs and assigned the default index 0. Only the SLi_Rec model is time-aware. For the other models, you can just pad some meaningless timestamps into the data files to fill up the format; the models will ignore these columns. We use Softmax as the loss function. In the training and evaluation stages, we group 1 positive instance with num_ngs negative instances. Pair-wise ranking can be regarded as a special case of Softmax ranking, where num_ngs is set to 1. More specifically, for training and evaluation, you need to organize the data file such that each positive instance is followed by num_ngs negative instances. Our program will take 1+num_ngs lines as a unit for the Softmax calculation. num_ngs is a parameter you need to pass to the `prepare_hparams`, `fit` and `run_eval` functions. `train_num_ngs` in `prepare_hparams` denotes the number of negative instances for training, where a recommended number is 4. `valid_num_ngs` and `num_ngs` in `fit` and `run_eval` denote the number used in evaluation; in evaluation, the model calculates metrics among the 1+num_ngs instances. For the `predict` function, since we only need to calculate a score for each individual instance, there is no need to set num_ngs. More details and examples will be provided in the following sections. For the training stage, if you don't want to prepare negative instances, you can just provide positive instances and set the parameters `need_sample=True, train_num_ngs=train_num_ngs` for the function `prepare_hparams`; our model will dynamically sample `train_num_ngs` instances as negative samples in each mini-batch. Amazon dataset Now let's start with a public dataset containing product reviews and metadata from Amazon, which is widely used as a benchmark dataset in the recommendation systems field.
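A quick sanity check on a prepared data file can catch formatting mistakes early; here is a minimal sketch (my own helper, not part of the repo) that verifies the 8-column layout and equal-length history fields:
```
def check_data_file(filename, max_lines=1000):
    """Verify that rows have 8 tab-separated columns and aligned history fields."""
    checked = 0
    with open(filename) as f:
        for line in f:
            if checked >= max_lines:
                break
            cols = line.rstrip("\n").split("\t")
            assert len(cols) == 8, "row %d: expected 8 columns, got %d" % (checked, len(cols))
            items, cates, times = (c.split(",") for c in cols[5:8])
            assert len(items) == len(cates) == len(times), "row %d: history length mismatch" % checked
            checked += 1
    print("checked %d rows: OK" % checked)

# check_data_file(train_file)   # run after the preprocessing cell below has created it
```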
###Code
# for test
train_file = os.path.join(data_path, r'train_data')
valid_file = os.path.join(data_path, r'valid_data')
test_file = os.path.join(data_path, r'test_data')
user_vocab = os.path.join(data_path, r'user_vocab.pkl')
item_vocab = os.path.join(data_path, r'item_vocab.pkl')
cate_vocab = os.path.join(data_path, r'category_vocab.pkl')
output_file = os.path.join(data_path, r'output.txt')
reviews_name = 'reviews_Movies_and_TV_5.json'
meta_name = 'meta_Movies_and_TV.json'
reviews_file = os.path.join(data_path, reviews_name)
meta_file = os.path.join(data_path, meta_name)
train_num_ngs = 4 # number of negative instances with a positive instance for training
valid_num_ngs = 4 # number of negative instances with a positive instance for validation
test_num_ngs = 9 # number of negative instances with a positive instance for testing
sample_rate = 0.01 # sample a small item set for training and testing here for fast example
input_files = [reviews_file, meta_file, train_file, valid_file, test_file, user_vocab, item_vocab, cate_vocab]
if not os.path.exists(train_file):
download_and_extract(reviews_name, reviews_file)
download_and_extract(meta_name, meta_file)
data_preprocessing(*input_files, sample_rate=sample_rate, valid_num_ngs=valid_num_ngs, test_num_ngs=test_num_ngs)
#### uncomment this for the NextItNet model, because it does not need to unfold the user history
# data_preprocessing(*input_files, sample_rate=sample_rate, valid_num_ngs=valid_num_ngs, test_num_ngs=test_num_ngs, is_history_expanding=False)
###Output
_____no_output_____
###Markdown
1.1 Prepare hyper-parameters prepare_hparams() will create a full set of hyper-parameters for model training, such as learning rate, feature number, and dropout ratio. We can put those parameters in a yaml file (a complete list of parameters can be found under our config folder), or pass them as the function's arguments (which will overwrite the yaml settings). Parameter hints: `need_sample` controls whether to perform dynamic negative sampling in the mini-batch. `train_num_ngs` indicates how many negative instances follow each positive instance. Examples: (1) `need_sample=True and train_num_ngs=4`: There are only positive instances in your training file. Our model will dynamically sample 4 negative instances for each positive instance in the mini-batch. Note that if need_sample is set to True, train_num_ngs should be greater than zero. (2) `need_sample=False and train_num_ngs=4`: In your training file, each positive line is followed by 4 negative lines. Note that if need_sample is set to False, you must provide a training file with negative instances, and train_num_ngs should match the number of negatives per positive instance in your training file.
###Code
### NOTE:
### remember to use `_create_vocab(train_file, user_vocab, item_vocab, cate_vocab)` to generate the user_vocab, item_vocab and cate_vocab files, if you are using your own dataset rather than using our demo Amazon dataset.
hparams = prepare_hparams(yaml_file,
embed_l2=0.,
layer_l2=0.,
learning_rate=0.001,  # set to 0.01 if batch normalization is disabled
epochs=EPOCHS,
batch_size=BATCH_SIZE,
show_step=20,
MODEL_DIR=os.path.join(data_path, "model/"),
SUMMARIES_DIR=os.path.join(data_path, "summary/"),
user_vocab=user_vocab,
item_vocab=item_vocab,
cate_vocab=cate_vocab,
need_sample=True,
train_num_ngs=train_num_ngs, # provides the number of negative instances for each positive instance for loss computation.
)
###Output
_____no_output_____
###Markdown
1.2 Create data loader Designate a data iterator for the model. All our sequential models use SequentialIterator. The data format is introduced above. Validation and testing data are files produced by offline negative sampling, with `<valid_num_ngs>` and `<test_num_ngs>` negatives per positive instance, respectively.
###Code
input_creator = SequentialIterator
#### uncomment this for the NextItNet model, because it needs a special data iterator for training
#input_creator = NextItNetIterator
###Output
_____no_output_____
###Markdown
2. Create modelWhen both hyper-parameters and data iterator are ready, we can create a model:
###Code
model = SeqModel(hparams, input_creator, seed=RANDOM_SEED)
## sometimes we don't want to train a model from scratch
## then we can load a pre-trained model like this:
#model.load_model(r'your_model_path')
###Output
_____no_output_____
###Markdown
Now let's see what the model's performance is at this point (before any training):
###Code
# test_num_ngs is the number of negative lines after each positive line in your test_file
print(model.run_eval(test_file, num_ngs=test_num_ngs))
###Output
{'auc': 0.4857, 'logloss': 0.6931, 'mean_mrr': 0.2665, 'ndcg@2': 0.1357, 'ndcg@4': 0.2186, 'ndcg@6': 0.2905, 'group_auc': 0.4849}
###Markdown
AUC=0.5 corresponds to random guessing. We can see that before training, the model behaves like a random guesser. 2.1 Train model Next we want to train the model on a training set and check its performance on a validation dataset. Training the model is as simple as a function call:
###Code
with Timer() as train_time:
model = model.fit(train_file, valid_file, valid_num_ngs=valid_num_ngs)
# valid_num_ngs is the number of negative lines after each positive line in your valid_file
# we will evaluate the performance of model on valid_file every epoch
print('Time cost for training is {0:.2f} mins'.format(train_time.interval/60.0))
###Output
step 20 , total_loss: 1.6078, data_loss: 1.6078
step 40 , total_loss: 1.6054, data_loss: 1.6054
eval valid at epoch 1: auc:0.4975,logloss:0.6929,mean_mrr:0.4592,ndcg@2:0.3292,ndcg@4:0.5125,ndcg@6:0.5915,group_auc:0.4994
step 20 , total_loss: 1.5786, data_loss: 1.5786
step 40 , total_loss: 1.4193, data_loss: 1.4193
eval valid at epoch 2: auc:0.6486,logloss:0.6946,mean_mrr:0.5567,ndcg@2:0.472,ndcg@4:0.6292,ndcg@6:0.6669,group_auc:0.6363
step 20 , total_loss: 1.3229, data_loss: 1.3229
step 40 , total_loss: 1.3079, data_loss: 1.3079
eval valid at epoch 3: auc:0.6887,logloss:0.8454,mean_mrr:0.6032,ndcg@2:0.537,ndcg@4:0.6705,ndcg@6:0.7022,group_auc:0.683
step 20 , total_loss: 1.3521, data_loss: 1.3521
step 40 , total_loss: 1.2250, data_loss: 1.2250
eval valid at epoch 4: auc:0.6978,logloss:0.7005,mean_mrr:0.6236,ndcg@2:0.5622,ndcg@4:0.6881,ndcg@6:0.7175,group_auc:0.699
step 20 , total_loss: 1.2826, data_loss: 1.2826
step 40 , total_loss: 1.2795, data_loss: 1.2795
eval valid at epoch 5: auc:0.7152,logloss:0.6695,mean_mrr:0.6382,ndcg@2:0.582,ndcg@4:0.7009,ndcg@6:0.7286,group_auc:0.7139
step 20 , total_loss: 1.2214, data_loss: 1.2214
step 40 , total_loss: 1.2521, data_loss: 1.2521
eval valid at epoch 6: auc:0.722,logloss:0.6141,mean_mrr:0.637,ndcg@2:0.5796,ndcg@4:0.6993,ndcg@6:0.7276,group_auc:0.7116
step 20 , total_loss: 1.1884, data_loss: 1.1884
step 40 , total_loss: 1.1957, data_loss: 1.1957
eval valid at epoch 7: auc:0.7287,logloss:0.6183,mean_mrr:0.6417,ndcg@2:0.5875,ndcg@4:0.7031,ndcg@6:0.7312,group_auc:0.7167
step 20 , total_loss: 1.1779, data_loss: 1.1779
step 40 , total_loss: 1.1616, data_loss: 1.1616
eval valid at epoch 8: auc:0.7342,logloss:0.6584,mean_mrr:0.6538,ndcg@2:0.6006,ndcg@4:0.7121,ndcg@6:0.7402,group_auc:0.7248
step 20 , total_loss: 1.1299, data_loss: 1.1299
step 40 , total_loss: 1.2055, data_loss: 1.2055
eval valid at epoch 9: auc:0.7324,logloss:0.6268,mean_mrr:0.6541,ndcg@2:0.5981,ndcg@4:0.7129,ndcg@6:0.7404,group_auc:0.7239
step 20 , total_loss: 1.1927, data_loss: 1.1927
step 40 , total_loss: 1.1909, data_loss: 1.1909
eval valid at epoch 10: auc:0.7369,logloss:0.6122,mean_mrr:0.6611,ndcg@2:0.6087,ndcg@4:0.7181,ndcg@6:0.7457,group_auc:0.731
[(1, {'auc': 0.4975, 'logloss': 0.6929, 'mean_mrr': 0.4592, 'ndcg@2': 0.3292, 'ndcg@4': 0.5125, 'ndcg@6': 0.5915, 'group_auc': 0.4994}), (2, {'auc': 0.6486, 'logloss': 0.6946, 'mean_mrr': 0.5567, 'ndcg@2': 0.472, 'ndcg@4': 0.6292, 'ndcg@6': 0.6669, 'group_auc': 0.6363}), (3, {'auc': 0.6887, 'logloss': 0.8454, 'mean_mrr': 0.6032, 'ndcg@2': 0.537, 'ndcg@4': 0.6705, 'ndcg@6': 0.7022, 'group_auc': 0.683}), (4, {'auc': 0.6978, 'logloss': 0.7005, 'mean_mrr': 0.6236, 'ndcg@2': 0.5622, 'ndcg@4': 0.6881, 'ndcg@6': 0.7175, 'group_auc': 0.699}), (5, {'auc': 0.7152, 'logloss': 0.6695, 'mean_mrr': 0.6382, 'ndcg@2': 0.582, 'ndcg@4': 0.7009, 'ndcg@6': 0.7286, 'group_auc': 0.7139}), (6, {'auc': 0.722, 'logloss': 0.6141, 'mean_mrr': 0.637, 'ndcg@2': 0.5796, 'ndcg@4': 0.6993, 'ndcg@6': 0.7276, 'group_auc': 0.7116}), (7, {'auc': 0.7287, 'logloss': 0.6183, 'mean_mrr': 0.6417, 'ndcg@2': 0.5875, 'ndcg@4': 0.7031, 'ndcg@6': 0.7312, 'group_auc': 0.7167}), (8, {'auc': 0.7342, 'logloss': 0.6584, 'mean_mrr': 0.6538, 'ndcg@2': 0.6006, 'ndcg@4': 0.7121, 'ndcg@6': 0.7402, 'group_auc': 0.7248}), (9, {'auc': 0.7324, 'logloss': 0.6268, 'mean_mrr': 0.6541, 'ndcg@2': 0.5981, 'ndcg@4': 0.7129, 'ndcg@6': 0.7404, 'group_auc': 0.7239}), (10, {'auc': 0.7369, 'logloss': 0.6122, 'mean_mrr': 0.6611, 'ndcg@2': 0.6087, 'ndcg@4': 0.7181, 'ndcg@6': 0.7457, 'group_auc': 0.731})]
best epoch: 10
Time cost for training is 3.22 mins
###Markdown
2.2 Evaluate modelAgain, let's see what the model's performance is now (after training):
###Code
res_syn = model.run_eval(test_file, num_ngs=test_num_ngs)
print(res_syn)
sb.glue("res_syn", res_syn)
###Output
_____no_output_____
###Markdown
If we want to get the full prediction scores rather than evaluation metrics, we can do this:
###Code
model = model.predict(test_file, output_file)
# The data was downloaded to the tmpdir folder. You can delete the files manually if you do not need them any more.
###Output
_____no_output_____
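###Markdown
As a quick sanity check, a minimal sketch like the following can peek at the first few prediction scores in `output_file`. It assumes the file holds one floating-point score per line, in the same order as the instances in `test_file`.
###Code
# Sketch: inspect the first few prediction scores written by model.predict.
# Assumption: output_file contains one floating-point score per line,
# ordered the same way as the instances in test_file.
with open(output_file, "r") as f:
    head_scores = [float(line.strip()) for _, line in zip(range(10), f)]
print(head_scores)
###Output
_____no_output_____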
###Markdown
2.3 Running models with large datasetHere are the performances of popular sequential models on the whole Amazon dataset, which has 1,697,533 positive instances.Settings for reproducing the results:`learning_rate=0.001, dropout=0.3, item_embedding_dim=32, cate_embedding_dim=8, l2_norm=0, batch_size=400, train_num_ngs=4, valid_num_ngs=4, test_num_ngs=49`We compare the running time with CPU only and with GPU on the larger dataset. It appears that GPU can significantly accelerate the training. Hardware specification for running the large dataset: GPU: Tesla P100-PCIE-16GBCPU: 6 cores Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz | Models | AUC | g-AUC | NDCG@2 | NDCG@10 | seconds per epoch on GPU | seconds per epoch on CPU| config || :------| :------: | :------: | :------: | :------: | :------: | :------: | :------ || A2SVD | 0.8251 | 0.8178 | 0.2922 | 0.4264 | 249.5 | 440.0 | N/A || GRU4Rec | 0.8411 | 0.8332 | 0.3213 | 0.4547 | 439.0 | 4285.0 | max_seq_length=50, hidden_size=40|| Caser | 0.8244 | 0.8171 | 0.283 | 0.4194 | 314.3 | 5369.9 | T=1, n_v=128, n_h=128, L=3, min_seq_length=5|| SLi_Rec | 0.8631 | 0.8519 | 0.3491 | 0.4842 | 549.6 | 5014.0 | attention_size=40, max_seq_length=50, hidden_size=40|| NextItNet* | 0.6793 | 0.6769 | 0.0602 | 0.1733 | 112.0 | 214.5 | min_seq_length=3, dilations=\[1,2,4,1,2,4\], kernel_size=3 || SUM | 0.8481 | 0.8406 | 0.3394 | 0.4774 | 1005.0 | 9427.0 | hidden_size=40, slots=4, dropout=0| Note 1: The models are grid searched with a coarse granularity and the results are for reference only. Note 2: The NextItNet model requires a dataset with a strong sequential property, but the Amazon dataset used in this notebook does not meet that requirement, so the NextItNet model may not perform well here. If you use other datasets with a strong sequential property, NextItNet is recommended. Note 3: The time cost of the NextItNet model is significantly shorter than that of the other models because it does not need to expand the history of the training data. 3. Online servingIn this section, we provide a simple example to illustrate how we can use the trained model to serve production demand.Suppose we are in a new session. First let's load a previously trained model:
###Code
model_best_trained = SeqModel(hparams, input_creator, seed=RANDOM_SEED)
path_best_trained = os.path.join(hparams.MODEL_DIR, "best_model")
print('loading saved model in {0}'.format(path_best_trained))
model_best_trained.load_model(path_best_trained)
###Output
loading saved model in ../../tests/resources/deeprec/slirec/model/best_model
INFO:tensorflow:Restoring parameters from ../../tests/resources/deeprec/slirec/model/best_model
###Markdown
Let's see if we loaded the model correctly. The testing metrics should be close to the numbers we obtained in the training stage.
###Code
model_best_trained.run_eval(test_file, num_ngs=test_num_ngs)
###Output
_____no_output_____
###Markdown
And we make predictions using this model. In the next step, we will make predictions using a serving model. Then we can check if the two result files are consistent.
###Code
model_best_trained.predict(test_file, output_file)
###Output
_____no_output_____
###Markdown
Exciting. Now let's start our quick journey of online serving. For efficient and flexible serving, we usually keep only the necessary computation nodes and freeze the TF model into a single pb file, so that we can easily compute scores with this unified pb file in either Python or Java:
###Code
with model_best_trained.sess as sess:
graph_def = model_best_trained.graph.as_graph_def()
output_graph_def = tf.graph_util.convert_variables_to_constants(
sess,
graph_def,
["pred"]
)
outfilepath = os.path.join(hparams.MODEL_DIR, "serving_model.pb")
with tf.gfile.GFile(outfilepath, 'wb') as f:
f.write(output_graph_def.SerializeToString())
###Output
_____no_output_____
###Markdown
The serving logic is as simple as feeding the feature values to the corresponding input nodes and fetching the score from the output node. In our model, the input nodes are some placeholders and control variables (such as is_training, layer_keeps). We can get the nodes by their names:
###Code
class LoadFrozedPredModel:
def __init__(self, graph):
self.pred = graph.get_tensor_by_name('import/pred:0')
self.items = graph.get_tensor_by_name('import/items:0')
self.cates = graph.get_tensor_by_name('import/cates:0')
self.item_history = graph.get_tensor_by_name('import/item_history:0')
self.item_cate_history = graph.get_tensor_by_name('import/item_cate_history:0')
self.mask = graph.get_tensor_by_name('import/mask:0')
self.time_from_first_action = graph.get_tensor_by_name('import/time_from_first_action:0')
self.time_to_now = graph.get_tensor_by_name('import/time_to_now:0')
self.layer_keeps = graph.get_tensor_by_name('import/layer_keeps:0')
self.is_training = graph.get_tensor_by_name('import/is_training:0')
def infer_as_serving(model, infile, outfile, hparams, iterator, sess):
preds = []
for batch_data_input in iterator.load_data_from_file(infile, batch_num_ngs=0):
if batch_data_input:
feed_dict = {
model.layer_keeps:np.ones(3, dtype=np.float32),
model.is_training:False,
model.items: batch_data_input[iterator.items],
model.cates: batch_data_input[iterator.cates],
model.item_history: batch_data_input[iterator.item_history],
model.item_cate_history: batch_data_input[iterator.item_cate_history],
model.mask: batch_data_input[iterator.mask],
model.time_from_first_action: batch_data_input[iterator.time_from_first_action],
model.time_to_now: batch_data_input[iterator.time_to_now]
}
step_pred = sess.run(model.pred, feed_dict=feed_dict)
preds.extend(np.reshape(step_pred, -1))
with open(outfile, "w") as wt:
for line in preds:
wt.write('{0}\n'.format(line))
###Output
_____no_output_____
###Markdown
Here is the main pipeline for inference in an online serving manner. You can compare 'output_serving.txt' with 'output.txt' to see if the results are consistent.The input file format is the same as introduced in Section 1 'Input data format'. In the serving stage, since we do not need a ground-truth label, you can simply place any number, such as zero, in the label column. The iterator will parse the input file and convert it into the required format for the model's feed dictionary.
###Code
G = tf.Graph()
with tf.gfile.GFile(
os.path.join(hparams.MODEL_DIR, "serving_model.pb"),
'rb'
) as f, G.as_default():
graph_def_optimized = tf.GraphDef()
graph_def_optimized.ParseFromString(f.read())
#### uncomment this line if you want to check what content is included in the graph
#print('graph_def_optimized = ' + str(graph_def_optimized))
with tf.Session(graph=G) as sess:
tf.import_graph_def(graph_def_optimized)
model = LoadFrozedPredModel(sess.graph)
serving_output_file = os.path.join(data_path, r'output_serving.txt')
iterator = input_creator(hparams, tf.Graph())
infer_as_serving(model, test_file, serving_output_file, hparams, iterator, sess)
###Output
_____no_output_____
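###Markdown
To check the consistency mentioned above, a minimal sketch (assuming both files hold one floating-point score per line, in the same order) can compare the two result files numerically:
###Code
# Sketch: compare offline predictions (output.txt) with serving predictions (output_serving.txt).
# Assumption: both files contain one floating-point score per line, in matching order.
with open(output_file) as f_offline, open(serving_output_file) as f_serving:
    offline_scores = np.array([float(x) for x in f_offline.read().split()])
    serving_scores = np.array([float(x) for x in f_serving.read().split()])
print("number of scores:", len(offline_scores), len(serving_scores))
print("max absolute difference:", np.abs(offline_scores - serving_scores).max())
###Output
_____no_output_____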
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. Sequential Recommender Quick Start Example: SLi_Rec : Adaptive User Modeling with Long and Short-Term Preferences for Personalized RecommendationUnlike a general recommender such as Matrix Factorization or xDeepFM (in the repo), which doesn't consider the order of the user's activities, sequential recommender systems take the sequence of user behaviors as context, and the goal is to predict the items that the user will interact with in a short time (in an extreme case, the item that the user will interact with next).This notebook aims to give you a quick example of how to train a sequential model based on a public Amazon dataset. Currently, we can support NextItNet \[4\], GRU4Rec \[2\], Caser \[3\], A2SVD \[1\], SLi_Rec \[1\], and SUM \[5\]. Without loss of generality, this notebook takes the [SLi_Rec model](https://www.microsoft.com/en-us/research/uploads/prod/2019/07/IJCAI19-ready_v1.pdf) as an example.SLi_Rec \[1\] is a deep learning-based model that aims at capturing both long and short-term user preferences for precise recommender systems. To summarize, SLi_Rec has the following key properties:* It adopts the attentive "Asymmetric-SVD" paradigm for long-term modeling;* It takes both time irregularity and semantic irregularity into consideration by modifying the gating logic in LSTM.* It uses an attention mechanism to dynamically fuse the long-term component and the short-term component.In this notebook, we test SLi_Rec on a subset of the public dataset: [Amazon_reviews](http://snap.stanford.edu/data/amazon/productGraph/categoryFiles/reviews_Movies_and_TV_5.json.gz) and [Amazon_metadata](http://snap.stanford.edu/data/amazon/productGraph/categoryFiles/meta_Movies_and_TV.json.gz)This notebook is well tested under TF 1.15.0. 0. Global Settings and Imports
###Code
import sys
import os
import logging
import papermill as pm
import scrapbook as sb
from tempfile import TemporaryDirectory
import numpy as np
import tensorflow as tf
tf.get_logger().setLevel('ERROR') # only show error messages
from recommenders.utils.timer import Timer
from recommenders.utils.constants import SEED
from recommenders.models.deeprec.deeprec_utils import (
prepare_hparams
)
from recommenders.datasets.amazon_reviews import download_and_extract, data_preprocessing
from recommenders.datasets.download_utils import maybe_download
from recommenders.models.deeprec.models.sequential.sli_rec import SLI_RECModel as SeqModel
#### to use the other model, use one of the following lines:
# from recommenders.models.deeprec.models.sequential.asvd import A2SVDModel as SeqModel
# from recommenders.models.deeprec.models.sequential.caser import CaserModel as SeqModel
# from recommenders.models.deeprec.models.sequential.gru4rec import GRU4RecModel as SeqModel
# from recommenders.models.deeprec.models.sequential.sum import SUMModel as SeqModel
#from recommenders.models.deeprec.models.sequential.nextitnet import NextItNetModel
from recommenders.models.deeprec.io.sequential_iterator import SequentialIterator
#from recommenders.models.deeprec.io.nextitnet_iterator import NextItNetIterator
print("System version: {}".format(sys.version))
print("Tensorflow version: {}".format(tf.__version__))
## ATTENTION: change to the corresponding config file, e.g., caser.yaml for CaserModel, sum.yaml for SUMModel
yaml_file = '../../recommenders/models/deeprec/config/sli_rec.yaml'
###Output
_____no_output_____
###Markdown
Parameters
###Code
EPOCHS = 10
BATCH_SIZE = 400
RANDOM_SEED = SEED # Set None for non-deterministic result
data_path = os.path.join("..", "..", "tests", "resources", "deeprec", "slirec")
###Output
_____no_output_____
###Markdown
1. Input data formatThe input data contains 8 columns, i.e., label, user_id, item_id, category_id, timestamp, the item history, the category history, and the history timestamps; columns are separated by `"\t"`. item_id and category_id denote the target item and category, which means that for this instance, we want to guess whether user user_id will interact with item_id at timestamp. The history columns record the user behavior list up to the timestamp; elements are separated by commas. The label is a binary value with 1 for positive instances and 0 for negative instances. One example for an instance is: `1 A1QQ86H5M2LVW2 B0059XTU1S Movies 1377561600 B002ZG97WE,B004IK30PA,B000BNX3AU,B0017ANB08,B005LAIHW2 Movies,Movies,Movies,Movies,Movies 1304294400,1304812800,1315785600,1316304000,1356998400` In the data preprocessing stage, we have a script to generate some ID mapping dictionaries, so user_id, item_id and category_id will be mapped into integer indices starting from 1. And you need to tell the input iterator where the ID mapping files are. (For example, in the next section, we have some mapping files like user_vocab, item_vocab, and cate_vocab). The data preprocessing script is at [recommenders/dataset/amazon_reviews.py](../../recommenders/dataset/amazon_reviews.py); you need to call the `_create_vocab(train_file, user_vocab, item_vocab, cate_vocab)` function. Note that the ID vocabulary is only created from the train_file, so new IDs in valid_file or test_file will be regarded as unknown IDs and assigned a default 0 index.Only the SLi_Rec model is time-aware. For the other models, you can just pad some meaningless timestamps in the data files to fill up the format; the models will ignore these columns.We use Softmax as the loss function. In the training and evaluation stages, we group 1 positive instance with num_ngs negative instances. Pair-wise ranking can be regarded as a special case of Softmax ranking, where num_ngs is set to 1. More specifically, for training and evaluation, you need to organize the data file such that each positive instance is followed by num_ngs negative instances. Our program will take 1+num_ngs lines as a unit for the Softmax calculation. num_ngs is a parameter you need to pass to the `prepare_hparams`, `fit` and `run_eval` functions. `train_num_ngs` in `prepare_hparams` denotes the number of negative instances for training, where a recommended number is 4. `valid_num_ngs` and `num_ngs` in `fit` and `run_eval` denote the number in evaluation. In evaluation, the model calculates metrics among the 1+num_ngs instances. For the `predict` function, since we only need to calculate a score for each individual instance, there is no need for the num_ngs setting. More details and examples will be provided in the following sections.For the training stage, if you don't want to prepare negative instances, you can just provide positive instances and set the parameters `need_sample=True, train_num_ngs=train_num_ngs` for the function `prepare_hparams`; our model will dynamically sample `train_num_ngs` instances as negative samples in each mini batch. Amazon datasetNow let's start with a public dataset containing product reviews and metadata from Amazon, which is widely used as a benchmark dataset in the recommendation systems field.
###Code
# for test
train_file = os.path.join(data_path, r'train_data')
valid_file = os.path.join(data_path, r'valid_data')
test_file = os.path.join(data_path, r'test_data')
user_vocab = os.path.join(data_path, r'user_vocab.pkl')
item_vocab = os.path.join(data_path, r'item_vocab.pkl')
cate_vocab = os.path.join(data_path, r'category_vocab.pkl')
output_file = os.path.join(data_path, r'output.txt')
reviews_name = 'reviews_Movies_and_TV_5.json'
meta_name = 'meta_Movies_and_TV.json'
reviews_file = os.path.join(data_path, reviews_name)
meta_file = os.path.join(data_path, meta_name)
train_num_ngs = 4 # number of negative instances with a positive instance for training
valid_num_ngs = 4 # number of negative instances with a positive instance for validation
test_num_ngs = 9 # number of negative instances with a positive instance for testing
sample_rate = 0.01 # sample a small item set for training and testing here for fast example
input_files = [reviews_file, meta_file, train_file, valid_file, test_file, user_vocab, item_vocab, cate_vocab]
if not os.path.exists(train_file):
download_and_extract(reviews_name, reviews_file)
download_and_extract(meta_name, meta_file)
data_preprocessing(*input_files, sample_rate=sample_rate, valid_num_ngs=valid_num_ngs, test_num_ngs=test_num_ngs)
#### uncomment this for the NextItNet model, because it does not need to unfold the user history
# data_preprocessing(*input_files, sample_rate=sample_rate, valid_num_ngs=valid_num_ngs, test_num_ngs=test_num_ngs, is_history_expanding=False)
###Output
_____no_output_____
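###Markdown
Before preparing the hyper-parameters, a small sanity check can print the first training instance to see the tab-separated columns described in Section 1. This is only a sketch; the column names below are our reading of the format description, not identifiers used by the library.
###Code
# Sketch: peek at the first instance of the preprocessed training file.
# Assumption: each line holds the 8 tab-separated columns described in Section 1.
with open(train_file, "r") as f:
    first_instance = f.readline().rstrip("\n")
fields = first_instance.split("\t")
print("number of columns:", len(fields))
column_names = ["label", "user_id", "item_id", "category_id", "timestamp",
                "history_item_ids", "history_category_ids", "history_timestamps"]
for name, value in zip(column_names, fields):
    print("{0:22s}: {1}".format(name, value[:80]))  # truncate long history columns for display
###Output
_____no_output_____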
###Markdown
1.1 Prepare hyper-parametersprepare_hparams() will create a full set of hyper-parameters for model training, such as learning rate, feature number, and dropout ratio. We can put those parameters in a yaml file (a complete list of parameters can be found under our config folder), or pass parameters as the function's arguments (which will overwrite yaml settings).Parameter hints: `need_sample` controls whether to perform dynamic negative sampling in the mini-batch. `train_num_ngs` indicates how many negative instances follow each positive instance. Examples: (1) `need_sample=True and train_num_ngs=4`: There are only positive instances in your training file. Our model will dynamically sample 4 negative instances for each positive instance in the mini-batch. Note that if need_sample is set to True, train_num_ngs should be greater than zero. (2) `need_sample=False and train_num_ngs=4`: In your training file, each positive line is followed by 4 negative lines. Note that if need_sample is set to False, you must provide a training file with negative instances, and train_num_ngs should match the number of negative instances in your training file.
###Code
### NOTE:
### remember to use `_create_vocab(train_file, user_vocab, item_vocab, cate_vocab)` to generate the user_vocab, item_vocab and cate_vocab files, if you are using your own dataset rather than using our demo Amazon dataset.
hparams = prepare_hparams(yaml_file,
embed_l2=0.,
layer_l2=0.,
learning_rate=0.001, # set to 0.01 if batch normalization is disabled
epochs=EPOCHS,
batch_size=BATCH_SIZE,
show_step=20,
MODEL_DIR=os.path.join(data_path, "model/"),
SUMMARIES_DIR=os.path.join(data_path, "summary/"),
user_vocab=user_vocab,
item_vocab=item_vocab,
cate_vocab=cate_vocab,
need_sample=True,
train_num_ngs=train_num_ngs, # provides the number of negative instances for each positive instance for loss computation.
)
###Output
_____no_output_____
###Markdown
1.2 Create data loaderDesignate a data iterator for the model. All our sequential models use SequentialIterator. The data format is introduced above. Validation and testing data are files produced by offline negative sampling with `valid_num_ngs` and `test_num_ngs` negative instances per positive instance, respectively.
###Code
input_creator = SequentialIterator
#### uncomment this for the NextItNet model, because it needs a special data iterator for training
#input_creator = NextItNetIterator
###Output
_____no_output_____
###Markdown
2. Create modelWhen both hyper-parameters and data iterator are ready, we can create a model:
###Code
model = SeqModel(hparams, input_creator, seed=RANDOM_SEED)
## sometimes we don't want to train a model from scratch
## then we can load a pre-trained model like this:
#model.load_model(r'your_model_path')
###Output
_____no_output_____
###Markdown
Now let's see what the model's performance is at this point (before any training):
###Code
# test_num_ngs is the number of negative lines after each positive line in your test_file
print(model.run_eval(test_file, num_ngs=test_num_ngs))
###Output
{'auc': 0.4857, 'logloss': 0.6931, 'mean_mrr': 0.2665, 'ndcg@2': 0.1357, 'ndcg@4': 0.2186, 'ndcg@6': 0.2905, 'group_auc': 0.4849}
###Markdown
An AUC of 0.5 corresponds to random guessing. We can see that before training, the model behaves like random guessing. 2.1 Train modelNext we want to train the model on a training set and check its performance on a validation dataset. Training the model is as simple as a function call:
###Code
with Timer() as train_time:
model = model.fit(train_file, valid_file, valid_num_ngs=valid_num_ngs)
# valid_num_ngs is the number of negative lines after each positive line in your valid_file
# we will evaluate the performance of model on valid_file every epoch
print('Time cost for training is {0:.2f} mins'.format(train_time.interval/60.0))
###Output
step 20 , total_loss: 1.6078, data_loss: 1.6078
step 40 , total_loss: 1.6054, data_loss: 1.6054
eval valid at epoch 1: auc:0.4975,logloss:0.6929,mean_mrr:0.4592,ndcg@2:0.3292,ndcg@4:0.5125,ndcg@6:0.5915,group_auc:0.4994
step 20 , total_loss: 1.5786, data_loss: 1.5786
step 40 , total_loss: 1.4193, data_loss: 1.4193
eval valid at epoch 2: auc:0.6486,logloss:0.6946,mean_mrr:0.5567,ndcg@2:0.472,ndcg@4:0.6292,ndcg@6:0.6669,group_auc:0.6363
step 20 , total_loss: 1.3229, data_loss: 1.3229
step 40 , total_loss: 1.3079, data_loss: 1.3079
eval valid at epoch 3: auc:0.6887,logloss:0.8454,mean_mrr:0.6032,ndcg@2:0.537,ndcg@4:0.6705,ndcg@6:0.7022,group_auc:0.683
step 20 , total_loss: 1.3521, data_loss: 1.3521
step 40 , total_loss: 1.2250, data_loss: 1.2250
eval valid at epoch 4: auc:0.6978,logloss:0.7005,mean_mrr:0.6236,ndcg@2:0.5622,ndcg@4:0.6881,ndcg@6:0.7175,group_auc:0.699
step 20 , total_loss: 1.2826, data_loss: 1.2826
step 40 , total_loss: 1.2795, data_loss: 1.2795
eval valid at epoch 5: auc:0.7152,logloss:0.6695,mean_mrr:0.6382,ndcg@2:0.582,ndcg@4:0.7009,ndcg@6:0.7286,group_auc:0.7139
step 20 , total_loss: 1.2214, data_loss: 1.2214
step 40 , total_loss: 1.2521, data_loss: 1.2521
eval valid at epoch 6: auc:0.722,logloss:0.6141,mean_mrr:0.637,ndcg@2:0.5796,ndcg@4:0.6993,ndcg@6:0.7276,group_auc:0.7116
step 20 , total_loss: 1.1884, data_loss: 1.1884
step 40 , total_loss: 1.1957, data_loss: 1.1957
eval valid at epoch 7: auc:0.7287,logloss:0.6183,mean_mrr:0.6417,ndcg@2:0.5875,ndcg@4:0.7031,ndcg@6:0.7312,group_auc:0.7167
step 20 , total_loss: 1.1779, data_loss: 1.1779
step 40 , total_loss: 1.1616, data_loss: 1.1616
eval valid at epoch 8: auc:0.7342,logloss:0.6584,mean_mrr:0.6538,ndcg@2:0.6006,ndcg@4:0.7121,ndcg@6:0.7402,group_auc:0.7248
step 20 , total_loss: 1.1299, data_loss: 1.1299
step 40 , total_loss: 1.2055, data_loss: 1.2055
eval valid at epoch 9: auc:0.7324,logloss:0.6268,mean_mrr:0.6541,ndcg@2:0.5981,ndcg@4:0.7129,ndcg@6:0.7404,group_auc:0.7239
step 20 , total_loss: 1.1927, data_loss: 1.1927
step 40 , total_loss: 1.1909, data_loss: 1.1909
eval valid at epoch 10: auc:0.7369,logloss:0.6122,mean_mrr:0.6611,ndcg@2:0.6087,ndcg@4:0.7181,ndcg@6:0.7457,group_auc:0.731
[(1, {'auc': 0.4975, 'logloss': 0.6929, 'mean_mrr': 0.4592, 'ndcg@2': 0.3292, 'ndcg@4': 0.5125, 'ndcg@6': 0.5915, 'group_auc': 0.4994}), (2, {'auc': 0.6486, 'logloss': 0.6946, 'mean_mrr': 0.5567, 'ndcg@2': 0.472, 'ndcg@4': 0.6292, 'ndcg@6': 0.6669, 'group_auc': 0.6363}), (3, {'auc': 0.6887, 'logloss': 0.8454, 'mean_mrr': 0.6032, 'ndcg@2': 0.537, 'ndcg@4': 0.6705, 'ndcg@6': 0.7022, 'group_auc': 0.683}), (4, {'auc': 0.6978, 'logloss': 0.7005, 'mean_mrr': 0.6236, 'ndcg@2': 0.5622, 'ndcg@4': 0.6881, 'ndcg@6': 0.7175, 'group_auc': 0.699}), (5, {'auc': 0.7152, 'logloss': 0.6695, 'mean_mrr': 0.6382, 'ndcg@2': 0.582, 'ndcg@4': 0.7009, 'ndcg@6': 0.7286, 'group_auc': 0.7139}), (6, {'auc': 0.722, 'logloss': 0.6141, 'mean_mrr': 0.637, 'ndcg@2': 0.5796, 'ndcg@4': 0.6993, 'ndcg@6': 0.7276, 'group_auc': 0.7116}), (7, {'auc': 0.7287, 'logloss': 0.6183, 'mean_mrr': 0.6417, 'ndcg@2': 0.5875, 'ndcg@4': 0.7031, 'ndcg@6': 0.7312, 'group_auc': 0.7167}), (8, {'auc': 0.7342, 'logloss': 0.6584, 'mean_mrr': 0.6538, 'ndcg@2': 0.6006, 'ndcg@4': 0.7121, 'ndcg@6': 0.7402, 'group_auc': 0.7248}), (9, {'auc': 0.7324, 'logloss': 0.6268, 'mean_mrr': 0.6541, 'ndcg@2': 0.5981, 'ndcg@4': 0.7129, 'ndcg@6': 0.7404, 'group_auc': 0.7239}), (10, {'auc': 0.7369, 'logloss': 0.6122, 'mean_mrr': 0.6611, 'ndcg@2': 0.6087, 'ndcg@4': 0.7181, 'ndcg@6': 0.7457, 'group_auc': 0.731})]
best epoch: 10
Time cost for training is 3.22 mins
###Markdown
2.2 Evaluate modelAgain, let's see what the model's performance is now (after training):
###Code
res_syn = model.run_eval(test_file, num_ngs=test_num_ngs)
print(res_syn)
sb.glue("res_syn", res_syn)
###Output
_____no_output_____
###Markdown
If we want to get the full prediction scores rather than evaluation metrics, we can do this:
###Code
model = model.predict(test_file, output_file)
# The data was downloaded to the tmpdir folder. You can delete the files manually if you do not need them any more.
###Output
_____no_output_____
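###Markdown
A minimal sketch (assuming one floating-point score per line, in the same order as `test_file`) can show the first few scores written to `output_file`:
###Code
# Sketch: print the first few prediction scores produced by model.predict.
# Assumption: one floating-point score per line, ordered as in test_file.
with open(output_file, "r") as f:
    head_scores = [float(line.strip()) for _, line in zip(range(10), f)]
print(head_scores)
###Output
_____no_output_____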
###Markdown
2.3 Running models with large datasetHere are the performances of popular sequential models on the whole Amazon dataset, which has 1,697,533 positive instances.Settings for reproducing the results:`learning_rate=0.001, dropout=0.3, item_embedding_dim=32, cate_embedding_dim=8, l2_norm=0, batch_size=400, train_num_ngs=4, valid_num_ngs=4, test_num_ngs=49`We compare the running time with CPU only and with GPU on the larger dataset. It appears that GPU can significantly accelerate the training. Hardware specification for running the large dataset: GPU: Tesla P100-PCIE-16GBCPU: 6 cores Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz | Models | AUC | g-AUC | NDCG@2 | NDCG@10 | seconds per epoch on GPU | seconds per epoch on CPU| config || :------| :------: | :------: | :------: | :------: | :------: | :------: | :------ || A2SVD | 0.8251 | 0.8178 | 0.2922 | 0.4264 | 249.5 | 440.0 | N/A || GRU4Rec | 0.8411 | 0.8332 | 0.3213 | 0.4547 | 439.0 | 4285.0 | max_seq_length=50, hidden_size=40|| Caser | 0.8244 | 0.8171 | 0.283 | 0.4194 | 314.3 | 5369.9 | T=1, n_v=128, n_h=128, L=3, min_seq_length=5|| SLi_Rec | 0.8631 | 0.8519 | 0.3491 | 0.4842 | 549.6 | 5014.0 | attention_size=40, max_seq_length=50, hidden_size=40|| NextItNet* | 0.6793 | 0.6769 | 0.0602 | 0.1733 | 112.0 | 214.5 | min_seq_length=3, dilations=\[1,2,4,1,2,4\], kernel_size=3 || SUM | 0.8481 | 0.8406 | 0.3394 | 0.4774 | 1005.0 | 9427.0 | hidden_size=40, slots=4, dropout=0| Note 1: The models are grid searched with a coarse granularity and the results are for reference only. Note 2: The NextItNet model requires a dataset with a strong sequential property, but the Amazon dataset used in this notebook does not meet that requirement, so the NextItNet model may not perform well here. If you use other datasets with a strong sequential property, NextItNet is recommended. Note 3: The time cost of the NextItNet model is significantly shorter than that of the other models because it does not need to expand the history of the training data. 3. Online servingIn this section, we provide a simple example to illustrate how we can use the trained model to serve production demand.Suppose we are in a new session. First let's load a previously trained model:
###Code
model_best_trained = SeqModel(hparams, input_creator, seed=RANDOM_SEED)
path_best_trained = os.path.join(hparams.MODEL_DIR, "best_model")
print('loading saved model in {0}'.format(path_best_trained))
model_best_trained.load_model(path_best_trained)
###Output
loading saved model in ../../tests/resources/deeprec/slirec/model/best_model
INFO:tensorflow:Restoring parameters from ../../tests/resources/deeprec/slirec/model/best_model
###Markdown
Let's see if we loaded the model correctly. The testing metrics should be close to the numbers we obtained in the training stage.
###Code
model_best_trained.run_eval(test_file, num_ngs=test_num_ngs)
###Output
_____no_output_____
###Markdown
And we make predictions using this model. In the next step, we will make predictions using a serving model. Then we can check if the two result files are consistent.
###Code
model_best_trained.predict(test_file, output_file)
###Output
_____no_output_____
###Markdown
Exciting. Now let's start our quick journey of online serving. For efficient and flexible serving, we usually keep only the necessary computation nodes and freeze the TF model into a single pb file, so that we can easily compute scores with this unified pb file in either Python or Java:
###Code
with model_best_trained.sess as sess:
graph_def = model_best_trained.graph.as_graph_def()
output_graph_def = tf.graph_util.convert_variables_to_constants(
sess,
graph_def,
["pred"]
)
outfilepath = os.path.join(hparams.MODEL_DIR, "serving_model.pb")
with tf.gfile.GFile(outfilepath, 'wb') as f:
f.write(output_graph_def.SerializeToString())
###Output
_____no_output_____
###Markdown
The serving logic is as simple as feeding the feature values to the corresponding input nodes and fetching the score from the output node. In our model, the input nodes are some placeholders and control variables (such as is_training, layer_keeps). We can get the nodes by their names:
###Code
class LoadFrozedPredModel:
def __init__(self, graph):
self.pred = graph.get_tensor_by_name('import/pred:0')
self.items = graph.get_tensor_by_name('import/items:0')
self.cates = graph.get_tensor_by_name('import/cates:0')
self.item_history = graph.get_tensor_by_name('import/item_history:0')
self.item_cate_history = graph.get_tensor_by_name('import/item_cate_history:0')
self.mask = graph.get_tensor_by_name('import/mask:0')
self.time_from_first_action = graph.get_tensor_by_name('import/time_from_first_action:0')
self.time_to_now = graph.get_tensor_by_name('import/time_to_now:0')
self.layer_keeps = graph.get_tensor_by_name('import/layer_keeps:0')
self.is_training = graph.get_tensor_by_name('import/is_training:0')
def infer_as_serving(model, infile, outfile, hparams, iterator, sess):
preds = []
for batch_data_input in iterator.load_data_from_file(infile, batch_num_ngs=0):
if batch_data_input:
feed_dict = {
model.layer_keeps:np.ones(3, dtype=np.float32),
model.is_training:False,
model.items: batch_data_input[iterator.items],
model.cates: batch_data_input[iterator.cates],
model.item_history: batch_data_input[iterator.item_history],
model.item_cate_history: batch_data_input[iterator.item_cate_history],
model.mask: batch_data_input[iterator.mask],
model.time_from_first_action: batch_data_input[iterator.time_from_first_action],
model.time_to_now: batch_data_input[iterator.time_to_now]
}
step_pred = sess.run(model.pred, feed_dict=feed_dict)
preds.extend(np.reshape(step_pred, -1))
with open(outfile, "w") as wt:
for line in preds:
wt.write('{0}\n'.format(line))
###Output
_____no_output_____
###Markdown
Here is the main pipeline for inference in an online serving manner. You can compare 'output_serving.txt' with 'output.txt' to see if the results are consistent.The input file format is the same as introduced in Section 1 'Input data format'. In the serving stage, since we do not need a ground-truth label, you can simply place any number, such as zero, in the label column. The iterator will parse the input file and convert it into the required format for the model's feed dictionary.
###Code
G = tf.Graph()
with tf.gfile.GFile(
os.path.join(hparams.MODEL_DIR, "serving_model.pb"),
'rb'
) as f, G.as_default():
graph_def_optimized = tf.GraphDef()
graph_def_optimized.ParseFromString(f.read())
#### uncomment this line if you want to check what content is included in the graph
#print('graph_def_optimized = ' + str(graph_def_optimized))
with tf.Session(graph=G) as sess:
tf.import_graph_def(graph_def_optimized)
model = LoadFrozedPredModel(sess.graph)
serving_output_file = os.path.join(data_path, r'output_serving.txt')
iterator = input_creator(hparams, tf.Graph())
infer_as_serving(model, test_file, serving_output_file, hparams, iterator, sess)
###Output
_____no_output_____
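###Markdown
A minimal consistency check (a sketch that assumes both files contain one floating-point score per line, in matching order) can compare 'output.txt' with 'output_serving.txt':
###Code
# Sketch: compare the offline and serving prediction files.
# Assumption: one floating-point score per line in both files, in matching order.
with open(output_file) as f_offline, open(serving_output_file) as f_serving:
    offline_scores = np.array([float(x) for x in f_offline.read().split()])
    serving_scores = np.array([float(x) for x in f_serving.read().split()])
print("number of scores:", len(offline_scores), len(serving_scores))
print("max absolute difference:", np.abs(offline_scores - serving_scores).max())
###Output
_____no_output_____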
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. Sequential Recommender Quick Start Example: SLi_Rec : Adaptive User Modeling with Long and Short-Term Preferences for Personalized RecommendationUnlike a general recommender such as Matrix Factorization or xDeepFM (in the repo), which doesn't consider the order of the user's activities, sequential recommender systems take the sequence of user behaviors as context, and the goal is to predict the items that the user will interact with in a short time (in an extreme case, the item that the user will interact with next).This notebook aims to give you a quick example of how to train a sequential model based on a public Amazon dataset. Currently, we can support NextItNet \[4\], GRU4Rec \[2\], Caser \[3\], A2SVD \[1\], SLi_Rec \[1\], and SUM \[5\]. Without loss of generality, this notebook takes the [SLi_Rec model](https://www.microsoft.com/en-us/research/uploads/prod/2019/07/IJCAI19-ready_v1.pdf) as an example.SLi_Rec \[1\] is a deep learning-based model that aims at capturing both long and short-term user preferences for precise recommender systems. To summarize, SLi_Rec has the following key properties:* It adopts the attentive "Asymmetric-SVD" paradigm for long-term modeling;* It takes both time irregularity and semantic irregularity into consideration by modifying the gating logic in LSTM.* It uses an attention mechanism to dynamically fuse the long-term component and the short-term component.In this notebook, we test SLi_Rec on a subset of the public dataset: [Amazon_reviews](http://snap.stanford.edu/data/amazon/productGraph/categoryFiles/reviews_Movies_and_TV_5.json.gz) and [Amazon_metadata](http://snap.stanford.edu/data/amazon/productGraph/categoryFiles/meta_Movies_and_TV.json.gz)This notebook is tested under TF 2.6. 0. Global Settings and Imports
###Code
import sys
import os
import logging
import papermill as pm
import scrapbook as sb
from tempfile import TemporaryDirectory
import numpy as np
import tensorflow.compat.v1 as tf
tf.get_logger().setLevel('ERROR') # only show error messages
from recommenders.utils.timer import Timer
from recommenders.utils.constants import SEED
from recommenders.models.deeprec.deeprec_utils import (
prepare_hparams
)
from recommenders.datasets.amazon_reviews import download_and_extract, data_preprocessing
from recommenders.datasets.download_utils import maybe_download
from recommenders.models.deeprec.models.sequential.sli_rec import SLI_RECModel as SeqModel
#### to use the other model, use one of the following lines:
# from recommenders.models.deeprec.models.sequential.asvd import A2SVDModel as SeqModel
# from recommenders.models.deeprec.models.sequential.caser import CaserModel as SeqModel
# from recommenders.models.deeprec.models.sequential.gru4rec import GRU4RecModel as SeqModel
# from recommenders.models.deeprec.models.sequential.sum import SUMModel as SeqModel
#from recommenders.models.deeprec.models.sequential.nextitnet import NextItNetModel
from recommenders.models.deeprec.io.sequential_iterator import SequentialIterator
#from recommenders.models.deeprec.io.nextitnet_iterator import NextItNetIterator
print("System version: {}".format(sys.version))
print("Tensorflow version: {}".format(tf.__version__))
## ATTENTION: change to the corresponding config file, e.g., caser.yaml for CaserModel, sum.yaml for SUMModel
yaml_file = '../../recommenders/models/deeprec/config/sli_rec.yaml'
###Output
_____no_output_____
###Markdown
Parameters
###Code
EPOCHS = 10
BATCH_SIZE = 400
RANDOM_SEED = SEED # Set None for non-deterministic result
data_path = os.path.join("..", "..", "tests", "resources", "deeprec", "slirec")
###Output
_____no_output_____
###Markdown
1. Input data formatThe input data contains 8 columns, i.e., label, user_id, item_id, category_id, timestamp, the item history, the category history, and the history timestamps; columns are separated by `"\t"`. item_id and category_id denote the target item and category, which means that for this instance, we want to guess whether user user_id will interact with item_id at timestamp. The history columns record the user behavior list up to the timestamp; elements are separated by commas. The label is a binary value with 1 for positive instances and 0 for negative instances. One example for an instance is: `1 A1QQ86H5M2LVW2 B0059XTU1S Movies 1377561600 B002ZG97WE,B004IK30PA,B000BNX3AU,B0017ANB08,B005LAIHW2 Movies,Movies,Movies,Movies,Movies 1304294400,1304812800,1315785600,1316304000,1356998400` In the data preprocessing stage, we have a script to generate some ID mapping dictionaries, so user_id, item_id and category_id will be mapped into integer indices starting from 1. And you need to tell the input iterator where the ID mapping files are. (For example, in the next section, we have some mapping files like user_vocab, item_vocab, and cate_vocab). The data preprocessing script is at [recommenders/dataset/amazon_reviews.py](../../recommenders/dataset/amazon_reviews.py); you need to call the `_create_vocab(train_file, user_vocab, item_vocab, cate_vocab)` function. Note that the ID vocabulary is only created from the train_file, so new IDs in valid_file or test_file will be regarded as unknown IDs and assigned a default 0 index.Only the SLi_Rec model is time-aware. For the other models, you can just pad some meaningless timestamps in the data files to fill up the format; the models will ignore these columns.We use Softmax as the loss function. In the training and evaluation stages, we group 1 positive instance with `num_ngs` negative instances. Pair-wise ranking can be regarded as a special case of softmax ranking, where `num_ngs` is set to 1. More specifically, for training and evaluation, you need to organize the data file such that each positive instance is followed by `num_ngs` negative instances. Our program will take `1+num_ngs` lines as a unit for the Softmax calculation. `num_ngs` is a parameter you need to pass to the `prepare_hparams`, `fit` and `run_eval` functions. `train_num_ngs` in `prepare_hparams` denotes the number of negative instances for training, where a recommended number is 4. `valid_num_ngs` and `num_ngs` in `fit` and `run_eval` denote the number in evaluation. In evaluation, the model calculates metrics among the `1+num_ngs` instances. For the `predict` function, since we only need to calculate a score for each individual instance, there is no need for the `num_ngs` setting. More details and examples will be provided in the following sections.For the training stage, if you don't want to prepare negative instances, you can just provide positive instances and set the parameters `need_sample=True, train_num_ngs=train_num_ngs` for the function `prepare_hparams`; our model will dynamically sample `train_num_ngs` instances as negative samples in each mini batch. Amazon datasetNow let's start with a public dataset containing product reviews and metadata from Amazon, which is widely used as a benchmark dataset in the recommendation systems field.
###Code
# for test
train_file = os.path.join(data_path, r'train_data')
valid_file = os.path.join(data_path, r'valid_data')
test_file = os.path.join(data_path, r'test_data')
user_vocab = os.path.join(data_path, r'user_vocab.pkl')
item_vocab = os.path.join(data_path, r'item_vocab.pkl')
cate_vocab = os.path.join(data_path, r'category_vocab.pkl')
output_file = os.path.join(data_path, r'output.txt')
reviews_name = 'reviews_Movies_and_TV_5.json'
meta_name = 'meta_Movies_and_TV.json'
reviews_file = os.path.join(data_path, reviews_name)
meta_file = os.path.join(data_path, meta_name)
train_num_ngs = 4 # number of negative instances with a positive instance for training
valid_num_ngs = 4 # number of negative instances with a positive instance for validation
test_num_ngs = 9 # number of negative instances with a positive instance for testing
sample_rate = 0.01 # sample a small item set for training and testing here for fast example
input_files = [reviews_file, meta_file, train_file, valid_file, test_file, user_vocab, item_vocab, cate_vocab]
if not os.path.exists(train_file):
download_and_extract(reviews_name, reviews_file)
download_and_extract(meta_name, meta_file)
data_preprocessing(*input_files, sample_rate=sample_rate, valid_num_ngs=valid_num_ngs, test_num_ngs=test_num_ngs)
#### uncomment this for the NextItNet model, because it does not need to unfold the user history
# data_preprocessing(*input_files, sample_rate=sample_rate, valid_num_ngs=valid_num_ngs, test_num_ngs=test_num_ngs, is_history_expanding=False)
###Output
_____no_output_____
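###Markdown
As a quick check on the format described in Section 1, a small sketch can split the first line of the training file into its tab-separated columns (the column names used below are our interpretation of the description, not library identifiers):
###Code
# Sketch: inspect the first preprocessed training instance.
# Assumption: each line holds the 8 tab-separated columns described in Section 1.
with open(train_file, "r") as f:
    fields = f.readline().rstrip("\n").split("\t")
print("number of columns:", len(fields))
for name, value in zip(["label", "user_id", "item_id", "category_id", "timestamp",
                        "history_item_ids", "history_category_ids", "history_timestamps"], fields):
    print("{0:22s}: {1}".format(name, value[:80]))  # truncate long history columns
###Output
_____no_output_____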
###Markdown
1.1 Prepare hyper-parametersprepare_hparams() will create a full set of hyper-parameters for model training, such as learning rate, feature number, and dropout ratio. We can put those parameters in a yaml file (a complete list of parameters can be found under our config folder), or pass parameters as the function's arguments (which will overwrite yaml settings).Parameter hints: `need_sample` controls whether to perform dynamic negative sampling in the mini-batch. `train_num_ngs` indicates how many negative instances follow each positive instance. Examples: (1) `need_sample=True and train_num_ngs=4`: There are only positive instances in your training file. Our model will dynamically sample 4 negative instances for each positive instance in the mini-batch. Note that if need_sample is set to True, train_num_ngs should be greater than zero. (2) `need_sample=False and train_num_ngs=4`: In your training file, each positive line is followed by 4 negative lines. Note that if need_sample is set to False, you must provide a training file with negative instances, and train_num_ngs should match the number of negative instances in your training file.
###Code
### NOTE:
### remember to use `_create_vocab(train_file, user_vocab, item_vocab, cate_vocab)` to generate the user_vocab, item_vocab and cate_vocab files, if you are using your own dataset rather than using our demo Amazon dataset.
hparams = prepare_hparams(yaml_file,
embed_l2=0.,
layer_l2=0.,
learning_rate=0.001, # set to 0.01 if batch normalization is disabled
epochs=EPOCHS,
batch_size=BATCH_SIZE,
show_step=20,
MODEL_DIR=os.path.join(data_path, "model/"),
SUMMARIES_DIR=os.path.join(data_path, "summary/"),
user_vocab=user_vocab,
item_vocab=item_vocab,
cate_vocab=cate_vocab,
need_sample=True,
train_num_ngs=train_num_ngs, # provides the number of negative instances for each positive instance for loss computation.
)
###Output
_____no_output_____
###Markdown
1.2 Create data loaderDesignate a data iterator for the model. All our sequential models use SequentialIterator. The data format is introduced above. Validation and testing data are files produced by offline negative sampling with `valid_num_ngs` and `test_num_ngs` negative instances per positive instance, respectively.
###Code
input_creator = SequentialIterator
#### uncomment this for the NextItNet model, because it needs a special data iterator for training
#input_creator = NextItNetIterator
###Output
_____no_output_____
###Markdown
2. Create modelWhen both hyper-parameters and data iterator are ready, we can create a model:
###Code
model = SeqModel(hparams, input_creator, seed=RANDOM_SEED)
## sometimes we don't want to train a model from scratch
## then we can load a pre-trained model like this:
#model.load_model(r'your_model_path')
###Output
_____no_output_____
###Markdown
Now let's see what the model's performance is at this point (before any training):
###Code
# test_num_ngs is the number of negative lines after each positive line in your test_file
print(model.run_eval(test_file, num_ngs=test_num_ngs))
###Output
{'auc': 0.4857, 'logloss': 0.6931, 'mean_mrr': 0.2665, 'ndcg@2': 0.1357, 'ndcg@4': 0.2186, 'ndcg@6': 0.2905, 'group_auc': 0.4849}
###Markdown
An AUC of 0.5 corresponds to random guessing. We can see that before training, the model behaves like random guessing. 2.1 Train modelNext we want to train the model on a training set and check its performance on a validation dataset. Training the model is as simple as a function call:
###Code
with Timer() as train_time:
model = model.fit(train_file, valid_file, valid_num_ngs=valid_num_ngs)
# valid_num_ngs is the number of negative lines after each positive line in your valid_file
# we will evaluate the performance of model on valid_file every epoch
print('Time cost for training is {0:.2f} mins'.format(train_time.interval/60.0))
###Output
step 20 , total_loss: 1.6078, data_loss: 1.6078
step 40 , total_loss: 1.6054, data_loss: 1.6054
eval valid at epoch 1: auc:0.4975,logloss:0.6929,mean_mrr:0.4592,ndcg@2:0.3292,ndcg@4:0.5125,ndcg@6:0.5915,group_auc:0.4994
step 20 , total_loss: 1.5786, data_loss: 1.5786
step 40 , total_loss: 1.4193, data_loss: 1.4193
eval valid at epoch 2: auc:0.6486,logloss:0.6946,mean_mrr:0.5567,ndcg@2:0.472,ndcg@4:0.6292,ndcg@6:0.6669,group_auc:0.6363
step 20 , total_loss: 1.3229, data_loss: 1.3229
step 40 , total_loss: 1.3079, data_loss: 1.3079
eval valid at epoch 3: auc:0.6887,logloss:0.8454,mean_mrr:0.6032,ndcg@2:0.537,ndcg@4:0.6705,ndcg@6:0.7022,group_auc:0.683
step 20 , total_loss: 1.3521, data_loss: 1.3521
step 40 , total_loss: 1.2250, data_loss: 1.2250
eval valid at epoch 4: auc:0.6978,logloss:0.7005,mean_mrr:0.6236,ndcg@2:0.5622,ndcg@4:0.6881,ndcg@6:0.7175,group_auc:0.699
step 20 , total_loss: 1.2826, data_loss: 1.2826
step 40 , total_loss: 1.2795, data_loss: 1.2795
eval valid at epoch 5: auc:0.7152,logloss:0.6695,mean_mrr:0.6382,ndcg@2:0.582,ndcg@4:0.7009,ndcg@6:0.7286,group_auc:0.7139
step 20 , total_loss: 1.2214, data_loss: 1.2214
step 40 , total_loss: 1.2521, data_loss: 1.2521
eval valid at epoch 6: auc:0.722,logloss:0.6141,mean_mrr:0.637,ndcg@2:0.5796,ndcg@4:0.6993,ndcg@6:0.7276,group_auc:0.7116
step 20 , total_loss: 1.1884, data_loss: 1.1884
step 40 , total_loss: 1.1957, data_loss: 1.1957
eval valid at epoch 7: auc:0.7287,logloss:0.6183,mean_mrr:0.6417,ndcg@2:0.5875,ndcg@4:0.7031,ndcg@6:0.7312,group_auc:0.7167
step 20 , total_loss: 1.1779, data_loss: 1.1779
step 40 , total_loss: 1.1616, data_loss: 1.1616
eval valid at epoch 8: auc:0.7342,logloss:0.6584,mean_mrr:0.6538,ndcg@2:0.6006,ndcg@4:0.7121,ndcg@6:0.7402,group_auc:0.7248
step 20 , total_loss: 1.1299, data_loss: 1.1299
step 40 , total_loss: 1.2055, data_loss: 1.2055
eval valid at epoch 9: auc:0.7324,logloss:0.6268,mean_mrr:0.6541,ndcg@2:0.5981,ndcg@4:0.7129,ndcg@6:0.7404,group_auc:0.7239
step 20 , total_loss: 1.1927, data_loss: 1.1927
step 40 , total_loss: 1.1909, data_loss: 1.1909
eval valid at epoch 10: auc:0.7369,logloss:0.6122,mean_mrr:0.6611,ndcg@2:0.6087,ndcg@4:0.7181,ndcg@6:0.7457,group_auc:0.731
[(1, {'auc': 0.4975, 'logloss': 0.6929, 'mean_mrr': 0.4592, 'ndcg@2': 0.3292, 'ndcg@4': 0.5125, 'ndcg@6': 0.5915, 'group_auc': 0.4994}), (2, {'auc': 0.6486, 'logloss': 0.6946, 'mean_mrr': 0.5567, 'ndcg@2': 0.472, 'ndcg@4': 0.6292, 'ndcg@6': 0.6669, 'group_auc': 0.6363}), (3, {'auc': 0.6887, 'logloss': 0.8454, 'mean_mrr': 0.6032, 'ndcg@2': 0.537, 'ndcg@4': 0.6705, 'ndcg@6': 0.7022, 'group_auc': 0.683}), (4, {'auc': 0.6978, 'logloss': 0.7005, 'mean_mrr': 0.6236, 'ndcg@2': 0.5622, 'ndcg@4': 0.6881, 'ndcg@6': 0.7175, 'group_auc': 0.699}), (5, {'auc': 0.7152, 'logloss': 0.6695, 'mean_mrr': 0.6382, 'ndcg@2': 0.582, 'ndcg@4': 0.7009, 'ndcg@6': 0.7286, 'group_auc': 0.7139}), (6, {'auc': 0.722, 'logloss': 0.6141, 'mean_mrr': 0.637, 'ndcg@2': 0.5796, 'ndcg@4': 0.6993, 'ndcg@6': 0.7276, 'group_auc': 0.7116}), (7, {'auc': 0.7287, 'logloss': 0.6183, 'mean_mrr': 0.6417, 'ndcg@2': 0.5875, 'ndcg@4': 0.7031, 'ndcg@6': 0.7312, 'group_auc': 0.7167}), (8, {'auc': 0.7342, 'logloss': 0.6584, 'mean_mrr': 0.6538, 'ndcg@2': 0.6006, 'ndcg@4': 0.7121, 'ndcg@6': 0.7402, 'group_auc': 0.7248}), (9, {'auc': 0.7324, 'logloss': 0.6268, 'mean_mrr': 0.6541, 'ndcg@2': 0.5981, 'ndcg@4': 0.7129, 'ndcg@6': 0.7404, 'group_auc': 0.7239}), (10, {'auc': 0.7369, 'logloss': 0.6122, 'mean_mrr': 0.6611, 'ndcg@2': 0.6087, 'ndcg@4': 0.7181, 'ndcg@6': 0.7457, 'group_auc': 0.731})]
best epoch: 10
Time cost for training is 3.22 mins
###Markdown
2.2 Evaluate modelAgain, let's see what the model's performance is now (after training):
###Code
res_syn = model.run_eval(test_file, num_ngs=test_num_ngs)
print(res_syn)
sb.glue("res_syn", res_syn)
###Output
_____no_output_____
###Markdown
If we want to get the full prediction scores rather than evaluation metrics, we can do this:
###Code
model = model.predict(test_file, output_file)
# The data was downloaded to the tmpdir folder. You can delete the files manually if you do not need them any more.
###Output
_____no_output_____
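###Markdown
To look at the raw scores rather than aggregated metrics, a minimal sketch (assuming one floating-point score per line in `output_file`, in the same order as `test_file`) is:
###Code
# Sketch: show the first few prediction scores from output_file.
# Assumption: one floating-point score per line, ordered as in test_file.
with open(output_file, "r") as f:
    head_scores = [float(line.strip()) for _, line in zip(range(10), f)]
print(head_scores)
###Output
_____no_output_____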
###Markdown
2.3 Running models with large datasetHere are the performances of popular sequential models on the whole Amazon dataset, which has 1,697,533 positive instances.Settings for reproducing the results:`learning_rate=0.001, dropout=0.3, item_embedding_dim=32, cate_embedding_dim=8, l2_norm=0, batch_size=400, train_num_ngs=4, valid_num_ngs=4, test_num_ngs=49`We compare the running time with CPU only and with GPU on the larger dataset. It appears that GPU can significantly accelerate the training. Hardware specification for running the large dataset: GPU: Tesla P100-PCIE-16GBCPU: 6 cores Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz | Models | AUC | g-AUC | NDCG@2 | NDCG@10 | seconds per epoch on GPU | seconds per epoch on CPU| config || :------| :------: | :------: | :------: | :------: | :------: | :------: | :------ || A2SVD | 0.8251 | 0.8178 | 0.2922 | 0.4264 | 249.5 | 440.0 | N/A || GRU4Rec | 0.8411 | 0.8332 | 0.3213 | 0.4547 | 439.0 | 4285.0 | max_seq_length=50, hidden_size=40|| Caser | 0.8244 | 0.8171 | 0.283 | 0.4194 | 314.3 | 5369.9 | T=1, n_v=128, n_h=128, L=3, min_seq_length=5|| SLi_Rec | 0.8631 | 0.8519 | 0.3491 | 0.4842 | 549.6 | 5014.0 | attention_size=40, max_seq_length=50, hidden_size=40|| NextItNet* | 0.6793 | 0.6769 | 0.0602 | 0.1733 | 112.0 | 214.5 | min_seq_length=3, dilations=\[1,2,4,1,2,4\], kernel_size=3 || SUM | 0.8481 | 0.8406 | 0.3394 | 0.4774 | 1005.0 | 9427.0 | hidden_size=40, slots=4, dropout=0| Note 1: The models are grid searched with a coarse granularity and the results are for reference only. Note 2: The NextItNet model requires a dataset with a strong sequential property, but the Amazon dataset used in this notebook does not meet that requirement, so the NextItNet model may not perform well here. If you use other datasets with a strong sequential property, NextItNet is recommended. Note 3: The time cost of the NextItNet model is significantly shorter than that of the other models because it does not need to expand the history of the training data. 3. Loading Trained ModelsIn this section, we provide a simple example to illustrate how we can use the trained model to serve production demand.Suppose we are in a new session. First let's load a previously trained model:
###Code
model_best_trained = SeqModel(hparams, input_creator, seed=RANDOM_SEED)
path_best_trained = os.path.join(hparams.MODEL_DIR, "best_model")
print('loading saved model in {0}'.format(path_best_trained))
model_best_trained.load_model(path_best_trained)
###Output
loading saved model in ../../tests/resources/deeprec/slirec/model/best_model
INFO:tensorflow:Restoring parameters from ../../tests/resources/deeprec/slirec/model/best_model
###Markdown
Let's see if we loaded the model correctly. The testing metrics should be close to the numbers we obtained in the training stage.
###Code
model_best_trained.run_eval(test_file, num_ngs=test_num_ngs)
###Output
_____no_output_____
###Markdown
And we make predictions using this model. In the next step, we will make predictions using a serving model. Then we can check if the two result files are consistent.
###Code
model_best_trained.predict(test_file, output_file)
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. Sequential Recommender Quick Start Example: SLi_Rec : Adaptive User Modeling with Long and Short-Term Preferences for Personalized RecommendationUnlike a general recommender such as Matrix Factorization or xDeepFM (in the repo), which doesn't consider the order of the user's activities, sequential recommender systems take the sequence of user behaviors as context, and the goal is to predict the items that the user will interact with in a short time (in an extreme case, the item that the user will interact with next).This notebook aims to give you a quick example of how to train a sequential model based on a public Amazon dataset. Currently, we can support NextItNet \[4\], GRU4Rec \[2\], Caser \[3\], A2SVD \[1\] and SLi_Rec \[1\]. Without loss of generality, this notebook takes the [SLi_Rec model](https://www.microsoft.com/en-us/research/uploads/prod/2019/07/IJCAI19-ready_v1.pdf) as an example.SLi_Rec \[1\] is a deep learning-based model that aims at capturing both long and short-term user preferences for precise recommender systems. To summarize, SLi_Rec has the following key properties:* It adopts the attentive "Asymmetric-SVD" paradigm for long-term modeling;* It takes both time irregularity and semantic irregularity into consideration by modifying the gating logic in LSTM.* It uses an attention mechanism to dynamically fuse the long-term component and the short-term component.In this notebook, we test SLi_Rec on a subset of the public dataset: [Amazon_reviews](http://snap.stanford.edu/data/amazon/productGraph/categoryFiles/reviews_Movies_and_TV_5.json.gz) and [Amazon_metadata](http://snap.stanford.edu/data/amazon/productGraph/categoryFiles/meta_Movies_and_TV.json.gz) 0. Global Settings and Imports
###Code
import sys
sys.path.append("../../")
import os
import logging
import papermill as pm
from tempfile import TemporaryDirectory
import tensorflow as tf
import time
from reco_utils.common.constants import SEED
from reco_utils.recommender.deeprec.deeprec_utils import (
prepare_hparams
)
from reco_utils.dataset.amazon_reviews import download_and_extract, data_preprocessing
from reco_utils.dataset.download_utils import maybe_download
from reco_utils.recommender.deeprec.models.sequential.sli_rec import SLI_RECModel
#### to use the other model, use one of the following lines:
#from reco_utils.recommender.deeprec.models.sequential.asvd import A2SVDModel
#from reco_utils.recommender.deeprec.models.sequential.caser import CaserModel
#from reco_utils.recommender.deeprec.models.sequential.gru4rec import GRU4RecModel
#from reco_utils.recommender.deeprec.models.sequential.nextitnet import NextItNetModel
from reco_utils.recommender.deeprec.io.sequential_iterator import SequentialIterator
#from reco_utils.recommender.deeprec.io.nextitnet_iterator import NextItNetIterator
print("System version: {}".format(sys.version))
print("Tensorflow version: {}".format(tf.__version__))
###Output
System version: 3.6.8 |Anaconda, Inc.| (default, Feb 21 2019, 18:30:04) [MSC v.1916 64 bit (AMD64)]
Tensorflow version: 1.12.0
###Markdown
Parameters
###Code
EPOCHS = 10
BATCH_SIZE = 400
RANDOM_SEED = SEED # Set None for non-deterministic result
yaml_file = '../../reco_utils/recommender/deeprec/config/sli_rec.yaml'
data_path = os.path.join("..", "..", "tests", "resources", "deeprec", "slirec")
###Output
_____no_output_____
###Markdown
1. Input data formatThe input data contains 8 columns, i.e., `<label> <user_id> <item_id> <category_id> <timestamp> <history_item_ids> <history_category_ids> <history_timestamps>`; columns are separated by `"\t"`. item_id and category_id denote the target item and category, which means that for this instance, we want to guess whether user user_id will interact with item_id at timestamp. The `<history_*>` columns record the user behavior list up to `<timestamp>`; elements are separated by commas. `<label>` is a binary value with 1 for positive instances and 0 for negative instances. One example of an instance is: `1 A1QQ86H5M2LVW2 B0059XTU1S Movies 1377561600 B002ZG97WE,B004IK30PA,B000BNX3AU,B0017ANB08,B005LAIHW2 Movies,Movies,Movies,Movies,Movies 1304294400,1304812800,1315785600,1316304000,1356998400` Only the SLi_Rec model is time-aware. For the other models, you can just pad some meaningless timestamp in the data files to fill up the format; the models will ignore these columns.We use Softmax as the loss function. In the training and evaluation stages, we group 1 positive instance with num_ngs negative instances. Pair-wise ranking can be regarded as a special case of Softmax ranking, where num_ngs is set to 1. More specifically, for training and evaluation, you need to organize the data file such that each positive instance is followed by num_ngs negative instances. Our program will take 1+num_ngs lines as a unit for the Softmax calculation. num_ngs is a parameter you need to pass to the `prepare_hparams`, `fit` and `run_eval` functions. `train_num_ngs` in `prepare_hparams` denotes the number of negative instances for training, where a recommended number is 4. `valid_num_ngs` and `num_ngs` in `fit` and `run_eval` denote the number in evaluation. In evaluation, the model calculates metrics among the 1+num_ngs instances. For the `predict` function, since we only need to calculate a score for each individual instance, there is no need for a num_ngs setting. More details and examples will be provided in the following sections.For the training stage, if you don't want to prepare negative instances, you can just provide positive instances and set the parameters `need_sample=True, train_num_ngs=train_num_ngs` for the function `prepare_hparams`; our model will dynamically sample `train_num_ngs` instances as negative samples in each mini-batch. Amazon datasetNow let's start with a public dataset containing product reviews and metadata from Amazon, which is widely used as a benchmark dataset in the recommendation systems field.
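To make the column layout concrete, here is a minimal parsing sketch (not part of the repo; the helper and field names are ours and chosen only for readability) that splits one tab-separated instance into its 8 fields:
```
# A minimal illustration of the 8-column instance format (field names are
# illustrative; only the column order matters).
def parse_instance(line):
    (label, user_id, item_id, category_id, timestamp,
     hist_items, hist_cates, hist_times) = line.rstrip("\n").split("\t")
    return {
        "label": int(label),
        "user_id": user_id,
        "item_id": item_id,
        "category_id": category_id,
        "timestamp": int(timestamp),
        "history_items": hist_items.split(","),
        "history_categories": hist_cates.split(","),
        "history_timestamps": [int(t) for t in hist_times.split(",")],
    }

example = ("1\tA1QQ86H5M2LVW2\tB0059XTU1S\tMovies\t1377561600\t"
           "B002ZG97WE,B004IK30PA\tMovies,Movies\t1304294400,1304812800")
print(parse_instance(example)["history_items"])  # ['B002ZG97WE', 'B004IK30PA']
```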
###Code
# for test
train_file = os.path.join(data_path, r'train_data')
valid_file = os.path.join(data_path, r'valid_data')
test_file = os.path.join(data_path, r'test_data')
user_vocab = os.path.join(data_path, r'user_vocab.pkl')
item_vocab = os.path.join(data_path, r'item_vocab.pkl')
cate_vocab = os.path.join(data_path, r'category_vocab.pkl')
output_file = os.path.join(data_path, r'output.txt')
reviews_name = 'reviews_Movies_and_TV_5.json'
meta_name = 'meta_Movies_and_TV.json'
reviews_file = os.path.join(data_path, reviews_name)
meta_file = os.path.join(data_path, meta_name)
train_num_ngs = 4 # number of negative instances with a positive instance for training
valid_num_ngs = 4 # number of negative instances with a positive instance for validation
test_num_ngs = 9 # number of negative instances with a positive instance for testing
sample_rate = 0.01 # sample a small item set for training and testing here for fast example
input_files = [reviews_file, meta_file, train_file, valid_file, test_file, user_vocab, item_vocab, cate_vocab]
if not os.path.exists(train_file):
download_and_extract(reviews_name, reviews_file)
download_and_extract(meta_name, meta_file)
data_preprocessing(*input_files, sample_rate=sample_rate, valid_num_ngs=valid_num_ngs, test_num_ngs=test_num_ngs)
#### uncomment this for the NextItNet model, because it does not need to unfold the user history
# data_preprocessing(*input_files, sample_rate=sample_rate, valid_num_ngs=valid_num_ngs, test_num_ngs=test_num_ngs, is_history_expanding=False)
###Output
100%|██████████████████████████████████████████████████████████████████████████████| 692k/692k [02:17<00:00, 5.02kKB/s]
100%|████████████████████████████████████████████████████████████████████████████| 97.5k/97.5k [00:24<00:00, 4.00kKB/s]
###Markdown
1.1 Prepare hyper-parametersprepare_hparams() will create a full set of hyper-parameters for model training, such as learning rate, feature number, and dropout ratio. We can put those parameters in a yaml file (a complete list of parameters can be found under our config folder), or pass parameters as the function's arguments (which will overwrite yaml settings).Parameter hints: `need_sample` controls whether to perform dynamic negative sampling in the mini-batch. `train_num_ngs` indicates how many negative instances follow each positive instance. Examples: (1) `need_sample=True and train_num_ngs=4`: There are only positive instances in your training file. Our model will dynamically sample 4 negative instances for each positive instance in the mini-batch. Note that if need_sample is set to True, train_num_ngs should be greater than zero. (2) `need_sample=False and train_num_ngs=4`: In your training file, each positive line is followed by 4 negative lines. Note that if need_sample is set to False, you must provide a training file with negative instances, and train_num_ngs should match the number of negative instances in your training file.
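To illustrate the `need_sample=False` layout, the following sketch (an illustrative helper of ours, not a repo utility) walks a training file in units of one positive line plus `train_num_ngs` negative lines, which is how the Softmax loss consumes the file:
```
# Illustrative sanity check: group a training file into Softmax units of
# size 1 + train_num_ngs (one positive line followed by negative lines).
def iter_softmax_units(path, train_num_ngs=4):
    unit_size = 1 + train_num_ngs
    with open(path) as f:
        lines = [ln.rstrip("\n") for ln in f if ln.strip()]
    assert len(lines) % unit_size == 0, "line count must be a multiple of 1 + train_num_ngs"
    for i in range(0, len(lines), unit_size):
        unit = lines[i:i + unit_size]
        labels = [ln.split("\t")[0] for ln in unit]
        assert labels[0] == "1" and all(lb == "0" for lb in labels[1:]), \
            "each unit should be one positive line followed by negative lines"
        yield unit
```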
###Code
hparams = prepare_hparams(yaml_file,
embed_l2=0.,
layer_l2=0.,
learning_rate=0.001,
epochs=EPOCHS,
batch_size=BATCH_SIZE,
show_step=20,
MODEL_DIR=os.path.join(data_path, "model/"),
SUMMARIES_DIR=os.path.join(data_path, "summary/"),
user_vocab=user_vocab,
item_vocab=item_vocab,
cate_vocab=cate_vocab,
need_sample=True,
train_num_ngs=train_num_ngs, # provides the number of negative instances for each positive instance for loss computation.
)
###Output
_____no_output_____
###Markdown
1.2 Create data loaderDesignate a data iterator for the model. All our sequential models use SequentialIterator; the data format is introduced above. Validation and testing data are files produced by offline negative sampling, with the numbers of negatives given by `valid_num_ngs` and `test_num_ngs`.
###Code
input_creator = SequentialIterator
#### uncomment this for the NextItNet model, because it needs a special data iterator for training
#input_creator = NextItNetIterator
###Output
_____no_output_____
###Markdown
2. Create modelWhen both hyper-parameters and data iterator are ready, we can create a model:
###Code
model = SLI_RECModel(hparams, input_creator, seed=RANDOM_SEED)
## of course you can create models like ASVDModel, CaserModel and GRU4RecModel in the same manner
## sometimes we don't want to train a model from scratch
## then we can load a pre-trained model like this:
#model.load_model(r'your_model_path')
###Output
_____no_output_____
###Markdown
Now let's see what the model's performance is at this point (before any training):
###Code
print(model.run_eval(test_file, num_ngs=test_num_ngs)) # test_num_ngs is the number of negative lines after each positive line in your test_file
###Output
{'auc': 0.5114, 'logloss': 0.6931, 'mean_mrr': 0.29, 'ndcg2': 0.4517, 'ndcg4': 0.4517, 'ndcg6': 0.4517, 'ndcg8': 0.4517, 'ndcg10': 0.4517, 'group_auc': 0.512}
###Markdown
An AUC of 0.5 corresponds to random guessing, and we can see that before training the model indeed behaves like a random guesser. 2.1 Train modelNext we want to train the model on a training set and check its performance on a validation dataset. Training the model is as simple as a function call:
###Code
start_time = time.time()
model = model.fit(train_file, valid_file, valid_num_ngs=valid_num_ngs)
# valid_num_ngs is the number of negative lines after each positive line in your valid_file
# we will evaluate the performance of model on valid_file every epoch
end_time = time.time()
print('Time cost for training is {0:.2f} mins'.format((end_time-start_time)/60.0))
###Output
step 20 , total_loss: 1.6097, data_loss: 1.6097
step 40 , total_loss: 1.6087, data_loss: 1.6087
eval valid at epoch 1: auc:0.4895,logloss:0.693,mean_mrr:0.4475,ndcg2:0.5827,ndcg4:0.5827,ndcg6:0.5827,ndcg8:0.5827,ndcg10:0.5827,group_auc:0.4907
step 20 , total_loss: 1.6069, data_loss: 1.6069
step 40 , total_loss: 1.4812, data_loss: 1.4812
eval valid at epoch 2: auc:0.5625,logloss:0.6931,mean_mrr:0.4916,ndcg2:0.6164,ndcg4:0.6164,ndcg6:0.6164,ndcg8:0.6164,ndcg10:0.6164,group_auc:0.5422
step 20 , total_loss: 1.4089, data_loss: 1.4089
step 40 , total_loss: 1.3968, data_loss: 1.3968
eval valid at epoch 3: auc:0.684,logloss:0.6957,mean_mrr:0.5984,ndcg2:0.6985,ndcg4:0.6985,ndcg6:0.6985,ndcg8:0.6985,ndcg10:0.6985,group_auc:0.6787
step 20 , total_loss: 1.2920, data_loss: 1.2920
step 40 , total_loss: 1.3227, data_loss: 1.3227
eval valid at epoch 4: auc:0.6965,logloss:0.6827,mean_mrr:0.6145,ndcg2:0.7107,ndcg4:0.7107,ndcg6:0.7107,ndcg8:0.7107,ndcg10:0.7107,group_auc:0.6914
step 20 , total_loss: 1.3205, data_loss: 1.3205
step 40 , total_loss: 1.2936, data_loss: 1.2936
eval valid at epoch 5: auc:0.6986,logloss:0.6657,mean_mrr:0.6192,ndcg2:0.7142,ndcg4:0.7142,ndcg6:0.7142,ndcg8:0.7142,ndcg10:0.7142,group_auc:0.6965
step 20 , total_loss: 1.2575, data_loss: 1.2575
step 40 , total_loss: 1.2785, data_loss: 1.2785
eval valid at epoch 6: auc:0.7055,logloss:0.6147,mean_mrr:0.6197,ndcg2:0.7146,ndcg4:0.7146,ndcg6:0.7146,ndcg8:0.7146,ndcg10:0.7146,group_auc:0.699
step 20 , total_loss: 1.2735, data_loss: 1.2735
step 40 , total_loss: 1.2838, data_loss: 1.2838
eval valid at epoch 7: auc:0.7205,logloss:0.6434,mean_mrr:0.6345,ndcg2:0.7257,ndcg4:0.7257,ndcg6:0.7257,ndcg8:0.7257,ndcg10:0.7257,group_auc:0.7092
step 20 , total_loss: 1.1849, data_loss: 1.1849
step 40 , total_loss: 1.1954, data_loss: 1.1954
eval valid at epoch 8: auc:0.7234,logloss:0.6514,mean_mrr:0.6413,ndcg2:0.7308,ndcg4:0.7308,ndcg6:0.7308,ndcg8:0.7308,ndcg10:0.7308,group_auc:0.715
step 20 , total_loss: 1.2023, data_loss: 1.2023
step 40 , total_loss: 1.1818, data_loss: 1.1818
eval valid at epoch 9: auc:0.7285,logloss:0.6794,mean_mrr:0.639,ndcg2:0.7292,ndcg4:0.7292,ndcg6:0.7292,ndcg8:0.7292,ndcg10:0.7292,group_auc:0.7152
step 20 , total_loss: 1.1680, data_loss: 1.1680
step 40 , total_loss: 1.1911, data_loss: 1.1911
eval valid at epoch 10: auc:0.7317,logloss:0.6242,mean_mrr:0.6454,ndcg2:0.7339,ndcg4:0.7339,ndcg6:0.7339,ndcg8:0.7339,ndcg10:0.7339,group_auc:0.7181
[(1, {'auc': 0.4895, 'logloss': 0.693, 'mean_mrr': 0.4475, 'ndcg2': 0.5827, 'ndcg4': 0.5827, 'ndcg6': 0.5827, 'ndcg8': 0.5827, 'ndcg10': 0.5827, 'group_auc': 0.4907}), (2, {'auc': 0.5625, 'logloss': 0.6931, 'mean_mrr': 0.4916, 'ndcg2': 0.6164, 'ndcg4': 0.6164, 'ndcg6': 0.6164, 'ndcg8': 0.6164, 'ndcg10': 0.6164, 'group_auc': 0.5422}), (3, {'auc': 0.684, 'logloss': 0.6957, 'mean_mrr': 0.5984, 'ndcg2': 0.6985, 'ndcg4': 0.6985, 'ndcg6': 0.6985, 'ndcg8': 0.6985, 'ndcg10': 0.6985, 'group_auc': 0.6787}), (4, {'auc': 0.6965, 'logloss': 0.6827, 'mean_mrr': 0.6145, 'ndcg2': 0.7107, 'ndcg4': 0.7107, 'ndcg6': 0.7107, 'ndcg8': 0.7107, 'ndcg10': 0.7107, 'group_auc': 0.6914}), (5, {'auc': 0.6986, 'logloss': 0.6657, 'mean_mrr': 0.6192, 'ndcg2': 0.7142, 'ndcg4': 0.7142, 'ndcg6': 0.7142, 'ndcg8': 0.7142, 'ndcg10': 0.7142, 'group_auc': 0.6965}), (6, {'auc': 0.7055, 'logloss': 0.6147, 'mean_mrr': 0.6197, 'ndcg2': 0.7146, 'ndcg4': 0.7146, 'ndcg6': 0.7146, 'ndcg8': 0.7146, 'ndcg10': 0.7146, 'group_auc': 0.699}), (7, {'auc': 0.7205, 'logloss': 0.6434, 'mean_mrr': 0.6345, 'ndcg2': 0.7257, 'ndcg4': 0.7257, 'ndcg6': 0.7257, 'ndcg8': 0.7257, 'ndcg10': 0.7257, 'group_auc': 0.7092}), (8, {'auc': 0.7234, 'logloss': 0.6514, 'mean_mrr': 0.6413, 'ndcg2': 0.7308, 'ndcg4': 0.7308, 'ndcg6': 0.7308, 'ndcg8': 0.7308, 'ndcg10': 0.7308, 'group_auc': 0.715}), (9, {'auc': 0.7285, 'logloss': 0.6794, 'mean_mrr': 0.639, 'ndcg2': 0.7292, 'ndcg4': 0.7292, 'ndcg6': 0.7292, 'ndcg8': 0.7292, 'ndcg10': 0.7292, 'group_auc': 0.7152}), (10, {'auc': 0.7317, 'logloss': 0.6242, 'mean_mrr': 0.6454, 'ndcg2': 0.7339, 'ndcg4': 0.7339, 'ndcg6': 0.7339, 'ndcg8': 0.7339, 'ndcg10': 0.7339, 'group_auc': 0.7181})]
best epoch: 10
Time cost for training is 9.53 mins
###Markdown
2.2 Evaluate modelAgain, let's see what the model's performance is now (after training):
###Code
res_syn = model.run_eval(test_file, num_ngs=test_num_ngs)
print(res_syn)
pm.record("res_syn", res_syn)
###Output
{'auc': 0.7111, 'logloss': 0.6447, 'mean_mrr': 0.4673, 'ndcg2': 0.5934, 'ndcg4': 0.5934, 'ndcg6': 0.5934, 'ndcg8': 0.5934, 'ndcg10': 0.5934, 'group_auc': 0.698}
###Markdown
If we want to get the full prediction scores rather than evaluation metrics, we can do this:
###Code
model = model.predict(test_file, output_file)
# The data was downloaded in tmpdir folder. You can delete them manually if you do not need them any more.
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. Sequential Recommender Quick Start Example: SLi_Rec : Adaptive User Modeling with Long and Short-Term Preferences for Personalized RecommendationUnlike a general recommender such as Matrix Factorization or xDeepFM (in the repo), which doesn't consider the order of the user's activities, sequential recommender systems take the sequence of the user behaviors as context, and the goal is to predict the items that the user will interact with in a short time (in an extreme case, the item that the user will interact with next).This notebook aims to give you a quick example of how to train a sequential model based on a public Amazon dataset. Currently, we support NextItNet \[4\], GRU4Rec \[2\], Caser \[3\], A2SVD \[1\] and SLi_Rec \[1\]. Without loss of generality, this notebook takes the [SLi_Rec model](https://www.microsoft.com/en-us/research/uploads/prod/2019/07/IJCAI19-ready_v1.pdf) as an example.SLi_Rec \[1\] is a deep learning-based model that aims at capturing both long- and short-term user preferences for precise recommender systems. To summarize, SLi_Rec has the following key properties:* It adopts the attentive "Asymmetric-SVD" paradigm for long-term modeling;* It takes both time irregularity and semantic irregularity into consideration by modifying the gating logic in LSTM.* It uses an attention mechanism to dynamically fuse the long-term component and the short-term component.In this notebook, we test SLi_Rec on a subset of the public dataset: [Amazon_reviews](http://snap.stanford.edu/data/amazon/productGraph/categoryFiles/reviews_Movies_and_TV_5.json.gz) and [Amazon_metadata](http://snap.stanford.edu/data/amazon/productGraph/categoryFiles/meta_Movies_and_TV.json.gz)This notebook is well tested under TF 1.15.0. 0. Global Settings and Imports
###Code
import sys
sys.path.append("../../")
import os
import logging
import papermill as pm
import scrapbook as sb
from tempfile import TemporaryDirectory
import tensorflow as tf
import time
import numpy as np
from reco_utils.common.constants import SEED
from reco_utils.recommender.deeprec.deeprec_utils import (
prepare_hparams
)
from reco_utils.dataset.amazon_reviews import download_and_extract, data_preprocessing
from reco_utils.dataset.download_utils import maybe_download
from reco_utils.recommender.deeprec.models.sequential.sli_rec import SLI_RECModel as SeqModel
#### to use the other model, use one of the following lines:
# from reco_utils.recommender.deeprec.models.sequential.asvd import A2SVDModel as SeqModel
# from reco_utils.recommender.deeprec.models.sequential.caser import CaserModel as SeqModel
# from reco_utils.recommender.deeprec.models.sequential.gru4rec import GRU4RecModel as SeqModel
#from reco_utils.recommender.deeprec.models.sequential.nextitnet import NextItNetModel
from reco_utils.recommender.deeprec.io.sequential_iterator import SequentialIterator
#from reco_utils.recommender.deeprec.io.nextitnet_iterator import NextItNetIterator
## ATTENTION: change to the corresponding config file, e.g., caser.yaml for CaserModel
yaml_file = '../../reco_utils/recommender/deeprec/config/sli_rec.yaml'
print("System version: {}".format(sys.version))
print("Tensorflow version: {}".format(tf.__version__))
###Output
System version: 3.6.8 |Anaconda, Inc.| (default, Dec 30 2018, 01:22:34)
[GCC 7.3.0]
Tensorflow version: 1.15.0
###Markdown
Parameters
###Code
EPOCHS = 10
BATCH_SIZE = 400
RANDOM_SEED = SEED # Set None for non-deterministic result
data_path = os.path.join("..", "..", "tests", "resources", "deeprec", "slirec")
###Output
_____no_output_____
###Markdown
1. Input data formatThe input data contains 8 columns, i.e., `<label> <user_id> <item_id> <category_id> <timestamp> <history_item_ids> <history_category_ids> <history_timestamps>`; columns are separated by `"\t"`. item_id and category_id denote the target item and category, which means that for this instance, we want to guess whether user user_id will interact with item_id at timestamp. The `<history_*>` columns record the user behavior list up to `<timestamp>`; elements are separated by commas. `<label>` is a binary value with 1 for positive instances and 0 for negative instances. One example of an instance is: `1 A1QQ86H5M2LVW2 B0059XTU1S Movies 1377561600 B002ZG97WE,B004IK30PA,B000BNX3AU,B0017ANB08,B005LAIHW2 Movies,Movies,Movies,Movies,Movies 1304294400,1304812800,1315785600,1316304000,1356998400` In the data preprocessing stage, we have a script to generate some ID mapping dictionaries, so user_id, item_id and category_id will be mapped to integer indices starting from 1. You need to tell the input iterator where the ID mapping files are. (For example, in the next section, we have some mapping files like user_vocab, item_vocab, and cate_vocab). The data preprocessing script is at https://github.com/microsoft/recommenders/blob/master/reco_utils/dataset/amazon_reviews.py; you need to call the `_create_vocab(train_file, user_vocab, item_vocab, cate_vocab)` function. Note that the ID vocabulary is only created from the train_file, so new IDs in valid_file or test_file will be regarded as unknown IDs and assigned a default index of 0.Only the SLi_Rec model is time-aware. For the other models, you can just pad some meaningless timestamp in the data files to fill up the format; the models will ignore these columns.We use Softmax as the loss function. In the training and evaluation stages, we group 1 positive instance with num_ngs negative instances. Pair-wise ranking can be regarded as a special case of Softmax ranking, where num_ngs is set to 1. More specifically, for training and evaluation, you need to organize the data file such that each positive instance is followed by num_ngs negative instances. Our program will take 1+num_ngs lines as a unit for the Softmax calculation. num_ngs is a parameter you need to pass to the `prepare_hparams`, `fit` and `run_eval` functions. `train_num_ngs` in `prepare_hparams` denotes the number of negative instances for training, where a recommended number is 4. `valid_num_ngs` and `num_ngs` in `fit` and `run_eval` denote the number in evaluation. In evaluation, the model calculates metrics among the 1+num_ngs instances. For the `predict` function, since we only need to calculate a score for each individual instance, there is no need for a num_ngs setting. More details and examples will be provided in the following sections.For the training stage, if you don't want to prepare negative instances, you can just provide positive instances and set the parameters `need_sample=True, train_num_ngs=train_num_ngs` for the function `prepare_hparams`; our model will dynamically sample `train_num_ngs` instances as negative samples in each mini-batch. Amazon datasetNow let's start with a public dataset containing product reviews and metadata from Amazon, which is widely used as a benchmark dataset in the recommendation systems field.
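If you are preparing your own dataset, the vocab files can be generated with a call along the following lines. This is only a sketch: it assumes `_create_vocab` can be imported from the `amazon_reviews` module linked above, and all file names are placeholders:
```
# Hypothetical usage sketch: build the ID-mapping vocab files for a custom dataset.
from reco_utils.dataset.amazon_reviews import _create_vocab  # assumed import path

my_train_file = "my_train_data"          # your own training file in the 8-column format
my_user_vocab = "my_user_vocab.pkl"      # output pickle paths (illustrative names)
my_item_vocab = "my_item_vocab.pkl"
my_cate_vocab = "my_category_vocab.pkl"

# Vocabularies are built from the training file only; IDs unseen there are
# mapped to the default index 0 in the validation and test files.
_create_vocab(my_train_file, my_user_vocab, my_item_vocab, my_cate_vocab)
```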
###Code
# for test
train_file = os.path.join(data_path, r'train_data')
valid_file = os.path.join(data_path, r'valid_data')
test_file = os.path.join(data_path, r'test_data')
user_vocab = os.path.join(data_path, r'user_vocab.pkl')
item_vocab = os.path.join(data_path, r'item_vocab.pkl')
cate_vocab = os.path.join(data_path, r'category_vocab.pkl')
output_file = os.path.join(data_path, r'output.txt')
reviews_name = 'reviews_Movies_and_TV_5.json'
meta_name = 'meta_Movies_and_TV.json'
reviews_file = os.path.join(data_path, reviews_name)
meta_file = os.path.join(data_path, meta_name)
train_num_ngs = 4 # number of negative instances with a positive instance for training
valid_num_ngs = 4 # number of negative instances with a positive instance for validation
test_num_ngs = 9 # number of negative instances with a positive instance for testing
sample_rate = 0.01 # sample a small item set for training and testing here for fast example
input_files = [reviews_file, meta_file, train_file, valid_file, test_file, user_vocab, item_vocab, cate_vocab]
if not os.path.exists(train_file):
download_and_extract(reviews_name, reviews_file)
download_and_extract(meta_name, meta_file)
data_preprocessing(*input_files, sample_rate=sample_rate, valid_num_ngs=valid_num_ngs, test_num_ngs=test_num_ngs)
#### uncomment this for the NextItNet model, because it does not need to unfold the user history
# data_preprocessing(*input_files, sample_rate=sample_rate, valid_num_ngs=valid_num_ngs, test_num_ngs=test_num_ngs, is_history_expanding=False)
###Output
_____no_output_____
###Markdown
1.1 Prepare hyper-parametersprepare_hparams() will create a full set of hyper-parameters for model training, such as learning rate, feature number, and dropout ratio. We can put those parameters in a yaml file (a complete list of parameters can be found under our config folder), or pass parameters as the function's arguments (which will overwrite yaml settings).Parameter hints: `need_sample` controls whether to perform dynamic negative sampling in the mini-batch. `train_num_ngs` indicates how many negative instances follow each positive instance. Examples: (1) `need_sample=True and train_num_ngs=4`: There are only positive instances in your training file. Our model will dynamically sample 4 negative instances for each positive instance in the mini-batch. Note that if need_sample is set to True, train_num_ngs should be greater than zero. (2) `need_sample=False and train_num_ngs=4`: In your training file, each positive line is followed by 4 negative lines. Note that if need_sample is set to False, you must provide a training file with negative instances, and train_num_ngs should match the number of negative instances in your training file.
###Code
### NOTE:
### remember to use `_create_vocab(train_file, user_vocab, item_vocab, cate_vocab)` to generate the user_vocab, item_vocab and cate_vocab files, if you are using your own dataset rather than using our demo Amazon dataset.
hparams = prepare_hparams(yaml_file,
embed_l2=0.,
layer_l2=0.,
learning_rate=0.001, # set to 0.01 if batch normalization is disable
epochs=EPOCHS,
batch_size=BATCH_SIZE,
show_step=20,
MODEL_DIR=os.path.join(data_path, "model/"),
SUMMARIES_DIR=os.path.join(data_path, "summary/"),
user_vocab=user_vocab,
item_vocab=item_vocab,
cate_vocab=cate_vocab,
need_sample=True,
train_num_ngs=train_num_ngs, # provides the number of negative instances for each positive instance for loss computation.
)
###Output
WARNING:tensorflow:
The TensorFlow contrib module will not be included in TensorFlow 2.0.
For more information, please see:
* https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md
* https://github.com/tensorflow/addons
* https://github.com/tensorflow/io (for I/O related ops)
If you depend on functionality not listed there, please file an issue.
###Markdown
1.2 Create data loaderDesignate a data iterator for the model. All our sequential models use SequentialIterator; the data format is introduced above. Validation and testing data are files produced by offline negative sampling, with the numbers of negatives given by `valid_num_ngs` and `test_num_ngs`.
###Code
input_creator = SequentialIterator
#### uncomment this for the NextItNet model, because it needs a special data iterator for training
#input_creator = NextItNetIterator
###Output
_____no_output_____
###Markdown
2. Create modelWhen both hyper-parameters and data iterator are ready, we can create a model:
###Code
model = SeqModel(hparams, input_creator, seed=RANDOM_SEED)
## sometimes we don't want to train a model from scratch
## then we can load a pre-trained model like this:
#model.load_model(r'your_model_path')
###Output
WARNING:tensorflow:From ../../reco_utils/recommender/deeprec/models/sequential/sequential_base_model.py:43: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead.
WARNING:tensorflow:From ../../reco_utils/recommender/deeprec/models/sequential/sequential_base_model.py:64: The name tf.variable_scope is deprecated. Please use tf.compat.v1.variable_scope instead.
WARNING:tensorflow:From ../../reco_utils/recommender/deeprec/models/sequential/sequential_base_model.py:253: The name tf.get_variable is deprecated. Please use tf.compat.v1.get_variable instead.
WARNING:tensorflow:From ../../reco_utils/recommender/deeprec/models/sequential/sequential_base_model.py:275: The name tf.summary.histogram is deprecated. Please use tf.compat.v1.summary.histogram instead.
WARNING:tensorflow:From ../../reco_utils/recommender/deeprec/models/sequential/sli_rec.py:64: dynamic_rnn (from tensorflow.python.ops.rnn) is deprecated and will be removed in a future version.
Instructions for updating:
Please use `keras.layers.RNN(cell)`, which is equivalent to this API
WARNING:tensorflow:From ../../reco_utils/recommender/deeprec/models/sequential/rnn_cell_implement.py:621: calling Constant.__init__ (from tensorflow.python.ops.init_ops) with dtype is deprecated and will be removed in a future version.
Instructions for updating:
Call initializer instance with the dtype argument instead of passing it to the constructor
WARNING:tensorflow:From /home/v-xdeng/.conda/envs/reco_tf15/lib/python3.6/site-packages/tensorflow_core/python/ops/rnn.py:244: where (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.where in 2.0, which has the same broadcast rule as np.where
WARNING:tensorflow:From ../../reco_utils/recommender/deeprec/models/base_model.py:677: batch_normalization (from tensorflow.python.layers.normalization) is deprecated and will be removed in a future version.
Instructions for updating:
Use keras.layers.BatchNormalization instead. In particular, `tf.control_dependencies(tf.GraphKeys.UPDATE_OPS)` should not be used (consult the `tf.keras.layers.batch_normalization` documentation).
WARNING:tensorflow:From /home/v-xdeng/.conda/envs/reco_tf15/lib/python3.6/site-packages/tensorflow_core/python/layers/normalization.py:327: Layer.apply (from tensorflow.python.keras.engine.base_layer) is deprecated and will be removed in a future version.
Instructions for updating:
Please use `layer.__call__` method instead.
WARNING:tensorflow:From ../../reco_utils/recommender/deeprec/models/base_model.py:340: calling dropout (from tensorflow.python.ops.nn_ops) with keep_prob is deprecated and will be removed in a future version.
Instructions for updating:
Please use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob`.
WARNING:tensorflow:From ../../reco_utils/recommender/deeprec/models/sequential/sequential_base_model.py:332: The name tf.trainable_variables is deprecated. Please use tf.compat.v1.trainable_variables instead.
###Markdown
Now let's see what the model's performance is at this point (before any training):
###Code
print(model.run_eval(test_file, num_ngs=test_num_ngs)) # test_num_ngs is the number of negative lines after each positive line in your test_file
###Output
{'auc': 0.5131, 'logloss': 0.6931, 'mean_mrr': 0.289, 'ndcg@2': 0.1609, 'ndcg@4': 0.2475, 'ndcg@6': 0.3219, 'group_auc': 0.5134}
###Markdown
An AUC of 0.5 corresponds to random guessing, and we can see that before training the model indeed behaves like a random guesser. 2.1 Train modelNext we want to train the model on a training set and check its performance on a validation dataset. Training the model is as simple as a function call:
###Code
start_time = time.time()
model = model.fit(train_file, valid_file, valid_num_ngs=valid_num_ngs)
# valid_num_ngs is the number of negative lines after each positive line in your valid_file
# we will evaluate the performance of model on valid_file every epoch
end_time = time.time()
print('Time cost for training is {0:.2f} mins'.format((end_time-start_time)/60.0))
###Output
WARNING:tensorflow:From ../../reco_utils/recommender/deeprec/models/sequential/sequential_base_model.py:105: The name tf.summary.FileWriter is deprecated. Please use tf.compat.v1.summary.FileWriter instead.
step 20 , total_loss: 1.6105, data_loss: 1.6105
eval valid at epoch 1: auc:0.4977,logloss:0.6933,mean_mrr:0.4526,ndcg@2:0.3198,ndcg@4:0.51,ndcg@6:0.5866,group_auc:0.4972
step 20 , total_loss: 1.5950, data_loss: 1.5950
eval valid at epoch 2: auc:0.5648,logloss:0.7007,mean_mrr:0.4957,ndcg@2:0.3825,ndcg@4:0.553,ndcg@6:0.6197,group_auc:0.5484
step 20 , total_loss: 1.4578, data_loss: 1.4578
eval valid at epoch 3: auc:0.6493,logloss:0.816,mean_mrr:0.5831,ndcg@2:0.507,ndcg@4:0.6476,ndcg@6:0.6866,group_auc:0.6532
step 20 , total_loss: 1.2790, data_loss: 1.2790
eval valid at epoch 4: auc:0.7018,logloss:0.7818,mean_mrr:0.6176,ndcg@2:0.5572,ndcg@4:0.6838,ndcg@6:0.7131,group_auc:0.6969
step 20 , total_loss: 1.3249, data_loss: 1.3249
eval valid at epoch 5: auc:0.7208,logloss:0.6877,mean_mrr:0.6466,ndcg@2:0.5921,ndcg@4:0.7101,ndcg@6:0.7349,group_auc:0.722
step 20 , total_loss: 1.2396, data_loss: 1.2396
eval valid at epoch 6: auc:0.7336,logloss:0.6063,mean_mrr:0.6554,ndcg@2:0.6022,ndcg@4:0.7173,ndcg@6:0.7416,group_auc:0.7298
step 20 , total_loss: 1.1432, data_loss: 1.1432
eval valid at epoch 7: auc:0.7408,logloss:0.611,mean_mrr:0.6659,ndcg@2:0.614,ndcg@4:0.7267,ndcg@6:0.7494,group_auc:0.7383
step 20 , total_loss: 1.1373, data_loss: 1.1373
eval valid at epoch 8: auc:0.7454,logloss:0.6499,mean_mrr:0.6721,ndcg@2:0.6216,ndcg@4:0.7334,ndcg@6:0.7541,group_auc:0.7445
step 20 , total_loss: 1.1958, data_loss: 1.1958
eval valid at epoch 9: auc:0.7536,logloss:0.5951,mean_mrr:0.6715,ndcg@2:0.6222,ndcg@4:0.7323,ndcg@6:0.7537,group_auc:0.7454
step 20 , total_loss: 1.1403, data_loss: 1.1403
eval valid at epoch 10: auc:0.7553,logloss:0.5822,mean_mrr:0.6753,ndcg@2:0.6254,ndcg@4:0.7357,ndcg@6:0.7566,group_auc:0.7486
WARNING:tensorflow:From /home/v-xdeng/.conda/envs/reco_tf15/lib/python3.6/site-packages/tensorflow_core/python/training/saver.py:963: remove_checkpoint (from tensorflow.python.training.checkpoint_management) is deprecated and will be removed in a future version.
Instructions for updating:
Use standard file APIs to delete files with this prefix.
[(1, {'auc': 0.4977, 'logloss': 0.6933, 'mean_mrr': 0.4526, 'ndcg@2': 0.3198, 'ndcg@4': 0.51, 'ndcg@6': 0.5866, 'group_auc': 0.4972}), (2, {'auc': 0.5648, 'logloss': 0.7007, 'mean_mrr': 0.4957, 'ndcg@2': 0.3825, 'ndcg@4': 0.553, 'ndcg@6': 0.6197, 'group_auc': 0.5484}), (3, {'auc': 0.6493, 'logloss': 0.816, 'mean_mrr': 0.5831, 'ndcg@2': 0.507, 'ndcg@4': 0.6476, 'ndcg@6': 0.6866, 'group_auc': 0.6532}), (4, {'auc': 0.7018, 'logloss': 0.7818, 'mean_mrr': 0.6176, 'ndcg@2': 0.5572, 'ndcg@4': 0.6838, 'ndcg@6': 0.7131, 'group_auc': 0.6969}), (5, {'auc': 0.7208, 'logloss': 0.6877, 'mean_mrr': 0.6466, 'ndcg@2': 0.5921, 'ndcg@4': 0.7101, 'ndcg@6': 0.7349, 'group_auc': 0.722}), (6, {'auc': 0.7336, 'logloss': 0.6063, 'mean_mrr': 0.6554, 'ndcg@2': 0.6022, 'ndcg@4': 0.7173, 'ndcg@6': 0.7416, 'group_auc': 0.7298}), (7, {'auc': 0.7408, 'logloss': 0.611, 'mean_mrr': 0.6659, 'ndcg@2': 0.614, 'ndcg@4': 0.7267, 'ndcg@6': 0.7494, 'group_auc': 0.7383}), (8, {'auc': 0.7454, 'logloss': 0.6499, 'mean_mrr': 0.6721, 'ndcg@2': 0.6216, 'ndcg@4': 0.7334, 'ndcg@6': 0.7541, 'group_auc': 0.7445}), (9, {'auc': 0.7536, 'logloss': 0.5951, 'mean_mrr': 0.6715, 'ndcg@2': 0.6222, 'ndcg@4': 0.7323, 'ndcg@6': 0.7537, 'group_auc': 0.7454}), (10, {'auc': 0.7553, 'logloss': 0.5822, 'mean_mrr': 0.6753, 'ndcg@2': 0.6254, 'ndcg@4': 0.7357, 'ndcg@6': 0.7566, 'group_auc': 0.7486})]
best epoch: 10
Time cost for training is 2.63 mins
###Markdown
2.2 Evaluate modelAgain, let's see what the model's performance is now (after training):
###Code
res_syn = model.run_eval(test_file, num_ngs=test_num_ngs)
print(res_syn)
sb.glue("res_syn", res_syn)
###Output
{'auc': 0.7249, 'logloss': 0.5924, 'mean_mrr': 0.4946, 'ndcg@2': 0.4075, 'ndcg@4': 0.5107, 'ndcg@6': 0.5607, 'group_auc': 0.7133}
###Markdown
If we want to get the full prediction scores rather than evaluation metrics, we can do this:
###Code
model = model.predict(test_file, output_file)
# The data was downloaded in tmpdir folder. You can delete them manually if you do not need them any more.
###Output
_____no_output_____
###Markdown
2.3 Running models with a large datasetHere are the performances of popular sequential models on the whole Amazon dataset, which has 1,697,533 positive instances.Settings for reproducing the results:`learning_rate=0.001, dropout=0.3, item_embedding_dim=32, cate_embedding_dim=8, l2_norm=0, batch_size=400, train_num_ngs=4, valid_num_ngs=4, test_num_ngs=49`We compare the running time with CPU only and with GPU on the larger dataset. It appears that the GPU can significantly accelerate training. Hardware specification for running the large dataset: GPU: Tesla P100-PCIE-16GBCPU: 6 cores Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz | Models | AUC | g-AUC | NDCG@2 | NDCG@10 | seconds per epoch on GPU | seconds per epoch on CPU| config || :------| :------: | :------: | :------: | :------: | :------: | :------: | :------ || A2SVD | 0.8251 | 0.8178 | 0.2922 | 0.4264 | 249.5 | 440.0 | N/A || GRU4Rec | 0.8411 | 0.8332 | 0.3213 | 0.4547 | 439.0 | 4285.0 | max_seq_length=50, hidden_size=40|| Caser | 0.8244 | 0.8171 | 0.283 | 0.4194 | 314.3 | 5369.9 | T=1, n_v=128, n_h=128, L=3, min_seq_length=5|| SLi_Rec | 0.8631 | 0.8519 | 0.3491 | 0.4842 | 549.6 | 5014.0 | attention_size=40, max_seq_length=50, hidden_size=40|| NextItNet* | 0.6793 | 0.6769 | 0.0602 | 0.1733 | 112.0 | 214.5 | min_seq_length=3, dilations=\[1,2,4,1,2,4\], kernel_size=3 | Note 1: The five models are grid-searched with a coarse granularity and the results are for reference only. Note 2: The NextItNet model requires a dataset with a strong sequence property, but the Amazon dataset used in this notebook does not meet that requirement, so the NextItNet model may not perform well. If you wish to use other datasets with a strong sequence property, NextItNet is recommended. Note 3: The time cost of the NextItNet model is significantly shorter than that of the other models because it doesn't need history expanding of the training data. 3. Online servingIn this section, we provide a simple example to illustrate how we can use the trained model to serve production demand.Suppose we are in a new session. First, let's load a previously trained model:
###Code
model_best_trained = SeqModel(hparams, input_creator, seed=RANDOM_SEED)
path_best_trained = os.path.join(hparams.MODEL_DIR, "best_model")
print('loading saved model in {0}'.format(path_best_trained))
model_best_trained.load_model(path_best_trained)
###Output
loading saved model in ../../tests/resources/deeprec/slirec/model/best_model
INFO:tensorflow:Restoring parameters from ../../tests/resources/deeprec/slirec/model/best_model
###Markdown
Let's see if we loaded the model correctly. The testing metrics should be close to the numbers we saw in the training stage.
###Code
model_best_trained.run_eval(test_file, num_ngs=test_num_ngs)
###Output
_____no_output_____
###Markdown
And we make predictions using this model. In the next step, we will make predictions using a serving model. Then we can check if the two result files are consistent.
###Code
model_best_trained.predict(test_file, output_file)
###Output
_____no_output_____
###Markdown
Exciting! Now let's start our quick journey of online serving. For efficient and flexible serving, we usually keep only the necessary computation nodes and freeze the TF model into a single pb file, so that we can easily compute scores with this unified pb file in either Python or Java:
###Code
with model_best_trained.sess as sess:
graph_def = model_best_trained.graph.as_graph_def()
output_graph_def = tf.graph_util.convert_variables_to_constants(
sess,
graph_def,
["pred"]
)
outfilepath = os.path.join(hparams.MODEL_DIR, "serving_model.pb")
with tf.gfile.GFile(outfilepath, 'wb') as f:
f.write(output_graph_def.SerializeToString())
###Output
WARNING:tensorflow:From <ipython-input-16-dd62857b20ba>:6: convert_variables_to_constants (from tensorflow.python.framework.graph_util_impl) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.compat.v1.graph_util.convert_variables_to_constants`
WARNING:tensorflow:From /home/v-xdeng/.conda/envs/reco_tf15/lib/python3.6/site-packages/tensorflow_core/python/framework/graph_util_impl.py:277: extract_sub_graph (from tensorflow.python.framework.graph_util_impl) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.compat.v1.graph_util.extract_sub_graph`
INFO:tensorflow:Froze 61 variables.
INFO:tensorflow:Converted 61 variables to const ops.
###Markdown
The serving logic is as simple as feeding the feature values to the corresponding input nodes and fetching the score from the output node. In our model, the input nodes are some placeholders and control variables (such as is_training, layer_keeps). We can get the nodes by their names:
###Code
class LoadFrozedPredModel:
def __init__(self, graph):
self.pred = graph.get_tensor_by_name('import/pred:0')
self.items = graph.get_tensor_by_name('import/items:0')
self.cates = graph.get_tensor_by_name('import/cates:0')
self.item_history = graph.get_tensor_by_name('import/item_history:0')
self.item_cate_history = graph.get_tensor_by_name('import/item_cate_history:0')
self.mask = graph.get_tensor_by_name('import/mask:0')
self.time_from_first_action = graph.get_tensor_by_name('import/time_from_first_action:0')
self.time_to_now = graph.get_tensor_by_name('import/time_to_now:0')
self.layer_keeps = graph.get_tensor_by_name('import/layer_keeps:0')
self.is_training = graph.get_tensor_by_name('import/is_training:0')
def infer_as_serving(model, infile, outfile, hparams, iterator, sess):
preds = []
for batch_data_input in iterator.load_data_from_file(infile, batch_num_ngs=0):
if batch_data_input:
feed_dict = {
model.layer_keeps:np.ones(3, dtype=np.float32),
model.is_training:False,
model.items: batch_data_input[iterator.items],
model.cates: batch_data_input[iterator.cates],
model.item_history: batch_data_input[iterator.item_history],
model.item_cate_history: batch_data_input[iterator.item_cate_history],
model.mask: batch_data_input[iterator.mask],
model.time_from_first_action: batch_data_input[iterator.time_from_first_action],
model.time_to_now: batch_data_input[iterator.time_to_now]
}
step_pred = sess.run(model.pred, feed_dict=feed_dict)
preds.extend(np.reshape(step_pred, -1))
with open(outfile, "w") as wt:
for line in preds:
wt.write('{0}\n'.format(line))
###Output
_____no_output_____
###Markdown
Here is the main pipeline for inference in an online serving manner. You can compare 'output_serving.txt' with 'output.txt' to see if the results are consistent.The input file format is the same as introduced in Section 1 'Input data format'. In the serving stage, since we do not need a ground-truth label, you can simply place any number, such as zero, in the label column. The iterator will parse the input file and convert it into the required format for the model's feed dictionary.
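For illustration, one such serving line with a placeholder label could be written as follows (the field values are taken from the example in Section 1 and are placeholders only; a real pipeline would supply its own features):
```
# Illustrative only: write one serving-format line with a dummy label of 0.
serving_line = "\t".join([
    "0",                      # label is not used at serving time, any number works
    "A1QQ86H5M2LVW2",         # user_id
    "B0059XTU1S",             # candidate item_id
    "Movies",                 # candidate category_id
    "1377561600",             # current timestamp
    "B002ZG97WE,B004IK30PA",  # history item ids
    "Movies,Movies",          # history category ids
    "1304294400,1304812800",  # history timestamps
])
with open("serving_input.txt", "w") as f:
    f.write(serving_line + "\n")
```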
###Code
G = tf.Graph()
with tf.gfile.GFile(
os.path.join(hparams.MODEL_DIR, "serving_model.pb"),
'rb'
) as f, G.as_default():
graph_def_optimized = tf.GraphDef()
graph_def_optimized.ParseFromString(f.read())
#### uncomment this line if you want to check what conent is included in the graph
#print('graph_def_optimized = ' + str(graph_def_optimized))
with tf.Session(graph=G) as sess:
tf.import_graph_def(graph_def_optimized)
model = LoadFrozedPredModel(sess.graph)
serving_output_file = os.path.join(data_path, r'output_serving.txt')
iterator = input_creator(hparams, tf.Graph())
infer_as_serving(model, test_file, serving_output_file, hparams, iterator, sess)
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. Sequential Recommender Quick Start Example: SLi_Rec : Adaptive User Modeling with Long and Short-Term Preferences for Personalized RecommendationUnlike a general recommender such as Matrix Factorization or xDeepFM (in the repo), which doesn't consider the order of the user's activities, sequential recommender systems take the sequence of the user behaviors as context, and the goal is to predict the items that the user will interact with in a short time (in an extreme case, the item that the user will interact with next).This notebook aims to give you a quick example of how to train a sequential model based on a public Amazon dataset. Currently, we support NextItNet \[4\], GRU4Rec \[2\], Caser \[3\], A2SVD \[1\], SLi_Rec \[1\], and SUM \[5\]. Without loss of generality, this notebook takes the [SLi_Rec model](https://www.microsoft.com/en-us/research/uploads/prod/2019/07/IJCAI19-ready_v1.pdf) as an example.SLi_Rec \[1\] is a deep learning-based model that aims at capturing both long- and short-term user preferences for precise recommender systems. To summarize, SLi_Rec has the following key properties:* It adopts the attentive "Asymmetric-SVD" paradigm for long-term modeling;* It takes both time irregularity and semantic irregularity into consideration by modifying the gating logic in LSTM.* It uses an attention mechanism to dynamically fuse the long-term component and the short-term component.In this notebook, we test SLi_Rec on a subset of the public dataset: [Amazon_reviews](http://snap.stanford.edu/data/amazon/productGraph/categoryFiles/reviews_Movies_and_TV_5.json.gz) and [Amazon_metadata](http://snap.stanford.edu/data/amazon/productGraph/categoryFiles/meta_Movies_and_TV.json.gz)This notebook is tested under TF 2.6. 0. Global Settings and Imports
###Code
import sys
import os
import logging
import papermill as pm
import scrapbook as sb
from tempfile import TemporaryDirectory
import numpy as np
import tensorflow.compat.v1 as tf
tf.get_logger().setLevel('ERROR') # only show error messages
from recommenders.utils.timer import Timer
from recommenders.utils.constants import SEED
from recommenders.models.deeprec.deeprec_utils import (
prepare_hparams
)
from recommenders.datasets.amazon_reviews import download_and_extract, data_preprocessing
from recommenders.datasets.download_utils import maybe_download
from recommenders.models.deeprec.models.sequential.sli_rec import SLI_RECModel as SeqModel
#### to use the other model, use one of the following lines:
# from recommenders.models.deeprec.models.sequential.asvd import A2SVDModel as SeqModel
# from recommenders.models.deeprec.models.sequential.caser import CaserModel as SeqModel
# from recommenders.models.deeprec.models.sequential.gru4rec import GRU4RecModel as SeqModel
# from recommenders.models.deeprec.models.sequential.sum import SUMModel as SeqModel
#from recommenders.models.deeprec.models.sequential.nextitnet import NextItNetModel
from recommenders.models.deeprec.io.sequential_iterator import SequentialIterator
#from recommenders.models.deeprec.io.nextitnet_iterator import NextItNetIterator
print("System version: {}".format(sys.version))
print("Tensorflow version: {}".format(tf.__version__))
## ATTENTION: change to the corresponding config file, e.g., caser.yaml for CaserModel, sum.yaml for SUMModel
yaml_file = '../../recommenders/models/deeprec/config/sli_rec.yaml'
###Output
_____no_output_____
###Markdown
Parameters
###Code
EPOCHS = 10
BATCH_SIZE = 400
RANDOM_SEED = SEED # Set None for non-deterministic result
data_path = os.path.join("..", "..", "tests", "resources", "deeprec", "slirec")
###Output
_____no_output_____
###Markdown
1. Input data formatThe input data contains 8 columns, i.e., `<label> <user_id> <item_id> <category_id> <timestamp> <history_item_ids> <history_category_ids> <history_timestamps>`; columns are separated by `"\t"`. item_id and category_id denote the target item and category, which means that for this instance, we want to guess whether user user_id will interact with item_id at timestamp. The `<history_*>` columns record the user behavior list up to `<timestamp>`; elements are separated by commas. `<label>` is a binary value with 1 for positive instances and 0 for negative instances. One example of an instance is: `1 A1QQ86H5M2LVW2 B0059XTU1S Movies 1377561600 B002ZG97WE,B004IK30PA,B000BNX3AU,B0017ANB08,B005LAIHW2 Movies,Movies,Movies,Movies,Movies 1304294400,1304812800,1315785600,1316304000,1356998400` In the data preprocessing stage, we have a script to generate some ID mapping dictionaries, so user_id, item_id and category_id will be mapped to integer indices starting from 1. You need to tell the input iterator where the ID mapping files are. (For example, in the next section, we have some mapping files like user_vocab, item_vocab, and cate_vocab). The data preprocessing script is at [recommenders/dataset/amazon_reviews.py](../../recommenders/dataset/amazon_reviews.py); you need to call the `_create_vocab(train_file, user_vocab, item_vocab, cate_vocab)` function. Note that the ID vocabulary is only created from the train_file, so new IDs in valid_file or test_file will be regarded as unknown IDs and assigned a default index of 0.Only the SLi_Rec model is time-aware. For the other models, you can just pad some meaningless timestamp in the data files to fill up the format; the models will ignore these columns (see the padding sketch below).We use Softmax as the loss function. In the training and evaluation stages, we group 1 positive instance with num_ngs negative instances. Pair-wise ranking can be regarded as a special case of Softmax ranking, where num_ngs is set to 1. More specifically, for training and evaluation, you need to organize the data file such that each positive instance is followed by num_ngs negative instances. Our program will take 1+num_ngs lines as a unit for the Softmax calculation. num_ngs is a parameter you need to pass to the `prepare_hparams`, `fit` and `run_eval` functions. `train_num_ngs` in `prepare_hparams` denotes the number of negative instances for training, where a recommended number is 4. `valid_num_ngs` and `num_ngs` in `fit` and `run_eval` denote the number in evaluation. In evaluation, the model calculates metrics among the 1+num_ngs instances. For the `predict` function, since we only need to calculate a score for each individual instance, there is no need for a num_ngs setting. More details and examples will be provided in the following sections.For the training stage, if you don't want to prepare negative instances, you can just provide positive instances and set the parameters `need_sample=True, train_num_ngs=train_num_ngs` for the function `prepare_hparams`; our model will dynamically sample `train_num_ngs` instances as negative samples in each mini-batch. Amazon datasetNow let's start with a public dataset containing product reviews and metadata from Amazon, which is widely used as a benchmark dataset in the recommendation systems field.
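As a concrete illustration of the timestamp padding mentioned above, here is a small sketch (ours, not a repo utility) that overwrites the two time columns with a constant dummy value, assuming the 8-column order described in this section:
```
# Illustrative sketch: pad constant, meaningless timestamps into the time columns
# so that non-time-aware models can reuse the same file format.
def pad_dummy_timestamps(in_path, out_path, dummy_ts="0"):
    with open(in_path) as fin, open(out_path, "w") as fout:
        for line in fin:
            cols = line.rstrip("\n").split("\t")
            if len(cols) != 8:
                continue  # skip malformed lines
            cols[4] = dummy_ts                                   # target timestamp
            history_len = len(cols[5].split(",")) if cols[5] else 0
            cols[7] = ",".join([dummy_ts] * history_len)         # history timestamps
            fout.write("\t".join(cols) + "\n")
```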
###Code
# for test
train_file = os.path.join(data_path, r'train_data')
valid_file = os.path.join(data_path, r'valid_data')
test_file = os.path.join(data_path, r'test_data')
user_vocab = os.path.join(data_path, r'user_vocab.pkl')
item_vocab = os.path.join(data_path, r'item_vocab.pkl')
cate_vocab = os.path.join(data_path, r'category_vocab.pkl')
output_file = os.path.join(data_path, r'output.txt')
reviews_name = 'reviews_Movies_and_TV_5.json'
meta_name = 'meta_Movies_and_TV.json'
reviews_file = os.path.join(data_path, reviews_name)
meta_file = os.path.join(data_path, meta_name)
train_num_ngs = 4 # number of negative instances with a positive instance for training
valid_num_ngs = 4 # number of negative instances with a positive instance for validation
test_num_ngs = 9 # number of negative instances with a positive instance for testing
sample_rate = 0.01 # sample a small item set for training and testing here for fast example
input_files = [reviews_file, meta_file, train_file, valid_file, test_file, user_vocab, item_vocab, cate_vocab]
if not os.path.exists(train_file):
download_and_extract(reviews_name, reviews_file)
download_and_extract(meta_name, meta_file)
data_preprocessing(*input_files, sample_rate=sample_rate, valid_num_ngs=valid_num_ngs, test_num_ngs=test_num_ngs)
#### uncomment this for the NextItNet model, because it does not need to unfold the user history
# data_preprocessing(*input_files, sample_rate=sample_rate, valid_num_ngs=valid_num_ngs, test_num_ngs=test_num_ngs, is_history_expanding=False)
###Output
_____no_output_____
###Markdown
1.1 Prepare hyper-parametersprepare_hparams() will create a full set of hyper-parameters for model training, such as learning rate, feature number, and dropout ratio. We can put those parameters in a yaml file (a complete list of parameters can be found under our config folder), or pass parameters as the function's arguments (which will overwrite yaml settings).Parameter hints: `need_sample` controls whether to perform dynamic negative sampling in the mini-batch. `train_num_ngs` indicates how many negative instances follow each positive instance. Examples: (1) `need_sample=True and train_num_ngs=4`: There are only positive instances in your training file. Our model will dynamically sample 4 negative instances for each positive instance in the mini-batch. Note that if need_sample is set to True, train_num_ngs should be greater than zero. (2) `need_sample=False and train_num_ngs=4`: In your training file, each positive line is followed by 4 negative lines. Note that if need_sample is set to False, you must provide a training file with negative instances, and train_num_ngs should match the number of negative instances in your training file.
###Code
### NOTE:
### remember to use `_create_vocab(train_file, user_vocab, item_vocab, cate_vocab)` to generate the user_vocab, item_vocab and cate_vocab files, if you are using your own dataset rather than using our demo Amazon dataset.
hparams = prepare_hparams(yaml_file,
embed_l2=0.,
layer_l2=0.,
learning_rate=0.001, # set to 0.01 if batch normalization is disable
epochs=EPOCHS,
batch_size=BATCH_SIZE,
show_step=20,
MODEL_DIR=os.path.join(data_path, "model/"),
SUMMARIES_DIR=os.path.join(data_path, "summary/"),
user_vocab=user_vocab,
item_vocab=item_vocab,
cate_vocab=cate_vocab,
need_sample=True,
train_num_ngs=train_num_ngs, # provides the number of negative instances for each positive instance for loss computation.
)
###Output
_____no_output_____
###Markdown
1.2 Create data loaderDesignate a data iterator for the model. All our sequential models use SequentialIterator; the data format is introduced above. Validation and testing data are files produced by offline negative sampling, with the numbers of negatives given by `valid_num_ngs` and `test_num_ngs`.
###Code
input_creator = SequentialIterator
#### uncomment this for the NextItNet model, because it needs a special data iterator for training
#input_creator = NextItNetIterator
###Output
_____no_output_____
###Markdown
2. Create modelWhen both hyper-parameters and data iterator are ready, we can create a model:
###Code
model = SeqModel(hparams, input_creator, seed=RANDOM_SEED)
## sometimes we don't want to train a model from scratch
## then we can load a pre-trained model like this:
#model.load_model(r'your_model_path')
###Output
_____no_output_____
###Markdown
Now let's see what the model's performance is at this point (before any training):
###Code
# test_num_ngs is the number of negative lines after each positive line in your test_file
print(model.run_eval(test_file, num_ngs=test_num_ngs))
###Output
{'auc': 0.4857, 'logloss': 0.6931, 'mean_mrr': 0.2665, 'ndcg@2': 0.1357, 'ndcg@4': 0.2186, 'ndcg@6': 0.2905, 'group_auc': 0.4849}
###Markdown
An AUC of 0.5 corresponds to random guessing, and we can see that before training the model indeed behaves like a random guesser. 2.1 Train modelNext we want to train the model on a training set and check its performance on a validation dataset. Training the model is as simple as a function call:
###Code
with Timer() as train_time:
model = model.fit(train_file, valid_file, valid_num_ngs=valid_num_ngs)
# valid_num_ngs is the number of negative lines after each positive line in your valid_file
# we will evaluate the performance of model on valid_file every epoch
print('Time cost for training is {0:.2f} mins'.format(train_time.interval/60.0))
###Output
step 20 , total_loss: 1.6078, data_loss: 1.6078
step 40 , total_loss: 1.6054, data_loss: 1.6054
eval valid at epoch 1: auc:0.4975,logloss:0.6929,mean_mrr:0.4592,ndcg@2:0.3292,ndcg@4:0.5125,ndcg@6:0.5915,group_auc:0.4994
step 20 , total_loss: 1.5786, data_loss: 1.5786
step 40 , total_loss: 1.4193, data_loss: 1.4193
eval valid at epoch 2: auc:0.6486,logloss:0.6946,mean_mrr:0.5567,ndcg@2:0.472,ndcg@4:0.6292,ndcg@6:0.6669,group_auc:0.6363
step 20 , total_loss: 1.3229, data_loss: 1.3229
step 40 , total_loss: 1.3079, data_loss: 1.3079
eval valid at epoch 3: auc:0.6887,logloss:0.8454,mean_mrr:0.6032,ndcg@2:0.537,ndcg@4:0.6705,ndcg@6:0.7022,group_auc:0.683
step 20 , total_loss: 1.3521, data_loss: 1.3521
step 40 , total_loss: 1.2250, data_loss: 1.2250
eval valid at epoch 4: auc:0.6978,logloss:0.7005,mean_mrr:0.6236,ndcg@2:0.5622,ndcg@4:0.6881,ndcg@6:0.7175,group_auc:0.699
step 20 , total_loss: 1.2826, data_loss: 1.2826
step 40 , total_loss: 1.2795, data_loss: 1.2795
eval valid at epoch 5: auc:0.7152,logloss:0.6695,mean_mrr:0.6382,ndcg@2:0.582,ndcg@4:0.7009,ndcg@6:0.7286,group_auc:0.7139
step 20 , total_loss: 1.2214, data_loss: 1.2214
step 40 , total_loss: 1.2521, data_loss: 1.2521
eval valid at epoch 6: auc:0.722,logloss:0.6141,mean_mrr:0.637,ndcg@2:0.5796,ndcg@4:0.6993,ndcg@6:0.7276,group_auc:0.7116
step 20 , total_loss: 1.1884, data_loss: 1.1884
step 40 , total_loss: 1.1957, data_loss: 1.1957
eval valid at epoch 7: auc:0.7287,logloss:0.6183,mean_mrr:0.6417,ndcg@2:0.5875,ndcg@4:0.7031,ndcg@6:0.7312,group_auc:0.7167
step 20 , total_loss: 1.1779, data_loss: 1.1779
step 40 , total_loss: 1.1616, data_loss: 1.1616
eval valid at epoch 8: auc:0.7342,logloss:0.6584,mean_mrr:0.6538,ndcg@2:0.6006,ndcg@4:0.7121,ndcg@6:0.7402,group_auc:0.7248
step 20 , total_loss: 1.1299, data_loss: 1.1299
step 40 , total_loss: 1.2055, data_loss: 1.2055
eval valid at epoch 9: auc:0.7324,logloss:0.6268,mean_mrr:0.6541,ndcg@2:0.5981,ndcg@4:0.7129,ndcg@6:0.7404,group_auc:0.7239
step 20 , total_loss: 1.1927, data_loss: 1.1927
step 40 , total_loss: 1.1909, data_loss: 1.1909
eval valid at epoch 10: auc:0.7369,logloss:0.6122,mean_mrr:0.6611,ndcg@2:0.6087,ndcg@4:0.7181,ndcg@6:0.7457,group_auc:0.731
[(1, {'auc': 0.4975, 'logloss': 0.6929, 'mean_mrr': 0.4592, 'ndcg@2': 0.3292, 'ndcg@4': 0.5125, 'ndcg@6': 0.5915, 'group_auc': 0.4994}), (2, {'auc': 0.6486, 'logloss': 0.6946, 'mean_mrr': 0.5567, 'ndcg@2': 0.472, 'ndcg@4': 0.6292, 'ndcg@6': 0.6669, 'group_auc': 0.6363}), (3, {'auc': 0.6887, 'logloss': 0.8454, 'mean_mrr': 0.6032, 'ndcg@2': 0.537, 'ndcg@4': 0.6705, 'ndcg@6': 0.7022, 'group_auc': 0.683}), (4, {'auc': 0.6978, 'logloss': 0.7005, 'mean_mrr': 0.6236, 'ndcg@2': 0.5622, 'ndcg@4': 0.6881, 'ndcg@6': 0.7175, 'group_auc': 0.699}), (5, {'auc': 0.7152, 'logloss': 0.6695, 'mean_mrr': 0.6382, 'ndcg@2': 0.582, 'ndcg@4': 0.7009, 'ndcg@6': 0.7286, 'group_auc': 0.7139}), (6, {'auc': 0.722, 'logloss': 0.6141, 'mean_mrr': 0.637, 'ndcg@2': 0.5796, 'ndcg@4': 0.6993, 'ndcg@6': 0.7276, 'group_auc': 0.7116}), (7, {'auc': 0.7287, 'logloss': 0.6183, 'mean_mrr': 0.6417, 'ndcg@2': 0.5875, 'ndcg@4': 0.7031, 'ndcg@6': 0.7312, 'group_auc': 0.7167}), (8, {'auc': 0.7342, 'logloss': 0.6584, 'mean_mrr': 0.6538, 'ndcg@2': 0.6006, 'ndcg@4': 0.7121, 'ndcg@6': 0.7402, 'group_auc': 0.7248}), (9, {'auc': 0.7324, 'logloss': 0.6268, 'mean_mrr': 0.6541, 'ndcg@2': 0.5981, 'ndcg@4': 0.7129, 'ndcg@6': 0.7404, 'group_auc': 0.7239}), (10, {'auc': 0.7369, 'logloss': 0.6122, 'mean_mrr': 0.6611, 'ndcg@2': 0.6087, 'ndcg@4': 0.7181, 'ndcg@6': 0.7457, 'group_auc': 0.731})]
best epoch: 10
Time cost for training is 3.22 mins
###Markdown
2.2 Evaluate modelAgain, let's see what the model's performance is now (after training):
###Code
res_syn = model.run_eval(test_file, num_ngs=test_num_ngs)
print(res_syn)
sb.glue("res_syn", res_syn)
###Output
_____no_output_____
###Markdown
If we want to get the full prediction scores rather than evaluation metrics, we can do this:
###Code
model = model.predict(test_file, output_file)
# The data was downloaded in tmpdir folder. You can delete them manually if you do not need them any more.
###Output
_____no_output_____
###Markdown
2.3 Running models with a large datasetHere are the performances of popular sequential models on the whole Amazon dataset, which has 1,697,533 positive instances.Settings for reproducing the results:`learning_rate=0.001, dropout=0.3, item_embedding_dim=32, cate_embedding_dim=8, l2_norm=0, batch_size=400, train_num_ngs=4, valid_num_ngs=4, test_num_ngs=49`We compare the running time with CPU only and with GPU on the larger dataset. It appears that the GPU can significantly accelerate training. Hardware specification for running the large dataset: GPU: Tesla P100-PCIE-16GBCPU: 6 cores Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz | Models | AUC | g-AUC | NDCG@2 | NDCG@10 | seconds per epoch on GPU | seconds per epoch on CPU| config || :------| :------: | :------: | :------: | :------: | :------: | :------: | :------ || A2SVD | 0.8251 | 0.8178 | 0.2922 | 0.4264 | 249.5 | 440.0 | N/A || GRU4Rec | 0.8411 | 0.8332 | 0.3213 | 0.4547 | 439.0 | 4285.0 | max_seq_length=50, hidden_size=40|| Caser | 0.8244 | 0.8171 | 0.283 | 0.4194 | 314.3 | 5369.9 | T=1, n_v=128, n_h=128, L=3, min_seq_length=5|| SLi_Rec | 0.8631 | 0.8519 | 0.3491 | 0.4842 | 549.6 | 5014.0 | attention_size=40, max_seq_length=50, hidden_size=40|| NextItNet* | 0.6793 | 0.6769 | 0.0602 | 0.1733 | 112.0 | 214.5 | min_seq_length=3, dilations=\[1,2,4,1,2,4\], kernel_size=3 || SUM | 0.8481 | 0.8406 | 0.3394 | 0.4774 | 1005.0 | 9427.0 | hidden_size=40, slots=4, dropout=0| Note 1: The models above are grid-searched with a coarse granularity and the results are for reference only. Note 2: The NextItNet model requires a dataset with a strong sequence property, but the Amazon dataset used in this notebook does not meet that requirement, so the NextItNet model may not perform well. If you wish to use other datasets with a strong sequence property, NextItNet is recommended. Note 3: The time cost of the NextItNet model is significantly shorter than that of the other models because it doesn't need history expanding of the training data. 3. Loading Trained ModelsIn this section, we provide a simple example to illustrate how we can use the trained model to serve production demand.Suppose we are in a new session. First, let's load a previously trained model:
###Code
model_best_trained = SeqModel(hparams, input_creator, seed=RANDOM_SEED)
path_best_trained = os.path.join(hparams.MODEL_DIR, "best_model")
print('loading saved model in {0}'.format(path_best_trained))
model_best_trained.load_model(path_best_trained)
###Output
loading saved model in ../../tests/resources/deeprec/slirec/model/best_model
INFO:tensorflow:Restoring parameters from ../../tests/resources/deeprec/slirec/model/best_model
###Markdown
Let's see if we loaded the model correctly. The testing metrics should be close to the numbers we have in the training stage.
###Code
model_best_trained.run_eval(test_file, num_ngs=test_num_ngs)
###Output
_____no_output_____
###Markdown
And we make predictions using this model. In the next step, we will make predictions using a serving model. Then we can check if the two result files are consistent.
###Code
model_best_trained.predict(test_file, output_file)
###Output
_____no_output_____ |
Exercise_Notebook.ipynb | ###Markdown
DATA CLEANING WORKSHOP, CDS 2020, Colgate University REDUCING NOAA DATASETS Fairuz Ishraque'22 Importing the required LibrariesWe will start by importing the required Python modules or libraries. The two essential libraries for this exercise are **numpy** and **pandas**. If you want to plot your data too, it's a good idea to import the matplotlib library.
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Importing the NOAA .csv file as a pandas DataframeTables in pandas are called dataframes. This format makes it really easy to manipulate the columns and rows of the table for classifying as well as cleaning the dataset.
###Code
Data = pd.read_csv('Helsinki_Station_Data(1952-2017).txt', sep='\s+', na_values='-9999', skiprows=[1])
###Output
_____no_output_____
###Markdown
It's always a good idea to look at the relevant information of the loaded dataframe before proceeding with any form of analysis or editing on the data.
###Code
#Insert your code below
#Insert your code below
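# One possible approach (a sketch -- the DataFrame is named Data, as above):
Data.head()
Data.info()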
###Output
_____no_output_____
###Markdown
Finding the TMAX temperature for the summer of '69This is an exercise on performing string slicing as well as conditional indexing of the dataset 1. Slicing the month numbers out of the DATE column and assigning them to the month names
###Code
#Converting the DATE column into string values from integers. This procedure makes the slicing of dates possible
Data['DATE_str'] = Data['DATE'].astype(str)
#Slicing the day values out of DATE_str
Data['DATE_yrmonth'] = Data['DATE_str'].str.slice(start=0, stop=6)
#Converting DATE_str into integer values
Data['DATE_yrmonthint'] = Data['DATE_yrmonth'].astype(int)
#creating a column of integers with the year in them
Data['DATE_yrint'] = Data['DATE_str'].str.slice(start=0, stop=4)
Data['DATE_yrint'] = Data['DATE_yrint'].astype(int)
#creating a column for just the number of months. This makes the mapping of month names to the month numbers possible
Data['DATE_month'] = Data['DATE_str'].str.slice(start=4, stop=6)
###Output
_____no_output_____
###Markdown
2. Creating a dictionary that maps the month names to the dataset
###Code
dict_months = {'01':'January', '02':'February', '03':'March', '04':'April',\
'05':'May', '06':'June', '07':'July', '08':'August', '09':'September',\
'10':'October', '11':'November', '12':'December'}
###Output
_____no_output_____
###Markdown
3. Performing conditional indexing on the dataframe. This is similar to the if-else statements you might be familiar with, but now the conditions are applied to the entire dataframe. In this case we are extracting a subset of our main dataframe that only contains the data from May 1969 to August 1969.
###Code
#insert your code below
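# One possible approach (a sketch): keep only the rows from May through August 1969,
# using the integer year-month column created above. The name summer69 is just a
# choice for this sketch.
summer69 = Data.loc[(Data['DATE_yrmonthint'] >= 196905) & (Data['DATE_yrmonthint'] <= 196908)]
summer69.head()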
###Output
_____no_output_____
###Markdown
4. Finally, finding the max tempertaure during the summer of '69
###Code
#insert your code below
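# For example, assuming the May-August subset from the previous step is named summer69:
summer69['TMAX'].max()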
###Output
_____no_output_____
###Markdown
And Voila! We're done! Calculating Monthly Average TemperaturesThis is an exercise on grouping data and iterating through a dataset 1. Grouping the monthly data
###Code
#insert your code below
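# One possible approach (a sketch): group the data by the year-month string created earlier
grouped = Data.groupby('DATE_yrmonth')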
###Output
_____no_output_____
###Markdown
2. Iterating over the group to get the mean monthly temperature values
###Code
#insert your loop below
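# A possible sketch, mirroring the aggregation loop used in the anomaly cell further below
# (the names monthlyData and mean_cols are just choices for this sketch):
monthlyData = pd.DataFrame()
mean_cols = ['TAVG']
for key, group in grouped:
    mean_values = group[mean_cols].mean()
    mean_values['DATE_yrmonth'] = key
    monthlyData = monthlyData.append(mean_values, ignore_index=True)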
###Output
_____no_output_____
###Markdown
3. Creating a column in monthlyData for month numbers and then remapping the month numbers to month names using the previously defined dictionary. Also, creating a column with the Month Numbers as integer values
###Code
#insert your code below
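# A possible sketch (assumes monthlyData carries the DATE_yrmonth key from the previous step):
monthlyData['MonthNum'] = monthlyData['DATE_yrmonth'].str.slice(start=4, stop=6)
monthlyData['Month'] = monthlyData['MonthNum'].map(dict_months)
monthlyData['MonthNum'] = monthlyData['MonthNum'].astype(int)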
###Output
_____no_output_____
###Markdown
4. Creating another column in monthlyData for temperature values in celsius
###Code
#insert your code below
###Output
_____no_output_____
###Markdown
5. Writing a function that converts temperatures from Fahrenheit to Celcius
###Code
#insert your function below
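# A possible implementation (the anomaly cell further below expects a function with this
# name that takes degrees Fahrenheit and returns degrees Celsius):
def FahrToCelsius(temp_fahrenheit):
    return (temp_fahrenheit - 32) / 1.8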
###Output
_____no_output_____
###Markdown
6. Iterating the conversion through the monthlyData dataframe and adding it to the new column TempsC
###Code
#insert your loop below (iterrows)
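# A possible sketch, mirroring the avgTempsC loop in the anomaly cell further below:
for idx, row in monthlyData.iterrows():
    monthlyData.loc[idx, 'TempsC'] = FahrToCelsius(row['TAVG'])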
###Output
_____no_output_____
###Markdown
Now, let's look at the new Dataframe we have created!
###Code
#insert your code below
###Output
_____no_output_____
###Markdown
We can also save this dataframe into a new csv file if we want to! Calculating Monthly Temperature AnomaliesHere by temperature anomalies we mean how much the average temperatures for each month in every year throughout the dataset varied from the mean temperature of that month for the whole dataset.For example, the anomaly for the average temperature of January 1972 is the difference between the avergae temperature of January 1972 from the mean of the average temperatures of all the Januaries recorded in our dataframe.
###Code
##creating the column avgTempsC in the Dataframe
Data['avgTempsC'] = None
##iterating the conversion and adding it to the column
for idx, row in Data.iterrows():
#conversion
celsius2 = FahrToCelsius(row['TAVG'])
#adding the values to the empty column
Data.loc[idx,'avgTempsC'] = celsius2
##grouping data across the years based on the months
referenceTemps = pd.DataFrame()
grouped2 = Data.groupby('DATE_month')
##the columns that we wanna aggregate
mean_cols2 = ['avgTempsC']
##iterating over the groups to get the mean values of each month over the years 1952-80
for key, group in grouped2:
mean_values2 = group[mean_cols2].mean()
mean_values2['Month'] = key
referenceTemps = referenceTemps.append(mean_values2, ignore_index=True)
##mapping month names to month numbers. The dictionary has already been created.
referenceTemps['Month'] = referenceTemps['Month'].map(dict_months)
##merging the tables referenceTemps and monthlyData
monthlyData = monthlyData.merge(referenceTemps, on='Month')
monthlyData['Diff'] = monthlyData['TempsC']-monthlyData['avgTempsC']
monthlyData.head()
plt.scatter(monthlyData.index, monthlyData['Diff'], c=monthlyData['MonthNum'], cmap='rainbow')
plt.xlabel('Index')
plt.ylabel('Temperature Anomalies (in Degrees Celcius)')
plt.title('Temperature anomalies at Helsinki from 1952-2017 (Sorted by each Month)')
plt.colorbar(ticks=[1,2,3,4,5,6,7,8,9,10,11,12], label='Number of Months')
###Output
_____no_output_____ |
ML/DAT8-master/notebooks/09_model_evaluation.ipynb | ###Markdown
Model Evaluation Review of last class- Goal was to predict the **response value** of an **unknown observation** - predict the species of an unknown iris - predict the position of an unknown NBA player- Made predictions using KNN models with **different values of K**- Need a way to choose the **"best" model**: the one that "generalizes" to "out-of-sample" data**Solution:** Create a procedure that **estimates** how well a model is likely to perform on out-of-sample data and use that to choose between models.**Note:** These procedures can be used with **any machine learning model**, not only KNN. Evaluation procedure 1: Train and test on the entire dataset 1. Train the model on the **entire dataset**.2. Test the model on the **same dataset**, and evaluate how well we did by comparing the **predicted** response values with the **true** response values.
###Code
# read the NBA data into a DataFrame
import pandas as pd
url = 'https://raw.githubusercontent.com/justmarkham/DAT4-students/master/kerry/Final/NBA_players_2015.csv'
nba = pd.read_csv(url, index_col=0)
# map positions to numbers
nba['pos_num'] = nba.pos.map({'C':0, 'F':1, 'G':2})
# create feature matrix (X)
feature_cols = ['ast', 'stl', 'blk', 'tov', 'pf']
X = nba[feature_cols]
# create response vector (y)
y = nba.pos_num
###Output
_____no_output_____
###Markdown
KNN (K=50)
###Code
# import the class
from sklearn.neighbors import KNeighborsClassifier
# instantiate the model
knn = KNeighborsClassifier(n_neighbors=50)
# train the model on the entire dataset
knn.fit(X, y)
# predict the response values for the observations in X ("test the model")
knn.predict(X)
# store the predicted response values
y_pred_class = knn.predict(X)
###Output
_____no_output_____
###Markdown
To evaluate a model, we also need an **evaluation metric:**- Numeric calculation used to **quantify** the performance of a model- Appropriate metric depends on the **goals** of your problemMost common choices for classification problems:- **Classification accuracy**: percentage of correct predictions ("reward function" since higher is better)- **Classification error**: percentage of incorrect predictions ("loss function" since lower is better)In this case, we'll use classification accuracy.
###Code
# compute classification accuracy
from sklearn import metrics
print metrics.accuracy_score(y, y_pred_class)
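# sanity check: accuracy is simply the fraction of matching predictions,
# so (y == y_pred_class).mean() should give the same number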
###Output
0.665271966527
###Markdown
This is known as **training accuracy** because we are evaluating the model on the same data we used to train the model. KNN (K=1)
###Code
knn = KNeighborsClassifier(n_neighbors=1)
knn.fit(X, y)
y_pred_class = knn.predict(X)
print metrics.accuracy_score(y, y_pred_class)
###Output
1.0
###Markdown
Problems with training and testing on the same data- Goal is to estimate likely performance of a model on **out-of-sample data**- But, maximizing training accuracy rewards **overly complex models** that won't necessarily generalize- Unnecessarily complex models **overfit** the training data: - Will do well when tested using the in-sample data - May do poorly on out-of-sample data - Learns the "noise" in the data rather than the "signal" - From Quora: [What is an intuitive explanation of overfitting?](http://www.quora.com/What-is-an-intuitive-explanation-of-overfitting/answer/Jessica-Su)**Thus, training accuracy is not a good estimate of out-of-sample accuracy.**  Evaluation procedure 2: Train/test split 1. Split the dataset into two pieces: a **training set** and a **testing set**.2. Train the model on the **training set**.3. Test the model on the **testing set**, and evaluate how well we did.What does this accomplish?- Model can be trained and tested on **different data** (we treat testing data like out-of-sample data).- Response values are known for the testing set, and thus **predictions can be evaluated**.This is known as **testing accuracy** because we are evaluating the model on an independent "test set" that was not used during model training.**Testing accuracy is a better estimate of out-of-sample performance than training accuracy.** Understanding "unpacking"
###Code
def min_max(nums):
smallest = min(nums)
largest = max(nums)
return [smallest, largest]
min_and_max = min_max([1, 2, 3])
print min_and_max
print type(min_and_max)
the_min, the_max = min_max([1, 2, 3])
print the_min
print type(the_min)
print the_max
print type(the_max)
###Output
1
<type 'int'>
3
<type 'int'>
###Markdown
Understanding the `train_test_split` function
###Code
from sklearn.cross_validation import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y)
# before splitting
print X.shape
# after splitting
print X_train.shape
print X_test.shape
# before splitting
print y.shape
# after splitting
print y_train.shape
print y_test.shape
###Output
(478L,)
(358L,)
(120L,)
###Markdown
 Understanding the `random_state` parameter
###Code
# WITHOUT a random_state parameter
X_train, X_test, y_train, y_test = train_test_split(X, y)
# print the first element of each object
print X_train.head(1)
print X_test.head(1)
print y_train.head(1)
print y_test.head(1)
# WITH a random_state parameter
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=99)
# print the first element of each object
print X_train.head(1)
print X_test.head(1)
print y_train.head(1)
print y_test.head(1)
###Output
ast stl blk tov pf
401 2.9 1.3 0.2 1.4 2.3
ast stl blk tov pf
32 1.5 0.9 0.6 1.1 3.1
401 2
Name: pos_num, dtype: int64
32 1
Name: pos_num, dtype: int64
###Markdown
Using the train/test split procedure (K=1)
###Code
# STEP 1: split X and y into training and testing sets (using random_state for reproducibility)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=99)
# STEP 2: train the model on the training set (using K=1)
knn = KNeighborsClassifier(n_neighbors=1)
knn.fit(X_train, y_train)
# STEP 3: test the model on the testing set, and check the accuracy
y_pred_class = knn.predict(X_test)
print metrics.accuracy_score(y_test, y_pred_class)
###Output
0.616666666667
###Markdown
Repeating for K=50
###Code
knn = KNeighborsClassifier(n_neighbors=50)
knn.fit(X_train, y_train)
y_pred_class = knn.predict(X_test)
print metrics.accuracy_score(y_test, y_pred_class)
###Output
0.675
###Markdown
 Comparing testing accuracy with null accuracy Null accuracy is the accuracy that could be achieved by **always predicting the most frequent class**. It is a benchmark against which you may want to measure your classification model.
###Code
# examine the class distribution
y_test.value_counts()
# compute null accuracy
y_test.value_counts().head(1) / len(y_test)
###Output
_____no_output_____
###Markdown
Searching for the "best" value of K
###Code
# calculate TRAINING ERROR and TESTING ERROR for K=1 through 100
k_range = range(1, 101)
training_error = []
testing_error = []
for k in k_range:
# instantiate the model with the current K value
knn = KNeighborsClassifier(n_neighbors=k)
# calculate training error
knn.fit(X, y)
y_pred_class = knn.predict(X)
training_accuracy = metrics.accuracy_score(y, y_pred_class)
training_error.append(1 - training_accuracy)
# calculate testing error
knn.fit(X_train, y_train)
y_pred_class = knn.predict(X_test)
testing_accuracy = metrics.accuracy_score(y_test, y_pred_class)
testing_error.append(1 - testing_accuracy)
# allow plots to appear in the notebook
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
# create a DataFrame of K, training error, and testing error
column_dict = {'K': k_range, 'training error':training_error, 'testing error':testing_error}
df = pd.DataFrame(column_dict).set_index('K').sort_index(ascending=False)
df.head()
# plot the relationship between K (HIGH TO LOW) and TESTING ERROR
df.plot(y='testing error')
plt.xlabel('Value of K for KNN')
plt.ylabel('Error (lower is better)')
# find the minimum testing error and the associated K value
df.sort('testing error').head()
# alternative method
min(zip(testing_error, k_range))
###Output
_____no_output_____
###Markdown
What could we conclude?- When using KNN on this dataset with these features, the **best value for K** is likely to be around 14.- Given the statistics of an **unknown player**, we estimate that we would be able to correctly predict his position about 74% of the time. Training error versus testing error
###Code
# plot the relationship between K (HIGH TO LOW) and both TRAINING ERROR and TESTING ERROR
df.plot()
plt.xlabel('Value of K for KNN')
plt.ylabel('Error (lower is better)')
###Output
_____no_output_____
###Markdown
- **Training error** decreases as model complexity increases (lower value of K)- **Testing error** is minimized at the optimum model complexity  Making predictions on out-of-sample data Given the statistics of a (truly) unknown player, how do we predict his position?
###Code
# instantiate the model with the best known parameters
knn = KNeighborsClassifier(n_neighbors=14)
# re-train the model with X and y (not X_train and y_train) - why?
knn.fit(X, y)
# make a prediction for an out-of-sample observation
knn.predict([1, 1, 0, 1, 2])
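# note: newer versions of scikit-learn expect a 2D array here,
# e.g. knn.predict([[1, 1, 0, 1, 2]])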
###Output
_____no_output_____
###Markdown
Disadvantages of train/test split? What would happen if the `train_test_split` function had split the data differently? Would we get the same exact results as before?
###Code
# try different values for random_state
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=98)
knn = KNeighborsClassifier(n_neighbors=50)
knn.fit(X_train, y_train)
y_pred_class = knn.predict(X_test)
print metrics.accuracy_score(y_test, y_pred_class)
###Output
0.641666666667
|
ipynb/Understanding-Tensorflow-Distributions-Shapes.ipynb | ###Markdown
BasicsThere are three important concepts associated with TensorFlow Distributions shapes:* *Event shape* describes the shape of a single draw from the distribution; it may be dependent across dimensions. For scalar distributions, the event shape is []. For a 5-dimensional MultivariateNormal, the event shape is [5].* *Batch shape* describes independent, not identically distributed draws, aka 'batch' of distributions.* *Sample shape* describes independent, identically distributed draws of batches from the distribution family.The event shape and the batch shape are properties of a `Distribution` object, whereas the sample shape is associated with a specific call to sample or `log_prob`. Scalar DistributionsAs we noted above, a `Distribution` object has defined event and batch shapes. We'll start with a utility to describe distributions.
###Code
def describe_distributions(distributions):
print('\n'.join([str(d) for d in distributions]))
###Output
_____no_output_____
###Markdown
In this section we'll explore *scalar* distributions: distributions with an event shape of []. A typical example is the Poisson distribution, specified by a rate.
###Code
poisson_distributions = [
tfd.Poisson(rate=1., name='One Poisson Scalar Batch'),
tfd.Poisson(rate=[1., 10., 100.], name='Three Poissons'),
tfd.Poisson(rate=[[1., 10., 100.], [2, 20., 200.]],
name='Two-by-three Poissons'),
tfd.Poisson(rate=[1.], name='One Poisson Vector Batch'),
tfd.Poisson(rate=[[1.]], name='One Poisson Expanded Batch')
]
describe_distributions(poisson_distributions)
###Output
tfp.distributions.Poisson("One Poisson Scalar Batch/", batch_shape=(), event_shape=(), dtype=float32)
tfp.distributions.Poisson("Three Poissons/", batch_shape=(3,), event_shape=(), dtype=float32)
tfp.distributions.Poisson("Two-by-three Poissons/", batch_shape=(2, 3), event_shape=(), dtype=float32)
tfp.distributions.Poisson("One Poisson Vector Batch/", batch_shape=(1,), event_shape=(), dtype=float32)
tfp.distributions.Poisson("One Poisson Expanded Batch/", batch_shape=(1, 1), event_shape=(), dtype=float32)
###Markdown
The Poisson distribution is a scalar distribution, so its event shape is always []. If we specify more rates, these show up in the batch shape. The final pair of examples is interesting: there is only a single rate, but because that rate is embedded in a numpy array with non-empty shape, that shape becomes the batch shape.The standard Normal distribution is also a scalar. Its event shape is [], just like for the Poisson, but we'll play with it to see our first example of *broadcasting*. The Normal is specified using `loc` and `scale` parameters.
###Code
normal_distribution = [
tfd.Normal(loc=0., scale=1., name='Standard'),
tfd.Normal(loc=[0.], scale=1., name='Standard Vector Batch'),
tfd.Normal(loc=[0., 1, 2, 3], scale=1., name='Different Locs'),
tfd.Normal(loc=[0., 1, 2, 3], scale=[[1.], [5.]],
name='Broadcasting Scale')
]
describe_distributions(normal_distribution)
###Output
tfp.distributions.Normal("Standard/", batch_shape=(), event_shape=(), dtype=float32)
tfp.distributions.Normal("Standard Vector Batch/", batch_shape=(1,), event_shape=(), dtype=float32)
tfp.distributions.Normal("Different Locs/", batch_shape=(4,), event_shape=(), dtype=float32)
tfp.distributions.Normal("Broadcasting Scale/", batch_shape=(2, 4), event_shape=(), dtype=float32)
###Markdown
The interesting example above is the `Broadcasting Scale` distribution. The `loc` parameter has shape [4], and the `scale` parameter has shape `[2, 1]`. Using Numpy broadcasting rules, the batch shape is `[2, 4]` Sampling Scalar DistributionsThere are two main things we can do with distributions: we can `sample` from them and we can compute `log_probs`. Let's explore sampling first. The basic rule is that when we sample from a distribution, the resulting Tensor has shape `[sample_shape, batch_shape, event_shape]`, where `batch_shape` and `event_shape` are provided by the `Distribution` object, and `sample_shape` is provided by the call to `sample`. For scalar distributions, `event_shape = []`, so the Tensor returned from sample will have shape `[sample_shape, batch_shape]`. Let's try it:
###Code
def describe_sample_tensor_shape(sample_shape, distribution):
print('Sample shape:', sample_shape)
print('Return sample tensor shape:', distribution.sample(sample_shape).shape)
def describe_sample_tensor_shapes(distributions, sample_shapes):
started = False
for distribution in distributions:
print(distribution)
for sample_shape in sample_shapes:
describe_sample_tensor_shape(sample_shape, distribution)
print()
sample_shapes = [1, 2, [1, 5], [3, 4, 5]]
describe_sample_tensor_shapes(poisson_distributions, sample_shapes)
describe_sample_tensor_shapes(normal_distribution, sample_shapes)
###Output
tfp.distributions.Normal("Standard/", batch_shape=(), event_shape=(), dtype=float32)
Sample shape: 1
Return sample tensor shape: (1,)
Sample shape: 2
Return sample tensor shape: (2,)
Sample shape: [1, 5]
Return sample tensor shape: (1, 5)
Sample shape: [3, 4, 5]
Return sample tensor shape: (3, 4, 5)
tfp.distributions.Normal("Standard Vector Batch/", batch_shape=(1,), event_shape=(), dtype=float32)
Sample shape: 1
Return sample tensor shape: (1, 1)
Sample shape: 2
Return sample tensor shape: (2, 1)
Sample shape: [1, 5]
Return sample tensor shape: (1, 5, 1)
Sample shape: [3, 4, 5]
Return sample tensor shape: (3, 4, 5, 1)
tfp.distributions.Normal("Different Locs/", batch_shape=(4,), event_shape=(), dtype=float32)
Sample shape: 1
Return sample tensor shape: (1, 4)
Sample shape: 2
Return sample tensor shape: (2, 4)
Sample shape: [1, 5]
Return sample tensor shape: (1, 5, 4)
Sample shape: [3, 4, 5]
Return sample tensor shape: (3, 4, 5, 4)
tfp.distributions.Normal("Broadcasting Scale/", batch_shape=(2, 4), event_shape=(), dtype=float32)
Sample shape: 1
Return sample tensor shape: (1, 2, 4)
Sample shape: 2
Return sample tensor shape: (2, 2, 4)
Sample shape: [1, 5]
Return sample tensor shape: (1, 5, 2, 4)
Sample shape: [3, 4, 5]
Return sample tensor shape: (3, 4, 5, 2, 4)
###Markdown
Computing `log_prob` For Scalar Distributions`log_prob` takes as input a (non-empty) tensor representing the location(s) at which to compute the `log_prob` for the distribution. In the most straightforward case, this tensor will have a shape of the form `[sample_shape, batch_shape, event_shape]`, where `batch_shape` and `event_shape` match the batch and event shapes of the distribution. Recall once more that for scalar distributions, `event_shape = []`, so the input tensor has shape `[sample_shape, batch_shape]` In this case, we get back a tensor of shape `[sample_shape, batch_shape]`:
###Code
three_poissons = tfd.Poisson(rate=[1., 10., 100.], name='Three Poissons')
three_poissons
three_poissons.log_prob([[1., 10., 100.], [100, 10, 1]]) # sample_shape is [2]
three_poissons.log_prob([[[[1., 10., 100.], [100., 10., 1.]]]]) # sample shape is [1, 1, 2].
###Output
_____no_output_____
###Markdown
Note how in the first example, the input and output have shape [2, 3] and in the second example they have shape [1, 1, 2, 3].
###Code
three_poissons.log_prob([10.])
###Output
_____no_output_____
###Markdown
The tensor `[10.]` (with shape [1]) is broadcast across the `batch_shape` of 3, so we evaluate all three Poissons' log probability at the value 10.
###Code
three_poissons.log_prob([[[1.], [10.]], [[100.], [1000.]]])
###Output
_____no_output_____
###Markdown
In the above example, the input tensor has shape `[2, 2, 1]`, while the distributions object has a batch shape of 3. So for each of the `[2, 2]` sample dimensions, the single value provided gets broadcast to each of the three Poissons.A possibly useful way to think of it: because `three_poissons` has `batch_shape = [3]`, a call to `log_prob` must take a Tensor whose last dimension is either 1 or 3; anything else is an error. (The numpy broadcasting rules treat the special case of a scalar as being totally equivalent to a Tensor of shape `[1]`.)
###Code
poisson_2_by_3 = tfd.Poisson(
rate=[[1., 10., 100.,], [2., 20., 200.]],
name='Two-by-Three Poissons')
poisson_2_by_3.log_prob(1.)
poisson_2_by_3.log_prob([1., 10., 100.])
poisson_2_by_3.log_prob([[1., 10., 100.], [1., 10., 100.]])
poisson_2_by_3.log_prob([[1., 1., 1.], [2., 2., 2.]])
poisson_2_by_3.log_prob([[1.], [2.]])
###Output
_____no_output_____
###Markdown
The above examples involved broadcasting over the batch, but the sample shape was empty. Suppose we have a collection of values, and we want to get the log probability of each value at each point in the batch. We could do it manually:
###Code
poisson_2_by_3.log_prob([[[1., 1., 1.], [1., 1., 1.]], [[2., 2., 2.], [2., 2., 2.]]])
###Output
_____no_output_____
###Markdown
Or we could let broadcasting handle the last batch dimension:
###Code
poisson_2_by_3.log_prob([[[1.], [1.]], [[2.], [2.]]])
###Output
_____no_output_____
###Markdown
Suppose we had a long list of values we wanted to evaluate at every batch point. For that, the following notation, which adds extra dimensions of size 1 to the right side of the shape, is extremely useful:
###Code
poisson_2_by_3.log_prob(tf.constant([1., 2.])[..., tf.newaxis, tf.newaxis])
###Output
_____no_output_____
###Markdown
This is an instance of strided slice notation.
###Code
three_poissons.log_prob([[1.], [10.], [50.], [100.]])
three_poissons.log_prob(tf.constant([1., 10., 50., 100.])[..., tf.newaxis])
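# both calls above are equivalent: the [4, 1] input broadcasts against batch_shape [3],
# giving a [4, 3] result -- each of the 4 values evaluated under all 3 Poissons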
###Output
_____no_output_____
###Markdown
Multivariate distributionsWe now turn to multivariate distributions, which have non-empty event shape. Let's look at multinomial distributions
###Code
multinomial_distributions = [
# Multinomial is a vector-valued distribution: if we have k classes,
# an individual sample from the distribution has k values in it, so the
# event shape is `[k]`.
tfd.Multinomial(total_count=100, probs=[.5, .4, .1],
name='One Multinomial'),
tfd.Multinomial(total_count=[100., 1000.], probs=[.5,.4, .1],
name='Two Multinomials Same Probs'),
tfd.Multinomial(total_count=100., probs=[[.5, .4, .1], [.1, .2, .7]],
name='Two Multinomials Same Counts'),
tfd.Multinomial(total_count=[100., 1000],
probs=[[.5, .4, .1],[.1, .2, .7]],
name='Two Multinomials Different Everything')
]
describe_distributions(multinomial_distributions)
describe_sample_tensor_shapes(multinomial_distributions, sample_shapes)
###Output
tfp.distributions.Multinomial("One Multinomial/", batch_shape=(), event_shape=(3,), dtype=float32)
Sample shape: 1
Return sample tensor shape: (1, 3)
Sample shape: 2
Return sample tensor shape: (2, 3)
Sample shape: [1, 5]
Return sample tensor shape: (1, 5, 3)
Sample shape: [3, 4, 5]
Return sample tensor shape: (3, 4, 5, 3)
tfp.distributions.Multinomial("Two Multinomials Same Probs/", batch_shape=(2,), event_shape=(3,), dtype=float32)
Sample shape: 1
Return sample tensor shape: (1, 2, 3)
Sample shape: 2
Return sample tensor shape: (2, 2, 3)
Sample shape: [1, 5]
Return sample tensor shape: (1, 5, 2, 3)
Sample shape: [3, 4, 5]
Return sample tensor shape: (3, 4, 5, 2, 3)
tfp.distributions.Multinomial("Two Multinomials Same Counts/", batch_shape=(2,), event_shape=(3,), dtype=float32)
Sample shape: 1
Return sample tensor shape: (1, 2, 3)
Sample shape: 2
Return sample tensor shape: (2, 2, 3)
Sample shape: [1, 5]
Return sample tensor shape: (1, 5, 2, 3)
Sample shape: [3, 4, 5]
Return sample tensor shape: (3, 4, 5, 2, 3)
tfp.distributions.Multinomial("Two Multinomials Different Everything/", batch_shape=(2,), event_shape=(3,), dtype=float32)
Sample shape: 1
Return sample tensor shape: (1, 2, 3)
Sample shape: 2
Return sample tensor shape: (2, 2, 3)
Sample shape: [1, 5]
Return sample tensor shape: (1, 5, 2, 3)
Sample shape: [3, 4, 5]
Return sample tensor shape: (3, 4, 5, 2, 3)
###Markdown
Computing log probabilities is equally straightforward. Let's work on an example with diagonal Multivariate Normal distributions. (Multinomials are not very broadcast friendly, since the constraints on the counts and probabilities mean broadcasting will often produce inadmissible values.) We'll use a batch of two 3-D distributions with the same mean but different scales (standard deviations):
###Code
two_multivariate_normals = tfd.MultivariateNormalDiag(loc=[1., 2., 3.], scale_identity_multiplier=[1., 2.])
two_multivariate_normals
###Output
_____no_output_____
###Markdown
Now let's evaluate the log probability of each batch point at its mean and at a shifted mean:
###Code
two_multivariate_normals.log_prob([[[1., 2., 3.]], [[3., 4., 5]]])
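# the input has shape [2, 1, 3]; after the event dimension (3) is consumed,
# the remaining [2, 1] broadcasts against batch_shape [2], giving a [2, 2] result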
###Output
_____no_output_____
###Markdown
Exactly equivalently, we can use [strided slice](https://www.tensorflow.org/api_docs/cc/class/tensorflow/ops/strided-slice) to insert an extra shape=1 dimension in the middle of a constant.
###Code
two_multivariate_normals.log_prob(
tf.constant([[1., 2., 3.], [3., 4., 5.]])[:, tf.newaxis, :])
###Output
_____no_output_____
###Markdown
On the other hand, if we don't insert the extra dimension, we pass `[1, 2, 3]` to the first batch point and `[3, 4, 5]` to the second:
###Code
two_multivariate_normals.log_prob(tf.constant([[1., 2., 3], [3, 4, 5]]))
###Output
_____no_output_____
###Markdown
Shape Manipulation Techiques The Reshape BijectorThe `Reshape` bijector can be used to reshape the *event_shape* of a distribution. Let's see an example:
###Code
six_way_multinomial = tfd.Multinomial(total_count=1000., probs=[.3, .25, .2, .15, .08, .02])
six_way_multinomial
###Output
_____no_output_____
###Markdown
We created a multinomial with an event shape of `[6]`. The Reshape Bijector allows us to treat this as a distribution with an event shape of `[2, 3]`.A `Bijector` represents a differentiable, one-to-one function on an open subset of $\mathbb{R}^n$. `Bijectors` are used in conjunction with `TransformedDistribution`, which models a distribution $p(y)$ in terms of a base distribution $p(x)$ and a `Bijector` that represents $Y=g(X)$.
###Code
transformed_multinomial = tfd.TransformedDistribution(
distribution=six_way_multinomial,
bijector=tfb.Reshape(event_shape_out=[2, 3]))
transformed_multinomial
six_way_multinomial.log_prob([500, 100, 100, 150, 100, 50])
transformed_multinomial.log_prob([[500, 100, 100], [150, 100, 50]])
###Output
_____no_output_____
###Markdown
This is the *only* thing the `Reshape` bijector can do: it cannot turn event dimensions into batch dimensions or vice-versa. The Independent DistributionThe `Independent` distribution is used to treat a collection of independent, not-necessarily-identical (aka a batch of) distributions as a single distribution. More concisely, `Independent` allows us to convert dimensions in `batch_shape` to dimensions in `event_shape`. We'll illustrate by example.
###Code
two_by_five_bernoulli = tfd.Bernoulli(
probs=[[0.05, .1, .15, .2, .25], [.3, .35, .4, .45, .5]],
name='Two By Five Bernoulli')
two_by_five_bernoulli
###Output
_____no_output_____
###Markdown
We can think of this as two-by-five array of coins with the associated probabilities of heads. Let's evaluate the probability of a particular, arbitrary set of ones and zeros:
###Code
pattern = [[1., 0., 0., 1., 0.], [0., 0., 1, 1, 1]]
two_by_five_bernoulli.log_prob(pattern)
###Output
_____no_output_____
###Markdown
We can use `Independent` to turn this into two different "sets of five Bernoullis', which is useful if we want to consider a 'row' of coin flips coming up in a given pattern as a single outcome:
###Code
two_sets_of_five = tfd.Independent(distribution=two_by_five_bernoulli,
reinterpreted_batch_ndims=1,
name='Two sets of five')
two_sets_of_five
###Output
_____no_output_____
###Markdown
Mathematically, we are computing the log probability of each 'set' of five by summing the log probabilities of the five 'independent' coin flips in the set, which is where the distribution gets its name:
###Code
two_sets_of_five.log_prob(pattern)
###Output
_____no_output_____
###Markdown
We can go even further and use `Independent` to create a distribution where individual events are a set of two-by-five Bernoullis:
###Code
one_set_of_two_by_five = tfd.Independent(
distribution=two_by_five_bernoulli, reinterpreted_batch_ndims=2,
name="One set of two by five")
one_set_of_two_by_five
###Output
_____no_output_____
###Markdown
It's worth noting that from the perspective of `sample`, using `Independent` changes nothing:
###Code
describe_sample_tensor_shapes(
[two_by_five_bernoulli,
two_sets_of_five,
one_set_of_two_by_five], ([[3, 5]]))
###Output
tfp.distributions.Bernoulli("Two By Five Bernoulli/", batch_shape=(2, 5), event_shape=(), dtype=int32)
Sample shape: [3, 5]
Return sample tensor shape: (3, 5, 2, 5)
tfp.distributions.Independent("Two sets of five/", batch_shape=(2,), event_shape=(5,), dtype=int32)
Sample shape: [3, 5]
Return sample tensor shape: (3, 5, 2, 5)
tfp.distributions.Independent("One set of two by five/", batch_shape=(), event_shape=(2, 5), dtype=int32)
Sample shape: [3, 5]
Return sample tensor shape: (3, 5, 2, 5)
|
python/analyzing the effect neighbors on attraction coefficients that emerge from the particle model.ipynb | ###Markdown
analyzing the effect of screening by neighbors on attractionTim Tyree10.27.2021
###Code
from lib.my_initialization import *
# import scipy
# from scipy import stats
%load_ext autoreload
%autoreload 2
#neighbors=0
input_fn_star=f"/home/timothytyree/Documents/GitHub/bgmc/python/data/osg_output/run_17_ar_star.csv"
df=pd.read_csv(input_fn_star)
#map columns of star df to df
df['varkappa']=df['astar']
df['r']=df['rstar']
df['model_name_full']=df['model_name']
df['rkp']=1/2*np.pi*df['r']**2*df['kappa']
df0=df.copy()
#neighbors=1
input_fn_star=f"/home/timothytyree/Documents/GitHub/bgmc/python/data/osg_output/run_19_ar_star.csv"
df=pd.read_csv(input_fn_star)
#map columns of star df to df
df['varkappa']=df['astar']
df['r']=df['rstar']
df['model_name_full']=df['model_name']
df['rkp']=1/2*np.pi*df['r']**2*df['kappa']
df1=df.copy()
assert not df0.isnull().any().any()
assert not df1.isnull().any().any()
#compute xy values
df=df0.copy()
#slice data
boofk=df['model_name_full']=='fk_pbc'
boolr=df['model_name_full']=='lr_pbc'
#restrict to only the settings with positive D
boofk&=df['D']>0
boolr&=df['D']>0
x_values_fk0=df.loc[boofk,'rkp'].values
y_values_fk0=df.loc[boofk,'astar'].values
x_values_lr0=df.loc[boolr,'rkp'].values
y_values_lr0=df.loc[boolr,'astar'].values
#compute xy values
df=df1.copy()
#slice data
boofk=df['model_name_full']=='fk_pbc'
boolr=df['model_name_full']=='lr_pbc'
#restrict to only the settings with positive D
boofk&=df['D']>0
boolr&=df['D']>0
x_values_fk1=df.loc[boofk,'rkp'].values
y_values_fk1=df.loc[boofk,'astar'].values
x_values_lr1=df.loc[boolr,'rkp'].values
y_values_lr1=df.loc[boolr,'astar'].values
fig,ax=plt.subplots(figsize=(6,4))
ax.scatter(x_values_fk0,y_values_fk0,c='C0',alpha=0.7,s=15,label='Fenton-Karma, vector-summed')
ax.scatter(x_values_lr0,y_values_lr0,c='C1',alpha=0.7,s=15,label='Luo-Rudy, vector-summed')
ax.scatter(x_values_fk1,y_values_fk1,c='C0',marker='^',alpha=0.7,label='Fenton-Karma, neighbors-only')
ax.scatter(x_values_lr1,y_values_lr1,c='C1',marker='^',alpha=0.7,label='Luo-Rudy, neighbors-only')
format_plot(ax=ax,xlabel=r"$\frac{1}{2}\kappa \pi r^2$ (cm$^2$/s)",ylabel=r"a (cm$^2$/s)")#,use_loglog=True)
ax.set_xlim((0,20))
ax.set_ylim((0,20))
ax.legend(fontsize=10,loc='lower right')
plt.show()
fig,ax=plt.subplots(figsize=(6,4))
ax.scatter(x_values_fk0,y_values_fk0,c='C0',alpha=0.7,s=15,label='Fenton-Karma, vector-summed')
ax.scatter(x_values_lr0,y_values_lr0,c='C1',alpha=0.7,s=15,label='Luo-Rudy, vector-summed')
ax.scatter(x_values_fk1,y_values_fk1,c='C0',marker='^',alpha=0.7,label='Fenton-Karma, neighbors-only')
ax.scatter(x_values_lr1,y_values_lr1,c='C1',marker='^',alpha=0.7,label='Luo-Rudy, neighbors-only')
format_plot(ax=ax,xlabel=r"$\frac{1}{2}\kappa \pi r^2$ (cm$^2$/s)",ylabel=r"a (cm$^2$/s)")#,use_loglog=True)
#add dotted lines with visually reasonable estimates for a
xv=np.arange(0,20,0.1)
ax.plot(xv,0.*xv+1.6,'C0--')
ax.plot(xv,0.*xv+8.5,'C1--')
ax.set_xlim((0,20))
ax.set_ylim((0,20))
ax.legend(fontsize=10,loc='lower right')
ax.set_title(f"dotted lines correspond to\na=1.6 (blue) and a=8.5 (orange)\n")
plt.show()
fig,ax=plt.subplots(figsize=(6,4))
ax.scatter(x_values_fk0,y_values_fk0,c='C0',alpha=0.7,s=15,label='Fenton-Karma, vector-summed')
# ax.scatter(x_values_lr0,y_values_lr0,c='C1',alpha=0.7,s=15,label='Luo-Rudy, vector-summed')
ax.scatter(x_values_fk1,y_values_fk1,c='C0',marker='^',alpha=0.7,label='Fenton-Karma, neighbors-only')
# ax.scatter(x_values_lr1,y_values_lr1,c='C1',marker='^',alpha=0.7,label='Luo-Rudy, neighbors-only')
format_plot(ax=ax,xlabel=r"$\frac{1}{2}\kappa \pi r^2$ (cm$^2$/s)",ylabel=r"a (cm$^2$/s)")#,use_loglog=True)
#add dotted lines with visually reasonable estimates for a
xv=np.arange(0,20,0.1)
# ax.plot(xv,0.*xv+1.5,'C0--')
# ax.plot(xv,0.*xv+8.5,'C1--')
# ax.set_xlim((0,20))
# ax.set_ylim((0,20))
ax.legend(fontsize=10,loc='lower right')
# ax.set_title(f"dotted lines correspond to\na=1.5 (blue) and a=8.5 (orange)\n")
plt.show()
fig,ax=plt.subplots(figsize=(6,4))
# ax.scatter(x_values_fk0,y_values_fk0,c='C0',alpha=0.7,s=15,label='Fenton-Karma, vector-summed')
ax.scatter(x_values_lr0,y_values_lr0,c='C1',alpha=0.7,s=15,label='Luo-Rudy, vector-summed')
# ax.scatter(x_values_fk1,y_values_fk1,c='C0',marker='^',alpha=0.7,label='Fenton-Karma, neighbors-only')
ax.scatter(x_values_lr1,y_values_lr1,c='C1',marker='^',alpha=0.7,label='Luo-Rudy, neighbors-only')
format_plot(ax=ax,xlabel=r"$\frac{1}{2}\kappa \pi r^2$ (cm$^2$/s)",ylabel=r"a (cm$^2$/s)")#,use_loglog=True)
#add dotted lines with visually reasonable estimates for a
xv=np.arange(0,20,0.1)
ax.plot(xv,0.*xv+8.25,'k--')
ax.plot(xv,0.*xv+8.5,'C1--')
# ax.set_xlim((0,20))
# ax.set_ylim((0,20))
ax.legend(fontsize=10,loc='center left')
ax.set_title(f"dotted lines correspond to\na=8.5 (orange) and a=8.25 (black)\n")
plt.show()
###Output
_____no_output_____ |
unit1.ipynb | ###Markdown
Unit 1---1. [Introducing Notebooks and Markdown](section1)2. [Getting help](section2)3. [NumPy](section3) 1. Introducing Notebooks and MarkdownIPython - Interactive python. This notebook. If you want to read more, [look here](https://jupyter.readthedocs.io/en/latest/projects/architecture/content-architecture.html)We'll run IPython through Jupyter. This allows you to **tell a story**.Becoming proficient in Python - remember: Text Using Markdown*****If you double click on this cell**, you will see the text change so that all of the formatting is removed. This allows you to edit this block of text. This block of text is written using [Markdown](http://daringfireball.net/projects/markdown/syntax), which is a way to format text using headers, links, italics, and many other options. Hit _shift_ + _enter_ or _shift_ + _return_ on your keyboard to show the formatted text again. This is called "running" the cell, and you can also do it using the run button in the toolbar. Code cellsOne great advantage of IPython notebooks is that you can show your Python code alongside the results, add comments to the code, or even add blocks of text using Markdown. These notebooks allow you to collaborate with others and share your work. The following cell is a code cell.
###Code
print ('hello world')
###Output
hello world
###Markdown
The last line of every code cell will be displayed by default, even if you don't print it.
###Code
2 + 2 # The result of this line will not be displayed
3 + 3 # The result of this line will be displayed, because it is the last line of the cell
###Output
_____no_output_____
###Markdown
Nicely formatted resultsIPython notebooks allow you to display nicely formatted results, such as plots and tables, directly inthe notebook. You'll learn how to use the following libraries later on in this course, but for now here's apreview of what IPython notebook can do.If you run the next cell, you should see the values displayed as a table.
###Code
# Pandas is a software library for data manipulation and analysis. You'll learn to use it later in this course.
import pandas as pd
df = pd.DataFrame({'a': [2, 4, 6, 8], 'b': [1, 3, 5, 7]})
df
###Output
_____no_output_____
###Markdown
If you run the next cell, you should see a scatter plot of the function y = x^2
###Code
%pylab inline
import matplotlib.pyplot as plt
xs = range(-30, 31)
ys = [x ** 2 for x in xs]
plt.scatter(xs, ys)
###Output
Populating the interactive namespace from numpy and matplotlib
###Markdown
Creating cells To create a new **code cell**, click "Insert > Insert Cell [Above or Below]". A code cell will automatically be created.To create a new **markdown cell**, first follow the process above to create a code cell, then change the type from "Code" to "Markdown" using the dropdown next to the run, stop, and restart buttons. Re-running cellsIf you find a bug in your code, you can always update the cell and re-run it. However, any cells that come afterward won't be automatically updated. Try it out below. First run each of the three cells. The first two don't have any output, but you will be able to tell they've run because a number will appear next to them, for example, "In [5]".
###Code
class_name = "Intro to Data Analysis"
message = class_name + " is great!"
message
###Output
_____no_output_____
###Markdown
Once you've run all three cells, try modifying the first one to set `class_name` to your name, rather than "Intro to Data Analysis", so you can print that you are awesome. Then rerun the first and third cells without rerunning the second.You should have seen that the third cell still printed "Intro to Data Analysis is awesome!" That's because you didn't rerun the second cell, so even though the `class_name` variable was updated, the `message` variable was not. Now try rerunning the second cell, and then the third.You should have seen the output change to "*your name* is awesome!" Often, after changing a cell, you'll want to rerun all the cells below it. You can do that quickly by clicking "Run > Run Selected Cell and All Below".One final thing to remember: if you shut down the kernel after saving your notebook, the cells' output will still show up as you left it at the end of your session when you start the notebook back up. However, the state of the kernel will be reset. If you are actively working on a notebook, remember to re-run your cells to set up your working environment to really pick up where you last left off. 2. Getting HelpBased on the Python Data Science Handbook, [chapter 1](https://colab.research.google.com/github/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/01.01-Help-And-Documentation.ipynbscrollTo=AZ5zCKeKEO-x)help???
###Code
help(len)
len?
L = [1, 2, 3]
L.insert?
L?
###Output
_____no_output_____
###Markdown
This will even work for objects you create yourself:
###Code
def square(a):
"Return the square of a."
return a ** 2
square?
###Output
_____no_output_____
###Markdown
Use ?? to view the source code
###Code
square??
###Output
_____no_output_____
###Markdown
If the source code is implemented in python. If not, you won't see it.
###Code
len??
###Output
_____no_output_____
###Markdown
Tab completions
###Code
an_apple = 27
an_example = 42
###Output
_____no_output_____
###Markdown
Now type `an` and press `tab`
###Code
#an
###Output
_____no_output_____
###Markdown
Wildcards
###Code
str.*find*?
###Output
_____no_output_____
###Markdown
3. NumPy Learning to do the impossible. "Waterfall" by M.C.Escher, 1961. Our data is mostly numerical data, e.g., stock prices, sales figures, sensor measurements, sports scores, database tables, etc. The Numpy library provides specialized data structures, functions, and other tools for numerical computing in Python. Documentation is [here](https://numpy.org/learn/)
###Code
import numpy as np
list1 = [1, 2, 5]
list2 = [1, 4, 6]
list1 == list2
list1 + list2
arr1_np = np.array([1, 2, 5])
arr2_np = np.array([1, 4, 6])
arr1_np == arr2_np
arr1_np + arr2_np
###Output
_____no_output_____
###Markdown
Numpy arrays are better than lists for operating on numerical data:- *Ease of use:* small, concise, and intuitive mathematical expressions rather than loops & custom functions.- *Performance:* Numpy operations and functions are implemented internally in C++, which makes them much faster than Python statements & loops that are interpreted at runtime
###Code
x = np.array([1,2,"cat"])
y = np.array([1,3,"cat"])
x==y
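# note: mixing numbers and strings makes numpy cast every element to a string,
# so this comparison is done element-wise on the string representations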
###Output
_____no_output_____
###Markdown
Unit 1---1. [Introducing Notebooks and Markdown](section1)2. [Getting help](section2)3. [NumPy](section3) 1. Introducing Notebooks and MarkdownIPython - Interactive python. This notebook. If you want to read more, [look here](https://jupyter.readthedocs.io/en/latest/projects/architecture/content-architecture.html)We'll run IPython through Jupyter. This allows you to **tell a story**.Becoming proficient in Python - remember: Text Using Markdown*****If you double click on this cell**, you will see the text change so that all of the formatting is removed. This allows you to edit this block of text. This block of text is written using [Markdown](http://daringfireball.net/projects/markdown/syntax), which is a way to format text using headers, links, italics, and many other options. Hit _shift_ + _enter_ or _shift_ + _return_ on your keyboard to show the formatted text again. This is called "running" the cell, and you can also do it using the run button in the toolbar. Code cellsOne great advantage of IPython notebooks is that you can show your Python code alongside the results, add comments to the code, or even add blocks of text using Markdown. These notebooks allow you to collaborate with others and share your work. The following cell is a code cell.
###Code
# Hit shift + enter or use the run button to run this cell and see the results
print ('hello world')
# The last line of every code cell will be displayed by default,
# even if you don't print it. Run this cell to see how this works.
2 + 2 # The result of this line will not be displayed
3 + 3 # The result of this line will be displayed, because it is the last line of the cell
###Output
_____no_output_____
###Markdown
Nicely formatted resultsIPython notebooks allow you to display nicely formatted results, such as plots and tables, directly inthe notebook. You'll learn how to use the following libraries later on in this course, but for now here's apreview of what IPython notebook can do.
###Code
# If you run this cell, you should see the values displayed as a table.
# Pandas is a software library for data manipulation and analysis. You'll learn to use it later in this course.
import pandas as pd
df = pd.DataFrame({'a': [2, 4, 6, 8], 'b': [1, 3, 5, 7]})
df
# If you run this cell, you should see a scatter plot of the function y = x^2
%pylab inline
import matplotlib.pyplot as plt
xs = range(-30, 31)
ys = [x ** 2 for x in xs]
plt.scatter(xs, ys)
###Output
_____no_output_____
###Markdown
Creating cells To create a new **code cell**, click "Insert > Insert Cell [Above or Below]". A code cell will automatically be created.To create a new **markdown cell**, first follow the process above to create a code cell, then change the type from "Code" to "Markdown" using the dropdown next to the run, stop, and restart buttons. Re-running cellsIf you find a bug in your code, you can always update the cell and re-run it. However, any cells that come afterward won't be automatically updated. Try it out below. First run each of the three cells. The first two don't have any output, but you will be able to tell they've run because a number will appear next to them, for example, "In [5]". The third cell should output the message "Intro to Data Analysis is awesome!"
###Code
class_name = "Intro to Data Analysis"
message = class_name + " is awesome!"
message
###Output
_____no_output_____
###Markdown
Once you've run all three cells, try modifying the first one to set `class_name` to your name, rather than "Intro to Data Analysis", so you can print that you are awesome. Then rerun the first and third cells without rerunning the second.You should have seen that the third cell still printed "Intro to Data Analysis is awesome!" That's because you didn't rerun the second cell, so even though the `class_name` variable was updated, the `message` variable was not. Now try rerunning the second cell, and then the third.You should have seen the output change to "*your name* is awesome!" Often, after changing a cell, you'll want to rerun all the cells below it. You can do that quickly by clicking "Cell > Run All Below".One final thing to remember: if you shut down the kernel after saving your notebook, the cells' output will still show up as you left it at the end of your session when you start the notebook back up. However, the state of the kernel will be reset. If you are actively working on a notebook, remember to re-run your cells to set up your working environment to really pick up where you last left off. 2. Getting HelpBased on the Python Data Science Handbook, [chapter 1](https://colab.research.google.com/github/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/01.01-Help-And-Documentation.ipynbscrollTo=AZ5zCKeKEO-x)help???
###Code
help(len)
len?
L = [1, 2, 3]
L.insert?
L?
###Output
_____no_output_____
###Markdown
This will even work for objects you create yourself:
###Code
def square(a):
"""Return the square of a."""
return a ** 2
square?
###Output
_____no_output_____
###Markdown
Use ?? to view the source code
###Code
square??
###Output
_____no_output_____
###Markdown
If the source code is implemented in python. If not, you won't see it.
###Code
len??
###Output
_____no_output_____
###Markdown
Tab completions
###Code
an_apple = 27
an_example = 42
###Output
_____no_output_____
###Markdown
Now try typing:*an* L.
###Code
#an
#L.
###Output
_____no_output_____
###Markdown
Wildcards
###Code
str.*find*?
###Output
_____no_output_____
###Markdown
3. NumPy Learning to do the impossible. "Waterfall" by M.C.Escher, 1961. Our data is mostly numerical data, e.g., stock prices, sales figures, sensor measurements, sports scores, database tables, etc. The Numpy library provides specialized data structures, functions, and other tools for numerical computing in Python. Documentation is [here](https://numpy.org/learn/)
###Code
import numpy as np
list1 = [1, 2, 5]
list2 = [1, 4, 6]
list1 == list2
#list1
list1 + list2
arr1_np = np.array(list1)
arr2_np = np.array(list2)
arr1_np == arr2_np
arr1_np + arr2_np
###Output
_____no_output_____ |
notebooks/model/Gardenier20.ipynb | ###Markdown
FRB host redshift distributionHere we make use of frbpoppy (Gardenier et al. 2020, https://github.com/davidgardenier/frbpoppy ) to estimate the redshift distribution of the observed sample of FRBs.This is done by assuming an intrinsic redshift distribution, generating a mock sample of FRBs and applying telescope selection effects in a Monte-Carlo simulation within frbpoppy, providing a sample of observed FRBs and their properties.The distribution of source redshifts $z$ of the resulting sample serves as the prior $\pi(z)$, which differs according to the assumed intrinsic redshift distribution and the observing telescope.
###Code
## set frbpoppy parameters to match assumption in IGM model
PreFRBLE_population = { 'z_max':6., 'W_m':0.307115, 'W_v':0.692885, 'H_0':67.77 }
## other parameters use defaults, as to match the "complex" scenario presented by Gardenier et al. 2020
###Output
_____no_output_____
###Markdown
exemplary use of frbpoppy
###Code
PLOT = False
# Generate an FRB population
cosmic_pop = CosmicPopulation(1e5, name='example', **PreFRBLE_population )
# Setup a survey
survey = Survey('chime')
# Observe the FRB population
survey_pop = SurveyPopulation(cosmic_pop, survey, rate_limit=False)
# Check the detection rates
print(survey_pop.rates())
# Plot populations
if PLOT:
plot(cosmic_pop, survey_pop, frbcat=False) # frbcat='parkes')
###Output
cosmic_pop.py | Generating example population
cosmic_pop.py | Finished generating example population
survey_pop.py | Surveying example with chime
rates.py | chime Days FRBs
rates.py | ------------------------------------------
rates.py | In population 1.0 100000
rates.py | Detected 1.0 6.501
rates.py | Too late 1.0 0.0
rates.py | Too faint 1.0 429.831
rates.py | Outside survey 1.0 99563.668
rates.py | /Gpc^3 365.25 0.108
rates.py | Expected 0.1538 1
rates.py | ------------------------------------------
rates.py |
###Markdown
Here we perform the actual computation for all telescopes and redshift distributions, or populations, listed in PreFRBLE.parameter. !!! this takes a while. For computation on a cluster which does not allow jupyter notebooks, use _Gardenier20__BigComputation.py_
###Code
from time import time
t0 = time()
ASKAP_population = PreFRBLE_population.copy()
ASKAP_population['z_max'] = 1.3 ## ASKAP only observes to less than this distance, thus do not sample higher distance to obtain a bigger sample in results
N = 1e7
#N = 1e5
for population, color in zip( populations, ['blue','red','green']):
# Generate an FRB population
cosmic_pop = CosmicPopulation(N, name=population, n_model=populations_FRBpoppy[population], **PreFRBLE_population )
# cosmic_pop = CosmicPopulation(N, name=population, n_model=populations_FRBpoppy[population], **ASKAP_population )
print( 'cosmic: %i =? %i' % ( len(cosmic_pop.frbs.z), N ) )
P, x = Histogram(cosmic_pop.frbs.z, density=True, bins=60, range=[0,6])
plt.plot( x[:-1]+np.diff(x)/2, P, label='cosmic population '+population, linestyle=':', color=color)
Write2h5( likelihood_file_redshift, [P,x], [ KeyRedshift( population, "None", axis ) for axis in ["P","x"] ] )
for telescope in telescopes[1:]:
# for telescope in telescopes[:1]:
#Setup a survey
survey = Survey( telescopes_FRBpoppy[telescope] )
# Observe the FRB population
survey_pop = SurveyPopulation(cosmic_pop, survey, rate_limit=False)
print( '%s: %i, %f' % ( telescope, len(survey_pop.frbs.z), float(len(survey_pop.frbs.z))/N ) )
P, x = Histogram(survey_pop.frbs.z, density=True, bins=60, range=[0,6])
survey_pop = 0
plt.plot( x[:-1]+np.diff(x)/2, P, label=telescope+' selection '+population, color=color)
Write2h5( likelihood_file_redshift, [P,x], [ KeyRedshift( population, telescope, axis ) for axis in ["P","x"] ] )
cosmic_pop = 0
plt.yscale('log')
plt.xlabel('redshift')
plt.ylabel('likelihood')
plt.legend()
print( "this took %.2f minutes" % ( (time()-t0)/60 ) )
###Output
cosmic_pop.py | Generating SFR population
cosmic_pop.py | Finished generating SFR population
cosmic: 10000000 =? 10000000
survey_pop.py | Surveying SFR with chime
CHIME: 118822, 0.011882
survey_pop.py | Surveying SFR with parkes
Parkes: 134915, 0.013492
cosmic_pop.py | Generating coV population
cosmic_pop.py | Finished generating coV population
cosmic: 10000000 =? 10000000
survey_pop.py | Surveying coV with chime
CHIME: 112447, 0.011245
survey_pop.py | Surveying coV with parkes
Parkes: 122008, 0.012201
cosmic_pop.py | Generating SMD population
cosmic_pop.py | Finished generating SMD population
cosmic: 10000000 =? 10000000
survey_pop.py | Surveying SMD with chime
CHIME: 401226, 0.040123
survey_pop.py | Surveying SMD with parkes
Parkes: 396802, 0.039680
this took 40.13 minutes
###Markdown
Here is a plot of the results. See _notebooks/Likelihood.ipynb_ for more beautiful plots.
###Code
for population, color in zip( populations, ['blue','red','green']):
#population = populations_FRBpoppy[population]
P, x = GetLikelihood_Redshift( population=population, telescope='None')
plt.plot( x[:-1]+np.diff(x)/2, P, label='cosmic population '+population, linestyle='-', color=color, linewidth=2)
for telescope, linestyle in zip( telescopes, ['-','--','-.',':']):
#telescope = telescopes_FRBpoppy[telescope]
P, x = GetLikelihood_Redshift( population=population, telescope=telescope)
plt.plot( x[:-1]+np.diff(x)/2, P, label=telescope+' selection '+population, color=color, linestyle=linestyle)
plt.yscale('log')
plt.xlabel('redshift')
plt.ylabel('likelihood')
plt.legend()
###Output
_____no_output_____ |
notebooks/bin_images.ipynb | ###Markdown
[](https://neutronimaging.pages.ornl.gov/tutorial/notebooks/bin_images) Select Your IPTS
###Code
from __code.bin_images import BinHandler
from __code import system
system.System.select_working_dir()
from __code.__all import custom_style
custom_style.style()
###Output
_____no_output_____
###Markdown
Select Images to Rebin
###Code
o_bin = BinHandler(working_dir = system.System.get_working_dir())
o_bin.select_images()
###Output
_____no_output_____
###Markdown
Select Bin Parameter
###Code
o_bin.select_bin_parameter()
###Output
_____no_output_____
###Markdown
Export
###Code
o_bin.select_export_folder()
###Output
_____no_output_____ |
BST.ipynb | ###Markdown
BINARY SEARCH TREE
###Code
# this code makes the tree that we'll traverse
class Node(object):
def __init__(self,value = None):
self.value = value
self.left = None
self.right = None
def set_value(self,value):
self.value = value
def get_value(self):
return self.value
def set_left_child(self,left):
self.left = left
def set_right_child(self, right):
self.right = right
def get_left_child(self):
return self.left
def get_right_child(self):
return self.right
def has_left_child(self):
return self.left != None
def has_right_child(self):
return self.right != None
# define __repr_ to decide what a print statement displays for a Node object
def __repr__(self):
return f"Node({self.get_value()})"
def __str__(self):
return f"Node({self.get_value()})"
from collections import deque
class Queue():
def __init__(self):
self.q = deque()
def enq(self,value):
self.q.appendleft(value)
def deq(self):
if len(self.q) > 0:
return self.q.pop()
else:
return None
def __len__(self):
return len(self.q)
def __repr__(self):
if len(self.q) > 0:
s = "<enqueue here>\n_________________\n"
s += "\n_________________\n".join([str(item) for item in self.q])
s += "\n_________________\n<dequeue here>"
return s
else:
return "<queue is empty>"
class Tree():
def __init__(self):
self.root = None
def set_root(self,value):
self.root = Node(value)
def get_root(self):
return self.root
def compare(self,node, new_node):
"""
0 means new_node equals node
-1 means new node less than existing node
1 means new node greater than existing node
"""
if new_node.get_value() == node.get_value():
return 0
elif new_node.get_value() < node.get_value():
return -1 # traverse left
else: #new_node > node
return 1 # traverse right
def insert_with_loop(self,new_value):
new_node = Node(new_value)
node = self.get_root()
if node == None:
self.root = new_node
return
while(True):
comparison = self.compare(node, new_node)
if comparison == 0:
# override with new node's value
node.set_value(new_node.get_value())
break # override node, and stop looping
elif comparison == -1:
# go left
if node.has_left_child():
node = node.get_left_child()
else:
node.set_left_child(new_node)
break #inserted node, so stop looping
else: #comparison == 1
# go right
if node.has_right_child():
node = node.get_right_child()
else:
node.set_right_child(new_node)
break # inserted node, so stop looping
def insert_with_recursion(self,value):
if self.get_root() == None:
self.set_root(value)
return
#otherwise, use recursion to insert the node
self.insert_recursively(self.get_root(), Node(value))
def insert_recursively(self,node,new_node):
comparison = self.compare(node,new_node)
if comparison == 0:
# equal
node.set_value(new_node.get_value())
elif comparison == -1:
# traverse left
if node.has_left_child():
self.insert_recursively(node.get_left_child(),new_node)
else:
node.set_left_child(new_node)
else: #comparison == 1
# traverse right
if node.has_right_child():
self.insert_recursively(node.get_right_child(), new_node)
else:
node.set_right_child(new_node)
def __repr__(self):
level = 0
q = Queue()
visit_order = list()
node = self.get_root()
q.enq( (node,level) )
while(len(q) > 0):
node, level = q.deq()
if node == None:
visit_order.append( ("<empty>", level))
continue
visit_order.append( (node, level) )
if node.has_left_child():
q.enq( (node.get_left_child(), level +1 ))
else:
q.enq( (None, level +1) )
if node.has_right_child():
q.enq( (node.get_right_child(), level +1 ))
else:
q.enq( (None, level +1) )
s = "Tree\n"
previous_level = -1
for i in range(len(visit_order)):
node, level = visit_order[i]
if level == previous_level:
s += " | " + str(node)
else:
s += "\n" + str(node)
previous_level = level
return s
tree = Tree()
tree.insert_with_recursion(5)
tree.insert_with_recursion(6)
tree.insert_with_recursion(4)
tree.insert_with_recursion(2)
tree.insert_with_recursion(5) # insert duplicate
print(tree)
# Solution
class Tree():
def __init__(self):
self.root = None
def set_root(self,value):
self.root = Node(value)
def get_root(self):
return self.root
def compare(self,node, new_node):
"""
0 means new_node equals node
-1 means new node less than existing node
1 means new node greater than existing node
"""
if new_node.get_value() == node.get_value():
return 0
elif new_node.get_value() < node.get_value():
return -1
else:
return 1
def insert(self,new_value):
new_node = Node(new_value)
node = self.get_root()
if node == None:
self.root = new_node
return
while(True):
comparison = self.compare(node, new_node)
if comparison == 0:
                # override the existing node's value with the new node's value
                node.set_value(new_node.get_value())
break # override node, and stop looping
elif comparison == -1:
# go left
if node.has_left_child():
node = node.get_left_child()
else:
node.set_left_child(new_node)
break #inserted node, so stop looping
else: #comparison == 1
# go right
if node.has_right_child():
node = node.get_right_child()
else:
node.set_right_child(new_node)
break # inserted node, so stop looping
def search(self,value):
node = self.get_root()
s_node = Node(value)
while(True):
comparison = self.compare(node,s_node)
if comparison == 0:
return True
elif comparison == -1:
if node.has_left_child():
node = node.get_left_child()
else:
return False
else:
if node.has_right_child():
node = node.get_right_child()
else:
return False
def __repr__(self):
level = 0
q = Queue()
visit_order = list()
node = self.get_root()
q.enq( (node,level) )
while(len(q) > 0):
node, level = q.deq()
if node == None:
visit_order.append( ("<empty>", level))
continue
visit_order.append( (node, level) )
if node.has_left_child():
q.enq( (node.get_left_child(), level +1 ))
else:
q.enq( (None, level +1) )
if node.has_right_child():
q.enq( (node.get_right_child(), level +1 ))
else:
q.enq( (None, level +1) )
s = "Tree\n"
previous_level = -1
for i in range(len(visit_order)):
node, level = visit_order[i]
if level == previous_level:
s += " | " + str(node)
else:
s += "\n" + str(node)
previous_level = level
return s
tree = Tree()
tree.insert(5)
tree.insert(6)
tree.insert(4)
tree.insert(2)
print(f"""
search for 8: {tree.search(8)}
search for 2: {tree.search(2)}
""")
print(tree)
###Output
_____no_output_____
###Markdown
Binary Search Tree
###Code
class BSTnode(object):
def __init__(self, v = 0):
self.val = v
self.left = None
self.right = None
self.parent = None
def __repr__(self):
return self.val.__repr__()
class BST(object):
def __init__(self):
self.root = None
def insert(self, v):
n = BSTnode(v)
if self.root is None:
self.root = n
else:
self.rinsert(self.root, n)
def rinsert(self, p, n):
if n.val < p.val:
if p.left is None:
p.left = n
n.parent = p
else:
self.rinsert(p.left, n)
else:
if p.right is None:
p.right = n
n.parent = p
else:
self.rinsert(p.right, n)
def find(self, v):
return self.rfind(self.root, v)
def rfind(self, n, v):
if n == None:
return None
if n.val == v:
return n
if n.val < v:
return self.rfind(n.right, v)
return self.rfind(n.left, v)
def traverse(self):
print("traverse")
self.inorder(self.root)
def inorder(self, root):
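        # Note: this visits the node before its children, so despite the
        # method name it is actually a pre-order traversal.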
if root is not None:
print(root)
self.inorder(root.left)
self.inorder(root.right)
def getMin(self):
n = self.root
if n == None:
return None
while n.left is not None:
n = n.left
return n
def getMax(self):
n = self.root
if n == None:
return None
while n.right is not None:
n = n.right
return n
def test():
tree = BST()
tree.insert(2)
tree.insert(1)
tree.insert(3)
tree.insert(4)
tree.traverse()
print(tree.find(4))
print(tree.getMin())
print(tree.getMax())
test()
###Output
traverse
2
1
3
4
4
1
4
|
Natural_Disaster_Detection_System.ipynb | ###Markdown
**Natural Disaster Detection System** **Load the Dataset**
###Code
from google.colab import drive
drive.mount('/gdrive')
%cd /gdrive
###Output
Mounted at /gdrive
/gdrive
###Markdown
**Import Necessary Packages**
###Code
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt
import numpy as np
import argparse
import pickle
import cv2
import sys
import os
import tempfile
from imutils import paths
from collections import deque
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import Dropout
from tensorflow.keras.layers import Flatten
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import Input
from tensorflow.keras.models import Model
from tensorflow.keras.models import load_model
from tensorflow.keras.optimizers import SGD
from tensorflow.keras.callbacks import LambdaCallback
from tensorflow.keras.callbacks import *
from tensorflow.keras import backend as K
from sklearn.preprocessing import LabelBinarizer
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from google.colab.patches import cv2_imshow
###Output
_____no_output_____
###Markdown
***Cyclical Learning Rate Callback***
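The `clr()` method below implements the schedule directly. For the default 'triangular' mode (where `scale_fn` is 1), with iteration counter $i$ and step size $s$ it computes $$\text{cycle} = \left\lfloor 1 + \frac{i}{2s} \right\rfloor, \qquad x = \left| \frac{i}{s} - 2\,\text{cycle} + 1 \right|, \qquad \eta = \eta_{base} + (\eta_{max} - \eta_{base})\max(0,\, 1 - x)$$ so the learning rate ramps linearly from `base_lr` up to `max_lr` and back down once per cycle (every $2s$ iterations).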
###Code
class CyclicLR(Callback):
def __init__(self, base_lr=0.001, max_lr=0.006, step_size=2000., mode='triangular',
gamma=1., scale_fn=None, scale_mode='cycle'):
super(CyclicLR, self).__init__()
self.base_lr = base_lr
self.max_lr = max_lr
self.step_size = step_size
self.mode = mode
self.gamma = gamma
if scale_fn == None:
if self.mode == 'triangular':
self.scale_fn = lambda x: 1.
self.scale_mode = 'cycle'
elif self.mode == 'triangular2':
self.scale_fn = lambda x: 1 / (2. ** (x - 1))
self.scale_mode = 'cycle'
elif self.mode == 'exp_range':
self.scale_fn = lambda x: gamma ** (x)
self.scale_mode = 'iterations'
else:
self.scale_fn = scale_fn
self.scale_mode = scale_mode
self.clr_iterations = 0.
self.trn_iterations = 0.
self.history = {}
self._reset()
def _reset(self, new_base_lr=None, new_max_lr=None,
new_step_size=None):
if new_base_lr != None:
self.base_lr = new_base_lr
if new_max_lr != None:
self.max_lr = new_max_lr
if new_step_size != None:
self.step_size = new_step_size
self.clr_iterations = 0.
def clr(self):
cycle = np.floor(1 + self.clr_iterations / (2 * self.step_size))
x = np.abs(self.clr_iterations / self.step_size - 2 * cycle + 1)
if self.scale_mode == 'cycle':
return self.base_lr + (self.max_lr - self.base_lr) * np.maximum(0, (1 - x)) * self.scale_fn(cycle)
else:
return self.base_lr + (self.max_lr - self.base_lr) * np.maximum(0, (1 - x)) * self.scale_fn(
self.clr_iterations)
def on_train_begin(self, logs={}):
logs = logs or {}
if self.clr_iterations == 0:
K.set_value(self.model.optimizer.lr, self.base_lr)
else:
K.set_value(self.model.optimizer.lr, self.clr())
def on_batch_end(self, epoch, logs=None):
logs = logs or {}
self.trn_iterations += 1
self.clr_iterations += 1
self.history.setdefault('lr', []).append(K.get_value(self.model.optimizer.lr))
self.history.setdefault('iterations', []).append(self.trn_iterations)
for k, v in logs.items():
self.history.setdefault(k, []).append(v)
K.set_value(self.model.optimizer.lr, self.clr())
###Output
_____no_output_____
###Markdown
***Learning Rate Finder***
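The finder below runs a learning-rate range test: starting from `startLR`, after every batch the rate is multiplied by $(\text{endLR}/\text{startLR})^{1/N}$, where $N$ is the total number of batch updates, and training stops early once the exponentially smoothed loss exceeds `stopFactor` times the best smoothed loss seen so far. Plotting loss against the (log-scaled) learning rate then suggests a sensible range for `MIN_LR` and `MAX_LR`.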
###Code
class LearningRateFinder:
def __init__(self, model, stopFactor=4, beta=0.98):
self.model = model
self.stopFactor = stopFactor
self.beta = beta
self.lrs = []
self.losses = []
self.lrMult = 1
self.avgLoss = 0
self.bestLoss = 1e9
self.batchNum = 0
self.weightsFile = None
def reset(self):
self.lrs = []
self.losses = []
self.lrMult = 1
self.avgLoss = 0
self.bestLoss = 1e9
self.batchNum = 0
self.weightsFile = None
def is_data_iter(self, data):
iterClasses = ["NumpyArrayIterator", "DirectoryIterator",
"DataFrameIterator", "Iterator", "Sequence"]
return data.__class__.__name__ in iterClasses
def on_batch_end(self, batch, logs):
lr = K.get_value(self.model.optimizer.lr)
self.lrs.append(lr)
l = logs["loss"]
self.batchNum += 1
self.avgLoss = (self.beta * self.avgLoss) + ((1 - self.beta) * l)
smooth = self.avgLoss / (1 - (self.beta ** self.batchNum))
self.losses.append(smooth)
stopLoss = self.stopFactor * self.bestLoss
if self.batchNum > 1 and smooth > stopLoss:
self.model.stop_training = True
return
if self.batchNum == 1 or smooth < self.bestLoss:
self.bestLoss = smooth
lr *= self.lrMult
K.set_value(self.model.optimizer.lr, lr)
def find(self, trainData, startLR, endLR, epochs=None,
stepsPerEpoch=None, batchSize=32, sampleSize=2048,
verbose=1):
self.reset()
useGen = self.is_data_iter(trainData)
if useGen and stepsPerEpoch is None:
msg = "Using generator without supplying stepsPerEpoch"
raise Exception(msg)
elif not useGen:
numSamples = len(trainData[0])
stepsPerEpoch = np.ceil(numSamples / float(batchSize))
if epochs is None:
epochs = int(np.ceil(sampleSize / float(stepsPerEpoch)))
numBatchUpdates = epochs * stepsPerEpoch
self.lrMult = (endLR / startLR) ** (1.0 / numBatchUpdates)
self.weightsFile = tempfile.mkstemp()[1]
self.model.save_weights(self.weightsFile)
origLR = K.get_value(self.model.optimizer.lr)
K.set_value(self.model.optimizer.lr, startLR)
callback = LambdaCallback(on_batch_end=lambda batch, logs:
self.on_batch_end(batch, logs))
if useGen:
self.model.fit_generator(
trainData,
steps_per_epoch=stepsPerEpoch,
epochs=epochs,
verbose=verbose,
callbacks=[callback])
else:
self.model.fit(
trainData[0], trainData[1],
batch_size=batchSize,
epochs=epochs,
callbacks=[callback],
verbose=verbose)
self.model.load_weights(self.weightsFile)
K.set_value(self.model.optimizer.lr, origLR)
def plot_loss(self, skipBegin=10, skipEnd=1, title=""):
lrs = self.lrs[skipBegin:-skipEnd]
losses = self.losses[skipBegin:-skipEnd]
plt.plot(lrs, losses)
plt.xscale("log")
plt.xlabel("Learning Rate (Log Scale)")
plt.ylabel("Loss")
if title != "":
plt.title(title)
###Output
_____no_output_____
###Markdown
**Set the Path to Dataset**
###Code
DATASET_PATH = '/gdrive/MyDrive/Cyclone_Wildfire_Flood_Earthquake_Database'
Cyclone='/gdrive/MyDrive/Cyclone_Wildfire_Flood_Earthquake_Database/Cyclone'
Earthquake='/gdrive/MyDrive/Cyclone_Wildfire_Flood_Earthquake_Database/Earthquake'
Wildfire="/gdrive/MyDrive/Cyclone_Wildfire_Flood_Earthquake_Database/Flood"
Flood="/gdrive/MyDrive/Cyclone_Wildfire_Flood_Earthquake_Database/Wildfire"
# initializing the class labels in dataset
CLASSES = ["Cyclone", "Earthquake", "Flood", "Wildfire"]
###Output
_____no_output_____
###Markdown
**Set the Hyperparameter Values**
###Code
MIN_LR = 1e-6 #minimum learning rate
MAX_LR = 1e-4 #maximum learning rate
BATCH_SIZE = 32 #batch size
STEP_SIZE = 8 #step size
CLR_METHOD = "triangular" #Cyclical Learning Rate Method
NUM_EPOCHS = 48 #number of epochs
###Output
_____no_output_____
###Markdown
**Set the Path to Saved Model and Output Curves**
###Code
output = '/content/output'
MODEL_PATH = os.path.sep.join(["/content/output", "natural_disaster.model"])
LRFIND_PLOT_PATH = os.path.sep.join(["/content/output", "lrfind_plot.png"])
TRAINING_PLOT_LOSS_PATH = os.path.sep.join(["/content/output", "training_plot_loss.png"])
CLR_PLOT_PATH = os.path.sep.join(["/content/output", "clr_plot.png"])
TRAINING_PLOT_ACCURACY_PATH = os.path.sep.join(["/content/output", "training_plot_accuracy.png"])
CONFUSION_MATRIX_PATH = os.path.sep.join(["/content/output", "confusion_matrix.png"])
###Output
_____no_output_____
###Markdown
**Data Splitting and Image Preprocessing**
###Code
#define train,test,validation split ratio
TRAIN_SPLIT = 0.75
VAL_SPLIT = 0.1
TEST_SPLIT = 0.25
print("Loading images...")
imagePaths = list(paths.list_images(DATASET_PATH))
data = []
labels = []
for imagePath in imagePaths:
label = imagePath.split(os.path.sep)[-2]
image = cv2.imread(imagePath) #load the image
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB) #convert it to RGB channel ordering
image = cv2.resize(image, (224, 224)) # resize it to be a fixed 224x224 pixels, ignoring aspect ratio
data.append(image)
labels.append(label)
print("processing images...")
data = np.array(data, dtype="float32")
labels = np.array(labels)
lb = LabelBinarizer()
labels = lb.fit_transform(labels)
# partition the data into training and testing splits
(trainX, testX, trainY, testY) = train_test_split(data, labels,
test_size=TEST_SPLIT, random_state=42)
# take the validation split from the training split
(trainX, valX, trainY, valY) = train_test_split(trainX, trainY,
test_size=VAL_SPLIT, random_state=84)
# initialize the training data augmentation object
aug = ImageDataGenerator(
rotation_range=30,
zoom_range=0.15,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.15,
horizontal_flip=True,
fill_mode="nearest")
###Output
Loading images...
processing images...
###Markdown
**Load VGG16 Network**
###Code
baseModel = VGG16(weights="imagenet", include_top=False,
input_tensor=Input(shape=(224, 224, 3)))
###Output
Downloading data from https://storage.googleapis.com/tensorflow/keras-applications/vgg16/vgg16_weights_tf_dim_ordering_tf_kernels_notop.h5
58892288/58889256 [==============================] - 0s 0us/step
###Markdown
**Build the Model**
###Code
headModel = baseModel.output
headModel = Flatten(name="flatten")(headModel)
headModel = Dense(512, activation="relu")(headModel)
headModel = Dropout(0.5)(headModel)
headModel = Dense(len(CLASSES), activation="softmax")(headModel)
model = Model(inputs=baseModel.input, outputs=headModel)
for layer in baseModel.layers:
layer.trainable = False
###Output
_____no_output_____
###Markdown
**Compile Model**
###Code
print("Compiling model...")
opt = SGD(lr=MIN_LR, momentum=0.9)
model.compile(loss="categorical_crossentropy", optimizer=opt,
metrics=["accuracy"])
###Output
Compiling model...
###Markdown
**Find Learning Rate**
###Code
print("Finding learning rate...")
lrf = LearningRateFinder(model)
lrf.find(
aug.flow(trainX, trainY, batch_size=BATCH_SIZE),
1e-10, 1e+1,
stepsPerEpoch=np.ceil((trainX.shape[0] / float(BATCH_SIZE))),
epochs=20,
batchSize=BATCH_SIZE)
lrf.plot_loss()
plt.savefig(LRFIND_PLOT_PATH)
print("Learning rate finder complete")
stepSize = STEP_SIZE * (trainX.shape[0] // BATCH_SIZE)
clr = CyclicLR(
mode=CLR_METHOD,
base_lr=MIN_LR,
max_lr=MAX_LR,
step_size=stepSize)
###Output
Finding learning rate...
###Markdown
**Train the Network/Fit the Model**
###Code
print("Training network...")
H = model.fit_generator(
aug.flow(trainX, trainY, batch_size=BATCH_SIZE),
validation_data=(valX, valY),
steps_per_epoch=trainX.shape[0] // BATCH_SIZE,
epochs=NUM_EPOCHS,
callbacks=[clr],
verbose=1)
print("Network trained")
###Output
Training network...
###Markdown
**Evaluate the Network**
###Code
print("Evaluating network...")
predictions = model.predict(testX, batch_size=BATCH_SIZE)
print('Classification Report: ')
print(classification_report(testY.argmax(axis=1),
predictions.argmax(axis=1), target_names=CLASSES))
###Output
Evaluating network...
Classification Report:
precision recall f1-score support
Cyclone 0.98 0.97 0.98 244
Earthquake 0.97 0.91 0.94 328
Flood 0.86 0.95 0.90 249
Wildfire 0.96 0.95 0.96 286
accuracy 0.94 1107
macro avg 0.94 0.95 0.94 1107
weighted avg 0.95 0.94 0.94 1107
###Markdown
**Save the Model to Disk**
###Code
print("Serializing network to '{}'...".format(MODEL_PATH))
model.save(MODEL_PATH)
###Output
Serializing network to '/content/output/natural_disaster.model'...
INFO:tensorflow:Assets written to: /content/output/natural_disaster.model/assets
###Markdown
**Plot and Save Loss Curve**
###Code
N = np.arange(0, NUM_EPOCHS)
plt.style.use("ggplot")
plt.figure()
plt.plot(N, H.history["loss"], label="train_loss")
plt.plot(N, H.history["val_loss"], label="val_loss")
plt.title("Training Loss")
plt.xlabel("Epoch #")
plt.ylabel("Loss")
plt.legend(loc="lower left")
plt.show()
plt.savefig(TRAINING_PLOT_LOSS_PATH)
###Output
_____no_output_____
###Markdown
**Plot and Save Accuracy Curve**
###Code
N = np.arange(0, NUM_EPOCHS)
plt.style.use("ggplot")
plt.figure()
plt.plot(N, H.history["accuracy"], label="train_acc")
plt.plot(N, H.history["val_accuracy"], label="val_acc")
plt.title("Training Accuracy")
plt.xlabel("Epoch #")
plt.ylabel("Accuracy")
plt.legend(loc="lower left")
plt.savefig(TRAINING_PLOT_ACCURACY_PATH)
plt.show()
###Output
_____no_output_____
###Markdown
**Plot and Save Learning Rate History Curve**
###Code
N = np.arange(0, len(clr.history["lr"]))
plt.figure()
plt.plot(N, clr.history["lr"])
plt.title("Cyclical Learning Rate (CLR)")
plt.xlabel("Training Iterations")
plt.ylabel("Learning Rate")
plt.savefig(CLR_PLOT_PATH)
plt.show()
###Output
_____no_output_____
###Markdown
**Plot Confusion Matrix**
###Code
from sklearn.metrics import confusion_matrix
def plot_confusion_matrix1(y_true, y_pred, classes,
normalize=False,
title=None,
cmap=plt.cm.Blues):
if not title:
if normalize:
title = 'Normalized confusion matrix'
else:
title = 'Confusion matrix, without normalization'
cm = confusion_matrix(y_true, y_pred)
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print("Normalized confusion matrix")
else:
print('Confusion matrix, without normalization')
print(cm)
fig, ax = plt.subplots()
im = ax.imshow(cm, interpolation='nearest', cmap=cmap)
ax.figure.colorbar(im, ax=ax)
ax.set(xticks=np.arange(cm.shape[1]),
yticks=np.arange(cm.shape[0]),
xticklabels=classes, yticklabels=classes,
title=title,
ylabel='True label',
xlabel='Predicted label')
plt.setp(ax.get_xticklabels(), rotation=45, ha="right",
rotation_mode="anchor")
fmt = '.2f' if normalize else 'd'
thresh = cm.max() / 2.
for i in range(cm.shape[0]):
for j in range(cm.shape[1]):
ax.text(j, i, format(cm[i, j], fmt),
ha="center", va="center",
color="white" if cm[i, j] > thresh else "black")
fig.tight_layout()
return ax
np.set_printoptions(precision=2)
y_test = testY.argmax(axis=1)
y_pred = predictions.argmax(axis=1)
lb = ["Cyclone", "Earthquake", "Flood", "Wildfire"] #Thunderstorm, Building_Collapse
# Plot normalized confusion matrix
plot_confusion_matrix1(y_test, y_pred, classes=lb, normalize=True,
title='Normalized confusion matrix')
plt.savefig(CONFUSION_MATRIX_PATH)
!pip install twilio
from twilio.rest import Client
###Output
Collecting twilio
[?25l Downloading https://files.pythonhosted.org/packages/e9/0e/d54630e6daae43dd74d44a94f52d1072b5332c374d699938d7d1db20a54c/twilio-6.50.1.tar.gz (457kB)
[K |████████████████████████████████| 460kB 8.7MB/s
[?25hRequirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from twilio) (1.15.0)
Requirement already satisfied: pytz in /usr/local/lib/python3.6/dist-packages (from twilio) (2018.9)
Collecting PyJWT>=1.4.2
Downloading https://files.pythonhosted.org/packages/91/5f/5cff1c3696e0d574f5741396550c9a308dde40704d17e39e94b89c07d789/PyJWT-2.0.0-py3-none-any.whl
Requirement already satisfied: requests>=2.0.0 in /usr/local/lib/python3.6/dist-packages (from twilio) (2.23.0)
Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.6/dist-packages (from requests>=2.0.0->twilio) (2.10)
Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.6/dist-packages (from requests>=2.0.0->twilio) (3.0.4)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.6/dist-packages (from requests>=2.0.0->twilio) (1.24.3)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.6/dist-packages (from requests>=2.0.0->twilio) (2020.12.5)
Building wheels for collected packages: twilio
Building wheel for twilio (setup.py) ... [?25l[?25hdone
Created wheel for twilio: filename=twilio-6.50.1-py2.py3-none-any.whl size=1208685 sha256=fd5534674045e08d4e5055fb1fb219c9b2867317d1e79a9909b403ed56d0fbfa
Stored in directory: /root/.cache/pip/wheels/17/10/6c/1b04371d399b059dcea195e00729e096fd959e1e35b0e7c8a2
Successfully built twilio
Installing collected packages: PyJWT, twilio
Successfully installed PyJWT-2.0.0 twilio-6.50.1
###Markdown
**Predict the Video**
###Code
from twilio.rest import Client
input='/gdrive/MyDrive/videos/cyclone_1.mp4'
size=128
display=1
# load the trained model from disk
print("Loading model and label binarizer...")
model = load_model(MODEL_PATH)
mean = np.array([123.68, 116.779, 103.939][::1], dtype="float32")
Q = deque(maxlen=size) #predictions queue
print("Processing video...")
vs = cv2.VideoCapture(input) #initializing video stream
writer = None #pointer to output video file
(W, H) = (None, None) #initialize frame dimensions
client = Client("ACac0f843371e740077a4aa7734e4e2ad7", "4a5605c2a39d5b18dc0383f68b6b7415")
prelabel = ''
ok = 'Normal'
fi_label = []
framecount = 0
while True:
(grabbed,frame) = vs.read()
if not grabbed:
break
if W is None or H is None:
(H,W)=frame.shape[:2]
framecount = framecount + 1
output=frame.copy()
frame=cv2.cvtColor(frame,cv2.COLOR_BGR2RGB)
frame=cv2.resize(frame,(224,224))
frame=frame.astype("float32")
frame=frame - mean
preds = model.predict(np.expand_dims(frame,axis=0))[0]
prediction=preds.argmax(axis=0)
Q.append(preds)
results = np.array(Q).mean(axis=0)
maxprobab=np.max(results)
i=np.argmax(results)
label=CLASSES[i]
rest = 1-maxprobab
diff= (maxprobab)-(rest)
th=100
if diff>0.80:
th=diff
fi_label = np.append(fi_label, label)
text = "Alert : {} - {:.2f}%".format((label), maxprobab * 100)
cv2.putText(output, text, (35, 50), cv2.FONT_HERSHEY_SIMPLEX, 1.25, (0, 255, 0), 5)
if label != prelabel:
client.messages \
.create(to="+911234567890",
from_="+19388882407",
body='\n'+ str(text))
prelabel = label
if writer is None:
fourcc = cv2.VideoWriter_fourcc(*"mp4v")
writer = cv2.VideoWriter('/content/output/result.mp4', fourcc, 30,(W, H), True)
writer.write(output)
print('Frame count', framecount)
print('Count label', fi_label)
#cv2_imshow(output)
writer.release()
vs.release()
###Output
_____no_output_____ |
1.3 Data Wrangling.ipynb | ###Markdown
Data Wrangling Data wrangling prepares raw data for downstream uses such as further munging, data visualization, data aggregation, or training a statistical model. As a process it typically follows a set of general steps: extract the data in raw form from the data source, "munge" the raw data using algorithms (e.g. sorting) or parse it into predefined data structures, and finally deposit the resulting content into a data sink for storage and future use.
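As a minimal sketch of that extract → munge → deposit flow with pandas (the file and column names here are hypothetical, not part of this lesson's dataset):

```
import pandas as pd

# extract: read raw data from its source (hypothetical file name)
raw = pd.read_csv("raw_measurements.csv")

# munge: clean, sort and normalize into the structure we want
clean = (raw
         .dropna()                      # drop incomplete rows
         .sort_values("timestamp")      # order records in time (hypothetical column)
         .rename(columns=str.lower))    # normalize column names

# deposit: store the result in a "data sink" for later use
clean.to_csv("clean_measurements.csv", index=False)
```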
###Code
import pandas as pd
import os
customer_churn_dataset = os.path.join(os.path.abspath(os.path.curdir), 'data', 'customer-churn-model', 'Customer Churn Model.txt')
data = pd.read_csv(customer_churn_dataset)
data.head()
###Output
_____no_output_____
###Markdown
Create a subset of data Subset of a single Series
###Code
account_length = data["Account Length"]
account_length.head()
type(account_length)
subset = data[["Account Length", "Phone", "Eve Charge", "Day Calls"]]
subset.head()
type(subset)
desired_columns = ["Account Length", "Phone", "Eve Charge", "Night Calls"]
subset = data[desired_columns]
subset.head()
desired_columns = ["Account Length", "VMail Message", "Day Calls"]
desired_columns
all_columns_list = data.columns.values.tolist()
all_columns_list
sublist = [x for x in all_columns_list if x not in desired_columns]
sublist
subset = data[sublist]
subset.head()
###Output
_____no_output_____
###Markdown
Subset of Rows - Slicing. The operation of selecting multiple rows of a Data Frame is sometimes called slicing.
###Code
data[1:25]
data[10:35]
data[:8] # equivalent to data[0:8]
data[3320:]
###Output
_____no_output_____
###Markdown
Row Slicing with boolean conditions
###Code
# Selecting values with Day Mins > 300
data1 = data[data["Day Mins"]>300]
data1.shape
# Selecting values with State = "NY"
data2 = data[data["State"]=="NY"]
data2.shape
## AND -> &
data3 = data[(data["Day Mins"]>300) & (data["State"]=="NY")]
data3.shape
## OR -> |
data4 = data[(data["Day Mins"]>300) | (data["State"]=="NY")]
data4.shape
data5 = data[data["Day Calls"]< data["Night Calls"]]
data5.shape
data6 = data[data["Day Mins"]<data["Night Mins"]]
data6.shape
subset_first_50 = data[["Day Mins", "Night Mins", "Account Length"]][:50]
subset_first_50.head()
subset[:10]
###Output
_____no_output_____
###Markdown
Filtering with ix -> loc and iloc (`.ix` is deprecated in recent pandas, so we use `.loc` for label-based and `.iloc` for position-based indexing)
###Code
data.iloc[1:10, 3:6] ## rows 1 through 9, columns 3 through 5 (slice ends are exclusive)
data.iloc[:,3:6] # all rows, third to sixth columns
data.iloc[1:10,:] # All cols, rows from 1 to 10
data.iloc[1:10, [2,5,7]] # selecting specific columns
data.iloc[[1,5,8,36], [2,5,7]]
data.loc[[1,5,8,36], ["Area Code", "VMail Plan", "Day Mins"]]
###Output
_____no_output_____
###Markdown
Inserting new columns in a Data Frame
###Code
data["Total Mins"] = data["Day Mins"] + data["Night Mins"] + data["Eve Mins"]
data["Total Mins"].head()
data["Total Calls"] = data["Day Calls"] + data["Night Calls"] + data["Eve Calls"]
data["Total Calls"].head()
data.shape
data.head()
###Output
_____no_output_____ |
HWs/HW04_solutions.ipynb | ###Markdown
HW 04 Solutions 1. (a) The output is 0-5 V. (b) The best DAS input range would be 0-10 V; there will be clipping otherwise. (c) The pressure transducer has 3 sources of elemental uncertainty: linearity, repeatability, and hysteresis. We will use the RSS of the 3 elemental uncertainties to find the uncertainty of the pressure transducer. First we have to select a common unit to work with for the pressure transducer and the DAS. Because the uncertainties of the PT are given in full scale (FS), it is preferable to work in absolute voltages.
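The root-sum-square (RSS) combination used throughout these solutions is $$u = \sqrt{u_1^2 + u_2^2 + \dots + u_n^2}$$ so for the transducer $u_{PT} = \sqrt{u_{lin}^2 + u_{rep}^2 + u_{hys}^2}$, which is exactly what the cell below computes.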
###Code
import numpy
# Pressure transducer
FS_PT = 5 # V
u_PT_lin = 0.0025 * FS_PT # V
u_PT_rep = 0.0006 * FS_PT # V
u_PT_hys = 0.001 * FS_PT # V
print("linearity %1.4f" % u_PT_lin, 'V')
print( "repeatability %1.4f" % u_PT_rep, 'V')
print("hysterisi %1.4f" % u_PT_hys, 'V')
u_PT = numpy.sqrt(u_PT_lin**2 + u_PT_rep**2 + u_PT_hys**2)
print("Uncertainty of pressure transducer, u_PT = %1.4f" % u_PT , ' V')
# DAS
N = 16 # bits
Range = 10 # V, guarantee no clipping
Q = Range/2**(N+1) # V
LSD = 2 * Q # V
u_DAS_lin = 2 * LSD # V
u_DAS_gain = 2 * LSD # V
u_DAS = numpy.sqrt(u_DAS_lin**2 + u_DAS_gain**2 + Q**2)
print("linearity %1.6f" % u_DAS_lin, ' V')
print("gain %1.6f" % u_DAS_gain, ' V')
print("quantization error %1.6f" % Q, ' V')
print("Uncertainty of DAS %1.6f" %u_DAS, ' V')
###Output
linearity 0.0125 V
repeatability 0.0030 V
hysteresis 0.0050 V
Uncertainty of pressure transducer, u_PT = 0.0138 V
linearity 0.000305 V
gain 0.000305 V
quantization error 0.000076 V
Uncertainty of DAS 0.000438 V
###Markdown
(d) We use the RSS of the PT and DAS uncertainties to find the overall uncertainty of the system.
###Code
u_overall = numpy.sqrt(u_PT**2 + u_DAS**2)
print("overall uncertainty %1.6f" %u_overall, ' V')
###Output
overall uncertainty 0.013800 V
###Markdown
(e) The overall uncertainty is dominated by the pressure transducer, so I would try to replace it. 2. The overall accuracy is the RSS of the thermocouple and its display. I would compute and report the uncertainties in $^\circ C$.
###Code
u_TC = 0.5 # C
u_display = 0.5 # C
u_overall_TC = numpy.sqrt(u_TC**2 + u_display**2)
print("overall uncertainty is %1.4f" % u_overall_TC, 'C')
###Output
overall uncertainty is 0.7071 C
###Markdown
3. (a) Remember the ideal gas law:\begin{align}\rho = \frac{P}{RT} = P^1 R^{-1} T^{-1}\end{align}The ideal gas constant $R$ is very well known, so we can assume it is known perfectly (i.e. its uncertainty is negligible). Using the uncertainty-of-a-result formula, one has:\begin{align}\frac{u_\rho}{\rho} = \sqrt{\left( 1 \cdot \frac{u_P}{P}\right)^2 + \left( -1 \cdot \frac{u_T}{T} \right)^2}\end{align}I need to select a unit for this analysis. I will work in percentage error, which is ideal for the relative uncertainties used here, and convert to the appropriate unit when necessary.
###Code
u_PP = 0.005 # relative
u_T = 1 # K
T = 288 # temperature is the thermodynamic temperature in Kelvin
u_rhorho = numpy.sqrt(u_PP**2 + (u_T/T)**2)
print("relative temperature uncertainty %1.4f" % (u_T/T*100), '%')
print("relative uncertainty on density %1.4f" % (u_rhorho*100), ' %')
###Output
relative temperature uncertainty 0.3472 %
relative uncertainty on density 0.6087 %
###Markdown
(b) If we increase the accuracy of the temperature measurement from $\pm 1^\circ C$ to $\pm 0.5^\circ C$, our relative error on the temperature measurement is halved, to $\pm 0.1736\%$, so we can hope to meet our resolution target on the density. To solve this problem, the trick is not to invert the governing equation (the result) first, but to invert the uncertainty-of-a-result equation. Starting from the equation above and squaring it:\begin{align}\left( \frac{u_\rho}{\rho} \right)^2 & = \left( \frac{u_P}{P}\right)^2 + \left( \frac{u_T}{T} \right)^2 \\\left( \frac{u_P}{P}\right)^2 & = \left( \frac{u_\rho}{\rho} \right)^2 - \left( \frac{u_T}{T} \right)^2 \end{align}
###Code
u_TT = 0.5/288 # relative
u_rhrh = 0.0025 # relative
u_PP2 = numpy.sqrt(u_rhrh**2 - u_TT**2)
print("Relative pressure uncertainty must be no larger than %1.4f" % (u_PP2*100), '%')
###Output
Relative pressure uncertainty must be no larger than 0.1799 %
|
3.2_mini-batch_gradient_descent_v3.ipynb | ###Markdown
Linear Regression 1D: Training Two Parameter Mini-Batch Gradient Descent Objective How to use Mini-Batch Gradient Descent to train a model. Table of Contents: In this Lab, you will practice training a model by using Mini-Batch Gradient Descent. Make Some Data Create the Model and Cost Function (Total Loss) Train the Model: Batch Gradient Descent Train the Model: Stochastic Gradient Descent with Dataset DataLoader Train the Model: Mini Batch Gradient Descent: Batch Size Equals 5 Train the Model: Mini Batch Gradient Descent: Batch Size Equals 10. Estimated Time Needed: 30 min Preparation We'll need the following libraries:
###Code
# Import the libraries we need for this lab
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits import mplot3d
###Output
_____no_output_____
###Markdown
The class plot_error_surfaces is just to help you visualize the data space and the parameter space during training and has nothing to do with PyTorch.
###Code
# The class for plotting the diagrams
class plot_error_surfaces(object):
# Constructor
def __init__(self, w_range, b_range, X, Y, n_samples = 30, go = True):
W = np.linspace(-w_range, w_range, n_samples)
B = np.linspace(-b_range, b_range, n_samples)
w, b = np.meshgrid(W, B)
Z = np.zeros((30, 30))
count1 = 0
self.y = Y.numpy()
self.x = X.numpy()
for w1, b1 in zip(w, b):
count2 = 0
for w2, b2 in zip(w1, b1):
Z[count1, count2] = np.mean((self.y - w2 * self.x + b2) ** 2)
count2 += 1
count1 += 1
self.Z = Z
self.w = w
self.b = b
self.W = []
self.B = []
self.LOSS = []
self.n = 0
if go == True:
plt.figure()
plt.figure(figsize = (7.5, 5))
plt.axes(projection = '3d').plot_surface(self.w, self.b, self.Z, rstride = 1, cstride = 1, cmap = 'viridis', edgecolor = 'none')
plt.title('Loss Surface')
plt.xlabel('w')
plt.ylabel('b')
plt.show()
plt.figure()
plt.title('Loss Surface Contour')
plt.xlabel('w')
plt.ylabel('b')
plt.contour(self.w, self.b, self.Z)
plt.show()
# Setter
def set_para_loss(self, W, B, loss):
self.n = self.n + 1
self.W.append(W)
self.B.append(B)
self.LOSS.append(loss)
# Plot diagram
def final_plot(self):
ax = plt.axes(projection = '3d')
ax.plot_wireframe(self.w, self.b, self.Z)
ax.scatter(self.W, self.B, self.LOSS, c = 'r', marker = 'x', s = 200, alpha = 1)
plt.figure()
plt.contour(self.w, self.b, self.Z)
plt.scatter(self.W, self.B, c = 'r', marker = 'x')
plt.xlabel('w')
plt.ylabel('b')
plt.show()
# Plot diagram
def plot_ps(self):
plt.subplot(121)
plt.ylim()
plt.plot(self.x, self.y, 'ro', label = "training points")
plt.plot(self.x, self.W[-1] * self.x + self.B[-1], label = "estimated line")
plt.xlabel('x')
plt.ylabel('y')
plt.title('Data Space Iteration: '+ str(self.n))
plt.subplot(122)
plt.contour(self.w, self.b, self.Z)
plt.scatter(self.W, self.B, c = 'r', marker = 'x')
plt.title('Loss Surface Contour')
plt.xlabel('w')
plt.ylabel('b')
plt.show()
###Output
_____no_output_____
###Markdown
Make Some Data Import PyTorch and set random seed:
###Code
# Import PyTorch library
import torch
torch.manual_seed(1)
###Output
_____no_output_____
###Markdown
Generate values from -3 to 3 that create a line with a slope of 1 and a bias of -1. This is the line that you need to estimate. Add some noise to the data:
###Code
# Generate the data with noise and the line
X = torch.arange(-3, 3, 0.1).view(-1, 1)
f = 1 * X - 1
Y = f + 0.1 * torch.randn(X.size())
###Output
_____no_output_____
###Markdown
Plot the results:
###Code
# Plot the line and the data
plt.plot(X.numpy(), Y.numpy(), 'rx', label = 'y')
plt.plot(X.numpy(), f.numpy(), label = 'f')
plt.xlabel('x')
plt.ylabel('y')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Create the Model and Cost Function (Total Loss) Define the forward function:
###Code
# Define the prediction function
def forward(x):
return w * x + b
###Output
_____no_output_____
###Markdown
Define the cost or criterion function:
###Code
# Define the cost function
def criterion(yhat, y):
return torch.mean((yhat - y) ** 2)
###Output
_____no_output_____
###Markdown
Create a plot_error_surfaces object to visualize the data space and the parameter space during training:
###Code
# Create a plot_error_surfaces object.
get_surface = plot_error_surfaces(15, 13, X, Y, 30)
###Output
_____no_output_____
###Markdown
Train the Model: Batch Gradient Descent (BGD) Define train_model_BGD function.
###Code
# Define the function for training model
w = torch.tensor(-15.0, requires_grad = True)
b = torch.tensor(-10.0, requires_grad = True)
lr = 0.1
LOSS_BGD = []
def train_model_BGD(epochs):
for epoch in range(epochs):
Yhat = forward(X)
loss = criterion(Yhat, Y)
LOSS_BGD.append(loss)
get_surface.set_para_loss(w.data.tolist(), b.data.tolist(), loss.tolist())
get_surface.plot_ps()
loss.backward()
w.data = w.data - lr * w.grad.data
b.data = b.data - lr * b.grad.data
w.grad.data.zero_()
b.grad.data.zero_()
###Output
_____no_output_____
###Markdown
Run 10 epochs of batch gradient descent. (Known plotting quirk: the data-space plot runs 1 iteration ahead of the parameter-space plot.)
###Code
# Run train_model_BGD with 10 iterations
train_model_BGD(10)
###Output
_____no_output_____
###Markdown
Stochastic Gradient Descent (SGD) with Dataset DataLoader Create a plot_error_surfaces object to visualize the data space and the parameter space during training:
###Code
# Create a plot_error_surfaces object.
get_surface = plot_error_surfaces(15, 13, X, Y, 30, go = False)
###Output
_____no_output_____
###Markdown
Import Dataset and DataLoader libraries
###Code
# Import libraries
from torch.utils.data import Dataset, DataLoader
###Output
_____no_output_____
###Markdown
Create Data class
###Code
# Create class Data
class Data(Dataset):
# Constructor
def __init__(self):
self.x = torch.arange(-3, 3, 0.1).view(-1, 1)
self.y = 1 * X - 1
self.len = self.x.shape[0]
# Getter
def __getitem__(self, index):
return self.x[index], self.y[index]
# Get length
def __len__(self):
return self.len
###Output
_____no_output_____
###Markdown
Create a dataset object and a dataloader object:
###Code
# Create Data object and DataLoader object
dataset = Data()
trainloader = DataLoader(dataset = dataset, batch_size = 1)
###Output
_____no_output_____
###Markdown
Define train_model_SGD function for training the model.
###Code
# Define train_model_SGD function
w = torch.tensor(-15.0, requires_grad = True)
b = torch.tensor(-10.0, requires_grad = True)
LOSS_SGD = []
lr = 0.1
def train_model_SGD(epochs):
for epoch in range(epochs):
Yhat = forward(X)
get_surface.set_para_loss(w.data.tolist(), b.data.tolist(), criterion(Yhat, Y).tolist())
get_surface.plot_ps()
LOSS_SGD.append(criterion(forward(X), Y).tolist())
for x, y in trainloader:
yhat = forward(x)
loss = criterion(yhat, y)
get_surface.set_para_loss(w.data.tolist(), b.data.tolist(), loss.tolist())
loss.backward()
w.data = w.data - lr * w.grad.data
b.data = b.data - lr * b.grad.data
w.grad.data.zero_()
b.grad.data.zero_()
get_surface.plot_ps()
###Output
_____no_output_____
###Markdown
Run 10 epochs of stochastic gradient descent. (Known plotting quirk: the data-space plot runs 1 iteration ahead of the parameter-space plot.)
###Code
# Run train_model_SGD(iter) with 10 iterations
train_model_SGD(10)
###Output
_____no_output_____
###Markdown
Mini Batch Gradient Descent: Batch Size Equals 5 Create a plot_error_surfaces object to visualize the data space and the parameter space during training:
###Code
# Create a plot_error_surfaces object.
get_surface = plot_error_surfaces(15, 13, X, Y, 30, go = False)
###Output
_____no_output_____
###Markdown
Create Data object and create a Dataloader object where the batch size equals 5:
###Code
# Create DataLoader object and Data object
dataset = Data()
trainloader = DataLoader(dataset = dataset, batch_size = 5)
###Output
_____no_output_____
###Markdown
Define train_model_Mini5 function to train the model.
###Code
# Define train_model_Mini5 function
w = torch.tensor(-15.0, requires_grad = True)
b = torch.tensor(-10.0, requires_grad = True)
LOSS_MINI5 = []
lr = 0.1
def train_model_Mini5(epochs):
for epoch in range(epochs):
Yhat = forward(X)
get_surface.set_para_loss(w.data.tolist(), b.data.tolist(), criterion(Yhat, Y).tolist())
get_surface.plot_ps()
LOSS_MINI5.append(criterion(forward(X), Y).tolist())
for x, y in trainloader:
yhat = forward(x)
loss = criterion(yhat, y)
get_surface.set_para_loss(w.data.tolist(), b.data.tolist(), loss.tolist())
loss.backward()
w.data = w.data - lr * w.grad.data
b.data = b.data - lr * b.grad.data
w.grad.data.zero_()
b.grad.data.zero_()
###Output
_____no_output_____
###Markdown
Run 10 epochs of mini-batch gradient descent. (Known plotting quirk: the data-space plot runs 1 iteration ahead of the parameter-space plot.)
###Code
# Run train_model_Mini5 with 10 iterations.
train_model_Mini5(10)
###Output
_____no_output_____
###Markdown
Mini Batch Gradient Descent: Batch Size Equals 10 Create a plot_error_surfaces object to visualize the data space and the parameter space during training:
###Code
# Create a plot_error_surfaces object.
get_surface = plot_error_surfaces(15, 13, X, Y, 30, go = False)
###Output
_____no_output_____
###Markdown
Create Data object and create a Dataloader object batch size equals 10
###Code
# Create DataLoader object
dataset = Data()
trainloader = DataLoader(dataset = dataset, batch_size = 10)
###Output
_____no_output_____
###Markdown
Define train_model_Mini10 function for training the model.
###Code
# Define train_model_Mini10 function
w = torch.tensor(-15.0, requires_grad = True)
b = torch.tensor(-10.0, requires_grad = True)
LOSS_MINI10 = []
lr = 0.1
def train_model_Mini10(epochs):
for epoch in range(epochs):
Yhat = forward(X)
get_surface.set_para_loss(w.data.tolist(), b.data.tolist(), criterion(Yhat, Y).tolist())
get_surface.plot_ps()
LOSS_MINI10.append(criterion(forward(X),Y).tolist())
for x, y in trainloader:
yhat = forward(x)
loss = criterion(yhat, y)
get_surface.set_para_loss(w.data.tolist(), b.data.tolist(), loss.tolist())
loss.backward()
w.data = w.data - lr * w.grad.data
b.data = b.data - lr * b.grad.data
w.grad.data.zero_()
b.grad.data.zero_()
###Output
_____no_output_____
###Markdown
Run 10 epochs of mini-batch gradient descent. (Known plotting quirk: the data-space plot runs 1 iteration ahead of the parameter-space plot.)
###Code
# Run train_model_Mini10 with 10 iterations.
train_model_Mini10(10)
###Output
_____no_output_____
###Markdown
Plot the loss for each epoch:
###Code
# Plot out the LOSS for each method
plt.plot(LOSS_BGD,label = "Batch Gradient Descent")
plt.plot(LOSS_SGD,label = "Stochastic Gradient Descent")
plt.plot(LOSS_MINI5,label = "Mini-Batch Gradient Descent, Batch size: 5")
plt.plot(LOSS_MINI10,label = "Mini-Batch Gradient Descent, Batch size: 10")
plt.legend()
###Output
_____no_output_____
###Markdown
Practice Perform mini batch gradient descent with a batch size of 20. Store the total loss for each epoch in the list LOSS20.
###Code
# Practice: Perform mini batch gradient descent with a batch size of 20.
dataset = Data()
###Output
_____no_output_____
###Markdown
Double-click here for the solution.<!-- trainloader = DataLoader(dataset = dataset, batch_size = 20)w = torch.tensor(-15.0, requires_grad = True)b = torch.tensor(-10.0, requires_grad = True)LOSS_MINI20 = []lr = 0.1def my_train_model(epochs): for epoch in range(epochs): Yhat = forward(X) get_surface.set_para_loss(w.data.tolist(), b.data.tolist(), criterion(Yhat, Y).tolist()) get_surface.plot_ps() LOSS_MINI20.append(criterion(forward(X), Y).tolist()) for x, y in trainloader: yhat = forward(x) loss = criterion(yhat, y) get_surface.set_para_loss(w.data.tolist(), b.data.tolist(), loss.tolist()) loss.backward() w.data = w.data - lr * w.grad.data b.data = b.data - lr * b.grad.data w.grad.data.zero_() b.grad.data.zero_()my_train_model(10)--> Plot a graph that shows the LOSS results for all the methods.
###Code
# Practice: Plot a graph to show all the LOSS functions
# Type your code here
###Output
_____no_output_____ |
FindingTheNumbers.ipynb | ###Markdown
Finding the numbersYou are given an array A containing 2*N+2 positive numbers, out of which N numbers are repeated exactly once and the other two numbers occur exactly once and are distinct. You need to find the other two numbers and print them in ascending order.Input :The first line contains a value T, which denotes the number of test cases. Then T test cases follow .The first line of each test case contains a value N. The next line contains 2*N+2 space separated integers.Output :Print in a new line the two numbers in ascending order.Constraints :* 1<=T<=100* 1<=N<=10^6* 1<=A[i]<=5*10^8Example:``` Input : 2 2 1 2 3 2 1 4 1 2 1 3 2 Output : 3 4 1 3```See [The Google example problem](http://practice.geeksforgeeks.org/problems/finding-the-numbers/0)
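The `undups` solution below runs in O(N) time with O(N) extra space. As an aside (not part of the original exercise), the classic constant-extra-space alternative XORs all elements so duplicates cancel, then splits the array on a bit where the two unique numbers differ — a minimal sketch:

```
def two_unique_xor(A):
    # XOR of everything leaves x ^ y, since each duplicate cancels itself
    xor_all = 0
    for v in A:
        xor_all ^= v
    # isolate the lowest set bit: x and y differ at this bit
    diff_bit = xor_all & -xor_all
    x = y = 0
    for v in A:
        if v & diff_bit:
            x ^= v
        else:
            y ^= v
    return sorted([x, y])

# two_unique_xor([1, 2, 3, 2, 1, 4]) -> [3, 4]
```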
###Code
# %load TestHarness
debugging = False
debugging = True
debugging2 = False
logging = True
def dbg(f, *args):
if debugging:
print((' DBG:' + f).format(*args))
def dbg2(f, *args):
if debugging2:
print((' DBG2:' + f).format(*args))
def log(f, *args):
if logging:
print((f).format(*args))
def log_error(f, *args):
if logging:
print(('*** ERROR:' + f).format(*args))
def class_name(instance):
return type(instance).__name__
#------------------------------------------------------------------------------
import time
from datetime import timedelta
#------------------------------------------------------------------------------
class TestCase(object):
def __init__(self, name, method, inputs, expected, catchExceptions=False):
self.name = name
self.method = method
self.inputs = inputs
self.expected = expected
self.catchExceptions = catchExceptions
def run(self):
if self.catchExceptions:
try:
return self.method(*self.inputs)
except Exception as x:
return x
else:
return self.method(*self.inputs)
#------------------------------------------------------------------------------
class TestSet(object):
def __init__(self, cases):
self.cases = cases
def run_tests(self, repeat=1):
count = 0
errors = 0
total_time = 0
for case in self.cases:
count += 1
start_time = time.time()
for iteration in range(repeat):
dbg2("*** Running '{0}' iteration {1}", case.name, iteration+1)
result = case.run()
elapsed_time = time.time() - start_time
total_time += elapsed_time
if callable(case.expected):
if not case.expected(result):
errors += 1
log_error("Test {0} failed. Returned {1}", case.name, result)
elif result != case.expected:
errors += 1
log_error('Test {0} failed. Returned "{1}", expected "{2}"', case.name, result, case.expected)
if errors:
log_error("Tests passed: {0}; Failures: {1}", count-errors, errors)
else:
log("All {0} tests passed.", count)
log("Elapsed test time: {0}", timedelta(seconds=total_time))
def undups(A):
nope = set()
for v in A:
if v in nope:
nope.remove(v)
else:
nope.add(v)
return sorted(nope)
undups([1, 2, 3, 2, 1, 4])
c1 = TestCase('short', undups, [[1, 2, 3, 2, 1, 4]], [3, 4] )
c2 = TestCase('long', undups, [[2, 1, 3, 2]], [1, 3] )
tester = TestSet([c1, c2])
tester.run_tests()
def bestspanx(A):
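    # Brute force O(n^2): find the largest span j - i with A[i] <= A[j] and return (span, (A[i], A[j])).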
maxs = 0
pair = (0, 0)
n = len(A)
for i in range(n):
for j in range(i, n):
if A[i] <= A[j]:
diff = j - i
if maxs < diff:
maxs = diff
pair = (A[i], A[j])
return maxs, pair
def bestspan(A):
maxs = 0
pair = (0, 0)
n = len(A)
for i in range(n):
# No point in looking at any pairs that don't
# improve our score.
for j in range(i + maxs + 1, n):
if A[i] <= A[j]:
diff = j - i
if maxs < diff:
maxs = diff
pair = (A[i], A[j])
return maxs, pair
bestspan([7, 6, 3, 5, 4, 2, 1, -1])
import random
r = [random.randint(1, 100)-x for x in range(10000)]
bestspan(r)
bestspanx(r)
###Output
_____no_output_____ |
3-object-tracking-and-localization/activities/8-vehicle-motion-and-calculus/2. Speed from Position Data.ipynb | ###Markdown
Speed from Position Data

In this Notebook you'll work with data just like the data you'll be using in the final project for this course. That data comes from CSVs that look like this:

| timestamp | displacement | yaw_rate | acceleration |
| :-------: | :----------: | :------: | :----------: |
| 0.0 | 0 | 0.0 | 0.0 |
| 0.25 | 0.0 | 0.0 | 19.6 |
| 0.5 | 1.225 | 0.0 | 19.6 |
| 0.75 | 3.675 | 0.0 | 19.6 |
| 1.0 | 7.35 | 0.0 | 19.6 |
| 1.25 | 12.25 | 0.0 | 0.0 |
| 1.5 | 17.15 | -2.82901631903 | 0.0 |
| 1.75 | 22.05 | -2.82901631903 | 0.0 |
| 2.0 | 26.95 | -2.82901631903 | 0.0 |
| 2.25 | 31.85 | -2.82901631903 | 0.0 |
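If you ever need to load one of these CSVs directly rather than through the provided `process_data` helper, a minimal sketch with pandas might look like this (the file name is hypothetical and assumes a header row in the format above):

```
import pandas as pd

df = pd.read_csv("parallel_park.csv")   # hypothetical CSV in the format shown above

timestamps = df["timestamp"].tolist()
displacements = df["displacement"].tolist()
```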
###Code
from helpers import process_data
from matplotlib import pyplot as plt
PARALLEL_PARK_DATA = process_data("parallel_park.pickle")
# This is what the first few entries in the parallel
# park data look like.
PARALLEL_PARK_DATA[:5]
# In this exercise we'll be differentiating (taking the
# derivative of) displacement data. This will require
# using only the first two columns of this data.
timestamps = [row[0] for row in PARALLEL_PARK_DATA]
displacements = [row[1] for row in PARALLEL_PARK_DATA]
# You'll use these data in the next lesson on integration
# You can ignore them for now.
yaw_rates = [row[2] for row in PARALLEL_PARK_DATA]
accelerations = [row[3] for row in PARALLEL_PARK_DATA]
plt.title("Displacement vs Time while Parallel Parking")
plt.xlabel("Time (seconds)")
plt.ylabel("Displacement (meters)")
plt.scatter(timestamps, displacements)
plt.show()
###Output
_____no_output_____
###Markdown
In the graph above, you can see displacement vs time data for a car as it parallel parks. Note that backwards motion winds back the odometer and reduces displacement (this isn't actually how odometers work on modern cars. Sorry Ferris Bueller)Note how for approximately 4 seconds the motion is backwards and then for the last two the car goes forwards.Let's look at some data somewhere in the middle of this trajectory
###Code
print(timestamps[20:22])
print(displacements[20:22])
###Output
[1.25, 1.3125]
[-1.4087500000000004, -1.5312500000000004]
###Markdown
So you can see that at $t=1.25$ the car has displacement $x=-1.40875$ and at $t=1.3125$ the car has displacement $x=-1.53125$This means we could calculate the speed / slope as follows:$$\text{slope} = \frac{\text{vertical change}}{\text{horizontal change}} = \frac{\Delta x}{\Delta t}$$and for the numbers I just mentioned this would mean:$$\frac{\Delta x}{\Delta t} = \frac{-1.53125 - -1.40875}{1.3125 - 1.25} = \frac{-0.1225 \text{ meters}}{0.0625\text{ seconds}} = -1.96 \frac{m}{s}$$So I can say the following:> Between $t=1.25$ and $t=1.3125$ the vehicle had an **average speed** of **-1.96 meters per second**I could make this same calculation in code as follows
###Code
delta_x = displacements[21] - displacements[20]
delta_t = timestamps[21] - timestamps[20]
slope = delta_x / delta_t
print(slope)
###Output
-1.9600000000000009
###Markdown
Earlier in this lesson you worked with truly continuous functions. In that situation you could make $\Delta t$ as small as you wanted!But now we have real data, which means the size of $\Delta t$ is dictated by how frequently we made measurements of displacement. In this case it looks like subsequent measurements are separated by$$\Delta t = 0.0625 \text{ seconds}$$In the `get_derivative_from_data` function below, I demonstrate how to "take a derivative" of real data. Read through this code and understand how it works: in the next notebook you'll be asked to reproduce this code yourself.
###Code
def get_derivative_from_data(position_data, time_data):
"""
Calculates a list of speeds from position_data and
time_data.
Arguments:
position_data - a list of values corresponding to
vehicle position
time_data - a list of values (equal in length to
position_data) which give timestamps for each
position measurement
Returns:
speeds - a list of values (which is shorter
by ONE than the input lists) of speeds.
"""
# 1. Check to make sure the input lists have same length
if len(position_data) != len(time_data):
raise(ValueError, "Data sets must have same length")
# 2. Prepare empty list of speeds
speeds = []
# 3. Get first values for position and time
previous_position = position_data[0]
previous_time = time_data[0]
# 4. Begin loop through all data EXCEPT first entry
for i in range(1, len(position_data)):
# 5. get position and time data for this timestamp
position = position_data[i]
time = time_data[i]
# 6. Calculate delta_x and delta_t
delta_x = position - previous_position
delta_t = time - previous_time
# 7. Speed is slope. Calculate it and append to list
speed = delta_x / delta_t
speeds.append(speed)
# 8. Update values for next iteration of the loop.
previous_position = position
previous_time = time
return speeds
# 9. Call this function with appropriate arguments
speeds = get_derivative_from_data(displacements, timestamps)
# 10. Prepare labels for a plot
plt.title("Speed vs Time while Parallel Parking")
plt.xlabel("Time (seconds)")
plt.ylabel("Speed (m / s)")
# 11. Make the plot! Note the slicing of timestamps!
plt.scatter(timestamps[1:], speeds)
plt.show()
###Output
_____no_output_____ |
DREAMER/DREAMER_Arousal_LSTM_64_16.ipynb | ###Markdown
DREAMER Arousal EMI-LSTM 64_16 Adapted from Microsoft's EdgeML notebooks, available at https://github.com/microsoft/EdgeML, authored by Dennis et al. Imports
###Code
import pandas as pd
import numpy as np
from tabulate import tabulate
import os
import datetime as datetime
import pickle as pkl
###Output
_____no_output_____
###Markdown
DataFrames from CSVs
###Code
df = pd.read_csv('/home/sf/data/DREAMER/DREAMER_combined.csv',index_col=0)
###Output
/home/sf/.local/lib/python3.6/site-packages/numpy/lib/arraysetops.py:472: FutureWarning: elementwise comparison failed; returning scalar instead, but in the future will perform elementwise comparison
mask |= (ar1 == a)
###Markdown
Preprocessing
###Code
df.columns
###Output
_____no_output_____
###Markdown
Split Ground Truth
###Code
filtered_train = df.drop(['Movie', 'Person', 'Arousal','Dominance', 'Valence'], axis=1)
filtered_target = df['Arousal']
filtered_target = filtered_target.replace({1:0,2:1,3:2,4:3,5:4})
print(filtered_target.shape)
print(filtered_train.shape)
y = filtered_target.values.reshape(85744, 128) # 128 is the size of 1 bag, 85744 = (size of the entire set) / 128
###Output
_____no_output_____
###Markdown
Convert to 3D - (Bags, Timesteps, Features)
###Code
len(filtered_train.columns)
x = filtered_train.values
print(x.shape)
x = x.reshape(int(len(x) / 128), 128, 16)
print(x.shape)
###Output
(10975232, 16)
(85744, 128, 16)
###Markdown
Filter Overlapping Bags
###Code
# filtering bags that overlap with another class
bags_to_remove = []
for i in range(len(y)):
if len(set(y[i])) > 1:
bags_to_remove.append(i)
print(bags_to_remove)
x = np.delete(x, bags_to_remove, axis=0)
y = np.delete(y, bags_to_remove, axis=0)
x.shape
y.shape
###Output
_____no_output_____
###Markdown
Categorical Representation
###Code
one_hot_list = []
for i in range(len(y)):
one_hot_list.append(set(y[i]).pop())
categorical_y_ver = one_hot_list
categorical_y_ver = np.array(categorical_y_ver)
categorical_y_ver.shape
x.shape[1]
def one_hot(y, numOutput):
y = np.reshape(y, [-1])
ret = np.zeros([y.shape[0], numOutput])
for i, label in enumerate(y):
ret[i, label] = 1
return ret
###Output
_____no_output_____
###Markdown
Extract 3D Normalized Data with Validation Set
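The features are standardized using statistics computed on the training split only, $x' = (x - \mu_{train}) / \sigma_{train}$, and the same $\mu_{train}$ and $\sigma_{train}$ are then applied to the validation and test splits, as done in the cell below.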
###Code
from sklearn.model_selection import train_test_split
import pathlib
x_train_val_combined, x_test, y_train_val_combined, y_test = train_test_split(x, categorical_y_ver, test_size=0.20, random_state=42)
y_test
extractedDir = '/home/sf/data/DREAMER/Arousal'
timesteps = x_train_val_combined.shape[-2]
feats = x_train_val_combined.shape[-1]
trainSize = int(x_train_val_combined.shape[0]*0.9)
x_train, x_val = x_train_val_combined[:trainSize], x_train_val_combined[trainSize:]
y_train, y_val = y_train_val_combined[:trainSize], y_train_val_combined[trainSize:]
# normalization
x_train = np.reshape(x_train, [-1, feats])
mean = np.mean(x_train, axis=0)
std = np.std(x_train, axis=0)
# normalize train
x_train = x_train - mean
x_train = x_train / std
x_train = np.reshape(x_train, [-1, timesteps, feats])
# normalize val
x_val = np.reshape(x_val, [-1, feats])
x_val = x_val - mean
x_val = x_val / std
x_val = np.reshape(x_val, [-1, timesteps, feats])
# normalize test
x_test = np.reshape(x_test, [-1, feats])
x_test = x_test - mean
x_test = x_test / std
x_test = np.reshape(x_test, [-1, timesteps, feats])
# shuffle test, as this was remaining
idx = np.arange(len(x_test))
np.random.shuffle(idx)
x_test = x_test[idx]
y_test = y_test[idx]
# one-hot encoding of labels
numOutput = 5
y_train = one_hot(y_train, numOutput)
y_val = one_hot(y_val, numOutput)
y_test = one_hot(y_test, numOutput)
extractedDir += '/'
pathlib.Path(extractedDir + 'RAW').mkdir(parents=True, exist_ok = True)
np.save(extractedDir + "RAW/x_train", x_train)
np.save(extractedDir + "RAW/y_train", y_train)
np.save(extractedDir + "RAW/x_test", x_test)
np.save(extractedDir + "RAW/y_test", y_test)
np.save(extractedDir + "RAW/x_val", x_val)
np.save(extractedDir + "RAW/y_val", y_val)
print(extractedDir)
ls /home/sf/data/DREAMER/Arousal/RAW
np.load('/home/sf/data/DREAMER/Arousal/RAW/x_train.npy').shape
###Output
_____no_output_____
###Markdown
Make 4D EMI Data (Bags, Subinstances, Subinstance Length, Features)
###Code
def loadData(dirname):
x_train = np.load(dirname + '/' + 'x_train.npy')
y_train = np.load(dirname + '/' + 'y_train.npy')
x_test = np.load(dirname + '/' + 'x_test.npy')
y_test = np.load(dirname + '/' + 'y_test.npy')
x_val = np.load(dirname + '/' + 'x_val.npy')
y_val = np.load(dirname + '/' + 'y_val.npy')
return x_train, y_train, x_test, y_test, x_val, y_val
def bagData(X, Y, subinstanceLen, subinstanceStride):
numClass = 5
numSteps = 128
numFeats = 16
assert X.ndim == 3
assert X.shape[1] == numSteps
assert X.shape[2] == numFeats
assert subinstanceLen <= numSteps
assert subinstanceLen > 0
assert subinstanceStride <= numSteps
assert subinstanceStride >= 0
assert len(X) == len(Y)
assert Y.ndim == 2
assert Y.shape[1] == numClass
x_bagged = []
y_bagged = []
for i, point in enumerate(X[:, :, :]):
instanceList = []
start = 0
end = subinstanceLen
while True:
x = point[start:end, :]
if len(x) < subinstanceLen:
x_ = np.zeros([subinstanceLen, x.shape[1]])
x_[:len(x), :] = x[:, :]
x = x_
instanceList.append(x)
if end >= numSteps:
break
start += subinstanceStride
end += subinstanceStride
bag = np.array(instanceList)
numSubinstance = bag.shape[0]
label = Y[i]
label = np.argmax(label)
labelBag = np.zeros([numSubinstance, numClass])
labelBag[:, label] = 1
x_bagged.append(bag)
label = np.array(labelBag)
y_bagged.append(label)
return np.array(x_bagged), np.array(y_bagged)
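# Illustration (added, not part of the original notebook): with subinstanceLen=64 and
# subinstanceStride=16 on 128-step bags, each bag is cut into (128 - 64)//16 + 1 = 5
# overlapping subinstances. A quick shape check of bagData on dummy data:
_demo_x = np.zeros([2, 128, 16])
_demo_y = np.zeros([2, 5]); _demo_y[:, 0] = 1
_demo_bags, _demo_labels = bagData(_demo_x, _demo_y, 64, 16)
assert _demo_bags.shape == (2, 5, 64, 16) and _demo_labels.shape == (2, 5, 5)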
def makeEMIData(subinstanceLen, subinstanceStride, sourceDir, outDir):
x_train, y_train, x_test, y_test, x_val, y_val = loadData(sourceDir)
x, y = bagData(x_train, y_train, subinstanceLen, subinstanceStride)
np.save(outDir + '/x_train.npy', x)
np.save(outDir + '/y_train.npy', y)
print('Num train %d' % len(x))
x, y = bagData(x_test, y_test, subinstanceLen, subinstanceStride)
np.save(outDir + '/x_test.npy', x)
np.save(outDir + '/y_test.npy', y)
print('Num test %d' % len(x))
x, y = bagData(x_val, y_val, subinstanceLen, subinstanceStride)
np.save(outDir + '/x_val.npy', x)
np.save(outDir + '/y_val.npy', y)
print('Num val %d' % len(x))
subinstanceLen = 64
subinstanceStride = 16
extractedDir = '/home/sf/data/DREAMER/Arousal'
from os import mkdir
mkdir('/home/sf/data/DREAMER/Arousal' + '/%d_%d/' % (subinstanceLen, subinstanceStride))
rawDir = extractedDir + '/RAW'
sourceDir = rawDir
outDir = extractedDir + '/%d_%d/' % (subinstanceLen, subinstanceStride)
makeEMIData(subinstanceLen, subinstanceStride, sourceDir, outDir)
np.load('/home/sf/data/DREAMER/Arousal/64_16/y_train.npy').shape
print(x_train.shape)
print(y_train.shape)
from edgeml.graph.rnn import EMI_DataPipeline
from edgeml.graph.rnn import EMI_BasicLSTM, EMI_FastGRNN, EMI_FastRNN, EMI_GRU
from edgeml.trainer.emirnnTrainer import EMI_Trainer, EMI_Driver
import edgeml.utils
def lstm_experiment_generator(params, path = './DSAAR/64_16/'):
"""
Function that will generate the experiments to be run.
Inputs :
(1) Dictionary params, to set the network parameters.
(2) Path to the dataset directory, where the pre-bagged .npy files are present.
"""
#Copy the contents of the params dictionary.
lstm_dict = {**params}
#---------------------------PARAM SETTING----------------------#
# Network parameters for our LSTM + FC Layer
NUM_HIDDEN = params["NUM_HIDDEN"]
NUM_TIMESTEPS = params["NUM_TIMESTEPS"]
ORIGINAL_NUM_TIMESTEPS = params["ORIGINAL_NUM_TIMESTEPS"]
NUM_FEATS = params["NUM_FEATS"]
FORGET_BIAS = params["FORGET_BIAS"]
NUM_OUTPUT = params["NUM_OUTPUT"]
USE_DROPOUT = True if (params["USE_DROPOUT"] == 1) else False
KEEP_PROB = params["KEEP_PROB"]
# For dataset API
PREFETCH_NUM = params["PREFETCH_NUM"]
BATCH_SIZE = params["BATCH_SIZE"]
# Number of epochs in *one iteration*
NUM_EPOCHS = params["NUM_EPOCHS"]
# Number of iterations in *one round*. After each iteration,
# the model is dumped to disk. At the end of the current
# round, the best model among all the dumped models in the
# current round is picked up.
NUM_ITER = params["NUM_ITER"]
# A round consists of multiple training iterations and a belief
# update step using the best model from all of these iterations
NUM_ROUNDS = params["NUM_ROUNDS"]
LEARNING_RATE = params["LEARNING_RATE"]
# A staging direcory to store models
MODEL_PREFIX = params["MODEL_PREFIX"]
#----------------------END OF PARAM SETTING----------------------#
#----------------------DATA LOADING------------------------------#
x_train, y_train = np.load(path + 'x_train.npy'), np.load(path + 'y_train.npy')
x_test, y_test = np.load(path + 'x_test.npy'), np.load(path + 'y_test.npy')
x_val, y_val = np.load(path + 'x_val.npy'), np.load(path + 'y_val.npy')
# BAG_TEST, BAG_TRAIN, BAG_VAL represent bag_level labels. These are used for the label update
# step of EMI/MI RNN
BAG_TEST = np.argmax(y_test[:, 0, :], axis=1)
BAG_TRAIN = np.argmax(y_train[:, 0, :], axis=1)
BAG_VAL = np.argmax(y_val[:, 0, :], axis=1)
NUM_SUBINSTANCE = x_train.shape[1]
print("x_train shape is:", x_train.shape)
print("y_train shape is:", y_train.shape)
print("x_test shape is:", x_val.shape)
print("y_test shape is:", y_val.shape)
#----------------------END OF DATA LOADING------------------------------#
#----------------------COMPUTATION GRAPH--------------------------------#
# Define the linear secondary classifier
def createExtendedGraph(self, baseOutput, *args, **kwargs):
W1 = tf.Variable(np.random.normal(size=[NUM_HIDDEN, NUM_OUTPUT]).astype('float32'), name='W1')
B1 = tf.Variable(np.random.normal(size=[NUM_OUTPUT]).astype('float32'), name='B1')
y_cap = tf.add(tf.tensordot(baseOutput, W1, axes=1), B1, name='y_cap_tata')
self.output = y_cap
self.graphCreated = True
def restoreExtendedGraph(self, graph, *args, **kwargs):
y_cap = graph.get_tensor_by_name('y_cap_tata:0')
self.output = y_cap
self.graphCreated = True
def feedDictFunc(self, keep_prob=None, inference=False, **kwargs):
if inference is False:
feedDict = {self._emiGraph.keep_prob: keep_prob}
else:
feedDict = {self._emiGraph.keep_prob: 1.0}
return feedDict
EMI_BasicLSTM._createExtendedGraph = createExtendedGraph
EMI_BasicLSTM._restoreExtendedGraph = restoreExtendedGraph
if USE_DROPOUT is True:
EMI_Driver.feedDictFunc = feedDictFunc
inputPipeline = EMI_DataPipeline(NUM_SUBINSTANCE, NUM_TIMESTEPS, NUM_FEATS, NUM_OUTPUT)
emiLSTM = EMI_BasicLSTM(NUM_SUBINSTANCE, NUM_HIDDEN, NUM_TIMESTEPS, NUM_FEATS,
forgetBias=FORGET_BIAS, useDropout=USE_DROPOUT)
emiTrainer = EMI_Trainer(NUM_TIMESTEPS, NUM_OUTPUT, lossType='xentropy',
stepSize=LEARNING_RATE)
tf.reset_default_graph()
g1 = tf.Graph()
with g1.as_default():
# Obtain the iterators to each batch of the data
x_batch, y_batch = inputPipeline()
# Create the forward computation graph based on the iterators
y_cap = emiLSTM(x_batch)
# Create loss graphs and training routines
emiTrainer(y_cap, y_batch)
#------------------------------END OF COMPUTATION GRAPH------------------------------#
#-------------------------------------EMI DRIVER-------------------------------------#
with g1.as_default():
emiDriver = EMI_Driver(inputPipeline, emiLSTM, emiTrainer)
emiDriver.initializeSession(g1)
y_updated, modelStats = emiDriver.run(numClasses=NUM_OUTPUT, x_train=x_train,
y_train=y_train, bag_train=BAG_TRAIN,
x_val=x_val, y_val=y_val, bag_val=BAG_VAL,
numIter=NUM_ITER, keep_prob=KEEP_PROB,
numRounds=NUM_ROUNDS, batchSize=BATCH_SIZE,
numEpochs=NUM_EPOCHS, modelPrefix=MODEL_PREFIX,
fracEMI=0.5, updatePolicy='top-k', k=1)
#-------------------------------END OF EMI DRIVER-------------------------------------#
#-----------------------------------EARLY SAVINGS-------------------------------------#
"""
Early Prediction Policy: We make an early prediction based on the predicted classes
probability. If the predicted class probability > minProb at some step, we make
a prediction at that step.
"""
def earlyPolicy_minProb(instanceOut, minProb, **kwargs):
assert instanceOut.ndim == 2
classes = np.argmax(instanceOut, axis=1)
prob = np.max(instanceOut, axis=1)
index = np.where(prob >= minProb)[0]
if len(index) == 0:
assert (len(instanceOut) - 1) == (len(classes) - 1)
return classes[-1], len(instanceOut) - 1
index = index[0]
return classes[index], index
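# Worked example (added, hypothetical numbers): with minProb=0.99 the policy fires at
# the first step whose top class probability clears the threshold. For per-step
# probabilities [[0.6, 0.4], [0.7, 0.3], [0.995, 0.005]] it returns class 0 at step 2:
# earlyPolicy_minProb(np.array([[0.6, 0.4], [0.7, 0.3], [0.995, 0.005]]), 0.99) == (0, 2)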
def getEarlySaving(predictionStep, numTimeSteps, returnTotal=False):
predictionStep = predictionStep + 1
predictionStep = np.reshape(predictionStep, -1)
totalSteps = np.sum(predictionStep)
maxSteps = len(predictionStep) * numTimeSteps
savings = 1.0 - (totalSteps / maxSteps)
if returnTotal:
return savings, totalSteps
return savings
#--------------------------------END OF EARLY SAVINGS---------------------------------#
#----------------------------------------BEST MODEL-----------------------------------#
k = 2
predictions, predictionStep = emiDriver.getInstancePredictions(x_test, y_test, earlyPolicy_minProb,
minProb=0.99, keep_prob=1.0)
bagPredictions = emiDriver.getBagPredictions(predictions, minSubsequenceLen=k, numClass=NUM_OUTPUT)
print('Accuracy at k = %d: %f' % (k, np.mean((bagPredictions == BAG_TEST).astype(int))))
mi_savings = (1 - NUM_TIMESTEPS / ORIGINAL_NUM_TIMESTEPS)
emi_savings = getEarlySaving(predictionStep, NUM_TIMESTEPS)
total_savings = mi_savings + (1 - mi_savings) * emi_savings
print('Savings due to MI-RNN : %f' % mi_savings)
print('Savings due to Early prediction: %f' % emi_savings)
print('Total Savings: %f' % (total_savings))
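# Worked example (added): with NUM_TIMESTEPS=64 out of ORIGINAL_NUM_TIMESTEPS=128,
# mi_savings = 1 - 64/128 = 0.5; if early prediction then skips 40% of the remaining
# steps (emi_savings = 0.4), total_savings = 0.5 + 0.5 * 0.4 = 0.7.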
#Store in the dictionary.
lstm_dict["k"] = k
lstm_dict["accuracy"] = np.mean((bagPredictions == BAG_TEST).astype(int))
lstm_dict["total_savings"] = total_savings
lstm_dict["y_test"] = BAG_TEST
lstm_dict["y_pred"] = bagPredictions
# A slightly more detailed analysis method is provided.
df = emiDriver.analyseModel(predictions, BAG_TEST, NUM_SUBINSTANCE, NUM_OUTPUT)
print (tabulate(df, headers=list(df.columns), tablefmt='grid'))
lstm_dict["detailed analysis"] = df
#----------------------------------END OF BEST MODEL-----------------------------------#
#----------------------------------PICKING THE BEST MODEL------------------------------#
devnull = open(os.devnull, 'r')
for val in modelStats:
round_, acc, modelPrefix, globalStep = val
emiDriver.loadSavedGraphToNewSession(modelPrefix, globalStep, redirFile=devnull)
predictions, predictionStep = emiDriver.getInstancePredictions(x_test, y_test, earlyPolicy_minProb,
minProb=0.99, keep_prob=1.0)
bagPredictions = emiDriver.getBagPredictions(predictions, minSubsequenceLen=k, numClass=NUM_OUTPUT)
print("Round: %2d, Validation accuracy: %.4f" % (round_, acc), end='')
print(', Test Accuracy (k = %d): %f, ' % (k, np.mean((bagPredictions == BAG_TEST).astype(int))), end='')
print('Additional savings: %f' % getEarlySaving(predictionStep, NUM_TIMESTEPS))
#-------------------------------END OF PICKING THE BEST MODEL--------------------------#
return lstm_dict
def experiment_generator(params, path, model = 'lstm'):
if (model == 'lstm'): return lstm_experiment_generator(params, path)
elif (model == 'fastgrnn'): return fastgrnn_experiment_generator(params, path)
elif (model == 'gru'): return gru_experiment_generator(params, path)
elif (model == 'baseline'): return baseline_experiment_generator(params, path)
return
import tensorflow as tf
# Baseline EMI-LSTM
dataset = 'DREAMER_AROUSAL'
path = '/home/sf/data/DREAMER/Arousal'+ '/%d_%d/' % (subinstanceLen, subinstanceStride)
#Choose model from among [lstm, fastgrnn, gru]
model = 'lstm'
# Dictionary to set the parameters.
params = {
"NUM_HIDDEN" : 128,
"NUM_TIMESTEPS" : 64, #subinstance length.
"ORIGINAL_NUM_TIMESTEPS" : 128,
"NUM_FEATS" : 16,
"FORGET_BIAS" : 1.0,
"NUM_OUTPUT" : 5,
"USE_DROPOUT" : 1, # '1' -> True. '0' -> False
"KEEP_PROB" : 0.75,
"PREFETCH_NUM" : 5,
"BATCH_SIZE" : 32,
"NUM_EPOCHS" : 2,
"NUM_ITER" : 4,
"NUM_ROUNDS" : 10,
"LEARNING_RATE" : 0.001,
"FRAC_EMI" : 0.5,
"MODEL_PREFIX" : dataset + '/model-' + str(model)
}
#Preprocess data, and load the train,test and validation splits.
lstm_dict = lstm_experiment_generator(params, path)
#Create the directory to store the results of this run.
# dirname = ""
# dirname = "./Results" + ''.join(dirname) + "/"+dataset+"/"+model
# pathlib.Path(dirname).mkdir(parents=True, exist_ok=True)
# print ("Results for this run have been saved at" , dirname, ".")
dirname = "" + model
pathlib.Path(dirname).mkdir(parents=True, exist_ok=True)
print ("Results for this run have been saved at" , dirname, ".")
now = datetime.datetime.now()
filename = list((str(now.year),"-",str(now.month),"-",str(now.day),"|",str(now.hour),"-",str(now.minute)))
filename = ''.join(filename)
#Save the dictionary containing the params and the results.
pkl.dump(lstm_dict,open(dirname + "/lstm_dict_" + filename + ".pkl",mode='wb'))
###Output
WARNING: Logging before flag parsing goes to stderr.
W0811 06:07:09.419061 139879956309824 deprecation_wrapper.py:119] From /home/sf/data/EdgeML/tf/edgeml/graph/rnn.py:1141: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead.
W0811 06:07:09.438090 139879956309824 deprecation_wrapper.py:119] From /home/sf/data/EdgeML/tf/edgeml/graph/rnn.py:1153: The name tf.data.Iterator is deprecated. Please use tf.compat.v1.data.Iterator instead.
W0811 06:07:09.439491 139879956309824 deprecation.py:323] From /home/sf/data/EdgeML/tf/edgeml/graph/rnn.py:1153: DatasetV1.output_types (from tensorflow.python.data.ops.dataset_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.compat.v1.data.get_output_types(dataset)`.
W0811 06:07:09.440599 139879956309824 deprecation.py:323] From /home/sf/data/EdgeML/tf/edgeml/graph/rnn.py:1154: DatasetV1.output_shapes (from tensorflow.python.data.ops.dataset_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.compat.v1.data.get_output_shapes(dataset)`.
W0811 06:07:09.446861 139879956309824 deprecation.py:323] From /home/sf/.local/lib/python3.6/site-packages/tensorflow/python/data/ops/iterator_ops.py:348: Iterator.output_types (from tensorflow.python.data.ops.iterator_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.compat.v1.data.get_output_types(iterator)`.
W0811 06:07:09.448351 139879956309824 deprecation.py:323] From /home/sf/.local/lib/python3.6/site-packages/tensorflow/python/data/ops/iterator_ops.py:349: Iterator.output_shapes (from tensorflow.python.data.ops.iterator_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.compat.v1.data.get_output_shapes(iterator)`.
W0811 06:07:09.449451 139879956309824 deprecation.py:323] From /home/sf/.local/lib/python3.6/site-packages/tensorflow/python/data/ops/iterator_ops.py:351: Iterator.output_classes (from tensorflow.python.data.ops.iterator_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.compat.v1.data.get_output_classes(iterator)`.
W0811 06:07:09.453782 139879956309824 deprecation_wrapper.py:119] From /home/sf/data/EdgeML/tf/edgeml/graph/rnn.py:1159: The name tf.add_to_collection is deprecated. Please use tf.compat.v1.add_to_collection instead.
W0811 06:07:09.459037 139879956309824 deprecation.py:323] From /home/sf/data/EdgeML/tf/edgeml/graph/rnn.py:1396: BasicLSTMCell.__init__ (from tensorflow.python.ops.rnn_cell_impl) is deprecated and will be removed in a future version.
Instructions for updating:
This class is equivalent as tf.keras.layers.LSTMCell, and will be replaced by that in Tensorflow 2.0.
|
examples/notebooks/documentation/transforms/python/elementwise/regex-py.ipynb | ###Markdown
View the docs
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License")
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
###Output
_____no_output_____
###Markdown
Regex: Filters input string elements based on a regex. May also transform them based on the matching groups. Setup: To run a code cell, you can click the **Run cell** button at the top left of the cell, or select it and press **`Shift+Enter`**. Try modifying a code cell and re-running it to see what happens. > To learn more about Colab, see [Welcome to Colaboratory!](https://colab.sandbox.google.com/notebooks/welcome.ipynb). First, let's install the `apache-beam` module.
###Code
!pip install --quiet -U apache-beam
###Output
_____no_output_____
###Markdown
Examples: In the following examples, we create a pipeline with a `PCollection` of text strings. Then, we use the `Regex` transform to search, replace, and split through the text elements using [regular expressions](https://docs.python.org/3/library/re.html). You can use tools to help you create and test your regular expressions, such as [regex101](https://regex101.com/). Make sure to specify the Python flavor at the left side bar. Let's look at the [regular expression `(?P<icon>[^\s,]+), *(\w+), *(\w+)`](https://regex101.com/r/Z7hTTj/3) for example. It matches anything that is not a whitespace `\s` (`[ \t\n\r\f\v]`) or comma `,` until a comma is found, and stores that in the named group `icon`; this can match even `utf-8` strings. Then it matches any number of whitespaces, followed by at least one word character `\w` (`[a-zA-Z0-9_]`), which is stored in the second group for the *name*. It does the same with the third group for the *duration*. > *Note:* To avoid unexpected string escaping in your regular expressions, it is recommended to use [raw strings](https://docs.python.org/3/reference/lexical_analysis.html?highlight=rawstring-and-bytes-literals) such as `r'raw-string'` instead of `'escaped-string'`. Example 1: Regex match. `Regex.matches` keeps only the elements that match the regular expression, returning the matched group. The argument `group` is set to `0` (the entire match) by default, but can be set to a group number like `3`, or to a named group like `'icon'`. `Regex.matches` starts to match the regular expression at the beginning of the string. To match until the end of the string, add `'$'` at the end of the regular expression. To start matching at any point instead of the beginning of the string, use [`Regex.find(regex)`](example-4-regex-find).
###Code
import apache_beam as beam
# Matches a named group 'icon', and then two comma-separated groups.
regex = r'(?P<icon>[^\s,]+), *(\w+), *(\w+)'
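# Quick illustration (added; plain `re`, not part of the original Beam example):
# the named group and the two positional groups can be read off a sample element.
import re
_m = re.match(regex, '🍓, Strawberry, perennial')
print(_m.group('icon'), _m.group(2), _m.group(3))  # -> 🍓 Strawberry perennial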
with beam.Pipeline() as pipeline:
plants_matches = (
pipeline
| 'Garden plants' >> beam.Create([
'🍓, Strawberry, perennial',
'🥕, Carrot, biennial ignoring trailing words',
'🍆, Eggplant, perennial',
'🍅, Tomato, annual',
'🥔, Potato, perennial',
'# 🍌, invalid, format',
'invalid, 🍉, format',
])
| 'Parse plants' >> beam.Regex.matches(regex)
| beam.Map(print))
###Output
_____no_output_____
###Markdown
View source code Example 2: Regex match with all groups`Regex.all_matches` keeps only the elements that match the regular expression,returning *all groups* as a list.The groups are returned in the order encountered in the regular expression,including `group 0` (the entire match) as the first group.`Regex.all_matches` starts to match the regular expression at the beginning of the string.To match until the end of the string, add `'$'` at the end of the regular expression.To start matching at any point instead of the beginning of the string, use[`Regex.find_all(regex, group=Regex.ALL, outputEmpty=False)`](example-5-regex-find-all).
###Code
import apache_beam as beam
# Matches a named group 'icon', and then two comma-separated groups.
regex = r'(?P<icon>[^\s,]+), *(\w+), *(\w+)'
with beam.Pipeline() as pipeline:
plants_all_matches = (
pipeline
| 'Garden plants' >> beam.Create([
'🍓, Strawberry, perennial',
'🥕, Carrot, biennial ignoring trailing words',
'🍆, Eggplant, perennial',
'🍅, Tomato, annual',
'🥔, Potato, perennial',
'# 🍌, invalid, format',
'invalid, 🍉, format',
])
| 'Parse plants' >> beam.Regex.all_matches(regex)
| beam.Map(print))
###Output
_____no_output_____
###Markdown
View source code Example 3: Regex match into key-value pairs`Regex.matches_kv` keeps only the elements that match the regular expression,returning a key-value pair using the specified groups.The argument `keyGroup` is set to a group number like `3`, or to a named group like `'icon'`.The argument `valueGroup` is set to `0` (the entire match) by default,but can be set to a group number like `3`, or to a named group like `'icon'`.`Regex.matches_kv` starts to match the regular expression at the beginning of the string.To match until the end of the string, add `'$'` at the end of the regular expression.To start matching at any point instead of the beginning of the string, use[`Regex.find_kv(regex, keyGroup)`](example-6-regex-find-as-key-value-pairs).
###Code
import apache_beam as beam
# Matches a named group 'icon', and then two comma-separated groups.
regex = r'(?P<icon>[^\s,]+), *(\w+), *(\w+)'
with beam.Pipeline() as pipeline:
plants_matches_kv = (
pipeline
| 'Garden plants' >> beam.Create([
'🍓, Strawberry, perennial',
'🥕, Carrot, biennial ignoring trailing words',
'🍆, Eggplant, perennial',
'🍅, Tomato, annual',
'🥔, Potato, perennial',
'# 🍌, invalid, format',
'invalid, 🍉, format',
])
| 'Parse plants' >> beam.Regex.matches_kv(regex, keyGroup='icon')
| beam.Map(print))
###Output
_____no_output_____
###Markdown
View source code Example 4: Regex find`Regex.find` keeps only the elements that match the regular expression,returning the matched group.The argument `group` is set to `0` (the entire match) by default,but can be set to a group number like `3`, or to a named group like `'icon'`.`Regex.find` matches the first occurrence of the regular expression in the string.To start matching at the beginning, add `'^'` at the beginning of the regular expression.To match until the end of the string, add `'$'` at the end of the regular expression.If you need to match from the start only, consider using[`Regex.matches(regex)`](example-1-regex-match).
###Code
import apache_beam as beam
# Matches a named group 'icon', and then two comma-separated groups.
regex = r'(?P<icon>[^\s,]+), *(\w+), *(\w+)'
with beam.Pipeline() as pipeline:
plants_matches = (
pipeline
| 'Garden plants' >> beam.Create([
'# 🍓, Strawberry, perennial',
'# 🥕, Carrot, biennial ignoring trailing words',
'# 🍆, Eggplant, perennial - 🍌, Banana, perennial',
'# 🍅, Tomato, annual - 🍉, Watermelon, annual',
'# 🥔, Potato, perennial',
])
| 'Parse plants' >> beam.Regex.find(regex)
| beam.Map(print))
###Output
_____no_output_____
###Markdown
View source code Example 5: Regex find all`Regex.find_all` returns a list of all the matches of the regular expression,returning the matched group.The argument `group` is set to `0` by default, but can be set to a group number like `3`, to a named group like `'icon'`, or to `Regex.ALL` to return all groups.The argument `outputEmpty` is set to `True` by default, but can be set to `False` to skip elements where no matches were found.`Regex.find_all` matches the regular expression anywhere it is found in the string.To start matching at the beginning, add `'^'` at the start of the regular expression.To match until the end of the string, add `'$'` at the end of the regular expression.If you need to match all groups from the start only, consider using[`Regex.all_matches(regex)`](example-2-regex-match-with-all-groups).
###Code
import apache_beam as beam
# Matches a named group 'icon', and then two comma-separated groups.
regex = r'(?P<icon>[^\s,]+), *(\w+), *(\w+)'
with beam.Pipeline() as pipeline:
plants_find_all = (
pipeline
| 'Garden plants' >> beam.Create([
'# 🍓, Strawberry, perennial',
'# 🥕, Carrot, biennial ignoring trailing words',
'# 🍆, Eggplant, perennial - 🍌, Banana, perennial',
'# 🍅, Tomato, annual - 🍉, Watermelon, annual',
'# 🥔, Potato, perennial',
])
| 'Parse plants' >> beam.Regex.find_all(regex)
| beam.Map(print))
###Output
_____no_output_____
###Markdown
View source code Example 6: Regex find as key-value pairs`Regex.find_kv` returns a list of all the matches of the regular expression,returning a key-value pair using the specified groups.The argument `keyGroup` is set to a group number like `3`, or to a named group like `'icon'`.The argument `valueGroup` is set to `0` (the entire match) by default,but can be set to a group number like `3`, or to a named group like `'icon'`.`Regex.find_kv` matches the first occurrence of the regular expression in the string.To start matching at the beginning, add `'^'` at the beginning of the regular expression.To match until the end of the string, add `'$'` at the end of the regular expression.If you need to match as key-value pairs from the start only, consider using[`Regex.matches_kv(regex)`](example-3-regex-match-into-key-value-pairs).
###Code
import apache_beam as beam
# Matches a named group 'icon', and then two comma-separated groups.
regex = r'(?P<icon>[^\s,]+), *(\w+), *(\w+)'
with beam.Pipeline() as pipeline:
plants_matches_kv = (
pipeline
| 'Garden plants' >> beam.Create([
'# 🍓, Strawberry, perennial',
'# 🥕, Carrot, biennial ignoring trailing words',
'# 🍆, Eggplant, perennial - 🍌, Banana, perennial',
'# 🍅, Tomato, annual - 🍉, Watermelon, annual',
'# 🥔, Potato, perennial',
])
| 'Parse plants' >> beam.Regex.find_kv(regex, keyGroup='icon')
| beam.Map(print))
###Output
_____no_output_____
###Markdown
View source code Example 7: Regex replace all. `Regex.replace_all` returns the string with all the occurrences of the regular expression replaced by another string. You can also use [backreferences](https://docs.python.org/3/library/re.html?highlight=backreference#re.sub) on the `replacement`.
###Code
import apache_beam as beam
with beam.Pipeline() as pipeline:
plants_replace_all = (
pipeline
| 'Garden plants' >> beam.Create([
'🍓 : Strawberry : perennial',
'🥕 : Carrot : biennial',
'🍆\t:\tEggplant\t:\tperennial',
'🍅 : Tomato : annual',
'🥔 : Potato : perennial',
])
| 'To CSV' >> beam.Regex.replace_all(r'\s*:\s*', ',')
| beam.Map(print))
###Output
_____no_output_____
###Markdown
View source code Example 8: Regex replace first. `Regex.replace_first` returns the string with the first occurrence of the regular expression replaced by another string. You can also use [backreferences](https://docs.python.org/3/library/re.html?highlight=backreference#re.sub) on the `replacement`.
###Code
import apache_beam as beam
with beam.Pipeline() as pipeline:
plants_replace_first = (
pipeline
| 'Garden plants' >> beam.Create([
'🍓, Strawberry, perennial',
'🥕, Carrot, biennial',
'🍆,\tEggplant, perennial',
'🍅, Tomato, annual',
'🥔, Potato, perennial',
])
| 'As dictionary' >> beam.Regex.replace_first(r'\s*,\s*', ': ')
| beam.Map(print))
###Output
_____no_output_____
###Markdown
View source code Example 9: Regex split`Regex.split` returns the list of strings that were delimited by the specified regular expression.The argument `outputEmpty` is set to `False` by default, but can be set to `True` to keep empty items in the output list.
###Code
import apache_beam as beam
with beam.Pipeline() as pipeline:
plants_split = (
pipeline
| 'Garden plants' >> beam.Create([
'🍓 : Strawberry : perennial',
'🥕 : Carrot : biennial',
'🍆\t:\tEggplant : perennial',
'🍅 : Tomato : annual',
'🥔 : Potato : perennial',
])
| 'Parse plants' >> beam.Regex.split(r'\s*:\s*')
| beam.Map(print))
###Output
_____no_output_____
###Markdown
View the docs
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License")
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
###Output
_____no_output_____
###Markdown
Regex: Filters input string elements based on a regex. May also transform them based on the matching groups. Setup: To run a code cell, you can click the **Run cell** button at the top left of the cell, or select it and press **`Shift+Enter`**. Try modifying a code cell and re-running it to see what happens. > To learn more about Colab, see [Welcome to Colaboratory!](https://colab.sandbox.google.com/notebooks/welcome.ipynb). First, let's install the `apache-beam` module.
###Code
!pip install --quiet -U apache-beam
###Output
_____no_output_____
###Markdown
Examples: In the following examples, we create a pipeline with a `PCollection` of text strings. Then, we use the `Regex` transform to search, replace, and split through the text elements using [regular expressions](https://docs.python.org/3/library/re.html). You can use tools to help you create and test your regular expressions, such as [regex101](https://regex101.com/). Make sure to specify the Python flavor at the left side bar. Let's look at the [regular expression `(?P<icon>[^\s,]+), *(\w+), *(\w+)`](https://regex101.com/r/Z7hTTj/3) for example. It matches anything that is not a whitespace `\s` (`[ \t\n\r\f\v]`) or comma `,` until a comma is found, and stores that in the named group `icon`; this can match even `utf-8` strings. Then it matches any number of whitespaces, followed by at least one word character `\w` (`[a-zA-Z0-9_]`), which is stored in the second group for the *name*. It does the same with the third group for the *duration*. > *Note:* To avoid unexpected string escaping in your regular expressions, it is recommended to use [raw strings](https://docs.python.org/3/reference/lexical_analysis.html?highlight=rawstring-and-bytes-literals) such as `r'raw-string'` instead of `'escaped-string'`. Example 1: Regex match. `Regex.matches` keeps only the elements that match the regular expression, returning the matched group. The argument `group` is set to `0` (the entire match) by default, but can be set to a group number like `3`, or to a named group like `'icon'`. `Regex.matches` starts to match the regular expression at the beginning of the string. To match until the end of the string, add `'$'` at the end of the regular expression. To start matching at any point instead of the beginning of the string, use [`Regex.find(regex)`](example-4-regex-find).
###Code
import apache_beam as beam
# Matches a named group 'icon', and then two comma-separated groups.
regex = r'(?P<icon>[^\s,]+), *(\w+), *(\w+)'
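# Quick illustration (added; plain `re`, not part of the original Beam example):
# the named group and the two positional groups can be read off a sample element.
import re
_m = re.match(regex, '🍓, Strawberry, perennial')
print(_m.group('icon'), _m.group(2), _m.group(3))  # -> 🍓 Strawberry perennial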
with beam.Pipeline() as pipeline:
plants_matches = (
pipeline
| 'Garden plants' >> beam.Create([
'🍓, Strawberry, perennial',
'🥕, Carrot, biennial ignoring trailing words',
'🍆, Eggplant, perennial',
'🍅, Tomato, annual',
'🥔, Potato, perennial',
'# 🍌, invalid, format',
'invalid, 🍉, format',
])
| 'Parse plants' >> beam.Regex.matches(regex)
| beam.Map(print)
)
###Output
_____no_output_____
###Markdown
View source code Example 2: Regex match with all groups`Regex.all_matches` keeps only the elements that match the regular expression,returning *all groups* as a list.The groups are returned in the order encountered in the regular expression,including `group 0` (the entire match) as the first group.`Regex.all_matches` starts to match the regular expression at the beginning of the string.To match until the end of the string, add `'$'` at the end of the regular expression.To start matching at any point instead of the beginning of the string, use[`Regex.find_all(regex, group=Regex.ALL, outputEmpty=False)`](example-5-regex-find-all).
###Code
import apache_beam as beam
# Matches a named group 'icon', and then two comma-separated groups.
regex = r'(?P<icon>[^\s,]+), *(\w+), *(\w+)'
with beam.Pipeline() as pipeline:
plants_all_matches = (
pipeline
| 'Garden plants' >> beam.Create([
'🍓, Strawberry, perennial',
'🥕, Carrot, biennial ignoring trailing words',
'🍆, Eggplant, perennial',
'🍅, Tomato, annual',
'🥔, Potato, perennial',
'# 🍌, invalid, format',
'invalid, 🍉, format',
])
| 'Parse plants' >> beam.Regex.all_matches(regex)
| beam.Map(print)
)
###Output
_____no_output_____
###Markdown
View source code Example 3: Regex match into key-value pairs`Regex.matches_kv` keeps only the elements that match the regular expression,returning a key-value pair using the specified groups.The argument `keyGroup` is set to a group number like `3`, or to a named group like `'icon'`.The argument `valueGroup` is set to `0` (the entire match) by default,but can be set to a group number like `3`, or to a named group like `'icon'`.`Regex.matches_kv` starts to match the regular expression at the beginning of the string.To match until the end of the string, add `'$'` at the end of the regular expression.To start matching at any point instead of the beginning of the string, use[`Regex.find_kv(regex, keyGroup)`](example-6-regex-find-as-key-value-pairs).
###Code
import apache_beam as beam
# Matches a named group 'icon', and then two comma-separated groups.
regex = r'(?P<icon>[^\s,]+), *(\w+), *(\w+)'
with beam.Pipeline() as pipeline:
plants_matches_kv = (
pipeline
| 'Garden plants' >> beam.Create([
'🍓, Strawberry, perennial',
'🥕, Carrot, biennial ignoring trailing words',
'🍆, Eggplant, perennial',
'🍅, Tomato, annual',
'🥔, Potato, perennial',
'# 🍌, invalid, format',
'invalid, 🍉, format',
])
| 'Parse plants' >> beam.Regex.matches_kv(regex, keyGroup='icon')
| beam.Map(print)
)
###Output
_____no_output_____
###Markdown
View source code Example 4: Regex find`Regex.find` keeps only the elements that match the regular expression,returning the matched group.The argument `group` is set to `0` (the entire match) by default,but can be set to a group number like `3`, or to a named group like `'icon'`.`Regex.find` matches the first occurrence of the regular expression in the string.To start matching at the beginning, add `'^'` at the beginning of the regular expression.To match until the end of the string, add `'$'` at the end of the regular expression.If you need to match from the start only, consider using[`Regex.matches(regex)`](example-1-regex-match).
###Code
import apache_beam as beam
# Matches a named group 'icon', and then two comma-separated groups.
regex = r'(?P<icon>[^\s,]+), *(\w+), *(\w+)'
with beam.Pipeline() as pipeline:
plants_matches = (
pipeline
| 'Garden plants' >> beam.Create([
'# 🍓, Strawberry, perennial',
'# 🥕, Carrot, biennial ignoring trailing words',
'# 🍆, Eggplant, perennial - 🍌, Banana, perennial',
'# 🍅, Tomato, annual - 🍉, Watermelon, annual',
'# 🥔, Potato, perennial',
])
| 'Parse plants' >> beam.Regex.find(regex)
| beam.Map(print)
)
###Output
_____no_output_____
###Markdown
View source code Example 5: Regex find all`Regex.find_all` returns a list of all the matches of the regular expression,returning the matched group.The argument `group` is set to `0` by default, but can be set to a group number like `3`, to a named group like `'icon'`, or to `Regex.ALL` to return all groups.The argument `outputEmpty` is set to `True` by default, but can be set to `False` to skip elements where no matches were found.`Regex.find_all` matches the regular expression anywhere it is found in the string.To start matching at the beginning, add `'^'` at the start of the regular expression.To match until the end of the string, add `'$'` at the end of the regular expression.If you need to match all groups from the start only, consider using[`Regex.all_matches(regex)`](example-2-regex-match-with-all-groups).
###Code
import apache_beam as beam
# Matches a named group 'icon', and then two comma-separated groups.
regex = r'(?P<icon>[^\s,]+), *(\w+), *(\w+)'
with beam.Pipeline() as pipeline:
plants_find_all = (
pipeline
| 'Garden plants' >> beam.Create([
'# 🍓, Strawberry, perennial',
'# 🥕, Carrot, biennial ignoring trailing words',
'# 🍆, Eggplant, perennial - 🍌, Banana, perennial',
'# 🍅, Tomato, annual - 🍉, Watermelon, annual',
'# 🥔, Potato, perennial',
])
| 'Parse plants' >> beam.Regex.find_all(regex)
| beam.Map(print)
)
###Output
_____no_output_____
###Markdown
View source code Example 6: Regex find as key-value pairs`Regex.find_kv` returns a list of all the matches of the regular expression,returning a key-value pair using the specified groups.The argument `keyGroup` is set to a group number like `3`, or to a named group like `'icon'`.The argument `valueGroup` is set to `0` (the entire match) by default,but can be set to a group number like `3`, or to a named group like `'icon'`.`Regex.find_kv` matches the first occurrence of the regular expression in the string.To start matching at the beginning, add `'^'` at the beginning of the regular expression.To match until the end of the string, add `'$'` at the end of the regular expression.If you need to match as key-value pairs from the start only, consider using[`Regex.matches_kv(regex)`](example-3-regex-match-into-key-value-pairs).
###Code
import apache_beam as beam
# Matches a named group 'icon', and then two comma-separated groups.
regex = r'(?P<icon>[^\s,]+), *(\w+), *(\w+)'
with beam.Pipeline() as pipeline:
plants_matches_kv = (
pipeline
| 'Garden plants' >> beam.Create([
'# 🍓, Strawberry, perennial',
'# 🥕, Carrot, biennial ignoring trailing words',
'# 🍆, Eggplant, perennial - 🍌, Banana, perennial',
'# 🍅, Tomato, annual - 🍉, Watermelon, annual',
'# 🥔, Potato, perennial',
])
| 'Parse plants' >> beam.Regex.find_kv(regex, keyGroup='icon')
| beam.Map(print)
)
###Output
_____no_output_____
###Markdown
View source code Example 7: Regex replace all. `Regex.replace_all` returns the string with all the occurrences of the regular expression replaced by another string. You can also use [backreferences](https://docs.python.org/3/library/re.html?highlight=backreference#re.sub) on the `replacement`.
###Code
import apache_beam as beam
with beam.Pipeline() as pipeline:
plants_replace_all = (
pipeline
| 'Garden plants' >> beam.Create([
'🍓 : Strawberry : perennial',
'🥕 : Carrot : biennial',
'🍆\t:\tEggplant\t:\tperennial',
'🍅 : Tomato : annual',
'🥔 : Potato : perennial',
])
| 'To CSV' >> beam.Regex.replace_all(r'\s*:\s*', ',')
| beam.Map(print)
)
###Output
_____no_output_____
###Markdown
View source code Example 8: Regex replace first. `Regex.replace_first` returns the string with the first occurrence of the regular expression replaced by another string. You can also use [backreferences](https://docs.python.org/3/library/re.html?highlight=backreference#re.sub) on the `replacement`.
###Code
import apache_beam as beam
with beam.Pipeline() as pipeline:
plants_replace_first = (
pipeline
| 'Garden plants' >> beam.Create([
'🍓, Strawberry, perennial',
'🥕, Carrot, biennial',
'🍆,\tEggplant, perennial',
'🍅, Tomato, annual',
'🥔, Potato, perennial',
])
| 'As dictionary' >> beam.Regex.replace_first(r'\s*,\s*', ': ')
| beam.Map(print)
)
###Output
_____no_output_____
###Markdown
View source code Example 9: Regex split`Regex.split` returns the list of strings that were delimited by the specified regular expression.The argument `outputEmpty` is set to `False` by default, but can be set to `True` to keep empty items in the output list.
###Code
import apache_beam as beam
with beam.Pipeline() as pipeline:
plants_split = (
pipeline
| 'Garden plants' >> beam.Create([
'🍓 : Strawberry : perennial',
'🥕 : Carrot : biennial',
'🍆\t:\tEggplant : perennial',
'🍅 : Tomato : annual',
'🥔 : Potato : perennial',
])
| 'Parse plants' >> beam.Regex.split(r'\s*:\s*')
| beam.Map(print)
)
###Output
_____no_output_____ |
notebooks/data_downloader.ipynb | ###Markdown
Setup of the AnnData object. **Author:** [Severin Dicks](https://github.com/Intron7) (IBSM Freiburg) This notebook is just a downloader and sets up the AnnData object (https://anndata.readthedocs.io/en/latest/index.html) we will be working with. In this example workflow we'll be looking at a dataset of ca. 90000 cells from lung cancer patients published by [Quin et al., Cell Research 2020](https://www.nature.com/articles/s41422-020-0355-0).
###Code
import gdown
import os
url = 'https://drive.google.com/uc?id=1eoK0m2ML1uNLc80L6yBuPrkJqsDF-QWj'
os.makedirs("./h5",exist_ok=True)
output = './h5/adata.raw.h5ad'
gdown.download(url, output, quiet=True)
###Output
_____no_output_____
###Markdown
Setup of the AnnData object. **Author:** [Severin Dicks](https://github.com/Intron7) (IBSM Freiburg) This notebook is just a downloader and sets up the AnnData object (https://anndata.readthedocs.io/en/latest/index.html) we will be working with. In this example workflow we'll be looking at a dataset of ca. 90000 cells from lung cancer patients published by [Quin et al., Cell Research 2020](https://www.nature.com/articles/s41422-020-0355-0).
###Code
import wget
import scanpy as sc
import os
import tarfile
import pandas as pd
###Output
_____no_output_____
###Markdown
First we download the count matrix and metadata file from the Lambrechts lab website.
###Code
count_file = './data/LC_counts.tar.gz'
if not os.path.exists(count_file):
os.makedirs("./data",exist_ok=True)
wget.download("http://blueprint.lambrechtslab.org/download/LC_counts.tar.gz", out="./data")
wget.download("http://blueprint.lambrechtslab.org/download/LC_metadata.csv.gz", out="./data")
###Output
_____no_output_____
###Markdown
We then decompress the data.
###Code
tar = tarfile.open(count_file, "r:gz")
tar.extractall("./data")
tar.close()
###Output
_____no_output_____
###Markdown
Now we can start creating our AnnData object with scanpy (https://scanpy.readthedocs.io/en/stable/index.html).
###Code
adata = sc.read_10x_mtx("./data/export/LC_counts/")
###Output
_____no_output_____
###Markdown
Next we have to append the metadata to `adata.obs`.
###Code
obs_df = pd.read_csv("./data/LC_metadata.csv.gz",compression="gzip", index_col=0)
obs_df
###Output
_____no_output_____
###Markdown
In this case `adata.obs` and the metadata in `obs_df` have the same number of cells, and the cell barcodes are in the same order. We can therefore just replace `.obs` with `obs_df`.
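A quick sanity check before overwriting (an added suggestion; it assumes the barcodes in `adata.obs_names` match the index of `obs_df`):
assert adata.n_obs == obs_df.shape[0]
assert (adata.obs_names == obs_df.index).all()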
###Code
adata.obs = obs_df
###Output
_____no_output_____
###Markdown
Since `PatientNumber` is a category and not a numerical value, we have to change its type. In some cases scanpy doesn't like integers as categories, so we convert it to `str`.
###Code
adata.obs.PatientNumber = adata.obs.PatientNumber.astype(str)
###Output
_____no_output_____
###Markdown
When the adata object is saved, string-based columns in `.obs` are converted into categorical data.
###Code
os.makedirs("./h5",exist_ok=True)
adata.write("./h5/adata.raw.h5ad")
###Output
... storing 'PatientNumber' as categorical
... storing 'TumorType' as categorical
... storing 'TumorSite' as categorical
... storing 'CellType' as categorical
|
notebooks/bigquery:nvdb.standardized.veglenker.ipynb | ###Markdown
This query retrieves the length of the current road network for national and European routes in Norway. The extract looks at a simplified version of the European and national road network for motor traffic, where separated lanes and carriageways are only counted in one direction. This gives a length for the main arteries of the road network, but it does not include the length of each individual lane, nor side facilities such as footpaths and cycle paths.
###Code
query = f"""
SELECT
SUM(ST_LENGTH(geometri))/1000 vegnettKm
FROM
`{project}.standardized.veglenker`
WHERE
# Current road network
(metadata.sluttdato IS NULL OR metadata.sluttdato >= CURRENT_DATE())
# Main alignment
AND type = "HOVED"
# European routes and national roads
AND vegsystemreferanse.vegsystem.vegkategori in ("E", "R")
# Existing road that is part of the operational road network
AND vegsystemreferanse.vegsystem.fase in ("V")
# Keep only one bore where tunnel bores are separated
AND (vegsystemreferanse.strekning.adskilteLop IN ("Med", "Nei") OR vegsystemreferanse.strekning.adskilteLop IS NULL)
AND typeVeg IN (
"Rundkjøring",
"Enkel bilveg",
"Kanalisert veg",
"Rampe");
"""
print(query)
client.query(query).to_dataframe()
###Output
_____no_output_____
###Markdown
This query retrieves the current road network for national and European routes in Trøndelag county. The extract shows a simplified version of the European and national road network for motor traffic, where separated lanes and carriageways are only given in one direction. This gives an overview of what the road network looks like at a high level, but leaves out some details, such as how many lanes the roads have and where footpaths and cycle paths exist.
###Code
query = f"""
SELECT
vegsystemreferanse.kortform, veglenkesekvensid, veglenkenummer, segmentnummer, startposisjon, sluttposisjon, typeVeg, detaljnivaa, kommune, geometri
FROM
`{project}.standardized.veglenker`
WHERE
# Trøndelag
fylke = 50
# Current road network
AND (metadata.sluttdato IS NULL OR metadata.sluttdato >= CURRENT_DATE())
# Main alignment
AND type = "HOVED"
# European routes and national roads
AND vegsystemreferanse.vegsystem.vegkategori in ("E", "R")
# Existing road that is part of the operational road network
AND vegsystemreferanse.vegsystem.fase in ("V")
# Keep only one bore where tunnel bores are separated
AND (vegsystemreferanse.strekning.adskilteLop IN ("Med", "Nei") OR vegsystemreferanse.strekning.adskilteLop IS NULL)
AND typeVeg IN (
"Rundkjøring",
"Enkel bilveg",
"Kanalisert veg",
"Rampe");
"""
print(query)
vegnettTrondelag = client.query(query).to_dataframe()
vegnettTrondelag
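# Illustration (added, client-side): a quick summary of the number of returned road
# link segments per municipality, assuming the dataframe above came back unchanged.
print(vegnettTrondelag.groupby("kommune").size().sort_values(ascending=False).head())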
###Output
_____no_output_____
###Markdown
This query shows how the national and European road network at a given point in time can be retrieved, with otherwise the same filters as above.
###Code
query = f"""
SELECT
vegsystemreferanse.kortform,
geometri
FROM
`{project}.standardized.veglenker`
WHERE
# Trøndelag
fylke = 50
# Road network at a given point in time
AND (
# Created before the point in time in question
metadata.startdato < '2020-01-01'
AND
# Ended after that point in time, or still active
(metadata.sluttdato >= '2020-01-01' OR metadata.sluttdato IS NULL)
)
# Main alignment
AND type = "HOVED"
# European routes and national roads
AND vegsystemreferanse.vegsystem.vegkategori in ("E", "R")
# Existing road that is part of the operational road network
AND vegsystemreferanse.vegsystem.fase in ("V")
# Keep only one bore where tunnel bores are separated
AND (vegsystemreferanse.strekning.adskilteLop IN ("Med", "Nei") OR vegsystemreferanse.strekning.adskilteLop IS NULL)
AND typeVeg IN (
"Rundkjøring",
"Enkel bilveg",
"Kanalisert veg",
"Rampe");
"""
print(query)
client.query(query).to_dataframe()
###Output
_____no_output_____
###Markdown
This query shows how the road owner and the road's name can be derived from the other fields.
###Code
query = f"""
SELECT
FORMAT("%s%s%d",
vegsystemreferanse.vegsystem.vegkategori,
vegsystemreferanse.vegsystem.fase,
vegsystemreferanse.vegsystem.nummer) vegnavn,
IF(
EXISTS(SELECT * FROM UNNEST(kontraktsomraader) WHERE REGEXP_CONTAINS(navn, "NV_")),
"Stat, Nye Veier",
"Stat, Statens vegvesen") vegeier,
FROM
`{project}.standardized.veglenker`
WHERE
# Trøndelag
fylke = 50
# Current road network
AND (metadata.sluttdato IS NULL OR metadata.sluttdato >= CURRENT_DATE())
# Main alignment
AND type = "HOVED"
# European routes and national roads
AND vegsystemreferanse.vegsystem.vegkategori in ("E", "R")
# Existing road that is part of the operational road network
AND vegsystemreferanse.vegsystem.fase in ("V")
# Keep only one bore where tunnel bores are separated
AND (vegsystemreferanse.strekning.adskilteLop IN ("Med", "Nei") OR vegsystemreferanse.strekning.adskilteLop IS NULL)
AND typeVeg IN (
"Rundkjøring",
"Enkel bilveg",
"Kanalisert veg",
"Rampe");
"""
print(query)
client.query(query).to_dataframe()
###Output
_____no_output_____
###Markdown
This query shows how to identify road link sequences that were created in the period 2020-01-01 to 2021-01-01. Note that the query returns all road links in the sequence, not only those that were created in the period in question.
###Code
query = f"""
WITH
veglenker AS
(SELECT *
FROM `{project}.standardized.veglenker`
WHERE
# Main alignment
type = "HOVED"
# European routes and national roads
AND vegsystemreferanse.vegsystem.vegkategori in ("E", "R")
# Existing road that is part of the operational road network
AND vegsystemreferanse.vegsystem.fase in ("V")
# Keep only one bore where tunnel bores are separated
AND (vegsystemreferanse.strekning.adskilteLop IN ("Med", "Nei") OR vegsystemreferanse.strekning.adskilteLop IS NULL)
AND typeVeg IN (
"Rundkjøring",
"Enkel bilveg",
"Kanalisert veg",
"Rampe")),
nye_veglenkesekvenser AS
(SELECT
veglenkesekvensid,
ST_UNION_AGG(geometri) geometri,
MIN(metadata.startdato) AS tidligste_startdato
FROM veglenker
GROUP BY veglenkesekvensid
HAVING
# The sequence was created within the period
tidligste_startdato BETWEEN '2020-01-01' AND '2021-01-01')
SELECT * FROM nye_veglenkesekvenser
"""
print(query)
client.query(query).to_dataframe()
###Output
_____no_output_____
###Markdown
This query shows how to identify road link sequences that were finally closed in the period 2020-01-01 to 2021-01-01. This is a reasonable way to get an overview of larger changes in the road network, but it does not catch the cases where some, but not all, road links in a sequence are closed. [See this presentation for more information on the relationship between road links and road link sequences.](https://vegvesen-my.sharepoint.com/:p:/g/personal/jan_kristian_jensen_vegvesen_no/ERTPmjdinZxIh23EwpQeigYBn_lFAzyJHuuisQyU4_qVhg) Note that the query returns all road links in the sequence, not only those that were closed in the period in question.
###Code
query = f"""
WITH
veglenker AS
(SELECT *
FROM `{project}.standardized.veglenker`
WHERE
# Main alignment
type = "HOVED"
# European routes and national roads
AND vegsystemreferanse.vegsystem.vegkategori in ("E", "R")
# Existing road that is part of the operational road network
AND vegsystemreferanse.vegsystem.fase in ("V")
# Keep only one bore where tunnel bores are separated
AND (vegsystemreferanse.strekning.adskilteLop IN ("Med", "Nei") OR vegsystemreferanse.strekning.adskilteLop IS NULL)
AND typeVeg IN (
"Rundkjøring",
"Enkel bilveg",
"Kanalisert veg",
"Rampe")),
stengte_veglenkesekvenser AS
(SELECT
veglenkesekvensid,
ST_UNION_AGG(geometri) geometri,
COUNT(*) AS totalt_antall_veglenker,
SUM(
CASE
WHEN metadata.sluttdato IS NOT NULL THEN 1
ELSE 0
END) antall_stengte_veglenker,
MAX(metadata.sluttdato) AS siste_sluttdato
FROM veglenker
GROUP BY veglenkesekvensid
HAVING
# all road links in the road link sequence are closed
totalt_antall_veglenker = antall_stengte_veglenker
# last end date within the period
AND siste_sluttdato BETWEEN '2020-01-01' AND '2021-01-01')
SELECT * FROM stengte_veglenkesekvenser
"""
print(query)
client.query(query).to_dataframe()
###Output
_____no_output_____ |
examples/Logica_example_News_connections.ipynb | ###Markdown
Logica example: News connections. Here we find a sequence in the news that connects pop singer Justin Bieber to the Lieutenant Governor of British Columbia, Janet Austin. We use the public [GDELT](https://www.gdeltproject.org/data.html) BigQuery table. Install Logica
###Code
!pip install logica
###Output
Requirement already satisfied: logica in /usr/local/lib/python3.6/dist-packages (1.3.10)
###Markdown
Extract the news graph
###Code
!mkdir lib
%%writefile lib/util.l
ExtractPeople(people_str) = people :-
people List= (
person :-
person in Split(people_str, ";"),
# Mistakes in the dataset:
person != "los angeles",
person != "las vegas"
);
Gdelt2020(..r) :-
`gdelt-bq.gdeltv2.gkg`(..r),
Substr(ToString(r.date), 0, 4) == "2020";
from logica import colab_logica
%%logica SaveEdges
import lib.util.ExtractPeople;
import lib.util.Gdelt2020;
@Dataset("logica_graph_example");
EdgeWeight(person1, person2, url? AnyValue= url) += 1 :-
Gdelt2020(persons:, documentidentifier: url),
people == ExtractPeople(persons),
person1 in people,
person2 in people,
person1 != person2;
@Ground(Edge);
Edge(a, b, url:) :- EdgeWeight(a, b, url:) > 120;
SaveEdges() += 1 :- Edge();
###Output
_____no_output_____
###Markdown
Finding all paths from Justin Bieber
###Code
%%logica SaveP3
@Dataset("logica_graph_example");
@Ground(Edge);
Root() = "justin bieber";
P0(person: Root(), path: ["__start__", Root()]);
@Ground(P1);
P1(person:, path? AnyValue= path) distinct :-
P0(person: prev_person, path: prev_path),
Edge(prev_person, person),
~P0(person:),
path == ArrayConcat(prev_path, [person]);
P1(person:, path? AnyValue= path) distinct :-
P0(person:, path:);
# Few steps of recursion are convenient to describe via
# repeated functor calls.
P2 := P1(P0: P1);
P3 := P2(P1: P2);
SaveP3() += 1 :- P3();
###Output
_____no_output_____
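For intuition (an added sketch, not part of the original Logica program): each functor application above behaves like one more round of breadth-first expansion over the edge list. In plain Python, with a made-up two-edge graph, the same idea looks roughly like this:
def expand(paths, edges):
    # one round: extend every known path by one hop to people not yet reached
    known = set(paths)
    new_paths = dict(paths)
    for person, path in paths.items():
        for a, b in edges:
            if a == person and b not in known:
                new_paths[b] = path + [b]
    return new_paths

edges = [("justin bieber", "justin trudeau"), ("justin trudeau", "janet austin")]
paths = {"justin bieber": ["justin bieber"]}
for _ in range(3):  # analogous to P1, P2, P3
    paths = expand(paths, edges)
print(paths.get("janet austin"))  # ['justin bieber', 'justin trudeau', 'janet austin']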
###Markdown
Printing the path
###Code
%%logica PathExplanation
@Dataset("logica_graph_example");
@Ground(P3);
@Ground(Edge);
@Ground(ThePath);
ThePath() = path :-
P3(person: "janet austin", path:);
@OrderBy(PathExplanation, "step");
PathExplanation(step: i, from_person: p1, to_person: p2, url:) :-
path == ThePath(),
i in RangeOf(path),
i > 0,
i < ArrayLength(path) - 1,
p1 == Element(path, i),
p2 == Element(path, i + 1),
Edge(p1, p2, url:);
for step, from_person, to_person, url in zip(PathExplanation['step'],
PathExplanation['from_person'],
PathExplanation['to_person'],
PathExplanation['url']):
print("%s is (or was) connected to %s by URL: %s" % (from_person, to_person, url))
###Output
justin bieber is (or was) connected to justin trudeau by URL: https://misionesonline.net/2020/06/28/espectaculos-el-concierto-virtual-global-goal-unite-for-our-future-junto-donaciones-para-la-lucha-contra-el-coronavirus/
justin trudeau is (or was) connected to janet austin by URL: https://www.macleans.ca/politics/ottawa/john-horgan-savvy-opportunist-or-practical-realist/
|
Moire-one-dim.ipynb | ###Markdown
Moire 1D lattice. I will use this notebook to explore the physics of a model consisting of two 1D chains displaced by a small amount $\epsilon$. I am interested in the band structure of this bilattice. I will model it as a tight-binding lattice with two hopping parameters $t_1$ and $t_2$.
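For reference, a sketch of the analytic expectation (added here; it assumes periodic boundary conditions and the sign convention used in `create_H` below, i.e. on-site energy $t_1+t_2$ and alternating hoppings $-t_1$, $-t_2$): the two-site unit cell gives the dimerized-chain bands $$E_\pm(k) = (t_1 + t_2) \pm \sqrt{t_1^2 + t_2^2 + 2\,t_1 t_2 \cos k},$$ so the spectrum has a gap of $2\,|t_1 - t_2|$ at $k=\pi$ and becomes gapless when $t_1 = t_2$, which the sorted eigenvalue plots below can be compared against.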
###Code
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
def create_H(t1,t2,N):
'''
t1,t2 are two TB parameters,
N is the number of bilattice points
The size of the Hamiltonian is 2N x 2N
'''
def matrix_element(x,y):
if x == y:
return t1+t2
elif min(x,y) % 2 == 0 and abs(x-y) == 1:
return -t1
elif min(x,y) % 2 == 1 and abs(x-y) == 1:
return -t2
else:
return 0
vec_matrix_ele = np.vectorize(matrix_element)
X,Y = np.meshgrid(np.arange(2*N),np.arange(2*N))
H = vec_matrix_ele(X,Y)
return H
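# Illustration (added): a quick structural check on a small bilattice. The matrix is
# symmetric, with t1+t2 on the diagonal and hoppings alternating -t1, -t2 on the
# first off-diagonal.
H_small = create_H(1, 2, 2)
assert np.allclose(H_small, H_small.T)
print(H_small)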
# simple Hamiltonian
H = create_H(1,1,100)
u,v = np.linalg.eig(H)
plt.plot(np.sort(np.abs(u)))
plt.xlabel("eigenval index")
plt.ylabel("energy")
t1 = 1
t2 = 32
H = create_H(t1,t2,100)
u,v = np.linalg.eig(H)
plt.plot(np.sort(np.abs(u)))
plt.xlabel("eigenval index")
plt.ylabel("energy")
data = []
t2_vec = np.logspace(-1,1,100)
for t2 in t2_vec:
H = create_H(t1,t2,100)
u,v = np.linalg.eig(H)
data.append(np.sort(np.abs(u))/(t1+t2))
plt.pcolor(data)
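# Added labels (assumes the pcolor call above is still the current axes): make the
# colour scale and axes of the band diagram easier to read.
plt.colorbar(label="energy / (t1 + t2)")
plt.xlabel("eigenval index")
plt.ylabel("index into t2_vec (log-spaced from 0.1 to 10)")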
###Output
_____no_output_____ |
originals/Python_05_arrays_plotting.ipynb | ###Markdown
Vectorization Vectorization is used a lot in computations, since working with vectors is both faster and gives more compact code than using lists. A mathematical operation is applied to the whole vector as a single unit, so loops are not needed. Note that some types of problems cannot be vectorized, such as numerical integration. For vectorization in Python we use the `numpy` module. It is a convention to import `numpy` as `np`. The vectors are called arrays.
###Code
import numpy as np
###Output
_____no_output_____
###Markdown
Empty arrays By "empty" arrays we mean arrays of a fixed length that only contain zero values. They are therefore not really empty, just trivial. Unlike lists, `numpy` arrays must have a definite length when they are created. Below is an example of how to create such an "empty" array using the function `numpy.zeros()`.
###Code
n = 10 # length of array
empty_array = np.zeros(n)
print(empty_array)
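# Typical use of a pre-allocated array (added example): fill it element by element.
squares = np.zeros(n)
for i in range(n):
    squares[i] = i**2
print(squares)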
###Output
[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
###Markdown
`numpy.linspace()` `linspace(min_value, max_value, n)` takes a start value, an end value and the number of elements the array should contain. It returns an array of `n` elements from `min_value` up to and including `max_value`. The values of the elements in the array increase evenly.
###Code
x_min = 0
x_max = 100
n = 11
x_array = np.linspace(x_min, x_max, n)
print(x_array)
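# Related function (added example): np.arange(start, stop, step) builds a similar array
# from a step size instead of a number of points; note that the stop value is excluded.
print(np.arange(0, 101, 10))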
###Output
[ 0. 10. 20. 30. 40. 50. 60. 70. 80. 90. 100.]
###Markdown
Differences between Python lists and numpy arrays Lists in Python behave like "things": if you add two lists, you get a longer list containing all the elements from both lists you added. Arrays behave like vectors: if you add two arrays, an array is returned in which each element is the sum of the elements at the corresponding positions in the two original arrays. Two arrays must therefore have the same length to be added together. The example below shows how arrays and lists behave differently when multiplied by a number.
###Code
my_list = [0, 1, 2, 3, 4]
my_array = np.linspace(0, 4, 5)
print("my_list:", my_list)
print("my_array:", my_array)
print("2*list: ", 2*my_list)
print("2*array: ", 2*my_array)
###Output
my_list: [0, 1, 2, 3, 4]
my_array: [ 0. 1. 2. 3. 4.]
2*list: [0, 1, 2, 3, 4, 0, 1, 2, 3, 4]
2*array: [ 0. 2. 4. 6. 8.]
###Markdown
Plotting To visualize data, the module `matplotlib.pyplot` is used in IN1900. It is a convention to import this module as `plt`.
###Code
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
`plot(x_list, y_list)` is the command for plotting; here `y_list` is plotted against `x_list`. Both Python lists and arrays can be used for plotting, and the lists/arrays must have the same length to be plotted together. `xlabel()` and `ylabel()` are used to name the axes, and `title()` sets the title of the plot. If you plot more than one line, you should use `legend()` to specify what the different lines show. This can be done either by passing a list of line names to `legend()`, or by attaching a `label` to the `plot()` call, as shown in the example below; in that case you still have to call `legend()`, but without any arguments. `show()` is called at the end, which makes the plot appear when you run the program. Below is an example of a plot. Note that, for example, `"x"` means the points are plotted as crosses.
###Code
%matplotlib notebook
from numpy import sin, cos, pi
theta = np.linspace(0, 2*pi, 200)
x0, y0 = (0, 0)
plt.figure(0, figsize=(6, 6))
plt.plot(cos(theta), sin(theta), label="Circle")
plt.plot(x0, y0, "x", label="Centre")
plt.legend()
plt.title("The Unit Circle")
plt.axis([-1.5, 1.5, -1.5, 1.5])
plt.show()
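# The alternative legend() form mentioned above (added example): pass a list of names
# instead of using label= in each plot() call.
plt.figure(1, figsize=(6, 4))
plt.plot(theta, sin(theta))
plt.plot(theta, cos(theta))
plt.legend(["sin(theta)", "cos(theta)"])
plt.xlabel("theta")
plt.show()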
###Output
_____no_output_____ |
Part_2_User_Input_Twitter_Analysis/User Input Twitter Data.ipynb | ###Markdown
This creates a CSV file named after the hashtag entered by the user and populates it with recently pulled tweets, up to the number requested by the user. Check the folder for the file named after the hashtag, with a .csv extension, to see the results.
###Code
# importing required libraries
import tweepy
import csv
import datetime
import time
#define and store your credentials
consumer_key = "jzn0NU9EviCRRbONbUXX9a8VN"
consumer_secret = "ULsKu9BjBPmZ3yY5NdS6EXUhGBNWKUWxtwKqFktBeqsOq1Y3ZQ"
access_token = "781482721-6928Gtnj95bK82PW3fYDxHFvU5T4l3SPI4VVF1X2"
access_token_secret = "fTxclLJ4oxEmqshRhSbBibGoUiNq1l6941C0VyREdTf41"
#Tweepy auth
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
api = tweepy.API(auth)
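# Alternative to the manual rate-limit sleep further down (suggestion, not in the
# original notebook): tweepy can wait automatically when the rate limit is reached.
# api = tweepy.API(auth, wait_on_rate_limit=True)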
#Enter the keyword (kw) that you want to collect tweets for
kw = input('Enter keyword: ')
#Enter the number of tweets you want to fetch
num = int(input('Number of tweets (enter -1 if you want the max number): '))
# Open/Create a file to append data
csvFile = open(kw+'.csv', 'w+')
#Use csv Writer
csvWriter = csv.writer(csvFile)
#Columns that you want to print in CSV file
csvWriter.writerow(['Tweet_Date', 'user_id','Followers','Tweet_text'])
apicalls = 0
counter = 0
for tweet in tweepy.Cursor(api.search,q=kw,count=100).items():
apicalls = apicalls+1
if (apicalls == 150*100):
print("sleep")
apicalls = 0
time.sleep(15*60)
csvWriter.writerow([tweet.created_at, tweet.user.screen_name, tweet.user.followers_count, tweet.text.encode('utf-8')])
counter=counter+1
if num==-1:
pass
elif counter==num:
break
csvFile.close()
print("Fetch Finished")
###Output
Fetch Finished
###Markdown
This creates a CSV file named after the hashtag entered by the user and populates it with recently pulled tweets, up to the number requested by the user. Check the folder for the file named after the hashtag, with a .csv extension, to see the results.
###Code
# importing required libraries
import tweepy
import csv
import datetime
import time
#define and store your credentials (fill in your own keys)
consumer_key = ""
consumer_secret = ""
access_token = ""
access_token_secret = ""
#Tweepy auth
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
api = tweepy.API(auth)
#Enter the keyword (kw) that you want to collect tweets for
kw = input('Enter keyword: ')
#Enter the number of tweets you want to fetch
num = int(input('Number of tweets (enter -1 if you want the max number): '))
# Open/Create a file to append data
csvFile = open(kw+'.csv', 'w+')
#Use csv Writer
csvWriter = csv.writer(csvFile)
#Columns that you want to print in CSV file
csvWriter.writerow(['Tweet_Date', 'user_id','Followers','Tweet_text'])
apicalls = 0
counter = 0
for tweet in tweepy.Cursor(api.search,q=kw,count=100).items():
apicalls = apicalls+1
if (apicalls == 150*100):
print("sleep")
apicalls = 0
time.sleep(15*60)
csvWriter.writerow([tweet.created_at, tweet.user.screen_name, tweet.user.followers_count, tweet.text.encode('utf-8')])
counter=counter+1
if num==-1:
pass
elif counter==num:
break
csvFile.close()
print("Fetch Finished")
###Output
Fetch Finished
|
ch02git/04Publishing.ipynb | ###Markdown
Publishing We're still in our working directory:
###Code
import os
top_dir = os.getcwd()
git_dir = os.path.join(top_dir, 'learning_git')
working_dir=os.path.join(git_dir, 'git_example')
os.chdir(working_dir)
working_dir
###Output
_____no_output_____
###Markdown
Sharing your work So far, all our work has been on our own computer. But a big part of the point of version control is keeping your work safe, on remote servers. Another part is making it easy to share your work with the world In this example, we'll be using the "GitHub" cloud repository to store and publish our work. If you have not done so already, you should create an account on GitHub: go to [https://github.com/](https://github.com/), fill in a username and password, and click on "sign up for free". Creating a repositoryOk, let's create a repository to store our work. Hit "new repository" on the right of the github home screen, or click [here](https://github.com/new). Fill in a short name, and a description. Choose a "public" repository. Don't choose to add a Readme. Paying for GitHubFor this software carpentry course, you should use public repositories in your personal account for your example work: it's good to share! GitHub is free for open source, but in general, charges a fee if you want to keep your work private. In the future, you might want to keep your work on GitHub private. Students can get free private repositories on GitHub, by going to [https://github.com/edu] and filling in a form. UCL pays for private GitHub repositories for UCL research groups: you can find the service details on our [web page](../../infrastructure/github.html). Adding a new remote to your repositoryInstructions will appear, once you've created the repository, as to how to add this new "remote" server to your repository, in the lower box on the screen. Mine say:
###Code
%%bash
git remote add origin [email protected]:UCL/github-example.git
%%bash
git push -uf origin master # I have an extra `f` switch here.
#You should copy the instructions from YOUR repository.
###Output
Branch master set up to track remote branch master from origin.
###Markdown
Remotes The first command sets up the server as a new `remote`, called `origin`. Git, unlike some earlier version control systems, is a "distributed" version control system, which means you can work with multiple remote servers. Usually, commands that work with remotes allow you to specify the remote to use, but assume the `origin` remote if you don't. Here, `git push` will push your whole history onto the server, and now you'll be able to see it on the internet! Refresh your web browser where the instructions were, and you'll see your repository! Let's add these commands to our diagram:
###Code
message="""
Working Directory -> Staging Area : git add
Staging Area -> Local Repository : git commit
Working Directory -> Local Repository : git commit -a
Staging Area -> Working Directory : git checkout
Local Repository -> Staging Area : git reset
Local Repository -> Working Directory: git reset --hard
Local Repository -> Remote Repository : git push
"""
from wsd import wsd
%matplotlib inline
wsd(message)
###Output
_____no_output_____
###Markdown
Playing with GitHubTake a few moments to click around and work your way through the GitHub interface. Try clicking on 'index.md' to see the content of the file: notice how the markdown renders prettily.Click on "commits" near the top of the screen, to see all the changes you've made. Click on the commit number next to the right of a change, to see what changes it includes: removals are shown in red, and additions in green. Working with multiple files Some new contentSo far, we've only worked with one file. Let's add another: ``` bashvim lakeland.md```
###Code
%%writefile lakeland.md
Lakeland
========
Cumbria has some pretty hills, and lakes too.
cat lakeland.md
###Output
Lakeland
========
Cumbria has some pretty hills, and lakes too.
###Markdown
Git will not by default commit your new file
###Code
%%bash
git commit -am "Try to add Lakeland"
###Output
On branch master
Your branch is up-to-date with 'origin/master'.
Untracked files:
lakeland.md
wsd.py
wsd.pyc
nothing added to commit but untracked files present
###Markdown
This didn't do anything, because we've not told git to track the new file yet. Tell git about the new file
###Code
%%bash
git add lakeland.md
git commit -am "Add lakeland"
###Output
[master 76322e5] Add lakeland
1 file changed, 4 insertions(+)
create mode 100644 lakeland.md
###Markdown
Ok, now we have added the change about Cumbria to the file. Let's publish it to the origin repository.
###Code
%%bash
git push
###Output
To [email protected]:UCL/github-example.git
e533bb0..76322e5 master -> master
###Markdown
Visit GitHub, and notice this change is on your repository on the server. We could have said `git push origin` to specify the remote to use, but origin is the default. Changing two files at once What if we change both files?
###Code
%%writefile lakeland.md
Lakeland
========
Cumbria has some pretty hills, and lakes too
Mountains:
* Helvellyn
%%writefile index.md
Mountains and Lakes in the UK
===================
Engerland is not very mountainous.
But has some tall hills, and maybe a
mountain or two depending on your definition.
%%bash
git status
###Output
On branch master
Your branch is up-to-date with 'origin/master'.
Changes not staged for commit:
modified: index.md
modified: lakeland.md
Untracked files:
wsd.py
wsd.pyc
no changes added to commit
###Markdown
These changes should really be separate commits. We can do this with careful use of git add, to **stage** first one commit, then the other.
###Code
%%bash
git add index.md
git commit -m "Include lakes in the scope"
###Output
[master cdd35b8] Include lakes in the scope
1 file changed, 2 insertions(+), 2 deletions(-)
###Markdown
Because we "staged" only index.md, the changes to lakeland.md were not included in that commit.
###Code
%%bash
git commit -am "Add Helvellyn"
%%bash
git log --oneline
%%bash
git push
message="""
participant "Jim's remote" as M
participant "Jim's repo" as R
participant "Jim's index" as I
participant Jim as J
note right of J: vim index.md
note right of J: vim lakeland.md
note right of J: git add index.md
J->I: Add *only* the changes to index.md to the staging area
note right of J: git commit -m "Include lakes"
I->R: Make a commit from currently staged changes: index.md only
note right of J: git commit -am "Add Helvellyn"
J->I: Stage *all remaining* changes, (lakeland.md)
I->R: Make a commit from currently staged changes
note right of J: git push
R->M: Transfer commits to Github
"""
wsd(message)
###Output
_____no_output_____
###Markdown
Publishing We're still in our working directory:
###Code
import os
top_dir = os.getcwd()
git_dir = os.path.join(top_dir, 'learning_git')
working_dir = os.path.join(git_dir, 'git_example')
os.chdir(working_dir)
working_dir
###Output
_____no_output_____
###Markdown
Sharing your work So far, all our work has been on our own computer. But a big part of the point of version control is keeping your work safe, on remote servers. Another part is making it easy to share your work with the world In this example, we'll be using the "GitHub" cloud repository to store and publish our work. If you have not done so already, you should create an account on GitHub: go to [https://github.com/](https://github.com/), fill in a username and password, and click on "sign up for free". Creating a repositoryOk, let's create a repository to store our work. Hit "new repository" on the right of the github home screen, or click [here](https://github.com/new). Fill in a short name, and a description. Choose a "public" repository. Don't choose to add a Readme. Paying for GitHubFor this course, you should use public repositories in your personal account for your example work: it's good to share! GitHub is free for open source, but in general, charges a fee if you want to keep your work private. In the future, you might want to keep your work on GitHub private. Students can get free private repositories on GitHub, by going to [GitHub Education](https://github.com/edu) and filling in a form (look for the Student Developer Pack). UCL pays for private GitHub repositories for UCL research groups: you can find the service details on our [web page](https://www.ucl.ac.uk/isd/services/research-it/research-software-development/github/accessing-github-for-research). Adding a new remote to your repositoryInstructions will appear, once you've created the repository, as to how to add this new "remote" server to your repository, in the lower box on the screen. Mine say:
###Code
%%bash
git remote add origin [email protected]:UCL/github-example.git
%%bash
git push -uf origin master # I have an extra `f` switch here.
#You should copy the instructions from YOUR repository.
###Output
Branch master set up to track remote branch master from origin.
###Markdown
Remotes The first command sets up the server as a new `remote`, called `origin`. Git, unlike some earlier version control systems, is a "distributed" version control system, which means you can work with multiple remote servers. Usually, commands that work with remotes allow you to specify the remote to use, but assume the `origin` remote if you don't. Here, `git push` will push your whole history onto the server, and now you'll be able to see it on the internet! Refresh your web browser where the instructions were, and you'll see your repository! Let's add these commands to our diagram:
###Code
message="""
Working Directory -> Staging Area : git add
Staging Area -> Local Repository : git commit
Working Directory -> Local Repository : git commit -a
Staging Area -> Working Directory : git checkout
Local Repository -> Staging Area : git reset
Local Repository -> Working Directory: git reset --hard
Local Repository -> Remote Repository : git push
"""
from wsd import wsd
%matplotlib inline
wsd(message)
###Output
_____no_output_____
###Markdown
Playing with GitHubTake a few moments to click around and work your way through the GitHub interface. Try clicking on 'index.md' to see the content of the file: notice how the markdown renders prettily.Click on "commits" near the top of the screen, to see all the changes you've made. Click on the commit number next to the right of a change, to see what changes it includes: removals are shown in red, and additions in green. Working with multiple files Some new contentSo far, we've only worked with one file. Let's add another: ``` bashvim lakeland.md```
###Code
%%writefile lakeland.md
Lakeland
========
Cumbria has some pretty hills, and lakes too.
cat lakeland.md
###Output
Lakeland
========
Cumbria has some pretty hills, and lakes too.
###Markdown
Git will not by default commit your new file
###Code
%%bash
git commit -am "Try to add Lakeland"
###Output
On branch master
Your branch is up-to-date with 'origin/master'.
Untracked files:
lakeland.md
wsd.py
wsd.pyc
nothing added to commit but untracked files present
###Markdown
This didn't do anything, because we've not told git to track the new file yet. Tell git about the new file
###Code
%%bash
git add lakeland.md
git commit -am "Add lakeland"
###Output
[master 76322e5] Add lakeland
1 file changed, 4 insertions(+)
create mode 100644 lakeland.md
###Markdown
Ok, now we have added the change about Cumbria to the file. Let's publish it to the origin repository.
###Code
%%bash
git push
###Output
To [email protected]:UCL/github-example.git
e533bb0..76322e5 master -> master
###Markdown
Visit GitHub, and notice this change is on your repository on the server. We could have said `git push origin` to specify the remote to use, but origin is the default. Changing two files at once What if we change both files?
###Code
%%writefile lakeland.md
Lakeland
========
Cumbria has some pretty hills, and lakes too
Mountains:
* Helvellyn
%%writefile index.md
Mountains and Lakes in the UK
===================
Engerland is not very mountainous.
But has some tall hills, and maybe a
mountain or two depending on your definition.
%%bash
git status
###Output
On branch master
Your branch is up-to-date with 'origin/master'.
Changes not staged for commit:
modified: index.md
modified: lakeland.md
Untracked files:
wsd.py
wsd.pyc
no changes added to commit
###Markdown
These changes should really be separate commits. We can do this with careful use of git add, to **stage** first one commit, then the other.
###Code
%%bash
git add index.md
git commit -m "Include lakes in the scope"
###Output
[master cdd35b8] Include lakes in the scope
1 file changed, 2 insertions(+), 2 deletions(-)
###Markdown
Because we "staged" only index.md, the changes to lakeland.md were not included in that commit.
###Code
%%bash
git commit -am "Add Helvellyn"
%%bash
git log --oneline
%%bash
git push
message="""
participant "Jim's remote" as M
participant "Jim's repo" as R
participant "Jim's index" as I
participant Jim as J
note right of J: vim index.md
note right of J: vim lakeland.md
note right of J: git add index.md
J->I: Add *only* the changes to index.md to the staging area
note right of J: git commit -m "Include lakes"
I->R: Make a commit from currently staged changes: index.md only
note right of J: git commit -am "Add Helvellyn"
J->I: Stage *all remaining* changes, (lakeland.md)
I->R: Make a commit from currently staged changes
note right of J: git push
R->M: Transfer commits to Github
"""
wsd(message)
###Output
_____no_output_____
###Markdown
Publishing We're still in our working directory:
###Code
import os
top_dir = os.getcwd()
git_dir = os.path.join(top_dir, 'learning_git')
working_dir = os.path.join(git_dir, 'git_example')
os.chdir(working_dir)
working_dir
###Output
_____no_output_____
###Markdown
Sharing your work So far, all our work has been on our own computer. But a big part of the point of version control is keeping your work safe, on remote servers. Another part is making it easy to share your work with the world In this example, we'll be using the "GitHub" cloud repository to store and publish our work. If you have not done so already, you should create an account on GitHub: go to [https://github.com/](https://github.com/), fill in a username and password, and click on "sign up for free". Creating a repositoryOk, let's create a repository to store our work. Hit "new repository" on the right of the github home screen, or click [here](https://github.com/new). Fill in a short name, and a description. Choose a "public" repository. Don't choose to add a Readme. Paying for GitHubFor this course, you should use public repositories in your personal account for your example work: it's good to share! GitHub is free for open source, but in general, charges a fee if you want to keep your work private. In the future, you might want to keep your work on GitHub private. Students can get free private repositories on GitHub, by going to [GitHub Education](https://github.com/edu) and filling in a form (look for the Student Developer Pack). Adding a new remote to your repositoryInstructions will appear, once you've created the repository, as to how to add this new "remote" server to your repository:
###Code
%%bash
git remote add origin https://${GITHUB_TOKEN}@github.com/alan-turing-institute/github-example.git
%%bash
git push -uf origin master # I have an extra `f` switch here.
#You should copy the instructions from YOUR repository.
###Output
Branch 'master' set up to track remote branch 'master' from 'origin'.
###Markdown
Remotes The first command sets up the server as a new `remote`, called `origin`. Git, unlike some earlier version control systems, is a "distributed" version control system, which means you can work with multiple remote servers. Usually, commands that work with remotes allow you to specify the remote to use, but assume the `origin` remote if you don't. Here, `git push` will push your whole history onto the server, and now you'll be able to see it on the internet! Refresh your web browser where the instructions were, and you'll see your repository! Let's add these commands to our diagram:
###Code
message="""
Working Directory -> Staging Area : git add
Staging Area -> Local Repository : git commit
Working Directory -> Local Repository : git commit -a
Staging Area -> Working Directory : git checkout
Local Repository -> Staging Area : git reset
Local Repository -> Working Directory: git reset --hard
Local Repository -> Remote Repository : git push
"""
from wsd import wsd
%matplotlib inline
wsd(message)
###Output
_____no_output_____
###Markdown
Playing with GitHubTake a few moments to click around and work your way through the GitHub interface. Try clicking on 'test.md' to see the content of the file: notice how the markdown renders prettily.Click on "commits" near the top of the screen, to see all the changes you've made. Click on the commit number next to the right of a change, to see what changes it includes: removals are shown in red, and additions in green. Working with multiple files Some new contentSo far, we've only worked with one file. Let's add another: ``` bashvim lakeland.md```
###Code
%%writefile lakeland.md
Lakeland
========
Cumbria has some pretty hills, and lakes too.
cat lakeland.md
###Output
Lakeland
========
Cumbria has some pretty hills, and lakes too.
###Markdown
Git will not by default commit your new file
###Code
%%bash
git commit -am "Try to add Lakeland"
###Output
On branch master
Your branch is up to date with 'origin/master'.
Untracked files:
(use "git add <file>..." to include in what will be committed)
__pycache__/
lakeland.md
wsd.py
nothing added to commit but untracked files present (use "git add" to track)
###Markdown
This didn't do anything, because we've not told git to track the new file yet. Tell git about the new file
###Code
%%bash
git add lakeland.md
git commit -am "Add lakeland"
###Output
[master 7c1bdbf] Add lakeland
1 file changed, 4 insertions(+)
create mode 100644 lakeland.md
###Markdown
Ok, now we have added the change about Cumbria to the file. Let's publish it to the origin repository.
###Code
%%bash
git push
###Output
To https://github.com/alan-turing-institute/github-example.git
b6fe910..7c1bdbf master -> master
###Markdown
Visit GitHub, and notice this change is on your repository on the server. We could have said `git push origin` to specify the remote to use, but origin is the default. Changing two files at once What if we change both files?
###Code
%%writefile lakeland.md
Lakeland
========
Cumbria has some pretty hills, and lakes too
Mountains:
* Helvellyn
%%writefile test.md
Mountains and Lakes in the UK
===================
Engerland is not very mountainous.
But has some tall hills, and maybe a
mountain or two depending on your definition.
%%bash
git status
###Output
On branch master
Your branch is up to date with 'origin/master'.
Changes not staged for commit:
(use "git add <file>..." to update what will be committed)
(use "git restore <file>..." to discard changes in working directory)
modified: test.md
modified: lakeland.md
Untracked files:
(use "git add <file>..." to include in what will be committed)
__pycache__/
wsd.py
no changes added to commit (use "git add" and/or "git commit -a")
###Markdown
These changes should really be separate commits. We can do this with careful use of git add, to **stage** first one commit, then the other.
###Code
%%bash
git add test.md
git commit -m "Include lakes in the scope"
###Output
[master d0417fc] Include lakes in the scope
1 file changed, 4 insertions(+), 3 deletions(-)
###Markdown
Because we "staged" only test.md, the changes to lakeland.md were not included in that commit.
###Code
%%bash
git commit -am "Add Helvellyn"
%%bash
git log --oneline
%%bash
git push
message="""
participant "Jim's remote" as M
participant "Jim's repo" as R
participant "Jim's index" as I
participant Jim as J
note right of J: vim test.md
note right of J: vim lakeland.md
note right of J: git add test.md
J->I: Add *only* the changes to test.md to the staging area
note right of J: git commit -m "Include lakes"
I->R: Make a commit from currently staged changes: test.md only
note right of J: git commit -am "Add Helvellyn"
J->I: Stage *all remaining* changes, (lakeland.md)
I->R: Make a commit from currently staged changes
note right of J: git push
R->M: Transfer commits to Github
"""
wsd(message)
###Output
_____no_output_____
###Markdown
Publishing We're still in our working directory:
###Code
import os
top_dir = os.getcwd()
git_dir = os.path.join(top_dir, 'learning_git')
working_dir = os.path.join(git_dir, 'git_example')
os.chdir(working_dir)
working_dir
###Output
_____no_output_____
###Markdown
Sharing your work So far, all our work has been on our own computer. But a big part of the point of version control is keeping your work safe, on remote servers. Another part is making it easy to share your work with the world In this example, we'll be using the "GitHub" cloud repository to store and publish our work. If you have not done so already, you should create an account on [GitHub](https://github.com/): go to [GitHub's website](https://github.com/), fill in a username and password, and click on "sign up for GitHub". Creating a repositoryOk, let's create a repository to store our work. Hit "[new repository](https://github.com/new)" on the right of the github home screen.Fill in a short name, and a description. Choose a "public" repository. Don't choose to initialize the repository with a README. That will create a repository with content and we only want a placeholder where to upload what we've created locally. Paying for GitHubFor this course, you should use public repositories in your personal account for your example work: it's good to share! GitHub is free for open source, but in general, charges a fee if you want to keep your work private. In the future, you might want to keep your work on GitHub private. Students can get free private repositories on GitHub, by going to [GitHub Education](https://education.github.com/) and filling in a form (look for the Student Developer Pack). UCL pays for private GitHub repositories for UCL research groups: you can find the service details on the [Research Software Development Group's website](https://www.ucl.ac.uk/isd/services/research-it/research-software-development-tools/support-for-ucl-researchers-to-use-github). Adding a new remote to your repositoryInstructions will appear, once you've created the repository, as to how to add this new "remote" server to your repository, in the lower box on the screen. Mine say:
###Code
%%bash
git remote add origin [email protected]:UCL/github-example.git
%%bash
git push -uf origin master # I have an extra `f` switch here.
#You should copy the instructions from YOUR repository.
###Output
Branch master set up to track remote branch master from origin.
###Markdown
Remotes The first command sets up the server as a new `remote`, called `origin`. Git, unlike some earlier version control systems, is a "distributed" version control system, which means you can work with multiple remote servers. Usually, commands that work with remotes allow you to specify the remote to use, but assume the `origin` remote if you don't. Here, `git push` will push your whole history onto the server, and now you'll be able to see it on the internet! Refresh your web browser where the instructions were, and you'll see your repository! Let's add these commands to our diagram:
###Code
message="""
Working Directory -> Staging Area : git add
Staging Area -> Local Repository : git commit
Working Directory -> Local Repository : git commit -a
Staging Area -> Working Directory : git checkout
Local Repository -> Staging Area : git reset
Local Repository -> Working Directory: git reset --hard
Local Repository -> Remote Repository : git push
"""
from wsd import wsd
%matplotlib inline
wsd(message)
###Output
_____no_output_____
###Markdown
Playing with GitHubTake a few moments to click around and work your way through the GitHub interface. Try clicking on 'index.md' to see the content of the file: notice how the markdown renders prettily.Click on "commits" near the top of the screen, to see all the changes you've made. Click on the commit number next to the right of a change, to see what changes it includes: removals are shown in red, and additions in green. Working with multiple files Some new contentSo far, we've only worked with one file. Let's add another: ``` bashvim lakeland.md```
###Code
%%writefile lakeland.md
Lakeland
========
Cumbria has some pretty hills, and lakes too.
cat lakeland.md
###Output
Lakeland
========
Cumbria has some pretty hills, and lakes too.
###Markdown
Git will not by default commit your new file
###Code
%%bash --no-raise-error
git commit -am "Try to add Lakeland"
###Output
On branch master
Your branch is up-to-date with 'origin/master'.
Untracked files:
lakeland.md
wsd.py
wsd.pyc
nothing added to commit but untracked files present
###Markdown
This didn't do anything, because we've not told git to track the new file yet. Tell git about the new file
###Code
%%bash
git add lakeland.md
git commit -am "Add lakeland"
###Output
[master 76322e5] Add lakeland
1 file changed, 4 insertions(+)
create mode 100644 lakeland.md
###Markdown
Ok, now we have added the change about Cumbria to the file. Let's publish it to the origin repository.
###Code
%%bash
git push
###Output
To [email protected]:UCL/github-example.git
e533bb0..76322e5 master -> master
###Markdown
Visit GitHub, and notice this change is on your repository on the server. We could have said `git push origin` to specify the remote to use, but origin is the default. Changing two files at once What if we change both files?
###Code
%%writefile lakeland.md
Lakeland
========
Cumbria has some pretty hills, and lakes too
Mountains:
* Helvellyn
%%writefile index.md
Mountains and Lakes in the UK
===================
Engerland is not very mountainous.
But has some tall hills, and maybe a
mountain or two depending on your definition.
%%bash
git status
###Output
On branch master
Your branch is up-to-date with 'origin/master'.
Changes not staged for commit:
modified: index.md
modified: lakeland.md
Untracked files:
wsd.py
wsd.pyc
no changes added to commit
###Markdown
These changes should really be separate commits. We can do this with careful use of git add, to **stage** first one commit, then the other.
###Code
%%bash
git add index.md
git commit -m "Include lakes in the scope"
###Output
[master cdd35b8] Include lakes in the scope
1 file changed, 2 insertions(+), 2 deletions(-)
###Markdown
Because we "staged" only index.md, the changes to lakeland.md were not included in that commit.
###Code
%%bash
git commit -am "Add Helvellyn"
%%bash
git log --oneline
%%bash
git push
message="""
participant "Cleese's remote" as M
participant "Cleese's repo" as R
participant "Cleese's index" as I
participant Cleese as C
note right of C: vim index.md
note right of C: vim lakeland.md
note right of C: git add index.md
C->I: Add *only* the changes to index.md to the staging area
note right of C: git commit -m "Include lakes"
I->R: Make a commit from currently staged changes: index.md only
note right of C: git commit -am "Add Helvellyn"
C->I: Stage *all remaining* changes, (lakeland.md)
I->R: Make a commit from currently staged changes
note right of C: git push
R->M: Transfer commits to Github
"""
wsd(message)
###Output
_____no_output_____
###Markdown
Publishing We're still in our working directory:
###Code
import os
top_dir = os.getcwd()
git_dir = os.path.join(top_dir, 'learning_git')
working_dir = os.path.join(git_dir, 'git_example')
os.chdir(working_dir)
working_dir
###Output
_____no_output_____
###Markdown
Sharing your work So far, all our work has been on our own computer. But a big part of the point of version control is keeping your work safe, on remote servers. Another part is making it easy to share your work with the world In this example, we'll be using the "GitHub" cloud repository to store and publish our work. If you have not done so already, you should create an account on GitHub: go to [https://github.com/](https://github.com/), fill in a username and password, and click on "sign up for free". Creating a repositoryOk, let's create a repository to store our work. Hit "new repository" on the right of the github home screen, or click [here](https://github.com/new). Fill in a short name, and a description. Choose a "public" repository. Don't choose to add a Readme. Paying for GitHubFor this course, you should use public repositories in your personal account for your example work: it's good to share! GitHub is free for open source, but in general, charges a fee if you want to keep your work private. In the future, you might want to keep your work on GitHub private. Students can get free private repositories on GitHub, by going to [GitHub Education](https://github.com/edu) and filling in a form (look for the Student Developer Pack). Adding a new remote to your repositoryInstructions will appear, once you've created the repository, as to how to add this new "remote" server to your repository:
###Code
%%bash
git remote add origin https://${GITHUB_TOKEN}@github.com/alan-turing-institute/github-example.git
%%bash
git push -uf origin master # I have an extra `f` switch here.
#You should copy the instructions from YOUR repository.
###Output
[email protected]: Permission denied (publickey).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
###Markdown
Remotes The first command sets up the server as a new `remote`, called `origin`. Git, unlike some earlier version control systems, is a "distributed" version control system, which means you can work with multiple remote servers. Usually, commands that work with remotes allow you to specify the remote to use, but assume the `origin` remote if you don't. Here, `git push` will push your whole history onto the server, and now you'll be able to see it on the internet! Refresh your web browser where the instructions were, and you'll see your repository! Let's add these commands to our diagram:
###Code
message="""
Working Directory -> Staging Area : git add
Staging Area -> Local Repository : git commit
Working Directory -> Local Repository : git commit -a
Staging Area -> Working Directory : git checkout
Local Repository -> Staging Area : git reset
Local Repository -> Working Directory: git reset --hard
Local Repository -> Remote Repository : git push
"""
from wsd import wsd
%matplotlib inline
wsd(message)
###Output
_____no_output_____
###Markdown
Playing with GitHubTake a few moments to click around and work your way through the GitHub interface. Try clicking on 'index.md' to see the content of the file: notice how the markdown renders prettily.Click on "commits" near the top of the screen, to see all the changes you've made. Click on the commit number next to the right of a change, to see what changes it includes: removals are shown in red, and additions in green. Working with multiple files Some new contentSo far, we've only worked with one file. Let's add another: ``` bashvim lakeland.md```
###Code
%%writefile lakeland.md
Lakeland
========
Cumbria has some pretty hills, and lakes too.
cat lakeland.md
###Output
Lakeland
========
Cumbria has some pretty hills, and lakes too.
###Markdown
Git will not by default commit your new file
###Code
%%bash
git commit -am "Try to add Lakeland"
###Output
On branch master
Your branch is up-to-date with 'origin/master'.
Untracked files:
lakeland.md
wsd.py
wsd.pyc
nothing added to commit but untracked files present
###Markdown
This didn't do anything, because we've not told git to track the new file yet. Tell git about the new file
###Code
%%bash
git add lakeland.md
git commit -am "Add lakeland"
###Output
[master 76322e5] Add lakeland
1 file changed, 4 insertions(+)
create mode 100644 lakeland.md
###Markdown
Ok, now we have added the change about Cumbria to the file. Let's publish it to the origin repository.
###Code
%%bash
git push
###Output
To [email protected]:UCL/github-example.git
e533bb0..76322e5 master -> master
###Markdown
Visit GitHub, and notice this change is on your repository on the server. We could have said `git push origin` to specify the remote to use, but origin is the default. Changing two files at once What if we change both files?
###Code
%%writefile lakeland.md
Lakeland
========
Cumbria has some pretty hills, and lakes too
Mountains:
* Helvellyn
%%writefile index.md
Mountains and Lakes in the UK
===================
Engerland is not very mountainous.
But has some tall hills, and maybe a
mountain or two depending on your definition.
%%bash
git status
###Output
On branch master
Your branch is up-to-date with 'origin/master'.
Changes not staged for commit:
modified: index.md
modified: lakeland.md
Untracked files:
wsd.py
wsd.pyc
no changes added to commit
###Markdown
These changes should really be separate commits. We can do this with careful use of git add, to **stage** first one commit, then the other.
###Code
%%bash
git add index.md
git commit -m "Include lakes in the scope"
###Output
[master cdd35b8] Include lakes in the scope
1 file changed, 2 insertions(+), 2 deletions(-)
###Markdown
Because we "staged" only index.md, the changes to lakeland.md were not included in that commit.
###Code
%%bash
git commit -am "Add Helvellyn"
%%bash
git log --oneline
%%bash
git push
message="""
participant "Jim's remote" as M
participant "Jim's repo" as R
participant "Jim's index" as I
participant Jim as J
note right of J: vim index.md
note right of J: vim lakeland.md
note right of J: git add index.md
J->I: Add *only* the changes to index.md to the staging area
note right of J: git commit -m "Include lakes"
I->R: Make a commit from currently staged changes: index.md only
note right of J: git commit -am "Add Helvellyn"
J->I: Stage *all remaining* changes, (lakeland.md)
I->R: Make a commit from currently staged changes
note right of J: git push
R->M: Transfer commits to Github
"""
wsd(message)
###Output
_____no_output_____
###Markdown
Publishing We're still in our working directory:
###Code
import os
top_dir = os.getcwd()
git_dir = os.path.join(top_dir, 'learning_git')
working_dir = os.path.join(git_dir, 'git_example')
os.chdir(working_dir)
working_dir
###Output
_____no_output_____
###Markdown
Sharing your work So far, all our work has been on our own computer. But a big part of the point of version control is keeping your work safe, on remote servers. Another part is making it easy to share your work with the world In this example, we'll be using the "GitHub" cloud repository to store and publish our work. If you have not done so already, you should create an account on GitHub: go to [https://github.com/](https://github.com/), fill in a username and password, and click on "sign up for free". Creating a repositoryOk, let's create a repository to store our work. Hit "new repository" on the right of the github home screen, or click [here](https://github.com/new). Fill in a short name, and a description. Choose a "public" repository. Don't choose to add a Readme. Paying for GitHubFor this course, you should use public repositories in your personal account for your example work: it's good to share! GitHub is free for open source, but in general, charges a fee if you want to keep your work private. In the future, you might want to keep your work on GitHub private. Students can get free private repositories on GitHub, by going to [GitHub Education](https://github.com/edu) and filling in a form (look for the Student Developer Pack). Adding a new remote to your repositoryInstructions will appear, once you've created the repository, as to how to add this new "remote" server to your repository:
###Code
%%bash
git remote add origin https://${GITHUB_TOKEN}@github.com/alan-turing-institute/github-example.git
%%bash
git push -uf origin master # I have an extra `f` switch here.
#You should copy the instructions from YOUR repository.
###Output
Branch 'master' set up to track remote branch 'master' from 'origin'.
###Markdown
Remotes The first command sets up the server as a new `remote`, called `origin`. Git, unlike some earlier version control systems, is a "distributed" version control system, which means you can work with multiple remote servers. Usually, commands that work with remotes allow you to specify the remote to use, but assume the `origin` remote if you don't. Here, `git push` will push your whole history onto the server, and now you'll be able to see it on the internet! Refresh your web browser where the instructions were, and you'll see your repository! Let's add these commands to our diagram:
###Code
message="""
Working Directory -> Staging Area : git add
Staging Area -> Local Repository : git commit
Working Directory -> Local Repository : git commit -a
Staging Area -> Working Directory : git checkout
Local Repository -> Staging Area : git reset
Local Repository -> Working Directory: git reset --hard
Local Repository -> Remote Repository : git push
"""
from wsd import wsd
%matplotlib inline
wsd(message)
###Output
_____no_output_____
###Markdown
Playing with GitHubTake a few moments to click around and work your way through the GitHub interface. Try clicking on 'index.md' to see the content of the file: notice how the markdown renders prettily.Click on "commits" near the top of the screen, to see all the changes you've made. Click on the commit number next to the right of a change, to see what changes it includes: removals are shown in red, and additions in green. Working with multiple files Some new contentSo far, we've only worked with one file. Let's add another: ``` bashvim lakeland.md```
###Code
%%writefile lakeland.md
Lakeland
========
Cumbria has some pretty hills, and lakes too.
cat lakeland.md
###Output
Lakeland
========
Cumbria has some pretty hills, and lakes too.
###Markdown
Git will not by default commit your new file
###Code
%%bash
git commit -am "Try to add Lakeland"
###Output
On branch master
Your branch is up to date with 'origin/master'.
Untracked files:
(use "git add <file>..." to include in what will be committed)
__pycache__/
lakeland.md
wsd.py
nothing added to commit but untracked files present (use "git add" to track)
###Markdown
This didn't do anything, because we've not told git to track the new file yet. Tell git about the new file
###Code
%%bash
git add lakeland.md
git commit -am "Add lakeland"
###Output
[master 7c1bdbf] Add lakeland
1 file changed, 4 insertions(+)
create mode 100644 lakeland.md
###Markdown
Ok, now we have added the change about Cumbria to the file. Let's publish it to the origin repository.
###Code
%%bash
git push
###Output
To https://github.com/alan-turing-institute/github-example.git
b6fe910..7c1bdbf master -> master
###Markdown
Visit GitHub, and notice this change is on your repository on the server. We could have said `git push origin` to specify the remote to use, but origin is the default. Changing two files at once What if we change both files?
###Code
%%writefile lakeland.md
Lakeland
========
Cumbria has some pretty hills, and lakes too
Mountains:
* Helvellyn
%%writefile index.md
Mountains and Lakes in the UK
===================
Engerland is not very mountainous.
But has some tall hills, and maybe a
mountain or two depending on your definition.
%%bash
git status
###Output
On branch master
Your branch is up to date with 'origin/master'.
Changes not staged for commit:
(use "git add <file>..." to update what will be committed)
(use "git restore <file>..." to discard changes in working directory)
modified: index.md
modified: lakeland.md
Untracked files:
(use "git add <file>..." to include in what will be committed)
__pycache__/
wsd.py
no changes added to commit (use "git add" and/or "git commit -a")
###Markdown
These changes should really be separate commits. We can do this with careful use of git add, to **stage** first one commit, then the other.
###Code
%%bash
git add index.md
git commit -m "Include lakes in the scope"
###Output
[master d0417fc] Include lakes in the scope
1 file changed, 4 insertions(+), 3 deletions(-)
###Markdown
Because we "staged" only index.md, the changes to lakeland.md were not included in that commit.
###Code
%%bash
git commit -am "Add Helvellyn"
%%bash
git log --oneline
%%bash
git push
message="""
participant "Jim's remote" as M
participant "Jim's repo" as R
participant "Jim's index" as I
participant Jim as J
note right of J: vim index.md
note right of J: vim lakeland.md
note right of J: git add index.md
J->I: Add *only* the changes to index.md to the staging area
note right of J: git commit -m "Include lakes"
I->R: Make a commit from currently staged changes: index.md only
note right of J: git commit -am "Add Helvellyn"
J->I: Stage *all remaining* changes, (lakeland.md)
I->R: Make a commit from currently staged changes
note right of J: git push
R->M: Transfer commits to Github
"""
wsd(message)
###Output
_____no_output_____
###Markdown
Publishing We're still in our working directory:
###Code
import os
top_dir = os.getcwd()
git_dir = os.path.join(top_dir, 'learning_git')
working_dir = os.path.join(git_dir, 'git_example')
os.chdir(working_dir)
working_dir
###Output
_____no_output_____
###Markdown
Sharing your work So far, all our work has been on our own computer. But a big part of the point of version control is keeping your work safe, on remote servers. Another part is making it easy to share your work with the world In this example, we'll be using the "GitHub" cloud repository to store and publish our work. If you have not done so already, you should create an account on GitHub: go to [https://github.com/](https://github.com/), fill in a username and password, and click on "sign up for free". Creating a repositoryOk, let's create a repository to store our work. Hit "new repository" on the right of the github home screen, or click [here](https://github.com/new). Fill in a short name, and a description. Choose a "public" repository. Don't choose to add a Readme. Paying for GitHubFor this course, you should use public repositories in your personal account for your example work: it's good to share! GitHub is free for open source, but in general, charges a fee if you want to keep your work private. In the future, you might want to keep your work on GitHub private. Students can get free private repositories on GitHub, by going to [GitHub Education](https://github.com/edu) and filling in a form (look for the Student Developer Pack). Adding a new remote to your repositoryInstructions will appear, once you've created the repository, as to how to add this new "remote" server to your repository. If you are using the token method to connect to GitHub it will be something like the following:
###Code
%%bash
git remote add origin https://${GITHUB_TOKEN}@github.com/alan-turing-institute/github-example.git
%%bash
git push -uf origin main # Note we use the '-f' flag here to force an update
###Output
Branch 'main' set up to track remote branch 'main' from 'origin'.
###Markdown
Remotes The first command sets up the server as a new `remote`, called `origin`. Git, unlike some earlier version control systems, is a "distributed" version control system, which means you can work with multiple remote servers. Usually, commands that work with remotes allow you to specify the remote to use, but assume the `origin` remote if you don't. Here, `git push` will push your whole history onto the server, and now you'll be able to see it on the internet! Refresh your web browser where the instructions were, and you'll see your repository! Let's add these commands to our diagram:
###Code
message="""
Working Directory -> Staging Area : git add
Staging Area -> Local Repository : git commit
Working Directory -> Local Repository : git commit -a
Local Repository -> Working Directory : git checkout
Local Repository -> Staging Area : git reset
Local Repository -> Working Directory: git reset --hard
Local Repository -> Remote Repository : git push
"""
from wsd import wsd
%matplotlib inline
wsd(message)
###Output
_____no_output_____
###Markdown
Playing with GitHubTake a few moments to click around and work your way through the GitHub interface. Try clicking on 'test.md' to see the content of the file: notice how the markdown renders prettily.Click on "commits" near the top of the screen, to see all the changes you've made. Click on the commit number next to the right of a change, to see what changes it includes: removals are shown in red, and additions in green. Working with multiple files Some new contentSo far, we've only worked with one file. Let's add another: ``` bashvim lakeland.md```
###Code
%%writefile lakeland.md
Lakeland
========
Cumbria has some pretty hills, and lakes too.
cat lakeland.md
###Output
Lakeland
========
Cumbria has some pretty hills, and lakes too.
###Markdown
Git will not by default commit your new file
###Code
%%bash
git commit -am "Try to add Lakeland"
###Output
On branch main
Your branch is up to date with 'origin/main'.
Untracked files:
(use "git add <file>..." to include in what will be committed)
__pycache__/
lakeland.md
wsd.py
nothing added to commit but untracked files present (use "git add" to track)
###Markdown
This failed, because we've not told git to track the new file yet. Tell git about the new file
###Code
%%bash
git add lakeland.md
git commit -am "Add lakeland"
###Output
[main 0a61df6] Add lakeland
1 file changed, 4 insertions(+)
create mode 100644 lakeland.md
###Markdown
Ok, now we have added the change about Cumbria to the file. Let's publish it to the origin repository.
###Code
%%bash
git push
###Output
To https://github.com/alan-turing-institute/github-example.git
b5d36db..0a61df6 main -> main
###Markdown
Visit GitHub, and notice this change is on your repository on the server. We could have said `git push origin` to specify the remote to use, but origin is the default. Changing two files at once What if we change both files?
###Code
%%writefile lakeland.md
Lakeland
========
Cumbria has some pretty hills, and lakes too
Mountains:
* Helvellyn
%%writefile test.md
Mountains and Lakes in the UK
===================
Engerland is not very mountainous.
But has some tall hills, and maybe a
mountain or two depending on your definition.
%%bash
git status
###Output
On branch main
Your branch is up to date with 'origin/main'.
Changes not staged for commit:
(use "git add <file>..." to update what will be committed)
(use "git restore <file>..." to discard changes in working directory)
modified: lakeland.md
modified: test.md
Untracked files:
(use "git add <file>..." to include in what will be committed)
__pycache__/
wsd.py
no changes added to commit (use "git add" and/or "git commit -a")
###Markdown
These changes should really be separate commits. We can do this with careful use of git add, to **stage** first one commit, then the other.
###Code
%%bash
git add test.md
git commit -m "Include lakes in the scope"
###Output
[main 311761b] Include lakes in the scope
1 file changed, 4 insertions(+), 3 deletions(-)
###Markdown
Because we "staged" only test.md, the changes to lakeland.md were not included in that commit.
###Code
%%bash
git commit -am "Add Helvellyn"
%%bash
git log --oneline
%%bash
git push
message="""
participant "Jim's remote" as M
participant "Jim's repo" as R
participant "Jim's index" as I
participant Jim as J
note right of J: vim test.md
note right of J: vim lakeland.md
note right of J: git add test.md
J->I: Add *only* the changes to test.md to the staging area
note right of J: git commit -m "Include lakes"
I->R: Make a commit from currently staged changes: test.md only
note right of J: git commit -am "Add Helvellyn"
J->I: Stage *all remaining* changes, (lakeland.md)
I->R: Make a commit from currently staged changes
note right of J: git push
R->M: Transfer commits to Github
"""
wsd(message)
###Output
_____no_output_____ |
2021_12_15_ccmi_single_cell_analysis_workshop/1. Introduction to GenePattern Notebook/2021-12-15_intro-to-gpnb_no-modules.ipynb | ###Markdown
Introduction to the GenePattern NotebookAlexander T. [email protected] this notebook, we'll be reviewing tools for profiling an RNA-sequencing (or RNA-seq) experiment. We will first quantify transcript abundance in our samples using the pseudoaligner [Salmon](https://salmon.readthedocs.io/en/latest/salmon.html) and aggregate to gene-level count estimates using [tximport](https://bioconductor.org/packages/release/bioc/html/tximport.html). Using these counts, we will look for differentially expressed genes in our perturbation samples relative to our control samples using [DESeq2](https://bioconductor.org/packages/release/bioc/html/DESeq2.html) and visualize the results using the DifferentialExpressionViewer GenePattern module. Finally, we will test for up- or down-regulated pathways and processes using [Gene Set Enrichment Analysis (GSEA)](http://www.gsea-msigdb.org/gsea/index.jsp).The data for this example are 6 samples from an [experiment](https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE119088) testing the effects of an inhibitor of *EZH2* on a variety of cancer cell lines [**1**]. *EZH2* is a component of complex which regulates transcription via methylation, and frequently has activating mutations in lymphomas [**2**]. Here we will focus on the U2932 Diffuse Large B Cell Lymphoma data. Workflow overview  **Instructions in workshop notebooks**Along with the modules and code cells which encompass an analysis, computational notebooks often include lots of text to provide background, context, and interpretation of the analysis. In this notebook and all other notebooks in this workshop, we will highlight instructions in cells with blue backgrounds. When in doubt on how to proceed with the analysis, look for instruction cells like this one: Instructions I am an instruction cell. Activate demo modeGenePattern modules run "headlessly" in the Cloud. This enhances the ability of the Notebook environment to serve as an encapsulation of the entire scientific workflow, since analysis components that require larger amounts of time or memory can be sent off to appropriately provisioned cloud servers without relying on the continuity of the notebook kernel. This means that, for example, you can start a sequencing alignment job, turn off your notebook, open the notebook hours later, and view the results of the alignment.Some of the modules in this notebook will take several minutes to hours to run. Therefore, we recommend that you activate "demo mode" for the duration of this workshop. This means that, when you click "Run" on a module, the notebook will first check if there are example job results that match the input parameters, and, if they exist, will immediately return those results to the notebook. This lets us keep to the workshop schedule while keeping the changes to the user experience minimal. If you want to actually run any of these modules with different parameters or inputs, or to just test out running modules, then **do not** click on "Run" in the interface below this cell. If you have demo mode already activated and want to turn it off, then click `Kernel -> Restart`.
###Code
import nbtools
import demo_mode
from IPython.display import display, HTML
## Alex
%load_ext autoreload
## Alex
%autoreload 2
@nbtools.build_ui(parameters={'output_var': {'hide': True}})
def activate_demo_mode():
"""Enable demo mode for this notebook"""
# Any job you enable as a demo job will need to be listed here,
# along with the module name and any parameters you want matched.
# Parameters that are not listed will not be matched. You can
# list the same module multiple times, assuming you use different
# parameter match sets.
"""
demo_mode.set_demo_jobs([
{
'name': 'Conos.Preprocess',
'job': 389201, # Make sure to set the permissions of any demo job to 'public'
'params': {
'knn': '40', # Demo mode should be able to match between int and string, but when in doubt use string
'perplexity': 50
}
},
{
'name': 'Conos.Cluster',
'job': 389530,
'params': {
'conos_object': 'conos_preprocess_output.rds', # For file parameters, just list the file name, not URL
'runleiden': 'True'
}
}
])
"""
demo_mode.set_demo_jobs([
{
"name": "Salmon.Quant",
#"job": 82034,
#"job": 82039,
"job": 398990,
"params": {
#"Reads": "Reads.list.txt",
"Reads": [
"SRR7758650-DMSO_R1.fastq.gz",
"SRR7758650-DMSO_R2.fastq.gz",
"SRR7758651-DMSO_R1.fastq.gz",
"SRR7758651-DMSO_R2.fastq.gz",
"SRR7758652-DMSO_R1.fastq.gz",
"SRR7758652-DMSO_R2.fastq.gz",
"SRR7758653-EPZ6438_R1.fastq.gz",
"SRR7758653-EPZ6438_R2.fastq.gz",
"SRR7758654-EPZ6438_R1.fastq.gz",
"SRR7758654-EPZ6438_R2.fastq.gz",
"SRR7758655-EPZ6438_R1.fastq.gz",
"SRR7758655-EPZ6438_R2.fastq.gz"
]
}
},
{
#"name": "tximport.DESeq2.Normalize",
"name": "tximport.DESeq2",
#"job": 82042,
"job": 399022,
"params": {
"Quantifications": [
"SRR7758650-DMSO.quant.sf",
"SRR7758651-DMSO.quant.sf",
"SRR7758652-DMSO.quant.sf",
"SRR7758653-EPZ6438.quant.sf",
"SRR7758654-EPZ6438.quant.sf",
"SRR7758655-EPZ6438.quant.sf"
]
}
},
{
"name": "CollapseDataset",
#"job": 82869,
#"job": 82994,
"job": 399032,
"params": {
"dataset.file": "Merged.Data.Normalized.Counts.gct"
}
},
{
"name": "txt2odf.20211130.colparams",
#"job": 82577,
#"job": 82798,
#"job": 82870,
#"job": 82987,
#"job": 82996,
#"job": 82997,
"job": 399034,
"params": {
"txt_file": "Merged.Data.Differential.Expression.txt",
#"txt_file": "Merged.Data.Differential.Expression_filtered.txt",
"id_col": "Gene_Symbol"
}
},
#{
# "name": "DifferentialExpressionViewer",
# "job": 399036,
# "params": {
# "differential.expression.filename": "Merged.Data.Differential.Expression_filtered.odf",
# "dataset.filename": "Merged.Data.Differential.Expression_pruned_filtered.gct"
# }
#},
{
"name": "GSEA",
#"job": 82800,
"job": 399037,
"params": {
#"expression.dataset": "Merged.Data.Normalized.Counts.gct",
"expression.dataset": "Merged.Data.Differential.Expression_pruned_filtered.gct",
"gene.sets.database": "c6.all.v7.4.symbols.gmt"
}
}
])
# To activate demo mode, just call activate(). This example wraps
# the activation call behind a UI Builder cell, but you could have
# it called in different ways.
demo_mode.activate()
display(HTML('<div class="alert alert-success">Demo mode activated</div>'))
# The code in this call has been left expanded for tutorial purposes
###Output
_____no_output_____
###Markdown
Login to GenePattern Instructions Insert a new cell with the + button. Click on "Cell -> CellType -> GenePattern". Log in using your GenePattern account.
###Code
# Requires GenePattern Notebook: pip install genepattern-notebook
import gp
import genepattern
# Username and password removed for security reasons.
genepattern.display(genepattern.session.register("https://cloud.genepattern.org/gp", "", ""))
###Output
_____no_output_____
###Markdown
Get transcript abundances with SalmonHere we will run Salmon.Quant, the GenePattern module that implements the Salmon pseudoalignment tool. Pseudoalignment is a much faster alternative to traditional sequencing alignment: it looks for read segments that match overlapping segments of the transcriptome rather than performing base-pair-to-base-pair matching. Salmon.Quant needs a valid Salmon transcriptome index to use as a reference. GenePattern has prebuilt references for the Human and Mouse transcriptomes. If you have sequencing data from another organism, you can build your own Salmon index with the Salmon.Indexer GenePattern module.**Note**: If you are analyzing your own data and you already have abundances derived from a pseudoaligner like Salmon or Kallisto, you can skip this step. **Input data**These URLs point to each of the input files, which are paired `fastq` files for a total of 6 samples. You don't have to download these files; they can be passed to the first module using these URLs.
###Code
from nbtools import UIBuilder, UIOutput
UIOutput(
name = "FASTQ input files from GSE119088",
files = [
"https://datasets.genepattern.org/data/Workshop_12152021/BulkRNASeq/SRR7758650-DMSO_R1.fastq.gz",
"https://datasets.genepattern.org/data/Workshop_12152021/BulkRNASeq/SRR7758650-DMSO_R2.fastq.gz",
"https://datasets.genepattern.org/data/Workshop_12152021/BulkRNASeq/SRR7758651-DMSO_R1.fastq.gz",
"https://datasets.genepattern.org/data/Workshop_12152021/BulkRNASeq/SRR7758651-DMSO_R2.fastq.gz",
"https://datasets.genepattern.org/data/Workshop_12152021/BulkRNASeq/SRR7758652-DMSO_R1.fastq.gz",
"https://datasets.genepattern.org/data/Workshop_12152021/BulkRNASeq/SRR7758652-DMSO_R2.fastq.gz",
"https://datasets.genepattern.org/data/Workshop_12152021/BulkRNASeq/SRR7758653-EPZ6438_R1.fastq.gz",
"https://datasets.genepattern.org/data/Workshop_12152021/BulkRNASeq/SRR7758653-EPZ6438_R2.fastq.gz",
"https://datasets.genepattern.org/data/Workshop_12152021/BulkRNASeq/SRR7758654-EPZ6438_R1.fastq.gz",
"https://datasets.genepattern.org/data/Workshop_12152021/BulkRNASeq/SRR7758654-EPZ6438_R2.fastq.gz",
"https://datasets.genepattern.org/data/Workshop_12152021/BulkRNASeq/SRR7758655-EPZ6438_R1.fastq.gz",
"https://datasets.genepattern.org/data/Workshop_12152021/BulkRNASeq/SRR7758655-EPZ6438_R2.fastq.gz"
]
)
###Output
_____no_output_____
###Markdown
**Salmon Index**
###Code
UIOutput(
name = "Gencode v37 Salmon Index",
files = [
"https://datasets-genepattern-org.s3.amazonaws.com/data/test_data/Salmon/gencode.v37.annotation.k31.salmon_full_decoy_index.tar.gz"
]
)
###Output
_____no_output_____
###Markdown
Instructions Insert a Salmon.Quant module. Create a new cell below this one using the + button. Click on "Cell -> CellType -> GenePattern". Search for "Salmon.Quant" and choose the Salmon.Quant module from the list. Add each of the fastq.gz links above to the Reads* parameter field. Add the Salmon Index link above to the Transcriptome Index* parameter field. Click "Run". Get gene counts and differentially expressed genesNow that we have the quantifications of transcript abundance, we can explore the changes in expression in the treated samples relative to the control. To do this, we start by running the tximport.DESeq2 module, which combines the functionality of the tximport and DESeq2 software tools. tximport aggregates the transcript-level abundance estimates to gene-level estimated counts, which are then appropriate for input to DESeq2, which tests for gene-level differential expression. Finally, this module also gives us the expression matrix transformed according to DESeq2's "median of ratios" normalization which it performs prior to testing for differential expression. We find that this normalization leads to optimal performance for Gene Set Enrichment Analysis (GSEA), which we will revisit later in this notebook. Instructions Insert and run tximport.DESeq2 Create a new cell below this one using the + button. Click on "Cell -> CellType -> GenePattern". Search for "tximport.DESeq2" and choose the tximport.DESeq2 module. Select each one from the dropdown for the Quantifications* parameter field. In the Sample Info field, choose 2021-11-16_intro-to-gp_tximport-sample-info.txt. In the transcriptome database parameter, choose gencode.v37.annotation.gtf.gz. Set the Reverse Sign* parameter to "TRUE". **Sample Info**
###Code
UIOutput(
name = "DESeq2 Sample Info",
files = ["https://datasets.genepattern.org/data/Workshop_12152021/BulkRNASeq/2021-11-16_intro-to-gp_tximport-sample-info.txt"]
)
###Output
_____no_output_____
###Markdown
**Transcriptome database**
###Code
UIOutput(
name = "Gencode v37 GTF",
files = ["https://ftp.ebi.ac.uk/pub/databases/gencode/Gencode_human/release_37/gencode.v37.annotation.gtf.gz"]
)
###Output
_____no_output_____
###Markdown
Visualize differential expression resultsHere, we will use the DifferentialExpressionViewer GenePattern module to render a visualization of significantly differentially expressed genes within the notebook. This requires us to make a number of modifications to the data. Here, we will do the following:1. Convert from Ensembl gene IDs to gene symbols using `CollapseDataset`.2. Generate an "ODF" file based on the DESeq2 results.3. Remove missing genes from the ODF and expression GCT file. Convert from Ensembl to gene symbols Instructions Insert a new cell below Change the cell to a GenePattern cell, and choose the CollapseDataset module. In the dataset file* field, choose Merged.Data.Normalized.Counts.gct. In the chip platform field, choose Human_ENSEMBL_Gene_ID_MSigDB.v.7.4.chip. In the output file name* field, write Merged.Data.Normalized.Counts_collapsed.gct. Generate ODF file for DifferentialExpressionViewer Instructions Insert a new cell below, change it to a GenePattern cell, and choose DESeq2ToODF. In the txt file* field, choose Merged.Data.Differential.Expression.txt. In the id col* field, write Gene_Symbol. In the stat* field, write Wald statistic: Factor EPZ6438 vs DMSO. Set the prune gct* field to True. In the gct field, choose Merged.Data.Normalized_collapsed_to_symbols.gct. In the cls field, choose Merged.Data.cls. Remove missing genes Instructions In the DESeq2ToODF ODF file* field, choose Merged.Data.Differential.Expression.odf. In the DESeq2ToODF GCT file* field, choose Merged.Data.Differential.Expression_pruned.gct. Click Run.
###Code
from gp.data import GCT, ODF, write_gct, write_odf, _parse_header
import os
import pandas as pd
import requests
import warnings
warnings.simplefilter(action='ignore', category=UserWarning)
def get_gp_file(gpserver, url):
"""
General file retrieval from gp server
"""
f = gp.GPFile(gpserver, url)
basename = os.path.basename(url)
resp = requests.get(url, headers={
'Authorization': f.server_data.authorization_header(),
'User-Agent': 'GenePatternRest'
})
with open(basename, "wb") as f:
f.write(resp.content)
return basename
@genepattern.build_ui(name="Remove missing genes", parameters={
"odf_url": {
"name": "DESeq2ToODF ODF file",
"description": "The ODF file from DESeq2ToODF output ODF file",
"type": "file",
"kinds": [".Expression.odf"]
},
"gct_url": {
"name": "DESeq2ToODF GCT file",
"description": "Pruned GCT file from DESeq2ToODF output",
"type": "file",
"kinds": ["_pruned.gct"]
},
"output_var": {
"default":"output_var",
"hide": True
}
})
def remove_missing_genes(odf_url, gct_url):
## Initialize API connection
gpserver = genepattern.session.get(0)
## Get ODF from GP
odf_local_path = get_gp_file(gpserver, odf_url)
odf_df = ODF(odf_local_path)
## Get GCT from GP
gct_local_path = get_gp_file(gpserver, gct_url)
gct_df = GCT(gct_local_path)
## Get intersecting symbols
shared_symbols = list(set(odf_df["Feature"]).intersection(gct_df.index.get_level_values(0)))
## Drop duplicate gene symbols
odf_short_df = odf_df[odf_df["Feature"].isin(shared_symbols)]
odf_short_df = odf_short_df.drop_duplicates(subset = "Feature", keep = "first")
## Subset to common genes
gct_short_df = gct_df[gct_df.index.isin(shared_symbols, level = 0)]
gct_short_df = gct_short_df[~gct_short_df.index.duplicated(keep = 'first')]
## Generate paths for outputs
new_odf_path = os.path.splitext(odf_local_path)[0] + "_filtered.odf"  # splitext drops the extension; str.strip(".odf") strips characters, not the suffix
new_gct_path = os.path.splitext(gct_local_path)[0] + "_filtered.gct"
## Reset ranks in ODF
odf_short_df["Rank"] = list(range(1, odf_short_df.shape[0] + 1))
## Save output ODF
with open(odf_local_path) as odf_file:
headers = [next(odf_file) for x in range(21)]
headers[-1] = f"DataLines={odf_short_df.shape[0]}\n"
open(new_odf_path, 'w').writelines(headers)
odf_short_df.to_csv(new_odf_path, header = False, mode = 'a', sep = '\t', index = False)
## Remove extra newline in ODF
new_odf_lines = open(new_odf_path, 'r').readlines()
new_odf_lines[-1] = new_odf_lines[-1].strip('\n')
open(new_odf_path, 'w').writelines(new_odf_lines)
## Save output GCT
write_gct(gct_short_df, new_gct_path)
## Upload to GP
gct_url = gpserver.upload_file(new_gct_path, new_gct_path).get_url()
odf_url = gpserver.upload_file(new_odf_path, new_odf_path).get_url()
display(
UIOutput(
name = "Filtered GCT and ODF files",
files = [odf_url, gct_url]
)
)
###Output
_____no_output_____
###Markdown
Render heatmap with DifferentialExpressionViewer Instructions In the differential expression filename* field, choose Merged.Data.Differential.Expression_filtered.odf. In the dataset filename* field, choose Merged.Data.Differential.Expression_pruned_filtered.gct. Click Run Find dysregulated pathways with GSEABased on the above results, we can conclude that the inhibition of *EZH2* has caused extensive dysregulation. Because so many genes are differentially expressed, it can be difficult to identify which pathways are most likely to be affected by these changes. Here, we will use Gene Set Enrichment Analysis (GSEA) to test for significantly up- or down-regulated pathways. Specifically, we will investigate the oncogenic signatures gene set collection. A link to those signatures is below:
###Code
display(
UIOutput(
name = "C6 Oncogenic Signatures Collection",
files = ["https://datasets.genepattern.org/data/Workshop_12152021/BulkRNASeq/c6.all.v7.4.symbols.gmt"]
)
)
###Output
_____no_output_____ |
Part 3 - Handling Missing YF Historicals/Part_3A_Tutorial.ipynb | ###Markdown
Reference: Cutting the Extra Date in 'GS' and 'DIA'
###Code
# Let's take a look at GS and DIA.
display(historicals['GS'])
display(historicals['DIA'])
# We can see that they just have a leading NaN row; let's call dropna() on them so that they fit our chosen standard length.
historicals['GS'] = historicals['GS'].dropna()
historicals['DIA'] = historicals['DIA'].dropna()
# Double check it worked.
print(f"GS Check {full_date_range.difference(historicals['GS'].index)}")
print(f"DIA Check {full_date_range.difference(historicals['DIA'].index)}")
display(historicals['GS'])
# Since there was no difference in the date index, GS and DIA are now the standard size of our Yahoo Finance data.
# Because you are reducing the array shape rather than extending it, you will have to delete your GS and DIA HDF5 files and upload the updated data.
###Output
_____no_output_____ |
IPYNB Interactivities and HTML/DV.ipynb | ###Markdown
Import the Data Visualization Module and the CJH Scraper from the JATA package.
###Code
%%capture
!pip install JATA -U
!pip install pandas-summary
from pandas_summary import DataFrameSummary
from CJH import CJH_Archives
from DV import *
import plotly.express as px
#records
records = CJH_Archives('AJHS').get_meta_data('records', 1, 2)
#collections
collections= CJH_Archives('AJHS').get_meta_data('collections', 1, 2)
###Output
Creating CJHA Scraper Object for AJHS
Scraping All Individual Records
Scraping Archive Index for Entry Links 1
Scraping Archive Index for Entry Links 2
Number of Objects Extracted: 60
Scraping entry meta data...
Record: 1 https://archives.cjh.org/repositories/3/archival_objects/85903
"______ for the absorbing of Ethiopian immigrants ___ ____," by Edna Angel, 1990
Record: 2 https://archives.cjh.org/repositories/3/archival_objects/1149685
#1-7, 1960
Record: 3 https://archives.cjh.org/repositories/3/archival_objects/1149756
#1-10, 1968
Record: 4 https://archives.cjh.org/repositories/3/archival_objects/1149638
#1-14, 1953
Record: 5 https://archives.cjh.org/repositories/3/archival_objects/1149701
#1-14, 1962
Record: 6 https://archives.cjh.org/repositories/3/archival_objects/1149647
#1-16, 1954
Record: 7 https://archives.cjh.org/repositories/3/archival_objects/1149663
#1-18, 1956
Record: 8 https://archives.cjh.org/repositories/3/archival_objects/1149750
#1-20, 1966-1967
Record: 9 https://archives.cjh.org/repositories/3/archival_objects/1149734
#1-21, 1967-1968
Record: 10 https://archives.cjh.org/repositories/3/archival_objects/1149611
#1-22 and Yiddish Releases, 1949-1950
Record: 11 https://archives.cjh.org/repositories/3/archival_objects/1149656
#1-22 and Yiddish Releases, 1955
Record: 12 https://archives.cjh.org/repositories/3/archival_objects/1149680
#1-23, 1958-1959
Record: 13 https://archives.cjh.org/repositories/3/archival_objects/1149601
#1-24, 1949
Record: 14 https://archives.cjh.org/repositories/3/archival_objects/1149620
#1-25, 1951
Record: 15 https://archives.cjh.org/repositories/3/archival_objects/1149675
#1-25, 1958
Record: 16 https://archives.cjh.org/repositories/3/archival_objects/1149714
#1-25, 1964
Record: 17 https://archives.cjh.org/repositories/3/archival_objects/1149745
#1-25, 1964
Record: 18 https://archives.cjh.org/repositories/3/archival_objects/1149710
#1-27, 1963
Record: 19 https://archives.cjh.org/repositories/3/archival_objects/1149730
#1-27, 1966-1967
Record: 20 https://archives.cjh.org/repositories/3/archival_objects/1149671
#1-33, 1956-1957
Record: 21 https://archives.cjh.org/repositories/3/archival_objects/1149740
#1-33, 1963
Record: 22 https://archives.cjh.org/repositories/3/archival_objects/1149722
#1-40, 1965
Record: 23 https://archives.cjh.org/repositories/3/archival_objects/1149698
#1-41, 1960-1961
Record: 24 https://archives.cjh.org/repositories/3/archival_objects/1149725
#1-42, 1966
Record: 25 https://archives.cjh.org/repositories/3/archival_objects/1149597
#1-64, 1948
Record: 26 https://archives.cjh.org/repositories/3/archival_objects/1149593
#1-78, 1947
Record: 27 https://archives.cjh.org/repositories/3/archival_objects/648414
1-800-Shmurah (Brooklyn), [unknown]
Record: 28 https://archives.cjh.org/repositories/3/archival_objects/1152557
1. 1964 Convention, 1956-1964
Record: 29 https://archives.cjh.org/repositories/3/archival_objects/1152495
1. A, 1940, 1945, 1947-1969
Record: 30 https://archives.cjh.org/repositories/3/archival_objects/324779
1) A copy of a play performed at a pageant held in honor of the 35<sup class="emph
Record: 31 https://archives.cjh.org/repositories/3/archival_objects/324730
1) A handwritten note from Taylor Phillips, Treasurer., April 10, 1936
Record: 32 https://archives.cjh.org/repositories/3/archival_objects/324737
1) A handwritten paper signed by Albert Jones., October 2, 1941
Record: 33 https://archives.cjh.org/repositories/3/archival_objects/1144954
1. Administration Subject Files, undated, 1932, 1943-1970
Record: 34 https://archives.cjh.org/repositories/3/archival_objects/1163081
1) "Alice G. Davis Compositions"., 1885
Record: 35 https://archives.cjh.org/repositories/3/archival_objects/1156892
1) Alphabetical Files, undated, 1927-1966
Record: 36 https://archives.cjh.org/repositories/3/archival_objects/1150158
1. Alphabetical Files, undated, 1951-1974
Record: 37 https://archives.cjh.org/repositories/3/archival_objects/580790
1) An anonymous letter written to a member of Myer Isaacs family accounting for $180,000 raised by the balls of the Purim Association over the course of 30 years., undated
Record: 38 https://archives.cjh.org/repositories/3/archival_objects/1154350
1. Annual, 1928-1974
Record: 39 https://archives.cjh.org/repositories/3/archival_objects/1158016
1) Annual Meetings, 1938-1944
Record: 40 https://archives.cjh.org/repositories/3/archival_objects/324780
1) Annual report of the Columbia Religious and Industrial School for Jewish Girls of 1907., 1907
Record: 41 https://archives.cjh.org/repositories/3/archival_objects/1141175
1. Area Councils Files, 1967-1974
Record: 42 https://archives.cjh.org/repositories/3/archival_objects/1144966
1. Arian, Harold Files, undated, 1948-1969, 1983-1995
Record: 43 https://archives.cjh.org/repositories/3/archival_objects/1148974
1. Articles, undated, 1954-1979
Record: 44 https://archives.cjh.org/repositories/3/archival_objects/1148990
1. Articles and Editorials, 1950-1969
Record: 45 https://archives.cjh.org/repositories/3/archival_objects/1141171
1. Belgian Youth, 1975-1978
Record: 46 https://archives.cjh.org/repositories/3/archival_objects/1151741
1. Bicentennial Files, undated, 1973-1976
Record: 47 https://archives.cjh.org/repositories/3/archival_objects/1141161
1. Biennial Files, undated, 1972-1976
Record: 48 https://archives.cjh.org/repositories/3/archival_objects/1141166
1. Biennial Files, undated, 1953-1982
Record: 49 https://archives.cjh.org/repositories/3/archival_objects/1144962
1. Boeko, Jack Files, undated, 1952-1977
Record: 50 https://archives.cjh.org/repositories/3/archival_objects/1163092
1) Booklet on N.Y. Training School for Community Center Workers., 1915-1916
Record: 51 https://archives.cjh.org/repositories/3/archival_objects/1144947
1. Brodkin, Arthur Files, undated, 1946-1977
Record: 52 https://archives.cjh.org/repositories/3/archival_objects/1163210
1) Calling card of Miss Miriam Peixotto, her mother., undated
Record: 53 https://archives.cjh.org/repositories/3/archival_objects/1149982
1. Clerical Employees, 1942-1976
Record: 54 https://archives.cjh.org/repositories/3/archival_objects/1149974
1. Clerical Personnel: Diane Rogoff Files, 1954-1975
Record: 55 https://archives.cjh.org/repositories/3/archival_objects/1163064
1) Clipping from "The Bulletin," National Council of Jewish Women., October 1933
Record: 56 https://archives.cjh.org/repositories/3/archival_objects/1149975
1. Committee Files, 1940-1941
Record: 57 https://archives.cjh.org/repositories/3/archival_objects/302949
1) Committees, 1960-1977
Record: 58 https://archives.cjh.org/repositories/3/archival_objects/1141151
1. Communities: Individual, 1964-1984
Record: 59 https://archives.cjh.org/repositories/3/archival_objects/1141178
1. Community Files, 1963-1969
Record: 60 https://archives.cjh.org/repositories/3/archival_objects/1141191
1. Community Files, 1973-1977
Creating CJHA Scraper Object for AJHS
Scraping Collections (Finding Aids)
Scraping Archive Index for Entry Links 1
Scraping Archive Index for Entry Links 2
Number of Objects Extracted: 60
Scraping entry meta data...
Record: 1 https://archives.cjh.org/repositories/3/resources/15236
The White Jew Newspaper
Record: 2 https://archives.cjh.org/repositories/3/resources/13248
Synagogue Council of America Records
Record: 3 https://archives.cjh.org/repositories/3/resources/15562
Admiral Lewis Lichtenstein Strauss Papers
Record: 4 https://archives.cjh.org/repositories/3/resources/15566
Meyer Greenberg Papers
Record: 5 https://archives.cjh.org/repositories/3/resources/15570
Louis Lipsky Papers
Record: 6 https://archives.cjh.org/repositories/3/resources/15623
Noah Benevolent Society Records
Record: 7 https://archives.cjh.org/repositories/3/resources/15557
Leo Hershkowitz Collection of Court Records
Record: 8 https://archives.cjh.org/repositories/3/resources/15770
E. Michael Bluestone Papers
Record: 9 https://archives.cjh.org/repositories/3/resources/18294
Oscar M Lifshutz (1916-1990) Papers
Record: 10 https://archives.cjh.org/repositories/3/resources/18296
Norman Hapgood (1868-1937) Papers
Record: 11 https://archives.cjh.org/repositories/3/resources/18357
Jonah J. Goldstein Papers
Record: 12 https://archives.cjh.org/repositories/3/resources/18366
Melvin Urofsky collection
Record: 13 https://archives.cjh.org/repositories/3/resources/19663
Jewish Music Forum (New York, N.Y.) records
Record: 14 https://archives.cjh.org/repositories/3/resources/5998
Aaron Kramer (1921-1997) Papers
Record: 15 https://archives.cjh.org/repositories/3/resources/6010
Chaim Weizmann Papers
Record: 16 https://archives.cjh.org/repositories/3/resources/6113
Lawrence Sampter collection
Record: 17 https://archives.cjh.org/repositories/3/resources/6114
Emanuel de la Motta prayerbook collection
Record: 18 https://archives.cjh.org/repositories/3/resources/6115
Israel Goldberg papers
Record: 19 https://archives.cjh.org/repositories/3/resources/6116
M.S. Polack collection
Record: 20 https://archives.cjh.org/repositories/3/resources/6117
David Lloyd George Paris Peace Conference autograph album
Record: 21 https://archives.cjh.org/repositories/3/resources/6118
Henry Hochheimer marriage record book
Record: 22 https://archives.cjh.org/repositories/3/resources/6119
Judah family (New York City and Richmond) papers
Record: 23 https://archives.cjh.org/repositories/3/resources/6196
Baron family papers
Record: 24 https://archives.cjh.org/repositories/3/resources/6197
Lewisohn family genealogy
Record: 25 https://archives.cjh.org/repositories/3/resources/6198
Ewenczyk family genealogy collection
Record: 26 https://archives.cjh.org/repositories/3/resources/6199
Sulzberger family collection
Record: 27 https://archives.cjh.org/repositories/3/resources/6200
Moses Alexander autograph
Record: 28 https://archives.cjh.org/repositories/3/resources/6201
Sholem Asch autograph photograph
Record: 29 https://archives.cjh.org/repositories/3/resources/6202
Simon Bamburger collection
Record: 30 https://archives.cjh.org/repositories/3/resources/6203
Simon Guggenheimer letter
Record: 31 https://archives.cjh.org/repositories/3/resources/6204
Henry M. Moos correspondence
Record: 32 https://archives.cjh.org/repositories/3/resources/6205
Joseph Austrian autobiographical and historical sketches
Record: 33 https://archives.cjh.org/repositories/3/resources/6206
Samuel Lawrence scrapbook
Record: 34 https://archives.cjh.org/repositories/3/resources/6225
Adolph J. Sabath papers
Record: 35 https://archives.cjh.org/repositories/3/resources/6226
Roy H. Millenson collection of Senator Jacob K. Javits
Record: 36 https://archives.cjh.org/repositories/3/resources/6227
Kuttenplum family legal records
Record: 37 https://archives.cjh.org/repositories/3/resources/6228
Selkind family Yiddish postcards
Record: 38 https://archives.cjh.org/repositories/3/resources/6229
John Gellman papers
Record: 39 https://archives.cjh.org/repositories/3/resources/6230
Elliott S. Shapiro biographical materials
Record: 40 https://archives.cjh.org/repositories/3/resources/6231
Joan Breslow Woodbine colony reference materials
Record: 41 https://archives.cjh.org/repositories/3/resources/6233
Halpern family papers
Record: 42 https://archives.cjh.org/repositories/3/resources/6234
Blu Greenberg papers
Record: 43 https://archives.cjh.org/repositories/3/resources/6236
Ellen Norman Stern, Collection of Elie Wiesel newsclippings
Record: 44 https://archives.cjh.org/repositories/3/resources/6237
Saralea Zohar Aaron papers
Record: 45 https://archives.cjh.org/repositories/3/resources/6238
Vivian White Soboleski papers
Record: 46 https://archives.cjh.org/repositories/3/resources/6120
Judah family (New York, Montreal, Indiana) papers
Record: 47 https://archives.cjh.org/repositories/3/resources/6121
Esther Levy estate inventory
Record: 48 https://archives.cjh.org/repositories/3/resources/6137
Solomon Eudovich papers
Record: 49 https://archives.cjh.org/repositories/3/resources/6139
Abendanone family papers
Record: 50 https://archives.cjh.org/repositories/3/resources/6140
Mark Levy estate inventory
Record: 51 https://archives.cjh.org/repositories/3/resources/6141
Ehrenreich family papers
Record: 52 https://archives.cjh.org/repositories/3/resources/6142
Selman A. Waksman papers
Record: 53 https://archives.cjh.org/repositories/3/resources/6143
Martin Van Buren papers
Record: 54 https://archives.cjh.org/repositories/3/resources/6145
Stephen Wise papers
Record: 55 https://archives.cjh.org/repositories/3/resources/6146
Philip Slomovitz United Hebrew Schools of Detroit collection
Record: 56 https://archives.cjh.org/repositories/3/resources/6147
Louis Arthur Ungar papers
Record: 57 https://archives.cjh.org/repositories/3/resources/6148
Morris Rosenfeld papers
Record: 58 https://archives.cjh.org/repositories/3/resources/6149
Herman W. Block papers
Record: 59 https://archives.cjh.org/repositories/3/resources/6150
Peter Gouled papers
Record: 60 https://archives.cjh.org/repositories/3/resources/6151
Meier Steinbrink papers
###Markdown
Exploring Digital ArchivesIn this tutorial we will walk through cleaning and parsing our dataset so that it can be exported and used in the data visualization platform Palladio. We will explore data cleaning, creating derived datasets, and dataframe manipulations to achieve our final product. What data have we collected and how might we analyze it? What is the structure of the dataset?We have a lot of data that comes with the record metadata we scraped from AJHS. Records
###Code
records
###Output
_____no_output_____
###Markdown
Collections
###Code
collections
###Output
_____no_output_____
###Markdown
What fields do we have and what kind of variables are they?There are a few types of variables that we can recognize in this dataset. The most obvious are text, or "strings" as they are called in Python. All the metadata that contains writing, names, and dates is stored as strings. Dates are technically strings too; however, we will treat them as a different type of variable because they add a chronological dimension to our dataset. The only numeric variables in this dataset are the index number of the record and the Box/Folder info. How can we reorganize or manipulate our dataset to extract more information from it?We are going to want to clean up both datasets to make them easier to work with. The main thing we want to do is strip out special characters like newlines and breaks ('\n' or '\t') and remove any extra spaces on the left or right of each value; a minimal sketch of such a helper is shown below.
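As an illustration of that cleaning step, a helper along these lines could do the job with pandas string methods. This is only a sketch: the notebook actually relies on the `clean_df` function imported via `from DV import *`, whose exact behaviour may differ.

```python
# Illustrative sketch only -- the notebook actually uses clean_df from the DV module.
import pandas as pd

def clean_text_columns(df: pd.DataFrame) -> pd.DataFrame:
    out = df.copy()
    for col in out.select_dtypes(include="object").columns:
        out[col] = (
            out[col]
            .astype(str)
            .str.replace(r"[\n\t\r]+", " ", regex=True)  # drop newlines, tabs, breaks
            .str.replace(r"\s{2,}", " ", regex=True)     # collapse runs of spaces
            .str.strip()                                  # trim left/right whitespace
        )
    return out
```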
###Code
cleaned_records = clean_df(records)
cleaned_collections = clean_df(collections)
###Output
_____no_output_____
###Markdown
Cleaned Records
###Code
cleaned_records
###Output
_____no_output_____
###Markdown
Cleaned Collections
###Code
cleaned_collections
collections = cleaned_collections
records = cleaned_records
###Output
_____no_output_____
###Markdown
Missing DataLet's do some descriptive tests to see what types of variables there are and how many values are missing.
###Code
collections.info()
records.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 60 entries, 0 to 59
Data columns (total 11 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Access Restrictions 11 non-null object
1 Dates 60 non-null object
2 Digital Material 0 non-null float64
3 Extent 60 non-null object
4 Language of Materials 60 non-null object
5 Link 60 non-null object
6 Name 60 non-null object
7 Physical Storage Information 35 non-null object
8 Related Names 55 non-null object
9 Repository Details 60 non-null object
10 Scope and Contents 59 non-null object
dtypes: float64(1), object(10)
memory usage: 5.3+ KB
###Markdown
More Overview Stats
###Code
recs = DataFrameSummary(records)
cols = DataFrameSummary(collections)
###Output
_____no_output_____
###Markdown
**RECORDS**
###Code
recs.columns_types
recs.columns_stats
###Output
_____no_output_____
###Markdown
**COLLECTIONS**
###Code
cols.columns_types
cols.columns_stats
###Output
_____no_output_____
###Markdown
What historical questions can we answer and how?We are particularly interested in investigating historical context, as well as geographic and demographic trends. To do this, there are a number of data points that would be useful.For Individual Records: The name of the person or subject in the record, if there is one, is vital because it tells us who the entry is about. The birthplace or geographical origin of the source can help us understand where the physical items in the archive existed at one point. Knowing the collections an item belongs to is important because we can see how things move over time in storage, what other types of documents they are usually batched with, and where exactly the item is at any point in time.For Finding Aids (Collections)The data accessible to our scraper in the collection object type are much more descriptive. Again, the biographical note and overview fields are filled with useful geographical data about where people studied, lived, were born, and died. All of these would be useful to compare at a macro level to explore broader trends within the sample. Data Visualizations in Python These functions parse the multiple dates in each dataframe into a single list, which we can then use to look at the distribution!
###Code
cols_dates = pd.DataFrame({'Dates': clean_dates(collections)}).stack().value_counts()
recs_dates = pd.DataFrame({'Dates': clean_dates(records)}).stack().value_counts()
recs_dates.head(5)
cols_dates.head(5)
###Output
_____no_output_____
###Markdown
Distribution of dates over timeOne useful view is the distribution of records bucketed by year. By looking at this chart we can tell roughly when the majority of the documents are dated. If the dates are too spread out, we can always increase the histogram bucket size to cover more years: instead of counting up the records at each individual year, we count all of the records within, say, each 10-year period, and then chart that frequency graph. A quick sketch of that decade bucketing follows below.
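A hedged sketch of that coarser bucketing, assuming the values in the `recs_dates` index can be coerced to four-digit years:

```python
# Illustrative sketch: re-bucket the per-year counts into 10-year bins.
import pandas as pd

years = pd.to_numeric(recs_dates.index, errors="coerce")   # year labels -> numbers
by_decade = recs_dates.groupby((years // 10) * 10).sum().sort_index()
print(by_decade)
```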
###Code
source = pd.concat([recs_dates,cols_dates],axis=1)
source.columns = ['RECORDS','COLLECTIONS']
source.index.rename('Dates',inplace=True)
source.reset_index(inplace=True)
###Output
_____no_output_____
###Markdown
Collections
###Code
fig = px.bar(source[source.COLLECTIONS > 0], x='Dates', y='COLLECTIONS')
fig.update_yaxes(automargin=True)
###Output
_____no_output_____
###Markdown
Records
###Code
fig = px.bar(source[source.RECORDS > 0], x='Dates', y='RECORDS')
fig.update_yaxes(automargin=True)
fig.show()
###Output
_____no_output_____
###Markdown
Making a Data Cleaning Pipeline for Records and Collections to explore in [Palladio](https://hdlab.stanford.edu/palladio/).To get our data ready for further data visualization we created a custom cleaning function that will export a CSV with the appropriate columns to build a number of visualizations.We often call these things pipelines because we systematically pass large amounts of data through them and perform the same cleaning and housekeeping functions on them, even though the data we are querying may not be identical in every batch. DatesMost records contain at least two dates in the date field. Some collections have many more. To solve this, we will collect the earliest date and the latest date from each date field. We will also create a new variable that splits the distance between these two dates to give a more centered approximate timestamp.
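For illustration, a helper along these lines could pull the earliest year, the latest year, and their midpoint out of a free-text date field. The real work is done by `make_date_columns` from the `DV` module, so treat this purely as a sketch of the idea:

```python
# Illustrative sketch only -- the notebook actually uses make_date_columns from DV.
import re
import pandas as pd

def add_date_columns(df: pd.DataFrame, date_col: str = "Dates") -> pd.DataFrame:
    out = df.copy()
    years = out[date_col].astype(str).apply(
        lambda s: [int(y) for y in re.findall(r"\b(1[6-9]\d{2}|20\d{2})\b", s)]
    )
    out["earliest_year"] = pd.to_numeric(years.apply(lambda ys: min(ys) if ys else None))
    out["latest_year"] = pd.to_numeric(years.apply(lambda ys: max(ys) if ys else None))
    out["mid_year"] = (out["earliest_year"] + out["latest_year"]) / 2  # centered timestamp
    return out
```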
###Code
make_date_columns(cleaned_records)
make_date_columns(cleaned_collections)
###Output
division by zero []
division by zero []
division by zero []
division by zero []
division by zero []
division by zero []
division by zero []
###Markdown
Geographical locationThis one is a little trickier because we do not have the luxury of exact coordinates for the place of origin of each record. However, most records mention a home town or state in their scope field, which we can extract using public databases and geocoding!
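As a sketch of the idea (the `make_geo_columns` helper from `DV` presumably does something richer, for example calling a real geocoding service), a tiny hand-made gazetteer with approximate coordinates can stand in for that lookup:

```python
# Illustrative sketch only -- approximate coordinates, not a real geocoder.
GAZETTEER = {
    "new york": (40.71, -74.01),
    "brooklyn": (40.68, -73.94),
    "richmond": (37.54, -77.44),
    "detroit": (42.33, -83.05),
}

def guess_coordinates(text):
    """Return (lat, lon) for the first known place mentioned in a scope note."""
    lowered = str(text).lower()
    for place, coords in GAZETTEER.items():
        if place in lowered:
            return coords
    return None

cleaned_records["Scope and Contents"].apply(guess_coordinates).head()
```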
###Code
make_geo_columns(cleaned_records,'Scope and Contents')
make_geo_columns(cleaned_collections,'Scope and Content Note')
###Output
_____no_output_____
###Markdown
Language InformationLanguage is a categorical variable which we will parse out, and it might be a good candidate for a pie chart or a pivot table. Some of the entries are plain text while others are in list form, so it is a little tricky to clean systematically, since each archivist has their own style for entering language info.
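A sketch of the kind of normalisation `clean_langs` might perform; the list of languages here is only a guess at common values in this archive:

```python
# Illustrative sketch only -- the notebook actually uses clean_langs from DV.
KNOWN_LANGUAGES = ["English", "Yiddish", "Hebrew", "German", "Russian", "French"]

def extract_languages(raw):
    """Return the known languages mentioned in a free-text or list-like field."""
    text = str(raw).lower()
    return [lang for lang in KNOWN_LANGUAGES if lang.lower() in text]

cleaned_records["Language of Materials"].apply(extract_languages).head()
```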
###Code
clean_langs(cleaned_collections, 'Language of Materials')
clean_langs(cleaned_records, 'Language of Materials')
###Output
_____no_output_____
###Markdown
Physical storage informationWe can divide physical storage information up into three variables: box number, folder number, and materials type. This way we can map out what the collection landscape looks like in physical storage and compare how many records are, say, documents vs. photos vs. mixed materials.
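A sketch of how box and folder numbers could be pulled out of a field shaped like "Box 3, Folder 12 (Mixed Materials)" — that exact text format is an assumption; the real parsing is done by `clean_locations` from `DV`:

```python
# Illustrative sketch only -- the notebook actually uses clean_locations from DV.
import re

def parse_storage(value):
    text = str(value)
    box = re.search(r"[Bb]ox\s*(\d+)", text)
    folder = re.search(r"[Ff]older\s*(\d+)", text)
    material = re.search(r"\(([^)]+)\)", text)  # e.g. "(Mixed Materials)"
    return {
        "box": int(box.group(1)) if box else None,
        "folder": int(folder.group(1)) if folder else None,
        "material": material.group(1) if material else None,
    }
```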
###Code
clean_locations(cleaned_records)
###Output
_____no_output_____
###Markdown
Export and Load into [Palladio](https://hdlab.stanford.edu/palladio/)
###Code
cleaned_records.to_csv('records.csv',index=False)
cleaned_collections.to_csv('collections.csv',index=False)
cleaned_records
cleaned_collections
###Output
_____no_output_____ |
PythonDataScienceHandbook-master/notebooks/02.02-The-Basics-Of-NumPy-Arrays.ipynb | ###Markdown
*This notebook contains an excerpt from the [Python Data Science Handbook](http://shop.oreilly.com/product/0636920034919.do) by Jake VanderPlas; the content is available [on GitHub](https://github.com/jakevdp/PythonDataScienceHandbook).**The text is released under the [CC-BY-NC-ND license](https://creativecommons.org/licenses/by-nc-nd/3.0/us/legalcode), and code is released under the [MIT license](https://opensource.org/licenses/MIT). If you find this content useful, please consider supporting the work by [buying the book](http://shop.oreilly.com/product/0636920034919.do)!* The Basics of NumPy Arrays Data manipulation in Python is nearly synonymous with NumPy array manipulation: even newer tools like Pandas ([Chapter 3](03.00-Introduction-to-Pandas.ipynb)) are built around the NumPy array.This section will present several examples of using NumPy array manipulation to access data and subarrays, and to split, reshape, and join the arrays.While the types of operations shown here may seem a bit dry and pedantic, they comprise the building blocks of many other examples used throughout the book.Get to know them well!We'll cover a few categories of basic array manipulations here:- *Attributes of arrays*: Determining the size, shape, memory consumption, and data types of arrays- *Indexing of arrays*: Getting and setting the value of individual array elements- *Slicing of arrays*: Getting and setting smaller subarrays within a larger array- *Reshaping of arrays*: Changing the shape of a given array- *Joining and splitting of arrays*: Combining multiple arrays into one, and splitting one array into many NumPy Array Attributes First let's discuss some useful array attributes.We'll start by defining three random arrays, a one-dimensional, two-dimensional, and three-dimensional array.We'll use NumPy's random number generator, which we will *seed* with a set value in order to ensure that the same random arrays are generated each time this code is run:
###Code
import numpy as np
np.random.seed(0) # seed for reproducibility
x1 = np.random.randint(10, size=6) # One-dimensional array
x2 = np.random.randint(10, size=(3, 4)) # Two-dimensional array
x3 = np.random.randint(10, size=(3, 4, 5)) # Three-dimensional array
###Output
_____no_output_____
###Markdown
Each array has attributes ``ndim`` (the number of dimensions), ``shape`` (the size of each dimension), and ``size`` (the total size of the array):
###Code
print("x3 ndim: ", x3.ndim)
print("x3 shape:", x3.shape)
print("x3 size: ", x3.size)
###Output
x3 ndim: 3
x3 shape: (3, 4, 5)
x3 size: 60
###Markdown
Another useful attribute is the ``dtype``, the data type of the array (which we discussed previously in [Understanding Data Types in Python](02.01-Understanding-Data-Types.ipynb)):
###Code
print("dtype:", x3.dtype)
###Output
dtype: int64
###Markdown
Other attributes include ``itemsize``, which lists the size (in bytes) of each array element, and ``nbytes``, which lists the total size (in bytes) of the array:
###Code
print("itemsize:", x3.itemsize, "bytes")
print("nbytes:", x3.nbytes, "bytes")
###Output
itemsize: 8 bytes
nbytes: 480 bytes
###Markdown
In general, we expect that ``nbytes`` is equal to ``itemsize`` times ``size``. Array Indexing: Accessing Single Elements If you are familiar with Python's standard list indexing, indexing in NumPy will feel quite familiar.In a one-dimensional array, the $i^{th}$ value (counting from zero) can be accessed by specifying the desired index in square brackets, just as with Python lists:
###Code
x1
x1[0]
x1[4]
###Output
_____no_output_____
###Markdown
To index from the end of the array, you can use negative indices:
###Code
x1[-1]
x1[-2]
###Output
_____no_output_____
###Markdown
In a multi-dimensional array, items can be accessed using a comma-separated tuple of indices:
###Code
x2
x2[0, 0]
x2[2, 0]
x2[2, -1]
###Output
_____no_output_____
###Markdown
Values can also be modified using any of the above index notation:
###Code
x2[0, 0] = 12
x2
###Output
_____no_output_____
###Markdown
Keep in mind that, unlike Python lists, NumPy arrays have a fixed type.This means, for example, that if you attempt to insert a floating-point value to an integer array, the value will be silently truncated. Don't be caught unaware by this behavior!
###Code
x1[0] = 3.14159 # this will be truncated!
x1
###Output
_____no_output_____
###Markdown
Array Slicing: Accessing Subarrays Just as we can use square brackets to access individual array elements, we can also use them to access subarrays with the *slice* notation, marked by the colon (``:``) character.The NumPy slicing syntax follows that of the standard Python list; to access a slice of an array ``x``, use this:``` pythonx[start:stop:step]```If any of these are unspecified, they default to the values ``start=0``, ``stop=``*``size of dimension``*, ``step=1``.We'll take a look at accessing sub-arrays in one dimension and in multiple dimensions. One-dimensional subarrays
###Code
x = np.arange(10)
x
x[:5] # first five elements
x[5:] # elements after index 5
x[4:7] # middle sub-array
x[::2] # every other element
x[1::2] # every other element, starting at index 1
###Output
_____no_output_____
###Markdown
A potentially confusing case is when the ``step`` value is negative.In this case, the defaults for ``start`` and ``stop`` are swapped.This becomes a convenient way to reverse an array:
###Code
x[::-1] # all elements, reversed
x[5::-2] # reversed every other from index 5
###Output
_____no_output_____
###Markdown
Multi-dimensional subarraysMulti-dimensional slices work in the same way, with multiple slices separated by commas.For example:
###Code
x2
x2[:2, :3] # two rows, three columns
x2[:3, ::2] # all rows, every other column
###Output
_____no_output_____
###Markdown
Finally, subarray dimensions can even be reversed together:
###Code
x2[::-1, ::-1]
###Output
_____no_output_____
###Markdown
Accessing array rows and columnsOne commonly needed routine is accessing of single rows or columns of an array.This can be done by combining indexing and slicing, using an empty slice marked by a single colon (``:``):
###Code
print(x2[:, 0]) # first column of x2
print(x2[0, :]) # first row of x2
###Output
[12 5 2 4]
###Markdown
In the case of row access, the empty slice can be omitted for a more compact syntax:
###Code
print(x2[0]) # equivalent to x2[0, :]
###Output
[12 5 2 4]
###Markdown
Subarrays as no-copy viewsOne important–and extremely useful–thing to know about array slices is that they return *views* rather than *copies* of the array data.This is one area in which NumPy array slicing differs from Python list slicing: in lists, slices will be copies.Consider our two-dimensional array from before:
###Code
print(x2)
###Output
[[12 5 2 4]
[ 7 6 8 8]
[ 1 6 7 7]]
###Markdown
Let's extract a $2 \times 2$ subarray from this:
###Code
x2_sub = x2[:2, :2]
print(x2_sub)
###Output
[[12 5]
[ 7 6]]
###Markdown
Now if we modify this subarray, we'll see that the original array is changed! Observe:
###Code
x2_sub[0, 0] = 99
print(x2_sub)
print(x2)
###Output
[[99 5 2 4]
[ 7 6 8 8]
[ 1 6 7 7]]
###Markdown
This default behavior is actually quite useful: it means that when we work with large datasets, we can access and process pieces of these datasets without the need to copy the underlying data buffer. Creating copies of arraysDespite the nice features of array views, it is sometimes useful to instead explicitly copy the data within an array or a subarray. This can be most easily done with the ``copy()`` method:
###Code
x2_sub_copy = x2[:2, :2].copy()
print(x2_sub_copy)
###Output
[[99 5]
[ 7 6]]
###Markdown
If we now modify this subarray, the original array is not touched:
###Code
x2_sub_copy[0, 0] = 42
print(x2_sub_copy)
print(x2)
###Output
[[99 5 2 4]
[ 7 6 8 8]
[ 1 6 7 7]]
###Markdown
Reshaping of ArraysAnother useful type of operation is reshaping of arrays.The most flexible way of doing this is with the ``reshape`` method.For example, if you want to put the numbers 1 through 9 in a $3 \times 3$ grid, you can do the following:
###Code
grid = np.arange(1, 10).reshape((3, 3))
print(grid)
###Output
[[1 2 3]
[4 5 6]
[7 8 9]]
###Markdown
Note that for this to work, the size of the initial array must match the size of the reshaped array. Where possible, the ``reshape`` method will use a no-copy view of the initial array, but with non-contiguous memory buffers this is not always the case.Another common reshaping pattern is the conversion of a one-dimensional array into a two-dimensional row or column matrix.This can be done with the ``reshape`` method, or more easily done by making use of the ``newaxis`` keyword within a slice operation:
###Code
x = np.array([1, 2, 3])
# row vector via reshape
x.reshape((1, 3))
# row vector via newaxis
x[np.newaxis, :]
# column vector via reshape
x.reshape((3, 1))
# column vector via newaxis
x[:, np.newaxis]
###Output
_____no_output_____
###Markdown
We will see this type of transformation often throughout the remainder of the book. Array Concatenation and SplittingAll of the preceding routines worked on single arrays. It's also possible to combine multiple arrays into one, and to conversely split a single array into multiple arrays. We'll take a look at those operations here. Concatenation of arraysConcatenation, or joining of two arrays in NumPy, is primarily accomplished using the routines ``np.concatenate``, ``np.vstack``, and ``np.hstack``.``np.concatenate`` takes a tuple or list of arrays as its first argument, as we can see here:
###Code
x = np.array([1, 2, 3])
y = np.array([3, 2, 1])
np.concatenate([x, y])
###Output
_____no_output_____
###Markdown
You can also concatenate more than two arrays at once:
###Code
z = [99, 99, 99]
print(np.concatenate([x, y, z]))
###Output
[ 1 2 3 3 2 1 99 99 99]
###Markdown
It can also be used for two-dimensional arrays:
###Code
grid = np.array([[1, 2, 3],
[4, 5, 6]])
# concatenate along the first axis
np.concatenate([grid, grid])
# concatenate along the second axis (zero-indexed)
np.concatenate([grid, grid], axis=1)
###Output
_____no_output_____
###Markdown
For working with arrays of mixed dimensions, it can be clearer to use the ``np.vstack`` (vertical stack) and ``np.hstack`` (horizontal stack) functions:
###Code
x = np.array([1, 2, 3])
grid = np.array([[9, 8, 7],
[6, 5, 4]])
# vertically stack the arrays
np.vstack([x, grid])
# horizontally stack the arrays
y = np.array([[99],
[99]])
np.hstack([grid, y])
###Output
_____no_output_____
###Markdown
Similarly, ``np.dstack`` will stack arrays along the third axis. Splitting of arraysThe opposite of concatenation is splitting, which is implemented by the functions ``np.split``, ``np.hsplit``, and ``np.vsplit``. For each of these, we can pass a list of indices giving the split points:
###Code
x = [1, 2, 3, 99, 99, 3, 2, 1]
x1, x2, x3 = np.split(x, [3, 5])
print(x1, x2, x3)
###Output
[1 2 3] [99 99] [3 2 1]
###Markdown
Notice that *N* split-points lead to *N + 1* subarrays.The related functions ``np.hsplit`` and ``np.vsplit`` are similar:
###Code
grid = np.arange(16).reshape((4, 4))
grid
upper, lower = np.vsplit(grid, [2])
print(upper)
print(lower)
left, right = np.hsplit(grid, [2])
print(left)
print(right)
###Output
[[ 0 1]
[ 4 5]
[ 8 9]
[12 13]]
[[ 2 3]
[ 6 7]
[10 11]
[14 15]]
|
notebooks/Dataset D - Contraceptive Method Choice/Synthetic data evaluation/Privacy/2_Membership_Inference_Dataset D.ipynb | ###Markdown
Membership Inference Attack (MIA) Dataset D
###Code
#import libraries
import warnings
warnings.filterwarnings("ignore")
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
import seaborn as sns
import os
print('Libraries imported!!')
#define directory of functions and actual directory
HOME_PATH = '' #home path of the project
FUNCTIONS_DIR = 'EVALUATION FUNCTIONS/PRIVACY'
ACTUAL_DIR = os.getcwd()
#change directory to functions directory
os.chdir(HOME_PATH + FUNCTIONS_DIR)
#import functions for membership attack simulation
from membership_inference import evaluate_membership_attack
#change directory to actual directory
os.chdir(ACTUAL_DIR)
print('Functions imported!!')
###Output
Functions imported!!
###Markdown
1. Read real and synthetic datasetsIn this part, the real and synthetic datasets are read.
###Code
#Define global variables
DATA_TYPES = ['Real','GM','SDV','CTGAN','WGANGP']
SYNTHESIZERS = ['GM','SDV','CTGAN','WGANGP']
FILEPATHS = {'Real' : HOME_PATH + 'REAL DATASETS/TRAIN DATASETS/D_ContraceptiveMethod_Real_Train.csv',
'GM' : HOME_PATH + 'SYNTHETIC DATASETS/GM/D_ContraceptiveMethod_Synthetic_GM.csv',
'SDV' : HOME_PATH + 'SYNTHETIC DATASETS/SDV/D_ContraceptiveMethod_Synthetic_SDV.csv',
'CTGAN' : HOME_PATH + 'SYNTHETIC DATASETS/CTGAN/D_ContraceptiveMethod_Synthetic_CTGAN.csv',
'WGANGP' : HOME_PATH + 'SYNTHETIC DATASETS/WGANGP/D_ContraceptiveMethod_Synthetic_WGANGP.csv'}
categorical_columns = ['wife_education','husband_education','wife_religion','wife_working','husband_occupation',
'standard_of_living_index','media_exposure','contraceptive_method_used']
data = dict()
Q=5
#iterate over all datasets filepaths and read each dataset
data = dict()
for name, path in FILEPATHS.items() :
data[name] = pd.read_csv(path)
for col in categorical_columns :
data[name][col] = data[name][col].astype('category').cat.codes
numerical_columns = data[name].select_dtypes(include=['int64','float64']).columns.tolist()
for col in numerical_columns :
data[name][col] = pd.qcut(data[name][col], q=Q, duplicates='drop').cat.codes
data
#read TRAIN real dataset
train_data = pd.read_csv(HOME_PATH + 'REAL DATASETS/TRAIN DATASETS/D_ContraceptiveMethod_Real_Train.csv')
for col in categorical_columns :
train_data[col] = train_data[col].astype('category').cat.codes
for col in numerical_columns :
train_data[col] = pd.qcut(train_data[col], q=Q, duplicates='drop').cat.codes
train_data = train_data.sample(frac=1)
#read TEST real dataset
test_data = pd.read_csv(HOME_PATH + 'REAL DATASETS/TEST DATASETS/D_ContraceptiveMethod_Real_Test.csv')
for col in categorical_columns :
test_data[col] = test_data[col].astype('category').cat.codes
for col in numerical_columns :
test_data[col] = pd.qcut(test_data[col], q=Q, duplicates='drop').cat.codes
print(len(test_data))
test_data.index = range(len(train_data), len(train_data) + len(test_data))
real_data = (pd.concat([train_data[0:len(test_data)], test_data])).sample(frac=1)
real_data
thresholds = [0.4, 0.3, 0.2, 0.1]
props = [0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1]
train_data_indexes = train_data.index.tolist()
precision_values_all = dict()
accuracy_values_all = dict()
for name in SYNTHESIZERS :
print(name)
precision_values = dict()
accuracy_values = dict()
for th in thresholds :
precision_values[th] = []
accuracy_values[th] = []
for p in props :
attacker_data = real_data.iloc[0:int(len(real_data)*p)]
precision_vals, accuracy_vals = evaluate_membership_attack(attacker_data, train_data_indexes, data[name], th)
precision_values[th].append(precision_vals)
accuracy_values[th].append(accuracy_vals)
print('Proportion ', p, ' Threshold ', th, ' analysed')
print('- mean precision', np.mean(precision_values[th]))
print('- mean accuracy', np.mean(accuracy_values[th]))
print('###################################################')
precision_values_all[name] = precision_values
accuracy_values_all[name] = accuracy_values
colors = ['tab:blue','tab:orange','tab:green','tab:red']
fig, axs = plt.subplots(nrows=2, ncols=4, figsize=(13,2.5*2))
idx = {SYNTHESIZERS[0] : {'accuracy' : [0,0], 'precision' : [0,1]},
SYNTHESIZERS[1] : {'accuracy' : [0,2], 'precision' : [0,3]},
SYNTHESIZERS[2] : {'accuracy' : [1,0], 'precision' : [1,1]},
SYNTHESIZERS[3] : {'accuracy' : [1,2], 'precision' : [1,3]}}
first = True
for name in SYNTHESIZERS :
ax_pre = axs[idx[name]['precision'][0], idx[name]['precision'][1]]
ax_acc = axs[idx[name]['accuracy'][0], idx[name]['accuracy'][1]]
precision_values = precision_values_all[name]
accuracy_values = accuracy_values_all[name]
for i in range(0,len(thresholds)) :
ax_pre.plot(props, precision_values[thresholds[i]], 'o-', color=colors[i])
ax_acc.plot(props, accuracy_values[thresholds[i]], 'o-', color=colors[i])
ax_pre.axhline(y=0.5, color='gray', linestyle='--', alpha=0.5)
ax_acc.axhline(y=0.5, color='gray', linestyle='--', alpha=0.5)
ax_pre.set_ylabel('prec')
ax_acc.set_title(name, fontsize=12)
ax_acc.title.set_position([1.1, 1.03])
ax_pre.set_ylim(-0.05,1.05)
ax_acc.set_ylabel('acc')
ax_acc.set_ylim(-0.05,1.05)
ax_acc.grid(True)
ax_pre.grid(True)
ax_acc.set_yticks([0.0,0.2,0.4,0.6,0.8,1])
ax_pre.set_yticks([0.0,0.2,0.4,0.6,0.8,1])
ax_acc.set_xticks([0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9,1.0])
ax_pre.set_xticks([0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9,1.0])
if first == False :
ax_acc.set_yticklabels([])
else :
first = False
ax_pre.set_yticklabels([])
ax_acc.set_xticklabels([])
ax_pre.set_xticklabels([])
axs[idx['CTGAN']['accuracy'][0],idx['CTGAN']['accuracy'][1]].set_yticklabels([0.0,0.2,0.4,0.6,0.8,1])
axs[idx['CTGAN']['accuracy'][0],idx['CTGAN']['accuracy'][1]].set_xticklabels([0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9,1.0])
axs[idx['CTGAN']['precision'][0],idx['CTGAN']['precision'][1]].set_xticklabels([0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9,1.0])
axs[idx['WGANGP']['accuracy'][0],idx['WGANGP']['accuracy'][1]].set_xticklabels([0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9,1.0])
axs[idx['WGANGP']['precision'][0],idx['WGANGP']['precision'][1]].set_xticklabels([0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9,1.0])
fig.text(0.7, 0.04, 'Proportion of dataset known by an attacker', ha='center')
fig.text(0.3, 0.04, 'Proportion of dataset known by an attacker', ha='center')
ax_pre.legend(thresholds, ncol=len(thresholds), bbox_to_anchor=(-0.7, -0.3))
fig.tight_layout()
#fig.suptitle('Membership Inference Tests Results \n Dataset F - Indian Liver Patient', fontsize=18)
fig.savefig('INFERENCE TESTS RESULTS/MEMBERSHIP INFERENCE TESTS RESULTS.svg', bbox_inches='tight')
###Output
_____no_output_____ |
Featuring Engineering/Variable_distribution.ipynb | ###Markdown
Variable distribution Linear Regression AssumptionsLinear Regression makes the following assumptions about the predictor variables X:- Linear relationship with the outcome Y- Multivariate normality- No or little multicollinearity- HomoscedasticityThe normality assumption means that every variable X should follow a Gaussian distribution.Homoscedasticity, also known as homogeneity of variance, describes a situation in which the error term (that is, the “noise” or random disturbance in the relationship between the independent variables (Xs) and the dependent variable (Y)) is the same across all values of the independent variables.Violations of the homoscedasticity and/or normality assumptions (assuming a distribution of data is homoscedastic or Gaussian when in reality it is not) may result in poor model performance. Does Variable Distribution affect other machine learning models?The remaining machine learning models, including Neural Networks, Support Vector Machines, Tree-based methods and PCA, do not make any assumption about the distribution of the independent variables. However, in many cases the model performance may benefit from a "Gaussian-like" distribution. Why may models benefit from a "Gaussian-like" distribution? In variables with a normal distribution, the observations of X available to predict Y vary across a greater range of values; that is, the values of X are "spread" over a greater range. See Figure 1 below.
###Code
# simulation of the Gaussian distribution of a variable X
# x-axis indicates the values of X
# y-axis indicates the frequency of each value
# x is spread over a big range (-3,3)
import matplotlib.pyplot as plt
import numpy as np
from scipy.stats import norm  # matplotlib.mlab.normpdf was removed in newer matplotlib; use SciPy's normal pdf
import math
mu = 0
variance = 1
sigma = math.sqrt(variance)
x = np.linspace(mu-3*variance,mu+3*variance, 100)
plt.plot(x, norm.pdf(x, mu, sigma))
plt.show()
###Output
_____no_output_____
###Markdown
In the Gaussian distribution depicted in the figure above, the values of the individual observations vary greatly across a wide range of x values (in this case -2 to 2). In variables with skewed distributions, the majority of the observations available to predict Y, vary within a very narrow value range of X, and very few observations are available in the tails of the distribution. See Figure 2 below.
###Code
from numpy import linspace, pi, sqrt, exp  # newer SciPy versions no longer re-export these NumPy names
from scipy.special import erf
from pylab import plot,show
def pdf(x):
return 1/sqrt(2*pi) * exp(-x**2/2)
def cdf(x):
return (1 + erf(x/sqrt(2))) / 2
def skew(x,e=0,w=1,a=0):
t = (x-e) / w
return 2 / w * pdf(t) * cdf(a*t)
n = 2**10
e = 1.0 # location
w = 2.0 # scale
x = linspace(-10,10,n)
# This simulation shows a variable X skewed to the left. The majority of the values of X are accumulated
# on the right hand side of the plot, with only a few observations on the left tail.
# This means that we do not have enough values of X on the left for our prediction model to learn from
# more precisely, there are a lot of observations for the value range of x (0,5), but very few for the range (-10, 0)
p = skew(x,e,w,5)
plot(x,p)
plt.show()
###Output
_____no_output_____
###Markdown
In the skewed distribution above we see that the majority of the observations take values over a very narrow value range (2-2.5 in this example), with few observations taking higher values (> 5 in this example). Therefore, regardless of the outcome, most observations will have values in the 2-2.5 space, making discrimination and outcome prediction difficult.
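The asymmetry can also be quantified numerically rather than only inspected visually. A minimal sketch, assuming we draw an actual sample from a comparable skewed distribution (the `a=5`, location `1.0` and scale `2.0` mirror the `e`, `w`, `a` values used above; purely illustrative, not part of the original notebook):

```python
from scipy.stats import skewnorm, skew

# draw a skewed sample comparable to the density plotted above
sample = skewnorm.rvs(a=5, loc=1.0, scale=2.0, size=10000, random_state=0)
print('skewness:', skew(sample))  # close to 0 for a symmetric, Gaussian-like variable; clearly positive here
```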
###Code
# overlay of a normal distribution (yellow), with 2 skewed distributions (green and blue)
for a in [-5,0,5]:
p = skew(x,e,w,a)
plot(x,p)
show()
###Output
_____no_output_____
###Markdown
In the figure above, we can see more clearly how, in a variable with a Gaussian distribution, the different observations can take values in a wider value range than in a skewed variable, where the majority of values are concentrated at one end.

What can we do if variables are skewed?

If the performance of the machine learning model is poor due to the skewed distribution of the variables, there are two strategies available to improve performance:
1. Find a transformation of the variable X that stabilizes the variance and generates more consistent support across the range of values (a transformation that gives the variable more of the bell shape of the Gaussian distribution).
2. Choose an appropriate binning transformation (discretisation) in order to enable each portion of the predictors' ranges to be weighted appropriately.

I will discuss these 2 methods in more detail in sections 14 and 15. In this notebook, I will discuss some diagnostic methods to determine whether the variables have an approximately normal distribution, and run some experiments to see how normality affects the performance of machine learning algorithms.

=============================================================================

Real life example: Predicting survival on the Titanic: understanding society behaviour and beliefs

Perhaps one of the most infamous shipwrecks in history, the Titanic sank after colliding with an iceberg, killing 1502 out of 2224 people on board. Interestingly, by analysing the probability of survival based on a few attributes like gender, age, and social status, we can make very accurate predictions about which passengers would survive. Some groups of people were more likely to survive than others, such as women, children, and the upper class. Therefore, we can learn about the society's priorities and privileges at the time.

Predicting the sale price of houses

The problem at hand aims to predict the final sale price of homes based on different explanatory variables describing aspects of residential homes. Predicting house prices is useful to identify fruitful investments, or to determine whether the price advertised for a house is over- or underestimated, before making a buying judgment.

Titanic dataset
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
% matplotlib inline
import pylab
import scipy.stats as stats
# for regression problems
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
# for classification
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
# to split and standarize the datasets
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
# to evaluate regression models
from sklearn.metrics import mean_squared_error
# to evaluate classification models
from sklearn.metrics import roc_auc_score
# load the numerical variables of the Titanic Dataset
data = pd.read_csv('titanic.csv', usecols = ['Pclass', 'Age', 'Fare', 'Survived'])
data.head()
###Output
_____no_output_____
###Markdown
Age
###Code
# plot the histograms to have a quick look at the distributions
data.Age.hist()
###Output
_____no_output_____
###Markdown
One way of assessing whether the distribution is approximately normal is to evaluate the Quantile-Quantile plot (Q-Q plot). In a Q-Q plot, the quantiles of the variable are plotted on the vertical axis, and the quantiles of a specified probability distribution (Gaussian distribution) are on the horizontal axis. The plot consists of a series of points that show the relationship between the real data and the specified probability distribution. If the values of a variable perfectly match the specified probability distribution, the points on the graph will form a 45 degree line. See below.
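The same comparison can also be built by hand, which makes explicit what the Q-Q plot shows. A small sketch (assuming the `data` DataFrame loaded above), comparing the empirical quantiles of Age with the quantiles of a normal distribution with the same mean and standard deviation:

```python
import numpy as np
import scipy.stats as stats

age = data['Age'].dropna()
probs = np.linspace(0.01, 0.99, 99)
empirical_q = np.quantile(age, probs)                                    # sample quantiles of Age
theoretical_q = stats.norm.ppf(probs, loc=age.mean(), scale=age.std())   # matching Gaussian quantiles
# if Age were perfectly Gaussian, plotting empirical_q against theoretical_q would give a straight line
```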
###Code
# let's plot the Q-Q plot for the variable Age.
temp = data.dropna(subset=['Age'])
stats.probplot(temp.Age, dist="norm", plot=pylab)
pylab.show()
###Output
_____no_output_____
###Markdown
The majority of the observations lie on the 45 degree red line following the expected quantiles of the theoretical Gaussian distribution. Some observations at the lower end of the value range depart from the red line, and this is consistent with the slight shift towards the left in the variable distribution observed in the histogram above.
###Code
# let's apply a transformation and see what it does to the distribution
(data.Age**(1/1.5)).hist()
# and now the effect of the transformation on the Q-Q plot
stats.probplot((temp.Age**(1/1.5)), dist="norm", plot=pylab)
pylab.show()
###Output
_____no_output_____
###Markdown
The variable transformation did not result in a Gaussian distribution of the transformed Age values.

Fare
###Code
# let's have a look at the Fare variable
data.Fare.hist()
# and the Q-Q plot
stats.probplot(data.Fare, dist="norm", plot=pylab)
pylab.show()
###Output
_____no_output_____
###Markdown
Both from the histogram and from the Q-Q plot it is clear that Fare does not follow a Gaussian distribution.
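The cell below applies a simple power transformation (the fourth root). As an aside, and only as a hedged alternative sketch rather than part of the original analysis, scipy can also pick a power transformation automatically via Box-Cox; Fare contains zeros, so a shift of +1 is added because Box-Cox requires strictly positive values:

```python
import scipy.stats as stats

fare_boxcox, lambda_ = stats.boxcox(data.Fare + 1)   # +1 because Box-Cox needs strictly positive inputs
print('estimated Box-Cox lambda:', lambda_)
```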
###Code
# and now let's apply a transformation
(data.Fare**(1/4)).hist()
# and the Q-Q plot
stats.probplot((data.Fare**(1/4)), dist="norm", plot=pylab)
pylab.show()
###Output
_____no_output_____
###Markdown
We can see that after the transformation the quantiles are somewhat better aligned with the 45 degree line of theoretical Gaussian quantiles. The transformation is not perfect, but it does result in a broader spread of the values over a wider value range of the variable.

Model performance with original and transformed variables
###Code
# let's add the transformed variables to the dataset
data['Fare_transformed'] = data.Fare**(1/4)
data['Age_transformed'] = data.Age**(1/1.5)
# let's separate into training and testing set
X_train, X_test, y_train, y_test = train_test_split(data.fillna(0), data.Survived, test_size=0.3,
random_state=0)
X_train.shape, X_test.shape
# let's scale the features
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)
###Output
_____no_output_____
###Markdown
Logistic Regression
###Code
# model built using the natural distributions
logit = LogisticRegression(random_state=44, C=1000) # c big to avoid regularization
logit.fit(X_train[['Age', 'Fare']], y_train)
print('Train set')
pred = logit.predict_proba(X_train[['Age', 'Fare']])
print('Logistic Regression roc-auc: {}'.format(roc_auc_score(y_train, pred[:,1])))
print('Test set')
pred = logit.predict_proba(X_test[['Age', 'Fare']])
print('Logistic Regression roc-auc: {}'.format(roc_auc_score(y_test, pred[:,1])))
# model built using the transformed variables
logit = LogisticRegression(random_state=44, C=1000) # c big to avoid regularization
logit.fit(X_train[['Age_transformed', 'Fare_transformed']], y_train)
print('Train set')
pred = logit.predict_proba(X_train[['Age_transformed', 'Fare_transformed']])
print('Logistic Regression roc-auc: {}'.format(roc_auc_score(y_train, pred[:,1])))
print('Test set')
pred = logit.predict_proba(X_test[['Age_transformed', 'Fare_transformed']])
print('Logistic Regression roc-auc: {}'.format(roc_auc_score(y_test, pred[:,1])))
###Output
_____no_output_____
###Markdown
Using transformed variables improved the performance of the Logistic Regression model (compare test roc-auc: 0.7137 vs 0.7275).
### Support Vector Machine
###Code
# model built using natural distributions
SVM_model = SVC(random_state=44, probability=True)
SVM_model.fit(X_train[['Age', 'Fare']], y_train)
print('Train set')
pred = SVM_model.predict_proba(X_train[['Age', 'Fare']])
print('SVM roc-auc: {}'.format(roc_auc_score(y_train, pred[:,1])))
print('Test set')
pred = SVM_model.predict_proba(X_test[['Age', 'Fare']])
print('SVM roc-auc: {}'.format(roc_auc_score(y_test, pred[:,1])))
# model built on transformed variables
SVM_model = SVC(random_state=44, probability=True)
SVM_model.fit(X_train[['Age_transformed', 'Fare_transformed']], y_train)
print('Train set')
pred = SVM_model.predict_proba(X_train[['Age_transformed', 'Fare_transformed']])
print('SVM roc-auc: {}'.format(roc_auc_score(y_train, pred[:,1])))
print('Test set')
pred = SVM_model.predict_proba(X_test[['Age_transformed', 'Fare_transformed']])
print('SVM roc-auc: {}'.format(roc_auc_score(y_test, pred[:,1])))
###Output
_____no_output_____
###Markdown
For the SVM, transforming the variables also improved the performance of the model. Not only does the SVM now generalise better to the test set, the model built using the transformed variables also does not over-fit the train set (compare train roc-auc 0.927 vs 0.726).

Random Forests
###Code
# model built using natural distributions
rf = RandomForestClassifier(n_estimators=700, random_state=39)
rf.fit(X_train[['Age', 'Fare']], y_train)
print('Train set')
pred = rf.predict_proba(X_train[['Age', 'Fare']])
print('Random Forests roc-auc: {}'.format(roc_auc_score(y_train, pred[:,1])))
print('Test set')
pred = rf.predict_proba(X_test[['Age', 'Fare']])
print('Random Forests roc-auc: {}'.format(roc_auc_score(y_test, pred[:,1])))
# model built on transformed variables
rf = RandomForestClassifier(n_estimators=700, random_state=39)
rf.fit(X_train[['Age_transformed', 'Fare_transformed']], y_train)
print('Train set')
pred = rf.predict_proba(X_train[['Age_transformed', 'Fare_transformed']])
print('Random Forests roc-auc: {}'.format(roc_auc_score(y_train, pred[:,1])))
print('Test set')
pred = rf.predict_proba(X_test[['Age_transformed', 'Fare_transformed']])
print('Random Forests roc-auc: {}'.format(roc_auc_score(y_test, pred[:,1])))
###Output
_____no_output_____
###Markdown
As expected, Random Forests did not see a benefit from transforming the variables to a more Gaussian-like distribution.

House Sale Dataset
###Code
# let's load the House Sale Price dataset with a few columns
cols_to_use = ['LotArea', 'BsmtFinSF1','GrLivArea', 'OpenPorchSF', 'YearBuilt', 'SalePrice']
data = pd.read_csv('houseprice.csv', usecols=cols_to_use)
data.head()
# let's check for missing data
data[cols_to_use].isnull().sum()
# let's plot the histograms to have an impression of the distribution of the numerical variables
for col in cols_to_use:
fig = data[col].hist(bins=50)
fig.set_xlabel(col)
fig.set_label('Number of houses')
plt.show()
###Output
_____no_output_____
###Markdown
We observed that the numerical variables are not normally distributed. In particular, most of them apart from YearBuilt are skewed.
###Code
# let's apply a transformation and see whether the variables are now more Gaussian shaped
# on top of the histograms we plot now as well the Q-Q plots
for col in cols_to_use:
if col not in ['SalePrice', 'YearBuilt']:
data[col+'_transformed'] = data[col]**(1/4)
fig = data[col+'_transformed'].hist(bins=50)
fig.set_xlabel(col)
fig.set_ylabel('Number of houses')
plt.show()
stats.probplot(data[data[col+'_transformed']!=0][col+'_transformed'], dist="norm", plot=pylab)
plt.show()
###Output
_____no_output_____
###Markdown
We see in the Q-Q plots that most of the observations (blue dots) lie on the 45 degree line. Therefore the transformation was successful in giving the variables a Gaussian-like shape.
###Code
# tried a few transformations on the variable YearBuilt without major success; you can go ahead and try other transformations
data['YearBuilt_transformed'] = data['YearBuilt']**(1/-2)
fig = data['YearBuilt_transformed'].hist(bins=50)
fig.set_xlabel('YearBuilt_transformed')
plt.show()
stats.probplot(data['YearBuilt_transformed'], dist="norm", plot=pylab)
pylab.show()
###Output
_____no_output_____
###Markdown
However, the transformation of the variable 'YearBuilt' does not help to obtain a more Gaussian-like distribution.

Machine learning model performance on original vs transformed variables
###Code
# let's separate into training and testing set
X_train, X_test, y_train, y_test = train_test_split(data.fillna(0), data.SalePrice, test_size=0.3,
random_state=0)
X_train.shape, X_test.shape
X_train.head()
y_train.head()
# create a list with the untransformed columns
cols_to_use = cols_to_use[0:-1]
cols_to_use
# create a list with the transformed columns
cols_transformed = [col+'_transformed' for col in cols_to_use]
cols_transformed
# let's standarise the dataset
scaler = StandardScaler()
X_train_o = scaler.fit_transform(X_train[cols_to_use])
X_test_o = scaler.transform(X_test[cols_to_use])
# let's standarise the dataset
scaler = StandardScaler()
X_train_t = scaler.fit_transform(X_train[cols_transformed])
X_test_t = scaler.transform(X_test[cols_transformed])
###Output
_____no_output_____
###Markdown
Linear Regression
###Code
linreg = LinearRegression()
linreg.fit(X_train_o, y_train)
print('Train set')
pred = linreg.predict(X_train_o)
print('Linear Regression mse: {}'.format(mean_squared_error(y_train, pred)))
print('Test set')
pred = linreg.predict(X_test_o)
print('Linear Regression mse: {}'.format(mean_squared_error(y_test, pred)))
print()
cols_transformed = [col+'_transformed' for col in cols_to_use]
linreg = LinearRegression()
linreg.fit(X_train_t, y_train)
print('Train set')
pred = linreg.predict(X_train_t)
print('Linear Regression mse: {}'.format(mean_squared_error(y_train, pred)))
print('Test set')
pred = linreg.predict(X_test_t)
print('Linear Regression mse: {}'.format(mean_squared_error(y_test, pred)))
print()
###Output
_____no_output_____
###Markdown
We can see that variable transformation improved the model performance: the mse on the test set is smaller when using the Linear Regression model built on the transformed variables (2.7e6 vs 2.4e6). In addition, the mse on the train set is larger, suggesting that the model built using the natural distributions is over-fitting the train set (mse 1.6e6 vs 1.8e6).
###Code
rf = RandomForestRegressor(n_estimators=5, random_state=39, max_depth=2,min_samples_leaf=100)
rf.fit(X_train_o, y_train)
print('Train set')
pred = rf.predict(X_train_o)
print('Random Forests mse: {}'.format(mean_squared_error(y_train, pred)))
print('Test set')
pred = rf.predict(X_test_o)
print('Random Forests mse: {}'.format(mean_squared_error(y_test, pred)))
print()
print()
rf = RandomForestRegressor(n_estimators=5, random_state=39, max_depth=2,min_samples_leaf=100)
rf.fit(X_train_t, y_train)
print('Train set')
pred = rf.predict(X_train_t)
print('Random Forests mse: {}'.format(mean_squared_error(y_train, pred)))
print('Test set')
pred = rf.predict(X_test_t)
print('Random Forests mse: {}'.format(mean_squared_error(y_test, pred)))
print()
print()
###Output
_____no_output_____ |
InteractiveFigures/InDevelopment/HYG_Database/HYG.ipynb | ###Markdown
An attempt to render the HYG Database in X3D(OM). Based upon Michael Chang's [Google Chrome "100,000" Stars Experiment](http://www.html5rocks.com/en/tutorials/casestudies/100000stars/). History: > April 2015: Initial version, HYG v3, August Muench ([email protected])
###Code
%pdb
# mayavi 4.4.0
from mayavi import mlab
# astropy 1.0.1
from astropy.table import Table
###Output
_____no_output_____
###Markdown
The HYG data were downloaded as a gzipped CSV file from the [Astronomy Nexus](http://astronexus.com/hyg/) website. > MD5 (hygdata_v3.csv.gz) = abbb1109c62d2c759b765e3315ffa901 I tried two example imports with this data: first as the straight CSV in astropy, then using a modified version with fewer columns and wrapped as a FITS object (using TopCat).
###Code
data = Table.read("hygdata_v3.csv")
data
print(type(data))
print(type(data['ra']), data['ra'].fill_value)
print(type(data['hip']), data['hip'].fill_value)
print(type(data['hip']), data['hip']._mask)
###Output
<class 'astropy.table.table.Table'>
(<class 'astropy.table.column.MaskedColumn'>, 1e+20)
(<class 'astropy.table.column.MaskedColumn'>, 999999)
(<class 'astropy.table.column.MaskedColumn'>, array([ True, False, False, ..., True, True, True], dtype=bool))
###Markdown
A few things about this dataset:
* RA is expressed in decimal hours;
* the "null" value for distance is 1e5;
* the data are read into "masked arrays" in astropy;
* there are many incompatibilities with masked arrays, so use .filled() to down-convert them.
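As a small illustration of the first two points (not what the cells below do), RA can be converted from decimal hours to degrees and the placeholder distances filtered out; this is a sketch only:

```python
import numpy as np

ra_deg = data['ra'].filled() * 15.0      # 1 hour of right ascension = 15 degrees
dist = data['dist'].filled()
known_dist = dist < 1e5                  # 1e5 marks an unknown distance in this catalogue
print(ra_deg[known_dist].min(), ra_deg[known_dist].max())
```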
###Code
px = data['ra'].filled()
py = data['dec'].filled()
pz = data['dist'].filled()
ps = data['absmag'].filled()
mlab.close(1)
mlab.figure(1,size=(600,300))
mlab.points3d(px, py, pz, extent=[0, 1, 0, 0.5, 0, 1], mode='point')
# figure
mlab.outline(color=(0,0,0),line_width = 2.0)
mlab.axes(color = (0,0,0), ranges = [360, 0.0, -90, 90, 1, 30], nb_labels=5)
mlab.xlabel("RA")
mlab.ylabel("Dec")
mlab.zlabel("Distance (pc)")
mlab.title("HYG v3", height=0.9, opacity = 0.5, size=0.3)
mlab.colorbar(orientation="vertical",nb_labels=7)
# save to X3D file
mlab.savefig('hydv3_simple.x3d')
mlab.show()
px._mask
###Output
_____no_output_____ |
notebooks/SI/S16_mutations_time_traces_low.ipynb | ###Markdown
Make data frame from time traces
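If the keys stored in one of these HDF5 files are not known in advance, they can be listed first. A small sketch, where `path_to_dataset` is a placeholder for one of the `path_dataset` entries used below:

```python
import pandas as pd

with pd.HDFStore(path_to_dataset, mode='r') as store:    # path_to_dataset is hypothetical here
    print(store.keys())                                  # e.g. '/dataset_time_traces', '/dataset_init_events', ...
```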
###Code
data_frame = makeDataframe.make_dataframe(file_path)
data_frame = data_frame.sort_values(by=['doubling_rate'])
time_traces_data_frame = pd.read_hdf(data_frame['path_dataset'].iloc[indx], key='dataset_time_traces')
v_init_data_frame = pd.read_hdf(data_frame['path_dataset'].iloc[indx], key='dataset_init_events')
v_init = v_init_data_frame.iloc[10]['v_init']
v_init_per_ori = v_init_data_frame.iloc[10]['v_init_per_ori']
t_init_list = v_init_data_frame['t_init'].to_numpy()
v_d_data_frame = pd.read_hdf(data_frame['path_dataset'].iloc[indx], key='dataset_div_events')
data_frame
time = np.array(time_traces_data_frame["time"])
volume = np.array(time_traces_data_frame["volume"])
n_ori = np.array(time_traces_data_frame["n_ori"])
active_fraction = np.array(time_traces_data_frame["active_fraction"])
free_conc = np.array(time_traces_data_frame["free_conc"])
print(time.size)
cycle_0 = 6
cycle_f = 9
t_0 = time[volume==v_d_data_frame['v_b'][cycle_0]]
indx_0 = np.where(time==t_0)[0][0]
t_f = time[volume==v_d_data_frame['v_b'][cycle_f]]
indx_f = np.where(time==t_f)[0][0]+20
print(indx_0, indx_f)
n_ori_cut = n_ori[indx_0:indx_f]
time_cut = time[indx_0:indx_f]
volume_cut = volume[indx_0:indx_f]
active_fraction_cut = active_fraction[indx_0:indx_f]
free_conc_cut = free_conc[indx_0:indx_f]
t_init_list_cut_1 = t_init_list[t_init_list>t_0]
t_init_list_cut = t_init_list_cut_1[t_init_list_cut_1<t_f]
t_b = t_init_list + data_frame.iloc[indx]['t_CD']
t_b_cut_1 = t_b[t_b<t_f]
t_b_cut = t_b_cut_1[t_b_cut_1>t_0]
print(t_init_list_cut, t_b_cut)
###Output
40000
20182 26200
[21.178 23.178 25.178] [22.178 24.178 26.178]
###Markdown
Color definitions
###Code
pinkish_red = (247 / 255, 109 / 255, 109 / 255)
green = (0 / 255, 133 / 255, 86 / 255)
dark_blue = (36 / 255, 49 / 255, 94 / 255)
light_blue = (168 / 255, 209 / 255, 231 / 255)
darker_light_blue = (112 / 255, 157 / 255, 182 / 255)
blue = (55 / 255, 71 / 255, 133 / 255)
yellow = (247 / 255, 233 / 255, 160 / 255)
###Output
_____no_output_____
###Markdown
Plot four figures
###Code
label_list = [r'$V(t)$', r'$f(t)$']
x_axes_list = [time_cut, time_cut]
y_axes_list = [volume_cut, active_fraction_cut]
color_list = [green, pinkish_red]
fig, ax = plt.subplots(2, figsize=(3.2,2))
plt.xlabel(r'time [$\tau_{\rm d}$]')
y_min_list = [0,0]
y_max_list = [1, 1.2]
doubling_time = 1/data_frame.iloc[indx]['doubling_rate']
print(1/doubling_time)
print('number of titration sites per origin:', data_frame.iloc[indx]['n_c_max_0'])
for item in range(0, len(label_list)):
ax[item].set_ylabel(label_list[item])
ax[item].plot(x_axes_list[item], y_axes_list[item], color=color_list[item])
ax[item].set_ylim(ymin=0)
ax[item].tick_params(
axis='x', # changes apply to the x-axis
which='both', # both major and minor ticks are affected
bottom=False, # ticks along the bottom edge are off
top=False, # ticks along the top edge are off
labelbottom=False) # labels along the bottom edge are off
ax[item].spines["top"].set_visible(False)
ax[item].spines["right"].set_visible(False)
ax[item].margins(0)
for t_div in t_b_cut:
ax[item].axvline(x=t_div,
ymin=y_min_list[item],
ymax=y_max_list[item],
c="black",
zorder=0,
linewidth=0.8,
clip_on=False)
for t_init in t_init_list_cut:
ax[item].axvline(x=t_init,
ymin=y_min_list[item],
ymax=y_max_list[item],
c="black",
zorder=0,
linewidth=0.8,
linestyle='--',
clip_on=False)
ax[0].set_yticks([0, v_init])
# ax[0].set(ylim=(0, v_init+0.01))
ax[0].set_yticklabels(['0',r'$v^\ast$'])
ax[0].get_yticklabels()[1].set_color(green)
ax[0].axhline(y=v_init, color=green, linestyle='--')
# ax[1].axhline(y=data_frame.iloc[0]['michaelis_const_initiator'], color=color_list[1], linestyle='--')
# ax[1].set_yticks([0, data_frame.iloc[0]['michaelis_const_initiator']])
# ax[1].set_yticklabels([0, r'$K_{\rm D}$'])
# ax[1].get_yticklabels()[1].set_color(color_list[1])
# ax[1].set(ylim=(0,data_frame.iloc[0]['michaelis_const_initiator']*1.15))
# ax[2].axhline(y=data_frame.iloc[0]['frac_init'], color=pinkish_red, linestyle='--')
ax[1].set_yticks([0, 0.5, 1])
ax[1].set_yticklabels(['0', '0.5', '1'])
# ax[3].set_yticks([0, data_frame.iloc[0]['critical_free_active_conc']])
# ax[3].set_yticklabels(['0',r'$[D]_{\rm ATP, f}^\ast$'])
# ax[3].get_yticklabels()[1].set_color(color_list[3])
# ax[3].axhline(y=data_frame.iloc[0]['critical_free_active_conc'], color=color_list[3], linestyle='--')
ax[1].tick_params(bottom=True, labelbottom=True)
ax[1].tick_params(axis='x', colors='black')
ax[1].set_xticks([time_cut[0],
time_cut[0]+ doubling_time,
time_cut[0]+ 2*doubling_time,
time_cut[0]+ 3*doubling_time
])
ax[1].set_xticklabels(['0', '1', '2', '3'])
plt.savefig(file_path + '/S11_titration_switch_combined_'+mutant+'_'+str(indx)+'.pdf', format='pdf',bbox_inches='tight')
###Output
0.5
number of titration sites per origin: 300.0
|
past-team-code/Fall2018Team1/Exploration/Jeff&Ziyang_clean_up_author.ipynb | ###Markdown
Gets rid of URLs in the author column.
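One caveat worth noting (an aside, not part of the original code): `str.contains` treats its pattern as a regular expression by default, so the unescaped dots in the pattern below match any character (e.g. '.com' also matches 'xcom'). A stricter, purely hypothetical variant would escape them:

```python
# hypothetical stricter pattern; the cell below keeps the original, broader one
strict_pattern = r'http|\.com|@|\.co|\.net|www\.|\.org'
# mask = uglydata['author'].str.contains(strict_pattern, na=False)
```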
###Code
uglydata= pd.read_csv('article_data_and_price_labeled_publisher.csv').drop('Unnamed: 0', axis = 1)
uglydata.head()
mask = uglydata['author'].str.contains('http|.com|@|.co|.net|.www|.org')
print(len(uglydata))
df = uglydata[mask==False]
print(len(df))
df.head(60)
df.to_csv('1106_cleaned_author_articles.csv')
###Output
_____no_output_____ |
NLSMRF.ipynb | ###Markdown
Nonlinear seismic response of an MRF. March 2020, by Amir Hossein Namadchi. This is an OpenSeesPy simulation of a moment resisting frame subjected to seismic excitation. The model was introduced by *C. Kolay* & *J. M. Ricles* in their paper entitled [Assessment of explicit and semi-explicit classes of model-based algorithms for direct integration in structural dynamics](https://onlinelibrary.wiley.com/doi/abs/10.1002/nme.5153). The beams and columns of the MRF are modeled using `dispBeamColumn` fiber elements. The gravity load resisting system associated with the MRF is modeled using a lean-on column composed of linear elastic beam-column elements with 2nd-order $P-\Delta$ effects [[1]](https://onlinelibrary.wiley.com/doi/abs/10.1002/nme.5153).
###Code
import numpy as np
import openseespy.opensees as ops
import matplotlib.pyplot as plt
import eSEESminiPy
%matplotlib notebook
###Output
_____no_output_____
###Markdown
Units
###Code
## Units
m = 1.0 # Meters
KN = 1.0 # KiloNewtons
sec = 1.0 # Seconds
inch = 0.0254*m # inches
kg = KN*(sec**2)/m # mass unit (derived)
g = 9.81*(m/sec**2) # gravitational constant
###Output
_____no_output_____
###Markdown
Earthquake record. This will load the *1994 Northridge* earthquake ground motion record (Canyon Country - W Lost Cany station) downloaded from the [PEER website](https://ngawest2.berkeley.edu/). The record is then scaled by a factor of **3** as follows (scaling could also be done when defining `timeSeries`):
###Code
dt = 0.01*sec
northridge = np.loadtxt('RSN960_NORTHR_LOS270.AT2', skiprows=4).flatten()
northridge = np.column_stack((np.arange(0,len(northridge)*dt, dt),
northridge*3*g))
###Output
_____no_output_____
###Markdown
Model Definition: Geometry. Node coordinates and element connectivity are defined.
###Code
ops.wipe()
ops.model('basic','-ndm',2,'-ndf',3)
## Main Nodes
# Node Coordinates Matrix (size : nn x 2)
node_coords = np.array([[0,0],[6,0],
[0,1],[6,1],
[0,2],[6,2],
[0,2.5],[6,2.5],
[0,3],[0.5,3],[1,3],[3,3],[5,3],[5.5,3],[6,3],
[0,3.5],[6,3.5],
[0,4],[6,4],
[0,5],[6,5],
[0,5.5],[6,5.5],
[0,6],[0.5,6],[1,6],[3,6],[5,6],[5.5,6],[6,6]
], dtype = np.float64)*m
## Main Elements
# Element Connectivity Matrix (size: nel x 2)
connectivity = [[1, 3], [3, 5], [5, 7], [7, 9],
[9, 10], [10, 11], [11, 12], [12, 13],
[13, 14], [14, 15], [15, 8], [8, 6],
[6, 4], [4, 2], [9, 16], [16, 18],
[18, 20], [20, 22], [22, 24], [24, 25],
[25, 26], [26, 27], [27, 28], [28, 29],
[29, 30], [30, 23], [23, 21], [21, 19],
[19, 17], [17, 15]]
# Get Number of elements
nel = len(connectivity)
# Distinguish beams and columns by their element tag ID
all_the_beams = list(range(5, 10+1)) + list(range(20, 25+1))
all_the_cols = list(np.setdiff1d(np.arange(1, nel+1),
all_the_beams))
###Output
_____no_output_____
###Markdown
Sections & Materials. Sections are defined in a `dict`, which is quite self-explanatory.
###Code
# Main Beams and Columns
sections = {'W24x55':{'d':23.57*inch, 'tw':0.395*inch,
'bf':7.005*inch, 'tf':0.505*inch,
'A':16.2*(inch**2),
'I1':1350*(inch**4), 'I2':29.1*(inch**4)},
'W14x120':{'d':14.48*inch, 'tw':0.590*inch,
'bf':14.670*inch, 'tf':0.940*inch,
'A':35.3*(inch**2),
'I1':1380*(inch**4), 'I2':495*(inch**4)}
}
# Leaning columns section properties
leaning_col = {'A':(9.76e-2)*m**2,
'I1':(7.125e-4)*m**4}
# Material properties
F_y = 345000*(KN/m**2) # yield strength
E_0 = 2e8*(KN/m**2) # initial elastic tangent
eta = 0.01 # strain-hardening ratio
rho = 7850*(kg/m**3) # mass density
###Output
_____no_output_____
###Markdown
Adding to the Domain
###Code
# Nodal loads and Masses
lumped_mass = 50.97*kg # seismic floor mass
P_1 = 500*KN # Nodal loads
# Adding nodes to the domain
## Main Nodes
[ops.node(n+1,*node_coords[n])
for n in range(len(node_coords))];
## Fictitious Nodes (Leaning columns)
ops.node(100,*[7.0, 0.0]) # @ Base
ops.node(101,*[7.0, 3.0],
'-mass', *[lumped_mass, 0.00001, 0.00001]) # @ Story 1
ops.node(102,*[7.0, 6.0],
'-mass', *[lumped_mass, 0.00001, 0.00001]) # @ Story 2 (roof)
# Material
# -> uniaxial bilinear steel material with kinematic hardening
ops.uniaxialMaterial('Steel01', 1,
F_y, E_0, eta)
# Adding Sections
## Beams
ops.section('WFSection2d', 1, 1,
sections['W24x55']['d'],
sections['W24x55']['tw'],
sections['W24x55']['bf'],
sections['W24x55']['tf'], 10, 3)
## Columns
ops.section('WFSection2d', 2, 1,
sections['W14x120']['d'],
sections['W14x120']['tw'],
sections['W14x120']['bf'],
sections['W14x120']['tf'], 10, 3)
# Boundary Conditions
## Fixing the Base Nodes
[ops.fix(n, 1, 1, 0)
for n in [1, 2, 100]];
## Rigid floor diaphragm
ops.equalDOF(12, 101, 1)
ops.equalDOF(27, 102, 1)
# Transformations & Integration
## Transformation
ops.geomTransf('Linear', 1) # For Beams
ops.geomTransf('PDelta', 2) # For leaning Columns
## Integration scheme
ops.beamIntegration('Lobatto', 1, 1, 5) # For Beams
ops.beamIntegration('Lobatto', 2, 2, 5) # For Columns
# Adding Elements
## Beams
[ops.element('dispBeamColumn',
e, *connectivity[e-1], 1, 1,
'-cMass', rho*sections['W24x55']['A'])
for e in all_the_beams];
## Columns
## -> OpenseesPy cannot handle numpy int types
## -> so I had to convert them to primitive python int type
[ops.element('dispBeamColumn',
e, *connectivity[e-1], 1, 2,
'-cMass', rho*sections['W14x120']['A'])
for e in list(map(int, all_the_cols))];
## Leaning Columns
ops.element('elasticBeamColumn', nel+1, *[100, 101],
leaning_col['A'], E_0, leaning_col['I1'], 2)
ops.element('elasticBeamColumn', nel+2, *[101, 102],
leaning_col['A'], E_0, leaning_col['I1'], 2)
###Output
_____no_output_____
###Markdown
Draw Model. The model can now be drawn using eSEESminiPy:
###Code
eSEESminiPy.drawModel()
###Output
_____no_output_____
###Markdown
Damping Model. The model assumes 2% damping for the first and second modes of the system according to Rayleigh's damping model. For two modes, the damping coefficients can be obtained from: $$ \begin{pmatrix} \alpha_0 \\ \alpha_1 \end{pmatrix} = \frac{2\,\omega_m \omega_n}{\omega_n^2 - \omega_m^2} \begin{bmatrix} \omega_n & -\omega_m \\ -1/\omega_n & 1/\omega_m \end{bmatrix} \begin{pmatrix} \zeta_m \\ \zeta_n \end{pmatrix} $$ So, we need to perform an eigen analysis to obtain the first two natural frequencies.
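Since both modes are assigned the same damping ratio here ($\zeta_m=\zeta_n=\zeta=0.02$), the expression above reduces to the familiar special case (stated only for reference):

$$\alpha_0 = \frac{2\,\zeta\,\omega_m\omega_n}{\omega_m+\omega_n}, \qquad \alpha_1 = \frac{2\,\zeta}{\omega_m+\omega_n}$$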
###Code
# Building Rayleigh damping model
omega = np.sqrt(ops.eigen('-fullGenLapack', 2))
print('Two first periods are:', 2*np.pi/omega)
a_m, b_k = 2*((omega[0]*omega[1])/(omega[1]**2-omega[0]**2))*(
np.array([[omega[1],-omega[0]],
[-1/omega[1],1/omega[0]]])@np.array([0.02,0.02]))
## Rayleigh damping based on initial stiffness
ops.rayleigh(a_m, 0, b_k, 0)
###Output
Two first periods are: [0.61688251 0.11741945]
###Markdown
Analysis: Gravity Analysis
###Code
# Time Series
ops.timeSeries('Linear', 1) # For gravitional loads
# Load Pattern
ops.pattern('Plain', 1, 1)
ops.load(101, *[0.0, -P_1, 0.0])
ops.load(102, *[0.0, -P_1, 0.0])
# Settings
ops.constraints('Transformation')
ops.numberer('RCM')
ops.system('ProfileSPD')
ops.test('NormUnbalance', 0.000001, 100)
ops.algorithm('Newton')
ops.integrator('LoadControl', 0.1)
ops.analysis('Static')
# Perform static analysis
ops.analyze(10)
###Output
_____no_output_____
###Markdown
Time History Analysis
###Code
# Set time to zero
ops.loadConst('-time', 0.0)
ops.wipeAnalysis()
# Time Series
ops.timeSeries('Path', 2, '-dt', dt, # For EQ
'-values', *northridge[:,1],
'-time', *northridge[:,0])
# Load Pattern
ops.pattern('UniformExcitation', 2, 1, '-accel', 2)
# Settings
ops.constraints('Plain')
ops.numberer('RCM')
ops.system('ProfileSPD')
ops.test('NormUnbalance', 0.0000001, 100)
ops.algorithm('Newton')
ops.integrator('Newmark', 0.5, 0.25)
ops.analysis('Transient')
# Record some responses to plot
time_lst =[] # list to hold time stations for plotting
d_lst = [] # list to hold roof displacments
for i in range(len(northridge)):
ops.analyze(1, dt)
time_lst.append(ops.getTime())
d_lst.append(ops.nodeDisp(27,1))
###Output
_____no_output_____
###Markdown
Visualization. The time history of the horizontal displacement of the roof is plotted here.
###Code
plt.figure(figsize=(12,4))
plt.plot(time_lst, np.array(d_lst), color = '#d62d20', linewidth=1.75)
plt.ylabel('Horizontal Displacement (m)', {'fontname':'Cambria', 'fontstyle':'italic','size':14})
plt.xlabel('Time (sec)', {'fontname':'Cambria', 'fontstyle':'italic','size':14})
plt.grid()
plt.yticks(fontname = 'Cambria', fontsize = 14)
plt.xticks(fontname = 'Cambria', fontsize = 14);
###Output
_____no_output_____ |
PLAsTiCC Astronomical Classification/data pre-processing.ipynb | ###Markdown
[PLAsTiCC Astronomical Classification | Kaggle](https://www.kaggle.com/c/PLAsTiCC-2018) Code Link: [Github](https://github.com/AutuanLiu/Kaggle-Compettions/tree/master/PLAsTiCC%20Astronomical%20Classification) Ref Links1. [Naive Benchmark - Galactic vs Extragalactic | Kaggle](https://www.kaggle.com/kyleboone/naive-benchmark-galactic-vs-extragalactic)2. [The Astronomical (complete) EDA - PLAsTiCC dataset | Kaggle](https://www.kaggle.com/danilodiogo/the-astronomical-complete-eda-plasticc-dataset)3. [All Classes Light Curve Characteristics | Kaggle](https://www.kaggle.com/mithrillion/all-classes-light-curve-characteristics)4. [Simple Neural Net for Time Series Classification | Kaggle](https://www.kaggle.com/meaninglesslives/simple-neural-net-for-time-series-classification)5. [Dataset overview - Exploration and comments | Kaggle](https://www.kaggle.com/hrmello/dataset-overview-exploration-and-comments)6. [Strategies for Flux Time Series Preprocessing | Kaggle](https://www.kaggle.com/mithrillion/strategies-for-flux-time-series-preprocessing)7. [The PLAsTiCC Astronomy "Starter Kit" | Kaggle](https://www.kaggle.com/michaelapers/the-plasticc-astronomy-starter-kit)
###Code
%matplotlib inline
%reload_ext autoreload
%autoreload 2
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
import pandas as pd
import numpy as np
import xgboost as xgb
import seaborn as sns
import matplotlib.pyplot as plt
import re
from collections import Counter
from sklearn.preprocessing import LabelEncoder
sns.set_style('whitegrid')
sns.set_palette('Set1')
###Output
_____no_output_____
###Markdown
1 导入数据
###Code
datadir = 'dataset/'
# read data from file
# train_data = pd.read_csv(f'{datadir}training_set.csv')
# test_data = pd.read_csv(f'{datadir}test_set.csv')
train_metadata = pd.read_csv(f'{datadir}training_set_metadata.csv')
# test_metadata = pd.read_csv(f'{datadir}test_set_metadata.csv')
###Output
_____no_output_____
###Markdown
2 数据基本信息
###Code
train_metadata.describe()
train_metadata.info()
for v in train_metadata.columns:
print(v, train_metadata[v].isna().sum())
###Output
object_id 0
ra 0
decl 0
gal_l 0
gal_b 0
ddf 0
hostgal_specz 0
hostgal_photoz 0
hostgal_photoz_err 0
distmod 2325
mwebv 0
target 0
|
notebooks/1810 - Comparing CNN architectures.ipynb | ###Markdown
1810 - Comparing CNN architectures
###Code
# Imports
import sys
import os
import time
import math
import random
# Add the path to the parent directory to augment search for module
par_dir = os.path.abspath(os.path.join(os.getcwd(), os.pardir))
if par_dir not in sys.path:
sys.path.append(par_dir)
# Plotting import
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
# Import the utils for plotting the metrics
from plot_utils import plot_utils
from plot_utils import notebook_utils_2
from sklearn.preprocessing import label_binarize
from sklearn.metrics import roc_curve, auc
# Fix the colour scheme for each particle type
color_dict = {"gamma":"red", "e":"blue", "mu":"green"}
# Fix the numpy seed
np.random.seed(42)
# Plot the ROC curve for one vs another class
def plot_old_ROC(softmaxes, labels, energies, softmax_index_dict, label_0, label_1,
min_energy=0, max_energy=1500, show_plot=False, save_path=None):
assert softmaxes is not None
assert labels is not None
assert softmax_index_dict is not None
assert softmaxes.shape[0] == labels.shape[0]
assert label_0 in softmax_index_dict.keys()
assert label_1 in softmax_index_dict.keys()
#------------------------------------------------------------------------
# Create a boolean map to select events in the user-defined energy range
#------------------------------------------------------------------------
energy_slice_map = [False for i in range(len(energies))]
for i in range(len(energies)):
if(energies[i] >= min_energy and energies[i] < max_energy):
energy_slice_map[i] = True
curr_softmax = softmaxes[energy_slice_map]
curr_labels = labels[energy_slice_map]
#------------------------------------------------------------------------
# Extract the softmax and true label values for signal and background events
#------------------------------------------------------------------------
# Extract the useful softmax and labels from the input arrays
softmax_0 = curr_softmax[curr_labels==softmax_index_dict[label_0]]
labels_0 = curr_labels[curr_labels==softmax_index_dict[label_0]]
softmax_1 = curr_softmax[curr_labels==softmax_index_dict[label_1]]
labels_1 = curr_labels[curr_labels==softmax_index_dict[label_1]]
# Add the two arrays
softmax = np.concatenate((softmax_0, softmax_1), axis=0)
labels = np.concatenate((labels_0, labels_1), axis=0)
#------------------------------------------------------------------------
# Compute the ROC curve and the AUC for class corresponding to label 0
#------------------------------------------------------------------------
fpr, tpr, threshold = roc_curve(labels, softmax[:,softmax_index_dict[label_0]], pos_label=softmax_index_dict[label_0])
roc_auc = auc(fpr, tpr)
tnr = 1. - fpr
if show_plot or save_path is not None:
# TNR vs TPR plot
fig, ax = plt.subplots(figsize=(16,9),facecolor="w")
ax.tick_params(axis="both", labelsize=20)
ax.plot(tpr, tnr, color=color_dict[label_0],
label=r"$\{0}$, AUC ${1:0.3f}$".format(label_0, roc_auc) if label_0 is not "e" else r"${0}$, AUC ${1:0.3f}$".format(label_0, roc_auc),
linewidth=1.0, marker=".", markersize=4.0, markerfacecolor=color_dict[label_0])
# Show coords of individual points near x = 0.2, 0.5, 0.8
todo = {0.2: True, 0.5: True, 0.8: True}
for xy in zip(tpr, tnr):
xy = (round(xy[0], 3), round(xy[1], 3))
for point in todo.keys():
if xy[0] >= point and todo[point]:
ax.annotate('(%s, %s)' % xy, xy=xy, textcoords='data', fontsize=18, bbox=dict(boxstyle="square", fc="w"))
todo[point] = False
ax.grid(True, which='both', color='grey')
xlabel = r"$\{0}$ signal efficiency".format(label_0) if label_0 is not "e" else r"${0}$ signal efficiency".format(label_0)
ylabel = r"$\{0}$ background rejection".format(label_1) if label_1 is not "e" else r"${0}$ background rejection".format(label_1)
ax.set_xlabel(xlabel, fontsize=20)
ax.set_ylabel(ylabel, fontsize=20)
ax.set_title(r"${0} \leq E < {1}$".format(round(min_energy,2), round(max_energy,2)), fontsize=20)
ax.legend(loc="upper right", prop={"size":20})
plt.margins(0.1)
if save_path is not None:
plt.savefig(save_path)
if show_plot:
plt.show()
plt.clf() # Clear the current figure
plt.close() # Close the opened window
return fpr, tpr, threshold, roc_auc
# Plot the ROC curve for one vs another class
def plot_new_ROC(softmaxes, labels, energies, softmax_index_dict, label_0, label_1,
min_energy=0, max_energy=1500, show_plot=False, save_path=None):
assert softmaxes is not None
assert labels is not None
assert softmax_index_dict is not None
assert softmaxes.shape[0] == labels.shape[0]
assert label_0 in softmax_index_dict.keys()
assert label_1 in softmax_index_dict.keys()
#------------------------------------------------------------------------
# Create a boolean map to select events in the user-defined energy range
#------------------------------------------------------------------------
energy_slice_map = [False for i in range(len(energies))]
for i in range(len(energies)):
if(energies[i] >= min_energy and energies[i] < max_energy):
energy_slice_map[i] = True
curr_softmax = softmaxes[energy_slice_map]
curr_labels = labels[energy_slice_map]
#------------------------------------------------------------------------
# Extract the softmax and true label values for signal and background events
#------------------------------------------------------------------------
# Extract the useful softmax and labels from the input arrays
softmax_0 = curr_softmax[curr_labels==softmax_index_dict[label_0]]
labels_0 = curr_labels[curr_labels==softmax_index_dict[label_0]]
softmax_1 = curr_softmax[curr_labels==softmax_index_dict[label_1]]
labels_1 = curr_labels[curr_labels==softmax_index_dict[label_1]]
# Add the two arrays
softmax = np.concatenate((softmax_0, softmax_1), axis=0)
labels = np.concatenate((labels_0, labels_1), axis=0)
#------------------------------------------------------------------------
# Compute the ROC curve and the AUC for class corresponding to label 0
#------------------------------------------------------------------------
fpr, tpr, threshold = roc_curve(labels, softmax[:,softmax_index_dict[label_0]], pos_label=softmax_index_dict[label_0])
roc_auc = auc(fpr, tpr)
inv_fpr = []
for i in fpr:
inv_fpr.append(1/i) if i != 0 else inv_fpr.append(1/1e-3)
if show_plot or save_path is not None:
# TNR vs TPR plot
fig, ax = plt.subplots(figsize=(16,9),facecolor="w")
ax.tick_params(axis="both", labelsize=20)
ax.plot(tpr, inv_fpr, color=color_dict[label_0],
label=r"$\{0}$, AUC ${1:0.3f}$".format(label_0, roc_auc) if label_0 is not "e" else r"${0}$, AUC ${1:0.3f}$".format(label_0, roc_auc),
linewidth=1.0, marker=".", markersize=4.0, markerfacecolor=color_dict[label_0])
# Show coords of individual points near x = 0.2, 0.5, 0.8
todo = {0.2: True, 0.5: True, 0.8: True}
for xy in zip(tpr, inv_fpr):
xy = (round(xy[0], 3), round(xy[1], 3))
for point in todo.keys():
if xy[0] >= point and todo[point]:
ax.annotate('(%s, %s)' % xy, xy=xy, textcoords='data', fontsize=18, bbox=dict(boxstyle="square", fc="w"))
todo[point] = False
ax.grid(True, which='both', color='grey')
xlabel = r"$\{0}$ signal efficiency".format(label_0) if label_0 is not "e" else r"${0}$ signal efficiency".format(label_0)
ylabel = r"$\{0}$ background rejection".format(label_1) if label_1 is not "e" else r"${0}$ background rejection".format(label_1)
ax.set_xlabel(xlabel, fontsize=20)
ax.set_ylabel(ylabel, fontsize=20)
ax.set_title(r"${0} \leq E < {1}$".format(round(min_energy,2), round(max_energy,2)), fontsize=20)
ax.legend(loc="upper right", prop={"size":20})
plt.margins(0.1)
plt.yscale("log")
if save_path is not None:
plt.savefig(save_path)
if show_plot:
plt.show()
plt.clf() # Clear the current figure
plt.close() # Close the opened window
return fpr, tpr, threshold, roc_auc
num_samples = [9000000, 9000000, 9000000, 9000000]
dumps = ["20191018_114334", "20191018_114334", "20191018_114334", "20191018_114334"]
dump_dir = "/home/akajal/WatChMaL/VAE/dumps/"
dump_file = "/test_validation_iteration_dump.npz"
softmax_index_dict = {"gamma":0, "e":1, "mu":2}
for num_sample, dump in zip(num_samples, dumps):
print("-------------------------------------------------------------")
print("Plotting the ROC curve for LeNet CNN trained using {0} samples".format(num_sample))
print("-------------------------------------------------------------")
test_dump_path = dump_dir + dump + dump_file
test_dump_np = np.load(test_dump_path)
test_softmax = test_dump_np['softmax'].reshape(-1, 3)
test_labels = test_dump_np['labels'].reshape(-1)
test_energies = test_dump_np['energies'].reshape(-1)
roc_metrics = plot_new_ROC(test_softmax, test_labels, test_energies,
softmax_index_dict, "e", "gamma", min_energy=0,
max_energy=2000, show_plot=True)
roc_metrics = plot_old_ROC(test_softmax, test_labels, test_energies,
softmax_index_dict, "e", "gamma", min_energy=0,
max_energy=2000, show_plot=True)
###Output
-------------------------------------------------------------
Plotting the ROC curve for LeNet CNN trained using 9000000 samples
-------------------------------------------------------------
|
Yandex data science/2/Week 1/.ipynb_checkpoints/_3791dd48ff93b23aff49215d7a9fc9af_peer_review_linreg_height_weight-checkpoint.ipynb | ###Markdown
Linear regression and the main Python libraries for data analysis and scientific computing

This assignment is devoted to linear regression. Using the example of predicting a person's height from their weight, you will see the mathematics behind it and, along the way, get acquainted with the main Python libraries needed for the rest of the course.

**Materials**
- Lectures of this course on linear models and gradient descent
- [Documentation](http://docs.scipy.org/doc/) for the NumPy and SciPy libraries
- [Documentation](http://matplotlib.org/) for the Matplotlib library
- [Documentation](http://pandas.pydata.org/pandas-docs/stable/tutorials.html) for the Pandas library
- [Pandas Cheat Sheet](http://www.analyticsvidhya.com/blog/2015/07/11-steps-perform-data-analysis-pandas-python/)
- [Documentation](http://stanford.edu/~mwaskom/software/seaborn/) for the Seaborn library

Task 1. Initial data analysis with Pandas

In this task we will use the [SOCR](http://wiki.stat.ucla.edu/socr/index.php/SOCR_Data_Dinov_020108_HeightsWeights) data on the height and weight of 25 thousand teenagers.

**[1].** If the Seaborn library is not installed, run *conda install seaborn* in the terminal. (Seaborn is not part of the Anaconda distribution, but it provides convenient high-level functionality for data visualization.)
###Code
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import scipy as sc
%matplotlib inline
###Output
_____no_output_____
###Markdown
Let's read the height and weight data (*weights_heights.csv*, attached to the assignment) into a Pandas DataFrame object:
###Code
data = pd.read_csv('weights_heights.csv', index_col='Index')
###Output
_____no_output_____
###Markdown
Most often, the first thing to do after reading the data is to look at the first few records. This helps catch data-reading errors (for example, when instead of 10 columns you get a single one whose name contains 9 semicolons). It also lets you get acquainted with the data, at the very least look at the features and their nature (quantitative, categorical, etc.). After that it is worth plotting histograms of the feature distributions; this again helps to understand the nature of a feature (whether its distribution is power-law, normal, or something else). A histogram also helps to spot values that look very unlike the others, the "outliers" in the data. Histograms are conveniently built with the *plot* method of a Pandas DataFrame with the argument *kind='hist'*.

**Example.** Let's plot a histogram of the height distribution of the teenagers in the *data* sample. We use the *plot* method of the DataFrame *data* with the argument *y='Height'* (the feature whose distribution we are plotting).
###Code
data.plot(y='Height', kind='hist',
color='red', title='Height (inch.) distribution')
###Output
_____no_output_____
###Markdown
Arguments:
- *y='Height'*: the feature whose distribution we are plotting
- *kind='hist'*: means a histogram is built
- *color='red'*: the colour

**[2]**. Look at the first 5 records using the *head* method of the Pandas DataFrame. Draw a histogram of the weight distribution using the *plot* method of the Pandas DataFrame. Make the histogram green and give the plot a title.
###Code
pd.DataFrame.head(data)
data.plot(y='Weight', kind='hist',
color='green', title='Weight (lbs.) distribution')
###Output
_____no_output_____
###Markdown
One effective method of initial data analysis is displaying pairwise dependencies between features. A grid of $m \times m$ plots is created (*m* is the number of features), where histograms of the feature distributions are drawn on the diagonal and scatter plots of pairs of features are drawn off the diagonal. This can be done with the $scatter\_matrix$ method of a Pandas DataFrame or *pairplot* from the Seaborn library. To illustrate this method, it is more interesting to add a third feature. Let's create the *Body Mass Index* ([BMI](https://en.wikipedia.org/wiki/Body_mass_index)) feature. To do this, we use the convenient combination of the *apply* method of a Pandas DataFrame and Python lambda functions.
###Code
def make_bmi(height_inch, weight_pound):
METER_TO_INCH, KILO_TO_POUND = 39.37, 2.20462
return (weight_pound / KILO_TO_POUND) / \
(height_inch / METER_TO_INCH) ** 2
data['BMI'] = data.apply(lambda row: make_bmi(row['Height'],
row['Weight']), axis=1)
###Output
_____no_output_____
###Markdown
**[3].** Build a figure showing the pairwise dependencies of the features 'Height', 'Weight' and 'BMI' on each other. Use the *pairplot* method of the Seaborn library.
###Code
sns.pairplot(data)
###Output
_____no_output_____
###Markdown
During initial data analysis one often needs to examine how a quantitative feature depends on a categorical one (say, salary on an employee's gender). "Box-and-whisker" plots, the boxplots of the Seaborn library, help here. A box plot is a compact way to show the statistics of a real-valued feature (mean and quartiles) for different values of a categorical feature. It also helps track "outliers", observations whose value of the real-valued feature differs strongly from the others.

**[4]**. Create a new feature *weight_category* in the DataFrame *data* that takes 3 values: 1 if the weight is less than 120 pounds (~54 kg), 3 if the weight is greater than or equal to 150 pounds (~68 kg), and 2 otherwise. Build a box plot showing how height depends on the weight category. Use the *boxplot* method of the Seaborn library and the *apply* method of the Pandas DataFrame. Label the *y* axis "Рост" (Height) and the *x* axis "Весовая категория" (Weight category).
###Code
def weight_category(weight):
if weight < 120:
a = 1
else:
if weight >= 150:
a = 3
else:
a = 2
return a
data['weight_cat'] = data['Weight'].apply(weight_category)
sns.boxplot(y = 'Height', x = 'weight_cat', data = data).set(ylabel = "Рост", xlabel = "Весовая категория")
###Output
_____no_output_____
###Markdown
**[5].** Build a scatter plot of height versus weight using the *plot* method of the Pandas DataFrame with the argument *kind='scatter'*. Give the plot a title.
###Code
data.plot(y='Height', x = 'Weight', kind='scatter', title='Height on weight dependence')
###Output
_____no_output_____
###Markdown
Task 2. Minimizing the squared error

In the simplest setting, the task of predicting the value of a real-valued feature from the other features (the regression problem) is solved by minimizing a quadratic error function.

**[6].** Write a function that, for two parameters $w_0$ and $w_1$, computes the squared error of approximating the dependence of height $y$ on weight $x$ by the straight line $y = w_0 + w_1 * x$:
$$error(w_0, w_1) = \sum_{i=1}^n {(y_i - (w_0 + w_1 * x_i))}^2 $$
Here $n$ is the number of observations in the dataset, and $y_i$ and $x_i$ are the height and weight of the $i$-th person in the dataset.
###Code
def error_calc(w):
k = np.zeros((len(data["Height"])))
height = np.array(data["Height"])
weight = np.array(data["Weight"])
for i in range (0, len(k)):
k[i] = (height[i] - (w[0] + w[1] * weight[i]))**2
return np.sum(k)
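# As an equivalent, vectorized sketch (not required by the task), the same error
# can be computed without an explicit loop:
def error_calc_vectorized(w):
    height = np.array(data["Height"])
    weight = np.array(data["Weight"])
    return np.sum((height - (w[0] + w[1] * weight)) ** 2)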
###Output
_____no_output_____
###Markdown
So we are solving the following problem: how to draw a straight line through the cloud of points corresponding to the observations in our dataset, in the feature space of "Height" and "Weight", so as to minimize the functional from item 6. To start, let's display at least some lines and make sure they capture the dependence of height on weight poorly.

**[7].** On the plot from item 5 of Task 1, draw two lines corresponding to the parameter values ($w_0, w_1) = (60, 0.05)$ and ($w_0, w_1) = (50, 0.16)$. Use the *plot* method from *matplotlib.pyplot* and the *linspace* method of the NumPy library. Label the axes and the plot.
###Code
x = np.linspace(70, 180, 11001)
y1 = x * 0.05 + 60
y2 = x * 0.16 + 50
height = np.array(data["Height"])
weight = np.array(data["Weight"])
plt.plot(weight, height, "bo", x, y1, x, y2)
plt.xlabel("Weight")
plt.ylabel("Height")
plt.title("Scatter plot with two regression lines")
###Output
_____no_output_____
###Markdown
Minimizing the quadratic error function is a relatively simple problem because the function is convex. Many optimization methods exist for such a problem. Let's see how the error function depends on one parameter (the slope of the line) when the other parameter (the intercept) is fixed.

**[8].** Plot the dependence of the error function computed in item 6 on the parameter $w_1$ with $w_0$ = 50. Label the axes and the plot.
###Code
x = np.linspace(0, 0.4, 4001)
y = np.zeros((len(x)))
for i in range(0, len(y)):
y[i] = error_calc([50, x[i]])
plt.plot(x, y)
plt.xlabel("w1")
plt.ylabel("Error")
plt.title("Error dependence on w1")
###Output
_____no_output_____
###Markdown
Now let's use an optimization method to find the "optimal" slope of the line approximating the dependence of height on weight, with the coefficient fixed at $w_0 = 50$.

**[9].** Using the *minimize_scalar* method from *scipy.optimize*, find the minimum of the function defined in item 6 for values of the parameter $w_1$ in the range [-5,5]. On the plot from item 5 of Task 1, draw the line corresponding to the parameter values ($w_0$, $w_1$) = (50, $w_1\_opt$), where $w_1\_opt$ is the optimal value of the parameter $w_1$ found in item 8.
###Code
w1_opt = sc.optimize.minimize_scalar(lambda w1: error_calc([50, w1]), bounds = (-5,5)).x
print(w1_opt)
x = np.linspace(70, 180, 11001)
y = w1_opt * x + 50
plt.plot(weight, height, "bo", x, y, "r")
plt.xlabel("Weight")
plt.ylabel("Height")
plt.title("Scatter plot with one optimized regression line")
###Output
_____no_output_____
###Markdown
When analysing multidimensional data, people often want to get an intuitive feel for the nature of the data through visualization. Alas, with more than 3 features such pictures cannot be drawn. In practice, to visualize data in 2D and 3D, 2 or 3 principal components are extracted from the data (we will see exactly how this is done later in the course) and the data are displayed in the plane or in a volume. Let's see how to draw 3D pictures in Python, using the example of plotting the function $z(x,y) = sin(\sqrt{x^2+y^2})$ for values of $x$ and $y$ in the interval [-5,5] with a step of 0.25.
###Code
from mpl_toolkits.mplot3d import Axes3D
###Output
_____no_output_____
###Markdown
We create objects of type matplotlib.figure.Figure (the figure) and matplotlib.axes._subplots.Axes3DSubplot (the axes).
###Code
fig = plt.figure()
ax = fig.gca(projection='3d') # get current axis
# Create NumPy arrays with the point coordinates along the X and Y axes.
# We use the meshgrid method, which builds coordinate matrices
# from the coordinate vectors. Define the desired function Z(x, y).
X = np.arange(-5, 5, 0.25)
Y = np.arange(-5, 5, 0.25)
X, Y = np.meshgrid(X, Y)
Z = np.sin(np.sqrt(X**2 + Y**2))
# Finally, we use the *plot_surface* method of the
# Axes3DSubplot object. We also label the axes.
surf = ax.plot_surface(X, Y, Z)
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_zlabel('Z')
plt.show()
###Output
_____no_output_____
###Markdown
**[10].** Build a 3D plot of the error function computed in item 6 as a function of the parameters $w_0$ and $w_1$. Label the $x$ axis "Intercept", the $y$ axis "Slope", and the $z$ axis "Error".
###Code
fig = plt.figure()
ax = fig.gca(projection='3d')
Z = np.zeros((len(X), len(Y)))
for i in range(0, len(X)):
for j in range(0, len(Y)):
Z[i,j] = error_calc([X[i,j], Y[i,j]])
surf = ax.plot_surface(X, Y, Z)
ax.set_xlabel('Intercept')
ax.set_ylabel('Slope')
ax.set_zlabel('Error')
plt.show()
###Output
_____no_output_____
###Markdown
**[11].** Using the *minimize* method from scipy.optimize, find the minimum of the function defined in item 6 for values of the parameter $w_0$ in the range [-100,100] and $w_1$ in the range [-5, 5]. The starting point is ($w_0$, $w_1$) = (0, 0). Use the L-BFGS-B optimization method (the method argument of minimize). On the plot from item 5 of Task 1, draw the line corresponding to the optimal values of the parameters $w_0$ and $w_1$ that you found. Label the axes and the plot.
###Code
w0_opt, w1_opt = sc.optimize.minimize(error_calc, bounds = ((-100, 100), (-5, 5)), x0 = [0, 0], method = "L-BFGS-B").x
x = np.linspace(70, 180, 11001)
y = w1_opt * x + w0_opt
plt.plot(weight, height, "bo", x, y, "r")
plt.xlabel("Weight")
plt.ylabel("Height")
plt.title("Scatter plot with the best of the best regression line")
print(w0_opt, w1_opt)
###Output
57.57178753255578 0.08200640184212181
|
ipynb/Germany-Sachsen-Anhalt-LK-Harz.ipynb | ###Markdown
Germany: LK Harz (Sachsen-Anhalt)
* Homepage of project: https://oscovida.github.io
* [Execute this Jupyter Notebook using myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Germany-Sachsen-Anhalt-LK-Harz.ipynb)
###Code
import datetime
import time
start = datetime.datetime.now()
print(f"Notebook executed on: {start.strftime('%d/%m/%Y %H:%M:%S%Z')} {time.tzname[time.daylight]}")
%config InlineBackend.figure_formats = ['svg']
from oscovida import *
overview(country="Germany", subregion="LK Harz");
# load the data
cases, deaths, region_label = germany_get_region(landkreis="LK Harz")
# compose into one table
table = compose_dataframe_summary(cases, deaths)
# show tables with up to 500 rows
pd.set_option("max_rows", 500)
# display the table
table
###Output
_____no_output_____
###Markdown
Explore the data in your web browser- If you want to execute this notebook, [click here to use myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Germany-Sachsen-Anhalt-LK-Harz.ipynb)- and wait (~1 to 2 minutes)- Then press SHIFT+RETURN to advance code cell to code cell- See http://jupyter.org for more details on how to use Jupyter Notebook Acknowledgements:- Johns Hopkins University provides data for countries- Robert Koch Institute provides data for within Germany- Open source and scientific computing community for the data tools- Github for hosting repository and html files- Project Jupyter for the Notebook and binder service- The H2020 project Photon and Neutron Open Science Cloud ([PaNOSC](https://www.panosc.eu/))--------------------
###Code
print(f"Download of data from Johns Hopkins university: cases at {fetch_cases_last_execution()} and "
f"deaths at {fetch_deaths_last_execution()}.")
# to force a fresh download of data, run "clear_cache()"
print(f"Notebook execution took: {datetime.datetime.now()-start}")
###Output
_____no_output_____
###Markdown
Germany: LK Harz (Sachsen-Anhalt)* Homepage of project: https://oscovida.github.io* Plots are explained at http://oscovida.github.io/plots.html* [Execute this Jupyter Notebook using myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Germany-Sachsen-Anhalt-LK-Harz.ipynb)
###Code
import datetime
import time
start = datetime.datetime.now()
print(f"Notebook executed on: {start.strftime('%d/%m/%Y %H:%M:%S%Z')} {time.tzname[time.daylight]}")
%config InlineBackend.figure_formats = ['svg']
from oscovida import *
overview(country="Germany", subregion="LK Harz", weeks=5);
overview(country="Germany", subregion="LK Harz");
compare_plot(country="Germany", subregion="LK Harz", dates="2020-03-15:");
# load the data
cases, deaths = germany_get_region(landkreis="LK Harz")
# get population of the region for future normalisation:
inhabitants = population(country="Germany", subregion="LK Harz")
print(f'Population of country="Germany", subregion="LK Harz": {inhabitants} people')
# compose into one table
table = compose_dataframe_summary(cases, deaths)
# show tables with up to 1000 rows
pd.set_option("max_rows", 1000)
# display the table
table
###Output
_____no_output_____
###Markdown
Explore the data in your web browser- If you want to execute this notebook, [click here to use myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Germany-Sachsen-Anhalt-LK-Harz.ipynb)- and wait (~1 to 2 minutes)- Then press SHIFT+RETURN to advance code cell to code cell- See http://jupyter.org for more details on how to use Jupyter Notebook Acknowledgements:- Johns Hopkins University provides data for countries- Robert Koch Institute provides data for within Germany- Atlo Team for gathering and providing data from Hungary (https://atlo.team/koronamonitor/)- Open source and scientific computing community for the data tools- Github for hosting repository and html files- Project Jupyter for the Notebook and binder service- The H2020 project Photon and Neutron Open Science Cloud ([PaNOSC](https://www.panosc.eu/))--------------------
###Code
print(f"Download of data from Johns Hopkins university: cases at {fetch_cases_last_execution()} and "
f"deaths at {fetch_deaths_last_execution()}.")
# to force a fresh download of data, run "clear_cache()"
print(f"Notebook execution took: {datetime.datetime.now()-start}")
###Output
_____no_output_____
###Markdown
Germany: LK Harz (Sachsen-Anhalt)* Homepage of project: https://oscovida.github.io* Plots are explained at http://oscovida.github.io/plots.html* [Execute this Jupyter Notebook using myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Germany-Sachsen-Anhalt-LK-Harz.ipynb)
###Code
import datetime
import time
start = datetime.datetime.now()
print(f"Notebook executed on: {start.strftime('%d/%m/%Y %H:%M:%S%Z')} {time.tzname[time.daylight]}")
%config InlineBackend.figure_formats = ['svg']
from oscovida import *
overview(country="Germany", subregion="LK Harz", weeks=5);
overview(country="Germany", subregion="LK Harz");
compare_plot(country="Germany", subregion="LK Harz", dates="2020-03-15:");
# load the data
cases, deaths = germany_get_region(landkreis="LK Harz")
# compose into one table
table = compose_dataframe_summary(cases, deaths)
# show tables with up to 500 rows
pd.set_option("max_rows", 500)
# display the table
table
###Output
_____no_output_____
###Markdown
Explore the data in your web browser- If you want to execute this notebook, [click here to use myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Germany-Sachsen-Anhalt-LK-Harz.ipynb)- and wait (~1 to 2 minutes)- Then press SHIFT+RETURN to advance code cell to code cell- See http://jupyter.org for more details on how to use Jupyter Notebook Acknowledgements:- Johns Hopkins University provides data for countries- Robert Koch Institute provides data for within Germany- Atlo Team for gathering and providing data from Hungary (https://atlo.team/koronamonitor/)- Open source and scientific computing community for the data tools- Github for hosting repository and html files- Project Jupyter for the Notebook and binder service- The H2020 project Photon and Neutron Open Science Cloud ([PaNOSC](https://www.panosc.eu/))--------------------
###Code
print(f"Download of data from Johns Hopkins university: cases at {fetch_cases_last_execution()} and "
f"deaths at {fetch_deaths_last_execution()}.")
# to force a fresh download of data, run "clear_cache()"
print(f"Notebook execution took: {datetime.datetime.now()-start}")
###Output
_____no_output_____ |
Mathematics/Mathematical Modeling/08.01-Zero-Order-Hold-and-Interpolation.ipynb | ###Markdown
Zero-Order Hold & Interpolation Function
###Code
import numpy as np
def interp0(x, xp, yp):
"""Zeroth order hold interpolation w/ same
(base) signature as numpy.interp."""
def func(x0):
if x0 <= xp[0]:
return yp[0]
if x0 >= xp[-1]:
return yp[-1]
k = 0
while x0 > xp[k]:
k += 1
return yp[k-1]
if isinstance(x,float):
return func(x)
elif isinstance(x, list):
return [func(x) for x in x]
elif isinstance(x, np.ndarray):
return np.asarray([func(x) for x in x])
else:
raise TypeError('argument must be float, list, or ndarray')
###Output
_____no_output_____
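###Markdown
A quick sanity check of `interp0` (an added illustration, not part of the original notebook): for a query point between two samples, the zero-order hold returns the previous sample, while `numpy.interp` interpolates linearly between them.
###Code
# Hypothetical check values, not from the original notebook
xp_check = [0.0, 1.0, 2.0]
yp_check = [0.0, 10.0, 20.0]
print(interp0(0.5, xp_check, yp_check))   # zero-order hold -> 0.0
print(np.interp(0.5, xp_check, yp_check)) # linear interpolation -> 5.0
###Output
_____no_output_____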
###Markdown
Demonstration
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
# choose a function
f = np.sin
# sampled signal
xp = np.linspace(0,10,20)
yp = f(xp)
# interpolation grid with 'true' function
x = np.linspace(0,12,1000)
plt.plot(x,f(x),'--')
# plot
# plt.hold(True)
plt.scatter(xp,yp)
plt.plot(x,interp0(x,xp,yp),'r')
plt.xlim([x.min(),x.max()])
plt.title('Zero Order Hold/Interpolation')
plt.xlabel('Time')
plt.ylabel('Value')
plt.legend(['Signal','ZOH'])
###Output
_____no_output_____ |
week5/.ipynb_checkpoints/Support verctor machine-checkpoint.ipynb | ###Markdown
Data
###Code
# Input samples: columns are x, y, and a bias input (fixed at -1)
X = np.array([
[-2,4,-1],
[4,1,-1],
[1,6,-1],
[2,4,-1],
[6,2,-1],
])
# Associated output label
Y = np.array([-1,-1,1,1,1])
# Plot these examples on a 2D graph
# For each example
for d, sample in enumerate(X):
    # Plot the negative samples (the first 2)
    if d < 2:
        plt.scatter(sample[0], sample[1], s=120, marker="_", linewidths=2)
    # Plot the positive samples (the last 3)
    else:
        plt.scatter(sample[0], sample[1], s=120, marker="+", linewidths=2)
plt.plot([-2,6],[6,0.5])
def svn_sgd_plot(X, Y):
    # Initialize the SVM's weight vector with zeros
    w = np.zeros(len(X[0]))
    # Learning rate
    eta = 1
    # Number of epochs (full passes over the data) to train for
    epochs = 100000
    # Track whether any sample was misclassified in each epoch
    errors = []
    # Training: stochastic gradient descent on the regularized hinge loss
    for epoch in range(1, epochs):
        error = 0
        for i, x in enumerate(X):
            # Misclassified or inside the margin: the hinge loss is active
            if (Y[i]*np.dot(X[i], w)) < 1:
                # Update weights using the hinge-loss gradient plus the regularization term
                w = w + eta * ((X[i] * Y[i]) + (-2 * (1/epoch) * w))
                error = 1
            else:
                # Correctly classified outside the margin: apply only the regularization update
                w = w + eta * (-2 * (1/epoch) * w)
        errors.append(error)
    plt.plot(errors, '|')
    plt.ylim(0.5, 1.5)
    plt.gca().set_yticklabels([])
    plt.xlabel('Epoch')
    return w
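# Illustrative usage (an added sketch, not part of the original notebook):
# train on the toy data defined above and plot the per-epoch error history.
w = svn_sgd_plot(X, Y)
print("Learned weight vector:", w)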
###Output
_____no_output_____ |
section_4/01_partial.ipynb | ###Markdown
Partial Differentiation. In partial differentiation, a multivariable function is differentiated with respect to a single variable. In deep learning, it is used to determine the effect that a change in one parameter has on the result. Multivariable Functions. A function that has two or more variables is called a "multivariable function". Below are examples of multivariable functions: $$f(x, y)=3x+y$$$$f(x, y)=x^2+2xy+\frac{1}{y}$$$$f(x, y, z)=x^2+2xy+3y^2+4yz+5z^2$$ Multivariable functions are sometimes written using subscripted variables, as follows: $$f(X) = f(x_1,x_2,\cdots, x_i,\cdots, x_n)$$ The following code plots the multivariable function $$f(x, y)=x^4 + 2x^3y - 3x^2y - 2x + y -1$$ while varying both $x$ and $y$.
###Code
import numpy as np
import matplotlib.pyplot as plt
def my_func(x, y): # Function to plot
return x**4 + 2*x**3*y - 3*x**2*y - 2*x + y -1
ys = [-2, -1, 0 ,1, 2] # y values
xs = np.linspace(-1, 1.5) # x values
for y in ys:
f_xy = my_func(xs, y) # f(x, y)
plt.plot(xs, f_xy, label="y="+str(y), linestyle="dashed")
plt.xlabel("x", size=14)
plt.ylabel("f(x, y)", size=14)
plt.legend()
plt.grid()
plt.show()
###Output
_____no_output_____
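###Markdown
As a quick numeric check (an added illustration, not part of the original notebook), the function can also be evaluated at a single point: $f(1, 2) = 1 + 4 - 6 - 2 + 2 - 1 = -2$.
###Code
print(my_func(1, 2))  # expected: -2
###Output
_____no_output_____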
###Markdown
What is partial differentiation? Differentiating a function of several variables with respect to only one of those variables is called "partial differentiation". In a partial derivative, the other variables are treated as constants. For example, the partial derivative of a two-variable function $f(x,y)$ can be written as follows: $$ \frac{\partial}{\partial x}f(x,y) = \lim_{\Delta x \to 0}\frac{f(x+\Delta x,y)-f(x,y)}{\Delta x} $$ Only $x$ is changed by the small amount $\Delta x$, and $\Delta x$ is taken to the limit of 0. Since $y$ is not varied, it can be treated like a constant during the partial differentiation. An example of partial differentiation: consider the following function $f(x,y)$ of the variables $x$ and $y$: $$ f(x,y)=3x^2+4xy+5y^3 $$ We take its partial derivative. Treating $y$ as a constant and differentiating with respect to $x$ using the usual differentiation rules gives the expression below; for partial derivatives, the symbol $\partial$ is used instead of $d$. $$ \frac{\partial}{\partial x}f(x,y) = 6x+4y $$ A function obtained by partial differentiation in this way is called a "partial derivative". In this case, the partial derivative is the rate of change of $f(x,y)$ with respect to $x$ when the value of $y$ is held fixed. The partial derivative of $f(x,y)$ with respect to $y$ is shown below; in this case, $x$ is treated as a constant. $$ \frac{\partial}{\partial y}f(x,y) = 4x+15y^2 $$ This is the rate of change of $f(x,y)$ with respect to $y$ when the value of $x$ is held fixed. Using partial derivatives, we can predict the effect that a small change in a particular parameter has on the result. Drawing a tangent line using the partial derivative: let us fix $y$ in the multivariable function $f(x, y)$ and draw a tangent line. For $$f(x, y)=x^4 + 2x^3y - 3x^2y - 2x + y -1$$ we fix $y=2$ and draw the tangent line at $x=-0.5$. The partial derivative with respect to $x$ is: $$ \frac{\partial}{\partial x}f(x,y) = 4x^3+6x^2y-6xy-2 $$
###Code
import numpy as np
import matplotlib.pyplot as plt
def my_func(x, y): # Function to plot
return x**4 + 2*x**3*y - 3*x**2*y - 2*x + y -1
def my_func_dif(x, y): # Partial derivative with respect to x
return 4*x**3 + 6*x**2*y - 6*x*y -2
ys = [-2, -1, 0 ,1, 2] # y values
xs = np.linspace(-1, 1.5) # x values
for y in ys:
f_xy = my_func(xs, y) # f(x, y)
plt.plot(xs, f_xy, label="y="+str(y), linestyle="dashed")
a = -0.5 # x value at the point of tangency
b = 2 # fixed y value
f_xy_t = my_func_dif(a, b)*xs + my_func(a, b) - my_func_dif(a, b)*a # Equation of the tangent line
plt.plot(xs, f_xy_t, label="f_xy_t")
plt.xlabel("x", size=14)
plt.ylabel("f(x, y)", size=14)
plt.legend()
plt.grid()
plt.show()
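# (Added illustration, not part of the original notebook.)
# Symbolic check of the worked example above, assuming SymPy is available:
# for f(x, y) = 3x^2 + 4xy + 5y^3 the partial derivatives should be 6x + 4y and 4x + 15y^2.
import sympy as sp
x_s, y_s = sp.symbols('x y')
f_s = 3*x_s**2 + 4*x_s*y_s + 5*y_s**3
print(sp.diff(f_s, x_s))  # 6*x + 4*y
print(sp.diff(f_s, y_s))  # 4*x + 15*y**2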
###Output
_____no_output_____ |
extreme_response/fairfax/plot_recomputed_fairfax_pathways.ipynb | ###Markdown
Plot enriched pathways for overlapping recomputed Fairfax eQTLsThis code plots the top pathways in which genes associated with Neanderthal-introgressed recomputed Fairfax eQTLs were enriched. Monocytes in the Fairfax dataset were divided into 4 treatment groups: IFN, LPS 2h, LPS 24h and Naive.Neanderthal SNPs from:1. Dannemann M, Prufer K & Kelso J. Functional implications of Neandertal introgression in modern humans. *Genome Biol* 2017 **18**:61.2. Simonti CN *et al.* The phenotypic legacy of admixture between modern humans and Neandertals. *Science* 2016 **351**:737-41.Recomputed Fairfax *et al.* (2014) eQTLs from:* [EMBL-EBI eQTL Catalogue](https://www.ebi.ac.uk/eqtl/Data_access/)--- First, the list of genes and associated p-values for each condition was obtained from: `/well/jknight/shiyao/data/fairfax/EMBL_recomputed/genes_cleaned_*.txt`. This was input into the XGR R package to obtain pathway enrichment information, using the following code.```r Gene enrichment analysislibrary(XGR)library(RCircos)RData.location <- "http://galahad.well.ox.ac.uk/bigdata_dev/" Get enriched termsdd <- read.csv('genes_cleaned_*.txt', sep = '\t', where * refers to ifn/lps2/lps24/naive header = FALSE, check.names = TRUE) eTerm <- xEnricherGenes(data=as.character(dd$V1), ontology="MsigdbC2CPall", RData.location=RData.location) Visualise top 30 enriched pathwaysres <- xEnrichViewer(eTerm, top_num=30, sortBy="adjp",details=TRUE) Save enrichment results to the file called 'enrichment.txt'output <- data.frame(term=rownames(res), res)utils::write.table(output, file="enrichment.txt", sep="\t",row.names=FALSE)```--- Next, enriched pathways were plotted in Python using the enrichment.txt files.
###Code
# Import modules
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import textwrap
import numpy as np
sns.set()
###Output
_____no_output_____
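###Markdown
The three per-condition cells below repeat the same load/transform/plot steps. As an added sketch (not part of the original notebook), the pattern could be wrapped in a small helper; the `*_enrichment.txt` file names are the same ones used below.
###Code
def plot_enrichment(path, top_n=15, wrap_width=30):
    # Load the XGR enrichment table and derive -log10(FDR) from the adjusted p-values
    enrich = pd.read_csv(path, sep='\t')
    enrich['-log10(FDR)'] = -np.log10(enrich['adjp'])
    enrich = enrich.sort_values('-log10(FDR)', ascending=False)
    # Plot the top pathways as a horizontal bar chart with wrapped pathway names
    to_plot = enrich.iloc[0:top_n]
    ax = sns.barplot(x='-log10(FDR)', y='name', data=to_plot, dodge=False)
    ax.set(xlabel='-log10(FDR)', ylabel='')
    ax.set_yticklabels([textwrap.fill(y.get_text(), wrap_width) for y in ax.get_yticklabels()])
    plt.show()

# Example: plot_enrichment('ifn_enrichment.txt')
###Output
_____no_output_____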
###Markdown
IFN
###Code
enrich = pd.read_csv('ifn_enrichment.txt', sep='\t')
enrich['-log10(FDR)'] = -np.log10(enrich['adjp'])
enrich = enrich.sort_values("-log10(FDR)", ascending=False)
to_plot = enrich.iloc[0:15]
ax = sns.barplot(x="-log10(FDR)", y="name", data=to_plot, dodge=False)
ax.set(xlabel='-log10(FDR)', ylabel='')
ax.set_yticklabels((textwrap.fill(y.get_text(), 30) for y in ax.get_yticklabels()))
plt.show()
###Output
_____no_output_____
###Markdown
Naive
###Code
enrich = pd.read_csv('naive_enrichment.txt', sep='\t')
enrich['-log10(FDR)'] = -np.log10(enrich['adjp'])
enrich = enrich.sort_values("-log10(FDR)", ascending=False)
to_plot = enrich.iloc[0:15]
ax = sns.barplot(x="-log10(FDR)", y="name", data=to_plot, dodge=False)
ax.set(xlabel='-log10(FDR)', ylabel='')
plt.show()
###Output
_____no_output_____
###Markdown
LPS24
###Code
enrich = pd.read_csv('lps24_enrichment.txt', sep='\t')
enrich['-log10(FDR)'] = -np.log10(enrich['adjp'])
enrich = enrich.sort_values("-log10(FDR)", ascending=False)
to_plot = enrich.iloc[0:15]
ax = sns.barplot(x="-log10(FDR)", y="name", data=to_plot, dodge=False)
ax.set(xlabel='-log10(FDR)', ylabel='')
ax.set_yticklabels((textwrap.fill(y.get_text(), 30) for y in ax.get_yticklabels()))
plt.show()
###Output
_____no_output_____ |