Unnamed: 0 (int64, 0-16k) | text_prompt (stringlengths 110-62.1k) | code_prompt (stringlengths 37-152k) |
---|---|---|
9,400 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<img src="images/csdms_logo.jpg">
Example 1
Read and explore the output from Example 1 -- a vector parameter study that evaluates an objective function over the Rosenbrock function.
Use pylab magic
Step1: Read the Dakota tabular data file.
Step2: Plot the path taken in the vector parameter study.
Step3: Plot the values of the Rosenbrock function at the study locations.
Step4: What's the minimum value of the function over the study locations? | Python Code:
%pylab inline
Explanation: <img src="images/csdms_logo.jpg">
Example 1
Read and explore the output from Example 1 -- a vector parameter study that evaluates an objective function over the Rosenbrock function.
Use pylab magic:
End of explanation
dat_file = '../examples/1-rosenbrock/dakota.dat'
data = numpy.loadtxt(dat_file, skiprows=1, unpack=True, usecols=[0,2,3,4])
data
Explanation: Read the Dakota tabular data file.
End of explanation
plot(data[1,], data[2,], 'ro')
xlim((-2, 2))
ylim((-2, 2))
xlabel('$x_1$')
ylabel('$x_2$')
title('Planview of parameter study locations')
Explanation: Plot the path taken in the vector parameter study.
End of explanation
plot(data[-1,], 'bo')
xlabel('index')
ylabel('Rosenbrock function value')
title('Rosenbrock function values at study locations')
Explanation: Plot the values of the Rosenbrock function at the study locations.
End of explanation
min(data[-1,:])
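# A hedged follow-up: report where the minimum occurs along the study path
# (this assumes the columns loaded above are eval_id, x1, x2, f, matching the plots).
imin = argmin(data[-1, :])
print('minimum f = {:.4f} at (x1, x2) = ({:.3f}, {:.3f})'.format(data[-1, imin], data[1, imin], data[2, imin]))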
Explanation: What's the minimum value of the function over the study locations?
End of explanation |
9,401 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
pyschedule - resource-constrained scheduling in python
pyschedule is the easiest way to match tasks with resources. Do you need to plan a conference or schedule your employees and there are a lot of requirements to satisfy, like availability of rooms or maximal allowed working times? Then pyschedule might be for you. Install it with pip
Step1: Here is a hello world example, you can also find this document as a <a href="https
Step2: In this example we use a makespan objective, which means that we want to minimize the completion time of the last task. Hence, Bob should do the cooking from 0 to 1 and then do the washing from 1 to 3, whereas Alice will only do the cleaning from 0 to 3. This will ensure that both are done after three hours. This table representation is a little hard to read, so we can visualize the plan using matplotlib
Step3: pyschedule supports different solvers, classical <a href="https | Python Code:
!pip install pyschedule
Explanation: pyschedule - resource-constrained scheduling in python
pyschedule is the easiest way to match tasks with resources. Do you need to plan a conference or schedule your employees and there are a lot of requirements to satisfy, like availability of rooms or maximal allowed working times? Then pyschedule might be for you. Install it with pip:
End of explanation
# Load pyschedule and create a scenario with ten steps planning horizon
from pyschedule import Scenario, solvers, plotters
S = Scenario('hello_pyschedule',horizon=10)
# Create two resources
Alice, Bob = S.Resource('Alice'), S.Resource('Bob')
# Create three tasks with lengths 1,2 and 3
cook, wash, clean = S.Task('cook',1), S.Task('wash',2), S.Task('clean',3)
# Assign tasks to resources, either Alice or Bob,
# the += operator adds the requirement to a task and | marks alternative resources
cook += Alice|Bob
wash += Alice|Bob
clean += Alice|Bob
# Solve and print solution
S.use_makespan_objective()
solvers.mip.solve(S,msg=1)
# Print the solution
print(S.solution())
Explanation: Here is a hello world example, you can also find this document as a <a href="https://github.com/timnon/pyschedule-notebooks/blob/master/README.ipynb">notebook</a>. There are more example notebooks <a href="https://github.com/timnon/pyschedule-notebooks/">here</a> and simpler examples in the <a href="https://github.com/timnon/pyschedule/tree/master/examples">examples folder</a>. For a technical overview go to <a href="https://github.com/timnon/pyschedule/blob/master/docs/pyschedule-overview.md">here</a>.
End of explanation
%matplotlib inline
plotters.matplotlib.plot(S,fig_size=(10,5))
Explanation: In this example we use a makespan objective, which means that we want to minimize the completion time of the last task. Hence, Bob should do the cooking from 0 to 1 and then do the washing from 1 to 3, whereas Alice will only do the cleaning from 0 to 3. This will ensure that both are done after three hours. This table representation is a little hard to read, so we can visualize the plan using matplotlib:
End of explanation
solvers.mip.solve(S,kind='SCIP')
Explanation: pyschedule supports different solvers, classical <a href="https://en.wikipedia.org/wiki/Integer_programming">MIP</a>- as well as <a href="https://en.wikipedia.org/wiki/Constraint_programming">CP</a>-based ones. All solvers and their capabilities are listed in the <a href="https://github.com/timnon/pyschedule/blob/master/docs/pyschedule-overview.md">overview notebook</a>. The default solver used above uses a standard MIP-model in combination with <a href="https://projects.coin-or.org/Cbc">CBC</a>, which is part of package <a href="https://pypi.python.org/pypi/PuLP">pulp</a>. If you have <a href="http://scip.zib.de/">SCIP</a> installed (command "scip" must be running), you can easily switch to SCIP using:
End of explanation |
9,402 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
APIs and Scraping
Overview of today's topic
Step1: An API is an application programming interface. It provides a structured way to send commands or requests to a piece of software. "API" often refers to a web service API. This is like a web site (but designed for applications, rather than humans, to use) that you can send requests to in order to execute commands or query for data. Today, REST APIs are the most common. To use them, you simply send them a request, and they reply with a response, similar to how a web browser works. The request is sent to an endpoint (a URL), typically with a set of parameters to provide the details of your query or command.
In the example below, we make a request to the ipify API and request a JSON formatted response. Then we look up the location of the IP address it returned, using the ip-api API.
Step2: What's the current weather? Use the National Weather Service API.
Step3: You can use any web service's API in this same basic way
Step4: Instead of lat-lng coordinates, we can also geocode place names to their place's boundaries with OSMnx. This essentially looks up the place in OpenStreetMap's database (note
Step5: Use OSMnx to query for geospatial entities within USC's boundary polygon. You can specify what kind of entities to retrieve by using a tags dictionary. In a couple weeks we'll see how to model street networks within a place's boundary.
Step6: 1b
Step7: We will use the Google Maps geocoding API. Their geocoder is very powerful, but you do have to pay for it beyond a certain threshold of free usage.
Documentation
Step8: 2. Google Places API
We will use Google's Places API to look up places in the vicinity of some location.
Documentation
Step9: 3. Reverse geocoding
Reverse geocoding, as you might expect from its name, does the opposite of regular geocoding
Step10: What if you just want the city or state?
You could try to parse the address strings, but you're relying on them always having a consistent format. This might not be the case if you have international location data. In this case, you should call the API manually and extract the individual address components you are interested in.
Step11: Now look inside each reverse geocode result to see if address_components exists. If it does, look inside each component to see if we can find the city or the state. Google calls the city name by the abstract term 'locality' and the state name by the abstract term 'administrative_area_level_1' ...this lets them use consistent terminology anywhere in the world.
Step12: 4. Web Scraping
If you need data from a web page that doesn't offer an API, you can scrape it. Note that many web sites prohibit scraping in their terms of use, so proceed respectfully and cautiously. Web scraping means downloading a web page, parsing individual data out of its HTML, and converting those data into a structured dataset.
For straightforward web scraping tasks, you can use the powerful BeautifulSoup package. However, some web pages load content dynamically using JavaScript. For such complex web scraping tasks, consider using the Selenium browser automation package.
In this example, we'll scrape https
Step13: Web scraping is really hard! It takes lots of practice. If you want to use it, read the BeautifulSoup and Selenium documentation carefully, and then practice, practice, practice. You'll be an expert before long.
5. Data Portals
Many governments and agencies now open up their data to the public through a data portal. These often offer APIs to query them for real-time data. This example uses the LA Open Data Portal... browse the portal for public datasets
Step14: We have parking space ID, occupancy status, and reporting time. But we don't know where these spaces are! Fortunately the LA GeoHub has sensor location data
Step15: That's impossible to see! At this scale, all the vacant spots are obscured by occupied spots next to them. It would be much better if we had an interactive map. We'll use folium more in coming weeks to create interactive web maps, but here's a preview. | Python Code:
import geopandas as gpd
import folium
import osmnx as ox
import pandas as pd
import re
import requests
import time
from bs4 import BeautifulSoup
from geopy.geocoders import GoogleV3
from keys import google_api_key
# define a pause duration between API requests
pause = 0.1
Explanation: APIs and Scraping
Overview of today's topic:
What are APIs and how do you work with them?
Geocoding place names and addresses
Reverse-geocoding coordinates
Looking up places near some location
Web scraping when no API is provided
Using data portals programmatically
To follow along with this lecture, you need a working Google API key to use the Google Maps Geocoding API and the Google Places API Web Service. These APIs require you to set up billing info, but we won't use them in class beyond the free threshold.
End of explanation
# what is your current public IP address?
url = 'https://api.ipify.org?format=json'
data = requests.get(url).json()
data
# and what is the location of that IP address?
url = 'http://ip-api.com/json/{}'.format(data['ip'])
requests.get(url).json()
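# The query parameters can equivalently be passed as a dict; requests URL-encodes
# them for us. A minimal sketch repeating the ipify call above in that style:
requests.get('https://api.ipify.org', params={'format': 'json'}).json()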
Explanation: An API is an application programming interface. It provides a structured way to send commands or requests to a piece of software. "API" often refers to a web service API. This is like a web site (but designed for applications, rather than humans, to use) that you can send requests to in order to execute commands or query for data. Today, REST APIs are the most common. To use them, you simply send them a request, and they reply with a response, similar to how a web browser works. The request is sent to an endpoint (a URL), typically with a set of parameters to provide the details of your query or command.
In the example below, we make a request to the ipify API and request a JSON formatted response. Then we look up the location of the IP address it returned, using the ip-api API.
End of explanation
# query for the forecast url for a pair of lat-lng coords
location = '34.019268,-118.283554'
url = 'https://api.weather.gov/points/{}'.format(location)
data = requests.get(url).json()
# extract the forecast url and retrieve it
forecast_url = data['properties']['forecast']
forecast = requests.get(forecast_url).json()
# convert the forecast to a dataframe
pd.DataFrame(forecast['properties']['periods']).head()
Explanation: What's the current weather? Use the National Weather Service API.
End of explanation
# geocode a place name to lat-lng
place = 'University of Southern California'
latlng = ox.geocode(place)
latlng
# geocode a series of place names to lat-lng
places = pd.Series(['San Diego, California',
'Los Angeles, California',
'San Francisco, California',
'Seattle, Washington',
'Vancouver, British Columbia'])
coords = places.map(ox.geocode)
# parse out lats and lngs to individual columns in a dataframe
pd.DataFrame({'place': places,
'lat': coords.map(lambda x: x[0]),
'lng': coords.map(lambda x: x[1])})
Explanation: You can use any web service's API in this same basic way: request the URL with some parameters. Read the API's documentation to know how to use it and what to send. You can also use many web service's through a Python package to make complex services easier to work with. For example, there's a fantastic package called cenpy that makes downloading and working with US census data super easy.
1. Geocoding
"Geocoding" means converting a text description of some place (such as the place's name or its address) into geographic coordinates identifying the place's location on Earth. These geographic coordinates may take the form of a single latitude-longitude coordinate pair, or a bounding box, or a boundary polygon, etc.
1a. Geocoding place names with OpenStreetMap via OSMnx
OpenStreetMap is a worldwide mapping platform that anyone can contribute to. OSMnx is a Python package to work with OpenStreetMap for geocoding, downloading geospatial data, and modeling/analyzing networks. OpenStreetMap and OSMnx are free to use and do not require an API key. We'll work with OSMnx more in a couple weeks.
End of explanation
# geocode a list of place names to a GeoDataFrame
# by default, OSMnx retrieves the first [multi]polygon object
# specify which_result=1 to retrieve the top match, regardless of geometry type
gdf_places = ox.geocode_to_gdf(places.to_list(), which_result=1)
gdf_places
# geocode a single place name to a GeoDataFrame
gdf = ox.geocode_to_gdf(place)
gdf
# extract the value from row 0's geometry column
polygon = gdf['geometry'].iloc[0]
polygon
Explanation: Instead of lat-lng coordinates, we can also geocode place names to their place's boundaries with OSMnx. This essentially looks up the place in OpenStreetMap's database (note: that means the place has to exist in its database!) and then returns its details, including geometry and bounding box, as a GeoPandas GeoDataFrame. We'll review GeoDataFrames next week.
End of explanation
# get all the buildings within that polygon
tags = {'building': True}
gdf_bldg = ox.geometries_from_polygon(polygon, tags)
gdf_bldg.shape
# plot the building footprints
fig, ax = ox.plot_footprints(gdf_bldg)
# now it's your turn
# get all the building footprints within santa monica
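# One possible sketch for the exercise above, reusing the same two OSMnx calls
# (the place query string is an assumption; adjust it as needed):
sm_polygon = ox.geocode_to_gdf('Santa Monica, California, USA')['geometry'].iloc[0]
gdf_sm_bldg = ox.geometries_from_polygon(sm_polygon, {'building': True})
fig, ax = ox.plot_footprints(gdf_sm_bldg)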
Explanation: Use OSMnx to query for geospatial entities within USC's boundary polygon. You can specify what kind of entities to retrieve by using a tags dictionary. In a couple weeks we'll see how to model street networks within a place's boundary.
End of explanation
# geocode an address to lat-lng
address = '704 S Alvarado St, Los Angeles, California'
latlng = ox.geocode(address)
latlng
Explanation: 1b: Geocoding addresses to lat-lng
You can geocode addresses as well with OpenStreetMap, but it can be a little hit-or-miss compared to the data coverage of commercial closed-source services.
End of explanation
locations = pd.DataFrame(['704 S Alvarado St, Los Angeles, CA',
'100 Larkin St, San Francisco, CA',
'350 5th Ave, New York, NY'], columns=['address'])
locations
# function accepts an address string, sends it to Google API, returns lat-lng result
def geocode(address, print_url=False):
# pause for some duration before each request, to not hammer their server
time.sleep(pause)
# api url with placeholders to fill in with variables' values
url_template = 'https://maps.googleapis.com/maps/api/geocode/json?address={}&key={}'
url = url_template.format(address, google_api_key)
if print_url: print(url)
# send request to server, get response, and convert json string to dict
data = requests.get(url).json()
# if results were returned, extract lat-lng from top result
if len(data['results']) > 0:
lat = data['results'][0]['geometry']['location']['lat']
lng = data['results'][0]['geometry']['location']['lng']
# return lat-lng as a string
return '{},{}'.format(lat, lng)
# test the function
geocode('350 5th Ave, New York, NY')
# for each value in the address column, geocode it, save results as new column
locations['latlng'] = locations['address'].map(geocode)
locations
# parse the result into separate lat and lng columns, if desired
locations[['lat', 'lng']] = pd.DataFrame(data=locations['latlng'].str.split(',').to_list())
locations
# now it's your turn
# create a new pandas series of 3 addresses and use our function to geocode them
# then create a new pandas series of 3 famous site names and use our function to geocode them
# create new variables to contain your work so as to not overwrite the locations df
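# One possible sketch for the first part of this exercise (the addresses are
# arbitrary examples, not from the original notebook):
my_addresses = pd.Series(['200 N Spring St, Los Angeles, CA',
                          '1200 Getty Center Dr, Los Angeles, CA',
                          '111 S Grand Ave, Los Angeles, CA'])
my_latlngs = my_addresses.map(geocode)
my_latlngs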
Explanation: We will use the Google Maps geocoding API. Their geocoder is very powerful, but you do have to pay for it beyond a certain threshold of free usage.
Documentation: https://developers.google.com/maps/documentation/geocoding/start
End of explanation
# google places API URL, with placeholders
url_template = 'https://maps.googleapis.com/maps/api/place/search/json?keyword={}&location={}&radius={}&key={}'
# what keyword to search for
keyword = 'restaurant'
# define the radius (in meters) for the search
radius = 500
# define the location coordinates
location = '34.019268,-118.283554'
# add our variables into the url, submit the request to the api, and load the response
url = url_template.format(keyword, location, radius, google_api_key)
response = requests.get(url)
data = response.json()
# how many results did we get?
len(data['results'])
# inspect a result
data['results'][0]
# turn the results into a dataframe of places
places = pd.DataFrame(data=data['results'],
columns=['name', 'geometry', 'rating', 'vicinity'])
places.head()
# parse out lat-long and return it as a series
# this creates a dataframe of all the results when you .apply()
def parse_coords(geometry):
if isinstance(geometry, dict):
lng = geometry['location']['lng']
lat = geometry['location']['lat']
return pd.Series({'lat':lat, 'lng':lng})
# test our function
places['geometry'].head().apply(parse_coords)
# now run our function on the whole dataframe and save the output to 2 new dataframe columns
places[['lat', 'lng']] = places['geometry'].apply(parse_coords)
places_clean = places.drop('geometry', axis='columns')
# sort the places by rating
places_clean = places_clean.sort_values(by='rating', ascending=False)
places_clean.head(10)
# now it's your turn
# find the five highest-rated bars within 1/2 mile of pershing square
# create new variables to contain your work so as to not overwrite places and places_clean
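# One possible sketch for this exercise (the Pershing Square coordinates are
# approximate, 805 m is roughly 1/2 mile, and the variable names are assumptions):
bars_url = url_template.format('bar', '34.0482,-118.2531', 805, google_api_key)
bars = pd.DataFrame(data=requests.get(bars_url).json()['results'],
                    columns=['name', 'geometry', 'rating', 'vicinity'])
bars.sort_values(by='rating', ascending=False).head(5)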
Explanation: 2. Google Places API
We will use Google's Places API to look up places in the vicinity of some location.
Documentation: https://developers.google.com/places/web-service/intro
End of explanation
# we'll use the points from the Places API, but you could use any point data here
points = places_clean[['lat', 'lng']].head()
points
# create a column to put lat-lng into the format google likes
points['latlng'] = points.apply(lambda row: '{},{}'.format(row['lat'], row['lng']), axis='columns')
points.head()
# tell geopy to reverse geocode using Google's API and return address
def reverse_geopy(latlng):
time.sleep(pause)
geocoder = GoogleV3(api_key=google_api_key)
address, _ = geocoder.reverse(latlng, exactly_one=True)
return address
# now reverse-geocode the points to addresses
points['address'] = points['latlng'].map(reverse_geopy)
points.head()
Explanation: 3. Reverse geocoding
Reverse geocoding, as you might expect from its name, does the opposite of regular geocoding: it takes a pair of coordinates on the Earth's surface and looks up what address or place corresponds to that location.
We'll use Google's reverse geocoding API. Documentation: https://developers.google.com/maps/documentation/geocoding/intro#ReverseGeocoding
As we saw with OSMnx, you often don't have to query the API yourself manually: many popular APIs have dedicated Python packages to work with them. You can do this manually, just like in the previous Google examples, but it's a little more complicated to parse Google's address component results. If we just want addresses, we can use geopy to simply interact with Google's API automatically for us.
End of explanation
# pass the Google API latlng data to reverse geocode it
def reverse_geocode(latlng):
time.sleep(pause)
url_template = 'https://maps.googleapis.com/maps/api/geocode/json?latlng={}&key={}'
url = url_template.format(latlng, google_api_key)
response = requests.get(url)
data = response.json()
if len(data['results']) > 0:
return data['results'][0]
geocode_results = points['latlng'].map(reverse_geocode)
geocode_results.iloc[0]
Explanation: What if you just want the city or state?
You could try to parse the address strings, but you're relying on them always having a consistent format. This might not be the case if you have international location data. In this case, you should call the API manually and extract the individual address components you are interested in.
End of explanation
def get_city(geocode_result):
if 'address_components' in geocode_result:
for address_component in geocode_result['address_components']:
if 'locality' in address_component['types']:
return address_component['long_name']
def get_state(geocode_result):
if 'address_components' in geocode_result:
for address_component in geocode_result['address_components']:
if 'administrative_area_level_1' in address_component['types']:
return address_component['long_name']
# now map our functions to extract city and state names
points['city'] = geocode_results.map(get_city)
points['state'] = geocode_results.map(get_state)
points.head()
# now it's your turn
# write a new function get_neighborhood() to parse the neighborhood name and add it to the points df
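# One possible sketch, assuming Google tags neighborhoods with the
# 'neighborhood' component type (mirroring get_city/get_state above):
def get_neighborhood(geocode_result):
    if 'address_components' in geocode_result:
        for address_component in geocode_result['address_components']:
            if 'neighborhood' in address_component['types']:
                return address_component['long_name']

points['neighborhood'] = geocode_results.map(get_neighborhood)
points.head()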
Explanation: Now look inside each reverse geocode result to see if address_components exists. If it does, look inside each component to see if we can find the city or the state. Google calls the city name by the abstract term 'locality' and the state name by the abstract term 'administrative_area_level_1' ...this lets them use consistent terminology anywhere in the world.
End of explanation
url = 'https://en.wikipedia.org/wiki/List_of_National_Basketball_Association_arenas'
response = requests.get(url)
html = response.text
# look at the html string
html[5000:7000]
# parse the html
soup = BeautifulSoup(html, features='html.parser')
#soup
rows = soup.find('tbody').findAll('tr')
#rows
#rows[1]
data = []
for row in rows[1:]:
cells = row.findAll('td')
d = [cell.text.strip('\n') for cell in cells[1:-1]]
data.append(d)
cols = ['arena', 'city', 'team', 'capacity', 'opened']
df = pd.DataFrame(data=data, columns=cols).dropna()
df
# strip out all the wikipedia notes in square brackets
df = df.applymap(lambda x: re.sub(r'\[.*?\]', '', x))
df
# convert capacity and opened to integer
df['capacity'] = df['capacity'].str.replace(',', '')
df[['capacity', 'opened']] = df[['capacity', 'opened']].astype(int)
df.sort_values('capacity', ascending=False)
Explanation: 4. Web Scraping
If you need data from a web page that doesn't offer an API, you can scrape it. Note that many web sites prohibit scraping in their terms of use, so proceed respectfully and cautiously. Web scraping means downloading a web page, parsing individual data out of its HTML, and converting those data into a structured dataset.
For straightforward web scraping tasks, you can use the powerful BeautifulSoup package. However, some web pages load content dynamically using JavaScript. For such complex web scraping tasks, consider using the Selenium browser automation package.
In this example, we'll scrape https://en.wikipedia.org/wiki/List_of_National_Basketball_Association_arenas
End of explanation
# define API endpoint
url = 'https://data.lacity.org/resource/e7h6-4a3e.json'
# request the URL and download its response
response = requests.get(url)
# parse the json string into a Python dict
data = response.json()
len(data)
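# The portal exposes a Socrata (SODA) endpoint, which caps responses at a default
# row limit; a hedged sketch requesting more rows via the $limit parameter:
more_data = requests.get(url, params={'$limit': 5000}).json()
len(more_data)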
# turn the json data into a dataframe
df = pd.DataFrame(data)
df.shape
df.columns
df.head()
Explanation: Web scraping is really hard! It takes lots of practice. If you want to use it, read the BeautifulSoup and Selenium documentation carefully, and then practice, practice, practice. You'll be an expert before long.
5. Data Portals
Many governments and agencies now open up their data to the public through a data portal. These often offer APIs to query them for real-time data. This example uses the LA Open Data Portal... browse the portal for public datasets: https://data.lacity.org/browse
Let's look at parking meter data for those that have sensors telling us if they're currently occupied or vacant: https://data.lacity.org/A-Livable-and-Sustainable-City/LADOT-Parking-Meter-Occupancy/e7h6-4a3e
End of explanation
# define API endpoint
url = 'https://opendata.arcgis.com/datasets/723c00530ea441deaa35f25e53d098a8_16.geojson'
# request the URL and download its response
response = requests.get(url)
# parse the json string into a Python dict
data = response.json()
len(data['features'])
# turn the geojson data into a geodataframe
gdf = gpd.GeoDataFrame.from_features(data)
gdf.shape
# what columns are in our data?
gdf.columns
gdf.head()
# now merge sensor locations with current occupancy status
parking = pd.merge(left=gdf, right=df, left_on='SENSOR_UNIQUE_ID', right_on='spaceid', how='inner')
parking.shape
parking = parking[['occupancystate', 'geometry', 'ADDRESS_SPACE']]
# extract lat and lon from geometry column
parking['lon'] = parking['geometry'].x
parking['lat'] = parking['geometry'].y
parking
# how many vacant vs occupied spots are there right now?
parking['occupancystate'].value_counts()
# map it
vacant = parking[parking['occupancystate'] == 'VACANT']
ax = vacant.plot(c='b', markersize=1, alpha=0.5)
occupied = parking[parking['occupancystate'] == 'OCCUPIED']
ax = occupied.plot(ax=ax, c='r', markersize=1, alpha=0.5)
Explanation: We have parking space ID, occupancy status, and reporting time. But we don't know where these spaces are! Fortunately the LA GeoHub has sensor location data: http://geohub.lacity.org/datasets/parking-meter-sensors/data
End of explanation
# create leaflet web map centered/zoomed to downtown LA
m = folium.Map(location=(34.05, -118.25), zoom_start=15, tiles='cartodbpositron')
# add blue markers for each vacant spot
cols = ['lat', 'lon', 'ADDRESS_SPACE']
for lat, lng, address in vacant[cols].values:
folium.CircleMarker(location=(lat, lng), radius=5, color='#3186cc',
fill=True, fill_color='#3186cc', tooltip=address).add_to(m)
# add red markers for each occupied spot
for lat, lng, address in occupied[cols].values:
folium.CircleMarker(location=(lat, lng), radius=5, color='#dc143c',
fill=True, fill_color='#dc143c', tooltip=address).add_to(m)
# now view the web map we created
m
Explanation: That's impossible to see! At this scale, all the vacant spots are obscured by occupied spots next to them. It would be much better if we had an interactive map. We'll use folium more in coming weeks to create interactive web maps, but here's a preview.
End of explanation |
9,403 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using custom containers with AI Platform Training
Learning Objectives
Step1: Run the command in the cell below to install gcsfs package.
Step2: Prepare lab dataset
Set environment variables so that we can use them throughout the entire lab.
The pipeline ingests data from BigQuery. The cell below uploads the Covertype dataset to BigQuery.
Step3: Next, create the BigQuery dataset and upload the Covertype csv data into a table.
Step4: Configure environment settings
Set location paths, connections strings, and other environment settings. Make sure to update REGION, and ARTIFACT_STORE with the settings reflecting your lab environment.
REGION - the compute region for AI Platform Training and Prediction
ARTIFACT_STORE - the Cloud Storage bucket created during installation of AI Platform Pipelines. The bucket name starts with the qwiklabs-gcp-xx-xxxxxxx-kubeflowpipelines-default prefix.
Run gsutil ls without URLs to list all of the Cloud Storage buckets under your default project ID.
Step5: HINT
Step6: Explore the Covertype dataset
Run the query statement below to scan covertype_dataset.covertype table in BigQuery and return the computed result rows.
Step7: Create training and validation splits
Use BigQuery to sample training and validation splits and save them to Cloud Storage.
Create a training split
Run the query below in order to have repeatable sampling of the data in BigQuery. Note that FARM_FINGERPRINT() is used on the field that you are going to use to split your data. It creates a training split that takes the rows in MOD buckets 1 through 4 (about 40% of the data) using the bq command and exports this split into the BigQuery table covertype_dataset.training.
Step8: Use the bq extract command to export the BigQuery training table to GCS at $TRAINING_FILE_PATH.
Step9: Create a validation split
Exercise
In the first cell below, create
a validation split that takes 10% of the data using the bq command and
export this split into the BigQuery table covertype_dataset.validation.
In the second cell, use the bq command to export that BigQuery validation table to GCS at $VALIDATION_FILE_PATH.
<ql-infobox><b>NOTE
Step10: Develop a training application
Configure the sklearn training pipeline.
The training pipeline preprocesses data by standardizing all numeric features using sklearn.preprocessing.StandardScaler and encoding all categorical features using sklearn.preprocessing.OneHotEncoder. It uses stochastic gradient descent linear classifier (SGDClassifier) for modeling.
Step11: Convert all numeric features to float64
To avoid warning messages from StandardScaler all numeric features are converted to float64.
Step12: Run the pipeline locally.
Step13: Calculate the trained model's accuracy.
Step14: Prepare the hyperparameter tuning application.
Since the training run on this dataset is computationally expensive you can benefit from running a distributed hyperparameter tuning job on AI Platform Training.
Step15: Write the tuning script.
Notice the use of the hypertune package to report the accuracy optimization metric to AI Platform hyperparameter tuning service.
Exercise
Complete the code below to capture the metric that the hyperparameter tuning engine will use to optimize
the hyperparameter.
<ql-infobox><b>NOTE
Step16: Package the script into a docker image.
Notice that we are installing specific versions of scikit-learn and pandas in the training image. This is done to make sure that the training runtime is aligned with the serving runtime. Later in the notebook you will deploy the model to AI Platform Prediction, using the 1.15 version of AI Platform Prediction runtime.
Make sure to update the URI for the base image so that it points to your project's Container Registry.
Exercise
Complete the Dockerfile below so that it copies the 'train.py' file into the container
at /app and runs it when the container is started.
<ql-infobox><b>NOTE
Step17: Build the docker image.
You use Cloud Build to build the image and push it to your project's Container Registry. As you use the remote cloud service to build the image, you don't need a local installation of Docker.
Step18: Submit an AI Platform hyperparameter tuning job
Create the hyperparameter configuration file.
Recall that the training code uses SGDClassifier. The training application has been designed to accept two hyperparameters that control SGDClassifier
Step19: Start the hyperparameter tuning job.
Exercise
Use the gcloud command to start the hyperparameter tuning job.
<ql-infobox><b>NOTE
Step20: Monitor the job.
You can monitor the job using Google Cloud console or from within the notebook using gcloud commands.
Step21: NOTE
Step22: The returned run results are sorted by a value of the optimization metric. The best run is the first item on the returned list.
Step23: Retrain the model with the best hyperparameters
You can now retrain the model using the best hyperparameters and using combined training and validation splits as a training dataset.
Configure and run the training job
Step24: NOTE
Step25: Deploy the model to AI Platform Prediction
Create a model resource
Exercise
Complete the gcloud command below to create a model with
model_name in $REGION tagged with labels
Step26: Create a model version
Exercise
Complete the gcloud command below to create a version of the model
Step27: Serve predictions
Prepare the input file with JSON formated instances.
Step28: Invoke the model
Exercise
Using the gcloud command send the data in $input_file to
your model deployed as a REST API | Python Code:
import json
import os
import numpy as np
import pandas as pd
import pickle
import uuid
import time
import tempfile
from googleapiclient import discovery
from googleapiclient import errors
from google.cloud import bigquery
from jinja2 import Template
from kfp.components import func_to_container_op
from typing import NamedTuple
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.compose import ColumnTransformer
Explanation: Using custom containers with AI Platform Training
Learning Objectives:
1. Learn how to create a train and a validation split with BigQuery
1. Learn how to wrap a machine learning model into a Docker container and train in on AI Platform
1. Learn how to use the hyperparameter tunning engine on Google Cloud to find the best hyperparameters
1. Learn how to deploy a trained machine learning model Google Cloud as a rest API and query it
In this lab, you develop a multi-class classification model, package the model as a docker image, and run on AI Platform Training as a training application. The training application trains a multi-class classification model that predicts the type of forest cover from cartographic data. The dataset used in the lab is based on Covertype Data Set from UCI Machine Learning Repository.
Scikit-learn is one of the most useful libraries for machine learning in Python. The training code uses scikit-learn for data pre-processing and modeling.
The code is instrumented using the hypertune package so it can be used with AI Platform hyperparameter tuning job in searching for the best combination of hyperparameter values by optimizing the metrics you specified.
End of explanation
%pip install gcsfs==0.8
Explanation: Run the command in the cell below to install gcsfs package.
End of explanation
PROJECT_ID=!(gcloud config get-value core/project)
PROJECT_ID=PROJECT_ID[0]
DATASET_ID='covertype_dataset'
DATASET_LOCATION='US'
TABLE_ID='covertype'
DATA_SOURCE='gs://workshop-datasets/covertype/small/dataset.csv'
SCHEMA='Elevation:INTEGER,Aspect:INTEGER,Slope:INTEGER,Horizontal_Distance_To_Hydrology:INTEGER,Vertical_Distance_To_Hydrology:INTEGER,Horizontal_Distance_To_Roadways:INTEGER,Hillshade_9am:INTEGER,Hillshade_Noon:INTEGER,Hillshade_3pm:INTEGER,Horizontal_Distance_To_Fire_Points:INTEGER,Wilderness_Area:STRING,Soil_Type:STRING,Cover_Type:INTEGER'
Explanation: Prepare lab dataset
Set environment variables so that we can use them throughout the entire lab.
The pipeline ingests data from BigQuery. The cell below uploads the Covertype dataset to BigQuery.
End of explanation
!bq --location=$DATASET_LOCATION --project_id=$PROJECT_ID mk --dataset $DATASET_ID
!bq --project_id=$PROJECT_ID --dataset_id=$DATASET_ID load \
--source_format=CSV \
--skip_leading_rows=1 \
--replace \
$TABLE_ID \
$DATA_SOURCE \
$SCHEMA
Explanation: Next, create the BigQuery dataset and upload the Covertype csv data into a table.
End of explanation
!gsutil ls
Explanation: Configure environment settings
Set location paths, connections strings, and other environment settings. Make sure to update REGION, and ARTIFACT_STORE with the settings reflecting your lab environment.
REGION - the compute region for AI Platform Training and Prediction
ARTIFACT_STORE - the Cloud Storage bucket created during installation of AI Platform Pipelines. The bucket name starts with the qwiklabs-gcp-xx-xxxxxxx-kubeflowpipelines-default prefix.
Run gsutil ls without URLs to list all of the Cloud Storage buckets under your default project ID.
End of explanation
REGION = 'us-central1'
ARTIFACT_STORE = 'gs://qwiklabs-gcp-xx-xxxxxxx-kubeflowpipelines-default' # TO DO: REPLACE WITH YOUR ARTIFACT_STORE NAME
PROJECT_ID = !(gcloud config get-value core/project)
PROJECT_ID = PROJECT_ID[0]
DATA_ROOT='{}/data'.format(ARTIFACT_STORE)
JOB_DIR_ROOT='{}/jobs'.format(ARTIFACT_STORE)
TRAINING_FILE_PATH='{}/{}/{}'.format(DATA_ROOT, 'training', 'dataset.csv')
VALIDATION_FILE_PATH='{}/{}/{}'.format(DATA_ROOT, 'validation', 'dataset.csv')
Explanation: HINT: For ARTIFACT_STORE, copy the bucket name which starts with the qwiklabs-gcp-xx-xxxxxxx-kubeflowpipelines-default prefix from the previous cell output.
Your copied value should look like 'gs://qwiklabs-gcp-xx-xxxxxxx-kubeflowpipelines-default'.
End of explanation
%%bigquery
SELECT *
FROM `covertype_dataset.covertype`
Explanation: Explore the Covertype dataset
Run the query statement below to scan covertype_dataset.covertype table in BigQuery and return the computed result rows.
End of explanation
!bq query \
-n 0 \
--destination_table covertype_dataset.training \
--replace \
--use_legacy_sql=false \
'SELECT * \
FROM `covertype_dataset.covertype` AS cover \
WHERE \
MOD(ABS(FARM_FINGERPRINT(TO_JSON_STRING(cover))), 10) IN (1, 2, 3, 4)'
Explanation: Create training and validation splits
Use BigQuery to sample training and validation splits and save them to Cloud Storage.
Create a training split
Run the query below in order to have repeatable sampling of the data in BigQuery. Note that FARM_FINGERPRINT() is used on the field that you are going to split your data. It creates a training split that takes 80% of the data using the bq command and exports this split into the BigQuery table of covertype_dataset.training.
End of explanation
!bq extract \
--destination_format CSV \
covertype_dataset.training \
$TRAINING_FILE_PATH
Explanation: Use the bq extract command to export the BigQuery training table to GCS at $TRAINING_FILE_PATH.
End of explanation
# TO DO: Your code goes here to create the BQ table validation split.
# TO DO: Your code goes here to export the validation table to the Cloud Storage bucket.
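# One possible sketch for this exercise, mirroring the training-split commands
# above (using MOD bucket 8 for the ~10% validation sample is an assumption):
!bq query \
-n 0 \
--destination_table covertype_dataset.validation \
--replace \
--use_legacy_sql=false \
'SELECT * \
FROM `covertype_dataset.covertype` AS cover \
WHERE \
MOD(ABS(FARM_FINGERPRINT(TO_JSON_STRING(cover))), 10) IN (8)'

!bq extract \
--destination_format CSV \
covertype_dataset.validation \
$VALIDATION_FILE_PATH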
df_train = pd.read_csv(TRAINING_FILE_PATH)
df_validation = pd.read_csv(VALIDATION_FILE_PATH)
print(df_train.shape)
print(df_validation.shape)
Explanation: Create a validation split
Exercise
In the first cell below, create
a validation split that takes 10% of the data using the bq command and
export this split into the BigQuery table covertype_dataset.validation.
In the second cell, use the bq command to export that BigQuery validation table to GCS at $VALIDATION_FILE_PATH.
<ql-infobox><b>NOTE:</b> If you need help, you may take a look at the complete solution by navigating to mlops-on-gcp > workshops > kfp-caip-sklearn > lab-01-caip-containers and opening lab-01.ipynb.
</ql-infobox>
End of explanation
numeric_feature_indexes = slice(0, 10)
categorical_feature_indexes = slice(10, 12)
preprocessor = ColumnTransformer(
transformers=[
('num', StandardScaler(), numeric_feature_indexes),
('cat', OneHotEncoder(), categorical_feature_indexes)
])
pipeline = Pipeline([
('preprocessor', preprocessor),
('classifier', SGDClassifier(loss='log', tol=1e-3))
])
Explanation: Develop a training application
Configure the sklearn training pipeline.
The training pipeline preprocesses data by standardizing all numeric features using sklearn.preprocessing.StandardScaler and encoding all categorical features using sklearn.preprocessing.OneHotEncoder. It uses stochastic gradient descent linear classifier (SGDClassifier) for modeling.
End of explanation
num_features_type_map = {feature: 'float64' for feature in df_train.columns[numeric_feature_indexes]}
df_train = df_train.astype(num_features_type_map)
df_validation = df_validation.astype(num_features_type_map)
Explanation: Convert all numeric features to float64
To avoid warning messages from StandardScaler all numeric features are converted to float64.
End of explanation
X_train = df_train.drop('Cover_Type', axis=1)
y_train = df_train['Cover_Type']
X_validation = df_validation.drop('Cover_Type', axis=1)
y_validation = df_validation['Cover_Type']
pipeline.set_params(classifier__alpha=0.001, classifier__max_iter=200)
pipeline.fit(X_train, y_train)
Explanation: Run the pipeline locally.
End of explanation
accuracy = pipeline.score(X_validation, y_validation)
print(accuracy)
Explanation: Calculate the trained model's accuracy.
End of explanation
TRAINING_APP_FOLDER = 'training_app'
os.makedirs(TRAINING_APP_FOLDER, exist_ok=True)
Explanation: Prepare the hyperparameter tuning application.
Since the training run on this dataset is computationally expensive you can benefit from running a distributed hyperparameter tuning job on AI Platform Training.
End of explanation
%%writefile {TRAINING_APP_FOLDER}/train.py
# Copyright 2019 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import subprocess
import sys
import fire
import pickle
import numpy as np
import pandas as pd
import hypertune
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, OneHotEncoder
def train_evaluate(job_dir, training_dataset_path,
validation_dataset_path, alpha, max_iter, hptune):
df_train = pd.read_csv(training_dataset_path)
df_validation = pd.read_csv(validation_dataset_path)
if not hptune:
df_train = pd.concat([df_train, df_validation])
numeric_feature_indexes = slice(0, 10)
categorical_feature_indexes = slice(10, 12)
preprocessor = ColumnTransformer(
transformers=[
('num', StandardScaler(), numeric_feature_indexes),
('cat', OneHotEncoder(), categorical_feature_indexes)
])
pipeline = Pipeline([
('preprocessor', preprocessor),
('classifier', SGDClassifier(loss='log',tol=1e-3))
])
num_features_type_map = {feature: 'float64' for feature
in df_train.columns[numeric_feature_indexes]}
df_train = df_train.astype(num_features_type_map)
df_validation = df_validation.astype(num_features_type_map)
print('Starting training: alpha={}, max_iter={}'.format(alpha, max_iter))
X_train = df_train.drop('Cover_Type', axis=1)
y_train = df_train['Cover_Type']
pipeline.set_params(classifier__alpha=alpha, classifier__max_iter=max_iter)
pipeline.fit(X_train, y_train)
if hptune:
# TO DO: Your code goes here to score the model with the validation data and capture the result
# with the hypertune library
# Save the model
if not hptune:
model_filename = 'model.pkl'
with open(model_filename, 'wb') as model_file:
pickle.dump(pipeline, model_file)
gcs_model_path = "{}/{}".format(job_dir, model_filename)
subprocess.check_call(['gsutil', 'cp', model_filename, gcs_model_path],
stderr=sys.stdout)
print("Saved model in: {}".format(gcs_model_path))
if __name__ == "__main__":
fire.Fire(train_evaluate)
Explanation: Write the tuning script.
Notice the use of the hypertune package to report the accuracy optimization metric to AI Platform hyperparameter tuning service.
Exercise
Complete the code below to capture the metric that the hyperparameter tuning engine will use to optimize
the hyperparameter.
<ql-infobox><b>NOTE:</b> If you need help, you may take a look at the complete solution by navigating to mlops-on-gcp > workshops > kfp-caip-sklearn > lab-01-caip-containers and opening lab-01.ipynb.
</ql-infobox>
End of explanation
%%writefile {TRAINING_APP_FOLDER}/Dockerfile
FROM gcr.io/deeplearning-platform-release/base-cpu
RUN pip install -U fire cloudml-hypertune scikit-learn==0.20.4 pandas==0.24.2
# TO DO: Your code goes here
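# One possible completion for the exercise (a sketch; it copies train.py to /app
# and runs it on container start, as the exercise text describes):
WORKDIR /app
COPY train.py .
ENTRYPOINT ["python", "train.py"]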
Explanation: Package the script into a docker image.
Notice that we are installing specific versions of scikit-learn and pandas in the training image. This is done to make sure that the training runtime is aligned with the serving runtime. Later in the notebook you will deploy the model to AI Platform Prediction, using the 1.15 version of AI Platform Prediction runtime.
Make sure to update the URI for the base image so that it points to your project's Container Registry.
Exercise
Complete the Dockerfile below so that it copies the 'train.py' file into the container
at /app and runs it when the container is started.
<ql-infobox><b>NOTE:</b> If you need help, you may take a look at the complete solution by navigating to mlops-on-gcp > workshops > kfp-caip-sklearn > lab-01-caip-containers and opening lab-01.ipynb.
</ql-infobox>
End of explanation
IMAGE_NAME='trainer_image'
IMAGE_TAG='latest'
IMAGE_URI='gcr.io/{}/{}:{}'.format(PROJECT_ID, IMAGE_NAME, IMAGE_TAG)
!gcloud builds submit --tag $IMAGE_URI $TRAINING_APP_FOLDER
Explanation: Build the docker image.
You use Cloud Build to build the image and push it to your project's Container Registry. As you use the remote cloud service to build the image, you don't need a local installation of Docker.
End of explanation
%%writefile {TRAINING_APP_FOLDER}/hptuning_config.yaml
# Copyright 2019 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
trainingInput:
hyperparameters:
goal: MAXIMIZE
maxTrials: 4
maxParallelTrials: 4
hyperparameterMetricTag: accuracy
enableTrialEarlyStopping: TRUE
params:
# TO DO: Your code goes here
Explanation: Submit an AI Platform hyperparameter tuning job
Create the hyperparameter configuration file.
Recall that the training code uses SGDClassifier. The training application has been designed to accept two hyperparameters that control SGDClassifier:
- Max iterations
- Alpha
The file below configures AI Platform hypertuning to run up to 4 trials, with up to 4 running in parallel, and to choose from two discrete values of max_iter and a linear range between 0.00001 and 0.001 for alpha.
Exercise
Complete the hptuning_config.yaml file below so that the hyperparameter
tuning engine tries the following parameter values:
* max_iter: the two discrete values 200 and 300
* alpha: a linear range between 0.00001 and 0.001
<ql-infobox><b>NOTE:</b> If you need help, you may take a look at the complete solution by navigating to mlops-on-gcp > workshops > kfp-caip-sklearn > lab-01-caip-containers and opening lab-01.ipynb.
</ql-infobox>
End of explanation
JOB_NAME = "JOB_{}".format(time.strftime("%Y%m%d_%H%M%S"))
JOB_DIR = "{}/{}".format(JOB_DIR_ROOT, JOB_NAME)
SCALE_TIER = "BASIC"
!gcloud ai-platform jobs submit training $JOB_NAME \
--region=# TO DO: ADD YOUR REGION \
--job-dir=# TO DO: ADD YOUR JOB-DIR \
--master-image-uri=# TO DO: ADD YOUR IMAGE-URI \
--scale-tier=# TO DO: ADD YOUR SCALE-TIER \
--config # TO DO: ADD YOUR CONFIG PATH \
-- \
# TO DO: Complete the command
Explanation: Start the hyperparameter tuning job.
Exercise
Use the gcloud command to start the hyperparameter tuning job.
<ql-infobox><b>NOTE:</b> If you need help, you may take a look at the complete solution by navigating to mlops-on-gcp > workshops > kfp-caip-sklearn > lab-01-caip-containers and opening lab-01.ipynb.
</ql-infobox>
End of explanation
!gcloud ai-platform jobs describe $JOB_NAME
!gcloud ai-platform jobs stream-logs $JOB_NAME
Explanation: Monitor the job.
You can monitor the job using Google Cloud console or from within the notebook using gcloud commands.
End of explanation
ml = discovery.build('ml', 'v1')
job_id = 'projects/{}/jobs/{}'.format(PROJECT_ID, JOB_NAME)
request = ml.projects().jobs().get(name=job_id)
try:
response = request.execute()
except errors.HttpError as err:
print(err)
except:
print("Unexpected error")
response
Explanation: NOTE: The above AI platform job stream logs will take approximately 5~10 minutes to display.
Retrieve HP-tuning results.
After the job completes you can review the results using Google Cloud Console or programatically by calling the AI Platform Training REST end-point.
End of explanation
response['trainingOutput']['trials'][0]
Explanation: The returned run results are sorted by a value of the optimization metric. The best run is the first item on the returned list.
End of explanation
alpha = response['trainingOutput']['trials'][0]['hyperparameters']['alpha']
max_iter = response['trainingOutput']['trials'][0]['hyperparameters']['max_iter']
JOB_NAME = "JOB_{}".format(time.strftime("%Y%m%d_%H%M%S"))
JOB_DIR = "{}/{}".format(JOB_DIR_ROOT, JOB_NAME)
SCALE_TIER = "BASIC"
!gcloud ai-platform jobs submit training $JOB_NAME \
--region=$REGION \
--job-dir=$JOB_DIR \
--master-image-uri=$IMAGE_URI \
--scale-tier=$SCALE_TIER \
-- \
--training_dataset_path=$TRAINING_FILE_PATH \
--validation_dataset_path=$VALIDATION_FILE_PATH \
--alpha=$alpha \
--max_iter=$max_iter \
--nohptune
!gcloud ai-platform jobs stream-logs $JOB_NAME
Explanation: Retrain the model with the best hyperparameters
You can now retrain the model using the best hyperparameters and using combined training and validation splits as a training dataset.
Configure and run the training job
End of explanation
!gsutil ls $JOB_DIR
Explanation: NOTE: The above AI platform job stream logs will take approximately 5~10 minutes to display.
Examine the training output
The training script saved the trained model as the 'model.pkl' in the JOB_DIR folder on Cloud Storage.
End of explanation
model_name = 'forest_cover_classifier'
labels = "task=classifier,domain=forestry"
!gcloud # TO DO: You code goes here
Explanation: Deploy the model to AI Platform Prediction
Create a model resource
Exercise
Complete the gcloud command below to create a model with
model_name in $REGION tagged with labels:
<ql-infobox><b>NOTE:</b> If you need help, you may take a look at the complete solution by navigating to mlops-on-gcp > workshops > kfp-caip-sklearn > lab-01-caip-containers and opening lab-01.ipynb.
</ql-infobox>
End of explanation
model_version = 'v01'
!gcloud # TO DO: Complete the command \
--model=# TO DO: ADD YOUR MODEL NAME \
--origin=# TO DO: ADD YOUR PATH \
--runtime-version=# TO DO: ADD YOUR RUNTIME \
--framework=# TO DO: ADD YOUR FRAMEWORK \
--python-version=# TO DO: ADD YOUR PYTHON VERSION \
--region # TO DO: ADD YOUR REGION
Explanation: Create a model version
Exercise
Complete the gcloud command below to create a version of the model:
<ql-infobox><b>NOTE:</b> If you need help, you may take a look at the complete solution by navigating to mlops-on-gcp > workshops > kfp-caip-sklearn > lab-01-caip-containers and opening lab-01.ipynb.
</ql-infobox>
End of explanation
input_file = 'serving_instances.json'
with open(input_file, 'w') as f:
for index, row in X_validation.head().iterrows():
f.write(json.dumps(list(row.values)))
f.write('\n')
!cat $input_file
Explanation: Serve predictions
Prepare the input file with JSON formated instances.
End of explanation
!gcloud # TO DO: Complete the command
Explanation: Invoke the model
Exercise
Using the gcloud command send the data in $input_file to
your model deployed as a REST API:
<ql-infobox><b>NOTE:</b> If you need help, you may take a look at the complete solution by navigating to mlops-on-gcp > workshops > kfp-caip-sklearn > lab-01-caip-containers and opening lab-01.ipynb.
</ql-infobox>
End of explanation |
9,404 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Mass Spec Data Analysis
Step1: Relative intensity
Every MS run has a characteristic intensity scale depending on the quantity of sample, concentration of cells / protein and probably other things.
Before further analysis we compute relative intensity based on the total intensity in each MS run.
Step2: Fold-change
Compute base-2 logarithm of the experiment | Python Code:
# First, we must perform the incantations.
%pylab inline
import pandas as pd
# Parse data file.
proteins = pd.read_table('data/pubs2015/proteinGroups.txt', low_memory=False)
# Find mass spec intensity columns.
intensity_cols = [c for c in proteins.columns if 'intensity '
in c.lower() and 'lfq' not in c.lower()]
# Find columns corresponding to experiment classes.
wcl_cols = [c for c in intensity_cols if '_wcl' in c.lower() and '_wclp' not in c.lower()]
wclp_cols = [c for c in intensity_cols if '_wclp' in c.lower()]
ub_cols = [c for c in intensity_cols if '_ub' in c.lower() and '_ubp' not in c.lower()]
ubp_cols = [c for c in intensity_cols if '_ubp' in c.lower()]
# Create a binary mask excluding reversed and contaminated samples.
mask = (proteins['Reverse'] != '+') & \
(proteins['Potential contaminant'] != '+')
Explanation: Mass Spec Data Analysis
End of explanation
# Apply reversed/contaminated mask and get intensity columns.
intensities = proteins[mask][intensity_cols]
# Sum down the columns (MS runs).
total_intensities = proteins[intensity_cols].sum(axis=0)
# Element-wise division with singleton expansion/broadcasting.
normed_intensities = intensities / total_intensities
# Indices of proteins which have non-zero intensity in at least one run.
idx = (normed_intensities != 0).any(axis=1)
# Get names and intensities of such proteins.
names = proteins[mask][idx]['Protein IDs']
nonzero_intensities = normed_intensities[idx]
# Separate the intensity DataFrame into separate DataFrames for each experiment class.
wcl = nonzero_intensities[wcl_cols]
wclp = nonzero_intensities[wclp_cols]
ub = nonzero_intensities[ub_cols]
ubp = nonzero_intensities[ubp_cols]
# Find control columns in each experiment class.
wcl_ctrl = [c for c in wcl.columns if 'control' in c.lower()]
wclp_ctrl = [c for c in wclp.columns if 'control' in c.lower()]
ub_ctrl = [c for c in ub.columns if 'control' in c.lower()]
ubp_ctrl = [c for c in ubp.columns if 'control' in c.lower()]
# Find experiment columns in each experiment class.
wcl_exp = [c for c in wcl.columns if 'control' not in c.lower()]
wclp_exp = [c for c in wclp.columns if 'control' not in c.lower()]
ub_exp = [c for c in ub.columns if 'control' not in c.lower()]
ubp_exp = [c for c in ubp.columns if 'control' not in c.lower()]
Explanation: Relative intensity
Every MS run has a characteristic intensity scale depending on the quantity of sample, concentration of cells / protein and probably other things.
Before further analysis we compute relative intensity based on the total intensity in each MS run.
End of explanation
# Need to use underlying numpy arrays for singleton expansion ('broadcasting')
# and form new DataFrame using appropriate column names.
wcl_foldch = pd.DataFrame(log2(wcl[wcl_exp]).values - log2(wcl[wcl_ctrl]).values, columns=wcl_exp)
wclp_foldch = pd.DataFrame(log2(wclp[wclp_exp]).values - log2(wclp[wclp_ctrl]).values, columns=wclp_exp)
ub_foldch = pd.DataFrame(log2(ub[ub_exp]).values - log2(ub[ub_ctrl]).values, columns=ub_exp)
ubp_foldch = pd.DataFrame(log2(ubp[ubp_exp]).values - log2(ubp[ubp_ctrl]).values, columns=ubp_exp)
# 3rd-to-last element is Shmoo / CaCl2.
# Only histogram finite (non-inf, non-NaN) values.
hist(wcl_foldch[wcl_foldch.columns[-3]][isfinite(wcl_foldch[wcl_foldch.columns[-3]])].values, 100);
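# A quick numeric summary of the same finite fold-change values (this assumes,
# as noted above, that the 3rd-to-last column is the Shmoo / CaCl2 condition):
col = wcl_foldch.columns[-3]
wcl_foldch[col][isfinite(wcl_foldch[col])].describe()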
Explanation: Fold-change
Compute base-2 logarithm of the experiment : control ratios for each protein and experiment class.
These values represent the "fold change" from control in each of the experiments.
End of explanation |
9,405 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Bayesian Statistics Made Simple
Code and exercises from my workshop on Bayesian statistics in Python.
Copyright 2016 Allen Downey
MIT License
Step1: Working with Pmfs
Create a Pmf object to represent a six-sided die.
Step2: A Pmf is a map from possible outcomes to their probabilities.
Step3: Initially the probabilities don't add up to 1.
Step4: Normalize adds up the probabilities and divides through. The return value is the total probability before normalizing.
Step5: Now the Pmf is normalized.
Step6: And we can compute its mean (which only works if it's normalized).
Step7: Random chooses a random value from the Pmf.
Step8: thinkplot provides methods for plotting Pmfs in a few different styles.
Step9: Exercise 1
Step10: Exercise 2
Step11: The cookie problem
Create a Pmf with two equally likely hypotheses.
Step12: Update each hypothesis with the likelihood of the data (a vanilla cookie).
Step13: Print the posterior probabilities.
Step14: Exercise 3
Step15: Exercise 4
Step16: The dice problem
Create a Suite to represent dice with different numbers of sides.
Step17: Exercise 5
Step18: Exercise 6
Step19: Now we can create a Dice object and update it.
Step20: If we get more data, we can perform more updates.
Step21: Here are the results.
Step22: The German tank problem
The German tank problem is actually identical to the dice problem.
Step23: Here are the posterior probabilities after seeing Tank #37.
Step24: Exercise 7
Step26: The Euro problem
Exercise 8
Step27: We'll start with a uniform distribution from 0 to 100.
Step28: Now we can update with a single heads
Step29: Another heads
Step30: And a tails
Step31: Starting over, here's what it looks like after 7 heads and 3 tails.
Step32: The maximum posterior probability is 70%, which is the observed proportion.
Here are the posterior probabilities after 140 heads and 110 tails.
Step33: The posterior mean is about 56%
Step34: So is the value with Maximum A Posteriori Probability (MAP).
Step35: The posterior credible interval has a 90% chance of containing the true value (provided that the prior distribution truly represents our background knowledge).
Step37: Swamping the prior
The following function makes a Euro object with a triangle prior.
Step38: And here's what it looks like
Step39: Exercise 9 | Python Code:
from __future__ import print_function, division
%matplotlib inline
import warnings
warnings.filterwarnings('ignore')
from thinkbayes2 import Pmf, Suite
import thinkplot
Explanation: Bayesian Statistics Made Simple
Code and exercises from my workshop on Bayesian statistics in Python.
Copyright 2016 Allen Downey
MIT License: https://opensource.org/licenses/MIT
End of explanation
d6 = Pmf()
Explanation: Working with Pmfs
Create a Pmf object to represent a six-sided die.
End of explanation
for x in [1,2,3,4,5,6]:
d6[x] = 1
Explanation: A Pmf is a map from possible outcomes to their probabilities.
End of explanation
d6.Print()
Explanation: Initially the probabilities don't add up to 1.
End of explanation
d6.Normalize()
Explanation: Normalize adds up the probabilities and divides through. The return value is the total probability before normalizing.
End of explanation
d6.Print()
Explanation: Now the Pmf is normalized.
End of explanation
d6.Mean()
Explanation: And we can compute its mean (which only works if it's normalized).
End of explanation
d6.Random()
Explanation: Random chooses a random value from the Pmf.
End of explanation
thinkplot.Hist(d6)
Explanation: thinkplot provides methods for plotting Pmfs in a few different styles.
End of explanation
# Solution goes here
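# One possible sketch (not the official solution): Pmf supports the + operator,
# so the distribution of the sum of two dice is just d6 + d6.
twice = d6 + d6
thinkplot.Hist(twice)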
Explanation: Exercise 1: The Pmf object provides __add__, so you can use the + operator to compute the Pmf of the sum of two dice.
Compute and plot the Pmf of the sum of two 6-sided dice.
End of explanation
# Solution goes here
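# One possible sketch (not the official solution): build the sum of two dice,
# zero out the impossible outcomes (3 or less), renormalize, and take the mean.
twice = d6 + d6
for x in [2, 3]:
    twice[x] = 0
twice.Normalize()
thinkplot.Hist(twice)
twice.Mean()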
Explanation: Exercise 2: Suppose I roll two dice and tell you the result is greater than 3.
Plot the Pmf of the remaining possible outcomes and compute its mean.
End of explanation
cookie = Pmf(['Bowl 1', 'Bowl 2'])
cookie.Print()
Explanation: The cookie problem
Create a Pmf with two equally likely hypotheses.
End of explanation
cookie['Bowl 1'] *= 0.75
cookie['Bowl 2'] *= 0.5
cookie.Normalize()
Explanation: Update each hypothesis with the likelihood of the data (a vanilla cookie).
End of explanation
cookie.Print()
Explanation: Print the posterior probabilities.
End of explanation
# Solution goes here
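# One possible sketch (not the official solution): the posterior after the vanilla
# cookie becomes the prior; update it with the chocolate likelihoods (0.25 and 0.5).
cookie['Bowl 1'] *= 0.25
cookie['Bowl 2'] *= 0.5
cookie.Normalize()
cookie.Print()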
Explanation: Exercise 3: Suppose we put the first cookie back, stir, choose again from the same bowl, and get a chocolate cookie.
Hint: The posterior (after the first cookie) becomes the prior (before the second cookie).
End of explanation
# Solution goes here
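# One possible sketch (not the official solution): apply both likelihoods
# (one vanilla, one chocolate) in a single update before normalizing.
cookie = Pmf(['Bowl 1', 'Bowl 2'])
cookie['Bowl 1'] *= 0.75 * 0.25
cookie['Bowl 2'] *= 0.5 * 0.5
cookie.Normalize()
cookie.Print()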
Explanation: Exercise 4: Instead of doing two updates, what if we collapse the two pieces of data into one update?
Re-initialize Pmf with two equally likely hypotheses and perform one update based on two pieces of data, a vanilla cookie and a chocolate cookie.
The result should be the same regardless of how many updates you do (or the order of updates).
End of explanation
pmf = Pmf([4, 6, 8, 12])
pmf.Print()
Explanation: The dice problem
Create a Suite to represent dice with different numbers of sides.
End of explanation
# Solution goes here
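# One possible sketch (not the official solution): multiply each hypothesis by
# the likelihood of rolling a 6 (impossible on a 4-sided die), then renormalize.
pmf[4] *= 0
pmf[6] *= 1/6
pmf[8] *= 1/8
pmf[12] *= 1/12
pmf.Normalize()
pmf.Print()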
Explanation: Exercise 5: We'll solve this problem two ways. First we'll do it "by hand", as we did with the cookie problem; that is, we'll multiply each hypothesis by the likelihood of the data, and then renormalize.
In the space below, update suite based on the likelihood of the data (rolling a 6), then normalize and print the results.
End of explanation
class Dice(Suite):
# hypo is the number of sides on the die
# data is the outcome
def Likelihood(self, data, hypo):
return 1
# Solution goes here
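# One possible completion (a hedged sketch, not the official solution); it mirrors
# the Tank class later in this notebook: outcomes larger than the number of sides
# are impossible, otherwise each outcome has probability 1/hypo.
class Dice(Suite):
    def Likelihood(self, data, hypo):
        if data > hypo:
            return 0
        return 1 / hypo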
Explanation: Exercise 6: Now let's do the same calculation using Suite.Update.
Write a definition for a new class called Dice that extends Suite. Then define a method called Likelihood that takes data and hypo and returns the probability of the data (the outcome of rolling the die) for a given hypothesis (number of sides on the die).
Hint: What should you do if the outcome exceeds the hypothetical number of sides on the die?
Here's an outline to get you started:
End of explanation
dice = Dice([4, 6, 8, 12])
dice.Update(6)
dice.Print()
Explanation: Now we can create a Dice object and update it.
End of explanation
for roll in [8, 7, 7, 5, 4]:
dice.Update(roll)
Explanation: If we get more data, we can perform more updates.
End of explanation
dice.Print()
Explanation: Here are the results.
End of explanation
class Tank(Suite):
# hypo is the number of tanks
# data is an observed serial number
def Likelihood(self, data, hypo):
if data > hypo:
return 0
else:
return 1 / hypo
Explanation: The German tank problem
The German tank problem is actually identical to the dice problem.
End of explanation
tank = Tank(range(100))
tank.Update(37)
thinkplot.Pdf(tank)
tank.Mean()
Explanation: Here are the posterior probabilities after seeing Tank #37.
End of explanation
# Solution goes here
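# One possible sketch (not the official solution): a second serial number
# re-weights the surviving hypotheses toward smaller fleet sizes.
tank.Update(17)
thinkplot.Pdf(tank)
tank.Mean()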
Explanation: Exercise 7: Suppose we see another tank with serial number 17. What effect does this have on the posterior probabilities?
Update the suite again with the new data and plot the results.
End of explanation
class Euro(Suite):
def Likelihood(self, data, hypo):
        """hypo is the prob of heads (0-100);
        data is a string, either 'H' or 'T'."""
return 1
# Solution goes here
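# One possible Likelihood (a hedged sketch, not the official solution): convert
# hypo from a percentage to a probability, return it for 'H' and its complement for 'T'.
class Euro(Suite):
    def Likelihood(self, data, hypo):
        x = hypo / 100
        if data == 'H':
            return x
        return 1 - x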
Explanation: The Euro problem
Exercise 8: Write a class definition for Euro, which extends Suite and defines a likelihood function that computes the probability of the data (heads or tails) for a given value of x (the probability of heads).
Note that hypo is in the range 0 to 100. Here's an outline to get you started.
End of explanation
euro = Euro(range(101))
thinkplot.Pdf(euro)
Explanation: We'll start with a uniform distribution from 0 to 100.
End of explanation
euro.Update('H')
thinkplot.Pdf(euro)
Explanation: Now we can update with a single heads:
End of explanation
euro.Update('H')
thinkplot.Pdf(euro)
Explanation: Another heads:
End of explanation
euro.Update('T')
thinkplot.Pdf(euro)
Explanation: And a tails:
End of explanation
euro = Euro(range(101))
for outcome in 'HHHHHHHTTT':
euro.Update(outcome)
thinkplot.Pdf(euro)
euro.MaximumLikelihood()
Explanation: Starting over, here's what it looks like after 7 heads and 3 tails.
End of explanation
euro = Euro(range(101))
evidence = 'H' * 140 + 'T' * 110
for outcome in evidence:
euro.Update(outcome)
thinkplot.Pdf(euro)
Explanation: The maximum posterior probability is 70%, which is the observed proportion.
Here are the posterior probabilities after 140 heads and 110 tails.
End of explanation
euro.Mean()
Explanation: The posterior mean is about 56%
End of explanation
euro.MAP()
Explanation: So is the value with Maximum A Posteriori Probability (MAP).
End of explanation
euro.CredibleInterval(90)
Explanation: The posterior credible interval has a 90% chance of containing the true value (provided that the prior distribution truly represents our background knowledge).
End of explanation
def TrianglePrior():
    """Makes a Suite with a triangular prior."""
suite = Euro(label='triangle')
for x in range(0, 51):
suite[x] = x
for x in range(51, 101):
suite[x] = 100-x
suite.Normalize()
return suite
Explanation: Swamping the prior
The following function makes a Euro object with a triangle prior.
End of explanation
euro1 = Euro(range(101), label='uniform')
euro2 = TrianglePrior()
thinkplot.Pdfs([euro1, euro2])
thinkplot.Config(title='Priors')
Explanation: And here's what it looks like:
End of explanation
# Solution goes here
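# One possible sketch (not the official solution), assuming Euro.Likelihood has
# been filled in as in Exercise 8: update both priors with the same evidence.
for outcome in evidence:
    euro1.Update(outcome)
    euro2.Update(outcome)
thinkplot.Pdfs([euro1, euro2])
thinkplot.Config(title='Posteriors')
print(euro1.Mean() - euro2.Mean())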
Explanation: Exercise 9: Update euro1 and euro2 with the same data we used before (140 heads and 110 tails) and plot the posteriors. How big is the difference in the means?
End of explanation |
9,406 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Classification
Consider a binary classification problem. The data and target files are available online. The domain of the problem is chemoinformatics. Data is about toxicity of 4K small molecules.
The creation of a predictive system happens in 3 steps
Step1: load data and convert it to graphs
Step2: 2 Vectorization
setup the vectorizer
Step3: extract features and build data matrix
Step4: 3 Modelling
Induce a predictor and evaluate its performance | Python Code:
from eden.util import load_target
y = load_target( 'http://www.bioinf.uni-freiburg.de/~costa/bursi.target' )
Explanation: Classification
Consider a binary classification problem. The data and target files are available online. The domain of the problem is chemoinformatics. Data is about toxicity of 4K small molecules.
The creation of a predictive system happens in 3 steps:
data conversion: transform instances into a suitable graph format. This is done using specialized programs for each (domain, format) pair. In the example we have molecular graphs encoded using the gSpan format and we will therefore use the 'gspan' tool.
data vectorization: transform graphs into sparse vectors. This is done using the EDeN tool. The vectorizer accepts as parameters the (maximal) size of the fragments to be used as features, this is expressed as the pair 'radius' and the 'distance'. See for details: F. Costa, K. De Grave,''Fast Neighborhood Subgraph Pairwise Distance Kernel'', 27th International Conference on Machine Learning (ICML), 2010.
modelling: fit a predictive system and evaluate its performance. This is done using the tools offered by the scikit-learn library. In the example we will use a Stochastic Gradient Descent linear classifier.
In the following cells there is the code for each step.
Install the library
1 Conversion
load a target file
End of explanation
from eden.converter.graph.gspan import gspan_to_eden
graphs = gspan_to_eden( 'http://www.bioinf.uni-freiburg.de/~costa/bursi.gspan' )
Explanation: load data and convert it to graphs
End of explanation
from eden.graph import Vectorizer
vectorizer = Vectorizer( r=2,d=0 )
Explanation: 2 Vectorization
setup the vectorizer
End of explanation
%%time
X = vectorizer.transform( graphs )
print 'Instances: %d Features: %d with an avg of %d features per instance' % (X.shape[0], X.shape[1], X.getnnz()/X.shape[0])
Explanation: extract features and build data matrix
End of explanation
%%time
#induce a predictive model
from sklearn.linear_model import SGDClassifier
predictor = SGDClassifier(average=True, class_weight='auto', shuffle=True, n_jobs=-1)
from sklearn import cross_validation
scores = cross_validation.cross_val_score(predictor, X, y, cv=10, scoring='roc_auc')
import numpy as np
print('AUC ROC: %.4f +- %.4f' % (np.mean(scores),np.std(scores)))
Explanation: 3 Modelling
Induce a predictor and evaluate its performance
End of explanation |
9,407 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Permanent Income Model
Chase Coleman and Thomas Sargent
This notebook maps instances of the linear-quadratic-Gaussian permanent income model
with $\beta R = 1$ into a linear state space system, applies two different approaches to solving the model and compares outcomes from those two approaches. After confirming that answers produced by the two methods agree, it applies the quantecon LinearStateSpace class to illustrate various features of the model.
Besides being a workhorse model for analyzing consumption data, the model is good for illustrating the concepts of
* stationarity
* ergodicity
* ensemble moments and cross section observations
* cointegration
* linear-quadratic dynamic programming problems
Background readings on the linear-quadratic-Gaussian permanent income model are Robert Hall's 1978 JPE paper ``Stochastic Implications of the Life Cycle-Permanent Income Hypothesis
Step1: Plan of the notebook
We study a version of the linear-quadratic-Gaussian model described in section 2.12 of chapter 2 of Ljungqvist and Sargent's Recursive Macroeconomic Theory
We solve the model in two ways
Step2: It turns out that the bliss level of consumption $\gamma$ in the utility function $-.5 (c_t -\gamma)^2$
has no effect on the optimal decision rule.
(We shall see why below when we inspect the Euler equation for consumption.)
Now create the objects for the optimal linear regulator.
Here we will use a trick to induce the Bellman equation to respect restriction (4) on the debt sequence
${b_t}$. To accomplish that, we'll put a very small penalty on $b_t^2$ in the criterion function.
That will induce a (hopefully) small approximation error in the decision rule. We'll check whether it really is small numerically soon.
Step3: Now create the appropriate instance of an LQ model
Step4: Now create the optimal policies using the analytic formulas.
We'll save the answers and will compare them with answers we get by employing an alternative solution method.
Step5: Solution via a system of expectational difference equations
Now we will solve the household's optimum problem by first deducing the Euler equations that are the first-order conditions with respect to consumption and savings, then using the budget constraints and the boundary condition (4) to complete a system of expectational linear difference equations that we'll solve for the optimal consumption, debt plan.
First-order conditions for the problem are
$$ E_t u'(c_{t+1}) = u'(c_t) , \ \ \forall t \geq 0. \quad (5) $$
In our linear-quadratic model, we assume
the quadratic utility function
$u(c_t) = -.5 (c_t - \gamma)^2$,
where $\gamma$ is a bliss level of consumption. Then the consumption Euler equation becomes
$$ E_t c_{t+1} = c_t . \quad (6) $$
Along with the quadratic utility specification, we allow consumption
$c_t$ to be negative.
To deduce the optimal decision rule, we want to solve the system
of difference equations formed by (2) and (6)
subject to the boundary condition (4). To accomplish this,
solve (2) forward and impose $\lim_{T\rightarrow +\infty} \beta^T b_{T+1} =0$ to get
$$ b_t = \sum_{j=0}^\infty \beta^j (y_{t+j} - c_{t+j}) . \quad (7) $$
Imposing $\lim_{T\rightarrow +\infty} \beta^T b_{T+1} =0$ suffices to impose (4) on the debt
path.
Take conditional expectations on both sides of (7) and use (6)
and the law of iterated expectations to deduce
$$ b_t = \sum_{j=0}^\infty \beta^j E_t y_{t+j} - {1 \over 1-\beta} c_t
\quad (8) $$
or
$$ c_t = (1-\beta)
\left[ \sum_{j=0}^\infty \beta^j E_t y_{t+j} - b_t\right].
\quad (9) $$
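(To see why the consumption terms in (7) collapse this way, note that iterating (6) and applying the law of iterated expectations gives $E_t c_{t+j} = c_t$ for every $j \geq 0$, so
$$ \sum_{j=0}^\infty \beta^j E_t c_{t+j} = c_t \sum_{j=0}^\infty \beta^j = {c_t \over 1-\beta}, $$
which is exactly the consumption term appearing in (8).)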
If we define the net rate of interest $r$ by $\beta ={1 \over 1+r}$, we can
also express this
equation as
$$ c_t = {r \over 1+r}
\left[ \sum_{j=0}^\infty \beta^j E_t y_{t+j} - b_t\right]. \quad (10) $$
Equation (9) or (10) asserts that consumption equals what Irving Fisher defined as
economic income, namely, a constant
marginal propensity to consume or interest factor ${r \over 1+r}$ times
the sum of nonfinancial wealth $
\sum_{j=0}^\infty \beta^j E_t y_{t+j}$ and financial
wealth $-b_t$. Notice that (9) or (10) represents
$c_t$ as a function of the state $[b_t, z_t]$
confronting the household, where from $z_t$ contains all
information useful for forecasting the endowment process.
Pulling together our preceding results, we can regard $z_t, b_t$ as
the time $t$ state, where $z_t$ is an exogenous component of the state
and $b_t$ is an endogenous component of the state vector. The system
can be represented as
$$ \eqalign{ z_{t+1} & = A_{22} z_t + C_2 w_{t+1} \cr
b_{t+1} & = b_t + U_y [ (I -\beta A_{22})^{-1} (A_{22} - I) ] z_t \cr
y_t & = U_y z_t \cr
c_t & = (1-\beta) [ U_y(I-\beta A_{22})^{-1} z_t - b_t ]. \cr } \quad (11) $$
Now we'll apply the formulas in equation system (11).
Later we shall use them to get objects needed to form the system (11) as an instance of a LinearStateSpace class that we'll use to exhibit features of the LQ permanent income model.
Step6: A_LSS calculated as we have here should equal ABF calculated above using the LQ model.
Here comes the check. The difference between ABF and A_LSS should be zero
Step7: Now compare pertinent elements of c_pol and -F
Step8: We have verified that the two methods give the same solution.
Now let's create an instance of a LinearStateSpace model.
To do this, we'll use the outcomes from our second method.
Two examples
Now we'll generate panels of consumers. We'll study two examples that are differentiated only by the initial states with which we endow consumers. All other parameter values are kept the same in the two examples.
In the first example, all consumers begin with zero nonfinancial income and zero debt. The consumers are thus ex ante identical.
In the second example, consumers are ex ante heterogeneous. While all of them begin with zero debt, we draw their initial income levels from the invariant distribution of nonfinancial income.
In the first example, consumers' nonfinancial income paths will display pronounced transients early in the sample that will affect outcomes in striking ways. Those transient effects will not be present in the second example.
Now we'll use methods that the LinearStateSpace class contains to simulate the model with our first set of initial conditions.
25 paths of the exogenous non-financial income process and the associated consumption and debt paths. In the first set of graphs, the darker lines depict one particular sample path, while the lighter lines indicate the other 24 paths.
A second graph that plots a collection of simulations against the population distribution that we extract from the LinearStateSpace instance LSS
Step10: Population and sample panels
In the code below, we use the LinearStateSpace class to
compute and plot population quantiles of the distributions of consumption and debt for a population of consumers
simulate a group of 25 consumers and plot sample paths on the same graph as the population distribution
Step12: First example
Here is what is going on in the above graphs.
Because we have set $y_{-1} = y_{-2} = 0$, nonfinancial income $y_t$ starts far below its stationary mean
$\mu_{y, \infty}$ and rises early in each simulation.
To help interpret the graph above, recall that we can represent the optimal decision rule for consumption
in terms of the co-integrating relationship
$$ (1-\beta) b_t + c_t = (1-\beta) E_t \sum_{j=0}^\infty \beta^j y_{t+j}, $$
For our simulation, we have set initial conditions $b_0 = y_{-1} = y_{-2} = 0$ (please see the code above).
So at time $0$ we have
$$ c_0 = (1-\beta) E_0 \sum_{t=0}^\infty \beta^t y_{t} . $$
This tells us that consumption starts at the value of an annuity from the expected discounted value of nonfinancial
income. To support that level of consumption, the consumer borrows a lot early on, building up substantial debt.
In fact, he or she incurs so much debt that eventually, in the stochastic steady state, he consumes less each period than his income. He uses the gap between consumption and income mostly to service the interest payments due on his debt.
Thus, when we look at the panel of debt in the accompanying graph, we see that this is a group of ex ante identical people each of whom starts with zero debt. All of them accumulate debt in anticipation of rising nonfinancial income. They expect their nonfinancial income to rise toward the invariant distribution of income, a consequence of our having started them at $y_{-1} = y_{-2} = 0$.
Illustration of cointegration
The LQ permanent income model is a good one for illustrating the concept of cointegration.
The following figure plots realizations of the left side of
$$ (1-\beta) b_t + c_t = (1-\beta) E_t \sum_{j=0}^\infty \beta^j y_{t+j}, \quad (12) $$
which is called the cointegrating residual.
Notice that it equals the right side, namely, $(1-\beta) E_t \sum_{j=0}^\infty \beta^j y_{t+j}$,
which equals an annuity payment on the expected present value of future income $E_t \sum_{j=0}^\infty \beta^j y_{t+j}$.
Early along a realization, $c_t$ is approximately constant while $(1-\beta) b_t$ and $(1-\beta) E_t \sum_{j=0}^\infty \beta^j y_{t+j}$ both rise markedly as the household's present value of income and borrowing rise pretty much together.
Note
Step13: A "borrowers and lenders" closed economy
When we set $y_{-1} = y_{-2} = 0$ and $b_0 =0$ in the preceding exercise, we make debt "head north" early in the sample. Average debt rises and approaches an asymptote.
We can regard these as outcomes of a ``small open economy'' that borrows from abroad at the fixed gross interest rate $R$ in anticipation of rising incomes.
So with the economic primitives set as above, the economy converges to a steady state in which there is an excess aggregate supply of risk-free loans at a gross interest rate of $R$. This excess supply is filled by ``foreign lenders'' willing to make those loans.
We can use virtually the same code to rig a "poor man's Bewley model" in the following way.
as before, we start everyone at $b_0 = 0$.
But instead of starting everyone at $y_{-1} = y_{-2} = 0$, we draw $\begin{bmatrix} y_{-1} \cr y_{-2}
\end{bmatrix}$ from the invariant distribution of the ${y_t}$ process.
This rigs a closed economy in which people are borrowing and lending with each other at a gross risk-free
interest rate of $R = \beta^{-1}$. Here within the group of people being analyzed, risk-free loans are in zero excess supply. We have arranged primitives so that $R = \beta^{-1}$ clears the market for risk-free loans at zero aggregate excess supply. There is no need for foreigners to lend to our group.
The following graphs confirm the following outcomes | Python Code:
import quantecon as qe
import numpy as np
import scipy.linalg as la
import matplotlib.pyplot as plt
%matplotlib inline
np.set_printoptions(suppress=True, precision=4)
Explanation: Permanent Income Model
Chase Coleman and Thomas Sargent
This notebook maps instances of the linear-quadratic-Gaussian permanent income model
with $\beta R = 1$ into a linear state space system, applies two different approaches to solving the model and compares outcomes from those two approaches. After confirming that answers produced by the two methods agree, it applies the quantecon LinearStateSpace class to illustrate various features of the model.
Besides being a workhorse model for analyzing consumption data, the model is good for illustrating the concepts of
* stationarity
* ergodicity
* ensemble moments and cross section observations
* cointegration
* linear-quadratic dynamic programming problems
Background readings on the linear-quadratic-Gaussian permanent income model are Robert Hall's 1978 JPE paper ``Stochastic Implications of the Life Cycle-Permanent Income Hypothesis: Theory and Evidence'' and chapter 2 of Recursive Macroeconomic Theory
Let's get started
End of explanation
# Possible parameters
# alpha, beta, rho1, rho2, sigma
params = [[10.0, 0.95, 1.2, -0.3, 1.0],
[10.0, 0.95, 0.9, 0.0, 1.0],
[10.0, 0.95, 0.0, -0.0, 10.0]]
# Set parameters
alpha, beta, rho1, rho2, sigma = params[1]
# Note: LinearStateSpace object runs into iteration limit in computing stationary variance when we set
# sigma = .5 -- replace with doublej2 to fix this. Do some more testing
R = 1/beta
A = np.array([[1., 0., 0.],
[alpha, rho1, rho2],
[0., 1., 0.]])
C = np.array([[0.], [sigma], [0.]])
G = np.array([[0., 1., 0.]])
# for later use, form LinearStateSpace system and pull off steady state moments
mu_z0 = np.array([[1.0], [0.0], [0.0]])
sig_z0 = np.zeros((3, 3))
Lz = qe.LinearStateSpace(A, C, G, mu_0=mu_z0, Sigma_0=sig_z0)
muz, muy, Sigz, Sigy = Lz.stationary_distributions()
# mean vector of state for the savings problem
mxo = np.vstack([muz, 0.0])
# create stationary covariance matrix of x -- start everyone off at b=0
a1 = np.zeros((3, 1))
aa = np.hstack([Sigz, a1])
bb = np.zeros((1, 4))
sxo = np.vstack([aa, bb])
# These choices will initialize the state vector of an individual at zero debt
# and the ergodic distribution of the endowment process. Use these to create
# the Bewley economy.
mxbewley = mxo
sxbewley = sxo
Explanation: Plan of the notebook
We study a version of the linear-quadratic-Gaussian model described in section 2.12 of chapter 2 of Ljungqvist and Sargent's Recursive Macroeconomic Theory
We solve the model in two ways:
as an LQ dynamic programming problem, and
as a system of expectational difference equations with boundary conditions that advise us to solve stable roots backwards and unstable roots forwards (see appendix A of chapter 2 of Ljungqvist and Sargent).
We confirm numerically that these two methods give rise to approximately the same solution. The adverb approximately is appropriate because we use a technical trick to map the problem into a well behaved LQ dynamic programming problem.
The model
The LQ permanent income model is an example of a
``savings problem.''
A consumer has preferences over consumption streams
that are ordered by
the utility functional
$$ E_0 \sum_{t=0}^\infty \beta^t u(c_t), \quad(1) $$
where $E_t$ is the mathematical expectation conditioned
on the consumer's time $t$ information, $c_t$ is time $t$ consumption,
$u(c)$ is a strictly concave one-period utility function, and
$\beta \in (0,1)$ is a discount factor. The LQ model gets its name partly from assuming that the
utility function $u$ is quadratic:
$$ u(c) = -.5(c - \gamma)^2 $$
where $\gamma>0$ is a bliss level of consumption.
The consumer maximizes
the utility functional (1) by choosing a consumption, borrowing plan
${c_t, b_{t+1}}_{t=0}^\infty$ subject to the sequence of budget constraints
$$ c_t + b_t = R^{-1} b_{t+1} + y_t, t \geq 0, \quad(2) $$
where $y_t$ is an exogenous
stationary endowment process, $R$ is a constant gross
risk-free interest rate, $b_t$ is one-period risk-free debt maturing at
$t$, and $b_0$ is a given initial condition. We shall assume
that $R^{-1} = \beta$. Equation (2) is linear. We use another set of linear equations
to model the endowment process. In particular, we assume that the endowment
process has the state-space representation
$$ \eqalign{ z_{t+1} & = A_{22} z_t + C_2 w_{t+1} \cr
y_t & = U_y z_t \cr} \quad (3) $$
where $w_{t+1}$ is an i.i.d. process with mean zero and
identity contemporaneous covariance matrix, $A_{22}$ is a stable matrix,
its eigenvalues being strictly below unity in modulus, and
$U_y$ is a selection vector that identifies $y$ with a particular
linear combination of the $z_t$.
We impose the following condition on the
consumption, borrowing plan:
$$ E_0 \sum_{t=0}^\infty \beta^t b_t^2 < +\infty. \quad (4) $$
This condition suffices to rule out Ponzi schemes. (We impose this condition to
rule out a borrow-more-and-more plan that would allow the household to
enjoy bliss consumption forever.)
The state vector confronting the household at $t$ is
$$ x_t = \left[\matrix{z_t \cr b_t\cr}\right]',$$
where $b_t$ is its one-period debt falling
due at the beginning of period $t$
and $z_t$ contains all variables useful for
forecasting its future endowment.
We shall solve the problem two ways.
First, as a linear-quadratic control dynamic programming problem that we can solve using the LQ class.
Second, as a set of expectational difference equations that we can solve with homemade programs.
Solution as an LQ problem
We can map the problem into a linear-quadratic dynamic programming problem, also known
as an optimal linear regulator problem.
The stochastic discounted linear optimal regulator problem is to
choose a decision rule for $u_t$ to
maximize
$$ - E_0\sum_{t=0}^\infty \beta^t {x'_t Rx_t+u'_tQu_t},\quad 0<\beta<1,$$
subject to $x_0$ given, and the law of motion
$$x_{t+1} = A x_t+ Bu_t+ C w_{t+1},\qquad t\geq 0, $$
where $w_{t+1}$ is an $(n\times 1)$ vector of random variables that is
independently and identically distributed according to the normal
distribution with mean vector zero and covariance matrix
$Ew_t w'_t= I .$
The value function for this problem is
$v(x)= - x'Px-d,$
where $P$ is the unique positive semidefinite solution of the discounted
algebraic matrix Riccati equation corresponding to the limit of iterations on matrix Riccati difference
equation
$$P_{j+1} =R+\beta A'P_j A-\beta^2 A'P_jB(Q+\beta B'P_jB)^{-1} B'P_jA.$$
from $P_0=0$. The optimal policy is $u_t=-Fx_t$, where $F=\beta (Q+\beta
B'PB)^{-1} B'PA$.
The scalar $d$ is given by
$ d=\beta(1-\beta)^{-1} {\rm trace} ( P C C') . $
Under an optimal decision rule $F$, the state vector $x_t$ evolves according to
$$ x_{t+1} = (A-BF) x_t + C w_{t+1} $$
$$ \left[\matrix{z_{t+1} \cr b_{t+1} \cr}\right] = \left[\matrix{ A_{22} & 0 \cr R(U_\gamma - U_y) & R } \right]\left[\matrix{z_{t} \cr b_{t} \cr}\right] +
\left[\matrix{0 \cr R}\right] (c_t - \gamma) + \left[\matrix{ C_t \cr 0 } \right] w_{t+1} $$
or
$$ x_{t+1} = A x_t + B u_t + C w_{t+1} $$
We form the quadratic form $x_t' \bar R x_t + u_t'Q u_t $ with
$Q =1$ and $\bar R$ a $ 4 \times 4$ matrix with all elements zero except for a very small entry
$\alpha >0$ in the $(4,4)$ position. (We put the $\bar \cdot$ over the $R$ to avoid ``recycling''
the $R$ notation!)
We begin by creating an instance of the state-space system (2) that governs the income ${y_t}$ process. We assume
it is a second order univariate autoregressive process:
$$ y_{t+1} = \alpha + \rho_1 y_t + \rho_2 y_{t-1} + \sigma w_{t+1} $$
End of explanation
#
# Here we create the matrices for our system
#
A12 = np.zeros((3,1))
ALQ_l = np.hstack([A, A12])
ALQ_r = np.array([[0, -R, 0, R]])
ALQ = np.vstack([ALQ_l, ALQ_r])
RLQ = np.array([[0., 0., 0., 0.],
[0., 0., 0., 0.],
[0., 0., 0., 0.],
[0., 0., 0., 1e-9]])
QLQ = np.array([1.0])
BLQ = np.array([0., 0., 0., R]).reshape(4,1)
CLQ = np.array([0., sigma, 0., 0.]).reshape(4,1)
betaLQ = beta
print("We can inspect the matrices that describe our system below")
print("A = \n", ALQ)
print("B = \n", BLQ)
print("R = \n", RLQ)
print("Q = \n", QLQ)
Explanation: It turns out that the bliss level of consumption $\gamma$ in the utility function $-.5 (c_t -\gamma)^2$
has no effect on the optimal decision rule.
(We shall see why below when we inspect the Euler equation for consumption.)
Now create the objects for the optimal linear regulator.
Here we will use a trick to induce the Bellman equation to respect restriction (4) on the debt sequence
${b_t}$. To accomplish that, we'll put a very small penalty on $b_t^2$ in the criterion function.
That will induce a (hopefully) small approximation error in the decision rule. We'll check whether it really is small numerically soon.
End of explanation
LQPI = qe.LQ(QLQ, RLQ, ALQ, BLQ, C=CLQ, beta=betaLQ)
Explanation: Now create the appropriate instance of an LQ model
End of explanation
P, F, d = LQPI.stationary_values() # Compute optimal value function and decision rule
ABF = ALQ - np.dot(BLQ,F) # Form closed loop system
Explanation: Now create the optimal policies using the analytic formulas.
We'll save the answers and will compare them with answers we get by employing an alternative solution method.
End of explanation
# Use the above formulas to create the optimal policies for $b_{t+1}$ and $c_t$
b_pol = np.dot(G, la.inv(np.eye(3, 3) - beta*A)).dot(A - np.eye(3, 3))
c_pol = (1 - beta)*np.dot(G, la.inv(np.eye(3, 3) - beta*A))
#Create the A matrix for a LinearStateSpace instance
A_LSS1 = np.vstack([A, b_pol])
A_LSS2 = np.eye(4, 1, -3)
A_LSS = np.hstack([A_LSS1, A_LSS2])
# Create the C matrix for LSS methods
C_LSS = np.vstack([C, np.zeros(1)])
# Create the G matrix for LSS methods
G_LSS1 = np.vstack([G, c_pol])
G_LSS2 = np.vstack([np.zeros(1), -(1 - beta)])
G_LSS = np.hstack([G_LSS1, G_LSS2])
# use the following values to start everyone off at b=0, initial incomes zero
# Initial Conditions
mu_0 = np.array([1., 0., 0., 0.])
sigma_0 = np.zeros((4, 4))
Explanation: Solution via a system of expectational difference equations
Now we will solve the household's optimum problem by first deducing the Euler equations that are the first-order conditions with respect to consumption and savings, then using the budget constraints and the boundary condition (4) to complete a system of expectational linear difference equations that we'll solve for the optimal consumption, debt plan.
First-order conditions for the problem are
$$ E_t u'(c_{t+1}) = u'(c_t) , \ \ \forall t \geq 0. \quad (5) $$
In our linear-quadratic model, we assume
the quadratic utility function
$u(c_t) = -.5 (c_t - \gamma)^2$,
where $\gamma$ is a bliss level of consumption. Then the consumption Euler equation becomes
$$ E_t c_{t+1} = c_t . \quad (6) $$
Along with the quadratic utility specification, we allow consumption
$c_t$ to be negative.
To deduce the optimal decision rule, we want to solve the system
of difference equations formed by (2) and (6)
subject to the boundary condition (4). To accomplish this,
solve (2) forward and impose $\lim_{T\rightarrow +\infty} \beta^T b_{T+1} =0$ to get
$$ b_t = \sum_{j=0}^\infty \beta^j (y_{t+j} - c_{t+j}) . \quad (7) $$
Imposing $\lim_{T\rightarrow +\infty} \beta^T b_{T+1} =0$ suffices to impose (4) on the debt
path.
Take conditional expectations on both sides of (7) and use (6)
and the law of iterated expectations to deduce
$$ b_t = \sum_{j=0}^\infty \beta^j E_t y_{t+j} - {1 \over 1-\beta} c_t
\quad (8) $$
or
$$ c_t = (1-\beta)
\left[ \sum_{j=0}^\infty \beta^j E_t y_{t+j} - b_t\right].
\quad (9) $$
If we define the net rate of interest $r$ by $\beta ={1 \over 1+r}$, we can
also express this
equation as
$$ c_t = {r \over 1+r}
\left[ \sum_{j=0}^\infty \beta^j E_t y_{t+j} - b_t\right]. \quad (10) $$
Equation (9) or (10) asserts that consumption equals what Irving Fisher defined as
economic income, namely, a constant
marginal propensity to consume or interest factor ${r \over 1+r}$ times
the sum of nonfinancial wealth $
\sum_{j=0}^\infty \beta^j E_t y_{t+j}$ and financial
wealth $-b_t$. Notice that (9) or (10) represents
$c_t$ as a function of the state $[b_t, z_t]$
confronting the household, where from $z_t$ contains all
information useful for forecasting the endowment process.
Pulling together our preceding results, we can regard $z_t, b_t$ as
the time $t$ state, where $z_t$ is an exogenous component of the state
and $b_t$ is an endogenous component of the state vector. The system
can be represented as
$$ \eqalign{ z_{t+1} & = A_{22} z_t + C_2 w_{t+1} \cr
b_{t+1} & = b_t + U_y [ (I -\beta A_{22})^{-1} (A_{22} - I) ] z_t \cr
y_t & = U_y z_t \cr
c_t & = (1-\beta) [ U_y(I-\beta A_{22})^{-1} z_t - b_t ]. \cr } \quad (11) $$
Now we'll apply the formulas in equation system (11).
Later we shall use them to get objects needed to form the system (11) as an instance of a LinearStateSpace class that we'll use to exhibit features of the LQ permanent income model.
End of explanation
ABF - A_LSS
Explanation: A_LSS calculated as we have here should equal ABF calculated above using the LQ model.
Here comes the check. The difference between ABF and A_LSS should be zero
End of explanation
print(c_pol, "\n", -F)
Explanation: Now compare pertinent elements of c_pol and -F
End of explanation
LSS = qe.LinearStateSpace(A_LSS, C_LSS, G_LSS, mu_0=mu_0, Sigma_0=sigma_0)
Explanation: We have verified that the two methods give the same solution.
Now let's create an instance of a LinearStateSpace model.
To do this, we'll use the outcomes from our second method.
Two examples
Now we'll generate panels of consumers. We'll study two examples that are differentiated only by the initial states with which we endow consumers. All other parameter values are kept the same in the two examples.
In the first example, all consumers begin with zero nonfinancial income and zero debt. The consumers are thus ex ante identical.
In the second example, consumers are ex ante heterogeneous. While all of them begin with zero debt, we draw their initial income levels from the invariant distribution of nonfinancial income.
In the first example, consumers' nonfinancial income paths will display pronounced transients early in the sample that will affect outcomes in striking ways. Those transient effects will not be present in the second example.
Now we'll use methods that the LinearStateSpace class contains to simulate the model with our first set of initial conditions.
25 paths of the exogenous non-financial income process and the associated consumption and debt paths. In the first set of graphs, the darker lines depict one particular sample path, while the lighter lines indicate the other 24 paths.
A second graph that plots a collection of simulations against the population distribution that we extract from the LinearStateSpace instance LSS
End of explanation
def income_consumption_debt_series(A, C, G, m0, s0, T=150, npaths=25):
    """This function takes initial conditions (m0, s0) and uses the Linear State Space
    class from QuantEcon to simulate an economy `npaths` times for `T` periods.
    It then uses that information to generate some graphs related to the discussion
    below.
    """
LSS = qe.LinearStateSpace(A, C, G, mu_0=m0, Sigma_0=s0)
# Simulation/Moment Parameters
moment_generator = LSS.moment_sequence()
# Simulate various paths
bsim = np.empty((npaths, T))
csim = np.empty((npaths, T))
ysim = np.empty((npaths, T))
for i in range(npaths):
sims = LSS.simulate(T)
bsim[i, :] = sims[0][-1, :]
csim[i, :] = sims[1][1, :]
ysim[i, :] = sims[1][0, :]
# Get the moments
cons_mean = np.empty(T)
cons_var = np.empty(T)
debt_mean = np.empty(T)
debt_var = np.empty(T)
for t in range(T):
mu_x, mu_y, sig_x, sig_y = next(moment_generator)
cons_mean[t], cons_var[t] = mu_y[1], sig_y[1, 1]
debt_mean[t], debt_var[t] = mu_x[3], sig_x[3, 3]
return bsim, csim, ysim, cons_mean, cons_var, debt_mean, debt_var
def consumption_income_debt_figure(bsim, csim, ysim):
# Get T
T = bsim.shape[1]
# Create first figure
fig, ax = plt.subplots(2, 1, figsize=(10, 8))
xvals = np.arange(T)
# Plot consumption and income
ax[0].plot(csim[0, :], label="c", color="b")
ax[0].plot(ysim[0, :], label="y", color="g")
ax[0].plot(csim.T, alpha=.1, color="b")
ax[0].plot(ysim.T, alpha=.1, color="g")
ax[0].legend(loc=4)
ax[0].set_xlabel("t")
ax[0].set_ylabel("y and c")
# Plot debt
ax[1].plot(bsim[0, :], label="b", color="r")
ax[1].plot(bsim.T, alpha=.1, color="r")
ax[1].legend(loc=4)
ax[1].set_xlabel("t")
ax[1].set_ylabel("debt")
fig.suptitle("Nonfinancial Income, Consumption, and Debt")
return fig
def consumption_debt_fanchart(csim, cons_mean, cons_var,
bsim, debt_mean, debt_var):
# Get T
T = bsim.shape[1]
# Create Percentiles of cross-section distributions
cmean = np.mean(cons_mean)
c90 = 1.65*np.sqrt(cons_var)
c95 = 1.96*np.sqrt(cons_var)
c_perc_95p, c_perc_95m = cons_mean + c95, cons_mean - c95
c_perc_90p, c_perc_90m = cons_mean + c90, cons_mean - c90
# Create Percentiles of cross-section distributions
dmean = np.mean(debt_mean)
d90 = 1.65*np.sqrt(debt_var)
d95 = 1.96*np.sqrt(debt_var)
d_perc_95p, d_perc_95m = debt_mean + d95, debt_mean - d95
d_perc_90p, d_perc_90m = debt_mean + d90, debt_mean - d90
# Create second figure
fig2, ax2 = plt.subplots(2, 1, figsize=(10, 8))
xvals = np.arange(T)
# Consumption fan
ax2[0].plot(xvals, cons_mean, color="k")
ax2[0].plot(csim.T, color="k", alpha=.25)
ax2[0].fill_between(xvals, c_perc_95m, c_perc_95p, alpha=.25, color="b")
ax2[0].fill_between(xvals, c_perc_90m, c_perc_90p, alpha=.25, color="r")
ax2[0].set_ylim((cmean-15, cmean+15))
ax2[0].set_ylabel("consumption")
# Debt fan
ax2[1].plot(xvals, debt_mean, color="k")
ax2[1].plot(bsim.T, color="k", alpha=.25)
ax2[1].fill_between(xvals, d_perc_95m, d_perc_95p, alpha=.25, color="b")
ax2[1].fill_between(xvals, d_perc_90m, d_perc_90p, alpha=.25, color="r")
# ax2[1].set_ylim()
ax2[1].set_ylabel("debt")
fig2.suptitle("Consumption/Debt over time")
ax2[1].set_xlabel("t")
return fig2
# Creates pictures with initial conditions of 0.0 for y and b
out = income_consumption_debt_series(A_LSS, C_LSS, G_LSS, mu_0, sigma_0)
bsim0, csim0, ysim0 = out[:3]
cons_mean0, cons_var0, debt_mean0, debt_var0 = out[3:]
fig_0 = consumption_income_debt_figure(bsim0, csim0, ysim0)
fig_02 = consumption_debt_fanchart(csim0, cons_mean0, cons_var0,
bsim0, debt_mean0, debt_var0)
fig_0.show()
fig_02.show()
Explanation: Population and sample panels
In the code below, we use the LinearStateSpace class to
compute and plot population quantiles of the distributions of consumption and debt for a population of consumers
simulate a group of 25 consumers and plot sample paths on the same graph as the population distribution
End of explanation
def cointegration_figure(bsim, csim):
    """Plots the cointegration."""
# Create figure
fig, ax = plt.subplots(figsize=(10, 8))
ax.plot((1-beta)*bsim[0, :] + csim[0, :], color="k")
ax.plot((1-beta)*bsim.T + csim.T, color="k", alpha=.1)
fig.suptitle("Cointegration of Assets and Consumption")
ax.set_xlabel("t")
ax.set_ylabel("")
return fig
fig = cointegration_figure(bsim0, csim0)
fig.show()
Explanation: First example
Here is what is going on in the above graphs.
Because we have set $y_{-1} = y_{-2} = 0$, nonfinancial income $y_t$ starts far below its stationary mean
$\mu_{y, \infty}$ and rises early in each simulation.
To help interpret the graph above, recall that we can represent the optimal decision rule for consumption
in terms of the co-integrating relationship
$$ (1-\beta) b_t + c_t = (1-\beta) E_t \sum_{j=0}^\infty \beta^j y_{t+j}, $$
For our simulation, we have set initial conditions $b_0 = y_{-1} = y_{-2} = 0$ (please see the code above).
So at time $0$ we have
$$ c_0 = (1-\beta) E_0 \sum_{t=0}^\infty \beta^t y_{t} . $$
This tells us that consumption starts at the value of an annuity from the expected discounted value of nonfinancial
income. To support that level of consumption, the consumer borrows a lot early on, building up substantial debt.
In fact, he or she incurs so much debt that eventually, in the stochastic steady state, he consumes less each period than his income. He uses the gap between consumption and income mostly to service the interest payments due on his debt.
Thus, when we look at the panel of debt in the accompanying graph, we see that this is a group of ex ante identical people each of whom starts with zero debt. All of them accumulate debt in anticipation of rising nonfinancial income. They expect their nonfinancial income to rise toward the invariant distribution of income, a consequence of our having started them at $y_{-1} = y_{-2} = 0$.
Illustration of cointegration
The LQ permanent income model is a good one for illustrating the concept of cointegration.
The following figure plots realizations of the left side of
$$ (1-\beta) b_t + c_t = (1-\beta) E_t \sum_{j=0}^\infty \beta^j y_{t+j}, \quad (12) $$
which is called the cointegrating residual.
Notice that it equals the right side, namely, $(1-\beta) E_t \sum_{j=0}^\infty \beta^j y_{t+j}$,
which equals an annuity payment on the expected present value of future income $E_t \sum_{j=0}^\infty \beta^j y_{t+j}$.
Early along a realization, $c_t$ is approximately constant while $(1-\beta) b_t$ and $(1-\beta) E_t \sum_{j=0}^\infty \beta^j y_{t+j}$ both rise markedly as the household's present value of income and borrowing rise pretty much together.
Note: This example illustrates the following point: the definition of cointegration implies that the cointegrating residual is asymptotically covariance stationary, not covariance stationary. The cointegrating residual for the specification with zero income and zero debt initially has a notable transient component that dominates its behavior early in the sample. By specifying different initial conditions, we shall remove this transient in our second example to be presented below.
End of explanation
# Creates pictures with initial conditions of 0.0 for b and y from invariant distribution
out = income_consumption_debt_series(A_LSS, C_LSS, G_LSS, mxbewley, sxbewley)
bsimb, csimb, ysimb = out[:3]
cons_meanb, cons_varb, debt_meanb, debt_varb = out[3:]
fig_0 = consumption_income_debt_figure(bsimb, csimb, ysimb)
fig_02 = consumption_debt_fanchart(csimb, cons_meanb, cons_varb,
bsimb, debt_meanb, debt_varb)
fig = cointegration_figure(bsimb, csimb)
fig.show()
Explanation: A "borrowers and lenders" closed economy
When we set $y_{-1} = y_{-2} = 0$ and $b_0 =0$ in the preceding exercise, we make debt "head north" early in the sample. Average debt rises and approaches an asymptote.
We can regard these as outcomes of a ``small open economy'' that borrows from abroad at the fixed gross interest rate $R$ in anticipation of rising incomes.
So with the economic primitives set as above, the economy converges to a steady state in which there is an excess aggregate supply of risk-free loans at a gross interest rate of $R$. This excess supply is filled by ``foreign lenders'' willing to make those loans.
We can use virtually the same code to rig a "poor man's Bewley model" in the following way.
as before, we start everyone at $b_0 = 0$.
But instead of starting everyone at $y_{-1} = y_{-2} = 0$, we draw $\begin{bmatrix} y_{-1} \cr y_{-2}
\end{bmatrix}$ from the invariant distribution of the ${y_t}$ process.
This rigs a closed economy in which people are borrowing and lending with each other at a gross risk-free
interest rate of $R = \beta^{-1}$. Here within the group of people being analyzed, risk-free loans are in zero excess supply. We have arranged primitives so that $R = \beta^{-1}$ clears the market for risk-free loans at zero aggregate excess supply. There is no need for foreigners to lend to our group.
The following graphs confirm the following outcomes:
as before, the consumption distribution spreads out over time. But now there is some initial dispersion because there is ex ante heterogeneity in the initial draws of $\begin{bmatrix} y_{-1} \cr y_{-2}
\end{bmatrix}$.
as before, the cross-section distribution of debt spreads out over time.
Unlike before, the average level of debt stays at zero, reflecting that this is a closed borrower-and-lender economy.
Now the cointegrating residual seems stationary, and not just asymptotically stationary.
End of explanation |
9,408 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Let's reproject to Albers or something with distance
Step1: Uncomment to reproject
proj string taken from
Step2: Model Fitting Using a GLM
The general model will have the form
Step3: Fitted parameters (From HEC)
Step4: Predictions with +2std.Dev
Step5: Predicted means
Step6: Model Analysis
Step7: Let's calculate the residuals
Step8: Experiment
In this section we will bring a Raster Data from the US, using Biospytial Raster API.
1. First select a polygon, then get a raster from there, say Mean Temperature. | Python Code:
new_data.crs = {'init':'epsg:4326'}
Explanation: Let's reproject to Albers or something with distance
End of explanation
#new_data = new_data.to_crs("+proj=aea +lat_1=29.5 +lat_2=45.5 +lat_0=37.5 +lon_0=-96 +x_0=0 +y_0=0 +ellps=GRS80 +datum=NAD83 +units=m +no_defs ")
Explanation: Uncomment to reproject
proj string taken from: http://spatialreference.org/
End of explanation
##### OLD #######
len(data.lon)
#X = data[['AET','StandAge','lon','lat']]
#X = data[['SppN','lon','lat']]
X = data[['lon','lat']]
#Y = data['plotBiomass']
Y = data[['SppN']]
## First step in spatial autocorrelation
#Y = pd.DataFrame(np.zeros(len(Y)))
## Let's take a small sample only for the spatial autocorrelation
#import numpy as np
#sample_size = 2000
#randindx = np.random.randint(0,X.shape[0],sample_size)
#nX = X.loc[randindx]
#nY = Y.loc[randindx]
nX = X
nY = Y
## Small function for systematically selecting the k-th element of the data.
#### Suggestion: use a small k for now, i.e. 10
systematic_selection = lambda k : filter(lambda i : not(i % k) ,range(len(data)))
idx = systematic_selection(50)
print(len(idx))
nX = X.loc[idx]
nY = Y.loc[idx]
new_data = data.loc[idx]
len(new_data)
# Import GPFlow
import GPflow as gf
k = gf.kernels.Matern12(2,ARD=False,active_dims = [0,1]) + gf.kernels.Bias(1)
#k = gf.kernels.Matern12(2, lengthscales=1, active_dims = [0,1] ) + gf.kernels.Bias(1)
l = gf.likelihoods.Poisson()
model = gf.gpmc.GPMC(nX.as_matrix(), nY.as_matrix().reshape(len(nY),1).astype(float), k, l)
#model = gf.gpr.GPR(nX.as_matrix(),nY.as_matrix().reshape(len(nY),1).astype(float),k)
## If priors
#model.kern.matern12.lengthscales.prior = gf.priors.Gaussian(25.0,3.0)
#model.kern.matern32.variance.prior = GPflow.priors.Gamma(1.,1.)
#model.kern.bias.variance.prior = gf.priors.Gamma(1.,1.)
## Optimize
%time model.optimize(maxiter=100) # start near MAP
model.kern
samples = model.sample(50, verbose=True, epsilon=1, Lmax=100)
Explanation: Model Fitting Using a GLM
The general model will have the form:
$$ Biomass(x,y) = \beta_1 AET + \beta_2 Age + Z(x,y) + \epsilon $$
Where:
$\beta_1$ and $\beta_2$ are model parameters, $Z(x,y)$ is the Spatial Autocorrelation process and $\epsilon \sim N(0,\sigma^2)$
End of explanation
model.kern.lengthscales = 25.4846122373
model.kern.variance = 10.9742076021
model.likelihood.variance = 4.33463026664
%time mm = k.compute_K_symm(X.as_matrix())
import numpy as np
Nn = 500
dsc = data
predicted_x = np.linspace(min(dsc.lon),max(dsc.lon),Nn)
predicted_y = np.linspace(min(dsc.lat),max(dsc.lat),Nn)
Xx, Yy = np.meshgrid(predicted_x,predicted_y)
## Fake richness
fake_sp_rich = np.ones(len(Xx.ravel()))
predicted_coordinates = np.vstack([ Xx.ravel(), Yy.ravel()]).transpose()
#predicted_coordinates = np.vstack([section.SppN, section.newLon,section.newLat]).transpose()
len(predicted_coordinates)
#We will calculate everything with the new model and parameters
#model = gf.gpr.GPR(X.as_matrix(),Y.as_matrix().reshape(len(Y),1).astype(float),k)
%time means,variances = model.predict_y(predicted_coordinates)
Explanation: Fitted parameters (From HEC)
End of explanation
#Using k-partition = 7
import cartopy
plt.figure(figsize=(17,11))
proj = cartopy.crs.PlateCarree()
ax = plt.subplot(111, projection=proj)
ax = plt.axes(projection=proj)
#algo = new_data.plot(column='SppN',ax=ax,cmap=colormap,edgecolors='')
#ax.set_extent([-93, -70, 30, 50])
ax.set_extent([-125, -60, 20, 50])
#ax.set_extent([-95, -70, 25, 45])
#ax.add_feature(cartopy.feature.LAND)
ax.add_feature(cartopy.feature.OCEAN)
ax.add_feature(cartopy.feature.COASTLINE)
ax.add_feature(cartopy.feature.BORDERS, linestyle=':')
ax.add_feature(cartopy.feature.LAKES, alpha=0.9)
ax.stock_img()
#ax.add_geometries(new_data.geometry,crs=cartopy.crs.PlateCarree())
#ax.add_feature(cartopy.feature.RIVERS)
mm = ax.pcolormesh(Xx,Yy,means.reshape(Nn,Nn) + (2* np.sqrt(variances).reshape(Nn,Nn)),transform=proj )
#cs = plt.contour(Xx,Yy,np.sqrt(variances).reshape(Nn,Nn),linewidths=2,cmap=plt.cm.Greys_r,linestyles='dotted')
cs = plt.contour(Xx,Yy,means.reshape(Nn,Nn) + (2 * np.sqrt(variances).reshape(Nn,Nn)),linewidths=2,colors='k',linestyles='dotted',levels=range(1,20))
plt.clabel(cs, fontsize=16,inline=True,fmt='%1.1f')
#ax.scatter(new_data.lon,new_data.lat,edgecolors='',color='white',alpha=0.6)
plt.colorbar(mm)
plt.title("Predicted Species Richness + 2stdev")
Explanation: Predictions with +2std.Dev
End of explanation
#Using k-partition = 7
import cartopy
plt.figure(figsize=(17,11))
proj = cartopy.crs.PlateCarree()
ax = plt.subplot(111, projection=proj)
ax = plt.axes(projection=proj)
#algo = new_data.plot(column='SppN',ax=ax,cmap=colormap,edgecolors='')
#ax.set_extent([-93, -70, 30, 50])
ax.set_extent([-125, -60, 20, 50])
#ax.set_extent([-95, -70, 25, 45])
#ax.add_feature(cartopy.feature.LAND)
ax.add_feature(cartopy.feature.OCEAN)
ax.add_feature(cartopy.feature.COASTLINE)
ax.add_feature(cartopy.feature.BORDERS, linestyle=':')
ax.add_feature(cartopy.feature.LAKES, alpha=0.9)
ax.stock_img()
#ax.add_geometries(new_data.geometry,crs=cartopy.crs.PlateCarree())
#ax.add_feature(cartopy.feature.RIVERS)
mm = ax.pcolormesh(Xx,Yy,means.reshape(Nn,Nn),transform=proj )
#cs = plt.contour(Xx,Yy,np.sqrt(variances).reshape(Nn,Nn),linewidths=2,cmap=plt.cm.Greys_r,linestyles='dotted')
cs = plt.contour(Xx,Yy,means.reshape(Nn,Nn),linewidths=2,colors='k',linestyles='dotted',levels=range(1,20))
plt.clabel(cs, fontsize=16,inline=True,fmt='%1.1f')
#ax.scatter(new_data.lon,new_data.lat,edgecolors='',color='white',alpha=0.6)
plt.colorbar(mm)
plt.title("Predicted Species Richness")
#Using k-partition = 7
import cartopy
plt.figure(figsize=(17,11))
proj = cartopy.crs.PlateCarree()
ax = plt.subplot(111, projection=proj)
ax = plt.axes(projection=proj)
#algo = new_data.plot(column='SppN',ax=ax,cmap=colormap,edgecolors='')
#ax.set_extent([-93, -70, 30, 50])
ax.set_extent([-125, -60, 20, 50])
#ax.set_extent([-95, -70, 25, 45])
#ax.add_feature(cartopy.feature.LAND)
ax.add_feature(cartopy.feature.OCEAN)
ax.add_feature(cartopy.feature.COASTLINE)
ax.add_feature(cartopy.feature.BORDERS, linestyle=':')
ax.add_feature(cartopy.feature.LAKES, alpha=0.9)
ax.stock_img()
#ax.add_geometries(new_data.geometry,crs=cartopy.crs.PlateCarree())
#ax.add_feature(cartopy.feature.RIVERS)
mm = ax.pcolormesh(Xx,Yy,means.reshape(Nn,Nn) - (2* np.sqrt(variances).reshape(Nn,Nn)),transform=proj )
#cs = plt.contour(Xx,Yy,np.sqrt(variances).reshape(Nn,Nn),linewidths=2,cmap=plt.cm.Greys_r,linestyles='dotted')
cs = plt.contour(Xx,Yy,means.reshape(Nn,Nn) - (2 * np.sqrt(variances).reshape(Nn,Nn)),linewidths=2,colors='k',linestyles='dotted',levels=[4.0,5.0,6.0,7.0,8.0])
plt.clabel(cs, fontsize=16,inline=True,fmt='%1.1f')
#ax.scatter(new_data.lon,new_data.lat,edgecolors='',color='white',alpha=0.6)
plt.colorbar(mm)
plt.title("Predicted Species Richness - 2stdev")
Explanation: Predicted means
End of explanation
model.get_parameter_dict()
Explanation: Model Analysis
End of explanation
X_ = data[['LON','LAT']]
%time Y_hat = model.predict_y(X_)
pred_y = pd.DataFrame(Y_hat[0])
var_y = pd.DataFrame(Y_hat[1])
new_data['pred_y'] = pred_y
new_data['var_y'] = var_y
new_data= new_data.assign(error=lambda y : (y.SppN - y.pred_y)**2 )
new_data.error.hist(bins=50)
print(new_data.error.mean())
print(new_data.error.std())
Explanation: Let's calculate the residuals
End of explanation
import raster_api.tools as rt
from raster_api.models import MeanTemperature,ETOPO1,Precipitation,SolarRadiation
from sketches.models import Country
## Select US
us_border = Country.objects.filter(name__contains='United States')[1]
from django.db import close_old_connections
close_old_connections()
#Get Raster API
us_meantemp = rt.RasterData(Precipitation,us_border.geom)
us_meantemp.getRaster()
us_meantemp.display_field()
%time coords = us_meantemp.getCoordinates()
Explanation: Experiment
In this section we will bring a Raster Data from the US, using Biospytial Raster API.
1. First select a polygon, then get a raster from there, say Mean Temperature.
End of explanation |
9,409 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Train Station Data Cleaner
Data source
https://www.data.vic.gov.au/data/dataset/train-station-entries-2008-09-to-2011-12-new
Step1: Step 1
Step2: Comparison with bus and tram reports.
Station entries v Boardings and alightings
The train station entry data does not provide information about how many people got on or off a train. (There is no boarding or alighting information). Train Station entries can only measure the level of activity over the course of a day at a particular station.
Step 2
Step3: Step 3 | Python Code:
rawtrain = './raw/Train Station Entries 2008-09 to 2011-12 - data.XLS'
Explanation: Train Station Data Cleaner
Data source
https://www.data.vic.gov.au/data/dataset/train-station-entries-2008-09-to-2011-12-new
Data Temporal Coverage: 01/07/2008 to 30/06/2012
Comparable data with buses and trams [Weekday by time 'AM Peak', 'Interpeak', 'PM Peak']
Data Source
https://www.data.vic.gov.au/data/dataset/train-station-entries-2008-09-to-2011-12-new
Data Temporal Coverage: 01/07/2008 to 30/06/2012
End of explanation
import pandas as pd
df = pd.read_excel(rawtrain,sheetname='Data', header = 0, skiprows = 1, skip_footer=5)
df
Explanation: Step 1: Download raw tram boarding data, save a local copy in ./raw directory
Download Tram boardings and alightings xls file manually. The web page has a 'I consent to terms and conditions / I am not a robot' button that prevents automated downloading (or at least makes it harder than I expected). Save file to './raw' directory
End of explanation
trains = df.loc[:, ['Station','AM Peak','Interpeak','PM Peak']]
trains['wk7am7pm'] = trains['AM Peak'] + trains['Interpeak'] + trains['PM Peak']
Explanation: Comparison with bus and tram reports.
Station entries v Boardings and alightings
The train station entry data does not provide information about how many people got on or off a train. (There is no boarding or alighting information). Train Station entries can only measure the level of activity over the course of a day at a particular station.
Step 2: Subset out the weekday 7am to 7pm station entry data
The Train station entry report covers the entire operating day. The bus and tram reports cover only the 7am to 7pm period.
Train station entries are broken into four time periods. They are not specieifed on the Data Vic Gov Au website, but they are specified on the PTV Research website https://www.ptv.vic.gov.au/about-ptv/ptv-data-and-reports/research-and-statistics/
Pre AM Peak - first service to 6:59am
AM Peak - 7:00am to 9:29am
Interpeak - 9:30am to 2:59pm
PM Peak - 3:00pm to 6:59pm
PM Late - 7:00pm to last service
To compare with bus and tram boardings, sum the 'AM Peak', 'Interpeak' and 'PM Peak' columns from the 2011 weekday dataset to create a 'wk7am7pm' value.
End of explanation
trains.to_csv('./clean/TrainStationEntries.csv')
trains
Explanation: Step 3: Create a .csv file with weekday 7am to 7pm station entries for each stop
This step writes out the weekday 7am to 7pm entry totals for each train station (the 'wk7am7pm' column computed above, alongside the three peak-period columns).
Results are saved as
'./clean/TrainStationEntries.csv'
End of explanation |
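As a quick sanity check, the cleaned file can be read back to confirm that the derived 'wk7am7pm' column matches the sum of the three peak-period columns; a small sketch (column names as written above):
```
check = pd.read_csv('./clean/TrainStationEntries.csv', index_col=0)
diff = (check['AM Peak'] + check['Interpeak'] + check['PM Peak'] - check['wk7am7pm']).abs()
print('max difference:', diff.max())
check.head()
```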
9,410 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
Step3: Explore the Data
Play around with view_sentence_range to view different parts of the data.
Step6: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below
Step9: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token
Step11: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
Step13: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step15: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below
Step18: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders
Step21: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The Rnn size should be set using rnn_size
- Initalize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
Step24: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
Step27: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final_state state in the following tuple (Outputs, FinalState)
Step30: Build the Neural Network
Apply the functions you implemented above to
Step33: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements
Step35: Neural Network Training
Hyperparameters
Tune the following parameters
Step37: Build the Graph
Build the graph using the neural network you implemented.
Step39: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
Step41: Save Parameters
Save seq_length and save_dir for generating a new TV script.
Step43: Checkpoint
Step46: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names
Step49: Choose Word
Implement the pick_word() function to select the next word using probabilities.
Step51: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate. | Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
Explanation: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
End of explanation
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
import numpy as np
import problem_unittests as tests
def create_lookup_tables(text):
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
# TODO: Implement Function
vocab = set(text)
vocab_to_int = {w: i for i,w in enumerate(vocab)}
int_to_vocab = {vocab_to_int[w]: w for w in vocab_to_int}
return vocab_to_int, int_to_vocab
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_create_lookup_tables(create_lookup_tables)
Explanation: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:
- Lookup Table
- Tokenize Punctuation
Lookup Table
To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:
- Dictionary to go from the words to an id, we'll call vocab_to_int
- Dictionary to go from the id to word, we'll call int_to_vocab
Return these dictionaries in the following tuple (vocab_to_int, int_to_vocab)
End of explanation
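A quick illustration of what the lookup tables contain for a tiny word list (the exact ids depend on set iteration order, so they may differ between runs):
```
sample_words = ['moe', 'bart', 'homer', 'moe']
v2i, i2v = create_lookup_tables(sample_words)
print(v2i)  # e.g. {'homer': 0, 'moe': 1, 'bart': 2} - ids are arbitrary
print(i2v)
```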
def token_lookup():
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
# TODO: Implement Function
tokens = {'.' : '||Period||',
',' : '||Comma||' ,
'"' : '||QuotationMark||',
';' : '||Semicolon||',
'!' : '||ExclamationMark||',
'?' : '||QuestionMark||',
'(' : '||LeftParenthesis||',
')' : '||RightParenthesis||',
'--': '||Dash||',
'\n': '||Return||'}
return tokens
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_tokenize(token_lookup)
Explanation: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:
- Period ( . )
- Comma ( , )
- Quotation Mark ( " )
- Semicolon ( ; )
- Exclamation mark ( ! )
- Question mark ( ? )
- Left Parentheses ( ( )
- Right Parentheses ( ) )
- Dash ( -- )
- Return ( \n )
This dictionary will be used to token the symbols and add the delimiter (space) around it. This separates the symbols as it's own word, making it easier for the neural network to predict on the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token "dash", try using something like "||dash||".
End of explanation
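To see how the token dictionary is meant to be applied, here is a small illustrative sketch of the substitution step (the actual preprocessing is handled by `helper.preprocess_and_save_data` in the next cell, so this is only a demonstration):
```
tokens = token_lookup()
sample = "Moe_Szyslak: Hey, what can I get you?"
for symbol, token in tokens.items():
    sample = sample.replace(symbol, ' {} '.format(token))
print(sample.lower().split())
```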
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
Explanation: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below:
- get_inputs
- get_init_cell
- get_embed
- build_rnn
- build_nn
- get_batches
Check the Version of TensorFlow and Access to GPU
End of explanation
def get_inputs():
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
# TODO: Implement Function
input = tf.placeholder(tf.int32, shape=[None,None], name='input')
targets = tf.placeholder(tf.int32, shape=[None,None], name='targets')
learning_rate = tf.placeholder(tf.float32, name='learning_rate')
return input, targets, learning_rate
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_inputs(get_inputs)
Explanation: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Input text placeholder named "input" using the TF Placeholder name parameter.
- Targets placeholder
- Learning Rate placeholder
Return the placeholders in the following tuple (Input, Targets, LearningRate)
End of explanation
lstm_layers = 1
def get_init_cell(batch_size, rnn_size):
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
# TODO: Implement Function
cell = tf.contrib.rnn.BasicLSTMCell(rnn_size)
cell = tf.contrib.rnn.MultiRNNCell([cell]*lstm_layers)
#TODO: add dropout?
initial_state = cell.zero_state(batch_size, tf.int32)
initial_state = tf.identity(initial_state, name='initial_state')
return cell, initial_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_init_cell(get_init_cell)
Explanation: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The Rnn size should be set using rnn_size
- Initalize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
End of explanation
def get_embed(input_data, vocab_size, embed_dim):
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
# TODO: Implement Function
#with graph.as_default():
embeddings = tf.Variable(tf.truncated_normal((vocab_size, embed_dim),
stddev=0.1))
embed = tf.nn.embedding_lookup(embeddings, input_data)
return embed
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_embed(get_embed)
Explanation: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
End of explanation
def build_rnn(cell, inputs):
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
# TODO: Implement Function
print('inputs.shape={}'.format(inputs.shape))
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
final_state = tf.identity(final_state, name='final_state')
print(outputs.shape)
#print(final_state.shape)
return outputs, final_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_rnn(build_rnn)
Explanation: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final_state state in the following tuple (Outputs, FinalState)
End of explanation
def build_nn(cell, rnn_size, input_data, vocab_size):
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:return: Tuple (Logits, FinalState)
# TODO: Implement Function
embed_dim = 200
print('input_data.shape={}'.format(input_data.shape))
print('vocab_size={}'.format(vocab_size))
embedded = get_embed(input_data, vocab_size, embed_dim)
print('embedded.shape={}'.format(embedded.shape))
outputs, final_state = build_rnn(cell, embedded)
print('outputs.shape={}'.format(outputs.shape))
logits = tf.contrib.layers.fully_connected(outputs, vocab_size, activation_fn=None)
return logits, final_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_nn(build_nn)
Explanation: Build the Neural Network
Apply the functions you implemented above to:
- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.
- Build RNN using cell and your build_rnn(cell, inputs) function.
- Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.
Return the logits and final state in the following tuple (Logits, FinalState)
End of explanation
def get_batches(int_text, batch_size, seq_length):
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
# TODO: Implement Function
n_batches = len(int_text)//(batch_size*seq_length)
batches = np.zeros([n_batches, 2, batch_size, seq_length])
for i1 in range(n_batches):
for i2 in range(2):
for i3 in range(batch_size):
pos = i1*seq_length+i2+2*seq_length*i3
batches[i1,i2,i3,:] = int_text[pos:pos+seq_length]
return batches
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_batches(get_batches)
Explanation: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:
- The first element is a single batch of input with the shape [batch size, sequence length]
- The second element is a single batch of targets with the shape [batch size, sequence length]
If you can't fill the last batch with enough data, drop the last batch.
For exmple, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20], 3, 2) would return a Numpy array of the following:
```
[
# First Batch
[
# Batch of Input
[[ 1 2], [ 7 8], [13 14]]
# Batch of targets
[[ 2 3], [ 8 9], [14 15]]
]
# Second Batch
[
# Batch of Input
[[ 3 4], [ 9 10], [15 16]]
# Batch of targets
[[ 4 5], [10 11], [16 17]]
]
# Third Batch
[
# Batch of Input
[[ 5 6], [11 12], [17 18]]
# Batch of targets
[[ 6 7], [12 13], [18 1]]
]
]
```
Notice that the last target value in the last batch is the first input value of the first batch. In this case, 1. This is a common technique used when creating sequence batches, although it is rather unintuitive.
End of explanation
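A quick shape check of the implementation against the worked example above (only the shape is checked here; the exact element layout depends on the indexing scheme used in the function):
```
demo = get_batches(list(range(1, 21)), 3, 2)
print(demo.shape)  # expected: (3, 2, 3, 2)
```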
# Number of Epochs
num_epochs = 200
# Batch Size
batch_size = 128
# RNN Size
rnn_size = 512
# Sequence Length
seq_length = 200
# Learning Rate
learning_rate = 0.01
# Show stats for every n number of batches
show_every_n_batches = 10
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
save_dir = './save'
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set num_epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set seq_length to the length of sequence.
Set learning_rate to the learning rate.
Set show_every_n_batches to the number of batches the neural network should print progress.
End of explanation
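With these settings it is worth checking how many batches fit into one epoch before training starts; a small sketch:
```
n_batches_per_epoch = len(int_text) // (batch_size * seq_length)
print('Batches per epoch: {}'.format(n_batches_per_epoch))
```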
DON'T MODIFY ANYTHING IN THIS CELL
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
Explanation: Save Parameters
Save seq_length and save_dir for generating a new TV script.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
Explanation: Checkpoint
End of explanation
def get_tensors(loaded_graph):
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
# TODO: Implement Function
input = loaded_graph.get_tensor_by_name('input:0')
init_state = loaded_graph.get_tensor_by_name('initial_state:0')
final_state = loaded_graph.get_tensor_by_name('final_state:0')
probs = loaded_graph.get_tensor_by_name('probs:0')
return input, init_state, final_state, probs
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_tensors(get_tensors)
Explanation: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names:
- "input:0"
- "initial_state:0"
- "final_state:0"
- "probs:0"
Return the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
End of explanation
def pick_word(probabilities, int_to_vocab):
Pick the next word in the generated text
:param probabilities: Probabilites of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
# TODO: Implement Function
#return int_to_vocab[np.argmax(probabilities)]
idx = np.random.choice(range(len(probabilities)), p=probabilities)
return int_to_vocab[idx]
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_pick_word(pick_word)
Explanation: Choose Word
Implement the pick_word() function to select the next word using probabilities.
End of explanation
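The sampling behaviour can be illustrated with a toy vocabulary; a minimal sketch (the id-to-word mapping below is made up purely for illustration):
```
toy_int_to_vocab = {0: 'beer', 1: 'moe', 2: 'homer'}
toy_probs = np.array([0.7, 0.2, 0.1])
print([pick_word(toy_probs, toy_int_to_vocab) for _ in range(5)])
```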
gen_length = 200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
Explanation: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate.
End of explanation |
9,411 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<p>1. Load the `"virgo_novisc.0054.gdf"` dataset from the `"data"` directory.</p>
Step1: <p>2. Create a `SlicePlot` of temperature along the y-axis, with a width of 0.4 Mpc. Change the colormap to "algae". Annotate the magnetic field vectors.</p>
Step2: <p>3. What happens if you supply a vector, say `[0.1, -0.3, 0.4]`, to the second argument of `SlicePlot`? Try it.</p>
Step3: <p>4. Now make a `ProjectionPlot` of the `"velocity_x"` field, weighted by the `"density"` field, along the x-axis. Use the `set_log` method to make the plot have linear scaling, and change the units to km/s.</p> | Python Code:
ds = yt.load("../data/virgo_novisc.0054.gdf")
Explanation: <p>1. Load the `"virgo_novisc.0054.gdf"` dataset from the `"data"` directory.</p>
End of explanation
slc = yt.SlicePlot(ds, "y", ["temperature"], width=(0.4, "Mpc"))
slc.set_cmap("temperature", "algae")
slc.annotate_magnetic_field()
Explanation: <p>2. Create a `SlicePlot` of temperature along the y-axis, with a width of 0.4 Mpc. Change the colormap to "algae". Annotate the magnetic field vectors.</p>
End of explanation
slc = yt.SlicePlot(ds, [0.1, -0.3, 0.4], ["temperature"], width=(0.4, "Mpc"))
slc.set_cmap("temperature", "algae")
Explanation: <p>3. What happens if you supply a vector, say `[0.1, -0.3, 0.4]`, to the second argument of `SlicePlot`? Try it.</p>
End of explanation
prj = yt.ProjectionPlot(ds, 'x', ["velocity_x"], weight_field="density", width=(0.4, "Mpc"))
prj.set_log("velocity_x", False)
prj.set_unit("velocity_x", "km/s")
Explanation: <p>4. Now make a `ProjectionPlot` of the `"velocity_x"` field, weighted by the `"density"` field, along the x-axis. Use the `set_log` method to make the plot have linear scaling, and change the units to km/s.</p>
End of explanation |
9,412 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Writing on files
This is a Python notebook in which you will practice the concepts learned during the lectures.
Startup ROOT
Import the ROOT module
Step1: Writing histograms
Create a TFile containing three histograms filled with random numbers distributed according to a Gaus, an exponential and a uniform distribution.
Close the file
Step2: Now, you can invoke the ls command from within the notebook to list the files in this directory. Check that the file is there. You can invoke the rootls command to see what's inside the file.
Step3: Access the histograms and draw them in Python. Remember that you need to create a TCanvas before and draw it too in order to inline the plots in the notebooks.
You can switch to the interactive JavaScript visualisation using the %jsroot on "magic" command.
Step4: You can now repeat the exercise above using C++. Transform the cell in a C++ cell using the %%cpp "magic".
Step5: Inspect the content of the file | Python Code:
import ROOT
Explanation: Writing on files
This is a Python notebook in which you will practice the concepts learned during the lectures.
Startup ROOT
Import the ROOT module: this will activate the integration layer with the notebook automatically
End of explanation
rndm = ROOT.TRandom3(1)
filename = "histos.root"
# Here open a file and create three histograms
for i in xrange(1024):
# Use the following lines to feed the Fill method of the histograms in order to fill them
rndm.Gaus()
rndm.Exp(1)
rndm.Uniform(-4,4)
# Here write the three histograms on the file and close the file
Explanation: Writing histograms
Create a TFile containing three histograms filled with random numbers distributed according to a Gaus, an exponential and a uniform distribution.
Close the file: you will reopen it later.
End of explanation
! ls .
! echo Now listing the content of the file
! rootls -l #filename here
Explanation: Now, you can invoke the ls command from within the notebook to list the files in this directory. Check that the file is there. You can invoke the rootls command to see what's inside the file.
End of explanation
%jsroot on
f = ROOT.TFile(filename)
c = ROOT.TCanvas()
c.Divide(2,2)
c.cd(1)
f.gaus.Draw()
# finish the drawing in each pad
# Draw the Canvas
Explanation: Access the histograms and draw them in Python. Remember that you need to create a TCanvas before and draw it too in order to inline the plots in the notebooks.
You can switch to the interactive JavaScript visualisation using the %jsroot on "magic" command.
End of explanation
%%cpp
TFile f("histos.root");
TH1F *hg, *he, *hu;
f.GetObject("gaus", hg);
// ... read the histograms and draw them in each pad
Explanation: You can now repeat the exercise above using C++. Transform the cell in a C++ cell using the %%cpp "magic".
End of explanation
f = ROOT.TXMLFile("histos.xml","RECREATE")
hg = ROOT.TH1F("gaus","Gaussian numbers", 64, -4, 4)
he = ROOT.TH1F("expo","Exponential numbers", 64, -4, 4)
hu = ROOT.TH1F("unif","Uniform numbers", 64, -4, 4)
for i in xrange(1024):
hg.Fill(rndm.Gaus())
# ... Same as above!
! ls -l histos.xml histos.root
! cat histos.xml
Explanation: Inspect the content of the file: TXMLFile
ROOT provides a different kind of TFile, TXMLFile. It has the same interface and it's very useful to better understand how objects are written in files by ROOT.
Repeat the exercise above, either in Python or C++ - your choice - using a TXMLFile rather than a TFile, and then display its content with the cat command. Can you see how the contents of the individual bins of the histograms are stored? And the colour of their markers?
Do you understand why the xml file is bigger than the root one even if they have the same content?
End of explanation |
9,413 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Map the acoustic environment
This notebook creates a map of biophony and anthrophony, predicted from a multilevel regression model.
The model takes land cover areas (within a specified radius) as input, which this notebook computes for a grid of points over the study area. The output is then overlaid on web background tiles over the study area.
Import statements
Step1: Variable declarations
radius — the radius in meters for which to compute land cover areas around each grid location <br />
spacing — the spacing in meters between each grid location <br />
p1, p2 — projections defining the local coordinate system (p1) and the web mapping coordinate system (p2)
Step2: Function declarations
Step3: Create grid of points
load study area boundary from the database
Step4: create a grid of point coordinates (in a numpy array) based on the boundary and desired node spacing
Step5: extended grid points
Step6: Compute land cover area for all points in the grid
Step7: View result
Step8: Predict biophony
Step9: Map predicted biophony
transform predicted biophony to range from 0 to 1
Step10: define folium map
Step11: create the biophony overlay (for x[0]) and add it to the map
Step12: show the map
Step13: create and save images in PNG format for each time step (each value of x)
Step14: Predict anthrophony
Step15: Map anthrophony
transform predicted biophony to range from 0 to 1
Step16: define folium map
Step17: create the anthrophony overlay and add it to the map
Step18: show the map
Step19: create and save an image in PNG format of the overlay | Python Code:
# datawaves database
from landscape.models import LandCoverMergedMapArea
from database.models import Site
from geo.models import Boundary
from django.contrib.gis.geos import Point, Polygon
from django.contrib.gis.db.models.functions import Intersection, Envelope
from django.contrib.gis.db.models import Collect
from os import path
import numpy
import pandas
import pyprind
import pyproj
import folium
from folium.plugins import ImageOverlay
from PIL import Image
from matplotlib import pyplot
from matplotlib import cm
%matplotlib inline
# hd5 to save results between sessions
import h5py
Explanation: Map the acoustic environment
This notebook creates a map of biophony and anthrophony, predicted from a multilevel regression model.
The model takes land cover areas (within a specified radius) as input, which this notebook computes for a grid of points over the study area. The output is then overlaid on web background tiles over the study area.
Import statements
End of explanation
radius = 500
spacing = 200
p1 = pyproj.Proj(init='EPSG:31254') # MGI / Austria GK West
p2 = pyproj.Proj(init='EPSG:4326') # WGS 84
Explanation: Variable declarations
radius — the radius in meters for which to compute land cover areas around each grid location <br />
spacing — the spacing in meters between each grid location <br />
p1, p2 — projections defining the local coordinate system (p1) and the web mapping coordinate system (p2)
End of explanation
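To illustrate the two coordinate systems, a single point can be converted from the local MGI / Austria GK West system to WGS 84; the easting/northing values below are arbitrary illustrative coordinates, not taken from the data:
```
# convert one local (easting, northing) pair to (longitude, latitude)
sample_x, sample_y = 79000.0, 236000.0  # hypothetical local coordinates
lon, lat = pyproj.transform(p1, p2, sample_x, sample_y)
print(lon, lat)
```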
# naturalness = (%natural - %anthropogenic) / (% natural + % anthropogenic)
def get_naturalness(point, radius=radius):
buffer = point.buffer(radius)
result = LandCoverMergedMapArea.objects.filter(geometry__intersects=buffer, cover_type__in=[1, 2, 9])\
.annotate(intersection=Intersection('geometry', buffer))
forest = result.filter(cover_type__exact=9)
forest = forest.aggregate(total=Collect('intersection'))
result = result.aggregate(total=Collect('intersection'))
try:
forest_area = forest['total'].area
except AttributeError:
forest_area = 0
try:
result_area = result['total'].area
except AttributeError:
result_area = 0
urban_area = result_area - forest_area
try:
naturalness = (forest_area - urban_area) / (result_area)
except ZeroDivisionError:
naturalness = 0
return naturalness
# efficently queries the land cover data
# to determine the percentage of the specified land cover
# within a specified radius around a point location
def get_forest_net_area(point, radius=radius):
buffer = point.buffer(radius)
result = LandCoverMergedMapArea.objects.filter(shape__intersects=buffer, cover_type__in=[1, 2, 9])\
.annotate(intersection=Intersection('shape', buffer))
forest = result.filter(cover_type__exact=9)
forest = forest.aggregate(total=Collect('intersection'))
result = result.aggregate(total=Collect('intersection'))
try:
forest_area = forest['total'].area
except AttributeError:
forest_area = 0
try:
result_area = result['total'].area
except AttributeError:
result_area = 0
net_area = (2 * forest_area) - result_area
return (net_area / buffer.area) * 100
def get_pavement_area(point, radius=radius):
buffer = point.buffer(radius)
result = LandCoverMergedMapArea.objects.filter(geometry__intersects=buffer, cover_type__in=[1, 2, 9])\
.annotate(intersection=Intersection('geometry', buffer))
pavement = result.filter(cover_type__exact=2)
pavement = pavement.aggregate(total=Collect('intersection'))
try:
pavement_area = pavement['total'].area
except AttributeError:
pavement_area = 0
return (pavement_area / buffer.area) * 100
# utility function to save results between sessions
def export_to_hdf5(filepath, name, data):
h5f = h5py.File(filepath, 'w')
h5f.create_dataset(name, data=data)
h5f.close()
Explanation: Function declarations
End of explanation
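For a single location, the naturalness score can be queried directly; a sketch with a made-up coordinate pair (any point inside the extent of the land cover layer would do):
```
# hypothetical local coordinates (MGI / Austria GK West)
test_point = Point([79000.0, 236000.0])
print(get_naturalness(test_point, radius=radius))
```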
boundary = Boundary.objects.get(name = "study area").geometry.envelope
Explanation: Create grid of points
load study area boundary from the database
End of explanation
coords = numpy.array(boundary.coords[0])
bbox = dict()
bbox['left'] = coords[:, 0].min()
bbox['bottom'] = coords[:, 1].min()
bbox['right'] = coords[:, 0].max()
bbox['top'] = coords[:, 1].max()
rows = int(numpy.ceil((bbox['top'] - bbox['bottom']) / spacing))
columns = int(numpy.ceil((bbox['right'] - bbox['left']) / spacing))
grid = numpy.empty(shape=(rows, columns), dtype=numpy.ndarray)
x_start = bbox['left']
y_start = bbox['bottom']
for row in range(rows):
for column in range(columns):
grid[row, column] = [x_start + (spacing * column), y_start + (spacing * row)]
Explanation: create a grid of point coordinates (in a numpy array) based on the boundary and desired node spacing
End of explanation
extended_bbox = dict()
extended_bbox['left'] = 71753.8726
extended_bbox['bottom'] = 229581.4586
extended_bbox['right'] = 86753.8726
extended_bbox['top'] = 241581.4586
extended_rows = int((extended_bbox['top'] - extended_bbox['bottom']) / spacing)
extended_columns = int((extended_bbox['right'] - extended_bbox['left']) / spacing)
extended_grid = numpy.empty(shape=(extended_rows, extended_columns), dtype=numpy.ndarray)
extended_x_start = extended_bbox['left']
extended_y_start = extended_bbox['bottom']
for row in range(extended_rows):
for column in range(extended_columns):
extended_grid[row, column] = [extended_x_start + (spacing * column), extended_y_start + (spacing * row)]
grid_extent = Polygon(((bbox['left'], bbox['bottom']),
(bbox['right'], bbox['bottom']),
(bbox['right'], bbox['top']),
(bbox['left'], bbox['top']),
(bbox['left'], bbox['bottom']),
))
extended_grid_extent = Polygon(((extended_bbox['left'], extended_bbox['bottom']),
(extended_bbox['right'], extended_bbox['bottom']),
(extended_bbox['right'], extended_bbox['top']),
(extended_bbox['left'], extended_bbox['top']),
(extended_bbox['left'], extended_bbox['bottom']),
))
rows
columns
extended_rows
extended_columns
Explanation: extended grid points
End of explanation
#values = pandas.DataFrame({'id':numpy.arange(rows*columns), 'naturalness_500m':numpy.zeros(rows*columns)}).set_index('id')
values = pandas.DataFrame({'id':numpy.arange(extended_rows * extended_columns), 'naturalness_500m':numpy.zeros(extended_rows * extended_columns)}).set_index('id')
n_calculations = extended_rows * extended_columns
progress = pyprind.ProgBar(n_calculations, bar_char='█', title='progress', monitor=True, stream=1, width=70)
for i, coords in enumerate(extended_grid.ravel()):
progress.update(item_id=i)
point = Point(coords)
#if point.within(grid_extent):
# value = 999
#else:
value = get_naturalness(point=point, radius=radius)
values['naturalness_500m'].iloc[i] = value
# save values
values.to_csv("/home/ubuntu/data/model grid/naturalness_500m_grid_200m_spacing.csv")
Explanation: Compute land cover area for all points in the grid
End of explanation
data = values['naturalness_500m'].as_matrix().reshape((rows, columns))
data = numpy.flipud(data)
plt1 = pyplot.imshow(data, cmap='viridis')
clb1 = pyplot.colorbar()
Explanation: View result
End of explanation
# import values
values = pandas.read_csv("/home/ubuntu/data/model grid/naturalness_500m_grid_200m_spacing_merged.csv")
# reshape into grid
data = values['naturalness_500m'].as_matrix().reshape((extended_rows, extended_columns))
# mean center values (mean computed in the regression model notebook)
u = data - 2.647528316787191
# define model parameters (computed in the regresssion model notebook)
g_00_b = -0.4061 # level 2
g_01_b = 1.2439
g_10_b = 0.1696
g_11_b = 0.2671
a_b = g_00_b + (g_01_b * u)
b_b = g_10_b + (g_11_b * u)
x = [t for t in numpy.linspace(-3, 3, 21)] # level 1
y_b = numpy.empty(shape=(len(x), u.shape[0], u.shape[1]), dtype=numpy.float64)
for i, xi in enumerate(x):
y_b[i] = a_b + (b_b * xi)
export_to_hdf5(filepath="/home/ubuntu/predicted_biophony.hd5", name="predicted biophony", data=y_b)
Explanation: Predict biophony
End of explanation
y_b = y_b - y_b.min()
y_b = y_b / y_b.max()
Explanation: Map predicted biophony
transform predicted biophony to range from 0 to 1
End of explanation
# center the map on the site named 'Hofgarten' and use the "Stamen Toner" background tiles
map_center = Site.objects.get(name='Hofgarten').geometry.coords
# transfrom the points from MGI / Austria GK West to WGS 84
lon, lat = pyproj.transform(p1, p2, map_center[0], map_center[1])
# intialize the map and use the "Stamen Toner" background tiles
map_b = folium.Map(location=[lat, lon],
zoom_start=15, tiles="Stamen Toner",
detect_retina=True)
Explanation: define folium map
End of explanation
b_overlay = ImageOverlay(y_b[15],
bounds=[numpy.roll(numpy.array(\
pyproj.transform(p1, p2, extended_bbox['left'], extended_bbox['bottom'])), 1).tolist(),
numpy.roll(numpy.array(\
pyproj.transform(p1, p2, extended_bbox['right'], extended_bbox['top'])), 1).tolist()],
opacity=0.5,
origin='lower',
colormap=cm.Greens).add_to(map_b)
Explanation: create the biophony overlay (for x[0]) and add it to the map
End of explanation
map_b
Explanation: show the map
End of explanation
image_path = ""
for i in range(y_b.shape[0]):
image = Image.fromarray(cm.Greens(numpy.flipud(y_b[i]), bytes=True))
image.save(path.join(image_path, "image_{0}.png".format(i + 1)))
Explanation: create and save images in PNG format for each time step (each value of x)
End of explanation
# import values
values = pandas.read_csv("/home/ubuntu/data/model grid/pavement_100m_grid_200m_spacing_merged.csv")
# reshape into grid
data = values['pavement_100m'].as_matrix().reshape((extended_rows, extended_columns))
# mean center values (mean computed in the regression model notebook)
u = data - (-48.8641)
# define model parameters (computed in the regresssion model notebook)
g_00_a = -0.0876 # level 2
g_01_a = -0.3314
y_a = g_00_a + (g_01_a * u) # level 1
export_to_hdf5(filepath="/home/ubuntu/sel.hd5", name="predicted SEL", data=y_a)
Explanation: Predict anthrophony
End of explanation
# transform to range 0-1
y_a = y_a + abs(y_a.min())
y_a = y_a / y_a.max()
# transform to account for scale
#y_a = (y_a * 0.50) + 0.3
Explanation: Map anthrophony
transform predicted biophony to range from 0 to 1
End of explanation
# center the map on the site named 'Hofgarten' and use the "Stamen Toner" background tiles
map_center = Site.objects.get(name='Hofgarten').geometry.coords
# transfrom the points from MGI / Austria GK West to WGS 84
lon, lat = pyproj.transform(p1, p2, map_center[0], map_center[1])
# intialize the map and use the "Stamen Toner" background tiles
map_a = folium.Map(location=[lat, lon],
zoom_start=15, tiles="Stamen Toner",
detect_retina=True)
Explanation: define folium map
End of explanation
a_overlay = ImageOverlay(y_a,
bounds=[numpy.roll(numpy.array(\
pyproj.transform(p1, p2, extended_bbox['left'], extended_bbox['bottom'])), 1).tolist(),
numpy.roll(numpy.array(\
pyproj.transform(p1, p2, extended_bbox['right'], extended_bbox['top'])), 1).tolist()],
opacity=0.5,
origin='lower',
colormap=cm.YlOrBr).add_to(map_a)
Explanation: create the anthrophony overlay and add it to the map
End of explanation
map_a
Explanation: show the map
End of explanation
image = Image.fromarray(cm.YlOrBr(numpy.flipud(y_a), bytes=True))
image.save(path.join(image_path, "image_1_a.png"))
Explanation: create and save an image in PNG format of the overlay
End of explanation |
9,414 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2022 The TensorFlow Authors.
Step1: Text Searcher with TensorFlow Lite Model Maker
Step2: Import the required packages.
Step3: Prepare the dataset
This tutorial uses the CNN / Daily Mail summarization dataset from the GitHub repo.
First, download the text and urls of CNN and Daily Mail and unzip them. If the download
from Google Drive fails, please wait a few minutes and try again, or download the files manually and then upload them to the colab.
Step10: Then, save the data into the CSV file that can be loaded into tflite_model_maker library. The code is based on the logic used to load this data in tensorflow_datasets. We can't use tensorflow_dataset directly since it doesn't contain urls which are used in this colab.
Since it takes a long time to process the whole dataset into embedding feature vectors,
only the first 5% of stories from the CNN and Daily Mail dataset are selected by default
for demo purposes. You can adjust the fraction, or try the pre-built TFLite model built
from 50% of the CNN and Daily Mail stories as well.
Step11: Build the text Searcher model
Create a text Searcher model by loading a dataset, creating a model with the data and exporting the TFLite model.
Step 1. Load the dataset
Model Maker takes the text dataset and the corresponding metadata of each text string (such as urls in this example) in the CSV format. It embeds the text strings into feature vectors using the user-specified embedder model.
In this demo, we build the Searcher model using Universal Sentence Encoder, a state-of-the-art sentence embedding model which is already retrained from colab. The model is optimized for on-device inference performance, and only takes 6ms to embed a query string (measured on Pixel 6). Alternatively, you can use this quantized version, which is smaller but takes 38ms for each embedding.
Step12: Create a searcher.TextDataLoader instance and use data_loader.load_from_csv method to load the dataset. It takes ~10 minutes for this
step since it generates the embedding feature vector for each text one by one. You can try to upload your own CSV file and load it to build the customized model as well.
Specify the name of text column and metadata column in the CSV file.
* Text is used to generate the embedding feature vectors.
* Metadata is the content to be shown when you search the certain text.
Here are the first 4 lines of the CNN-DailyMail CSV file generated above.
| highlights| urls
| ---------- |----------
|Syrian official
Step13: For image use cases, you can create a searcher.ImageDataLoader instance and then use data_loader.load_from_folder to load images from the folder. The searcher.ImageDataLoader instance needs to be created by a TFLite embedder model because it will be leveraged to encode queries to feature vectors and be exported with the TFLite Searcher model. For instance
Step14: In the above example, we define the following options
Step15: Test the TFLite model on your query
You can test the exported TFLite model using custom query text. To query text using the Searcher model, initialize the model and run a search with text phrase, as follows | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2022 The TensorFlow Authors.
End of explanation
!sudo apt -y install libportaudio2
!pip install -q tflite-model-maker-nightly
!pip install gdown
Explanation: Text Searcher with TensorFlow Lite Model Maker
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/lite/tutorials/model_maker_text_searcher"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/tutorials/model_maker_text_searcher.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/tutorials/model_maker_text_searcher.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/tensorflow/tensorflow/lite/g3doc/tutorials/model_maker_text_searcher.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
In this colab notebook, you can learn how to use the TensorFlow Lite Model Maker library to create a TFLite Searcher model. You can use a text Searcher model to build Sematic Search or Smart Reply for your app. This type of model lets you take a text query and search for the most related entries in a text dataset, such as a database of web pages. The model returns a list of the smallest distance scoring entries in the dataset, including metadata you specify, such as URL, page title, or other text entry identifiers. After building this, you can deploy it onto devices (e.g. Android) using Task Library Searcher API to run inference with just a few lines of code.
This tutorial leverages CNN/DailyMail dataset as an instance to create the TFLite Searcher model. You can try with your own dataset with the compatible input comma separated value (CSV) format.
Text search using Scalable Nearest Neighbor
This tutorial uses the publicly available CNN/DailyMail non-anonymized summarization dataset, which was produced from the GitHub repo. This dataset contains over 300k news articles, which makes it a good dataset to build the Searcher model, and return various related news during model inference for a text query.
The text Searcher model in this example uses a ScaNN (Scalable Nearest Neighbors) index file that can search for similar items from a predefined database. ScaNN achieves state-of-the-art performance for efficient vector similarity search at scale.
Highlights and urls in this dataset are used in this colab to create the model:
Highlights are the text for generating the embedding feature vectors and then used for search.
Urls are the returned result shown to users after searching the related highlights.
This tutorial saves these data into the CSV file and then uses the CSV file to build the model. Here are several examples from the dataset.
| Highlights | Urls
| ---------- |----------
|Hawaiian Airlines again lands at No. 1 in on-time performance. The Airline Quality Rankings Report looks at the 14 largest U.S. airlines. ExpressJet <br> and American Airlines had the worst on-time performance. Virgin America had the best baggage handling; Southwest had lowest complaint rate. | http://www.cnn.com/2013/04/08/travel/airline-quality-report
| European football's governing body reveals list of countries bidding to host 2020 finals. The 60th anniversary edition of the finals will be hosted by 13 <br> countries. Thirty-two countries are considering bids to host 2020 matches. UEFA will announce host cities on September 25. | http://edition.cnn.com:80/2013/09/20/sport/football/football-euro-2020-bid-countries/index.html?
| Once octopus-hunter Dylan Mayer has now also signed a petition of 5,000 divers banning their hunt at Seacrest Park. Decision by Washington <br> Department of Fish and Wildlife could take months. | http://www.dailymail.co.uk:80/news/article-2238423/Dylan-Mayer-Washington-considers-ban-Octopus-hunting-diver-caught-ate-Puget-Sound.html?
| Galaxy was observed 420 million years after the Big Bang. found by NASA’s Hubble Space Telescope, Spitzer Space Telescope, and one of nature’s <br> own natural 'zoom lenses' in space. | http://www.dailymail.co.uk/sciencetech/article-2233883/The-furthest-object-seen-Record-breaking-image-shows-galaxy-13-3-BILLION-light-years-Earth.html
Setup
Start by installing the required packages, including the Model Maker package from the GitHub repo.
End of explanation
from tflite_model_maker import searcher
Explanation: Import the required packages.
End of explanation
!gdown https://drive.google.com/uc?id=0BwmD_VLjROrfTHk4NFg2SndKcjQ
!gdown https://drive.google.com/uc?id=0BwmD_VLjROrfM1BxdkxVaTY2bWs
!wget -O all_train.txt https://raw.githubusercontent.com/abisee/cnn-dailymail/master/url_lists/all_train.txt
!tar xzf cnn_stories.tgz
!tar xzf dailymail_stories.tgz
Explanation: Prepare the dataset
This tutorial uses the CNN / Daily Mail summarization dataset from the GitHub repo.
First, download the text and urls of CNN and Daily Mail and unzip them. If the download
from Google Drive fails, please wait a few minutes and try again, or download the files manually and then upload them to the colab.
End of explanation
#@title Save the highlights and urls to the CSV file
#@markdown Load the highlights from the stories of CNN / Daily Mail, map urls with highlights, and save them to the CSV file.
CNN_FRACTION = 0.05 #@param {type:"number"}
DAILYMAIL_FRACTION = 0.05 #@param {type:"number"}
import csv
import hashlib
import os
import tensorflow as tf
dm_single_close_quote = u"\u2019" # unicode
dm_double_close_quote = u"\u201d"
END_TOKENS = [
".", "!", "?", "...", "'", "`", '"', dm_single_close_quote,
dm_double_close_quote, ")"
] # acceptable ways to end a sentence
def read_file(file_path):
Reads lines in the file.
lines = []
with tf.io.gfile.GFile(file_path, "r") as f:
for line in f:
lines.append(line.strip())
return lines
def url_hash(url):
Gets the hash value of the url.
h = hashlib.sha1()
url = url.encode("utf-8")
h.update(url)
return h.hexdigest()
def get_url_hashes_dict(urls_path):
Gets hashes dict that maps the hash value to the original url in file.
urls = read_file(urls_path)
return {url_hash(url): url[url.find("id_/") + 4:] for url in urls}
def find_files(folder, url_dict):
Finds files corresponding to the urls in the folder.
all_files = tf.io.gfile.listdir(folder)
ret_files = []
for file in all_files:
# Gets the file name without extension.
filename = os.path.splitext(os.path.basename(file))[0]
if filename in url_dict:
ret_files.append(os.path.join(folder, file))
return ret_files
def fix_missing_period(line):
Adds a period to a line that is missing a period.
if "@highlight" in line:
return line
if not line:
return line
if line[-1] in END_TOKENS:
return line
return line + "."
def get_highlights(story_file):
Gets highlights from a story file path.
lines = read_file(story_file)
# Put periods on the ends of lines that are missing them
# (this is a problem in the dataset because many image captions don't end in
# periods; consequently they end up in the body of the article as run-on
# sentences)
lines = [fix_missing_period(line) for line in lines]
# Separate out article and abstract sentences
highlight_list = []
next_is_highlight = False
for line in lines:
if not line:
continue # empty line
elif line.startswith("@highlight"):
next_is_highlight = True
elif next_is_highlight:
highlight_list.append(line)
# Make highlights into a single string.
highlights = "\n".join(highlight_list)
return highlights
url_hashes_dict = get_url_hashes_dict("all_train.txt")
cnn_files = find_files("cnn/stories", url_hashes_dict)
dailymail_files = find_files("dailymail/stories", url_hashes_dict)
# The size to be selected.
cnn_size = int(CNN_FRACTION * len(cnn_files))
dailymail_size = int(DAILYMAIL_FRACTION * len(dailymail_files))
print("CNN size: %d"%cnn_size)
print("Daily Mail size: %d"%dailymail_size)
with open("cnn_dailymail.csv", "w") as csvfile:
writer = csv.DictWriter(csvfile, fieldnames=["highlights", "urls"])
writer.writeheader()
for file in cnn_files[:cnn_size] + dailymail_files[:dailymail_size]:
highlights = get_highlights(file)
# Gets the filename which is the hash value of the url.
filename = os.path.splitext(os.path.basename(file))[0]
url = url_hashes_dict[filename]
writer.writerow({"highlights": highlights, "urls": url})
Explanation: Then, save the data into the CSV file that can be loaded into tflite_model_maker library. The code is based on the logic used to load this data in tensorflow_datasets. We can't use tensorflow_dataset directly since it doesn't contain urls which are used in this colab.
Since it takes a long time to process the data into embedding feature vectors
for the whole dataset, only the first 5% of stories from the CNN and Daily Mail
dataset are selected by default for demo purposes. You can adjust the
fraction, or try the pre-built TFLite model built from 50% of the CNN and Daily Mail stories, to search as well.
End of explanation
!wget -O universal_sentence_encoder.tflite https://storage.googleapis.com/download.tensorflow.org/models/tflite_support/searcher/text_to_image_blogpost/text_embedder.tflite
Explanation: Build the text Searcher model
Create a text Searcher model by loading a dataset, creating a model with the data and exporting the TFLite model.
Step 1. Load the dataset
Model Maker takes the text dataset and the corresponding metadata of each text string (such as urls in this example) in the CSV format. It embeds the text strings into feature vectors using the user-specified embedder model.
In this demo, we build the Searcher model using Universal Sentence Encoder, a state-of-the-art sentence embedding model which is already retrained from colab. The model is optimized for on-device inference performance, and only takes 6ms to embed a query string (measured on Pixel 6). Alternatively, you can use this quantized version, which is smaller but takes 38ms for each embedding.
End of explanation
data_loader = searcher.TextDataLoader.create("universal_sentence_encoder.tflite", l2_normalize=True)
data_loader.load_from_csv("cnn_dailymail.csv", text_column="highlights", metadata_column="urls")
Explanation: Create a searcher.TextDataLoader instance and use data_loader.load_from_csv method to load the dataset. It takes ~10 minutes for this
step since it generates the embedding feature vector for each text one by one. You can try to upload your own CSV file and load it to build the customized model as well.
Specify the name of text column and metadata column in the CSV file.
* Text is used to generate the embedding feature vectors.
* Metadata is the content to be shown when you search the certain text.
Here are the first 4 lines of the CNN-DailyMail CSV file generated above.
| highlights| urls
| ---------- |----------
|Syrian official: Obama climbed to the top of the tree, doesn't know how to get down. Obama sends a letter to the heads of the House and Senate. Obama <br> to seek congressional approval on military action against Syria. Aim is to determine whether CW were used, not by whom, says U.N. spokesman.|http://www.cnn.com/2013/08/31/world/meast/syria-civil-war/
|Usain Bolt wins third gold of world championship. Anchors Jamaica to 4x100m relay victory. Eighth gold at the championships for Bolt. Jamaica double <br> up in women's 4x100m relay.|http://edition.cnn.com/2013/08/18/sport/athletics-bolt-jamaica-gold
|The employee in agency's Kansas City office is among hundreds of "virtual" workers. The employee's travel to and from the mainland U.S. last year cost <br> more than $24,000. The telecommuting program, like all GSA practices, is under review.|http://www.cnn.com:80/2012/08/23/politics/gsa-hawaii-teleworking
|NEW: A Canadian doctor says she was part of a team examining Harry Burkhart in 2010. NEW: Diagnosis: "autism, severe anxiety, post-traumatic stress <br> disorder and depression" Burkhart is also suspected in a German arson probe, officials say. Prosecutors believe the German national set a string of fires <br> in Los Angeles.|http://edition.cnn.com:80/2012/01/05/justice/california-arson/index.html?
End of explanation
scann_options = searcher.ScaNNOptions(
distance_measure="dot_product",
tree=searcher.Tree(num_leaves=140, num_leaves_to_search=4),
score_ah=searcher.ScoreAH(dimensions_per_block=1, anisotropic_quantization_threshold=0.2))
model = searcher.Searcher.create_from_data(data_loader, scann_options)
Explanation: For image use cases, you can create a searcher.ImageDataLoader instance and then use data_loader.load_from_folder to load images from the folder. The searcher.ImageDataLoader instance needs to be created by a TFLite embedder model because it will be leveraged to encode queries to feature vectors and be exported with the TFLite Searcher model. For instance:
python
data_loader = searcher.ImageDataLoader.create("mobilenet_v2_035_96_embedder_with_metadata.tflite")
data_loader.load_from_folder("food/")
Step 2. Create the Searcher model
Configure ScaNN options. See api doc for more details.
Create the Searcher model from data and ScaNN options. You can see the in-depth examination to learn more about the ScaNN algorithm.
End of explanation
model.export(
export_filename="searcher.tflite",
userinfo="",
export_format=searcher.ExportFormat.TFLITE)
Explanation: In the above example, we define the following options:
* distance_measure: we use "dot_product" to measure the distance between two embedding vectors. Note that we actually compute the negative dot product value to preserve the notion that "smaller is closer".
* tree: the dataset is divided into 140 partitions (roughly the square root of the data size), and 4 of them are searched during retrieval, which is roughly 3% of the dataset.
* score_ah: we quantize the float embeddings to int8 values with the same dimension to save space.
Step 3. Export the TFLite model
Then you can export the TFLite Searcher model.
End of explanation
from tflite_support.task import text
# Initializes a TextSearcher object.
searcher = text.TextSearcher.create_from_file("searcher.tflite")
# Searches the input query.
results = searcher.search("The Airline Quality Rankings Report looks at the 14 largest U.S. airlines.")
print(results)
Explanation: Test the TFLite model on your query
You can test the exported TFLite model using custom query text. To query text using the Searcher model, initialize the model and run a search with text phrase, as follows:
End of explanation |
9,415 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: CSE 6040, Fall 2015
Step2: Task 1(a). [5 points] From the Yelp! Academic Dataset, create an SQLite database called, yelp-rest.db, which contains the subset of the data pertaining to restaurants.
In particular, start by creating a table called Restaurants. This table should have the following columns
Step3: Task 1(b). [5 points] Next, create a table called Reviews, which contains only reviews of restaurants.
This table should have the following columns
Step4: Task 1(c). [5 points] Next, create a table called Users, which contains all users.
This table should have the following columns
Step5: Task 1(d). [5 points] Create a table, UserEdges, that stores the connectivity graph between users.
This table should have two columns, one for the source vertex (named Source) and one for the target vertex (named Target). Treat the graph as undirected
Step6: Part 2
Step7: Task 2(b). [5 points] For each distinct state, compute the number of reviews and the average restaurant rating (in "stars"). You may ignore businesses that have no reviews. Store these in a dataframe variable, df, with three columns
Step8: Task 2(c). [3 points] On average, how many reviews does each user write? You may ignore users who write no reviews.
Write Python code to answer this question in the code cell below, and enter your answer below, rounded to the nearest tenth. For instance, you would enter "5.2" if your program computes "5.24384". (type your answer here)
Step9: Task 2(c). [5 points] On average, how many friends does each user have? In computing the average, include users who have no friends.
Write Python code to answer this question in the code cell below, and enter your answer here, rounded to the nearest integer
Step10: Task 2(d). [5 points] Use Seaborn or Plotly to create a histogram of ratings in the state of Nevada. | Python Code:
# As you complete Part 1, place any additional imports you
# need in this code cell.
import json
import sqlite3 as db
import pandas
from IPython.display import display
import string
# A little helper function you can use to quickly inspect tables:
def peek_table (db, name):
"""Given a database connection (`db`), prints both the number of
records in the table as well as its first few entries."""
count = '''SELECT COUNT (*) FROM {table}'''.format (table=name)
display (pandas.read_sql_query (count, db))
peek = '''SELECT * FROM {table} LIMIT 5'''.format (table=name)
display (pandas.read_sql_query (peek, db))
# By way of reminder, here's how you open a connection to a database
# and request a cursor for executing queries.
db_conn = db.connect ('yelp-rest.db')
db_cursor = db_conn.cursor ()
Explanation: CSE 6040, Fall 2015: Homework #2
This assignment has the following learning goals:
1. Reinforce your data preprocessing skills, on basic files and in SQL, using a real dataset.
2. Allow you to flex your creative muscles on an open-ended data analysis problem.
In particular, you will work on the data provided for the 2015 Yelp! Dataset Challenge. Start by downloading this data and reviewing the information at that page under the section heading, Notes on the Dataset.
Note that the official challenge from Yelp! is an open competition to produce the coolest analysis of their dataset. The "open-ended" part of this homework assignment might be a first step toward helping your team win the $5,000 prize! (Entries for that competition are due December 31.)
You may work in teams of up to 2 students each. If you want a partner but can't find one, try using some combination of your basic social skills, the instructor's knowledge of the class, and Piazza to reach out to your peers.
Upload your completed version of this notebook plus all your output files to T-Square by Friday, October 16, 2015 at 5:00pm Eastern.
Actually, we will set up T-Square to accept the assignment on Friday, Oct 16, 2015 "anywhere on earth" (AOE). However, there will be no instructor Q&A after 5pm Friday, so Shang can party and Rich can make progress on clearing his email backlog.
Part 0: List your team members here
Team members:
1. (name goes here)
2. (name goes here)
Part 1: JSON to SQLite [20 points]
The learning goal in Part 1 is to reinforce the basic lessons on data conversion and SQL, by having you preprocess some "raw" data, turning it into a SQLite database, and then running some queries on it.
Hint: If you inspect the Yelp! Academic Dataset, you will see that each file is a sequence of JSON records, with one JSON record per line. So, you will most likely want to read the relevant input file line-by-line and process each line as a JSON record.
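To make the hint concrete, here is one possible sketch (not the official solution) of streaming the business file line by line and filling the Restaurants table; the input file name yelp_academic_dataset_business.json and the JSON field names are assumptions based on the public Yelp! Academic Dataset layout, and db_conn is the connection opened in the setup cell above.
# Illustrative sketch only -- file name and JSON fields are assumptions.
db_conn.execute('''CREATE TABLE IF NOT EXISTS Restaurants
                   (Id TEXT, Name TEXT, City TEXT, State TEXT,
                    Lat REAL, Long REAL, Cats TEXT)''')
with open('yelp_academic_dataset_business.json') as f:
    for line in f:                          # one JSON record per line
        biz = json.loads(line)
        cats = biz.get('categories', [])
        if 'Restaurants' in cats:           # keep only restaurant businesses
            db_conn.execute('INSERT INTO Restaurants VALUES (?, ?, ?, ?, ?, ?, ?)',
                            (biz['business_id'], biz['name'], biz['city'], biz['state'],
                             biz['latitude'], biz['longitude'], ';'.join(cats)))
db_conn.commit()                            # commit after the bulk insert (see hint above)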
End of explanation
# Your code goes here. Feel free to use additional code cells
# to break up your work into easily testable chunks.
# Quickly inspect your handiwork
peek_table (db_conn, "Restaurants")
Explanation: Task 1(a). [5 points] From the Yelp! Academic Dataset, create an SQLite database called, yelp-rest.db, which contains the subset of the data pertaining to restaurants.
In particular, start by creating a table called Restaurants. This table should have the following columns: the business ID (call it Id), restaurant name (Name), city (City), state (State), coordinates (two columns, called Lat and Long), and a semicolon-separated string of the restaurant's categories (Cats).
Note: This table should only contain businesses that are categorized as restaurants.
Hint: When performing large numbers of inserts into the database, it may be helpful to execute db_conn.commit() to save the results before proceeding.
End of explanation
# Your code goes here. Feel free to use additional code cells
# to break up your work into easily testable chunks.
peek_table (db_conn, "Reviews")
Explanation: Task 1(b). [5 points] Next, create a table called Reviews, which contains only reviews of restaurants.
This table should have the following columns: the restaurant's business ID (call it BizId), the reviewer's ID (RevId), the numerical rating (Stars), the date (Date), and the number of up-votes of the review itself (three columns: Useful, Funny, and Cool).
Note: This table should only contain the subset of reviews that pertain to restaurants. You may find your results from Task 1(a) helpful here!
End of explanation
# Your code goes here. Feel free to use additional code cells
# to break up your work into easily testable chunks.
peek_table (db_conn, "Users")
Explanation: Task 1(c). [5 points] Next, create a table called Users, which contains all users.
This table should have the following columns: the user's ID (Id), name (Name), and number of fans (NumFans).
Note: This table should contain all users, not just the ones that reviewed restaurants!
End of explanation
# Your code goes here. Feel free to use additional code cells
# to break up your work into easily testable chunks.
peek_table (db_conn, "UserEdges")
Explanation: Task 1(d). [5 points] Create a table, UserEdges, that stores the connectivity graph between users.
This table should have two columns, one for the source vertex (named Source) and one for the target vertex (named Target). Treat the graph as undirected: that is, if there is a link from a user $u$ to a user $v$, then the table should contain both edges $u \rightarrow v$ and $v \rightarrow u$.
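As a sketch of one way to store the undirected graph (the user file name and its friends field are assumptions about the raw dump, and db_conn comes from the setup cell): write each listed friendship as a directed row; if the dump lists every friendship under both users, both directions end up stored, otherwise insert the reversed pair as well.
# Illustrative sketch only -- file name and 'friends' field are assumptions.
db_conn.execute('CREATE TABLE IF NOT EXISTS UserEdges (Source TEXT, Target TEXT)')
with open('yelp_academic_dataset_user.json') as f:
    for line in f:
        user = json.loads(line)
        for friend in user.get('friends', []):
            db_conn.execute('INSERT INTO UserEdges VALUES (?, ?)',
                            (user['user_id'], friend))
            # If the dump does not already list the friendship on both sides,
            # also store the reverse edge:
            # db_conn.execute('INSERT INTO UserEdges VALUES (?, ?)',
            #                 (friend, user['user_id']))
db_conn.commit()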
End of explanation
# Your code goes here. Feel free to use additional code cells
# to break up your work into easily testable chunks.
Explanation: Part 2: Summary statistics [20 points]
Task 2(a). [2 point] Compute the average rating (measured in "stars"), taken over all reviews.
End of explanation
# ... Your code to compute `df` goes here ...
display (df)
Explanation: Task 2(b). [5 points] For each distinct state, compute the number of reviews and the average restaurant rating (in "stars"). You may ignore businesses that have no reviews. Store these in a dataframe variable, df, with three columns: one column for the state (named State), one column for the number of reviews (NumRevs), and one column for the average rating (named AvgStars). The rows of the df should be sorted in descending order by number of reviews.
End of explanation
# Your code goes here. Feel free to use additional code cells
# to break up your work into easily testable chunks.
Explanation: Task 2(c). [3 points] On average, how many reviews does each user write? You may ignore users who write no reviews.
Write Python code to answer this question in the code cell below, and enter your answer below, rounded to the nearest tenth. For instance, you would enter "5.2" if your program computes "5.24384". (type your answer here)
End of explanation
# Your code goes here. Feel free to use additional code cells
# to break up your work into easily testable chunks.
Explanation: Task 2(c). [5 points] On average, how many friends does each user have? In computing the average, include users who have no friends.
Write Python code to answer this question in the code cell below, and enter your answer here, rounded to the nearest integer: (your answer)
Hint: There is at least one relatively simple way that combines left (outer) joins and the IFNULL(...) function. Although we haven't covered these in class, you should be comfortable enough that you can read about them independently and apply them.
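A minimal sketch of the hinted query, assuming the table and column names defined in Part 1 (Users.Id, UserEdges.Source), might look like this:
# Sketch of the LEFT JOIN + IFNULL approach from the hint above.
query = '''
  SELECT AVG(IFNULL(D.Degree, 0)) AS AvgFriends
    FROM Users U
    LEFT OUTER JOIN (SELECT Source, COUNT(*) AS Degree
                       FROM UserEdges
                      GROUP BY Source) D
      ON U.Id = D.Source
'''
display(pandas.read_sql_query(query, db_conn))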
End of explanation
# Your code goes here. Feel free to use additional code cells
# to break up your work into easily testable chunks.
Explanation: Task 2(d). [5 points] Use Seaborn or Plotly to create a histogram of ratings in the state of Nevada.
End of explanation |
9,416 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Milestone Project 1
Step1: Step 1
Step2: Step 2
Step3: Step 3
Step4: Step 4
Step5: Step 5
Step6: Step 6
Step7: Step 7
Step8: Step 8
Step9: Step 9
Step10: Step 10 | Python Code:
# For using the same code in either Python 2 or 3
from __future__ import print_function
## Note: Python 2 users, use raw_input() to get player input. Python 3 users, use input()
Explanation: Milestone Project 1: Full Walkthrough Code Solution
Below is the filled in code that goes along with the complete walkthrough video. Check out the corresponding lecture videos for more information on this code!
End of explanation
from IPython.display import clear_output
def display_board(board):
clear_output()
print(' | |')
print(' ' + board[7] + ' | ' + board[8] + ' | ' + board[9])
print(' | |')
print('-----------')
print(' | |')
print(' ' + board[4] + ' | ' + board[5] + ' | ' + board[6])
print(' | |')
print('-----------')
print(' | |')
print(' ' + board[1] + ' | ' + board[2] + ' | ' + board[3])
print(' | |')
Explanation: Step 1: Write a function that can print out a board. Set up your board as a list, where each index 1-9 corresponds with a number on a number pad, so you get a 3 by 3 board representation.
End of explanation
def player_input():
marker = ''
while not (marker == 'X' or marker == 'O'):
marker = raw_input('Player 1: Do you want to be X or O?').upper()
if marker == 'X':
return ('X', 'O')
else:
return ('O', 'X')
Explanation: Step 2: Write a function that can take in a player input and assign their marker as 'X' or 'O'. Think about using while loops to continually ask until you get a correct answer.
End of explanation
def place_marker(board, marker, position):
board[position] = marker
Explanation: Step 3: Write a function that takes in the board list object, a marker ('X' or 'O'), and a desired position (number 1-9) and assigns it to the board.
End of explanation
def win_check(board,mark):
return ((board[7] == mark and board[8] == mark and board[9] == mark) or # across the top
(board[4] == mark and board[5] == mark and board[6] == mark) or # across the middle
(board[1] == mark and board[2] == mark and board[3] == mark) or # across the bottom
(board[7] == mark and board[4] == mark and board[1] == mark) or # down the left side
(board[8] == mark and board[5] == mark and board[2] == mark) or # down the middle
(board[9] == mark and board[6] == mark and board[3] == mark) or # down the right side
(board[7] == mark and board[5] == mark and board[3] == mark) or # diagonal
(board[9] == mark and board[5] == mark and board[1] == mark)) # diagonal
Explanation: Step 4: Write a function that takes in a board and checks to see if someone has won.
End of explanation
import random
def choose_first():
if random.randint(0, 1) == 0:
return 'Player 2'
else:
return 'Player 1'
Explanation: Step 5: Write a function that uses the random module to randomly decide which player goes first. You may want to look up random.randint(). Return a string of which player went first.
End of explanation
def space_check(board, position):
return board[position] == ' '
Explanation: Step 6: Write a function that returns a boolean indicating whether a space on the board is freely available.
End of explanation
def full_board_check(board):
for i in range(1,10):
if space_check(board, i):
return False
return True
Explanation: Step 7: Write a function that checks if the board is full and returns a boolean value. True if full, False otherwise.
End of explanation
def player_choice(board):
# Using strings because of raw_input
position = ' '
while position not in '1 2 3 4 5 6 7 8 9'.split() or not space_check(board, int(position)):
position = raw_input('Choose your next position: (1-9) ')
return int(position)
Explanation: Step 8: Write a function that asks for a player's next position (as a number 1-9) and then uses the function from step 6 to check if its a free position. If it is, then return the position for later use.
End of explanation
def replay():
return raw_input('Do you want to play again? Enter Yes or No: ').lower().startswith('y')
Explanation: Step 9: Write a function that asks the player if they want to play again and returns a boolean True if they do want to play again.
End of explanation
print('Welcome to Tic Tac Toe!')
while True:
# Reset the board
theBoard = [' '] * 10
player1_marker, player2_marker = player_input()
turn = choose_first()
print(turn + ' will go first.')
game_on = True
while game_on:
if turn == 'Player 1':
# Player1's turn.
display_board(theBoard)
position = player_choice(theBoard)
place_marker(theBoard, player1_marker, position)
if win_check(theBoard, player1_marker):
display_board(theBoard)
print('Congratulations! You have won the game!')
game_on = False
else:
if full_board_check(theBoard):
display_board(theBoard)
print('The game is a draw!')
break
else:
turn = 'Player 2'
else:
# Player2's turn.
display_board(theBoard)
position = player_choice(theBoard)
place_marker(theBoard, player2_marker, position)
if win_check(theBoard, player2_marker):
display_board(theBoard)
print('Player 2 has won!')
game_on = False
else:
if full_board_check(theBoard):
display_board(theBoard)
print('The game is a tie!')
break
else:
turn = 'Player 1'
if not replay():
break
Explanation: Step 10: Here comes the hard part! Use while loops and the functions you've made to run the game!
End of explanation |
9,417 |
Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
extract features from each photo in the directory using a function extract_features
| Python Code::
# Imports assumed for this snippet (Keras; the tensorflow.keras equivalents also work).
from keras.applications.vgg16 import VGG16, preprocess_input
from keras.preprocessing.image import load_img, img_to_array
from keras.models import Model

def extract_features(filename):
# load the model
model = VGG16()
# re-structure the model
model = Model(inputs=model.inputs, outputs=model.layers[-2].output)
# load the photo
image = load_img(filename, target_size=(224, 224))
# convert the image pixels to a numpy array
image = img_to_array(image)
# reshape data for the model
image = image.reshape((1, image.shape[0], image.shape[1], image.shape[2]))
# prepare the image for the VGG model
image = preprocess_input(image)
# get features
feature = model.predict(image, verbose=0)
return feature
# load and prepare the photograph
photo = extract_features('example.jpg')
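The problem statement asks for every photo in a directory, while the call above handles a single file; a minimal sketch of the directory loop (the folder name 'photos' is an assumption) is shown below. Note that extract_features rebuilds VGG16 on every call, so for a large directory it is worth hoisting the model construction out of the loop.
# Sketch: apply extract_features to each photo in a directory (folder name assumed).
import os
features = {}
for name in os.listdir('photos'):
    if name.lower().endswith(('.jpg', '.jpeg', '.png')):
        image_id = os.path.splitext(name)[0]
        features[image_id] = extract_features(os.path.join('photos', name))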
|
9,418 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Reading 14 - Multidimensional Arrays (Matrices)
By Hans. Original
Step1: One of the biggest advantages of vectorization is the possibility of applying countless operations directly to each element of the object.
Such operations include arithmetic, logic, the application of specific functions, and so on.
Step2: Methods | Python Code:
import sys
import numpy as np
print(sys.version) # Python version - optional
print(np.__version__) # numpy module version - optional
# Creating a standard vector with 25 values
npa = np.arange(25)
npa
# Turning the vector npa into a multidimensional array using the reshape method
npa.reshape(5,5)
# We can create a multidimensional array of zeros using the zeros method
np.zeros([5,5])
# Some useful functions
npa2 = np.zeros([5,5])
# Size (number of elements)
npa2.size
# Shape (number of rows and columns)
npa2.shape
# Dimensions
npa2.ndim
# You can create arrays with as many dimensions as you want
# E.g.: an array with 3 dimensions (2 elements in each dimension)
np.arange(8).reshape(2,2,2)
# 4 dimensions with 4 elements in each (zeros as elements)
# np.zeros([4,4,4,4])
# 4 dimensions with 2 elements in each
np.arange(16).reshape(2,2,2,2)
Explanation: Reading 14 - Multidimensional Arrays (Matrices)
By Hans. Original: Bill Chambers
End of explanation
# Setting a seed for generating random numbers
np.random.seed(10)
# Generating a list of 25 random integers between 1 and 10
np.random.random_integers(1,10,25)
# Note that we always get the same random sequence for the same seed;
# otherwise, if we do not reset the seed, it will always create a different sequence
np.random.seed(10)
np.random.random_integers(1,10,25)
np.random.seed(10)
npa2 = np.random.random_integers(1,10,25).reshape(5,5)
npa2
npa3 = np.random.random_integers(1,10,25).reshape(5,5)
npa3
# Applying operations
# comparisons
npa2 > npa3
# Count how many values satisfy npa2 > npa3 (in Python, True is treated as 1)
(npa2 > npa3).sum()
# We can apply this sum by column [first dimension] (axis=0)
(npa2 > npa3).sum(axis=0)
# We can apply this sum by row [second dimension] (axis=1)
(npa2 > npa3).sum(axis=1)
npa2
# OPERATIONS VALID FOR BOTH THE MAXIMUM AND THE MINIMUM
# Finding the maximum value in the whole matrix
npa2.max()
# Finding the maximum value by column
npa2.max(axis=0)
# Finding the maximum value by row
npa2.max(axis=1)
# Computing the matrix transpose using the transpose method
npa2.transpose()
# Computing the matrix transpose using the T property
npa2.T
# Multiplying this transpose by itself (elementwise)
npa2.T * npa2.T
Explanation: One of the biggest advantages of vectorization is the possibility of applying countless operations directly to each element of the object.
Such operations include arithmetic, logic, the application of specific functions, and so on.
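As a small illustration of that point (not part of the original lecture), NumPy's arithmetic operators and universal functions also apply element by element to a multidimensional array:
# Illustrative only: elementwise (vectorized) operations on a 5x5 array
m = np.arange(25).reshape(5, 5)
m + 10         # arithmetic applied to every element
m ** 2         # elementwise power
np.sqrt(m)     # a universal function (ufunc) applied elementwise
(m % 2 == 0)   # elementwise logical test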
End of explanation
# Flattening the matrix npa2 into one dimension using the flatten method
npa2.flatten()
# Flattening the matrix npa2 into one dimension using the ravel method
#npa2.flatten()
r = npa2.ravel()
r
npa2
# Flattening and trying to change the first element to 25
npa2.flatten()[0] = 25
npa2 # nothing should happen; the multidimensional array should remain unchanged
# Changing the first element of the "raveled" array
r[0] = 25 # this should change the value in the original array
npa2
# Showing the values of the array npa to compare with the next functions
npa
# Cumulative sum of the array's elements: cumsum
npa.cumsum()
# Cumulative product of the array's elements: cumprod
npa.cumprod() # will result in zeros because the first element is zero
Explanation: Methods: flatten and ravel
These two methods are quite useful when working with multidimensional arrays.
* The flatten method flattens the multidimensional array into a copy, so the original array stays unchanged.
* The ravel method flattens the multidimensional array into a view, so changes propagate back to the original array.
We can clearly see the advantage of the ravel method when we want to change some value of the array.
End of explanation |
9,419 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Waymo Open Dataset Tutorial (using local Jupyter kernel)
Website
Step1: Load waymo_open_dataset package
Step2: Read one frame
Each file in the dataset is a sequence of frames ordered by frame start timestamps. We have extracted two frames from the dataset to demonstrate the dataset format.
Step3: Examine frame context
Refer to dataset.proto for the data format. The context contains shared information among all frames in the scene.
Step5: Visualize Camera Images
Step9: Visualize Range Images
Step10: Point Cloud Conversion and Visualization
Step11: Examine number of points in each lidar sensor.
First return.
Step12: Second return.
Step13: Show point cloud
3D point clouds are rendered using an internal tool, which is unfortunately not publicly available yet. Here is an example of what they look like.
Step17: Visualize Camera Projection | Python Code:
# copybara removed file resource import
import os
if os.path.exists('tutorial_local.ipynb'):
# in case it is executed as a Jupyter notebook from the tutorial folder.
os.chdir('../')
fake_predictions_path = '{pyglib_resource}waymo_open_dataset/metrics/tools/fake_predictions.bin'.format(pyglib_resource='')
fake_ground_truths_path = '{pyglib_resource}waymo_open_dataset/metrics/tools/fake_ground_truths.bin'.format(pyglib_resource='')
bin_path = '{pyglib_resource}waymo_open_dataset/metrics/tools/compute_detection_metrics_main'.format(pyglib_resource='')
frames_path = '{pyglib_resource}tutorial/frames'.format(pyglib_resource='')
point_cloud_path = '{pyglib_resource}tutorial/3d_point_cloud.png'.format(pyglib_resource='')
!{bin_path} {fake_predictions_path} {fake_ground_truths_path}
Explanation: Waymo Open Dataset Tutorial (using local Jupyter kernel)
Website: https://waymo.com/open
GitHub: https://github.com/waymo-research/waymo-open-dataset
This tutorial demonstrates how to use the Waymo Open Dataset with two frames of data. Visit the Waymo Open Dataset Website to download the full dataset.
To use:
1. checkout waymo_open_dataset (need to run once):
git clone https://github.com/waymo-research/waymo-open-dataset.git waymo-open-dataset-repo
cd waymo-open-dataset-repo
2. build docker container (need to run once):
docker build --tag=open_dataset -f tutorial/cpu-jupyter.Dockerfile .
3. start Jupyter kernel inside the docker container:
docker run -p 8888:8888 open_dataset
4. connect the notebook to the local port 8888.
Uncheck the box "Reset all runtimes before running" if you run this colab directly from the remote kernel. Alternatively, you can make a copy before trying to run it by following "File > Save copy in Drive ...".
Metrics computation
The core metrics computation library is written in C++, so it can be extended to other programming languages. It can compute detection metrics (mAP) and tracking metrics (MOTA). See more information about the metrics on the website.
We provide command line tools and TensorFlow ops to call the detection metrics library to compute detection metrics. We will provide a similar wrapper for tracking metrics library in the future. You are welcome to contribute your wrappers.
Command line detection metrics computation
The command takes a pair of files for prediction and ground truth. Read the comment in waymo_open_dataset/metrics/tools/compute_detection_metrics_main.cc for details of the data format.
End of explanation
import tensorflow as tf
import math
import numpy as np
import itertools
from waymo_open_dataset.utils import frame_utils
from waymo_open_dataset import dataset_pb2 as open_dataset
tf.enable_eager_execution()
Explanation: Load waymo_open_dataset package
End of explanation
dataset = tf.data.TFRecordDataset(frames_path, compression_type='')
for data in dataset:
frame = open_dataset.Frame()
frame.ParseFromString(bytearray(data.numpy()))
break
(range_images, camera_projections, _, range_image_top_pose) = (
frame_utils.parse_range_image_and_camera_projection(frame))
Explanation: Read one frame
Each file in the dataset is a sequence of frames ordered by frame start timestamps. We have extracted two frames from the dataset to demonstrate the dataset format.
End of explanation
print(frame.context)
Explanation: Examine frame context
Refer to dataset.proto for the data format. The context contains shared information among all frames in the scene.
End of explanation
import matplotlib.pyplot as plt
plt.figure(figsize=(25, 20))
def image_show(data, name, layout, cmap=None):
"""Show an image."""
plt.subplot(*layout)
plt.imshow(tf.image.decode_jpeg(data), cmap=cmap)
plt.title(name)
plt.grid(False)
plt.axis('off')
for index, image in enumerate(frame.images):
image_show(image.image, open_dataset.CameraName.Name.Name(image.name),
[3, 3, index+1])
Explanation: Visualize Camera Images
End of explanation
plt.figure(figsize=(64, 20))
def plot_range_image_helper(data, name, layout, vmin = 0, vmax=1, cmap='gray'):
"""Plots range image.
Args:
  data: range image data
  name: the image title
  layout: plt layout
  vmin: minimum value of the passed data
  vmax: maximum value of the passed data
  cmap: color map
"""
plt.subplot(*layout)
plt.imshow(data, cmap=cmap, vmin=vmin, vmax=vmax)
plt.title(name)
plt.grid(False)
plt.axis('off')
def get_range_image(laser_name, return_index):
"""Returns range image given a laser name and its return index."""
return range_images[laser_name][return_index]
def show_range_image(range_image, layout_index_start = 1):
"""Shows range image.
Args:
  range_image: the range image data from a given lidar of type MatrixFloat.
  layout_index_start: layout offset
"""
range_image_tensor = tf.convert_to_tensor(range_image.data)
range_image_tensor = tf.reshape(range_image_tensor, range_image.shape.dims)
lidar_image_mask = tf.greater_equal(range_image_tensor, 0)
range_image_tensor = tf.where(lidar_image_mask, range_image_tensor,
tf.ones_like(range_image_tensor) * 1e10)
range_image_range = range_image_tensor[...,0]
range_image_intensity = range_image_tensor[...,1]
range_image_elongation = range_image_tensor[...,2]
plot_range_image_helper(range_image_range.numpy(), 'range',
[8, 1, layout_index_start], vmax=75, cmap='gray')
plot_range_image_helper(range_image_intensity.numpy(), 'intensity',
[8, 1, layout_index_start + 1], vmax=1.5, cmap='gray')
plot_range_image_helper(range_image_elongation.numpy(), 'elongation',
[8, 1, layout_index_start + 2], vmax=1.5, cmap='gray')
frame.lasers.sort(key=lambda laser: laser.name)
show_range_image(get_range_image(open_dataset.LaserName.TOP, 0), 1)
show_range_image(get_range_image(open_dataset.LaserName.TOP, 1), 4)
Explanation: Visualize Range Images
End of explanation
points, cp_points = frame_utils.convert_range_image_to_point_cloud(frame,
range_images,
camera_projections,
range_image_top_pose)
points_ri2, cp_points_ri2 = frame_utils.convert_range_image_to_point_cloud(
frame,
range_images,
camera_projections,
range_image_top_pose,
ri_index=1)
# 3d points in vehicle frame.
points_all = np.concatenate(points, axis=0)
points_all_ri2 = np.concatenate(points_ri2, axis=0)
# camera projection corresponding to each point.
cp_points_all = np.concatenate(cp_points, axis=0)
cp_points_all_ri2 = np.concatenate(cp_points_ri2, axis=0)
Explanation: Point Cloud Conversion and Visualization
End of explanation
print(points_all.shape)
print(cp_points_all.shape)
print(points_all[0:2])
for i in range(5):
print(points[i].shape)
print(cp_points[i].shape)
Explanation: Examine number of points in each lidar sensor.
First return.
End of explanation
print(points_all_ri2.shape)
print(cp_points_all_ri2.shape)
print(points_all_ri2[0:2])
for i in range(5):
print(points_ri2[i].shape)
print(cp_points_ri2[i].shape)
Explanation: Second return.
End of explanation
from IPython.display import Image, display
display(Image(point_cloud_path))
Explanation: Show point cloud
3D point clouds are rendered using an internal tool, which is unfortunately not publicly available yet. Here is an example of what they look like.
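The internal renderer is not released, but as a rough, unofficial substitute the converted points (points_all from the conversion cell above) can be scattered with matplotlib's 3D axes; subsampling keeps the plot responsive.
# Unofficial fallback visualization, not part of the original tutorial.
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401 (registers the 3d projection)
sample = points_all[::100]               # every 100th first-return point
fig = plt.figure(figsize=(10, 8))
ax = fig.add_subplot(111, projection='3d')
ax.scatter(sample[:, 0], sample[:, 1], sample[:, 2], s=0.5)
ax.set_xlabel('x (m)'); ax.set_ylabel('y (m)'); ax.set_zlabel('z (m)')
plt.show()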
End of explanation
images = sorted(frame.images, key=lambda i:i.name)
cp_points_all_concat = np.concatenate([cp_points_all, points_all], axis=-1)
cp_points_all_concat_tensor = tf.constant(cp_points_all_concat)
# The distance between lidar points and vehicle frame origin.
points_all_tensor = tf.norm(points_all, axis=-1, keepdims=True)
cp_points_all_tensor = tf.constant(cp_points_all, dtype=tf.int32)
mask = tf.equal(cp_points_all_tensor[..., 0], images[0].name)
cp_points_all_tensor = tf.cast(tf.gather_nd(
cp_points_all_tensor, tf.where(mask)), dtype=tf.float32)
points_all_tensor = tf.gather_nd(points_all_tensor, tf.where(mask))
projected_points_all_from_raw_data = tf.concat(
[cp_points_all_tensor[..., 1:3], points_all_tensor], axis=-1).numpy()
def rgba(r):
"""Generates a color based on range.
Args:
  r: the range value of a given point.
Returns:
  The color for a given range
"""
c = plt.get_cmap('jet')((r % 20.0) / 20.0)
c = list(c)
c[-1] = 0.5 # alpha
return c
def plot_image(camera_image):
"""Plot a camera image."""
plt.figure(figsize=(20, 12))
plt.imshow(tf.image.decode_jpeg(camera_image.image))
plt.grid("off")
def plot_points_on_image(projected_points, camera_image, rgba_func,
point_size=5.0):
"""Plots points on a camera image.
Args:
  projected_points: [N, 3] numpy array. The inner dims are
    [camera_x, camera_y, range].
  camera_image: jpeg encoded camera image.
  rgba_func: a function that generates a color from a range value.
  point_size: the point size.
"""
plot_image(camera_image)
xs = []
ys = []
colors = []
for point in projected_points:
xs.append(point[0]) # width, col
ys.append(point[1]) # height, row
colors.append(rgba_func(point[2]))
plt.scatter(xs, ys, c=colors, s=point_size, edgecolors="none")
plot_points_on_image(projected_points_all_from_raw_data,
images[0], rgba, point_size=5.0)
Explanation: Visualize Camera Projection
End of explanation |
9,420 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Image Data Explanation Benchmarking
Step1: Load Data and Model
Step2: Class Label Mapping
Step3: Define Score Function
Step4: Define Image Masker
Step5: Create Explainer Object
Step6: Run SHAP Explanation
Step7: Plot SHAP Explanation
Step8: Get Output Class Indices
Step9: Define Metrics (Sort Order & Perturbation Method)
Step10: Benchmark Explainer | Python Code:
import json
import numpy as np
import shap
import shap.benchmark as benchmark
import tensorflow as tf
import scipy as sp
from tensorflow.keras.applications.resnet50 import ResNet50, preprocess_input
Explanation: Image Data Explanation Benchmarking: Image Multiclass Classification
This notebook demonstrates how to use the benchmark utility to benchmark the performance of an explainer for image data. In this demo, we showcase explanation performance for partition explainer on an Image Multiclass Classification model. The metrics used to evaluate are "keep positive" and "keep negative". The masker used is Image Masker with Inpaint Telea.
The new benchmark utility uses the new API with MaskedModel as wrapper around user-imported model and evaluates masked values of inputs.
End of explanation
model = ResNet50(weights='imagenet')
X, y = shap.datasets.imagenet50()
Explanation: Load Data and Model
End of explanation
url = "https://s3.amazonaws.com/deep-learning-models/image-models/imagenet_class_index.json"
with open(shap.datasets.cache(url)) as file:
class_names = [v[1] for v in json.load(file).values()]
Explanation: Class Label Mapping
End of explanation
def f(x):
tmp = x.copy()
if len(tmp.shape) == 2:
tmp = tmp.reshape(tmp.shape[0], *X[0].shape)
preprocess_input(tmp)
return model(tmp)
Explanation: Define Score Function
End of explanation
masker = shap.maskers.Image("inpaint_telea", X[0].shape)
Explanation: Define Image Masker
End of explanation
explainer = shap.Explainer(f, masker, output_names=class_names)
Explanation: Create Explainer Object
End of explanation
shap_values = explainer(X[1:3], max_evals=500, batch_size=50, outputs=shap.Explanation.argsort.flip[:4])
Explanation: Run SHAP Explanation
End of explanation
shap.image_plot(shap_values)
Explanation: Plot SHAP Explanation
End of explanation
output = f(X[1:3]).numpy()
num_of_outputs = 4
sorted_indexes = np.argsort(-output,axis=1)
sliced_indexes = np.array([index_list[:num_of_outputs] for index_list in sorted_indexes])
Explanation: Get Output Class Indices
End of explanation
sort_order = 'positive'
perturbation = 'keep'
Explanation: Define Metrics (Sort Order & Perturbation Method)
End of explanation
sequential_perturbation = benchmark.perturbation.SequentialPerturbation(explainer.model, explainer.masker, sort_order, perturbation)
xs, ys, auc = sequential_perturbation.model_score(shap_values, X[1:2], indices=sliced_indexes[0])
sequential_perturbation.plot(xs, ys, auc)
Explanation: Benchmark Explainer
End of explanation |
9,421 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This code will load the model information, generate the model definition, and run the model estimation using FSL
Step1: Load the scan and model info, and generate the event files for FSL from the information in model.json
Step2: Specify the model. For the sake of speed we will use a simplified model that treats the study as a blocked design rather than modeling each item separately, but we also model instructions and motor responses; thus, it is a hybrid block/event-related design
Step3: Generate the fsf and ev files using Level1Design
Step4: Generate the full set of model files using FEATModel
Step5: Visualize the design matrix
Step6: Show the correlation matrix for design
Step7: Estimate the model using FILMGLS - this will take a few minutes. | Python Code:
import nipype.algorithms.modelgen as model # model generation
import nipype.interfaces.fsl as fsl # fsl
from nipype.interfaces.base import Bunch
import os,json,glob
import numpy
import nibabel
import nilearn.plotting
from make_event_files_from_json import MakeEventFilesFromJSON
%matplotlib inline
import matplotlib.pyplot as plt
try:
datadir=os.environ['FMRIDATADIR']
assert not datadir==''
except:
datadir='/Users/poldrack/data_unsynced/myconnectome/sub00001'
print 'Using data from',datadir
Explanation: This code will load the model information, generate the model definition, and run the model estimation using FSL
End of explanation
subject='ses014'
# note - we have to use the anatomy from a different session'
subdir=os.path.join(datadir,subject)
tasknum=2 # n-back
scaninfo=json.load(open(os.path.join(subdir,
'functional/sub00001_ses014_task002_run001_bold.json')))
tr=scaninfo['RepetitionTime']
modelfile=os.path.join(subdir,'model.json')
modelinfo=json.load(open(modelfile))
taskinfo=modelinfo['task%03d'%tasknum]['model001']
evs=taskinfo['Variables']
contrasts=taskinfo['Contrasts']
# get the response onsets
response_onsets=[]
for v in evs.iterkeys():
if evs[v]['VariableName'].find('_target_ons')>-1:
for ons in evs[v]['onsets']:
response_onsets.append(ons[0])
Explanation: Load the scan and model info, and generate the event files for FSL from the information in model.json
End of explanation
modeldir=os.path.join(subdir,'model/task%03d/model001/featmodel'%tasknum)
# no way to specify the output directory, so we just chdir into the
# desired output directory
if not os.path.exists(modeldir):
os.mkdir(modeldir)
os.chdir(modeldir)
instruction_onsets=list(numpy.array([68,176,372,2,154,416,24,220,350,112,198,328,46,264,394,90,242,306])-2.0)
info = [Bunch(conditions=['faces-1back','faces-2back','scenes-1back','scenes-2back','chars-1back','chars-2back','instructions','responses'],
onsets=[[68,176,372],[2,154,416],[24,220,350],[112,198,328],[46,264,394],[90,242,306],instruction_onsets,response_onsets],
durations=[[20],[20],[20],[20],[20],[20],[2],[1]])
]
s = model.SpecifyModel()
s.inputs.input_units = 'secs'
s.inputs.functional_runs = [os.path.join(subdir,'functional/sub00001_ses014_task002_run001_bold_mcf_unwarped_smoothed_hpf_rescaled.nii.gz')]
s.inputs.time_repetition = 6
s.inputs.high_pass_filter_cutoff = 128.
s.inputs.subject_info = info
s.run()
Explanation: Specify the model. For the sake of speed we will use a simplified model that treats the study as a blocked design rather than modeling each item separately, but we also model instructions and motor responses; thus, it is a hybrid block/event-related design
End of explanation
contrasts=[['faces>Baseline','T', ['faces-1back','faces-2back'],[0.5,0.5]],
['scenes>Baseline','T', ['scenes-1back','scenes-2back'],[0.5,0.5]],
['chars>Baseline','T', ['chars-1back','chars-2back'],[0.5,0.5]],
['2back>1back','T',
['faces-1back','faces-2back','scenes-1back','scenes-2back','chars-1back','chars-2back'],[-1,1,-1,1,-1,1]],
['response>Baseline','T',['responses'],[1]],
['instructions>Baseline','T',['instructions'],[1]]]
level1design = fsl.model.Level1Design()
level1design.inputs.interscan_interval = tr
level1design.inputs.bases = {'dgamma':{'derivs': True}}
level1design.inputs.session_info = s._sessinfo
level1design.inputs.model_serial_correlations=True
level1design.inputs.contrasts=contrasts
level1info=level1design.run()
fsf_file=os.path.join(modeldir,'run0.fsf')
event_files=glob.glob(os.path.join(modeldir,'ev*txt'))
Explanation: Generate the fsf and ev files using Level1Design
End of explanation
modelgen=fsl.model.FEATModel()
modelgen.inputs.fsf_file=fsf_file
modelgen.inputs.ev_files=event_files
modelgen.run()
Explanation: Generate the full set of model files using FEATModel
End of explanation
desmtx=numpy.loadtxt(fsf_file.replace(".fsf",".mat"),skiprows=5)
plt.imshow(desmtx,aspect='auto',interpolation='nearest',cmap='gray')
Explanation: Visualize the design matrix
End of explanation
cc=numpy.corrcoef(desmtx.T)
plt.imshow(cc,aspect='auto',interpolation='nearest')
plt.colorbar()
Explanation: Show the correlation matrix for design
End of explanation
if not os.path.exists(os.path.join(modeldir,'stats')):
fgls = fsl.FILMGLS(smooth_autocorr=True,mask_size=5)
fgls.inputs.in_file = os.path.join(subdir,
'functional/sub00001_ses014_task002_run001_bold_mcf_unwarped_smoothed_hpf_rescaled.nii.gz')
fgls.inputs.design_file = os.path.join(modeldir,'run0.mat')
fgls.inputs.threshold = 10
fgls.inputs.results_dir = os.path.join(modeldir,'stats')
fgls.inputs.tcon_file=os.path.join(modeldir,'run0.con')
res = fgls.run()
else:
print 'using existing stats dir'
# skip this for now, just do uncorrected visualization
dof=int(open(os.path.join(modeldir,'stats/dof')).readline().strip())
est = fsl.SmoothEstimate()
est.inputs.dof=dof
est.inputs.residual_fit_file = os.path.join(modeldir,'stats/res4d.nii.gz')
est.inputs.mask_file = os.path.join(subdir,'functional/sub00001_ses014_task002_run001_bold_mcf_brain_mask.nii.gz')
#smoothness=est.run()
zstats=glob.glob(os.path.join(modeldir,'stats/zstat*.nii.gz'))
zstats.sort()
meanimg=nibabel.load(os.path.join(subdir,
'functional/sub00001_ses014_task002_run001_bold_mcf_brain_unwarped_mean.nii.gz'))
for zstat in zstats:
connum=int(os.path.basename(zstat).split('.')[0].replace('zstat',''))
zstatimg=nibabel.load(zstat)
fmap_display=nilearn.plotting.plot_stat_map(zstatimg,meanimg,threshold=2.3,
title='Contrast %d: %s'%(connum,contrasts[connum-1][0]))
Explanation: Estimate the model using FILMGLS - this will take a few minutes.
End of explanation |
9,422 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Simulating with FBA
Simulations using flux balance analysis can be solved using Model.optimize(). This will maximize or minimize (maximizing is the default) flux through the objective reactions.
Step1: Running FBA
Step2: The Model.optimize() function will return a Solution object, which will also be stored at model.solution. A solution object has several attributes
Step3: Analyzing FBA solutions
Models solved using FBA can be further analyzed by using summary methods, which output printed text to give a quick representation of model behavior. Calling the summary method on the entire model displays information on the input and output behavior of the model, along with the optimized objective.
Step4: In addition, the input-output behavior of individual metabolites can also be inspected using summary methods. For instance, the following commands can be used to examine the overall redox balance of the model
Step5: Or to get a sense of the main energy production and consumption reactions
Step6: Changing the Objectives
The objective function is determined from the objective_coefficient attribute of the objective reaction(s). Generally, a "biomass" function which describes the composition of metabolites which make up a cell is used.
Step7: Currently in the model, there is only one objective reaction (the biomass reaction), with an objective coefficient of 1.
Step8: The objective function can be changed by assigning Model.objective, which can be a reaction object (or just its name), or a dict of {Reaction
Step9: The objective function can also be changed by setting Reaction.objective_coefficient directly.
Step10: Running FVA
FBA will not always give a unique solution, because multiple flux states can achieve the same optimum. FVA (or flux variability analysis) finds the ranges of each metabolic flux at the optimum.
Step11: Setting the parameter fraction_of_optimum=0.90 would give the flux ranges for reactions at 90% optimality.
Step12: Running FVA in summary methods
Flux variability analysis can also be embedded in calls to summary methods. For instance, the expected variability in substrate consumption and product formation can be quickly found by
Step13: Similarly, variability in metabolite mass balances can also be checked with flux variability analysis
Step14: In these summary methods, the values are reported as the center point +/- the range of the FVA solution, calculated from the maximum and minimum values.
Running pFBA
Parsimonious FBA (often written pFBA) finds a flux distribution which gives the optimal growth rate, but minimizes the total sum of flux. This involves solving two sequential linear programs, but is handled transparently by cobrapy. For more details on pFBA, please see Lewis et al. (2010).
Step15: These functions should give approximately the same objective value | Python Code:
import pandas
pandas.options.display.max_rows = 100
import cobra.test
model = cobra.test.create_test_model("textbook")
Explanation: Simulating with FBA
Simulations using flux balance analysis can be solved using Model.optimize(). This will maximize or minimize (maximizing is the default) flux through the objective reactions.
End of explanation
model.optimize()
Explanation: Running FBA
End of explanation
model.solution.status
model.solution.f
Explanation: The Model.optimize() function will return a Solution object, which will also be stored at model.solution. A solution object has several attributes:
f: the objective value
status: the status from the linear programming solver
x_dict: a dictionary of {reaction_id: flux_value} (also called "primal")
x: a list for x_dict
y_dict: a dictionary of {metabolite_id: dual_value}.
y: a list for y_dict
For example, after the last call to model.optimize(), the status should be 'optimal' if the solver returned no errors, and f should be the objective value
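As a quick sketch of reading those attributes (the reaction id PGI is assumed to exist in the textbook model):
# Sketch: inspect individual entries of the solution described above.
sol = model.solution
print(sol.status, sol.f)
print(sol.x_dict["PGI"])   # flux through a single reaction, keyed by reaction id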
End of explanation
model.summary()
Explanation: Analyzing FBA solutions
Models solved using FBA can be further analyzed by using summary methods, which output printed text to give a quick representation of model behavior. Calling the summary method on the entire model displays information on the input and output behavior of the model, along with the optimized objective.
End of explanation
model.metabolites.nadh_c.summary()
Explanation: In addition, the input-output behavior of individual metabolites can also be inspected using summary methods. For instance, the following commands can be used to examine the overall redox balance of the model
End of explanation
model.metabolites.atp_c.summary()
Explanation: Or to get a sense of the main energy production and consumption reactions
End of explanation
biomass_rxn = model.reactions.get_by_id("Biomass_Ecoli_core")
Explanation: Changing the Objectives
The objective function is determined from the objective_coefficient attribute of the objective reaction(s). Generally, a "biomass" function which describes the composition of metabolites which make up a cell is used.
End of explanation
model.objective
Explanation: Currently in the model, there is only one objective reaction (the biomass reaction), with an objective coefficient of 1.
End of explanation
# change the objective to ATPM
model.objective = "ATPM"
# The upper bound should be 1000, so that we get
# the actual optimal value
model.reactions.get_by_id("ATPM").upper_bound = 1000.
model.objective
model.optimize().f
Explanation: The objective function can be changed by assigning Model.objective, which can be a reaction object (or just its name), or a dict of {Reaction: objective_coefficient}.
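The dict form is not shown in the cell above, so here is a hedged one-liner illustrating it (same ATPM reaction as before):
# Sketch of the {Reaction: objective_coefficient} form mentioned above.
model.objective = {model.reactions.get_by_id("ATPM"): 1.0}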
End of explanation
model.reactions.get_by_id("ATPM").objective_coefficient = 0.
biomass_rxn.objective_coefficient = 1.
model.objective
Explanation: The objective function can also be changed by setting Reaction.objective_coefficient directly.
End of explanation
fva_result = cobra.flux_analysis.flux_variability_analysis(
model, model.reactions[:20])
pandas.DataFrame.from_dict(fva_result).T.round(5)
Explanation: Running FVA
FBA will not always give a unique solution, because multiple flux states can achieve the same optimum. FVA (or flux variability analysis) finds the ranges of each metabolic flux at the optimum.
End of explanation
fva_result = cobra.flux_analysis.flux_variability_analysis(
model, model.reactions[:20], fraction_of_optimum=0.9)
pandas.DataFrame.from_dict(fva_result).T.round(5)
Explanation: Setting the parameter fraction_of_optimum=0.90 would give the flux ranges for reactions at 90% optimality.
End of explanation
model.optimize()
model.summary(fva=0.95)
Explanation: Running FVA in summary methods
Flux variability analysis can also be embedded in calls to summary methods. For instance, the expected variability in substrate consumption and product formation can be quickly found by
End of explanation
model.metabolites.pyr_c.summary(fva=0.95)
Explanation: Similarly, variability in metabolite mass balances can also be checked with flux variability analysis
End of explanation
FBA_sol = model.optimize()
pFBA_sol = cobra.flux_analysis.optimize_minimal_flux(model)
Explanation: In these summary methods, the values are reported as the center point +/- the range of the FVA solution, calculated from the maximum and minimum values.
Running pFBA
Parsimonious FBA (often written pFBA) finds a flux distribution which gives the optimal growth rate, but minimizes the total sum of flux. This involves solving two sequential linear programs, but is handled transparently by cobrapy. For more details on pFBA, please see Lewis et al. (2010).
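To see the "minimizes the total sum of flux" part directly, a hedged check is to compare the total absolute flux of the two solutions (the pFBA total should be no larger):
# Sketch: compare total absolute flux of the plain FBA and pFBA solutions.
import numpy as np
print(np.abs(FBA_sol.x).sum(), np.abs(pFBA_sol.x).sum())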
End of explanation
abs(FBA_sol.f - pFBA_sol.f)
Explanation: These functions should give approximately the same objective value
End of explanation |
9,423 | Given the following text description, write Python code to implement the functionality described.
Description:
Count of K
To store the frequency array ; Function to check palindromic of of any substring using frequency array ; Initialise the odd count ; Traversing frequency array to compute the count of characters having odd frequency ; Returns true if odd count is atmost 1 ; Function to count the total number substring whose any permutations are palindromic ; Computing the frequency of first K character of the string ; To store the count of palindromic permutations ; Checking for the current window if it has any palindromic permutation ; Start and end poof window ; Decrementing count of first element of the window ; Incrementing count of next element of the window ; Checking current window character frequency count ; Return the final count ; Given string str ; Window of size K ; Function call
| Python Code:
freq =[0 ] * 26
def checkPalindrome() :
oddCnt = 0
for x in freq :
if(x % 2 == 1 ) :
oddCnt += 1
return oddCnt <= 1
def countPalindromePermutation(s , k ) :
for i in range(k ) :
freq[ord(s[i ] ) - 97 ] += 1
ans = 0
if(checkPalindrome() ) :
ans += 1
i = 0
j = k
while(j < len(s ) ) :
freq[ord(s[i ] ) - 97 ] -= 1
i += 1
freq[ord(s[j ] ) - 97 ] += 1
j += 1
if(checkPalindrome() ) :
ans += 1
return ans
str = "abbaca "
K = 3
print(countPalindromePermutation(str , K ) )
|
9,424 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Document retrieval from wikipedia data
Fire up GraphLab Create
Step1: Load some text data - from wikipedia, pages on people
Step2: Data contains
Step3: Explore the dataset and checkout the text it contains
Exploring the entry for president Obama
Step4: Exploring the entry for actor George Clooney
Step5: Get the word counts for Obama article
Step6: Sort the word counts for the Obama article
Turning dictonary of word counts into a table
Step7: Sorting the word counts to show most common words at the top
Step8: Most common words include uninformative words like "the", "in", "and",...
Compute TF-IDF for the corpus
To give more weight to informative words, we weigh them by their TF-IDF scores.
Step9: Examine the TF-IDF for the Obama article
Step10: Words with highest TF-IDF are much more informative.
Manually compute distances between a few people
Let's manually compare the distances between the articles for a few famous people.
Step11: Is Obama closer to Clinton than to Beckham?
We will use cosine distance, which is given by
(1-cosine_similarity)
and find that the article about president Obama is closer to the one about former president Clinton than that of footballer David Beckham.
Step12: Build a nearest neighbor model for document retrieval
We now create a nearest-neighbors model and apply it to document retrieval.
Step13: Applying the nearest-neighbors model for retrieval
Who is closest to Obama?
Step14: As we can see, president Obama's article is closest to the one about his vice-president Biden, and those of other politicians.
Other examples of document retrieval | Python Code:
import graphlab
Explanation: Document retrieval from wikipedia data
Fire up GraphLab Create
End of explanation
people = graphlab.SFrame('people_wiki.gl/')
Explanation: Load some text data - from wikipedia, pages on people
End of explanation
people.head()
len(people)
Explanation: Data contains: link to wikipedia article, name of person, text of article.
End of explanation
obama = people[people['name'] == 'Barack Obama']
obama
obama['text']
Explanation: Explore the dataset and checkout the text it contains
Exploring the entry for president Obama
End of explanation
clooney = people[people['name'] == 'George Clooney']
clooney['text']
Explanation: Exploring the entry for actor George Clooney
End of explanation
obama['word_count'] = graphlab.text_analytics.count_words(obama['text'])
print(obama['word_count'])
Explanation: Get the word counts for Obama article
End of explanation
obama_word_count_table = obama[['word_count']].stack('word_count', new_column_name = ['word','count'])
Explanation: Sort the word counts for the Obama article
Turning the dictionary of word counts into a table
End of explanation
obama_word_count_table.head()
obama_word_count_table.sort('count',ascending=False)
Explanation: Sorting the word counts to show most common words at the top
End of explanation
people['word_count'] = graphlab.text_analytics.count_words(people['text'])
people.head()
tfidf = graphlab.text_analytics.tf_idf(people['word_count'])
tfidf
people['tfidf'] = tfidf['docs']
Explanation: Most common words include uninformative words like "the", "in", "and",...
Compute TF-IDF for the corpus
To give more weight to informative words, we weigh them by their TF-IDF scores.
End of explanation
obama = people[people['name'] == 'Barack Obama']
obama[['tfidf']].stack('tfidf',new_column_name=['word','tfidf']).sort('tfidf',ascending=False)
Explanation: Examine the TF-IDF for the Obama article
End of explanation
clinton = people[people['name'] == 'Bill Clinton']
beckham = people[people['name'] == 'David Beckham']
Explanation: Words with highest TF-IDF are much more informative.
Manually compute distances between a few people
Let's manually compare the distances between the articles for a few famous people.
End of explanation
graphlab.distances.cosine(obama['tfidf'][0],clinton['tfidf'][0])
graphlab.distances.cosine(obama['tfidf'][0],beckham['tfidf'][0])
Explanation: Is Obama closer to Clinton than to Beckham?
We will use cosine distance, which is given by
(1-cosine_similarity)
and find that the article about president Obama is closer to the one about former president Clinton than that of footballer David Beckham.
End of explanation
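# A small added sketch (not part of the original notebook) of what the cosine
# distance reported above computes, using plain Python on two made-up TF-IDF
# dictionaries; the values below are illustrative only.
def cosine_distance(d1, d2):
    shared = set(d1) & set(d2)
    dot = sum(d1[k] * d2[k] for k in shared)
    norm1 = sum(v * v for v in d1.values()) ** 0.5
    norm2 = sum(v * v for v in d2.values()) ** 0.5
    return 1.0 - dot / (norm1 * norm2)

print(cosine_distance({'obama': 40.0, 'law': 10.0}, {'clinton': 35.0, 'law': 12.0}))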
knn_model = graphlab.nearest_neighbors.create(people,features=['tfidf'],label='name')
Explanation: Build a nearest neighbor model for document retrieval
We now create a nearest-neighbors model and apply it to document retrieval.
End of explanation
knn_model.query(obama)
Explanation: Applying the nearest-neighbors model for retrieval
Who is closest to Obama?
End of explanation
swift = people[people['name'] == 'Taylor Swift']
knn_model.query(swift)
jolie = people[people['name'] == 'Angelina Jolie']
knn_model.query(jolie)
arnold = people[people['name'] == 'Arnold Schwarzenegger']
knn_model.query(arnold)
Explanation: As we can see, president Obama's article is closest to the one about his vice-president Biden, and those of other politicians.
Other examples of document retrieval
End of explanation |
9,425 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Join two sheets, groupby and sum on the joined data
Step1: Change the index to be the showname
Step2: Do the same for views
DANGER note that battle-star has a hyphen!
Step3: Join on shows watched against category
PROBLEM - we have NaN values for the last two Firefly entries. Can you do something, earlier on, to fix this, from here inside the Notebook? | Python Code:
import pandas as pd
# load both sheets as new dataframes
shows_df = pd.read_csv("show_category.csv")
views_df = pd.read_excel("views.xls")
Explanation: Join two sheets, groupby and sum on the joined data
End of explanation
shows_df.head()
shows_df = shows_df.set_index('showname')
shows_df.head()
Explanation: Change the index to be the showname
End of explanation
views_df.head()
views_df = views_df.set_index('viewer_id')
views_df.head() # note that we can have repeating viewer_id values (they're non-unique)
# we can select out the column to work on, then use the built-in str (string) functions
# to replace hyphens (we do this and just print the result to screen)
views_df['show_watched'].str.replace("-", "")
# now we do the fix in-place
views_df['show_watched'] = views_df['show_watched'].str.replace("-", "")
# NOTE if you comment out the line above, you'll get a NaN in the final table
# as `battle-star` won't be joined
views_df
print("Index info:", views_df.index)
views_df.ix[22] # select the items with index 22 (note this is an integer, not string value)
Explanation: Do the same for views
DANGER note that battle-star has a hyphen!
End of explanation
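# A possible, more general guard (added here as a sketch, not in the original
# notebook) against the NaN rows noted at the end of this notebook: normalise
# the join keys on *both* frames (hyphens, stray whitespace, case) before
# joining. Shown on copies so the walkthrough below is unchanged.
views_clean = views_df.copy()
shows_clean = shows_df.copy()
views_clean['show_watched'] = (views_clean['show_watched']
                               .str.replace("-", "").str.strip().str.lower())
shows_clean.index = shows_clean.index.str.replace("-", "").str.strip().str.lower()
views_clean.join(shows_clean, on='show_watched').head()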
shows_views_df = views_df.join(shows_df, on='show_watched')
shows_views_df
# take out two relevant columns, group by category, sum the views
shows_views_df[['views', 'category']].groupby('category').sum()
Explanation: Join on shows watched against category
PROBLEM - we have NaN values for the last two Firefly entries. Can you do something, earlier on, to fix this, from here inside the Notebook?
End of explanation |
9,426 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The pickle module implements an algorithm for turning an arbitrary Python object into a series of bytes. This process is also called serializing the object. The byte stream representing the object can then be transmitted or stored, and later reconstructed to create a new object with the same characteristics.
Encoding and decoding Data in String
Step1: Working with Stream
Step2: Problem with Reconstructing Objects | Python Code:
import pickle
import pprint
data = [{'a': 'A', 'b': 2, 'c': 3.0}]
print('DATA:', end=' ')
pprint.pprint(data)
data_string = pickle.dumps(data)
print('PICKLE: {!r}'.format(data_string))
import pickle
import pprint
data1 = [{'a': 'A', 'b': 2, 'c': 3.0}]
print('BEFORE: ', end=' ')
pprint.pprint(data1)
data1_string = pickle.dumps(data1)
data2 = pickle.loads(data1_string)
print('AFTER : ', end=' ')
pprint.pprint(data2)
print('SAME? :', (data1 is data2))
print('EQUAL?:', (data1 == data2))
Explanation: The pickle module implements an algorithm for turning an arbitrary Python object into a series of bytes. This process is also called serializing the object. The byte stream representing the object can then be transmitted or stored, and later reconstructed to create a new object with the same characteristics.
Encoding and decoding Data in String
End of explanation
import io
import pickle
import pprint
class SimpleObject:
def __init__(self, name):
self.name = name
self.name_backwards = name[::-1]
return
data = []
data.append(SimpleObject('pickle'))
data.append(SimpleObject('preserve'))
data.append(SimpleObject('last'))
# Simulate a file.
out_s = io.BytesIO()
# Write to the stream
for o in data:
print('WRITING : {} ({})'.format(o.name, o.name_backwards))
pickle.dump(o, out_s)
out_s.flush()
# Set up a read-able stream
in_s = io.BytesIO(out_s.getvalue())
# Read the data
while True:
try:
o = pickle.load(in_s)
except EOFError:
break
else:
print('READ : {} ({})'.format(
o.name, o.name_backwards))
Explanation: Working with Stream
End of explanation
import pickle
import sys
class SimpleObject:
def __init__(self, name):
self.name = name
l = list(name)
l.reverse()
self.name_backwards = ''.join(l)
data = []
data.append(SimpleObject('pickle'))
data.append(SimpleObject('preserve'))
data.append(SimpleObject('last'))
filename ='test.dat'
with open(filename, 'wb') as out_s:
for o in data:
print('WRITING: {} ({})'.format(
o.name, o.name_backwards))
pickle.dump(o, out_s)
with open(filename, 'rb') as in_s:
while True:
try:
o = pickle.load(in_s)
except EOFError:
break
else:
print('READ: {} ({})'.format(
o.name, o.name_backwards))
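# Added illustration (a sketch, not from the original example): pickle stores
# only a reference to SimpleObject, not its code, so reconstruction fails
# wherever the class cannot be imported. Removing the class name from this
# module simulates that situation.
import pickle

payload = pickle.dumps(SimpleObject('demo'))
saved_class = SimpleObject
try:
    del SimpleObject              # pretend the class definition is unavailable
    pickle.loads(payload)
except (AttributeError, pickle.UnpicklingError) as err:
    print('Reconstruction failed:', err)
finally:
    SimpleObject = saved_class    # restore the class for the rest of the script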
Explanation: Problem with Reconstructing Objects
End of explanation |
9,427 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Installation
Just pip install
Step1: From a dictionary
Step2: From a list
Step3: From a yaml file
Step5: From a yaml string
Step6: From a dot-list
Step7: From command line arguments
To parse the content of sys.arg
Step8: Access and manipulation
Input yaml file
Step9: Object style access
Step10: dictionary style access
Step11: items in list
Step12: Changing existing keys
Step13: Adding new keys
Step14: Adding a new dictionary
Step15: providing default values
Step16: Accessing mandatory values
Accessing fields with the value ??? will cause a MissingMandatoryValue exception.
Use this to indicate that the value must be set before accessing.
Step17: Variable interpolation
OmegaConf supports variable interpolation. Interpolations are evaluated lazily on access.
Config node interpolation
The interpolated variable can be the path to another node in the configuration, and in that case the value will be the value of that node.
This path may use either dot-notation (foo.1), brackets ([foo][1]) or a mix of both (foo[1], [foo].1).
Interpolations are absolute by default. Relative interpolations are prefixed by one or more dots
Step18: to_yaml() will resolve interpolations if resolve=True is passed
Step19: Interpolations may be nested, enabling more advanced behavior like dynamically selecting a sub-config
Step20: Interpolated nodes can be any node in the config, not just leaf nodes
Step21: Environment variable interpolation
Access to environment variables is supported using oc.env.
Step22: Here is an example config file interpolates with the USER environment variable
Step23: You can specify a default value to use in case the environment variable is not set.
In such a case, the default value is converted to a string using str(default), unless it is null (representing Python None) - in which case None is returned.
The following example falls back to default passwords when DB_PASSWORD is not defined
Step24: Decoding strings with interpolations
With oc.decode, strings can be converted into their corresponding data types using the OmegaConf grammar.
This grammar recognizes typical data types like bool, int, float, dict and list,
e.g. "true", "1", "1e-3", "{a
Step25: Custom interpolations
You can add additional interpolation types using custom resolvers.
The example below creates a resolver that adds 10 to the given value.
Step26: You can take advantage of nested interpolations to perform custom operations over variables
Step27: By default a custom resolver is called on every access, but it is possible to cache its output
by registering it with use_cache=True.
This may be useful either for performance reasons or to ensure the same value is always returned.
Note that the cache is based on the string literals representing the resolver's inputs, and not
the inputs themselves
Step28: Merging configurations
Merging configurations enables the creation of reusable configuration files for each logical component instead of a single config file for each variation of your task.
Machine learning experiment example | Python Code:
from omegaconf import OmegaConf
conf = OmegaConf.create()
print(conf)
Explanation: Installation
Just pip install:
pip install omegaconf
If you want to try this notebook after checking out the repository be sure to run
python setup.py develop at the repository root before running this code.
Creating OmegaConf objects
Empty
End of explanation
conf = OmegaConf.create(dict(k='v',list=[1,dict(a='1',b='2')]))
print(OmegaConf.to_yaml(conf))
Explanation: From a dictionary
End of explanation
conf = OmegaConf.create([1, dict(a=10, b=dict(a=10))])
print(OmegaConf.to_yaml(conf))
Explanation: From a list
End of explanation
conf = OmegaConf.load('../source/example.yaml')
print(OmegaConf.to_yaml(conf))
Explanation: From a yaml file
End of explanation
yaml =
a: b
b: c
list:
- item1
- item2
conf = OmegaConf.create(yaml)
print(OmegaConf.to_yaml(conf))
Explanation: From a yaml string
End of explanation
dot_list = ["a.aa.aaa=1", "a.aa.bbb=2", "a.bb.aaa=3", "a.bb.bbb=4"]
conf = OmegaConf.from_dotlist(dot_list)
print(OmegaConf.to_yaml(conf))
Explanation: From a dot-list
End of explanation
# Simulating command line arguments
import sys
sys.argv = ['your-program.py', 'server.port=82', 'log.file=log2.txt']
conf = OmegaConf.from_cli()
print(OmegaConf.to_yaml(conf))
Explanation: From command line arguments
To parse the content of sys.arg:
End of explanation
conf = OmegaConf.load('../source/example.yaml')
print(OmegaConf.to_yaml(conf))
Explanation: Access and manipulation
Input yaml file:
End of explanation
conf.server.port
Explanation: Object style access:
End of explanation
conf['log']['rotation']
Explanation: dictionary style access
End of explanation
conf.users[0]
Explanation: items in list
End of explanation
conf.server.port = 81
Explanation: Changing existing keys
End of explanation
conf.server.hostname = "localhost"
Explanation: Adding new keys
End of explanation
conf.database = {'hostname': 'database01', 'port': 3306}
Explanation: Adding a new dictionary
End of explanation
conf.get('missing_key', 'a default value')
Explanation: providing default values
End of explanation
from omegaconf import MissingMandatoryValue
try:
conf.log.file
except MissingMandatoryValue as exc:
print(exc)
Explanation: Accessing mandatory values
Accessing fields with the value ??? will cause a MissingMandatoryValue exception.
Use this to indicate that the value must be set before accessing.
End of explanation
conf = OmegaConf.load('../source/config_interpolation.yaml')
print(OmegaConf.to_yaml(conf))
# Primitive interpolation types are inherited from the referenced value
print("conf.client.server_port: ", conf.client.server_port, type(conf.client.server_port).__name__)
# Composite interpolation types are always string
print("conf.client.url: ", conf.client.url, type(conf.client.url).__name__)
Explanation: Variable interpolation
OmegaConf supports variable interpolation. Interpolations are evaluated lazily on access.
Config node interpolation
The interpolated variable can be the path to another node in the configuration, and in that case the value will be the value of that node.
This path may use either dot-notation (foo.1), brackets ([foo][1]) or a mix of both (foo[1], [foo].1).
Interpolations are absolute by default. Relative interpolations are prefixed by one or more dots: the first dot denotes the level of the node itself and each additional dot goes one level up the parent hierarchy, e.g. ${..foo} points to the foo sibling of the parent of the current node.
End of explanation
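# A small illustrative config (added here, not from the original notebook)
# for the relative-interpolation syntax described above: one leading dot
# stays at the current level, each extra dot climbs one parent.
rel = OmegaConf.create(
    {
        "server": {
            "host": "localhost",
            "client": {"url": "http://${..host}:${...port}"},
        },
        "port": 8080,
    }
)
print(rel.server.client.url)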
print(OmegaConf.to_yaml(conf, resolve=True))
Explanation: to_yaml() will resolve interpolations if resolve=True is passed
End of explanation
cfg = OmegaConf.create(
{
"plans": {"A": "plan A", "B": "plan B"},
"selected_plan": "A",
"plan": "${plans[${selected_plan}]}",
}
)
print(f"Default: cfg.plan = {cfg.plan}")
cfg.selected_plan = "B"
print(f"After selecting plan B: cfg.plan = {cfg.plan}")
Explanation: Interpolations may be nested, enabling more advanced behavior like dynamically selecting a sub-config:
End of explanation
cfg = OmegaConf.create(
{
"john": {"height": 180, "weight": 75},
"player": "${john}",
}
)
(cfg.player.height, cfg.player.weight)
Explanation: Interpolated nodes can be any node in the config, not just leaf nodes:
End of explanation
# Let's set up the environment first (only needed for this demonstration)
import os
os.environ['USER'] = 'omry'
Explanation: Environment variable interpolation
Access to environment variables is supported using oc.env.
End of explanation
conf = OmegaConf.load('../source/env_interpolation.yaml')
print(OmegaConf.to_yaml(conf))
conf = OmegaConf.load('../source/env_interpolation.yaml')
print(OmegaConf.to_yaml(conf, resolve=True))
Explanation: Here is an example config file interpolates with the USER environment variable:
End of explanation
cfg = OmegaConf.create(
{
"database": {
"password1": "${oc.env:DB_PASSWORD,password}",
"password2": "${oc.env:DB_PASSWORD,12345}",
"password3": "${oc.env:DB_PASSWORD,null}",
},
}
)
print(repr(cfg.database.password1))
print(repr(cfg.database.password2))
print(repr(cfg.database.password3))
Explanation: You can specify a default value to use in case the environment variable is not set.
In such a case, the default value is converted to a string using str(default), unless it is null (representing Python None) - in which case None is returned.
The following example falls back to default passwords when DB_PASSWORD is not defined:
End of explanation
cfg = OmegaConf.create(
{
"database": {
"port": "${oc.decode:${oc.env:DB_PORT}}",
"nodes": "${oc.decode:${oc.env:DB_NODES}}",
"timeout": "${oc.decode:${oc.env:DB_TIMEOUT,null}}",
}
}
)
os.environ["DB_PORT"] = "3308" # integer
os.environ["DB_NODES"] = "[host1, host2, host3]" # list
os.environ.pop("DB_TIMEOUT", None) # unset variable
print("port (int):", repr(cfg.database.port))
print("nodes (list):", repr(cfg.database.nodes))
print("timeout (missing variable):", repr(cfg.database.timeout))
os.environ["DB_TIMEOUT"] = "${.port}"
print("timeout (interpolation):", repr(cfg.database.timeout))
Explanation: Decoding strings with interpolations
With oc.decode, strings can be converted into their corresponding data types using the OmegaConf grammar.
This grammar recognizes typical data types like bool, int, float, dict and list,
e.g. "true", "1", "1e-3", "{a: b}", "[a, b, c]".
It will also resolve interpolations like "${foo}", returning the corresponding value of the node.
Note that:
When providing as input to oc.decode a string that is meant to be decoded into another string, in general
the input string should be quoted (since only a subset of characters are allowed by the grammar in unquoted
strings). For instance, a proper string interpolation could be: "'Hi! My name is: ${name}'" (with extra quotes).
None (written as null in the grammar) is the only valid non-string input to oc.decode (returning None in that case)
This resolver can be useful for instance to parse environment variables:
End of explanation
OmegaConf.register_new_resolver("plus_10", lambda x: x + 10)
conf = OmegaConf.create({'key': '${plus_10:990}'})
conf.key
Explanation: Custom interpolations
You can add additional interpolation types using custom resolvers.
The example below creates a resolver that adds 10 to the given value.
End of explanation
OmegaConf.register_new_resolver("plus", lambda x, y: x + y)
conf = OmegaConf.create({"a": 1, "b": 2, "a_plus_b": "${plus:${a},${b}}"})
conf.a_plus_b
Explanation: You can take advantage of nested interpolations to perform custom operations over variables:
End of explanation
import random
random.seed(1234)
OmegaConf.register_new_resolver("cached", random.randint, use_cache=True)
OmegaConf.register_new_resolver("uncached", random.randint)
cfg = OmegaConf.create(
{
"uncached": "${uncached:0,10000}",
"cached_1": "${cached:0,10000}",
"cached_2": "${cached:0, 10000}",
"cached_3": "${cached:0,${uncached}}",
}
)
# not the same since the cache is disabled by default
print("Without cache:", cfg.uncached, "!=", cfg.uncached)
# same value on repeated access thanks to the cache
print("With cache:", cfg.cached_1, "==", cfg.cached_1)
# same value as `cached_1` since the input is the same
print("With cache (same input):", cfg.cached_2, "==", cfg.cached_1)
# same value even if `uncached` changes, because the cache is based
# on the string literal "${uncached}" that remains the same
print("With cache (interpolation):", cfg.cached_3, "==", cfg.cached_3)
Explanation: By default a custom resolver is called on every access, but it is possible to cache its output
by registering it with use_cache=True.
This may be useful either for performance reasons or to ensure the same value is always returned.
Note that the cache is based on the string literals representing the resolver's inputs, and not
the inputs themselves:
End of explanation
base_conf = OmegaConf.load('../source/example2.yaml')
print(OmegaConf.to_yaml(base_conf))
second_conf = OmegaConf.load('../source/example3.yaml')
print(OmegaConf.to_yaml(second_conf))
from omegaconf import OmegaConf
import sys
# Merge configs:
conf = OmegaConf.merge(base_conf, second_conf)
# Simulate command line arguments
sys.argv = ['program.py', 'server.port=82']
# Merge with cli arguments
conf.merge_with_cli()
print(OmegaConf.to_yaml(conf))
Explanation: Merging configurations
Merging configurations enables the creation of reusable configuration files for each logical component instead of a single config file for each variation of your task.
Machine learning experiment example:
python
conf = OmegaConf.merge(base_cfg, model_cfg, optimizer_cfg, dataset_cfg)
Web server configuration example:
python
conf = OmegaConf.merge(server_cfg, plugin1_cfg, site1_cfg, site2_cfg)
The following example creates two configs from files, and one from the cli. It then combines them into a single object. Note how the port changes to 82, and how the users lists are combined.
End of explanation |
9,428 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<span style="color
Step1: A genome file
You will need a genome file in fasta format (optionally it can be gzip compressed).
Step2: Initialize the tool
You can generate single or paired-end data, and you will likely want to restrict the size of selected fragments to be within an expected size selection window, as is typically done in empirical data sets. Here I select all fragments occuring between two restriction enzymes where the intervening fragment is 300-500bp in length. I then ask that the analysis returns the digested fragments as 150bp fastq reads, and to provide 10 copies of each one.
Step3: Check results | Python Code:
# conda install ipyrad -c bioconda
import ipyrad.analysis as ipa
Explanation: <span style="color:gray">ipyrad-analysis toolkit: </span> digest genomes
The purpose of this tool is to digest a genome file in silico using the same restriction enzymes that were used for an empirical data set to attempt to extract homologous data from the genome file. This can be a useful procedure for adding additional outgroup samples to a data set.
Required software
End of explanation
genome = "/home/deren/Downloads/Ahypochondriacus_459_v2.0.fa"
Explanation: A genome file
You will need a genome file in fasta format (optionally it can be gzip compressed).
End of explanation
digest = ipa.digest_genome(
fasta=genome,
name="amaranthus-digest",
workdir="digested_genomes",
re1="CTGCAG",
re2="AATTC",
ncopies=10,
readlen=150,
min_size=300,
max_size=500,
)
fio = open(genome)
scaffolds = fio.read().split(">")[1:]
ordered = sorted(scaffolds, key=lambda x: len(x), reverse=True)
len(ordered[0])
digest.run()
Explanation: Initialize the tool
You can generate single or paired-end data, and you will likely want to restrict the size of selected fragments to be within an expected size selection window, as is typically done in empirical data sets. Here I select all fragments occurring between two restriction enzymes where the intervening fragment is 300-500bp in length. I then ask that the analysis returns the digested fragments as 150bp fastq reads, and to provide 10 copies of each one.
End of explanation
ll digested_genomes/
Explanation: Check results
End of explanation |
9,429 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Stock Trading Strategy Backtesting
Author
Step1: Download data
This project will use the historical daily closing quotes data for S&P 500 index from January 3,2000 to December 9,2016, which can be found on Quandl at this link. S&P 500 index is generally considered to be a good proxy for the whole stock market in the United States. Quandl provides an API which can be used to download the data.
API
Step2: Clean data
The time series data for the S&P 500 index is now in the dataframe sp500 which automatically has index as datetime objects. I will keep data in the column Close and drop any other data that this project is not going to use.
Step3: Plot the closing quotes over time to get a fisrt impression about the historical market trend by using plotly package. In the following graph, not only can you observe the trend from the start date to the end date but also use the range selectors in the upper left corner and the range slider at the bottom to see the trend in a specific period of time.
Step4: Generate trend lines
The trading strategy I am going to test is based on both a two-month(i.e., 42 trading days) trend line and a one-year(i.e., 252 trading days) trend line. Trend line is formed of the moving average of the index level for the corresponding time period. To generate the two kinds of trend lines, first, the data of moving average of the S&P 500 index in respective period should be calculated. Two new columns are added to the dataframe sp500, the column 42d contains values for the 42-day trend and the column 252d contains the 252-day trend data.
Step5: Notice that these two new columns have fewer entries because they start having data only when 42 and 252 observation points, respectively, are available for the first time to calculate the moving average. Then, plot these two trend lines in a single figure with the historical level of S&P 500 index. You can still use the range selectors and the range slider to observe a certain period. Also, a trend line will disappear if you click on the corresponding legend in the upper right corner of the graph. This function makes it easier to get some insights from those upward and downward trends.
Step6: Generate trading signals
The stock investment strategy which is going to be tested is based on trading signals generated by the 42-day and 252-day trends created above. The following rule generates trading signals
Step7: After the differences between the 42-day trend and the 252-day trend being calculated, the trading signals are generated according to the rule. The signal "1" means to have long positions in the index and get the market returns. The signal "0" means not to buy or sell the index and make no returns. The signal "-1" means to go short on the index and get the negative market returns.
Step8: The result shows that from January 3,2000 to December 9,2016, there were 1935 trading days when the 42-day trend was more than 50 points above the 252-day trend. On 950 trading days, the 42-day trend lies more than 50 points below the 252-day trend. The change of signals over time can be seen in the following graph.
Step9: Does the trading strategy perform well?
Test the performance of the investment strategy based on trading signals generated above by comparing the cumulative, continuous market returns with the cumulative, continuous returns made by the strategy. Daily log returns are calculated here.
Step10: Plot the market returns and the returns of the strategy over time to see the performance of the trading strategy constructed on trend lines. As before, You can use the range selectors and the range slider to check whether the strategy works well in a certain period of time. | Python Code:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from plotly.offline import init_notebook_mode,iplot
import plotly.graph_objs as go
%matplotlib inline
init_notebook_mode(connected=True)
Explanation: Stock Trading Strategy Backtesting
Author: Long Shangshang (Cheryl)
Date: December 15,2016
Summary: Test a trading system based on market trend signals. Does technical analysis really work?
Description: Technical analysis is a security analysis methodology for forecasting the direction of prices through the study of past market data, primarily price and volume. Many financial professionals and interested amateurs apply technical analysis to stock investments. They predict future price trends and develop trading strategies using historical market data.In this project, I am going to test the performance of an investment strategy based on trading signals generated by two-month and one-year trend lines.
Outline:
* Download data: get the daily closing quotes data for S&P 500 index
* Clean data: observe the historical market trend
* Generate trend lines: develop the two-month and one-year trend lines
* Generate trading signals: form the trading signals based on a rule
* Backtest the strategy: see if the strategy outperforms the market
This Jupyter notebook was created with a lot of help from Spencer Lyon and Chase Coleman for the NYU Stern course Data Bootcamp.
End of explanation
import quandl
sp500=quandl.get("YAHOO/INDEX_GSPC",start_date="2000-01-03",end_date="2016-12-09")
sp500.info()
sp500.head()
type(sp500.index)
Explanation: Download data
This project will use the historical daily closing quotes data for S&P 500 index from January 3,2000 to December 9,2016, which can be found on Quandl at this link. S&P 500 index is generally considered to be a good proxy for the whole stock market in the United States. Quandl provides an API which can be used to download the data.
API: YAHOO/INDEX_GSPC
Parameters:
start_date: Specified in format year-month-day
end_date: Specified in format year-month-day
Note: The Quandl Python package should be installed already. If not, enter
pip install quandl from the command line (command prompt on windows, terminal on mac) to install it.
End of explanation
sp500=sp500.drop(['Open','High','Low','Volume','Adjusted Close'],axis=1)
sp500.head()
Explanation: Clean data
The time series data for the S&P 500 index is now in the dataframe sp500 which automatically has index as datetime objects. I will keep data in the column Close and drop any other data that this project is not going to use.
End of explanation
trace = go.Scatter(x=sp500.index,
y=sp500['Close'])
data=[trace]
layout = dict(
width=1000,
height=600,
title='Historical levels of the S&P 500 index',
xaxis=dict(
rangeselector=dict(
buttons=list([
dict(count=1,
label='1y',
step='year',
stepmode='backward'),
dict(count=5,
label='5y',
step='year',
stepmode='backward'),
dict(count=10,
label='10y',
step='year',
stepmode='backward'),
dict(step='all')
])
),
rangeslider=dict(),
type='date'
)
)
fig = dict(data=data, layout=layout)
iplot(fig)
Explanation: Plot the closing quotes over time to get a fisrt impression about the historical market trend by using plotly package. In the following graph, not only can you observe the trend from the start date to the end date but also use the range selectors in the upper left corner and the range slider at the bottom to see the trend in a specific period of time.
End of explanation
sp500['42d']=sp500['Close'].rolling(window=42).mean()
sp500['252d']=sp500['Close'].rolling(window=252).mean()
sp500.tail()
Explanation: Generate trend lines
The trading strategy I am going to test is based on both a two-month(i.e., 42 trading days) trend line and a one-year(i.e., 252 trading days) trend line. Trend line is formed of the moving average of the index level for the corresponding time period. To generate the two kinds of trend lines, first, the data of moving average of the S&P 500 index in respective period should be calculated. Two new columns are added to the dataframe sp500, the column 42d contains values for the 42-day trend and the column 252d contains the 252-day trend data.
End of explanation
trace1 = go.Scatter(x=sp500.index,
y=sp500['Close'],
name='close')
trace2 = go.Scatter(x=sp500.index,
y=sp500['42d'],
name='42d')
trace3 = go.Scatter(x=sp500.index,
y=sp500['252d'],
name='252d')
data=[trace1,trace2,trace3]
layout = dict(
width=1000,
height=600,
title='The S&P 500 index with 42-day and 252-day trend lines ',
xaxis=dict(
rangeselector=dict(
buttons=list([
dict(count=1,
label='1y',
step='year',
stepmode='backward'),
dict(count=5,
label='5y',
step='year',
stepmode='backward'),
dict(count=10,
label='10y',
step='year',
stepmode='backward'),
dict(step='all')
])
),
rangeslider=dict(),
type='date'
)
)
fig = dict(data=data, layout=layout)
iplot(fig)
Explanation: Notice that these two new columns have fewer entries because they start having data only when 42 and 252 observation points, respectively, are available for the first time to calculate the moving average. Then, plot these two trend lines in a single figure with the historical level of S&P 500 index. You can still use the range selectors and the range slider to observe a certain period. Also, a trend line will disappear if you click on the corresponding legend in the upper right corner of the graph. This function makes it easier to get some insights from those upward and downward trends.
End of explanation
sp500['42-252']=sp500['42d']-sp500['252d']
sp500['42-252'].tail()
Explanation: Generate trading signals
The stock investment strategy which is going to be tested is based on trading signals generated by the 42-day and 252-day trends created above. The following rule generates trading signals:
* Buy: when the 42-day trend is for the first time 50 points above the 252-day trend
* Wait: when the 42-day trend is within a range of +/- 50 points around the 252-day trend
* Sell: when the 42-day trend is for the first time 50 points below the 252-day trend
In this project, it is assumed that an investor can buy or sell the S&P 500 index directly. Transaction costs that would be incurred in the real market are not considered here.
End of explanation
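# An equivalent one-step encoding of the same three-way rule (added for
# illustration, not in the original notebook); it should reproduce the
# nested np.where construction used below.
signal_check = np.select(
    condlist=[sp500['42-252'] > 50, sp500['42-252'] < -50],
    choicelist=[1, -1],
    default=0,
)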
sp500['Signal']=np.where(sp500['42-252']>50,1,0)
sp500['Signal']=np.where(sp500['42-252']<-50,-1,sp500['Signal'])
sp500['Signal'].value_counts()
Explanation: After the differences between the 42-day trend and the 252-day trend being calculated, the trading signals are generated according to the rule. The signal "1" means to have long positions in the index and get the market returns. The signal "0" means not to buy or sell the index and make no returns. The signal "-1" means to go short on the index and get the negative market returns.
End of explanation
figure,ax=plt.subplots()
sp500['Signal'].plot(ax=ax,lw=1.3,fontsize=10,
ylim=[-1.1,1.1],
title='Trading signals over time',
grid=True)
Explanation: The result shows that from January 3,2000 to December 9,2016, there were 1935 trading days when the 42-day trend was more than 50 points above the 252-day trend. On 950 trading days, the 42-day trend lies more than 50 points below the 252-day trend. The change of signals over time can be seen in the following graph.
End of explanation
sp500['Market returns']=np.log(sp500['Close']/sp500['Close'].shift(1))
sp500['Strategy returns']=sp500['Signal'].shift(1)*sp500['Market returns']
sp500[['Market returns','Strategy returns']].cumsum().apply(np.exp).tail()
Explanation: Does the trading strategy perform well?
Test the performance of the investment strategy based on trading signals generated above by comparing the cumulative, continuous market returns with the cumulative, continuous returns made by the strategy. Daily log returns are calculated here.
End of explanation
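# A few summary statistics added for illustration; the 252-trading-day
# annualisation is a conventional assumption, not from the original notebook.
perf = sp500[['Market returns', 'Strategy returns']]
print('Total gross return:')
print(perf.sum().apply(np.exp))
print('Annualised volatility:')
print(perf.std() * np.sqrt(252))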
return1 = go.Scatter(x=sp500.index,
y=sp500['Market returns'].cumsum().apply(np.exp),
name='Market')
return2 = go.Scatter(x=sp500.index,
y=sp500['Strategy returns'].cumsum().apply(np.exp),
name='Strategy')
data=[return1,return2]
layout = dict(
width=1000,
height=600,
title='The market returns vs the strategy returns ',
xaxis=dict(
rangeselector=dict(
buttons=list([
dict(count=1,
label='1y',
step='year',
stepmode='backward'),
dict(count=5,
label='5y',
step='year',
stepmode='backward'),
dict(count=10,
label='10y',
step='year',
stepmode='backward'),
dict(step='all')
])
),
rangeslider=dict(),
type='date'
)
)
fig = dict(data=data, layout=layout)
iplot(fig)
Explanation: Plot the market returns and the returns of the strategy over time to see the performance of the trading strategy constructed on trend lines. As before, You can use the range selectors and the range slider to check whether the strategy works well in a certain period of time.
End of explanation |
9,430 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
I have a Series that looks like: | Problem:
import pandas as pd
s = pd.Series([1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,0.98,0.93],
index=['146tf150p','havent','home','okie','thanx','er','anything','lei','nite','yup','thank','ok','where','beerage','anytime','too','done','645','tick','blank'])
import numpy as np
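# np.lexsort sorts by the *last* key first, so the call below orders the
# Series primarily by its values and breaks ties between equal values
# alphabetically by index label.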
def g(s):
return s.iloc[np.lexsort([s.index, s.values])]
result = g(s.copy()) |
9,431 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Week 1 - Getting Started
Step1: Python Summary
Further information
More information is usually available with the help function. Using ? brings up the same information in ipython.
Using the dir function lists all the options available from a variable.
help(np)
np?
dir(np)
Variables
A variable is simply a name for something. One of the simplest tasks is printing the value of a variable.
Printing can be customized using the format method on strings.
Step2: Types
A number of different types are available as part of the standard library. The following links to the documentation provide a summary.
https
Step3: Conditionals
https
Step4: Loops
https
Step5: Functions
https
Step6: Numpy
http
Step7: Exercises
Step8: Print the variable a in all uppercase
Print the variable a with every other letter in uppercase
Print the variable a in reverse, i.e. god yzal ...
Print the variable a with the words reversed, i.e. ehT kciuq ...
Print the variable b in scientific notation with 4 decimal places
Step9: Print the items in people as comma seperated values
Sort people so that they are ordered by age, and print
Sort people so that they are ordered by age first, and then their names, i.e. Bob and Charlie should be next to each other due to their ages with Bob first due to his name.
Write a function that returns the first n prime numbers
Given a list of coordinates calculate the distance using the (Euclidean distance)[https
Step10: Print the standard deviation of each row in a numpy array
Print only the values greater than 90 in a numpy array
From a numpy array display the values in each row in a seperate plot (the subplots method may be useful) | Python Code:
import numpy as np
print("Numpy:", np.__version__)
Explanation: Week 1 - Getting Started
End of explanation
location = 'Bethesda'
zip_code = 20892
elevation = 71.9
print("We're in", location, "zip code", zip_code, ", ", elevation, "m above sea level")
print("We're in " + location + " zip code " + str(zip_code) + ", " + str(elevation) + "m above sea level")
print("We're in {0} zip code {1}, {2}m above sea level".format(location, zip_code, elevation))
print("We're in {0} zip code {1}, {2:.2e}m above sea level".format(location, zip_code, elevation))
Explanation: Python Summary
Further information
More information is usually available with the help function. Using ? brings up the same information in ipython.
Using the dir function lists all the options available from a variable.
help(np)
np?
dir(np)
Variables
A variable is simply a name for something. One of the simplest tasks is printing the value of a variable.
Printing can be customized using the format method on strings.
End of explanation
# Sequences
# Lists
l = [1,2,3,4,4]
print("List:", l, len(l), 1 in l)
# Tuples
t = (1,2,3,4,4)
print("Tuple:", t, len(t), 1 in t)
# Sets
s = set([1,2,3,4,4])
print("Set:", s, len(s), 1 in s)
# Dictionaries
# Dictionaries map hashable values to arbitrary objects
d = {'a': 1, 'b': 2, 3: 's', 2.5: 't'}
print("Dictionary:", d, len(d), 'a' in d)
Explanation: Types
A number of different types are available as part of the standard library. The following links to the documentation provide a summary.
https://docs.python.org/3.5/library/stdtypes.html
https://docs.python.org/3.5/tutorial/datastructures.html
Other types are available from other packages and can be created to support special situations.
A variety of different methods are available depending on the type.
End of explanation
import random
if random.random() < 0.5:
print("Should be printed 50% of the time")
elif random.random() < 0.5:
print("Should be primted 25% of the time")
else:
print("Should be printed 25% of the time")
Explanation: Conditionals
https://docs.python.org/3.5/tutorial/controlflow.html
End of explanation
for i in ['a', 'b', 'c', 'd']:
print(i)
else:
print('Else')
for i in ['a', 'b', 'c', 'd']:
if i == 'b':
continue
elif i == 'd':
break
print(i)
else:
print('Else')
Explanation: Loops
https://docs.python.org/3.5/tutorial/controlflow.html
End of explanation
def is_even(n):
return not n % 2
print(is_even(1), is_even(2))
def first_n_squared_numbers(n=5):
return [i**2 for i in range(1,n+1)]
print(first_n_squared_numbers())
def next_fibonacci(status=[]):
if len(status) < 2:
status.append(1)
return 1
status.append(status[-2] + status[-1])
return status[-1]
print(next_fibonacci(), next_fibonacci(), next_fibonacci(), next_fibonacci(), next_fibonacci(), next_fibonacci())
def accepts_anything(*args, **kwargs):
for a in args:
print(a)
for k in kwargs:
print(k, kwargs[k])
accepts_anything(1,2,3,4, a=1, b=2, c=3)
# For quick and simple functions a lambda expression can be a useful approach.
# Standard functions are always a valid alternative and often make code clearer.
f = lambda x: x**2
print(f(5))
people = [{'name': 'Alice', 'age': 30},
{'name': 'Bob', 'age': 35},
{'name': 'Charlie', 'age': 35},
{'name': 'Dennis', 'age': 25}]
print(people)
people.sort(key=lambda x: x['age'])
print(people)
Explanation: Functions
https://docs.python.org/3.5/tutorial/controlflow.html
End of explanation
a = np.array([[1,2,3], [4,5,6], [7,8,9]])
print(a)
print(a[1:,1:])
a = a + 2
print(a)
a = a + np.array([1,2,3])
print(a)
a = a + np.array([[10],[20],[30]])
print(a)
print(a.mean(), a.mean(axis=0), a.mean(axis=1))
import matplotlib.pyplot as plt
%matplotlib inline
x = np.linspace(0, 3*2*np.pi, 500)
plt.plot(x, np.sin(x))
plt.show()
Explanation: Numpy
http://docs.scipy.org/doc/numpy/reference/
End of explanation
a = "The quick brown fox jumps over the lazy dog"
b = 1234567890.0
Explanation: Exercises
End of explanation
people = [{'name': 'Bob', 'age': 35},
{'name': 'Alice', 'age': 30},
{'name': 'Eve', 'age': 20},
{'name': 'Gail', 'age': 30},
{'name': 'Dennis', 'age': 25},
{'name': 'Charlie', 'age': 35},
{'name': 'Fred', 'age': 25},]
Explanation: Print the variable a in all uppercase
Print the variable a with every other letter in uppercase
Print the variable a in reverse, i.e. god yzal ...
Print the variable a with the words reversed, i.e. ehT kciuq ...
Print the variable b in scientific notation with 4 decimal places
End of explanation
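# One possible set of answers (a sketch, not the only solutions) to the
# string-formatting exercises listed above, reusing the variables a and b.
print(a.upper())
print(''.join(c.upper() if i % 2 == 0 else c for i, c in enumerate(a)))
print(a[::-1])
print(' '.join(word[::-1] for word in a.split()))
print('{0:.4e}'.format(b))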
coords = [(0,0), (10,5), (10,10), (5,10), (3,3), (3,7), (12,3), (10,11)]
Explanation: Print the items in people as comma separated values
Sort people so that they are ordered by age, and print
Sort people so that they are ordered by age first, and then their names, i.e. Bob and Charlie should be next to each other due to their ages with Bob first due to his name.
Write a function that returns the first n prime numbers
Given a list of coordinates calculate the distance using the [Euclidean distance](https://en.wikipedia.org/wiki/Euclidean_distance)
Given a list of coordinates arrange them in such a way that the distance traveled is minimized (the itertools module may be useful).
End of explanation
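# Possible sketches (one of several valid answers) for two of the exercises
# listed above: the first n primes, and the Euclidean length of a path
# through the coordinate list `coords` defined above.
def first_n_primes(n):
    primes = []
    candidate = 2
    while len(primes) < n:
        if all(candidate % p for p in primes):
            primes.append(candidate)
        candidate += 1
    return primes

def path_length(points):
    return sum(((x2 - x1)**2 + (y2 - y1)**2) ** 0.5
               for (x1, y1), (x2, y2) in zip(points, points[1:]))

print(first_n_primes(5))
print(path_length(coords))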
np.random.seed(0)
a = np.random.randint(0, 100, size=(10,20))
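# Possible one-liners (a sketch, not the only answers) for the numpy
# exercises stated just below, using the random array `a` created above.
print(a.std(axis=1))    # standard deviation of each row
print(a[a > 90])        # values greater than 90
fig, axes = plt.subplots(a.shape[0], 1, figsize=(6, 20), sharex=True)
for row, ax in zip(a, axes):
    ax.plot(row)
plt.show()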
Explanation: Print the standard deviation of each row in a numpy array
Print only the values greater than 90 in a numpy array
From a numpy array display the values in each row in a separate plot (the subplots method may be useful)
End of explanation |
9,432 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lesson 9
Python Basic, Lesson 4, v1.0.1, 2016.12 by David.Yi
Python Basic, Lesson 4, v1.0.2, 2017.03 modified by Yimeng.Zhang
v1.1,2020.4 5, edit by David Yi
Topics covered this time
Introduction to the datetime library for dates, plus the related datetime and time modules: getting dates, converting between strings and dates, date format codes, and date arithmetic
Think about it: how many days are left until a given date
Date handling
datetime is Python's standard library for handling dates and times; it is used to obtain dates, do date arithmetic, and so on;
Python has quite a few date-related standard libraries (a little messy), such as datetime, time and calendar; this is a side effect of Python's long history, and there are also good third-party libraries that smooth over the overlap between them.
The datetime library includes the main objects date, time, datetime (date and time), tzinfo (time zones) and timedelta (time spans).
Get the current date and time: now = datetime.now()
Difference between a timestamp and a date: a timestamp is more precise, while a date is only year-month-day; use whichever is needed, and in most cases a date is enough;
time handles time more precisely, expressed as timestamps;
A timestamp is defined as the total number of seconds elapsed since 00:00:00 GMT on January 1, 1970, so every timestamp is unique.
Step1: Date library - converting between strings and dates
String to date: datetime.strptime()
Date to string: datetime.strftime()
Date string formats, for example
python
cday1 = datetime.now().strftime('%a, %b %d %H
Step2: Date calculations
Adding to or subtracting from a date and time simply means shifting a datetime forwards or backwards to obtain a new datetime; this requires importing the timedelta class from datetime.
Step3: Think about it
Compute a countdown: how many days from now until New Year's Day 2021 | Python Code:
# Show today's date
# Compare the differences between the three kinds of results below
from datetime import datetime, date
import time
print(datetime.now())
print(date.today())
print(time.time())
# The data types of the different date/time objects
print(type(datetime.now()))
print(type(date.today()))
print(type(time.time()))
# Print timestamps repeatedly and see how many milliseconds apart they are
# Because the computer runs so fast, the printed values will most likely be identical
for i in range(10):
print(time.time())
# If the loop only needs to count and the loop variable is unused, the for loop can be written a bit more concisely
for _ in range(10):
print(time.time())
# Use time() to time 100,000 squarings and see whose computer is faster
# This works as a simple performance benchmark
import time
a = time.time()
for i in range(100000):
j = i * i
b = time.time()
print('time:', b-a)
Explanation: Lesson 9
Python Basic, Lesson 4, v1.0.1, 2016.12 by David.Yi
Python Basic, Lesson 4, v1.0.2, 2017.03 modified by Yimeng.Zhang
v1.1,2020.4 5, edit by David Yi
Topics covered this time
Introduction to the datetime library for dates, plus the related datetime and time modules: getting dates, converting between strings and dates, date format codes, and date arithmetic
Think about it: how many days are left until a given date
Date handling
datetime is Python's standard library for handling dates and times; it is used to obtain dates, do date arithmetic, and so on;
Python has quite a few date-related standard libraries (a little messy), such as datetime, time and calendar; this is a side effect of Python's long history, and there are also good third-party libraries that smooth over the overlap between them.
The datetime library includes the main objects date, time, datetime (date and time), tzinfo (time zones) and timedelta (time spans).
Get the current date and time: now = datetime.now()
Difference between a timestamp and a date: a timestamp is more precise, while a date is only year-month-day; use whichever is needed, and in most cases a date is enough;
time handles time more precisely, expressed as timestamps;
A timestamp is defined as the total number of seconds elapsed since 00:00:00 GMT on January 1, 1970, so every timestamp is unique.
End of explanation
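# A small added example (not in the original lesson): converting between a
# Unix timestamp and a datetime goes through fromtimestamp() / timestamp().
from datetime import datetime
import time

ts = time.time()
print(datetime.fromtimestamp(ts))    # timestamp -> local datetime
print(datetime.now().timestamp())    # datetime -> timestamp (Python 3.3+)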
# Convert a string to a date
day1 = datetime.strptime('2017-1-2 18:19:59', '%Y-%m-%d %H:%M:%S')
print(day1)
print(type(day1))
# Convert a date to a string
day1 = datetime.now().strftime('%Y, %W, %m %d %H:%M')
print(day1)
print(type(day1))
# Convert a date to strings in various formats
cday1 = datetime.now().strftime('%a, %b %d %H:%M')
cday2 = datetime.now().strftime('%A, %b %d %H:%M, %j')
cday3 = datetime.now().strftime('%Y-%m-%d %H:%M:%S')
print(cday1)
print(cday2)
print(cday3)
# Convert the local time to a string
t1 = time.localtime()
print(t1)
t2 = time.strftime('%Y-%m-%d %H:%M:%S',t1)
print(t2)
print(type(t2))
Explanation: Date library - converting between strings and dates
String to date: datetime.strptime()
Date to string: datetime.strftime()
Date string formats, for example
python
cday1 = datetime.now().strftime('%a, %b %d %H:%M')
cday2 = datetime.now().strftime('%A, %b %d %H:%M, %j')
cday3 = datetime.now().strftime('%Y-%m-%d %H:%M:%S')
Date library - date format codes
%a abbreviated weekday name in English
%A full weekday name in English
%b abbreviated month name in English
%B full month name in English
%c local current date and time
%d day of the month, 1-31
%H hour, 00-23
%I hour, 01-12
%m month, 01-12
%M minute, 00-59
%j day of the year, counted from the first day of the year
%w day of the week, 0-6 (0 is Sunday)
%W week of the year for the current day, with Monday as the first day of the week
%x local date
%X local time
%y year without century, 0-99
%Y full year
End of explanation
# Date arithmetic
# timedelta(days=0, seconds=0, microseconds=0, milliseconds=0, minutes=0, hours=0, weeks=0)
from datetime import timedelta
now = datetime.now()
# Shift backwards by ten hours
now1 = now + timedelta(hours=-10)
print(now)
print(now1)
# Combine several time components in one calculation
now = datetime.now()
now1 = now + timedelta(hours=10, minutes=20, seconds=-10)
print(now)
print(now1)
Explanation: Date calculations
Adding to or subtracting from a date and time simply means shifting a datetime forwards or backwards to obtain a new datetime; this requires importing the timedelta class from datetime.
End of explanation
from datetime import datetime
# Date format conversion
newyear = datetime.strptime('2036-06-07', '%Y-%m-%d')
# Get the current time
now = datetime.now()
# Time difference
timedelta = newyear-now
print(timedelta)
print(timedelta.days)
Explanation: Think about it
Compute a countdown: how many days from now until New Year's Day 2021
End of explanation |
9,433 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Binary system
Step1: First we define the symbols for the variables we will use
Step2: Moment of inertia
Step3: Now we enter the shape of the Keplerian orbit, in terms of $a, e, \varphi$ and $\varphi_0$
Step4: Now we substitute the values of the quadrupole-moment components in terms of the coordinates of the reduced system
Step5: Computing the derivative ${d M_{ij}}/{d\varphi}$
We compute the derivatives of the components of $M_{ij}$ with respect to $\varphi$, using Sympy's diff function
Step6: Computing the derivatives $\dot{M}_{ij}$, $\ddot{M}_{ij}$ and $\dddot{M}_{ij}$
Now we compute the derivatives with respect to time, using the chain rule, that is,
$$\frac{dM_{ij}}{dt}=\frac{dM_{ij}}{d\varphi}\frac{d\varphi}{dt},$$
where we will then substitute
$$\frac{d \varphi}{d t}=\frac{L}{\mu r^2},$$
with
$$L=\omega_0\mu a^2\sqrt{1-e^2}$$
Step7: We repeat the procedure to compute the second derivatives
Step8: Finally, we do the same to compute the third time derivative
Step9: Average radiated power
Next, we will compute the average radiated power by evaluating the expression
$$
\left\langle P\right\rangle=\frac{G}{5 c^5} \left\langle\dddot{Q}{}^{ij}\dddot{Q}{}^{ij}\right\rangle
$$
which, in our case of motion in the $xy$ plane, reduces to
$$\left\langle P\right\rangle=\frac{2G}{15c^5}\, \left\langle \left(\dddot{M}_{11}\right)^2+ \left(\dddot{M}_{22}\right)^2+3\left(\dddot{M}_{12}\right)^2 -\dddot{M}_{11}\dddot{M}_{22}\right\rangle .
$$
First, we will define the symbols for $G, c$ and $P$
Step10: And we substitute the values of the derivatives of the quadrupole moment into the expression for the power (not yet averaged!)
Step11: Again, using the chain rule, we can rewrite the required average in terms of an integral over the angle $\varphi$
Step12: As in the lecture notes
Step13: $$
\left\langle P\right\rangle=\frac{2 G^4 \mu^2 M^3}{15 c^5 a^5 (1 − e^2 )^5} \left\langle g(\varphi)\right\rangle
$$
Step14: Radiated angular momentum
Similarly, to compute the radiated angular momentum we use
$$
\left\langle\dot{L}{}^{i}\right\rangle=\frac{2 G}{5 c^5} \epsilon^{ijk}\left\langle\ddot{M}{}^{ja}\ddot{M}{}^{ka}\right\rangle .
$$
In our case of motion in the $xy$ plane, we find
Step15: The average is computed just as it was for the power, changing variables so as to finally integrate over the angle $\varphi$ | Python Code:
from sympy import *
init_printing(use_unicode=True)
Explanation: Binary system: radiated energy and angular momentum
End of explanation
a = Symbol('a', positive=True)
e = Symbol('e', positive=True)
mu = Symbol('mu', positive=True)
L = Symbol('L',positive=True)
omega0 = Symbol('omega0',positive=True)
phi = Symbol('varphi',positive=True)
phi0 = Symbol('varphi0',real=True)
Explanation: First we define the symbols for the variables we will use: $a,e,\mu,L,\omega_0,\varphi$ and $\varphi_0$:
End of explanation
M11 = Function('M11')(phi)
M12 = Function('M12')(phi)
M22 = Function('M22')(phi)
r = Function('r')(phi)
Explanation: Moment of inertia: $M_{ij}$
And now we define the symbols for the (non-vanishing) components of the quadrupole moment $M_{ij}$, as functions of $\varphi$
End of explanation
r = a*(1-e**2)/(1+e*cos(phi-phi0))
r
Explanation: Now we enter the shape of the Keplerian orbit, in terms of $a, e, \varphi$ and $\varphi_0$:
End of explanation
M11 = mu*(r**2)*cos(phi)**2
M12 = mu*(r**2)*sin(phi)*cos(phi)
M22 = mu*(r**2)*sin(phi)**2
M11,M12,M22
Explanation: Now we substitute the values of the quadrupole-moment components in terms of the coordinates of the reduced system:
End of explanation
dM11_phi = simplify(diff(M11,phi))
dM12_phi = simplify(diff(M12,phi))
dM22_phi = simplify(diff(M22,phi))
dM11_phi,dM12_phi,dM22_phi
Explanation: Computing the derivative ${d M_{ij}}/{d\varphi}$
We compute the derivatives of the components of $M_{ij}$ with respect to $\varphi$, using Sympy's diff function
End of explanation
L = omega0*mu*(a**2)*sqrt(1-e**2)
dphi_t = L/(mu*r**2)
dphi_t
dM11_t = simplify(dM11_phi*dphi_t)
dM12_t = simplify(dM12_phi*dphi_t)
dM22_t = simplify(dM22_phi*dphi_t)
dM11_t,dM12_t,dM22_t
Explanation: Computing the derivatives $\dot{M}_{ij}$, $\ddot{M}_{ij}$ and $\dddot{M}_{ij}$
Now we compute the derivatives with respect to time, using the chain rule, that is,
$$\frac{dM_{ij}}{dt}=\frac{dM_{ij}}{d\varphi}\frac{d\varphi}{dt},$$
where we will then substitute
$$\frac{d \varphi}{d t}=\frac{L}{\mu r^2},$$
with
$$L=\omega_0\mu a^2\sqrt{1-e^2}$$
End of explanation
dM11_tt = simplify(diff(dM11_t,phi)*dphi_t)
dM12_tt = simplify(diff(dM12_t,phi)*dphi_t)
dM22_tt = simplify(diff(dM22_t,phi)*dphi_t)
dM11_tt,dM12_tt,dM22_tt
Explanation: We repeat the procedure to compute the second derivatives:
End of explanation
dM11_ttt = simplify(diff(dM11_tt,phi)*dphi_t)
dM12_ttt = simplify(diff(dM12_tt,phi)*dphi_t)
dM22_ttt = simplify(diff(dM22_tt,phi)*dphi_t)
dM11_ttt,dM12_ttt,dM22_ttt
Explanation: Finally, we do the same to compute the third time derivative:
End of explanation
G = Symbol('G')
c = Symbol('c')
P = Symbol('P')
Explanation: Average radiated power
Next, we will compute the average radiated power by evaluating the expression
$$
\left\langle P\right\rangle=\frac{G}{5 c^5} \left\langle\dddot{Q}{}^{ij}\dddot{Q}{}^{ij}\right\rangle
$$
which, in our case of motion in the $xy$ plane, reduces to
$$\left\langle P\right\rangle=\frac{2G}{15c^5}\, \left\langle \left(\dddot{M}_{11}\right)^2+ \left(\dddot{M}_{22}\right)^2+3\left(\dddot{M}_{12}\right)^2 -\dddot{M}_{11}\dddot{M}_{22}\right\rangle .
$$
First, we will define the symbols for $G, c$ and $P$:
End of explanation
P = simplify(((2*G)/(15*c**5))*(dM11_ttt**2 + dM22_ttt**2 + 3*dM12_ttt**2 - dM11_ttt*dM22_ttt))
P
Explanation: And we substitute the values of the derivatives of the quadrupole moment into the expression for the power (not yet averaged!):
End of explanation
integrando = factor(simplify(((1-e**2)**(3/2))/(2*pi))*(P/(1+e*cos(phi-phi0))**2))
integrando
promedio_P = Symbol('PP')
import time
# tomamos el tiempo de este cálculo, porque es el que más tarda (~ 2 horas!)
st = time.time()
promedio_P =integrate(integrando,(phi,0,2*pi))
ft = time.time()
print(ft-st)
simplify(promedio_P)
Explanation: Again, using the chain rule, we can rewrite the required average in terms of an integral over the angle $\varphi$:
\begin{align}
\left\langle P(t)\right\rangle &= \frac{1}{T}\int_0^T P(t)\,dt \\
&= \frac{1}{T}\int_0^{2\pi} P(\varphi)\frac{dt}{d\varphi}\,d\varphi \\
&= \frac{1}{T}\frac{\mu}{L}\int_0^{2\pi} r^2(\varphi)P(\varphi)\,d\varphi \\
&= \frac{\mu a^2(1-e^2)^2}{TL}\int_0^{2\pi} \frac{P(\varphi)}{\left[1+e\cos(\varphi-\varphi_0)\right]^2}\,d\varphi \\
&= \frac{(1-e^2)^{3/2}}{2\pi}\int_0^{2\pi} \frac{P(\varphi)}{\left[1+e\cos(\varphi-\varphi_0)\right]^2}\,d\varphi.
\end{align}
End of explanation
M = Symbol('M')
x = Symbol('x')
x0 = Symbol('x0')
g = Function('g')(x)
promedio_g = Symbol('Pg')
g = 2*((1 + e*cos(phi-phi0))**4)*(24 + 13*e**2 + 48*e*cos(phi-phi0) + 11*e**2*cos(2*phi-2*phi0))
g
Explanation: As in the lecture notes:
$$
g(\varphi) := 2 [1 + e \cos(\varphi − \varphi_{0} )]^4 [ 24 + 13e^2 + 48e \cos(\varphi − \varphi_{0} ) + 11e^2 \cos(2\varphi − 2\varphi_{0} )]
$$
$$
\left\langle g(\varphi)\right\rangle =\frac{(1 − e^2 )^{3/2}}{2\pi}\int_{0}^{2\pi} \frac{g(\varphi)}{[1 + e cos(\varphi − \varphi_{0})]^2} d\varphi
$$
End of explanation
promedio_g = simplify(((1-e**2)**(3/2))/(2*pi))*integrate(g/(1+e*cos(phi-phi0))**2,(phi,0,2*pi))
trigsimp(promedio_g)
Explanation: $$
\left\langle P\right\rangle=\frac{2 G^4 \mu^2 M^3}{15 c^5 a^5 (1 − e^2 )^5} \left\langle g(\varphi)\right\rangle
$$
End of explanation
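# Reference value added for comparison (not part of the original notebook):
# the classic Peters-Mathews (1963) orbit-averaged power, which the averaged
# result above is expected to reproduce.
P_peters = (Rational(32, 5) * G**4 * mu**2 * M**3 / (c**5 * a**5)
            * (1 + Rational(73, 24) * e**2 + Rational(37, 96) * e**4)
            / (1 - e**2)**Rational(7, 2))
P_peters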
L_t = simplify(((4*G)/(5*c**5))*(dM12_tt*(dM22_ttt-dM11_ttt)))
L_t
Explanation: Radiated angular momentum
Similarly, to compute the radiated angular momentum we use
$$
\left\langle\dot{L}{}^{i}\right\rangle=\frac{2 G}{5 c^5} \epsilon^{ijk}\left\langle\ddot{M}{}^{ja}\ddot{M}{}^{ka}\right\rangle .
$$
In our case of motion in the $xy$ plane, we find:
\begin{align}
\left\langle\dot{L}{}^{3}\right\rangle &= \frac{2 G}{5 c^5} \left\langle\epsilon^{31k}\ddot{M}{}^{1l}\dddot{M}{}^{kl}+\epsilon^{32k}\ddot{M}{}^{2l}\dddot{M}{}^{kl}\right\rangle\\
&= \frac{2 G}{5 c^5} \left\langle\epsilon^{312}\ddot{M}{}^{1l}\dddot{M}{}^{2l}+\epsilon^{321}\ddot{M}{}^{2l}\dddot{M}{}^{1l}\right\rangle \\
&= \frac{2 G}{5 c^5} \left\langle\ddot{M}{}^{11}\dddot{M}{}^{21}+\ddot{M}{}^{12}\dddot{M}{}^{22}-\ddot{M}{}^{21}\dddot{M}{}^{11}-\ddot{M}{}^{22}\dddot{M}{}^{12}\right\rangle \\
&=\frac{4 G}{5 c^5} \left\langle\ddot{M}{}^{12}(\dddot{M}{}^{22}-\dddot{M}{}^{11})\right\rangle .
\end{align}
End of explanation
integrando = (1-e**2)**(3/2)/(2*pi)*L_t/(1+e*cos(phi-phi0))**2
promedio_L_t = trigsimp(integrate(integrando,(phi,0,2*pi)))
promedio_L_t
Explanation: The average is computed in the same way as for the power, performing a change of variable so that we finally integrate over the angle $\varphi$
End of explanation |
9,434 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1>The BurnMan Tutorial</h1>
Part 5
Step1: Each equality constraint can be a list of constraints, in which case equilibrate will loop over them. In the next code block we change the equality constraints to be a series of pressures which correspond to the total entropy obtained from the previous solve.
Step2: The object sols is now a 1D list of solution objects. Each one of these contains an equilibrium assemblage object that can be interrogated for any properties
Step3: The next code block plots these properties.
Step4: From the above figure, we can see that the proportion of orthopyroxene is decreasing rapidly and is exhausted near 13 GPa. In the next code block, we determine the exact pressure at which orthopyroxene is exhausted.
Step5: Equilibrating while allowing bulk composition to vary
Step6: First, we find the compositions of the three phases at the univariant.
Step7: Now we solve for the stable sections of the three binary loops
Step8: Finally, we do some plotting | Python Code:
import numpy as np
import matplotlib.pyplot as plt
import burnman
from burnman import equilibrate
from burnman.minerals import SLB_2011
# Set the pressure, temperature and composition
pressure = 3.e9
temperature = 1500.
composition = {'Na': 0.02, 'Fe': 0.2, 'Mg': 2.0, 'Si': 1.9,
'Ca': 0.2, 'Al': 0.4, 'O': 6.81}
# Create the assemblage
gt = SLB_2011.garnet()
ol = SLB_2011.mg_fe_olivine()
opx = SLB_2011.orthopyroxene()
assemblage = burnman.Composite(phases=[ol, opx, gt],
fractions=[0.7, 0.1, 0.2],
name='NCFMAS ol-opx-gt assemblage')
# The solver uses the current compositions of each solution as a starting guess,
# so we have to set them here
ol.set_composition([0.93, 0.07])
opx.set_composition([0.8, 0.1, 0.05, 0.05])
gt.set_composition([0.8, 0.1, 0.05, 0.03, 0.02])
equality_constraints = [('P', 10.e9), ('T', 1500.)]
sol, prm = equilibrate(composition, assemblage, equality_constraints)
print(f'It is {sol.success} that equilibrate was successful')
print(sol.assemblage)
# The total entropy of the assemblage is the molar entropy
# multiplied by the number of moles in the assemblage
entropy = sol.assemblage.S*sol.assemblage.n_moles
Explanation: <h1>The BurnMan Tutorial</h1>
Part 5: Equilibrium problems
This file is part of BurnMan - a thermoelastic and thermodynamic toolkit
for the Earth and Planetary Sciences
Copyright (C) 2012 - 2021 by the BurnMan team,
released under the GNU GPL v2 or later.
Introduction
This ipython notebook is the fifth in a series designed to introduce new users to the code structure and functionalities present in BurnMan.
<b>Demonstrates</b>
burnman.equilibrate, an experimental function that determines the bulk elemental composition, pressure, temperature, phase proportions and compositions of an assemblage subject to user-defined constraints.
Everything in BurnMan and in this tutorial is defined in SI units.
Phase equilibria
What BurnMan does and doesn't do
Members of the BurnMan Team are often asked whether BurnMan does Gibbs energy minimization. The short answer to that is no, for three reasons:
1) Python is ill-suited to such computationally intensive problems.
2) There are many pieces of software already in the community that do Gibbs energy minimization, including but not limited to: PerpleX, HeFESTo, Theriak Domino, MELTS, ENKI, FactSAGE (proprietary), and MMA-EoS.
3) Gibbs minimization is a hard problem. The brute-force pseudocompound/simplex technique employed by Perple_X is the only globally robust method, but clever techniques have to be used to make the computations tractable, and the solution found is generally only a (very close) approximation to the true minimum assemblage. More refined Newton / higher order schemes (e.g. HeFESTo, MELTS, ENKI) provide an exact solution, but can get stuck in local minima or even fail to find a solution.
So, with those things in mind, what does BurnMan do? Well, because BurnMan can compute the Gibbs energy and analytical derivatives of composite materials, it is well suited to solving the equilibrium relations for fixed assemblages. This is done using the burnman.equilibrate function, which acts in a similar (but slightly more general) way to the THERMOCALC software developed by Tim Holland, Roger Powell and coworkers. Essentially, one chooses an assemblage (e.g. olivine + garnet + orthopyroxene) and some equality constraints (typically related to bulk composition, pressure, temperature, entropy, volume, phase proportions or phase compositions) and the equilibrate function attempts to find the remaining unknowns that satisfy those constraints.
In a sense, then, the equilibrate function is simultaneously more powerful and more limited than Gibbs minimization techniques. It allows the user to investigate and plot metastable reactions, and quickly obtain answers to questions like "at what pressure does wadsleyite first become stable along a given isentrope?". However, it is not designed to create P-T tables of equilibrium assemblages. If a user wishes to do this for a complex problem, we refer them to other existing codes. BurnMan also contains a useful utility material called burnman.PerplexMaterial that is specifically designed to read in and interrogate P-T data from PerpleX.
There are a couple more caveats to bear in mind. Firstly, the equilibrate function is experimental and can certainly be improved. Equilibrium problems are highly nonlinear, and sometimes solvers struggle to find a solution. If you have a better, more robust way of solving these problems, we would love to hear from you! Secondly, the equilibrate function is not completely free from the curse of multiple roots - sometimes there is more than one solution to the equilibrium problem, and BurnMan (and indeed any equilibrium software) may find one a metastable root.
Equilibrating at fixed bulk composition
Fixed bulk composition problems are most similar to those asked by Gibbs minimization software like HeFESTo. Essentially, the only difference is that rather than allowing the assemblage to change to minimize the Gibbs energy, the assemblage is instead fixed.
In the following code block, we calculate the equilibrium assemblage of olivine, orthopyroxene and garnet for a mantle composition in the system NCFMAS at 10 GPa and 1500 K.
End of explanation
equality_constraints = [('P', np.linspace(3.e9, 13.e9, 21)),
('S', entropy)]
sols, prm = equilibrate(composition, assemblage, equality_constraints)
Explanation: Each equality constraint can be a list of constraints, in which case equilibrate will loop over them. In the next code block we change the equality constraints to be a series of pressures which correspond to the total entropy obtained from the previous solve.
End of explanation
data = np.array([[sol.assemblage.pressure,
sol.assemblage.temperature,
sol.assemblage.p_wave_velocity,
sol.assemblage.shear_wave_velocity,
sol.assemblage.molar_fractions[0],
sol.assemblage.molar_fractions[1],
sol.assemblage.molar_fractions[2]]
for sol in sols if sol.success])
Explanation: The object sols is now a 1D list of solution objects. Each one of these contains an equilibrium assemblage object that can be interrogated for any properties:
End of explanation
fig = plt.figure(figsize=(12, 4))
ax = [fig.add_subplot(1, 3, i) for i in range(1, 4)]
P, T, V_p, V_s = data.T[:4]
phase_proportions = data.T[4:]
ax[0].plot(P/1.e9, T)
ax[1].plot(P/1.e9, V_p/1.e3)
ax[1].plot(P/1.e9, V_s/1.e3)
for i in range(3):
ax[2].plot(P/1.e9, phase_proportions[i], label=sol.assemblage.phases[i].name)
for i in range(3):
ax[i].set_xlabel('Pressure (GPa)')
ax[0].set_ylabel('Temperature (K)')
ax[1].set_ylabel('Seismic velocities (km/s)')
ax[2].set_ylabel('Molar phase proportions')
ax[2].legend()
plt.show()
Explanation: The next code block plots these properties.
End of explanation
equality_constraints = [('phase_fraction', [opx, 0.]),
('S', entropy)]
sol, prm = equilibrate(composition, assemblage, equality_constraints)
print(f'Orthopyroxene is exhausted from the assemblage at {sol.assemblage.pressure/1.e9:.2f} GPa, {sol.assemblage.temperature:.2f} K.')
Explanation: From the above figure, we can see that the proportion of orthopyroxene is decreasing rapidly and is exhausted near 13 GPa. In the next code block, we determine the exact pressure at which orthopyroxene is exhausted.
End of explanation
# Initialize the minerals we will use in this example.
ol = SLB_2011.mg_fe_olivine()
wad = SLB_2011.mg_fe_wadsleyite()
rw = SLB_2011.mg_fe_ringwoodite()
# Set the starting guess compositions for each of the solutions
ol.set_composition([0.90, 0.10])
wad.set_composition([0.90, 0.10])
rw.set_composition([0.80, 0.20])
Explanation: Equilibrating while allowing bulk composition to vary
End of explanation
T = 1600.
composition = {'Fe': 0.2, 'Mg': 1.8, 'Si': 1.0, 'O': 4.0}
assemblage = burnman.Composite([ol, wad, rw], [1., 0., 0.])
equality_constraints = [('T', T),
('phase_fraction', (ol, 0.0)),
('phase_fraction', (rw, 0.0))]
free_compositional_vectors = [{'Mg': 1., 'Fe': -1.}]
sol, prm = equilibrate(composition, assemblage, equality_constraints,
free_compositional_vectors,
verbose=False)
if not sol.success:
raise Exception('Could not find solution for the univariant using '
'provided starting guesses.')
P_univariant = sol.assemblage.pressure
phase_names = [sol.assemblage.phases[i].name for i in range(3)]
x_fe_mbr = [sol.assemblage.phases[i].molar_fractions[1] for i in range(3)]
print(f'Univariant pressure at {T:.0f} K: {P_univariant/1.e9:.3f} GPa')
print('Fe2SiO4 concentrations at the univariant:')
for i in range(3):
print(f'{phase_names[i]}: {x_fe_mbr[i]:.2f}')
Explanation: First, we find the compositions of the three phases at the univariant.
End of explanation
output = []
for (m1, m2, x_fe_m1) in [[ol, wad, np.linspace(x_fe_mbr[0], 0.001, 20)],
[ol, rw, np.linspace(x_fe_mbr[0], 0.999, 20)],
[wad, rw, np.linspace(x_fe_mbr[1], 0.001, 20)]]:
assemblage = burnman.Composite([m1, m2], [1., 0.])
# Reset the compositions of the two phases to have compositions
# close to those at the univariant point
m1.set_composition([1.-x_fe_mbr[1], x_fe_mbr[1]])
m2.set_composition([1.-x_fe_mbr[1], x_fe_mbr[1]])
# Also set the pressure and temperature
assemblage.set_state(P_univariant, T)
# Here our equality constraints are temperature,
# the phase fraction of the second phase,
# and we loop over the composition of the first phase.
equality_constraints = [('T', T),
('phase_composition',
(m1, [['Mg_A', 'Fe_A'],
[0., 1.], [1., 1.], x_fe_m1])),
('phase_fraction', (m2, 0.0))]
sols, prm = equilibrate(composition, assemblage,
equality_constraints,
free_compositional_vectors,
verbose=False)
# Process the solutions
out = np.array([[sol.assemblage.pressure,
sol.assemblage.phases[0].molar_fractions[1],
sol.assemblage.phases[1].molar_fractions[1]]
for sol in sols if sol.success])
output.append(out)
output = np.array(output)
Explanation: Now we solve for the stable sections of the three binary loops
End of explanation
fig = plt.figure()
ax = [fig.add_subplot(1, 1, 1)]
color='purple'
# Plot the line connecting the three phases
ax[0].plot([x_fe_mbr[0], x_fe_mbr[2]],
[P_univariant/1.e9, P_univariant/1.e9], color=color)
for i in range(3):
if i == 0:
ax[0].plot(output[i,:,1], output[i,:,0]/1.e9, color=color, label=f'{T} K')
else:
ax[0].plot(output[i,:,1], output[i,:,0]/1.e9, color=color)
ax[0].plot(output[i,:,2], output[i,:,0]/1.e9, color=color)
ax[0].fill_betweenx(output[i,:,0]/1.e9, output[i,:,1], output[i,:,2],
color=color, alpha=0.2)
ax[0].text(0.1, 6., 'olivine', horizontalalignment='left')
ax[0].text(0.015, 14.2, 'wadsleyite', horizontalalignment='left',
bbox=dict(facecolor='white',
edgecolor='white',
boxstyle='round,pad=0.2'))
ax[0].text(0.9, 15., 'ringwoodite', horizontalalignment='right')
ax[0].set_xlim(0., 1.)
ax[0].set_ylim(0.,20.)
ax[0].set_xlabel('p(Fe$_2$SiO$_4$)')
ax[0].set_ylabel('Pressure (GPa)')
ax[0].legend()
plt.show()
Explanation: Finally, we do some plotting
End of explanation |
9,435 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a id="Shallow_Water_Bathymetry_top"></a>
Shallow Water Bathymetry
Visualizing Differences in Depth With Spectral Analysis
<hr>
Notebook Summary
Import data from LANDSAT 8
A bathymetry index is calculated
Contrast is adjusted to make a more interpretable visualization.
Citation
Step1: <span id="Shallow_Water_Bathymetry_import">Import Dependencies and Connect to the Data Cube ▴</span>
Step2: <span id="Shallow_Water_Bathymetry_plat_prod">Choose the Platform and Product ▴</span>
Step3: <span id="Shallow_Water_Bathymetry_define_extents">Define the Extents of the Analysis ▴</span>
Region bounds
Step4: Display
Step5: <span id="Shallow_Water_Bathymetry_retrieve_data">Retrieve the Data ▴</span>
Load and integrate datasets
Step6: Preview the Data
Step7: <span id="Shallow_Water_Bathymetry_bathymetry">Calculate the Bathymetry and NDWI Indices ▴</span>
The bathymetry function is located at the top of this notebook.
Step8: <hr>
Preview Combined Dataset
Step9: <span id="Shallow_Water_Bathymetry_export_unmasked">Export Unmasked GeoTIFF ▴</span>
Step10: <span id="Shallow_Water_Bathymetry_mask">Mask the Dataset Using the Quality Column and NDWI ▴</span>
Step11: Use NDWI to Mask Out Land
The threshold can be tuned if need be to better fit the RGB image above. <br>
Unfortunately our existing WOFS algorithm is designed to work with Surface Reflectance (SR) and does not work with this data yet but with a few modifications it could be made to do so. We will approximate the WOFs mask with NDWI for now.
Step12: <span id="Shallow_Water_Bathymetry_vis_func">Create a Visualization Function ▴</span>
Visualize the distribution of the bathymetry index for the water pixels
Step13: <b>Interpretation
Step14: <span id="Shallow_Water_Bathymetry_bath_vis">Visualize the Bathymetry ▴</span>
Step15: <span id="Shallow_Water_Bathymetry_bath_vis_better">Visualize the Bathymetry With Adjusted Contrast ▴</span>
If we clamp the range of the plot using different quantile ranges we can see relative differences in higher contrast. | Python Code:
import sys
import os
sys.path.append(os.environ.get('NOTEBOOK_ROOT'))
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
def bathymetry_index(df, m0 = 1, m1 = 0):
return m0*(np.log(df.blue)/np.log(df.green))+m1
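# Quick illustrative check of the index on made-up reflectance values (arbitrary numbers,
# not real data); with the defaults m0=1, m1=0 this is simply log(blue)/log(green).
_toy = pd.DataFrame({'blue': [0.12, 0.10], 'green': [0.10, 0.12]})
print(bathymetry_index(_toy))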
Explanation: <a id="Shallow_Water_Bathymetry_top"></a>
Shallow Water Bathymetry
Visualizing Differences in Depth With Spectral Analysis
<hr>
Notebook Summary
Import data from LANDSAT 8
A bathymetry index is calculated
Contrast is adjusted to make a more interpretable visualization.
Citation: Stumpf, Richard P., Kristine Holderied, and Mark Sinclair. "Determination of water depth with high‐resolution satellite imagery over variable bottom types." Limnology and Oceanography 48.1part2 (2003): 547-556.
<hr>
Index
Import Dependencies and Connect to the Data Cube
Choose the Platform and Product
Define the Extents of the Analysis
Retrieve the Data
Calculate the Bathymetry and NDWI Indices
Export Unmasked GeoTIFF
Mask the Dataset Using the Quality Column and NDWI
Create a Visualization Function
Visualize the Bathymetry
Visualize the Bathymetry With Adjusted Contrast
<hr>
How It Works
Bathymetry is the measurement of depth in bodies of water (Oceans, Seas or Lakes). This notebook illustrates a technique for deriving depth of shallow water areas using purely optical features from Landsat Collection 1 imagery and draws heavily from the publication Determination of water depth with high-resolution satellite imagery over variable bottom types.
<br>
Bathymetry Index
This bathymetry index uses optical green and blue values on a logarithmic scale with two tunable coefficients m0 and m1.
$$ BATH = m_0*\frac{ln(blue)}{ln(green)} -m_1$$
Where:
- m0 is a tunable scaling factor to tune the ratio to depth <br>
- m1 is the offset for a depth of 0 meters.
<br>
<div class="alert-info"><br>
<b>Note: </b> that for our purposes, $m_0$ and $m_1$ are equal to <b>1</b> and <b>0</b> respectively, since we cannot determine the baseline nor the offset from spectral reflectance alone. This effectively simplifies the formula to: $$\frac{ln(blue)}{ln(green)}$$
<br>
</div>
Bathymetry Index Function
End of explanation
from datacube.utils.aws import configure_s3_access
configure_s3_access(requester_pays=True)
import datacube
dc = datacube.Datacube()
Explanation: <span id="Shallow_Water_Bathymetry_import">Import Dependencies and Connect to the Data Cube ▴</span>
End of explanation
#List the products available on this server/device
dc.list_products()
#create a list of the desired platforms
platform = 'LANDSAT_8'
product = 'ls8_level1_usgs'
Explanation: <span id="Shallow_Water_Bathymetry_plat_prod">Choose the Platform and Product ▴</span>
End of explanation
# East Coast of Australia
lat_subsect = (-31.7, -32.2)
lon_subsect = (152.4, 152.9)
print('''
Latitude:\t{0}\t\tRange:\t{2} degrees
Longitude:\t{1}\t\tRange:\t{3} degrees
'''.format(lat_subsect,
lon_subsect,
max(lat_subsect)-min(lat_subsect),
max(lon_subsect)-min(lon_subsect)))
Explanation: <span id="Shallow_Water_Bathymetry_define_extents">Define the Extents of the Analysis ▴</span>
Region bounds
End of explanation
from utils.data_cube_utilities.dc_display_map import display_map
display_map(latitude = lat_subsect,longitude = lon_subsect)
Explanation: Display
End of explanation
%%time
ds = dc.load(lat = lat_subsect,
lon = lon_subsect,
platform = platform,
product = product,
output_crs = "EPSG:32756",
measurements = ["red","blue","green","nir","quality"],
resolution = (-30,30))
ds
Explanation: <span id="Shallow_Water_Bathymetry_retrieve_data">Retrieve the Data ▴</span>
Load and integrate datasets
End of explanation
from utils.data_cube_utilities.dc_rgb import rgb
rgb(ds.isel(time=6), x_coord='x', y_coord='y')
plt.show()
Explanation: Preview the Data
End of explanation
# Create Bathemtry Index column
ds["bathymetry"] = bathymetry_index(ds)
from utils.data_cube_utilities.dc_water_classifier import NDWI
# (green - nir) / (green + nir)
ds["ndwi"] = NDWI(ds, band_pair=1)
Explanation: <span id="Shallow_Water_Bathymetry_bathymetry">Calculate the Bathymetry and NDWI Indices ▴</span>
The bathymetry function is located at the top of this notebook.
End of explanation
ds
Explanation: <hr>
Preview Combined Dataset
End of explanation
import os
from utils.data_cube_utilities.import_export import export_xarray_to_multiple_geotiffs
unmasked_dir = "geotiffs/landsat8/unmasked"
if not os.path.exists(unmasked_dir):
os.makedirs(unmasked_dir)
export_xarray_to_multiple_geotiffs(ds, unmasked_dir + "/unmasked.tif",
x_coord='x', y_coord='y')
Explanation: <span id="Shallow_Water_Bathymetry_export_unmasked">Export Unmasked GeoTIFF ▴</span>
End of explanation
# preview values
np.unique(ds["quality"])
Explanation: <span id="Shallow_Water_Bathymetry_mask">Mask the Dataset Using the Quality Column and NDWI ▴</span>
End of explanation
# Tunable threshold for masking the land out
threshold = .05
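# Optional sanity check (illustrative only): compare a few candidate thresholds before
# settling on one. The values below are arbitrary examples.
for _t in [0.0, 0.05, 0.1]:
    print('NDWI > {}: {:.1%} of pixels flagged as water'.format(_t, float((ds.ndwi > _t).mean())))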
water = (ds.ndwi>threshold).values
#preview one time slice to determine the effectiveness of the NDWI masking
rgb(ds.where(water).isel(time=6), x_coord='x', y_coord='y')
plt.show()
from utils.data_cube_utilities.dc_mosaic import ls8_oli_unpack_qa
clear_xarray = ls8_oli_unpack_qa(ds.quality, "clear")
full_mask = np.logical_and(clear_xarray, water)
ds = ds.where(full_mask)
Explanation: Use NDWI to Mask Out Land
The threshold can be tuned if need be to better fit the RGB image above. <br>
Unfortunately our existing WOFS algorithm is designed to work with Surface Reflectance (SR) and does not work with this data yet, but with a few modifications it could be made to do so. We will approximate the WOFS mask with NDWI for now.
End of explanation
plt.figure(figsize=[15,5])
#Visualize the distribution of the remaining data
sns.boxplot(ds['bathymetry'])
plt.show()
Explanation: <span id="Shallow_Water_Bathymetry_vis_func">Create a Visualization Function ▴</span>
Visualize the distribution of the bathymetry index for the water pixels
End of explanation
#set the quantile range in either direction from the median value
def get_quantile_range(col, quantile_range = .25):
low = ds[col].quantile(.5 - quantile_range,["time","y","x"]).values
high = ds[col].quantile(.5 + quantile_range,["time","y","x"]).values
return low,high
#Custom function for a color mapping object
from matplotlib.colors import LinearSegmentedColormap
def custom_color_mapper(name = "custom", val_range = (1.96,1.96), colors = "RdGnBu"):
custom_cmap = LinearSegmentedColormap.from_list(name,colors=colors)
min, max = val_range
step = max/10.0
Z = [min,0],[0,max]
levels = np.arange(min,max+step,step)
cust_map = plt.contourf(Z, 100, cmap=custom_cmap)
plt.clf()
return cust_map.cmap
def mean_value_visual(ds, col, figsize = [15,15], cmap = "GnBu", low=None, high=None):
if low is None: low = np.min(ds[col]).values
if high is None: high = np.max(ds[col]).values
ds.reduce(np.nanmean,dim=["time"])[col].plot.imshow(figsize = figsize, cmap=cmap,
vmin=low, vmax=high)
Explanation: <b>Interpretation: </b> We can see that most of the values fall within a very short range. We can scale our plot's cmap limits to fit the specific quantile ranges for the bathymetry index so we can achieve better contrast from our plots.
End of explanation
mean_value_visual(ds, "bathymetry", cmap="GnBu")
Explanation: <span id="Shallow_Water_Bathymetry_bath_vis">Visualize the Bathymetry ▴</span>
End of explanation
# create range using the 10th and 90th quantile
low, high = get_quantile_range("bathymetry", .40)
custom = custom_color_mapper(val_range=(low,high),
colors=["darkred","red","orange","yellow","green",
"blue","darkblue","black"])
mean_value_visual(ds, "bathymetry", cmap=custom, low=low, high=high)
# create range using the 5th and 95th quantile
low, high = get_quantile_range("bathymetry", .45)
custom = custom_color_mapper(val_range=(low,high),
colors=["darkred","red","orange","yellow","green",
"blue","darkblue","black"])
mean_value_visual(ds, "bathymetry", cmap = custom, low=low, high = high)
# create range using the 2nd and 98th quantile
low, high = get_quantile_range("bathymetry", .48)
custom = custom_color_mapper(val_range=(low,high),
colors=["darkred","red","orange","yellow","green",
"blue","darkblue","black"])
mean_value_visual(ds, "bathymetry", cmap=custom, low=low, high=high)
# create range using the 1st and 99th quantile
low, high = get_quantile_range("bathymetry", .49)
custom = custom_color_mapper(val_range=(low,high),
colors=["darkred","red","orange","yellow","green",
"blue","darkblue","black"])
mean_value_visual(ds, "bathymetry", cmap=custom, low=low, high=high)
Explanation: <span id="Shallow_Water_Bathymetry_bath_vis_better">Visualize the Bathymetry With Adjusted Contrast ▴</span>
If we clamp the range of the plot using different quantile ranges we can see relative differences in higher contrast.
End of explanation |
9,436 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Natural language processing (NLP) is a field of computer science, artificial intelligence and computational linguistics concerned with the interactions between computers and human (natural) languages. In my first NLP post I will create a simple, yet effective sentiment analysis model that can classify a movie review on IMDB as being either positive or negative.
NLP
Step1: The dataset is perfectly balanced across the two categories POSITIVE and NEGATIVE.
Counting words
Let's build up a simple sentiment theory. It is common sense that some of the words are more common in positive reviews and some are more frequently found in negative reviews. For example, I expect words like "supurb", "impresive", "magnificent", etc. to be common in positive reviews, while words like "miserable", "bad", "horrible", etc. to appear in negative reviews. Let's count the words in order to see what words are most common and what words appear most frequently the positive and the negative reviews.
Step2: Well, at a first glance, that seems dissapointing. As expected, the most common words are some linking words like "the", "of", "for", "at", etc. Counting the words for POSITIVE and NEGATIVE reviews separetely might appear pontless at first, as the same linking words are found among the most common for both the POSITIVE and NEGATIVE reviews.
Sentiment Ratio
However, counting the words that way would allow us to build a far more meaningful metric, called the sentiment ratio. A word with a sentiment ratio of 1 is used only in POSITIVE reviews. A word with a sentiment ratio of -1 is used only in NEGATIVE reviews. A word with sentiment ratio of 0 are neither POSITIVE nor NEGATIVE, but are neutral. Hence, linking words like the once shown above are expected to be close to the neutral 0. Let's draw the sentiment ratio for all words. I am expecting to see figure showing a beautiful normal distribution.
Step3: Well that looks like a normal distribution with a considerable amount of words that were used only in POSITIVE and only in NEGATIVE reviews. Could it be, those are words that occur only once or twice in the review corpus? They are not necessarly useful when identifying the sentiment, as they occur only in one of few reviews. If that is the case it would be better to exclude these words. We want our models to generalize well instead of overfitting on some very rare words. Let's exclude all words that occur less than 'min_occurance' times in the whole review corpus.
Step4: And that is the beautiful normal destribution that I was expecting. The total word count shrinked from 74074 to 4276. Hence, there are many words that have been used only few times. Looking at the figure, there are a lot of neutral words in our new sentiment selection, but there are also some words that are used almost exclusively in POSITIVE or NEGATIVE reviews. You can try different values for 'min_occurance' and observe how the amount of total words and the plot is changing. Let's check out the words for min_occurance = 100.
Step5: There are a lot of names among the words with positive sentiment. For example, edie (probably from edie falco, who won 2 Golden Globes and another 21 wins & 70 nominations), polanski (probably from roman polanski, who won 1 oscar and another 83 wins & 75 nominations). But there are also words like "superbly", "breathtaking", "refreshing", etc. Those are exactly the positive sentiment loaded words I was looking for. Similarly, there are words like "insult", "uninspired", "lame", "sucks", "miserably", "boredom" that no director would be happy to read in the reviews regarding his movie. One name catches the eye - that is "seagal", (probably from Steven Seagal). Well, I won't comment on that.
Naive Sentiment Classifier
Let's build a naive machine learning classifier. This classifier is very simple and does not utilize any special kind of models like linear regression, trees or neural networks. However, it is sill a machine LEARNING classifier as you need data that it fits on in order to use it for predictions. It is largely based on the sentiment radio that we previously discussed and has only two parameters 'min_word_count' and 'sentiment_threshold'. Here it is
Step6: The classifier has only two parameters - 'min_word_count' and 'sentiment_threshold'. A min_word_count of 20 means the classifier will only consider words that occur at least 20 times in the review corpus. The 'sentiment_threshhold' allows you to ignore words with rather neutral sentiment. A 'sentiment_threshhold' of 0.3 means that only words with sentiment ratio of more than 0.3 or less than -0.3 would be considered in the prediction process. What the classifier does is creating the sentiment ratio like previosly shown. When predicting the sentiment, the classifier uses the sentiment ratio dict to sum up all sentiment ratios of all the words used in the review. If the overall sum is positive the sentiment is also positive. If the overall sum is negative the sentiment is also negative. It is pretty simple, isn't it? Let's measure the performance in a 5 fold cross-validation setting
Step7: A cross-validaiton accuracy of 85.7% is not bad for this naive approach and a classifier that trains in only a few seconds. At this point you will be asking yourself, can this score be easily beaten with the use of a neural network. Let's see.
Neural Networks can do better
To train a neural network you should transform the data to a format the neural network can undestand. Hence, first you need to convert the reviews to numerical vectors. Let's assume the neural network would be only interested on the words
Step8: Same as before, the create_word2index has the two parameters 'min_occurance' and 'sentiment_threshhold' Check the explanation of those two in the previous section. Anyway, once you have the word2index dict, you can encode the reviews with the function below
Step9: Labels are easily one-hot encoded. Check out this explanation on why one-hot encoding is needed
Step10: At this point, you can transform both the reviews and the labels into data that the neural network can understand. Let's do that
Step11: You are good to go and train the neural network. In the example below, I'am using a simple neural network consisting of two fully connected layers. Trying different things out, I found Dropout before the first layer can reduce overfitting. Dropout between the first and the second layer, however, made the performance worse. Increasing the the number of the hidden units in the two layers did't lead to better performance, but to more overfitting. Increasing the number of layers made no difference. | Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
with open('data/reviews.txt','r') as file_handler:
reviews = np.array(list(map(lambda x:x[:-1], file_handler.readlines())))
with open('data/labels.txt','r') as file_handler:
labels = np.array(list(map(lambda x:x[:-1].upper(), file_handler.readlines())))
unique, counts = np.unique(labels, return_counts=True)
print('Reviews', len(reviews), 'Labels', len(labels), dict(zip(unique, counts)))
for i in range(10):
print(labels[i] + "\t:\t" + reviews[i][:80] + "...")
Explanation: Natural language processing (NLP) is a field of computer science, artificial intelligence and computational linguistics concerned with the interactions between computers and human (natural) languages. In my first NLP post I will create a simple, yet effective sentiment analysis model that can classify a movie review on IMDB as being either positive or negative.
NLP: The Basics of Sentiment Analysis
If you have been reading AI related news in the last few years, you were probably reading about Reinforcement Learning. However, next to Google's AlphaGo and the poker AI called Libratus that out-bluffed some of the best human players, there have been a lot of chat bots that made it into the news. For instance, there is the Microsoft's chatbot that turned racist in less than a day. And there is the chatbot that made news when it convinced 10 out 30 judges at the University of Reading's 2014 Turing Test that it was human, thus winning the contest. NLP is the exciting field in AI that aims at enabling machines to understand and speak human language. One of the most popular commercial products is the IBM Watson. And while I am already planning a post regarding IBM's NLP tech, with this first NLP post, I will start with the some very basic NLP.
The Data: Reviews and Labels
The data consists of 25000 IMDB reviews. Each review is stored as a single line in the file reviews.txt. The reviews have already been preprocessed a bit and contain only lower case characters. The labels.txt file contains the corresponding labels. Each review is either labeled as POSITIVE or NEGATIVE. Let's read the data and print some of it.
End of explanation
from collections import Counter
positive_counts = Counter()
negative_counts = Counter()
total_counts = Counter()
for i in range(len(reviews)):
if(labels[i] == 'POSITIVE'):
for word in reviews[i].split(" "):
positive_counts[word] += 1
total_counts[word] += 1
else:
for word in reviews[i].split(" "):
negative_counts[word] += 1
total_counts[word] += 1
# Examine the counts of the most common words in positive reviews
print('Most common words:', total_counts.most_common()[0:30])
print('\nMost common words in NEGATIVE reviews:', negative_counts.most_common()[0:30])
print('\nMost common words in POSITIVE reviews:', positive_counts.most_common()[0:30])
Explanation: The dataset is perfectly balanced across the two categories POSITIVE and NEGATIVE.
Counting words
Let's build up a simple sentiment theory. It is common sense that some of the words are more common in positive reviews and some are more frequently found in negative reviews. For example, I expect words like "supurb", "impresive", "magnificent", etc. to be common in positive reviews, while words like "miserable", "bad", "horrible", etc. to appear in negative reviews. Let's count the words in order to see what words are most common and what words appear most frequently the positive and the negative reviews.
End of explanation
import seaborn as sns
sentiment_ratio = Counter()
for word, count in list(total_counts.most_common()):
sentiment_ratio[word] = ((positive_counts[word] / total_counts[word]) - 0.5) / 0.5
print('Total words in sentiment ratio', len(sentiment_ratio))
sns.distplot(list(sentiment_ratio.values()));
Explanation: Well, at a first glance, that seems disappointing. As expected, the most common words are some linking words like "the", "of", "for", "at", etc. Counting the words for POSITIVE and NEGATIVE reviews separately might appear pointless at first, as the same linking words are found among the most common for both the POSITIVE and NEGATIVE reviews.
Sentiment Ratio
However, counting the words that way would allow us to build a far more meaningful metric, called the sentiment ratio. A word with a sentiment ratio of 1 is used only in POSITIVE reviews. A word with a sentiment ratio of -1 is used only in NEGATIVE reviews. A word with a sentiment ratio of 0 is neither POSITIVE nor NEGATIVE, but neutral. Hence, linking words like the ones shown above are expected to be close to the neutral 0. Let's draw the sentiment ratio for all words. I am expecting to see a figure showing a beautiful normal distribution.
End of explanation
min_occurance = 100
sentiment_ratio = Counter()
for word, count in list(total_counts.most_common()):
if total_counts[word] >= min_occurance: # only consider words
sentiment_ratio[word] = ((positive_counts[word] / total_counts[word]) - 0.5) / 0.5
print('Total words in sentiment ratio', len(sentiment_ratio))
sns.distplot(list(sentiment_ratio.values()));
Explanation: Well that looks like a normal distribution with a considerable amount of words that were used only in POSITIVE and only in NEGATIVE reviews. Could it be, those are words that occur only once or twice in the review corpus? They are not necessarly useful when identifying the sentiment, as they occur only in one of few reviews. If that is the case it would be better to exclude these words. We want our models to generalize well instead of overfitting on some very rare words. Let's exclude all words that occur less than 'min_occurance' times in the whole review corpus.
End of explanation
print('Words with the most POSITIVE sentiment' ,sentiment_ratio.most_common()[:30])
print('\nWords with the most NEGATIVE sentiment' ,sentiment_ratio.most_common()[-30:])
Explanation: And that is the beautiful normal distribution that I was expecting. The total word count shrank from 74074 to 4276. Hence, there are many words that have been used only a few times. Looking at the figure, there are a lot of neutral words in our new sentiment selection, but there are also some words that are used almost exclusively in POSITIVE or NEGATIVE reviews. You can try different values for 'min_occurance' and observe how the amount of total words and the plot is changing. Let's check out the words for min_occurance = 100.
End of explanation
class NaiveSentimentClassifier:
def __init__(self, min_word_count, sentiment_threshold):
self.min_word_count = min_word_count
self.sentiment_threshold = sentiment_threshold
def fit(self, reviews, labels):
positive_counts = Counter()
total_counts = Counter()
for i in range(len(reviews)):
if(labels[i] == 'POSITIVE'):
for word in reviews[i].split(" "):
positive_counts[word] += 1
total_counts[word] += 1
else:
for word in reviews[i].split(" "):
total_counts[word] += 1
self.sentiment_ratios = Counter()
for word, count in total_counts.items():
if(count > self.min_word_count):
self.sentiment_ratios[word] = \
((positive_counts[word] / count) - 0.5) / 0.5
def predict(self, reviews):
predictions = []
for review in reviews:
sum_review_sentiment = 0
for word in review.split(" "):
if abs(self.sentiment_ratios[word]) >= self.sentiment_threshold:
sum_review_sentiment += self.sentiment_ratios[word]
if sum_review_sentiment >= 0:
predictions.append('POSITIVE')
else:
predictions.append('NEGATIVE')
return predictions
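# Quick usage sketch (illustrative parameter values): fit on a slice of the data and
# predict a few reviews. The 5-fold cross-validation below gives the proper evaluation.
_clf = NaiveSentimentClassifier(min_word_count=20, sentiment_threshold=0.3)
_clf.fit(reviews[:20000], labels[:20000])
print(_clf.predict(reviews[20000:20005]), labels[20000:20005])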
Explanation: There are a lot of names among the words with positive sentiment. For example, edie (probably from edie falco, who won 2 Golden Globes and another 21 wins & 70 nominations), polanski (probably from roman polanski, who won 1 oscar and another 83 wins & 75 nominations). But there are also words like "superbly", "breathtaking", "refreshing", etc. Those are exactly the positive sentiment loaded words I was looking for. Similarly, there are words like "insult", "uninspired", "lame", "sucks", "miserably", "boredom" that no director would be happy to read in the reviews regarding his movie. One name catches the eye - that is "seagal", (probably from Steven Seagal). Well, I won't comment on that.
Naive Sentiment Classifier
Let's build a naive machine learning classifier. This classifier is very simple and does not utilize any special kind of models like linear regression, trees or neural networks. However, it is still a machine LEARNING classifier as you need data that it fits on in order to use it for predictions. It is largely based on the sentiment ratio that we previously discussed and has only two parameters 'min_word_count' and 'sentiment_threshold'. Here it is:
End of explanation
from sklearn.model_selection import KFold
from sklearn.metrics import accuracy_score
all_predictions = []
all_true_labels = []
for train_index, validation_index in KFold(n_splits=5, random_state=42, shuffle=True).split(labels):
trainX, trainY = reviews[train_index], labels[train_index]
validationX, validationY = reviews[validation_index], labels[validation_index]
classifier = NaiveSentimentClassifier(20, 0.3)
classifier.fit(trainX, trainY)
predictions = classifier.predict(validationX)
print('Fold accuracy', accuracy_score(validationY, predictions))
all_predictions += predictions
all_true_labels += list(validationY)
print('CV accuracy', accuracy_score(all_true_labels, all_predictions))
Explanation: The classifier has only two parameters - 'min_word_count' and 'sentiment_threshold'. A min_word_count of 20 means the classifier will only consider words that occur at least 20 times in the review corpus. The 'sentiment_threshold' allows you to ignore words with rather neutral sentiment. A 'sentiment_threshold' of 0.3 means that only words with a sentiment ratio of more than 0.3 or less than -0.3 would be considered in the prediction process. What the classifier does is create the sentiment ratio like previously shown. When predicting the sentiment, the classifier uses the sentiment ratio dict to sum up all sentiment ratios of all the words used in the review. If the overall sum is positive the sentiment is also positive. If the overall sum is negative the sentiment is also negative. It is pretty simple, isn't it? Let's measure the performance in a 5 fold cross-validation setting:
End of explanation
def create_word2index(min_occurance, sentiment_threshold):
word2index = {}
index = 0
sentiment_ratio = Counter()
for word, count in list(total_counts.most_common()):
sentiment_ratio[word] = ((positive_counts[word] / total_counts[word]) - 0.5) / 0.5
is_word_eligable = lambda word: word not in word2index and \
total_counts[word] >= min_occurance and \
abs(sentiment_ratio[word]) >= sentiment_threshold
for i in range(len(reviews)):
for word in reviews[i].split(" "):
if is_word_eligable(word):
word2index[word] = index
index += 1
print("Word2index contains", len(word2index), 'words.')
return word2index
Explanation: A cross-validaiton accuracy of 85.7% is not bad for this naive approach and a classifier that trains in only a few seconds. At this point you will be asking yourself, can this score be easily beaten with the use of a neural network. Let's see.
Neural Networks can do better
To train a neural network you should transform the data to a format the neural network can undestand. Hence, first you need to convert the reviews to numerical vectors. Let's assume the neural network would be only interested on the words: "breathtaking", "refreshing", "sucks" and "lame". Thus, we have an input vector of size 4. If the review does not contain any of these words the input vector would contain only zeros: [0, 0, 0, 0]. If the review is "Wow, that was such a refreshing experience. I was impreced by the breathtaking acting and the breathtaking visual effects.", the input vector would look like this: [1, 2, 0, 0]. A negative review such as "Wow, that was some lame acting and a lame music. Totally, lame. Sad." would be transformed to an input vector like this [0, 0, 0, 3]. Anyway, you need to create a word2index dictionary that points to an index of the vector for a given word:
End of explanation
def encode_reviews_by_word_count(word2index):
encoded_reviews = []
for i in range(len(reviews)):
review_array = np.zeros(len(word2index))
for word in reviews[i].split(" "):
if word in word2index:
review_array[word2index[word]] += 1
encoded_reviews.append(review_array)
encoded_reviews = np.array(encoded_reviews)
print('Encoded reviews matrix shape', encoded_reviews.shape)
return encoded_reviews
Explanation: Same as before, the create_word2index has the two parameters 'min_occurance' and 'sentiment_threshhold' Check the explanation of those two in the previous section. Anyway, once you have the word2index dict, you can encode the reviews with the function below:
End of explanation
def encode_labels():
encoded_labels = []
for label in labels:
if label == 'POSITIVE':
encoded_labels.append([0, 1])
else:
encoded_labels.append([1, 0])
return np.array(encoded_labels)
Explanation: Labels are easily one-hot encoded. Check out this explanation on why one-hot encoding is needed:
End of explanation
word2index = create_word2index(min_occurance=10, sentiment_threshold=0.2)
encoded_reviews = encode_reviews_by_word_count(word2index)
encoded_labels = encode_labels()
Explanation: At this point, you can transform both the reviews and the labels into data that the neural network can understand. Let's do that:
End of explanation
from sklearn.model_selection import KFold
from sklearn.metrics import accuracy_score
from keras.callbacks import ModelCheckpoint
from keras.models import Sequential
from keras.layers import Dense, Dropout, Input
from keras import metrics
all_predictions = []
all_true_labels = []
model_index = 0
for train_index, validation_index in \
KFold(n_splits=5, random_state=42, shuffle=True).split(encoded_labels):
model_index +=1
model_path= 'models/model_' + str(model_index)
print('Training model: ', model_path)
train_X, train_Y = encoded_reviews[train_index], encoded_labels[train_index]
validation_X, validation_Y = encoded_reviews[validation_index], encoded_labels[validation_index]
save_best_model = ModelCheckpoint(
model_path,
monitor='val_loss',
save_best_only=True,
save_weights_only=True)
model = Sequential()
model.add(Dropout(0.3, input_shape=(len(word2index),)))
model.add(Dense(10, activation="relu"))
model.add(Dense(10, activation="relu"))
model.add(Dense(2, activation="softmax"))
model.compile(optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=[metrics.categorical_accuracy])
model.fit(train_X, train_Y,
validation_data=(validation_X, validation_Y),
callbacks = [save_best_model],
epochs=20, batch_size=32, verbose=0)
model.load_weights(model_path)
all_true_labels += list(validation_Y[:, 0])
all_predictions += list(model.predict(validation_X)[:, 0] > 0.5)
print('CV accuracy', accuracy_score(all_true_labels, all_predictions))
Explanation: You are good to go and train the neural network. In the example below, I'am using a simple neural network consisting of two fully connected layers. Trying different things out, I found Dropout before the first layer can reduce overfitting. Dropout between the first and the second layer, however, made the performance worse. Increasing the the number of the hidden units in the two layers did't lead to better performance, but to more overfitting. Increasing the number of layers made no difference.
End of explanation |
9,437 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
lab
Laboratory tests that have have been mapped to a standard set of measurements. Unmapped measurements are recorded in the customLab table. The lab table is fairly well populated by hospitals. It is possible some rarely obtained lab measurements are not interfaced into the system and therefore will not be available in the database. Absence of a rare lab measurement, such as serum lidocaine concentrations, would not indicate the lab was not drawn. However, absence of a platelet count would likely indicate the value was not obtained.
Step3: Examine a single patient
Step5: Immediately we can note the very large negative labresultoffset. This likely means we have some lab values pre-ICU. In some cases this will be a lab measured in another hospital location such as the emergency department or hospital floor. In this case, the large value (-99620 minutes, or ~70 days) is surprising, but we can see from the patient table that the patient was admitted to the hospital -99779 minutes before their ICU stay (hospitaladmitoffset). This patient was admitted to the ICU with thrombocytopenia (apacheadmissiondx), and inspection of the diagnosis table indicates they have a form of cancer, so likely this is a long hospital stay where labs were taken on hospital admission.
Available labs
We can group the lab table to summarize all available labs.
Step7: The lab table is a large table with over 39 million observations. The most frequent observation is bedside glucose which accounts for almost 10% of the lab table, followed by potassium and sodium.
Hospitals with data available | Python Code:
# Import libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import psycopg2
import getpass
import pdvega
# for configuring connection
from configobj import ConfigObj
import os
%matplotlib inline
# Create a database connection using settings from config file
config='../db/config.ini'
# connection info
conn_info = dict()
if os.path.isfile(config):
config = ConfigObj(config)
conn_info["sqluser"] = config['username']
conn_info["sqlpass"] = config['password']
conn_info["sqlhost"] = config['host']
conn_info["sqlport"] = config['port']
conn_info["dbname"] = config['dbname']
conn_info["schema_name"] = config['schema_name']
else:
conn_info["sqluser"] = 'postgres'
conn_info["sqlpass"] = ''
conn_info["sqlhost"] = 'localhost'
conn_info["sqlport"] = 5432
conn_info["dbname"] = 'eicu'
conn_info["schema_name"] = 'public,eicu_crd'
# Connect to the eICU database
print('Database: {}'.format(conn_info['dbname']))
print('Username: {}'.format(conn_info["sqluser"]))
if conn_info["sqlpass"] == '':
# try connecting without password, i.e. peer or OS authentication
try:
if (conn_info["sqlhost"] == 'localhost') & (conn_info["sqlport"]=='5432'):
con = psycopg2.connect(dbname=conn_info["dbname"],
user=conn_info["sqluser"])
else:
con = psycopg2.connect(dbname=conn_info["dbname"],
host=conn_info["sqlhost"],
port=conn_info["sqlport"],
user=conn_info["sqluser"])
except:
conn_info["sqlpass"] = getpass.getpass('Password: ')
con = psycopg2.connect(dbname=conn_info["dbname"],
host=conn_info["sqlhost"],
port=conn_info["sqlport"],
user=conn_info["sqluser"],
password=conn_info["sqlpass"])
query_schema = 'set search_path to ' + conn_info['schema_name'] + ';'
Explanation: lab
Laboratory tests that have have been mapped to a standard set of measurements. Unmapped measurements are recorded in the customLab table. The lab table is fairly well populated by hospitals. It is possible some rarely obtained lab measurements are not interfaced into the system and therefore will not be available in the database. Absence of a rare lab measurement, such as serum lidocaine concentrations, would not indicate the lab was not drawn. However, absence of a platelet count would likely indicate the value was not obtained.
End of explanation
patientunitstayid = 2704494
query = query_schema +
select *
from lab
where patientunitstayid = {}
order by labresultoffset
.format(patientunitstayid)
df = pd.read_sql_query(query, con)
df.head()
query = query_schema +
select *
from patient
where patientunitstayid = {}
.format(patientunitstayid)
pt = pd.read_sql_query(query, con)
pt[['patientunitstayid', 'apacheadmissiondx', 'hospitaladmitoffset']]
Explanation: Examine a single patient
End of explanation
query = query_schema +
select labname, count(*) as n
from lab
group by labname
order by n desc
.format(patientunitstayid)
lab = pd.read_sql_query(query, con)
print('{} total vlues for {} distinct labs.'.format(lab['n'].sum(), lab.shape[0]))
print('\nTop 5 labs by frequency:')
lab.head()
Explanation: Immediately we can note the very large negative labresultoffset. This likely means we have some lab values pre-ICU. In some cases this will be a lab measured in another hospital location such as the emergency department or hospital floor. In this case, the large value (-99620 minutes, or ~70 days) is surprising, but we can see from the patient table that the patient was admitted to the hospital -99779 minutes before their ICU stay (hospitaladmitoffset). This patient was admitted to the ICU with thrombocytopenia (apacheadmissiondx), and inspection of the diagnosis table indicates they have a form of cancer, so likely this is a long hospital stay where labs were taken on hospital admission.
Available labs
We can group the lab table to summarize all available labs.
End of explanation
query = query_schema +
with t as
(
select distinct patientunitstayid
from lab
)
select
pt.hospitalid
, count(distinct pt.patientunitstayid) as number_of_patients
, count(distinct t.patientunitstayid) as number_of_patients_with_tbl
from patient pt
left join t
on pt.patientunitstayid = t.patientunitstayid
group by pt.hospitalid
.format(patientunitstayid)
df = pd.read_sql_query(query, con)
df['data completion'] = df['number_of_patients_with_tbl'] / df['number_of_patients'] * 100.0
df.sort_values('number_of_patients_with_tbl', ascending=False, inplace=True)
df.head(n=10)
df.tail(n=10)
df[['data completion']].vgplot.hist(bins=10,
var_name='Number of hospitals',
value_name='Percent of patients with data')
Explanation: The lab table is a large table with over 39 million observations. The most frequent observation is bedside glucose which accounts for almost 10% of the lab table, followed by potassium and sodium.
Hospitals with data available
End of explanation |
9,438 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2018 The TensorFlow Probability Authors.
Licensed under the Apache License, Version 2.0 (the "License");
Step3: Fitting a Dirichlet process mixture model using preconditioned stochastic gradient Langevin dynamics
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https
Step4: 2. Model
Here we define a Dirichlet process mixture of Gaussian distributions with a symmetric Dirichlet prior. Throughout this notebook, vector quantities are written in bold. For samples $i\in\{1,\ldots,N\}$, the mixture of $j \in\{1,\ldots,K\}$ Gaussian distributions is modeled as follows:
$$\begin{align} p(\boldsymbol{x}_1,\cdots, \boldsymbol{x}_N) &=\prod_{i=1}^N \text{GMM}(x_i), \\ &\,\quad \text{with}\;\text{GMM}(x_i)=\sum_{j=1}^K\pi_j\text{Normal}(x_i\,|\,\text{loc}=\boldsymbol{\mu_{j}},\,\text{scale}=\boldsymbol{\sigma_{j}}) \end{align}$$ where:
$$\begin{align} x_i&\sim \text{Normal}(\text{loc}=\boldsymbol{\mu}_{z_i},\,\text{scale}=\boldsymbol{\sigma}_{z_i}) \\ z_i &= \text{Categorical}(\text{prob}=\boldsymbol{\pi}),\\ &\,\quad \text{with}\;\boldsymbol{\pi}=\{\pi_1,\cdots,\pi_K\}\\ \boldsymbol{\pi}&\sim\text{Dirichlet}(\text{concentration}=\{\tfrac{\alpha}{K},\cdots,\tfrac{\alpha}{K}\})\\ \alpha&\sim \text{InverseGamma}(\text{concentration}=1,\,\text{rate}=1)\\ \boldsymbol{\mu_j} &\sim \text{Normal}(\text{loc}=\boldsymbol{0}, \,\text{scale}=\boldsymbol{1})\\ \boldsymbol{\sigma_j} &\sim \text{InverseGamma}(\text{concentration}=\boldsymbol{1},\,\text{rate}=\boldsymbol{1}) \end{align}$$
Our goal is to assign each $x_i$ to the $j$th cluster through $z_i$, which represents the inferred cluster index.
For an ideal Dirichlet mixture model, $K$ is set to $\infty$, but it is known that a sufficiently large $K$ can approximate the Dirichlet mixture model. Although we set an initial value of $K$ arbitrarily, the optimal number of clusters is also inferred through optimization, unlike in a simple Gaussian mixture model.
In this notebook, we use bivariate Gaussian distributions as mixture components and set $K$ to 30.
Step5: 3. Optimization
We optimize the model with preconditioned stochastic gradient Langevin dynamics (pSGLD), which allows us to optimize the model over a large number of samples with mini-batch gradient descent.
To update the parameters $\boldsymbol{\theta}\equiv\{\boldsymbol{\pi},\,\alpha,\, \boldsymbol{\mu_j},\,\boldsymbol{\sigma_j}\}$ at the $t\,$th iteration with mini-batch size $M$, we sample the update as follows:
$$\begin{align*} \Delta \boldsymbol { \theta } _ { t } & \sim \frac { \epsilon _ { t } } { 2 } \bigl[ G \left( \boldsymbol { \theta } _ { t } \right) \bigl( \nabla _ { \boldsymbol { \theta } } \log p \left( \boldsymbol { \theta } _ { t } \right) + \frac { N } { M } \sum _ { k = 1 } ^ { M } \nabla _ \boldsymbol { \theta } \log \text{GMM}(x_{t_k})\bigr) + \sum_\boldsymbol{\theta}\nabla_\theta G \left( \boldsymbol { \theta } _ { t } \right) \bigr]\\ &+ G ^ { \frac { 1 } { 2 } } \left( \boldsymbol { \theta } _ { t } \right) \text { Normal } \left( \text{loc}=\boldsymbol{0} ,\, \text{scale}=\epsilon _ { t }\boldsymbol{1} \right) \end{align*}$$
In the equation above, $\epsilon_t$ is the learning rate at the $t\,$th iteration and $\log p(\theta_t)$ is the sum of the log prior distributions of $\theta$. $G(\boldsymbol{\theta}_t)$ is the preconditioning matrix that adjusts the scale of the gradient of each parameter.
Step6: We use the joint log probability of the likelihood $\text{GMM}(x_{t_k})$ and the prior $p(\theta_t)$ as the loss function for pSGLD.
As described in the pSGLD API, we need to divide the sum of the prior probabilities by the sample size $N$.
Step7: 4. Visualize the results
4.1. Clustered result
First, we visualize the result of clustering.
To assign each sample $x_i$ to a cluster $j$, we calculate the posterior of $z_i$ as:
$$\begin{align} j = \underset{z_i}{\arg\max}\,p(z_i\,|\,x_i,\,\boldsymbol{\theta}) \end{align}$$
Step8: We can see that an almost equal number of samples are assigned to the appropriate clusters, and that the model has inferred the correct number of clusters.
4.2. Visualize uncertainty
Next, we look at the uncertainty of the clustering result, visualized for each sample.
We calculate the uncertainty by using entropy, as follows:
$$\begin{align} \text{Uncertainty}_\text{entropy} = -\frac{1}{K}\sum^{K}_{z_i=1}\sum^{O}_{l=1}p(z_i\,|\,x_i,\,\boldsymbol{\theta}_l)\log p(z_i\,|\,x_i,\,\boldsymbol{\theta}_l) \end{align}$$
In pSGLD, the value of a training parameter at each iteration is treated as a sample from its posterior distribution. Thus, we calculate the entropy over the values from $O$ iterations for each parameter. The final entropy value is calculated by averaging the entropies of all cluster assignments.
Step9: In the graph above, lower luminance indicates higher uncertainty. We can see that the uncertainty is especially high for samples near the cluster boundaries, which intuitively tells us that those samples are difficult to cluster.
4.3. Mean and scale of the selected mixture components
Next, let's look at $\mu_j$ and $\sigma_j$ of the selected clusters.
Step10: Once again, $\boldsymbol{\mu_j}$ and $\boldsymbol{\sigma_j}$ are close to the ground truth.
4.4 Mixture weight of each mixture component
Let's also look at the inferred mixture weights.
Step11: We can see that only a few (three) mixture components have significant weights and the rest are close to zero. This also shows that the model inferred the correct number of mixture components that make up the distribution of the samples.
4.5. Convergence of $\alpha$
Let's look at the convergence of the concentration parameter $\alpha$ of the Dirichlet distribution.
Step12: Considering that in a Dirichlet mixture model a smaller $\alpha$ results in a smaller expected number of clusters, the model seems to be learning the optimal number of clusters over the iterations.
4.6. Inferred number of clusters over iterations
We visualize how the inferred number of clusters changes over the iterations.
To do this, we infer the number of clusters over the iterations.
Step13: Over the iterations, the number of clusters approaches 3. Since $\alpha$ converges to a smaller value as the iterations proceed, we can see that the model is correctly learning the parameters so as to infer the optimal number of clusters.
Interestingly, unlike $\alpha$, which converges only in much later iterations, the inferred number of clusters has already converged to the correct value in the early iterations.
4.7. Fit the model using RMSProp
In this section, to see the effectiveness of the Monte Carlo sampling scheme of pSGLD, we fit the model using RMSProp. We choose RMSProp for comparison because it comes without a sampling scheme and pSGLD is based on RMSProp.
Step14: Although the number of iterations is larger for RMSProp, optimization with RMSProp is much faster than with pSGLD.
Next, let's look at the clustering result.
Step15: この実験では、RMSProp によって正しいクラスター数を推論することができませんでした。混合重みも見てみましょう。 | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2018 The TensorFlow Probability Authors.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
import time
import numpy as np
import matplotlib.pyplot as plt
import tensorflow.compat.v1 as tf
import tensorflow_probability as tfp
plt.style.use('ggplot')
tfd = tfp.distributions
def session_options(enable_gpu_ram_resizing=True):
Convenience function which sets common `tf.Session` options.
config = tf.ConfigProto()
config.log_device_placement = True
if enable_gpu_ram_resizing:
# `allow_growth=True` makes it possible to connect multiple colabs to your
# GPU. Otherwise the colab malloc's all GPU ram.
config.gpu_options.allow_growth = True
return config
def reset_sess(config=None):
Convenience function to create the TF graph and session, or reset them.
if config is None:
config = session_options()
tf.reset_default_graph()
global sess
try:
sess.close()
except:
pass
sess = tf.InteractiveSession(config=config)
# For reproducibility
rng = np.random.RandomState(seed=45)
tf.set_random_seed(76)
# Precision
dtype = np.float64
# Number of training samples
num_samples = 50000
# Ground truth loc values which we will infer later on. The scale is 1.
true_loc = np.array([[-4, -4],
[0, 0],
[4, 4]], dtype)
true_components_num, dims = true_loc.shape
# Generate training samples from ground truth loc
true_hidden_component = rng.randint(0, true_components_num, num_samples)
observations = (true_loc[true_hidden_component]
+ rng.randn(num_samples, dims).astype(dtype))
# Visualize samples
plt.scatter(observations[:, 0], observations[:, 1], 1)
plt.axis([-10, 10, -10, 10])
plt.show()
Explanation: 前処理行列を用いた確率的勾配ランジュバン動力学法を使用してディリクレ過程混合モデルを適合する
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/probability/examples/Fitting_DPMM_Using_pSGLD"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">TensorFlow.org で表示</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/probability/examples/Fitting_DPMM_Using_pSGLD.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Google Colab で実行</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/probability/examples/Fitting_DPMM_Using_pSGLD.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">GitHub でソースを表示</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/probability/examples/Fitting_DPMM_Using_pSGLD.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">ノートブックをダウンロード</a></td>
</table>
このノートブックでは、ガウス分布のディリクレ過程混合モデルを適合し、大量のサンプルのクラスター化とクラスター数の推論を同時に行う方法を説明します。推論には、前処理行列を用いた確率的勾配ランジュバン動力学法(pSGLD)を使用します。
目次
サンプル
モデル
最適化
結果を視覚化する
4.1. クラスター化された結果
4.2. 不確実性を視覚化する
4.3. 選択された混合コンポーネントの平均とスケール
4.4. 各混合コンポーネントの混合重み
4.5. $\alpha$ の収束
4.6. イテレーションで推論されるクラスターの数
4.7. RMSProp を使ってモデルを適合する
結論
1. サンプル
まず、トイデータセットをセットアップします。3 つの二変量ガウス分布から 50,000 個のランダムサンプルを生成します。
End of explanation
reset_sess()
# Upperbound on K
max_cluster_num = 30
# Define trainable variables.
mix_probs = tf.nn.softmax(
tf.Variable(
name='mix_probs',
initial_value=np.ones([max_cluster_num], dtype) / max_cluster_num))
loc = tf.Variable(
name='loc',
initial_value=np.random.uniform(
low=-9, #set around minimum value of sample value
high=9, #set around maximum value of sample value
size=[max_cluster_num, dims]))
precision = tf.nn.softplus(tf.Variable(
name='precision',
initial_value=
np.ones([max_cluster_num, dims], dtype=dtype)))
alpha = tf.nn.softplus(tf.Variable(
name='alpha',
initial_value=
np.ones([1], dtype=dtype)))
training_vals = [mix_probs, alpha, loc, precision]
# Prior distributions of the training variables
#Use symmetric Dirichlet prior as finite approximation of Dirichlet process.
rv_symmetric_dirichlet_process = tfd.Dirichlet(
concentration=np.ones(max_cluster_num, dtype) * alpha / max_cluster_num,
name='rv_sdp')
rv_loc = tfd.Independent(
tfd.Normal(
loc=tf.zeros([max_cluster_num, dims], dtype=dtype),
scale=tf.ones([max_cluster_num, dims], dtype=dtype)),
reinterpreted_batch_ndims=1,
name='rv_loc')
rv_precision = tfd.Independent(
tfd.InverseGamma(
concentration=np.ones([max_cluster_num, dims], dtype),
rate=np.ones([max_cluster_num, dims], dtype)),
reinterpreted_batch_ndims=1,
name='rv_precision')
rv_alpha = tfd.InverseGamma(
concentration=np.ones([1], dtype=dtype),
rate=np.ones([1]),
name='rv_alpha')
# Define mixture model
rv_observations = tfd.MixtureSameFamily(
mixture_distribution=tfd.Categorical(probs=mix_probs),
components_distribution=tfd.MultivariateNormalDiag(
loc=loc,
scale_diag=precision))
Explanation: 2. モデル
ここでは、対称ディリクレ事前分布を使ってガウス分布のディリクレ過程混合を定義します。このノートブックでは、ベクトル量を太字で記述しています。$i\in\{1,\ldots,N\}$ 個のサンプルに対し、$j \in\{1,\ldots,K\}$ 個のガウス分布の混合モデルは、次のように計算されます。
$$\begin{align} p(\boldsymbol{x}_1,\cdots, \boldsymbol{x}_N) &=\prod_{i=1}^N \text{GMM}(x_i), \\ &\quad \text{with}\;\text{GMM}(x_i)=\sum_{j=1}^K\pi_j\text{Normal}(x_i\,|\,\text{loc}=\boldsymbol{\mu_{j}},\,\text{scale}=\boldsymbol{\sigma_{j}}) \end{align}$$ とし、ここでは次のようになります。
$$\begin{align} x_i&\sim \text{Normal}(\text{loc}=\boldsymbol{\mu}_{z_i},\,\text{scale}=\boldsymbol{\sigma}_{z_i}) \\ z_i &\sim \text{Categorical}(\text{prob}=\boldsymbol{\pi}),\\ &\quad \text{with}\;\boldsymbol{\pi}=\{\pi_1,\cdots,\pi_K\}\\ \boldsymbol{\pi}&\sim\text{Dirichlet}(\text{concentration}=\{\tfrac{\alpha}{K},\cdots,\tfrac{\alpha}{K}\})\\ \alpha&\sim \text{InverseGamma}(\text{concentration}=1,\,\text{rate}=1)\\ \boldsymbol{\mu_j} &\sim \text{Normal}(\text{loc}=\boldsymbol{0},\,\text{scale}=\boldsymbol{1})\\ \boldsymbol{\sigma_j} &\sim \text{InverseGamma}(\text{concentration}=\boldsymbol{1},\,\text{rate}=\boldsymbol{1}) \end{align}$$
クラスターの推論されたインデックスを表す $z_i$ を通じて、それぞれの $x_i$ を $j$ 番目のクラスターに代入するのが目標です。
理想的なディリクレ混合モデルでは $K$ は $\infty$ に設定されますが、$K$ が十分に大きい場合は、ディリクレ混合モデルに近似できることが知られています。$K$ の初期値を任意に設定していますが、単純なガウス混合モデルとは異なり、最適なクラスターの数も最適化によって推論されます。
このノートブックでは、二変量ガウス分布を混合コンポーネントとして使用し、$K$ を 30 に設定します。
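参考までに、対称ディリクレ事前分布 $\text{Dirichlet}(\alpha/K,\cdots,\alpha/K)$ が、$\alpha$ が小さいほど少数のコンポーネントに重みを集中させる様子は、次のような簡単な NumPy の例で確認できます(本文のモデルとは独立した、説明用のスケッチです)。
# 説明用スケッチ: alpha が小さいほど、重みが少数の成分に集中する
import numpy as np
rng = np.random.RandomState(0)
K = 30
for alpha in [1.0, 10.0, 100.0]:
    w = rng.dirichlet(np.ones(K) * alpha / K)
    print(alpha, np.round(np.sort(w)[::-1][:5], 3))  # 上位 5 成分の重み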
End of explanation
# Learning rates and decay
starter_learning_rate = 1e-6
end_learning_rate = 1e-10
decay_steps = 1e4
# Number of training steps
training_steps = 10000
# Mini-batch size
batch_size = 20
# Sample size for parameter posteriors
sample_size = 100
Explanation: 3. 最適化
このモデルは、前処理行列を用いた確率的勾配ランジュバン動力学法(pSGLD)で最適化するため、大量のサンプルに対して、モデルをミニバッチの勾配降下法で最適化することができます。
$t$ 回目のイタレーションにおいてミニバッチサイズ $M$ でパラメータ $\boldsymbol{\theta}\equiv\{\boldsymbol{\pi},\,\alpha,\,\boldsymbol{\mu_j},\,\boldsymbol{\sigma_j}\}$ を更新するために、更新を次のようにサンプリングします。
$$\begin{align*} \Delta \boldsymbol { \theta } _ { t } & \sim \frac { \epsilon _ { t } } { 2 } \Bigl[ G \left( \boldsymbol { \theta } _ { t } \right) \Bigl( \nabla _ { \boldsymbol { \theta } } \log p \left( \boldsymbol { \theta } _ { t } \right) + \frac { N } { M } \sum _ { k = 1 } ^ { M } \nabla _ { \boldsymbol { \theta } } \log \text{GMM}(x_{t_k})\Bigr) + \sum_{\boldsymbol{\theta}}\nabla_{\boldsymbol { \theta }} G \left( \boldsymbol { \theta } _ { t } \right) \Bigr] \\ &+ G ^ { \frac { 1 } { 2 } } \left( \boldsymbol { \theta } _ { t } \right) \text{Normal} \left( \text{loc}=\boldsymbol{0},\, \text{scale}=\epsilon _ { t }\boldsymbol{1} \right) \end{align*}$$
上記の方程式では、$\epsilon _ { t }$ は $t$ 回目のイタレーションの学習率で、$\log p(\theta_t)$ は $\theta$ の対数事前分布の和です。$G ( \boldsymbol { \theta } _ { t })$ は各パラメータの勾配のスケールを調整する前処理行列です。
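参考までに、Li et al. (2016) の pSGLD では RMSProp 型の対角前処理行列が用いられます。以下は、本ノートブックの `preconditioner_decay_rate` が $\rho$ に対応するという仮定のもとでの概略であり、実装の細部は TFP のバージョンによって異なる可能性があります。
$$\begin{align*} V_t &= \rho\, V_{t-1} + (1-\rho)\,\bar{g}(\boldsymbol{\theta}_t)\odot\bar{g}(\boldsymbol{\theta}_t)\\ G(\boldsymbol{\theta}_t) &= \mathrm{diag}\Bigl(\boldsymbol{1}\oslash\bigl(\lambda\boldsymbol{1}+\sqrt{V_t}\bigr)\Bigr) \end{align*}$$
ここで $\bar{g}$ はミニバッチ勾配の平均、$\lambda$ は数値安定化のための小さな定数です。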
End of explanation
# Placeholder for mini-batch
observations_tensor = tf.compat.v1.placeholder(dtype, shape=[batch_size, dims])
# Define joint log probabilities
# Notice that each prior probability should be divided by num_samples and
# likelihood is divided by batch_size for pSGLD optimization.
log_prob_parts = [
rv_loc.log_prob(loc) / num_samples,
rv_precision.log_prob(precision) / num_samples,
rv_alpha.log_prob(alpha) / num_samples,
rv_symmetric_dirichlet_process.log_prob(mix_probs)[..., tf.newaxis]
/ num_samples,
rv_observations.log_prob(observations_tensor) / batch_size
]
joint_log_prob = tf.reduce_sum(tf.concat(log_prob_parts, axis=-1), axis=-1)
# Make mini-batch generator
dx = tf.compat.v1.data.Dataset.from_tensor_slices(observations)\
.shuffle(500).repeat().batch(batch_size)
iterator = tf.compat.v1.data.make_one_shot_iterator(dx)
next_batch = iterator.get_next()
# Define learning rate scheduling
global_step = tf.Variable(0, trainable=False)
learning_rate = tf.train.polynomial_decay(
starter_learning_rate,
global_step, decay_steps,
end_learning_rate, power=1.)
# Set up the optimizer. Don't forget to set data_size=num_samples.
optimizer_kernel = tfp.optimizer.StochasticGradientLangevinDynamics(
learning_rate=learning_rate,
preconditioner_decay_rate=0.99,
burnin=1500,
data_size=num_samples)
train_op = optimizer_kernel.minimize(-joint_log_prob)
# Arrays to store samples
mean_mix_probs_mtx = np.zeros([training_steps, max_cluster_num])
mean_alpha_mtx = np.zeros([training_steps, 1])
mean_loc_mtx = np.zeros([training_steps, max_cluster_num, dims])
mean_precision_mtx = np.zeros([training_steps, max_cluster_num, dims])
init = tf.global_variables_initializer()
sess.run(init)
start = time.time()
for it in range(training_steps):
[
mean_mix_probs_mtx[it, :],
mean_alpha_mtx[it, 0],
mean_loc_mtx[it, :, :],
mean_precision_mtx[it, :, :],
_
] = sess.run([
*training_vals,
train_op
], feed_dict={
observations_tensor: sess.run(next_batch)})
elapsed_time_psgld = time.time() - start
print("Elapsed time: {} seconds".format(elapsed_time_psgld))
# Take mean over the last sample_size iterations
mean_mix_probs_ = mean_mix_probs_mtx[-sample_size:, :].mean(axis=0)
mean_alpha_ = mean_alpha_mtx[-sample_size:, :].mean(axis=0)
mean_loc_ = mean_loc_mtx[-sample_size:, :].mean(axis=0)
mean_precision_ = mean_precision_mtx[-sample_size:, :].mean(axis=0)
Explanation: 尤度 $\text{GMM}(x_{t_k})$ の同時対数確率と事前確率 $p(\theta_t)$ を pSGLD の損失関数として使用します。
pSGLD の API に説明されているとおり、事前確率の和をサンプルサイズ $N$ で除算する必要があります。
End of explanation
loc_for_posterior = tf.compat.v1.placeholder(
dtype, [None, max_cluster_num, dims], name='loc_for_posterior')
precision_for_posterior = tf.compat.v1.placeholder(
dtype, [None, max_cluster_num, dims], name='precision_for_posterior')
mix_probs_for_posterior = tf.compat.v1.placeholder(
dtype, [None, max_cluster_num], name='mix_probs_for_posterior')
# Posterior of z (unnormalized)
unnomarlized_posterior = tfd.MultivariateNormalDiag(
loc=loc_for_posterior, scale_diag=precision_for_posterior)\
.log_prob(tf.expand_dims(tf.expand_dims(observations, axis=1), axis=1))\
+ tf.log(mix_probs_for_posterior[tf.newaxis, ...])
# Posterior of z (normarizad over latent states)
posterior = unnomarlized_posterior\
- tf.reduce_logsumexp(unnomarlized_posterior, axis=-1)[..., tf.newaxis]
cluster_asgmt = sess.run(tf.argmax(
tf.reduce_mean(posterior, axis=1), axis=1), feed_dict={
loc_for_posterior: mean_loc_mtx[-sample_size:, :],
precision_for_posterior: mean_precision_mtx[-sample_size:, :],
mix_probs_for_posterior: mean_mix_probs_mtx[-sample_size:, :]})
idxs, count = np.unique(cluster_asgmt, return_counts=True)
print('Number of inferred clusters = {}\n'.format(len(count)))
np.set_printoptions(formatter={'float': '{: 0.3f}'.format})
print('Number of elements in each cluster = {}\n'.format(count))
def convert_int_elements_to_consecutive_numbers_in(array):
unique_int_elements = np.unique(array)
for consecutive_number, unique_int_element in enumerate(unique_int_elements):
array[array == unique_int_element] = consecutive_number
return array
cmap = plt.get_cmap('tab10')
plt.scatter(
observations[:, 0], observations[:, 1],
1,
c=cmap(convert_int_elements_to_consecutive_numbers_in(cluster_asgmt)))
plt.axis([-10, 10, -10, 10])
plt.show()
Explanation: 4. 結果を視覚化する
4.1. クラスター化された結果
まず、クラスター化の結果を視覚化します。
各サンプル $x_i$ をクラスター $j$ に代入するには、$z_i$ の事後分布を次のように計算します。
$$\begin{align} j = \underset{z_i}{\arg\max}\,p(z_i\,|\,x_i,\,\boldsymbol{\theta}) \end{align}$$
End of explanation
# Calculate entropy
posterior_in_exponential = tf.exp(posterior)
uncertainty_in_entropy = tf.reduce_mean(-tf.reduce_sum(
posterior_in_exponential
* posterior,
axis=1), axis=1)
uncertainty_in_entropy_ = sess.run(uncertainty_in_entropy, feed_dict={
loc_for_posterior: mean_loc_mtx[-sample_size:, :],
precision_for_posterior: mean_precision_mtx[-sample_size:, :],
mix_probs_for_posterior: mean_mix_probs_mtx[-sample_size:, :]
})
plt.title('Entropy')
sc = plt.scatter(observations[:, 0],
observations[:, 1],
1,
c=uncertainty_in_entropy_,
cmap=plt.cm.viridis_r)
cbar = plt.colorbar(sc,
fraction=0.046,
pad=0.04,
ticks=[uncertainty_in_entropy_.min(),
uncertainty_in_entropy_.max()])
cbar.ax.set_yticklabels(['low', 'high'])
cbar.set_label('Uncertainty', rotation=270)
plt.show()
Explanation: ほぼ同数のサンプルが適切なクラスターに代入され、モデルが正しい数のクラスターを推論できたことが確認できます。
4.2. 不確実性を視覚化する
次に、サンプルごとにクラスター化の結果の不確実性を視覚化して確認します。
不確実性は、次のようにエントロピーを使用して計算します。
$$\begin{align} \text{Uncertainty}_\text{entropy} = -\frac{1}{K}\sum^{K}_{z_i=1}\sum^{O}_{l=1}p(z_i\,|\,x_i,\,\boldsymbol{\theta}_l)\log p(z_i\,|\,x_i,\,\boldsymbol{\theta}_l) \end{align}$$
pSGLD では、イタレーションごとのトレーニングパラメータの値をその事後分布のサンプルとして処理します。したがって、パラメータごとに $O$ イタレーションの値に対するエントロピーを計算します。最終的なエントロピー値は、全クラスター代入のエントロピーを平均化して計算されます。
End of explanation
for idx, number_of_samples in zip(idxs, count):
    print(
        'Component id = {}, Number of elements = {}'
        .format(idx, number_of_samples))
print(
'Mean loc = {}, Mean scale = {}\n'
.format(mean_loc_[idx, :], mean_precision_[idx, :]))
Explanation: 上記のグラフでは、輝度が低いほど不確実性が高いことを示します。クラスターの境界近くのサンプルの不確実性が特に高いことがわかります。直感的に、これらのサンプルをクラスター化するのが困難であることを知ることができます。
4.3. 選択された混合コンポーネントの平均とスケール
次に、選択されたクラスターの $\mu_j$ と $\sigma_j$ を見てみましょう。
End of explanation
plt.ylabel('Mean posterior of mixture weight')
plt.xlabel('Component')
plt.bar(range(0, max_cluster_num), mean_mix_probs_)
plt.show()
Explanation: またしても、$\boldsymbol{\mu_j}$ と $\boldsymbol{\sigma_j}$ は、グラウンドトゥルースに近い結果が得られています。
4.4 各混合コンポーネントの混合重み
推論された混合重みも確認しましょう。
End of explanation
print('Value of inferred alpha = {0:.3f}\n'.format(mean_alpha_[0]))
plt.ylabel('Sample value of alpha')
plt.xlabel('Iteration')
plt.plot(mean_alpha_mtx)
plt.show()
Explanation: いくつか(3 つ)の混合コンポーネントにのみ大きな重みがあり、残りはゼロに近い値となっているのがわかります。これはまた、モデルがサンプルの分布を構成する正しい数の混合コンポーネントを推論したことも示しています。
4.5. $\alpha$ の収束
ディリクレ分布の集中度パラメータ $\alpha$ の収束を調べましょう。
End of explanation
step = sample_size
num_of_iterations = 50
estimated_num_of_clusters = []
interval = (training_steps - step) // (num_of_iterations - 1)
iterations = np.asarray(range(step, training_steps+1, interval))
for iteration in iterations:
start_position = iteration-step
end_position = iteration
result = sess.run(tf.argmax(
tf.reduce_mean(posterior, axis=1), axis=1), feed_dict={
loc_for_posterior:
mean_loc_mtx[start_position:end_position, :],
precision_for_posterior:
mean_precision_mtx[start_position:end_position, :],
mix_probs_for_posterior:
mean_mix_probs_mtx[start_position:end_position, :]})
idxs, count = np.unique(result, return_counts=True)
estimated_num_of_clusters.append(len(count))
plt.ylabel('Number of inferred clusters')
plt.xlabel('Iteration')
plt.yticks(np.arange(1, max(estimated_num_of_clusters) + 1, 1))
plt.plot(iterations - 1, estimated_num_of_clusters)
plt.show()
Explanation: ディリクレ混合モデルでは $\alpha$ が小さいほど期待されるクラスター数が低くなることを考慮すると、モデルはイタレーションごとに最適な数のクラスターを学習しているようです。
4.6. イテレーションで推論されるクラスターの数
推論されるクラスター数が、イテレーションを通じてどのように変化するかを視覚化します。
これを行うには、イテレーションごとにクラスター数を推論します。
End of explanation
# Learning rates and decay
starter_learning_rate_rmsprop = 1e-2
end_learning_rate_rmsprop = 1e-4
decay_steps_rmsprop = 1e4
# Number of training steps
training_steps_rmsprop = 50000
# Mini-batch size
batch_size_rmsprop = 20
# Define trainable variables.
mix_probs_rmsprop = tf.nn.softmax(
tf.Variable(
name='mix_probs_rmsprop',
initial_value=np.ones([max_cluster_num], dtype) / max_cluster_num))
loc_rmsprop = tf.Variable(
name='loc_rmsprop',
initial_value=np.zeros([max_cluster_num, dims], dtype)
+ np.random.uniform(
low=-9, #set around minimum value of sample value
high=9, #set around maximum value of sample value
size=[max_cluster_num, dims]))
precision_rmsprop = tf.nn.softplus(tf.Variable(
name='precision_rmsprop',
initial_value=
np.ones([max_cluster_num, dims], dtype=dtype)))
alpha_rmsprop = tf.nn.softplus(tf.Variable(
name='alpha_rmsprop',
initial_value=
np.ones([1], dtype=dtype)))
training_vals_rmsprop =\
[mix_probs_rmsprop, alpha_rmsprop, loc_rmsprop, precision_rmsprop]
# Prior distributions of the training variables
#Use symmetric Dirichlet prior as finite approximation of Dirichlet process.
rv_symmetric_dirichlet_process_rmsprop = tfd.Dirichlet(
concentration=np.ones(max_cluster_num, dtype)
* alpha_rmsprop / max_cluster_num,
name='rv_sdp_rmsprop')
rv_loc_rmsprop = tfd.Independent(
tfd.Normal(
loc=tf.zeros([max_cluster_num, dims], dtype=dtype),
scale=tf.ones([max_cluster_num, dims], dtype=dtype)),
reinterpreted_batch_ndims=1,
name='rv_loc_rmsprop')
rv_precision_rmsprop = tfd.Independent(
tfd.InverseGamma(
concentration=np.ones([max_cluster_num, dims], dtype),
rate=np.ones([max_cluster_num, dims], dtype)),
reinterpreted_batch_ndims=1,
name='rv_precision_rmsprop')
rv_alpha_rmsprop = tfd.InverseGamma(
concentration=np.ones([1], dtype=dtype),
rate=np.ones([1]),
name='rv_alpha_rmsprop')
# Define mixture model
rv_observations_rmsprop = tfd.MixtureSameFamily(
mixture_distribution=tfd.Categorical(probs=mix_probs_rmsprop),
components_distribution=tfd.MultivariateNormalDiag(
loc=loc_rmsprop,
scale_diag=precision_rmsprop))
log_prob_parts_rmsprop = [
rv_loc_rmsprop.log_prob(loc_rmsprop),
rv_precision_rmsprop.log_prob(precision_rmsprop),
rv_alpha_rmsprop.log_prob(alpha_rmsprop),
rv_symmetric_dirichlet_process_rmsprop
.log_prob(mix_probs_rmsprop)[..., tf.newaxis],
rv_observations_rmsprop.log_prob(observations_tensor)
* num_samples / batch_size
]
joint_log_prob_rmsprop = tf.reduce_sum(
tf.concat(log_prob_parts_rmsprop, axis=-1), axis=-1)
# Define learning rate scheduling
global_step_rmsprop = tf.Variable(0, trainable=False)
learning_rate = tf.train.polynomial_decay(
starter_learning_rate_rmsprop,
global_step_rmsprop, decay_steps_rmsprop,
end_learning_rate_rmsprop, power=1.)
# Set up the optimizer. Don't forget to set data_size=num_samples.
optimizer_kernel_rmsprop = tf.train.RMSPropOptimizer(
learning_rate=learning_rate,
decay=0.99)
train_op_rmsprop = optimizer_kernel_rmsprop.minimize(-joint_log_prob_rmsprop)
init_rmsprop = tf.global_variables_initializer()
sess.run(init_rmsprop)
start = time.time()
for it in range(training_steps_rmsprop):
[
_
] = sess.run([
train_op_rmsprop
], feed_dict={
observations_tensor: sess.run(next_batch)})
elapsed_time_rmsprop = time.time() - start
print("RMSProp elapsed_time: {} seconds ({} iterations)"
.format(elapsed_time_rmsprop, training_steps_rmsprop))
print("pSGLD elapsed_time: {} seconds ({} iterations)"
.format(elapsed_time_psgld, training_steps))
mix_probs_rmsprop_, alpha_rmsprop_, loc_rmsprop_, precision_rmsprop_ =\
sess.run(training_vals_rmsprop)
Explanation: イテレーションを繰り返すと、クラスターの数が 3 に近づいていきます。イテレーションを繰り返すうちに、$\alpha$ がより小さな値に収束することから、モデルが最適なクラスターの数を推論するようにパラメータを正しく学習していることがわかります。
興味深いことに、ずっと後のイテレーションで収束した $\alpha$ とは異なり、早期のイテレーションで推論がすでに適切なクラスター数に収束していることが示されています。
4.7. RMSProp を使ってモデルを適合する
このセクションでは、pSGLD のモンテカルロサンプリングスキームの有効性を確認するために、RMSProp を使用してモデルを適合します。RMSProp にはサンプリングスキームがなく、pSGLD は RMSProp に基づいているため、比較のために RMSProp を選んでいます。
End of explanation
cluster_asgmt_rmsprop = sess.run(tf.argmax(
tf.reduce_mean(posterior, axis=1), axis=1), feed_dict={
loc_for_posterior: loc_rmsprop_[tf.newaxis, :],
precision_for_posterior: precision_rmsprop_[tf.newaxis, :],
mix_probs_for_posterior: mix_probs_rmsprop_[tf.newaxis, :]})
idxs, count = np.unique(cluster_asgmt_rmsprop, return_counts=True)
print('Number of inferred clusters = {}\n'.format(len(count)))
np.set_printoptions(formatter={'float': '{: 0.3f}'.format})
print('Number of elements in each cluster = {}\n'.format(count))
cmap = plt.get_cmap('tab10')
plt.scatter(
observations[:, 0], observations[:, 1],
1,
c=cmap(convert_int_elements_to_consecutive_numbers_in(
cluster_asgmt_rmsprop)))
plt.axis([-10, 10, -10, 10])
plt.show()
Explanation: pSGLD に比較して、RMSProp のイテレーション数の方が長いにも関わらず、RMSProp による最適化の方がはるかに高速に行われています。
次に、クラスター化の結果を確認しましょう。
End of explanation
plt.ylabel('MAP inferece of mixture weight')
plt.xlabel('Component')
plt.bar(range(0, max_cluster_num), mix_probs_rmsprop_)
plt.show()
Explanation: この実験では、RMSProp によって正しいクラスター数を推論することができませんでした。混合重みも見てみましょう。
End of explanation |
9,439 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: gdsfactory in 5 minutes
Component -> Circuit -> Mask
gdsfactory easily enables you to go from a Component, to a higher level Component (circuit), or even higher level Component (Mask)
For a component it's important that you spend some time early to parametrize it correctly. Don't be afraid to spend some time using pen and paper and choosing easy to understand names.
Lets for example define a ring resonator, which is already a circuit made of waveguides, bends and couplers.
Components, circuits and Masks are made in Parametric cell functions, that can also accept other ComponentSpec.
A Component Spec can be
Step3: Lets define a ring function that also accepts other component specs for the subcomponents (straight, coupler, bend)
Step4: How do you customize components?
You can use functools.partial to customize the default settings from any component
Step6: Netlist driven flow
You can define components as a Place and Route netlist.
instances
placements
routes
Step7: Mask
Once you have your components and circuits defined, you can add them into a mask.
You need to consider
Step8: Make sure you save the GDS with metadata so when the chip comes back you remember what you put on it
You can also save the labels for automatic testing. | Python Code:
from typing import Optional
import gdsfactory as gf
from gdsfactory.component import Component
from gdsfactory.components.bend_euler import bend_euler
from gdsfactory.components.coupler90 import coupler90 as coupler90function
from gdsfactory.components.coupler_straight import (
coupler_straight as coupler_straight_function,
)
from gdsfactory.components.straight import straight as straight_function
from gdsfactory.cross_section import strip
from gdsfactory.snap import assert_on_2nm_grid
from gdsfactory.types import ComponentSpec, CrossSectionSpec
@gf.cell
def coupler_ring(
gap: float = 0.2,
radius: float = 5.0,
length_x: float = 4.0,
coupler90: ComponentSpec = coupler90function,
bend: Optional[ComponentSpec] = None,
straight: ComponentSpec = straight_function,
coupler_straight: ComponentSpec = coupler_straight_function,
cross_section: CrossSectionSpec = strip,
bend_cross_section: Optional[CrossSectionSpec] = None,
**kwargs
) -> Component:
Coupler for ring.
Args:
gap: spacing between parallel coupled straight waveguides.
radius: of the bends.
length_x: length of the parallel coupled straight waveguides.
coupler90: straight coupled to a 90deg bend.
bend: bend spec.
coupler_straight: two parallel coupled straight waveguides.
cross_section: cross_section spec.
bend_cross_section: optional bend cross_section spec.
kwargs: cross_section settings for bend and coupler.
.. code::
2 3
| |
\ /
\ /
---=========---
1 length_x 4
bend = bend or bend_euler
c = Component()
assert_on_2nm_grid(gap)
# define subcells
coupler90_component = gf.get_component(
coupler90,
gap=gap,
radius=radius,
bend=bend,
cross_section=cross_section,
bend_cross_section=bend_cross_section,
**kwargs
)
coupler_straight_component = gf.get_component(
coupler_straight,
gap=gap,
length=length_x,
cross_section=cross_section,
straight=straight,
**kwargs
)
# add references to subcells
cbl = c << coupler90_component
cbr = c << coupler90_component
cs = c << coupler_straight_component
# connect references
y = coupler90_component.y
cs.connect(port="o4", destination=cbr.ports["o1"])
cbl.reflect(p1=(0, y), p2=(1, y))
cbl.connect(port="o2", destination=cs.ports["o2"])
c.add_port("o1", port=cbl.ports["o3"])
c.add_port("o2", port=cbl.ports["o4"])
c.add_port("o3", port=cbr.ports["o3"])
c.add_port("o4", port=cbr.ports["o4"])
c.auto_rename_ports()
return c
coupler = coupler_ring()
coupler
Explanation: gdsfactory in 5 minutes
Component -> Circuit -> Mask
gdsfactory easily enables you to go from a Component, to a higher level Component (circuit), or even higher level Component (Mask)
For a component it's important that you spend some time early to parametrize it correctly. Don't be afraid to spend some time using pen and paper and choosing easy to understand names.
Let's, for example, define a ring resonator, which is already a circuit made of waveguides, bends and couplers.
Components, circuits and Masks are made in parametric cell functions that can also accept other ComponentSpecs.
A Component Spec can be:
a parametric cell function (decorated with cell)
a string. To get a cell registered in the active pdk.
a dict. dict(component='mmi2x2', length_mmi=3)
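For illustration, each of these forms can be resolved with gf.get_component, which the cells above also use internally. Treat this as a sketch: which string names are registered depends on the active PDK, and "mmi2x2" is only used here because the dict example above refers to it.
c1 = gf.get_component(coupler_ring)                            # a parametric cell function defined above
c2 = gf.get_component("mmi2x2")                                # a string registered in the active PDK
c3 = gf.get_component(dict(component="mmi2x2", length_mmi=3))  # a dict spec with setting overrides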
End of explanation
import gdsfactory as gf
@gf.cell
def ring_single(
gap: float = 0.2,
radius: float = 10.0,
length_x: float = 4.0,
length_y: float = 0.6,
coupler_ring: ComponentSpec = coupler_ring,
straight: ComponentSpec = straight_function,
bend: ComponentSpec = bend_euler,
cross_section: ComponentSpec = "strip",
**kwargs
) -> gf.Component:
Returns a single ring.
ring coupler (cb: bottom) connects to two vertical straights (sl: left, sr: right),
two bends (bl, br) and horizontal straight (wg: top)
Args:
gap: gap between for coupler.
radius: for the bend and coupler.
length_x: ring coupler length.
length_y: vertical straight length.
coupler_ring: ring coupler spec.
straight: straight spec.
bend: 90 degrees bend spec.
cross_section: cross_section spec.
kwargs: cross_section settings
.. code::
bl-st-br
| |
sl sr length_y
| |
--==cb==-- gap
length_x
gf.snap.assert_on_2nm_grid(gap)
c = gf.Component()
cb = c << gf.get_component(
coupler_ring,
bend=bend,
straight=straight,
gap=gap,
radius=radius,
length_x=length_x,
cross_section=cross_section,
**kwargs
)
sy = gf.get_component(
straight, length=length_y, cross_section=cross_section, **kwargs
)
b = gf.get_component(bend, cross_section=cross_section, radius=radius, **kwargs)
sx = gf.get_component(
straight, length=length_x, cross_section=cross_section, **kwargs
)
sl = c << sy
sr = c << sy
bl = c << b
br = c << b
st = c << sx
sl.connect(port="o1", destination=cb.ports["o2"])
bl.connect(port="o2", destination=sl.ports["o2"])
st.connect(port="o2", destination=bl.ports["o1"])
br.connect(port="o2", destination=st.ports["o1"])
sr.connect(port="o1", destination=br.ports["o1"])
sr.connect(port="o2", destination=cb.ports["o3"])
c.add_port("o2", port=cb.ports["o4"])
c.add_port("o1", port=cb.ports["o1"])
return c
ring = ring_single()
ring
Explanation: Let's define a ring function that also accepts other component specs for the subcomponents (straight, coupler, bend)
End of explanation
ring_single3 = gf.partial(ring_single, radius=3)
ring_single3()
ring_array = gf.components.ring_single_array(
list_of_dicts=[dict(radius=i) for i in [5, 6, 7]]
)
ring_array
ring_with_grating_couplers = gf.routing.add_fiber_array(ring_array)
ring_with_grating_couplers
Explanation: How do you customize components?
You can use functools.partial to customize the default settings from any component
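As a sketch of the same idea with only the standard library (gf.partial is, to my understanding, a thin wrapper that additionally keeps the derived cell name readable), a plain functools.partial works the same way; the parameter values below are just illustrative:
from functools import partial

ring_small_gap = partial(ring_single, gap=0.15, length_x=2.0)  # illustrative overrides
ring_small_gap()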
End of explanation
import gdsfactory as gf
yaml =
name: sample_different_factory
instances:
bl:
component: pad
tl:
component: pad
br:
component: pad
tr:
component: pad
placements:
tl:
x: 0
y: 200
br:
x: 400
y: 400
tr:
x: 400
y: 600
routes:
electrical:
settings:
separation: 20
layer: [31, 0]
width: 10
links:
tl,e3: tr,e1
bl,e3: br,e1
optical:
settings:
radius: 100
links:
bl,e4: br,e3
mzi = gf.read.from_yaml(yaml)
mzi
Explanation: Netlist driven flow
You can define components as a Place and Route netlist.
instances
placements
routes
End of explanation
import toolz
import gdsfactory as gf
ring_te = toolz.compose(gf.routing.add_fiber_array, gf.components.ring_single)
rings = gf.grid([ring_te(radius=r) for r in [10, 20, 50]])
@gf.cell
def mask(size=(1000, 1000)):
c = gf.Component()
c << gf.components.die(size=size)
c << rings
return c
m = mask(cache=False)
m
gdspath = m.write_gds_with_metadata(gdspath="mask.gds")
Explanation: Mask
Once you have your components and circuits defined, you can add them into a mask.
You need to consider:
what design variations do you want to include in the mask? You need to define your Design Of Experiment or DOE
obey DRC (Design rule checking) foundry rules for manufacturability. Foundry usually provides those rules for each layer (min width, min space, min density, max density)
make sure you will be able to test the devices after fabrication. Obey DFT (design for testing) rules. For example, if your test setup works only for fiber array, what is the fiber array spacing (127 or 250um?)
if you plan to package your device, make sure you follow your packaging guidelines from your packaging house (min pad size, min pad pitch, max number of rows for wire bonding ...)
End of explanation
labels_path = gdspath.with_suffix(".csv")
gf.mask.write_labels(gdspath=gdspath, layer_label=(66, 0))
mask_metadata = gf.mask.read_metadata(gdspath=gdspath)
tm = gf.mask.merge_test_metadata(mask_metadata=mask_metadata, labels_path=labels_path)
tm.keys()
Explanation: Make sure you save the GDS with metadata so when the chip comes back you remember what you put on it
You can also save the labels for automatic testing.
End of explanation |
9,440 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Spatiotemporal permutation F-test on full sensor data
Tests for differential evoked responses in at least
one condition using a permutation clustering test.
The FieldTrip neighbor templates will be used to determine
the adjacency between sensors. This serves as a spatial prior
to the clustering. Spatiotemporal clusters will then
be visualized using custom matplotlib code.
Here, the unit of observation is epochs from a specific study subject.
However, the same logic applies when the unit observation is
a number of study subject each of whom contribute their own averaged
data (i.e., an average of their epochs). This would then be considered
an analysis at the "2nd level".
See the FieldTrip tutorial for a caveat regarding
the possible interpretation of "significant" clusters.
For more information on cluster-based permutation testing in MNE-Python,
see also
Step1: Set parameters
Step2: Read epochs for the channel of interest
Step3: Find the FieldTrip neighbor definition to setup sensor adjacency
Step4: Compute permutation statistic
How does it work? We use clustering to "bind" together features which are
similar. Our features are the magnetic fields measured over our sensor
array at different times. This reduces the multiple comparison problem.
To compute the actual test-statistic, we first sum all F-values in all
clusters. We end up with one statistic for each cluster.
Then we generate a distribution from the data by shuffling our conditions
between our samples and recomputing our clusters and the test statistics.
We test for the significance of a given cluster by computing the probability
of observing a cluster of that size
Step5: <div class="alert alert-info"><h4>Note</h4><p>Note how we only specified an adjacency for sensors! However,
because we used
Step6: Permutation statistic for time-frequencies
Let's do the same thing with the time-frequency decomposition of the data
(see tut-sensors-time-freq for a tutorial and
ex-tfr-comparison for a comparison of time-frequency methods) to
show how cluster permutations can be done on higher-dimensional data.
Step7: Remember the note on the adjacency matrix from above
Step8: Now we can run the cluster permutation test, but first we have to set a
threshold. This example decimates in time and uses few frequencies so we need
to increase the threshold from the default value in order to have
differentiated clusters (i.e., so that our algorithm doesn't just find one
large cluster). For a more principled method of setting this parameter,
threshold-free cluster enhancement may be used.
See disc-stats for a discussion.
Step9: Finally, we can plot our results. It is difficult to visualize clusters in
time-frequency-sensor space; plotting time-frequency spectrograms and
plotting topomaps display time-frequency and sensor space respectively
but they are difficult to combine. We will plot topomaps with the clustered
sensors colored in white adjacent to spectrograms in order to provide a
visualization of the results. This is a dimensionally limited view, however.
Each sensor has its own significant time-frequencies, but, in order to
display a single spectrogram, all the time-frequencies that are significant
for any sensor in the cluster are plotted as significant. This is a
difficulty inherent to visualizing high-dimensional data and should be taken
into consideration when interpreting results. | Python Code:
# Authors: Denis Engemann <[email protected]>
# Jona Sassenhagen <[email protected]>
# Alex Rockhill <[email protected]>
# Stefan Appelhoff <[email protected]>
#
# License: BSD-3-Clause
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import make_axes_locatable
import scipy.stats
import mne
from mne.stats import spatio_temporal_cluster_test, combine_adjacency
from mne.datasets import sample
from mne.channels import find_ch_adjacency
from mne.viz import plot_compare_evokeds
from mne.time_frequency import tfr_morlet
Explanation: Spatiotemporal permutation F-test on full sensor data
Tests for differential evoked responses in at least
one condition using a permutation clustering test.
The FieldTrip neighbor templates will be used to determine
the adjacency between sensors. This serves as a spatial prior
to the clustering. Spatiotemporal clusters will then
be visualized using custom matplotlib code.
Here, the unit of observation is epochs from a specific study subject.
However, the same logic applies when the unit observation is
a number of study subject each of whom contribute their own averaged
data (i.e., an average of their epochs). This would then be considered
an analysis at the "2nd level".
See the FieldTrip tutorial for a caveat regarding
the possible interpretation of "significant" clusters.
For more information on cluster-based permutation testing in MNE-Python,
see also: tut-cluster-one-samp-tfr
End of explanation
data_path = sample.data_path()
meg_path = data_path / 'MEG' / 'sample'
raw_fname = meg_path / 'sample_audvis_filt-0-40_raw.fif'
event_fname = meg_path / 'sample_audvis_filt-0-40_raw-eve.fif'
event_id = {'Aud/L': 1, 'Aud/R': 2, 'Vis/L': 3, 'Vis/R': 4}
tmin = -0.2
tmax = 0.5
# Setup for reading the raw data
raw = mne.io.read_raw_fif(raw_fname, preload=True)
raw.filter(1, 30)
events = mne.read_events(event_fname)
Explanation: Set parameters
End of explanation
picks = mne.pick_types(raw.info, meg='mag', eog=True)
reject = dict(mag=4e-12, eog=150e-6)
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=None, reject=reject, preload=True)
epochs.drop_channels(['EOG 061'])
epochs.equalize_event_counts(event_id)
# Obtain the data as a 3D matrix and transpose it such that
# the dimensions are as expected for the cluster permutation test:
# n_epochs × n_times × n_channels
X = [epochs[event_name].get_data() for event_name in event_id]
X = [np.transpose(x, (0, 2, 1)) for x in X]
Explanation: Read epochs for the channel of interest
End of explanation
adjacency, ch_names = find_ch_adjacency(epochs.info, ch_type='mag')
print(type(adjacency)) # it's a sparse matrix!
mne.viz.plot_ch_adjacency(epochs.info, adjacency, ch_names)
Explanation: Find the FieldTrip neighbor definition to setup sensor adjacency
End of explanation
# We are running an F test, so we look at the upper tail
# see also: https://stats.stackexchange.com/a/73993
tail = 1
# We want to set a critical test statistic (here: F), to determine when
# clusters are being formed. Using Scipy's percent point function of the F
# distribution, we can conveniently select a threshold that corresponds to
# some alpha level that we arbitrarily pick.
alpha_cluster_forming = 0.001
# For an F test we need the degrees of freedom for the numerator
# (number of conditions - 1) and the denominator (number of observations
# - number of conditions):
n_conditions = len(event_id)
n_observations = len(X[0])
dfn = n_conditions - 1
dfd = n_observations - n_conditions
# Note: we calculate 1 - alpha_cluster_forming to get the critical value
# on the right tail
f_thresh = scipy.stats.f.ppf(1 - alpha_cluster_forming, dfn=dfn, dfd=dfd)
# run the cluster based permutation analysis
cluster_stats = spatio_temporal_cluster_test(X, n_permutations=1000,
threshold=f_thresh, tail=tail,
n_jobs=None, buffer_size=None,
adjacency=adjacency)
F_obs, clusters, p_values, _ = cluster_stats
Explanation: Compute permutation statistic
How does it work? We use clustering to "bind" together features which are
similar. Our features are the magnetic fields measured over our sensor
array at different times. This reduces the multiple comparison problem.
To compute the actual test-statistic, we first sum all F-values in all
clusters. We end up with one statistic for each cluster.
Then we generate a distribution from the data by shuffling our conditions
between our samples and recomputing our clusters and the test statistics.
We test for the significance of a given cluster by computing the probability
of observing a cluster of that size
:footcite:MarisOostenveld2007,Sassenhagen2019.
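To make the logic concrete, here is a self-contained toy sketch of the cluster-permutation idea in one dimension. It is only an illustration of the scheme described above, not how MNE-Python implements it internally:
import numpy as np
rng = np.random.default_rng(0)
# toy data: 2 conditions x 20 "epochs" x 50 time points, with an effect at samples 20-30
x = rng.normal(size=(2, 20, 50))
x[1, :, 20:30] += 1.0

def max_cluster_mass(a, b, thresh=4.0):
    # for two groups the F statistic reduces to t**2; sum supra-threshold values over contiguous runs
    t = (a.mean(0) - b.mean(0)) / np.sqrt(a.var(0, ddof=1) / len(a) + b.var(0, ddof=1) / len(b))
    f = t ** 2
    best = mass = 0.0
    for v in f:
        mass = mass + v if v > thresh else 0.0
        best = max(best, mass)
    return best

obs = max_cluster_mass(x[0], x[1])
pooled = np.concatenate([x[0], x[1]])
null = [max_cluster_mass(pooled[p[:20]], pooled[p[20:]])
        for p in (rng.permutation(len(pooled)) for _ in range(500))]
print('observed cluster mass:', obs, ' p ~', np.mean(np.array(null) >= obs))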
End of explanation
# We subselect clusters that we consider significant at an arbitrarily
# picked alpha level: "p_accept".
# NOTE: remember the caveats with respect to "significant" clusters that
# we mentioned in the introduction of this tutorial!
p_accept = 0.01
good_cluster_inds = np.where(p_values < p_accept)[0]
# configure variables for visualization
colors = {"Aud": "crimson", "Vis": 'steelblue'}
linestyles = {"L": '-', "R": '--'}
# organize data for plotting
evokeds = {cond: epochs[cond].average() for cond in event_id}
# loop over clusters
for i_clu, clu_idx in enumerate(good_cluster_inds):
# unpack cluster information, get unique indices
time_inds, space_inds = np.squeeze(clusters[clu_idx])
ch_inds = np.unique(space_inds)
time_inds = np.unique(time_inds)
# get topography for F stat
f_map = F_obs[time_inds, ...].mean(axis=0)
# get signals at the sensors contributing to the cluster
sig_times = epochs.times[time_inds]
# create spatial mask
mask = np.zeros((f_map.shape[0], 1), dtype=bool)
mask[ch_inds, :] = True
# initialize figure
fig, ax_topo = plt.subplots(1, 1, figsize=(10, 3))
# plot average test statistic and mark significant sensors
f_evoked = mne.EvokedArray(f_map[:, np.newaxis], epochs.info, tmin=0)
f_evoked.plot_topomap(times=0, mask=mask, axes=ax_topo, cmap='Reds',
vmin=np.min, vmax=np.max, show=False,
colorbar=False, mask_params=dict(markersize=10))
image = ax_topo.images[0]
# remove the title that would otherwise say "0.000 s"
ax_topo.set_title("")
# create additional axes (for ERF and colorbar)
divider = make_axes_locatable(ax_topo)
# add axes for colorbar
ax_colorbar = divider.append_axes('right', size='5%', pad=0.05)
plt.colorbar(image, cax=ax_colorbar)
ax_topo.set_xlabel(
'Averaged F-map ({:0.3f} - {:0.3f} s)'.format(*sig_times[[0, -1]]))
# add new axis for time courses and plot time courses
ax_signals = divider.append_axes('right', size='300%', pad=1.2)
title = 'Cluster #{0}, {1} sensor'.format(i_clu + 1, len(ch_inds))
if len(ch_inds) > 1:
title += "s (mean)"
plot_compare_evokeds(evokeds, title=title, picks=ch_inds, axes=ax_signals,
colors=colors, linestyles=linestyles, show=False,
split_legend=True, truncate_yaxis='auto')
# plot temporal cluster extent
ymin, ymax = ax_signals.get_ylim()
ax_signals.fill_betweenx((ymin, ymax), sig_times[0], sig_times[-1],
color='orange', alpha=0.3)
# clean up viz
mne.viz.tight_layout(fig=fig)
fig.subplots_adjust(bottom=.05)
plt.show()
Explanation: <div class="alert alert-info"><h4>Note</h4><p>Note how we only specified an adjacency for sensors! However,
because we used :func:`mne.stats.spatio_temporal_cluster_test`,
an adjacency for time points was automatically taken into
account. That is, at time point N, the time points N - 1 and
N + 1 were considered as adjacent (this is also called "lattice
adjacency"). This is only possbile because we ran the analysis on
2D data (times × channels) per observation ... for 3D data per
observation (e.g., times × frequencies × channels), we will need
to use :func:`mne.stats.combine_adjacency`, as shown further
below.</p></div>
Note also that the same functions work with source estimates.
The only differences are the origin of the data, the size,
and the adjacency definition.
It can be used for single trials or for groups of subjects.
Visualize clusters
End of explanation
decim = 4
freqs = np.arange(7, 30, 3) # define frequencies of interest
n_cycles = freqs / freqs[0]
epochs_power = list()
for condition in [epochs[k] for k in ('Aud/L', 'Vis/L')]:
this_tfr = tfr_morlet(condition, freqs, n_cycles=n_cycles,
decim=decim, average=False, return_itc=False)
this_tfr.apply_baseline(mode='ratio', baseline=(None, 0))
epochs_power.append(this_tfr.data)
# transpose again to (epochs, frequencies, times, channels)
X = [np.transpose(x, (0, 2, 3, 1)) for x in epochs_power]
Explanation: Permutation statistic for time-frequencies
Let's do the same thing with the time-frequency decomposition of the data
(see tut-sensors-time-freq for a tutorial and
ex-tfr-comparison for a comparison of time-frequency methods) to
show how cluster permutations can be done on higher-dimensional data.
End of explanation
# our data at each observation is of shape frequencies × times × channels
tfr_adjacency = combine_adjacency(
len(freqs), len(this_tfr.times), adjacency)
Explanation: Remember the note on the adjacency matrix from above: For 3D data, as here,
we must use :func:mne.stats.combine_adjacency to extend the
sensor-based adjacency to incorporate the time-frequency plane as well.
Here, the integer inputs are converted into a lattice and
combined with the sensor adjacency matrix so that data at similar
times and with similar frequencies and at close sensor locations are
clustered together.
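A quick way to convince yourself of the bookkeeping, using only variables already defined above (a sketch):
n_freq, n_time, n_ch = len(freqs), len(this_tfr.times), len(ch_names)
print(adjacency.shape)      # (n_ch, n_ch)
print(tfr_adjacency.shape)  # (n_freq * n_time * n_ch, n_freq * n_time * n_ch)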
End of explanation
# This time we don't calculate a threshold based on the F distribution.
# We might as well select an arbitrary threshold for cluster forming
tfr_threshold = 15.0
# run cluster based permutation analysis
cluster_stats = spatio_temporal_cluster_test(
X, n_permutations=1000, threshold=tfr_threshold, tail=1, n_jobs=None,
buffer_size=None, adjacency=tfr_adjacency)
Explanation: Now we can run the cluster permutation test, but first we have to set a
threshold. This example decimates in time and uses few frequencies so we need
to increase the threshold from the default value in order to have
differentiated clusters (i.e., so that our algorithm doesn't just find one
large cluster). For a more principled method of setting this parameter,
threshold-free cluster enhancement may be used.
See disc-stats for a discussion.
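If you prefer threshold-free cluster enhancement over a fixed cutoff, the threshold argument accepts a dict; the start/step values below are only illustrative and would need tuning:
tfce = dict(start=0, step=0.5)  # illustrative TFCE parameters
# cluster_stats = spatio_temporal_cluster_test(
#     X, n_permutations=1000, threshold=tfce, tail=1, adjacency=tfr_adjacency)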
End of explanation
F_obs, clusters, p_values, _ = cluster_stats
good_cluster_inds = np.where(p_values < p_accept)[0]
for i_clu, clu_idx in enumerate(good_cluster_inds):
# unpack cluster information, get unique indices
freq_inds, time_inds, space_inds = clusters[clu_idx]
ch_inds = np.unique(space_inds)
time_inds = np.unique(time_inds)
freq_inds = np.unique(freq_inds)
# get topography for F stat
f_map = F_obs[freq_inds].mean(axis=0)
f_map = f_map[time_inds].mean(axis=0)
# get signals at the sensors contributing to the cluster
sig_times = epochs.times[time_inds]
# initialize figure
fig, ax_topo = plt.subplots(1, 1, figsize=(10, 3))
# create spatial mask
mask = np.zeros((f_map.shape[0], 1), dtype=bool)
mask[ch_inds, :] = True
# plot average test statistic and mark significant sensors
f_evoked = mne.EvokedArray(f_map[:, np.newaxis], epochs.info, tmin=0)
f_evoked.plot_topomap(times=0, mask=mask, axes=ax_topo, cmap='Reds',
vmin=np.min, vmax=np.max, show=False,
colorbar=False, mask_params=dict(markersize=10))
image = ax_topo.images[0]
# create additional axes (for ERF and colorbar)
divider = make_axes_locatable(ax_topo)
# add axes for colorbar
ax_colorbar = divider.append_axes('right', size='5%', pad=0.05)
plt.colorbar(image, cax=ax_colorbar)
ax_topo.set_xlabel(
'Averaged F-map ({:0.3f} - {:0.3f} s)'.format(*sig_times[[0, -1]]))
# remove the title that would otherwise say "0.000 s"
ax_topo.set_title("")
# add new axis for spectrogram
ax_spec = divider.append_axes('right', size='300%', pad=1.2)
title = 'Cluster #{0}, {1} spectrogram'.format(i_clu + 1, len(ch_inds))
if len(ch_inds) > 1:
title += " (max over channels)"
F_obs_plot = F_obs[..., ch_inds].max(axis=-1)
F_obs_plot_sig = np.zeros(F_obs_plot.shape) * np.nan
F_obs_plot_sig[tuple(np.meshgrid(freq_inds, time_inds))] = \
F_obs_plot[tuple(np.meshgrid(freq_inds, time_inds))]
for f_image, cmap in zip([F_obs_plot, F_obs_plot_sig], ['gray', 'autumn']):
c = ax_spec.imshow(f_image, cmap=cmap, aspect='auto', origin='lower',
extent=[epochs.times[0], epochs.times[-1],
freqs[0], freqs[-1]])
ax_spec.set_xlabel('Time (ms)')
ax_spec.set_ylabel('Frequency (Hz)')
ax_spec.set_title(title)
# add another colorbar
ax_colorbar2 = divider.append_axes('right', size='5%', pad=0.05)
plt.colorbar(c, cax=ax_colorbar2)
ax_colorbar2.set_ylabel('F-stat')
# clean up viz
mne.viz.tight_layout(fig=fig)
fig.subplots_adjust(bottom=.05)
plt.show()
Explanation: Finally, we can plot our results. It is difficult to visualize clusters in
time-frequency-sensor space; plotting time-frequency spectrograms and
plotting topomaps display time-frequency and sensor space respectively
but they are difficult to combine. We will plot topomaps with the clustered
sensors colored in white adjacent to spectrograms in order to provide a
visualization of the results. This is a dimensionally limited view, however.
Each sensor has its own significant time-frequencies, but, in order to
display a single spectrogram, all the time-frequencies that are significant
for any sensor in the cluster are plotted as significant. This is a
difficulty inherent to visualizing high-dimensional data and should be taken
into consideration when interpreting results.
End of explanation |
9,441 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Demo caching
This notebook shows how caching of daily results is organised. First we show the low-level approach, then a high-level function is used.
Low-level approach
Step1: We demonstrate the caching for the minimal daily water consumption (should be close to zero unless there is a water leak). We create a cache object by specifying what we like to store and retrieve through this object. The cached data is saved as a single csv per sensor in a folder specified in the opengrid.cfg. Add the path to a folder where you want these csv-files to be stored as follows to your opengrid.cfg
[data]
folder
Step2: If this is the first time you run this demo, no cached data will be found, and you get an empty graph.
Let's store some results in this cache. We start from the water consumption of last week.
Step3: We use the method daily_min() from the analysis module to obtain a dataframe with daily minima for each sensor.
Step4: Now we can get the daily water minima from the cache directly. Pass a start or end date to limit the returned dataframe.
Step5: A high-level cache function
The caching of daily results is very similar for all kinds of results. Therefore, a high-level function is defined that can be parametrised to cache a lot of different things. | Python Code:
import pandas as pd
from opengrid.library import misc
from opengrid.library import houseprint
from opengrid.library import caching
from opengrid.library import analysis  # needed further below for analysis.DailyAgg
import charts
hp = houseprint.Houseprint()
Explanation: Demo caching
This notebook shows how caching of daily results is organised. First we show the low-level approach, then a high-level function is used.
Low-level approach
End of explanation
cache_water = caching.Cache(variable='water_daily_min')
df_cache = cache_water.get(sensors=hp.get_sensors(sensortype='water'))
charts.plot(df_cache.ix[-8:], stock=True, show='inline')
Explanation: We demonstrate the caching for the minimal daily water consumption (should be close to zero unless there is a water leak). We create a cache object by specifying what we like to store and retrieve through this object. The cached data is saved as a single csv per sensor in a folder specified in the opengrid.cfg. Add the path to a folder where you want these csv-files to be stored as follows to your opengrid.cfg
[data]
folder: path_to_folder
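To double-check that the configuration is picked up, you can read the file yourself with the Python 2.7 standard library (a sketch; the helper opengrid uses internally may differ, and the path to opengrid.cfg is assumed to be the working directory):
import ConfigParser  # Python 2.7 standard library
cfg = ConfigParser.ConfigParser()
cfg.read('opengrid.cfg')
print(cfg.get('data', 'folder'))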
End of explanation
hp.sync_tmpos()
start = pd.Timestamp('now') - pd.Timedelta(weeks=1)
df_water = hp.get_data(sensortype='water', head=start, )
df_water.info()
Explanation: If this is the first time you run this demo, no cached data will be found, and you get an empty graph.
Let's store some results in this cache. We start from the water consumption of last week.
End of explanation
daily_min = analysis.DailyAgg(df_water, agg='min').result
daily_min.info()
daily_min
cache_water.update(daily_min)
Explanation: We use the DailyAgg class from the analysis module (with agg='min') to obtain a dataframe with daily minima for each sensor.
End of explanation
sensors = hp.get_sensors(sensortype='water') # sensor objects
charts.plot(cache_water.get(sensors=sensors, start=start, end=None), show='inline', stock=True)
Explanation: Now we can get the daily water minima from the cache directly. Pass a start or end date to limit the returned dataframe.
End of explanation
import pandas as pd
from opengrid.library import misc
from opengrid.library import houseprint
from opengrid.library import caching
from opengrid.library import analysis
import charts
hp = houseprint.Houseprint()
#hp.sync_tmpos()
sensors = hp.get_sensors(sensortype='water')
caching.cache_results(hp=hp, sensors=sensors, resultname='water_daily_min', AnalysisClass=analysis.DailyAgg, agg='min')
cache = caching.Cache('water_daily_min')
daily_min = cache.get(sensors = sensors, start = '20151201')
charts.plot(daily_min, stock=True, show='inline')
Explanation: A high-level cache function
The caching of daily results is very similar for all kinds of results. Therefore, a high-level function is defined that can be parametrised to cache a lot of different things.
End of explanation |
9,442 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Instalation
Source
Step1: Import
Step2: Create samples
Step3: Visualize | Python Code:
! pip install numpy
! pip install scipy -U
! pip install -U scikit-learn
Explanation: Instalation
Source: ...
Scikit-learn requires:
Python (>= 2.6 or >= 3.3),
NumPy (>= 1.6.1),
SciPy (>= 0.9).
If you already have a working installation of numpy and scipy, the easiest way to install scikit-learn is using pip
End of explanation
from time import time
import numpy as np
import matplotlib.pyplot as plt
from sklearn import metrics
from sklearn.cluster import KMeans
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.preprocessing import scale
Explanation: Import
End of explanation
np.random.seed(42)
digits = load_digits()
data = scale(digits.data)
n_samples, n_features = data.shape
n_digits = len(np.unique(digits.target))
labels = digits.target
sample_size = 300
print("n_digits: %d, \t n_samples %d, \t n_features %d"
% (n_digits, n_samples, n_features))
print(79 * '_')
print('% 9s' % 'init'
' time inertia homo compl v-meas ARI AMI silhouette')
def bench_k_means(estimator, name, data):
t0 = time()
estimator.fit(data)
print('% 9s %.2fs %i %.3f %.3f %.3f %.3f %.3f %.3f'
% (name, (time() - t0), estimator.inertia_,
metrics.homogeneity_score(labels, estimator.labels_),
metrics.completeness_score(labels, estimator.labels_),
metrics.v_measure_score(labels, estimator.labels_),
metrics.adjusted_rand_score(labels, estimator.labels_),
metrics.adjusted_mutual_info_score(labels, estimator.labels_),
metrics.silhouette_score(data, estimator.labels_,
metric='euclidean',
sample_size=sample_size)))
bench_k_means(KMeans(init='k-means++', n_clusters=n_digits, n_init=10),
name="k-means++", data=data)
bench_k_means(KMeans(init='random', n_clusters=n_digits, n_init=10),
name="random", data=data)
# in this case the seeding of the centers is deterministic, hence we run the
# kmeans algorithm only once with n_init=1
pca = PCA(n_components=n_digits).fit(data)
bench_k_means(KMeans(init=pca.components_,
n_clusters=n_digits, n_init=1),
name="PCA-based",
data=data)
print(79 * '_')
Explanation: Create samples
End of explanation
reduced_data = PCA(n_components=2).fit_transform(data)
kmeans = KMeans(init='k-means++', n_clusters=n_digits, n_init=10)
kmeans.fit(reduced_data)
# Step size of the mesh. Decrease to increase the quality of the VQ.
h = .02 # point in the mesh [x_min, x_max]x[y_min, y_max].
# Plot the decision boundary. For that, we will assign a color to each
x_min, x_max = reduced_data[:, 0].min() - 1, reduced_data[:, 0].max() + 1
y_min, y_max = reduced_data[:, 1].min() - 1, reduced_data[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
# Obtain labels for each point in mesh. Use last trained model.
Z = kmeans.predict(np.c_[xx.ravel(), yy.ravel()])
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.figure(1)
plt.clf()
plt.imshow(Z, interpolation='nearest',
extent=(xx.min(), xx.max(), yy.min(), yy.max()),
cmap=plt.cm.Paired,
aspect='auto', origin='lower')
plt.plot(reduced_data[:, 0], reduced_data[:, 1], 'k.', markersize=2)
# Plot the centroids as a white X
centroids = kmeans.cluster_centers_
plt.scatter(centroids[:, 0], centroids[:, 1],
marker='x', s=169, linewidths=3,
color='w', zorder=10)
plt.title('K-means clustering on the digits dataset (PCA-reduced data)\n'
'Centroids are marked with white cross')
plt.xlim(x_min, x_max)
plt.ylim(y_min, y_max)
plt.xticks(())
plt.yticks(())
plt.show()
Explanation: Visualize
End of explanation |
9,443 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Training
training3.py 파일에 아래에 예제들에서 설명되는 함수들을 정의하라.
예제 1
인자로 x 라디안(호도, radian)을 입력받아 각도(degree)로 계산하여 되돌려주는 함수 degree(x)를 정의하라.
`degree(x) = (x * 360) / (2 * pi)`
여기서 pi는 원주율을 나타내며, 라디안(호오) 설명은 아래 사이트 참조.
https
Step1: 예제 2
리스트 자료형 xs를 입력받아 리스트 내의 값들의 최소값 xmin과 최대값 xmax 계산하여 순서쌍 (xmin, xmax) 형태로 되돌려주는 함수 min_max(xs)를 정의하라.
활용 예
Step2: min과 max 함수는 모든 시퀀스 자료형에 활용할 수 있는 함수들이다.
Step3: 파이썬에서 다루는 모든 값과 문자들을 비교할 수 있다. 많은 예제들을 테스하면서 순서에 대한 감을 익힐 필요가 있다.
Step4: 예제 3
리스트 자료형 xs를 입력받아 리스트 내의 값들의 기하평균을 되돌려주는 함수 geometric_mean(xs)를 정의하라.
기하평균에 대한 설명은 아래 사이트 참조할 것.
https
Step5: 연습문제
아래 연습문제들에서 사용되는 함수들을 lab3.py 파일로 저장하라.
연습문제 1
다음 조건을 만족시키는 함수 swing_time(L) 함수를 정의하라.
길이가 L인 진자(pendulum)가 한 번 왔다갔다 하는 데에 걸리는 시간(주기, 초단위)을 계산하여 되돌려 준다.
진자와 주기 관련해서 아래 사이트 참조.
https
Step6: 연습문제 2
음수가 아닌 정수 n을 입력 받아 아래 형태의 리스트를 되돌려주는 range_squared(n) 함수를 정의하라.
[0, 1, 4, 9, 16, 25, ..., (n-1)** 2]
n=0인 경우에는 비어있는 리스트를 리턴한다.
활용 예
Step7: 연습문제 3
시퀀스 자료형 seq가 주어졌을 때 element 라는 값이 seq에 몇 번 나타나는지를 알려주는 함수 count(element, seq)를 정의하라.
활용 예 | Python Code:
import math # math 모듈을 임포트해야 pi 값을 사용할 수 있다.
def degree(x):
return (x *360.0) / (2 * math.pi)
degree(math.pi)
Explanation: Training
training3.py 파일에 아래에 예제들에서 설명되는 함수들을 정의하라.
예제 1
인자로 x 라디안(호도, radian)을 입력받아 각도(degree)로 계산하여 되돌려주는 함수 degree(x)를 정의하라.
`degree(x) = (x * 360) / (2 * pi)`
여기서 pi는 원주율을 나타내며, 라디안(호도) 설명은 아래 사이트 참조.
https://namu.wiki/w/%EB%9D%BC%EB%94%94%EC%95%88
활용 예:
In [ ]: degree(math.pi)
Out[ ]: 180.0
End of explanation
def min_max(xs):
return (min(xs), max(xs))
# 튜플을 이용하여 최소값과 최대값을 쌍으로 묶어 리턴하였다.
# 따라서 리턴값을 쪼개어 사용할 수도 있다.
a, b = min_max([0, 1, 2, 10, -5, 3])
a
Explanation: 예제 2
리스트 자료형 xs를 입력받아 리스트 내의 값들의 최소값 xmin과 최대값 xmax 계산하여 순서쌍 (xmin, xmax) 형태로 되돌려주는 함수 min_max(xs)를 정의하라.
활용 예:
In [ ]: min_max([0, 1, 2, 10, -5, 3])
Out[ ]: (-5, 10)
End of explanation
min((1, 20))
Explanation: min과 max 함수는 모든 시퀀스 자료형에 활용할 수 있는 함수들이다.
End of explanation
max("abcABC + $")
min("abcABC + $")
max([1, 1.0, [1], (1.0), [[1]]])
min([1, 1.0, [1], (1.0), [[1]]])
Explanation: 파이썬에서 다루는 모든 값과 문자들을 비교할 수 있다. 많은 예제들을 테스하면서 순서에 대한 감을 익힐 필요가 있다.
End of explanation
def geometric_mean(xs):
g = 1.0
for m in xs:
g = g * m
return g ** (1.0/len(xs))
geometric_mean([1,2])
Explanation: 예제 3
리스트 자료형 xs를 입력받아 리스트 내의 값들의 기하평균을 되돌려주는 함수 geometric_mean(xs)를 정의하라.
기하평균에 대한 설명은 아래 사이트 참조할 것.
https://ko.wikipedia.org/wiki/%EA%B8%B0%ED%95%98_%ED%8F%89%EA%B7%A0
활용 예:
In [ ]: geometric_mean([1, 2])
Out[ ]: 1.4142135623730951
End of explanation
g = 9.81
def swing_time(L):
return 2 * math.pi * math.sqrt(L / g)
swing_time(1)
Explanation: 연습문제
아래 연습문제들에서 사용되는 함수들을 lab3.py 파일로 저장하라.
연습문제 1
다음 조건을 만족시키는 함수 swing_time(L) 함수를 정의하라.
길이가 L인 진자(pendulum)가 한 번 왔다갔다 하는 데에 걸리는 시간(주기, 초단위)을 계산하여 되돌려 준다.
진자와 주기 관련해서 아래 사이트 참조.
https://ko.wikipedia.org/wiki/%EC%A7%84%EC%9E%90
활용 예:
In [ ]: swing_time(1)
Out[ ]: 2.0060666807106475
End of explanation
def range_squared(n):
L = []
for index in range(n):
L.append(index ** 2)
return L
range_squared(3)
Explanation: 연습문제 2
음수가 아닌 정수 n을 입력 받아 아래 형태의 리스트를 되돌려주는 range_squared(n) 함수를 정의하라.
[0, 1, 4, 9, 16, 25, ..., (n-1)** 2]
n=0인 경우에는 비어있는 리스트를 리턴한다.
활용 예:
In [ ]: range_squared(3)
Out[ ]: [0, 1, 4]
End of explanation
def count(element, seq):
return seq.count(element)
count(2, range(5))
Explanation: 연습문제 3
시퀀스 자료형 seq가 주어졌을 때 element 라는 값이 seq에 몇 번 나타나는지를 알려주는 함수 count(element, seq)를 정의하라.
활용 예:
In [ ]: count('dog',['dog', 'cat', 'mouse', 'dog'])
Out[ ]: 2
In [ ]: count(2, range(5))
Out[ ]: 1
End of explanation |
9,444 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Implementation on the nWAM data
Our implementation based on the work of the authors. We use the same module as coded in the JobTraining ipt. Please Note that the current implementation is in Python 2.7 and can not be ported as is to Python 3. We recommend setting a virtual environment (How to do it on Ubuntu) to run this notebook.
As for the supplementary notebook provided with the article, we recommend to run this notebook in a folder containing the files of the ipt package rather than try to install it
Step1: We see that there is a distribution difference between our respondent and non-respondent populations, it can even more so be observed when we use the official separation between food security scores
Step2: Checking the convex hull
We check the convex hull on the food and non-food spending. Even though the mean of the "treatment" group (the respondents to the phone survey) seem close to the control group's convex hull bound, means of treatment and control are also pretty close, meaning that a reweighing on these variables aiming at equating both means would be minimal.
Step4: Defining a bootstrap function
Bootstrap is used in the Matlab code to determine standard error on proportions of Black and White under certain thresholds, we reproduce it here
Step5: Tilting on a few variables
Here we select only very few variables, simply to try if the tilting works on our variables
Step6: Check computation time
Step7: Present the results
As we can see below, even though there seem to be that no optimal tilt was found to make tilter control distribution and respondents distribution exactly coincide, using only a few covariates from our dataset seem to already correct the control distribution to make it closer to the respondents'.
As the results show, there seem to be little difference -except computing time - between using the default regularization parameter or 1/2 as the authors did. This comforts our idea that overlap is good enough in our data
Step9: Working on categorical variables
With the way the att module is coded, there is no way to simply put categorical variables inside it and hope it works.
We work around this hurdle by using dummies on every modalities when accounting for categorical variables.
What about ordered variables ?
Some categorical variables -such as education- can be ordered from lowest to highest. Using the same solution as normal categorical variables should be enough, however we also propose "ordered dummies", that is, several dummies that are equal to 1 if e.g the individual has attained this level or education or higher and 0 otherwise.
We code a function that can be used to "order" those dummies, however we notice that using it tends to yield non-invertible matrices, so we actually don't use it
Step10: Tilting on more variables for W
We want to check both if adding more variables to our implementation would lead to a better tilt and the affirmation from the authors that the AST can be used with a high-dimensional W.
If most of the time the computation runs fine, we can rarely encounter an error because the sample yields a non-invertible matrix. As it happens randomly depending on the draw of the bootstrap and
Step12: Estimating and plotting the densities
We estimate our densities through a gaussian kernel
Step13: We also evaluate the tilting on the distribution on a covariate by the same method | Python Code:
from ols import ols
from logit import logit
from att import att
%pylab inline
import warnings
warnings.filterwarnings('ignore') # Remove pandas warnings
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.nonparametric.kde import KDEUnivariate
import seaborn as sns
from __future__ import division
plt.rcParams["figure.figsize"] = (15,6) # Run twice or it won't work
plt.rcParams["figure.figsize"] = (15,6) # Run twice or it won't work
df = pd.read_csv("nWAN_data.csv")
D = (df.Participant == "Yes")
# Plot an histogram for follow up and not follow up
plt.hist(df.FCS_score[D], bins= np.arange(0,100, 4), alpha=0.5, color='r', label='respondent')
plt.hist(df.FCS_score[~D], bins= np.arange(0,100, 4), alpha=0.5, color= 'b', label = 'non-respondent')
plt.legend(loc='upper right')
plt.show()
plt.hist([df.FCS_score[D], df.FCS_score[~D]], bins= [0, 24, 42, 60, 100], alpha=0.5, color=['r', 'b'],
label=['respondent', 'non-respondent'], align='right')
plt.legend()
plt.ylabel("Population")
plt.xlabel("FCS score")
plt.xticks([24, 42, 60, 100], ["Under 24", "24 to 42", "42 to 60", "over 60"])
plt.savefig("repartition_FCS_score.png")
plt.show()
Explanation: Implementation on the nWAM data
Our implementation based on the work of the authors. We use the same module as coded in the JobTraining ipt. Please Note that the current implementation is in Python 2.7 and can not be ported as is to Python 3. We recommend setting a virtual environment (How to do it on Ubuntu) to run this notebook.
As for the supplementary notebook provided with the article, we recommend to run this notebook in a folder containing the files of the ipt package rather than try to install it
End of explanation
# Select some usable variables
df_ast = df[['Depense_Alimentaire', 'Depense_non_Alimentaire', 'Taille_menage', "age_chef_menage",
'Alphabetisation', 'sexe_chef_menage', 'FCS_score']]
df_ast['constant'] = 1
df_ast['age_chef_menage'] /= 10
df_ast[['Depense_Alimentaire', 'Depense_non_Alimentaire']] /= 1000
df_ast['Alphabetisation'] = (df_ast['Alphabetisation'] == "oui").astype(float)
df_ast['sexe_chef_menage'] = (df_ast['sexe_chef_menage'] == 'Femme').astype(float)
Explanation: We see that there is a distribution difference between our respondent and non-respondent populations, it can even more so be observed when we use the official separation between food security scores:
The first category (under 24) is considered as "insufficient"
The second one (24 to 42) is "borderline"
The last one (over 42) is "acceptable"
We want to check if tilting our background covariate for respondents can make the two distributions coincide.
We separate the food security score into the official categories, and furthermore break the "acceptable" category in two more smaller subcategories - 42 to 60 and over 60- to have more comparison points.
End of explanation
# Shameless copypaste of the convex hull in the author's notebook
from scipy.spatial import ConvexHull
# Extract pre-treatment earnings for NSW treated units and PSID controls
earnings_treatment = np.asarray(df[D][['Depense_Alimentaire','Depense_non_Alimentaire']])
earnings_control = np.asarray(df[~D][['Depense_Alimentaire','Depense_non_Alimentaire']])
# Calculate convex hull of PSID control units' earnings realizations
hull = ConvexHull(earnings_control)
# Create a figure object to plot the convex hull
convexhull_fig = plt.figure(figsize=(6, 6))
# Scatter plot of pre-treatment earnings in 1974 and 1975 for PSID controls
ax = convexhull_fig.add_subplot(1,1,1)
sns.regplot(x="Depense_Alimentaire", y="Depense_non_Alimentaire", data=df[~D], \
fit_reg=False, color='#FDB515')
plt.title('Convex Hull of spendings in control', fontsize=12)
plt.xlabel('Depense_Alimentaire')
plt.ylabel('Depense_non_Alimentaire')
# Plot mean earnings for NSW treated units and PSID controls
plt.plot(np.mean(earnings_control[:,0]), np.mean(earnings_control[:,1]), \
color='#00B0DA', marker='o', markersize=10)
plt.plot(np.mean(earnings_treatment[:,0]), np.mean(earnings_treatment[:,1]), \
color='#EE1F60', marker='s', markersize=10)
# Plot convex hull
for simplex in hull.simplices:
plt.plot(earnings_control[simplex, 0], earnings_control[simplex, 1], 'k-')
# Clean up the plot, add frames, remove gridlines etc.
ax = plt.gca()
ax.patch.set_facecolor('gray') # Color of background
ax.patch.set_alpha(0.15) # Translucency of background
ax.grid(False) # Remove gridlines from plot
# Add frame around plot
for spine in ['left','right','top','bottom']:
ax.spines[spine].set_visible(True)
ax.spines[spine].set_color('k')
ax.spines[spine].set_linewidth(2)
# Add legend to the plot
import matplotlib.lines as mlines
psid_patch = mlines.Line2D([], [], color='#FDB515', marker='o', linestyle='None',\
markersize=5, label='controls')
psid_mean_patch = mlines.Line2D([], [], color='#00B0DA', marker='o', linestyle='None',\
markersize=10, label='control mean')
nsw_mean_patch = mlines.Line2D([], [], color='#EE1F60', marker='s', linestyle='None',\
markersize=10, label='"treatement" mean')
lgd = plt.legend(handles=[psid_patch, psid_mean_patch, nsw_mean_patch], \
loc='upper left', fontsize=12, ncol=2, numpoints = 1)
# Render & save plot
plt.tight_layout()
#plt.savefig(workdir+'Fig_LaLonde_Convex_Hull.png')
Explanation: Checking the convex hull
We check the convex hull on the food and non-food spending. Even though the mean of the "treatment" group (the respondents to the phone survey) seem close to the control group's convex hull bound, means of treatment and control are also pretty close, meaning that a reweighing on these variables aiming at equating both means would be minimal.
End of explanation
def bootstrap_ast(df, variables, D, Y, groups, n_iter=1000, rglrz=1 ):
"""Sample with replacement a sample of the same size as the original data and compute the AST tilt based on this.
It is assumed here that t(W) = r(W), and thus we compute only the tilting for the control sample.
df : dataframe containing the variables
variables: name of the variables to use the AST on
D: array of the treatment and control group
groups: list of bounds separating the different groups we want to make our probabilities on
n_iter: number of iterations
rglrz : regularization parameter for the tilting
"""
size_sample = len(df)
list_probs = []
# Check if the name of the variable Y is given or the array
if type(Y) == str:
Y = df[Y]
h_W = df[variables]
for i in range(n_iter):
sample_idx = np.random.choice(np.arange(size_sample), size_sample,
replace=True)
# Select dataframe by index
h_W_sel = df.loc[sample_idx, variables]
t_W_sel = h_W_sel
# We can directly select the index since this is a numpy array
Y_sel = Y[sample_idx]
D_sel = D[sample_idx]
[gamma_ast, vcov_gamma_ast, pscore_tests, tilts, exitflag] = \
att(D_sel, Y_sel, h_W_sel, t_W_sel, study_tilt = False,
rlgrz = rglrz, silent=True)
if exitflag != 1:
raise ValueError('Exit flag %s' % exitflag)
# Compute the probabilities given the tilts for each selection
sum_weigths = sum(tilts[:, 2]) # Weigths of study sample are 0 there so we can sum
bounds_prob = []
for bound in groups:
# We only have to check the tilt of the control group since we don't tilt the study
prob = sum(tilts[(Y_sel < bound) & (D_sel == 0), 2])/sum_weigths
bounds_prob.append(prob)
list_probs.append(bounds_prob)
return np.array(list_probs)
Explanation: Defining a bootstrap function
Bootstrap is used in the Matlab code to determine standard error on proportions of Black and White under certain thresholds, we reproduce it here
End of explanation
res = bootstrap_ast(df_ast, ["constant", "Taille_menage", "Depense_Alimentaire", "sexe_chef_menage", 'Alphabetisation'],
D, 'FCS_score', [24, 42, 60], n_iter=1000, rglrz=1)
# Try with a lower regularizer
res_half_rglrz = bootstrap_ast(df_ast, ["constant", "Taille_menage", "Depense_Alimentaire", "sexe_chef_menage", 'Alphabetisation'],
D, 'FCS_score', [24, 42, 60], n_iter=1000, rglrz=1/2)
Explanation: Tilting on a few variables
Here we select only very few variables, simply to try if the tilting works on our variables
End of explanation
%%timeit
bootstrap_ast(df_ast, ["constant", "Taille_menage", "Depense_Alimentaire", "sexe_chef_menage", 'Alphabetisation'],
D, 'FCS_score', [24, 42, 60], n_iter=100, rglrz=1)
%%timeit
res_half_rglrz = bootstrap_ast(df_ast, ["constant", "Taille_menage", "Depense_Alimentaire", "sexe_chef_menage", 'Alphabetisation'],
D, 'FCS_score', [24, 42, 60], n_iter=100, rglrz=1/2)
Explanation: Check computation time
End of explanation
bounds = [24, 42, 60]
bounds_treat = []
bounds_control = []
Y = df_ast['FCS_score']
for b in bounds:
# Every weight is assumed to be 1 at the beginning
# Check repartition in treatment group
b_treat = sum(Y[D] < b)/len(Y[D])
bounds_treat.append(b_treat)
# Check repartition in control group
b_control = sum(Y[~D] < b)/len(Y[~D])
bounds_control.append(b_control)
df_res = pd.DataFrame(data={'Respondents' : bounds_treat,
'Non Respondents' : bounds_control,
'Tilted non Respondents': res.mean(axis=0),
'Tilted non Respondents std. err.': res.std(axis=0),
'1/2 regularizer Tilted non Respondents': res_half_rglrz.mean(axis=0),
'1/2 regularizer std. err.': res_half_rglrz.std(axis=0)},
index=['Pr(FCS_score < 24)', 'Pr(FCS_score < 42)', 'Pr(FCS_score < 60)'])
df_res[['Respondents', 'Non Respondents',
'Tilted non Respondents', 'Tilted non Respondents std. err.',
'1/2 regularizer Tilted non Respondents', '1/2 regularizer std. err.']]
print(df_res[['Respondents', 'Non Respondents',
'Tilted non Respondents', 'Tilted non Respondents std. err.']].to_latex())
Explanation: Present the results
As we can see below, even though there seem to be that no optimal tilt was found to make tilter control distribution and respondents distribution exactly coincide, using only a few covariates from our dataset seem to already correct the control distribution to make it closer to the respondents'.
As the results show, there seem to be little difference -except computing time - between using the default regularization parameter or 1/2 as the authors did. This comforts our idea that overlap is good enough in our data
End of explanation
def order_dummify(df, ordered_dummies, prefix=None):
"""Order dummies so that, in a hierarchical-modalities setting,
a modality corresponds to its dummy and every dummy under it is equal to 1.
df: DataFrame which contains the dummies
ordered_dummies: list of the hierarchy on this categorical variable, from lowest to highest
prefix: if there is a prefix before every modality (as can be the case when using pd.get_dummies), automatically add it
"""
df = df.copy() # Make sure we don't modify the previous DataFrame
if prefix :
ordered_dummies = [prefix + '_' + mod for mod in ordered_dummies]
# Put the in reverse order
ordered_dummies.reverse()
# Compare a category and the one directly under it
for high, low in zip(ordered_dummies[:-1], ordered_dummies[1:]):
df[low] = df[high] | df[low]
return df
Explanation: Working on categorical variables
With the way the att module is coded, there is no way to simply put categorical variables inside it and hope it works.
We work around this hurdle by using dummies on every modalities when accounting for categorical variables.
What about ordered variables ?
Some categorical variables -such as education- can be ordered from lowest to highest. Using the same solution as normal categorical variables should be enough, however we also propose "ordered dummies", that is, several dummies that are equal to 1 if e.g the individual has attained this level or education or higher and 0 otherwise.
We code a function that can be used to "order" those dummies, however we notice that using it tends to yield non-invertible matrices, so we actually don't use it
End of explanation
df_ast = df[["Taille_menage", "sexe_chef_menage", "niveau_edu_chef", "age_chef_menage", 'Volume_eaupot_parpersonne',
"q24_Situat_matri_cm", "pourcent_age0_5ans", "Alphabetisation", 'q27a_nbre_personnes_inactif',
"q39_biens_fonctionnlsq392_tlvsr", "nb_enfant_6_59_mois", "pourcents_femme", 'asin', 'equin', 'caprin',
'ovin', 'bovin',"tel_portable", "WallType", "FCS_score", "Taux_promiscuite"]]
df_ast['constant'] = 1
# List of the variables we are going to use in the AST
list_variables = ['constant', "Taille_menage", "q39_biens_fonctionnlsq392_tlvsr", "age_chef_menage",
'q27a_nbre_personnes_inactif', 'asin', 'equin', 'caprin', 'ovin', 'bovin',
"nb_enfant_6_59_mois", "pourcents_femme", "pourcent_age0_5ans", 'Volume_eaupot_parpersonne',
"tel_portable", "Taux_promiscuite"]
df_ast["betail"] = df_ast[['asin', 'equin', 'caprin', 'ovin', 'bovin']].sum(axis=1)
list_variables.append("betail")
# Recode binary and categorical variables
df_ast["sexe_chef_menage"] = (df_ast["sexe_chef_menage"] == "Femme").astype(float)
df_ast['Alphabetisation'] = (df_ast['Alphabetisation'] == "oui").astype(float)
list_variables += ['sexe_chef_menage', 'Alphabetisation']
# Add ordered dummies on education
edu_dummies = pd.get_dummies(df_ast['niveau_edu_chef'], prefix="edu")
edu_order= ["None", "Primary - First cycle", "Primary - Second cycle", "Secondary", "Litterate - Qur'anic", 'Higher']
df_ast = df_ast.merge(edu_dummies, left_index=True, right_index=True)
list_variables += list(edu_dummies)
# Add dummies on marital situation
marital_dummies = pd.get_dummies(df_ast['q24_Situat_matri_cm'], prefix="marital")
df_ast= df_ast.merge(marital_dummies, left_index=True, right_index=True)
list_variables += list(marital_dummies)
# Add dummies on Walltype
wall_dummies = pd.get_dummies(df_ast['WallType'], prefix='wall') # No real order
df_ast = df_ast.merge(wall_dummies, left_index=True, right_index=True)
list_variables += list(wall_dummies)
# Because of the low std err found previously, we compute less iterations for faster computing
res_big = bootstrap_ast(df_ast, list_variables, D, 'FCS_score', [24, 42, 60],
n_iter=100, rglrz=1/2)
df_res_big = pd.DataFrame(data={'Respondents' : bounds_treat,
'Non Respondents' : bounds_control,
'Tilted non Respondents': res_big.mean(axis=0),
'Tilted non Respondents std. err.': res_big.std(axis=0)},
index=['Pr(FCS_score < 24)', 'Pr(FCS_score < 42)', 'Pr(FCS_score < 60)'])
# Order the columns
df_res_big[['Respondents', 'Non Respondents', 'Tilted non Respondents', 'Tilted non Respondents std. err.']]
print(df_res_big[['Respondents', 'Non Respondents',
'Tilted non Respondents', 'Tilted non Respondents std. err.']].to_latex())
Explanation: Tilting on more variables for W
We want to check both if adding more variables to our implementation would lead to a better tilt and the affirmation from the authors that the AST can be used with a high-dimensional W.
If most of the time the computation runs fine, we can rarely encounter an error because the sample yields a non-invertible matrix. As it happens randomly depending on the draw of the bootstrap and
End of explanation
def KDE_evaluate(data, grid, **kwargs):
"""Generate the evaluations of density for a Gaussian kernel."""
gaussian_evaluator = KDEUnivariate(data)
gaussian_evaluator.fit(bw=2, **kwargs)
return gaussian_evaluator.evaluate(grid)
# Estimating the density of the tilted distribution
size_sample = len(df)
list_probs = []
# Check if the name of the variable Y is given or the array
if type(Y) == str:
Y = df[Y]
h_W = df_ast[list_variables]
t_W = h_W
# We can directly select the index since this is a numpy array
[gamma_ast, vcov_gamma_ast, pscore_tests, tilts, exitflag] = \
att(D, Y, h_W, t_W, study_tilt = False,
rlgrz = 1, silent=True)
if exitflag != 1:
raise ValueError('Exit flag %s' % exitflag)
xgrid = np.linspace(0, Y.max(), 1000)
tilted_density_eval = KDE_evaluate(Y[~D],xgrid, weights=tilts[~D, 2], fft=False)
plt.plot(xgrid,KDE_evaluate(Y[D].values.astype(float), xgrid), label="Respondents")
plt.plot(xgrid, KDE_evaluate(Y[~D].values.astype(float), xgrid), label="Non respondents")
plt.plot(xgrid, tilted_density_eval, label="Tilted non respondents")
plt.xlabel("Food security score", fontsize=18)
plt.ylabel("Density", fontsize=16)
plt.legend(fontsize=18)
plt.tight_layout()
plt.savefig("Estimated_FCS_densities.png")
plt.show()
Explanation: Estimating and plotting the densities
We estimate our densities through a gaussian kernel
End of explanation
var = "tel_portable"
grid = np.linspace(0, 16 , 1000 )
tilted_var_eval = KDE_evaluate(df_ast.loc[~D, var], grid, weights=tilts[~D, 2], fft=False)
plt.plot(grid, KDE_evaluate(((df_ast.loc[D, var]).values).astype(float), grid), label= "Respondents")
plt.plot(grid, KDE_evaluate(((df_ast.loc[~D, var]).values).astype(float), grid), label= "Non respondents")
plt.plot(grid, tilted_var_eval, label="Tilted non respondents")
plt.xlabel("Phone usage over one week", fontsize=18)
plt.ylabel("Density", fontsize=16)
plt.legend(fontsize=18)
plt.tight_layout()
plt.savefig("Estimated_phone_use_densities")
plt.show()
Explanation: We also evaluate the tilting on the distribution on a covariate by the same method
End of explanation |
9,445 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Diagrama TS
Vamos elaborar um diagrama TS com o auxílio do pacote gsw [https
Step1: Se você não conseguiu importar a biblioteca acima, precisa instalar o módulo gsw.
Em seguida, importamos a biblioteca numpy que nos permite usar algumas funções matemáticas no python | Python Code:
import gsw
Explanation: Diagrama TS
Vamos elaborar um diagrama TS com o auxílio do pacote gsw [https://pypi.python.org/pypi/gsw/3.0.3], que é uma alternativa em python para a toolbox gsw do MATLAB:
End of explanation
import numpy as np
import matplotlib.pyplot as plt
sal = np.linspace(0, 42, 100)
temp = np.linspace(-2, 40, 100)
s, t = np.meshgrid(sal, temp)
# Abaixo usamos diretamente o resultado da biblioteca gsw:
# Thermodynamic Equation Of Seawater - 2010 (TEOS-10)
sigma = gsw.sigma0(s, t)
# Quantidade de linhas desejada
cnt = np.arange(-7, 35, 10)
fig, ax = plt.subplots(figsize=(5, 5))
ax.plot(sal, temp, 'ro')
# O comando abaixo faz curvas de nível com dados contour(X, Y, Z)
cs = ax.contour(s, t, sigma, colors='blue', levels=cnt)
# Aqui fazemos rótulos para as curvas de nível
ax.clabel(cs, fontsize=9, inline=1, fmt='%2i')
ax.set_xlabel('Salinity [g kg$^{-1}$]')
ax.set_ylabel('Temperature [$^{\circ}$C]')
Explanation: Se você não conseguiu importar a biblioteca acima, precisa instalar o módulo gsw.
Em seguida, importamos a biblioteca numpy que nos permite usar algumas funções matemáticas no python:
End of explanation |
9,446 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Authenticate with the docker registry first
bash
gcloud auth configure-docker
If using TPUs please also authorize Cloud TPU to access your project as described here.
Set up your output bucket
Step1: Build a base image to work with fairing
Step2: Start an AI Platform job | Python Code:
BUCKET = "gs://" # your bucket here
assert re.search(r'gs://.+', BUCKET), 'A GCS bucket is required to store your results.'
Explanation: Authenticate with the docker registry first
bash
gcloud auth configure-docker
If using TPUs please also authorize Cloud TPU to access your project as described here.
Set up your output bucket
End of explanation
!cat Dockerfile
!docker build . -t {base_image}
!docker push {base_image}
Explanation: Build a base image to work with fairing
End of explanation
additional_files = '' # If your code requires additional files, you can specify them here (or include everything in the current folder with glob.glob('./**', recursive=True))
# If your code does not require any dependencies or config changes, you can directly start from an official Tensorflow docker image
#fairing.config.set_builder('docker', registry=DOCKER_REGISTRY, base_image='gcr.io/deeplearning-platform-release/tf-gpu.1-13')
# base image
fairing.config.set_builder('docker', registry=DOCKER_REGISTRY, base_image=base_image)
# AI Platform job hardware config
fairing.config.set_deployer('gcp', job_config={'trainingInput': {'scaleTier': 'CUSTOM', 'masterType': 'standard_p100'}})
# input and output notebooks
fairing.config.set_preprocessor('full_notebook',
notebook_file="05K_MNIST_TF20Keras_Tensorboard_playground.ipynb",
input_files=additional_files,
output_file=os.path.join(BUCKET, 'fairing-output', 'mnist-001.ipynb'))
# GPU settings for single K80, single p100 respectively
# job_config={'trainingInput': {'scaleTier': 'BASIC_GPU'}}
# job_config={'trainingInput': {'scaleTier': 'CUSTOM', 'masterType': 'standard_p100'}}
# These job_config settings for TPUv2
#job_config={'trainingInput': {'scaleTier': 'BASIC_GPU'}}
#job_config={'trainingInput': {'scaleTier': 'CUSTOM', 'masterType': 'n1-standard-8', 'workerType': 'cloud_tpu', 'workerCount': 1,
# 'workerConfig': {'accelerator_config': {'type': 'TPU_V2','count': 8}}}})
# On AI Platform, TPUv3 support is alpha and available to whitelisted customers only
fairing.config.run()
Explanation: Start an AI Platform job
End of explanation |
9,447 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Scalable Kernel Interpolation for Product Kernels (SKIP)
Overview
In this notebook, we'll overview of how to use SKIP, a method that exploits product structure in some kernels to reduce the dependency of SKI on the data dimensionality from exponential to linear.
The most important practical consideration to note in this notebook is the use of gpytorch.settings.max_root_decomposition_size, which we explain the use of right before the training loop cell.
Step1: For this example notebook, we'll be using the elevators UCI dataset used in the paper. Running the next cell downloads a copy of the dataset that has already been scaled and normalized appropriately. For this notebook, we'll simply be splitting the data using the first 80% of the data as training and the last 20% as testing.
Note
Step2: Defining the SKIP GP Model
We now define the GP model. For more details on the use of GP models, see our simpler examples. This model uses a GridInterpolationKernel (SKI) with an RBF base kernel. To use SKIP, we make two changes
Step3: Training the model
The training loop for SKIP has one main new feature we haven't seen before
Step4: Making Predictions
The next cell makes predictions with SKIP. We use the same max_root_decomposition size, and we also demonstrate increasing the max preconditioner size. Increasing the preconditioner size on this dataset is not necessary, but can make a big difference in final test performance, and is often preferable to increasing the number of CG iterations if you can afford the space. | Python Code:
import math
import torch
import gpytorch
from matplotlib import pyplot as plt
# Make plots inline
%matplotlib inline
Explanation: Scalable Kernel Interpolation for Product Kernels (SKIP)
Overview
In this notebook, we'll give an overview of how to use SKIP, a method that exploits product structure in some kernels to reduce the dependency of SKI on the data dimensionality from exponential to linear.
The most important practical consideration to note in this notebook is the use of gpytorch.settings.max_root_decomposition_size, which we explain the use of right before the training loop cell.
End of explanation
import urllib.request
import os
from scipy.io import loadmat
from math import floor
# this is for running the notebook in our testing framework
smoke_test = ('CI' in os.environ)
if not smoke_test and not os.path.isfile('../elevators.mat'):
print('Downloading \'elevators\' UCI dataset...')
urllib.request.urlretrieve('https://drive.google.com/uc?export=download&id=1jhWL3YUHvXIaftia4qeAyDwVxo6j1alk', '../elevators.mat')
if smoke_test: # this is for running the notebook in our testing framework
X, y = torch.randn(1000, 3), torch.randn(1000)
else:
data = torch.Tensor(loadmat('../elevators.mat')['data'])
X = data[:, :-1]
X = X - X.min(0)[0]
X = 2 * (X / X.max(0)[0]) - 1
y = data[:, -1]
train_n = int(floor(0.8 * len(X)))
train_x = X[:train_n, :].contiguous()
train_y = y[:train_n].contiguous()
test_x = X[train_n:, :].contiguous()
test_y = y[train_n:].contiguous()
if torch.cuda.is_available():
train_x, train_y, test_x, test_y = train_x.cuda(), train_y.cuda(), test_x.cuda(), test_y.cuda()
X.size()
Explanation: For this example notebook, we'll be using the elevators UCI dataset used in the paper. Running the next cell downloads a copy of the dataset that has already been scaled and normalized appropriately. For this notebook, we'll simply be splitting the data using the first 80% of the data as training and the last 20% as testing.
Note: Running the next cell will attempt to download a ~400 KB dataset file to the current directory.
End of explanation
from gpytorch.means import ConstantMean
from gpytorch.kernels import ScaleKernel, RBFKernel, ProductStructureKernel, GridInterpolationKernel
from gpytorch.distributions import MultivariateNormal
class GPRegressionModel(gpytorch.models.ExactGP):
def __init__(self, train_x, train_y, likelihood):
super(GPRegressionModel, self).__init__(train_x, train_y, likelihood)
self.mean_module = ConstantMean()
self.base_covar_module = RBFKernel()
self.covar_module = ProductStructureKernel(
ScaleKernel(
GridInterpolationKernel(self.base_covar_module, grid_size=100, num_dims=1)
), num_dims=18
)
def forward(self, x):
mean_x = self.mean_module(x)
covar_x = self.covar_module(x)
return MultivariateNormal(mean_x, covar_x)
likelihood = gpytorch.likelihoods.GaussianLikelihood()
model = GPRegressionModel(train_x, train_y, likelihood)
if torch.cuda.is_available():
model = model.cuda()
likelihood = likelihood.cuda()
Explanation: Defining the SKIP GP Model
We now define the GP model. For more details on the use of GP models, see our simpler examples. This model uses a GridInterpolationKernel (SKI) with an RBF base kernel. To use SKIP, we make two changes:
First, we use only a 1 dimensional GridInterpolationKernel (e.g., by passing num_dims=1). The idea of SKIP is to use a product of 1 dimensional GridInterpolationKernels instead of a single d dimensional one.
Next, we create a ProductStructureKernel that wraps our 1D GridInterpolationKernel with num_dims=18. This specifies that we want to use product structure over 18 dimensions, using the 1D GridInterpolationKernel in each dimension.
Note: If you've explored the rest of the package, you may be wondering what the differences between AdditiveKernel, AdditiveStructureKernel, ProductKernel, and ProductStructureKernel are. The Structure kernels (1) assume that we want to apply a single base kernel over a fully decomposed dataset (e.g., every dimension is additive or has product structure), and (2) are significantly more efficient as a result, because they can exploit batch parallel operations instead of using for loops.
End of explanation
training_iterations = 2 if smoke_test else 50
# Find optimal model hyperparameters
model.train()
likelihood.train()
# Use the adam optimizer
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
# "Loss" for GPs - the marginal log likelihood
mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model)
def train():
for i in range(training_iterations):
# Zero backprop gradients
optimizer.zero_grad()
with gpytorch.settings.use_toeplitz(False), gpytorch.settings.max_root_decomposition_size(30):
# Get output from model
output = model(train_x)
# Calc loss and backprop derivatives
loss = -mll(output, train_y)
loss.backward()
print('Iter %d/%d - Loss: %.3f' % (i + 1, training_iterations, loss.item()))
optimizer.step()
torch.cuda.empty_cache()
# See dkl_mnist.ipynb for explanation of this flag
with gpytorch.settings.use_toeplitz(True):
%time train()
Explanation: Training the model
The training loop for SKIP has one main new feature we haven't seen before: we specify the max_root_decomposition_size. This controls how many iterations of Lanczos we want to use for SKIP, and trades off with time and--more importantly--space. Realistically, the goal should be to set this as high as possible without running out of memory.
In some sense, this parameter is the main trade-off of SKIP. Whereas many inducing point methods care more about the number of inducing points, because SKIP approximates one dimensional kernels, it is able to do so very well with relatively few inducing points. The main source of approximation really comes from these Lanczos decompositions we perform.
End of explanation
model.eval()
likelihood.eval()
with gpytorch.settings.max_preconditioner_size(10), torch.no_grad():
with gpytorch.settings.use_toeplitz(False), gpytorch.settings.max_root_decomposition_size(30), gpytorch.settings.fast_pred_var():
preds = model(test_x)
print('Test MAE: {}'.format(torch.mean(torch.abs(preds.mean - test_y))))
Explanation: Making Predictions
The next cell makes predictions with SKIP. We use the same max_root_decomposition size, and we also demonstrate increasing the max preconditioner size. Increasing the preconditioner size on this dataset is not necessary, but can make a big difference in final test performance, and is often preferable to increasing the number of CG iterations if you can afford the space.
End of explanation |
9,448 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2019 The TensorFlow Authors.
Step1: Text generation with an RNN
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: Download the Shakespeare dataset
Change the following line to run this code on your own data.
Step3: Read the data
First, look in the text
Step4: Process the text
Vectorize the text
Before training, you need to convert the strings to a numerical representation.
The tf.keras.layers.StringLookup layer can convert each character into a numeric ID. It just needs the text to be split into tokens first.
Step5: Now create the tf.keras.layers.StringLookup layer
Step6: It converts from tokens to character IDs
Step7: Since the goal of this tutorial is to generate text, it will also be important to invert this representation and recover human-readable strings from it. For this you can use tf.keras.layers.StringLookup(..., invert=True).
Note
Step8: This layer recovers the characters from the vectors of IDs, and returns them as a tf.RaggedTensor of characters
Step9: You can tf.strings.reduce_join to join the characters back into strings.
Step10: The prediction task
Given a character, or a sequence of characters, what is the most probable next character? This is the task you're training the model to perform. The input to the model will be a sequence of characters, and you train the model to predict the output—the following character at each time step.
Since RNNs maintain an internal state that depends on the previously seen elements, given all the characters computed until this moment, what is the next character?
Create training examples and targets
Next divide the text into example sequences. Each input sequence will contain seq_length characters from the text.
For each input sequence, the corresponding targets contain the same length of text, except shifted one character to the right.
So break the text into chunks of seq_length+1. For example, say seq_length is 4 and our text is "Hello". The input sequence would be "Hell", and the target sequence "ello".
To do this first use the tf.data.Dataset.from_tensor_slices function to convert the text vector into a stream of character indices.
Step11: The batch method lets you easily convert these individual characters to sequences of the desired size.
Step12: It's easier to see what this is doing if you join the tokens back into strings
Step13: For training you'll need a dataset of (input, label) pairs. Where input and
label are sequences. At each time step the input is the current character and the label is the next character.
Here's a function that takes a sequence as input, duplicates, and shifts it to align the input and label for each timestep
Step14: Create training batches
You used tf.data to split the text into manageable sequences. But before feeding this data into the model, you need to shuffle the data and pack it into batches.
Step15: Build The Model
This section defines the model as a keras.Model subclass (For details see Making new Layers and Models via subclassing).
This model has three layers
Step16: For each character the model looks up the embedding, runs the GRU one timestep with the embedding as input, and applies the dense layer to generate logits predicting the log-likelihood of the next character
Step17: In the above example the sequence length of the input is 100 but the model can be run on inputs of any length
Step18: To get actual predictions from the model you need to sample from the output distribution, to get actual character indices. This distribution is defined by the logits over the character vocabulary.
Note
Step19: This gives us, at each timestep, a prediction of the next character index
Step20: Decode these to see the text predicted by this untrained model
Step21: Train the model
At this point the problem can be treated as a standard classification problem. Given the previous RNN state, and the input this time step, predict the class of the next character.
Attach an optimizer, and a loss function
The standard tf.keras.losses.sparse_categorical_crossentropy loss function works in this case because it is applied across the last dimension of the predictions.
Because your model returns logits, you need to set the from_logits flag.
Step22: A newly initialized model shouldn't be too sure of itself, the output logits should all have similar magnitudes. To confirm this you can check that the exponential of the mean loss is approximately equal to the vocabulary size. A much higher loss means the model is sure of its wrong answers, and is badly initialized
Step23: Configure the training procedure using the tf.keras.Model.compile method. Use tf.keras.optimizers.Adam with default arguments and the loss function.
Step24: Configure checkpoints
Use a tf.keras.callbacks.ModelCheckpoint to ensure that checkpoints are saved during training
Step25: Execute the training
To keep training time reasonable, use 10 epochs to train the model. In Colab, set the runtime to GPU for faster training.
Step26: Generate text
The simplest way to generate text with this model is to run it in a loop, and keep track of the model's internal state as you execute it.
Each time you call the model you pass in some text and an internal state. The model returns a prediction for the next character and its new state. Pass the prediction and state back in to continue generating text.
The following makes a single step prediction
Step27: Run it in a loop to generate some text. Looking at the generated text, you'll see the model knows when to capitalize, make paragraphs and imitates a Shakespeare-like writing vocabulary. With the small number of training epochs, it has not yet learned to form coherent sentences.
Step28: The easiest thing you can do to improve the results is to train it for longer (try EPOCHS = 30).
You can also experiment with a different start string, try adding another RNN layer to improve the model's accuracy, or adjust the temperature parameter to generate more or less random predictions.
If you want the model to generate text faster the easiest thing you can do is batch the text generation. In the example below the model generates 5 outputs in about the same time it took to generate 1 above.
Step29: Export the generator
This single-step model can easily be saved and restored, allowing you to use it anywhere a tf.saved_model is accepted.
Step30: Advanced
Step31: The above implementation of the train_step method follows Keras' train_step conventions. This is optional, but it allows you to change the behavior of the train step and still use keras' Model.compile and Model.fit methods.
Step32: Or if you need more control, you can write your own complete custom training loop | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2019 The TensorFlow Authors.
End of explanation
import tensorflow as tf
import numpy as np
import os
import time
Explanation: Text generation with an RNN
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/text/tutorials/text_generation"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/text/blob/master/docs/tutorials/text_generation.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/text/blob/master/docs/tutorials/text_generation.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/text/docs/tutorials/text_generation.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
This tutorial demonstrates how to generate text using a character-based RNN. You will work with a dataset of Shakespeare's writing from Andrej Karpathy's The Unreasonable Effectiveness of Recurrent Neural Networks. Given a sequence of characters from this data ("Shakespear"), train a model to predict the next character in the sequence ("e"). Longer sequences of text can be generated by calling the model repeatedly.
Note: Enable GPU acceleration to execute this notebook faster. In Colab: Runtime > Change runtime type > Hardware accelerator > GPU.
This tutorial includes runnable code implemented using tf.keras and eager execution. The following is the sample output when the model in this tutorial trained for 30 epochs, and started with the prompt "Q":
<pre>
QUEENE:
I had thought thou hadst a Roman; for the oracle,
Thus by All bids the man against the word,
Which are so weak of care, by old care done;
Your children were in your holy love,
And the precipitation through the bleeding throne.
BISHOP OF ELY:
Marry, and will, my lord, to weep in such a one were prettiest;
Yet now I was adopted heir
Of the world's lamentable day,
To watch the next way with his father with his face?
ESCALUS:
The cause why then we are all resolved more sons.
VOLUMNIA:
O, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, it is no sin it should be dead,
And love and pale as any will to that word.
QUEEN ELIZABETH:
But how long have I heard the soul for this world,
And show his hands of life be proved to stand.
PETRUCHIO:
I say he look'd on, if I must be content
To stay him from the fatal of our country's bliss.
His lordship pluck'd from this sentence then for prey,
And then let us twain, being the moon,
were she such a case as fills m
</pre>
While some of the sentences are grammatical, most do not make sense. The model has not learned the meaning of words, but consider:
The model is character-based. When training started, the model did not know how to spell an English word, or that words were even a unit of text.
The structure of the output resembles a play—blocks of text generally begin with a speaker name, in all capital letters similar to the dataset.
As demonstrated below, the model is trained on small batches of text (100 characters each), and is still able to generate a longer sequence of text with coherent structure.
Setup
Import TensorFlow and other libraries
End of explanation
path_to_file = tf.keras.utils.get_file('shakespeare.txt', 'https://storage.googleapis.com/download.tensorflow.org/data/shakespeare.txt')
Explanation: Download the Shakespeare dataset
Change the following line to run this code on your own data.
End of explanation
# Read, then decode for py2 compat.
text = open(path_to_file, 'rb').read().decode(encoding='utf-8')
# length of text is the number of characters in it
print(f'Length of text: {len(text)} characters')
# Take a look at the first 250 characters in text
print(text[:250])
# The unique characters in the file
vocab = sorted(set(text))
print(f'{len(vocab)} unique characters')
Explanation: Read the data
First, look in the text:
End of explanation
example_texts = ['abcdefg', 'xyz']
chars = tf.strings.unicode_split(example_texts, input_encoding='UTF-8')
chars
Explanation: Process the text
Vectorize the text
Before training, you need to convert the strings to a numerical representation.
The tf.keras.layers.StringLookup layer can convert each character into a numeric ID. It just needs the text to be split into tokens first.
End of explanation
ids_from_chars = tf.keras.layers.StringLookup(
vocabulary=list(vocab), mask_token=None)
Explanation: Now create the tf.keras.layers.StringLookup layer:
End of explanation
ids = ids_from_chars(chars)
ids
Explanation: It converts from tokens to character IDs:
End of explanation
chars_from_ids = tf.keras.layers.StringLookup(
vocabulary=ids_from_chars.get_vocabulary(), invert=True, mask_token=None)
Explanation: Since the goal of this tutorial is to generate text, it will also be important to invert this representation and recover human-readable strings from it. For this you can use tf.keras.layers.StringLookup(..., invert=True).
Note: Here instead of passing the original vocabulary generated with sorted(set(text)) use the get_vocabulary() method of the tf.keras.layers.StringLookup layer so that the [UNK] tokens is set the same way.
End of explanation
chars = chars_from_ids(ids)
chars
Explanation: This layer recovers the characters from the vectors of IDs, and returns them as a tf.RaggedTensor of characters:
End of explanation
tf.strings.reduce_join(chars, axis=-1).numpy()
def text_from_ids(ids):
return tf.strings.reduce_join(chars_from_ids(ids), axis=-1)
Explanation: You can tf.strings.reduce_join to join the characters back into strings.
End of explanation
all_ids = ids_from_chars(tf.strings.unicode_split(text, 'UTF-8'))
all_ids
ids_dataset = tf.data.Dataset.from_tensor_slices(all_ids)
for ids in ids_dataset.take(10):
print(chars_from_ids(ids).numpy().decode('utf-8'))
seq_length = 100
Explanation: The prediction task
Given a character, or a sequence of characters, what is the most probable next character? This is the task you're training the model to perform. The input to the model will be a sequence of characters, and you train the model to predict the output—the following character at each time step.
Since RNNs maintain an internal state that depends on the previously seen elements, given all the characters computed until this moment, what is the next character?
Create training examples and targets
Next divide the text into example sequences. Each input sequence will contain seq_length characters from the text.
For each input sequence, the corresponding targets contain the same length of text, except shifted one character to the right.
So break the text into chunks of seq_length+1. For example, say seq_length is 4 and our text is "Hello". The input sequence would be "Hell", and the target sequence "ello".
To do this first use the tf.data.Dataset.from_tensor_slices function to convert the text vector into a stream of character indices.
End of explanation
sequences = ids_dataset.batch(seq_length+1, drop_remainder=True)
for seq in sequences.take(1):
print(chars_from_ids(seq))
Explanation: The batch method lets you easily convert these individual characters to sequences of the desired size.
End of explanation
for seq in sequences.take(5):
print(text_from_ids(seq).numpy())
Explanation: It's easier to see what this is doing if you join the tokens back into strings:
End of explanation
def split_input_target(sequence):
input_text = sequence[:-1]
target_text = sequence[1:]
return input_text, target_text
split_input_target(list("Tensorflow"))
dataset = sequences.map(split_input_target)
for input_example, target_example in dataset.take(1):
print("Input :", text_from_ids(input_example).numpy())
print("Target:", text_from_ids(target_example).numpy())
Explanation: For training you'll need a dataset of (input, label) pairs. Where input and
label are sequences. At each time step the input is the current character and the label is the next character.
Here's a function that takes a sequence as input, duplicates, and shifts it to align the input and label for each timestep:
End of explanation
# Batch size
BATCH_SIZE = 64
# Buffer size to shuffle the dataset
# (TF data is designed to work with possibly infinite sequences,
# so it doesn't attempt to shuffle the entire sequence in memory. Instead,
# it maintains a buffer in which it shuffles elements).
BUFFER_SIZE = 10000
dataset = (
dataset
.shuffle(BUFFER_SIZE)
.batch(BATCH_SIZE, drop_remainder=True)
.prefetch(tf.data.experimental.AUTOTUNE))
dataset
Explanation: Create training batches
You used tf.data to split the text into manageable sequences. But before feeding this data into the model, you need to shuffle the data and pack it into batches.
End of explanation
# Length of the vocabulary in StringLookup Layer
vocab_size = len(ids_from_chars.get_vocabulary())
# The embedding dimension
embedding_dim = 256
# Number of RNN units
rnn_units = 1024
class MyModel(tf.keras.Model):
def __init__(self, vocab_size, embedding_dim, rnn_units):
super().__init__(self)
self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)
self.gru = tf.keras.layers.GRU(rnn_units,
return_sequences=True,
return_state=True)
self.dense = tf.keras.layers.Dense(vocab_size)
def call(self, inputs, states=None, return_state=False, training=False):
x = inputs
x = self.embedding(x, training=training)
if states is None:
states = self.gru.get_initial_state(x)
x, states = self.gru(x, initial_state=states, training=training)
x = self.dense(x, training=training)
if return_state:
return x, states
else:
return x
model = MyModel(
vocab_size=vocab_size,
embedding_dim=embedding_dim,
rnn_units=rnn_units)
Explanation: Build The Model
This section defines the model as a keras.Model subclass (For details see Making new Layers and Models via subclassing).
This model has three layers:
tf.keras.layers.Embedding: The input layer. A trainable lookup table that will map each character-ID to a vector with embedding_dim dimensions;
tf.keras.layers.GRU: A type of RNN with size units=rnn_units (You can also use an LSTM layer here.)
tf.keras.layers.Dense: The output layer, with vocab_size outputs. It outputs one logit for each character in the vocabulary. These are the log-likelihood of each character according to the model.
End of explanation
for input_example_batch, target_example_batch in dataset.take(1):
example_batch_predictions = model(input_example_batch)
print(example_batch_predictions.shape, "# (batch_size, sequence_length, vocab_size)")
Explanation: For each character the model looks up the embedding, runs the GRU one timestep with the embedding as input, and applies the dense layer to generate logits predicting the log-likelihood of the next character:
Note: For training you could use a keras.Sequential model here. To generate text later you'll need to manage the RNN's internal state. It's simpler to include the state input and output options upfront, than it is to rearrange the model architecture later. For more details see the Keras RNN guide.
Try the model
Now run the model to see that it behaves as expected.
First check the shape of the output:
End of explanation
model.summary()
Explanation: In the above example the sequence length of the input is 100 but the model can be run on inputs of any length:
End of explanation
sampled_indices = tf.random.categorical(example_batch_predictions[0], num_samples=1)
sampled_indices = tf.squeeze(sampled_indices, axis=-1).numpy()
Explanation: To get actual predictions from the model you need to sample from the output distribution, to get actual character indices. This distribution is defined by the logits over the character vocabulary.
Note: It is important to sample from this distribution as taking the argmax of the distribution can easily get the model stuck in a loop.
Try it for the first example in the batch:
End of explanation
sampled_indices
Explanation: This gives us, at each timestep, a prediction of the next character index:
End of explanation
print("Input:\n", text_from_ids(input_example_batch[0]).numpy())
print()
print("Next Char Predictions:\n", text_from_ids(sampled_indices).numpy())
Explanation: Decode these to see the text predicted by this untrained model:
End of explanation
loss = tf.losses.SparseCategoricalCrossentropy(from_logits=True)
example_batch_mean_loss = loss(target_example_batch, example_batch_predictions)
print("Prediction shape: ", example_batch_predictions.shape, " # (batch_size, sequence_length, vocab_size)")
print("Mean loss: ", example_batch_mean_loss)
Explanation: Train the model
At this point the problem can be treated as a standard classification problem. Given the previous RNN state, and the input this time step, predict the class of the next character.
Attach an optimizer, and a loss function
The standard tf.keras.losses.sparse_categorical_crossentropy loss function works in this case because it is applied across the last dimension of the predictions.
Because your model returns logits, you need to set the from_logits flag.
End of explanation
tf.exp(example_batch_mean_loss).numpy()
Explanation: A newly initialized model shouldn't be too sure of itself, the output logits should all have similar magnitudes. To confirm this you can check that the exponential of the mean loss is approximately equal to the vocabulary size. A much higher loss means the model is sure of its wrong answers, and is badly initialized:
End of explanation
model.compile(optimizer='adam', loss=loss)
Explanation: Configure the training procedure using the tf.keras.Model.compile method. Use tf.keras.optimizers.Adam with default arguments and the loss function.
End of explanation
# Directory where the checkpoints will be saved
checkpoint_dir = './training_checkpoints'
# Name of the checkpoint files
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt_{epoch}")
checkpoint_callback = tf.keras.callbacks.ModelCheckpoint(
filepath=checkpoint_prefix,
save_weights_only=True)
Explanation: Configure checkpoints
Use a tf.keras.callbacks.ModelCheckpoint to ensure that checkpoints are saved during training:
End of explanation
EPOCHS = 20
history = model.fit(dataset, epochs=EPOCHS, callbacks=[checkpoint_callback])
Explanation: Execute the training
To keep training time reasonable, use 10 epochs to train the model. In Colab, set the runtime to GPU for faster training.
End of explanation
class OneStep(tf.keras.Model):
def __init__(self, model, chars_from_ids, ids_from_chars, temperature=1.0):
super().__init__()
self.temperature = temperature
self.model = model
self.chars_from_ids = chars_from_ids
self.ids_from_chars = ids_from_chars
# Create a mask to prevent "[UNK]" from being generated.
skip_ids = self.ids_from_chars(['[UNK]'])[:, None]
sparse_mask = tf.SparseTensor(
# Put a -inf at each bad index.
values=[-float('inf')]*len(skip_ids),
indices=skip_ids,
# Match the shape to the vocabulary
dense_shape=[len(ids_from_chars.get_vocabulary())])
self.prediction_mask = tf.sparse.to_dense(sparse_mask)
@tf.function
def generate_one_step(self, inputs, states=None):
# Convert strings to token IDs.
input_chars = tf.strings.unicode_split(inputs, 'UTF-8')
input_ids = self.ids_from_chars(input_chars).to_tensor()
# Run the model.
# predicted_logits.shape is [batch, char, next_char_logits]
predicted_logits, states = self.model(inputs=input_ids, states=states,
return_state=True)
# Only use the last prediction.
predicted_logits = predicted_logits[:, -1, :]
predicted_logits = predicted_logits/self.temperature
# Apply the prediction mask: prevent "[UNK]" from being generated.
predicted_logits = predicted_logits + self.prediction_mask
# Sample the output logits to generate token IDs.
predicted_ids = tf.random.categorical(predicted_logits, num_samples=1)
predicted_ids = tf.squeeze(predicted_ids, axis=-1)
# Convert from token ids to characters
predicted_chars = self.chars_from_ids(predicted_ids)
# Return the characters and model state.
return predicted_chars, states
one_step_model = OneStep(model, chars_from_ids, ids_from_chars)
Explanation: Generate text
The simplest way to generate text with this model is to run it in a loop, and keep track of the model's internal state as you execute it.
Each time you call the model you pass in some text and an internal state. The model returns a prediction for the next character and its new state. Pass the prediction and state back in to continue generating text.
The following makes a single step prediction:
End of explanation
start = time.time()
states = None
next_char = tf.constant(['ROMEO:'])
result = [next_char]
for n in range(1000):
next_char, states = one_step_model.generate_one_step(next_char, states=states)
result.append(next_char)
result = tf.strings.join(result)
end = time.time()
print(result[0].numpy().decode('utf-8'), '\n\n' + '_'*80)
print('\nRun time:', end - start)
Explanation: Run it in a loop to generate some text. Looking at the generated text, you'll see the model knows when to capitalize, make paragraphs and imitates a Shakespeare-like writing vocabulary. With the small number of training epochs, it has not yet learned to form coherent sentences.
End of explanation
start = time.time()
states = None
next_char = tf.constant(['ROMEO:', 'ROMEO:', 'ROMEO:', 'ROMEO:', 'ROMEO:'])
result = [next_char]
for n in range(1000):
next_char, states = one_step_model.generate_one_step(next_char, states=states)
result.append(next_char)
result = tf.strings.join(result)
end = time.time()
print(result, '\n\n' + '_'*80)
print('\nRun time:', end - start)
Explanation: The easiest thing you can do to improve the results is to train it for longer (try EPOCHS = 30).
You can also experiment with a different start string, try adding another RNN layer to improve the model's accuracy, or adjust the temperature parameter to generate more or less random predictions.
If you want the model to generate text faster the easiest thing you can do is batch the text generation. In the example below the model generates 5 outputs in about the same time it took to generate 1 above.
End of explanation
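For example, the temperature knob mentioned above can be explored by building extra OneStep wrappers around the same trained model (a sketch reusing the classes defined earlier; the temperature values are arbitrary):
# Lower temperature -> more conservative, repetitive text; higher temperature -> more random text.
conservative_model = OneStep(model, chars_from_ids, ids_from_chars, temperature=0.5)
adventurous_model = OneStep(model, chars_from_ids, ids_from_chars, temperature=1.5)
states = None
next_char = tf.constant(['ROMEO:'])
result = [next_char]
for _ in range(200):
    next_char, states = conservative_model.generate_one_step(next_char, states=states)
    result.append(next_char)
print(tf.strings.join(result)[0].numpy().decode('utf-8'))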
tf.saved_model.save(one_step_model, 'one_step')
one_step_reloaded = tf.saved_model.load('one_step')
states = None
next_char = tf.constant(['ROMEO:'])
result = [next_char]
for n in range(100):
next_char, states = one_step_reloaded.generate_one_step(next_char, states=states)
result.append(next_char)
print(tf.strings.join(result)[0].numpy().decode("utf-8"))
Explanation: Export the generator
This single-step model can easily be saved and restored, allowing you to use it anywhere a tf.saved_model is accepted.
End of explanation
class CustomTraining(MyModel):
@tf.function
def train_step(self, inputs):
inputs, labels = inputs
with tf.GradientTape() as tape:
predictions = self(inputs, training=True)
loss = self.loss(labels, predictions)
grads = tape.gradient(loss, model.trainable_variables)
self.optimizer.apply_gradients(zip(grads, model.trainable_variables))
return {'loss': loss}
Explanation: Advanced: Customized Training
The above training procedure is simple, but does not give you much control.
It uses teacher-forcing which prevents bad predictions from being fed back to the model, so the model never learns to recover from mistakes.
So now that you've seen how to run the model manually next you'll implement the training loop. This gives a starting point if, for example, you want to implement curriculum learning to help stabilize the model's open-loop output.
The most important part of a custom training loop is the train step function.
Use tf.GradientTape to track the gradients. You can learn more about this approach by reading the eager execution guide.
The basic procedure is:
Execute the model and calculate the loss under a tf.GradientTape.
Calculate the updates and apply them to the model using the optimizer.
End of explanation
model = CustomTraining(
vocab_size=len(ids_from_chars.get_vocabulary()),
embedding_dim=embedding_dim,
rnn_units=rnn_units)
model.compile(optimizer = tf.keras.optimizers.Adam(),
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))
model.fit(dataset, epochs=1)
Explanation: The above implementation of the train_step method follows Keras' train_step conventions. This is optional, but it allows you to change the behavior of the train step and still use keras' Model.compile and Model.fit methods.
End of explanation
EPOCHS = 10
mean = tf.metrics.Mean()
for epoch in range(EPOCHS):
start = time.time()
mean.reset_states()
for (batch_n, (inp, target)) in enumerate(dataset):
logs = model.train_step([inp, target])
mean.update_state(logs['loss'])
if batch_n % 50 == 0:
template = f"Epoch {epoch+1} Batch {batch_n} Loss {logs['loss']:.4f}"
print(template)
# saving (checkpoint) the model every 5 epochs
if (epoch + 1) % 5 == 0:
model.save_weights(checkpoint_prefix.format(epoch=epoch))
print()
print(f'Epoch {epoch+1} Loss: {mean.result().numpy():.4f}')
print(f'Time taken for 1 epoch {time.time() - start:.2f} sec')
print("_"*80)
model.save_weights(checkpoint_prefix.format(epoch=epoch))
Explanation: Or if you need more control, you can write your own complete custom training loop:
End of explanation |
9,449 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Imports
Step1: Model preparation
Variables
Any model exported using the export_inference_graph.py tool can be loaded here simply by changing PATH_TO_CKPT to point to a new .pb file.
By default we use an "SSD with Mobilenet" model here. See the detection model zoo for a list of other models that can be run out-of-the-box with varying speeds and accuracies.
Step2: Download Model
Step3: Load a (frozen) Tensorflow model into memory.
Step4: Loading label map
Label maps map indices to category names, so that when our convolutional network predicts 5, we know that this corresponds to airplane. Here we use internal utility functions, but anything that returns a dictionary mapping integers to appropriate string labels would be fine.
Step5: Helper code
Step6: Detection | Python Code:
import numpy as np
import os
import six.moves.urllib as urllib
import sys
import tarfile
import tensorflow as tf
import zipfile
from collections import defaultdict
from io import StringIO
from matplotlib import pyplot as plt
from PIL import Image
%matplotlib inline
# This is needed since the notebook is stored in the object_detection folder.
sys.path.append("..")
from utils import label_map_util
from utils import visualization_utils as vis_util
Explanation: Imports
End of explanation
# What model to download.
MODEL_NAME = 'ssd_mobilenet_v1_coco_11_06_2017'
MODEL_FILE = MODEL_NAME + '.tar.gz'
DOWNLOAD_BASE = 'http://download.tensorflow.org/models/object_detection/'
# Path to frozen detection graph. This is the actual model that is used for the object detection.
PATH_TO_CKPT = MODEL_NAME + '/frozen_inference_graph.pb'
# List of the strings that is used to add correct label for each box.
PATH_TO_LABELS = os.path.join('data', 'mscoco_label_map.pbtxt')
NUM_CLASSES = 90
Explanation: Model preparation
Variables
Any model exported using the export_inference_graph.py tool can be loaded here simply by changing PATH_TO_CKPT to point to a new .pb file.
By default we use an "SSD with Mobilenet" model here. See the detection model zoo for a list of other models that can be run out-of-the-box with varying speeds and accuracies.
End of explanation
opener = urllib.request.URLopener()
opener.retrieve(DOWNLOAD_BASE + MODEL_FILE, MODEL_FILE)
tar_file = tarfile.open(MODEL_FILE)
for file in tar_file.getmembers():
file_name = os.path.basename(file.name)
if 'frozen_inference_graph.pb' in file_name:
tar_file.extract(file, os.getcwd())
Explanation: Download Model
End of explanation
detection_graph = tf.Graph()
with detection_graph.as_default():
od_graph_def = tf.GraphDef()
with tf.gfile.GFile(PATH_TO_CKPT, 'rb') as fid:
serialized_graph = fid.read()
od_graph_def.ParseFromString(serialized_graph)
tf.import_graph_def(od_graph_def, name='')
Explanation: Load a (frozen) Tensorflow model into memory.
End of explanation
label_map = label_map_util.load_labelmap(PATH_TO_LABELS)
categories = label_map_util.convert_label_map_to_categories(label_map, max_num_classes=NUM_CLASSES, use_display_name=True)
category_index = label_map_util.create_category_index(categories)
Explanation: Loading label map
Label maps map indices to category names, so that when our convolutional network predicts 5, we know that this corresponds to airplane. Here we use internal utility functions, but anything that returns a dictionary mapping integers to appropriate string labels would be fine.
End of explanation
def load_image_into_numpy_array(image):
(im_width, im_height) = image.size
return np.array(image.getdata()).reshape(
(im_height, im_width, 3)).astype(np.uint8)
Explanation: Helper code
End of explanation
# For the sake of simplicity we will use only 2 images:
# image1.jpg
# image2.jpg
# If you want to test the code with your images, just add path to the images to the TEST_IMAGE_PATHS.
PATH_TO_TEST_IMAGES_DIR = 'test_images'
TEST_IMAGE_PATHS = [ os.path.join(PATH_TO_TEST_IMAGES_DIR, 'image{}.jpg'.format(i)) for i in range(1, 3) ]
# Size, in inches, of the output images.
IMAGE_SIZE = (12, 8)
with detection_graph.as_default():
with tf.Session(graph=detection_graph) as sess:
# Definite input and output Tensors for detection_graph
image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')
# Each box represents a part of the image where a particular object was detected.
detection_boxes = detection_graph.get_tensor_by_name('detection_boxes:0')
# Each score represent how level of confidence for each of the objects.
# Score is shown on the result image, together with the class label.
detection_scores = detection_graph.get_tensor_by_name('detection_scores:0')
detection_classes = detection_graph.get_tensor_by_name('detection_classes:0')
num_detections = detection_graph.get_tensor_by_name('num_detections:0')
for image_path in TEST_IMAGE_PATHS:
image = Image.open(image_path)
# the array based representation of the image will be used later in order to prepare the
# result image with boxes and labels on it.
image_np = load_image_into_numpy_array(image)
# Expand dimensions since the model expects images to have shape: [1, None, None, 3]
image_np_expanded = np.expand_dims(image_np, axis=0)
# Actual detection.
(boxes, scores, classes, num) = sess.run(
[detection_boxes, detection_scores, detection_classes, num_detections],
feed_dict={image_tensor: image_np_expanded})
# Visualization of the results of a detection.
vis_util.visualize_boxes_and_labels_on_image_array(
image_np,
np.squeeze(boxes),
np.squeeze(classes).astype(np.int32),
np.squeeze(scores),
category_index,
use_normalized_coordinates=True,
line_thickness=8)
plt.figure(figsize=IMAGE_SIZE)
plt.imshow(image_np)
Explanation: Detection
End of explanation |
9,450 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data mining the hansard
Estimating contribution by party
Step2: Data gathering and parsing
Get data
mine hansard from theyworkforyou scrape for June 2015 onwards using lxml
Step4: Parse the xml of debates
Generate 3 datastructs from xml and mp_data.csv mentioned below
Step5: parties and party_dict
Using mp_data.csv manually downloaded from http
Step6: words_spoken_by_mp
For just the actual parties therefore
Combining Labour + labour co-op
Ignoring None, Multiple, and Speaker "parties"
Get a dict keyed with each MP name and valued with a tuple of their party and a concatenated string of everything they have said
Step8: Tokenize per MP speech
Use NLTK to tokenize each MP's speeches into words. Then stem tally the per MP speech
Step9: Generate the SVG data struct for D3.js visualisation
Step10: party_talk
Also generate a dict for each party valued by the concatenated string of everything their MPs have said
Step11: Analysis
Using the raw data discover the following
Step13: Wordclouds | Python Code:
%matplotlib inline
import sys
#sys.path.append('/home/fin/Documents/datamining_hansard/dh/lib/python3.4/site-packages/')
import nltk
import nltk.tokenize
import itertools
import glob
import lxml
import lxml.html
import requests
import os
import csv
import wordcloud
import matplotlib.pyplot as plt
from scipy.misc import imread
import numpy as np
import wordcloud
Explanation: Data mining the hansard
Estimating contribution by party
End of explanation
page = requests.get("http://www.theyworkforyou.com/pwdata/scrapedxml/debates/")
tree = lxml.html.fromstring(page.text)
debates = tree.xpath('//a/text()')
# Find the filenames of all the daily hansard xmls for this parliament
debates2015 = [x for x in debates if x.startswith('debates2015')]
new_parliament = [x for x in debates2015 if int(x.split('-')[1]) > 4]
def download_file(url, folder):
Download a given url into specified folder
local_filename = url.split('/')[-1]
if not os.path.exists(os.path.join(folder, local_filename)):
r = requests.get(url, stream=True)
with open(os.path.join(folder, local_filename), 'wb') as f:
for chunk in r.iter_content(chunk_size=1024):
if chunk: # filter out keep-alive new chunks
f.write(chunk)
f.flush()
return local_filename
# download xmls
for xml in new_parliament:
download_file("http://www.theyworkforyou.com/pwdata/scrapedxml/debates/"+xml,
"data")
Explanation: Data gathering and parsing
Get data
mine hansard from theyworkforyou scrape for June 2015 onwards using lxml
End of explanation
def get_speaker_and_text(xml):
For a given xml file return a list of tuples
of speeches and their speakers
with open(xml, 'rb') as fh:
parsed_xml = fh.read()
data = []
root = lxml.etree.fromstring(parsed_xml)
speeches = root.xpath('speech')
for speech in speeches:
#if speech.get('nospeaker') is not None:
speech_list = [x.text for x in speech.getchildren() if x.text is not None]
speech_list = " ".join(speech_list)
speaker = speech.get('speakername')
data.append((speaker, speech_list))
return data
named_text = []
for xml in glob.glob(os.path.join("data", "*.xml")):
named_text = named_text + get_speaker_and_text(xml)
named_text[-1]
Explanation: Parse the xml of debates
Generate 3 datastructs from xml and mp_data.csv mentioned below:
named_text (a list of tuples of MPs names and their speeches over time i.e. will appear multiple times if they speak several times)
words_spoken_by_mp (a dict keyed with the MPs name and valued by a tuple representing their party and concatenated text)
party_talk (a dict keyed by the party name and valued by concatenated text of all speeches for all MPs belonging to it)
Additionally create the ancillary dict parties which contains the list of "real parties" and their respective number of MPs and the party_dict containing all MPs respective parties
named_text
End of explanation
#Encoding issue thus the weird kwarg
with open('data/mp_data.csv', encoding = "ISO-8859-1") as fh:
mp_data = [x for x in csv.reader(fh)]
party_dict = {}
for mp in mp_data[1:]:
# forename and surname fields
name = mp[1] + ' ' + mp[2]
party = mp[3]
party_dict.update({name: party})
#Manual clean up of this data
#Add the speakers
party_dict.update({'Mr Speaker': 'Speaker'})
party_dict.update({'Mr Deputy Speaker': 'Speaker'})
party_dict.update({'Mr Speaker-Elect': 'Speaker'})
party_dict.update({'The Chairman': 'Speaker'})
party_dict.update({'The Second Deputy Chairman': 'Speaker'})
party_dict.update({'Madam Deputy Speaker': 'Speaker'})
#add Stephen Philip Rotheram for some reason
party_dict.update({'Stephen Philip Rotheram': 'Labour'})
#add multiple speakers
party_dict.update({'My Lords and Members of the House of Commons':'Multiple'})
party_dict.update({'Members of the House of Commons':'Multiple'})
party_dict.update({'Hon. Members':'Multiple'})
party_dict.update({'Several hon. Members':'Multiple'})
#add none
party_dict.update({None: 'None'})
# parties and MP number
parties = {'UKIP': 1,
'Sinn Féin': 4,
'Conservative': 330,
'Labour': 232,
'Liberal Democrat': 8,
'Independent': 1,
'UUP': 2,
'DUP': 8,
'Scottish National Party': 56,
'Green': 1,
'Social Democratic and Labour Party': 3,
'Plaid Cymru': 3}
Explanation: parties and party_dict
Using mp_data.csv manually downloaded from http://www.theyworkforyou.com/mps/?f=csv to acquire party affiliations for each mp and generate a dict for quick lookup of an MP's party using their name
End of explanation
words_spoken_by_mp = {}
for speaker, text in named_text:
text = text.replace("“", "\"")
party = party_dict[speaker]
# remove cruft "parties"
if party == 'None' or party == 'Multiple' or party == 'Speaker':
continue
# collapse labour co-op and labour down to one party
if party == 'Labour/Co-operative':
party = 'Labour'
if speaker in words_spoken_by_mp.keys():
words_spoken_by_mp[speaker] = (party, words_spoken_by_mp[speaker][1] + ' ' + text)
else:
words_spoken_by_mp.update({speaker: (party, text)})
Explanation: words_spoken_by_mp
For just the actual parties therefore
Combining Labour + labour co-op
Ignoring None, Multiple, and Speaker "parties"
Get a dict keyed with each MP name and valued with a tuple of their party and a concatenated string of everything they have said
End of explanation
from nltk.tokenize import word_tokenize, sent_tokenize, RegexpTokenizer
from nltk.stem.wordnet import WordNetLemmatizer
def text_string_to_vocab(speech_string):
Get unique vocabularly for a given string of transcribed
text.
Specifically, tokenize this to remove all vocabularly (i.e. just find strings
of alphanumerical characters) resulting in a list of tokenized words
Then use the WordNet Lemmatizer for lemmatization
Convert this lemmatized list into a set to remove redundancy
and return the length of the non-redundant lemmatization
lemma = WordNetLemmatizer()
tokenizer = RegexpTokenizer(r'\w+')
lemmatized_set = set([lemma.lemmatize(word) for word in tokenizer.tokenize(speech_string)])
return len(lemmatized_set)
# get MP vocab size in the same format as words_spoken_by_mp but replacing text with len of lemmatized speech
vocab_by_MP = {}
for MP in words_spoken_by_mp.keys():
vocab_by_MP.update({MP : (words_spoken_by_mp[MP][0], text_string_to_vocab(words_spoken_by_mp[MP][1]))})
Explanation: Tokenize per MP speech
Use NLTK to tokenize each MP's speeches into words, then lemmatize the tokens and tally the size of each MP's unique vocabulary.
End of explanation
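A quick sanity check of the helper on a made-up sentence (purely illustrative): "ministers" and "minister" should collapse to a single lemma, so the count comes out smaller than the raw token count.
print(text_string_to_vocab("The ministers spoke and the minister spoke again"))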
d3_js_data = []
for MP in vocab_by_MP.keys():
d3_js_data.append('{{MP: "{0}", id:"{3}", x: .{1}, r: .45, colour: "{2}", vocabulary: "{1}", party: "{2}"}}'.format(MP, vocab_by_MP[MP][1], vocab_by_MP[MP][0], MP.replace('.','').replace(' ','_').replace("'", '')))
d3_data = ",".join(d3_js_data)
d3_data
Explanation: Generate the SVG data struct for D3.js visualisation
End of explanation
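One possible way to hand the generated records to a D3.js page is to wrap them in a JavaScript variable and write them to disk (a sketch; the file and variable names are made up):
with open('mp_vocabulary_data.js', 'w') as fh:
    fh.write('var mpData = [' + d3_data + '];')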
# Initialise party talk dict with each party name
party_talk = {}
for party in parties.keys():
party_talk.update({party:''})
for speaker, party_and_text in words_spoken_by_mp.items():
# will throw error if party doesn't exist in parties list
party, text = party_and_text
if len(party_talk[party]) == 0:
party_talk[party] = text
else:
party_talk[party] = party_talk[party] + ' ' + text
con_sentences = nltk.tokenize.sent_tokenize(party_talk['Conservative'])
con_sentences = ["{0} {1} {2}".format("START_TOKEN", w, "END_TOKEN") for w in con_sentences]
con_sentences = [nltk.word_tokenize(sentence) for sentence in con_sentences]
word_freqs = nltk.FreqDist(itertools.chain(*con_sentences))  # flatten the tokenized sentences into one word stream
vocab = word_freqs.most_common(9999)
idx_2_word = [x[0] for x in vocab]
idx_2_word.append("TOO_INFREQUENT_WORD")
word_2_idx = dict([(w,i) for i,w in enumerate(idx_2_word)])
for i, sentence in enumerate(con_sentences):
con_sentences[i] = [word if word in word_2_idx else "TOO_INFREQUENT_WORD" for word in sentence]
X_train = np.asarray([[word_2_idx[w] for w in sent[:-1]] for sent in con_sentences])
Y_train = np.asarray([[word_2_idx[w] for w in sent[1:]] for sent in con_sentences])
Explanation: party_talk
Also generate a dict for each party valued by the concatenated string of everything their MPs have said
End of explanation
total_speech = []
for speaker, party_text in words_spoken_by_mp.items():
total_speech.append((speaker, party_text[0], len(party_text[1])))
average_totals = []
for party, mp_number in parties.items():
average_totals.append((party, len(party_talk[party])/mp_number))
average_totals = sorted(average_totals, key=lambda x: x[1], reverse=True)
for party, average in average_totals:
print("{0:<40s}: {1:>10,.1f}".format(party, average))
y_pos = np.arange(len(average_totals))
plt.figure(figsize=(10,10))
plot_mp_numbers = [parties[x[1]] for x in average_totals]
plt.bar(y_pos, [x[1] for x in average_totals], align='center', alpha=0.4)
plt.xticks(y_pos, [x[0] for x in average_totals], rotation=90)
plt.ylabel('Average words spoken per MP')
plt.xlabel('Party')
plt.show()
Explanation: Analysis
Using the raw data discover the following:
Average number of words spoken by party
word cloud of spoken words by party
plot of total words spoken for each MP highlighted by party
Then apply NLTK methods and look at vocabulary:
Average vocab of each party
replot word clouds used token analysis
plot of total vocab for each MP highlighted by party
Other ideas
NLTK dispersion plot of words like "austerity"
frequency distribution by party top words by token and their context
word redundancy by party - i.e. unique words which parties repeat the same shit over and over (lexical_diversity)
longest word by party
which MP uses the longest words on average - distribution
collocations
http://www.nltk.org/book/ch01.html#counting-vocabulary
Party-wise summaries
Total speech per MP per Party
End of explanation
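As a rough sketch of one of the ideas listed above (lexical diversity, i.e. the ratio of unique tokens to total tokens per party), assuming party_talk is populated as above:
from nltk.tokenize import RegexpTokenizer

def lexical_diversity(text):
    # higher ratio of unique tokens to total tokens means less repetition
    tokenizer = RegexpTokenizer(r'\w+')
    tokens = [t.lower() for t in tokenizer.tokenize(text)]
    return len(set(tokens)) / float(len(tokens)) if tokens else 0.0

diversity = sorted(((party, lexical_diversity(text)) for party, text in party_talk.items()),
                   key=lambda x: x[1], reverse=True)
for party, score in diversity:
    print("{0:<40s}: {1:>6.3f}".format(party, score))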
def make_wordcloud(text, image_filename=None):
Generate a wordcloud
# remove minor words
wordcloud.STOPWORDS.add("hon")
wordcloud.STOPWORDS.add("Government")
wordcloud.STOPWORDS.add("government")
wordcloud.STOPWORDS.add("Minister")
wordcloud.STOPWORDS.add("minister")
wordcloud.STOPWORDS.add("S")
wordcloud.STOPWORDS.add("s")
wordcloud.STOPWORDS.add("Member")
wordcloud.STOPWORDS.add("Friend")
if image_filename:
mask = imread(image_filename)
wc = wordcloud.WordCloud(background_color="white", max_words=2000, mask=mask,
stopwords=wordcloud.STOPWORDS)
else:
wc = wordcloud.WordCloud(background_color="white", max_words=2000, stopwords=wordcloud.STOPWORDS)
wc.generate(text)
return wc
plt.figure(figsize=(20,10))
plt.subplot(241)
plt.imshow(make_wordcloud(party_talk['Conservative']))
plt.axis("off")
plt.title('Conservative')
plt.subplot(242)
plt.imshow(make_wordcloud(party_talk['Labour']))
plt.axis("off")
plt.title('Labour')
plt.subplot(243)
plt.imshow(make_wordcloud(party_talk['Scottish National Party']))
plt.axis("off")
plt.title('SNP')
plt.subplot(244)
plt.imshow(make_wordcloud(party_talk['Liberal Democrat']))
plt.axis("off")
plt.title('Lib Dem')
plt.subplot(245)
plt.imshow(make_wordcloud(party_talk['DUP']))
plt.axis("off")
plt.title('DUP')
plt.subplot(246)
plt.imshow(make_wordcloud(party_talk['Green']))
plt.axis("off")
plt.title('Green')
plt.subplot(247)
plt.imshow(make_wordcloud(party_talk['UKIP']))
plt.axis("off")
plt.title('UKIP')
plt.subplot(248)
plt.imshow(make_wordcloud(party_talk['Plaid Cymru']))
plt.axis("off")
plt.title('PC')
Explanation: Wordclouds
End of explanation |
9,451 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Ejercicios Graphs, Paths & Components
Ejercicios básicos de Grafos.
Ejercicio - Número de Nodos y Enlaces
(resuelva en código propio y usando la librería NerworkX o iGraph)
Cuente en número de nodos y enalces con los siguientes links (asumiendo que el grafo puede ser dirigido y no dirigido)
Step1: Usando la libreria
Step2: Propio
Step3: Ejercicio - Matriz de Adyacencia
(resuelva en código propio y usando la librería NetworkX (python) o iGraph (R))
Cree la matriz de adyacencia del grafo del ejercicio anterior (para dirigido y no-dirigido)
Usando Librería
Step4: Propia
Step5: Ejercicio - Sparseness
Enron email network - Directed http
Step6: En la matriz de adyacencia de cada uno de las redes elegidas, cuantos ceros hay?
Step7: Social circles from Facebook (anonymized) - Undirected http
Step8: En la matriz de adyacencia de cada uno de las redes elegidas, cuantos ceros hay?
Step9: Webgraph from the Google programming contest, 2002 - Directed http
Step10: En la matriz de adyacencia de cada uno de las redes elegidas, cuantos ceros hay?
Step11: Ejercicio - Redes Bipartitas
Defina una red bipartita y genere ambas proyecciones, explique qué son los nodos y links tanto de la red original como de las proyeccciones
Se define una red donde los nodes E1, E2 y E3 son Estaciones de Bus, y se definen los nodos R101, R250, R161, R131 y R452 como rutas de buses.
Step12: La proyección A representa la comunicación entre Estaciones mediante el flujo de las rutas de buses, La proyección B representa la posible interacción o "encuentros" entre las rutas de buses en función de las estaciones.
Ejercicio - Paths
Cree un grafo de 5 nodos con 5 enlaces. Elija dos nodos cualquiera e imprima
Step13: Ejercicio - Componentes
Baje una red real (http
Step14: Implemente el algorithmo Breadth First para encontrar el número de componentes (revise que el resultado es el mismo que utilizando la librería) | Python Code:
edges = set([(1, 2), (3, 1), (3, 2), (2, 4)])
import networkx as nx
import matplotlib.pyplot as plt
import numpy as np
import scipy as sc
import itertools
import random
Explanation: Exercises - Graphs, Paths & Components
Basic graph exercises.
Exercise - Number of Nodes and Edges
(solve both with your own code and using the NetworkX or iGraph library)
Count the number of nodes and edges given the following links (assuming the graph can be either directed or undirected)
End of explanation
gr = nx.Graph()
for i in range(1,5):
gr.add_node(i)
for i in edges:
gr.add_edge(i[0], i[1])
nx.draw_spectral(gr)
plt.show()
print ('The graph is directed?: ', nx.is_directed(gr))
if nx.is_directed(gr) is True:
print ('Number of edges: ', gr.number_of_edges())
else:
print ('Number of edges: ', gr.number_of_edges()*2)
print ('Number of nodes: ', gr.number_of_nodes())
gr2 = nx.DiGraph()
for i in range(1,5):
gr2.add_node(i)
for i in edges:
gr2.add_edge(i[0], i[1])
nx.draw_spectral(gr2)
plt.show()
print ('The graph is directed?: ', nx.is_directed(gr2))
if nx.is_directed(gr2) is True:
print ('Number of edges: ', gr2.number_of_edges())
else:
print ('Number of edges: ', gr2.number_of_edges()*2)
print ('Number of nodes: ', gr2.number_of_nodes())
Explanation: Using the library
End of explanation
Directed=False
print ('The graph is directed?: ', Directed)
if Directed is True:
print ('Number of edges: ', len(edges))
else:
print ('Number of edges: ', 2*len(edges))
temp = []
for i in edges:
temp.append(i[0])
temp.append(i[1])
temp = np.array(temp)
print ('Number of nodes: ', np.size(np.unique(temp)))
Directed=True
print ('The graph is directed?: ', Directed)
if Directed is True:
print ('Number of edges: ', len(edges))
else:
print ('Number of edges: ', 2*len(edges))
temp = []
for i in edges:
temp.append(i[0])
temp.append(i[1])
temp = np.array(temp)
print ('Number of nodes: ', np.size(np.unique(temp)))
del temp, Directed
Explanation: Own implementation
End of explanation
A = nx.adjacency_matrix(gr)
print ('No Dirigida')
print(A)
A = nx.adjacency_matrix(gr2)
print ('Dirigida')
print(A)
Explanation: Exercise - Adjacency Matrix
(solve both with your own code and using the NetworkX (Python) or iGraph (R) library)
Build the adjacency matrix of the graph from the previous exercise (for the directed and undirected cases)
Using the library
End of explanation
def adjmat(ed, directed):
if directed is True:
temp_d1 = []
temp_d2 = []
for i in ed:
temp_d1.append(i[0])
temp_d2.append(i[1])
B=sc.sparse.csr_matrix((np.ones(len(temp_d1), dtype='int'), (temp_d1, temp_d2)))
else:
temp_d1 = []
temp_d2 = []
for i in ed:
temp_d1.append(i[0])
temp_d1.append(i[1])
temp_d2.append(i[1])
temp_d2.append(i[0])
B=sc.sparse.csr_matrix((np.ones(len(temp_d1), dtype='int'), (temp_d1, temp_d2)))
return B
A2 = adjmat(edges, True)
print ('Dirigida')
print (A2)
A2 = adjmat(edges, False)
print ('No Dirigida')
print (A2)
del A, A2, gr, gr2
Explanation: Own implementation
End of explanation
F = open("Email-Enron.txt",'r')
Net1=nx.read_edgelist(F)
F.close()
n = Net1.number_of_nodes()
posibles = Net1.number_of_nodes()*(Net1.number_of_nodes()-1.0)/2.0
print ('Ratio: ', Net1.number_of_edges()/posibles)
Explanation: Exercise - Sparseness
Enron email network - Directed http://snap.stanford.edu/data/email-Enron.html
Compute the ratio of existing links to the number of possible links.
End of explanation
ANet1 = nx.adjacency_matrix(Net1)
nzeros=Net1.number_of_nodes()*Net1.number_of_nodes()-len(ANet1.data)
print ('La Red tiene: ', nzeros, ' ceros')
del Net1, posibles, ANet1, nzeros
Explanation: How many zeros are there in the adjacency matrix of each of the chosen networks?
End of explanation
F = open("facebook_combined.txt",'r')
Net=nx.read_edgelist(F)
F.close()
n = Net.number_of_nodes()
posibles = Net.number_of_nodes()*(Net.number_of_nodes()-1.0)/2.0
print ('Ratio: ', Net.number_of_edges()/posibles)
Explanation: Social circles from Facebook (anonymized) - Undirected http://snap.stanford.edu/data/egonets-Facebook.html
Compute the ratio of existing links to the number of possible links.
End of explanation
ANet = nx.adjacency_matrix(Net)
nzeros=Net.number_of_nodes()*Net.number_of_nodes()-len(ANet.data)
print ('La Red tiene: ', nzeros, ' ceros')
del Net, n, posibles, ANet, nzeros
Explanation: How many zeros are there in the adjacency matrix of each of the chosen networks?
End of explanation
F = open("web-Google.txt",'r')
Net=nx.read_edgelist(F)
F.close()
n = Net.number_of_nodes()
posibles = Net.number_of_nodes()*(Net.number_of_nodes()-1.0)/2.0
print ('Ratio: ', Net.number_of_edges()/posibles)
Explanation: Webgraph from the Google programming contest, 2002 - Directed http://snap.stanford.edu/data/web-Google.html
Compute the ratio of existing links to the number of possible links.
End of explanation
ANet = nx.adjacency_matrix(Net)
nzeros=Net.number_of_nodes()*Net.number_of_nodes()-len(ANet.data)
print ('La Red tiene: ', nzeros, ' ceros')
del Net, n, posibles, ANet, nzeros
Explanation: How many zeros are there in the adjacency matrix of each of the chosen networks?
End of explanation
B = nx.Graph()
B.add_nodes_from(['E1','E2', 'E3'], bipartite=0)
B.add_nodes_from(['R250', 'R161', 'R131', 'R452','R101'], bipartite=1)
B.add_edges_from([('E1', 'R250'), ('E1', 'R452'), ('E3', 'R250'), ('E3', 'R131'), ('E3', 'R161'), ('E3', 'R452'), ('E2', 'R161'), ('E2', 'R101'),('E1', 'R131')])
B1=nx.algorithms.bipartite.projected_graph(B, ['E1','E2', 'E3'])
B2=nx.algorithms.bipartite.projected_graph(B,['R250', 'R161', 'R131', 'R452'])
value =np.zeros(len(B.nodes()))
i = 0
for node in B.nodes():
if any(node == a for a in B1.nodes()):
value[i] = 0.25
if any(node == a for a in B2.nodes()):
value[i] = 0.75
i += 1
fig, ax = plt.subplots(1, 3, num=1)
plt.sca(ax[1])
ax[1].set_title('Bipartita')
nx.draw(B, with_labels = True, cmap=plt.get_cmap('summer'), node_color=value)
plt.sca(ax[0])
ax[0].set_title('Proyeccion A')
nx.draw(B1, with_labels = True, cmap=plt.get_cmap('summer'), node_color=np.ones(len(B1.nodes()))*0.25)
plt.sca(ax[2])
nx.draw(B2, with_labels = True, cmap=plt.get_cmap('summer'), node_color=0.75*np.ones(len(B2.nodes())))
ax[2].set_title('Proyeccion B')
plt.show()
Explanation: Exercise - Bipartite Networks
Define a bipartite network and generate both projections; explain what the nodes and links are in the original network as well as in the projections.
A network is defined where nodes E1, E2 and E3 are bus stations, and nodes R101, R250, R161, R131 and R452 are bus routes.
End of explanation
Nodes = [1, 2, 3, 4, 5]
nEdges = 5
temp = []
for subset in itertools.combinations(Nodes, 2):
temp.append(subset)
Edges = random.sample(temp, nEdges)
Edges
G = nx.Graph()
G.add_edges_from(Edges)
nx.draw(G, with_labels = True)
plt.show()
Grafo = {
1 : []
, 2 : []
, 3 : []
, 4 : []
, 5 : []
}
for i in Edges:
Grafo[i[0]].append(i[1])
Grafo[i[1]].append(i[0])
def pathGen(Inicio, Fin):
flag=False
actual = Inicio
temp = []
cont = 0
while not flag:
temp.append(actual)
actual = random.sample(Grafo[actual], 1)[0]
if actual == Fin:
flag = True
temp.append(actual)
break
return temp
print "Un posible path entre el nodo 5 y 4 es: ", pathGen(5,3)
print "Un posible path entre el nodo 5 y 4 es: ", pathGen(5,3)
print "Un posible path entre el nodo 5 y 4 es: ", pathGen(5,3)
print "Un posible path entre el nodo 5 y 4 es: ", pathGen(5,3)
print "Un posible path entre el nodo 5 y 4 es: ", pathGen(5,3)
visited = {i : False for i in xrange(1, 6)}
def shortest(a, b, length = 0):
global visited, Grafo
if b == a : return length
minL = float('inf')
for v in Grafo[a]:
if not visited[v]:
visited[v] = True
minL = min(minL, 1 + shortest(v, b))
visited[v] = False
return minL
print 'El camino mas corto entre los nodos 5 y 3 es: ', shortest(5, 3)
temp = []
for subset in itertools.combinations(Nodes, 2):
temp.append(subset)
maxL = 0
for i in temp:
maxL=max(maxL,shortest(i[0], i[1]))
print 'La diametro de la Red es, ', maxL
def avoidpathGen(Inicio, Fin):
flag=False
actual = Inicio
temp = []
past = []
cont = 0
while not flag:
temp.append(actual)
past.append(actual)
temp2 = random.sample(Grafo[actual], 1)[0]
while not len(np.intersect1d(past,temp2)) == 0:
temp2 = random.sample(Grafo[actual], 1)[0]
actual = temp2
if actual == Fin:
flag = True
temp.append(actual)
break
return temp
print 'Un self-avoiding path del nodo 5 a 3 es: ', avoidpathGen(5,3)
Explanation: La proyección A representa la comunicación entre Estaciones mediante el flujo de las rutas de buses, La proyección B representa la posible interacción o "encuentros" entre las rutas de buses en función de las estaciones.
Ejercicio - Paths
Cree un grafo de 5 nodos con 5 enlaces. Elija dos nodos cualquiera e imprima:
5 Paths diferentes entre los nodos
El camino mas corto entre los nodos
El diámetro de la red
Un self-avoiding path
End of explanation
F = open("youtube.txt",'r')
Net1=nx.read_edgelist(F)
F.close()
print 'La red tiene: ',nx.number_connected_components(Net1), ' componentes'
Explanation: Ejercicio - Componentes
Baje una red real (http://snap.stanford.edu/data/index.html) y lea el archivo
Social circles from Facebook (anonymized) - Undirected http://snap.stanford.edu/data/egonets-Facebook.html
End of explanation
Edges = Net1.edges()
len(Edges)
def netgen(nn, ne):
nod = [i for i in range(nn)]
nEdges = ne
temp = []
for subset in itertools.combinations(nod, 2):
temp.append(subset)
edg = random.sample(temp, nEdges)
return edg, nod
G = nx.Graph()
edges, nodes = netgen(10, 7)
G.add_edges_from(edges)
nx.draw(G, with_labels = True)
plt.show()
nx.number_connected_components(G)
def componts(nod, edg):
dgraf = {}
for i in nod:
dgraf[i] = []
for i in edg:
dgraf[i[0]].append(i[1])
dgraf[i[1]].append(i[0])
empty = nod[:]
cont = -1
Labels = {}
for i in nod:
Labels[i] = -1
while (len(empty) is not 0):
cont += 1
temp = random.sample(empty, 1)
if Labels[temp[0]] is -1:
value = cont
else:
value = Labels[temp[0]]
Labels[temp[0]] = value
empty.remove(temp[0])
for i in dgraf[temp[0]]:
Labels[i] = value
if not any_in(dgraf[i], empty):
if i in empty:
empty.remove(i)
print empty
return Labels, cont
Lab, comp = componts(nodes, edges)
for i in range(10):
print i, Lab[i]
print comp
print edges
plt.bar(Lab.keys(), Lab.values(), color='g')
plt.show()
any_in([1,2],[2,3,4,5,6,7])
any_in = lambda a, b: any(i in b for i in a)
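A more conventional breadth-first component count, as a sketch (the function name bfs_components is illustrative); run on the same random graph G it should agree with the library:
from collections import deque

def bfs_components(graph):
    # count connected components of an undirected networkx graph with a BFS from each unseen node
    visited = set()
    components = 0
    for start in graph.nodes():
        if start in visited:
            continue
        components += 1
        visited.add(start)
        queue = deque([start])
        while queue:
            node = queue.popleft()
            for neighbour in graph.neighbors(node):
                if neighbour not in visited:
                    visited.add(neighbour)
                    queue.append(neighbour)
    return components

print('BFS components: %d' % bfs_components(G))
print('networkx components: %d' % nx.number_connected_components(G))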
Explanation: Implement the Breadth First algorithm to find the number of components (check that the result matches the one obtained with the library)
End of explanation |
9,452 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Reinforcement learning based training of modulation scheme without a channel model
This code is provided as supplementary material of the lecture Machine Learning and Optimization in Communications (MLOC).<br>
This code illustrates
* End-to-end-learning of modulation scheme without a channel model using policy learning (policy gradient)
This code is based on the Fayçal Ait Aoudia, Jakob Hoydis, "End-to-End Learning of Communications Systems Without a Channel Model" (https
Step1: Define Transmitter, Channel, Recevier and helper functions
Step2: Helper function to compute Symbol Error Rate (SER)
Step3: Parameter and Training
Here, we define the parameters of the neural network and training, generate the validation set and a helping set to show the decision regions
Step4: Now, carry out the training as such. First initialize the variables and then loop through the training. Here, the epochs are not defined in the classical way, as we do not have a training set per se. We generate new data on the fly and never reuse data.
Step5: Evaluation
Plot decision region and scatter plot of the validation set. Note that the validation set is only used for computing SERs and plotting, there is no feedback towards the training! | Python Code:
import torch
import torch.nn as nn
import torch.optim as optim
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
from ipywidgets import interactive
import ipywidgets as widgets
device = 'cuda' if torch.cuda.is_available() else 'cpu'
print("We are using the following device for learning:",device)
Explanation: Reinforcement learning based training of modulation scheme without a channel model
This code is provided as supplementary material of the lecture Machine Learning and Optimization in Communications (MLOC).<br>
This code illustrates
* End-to-end-learning of modulation scheme without a channel model using policy learning (policy gradient)
This code is based on the Fayçal Ait Aoudia, Jakob Hoydis, "End-to-End Learning of Communications Systems Without a Channel Model" (https://arxiv.org/pdf/1804.02276.pdf).
End of explanation
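As a toy numpy illustration of the policy-gradient surrogate used later in the training loop (per-example loss weighted by the log-density of the Gaussian exploration noise around the transmitted symbols); all names here are illustrative and not part of the original code:
import numpy as np

def surrogate_loss(tx, perturbed, per_example_losses, sigma_p):
    # log-density of the Gaussian policy that produced `perturbed` around sqrt(1 - sigma_p^2) * tx
    mean = np.sqrt(1 - sigma_p ** 2) * tx
    log_prob = -np.sum((perturbed - mean) ** 2, axis=1) / sigma_p ** 2 - np.log(np.pi * sigma_p ** 2)
    # the losses are treated as constants, so in the real training loop gradients only flow through log_prob
    return np.mean(per_example_losses * log_prob)

tx = np.random.randn(4, 2)
perturbed = np.sqrt(1 - 0.02) * tx + np.sqrt(0.02 / 2) * np.random.randn(4, 2)
print(surrogate_loss(tx, perturbed, np.array([0.0, 1.0, 1.0, 0.0]), np.sqrt(0.02)))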
class Transmitter(nn.Module):
def __init__(self):
super(Transmitter, self).__init__()
# Define Transmitter Layer: Linear function, M input neurons (symbols), 2 output neurons (real and imaginary part)
# the matrix defines just a mapping between one-hot vectors and modulation symbols
self.W = nn.Parameter(torch.empty(M,2))
nn.init.xavier_uniform_(self.W)
def normalize(self, x):
# Normalize the power of the transmit symbols to 1
norm_factor = torch.sqrt(torch.mean(torch.sum(torch.square(x), 1)))
return norm_factor
def forward(self, x):
# compute output
norm_factor = self.normalize(self.W)
out = torch.matmul(x, self.W/norm_factor)
return out
def receiver(modulation_symbols, received):
# minimum euclidean distance receiver: returns a 1 for the modulation symbols with the minimum distance
rx_sym_onehot = torch.zeros(len(received), len(modulation_symbols), device=device)
# Calculate the distance between the received symbol and all modulation symbols
# Remark: looping over all received symbols is very slow -> loop over the modulation symbols
rx_dist = torch.zeros(len(received), len(modulation_symbols), device=device)
for i in range(len(modulation_symbols)):
rx_dist[:,i] = torch.sqrt(torch.square(modulation_symbols[i,0]-received[:,0])+torch.square(modulation_symbols[i,1]-received[:,1]))
# Return 1 for the modulation sybmol with the minimum distance
rx_sym_onehot[range(rx_sym_onehot.shape[0]), torch.argmin(rx_dist, dim=1).long()]=1
return rx_sym_onehot
def channel_model(x):
# AWGN-channel (adds Gaussian noise with standard deviatian sigma_n)
received = torch.add(x, sigma_n*torch.randn(len(x),2, device=device))
return received
Explanation: Define Transmitter, Channel, Recevier and helper functions
End of explanation
# helper function to compute the symbol error rate
def SER(predictions, labels):
return (np.sum(np.argmax(predictions, 1) != labels) / predictions.shape[0])
# This loss function is equivalent to the SER
def my_loss_fn(predicitions, labels):
same = torch.zeros(len(predicitions), device=device)
same = torch.where(torch.argmax(predicitions, 1) == torch.argmax(labels, 1), same, torch.FloatTensor([1]).to(device))
return same
Explanation: Helper function to compute Symbol Error Rate (SER)
End of explanation
# number of symbols
M = 16
EbN0 = 7
# noise standard deviation (of the channel)
sigma_n = np.sqrt((1/2/np.log2(M)) * 10**(-EbN0/10))
# validation set. Training examples are generated on the fly
N_valid = 100000
# variance of policy vector
sigma_p = np.sqrt(0.02)
# Generate Validation Data
y_valid = np.random.randint(M,size=N_valid)
y_valid_onehot = np.eye(M)[y_valid]
# meshgrid for plotting
# assume that the worst case constellation is the one where all points lie on a straight line starting at the center
# and then are spreaded equidistantly. In this case, this is the scaling factor of the constellation points and
# we assume that there is an (M+1)th point which defines ext_max
ext_max = 1.8
mgx,mgy = np.meshgrid(np.linspace(-ext_max,ext_max,400), np.linspace(-ext_max,ext_max,400))
meshgrid = np.column_stack((np.reshape(mgx,(-1,1)),np.reshape(mgy,(-1,1))))
Explanation: Parameter and Training
Here, we define the parameters of the neural network and training, generate the validation set and a helping set to show the decision regions
End of explanation
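A small numeric check of the noise-standard-deviation formula above for a few Eb/N0 values in dB (illustrative only, with M = 16 as in this notebook):
import numpy as np
for ebn0_db in (0, 7, 12):
    sigma = np.sqrt((1.0 / 2.0 / np.log2(16)) * 10 ** (-ebn0_db / 10.0))
    print(ebn0_db, sigma)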
# Initialize Transmitter
model_tx = Transmitter()
model_tx.to(device)
# Adam Optimizers for TX
optimizer_tx = optim.Adam(model_tx.parameters(), lr=5e-4)
# Training parameters
num_epochs = 100
batches_per_epoch = 50
# Vary batch size during training
batch_size_per_epoch = np.linspace(100,10000,num=num_epochs, dtype=np.int16)
validation_SERs = np.zeros(num_epochs)
validation_received = []
decision_region_evolution = []
constellations = []
pg_loss_list = []
print('Start Training')
for epoch in range(num_epochs):
batch_labels = torch.empty(batch_size_per_epoch[epoch], device=device)
# Optimize Transmitter
for step in range(batches_per_epoch):
# Generate training data: In most cases, you have a dataset and do not generate a training dataset during training loop
# sample new mini-batch directory on the GPU (if available)
batch_labels.random_(M)
batch_labels_onehot = torch.zeros(int(batch_size_per_epoch[epoch]), M, device=device)
batch_labels_onehot[range(batch_labels_onehot.shape[0]), batch_labels.long()]=1
# Propagate (training) data through the net
tx_output = model_tx(batch_labels_onehot)
tx_output_clone = tx_output.clone().detach()
# apply policy, which is Gaussian noise
noise = (sigma_p / np.sqrt(2)) * torch.randn_like(tx_output).to(device)
encoded = np.sqrt(1-(sigma_p**2))*tx_output + noise
# channel model
encoded_clone = encoded.clone().detach()
received = channel_model(encoded_clone)
const_diagram = (1/model_tx.normalize(model_tx.W.detach()))*model_tx.W
# Estimate transmit symbols
logits = receiver((1/model_tx.normalize(model_tx.W.detach()))*model_tx.W.detach(), received)
# compute per example losses ... detach from graph so that the gradient is not computed
per_example_losses = my_loss_fn(logits, batch_labels_onehot).detach()
# policy gradient loss
pg_loss = (1/batch_size_per_epoch[epoch])*torch.sum(
per_example_losses * torch.log(
(1/(np.pi * sigma_p**2))*torch.exp(-1/(sigma_p**2)*torch.sum(torch.square(
encoded_clone-np.sqrt(1-(sigma_p**2))*tx_output), 1)
)
)
)
# compute gradients
pg_loss.backward()
# Adapt weights
optimizer_tx.step()
# reset gradients
optimizer_tx.zero_grad()
# print(torch.mean(per_example_losses))
# compute validation SER
with torch.no_grad():
encoded_valid = model_tx(torch.Tensor(y_valid_onehot).to(device))
received_valid = channel_model(encoded_valid)
out_valid = receiver((1/model_tx.normalize(model_tx.W.detach()))*model_tx.W.detach(), received_valid)
validation_SERs[epoch] = SER(out_valid.detach().cpu().numpy().squeeze(), y_valid)
print('Validation SER after epoch %d: %f (pg_loss %1.8f)' % (epoch, validation_SERs[epoch], pg_loss.detach().cpu().numpy()))
# calculate and store received validation data
encoded = model_tx(torch.Tensor(y_valid_onehot).to(device))
received = channel_model(encoded)
validation_received.append(received.detach().cpu().numpy())
# calculate and store constellation
encoded = model_tx(torch.eye(M).to(device))
constellations.append(encoded.detach().cpu().numpy())
# store decision region for generating the animation
# mesh_prediction = softmax(model_rx(torch.Tensor(meshgrid).to(device)))
mesh_prediction = receiver((1/model_tx.normalize(model_tx.W.detach()))*model_tx.W.detach(), torch.Tensor(meshgrid).to(device))
decision_region_evolution.append(mesh_prediction.detach().cpu().numpy())
print('Training finished! The constellation points are:')
print(model_tx.W)
Explanation: Now, carry out the training as such. First initialize the variables and then loop through the training. Here, the epochs are not defined in the classical way, as we do not have a training set per se. We generate new data on the fly and never reuse data.
End of explanation
# find minimum SER from validation set
min_SER_iter = np.argmin(validation_SERs)
print('Minimum SER obtained: %1.5f' % validation_SERs[min_SER_iter])
cmap = matplotlib.cm.tab20
base = plt.cm.get_cmap(cmap)
color_list = base.colors
new_color_list = [[t/2 + 0.5 for t in color_list[k]] for k in range(len(color_list))]
%matplotlib inline
plt.figure(figsize=(19,9.5))
font = {'size' : 14}
plt.rc('font', **font)
plt.rc('text', usetex=matplotlib.checkdep_usetex(True))
plt.subplot(121)
plt.scatter(constellations[min_SER_iter][:,0], constellations[min_SER_iter][:,1], c=range(M), cmap='tab20',s=50)
plt.axis('scaled')
plt.xlabel(r'$\Re\{t\}$',fontsize=18)
plt.ylabel(r'$\Im\{t\}$',fontsize=18)
plt.xlim((-ext_max,ext_max))
plt.ylim((-ext_max,ext_max))
plt.grid(which='both')
plt.title('Constellation diagram (transmit symbols)',fontsize=18)
plt.subplot(122)
decision_scatter = np.argmax(decision_region_evolution[min_SER_iter], 1)
plt.scatter(meshgrid[:,0], meshgrid[:,1], c=decision_scatter, cmap=matplotlib.colors.ListedColormap(colors=new_color_list),s=4)
plt.scatter(constellations[min_SER_iter][:,0], constellations[min_SER_iter][:,1], c=range(M), cmap='tab20',s=50)
plt.axis('scaled')
plt.xlim((-ext_max,ext_max))
plt.ylim((-ext_max,ext_max))
plt.xlabel(r'$\Re\{t/r\}$',fontsize=18)
plt.ylabel(r'$\Im\{t/r\}$',fontsize=18)
plt.title('Constellation diagram (transmit symbols) with decision regions',fontsize=18)
plt.savefig('PGAE_AWGN_ML_M%d_EbN0%1.2f_const.pdf' % (M,EbN0),bbox_inches='tight')
cmap = matplotlib.cm.tab20
base = plt.cm.get_cmap(cmap)
color_list = base.colors
new_color_list = [[t/2 + 0.5 for t in color_list[k]] for k in range(len(color_list))]
%matplotlib inline
plt.figure(figsize=(19,8))
font = {'size' : 14}
plt.rc('font', **font)
plt.rc('text', usetex=matplotlib.checkdep_usetex(True))
plt.subplot(121)
#plt.contourf(mgx,mgy,decision_region_evolution[-1].reshape(mgy.shape).T,cmap='coolwarm',vmin=0.3,vmax=0.7)
plt.scatter(validation_received[min_SER_iter][:,0], validation_received[min_SER_iter][:,1], c=y_valid, cmap='tab20',s=4)
plt.axis('scaled')
plt.xlabel(r'$\Re\{r\}$',fontsize=18)
plt.ylabel(r'$\Im\{r\}$',fontsize=18)
plt.xlim((-ext_max,ext_max))
plt.ylim((-ext_max,ext_max))
plt.title('Received symbols',fontsize=18)
plt.subplot(122)
decision_scatter = np.argmax(decision_region_evolution[min_SER_iter], 1)
plt.scatter(meshgrid[:,0], meshgrid[:,1], c=decision_scatter, cmap=matplotlib.colors.ListedColormap(colors=new_color_list),s=4)
plt.scatter(validation_received[min_SER_iter][0:4000,0], validation_received[min_SER_iter][0:4000,1], c=y_valid[0:4000], cmap='tab20',s=4)
plt.axis('scaled')
plt.xlim((-ext_max,ext_max))
plt.ylim((-ext_max,ext_max))
plt.xlabel(r'$\Re\{r\}$',fontsize=18)
plt.ylabel(r'$\Im\{r\}$',fontsize=18)
plt.title('Received symbols with decision regions',fontsize=18)
plt.savefig('PGAE_AWGN_ML_M%d_EbN0%1.2f_noisy.pdf' % (M,EbN0),bbox_inches='tight')
%matplotlib notebook
%matplotlib notebook
cmap = matplotlib.cm.tab20
base = plt.cm.get_cmap(cmap)
color_list = base.colors
new_color_list = [[t/2 + 0.5 for t in color_list[k]] for k in range(len(color_list))]
from matplotlib import animation, rc
from matplotlib.animation import PillowWriter # Disable if you don't want to save any GIFs.
font = {'size' : 18}
plt.rc('font', **font)
plt.rc('text', usetex=matplotlib.checkdep_usetex(True))
fig = plt.figure(figsize=(8,8))
ax = fig.add_subplot(1,1,1)
ax.axis('scaled')
written = True
def animate(i):
ax.clear()
decision_scatter = np.argmax(decision_region_evolution[i], 1)
ax.scatter(meshgrid[:,0], meshgrid[:,1], c=decision_scatter, cmap=matplotlib.colors.ListedColormap(colors=new_color_list),s=4, marker='s')
ax.scatter(constellations[i][:,0], constellations[i][:,1], c=range(M), cmap='tab20',s=75)
ax.scatter(validation_received[i][0:4000,0], validation_received[i][0:4000,1], c=y_valid[0:4000], cmap='tab20',s=4)
ax.set_xlim(( -ext_max, ext_max))
ax.set_ylim(( -ext_max, ext_max))
ax.set_xlabel(r'$\Re\{r\}$',fontsize=18)
ax.set_ylabel(r'$\Im\{r\}$',fontsize=18)
anim = animation.FuncAnimation(fig, animate, frames=num_epochs-1, interval=200, blit=False)
fig.show()
anim.save('PGAE_AWGN_ML_M%d_EbN0%1.2f_anim.gif' % (M,EbN0), writer=PillowWriter(fps=5))
Explanation: Evaluation
Plot decision region and scatter plot of the validation set. Note that the validation set is only used for computing SERs and plotting, there is no feedback towards the training!
End of explanation |
9,453 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Graduation efficiency ("eficiencia terminal") and women in engineering
Objective
Explore the relationship between the graduation-efficiency percentage by state and school cycle and the percentage of women enrolled in an engineering degree. Carry out a graphical analysis of how the states behave across the different school cycles.
Data
The data analysed come from the SEP and correspond to the set of sustainable-development objective indicators. To learn more or extract further related data, they can be downloaded from the following page
Step1: Se observa que tienen el mismo tipo de datos y lo único que falta revisar es la cantidad de periodos, ya que se observa en la información de cada tabla que la de Mujeres cuanta con 168 líneas y las de Eficiencia solo 128. Reviso los ciclos que se consideran en los datos.
Step2: Se observa que desde la fila 128 inician los registros del ciclo 2014/2015, entonces solo eligo de la tabla de mujeres las filas relacionadas con los ciclos 2010/2011, 2011/2012, 2012/2013 y 2013/2014.
Step3: Al nuevo DataFrame se le agrega una columna nueva, la finalidad es concentrar los datos en una sola tabla.
Step4: Para tener una "idea" visual de lo que está pasando con los datos o como se comportan, hago una gráfica de barras con respecto a los años.
Step5: Se precia bruscamente que el porcentaje relacionado con la inscripción de mujeres a ingeniería es muy bajo, es entendible debido a que se busca que la tasa de eficiencia terminar sea muy cercana siempre a 100%. Si se hace esto por estado o entidad federativa, se observará algo "similar".
Step6: Para tratar de revisar la "posible" relación entre las dos tasas de porcentaje hago el ejercicio de "normalizar" los valores, estos es "centrar" los datos y "dividir" por la desviación estándar. No explico los detalles técnicas, solo hago esto para poder tener un marco comparativo entre las dos tasas.
Nota
Step7: La gráfica anterior deja ver algo curioso, que ven valores negativos en la mediana para casi todos los años. Lo cual pues si bien, no es nada malo debido a que se "normalizaron" los datos, pero creo que viendo un histograma de los valores de las dos columnas puede aclarar o ayudar aclarar lo que pasó con la normalización de los datos.
Step8: Las gráficas anteriores muestran que las distribuciones son "asimétricas". Se visualiza que son en mayoría "cargadas" hacia la izquierda, es decir tienen más valores negativos.
Haciendo una comparación de como se comportan las variables con respecto a cada estado, la gráfica siguiente muestra la relación. Cada estado tienen solo las 4 mediciones de los 4 ciclos escolares, así que considero la media como el valor de la longitud de la barra sin ningún razón en especial, solo con el fin de explorar las dos variables.
Step9: Inicialmente esperaba que las dos barras por cada estado tuvieran valor positivo o negativo, pero no hay una razón real detras de mi supuesto inicial. Uno de los problemas es que se normalizan todos los datos sin importar el ciclo, esto afecta en la "escala" pero no en la "forma", lo intensante sería normalizar los 32 registros de cada ciclo para ver como se comportan por cada ciclo. Como ejemplo reviso el comportamiento del ciclo 2010/2011.
Step10: Dejando de lado el valor o relación de los porcentajes normalizados, se puede explorar como se comportó el porcentaje tanto de inscripción a ingeniería como de eficiencia por ciclo y estado. Se hace un mapa de calor para cada caso.
Step11: Se observa que los estados con mejor eficiencia terminal en todos los ciclos son Durango y Querétaro.En cada ciclo hay 3 estados relevantes con casi el 100% de eficiencia.
Step12: Se observa que en el periódo 2010/2011 los porcentajes fueron mayores con respecto a los periodos siguientes, se aprecia que Durango mantiene un porcentaje casi constante por cada ciclo y tabasto el más alto en los 4 ciclos revisados. Lo raro es que los porcentaje bajaran considerablemente del 2010/2011 al 2011/2012. Los últimos 3 periodos muestran un comportamiento más similar, lo cual puede ser interesante para revisar con mayor información.
Por último, se puede explorar la relación del porcentaje de inscripción de mujeres a ingeniería en los 4 periodos y se puede calcular la correlacion, esperando que las tasas registradas de un ciclo afectan al siguente ciclo.
Step13: Se observa de la matriz, que el ciclo 2010/2011 tiene mayor correlación con el ciclo 2011/2012 ( fila 3 , columna 3) un valor de 0.913, también el ciclo 2011/2012 tienen mucha correlacion con el 2012/2013 (fila 4 y columna 4) un valor de 0.993, y por último el ciclo 2012/2013 con el 2013/2014 ( fila 5, columna 5) con un valor de 0.993. Esto no implica que esto sea "cierto", solo muestra que hay una correlación alta no una causualidad.
Por último exploro cual puede ser la relación de los valores normalizados con respecto a los periódos. Para tener una imagen gráfica de la relación hago un scatter plot. | Python Code:
%matplotlib inline
import matplotlib
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
matplotlib.style.use('ggplot')
plt.rcParams['figure.figsize']=(20,7)
import sys
reload(sys)
sys.setdefaultencoding('utf8')
Mujeres=pd.read_csv('/Datos/Mujeres_ingeniería_y_tecnología.csv')
Eficiencia=pd.read_csv('/Datos/Eficiencia_terminal_en_educación_terciaria.csv')
Mujeres.head()
Eficiencia.head()
Mujeres.info()
Eficiencia.info()
Explanation: Graduation efficiency ("eficiencia terminal") and women in engineering
Objective
Explore the relationship between the graduation-efficiency percentage by state and school cycle and the percentage of women enrolled in an engineering degree. Carry out a graphical analysis of how the states behave across the different school cycles.
Data
The data analysed come from the SEP and correspond to the set of sustainable-development objective indicators. To learn more or extract further related data, they can be downloaded from the following page:
* http://catalogo.datos.gob.mx/dataset/indicadores-objetivos-de-desarrollo-sustentable
End of explanation
Mujeres.CICLO.drop_duplicates()
Explanation: Both tables have the same data types; the only thing left to check is the number of periods, since the table info shows that the Mujeres table has 168 rows while the Eficiencia table has only 128. I check which school cycles the data cover.
End of explanation
Mujeres[Mujeres.CICLO!='2014/2015'][['ENTIDAD','CICLO','%_MUJERES_EN_ING']].shape
#Se construye otra tabla solo con las filas
Tabla=Mujeres[Mujeres.CICLO!='2014/2015'][['ENTIDAD','CICLO','%_MUJERES_EN_ING']]
Explanation: Se observa que desde la fila 128 inician los registros del ciclo 2014/2015, entonces solo eligo de la tabla de mujeres las filas relacionadas con los ciclos 2010/2011, 2011/2012, 2012/2013 y 2013/2014.
End of explanation
#Se agrega una nueva columna a los datos
Tabla['Etsup']=Eficiencia.Etsup
#Se visualizan los primeros registros del DataFrame
Tabla.head()
#Se agrupan los datos y tomo las medianas de los dos porcentajes, tanto el de inscripción de mujeres a ingeniería
#como el porcentaje de eficiencia teminal.
Tabla.groupby('CICLO').median()
Explanation: Al nuevo DataFrame se le agrega una columna nueva, la finalidad es concentrar los datos en una sola tabla.
End of explanation
#Gráfica de barras
Tabla.groupby('CICLO').median().plot(kind='bar')
Explanation: Para tener una "idea" visual de lo que está pasando con los datos o como se comportan, hago una gráfica de barras con respecto a los años.
End of explanation
Tabla.groupby('ENTIDAD').median().plot(kind='bar')
Explanation: Se precia bruscamente que el porcentaje relacionado con la inscripción de mujeres a ingeniería es muy bajo, es entendible debido a que se busca que la tasa de eficiencia terminar sea muy cercana siempre a 100%. Si se hace esto por estado o entidad federativa, se observará algo "similar".
End of explanation
#Cambio el nombre a las columnas, para quitarle el caracter '%' a la tasa de mujeres inscritas en ingeniería.
Tabla.columns=['ENTIDAD','CICLO','P_MUJERES_EN_ING','Etsup']
#Normalizo los datos y los agrego a una nueva columna en el DataFrame.
Tabla['Z_MUJERES_ING']=(Tabla.P_MUJERES_EN_ING-Tabla.P_MUJERES_EN_ING.mean())/(Tabla.P_MUJERES_EN_ING.std())
#Normalizo los datos de eficiencia terminal y los agrego a una nueva columna.
Tabla['Z_Etsup']=(Tabla.Etsup-Tabla.Etsup.mean())/Tabla.Etsup.std()
Tabla.head()
#Hago una visualización de las variables o columnas normalizadas.
Tabla.groupby('CICLO')[['Z_MUJERES_ING','Z_Etsup']].median().plot(kind='bar')
Explanation: Para tratar de revisar la "posible" relación entre las dos tasas de porcentaje hago el ejercicio de "normalizar" los valores, estos es "centrar" los datos y "dividir" por la desviación estándar. No explico los detalles técnicas, solo hago esto para poder tener un marco comparativo entre las dos tasas.
Nota: Lo que hago es un análisis ilustrativo, por lo cual no entro en detalles estadísticos serios. No es la finalidad de esta exploración.
End of explanation
#Histogramas de las variables o columnas normalizadas.
Tabla[['Z_MUJERES_ING','Z_Etsup']].hist(bins=15)
Explanation: La gráfica anterior deja ver algo curioso, que ven valores negativos en la mediana para casi todos los años. Lo cual pues si bien, no es nada malo debido a que se "normalizaron" los datos, pero creo que viendo un histograma de los valores de las dos columnas puede aclarar o ayudar aclarar lo que pasó con la normalización de los datos.
End of explanation
#Gráfica de barras para comparar las dos variables por estado o entidad federativa.
Tabla.groupby('ENTIDAD')[['Z_MUJERES_ING','Z_Etsup']].mean().plot(kind='bar')
Explanation: Las gráficas anteriores muestran que las distribuciones son "asimétricas". Se visualiza que son en mayoría "cargadas" hacia la izquierda, es decir tienen más valores negativos.
Haciendo una comparación de como se comportan las variables con respecto a cada estado, la gráfica siguiente muestra la relación. Cada estado tienen solo las 4 mediciones de los 4 ciclos escolares, así que considero la media como el valor de la longitud de la barra sin ningún razón en especial, solo con el fin de explorar las dos variables.
End of explanation
Tabla_2010_2011=Tabla[Tabla.CICLO=='2010/2011']
Tabla_2010_2011['Z_Mujeres']=(Tabla_2010_2011.P_MUJERES_EN_ING - Tabla_2010_2011.P_MUJERES_EN_ING.mean())/(Tabla_2010_2011.P_MUJERES_EN_ING.std())
#Todos los registros del 2010/2011
Tabla_2010_2011.head(32)
#Gráfica para visualizar que la modificación que se tiene al normalizar todos los datos o solo los correspondientes al ciclo
Tabla_2010_2011[['Z_Mujeres','Z_MUJERES_ING']].plot()
Explanation: Inicialmente esperaba que las dos barras por cada estado tuvieran valor positivo o negativo, pero no hay una razón real detras de mi supuesto inicial. Uno de los problemas es que se normalizan todos los datos sin importar el ciclo, esto afecta en la "escala" pero no en la "forma", lo intensante sería normalizar los 32 registros de cada ciclo para ver como se comportan por cada ciclo. Como ejemplo reviso el comportamiento del ciclo 2010/2011.
End of explanation
#Se cambia el tipo de dato de los valores del porcentaje, se agrega una nueva columna en el dataframe.
Tabla['E_Etsup']=Tabla.Etsup.apply(int)
#Se contruye una tabla pivot para pasarla al mapa de calor.
Tabla_1=Tabla.pivot(index='ENTIDAD',columns='CICLO',values='E_Etsup')
#Se usa la librería seaborn.
sns.heatmap(Tabla_1,annot=True,linewidths=.5)
Explanation: Dejando de lado el valor o relación de los porcentajes normalizados, se puede explorar como se comportó el porcentaje tanto de inscripción a ingeniería como de eficiencia por ciclo y estado. Se hace un mapa de calor para cada caso.
End of explanation
#Valores enteros del porcentaje de mujeres en ingeniería
Tabla['E_MUJERES']=Tabla.P_MUJERES_EN_ING.apply(int)
#Se contruye la tabla
Tabla_2=Tabla.pivot(index='ENTIDAD',columns='CICLO',values='E_MUJERES')
#Se contruye el mapa de calor.
sns.heatmap(Tabla_2,annot=True,linewidths=.5)
Explanation: Se observa que los estados con mejor eficiencia terminal en todos los ciclos son Durango y Querétaro.En cada ciclo hay 3 estados relevantes con casi el 100% de eficiencia.
End of explanation
Tabla_3=Tabla.pivot(index='ENTIDAD',columns='CICLO',values='P_MUJERES_EN_ING')
Tabla_3.corr()
Explanation: Se observa que en el periódo 2010/2011 los porcentajes fueron mayores con respecto a los periodos siguientes, se aprecia que Durango mantiene un porcentaje casi constante por cada ciclo y tabasto el más alto en los 4 ciclos revisados. Lo raro es que los porcentaje bajaran considerablemente del 2010/2011 al 2011/2012. Los últimos 3 periodos muestran un comportamiento más similar, lo cual puede ser interesante para revisar con mayor información.
Por último, se puede explorar la relación del porcentaje de inscripción de mujeres a ingeniería en los 4 periodos y se puede calcular la correlacion, esperando que las tasas registradas de un ciclo afectan al siguente ciclo.
End of explanation
#Tabla auxiliar para constuir el scatter plot
Tabla_2_1=Tabla[['CICLO','Z_MUJERES_ING','Z_Etsup']]
Tabla_2_1.head()
sns.pairplot(data=Tabla_2_1,hue="CICLO")
Explanation: Se observa de la matriz, que el ciclo 2010/2011 tiene mayor correlación con el ciclo 2011/2012 ( fila 3 , columna 3) un valor de 0.913, también el ciclo 2011/2012 tienen mucha correlacion con el 2012/2013 (fila 4 y columna 4) un valor de 0.993, y por último el ciclo 2012/2013 con el 2013/2014 ( fila 5, columna 5) con un valor de 0.993. Esto no implica que esto sea "cierto", solo muestra que hay una correlación alta no una causualidad.
Por último exploro cual puede ser la relación de los valores normalizados con respecto a los periódos. Para tener una imagen gráfica de la relación hago un scatter plot.
End of explanation |
9,454 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Arterial line study
This notebook reproduces the arterial line study in MIMIC-III. The following is an outline of the notebook
Step1: 1 - Generate materialized views
Before generating the aline cohort, we require the following materialized views to be already generated
Step3: Now we generate the aline_cohort table using the aline_cohort.sql file.
Afterwards, we can generate the remaining 6 materialized views in any order, as they all depend on only aline_cohort and raw MIMIC-III data.
Step4: The following codeblock loads in the SQL from each file in the aline subfolder and executes the query to generate the materialized view. We specifically exclude the aline_cohort.sql file as we have already executed it above. Again, the order of query execution does not matter for these queries. Note also that the filenames are the same as the created materialized view names for convenience.
Step6: Summarize the cohort exclusions before we pull all the data together.
2 - Extract all covariates and outcome measures
We now aggregate all the data from the various views into a single dataframe.
Step7: Now we need to remove obvious outliers, including correcting ages > 200 to 91.4 (i.e. replace anonymized ages with 91.4, the median age of patients older than 89).
Step8: 3 - Write to file | Python Code:
# Install OS dependencies. This only needs to be run once for each new notebook instance.
!pip install PyAthena
from pyathena import connect
from pyathena.util import as_pandas
from __future__ import print_function
# Import libraries
import datetime
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import os
import boto3
from botocore.client import ClientError
# below is used to print out pretty pandas dataframes
from IPython.display import display, HTML
%matplotlib inline
s3 = boto3.resource('s3')
client = boto3.client("sts")
account_id = client.get_caller_identity()["Account"]
my_session = boto3.session.Session()
region = my_session.region_name
athena_query_results_bucket = 'aws-athena-query-results-'+account_id+'-'+region
try:
s3.meta.client.head_bucket(Bucket=athena_query_results_bucket)
except ClientError:
bucket = s3.create_bucket(Bucket=athena_query_results_bucket)
print('Creating bucket '+athena_query_results_bucket)
cursor = connect(s3_staging_dir='s3://'+athena_query_results_bucket+'/athena/temp').cursor()
# The Glue database name of your MIMIC-III parquet data
gluedatabase="mimiciii"
# location of the queries to generate aline specific materialized views
aline_path = './'
# location of the queries to generate materialized views from the MIMIC code repository
concepts_path = './concepts/'
Explanation: Arterial line study
This notebook reproduces the arterial line study in MIMIC-III. The following is an outline of the notebook:
Generate necessary materialized views in SQL
Combine materialized views and acquire a single dataframe
Write this data to file for use in R
The R code then evaluates whether an arterial line is associated with mortality after propensity matching.
Note that the original arterial line study used a genetic algorithm to select the covariates in the propensity score. We omit the genetic algorithm step, and instead use the final set of covariates described by the authors. For more detail, see:
Hsu DJ, Feng M, Kothari R, Zhou H, Chen KP, Celi LA. The association between indwelling arterial catheters and mortality in hemodynamically stable patients with respiratory failure: a propensity score analysis. CHEST Journal. 2015 Dec 1;148(6):1470-6.
End of explanation
# Load in the query from file
query='DROP TABLE IF EXISTS DATABASE.angus_sepsis;'
cursor.execute(query.replace("DATABASE", gluedatabase))
f = os.path.join(concepts_path,'sepsis/angus-awsathena.sql')
with open(f) as fp:
query = ''.join(fp.readlines())
# Execute the query
print('Generating table \'angus_sepsis\' using {} ...'.format(f),end=' ')
cursor.execute(query.replace("DATABASE", gluedatabase))
print('done.')
# Load in the query from file
query='DROP TABLE IF EXISTS DATABASE.heightweight;'
cursor.execute(query.replace("DATABASE", gluedatabase))
f = os.path.join(concepts_path,'demographics/HeightWeightQuery-awsathena.sql')
with open(f) as fp:
query = ''.join(fp.readlines())
# Execute the query
print('Generating table \'heightweight\' using {} ...'.format(f),end=' ')
cursor.execute(query.replace("DATABASE", gluedatabase))
print('done.')
# Load in the query from file
query='DROP TABLE IF EXISTS DATABASE.aline_vaso_flag;'
cursor.execute(query.replace("DATABASE", gluedatabase))
f = os.path.join(aline_path,'aline_vaso_flag-awsathena.sql')
with open(f) as fp:
query = ''.join(fp.readlines())
# Execute the query
print('Generating table \'aline_vaso_flag\' using {} ...'.format(f),end=' ')
cursor.execute(query.replace("DATABASE", gluedatabase))
print('done.')
# Load in the query from file
query='DROP TABLE IF EXISTS DATABASE.ventsettings;'
cursor.execute(query.replace("DATABASE", gluedatabase))
f = os.path.join(concepts_path,'durations/ventilation-settings-awsathena.sql')
with open(f) as fp:
query = ''.join(fp.readlines())
# Execute the query
print('Generating table \'vent_settings\' using {} ...'.format(f),end=' ')
cursor.execute(query.replace("DATABASE", gluedatabase))
print('done.')
# Load in the query from file
query='DROP TABLE IF EXISTS DATABASE.ventdurations;'
cursor.execute(query.replace("DATABASE", gluedatabase))
f = os.path.join(concepts_path,'durations/ventilation-durations-awsathena.sql')
with open(f) as fp:
query = ''.join(fp.readlines())
# Execute the query
print('Generating table \'vent_durations\' using {} ...'.format(f),end=' ')
cursor.execute(query.replace("DATABASE", gluedatabase))
print('done.')
Explanation: 1 - Generate materialized views
Before generating the aline cohort, we require the following materialized views to be already generated:
angus - from angus.sql
heightweight - from HeightWeightQuery.sql
aline_vaso_flag - from aline_vaso_flag.sql
You can generate the above by executing the below codeblock. If you haven't changed the directory structure, the below should work, otherwise you may need to modify the concepts_path variable above.
End of explanation
# Load in the query from file
query='DROP TABLE IF EXISTS DATABASE.aline_cohort_all;'
cursor.execute(query.replace("DATABASE", gluedatabase))
f = os.path.join(aline_path,'aline_cohort-awsathena.sql')
with open(f) as fp:
query = ''.join(fp.readlines())
# Execute the query
print('Generating table \'aline_cohort_all\' using {} ...'.format(f),end=' ')
cursor.execute(query.replace("DATABASE", gluedatabase))
print('done.')
# Load in the query from file
query='DROP TABLE IF EXISTS DATABASE.aline_cohort;'
cursor.execute(query.replace("DATABASE", gluedatabase))
f = os.path.join(aline_path,'aline_final_cohort-awsathena.sql')
with open(f) as fp:
query = ''.join(fp.readlines())
# Execute the query
print('Generating table \'aline_cohort\' using {} ...'.format(f),end=' ')
cursor.execute(query.replace("DATABASE", gluedatabase))
print('done.')
query =
select
icustay_id
, exclusion_readmission
, exclusion_shortstay
, exclusion_vasopressors
, exclusion_septic
, exclusion_aline_before_admission
, exclusion_not_ventilated_first24hr
, exclusion_service_surgical
from DATABASE.aline_cohort_all
cursor.execute(query.replace("DATABASE", gluedatabase))
# Load the result of the query into a dataframe
df = as_pandas(cursor)
# print out exclusions
idxRem = df['icustay_id'].isnull()
for c in df.columns:
if 'exclusion_' in c:
print('{:5d} - {}'.format(df[c].sum(), c))
idxRem[df[c]==1] = True
# final exclusion (excl sepsis/something else)
print('Will remove {} of {} patients.'.format(np.sum(idxRem), df.shape[0]))
print('')
print('')
print('Reproducing the flow of the flowchart from Chest paper.')
# first stay
idxRem = (df['exclusion_readmission']==1) | (df['exclusion_shortstay']==1)
print('{:5d} - removing {:5d} ({:2.2f}%) patients - short stay // readmission.'.format(
df.shape[0], np.sum(idxRem), 100.0*np.mean(idxRem)))
df = df.loc[~idxRem,:]
idxRem = df['exclusion_not_ventilated_first24hr']==1
print('{:5d} - removing {:5d} ({:2.2f}%) patients - not ventilated in first 24 hours.'.format(
df.shape[0], np.sum(idxRem), 100.0*np.mean(idxRem)))
df = df.loc[df['exclusion_not_ventilated_first24hr']==0,:]
print('{:5d}'.format(df.shape[0]))
idxRem = df['icustay_id'].isnull()
for c in ['exclusion_septic', 'exclusion_vasopressors',
'exclusion_aline_before_admission', 'exclusion_service_surgical']:
print('{:5s} - removing {:5d} ({:2.2f}%) patients - additional {:5d} {:2.2f}% - {}'.format(
'', df[c].sum(), 100.0*df[c].mean(),
np.sum((idxRem==0)&(df[c]==1)), 100.0*np.mean((idxRem==0)&(df[c]==1)),
c))
idxRem = idxRem | (df[c]==1)
df = df.loc[~idxRem,:]
print('{} - final cohort.'.format(df.shape[0]))
Explanation: Now we generate the aline_cohort table using the aline_cohort.sql file.
Afterwards, we can generate the remaining 6 materialized views in any order, as they all depend on only aline_cohort and raw MIMIC-III data.
End of explanation
# get a list of all files in the subfolder
aline_queries = [f for f in os.listdir(aline_path)
# only keep the filename if it is actually a file (and not a directory)
if os.path.isfile(os.path.join(aline_path,f))
# and only keep the filename if it is an SQL file
& f.endswith('.sql')
# and we do *not* want aline_cohort - it's generated above
& (f != 'aline_cohort-awsathena.sql') & (f != 'aline_final_cohort-awsathena.sql') & (f != 'aline_vaso_flag-awsathena.sql')]
for f in aline_queries:
# Load in the query from file
table=f.split('-')
query='DROP TABLE IF EXISTS DATABASE.{};'.format(table[0])
cursor.execute(query.replace("DATABASE", gluedatabase))
print('Executing {} ...'.format(f), end=' ')
with open(os.path.join(aline_path,f)) as fp:
query = ''.join(fp.readlines())
cursor.execute(query.replace("DATABASE", gluedatabase))
print('done.')
Explanation: The following codeblock loads in the SQL from each file in the aline subfolder and executes the query to generate the materialized view. We specifically exclude the aline_cohort.sql file as we have already executed it above. Again, the order of query execution does not matter for these queries. Note also that the filenames are the same as the created materialized view names for convenience.
End of explanation
# Load in the query from file
query =
--FINAL QUERY
select
co.subject_id, co.hadm_id, co.icustay_id
-- static variables from patient tracking tables
, co.age
, co.gender
-- , co.gender_num -- gender, 0=F, 1=M
, co.intime as icustay_intime
, co.day_icu_intime -- day of week, text
--, co.day_icu_intime_num -- day of week, numeric (0=Sun, 6=Sat)
, co.hour_icu_intime -- hour of ICU admission (24 hour clock)
, case
when co.hour_icu_intime >= 7
and co.hour_icu_intime < 19
then 1
else 0
end as icu_hour_flag
, co.outtime as icustay_outtime
-- outcome variables
, co.icu_los_day
, co.hospital_los_day
, co.hosp_exp_flag -- 1/0 patient died within current hospital stay
, co.icu_exp_flag -- 1/0 patient died within current ICU stay
, co.mort_day -- days from ICU admission to mortality, if they died
, co.day_28_flag -- 1/0 whether the patient died 28 days after *ICU* admission
, co.mort_day_censored -- days until patient died *or* 150 days (150 days is our censor time)
, co.censor_flag -- 1/0 did this patient have 150 imputed in mort_day_censored
-- aline flags
-- , co.initial_aline_flag -- always 0, we remove patients admitted w/ aline
, co.aline_flag -- 1/0 did the patient receive an aline
, co.aline_time_day -- if the patient received aline, fractional days until aline put in
-- demographics extracted using regex + echos
, bmi.weight as weight_first
, bmi.height as height_first
, bmi.bmi
-- service patient was admitted to the ICU under
, co.service_unit
-- severity of illness just before ventilation
, so.sofa as sofa_first
-- vital sign value just preceeding ventilation
, vi.map as map_first
, vi.heartrate as hr_first
, vi.temperature as temp_first
, vi.spo2 as spo2_first
-- labs!
, labs.bun_first
, labs.creatinine_first
, labs.chloride_first
, labs.hgb_first
, labs.platelet_first
, labs.potassium_first
, labs.sodium_first
, labs.tco2_first
, labs.wbc_first
-- comorbidities extracted using ICD-9 codes
, icd.chf as chf_flag
, icd.afib as afib_flag
, icd.renal as renal_flag
, icd.liver as liver_flag
, icd.copd as copd_flag
, icd.cad as cad_flag
, icd.stroke as stroke_flag
, icd.malignancy as malignancy_flag
, icd.respfail as respfail_flag
, icd.endocarditis as endocarditis_flag
, icd.ards as ards_flag
, icd.pneumonia as pneumonia_flag
-- sedative use
, sed.sedative_flag
, sed.midazolam_flag
, sed.fentanyl_flag
, sed.propofol_flag
from DATABASE.aline_cohort co
-- The following tables are generated by code within this repository
left join DATABASE.aline_sofa so
on co.icustay_id = so.icustay_id
left join DATABASE.aline_bmi bmi
on co.icustay_id = bmi.icustay_id
left join DATABASE.aline_icd icd
on co.hadm_id = icd.hadm_id
left join DATABASE.aline_vitals vi
on co.icustay_id = vi.icustay_id
left join DATABASE.aline_labs labs
on co.icustay_id = labs.icustay_id
left join DATABASE.aline_sedatives sed
on co.icustay_id = sed.icustay_id
order by co.icustay_id
cursor.execute(query.replace("DATABASE", gluedatabase))
# Load the result of the query into a dataframe
df = as_pandas(cursor)
df.describe().T
Explanation: Summarize the cohort exclusions before we pull all the data together.
2 - Extract all covariates and outcome measures
We now aggregate all the data from the various views into a single dataframe.
End of explanation
# plot the rest of the distributions
for col in df.columns:
if df.dtypes[col] in ('int64','float64'):
plt.figure(figsize=[12,6])
plt.hist(df[col].dropna(), bins=50, normed=True)
plt.xlabel(col,fontsize=24)
plt.show()
# apply corrections
df.loc[df['age']>89, 'age'] = 91.4
Explanation: Now we need to remove obvious outliers, including correcting ages > 200 to 91.4 (i.e. replace anonymized ages with 91.4, the median age of patients older than 89).
End of explanation
df.to_csv('aline_data.csv',index=False)
Explanation: 3 - Write to file
End of explanation |
9,455 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Collaborative filtering on the MovieLense Dataset
Learning Objectives
Know how to build a BigQuery ML Matrix Factorization Model
Know how to use the model to make recommendations for a user
Know how to use the model to recommend an item to a group of users
This notebook is based on part of Chapter 9 of BigQuery
Step1: Exploring the data
Two tables should now be available in <a href="https
Step2: A quick exploratory query yields that the dataset consists of over 138 thousand users, nearly 27 thousand movies, and a little more than 20 million ratings, confirming that the data has been loaded successfully.
Step3: On examining the first few movies using the query following query, we can see that the genres column is a formatted string
Step4: We can parse the genres into an array and rewrite the table as follows
Step5: Matrix factorization
Matrix factorization is a collaborative filtering technique that relies on factorizing the ratings matrix into two vectors called the user factors and the item factors. The user factors is a low-dimensional representation of a user_id and the item factors similarly represents an item_id.
We can create the recommender model using (<b>Optional</b>, takes 30 minutes. Note
Step6: Note that we create a model as usual, except that the model_type is matrix_factorization and that we have to identify which columns play what roles in the collaborative filtering setup.
What did you get? Our model took an hour to train, and the training loss starts out extremely bad and gets driven down to near-zero over next the four iterations
Step7: Now, we get faster convergence (three iterations instead of five), and a lot less overfitting. Here are our results
Step8: When we did that, we discovered that the evaluation loss was lower (0.97) with num_factors=16 than with num_factors=36 (1.67) or num_factors=24 (1.45). We could continue experimenting, but we are likely to see diminishing returns with further experimentation. So, let’s pick this as the final matrix factorization model and move on.
Making recommendations
With the trained model, we can now provide recommendations. For example, let’s find the best comedy movies to recommend to the user whose userId is 903. In the query below, we are calling ML.PREDICT passing in the trained recommendation model and providing a set of movieId and userId to carry out the predictions on. In this case, it’s just one userId (903), but all movies whose genre includes Comedy.
Step9: Filtering out already rated movies
Of course, this includes movies the user has already seen and rated in the past. Let’s remove them.
TODO 2
Step10: For this user, this happens to yield the same set of movies -- the top predicted ratings didn’t include any of the movies the user has already seen.
Customer targeting
In the previous section, we looked at how to identify the top-rated movies for a specific user. Sometimes, we have a product and have to find the customers who are likely to appreciate it. Suppose, for example, we wish to get more reviews for movieId=96481 which has only one rating and we wish to send coupons to the 5 users who are likely to rate it the highest.
TODO 3
Step11: Batch predictions for all users and movies
What if we wish to carry out predictions for every user and movie combination? Instead of having to pull distinct users and movies as in the previous query, a convenience function is provided to carry out batch predictions for all movieId and userId encountered during training. A limit is applied here, otherwise, all user-movie predictions will be returned and will crash the notebook. | Python Code:
import os
PROJECT = "your-project-here" # REPLACE WITH YOUR PROJECT ID
# Do not change these
os.environ["PROJECT"] = PROJECT
%%bash
rm -r bqml_data
mkdir bqml_data
cd bqml_data
curl -O 'http://files.grouplens.org/datasets/movielens/ml-20m.zip'
unzip ml-20m.zip
yes | bq rm -r $PROJECT:movielens
bq --location=US mk --dataset \
--description 'Movie Recommendations' \
$PROJECT:movielens
bq --location=US load --source_format=CSV \
--autodetect movielens.ratings ml-20m/ratings.csv
bq --location=US load --source_format=CSV \
--autodetect movielens.movies_raw ml-20m/movies.csv
Explanation: Collaborative filtering on the MovieLense Dataset
Learning Objectives
Know how to build a BigQuery ML Matrix Factorization Model
Know how to use the model to make recommendations for a user
Know how to use the model to recommend an item to a group of users
This notebook is based on part of Chapter 9 of BigQuery: The Definitive Guide by Lakshmanan and Tigani.
MovieLens dataset
To illustrate recommender systems in action, let’s use the MovieLens dataset. This is a dataset of movie reviews released by GroupLens, a research lab in the Department of Computer Science and Engineering at the University of Minnesota, through funding by the US National Science Foundation.
Download the data and load it as a BigQuery table using:
End of explanation
%%bigquery --project $PROJECT
SELECT *
FROM movielens.ratings
LIMIT 10
Explanation: Exploring the data
Two tables should now be available in <a href="https://console.cloud.google.com/bigquery">BigQuery</a>.
Collaborative filtering provides a way to generate product recommendations for users, or user targeting for products. The starting point is a table, <b>movielens.ratings</b>, with three columns: a user id, an item id, and the rating that the user gave the product. This table can be sparse -- users don’t have to rate all products. Then, based on just the ratings, the technique finds similar users and similar products and determines the rating that a user would give an unseen product. Then, we can recommend the products with the highest predicted ratings to users, or target products at users with the highest predicted ratings.
End of explanation
%%bigquery --project $PROJECT
SELECT
COUNT(DISTINCT userId) numUsers,
COUNT(DISTINCT movieId) numMovies,
COUNT(*) totalRatings
FROM movielens.ratings
Explanation: A quick exploratory query yields that the dataset consists of over 138 thousand users, nearly 27 thousand movies, and a little more than 20 million ratings, confirming that the data has been loaded successfully.
End of explanation
%%bigquery --project $PROJECT
SELECT *
FROM movielens.movies_raw
WHERE movieId < 5
Explanation: On examining the first few movies using the query following query, we can see that the genres column is a formatted string:
End of explanation
%%bigquery --project $PROJECT
CREATE OR REPLACE TABLE movielens.movies AS
SELECT * REPLACE(SPLIT(genres, "|") AS genres)
FROM movielens.movies_raw
%%bigquery --project $PROJECT
SELECT *
FROM movielens.movies
WHERE movieId < 5
Explanation: We can parse the genres into an array and rewrite the table as follows:
End of explanation
%%bigquery --project $PROJECT
CREATE OR REPLACE MODEL movielens.recommender
options(model_type='matrix_factorization',
user_col='userId', item_col='movieId', rating_col='rating')
AS
SELECT
userId, movieId, rating
FROM movielens.ratings
%%bigquery --project $PROJECT
SELECT *
-- Note: remove cloud-training-demos if you are using your own model:
FROM ML.TRAINING_INFO(MODEL `cloud-training-demos.movielens.recommender`)
Explanation: Matrix factorization
Matrix factorization is a collaborative filtering technique that relies on factorizing the ratings matrix into two vectors called the user factors and the item factors. The user factors is a low-dimensional representation of a user_id and the item factors similarly represents an item_id.
We can create the recommender model using (<b>Optional</b>, takes 30 minutes. Note: we have a model we already trained if you want to skip this step):
End of explanation
%%bigquery --project $PROJECT
CREATE OR REPLACE MODEL movielens.recommender_l2
options(model_type='matrix_factorization',
user_col='userId', item_col='movieId',
rating_col='rating', l2_reg=0.2)
AS
SELECT
userId, movieId, rating
FROM movielens.ratings
%%bigquery --project $PROJECT
SELECT *
-- Note: remove cloud-training-demos if you are using your own model:
FROM ML.TRAINING_INFO(MODEL `cloud-training-demos.movielens.recommender_l2`)
Explanation: Note that we create a model as usual, except that the model_type is matrix_factorization and that we have to identify which columns play what roles in the collaborative filtering setup.
What did you get? Our model took an hour to train, and the training loss starts out extremely bad and gets driven down to near-zero over next the four iterations:
<table>
<tr>
<th>Iteration</th>
<th>Training Data Loss</th>
<th>Evaluation Data Loss</th>
<th>Duration (seconds)</th>
</tr>
<tr>
<td>4</td>
<td>0.5734</td>
<td>172.4057</td>
<td>180.99</td>
</tr>
<tr>
<td>3</td>
<td>0.5826</td>
<td>187.2103</td>
<td>1,040.06</td>
</tr>
<tr>
<td>2</td>
<td>0.6531</td>
<td>4,758.2944</td>
<td>219.46</td>
</tr>
<tr>
<td>1</td>
<td>1.9776</td>
<td>6,297.2573</td>
<td>1,093.76</td>
</tr>
<tr>
<td>0</td>
<td>63,287,833,220.5795</td>
<td>168,995,333.0464</td>
<td>1,091.21</td>
</tr>
</table>
However, the evaluation data loss is quite high, and much higher than the training data loss. This indicates that overfitting is happening, and so we need to add some regularization. Let’s do that next. Note the added l2_reg=0.2 (<b>Optional</b>, takes 30 minutes):
End of explanation
%%bigquery --project $PROJECT
CREATE OR REPLACE MODEL movielens.recommender_16
options(model_type='matrix_factorization',
user_col='userId', item_col='movieId',
rating_col='rating', l2_reg=0.2, num_factors=16)
AS
SELECT
userId, movieId, rating
FROM movielens.ratings
%%bigquery --project $PROJECT
SELECT *
-- Note: remove cloud-training-demos if you are using your own model:
FROM ML.TRAINING_INFO(MODEL `cloud-training-demos.movielens.recommender_16`)
Explanation: Now, we get faster convergence (three iterations instead of five), and a lot less overfitting. Here are our results:
<table>
<tr>
<th>Iteration</th>
<th>Training Data Loss</th>
<th>Evaluation Data Loss</th>
<th>Duration (seconds)</th>
</tr>
<tr>
<td>2</td>
<td>0.6509</td>
<td>1.4596</td>
<td>198.17</td>
</tr>
<tr>
<td>1</td>
<td>1.9829</td>
<td>33,814.3017</td>
<td>1,066.06</td>
</tr>
<tr>
<td>0</td>
<td>481,434,346,060.7928</td>
<td>2,156,993,687.7928</td>
<td>1,024.59</td>
</tr>
</table>
By default, BigQuery sets the number of factors to be the log2 of the number of rows. In our case, since we have 20 million rows in the table, the number of factors would have been chosen to be 24. As with the number of clusters in K-Means clustering, this is a reasonable default but it is often worth experimenting with a number about 50% higher (36) and a number that is about a third lower (16):
TODO 1: Create a Matrix Factorization model with 16 factors
End of explanation
%%bigquery --project $PROJECT
SELECT * FROM
ML.PREDICT(MODEL `cloud-training-demos.movielens.recommender_16`, (
SELECT
movieId, title, 903 AS userId
FROM movielens.movies, UNNEST(genres) g
WHERE g = 'Comedy'
))
ORDER BY predicted_rating DESC
LIMIT 5
Explanation: When we did that, we discovered that the evaluation loss was lower (0.97) with num_factors=16 than with num_factors=36 (1.67) or num_factors=24 (1.45). We could continue experimenting, but we are likely to see diminishing returns with further experimentation. So, let’s pick this as the final matrix factorization model and move on.
Making recommendations
With the trained model, we can now provide recommendations. For example, let’s find the best comedy movies to recommend to the user whose userId is 903. In the query below, we are calling ML.PREDICT passing in the trained recommendation model and providing a set of movieId and userId to carry out the predictions on. In this case, it’s just one userId (903), but all movies whose genre includes Comedy.
End of explanation
%%bigquery --project $PROJECT
SELECT * FROM
ML.PREDICT(MODEL `cloud-training-demos.movielens.recommender_16`, (
WITH seen AS (
SELECT ARRAY_AGG(movieId) AS movies
FROM movielens.ratings
WHERE userId = 903
)
SELECT
movieId, title, 903 AS userId
FROM movielens.movies, UNNEST(genres) g, seen
WHERE g = 'Comedy' AND movieId NOT IN UNNEST(seen.movies)
))
ORDER BY predicted_rating DESC
LIMIT 5
Explanation: Filtering out already rated movies
Of course, this includes movies the user has already seen and rated in the past. Let’s remove them.
TODO 2: Make a prediction for user 903 that does not include already seen movies.
End of explanation
%%bigquery --project $PROJECT
SELECT * FROM
ML.PREDICT(MODEL `cloud-training-demos.movielens.recommender_16`, (
WITH allUsers AS (
SELECT DISTINCT userId
FROM movielens.ratings
)
SELECT
96481 AS movieId,
(SELECT title FROM movielens.movies WHERE movieId=96481) title,
userId
FROM
allUsers
))
ORDER BY predicted_rating DESC
LIMIT 5
Explanation: For this user, this happens to yield the same set of movies -- the top predicted ratings didn’t include any of the movies the user has already seen.
Customer targeting
In the previous section, we looked at how to identify the top-rated movies for a specific user. Sometimes, we have a product and have to find the customers who are likely to appreciate it. Suppose, for example, we wish to get more reviews for movieId=96481 which has only one rating and we wish to send coupons to the 5 users who are likely to rate it the highest.
TODO 3: Find the top five users who will likely enjoy American Mullet (2001)
End of explanation
%%bigquery --project $PROJECT
SELECT *
FROM ML.RECOMMEND(MODEL `cloud-training-demos.movielens.recommender_16`)
LIMIT 10
Explanation: Batch predictions for all users and movies
What if we wish to carry out predictions for every user and movie combination? Instead of having to pull distinct users and movies as in the previous query, a convenience function is provided to carry out batch predictions for all movieId and userId encountered during training. A limit is applied here, otherwise, all user-movie predictions will be returned and will crash the notebook.
End of explanation |
9,456 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
usALEX-5samples - Template
This notebook is executed through 8-spots paper analysis.
For a direct execution, uncomment the cell below.
Step1: Load software and filenames definitions
Step2: Data folder
Step3: Check that the folder exists
Step4: List of data files in data_dir
Step5: Data load
Initial loading of the data
Step6: Laser alternation selection
At this point we have only the timestamps and the detector numbers
Step7: We need to define some parameters
Step8: We should check if everithing is OK with an alternation histogram
Step9: If the plot looks good we can apply the parameters with
Step10: Measurements infos
All the measurement data is in the d variable. We can print it
Step11: Or check the measurements duration
Step12: Compute background
Compute the background using automatic threshold
Step13: Burst search and selection
Step14: Preliminary selection and plots
Step15: A-direct excitation fitting
To extract the A-direct excitation coefficient we need to fit the
S values for the A-only population.
The S value for the A-only population is fitted with different methods
Step16: Zero threshold on nd
Select bursts with
Step17: Selection 1
Bursts are weighted using $w = f(S)$, where the function $f(S)$ is a
Gaussian fitted to the $S$ histogram of the FRET population.
Step18: Selection 2
Bursts are here weighted using weights $w$
Step19: Selection 3
Bursts are here selected according to
Step20: Save data to file
Step21: The following string contains the list of variables to be saved. When saving, the order of the variables is preserved.
Step22: This is just a trick to format the different variables | Python Code:
# ph_sel_name = "all-ph"
# data_id = "7d"
Explanation: usALEX-5samples - Template
This notebook is executed through 8-spots paper analysis.
For a direct execution, uncomment the cell below.
End of explanation
from fretbursts import *
init_notebook()
from IPython.display import display
Explanation: Load software and filenames definitions
End of explanation
data_dir = './data/singlespot/'
Explanation: Data folder:
End of explanation
import os
data_dir = os.path.abspath(data_dir) + '/'
assert os.path.exists(data_dir), "Path '%s' does not exist." % data_dir
Explanation: Check that the folder exists:
End of explanation
from glob import glob
file_list = sorted(f for f in glob(data_dir + '*.hdf5') if '_BKG' not in f)
file_list
## Selection for POLIMI 2012-12-6 dataset
# file_list.pop(2)
# file_list = file_list[1:-2]
# display(file_list)
# labels = ['22d', '27d', '17d', '12d', '7d']
## Selection for P.E. 2012-12-6 dataset
# file_list.pop(1)
# file_list = file_list[:-1]
# display(file_list)
# labels = ['22d', '27d', '17d', '12d', '7d']
## Selection for POLIMI 2012-11-26 datatset
labels = ['17d', '27d', '7d', '12d', '22d']
files_dict = {lab: fname for lab, fname in zip(labels, file_list)}
files_dict
ph_sel_map = {'all-ph': Ph_sel('all'), 'AexAem': Ph_sel(Aex='Aem')}
ph_sel = ph_sel_map[ph_sel_name]
data_id, ph_sel_name
Explanation: List of data files in data_dir:
End of explanation
d = loader.photon_hdf5(filename=files_dict[data_id])
Explanation: Data load
Initial loading of the data:
End of explanation
d.ph_times_t, d.det_t
Explanation: Laser alternation selection
At this point we have only the timestamps and the detector numbers:
End of explanation
d.add(det_donor_accept=(0, 1), alex_period=4000, D_ON=(2850, 580), A_ON=(900, 2580), offset=0)
Explanation: We need to define some parameters: donor and acceptor ch, excitation period and donor and acceptor excitiations:
End of explanation
plot_alternation_hist(d)
Explanation: We should check if everithing is OK with an alternation histogram:
End of explanation
loader.alex_apply_period(d)
Explanation: If the plot looks good we can apply the parameters with:
End of explanation
d
Explanation: Measurements infos
All the measurement data is in the d variable. We can print it:
End of explanation
d.time_max
Explanation: Or check the measurements duration:
End of explanation
d.calc_bg(bg.exp_fit, time_s=60, tail_min_us='auto', F_bg=1.7)
dplot(d, timetrace_bg)
d.rate_m, d.rate_dd, d.rate_ad, d.rate_aa
Explanation: Compute background
Compute the background using automatic threshold:
End of explanation
from mpl_toolkits.axes_grid1 import AxesGrid
import lmfit
print('lmfit version:', lmfit.__version__)
assert d.dir_ex == 0
assert d.leakage == 0
d.burst_search(m=10, F=6, ph_sel=ph_sel)
print(d.ph_sel, d.num_bursts)
ds_sa = d.select_bursts(select_bursts.naa, th1=30)
ds_sa.num_bursts
Explanation: Burst search and selection
End of explanation
mask = (d.naa[0] - np.abs(d.na[0] + d.nd[0])) > 30
ds_saw = d.select_bursts_mask_apply([mask])
ds_sas0 = ds_sa.select_bursts(select_bursts.S, S2=0.10)
ds_sas = ds_sa.select_bursts(select_bursts.S, S2=0.15)
ds_sas2 = ds_sa.select_bursts(select_bursts.S, S2=0.20)
ds_sas3 = ds_sa.select_bursts(select_bursts.S, S2=0.25)
ds_st = d.select_bursts(select_bursts.size, add_naa=True, th1=30)
ds_sas.num_bursts
dx = ds_sas0
size = dx.na[0] + dx.nd[0]
s_hist, s_bins = np.histogram(size, bins=np.r_[-15 : 25 : 1], density=True)
s_ax = s_bins[:-1] + 0.5*(s_bins[1] - s_bins[0])
plot(s_ax, s_hist, '-o', alpha=0.5)
dx = ds_sas
size = dx.na[0] + dx.nd[0]
s_hist, s_bins = np.histogram(size, bins=np.r_[-15 : 25 : 1], density=True)
s_ax = s_bins[:-1] + 0.5*(s_bins[1] - s_bins[0])
plot(s_ax, s_hist, '-o', alpha=0.5)
dx = ds_sas2
size = dx.na[0] + dx.nd[0]
s_hist, s_bins = np.histogram(size, bins=np.r_[-15 : 25 : 1], density=True)
s_ax = s_bins[:-1] + 0.5*(s_bins[1] - s_bins[0])
plot(s_ax, s_hist, '-o', alpha=0.5)
dx = ds_sas3
size = dx.na[0] + dx.nd[0]
s_hist, s_bins = np.histogram(size, bins=np.r_[-15 : 25 : 1], density=True)
s_ax = s_bins[:-1] + 0.5*(s_bins[1] - s_bins[0])
plot(s_ax, s_hist, '-o', alpha=0.5)
plt.title('(nd + na) for A-only population using different S cutoff');
dx = ds_sa
alex_jointplot(dx);
dplot(ds_sa, hist_S)
Explanation: Preliminary selection and plots
End of explanation
dx = ds_sa
bin_width = 0.03
bandwidth = 0.03
bins = np.r_[-0.2 : 1 : bin_width]
x_kde = np.arange(bins.min(), bins.max(), 0.0002)
## Weights
weights = None
## Histogram fit
fitter_g = mfit.MultiFitter(dx.S)
fitter_g.histogram(bins=np.r_[-0.2 : 1.2 : bandwidth])
fitter_g.fit_histogram(model = mfit.factory_two_gaussians(p1_center=0.1, p2_center=0.4))
S_hist_orig = fitter_g.hist_pdf
S_2peaks = fitter_g.params.loc[0, 'p1_center']
dir_ex_S2p = S_2peaks/(1 - S_2peaks)
print('Fitted direct excitation (na/naa) [2-Gauss]:', dir_ex_S2p)
## KDE
fitter_g.calc_kde(bandwidth=bandwidth)
fitter_g.find_kde_max(x_kde, xmin=0, xmax=0.15)
S_peak = fitter_g.kde_max_pos[0]
dir_ex_S_kde = S_peak/(1 - S_peak)
print('Fitted direct excitation (na/naa) [KDE]: ', dir_ex_S_kde)
fig, ax = plt.subplots(1, 2, figsize=(14, 4.5))
mfit.plot_mfit(fitter_g, ax=ax[0])
ax[0].set_title('2-Gaussians fit (S_fit = %.2f %%)' % (S_2peaks*100))
mfit.plot_mfit(fitter_g, ax=ax[1], plot_model=False, plot_kde=True)
ax[1].set_title('KDE fit (S_fit = %.2f %%)' % (S_peak*100));
## 2-Asym-Gaussian
fitter_ag = mfit.MultiFitter(dx.S)
fitter_ag.histogram(bins=np.r_[-0.2 : 1.2 : bandwidth])
fitter_ag.fit_histogram(model = mfit.factory_two_asym_gaussians(p1_center=0.1, p2_center=0.4))
#print(fitter_ag.fit_obj[0].model.fit_report())
S_2peaks_a = fitter_ag.params.loc[0, 'p1_center']
dir_ex_S2pa = S_2peaks_a/(1 - S_2peaks_a)
print('Fitted direct excitation (na/naa) [2-Gauss]:', dir_ex_S2pa)
fig, ax = plt.subplots(1, 2, figsize=(14, 4.5))
mfit.plot_mfit(fitter_g, ax=ax[0])
ax[0].set_title('2-Gaussians fit (S_fit = %.2f %%)' % (S_2peaks*100))
mfit.plot_mfit(fitter_ag, ax=ax[1])
ax[1].set_title('2-Asym-Gaussians fit (S_fit = %.2f %%)' % (S_2peaks_a*100));
Explanation: A-direct excitation fitting
To extract the A-direct excitation coefficient we need to fit the
S values for the A-only population.
The S value for the A-only population is fitted with different methods:
- Histogram git with 2 Gaussians or with 2 asymmetric Gaussians
(an asymmetric Gaussian has right- and left-side of the peak
decreasing according to different sigmas).
- KDE maximum
In the following we apply these methods using different selection
or weighting schemes to reduce amount of FRET population and make
fitting of the A-only population easier.
Even selection
Here A-only and FRET population are evenly selected.
End of explanation
dx = ds_sa.select_bursts(select_bursts.nd, th1=-100, th2=0)
fitter = bext.bursts_fitter(dx, 'S')
fitter.fit_histogram(model = mfit.factory_gaussian(center=0.1))
S_1peaks_th = fitter.params.loc[0, 'center']
dir_ex_S1p = S_1peaks_th/(1 - S_1peaks_th)
print('Fitted direct excitation (na/naa) [2-Gauss]:', dir_ex_S1p)
mfit.plot_mfit(fitter)
plt.xlim(-0.1, 0.6)
Explanation: Zero threshold on nd
Select bursts with:
$$n_d < 0$$.
End of explanation
dx = ds_sa
## Weights
weights = 1 - mfit.gaussian(dx.S[0], fitter_g.params.loc[0, 'p2_center'], fitter_g.params.loc[0, 'p2_sigma'])
weights[dx.S[0] >= fitter_g.params.loc[0, 'p2_center']] = 0
## Histogram fit
fitter_w1 = mfit.MultiFitter(dx.S)
fitter_w1.weights = [weights]
fitter_w1.histogram(bins=np.r_[-0.2 : 1.2 : bandwidth])
fitter_w1.fit_histogram(model = mfit.factory_two_gaussians(p1_center=0.1, p2_center=0.4))
S_2peaks_w1 = fitter_w1.params.loc[0, 'p1_center']
dir_ex_S2p_w1 = S_2peaks_w1/(1 - S_2peaks_w1)
print('Fitted direct excitation (na/naa) [2-Gauss]:', dir_ex_S2p_w1)
## KDE
fitter_w1.calc_kde(bandwidth=bandwidth)
fitter_w1.find_kde_max(x_kde, xmin=0, xmax=0.15)
S_peak_w1 = fitter_w1.kde_max_pos[0]
dir_ex_S_kde_w1 = S_peak_w1/(1 - S_peak_w1)
print('Fitted direct excitation (na/naa) [KDE]: ', dir_ex_S_kde_w1)
def plot_weights(x, weights, ax):
ax2 = ax.twinx()
x_sort = x.argsort()
ax2.plot(x[x_sort], weights[x_sort], color='k', lw=4, alpha=0.4)
ax2.set_ylabel('Weights');
fig, ax = plt.subplots(1, 2, figsize=(14, 4.5))
mfit.plot_mfit(fitter_w1, ax=ax[0])
mfit.plot_mfit(fitter_g, ax=ax[0], plot_model=False, plot_kde=False)
plot_weights(dx.S[0], weights, ax=ax[0])
ax[0].set_title('2-Gaussians fit (S_fit = %.2f %%)' % (S_2peaks_w1*100))
mfit.plot_mfit(fitter_w1, ax=ax[1], plot_model=False, plot_kde=True)
mfit.plot_mfit(fitter_g, ax=ax[1], plot_model=False, plot_kde=False)
plot_weights(dx.S[0], weights, ax=ax[1])
ax[1].set_title('KDE fit (S_fit = %.2f %%)' % (S_peak_w1*100));
Explanation: Selection 1
Bursts are weighted using $w = f(S)$, where the function $f(S)$ is a
Gaussian fitted to the $S$ histogram of the FRET population.
End of explanation
## Weights
sizes = dx.nd[0] + dx.na[0] #- dir_ex_S_kde_w3*dx.naa[0]
weights = dx.naa[0] - abs(sizes)
weights[weights < 0] = 0
## Histogram
fitter_w4 = mfit.MultiFitter(dx.S)
fitter_w4.weights = [weights]
fitter_w4.histogram(bins=np.r_[-0.2 : 1.2 : bandwidth])
fitter_w4.fit_histogram(model = mfit.factory_two_gaussians(p1_center=0.1, p2_center=0.4))
S_2peaks_w4 = fitter_w4.params.loc[0, 'p1_center']
dir_ex_S2p_w4 = S_2peaks_w4/(1 - S_2peaks_w4)
print('Fitted direct excitation (na/naa) [2-Gauss]:', dir_ex_S2p_w4)
## KDE
fitter_w4.calc_kde(bandwidth=bandwidth)
fitter_w4.find_kde_max(x_kde, xmin=0, xmax=0.15)
S_peak_w4 = fitter_w4.kde_max_pos[0]
dir_ex_S_kde_w4 = S_peak_w4/(1 - S_peak_w4)
print('Fitted direct excitation (na/naa) [KDE]: ', dir_ex_S_kde_w4)
fig, ax = plt.subplots(1, 2, figsize=(14, 4.5))
mfit.plot_mfit(fitter_w4, ax=ax[0])
mfit.plot_mfit(fitter_g, ax=ax[0], plot_model=False, plot_kde=False)
#plot_weights(dx.S[0], weights, ax=ax[0])
ax[0].set_title('2-Gaussians fit (S_fit = %.2f %%)' % (S_2peaks_w4*100))
mfit.plot_mfit(fitter_w4, ax=ax[1], plot_model=False, plot_kde=True)
mfit.plot_mfit(fitter_g, ax=ax[1], plot_model=False, plot_kde=False)
#plot_weights(dx.S[0], weights, ax=ax[1])
ax[1].set_title('KDE fit (S_fit = %.2f %%)' % (S_peak_w4*100));
Explanation: Selection 2
Bursts are here weighted using weights $w$:
$$w = n_{aa} - |n_a + n_d|$$
End of explanation
mask = (d.naa[0] - np.abs(d.na[0] + d.nd[0])) > 30
ds_saw = d.select_bursts_mask_apply([mask])
print(ds_saw.num_bursts)
dx = ds_saw
## Weights
weights = None
## 2-Gaussians
fitter_w5 = mfit.MultiFitter(dx.S)
fitter_w5.histogram(bins=np.r_[-0.2 : 1.2 : bandwidth])
fitter_w5.fit_histogram(model = mfit.factory_two_gaussians(p1_center=0.1, p2_center=0.4))
S_2peaks_w5 = fitter_w5.params.loc[0, 'p1_center']
dir_ex_S2p_w5 = S_2peaks_w5/(1 - S_2peaks_w5)
print('Fitted direct excitation (na/naa) [2-Gauss]:', dir_ex_S2p_w5)
## KDE
fitter_w5.calc_kde(bandwidth=bandwidth)
fitter_w5.find_kde_max(x_kde, xmin=0, xmax=0.15)
S_peak_w5 = fitter_w5.kde_max_pos[0]
S_2peaks_w5_fiterr = fitter_w5.fit_res[0].params['p1_center'].stderr
dir_ex_S_kde_w5 = S_peak_w5/(1 - S_peak_w5)
print('Fitted direct excitation (na/naa) [KDE]: ', dir_ex_S_kde_w5)
## 2-Asym-Gaussians
fitter_w5a = mfit.MultiFitter(dx.S)
fitter_w5a.histogram(bins=np.r_[-0.2 : 1.2 : bandwidth])
fitter_w5a.fit_histogram(model = mfit.factory_two_asym_gaussians(p1_center=0.05, p2_center=0.3))
S_2peaks_w5a = fitter_w5a.params.loc[0, 'p1_center']
dir_ex_S2p_w5a = S_2peaks_w5a/(1 - S_2peaks_w5a)
#print(fitter_w5a.fit_obj[0].model.fit_report(min_correl=0.5))
print('Fitted direct excitation (na/naa) [2-Asym-Gauss]:', dir_ex_S2p_w5a)
fig, ax = plt.subplots(1, 3, figsize=(19, 4.5))
mfit.plot_mfit(fitter_w5, ax=ax[0])
mfit.plot_mfit(fitter_g, ax=ax[0], plot_model=False, plot_kde=False)
ax[0].set_title('2-Gaussians fit (S_fit = %.2f %%)' % (S_2peaks_w5*100))
mfit.plot_mfit(fitter_w5, ax=ax[1], plot_model=False, plot_kde=True)
mfit.plot_mfit(fitter_g, ax=ax[1], plot_model=False, plot_kde=False)
ax[1].set_title('KDE fit (S_fit = %.2f %%)' % (S_peak_w5*100));
mfit.plot_mfit(fitter_w5a, ax=ax[2])
mfit.plot_mfit(fitter_g, ax=ax[2], plot_model=False, plot_kde=False)
ax[2].set_title('2-Asym-Gaussians fit (S_fit = %.2f %%)' % (S_2peaks_w5a*100));
Explanation: Selection 3
Bursts are here selected according to:
$$n_{aa} - |n_a + n_d| > 30$$
End of explanation
sample = data_id
n_bursts_aa = ds_sas.num_bursts[0]
Explanation: Save data to file
End of explanation
variables = ('sample n_bursts_aa dir_ex_S1p dir_ex_S_kde dir_ex_S2p dir_ex_S2pa '
'dir_ex_S2p_w1 dir_ex_S_kde_w1 dir_ex_S_kde_w4 dir_ex_S_kde_w5 dir_ex_S2p_w5 dir_ex_S2p_w5a '
'S_2peaks_w5 S_2peaks_w5_fiterr\n')
Explanation: The following string contains the list of variables to be saved. When saving, the order of the variables is preserved.
End of explanation
variables_csv = variables.replace(' ', ',')
fmt_float = '{%s:.6f}'
fmt_int = '{%s:d}'
fmt_str = '{%s}'
fmt_dict = {**{'sample': fmt_str},
**{k: fmt_int for k in variables.split() if k.startswith('n_bursts')}}
var_dict = {name: eval(name) for name in variables.split()}
var_fmt = ', '.join([fmt_dict.get(name, fmt_float) % name for name in variables.split()]) + '\n'
data_str = var_fmt.format(**var_dict)
print(variables_csv)
print(data_str)
# NOTE: The file name should be the notebook name but with .csv extension
with open('results/usALEX-5samples-PR-raw-dir_ex_aa-fit-%s.csv' % ph_sel_name, 'a') as f:
f.seek(0, 2)
if f.tell() == 0:
f.write(variables_csv)
f.write(data_str)
Explanation: This is just a trick to format the different variables:
End of explanation |
9,457 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
So I chose a min_score from the other jupyter notebook, but when I look at the max scores of investment rounds, the highest scores are always 1-1.5% below the score cutoff threshold. My theory is that perhaps at the highest score percentile, the model is picking B grade loans that back then had a higher interest rate. In this notebook I will look at the distribution of issuance dates within each percentile
Step1: Pull in loans and do monte carlo over the batches again
Step2: Add scores and npv_roi_5 to test set
Step3: find what is a good percentile to cutoff at, and what the distribution for scores is at that percentile
Step4: Say I wanted the 75pctile of the 80th percentile (-0.36289), what grade distribution of loans are those? | Python Code:
import modeling_utils.data_prep as data_prep
from sklearn.externals import joblib
import time
platform = 'lendingclub'
store = pd.HDFStore(
'/Users/justinhsi/justin_tinkering/data_science/lendingclub/{0}_store.h5'.
format(platform),
append=True)
Explanation: So I chose a min_score from the other jupyter notebook, but when I look at the max scores of investment rounds, the highest scores are always 1-1.5% below the score cutoff threshold. My theory is that perhaps at the highest score percentile, the model is picking B grade loans that back then had a higher interest rate. In this notebook I will look at the distribution of issuance dates within each percentile
End of explanation
store.open()
test = store['test_filtered_columns']
loan_npv_rois = store['loan_npv_rois']
default_series = test['target_strict']
store.close()
Explanation: Pull in loans and do monte carlo over the batches again
End of explanation
test_X, test_y = data_prep.process_data_test(test)
test_y = test_y['npv_roi_10'].values
regr = joblib.load('model_dump/model_0.2.1.pkl')
regr_version = '0.2.1'
test_yhat = regr.predict(test_X)
test['0.2.1_scores'] = test_yhat
test['npv_roi_5'] = loan_npv_rois[.05]
Explanation: Add scores and npv_roi_5 to test set
End of explanation
percentiles = np.arange(0,100,1)
def get_ids_at_percentile(trials, available_loans, test, percentiles):
test_copy = test.copy()
results = {}
for perc in percentiles:
results[perc] = []
for trial in tqdm_notebook(np.arange(trials)):
loan_ids = np.random.choice(
test_copy.index.values, available_loans, replace=False)
loans_to_pick_from = test_copy.loc[loan_ids, :]
loans_to_pick_from.sort_values(
'0.2.1_scores', ascending=False, inplace=True)
chunksize = int(len(loans_to_pick_from) / 100)
for k, perc in enumerate(percentiles):
subset = loans_to_pick_from[k * chunksize:(k + 1) * chunksize]
results[perc].extend(subset.index.values.tolist())
for perc in percentiles:
results[perc] = set(results[perc])
return results
# assume there's 200 loans per batch
trials = 20000
available_loans = 200
ids_within_each_percentile = get_ids_at_percentile(trials, available_loans, test, percentiles)
for i in percentiles:
print(i)
test.loc[ids_within_each_percentile[i],'issue_d'].hist(bins=30)
plt.show()
summaries = results.describe()
summaries_scores = results_scores.describe()
plt.figure(figsize=(12,9))
plt.plot(summaries.columns.values, summaries.loc['mean',:], 'o', label='mean')
plt.plot(summaries.columns.values, summaries.loc['25%',:], 'ro', label='25%')
# plt.plot(summaries.columns.values, summaries.loc['50%',:], '-.')
plt.plot(summaries.columns.values, summaries.loc['75%',:], 'ko', label='75%')
plt.title('return per percentile over batches')
plt.legend(loc='best')
plt.xlabel('percentile of 0.2.1_score')
plt.ylabel('npv_roi_5')
plt.show()
summaries_scores
# lets take one sided 99% cofidence interval at score is greater than mean -3 std_dev at 90th percentile
cutoff = summaries_scores.loc['mean', 90] - 3*summaries_scores.loc['std', 90]
Explanation: find what is a good percentile to cutoff at, and what the distribution for scores is at that percentile
End of explanation
picks = test[test['0.2.1_scores'] >= cutoff]
# grade distribution of picks
picks['grade'].value_counts(dropna=False)/len(picks)
# compared to grade distribution of all test loans
test['grade'].value_counts(dropna=False)/len(test)
cutoff
Explanation: Say I wanted the 75pctile of the 80th percentile (-0.36289), what grade distribution of loans are those?
End of explanation |
9,458 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Part of Speech Tags
In this notebook, we learn more about POS tags.
Tagsets and Examples
Universal tagset
Step1: Or this summary table (also c.f. https://www.ling.upenn.edu/courses/Fall_2003/ling001/penn_treebank_pos.html)
Step2: Various algorithms can be used to perform POS tagging. In general, the accuracy is pretty high (state-of-the-art can reach approximately 97%). However, there are still incorrect tags. We demonstrate this below. | Python Code:
import nltk

# Print the full Penn Treebank (upenn) tagset, with a definition and examples for each tag
nltk.help.upenn_tagset()
nltk.help.upenn_tagset('WP$')
nltk.help.upenn_tagset('PDT')
nltk.help.upenn_tagset('DT')
nltk.help.upenn_tagset('POS')
nltk.help.upenn_tagset('RBR')
nltk.help.upenn_tagset('RBS')
nltk.help.upenn_tagset('MD')
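# The lookups above also accept a regular expression, so a whole family of
# related tags can be listed at once -- a small sketch:
nltk.help.upenn_tagset('NN.*')   # all noun tags: NN, NNS, NNP, NNPS
nltk.help.upenn_tagset('VB.*')   # all verb tags: VB, VBD, VBG, VBN, VBP, VBZ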
Explanation: Part of Speech Tags
In this notebook, we learn more about POS tags.
Tagsets and Examples
Universal tagset: (thanks to http://www.tablesgenerator.com/markdown_tables)
| Tag | Meaning | English Examples |
|------|---------------------|----------------------------------------|
| ADJ | adjective | new, good, high, special, big, local |
| ADP | adposition | on, of, at, with, by, into, under |
| ADV | adverb | really, already, still, early, now |
| CONJ | conjunction | and, or, but, if, while, although |
| DET | determiner, article | the, a, some, most, every, no, which |
| NOUN | noun | year, home, costs, time, Africa |
| NUM | numeral | twenty-four, fourth, 1991, 14:24 |
| PRT | particle | at, on, out, over per, that, up, with |
| PRON | pronoun | he, their, her, its, my, I, us |
| VERB | verb | is, say, told, given, playing, would |
| . | punctuation marks | . , ; ! |
| X | other | ersatz, esprit, dunno, gr8, univeristy |
We list the upenn (aka. treebank) tagset below. In addition to that, NLTK also has
* brown: use nltk.help.brown_tagset()
* claws5: use nltk.help.claws5_tagset()
End of explanation
from pprint import pprint
sent = 'Beautiful is better than ugly.'
tokens = nltk.tokenize.word_tokenize(sent)
pos_tags = nltk.pos_tag(tokens)
pprint(pos_tags)
Explanation: Or this summary table (also c.f. https://www.ling.upenn.edu/courses/Fall_2003/ling001/penn_treebank_pos.html)
| Tag | Meaning | Tag | Meaning | Tag | Meaning |
|-----|------------------------------------------|------|-----------------------|-----|---------------------------------------|
| CC | Coordinating conjunction | NNP | Proper noun, singular | VB | Verb, base form |
| CD | Cardinal number | NNPS | Proper noun, plural | VBD | Verb, past tense |
| DT | Determiner | PDT | Predeterminer | VBG | Verb, gerund or present |
| EX | Existential there | POS | Possessive ending | VBN | Verb, past participle |
| FW | Foreign word | PRP | Personal pronoun | VBP | Verb, non-3rd person singular present |
| IN | Preposition or subordinating conjunction | PRP\$ | Possessive pronoun | VBZ | Verb, 3rd person singular |
| JJ | Adjective | RB | Adverb | WDT | Wh-determiner |
| JJR | Adjective, comparative | RBR | Adverb, comparative | WP | Wh-pronoun |
| JJS | Adjective, superlative | RBS | Adverb, superlative | WP\$ | Possessive wh-pronoun |
| LS | List item marker | RP | Particle | WRB | Wh-adverb |
| MD | Modal | SYM | Symbol | | |
| NN | Noun, singular or mass | TO | to | | |
| NNS | Noun, plural | UH | Interjection | | |
Tagging a sentence
End of explanation
truths = [[(u'Pierre', u'NNP'), (u'Vinken', u'NNP'), (u',', u','), (u'61', u'CD'),
(u'years', u'NNS'), (u'old', u'JJ'), (u',', u','), (u'will', u'MD'),
(u'join', u'VB'), (u'the', u'DT'), (u'board', u'NN'), (u'as', u'IN'),
(u'a', u'DT'), (u'nonexecutive', u'JJ'), (u'director', u'NN'),
(u'Nov.', u'NNP'), (u'29', u'CD'), (u'.', u'.')],
[(u'Mr.', u'NNP'), (u'Vinken', u'NNP'), (u'is', u'VBZ'), (u'chairman', u'NN'),
(u'of', u'IN'), (u'Elsevier', u'NNP'), (u'N.V.', u'NNP'), (u',', u','),
(u'the', u'DT'), (u'Dutch', u'NNP'), (u'publishing', u'VBG'),
(u'group', u'NN'), (u'.', u'.'), (u'Rudolph', u'NNP'), (u'Agnew', u'NNP'),
(u',', u','), (u'55', u'CD'), (u'years', u'NNS'), (u'old', u'JJ'),
(u'and', u'CC'), (u'former', u'JJ'), (u'chairman', u'NN'), (u'of', u'IN'),
(u'Consolidated', u'NNP'), (u'Gold', u'NNP'), (u'Fields', u'NNP'),
(u'PLC', u'NNP'), (u',', u','), (u'was', u'VBD'), (u'named', u'VBN'),
(u'a', u'DT'), (u'nonexecutive', u'JJ'), (u'director', u'NN'), (u'of', u'IN'),
(u'this', u'DT'), (u'British', u'JJ'), (u'industrial', u'JJ'),
(u'conglomerate', u'NN'), (u'.', u'.')],
[(u'A', u'DT'), (u'form', u'NN'),
(u'of', u'IN'), (u'asbestos', u'NN'), (u'once', u'RB'), (u'used', u'VBN'),
(u'to', u'TO'), (u'make', u'VB'), (u'Kent', u'NNP'), (u'cigarette', u'NN'),
(u'filters', u'NNS'), (u'has', u'VBZ'), (u'caused', u'VBN'), (u'a', u'DT'),
(u'high', u'JJ'), (u'percentage', u'NN'), (u'of', u'IN'),
(u'cancer', u'NN'), (u'deaths', u'NNS'),
(u'among', u'IN'), (u'a', u'DT'), (u'group', u'NN'), (u'of', u'IN'),
(u'workers', u'NNS'), (u'exposed', u'VBN'), (u'to', u'TO'), (u'it', u'PRP'),
(u'more', u'RBR'), (u'than', u'IN'), (u'30', u'CD'), (u'years', u'NNS'),
(u'ago', u'IN'), (u',', u','), (u'researchers', u'NNS'),
(u'reported', u'VBD'), (u'.', u'.')]]
import pandas as pd
def proj(pair_list, idx):
return [p[idx] for p in pair_list]
data = []
for truth in truths:
sent_toks = proj(truth, 0)
true_tags = proj(truth, 1)
nltk_tags = nltk.pos_tag(sent_toks)
for i in range(len(sent_toks)):
# print('{}\t{}\t{}'.format(sent_toks[i], true_tags[i], nltk_tags[i][1])) # if you do not want to use DataFrame
data.append( (sent_toks[i], true_tags[i], nltk_tags[i][1] ) )
headers = ['token', 'true_tag', 'nltk_tag']
df = pd.DataFrame(data, columns = headers)
df
# this finds out the tokens that the true_tag and nltk_tag are different.
df[df.true_tag != df.nltk_tag]
Explanation: Various algorithms can be used to perform POS tagging. In general, the accuracy is pretty high (state-of-the-art can reach approximately 97%). However, there are still incorrect tags. We demonstrate this below.
End of explanation |
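To put a number on the mismatch shown above, the overall tagging accuracy on these three sentences can be computed from the same DataFrame (a small sketch using the df built in the cell above):
accuracy = (df.true_tag == df.nltk_tag).mean()
print('Tagging accuracy on these sentences: {:.1%}'.format(accuracy))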
9,459 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Text segmentation example
Step1: Training examples
Training examples are defined through the Imageset class of TRIOSlib. The class defines a list of tuples with pairs of input and desired output image paths, and an (optional) binary image mask (see http
Step2: Training
We define a CNN architecture through the CNN_TFClassifier class. The classifier requires the input image shape and number of outputs for initialization. We define the input shape according to the patches extracted from the images, in this example, 11x11, and use a single sigmoid output unit for binary classification
Step3: Applying the operator to a new image | Python Code:
# Required modules
from trios.feature_extractors import RAWFeatureExtractor
import trios
import numpy as np
from TFClassifier import TFClassifier
from CNN_TFClassifier import CNN_TFClassifier
import scipy as sp
import scipy.ndimage
import trios.shortcuts.persistence as p
import matplotlib
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
import matplotlib.cm as cm
%matplotlib inline
Explanation: Text segmentation example
End of explanation
train_imgs = trios.Imageset.read('images/train_images.set')
for i in range(len(train_imgs)):
print("sample %d:" % (i + 1))
print("\t input: %s" % train_imgs[i][0])
print("\t desired output: %s" % train_imgs[i][1])
print("\t mask: %s\n" % train_imgs[i][2])
print("The first pair of input and ouput examples:")
fig = plt.figure(1, figsize=(15,15))
img=mpimg.imread(train_imgs[0][0])
fig.add_subplot(121)
plt.imshow(img, cmap=cm.gray)
plt.title('Input')
img_gt=mpimg.imread(train_imgs[0][1])
fig.add_subplot(122)
plt.title('Desired output')
plt.imshow(img_gt, cmap=cm.gray)
Explanation: Training examples
Training examples are defined through the Imageset class of TRIOSlib. The class defines a list of tuples with pairs of input and desired output image paths, and an (optional) binary image mask (see http://trioslib.sourceforge.net/index.html for more details). In this case, we use a training set with two training examples, and for each example, we define the mask as being the input image, that is, the operator is applied in each white pixel of the input image:
End of explanation
patch_side = 19
num_outputs = 1
win = np.ones((patch_side, patch_side), np.uint8)
cnn_classifier = CNN_TFClassifier((patch_side, patch_side, 1), num_outputs, num_epochs=10, model_dir='cnn_text_segmentation')
op_tf = trios.WOperator(win, TFClassifier(cnn_classifier), RAWFeatureExtractor, batch=True)
op_tf.train(train_imgs)
Explanation: Training
We define a CNN architecture through the CNN_TFClassifier class. The classifier requires the input image shape and number of outputs for initialization. We define the input shape according to the patches extracted from the images, in this example, 19x19, and use a single sigmoid output unit for binary classification: text and non-text classes. Additional (optional) parameters include:
learning_rate (default 1e-4), dropout_prob (default 0.5), and output_activation (default 'sigmoid').
End of explanation
test_img = sp.ndimage.imread('images/veja11.sh50.png', mode='L')
out_img = op_tf.apply(test_img, test_img)
fig = plt.figure(2, figsize=(15,15))
fig.add_subplot(121)
plt.imshow(test_img, cmap=cm.gray)
plt.title('Input')
fig.add_subplot(122)
plt.imshow(out_img, cmap=cm.gray)
plt.title('CNN output')
Explanation: Applying the operator to a new image
End of explanation |
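Because the classifier ends in a sigmoid unit, the operator's output can be thresholded to obtain a binary text mask. A minimal sketch (the half-of-maximum threshold is an assumption, not from the original example):
threshold = out_img.max() / 2  # assumed threshold; adjust to the output's actual scale
binary_out = (out_img > threshold).astype(np.uint8)
plt.imshow(binary_out, cmap=cm.gray)
plt.title('Thresholded CNN output')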
9,460 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Post Training Quantization
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step1: Train and export the model
Step2: For the example, we only trained the model for a single epoch, so it only trains to ~96% accuracy.
Convert to a TFLite model
The savedmodel directory is named with a timestamp. Select the most recent one
Step3: Using the python TocoConverter, the saved model can be converted into a TFLite model.
First load the model using the TocoConverter
Step4: Write it out to a tflite file
Step5: To quantize the model on export, set the post_training_quantize flag
Step6: Note how the resulting file, with post_training_quantize set, is approximately 1/4 the size.
Step7: Run the TFLite models
We can run the TensorFlow Lite model using the python TensorFlow Lite
Interpreter.
load the test data
First let's load the mnist test data to feed to it
Step8: Load the model into an interpreter
Step9: Test the model on one image
Step10: Evaluate the models
Step11: We can repeat the evaluation on the weight quantized model to obtain
Step12: In this example, we have compressed model with no difference in the accuracy.
Optimizing an existing model
We now consider another example. Resnets with pre-activation layers (Resnet-v2) are widely used for vision applications.
Pre-trained frozen graph for resnet-v2-101 is available at the
Tensorflow Lite model repository.
We can convert the frozen graph to a TFLite flatbuffer with quantization by
Step13: The info.txt file lists the input and output names. You can also find them using TensorBoard to visually inspect the graph. | Python Code:
! pip uninstall -y tensorflow
! pip install -U tf-nightly
import tensorflow as tf
tf.enable_eager_execution()
! git clone --depth 1 https://github.com/tensorflow/models
import sys
import os
if sys.version_info.major >= 3:
import pathlib
else:
import pathlib2 as pathlib
# Add `models` to the python path.
models_path = os.path.join(os.getcwd(), "models")
sys.path.append(models_path)
Explanation: Post Training Quantization
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/contrib/lite/tutorials/post_training_quant.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/lite/tutorials/post_training_quant.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
Overview
TensorFlow Lite now supports
converting weights to 8 bit precision as part of model conversion from
tensorflow graphdefs to TFLite's flat buffer format. Weight quantization
achieves a 4x reduction in the model size. In addition, TFLite supports on the
fly quantization and dequantization of activations to allow for:
Using quantized kernels for faster implementation when available.
Mixing of floating-point kernels with quantized kernels for different parts
of the graph.
Note that the activations are always stored in floating point. For ops that
support quantized kernels, the activations are quantized to 8 bits of precision
dynamically prior to processing and are de-quantized to float precision after
processing. Depending on the model being converted, this can give a speedup over
pure floating point computation.
In contrast to
quantization aware training
, the weights are quantized post training and the activations are quantized dynamically
at inference in this method.
Therefore, the model weights are not retrained to compensate for quantization
induced errors. It is important to check the accuracy of the quantized model to
ensure that the degradation is acceptable.
In this tutorial, we train an MNIST model from scratch, check its accuracy in
tensorflow and then convert the saved model into a Tensorflow Lite flatbuffer
with weight quantization. We finally check the
accuracy of the converted model and compare it to the original saved model. We
run the training script mnist.py from
Tensorflow official mnist tutorial.
Building an MNIST model
Setup
End of explanation
saved_models_root = "/tmp/mnist_saved_model"
# The above path addition is not visible to subprocesses, add the path for the subprocess as well.
# Note: channels_last is required here or the conversion may fail.
!PYTHONPATH={models_path} python models/official/mnist/mnist.py --train_epochs=1 --export_dir {saved_models_root} --data_format=channels_last
Explanation: Train and export the model
End of explanation
saved_model_dir = str(sorted(pathlib.Path(saved_models_root).glob("*"))[-1])
saved_model_dir
Explanation: For the example, we only trained the model for a single epoch, so it only trains to ~96% accuracy.
Convert to a TFLite model
The savedmodel directory is named with a timestamp. Select the most recent one:
End of explanation
import tensorflow as tf
tf.enable_eager_execution()
converter = tf.contrib.lite.TocoConverter.from_saved_model(saved_model_dir)
tflite_model = converter.convert()
Explanation: Using the python TocoConverter, the saved model can be converted into a TFLite model.
First load the model using the TocoConverter:
End of explanation
tflite_models_dir = pathlib.Path("/tmp/mnist_tflite_models/")
tflite_models_dir.mkdir(exist_ok=True, parents=True)
tflite_model_file = tflite_models_dir/"mnist_model.tflite"
tflite_model_file.write_bytes(tflite_model)
Explanation: Write it out to a tflite file:
End of explanation
# Note: If you don't have a recent tf-nightly installed, the
# "post_training_quantize" line will have no effect.
tf.logging.set_verbosity(tf.logging.INFO)
converter.post_training_quantize = True
tflite_quant_model = converter.convert()
tflite_model_quant_file = tflite_models_dir/"mnist_model_quant.tflite"
tflite_model_quant_file.write_bytes(tflite_quant_model)
Explanation: To quantize the model on export, set the post_training_quantize flag:
End of explanation
!ls -lh {tflite_models_dir}
Explanation: Note how the resulting file, with post_training_quantize set, is approximately 1/4 the size.
End of explanation
import numpy as np
mnist_train, mnist_test = tf.keras.datasets.mnist.load_data()
images, labels = tf.to_float(mnist_test[0])/255.0, mnist_test[1]
# Note: If you change the batch size, then use
# `tf.contrib.lite.Interpreter.resize_tensor_input` to also change it for
# the interpreter.
mnist_ds = tf.data.Dataset.from_tensor_slices((images, labels)).batch(1)
Explanation: Run the TFLite models
We can run the TensorFlow Lite model using the python TensorFlow Lite
Interpreter.
load the test data
First let's load the mnist test data to feed to it:
End of explanation
interpreter = tf.contrib.lite.Interpreter(model_path=str(tflite_model_file))
interpreter.allocate_tensors()
input_index = interpreter.get_input_details()[0]["index"]
output_index = interpreter.get_output_details()[0]["index"]
tf.logging.set_verbosity(tf.logging.DEBUG)
interpreter_quant = tf.contrib.lite.Interpreter(model_path=str(tflite_model_quant_file))
interpreter_quant.allocate_tensors()
input_index = interpreter_quant.get_input_details()[0]["index"]
output_index = interpreter_quant.get_output_details()[0]["index"]
Explanation: Load the model into an interpreter
End of explanation
for img, label in mnist_ds.take(1):
break
interpreter.set_tensor(input_index, img)
interpreter.invoke()
predictions = interpreter.get_tensor(output_index)
import matplotlib.pylab as plt
plt.imshow(img[0])
template = "True:{true}, predicted:{predict}"
_ = plt.title(template.format(true= str(label[0].numpy()),
predict=str(predictions[0,0])))
plt.grid(False)
Explanation: Test the model on one image
End of explanation
def eval_model(interpreter, mnist_ds):
total_seen = 0
num_correct = 0
for img, label in mnist_ds:
total_seen += 1
interpreter.set_tensor(input_index, img)
interpreter.invoke()
predictions = interpreter.get_tensor(output_index)
if predictions == label.numpy():
num_correct += 1
if total_seen % 500 == 0:
print("Accuracy after %i images: %f" %
(total_seen, float(num_correct) / float(total_seen)))
return float(num_correct) / float(total_seen)
print(eval_model(interpreter_quant, mnist_ds))
Explanation: Evaluate the models
End of explanation
print(eval_model(interpreter_quant, mnist_ds))
Explanation: We can repeat the evaluation on the weight quantized model to obtain:
End of explanation
archive_path = tf.keras.utils.get_file("resnet_v2_101.tgz", "https://storage.googleapis.com/download.tensorflow.org/models/tflite_11_05_08/resnet_v2_101.tgz", extract=True)
archive_path = pathlib.Path(archive_path)
archive_dir = str(archive_path.parent)
Explanation: In this example, we have compressed the model with no difference in accuracy.
Optimizing an existing model
We now consider another example. Resnets with pre-activation layers (Resnet-v2) are widely used for vision applications.
A pre-trained frozen graph for resnet-v2-101 is available at the
Tensorflow Lite model repository.
We can convert the frozen graph to a TFLite flatbuffer with quantization by:
End of explanation
! cat {archive_dir}/resnet_v2_101_299_info.txt
graph_def_file = pathlib.Path(archive_path).parent/"resnet_v2_101_299_frozen.pb"
input_arrays = ["input"]
output_arrays = ["output"]
converter = tf.contrib.lite.TocoConverter.from_frozen_graph(
str(graph_def_file), input_arrays, output_arrays, input_shapes={"input":[1,299,299,3]})
converter.post_training_quantize = True
resnet_tflite_file = graph_def_file.parent/"resnet_v2_101_quantized.tflite"
resnet_tflite_file.write_bytes(converter.convert())
!ls -lh {archive_dir}/*.tflite
Explanation: The info.txt file lists the input and output names. You can also find them using TensorBoard to visually inspect the graph.
End of explanation |
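As a quick sanity check (a sketch, not part of the original tutorial), the converted flatbuffer can be loaded into the TFLite Interpreter to confirm the input and output tensor names and shapes programmatically:
resnet_interpreter = tf.contrib.lite.Interpreter(model_path=str(resnet_tflite_file))
resnet_interpreter.allocate_tensors()
print(resnet_interpreter.get_input_details())
print(resnet_interpreter.get_output_details())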
9,461 |
Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
Sklearn Ridge - Training a Ridge Regression Model
| Python Code::
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error, mean_absolute_error
# initialise & fit a ridge regression model with alpha set to 1
# if the model is overfitting, increase the alpha value
model = Ridge(alpha=1)
model.fit(X_train, y_train)
# create dictionary that contains the feature coefficients
coef = dict(zip(X_train.columns, model.coef_.T))
print(coef)
# make prediction for test data
y_pred = model.predict(X_test)
# evaluate performance
print('RMSE:',mean_squared_error(y_test, y_pred, squared = False))
print('MAE:',mean_absolute_error(y_test, y_pred))
|
9,462 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
^ gor
Step2: Vidimo, da sta $f(0)$ in $f(1)$ različnega predznaka, kar pomeni, da je na intervalu $(0,1)$ ničla.
Step3: Predstavimo bisekcijo še grafično. | Python Code:
f = lambda x: x-2**(-x)
a,b=(0,1) # initial interval
(f(a),f(b))
Explanation: ^ up: Introduction
Solving equations with bisection
Every equation $l(x)=d(x)$ can be transformed into finding a zero of the function
$$f(x)=l(x)-d(x)=0.$$
A zero of a continuous function can be found reliably with bisection. The idea is simple. If the function values at the endpoints of the interval $[a,b]$ have opposite signs, then a zero certainly lies inside the interval $(a,b)$ (an $x\in(a,b)$ for which $f(x)=0$).
Suppose that $f(a)>0$ and $f(b)<0$. If we evaluate the function at the midpoint of the interval $c=\frac{1}{2}(a+b)$, we can halve the interval that is known to contain the zero.
If $f(c)=0$, we have already found the zero and can stop searching.
If $f(c)<0$, then the zero certainly lies in the interval $(a,c)$,
and if $f(c)>0$, the zero certainly lies in the interval $(c,b)$.
If we repeat this procedure long enough, the interval containing the zero can be made arbitrarily small.
Example
Solve the equation
$$x=2^{-x}.$$
Solution
The equation can be transformed into finding a zero of the function
$$f(x) = x-2^{-x}.$$
First we look for an interval that we are certain contains a zero. We are looking for two values of $x$ at which $f(x)$ has opposite signs.
End of explanation
def bisekcija(f,a,b,n):
    """bisekcija(f,a,b,n) uses bisection to compute an interval of width (b-a)/2**n that contains a zero of the function f."""
if n<=0:
return (a,b)
fa, fb = (f(a),f(b))
    assert (fa*fb)<=0, "The signs at the endpoints of the interval [%f,%f] are the same" % (a,b)
    c = (a+b)/2 # midpoint of the interval
fc = f(c)
if fc == 0:
return (c,c)
elif fc*fa<=0:
return bisekcija(f,a,c,n-1)
else:
return bisekcija(f,c,b,n-1)
a,b = (0,1)
# 10 steps of bisection
for i in range(10):
print(bisekcija(f,a,b,i))
Explanation: We see that $f(0)$ and $f(1)$ have opposite signs, which means there is a zero in the interval $(0,1)$.
End of explanation
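After $n$ steps the interval has width $(b-a)/2^n$, so the number of steps needed to reach a given tolerance can be computed in advance. A small sketch (the tolerance value is an arbitrary choice):
import math
tol = 1e-6
n = math.ceil(math.log2((1 - 0) / tol))  # smallest n with (b-a)/2**n <= tol
print(n, bisekcija(f, 0, 1, n))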
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
a , b = (0,1)
t = np.linspace(a,b)
plt.plot(t,f(t))
plt.plot([0,1],[0,0],'k')
for i in range(6):
    plt.plot([a,a],[0,f(a)],'r-o') # left endpoint
    plt.plot([b,b],[0,f(b)],'k-o') # right endpoint
plt.annotate("$a_%d$" % i, xy = (a,0),xytext = (a,0.07*(i+1)),fontsize=12)
plt.annotate("$b_%d$" % i, xy = (b,0),xytext = (b,-0.07*(i+1)),fontsize=12)
a,b = bisekcija(f,a,b,1)
plt.grid()
Explanation: Let us also illustrate bisection graphically.
End of explanation |
9,463 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Generative Adversarial Networks (GANs)
So far in CS231N, all the applications of neural networks that we have explored have been discriminative models that take an input and are trained to produce a labeled output. This has ranged from straightforward classification of image categories to sentence generation (which was still phrased as a classification problem, our labels were in vocabulary space and we’d learned a recurrence to capture multi-word labels). In this notebook, we will expand our repetoire, and build generative models using neural networks. Specifically, we will learn how to build models which generate novel images that resemble a set of training images.
What is a GAN?
In 2014, Goodfellow et al. presented a method for training generative models called Generative Adversarial Networks (GANs for short). In a GAN, we build two different neural networks. Our first network is a traditional classification network, called the discriminator. We will train the discriminator to take images, and classify them as being real (belonging to the training set) or fake (not present in the training set). Our other network, called the generator, will take random noise as input and transform it using a neural network to produce images. The goal of the generator is to fool the discriminator into thinking the images it produced are real.
We can think of this back and forth process of the generator ($G$) trying to fool the discriminator ($D$), and the discriminator trying to correctly classify real vs. fake as a minimax game
Step2: Dataset
GANs are notoriously finicky with hyperparameters, and also require many training epochs. In order to make this assignment approachable without a GPU, we will be working on the MNIST dataset, which is 60,000 training and 10,000 test images. Each picture contains a centered image of white digit on black background (0 through 9). This was one of the first datasets used to train convolutional neural networks and it is fairly easy -- a standard CNN model can easily exceed 99% accuracy.
To simplify our code here, we will use the TensorFlow MNIST wrapper, which downloads and loads the MNIST dataset. See the documentation for more information about the interface. The default parameters will take 5,000 of the training examples and place them into a validation dataset. The data will be saved into a folder called MNIST_data.
Heads-up
Step4: LeakyReLU
In the cell below, you should implement a LeakyReLU. See the class notes (where alpha is small number) or equation (3) in this paper. LeakyReLUs keep ReLU units from dying and are often used in GAN methods (as are maxout units, however those increase model size and therefore are not used in this notebook).
HINT
Step5: Test your leaky ReLU implementation. You should get errors < 1e-10
Step7: Random Noise
Generate a TensorFlow Tensor containing uniform noise from -1 to 1 with shape [batch_size, dim].
Step8: Make sure noise is the correct shape and type
Step10: Discriminator
Our first step is to build a discriminator. You should use the layers in tf.layers to build the model.
All fully connected layers should include bias terms.
Architecture
Step11: Test to make sure the number of parameters in the discriminator is correct
Step13: Generator
Now to build a generator. You should use the layers in tf.layers to construct the model. All fully connected layers should include bias terms.
Architecture
Step14: Test to make sure the number of parameters in the generator is correct
Step16: GAN Loss
Compute the generator and discriminator loss. The generator loss is
Step17: Test your GAN loss. Make sure both the generator and discriminator loss are correct. You should see errors less than 1e-5.
Step19: Optimizing our loss
Make an AdamOptimizer with a 1e-3 learning rate, beta1=0.5 to mininize G_loss and D_loss separately. The trick of decreasing beta was shown to be effective in helping GANs converge in the Improved Techniques for Training GANs paper. In fact, with our current hyperparameters, if you set beta1 to the Tensorflow default of 0.9, there's a good chance your discriminator loss will go to zero and the generator will fail to learn entirely. In fact, this is a common failure mode in GANs; if your D(x) learns to be too fast (e.g. loss goes near zero), your G(z) is never able to learn. Often D(x) is trained with SGD with Momentum or RMSProp instead of Adam, but here we'll use Adam for both D(x) and G(z).
Step20: Putting it all together
Now just a bit of Lego Construction.. Read this section over carefully to understand how we'll be composing the generator and discriminator
Step22: Training a GAN!
Well that wasn't so hard, was it? In the iterations in the low 100s you should see black backgrounds, fuzzy shapes as you approach iteration 1000, and decent shapes, about half of which will be sharp and clearly recognizable as we pass 3000. In our case, we'll simply train D(x) and G(z) with one batch each every iteration. However, papers often experiment with different schedules of training D(x) and G(z), sometimes doing one for more steps than the other, or even training each one until the loss gets "good enough" and then switching to training the other.
Step23: Train your GAN! This should take about 10 minutes on a CPU, or less than a minute on GPU.
Step25: Least Squares GAN
We'll now look at Least Squares GAN, a newer, more stable alternative to the original GAN loss function. For this part, all we have to do is change the loss function and retrain the model. We'll implement equation (9) in the paper, with the generator loss
Step26: Test your LSGAN loss. You should see errors less than 1e-7.
Step27: Create new training steps so we instead minimize the LSGAN loss
Step29: INLINE QUESTION 1
Step31: Generator
For the generator, we will copy the architecture exactly from the InfoGAN paper. See Appendix C.1 MNIST. See the documentation for tf.nn.conv2d_transpose. We are always "training" in GAN mode.
Architecture
Step32: We have to recreate our network since we've changed our functions.
Step33: Train and evaluate a DCGAN
This is the one part of A3 that significantly benefits from using a GPU. It takes 3 minutes on a GPU for the requested five epochs. Or about 50 minutes on a dual core laptop on CPU (feel free to use 3 epochs if you do it on CPU).
Step35: INLINE QUESTION 2 | Python Code:
from __future__ import print_function, division
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# A bunch of utility functions
def show_images(images):
sqrtn = int(np.ceil(np.sqrt(images.shape[0])))
sqrtimg = int(np.ceil(np.sqrt(images.shape[1])))
fig = plt.figure(figsize=(sqrtn, sqrtn))
gs = gridspec.GridSpec(sqrtn, sqrtn)
gs.update(wspace=0.05, hspace=0.05)
for i, img in enumerate(images):
ax = plt.subplot(gs[i])
plt.axis('off')
ax.set_xticklabels([])
ax.set_yticklabels([])
ax.set_aspect('equal')
plt.imshow(img.reshape([sqrtimg,sqrtimg]))
return
def preprocess_img(x):
return 2 * x - 1.0
def deprocess_img(x):
return (x + 1.0) / 2.0
def rel_error(x,y):
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
def count_params():
Count the number of parameters in the current TensorFlow graph
param_count = np.sum([np.prod(x.get_shape().as_list()) for x in tf.global_variables()])
return param_count
def get_session():
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
session = tf.Session(config=config)
return session
answers = np.load('gan-checks-tf.npz')
Explanation: Generative Adversarial Networks (GANs)
So far in CS231N, all the applications of neural networks that we have explored have been discriminative models that take an input and are trained to produce a labeled output. This has ranged from straightforward classification of image categories to sentence generation (which was still phrased as a classification problem, our labels were in vocabulary space and we'd learned a recurrence to capture multi-word labels). In this notebook, we will expand our repertoire, and build generative models using neural networks. Specifically, we will learn how to build models which generate novel images that resemble a set of training images.
What is a GAN?
In 2014, Goodfellow et al. presented a method for training generative models called Generative Adversarial Networks (GANs for short). In a GAN, we build two different neural networks. Our first network is a traditional classification network, called the discriminator. We will train the discriminator to take images, and classify them as being real (belonging to the training set) or fake (not present in the training set). Our other network, called the generator, will take random noise as input and transform it using a neural network to produce images. The goal of the generator is to fool the discriminator into thinking the images it produced are real.
We can think of this back and forth process of the generator ($G$) trying to fool the discriminator ($D$), and the discriminator trying to correctly classify real vs. fake as a minimax game:
$$\underset{G}{\text{minimize}}\; \underset{D}{\text{maximize}}\; \mathbb{E}_{x \sim p_\text{data}}\left[\log D(x)\right] + \mathbb{E}_{z \sim p(z)}\left[\log \left(1-D(G(z))\right)\right]$$
where $x \sim p_\text{data}$ are samples from the input data, $z \sim p(z)$ are the random noise samples, $G(z)$ are the generated images using the neural network generator $G$, and $D$ is the output of the discriminator, specifying the probability of an input being real. In Goodfellow et al., they analyze this minimax game and show how it relates to minimizing the Jensen-Shannon divergence between the training data distribution and the generated samples from $G$.
To optimize this minimax game, we will alternate between taking gradient descent steps on the objective for $G$, and gradient ascent steps on the objective for $D$:
1. update the generator ($G$) to minimize the probability of the discriminator making the correct choice.
2. update the discriminator ($D$) to maximize the probability of the discriminator making the correct choice.
While these updates are useful for analysis, they do not perform well in practice. Instead, we will use a different objective when we update the generator: maximize the probability of the discriminator making the incorrect choice. This small change helps to alleviate problems with the generator gradient vanishing when the discriminator is confident. This is the standard update used in most GAN papers, and was used in the original paper from Goodfellow et al..
In this assignment, we will alternate the following updates:
1. Update the generator ($G$) to maximize the probability of the discriminator making the incorrect choice on generated data:
$$\underset{G}{\text{maximize}}\; \mathbb{E}_{z \sim p(z)}\left[\log D(G(z))\right]$$
2. Update the discriminator ($D$), to maximize the probability of the discriminator making the correct choice on real and generated data:
$$\underset{D}{\text{maximize}}\; \mathbb{E}_{x \sim p_\text{data}}\left[\log D(x)\right] + \mathbb{E}_{z \sim p(z)}\left[\log \left(1-D(G(z))\right)\right]$$
What else is there?
Since 2014, GANs have exploded into a huge research area, with massive workshops, and hundreds of new papers. Compared to other approaches for generative models, they often produce the highest quality samples but are some of the most difficult and finicky models to train (see this github repo that contains a set of 17 hacks that are useful for getting models working). Improving the stability and robustness of GAN training is an open research question, with new papers coming out every day! For a more recent tutorial on GANs, see here. There is also some even more recent exciting work that changes the objective function to Wasserstein distance and yields much more stable results across model architectures: WGAN, WGAN-GP.
GANs are not the only way to train a generative model! For other approaches to generative modeling check out the deep generative model chapter of the Deep Learning book. Another popular way of training neural networks as generative models is Variational Autoencoders (co-discovered here and here). Variational autoencoders combine neural networks with variational inference to train deep generative models. These models tend to be far more stable and easier to train but currently don't produce samples that are as pretty as GANs.
Example pictures of what you should expect (yours might look slightly different):
Setup
End of explanation
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('./cs231n/datasets/MNIST_data', one_hot=False)
# show a batch
show_images(mnist.train.next_batch(16)[0])
Explanation: Dataset
GANs are notoriously finicky with hyperparameters, and also require many training epochs. In order to make this assignment approachable without a GPU, we will be working on the MNIST dataset, which is 60,000 training and 10,000 test images. Each picture contains a centered image of white digit on black background (0 through 9). This was one of the first datasets used to train convolutional neural networks and it is fairly easy -- a standard CNN model can easily exceed 99% accuracy.
To simplify our code here, we will use the TensorFlow MNIST wrapper, which downloads and loads the MNIST dataset. See the documentation for more information about the interface. The default parameters will take 5,000 of the training examples and place them into a validation dataset. The data will be saved into a folder called MNIST_data.
Heads-up: The TensorFlow MNIST wrapper returns images as vectors. That is, they're size (batch, 784). If you want to treat them as images, we have to resize them to (batch,28,28) or (batch,28,28,1). They are also type np.float32 and bounded [0,1].
End of explanation
def leaky_relu(x, alpha=0.01):
Compute the leaky ReLU activation function.
Inputs:
- x: TensorFlow Tensor with arbitrary shape
- alpha: leak parameter for leaky ReLU
Returns:
TensorFlow Tensor with the same shape as x
# TODO: implement leaky ReLU
pass
Explanation: LeakyReLU
In the cell below, you should implement a LeakyReLU. See the class notes (where alpha is small number) or equation (3) in this paper. LeakyReLUs keep ReLU units from dying and are often used in GAN methods (as are maxout units, however those increase model size and therefore are not used in this notebook).
HINT: You should be able to use tf.maximum
End of explanation
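For reference, one way the hint can be turned into code (a sketch; the _sketch name is ours, and this is not necessarily the reference solution):
def leaky_relu_sketch(x, alpha=0.01):
    # max(x, alpha*x) equals x for x >= 0 and alpha*x for x < 0
    return tf.maximum(x, alpha * x)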
def test_leaky_relu(x, y_true):
tf.reset_default_graph()
with get_session() as sess:
y_tf = leaky_relu(tf.constant(x))
y = sess.run(y_tf)
print('Maximum error: %g'%rel_error(y_true, y))
test_leaky_relu(answers['lrelu_x'], answers['lrelu_y'])
Explanation: Test your leaky ReLU implementation. You should get errors < 1e-10
End of explanation
def sample_noise(batch_size, dim):
Generate random uniform noise from -1 to 1.
Inputs:
- batch_size: integer giving the batch size of noise to generate
- dim: integer giving the dimension of the the noise to generate
Returns:
TensorFlow Tensor containing uniform noise in [-1, 1] with shape [batch_size, dim]
# TODO: sample and return noise
pass
Explanation: Random Noise
Generate a TensorFlow Tensor containing uniform noise from -1 to 1 with shape [batch_size, dim].
End of explanation
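One possible implementation using the TF 1.x random ops (a sketch; the _sketch name is ours):
def sample_noise_sketch(batch_size, dim):
    # uniform noise in [-1, 1) with shape [batch_size, dim]
    return tf.random_uniform([batch_size, dim], minval=-1, maxval=1)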
def test_sample_noise():
batch_size = 3
dim = 4
tf.reset_default_graph()
with get_session() as sess:
z = sample_noise(batch_size, dim)
# Check z has the correct shape
assert z.get_shape().as_list() == [batch_size, dim]
# Make sure z is a Tensor and not a numpy array
assert isinstance(z, tf.Tensor)
# Check that we get different noise for different evaluations
z1 = sess.run(z)
z2 = sess.run(z)
assert not np.array_equal(z1, z2)
# Check that we get the correct range
assert np.all(z1 >= -1.0) and np.all(z1 <= 1.0)
print("All tests passed!")
test_sample_noise()
Explanation: Make sure noise is the correct shape and type:
End of explanation
def discriminator(x):
Compute discriminator score for a batch of input images.
Inputs:
- x: TensorFlow Tensor of flattened input images, shape [batch_size, 784]
Returns:
TensorFlow Tensor with shape [batch_size, 1], containing the score
for an image being real for each input image.
with tf.variable_scope("discriminator"):
# TODO: implement architecture
pass
return logits
Explanation: Discriminator
Our first step is to build a discriminator. You should use the layers in tf.layers to build the model.
All fully connected layers should include bias terms.
Architecture:
* Fully connected layer from size 784 to 256
* LeakyReLU with alpha 0.01
* Fully connected layer from 256 to 256
* LeakyReLU with alpha 0.01
* Fully connected layer from 256 to 1
The output of the discriminator should have shape [batch_size, 1], and contain real numbers corresponding to the scores that each of the batch_size inputs is a real image.
End of explanation
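One way to realize the architecture above with tf.layers (a sketch; it assumes leaky_relu has been implemented, and the _sketch names are ours):
def discriminator_sketch(x):
    with tf.variable_scope("discriminator_sketch"):
        h1 = leaky_relu(tf.layers.dense(x, 256), alpha=0.01)   # 784 -> 256
        h2 = leaky_relu(tf.layers.dense(h1, 256), alpha=0.01)  # 256 -> 256
        logits = tf.layers.dense(h2, 1)                        # 256 -> 1
    return logits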
def test_discriminator(true_count=267009):
tf.reset_default_graph()
with get_session() as sess:
y = discriminator(tf.ones((2, 784)))
cur_count = count_params()
if cur_count != true_count:
print('Incorrect number of parameters in discriminator. {0} instead of {1}. Check your achitecture.'.format(cur_count,true_count))
else:
print('Correct number of parameters in discriminator.')
test_discriminator()
Explanation: Test to make sure the number of parameters in the discriminator is correct:
End of explanation
def generator(z):
Generate images from a random noise vector.
Inputs:
- z: TensorFlow Tensor of random noise with shape [batch_size, noise_dim]
Returns:
TensorFlow Tensor of generated images, with shape [batch_size, 784].
with tf.variable_scope("generator"):
# TODO: implement architecture
pass
return img
Explanation: Generator
Now to build a generator. You should use the layers in tf.layers to construct the model. All fully connected layers should include bias terms.
Architecture:
* Fully connected layer from tf.shape(z)[1] (the number of noise dimensions) to 1024
* ReLU
* Fully connected layer from 1024 to 1024
* ReLU
* Fully connected layer from 1024 to 784
* TanH (To restrict the output to be [-1,1])
End of explanation
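One way to realize the generator architecture with tf.layers (a sketch; the _sketch name is ours):
def generator_sketch(z):
    with tf.variable_scope("generator_sketch"):
        h1 = tf.nn.relu(tf.layers.dense(z, 1024))   # noise_dim -> 1024
        h2 = tf.nn.relu(tf.layers.dense(h1, 1024))  # 1024 -> 1024
        img = tf.tanh(tf.layers.dense(h2, 784))     # 1024 -> 784, bounded to [-1, 1]
    return img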
def test_generator(true_count=1858320):
tf.reset_default_graph()
with get_session() as sess:
y = generator(tf.ones((1, 4)))
cur_count = count_params()
if cur_count != true_count:
print('Incorrect number of parameters in generator. {0} instead of {1}. Check your achitecture.'.format(cur_count,true_count))
else:
print('Correct number of parameters in generator.')
test_generator()
Explanation: Test to make sure the number of parameters in the generator is correct:
End of explanation
def gan_loss(logits_real, logits_fake):
Compute the GAN loss.
Inputs:
- logits_real: Tensor, shape [batch_size, 1], output of discriminator
Log probability that the image is real for each real image
- logits_fake: Tensor, shape[batch_size, 1], output of discriminator
Log probability that the image is real for each fake image
Returns:
- D_loss: discriminator loss scalar
- G_loss: generator loss scalar
# TODO: compute D_loss and G_loss
D_loss = None
G_loss = None
pass
return D_loss, G_loss
Explanation: GAN Loss
Compute the generator and discriminator loss. The generator loss is:
$$\ell_G = -\mathbb{E}_{z \sim p(z)}\left[\log D(G(z))\right]$$
and the discriminator loss is:
$$ \ell_D = -\mathbb{E}_{x \sim p_\text{data}}\left[\log D(x)\right] - \mathbb{E}_{z \sim p(z)}\left[\log \left(1-D(G(z))\right)\right]$$
Note that these are negated from the equations presented earlier as we will be minimizing these losses.
HINTS: Use tf.ones_like and tf.zeros_like to generate labels for your discriminator. Use sigmoid_cross_entropy loss to help compute your loss function. Instead of computing the expectation, we will be averaging over elements of the minibatch, so make sure to combine the loss by averaging instead of summing.
End of explanation
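Following the hints, one possible formulation with sigmoid cross-entropy (a sketch; the _sketch name is ours, and this is not necessarily the reference solution):
def gan_loss_sketch(logits_real, logits_fake):
    # discriminator: real images should be classified as 1, fake images as 0
    D_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(
        labels=tf.ones_like(logits_real), logits=logits_real))
    D_loss += tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(
        labels=tf.zeros_like(logits_fake), logits=logits_fake))
    # generator: wants the discriminator to classify its images as real (1)
    G_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(
        labels=tf.ones_like(logits_fake), logits=logits_fake))
    return D_loss, G_loss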
def test_gan_loss(logits_real, logits_fake, d_loss_true, g_loss_true):
tf.reset_default_graph()
with get_session() as sess:
d_loss, g_loss = sess.run(gan_loss(tf.constant(logits_real), tf.constant(logits_fake)))
print("Maximum error in d_loss: %g"%rel_error(d_loss_true, d_loss))
print("Maximum error in g_loss: %g"%rel_error(g_loss_true, g_loss))
test_gan_loss(answers['logits_real'], answers['logits_fake'],
answers['d_loss_true'], answers['g_loss_true'])
Explanation: Test your GAN loss. Make sure both the generator and discriminator loss are correct. You should see errors less than 1e-5.
End of explanation
# TODO: create an AdamOptimizer for D_solver and G_solver
def get_solvers(learning_rate=1e-3, beta1=0.5):
Create solvers for GAN training.
Inputs:
- learning_rate: learning rate to use for both solvers
- beta1: beta1 parameter for both solvers (first moment decay)
Returns:
- D_solver: instance of tf.train.AdamOptimizer with correct learning_rate and beta1
- G_solver: instance of tf.train.AdamOptimizer with correct learning_rate and beta1
D_solver = None
G_solver = None
pass
return D_solver, G_solver
Explanation: Optimizing our loss
Make an AdamOptimizer with a 1e-3 learning rate, beta1=0.5 to minimize G_loss and D_loss separately. The trick of decreasing beta was shown to be effective in helping GANs converge in the Improved Techniques for Training GANs paper. In fact, with our current hyperparameters, if you set beta1 to the Tensorflow default of 0.9, there's a good chance your discriminator loss will go to zero and the generator will fail to learn entirely. In fact, this is a common failure mode in GANs; if your D(x) learns to be too fast (e.g. loss goes near zero), your G(z) is never able to learn. Often D(x) is trained with SGD with Momentum or RMSProp instead of Adam, but here we'll use Adam for both D(x) and G(z).
End of explanation
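For reference, one way to build the two optimizers (a sketch; the _sketch name is ours):
def get_solvers_sketch(learning_rate=1e-3, beta1=0.5):
    D_solver = tf.train.AdamOptimizer(learning_rate=learning_rate, beta1=beta1)
    G_solver = tf.train.AdamOptimizer(learning_rate=learning_rate, beta1=beta1)
    return D_solver, G_solver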
tf.reset_default_graph()
# number of images for each batch
batch_size = 128
# our noise dimension
noise_dim = 96
# placeholder for images from the training dataset
x = tf.placeholder(tf.float32, [None, 784])
# random noise fed into our generator
z = sample_noise(batch_size, noise_dim)
# generated images
G_sample = generator(z)
with tf.variable_scope("") as scope:
#scale images to be -1 to 1
logits_real = discriminator(preprocess_img(x))
# Re-use discriminator weights on new inputs
scope.reuse_variables()
logits_fake = discriminator(G_sample)
# Get the list of variables for the discriminator and generator
D_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, 'discriminator')
G_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, 'generator')
# get our solver
D_solver, G_solver = get_solvers()
# get our loss
D_loss, G_loss = gan_loss(logits_real, logits_fake)
# setup training steps
D_train_step = D_solver.minimize(D_loss, var_list=D_vars)
G_train_step = G_solver.minimize(G_loss, var_list=G_vars)
D_extra_step = tf.get_collection(tf.GraphKeys.UPDATE_OPS, 'discriminator')
G_extra_step = tf.get_collection(tf.GraphKeys.UPDATE_OPS, 'generator')
Explanation: Putting it all together
Now just a bit of Lego construction. Read this section over carefully to understand how we'll be composing the generator and discriminator.
End of explanation
# a giant helper function
def run_a_gan(sess, G_train_step, G_loss, D_train_step, D_loss, G_extra_step, D_extra_step,\
show_every=250, print_every=50, batch_size=128, num_epoch=10):
Train a GAN for a certain number of epochs.
Inputs:
- sess: A tf.Session that we want to use to run our data
- G_train_step: A training step for the Generator
- G_loss: Generator loss
- D_train_step: A training step for the Generator
- D_loss: Discriminator loss
- G_extra_step: A collection of tf.GraphKeys.UPDATE_OPS for generator
- D_extra_step: A collection of tf.GraphKeys.UPDATE_OPS for discriminator
Returns:
Nothing
# compute the number of iterations we need
max_iter = int(mnist.train.num_examples*num_epoch/batch_size)
for it in range(max_iter):
# every show often, show a sample result
if it % show_every == 0:
samples = sess.run(G_sample)
fig = show_images(samples[:16])
plt.show()
print()
# run a batch of data through the network
minibatch,minbatch_y = mnist.train.next_batch(batch_size)
_, D_loss_curr = sess.run([D_train_step, D_loss], feed_dict={x: minibatch})
_, G_loss_curr = sess.run([G_train_step, G_loss])
# print loss every so often.
# We want to make sure D_loss doesn't go to 0
if it % print_every == 0:
print('Iter: {}, D: {:.4}, G:{:.4}'.format(it,D_loss_curr,G_loss_curr))
print('Final images')
samples = sess.run(G_sample)
fig = show_images(samples[:16])
plt.show()
Explanation: Training a GAN!
Well that wasn't so hard, was it? In the iterations in the low 100s you should see black backgrounds, fuzzy shapes as you approach iteration 1000, and decent shapes, about half of which will be sharp and clearly recognizable as we pass 3000. In our case, we'll simply train D(x) and G(z) with one batch each every iteration. However, papers often experiment with different schedules of training D(x) and G(z), sometimes doing one for more steps than the other, or even training each one until the loss gets "good enough" and then switching to training the other.
End of explanation
with get_session() as sess:
sess.run(tf.global_variables_initializer())
run_a_gan(sess,G_train_step,G_loss,D_train_step,D_loss,G_extra_step,D_extra_step)
Explanation: Train your GAN! This should take about 10 minutes on a CPU, or less than a minute on GPU.
End of explanation
def lsgan_loss(score_real, score_fake):
Compute the Least Squares GAN loss.
Inputs:
- score_real: Tensor, shape [batch_size, 1], output of discriminator
score for each real image
- score_fake: Tensor, shape[batch_size, 1], output of discriminator
score for each fake image
Returns:
- D_loss: discriminator loss scalar
- G_loss: generator loss scalar
# TODO: compute D_loss and G_loss
D_loss = None
G_loss = None
pass
return D_loss, G_loss
Explanation: Least Squares GAN
We'll now look at Least Squares GAN, a newer, more stable alternative to the original GAN loss function. For this part, all we have to do is change the loss function and retrain the model. We'll implement equation (9) in the paper, with the generator loss:
$$\ell_G = \frac{1}{2}\mathbb{E}_{z \sim p(z)}\left[\left(D(G(z))-1\right)^2\right]$$
and the discriminator loss:
$$ \ell_D = \frac{1}{2}\mathbb{E}_{x \sim p_\text{data}}\left[\left(D(x)-1\right)^2\right] + \frac{1}{2}\mathbb{E}_{z \sim p(z)}\left[ \left(D(G(z))\right)^2\right]$$
HINTS: Instead of computing the expectation, we will be averaging over elements of the minibatch, so make sure to combine the loss by averaging instead of summing. When plugging in for $D(x)$ and $D(G(z))$ use the direct output from the discriminator (score_real and score_fake).
End of explanation
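One possible translation of the two equations into TensorFlow (a sketch; the _sketch name is ours, and this is not necessarily the reference solution):
def lsgan_loss_sketch(score_real, score_fake):
    D_loss = 0.5 * tf.reduce_mean(tf.square(score_real - 1)) + 0.5 * tf.reduce_mean(tf.square(score_fake))
    G_loss = 0.5 * tf.reduce_mean(tf.square(score_fake - 1))
    return D_loss, G_loss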
def test_lsgan_loss(score_real, score_fake, d_loss_true, g_loss_true):
with get_session() as sess:
d_loss, g_loss = sess.run(
lsgan_loss(tf.constant(score_real), tf.constant(score_fake)))
print("Maximum error in d_loss: %g"%rel_error(d_loss_true, d_loss))
print("Maximum error in g_loss: %g"%rel_error(g_loss_true, g_loss))
test_lsgan_loss(answers['logits_real'], answers['logits_fake'],
answers['d_loss_lsgan_true'], answers['g_loss_lsgan_true'])
Explanation: Test your LSGAN loss. You should see errors less than 1e-7.
End of explanation
D_loss, G_loss = lsgan_loss(logits_real, logits_fake)
D_train_step = D_solver.minimize(D_loss, var_list=D_vars)
G_train_step = G_solver.minimize(G_loss, var_list=G_vars)
with get_session() as sess:
sess.run(tf.global_variables_initializer())
run_a_gan(sess, G_train_step, G_loss, D_train_step, D_loss, G_extra_step, D_extra_step)
Explanation: Create new training steps so we instead minimize the LSGAN loss:
End of explanation
def discriminator(x):
Compute discriminator score for a batch of input images.
Inputs:
- x: TensorFlow Tensor of flattened input images, shape [batch_size, 784]
Returns:
TensorFlow Tensor with shape [batch_size, 1], containing the score
for an image being real for each input image.
with tf.variable_scope("discriminator"):
# TODO: implement architecture
pass
return logits
test_discriminator(1102721)
Explanation: INLINE QUESTION 1:
Describe how the visual quality of the samples changes over the course of training. Do you notice anything about the distribution of the samples? How do the results change across different training runs?
(Write Your Answer In This Cell)
Deep Convolutional GANs
In the first part of the notebook, we implemented an almost direct copy of the original GAN network from Ian Goodfellow. However, this network architecture allows no real spatial reasoning. It is unable to reason about things like "sharp edges" in general because it lacks any convolutional layers. Thus, in this section, we will implement some of the ideas from DCGAN, where we use convolutional networks as our discriminators and generators.
Discriminator
We will use a discriminator inspired by the TensorFlow MNIST classification tutorial, which is able to get above 99% accuracy on the MNIST dataset fairly quickly. Be sure to check the dimensions of x and reshape when needed, fully connected blocks expect [N,D] Tensors while conv2d blocks expect [N,H,W,C] Tensors.
Architecture:
* 32 Filters, 5x5, Stride 1, Leaky ReLU(alpha=0.01)
* Max Pool 2x2, Stride 2
* 64 Filters, 5x5, Stride 1, Leaky ReLU(alpha=0.01)
* Max Pool 2x2, Stride 2
* Flatten
* Fully Connected size 4 x 4 x 64, Leaky ReLU(alpha=0.01)
* Fully Connected size 1
End of explanation
def generator(z):
Generate images from a random noise vector.
Inputs:
- z: TensorFlow Tensor of random noise with shape [batch_size, noise_dim]
Returns:
TensorFlow Tensor of generated images, with shape [batch_size, 784].
with tf.variable_scope("generator"):
# TODO: implement architecture
pass
return img
test_generator(6595521)
Explanation: Generator
For the generator, we will copy the architecture exactly from the InfoGAN paper. See Appendix C.1 MNIST. See the documentation for tf.nn.conv2d_transpose. We are always "training" in GAN mode.
Architecture:
* Fully connected of size 1024, ReLU
* BatchNorm
* Fully connected of size 7 x 7 x 128, ReLU
* BatchNorm
* Resize into Image Tensor
* 64 conv2d^T (transpose) filters of 4x4, stride 2, ReLU
* BatchNorm
* 1 conv2d^T (transpose) filter of 4x4, stride 2, TanH
End of explanation
tf.reset_default_graph()
batch_size = 128
# our noise dimension
noise_dim = 96
# placeholders for images from the training dataset
x = tf.placeholder(tf.float32, [None, 784])
z = sample_noise(batch_size, noise_dim)
# generated images
G_sample = generator(z)
with tf.variable_scope("") as scope:
#scale images to be -1 to 1
logits_real = discriminator(preprocess_img(x))
# Re-use discriminator weights on new inputs
scope.reuse_variables()
logits_fake = discriminator(G_sample)
# Get the list of variables for the discriminator and generator
D_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES,'discriminator')
G_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES,'generator')
D_solver,G_solver = get_solvers()
D_loss, G_loss = gan_loss(logits_real, logits_fake)
D_train_step = D_solver.minimize(D_loss, var_list=D_vars)
G_train_step = G_solver.minimize(G_loss, var_list=G_vars)
D_extra_step = tf.get_collection(tf.GraphKeys.UPDATE_OPS,'discriminator')
G_extra_step = tf.get_collection(tf.GraphKeys.UPDATE_OPS,'generator')
Explanation: We have to recreate our network since we've changed our functions.
End of explanation
with get_session() as sess:
sess.run(tf.global_variables_initializer())
run_a_gan(sess,G_train_step,G_loss,D_train_step,D_loss,G_extra_step,D_extra_step,num_epoch=5)
Explanation: Train and evaluate a DCGAN
This is the one part of A3 that significantly benefits from using a GPU. It takes 3 minutes on a GPU for the requested five epochs. Or about 50 minutes on a dual core laptop on CPU (feel free to use 3 epochs if you do it on CPU).
End of explanation
def discriminator(x):
with tf.variable_scope('discriminator'):
# TODO: implement architecture
pass
return logits
test_discriminator(3411649)
tf.reset_default_graph()
batch_size = 128
# our noise dimension
noise_dim = 96
# placeholders for images from the training dataset
x = tf.placeholder(tf.float32, [None, 784])
z = sample_noise(batch_size, noise_dim)
# generated images
G_sample = generator(z)
with tf.variable_scope("") as scope:
#scale images to be -1 to 1
logits_real = discriminator(preprocess_img(x))
# Re-use discriminator weights on new inputs
scope.reuse_variables()
logits_fake = discriminator(G_sample)
# Get the list of variables for the discriminator and generator
D_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES,'discriminator')
G_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES,'generator')
D_solver, G_solver = get_solvers()
def wgangp_loss(logits_real, logits_fake, batch_size, x, G_sample):
Compute the WGAN-GP loss.
Inputs:
- logits_real: Tensor, shape [batch_size, 1], output of discriminator
Log probability that the image is real for each real image
- logits_fake: Tensor, shape[batch_size, 1], output of discriminator
Log probability that the image is real for each fake image
- batch_size: The number of examples in this batch
- x: the input (real) images for this batch
- G_sample: the generated (fake) images for this batch
Returns:
- D_loss: discriminator loss scalar
- G_loss: generator loss scalar
# TODO: compute D_loss and G_loss
D_loss = None
G_loss = None
# lambda from the paper
lam = 10
# random sample of batch_size (tf.random_uniform)
eps = 0
x_hat = 0
# Gradients of Gradients is kind of tricky!
with tf.variable_scope('',reuse=True) as scope:
grad_D_x_hat = None
grad_norm = None
grad_pen = None
return D_loss, G_loss
D_loss, G_loss = wgangp_loss(logits_real, logits_fake, 128, x, G_sample)
D_train_step = D_solver.minimize(D_loss, var_list=D_vars)
G_train_step = G_solver.minimize(G_loss, var_list=G_vars)
D_extra_step = tf.get_collection(tf.GraphKeys.UPDATE_OPS,'discriminator')
G_extra_step = tf.get_collection(tf.GraphKeys.UPDATE_OPS,'generator')
with get_session() as sess:
sess.run(tf.global_variables_initializer())
run_a_gan(sess,G_train_step,G_loss,D_train_step,D_loss,G_extra_step,D_extra_step,batch_size=128,num_epoch=5)
Explanation: INLINE QUESTION 2:
What differences do you see between the DCGAN results and the original GAN results?
(Write Your Answer In This Cell)
Extra Credit
Be sure you don't destroy your results above, but feel free to copy+paste code to get results below
* For a small amount of extra credit, you can implement additional new GAN loss functions below, provided they converge. See AFI, BiGAN, Softmax GAN, Conditional GAN, InfoGAN, etc. They should converge to get credit.
* Likewise for an improved architecture or using a convolutional GAN (or even implement a VAE)
* For a bigger chunk of extra credit, load the CIFAR10 data (see last assignment) and train a compelling generative model on CIFAR-10
* Demonstrate the value of GANs in building semi-supervised models. In a semi-supervised example, only some fraction of the input data has labels; we can supervise this in MNIST by only training on a few dozen or hundred labeled examples. This was first described in Improved Techniques for Training GANs.
* Something new/cool.
Describe what you did here
WGAN-GP (Small Extra Credit)
Please only attempt after you have completed everything above.
We'll now look at Improved Wasserstein GAN as a newer, more stable alternative to the original GAN loss function. For this part, all we have to do is change the loss function and retrain the model. We'll implement Algorithm 1 in the paper.
You'll also need to use a discriminator and corresponding generator without max-pooling. So we cannot use the one we currently have from DCGAN. Pair the DCGAN Generator (from InfoGAN) with the discriminator from InfoGAN Appendix C.1 MNIST (We don't use Q, simply implement the network up to D). You're also welcome to define a new generator and discriminator in this notebook, in case you want to use the fully-connected pair of D(x) and G(z) you used at the top of this notebook.
Architecture:
* 64 Filters of 4x4, stride 2, LeakyReLU
* 128 Filters of 4x4, stride 2, LeakyReLU
* BatchNorm
* Flatten
* Fully connected 1024, LeakyReLU
* Fully connected size 1
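One way that architecture might be written in the TF1-style API used elsewhere in this notebook is sketched below. Treat it purely as a sketch: the padding, LeakyReLU slope and flattened size are assumptions on top of the bullet list, and it is not the reference solution.
def wgan_discriminator(x):
    # Sketch only: follows the layer list above; 'valid' padding and a 0.01
    # LeakyReLU slope are assumed, not specified by the assignment.
    with tf.variable_scope('discriminator'):
        h = tf.reshape(x, [-1, 28, 28, 1])
        h = tf.layers.conv2d(h, filters=64, kernel_size=4, strides=2)   # 28x28 -> 13x13
        h = tf.maximum(0.01 * h, h)                                     # LeakyReLU
        h = tf.layers.conv2d(h, filters=128, kernel_size=4, strides=2)  # 13x13 -> 5x5
        h = tf.maximum(0.01 * h, h)
        h = tf.layers.batch_normalization(h, training=True)
        h = tf.reshape(h, [-1, 5 * 5 * 128])                            # Flatten
        h = tf.layers.dense(h, 1024)
        h = tf.maximum(0.01 * h, h)
        logits = tf.layers.dense(h, 1)                                  # Fully connected size 1
        return logits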
End of explanation |
9,464 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: This notebook provides an introduction to some of the basic concepts of machine learning.
Let's start by generating some data to work with. Let's say that we have a dataset that has tested people on two continuous measures (processing speed and age) and one discrete measure (diagnosis with any psychiatric disorder). First let's create the continuous data assuming that there is a relationship between these two variables. We will make a function to generate a new dataset, since we will need to do this multiple times.
Step2: What is the simplest story that we could tell about processing speed in these data? Well, we could simply say that the variable is normal with a mean of zero and a standard deviation of 1. Let's see how likely the observed processing speed values are given that set of parameters. First, let's create a function that returns the normal log-likelihood of the data given a set of predicted values.
Step3: We are pretty sure that the mean of our variables is not zero, so let's compute the mean and see if the likelihood of the data is higher.
Step4: What about using the observed variance as well?
Step6: Is there a relation between processing speed and age? Compute the linear regression equation to find out.
Step7: This shows us that linear regression can provide a simple description of a complex dataset - we can describe the entire dataset in 2 numbers. Now let's ask how good this description is for a new dataset generated by the same process
Step8: Now let's do this 100 times and look at how variable the fits are.
Step9: Cross-validation
The results above show that the fit of the model to the observed data overestimates our ability to predict on new data. In many cases we would like to be able to quantify how well our model generalizes to new data, but it's often not possible to collect additional data. The concept of cross-validation provides us with a way to measure how well a model generalizes. The idea is to iteratively train the model on subsets of the data and then test the model on the left-out portion. Let's first see what cross-validation looks like. Perhaps the simplest version to understand is "leave-one-out" crossvalidation, so let's look at that. Here is what the training and test datasets would look like for a dataset with 10 observations; in reality this is way too few observations, but we will use it as an example
Step10: It is often more common to use larger test folds, both to speed up performance (since LOO can require lots of model fitting when there are a large number of observations) and because LOO error estimates can have high variance due to the fact that the models are so highly correlated. This is referred to as K-fold cross-validation; generally we want to choose K somewhere around 5-10. It's generally a good idea to shuffle the order of the observations so that the folds are grouped randomly.
Step11: Now let's perform leave-one-out cross-validation on our original dataset, so that we can compare it to the performance on new datasets. We expect that the correlation between LOO estimates and actual data should be very similar to the Mean R2 for new datasets. We can also plot a histogram of the estimates, to see how they vary across folds.
Step12: Now let's look at the effect of outliers on in-sample correlation and out-of-sample prediction.
Step14: Model selection
Often when we are fitting models to data we have to make decisions about the complexity of the model; after all, if the model has as many parameters as there are data points then we can fit the data exactly, but as we saw above, this model will not generalize very well to other datasets.
To see how we can use cross-validation to select our model complexity, let's generate some data with a certain polynomial order, and see whether crossvalidation can find the right model order.
Step15: Bias-variance tradeoffs
Another way to think about model complexity is in terms of bias-variance tradeoffs. Bias is the average distance between the prediction of our model and the correct value, whereas variance is the average distance between different predictions from the model. In standard statistics classes it is often taken as a given that an unbiased estimate is always best, but within machine learning we will often see that a bit of bias can go a long way towards reducing variance, and that some kinds of bias make particular sense.
Let's start with an example using linear regression. First, we will generate a dataset with 20 variables and 100 observations, but only two of the variables are actually related to the outcome (the rest are simply random noise).
Step16: Now let's fit two different models to the data that we will generate. First, we will fit a standard linear regression model, using ordinary least squares. This is the best linear unbiased estimator for the regression model. We will also fit a model that uses regularization, which places some constraints on the parameter estimates. In this case, we use the Lasso model, which minimizes the sum of squares while also constraining (or penalizing) the sum of the absolute parameter estimates (known as an L1 penalty). The parameter estimates of this model will be biased towards zero, and will be sparse, meaning that most of the estimates will be exactly zero.
One complication of the Lasso model is that we need to select a value for the alpha parameter, which determines how much penalty there will be. We will use crossvalidation within the training data set to do this; the sklearn LassoCV() function does it for us automatically. Let's generate a function that can run both standard regression and Lasso regression.
Step17: Let's run the simulation 100 times and look at the average parameter estimates.
Step18: The prediction error for the Lasso model is substantially less than the error for the linear regression model. What about the parameters? Let's display the mean parameter estimates and their variability across runs.
Step19: Another place where regularization is essential is when your data are wider than they are tall - that is, when you have more variables than observations. This is almost always the case for brain imaging data, when the number of voxels far outweighs the number of subjects or events. In this case, the ordinary least squares solution is ill-posed, meaning that it has an infinite number of possible solutions. The sklearn LinearRegression() estimator will return an estimate even in this case, but the parameter estimates will be highly variable. However, we can use a regularized regression technique to find more robust estimates in this case.
Let's run the same simulation, but now put 1000 variables instead of 20. This will take a few minutes to execute. | Python Code:
import numpy,pandas
%matplotlib inline
import matplotlib.pyplot as plt
import scipy.stats
from sklearn.model_selection import LeaveOneOut,KFold
from sklearn.preprocessing import PolynomialFeatures,scale
from sklearn.linear_model import LinearRegression,LassoCV,Ridge
import seaborn as sns
import statsmodels.formula.api as sm
from statsmodels.tools.tools import add_constant
recreate=True
if recreate:
seed=20698
else:
seed=numpy.ceil(numpy.random.rand()*100000).astype('int')
print(seed)
numpy.random.seed(seed)
def make_continuous_data(mean=[45,100],var=[10,10],cor=-0.6,N=100):
    """generate a synthetic data set with two variables"""
cor=numpy.array([[1.,cor],[cor,1.]])
var=numpy.array([[var[0],0],[0,var[1]]])
cov=var.dot(cor).dot(var)
return numpy.random.multivariate_normal(mean,cov,N)
n=50
d=make_continuous_data(N=n)
y=d[:,1]
plt.scatter(d[:,0],d[:,1])
plt.xlabel('age')
plt.ylabel('processing speed')
print('data R-squared: %f'%numpy.corrcoef(d.T)[0,1]**2)
Explanation: This notebook provides an introduction to some of the basic concepts of machine learning.
Let's start by generating some data to work with. Let's say that we have a dataset that has tested people on two continuous measures (processing speed and age) and one discrete measure (diagnosis with any psychiatric disorder). First let's create the continuous data assuming that there is a relationship between these two variables. We will make a function to generate a new dataset, since we will need to do this multiple times.
End of explanation
def loglike(y,yhat,s2=None,verbose=True):
N = len(y)
SSR = numpy.sum((y-yhat)**2)
if s2 is None:
# use observed stdev
s2 = SSR / float(N)
logLike = -(n/2.)*numpy.log(s2) - (n/2.)*numpy.log(2*numpy.pi) - SSR/(2*s2)
if verbose:
print('SSR:',SSR)
print('s2:',s2)
print('logLike:',logLike)
return logLike
logLike_null=loglike(y,numpy.zeros(len(y)),s2=1)
Explanation: What is the simplest story that we could tell about processing speed in these data? Well, we could simply say that the variable is normal with a mean of zero and a standard deviation of 1. Let's see how likely the observed processing speed values are given that set of parameters. First, let's create a function that returns the normal log-likelihood of the data given a set of predicted values.
End of explanation
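As a quick cross-check of the function above (a sketch; it just re-uses scipy.stats, which is already imported), the same zero-mean, unit-variance log-likelihood can be obtained directly from the normal log-pdf:
# Should reproduce logLike_null computed above (same formula, different route).
print(numpy.sum(scipy.stats.norm.logpdf(y, loc=0, scale=1)))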
mean=numpy.mean(y)
print('mean:',mean)
pred=numpy.ones(len(y))*mean
logLike_mean=loglike(y,pred,s2=1)
Explanation: We are pretty sure that the mean of our variables is not zero, so let's compute the mean and see if the likelihood of the data is higher.
End of explanation
var=numpy.var(y)
print('variance',var)
pred=numpy.ones(len(y))*mean
logLike_mean_std=loglike(y,pred)
Explanation: What about using the observed variance as well?
End of explanation
X=d[:,0]
X=add_constant(X)
result = sm.OLS( y, X ).fit()
print(result.summary())
intercept=result.params[0]
slope=result.params[1]
pred=result.predict(X)
logLike_ols=loglike(y,pred)
plt.scatter(y,pred)
print('processing speed = %f + %f*age'%(intercept,slope))
print('p =%f'%result.pvalues[1])
def get_RMSE(y,pred):
return numpy.sqrt(numpy.mean((y - pred)**2))
def get_R2(y,pred):
    """compute r-squared"""
return numpy.corrcoef(y,pred)[0,1]**2
ax=plt.scatter(d[:,0],d[:,1])
plt.xlabel('age')
plt.ylabel('processing speed')
plt.plot(d[:,0], slope * d[:,0] + intercept, color='red')
# plot residual lines
d_predicted=slope*d[:,0] + intercept
for i in range(d.shape[0]):
x=d[i,0]
y=d[i,1]
plt.plot([x,x],[d_predicted[i],y],color='blue')
RMSE=get_RMSE(d[:,1],d_predicted)
rsq=get_R2(d[:,1],d_predicted)
print('rsquared=%f'%rsq)
Explanation: Is there a relation between processing speed and age? Compute the linear regression equation to find out.
End of explanation
d_new=make_continuous_data(N=n)
d_new_predicted=intercept + slope*d_new[:,0]
RMSE_new=get_RMSE(d_new[:,1],d_new_predicted)
rsq_new=get_R2(d_new[:,1],d_new_predicted)
print('R2 for new data: %f'%rsq_new)
ax=plt.scatter(d_new[:,0],d_new[:,1])
plt.xlabel('age')
plt.ylabel('processing speed')
plt.plot(d_new[:,0], slope * d_new[:,0] + intercept, color='red')
Explanation: This shows us that linear regression can provide a simple description of a complex dataset - we can describe the entire dataset in 2 numbers. Now let's ask how good this description is for a new dataset generated by the same process:
End of explanation
nruns=100
slopes=numpy.zeros(nruns)
intercepts=numpy.zeros(nruns)
rsquared=numpy.zeros(nruns)
fig = plt.figure()
ax = fig.gca()
for i in range(nruns):
data=make_continuous_data(N=n)
slopes[i],intercepts[i],_,_,_=scipy.stats.linregress(data[:,0],data[:,1])
ax.plot(data[:,0], slopes[i] * data[:,0] + intercepts[i], color='red', alpha=0.05)
pred_orig=intercept + slope*data[:,0]
rsquared[i]=get_R2(data[:,1],pred_orig)
print('Original R2: %f'%rsq)
print('Mean R2 for new datasets on original model: %f'%numpy.mean(rsquared))
Explanation: Now let's do this 100 times and look at how variable the fits are.
End of explanation
# initialize the sklearn leave-one-out operator
loo=LeaveOneOut()
for train,test in loo.split(range(10)):
print('train:',train,'test:',test)
Explanation: Cross-validation
The results above show that the fit of the model to the observed data overestimates our ability to predict on new data. In many cases we would like to be able to quantify how well our model generalizes to new data, but it's often not possible to collect additional data. The concept of cross-validation provides us with a way to measure how well a model generalizes. The idea is to iteratively train the model on subsets of the data and then test the model on the left-out portion. Let's first see what cross-validation looks like. Perhaps the simplest version to understand is "leave-one-out" crossvalidation, so let's look at that. Here is what the training and test datasets would look like for a dataset with 10 observations; in reality this is way too few observations, but we will use it as an example
End of explanation
# initialize the sklearn leave-one-out operator
kf=KFold(n_splits=5,shuffle=True)
for train,test in kf.split(range(10)):
print('train:',train,'test:',test)
Explanation: It is often more common to use larger test folds, both to speed up performance (since LOO can require lots of model fitting when there are a large number of observations) and because LOO error estimates can have high variance due to the fact that the models are so highly correlated. This is referred to as K-fold cross-validation; generally we want to choose K somewhere around 5-10. It's generally a good idea to shuffle the order of the observations so that the folds are grouped randomly.
End of explanation
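As an aside, sklearn can also run an entire cross-validation loop in one call. A minimal sketch using cross_val_predict (an extra import assumed here and not used elsewhere in this notebook); it should give essentially the same leave-one-out answer as the explicit loop below:
from sklearn.model_selection import cross_val_predict
lr = LinearRegression()
# predictors must be 2-D, so reshape the single age column
pred_cv = cross_val_predict(lr, d[:, 0].reshape(-1, 1), d[:, 1], cv=LeaveOneOut())
print('R2 for leave-one-out prediction (cross_val_predict): %f' % get_R2(pred_cv, d[:, 1]))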
loo=LeaveOneOut()
slopes_loo=numpy.zeros(n)
intercepts_loo=numpy.zeros(n)
pred=numpy.zeros(n)
ctr=0
for train,test in loo.split(range(n)):
slopes_loo[ctr],intercepts_loo[ctr],_,_,_=scipy.stats.linregress(d[train,0],d[train,1])
    pred[ctr]=intercepts_loo[ctr] + slopes_loo[ctr]*d[test,0]
ctr+=1
print('R2 for leave-one-out prediction: %f'%get_R2(pred,d[:,1]))
plt.figure(figsize=(12,5))
plt.subplot(1,2,1)
_=plt.hist(slopes_loo,20)
plt.xlabel('slope estimate')
plt.ylabel('frequency')
plt.subplot(1,2,2)
_=plt.hist(intercepts_loo,20)
plt.xlabel('intercept estimate')
plt.ylabel('frequency')
Explanation: Now let's perform leave-one-out cross-validation on our original dataset, so that we can compare it to the performance on new datasets. We expect that the correlation between LOO estimates and actual data should be very similar to the Mean R2 for new datasets. We can also plot a histogram of the estimates, to see how they vary across folds.
End of explanation
# add an outlier
data_null=make_continuous_data(N=n,cor=0.0)
outlier_multiplier=2.0
data=numpy.vstack((data_null,[numpy.max(data_null[:,0])*outlier_multiplier,
numpy.max(data_null[:,1])*outlier_multiplier*-1]))
plt.scatter(data[:,0],data[:,1])
slope,intercept,r,p,se=scipy.stats.linregress(data[:,0],data[:,1])
plt.plot([numpy.min(data[:,0]),intercept + slope*numpy.min(data[:,0])],
[numpy.max(data[:,0]),intercept + slope*numpy.max(data[:,0])])
rsq_outlier=r**2
print('R2 for regression with outlier: %f'%rsq_outlier)
loo=LeaveOneOut()
pred_outlier=numpy.zeros(data.shape[0])
ctr=0
for train,test in loo.split(range(data.shape[0])):
s,i,_,_,_=scipy.stats.linregress(data[train,0],data[train,1])
pred_outlier[ctr]=i + s*data[test,0]
ctr+=1
print('R2 for leave-one-out prediction: %f'%get_R2(pred_outlier,data[:,1]))
Explanation: Now let's look at the effect of outliers on in-sample correlation and out-of-sample prediction.
End of explanation
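One common alternative to rejecting outliers outright is a robust loss function. A small sketch using sklearn's HuberRegressor (assumed to be available in the installed sklearn version), fit to the same contaminated dataset:
from sklearn.linear_model import HuberRegressor
huber = HuberRegressor().fit(data[:, 0].reshape(-1, 1), data[:, 1])
print('OLS slope with outlier: %f' % slope)
print('Huber (robust) slope:   %f' % huber.coef_[0])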
# from https://gist.github.com/iizukak/1287876
def gram_schmidt_columns(X):
Q, R = numpy.linalg.qr(X)
return Q
def make_continuous_data_poly(mean=0,var=1,betaval=5,order=1,N=100):
    """generate a synthetic data set with two variables
    allowing polynomial functions up to 5-th order"""
x=numpy.random.randn(N)
x=x-numpy.mean(x)
pf=PolynomialFeatures(5,include_bias=False)
x_poly=gram_schmidt_columns(pf.fit_transform(x[:,numpy.newaxis]))
betas=numpy.zeros(5)
betas[0]=mean
for i in range(order):
betas[i]=betaval
func=x_poly.dot(betas)+numpy.random.randn(N)*var
d=numpy.vstack((x,func)).T
return d,x_poly
n=25
trueorder=2
data,x_poly=make_continuous_data_poly(N=n,order=trueorder)
# fit models of increasing complexity
npolyorders=7
plt.figure()
plt.scatter(data[:,0],data[:,1])
plt.title('fitted data')
xp=numpy.linspace(numpy.min(data[:,0]),numpy.max(data[:,0]),100)
for i in range(npolyorders):
f = numpy.polyfit(data[:,0], data[:,1], i)
p=numpy.poly1d(f)
plt.plot(xp,p(xp))
plt.legend(['%d'%i for i in range(npolyorders)])
# compute in-sample and out-of-sample error using LOO
loo=LeaveOneOut()
pred=numpy.zeros((n,npolyorders))
mean_trainerr=numpy.zeros(npolyorders)
prederr=numpy.zeros(npolyorders)
for i in range(npolyorders):
ctr=0
trainerr=numpy.zeros(n)
for train,test in loo.split(range(data.shape[0])):
f = numpy.polyfit(data[train,0], data[train,1], i)
p=numpy.poly1d(f)
trainerr[ctr]=numpy.sqrt(numpy.mean((data[train,1]-p(data[train,0]))**2))
pred[test,i]=p(data[test,0])
ctr+=1
mean_trainerr[i]=numpy.mean(trainerr)
prederr[i]=numpy.sqrt(numpy.mean((data[:,1]-pred[:,i])**2))
plt.plot(range(npolyorders),mean_trainerr)
plt.plot(range(npolyorders),prederr,color='red')
plt.xlabel('Polynomial order')
plt.ylabel('root mean squared error')
plt.legend(['training error','test error'],loc=9)
plt.plot([numpy.argmin(prederr),numpy.argmin(prederr)],
[numpy.min(mean_trainerr),numpy.max(prederr)],'k--')
plt.text(0.5,numpy.max(mean_trainerr),'underfitting')
plt.text(4.5,numpy.max(mean_trainerr),'overfitting')
print('True order:',trueorder)
print('Order estimated by cross validation:',numpy.argmin(prederr))
Explanation: Model selection
Often when we are fitting models to data we have to make decisions about the complexity of the model; after all, if the model has as many parameters as there are data points then we can fit the data exactly, but as we saw above, this model will not generalize very well to other datasets.
To see how we can use cross-validation to select our model complexity, let's generate some data with a certain polynomial order, and see whether crossvalidation can find the right model order.
End of explanation
def make_larger_dataset(beta,n,sd=1):
X=numpy.random.randn(n,len(beta)) # design matrix
beta=numpy.array(beta)
y=X.dot(beta)+numpy.random.randn(n)*sd
return(y-numpy.mean(y),X)
Explanation: Bias-variance tradeoffs
Another way to think about model complexity is in terms of bias-variance tradeoffs. Bias is the average distance between the prediction of our model and the correct value, whereas variance is the average distance between different predictions from the model. In standard statistics classes it is often taken as a given that an unbiased estimate is always best, but within machine learning we will often see that a bit of bias can go a long way towards reducing variance, and that some kinds of bias make particular sense.
Let's start with an example using linear regression. First, we will generate a dataset with 20 variables and 100 observations, but only two of the variables are actually related to the outcome (the rest are simply random noise).
End of explanation
def compare_lr_lasso(n=100,nvars=20,n_splits=8,sd=1):
beta=numpy.zeros(nvars)
beta[0]=1
beta[1]=-1
y,X=make_larger_dataset(beta,100,sd=1)
kf=KFold(n_splits=n_splits,shuffle=True)
pred_lr=numpy.zeros(X.shape[0])
coefs_lr=numpy.zeros((n_splits,X.shape[1]))
pred_lasso=numpy.zeros(X.shape[0])
coefs_lasso=numpy.zeros((n_splits,X.shape[1]))
lr=LinearRegression()
lasso=LassoCV()
ctr=0
for train,test in kf.split(X):
Xtrain=X[train,:]
Ytrain=y[train]
lr.fit(Xtrain,Ytrain)
lasso.fit(Xtrain,Ytrain)
pred_lr[test]=lr.predict(X[test,:])
coefs_lr[ctr,:]=lr.coef_
pred_lasso[test]=lasso.predict(X[test,:])
coefs_lasso[ctr,:]=lasso.coef_
ctr+=1
prederr_lr=numpy.sum((pred_lr-y)**2)
prederr_lasso=numpy.sum((pred_lasso-y)**2)
return [prederr_lr,prederr_lasso],numpy.mean(coefs_lr,0),numpy.mean(coefs_lasso,0),beta
Explanation: Now let's fit two different models to the data that we will generate. First, we will fit a standard linear regression model, using ordinary least squares. This is the best linear unbiased estimator for the regression model. We will also fit a model that uses regularization, which places some constraints on the parameter estimates. In this case, we use the Lasso model, which minimizes the sum of squares while also constraining (or penalizing) the sum of the absolute parameter estimates (known as an L1 penalty). The parameter estimates of this model will be biased towards zero, and will be sparse, meaning that most of the estimates will be exactly zero.
One complication of the Lasso model is that we need to select a value for the alpha parameter, which determines how much penalty there will be. We will use crossvalidation within the training data set to do this; the sklearn LassoCV() function does it for us automatically. Let's generate a function that can run both standard regression and Lasso regression.
End of explanation
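The penalty that LassoCV settles on can be inspected after fitting. A quick sketch on one dataset of the same kind (alpha_ holds the value selected by its internal cross-validation):
beta_demo = numpy.zeros(20); beta_demo[0] = 1; beta_demo[1] = -1
y_demo, X_demo = make_larger_dataset(beta_demo, 100, sd=1)
lasso_demo = LassoCV().fit(X_demo, y_demo)
print('alpha selected by internal CV: %f' % lasso_demo.alpha_)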
nsims=100
prederr=numpy.zeros((nsims,2))
lrcoef=numpy.zeros((nsims,20))
lassocoef=numpy.zeros((nsims,20))
for i in range(nsims):
prederr[i,:],lrcoef[i,:],lassocoef[i,:],beta=compare_lr_lasso()
print('mean sum of squared error:')
print('linear regression:',numpy.mean(prederr,0)[0])
print('lasso:',numpy.mean(prederr,0)[1])
Explanation: Let's run the simulation 100 times and look at the average parameter estimates.
End of explanation
coefs_df=pandas.DataFrame({'True value':beta,'Regression (mean)':numpy.mean(lrcoef,0),'Lasso (mean)':numpy.mean(lassocoef,0),
'Regression(stdev)':numpy.std(lrcoef,0),'Lasso(stdev)':numpy.std(lassocoef,0)})
coefs_df
Explanation: The prediction error for the Lasso model is substantially less than the error for the linear regression model. What about the parameters? Let's display the mean parameter estimates and their variability across runs.
End of explanation
nsims=100
prederr=numpy.zeros((nsims,2))
lrcoef=numpy.zeros((nsims,1000))
lassocoef=numpy.zeros((nsims,1000))
for i in range(nsims):
prederr[i,:],lrcoef[i,:],lassocoef[i,:],beta=compare_lr_lasso(nvars=1000)
print('mean sum of squared error:')
print('linear regression:',numpy.mean(prederr,0)[0])
print('lasso:',numpy.mean(prederr,0)[1])
Explanation: Another place where regularization is essential is when your data are wider than they are tall - that is, when you have more variables than observations. This is almost always the case for brain imaging data, when the number of voxels far outweighs the number of subjects or events. In this case, the ordinary least squares solution is ill-posed, meaning that it has an infinite number of possible solutions. The sklearn LinearRegression() estimator will return an estimate even in this case, but the parameter estimates will be highly variable. However, we can use a regularized regression technique to find more robust estimates in this case.
Let's run the same simulation, but now put 1000 variables instead of 20. This will take a few minutes to execute.
End of explanation |
9,465 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
I’m trying to solve a simple ODE to visualise the temporal response, which works well for constant input conditions using the new solve_ivp integration API in SciPy. For example: | Problem:
import scipy.integrate
import numpy as np
N0 = 10
time_span = [-0.1, 0.1]
def dN1_dt (t, N1):
return -100 * N1 + np.sin(t)
sol = scipy.integrate.solve_ivp(fun=dN1_dt, t_span=time_span, y0=[N0,]) |
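A small sketch of how the returned solution object might then be inspected and plotted (matplotlib is assumed to be available; sol.t holds the solver's time points and sol.y[0] the corresponding N1 values):
import matplotlib.pyplot as plt
print(sol.t.shape, sol.y.shape)
plt.plot(sol.t, sol.y[0])
plt.xlabel("t")
plt.ylabel("N1(t)")
plt.show()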
9,466 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Demonstration
Step2: Generate noisy samples
Draw real and imaginary part of each sample from independent normal distributions
Step3: Noise samples in complex plane
Step4: Can you see the Gaussian bell?
Where's the Gaussian?
Step5: Visualize Noise
Step6: Autocorrelation function
Step7: Histogram & PDF of normal distribution
Step8: Amplitudes are normally distributed. Try playing with the number of samples and the number of bins.
You may also want to take a look at the histogram of the quadrature component. What do you expect?
Power spectral density using Welch method
Step9: Not quite a constant, why not?
Low-Pass filtered Gaussian noise
Step10: Filter noisy samples
Actually generate low-pass filtered Gaussian noise.
Step11: Autocorrelation function
Step12: If you compare this with the autocorrelation function of the unfiltered noise, can you explain what happened?
Downsampling
Step13: What happened? Why did we take every 5th sample?
Hint | Python Code:
# import necessary libraries
import numpy as np # basic vector / matrix tools, numerical math
from matplotlib import pyplot as plt # Plotting
import seaborn # prettier plots
import math # General math functions
from scipy import signal, stats # Signal analysis, filter design; statistic/stochastics
# Show all plots inline (not in new window), make text large, and fix the figure size
%matplotlib inline
seaborn.set(font_scale=2)
plt.rc("figure", figsize = (1200/72, 400/72), dpi=72)
Explanation: Demonstration: Low-pass Filtered Additive White Gaussian Noise (AWGN)
End of explanation
# Time reference:
T_S = 1e-6 # seconds
f_S = 1 / T_S # sampling frequency
f_nyquist = f_S / 2 # nyquist frequency
print("""Sampling rate: {rate} kHz
Sample period: {period} µs
Nyquist frequency: {nyquist} kHz""".format(rate=f_S/1000, period=T_S*1e6, nyquist=f_nyquist/1000))
N_0 = 1; # W/Hz
variance = N_0*f_S/2 # variance of each component (I, Q)
sigma = math.sqrt(variance) # standard deviation of each component
num_samples = 10000 # number of samples (= length of vector)
complex_noisy_samples = np.random.normal(0, sigma, num_samples)\
+ 1j * np.random.normal(0, sigma, num_samples)
Explanation: Generate noisy samples
Draw real and imaginary part of each sample from independent normal distributions:
End of explanation
plt.axis('square');
plt.scatter(complex_noisy_samples.real, complex_noisy_samples.imag, s=6, alpha=0.3)
plt.xlim(-3000, 3000); plt.ylim(-3000, 3000)
plt.xlabel('In-phase component'); plt.ylabel('Quadrature component')
plt.tight_layout()
Explanation: Noise samples in complex plane
End of explanation
seaborn.jointplot(complex_noisy_samples.real, complex_noisy_samples.imag, kind="reg", size=6, joint_kws={"scatter_kws": {"s":6, "alpha":0.3}})
plt.xlim(-3000, 3000); plt.ylim(-3000, 3000)
plt.xlabel('In-phase component'); plt.ylabel('Quadrature component')
Explanation: Can you see the Gaussian bell?
Where's the Gaussian?
End of explanation
t = np.arange(num_samples) * T_S # vector of sampling time instances
plt.plot(t*1e3, complex_noisy_samples.real, t*1e3, complex_noisy_samples.imag, alpha=0.7)
plt.title("Time Domain of I an Q components")
plt.xlabel('Time / ms'); plt.ylabel('Amplitude'); plt.legend(('inphase', 'quadrature'));
Explanation: Visualize Noise
End of explanation
plt.subplot(121)
plt.acorr(complex_noisy_samples.real, usevlines=True, maxlags=50)
plt.ylabel('$\phi_{\Re\Re}$'); plt.xlabel('lag / Samples'); plt.axis('tight')
plt.subplot(122)
plt.acorr(complex_noisy_samples.imag, usevlines=True, maxlags=50)
plt.ylabel('$\phi_{\Im\Im}$'); plt.xlabel('lag / Samples'); plt.axis('tight');
Explanation: Autocorrelation function
End of explanation
# Plot normalized histogram
plt.hist(complex_noisy_samples.real, bins=40, normed=True, alpha=0.5);
plt.xlabel('Amplitude'); plt.ylabel('Probability')
# Plot normal distribution
x = np.linspace(-3000, 3000, 100)
_ = plt.plot(x,stats.norm.pdf(x,0,sigma))
Explanation: Histogram & PDF of normal distribution
End of explanation
freqs, Pxx = signal.welch(complex_noisy_samples,
fs=f_S, nfft=1024, noverlap=0,
window="hanning", scaling="density",
return_onesided=False)
freqs = np.fft.fftshift(freqs); Pxx = np.fft.fftshift(Pxx)
# Plot PSD, use logarithmic scale:
plt.plot(freqs / 1000, 10*np.log10(np.abs(Pxx)))
plt.ylim(-70, 10)
plt.ylabel('$\Phi_{XX}(f)$ [dB]'); plt.xlabel('$f$ / kHz');
Explanation: Amplitudes are normally distributed. Try playing with the number of samples and the number of bins.
You may also want to take a look at the histogram of the quadrature component. What do you expect?
Power spectral density using Welch method
End of explanation
cutoff_freq = 1e5 # cutoff frequency of lowpass filter: 100 kHz
numtaps = 51 # number of filter taps
# FIR filter design:
lpass_taps = signal.firwin(numtaps, cutoff_freq, nyq=f_nyquist) # Get filter taps
freq_norm, response = signal.freqz(lpass_taps) # filter response in frequency domain
freq = freq_norm * f_nyquist / np.pi
# Plot frequency response:
plt.plot(freq / 1e3, 10*np.log10(np.abs(response)))
plt.title('Frequency response of lowpass filter'); plt.ylabel('$H(f)$ [dB]'); plt.xlabel('$f$ / kHz');
Explanation: Not quite a constant, why not?
Low-Pass filtered Gaussian noise
End of explanation
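Before moving on to filtering, a quick optional check (a sketch) that the average level of the Welch estimate matches the design value; the wiggles in the plot come from averaging only a finite number of segments:
print("mean PSD: %.2f W/Hz (design value N_0 = %.1f W/Hz)" % (np.mean(np.abs(Pxx)), N_0))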
# Filter noise with lowpass:
filtered_x = signal.lfilter(lpass_taps, 1.0, complex_noisy_samples)
# Calculate PSD:
freqs, Pxx = signal.welch(filtered_x,
nfft=1024, fs=f_S, window="hanning", noverlap=0, scaling="density", return_onesided=False)
plt.plot(np.fft.fftshift(freqs),
10*np.log10(np.abs(np.fft.fftshift(Pxx))))
# Plot PSD, use logarithmic scale:
plt.title('PSD of low-pass filtered Gaussian noise');
plt.axis('tight'); plt.ylim(-70, 10); plt.ylabel('$P_{XX}(f)$'); plt.xlabel('$f$ / kHz');
Explanation: Filter noisy samples
Actually generate low-pass filtered Gaussian noise.
End of explanation
plt.acorr(filtered_x.real, usevlines=False, maxlags=50, marker=None, linestyle='-')
plt.acorr(filtered_x.imag, usevlines=False, maxlags=50, marker=None, linestyle='-')
plt.xlabel('lag / Samples')
plt.legend(('inphase', 'quadrature'));
Explanation: Autocorrelation function
End of explanation
# Take every 5th element of filtered signal
factor = 5; filt_x_dwnsampled = filtered_x[::factor]
plt.acorr(filt_x_dwnsampled.real, usevlines=False, maxlags=50, marker=None, linestyle='-')
plt.acorr(filt_x_dwnsampled.imag, usevlines=False, maxlags=50, marker=None, linestyle='-')
plt.title('Autocorrelation function of downsampled signal')
plt.xlabel('lag / Samples'); plt.axis('tight'); plt.legend(('inphase', 'quadrature'));
Explanation: If you compare this with the autocorrelation function of the unfiltered noise, can you explain what happened?
Downsampling
End of explanation
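A short sketch that spells out the hint: after keeping every 5th sample, the sampling rate drops to f_S / factor, so the new Nyquist frequency coincides with the low-pass cutoff chosen above:
print("new Nyquist frequency: %.0f kHz" % (f_S / factor / 2 / 1e3))
print("low-pass cutoff:       %.0f kHz" % (cutoff_freq / 1e3))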
freqs, Pxx = signal.welch(filt_x_dwnsampled,
fs=f_S/factor,nfft=1024, window="hanning", noverlap=0, scaling="density", return_onesided=False)
# Plot PSD, use logarithmic scale:
plt.plot(np.fft.fftshift(freqs),
10*np.log10(np.abs(np.fft.fftshift(Pxx))))
plt.axis('tight'); plt.ylim(-70, 10)
plt.ylabel('$P_{XX}$'); plt.xlabel('$f$ / kHz');
Explanation: What happened? Why did we take every 5th sample?
Hint: take a look at the cutoff frequency of the filter and at the nyquist frequency.
PSD after downsampling
End of explanation |
9,467 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Define a few building blocks for generating our input data
Step1: And a method for plotting and evaluating target algorithms
Note that it supports two algorithms: one ref (reference) and one that is the "algorithm-under-test" (test). This allows you to compare and benchmark one algorithm with another.
Step4: This is our proposed algorithm
Step5: Test input 1
A heartbeat that comes every five to six minutes, throughout the day (changing because of load on the system). It varies a bit - up to 30 seconds between runs even if the load is static.
Step6: Test input 2
A constant heartbeat. Very predictable.
Step7: A more complex varying load
It also shows how the beat frequency was changed from once a minute to once every two minutes. This should of course be detected.
Step8: An increasingly jittery load
Step9: Handling a large spike
If a service is down for a while, and goes up again, how well does our algorithm recover?
We can see that the effects from the spike last very long. Actually, up to 20 samples (which is the window we're using). This could be further improved.
Step10: Handling double beats
Step11: As the algorithm only works on the last samples, how often the spikes occur will affect the filtering.
Step12: Evaluate the actual implementation
Lovebeat provides a mode where you can specify time series data and it will output the result of the "auto algorithm" - perfect for testing and verifying that the implementation (in Go) is identical to the one we have here. | Python Code:
import math
import random
import numpy as np
from matplotlib import pyplot as plt

def constant(v, count = 100):
return [v] * count
def jitter(v, amplitude):
return [y + random.randint(0, amplitude) - amplitude * 0.5 for y in v]
def increasing(from_, to_, count = 100):
return list(np.arange(from_, to_, (to_ - from_) / count))
def sin(base, amplitude, count = 100):
return [base + amplitude * float(np.sin(2.0 * np.pi * t / count)) for t in range(count)]
Explanation: Define a few building blocks for generating our input data
End of explanation
def run_algorithm(algorithm, data):
if algorithm is None:
return []
# If the algorithm returns an array, it's an "online" algorithm and can work in a streaming way
# providing
result = algorithm(data)
if isinstance(result, list):
return result
# It's not. We have to call it with the data one sample at a time.
result = []
ts = []
for i in range(len(data)):
ts.append(data[i])
result.append(algorithm(ts))
return result
def evaluate(data, ref, test):
fig, ax = plt.subplots()
ax.plot(data, color="green", alpha=0.4)
tests = run_algorithm(test, data)
refs = run_algorithm(ref, data)
ax.plot(refs, color="red", ls='-.', label="ref")
ax.plot(tests, color="blue", ls=':', label="test")
ax.legend()
over = [i for i in range(len(data)) if data[i] > tests[i - 1]]
print(over)
Explanation: And a method for plotting and evaluating target algorithms
Note that it supports two algorithms: one ref (reference) and one that is the "algorithm-under-test" (test). This allows you to compare and benchmark one algorithm with another.
End of explanation
def unDef(a):
if math.isnan(a):
return True
if math.isinf(a):
return True
return False
def ewma(series, com):
    """Exponentially weighted moving average, as found in pandas"""
series = [1.0 * s for s in series]
com = 1.0 * com
N = len(series)
ret = [0.0] * N
if N == 0:
return ret
oldw = com / (1 + com)
adj = oldw
ret[0] = series[0] / (1 + com)
for i in range(1, N):
cur = series[i]
prev = ret[i - 1]
if unDef(cur):
ret[i] = prev
else:
if unDef(prev):
ret[i] = cur / (1 + com)
else:
ret[i] = (com * prev + cur) / (1 + com)
for i in range(N):
cur = ret[i]
if not math.isnan(cur):
ret[i] = ret[i] / (1.0 - adj)
adj *= oldw
else:
if i > 0:
ret[i] = ret[i - 1]
return ret
def ewmstd_last(series, com):
    """Exponentially weighted moving standard deviation, last element"""
m1st = ewma(series, com)
m2nd = ewma([v*v for v in series], com)
last = len(m1st) - 1
t = m2nd[last] - m1st[last] * m1st[last]
t *= (1.0 + 2.0 * com) / (2.0 * com)
if t < 0:
return 0
return math.sqrt(t)
def algo_ewma_std(timeseries, factor=3):
ts = timeseries[-20:]
median = ewma(ts, 10)[-1]
s = ewmstd_last(ts, 10)
try:
return (int)(factor * s + median + 1000)
except ValueError:
return 0
def reject_outliers(data, m=2):
u = np.mean(data)
s = np.std(data)
filtered = [e for e in data if (u - m * s < e < u + m * s)]
return filtered
def algo_ewma_std_reject_outliers(timeseries, factor=3):
ts = timeseries[-20:]
ts2 = reject_outliers(ts, 3)
if ts2:
ts = ts2
median = ewma(ts, 10)[-1]
s = ewmstd_last(ts, 10)
s2 = np.std(ts)
try:
return (int)(factor * s + median + 1000)
except ValueError:
return 0
ref = algo_ewma_std
test = algo_ewma_std_reject_outliers
Explanation: This is our proposed algorithm
End of explanation
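Since the ewma helper above is described as matching pandas, here is a small cross-check sketch (pandas itself is assumed to be installed; it is not otherwise used in this notebook). The two printed values should agree closely if the re-implementation is faithful:
import pandas as pd
vals = [30000.0 + 500.0 * (i % 7) for i in range(30)]
print(ewma(vals, 10)[-1])
print(pd.Series(vals).ewm(com=10, adjust=True).mean().iloc[-1])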
data = jitter(sin(5 * 60000, 30000, count=500), 30000)
evaluate(data, ref, test)
Explanation: Test input 1
A heartbeat that comes every five to six minutes, throughout the day (changing because of load on the system). It varies a bit - up to 30 seconds between runs even if the load is static.
End of explanation
data = constant(10000)
evaluate(data, ref, test)
Explanation: Test input 2
A constant heartbeat. Very predictable.
End of explanation
data = jitter(constant(60000, count=50), 0) \
+ jitter(constant(60000, count=50), 0) \
+ jitter(constant(60000, count=200), 4000) \
+ jitter(constant(120000, count=40), 1000) \
+ jitter(increasing(120000, 60000, count=50), 1000) \
+ jitter(constant(60000, count=40), 1000) \
+ jitter(sin(60000, 2000, count=200), 1000)
evaluate(data, ref, test)
Explanation: A more complex varying load
It also shows how the beat frequency was changed from once a minute to once every two minutes. This should of course be detected.
End of explanation
data = jitter(constant(60000, count=50), 0) \
+ jitter(constant(60000, count=50), 1000) \
+ jitter(constant(60000, count=50), 2000) \
+ jitter(constant(60000, count=50), 4000) \
+ jitter(constant(60000, count=200), 8000) \
+ jitter(constant(60000, count=200), 12000) \
+ jitter(constant(60000, count=200), 16000) \
+ jitter(constant(60000, count=200), 20000) \
+ jitter(constant(60000, count=200), 40000)
evaluate(data, ref, test)
Explanation: An increasingly jittery load
End of explanation
data = jitter(constant(30000, 100), 20)
data[50] = 10 * 60000
evaluate(data, ref, test)
Explanation: Handling a large spike
If a service is down for a while, and goes up again, how well does our algorithm recover?
We can see that the effects from the spike last very long. Actually, up to 20 samples (which is the window we're using). This could be further improved.
End of explanation
data = jitter(constant(30000, 100), 20)
data[50] = 1
evaluate(data, ref, test)
data = [70726.2, 67643.6, 67704.3, 68023.6, 68743.4, 67782, 65726.9, 62416.6, 58031.9, 55566.6, 53365.4,
51251.1, 48292.4, 48025.3, 44631.8, 42370.5, 41162.8, 42348.7, 45643.9, 47511.3, 51991.6, 54398.1,
57022.7, 58653.3, 65816.9, 51578.8, 44560.1, 43882.7, 52602.1, 124490, 80847.1, 64499.1, 59527.6,
55655.6, 53964.6, 51776.3, 49120.2, 47653.9, 43989.7, 41220.6, 40241.4, 41943.6, 44538.6, 47536.9,
51360.7, 53624.9, 55779.5, 59666.4, 64510.6, 66311.3, 67667.6, 65440.1, 69684.1, 67364.7, 64902.5,
61256.6, 57664.2, 54714.4, 53109.6, 51618.9, 48533.5, 47920.4, 44887.2, 41486.3, 40844.7, 42928.3,
44422.5, 46622.3, 49935.8, 52484.1, 54765.2, 58326.3, 62001.3, 63918, 65927.1, 65990.6, 66997.1,
66293, 64181.4, 60773.9, 57239.3, 54165.6, 53191.3, 51350.9, 48038, 47631.1, 44122.9, 41345.9,
40352, 42332.6, 44061.6, 46537.8, 50086.9, 53440.9, 55781, 60158.8, 64306.6, 66479.2, 67567.2,
68380.5, 68766.3, 67552.7, 65607.8, 61651.6, 58161.6, 54662.9, 53720.1, 51923.5, 48246.2, 45374.6,
43337.6, 42652.8, 42010.5, 41856.4, 42056.2, 42576.8, 46273.7, 50938.6, 54639.3, 57342.4, 59774.6,
60706, 62984.4, 63172.7, 64042.6, 63036.4, 61925.8, 58798.6, 56394.9, 53050.7, 49905.7, 49781.8,
46577.2, 43242.3, 41269.5, 40422.3, 40587.8, 40530.8, 39845, 40492.2, 43246.3, 46699.2, 51971.9,
54654.5, 56630.8, 57076.9, 58955.2, 59459.6, 60647.5, 59267.8, 58642.9, 56231.5, 54234.9, 51979.6,
50252.1, 47931.1, 45590.3, 44554.6, 42227.4, 39722.7, 39453.8, 41632, 44427.5, 47465.5, 51199.2,
53105.7, 55873.8, 59402.5]
evaluate(data, ref, test)
Explanation: Handling double beats
End of explanation
data = jitter(constant(30000, 100), 20)
data[10] = 10 * 60000
data[15] = 5 * 60000
data[20] = 5 * 60000
data[50] = 10 * 60000
data[75] = 10 * 60000
evaluate(data, ref, test)
Explanation: As the algorithm only works on the last samples, how often the spikes occur will affect the filtering.
End of explanation
LOVEBEAT_PATH = "~/r/go/src/github.com/boivie/lovebeat/lovebeat"
def goimplementation(data):
import os
import subprocess
args = [os.path.expanduser(LOVEBEAT_PATH), '--validate-auto']
proc = subprocess.Popen(args,stdout=subprocess.PIPE,
stdin=subprocess.PIPE, stderr=subprocess.PIPE)
result = []
for value in data:
proc.stdin.write("%d\n" % value)
result.append(int(proc.stdout.readline()))
proc.stdin.write("\n")
proc.stdin.close()
proc.wait()
return result
# NOTE: our 'ref' is actually our 'test' - the one we're benchmarking against!
evaluate(data, test, goimplementation)
Explanation: Evaluate the actual implementation
Lovebeat provides a mode where you can specify time series data and it will output the result of the "auto algorithm" - perfect for testing and verifying that the implementation (in Go) is identical to the one we have here.
End of explanation |
9,468 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using Spectral Fits to Calculate Fluxes
3ML provides a module to calculate the integral flux from a spectral fit and additionally uses the covariance matrix or posterior to calculate the error in the flux value for the integration range selected
Step1: Data setup
Using GRB 080916C as an example, we will fit two models to the time-integrated spectrum to demonstrate the flux calculation capabilities.
Step2: Model setup
We will fit two models
Step3: Fitting
MLE
We fit both models using MLE
Step4: Flux calculation
Total flux
The JointLikelihood objects are passed to the SpectralFlux class.
Then either model_flux or component_flux are called depending on the flux desired.
The astropy system of units is used to specify flux units and an error is raised if the user selects an improper unit. The integration range is specified and the unit for this range can be altered.
Step5: A pandas DataFrame is returned with the sources' flux and flux error (a fitting object can have multiple sources).
We can also change to photon fluxes by specifying the proper flux unit (here 1/(s cm2) instead of erg/(s cm2)). Here, the integration unit is also changed (Hz instead of keV).
Step6: Components
If we want to look at component fluxes, we examine our second fit.
We can first look at the total flux
Step7: Then we can look at our component fluxes. The class automatically solves the error propagation equations to properly propagate the parameter errors into the components
Step8: A dictionary of sources is returned that contains pandas DataFrames listing the fluxes and errors of each component.
NOTE
Step9: Flux Calculation
Total Flux
Just as with MLE, we pass the BayesianAnalysis object to the SpectralFlux class.
Now the propagation of fluxes is done using the posterior of the analysis.
Step10: Once again, a DataFrame is returned. This time, it contains the mean flux from the distribution, the specified level (default is 0.05) credible regions and the flux distribution itself.
One can plot the distribution
Step11: Components
We can also look at components as before. A dictionary of sources is returned, each containing DataFrames of the components' information and distributions.
Step12: We can now easily visualize the flux distributions from the individual components. | Python Code:
%pylab inline
from threeML import *
Explanation: Using Spectral Fits to Calculate Fluxes
3ML provides a module to calculate the integral flux from a spectral fit and additionally uses the covariance matrix or posterior to calculate the error in the flux value for the integration range selected
End of explanation
# os.path.join is a way to generate system-independent
# paths (good for unix, windows, Mac...)
data_dir = os.path.join('gbm','bn080916009')
trigger_number = 'bn080916009'
# Download the data
data_dir_gbm = os.path.join('gbm',trigger_number)
gbm_data = download_GBM_trigger_data(trigger_number,detectors=['n3','b0'],destination_directory=data_dir_gbm,compress_tte=True)
src_selection = '0-71'
nai3 = FermiGBMTTELike('NAI3',
os.path.join(data_dir, "glg_tte_n3_bn080916009_v01.fit.gz"),
os.path.join(data_dir, "glg_cspec_n3_bn080916009_v00.rsp2"),
src_selection,
"-10-0,100-200",
verbose=False)
bgo0 = FermiGBMTTELike('BGO0',
os.path.join(data_dir, "glg_tte_b0_bn080916009_v01.fit.gz"),
os.path.join(data_dir, "glg_cspec_b0_bn080916009_v00.rsp2"),
src_selection,
"-10-0,100-200",
verbose=False)
nai3.set_active_measurements("8.0-30.0", "40.0-950.0")
bgo0.set_active_measurements("250-43000")
Explanation: Data setup
Using GRB 080916C as an example, we will fit two models to the time-integrated spectrum to demonstrate the flux calculation capabilities.
End of explanation
triggerName = 'bn080916009'
ra = 121.8
dec = -61.3
data_list = DataList(nai3,bgo0 )
band = Band()
GRB1 = PointSource( triggerName, ra, dec, spectral_shape=band )
model1 = Model( GRB1 )
pl_bb= Powerlaw() + Blackbody()
GRB2 = PointSource( triggerName, ra, dec, spectral_shape=pl_bb )
model2 = Model( GRB2 )
Explanation: Model setup
We will fit two models: a Band function and a CPL+Blackbody
End of explanation
jl1 = JointLikelihood( model1, data_list, verbose=False )
res = jl1.fit()
jl2 = JointLikelihood( model2, data_list, verbose=False )
res = jl2.fit()
Explanation: Fitting
MLE
We fit both models using MLE
End of explanation
res = calculate_point_source_flux(10,40000,jl1.results,jl2.results,flux_unit='erg/(s cm2)',energy_unit='keV')
Explanation: Flux calculation
Total flux
The JointLikelihood objects are passed to the SpectralFlux class.
Then either model_flux or component_flux are called depending on the flux desired.
The astropy system of units is used to specify flux units and an error is raised if the user selects an improper unit. The integration range is specified and the unit for this range can be altered.
End of explanation
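Because the flux_unit strings are interpreted with astropy units, any equivalent unit can be used. A small sketch (astropy is assumed to be installed, as 3ML depends on it) of building and converting such a unit:
import astropy.units as u
flux_unit = u.erg / (u.s * u.cm ** 2)
print(flux_unit)
print((1 * flux_unit).to(u.W / u.m ** 2))  # same physical quantity in different units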
res = calculate_point_source_flux(10,40000,jl1.results,jl2.results,flux_unit='1/(s cm2)',energy_unit='Hz',equal_tailed=False)
Explanation: A pandas DataFrame is returned with the sources' flux and flux error (a fitting object can have multiple sources).
We can also change to photon fluxes by specifying the proper flux unit (here 1/(s cm2) instead of erg/(s cm2)). Here, the integration unit is also changed (Hz instead of keV).
End of explanation
res = calculate_point_source_flux(10,40000,
jl1.results,jl2.results,
flux_unit='erg/(s cm2)',
energy_unit='keV',use_components=True)
Explanation: Components
If we want to look at component fluxes, we examine our second fit.
We can first look at the total flux:
End of explanation
res = calculate_point_source_flux(10,40000,jl1.results,jl2.results,flux_unit='erg/(s cm2)',
energy_unit='keV',
equal_tailed=False,
use_components=True, components_to_use=['Blackbody','total'])
Explanation: Then we can look at our component fluxes. The class automatically solves the error propagation equations to properly propagate the parameter errors into the components
End of explanation
pl_bb.K_1.prior = Log_uniform_prior(lower_bound = 1E-1, upper_bound = 1E2)
pl_bb.index_1.set_uninformative_prior(Uniform_prior)
pl_bb.K_2.prior = Log_uniform_prior(lower_bound = 1E-6, upper_bound = 1E-3)
pl_bb.kT_2.prior = Log_uniform_prior(lower_bound = 1E0, upper_bound = 1E4)
bayes = BayesianAnalysis(model2,data_list)
_=bayes.sample(30,100,500)
Explanation: A dictionary of sources is returned that contains pandas DataFrames listing the fluxes and errors of each component.
NOTE: With proper error propagation, the total error is not always the sqrt of the sum of component errors squared!
Bayesian fitting
Now we will look at the results when a Bayesian fit is performed.
We set our priors and then sample:
End of explanation
res = calculate_point_source_flux(10,40000,
bayes.results,
flux_unit='erg/(s cm2)',
energy_unit='keV')
Explanation: Flux Calculation
Total Flux
Just as with MLE, we pass the BayesianAnalysis object to the SpectralFlux class.
Now the propagation of fluxes is done using the posterior of the analysis.
End of explanation
from astropy.visualization import quantity_support
quantity_support()
_=hist(res[1]['flux distribution'][0],bins=20)
Explanation: Once again, a DataFrame is returned. This time, it contains the mean flux from the distribution, the specified level (default is 0.05) credible regions and the flux distribution itself.
One can plot the distribution:
End of explanation
res = calculate_point_source_flux(10,40000,
bayes.results,
flux_unit='erg/(s cm2)',
energy_unit='keV',
use_components=True)
Explanation: Components
We can also look at components as before. A dictionary of sources is returned, each containing Dataframes of the components information and distributions.
End of explanation
_=hist(log10(res[1]['flux distribution'][0].value),bins=20)
_=hist(log10(res[1]['flux distribution'][1].value),bins=20)
res = calculate_point_source_flux(10,40000,
bayes.results,jl1.results,jl2.results,
flux_unit='erg/(s cm2)',
energy_unit='keV')
Explanation: We can now easily visualize the flux distributions from the individual components.
End of explanation |
9,469 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
SimpleITK Image Basics <a href="https
Step1: Image Construction
There are a variety of ways to create an image. All images' initial value is well defined as zero.
Step2: Pixel Types
The pixel type is represented as an enumerated type. The following is a table of the enumerated list.
<table>
<tr><td>sitkUInt8</td><td>Unsigned 8 bit integer</td></tr>
<tr><td>sitkInt8</td><td>Signed 8 bit integer</td></tr>
<tr><td>sitkUInt16</td><td>Unsigned 16 bit integer</td></tr>
<tr><td>sitkInt16</td><td>Signed 16 bit integer</td></tr>
<tr><td>sitkUInt32</td><td>Unsigned 32 bit integer</td></tr>
<tr><td>sitkInt32</td><td>Signed 32 bit integer</td></tr>
<tr><td>sitkUInt64</td><td>Unsigned 64 bit integer</td></tr>
<tr><td>sitkInt64</td><td>Signed 64 bit integer</td></tr>
<tr><td>sitkFloat32</td><td>32 bit float</td></tr>
<tr><td>sitkFloat64</td><td>64 bit float</td></tr>
<tr><td>sitkComplexFloat32</td><td>complex number of 32 bit float</td></tr>
<tr><td>sitkComplexFloat64</td><td>complex number of 64 bit float</td></tr>
<tr><td>sitkVectorUInt8</td><td>Multi-component of unsigned 8 bit integer</td></tr>
<tr><td>sitkVectorInt8</td><td>Multi-component of signed 8 bit integer</td></tr>
<tr><td>sitkVectorUInt16</td><td>Multi-component of unsigned 16 bit integer</td></tr>
<tr><td>sitkVectorInt16</td><td>Multi-component of signed 16 bit integer</td></tr>
<tr><td>sitkVectorUInt32</td><td>Multi-component of unsigned 32 bit integer</td></tr>
<tr><td>sitkVectorInt32</td><td>Multi-component of signed 32 bit integer</td></tr>
<tr><td>sitkVectorUInt64</td><td>Multi-component of unsigned 64 bit integer</td></tr>
<tr><td>sitkVectorInt64</td><td>Multi-component of signed 64 bit integer</td></tr>
<tr><td>sitkVectorFloat32</td><td>Multi-component of 32 bit float</td></tr>
<tr><td>sitkVectorFloat64</td><td>Multi-component of 64 bit float</td></tr>
<tr><td>sitkLabelUInt8</td><td>RLE label of unsigned 8 bit integers</td></tr>
<tr><td>sitkLabelUInt16</td><td>RLE label of unsigned 16 bit integers</td></tr>
<tr><td>sitkLabelUInt32</td><td>RLE label of unsigned 32 bit integers</td></tr>
<tr><td>sitkLabelUInt64</td><td>RLE label of unsigned 64 bit integers</td></tr>
</table>
There is also sitkUnknown, which is used for undefined or erroneous pixel ID's. It has a value of -1.
The 64-bit integer types are not available on all distributions. When not available the value is sitkUnknown.
More information about the Image class can be obtained in the Docstring
SimpleITK classes and functions have the Docstrings derived from the C++ definitions and the Doxygen documentation.
Step3: Accessing Attributes
If you are familiar with ITK, then these methods will follow your expectations
Step4: Note
Step5: Since the dimension and pixel type of a SimpleITK image is determined at run-time accessors are needed.
Step6: What is the depth of a 2D image?
Step7: What is the dimension and size of a Vector image?
Step8: For certain file types such as DICOM, additional information about the image is contained in the meta-data dictionary.
Step9: Accessing Pixels
There are the member functions GetPixel and SetPixel which provide an ITK-like interface for pixel access.
Step10: Conversion between numpy and SimpleITK
Step11: The order of index and dimensions needs careful attention during conversion
ITK's Image class does not have a bracket operator. It has a GetPixel which takes an ITK Index object as an argument, which is ordered as (x,y,z). This is the convention that SimpleITK's Image class uses for the GetPixel method and slicing operator as well. In numpy, an array is indexed in the opposite order (z,y,x). Also note that the access to channels is different. In SimpleITK you do not access the channel directly, rather the pixel value representing all channels for the specific pixel is returned and you then access the channel for that pixel. In the numpy array you are accessing the channel directly.
Step12: Are we still dealing with Image, because I haven't seen one yet...
While SimpleITK does not do visualization, it does contain a built-in Show method. This function writes the image out to disk and then launches a program for visualization. By default it is configured to use ImageJ, because it readily supports all the image types which SimpleITK has and loads them very quickly. However, it's easily customizable by setting environment variables.
Step13: By converting into a numpy array, matplotlib can be used for visualization for integration into the scientific python environment. | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import SimpleITK as sitk
Explanation: SimpleITK Image Basics <a href="https://mybinder.org/v2/gh/InsightSoftwareConsortium/SimpleITK-Notebooks/master?filepath=Python%2F01_Image_Basics.ipynb"><img style="float: right;" src="https://mybinder.org/badge_logo.svg"></a>
This document will give a brief orientation to the SimpleITK Image class.
First we import the SimpleITK Python module. By convention our module is imported into the shorter and more Pythonic "sitk" local name.
End of explanation
image = sitk.Image(256, 128, 64, sitk.sitkInt16)
image_2D = sitk.Image(64, 64, sitk.sitkFloat32)
image_2D = sitk.Image([32, 32], sitk.sitkUInt32)
image_RGB = sitk.Image([128, 128], sitk.sitkVectorUInt8, 3)
Explanation: Image Construction
There are a variety of ways to create an image. All images' initial value is well defined as zero.
End of explanation
help(image)
Explanation: Pixel Types
The pixel type is represented as an enumerated type. The following is a table of the enumerated list.
<table>
<tr><td>sitkUInt8</td><td>Unsigned 8 bit integer</td></tr>
<tr><td>sitkInt8</td><td>Signed 8 bit integer</td></tr>
<tr><td>sitkUInt16</td><td>Unsigned 16 bit integer</td></tr>
<tr><td>sitkInt16</td><td>Signed 16 bit integer</td></tr>
<tr><td>sitkUInt32</td><td>Unsigned 32 bit integer</td></tr>
<tr><td>sitkInt32</td><td>Signed 32 bit integer</td></tr>
<tr><td>sitkUInt64</td><td>Unsigned 64 bit integer</td></tr>
<tr><td>sitkInt64</td><td>Signed 64 bit integer</td></tr>
<tr><td>sitkFloat32</td><td>32 bit float</td></tr>
<tr><td>sitkFloat64</td><td>64 bit float</td></tr>
<tr><td>sitkComplexFloat32</td><td>complex number of 32 bit float</td></tr>
<tr><td>sitkComplexFloat64</td><td>complex number of 64 bit float</td></tr>
<tr><td>sitkVectorUInt8</td><td>Multi-component of unsigned 8 bit integer</td></tr>
<tr><td>sitkVectorInt8</td><td>Multi-component of signed 8 bit integer</td></tr>
<tr><td>sitkVectorUInt16</td><td>Multi-component of unsigned 16 bit integer</td></tr>
<tr><td>sitkVectorInt16</td><td>Multi-component of signed 16 bit integer</td></tr>
<tr><td>sitkVectorUInt32</td><td>Multi-component of unsigned 32 bit integer</td></tr>
<tr><td>sitkVectorInt32</td><td>Multi-component of signed 32 bit integer</td></tr>
<tr><td>sitkVectorUInt64</td><td>Multi-component of unsigned 64 bit integer</td></tr>
<tr><td>sitkVectorInt64</td><td>Multi-component of signed 64 bit integer</td></tr>
<tr><td>sitkVectorFloat32</td><td>Multi-component of 32 bit float</td></tr>
<tr><td>sitkVectorFloat64</td><td>Multi-component of 64 bit float</td></tr>
<tr><td>sitkLabelUInt8</td><td>RLE label of unsigned 8 bit integers</td></tr>
<tr><td>sitkLabelUInt16</td><td>RLE label of unsigned 16 bit integers</td></tr>
<tr><td>sitkLabelUInt32</td><td>RLE label of unsigned 32 bit integers</td></tr>
<tr><td>sitkLabelUInt64</td><td>RLE label of unsigned 64 bit integers</td></tr>
</table>
There is also sitkUnknown, which is used for undefined or erroneous pixel ID's. It has a value of -1.
The 64-bit integer types are not available on all distributions. When not available the value is sitkUnknown.
More information about the Image class can be obtained in the Docstring
SimpleITK classes and functions have the Docstrings derived from the C++ definitions and the Doxygen documentation.
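As a quick supplementary illustration (not part of the original notebook), an image can be converted between pixel types with Cast and the result inspected:
# Convert the 16-bit integer image created above to 32-bit float and check its type.
image_float = sitk.Cast(image, sitk.sitkFloat32)
print(image_float.GetPixelIDTypeAsString())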
End of explanation
print(image.GetSize())
print(image.GetOrigin())
print(image.GetSpacing())
print(image.GetDirection())
print(image.GetNumberOfComponentsPerPixel())
Explanation: Accessing Attributes
If you are familiar with ITK, then these methods will follow your expectations:
End of explanation
print(image.GetWidth())
print(image.GetHeight())
print(image.GetDepth())
Explanation: Note: The starting index of a SimpleITK Image is always 0. If the output of an ITK filter has non-zero starting index, then the index will be set to 0, and the origin adjusted accordingly.
The sizes of the image's dimensions have explicit accessors:
End of explanation
print(image.GetDimension())
print(image.GetPixelIDValue())
print(image.GetPixelIDTypeAsString())
Explanation: Since the dimension and pixel type of a SimpleITK image are determined at run-time, accessors are needed.
End of explanation
print(image_2D.GetSize())
print(image_2D.GetDepth())
Explanation: What is the depth of a 2D image?
End of explanation
print(image_RGB.GetDimension())
print(image_RGB.GetSize())
print(image_RGB.GetNumberOfComponentsPerPixel())
Explanation: What is the dimension and size of a Vector image?
End of explanation
for key in image.GetMetaDataKeys():
print(f'"{key}":"{image.GetMetaData(key)}"')
Explanation: For certain file types such as DICOM, additional information about the image is contained in the meta-data dictionary.
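The image constructed above has an empty dictionary; as a hedged sketch (the file name below is only a placeholder), an image read from a DICOM file would carry its tags here:
# 'slice.dcm' is a hypothetical file path used for illustration.
dicom_image = sitk.ReadImage('slice.dcm')
if dicom_image.HasMetaDataKey('0010|0010'):  # DICOM Patient's Name tag
    print(dicom_image.GetMetaData('0010|0010'))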
End of explanation
help(image.GetPixel)
print(image.GetPixel(0, 0, 0))
image.SetPixel(0, 0, 0, 1)
print(image.GetPixel(0, 0, 0))
print(image[0, 0, 0])
image[0, 0, 0] = 10
print(image[0, 0, 0])
Explanation: Accessing Pixels
There are the member functions GetPixel and SetPixel which provide an ITK-like interface for pixel access.
End of explanation
nda = sitk.GetArrayFromImage(image)
print(nda)
help(sitk.GetArrayFromImage)
# Get a view of the image data as a numpy array, useful for display
nda = sitk.GetArrayViewFromImage(image)
nda = sitk.GetArrayFromImage(image_RGB)
img = sitk.GetImageFromArray(nda)
img.GetSize()
help(sitk.GetImageFromArray)
img = sitk.GetImageFromArray(nda, isVector=True)
print(img)
Explanation: Conversion between numpy and SimpleITK
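One caveat worth adding here (our note, not the original author's): GetImageFromArray only copies the pixel buffer, so origin, spacing and direction revert to defaults; they can be restored from a reference image:
# Round-trip through numpy and copy the spatial metadata back explicitly.
round_trip = sitk.GetImageFromArray(sitk.GetArrayFromImage(image))
round_trip.CopyInformation(image)  # the sizes of the two images must match
print(round_trip.GetSpacing() == image.GetSpacing())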
End of explanation
import numpy as np
multi_channel_3Dimage = sitk.Image([2, 4, 8], sitk.sitkVectorFloat32, 5)
x = multi_channel_3Dimage.GetWidth() - 1
y = multi_channel_3Dimage.GetHeight() - 1
z = multi_channel_3Dimage.GetDepth() - 1
multi_channel_3Dimage[x, y, z] = np.random.random(
multi_channel_3Dimage.GetNumberOfComponentsPerPixel()
)
nda = sitk.GetArrayFromImage(multi_channel_3Dimage)
print("Image size: " + str(multi_channel_3Dimage.GetSize()))
print("Numpy array size: " + str(nda.shape))
# Notice the index order and channel access are different:
print("First channel value in image: " + str(multi_channel_3Dimage[x, y, z][0]))
print("First channel value in numpy array: " + str(nda[z, y, x, 0]))
Explanation: The order of indexes and dimensions needs careful attention during conversion
ITK's Image class does not have a bracket operator. It has a GetPixel which takes an ITK Index object as an argument, which is ordered as (x,y,z). This is the convention that SimpleITK's Image class uses for the GetPixel method and slicing operator as well. In numpy, an array is indexed in the opposite order (z,y,x). Also note that the access to channels is different. In SimpleITK you do not access the channel directly, rather the pixel value representing all channels for the specific pixel is returned and you then access the channel for that pixel. In the numpy array you are accessing the channel directly.
End of explanation
sitk.Show(image)
?sitk.Show
Explanation: Are we still dealing with Image, because I haven't seen one yet...
While SimpleITK does not do visualization, it does contain a built-in Show method. This function writes the image out to disk and then launches a program for visualization. By default it is configured to use ImageJ, because it readily supports all the image types which SimpleITK has and loads them very quickly. However, it's easily customizable by setting environment variables.
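For example (a sketch based on the documented environment variables; the viewer path is just a placeholder), the program launched by Show can be overridden before calling it:
import os
os.environ['SITK_SHOW_COMMAND'] = '/Applications/ITK-SNAP.app'  # hypothetical viewer path
sitk.Show(image, 'sample image')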
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
z = 0
slice = sitk.GetArrayViewFromImage(image)[z, :, :]
plt.imshow(slice)
Explanation: By converting into a numpy array, matplotlib can be used for visualization for integration into the scientific python environment.
End of explanation |
9,470 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
PyGraphistry Tutorial
Step1: Load Protein Interactions
Select columns of interest and drop empty rows.
Step2: Let's have a quick peak at the data
Bind the columns storing the source/destination of each edge. This is the bare minimum to create a visualization.
Step3: A Fancier Visualization With Custom Labels and Colors
Let's lookup the name and organism of each protein in the BioGrid indentification DB.
Step4: We extract the proteins referenced as either sources or targets of interactions.
Step5: We join on the indentification DB to get the organism in which each protein belongs.
Step6: We assign colors to proteins based on their organism.
Step7: For convenience, let's add links to PubMed and RCSB.
Step8: Plotting
We bind columns to labels and colors and we are good to go. | Python Code:
import pandas
import graphistry
# To specify Graphistry account & server, use:
# graphistry.register(api=3, username='...', password='...', protocol='https', server='hub.graphistry.com')
# For more options, see https://github.com/graphistry/pygraphistry#configure
Explanation: PyGraphistry Tutorial: Visualize Protein Interactions From BioGrid
That is over 600,000 interactions across 50,000 proteins!
Notes
This notebook automatically downloads about 200 MB of BioGrid data. If you are going to run this notebook more than once, we recommend manually downloading and saving the data to disk (a small helper for doing that is sketched after the list below). To do so, unzip the two files and place their content in pygraphistry/demos/data.
- Protein Interactions: BIOGRID-ALL-3.3.123.tab2.zip
- Protein Identifiers: BIOGRID-IDENTIFIERS-3.3.123.tab.zip
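A small helper along those lines (purely our sketch; the function name is made up) can pick the local copy when it exists and fall back to the S3 URL otherwise:
import os
def load_biogrid_table(local_path, url, **kwargs):
    # Prefer a previously downloaded local file, otherwise stream the gzipped copy from S3.
    if os.path.exists(local_path):
        return pandas.read_table(local_path, na_values=['-'], engine='c', **kwargs)
    return pandas.read_table(url, na_values=['-'], engine='c', compression='gzip', **kwargs)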
End of explanation
url1 = 'https://s3-us-west-1.amazonaws.com/graphistry.demo.data/BIOGRID-ALL-3.3.123.tab2.txt.gz'
rawdata = pandas.read_table(url1, na_values=['-'], engine='c', compression='gzip')
# If using local data, comment the two lines above and uncomment the line below
# pandas.read_table('./data/BIOGRID-ALL-3.3.123.tab2.txt', na_values=['-'], engine='c')
cols = ['BioGRID ID Interactor A', 'BioGRID ID Interactor B', 'Official Symbol Interactor A',
'Official Symbol Interactor B', 'Pubmed ID', 'Author', 'Throughput']
interactions = rawdata[cols].dropna()
interactions[:3]
Explanation: Load Protein Interactions
Select columns of interest and drop empty rows.
End of explanation
g = graphistry.bind(source="BioGRID ID Interactor A", destination="BioGRID ID Interactor B")
g.plot(interactions.sample(10000))
Explanation: Let's have a quick peek at the data
Bind the columns storing the source/destination of each edge. This is the bare minimum to create a visualization.
End of explanation
# This downloads 170 MB, it might take some time.
url2 = 'https://s3-us-west-1.amazonaws.com/graphistry.demo.data/BIOGRID-IDENTIFIERS-3.3.123.tab.txt.gz'
raw_proteins = pandas.read_table(url2, na_values=['-'], engine='c', compression='gzip')
# If using local data, comment the two lines above and uncomment the line below
# raw_proteins = pandas.read_table('./data/BIOGRID-IDENTIFIERS-3.3.123.tab.txt', na_values=['-'], engine='c')
protein_ids = raw_proteins[['BIOGRID_ID', 'ORGANISM_OFFICIAL_NAME']].drop_duplicates() \
.rename(columns={'ORGANISM_OFFICIAL_NAME': 'ORGANISM'})
protein_ids[:3]
Explanation: A Fancier Visualization With Custom Labels and Colors
Let's look up the name and organism of each protein in the BioGrid identification DB.
End of explanation
source_proteins = interactions[["BioGRID ID Interactor A", "Official Symbol Interactor A"]].copy() \
.rename(columns={'BioGRID ID Interactor A': 'BIOGRID_ID',
'Official Symbol Interactor A': 'SYMBOL'})
target_proteins = interactions[["BioGRID ID Interactor B", "Official Symbol Interactor B"]].copy() \
.rename(columns={'BioGRID ID Interactor B': 'BIOGRID_ID',
'Official Symbol Interactor B': 'SYMBOL'})
all_proteins = pandas.concat([source_proteins, target_proteins], ignore_index=True).drop_duplicates()
all_proteins[:3]
Explanation: We extract the proteins referenced as either sources or targets of interactions.
End of explanation
protein_labels = pandas.merge(all_proteins, protein_ids, how='left', left_on='BIOGRID_ID', right_on='BIOGRID_ID')
protein_labels[:3]
Explanation: We join on the identification DB to get the organism to which each protein belongs.
End of explanation
colors = protein_labels.ORGANISM.unique().tolist()
protein_labels['Color'] = protein_labels.ORGANISM.map(lambda x: colors.index(x))
Explanation: We assign colors to proteins based on their organism.
End of explanation
def makeRcsbLink(id):
if isinstance(id, str):
url = 'http://www.rcsb.org/pdb/gene/' + id.upper()
return '<a target="_blank" href="%s">%s</a>' % (url, id.upper())
else:
return 'n/a'
protein_labels.SYMBOL = protein_labels.SYMBOL.map(makeRcsbLink)
protein_labels[:3]
def makePubmedLink(id):
url = 'http://www.ncbi.nlm.nih.gov/pubmed/?term=%s' % id
return '<a target="_blank" href="%s">%s</a>' % (url, id)
interactions['Pubmed ID'] = interactions['Pubmed ID'].map(makePubmedLink)
interactions[:3]
Explanation: For convenience, let's add links to PubMed and RCSB.
End of explanation
# This will upload ~10MB of data, be patient!
g2 = g.bind(node='BIOGRID_ID', edge_title='Author', point_title='SYMBOL', point_color='Color')
g2.plot(interactions, protein_labels)
Explanation: Plotting
We bind columns to labels and colors and we are good to go.
End of explanation |
9,471 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Gradient
Use metpy.calc.gradient.
This example demonstrates the various ways that MetPy's gradient function
can be utilized.
Step1: Create some test data to use for our example
Step2: Calculate the gradient using the coordinates of the data
Step3: It's also possible that we do not have the position of data points, but know
that they are evenly spaced. We can then specify a scalar delta value for each
axes.
Step4: Finally, the deltas can be arrays for unevenly spaced data. | Python Code:
import numpy as np
import metpy.calc as mpcalc
from metpy.units import units
Explanation: Gradient
Use metpy.calc.gradient.
This example demonstrates the various ways that MetPy's gradient function
can be utilized.
End of explanation
data = np.array([[23, 24, 23],
[25, 26, 25],
[27, 28, 27],
[24, 25, 24]]) * units.degC
# Create an array of x position data (the coordinates of our temperature data)
x = np.array([[1, 2, 3],
[1, 2, 3],
[1, 2, 3],
[1, 2, 3]]) * units.kilometer
y = np.array([[1, 1, 1],
[2, 2, 2],
[3, 3, 3],
[4, 4, 4]]) * units.kilometer
Explanation: Create some test data to use for our example
End of explanation
grad = mpcalc.gradient(data, coordinates=(y, x))
print('Gradient in y direction: ', grad[0])
print('Gradient in x direction: ', grad[1])
Explanation: Calculate the gradient using the coordinates of the data
End of explanation
x_delta = 2 * units.km
y_delta = 1 * units.km
grad = mpcalc.gradient(data, deltas=(y_delta, x_delta))
print('Gradient in y direction: ', grad[0])
print('Gradient in x direction: ', grad[1])
Explanation: It's also possible that we do not have the position of data points, but know
that they are evenly spaced. We can then specify a scalar delta value for each
axis.
End of explanation
x_deltas = np.array([[2, 3],
[1, 3],
[2, 3],
[1, 2]]) * units.kilometer
y_deltas = np.array([[2, 3, 1],
[1, 3, 2],
[2, 3, 1]]) * units.kilometer
grad = mpcalc.gradient(data, deltas=(y_deltas, x_deltas))
print('Gradient in y direction: ', grad[0])
print('Gradient in x direction: ', grad[1])
Explanation: Finally, the deltas can be arrays for unevenly spaced data.
End of explanation |
9,472 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The Basics
Step1: Indentation and Code Blocks
Step2: Statements/Multiline statements
A new line signals the end of a statement
Step3: For multi-line statements, use the \ character at the end of the line
Step4: or put the multilines in ()
Step6: Comments
Comments begin after the # character which can appear anywhere on the line. Multiline comments begin and end with 3 matching ' or " characters.
Step7: You can put a comment inside a muliline statement | Python Code:
# this is a correct variable name
variable_name = 10
# this is not a correct variable name - variable names can't start with a number
4tops = 5
# these variables are not the same
distance = 10
Distance = 20
disTance = 30
print "distance = ", distance
print "Distance = ", Distance
print "disTance = ", disTance
Explanation: The Basics: Python Syntax, Indentation, Comments, etc ...
Variables
Python has similar syntax to C/C++/Java, so it uses the same convention for naming variables: variable names can contain upper and lower case characters and numbers, including the underscore (_) character.
Variable names are case sensitive
Variable names can't start with numbers or special characters (except for the underscore character)
Variables cannot have the same names as Python keywords
| Keywords | | | | | | | |
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|and|assert | as |break |class|continue|def|del |
|elif |else |except |exec|finally |for|from |global |
|if |import|in |is|lambda |not|or |pass|
|print |raise|return |try|while |with|yield | |
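For instance (a quick illustration of our own, in the same Python 2 style as the rest of this notebook), using a keyword as a name is a syntax error, while merely containing one is fine:
# lambda = 5        # uncommenting this raises SyntaxError: invalid syntax
lambda_value = 5     # keywords may appear inside longer names
print "lambda_value =", lambda_value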
End of explanation
x = 11
if x == 10:
print 'x = 10'
else:
print 'x != 10'
print 'Done'
print "Bye now"
Explanation: Indentation and Code Blocks
End of explanation
# these are two separate statements
a = 4
b = 6
Explanation: Statements/Multiline statements
A new line signals the end of a statement
End of explanation
# use line continuation symbol (use with care)
a = 10 \
+ \
10 + 10
print a
Explanation: For multi-line statements, use the \ character at the end of the line
End of explanation
# use parentheses (preferable way)
a = (10
+ 20 +
10 + 10)
print a
Explanation: or put the multilines in ()
End of explanation
# this is a line comment
a = 10 # this is also a line comment
'''
this is inside a multi-line comment
so is this
and this
'''
a = 100 # this is outside the multi-line comment
"""
This is another way to declare a multi-line comment
Inside it
inside it too
"""
a = 100 # this is outside the multi-line comment
Explanation: Comments
Comments begin after the # character which can appear anywhere on the line. Multiline comments begin and end with 3 matching ' or " characters.
End of explanation
# multiline statement with comments
a = (10 # this is line 1
+ 20 + # this is line 2
10 + 10)
print a
Explanation: You can put a comment inside a muliline statement
End of explanation |
9,473 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Business Dataset
Step1: Open/Closed
Step2: City and State
Step3: Review
Step4: https
Step5: User
Step6: https
Step7: Checkin
Step8: Tip | Python Code:
{
"business_id":"encrypted business id",
"name":"business name",
"neighborhood":"hood name",
"address":"full address",
"city":"city",
"state":"state -- if applicable --",
"postal code":"postal code",
"latitude":latitude,
"longitude":longitude,
"stars":star rating, ***rounded to half-stars***,
"review_count":number of reviews,
"is_open":0/1 (closed/open),
"attributes":["an array of strings: each array element is an attribute"],
"categories":["an array of strings of business categories"],
"hours":["an array of strings of business hours"],
"type": "business"
}
'Size of the business dataset: ' + str(len(business))
business.columns
business['attributes'][12]
business['categories'][20]
business.head()
Explanation: Business Dataset
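The cells above assume the Yelp JSON files were already loaded into pandas DataFrames; a minimal loading sketch (the file names are assumptions based on the standard dataset download) could look like this:
import pandas as pd
import numpy as np                 # np and plt are used in the cells below
import matplotlib.pyplot as plt

def load_yelp(path):
    # Each Yelp file is newline-delimited JSON, one record per line.
    return pd.read_json(path, lines=True)

business = load_yelp('yelp_academic_dataset_business.json')
review = load_yelp('yelp_academic_dataset_review.json')
user = load_yelp('yelp_academic_dataset_user.json')
checkin = load_yelp('yelp_academic_dataset_checkin.json')
tip = load_yelp('yelp_academic_dataset_tip.json')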
End of explanation
'Percentage of open businesses: ' + str(business['is_open'].sum() / float(len(business)))
Explanation: Open/Closed
End of explanation
len(business.city.unique())
business['city'].value_counts().head(10)
business['city'].value_counts().tail(10)
len(business.state.unique())
business['state'].value_counts().head(10)
business['state'].value_counts().tail(10)
plt.figure(figsize=(10,10))
plt.scatter(business['review_count'], business['stars'])
plt.xlabel('Review Counts')
plt.ylabel('Stars')
plt.show()
business.groupby('state').median()['review_count']
business.groupby('state').median()['stars']
business[business['business_id'] == '2LfIuF3_sX6uwe-IR-P0jQ']
business.describe()
Explanation: City and State
End of explanation
{
"review_id":"encrypted review id",
"user_id":"encrypted user id",
"business_id":"encrypted business id",
"stars":star rating, rounded to half-stars,
"date":"date formatted like 2009-12-19",
"text":"review text",
"useful":number of useful votes received,
"funny":number of funny votes received,
"cool": number of cool review votes received,
"type": "review"
}
Explanation: Review
End of explanation
len(review)
review.head()
review['useful'].max()
review[review['business_id'] == '2LfIuF3_sX6uwe-IR-P0jQ']['stars'].mean()
review[review['business_id'] == '2aFiy99vNLklCx3T_tGS9A']
len(review['review_id'].unique())
plt.scatter(review['stars'], review['cool'])
plt.xlabel('Star')
plt.ylabel('Cool')
plt.show()
plt.scatter(review['stars'], review['useful'])
plt.xlabel('Star')
plt.ylabel('Useful')
plt.show()
plt.scatter(review['stars'], review['funny'])
plt.xlabel('Star')
plt.ylabel('Funny')
plt.show()
review.describe()
Explanation: https://www.yelp.com/dataset_challenge
https://www.yelp-support.com/Recommended_Reviews
9. Why is the user review count different than the actual number of reviews returned for that user?
The review count represents the total number of reviews a user had posted at the time of data collection, whether Yelp recommended them or not. As for the reviews, only the reviews that were recommended at the time of data collection are included. Also, we only include businesses that have had at least 3 reviews older than 14 days. So the review count number may differ from the number of actual reviews for any given user.
End of explanation
{
"user_id":"encrypted user id",
"name":"first name",
"review_count":number of reviews,
"yelping_since": date formatted like "2009-12-19",
"friends":["an array of encrypted ids of friends"],
"useful":"number of useful votes sent by the user",
"funny":"number of funny votes sent by the user",
"cool":"number of cool votes sent by the user",
"fans":"number of fans the user has",
"elite":["an array of years the user was elite"],
"average_stars":floating point average like 4.31,
"compliment_hot":number of hot compliments received by the user,
"compliment_more":number of more compliments received by the user,
"compliment_profile": number of profile compliments received by the user,
"compliment_cute": number of cute compliments received by the user,
"compliment_list": number of list compliments received by the user,
"compliment_note": number of note compliments received by the user,
"compliment_plain": number of plain compliments received by the user,
"compliment_cool": number of cool compliments received by the user,
"compliment_funny": number of funny compliments received by the user,
"compliment_writer": number of writer compliments received by the user,
"compliment_photos": number of photo compliments received by the user,
"type":"user"
}
Explanation: User
End of explanation
len(user)
user.columns
user.head()
user.select_dtypes(include=['number']).columns
user.select_dtypes(include=['number']).corr()
def correlation_matrix(df):
from matplotlib import pyplot as plt
from matplotlib import cm as cm
fig = plt.figure(figsize=(16,16))
ax1 = fig.add_subplot(111)
cmap = cm.get_cmap('jet', 30)
cax = ax1.imshow(df.corr(), interpolation="nearest", cmap=cmap)
ax1.grid(True)
plt.title('Numeric Feature Correlation')
labels = user.select_dtypes(include=['number']).columns
ax1.set_xticks(np.arange(len(labels)))
ax1.set_yticks(np.arange(len(labels)))
ax1.set_xticklabels(labels,fontsize=10,rotation=90)
ax1.set_yticklabels(labels,fontsize=10)
# Add colorbar, make sure to specify tick locations to match desired ticklabels
fig.colorbar(cax, ticks=[.75,.8,.85,.90,.95,1])
plt.show()
correlation_matrix(user.select_dtypes(include=['number']))
plt.scatter(user['average_stars'], user['review_count'])
plt.show()
plt.scatter(user['average_stars'], user['useful'])
plt.show()
plt.scatter(user['review_count'], user['useful'])
plt.show()
plt.scatter(user['useful'], user['fans'])
plt.show()
Explanation: https://www.yelp.com/elite
End of explanation
{
"time":["an array of check ins with the format day-hour:number of check ins from hour to hour+1"],
"business_id":"encrypted business id",
"type":"checkin"
}
len(checkin)
checkin.columns
checkin.head()
checkin['time'][0]
Explanation: Checkin
End of explanation
{
"text":"text of the tip",
"date":"date formatted like 2009-12-19",
"likes":compliment count,
"business_id":"encrypted business id",
"user_id":"encrypted user id",
"type":"tip"
}
len(tip)
tip.columns
tip.head()
plt.plot(tip['likes'])
plt.show()
Explanation: Tip
End of explanation |
9,474 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Ordinary Differential Equations Exercise 3
Imports
Step1: Damped, driven nonlinear pendulum
Basic setup
Here are the basic parameters we are going to use for this exercise
Step3: Write a function derivs for usage with scipy.integrate.odeint that computes the derivatives for the damped, driven harmonic oscillator. The solution vector at each time will be $\vec{y}(t) = (\theta(t),\omega(t))$.
Step5: The equations of motion for a simple pendulum of mass $m$, length $l$ are
Step6: Simple pendulum
Use the above functions to integrate the simple pendulum for the case where it starts at rest pointing vertically upwards. In this case, it should remain at rest with constant energy.
Integrate the equations of motion.
Plot $E/m$ versus time.
Plot $\theta(t)$ and $\omega(t)$ versus time.
Tune the atol and rtol arguments of odeint until $E/m$, $\theta(t)$ and $\omega(t)$ are constant.
Anytime you have a differential equation with a a conserved quantity, it is critical to make sure the numerical solutions conserve that quantity as well. This also gives you an opportunity to find other bugs in your code. The default error tolerances (atol and rtol) used by odeint are not sufficiently small for this problem. Start by trying atol=1e-3, rtol=1e-2 and then decrease each by an order of magnitude until your solutions are stable.
Step8: Damped pendulum
Write a plot_pendulum function that integrates the damped, driven pendulum differential equation for a particular set of parameters $[a,b,\omega_0]$.
Use the initial conditions $\theta(0)=-\pi + 0.1$ and $\omega=0$.
Decrease your atol and rtol even futher and make sure your solutions have converged.
Make a parametric plot of $[\theta(t),\omega(t)]$ versus time.
Use the plot limits $\theta \in [-2 \pi,2 \pi]$ and $\theta \in [-10,10]$
Label your axes and customize your plot to make it beautiful and effective.
Step9: Here is an example of the output of your plot_pendulum function that should show a decaying spiral.
Step10: Use interact to explore the plot_pendulum function with | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
from scipy.integrate import odeint
from IPython.html.widgets import interact, fixed
Explanation: Ordinary Differential Equations Exercise 3
Imports
End of explanation
g = 9.81 # m/s^2
l = 0.5 # length of pendulum, in meters
tmax = 50. # seconds
t = np.linspace(0, tmax, int(100*tmax))
Explanation: Damped, driven nonlinear pendulum
Basic setup
Here are the basic parameters we are going to use for this exercise:
End of explanation
def derivs(y, t, a, b, omega0):
    """Compute the derivatives of the damped, driven pendulum.

    Parameters
    ----------
    y : ndarray
        The solution vector at the current time t[i]: [theta[i],omega[i]].
    t : float
        The current time t[i].
    a, b, omega0: float
        The parameters in the differential equation.

    Returns
    -------
    dy : ndarray
        The vector of derivatives at t[i]: [dtheta[i],domega[i]].
    """
# YOUR CODE HERE
theta_t = y[0]
omega_t = y[1]
domega = -g/l * np.sin(theta_t) - a*omega_t - b*np.sin(omega0*t)
    # It took me a long time to understand that theta and omega are the position
    # and angular velocity of the system, and that's why you can set the derivative
    # of the position directly to omega.
dtheta = omega_t
dy = np.array([dtheta, domega])
return dy
assert np.allclose(derivs(np.array([np.pi,1.0]), 0, 1.0, 1.0, 1.0), [1.,-1.])
Explanation: Write a function derivs for usage with scipy.integrate.odeint that computes the derivatives for the damped, driven pendulum. The solution vector at each time will be $\vec{y}(t) = (\theta(t),\omega(t))$.
End of explanation
def energy(y):
    """Compute the energy for the state array y.

    The state array y can have two forms:
    1. It could be an ndim=1 array of np.array([theta,omega]) at a single time.
    2. It could be an ndim=2 array where each row is the [theta,omega] at a single
       time.

    Parameters
    ----------
    y : ndarray, list, tuple
        A solution vector

    Returns
    -------
    E/m : float (ndim=1) or ndarray (ndim=2)
        The energy per mass.
    """
# YOUR CODE HERE
if y.ndim==1:
theta = y[0]
omega = y[1]
elif y.ndim==2:
theta = y[:,0]
omega = y[:,1]
return g*l*(1-np.cos(theta)) + 0.5*(l**2)*omega**2
### END SOLUTION
assert np.allclose(energy(np.array([np.pi,0])),g)
assert np.allclose(energy(np.ones((10,2))), np.ones(10)*energy(np.array([1,1])))
Explanation: The equations of motion for a simple pendulum of mass $m$, length $l$ are:
$$
\frac{d^2\theta}{dt^2} = \frac{-g}{\ell}\sin\theta
$$
When a damping and periodic driving force are added, the resulting system has much richer and more interesting dynamics:
$$
\frac{d^2\theta}{dt^2} = \frac{-g}{\ell}\sin\theta - a \omega - b \sin(\omega_0 t)
$$
In this equation:
$a$ governs the strength of the damping.
$b$ governs the strength of the driving force.
$\omega_0$ is the angular frequency of the driving force.
When $a=0$ and $b=0$, the energy/mass is conserved:
$$E/m =g\ell(1-\cos(\theta)) + \frac{1}{2}\ell^2\omega^2$$
End of explanation
# YOUR CODE HERE
thetai = np.pi #exactly vertical
omegai = 0. #starts at rest
ic = np.array([thetai, omegai])
y = odeint(derivs, ic, t, args=(0.0,0.0,0.0), atol=1e-6, rtol=1e-5)
# YOUR CODE HERE
plt.plot(t, energy(y))
plt.xlabel('$t$')
plt.ylabel('$E/m$')
plt.title('Energy per mass versus time');
# YOUR CODE HERE
plt.plot(t, y[:,0], label='$\\theta(t)$')
plt.plot(t, y[:,1], label='$\omega(t)$')
plt.xlabel('$t$')
plt.ylabel('Solution')
plt.title('State variables versus time')
plt.legend(loc='best');
assert True # leave this to grade the two plots and their tuning of atol, rtol.
Explanation: Simple pendulum
Use the above functions to integrate the simple pendulum for the case where it starts at rest pointing vertically upwards. In this case, it should remain at rest with constant energy.
Integrate the equations of motion.
Plot $E/m$ versus time.
Plot $\theta(t)$ and $\omega(t)$ versus time.
Tune the atol and rtol arguments of odeint until $E/m$, $\theta(t)$ and $\omega(t)$ are constant.
Anytime you have a differential equation with a conserved quantity, it is critical to make sure the numerical solutions conserve that quantity as well. This also gives you an opportunity to find other bugs in your code. The default error tolerances (atol and rtol) used by odeint are not sufficiently small for this problem. Start by trying atol=1e-3, rtol=1e-2 and then decrease each by an order of magnitude until your solutions are stable.
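A minimal way to see that advice in action (our sketch, reusing derivs, energy, ic and t from above) is to rerun the integration with tighter tolerances and watch the energy drift shrink:
# Drift of E/m away from its initial value for progressively tighter tolerances.
for tol in (1e-3, 1e-5, 1e-7, 1e-9):
    y_tol = odeint(derivs, ic, t, args=(0.0, 0.0, 0.0), atol=tol, rtol=10*tol)
    print(tol, np.max(np.abs(energy(y_tol) - energy(ic))))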
End of explanation
def plot_pendulum(a=0.0, b=0.0, omega0=0.0):
    """Integrate the damped, driven pendulum and make a phase plot of the solution."""
# YOUR CODE HERE
thetai = -np.pi+0.1
omegai = 0.0
ic = np.array([thetai, omegai])
y = odeint(derivs, ic, t, args=(a,b,omega0), atol=1e-10, rtol=1e-9)
plt.plot(y[:,0], y[:,1])
plt.xlim(-2.0*np.pi,2.0*np.pi)
plt.ylim(-10,10)
plt.xlabel('$\\theta(t)$')
plt.ylabel('$\omega(t)$')
Explanation: Damped pendulum
Write a plot_pendulum function that integrates the damped, driven pendulum differential equation for a particular set of parameters $[a,b,\omega_0]$.
Use the initial conditions $\theta(0)=-\pi + 0.1$ and $\omega=0$.
Decrease your atol and rtol even further and make sure your solutions have converged.
Make a parametric plot of $[\theta(t),\omega(t)]$ versus time.
Use the plot limits $\theta \in [-2 \pi,2 \pi]$ and $\omega \in [-10,10]$
Label your axes and customize your plot to make it beautiful and effective.
End of explanation
plot_pendulum(0.5, 0.0, 0.0)
Explanation: Here is an example of the output of your plot_pendulum function that should show a decaying spiral.
End of explanation
# YOUR CODE HERE
interact(plot_pendulum, a=(0.0,1.0,0.1), b=(0.0,10.0,0.1), omega0=(0.0,10.0,0.1));
Explanation: Use interact to explore the plot_pendulum function with:
a: a float slider over the interval $[0.0,1.0]$ with steps of $0.1$.
b: a float slider over the interval $[0.0,10.0]$ with steps of $0.1$.
omega0: a float slider over the interval $[0.0,10.0]$ with steps of $0.1$.
End of explanation |
9,475 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
Step1: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
Step2: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days in the data set. You can see the hourly rentals here. This data is pretty complicated! The weekends have lower over all ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.
Step3: Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
Step4: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
Step5: Splitting the data into training, testing, and validation sets
We'll save the last 21 days of the data to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
Step6: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
Step7: Time to build the network
Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters
Step8: Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method know as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
Choose the number of epochs
This is the number of times the dataset will pass through the network, each time updating the weights. As the number of epochs increases, the network becomes better and better at predicting the targets in the training set. You'll need to choose enough epochs to train the network well but not too many or you'll be overfitting.
Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
Choose the number of hidden nodes
The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.
Step9: Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
Step10: Thinking about your results
Answer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does?
Note | Python Code:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
Explanation: Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
End of explanation
data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
rides.head()
Explanation: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
End of explanation
rides[:24*10].plot(x='dteday', y='cnt')
Explanation: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days in the data set. You can see the hourly rentals here. This data is pretty complicated! The weekends have lower overall ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.
End of explanation
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
rides = pd.concat([rides, dummies], axis=1)
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
Explanation: Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
End of explanation
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
mean, std = data[each].mean(), data[each].std()
scaled_features[each] = [mean, std]
data.loc[:, each] = (data[each] - mean)/std
Explanation: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
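For reference (a small sketch of our own), undoing the scaling later is just the inverse transformation using the stored mean and standard deviation:
# Convert scaled ridership counts back to the original units.
mean, std = scaled_features['cnt']
unscaled_cnt = data['cnt'] * std + mean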
End of explanation
# Save the last 21 days
test_data = data[-21*24:]
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
Explanation: Splitting the data into training, testing, and validation sets
We'll save the last 21 days of the data to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
End of explanation
# Hold out the last 60 days of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
Explanation: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
End of explanation
def sigmoid(x):
return 1/(1+np.exp(-x))
def sigmoid_prime(x):
return sigmoid(x) * (1 - sigmoid(x))
class NeuralNetwork(object):
def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_input_to_hidden = np.random.normal(0.0, self.hidden_nodes**-0.5,
(self.hidden_nodes, self.input_nodes))
self.weights_hidden_to_output = np.random.normal(0.0, self.output_nodes**-0.5,
(self.output_nodes, self.hidden_nodes))
self.lr = learning_rate
#### Set this to your implemented sigmoid function ####
# Activation function is the sigmoid function
self.activation_function = sigmoid
def train(self, inputs_list, targets_list):
# Convert inputs list to 2d array
inputs = np.array(inputs_list, ndmin=2).T
targets = np.array(targets_list, ndmin=2).T
#### Implement the forward pass here ####
### Forward pass ###
# TODO: Hidden layer
hidden_inputs = np.dot(self.weights_input_to_hidden, inputs) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
# TODO: Output layer
final_inputs = np.dot(self.weights_hidden_to_output, hidden_outputs) # signals into final output layer
final_outputs = final_inputs # signals from final output layer
#### Implement the backward pass here ####
### Backward pass ###
output_errors = targets - final_outputs # Output layer error is the difference between desired target and actual output.
        output_grad = 1  # derivative of the output activation f(x) = x is just 1
hidden_errors = np.dot(self.weights_hidden_to_output.T, output_errors) # errors propagated to the hidden layer
        hidden_grad = sigmoid_prime(hidden_inputs)  # equivalent to hidden_outputs * (1 - hidden_outputs), which would avoid recomputing the sigmoid
self.weights_hidden_to_output += self.lr * np.dot(output_grad * output_errors, hidden_outputs.T) # update hidden-to-output weights with gradient descent step
self.weights_input_to_hidden += self.lr * np.dot(hidden_grad * hidden_errors, inputs.T) # update input-to-hidden weights with gradient descent step
def run(self, inputs_list):
# Run a forward pass through the network
inputs = np.array(inputs_list, ndmin=2).T
#### Implement the forward pass here ####
# TODO: Hidden layer
hidden_inputs = np.dot(self.weights_input_to_hidden, inputs) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
# TODO: Output layer
final_inputs = np.dot(self.weights_hidden_to_output, hidden_outputs) # signals into final output layer
final_outputs = final_inputs # signals from final output layer
return final_outputs
def MSE(y, Y):
return np.mean((y-Y)**2)
Explanation: Time to build the network
Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.
The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression, the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called forward propagation.
We use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called backpropagation.
Hint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.
Below, you have these tasks:
1. Implement the sigmoid function to use as the activation function. Set self.activation_function in __init__ to your sigmoid function.
2. Implement the forward pass in the train method.
3. Implement the backpropagation algorithm in the train method, including calculating the output error.
4. Implement the forward pass in the run method.
End of explanation
import sys
### Set the hyperparameters here ###
epochs = 1500
learning_rate = .01
hidden_nodes = 6
output_nodes = 1
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
for e in range(epochs):
# Go through a random batch of 128 records from the training data set
batch = np.random.choice(train_features.index, size=128)
    for record, target in zip(train_features.loc[batch].values,
                              train_targets.loc[batch, 'cnt']):
network.train(record, target)
# Printing out the training progress
train_loss = MSE(network.run(train_features), train_targets['cnt'].values)
val_loss = MSE(network.run(val_features), val_targets['cnt'].values)
sys.stdout.write("\rProgress: " + str(100 * e/float(epochs))[:4] \
+ "% ... Training loss: " + str(train_loss)[:5] \
+ " ... Validation loss: " + str(val_loss)[:5])
losses['train'].append(train_loss)
losses['validation'].append(val_loss)
plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
plt.ylim(top=0.5)
Explanation: Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
Choose the number of epochs
This is the number of times the dataset will pass through the network, each time updating the weights. As the number of epochs increases, the network becomes better and better at predicting the targets in the training set. You'll need to choose enough epochs to train the network well but not too many or you'll be overfitting.
Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
Choose the number of hidden nodes
The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.
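If you want to compare a few settings systematically, a rough sketch (ours, not part of the project template) is to retrain with a short budget for each candidate and look at the final validation loss:
# Quick-and-dirty comparison of hidden layer sizes, reusing the training data above.
for n_hidden in (4, 8, 16):
    net = NeuralNetwork(train_features.shape[1], n_hidden, 1, 0.01)
    for _ in range(200):
        batch = np.random.choice(train_features.index, size=128)
        for record, target in zip(train_features.loc[batch].values,
                                  train_targets.loc[batch, 'cnt']):
            net.train(record, target)
    print(n_hidden, MSE(net.run(val_features), val_targets['cnt'].values))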
End of explanation
fig, ax = plt.subplots(figsize=(8,4))
mean, std = scaled_features['cnt']
predictions = network.run(test_features)*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()
dates = pd.to_datetime(rides.loc[test_data.index]['dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
Explanation: Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
End of explanation
import unittest
inputs = [0.5, -0.2, 0.1]
targets = [0.4]
test_w_i_h = np.array([[0.1, 0.4, -0.3],
[-0.2, 0.5, 0.2]])
test_w_h_o = np.array([[0.3, -0.1]])
class TestMethods(unittest.TestCase):
##########
# Unit tests for data loading
##########
def test_data_path(self):
# Test that file path to dataset has been unaltered
self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')
def test_data_loaded(self):
# Test that data frame loaded
self.assertTrue(isinstance(rides, pd.DataFrame))
##########
# Unit tests for network functionality
##########
def test_activation(self):
network = NeuralNetwork(3, 2, 1, 0.5)
# Test that the activation function is a sigmoid
self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))
def test_train(self):
# Test that weights are updated correctly on training
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
network.train(inputs, targets)
self.assertTrue(np.allclose(network.weights_hidden_to_output,
np.array([[ 0.37275328, -0.03172939]])))
self.assertTrue(np.allclose(network.weights_input_to_hidden,
np.array([[ 0.10562014, 0.39775194, -0.29887597],
[-0.20185996, 0.50074398, 0.19962801]])))
def test_run(self):
# Test correctness of run method
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
self.assertTrue(np.allclose(network.run(inputs), 0.09998924))
suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
Explanation: Thinking about your results
Answer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does?
Note: You can edit the text in this cell by double clicking on it. When you want to render the text, press control + enter
Your answer below
The model does a good job of predicting the data - especially the daily upswings and downswings. It didn't do as well at handling the days from Dec 21 - 31. The late December holiday season seems like it would be a difficult time to predict as it doesn't follow other holidays throughout the year. The training data only covers 2 years, so additional neurons and training data would help the network recognize the holiday season.
Unit tests
Run these unit tests to check the correctness of your network implementation. These tests must all be successful to pass the project.
End of explanation |
9,476 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
Step1: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
Step2: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower over all ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.
Step3: Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
Step4: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
Step5: Splitting the data into training, testing, and validation sets
We'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
Step6: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
Step7: Time to build the network
Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters
Step8: Unit tests
Run these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly befor you starting trying to train it. These tests must all be successful to pass the project.
Step9: Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method know as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
Choose the number of iterations
This is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, if you use too many iterations, then the model with not generalize well to other data, this is called overfitting. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. As you start overfitting, you'll see the training loss continue to decrease while the validation loss starts to increase.
Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
Choose the number of hidden nodes
The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.
Step10: Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly. | Python Code:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
Explanation: Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
End of explanation
data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
rides.head()
Explanation: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
End of explanation
rides[:24*10].plot(x='dteday', y='cnt')
Explanation: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower overall ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.
End of explanation
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
rides = pd.concat([rides, dummies], axis=1)
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
Explanation: Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
End of explanation
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
mean, std = data[each].mean(), data[each].std()
scaled_features[each] = [mean, std]
data.loc[:, each] = (data[each] - mean)/std
Explanation: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
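As a minimal sketch of what "going backwards" looks like (using the scaled_features dictionary built in the cell above), a standardized column can be mapped back to its original units like this:
# Sketch: undo the standardization for the 'cnt' column
mean, std = scaled_features['cnt']
original_cnt = data['cnt'] * std + mean  # reverses (x - mean) / std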
End of explanation
# Save data for approximately the last 21 days
test_data = data[-21*24:]
# Now remove the test data from the data set
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
Explanation: Splitting the data into training, testing, and validation sets
We'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
End of explanation
# Hold out the last 60 days or so of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
Explanation: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
End of explanation
class NeuralNetwork(object):
def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_input_to_hidden = np.random.normal(0.0, self.input_nodes**-0.5,
(self.input_nodes, self.hidden_nodes))
self.weights_hidden_to_output = np.random.normal(0.0, self.hidden_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
self.lr = learning_rate
#### TODO: Set self.activation_function to your implemented sigmoid function ####
#
# Note: in Python, you can define a function with a lambda expression,
# as shown below.
#self.activation_function = lambda x : 0 # Replace 0 with your sigmoid calculation.
### If the lambda code above is not something you're familiar with,
# You can uncomment out the following three lines and put your
# implementation there instead.
#
def sigmoid(x):
return 1 / (1 + np.exp(-x))
self.activation_function = sigmoid
def train(self, features, targets):
''' Train the network on batch of features and targets.
Arguments
---------
features: 2D array, each row is one data record, each column is a feature
targets: 1D array of target values
'''
n_records = features.shape[0]
delta_weights_i_h = np.zeros(self.weights_input_to_hidden.shape)
delta_weights_h_o = np.zeros(self.weights_hidden_to_output.shape)
for X, y in zip(features, targets):
#### Implement the forward pass here ####
### Forward pass ###
# TODO: Hidden layer - Replace these values with your calculations.
hidden_inputs = np.dot(X, self.weights_input_to_hidden) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
# TODO: Output layer - Replace these values with your calculations.
final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output) # signals into final output layer
final_outputs = final_inputs # signals from final output layer
#### Implement the backward pass here ####
### Backward pass ###
# TODO: Output error - Replace this value with your calculations.
error = y - final_outputs # Output layer error is the difference between desired target and actual output.
# TODO: Calculate the hidden layer's contribution to the error
hidden_error = np.dot(self.weights_hidden_to_output, error)
# TODO: Backpropagated error terms - Replace these values with your calculations.
output_error_term = error * 1
hidden_error_term = hidden_error * hidden_outputs * (1 - hidden_outputs)
# Weight step (input to hidden)
delta_weights_i_h += hidden_error_term * X[:,None]
# Weight step (hidden to output)
delta_weights_h_o += output_error_term * hidden_outputs[:,None]
# TODO: Update the weights - Replace these values with your calculations.
self.weights_hidden_to_output += self.lr * delta_weights_h_o / n_records # update hidden-to-output weights with gradient descent step
self.weights_input_to_hidden += self.lr * delta_weights_i_h / n_records # update input-to-hidden weights with gradient descent step
def run(self, features):
''' Run a forward pass through the network with input features
Arguments
---------
features: 1D array of feature values
'''
#### Implement the forward pass here ####
# TODO: Hidden layer - replace these values with the appropriate calculations.
hidden_inputs = np.dot(features, self.weights_input_to_hidden) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
# TODO: Output layer - Replace these values with the appropriate calculations.
final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output) # signals into final output layer
final_outputs = final_inputs # signals from final output layer
return final_outputs
def MSE(y, Y):
return np.mean((y-Y)**2)
Explanation: Time to build the network
Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.
<img src="assets/neural_network.png" width=300px>
The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression; the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called forward propagation.
We use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called backpropagation.
Hint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.
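For reference, here is a small illustrative sketch of the two derivatives involved (it is not part of the provided class): the sigmoid derivative is sigmoid(x) * (1 - sigmoid(x)), and the derivative of f(x) = x is simply 1.
import numpy as np
def sigmoid(x):
    return 1 / (1 + np.exp(-x))   # hidden-layer activation
def sigmoid_prime(x):
    s = sigmoid(x)
    return s * (1 - s)            # used for the hidden error term
# The output activation f(x) = x has derivative 1, so the output error term is just the error.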
Below, you have these tasks:
1. Implement the sigmoid function to use as the activation function. Set self.activation_function in __init__ to your sigmoid function.
2. Implement the forward pass in the train method.
3. Implement the backpropagation algorithm in the train method, including calculating the output error.
4. Implement the forward pass in the run method.
End of explanation
import unittest
inputs = np.array([[0.5, -0.2, 0.1]])
targets = np.array([[0.4]])
test_w_i_h = np.array([[0.1, -0.2],
[0.4, 0.5],
[-0.3, 0.2]])
test_w_h_o = np.array([[0.3],
[-0.1]])
class TestMethods(unittest.TestCase):
##########
# Unit tests for data loading
##########
def test_data_path(self):
# Test that file path to dataset has been unaltered
self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')
def test_data_loaded(self):
# Test that data frame loaded
self.assertTrue(isinstance(rides, pd.DataFrame))
##########
# Unit tests for network functionality
##########
def test_activation(self):
network = NeuralNetwork(3, 2, 1, 0.5)
# Test that the activation function is a sigmoid
self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))
def test_train(self):
# Test that weights are updated correctly on training
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
network.train(inputs, targets)
self.assertTrue(np.allclose(network.weights_hidden_to_output,
np.array([[ 0.37275328],
[-0.03172939]])))
self.assertTrue(np.allclose(network.weights_input_to_hidden,
np.array([[ 0.10562014, -0.20185996],
[0.39775194, 0.50074398],
[-0.29887597, 0.19962801]])))
def test_run(self):
# Test correctness of run method
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
self.assertTrue(np.allclose(network.run(inputs), 0.09998924))
suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
Explanation: Unit tests
Run these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly before you start trying to train it. These tests must all be successful to pass the project.
End of explanation
import sys
### Set the hyperparameters here ###
iterations = 3500
learning_rate = 0.9
hidden_nodes = 9
output_nodes = 1
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
for ii in range(iterations):
# Go through a random batch of 128 records from the training data set
batch = np.random.choice(train_features.index, size=128)
X, y = train_features.ix[batch].values, train_targets.ix[batch]['cnt']
network.train(X, y)
# Printing out the training progress
train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values)
val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values)
sys.stdout.write("\rProgress: {:2.1f}".format(100 * ii/float(iterations)) \
+ "% ... Training loss: " + str(train_loss)[:5] \
+ " ... Validation loss: " + str(val_loss)[:5])
sys.stdout.flush()
losses['train'].append(train_loss)
losses['validation'].append(val_loss)
plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
_ = plt.ylim()
Explanation: Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
Choose the number of iterations
This is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, if you use too many iterations, then the model will not generalize well to other data; this is called overfitting. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. As you start overfitting, you'll see the training loss continue to decrease while the validation loss starts to increase.
Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
Choose the number of hidden nodes
The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.
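One way to explore this is a small manual sweep. The sketch below reuses the NeuralNetwork class and MSE function defined earlier; the hidden-layer sizes, learning rate, and iteration count are illustrative assumptions, not recommendations.
# Sketch: compare validation loss for a few hidden-layer sizes
for hidden in [5, 10, 20]:
    net = NeuralNetwork(train_features.shape[1], hidden, 1, 0.5)
    for _ in range(1000):
        batch = np.random.choice(train_features.index, size=128)
        net.train(train_features.ix[batch].values, train_targets.ix[batch]['cnt'])
    val_loss = MSE(net.run(val_features).T, val_targets['cnt'].values)
    print("hidden nodes:", hidden, "validation loss:", val_loss)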
End of explanation
fig, ax = plt.subplots(figsize=(8,4))
mean, std = scaled_features['cnt']
predictions = network.run(test_features).T*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()
dates = pd.to_datetime(rides.ix[test_data.index]['dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
Explanation: Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
End of explanation |
9,477 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Cortical Signal Suppression (CSS) for removal of cortical signals
This script shows an example of how to use CSS
Step1: Load sample subject data
Step2: Find patches (labels) to activate
Step5: Simulate one cortical dipole (40 Hz) and one subcortical (239 Hz)
Step6: Process with CSS and plot PSD of EEG data before and after processing | Python Code:
# Author: John G Samuelsson <[email protected]>
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne.datasets import sample
from mne.simulation import simulate_sparse_stc, simulate_evoked
Explanation: Cortical Signal Suppression (CSS) for removal of cortical signals
This script shows an example of how to use CSS
:footcite:Samuelsson2019 . CSS suppresses the cortical contribution
to the signal subspace in EEG data using MEG data, facilitating
detection of subcortical signals. We will illustrate how it works by
simulating one cortical and one subcortical oscillation at different
frequencies; 40 Hz and 239 Hz for cortical and subcortical activity,
respectively, then process it with CSS and look at the power spectral
density of the raw and processed data.
End of explanation
data_path = sample.data_path()
subjects_dir = data_path / 'subjects'
meg_path = data_path / 'MEG' / 'sample'
fwd_fname = meg_path / 'sample_audvis-meg-eeg-oct-6-fwd.fif'
ave_fname = meg_path / 'sample_audvis-no-filter-ave.fif'
cov_fname = meg_path / 'sample_audvis-cov.fif'
trans_fname = meg_path / 'sample_audvis_raw-trans.fif'
bem_fname = subjects_dir / 'sample' / 'bem' / 'sample-5120-bem-sol.fif'
raw = mne.io.read_raw_fif(meg_path / 'sample_audvis_raw.fif')
fwd = mne.read_forward_solution(fwd_fname)
fwd = mne.convert_forward_solution(fwd, force_fixed=True, surf_ori=True)
fwd = mne.pick_types_forward(fwd, meg=True, eeg=True, exclude=raw.info['bads'])
cov = mne.read_cov(cov_fname)
Explanation: Load sample subject data
End of explanation
all_labels = mne.read_labels_from_annot(subject='sample',
subjects_dir=subjects_dir)
labels = []
for select_label in ['parahippocampal-lh', 'postcentral-rh']:
labels.append([lab for lab in all_labels if lab.name in select_label][0])
hiplab, postcenlab = labels
Explanation: Find patches (labels) to activate
End of explanation
def cortical_waveform(times):
Create a cortical waveform.
return 10e-9 * np.cos(times * 2 * np.pi * 40)
def subcortical_waveform(times):
Create a subcortical waveform.
return 10e-9 * np.cos(times * 2 * np.pi * 239)
times = np.linspace(0, 0.5, int(0.5 * raw.info['sfreq']))
stc = simulate_sparse_stc(fwd['src'], n_dipoles=2, times=times,
location='center', subjects_dir=subjects_dir,
labels=[postcenlab, hiplab],
data_fun=cortical_waveform)
stc.data[np.where(np.isin(stc.vertices[0], hiplab.vertices))[0], :] = \
subcortical_waveform(times)
evoked = simulate_evoked(fwd, stc, raw.info, cov, nave=15)
Explanation: Simulate one cortical dipole (40 Hz) and one subcortical (239 Hz)
End of explanation
evoked_subcortical = mne.preprocessing.cortical_signal_suppression(evoked,
n_proj=6)
chs = mne.pick_types(evoked.info, meg=False, eeg=True)
psd = np.mean(np.abs(np.fft.rfft(evoked.data))**2, axis=0)
psd_proc = np.mean(np.abs(np.fft.rfft(evoked_subcortical.data))**2, axis=0)
freq = np.arange(0, stop=int(evoked.info['sfreq'] / 2),
step=evoked.info['sfreq'] / (2 * len(psd)))
fig, ax = plt.subplots()
ax.plot(freq, psd, label='raw')
ax.plot(freq, psd_proc, label='processed')
ax.text(.2, .7, 'cortical', transform=ax.transAxes)
ax.text(.8, .25, 'subcortical', transform=ax.transAxes)
ax.set(ylabel='EEG Power spectral density', xlabel='Frequency (Hz)')
ax.legend()
# References
# ^^^^^^^^^^
#
# .. footbibliography::
Explanation: Process with CSS and plot PSD of EEG data before and after processing
End of explanation |
9,478 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Splitting the dataset into training and test datasets
Using the Wine dataset, we preprocess the data and then look at feature-selection techniques for reducing the number of dimensions.
The Wine dataset has three class labels, 1, 2, and 3, which represent three different grape cultivars.
Step1: Bringing features onto the same scale
The two common approaches are __normalization__ and __standardization__.
Normalization
Rescaling the features to the range [0, 1].
$$ x_{norm}^{(i)} = \frac{x^{(i)} - x_{min}}{x_{max} - x_{min}} $$
Step2: Standardization
Transform the features so that they have mean 0 and standard deviation 1. This is preferable to normalization in the following respects:
The feature columns follow a normal distribution, which makes it easier to learn the weights
Useful information about outliers is preserved, so the method is less affected by them
$$ x_{std}^{(i)} = \frac{x^{(i)} - \mu_x}{\sigma_x} $$
\( \mu_x \): the mean of the feature column
\( \sigma_x \): the corresponding standard deviation
Step3: Selecting meaningful features
Common ways to reduce the generalization error are as follows:
Collect more training data
Introduce a penalty for complexity via regularization
Choose a simpler model with fewer parameters
Reduce the dimensionality of the data
Sparse solutions with L1 regularization
L2 regularization was defined as:
$$ L2: \|w\|_2^2 = \sum_{j=1}^m w_j^2 $$
Step4: There are three intercepts above because, to separate the three classes (grape cultivars), the first one belongs to the model fitted for class 1 versus classes 2 and 3 (and likewise for the second and third).
The weight coefficients form a 3x13 matrix, with one weight vector per class.
The net input \( z \) multiplies each feature by its corresponding weight:
$$ z = w_1x_1 + ... + w_mx_m + b = \sum_{j=1}^m x_jw_j + b = {\boldsymbol w^Tx} + b $$
Because L1 regularization drove most of the weights to zero, we obtained a model that is robust to irrelevant features.
Below is a plot of the regularization path (the feature weight coefficients as a function of the regularization strength).
Step5: Sequential feature selection algorithms
Feature selection is one form of dimensionality reduction
Sequential feature selection is a greedy search algorithm
Greedy search algorithms are used to reduce a d-dimensional feature space to k dimensions
Goals of feature selection:
Improve computational efficiency by computing only the relevant data
Reduce the generalization error by removing irrelevant noise
Sequential Backward Selection (SBS)
Step6: Assessing feature importance with random forests
Feature importance can be measured as the average decrease in impurity computed from all decision trees in the forest.
In scikit-learn, the values can be accessed via the feature_importances_ attribute
import pandas as pd
import numpy as np
df_wine = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/wine/wine.data', header=None)
df_wine.columns = ['Class label', 'Alcohol', 'Malic acid', 'Ash', 'Alcalinity of ash', 'Magnesium', 'Total phenols', 'Flavanoids', 'Nonflavanoid phenols', 'Proanthocyanins', 'color intensity', 'Hue', 'OD280/OD315 of diluted wines', 'Proline']
print('Class labels', np.unique(df_wine['Class label']))
df_wine.head()
from sklearn.cross_validation import train_test_split
# X: feature matrix, y: class labels
X, y = df_wine.iloc[:, 1:].values, df_wine.iloc[:, 0].values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
Explanation: Splitting the dataset into training and test datasets
Using the Wine dataset, we preprocess the data and then look at feature-selection techniques for reducing the number of dimensions.
The Wine dataset has three class labels, 1, 2, and 3, which represent three different grape cultivars.
End of explanation
from sklearn.preprocessing import MinMaxScaler
mms = MinMaxScaler()
X_train_norm = mms.fit_transform(X_train)
X_test_norm = mms.transform(X_test)
print('Before normalization')
print(X_train[0])
print('After normalization')
print(X_train_norm[0])
Explanation: Bringing features onto the same scale
The two common approaches are __normalization__ and __standardization__.
Normalization
Rescaling the features to the range [0, 1].
$$ x_{norm}^{(i)} = \frac{x^{(i)} - x_{min}}{x_{max} - x_{min}} $$
End of explanation
from sklearn.preprocessing import StandardScaler
stdsc = StandardScaler()
X_train_std = stdsc.fit_transform(X_train)
X_test_std = stdsc.transform(X_test)
print('Before standardization')
print(X_train[0])
print('After standardization')
print(X_train_std[0])
Explanation: Standardization
Transform the features so that they have mean 0 and standard deviation 1. This is preferable to normalization in the following respects:
The feature columns follow a normal distribution, which makes it easier to learn the weights
Useful information about outliers is preserved, so the method is less affected by them
$$ x_{std}^{(i)} = \frac{x^{(i)} - \mu_x}{\sigma_x} $$
\( \mu_x \): the mean of the feature column
\( \sigma_x \): the corresponding standard deviation
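As a quick sanity check of the formula (a sketch, not part of the original notebook), the same standardization can be computed by hand with NumPy and compared against StandardScaler:
# Manual standardization of the first feature column (illustrative)
col = X_train[:, 0]
manual = (col - col.mean()) / col.std()   # (x - mu) / sigma
# stdsc.fit_transform(X_train)[:, 0] gives the same values
# (both use the population standard deviation)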
End of explanation
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression(penalty='l1', C=0.1)
lr.fit(X_train_std, y_train)
print('Training accuracy:', lr.score(X_train_std, y_train))
print('Test accuracy:', lr.score(X_test_std, y_test))
print('Intercepts:', lr.intercept_)
print('Weight coefficients:', lr.coef_)
Explanation: Selecting meaningful features
Common ways to reduce the generalization error are as follows:
Collect more training data
Introduce a penalty for complexity via regularization
Choose a simpler model with fewer parameters
Reduce the dimensionality of the data
Sparse solutions with L1 regularization
L2 regularization was defined as:
$$ L2: \|w\|_2^2 = \sum_{j=1}^m w_j^2 $$
L1 regularization is defined as:
$$ L1: \|w\|_1 = \sum_{j=1}^m |w_j| $$
The difference is that the sum of squares is replaced by the sum of absolute values.
L1 regularization returns sparse feature vectors
Most of the feature weights become 0
This makes it useful for feature selection on high-dimensional datasets with many irrelevant features
Why does this select features?
The L2 penalty is a sum of squares, so it forms something like a circle centered at the origin.
The L1 penalty is a sum of absolute values, so it forms something like a diamond centered at the origin.
The vertices of the diamond are the most likely places for the lowest cost.
At a vertex, one of the weights is zero while the other takes its largest value.
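A minimal sketch of this effect (illustrative only; the C value is an arbitrary assumption): fit the same logistic regression with an L1 and an L2 penalty and count how many weights end up exactly zero.
# Sketch: compare weight sparsity under L1 vs. L2 regularization
lr_l1 = LogisticRegression(penalty='l1', C=0.1).fit(X_train_std, y_train)
lr_l2 = LogisticRegression(penalty='l2', C=0.1).fit(X_train_std, y_train)
print('zero weights with L1:', (lr_l1.coef_ == 0).sum())
print('zero weights with L2:', (lr_l2.coef_ == 0).sum())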
End of explanation
import matplotlib.pyplot as plt
fig = plt.figure()
ax = plt.subplot(111)
colors = ['blue', 'green', 'red', 'cyan',
'magenta', 'yellow', 'black',
'pink', 'lightgreen', 'lightblue',
'gray', 'indigo', 'orange']
weights, params = [], []
for c in np.arange(-4., 6.):
lr = LogisticRegression(penalty='l1', C=10.**c, random_state=0)
lr.fit(X_train_std, y_train)
weights.append(lr.coef_[1])
params.append(10.**c)
weights = np.array(weights)
for column, color in zip(range(weights.shape[1]), colors):
plt.plot(params, weights[:, column],
label=df_wine.columns[column + 1],
color=color)
plt.axhline(0, color='black', linestyle='--', linewidth=3)
plt.xlim([10**(-5), 10**5])
plt.ylabel('weight coefficient')
plt.xlabel('C')
plt.xscale('log')
plt.legend(loc='upper left')
ax.legend(loc='upper center',
bbox_to_anchor=(1.38, 1.03),
ncol=1, fancybox=True)
# plt.savefig('./figures/l1_path.png', dpi=300)
plt.show()
Explanation: There are three intercepts above because, to separate the three classes (grape cultivars), the first one belongs to the model fitted for class 1 versus classes 2 and 3 (and likewise for the second and third).
The weight coefficients form a 3x13 matrix, with one weight vector per class.
The net input \( z \) multiplies each feature by its corresponding weight:
$$ z = w_1x_1 + ... + w_mx_m + b = \sum_{j=1}^m x_jw_j + b = {\boldsymbol w^Tx} + b $$
Because L1 regularization drove most of the weights to zero, we obtained a model that is robust to irrelevant features.
Below is a plot of the regularization path (the feature weight coefficients as a function of the regularization strength).
End of explanation
from sklearn.base import clone
from itertools import combinations
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
class SBS():
'''逐次後退選択を実行するクラス
Parameters
---------
estimator : 推定器
k_features : 選択する特徴量の個数
scoring : 特徴量を評価する指標
test_size : テストデータの割合
random_state : 乱数シード
'''
def __init__(self, estimator, k_features, scoring=accuracy_score,
test_size=0.25, random_state=1):
self.scoring = scoring
self.estimator = clone(estimator)
self.k_features = k_features
self.test_size = test_size
self.random_state = random_state
def fit(self, X, y):
X_train, X_test, y_train, y_test = \
train_test_split(X, y, test_size=self.test_size,
random_state=self.random_state)
dim = X_train.shape[1]
self.indices_ = tuple(range(dim))
self.subsets_ = [self.indices_]
        # Compute the initial score using all features
score = self._calc_score(X_train, y_train,
X_test, y_test, self.indices_)
self.scores_ = [score]
        # Keep removing features until the desired number (k_features) remains
while dim > self.k_features:
scores = []
subsets = []
            # Iterate over every combination of column indices that forms a feature subset of size dim-1
for p in combinations(self.indices_, r=dim - 1):
score = self._calc_score(X_train, y_train,
X_test, y_test, p)
scores.append(score)
subsets.append(p)
            # Keep the subset with the best score
best = np.argmax(scores)
self.indices_ = subsets[best]
self.subsets_.append(self.indices_)
            # Reduce the number of features by one
dim -= 1
self.scores_.append(scores[best])
self.k_score_ = self.scores_[-1]
return self
def transform(self, X):
return X[:, self.indices_]
def _calc_score(self, X_train, y_train, X_test, y_test, indices):
self.estimator.fit(X_train[:, indices], y_train)
y_pred = self.estimator.predict(X_test[:, indices])
score = self.scoring(y_test, y_pred)
return score
from sklearn.neighbors import KNeighborsClassifier
import matplotlib.pyplot as plt
knn = KNeighborsClassifier(n_neighbors=2)
sbs = SBS(knn, k_features=1)
sbs.fit(X_train_std, y_train)
# number of features in each selected subset
k_feat = [len(k) for k in sbs.subsets_]
plt.plot(k_feat, sbs.scores_, marker='o')
plt.ylim([0.7, 1.1])
plt.ylabel('Accuracy')
plt.xlabel('Number of features')
plt.grid()
plt.show()
# Examine the five features that achieved 100% accuracy above
k5 = list(sbs.subsets_[8])
print(df_wine.columns[1:][k5])
# How the feature subsets shrank at each step
sbs.subsets_
# Using all features
knn.fit(X_train_std, y_train)
print('Training accuracy:', knn.score(X_train_std, y_train))
print('Test accuracy:', knn.score(X_test_std, y_test))
# Using the five selected features
knn.fit(X_train_std[:, k5], y_train)
print('Training accuracy:', knn.score(X_train_std[:, k5], y_train))
print('Test accuracy:', knn.score(X_test_std[:, k5], y_test))
Explanation: Sequential feature selection algorithms
Feature selection is one form of dimensionality reduction
Sequential feature selection is a greedy search algorithm
Greedy search algorithms are used to reduce a d-dimensional feature space to k dimensions
Goals of feature selection:
Improve computational efficiency by computing only the relevant data
Reduce the generalization error by removing irrelevant noise
Sequential Backward Selection (SBS)
Removes features one at a time
The feature to remove is determined by a criterion function \( J \); the feature whose removal causes the smallest drop in performance is removed
The steps are as follows:
Initialize the algorithm with \( k=d \), where \( d \) is the dimensionality of the full feature space \( X_d \).
Determine the feature \( x^- \) that maximizes the criterion \( J \), where \( x \in X_k \):
$$ x^- = argmax J(X_k-x) $$
Remove the feature \( x^- \) from the feature set:
$$ X_{k-1} = X_k - x^-; k := k-1 $$
Terminate when \( k \) equals the desired number of features; otherwise, go back to step 2.
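As a small usage sketch (not in the original notebook), the fitted SBS object can also project a dataset down to the selected columns via its transform method:
# Sketch: reduce the standardized data to the selected feature subset
X_train_sbs = sbs.transform(X_train_std)   # keeps only the columns in sbs.indices_
print(X_train_sbs.shape)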
End of explanation
from sklearn.ensemble import RandomForestClassifier
feat_labels = df_wine.columns[1:]
forest = RandomForestClassifier(n_estimators=10000, random_state=0, n_jobs=-1)
forest.fit(X_train, y_train)
# Extract the feature importances
importances = forest.feature_importances_
indices = np.argsort(importances)[::-1]
for f in range(X_train.shape[1]):
print("%2d) %-*s %f" % (f + 1, 30, feat_labels[indices[f]], importances[indices[f]]))
plt.title('Feature Importances')
plt.bar(range(X_train.shape[1]), importances[indices], color='lightblue', align='center')
plt.xticks(range(X_train.shape[1]), feat_labels[indices], rotation=90)
plt.xlim([-1, X_train.shape[1]])
plt.tight_layout()
plt.show()
Explanation: Assessing feature importance with random forests
Feature importance can be measured as the average decrease in impurity computed from all decision trees in the forest.
In scikit-learn, the values can be accessed via the feature_importances_ attribute
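A short sketch of how these values might be used for selection (the 0.1 threshold is an arbitrary assumption for illustration):
# Sketch: keep only features whose importance exceeds a threshold
X_selected = X_train[:, importances > 0.1]
print(X_selected.shape)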
End of explanation |
9,479 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Text mining - Clustering
Machine Learning types
Step1: Scraping
Step2: TF-IDF vectorization
Step3: K-Means clustering
Step4: Important terms according to K-Means
Step5: Hierarchical (Agglomerative) clustering
Step6: Gensim - Word2Vec | Python Code:
import time
import requests
import numpy as np
import pandas as pd
from itertools import chain
from bs4 import BeautifulSoup
import matplotlib.pyplot as plt
from textblob import TextBlob
from gensim.models import word2vec
from scipy.cluster.hierarchy import ward, dendrogram
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans, AgglomerativeClustering
Explanation: Text mining - Clustering
Machine Learning types:
- Supervised learning (labeled data),
- Unsupervised learning (not labeled data),
- Semi-supervised learning (somewhere in the middle).
In this notebook we:
- Scrape all quotes (save both all and only the first page),
- Vectorize quotes using TF-IDF vectorizer,
- TF: Term frequency = how frequently a term appears in the target observation (quote),
- IDF: Inverce document frequency = is that word unique to that selected observation (quote or not).
- Use vectorized words to cluster all the quotes using:
- k-means clustering: an unsupervised learning method that calculates distances between vectors and groups quotes that are "close" to each other under some similarity metric (e.g. Euclidean distance). The number of clusters is predetermined.
- hierarchical (agglomerative) clustering: starts with single-element clusters (a bottom-up approach) and merges similar clusters until a single cluster covers the whole input. The biggest hierarchical distance determines the number of clusters.
- Use the quotes to tokenize them (just splitting on spaces for simplicity) and calculate word vectors to retrieve similar words (uses neural networks and is considered a semi-supervised approach).
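To make the TF-IDF idea concrete, here is a tiny illustrative sketch on a toy corpus (the three example sentences are made up for illustration):
# Toy TF-IDF example: words shared by many sentences get lower weights
toy = ["the sun is bright", "the sky is blue", "bright sun, blue sky"]
toy_vec = TfidfVectorizer()
toy_matrix = toy_vec.fit_transform(toy)
print(pd.DataFrame(toy_matrix.toarray(), columns=toy_vec.get_feature_names()))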
End of explanation
def get_quotes(url):
page = BeautifulSoup(requests.get(url).content, "html.parser")
quotes = [i.get_text() for i in page.find_all("span",class_="text")]
time.sleep(3)
return quotes
quotes = get_quotes("http://quotes.toscrape.com/")
urls = ["http://quotes.toscrape.com/page/"+str(i)+"/" for i in range(1,11)]
quotes_all = [get_quotes(i) for i in urls]
quotes_all = chain.from_iterable(quotes_all)
Explanation: Scraping
End of explanation
tfidf_vectorizer = TfidfVectorizer()
tfidf_matrix = tfidf_vectorizer.fit_transform(quotes)
print(tfidf_matrix.shape)
features = tfidf_vectorizer.get_feature_names()
data = tfidf_matrix.toarray()
tfidf_df = pd.DataFrame(data,columns=features)
Explanation: TF-IDF vectorization
End of explanation
k=5
k5 = KMeans(n_clusters=k)
k5.fit(tfidf_matrix)
clusters = k5.labels_.tolist()
my_dict = {'quotes': quotes, 'cluster': clusters}
df = pd.DataFrame(my_dict)
print(df)
df.cluster.value_counts()
Explanation: K-Means clustering
End of explanation
important_terms = k5.cluster_centers_.argsort()[:, ::-1]
key_list = list(tfidf_vectorizer.vocabulary_.keys())
val_list = list(tfidf_vectorizer.vocabulary_.values())
key_list[val_list.index(74)]
for i in range(k):
for j in important_terms[i, :5]:
print("Cluster: ", i, key_list[val_list.index(j)])
Explanation: Important terms according to K-Means
End of explanation
dist = 1 - cosine_similarity(tfidf_matrix)
linkage_matrix = ward(dist)
plt.subplots(figsize=(15, 20))
dendrogram(linkage_matrix, orientation="right", labels=quotes)
plt.savefig('clusters.png')
Explanation: Hierarchical (Agglomerative) clustering
End of explanation
tokenized_sentences = [sentence.split() for sentence in quotes_all]
model = word2vec.Word2Vec(tokenized_sentences, min_count=1)
w1 = "world"
w2 = "man"
w3 = w1
print(model.wv.similarity(w1,w2))
print("\n")
model.wv.most_similar(w3)
Explanation: Gensim - Word2Vec
End of explanation |
9,480 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Locality Sensitive Hashing
Locality Sensitive Hashing (LSH) provides for a fast, efficient approximate nearest neighbor search. The algorithm scales well with respect to the number of data points as well as dimensions.
In this assignment, you will
* Implement the LSH algorithm for approximate nearest neighbor search
* Examine the accuracy for different documents by comparing against brute force search, and also contrast runtimes
* Explore the role of the algorithm’s tuning parameters in the accuracy of the method
Note to Amazon EC2 users
Step1: Upgrading to Scipy 0.16.0 or later. This assignment requires SciPy 0.16.0 or later. To upgrade, uncomment and run the following cell
Step2: Load in the Wikipedia dataset
Step3: For this assignment, let us assign a unique ID to each document.
Step4: Extract TF-IDF matrix
We first use GraphLab Create to compute a TF-IDF representation for each document.
Step6: For the remainder of the assignment, we will use sparse matrices. Sparse matrices are [matrices](https
Step7: The conversion should take a few minutes to complete.
Step8: Checkpoint
Step9: Train an LSH model
LSH performs an efficient neighbor search by randomly partitioning all reference data points into different bins. Today we will build a popular variant of LSH known as random binary projection, which approximates cosine distance. There are other variants we could use for other choices of distance metrics.
The first step is to generate a collection of random vectors from the standard Gaussian distribution.
Step10: To visualize these Gaussian random vectors, let's look at an example in low-dimensions. Below, we generate 3 random vectors each of dimension 5.
Step11: We now generate random vectors of the same dimensionality as our vocubulary size (547979). Each vector can be used to compute one bit in the bin encoding. We generate 16 vectors, leading to a 16-bit encoding of the bin index for each document.
Step12: Next, we partition data points into bins. Instead of using explicit loops, we'd like to utilize matrix operations for greater efficiency. Let's walk through the construction step by step.
We'd like to decide which bin document 0 should go. Since 16 random vectors were generated in the previous cell, we have 16 bits to represent the bin index. The first bit is given by the sign of the dot product between the first random vector and the document's TF-IDF vector.
Step13: Similarly, the second bit is computed as the sign of the dot product between the second random vector and the document vector.
Step14: We can compute all of the bin index bits at once as follows. Note the absence of the explicit for loop over the 16 vectors. Matrix operations let us batch dot-product computation in a highly efficent manner, unlike the for loop construction. Given the relative inefficiency of loops in Python, the advantage of matrix operations is even greater.
Step15: All documents that obtain exactly this vector will be assigned to the same bin. We'd like to repeat the identical operation on all documents in the Wikipedia dataset and compute the corresponding bin indices. Again, we use matrix operations so that no explicit loop is needed.
Step16: We're almost done! To make it convenient to refer to individual bins, we convert each binary bin index into a single integer
Step17: The Operators
Step18: Since it's the dot product again, we batch it with a matrix operation
Step19: This array gives us the integer index of the bins for all documents.
Now we are ready to complete the following function. Given the integer bin indices for the documents, you should compile a list of document IDs that belong to each bin. Since a list is to be maintained for each unique bin index, a dictionary of lists is used.
Compute the integer bin indices. This step is already completed.
For each document in the dataset, do the following
Step20: Checkpoint.
Step21: Note. We will be using the model trained here in the following sections, unless otherwise indicated.
Inspect bins
Let us look at some documents and see which bins they fall into.
Step22: Quiz Question. What is the document id of Barack Obama's article?
Quiz Question. Which bin contains Barack Obama's article? Enter its integer index.
Step23: Recall from the previous assignment that Joe Biden was a close neighbor of Barack Obama.
Step24: Quiz Question. Examine the bit representations of the bins containing Barack Obama and Joe Biden. In how many places do they agree?
16 out of 16 places (Barack Obama and Joe Biden fall into the same bin)
14 out of 16 places
12 out of 16 places
10 out of 16 places
8 out of 16 places
Step25: Compare the result with a former British diplomat, whose bin representation agrees with Obama's in only 8 out of 16 places.
Step26: How about the documents in the same bin as Barack Obama? Are they necessarily more similar to Obama than Biden? Let's look at which documents are in the same bin as the Barack Obama article.
Step27: There is four other documents that belong to the same bin. Which document are they?
Step28: It turns out that Joe Biden is much closer to Barack Obama than any of the four documents, even though Biden's bin representation differs from Obama's by 2 bits.
Step29: Moral of the story. Similar data points will in general tend to fall into nearby bins, but that's all we can say about LSH. In a high-dimensional space such as text features, we often get unlucky with our selection of only a few random vectors such that dissimilar data points go into the same bin while similar data points fall into different bins. Given a query document, we must consider all documents in the nearby bins and sort them according to their actual distances from the query.
Query the LSH model
Let us first implement the logic for searching nearby neighbors, which goes like this
Step31: With this output in mind, implement the logic for nearby bin search
Step32: Checkpoint. Running the function with search_radius=0 should yield the list of documents belonging to the same bin as the query.
Step33: Checkpoint. Running the function with search_radius=1 adds more documents to the fore.
Step34: Note. Don't be surprised if few of the candidates look similar to Obama. This is why we add as many candidates as our computational budget allows and sort them by their distance to the query.
Now we have a function that can return all the candidates from neighboring bins. Next we write a function to collect all candidates and compute their true distance to the query.
Step35: Let's try it out with Obama
Step36: To identify the documents, it's helpful to join this table with the Wikipedia table
Step37: We have shown that we have a working LSH implementation!
Experimenting with your LSH implementation
In the following sections we have implemented a few experiments so that you can gain intuition for how your LSH implementation behaves in different situations. This will help you understand the effect of searching nearby bins and the performance of LSH versus computing nearest neighbors using a brute force search.
Effect of nearby bin search
How does nearby bin search affect the outcome of LSH? There are three variables that are affected by the search radius
Step38: Notice that the top 10 query results become more relevant as the search radius grows. Let's plot the three variables
Step39: Some observations
Step40: Quality metrics for neighbors
The above analysis is limited by the fact that it was run with a single query, namely Barack Obama. We should repeat the analysis for the entirety of data. Iterating over all documents would take a long time, so let us randomly choose 10 documents for our analysis.
For each document, we first compute the true 25 nearest neighbors, and then run LSH multiple times. We look at two metrics
Step41: The following cell will run LSH with multiple search radii and compute the quality metrics for each run. Allow a few minutes to complete.
Step42: The observations for Barack Obama generalize to the entire dataset.
Effect of number of random vectors
Let us now turn our focus to the remaining parameter | Python Code:
import numpy as np
import graphlab
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import norm
from sklearn.metrics.pairwise import pairwise_distances
import time
from copy import copy
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: Locality Sensitive Hashing
Locality Sensitive Hashing (LSH) provides for a fast, efficient approximate nearest neighbor search. The algorithm scales well with respect to the number of data points as well as dimensions.
In this assignment, you will
* Implement the LSH algorithm for approximate nearest neighbor search
* Examine the accuracy for different documents by comparing against brute force search, and also contrast runtimes
* Explore the role of the algorithm’s tuning parameters in the accuracy of the method
Note to Amazon EC2 users: To conserve memory, make sure to stop all the other notebooks before running this notebook.
Import necessary packages
End of explanation
# !conda upgrade -y scipy
Explanation: Upgrading to Scipy 0.16.0 or later. This assignment requires SciPy 0.16.0 or later. To upgrade, uncomment and run the following cell:
End of explanation
wiki = graphlab.SFrame('people_wiki.gl/')
Explanation: Load in the Wikipedia dataset
End of explanation
wiki = wiki.add_row_number()
wiki
Explanation: For this assignment, let us assign a unique ID to each document.
End of explanation
wiki['tf_idf'] = graphlab.text_analytics.tf_idf(wiki['text'])
wiki
Explanation: Extract TF-IDF matrix
We first use GraphLab Create to compute a TF-IDF representation for each document.
End of explanation
def sframe_to_scipy(column):
    """Convert a dict-typed SArray into a SciPy sparse matrix.
Returns
-------
mat : a SciPy sparse matrix where mat[i, j] is the value of word j for document i.
    mapping : a dictionary where mapping[j] is the word whose values are in column j.
    """
# Create triples of (row_id, feature_id, count).
x = graphlab.SFrame({'X1':column})
# 1. Add a row number.
x = x.add_row_number()
# 2. Stack will transform x to have a row for each unique (row, key) pair.
x = x.stack('X1', ['feature', 'value'])
# Map words into integers using a OneHotEncoder feature transformation.
f = graphlab.feature_engineering.OneHotEncoder(features=['feature'])
# We first fit the transformer using the above data.
f.fit(x)
# The transform method will add a new column that is the transformed version
# of the 'word' column.
x = f.transform(x)
# Get the feature mapping.
mapping = f['feature_encoding']
# Get the actual word id.
x['feature_id'] = x['encoded_features'].dict_keys().apply(lambda x: x[0])
# Create numpy arrays that contain the data for the sparse matrix.
i = np.array(x['id'])
j = np.array(x['feature_id'])
v = np.array(x['value'])
width = x['id'].max() + 1
height = x['feature_id'].max() + 1
# Create a sparse matrix.
mat = csr_matrix((v, (i, j)), shape=(width, height))
return mat, mapping
Explanation: For the remainder of the assignment, we will use sparse matrices. Sparse matrices are [matrices](https://en.wikipedia.org/wiki/Matrix_(mathematics%29 ) that have a small number of nonzero entries. A good data structure for sparse matrices would only store the nonzero entries to save space and speed up computation. SciPy provides a highly-optimized library for sparse matrices. Many matrix operations available for NumPy arrays are also available for SciPy sparse matrices.
We first convert the TF-IDF column (in dictionary format) into the SciPy sparse matrix format.
End of explanation
start=time.time()
corpus, mapping = sframe_to_scipy(wiki['tf_idf'])
end=time.time()
print end-start
Explanation: The conversion should take a few minutes to complete.
End of explanation
assert corpus.shape == (59071, 547979)
print 'Check passed correctly!'
Explanation: Checkpoint: The following code block should return 'Check passed correctly', indicating that your matrix contains TF-IDF values for 59071 documents and 547979 unique words. Otherwise, it will return Error.
End of explanation
def generate_random_vectors(num_vector, dim):
return np.random.randn(dim, num_vector)
Explanation: Train an LSH model
LSH performs an efficient neighbor search by randomly partitioning all reference data points into different bins. Today we will build a popular variant of LSH known as random binary projection, which approximates cosine distance. There are other variants we could use for other choices of distance metrics.
The first step is to generate a collection of random vectors from the standard Gaussian distribution.
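As a toy illustration (a sketch, not part of the assignment), in two dimensions each random vector defines a line through the origin, and the sign of the dot product says on which side of that line a data point falls; that sign is the bit:
# Toy 2-D sketch: one random vector => one bit per data point
np.random.seed(1)
toy_points = np.random.randn(5, 2)          # five 2-D data points
toy_vector = np.random.randn(2, 1)          # one random direction
toy_bits = toy_points.dot(toy_vector) >= 0  # True/False bit for each point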
End of explanation
# Generate 3 random vectors of dimension 5, arranged into a single 5 x 3 matrix.
np.random.seed(0) # set seed=0 for consistent results
generate_random_vectors(num_vector=3, dim=5)
Explanation: To visualize these Gaussian random vectors, let's look at an example in low-dimensions. Below, we generate 3 random vectors each of dimension 5.
End of explanation
# Generate 16 random vectors of dimension 547979
np.random.seed(0)
random_vectors = generate_random_vectors(num_vector=16, dim=547979)
random_vectors.shape
Explanation: We now generate random vectors of the same dimensionality as our vocabulary size (547979). Each vector can be used to compute one bit in the bin encoding. We generate 16 vectors, leading to a 16-bit encoding of the bin index for each document.
End of explanation
doc = corpus[0, :] # vector of tf-idf values for document 0
doc.dot(random_vectors[:, 0]) >= 0 # True if positive sign; False if negative sign
Explanation: Next, we partition data points into bins. Instead of using explicit loops, we'd like to utilize matrix operations for greater efficiency. Let's walk through the construction step by step.
We'd like to decide which bin document 0 should go into. Since 16 random vectors were generated in the previous cell, we have 16 bits to represent the bin index. The first bit is given by the sign of the dot product between the first random vector and the document's TF-IDF vector.
End of explanation
doc.dot(random_vectors[:, 1]) >= 0 # True if positive sign; False if negative sign
Explanation: Similarly, the second bit is computed as the sign of the dot product between the second random vector and the document vector.
End of explanation
doc.dot(random_vectors) >= 0 # should return an array of 16 True/False bits
np.array(doc.dot(random_vectors) >= 0, dtype=int) # display index bits in 0/1's
Explanation: We can compute all of the bin index bits at once as follows. Note the absence of the explicit for loop over the 16 vectors. Matrix operations let us batch dot-product computation in a highly efficent manner, unlike the for loop construction. Given the relative inefficiency of loops in Python, the advantage of matrix operations is even greater.
End of explanation
corpus[0:2].dot(random_vectors) >= 0 # compute bit indices of first two documents
corpus.dot(random_vectors) >= 0 # compute bit indices of ALL documents
Explanation: All documents that obtain exactly this vector will be assigned to the same bin. We'd like to repeat the identical operation on all documents in the Wikipedia dataset and compute the corresponding bin indices. Again, we use matrix operations so that no explicit loop is needed.
End of explanation
np.arange(15, -1, -1)
Explanation: We're almost done! To make it convenient to refer to individual bins, we convert each binary bin index into a single integer:
Bin index integer
[0,0,0,0,0,0,0,0,0,0,0,0] => 0
[0,0,0,0,0,0,0,0,0,0,0,1] => 1
[0,0,0,0,0,0,0,0,0,0,1,0] => 2
[0,0,0,0,0,0,0,0,0,0,1,1] => 3
...
[1,1,1,1,1,1,1,1,1,1,0,0] => 65532
[1,1,1,1,1,1,1,1,1,1,0,1] => 65533
[1,1,1,1,1,1,1,1,1,1,1,0] => 65534
[1,1,1,1,1,1,1,1,1,1,1,1] => 65535 (= 2^16-1)
By the rules of binary number representation, we just need to compute the dot product between the bit vector and the vector consisting of powers of 2:
Note: the next cell uses the left-shift operator (<<); see the explanation of the operator that follows it.
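For instance, a short hand-worked sketch of the conversion (illustrative only): the 4-bit vector [1, 0, 1, 1] dotted with [8, 4, 2, 1] gives 8 + 2 + 1 = 11.
# Toy 4-bit sketch of the binary-to-integer conversion
toy_bits = np.array([True, False, True, True])
toy_powers = 1 << np.arange(3, -1, -1)   # array([8, 4, 2, 1])
toy_index = toy_bits.dot(toy_powers)     # 11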
End of explanation
doc = corpus[0, :] # first document
index_bits = (doc.dot(random_vectors) >= 0)
powers_of_two = (1 << np.arange(15, -1, -1))
print index_bits
print powers_of_two
print index_bits.dot(powers_of_two)
Explanation: The Operators:
x << y
- Returns x with the bits shifted to the left by y places (and new bits on the right-hand-side are zeros). This is the same as multiplying x by 2**y.
End of explanation
index_bits = corpus.dot(random_vectors) >= 0
index_bits.dot(powers_of_two)
Explanation: Since it's the dot product again, we batch it with a matrix operation:
End of explanation
def train_lsh(data, num_vector=16, seed=None):
    dim = data.shape[1]
if seed is not None:
np.random.seed(seed)
random_vectors = generate_random_vectors(num_vector, dim)
powers_of_two = 1 << np.arange(num_vector-1, -1, -1)
table = {}
# Partition data points into bins
bin_index_bits = (data.dot(random_vectors) >= 0)
# Encode bin index bits into integers
bin_indices = bin_index_bits.dot(powers_of_two)
# Update `table` so that `table[i]` is the list of document ids with bin index equal to i.
for data_index, bin_index in enumerate(bin_indices):
if bin_index not in table:
# If no list yet exists for this bin, assign the bin an empty list.
table[bin_index] = [] # YOUR CODE HERE
# Fetch the list of document ids associated with the bin and add the document id to the end.
# data_index: document ids
# append() will add a list of document ids to table dict() with key as bin_index
table[bin_index].append(data_index) # YOUR CODE HERE
model = {'data': data,
'bin_index_bits': bin_index_bits,
'bin_indices': bin_indices,
'table': table,
'random_vectors': random_vectors,
'num_vector': num_vector}
return model
Explanation: This array gives us the integer index of the bins for all documents.
Now we are ready to complete the following function. Given the integer bin indices for the documents, you should compile a list of document IDs that belong to each bin. Since a list is to be maintained for each unique bin index, a dictionary of lists is used.
Compute the integer bin indices. This step is already completed.
For each document in the dataset, do the following:
Get the integer bin index for the document.
Fetch the list of document ids associated with the bin; if no list yet exists for this bin, assign the bin an empty list.
Add the document id to the end of the list.
End of explanation
model = train_lsh(corpus, num_vector=16, seed=143)
table = model['table']
if 0 in table and table[0] == [39583] and \
143 in table and table[143] == [19693, 28277, 29776, 30399]:
print 'Passed!'
else:
print 'Check your code.'
Explanation: Checkpoint.
End of explanation
wiki[wiki['name'] == 'Barack Obama']
Explanation: Note. We will be using the model trained here in the following sections, unless otherwise indicated.
Inspect bins
Let us look at some documents and see which bins they fall into.
End of explanation
model
# document id of Barack Obama
wiki[wiki['name'] == 'Barack Obama']['id'][0]
# bin_index contains Barack Obama's article
print model['bin_indices'][35817] # integer format
Explanation: Quiz Question. What is the document id of Barack Obama's article?
Quiz Question. Which bin contains Barack Obama's article? Enter its integer index.
End of explanation
wiki[wiki['name'] == 'Joe Biden']
Explanation: Recall from the previous assignment that Joe Biden was a close neighbor of Barack Obama.
End of explanation
# document id of Joe Biden
wiki[wiki['name'] == 'Joe Biden']['id'][0]
# bin_index of Joe Biden
print np.array(model['bin_index_bits'][24478], dtype=int) # list of 0/1's
# bit representations of the bins containing Joe Biden
print model['bin_indices'][24478] # integer format
model['bin_index_bits'][35817] == model['bin_index_bits'][24478]
# sum of bits agree between Barack Obama and Joe Biden
sum(model['bin_index_bits'][35817] == model['bin_index_bits'][24478])
Explanation: Quiz Question. Examine the bit representations of the bins containing Barack Obama and Joe Biden. In how many places do they agree?
16 out of 16 places (Barack Obama and Joe Biden fall into the same bin)
14 out of 16 places
12 out of 16 places
10 out of 16 places
8 out of 16 places
End of explanation
wiki[wiki['name']=='Wynn Normington Hugh-Jones']
print np.array(model['bin_index_bits'][22745], dtype=int) # list of 0/1's
print model['bin_indices'][22745] # integer format
model['bin_index_bits'][35817] == model['bin_index_bits'][22745]
Explanation: Compare the result with a former British diplomat, whose bin representation agrees with Obama's in only 8 out of 16 places.
End of explanation
model['table'][model['bin_indices'][35817]]
Explanation: How about the documents in the same bin as Barack Obama? Are they necessarily more similar to Obama than Biden? Let's look at which documents are in the same bin as the Barack Obama article.
End of explanation
doc_ids = list(model['table'][model['bin_indices'][35817]])
doc_ids.remove(35817) # display documents other than Obama
docs = wiki.filter_by(values=doc_ids, column_name='id') # filter by id column
docs
Explanation: There is four other documents that belong to the same bin. Which document are they?
End of explanation
def cosine_distance(x, y):
xy = x.dot(y.T)
dist = xy/(norm(x)*norm(y))
return 1-dist[0,0]
obama_tf_idf = corpus[35817,:]
biden_tf_idf = corpus[24478,:]
print '================= Cosine distance from Barack Obama'
print 'Barack Obama - {0:24s}: {1:f}'.format('Joe Biden',
cosine_distance(obama_tf_idf, biden_tf_idf))
for doc_id in doc_ids:
doc_tf_idf = corpus[doc_id,:]
print 'Barack Obama - {0:24s}: {1:f}'.format(wiki[doc_id]['name'],
cosine_distance(obama_tf_idf, doc_tf_idf))
Explanation: It turns out that Joe Biden is much closer to Barack Obama than any of the four documents, even though Biden's bin representation differs from Obama's by 2 bits.
End of explanation
from itertools import combinations
num_vector = 16
search_radius = 3
for diff in combinations(range(num_vector), search_radius):
print diff
Explanation: Moral of the story. Similar data points will in general tend to fall into nearby bins, but that's all we can say about LSH. In a high-dimensional space such as text features, we often get unlucky with our selection of only a few random vectors such that dissimilar data points go into the same bin while similar data points fall into different bins. Given a query document, we must consider all documents in the nearby bins and sort them according to their actual distances from the query.
Query the LSH model
Let us first implement the logic for searching nearby neighbors, which goes like this:
1. Let L be the bit representation of the bin that contains the query documents.
2. Consider all documents in bin L.
3. Consider documents in the bins whose bit representation differs from L by 1 bit.
4. Consider documents in the bins whose bit representation differs from L by 2 bits.
...
To obtain candidate bins that differ from the query bin by some number of bits, we use itertools.combinations, which produces all possible subsets of a given list. See this documentation for details.
1. Decide on the search radius r. This will determine the number of different bits between the two vectors.
2. For each subset (n_1, n_2, ..., n_r) of the list [0, 1, 2, ..., num_vector-1], do the following:
* Flip the bits (n_1, n_2, ..., n_r) of the query bin to produce a new bit vector.
* Fetch the list of documents belonging to the bin indexed by the new bit vector.
* Add those documents to the candidate set.
Each line of output from the following cell is a 3-tuple indicating where the candidate bin would differ from the query bin. For instance,
(0, 1, 3)
indicates that the candiate bin differs from the query bin in first, second, and fourth bits.
End of explanation
def search_nearby_bins(query_bin_bits, table, search_radius=2, initial_candidates=set()):
    """For a given query vector and trained LSH model, return all candidate neighbors for
the query among all bins within the given search radius.
Example usage
-------------
>>> model = train_lsh(corpus, num_vector=16, seed=143)
>>> q = model['bin_index_bits'][0] # vector for the first document
    >>> candidates = search_nearby_bins(q, model['table'])
    """
num_vector = len(query_bin_bits)
powers_of_two = 1 << np.arange(num_vector-1, -1, -1)
# Allow the user to provide an initial set of candidates.
candidate_set = copy(initial_candidates)
for different_bits in combinations(range(num_vector), search_radius):
# Flip the bits (n_1,n_2,...,n_r) of the query bin to produce a new bit vector.
## Hint: you can iterate over a tuple like a list
alternate_bits = copy(query_bin_bits)
for i in different_bits:
# Flip the bits
alternate_bits[i] = ~alternate_bits[i] # YOUR CODE HERE
# Convert the new bit vector to an integer index
nearby_bin = alternate_bits.dot(powers_of_two)
# Fetch the list of documents belonging to the bin indexed by the new bit vector.
# Then add those documents to candidate_set
# Make sure that the bin exists in the table!
# Hint: update() method for sets lets you add an entire list to the set
if nearby_bin in table:
more_docs = table[nearby_bin] # Get all document_ids of the bin
candidate_set.update(more_docs) # YOUR CODE HERE: Update candidate_set with the documents in this bin.
return candidate_set
Explanation: With this output in mind, implement the logic for nearby bin search:
End of explanation
obama_bin_index = model['bin_index_bits'][35817] # bin index of Barack Obama
candidate_set = search_nearby_bins(obama_bin_index, model['table'], search_radius=0)
if candidate_set == set([35817, 21426, 53937, 39426, 50261]):
print 'Passed test'
else:
print 'Check your code'
print 'List of documents in the same bin as Obama: 35817, 21426, 53937, 39426, 50261'
Explanation: Checkpoint. Running the function with search_radius=0 should yield the list of documents belonging to the same bin as the query.
End of explanation
candidate_set = search_nearby_bins(obama_bin_index, model['table'], search_radius=1, initial_candidates=candidate_set)
if candidate_set == set([39426, 38155, 38412, 28444, 9757, 41631, 39207, 59050, 47773, 53937, 21426, 34547,
23229, 55615, 39877, 27404, 33996, 21715, 50261, 21975, 33243, 58723, 35817, 45676,
19699, 2804, 20347]):
print 'Passed test'
else:
print 'Check your code'
Explanation: Checkpoint. Running the function with search_radius=1 adds more documents to the fore.
End of explanation
def query(vec, model, k, max_search_radius):
data = model['data']
table = model['table']
random_vectors = model['random_vectors']
num_vector = random_vectors.shape[1]
# Compute bin index for the query vector, in bit representation.
bin_index_bits = (vec.dot(random_vectors) >= 0).flatten()
# Search nearby bins and collect candidates
candidate_set = set()
for search_radius in xrange(max_search_radius+1):
candidate_set = search_nearby_bins(bin_index_bits, table, search_radius, initial_candidates=candidate_set)
# Sort candidates by their true distances from the query
nearest_neighbors = graphlab.SFrame({'id':candidate_set})
candidates = data[np.array(list(candidate_set)),:]
nearest_neighbors['distance'] = pairwise_distances(candidates, vec, metric='cosine').flatten()
return nearest_neighbors.topk('distance', k, reverse=True), len(candidate_set)
Explanation: Note. Don't be surprised if few of the candidates look similar to Obama. This is why we add as many candidates as our computational budget allows and sort them by their distance to the query.
Now we have a function that can return all the candidates from neighboring bins. Next we write a function to collect all candidates and compute their true distance to the query.
End of explanation
query(corpus[35817,:], model, k=10, max_search_radius=3)
Explanation: Let's try it out with Obama:
End of explanation
query(corpus[35817,:], model, k=10, max_search_radius=3)[0].join(wiki[['id', 'name']], on='id').sort('distance')
Explanation: To identify the documents, it's helpful to join this table with the Wikipedia table:
End of explanation
wiki[wiki['name']=='Barack Obama']
num_candidates_history = []
query_time_history = []
max_distance_from_query_history = []
min_distance_from_query_history = []
average_distance_from_query_history = []
for max_search_radius in xrange(17):
start=time.time()
result, num_candidates = query(corpus[35817,:], model, k=10,
max_search_radius=max_search_radius)
end=time.time()
query_time = end-start
print 'Radius:', max_search_radius
print result.join(wiki[['id', 'name']], on='id').sort('distance')
average_distance_from_query = result['distance'][1:].mean()
max_distance_from_query = result['distance'][1:].max()
min_distance_from_query = result['distance'][1:].min()
num_candidates_history.append(num_candidates)
query_time_history.append(query_time)
average_distance_from_query_history.append(average_distance_from_query)
max_distance_from_query_history.append(max_distance_from_query)
min_distance_from_query_history.append(min_distance_from_query)
Explanation: We have shown that we have a working LSH implementation!
Experimenting with your LSH implementation
In the following sections we have implemented a few experiments so that you can gain intuition for how your LSH implementation behaves in different situations. This will help you understand the effect of searching nearby bins and the performance of LSH versus computing nearest neighbors using a brute force search.
Effect of nearby bin search
How does nearby bin search affect the outcome of LSH? There are three variables that are affected by the search radius:
* Number of candidate documents considered
* Query time
* Distance of approximate neighbors from the query
Let us run LSH multiple times, each with different radii for nearby bin search. We will measure the three variables as discussed above.
End of explanation
plt.figure(figsize=(7,4.5))
plt.plot(num_candidates_history, linewidth=4)
plt.xlabel('Search radius')
plt.ylabel('# of documents searched')
plt.rcParams.update({'font.size':16})
plt.tight_layout()
plt.figure(figsize=(7,4.5))
plt.plot(query_time_history, linewidth=4)
plt.xlabel('Search radius')
plt.ylabel('Query time (seconds)')
plt.rcParams.update({'font.size':16})
plt.tight_layout()
plt.figure(figsize=(7,4.5))
plt.plot(average_distance_from_query_history, linewidth=4, label='Average of 10 neighbors')
plt.plot(max_distance_from_query_history, linewidth=4, label='Farthest of 10 neighbors')
plt.plot(min_distance_from_query_history, linewidth=4, label='Closest of 10 neighbors')
plt.xlabel('Search radius')
plt.ylabel('Cosine distance of neighbors')
plt.legend(loc='best', prop={'size':15})
plt.rcParams.update({'font.size':16})
plt.tight_layout()
Explanation: Notice that the top 10 query results become more relevant as the search radius grows. Let's plot the three variables:
End of explanation
for i, v in enumerate(average_distance_from_query_history):
if v <= 0.78:
print i, v
Explanation: Some observations:
* As we increase the search radius, we find more neighbors that are a smaller distance away.
* With increased search radius comes a greater number of documents that have to be searched. Query time is higher as a consequence.
* With sufficiently high search radius, the results of LSH begin to resemble the results of brute-force search.
Quiz Question. What was the smallest search radius that yielded the correct nearest neighbor, namely Joe Biden?
Quiz Question. Suppose our goal was to produce 10 approximate nearest neighbors whose average distance from the query document is within 0.01 of the average for the true 10 nearest neighbors. For Barack Obama, the true 10 nearest neighbors are on average about 0.77. What was the smallest search radius for Barack Obama that produced an average distance of 0.78 or better?
Answer: What was the smallest search radius that yielded the correct nearest neighbor, namely Joe Biden?
Based on result table, the answer is: Radius: 2
Answer. Suppose our goal was to produce 10 approximate nearest neighbors whose average distance from the query document is within 0.01 of the average for the true 10 nearest neighbors. For Barack Obama, the true 10 nearest neighbors are on average about 0.77. What was the smallest search radius for Barack Obama that produced an average distance of 0.78 or better?
- Clearly, the smallest search radius is 7
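As a sanity check (this snippet is not part of the assignment), the first answer can also be verified programmatically by finding the smallest radius at which Joe Biden (id 24478) appears among the top results:
for radius in xrange(17):
    result, _ = query(corpus[35817,:], model, k=2, max_search_radius=radius)
    if 24478 in list(result['id']):
        print radius  # smallest radius that returns Joe Biden
        break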
End of explanation
def brute_force_query(vec, data, k):
num_data_points = data.shape[0]
# Compute distances for ALL data points in training set
nearest_neighbors = graphlab.SFrame({'id':range(num_data_points)})
nearest_neighbors['distance'] = pairwise_distances(data, vec, metric='cosine').flatten()
return nearest_neighbors.topk('distance', k, reverse=True)
Explanation: Quality metrics for neighbors
The above analysis is limited by the fact that it was run with a single query, namely Barack Obama. We should repeat the analysis for the entire dataset. Iterating over all documents would take a long time, so let us randomly choose 10 documents for our analysis.
For each document, we first compute the true 25 nearest neighbors, and then run LSH multiple times. We look at two metrics:
Precision@10: How many of the 10 neighbors given by LSH are among the true 25 nearest neighbors?
Average cosine distance of the neighbors from the query
Then we run LSH multiple times with different search radii.
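For clarity, here is a toy illustration of the Precision@10 computation used below (the ids are made up, not taken from the experiment):
lsh_ids = set([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])  # 10 ids returned by LSH
true_ids = set(range(2, 52, 2))                 # 25 true nearest neighbors
precision_at_10 = len(lsh_ids & true_ids) / 10.0
print precision_at_10                           # 0.5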
End of explanation
max_radius = 17
precision = {i:[] for i in xrange(max_radius)}
average_distance = {i:[] for i in xrange(max_radius)}
query_time = {i:[] for i in xrange(max_radius)}
np.random.seed(0)
num_queries = 10
for i, ix in enumerate(np.random.choice(corpus.shape[0], num_queries, replace=False)):
print('%s / %s' % (i, num_queries))
ground_truth = set(brute_force_query(corpus[ix,:], corpus, k=25)['id'])
# Get the set of 25 true nearest neighbors
for r in xrange(1,max_radius):
start = time.time()
result, num_candidates = query(corpus[ix,:], model, k=10, max_search_radius=r)
end = time.time()
query_time[r].append(end-start)
# precision = (# of neighbors both in result and ground_truth)/10.0
precision[r].append(len(set(result['id']) & ground_truth)/10.0)
average_distance[r].append(result['distance'][1:].mean())
plt.figure(figsize=(7,4.5))
plt.plot(range(1,17), [np.mean(average_distance[i]) for i in xrange(1,17)], linewidth=4, label='Average over 10 neighbors')
plt.xlabel('Search radius')
plt.ylabel('Cosine distance')
plt.legend(loc='best', prop={'size':15})
plt.rcParams.update({'font.size':16})
plt.tight_layout()
plt.figure(figsize=(7,4.5))
plt.plot(range(1,17), [np.mean(precision[i]) for i in xrange(1,17)], linewidth=4, label='Precision@10')
plt.xlabel('Search radius')
plt.ylabel('Precision')
plt.legend(loc='best', prop={'size':15})
plt.rcParams.update({'font.size':16})
plt.tight_layout()
plt.figure(figsize=(7,4.5))
plt.plot(range(1,17), [np.mean(query_time[i]) for i in xrange(1,17)], linewidth=4, label='Query time')
plt.xlabel('Search radius')
plt.ylabel('Query time (seconds)')
plt.legend(loc='best', prop={'size':15})
plt.rcParams.update({'font.size':16})
plt.tight_layout()
Explanation: The following cell will run LSH with multiple search radii and compute the quality metrics for each run. Allow a few minutes to complete.
End of explanation
precision = {i:[] for i in xrange(5,20)}
average_distance = {i:[] for i in xrange(5,20)}
query_time = {i:[] for i in xrange(5,20)}
num_candidates_history = {i:[] for i in xrange(5,20)}
ground_truth = {}
np.random.seed(0)
num_queries = 10
docs = np.random.choice(corpus.shape[0], num_queries, replace=False)
for i, ix in enumerate(docs):
ground_truth[ix] = set(brute_force_query(corpus[ix,:], corpus, k=25)['id'])
# Get the set of 25 true nearest neighbors
for num_vector in xrange(5,20):
print('num_vector = %s' % (num_vector))
model = train_lsh(corpus, num_vector, seed=143)
for i, ix in enumerate(docs):
start = time.time()
result, num_candidates = query(corpus[ix,:], model, k=10, max_search_radius=3)
end = time.time()
query_time[num_vector].append(end-start)
precision[num_vector].append(len(set(result['id']) & ground_truth[ix])/10.0)
average_distance[num_vector].append(result['distance'][1:].mean())
num_candidates_history[num_vector].append(num_candidates)
plt.figure(figsize=(7,4.5))
plt.plot(range(5,20), [np.mean(average_distance[i]) for i in xrange(5,20)], linewidth=4, label='Average over 10 neighbors')
plt.xlabel('# of random vectors')
plt.ylabel('Cosine distance')
plt.legend(loc='best', prop={'size':15})
plt.rcParams.update({'font.size':16})
plt.tight_layout()
plt.figure(figsize=(7,4.5))
plt.plot(range(5,20), [np.mean(precision[i]) for i in xrange(5,20)], linewidth=4, label='Precision@10')
plt.xlabel('# of random vectors')
plt.ylabel('Precision')
plt.legend(loc='best', prop={'size':15})
plt.rcParams.update({'font.size':16})
plt.tight_layout()
plt.figure(figsize=(7,4.5))
plt.plot(range(5,20), [np.mean(query_time[i]) for i in xrange(5,20)], linewidth=4, label='Query time (seconds)')
plt.xlabel('# of random vectors')
plt.ylabel('Query time (seconds)')
plt.legend(loc='best', prop={'size':15})
plt.rcParams.update({'font.size':16})
plt.tight_layout()
plt.figure(figsize=(7,4.5))
plt.plot(range(5,20), [np.mean(num_candidates_history[i]) for i in xrange(5,20)], linewidth=4,
label='# of documents searched')
plt.xlabel('# of random vectors')
plt.ylabel('# of documents searched')
plt.legend(loc='best', prop={'size':15})
plt.rcParams.update({'font.size':16})
plt.tight_layout()
Explanation: The observations for Barack Obama generalize to the entire dataset.
Effect of number of random vectors
Let us now turn our focus to the remaining parameter: the number of random vectors. We run LSH with different number of random vectors, ranging from 5 to 20. We fix the search radius to 3.
Allow a few minutes for the following cell to complete.
End of explanation |
9,481 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Write a program that reads the user's name from the keyboard (e.g. Mr. right), asks for the birth month and day, and determines the user's zodiac sign. For example, if the user is a Taurus, the output should be: Mr. right, you are a Taurus with a lot of personality!
Step1: Write a program that reads two integers m and n from the keyboard (n must not be 0) and asks the user what to do: to sum, compute and print the sum from m to n; to multiply, compute and print the product from m to n; for a remainder, compute and print the remainder of m divided by n; otherwise compute and print the integer division of m by n.
Step2: Write a program that gives protective advice based on the Beijing PM2.5 smog reading. For example, when the PM2.5 value is greater than 500, the air purifier should be turned on, an anti-smog mask worn, and so on.
Step3: Convert an English word from singular to plural: read an English word (singular form) and output its plural form, or print advice on how to form the plural (hint: some_string.endswith(some_letter) checks how a string ends; try running 'myname'.endswith('me') and 'liupengyuan'.endswith('n')).
Step4: Write a program that prints a blank line on the screen.
Step5: Write a program that reads several integers from the user and outputs the second-largest value among them.
name=input('请输入你的名字')
print(name)
date=float(input('请输入你的生日'))
if 1.19<date<2.19:
print('你是水瓶座')
elif 2.18<date<3.21:
print('你是双鱼座')
elif 3.20<date<4.20:
print('你是白羊座')
elif 4.19<date<5.21:
print('你是金牛座')
elif 5.20<date<6.22:
print('你是双子座')
elif 6.21<date<7.23:
print('你是巨蟹座')
elif 7.22<date<8.23:
print('你是狮子座')
elif 8.22<date<9.23:
print('你是处女座')
elif 9.22<date<10.24:
print('你是天枰座')
elif 10.23<date<11.23:
print('你是天蝎座')
elif 11.22<date<12.22:
print('你是射手座')
elif date>12.21 or date<1.20:
print('你是摩羯座')
Explanation: Write a program that reads the user's name from the keyboard (e.g. Mr. right), asks for the birth month and day, and determines the user's zodiac sign. For example, if the user is a Taurus, the output should be: Mr. right, you are a Taurus with a lot of personality!
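An alternative sketch (not the original solution) that reads the month and day separately avoids the fragile float comparison; the boundary dates follow the common Western convention and may differ slightly between sources:
signs = [(1, 20, '摩羯座'), (2, 19, '水瓶座'), (3, 21, '双鱼座'), (4, 20, '白羊座'),
         (5, 21, '金牛座'), (6, 22, '双子座'), (7, 23, '巨蟹座'), (8, 23, '狮子座'),
         (9, 23, '处女座'), (10, 24, '天秤座'), (11, 23, '天蝎座'), (12, 22, '射手座'),
         (12, 32, '摩羯座')]  # each entry: birthdays strictly before (month, day) get this sign
name = input('请输入你的名字')
month = int(input('请输入出生月份'))
day = int(input('请输入出生日期'))
for m, d, sign in signs:
    if (month, day) < (m, d):
        break
print(name + ',你是非常有性格的' + sign + '!')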
End of explanation
m=int(input('请输入一个整数'))
n=int(input('请输入一个不为零的整数'))
i=int(input('请输入0,1,2或其他数字'))
total=m
product=m
if i==0 and m>n:
while m>n:
total=total+n
n=n+1
print(total)
elif i==0 and m<=n:
while m<n:
total=total+n
n=n-1
print(total)
elif i==1 and m>n:
while m>n:
product=product*n
n=n+1
print(product)
elif i==1 and m<=n:
while m<n:
product=product*n
n=n-1
print(product)
elif i==2:
remainder=m%n
print(remainder)
else:
result=m//n
print(result)
Explanation: Write a program that reads two integers m and n from the keyboard (n must not be 0) and asks the user what to do: to sum, compute and print the sum from m to n; to multiply, compute and print the product from m to n; for a remainder, compute and print the remainder of m divided by n; otherwise compute and print the integer division of m by n.
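An equivalent sketch using Python built-ins (not the original solution; the prompt strings here are illustrative):
m = int(input('请输入一个整数'))
n = int(input('请输入一个不为零的整数'))
choice = input('求和输入0,求积输入1,求余数输入2,其他求整除')
lo, hi = min(m, n), max(m, n)
if choice == '0':
    print(sum(range(lo, hi + 1)))
elif choice == '1':
    product = 1
    for k in range(lo, hi + 1):
        product *= k
    print(product)
elif choice == '2':
    print(m % n)
else:
    print(m // n)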
End of explanation
n=int(input('请输入今日雾霾指数'))
if n>500:
print('请打开空气净化器,戴防雾霾口罩')
else:
print('空气状况良好')
Explanation: Write a program that gives protective advice based on the Beijing PM2.5 smog reading. For example, when the PM2.5 value is greater than 500, the air purifier should be turned on, an anti-smog mask worn, and so on.
End of explanation
word=str(input('请输入一个单词'))
if word.endswith('s') or word.endswith('es'):
print(word,'es', sep = '')
elif word.endswith('y'):
print('变y为i加es')
else:
print(word,'s', sep ='')
Explanation: Convert an English word from singular to plural: read an English word (singular form) and output its plural form, or print advice on how to form the plural (hint: some_string.endswith(some_letter) checks how a string ends; try running 'myname'.endswith('me') and 'liupengyuan'.endswith('n')).
End of explanation
print()
Explanation: Write a program that prints a blank line on the screen.
End of explanation
m = int(input('请输入要输入的整数个数,回车结束。'))
largest = int(input('请输入一个整数,回车结束'))
second = int(input('请输入一个整数,以回车结束'))
# Keep 'largest' and 'second' ordered so that largest >= second.
if second > largest:
    largest, second = second, largest
i = 2
while i < m:
    i += 1
    n = int(input('请输入一个整数,回车结束'))
    if n > largest:
        second = largest
        largest = n
    elif n > second:
        second = n
print(second)
Explanation: Write a program that reads several integers from the user and outputs the second-largest value among them.
End of explanation |
9,482 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
In this notebook, the article's simulations of the default procedure are rerun and compared to the procedure using Scipy's optimization function scipy.optimize.least_squares.
Relevant modules are first imported
Step1: Below we define the simulated acquisition parameters
Step2: Next the ground truth values of tissue and water diffusion are defined
Step3: Having all parameters set, the simulations are processed below
Step4: Now we process the simulated diffusion-weighted signals using the article's default procedure. In addition, the computing time of this procedure is measured.
Step5: Below we plot the results obtained from the default free water DTI fit procedure.
Step6: Similar to the default free water fit procedure, the procedure using scipy's optimization function scipy.optimize.least_squares is tested on the simulated diffusion-weighted signals and timed.
Step7: Below we plot the results obtained from the free water DTI fit procedure that uses scipy's optimization function scipy.optimize.least_squares.
Step8: From the figures above, one can see that both procedures have comparable performances. However, the procedure using scipy's scipy.optimize.least_squares turns out to be almost 4 times slower than the article's default procedure.
import numpy as np
import matplotlib.pyplot as plt
import matplotlib
import time
import sys
import os
%matplotlib inline
# Change directory to the code folder
os.chdir('..//code')
# Functions to sample the diffusion-weighted gradient directions
from dipy.core.sphere import disperse_charges, HemiSphere
# Function to reconstruct the tables with the acquisition information
from dipy.core.gradients import gradient_table
# Functions to perform simulations based on multi-compartment models
from dipy.sims.voxel import multi_tensor
# Import Dipy's procedures to process diffusion tensor
import dipy.reconst.dti as dti
# Importing procedures to fit the free water elimination DTI model
from functions import (nls_fit_tensor, nls_fit_tensor_bounds)
Explanation: In this notebook, the article's simulations of the default procedure are rerun and compared to the procedure using Scipy's optimization function scipy.optimize.least_squares.
Relevant modules are first imported:
End of explanation
# Sample the spherical coordinates of 32 random diffusion-weighted
# directions.
n_pts = 32
theta = np.pi * np.random.rand(n_pts)
phi = 2 * np.pi * np.random.rand(n_pts)
# Convert the directions to Cartesian coordinates. For this, Dipy's
# class object HemiSphere is used. Since diffusion possesses central
# symmetry, this class object also projects the directions onto a
# hemisphere.
hsph_initial = HemiSphere(theta=theta, phi=phi)
# By using an electrostatic potential energy algorithm, the directions
# of the HemiSphere class object are moved until they are evenly
# distributed over the hemisphere
hsph_updated, potential = disperse_charges(hsph_initial, 5000)
directions = hsph_updated.vertices
# Based on the evenly sampled directions, the acquisition parameters are
# simulated. Vector bvals contains the information of the b-values
# while matrix bvecs contains all gradient directions for all b-value repetitions.
bvals = np.hstack((np.zeros(6), 500 * np.ones(n_pts), 1500 * np.ones(n_pts)))
bvecs = np.vstack((np.zeros((6, 3)), directions, directions))
# bvals and bvecs are converted according to Dipy's accepted format using
# Dipy's function gradient_table
gtab = gradient_table(bvals, bvecs)
# The SNR is defined according to Hoy et al, 2014
SNR = 40
Explanation: Below we define the simulated acquisition parameters:
End of explanation
# Simulations are repeated for 11 free water volume fractions
VF = np.linspace(0, 100, num=11)
# The value of free water diffusion is set to its known value
Dwater = 3e-3
# Simulations are repeated for 5 levels of fractional anisotropy
FA = np.array([0.71, 0.30, 0.22, 0.11, 0.])
L1 = np.array([1.6e-3, 1.080e-3, 1.000e-3, 0.900e-3, 0.8e-03])
L2 = np.array([0.5e-3, 0.695e-3, 0.725e-3, 0.763e-3, 0.8e-03])
L3 = np.array([0.3e-3, 0.625e-3, 0.675e-3, 0.738e-3, 0.8e-03])
# According to Hoy et al., simulations are repeated for 120 different
# diffusion tensor directions (and each direction repeated 100 times).
nDTdirs = 120
nrep = 100
# These directions are sampled using the same procedure used
# to evenly sample the diffusion gradient directions
theta = np.pi * np.random.rand(nDTdirs)
phi = 2 * np.pi * np.random.rand(nDTdirs)
hsph_initial = HemiSphere(theta=theta, phi=phi)
hsph_updated, potential = disperse_charges(hsph_initial, 5000)
DTdirs = hsph_updated.vertices
Explanation: Next the ground truth values of tissue and water diffusion are defined:
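For reference (standard DTI theory, not text from the original notebook), the fractional anisotropy values in FA correspond to the eigenvalues L1, L2, L3 through
$$\mathrm{FA} = \sqrt{\tfrac{1}{2}}\;\frac{\sqrt{(\lambda_1-\lambda_2)^2 + (\lambda_2-\lambda_3)^2 + (\lambda_3-\lambda_1)^2}}{\sqrt{\lambda_1^2+\lambda_2^2+\lambda_3^2}}$$
so, for instance, the isotropic case L1 = L2 = L3 = 0.8e-3 gives FA = 0, matching the last entry of the FA array.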
End of explanation
# Initializing a matrix to save all synthetic diffusion-weighted
# signals. Each dimension of this matrix corresponds to the number
# of simulated FA levels, volume fractions, diffusion tensor
# directions, and diffusion-weighted signals of the given
# gradient table
DWI_simulates = np.empty((FA.size, VF.size, nrep * nDTdirs,
bvals.size))
for fa_i in range(FA.size):
# selecting the diffusion eigenvalues for a given FA level
mevals = np.array([[L1[fa_i], L2[fa_i], L3[fa_i]],
[Dwater, Dwater, Dwater]])
for vf_i in range(VF.size):
# estimating volume fractions for both simulations
# compartments
fractions = [100 - VF[vf_i], VF[vf_i]]
for di in range(nDTdirs):
# Select a diffusion tensor direction
d = DTdirs[di]
# Repeat simulations for the given directions
for s_i in np.arange(di * nrep, (di+1) * nrep):
# Multi-compartmental simulations are done using
# Dipy's function multi_tensor
signal, sticks = multi_tensor(gtab, mevals,
S0=100,
angles=[d, (1, 0, 0)],
fractions=fractions,
snr=SNR)
DWI_simulates[fa_i, vf_i, s_i, :] = signal
prog = (fa_i+1.0) / FA.size * 100
time.sleep(1)
sys.stdout.write("\r%f%%" % prog)
sys.stdout.flush()
Explanation: Having all parameters set, the simulations are processed below:
End of explanation
# All simulations are fitted simultaneously using function nls_fit_tensor
t0 = time.time()
fw_params = nls_fit_tensor(gtab, DWI_simulates, Diso=Dwater)
dt = time.time() - t0
print("This step took %f seconds to run" % dt)
Explanation: Now we process the simulated diffusion-weighted signals using the article's default procedure. In addition, the computing time of this procedure is measured.
End of explanation
fig, axs = plt.subplots(nrows=2, ncols=2, figsize=(10, 10))
# Compute the tissue's diffusion tensor fractional anisotropy
# using function fractional_anisotropy of Dipy's module dti
fa = dti.fractional_anisotropy(fw_params[..., :3])
f = fw_params[..., 12]
# Initializing vectors for FA statistics
median_fa = np.empty(VF.size)
lower_p = np.empty(VF.size)
upper_p = np.empty(VF.size)
# Defining the colors of the figure
colors = {0: 'r', 1: 'magenta', 2: 'black', 3: 'b', 4: 'g'}
for fa_i in range(FA.size):
for vf_i in range(VF.size):
# Compute FA statistics for a given ground truth FA
# level and a water volume fraction
median_fa[vf_i] = np.median(fa[fa_i, vf_i, :])
p25, p75 = np.percentile(fa[fa_i, vf_i, :], [25, 75])
lower_p[vf_i] = median_fa[vf_i] - p25
upper_p[vf_i] = p75 - median_fa[vf_i]
# Plot FA statistics as a function of the ground truth
# water volume fraction
axs[0, 0].errorbar(VF/100, median_fa, fmt='.',
yerr=[lower_p, upper_p],
color=colors[fa_i],
ecolor=colors[fa_i],
linewidth=1.0,
label='$FA: %.2f$' % FA[fa_i])
# Adjust properties of the first panel of the figure
axs[0, 0].set_ylim([-0.1, 1.2])
axs[0, 0].set_xlim([-0.1, 1.2])
axs[0, 0].set_xlabel('Simulated f-value')
axs[0, 0].set_ylabel('Estimated FA')
axs[0, 0].legend(loc='center left', bbox_to_anchor=(1, 0.5))
# Turn off the upper right panel since it is not used.
axs[0, 1].axis('off')
# Initializing vectors for volume fraction statistics
median_f = np.empty(VF.size)
lower_p = np.empty(VF.size)
upper_p = np.empty(VF.size)
for idx, fa_i in enumerate([0, 4]):
for vf_i in range(VF.size):
# Compute FA statistics for a given ground truth FA
# level and a water volume fraction. Note that only
# the extreme FA values are plotted.
median_f[vf_i] = np.median(f[fa_i, vf_i, :])
p25, p75 = np.percentile(f[fa_i, vf_i, :], [25, 75])
lower_p[vf_i] = median_f[vf_i] - p25
upper_p[vf_i] = p75 - median_f[vf_i]
# Plot the water volume fraction statistics as a function
# of its ground truth value in a lower panel of the
# figure.
axs[1, idx].errorbar(VF/100, median_f, fmt='.',
yerr=[lower_p, upper_p],
color=colors[fa_i],
ecolor=colors[fa_i],
linewidth=3.0,
label='$FA: %.2f$' % FA[fa_i])
# plot identity lines
axs[1, idx].plot([0, 1], [0, 1], 'b', label='Simulated f-value')
# Adjust properties of a given lower panel of the figure
axs[1, idx].legend(loc='upper left')
axs[1, idx].set_ylim([-0.1, 1.2])
axs[1, idx].set_xlim([-0.1, 1.2])
axs[1, idx].set_xlabel('Simulated f-value')
axs[1, idx].set_ylabel('Estimated f-value')
# Save figure
fig.savefig('fwdti_simulations.png')
Explanation: Below we plot the results obtained from the default free water DTI fit procedure.
End of explanation
t0 = time.time()
fw_params = nls_fit_tensor_bounds(gtab, DWI_simulates, Diso=Dwater)
dt_bounds = time.time() - t0
print("This step took %f seconds to run" % dt_bounds)
Explanation: Similar to the default free water fit procedure, the procedure using scipy's optimization function scipy.optimize.least_squares is tested on the simulated diffusion-weighted signals and timed.
End of explanation
fig, axs = plt.subplots(nrows=2, ncols=2, figsize=(10, 10))
# Compute the tissue's diffusion tensor fractional anisotropy
# using function fractional_anisotropy of Dipy's module dti
fa = dti.fractional_anisotropy(fw_params[..., :3])
f = fw_params[..., 12]
# Initializing vectors for FA statistics
median_fa = np.empty(VF.size)
lower_p = np.empty(VF.size)
upper_p = np.empty(VF.size)
# Defining the colors of the figure
colors = {0: 'r', 1: 'magenta', 2: 'black', 3: 'b', 4: 'g'}
for fa_i in range(FA.size):
for vf_i in range(VF.size):
# Compute FA statistics for a given ground truth FA
# level and a water volume fraction
median_fa[vf_i] = np.median(fa[fa_i, vf_i, :])
p25, p75 = np.percentile(fa[fa_i, vf_i, :], [25, 75])
lower_p[vf_i] = median_fa[vf_i] - p25
upper_p[vf_i] = p75 - median_fa[vf_i]
# Plot FA statistics as a function of the ground truth
# water volume fraction
axs[0, 0].errorbar(VF/100, median_fa, fmt='.',
yerr=[lower_p, upper_p],
color=colors[fa_i],
ecolor=colors[fa_i],
linewidth=1.0,
label='$FA: %.2f$' % FA[fa_i])
# Adjust properties of the first panel of the figure
axs[0, 0].set_ylim([-0.1, 1.2])
axs[0, 0].set_xlim([-0.1, 1.2])
axs[0, 0].set_xlabel('Simulated f-value')
axs[0, 0].set_ylabel('Estimated FA')
axs[0, 0].legend(loc='center left', bbox_to_anchor=(1, 0.5))
# Turn off the upper right panel since it is not used.
axs[0, 1].axis('off')
# Initializing vectors for volume fraction statistics
median_f = np.empty(VF.size)
lower_p = np.empty(VF.size)
upper_p = np.empty(VF.size)
for idx, fa_i in enumerate([0, 4]):
for vf_i in range(VF.size):
# Compute FA statistics for a given ground truth FA
# level and a water volume fraction. Note that only
# the extreme FA values are plotted.
median_f[vf_i] = np.median(f[fa_i, vf_i, :])
p25, p75 = np.percentile(f[fa_i, vf_i, :], [25, 75])
lower_p[vf_i] = median_f[vf_i] - p25
upper_p[vf_i] = p75 - median_f[vf_i]
# Plot the water volume fraction statistics as a function
# of its ground truth value in a lower panel of the
# figure.
axs[1, idx].errorbar(VF/100, median_f, fmt='.',
yerr=[lower_p, upper_p],
color=colors[fa_i],
ecolor=colors[fa_i],
linewidth=3.0,
label='$FA: %.2f$' % FA[fa_i])
# plot identity lines
axs[1, idx].plot([0, 1], [0, 1], 'b', label='Simulated f-value')
# Adjust properties of a given lower panel of the figure
axs[1, idx].legend(loc='upper left')
axs[1, idx].set_ylim([-0.1, 1.2])
axs[1, idx].set_xlim([-0.1, 1.2])
axs[1, idx].set_xlabel('Simulated f-value')
axs[1, idx].set_ylabel('Estimated f-value')
# Save figure
fig.savefig('fwdti_simulations_bounds.png')
Explanation: Below we plot the results obtained from the free water DTI fit procedure that uses scipy's optimization function scipy.optimize.least_squares.
End of explanation
print(dt_bounds/dt)
Explanation: From the figures above, one can see that both procedures have comparable performances. However, the procedure using scipy's scipy.optimize.least_squares optimization turns out to be almost 4 times slower than the article's default procedure.
End of explanation |
9,483 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Two-layer neural network
In this notebook a two-layer neural network is implemented from scratch following the methodology of the course http
Step1: The initial variance is scaled by a factor of $\sqrt[]{\frac{2}{N}}$, where $N$ is the number of inputs to each neuron in the layer (as per http
Step2: We use the notation $h_{i,1}$, $h_{i,2}$ to specify the output of $i-$th layer before and after a nonlinearity, respectively. For example, if the $i-$th hidden layer contains a sigmoid activation function, then $$h_{i,1}=h_{i-1,2}W_i$$ and $$h_{i,2}=\sigma({h_{i,1}}).$$ Additionally, the bias trick could be applied to the output with nonlinearity in order to implicitly account for the bias in the weights.
Step3: The loss function is defined as the mean of sample losses $$ L_i = -f_i[y_i] + \log\Sigma_j\, e^{f_i[j]},\; \text{where }\; f_i=x_i^TW.$$ The final loss is then
Step4: During forward prop, we compose multiple functions to get the final output. Those functions could be simple dot products in case of weights, or complicated nonlinear functions within neurons. An important question when doing backpropagation then is w.r.t. what to differentiate when applying the chain rule?
For example, assume the final output is a composition of $f_1, f_2,$ and $f_3$, i.e. $f(X) = f_3(f_2(f_1(X)))$.
We could apply the chain rule directly
Step5: Putting it all together
Now that all the necessary functions are defined, we can prepare the data and train the network.
Step6: Training steps
Step7: Visualization
Step8: Tinkering with numpy
This part is used for tinkering with numpy to make sure operations are performed in the desired way and dimensions are preserved. | Python Code:
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# Data generation obtained from http://cs231n.github.io/neural-networks-case-study/
def generate_data(N, K):
D = 2 # Dimensionality
X = np.zeros((N * K, D)) # Data matrix (each row = single example)
y = np.zeros(N * K, dtype='uint8') # Class labels
for j in xrange(K):
ix = range(N * j, N * (j + 1))
r = np.linspace(0.0, 1, N) # radius
t = np.linspace(j * 8, (j + 1) * 8, N) + np.random.randn(N) * 0.2 # theta
X[ix] = np.c_[r * np.sin(t), r * np.cos(t)]
y[ix] = j
plt.scatter(X[:, 0], X[:, 1], c=y, s=40, cmap=plt.cm.Spectral) # Visualize
plt.xlim([-1,1])
plt.ylim([-1,1])
return X, y
# Example:
generate_data(300, 3);
Explanation: Two-layer neural network
In this notebook a two-layer neural network is implemented from scratch following the methodology of the course http://cs231n.github.io/.
The structure of the network is the following:
INPUT -> FC -> ReLU -> FC -> OUTPUT -> SOFTMAX LOSS.
The goal of this notebook is to store some of my thoughts obtained while going through the course. Especially notes about backpropagation in case of composition of functions, since I found it difficult to fully understand w.r.t. (with respect to) which variable the derivatives are computed at each stage.
Important concept presented here is distinguishing between the layer's input and output, which makes the understanding of backpropagation easier.
End of explanation
# Initialization
def initialize(num_inputs, num_hidden):
# +1 is added to account for the bias trick.
W1 = np.random.randn(num_inputs + 1, num_hidden) * np.sqrt(2.0 / (num_inputs + 1))
W2 = np.random.randn(num_hidden + 1, num_classes) * np.sqrt(2.0 / (num_hidden + 1))
return W1, W2
Explanation: The initial variance is scaled by a factor of $\sqrt[]{\frac{2}{N}}$, where $N$ is the number of inputs to each neuron in the layer (as per http://cs231n.github.io/neural-networks-2/), in order to provide the same initial output variance at each neuron.
End of explanation
# Forward propagate
def forw_prop(X, W1, W2):
# Hidden layer.
h11 = X.dot(W1)
h12 = np.maximum(0, h11) # ReLU nonlinearity
# Bias trick.
h12 = np.c_[h12, np.ones(h12.shape[0])]
# Final layer.
f = h12.dot(W2)
# Softmax transformation.
probs = np.exp(f)
prob_sums = probs.sum(axis=1, keepdims=True)
probs /= prob_sums
return probs, h11, h12
Explanation: We use the notation $h_{i,1}$, $h_{i,2}$ to specify the output of $i-$th layer before and after a nonlinearity, respectively. For example, if the $i-$th hidden layer contains a sigmoid activation function, then $$h_{i,1}=h_{i-1,2}W_i$$ and $$h_{i,2}=\sigma({h_{i,1}}).$$ Additionally, the bias trick could be applied to the output with nonlinearity in order to implicitly account for the bias in the weights.
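A tiny illustration of the bias trick mentioned above (toy numbers, not part of the network code): appending a column of ones lets an extra row of the next weight matrix act as the bias.
h = np.array([[0.5, 1.2],
              [0.0, 2.0]])                   # layer output for 2 samples
h_with_bias = np.c_[h, np.ones(h.shape[0])]  # shape (2, 3); the ones column multiplies the bias row
print h_with_bias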
End of explanation
# Compute the softmax loss http://cs231n.github.io/linear-classify/#softmax
def calc_loss(probs, y, W1, W2, reg):
data_loss = -np.mean(np.log(probs[range(y.shape[0]), y]))
reg_loss = reg * 0.5 * (np.sum(W1 * W1) + np.sum(W2 * W2))
return data_loss + reg_loss
Explanation: The loss function is defined as the mean of sample losses $$ L_i = -f_i[y_i] + \log\Sigma_j\, e^{f_i[j]},\; \text{where }\; f_i=x_i^TW.$$ The final loss is then: $$L = \frac{1}{N} \Sigma_i^N\, L_i + \frac{\lambda}{2}(||W_1||_2^2 + ||W_2||_2^2)$$
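For later use in the backward pass (a standard result, added here for clarity rather than taken from the notebook), the gradient of $L_i$ with respect to the class scores is
$$\frac{\partial L_i}{\partial f_i[j]} = \frac{e^{f_i[j]}}{\sum_k e^{f_i[k]}} - \mathbb{1}[j = y_i] = p_i[j] - \mathbb{1}[j = y_i],$$
which is exactly what the lines dL_df = probs and dL_df[range(y.shape[0]), y] -= 1 in the backpropagation code below implement.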
End of explanation
# Backpropagate
def back_prop(probs, X, y, h11, h12, W1, W2, reg):
# Partial derivatives at the final layer.
dL_df = probs
dL_df[range(y.shape[0]), y] -= 1
dL_df /= num_train
# Propagate back to the weights, along with the regularization term.
dL_dW2 = h12.T.dot(dL_df) + reg * W2
# At the output of the hidden layer.
dL_dh12 = dL_df.dot(W2.T)
# Propagate back through nonlinearities to the input of the layer.
dL_dh11 = dL_dh12[:,:-1] # Account for bias trick.
dL_dh11[h11 < 0] = 0 # ReLU
dL_dW1 = X.T.dot(dL_dh11) + reg * W1
return dL_dW1, dL_dW2
def accuracy(X, y, W1, W2):
h = np.maximum(0, X.dot(W1))
h = np.c_[h, np.ones(h.shape[0])]
f = h.dot(W2)
return np.mean(np.argmax(f, axis=1) == y)
Explanation: During forward prop, we compose multiple functions to get the final output. Those functions could be simple dot products in case of weights, or complicated nonlinear functions within neurons. An important question when doing backpropagation then is w.r.t. what to differentiate when applying the chain rule?
For example, assume the final output is a composition of $f_1, f_2,$ and $f_3$, i.e. $f(X) = f_3(f_2(f_1(X)))$.
We could apply the chain rule directly:
$$\frac{\partial{f}}{\partial{X}} = \frac{\partial{f_3}}{\partial{f_2}}\frac{\partial{f_2}}{\partial{f_1}}\frac{\partial{f_1}}{\partial{X}},$$
or, for example, define $g = f_3 \circ f_2$ to get:
$$\frac{\partial{f}}{\partial{X}} = \frac{\partial{g}}{\partial{f_1}} \frac{\partial{f_1}}{\partial{X}}.$$
The common approach is to combine the nonlinear function(s) at the hidden layer, differentiate w.r.t. the output of the hidden layer, and backpropagate through the nonlinearity to get the derivative w.r.t. the input of the layer.
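One way to gain confidence in these derivatives (not part of the original notebook) is a numerical gradient check that compares the analytic gradient from back_prop with a centered finite difference of the loss, sketched here for W2:
def numerical_grad_W2(X, y, W1, W2, reg, eps=1e-5):
    # Centered finite-difference estimate of dL/dW2, entry by entry.
    grad = np.zeros_like(W2)
    for r in range(W2.shape[0]):
        for c in range(W2.shape[1]):
            W2[r, c] += eps
            probs, _, _ = forw_prop(X, W1, W2)
            loss_plus = calc_loss(probs, y, W1, W2, reg)
            W2[r, c] -= 2 * eps
            probs, _, _ = forw_prop(X, W1, W2)
            loss_minus = calc_loss(probs, y, W1, W2, reg)
            W2[r, c] += eps
            grad[r, c] = (loss_plus - loss_minus) / (2 * eps)
    return grad
Once the training data is prepared below, numerical_grad_W2(X_train, y_train, W1, W2, reg) should agree with dW2 from back_prop up to small numerical error.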
End of explanation
# Hyperparameters.
reg = 0.001
step_size = 0.1
num_hidden = 200
data_per_class = 300 # Number of points per class
num_classes = 3 # Number of classes
X, y = generate_data(data_per_class, num_classes)
num_inputs = X.shape[1]
W1, W2 = initialize(num_inputs, num_hidden)
# Preprocess the data.
# Split data into train and test data.
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.33)
num_train = X_train.shape[0]
num_test = X_test.shape[0]
# The bias trick.
X_train = np.c_[X_train, np.ones(num_train)]
X_test = np.c_[X_test, np.ones(num_test)]
Explanation: Putting it all together
Now that all the necessary functions are defined, we can prepare the data and train the network.
End of explanation
# Now we can perform gradient descent.
for i in xrange(5001):
probs, h11, h12 = forw_prop(X_train, W1, W2)
loss = calc_loss(probs, y_train, W1, W2, reg)
dW1, dW2 = back_prop(probs, X_train, y_train, h11, h12, W1, W2, reg)
W1 -= step_size * dW1
W2 -= step_size * dW2
if i % 500 == 0:
print "Step %4d. Loss=%.3f, train accuracy=%.5f" % (i, loss, accuracy(X_train, y_train, W1, W2))
print "Test accuracy=%.5f" % accuracy(X_test, y_test, W1, W2)
Explanation: Training steps
End of explanation
# Plot the resulting classifier on the test data.
h = 0.02
x_min, x_max = X_test[:, 0].min() - 1, X_test[:, 0].max() + 1
y_min, y_max = X_test[:, 1].min() - 1, X_test[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
h = np.maximum(0, np.dot(np.c_[xx.ravel(), yy.ravel(), np.ones(xx.ravel().shape)], W1))
h = np.c_[h, np.ones(h.shape[0])]
Z = np.dot(h, W2)
Z = np.argmax(Z, axis=1)
Z = Z.reshape(xx.shape)
fig = plt.figure()
plt.contourf(xx, yy, Z, cmap=plt.cm.Spectral, alpha=0.8)
plt.scatter(X[:, 0], X[:, 1], c=y, s=40, cmap=plt.cm.Spectral)
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max());
Explanation: Visualization
End of explanation
# Tinkering
a = np.array([[-1, 4, 5], [2, 8, 0]])
print a
print np.sum(a, axis=1)
a / a.sum(axis=1, keepdims=True).astype(float)
print np.maximum(0, a)
Explanation: Tinkering with numpy
This part is used for tinkering with numpy to make sure operations are performed in the desired way and dimensions are preserved.
End of explanation |
9,484 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Note that you have to execute the command jupyter notebook in the parent directory of
this directory for otherwise jupyter won't be able to access the file style.css.
Step1: This example has been extracted from the official documentation of Ply.
A Tokenizer for Numbers and the Arithmetical Operators
The module ply.lex contains the code that is necessary to create a scanner.
Step2: We start with a definition of the <em style="color
Step3: Next, we define regular expressions that define the tokens that are to be recognized.
Note that some operators have to be prefixed with a backslash since these operators are
also used as operators for regular expressions. Note also that the token names have to be prefixed with
the string t_.
Step4: If we need to transform a token, we can define the token via a function. In that case, the first line of the function
has to be a string that is a regular expression. This regular expression then defines the token. After that,
we can add code to transform the token. The string that makes up the token is stored in t.value. Below, this string
is transformed into an integer.
Step5: The rule below is used to keep track of line numbers. We use the function length since there might be
more than one newline.
Step6: The keyword t_ignore specifies those characters that should be discarded.
In this case, spaces and tabs are ignored.
Step7: All characters not recognized by any of the defined tokens are handled by the function t_error.
The function t.lexer.skip(1) skips the character that has not been recognized. Scanning resumes
after this character.
Step8: Below the function lex.lex() creates the lexer specified above. Since this code is expected to be part
of some python file but really isn't since it is placed in a Jupyter notebook we have to set the variable
__file__ manually to fool the system into believing that the code given above is located in a file
called hugo.py. Of course, the name hugo is totally irrelevant and could be replaced by any other name.
Step10: Lets test the generated scanner, that is stored in lexer, with the following string
Step11: Let us feed the scanner with the string data.
Step12: Now we put the lexer to work by using it as an iterable. | Python Code:
from IPython.core.display import HTML
with open ("../style.css", "r") as file:
css = file.read()
HTML(css)
Explanation: Note that you have to execute the command jupyter notebook in the parent directory of
this directory, since otherwise jupyter won't be able to access the file style.css.
End of explanation
import ply.lex as lex
Explanation: This example has been extracted from the official documentation of Ply.
A Tokenizer for Numbers and the Arithmetical Operators
The module ply.lex contains the code that is necessary to create a scanner.
End of explanation
tokens = [
'NUMBER',
'PLUS',
'MINUS',
'TIMES',
'DIVIDE',
'LPAREN',
'RPAREN'
]
Explanation: We start with a definition of the <em style="color:blue">token names</em>. Note that all token names have to start with
a capital letter.
End of explanation
t_PLUS = r'\+'
t_MINUS = r'-'
t_TIMES = r'\*'
t_DIVIDE = r'/'
t_LPAREN = r'\('
t_RPAREN = r'\)'
Explanation: Next, we define regular expressions that define the tokens that are to be recognized.
Note that some operators have to be prefixed with a backslash since these operators are
also used as operators for regular expressions. Note also that the token names have to be prefixed with
the string t_.
End of explanation
def t_NUMBER(t):
r'0|[1-9][0-9]*'
t.value = int(t.value)
return t
Explanation: If we need to transform a token, we can define the token via a function. In that case, the first line of the function
has to be a string that is a regular expression. This regular expression then defines the token. After that,
we can add code to transform the token. The string that makes up the token is stored in t.value. Below, this string
is transformed into an integer.
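As a further illustration (this token is hypothetical and not part of the example grammar), a floating-point literal could be handled the same way; it would also require adding 'FLOAT' to the tokens list and defining it before t_NUMBER so that '3.14' is not split:
def t_FLOAT(t):
    r'\d+\.\d+'
    t.value = float(t.value)
    return t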
End of explanation
def t_newline(t):
r'\n+'
t.lexer.lineno += len(t.value)
Explanation: The rule below is used to keep track of line numbers. We use the function len since there might be
more than one newline.
End of explanation
t_ignore = ' \t'
Explanation: The keyword t_ignore specifies those characters that should be discarded.
In this case, spaces and tabs are ignored.
End of explanation
def t_error(t):
print("Illegal character '%s'" % t.value[0])
t.lexer.skip(1)
Explanation: All characters not recognized by any of the defined tokens are handled by the function t_error.
The function t.lexer.skip(1) skips the character that has not been recognized. Scanning resumes
after this character.
End of explanation
__file__ = 'hugo'
lexer = lex.lex()
Explanation: Below the function lex.lex() creates the lexer specified above. Since this code is expected to be part
of some python file but really isn't since it is placed in a Jupyter notebook we have to set the variable
__file__ manually to fool the system into believing that the code given above is located in a file
called hugo.py. Of course, the name hugo is totally irrelevant and could be replaced by any other name.
End of explanation
data = '3 + 4 * 10 + 007 + (-20) * 2'
Explanation: Let's test the generated scanner, which is stored in lexer, with the following string:
End of explanation
lexer.input(data)
Explanation: Let us feed the scanner with the string data.
End of explanation
for tok in lexer:
print(tok)
Explanation: Now we put the lexer to work by using it as an iterable.
End of explanation |
9,485 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Implementing a Neural Network from Scratch - An Introduction
In this post we will implement a simple 3-layer neural network from scratch. We won't derive all the math that's required, but I will try to give an intuitive explanation of what we are doing and will point to resources to read up on the details.
In this post I'm assuming that you are familiar with basic Calculus and Machine Learning concepts, e.g. you know what classification and regularization is. Ideally you also know a bit about how optimization techniques like gradient descent work. But even if you're not familiar with any of the above this post could still turn out to be interesting ;)
But why implement a Neural Network from scratch at all? Even if you plan on using Neural Network libraries like PyBrain in the future, implementing a network from scratch at least once is an extremely valuable exercise. It helps you gain an understanding of how neural networks work, and that is essential to designing effective models.
One thing to note is that the code examples here aren't terribly efficient. They are meant to be easy to understand. In an upcoming post I will explore how to write an efficient Neural Network implementation using Theano.
Step1: Generating a dataset
Let's start by generating a dataset we can play with. Fortunately, scikit-learn has some useful dataset generators, so we don't need to write the code ourselves. We will go with the make_moons function.
Step2: The dataset we generated has two classes, plotted as red and blue points. You can think of the blue dots as male patients and the red dots as female patients, with the x- and y- axis being medical measurements.
Our goal is to train a Machine Learning classifier that predicts the correct class (male or female) given the x- and y- coordinates. Note that the data is not linearly separable: we can't draw a straight line that separates the two classes. This means that linear classifiers, such as Logistic Regression, won't be able to fit the data unless you hand-engineer non-linear features (such as polynomials) that work well for the given dataset.
In fact, that's one of the major advantages of Neural Networks. You don't need to worry about feature engineering. The hidden layer of a neural network will learn features for you.
Logistic Regression
To demonstrate the point let's train a Logistic Regression classifier. It's input will be the x- and y-values and the output the predicted class (0 or 1). To make our life easy we use the Logistic Regression class from scikit-learn.
Step3: The graph shows the decision boundary learned by our Logistic Regression classifier. It separates the data as well as it can using a straight line, but it's unable to capture the "moon shape" of our data.
Training a Neural Network
Let's now build a 3-layer neural network with one input layer, one hidden layer, and one output layer. The number of nodes in the input layer is determined by the dimensionality of our data, 2. Similarly, the number of nodes in the output layer is determined by the number of classes we have, also 2. (Because we only have 2 classes we could actually get away with only one output node predicting 0 or 1, but having 2 makes it easier to extend the network to more classes later on). The input to the network will be x- and y- coordinates and its output will be two probabilities, one for class 0 ("female") and one for class 1 ("male"). It looks something like this
Step4: First let's implement the loss function we defined above. We use this to evaluate how well our model is doing
Step5: We also implement a helper function to calculate the output of the network. It does forward propagation as defined above and returns the class with the highest probability.
Step6: Finally, here comes the function to train our Neural Network. It implements batch gradient descent using the backpropagation derivatives we found above.
Step7: A network with a hidden layer of size 3
Let's see what happens if we train a network with a hidden layer size of 3.
Step8: Yay! This looks pretty good. Our neural network was able to find a decision boundary that successfully separates the classes.
Varying the hidden layer size
In the example above we picked a hidden layer size of 3. Let's now get a sense of how varying the hidden layer size affects the result. | Python Code:
# Package imports
import matplotlib.pyplot as plt
import numpy as np
import sklearn
import sklearn.datasets
import sklearn.linear_model
import matplotlib
# Display plots inline and change default figure size
%matplotlib inline
matplotlib.rcParams['figure.figsize'] = (10.0, 8.0)
Explanation: Implementing a Neural Network from Scratch - An Introduction
In this post we will implement a simple 3-layer neural network from scratch. We won't derive all the math that's required, but I will try to give an intuitive explanation of what we are doing and will point to resources to read up on the details.
In this post I'm assuming that you are familiar with basic Calculus and Machine Learning concepts, e.g. you know what classification and regularization is. Ideally you also know a bit about how optimization techniques like gradient descent work. But even if you're not familiar with any of the above this post could still turn out to be interesting ;)
But why implement a Neural Network from scratch at all? Even if you plan on using Neural Network libraries like PyBrain in the future, implementing a network from scratch at least once is an extremely valuable exercise. It helps you gain an understanding of how neural networks work, and that is essential to designing effective models.
One thing to note is that the code examples here aren't terribly efficient. They are meant to be easy to understand. In an upcoming post I will explore how to write an efficient Neural Network implementation using Theano.
End of explanation
# Generate a dataset and plot it
np.random.seed(0)
X, y = sklearn.datasets.make_moons(200, noise=0.20)
plt.scatter(X[:,0], X[:,1], s=40, c=y, cmap=plt.cm.Spectral)
Explanation: Generating a dataset
Let's start by generating a dataset we can play with. Fortunately, scikit-learn has some useful dataset generators, so we don't need to write the code ourselves. We will go with the make_moons function.
End of explanation
# Train the logistic regression classifier
clf = sklearn.linear_model.LogisticRegressionCV()
clf.fit(X, y)
# Helper function to plot a decision boundary.
# If you don't fully understand this function don't worry, it just generates the contour plot below.
def plot_decision_boundary(pred_func):
# Set min and max values and give it some padding
x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5
y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
h = 0.01
# Generate a grid of points with distance h between them
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
# Predict the function value for the whole grid
Z = pred_func(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
# Plot the contour and training examples
plt.contourf(xx, yy, Z, cmap=plt.cm.Spectral)
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.Spectral)
# Plot the decision boundary
plot_decision_boundary(lambda x: clf.predict(x))
plt.title("Logistic Regression")
Explanation: The dataset we generated has two classes, plotted as red and blue points. You can think of the blue dots as male patients and the red dots as female patients, with the x- and y- axis being medical measurements.
Our goal is to train a Machine Learning classifier that predicts the correct class (male or female) given the x- and y- coordinates. Note that the data is not linearly separable: we can't draw a straight line that separates the two classes. This means that linear classifiers, such as Logistic Regression, won't be able to fit the data unless you hand-engineer non-linear features (such as polynomials) that work well for the given dataset.
In fact, that's one of the major advantages of Neural Networks. You don't need to worry about feature engineering. The hidden layer of a neural network will learn features for you.
Logistic Regression
To demonstrate the point let's train a Logistic Regression classifier. It's input will be the x- and y-values and the output the predicted class (0 or 1). To make our life easy we use the Logistic Regression class from scikit-learn.
End of explanation
num_examples = len(X) # training set size
nn_input_dim = 2 # input layer dimensionality
nn_output_dim = 2 # output layer dimensionality
# Gradient descent parameters (I picked these by hand)
epsilon = 0.01 # learning rate for gradient descent
reg_lambda = 0.01 # regularization strength
Explanation: The graph shows the decision boundary learned by our Logistic Regression classifier. It separates the data as well as it can using a straight line, but it's unable to capture the "moon shape" of our data.
Training a Neural Network
Let's now build a 3-layer neural network with one input layer, one hidden layer, and one output layer. The number of nodes in the input layer is determined by the dimensionality of our data, 2. Similarly, the number of nodes in the output layer is determined by the number of classes we have, also 2. (Because we only have 2 classes we could actually get away with only one output node predicting 0 or 1, but having 2 makes it easier to extend the network to more classes later on). The input to the network will be x- and y- coordinates and its output will be two probabilities, one for class 0 ("female") and one for class 1 ("male"). It looks something like this:
<img src='./nn-3-layer-network.png' style='width: 50%'/>
We can choose the dimensionality (the number of nodes) of the hidden layer. The more nodes we put into the hidden layer the more complex functions we will be able to fit. But higher dimensionality comes at a cost. First, more computation is required to make predictions and learn the network parameters. A bigger number of parameters also means we become more prone to overfitting our data.
How to choose the size of the hidden layer? While there are some general guidelines and recommendations, it always depends on your specific problem and is more of an art than a science. We will play with the number of nodes in the hidden layer later on and see how it affects our output.
We also need to pick an activation function for our hidden layer. The activation function transforms the inputs of the layer into its outputs. A nonlinear activation function is what allows us to fit nonlinear hypotheses. Common choices for activation functions are tanh, the sigmoid function, or [ReLUs](https://en.wikipedia.org/wiki/Rectifier_(neural_networks). We will use tanh, which performs quite well in many scenarios. A nice property of these functions is that their derivative can be computed using the original function value. For example, the derivative of $\tanh x$ is $1-\tanh^2 x$. This is useful because it allows us to compute $\tanh x$ once and re-use its value later on to get the derivative.
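The claim about the tanh derivative is easy to check numerically (a small aside, not from the original post):
x = np.linspace(-2, 2, 5)
eps = 1e-6
numeric = (np.tanh(x + eps) - np.tanh(x - eps)) / (2 * eps)
analytic = 1 - np.tanh(x) ** 2
print(np.max(np.abs(numeric - analytic)))  # ~1e-10: the identity holds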
Because we want our network to output probabilities the activation function for the output layer will be the softmax, which is simply a way to convert raw scores to probabilities. If you're familiar with the logistic function you can think of softmax as its generalization to multiple classes.
How our network makes predictions
Our network makes predictions using forward propagation, which is just a bunch of matrix multiplications and the application of the activation function(s) we defined above. If $x$ is the 2-dimensional input to our network then we calculate our prediction $\hat{y}$ (also two-dimensional) as follows:
$$
\begin{aligned}
z_1 & = xW_1 + b_1 \
a_1 & = \tanh(z_1) \
z_2 & = a_1W_2 + b_2 \
a_2 & = \hat{y} = \mathrm{softmax}(z_2)
\end{aligned}
$$
$z_i$ is the input of layer $i$ and $a_i$ is the output of layer $i$ after applying the activation function. $W_1, b_1, W_2, b_2$ are parameters of our network, which we need to learn from our training data. You can think of them as matrices transforming data between layers of the network. Looking at the matrix multiplications above we can figure out the dimensionality of these matrices. If we use 500 nodes for our hidden layer then $W_1 \in \mathbb{R}^{2\times500}$, $b_1 \in \mathbb{R}^{500}$, $W_2 \in \mathbb{R}^{500\times2}$, $b_2 \in \mathbb{R}^{2}$. Now you see why we have more parameters if we increase the size of the hidden layer.
Learning the Parameters
Learning the parameters for our network means finding parameters ($W_1, b_1, W_2, b_2$) that minimize the error on our training data. But how do we define the error? We call the function that measures our error the loss function. A common choice with the softmax output is the cross-entropy loss. If we have $N$ training examples and $C$ classes then the loss for our prediction $\hat{y}$ with respect to the true labels $y$ is given by:
$$
\begin{aligned}
L(y,\hat{y}) = - \frac{1}{N} \sum_{n \in N} \sum_{i \in C} y_{n,i} \log\hat{y}_{n,i}
\end{aligned}
$$
The formula looks complicated, but all it really does is sum over our training examples and add to the loss if we predicted the incorrect class. So, the further away $y$ (the correct labels) and $\hat{y}$ (our predictions) are, the greater our loss will be.
Remember that our goal is to find the parameters that minimize our loss function. We can use gradient descent to find its minimum. I will implement the most vanilla version of gradient descent, also called batch gradient descent with a fixed learning rate. Variations such as SGD (stochastic gradient descent) or minibatch gradient descent typically perform better in practice. So if you are serious you'll want to use one of these, and ideally you would also decay the learning rate over time.
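For reference only, a minibatch variant with a decaying learning rate could be sketched roughly as follows; the batch size, decay schedule and the update step are illustrative assumptions, not part of this post's implementation:
# Illustrative sketch of minibatch gradient descent with learning-rate decay.
import numpy as np

def minibatch_sgd_sketch(X, y, num_passes=100, batch_size=32, epsilon0=0.01, decay=1e-3):
    n = len(X)
    for epoch in range(num_passes):
        epsilon = epsilon0 / (1.0 + decay * epoch)  # decay the learning rate over time
        order = np.random.permutation(n)
        for start in range(0, n, batch_size):
            batch = order[start:start + batch_size]
            X_batch, y_batch = X[batch], y[batch]
            # compute gradients on (X_batch, y_batch) and update W1, b1, W2, b2 using epsilon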
As an input, gradient descent needs the gradients (vector of derivatives) of the loss function with respect to our parameters: $\frac{\partial{L}}{\partial{W_1}}$, $\frac{\partial{L}}{\partial{b_1}}$, $\frac{\partial{L}}{\partial{W_2}}$, $\frac{\partial{L}}{\partial{b_2}}$. To calculate these gradients we use the famous backpropagation algorithm, which is a way to efficiently calculate the gradients starting from the output. I won't go into detail how backpropagation works, but there are many excellent explanations (here or here) floating around the web.
Applying the backpropagation formula we find the following (trust me on this):
$$
\begin{aligned}
& \delta_3 = \hat{y} - y \\
& \delta_2 = (1 - \tanh^2 z_1) \circ \delta_3 W_2^T \\
& \frac{\partial{L}}{\partial{W_2}} = a_1^T \delta_3 \\
& \frac{\partial{L}}{\partial{b_2}} = \delta_3 \\
& \frac{\partial{L}}{\partial{W_1}} = x^T \delta_2 \\
& \frac{\partial{L}}{\partial{b_1}} = \delta_2
\end{aligned}
$$
Implementation
Now we are ready for our implementation. We start by defining some useful variables and parameters for gradient descent:
End of explanation
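The helper functions below rely on a few globals (the training data X, y and the gradient descent settings) that are defined elsewhere in the notebook. As a hedged sketch, typical definitions look like this (the exact values are assumptions):
num_examples = len(X)  # size of the training set (X, y are assumed to be the generated dataset)
nn_input_dim = 2       # dimensionality of the input layer
nn_output_dim = 2      # dimensionality of the output layer
# Gradient descent parameters (hand-picked for illustration)
epsilon = 0.01         # learning rate
reg_lambda = 0.01      # regularization strength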
# Helper function to evaluate the total loss on the dataset
def calculate_loss(model):
W1, b1, W2, b2 = model['W1'], model['b1'], model['W2'], model['b2']
# Forward propagation to calculate our predictions
z1 = X.dot(W1) + b1
a1 = np.tanh(z1)
z2 = a1.dot(W2) + b2
exp_scores = np.exp(z2)
probs = exp_scores / np.sum(exp_scores, axis=1, keepdims=True)
# Calculating the loss
corect_logprobs = -np.log(probs[range(num_examples), y])
data_loss = np.sum(corect_logprobs)
    # Add regularization term to loss (optional)
data_loss += reg_lambda/2 * (np.sum(np.square(W1)) + np.sum(np.square(W2)))
return 1./num_examples * data_loss
Explanation: First let's implement the loss function we defined above. We use this to evaluate how well our model is doing:
End of explanation
# Helper function to predict an output (0 or 1)
def predict(model, x):
W1, b1, W2, b2 = model['W1'], model['b1'], model['W2'], model['b2']
# Forward propagation
z1 = x.dot(W1) + b1
a1 = np.tanh(z1)
z2 = a1.dot(W2) + b2
exp_scores = np.exp(z2)
probs = exp_scores / np.sum(exp_scores, axis=1, keepdims=True)
return np.argmax(probs, axis=1)
Explanation: We also implement a helper function to calculate the output of the network. It does forward propagation as defined above and returns the class with the highest probability.
End of explanation
# This function learns parameters for the neural network and returns the model.
# - nn_hdim: Number of nodes in the hidden layer
# - num_passes: Number of passes through the training data for gradient descent
# - print_loss: If True, print the loss every 1000 iterations
def build_model(nn_hdim, num_passes=20000, print_loss=False):
# Initialize the parameters to random values. We need to learn these.
np.random.seed(0)
W1 = np.random.randn(nn_input_dim, nn_hdim) / np.sqrt(nn_input_dim)
b1 = np.zeros((1, nn_hdim))
W2 = np.random.randn(nn_hdim, nn_output_dim) / np.sqrt(nn_hdim)
b2 = np.zeros((1, nn_output_dim))
# This is what we return at the end
model = {}
# Gradient descent. For each batch...
    for i in range(0, num_passes):
# Forward propagation
z1 = X.dot(W1) + b1
a1 = np.tanh(z1)
z2 = a1.dot(W2) + b2
exp_scores = np.exp(z2)
probs = exp_scores / np.sum(exp_scores, axis=1, keepdims=True)
# Backpropagation
delta3 = probs
delta3[range(num_examples), y] -= 1
dW2 = (a1.T).dot(delta3)
db2 = np.sum(delta3, axis=0, keepdims=True)
delta2 = delta3.dot(W2.T) * (1 - np.power(a1, 2))
dW1 = np.dot(X.T, delta2)
db1 = np.sum(delta2, axis=0)
# Add regularization terms (b1 and b2 don't have regularization terms)
dW2 += reg_lambda * W2
dW1 += reg_lambda * W1
# Gradient descent parameter update
W1 += -epsilon * dW1
b1 += -epsilon * db1
W2 += -epsilon * dW2
b2 += -epsilon * db2
# Assign new parameters to the model
model = { 'W1': W1, 'b1': b1, 'W2': W2, 'b2': b2}
# Optionally print the loss.
# This is expensive because it uses the whole dataset, so we don't want to do it too often.
if print_loss and i % 1000 == 0:
print "Loss after iteration %i: %f" %(i, calculate_loss(model))
return model
Explanation: Finally, here comes the function to train our Neural Network. It implements batch gradient descent using the backpropagation derivatives we found above.
End of explanation
# Build a model with a 3-dimensional hidden layer
model = build_model(3, print_loss=True)
# Plot the decision boundary
plot_decision_boundary(lambda x: predict(model, x))
plt.title("Decision Boundary for hidden layer size 3")
Explanation: A network with a hidden layer of size 3
Let's see what happens if we train a network with a hidden layer size of 3.
End of explanation
plt.figure(figsize=(16, 32))
hidden_layer_dimensions = [1, 2, 3, 4, 5, 20, 50]
for i, nn_hdim in enumerate(hidden_layer_dimensions):
plt.subplot(5, 2, i+1)
plt.title('Hidden Layer size %d' % nn_hdim)
model = build_model(nn_hdim)
plot_decision_boundary(lambda x: predict(model, x))
plt.show()
Explanation: Yay! This looks pretty good. Our neural network was able to find a decision boundary that successfully separates the classes.
Varying the hidden layer size
In the example above we picked a hidden layer size of 3. Let's now get a sense of how varying the hidden layer size affects the result.
End of explanation |
9,486 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Utilities
The global variable gCache is used as a cache for the function evaluate defined later. Instead of just storing the values for a given State, the cache stores pairs of the form
* ('=', v),
* ('≤', v), or
* ('≥', v).
The first component of these pairs is a flag that specifies whether the stored value v is exact or whether it only is a lower or upper bound. Concretely, provided gCache[State] is defined and value(State) computes the value of a given State from the perspective of the maximizing
player, the following invariants are satisfied
Step1: In order to have some variation in our game, we use random numbers to choose between optimal moves.
Step2: Alpha-Beta Pruning with Progressive Deepening, Move Ordering, and Memoization
The function pd_evaluate takes three arguments
Step3: The function evaluate takes five arguments
Step4: The function store_cache is called with five arguments
Step5: The function value_cache receives a State and a limit as parameters. If a value for State has been computed to the given evaluation depth, this value is returned. Otherwise, 0 is returned.
Step6: The module heapq implements heaps. The implementation of maxValue and minValue use heaps as priority queues in order to sort the moves. This improves the performance of alpha-beta pruning.
Step7: The function maxValue satisfies the following specification
Step8: The function minValue satisfies the following specification
Step9: In the state shown below, Red can force a win by pushing his stones in the 6th row. Due to this fact, *alpha-beta pruning is able to prune large parts of the search path and hence the evaluation is fast.
Step10: For the start state, the evaluation takes about 22 seconds, if the depth limit is set to 9.
Step11: In order to evaluate the effect of progressive deepening, we reset the cache and can then evaluate the test state without progressive deepening.
Step12: For the start state, progressive deepening does not seem to be beneficial. The reason is that initially the values of the states do not differ very much.
Playing the Game
The variable gMoveCounter stores the number of moves that have already been executed.
Step13: The function best_move takes two arguments
Step14: The next line is needed because we need the function IPython.display.clear_output to clear the output in a cell.
Step15: The function play_game plays on the given canvas. The game played is specified indirectly by specifying the following | Python Code:
gCache = {}
Explanation: Utilities
The global variable gCache is used as a cache for the function evaluate defined later. Instead of just storing the values for a given State, the cache stores pairs of the form
* ('=', v),
* ('≤', v), or
* ('≥', v).
The first component of these pairs is a flag that specifies whether the stored value v is exact or whether it only is a lower or upper bound. Concretely, provided gCache[State] is defined and value(State) computes the value of a given State from the perspective of the maximizing
player, the following invariants are satisfied:
* $\texttt{gCache[State]} = (\texttt{'='}, v) \rightarrow \texttt{value(State)} = v$.
* $\texttt{gCache[State]} = (\texttt{'≤'}, v) \rightarrow \texttt{value(State)} \leq v$.
* $\texttt{gCache[State]} = (\texttt{'≥'}, v) \rightarrow \texttt{value(State)} \geq v$.
End of explanation
import random
random.seed(0)
Explanation: In order to have some variation in our game, we use random numbers to choose between optimal moves.
End of explanation
def maxValue():
pass # this function will be properly defined later
def pd_evaluate(State, limit, f=maxValue):
for l in range(limit+1):
value = evaluate(State, l, f)
if value in [-1, 1]:
return value
return value
Explanation: Alpha-Beta Pruning with Progressive Deepening, Move Ordering, and Memoization
The function pd_evaluate takes three arguments:
- State is the current state of the game,
- limit determines how deep the game tree is searched,
- f is either the function maxValue or the function minValue.
The function pd_evaluate uses progressive deepening to compute the value of State. The given State is evaluated for a depth of $0$, $1$, $\cdots$, limit. The values calculated for a depth of $l$ are stored and used to sort the states when State is next evaluated for a depth of $l+1$. This is beneficial for alpha-beta pruning because alpha-beta pruning can cut off more branches from the search tree if we start by evaluating the best moves first.
We need to declare the function maxValue since we use it as a default value for the parameter f of the function pd_evaluate.
End of explanation
def evaluate(State, limit, f=maxValue, alpha=-1, beta=1):
global gCache
if (State, limit) in gCache:
flag, v = gCache[(State, limit)]
if flag == '=':
return v
if flag == '≤':
if v <= alpha:
return v
elif alpha < v < beta:
w = f(State, limit, alpha, v)
store_cache(State, limit, alpha, v, w)
return w
else: # beta <= v:
w = f(State, limit, alpha, beta)
store_cache(State, limit, alpha, beta, w)
return w
if flag == '≥':
if beta <= v:
return v
elif alpha < v < beta:
w = f(State, limit, v, beta)
store_cache(State, limit, v, beta, w)
return w
else: # v <= alpha
w = f(State, limit, alpha, beta)
store_cache(State, limit, alpha, beta, w)
return w
else:
v = f(State, limit, alpha, beta)
store_cache(State, limit, alpha, beta, v)
return v
Explanation: The function evaluate takes five arguments:
- State is the current state of the game,
- limit determines the lookahead. To be more precise, it is the number of half-moves that are investigated to compute the value. If limit is 0 and the game has not ended, the game is evaluated via the function heuristic. This function is supposed to be defined in the notebook defining the game.
- f is either the function maxValue or the function minValue.
f = maxValue if it's the maximizing player's turn in State. Otherwise,
f = minValue.
- alpha and beta are the parameters from alpha-beta pruning.
The function evaluate returns the value that the given State has if both players play their optimal game.
- If the maximizing player can force a win, the return value is 1.
- If the maximizing player can at best force a draw, the return value is 0.
- If the maximizing player might lose even when playing optimal, the return value is -1.
Otherwise, the value is calculated according to a heuristic.
For reasons of efficiency, the function evaluate is memoized using the global variable gCache. This works in the same way as described in the notebook Alpha-Beta-Pruning-Memoization.ipynb.
End of explanation
def store_cache(State, limit, alpha, beta, value):
global gCache
if value <= alpha:
gCache[(State, limit)] = ('≤', value)
elif value < beta:
gCache[(State, limit)] = ('=', value)
else: # value >= beta
gCache[(State, limit)] = ('≥', value)
Explanation: The function store_cache is called with five arguments:
* State is a state of the game,
* limit is the search depth,
* alpha is a number,
* beta is a number, and
* value is a number such that:
$$\texttt{evaluate(State, limit, f, alpha, beta)} = \texttt{value}$$
The function stores the value in the dictionary gCache under the key (State, limit).
It also stores an indicator that is either '≤', '=', or '≥'. The value that is stored
satisfies the following conditions:
* If Cache[State, limit] = ('≤', value), then evaluate(State, limit) ≤ value.
* If Cache[State, limit] = ('=', value), then evaluate(State, limit) = value.
* If Cache[State, limit] = ('≥', value), then evaluate(State, limit) ≥ value.
End of explanation
def value_cache(State, limit):
flag, value = gCache.get((State, limit), ('=', 0))
return value
Explanation: The function value_cache receives a State and a limit as parameters. If a value for State has been computed to the given evaluation depth, this value is returned. Otherwise, 0 is returned.
End of explanation
import heapq
Explanation: The module heapq implements heaps. The implementation of maxValue and minValue use heaps as priority queues in order to sort the moves. This improves the performance of alpha-beta pruning.
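Because heapq only provides a min-heap, the maximizing player pushes negated keys so that the most promising move is popped first. A tiny illustrative sketch (the estimates and state names are made up):
import heapq

Moves = []
for est, ns in [(0.5, 'a'), (-1.0, 'b'), (1.0, 'c')]:  # (cached estimate, successor state)
    heapq.heappush(Moves, (-est, ns))                  # negate: best estimate pops first
best_est, best_state = heapq.heappop(Moves)
print(-best_est, best_state)                           # prints: 1.0 c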
End of explanation
def maxValue(State, limit, alpha=-1, beta=1):
if finished(State):
return utility(State)
if limit == 0:
return heuristic(State)
value = alpha
NextStates = next_states(State, gPlayers[0])
Moves = [] # empty priority queue
for ns in NextStates:
# heaps are sorted ascendingly, hence the minus
heapq.heappush(Moves, (-value_cache(ns, limit-2), ns))
while Moves != []:
_, ns = heapq.heappop(Moves)
value = max(value, evaluate(ns, limit-1, minValue, value, beta))
if value >= beta:
return value
return value
Explanation: The function maxValue satisfies the following specification:
- $\alpha \leq \texttt{value}(s) \leq \beta \;\rightarrow\;\texttt{maxValue}(s, l, \alpha, \beta) = \texttt{value}(s)$
- $\texttt{value}(s) < \alpha \;\rightarrow\; \texttt{maxValue}(s, l, \alpha, \beta) \leq \alpha$
- $\beta < \texttt{value}(s) \;\rightarrow\; \beta \leq \texttt{maxValue}(s, l, \alpha, \beta)$
It assumes that gPlayers[0] is the maximizing player. This function implements alpha-beta pruning. After searching up to a depth of limit, the value is approximated using the function heuristic.
End of explanation
def minValue(State, limit, alpha=-1, beta=1):
if finished(State):
return utility(State)
if limit == 0:
return heuristic(State)
value = beta
NextStates = next_states(State, gPlayers[1])
Moves = [] # empty priority queue
for ns in NextStates:
heapq.heappush(Moves, (value_cache(ns, limit-2), ns))
while Moves != []:
_, ns = heapq.heappop(Moves)
value = min(value, evaluate(ns, limit-1, maxValue, alpha, value))
if value <= alpha:
return value
return value
%%capture
%run Connect-Four.ipynb
Explanation: The function minValue satisfies the following specification:
- $\alpha \leq \texttt{value}(s) \leq \beta \;\rightarrow\;\texttt{minValue}(s, l, \alpha, \beta) = \texttt{value}(s)$
- $\texttt{value}(s) < \alpha \;\rightarrow\; \texttt{minValue}(s, l, \alpha, \beta) \leq \alpha$
- $\beta < \texttt{value}(s) \;\rightarrow\; \beta \leq \texttt{minValue}(s, l, \alpha, \beta)$
It assumes that gPlayers[1] is the minimizing player. This function implements alpha-beta pruning. After searching up to a depth of limit, the value is approximated using the function heuristic.
End of explanation
canvas = create_canvas()
draw(gTestState, canvas, '?')
gCache = {}
%%time
value = pd_evaluate(gTestState, 10, maxValue)
value
len(gCache)
Explanation: In the state shown below, Red can force a win by pushing his stones in the 6th row. Due to this fact, alpha-beta pruning is able to prune large parts of the search path and hence the evaluation is fast.
End of explanation
gCache = {}
%%time
value = pd_evaluate(gStart, 9, maxValue)
value
len(gCache)
Explanation: For the start state, the evaluation takes about 22 seconds, if the depth limit is set to 9.
End of explanation
gCache = {}
%%time
value = evaluate(gStart, 9, maxValue)
value
len(gCache)
Explanation: In order to evaluate the effect of progressive deepening, we reset the cache and can then evaluate the start state without progressive deepening.
End of explanation
gMoveCounter = 0
Explanation: For the start state, progressive deepening does not seem to be beneficial. The reason is that initially the values of the states do not differ very much.
Playing the Game
The variable gMoveCounter stores the number of moves that have already been executed.
End of explanation
def best_move(State, limit):
NextStates = next_states(State, gPlayers[0])
if gMoveCounter < 9:
bestValue = evaluate(State, limit, maxValue)
else:
bestValue = pd_evaluate(State, limit, maxValue)
BestMoves = [s for s in NextStates
if evaluate(s, limit-1, minValue) == bestValue
]
BestState = random.choice(BestMoves)
return bestValue, BestState
Explanation: The function best_move takes two arguments:
- State is the current state of the game,
- limit is the depth limit of the recursion.
The function best_move returns a pair of the form $(v, s)$ where $s$ is a state and $v$ is the value of this state. The state $s$ is a state that is reached from State if the player makes one of her optimal moves. In order to have some variation in the game, the function randomly chooses any of the optimal moves.
End of explanation
import IPython.display
import time
Explanation: The next line is needed because we need the function IPython.display.clear_output to clear the output in a cell.
End of explanation
def play_game(canvas, limit):
global gCache, gMoveCounter
State = gStart
History = []
while (True):
gCache = {}
gMoveCounter += 1
firstPlayer = gPlayers[0]
start = time.time()
val, State = best_move(State, limit)
stop = time.time()
diff = round(stop - start, 2)
History.append(diff)
draw(State, canvas, f'{round(diff, 2)} seconds, value = {round(val, 2)}.')
if finished(State):
final_msg(State)
break
IPython.display.clear_output(wait=True)
State = get_move(State)
draw(State, canvas, '')
if finished(State):
IPython.display.clear_output(wait=True)
final_msg(State)
break
for i, d in enumerate(History):
print(f'{i}: {d} seconds')
canvas = create_canvas()
draw(gStart, canvas, f'Current value of game for "X": {round(value, 2)}')
play_game(canvas, 8)
len(gCache)
Explanation: The function play_game plays on the given canvas. The game played is specified indirectly by specifying the following:
- Start is a global variable defining the start state of the game.
- next_states is a function such that $\texttt{next_states}(s, p)$ computes the set of all possible states that can be reached from state $s$ if player $p$ is next to move.
- finished is a function such that $\texttt{finished}(s)$ is true for a state $s$ if the game is over in state $s$.
- utility is a function such that $\texttt{utility}(s, p)$ returns either -1, 0, or 1 in the terminal state $s$. We have that
- $\texttt{utility}(s, p)= -1$ iff the game is lost for player $p$ in state $s$,
- $\texttt{utility}(s, p)= 0$ iff the game is drawn, and
- $\texttt{utility}(s, p)= 1$ iff the game is won for player $p$ in state $s$.
End of explanation |
9,487 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The TensorFlow Authors.
Step1: Customizing what happens in Model.fit
<table class="tfo-notebook-buttons" align="left">
<td> <a target="_blank" href="https
Step2: 最初の簡単な例
簡単な例から始めてみましょう。
keras.Model をサブクラス化する新しいクラスを作成します。
train_step(self, data) メソッドだけをオーバーライドします。
メトリクス名(損失を含む)をマッピングするディクショナリを現在の値に返します。
入力引数の data は、トレーニングデータとして適合するために渡される値です。
fit(x, y, ...) を呼び出して Numpy 配列を渡す場合は、data はタプル型 (x, y) になります。
fit(dataset, ...) を呼び出して tf.data.Dataset を渡す場合は、data は各バッチで dataset により生成される値になります。
train_step メソッドの本体には、既に使い慣れているものと同様の定期的なトレーニングアップデートを実装しています。重要なのは、損失の計算を self.compiled_loss を介して行っていることで、それによって compile() に渡された損失関数がラップされています。
同様に、self.compiled_metrics.update_state(y, y_pred) を呼び出して compile() に渡されたメトリクスの状態を更新し、最後に self.metrics の結果をクエリして現在の値を取得しています。
Step3: これを試してみましょう。
Step4: 低レベルにする
当然ながら、compile() に損失関数を渡すことを省略し、代わりに train_step ですべてを手動で実行することは可能です。これはメトリクスの場合でも同様です。
オプティマイザの構成に compile() のみを使用した、低レベルの例を次に示します。
まず、損失と MAE スコアを追跡する Metric インスタンスを作成します。
これらのメトリクスの状態を更新するカスタム train_step() を実装し(メトリクスで update_state() を呼び出します)、現在の平均値を返して進捗バーで表示し、任意のコールバックに渡せるようにメトリクスをクエリします(result() を使用)。
エポックごとにメトリクスに reset_states() を呼び出す必要があるところに注意してください。呼び出さない場合、result() は通常処理しているエポックごとの平均ではなく、トレーニングを開始してからの平均を返してしまいます。幸いにも、これはフレームワークが行ってくれるため、モデルの metrics プロパティにリセットするメトリクスをリストするだけで実現できます。モデルは、そこにリストされているオブジェクトに対する reset_states() の呼び出しを各 fit() エポックの開始時または evaluate() への呼び出しの開始時に行うようになります。
Step5: sample_weight と class_weight をサポートする
最初の基本的な例では、サンプルの重み付けについては何も言及していないことに気付いているかもしれません。fit() の引数 sample_weight と class_weight をサポートする場合には、次のようにします。
data 引数から sample_weight をアンパックします。
それを compiled_loss と compiled_metrics に渡します(もちろん、 損失とメトリクスが compile() に依存しない場合は手動での適用が可能です)。
それがリストです。
Step6: 独自の評価ステップを提供する
model.evaluate() への呼び出しに同じことをする場合はどうしたらよいでしょう?その場合は、まったく同じ方法で test_step をオーバーライドします。これは次のようになります。
Step7: まとめ
Step8: ここにフィーチャーコンプリートの GAN クラスがあります。compile()をオーバーライドして独自のシグネチャを使用することにより、GAN アルゴリズム全体をtrain_stepの 17 行で実装しています。
Step9: 試運転してみましょう。 | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
import tensorflow as tf
from tensorflow import keras
Explanation: Customizing what happens in Model.fit
<table class="tfo-notebook-buttons" align="left">
<td> <a target="_blank" href="https://www.tensorflow.org/guide/keras/customizing_what_happens_in_fit"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a> </td>
<td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/guide/keras/customizing_what_happens_in_fit.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a> </td>
<td> <a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/guide/keras/customizing_what_happens_in_fit.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a> </td>
<td> <a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/guide/keras/customizing_what_happens_in_fit.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a> </td>
</table>
Introduction
When you're doing supervised learning, you can use fit() and everything works smoothly.
When you need to write your own training loop from scratch, you can use the GradientTape and take control of every little detail.
But what if you need a custom training algorithm, yet you still want to benefit from the convenient features of fit(), such as callbacks, built-in distribution support, or step fusing?
A core principle of Keras is progressive disclosure of complexity. You should always be able to get into lower-level workflows in a gradual way, without falling off a cliff when the high-level functionality doesn't exactly match your use case. You should be able to gain more control over the small details while retaining a commensurate amount of high-level convenience.
When you need to customize what fit() does, you should override the training step function of the Model class. This is the function that is called by fit() for every batch of data. You will then be able to call fit() as usual -- and it will be running your own learning algorithm.
Note that this pattern does not prevent you from building models with the Functional API. You can do this whether you're building Sequential models, Functional API models, or subclassed models.
Let's see how that works.
Setup
Requires TensorFlow 2.2 or later.
End of explanation
class CustomModel(keras.Model):
def train_step(self, data):
# Unpack the data. Its structure depends on your model and
# on what you pass to `fit()`.
x, y = data
with tf.GradientTape() as tape:
y_pred = self(x, training=True) # Forward pass
# Compute the loss value
# (the loss function is configured in `compile()`)
loss = self.compiled_loss(y, y_pred, regularization_losses=self.losses)
# Compute gradients
trainable_vars = self.trainable_variables
gradients = tape.gradient(loss, trainable_vars)
# Update weights
self.optimizer.apply_gradients(zip(gradients, trainable_vars))
# Update metrics (includes the metric that tracks the loss)
self.compiled_metrics.update_state(y, y_pred)
# Return a dict mapping metric names to current value
return {m.name: m.result() for m in self.metrics}
Explanation: A first simple example
Let's start from a simple example:
We create a new class that subclasses keras.Model.
We just override the method train_step(self, data).
We return a dictionary mapping metric names (including the loss) to their current value.
The input argument data is what gets passed to fit as training data:
If you pass Numpy arrays, by calling fit(x, y, ...), then data will be the tuple (x, y).
If you pass a tf.data.Dataset, by calling fit(dataset, ...), then data will be what gets yielded by dataset at each batch.
In the body of the train_step method, we implement a regular training update, similar to what you are already familiar with. Importantly, we compute the loss via self.compiled_loss, which wraps the loss function(s) that were passed to compile().
Similarly, we call self.compiled_metrics.update_state(y, y_pred) to update the state of the metrics that were passed in compile(), and we query results from self.metrics at the end to retrieve their current value.
End of explanation
import numpy as np
# Construct and compile an instance of CustomModel
inputs = keras.Input(shape=(32,))
outputs = keras.layers.Dense(1)(inputs)
model = CustomModel(inputs, outputs)
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
# Just use `fit` as usual
x = np.random.random((1000, 32))
y = np.random.random((1000, 1))
model.fit(x, y, epochs=3)
Explanation: Let's try this out:
End of explanation
loss_tracker = keras.metrics.Mean(name="loss")
mae_metric = keras.metrics.MeanAbsoluteError(name="mae")
class CustomModel(keras.Model):
def train_step(self, data):
x, y = data
with tf.GradientTape() as tape:
y_pred = self(x, training=True) # Forward pass
# Compute our own loss
loss = keras.losses.mean_squared_error(y, y_pred)
# Compute gradients
trainable_vars = self.trainable_variables
gradients = tape.gradient(loss, trainable_vars)
# Update weights
self.optimizer.apply_gradients(zip(gradients, trainable_vars))
# Compute our own metrics
loss_tracker.update_state(loss)
mae_metric.update_state(y, y_pred)
return {"loss": loss_tracker.result(), "mae": mae_metric.result()}
@property
def metrics(self):
# We list our `Metric` objects here so that `reset_states()` can be
# called automatically at the start of each epoch
# or at the start of `evaluate()`.
# If you don't implement this property, you have to call
# `reset_states()` yourself at the time of your choosing.
return [loss_tracker, mae_metric]
# Construct an instance of CustomModel
inputs = keras.Input(shape=(32,))
outputs = keras.layers.Dense(1)(inputs)
model = CustomModel(inputs, outputs)
# We don't pass a loss or metrics here.
model.compile(optimizer="adam")
# Just use `fit` as usual -- you can use callbacks, etc.
x = np.random.random((1000, 32))
y = np.random.random((1000, 1))
model.fit(x, y, epochs=5)
Explanation: Going lower-level
Naturally, you could just skip passing a loss function in compile(), and instead do everything manually in train_step. Likewise for metrics.
Here's a lower-level example that only uses compile() to configure the optimizer:
We start by creating Metric instances to track our loss and a MAE score.
We implement a custom train_step() that updates the state of these metrics (by calling update_state() on them), then query them (via result()) to return their current average value, to be displayed by the progress bar and to be passed to any callback.
Note that we would need to call reset_states() on our metrics between each epoch! Otherwise calling result() would return an average since the start of training, whereas we usually work with per-epoch averages. Thankfully, the framework can do that for us: just list any metric you want to reset in the metrics property of the model. The model will call reset_states() on any object listed here at the beginning of each fit() epoch or at the beginning of a call to evaluate().
End of explanation
class CustomModel(keras.Model):
def train_step(self, data):
# Unpack the data. Its structure depends on your model and
# on what you pass to `fit()`.
if len(data) == 3:
x, y, sample_weight = data
else:
sample_weight = None
x, y = data
with tf.GradientTape() as tape:
y_pred = self(x, training=True) # Forward pass
# Compute the loss value.
# The loss function is configured in `compile()`.
loss = self.compiled_loss(
y,
y_pred,
sample_weight=sample_weight,
regularization_losses=self.losses,
)
# Compute gradients
trainable_vars = self.trainable_variables
gradients = tape.gradient(loss, trainable_vars)
# Update weights
self.optimizer.apply_gradients(zip(gradients, trainable_vars))
# Update the metrics.
# Metrics are configured in `compile()`.
self.compiled_metrics.update_state(y, y_pred, sample_weight=sample_weight)
# Return a dict mapping metric names to current value.
# Note that it will include the loss (tracked in self.metrics).
return {m.name: m.result() for m in self.metrics}
# Construct and compile an instance of CustomModel
inputs = keras.Input(shape=(32,))
outputs = keras.layers.Dense(1)(inputs)
model = CustomModel(inputs, outputs)
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
# You can now use sample_weight argument
x = np.random.random((1000, 32))
y = np.random.random((1000, 1))
sw = np.random.random((1000, 1))
model.fit(x, y, sample_weight=sw, epochs=3)
Explanation: Supporting sample_weight & class_weight
You may have noticed that our first basic example didn't make any mention of sample weighting. If you want to support the fit() arguments sample_weight and class_weight, you'd simply do the following:
Unpack sample_weight from the data argument.
Pass it to compiled_loss & compiled_metrics (of course, you could also just apply it manually if you don't rely on compile() for losses & metrics).
That's it. That's the list.
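As an illustrative sketch only (not taken from the guide): if you wanted to turn a class_weight dictionary into per-sample weights yourself inside train_step, one possible approach for integer labels y looks like this; the dictionary values and shapes are assumptions:
# Hypothetical sketch: convert class_weight (e.g. {0: 1.0, 1: 3.0}) into sample weights.
class_weight = {0: 1.0, 1: 3.0}
weight_table = tf.constant([class_weight[k] for k in sorted(class_weight)], dtype=tf.float32)
sample_weight = tf.gather(weight_table, tf.cast(y, tf.int32))
# ...then pass sample_weight to self.compiled_loss / self.compiled_metrics as shown above.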
End of explanation
class CustomModel(keras.Model):
def test_step(self, data):
# Unpack the data
x, y = data
# Compute predictions
y_pred = self(x, training=False)
# Updates the metrics tracking the loss
self.compiled_loss(y, y_pred, regularization_losses=self.losses)
# Update the metrics.
self.compiled_metrics.update_state(y, y_pred)
# Return a dict mapping metric names to current value.
# Note that it will include the loss (tracked in self.metrics).
return {m.name: m.result() for m in self.metrics}
# Construct an instance of CustomModel
inputs = keras.Input(shape=(32,))
outputs = keras.layers.Dense(1)(inputs)
model = CustomModel(inputs, outputs)
model.compile(loss="mse", metrics=["mae"])
# Evaluate with our custom test_step
x = np.random.random((1000, 32))
y = np.random.random((1000, 1))
model.evaluate(x, y)
Explanation: Providing your own evaluation step
What if you want to do the same for calls to model.evaluate()? Then you would override test_step in exactly the same way. Here's what it looks like:
End of explanation
from tensorflow.keras import layers
# Create the discriminator
discriminator = keras.Sequential(
[
keras.Input(shape=(28, 28, 1)),
layers.Conv2D(64, (3, 3), strides=(2, 2), padding="same"),
layers.LeakyReLU(alpha=0.2),
layers.Conv2D(128, (3, 3), strides=(2, 2), padding="same"),
layers.LeakyReLU(alpha=0.2),
layers.GlobalMaxPooling2D(),
layers.Dense(1),
],
name="discriminator",
)
# Create the generator
latent_dim = 128
generator = keras.Sequential(
[
keras.Input(shape=(latent_dim,)),
# We want to generate 128 coefficients to reshape into a 7x7x128 map
layers.Dense(7 * 7 * 128),
layers.LeakyReLU(alpha=0.2),
layers.Reshape((7, 7, 128)),
layers.Conv2DTranspose(128, (4, 4), strides=(2, 2), padding="same"),
layers.LeakyReLU(alpha=0.2),
layers.Conv2DTranspose(128, (4, 4), strides=(2, 2), padding="same"),
layers.LeakyReLU(alpha=0.2),
layers.Conv2D(1, (7, 7), padding="same", activation="sigmoid"),
],
name="generator",
)
Explanation: Wrapping up: an end-to-end GAN example
Let's walk through an end-to-end example that leverages everything you just learned.
Let's consider:
A generator network meant to generate 28x28x1 images.
A discriminator network meant to classify 28x28x1 images into two classes ("fake" and "real").
One optimizer for each.
A loss function to train the discriminator.
End of explanation
class GAN(keras.Model):
def __init__(self, discriminator, generator, latent_dim):
super(GAN, self).__init__()
self.discriminator = discriminator
self.generator = generator
self.latent_dim = latent_dim
def compile(self, d_optimizer, g_optimizer, loss_fn):
super(GAN, self).compile()
self.d_optimizer = d_optimizer
self.g_optimizer = g_optimizer
self.loss_fn = loss_fn
def train_step(self, real_images):
if isinstance(real_images, tuple):
real_images = real_images[0]
# Sample random points in the latent space
batch_size = tf.shape(real_images)[0]
random_latent_vectors = tf.random.normal(shape=(batch_size, self.latent_dim))
# Decode them to fake images
generated_images = self.generator(random_latent_vectors)
# Combine them with real images
combined_images = tf.concat([generated_images, real_images], axis=0)
# Assemble labels discriminating real from fake images
labels = tf.concat(
[tf.ones((batch_size, 1)), tf.zeros((batch_size, 1))], axis=0
)
# Add random noise to the labels - important trick!
labels += 0.05 * tf.random.uniform(tf.shape(labels))
# Train the discriminator
with tf.GradientTape() as tape:
predictions = self.discriminator(combined_images)
d_loss = self.loss_fn(labels, predictions)
grads = tape.gradient(d_loss, self.discriminator.trainable_weights)
self.d_optimizer.apply_gradients(
zip(grads, self.discriminator.trainable_weights)
)
# Sample random points in the latent space
random_latent_vectors = tf.random.normal(shape=(batch_size, self.latent_dim))
# Assemble labels that say "all real images"
misleading_labels = tf.zeros((batch_size, 1))
# Train the generator (note that we should *not* update the weights
# of the discriminator)!
with tf.GradientTape() as tape:
predictions = self.discriminator(self.generator(random_latent_vectors))
g_loss = self.loss_fn(misleading_labels, predictions)
grads = tape.gradient(g_loss, self.generator.trainable_weights)
self.g_optimizer.apply_gradients(zip(grads, self.generator.trainable_weights))
return {"d_loss": d_loss, "g_loss": g_loss}
Explanation: Here's a feature-complete GAN class, overriding compile() to use its own signature, and implementing the entire GAN algorithm in 17 lines in train_step:
End of explanation
# Prepare the dataset. We use both the training & test MNIST digits.
batch_size = 64
(x_train, _), (x_test, _) = keras.datasets.mnist.load_data()
all_digits = np.concatenate([x_train, x_test])
all_digits = all_digits.astype("float32") / 255.0
all_digits = np.reshape(all_digits, (-1, 28, 28, 1))
dataset = tf.data.Dataset.from_tensor_slices(all_digits)
dataset = dataset.shuffle(buffer_size=1024).batch(batch_size)
gan = GAN(discriminator=discriminator, generator=generator, latent_dim=latent_dim)
gan.compile(
d_optimizer=keras.optimizers.Adam(learning_rate=0.0003),
g_optimizer=keras.optimizers.Adam(learning_rate=0.0003),
loss_fn=keras.losses.BinaryCrossentropy(from_logits=True),
)
# To limit the execution time, we only train on 100 batches. You can train on
# the entire dataset. You will need about 20 epochs to get nice results.
gan.fit(dataset.take(100), epochs=1)
Explanation: Let's test-drive it:
End of explanation |
9,488 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Machine Learning Engineer Nanodegree
Capstone Project
Santiago Giraldo
July 29, 2017
Perhaps one of the trendiest topics in the world right now is machine learning and big data, which have become recurring topics on Main Street. The application of these in different fields including branches of knowledge that range from astronomy to manufacturing, but perhaps the field of medicine is one the most interesting of them all. Current advances in this field include disease diagnosis, image analysis, new drug developments and pandemic forecasts among others. These progress poses new challenges and opportunities to improve the quality of treatments and the life expectancy of human beings.
Being able to participate in something that contributes to human well-being motivated me to research and apply machine learning in an area of medicine to the capstone project.
From personal experience, a recurring problem that affects people at certain ages is pneumonia, which in some cases is usually fatal especially when it refers to children or older adults. Knowing this and the availability of records in Intensive Care Unit MIMIC-III database, I decided to select pneumonia deaths as a theme to develop the capstone project, that in addition, could be useful for the prognosis of deaths in Intensive Care Units, and with further investigations be the outset for development of tools that doctors and nurses could use to improve their work.
The hypothesis is that from microbiological variables coming from tests it’s possible predict whether a patient can or not die by this disease. This leads us to a problem that can be represented binarily, and that can be related to physicochemical variables obtained from microbiological tests. These relationships allow the modeling of the pneumonia death problem by means of a supervised learning model such as the Support Vector Machines, Decision Trees, Logistic Regression and Ada boost ensemble.
Step1: The MIMIC III database consists of a collection of csv files, which can be imported into PostgreSQL. Once imported to PostgreSQL, is possible to use libraries from python to analyze the different data contained in it, make the necessary transformations to implement the desired forecast model. The input variables will be defined from different tables and seeking relate the independent binary variable (life or death) of subjects with age, sex, results of microbiological events (test results) and severity of these results.
Before you can download the data, you must complete the CITI "Data or Specimens Only Research" course . Once you accomplish the course you can download the data from https
Step2: The first step was creating four tables to facilitate consult the required data for the analysis, these tables are
Step3: For each subject, I categorized the population in five types of groups according to the age recorded at the time of admission to the ICU, which are neonates [0,1], middle (1, 14), adults (14, 65), Older adults [65, 85] and older elderly people (85, 91.4].
Step4: In the Website they explain that the average age of these patients is 91.4 years, reason why I decided that if I want have some consistent data from this segment of the population I should replace it at least for its average value
Step5: Hospital_expire_flag has the binary dependent variable to the model, 0 when the patient goes out from ICU alive, and 1 when the patient has deceased while stay in ICU.
Subject_id is the key value which relates the respective record with an acute patient in ICU. gender give the patient sex of the subject. Last_admit_age (is a computed field) has the age when the patient is admitted in ICU.
Age_group (is a computed field) serves to categorize the sample by age.
Valuenum_avg is the average number for valuenum of this respective label measure. Org_name contains the names of the microorganisms (bacteria) related to pneumonia, where the main ones are staph aureus coag +, klebsiella pneumoniae, escherichia coli, pseudomonas aeruginosa, staphylococcus, coagulase negative, klebsiella oxytoca, enterococcus sp, which represent 80% of the sample.
Category is employed to categorize the charted events, the main categories of this data column are Labs, Respiratory, Alarms, Routine Vital Signs, Chemistry which gathering the 82% of the records which are present in this query.
Step6: Label is the detail of the category, and is represented in 578 labels, where the most important are
Step7: Ab_name indicates which antibiotic is sensitive the microorganism, this field together with the interpretation indicates if the microorganism the degree of resistance of this one to the antibiotic., the main antibiotics evaluated are gentamicin, trimethoprim/sulfa, levofloxacin, ceftazidime, tobramycin, cefepime, ciprofloxacin, meropenem, erythromycin, oxacillin, vancomycin, ceftriaxone, tetracycline, clindamycin, piperacillin/tazo, which represent 80% of the sample.
Step8: org_name has the microorganisms name are present in pneumonia patients. The main organims found in this patients are staph aureus coag +, klebsiella pneumoniae, escherichia coli, pseudomonas aeruginosa, staphylococcus, coagulase negative, klebsiella oxytoca, enterococcus sp., acinetobacter baumannii complex, serratia marcescens, enterobacter cloacae.
Step9: interpretation indicates the results of the test, “S” when the antibiotic is sensitive, “R” when is resistant, “I” when the antibiotic is intermediate, and “P” when is pending.
Step10: To transform the matrix in a pivot table, the first step is transform some categorical variables as dummy variables. The chosen variables are gender, age_group, category, label, org_name, ab_name, and interpretation. This operation was done with pandas get_dummies command. The result of this transformation is a panda data frame with shape 2,430,640 rows by 716 columns, these new columns are binaries variables and only take the number 1 once the categorical effect happened.
Step11: The next step, is to transform the matrix into a PivotTable, the purpose of this transformation is to be able to have the medical data in individual lines per subject and numerical form.
To do that, I employed pandas pivot_table command, and using as indexes subject_id and hospital_expire_flag. With this transformation, the resulting panda data frame has 829 rows by 724 columns. The data transformed in this form allow apply the classifier models to this data.
Step12: In all models, the variable dependent is survival state (Alive / Deceased). In order to sub-setting the data I work with a test size of 25% of the sample, I chose this value after some essays, a higher percentage could lower the computer velocity, and a higher value could make the results will be spurious or meaningless.
Step13: Support Vector Machine
Step14: I was struggling a lot with this model, for that reason I will not use this for capstone project
print ("Fitting the classifier to the training set")
from sklearn.tree import DecisionTreeClassifier
param_grid = {
'C'
Step15: Decision Tree Classifier is a no parametric method that learns through binary decisions that when deployed are forming a decision tree.
Step16: Ensemble methods like Random Forest, Extremely Tree and Ada Boost Classifiers. These methods “combine the predictions of several base estimators built with a given learning algorithm in order to improve generalizability / robustness over a single estimator ”. The first two are in averaging methods, where independent estimators are used over random samples and resulting predictions are averaged, getting as result a lower variance than a single estimator.
Step17: Logistic Regression Classifier is the most traditional method applied to classification problems. Here a logistic probability function is applied to data, and the result obtained is a probability of occurrence of the binary categorical variable
Step19: Ensemble voting classifier
All models, this is not a good option, it inherits all the problems of models that do not run well
Step20: The following ensemble model only aggregate five models
Step21: This ensemble voting model aggregates decision tree and the extremely tree models
Step22: The best model found here is the ensemble model with the decision tree and the extremely tree, even though the ensemble model with five aggregate methods shows a slightly better score, applying the principle of Occam's razor make the simplest better. At this time, the resulting model can accurately predict the deaths given a series of medical examinations, as proposed in the hypothesis. While accuracy is not the best (76%), I think it may be a good start for future investigations of interdisciplinary teams in ICU forecasting diseases.
From the forest model is possible to find how features weights in the results, such weights are called importance. As you can see only 129 features are important to the model, the rest has no weights. As you can expect, the antibiotic sensitivity are the most important feature (together weights 57% of importance) and AMIKACIN antibiotic the most important feature of the sample. Every feature from age group weight 0.97% in average, followed by category feature which everyone weights less than 1%. | Python Code:
import numpy as np
import pandas as pd
import datetime
import scipy as sp
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
import psycopg2
import time
import itertools
from pandas.io.sql import read_sql
from sklearn.model_selection import train_test_split
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import GridSearchCV
# from sklearn.preprocessing import PolynomialFeatures
from sklearn.svm import SVC
from sklearn.svm import LinearSVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import AdaBoostClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.ensemble import VotingClassifier
from sklearn import metrics
Explanation: Machine Learning Engineer Nanodegree
Capstone Project
Santiago Giraldo
July 29, 2017
Perhaps the trendiest topics in the world right now are machine learning and big data, which have become recurring topics on Main Street. They are applied in fields that range from astronomy to manufacturing, but perhaps medicine is the most interesting of them all. Current advances in this field include disease diagnosis, image analysis, new drug development and pandemic forecasting, among others. This progress poses new challenges and opportunities to improve the quality of treatments and the life expectancy of human beings.
Being able to participate in something that contributes to human well-being motivated me to research and apply machine learning to an area of medicine for the capstone project.
From personal experience, a recurring problem that affects people at certain ages is pneumonia, which in some cases is fatal, especially for children and older adults. Knowing this, and given the availability of records in the MIMIC-III Intensive Care Unit database, I decided to select pneumonia deaths as the theme of the capstone project. In addition, the results could be useful for the prognosis of deaths in Intensive Care Units and, with further investigation, be the starting point for developing tools that doctors and nurses could use to improve their work.
The hypothesis is that, from the variables coming from microbiological tests, it is possible to predict whether or not a patient will die from this disease. This leads to a problem that can be represented in binary form and related to physicochemical variables obtained from microbiological tests. These relationships allow modeling the pneumonia death problem by means of supervised learning models such as Support Vector Machines, Decision Trees, Logistic Regression and AdaBoost ensembles.
End of explanation
conn=psycopg2.connect(
dbname='mimic',
user='postgres',
host='localhost',
port=5432,
password= 123
)
cur = conn.cursor()
process = time.process_time()
print (process)
Explanation: The MIMIC III database consists of a collection of csv files, which can be imported into PostgreSQL. Once imported into PostgreSQL, it is possible to use Python libraries to analyze the different data contained in it and make the necessary transformations to implement the desired forecast model. The input variables will be defined from different tables, seeking to relate the independent binary variable (life or death) of subjects with age, sex, results of microbiological events (test results) and the severity of these results.
Before you can download the data, you must complete the CITI "Data or Specimens Only Research" course. Once you accomplish the course you can download the data from https://physionet.org/works/MIMICIIIClinicalDatabase/. Then, the first step was to understand the structure of the database. This consists of a collection of 26 csv files, which contain medical, economic, demographic and death information for patients admitted over several years to the ICU of Beth Israel Deaconess Medical Center. As the data is sensitive, some records such as date of admittance and date of birth were changed, in order to avoid the identification of patients from these records and the misuse of this information in the future.
End of explanation
sql_sl = " SELECT hospital_expire_flag, subject_id, gender, last_admit_age, age_group, category, \
label, valuenum_avg, org_name,ab_name, interpretation \
FROM mimiciii.pneumonia;"
patients_pn= read_sql(sql_sl, conn, coerce_float=True, params=None)
print (patients_pn.info())
process = time.process_time()
print (process)
Explanation: The first step was creating four tables to facilitate consulting the required data for the analysis; these tables are:
last_event: This table born from a join of patients and admissions tables. In this, was selected the fields subject_id, dob, and gender. The age is computed for all patients, the last admission column is created and all age are classified by age groups as categorical variable.
age: Is a join between last_event and admission tables. In this, I selected the subject_id, last_admit_age, gender, last_admit_time, but the records are limited to last patient admission (there are records for several admissions for some patients, so is important filter the last one to related the records with deaths when these occur) computed in last_event table.
valuenum_avg: In a first instance, I have grouped the 14 tables that have the data records of graphical events. As a group, it is the largest table in the database and it contains 330,712,483 records. Given the size of the data, hardware constraints, I considered a strong assumption, and is that the records in this table where the numerical value (valuenum) of these graphic events are measured, can be averaged (huge assumption) can serve as a numerical dependent variable within the models to be studied. It is a huge assumption because I have no evidence from other studies, at least as far as I know, the results average can be done. But on the other hand, you can think by experience (as patient because I’m not physician), the results from exams are a good estimation, and the issue, at least for me, is if this data could be averaged as I did and if it could be a good proxy regressor. For this table, I take this data: subject_id, hadm_id, itemid, and compute valuenum_avg.
pneumonia: It is the most important table for this study because I group the relevant data from others tables like microbiology events, charted events, and demographic data. The specific fields grouped here are: hospital_expire_flag, subject_id, hadm_id, gender, last_admittime, last_admit_age, age_group, itemid, label, category, valuenum_avg, icd9_code, short_title, spec_type_desc, org_name, ab_name, interpretation. And this data where filtered by pneumonia word in long_title diagnosis field, values not null in interpretation in microbiology events, values not null in category laboratory items and admittime is equal to last_admittime. The objective here is assuring me that data is complete (not null records), is related with pneumonia diagnosis and the records selected where from the last admission.
The final result of this process is a SQL query which filters and transforms the data into a matrix with the columns hospital_expire_flag, subject_id, gender, last_admit_age, age_group, category, label, valuenum_avg, org_name, ab_name, interpretation, containing 2,430,640 records for 829 patients with some diagnosis related to pneumonia, loaded as a pandas data frame.
End of explanation
patients_pn.head()
Explanation: For each subject, I categorized the population in five types of groups according to the age recorded at the time of admission to the ICU, which are neonates [0,1], middle (1, 14), adults (14, 65), Older adults [65, 85] and older elderly people (85, 91.4].
End of explanation
row_index = patients_pn.last_admit_age >= 300
patients_pn.loc[row_index , 'last_admit_age' ] = 91.4 #https://mimic.physionet.org/mimictables/patients/
Explanation: On the website they explain that the average age of these patients is 91.4 years, which is why I decided that, if I want to have consistent data for this segment of the population, I should replace it with at least its average value.
End of explanation
patients_pn['category'].unique()
patients_pn['category'].value_counts()
patients_category = patients_pn['category'].value_counts().reset_index()
patients_category.columns=['category','Count']
patients_category['Count'].apply('{:,.2f}'.format)
patients_category['cum_perc'] = 100*patients_category.Count/patients_category.Count.sum()
patients_category['cum_perc'] = patients_category['cum_perc'].map('{:,.4f}%'.format)
print (patients_category)
patients_pn['category'].value_counts().plot(kind='bar')
Explanation: Hospital_expire_flag holds the binary dependent variable of the model: 0 when the patient leaves the ICU alive, and 1 when the patient dies during the ICU stay.
Subject_id is the key value which relates the respective record to an acute patient in the ICU. gender gives the sex of the subject. Last_admit_age (a computed field) has the age when the patient is admitted to the ICU.
Age_group (is a computed field) serves to categorize the sample by age.
Valuenum_avg is the average number for valuenum of this respective label measure. Org_name contains the names of the microorganisms (bacteria) related to pneumonia, where the main ones are staph aureus coag +, klebsiella pneumoniae, escherichia coli, pseudomonas aeruginosa, staphylococcus, coagulase negative, klebsiella oxytoca, enterococcus sp, which represent 80% of the sample.
Category is employed to categorize the charted events; the main categories of this data column are Labs, Respiratory, Alarms, Routine Vital Signs and Chemistry, which together gather 82% of the records present in this query.
End of explanation
patients_pn['label'].unique()
patients_pn['label'].value_counts()
patients_label = patients_pn['label'].value_counts().reset_index()
patients_label.columns=['label','Count']
patients_label['Count'].apply('{:,.2f}'.format)
patients_label['cum_perc'] = 100*patients_label.Count/patients_label.Count.sum()
patients_label['cum_perc'] = patients_label['cum_perc'].map('{:,.4f}%'.format)
print (patients_label)
patients_pn['label'].value_counts().plot(kind='bar')
Explanation: Label is the detail of the category, and is represented in 578 labels, where the most important are: Hemoglobin Arterial Base Excess, Phosphorous, WBC, Creatinine, Magnesium, PTT, INR, ALT, AST, Lactic Acid. And the largest amount (Hemoglobin Arterial Base Excess) represents 0.94% of the sample and the lowest (Lactic Acid) 0.82% of the sample.
End of explanation
patients_pn['ab_name'].unique()
patients_pn['ab_name'].value_counts()
patients_ab_name = patients_pn['ab_name'].value_counts().reset_index()
patients_ab_name.columns=['ab_name','Count']
patients_ab_name['Count'].apply('{:,.2f}'.format)
patients_ab_name['cum_perc'] = 100*patients_ab_name.Count/patients_ab_name.Count.sum()
patients_ab_name['cum_perc'] = patients_ab_name ['cum_perc'].map('{:,.4f}%'.format)
print (patients_ab_name)
patients_pn['ab_name'].value_counts().plot(kind='bar')
Explanation: Ab_name indicates which antibiotic the microorganism is sensitive to; together with interpretation, this field indicates the degree of resistance of the microorganism to the antibiotic. The main antibiotics evaluated are gentamicin, trimethoprim/sulfa, levofloxacin, ceftazidime, tobramycin, cefepime, ciprofloxacin, meropenem, erythromycin, oxacillin, vancomycin, ceftriaxone, tetracycline, clindamycin, piperacillin/tazo, which represent 80% of the sample.
End of explanation
patients_pn['org_name'].unique()
patients_pn['org_name'].value_counts()
patients_pn['org_name'].value_counts().plot(kind='bar')
patients_org_name = patients_pn['org_name'].value_counts().reset_index()
patients_org_name.columns=['org_name','Count']
patients_org_name['Count'].apply('{:,.2f}'.format)
patients_org_name['cum_perc'] = 100*patients_org_name.Count/patients_org_name.Count.sum()
patients_org_name['cum_perc'] = patients_org_name['cum_perc'].map('{:,.4f}%'.format)
print (patients_org_name)
Explanation: org_name holds the names of the microorganisms present in pneumonia patients. The main organisms found in these patients are staph aureus coag +, klebsiella pneumoniae, escherichia coli, pseudomonas aeruginosa, staphylococcus coagulase negative, klebsiella oxytoca, enterococcus sp., acinetobacter baumannii complex, serratia marcescens, and enterobacter cloacae.
End of explanation
patients_pn['interpretation'].unique()
patients_pn['interpretation'].value_counts()
patients_interpretation = patients_pn['interpretation'].value_counts().reset_index()
patients_interpretation.columns=['interpretation','Count']
patients_interpretation['Count'].apply('{:,.2f}'.format)
patients_interpretation['cum_perc'] = 100*patients_interpretation.Count/patients_interpretation.Count.sum()
patients_interpretation['cum_perc'] = patients_interpretation ['cum_perc'].map('{:,.4f}%'.format)
print (patients_interpretation)
patients_pn['interpretation'].value_counts().plot(kind='bar')
patients_pn.head()
Explanation: interpretation indicates the result of the sensitivity test: "S" when the organism is sensitive to the antibiotic, "R" when it is resistant, "I" when the result is intermediate, and "P" when it is pending.
End of explanation
patients_dummy = pd.get_dummies(patients_pn,prefix=['gender', 'age_group', 'category','label',
'org_name','ab_name', 'interpretation'])
patients_dummy.head()
Explanation: To turn the matrix into a pivot table, the first step is to encode some categorical variables as dummy variables. The chosen variables are gender, age_group, category, label, org_name, ab_name, and interpretation. This was done with the pandas get_dummies command. The result of the transformation is a pandas data frame of 2,430,640 rows by 716 columns; the new columns are binary variables that take the value 1 only when the corresponding category occurs. A small toy sketch of the encoding follows below.
End of explanation
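As a quick aside, here is a toy sketch (invented values, not the MIMIC columns) of what get_dummies does: each category value becomes its own 0/1 indicator column.
import pandas as pd

# Hypothetical toy frame, only to show the shape of the encoding
toy = pd.DataFrame({'subject_id': [1, 2, 3], 'gender': ['M', 'F', 'M'], 'valuenum_avg': [7.2, 9.1, 5.4]})
# 'gender' is replaced by the binary indicator columns gender_F and gender_M
encoded = pd.get_dummies(toy, prefix=['gender'], columns=['gender'])
print(encoded)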
patients_data = pd.pivot_table(patients_dummy,index=["subject_id", "hospital_expire_flag" ])
process = time.process_time()
print (process)
patients_data.head()
patients_data.info()
patients = patients_data.reset_index()
patients.head()
p_data= patients.ix[:,2:]
p_data.head()
p_target= patients['hospital_expire_flag']
p_target.head()
Explanation: The next step is to transform the matrix into a pivot table; the purpose of this transformation is to get the medical data into a single numeric row per subject.
To do that, I used the pandas pivot_table command with subject_id and hospital_expire_flag as the indexes. After this transformation the resulting pandas data frame has 829 rows by 724 columns. With the data in this form the classifier models can be applied. A toy sketch of the aggregation follows below.
End of explanation
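For intuition, a toy sketch (made-up numbers, not the real query output) of how pivot_table collapses several event rows into one averaged row per (subject_id, hospital_expire_flag) pair:
import pandas as pd

# Two fake subjects with two charted events each
toy = pd.DataFrame({'subject_id': [1, 1, 2, 2], 'hospital_expire_flag': [0, 0, 1, 1], 'valuenum_avg': [7.0, 9.0, 5.0, 3.0], 'gender_M': [1, 1, 0, 0]})
# The default aggregation function is the mean, so every remaining numeric column is averaged per index pair
per_patient = pd.pivot_table(toy, index=['subject_id', 'hospital_expire_flag'])
print(per_patient)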
X_train, X_test, y_train, y_test = train_test_split(
p_data, p_target, test_size=0.25, random_state=123)
X_train.head()
X_train.shape, y_train.shape
X_test.shape, y_test.shape
Explanation: In all models, the dependent variable is the survival state (Alive / Deceased). To split the data I work with a test size of 25% of the sample; I chose this value after some trials, since other splits either slowed the computation down or risked producing spurious or meaningless results.
End of explanation
# the same model as above, only the number of parallel jobs has changed
clf_SVC = SVC(kernel='linear', C=1)
scores_SVC = cross_val_score(clf_SVC, X_train, y_train, cv=4, n_jobs=-1)
print(scores_SVC)
print("Accuracy: %0.4f (+/- %0.4f)" % (scores_SVC.mean(), scores_SVC.std() * 2))
clf_SVC = clf_SVC.fit(X_train, y_train)
y_predicted_SVC = clf_SVC.predict(X_test)
print (metrics.classification_report(y_test, y_predicted_SVC))
process = time.process_time()
print (process)
sns.heatmap(confusion_matrix(y_test, y_predicted_SVC), annot = True, fmt = '', cmap = "GnBu")
print ("Fitting the Support Vector Classification - kernel Radial Basis Function classifier to the training set")
param_grid = {
'C': [1e-3, 1e-2, 1, 1e3, 5e3, 1e4, 5e4, 1e5],
'gamma': [0.0001, 0.0005, 0.001, 0.005, 0.01, 0.1, 1],
}
# for sklearn version 0.16 or prior, the class_weight parameter value is 'auto'
clf_RBF = GridSearchCV(SVC(kernel='rbf', class_weight='balanced', cache_size=1000), param_grid)
clf_RBF = clf_RBF.fit(X_train, y_train)
scores_RBF = cross_val_score(clf_RBF, X_train, y_train, cv=4, n_jobs=-1)
print(scores_RBF)
print("Accuracy: %0.2f (+/- %0.2f)" % (scores_RBF.mean(), scores_RBF.std() * 2))
print ("Best estimator found by grid search:")
print (clf_RBF.best_estimator_)
y_predicted_RBF = clf_RBF.predict(X_test)
print (metrics.classification_report(y_test, y_predicted_RBF))
process = time.process_time()
print (process)
sns.heatmap(confusion_matrix(y_test, y_predicted_RBF), annot = True, fmt = '', cmap = "GnBu")
# Mimic-iii_Model-Pulmonary.ipynb
print ("Fitting the Linear Support Vector Classification - Hingue loss classifier to the training set")
param_grid = {
'C': [1e-3, 1e-2, 1, 1e3, 5e3, 1e4, 5e4, 1e5],
#'gamma': [0.0001, 0.0005, 0.001, 0.005, 0.01, 0.1, 1],
}
# for sklearn version 0.16 or prior, the class_weight parameter value is 'auto'
clf_LSCV = GridSearchCV(LinearSVC(C=1, loss= 'hinge'), param_grid, n_jobs=-1)
clf_LSCV = clf_LSCV.fit(X_train, y_train)
scores_LSCV = cross_val_score(clf_LSCV, X_train, y_train, cv=4, n_jobs=-1)
print(scores_LSCV)
print("Accuracy: %0.4f (+/- %0.4f)" % (scores_LSCV.mean(), scores_LSCV.std() * 2))
print ("Best estimator found by grid search:")
print (clf_LSCV.best_estimator_)
y_predicted_LSCV = clf_LSCV.predict(X_test)
print (metrics.classification_report(y_test, y_predicted_LSCV))
process = time.process_time()
print (process)
sns.heatmap(confusion_matrix(y_test, y_predicted_LSCV), annot = True, fmt = '', cmap = "GnBu")
Explanation: Support Vector Machine: the Support Vector Machine (SVM) is a classification method that separates the sample points with hyperplanes in a multidimensional space, one region per label. The algorithm classifies the data by searching for the optimal separating hyperplane (the one with the largest margin between the classes); the sample points that define that margin are called support vectors. The optimization is carried out over kernels (mathematical functions); in this analysis I used several: linear, radial basis function (rbf), and sigmoid. I purposely avoided the polynomial kernel, mostly because of parameterization problems that kept the algorithm from running on this data. A small standalone sketch contrasting the linear and rbf kernels follows below.
End of explanation
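For intuition only, the standalone sketch below (synthetic data, not the patient matrix) contrasts the linear and rbf kernels used in this analysis:
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic two-class problem, only to compare the two kernels
X_toy, y_toy = make_classification(n_samples=300, n_features=10, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X_toy, y_toy, test_size=0.25, random_state=0)
for kernel in ['linear', 'rbf']:
    toy_clf = SVC(kernel=kernel, C=1)
    toy_clf.fit(Xtr, ytr)
    print(kernel, toy_clf.score(Xte, yte))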
print ("Fitting the Support Vector Classification - kernel Sigmoid classifier to the training set")
param_grid = {
'C': [1e3, 1e4, 1e5],
'gamma': [0.0001, 0.0005, 0.001, 0.005, 0.01, 0.1, 1],
'coef0':[-1,0,1]
}
# for sklearn version 0.16 or prior, the class_weight parameter value is 'auto'
clf_SIGMOID = GridSearchCV(SVC(kernel='sigmoid', class_weight='balanced'), param_grid, n_jobs=-1)
clf_SIGMOID = clf_SIGMOID.fit(X_train, y_train)
scores_SIGMOID = cross_val_score(clf_SIGMOID, X_train, y_train, cv=4, n_jobs=-1)
print(scores_SIGMOID)
print("Accuracy: %0.4f (+/- %0.4f)" % (scores_SIGMOID.mean(), scores_SIGMOID.std() * 2))
print ("Best estimator found by grid search:")
print (clf_SIGMOID.best_estimator_)
y_predicted_SIGMOID = clf_SIGMOID.predict(X_test)
print (metrics.classification_report(y_test, y_predicted_SIGMOID))
process = time.process_time()
print (process)
sns.heatmap(confusion_matrix(y_test, y_predicted_SIGMOID), annot = True, fmt = '', cmap = "GnBu")
Explanation: I struggled a lot with this model (the polynomial kernel), so I will not use it for the capstone project. The abandoned code is kept below for reference.
print ("Fitting the classifier to the training set")
from sklearn.tree import DecisionTreeClassifier
param_grid = {
'C': [1e-3, 1e-2, 1, 1e3, 5e3, 1e4, 5e4, 1e5],
'gamma': [0.0001, 0.0005, 0.001, 0.005, 0.01, 0.1, 1],
'degree': [3,4,5]
}
# for sklearn version 0.16 or prior, the class_weight parameter value is 'auto'
clf_poly = GridSearchCV(SVC(kernel='poly', class_weight='balanced'), param_grid, n_jobs=-1)
clf_poly = clf_poly.fit(X_train, y_train)
scores_poly = cross_val_score(clf_poly, X_train, y_train, cv=4,n_jobs=-1)
print(scores_poly)
print("Accuracy: %0.4f (+/- %0.4f)" % (scores_poly.mean(), scores_poly.std() * 2))
print ("Best estimator found by grid search:")
print (clf_poly.best_estimator_)
y_predicted_poly = clf_poly.predict(X_test)
print (metrics.classification_report(y_test, y_predicted_poly))
process = time.process_time()
print (process)
End of explanation
print ("Fitting the Decision Tree Classifier to the training set")
param_grid = {
'max_depth': [2, 3, 4, 5, 6,7],
#'gamma': [0.0001, 0.0005, 0.001, 0.005, 0.01, 0.1, 1],
}
# for sklearn version 0.16 or prior, the class_weight parameter value is 'auto'
clf_DTC = GridSearchCV(DecisionTreeClassifier(criterion='entropy', random_state=123,
class_weight='balanced'), param_grid, n_jobs=-1)
clf_DTC = clf_DTC.fit(X_train, y_train)
scores_DTC = cross_val_score(clf_DTC, X_train, y_train, cv=4, n_jobs=-1)
print(scores_DTC)
print("Accuracy: %0.4f (+/- %0.4f)" % (scores_DTC.mean(), scores_DTC.std() * 2))
print ("Best estimator found by grid search:")
print (clf_DTC.best_estimator_)
y_predicted_DTC = clf_DTC.predict(X_test)
print (metrics.classification_report(y_test, y_predicted_DTC))
process = time.process_time()
print (process)
sns.heatmap(confusion_matrix(y_test, y_predicted_DTC), annot = True, fmt = '', cmap = "GnBu")
Explanation: The Decision Tree Classifier is a non-parametric method that learns a sequence of binary decisions which, laid out together, form a decision tree. A small standalone sketch follows below.
End of explanation
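As a standalone illustration (the iris toy dataset, not the ICU matrix), a shallow tree is just a stack of binary feature tests:
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

iris = load_iris()
toy_tree = DecisionTreeClassifier(criterion='entropy', max_depth=3, random_state=123)
toy_tree.fit(iris.data, iris.target)
# Each internal node of the fitted tree is a single yes/no test on one feature
print(toy_tree.tree_.node_count, "nodes in the fitted tree")
print(toy_tree.feature_importances_)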
print ("Fitting the Random Forest Classifier to the training set")
param_grid = {
'n_estimators' :[3,5,7,10],
'max_depth': [2, 3, 4, 5, 6,7],
#'gamma': [0.0001, 0.0005, 0.001, 0.005, 0.01, 0.1, 1],
}
# for sklearn version 0.16 or prior, the class_weight parameter value is 'auto'
clf_RFC = GridSearchCV(RandomForestClassifier(min_samples_split=2, random_state=123, class_weight='balanced'),
param_grid, n_jobs=-1)
clf_RFC = clf_RFC.fit(X_train, y_train)
scores_RFC = cross_val_score(clf_RFC, X_train, y_train, cv=4, n_jobs=-1)
print(scores_RFC)
print("Accuracy: %0.4f (+/- %0.4f)" % (scores_RFC.mean(), scores_RFC.std() * 2))
print ("Best estimator found by grid search:")
print (clf_RFC.best_estimator_)
y_predicted_RFC = clf_RFC.predict(X_test)
print (metrics.classification_report(y_test, y_predicted_RFC))
process = time.process_time()
print (process)
sns.heatmap(confusion_matrix(y_test, y_predicted_RFC), annot = True, fmt = '', cmap = "GnBu")
print ("Fitting the Extremely Tree Classifier to the training set")
param_grid = {
'n_estimators' :[3,5,10],
'max_depth': [2, 3, 4, 5, 6,7],
#'gamma': [0.0001, 0.0005, 0.001, 0.005, 0.01, 0.1, 1],
}
# for sklearn version 0.16 or prior, the class_weight parameter value is 'auto'
clf_EFC = GridSearchCV(ExtraTreesClassifier(min_samples_split=2, random_state=123, class_weight='balanced'),
param_grid, n_jobs=-1)
clf_EFC = clf_EFC.fit(X_train, y_train)
scores_EFC = cross_val_score(clf_EFC, X_train, y_train, cv=4, n_jobs=-1)
print(scores_EFC)
print("Accuracy: %0.4f (+/- %0.4f)" % (scores_EFC.mean(), scores_EFC.std() * 2))
print ("Best estimator found by grid search:")
print (clf_EFC.best_estimator_)
y_predicted_EFC = clf_EFC.predict(X_test)
print (metrics.classification_report(y_test, y_predicted_EFC))
process = time.process_time()
print (process)
sns.heatmap(confusion_matrix(y_test, y_predicted_EFC), annot = True, fmt = '', cmap = "GnBu")
print ("Fitting the Ada Boost Classifier to the training set")
param_grid = {
'n_estimators' :[3,5,10],
'learning_rate': [0.01],
#'max_depth': [2, 3, 4, 5, 6,7],
#'gamma': [0.0001, 0.0005, 0.001, 0.005, 0.01, 0.1, 1],
}
# for sklearn version 0.16 or prior, the class_weight parameter value is 'auto'
clf_ABC = GridSearchCV(AdaBoostClassifier(random_state=123), param_grid, n_jobs=-1)
clf_ABC = clf_ABC.fit(X_train, y_train)
scores_ABC = cross_val_score(clf_ABC, X_train, y_train, cv=4, n_jobs=-1)
print(scores_ABC)
print("Accuracy: %0.4f (+/- %0.4f)" % (scores_ABC.mean(), scores_ABC.std() * 2))
print ("Best estimator found by grid search:")
print (clf_ABC.best_estimator_)
y_predicted_ABC = clf_ABC.predict(X_test)
print (metrics.classification_report(y_test, y_predicted_ABC))
process = time.process_time()
print (process)
sns.heatmap(confusion_matrix(y_test, y_predicted_ABC), annot = True, fmt = '', cmap = "GnBu")
Explanation: Ensemble methods include the Random Forest, Extremely Randomized Trees, and AdaBoost classifiers. These methods "combine the predictions of several base estimators built with a given learning algorithm in order to improve generalizability / robustness over a single estimator". The first two are averaging methods, in which independent estimators are fit on random samples and their predictions are averaged, giving a lower variance than a single estimator. A small sketch of that variance reduction follows below.
End of explanation
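A minimal sketch of that variance-reduction idea, on synthetic data rather than the patient matrix:
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

X_toy, y_toy = make_classification(n_samples=500, n_features=20, random_state=0)
for name, toy_model in [('single tree', DecisionTreeClassifier(random_state=0)), ('random forest', RandomForestClassifier(n_estimators=10, random_state=0))]:
    # The averaged ensemble usually shows a smaller spread of scores across folds
    fold_scores = cross_val_score(toy_model, X_toy, y_toy, cv=4)
    print(name, fold_scores.mean(), fold_scores.std())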
print ("Fitting the Logistic Regression Classification - Hingue loss classifier to the training set")
param_grid = {
'C': [1e-3, 1e-2, 1, 1e3, 5e3, 1e4, 5e4, 1e5],
#'gamma': [0.0001, 0.0005, 0.001, 0.005, 0.01, 0.1, 1],
}
# for sklearn version 0.16 or prior, the class_weight parameter value is 'auto'
clf_LOGREG = GridSearchCV(LogisticRegression(random_state=123), param_grid, n_jobs=-1)
clf_LOGREG= clf_LOGREG .fit(X_train, y_train)
scores_LOGREG = cross_val_score(clf_LOGREG, X_train, y_train, cv=4, n_jobs=-1)
print(scores_LOGREG)
print("Accuracy: %0.4f (+/- %0.4f)" % (scores_LOGREG.mean(), scores_LOGREG.std() * 2))
print ("Best estimator found by grid search:")
print (clf_LOGREG.best_estimator_)
y_predicted_LOGREG = clf_LOGREG.predict(X_test)
print (metrics.classification_report(y_test, y_predicted_LOGREG))
process = time.process_time()
print (process)
sns.heatmap(confusion_matrix(y_test, y_predicted_LOGREG), annot = True, fmt = '', cmap = "GnBu")
# Best Models
clf_RBF_b = SVC(C=1, cache_size=200, class_weight='balanced', coef0=0.0,
decision_function_shape=None, degree=3, gamma=1, kernel='rbf',
max_iter=-1, probability=False, random_state=None, shrinking=True,
tol=0.001, verbose=False)
y_predicted_RBF_b = clf_RBF_b.fit(X_train,y_train).predict(X_test)
clf_LSCV_b = LinearSVC(C=0.001, class_weight=None, dual=True, fit_intercept=True,
intercept_scaling=1, loss='hinge', max_iter=1000, multi_class='ovr',
penalty='l2', random_state=None, tol=0.0001, verbose=0)
y_predicted_LSCV_b = clf_LSCV_b.fit(X_train,y_train).predict(X_test)
clf_SIGMOID_b = SVC(C=1000.0, cache_size=200, class_weight='balanced', coef0=1,
decision_function_shape=None, degree=3, gamma=0.001, kernel='sigmoid',
max_iter=-1, probability=False, random_state=None, shrinking=True,
tol=0.001, verbose=False)
y_predicted_SIGMOID_b = clf_SIGMOID_b.fit(X_train,y_train).predict(X_test)
clf_DTC_b = DecisionTreeClassifier(class_weight='balanced', criterion='entropy',
max_depth=7, max_features=None, max_leaf_nodes=None,
min_impurity_split=1e-07, min_samples_leaf=1,
min_samples_split=2, min_weight_fraction_leaf=0.0,
presort=False, random_state=123, splitter='best')
y_predicted_DTC_b = clf_DTC_b.fit(X_train,y_train).predict(X_test)
clf_RFC_b = RandomForestClassifier(bootstrap=True, class_weight='balanced',
criterion='gini', max_depth=7, max_features='auto',
max_leaf_nodes=None, min_impurity_split=1e-07,
min_samples_leaf=1, min_samples_split=2,
min_weight_fraction_leaf=0.0, n_estimators=10, n_jobs=1,
oob_score=False, random_state=123, verbose=0, warm_start=False)
y_predicted_RFC_b = clf_RFC_b.fit(X_train,y_train).predict(X_test)
clf_EFC_b = ExtraTreesClassifier(bootstrap=False, class_weight='balanced',
criterion='gini', max_depth=5, max_features='auto',
max_leaf_nodes=None, min_impurity_split=1e-07,
min_samples_leaf=1, min_samples_split=2,
min_weight_fraction_leaf=0.0, n_estimators=10, n_jobs=1,
oob_score=False, random_state=123, verbose=0, warm_start=False)
y_predicted_EFC_b = clf_EFC_b.fit(X_train,y_train).predict(X_test)
clf_ABC_b = AdaBoostClassifier(algorithm='SAMME.R', base_estimator=None,
learning_rate=0.01, n_estimators=3, random_state=123)
y_predicted_ABC_b = clf_ABC_b.fit(X_train,y_train).predict(X_test)
clf_LR_b= LogisticRegression(C=0.001, class_weight=None, dual=False, fit_intercept=True,
intercept_scaling=1, max_iter=100, multi_class='ovr', n_jobs=1,
penalty='l2', random_state=123, solver='liblinear', tol=0.0001,
verbose=0, warm_start=False)
y_predicted_LR_b = clf_LR_b.fit(X_train,y_train).predict(X_test)
fig, axes = plt.subplots(2,4)
sns.heatmap(confusion_matrix(y_test, y_predicted_RBF_b), annot = True, fmt = '', cmap = "GnBu", ax=axes[0, 0])
sns.heatmap(confusion_matrix(y_test, y_predicted_LSCV_b), annot = True, fmt = '', cmap = "GnBu", ax=axes[0, 1])
sns.heatmap(confusion_matrix(y_test, y_predicted_SIGMOID_b), annot = True, fmt = '', cmap = "GnBu", ax=axes[0, 2])
sns.heatmap(confusion_matrix(y_test, y_predicted_DTC_b), annot = True, fmt = '', cmap = "GnBu", ax=axes[0, 3])
sns.heatmap(confusion_matrix(y_test, y_predicted_RFC_b), annot = True, fmt = '', cmap = "GnBu", ax=axes[1, 0])
sns.heatmap(confusion_matrix(y_test, y_predicted_EFC_b), annot = True, fmt = '', cmap = "GnBu", ax=axes[1, 1])
sns.heatmap(confusion_matrix(y_test, y_predicted_ABC_b), annot = True, fmt = '', cmap = "GnBu", ax=axes[1, 2])
sns.heatmap(confusion_matrix(y_test, y_predicted_LR_b), annot = True, fmt = '', cmap = "GnBu", ax=axes[1, 3])
Explanation: The Logistic Regression classifier is the most traditional method applied to classification problems. A logistic (sigmoid) function is applied to a linear combination of the features, and the result is the probability of occurrence of the binary categorical variable. A tiny worked sketch of the logistic function follows below.
End of explanation
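A tiny worked sketch of that logistic function, with made-up scores:
import numpy as np

def sigmoid(z):
    # Squashes a linear score w.x + b into a probability between 0 and 1
    return 1.0 / (1.0 + np.exp(-z))

scores = np.array([-4.0, -1.0, 0.0, 1.0, 4.0])
print(sigmoid(scores))                       # roughly [0.018, 0.269, 0.5, 0.731, 0.982]
print((sigmoid(scores) >= 0.5).astype(int))  # predict class 1 above the 0.5 threshold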
eclf1 = VotingClassifier(estimators=[
('rbf',clf_RBF_b), ('LSCV',clf_LSCV_b),
('sigmoid',clf_SIGMOID_b), ('DTC',clf_DTC_b),
('RFC',clf_RFC_b),('EFC',clf_EFC_b),
('ABC',clf_ABC_b), ('svc',clf_LR_b)],
voting='hard')
eclf1 = eclf1.fit(X_train, y_train)
y_predict_eclf1 = eclf1.predict(X_test)
print (eclf1.get_params(deep=True))
print (eclf1.score(X_train, y_train, sample_weight=None))
eclf2 = VotingClassifier(estimators=[
('rbf',clf_RBF), ('LSCV',clf_LSCV),
('sigmoid',clf_SIGMOID), ('DTC',clf_DTC),
('RFC',clf_RFC),('EFC',clf_EFC),
('ABC',clf_ABC), ('svc',clf_LOGREG)],
voting='hard')
eclf2 = eclf2.fit(X_train, y_train)
y_predict_eclf2 = eclf2.predict(X_test)
print (eclf2.get_params(deep=True))
print (eclf2.score(X_train, y_train, sample_weight=None))
#Basically does the same that chose the best models, this function uses the best models too
# Source: http://scikit-learn.org/stable/auto_examples/model_selection/plot_confusion_matrix.html
def plot_confusion_matrix(cm, classes,
normalize=False,
title='Confusion matrix',
cmap=plt.cm.Blues):
"""
This function prints and plots the confusion matrix.
Normalization can be applied by setting `normalize=True`.
"""
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print("Normalized confusion matrix")
else:
print('Confusion matrix, without normalization')
print(cm)
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, cm[i, j],
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
cnf_matrix_RBF = confusion_matrix(y_test, y_predicted_RBF_b)
np.set_printoptions(precision=2)
# Plot non-normalized confusion matrix
plt.figure()
plot_confusion_matrix(cnf_matrix_RBF, classes= '1',
title='RBF Confusion matrix, without normalization')
# Plot normalized confusion matrix
plt.figure()
plot_confusion_matrix(cnf_matrix_RBF, classes='1', normalize=True,
title='RBF Normalized confusion matrix')
plt.show()
cnf_matrix_LSCV = confusion_matrix(y_test, y_predicted_LSCV_b)
np.set_printoptions(precision=2)
# Plot non-normalized confusion matrix
plt.figure()
plot_confusion_matrix(cnf_matrix_LSCV, classes='1',
title='LSCV Confusion matrix, without normalization')
# Plot normalized confusion matrix
plt.figure()
plot_confusion_matrix(cnf_matrix_LSCV, classes='1', normalize=True,
title='LSCV Normalized confusion matrix')
plt.show()
cnf_matrix_SIGMOID = confusion_matrix(y_test, y_predicted_SIGMOID_b)
np.set_printoptions(precision=2)
# Plot non-normalized confusion matrix
plt.figure()
plot_confusion_matrix(cnf_matrix_SIGMOID, classes='1',
title='SIGMOID Confusion matrix, without normalization')
# Plot normalized confusion matrix
plt.figure()
plot_confusion_matrix(cnf_matrix_SIGMOID, classes='1', normalize=True,
title='SIGMOID Normalized confusion matrix')
plt.show()
cnf_matrix_DTC = confusion_matrix(y_test, y_predicted_DTC_b)
np.set_printoptions(precision=2)
# Plot non-normalized confusion matrix
plt.figure()
plot_confusion_matrix(cnf_matrix_DTC, classes='1',
title='DTC Confusion matrix, without normalization')
# Plot normalized confusion matrix
plt.figure()
plot_confusion_matrix(cnf_matrix_DTC, classes='1', normalize=True,
title='DTC Normalized confusion matrix')
plt.show()
cnf_matrix_RFC = confusion_matrix(y_test, y_predicted_RFC_b)
np.set_printoptions(precision=2)
# Plot non-normalized confusion matrix
plt.figure()
plot_confusion_matrix(cnf_matrix_RFC, classes='1',
title='RFC Confusion matrix, without normalization')
# Plot normalized confusion matrix
plt.figure()
plot_confusion_matrix(cnf_matrix_RFC, classes='1', normalize=True,
title='RFC Normalized confusion matrix')
plt.show()
cnf_matrix_EFC = confusion_matrix(y_test, y_predicted_EFC_b)
np.set_printoptions(precision=2)
# Plot non-normalized confusion matrix
plt.figure()
plot_confusion_matrix(cnf_matrix_EFC, classes='1',
title='EFC Confusion matrix, without normalization')
# Plot normalized confusion matrix
plt.figure()
plot_confusion_matrix(cnf_matrix_EFC, classes='1', normalize=True,
title='EFC Normalized confusion matrix')
plt.show()
cnf_matrix_ABC = confusion_matrix(y_test, y_predicted_ABC_b)
np.set_printoptions(precision=2)
# Plot non-normalized confusion matrix
plt.figure()
plot_confusion_matrix(cnf_matrix_ABC, classes='1',
title='ABC Confusion matrix, without normalization')
# Plot normalized confusion matrix
plt.figure()
plot_confusion_matrix(cnf_matrix_ABC, classes='1', normalize=True,
title='ABC Normalized confusion matrix')
plt.show()
cnf_matrix_LR = confusion_matrix(y_test, y_predicted_LR_b)
np.set_printoptions(precision=2)
# Plot non-normalized confusion matrix
plt.figure()
plot_confusion_matrix(cnf_matrix_LR, classes='1',
title='LR Confusion matrix, without normalization')
# Plot normalized confusion matrix
plt.figure()
plot_confusion_matrix(cnf_matrix_LR, classes='1', normalize=True,
title='LR Normalized confusion matrix')
plt.show()
Explanation: Ensemble voting classifier
Voting over all the models is not a good option: it inherits all the problems of the models that do not run well.
End of explanation
eclf3 = VotingClassifier(estimators=[
('rbf',clf_RBF), ('sigmoid',clf_SIGMOID), ('DTC',clf_DTC),
('RFC',clf_RFC),('EFC',clf_EFC)],
voting='hard')
eclf3 = eclf3.fit(X_train, y_train)
y_predict_eclf3 = eclf3.predict(X_test)
print (eclf3.get_params(deep=True))
print (eclf3.score(X_train, y_train, sample_weight=None))
print(y_predict_eclf3)
scores_eclf3 = cross_val_score(eclf3 , X_train, y_train, cv=4, n_jobs=-1)
print(scores_eclf3 )
print("Accuracy: %0.4f (+/- %0.4f)" % (scores_eclf3.mean(), scores_eclf3.std() * 2))
print (metrics.classification_report(y_test, y_predict_eclf3))
cnf_matrix_eclf3 = confusion_matrix(y_test, y_predict_eclf3)
np.set_printoptions(precision=2)
# Plot non-normalized confusion matrix
plt.figure()
plot_confusion_matrix(cnf_matrix_eclf3, classes='1',
title='ECLF Confusion matrix, without normalization')
# Plot normalized confusion matrix
plt.figure()
plot_confusion_matrix(cnf_matrix_eclf3, classes='1', normalize=True,
title='ECLF Normalized confusion matrix')
plt.show()
Explanation: The following ensemble model aggregates only five models:
End of explanation
eclf4 = VotingClassifier(estimators=[
('DTC',clf_DTC), ('EFC',clf_EFC)],
voting='hard')
eclf4 = eclf4.fit(X_train, y_train)
y_predict_eclf4 = eclf4.predict(X_test)
print (eclf4.get_params(deep=True))
print (eclf4.score(X_train, y_train, sample_weight=None))
scores_eclf4 = cross_val_score(eclf4 , X_train, y_train, cv=4, n_jobs=-1)
print(scores_eclf4 )
print("Accuracy: %0.4f (+/- %0.4f)" % (scores_eclf4.mean(), scores_eclf4.std() * 2))
print (metrics.classification_report(y_test, y_predict_eclf4))
cnf_matrix_eclf4 = confusion_matrix(y_test, y_predict_eclf4)
np.set_printoptions(precision=2)
# Plot non-normalized confusion matrix
plt.figure()
plot_confusion_matrix(cnf_matrix_eclf4, classes='1',
title='ECLF Confusion matrix, without normalization')
# Plot normalized confusion matrix
plt.figure()
plot_confusion_matrix(cnf_matrix_eclf4, classes='1', normalize=True,
title='ECLF Normalized confusion matrix')
plt.show()
Explanation: This ensemble voting model aggregates the decision tree and the extremely randomized trees models:
End of explanation
# Code source: http://scikit-learn.org/stable/auto_examples/ensemble/plot_forest_importances.html
from sklearn.datasets import make_classification
forest = clf_EFC_b
forest.fit(X_train, y_train)
feature_names = X_train.columns
importances = forest.feature_importances_
std = np.std([tree.feature_importances_ for tree in forest.estimators_],
axis=0)
indices = np.argsort(importances)[::-1]
# Print the feature ranking
print("Feature ranking:")
for f in range(X_train.shape[1]):
if importances[indices[f]]>0:
print("%d. feature %d (%f)" % (f + 1, indices[f], importances[indices[f]]))
#plt.xticks(range(heart_train.shape[1]), )
# Plot the feature importances of the forest
plt.figure()
plt.title("Feature importances")
plt.bar(range(X_train.shape[1]), importances[indices],
color="r", yerr=std[indices], align="center")
plt.xticks(range(X_train.shape[1]), feature_names)
plt.xlim([-1, X_train.shape[1]])
plt.show()
from sklearn.datasets import make_classification
forest = clf_EFC_b
forest.fit(X_train, y_train)
feature_names = X_train.columns
importances = forest.feature_importances_
std = np.std([tree.feature_importances_ for tree in forest.estimators_],
axis=0)
indices = np.argsort(importances)[::-1]
impor = []
for f in range(X_train.shape[1]):
if importances[indices[f]]>0:
impor.append({'Feature': feature_names[f] , 'Importance': importances[indices[f]]})
feature_importance = pd.DataFrame(impor).sort_values('Importance',ascending = False)
print(feature_importance.to_string())
feature_importance.info()
feature_importance['Importance'].sum()
#surveys_df[surveys_df.year == 2002]
more_important= feature_importance[feature_importance.Importance >= 0.009]
more_important
more_important.plot(kind='bar')
Explanation: The best model found here is the ensemble of the decision tree and the extremely randomized trees. Even though the ensemble that aggregates five methods shows a slightly better score, applying the principle of Occam's razor makes the simpler model preferable. The resulting model can predict deaths from a series of medical examinations, as proposed in the hypothesis. While the accuracy is not the best (76%), I think it may be a good starting point for future investigations by interdisciplinary ICU teams forecasting disease outcomes.
From the forest model it is possible to see how much each feature weighs in the results; these weights are called importances. Only 129 features carry any importance in this model, the rest have zero weight. As one would expect, the antibiotic-sensitivity features are the most important (together they account for 57% of the importance), with the AMIKACIN antibiotic the single most important feature of the sample. The age-group features weigh 0.97% each on average, followed by the category features, each of which weighs less than 1%. A hedged sketch of threshold-based feature selection follows below.
End of explanation |
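One possible follow-up, shown here only as a hedged sketch on synthetic data (it is not part of this notebook), is to reuse those importances to keep the features above the 0.009 cut and retrain on the reduced matrix:
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.feature_selection import SelectFromModel

X_toy, y_toy = make_classification(n_samples=400, n_features=50, n_informative=8, random_state=0)
toy_forest = ExtraTreesClassifier(n_estimators=10, random_state=0).fit(X_toy, y_toy)
# Keep only the columns whose importance clears the chosen threshold
selector = SelectFromModel(toy_forest, threshold=0.009, prefit=True)
X_reduced = selector.transform(X_toy)
print(X_toy.shape, '->', X_reduced.shape)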
9,489 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exercise 2
Consider the file movies.csv, obtained by extracting the first 1000 records of the dataset downloadable at https://www.kaggle.com/rounakbanik/the-movies-dataset#movies_metadata.csv
Step1: Import of the pandas, ast and numpy modules.
Step2: 1) Definition of the get_items() function
The get_items() function takes as input a value of the genres|production_countries field and returns the list of genres|production countries.
Step3: 2) Reading the csv file with Pandas
Step4: 3) Construction of the three basic data structures
a) Dictionary of film information
Step5: b) List of the countries that produced at least one film (each country must appear in the list exactly as many times as the number of films it produced).
Step6: c) Dictionary of popularities
Step7: 4) Extraction of the 10 countries that produced the most films
Build the list (first output) of the top 10 countries that produced the most films, ordering them by decreasing number of films. Each country must be represented as a tuple (country name, number of films produced).
Step8: 5) Extraction, for each genre, of the n_most_popular most popular films ordered by decreasing popularity, and extraction of the languages involved for each genre
a) From the popularity dictionary, derive the dictionary with the same key/value structure, except that the value for a key (genre) is the list of the n_most_popular most popular films ordered by decreasing popularity.
NOTE
Step9: b) From the previous dictionary, derive the dictionary with the same structure in which the [popularity, id] lists are replaced by [original title, tagline] lists (second output).
Build the list of key-value tuples and then build the dictionary by passing that list to the dict() function.
Step10: c) From the dictionary of point 5a, extract the dictionary of the sets of original languages involved | Python Code:
input_file_name = './movies.csv'
n_most_popular = 15 # Parametro N
Explanation: Exercise 2
Consider the file movies.csv, obtained by extracting the first 1000 records of the dataset downloadable at https://www.kaggle.com/rounakbanik/the-movies-dataset#movies_metadata.csv.
The dataset is in csv format and contains, in records of 24 comma-separated fields, information about films.
The file movies.csv contains only a subset of the fields of the original dataset.
The fields of the csv file needed to solve the exercise are:
id: progressive index
genres: string representing the literal of a list of dictionaries with keys id and name, each of which provides a genre
[{'id': 16, 'name': 'Animation'}, {'id': 35, 'name': 'Comedy'}]
original_title: original title
popularity: popularity value
tagline: tagline of the film
original_language: original language
production_countries: string representing the literal of a list of dictionaries with keys iso_3166_1 and name, each of which provides a production country [{'iso_3166_1': 'DE', 'name': 'Germany'}, {'iso_3166_1': 'US', 'name': 'United States of America'}]
You are asked to:
- list the 10 countries that produced the most films, ordering them by decreasing number of films produced and specifying the number of films produced by each
- provide, for each film genre present in the dataset, the ranking of the N (input parameter) most popular films (for that genre), ordering them by decreasing popularity and specifying for each of them the original title and the tagline
- the set of original languages involved in the previous ranking
Input parameters:
- film dataset
- parameter N
General requirements:
define a get_items() function that takes as input either of the two fields genres and production_countries (interchangeably) and extracts
the list of genres when the value of a genres field is passed as the argument
the list of production countries when the value of a production_countries field is passed as the argument
Produce the output in the following variables:
list of 10 two-element tuples (country name, number of films produced) containing the top 10 countries that produced the most films, ordered by decreasing number of films produced
dictionary of the per-genre rankings of the top N films ordered by decreasing popularity:
key: genre of a film
value: list of N two-element lists [original title, tagline] with the top N films ordered by decreasing popularity
dictionary of the sets of languages involved in each of the previous per-genre rankings:
key: genre of a film
value: set of the original languages involved
Solution
Input parameters
End of explanation
import pandas as pd
import ast
import numpy as np
Explanation: Import of the pandas, ast and numpy modules.
End of explanation
def get_items(arg_string):
return [d['name'] for d in ast.literal_eval(arg_string)]
#get_items("[{'iso_3166_1': 'DE', 'name': 'Germany'}, {'iso_3166_1': 'US', 'name': 'United States of America'}]")
Explanation: 1) Definition of the get_items() function
The get_items() function takes as input a value of the genres|production_countries field and returns the list of genres|production countries. A quick usage check follows below.
End of explanation
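As a quick usage check (the sample string is taken from the field description above):
import ast

raw = "[{'id': 16, 'name': 'Animation'}, {'id': 35, 'name': 'Comedy'}]"
# literal_eval turns the stored string back into a real list of dictionaries
print(ast.literal_eval(raw))
print(get_items(raw))  # ['Animation', 'Comedy']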
df = pd.read_csv('movies.csv')
#df
Explanation: 2) Reading the csv file with Pandas
End of explanation
info_dict = {}
for (index, record) in df.iterrows():
info_dict[index] = (record['original_title'], record['tagline'], record['original_language'])
info_dict
Explanation: 3) Construction of the three basic data structures
a) Dictionary of film information:
- key: film id
- value: tuple (original title, tagline, original language)
End of explanation
country_list = []
for (index, record) in df.iterrows():
country_list.extend(get_items(record['production_countries']))
country_list
Explanation: b) List of the countries that produced at least one film (each country must appear in the list exactly as many times as the number of films it produced).
End of explanation
pop_dict = {}
for (index, record) in df.iterrows():
if np.isnan(record['popularity']) == False:
for gen in get_items(record['genres']):
value = pop_dict.get(gen, [])
value.append([record['popularity'], index])
pop_dict[gen] = value
pop_dict
Explanation: c) Dictionary of popularities:
- key: film genre
- value: list of the films associated with the genre (each film must be represented as a nested list [popularity, id])
NB: check that the popularity field is not NaN (Not a Number).
End of explanation
from collections import Counter
country_rank_list = Counter(country_list).most_common()[:10]
country_rank_list
Explanation: 4) Extraction of the 10 countries that produced the most films
Build the list (first output) of the top 10 countries that produced the most films, ordering them by decreasing number of films. Each country must be represented as a tuple (country name, number of films produced). A tiny check of most_common follows below.
End of explanation
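A tiny standalone check of the (item, count) tuples that most_common returns:
from collections import Counter

toy = ['US', 'US', 'FR', 'US', 'DE', 'FR']
print(Counter(toy).most_common(2))  # [('US', 3), ('FR', 2)]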
tuple_list = [(genere, sorted(pop_dict[genere])[::-1][:n_most_popular]) for genere in pop_dict]
pop_rank_dict = dict(tuple_list)
pop_rank_dict
Explanation: 5) Extraction, for each genre, of the n_most_popular most popular films ordered by decreasing popularity, and extraction of the languages involved for each genre
a) From the popularity dictionary, derive the dictionary with the same key/value structure, except that the value for a key (genre) is the list of the n_most_popular most popular films ordered by decreasing popularity.
NOTE: the values of this dictionary are the lists of the popularity dictionary sorted by decreasing popularity and truncated to the first n_most_popular elements.
Build the list of key-value tuples and then build the dictionary by passing that list to the dict() function.
End of explanation
tuple_list = []
for genere in pop_rank_dict:
new_list = []
for film in pop_rank_dict[genere]:
film_id = film[1]
original_title = info_dict[film_id][0]
tagline = info_dict[film_id][1]
new_film = [original_title, tagline]
new_list.append(new_film)
tuple_list.append((genere, new_list))
pop_rank_dict_out = dict(tuple_list)
pop_rank_dict_out
Explanation: b) From the previous dictionary, derive the dictionary with the same structure in which the [popularity, id] lists are replaced by [original title, tagline] lists (second output).
Build the list of key-value tuples and then build the dictionary by passing that list to the dict() function.
End of explanation
tuple_list = []
for genere in pop_rank_dict:
language_set = set()
for film in pop_rank_dict[genere]:
language_set.add(info_dict[film[1]][2])
tuple_list.append((genere, language_set))
language_set_dict = dict(tuple_list)
language_set_dict
Explanation: c) From the dictionary of point 5a, extract the dictionary of the sets of original languages involved:
- key: film genre
- value: set of the original languages involved (an object of type set)
Build the list of key-value tuples and then build the dictionary by passing that list to the dict() function.
End of explanation |
9,490 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
5. Functions
So far we have used functions, but only the ones predefined in Python, such as float(), len() and similar. In this chapter we will learn to write our own functions, for two important reasons
Step1: Here we defined (def) a new function named uvecaj that takes one argument ((broj)) and returns (return) that argument increased by one (broj +1). From the body of the function it is clear that the argument must be a numeric value. Since Python is a dynamically typed language, i.e. the data type is determined while the code runs, it is not possible, or rather not necessary, to restrict the data type passed to the function as an argument.
Let's see how we would write a function that counts the number of vowels in a Unicode string
Step2: In most functions we compute the solution to some problem (in this case the number of vowels in a string) and at the end of the computation we return the final value using the reserved word return. The function in this example takes a string (niz_znakova) as its argument and returns an integer value (rezultat) corresponding to the number of vowels in that string.
We will use the two functions we have written in the following way
Step3: In the first example we increased the number 23 by one, and in the second we computed the number of vowels in the string Ovo je niz znakova..
Next we will write two functions, one for computing a frequency distribution and one for sorting it. We met both procedures in the chapter on dictionaries, but now we will write them as functions.
The first is the function for computing the frequency distribution
Step4: The function frek_distr takes as its argument some iterable object iterabilni. In our case this will most often be a string, and a bit later also a list of words extracted from strings. It then creates an empty dictionary (rjecnik={}) that will hold the items and their counts. After that it iterates over the iterable object (string or list) with a for loop (for element in iterabilni) and counts in the newly created dictionary how many times each value occurred, i.e. it computes the frequency distribution (rjecnik[element]=rjecnik.get(element,0)+1). Finally it returns that new dictionary of (item, count) pairs (return rjecnik).
The second function sorts the frequency distribution
Step5: The function sortiraj_distr takes a dictionary (rjecnik) as its argument and returns the list of pairs sorted by the second value of each pair (return sorted(rjecnik.items(),key=lambda x
Step6: The advantage of this kind of solution is, as we said at the beginning of this chapter, that the readability of the final code is much higher and that we did not have to define the solution to those two problems again, just as we will not have to in the future.
Besides computing the frequency distribution of characters in a string, we can also compute the frequency distribution of tokens. First we will write a function for tokenizing a string
Step7: The function opojavnici takes a string niz as its argument. It then imports the regular-expression module re. Finally, it returns all Unicode alphanumeric sequences.
These three functions are the basic tools for our processing.
In the following way we will tokenize a string, compute the frequency distribution of the tokens, sort them and print the sorted list
Step8: Next we will run the functions on the file datoteka.txt and print the sorted frequency distribution of characters and the frequency distribution of tokens. | Python Code:
def uvecaj(broj):
return broj+1
Explanation: 5. Functions
So far we have used functions, but only the ones predefined in Python, such as float(), len() and similar. In this chapter we will learn to write our own functions, for two important reasons:
1. organizing the code that solves a problem into functions makes the code much more readable, and
2. it greatly increases code reusability, i.e. it means we do not always have to rewrite the solution to a problem from scratch.
How functions are written is easiest to see with an example.
End of explanation
def broj_samoglasnika(niz_znakova):
rezultat=0
for znak in niz_znakova:
if znak.lower() in 'aeiou':
rezultat+=1
return rezultat
Explanation: Here we defined (def) a new function named uvecaj that takes one argument ((broj)) and returns (return) that argument increased by one (broj +1). From the body of the function it is clear that the argument must be a numeric value. Since Python is a dynamically typed language, i.e. the data type is determined while the code runs, it is not possible, or rather not necessary, to restrict the data type passed to the function as an argument.
Let's see how we would write a function that counts the number of vowels in a Unicode string:
End of explanation
print uvecaj(23)
print broj_samoglasnika('Ovo je niz znakova.')
Explanation: In most functions we compute the solution to some problem (in this case the number of vowels in a string) and at the end of the computation we return the final value using the reserved word return. The function in this example takes a string (niz_znakova) as its argument and returns an integer value (rezultat) corresponding to the number of vowels in that string.
We will use the two functions we have written in the following way:
End of explanation
def frek_distr(iterabilni):
rjecnik={}
for element in iterabilni:
rjecnik[element]=rjecnik.get(element,0)+1
return rjecnik
Explanation: In the first example we increased the number 23 by one, and in the second we computed the number of vowels in the string Ovo je niz znakova..
Next we will write two functions, one for computing a frequency distribution and one for sorting it. We met both procedures in the chapter on dictionaries, but now we will write them as functions.
The first is the function for computing the frequency distribution:
End of explanation
def sortiraj_distr(rjecnik):
return sorted(rjecnik.items(),key=lambda x:-x[1])
Explanation: The function frek_distr takes as its argument some iterable object iterabilni. In our case this will most often be a string, and a bit later also a list of words extracted from strings. It then creates an empty dictionary (rjecnik={}) that will hold the items and their counts. After that it iterates over the iterable object (string or list) with a for loop (for element in iterabilni) and counts in the newly created dictionary how many times each value occurred, i.e. it computes the frequency distribution (rjecnik[element]=rjecnik.get(element,0)+1). Finally it returns that new dictionary of (item, count) pairs (return rjecnik). A small collections.Counter sketch follows below.
The second function sorts the frequency distribution:
End of explanation
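As an aside (not part of this chapter's solution), the standard-library collections.Counter computes the same frequency distribution in a single call:
from collections import Counter

# Equivalent to frek_distr('otorinolaringologija'); a Counter is a dict subclass
print(Counter('otorinolaringologija'))
print(dict(Counter('otorinolaringologija')) == frek_distr('otorinolaringologija'))  # True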
niz_znakova='otorinolaringologija'
fd=frek_distr(niz_znakova)
print fd
print sortiraj_distr(fd)
Explanation: The function sortiraj_distr takes a dictionary (rjecnik) as its argument and returns the list of pairs sorted by the second value of each pair (return sorted(rjecnik.items(),key=lambda x:-x[1])). We will need this function whenever we want to inspect the most frequent, or the rarest, events in a frequency distribution.
For example, we will apply both functions in the following way:
End of explanation
def opojavnici(niz):
import re
return re.findall(r'\w+',niz,re.UNICODE)
Explanation: The advantage of this kind of solution is, as we said at the beginning of this chapter, that the readability of the final code is much higher and that we did not have to define the solution to those two problems again, just as we will not have to in the future.
Besides computing the frequency distribution of characters in a string, we can also compute the frequency distribution of tokens. First we will write a function for tokenizing a string:
End of explanation
pojavnice=opojavnici('biti ili ne biti')
fd=frek_distr(pojavnice)
sd=sortiraj_distr(fd)
print sd
Explanation: The function opojavnici takes a string niz as its argument. It then imports the regular-expression module re. Finally, it returns all Unicode alphanumeric sequences (the tokens).
These three functions are the basic tools for our processing.
In the following way we will tokenize a string, compute the frequency distribution of the tokens, sort them and print the sorted list:
End of explanation
dat=open('datoteka.txt').read().decode('utf8').lower()
pojavnice=opojavnici(dat)
fd_znak=frek_distr(dat)
fd_pojavnica=frek_distr(pojavnice)
sd_znak=sortiraj_distr(fd_znak)
sd_pojavnica=sortiraj_distr(fd_pojavnica)
print sd_znak
print sd_pojavnica
Explanation: Next we will run the functions on the file datoteka.txt and print the sorted frequency distribution of characters and the sorted frequency distribution of tokens.
End of explanation |
9,491 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
In this exercise, you'll add dropout to the Spotify model from Exercise 4 and see how batch normalization can let you successfully train models on difficult datasets.
Run the next cell to get started!
Step1: First load the Spotify dataset.
Step2: 1) Add Dropout to Spotify Model
Here is the last model from Exercise 4. Add two dropout layers, one after the Dense layer with 128 units, and one after the Dense layer with 64 units. Set the dropout rate on both to 0.3.
Step3: Now run this next cell to train the model and see the effect of adding dropout.
Step4: 2) Evaluate Dropout
Recall from Exercise 4 that this model tended to overfit the data around epoch 5. Did adding dropout seem to help prevent overfitting this time?
Step5: Now, we'll switch topics to explore how batch normalization can fix problems in training.
Load the Concrete dataset. We won't do any standardization this time. This will make the effect of batch normalization much more apparent.
Step6: Run the following cell to train the network on the unstandardized Concrete data.
Step7: Did you end up with a blank graph? Trying to train this network on this dataset will usually fail. Even when it does converge (due to a lucky weight initialization), it tends to converge to a very large number.
3) Add Batch Normalization Layers
Batch normalization can help correct problems like this.
Add four BatchNormalization layers, one before each of the dense layers. (Remember to move the input_shape argument to the new first layer.)
Step8: Run the next cell to see if batch normalization will let us train the model.
Step9: 4) Evaluate Batch Normalization
Did adding batch normalization help? | Python Code:
# Setup plotting
import matplotlib.pyplot as plt
plt.style.use('seaborn-whitegrid')
# Set Matplotlib defaults
plt.rc('figure', autolayout=True)
plt.rc('axes', labelweight='bold', labelsize='large',
titleweight='bold', titlesize=18, titlepad=10)
plt.rc('animation', html='html5')
# Setup feedback system
from learntools.core import binder
binder.bind(globals())
from learntools.deep_learning_intro.ex5 import *
Explanation: Introduction
In this exercise, you'll add dropout to the Spotify model from Exercise 4 and see how batch normalization can let you successfully train models on difficult datasets.
Run the next cell to get started!
End of explanation
import pandas as pd
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.compose import make_column_transformer
from sklearn.model_selection import GroupShuffleSplit
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras import callbacks
spotify = pd.read_csv('../input/dl-course-data/spotify.csv')
X = spotify.copy().dropna()
y = X.pop('track_popularity')
artists = X['track_artist']
features_num = ['danceability', 'energy', 'key', 'loudness', 'mode',
'speechiness', 'acousticness', 'instrumentalness',
'liveness', 'valence', 'tempo', 'duration_ms']
features_cat = ['playlist_genre']
preprocessor = make_column_transformer(
(StandardScaler(), features_num),
(OneHotEncoder(), features_cat),
)
def group_split(X, y, group, train_size=0.75):
splitter = GroupShuffleSplit(train_size=train_size)
train, test = next(splitter.split(X, y, groups=group))
return (X.iloc[train], X.iloc[test], y.iloc[train], y.iloc[test])
X_train, X_valid, y_train, y_valid = group_split(X, y, artists)
X_train = preprocessor.fit_transform(X_train)
X_valid = preprocessor.transform(X_valid)
y_train = y_train / 100
y_valid = y_valid / 100
input_shape = [X_train.shape[1]]
print("Input shape: {}".format(input_shape))
Explanation: First load the Spotify dataset.
End of explanation
# YOUR CODE HERE: Add two 30% dropout layers, one after 128 and one after 64
model = keras.Sequential([
layers.Dense(128, activation='relu', input_shape=input_shape),
layers.Dense(64, activation='relu'),
layers.Dense(1)
])
# Check your answer
q_1.check()
#%%RM_IF(PROD)%%
# Wrong dropout layers
model = keras.Sequential([
layers.Dense(128, activation='relu', input_shape=input_shape),
layers.Dropout(0.3),
layers.Dense(64, activation='relu'),
layers.Dense(1)
])
q_1.assert_check_failed()
#%%RM_IF(PROD)%%
# Wrong dropout rate
model = keras.Sequential([
layers.Dense(128, activation='relu', input_shape=input_shape),
layers.Dropout(0.7),
layers.Dense(64, activation='relu'),
layers.Dropout(0.7),
layers.Dense(1)
])
q_1.assert_check_failed()
#%%RM_IF(PROD)%%
model = keras.Sequential([
layers.Dense(128, activation='relu', input_shape=input_shape),
layers.Dropout(0.3),
layers.Dense(64, activation='relu'),
layers.Dropout(0.3),
layers.Dense(1)
])
q_1.assert_check_passed()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
q_1.hint()
#_COMMENT_IF(PROD)_
q_1.solution()
Explanation: 1) Add Dropout to Spotify Model
Here is the last model from Exercise 4. Add two dropout layers, one after the Dense layer with 128 units, and one after the Dense layer with 64 units. Set the dropout rate on both to 0.3.
End of explanation
model.compile(
optimizer='adam',
loss='mae',
)
history = model.fit(
X_train, y_train,
validation_data=(X_valid, y_valid),
batch_size=512,
epochs=50,
verbose=0,
)
history_df = pd.DataFrame(history.history)
history_df.loc[:, ['loss', 'val_loss']].plot()
print("Minimum Validation Loss: {:0.4f}".format(history_df['val_loss'].min()))
Explanation: Now run this next cell to train the model and see the effect of adding dropout.
End of explanation
# View the solution (Run this cell to receive credit!)
q_2.check()
Explanation: 2) Evaluate Dropout
Recall from Exercise 4 that this model tended to overfit the data around epoch 5. Did adding dropout seem to help prevent overfitting this time?
End of explanation
import pandas as pd
concrete = pd.read_csv('../input/dl-course-data/concrete.csv')
df = concrete.copy()
df_train = df.sample(frac=0.7, random_state=0)
df_valid = df.drop(df_train.index)
X_train = df_train.drop('CompressiveStrength', axis=1)
X_valid = df_valid.drop('CompressiveStrength', axis=1)
y_train = df_train['CompressiveStrength']
y_valid = df_valid['CompressiveStrength']
input_shape = [X_train.shape[1]]
Explanation: Now, we'll switch topics to explore how batch normalization can fix problems in training.
Load the Concrete dataset. We won't do any standardization this time. This will make the effect of batch normalization much more apparent.
End of explanation
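An optional quick peek at the raw column ranges (using the X_train defined above) makes the scale problem concrete:
# The unscaled columns span very different ranges, which is what destabilizes SGD below
print(X_train.describe().loc[['min', 'max']])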
model = keras.Sequential([
layers.Dense(512, activation='relu', input_shape=input_shape),
layers.Dense(512, activation='relu'),
layers.Dense(512, activation='relu'),
layers.Dense(1),
])
model.compile(
optimizer='sgd', # SGD is more sensitive to differences of scale
loss='mae',
metrics=['mae'],
)
history = model.fit(
X_train, y_train,
validation_data=(X_valid, y_valid),
batch_size=64,
epochs=100,
verbose=0,
)
history_df = pd.DataFrame(history.history)
history_df.loc[0:, ['loss', 'val_loss']].plot()
print(("Minimum Validation Loss: {:0.4f}").format(history_df['val_loss'].min()))
Explanation: Run the following cell to train the network on the unstandardized Concrete data.
End of explanation
# YOUR CODE HERE: Add a BatchNormalization layer before each Dense layer
model = keras.Sequential([
layers.Dense(512, activation='relu', input_shape=input_shape),
layers.Dense(512, activation='relu'),
layers.Dense(512, activation='relu'),
layers.Dense(1),
])
# Check your answer
q_3.check()
#%%RM_IF(PROD)%%
# Wrong layers
model = keras.Sequential([
layers.Dense(512, activation='relu', input_shape=input_shape),
layers.BatchNormalization(),
layers.Dense(512, activation='relu'),
layers.BatchNormalization(),
layers.Dense(512, activation='relu'),
layers.BatchNormalization(),
layers.Dense(1),
])
q_3.assert_check_failed()
#%%RM_IF(PROD)%%
model = keras.Sequential([
layers.BatchNormalization(input_shape=input_shape),
layers.Dense(512, activation='relu'),
layers.BatchNormalization(),
layers.Dense(512, activation='relu'),
layers.BatchNormalization(),
layers.Dense(512, activation='relu'),
layers.BatchNormalization(),
layers.Dense(1),
])
q_3.assert_check_passed()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
q_3.hint()
#_COMMENT_IF(PROD)_
q_3.solution()
Explanation: Did you end up with a blank graph? Trying to train this network on this dataset will usually fail. Even when it does converge (due to a lucky weight initialization), it tends to converge to a very large number.
3) Add Batch Normalization Layers
Batch normalization can help correct problems like this.
Add four BatchNormalization layers, one before each of the dense layers. (Remember to move the input_shape argument to the new first layer.)
End of explanation
model.compile(
optimizer='sgd',
loss='mae',
metrics=['mae'],
)
EPOCHS = 100
history = model.fit(
X_train, y_train,
validation_data=(X_valid, y_valid),
batch_size=64,
epochs=EPOCHS,
verbose=0,
)
history_df = pd.DataFrame(history.history)
history_df.loc[0:, ['loss', 'val_loss']].plot()
print(("Minimum Validation Loss: {:0.4f}").format(history_df['val_loss'].min()))
Explanation: Run the next cell to see if batch normalization will let us train the model.
End of explanation
# View the solution (Run this cell to receive credit!)
q_4.check()
Explanation: 4) Evaluate Batch Normalization
Did adding batch normalization help?
End of explanation |
9,492 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Classifying The Habeerman Dataset
When complete, I will review your code, so please submit your code via pull-request to the Introduction to Machine Learning with Scikit-Learn repository!
Habeerman Kernel Example
Downloaded from the UCI Machine Learning Repository on August 24, 2016. The first thing is to fully describe your data in a README file. The dataset description is as follows
Step2: Data Extraction
One way that we can structure our data for easy management is to save files on disk. The Scikit-Learn datasets are already structured this way, and when loaded into a Bunch (a class imported from the datasets module of Scikit-Learn) we can expose a data API that is very familiar to how we've trained on our toy datasets in the past. A Bunch object exposes some important properties
Step4: Classification
Now that we have a dataset Bunch loaded and ready, we can begin the classification process. Let's attempt to build a classifier with kNN, SVM, and Random Forest classifiers. | Python Code:
%matplotlib inline
import os
import json
import time
import pickle
import requests
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
URL = "http://archive.ics.uci.edu/ml/machine-learning-databases/haberman/haberman.data"
def fetch_data(fname='habeerman.data'):
"""Helper method to retrieve the ML Repository dataset."""
response = requests.get(URL)
outpath = os.path.abspath(fname)
with open(outpath, 'w') as f:
f.write(response.content)
return outpath
# Fetch the data if required
DATA = fetch_data()
print(DATA)
FEATURES = [
"Age",
"year",
"nodes",
"label"
]
LABEL_MAP = {
1: "Survived",
2: "Died",
}
# Read the data into a DataFrame
df = pd.read_csv(DATA, header=None, names=FEATURES)
print(df.head())
# Convert class labels into text
for k,v in LABEL_MAP.items():
df.loc[df.label == k, 'label'] = v
print(df.label.unique())
#check label values
print(df.head())
# Describe the dataset
print(df.describe())
# Determine the shape of the data
print("{} instances with {} features\n".format(*df.shape))
# I believe the shape includes the labels.
# Determine the frequency of each class
print(df.groupby('label')['label'].count())
# Create a scatter matrix of the dataframe features
from pandas.plotting import scatter_matrix
scatter_matrix(df, alpha=0.2, figsize=(12, 12), diagonal='kde')
plt.show()
from pandas.plotting import parallel_coordinates
plt.figure(figsize=(12,12))
parallel_coordinates(df, 'label')
plt.show()
from pandas.plotting import radviz
plt.figure(figsize=(12,12))
radviz(df, 'label')
plt.show()
Explanation: Classifying The Habeerman Dataset
When complete, I will review your code, so please submit your code via pull-request to the Introduction to Machine Learning with Scikit-Learn repository!
Habeerman Kernel Example
Downloaded from the UCI Machine Learning Repository on August 24, 2016. The first thing is to fully describe your data in a README file. The dataset description is as follows:
Data Set Information:
The dataset contains cases from a study that was conducted between 1958 and 1970 at the University of Chicago's Billings Hospital on the survival of patients who had undergone surgery for breast cancer.
The data set can be used for the tasks of classification and cluster analysis.
Attribute Information:
Below are the attributes:
Age of patient at time of operation (numerical)
Patient's year of operation (year - 1900, numerical)
Number of positive axillary nodes detected (numerical)
Survival status (class attribute)
1 = the patient survived 5 years or longer
2 = the patient died within 5 years
Data Exploration
In this section we will begin to explore the dataset to determine relevant information.
End of explanation
from sklearn.datasets.base import Bunch
DATA_DIR = os.path.abspath(os.path.join(".", "..", "data",'habeerman'))
#C:\Users\pbw50\machine-learning\notebook\habeerman.data
# Show the contents of the data directory
for name in os.listdir(DATA_DIR):
if name.startswith("."): continue
print "- {}".format(name)
def load_data(root=DATA_DIR):
# Construct the `Bunch` for the habeerman dataset
filenames = {
'meta': os.path.join(root, 'meta.json'),
'rdme': os.path.join(root, 'README.md'),
'data': os.path.join(root, 'habeerman.data'),
}
#Load the meta data from the meta json
with open(filenames['meta'], 'r') as f:
meta = json.load(f)
target_names = meta['target_names']
feature_names = meta['feature_names']
# Load the description from the README.
with open(filenames['rdme'], 'r') as f:
DESCR = f.read()
# Load the dataset from the data file.
dataset = pd.read_csv(filenames['data'], header=None)
#tranform to numpy
data1 = dataset.ix[:, 0:2]
target1 = dataset.ix[:,3]
# Extract the target from the data
data = np.array(data1)
target = np.array(target1)
# Create the bunch object
return Bunch(
data=data,
target=target,
filenames=filenames,
target_names=target_names,
feature_names=feature_names,
DESCR=DESCR
)
# Save the dataset as a variable we can use.
dataset = load_data()
print dataset.data.shape
print dataset.target.shape
Explanation: Data Extraction
One way that we can structure our data for easy management is to save files on disk. The Scikit-Learn datasets are already structured this way, and when loaded into a Bunch (a class imported from the datasets module of Scikit-Learn) we can expose a data API that is very familiar to how we've trained on our toy datasets in the past. A Bunch object exposes some important properties:
data: array of shape n_samples * n_features
target: array of length n_samples
feature_names: names of the features
target_names: names of the targets
filenames: names of the files that were loaded
DESCR: contents of the readme
Note: This does not preclude database storage of the data, in fact - a database can be easily extended to load the same Bunch API. Simply store the README and features in a dataset description table and load it from there. The filenames property will be redundant, but you could store a SQL statement that shows the data load.
In order to manage our data set on disk, we'll structure our data as follows:
End of explanation
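For reference, a minimal meta.json that would satisfy the loader above might look like the following; the values shown here are only illustrative.
{
"target_names": ["Survived", "Died"],
"feature_names": ["Age", "year", "nodes", "label"]
}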
from sklearn import metrics
from sklearn import cross_validation
from sklearn.cross_validation import KFold
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
def fit_and_evaluate(dataset, model, label, **kwargs):
Because of the Scikit-Learn API, we can create a function to
do all of the fit and evaluate work on our behalf!
start = time.time() # Start the clock!
scores = {'precision':[], 'recall':[], 'accuracy':[], 'f1':[]}
for train, test in KFold(dataset.data.shape[0], n_folds=12, shuffle=True):
X_train, X_test = dataset.data[train], dataset.data[test]
y_train, y_test = dataset.target[train], dataset.target[test]
estimator = model(**kwargs)
estimator.fit(X_train, y_train)
expected = y_test
predicted = estimator.predict(X_test)
# Append our scores to the tracker
scores['precision'].append(metrics.precision_score(expected, predicted, average="binary"))
scores['recall'].append(metrics.recall_score(expected, predicted, average="binary"))
scores['accuracy'].append(metrics.accuracy_score(expected, predicted))
scores['f1'].append(metrics.f1_score(expected, predicted, average="binary"))
# Report
print("Build and Validation of {} took {:0.3f} seconds".format(label, time.time()-start))
print("Validation scores are as follows:\n")
print(pd.DataFrame(scores).mean())
# Write official estimator to disk
estimator = model(**kwargs)
estimator.fit(dataset.data, dataset.target)
outpath = label.lower().replace(" ", "-") + ".pickle"
with open(outpath, 'wb') as f:
pickle.dump(estimator, f)
print("\nFitted model written to:\n{}".format(os.path.abspath(outpath)))
# Perform SVC Classification
fit_and_evaluate(dataset, SVC, "habeerman SVM Classifier", )
# Perform kNN Classification
fit_and_evaluate(dataset, KNeighborsClassifier, "habeerman kNN Classifier", n_neighbors=12)
# Perform Random Forest Classification
fit_and_evaluate(dataset, RandomForestClassifier, "habeerman Random Forest Classifier")
Explanation: Classification
Now that we have a dataset Bunch loaded and ready, we can begin the classification process. Let's attempt to build a classifier with kNN, SVM, and Random Forest classifiers.
End of explanation |
9,493 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
HydroTrend
Link to this notebook
Step1: And load the HydroTrend plugin.
Step2: HydroTrend will now be activated in PyMT.
Exercise 1
Step3: Q1a
Step4: Q1b
Step5: Q1c | Python Code:
import matplotlib.pyplot as plt
import numpy as np
Explanation: HydroTrend
Link to this notebook: https://github.com/csdms/pymt/blob/master/docs/demos/hydrotrend.ipynb
Package installation command: $ conda install notebook pymt_hydrotrend
Command to download a local copy:
$ curl -O https://raw.githubusercontent.com/csdms/pymt/master/docs/demos/hydrotrend.ipynb
HydroTrend is a 2D hydrological water balance and transport model that simulates water discharge and sediment load at a river outlet. You can read more about the model, find references or download the source code at: https://csdms.colorado.edu/wiki/Model:HydroTrend.
River Sediment Supply Modeling
This notebook is meant to give you a better understanding of what the model is capable of. In this example we are using a theoretical river basin of ~1990 km<sup>2</sup>, with 1200m of relief and a river length of
~100 km. All parameters that are shown by default once the HydroTrend Model is loaded are based
on a present-day, temperate climate. Whereas these runs are not meant to be specific, we are
using parameters that are realistic for the Waiapaoa River in New Zealand. The Waiapaoa River
is located on North Island and receives high rain and has erodible soils, so the river sediment
loads are exceptionally high. It has been called the "dirtiest small river in the world".
To learn more about HydroTrend and its approach to sediment supply modeling, you can download
this presentation.
A more detailed description of applying HydroTrend to the Waipaoa basin, New Zealand has been published in WRR: hydrotrend_waipaoa_paper.
A more detailed description of applying HydroTrend to the Waipaoa basin, New Zealand has been published in WRR: hydrotrend_waipaoa_paper.
Exercise
To start, import numpy and matplotlib.
End of explanation
import pymt.models
hydrotrend = pymt.models.Hydrotrend()
Explanation: And load the HydroTrend plugin.
End of explanation
# Set up Hydrotrend model by indicating the number of years to run
config_file, config_folder = hydrotrend.setup(run_duration=100)
hydrotrend.initialize(config_file, config_folder)
hydrotrend.output_var_names
hydrotrend.start_time, hydrotrend.time, hydrotrend.end_time, hydrotrend.time_step, hydrotrend.time_units
n_days = int(hydrotrend.end_time)
q = np.empty(n_days)
qs = np.empty(n_days)
cs = np.empty(n_days)
qb = np.empty(n_days)
for i in range(n_days):
hydrotrend.update()
q[i] = hydrotrend.get_value("channel_exit_water__volume_flow_rate")
qs[i] = hydrotrend.get_value("channel_exit_water_sediment~suspended__mass_flow_rate")
cs[i] = hydrotrend.get_value("channel_exit_water_sediment~suspended__mass_concentration")
qb[i] = hydrotrend.get_value("channel_exit_water_sediment~bedload__mass_flow_rate")
plt.plot(qs)
Explanation: HydroTrend will now be activated in PyMT.
Exercise 1: Explore the base-case river simulation
For this case study, we will run a simulation for 100 years at daily time-step.
This means you run Hydrotrend for 36,500 days total.
End of explanation
(
(q.mean(), hydrotrend.get_var_units("channel_exit_water__volume_flow_rate")),
(cs.mean(), hydrotrend.get_var_units("channel_exit_water_sediment~suspended__mass_concentration")),
(qs.mean(), hydrotrend.get_var_units("channel_exit_water_sediment~suspended__mass_flow_rate")),
(qb.mean(), hydrotrend.get_var_units("channel_exit_water_sediment~bedload__mass_flow_rate"))
)
hydrotrend.get_var_units("channel_exit_water__volume_flow_rate")
Explanation: Q1a: Calculate mean water discharge Q, mean suspended load Qs, mean sediment concentration Cs, and mean bedload Qb.
Note all values are reported as daily averages. What are the units?
A1a:
End of explanation
flood_day = q.argmax()
flood_year = flood_day // 365
plt.plot(q[flood_year * 365: (flood_year + 1) * 365])
q.max()
Explanation: Q1b: Identify the highest flood event for this simulation. Is this the 50-year flood? Plot the year of Q-data which includes the flood.
A1b:
End of explanation
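One rough way to put the 50-year-flood question on a footing is to look at the annual maxima of q; the sketch below uses a simple Weibull plotting position and is only an approximation.
q_annual_max = q.reshape((-1, 365)).max(axis=1)  # peak daily discharge in each simulated year
ranks = (-q_annual_max).argsort().argsort() + 1  # rank 1 = largest annual peak
return_periods = (len(q_annual_max) + 1.0) / ranks  # empirical return period in years
print(return_periods.max())  # the largest simulated flood plots at roughly a 100-year return period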
qs_by_year = qs.reshape((-1, 365))
qs_annual = qs_by_year.sum(axis=1)
plt.plot(qs_annual)
qs_annual.mean()
Explanation: Q1c: Calculate the mean annual sediment load for this river system.
A1c:
End of explanation |
9,494 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a id="topcell"></a>
Tellurium Notebook Tutorial
The Tellurium notebook environment is a self-contained Jupyter-like environment based on the nteract project. Tellurium adds special cells for working with SBML and COMBINE archives by representing these standards in human-readable form.
Tellurium also features a variety of Python packages, such as the libroadrunner simulator, designed to provide a complete biochemical network modeling environment using Python.
Contents
Step1: <a id="ex2"></a>
Example 2
Step2: <a id="ex3"></a>
Example 3 | Python Code:
model simple()
S1 -> S2; k1*S1
k1 = 0.1
S1 = 10
end
simple.simulate(0, 50, 100)
simple.plot()
Explanation: <a id="topcell"></a>
Tellurium Notebook Tutorial
The Tellurium notebook environment is a self-contained Jupyter-like environment based on the nteract project. Tellurium adds special cells for working with SBML and COMBINE archives by representing these standards in human-readable form.
Tellurium also features a variety of Python packages, such as the libroadrunner simulator, designed to provide a complete biochemical network modeling environment using Python.
Contents:
Example 1: A Simple SBML Model
Example 2: Advanced SBML Features
Example 3: Creating a COMBINE Archive
<a id="ex1"></a>
Example 1: A Simple SBML Model
This example generates a very simple SBML model. Reactant S1 is converted to product S2 at a rate k1*S1. Running the following cell will generate an executable version of the model in the variable simple. You can then call the simulate method on this variable (specifying the start time, end time, and number of points), and plot the result.
Back to top
End of explanation
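Outside the Tellurium notebook's special Antimony cells, a plain Python session can build and run the same model; a minimal sketch, assuming the tellurium package is importable, is:
import tellurium as te
r = te.loada('''
model simple()
    S1 -> S2; k1*S1
    k1 = 0.1
    S1 = 10
end
''')
r.simulate(0, 50, 100)
r.plot()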
model advanced()
# Create two compartments
compartment compA=1, compB=0.5 # B is half the volume of A
species A in compA, B in compB
# Use the label `J0` for the reaction
J0: A -> B; k*A
# C is defined by an assignment rule
species C
C := sin(2*time/3.14) # a sine wave
k = 0.1
A = 10
# Event: half-way through the simulation,
# add a bolus of A
at time>=5: A = A+10
end
advanced.simulate(0, 10, 100)
advanced.plot()
Explanation: <a id="ex2"></a>
Example 2: Advanced SBML Features
In this example, we will demonstrate the use of SBML events, compartments, and assignment rules. Events occur at discrete instants in time, and can be used to model the addition of a bolus of ligand etc. to the system. Compartments allow modeling of discrete volumetric spaces within a cell or system. Assignment rules provide a way to explicitly specify a value, as a function of time (as we do here) or otherwise.
There are two compartments: one containing species A, and one containing species B.
One mass unit of A is converted to one mass unit of B, but because B's compartment is half the size, the concentration of B increases at twice the rate as A diminishes.
Half-way through the simulation, we add a bolus of A
Species C is neither created nor destroyed in a reaction - it is defined entirely by an assignment rule.
Back to top
End of explanation
model simple()
S1 -> S2; k1*S1
k1 = 0.1
S1 = 10
end
# Models
model1 = model "simple"
# Simulations
sim1 = simulate uniform(0, 50, 1000)
// Tasks
task1 = run sim1 on model1
// Outputs
plot "COMBINE Archive Plot" time vs S1, S2
Explanation: <a id="ex3"></a>
Example 3: Creating a COMBINE Archive
COMBINE archives are containers for standards. They enable models encoded in SBML and simulations encoded in SED-ML to be exchanged between different tools. Tellurium displays COMBINE archives in an inline, human-readable form.
To convert the SBML model of Example 1 into a COMBINE archive, we need to define four steps in the workflow, which correspond to distinct elements in SED–ML: (1) models, (2) simulations, (3) tasks, and (4) outputs.
You can export this cell as a COMBINE archive by clicking on the diskette icon in the upper-right. You should be able to import it using other tools which support COMBINE archives, such as the SED-ML Web Tools or iBioSim.
Back to top
End of explanation |
9,495 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Welcome to NumbaSOM
A fast Self-Organizing Map Python library implemented in Numba.
This is a fast and simple to use SOM library. It utilizes online training (one data point at the time) rather than batch training. The implemented topologies are a simple 2D lattice or a torus.
How to Install
To install this package with pip run
Step1: A Self-Organizing Map is often used to show the underlying structure in data. To show how to use the library, we will train it on 200 random 3-dimensional vectors (so we can render them as colors)
Step2: Initialize the library
We initalize a map with 50 rows and 100 columns. The default topology is a 2D lattice. We can also train it on a torus by setting is_torus=True
Step3: Train the SOM
We will adapt the lattice by iterating 10.000 times through our data points. If we set normalize=True, data will be normalized before training.
Step4: To access an individual cell type
Step5: To access multiple cells, slicing works
Step6: The shape of the lattice should be (50, 100, 3)
Step7: Visualizing the lattice
Since our lattice is made of 3-dimensional vectors, we can represent it as a lattice of colors.
Step8: Compute U-matrix
Since the most of the data will not be 3-dimensional, we can use the u_matrix (unified distance matrix by Alfred Ultsch) to visualise the map and the clusters emerging on it.
Step9: Each cell of the lattice is just a single value, thus the shape is
Step10: Plot U-matrix
The library contains a function plot_u_matrix that can help visualise it.
Step11: Project on the lattice
To project data on the lattice, use project_on_lattice function.
Let's project a couple of predefined color on the trained lattice and see in which cells they will end up
Step12: Find every cell's closest vector in the data
To find every cell's closes vector in the provided data, use lattice_closest_vectors function.
We can again use the colors example
Step13: We can ask now to which value in color_labels are out lattice cells closest to
Step14: We can find the closest vectors without supplying an additional list. Then we get the association between the lattice and the data vectors that we can display as colors.
Step15: We take the values of the closest_vec vector and reshape it into a numpy vector values.
Step16: We can now visualise the projection of our 8 hard-coded colors onto the lattice
Step17: Compute how each data vector 'activates' the lattice
We can use the function lattice_activations
Step18: Now we can show how the vector blue
Step19: If we wish to scale the higher values up, and scale down the lower values, we can use the argument exponent when computing the activations | Python Code:
from numbasom import *
Explanation: Welcome to NumbaSOM
A fast Self-Organizing Map Python library implemented in Numba.
This is a fast and simple to use SOM library. It utilizes online training (one data point at the time) rather than batch training. The implemented topologies are a simple 2D lattice or a torus.
How to Install
To install this package with pip run:
pip install numbasom
To install this package with conda run:
conda install -c mnikola numbasom
How to use
To import the library you can safely use:
End of explanation
import numpy as np
data = np.random.random([200,3])
Explanation: A Self-Organizing Map is often used to show the underlying structure in data. To show how to use the library, we will train it on 200 random 3-dimensional vectors (so we can render them as colors):
Create 200 random colors
End of explanation
som = SOM(som_size=(50,100), is_torus=False)
Explanation: Initialize the library
We initalize a map with 50 rows and 100 columns. The default topology is a 2D lattice. We can also train it on a torus by setting is_torus=True
End of explanation
lattice = som.train(data, num_iterations=15000)
Explanation: Train the SOM
We will adapt the lattice by iterating 15,000 times through our data points. If we set normalize=True, the data will be normalized before training.
End of explanation
lattice[5,3]
Explanation: To access an individual cell, type:
End of explanation
lattice[1::6,1]
Explanation: To access multiple cells, slicing works
End of explanation
lattice.shape
Explanation: The shape of the lattice should be (50, 100, 3)
End of explanation
import matplotlib.pyplot as plt
plt.imshow(lattice)
plt.show()
Explanation: Visualizing the lattice
Since our lattice is made of 3-dimensional vectors, we can represent it as a lattice of colors.
End of explanation
um = u_matrix(lattice)
Explanation: Compute U-matrix
Since most of the data will not be 3-dimensional, we can use the u_matrix (unified distance matrix by Alfred Ultsch) to visualise the map and the clusters emerging on it.
End of explanation
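Each U-matrix value is essentially the average distance between a cell's weight vector and the weight vectors of its neighbouring cells; the sketch below checks one interior cell by hand using only the four direct neighbours, so treat it as an intuition check rather than the library's exact formula.
i, j = 25, 50  # an arbitrary interior cell
neighbours = [lattice[i-1, j], lattice[i+1, j], lattice[i, j-1], lattice[i, j+1]]
u_approx = np.mean([np.linalg.norm(lattice[i, j] - n) for n in neighbours])
print(u_approx, um[i, j])  # the two values should be of a similar order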
um.shape
Explanation: Each cell of the lattice is just a single value, thus the shape is:
End of explanation
plot_u_matrix(um, fig_size=(6.2,6.2))
Explanation: Plot U-matrix
The library contains a function plot_u_matrix that can help visualise it.
End of explanation
colors = np.array([[1.,0.,0.],[0.,1.,0.],[0.,0.,1.],[1.,1.,0.],[0.,1.,1.],[1.,0.,1.],[0.,0.,0.],[1.,1.,1.]])
color_labels = ['red', 'green', 'blue', 'yellow', 'cyan', 'purple','black', 'white']
projection = project_on_lattice(colors, lattice, additional_list=color_labels)
for p in projection:
if projection[p]:
print (p, projection[p][0])
Explanation: Project on the lattice
To project data on the lattice, use project_on_lattice function.
Let's project a couple of predefined color on the trained lattice and see in which cells they will end up:
End of explanation
closest = lattice_closest_vectors(colors, lattice, additional_list=color_labels)
Explanation: Find every cell's closest vector in the data
To find every cell's closest vector in the provided data, use the lattice_closest_vectors function.
We can again use the colors example:
End of explanation
closest[(1,1)]
closest[(40,80)]
Explanation: We can now ask which value in color_labels our lattice cells are closest to:
End of explanation
closest_vec = lattice_closest_vectors(colors, lattice)
Explanation: We can find the closest vectors without supplying an additional list. Then we get the association between the lattice and the data vectors that we can display as colors.
End of explanation
values = np.array(list(closest_vec.values())).reshape(50,100,-1)
Explanation: We take the values of the closest_vec vector and reshape it into a numpy vector values.
End of explanation
plt.imshow(values)
plt.show()
Explanation: We can now visualise the projection of our 8 hard-coded colors onto the lattice:
End of explanation
activations = lattice_activations(colors, lattice)
Explanation: Compute how each data vector 'activates' the lattice
We can use the function lattice_activations:
End of explanation
plt.imshow(activations[2])
plt.show()
Explanation: Now we can show how the vector blue: [0.,0.,1.] activates the lattice:
End of explanation
activations = lattice_activations(colors, lattice, exponent=8)
plt.imshow(activations[2])
plt.show()
Explanation: If we wish to scale the higher values up, and scale down the lower values, we can use the argument exponent when computing the activations:
End of explanation |
9,496 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
2A.eco - Regular expressions
Step1: When filling out a form, you often see the format "MM/DD/YYYY", which specifies how a date is expected to be written. Regular expressions make it possible to define this kind of format and to search a text for every character string that matches it.
The list below contains dates of birth. We want to extract every date from this example, knowing that days and months contain one or two digits and years two or four.
Step3: The first digit of the day is 0, 1, 2, or 3; this translates to [0-3]. The second digit is between 0 and 9, i.e. [0-9]. The day format therefore translates to [0-3][0-9]. But the first digit of the day is optional, which is indicated with the symbol ?
Step4: The result is a list of tuples in which each element corresponds to the parts enclosed in parentheses, called groups. When using regular expressions, you must first ask how to define what you are looking for and then which functions to use to obtain the results of that search. The two paragraphs that follow answer these questions.
Syntax
The syntax of regular expressions is described on the official Python website. The page Regular Expression Syntax explains how to use them; both pages are in English. Like any grammar, the grammar of regular expressions may evolve with successive versions of the Python language.
Character sets
During a search, we are interested in characters and often in character classes
Step5: Multipliers
Multipliers make it possible to define regular expressions such as
Step6: <.*> matches <h1>, </h1> or even <h1>mot</h1>.
Consequently, the regular expression matches three different pieces. By default, it takes the longest one. To choose the shortest ones, the multipliers have to be written like this
Step8: Exercise 1
Find the dates present in the following sentence
Step10: Then in this one | Python Code:
from jyquickhelper import add_notebook_menu
add_notebook_menu()
Explanation: 2A.eco - Regular expressions: what are they for? (corrected version)
Searching for a word in a text is an easy task; that is what the find method attached to character strings is for. It is still enough when looking for a word in its singular and plural forms, but it then has to be called at least twice to search for both forms. For more complicated expressions, it is advisable to use regular expressions. This feature is found in many languages. It is a kind of grammar that makes it possible to search for expressions.
End of explanation
s = date 0 : 14/9/2000
date 1 : 20/04/1971 date 2 : 14/09/1913 date 3 : 2/3/1978
date 4 : 1/7/1986 date 5 : 7/3/47 date 6 : 15/10/1914
date 7 : 08/03/1941 date 8 : 8/1/1980 date 9 : 30/6/1976
Explanation: When filling out a form, you often see the format "MM/DD/YYYY", which specifies how a date is expected to be written. Regular expressions make it possible to define this kind of format and to search a text for every character string that matches it.
The list below contains dates of birth. We want to extract every date from this example, knowing that days and months contain one or two digits and years two or four.
End of explanation
import re
# first step: build the regular expression
expression = re.compile("([0-3]?[0-9]/[0-1]?[0-9]/([0-2][0-9])?[0-9][0-9])")
# second step: search
res = expression.findall(s)
print(res)
Explanation: The first digit of the day is 0, 1, 2, or 3; this translates to [0-3]. The second digit is between 0 and 9, i.e. [0-9]. The day format therefore translates to [0-3][0-9]. But the first digit of the day is optional, which is indicated with the symbol ?: [0-3]?[0-9]. Months follow the same principle: [0-1]?[0-9]. For years, it is the first two digits that are optional; the symbol ? applies to those two digits, which is indicated with parentheses: ([0-2][0-9])?[0-9][0-9]. The final format of a date therefore becomes:
The re module handles regular expressions. It treats the parts of the regular expression that are between parentheses differently from the parts that are not: this is a way of telling the re module which part of the expression we are interested in, marked by the parentheses. Since the part we are interested in - a date - covers the whole regular expression, the whole expression has to be placed between parentheses.
The first step consists of building the regular expression, the second of finding every place where a piece of the string s defined above matches the regular expression.
End of explanation
import re
s = "something\\support\\vba\\image/vbatd1_4.png"
print(re.compile("[\\\\/]image[\\\\/].*[.]png").search(s)) # positive result: a match is found
print(re.compile("[\\\\/]image[\\\\/].*[.]png").search(s)) # same result
Explanation: The result is a list of tuples in which each element corresponds to the parts enclosed in parentheses, which are called groups. When using regular expressions, you must first ask how to define what you are looking for and then which functions to use to obtain the results of that search. The two paragraphs that follow answer these questions.
Syntax
The syntax of regular expressions is described on the official Python website. The page Regular Expression Syntax explains how to use regular expressions; both pages are in English. Like any grammar, the grammar of regular expressions may evolve with successive versions of the Python language.
Character sets
During a search, we are interested in characters and often in character classes: we look for a digit, a letter, a character from a given set, or a character that does not belong to a given set. Some sets are predefined; others have to be defined with square brackets.
To define a character set, the set has to be written between square brackets: [0123456789] denotes a digit. Since this is a sequence of consecutive characters, it can be shortened to [0-9]. To include the symbols - and +, it is enough to write: [-0-9+]. Remember to put the symbol - at the beginning so that it does not denote a range.
The character ^ inserted at the beginning of the group means that the character searched for must not be one of those that follow. The following table describes the predefined sets and their equivalents in terms of character sets:
. denotes any non-special character whatsoever.
\d denotes any digit, equivalent to [0-9].
\D denotes any character other than a digit, equivalent to [^0-9].
\s denotes any whitespace or similar character, equivalent to [\; \t\n\r\f\v]. These characters are special; the most common ones are \t, which is a tab, \n, which is a line feed, and \r, which is a carriage return.
\S denotes any character other than a whitespace character, equivalent to [^ \t\n\r\f\v].
\w denotes any letter or digit, equivalent to [a-zA-Z0-9_].
\W denotes any character other than a letter or digit, equivalent to [^a-zA-Z0-9_].
^ denotes the beginning of a word unless it is placed inside square brackets.
$ denotes the end of a word unless it is placed inside square brackets.
As with character strings, since the character \ is a special character, it has to be doubled: [\\].
The character \ is already a special character for Python character strings, so it has to be quadrupled to insert it into a regular expression. The following expression filters all images whose extension is png and which are stored in an image directory.
End of explanation
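A raw string avoids the quadrupled backslashes discussed above; the pattern below is an equivalent, easier-to-read version of the same filter.
import re
s = "something\\support\\vba\\image/vbatd1_4.png"
print(re.compile(r"[\\/]image[\\/].*[.]png").search(s))  # \\ in a raw string reaches the regex engine as one escaped backslash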
"<h1>mot</h1>"
Explanation: Multipliers
Multipliers make it possible to define regular expressions such as: a word of six to eight letters, which would be written [\w]{6,8}. The following table gives the list of the main multipliers:
* presence of the preceding character set between 0 times and infinity
+ presence of the preceding character set between 1 time and infinity
? presence of the preceding character set between 0 and 1 times
{m,n} presence of the preceding character set between m and n times; if m=n, this expression can be shortened to {n}.
(?!(...)) absence of the group denoted by the ellipsis.
The regular expression algorithm always tries to match the longest possible piece to the regular expression.
End of explanation
import re
s = "<h1>mot</h1>"
print(re.compile("(<.*>)").match(s).groups()) # ('<h1>mot</h1>',)
print(re.compile("(<.*?>)").match(s).groups()) # ('<h1>',)
print(re.compile("(<.+?>)").match(s).groups()) # ('<h1>',)
Explanation: <.*> matches <h1>, </h1> or even <h1>mot</h1>.
Consequently, the regular expression matches three different pieces. By default, it takes the longest one. To choose the shortest ones, the multipliers have to be written like this: *?, +?
End of explanation
texte = Je suis né le 28/12/1903 et je suis mort le 08/02/1957. Ma seconde femme est morte le 10/11/1963.
J'ai écrit un livre intitulé 'Comprendre les fractions : les exemples en page 12/46/83'
import re
expression = re.compile("[0-9]{2}/[0-9]{2}/[0-9]{4}")
cherche = expression.findall(texte)
print(cherche)
Explanation: Exercise 1
Find the dates present in the following sentence
End of explanation
texte = Je suis né le 28/12/1903 et je suis mort le 08/02/1957. Je me suis marié le 8/5/45.
J'ai écrit un livre intitulé 'Comprendre les fractions : les exemples en page 12/46/83'
expression = re.compile("[0-3]?[0-9]/[0-1]?[0-9]/[0-1]?[0-9]?[0-9]{2}")
cherche = expression.findall(texte)
print(cherche)
Explanation: Then in this one:
End of explanation |
9,497 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Logistic Regression
Predicting a Category or Class
ACKNOWLEDGEMENT
Some of the code in this notebook is based on John D. Wittenauer's notebooks that cover the exercises in Andrew Ng's course on Machine Learning on Coursera. I've also modified some code from Sebastian Raschka's book Python Machine Learning, and used some code from Sonya Sawtelle's blog.
What is Logistic Regression?
Despite the fancy technical name, logistic regression is not a scary thing. It just means predicting a class or category rather than a number.
Or more accurately, it's about predicting a categorical value rather than a a numerical value.
Why do Logistic Regression?
Because many business problems are really classification problems in disguise.
Will person A respond to my marketing email?
Will customer B renew their subscription for our services?
Will this jet engine work well once it's installed on an aircraft?
Will student C be accepted by Princeton?
Is this bank note a counterfeit?
Exercise 1
Can you think of a (business) problem you're familiar with that's really a classification problem in disguise?
The Problem We'll Tackle
How to distinguish a real from a fake banknote?
Modern banknotes have a large number of subtle distinguishing characteristics like watermarks, background lettering, and holographic images.
It would be hard (and time consuming and even counterproductive) to write these down as a concrete set of rules. Especially as notes can age, tear, and get mangled in a number of ways these rules can start to get very complex.
Can a machine learn to do it using image data?
Let's see...
Load the Data
About the data. It comes from the University of California at Irvine's repository of data sets. According to the authors of the data,
"Data were extracted from images that were taken from genuine and forged banknote-like specimens. For digitization, an industrial camera usually used for print inspection was used. The final images have 400x 400 pixels. Due to the object lens and distance to the investigated object gray-scale pictures with a resolution of about 660 dpi were gained. [A] Wavelet Transform tool were used to extract features from images."
The features of the data are values from this wavelet transform process that the images were put through.
Step1: Step 1
Step2:
Step3:
Step4:
Step5:
Step6: Exercise 2
Use Orange to replicate the scatter plots for all features in the dataset. The data is available from the course's GitHub repository.
Let's use features V1 and V2 alone to begin with. In addition to keeping things simpler, it will let us visualize what's going on.
<img src="../Images/classification-lines.jpg" alt="Classification Boundaries" style="width
Step7: Step 2b
Step8: Step 3
Step9: Notice that the sigmoid is never less than zero or greater than 1.
Although it looks like the sigmoid rapidly gets to 1 (on the positive side) and 0 on the negative side and stays there, mathematically speaking, the sigmoid never gets to 1 or 0 -- it gets closer and closer but never gets there.
Because the sigmoid can never be less than zero or greater than 1, the sigmoid can take any number and convert it into another number between 0 and 1.
But that still doesn't get us to just 1 or just 0.
If you look at the sigmoid above, you'll see that when $\hat{y}$ is around 5 or higher, $sigmoid(\hat{y})$ is very close to 1.
Similarly, when $\hat{y}$ is around -5 or lower, $sigmoid(\hat{y})$ is very close to 0.
But we develop this much simpler rule
Step10: Keep your eye on the orange curve. This is for the case when the actual value of a row in the dataset is 0 (the banknote is a fake). If the banknote is a fake and say $\hat{y}$ is 7, then $sigmoid(\hat{y})$ is going to be close to 1, say 0.9. This means that the penalty is going to be very high because the orange curve increases rapidly in value as it approaches 1.
Similarly, when the actual value of the dataset is 1, the blue penalty curve comes into play. If $\hat{y}$ is 7, then once again $sigmoid(\hat{y})$ is going to be close to 1, say 0.9. But in this case the penalty is very low because the blue curve decreases rapidly in value as it approaches 1.
<img src="../Images/inputs-to-penalty.png" alt="Going from Inputs to the Penalty" width="500px"/>
<img src="../Images/logistic-regression-dataset-view.png" alt="Going from Inputs to the Penalty" width="500px"/>
Step 5
Step11: Step 6 | Python Code:
# Import our usual libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import os
# OS-independent way to navigate the file system
# Data directory is one directory up in relation to directory of this notebook
data_dir_root = os.path.normpath(os.getcwd() + os.sep + os.pardir + os.sep + "Data")
# Where the file is
file_url = data_dir_root + os.sep + "forged-bank-notes.csv"
#file_url
# header=0 drops the header row in the csv file
data = pd.read_csv(file_url, header=0, names=['V1', 'V2', 'V3', 'V4', 'Genuine'])
# Number of rows and columns in the data
data.shape
# First few rows of the datastet
data.head()
Explanation: Logistic Regression
Predicting a Category or Class
ACKNOWLEDGEMENT
Some of the code in this notebook is based on John D. Wittenauer's notebooks that cover the exercises in Andrew Ng's course on Machine Learning on Coursera. I've also modified some code from Sebastian Raschka's book Python Machine Learning, and used some code from Sonya Sawtelle's blog.
What is Logistic Regression?
Despite the fancy technical name, logistic regression is not a scary thing. It just means predicting a class or category rather than a number.
Or more accurately, it's about predicting a categorical value rather than a a numerical value.
Why do Logistic Regression?
Because many business problems are really classification problems in disguise.
Will person A respond to my marketing email?
Will customer B renew their subscription for our services?
Will this jet engine work well once it's installed on an aircraft?
Will student C be accepted by Princeton?
Is this bank note a counterfeit?
Exercise 1
Can you think of a (business) problem you're familiar with that's really a classification problem in disguise?
The Problem We'll Tackle
How to distinguish a real from a fake banknote?
Modern banknotes have a large number of subtle distinguishing characteristics like watermarks, background lettering, and holographic images.
It would be hard (and time consuming and even counterproductive) to write these down as a concrete set of rules. Especially as notes can age, tear, and get mangled in a number of ways these rules can start to get very complex.
Can a machine learn to do it using image data?
Let's see...
Load the Data
About the data. It comes from the University of California at Irvine's repository of data sets. According to the authors of the data,
"Data were extracted from images that were taken from genuine and forged banknote-like specimens. For digitization, an industrial camera usually used for print inspection was used. The final images have 400x 400 pixels. Due to the object lens and distance to the investigated object gray-scale pictures with a resolution of about 660 dpi were gained. [A] Wavelet Transform tool were used to extract features from images."
The features of the data are values from this wavelet transform process that the images were put through.
End of explanation
# Scatter of V1 versus V2
positive = data[data['Genuine'].isin([1])]
negative = data[data['Genuine'].isin([0])]
fig, ax = plt.subplots(figsize=(12,8))
ax.scatter(positive['V1'], positive['V2'], s=30, c='b', marker='.', label='Genuine')
ax.scatter(negative['V1'], negative['V2'], s=30, c='r', marker='.', label='Forged')
ax.legend(loc='lower right')
ax.set_xlabel('V1')
ax.set_ylabel('V2')
plt.title('Bank Note Validation Based on Feature Values 1 and 2');
Explanation: Step 1: Visualize the Data
End of explanation
# Scatter of V3 versus V4
positive = data[data['Genuine'].isin([1])]
negative = data[data['Genuine'].isin([0])]
fig, ax = plt.subplots(figsize=(12,8))
ax.scatter(positive['V3'], positive['V4'], s=30, c='b', marker='+', label='Genuine')
ax.scatter(negative['V3'], negative['V4'], s=30, c='r', marker='s', label='Forged')
ax.legend(loc='lower right')
ax.set_xlabel('V3')
ax.set_ylabel('V4')
plt.title('Bank Note Validation Based on Feature Values V3 and V4');
Explanation:
End of explanation
# Scatter of V1 versus V4
positive = data[data['Genuine'].isin([1])]
negative = data[data['Genuine'].isin([0])]
fig, ax = plt.subplots(figsize=(12,8))
ax.scatter(positive['V1'], positive['V4'], s=30, c='b', marker='+', label='Genuine')
ax.scatter(negative['V1'], negative['V4'], s=30, c='r', marker='s', label='Forged')
ax.legend(loc='lower right')
ax.set_xlabel('V1')
ax.set_ylabel('V4')
plt.title('Bank Note Validation Based on Feature Values 1 and 4');
Explanation:
End of explanation
# Scatter of V2 versus V3
positive = data[data['Genuine'].isin([1])]
negative = data[data['Genuine'].isin([0])]
fig, ax = plt.subplots(figsize=(12,8))
ax.scatter(positive['V2'], positive['V3'], s=30, c='b', marker='+', label='Genuine')
ax.scatter(negative['V2'], negative['V3'], s=30, c='r', marker='s', label='Forged')
ax.legend(loc='lower right')
ax.set_xlabel('V2')
ax.set_ylabel('V3')
plt.title('Bank Note Validation Based on Feature Values V2 and V3');
Explanation:
End of explanation
# Scatter of Skewness versus Entropy
positive = data[data['Genuine'].isin([1])]
negative = data[data['Genuine'].isin([0])]
fig, ax = plt.subplots(figsize=(12,8))
ax.scatter(positive['V2'], positive['V4'], s=30, c='b', marker='+', label='Genuine')
ax.scatter(negative['V2'], negative['V4'], s=30, c='r', marker='s', label='Forged')
ax.legend(loc='lower right')
ax.set_xlabel('V2')
ax.set_ylabel('V4')
plt.title('Bank Note Validation Based on Feature Values V2 and V4');
Explanation:
End of explanation
# First few rows of the input
inputs = data[['V1', 'V2']]
inputs.head()
Explanation: Exercise 2
Use Orange to replicate the scatter plots for all features in the dataset. The data is available from the course's GitHub repository.
Let's use features V1 and V2 alone to begin with. In addition to keeping things simpler, it will let us visualize what's going on.
<img src="../Images/classification-lines.jpg" alt="Classification Boundaries" style="width:600px"/>
Right away we see that this doesn't even look like a regular regression problem -- there are two classes -- Genuine and Forged. These are not continuous values -- it's one or the other.
Moreover, the classes don't separate cleanly. This is what we usually face in the real world. No matter how we try to separate these classes, we're probably never going to get it 100% right.
Step 2: Define the Task You Want to Accomplish
Task = Classify a banknote as genuine or counterfeit given the values of its features V1 and V2.
Step 2a: Identify the Inputs
The inputs are the features V1 and V2 generated by the instrument reading the images (the wavelet transform tool).
End of explanation
# First few rows of the output/target
output = data[['Genuine']]
output.head()
Explanation: Step 2b: Identify the Output/Target
The output or target we'd like to predict is the feature called "Genuine". It takes the value 1 when the banknote is real and 0 when the banknote is counterfeit.
End of explanation
# Define the sigmoid function or transformation
# NOTE: ALSO PUT INTO THE SharedFunctions notebook
def sigmoid(z):
return 1 / (1 + np.exp(-z))
# Plot the sigmoid function
# Generate the values to be plotted
x_vals = np.linspace(-10,10,1000)
y_vals = [sigmoid(x) for x in x_vals]
# Plot the values
fig, ax = plt.subplots(figsize=(12,6))
ax.plot(x_vals, y_vals, 'blue')
ax.grid()
# Draw some constant lines to aid visualization
plt.axvline(x=0, color='black')
plt.axhline(y=0.5, color='black')
plt.yticks(np.arange(0,1.1,0.1))
plt.xticks(np.arange(-10,11,1))
plt.xlabel(r'$\hat{y}$', fontsize=15)
plt.ylabel(r'$sigmoid(\hat{y})$', fontsize=15)
plt.title('The Sigmoid Transformation', fontsize=15)
ax.plot;
Explanation: Step 3: Define the Model
Step 3a: Define the Features
We have 2 features: V1 and V2.
Step 3b: Transform the Inputs Into an Output
Although the task we now face is different from the regression task, we're going to start just as we did before.
$$\hat{y} = w_{0} * x_{0}\ +\ w_{1} * x_{1} +\ w_{2} * x_{2}$$
It looks like the form of a linear regression and that's exactly what it is.
But now a twist...
When we transform the inputs V1 and V2 using the expression
$$\hat{y} = w_{0} * x_{0}\ +\ w_{1} * x_{1} +\ w_{2} * x_{2}$$
we're going to end up with a numeric value. It might be 4.2 or -12.56 or whatever depending on the values you plug in for $w_{0}$, $w_{1}$, and $w_{2}$.
But what we need is an output of 0 or 1.
Question: How to go from a numeric (continuous) value like -12.56 to a categorical value like 0 or 1?
The Sigmoid
The way to transform a numerical value into a categorical value is through something called a sigmoid. Here's what it looks like.
End of explanation
# Visualize the penalty function when y = 1 and y = 0
x_vals = np.linspace(0,1,100)
y_1_vals = -np.log(x_vals)
y_0_vals = -np.log(1 - x_vals)
fig, ax = plt.subplots(figsize=(12,6))
ax.grid()
ax.plot(x_vals, y_1_vals, color='blue', linestyle='solid', label='actual value of y = 1')
ax.plot(x_vals, y_0_vals, color='orange', linestyle='solid', label='actual value of y = 0')
plt.legend(loc='upper center')
plt.xlabel(r'$sigmoid(\hat{y})$', fontsize=15)
plt.ylabel('Penalty', fontsize=15)
ax.plot;
Explanation: Notice that the sigmoid is never less than zero or greater than 1.
Although it looks like the sigmoid rapidly gets to 1 (on the positive side) and 0 on the negative side and stays there, mathematically speaking, the sigmoid never gets to 1 or 0 -- it gets closer and closer but never gets there.
Because the sigmoid can never be less than zero or greater than 1, the sigmoid can take any number and convert it into another number between 0 and 1.
But that still doesn't get us to just 1 or just 0.
If you look at the sigmoid above, you'll see that when $\hat{y}$ is around 5 or higher, $sigmoid(\hat{y})$ is very close to 1.
Similarly, when $\hat{y}$ is around -5 or lower, $sigmoid(\hat{y})$ is very close to 0.
But we develop this much simpler rule:
When the value of $sigmoid(\hat{y})$ is greater than 0.5, treat it as 1.
When the value of $sigmoid(\hat{y})$ is less than or equal to 0.5, treat it as a 0.
That's it. A system for going from any number (positive or negative) to either a 0 or a 1.
Let's recap what we've done so far to build a model for logistic regression.
A model is a scheme for transforming inputs to an output.
The model for logistic regression transforms the inputs of each row of the dataset to an output in three steps:
First, it uses the same scheme we used for regression: $\hat{y} = w_{0} * x_{0}\ +\ w_{1} * x_{1} +\ w_{2} * x_{2}$
Then it takes $\hat{y}$ and transforms it using the sigmoid into $sigmoid(\hat{y})$.
Finally,
if $sigmoid(\hat{y})$ is greater than 0.5, the output is equal to 1.
if $sigmoid(\hat{y})$ is less than or equal to 0.5, the output is equal to 0.
Step 3c: Clarify the Parameters of the Model
Just as they were before, the parameters of the model are still $w_{0}$, $w_{1}$, and $w_{2}$.
Step 4: Define the Penalty for Getting it Wrong
Here's where things change quite a bit from what we've seen in regression.
A penalty applies when the model (i.e., the scheme for transforming inputs to an output) gives the wrong answer.
The intuition is: the more wrong the model output is, the higher the penalty should be.
Let's see what this looks like.
End of explanation
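To make the three-step recipe above concrete, here is a small sketch of how one banknote's V1 and V2 values would be turned into a 0/1 prediction; the weight values are made up for illustration, not the fitted ones.
def predict_manual(v1, v2, w0=-1.0, w1=2.0, w2=-1.5):  # illustrative weights
    y_hat = w0 + w1 * v1 + w2 * v2   # step 1: linear combination
    p = sigmoid(y_hat)               # step 2: squash into (0, 1)
    return 1 if p > 0.5 else 0       # step 3: threshold at 0.5
print(predict_manual(3.2, -1.1))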
# Set up the training data
X_train = inputs.values
#X_train.shape
# Set up the target data
y = output.values
# Change the shape of y to suit scikit learn's requirements
y_train = np.array(list(y.squeeze()))
#y_train.shape
# Set up the logistic regression model from SciKit Learn
from sklearn.linear_model import LogisticRegression
# Solvers that seem to work well are 'liblinear' and 'newton-cg"
lr = LogisticRegression(C=100.0, random_state=0, solver='liblinear', verbose=2)
# Train the model and find the optimal parameter values
lr.fit(X_train, y_train)
# These are the optimal values of w0, w1 and w2
w0 = lr.intercept_[0]
w1 = lr.coef_.squeeze()[0]
w2 = lr.coef_.squeeze()[1]
print("w0: {}\nw1: {}\nw2: {}".format(w0, w1, w2))
Explanation: Keep your eye on the orange curve. This is for the case when the actual value of a row in the dataset is 0 (the banknote is a fake). If the banknote is a fake and say $\hat{y}$ is 7, then $sigmoid(\hat{y})$ is going to be close to 1, say 0.9. This means that the penalty is going to be very high because the orange curve increases rapidly in value as it approaches 1.
Similarly, when the actual value of the dataset is 1, the blue penalty curve comes into play. If $\hat{y}$ is 7, then once again $sigmoid(\hat{y})$ is going to be close to 1, say 0.9. But in this case the penalty is very low because the blue curve decreases rapidly in value as it approaches 1.
<img src="../Images/inputs-to-penalty.png" alt="Going from Inputs to the Penalty" width="500px"/>
<img src="../Images/logistic-regression-dataset-view.png" alt="Going from Inputs to the Penalty" width="500px"/>
Step 5: Find the Parameter Values that Minimize the Penalty
We've set up the logistic regression model and we'll use the familiar algorithm of gradient descent to learn the optimal values of the parameters.
End of explanation
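A quick numeric check of the two penalty curves, with an assumed model output of 0.9, shows how asymmetric the penalty is:
p = 0.9  # assumed value of sigmoid(y_hat) for one row
print(-np.log(p))      # penalty if the actual label is 1: about 0.105
print(-np.log(1 - p))  # penalty if the actual label is 0: about 2.303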
# Genuine or fake for the entire data set
y_pred = lr.predict(X_train)
print(y_pred)
# How do the predictions compare with the actual labels on the data set?
y_train == y_pred
# The probabilities of [Genuine = 0, Genuine = 1]
y_pred_probs = lr.predict_proba(X_train)
print(y_pred_probs)
# Where did the model misclassify banknotes?
errors = data[data['Genuine'] != y_pred]
#errors
# Following Sonya Sawtelle
# (https://sdsawtelle.github.io/blog/output/week3-andrew-ng-machine-learning-with-python.html)
# This is the classifier boundary line when z=0
x1 = np.linspace(-6,6,100) # Array of V1 values
x2 = (-w0/w2) - (w1/w2)*x1 # Corresponding V2 values along the line z=0
# Following Sonya Sawtelle
# (https://sdsawtelle.github.io/blog/output/week3-andrew-ng-machine-learning-with-python.html)
# Scatter of V1 versus V2
positive = data[data['Genuine'].isin([1])]
negative = data[data['Genuine'].isin([0])]
fig, ax = plt.subplots(figsize=(15,10))
#colors = ["r", "b"]
#la = ["Forged", "Genuine"]
#markers = [colors[gen] for gen in data['Genuine']] # this is a cool way to color the categories!
#labels = [la[gen] for gen in data['Genuine']]
#ax.scatter(data['V1'], data['V2'], color=markers, s=10, label=labels)
ax.scatter(positive['V1'], positive['V2'], s=30, c='b', marker='.', label='Genuine')
ax.scatter(negative['V1'], negative['V2'], s=30, c='r', marker='.', label='Forged')
ax.set_xlabel('V1')
ax.set_ylabel('V2')
# Now plot black circles around data points that were incorrectly predicted
ax.scatter(errors["V1"], errors["V2"], facecolors="none", edgecolors="m", s=80, label="Wrongly Classified")
# Finally plot the line which represents the decision boundary
ax.plot(x1, x2, color="green", linestyle="--", marker=None, label="boundary")
ax.legend(loc='upper right')
plt.title('Bank Note Validation Based on Feature Values 1 and 2');
Explanation: Step 6: Use the Model and Optimal Parameter Values to Make Predictions
End of explanation |
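As a rough summary of the boundary plot, the overall training accuracy and the number of misclassified notes can be checked like this:
accuracy = (y_pred == y_train).mean()
print("Training accuracy using V1 and V2 only: {:0.3f}".format(accuracy))
print("Misclassified banknotes: {}".format(len(errors)))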
9,498 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
SQL practice session
To start off, re-write the code to extract the 5GB file to a database. You can look at my code if you get stuck, but dont copy paste. Look at the code, but type it in yourself.
Make sure you delete the old database file, or rename it
I recommend you open a new Jupyter sheet to run your code
Click on File->New Notebook -> Python3
Next
Step1: It returned empty because there is no zoidberg in our list.
Add it below.
Name
Step2: Next
Step3: Next
Step4: There is a problem with the above | Python Code:
# import sqlite3 here
#open connection to database
# 1st challenge: Write a sql query to search for the name: zoidberg
# Note: It will return 0
Explanation: SQL practice session
To start off, re-write the code to extract the 5GB file to a database. You can look at my code if you get stuck, but dont copy paste. Look at the code, but type it in yourself.
Make sure you delete the old database file, or rename it
I recommend you open a new Jupyter sheet to run your code
Click on File->New Notebook -> Python3
Next: Some practice of sql queries
End of explanation
# Add the zoidberg data to the database below. Remember to commit()
# Search for zoidberg again. This time, you should get the results below:
Explanation: It returned empty because there is no zoidberg in our list.
Add it below.
Name: Zoidberg
Legal name: Planet Express
City: New New York
State: New New York
End of explanation
# Count number of practices in New York vs Texas
# 1st, get number in New York. You should get the values below
print("Number in NY: ", len(result))
# Now get Texas:
print("Number in TX: ", len(result))
Explanation: Next: Compare the number of practices in New york (NY) vs Texas (Tx)
End of explanation
# Find number of Johns. Remember, this uses the % symbol
Explanation: Next: Find the number of people with John in their name
End of explanation
print(len(result))
#This time, printing the 1st 6 results as well, to check.
print(result[:6])
# Now find all Johns in the state 'AL'
print(len(result))
print(result[:6])
# Finally, Johns in AL and city of Mobile.
print(len(result))
print(result[:6])
#Always close the database!
conn.close()
Explanation: There is a problem with the above: It will include names like Johnathan.
What if we only want people with the name John?
Hint: In the search query, use 'john %'. Notice the space before %
Try that below:
End of explanation |
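As a sketch of the hint above: the table and column names used here (practices, name) and the database file name are assumptions, so adapt them to the schema you created.
import sqlite3
conn = sqlite3.connect("practices.db")  # assumed database file name
cur = conn.cursor()
cur.execute("SELECT name FROM practices WHERE name LIKE ?", ('john %',))
print(len(cur.fetchall()))  # only rows whose name starts with 'john ' (note the space)
conn.close()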
9,499 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Double Multiple Stripe Analysis (2MSA) for Single Degree of Freedom (SDOF) Oscillators
<img src="../../../../figures/intact-damaged.jpg" width="500" align="middle">
Step1: Load capacity curves
In order to use this methodology, it is necessary to provide one (or a group) of capacity curves, defined according to the format described in the RMTK manual.
Please provide the location of the file containing the capacity curves using the parameter capacity_curves_file.
If the User wants to specify the cyclic hysteretic behaviour of the SDOF system, please input the path of the file where the hysteretic parameters are contained, using the variable sdof_hysteresis. The parameters should be defined according to the format described in the RMTK manual. If instead default parameters want to be assumed, please set the sdof_hysteresis variable to "Default"
Step2: Load ground motion records
For what concerns the ground motions to be used in the Double Multiple Stripe Analysis the following inputs are required
Step3: Load damage state thresholds
Please provide the path to your damage model file using the parameter damage_model_file in the cell below.
Currently the user can provide spectral displacement, capacity curve dependent and interstorey drift damage model type.
If the damage model type is interstorey drift the user has to input interstorey drift values of the MDOF system. The user can then provide the pushover curve in terms of Vb-dfloor to be able to convert interstorey drift limit states to roof displacements and spectral displacements of the SDOF system, otherwise a linear relationship is assumed.
Step4: Calculate fragility function
In order to obtain the fragility model, it is necessary to input the location of the damage model (damage_model), using the format described in the RMTK manual. It is as well necessary to input the damping value of the structure(s) under analysis and the value of the period (T) to be considered in the regression analysis. The method allows to consider or not degradation. Finally, if desired, it is possible to save the resulting fragility model in a .csv file.
Step5: Fit lognormal CDF fragility curves
The following parameters need to be defined in the cell below in order to fit lognormal CDF fragility curves to the damage probability matrix obtained above
Step6: Plot fragility functions
The following parameters need to be defined in the cell below in order to plot the lognormal CDF fragility curves obtained above
Step7: Save fragility functions
The derived parametric fragility functions can be saved to a file in either CSV format or in the NRML format that is used by all OpenQuake input models. The following parameters need to be defined in the cell below in order to save the lognormal CDF fragility curves obtained above | Python Code:
import numpy
from rmtk.vulnerability.common import utils
from rmtk.vulnerability.derivation_fragility.NLTHA_on_SDOF import MSA_utils
from rmtk.vulnerability.derivation_fragility.NLTHA_on_SDOF.read_pinching_parameters import read_parameters
from rmtk.vulnerability.derivation_fragility.NLTHA_on_SDOF import double_MSA_on_SDOF
%matplotlib inline
Explanation: Double Multiple Stripe Analysis (2MSA) for Single Degree of Freedom (SDOF) Oscillators
<img src="../../../../figures/intact-damaged.jpg" width="500" align="middle">
End of explanation
capacity_curves_file = '/Users/chiaracasotto/GitHub/rmtk_data/2MSA/capacity_curves.csv'
sdof_hysteresis = "/Users/chiaracasotto/GitHub/rmtk_data/pinching_parameters.csv"
capacity_curves = utils.read_capacity_curves(capacity_curves_file)
capacity_curves = utils.check_SDOF_curves(capacity_curves)
utils.plot_capacity_curves(capacity_curves)
hysteresis = read_parameters(sdof_hysteresis)
Explanation: Load capacity curves
In order to use this methodology, it is necessary to provide one (or a group) of capacity curves, defined according to the format described in the RMTK manual.
Please provide the location of the file containing the capacity curves using the parameter capacity_curves_file.
If the user wants to specify the cyclic hysteretic behaviour of the SDOF system, please input the path of the file where the hysteretic parameters are contained, using the variable sdof_hysteresis. The parameters should be defined according to the format described in the RMTK manual. If default parameters are to be assumed instead, please set the sdof_hysteresis variable to "Default".
End of explanation
gmrs_folder = '../../../../../rmtk_data/MSA_records'
number_models_in_DS = 1
no_bins = 2
no_rec_bin = 10
damping_ratio = 0.05
minT = 0.1
maxT = 2
filter_aftershocks = 'FALSE'
Mw_multiplier = 0.92
waveform_path = '../../../../../rmtk_data/2MSA/waveform.csv'
gmrs = utils.read_gmrs(gmrs_folder)
gmr_characteristics = MSA_utils.assign_Mw_Tg(waveform_path, gmrs, Mw_multiplier,
damping_ratio, filter_aftershocks)
#utils.plot_response_spectra(gmrs,minT,maxT)
Explanation: Load ground motion records
For the ground motions to be used in the Double Multiple Stripe Analysis, the following inputs are required:
1. gmrs_folder: path to the folder containing the ground motion records to be used in the analysis. Each accelerogram needs to be in a separate CSV file as described in the RMTK manual.
2. record_scaled_folder: in this folder there should be a CSV file for each Intensity Measure bin selected for the MSA, containing the names of the records that should be scaled to that IM bin, and the corresponding scaling factors. An example of this type of file is provided in the RMTK manual.
3. no_bins: number of Intensity Measure bins.
4. no_rec_bin: number of records per bin
5. number_models_in_DS: the number of models to populate each initial damage state with.
If a certain relationship is to be maintained between the ground motion characteristics of the mainshock and the aftershock, the variable filter_aftershocks should be set to TRUE and the following parameters should be defined:
1. Mw_multiplier: the ratio between the aftershock magnitude and the mainshock magnitude.
2. waveform_path: the path to the file containing, for each ground motion record (gmr), its magnitude and predominant period.
Otherwise the variable filter_aftershocks should be set to FALSE and the aforementioned parameters can be left empty.
If the user wants to plot acceleration, displacement and velocity response spectra, the function utils.plot_response_spectra(gmrs, minT, maxT) should be uncommented. The parameters minT and maxT are used to define the period bounds when plotting the spectra for the provided ground motion records.
End of explanation
damage_model_file = "/Users/chiaracasotto/GitHub/rmtk_data/2MSA/damage_model_ISD.csv"
damage_model = utils.read_damage_model(damage_model_file)
Explanation: Load damage state thresholds
Please provide the path to your damage model file using the parameter damage_model_file in the cell below.
Currently the user can provide a spectral displacement, capacity curve-dependent or interstorey drift damage model type.
If the damage model type is interstorey drift, the user has to input interstorey drift values of the MDOF system. The user can then provide the pushover curve in terms of Vb-dfloor so that interstorey drift limit states can be converted to roof displacements and spectral displacements of the SDOF system; otherwise a linear relationship is assumed.
End of explanation
degradation = False
record_scaled_folder = "../../../../../rmtk_data/2MSA/Scaling_factors"
msa = MSA_utils.define_2MSA_parameters(no_bins,no_rec_bin,record_scaled_folder,filter_aftershocks)
PDM, Sds, gmr_info = double_MSA_on_SDOF.calculate_fragility(
capacity_curves, hysteresis, msa, gmrs, gmr_characteristics,
damage_model, damping_ratio,degradation, number_models_in_DS)
Explanation: Calculate fragility function
In order to obtain the fragility model, it is necessary to input the location of the damage model (damage_model), using the format described in the RMTK manual. It is also necessary to input the damping value of the structure(s) under analysis and the value of the period (T) to be considered in the regression analysis. The method allows degradation to be considered or not. Finally, if desired, it is possible to save the resulting fragility model in a .csv file.
End of explanation
IMT = 'Sa'
T = 0.47
#T = numpy.arange(0.4,1.91,0.01)
regression_method = 'max likelihood'
fragility_model = MSA_utils.calculate_fragility_model_damaged( PDM,gmrs,gmr_info,IMT,msa,damage_model,
T,damping_ratio, regression_method)
Explanation: Fit lognormal CDF fragility curves
The following parameters need to be defined in the cell below in order to fit lognormal CDF fragility curves to the damage probability matrix obtained above:
1. IMT: This parameter specifies the intensity measure type to be used. Currently supported options are "PGA", "Sa","Sd" and "HI" (Housner Intensity).
2. period: This parameter defines the period for which a spectral intensity measure should be computed. If Housner Intensity is selected as the intensity measure, a range of periods should be defined instead (for example T=numpy.arange(0.3,3.61,0.01)).
3. regression_method: This parameter defines the regression method to be used for estimating the parameters of the fragility functions. The valid options are "least squares" and "max likelihood".
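For reference, the fitted curves take the standard lognormal CDF form; writing the median as $\theta_i$ and the logarithmic standard deviation as $\beta_i$ for damage state $i$ (notation introduced here, not variable names from the toolkit):
$$P(DS \geq ds_i \mid IM) = \Phi\left(\frac{\ln(IM/\theta_i)}{\beta_i}\right)$$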
End of explanation
minIML, maxIML = 0.01, 4
MSA_utils.plot_fragility_model(fragility_model,damage_model,minIML, maxIML)
Explanation: Plot fragility functions
The following parameters need to be defined in the cell below in order to plot the lognormal CDF fragility curves obtained above:
* minIML and maxIML: These parameters define the limits of the intensity measure level for plotting the functions
End of explanation
output_type = "csv"
output_path = "../../../../../rmtk_data/2MSA/"
minIML, maxIML = 0.01, 4
tax = 'RC'
MSA_utils.save_mean_fragility(fragility_model,damage_model,tax,output_type,output_path,minIML, maxIML)
Explanation: Save fragility functions
The derived parametric fragility functions can be saved to a file in either CSV format or in the NRML format that is used by all OpenQuake input models. The following parameters need to be defined in the cell below in order to save the lognormal CDF fragility curves obtained above:
1. taxonomy: This parameter specifies a taxonomy string for the fragility functions.
2. minIML and maxIML: These parameters define the bounds of applicability of the functions.
3. output_type: This parameter specifies the file format to be used for saving the functions. Currently, the formats supported are "csv" and "nrml".
End of explanation |