Unnamed: 0 (int64, 0-16k) | text_prompt (stringlengths 110-62.1k) | code_prompt (stringlengths 37-152k) |
---|---|---|
14,300 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
My first precipitation nowcast
In this example, we will use pysteps to compute and plot an extrapolation nowcast using the NSSL's Multi-Radar/Multi-Sensor System
(MRMS) rain rate product.
The MRMS precipitation product is available every 2 minutes, over the contiguous US.
Each precipitation composite has 3500 x 7000 grid points, separated 1 km from each other.
Set-up Colab environment
Important
Step1: Install pysteps
Now that all dependencies are installed, we can install pysteps.
Step2: Getting the example data
Now that we have the environment ready, let's install the example data and configure the default pysteps parameters by following this tutorial.
First, we will use the pysteps.datasets.download_pysteps_data() function to download the data.
Step3: Next, we need to create a default configuration file that points to the downloaded data.
By default, pysteps will place the configuration file in $HOME/.pysteps (unix and Mac OS X) or $USERPROFILE/pysteps (windows).
To quickly create a configuration file, we will use the pysteps.datasets.create_default_pystepsrc() helper function.
Step4: Since pysteps was already initialized in this notebook, we need to load the new configuration file and update the default configuration.
Step5: Let's see what the default parameters look like (these are stored in the
pystepsrc file). We will be using them to load the MRMS data set.
Step6: This should have printed the following lines
Step7: Let's have a look at the values returned by the load_dataset() function.
precipitation
Step8: Note that the shape of the precipitation is 4 times smaller than the raw MRMS data (3500 x 7000).
The load_dataset() function uses the default parameters from importers to read the data. By default, the MRMS importer upscales the data 4x. That is, from ~1km resolution to ~4km. It also uses single precision to reduce the memory requirements.
Thanks to the upscaling, the memory footprint of this example dataset is ~200Mb instead of the 3.1Gb of the raw (3500 x 7000) data.
Step9: Time to make a nowcast
So far, we have 1 hour and 10 minutes of precipitation images, separated 2 minutes from each other.
But, how do we use that data to run a precipitation forecast?
A simple way is by extrapolating the precipitation field, assuming it will continue to move as observed in the recent past, and without changes in intensity. This is commonly known as Lagrangian persistence.
The first step to run our nowcast based on Lagrangian persistence, is the estimation of the motion field from a sequence of past precipitation observations.
We use the Lucas-Kanade (LK) optical flow method implemented in pysteps.
This method follows a local tracking approach that relies on the OpenCV package.
Local features are tracked in a sequence of two or more radar images.
The scheme includes a final interpolation step to produce a smooth field of motion vectors.
Other optical flow methods are also available in pysteps.
Check the full list here.
Now let's use the first 5 precipitation images (10 min) to estimate the motion field of the radar pattern and the remaining 30 images (1h) to evaluate the quality of our forecast.
Step10: Let's see what this 'training' precipitation event looks like using the pysteps.visualization.plot_precip_field function.
Step11: Did you note the shaded grey regions? Those are the regions where no valid observations were available to estimate the precipitation (e.g., due to ground clutter, no radar coverage, or radar beam blockage).
Those regions need to be handled with care when we run our nowcast.
Data exploration
Before we produce a forecast, let's explore the precipitation data. In particular, let's see how the distribution of the rain rate values looks.
Step12: The histogram shows that rain rate values have a non-Gaussian and asymmetric distribution that is bounded at zero. Also, the probability of occurrence decays extremely fast with increasing rain rate values (note the logarithmic y-axis).
For better performance of the motion estimation algorithms, we can convert the rain rate values (in mm/h) to a more log-normal distribution of rain rates by applying the following logarithmic transformation
Step13: Let's inspect the resulting transformed precipitation distribution.
Step14: That looks more like a log-normal distribution. Note the large peak at -15dB. That peak corresponds to "zero" (below threshold) precipitation. The jump with no data in between -15 and -10 dB is caused by the precision of the data, which we had set to 1 decimal. Hence, the lowest precipitation intensities (above zero) are 0.1 mm/h (= -10 dB).
Compute the nowcast
These are the minimal steps to compute a short-term forecast using Lagrangian extrapolation of the precipitation patterns
Step15: Extrapolate the observations
We have all ingredients to make an extrapolation nowcast now.
The final step is to advect the most recent radar rainfall field along the estimated motion field, producing an extrapolation forecast.
Step16: Let's inspect the last forecast time (hence this is the forecast rainfall an hour ahead).
Step17: Evaluate the forecast quality
Many verification methods are already present in pysteps (see a complete list here). We just have to import them.
Here, we will evaluate our forecast using the Fractions Skill Score (FSS).
This metric provides an intuitive assessment of the dependency of forecast skill on spatial scale and intensity. This makes the FSS an ideal skill score for high-resolution precipitation forecasts.
More precisely, the FSS is a neighborhood spatial verification method that directly compares the fractional coverage of events in windows surrounding the observations and forecasts.
The FSS varies from 0 (total mismatch) to 1 (perfect forecast).
For most situations, an FSS value of > 0.5 serves as a good indicator of a useful forecast (Roberts and Lean, 2008 and Skok and Roberts, 2016). | Python Code:
# These libraries are needed for the pygrib library in Colab.
# Note that this is needed only if you install pygrib using pip.
# If you use conda, the libraries will be installed automatically.
! apt-get install libeccodes-dev libproj-dev
# Install the python packages
! pip install pyproj
! pip install pygrib
# Uninstall existing shapely
# We will re-install shapely in the next step by ignoring the binary
# wheels to make it compatible with other modules that depend on
# GEOS, such as Cartopy (used here).
!pip uninstall --yes shapely
# To install cartopy in Colab using pip, we need to install the library
# dependencies first.
!apt-get install -qq libgdal-dev libgeos-dev
!pip install shapely --no-binary shapely
!pip install cartopy
Explanation: My first precipitation nowcast
In this example, we will use pysteps to compute and plot an extrapolation nowcast using the NSSL's Multi-Radar/Multi-Sensor System
(MRMS) rain rate product.
The MRMS precipitation product is available every 2 minutes, over the contiguous US.
Each precipitation composite has 3500 x 7000 grid points, separated 1 km from each other.
Set-up Colab environment
Important: In Colab, execute this section one cell at a time. Trying to execute all the cells at once may result in cells being skipped and some dependencies not being installed.
First, let's set up our working environment. Note that these steps are only needed to work with Google Colab.
To install pysteps locally, you can follow these instructions.
First, let's install the latest Pysteps version from the Python Package Index (PyPI) using pip. This will also install the minimal dependencies needed to run pysteps.
Install optional dependencies
Now, let's install the optional dependencies that will allow us to plot and read the example data.
- pygrib: to read the MRMS data in GRIB format
- pyproj: needed by pygrib
NOTE: Do not import pysteps in this notebook until the following optional dependencies are loaded. Otherwise, pysteps will assume that they are not installed and some of its functionalities won't work.
End of explanation
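# Optional sanity check (my addition, not part of the original tutorial): confirm that the
# optional dependencies import cleanly before pysteps is imported for the first time.
import importlib
for module_name in ("pygrib", "pyproj", "cartopy"):
    importlib.import_module(module_name)
print("Optional dependencies imported successfully")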
# ! pip install git+https://github.com/pySTEPS/pysteps
! pip install pysteps
Explanation: Install pysteps
Now that all dependencies are installed, we can install pysteps.
End of explanation
# Import the helper functions
from pysteps.datasets import download_pysteps_data, create_default_pystepsrc
# Download the pysteps example data into the "pysteps_data" directory
download_pysteps_data("pysteps_data")
Explanation: Getting the example data
Now that we have the environment ready, let's install the example data and configure the default pysteps parameters by following this tutorial.
First, we will use the pysteps.datasets.download_pysteps_data() function to download the data.
End of explanation
# If the configuration file is placed in one of the default locations
# (https://pysteps.readthedocs.io/en/latest/user_guide/set_pystepsrc.html#configuration-file-lookup)
# it will be loaded automatically when pysteps is imported.
config_file_path = create_default_pystepsrc("pysteps_data")
Explanation: Next, we need to create a default configuration file that points to the downloaded data.
By default, pysteps will place the configuration file in $HOME/.pysteps (unix and Mac OS X) or $USERPROFILE/pysteps (windows).
To quickly create a configuration file, we will use the pysteps.datasets.create_default_pystepsrc() helper function.
End of explanation
# Import pysteps and load the new configuration file
import pysteps
_ = pysteps.load_config_file(config_file_path, verbose=True)
Explanation: Since pysteps was already initialized in this notebook, we need to load the new configuration file and update the default configuration.
End of explanation
# The default parameters are stored in pysteps.rcparams.
from pprint import pprint
pprint(pysteps.rcparams.data_sources['mrms'])
Explanation: Let's see what the default parameters look like (these are stored in the
pystepsrc file). We will be using them to load the MRMS data set.
End of explanation
from pysteps.datasets import load_dataset
# We'll import the time module to measure the time the importer needed
import time
start_time = time.time()
# Import the data
precipitation, metadata, timestep = load_dataset('mrms',frames=35) # precipitation in mm/h
end_time = time.time()
print("Precipitation data imported")
print("Importing the data took ", (end_time - start_time), " seconds")
Explanation: This should have printed the following lines:
fn_ext: 'grib2' -- The file extension
fn_pattern: 'PrecipRate_00.00_%Y%m%d-%H%M%S' -- The file naming convention of the MRMS data.
importer: 'mrms_grib' -- The name of the importer for the MRMS data.
importer_kwargs: {} -- Extra options provided to the importer. None in this example.
path_fmt: '%Y/%m/%d' -- The folder structure in which the files are stored. Here, year/month/day/filename.
root_path: '/content/pysteps_data/mrms' -- The root path of the MRMS-data.
timestep: 2 -- The temporal interval of the (radar) rainfall data
Note that the default timestep parameter is 2 minutes, which corresponds to the time interval at which the MRMS product is available.
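As an illustration of how these entries fit together (my addition; the timestamp below is arbitrary and only for demonstration), the importer builds file paths roughly like this:
from datetime import datetime
import os
timestamp = datetime(2019, 6, 10, 0, 2)
root_path, path_fmt = "/content/pysteps_data/mrms", "%Y/%m/%d"
fn_pattern, fn_ext = "PrecipRate_00.00_%Y%m%d-%H%M%S", "grib2"
print(os.path.join(root_path, timestamp.strftime(path_fmt), timestamp.strftime(fn_pattern) + "." + fn_ext))
# -> /content/pysteps_data/mrms/2019/06/10/PrecipRate_00.00_20190610-000200.grib2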
Load the MRMS example data
Now that we have installed the example data, let's import the example MRMS dataset using the load_dataset() helper function from the pysteps.datasets module.
We import 1 hour and 10 minutes of data, which corresponds to a sequence of 35 frames of 2-D precipitation composites.
Note that importing the data takes approximately 30 seconds.
End of explanation
# Let's inspect the shape of the imported data array
precipitation.shape
Explanation: Let's have a look at the values returned by the load_dataset() function.
precipitation: A numpy array with (time, latitude, longitude) dimensions.
metadata: A dictionary with additional information (pixel sizes, map projections, etc.).
timestep: Time separation between each sample (in minutes)
End of explanation
timestep # In minutes
pprint(metadata)
Explanation: Note that the shape of the precipitation is 4 times smaller than the raw MRMS data (3500 x 7000).
The load_dataset() function uses the default parameters from importers to read the data. By default, the MRMS importer upscales the data 4x. That is, from ~1km resolution to ~4km. It also uses single precision to reduce the memory requirements.
Thanks to the upscaling, the memory footprint of this example dataset is ~200Mb instead of the 3.1Gb of the raw (3500 x 7000) data.
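A quick back-of-the-envelope check of those numbers (my addition; it assumes the 35 frames are stored in single precision, i.e. 4 bytes per value):
n_frames, bytes_per_value = 35, 4
full_res_gib = 3500 * 7000 * n_frames * bytes_per_value / 2**30   # ~3.2 GiB at ~1 km resolution
upscaled_mib = 875 * 1750 * n_frames * bytes_per_value / 2**20    # ~205 MiB at ~4 km resolution
print(round(full_res_gib, 1), round(upscaled_mib))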
End of explanation
# precipitation[0:5] -> Used to find motion (past data). Let's call it training precip.
train_precip = precipitation[0:5]
# precipitation[5:] -> Used to evaluate forecasts (future data, not available in "real" forecast situation)
# Let's call it observed precipitation because we will use it to compare our forecast with the actual observations.
observed_precip = precipitation[5:]
Explanation: Time to make a nowcast
So far, we have 1 hour and 10 minutes of precipitation images, separated 2 minutes from each other.
But, how do we use that data to run a precipitation forecast?
A simple way is by extrapolating the precipitation field, assuming it will continue to move as observed in the recent past, and without changes in intensity. This is commonly known as Lagrangian persistence.
The first step to run our nowcast based on Lagrangian persistence, is the estimation of the motion field from a sequence of past precipitation observations.
We use the Lucas-Kanade (LK) optical flow method implemented in pysteps.
This method follows a local tracking approach that relies on the OpenCV package.
Local features are tracked in a sequence of two or more radar images.
The scheme includes a final interpolation step to produce a smooth field of motion vectors.
Other optical flow methods are also available in pysteps.
Check the full list here.
Now let's use the first 5 precipitation images (10 min) to estimate the motion field of the radar pattern and the remaining 30 images (1h) to evaluate the quality of our forecast.
End of explanation
from matplotlib import pyplot as plt
from pysteps.visualization import plot_precip_field
# Set a figure size that looks nice ;)
plt.figure(figsize=(9, 5), dpi=100)
# Plot the last rainfall field in the "training" data.
# train_precip[-1] -> Last available composite for nowcasting.
plot_precip_field(train_precip[-1], geodata=metadata, axis="off")
plt.show() # (This line is actually not needed if you are using jupyter notebooks)
Explanation: Let's see what this 'training' precipitation event looks like using the pysteps.visualization.plot_precip_field function.
End of explanation
import numpy as np
# Let's define some plotting default parameters for the next plots
# Note: This is not strictly needed.
plt.rc('figure', figsize=(4,4))
plt.rc('figure', dpi=100)
plt.rc('font', size=14) # controls default text sizes
plt.rc('axes', titlesize=14) # fontsize of the axes title
plt.rc('axes', labelsize=14) # fontsize of the x and y labels
plt.rc('xtick', labelsize=14) # fontsize of the tick labels
plt.rc('ytick', labelsize=14) # fontsize of the tick labels
# Let's use the last available composite for nowcasting from the "training" data (train_precip[-1])
# Also, we will discard any invalid value.
valid_precip_values = train_precip[-1][~np.isnan(train_precip[-1])]
# Plot the histogram
bins= np.concatenate( ([-0.01,0.01], np.linspace(1,40,39)))
plt.hist(valid_precip_values,bins=bins,log=True, edgecolor='black')
plt.autoscale(tight=True, axis='x')
plt.xlabel("Rainfall intensity [mm/h]")
plt.ylabel("Counts")
plt.title('Precipitation rain rate histogram in mm/h units')
plt.show()
Explanation: Did you note the shaded grey regions? Those are the regions where no valid observations were available to estimate the precipitation (e.g., due to ground clutter, no radar coverage, or radar beam blockage).
Those regions need to be handled with care when we run our nowcast.
Data exploration
Before we produce a forecast, let's explore the precipitation data. In particular, let's see how the distribution of the rain rate values looks.
End of explanation
from pysteps.utils import transformation
# Log-transform the data to dBR.
# The threshold of 0.1 mm/h sets the fill value to -15 dBR.
train_precip_dbr, metadata_dbr = transformation.dB_transform(train_precip, metadata,
threshold=0.1,
zerovalue=-15.0)
Explanation: The histogram shows that rain rate values have a non-Gaussian and asymmetric distribution that is bounded at zero. Also, the probability of occurrence decays extremely fast with increasing rain rate values (note the logarithmic y-axis).
For better performance of the motion estimation algorithms, we can convert the rain rate values (in mm/h) to a more log-normal distribution of rain rates by applying the following logarithmic transformation:
\begin{equation}
R\rightarrow
\begin{cases}
10\log_{10}R, & \text{if } R\geq 0.1\text{ mm h$^{-1}$} \\
-15, & \text{otherwise}
\end{cases}
\end{equation}
The transformed precipitation corresponds to logarithmic rain rates in units of dBR. The value of −15 dBR is equivalent to assigning a rain rate of approximately 0.03 mm h$^{−1}$ to the zeros.
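A quick numerical check of that statement (my addition): 0.1 mm/h maps to -10 dBR, and inverting -15 dBR gives roughly 0.03 mm/h.
import numpy as np
print(10 * np.log10([0.1, 1.0, 10.0]))   # [-10.   0.  10.]
print(10 ** (-15.0 / 10.0))              # ~0.0316 mm/h, the rain rate assigned to the zeros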
End of explanation
# Only use the valid data!
valid_precip_dbr = train_precip_dbr[-1][~np.isnan(train_precip_dbr[-1])]
plt.figure(figsize=(4, 4), dpi=100)
# Plot the histogram
counts, bins, _ = plt.hist(valid_precip_dbr, bins=40, log=True, edgecolor="black")
plt.autoscale(tight=True, axis="x")
plt.xlabel("Rainfall intensity [dB]")
plt.ylabel("Counts")
plt.title("Precipitation rain rate histogram in dB units")
# Let's add a lognormal distribution that fits that data to the plot.
import scipy
bin_center = (bins[1:] + bins[:-1]) * 0.5
bin_width = np.diff(bins)
# We will only use one composite to fit the function to speed up things.
# First, remove the no-precip areas.
precip_to_fit = valid_precip_dbr[valid_precip_dbr > -15]
fit_params = scipy.stats.lognorm.fit(precip_to_fit)
fitted_pdf = scipy.stats.lognorm.pdf(bin_center, *fit_params)
# Multiply pdf by the bin width and the total number of grid points: pdf -> total counts per bin.
fitted_pdf = fitted_pdf * bin_width * precip_to_fit.size
# Plot the log-normal fit
plt.plot(bin_center, fitted_pdf, label="Fitted log-normal")
plt.legend()
plt.show()
Explanation: Let's inspect the resulting transformed precipitation distribution.
End of explanation
# Estimate the motion field with Lucas-Kanade
from pysteps import motion
from pysteps.visualization import plot_precip_field, quiver
# Import the Lucas-Kanade optical flow algorithm
oflow_method = motion.get_method("LK")
# Estimate the motion field from the training data (in dBR)
motion_field = oflow_method(train_precip_dbr)
## Plot the motion field.
# Use a figure size that looks nice ;)
plt.figure(figsize=(9, 5), dpi=100)
plt.title("Estimated motion field with the Lukas-Kanade algorithm")
# Plot the last rainfall field in the "training" data.
# Remember to use the mm/h precipitation data since plot_precip_field assumes
# mm/h by default. You can change this behavior using the "units" keyword.
plot_precip_field(train_precip[-1], geodata=metadata, axis="off")
# Plot the motion field vectors
quiver(motion_field, geodata=metadata, step=40)
plt.show()
Explanation: That looks more like a log-normal distribution. Note the large peak at -15dB. That peak corresponds to "zero" (below threshold) precipitation. The jump with no data in between -15 and -10 dB is caused by the precision of the data, which we had set to 1 decimal. Hence, the lowest precipitation intensities (above zero) are 0.1 mm/h (= -10 dB).
Compute the nowcast
These are the minimal steps to compute a short-term forecast using Lagrangian extrapolation of the precipitation patterns:
Estimate the precipitation motion field.
Use the motion field to advect the most recent radar rainfall field and produce an extrapolation forecast.
Estimate the motion field
Now we can estimate the motion field. Here we use a local feature-tracking approach (Lucas-Kanade).
However, check the other methods available in the pysteps.motion module.
End of explanation
from pysteps import nowcasts
start = time.time()
# Extrapolate the last radar observation
extrapolate = nowcasts.get_method("extrapolation")
# You can use the precipitation observations directly in mm/h for this step.
last_observation = train_precip[-1]
last_observation[~np.isfinite(last_observation)] = metadata["zerovalue"]
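# Note (added comment): train_precip[-1] is a view, so this in-place masking also
# replaces the NaNs in the field that is passed to the extrapolation below.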
# We set the number of leadtimes (the length of the forecast horizon) to the
# length of the observed/verification precipitation data. In this way, we'll get
# a forecast that covers these time intervals.
n_leadtimes = observed_precip.shape[0]
# Advect the most recent radar rainfall field and make the nowcast.
precip_forecast = extrapolate(train_precip[-1], motion_field, n_leadtimes)
# This shows the shape of the resulting array with [time intervals, rows, cols]
print("The shape of the resulting array is: ", precip_forecast.shape)
end = time.time()
print("Advecting the radar rainfall fields took ", (end - start), " seconds")
Explanation: Extrapolate the observations
We have all ingredients to make an extrapolation nowcast now.
The final step is to advect the most recent radar rainfall field along the estimated motion field, producing an extrapolation forecast.
End of explanation
# Plot precipitation at the end of the forecast period.
plt.figure(figsize=(9, 5), dpi=100)
plot_precip_field(precip_forecast[-1], geodata=metadata, axis="off")
plt.show()
Explanation: Let's inspect the last forecast time (hence this is the forecast rainfall an hour ahead).
End of explanation
from pysteps import verification
fss = verification.get_method("FSS")
# Compute fractions skill score (FSS) for all lead times for different scales using a 1 mm/h detection threshold.
scales = [
2,
4,
8,
16,
32,
64,
] # In grid points.
scales_in_km = np.array(scales)*4
# Set the threshold
thr = 1.0 # in mm/h
score = []
# Calculate the FSS for every lead time and all predefined scales.
for i in range(n_leadtimes):
score_ = []
for scale in scales:
score_.append(
fss(precip_forecast[i, :, :], observed_precip[i, :, :], thr, scale)
)
score.append(score_)
# Now plot it
plt.figure()
x = np.arange(1, n_leadtimes+1) * timestep
plt.plot(x, score, lw=2.0)
plt.xlabel("Lead time [min]")
plt.ylabel("FSS ( > 1.0 mm/h ) ")
plt.title("Fractions Skill Score")
plt.legend(
scales_in_km,
title="Scale [km]",
loc="center left",
bbox_to_anchor=(1.01, 0.5),
bbox_transform=plt.gca().transAxes,
)
plt.autoscale(axis="x", tight=True)
plt.show()
Explanation: Evaluate the forecast quality
Many verification methods are already present in pysteps (see a complete list here). We just have to import them.
Here, we will evaluate our forecast using the Fractions Skill Score (FSS).
This metric provides an intuitive assessment of the dependency of forecast skill on spatial scale and intensity. This makes the FSS an ideal skill score for high-resolution precipitation forecasts.
More precisely, the FSS is a neighborhood spatial verification method that directly compares the fractional coverage of events in windows surrounding the observations and forecasts.
The FSS varies from 0 (total mismatch) to 1 (perfect forecast).
For most situations, an FSS value of > 0.5 serves as a good indicator of a useful forecast (Roberts and Lean, 2008 and Skok and Roberts, 2016).
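For reference, here is a minimal self-contained sketch of the FSS computation described above (my own illustration; the notebook itself relies on pysteps' implementation, and this version assumes scipy is available):
import numpy as np
from scipy.ndimage import uniform_filter

def fss_sketch(forecast, observed, threshold, scale):
    # Binary exceedance fields; NaNs are treated as non-events for simplicity.
    forecast_binary = (np.nan_to_num(forecast) >= threshold).astype(float)
    observed_binary = (np.nan_to_num(observed) >= threshold).astype(float)
    # Fractional event coverage in a (scale x scale) neighborhood around each pixel.
    forecast_fraction = uniform_filter(forecast_binary, size=scale)
    observed_fraction = uniform_filter(observed_binary, size=scale)
    mse = np.mean((forecast_fraction - observed_fraction) ** 2)
    mse_reference = np.mean(forecast_fraction ** 2) + np.mean(observed_fraction ** 2)
    return 1.0 - mse / mse_reference if mse_reference > 0 else np.nan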
End of explanation |
14,301 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Ground Penetrating Radar Lab 6 Notebook
This notebook contains two apps, which are used to complete part 3 in GPR Lab 6
Step1: Pipe Fitting App
<img style="float
Step2: Slab Fitting App
<img style="float | Python Code:
from geoscilabs.gpr.GPRlab1 import downloadRadargramImage, PipeWidget, WallWidget
from SimPEG.utils import download
Explanation: Ground Penetrating Radar Lab 6 Notebook
This notebook contains two apps, which are used to complete part 3 in GPR Lab 6:
Pipe Fitting App: This app simulates the radargram signature from a cylindrical pipe and lays it over a set of field collected data.
Slab Fitting App: This app simulates the radargram signature from a rectangular slab and lays it over a set of field collected data.
By using the models provided (pipe/slab) to fit data signatures within field collected radargram data, we can determine the existence, location and dimensions of pipes and slabs. You may also use this app to learn how radargram signatures from pipes and rectangular slabs change as the parameters provided are altered.
Importing Packages
End of explanation
URL = "http://github.com/geoscixyz/geosci-labs/raw/main/images/gpr/ubc_GPRdata.png"
radargramImage = downloadRadargramImage(URL)
PipeWidget(radargramImage);
Explanation: Pipe Fitting App
<img style="float: right; width: 500px" src="https://github.com/geoscixyz/geosci-labs/blob/main/images/gpr/pipemodel.png?raw=true">
In the context of the lab exercise (Interpretation of Field Data), it is known that several pipes are likely buried below the GPR acquisition line. Unfortunately, we do not know the location or dimensions of the pipes. Here, you will attempt to fit radargram signatures corresponding to pipe-shaped objects. Parameters which provide the best fit can be used to infer characteristics of buried pipes.
Parameters for the App:
epsr: Relative permittivity of the background medium
h: Distance from center of the pipe to the surface
xc: Horizontal location of the pipe center
r: radius of the pipe
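As a rough guide to how these parameters shape the signature (my own sketch, not part of the lab code), the hyperbolic arrival-time curve that the app overlays on the radargram can be written as:
import numpy as np
c = 3e8  # speed of light in vacuum [m/s]
def pipe_two_way_time(x, epsr, h, xc, r):
    v = c / np.sqrt(epsr)                                     # wave velocity in the ground
    return 2.0 * (np.sqrt((x - xc) ** 2 + h ** 2) - r) / v    # two-way travel time [s]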
End of explanation
URL = "http://github.com/geoscixyz/geosci-labs/raw/main/images/gpr/ubc_GPRdata.png"
radargramImage = downloadRadargramImage(URL)
WallWidget(radargramImage);
Explanation: Slab Fitting App
<img style="float: right; width: 500px" src="https://github.com/geoscixyz/geosci-labs/blob/main/images/gpr/slabmodel.png?raw=true">
In the context of the lab exercise (Interpretation of Field Data), it is known that a concrete casing is buried below the GPR acquisition line. Unfortunately, we do not know the location or depth of the casing. Here, you will attempt to fit radargram signatures corresponding to the casing using a rectangular slab model. Parameters which provide the best fit can be used to infer information about the casing.
Parameters for the App:
epsr: Relative permittivity of the background medium
h: Distance from center of the pipe to the surface
x1: Horizontal location of left boundary of the concrete casing model
x2: Horizontal location of the right boundary of the concrete casing model
End of explanation |
14,302 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
How to convert a numpy array of dtype=object to torch Tensor? | Problem:
import pandas as pd
import torch
import numpy as np
x_array = load_data()
x_tensor = torch.from_numpy(x_array.astype(float)) |
14,303 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Stellar mass profiles based on sg_fluxtable
Preliminary stellar mass profiles of the HST sample based on the radial aperture photometry sg_fluxtable_nm.txt generated in July 2017 at Bates.
Step1: Read the original photometry, the fitting results, and the K-corrections.
Step2: Plot the individual stellar mass profiles.
Fluxes were measured in circular apertures with radii ranging from 1-40 pixels (0.05-2 arcsec). Below we calculate the surface mass density in units of Mstar per comoving kpc2. | Python Code:
import os
import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mpl
import fitsio
import astropy.units as u
from astropy.io import ascii
from astropy.table import Table
from astropy.cosmology import FlatLambdaCDM
%pylab inline
mpl.rcParams.update({'font.size': 18})
cosmo = FlatLambdaCDM(H0=70, Om0=0.3)
Explanation: Stellar mass profiles based on sg_fluxtable
Preliminary stellar mass profiles of the HST sample based on the radial aperture photometry sg_fluxtable_nm.txt generated in July 2017 at Bates.
End of explanation
massdir = os.path.join( os.getenv('HIZEA_PROJECT'), 'massprofiles', 'isedfit' )
etcdir = os.path.join( os.getenv('HIZEA_DIR'), 'etc' )
photfile = os.path.join(etcdir, 'sg_fluxtable_nm.txt')
isedfile = os.path.join(massdir, 'massprofiles_fsps_v2.4_miles_chab_charlot_sfhgrid01.fits.gz')
kcorrfile = os.path.join(massdir, 'massprofiles_fsps_v2.4_miles_chab_charlot_sfhgrid01_kcorr.z0.0.fits.gz')
print('Reading {}'.format(photfile))
phot = ascii.read(photfile)
phot[:2]
print('Reading {}'.format(isedfile))
ised = Table(fitsio.read(isedfile, ext=1, upper=True))
ised[:2]
print('Reading {}'.format(kcorrfile))
kcorr = Table(fitsio.read(kcorrfile, ext=1, upper=True))
kcorr[:2]
galaxy = [gg[:5] for gg in phot['ID'].data]
galaxy = np.unique(galaxy)
ngal = len(galaxy)
Explanation: Read the original photometry, the fitting results, and the K-corrections.
End of explanation
nrad = 40
radpix = np.linspace(1.0, 40.0, nrad) # [pixels]
radarcsec = radpix * 0.05 # [arcsec]
mstar = ised['MSTAR_AVG'].data.reshape(ngal, nrad)
mstar_err = ised['MSTAR_ERR'].data.reshape(ngal, nrad)
redshift = phot['z'].data.reshape(ngal, nrad)[:, 0]
area = np.pi * np.insert(np.diff(radarcsec**2), 0, radarcsec[0]**2) # aperture annulus [arcsec2]
sigma = np.zeros_like(mstar) # surface mass density [Mstar/kpc2]
radkpc = np.zeros_like(mstar) # radius [comoving kpc]
for igal in range(ngal):
arcsec2kpc = cosmo.arcsec_per_kpc_comoving(redshift[igal]).value
radkpc[igal, :] = radarcsec / arcsec2kpc
areakpc2 = area / arcsec2kpc**2
sigma[igal, :] = np.log10( 10**mstar[igal, :] / areakpc2 )
massrange = (8, 10.2)
sigmarange = (6, 9.6)
fig, ax = plt.subplots(3, 4, figsize=(14, 8), sharey=True, sharex=True)
for ii, thisax in enumerate(ax.flat):
thisax.errorbar(radarcsec, mstar[ii, :], yerr=mstar_err[ii, :],
label=galaxy[ii])
thisax.set_ylim(massrange)
#thisax.legend(loc='upper right', frameon=False)
thisax.annotate(galaxy[ii], xy=(0.9, 0.9), xycoords='axes fraction',
size=16, ha='right', va='top')
fig.text(0.0, 0.5, r'$\log_{10}\, (M / M_{\odot})$', ha='center',
va='center', rotation='vertical')
fig.text(0.5, 0.0, 'Radius (arcsec)', ha='center',
va='center')
fig.subplots_adjust(wspace=0.05, hspace=0.05)
fig.tight_layout()
fig, ax = plt.subplots(figsize=(10, 7))
for igal in range(ngal):
ax.plot(radkpc[igal, :], np.log10(np.cumsum(10**mstar[igal, :])), label=galaxy[igal])
ax.legend(loc='lower right', fontsize=16, ncol=3, frameon=False)
ax.set_xlabel(r'Galactocentric Radius $r_{kpc}$ (Comoving kpc)')
ax.set_ylabel(r'$\log_{10}\, M(<r_{kpc})\ (M_{\odot})$')
fig, ax = plt.subplots(figsize=(10, 7))
for igal in range(ngal):
ax.plot(radkpc[igal, :], sigma[igal, :], label=galaxy[igal])
ax.legend(loc='upper right', fontsize=16, ncol=3, frameon=False)
ax.set_xlabel(r'Galactocentric Radius $r_{kpc}$ (Comoving kpc)')
ax.set_ylabel(r'$\log_{10}\, \Sigma\ (M_{\odot}\ /\ {\rm kpc}^2)$')
ax.set_ylim(sigmarange)
Explanation: Plot the individual stellar mass profiles.
Fluxes were measured in circular apertures with radii ranging from 1-40 pixels (0.05-2 arcsec). Below we calculate the surface mass density in units of Mstar per comoving kpc2.
End of explanation |
14,304 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
Welcome to my analysis on who died during the sinking of the Titanic. In this notebook I will be exploring some basic trends to see what are the best predictors of who survived and who perished and discuss how certain methods aid in my final result.
A submission worthy script can be found on my GitHub.
1. Import In The Data
Before we can do anything with the analysis we need to import in the data in order to visualise it and identify possible trends. To do this I decided to import in the csv using pandas' read_csv function and import the data into a dataframe.
Step1: As you should be able to see, there are a few different parameters we can consider in our models
Step2: From this visualisation you can see that only 40% of people survived on the Titanic. Thus if we assume that the testing data is similar to the training set (which is a very valid assumption) then we could simply predict that everyone died and we would still be right about 60% of the time. So already we can beat random chance! But we should be able to determine a better model than that anyway.
The next visualisation I want to do is the likelihood of surviving based on gender and ticket class. I have a feeling that women were more likely to survive than men and that the higher ticket classes had a better survival rate than the lower classes.
Step3: Wow! I expected some kind of a trend but nothing like this one. As you can see nearly all women survived in first class and the same trend is observed with second class. The women in third class survived more often than men but significantly less often than the women in first and second class. The men in first class have a higher chance of surviving than the men in the second and third class, who have nearly identical chances.
Using this result alone I think I could predict who died with around 70-80% accuracy. However, if I want a more accurate model, I'll have to keep exploring other behaviours.
2.2. Visualisation By Age
I would like to see the distribution of ages on the Titanic purely out of interest. Perhaps it will provide some sort of insight too. I have decided to simply group the ages by year for visualisation.
Step4: Interestingly, there seems to be a non-normal distribution for the ages. There is a small spike for young children before having a right-skewed distribution centring around 25 years. A likely explanation for this behaviour would be that families bring their children on the voyage but teenagers are expected to be old enough to care for themselves if the parents went away.
2.3. Visualisation By Ticket Class
How about the distribution by ticket class? Let us look at these values using pie charts.
Step7: 3. Building a Simple Model
Now that we have identified some basic trends we can begin to create a model that predicts if a person survives or dies based on data alone.
Again we are going to use the assumption that the testing data is similar to the training data such that the percentage of people who died in both sets should be identical. I will use the mean number of deaths for certain parameters as the threshold to break under a random roll to see if they survive or die. Let us try this right now.
Each probability is stored in a dictionary with the key being a list of values for each of the columns I'm testing for. Since currently I'm just testing for ticket class and sex, the key will be [Ticket Class, Sex]
Step8: 4. Evaluating the Model
With our first model developed, how can we be sure of its accuracy? Firstly let us compute the mean of our accuracy (which is percentage correct in this case as the result is only true or false).
Step9: Our guess is alright! On average I get around 75% guess accuracy but you might be seeing another number. My model works on probability so the number of correct guesses changes when rerun.
To properly get a measure of how correct the model is, I've decided to Monte-Carlo the experiment and view the histogram. This will tell us how much of a spread the model has guessing the right answer and what on average I expect my accuracy to be.
Step10: As you can see, the model is normally distributed about 75.5% and has a spread from 70% to 80% accuracy. That means we are getting the right answer 3/4 of the time! That isn't too bad but I know we can do better.
4.1. How Can We Improve The Model?
The model currently only uses one (or two if you don't count the AND of probabilities as one test) measure of survival rate. This is fine and good but what happens if we want to include more parameters in the model? There has to be a way of combining different likelihoods of surviving into a single measure.
My idea is as follows. The likelihood of surviving is determined by a weighted average. Each parameter or collection of parameters are given a weighting depending on how far away the prediction is to random chance and normalised so that the weightings sum up to one. I'll illustrate this with an example.
Say that there is a 40 year old woman travelling in first class. The fact that she is a woman in first class gives a likelihood of surviving of 90% and the fact that she is 40 years old gives a likelihood of 60%. I would assign a weighting of 80%-20% since the first parameter is 40% away from random chance while the second parameter is only 10% away. These percentages normalised to sum to 100% give 80% and 20% respectively.
I am not sure if this would work but it is worth a shot regardless. We can tweak the model later if the result isn't consistent.
If I am going to improve this model then I would want to remove the two columns of Guess and CorrectGuess from the dataframe. They will get re-added at the end with the new model.
Step11: 5. Improving The Model By Including Age
A new factor to include in the model is the age of the passengers. I expect that there should be some kind of trend with
the age of the passenger and their likelihood of surviving. Let us try to identify this trend by visualising the survival rate histogram overlaid with the ages histogram.
First plot the histograms without filtering
Step12: Interestingly, it seems that children below 16 years have a really high chance of surviving as well as passengers above 50 years old. The worst survival rate is for passengers between 18 to 45 years.
Let us now redo this analysis but split the figure into one for males and one for females.
Step13: This result supports what we found before, that females mostly survived over males, but it also provides some new insight. Notice that for male children their survival rate is still really high (<15 years) but is consistently low otherwise. As such you could tweak the model to say that children are much more likely to survived irregardless of gender.
Let us try to visualise the same plot again but set the bin width as 5 years.
Step14: Our conclusion is supported! Now we have to figure out if we can include this in the model.
Let us compute the survival rate on 5 year bin-widths and use that in the final model.
Step17: Now with these results visualised it should be easier to see. The survival rate for infants (<5 years) is quite high, while for men between 20 to 25 years it is only 35%. Anytime there is a 50% reading this is because their isn't enough information and you can only conclude that the probability matches random chance.
6. Bringing all of it Together
Now that we have computed enough information for our model, we can begin to combine it all together and form our weighted probability of surviving.
In the end I decided that the best way to evaluate the model is through the use of ensembling, as it gave a much better result when dealing with unbiased and unrelated decision trees. That is to say, each parameter throws separate dice some number of times and a majority vote is taken. That way we don't have to deal with problems with weighting and can treat each parameter separately.
Step18: Currently the execution time for the below cell is really long because I haven't bothered to optimise it. | Python Code:
# IMPORT STATEMENTS.
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import seaborn as sns # data visualisation
import matplotlib.pyplot as plt # data visualisation
import random # Used to sample survival.
# DEFINE GLOBALS.
NUM_OF_ROLLS = 3
df = pd.read_csv("../input/train.csv", index_col=0)
df.head()
Explanation: Introduction
Welcome to my analysis on who died during the sinking of the Titanic. In this notebook I will be exploring some basic trends to see what are the best predictors of who survived and who perished and discuss how certain methods aid in my final result.
A submission worthy script can be found on my GitHub.
1. Import In The Data
Before we can do anything with the analysis we need to import in the data in order to visualise it and identify possible trends. To do this I decided to import in the csv using pandas' read_csv function and import the data into a dataframe.
End of explanation
# First visualise the general case (i.e. no considerations)
total, survivors = df.shape[0], df[df.Survived==1].shape[0]
survival_rate = float(survivors)/float(total)*100
f, ax = plt.subplots(figsize=(7, 7))
ax.set_title("Proportion of People Who Died On The Titanic")
ax.pie(
[survival_rate, 100-survival_rate],
autopct='%1.1f%%',
labels=['Survived', 'Died']
)
None # Removes console output
Explanation: As you should be able to see, there are a few different parameters we can consider in our models:
Pclass: The ticket class which is either 1, 2 or 3. Although I can't support this yet, I would assume that this would be a major deciding factor in whether someone survived.
Name: The name of the passenger for their id.
Sex: Either male or female. Again I would assume this is a heavily weighted variable in determining if a person survived or died.
Age: This is a floating-point number. If the age is estimated then it is given in the form xx.5 and infants ages are given as fractions of one whole year.
SibSp: Indicates the number of siblings or spouses on-board.
Parch: Indicates the number of parents or children on-board.
Ticket: Gives the ticket number. I think this wouldn't be very useful in the analysis.
Fare: The cost of the ticket. This should be pretty well correlated with the ticket class.
Cabin: Tells where the ticket holder's cabin is. This might be more useful in higher-level analysis when certain areas had higher mortality rates than others.
Embarked: Where the passenger embarked from. There are only three ports to consider: Cherbourg, Queenstown and Southampton
2. Visualise The Data
Before we can begin to analyse the data and form an appropriate model, we need to identify trends visually. This section looks at some common trends to see where we can develop our model.
2.1. Visualisation of Deaths
Let us start by looking at how many people died on the Titanic generally and then by category. This might provide us with some interesting insights in how your likelihood of surviving depends on certain parameters.
End of explanation
sns.set_style('white')
f, ax = plt.subplots(figsize=(8, 8))
sns.barplot(
ax=ax,
x='Pclass',
y='Survived',
hue='Sex',
data=df,
capsize=0.05
)
ax.set_title("Survival By Gender and Ticket Class")
ax.set_ylabel("Survival (%)")
ax.set_xlabel("")
ax.set_xticklabels(["First Class", "Second Class", "Third Class"])
None # Suppress console output
Explanation: From this visualisation you can see that only 40% of people survived on the Titanic. Thus if we assume that the testing data is similar to the training set (which is a very valid assumption) then we could simply predict that everyone died and we would still be right about 60% of the time. So already we can beat random chance! But we should be able to determine a better model than that anyway.
The next visualisation I want to do is the likelihood of surviving based on gender and ticket class. I have a feeling that women were more likely to survive than men and that the higher ticket classes had a better survival rate than the lower classes.
End of explanation
sns.set_style("whitegrid")
f, ax = plt.subplots(figsize=(12, 5))
ax = sns.distplot(
df.Age.dropna().values, bins=range(0, 81, 1), kde=False,
axlabel='Age (Years)', ax=ax
)
Explanation: Wow! I expected some kind of a trend but nothing like this one. As you can see nearly all women survived in first class and the same trend is observed with second class. The women in third class survived more often than men but significantly less often than the women in first and second class. The men in first class have a higher chance of surviving than the men in the second and third class, who have nearly identical chances.
Using this result alone I think I could predict who died with around 70-80% accuracy. However, if I want a more accurate model, I'll have to keep exploring other behaviours.
2.2. Visualisation By Age
I would like to see the distribution of ages on the Titanic purely out of interest. Perhaps it will provide some sort of insight too. I have decided to simply group the ages by year for visualisation.
End of explanation
total, classes_count = float(df['Pclass'].shape[0]), df['Pclass'].value_counts()
proportions = list(map(lambda x: classes_count.loc[x]/total*100, [1, 2, 3]))
f, ax = plt.subplots(figsize=(8, 8))
ax.set_title('Proportion of Passengers By Class')
ax.pie(proportions, autopct='%1.1f%%', labels=['First Class', 'Second Class', 'Third Class'])
None # Removes console output
Explanation: Interestingly, there seems to be a non-normal distribution for the ages. There is a small spike for young children before having a right-skewed distribution centring around 25 years. A likely explanation for this behaviour would be that families bring their children on the voyage but teenagers are expected to be old enough to care for themselves if the parents went away.
2.3. Visualisation By Ticket Class
How about the distribution by ticket class? Let us look at these values using pie charts.
End of explanation
def probability(df, key_list):
Finds the probability of surviving based on the parameters passed in key_list.
The key_list input is structured like so:
[Ticket Class, Sex]
So for example, an input could be [1, 'male'].
pclass, sex = key_list
filtered_df = df[(df.Sex == sex) & (df.Pclass == pclass)]
return filtered_df['Survived'].mean()
##############################################################################################
sexes = df.Sex.unique()
ticket_classes = df.Pclass.unique()
probability_dict = dict()
for x in ticket_classes:
for y in sexes:
key = [x, y]
probability_dict[str(key)] = probability(df, key)
##############################################################################################
def make_guesses(df):
Makes guesses on if the passengers survived or died.
guesses = list()
for passenger_index, row in df.iterrows():
# Find if the passenger survived.
survival_key = [row.Pclass, row.Sex]
survival_odds = probability_dict[str(survival_key)]
survived_rolls = list(map(lambda x: random.random() <= survival_odds, range(NUM_OF_ROLLS)))
survived = sum(survived_rolls) > NUM_OF_ROLLS/2
# Add the result to the guesses
guesses.append(survived)
return guesses
##############################################################################################
df['Guess'] = make_guesses(df)
df['CorrectGuess'] = df.Guess == df.Survived
df.head()
Explanation: 3. Building a Simple Model
Now that we have identified some basic trends we can begin to create a model that predicts if a person survives or dies based on data alone.
Again we are going to use the assumption that the testing data is similar to the training data such that the percentage of people who died in both sets should be identical. I will use the mean number of deaths for certain parameters as the threshold to break under a random roll to see if they survive or die. Let us try this right now.
Each probability is stored in a dictionary with the key being a list of values for each of the columns I'm testing for. Since currently I'm just testing for ticket class and sex, the key will be [Ticket Class, Sex]
End of explanation
df.CorrectGuess.mean()
Explanation: 4. Evaluating the Model
With our first model developed, how can we be sure of its accuracy? Firstly let us compute the mean of our accuracy (which is percentage correct in this case as the result is only true or false).
End of explanation
results = list()
for ii in range(10**2):
guesses = make_guesses(df)
correct_guesses = (df.Survived == guesses)
results.append(correct_guesses.mean())
sns.distplot(results, kde=False)
None
Explanation: Our guess is alright! On average I get around 75% guess accuracy but you might be seeing another number. My model works on probability so the number of correct guesses changes when rerun.
To properly get a measure of how correct the model is, I've decided to Monte-Carlo the experiment and view the histogram. This will tell us how much of a spread the model has guessing the right answer and what on average I expect my accuracy to be.
End of explanation
df.drop('Guess', axis=1, inplace=True)
df.drop('CorrectGuess', axis=1, inplace=True)
df.head()
Explanation: As you can see, the model is normally distributed about 75.5% and has a spread from 70% to 80% accuracy. That means we are getting the right answer 3/4 of the time! That isn't too bad but I know we can do better.
4.1. How Can We Improve The Model?
The model currently only uses one (or two if you don't count the AND of probabilities as one test) measure of survival rate. This is fine and good but what happens if we want to include more parameters in the model? There has to be a way of combining different likelihoods of surviving into a single measure.
My idea is as follows. The likelihood of surviving is determined by a weighted average. Each parameter or collection of parameters are given a weighting depending on how far away the prediction is to random chance and normalised so that the weightings sum up to one. I'll illustrate this with an example.
Say that there is a 40 year old woman travelling in first class. The fact that she is a woman in first class gives a likelihood of surviving of 90% and the fact that she is 40 years old gives a likelihood of 60%. I would assign a weighting of 80%-20% since the first parameter is 40% away from random chance while the second parameter is only 10% away. These percentages normalised to sum to 100% give 80% and 20% respectively.
I am not sure if this would work but it is worth a shot regardless. We can tweak the model later if the result isn't consistent.
If I am going to improve this model then I would want to remove the two columns of Guess and CorrectGuess from the dataframe. They will get re-added at the end with the new model.
End of explanation
f, ax = plt.subplots(figsize=(12, 8))
sns.distplot(
df.Age.dropna().values, bins=range(0, 81, 1), kde=False,
axlabel='Age (Years)', ax=ax
)
sns.distplot(
df[(df.Survived == 1)].Age.dropna().values, bins=range(0, 81, 1), kde=False,
axlabel='Age (Years)', ax=ax
)
None # Suppress console output.
Explanation: 5. Improving The Model By Including Age
A new factor to include in the model is the age of the passengers. I expect that there should be some kind of trend with
the age of the passenger and their likelihood of surviving. Let us try to identify this trend by visualising the survival rate histogram overlaid with the ages histogram.
First plot the histograms without filtering:
End of explanation
f, ax = plt.subplots(2, figsize=(12, 8))
# Plot both sexes on different axes
for ii, sex in enumerate(['male', 'female']):
sns.distplot(
df[df.Sex == sex].Age.dropna().values, bins=range(0, 81, 1), kde=False,
axlabel='Age (Years)', ax=ax[ii]
)
sns.distplot(
df[(df.Survived == 1)&(df.Sex == sex)].Age.dropna().values, bins=range(0, 81, 1), kde=False,
axlabel='Age (Years)', ax=ax[ii]
)
None # Suppress console output.
Explanation: Interestingly, it seems that children below 16 years have a really high chance of surviving as well as passengers above 50 years old. The worst survival rate is for passengers between 18 to 45 years.
Let us now redo this analysis but split the figure into one for males and one for females.
End of explanation
f, ax = plt.subplots(2, figsize=(12, 8))
# Plot both sexes on different axes
for ii, sex in enumerate(['male', 'female']):
sns.distplot(
df[df.Sex == sex].Age.dropna().values, bins=range(0, 81, 5), kde=False,
axlabel='Age (Years)', ax=ax[ii]
)
sns.distplot(
df[(df.Survived == 1)&(df.Sex == sex)].Age.dropna().values, bins=range(0, 81, 5), kde=False,
axlabel='Age (Years)', ax=ax[ii]
)
None # Suppress console output.
Explanation: This result supports what we found before, that females mostly survived over males, but it also provides some new insight. Notice that for male children their survival rate is still really high (<15 years) but is consistently low otherwise. As such you could tweak the model to say that children are much more likely to survived irregardless of gender.
Let us try to visualise the same plot again but set the bin width as 5 years.
End of explanation
survival_rates, survival_labels = list(), list()
for x in range(0, 90+5, 5):
aged_df = df[(x <= df.Age)&(df.Age <= x+5)]
survival_rate = aged_df['Survived'].mean()
survival_rate = 0.5 if (survival_rate == 0.0 or survival_rate == 1.0) else survival_rate
survival_rates.append(survival_rate if (survival_rate != 0.0 or survival_rate != 1.0) else 0.5)
survival_labels.append('(%i, %i]' % (x, x+5))
f, ax = plt.subplots(figsize=(12, 8))
ax = sns.barplot(x=survival_labels, y=survival_rates, ax=ax)
ax.set_xticklabels(ax.get_xticklabels(), rotation=50)
None # Suppress console output
Explanation: Our conclusion is supported! Now we have to figure out if we can include this in the model.
Let us compute the survival rate on 5 year bin-widths and use that in the final model.
End of explanation
def getProbability(passengerId, df):
Finds the weighted probability of surviving based on the passenger's parameters.
This function finds the passenger's information by looking for their id in the dataframe
and extracting the information that it needs. Currently the probability is found using a
weighted mean on the following parameters:
- Pclass: Higher the ticket class the more likely they will survive.
- Sex: Women on average had a higher chance of living.
- Age: Infants and older people had a greater chance of living.
passenger = df.loc[passengerId]
# Survival rate based on sex and ticket class.
bySexAndClass = df[
(df.Sex == passenger.Sex) &
(df.Pclass == passenger.Pclass)
].Survived.mean()
# Survival rate based on sex and age.
byAge = df[
(df.Sex == passenger.Sex) &
((df.Age//5-1)*5 <= passenger.Age) & (passenger.Age <= (df.Age//5)*5)
].Survived.mean()
# Find the weighting for each of the rates.
parameters = [bySexAndClass, byAge]
rolls = [5, 4] # Roll numbers are hardcoded until I figure out the weighting system
probabilities = []
for Nrolls, prob in zip(rolls, parameters):
for _ in range(Nrolls):
probabilities += [prob]
return probabilities
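# Hedged aside (my addition, not used anywhere in this notebook): the "weighted average"
# idea from section 4.1 could look roughly like this, weighting each estimate by how far
# it sits from random chance (0.5) and normalising the weights to sum to one.
def weighted_probability(estimates):
    distances = [abs(p - 0.5) for p in estimates]
    total = sum(distances)
    if total == 0:
        return 0.5  # every estimate is at random chance
    return sum((d / total) * p for d, p in zip(distances, estimates))
# e.g. weighted_probability([0.9, 0.6]) == 0.8*0.9 + 0.2*0.6 == 0.84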
##############################################################################################
def make_guesses(df):
Makes guesses on if the passengers survived or died.
guesses = list()
for passenger_index, _row in df.iterrows():
# Find if the passenger survived.
survival_odds = getProbability(passenger_index, df)
roll_outcomes = []
for prob in survival_odds:
roll_outcomes += [random.random() <= prob]
survived = sum(roll_outcomes) > len(roll_outcomes)/2
# Add the result to the guesses
guesses.append(survived)
return guesses
##############################################################################################
df['Guess'] = make_guesses(df)
df['CorrectGuess'] = df.Guess == df.Survived
df.head()
df.CorrectGuess.mean()
Explanation: Now with these results visualised it should be easier to see. The survival rate for infants (<5 years) is quite high, while for men between 20 to 25 years it is only 35%. Anytime there is a 50% reading it is because there isn't enough information and you can only conclude that the probability matches random chance.
6. Bringing all of it Together
Now that we have computed enough information for our model, we can begin to combine it all together and form our weighted probability of surviving.
In the end I decided that the best way to evaluate the model is through the use of ensembling, as it gave a much better result when dealing with unbiased and unrelated decision trees. That is to say, each parameter throws separate dice some number of times and a majority vote is taken. That way we don't have to deal with problems with weighting and can treat each parameter separately.
End of explanation
results = list()
for ii in range(10**2):
guesses = make_guesses(df)
correct_guesses = (df.Survived == guesses)
results.append(correct_guesses.mean())
if ii % 10 == 0: print("%i/%i" % (ii, 10**2))
sns.distplot(results, kde=False)
None
Explanation: Currently the execution time for the below cell is really long because I haven't bothered to optimise it.
End of explanation |
14,305 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
Step1: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
Step2: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower overall ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of which likely affect the number of riders. You'll be trying to capture all this with your model.
Step3: Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
Step4: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
Step5: Splitting the data into training, testing, and validation sets
We'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
Step6: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
Step7: Time to build the network
Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.
Step8: Unit tests
Run these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly before you start trying to train it. These tests must all be successful to pass the project.
Step9: Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
Choose the number of iterations
This is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, if you use too many iterations, then the model will not generalize well to other data; this is called overfitting. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. As you start overfitting, you'll see the training loss continue to decrease while the validation loss starts to increase.
Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
Choose the number of hidden nodes
The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.
Step10: Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly. | Python Code:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
Explanation: Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
End of explanation
data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
rides.head()
Explanation: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
End of explanation
rides[:24*10].plot(x='dteday', y='cnt')
Explanation: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower overall ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of which likely affect the number of riders. You'll be trying to capture all this with your model.
End of explanation
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
rides = pd.concat([rides, dummies], axis=1)
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
Explanation: Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
End of explanation
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
mean, std = data[each].mean(), data[each].std()
scaled_features[each] = [mean, std]
data.loc[:, each] = (data[each] - mean)/std
Explanation: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
End of explanation
# Save data for approximately the last 21 days
test_data = data[-21*24:]
# Now remove the test data from the data set
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
Explanation: Splitting the data into training, testing, and validation sets
We'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
End of explanation
# Hold out the last 60 days or so of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
Explanation: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
End of explanation
class NeuralNetwork(object):
def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_input_to_hidden = np.random.normal(0.0, self.input_nodes**-0.5,
(self.input_nodes, self.hidden_nodes))
self.weights_hidden_to_output = np.random.normal(0.0, self.hidden_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
self.lr = learning_rate
#### TODO: Set self.activation_function to your implemented sigmoid function ####
#
# Note: in Python, you can define a function with a lambda expression,
# as shown below.
self.activation_function = lambda x : 1 / (1 + np.exp(-x)) # sigmoid activation
def train(self, features, targets):
''' Train the network on batch of features and targets.
Arguments
---------
features: 2D array, each row is one data record, each column is a feature
targets: 1D array of target values
'''
n_records = features.shape[0]
delta_weights_i_h = np.zeros(self.weights_input_to_hidden.shape)
delta_weights_h_o = np.zeros(self.weights_hidden_to_output.shape)
for X, y in zip(features, targets):
#### Implement the forward pass here ####
### Forward pass ###
# TODO: Hidden layer - Replace these values with your calculations.
hidden_inputs = np.dot(X, self.weights_input_to_hidden) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
# TODO: Output layer - Replace these values with your calculations.
final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output) # signals into final output layer
final_outputs = final_inputs # signals from final output layer
#### Implement the backward pass here ####
### Backward pass ###
# TODO: Output error - Replace this value with your calculations.
error = y - final_outputs # Output layer error is the difference between desired target and actual output.
# TODO: Calculate the hidden layer's contribution to the error
hidden_error = error * self.weights_hidden_to_output
# TODO: Backpropagated error terms - Replace these values with your calculations.
output_error_term = error
hidden_error_term = hidden_error.T * hidden_outputs * (1 - hidden_outputs)
# Weight step (input to hidden)
delta_weights_i_h += np.dot(X[:, None], hidden_error_term)
# Weight step (hidden to output)
delta_weights_h_o += (output_error_term * hidden_outputs)[:,None]
# TODO: Update the weights - Replace these values with your calculations.
self.weights_hidden_to_output += self.lr * (delta_weights_h_o/n_records) # update hidden-to-output weights with gradient descent step
self.weights_input_to_hidden += self.lr * (delta_weights_i_h/n_records) # update input-to-hidden weights with gradient descent step
def run(self, features):
''' Run a forward pass through the network with input features
Arguments
---------
features: 1D array of feature values
'''
#### Implement the forward pass here ####
# TODO: Hidden layer - replace these values with the appropriate calculations.
hidden_inputs = np.dot(features, self.weights_input_to_hidden) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
# TODO: Output layer - Replace these values with the appropriate calculations.
final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output) # signals into final output layer
final_outputs = final_inputs # signals from final output layer
return final_outputs
def MSE(y, Y):
return np.mean((y-Y)**2)
Explanation: Time to build the network
Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.
<img src="assets/neural_network.png" width=300px>
The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression; the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network, calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called forward propagation.
We use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called backpropagation.
Hint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.
Below, you have these tasks:
1. Implement the sigmoid function to use as the activation function. Set self.activation_function in __init__ to your sigmoid function.
2. Implement the forward pass in the train method.
3. Implement the backpropagation algorithm in the train method, including calculating the output error.
4. Implement the forward pass in the run method.
End of explanation
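As a quick sanity check of the pieces described above, here is a small standalone sketch (independent of the class) of the sigmoid and one forward pass, reusing the tiny arrays that the unit tests below use; because the output activation is the identity, its derivative is simply 1, which is why the output error term is just (y - output).
import numpy as np
def sigmoid(x):
    return 1 / (1 + np.exp(-x))
x = np.array([0.5, -0.2, 0.1])                            # one input record
w_ih = np.array([[0.1, -0.2], [0.4, 0.5], [-0.3, 0.2]])   # 3 inputs -> 2 hidden units
w_ho = np.array([[0.3], [-0.1]])                          # 2 hidden units -> 1 output
hidden = sigmoid(x.dot(w_ih))                             # sigmoid only on the hidden layer
output = hidden.dot(w_ho)                                 # identity activation on the output
print(hidden, output)                                     # output is roughly 0.09999, consistent with test_run below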
import unittest
inputs = np.array([[0.5, -0.2, 0.1]])
targets = np.array([[0.4]])
test_w_i_h = np.array([[0.1, -0.2],
[0.4, 0.5],
[-0.3, 0.2]])
test_w_h_o = np.array([[0.3],
[-0.1]])
class TestMethods(unittest.TestCase):
##########
# Unit tests for data loading
##########
def test_data_path(self):
# Test that file path to dataset has been unaltered
self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')
def test_data_loaded(self):
# Test that data frame loaded
self.assertTrue(isinstance(rides, pd.DataFrame))
##########
# Unit tests for network functionality
##########
def test_activation(self):
network = NeuralNetwork(3, 2, 1, 0.5)
# Test that the activation function is a sigmoid
self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))
def test_train(self):
# Test that weights are updated correctly on training
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
network.train(inputs, targets)
self.assertTrue(np.allclose(network.weights_hidden_to_output,
np.array([[ 0.37275328],
[-0.03172939]])))
self.assertTrue(np.allclose(network.weights_input_to_hidden,
np.array([[ 0.10562014, -0.20185996],
[0.39775194, 0.50074398],
[-0.29887597, 0.19962801]])))
def test_run(self):
# Test correctness of run method
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
self.assertTrue(np.allclose(network.run(inputs), 0.09998924))
suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
Explanation: Unit tests
Run these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly before you start trying to train it. These tests must all be successful to pass the project.
End of explanation
import sys
### Set the hyperparameters here ###
iterations = 3500
learning_rate = 0.4
hidden_nodes = 4
output_nodes = 1
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
for ii in range(iterations):
# Go through a random batch of 128 records from the training data set
batch = np.random.choice(train_features.index, size=128)
X, y = train_features.loc[batch].values, train_targets.loc[batch]['cnt']
network.train(X, y)
# Printing out the training progress
train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values)
val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values)
sys.stdout.write("\rProgress: {:2.1f}".format(100 * ii/float(iterations)) \
+ "% ... Training loss: " + str(train_loss)[:5] \
+ " ... Validation loss: " + str(val_loss)[:5])
sys.stdout.flush()
losses['train'].append(train_loss)
losses['validation'].append(val_loss)
plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
_ = plt.ylim()
Explanation: Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
Choose the number of iterations
This is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, if you use too many iterations, then the model will not generalize well to other data; this is called overfitting. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. As you start overfitting, you'll see the training loss continue to decrease while the validation loss starts to increase.
Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
Choose the number of hidden nodes
The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.
End of explanation
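One way to act on the advice above is a small sweep over a few candidate settings. This is only a sketch that reuses the NeuralNetwork class, the train/validation splits, and MSE defined earlier; the candidate values are arbitrary starting points.
for lr in [0.05, 0.1, 0.4]:
    for n_hidden in [4, 8, 16]:
        net = NeuralNetwork(train_features.shape[1], n_hidden, 1, lr)
        for _ in range(500):                          # short run, just for comparison
            batch = np.random.choice(train_features.index, size=128)
            net.train(train_features.loc[batch].values, train_targets.loc[batch]['cnt'])
        val_loss = MSE(net.run(val_features).T, val_targets['cnt'].values)
        print("lr=%.2f hidden=%2d -> validation loss %.3f" % (lr, n_hidden, val_loss))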
fig, ax = plt.subplots(figsize=(8,4))
mean, std = scaled_features['cnt']
predictions = network.run(test_features).T*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()
dates = pd.to_datetime(rides.loc[test_data.index]['dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
Explanation: Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
End of explanation |
14,306 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Advanced indexing
Step1: This dataset is borrowed from the PyCon tutorial of Brandon Rhodes (so all credit to him!). You can download these data from here: titles.csv and cast.csv and put them in the /data folder.
Step2: Setting columns as the index
Why is it useful to have an index?
Giving meaningful labels to your data -> easier to remember which data are where
Unleash some powerful methods, e.g. with a DatetimeIndex for time series
Easier and faster selection of data
It is this last one we are going to explore here!
Setting the title column as the index
Step3: Instead of doing
Step4: we can now do
Step5: But you can also have multiple columns as the index, leading to a multi-index or hierarchical index | Python Code:
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
try:
import seaborn
except ImportError:
pass
pd.options.display.max_rows = 10
Explanation: Advanced indexing
End of explanation
cast = pd.read_csv('data/cast.csv')
cast.head()
titles = pd.read_csv('data/titles.csv')
titles.head()
Explanation: This dataset is borrowed from the PyCon tutorial of Brandon Rhodes (so all credit to him!). You can download these data from here: titles.csv and cast.csv and put them in the /data folder.
End of explanation
c = cast.set_index('title')
c.head()
Explanation: Setting columns as the index
Why is it useful to have an index?
Giving meaningful labels to your data -> easier to remember which data are where
Unleash some powerful methods, e.g. with a DatetimeIndex for time series
Easier and faster selection of data
It is this last one we are going to explore here!
Setting the title column as the index:
End of explanation
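A small aside (standard pandas, shown only for orientation): the new index is easy to inspect, and reset_index() undoes the operation by turning the titles back into an ordinary column.
c.index                  # the titles now act as row labels
c.reset_index().head()   # back to a default integer index, with 'title' as a column again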
%%time
cast[cast['title'] == 'Hamlet']
Explanation: Instead of doing:
End of explanation
%%time
c.loc['Hamlet']
Explanation: we can now do:
End of explanation
c = cast.set_index(['title', 'year'])
c.head()
%%time
c.loc[('Hamlet', 2000),:]
c2 = c.sort_index()
%%time
c2.loc[('Hamlet', 2000),:]
Explanation: But you can also have multiple columns as the index, leading to a multi-index or hierarchical index:
End of explanation |
14,307 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Traffic flow: the Lighthill-Whitham-Richards model
Step1: The LWR model
Recall the continuity equation for any density that is advected with a flow
Step2: Combining the two equations above, our conservation law says
$$\rho_t + (\rho (1-\rho))_x = 0$$
with the flux function
$$f(\rho) = \rho(1-\rho)$$
giving the rate of flow of cars. Notice how the flux is zero when there are no cars ($\rho=0$) and also when the road is completely full ($\rho=1$). The maximum flow of traffic actually occurs when the road is half full, as the plot below shows.
Step3: Like the flux in Burgers' equation, the LWR flux is nonlinear, so we again expect to see shock waves and rarefaction waves in the solution. We can superficially make this equation look like the advection equation by using the chain rule to write it in quasilinear form
Step4: Note that the vehicle trajectory plot above shows the motion of cars moving at velocity $u_\ell = 1 - \rho_\ell>0$ as they approach the traffic jam, and at speed $u_r = 1-\rho_r = 0$ to the right of the shock, where the cars are stationary.
Unlike the case of linear advection, the characteristic speeds are different, with $f'(\rho_\ell) = 1 - 2\rho_\ell$, which could be either negative or positive depending on $\rho_\ell$, and $f'(\rho_r) = -1$.
Speed of a shock wave
Step5: However, the shock is not stationary, so the line is moving. Let $s$ be the speed of the shock. Then as the line moves to the left, some cars that were to the left are now to the right of the line. The rate of cars removed from the left is $s \rho_\ell$ and the rate of cars added on the right is $s \rho_r$, as shown in this figure
Step6: So in order to avoid an infinite density of cars at the shock, these two effects need to be balanced
Step7: In the $x$-$t$ plane plot above, the black curves show vehicle trajectories while the blue rays are characteristics corresponding to values of $\rho$ between $\rho_\ell = 1$ and $\rho_r$, with the left-most characteristic following $x=f'(q_\ell)t$ and the right-most characteristic following $x= f'(q_r)t$.
Entropy condition
How can we determine whether an initial discontinuity will lead to a shock or a rarefaction? We have already addressed this for scalar equations in Burgers. Recall the Lax Entropy Condition introduced there
Step8: On the other hand, if $f'(\rho_\ell)< f'(\rho_r)$, then a rarefaction wave results and the initial discontinuity immediately spreads out.
Here is an example showing the characteristics in a rarefaction.
Step9: Interactive Riemann solution
In the live notebook, the interactive module below shows the solution
of the Riemann problem for any inputs $(\rho_\ell,\rho_r)$ with slider bars to adjust these. The characteristics and vehicle trajectories are also plotted. | Python Code:
%matplotlib inline
%config InlineBackend.figure_format = 'svg'
from ipywidgets import interact
from ipywidgets import FloatSlider, fixed
from exact_solvers import traffic_LWR
from exact_solvers import traffic_demos
from IPython.display import Image
Explanation: Traffic flow: the Lighthill-Whitham-Richards model
In this chapter we investigate a conservation law that models the flow of traffic. This model is sometimes referred to as the Lighthill-Whitham-Richards (or LWR) traffic model (see <cite data-cite="lighthill1955kinematic"><a href="riemann.html#lighthill1955kinematic">(Lighthill, 1955)</a></cite> and <cite data-cite="richards1956shock"><a href="riemann.html#richards1956shock">(Richards, 1956)</a></cite>). This model and the corresponding Riemann problem are discussed in many places; the discussion here is most closely related to that in Chapter 11 of <cite data-cite="fvmhp"><a href="riemann.html#fvmhp">(LeVeque, 2002)</a></cite>.
This nonlinear scalar problem is similar to Burgers' equation that we already discussed in Burgers in many ways, since both involve a quadratic (and hence convex or concave) flux function. In this notebook we repeat some of the discussion from Burgers in order to reinforce essential concepts that will be important throughout the remainder of the book.
If you wish to examine the Python code for this chapter, please see:
exact_solvers/traffic_LWR.py ...
on github,
exact_solvers/traffic_demos.py ...
on github.
End of explanation
Image('figures/LWR-Velocity.png', width=350)
Explanation: The LWR model
Recall the continuity equation for any density that is advected with a flow:
$$\rho_t + (u\rho)_x = 0.$$
In this chapter, $\rho$ represents the density of cars on a road, traveling with velocity $u$. Note that we're not keeping track of the individual cars, but just of the average number of cars per unit length of road. Thus $\rho=0$ represents an empty stretch of road, and we can choose the units so that $\rho=1$ represents bumper-to-bumper traffic.
We'll also choose units so that the speed limit is $u_\text{max}=1$, and assume that drivers never go faster than this (yeah, right!). If we assume that drivers always travel at a single uniform velocity, we obtain once again the advection equation that we studied in Advection. But we all know that's not accurate in practice -- cars go faster in light traffic and slower when there is congestion. The simplest way to incorporate this effect is to make the velocity a linearly decreasing function of the density:
$$u(\rho) = 1 - \rho.$$
Notice that $u$ goes to zero as $\rho$ approaches the maximum density of 1, while $u$ goes to the maximum value of 1 as traffic density goes to zero. Obviously, both $\rho$ and $u$ should always stay in the interval $[0,1]$.
Here is a plot of this velocity function:
End of explanation
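The figure above is loaded from a static image; an equivalent plot is easy to generate directly from $u(\rho) = 1 - \rho$ (a small sketch):
import numpy as np
import matplotlib.pyplot as plt
rho = np.linspace(0., 1., 100)
plt.plot(rho, 1. - rho)
plt.xlabel(r'$\rho$')
plt.ylabel(r'$u(\rho)$')
plt.title('Velocity as a function of density');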
f = lambda rho: rho*(1-rho)
traffic_demos.plot_flux(f)
Explanation: Combining the two equations above, our conservation law says
$$\rho_t + (\rho (1-\rho))_x = 0$$
with the flux function
$$f(\rho) = \rho(1-\rho)$$
giving the rate of flow of cars. Notice how the flux is zero when there are no cars ($\rho=0$) and also when the road is completely full ($\rho=1$). The maximum flow of traffic actually occurs when the road is half full, as the plot below shows.
End of explanation
interact(traffic_demos.jam,
rho_l=FloatSlider(min=0.1,max=0.9,value=0.5,
description=r'$\rho_l$'),
t=FloatSlider(min=0.,max=1.,value=0.5), fig=fixed(0));
Explanation: Like the flux in Burgers' equation, the LWR flux is nonlinear, so we again expect to see shock waves and rarefaction waves in the solution. We can superficially make this equation look like the advection equation by using the chain rule to write it in quasilinear form:
$$f(\rho)_x = f'(\rho) \rho_x = (1-2\rho)\rho_x.$$
Then we have
$$\rho_t + (1-2\rho)\rho_x = 0.$$
This is like the advection equation, but with a velocity $1-2\rho$ that depends on the density of cars. The value $f'(\rho)=1-2\rho$ is referred to as the characteristic speed. This characteristic speed is not the speed at which cars move (notice that it can even be negative, whereas cars only drive to the right in our model). Rather, it is the speed at which information is transmitted along the road. Notice that the LWR flux is not convex but concave; because of this, the characteristic speed is a decreasing function of the density.
Example: Traffic jam
What does our model predict when traffic approaches a totally congested ($\rho=1$) area? This might be due to construction, an accident or a red light somewhere to the right; upstream of the obstruction, cars will be bumper-to-bumper, so we set $\rho=1$ for $x>0$ (supposing that traffic has backed up to that point). For $x<0$ we'll assume a lower density $\rho_\ell<1$. This is another example of a Riemann problem: two constant states separated by a discontinuity. We have
$$
\rho(x,t=0) = \begin{cases} \rho_\ell & x<0 \\
1 & x>0. \end{cases}
$$
What will happen as time goes forward? Intuitively, we expect traffic to continue backing up to the left, so the region with $\rho=1$ will extend further and further to the left. This corresponds to the discontinuity (or shock wave) moving to the left. How quickly will it move? The example below shows the solution (on the left) and individual vehicle trajectories in the $x-t$ plane (on the right).
End of explanation
Image('figures/shock_diagram_traffic_a.png', width=350)
Explanation: Note that the vehicle trajectory plot above shows the motion of cars moving at velocity $u_\ell = 1 - \rho_\ell>0$ as they approach the traffic jam, and at speed $u_r = 1-\rho_r = 0$ to the right of the shock, where the cars are stationary.
Unlike the case of linear advection, the characteristic speeds are different, with $f'(\rho_\ell) = 1 - 2\rho_\ell$, which could be either negative or positive depending on $\rho_\ell$, and $f'(\rho_r) = -1$.
Speed of a shock wave: the Rankine-Hugoniot condition
In the plot above, we see a shock wave (i.e., a discontinuity) that moves to the left as more and more cars pile up behind the traffic jam. How quickly does this discontinuity move to the left?
We can figure it out by putting an imaginary line at the location of the shock, as shown in the next figure.
Let $\rho_\ell$ be the density of cars just to the left of the line, and let $\rho_r$ be the density of cars just to the right. Imagine for a moment that the line is stationary. Then the rate of cars reaching the line from the left is $f(\rho_\ell)$ and the rate of cars departing from the line to the right is $f(\rho_r)$. If the line really were stationary, we would need to have $f(\rho_\ell)-f(\rho_r)=0$ to avoid cars accumulating at the line.
End of explanation
Image('figures/shock_diagram_traffic_b.png', width=350)
Explanation: However, the shock is not stationary, so the line is moving. Let $s$ be the speed of the shock. Then as the line moves to the left, some cars that were to the left are now to the right of the line. The rate of cars removed from the left is $s \rho_\ell$ and the rate of cars added on the right is $s \rho_r$, as shown in this figure:
End of explanation
interact(traffic_demos.green_light,
rho_r=FloatSlider(min=0.,max=0.9,value=0.3,
description=r'$\rho_r$'),
t=FloatSlider(min=0.,max=1.), fig=fixed(0));
Explanation: So in order to avoid an infinite density of cars at the shock, these two effects need to be balanced:
$$f(\rho_\ell) - f(\rho_r) = s(\rho_\ell - \rho_r).$$
This same condition was used for Burgers' equation in Burgers, and is known as the Rankine-Hugoniot condition. It holds for any shock wave in the solution of any hyperbolic PDE (even systems of equations, where the corresponding vector version gives even more information about the structure of allowable shock waves).
Returning to our traffic jam scenario, we set $\rho_r=1$. Then we find that the Rankine-Hugoniot condition gives the shock speed
$$s = \frac{f(\rho_\ell)-f(\rho_r)}{\rho_\ell-\rho_r} = \frac{f(\rho_\ell)}{\rho_\ell-1} = -\rho_\ell.$$
This makes sense: the traffic jam propagates back along the road, and it does so more quickly if there is a greater density of approaching cars.
Example: green light
What about when a traffic light turns green? At $t=0$, when the light changes, there will be a discontinuity, with
traffic backed up behind the light but little or no traffic after the light. With the light at $x=0$, this takes the form of another Riemann problem:
$$
\rho(x,t=0) = \begin{cases} 1 & x<0, \\
\rho_r & x>0, \end{cases}
$$
with $\rho_r = 0$, for example.
In this case we don't expect the discontinuity in density to propagate. Physically, the reason is clear: after the light turns green, the cars in front accelerate and spread out; then the cars behind them accelerate, and so forth. This kind of expansion wave is referred to as a rarefaction wave because the drivers experience a decrease in density (a rarefaction) as they pass through this wave. Initially, the solution is discontinuous, but after time zero it becomes continuous.
Similarity solutions
The exact form of the solution at a green light can be determined by assuming that the solution $\rho(x,t)$ depends only on $x/t$. A solution with this property is referred to as a similarity solution because it remains the same if we rescale both $x$ and $t$ by the same factor. The solution of any Riemann problem is, in fact, a similarity solution. Writing $\rho(x,t) = \tilde{\rho}(x/t)$ we have (with $\xi = x/t$):
\begin{align}
\rho_t & = -\frac{x}{t^2}\tilde{\rho}'(\xi) & f(\rho)_x & = \frac{1}{t}\tilde{\rho}'(\xi) f'(\tilde{\rho}(\xi)).
\end{align}
Thus
\begin{align}
\rho_t + f(\rho)_x = -\frac{x}{t^2}\tilde{\rho}'(\xi) + \frac{1}{t}\tilde{\rho}'(\xi) f'(\tilde{\rho}(\xi)) = 0.
\end{align}
This can be solved to find
\begin{align}
f'(\tilde{\rho}(\xi)) & = \frac{x}{t}
\end{align}
or, since $f'(\tilde{\rho}) = 1-2\tilde{\rho}$,
\begin{align}
\tilde{\rho}(\xi) & = \frac{1}{2}\left(1 - \frac{x}{t}\right).
\end{align}
We know that the solution far enough to the left is just $\rho_\ell=1$, and far enough to the right it is $\rho_r$. The formula above gives the solution in the region between these constant states. For instance, if $\rho_r=0$ (i.e., the road beyond the light is empty at time zero), then
\begin{align}
\rho(x,t) & = \begin{cases}
1 & x/t \le -1 \\
\frac{1}{2}\left(1 - x/t\right) & -1 < x/t < 1 \\
0 & 1 \le x/t.
\end{cases}
\end{align}
The plot below shows the solution density and vehicle trajectories for a green light at $x=0$.
End of explanation
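The piecewise formula just derived is easy to evaluate directly; here is a brief sketch (independent of the exact_solvers helpers) that plots $\rho(x,t)$ for the green-light problem with $\rho_r = 0$ at a fixed time:
import numpy as np
import matplotlib.pyplot as plt
def rho_green_light(x, t):
    # similarity solution for rho_l = 1, rho_r = 0 (valid for t > 0)
    xi = x / t
    return np.where(xi <= -1, 1., np.where(xi >= 1, 0., 0.5*(1. - xi)))
x = np.linspace(-2, 2, 400)
plt.plot(x, rho_green_light(x, t=1.0))
plt.xlabel('$x$')
plt.ylabel(r'$\rho(x, t=1)$');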
traffic_LWR.plot_riemann_traffic(0.2,0.4,t=0.5)
Explanation: In the $x$-$t$ plane plot above, the black curves show vehicle trajectories while the blue rays are characteristics corresponding to values of $\rho$ between $\rho_\ell = 1$ and $\rho_r$, with the left-most characteristic following $x=f'(q_\ell)t$ and the right-most characteristic following $x= f'(q_r)t$.
Entropy condition
How can we determine whether an initial discontinuity will lead to a shock or a rarefaction? We have already addressed this for scalar equations in Burgers. Recall the Lax Entropy Condition introduced there:
Shocks appear in regions where characteristics converge, as in the traffic jam example above.
Rarefactions appear in regions where characteristics are spreading out, as in the green light example.
More precisely, if the solution is a shock wave with left state $\rho_\ell$ and right state $\rho_r$, then it must be that $f'(\rho_\ell)>f'(\rho_r)$. In fact the shock speed must lie between these characteristic speeds:
$$f'(\rho_\ell) > s > f'(\rho_r).$$
We say that the characteristics impinge on the shock.
Here is an example showing the characteristics near a shock:
End of explanation
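A quick numerical check of this criterion for the quadratic traffic flux (a sketch; the two test cases match the plots in this section):
def riemann_wave_type(rho_l, rho_r):
    fprime = lambda rho: 1. - 2.*rho              # characteristic speed for f(rho) = rho*(1-rho)
    if fprime(rho_l) > fprime(rho_r):
        f = lambda rho: rho*(1. - rho)
        s = (f(rho_l) - f(rho_r)) / (rho_l - rho_r)   # Rankine-Hugoniot shock speed
        return 'shock, s = %.3f' % s
    return 'rarefaction'
print(riemann_wave_type(0.2, 0.4))   # characteristics impinge -> shock
print(riemann_wave_type(0.4, 0.2))   # characteristics spread out -> rarefaction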
traffic_LWR.plot_riemann_traffic(0.4,0.2,t=0.5)
Explanation: On the other hand, if $f'(\rho_\ell)< f'(\rho_r)$, then a rarefaction wave results and the initial discontinuity immediately spreads out.
Here is an example showing the characteristics in a rarefaction.
End of explanation
traffic_LWR.riemann_solution_interact()
Explanation: Interactive Riemann solution
In the live notebook, the interactive module below shows the solution
of the Riemann problem for any inputs $(\rho_\ell,\rho_r)$ with slider bars to adjust these. The characteristics and vehicle trajectories are also plotted.
End of explanation |
14,308 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Playing atari with advantage actor-critic
This time we're going to learn something harder than CartPole
Step3: Processing game image
Raw atari images are large, 210x160x3 by default. However, we don't need that level of detail in order to learn them.
We can thus save a lot of time by preprocessing the game image, including
* Resizing to a smaller shape
* Converting to grayscale
* Cropping irrelevant image parts
Step4: Basic agent setup
Here we define a simple agent that maps game images into a policy using a simple convolutional neural network.
Step5: Network body
Here we will need to build a convolutional network that consists of 4 layers
Step6: Network head
You will now need to build output layers.
Since we're building an advantage actor-critic algorithm, our network will require two outputs
Step7: Finally, agent
We declare that this network is an MDP agent with such and such inputs, states and outputs
Step8: Create and manage a pool of atari sessions to play with
To make training more stable, we shall have an entire batch of game sessions, each happening independently of the others
Why several parallel agents help training
Step9: Advantage actor-critic
An agent has a method that produces symbolic environment interaction sessions
Such sessions are sequences of observations, agent memory, actions, q-values, etc.;
one has to pre-define the maximum session length.
SessionPool also stores rewards, alive indicators, etc.
Code mostly copied from here
Step11: Demo run
Step12: Training loop
Step14: Evaluating results
Here we plot learning curves and sample testimonials | Python Code:
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
#setup theano/lasagne. Prefer GPU
%env THEANO_FLAGS=device=gpu,floatX=float32
#If you are running on a server, launch xvfb to record game videos
#Please make sure you have xvfb installed (apt-get install xvfb, see gym readme on xvfb)
import os
if type(os.environ.get("DISPLAY")) is not str or len(os.environ.get("DISPLAY")) == 0:
!bash xvfb start
%env DISPLAY=:1
Explanation: Playing atari with advantage actor-critic
This time we're going to learn something harder than CartPole :)
Gym atari games only allow raw image pixels as observation, hence demanding a more powerful agent network to find meaningful features. We shall use a convolutional neural network for such task.
Most of the code in this notebook is written for you, however you are strongly encouraged to experiment with it to find better agent configuration and/or learning algorithm.
End of explanation
from gym.core import ObservationWrapper
from gym.spaces import Box
from scipy.misc import imresize
class PreprocessAtari(ObservationWrapper):
def __init__(self, env):
'''A gym wrapper that crops, scales the image into the desired shape and optionally grayscales it.'''
ObservationWrapper.__init__(self,env)
self.img_size = (64, 64)
self.observation_space = Box(0.0, 1.0, self.img_size)
def _observation(self, img):
'''What happens to each observation.'''
# Here's what you need to do:
# * crop image, remove irrelevant parts
# * resize image to self.img_size
# (use imresize imported above or any library you want,
# e.g. opencv, skimage, PIL, keras)
# * cast image to grayscale
# * convert image pixels to (0,1) range, float32 type
<Your code here>
return <...>
import gym
#for other games, see https://gym.openai.com/envs
def make_env():
env = gym.make("KungFuMaster-v0")
return PreprocessAtari(env)
#spawn game instance
env = make_env()
observation_shape = env.observation_space.shape
n_actions = env.action_space.n
obs = env.reset()
plt.imshow(obs[0],interpolation='none',cmap='gray')
Explanation: Processing game image
Raw atari images are large, 210x160x3 by default. However, we don't need that level of detail in order to learn them.
We can thus save a lot of time by preprocessing the game image, including
* Resizing to a smaller shape
* Converting to grayscale
* Cropping irrelevant image parts
End of explanation
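One possible way to fill in the _observation template above, written as a standalone function so it can be tested on a raw frame first; the crop bounds are a guess for KungFuMaster, and the channel-first output shape is an assumption chosen to match the obs[0] plotting call above.
import numpy as np
from scipy.misc import imresize
def preprocess_observation(img, img_size=(64, 64)):
    img = img[60:-30, 5:-5]                 # crop the score bar and side borders (guessed bounds)
    img = imresize(img, img_size)           # downscale to 64x64
    img = img.mean(-1, keepdims=True)       # grayscale by averaging the RGB channels
    img = np.transpose(img, (2, 0, 1))      # channel-first: shape (1, 64, 64)
    return img.astype('float32') / 255.     # pixels in [0, 1]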
import theano, lasagne
import theano.tensor as T
from lasagne.layers import *
from agentnet.memory import WindowAugmentation
#observation goes here
observation_layer = InputLayer((None,)+observation_shape,)
#4-tick window over images
prev_wnd = InputLayer((None,4)+observation_shape,name='window from last tick')
new_wnd = WindowAugmentation(observation_layer,prev_wnd,name='updated window')
#reshape to (frame, h,w). If you don't use grayscale, 4 should become 12.
wnd_reshape = reshape(new_wnd, (-1,4*observation_shape[0])+observation_shape[1:])
Explanation: Basic agent setup
Here we define a simple agent that maps game images into a policy using a simple convolutional neural network.
End of explanation
from lasagne.nonlinearities import rectify,elu,tanh,softmax
#network body
conv0 = Conv2DLayer(wnd_reshape,<...>)
conv1 = <another convolutional layer, growing from conv0>
conv2 = <yet another layer...>
dense = DenseLayer(<what is its input?>,
nonlinearity=tanh,
name='dense "neck" layer')
Explanation: Network body
Here we will need to build a convolutional network that consists of 4 layers:
* 3 convolutional layers with 32 filters, 5x5 window size, 2x2 stride
* Choose any nonlinearity except softmax
* You may want to increase number of filters for the last layer
* Dense layer on top of all convolutions
* anywhere between 100 and 512 neurons
You may find a template for such a network below
End of explanation
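One way the template above could be filled in, following the 32-filter / 5x5 / stride-2 suggestion; this is a sketch, not the reference solution, and the layer sizes are just reasonable defaults.
conv0 = Conv2DLayer(wnd_reshape, num_filters=32, filter_size=(5, 5), stride=(2, 2),
                    nonlinearity=elu, name='conv0')
conv1 = Conv2DLayer(conv0, num_filters=32, filter_size=(5, 5), stride=(2, 2),
                    nonlinearity=elu, name='conv1')
conv2 = Conv2DLayer(conv1, num_filters=64, filter_size=(5, 5), stride=(2, 2),
                    nonlinearity=elu, name='conv2')
dense = DenseLayer(conv2, num_units=256, nonlinearity=tanh, name='dense "neck" layer')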
#actor head
logits_layer = DenseLayer(dense,n_actions,nonlinearity=None)
#^^^ separately define pre-softmax policy logits to regularize them later
policy_layer = NonlinearityLayer(logits_layer,softmax)
#critic head
V_layer = DenseLayer(dense,1,nonlinearity=None)
#sample actions proportionally to policy_layer
from agentnet.resolver import ProbabilisticResolver
action_layer = ProbabilisticResolver(policy_layer)
Explanation: Network head
You will now need to build output layers.
Since we're building an advantage actor-critic algorithm, our network will require two outputs:
* policy, $\pi(a|s)$, defining action probabilities
* state value, $V(s)$, defining expected reward from the given state
Both of these layers will grow from the final dense layer of the network body.
End of explanation
from agentnet.agent import Agent
#all together
agent = Agent(observation_layers=observation_layer,
policy_estimators=(logits_layer,V_layer),
agent_states={new_wnd:prev_wnd},
action_layers=action_layer)
#Since it's a single lasagne network, one can get its weights, output, etc
weights = lasagne.layers.get_all_params([V_layer,policy_layer],trainable=True)
weights
Explanation: Finally, agent
We declare that this network is an MDP agent with such and such inputs, states and outputs
End of explanation
from agentnet.experiments.openai_gym.pool import EnvPool
#number of parallel agents
N_AGENTS = 10
pool = EnvPool(agent,make_env, N_AGENTS) #may need to adjust
%%time
#interact for 10 ticks
_,action_log,reward_log,_,_,_ = pool.interact(10)
print('actions:')
print(action_log[0])
print("rewards")
print(reward_log[0])
# batch sequence length (frames)
SEQ_LENGTH = 25
#load first sessions (this function calls interact and remembers sessions)
pool.update(SEQ_LENGTH)
Explanation: Create and manage a pool of atari sessions to play with
To make training more stable, we shall have an entire batch of game sessions, each happening independently of the others
Why several parallel agents help training: http://arxiv.org/pdf/1602.01783v1.pdf
Alternative approach: store more sessions: https://www.cs.toronto.edu/~vmnih/docs/dqn.pdf
End of explanation
#get agent's policy logits and state values obtained via experience replay
#we don't unroll scan here and propagate automatic updates
#to speed up compilation at a cost of runtime speed
replay = pool.experience_replay
_,_,_,_,(logits_seq,V_seq) = agent.get_sessions(
replay,
session_length=SEQ_LENGTH,
experience_replay=True,
unroll_scan=False,
)
auto_updates = agent.get_automatic_updates()
# compute pi(a|s) and log(pi(a|s)) manually [use logsoftmax]
# we can't guarantee that theano optimizes logsoftmax automatically since it's still in dev
logits_flat = logits_seq.reshape([-1,logits_seq.shape[-1]])
policy_seq = T.nnet.softmax(logits_flat).reshape(logits_seq.shape)
logpolicy_seq = T.nnet.logsoftmax(logits_flat).reshape(logits_seq.shape)
# get policy gradient
from agentnet.learning import a2c
elwise_actor_loss,elwise_critic_loss = a2c.get_elementwise_objective(policy=logpolicy_seq,
treat_policy_as_logpolicy=True,
state_values=V_seq[:,:,0],
actions=replay.actions[0],
rewards=replay.rewards/100.,
is_alive=replay.is_alive,
gamma_or_gammas=0.99,
n_steps=None,
return_separate=True)
# (you can change them more or less harmlessly, this usually just makes learning faster/slower)
# also regularize to prioritize exploration
reg_logits = T.mean(logits_seq**2)
reg_entropy = T.mean(T.sum(policy_seq*logpolicy_seq,axis=-1))
#add-up loss components with magic numbers
loss = 0.1*elwise_actor_loss.mean() +\
0.25*elwise_critic_loss.mean() +\
1e-3*reg_entropy +\
1e-3*reg_logits
# Compute weight updates, clip by norm
grads = T.grad(loss,weights)
grads = lasagne.updates.total_norm_constraint(grads,10)
updates = lasagne.updates.adam(grads, weights,1e-4)
#compile train function
train_step = theano.function([],loss,updates=auto_updates+updates)
Explanation: Advantage actor-critic
An agent has a method that produces symbolic environment interaction sessions
Such sessions are sequences of observations, agent memory, actions, q-values, etc.;
one has to pre-define the maximum session length.
SessionPool also stores rewards, alive indicators, etc.
Code mostly copied from here
End of explanation
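For reference, the two terms returned by get_elementwise_objective above correspond, schematically (leaving the n-step return details to the library), to the usual advantage actor-critic objective:
$$ L_{\text{actor}} = -\log \pi(a_t|s_t)\,\big(R_t - V(s_t)\big), \qquad L_{\text{critic}} = \big(R_t - V(s_t)\big)^2, $$
where $R_t$ is the discounted return estimate and the advantage $R_t - V(s_t)$ is treated as a constant when differentiating the actor term.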
untrained_reward = np.mean(pool.evaluate(save_path="./records",
record_video=True))
#show video
from IPython.display import HTML
import os
video_names = list(filter(lambda s:s.endswith(".mp4"),os.listdir("./records/")))
HTML('''
<video width="640" height="480" controls>
<source src="{}" type="video/mp4">
</video>
'''.format("./records/"+video_names[-1])) #this may or may not be _last_ video. Try other indices
Explanation: Demo run
End of explanation
#starting epoch
epoch_counter = 1
#full game rewards
rewards = {}
loss,reward_per_tick,reward =0,0,0
from tqdm import trange
from IPython.display import clear_output
#the algorithm almost converges by 15k iterations, 50k is for full convergence
for i in trange(150000):
#play
pool.update(SEQ_LENGTH)
#train
loss = 0.95*loss + 0.05*train_step()
if epoch_counter%10==0:
#average reward per game tick in current experience replay pool
reward_per_tick = 0.95*reward_per_tick + 0.05*pool.experience_replay.rewards.get_value().mean()
print("iter=%i\tloss=%.3f\treward/tick=%.3f"%(epoch_counter,
loss,
reward_per_tick))
##record current learning progress and show learning curves
if epoch_counter%100 ==0:
reward = 0.95*reward + 0.05*np.mean(pool.evaluate(record_video=False))
rewards[epoch_counter] = reward
clear_output(True)
plt.plot(*zip(*sorted(rewards.items(),key=lambda k:k[0])))
plt.show()
epoch_counter +=1
# Time to drink some coffee!
Explanation: Training loop
End of explanation
import pandas as pd
plt.plot(*zip(*sorted(rewards.items(),key=lambda k:k[0])))
from agentnet.utils.persistence import save
save(action_layer,"kung_fu.pcl")
###LOAD FROM HERE
from agentnet.utils.persistence import load
load(action_layer,"kung_fu.pcl")
rw = pool.evaluate(n_games=20,save_path="./records",record_video=True)
print("mean session score=%f.5"%np.mean(rw))
#show video
from IPython.display import HTML
import os
video_names = list(filter(lambda s:s.endswith(".mp4"),os.listdir("./records/")))
HTML('''
<video width="640" height="480" controls>
<source src="{}" type="video/mp4">
</video>
'''.format("./records/"+video_names[-1])) #this may or may not be _last_ video. Try other indices
Explanation: Evaluating results
Here we plot learning curves and sample testimonials
End of explanation |
14,309 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Testing the 1-D DVR
Use matplotlib inline so that plots show up in the notebook.
Step1: Next import the dvr_1d module. We import dvr_1d using a series of ipython notebook magic commands so that we can make changes to the module file and test those changes in this notebook without having to restart the notebook kernel.
Step2: First, we'll test the 1-D sinc function DVR on a simple harmonic oscillator potential.
Step3: Let's try the same potential but with a Hermite basis. We see that since the Hermite polynomials are the exact solutions to the SHO problem, we can use npts == num_eigs. The eigenvectors will look horrible but the eigenvalues will be very accurate.
Step4: Next we'll test the 1-D Sinc DVR on a finite and an infinite square well. "Infinite" here just means really really huge.
Step5: And we might as well try it out with the Hermite basis set too.
Step6: Let's repeat all these tests with the 1-D Fourier Sine DVR.
Step7: Let's test the Bessel DVR
Step8: Testing the 2-D DVR
We're going to be using 3D plots so let's change the matplotlib backend so that we can look at the plots in a separate window and manipulate them.
Step9: Now we'll construct a 2-D Sinc DVR from a product basis of 1-D Sinc DVRs.
Step10: Testing the 3-D DVR | Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import scipy.sparse as sp
import scipy.sparse.linalg as sla
Explanation: Testing the 1-D DVR
Use matplotlib inline so that plots show up in the notebook.
End of explanation
# autoreload the dvr modules so that we can make changes to them
# without restarting the ipython notebook server
%load_ext autoreload
%autoreload 1
%aimport dvr_1d
%aimport dvr_2d
%aimport dvr_3d
Explanation: Next import the dvr_1d module. We import dvr_1d using a series of ipython notebook magic commands so that we can make changes to the module file and test those changes in this notebook without having to restart the notebook kernel.
End of explanation
d = dvr_1d.SincDVR(npts=200, L=14)
d.sho_test(precision=12)
Explanation: First, we'll test the 1-D sinc function DVR on a simple harmonic oscillator potential.
End of explanation
d = dvr_1d.HermiteDVR(npts=5)
d.sho_test(k=1., precision=11)
Explanation: Let's try the same potential but with a Hermite basis. We see that since the Hermite polynomials are the exact solutions to the SHO problem, we can use npts == num_eigs. The eigenvectors will look horrible but the eigenvalues will be very accurate.
End of explanation
d = dvr_1d.SincDVR(npts=500, L=20)
d.square_well_test(precision=6)
d.inf_square_well_test(precision=6)
Explanation: Next we'll test the 1-D Sinc DVR on a finite and an infinite square well. "Infinite" here just means really really huge.
End of explanation
d = dvr_1d.HermiteDVR(npts=268)
d.square_well_test(precision=6)
d.inf_square_well_test(precision=6)
Explanation: And we might as well try it out with the Hermite basis set too.
End of explanation
d = dvr_1d.SineDVR(npts=1000, xmin=-15, xmax=15)
d.sho_test(precision=12)
d.square_well_test(precision=6)
d.inf_square_well_test(precision=6)
Explanation: Let's repeat all these tests with the 1-D Fourier Sine DVR.
End of explanation
d = dvr_1d.BesselDVR(npts=100, R=20., dim=3, lam=1)
d.sho_test(xmin=0., xmax=8., ymin=0., ymax=12.)
Explanation: Let's test the Bessel DVR
End of explanation
%matplotlib osx
Explanation: Testing the 2-D DVR
We're going to be using 3D plots so let's change the matplotlib backend so that we can look at the plots in a separate window and manipulate them.
End of explanation
d1d = dvr_1d.SincDVR(npts = 30, L=10)
d2d = dvr_2d.DVR(dvr1d=d1d)
E, U = d2d.sho_test(num_eigs=5, precision=14)
d1d = dvr_1d.HermiteDVR(npts=5)
d2d = dvr_2d.DVR(dvr1d=d1d)
E, U = d2d.sho_test(num_eigs=5, precision=14)
d1d = dvr_1d.SincDVR(npts = 30, L=10)
d2d = dvr_2d.DVR(dvr1d=d1d)
E, U = d2d.sho_test(num_eigs=5, precision=10, uscale=3.5, doshow=True)
d1d = dvr_1d.SineDVR(npts = 30, xmin=-5., xmax=5.)
d2d = dvr_2d.DVR(dvr1d=d1d)
E, U = d2d.sho_test(num_eigs=5, uscale=3., doshow=False)
#plt.spy(d2d.t().toarray(), markersize=.1);
#plt.savefig('/Users/Adam/Dropbox/amath585_final/K_2D_sparsity.png', bbox_inches='tight', dpi=400)
Explanation: Now we'll construct a 2-D Sinc DVR from a product basis of 1-D Sinc DVRs.
End of explanation
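The phrase "product basis" can be made concrete with a Kronecker-sum construction; the rough sketch below uses a placeholder for the 1-D kinetic matrix, since how dvr_1d exposes it is an assumption here.
import numpy as np
import scipy.sparse as sp
npts = 30
t1 = sp.csr_matrix(np.random.rand(npts, npts))    # placeholder for the 1-D DVR kinetic matrix
eye = sp.identity(npts, format='csr')
t2d = sp.kron(t1, eye) + sp.kron(eye, t1)         # Kronecker sum: T (x) I + I (x) T
t2d.shape                                         # (npts**2, npts**2)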
d1d = dvr_1d.SincDVR(npts = 16, L=10)
d3d = dvr_3d.DVR(dvr1d=d1d, spf='csr')
E, U = d3d.sho_test(num_eigs=5)
#plt.spy(d3d.t().toarray(), markersize=.1);
#plt.savefig('/Users/Adam/Dropbox/amath585_final/K_3D_sparsity.png', bbox_inches='tight', dpi=400)
Explanation: Testing the 3-D DVR
End of explanation |
14,310 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Multitask GP Regression
Introduction
Multitask regression, introduced in this paper, learns similarities in the outputs simultaneously. It's useful when you are performing regression on multiple functions that share the same inputs, especially if they have similarities (such as being sinusoidal).
Given inputs $x$ and $x'$, and tasks $i$ and $j$, the covariance between two datapoints and two tasks is given by
$$ k([x, i], [x', j]) = k_\text{inputs}(x, x') * k_\text{tasks}(i, j)
$$
where $k_\text{inputs}$ is a standard kernel (e.g. RBF) that operates on the inputs.
$k_\text{tasks}$ is a lookup table containing inter-task covariance.
Step1: Set up training data
In the next cell, we set up the training data for this example. We'll be using 100 regularly spaced points on [0,1] which we evaluate the function on and add Gaussian noise to get the training labels.
We'll have two functions - a sine function (y1) and a cosine function (y2).
For MTGPs, our train_targets will actually have two dimensions: with the second dimension corresponding to the different tasks.
Step2: Define a multitask model
The model should be somewhat similar to the ExactGP model in the simple regression example.
The differences
Step3: Train the model hyperparameters
Step4: Make predictions with the model | Python Code:
import math
import torch
import gpytorch
from matplotlib import pyplot as plt
%matplotlib inline
%load_ext autoreload
%autoreload 2
Explanation: Multitask GP Regression
Introduction
Multitask regression, introduced in this paper, learns similarities in the outputs simultaneously. It's useful when you are performing regression on multiple functions that share the same inputs, especially if they have similarities (such as being sinusoidal).
Given inputs $x$ and $x'$, and tasks $i$ and $j$, the covariance between two datapoints and two tasks is given by
$$ k([x, i], [x', j]) = k_\text{inputs}(x, x') * k_\text{tasks}(i, j)
$$
where $k_\text{inputs}$ is a standard kernel (e.g. RBF) that operates on the inputs.
$k_\text{task}$ is a lookup table containing inter-task covariance.
End of explanation
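# A quick toy illustration of the formula above (plain numpy with made-up numbers,
# not GPyTorch's actual implementation): the covariance over every (input, task)
# pair is the Kronecker product of the input kernel matrix and the inter-task
# covariance lookup table.
import numpy as np
toy_x = np.array([0.0, 0.5, 1.0])                                 # three toy inputs
K_inputs = np.exp(-0.5 * (toy_x[:, None] - toy_x[None, :]) ** 2)  # RBF on the inputs
K_tasks = np.array([[1.0, 0.7],
                    [0.7, 1.0]])                                  # assumed task covariance
K_full = np.kron(K_inputs, K_tasks)
print(K_full.shape)   # (6, 6): 3 inputs x 2 tasks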
train_x = torch.linspace(0, 1, 100)
train_y = torch.stack([
torch.sin(train_x * (2 * math.pi)) + torch.randn(train_x.size()) * 0.2,
torch.cos(train_x * (2 * math.pi)) + torch.randn(train_x.size()) * 0.2,
], -1)
Explanation: Set up training data
In the next cell, we set up the training data for this example. We'll be using 100 regularly spaced points on [0,1] which we evaluate the function on and add Gaussian noise to get the training labels.
We'll have two functions - a sine function (y1) and a cosine function (y2).
For MTGPs, our train_targets will actually have two dimensions: with the second dimension corresponding to the different tasks.
End of explanation
class MultitaskGPModel(gpytorch.models.ExactGP):
def __init__(self, train_x, train_y, likelihood):
super(MultitaskGPModel, self).__init__(train_x, train_y, likelihood)
self.mean_module = gpytorch.means.MultitaskMean(
gpytorch.means.ConstantMean(), num_tasks=2
)
self.covar_module = gpytorch.kernels.MultitaskKernel(
gpytorch.kernels.RBFKernel(), num_tasks=2, rank=1
)
def forward(self, x):
mean_x = self.mean_module(x)
covar_x = self.covar_module(x)
return gpytorch.distributions.MultitaskMultivariateNormal(mean_x, covar_x)
likelihood = gpytorch.likelihoods.MultitaskGaussianLikelihood(num_tasks=2)
model = MultitaskGPModel(train_x, train_y, likelihood)
Explanation: Define a multitask model
The model should be somewhat similar to the ExactGP model in the simple regression example.
The differences:
We're going to wrap ConstantMean with a MultitaskMean. This makes sure we have a mean function for each task.
Rather than just using a RBFKernel, we're using that in conjunction with a MultitaskKernel. This gives us the covariance function described in the introduction.
We're using a MultitaskMultivariateNormal and MultitaskGaussianLikelihood. This allows us to deal with the predictions/outputs in a nice way. For example, when we call MultitaskMultivariateNormal.mean, we get a n x num_tasks matrix back.
You may also notice that we don't use a ScaleKernel, since the IndexKernel will do some scaling for us. (This way we're not overparameterizing the kernel.)
End of explanation
# this is for running the notebook in our testing framework
import os
smoke_test = ('CI' in os.environ)
training_iterations = 2 if smoke_test else 50
# Find optimal model hyperparameters
model.train()
likelihood.train()
# Use the adam optimizer
optimizer = torch.optim.Adam(model.parameters(), lr=0.1) # Includes GaussianLikelihood parameters
# "Loss" for GPs - the marginal log likelihood
mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model)
for i in range(training_iterations):
optimizer.zero_grad()
output = model(train_x)
loss = -mll(output, train_y)
loss.backward()
print('Iter %d/%d - Loss: %.3f' % (i + 1, training_iterations, loss.item()))
optimizer.step()
Explanation: Train the model hyperparameters
End of explanation
# Set into eval mode
model.eval()
likelihood.eval()
# Initialize plots
f, (y1_ax, y2_ax) = plt.subplots(1, 2, figsize=(8, 3))
# Make predictions
with torch.no_grad(), gpytorch.settings.fast_pred_var():
test_x = torch.linspace(0, 1, 51)
predictions = likelihood(model(test_x))
mean = predictions.mean
lower, upper = predictions.confidence_region()
    # The mean is an n x num_tasks matrix: column 0 holds the predictions for the
    # first task (the sine function) and column 1 those for the second task (the cosine)
# Plot training data as black stars
y1_ax.plot(train_x.detach().numpy(), train_y[:, 0].detach().numpy(), 'k*')
# Predictive mean as blue line
y1_ax.plot(test_x.numpy(), mean[:, 0].numpy(), 'b')
# Shade in confidence
y1_ax.fill_between(test_x.numpy(), lower[:, 0].numpy(), upper[:, 0].numpy(), alpha=0.5)
y1_ax.set_ylim([-3, 3])
y1_ax.legend(['Observed Data', 'Mean', 'Confidence'])
y1_ax.set_title('Observed Values (Likelihood)')
# Plot training data as black stars
y2_ax.plot(train_x.detach().numpy(), train_y[:, 1].detach().numpy(), 'k*')
# Predictive mean as blue line
y2_ax.plot(test_x.numpy(), mean[:, 1].numpy(), 'b')
# Shade in confidence
y2_ax.fill_between(test_x.numpy(), lower[:, 1].numpy(), upper[:, 1].numpy(), alpha=0.5)
y2_ax.set_ylim([-3, 3])
y2_ax.legend(['Observed Data', 'Mean', 'Confidence'])
y2_ax.set_title('Observed Values (Likelihood)')
None
Explanation: Make predictions with the model
End of explanation |
14,311 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Linear classifier on sensor data with plot patterns and filters
Here decoding, a.k.a MVPA or supervised machine learning, is applied to M/EEG
data in sensor space. Fit a linear classifier with the LinearModel object
providing topographical patterns which are more neurophysiologically
interpretable
Step1: Set parameters
Step2: Decoding in sensor space using a LogisticRegression classifier
Step3: Let's do the same on EEG data using a scikit-learn pipeline | Python Code:
# Authors: Alexandre Gramfort <[email protected]>
# Romain Trachel <[email protected]>
# Jean-Remi King <[email protected]>
#
# License: BSD-3-Clause
import mne
from mne import io, EvokedArray
from mne.datasets import sample
from mne.decoding import Vectorizer, get_coef
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
# import a linear classifier from mne.decoding
from mne.decoding import LinearModel
print(__doc__)
data_path = sample.data_path()
sample_path = data_path / 'MEG' / 'sample'
Explanation: Linear classifier on sensor data with plot patterns and filters
Here decoding, a.k.a MVPA or supervised machine learning, is applied to M/EEG
data in sensor space. Fit a linear classifier with the LinearModel object
providing topographical patterns which are more neurophysiologically
interpretable :footcite:HaufeEtAl2014 than the classifier filters (weight
vectors). The patterns explain how the MEG and EEG data were generated from
the discriminant neural sources which are extracted by the filters.
Note patterns/filters in MEG data are more similar than EEG data
because the noise is less spatially correlated in MEG than EEG.
End of explanation
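# A small sketch of the Haufe et al. (2014) point made above, on toy random data
# (my own variable names, not part of the original example): for a linear model
# with spatial filter w, the corresponding spatial pattern is proportional to the
# data covariance multiplied by w.
import numpy as np
rng = np.random.RandomState(0)
X_toy = rng.randn(500, 4)            # 500 "samples" x 4 "channels"
w_toy = rng.randn(4)                 # some spatial filter (weight vector)
pattern_toy = np.cov(X_toy.T) @ w_toy
print(pattern_toy)                   # un-normalized spatial pattern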
raw_fname = sample_path / 'sample_audvis_filt-0-40_raw.fif'
event_fname = sample_path / 'sample_audvis_filt-0-40_raw-eve.fif'
tmin, tmax = -0.1, 0.4
event_id = dict(aud_l=1, vis_l=3)
# Setup for reading the raw data
raw = io.read_raw_fif(raw_fname, preload=True)
raw.filter(.5, 25, fir_design='firwin')
events = mne.read_events(event_fname)
# Read epochs
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True,
decim=2, baseline=None, preload=True)
del raw
labels = epochs.events[:, -1]
# get MEG and EEG data
meg_epochs = epochs.copy().pick_types(meg=True, eeg=False)
meg_data = meg_epochs.get_data().reshape(len(labels), -1)
Explanation: Set parameters
End of explanation
clf = LogisticRegression(solver='liblinear') # liblinear is faster than lbfgs
scaler = StandardScaler()
# create a linear model with LogisticRegression
model = LinearModel(clf)
# fit the classifier on MEG data
X = scaler.fit_transform(meg_data)
model.fit(X, labels)
# Extract and plot spatial filters and spatial patterns
for name, coef in (('patterns', model.patterns_), ('filters', model.filters_)):
# We fitted the linear model onto Z-scored data. To make the filters
# interpretable, we must reverse this normalization step
coef = scaler.inverse_transform([coef])[0]
# The data was vectorized to fit a single model across all time points and
# all channels. We thus reshape it:
coef = coef.reshape(len(meg_epochs.ch_names), -1)
# Plot
evoked = EvokedArray(coef, meg_epochs.info, tmin=epochs.tmin)
evoked.plot_topomap(title='MEG %s' % name, time_unit='s')
Explanation: Decoding in sensor space using a LogisticRegression classifier
End of explanation
X = epochs.pick_types(meg=False, eeg=True)
y = epochs.events[:, 2]
# Define a unique pipeline to sequentially:
clf = make_pipeline(
Vectorizer(), # 1) vectorize across time and channels
StandardScaler(), # 2) normalize features across trials
LinearModel( # 3) fits a logistic regression
LogisticRegression(solver='liblinear')
)
)
clf.fit(X, y)
# Extract and plot patterns and filters
for name in ('patterns_', 'filters_'):
# The `inverse_transform` parameter will call this method on any estimator
# contained in the pipeline, in reverse order.
coef = get_coef(clf, name, inverse_transform=True)
evoked = EvokedArray(coef, epochs.info, tmin=epochs.tmin)
evoked.plot_topomap(title='EEG %s' % name[:-1], time_unit='s')
Explanation: Let's do the same on EEG data using a scikit-learn pipeline
End of explanation |
14,312 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exploring the Lorenz System of Differential Equations
Downloaded 10/2017 from the ipywidgets docs
In this Notebook we explore the Lorenz system of differential equations
Step2: Computing the trajectories and plotting the result
We define a function that can integrate the differential equations numerically and then plot the solutions. This function has arguments that control the parameters of the differential equation (\(\sigma\), \(\beta\), \(\rho\)), the numerical integration (N, max_time) and the visualization (angle).
Step3: Let's call the function once to view the solutions. For this set of parameters, we see the trajectories swirling around two points, called attractors.
Step4: Using IPython's interactive function, we can explore how the trajectories behave as we change the various parameters.
Step5: The object returned by interactive is a Widget object and it has attributes that contain the current result and arguments
Step6: After interacting with the system, we can take the result and perform further computations. In this case, we compute the average positions in \(x\), \(y\) and \(z\).
Step7: Creating histograms of the average positions (across different trajectories) show that on average the trajectories swirl about the attractors. | Python Code:
%matplotlib inline
from ipywidgets import interact, interactive
from IPython.display import clear_output, display, HTML
import numpy as np
from scipy import integrate
from matplotlib import pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from matplotlib.colors import cnames
from matplotlib import animation
Explanation: Exploring the Lorenz System of Differential Equations
Downloaded 10/2017 from the ipywidgets docs
In this Notebook we explore the Lorenz system of differential equations:
$$
\begin{aligned}
\dot{x} & = \sigma(y-x) \
\dot{y} & = \rho x - y - xz \
\dot{z} & = -\beta z + xy
\end{aligned}
$$
This is one of the classic systems in non-linear differential equations. It exhibits a range of different behaviors as the parameters (\(\sigma\), \(\beta\), \(\rho\)) are varied.
Imports
First, we import the needed things from IPython, NumPy, Matplotlib and SciPy.
End of explanation
def solve_lorenz(N=10, angle=0.0, max_time=4.0, sigma=10.0, beta=8./3, rho=28.0):
fig = plt.figure()
ax = fig.add_axes([0, 0, 1, 1], projection='3d')
ax.axis('off')
# prepare the axes limits
ax.set_xlim((-25, 25))
ax.set_ylim((-35, 35))
ax.set_zlim((5, 55))
def lorenz_deriv(x_y_z, t0, sigma=sigma, beta=beta, rho=rho):
        """Compute the time-derivative of a Lorenz system."""
x, y, z = x_y_z
return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]
# Choose random starting points, uniformly distributed from -15 to 15
np.random.seed(1)
x0 = -15 + 30 * np.random.random((N, 3))
# Solve for the trajectories
t = np.linspace(0, max_time, int(250*max_time))
x_t = np.asarray([integrate.odeint(lorenz_deriv, x0i, t)
for x0i in x0])
# choose a different color for each trajectory
colors = plt.cm.viridis(np.linspace(0, 1, N))
for i in range(N):
x, y, z = x_t[i,:,:].T
lines = ax.plot(x, y, z, '-', c=colors[i])
plt.setp(lines, linewidth=2)
ax.view_init(30, angle)
plt.show()
return t, x_t
Explanation: Computing the trajectories and plotting the result
We define a function that can integrate the differential equations numerically and then plot the solutions. This function has arguments that control the parameters of the differential equation (\(\sigma\), \(\beta\), \(\rho\)), the numerical integration (N, max_time) and the visualization (angle).
End of explanation
t, x_t = solve_lorenz(angle=0, N=10)
Explanation: Let's call the function once to view the solutions. For this set of parameters, we see the trajectories swirling around two points, called attractors.
End of explanation
w = interactive(solve_lorenz, angle=(0.,360.), max_time=(0.1, 4.0),
N=(0,50), sigma=(0.0,50.0), rho=(0.0,50.0))
display(w)
Explanation: Using IPython's interactive function, we can explore how the trajectories behave as we change the various parameters.
End of explanation
t, x_t = w.result
w.kwargs
Explanation: The object returned by interactive is a Widget object and it has attributes that contain the current result and arguments:
End of explanation
xyz_avg = x_t.mean(axis=1)
xyz_avg.shape
Explanation: After interacting with the system, we can take the result and perform further computations. In this case, we compute the average positions in \(x\), \(y\) and \(z\).
End of explanation
plt.hist(xyz_avg[:,0])
plt.title('Average $x(t)$');
plt.hist(xyz_avg[:,1])
plt.title('Average $y(t)$');
Explanation: Creating histograms of the average positions (across different trajectories) show that on average the trajectories swirl about the attractors.
End of explanation |
14,313 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Predicting with SLM
In this notebook we compare how accurate an SVM prediction is when it uses the SLM as its features and the SLM is normalized (so its values are only 0 and 1), as opposed to the current state where its values range from 0 to 256.
Step1: We load the file
Step2: Accuracy with non-normalized SLM
Step3: Accuracy with normalized SLM | Python Code:
import numpy as np
import h5py
from sklearn import svm, cross_validation, preprocessing
Explanation: Predicting with SLM
In this notebook we compare how accurate an SVM prediction is when it uses the SLM as its features and the SLM is normalized (so its values are only 0 and 1), as opposed to the current state where its values range from 0 to 256.
End of explanation
# First we load the file
file_location = '../results_database/text_wall_street_big.hdf5'
run_name = '/low-resolution'
f = h5py.File(file_location, 'r')
# Now we need to get the letters and align them
text_directory = '../data/wall_street_letters.npy'
letters_sequence = np.load(text_directory)
Nletters = len(letters_sequence)
symbols = set(letters_sequence)
# Nexa parameters
Nspatial_clusters = 5
Ntime_clusters = 15
Nembedding = 3
parameters_string = '/' + str(Nspatial_clusters)
parameters_string += '-' + str(Ntime_clusters)
parameters_string += '-' + str(Nembedding)
nexa = f[run_name + parameters_string]
delay = 4
N = 5000
cache_size = 1000
Explanation: We load the file
End of explanation
# Extract the SLM
SLM = np.array(f[run_name]['SLM'])
print('Standardized')
X = SLM[:,:(N - delay)].T
y = letters_sequence[delay:N]
# We now scale X
X = preprocessing.scale(X)
X_train, X_test, y_train, y_test = cross_validation.train_test_split(X, y, test_size=0.10)
clf_linear = svm.SVC(C=1.0, cache_size=cache_size, kernel='linear')
clf_linear.fit(X_train, y_train)
score = clf_linear.score(X_test, y_test) * 100.0
print('Score in linear', score)
clf_rbf = svm.SVC(C=1.0, cache_size=cache_size, kernel='rbf')
clf_rbf.fit(X_train, y_train)
score = clf_rbf.score(X_test, y_test) * 100.0
print('Score in rbf', score)
print('Not standardized')
X = SLM[:,:(N - delay)].T
y = letters_sequence[delay:N]
# No scaling this time
X_train, X_test, y_train, y_test = cross_validation.train_test_split(X, y, test_size=0.10)
clf_linear = svm.SVC(C=1.0, cache_size=cache_size, kernel='linear')
clf_linear.fit(X_train, y_train)
score = clf_linear.score(X_test, y_test) * 100.0
print('Score in linear', score)
clf_rbf = svm.SVC(C=1.0, cache_size=cache_size, kernel='rbf')
clf_rbf.fit(X_train, y_train)
score = clf_rbf.score(X_test, y_test) * 100.0
print('Score in rbf', score)
Explanation: Accuracy with non-normalized SLM
End of explanation
# Extract and normalize the SLM (threshold its values to 0 and 1)
SLM = np.array(f[run_name]['SLM'])
SLM[SLM < 200] = 0
SLM[SLM >= 200] = 1
print('Standardized')
X = SLM[:,:(N - delay)].T
y = letters_sequence[delay:N]
# We now scale X
X = preprocessing.scale(X)
X_train, X_test, y_train, y_test = cross_validation.train_test_split(X, y, test_size=0.10)
clf_linear = svm.SVC(C=1.0, cache_size=cache_size, kernel='linear')
clf_linear.fit(X_train, y_train)
score = clf_linear.score(X_test, y_test) * 100.0
print('Score in linear', score)
clf_rbf = svm.SVC(C=1.0, cache_size=cache_size, kernel='rbf')
clf_rbf.fit(X_train, y_train)
score = clf_rbf.score(X_test, y_test) * 100.0
print('Score in rbf', score)
print('Not standardized')
X = SLM[:,:(N - delay)].T
y = letters_sequence[delay:N]
# No scaling this time
X_train, X_test, y_train, y_test = cross_validation.train_test_split(X, y, test_size=0.10)
clf_linear = svm.SVC(C=1.0, cache_size=cache_size, kernel='linear')
clf_linear.fit(X_train, y_train)
score = clf_linear.score(X_test, y_test) * 100.0
print('Score in linear', score)
clf_rbf = svm.SVC(C=1.0, cache_size=cache_size, kernel='rbf')
clf_rbf.fit(X_train, y_train)
score = clf_rbf.score(X_test, y_test) * 100.0
print('Score in rbf', score)
Explanation: Accuracy with normalized SLM
End of explanation |
14,314 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
License
Copyright (C) 2017 J. Patrick Hall, [email protected]
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions
Step1: Perform basic feature extraction
Create a sample data set
Step2: Compress x1 and x2 into a single principal component
Step3: Principal components analysis finds vectors that represent the direction(s) of most variance in a data set. These are called eigenvectors.
Step4: Principal components are the projection of the data onto these eigenvectors. Principal components are usually centered around zero and each principal component is uncorrelated with all the others, i.e. principal components are orthogonal to one another. Because principal components represent the highest variance dimensions in the data and are not correlated with one another, they do an excellent job summarizing a data set with only a few dimensions (e.g. columns), and PCA is probably the most popular feature extraction technique. | Python Code:
import pandas as pd # pandas for handling mixed data sets
import numpy as np # numpy for basic math and matrix operations
import matplotlib.pyplot as plt # pyplot for plotting
# scikit-learn for machine learning and data preprocessing
from sklearn.decomposition import PCA
Explanation: License
Copyright (C) 2017 J. Patrick Hall, [email protected]
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
Simple feature extraction - Pandas and Scikit-Learn
Imports
End of explanation
scratch_df = pd.DataFrame({'x1': [1, 2.5, 3, 4.5],
'x2': [1.5, 2, 3.5, 4]})
scratch_df
Explanation: Perform basic feature extraction
Create a sample data set
End of explanation
pca = PCA(n_components=1)
pca.fit(scratch_df)
Explanation: Compress x1 and x2 into a single principal component
End of explanation
print('First eigenvector = ', pca.components_)
Explanation: Principal components analysis finds vectors that represent the direction(s) of most variance in a data set. These are called eigenvectors.
End of explanation
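# A quick sanity check (a sketch added here, not part of the original example):
# the first principal direction should match the top eigenvector of the covariance
# matrix of the mean-centered data, up to sign.
centered = scratch_df[['x1', 'x2']] - scratch_df[['x1', 'x2']].mean()
eigvals, eigvecs = np.linalg.eigh(np.cov(centered.T))
print('Top eigenvector of covariance matrix = ', eigvecs[:, np.argmax(eigvals)])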
scratch_df['Centered_PC1'] = pca.transform(scratch_df[['x1', 'x2']])
scratch_df['Non_centered_PC1'] = pca.transform(scratch_df[['x1', 'x2']] + pca.mean_)
scratch_df['PC1_x1_back_projection'] = pd.Series(np.arange(1,8,2)) * pca.components_[0][0]
scratch_df['PC1_x2_back_projection'] = pd.Series(np.arange(1,8,2)) * pca.components_[0][1]
scratch_df
x = plt.scatter(scratch_df.x1, scratch_df.x2, color='b')
pc, = plt.plot(scratch_df.PC1_x1_back_projection, scratch_df.PC1_x2_back_projection, color='r')
plt.legend([x, pc], ['Observed data (x)', 'First principal component projection'], loc=4)
plt.xlabel('x1')
plt.ylabel('x2')
plt.show()
Explanation: Principal components are the projection of the data onto these eigenvectors. Principal components are usually centered around zero and each principal component is uncorrelated with all the others, i.e. principal components are orthogonal to one another. Because principal components represent the highest variance dimensions in the data and are not correlated with one another, they do an excellent job summarizing a data set with only a few dimensions (e.g. columns), and PCA is probably the most popular feature extraction technique.
End of explanation |
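# An extra check of the claims above on toy data (my own example, not part of the
# original notebook): principal components come out (approximately) zero-centered
# and mutually uncorrelated.
rng = np.random.RandomState(0)
toy_data = rng.normal(size=(200, 3)) @ rng.normal(size=(3, 3))       # correlated columns
pcs = PCA(n_components=2).fit_transform(toy_data)
print('component means:', pcs.mean(axis=0))                          # ~ [0, 0]
print('correlation between components:', np.corrcoef(pcs.T)[0, 1])   # ~ 0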
14,315 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Training a part-of-speech tagger with transformers (BERT)
This example shows how to use Thinc and Hugging Face's transformers library to implement and train a part-of-speech tagger on the Universal Dependencies AnCora corpus. This notebook assumes familiarity with machine learning concepts, transformer models and Thinc's config system and Model API (see the "Thinc for beginners" notebook and the documentation for more info).
Step1: First, let's use Thinc's prefer_gpu helper to make sure we're performing operations on GPU if available. The function should be called right after importing Thinc, and it returns a boolean indicating whether the GPU has been activated. If we're on GPU, we can also call use_pytorch_for_gpu_memory to route cupy's memory allocation via PyTorch, so both can play together nicely.
Step3: Overview
Step5: Defining the model
The Thinc model we want to define should consist of 3 components
Step6: The wrapped tokenizer will take a list-of-lists as input (the texts) and will output a TokensPlus object containing the fully padded batch of tokens. The wrapped transformer will take a list of TokensPlus objects and will output a list of 2-dimensional arrays.
TransformersTokenizer
Step7: The forward pass takes the model and a list-of-lists of strings and outputs the TokensPlus dataclass. It also outputs a dummy callback function, to meet the API contract for Thinc models. Even though there's no way we can meaningfully "backpropagate" this layer, we need to make sure the function has the right signature, so that it can be used interchangeably with other layers.
2. Wrapping the transformer
To load and wrap the transformer, we can use transformers.AutoModel and Thinc's PyTorchWrapper. The forward method of the wrapped model can take arbitrary positional arguments and keyword arguments. Here's what the wrapped model is going to look like
Step8: The input and output transformation functions give you full control of how data is passed into and out of the underlying PyTorch model, so you can work with PyTorch layers that expect and return arbitrary objects. Putting it all together, we now have a nice layer that is configured with the name of a transformer model, that acts as a function mapping tokenized input into feature vectors.
Step9: We can now combine the TransformersTokenizer and Transformer into a feed-forward network using the chain combinator. The with_array layer transforms a sequence of data into a contiguous 2d array on the way into and
out of a model.
Step10: Training the model
Setting up model and data
Since we've registered all layers via @thinc.registry.layers, we can construct the model, its settings and other functions we need from a config (see CONFIG above). The result is a config object with a model, an optimizer, a function to calculate the loss and the training settings.
Step11: We’ve prepared a separate package ml-datasets with loaders for some common datasets, including the AnCora data. If we're using a GPU, calling ops.asarray on the outputs ensures that they're converted to cupy arrays (instead of numpy arrays). Calling Model.initialize with a batch of inputs and outputs allows Thinc to infer the missing dimensions.
Step12: Helper functions for training and evaluation
Before we can train the model, we also need to set up the following helper functions for batching and evaluation
Step13: The training loop
Transformers often learn best with large batch sizes – larger than fits in GPU memory. But you don't have to backprop the whole batch at once. Here we consider the "logical" batch size (number of examples per update) separately from the physical batch size. For the physical batch size, what we care about is the number of words (considering padding too). We also want to sort by length, for efficiency.
At the end of the batch, we call the optimizer with the accumulated gradients, and advance the learning rate schedules. You might want to evaluate more often than once per epoch – that's up to you. | Python Code:
!pip install "thinc>=8.0.0" transformers torch "ml_datasets>=0.2.0" "tqdm>=4.41"
Explanation: Training a part-of-speech tagger with transformers (BERT)
This example shows how to use Thinc and Hugging Face's transformers library to implement and train a part-of-speech tagger on the Universal Dependencies AnCora corpus. This notebook assumes familiarity with machine learning concepts, transformer models and Thinc's config system and Model API (see the "Thinc for beginners" notebook and the documentation for more info).
End of explanation
from thinc.api import prefer_gpu, use_pytorch_for_gpu_memory
is_gpu = prefer_gpu()
print("GPU:", is_gpu)
if is_gpu:
use_pytorch_for_gpu_memory()
Explanation: First, let's use Thinc's prefer_gpu helper to make sure we're performing operations on GPU if available. The function should be called right after importing Thinc, and it returns a boolean indicating whether the GPU has been activated. If we're on GPU, we can also call use_pytorch_for_gpu_memory to route cupy's memory allocation via PyTorch, so both can play together nicely.
End of explanation
CONFIG = """
[model]
@layers = "TransformersTagger.v1"
starter = "bert-base-multilingual-cased"
[optimizer]
@optimizers = "Adam.v1"
[optimizer.learn_rate]
@schedules = "warmup_linear.v1"
initial_rate = 0.01
warmup_steps = 3000
total_steps = 6000
[loss]
@losses = "SequenceCategoricalCrossentropy.v1"
[training]
batch_size = 128
words_per_subbatch = 2000
n_epoch = 10
"""
Explanation: Overview: the final config
Here's the final config for the model we're building in this notebook. It references a custom TransformersTagger that takes the name of a starter (the pretrained model to use), an optimizer, a learning rate schedule with warm-up and the general training settings. You can keep the config string within your file or notebook, or save it to a conig.cfg file and load it in via Config.from_disk.
End of explanation
from typing import Optional, List
import numpy
from thinc.types import Ints1d, Floats2d
from dataclasses import dataclass
import torch
from transformers import BatchEncoding, TokenSpan
@dataclass
class TokensPlus:
batch_size: int
tok2wp: List[Ints1d]
input_ids: torch.Tensor
token_type_ids: torch.Tensor
attention_mask: torch.Tensor
def __init__(self, inputs: List[List[str]], wordpieces: BatchEncoding):
self.input_ids = wordpieces["input_ids"]
self.attention_mask = wordpieces["attention_mask"]
self.token_type_ids = wordpieces["token_type_ids"]
self.batch_size = self.input_ids.shape[0]
self.tok2wp = []
for i in range(self.batch_size):
spans = [wordpieces.word_to_tokens(i, j) for j in range(len(inputs[i]))]
self.tok2wp.append(self.get_wp_starts(spans))
def get_wp_starts(self, spans: List[Optional[TokenSpan]]) -> Ints1d:
        """Calculate an alignment mapping each token index to its first wordpiece."""
alignment = numpy.zeros((len(spans)), dtype="i")
for i, span in enumerate(spans):
if span is None:
raise ValueError(
"Token did not align to any wordpieces. Was the tokenizer "
"run with is_split_into_words=True?"
)
else:
alignment[i] = span.start
return alignment
def test_tokens_plus(name: str="bert-base-multilingual-cased"):
from transformers import AutoTokenizer
inputs = [
["Our", "band", "is", "called", "worlthatmustbedivided", "!"],
["We", "rock", "!"]
]
tokenizer = AutoTokenizer.from_pretrained(name)
wordpieces = tokenizer(
inputs,
is_split_into_words=True,
add_special_tokens=True,
return_token_type_ids=True,
return_attention_mask=True,
return_length=True,
return_tensors="pt",
padding="longest"
)
tplus = TokensPlus(inputs, wordpieces)
assert len(tplus.tok2wp) == len(inputs) == len(tplus.input_ids)
for i, align in enumerate(tplus.tok2wp):
assert len(align) == len(inputs[i])
for j in align:
assert j >= 0 and j < tplus.input_ids.shape[1]
test_tokens_plus()
Explanation: Defining the model
The Thinc model we want to define should consist of 3 components: the transformers tokenizer, the actual transformer implemented in PyTorch and a softmax-activated output layer.
1. Wrapping the tokenizer
To make it easier to keep track of the data that's passed around (and get type errors if something goes wrong), we first create a TokensPlus dataclass that holds the information we need from the transformers tokenizer. The most important work we'll do in this class is to build an alignment map. The transformer models are trained on input sequences that over-segment the sentence, so that they can work on smaller vocabularies. These over-segmentations are generally called "word pieces". The transformer will return a tensor with one vector per wordpiece. We need to map that to a tensor with one vector per POS-tagged token. We'll pass those token representations into a feed-forward network to predict the tag probabilities. During the backward pass, we'll then need to invert this mapping, so that we can calculate the gradients with respect to the wordpieces given the gradients with respect to the tokens. To keep things relatively simple, we'll store the alignment as a list of arrays, with each array mapping one token to one wordpiece vector (its first one). To make this work, we'll need to run the tokenizer with is_split_into_words=True, which should ensure that we get at least one wordpiece per token.
End of explanation
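# A tiny, self-contained illustration of the alignment described above (toy strings
# and made-up spans, not real tokenizer output): tok2wp simply records the index of
# the *first* wordpiece of every token.
import numpy
toy_tokens = ["Our", "band", "rocks"]
toy_wordpieces = ["[CLS]", "Our", "band", "rock", "##s", "[SEP]"]
toy_spans = [(1, 2), (2, 3), (3, 5)]       # token i covers wordpieces [start, end)
toy_tok2wp = numpy.array([start for start, _ in toy_spans], dtype="i")
print(toy_tok2wp)                          # [1 2 3] -> one wordpiece index per token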
import thinc
from thinc.api import Model
from transformers import AutoTokenizer
@thinc.registry.layers("transformers_tokenizer.v1")
def TransformersTokenizer(name: str) -> Model[List[List[str]], TokensPlus]:
def forward(model, inputs: List[List[str]], is_train: bool):
tokenizer = model.attrs["tokenizer"]
wordpieces = tokenizer(
inputs,
is_split_into_words=True,
add_special_tokens=True,
return_token_type_ids=True,
return_attention_mask=True,
return_length=True,
return_tensors="pt",
padding="longest"
)
return TokensPlus(inputs, wordpieces), lambda d_tokens: []
return Model("tokenizer", forward, attrs={"tokenizer": AutoTokenizer.from_pretrained(name)})
Explanation: The wrapped tokenizer will take a list-of-lists as input (the texts) and will output a TokensPlus object containing the fully padded batch of tokens. The wrapped transformer will take a list of TokensPlus objects and will output a list of 2-dimensional arrays.
TransformersTokenizer: List[List[str]] → TokensPlus
Transformer: TokensPlus → List[Array2d]
💡 Since we're adding type hints everywhere (and Thinc is fully typed, too), you can run your code through mypy to find type errors and inconsistencies. If you're using an editor like Visual Studio Code, you can enable mypy linting and type errors will be highlighted in real time as you write code.
To use the tokenizer as a layer in our network, we register a new function that returns a Thinc Model. The function takes the name of the pretrained weights (e.g. "bert-base-multilingual-cased") as an argument that can later be provided via the config. After loading the AutoTokenizer, we can stash it in the attributes. This lets us access it at any point later on via model.attrs["tokenizer"].
End of explanation
from typing import List, Tuple, Callable
from thinc.api import ArgsKwargs, torch2xp, xp2torch
from thinc.types import Floats2d
def convert_transformer_inputs(model, tokens: TokensPlus, is_train):
kwargs = {
"input_ids": tokens.input_ids,
"attention_mask": tokens.attention_mask,
"token_type_ids": tokens.token_type_ids,
}
return ArgsKwargs(args=(), kwargs=kwargs), lambda dX: []
def convert_transformer_outputs(
model: Model,
inputs_outputs: Tuple[TokensPlus, Tuple[torch.Tensor]],
is_train: bool
) -> Tuple[List[Floats2d], Callable]:
tplus, trf_outputs = inputs_outputs
wp_vectors = torch2xp(trf_outputs[0])
tokvecs = [wp_vectors[i, idx] for i, idx in enumerate(tplus.tok2wp)]
def backprop(d_tokvecs: List[Floats2d]) -> ArgsKwargs:
# Restore entries for BOS and EOS markers
d_wp_vectors = model.ops.alloc3f(*trf_outputs[0].shape, dtype="f")
for i, idx in enumerate(tplus.tok2wp):
d_wp_vectors[i, idx] += d_tokvecs[i]
return ArgsKwargs(
args=(trf_outputs[0],),
kwargs={"grad_tensors": xp2torch(d_wp_vectors)},
)
return tokvecs, backprop
Explanation: The forward pass takes the model and a list-of-lists of strings and outputs the TokensPlus dataclass. It also outputs a dummy callback function, to meet the API contract for Thinc models. Even though there's no way we can meaningfully "backpropagate" this layer, we need to make sure the function has the right signature, so that it can be used interchangeably with other layers.
2. Wrapping the transformer
To load and wrap the transformer, we can use transformers.AutoModel and Thinc's PyTorchWrapper. The forward method of the wrapped model can take arbitrary positional arguments and keyword arguments. Here's what the wrapped model is going to look like:
python
@thinc.registry.layers("transformers_model.v1")
def Transformer(name) -> Model[TokensPlus, List[Floats2d]]:
return PyTorchWrapper(
AutoModel.from_pretrained(name),
convert_inputs=convert_transformer_inputs,
convert_outputs=convert_transformer_outputs,
)
The Transformer layer takes our TokensPlus dataclass as input and outputs a list of 2-dimensional arrays. The convert functions are used to map inputs and outputs to and from the PyTorch model. Each function should return the converted output, and a callback to use during the backward pass. To make the arbitrary positional and keyword arguments easier to manage, Thinc uses an ArgsKwargs dataclass, essentially a named tuple with args and kwargs that can be spread into a function as *ArgsKwargs.args and **ArgsKwargs.kwargs. The ArgsKwargs objects will be passed straight into the model in the forward pass, and straight into torch.autograd.backward during the backward pass.
End of explanation
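# A minimal sketch of how an ArgsKwargs bundle is spread into a call (toy function,
# not the real PyTorch model): Thinc passes bundle.args and bundle.kwargs straight
# through to the wrapped callable.
from thinc.api import ArgsKwargs

def toy_forward(a, b, scale=1.0):
    return (a + b) * scale

bundle = ArgsKwargs(args=(2, 3), kwargs={"scale": 10.0})
print(toy_forward(*bundle.args, **bundle.kwargs))   # 50.0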
import thinc
from thinc.api import PyTorchWrapper
from transformers import AutoModel
@thinc.registry.layers("transformers_model.v1")
def Transformer(name: str) -> Model[TokensPlus, List[Floats2d]]:
return PyTorchWrapper(
AutoModel.from_pretrained(name),
convert_inputs=convert_transformer_inputs,
convert_outputs=convert_transformer_outputs,
)
Explanation: The input and output transformation functions give you full control of how data is passed into and out of the underlying PyTorch model, so you can work with PyTorch layers that expect and return arbitrary objects. Putting it all together, we now have a nice layer that is configured with the name of a transformer model, that acts as a function mapping tokenized input into feature vectors.
End of explanation
from thinc.api import chain, with_array, Softmax
@thinc.registry.layers("TransformersTagger.v1")
def TransformersTagger(starter: str, n_tags: int = 17) -> Model[List[List[str]], List[Floats2d]]:
return chain(
TransformersTokenizer(starter),
Transformer(starter),
with_array(Softmax(n_tags)),
)
Explanation: We can now combine the TransformersTokenizer and Transformer into a feed-forward network using the chain combinator. The with_array layer transforms a sequence of data into a contiguous 2d array on the way into and
out of a model.
End of explanation
from thinc.api import Config, registry
C = registry.resolve(Config().from_str(CONFIG))
model = C["model"]
optimizer = C["optimizer"]
calculate_loss = C["loss"]
cfg = C["training"]
Explanation: Training the model
Setting up model and data
Since we've registered all layers via @thinc.registry.layers, we can construct the model, its settings and other functions we need from a config (see CONFIG above). The result is a config object with a model, an optimizer, a function to calculate the loss and the training settings.
End of explanation
import ml_datasets
(train_X, train_Y), (dev_X, dev_Y) = ml_datasets.ud_ancora_pos_tags()
train_Y = list(map(model.ops.asarray, train_Y)) # convert to cupy if needed
dev_Y = list(map(model.ops.asarray, dev_Y)) # convert to cupy if needed
model.initialize(X=train_X[:5], Y=train_Y[:5])
Explanation: We’ve prepared a separate package ml-datasets with loaders for some common datasets, including the AnCora data. If we're using a GPU, calling ops.asarray on the outputs ensures that they're converted to cupy arrays (instead of numpy arrays). Calling Model.initialize with a batch of inputs and outputs allows Thinc to infer the missing dimensions.
End of explanation
def minibatch_by_words(pairs, max_words):
pairs = list(zip(*pairs))
pairs.sort(key=lambda xy: len(xy[0]), reverse=True)
batch = []
for X, Y in pairs:
batch.append((X, Y))
n_words = max(len(xy[0]) for xy in batch) * len(batch)
if n_words >= max_words:
yield batch[:-1]
batch = [(X, Y)]
if batch:
yield batch
def evaluate_sequences(model, Xs: List[Floats2d], Ys: List[Floats2d], batch_size: int) -> float:
correct = 0.0
total = 0.0
for X, Y in model.ops.multibatch(batch_size, Xs, Ys):
Yh = model.predict(X)
for yh, y in zip(Yh, Y):
correct += (y.argmax(axis=1) == yh.argmax(axis=1)).sum()
total += y.shape[0]
return float(correct / total)
Explanation: Helper functions for training and evaluation
Before we can train the model, we also need to set up the following helper functions for batching and evaluation:
minibatch_by_words: Group pairs of sequences into minibatches under max_words in size, considering padding. The size of a padded batch is the length of its longest sequence multiplied by the number of elements in the batch.
evaluate_sequences: Evaluate the model on sequences of two-dimensional arrays and return the score.
End of explanation
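# A quick toy check of minibatch_by_words (made-up sequences; assumes we can call
# the helper directly on a pair of small lists): sequences are sorted by length and
# grouped so the padded size (longest length x batch items) stays under the budget.
toy_X = [["w"] * n for n in (2, 9, 4, 7)]
toy_Y = [None, None, None, None]
for sub_batch in minibatch_by_words((toy_X, toy_Y), max_words=16):
    print([len(x) for x, _ in sub_batch])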
from tqdm.notebook import tqdm
from thinc.api import fix_random_seed
fix_random_seed(0)
for epoch in range(cfg["n_epoch"]):
batches = model.ops.multibatch(cfg["batch_size"], train_X, train_Y, shuffle=True)
for outer_batch in tqdm(batches, leave=False):
for batch in minibatch_by_words(outer_batch, cfg["words_per_subbatch"]):
inputs, truths = zip(*batch)
inputs = list(inputs)
guesses, backprop = model(inputs, is_train=True)
backprop(calculate_loss.get_grad(guesses, truths))
model.finish_update(optimizer)
optimizer.step_schedules()
score = evaluate_sequences(model, dev_X, dev_Y, cfg["batch_size"])
print(epoch, f"{score:.3f}")
Explanation: The training loop
Transformers often learn best with large batch sizes – larger than fits in GPU memory. But you don't have to backprop the whole batch at once. Here we consider the "logical" batch size (number of examples per update) separately from the physical batch size. For the physical batch size, what we care about is the number of words (considering padding too). We also want to sort by length, for efficiency.
At the end of the batch, we call the optimizer with the accumulated gradients, and advance the learning rate schedules. You might want to evaluate more often than once per epoch – that's up to you.
End of explanation |
14,316 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
word2vec
This notebook is equivalent to demo-word.sh, demo-analogy.sh, demo-phrases.sh and demo-classes.sh from the Google examples.
Step1: Training
Download some data, for example
Step2: Run word2phrase to group up similar words "Los Angeles" to "Los_Angeles"
Step3: This created a text8-phrases file that we can use as a better input for word2vec.
Note that you could easily skip this previous step and use the text data as input for word2vec directly.
Now train the word2vec model.
Step4: That created a text8.bin file containing the word vectors in a binary format.
Generate the clusters of the vectors based on the trained model.
Step5: That created a text8-clusters.txt with the cluster for every word in the vocabulary
Predictions
Step6: Import the word2vec binary file created above
Step7: We can take a look at the vocabulary as a numpy array
Step8: Or take a look at the whole matrix
Step9: We can retrieve the vector of individual words
Step10: We can calculate the distance between two or more (all combinations) words.
Step11: Similarity
We can do simple queries to retrieve words similar to "dog" based on cosine similarity
Step12: This returned a tuple with 2 items
Step13: There is a helper function to create a combined response as a numpy record array
Step14: It's easy to make that numpy array a pure Python response
Step15: Phrases
Since we trained the model with the output of word2phrase, we can ask for the similarity of "phrases", basically combined words such as "Los Angeles"
Step16: Analogies
It's possible to do more complex queries like analogies such as
Step17: Clusters
Step18: We can get the cluster number for individual words
Step19: We can get all the words grouped in a specific cluster
Step20: We can add the clusters to the word2vec model and generate a response that includes the clusters | Python Code:
%load_ext autoreload
%autoreload 2
Explanation: word2vec
This notebook is equivalent to demo-word.sh, demo-analogy.sh, demo-phrases.sh and demo-classes.sh from the Google examples.
End of explanation
import word2vec
Explanation: Training
Download some data, for example: http://mattmahoney.net/dc/text8.zip
You could use make test-data from the root of the repo.
End of explanation
word2vec.word2phrase('../data/text8', '../data/text8-phrases', verbose=True)
Explanation: Run word2phrase to group up similar words "Los Angeles" to "Los_Angeles"
End of explanation
word2vec.word2vec('../data/text8-phrases', '../data/text8.bin', size=100, binary=True, verbose=True)
Explanation: This created a text8-phrases file that we can use as a better input for word2vec.
Note that you could easily skip this previous step and use the text data as input for word2vec directly.
Now train the word2vec model.
End of explanation
word2vec.word2clusters('../data/text8', '../data/text8-clusters.txt', 100, verbose=True)
Explanation: That created a text8.bin file containing the word vectors in a binary format.
Generate the clusters of the vectors based on the trained model.
End of explanation
%load_ext autoreload
%autoreload 2
import word2vec
Explanation: That created a text8-clusters.txt with the cluster for every word in the vocabulary
Predictions
End of explanation
model = word2vec.load('../data/text8.bin')
Explanation: Import the word2vec binary file created above
End of explanation
model.vocab
Explanation: We can take a look at the vocabulary as a numpy array
End of explanation
model.vectors.shape
model.vectors
Explanation: Or take a look at the whole matrix
End of explanation
model['dog'].shape
model['dog'][:10]
Explanation: We can retrieve the vector of individual words
End of explanation
model.distance("dog", "cat", "fish")
Explanation: We can calculate the distance between two or more (all combinations) words.
End of explanation
indexes, metrics = model.similar("dog")
indexes, metrics
Explanation: Similarity
We can do simple queries to retrieve words similar to "dog" based on cosine similarity:
End of explanation
model.vocab[indexes]
Explanation: This returned a tuple with 2 items:
1. numpy array with the indexes of the similar words in the vocabulary
2. numpy array with cosine similarity to each word
We can get the words for those indexes
End of explanation
model.generate_response(indexes, metrics)
Explanation: There is a helper function to create a combined response as a numpy record array
End of explanation
model.generate_response(indexes, metrics).tolist()
Explanation: It's easy to make that numpy array a pure Python response:
End of explanation
indexes, metrics = model.similar('los_angeles')
model.generate_response(indexes, metrics).tolist()
Explanation: Phrases
Since we trained the model with the output of word2phrase, we can ask for the similarity of "phrases", basically combined words such as "Los Angeles"
End of explanation
indexes, metrics = model.analogy(pos=['king', 'woman'], neg=['man'])
indexes, metrics
model.generate_response(indexes, metrics).tolist()
Explanation: Analogies
It's possible to do more complex queries like analogies such as: king - man + woman = queen
Like the similarity queries, this method returns the indexes of the words in the vocab and the cosine metric
End of explanation
clusters = word2vec.load_clusters('../data/text8-clusters.txt')
Explanation: Clusters
End of explanation
clusters.vocab
Explanation: We can get the cluster number for individual words
End of explanation
clusters.get_words_on_cluster(90).shape
clusters.get_words_on_cluster(90)[:10]
Explanation: We can get all the words grouped in a specific cluster
End of explanation
model.clusters = clusters
indexes, metrics = model.analogy(pos=["paris", "germany"], neg=["france"])
model.generate_response(indexes, metrics).tolist()
Explanation: We can add the clusters to the word2vec model and generate a response that includes the clusters
End of explanation |
14,317 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Control Flow
Step1: NOTE on notation
* _x, _y, _z, ...
Step2: Q5. Given x, return the truth value of NOT x element-wise. | Python Code:
from __future__ import print_function
import tensorflow as tf
import numpy as np
from datetime import date
date.today()
author = "kyubyong. https://github.com/Kyubyong/tensorflow-exercises"
tf.__version__
np.__version__
sess = tf.InteractiveSession()
Explanation: Control Flow
End of explanation
x = tf.constant([True, False, False], tf.bool)
y = tf.constant([True, True, False], tf.bool)
Explanation: NOTE on notation
* _x, _y, _z, ...: NumPy 0-d or 1-d arrays
* _X, _Y, _Z, ...: NumPy 2-d or higher dimensional arrays
* x, y, z, ...: 0-d or 1-d tensors
* X, Y, Z, ...: 2-d or higher dimensional tensors
Control Flow Operations
Q1. Let x and y be random 0-D tensors. Return x + y
if x < y and x - y otherwise.
Q2. Let x and y be 0-D int32 tensors randomly selected from 0 to 5. Return x + y * 2 if x < y, x - y elif x > y, 0 otherwise.
Q3. Let X be a tensor [[-1, -2, -3], [0, 1, 2]] and Y be a tensor of zeros with the same shape as X. Return a boolean tensor that yields True if X equals Y elementwise.
Logical Operators
Q4. Given x and y below, return the truth value x AND/OR/XOR y element-wise.
End of explanation
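# One possible sketch of answers to Q1 and Q4 above (not the official solutions;
# it uses the TF1-style API, the InteractiveSession created earlier, and the
# boolean x and y defined above).
x_q1 = tf.random_uniform([])
y_q1 = tf.random_uniform([])
out_q1 = tf.cond(x_q1 < y_q1, lambda: x_q1 + y_q1, lambda: x_q1 - y_q1)
print(out_q1.eval())
print(tf.logical_and(x, y).eval())   # x AND y -> [True False False]
print(tf.logical_or(x, y).eval())    # x OR y  -> [True True False]
print(tf.logical_xor(x, y).eval())   # x XOR y -> [False True False]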
x = tf.constant([True, False, False], tf.bool)
Explanation: Q5. Given x, return the truth value of NOT x element-wise.
End of explanation |
14,318 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Transfer Learning
Most of the time you won't want to train a whole convolutional network yourself. Modern ConvNets training on huge datasets like ImageNet take weeks on multiple GPUs. Instead, most people use a pretrained network either as a fixed feature extractor, or as an initial network to fine tune. In this notebook, you'll be using VGGNet trained on the ImageNet dataset as a feature extractor. Below is a diagram of the VGGNet architecture.
<img src="assets/cnnarchitecture.jpg" width=700px>
VGGNet is great because it's simple and has great performance, coming in second in the ImageNet competition. The idea here is that we keep all the convolutional layers, but replace the final fully connected layers with our own classifier. This way we can use VGGNet as a feature extractor for our images then easily train a simple classifier on top of that. What we'll do is take the first fully connected layer with 4096 units, including thresholding with ReLUs. We can use those values as a code for each image, then build a classifier on top of those codes.
You can read more about transfer learning from the CS231n course notes.
Pretrained VGGNet
We'll be using a pretrained network from https
Step1: Flower power
Here we'll be using VGGNet to classify images of flowers. To get the flower dataset, run the cell below. This dataset comes from the TensorFlow inception tutorial.
Step2: ConvNet Codes
Below, we'll run through all the images in our dataset and get codes for each of them. That is, we'll run the images through the VGGNet convolutional layers and record the values of the first fully connected layer. We can then write these to a file for later when we build our own classifier.
Here we're using the vgg16 module from tensorflow_vgg. The network takes images of size $224 \times 224 \times 3$ as input. Then it has 5 sets of convolutional layers. The network implemented here has this structure (copied from the source code)
Step3: Below I'm running images through the VGG network in batches.
Exercise
Step4: Building the Classifier
Now that we have codes for all the images, we can build a simple classifier on top of them. The codes behave just like normal input into a simple neural network. Below I'm going to have you do most of the work.
Step5: Data prep
As usual, now we need to one-hot encode our labels and create validation/test sets. First up, creating our labels!
Exercise
Step6: Now you'll want to create your training, validation, and test sets. An important thing to note here is that our labels and data aren't randomized yet. We'll want to shuffle our data so the validation and test sets contain data from all classes. Otherwise, you could end up with testing sets that are all one class. Typically, you'll also want to make sure that each smaller set has the same distribution of classes as the whole data set. The easiest way to accomplish both these goals is to use StratifiedShuffleSplit from scikit-learn.
You can create the splitter like so
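For instance, something along these lines (a sketch using scikit-learn's StratifiedShuffleSplit; it assumes the codes and labels arrays built in the earlier steps):
```python
from sklearn.model_selection import StratifiedShuffleSplit

ss = StratifiedShuffleSplit(n_splits=1, test_size=0.2)
# split() yields stratified (train_idx, rest_idx) index arrays you can use to
# slice codes and labels while keeping the class distribution.
train_idx, rest_idx = next(ss.split(codes, labels))
```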
Step7: If you did it right, you should see these sizes for the training sets
Step9: Batches!
Here is just a simple way to do batches. I've written it so that it includes all the data. Sometimes you'll throw out some data at the end to make sure you have full batches. Here I just extend the last batch to include the remaining data.
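A sketch of what such a helper could look like (my own version, written to match the description above rather than any official solution):
```python
def get_batches(x, y, n_batches=10):
    # Yield batches; the last batch picks up any leftover examples.
    batch_size = len(x) // n_batches
    for ii in range(0, n_batches * batch_size, batch_size):
        if ii != (n_batches - 1) * batch_size:
            X_batch, Y_batch = x[ii: ii + batch_size], y[ii: ii + batch_size]
        else:
            X_batch, Y_batch = x[ii:], y[ii:]
        yield X_batch, Y_batch
```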
Step10: Training
Here, we'll train the network.
Exercise
Step11: Testing
Below you see the test accuracy. You can also see the predictions returned for images.
Step12: Below, feel free to choose images and see how the trained classifier predicts the flowers in them. | Python Code:
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
vgg_dir = 'tensorflow_vgg/'
# Make sure vgg exists
if not isdir(vgg_dir):
raise Exception("VGG directory doesn't exist!")
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(vgg_dir + "vgg16.npy"):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='VGG16 Parameters') as pbar:
urlretrieve(
'https://s3.amazonaws.com/content.udacity-data.com/nd101/vgg16.npy',
vgg_dir + 'vgg16.npy',
pbar.hook)
else:
print("Parameter file already exists!")
Explanation: Transfer Learning
Most of the time you won't want to train a whole convolutional network yourself. Modern ConvNets training on huge datasets like ImageNet take weeks on multiple GPUs. Instead, most people use a pretrained network either as a fixed feature extractor, or as an initial network to fine tune. In this notebook, you'll be using VGGNet trained on the ImageNet dataset as a feature extractor. Below is a diagram of the VGGNet architecture.
<img src="assets/cnnarchitecture.jpg" width=700px>
VGGNet is great because it's simple and has great performance, coming in second in the ImageNet competition. The idea here is that we keep all the convolutional layers, but replace the final fully connected layers with our own classifier. This way we can use VGGNet as a feature extractor for our images then easily train a simple classifier on top of that. What we'll do is take the first fully connected layer with 4096 units, including thresholding with ReLUs. We can use those values as a code for each image, then build a classifier on top of those codes.
You can read more about transfer learning from the CS231n course notes.
Pretrained VGGNet
We'll be using a pretrained network from https://github.com/machrisaa/tensorflow-vgg. Make sure to clone this repository to the directory you're working from. You'll also want to rename it so it has an underscore instead of a dash.
git clone https://github.com/machrisaa/tensorflow-vgg.git tensorflow_vgg
This is a really nice implementation of VGGNet, quite easy to work with. The network has already been trained and the parameters are available from this link. You'll need to clone the repo into the folder containing this notebook. Then download the parameter file using the next cell.
End of explanation
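# A conceptual sketch of the plan described above, with random stand-in numbers
# (no real images or VGG involved; the sklearn classifier is just an illustration):
# once every image is reduced to a 4096-d "code", a small classifier on top of
# those codes is cheap to train.
import numpy as np
from sklearn.linear_model import LogisticRegression
fake_codes = np.random.rand(32, 4096)              # pretend relu6 outputs
fake_labels = np.random.randint(0, 5, size=32)     # pretend flower classes
clf_sketch = LogisticRegression().fit(fake_codes, fake_labels)
print(clf_sketch.score(fake_codes, fake_labels))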
import tarfile
dataset_folder_path = 'flower_photos'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile('flower_photos.tar.gz'):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='Flowers Dataset') as pbar:
urlretrieve(
'http://download.tensorflow.org/example_images/flower_photos.tgz',
'flower_photos.tar.gz',
pbar.hook)
if not isdir(dataset_folder_path):
with tarfile.open('flower_photos.tar.gz') as tar:
tar.extractall()
tar.close()
Explanation: Flower power
Here we'll be using VGGNet to classify images of flowers. To get the flower dataset, run the cell below. This dataset comes from the TensorFlow inception tutorial.
End of explanation
import os
import numpy as np
import tensorflow as tf
from tensorflow_vgg import vgg16
from tensorflow_vgg import utils
data_dir = 'flower_photos/'
contents = os.listdir(data_dir)
classes = [each for each in contents if os.path.isdir(data_dir + each)]
Explanation: ConvNet Codes
Below, we'll run through all the images in our dataset and get codes for each of them. That is, we'll run the images through the VGGNet convolutional layers and record the values of the first fully connected layer. We can then write these to a file for later when we build our own classifier.
Here we're using the vgg16 module from tensorflow_vgg. The network takes images of size $224 \times 224 \times 3$ as input. Then it has 5 sets of convolutional layers. The network implemented here has this structure (copied from the source code):
```
self.conv1_1 = self.conv_layer(bgr, "conv1_1")
self.conv1_2 = self.conv_layer(self.conv1_1, "conv1_2")
self.pool1 = self.max_pool(self.conv1_2, 'pool1')
self.conv2_1 = self.conv_layer(self.pool1, "conv2_1")
self.conv2_2 = self.conv_layer(self.conv2_1, "conv2_2")
self.pool2 = self.max_pool(self.conv2_2, 'pool2')
self.conv3_1 = self.conv_layer(self.pool2, "conv3_1")
self.conv3_2 = self.conv_layer(self.conv3_1, "conv3_2")
self.conv3_3 = self.conv_layer(self.conv3_2, "conv3_3")
self.pool3 = self.max_pool(self.conv3_3, 'pool3')
self.conv4_1 = self.conv_layer(self.pool3, "conv4_1")
self.conv4_2 = self.conv_layer(self.conv4_1, "conv4_2")
self.conv4_3 = self.conv_layer(self.conv4_2, "conv4_3")
self.pool4 = self.max_pool(self.conv4_3, 'pool4')
self.conv5_1 = self.conv_layer(self.pool4, "conv5_1")
self.conv5_2 = self.conv_layer(self.conv5_1, "conv5_2")
self.conv5_3 = self.conv_layer(self.conv5_2, "conv5_3")
self.pool5 = self.max_pool(self.conv5_3, 'pool5')
self.fc6 = self.fc_layer(self.pool5, "fc6")
self.relu6 = tf.nn.relu(self.fc6)
```
So what we want are the values of the first fully connected layer, after being ReLUd (self.relu6). To build the network, we use
with tf.Session() as sess:
vgg = vgg16.Vgg16()
input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
with tf.name_scope("content_vgg"):
vgg.build(input_)
This creates the vgg object, then builds the graph with vgg.build(input_). Then to get the values from the layer,
feed_dict = {input_: images}
codes = sess.run(vgg.relu6, feed_dict=feed_dict)
End of explanation
def buildNetwork():
# create the session directly (not via `with`) so it is still open when returned
sess = tf.Session()
vgg = vgg16.Vgg16()
input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
with tf.name_scope("content_vgg"):
vgg.build(input_)
return input_, vgg, sess
#inputPlacheholder, vggObject, modelSession = buildNetwork()
#example_im = np.random.rand(20, 224, 224, 3)
#modelSession.run(vggObject.relu6, feed_dict={inputPlacheholder:example_im})
# Set the batch size higher if you can fit it in your GPU memory
batch_size = 10
codes_list = []
labels = []
batch = []
codes = None
with tf.Session() as sess:
# TODO: Build the vgg network here
inputPlacheholder, vggObject, modelSession = buildNetwork()
for each in classes:
print("Starting {} images".format(each))
class_path = data_dir + each
files = os.listdir(class_path)
for ii, file in enumerate(files, 1):
# Add images to the current batch
# utils.load_image crops the input images for us, from the center
img = utils.load_image(os.path.join(class_path, file))
batch.append(img.reshape((1, 224, 224, 3)))
labels.append(each)
# Running the batch through the network to get the codes
if ii % batch_size == 0 or ii == len(files):
# Image batch to pass to VGG network
images = np.concatenate(batch)
# TODO: Get the values from the relu6 layer of the VGG network
codes_batch = modelSession.run(vggObject.relu6, feed_dict={inputPlacheholder:images})
# Here I'm building an array of the codes
if codes is None:
codes = codes_batch
else:
codes = np.concatenate((codes, codes_batch))
# Reset to start building the next batch
batch = []
print('{} images processed'.format(ii))
# write codes to file
with open('codes', 'wb') as f: # binary mode: the codes are raw float32 bytes
codes.tofile(f)
# write labels to file
import csv
with open('labels', 'w') as f:
writer = csv.writer(f, delimiter='\n')
writer.writerow(labels)
Explanation: Below I'm running images through the VGG network in batches.
Exercise: Below, build the VGG network. Also get the codes from the first fully connected layer (make sure you get the ReLUd values).
End of explanation
# read codes and labels from file
import csv
with open('labels') as f:
reader = csv.reader(f, delimiter='\n')
labels = np.array([each for each in reader if len(each) > 0]).squeeze()
with open('codes', 'rb') as f: # binary mode to match how the codes were written
codes = np.fromfile(f, dtype=np.float32)
codes = codes.reshape((len(labels), -1))
set(labels)
Explanation: Building the Classifier
Now that we have codes for all the images, we can build a simple classifier on top of them. The codes behave just like normal input into a simple neural network. Below I'm going to have you do most of the work.
End of explanation
from sklearn.preprocessing import LabelBinarizer
encoder = LabelBinarizer()
encoder.fit(list(set(labels)))
labels_vecs = encoder.transform(labels).astype(np.float32)
Explanation: Data prep
As usual, now we need to one-hot encode our labels and create validation/test sets. First up, creating our labels!
Exercise: From scikit-learn, use LabelBinarizer to create one-hot encoded vectors from the labels.
End of explanation
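A quick sanity check of what the binarizer produced can be helpful (purely illustrative):
encoder.classes_ # the class order of the one-hot columns
labels_vecs[0], labels[0] # one encoded row next to its original label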
from sklearn.model_selection import StratifiedShuffleSplit
#Define first splitter (train/val+test)
ss = StratifiedShuffleSplit(n_splits=1, test_size=0.2)
for train_index, test_index in ss.split(codes, labels_vecs):
train_x = codes[train_index]
train_y = labels_vecs[train_index]
valtest_x = codes[test_index]
valtest_y = labels_vecs[test_index]
#Validation / Test splitting (50/50)
ss2 = StratifiedShuffleSplit(n_splits=1, test_size=0.5)
for val_index, test_index in ss2.split(valtest_x, valtest_y):
val_x = valtest_x[val_index] # index into the held-out split, not the full codes array
val_y = valtest_y[val_index]
test_x = valtest_x[test_index]
test_y = valtest_y[test_index]
#train_x, train_y =
#val_x, val_y =
#test_x, test_y =
print("Train shapes (x, y):", train_x.shape, train_y.shape)
print("Validation shapes (x, y):", val_x.shape, val_y.shape)
print("Test shapes (x, y):", test_x.shape, test_y.shape)
Explanation: Now you'll want to create your training, validation, and test sets. An important thing to note here is that our labels and data aren't randomized yet. We'll want to shuffle our data so the validation and test sets contain data from all classes. Otherwise, you could end up with testing sets that are all one class. Typically, you'll also want to make sure that each smaller set has the same distribution of classes as the whole data set. The easiest way to accomplish both these goals is to use StratifiedShuffleSplit from scikit-learn.
You can create the splitter like so:
ss = StratifiedShuffleSplit(n_splits=1, test_size=0.2)
Then split the data with
splitter = ss.split(x, y)
ss.split returns a generator of indices. You can pass the indices into the arrays to get the split sets. The fact that it's a generator means you either need to iterate over it, or use next(splitter) to get the indices. Be sure to read the documentation and the user guide.
Exercise: Use StratifiedShuffleSplit to split the codes and labels into training, validation, and test sets.
End of explanation
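Before moving on to the classifier, here is a minimal sketch of pulling a single split straight out of that generator (the variable names are just illustrative):
splitter = StratifiedShuffleSplit(n_splits=1, test_size=0.2)
tr_idx, te_idx = next(splitter.split(codes, labels_vecs))
codes[tr_idx].shape, codes[te_idx].shape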
inputs_ = tf.placeholder(tf.float32, shape=[None, codes.shape[1]])
labels_ = tf.placeholder(tf.int64, shape=[None, labels_vecs.shape[1]])
fc_dim = 256
# TODO: Classifier layers and operations
#First FC layer
fc_weights = tf.Variable(tf.truncated_normal((4096,fc_dim),stddev=0.01))
fc_biases = tf.Variable(tf.zeros(fc_dim))
fc = tf.nn.relu(
tf.add(tf.matmul(inputs_, fc_weights),fc_biases)
)
#Readout layer
classifier_weights = tf.Variable(tf.truncated_normal((fc_dim,5),stddev=0.01))
classifier_biases = tf.Variable(tf.zeros(5))
logits = tf.add(tf.matmul(fc, classifier_weights), classifier_biases)
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels_))
optimizer = tf.train.AdamOptimizer().minimize(cost)
# Operations for validation/test accuracy
predicted = tf.nn.softmax(logits)
correct_pred = tf.equal(tf.argmax(predicted, 1), tf.argmax(labels_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
Explanation: If you did it right, you should see these sizes for the training sets:
Train shapes (x, y): (2936, 4096) (2936, 5)
Validation shapes (x, y): (367, 4096) (367, 5)
Test shapes (x, y): (367, 4096) (367, 5)
Classifier layers
Once you have the convolutional codes, you just need to build a classifier from some fully connected layers. You use the codes as the inputs and the image labels as targets. Otherwise the classifier is a typical neural network.
Exercise: With the codes and labels loaded, build the classifier. Consider the codes as your inputs, each of them are 4096D vectors. You'll want to use a hidden layer and an output layer as your classifier. Remember that the output layer needs to have one unit for each class and a softmax activation function. Use the cross entropy to calculate the cost.
End of explanation
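As an aside, the same classifier can be written more compactly with tf.layers; this is only a sketch of an alternative, left commented out so it does not shadow the explicit-Variable version used by the rest of the notebook:
# fc = tf.layers.dense(inputs_, 256, activation=tf.nn.relu)
# logits = tf.layers.dense(fc, labels_vecs.shape[1], activation=None)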
def get_batches(x, y, n_batches=10):
"""Return a generator that yields batches from arrays x and y."""
batch_size = len(x)//n_batches
for ii in range(0, n_batches*batch_size, batch_size):
# If we're not on the last batch, grab data with size batch_size
if ii != (n_batches-1)*batch_size:
X, Y = x[ii: ii+batch_size], y[ii: ii+batch_size]
# On the last batch, grab the rest of the data
else:
X, Y = x[ii:], y[ii:]
# I love generators
yield X, Y
for batch in get_batches(train_x, train_y):
X, Y = batch
print(X)
print(Y)
break
Explanation: Batches!
Here is just a simple way to do batches. I've written it so that it includes all the data. Sometimes you'll throw out some data at the end to make sure you have full batches. Here I just extend the last batch to include the remaining data.
End of explanation
EPOCHS = 10
saver = tf.train.Saver()
os.makedirs('checkpoints', exist_ok=True) # make sure the checkpoint directory exists before saving
init = tf.global_variables_initializer()
with tf.Session() as sess:
#Run the global variables initializer
sess.run(init)
for epoch in range(EPOCHS):
for batch in get_batches(train_x, train_y):
#Extract x and y for the current batch
batch_x, batch_y = batch
#launch train
sess.run(optimizer, feed_dict={inputs_: batch_x, labels_: batch_y})
#Print training loss for epoch
c = sess.run(cost, feed_dict={inputs_: batch_x, labels_: batch_y})
print("Epoch:", '%04d' % (epoch+1), "cost=", "{:.9f}".format(c))
val_accuracy = sess.run(accuracy, feed_dict={inputs_: val_x, labels_: val_y})
print(val_accuracy)
#print("Validation accuracy: {0}".format(val_accuracy))
# TODO: Your training code here
saver.save(sess, "checkpoints/flowers.ckpt")
Explanation: Training
Here, we'll train the network.
Exercise: So far we've been providing the training code for you. Here, I'm going to give you a bit more of a challenge and have you write the code to train the network. Of course, you'll be able to see my solution if you need help. Use the get_batches function I wrote before to get your batches like for x, y in get_batches(train_x, train_y). Or write your own!
End of explanation
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
feed = {inputs_: test_x,
labels_: test_y}
test_acc = sess.run(accuracy, feed_dict=feed)
print("Test accuracy: {:.4f}".format(test_acc))
%matplotlib inline
import matplotlib.pyplot as plt
from scipy.ndimage import imread # removed in newer SciPy releases; plt.imread or imageio.imread work as replacements
Explanation: Testing
Below you see the test accuracy. You can also see the predictions returned for images.
End of explanation
test_img_path = 'flower_photos/roses/10894627425_ec76bbc757_n.jpg'
test_img = imread(test_img_path)
plt.imshow(test_img)
# Run this cell if you don't have a vgg graph built
if 'vgg' in globals():
print('"vgg" object already exists. Will not create again.')
else:
#create vgg
with tf.Session() as sess:
input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
vgg = vgg16.Vgg16()
vgg.build(input_)
with tf.Session() as sess:
img = utils.load_image(test_img_path)
img = img.reshape((1, 224, 224, 3))
feed_dict = {input_: img}
code = sess.run(vgg.relu6, feed_dict=feed_dict)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
feed = {inputs_: code}
prediction = sess.run(predicted, feed_dict=feed).squeeze()
print(encoder.classes_[np.argmax(prediction)]) # use the binarizer's class order, which matches the one-hot columns
plt.imshow(test_img)
plt.barh(np.arange(5), prediction)
_ = plt.yticks(np.arange(5), encoder.classes_) # `encoder` is the LabelBinarizer fit earlier
Explanation: Below, feel free to choose images and see how the trained classifier predicts the flowers in them.
End of explanation |
14,319 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
本章讨论的话题是接口,从鸭子类型代表特征动态协议,到使接口更明确,能验证是否符合规定的抽象基类(Abstract Base Class,ABC)
在 Python 中 上章所说的鸭子类型是接口的常规方式,新只是是抽象基类和类型检查。Python 语言诞生 15 年之后,Python 2.6 才引入抽象基类。
本章先说明 Python 社区以往对接口的不严谨理解:部分实现接口通常被认为是可接受的。我们通过几个示例强调鸭子类型的动态本性,从而澄清这一点
接着,通过 Alex Martelili 写的一篇短文,对抽象基类作介绍,还为 Python 编程下一个新趋势下定义。本章余下的内容专门讲解抽象基类。首先,本章说明抽象基类的常见用途:实现接口时作为超类使用。
然后,说明抽象基类如何检查具体子类是否符合接口定义,以及如何使用注册机制声明一个类实现了某个接口,而不进行子类化操作。最后,说明如何让抽象基类自动 ”识别“ 任何符合接口的类 -- 不进行子类化或注册
我们将实现一个新抽象基类,看看它的运作方式。但是,作者和 Alex Martelli 都不建议你自己编写抽象基类,因为容易过度设计
抽象基类与描述符和元类一样,是用于构建框架的工具。因此,只有少数 Python 开发者编写的抽象基类不会对用户施加不必要的限制,让他们做无用功
下面从 Python 风格探索接口
Python 文化中的接口和协议
在引入抽象基类之前,Python 已经非常成功了,即使现在也很少有代码使用抽象基类。在第一章就讨论了鸭子类型和协议,在上一章,我们将协议定义为非正式的接口,是让 Python 这种动态类型语言实现多态的方式。
接口在动态类型语言是怎么运作的呢?首先,基本的事实是,Python 语言没有 interface 关键字,而且除了抽象基类,每个类都有接口:类实现或继承的公开属性(方法或数据的属性),包括特殊方法,如 __getitem__ 或 __add__
按照定义,受保护的属性和私有属性不在接口中:即便有“受保护”属性也只是采用命名约定实现的(单个前导下划线)私有属性也可以轻松的访问(第 9 章),原因也是如此,不要违背这些约定
另一方面,不要觉得把公开数据属性放入对象接口中不妥,因为如果需要,总能实现读值方法和设值方法,把数据属性变成特性,使用 obj.attr 语法的客户代码不会受到影响。Vector2d 类就是这么做的
下面的例子 x,y 是公开属性
Step1: 在第 9 章中,我们将其变成了只读属性,这是重大的重构,但是 Vector2d 的接口基本没变,用户仍然能读取 my_vector.x 和 my_vector.y。下面是使用特性实现 x,y(第 9 章的代码)
Step2: 关于接口,这里有个实用的补充定义:对象公开方法的子集,让对象在系统中扮演特定的角色。Python 文档中的 “文件类对象” 或 “可迭代对象” 就是这个意思,这种说法指的不是特定的类。接口是实现特定角色的方法集合,这样理解正是 Smalltalk 程序员说的协议,其他动态预言社区都借鉴了这个术语,协议与继承没有关系。一个类可能会实现多个接口,从而让实例扮演多个角色
协议是接口,但不是正式的(只由文档和约定定义),因此协议不能像正式接口那样施加限制(本章后面会说明抽象基类对接口一致性的强制)。一个类可能只实现部分接口,这是允许的。有时,某些 API 只要求 “文件类对象” 返回字节序列 .read() 方法。在特定的上下文中可能需要其他文件操作方法,也可能不需要
作者写书时候,Python 3 中的 memoryview 的文档说,它能处理“支持缓冲协议的对象“,不过缓冲协议的文档是 C API 的。 bytearray 的构造方法接受”一个符合缓冲接口的对象”。如今,文档正在改变用词,使用“字节序列类对象”这样更加友好的表述。我指出这一点是为了强调,对 Python 程序员来说,'X' 类对象 和 'X' 协议和 'X' 接口都是一个意思
序列协议是 Python 是最基础的协议之一。即使对象只实现了那个协议的最基本的一部分,解释器也能负责任的处理,如下一节所示。
Python 喜欢序列
Python 数据模型的哲学是尽量支持基本协议。对序列来说,即便是最简单的实现,Python 也会力求做的最好
抽象基类 Sequence 的正式接口如下:__getitem__, __contains__, __iter__, __reversed__, index, count
下面的例子 Foo 类没有继承 abc.Sequence,而且只实现了序列协议的一个方法
Step3: 虽然没有 __iter__ 方法,但是 Foo 实例是可迭代对象,因为发现有 __getitem__ 方法时,Python 会调用它,传入从 0 开始的整数索引,尝试迭代对象(这是一种后备机制)。尽管没有实现 __contains__ 方法,但是 Python 足够只能,能迭代 Foo实例,因此也能使用 in 运算符:Python 会做全面检查,看看有没有指定的元素
综上,鉴于序列协议的重要性,如果没有 __iter__ 和 __contains__ 方法,Python 会调用 __getitem__ 方法,设法让迭代和 in 运算符可用
第一章定义的 FrenchDeck 类也没有继承 abc.Sequence,但是实现了序列协议的两个方法
Step4: 下面分析一个示例,着重强调协议的动态本性
使用猴子补丁在运行时实现协议
在上面 FrenchDeck 类有一个重大的缺陷:无法洗牌。几年前,第一次编写 FrenchDeck 示例时,我发现了 shuffle 方法。后来。我对 Python 风格有了深刻理解,我发现如果 FrenchDeck 实例的行为像序列,那么就不需要 shuffle 方法,因为已经有 random.shuffle 函数可用,文档中说它的作用就是就地打乱序列
Step5: 然而,要打乱 FrenckDeck 函实例,会出现异常,如下所示:
Step6: 错误消息很明确,FrenchDeck 对象不支持为元素赋值。这个问题的原因是,FrenchDeck 只实现了不可变的序列协议,可变序列还需要提供 __setitem__ 方法。
Python 是动态语言,因此我们可以在运行时修正这个问题,甚至还可以在交互式控制台中,修正方法如下所示:
Step7: __setitem__ 在语言参考中使用的参数是 self, key, value,我们这里用的是 deck, position 和 card,这是为了提醒我们,每个 Python 方法说到底就是一个普通的函数,把第一个参数命名为 self 是一种约定。在控制台会话使用那几个参数没问题,但是在 Python 源码文件中最好按照文档那样使用 self, key, value
这里的关键是,set_card 函数要知道 deck 对象有一个名为 _cards 的属性,而且 _cards 必须是可变序列,然后吧 setcard 方法赋给特殊方法 __setitem__,从而把它依附到 FrenchDeck 类上。这种技术叫猴子补丁:在运行时修改类或模块,而不改动源码,猴子补丁很强大,但是打补丁的代码与要补丁的程序耦合十分紧密,而且往往要处理隐藏和没有文档的部分
除了举例的猴子补丁之外,上面例子还强调了协议是动态的:random.shuffle() 函数不关心参数类型,只要那个对象实现了部分可变序列协议即可。即便对象一开始没有所需的方法也没关系,后来再提供也行
目前,本章讨论的主题是 “鸭子类型”:对象的类型无关紧要,只要实现了特定的协议即可。
前面给出的抽象基类 Sequence 是为了展示协议与抽象基类的文档中所说的接口之间的关系,但是目前为止还没有真正的实现抽象基类
在下面的几节中,我们将直接使用抽象基类,而不是将其视为文档
Alex Martelli 的水禽
Alex Martelli 讲故事:
wiki 说我协助传播了“鸭子类型”这种言简意赅的说法(即斛律对象的真正类型,转而关注对象有没有实现所需的方法,签名和语义)
对 Python 来说,这基本上指避免使用 instance 检查对象类型(更别提 type(foo) is bar 这种更糟糕的检查方式了,这样做没有任何好处,甚至禁止最简单的继承方式)。
总的来说,鸭子类型很有用,但是有的时候继承的方式更好。
在生物学中,平行进化会导致不相关的种类产生相似的属性,形态和举止方面都是如此,但是生态位的相似性是偶然的,不同的种类仍然属于不同的生态位。编程语言也有这种“偶然的相似性”,比如下面经典的面向对象编程示例:
Step8: 显然,因为 x,y 两个对象刚好都有一个名为 draw 的方法,而且调用时候不用传入参数,即 x.draw() 和 y.draw(),远远不能确保二者可以相互调用,或者具有相同的抽象。也就是说,从这样的调用中不能推导出语义的相似性。相反,我们需要一位渊博的程序员主动把这种等价维持在一定层次上。
例如,草雁(以前认为与其他鹅类比较类似)和麻鸭(以前认为与其他鸭类比较类似)现在被分到 Tadornidae 亚科(表明二者相似性比鸭科其他动物高,因为他们的共同祖先比较接近)。此外,DNA 分析表明,白翅木鸭与美洲家鸭(属于麻鸭)不是很像,至少没有形态和举止看起来那么像,因此把木鸭单独分成了一组,完全不再 Tadornidae 亚科中。
知道这些有什么用呢?视情况而定,比如,逮到一直水禽之后,决定如何烹制才最美味时,显著的特征(不是全部,例如一身羽毛并不重要)主要是口感和风味(过时的表征学),这比支序学重要的多。但是其他方面,如对不同病原体的抗性(圈养水禽还是放养),DNA 接近性的作用就大多了。
因此,参照水禽的分类学演化,我建议在鸭子类型基础上增加白鹅类型(goose typing)。
白鹅类型是指,只要 cls 是抽象基类,即 cls 元类是 abcABCMeta 就可以使用 isinstance(obj, cls)。
colections.abc 中有很多有用的抽象基类(Python 标准库的 numbers 模块还有一些)。
与具体类相比,抽象基类有很多理论上的优点,Python 抽象基类还有一个重要的实用优势:可以使用 register 类方法在终端用户的代码把某个类 “声明” 为一个抽象基类的 “虚拟” 子类(为此,被注册的类必须满足抽象基类对方法名称和签名的要求,最重要的是要满足底层语义契约;但是,开发那个类时不用了解抽象基类,更不用继承抽象基类)。这大大打破了严格的强耦合,与面向对象编程人员掌握的知识有很大出入,因此使用继承时要小心
有时,为了让抽象基类识别子类,甚至不用注册。
其实,抽象基类的本质就是几个特殊方法。例如:
Step9: 可以看出,无需注册,abc.Sized 也能把 Struggle 识别为自己的子类,只要实现了特殊方法 __len__ 即可(要使用正确的语法和语义实现,前者要求没有参数,后者要求返回一个非负整数,指明对象的长度;如果不使用规定的语法和语义实现 __len__ 方法,会导致非常严重的问题)
最后要说的是:如果实现的类具体实现了 numbers, collections.abc 或者其他框架中的抽象基类的概念,要么继承相应的抽象基类(必要时),要么类注册到相应的抽象基类中。开始开发程序时,不要使用提供注册功能的库或框架,要自己动手注册,如果必须检查参数的类型(这是最常见的),例如检查是不是 “序列”,那么就这么做:
isinstance(the_arg, collections.abc.Sequence)
此外,不要在生产代码中定义抽象基类(或元类)。。。如果你很想这样做,我打赌你可能要找茬(= =),刚拿到新工具的人都有大干一场的冲动。如果你能避开这些深奥的概念,你(以及未来的代码维护者)的生活将更加愉快,因为代码会变得简洁明了,讲完啦~
除了提出 ”白鹅类型“ 之外,Alex 还指出,继承抽象基类很简单,只需要实现所需的方法,这样也能明确表明开发者意图,这一意图还能通过注册虚拟子类实现。
此外,使用 isinstance 和 issubclass 测试抽象基类更为人接受,过去,这两个函数用来测试鸭子类型,但用于抽象基类会更加灵活。毕竟,如果某个组件没有继承抽象基类,事后还可以注册,让显式类型检查通过
然而即使是抽象基类,也不能滥用 isinstance 检查,用多了可能导致代码很糟糕。在一连串 if/elif/elif 中使用 isinstance 做检查,然后根据对象类型做不同操作,是十分糟糕的做法,此时应该使用堕胎,即使用一定的方式定义类,让解释器把调用分派给正确的方法,而不是采用 if/elif/elif 块硬编码分派逻辑
具体使用时,上述建议有一个常见的例外:有些 Python API 接受一个字符串或字符串序列,如果只有一个字符串,可以把它放到列表中,从而简化处理。因为字符串是序列类型,所以为了把他和其它不可变序列区分,最简单的方式是使用 isinstance(x, str) 检查
可惜,Python 3.4 没有能把字符串和元组或者其他不可变序列区分开的抽象基类,因此必须测试 str。在 Python 2 中,basestr 类型可以协助这样的测试。basestr 不是抽象基类,但是他是 str 和 unicode 的超类。然而,Python 3 却把 basestr 去掉了,奇怪的是,Python 3 中有个 collections.abc.ByteString 类型,但是只能检测 byte 和 bytearray 类型
另一方面,如果必须强制执行 API 契约,通常可以使用 isinstance 检查抽象基类。“老兄,如果你想调用我,必须实现这个”,正如本书审校 Lennart Regebro 所说。这对采用插入式架构的系统来说特别有用。在框架之外,鸭子类型通常比类型检查更简单灵活
例如,本书有几个示例要使用序列,把他当成列表,我没有检查参数的类型是不是 list,而是直接接受参数,立即使用它构建一个列表。这样,我就可以接受任何可迭代对象,如果参数不是可迭代对象,调用立即失败,并提供清晰的错误信息,本章后面有一个这样的例子。当然,如果序列太长或者需要就地修改序列而导致无法复制参数,就不能采用这种方式,此时,使用 isinstance(x, abc.MutableSequence) 更好,如果可以接受任何可迭代对象,也可以调用 iter(x) 函数获得一个迭代器,第 14 章详细讲
模仿 collections.namedtuple 处理 field_names 参数的方式也是一例,field_names 的值可以是单个字符串,以空格或者逗号分隔符,也可以是一个标识符序列,此时可能想使用 isinstance,但是我会使用鸭子类型,如下所示:
使用鸭子类型处理单个字符串或者由字符串组成的可迭代对象:
Step10: 最后再 Alex Martelli 强调一下建议尽量不要自己实现抽象基类,容易造成灾难性后果,下面通过实例讲解白鹅类型:
定义抽象基类的子类
我们按照 Martelli 的建议,先利用现有抽象基类(collections.MutableSequence),然后再自己定义。在下面例子,我们将 FrenchDeck2 声明为collections.MutableSequence 的子类
Step11: Python 不会在模块导入时候检查抽象方法的实现,而是在实例化 FrenchDeck2 类时才会真正的检查。因此,如果没有正确实现某个抽象方法,Python 会抛出 TypeError 异常,并把错误消息设为 “Can't instantiate abstract class FrenchDeck2 with abstract methods __delitem__, insert"。正是这个原因,即使 FrenchDeck2 类不需要 __delitem__ 和 insert 提供的行为,但是因为继承 MutableSequence 抽象基类,必须实现他们。
FrenchDeck2 从 Sequence(因为 MultableSequence 继承了 Sequence) 继承了几个拿来即用的具体方法 __contains__, __iter__, __reversed__, index 和 count。FrenchDeck2 从 MutableSequence 继承了 append,extend,pop,remove 和 __iadd__
在 collections.abc 中,每个抽象基类的具体方法都是作为类的公开接口实现的,因此不用知道实例的内部接口
要想实现子类,可以覆盖抽象基类中继承的方法,以更高效的方式实现,例如,__contains__ 方法会全面扫描序列,如果面门定义的序列按顺序保存元素,就可以重新实现 __contain__ 方法,使用 bisect 函数做二分查找,提高速度
标准库中的抽象基类
Python 2.6 开始,标准库提供了抽象基类,大多数抽象基类在 collections.abc 模块中定义,不过其他地方也有,例如 numbers 和 io 包中包含一些,但是 collections.abc 中的抽象基类最常用,我们看看这个模块有那些抽象基类
标准库有两个 abc 模块,我们讨论的是 collections.abc 为了减少加载时间, Python 3.4 在 collections 包之外实现了这个模块,因此要与 cloections 分开导入。另一个 abc 模块就是 abc,这里定义的是 abc.ABC 类,每个抽象基类都依赖这个类,但是不用导入他,除非重新定义新的抽象基类
下面介绍几个抽象基类:
Iterable, Container, Sized
每个集合都应该继承这 3 个抽象基类,或者至少要实现兼容协议。Iterable 通过 __iter__ 方法支持迭代,Ccontainer 通过 __contains__ 方法支持 in 运算符,Sized 通过 __len__ 方法支持 len 函数
Sequence, Mapping 和 Set
这三个是主要的不可变集合类型,而且各自都有可变的子类。
MappingView
在 Python 3 中,映射方法 .items(), .keys(), .values() 返回的对象分别时 ItemsView,KeysView 和 ValuesView 的实例。前两个类还从 Set 类继承了丰富的接口,包含第 3 章所述的所有运算符
Callable 和 Hashable
这两个抽象基类和集合没有太大关系,只不过因为 collections.abc 是标准库中定义抽象基类的第一个模块,而他们又太重要了,所以才放到 collections.abc 模块中。我从未见过 Callable 或 Hashable 的子类。这两个抽象基类的主要作用是为内置函数 isinstance 提供支持提供支持,以一种安全的方式判断对象能不能调用或三裂
若检查是否能调用,可以使用内置的 callable() 函数,但是没有类似的 hashable() 函数,因此测试对象是否可散列,最好使用 isinstance(my_obj, Hashable)
Iterator
注意它是 Iterable 的子类,我们在第 14 章继续讨论
继 collections.abc 之后,标准库中最有用的抽象基类包是 numbers,我们来介绍一下
抽象基类的数字塔
numbers 包定义的是 “数字塔”(即各个抽象基类的层次结构是线性的),其中 Number 是位于最顶端的超类,随后是 Complex,依次往下,最底端是 Integral 类
Number
Complex
Real
Rational
Integral
因此,要检查一个数是不是整数,可以使用 isinstance(x, numbers.Integral),这样代码就能接受 int, bool(inti 的子类),或者外部库使用 numbers 抽象基类注册的其他类型。为了满足检查需要,你或者你的 API 用户始终可以把兼容的类型注册为 numbers.Integral 的虚拟子类
与之类似,如果一个值可能是浮点数类型,可以使用 isinstance(x, numbers.Real) 检查。这样代码就能接受 bool,int,float,fractions.Fraction 或者外部库(如 Numpy,它做了相应注册)提供非复数的类型
decimal.Decimal 没有注册为 numbers.Real 的虚拟子类,这有写奇怪。没注册的原因时,如果你的程序需要 Decimal 的精度,要防止与其他低精度的数字类型混淆,尤其是浮点数
了解一些现有的抽象基类之后,我们将从零开始实现一个抽象基类,然后开始使用,以此实现白鹅类型。这么做的目的不是鼓励每个人都立即定义抽象基类,而是教你怎么阅读标准库和其他包中的抽象基类源码。
定义并使用一个抽象基类
假如我们要在网站随机显示广告,但是在整个广告清单轮转一遍,不重复显示广告。我们正在构建一个广告管理框架,名叫 ADAM,它的职责之一是,支持用户随机挑选无重复类,我们将为此定义一个抽象基类
收到栈和队列启发,我们将使用现实中物体命名这个抽象基类:宾果机和彩票机是随机从有限集合挑选物品的机器,选出的物品没有重复,直到选完为止
我们将这个抽象基类命名为 Tombola,这是宾果机和打乱数字的滚动容器的意大利名字
Tombla 抽象基类有四个方法,其中两个是抽象方法。
.load(...) 把元素放入容器
.pick() 从容器中随机拿出一个元素,返回选中的元素
下面两个是具体方法:
.loaded() 如果容器中至少有一个元素,返回 True
.inspect() 返回一个有序元组,由容器现有元素构成,不会修改容器内容
抽象基类的定义如下所示:
Step12: 其实,抽象方法也可有实现代码,即使实现了,子类也必须覆盖抽象方法,但是在子类中可以用 super() 函数调用抽象方法,为它添加功能,而不是从头实现。@abstractmethod 装饰器用法参见 abc 文档
上面 inspect 方法实现的比较笨拙,不过却表明,有了 pick(),和 load(...) 方法,如果想查看 Tombola 内容,可以先把所有元素挑出,再放回去。这个例子的目的是强调抽象基类可以提供具体方法,只要依赖接口中其他方法就行。Tombola 具体子类知晓内部数据结构,可以覆盖 inspect() 方法,使用更聪明的方式实现
上面的 loaded() 方法看起来不笨,但是耗时间,调用 inspect() 方法构建有序元组只是为了看看序列是不是空。这样做可以,但是在子类做会更好,后文会讲到
注意,实现 inspect() 方法采用的是迂回方式捕获 pick() 方法抛出的 LookupError。pick() 抛出 LookupError 也是接口的一部分,但是在 Python 中无法声明,只能在文档说明
选择使用 LookupError 的原因是,在 Python 异常关系层次中,它是 IndexError 和 KeyError 的父类,这两个是具体实现 Tombola 所有的数据结构最有可能抛出的异常
为了看看抽象基类对接口做的检查,下面我们尝试使用一个有缺陷的实现糊弄 Tombola:
Step13: 抽象基类语法简介
声明抽象基类的最简单方式是继承 abc.ABC 或其他抽象基类。
然而,abc.ABC 是 Python 3.4 增加的新类,如果使用旧版 Python,无法继承现有抽象基类,必须用 metaclass= 关键字,把值设为 abc.ABCMeta(不是 abc.ABC)。
写成下面这样:
Step14: metaclass= 关键字是 Python 3 引入的。在 Python 2 中必须使用 __metaclass__ 类属性:
Step15: 元类将在 21 章讲解,我们先将其理解为一种特殊的类,同样也把抽象基类理解为一种特殊的类。例如:“常规的”类不会检查子类,因此这是抽象基类的特殊行为
除了 @abstractmethod 之外,abc 模块还定义了 @abstractclassmethod, @abstractstaticmethodm, @abstractproperty 三个装饰器。然而,后 3 个装饰器从 Python 3.3 废弃了,因为装饰器可以在 @abstractmethod 上对叠,那三个就显得多余了。例如,生成是抽象类方法的推荐方式是:
Step16: 在函数上堆叠装饰器的顺序非常重要,@abstractmethod 文档就特别指出:
与其他描述符一起使用时,abstractmethod() 应该放在最里层.
也就是说,在 @abstractmethod 和 def 语句之间不能有其它装饰器
定义 Tombola 抽象基类的子类
定义好 Tombola 抽象基类之后,我们要开发两个具体子类,满足 Tombola 规定的接口。
下面的 BingoCage 类例子是根据第五章例子修改的,使用了更好的随机发生器。BingoCage 实现了所需的首相方法 load 和 pick
从 Tombola 中继承了 loaded 方法,覆盖了 inspect 方法,增加了 __call__ 方法。
Step17: random.SystemRandom 使用 os.urandom(...) 函数实现 random API,根据 os 模块文档,这个函数生成“适合用于加密”的随即字节序列
BingoCage 从 Tombola 继承了耗时的 loaded 方法和笨拙的 inspect 方法。这两个方法都可以覆盖,变成下面例子中更快的方法,这里想表达的观点是:我们可以偷懒,直接从抽象基类中继承不是那么理想的具体方法。从 Tombola 中继承的方法没有 BingoCage 自己定义的那么快,不过只要 Tbombola 子类能正确的实现 pick 和 load 方法,就能提供正确的结果
下面是 Tombola 接口的另一种实现,虽然与之前不同,但是完全有效。LotteryBlower 打乱“数字球”后没有提取最后一个,而是提取一个随即位置上的球
Step18: 有个习惯做法值得指出,在 __init__ 方法中,self._balls 保存的是 list(iterable),而不是 iterable 的引用,这样会 LotterBlower 更灵活,因为 iterable 参数可以是任意可迭代的类型。把元素存入列表中还确保能取出元素。就算 iterable 参数始终传入列表,list(iterable) 会创建参数副本,这依然是好的做法,因为用户可能不希望自己提供的数据被改变
Tombola 的虚拟子类
白鹅类型的一个基本特性(也是值得用水禽来命名的原因):即使不继承,也有办法把一个类注册为抽象基类的虚拟子类。这样做时,我们保证注册的类忠实地实现了抽象基类定义的接口,而 Python 会相信我们从不做检查。如果我们说谎了,那么常规运行时异常会把我们捕获
注册虚拟子类的方式是在抽象基类上调用 register 方法。这么做之后,注册的类会变成抽象基类的虚拟子类,而且 issubclass 和 isinstance 等函数都能识别,但是注册的类不会从抽象基类中继承任何方法或属性。
虚拟子类不会继承注册的抽象基类,而且任何时候都不会检查它是符合抽象基类的接口,即便在实例化时也不会检查。为了避免运行时错误,虚拟子类实现所需的全部方法
register 方法通常作为普通的函数调用,不过也可以作为装饰器使用。在下面的例子,我们使用装饰器语法实现了 TomboList 类,这是 Tombola 的一个虚拟子类
Step19: 注册之后,可以使用 issubclass 和 isinstance 函数判断 TomboList 是不是 Tombola 的子类
Step20: 然而,类的继承关系在一个特殊的类中指定 -- __mro__,即方法解析顺序(Method Resolution Order)。这个属性的作用域很简单,按顺序列出类及超类,Python 会按照这个顺序搜索方法。查看 TomboList 类的 __mro__ 属性,你会发现它只列出了 “真实的” 超类,即 list 和 object:
Step21: Python 使用 register 的方式
Python 3.3 之前的版本不能将 register 当做装饰器使用,必须定义类以后像普通函数那样调用。
虽然现在可以当装饰器使用了,但是更常见的做法还是当函数,例如 collections.abc 模块源码中:
Sequence.register(tuple)
Sequence.register(str)
Sequence.register(range)
Sequence.register(memoryview)
鹅的行为可能像鸭子
Alex 讲故事时候说过,即使不注册,抽象基类也能把一个类识别成虚拟子类,下面是他举得一个例子,我添加了一些代码,用 issubclass 来测试:
Step22: 经过 issubclass 函数确认(isinstance 也会得到相同的结论),Struggle 是 abc.Sized 的子类,这是因为 abc.Sized 实现了一个特殊的方法, __subclasshook__。看下面 Sized 类的源码: | Python Code:
class Vector2d:
typecode = 'd'
def __init__(self, x, y):
self.x = float(x)
self.y = float(y)
def __iter__(self):
return (i for i in (self.x, self.y))
Explanation: 本章讨论的话题是接口,从鸭子类型代表特征动态协议,到使接口更明确,能验证是否符合规定的抽象基类(Abstract Base Class,ABC)
在 Python 中 上章所说的鸭子类型是接口的常规方式,新只是是抽象基类和类型检查。Python 语言诞生 15 年之后,Python 2.6 才引入抽象基类。
本章先说明 Python 社区以往对接口的不严谨理解:部分实现接口通常被认为是可接受的。我们通过几个示例强调鸭子类型的动态本性,从而澄清这一点
接着,通过 Alex Martelili 写的一篇短文,对抽象基类作介绍,还为 Python 编程下一个新趋势下定义。本章余下的内容专门讲解抽象基类。首先,本章说明抽象基类的常见用途:实现接口时作为超类使用。
然后,说明抽象基类如何检查具体子类是否符合接口定义,以及如何使用注册机制声明一个类实现了某个接口,而不进行子类化操作。最后,说明如何让抽象基类自动 ”识别“ 任何符合接口的类 -- 不进行子类化或注册
我们将实现一个新抽象基类,看看它的运作方式。但是,作者和 Alex Martelli 都不建议你自己编写抽象基类,因为容易过度设计
抽象基类与描述符和元类一样,是用于构建框架的工具。因此,只有少数 Python 开发者编写的抽象基类不会对用户施加不必要的限制,让他们做无用功
下面从 Python 风格探索接口
Python 文化中的接口和协议
在引入抽象基类之前,Python 已经非常成功了,即使现在也很少有代码使用抽象基类。在第一章就讨论了鸭子类型和协议,在上一章,我们将协议定义为非正式的接口,是让 Python 这种动态类型语言实现多态的方式。
接口在动态类型语言是怎么运作的呢?首先,基本的事实是,Python 语言没有 interface 关键字,而且除了抽象基类,每个类都有接口:类实现或继承的公开属性(方法或数据的属性),包括特殊方法,如 __getitem__ 或 __add__
按照定义,受保护的属性和私有属性不在接口中:即便有“受保护”属性也只是采用命名约定实现的(单个前导下划线)私有属性也可以轻松的访问(第 9 章),原因也是如此,不要违背这些约定
另一方面,不要觉得把公开数据属性放入对象接口中不妥,因为如果需要,总能实现读值方法和设值方法,把数据属性变成特性,使用 obj.attr 语法的客户代码不会受到影响。Vector2d 类就是这么做的
下面的例子 x,y 是公开属性
End of explanation
class Vector2d:
typecode = 'd'
def __init__(self, x, y):
self._x = float(x)
self._y = float(y)
@property
def x(self):
return self._x
@property
def y(self):
return self._y
def __iter__(self):
return (i for i in (self.x, self.y))
Explanation: 在第 9 章中,我们将其变成了只读属性,这是重大的重构,但是 Vector2d 的接口基本没变,用户仍然能读取 my_vector.x 和 my_vector.y。下面是使用特性实现 x,y(第 9 章的代码)
End of explanation
class Foo:
def __getitem__(self, pos):
return range(0, 30, 10)[pos]
f = Foo()
f[1]
for i in f: print(i)
20 in f
15 in f
Explanation: 关于接口,这里有个实用的补充定义:对象公开方法的子集,让对象在系统中扮演特定的角色。Python 文档中的 “文件类对象” 或 “可迭代对象” 就是这个意思,这种说法指的不是特定的类。接口是实现特定角色的方法集合,这样理解正是 Smalltalk 程序员说的协议,其他动态预言社区都借鉴了这个术语,协议与继承没有关系。一个类可能会实现多个接口,从而让实例扮演多个角色
协议是接口,但不是正式的(只由文档和约定定义),因此协议不能像正式接口那样施加限制(本章后面会说明抽象基类对接口一致性的强制)。一个类可能只实现部分接口,这是允许的。有时,某些 API 只要求 “文件类对象” 返回字节序列 .read() 方法。在特定的上下文中可能需要其他文件操作方法,也可能不需要
作者写书时候,Python 3 中的 memoryview 的文档说,它能处理“支持缓冲协议的对象“,不过缓冲协议的文档是 C API 的。 bytearray 的构造方法接受”一个符合缓冲接口的对象”。如今,文档正在改变用词,使用“字节序列类对象”这样更加友好的表述。我指出这一点是为了强调,对 Python 程序员来说,'X' 类对象 和 'X' 协议和 'X' 接口都是一个意思
序列协议是 Python 是最基础的协议之一。即使对象只实现了那个协议的最基本的一部分,解释器也能负责任的处理,如下一节所示。
Python 喜欢序列
Python 数据模型的哲学是尽量支持基本协议。对序列来说,即便是最简单的实现,Python 也会力求做的最好
抽象基类 Sequence 的正式接口如下:__getitem__, __contains__, __iter__, __reversed__, index, count
下面的例子 Foo 类没有继承 abc.Sequence,而且只实现了序列协议的一个方法: __getitem__(没有实现 __len__ 方法),看到这样足够访问元素,迭代和使用 in 运算符了
End of explanation
import collections
Card = collections.namedtuple('Card', ['rank', 'suit'])
class FrenchDeck:
ranks = [str(n) for n in range(2, 11)] + list('JQKA')
suits = 'spades diamonds clubs hearts'.split()
def __init__(self):
self._cards = [Card(rank, suit) for suit in self.suits
for rank in self.ranks]
def __len__(self):
return len(self._cards)
def __getitem__(self, position):
return self._cards[position]
Explanation: 虽然没有 __iter__ 方法,但是 Foo 实例是可迭代对象,因为发现有 __getitem__ 方法时,Python 会调用它,传入从 0 开始的整数索引,尝试迭代对象(这是一种后备机制)。尽管没有实现 __contains__ 方法,但是 Python 足够只能,能迭代 Foo实例,因此也能使用 in 运算符:Python 会做全面检查,看看有没有指定的元素
综上,鉴于序列协议的重要性,如果没有 __iter__ 和 __contains__ 方法,Python 会调用 __getitem__ 方法,设法让迭代和 in 运算符可用
第一章定义的 FrenchDeck 类也没有继承 abc.Sequence,但是实现了序列协议的两个方法: __getitem__ 和 __len__。第一章那些示例之所以能用,大部分是由于 Python 会特殊对待看起来像序列的对象。Python 中迭代是鸭子类型的一种极端形式:为了迭代对象,解释器会尝试调用两个不同的方法
End of explanation
from random import shuffle
l = list(range(10))
shuffle(l)
l
Explanation: 下面分析一个示例,着重强调协议的动态本性
使用猴子补丁在运行时实现协议
在上面 FrenchDeck 类有一个重大的缺陷:无法洗牌。几年前,第一次编写 FrenchDeck 示例时,我发现了 shuffle 方法。后来。我对 Python 风格有了深刻理解,我发现如果 FrenchDeck 实例的行为像序列,那么就不需要 shuffle 方法,因为已经有 random.shuffle 函数可用,文档中说它的作用就是就地打乱序列
End of explanation
from random import shuffle
deck = FrenchDeck()
shuffle(deck)
Explanation: 然而,要打乱 FrenckDeck 函实例,会出现异常,如下所示:
End of explanation
def set_card(deck, position, card):
deck._cards[position] = card
FrenchDeck.__setitem__ = set_card
shuffle(deck) # 现在可以打乱顺序了,因为实现了可变序列协议所需要的方法
deck[:5]
Explanation: 错误消息很明确,FrenchDeck 对象不支持为元素赋值。这个问题的原因是,FrenchDeck 只实现了不可变的序列协议,可变序列还需要提供 __setitem__ 方法。
Python 是动态语言,因此我们可以在运行时修正这个问题,甚至还可以在交互式控制台中,修正方法如下所示:
End of explanation
class Artist:
def draw(self):
pass
class Gunslinger:
def draw(self):
pass
class Lottery:
def draw(self):
pass
Explanation: __setitem__ 在语言参考中使用的参数是 self, key, value,我们这里用的是 deck, position 和 card,这是为了提醒我们,每个 Python 方法说到底就是一个普通的函数,把第一个参数命名为 self 是一种约定。在控制台会话使用那几个参数没问题,但是在 Python 源码文件中最好按照文档那样使用 self, key, value
这里的关键是,set_card 函数要知道 deck 对象有一个名为 _cards 的属性,而且 _cards 必须是可变序列,然后吧 setcard 方法赋给特殊方法 __setitem__,从而把它依附到 FrenchDeck 类上。这种技术叫猴子补丁:在运行时修改类或模块,而不改动源码,猴子补丁很强大,但是打补丁的代码与要补丁的程序耦合十分紧密,而且往往要处理隐藏和没有文档的部分
除了举例的猴子补丁之外,上面例子还强调了协议是动态的:random.shuffle() 函数不关心参数类型,只要那个对象实现了部分可变序列协议即可。即便对象一开始没有所需的方法也没关系,后来再提供也行
目前,本章讨论的主题是 “鸭子类型”:对象的类型无关紧要,只要实现了特定的协议即可。
前面给出的抽象基类 Sequence 是为了展示协议与抽象基类的文档中所说的接口之间的关系,但是目前为止还没有真正的实现抽象基类
在下面的几节中,我们将直接使用抽象基类,而不是将其视为文档
Alex Martelli 的水禽
Alex Martelli 讲故事:
wiki 说我协助传播了“鸭子类型”这种言简意赅的说法(即斛律对象的真正类型,转而关注对象有没有实现所需的方法,签名和语义)
对 Python 来说,这基本上指避免使用 instance 检查对象类型(更别提 type(foo) is bar 这种更糟糕的检查方式了,这样做没有任何好处,甚至禁止最简单的继承方式)。
总的来说,鸭子类型很有用,但是有的时候继承的方式更好。
在生物学中,平行进化会导致不相关的种类产生相似的属性,形态和举止方面都是如此,但是生态位的相似性是偶然的,不同的种类仍然属于不同的生态位。编程语言也有这种“偶然的相似性”,比如下面经典的面向对象编程示例:
End of explanation
class Struggle:
def __len__(self): return 23
from collections import abc
isinstance(Struggle(), abc.Sized)
Explanation: 显然,因为 x,y 两个对象刚好都有一个名为 draw 的方法,而且调用时候不用传入参数,即 x.draw() 和 y.draw(),远远不能确保二者可以相互调用,或者具有相同的抽象。也就是说,从这样的调用中不能推导出语义的相似性。相反,我们需要一位渊博的程序员主动把这种等价维持在一定层次上。
例如,草雁(以前认为与其他鹅类比较类似)和麻鸭(以前认为与其他鸭类比较类似)现在被分到 Tadornidae 亚科(表明二者相似性比鸭科其他动物高,因为他们的共同祖先比较接近)。此外,DNA 分析表明,白翅木鸭与美洲家鸭(属于麻鸭)不是很像,至少没有形态和举止看起来那么像,因此把木鸭单独分成了一组,完全不再 Tadornidae 亚科中。
知道这些有什么用呢?视情况而定,比如,逮到一直水禽之后,决定如何烹制才最美味时,显著的特征(不是全部,例如一身羽毛并不重要)主要是口感和风味(过时的表征学),这比支序学重要的多。但是其他方面,如对不同病原体的抗性(圈养水禽还是放养),DNA 接近性的作用就大多了。
因此,参照水禽的分类学演化,我建议在鸭子类型基础上增加白鹅类型(goose typing)。
白鹅类型是指,只要 cls 是抽象基类,即 cls 元类是 abcABCMeta 就可以使用 isinstance(obj, cls)。
colections.abc 中有很多有用的抽象基类(Python 标准库的 numbers 模块还有一些)。
与具体类相比,抽象基类有很多理论上的优点,Python 抽象基类还有一个重要的实用优势:可以使用 register 类方法在终端用户的代码把某个类 “声明” 为一个抽象基类的 “虚拟” 子类(为此,被注册的类必须满足抽象基类对方法名称和签名的要求,最重要的是要满足底层语义契约;但是,开发那个类时不用了解抽象基类,更不用继承抽象基类)。这大大打破了严格的强耦合,与面向对象编程人员掌握的知识有很大出入,因此使用继承时要小心
有时,为了让抽象基类识别子类,甚至不用注册。
其实,抽象基类的本质就是几个特殊方法。例如:
End of explanation
field_names = ['kaka,hello,world', "test, test2"]
try:
field_names = field_names.replace(',', ' ').split() # 逗号换成空格并拆分成列表
except AttributeError:
pass
field_names = tuple(field_names) # 为了确保传进去的是可迭代对象,也为了创建一个备份,使用所得值创建一个元组
field_names
Explanation: 可以看出,无需注册,abc.Sized 也能把 Struggle 识别为自己的子类,只要实现了特殊方法 __len__ 即可(要使用正确的语法和语义实现,前者要求没有参数,后者要求返回一个非负整数,指明对象的长度;如果不使用规定的语法和语义实现 __len__ 方法,会导致非常严重的问题)
最后要说的是:如果实现的类具体实现了 numbers, collections.abc 或者其他框架中的抽象基类的概念,要么继承相应的抽象基类(必要时),要么类注册到相应的抽象基类中。开始开发程序时,不要使用提供注册功能的库或框架,要自己动手注册,如果必须检查参数的类型(这是最常见的),例如检查是不是 “序列”,那么就这么做:
isinstance(the_arg, collections.abc.Sequence)
此外,不要在生产代码中定义抽象基类(或元类)。。。如果你很想这样做,我打赌你可能要找茬(= =),刚拿到新工具的人都有大干一场的冲动。如果你能避开这些深奥的概念,你(以及未来的代码维护者)的生活将更加愉快,因为代码会变得简洁明了,讲完啦~
除了提出 ”白鹅类型“ 之外,Alex 还指出,继承抽象基类很简单,只需要实现所需的方法,这样也能明确表明开发者意图,这一意图还能通过注册虚拟子类实现。
此外,使用 isinstance 和 issubclass 测试抽象基类更为人接受,过去,这两个函数用来测试鸭子类型,但用于抽象基类会更加灵活。毕竟,如果某个组件没有继承抽象基类,事后还可以注册,让显式类型检查通过
然而即使是抽象基类,也不能滥用 isinstance 检查,用多了可能导致代码很糟糕。在一连串 if/elif/elif 中使用 isinstance 做检查,然后根据对象类型做不同操作,是十分糟糕的做法,此时应该使用堕胎,即使用一定的方式定义类,让解释器把调用分派给正确的方法,而不是采用 if/elif/elif 块硬编码分派逻辑
具体使用时,上述建议有一个常见的例外:有些 Python API 接受一个字符串或字符串序列,如果只有一个字符串,可以把它放到列表中,从而简化处理。因为字符串是序列类型,所以为了把他和其它不可变序列区分,最简单的方式是使用 isinstance(x, str) 检查
可惜,Python 3.4 没有能把字符串和元组或者其他不可变序列区分开的抽象基类,因此必须测试 str。在 Python 2 中,basestr 类型可以协助这样的测试。basestr 不是抽象基类,但是他是 str 和 unicode 的超类。然而,Python 3 却把 basestr 去掉了,奇怪的是,Python 3 中有个 collections.abc.ByteString 类型,但是只能检测 byte 和 bytearray 类型
另一方面,如果必须强制执行 API 契约,通常可以使用 isinstance 检查抽象基类。“老兄,如果你想调用我,必须实现这个”,正如本书审校 Lennart Regebro 所说。这对采用插入式架构的系统来说特别有用。在框架之外,鸭子类型通常比类型检查更简单灵活
例如,本书有几个示例要使用序列,把他当成列表,我没有检查参数的类型是不是 list,而是直接接受参数,立即使用它构建一个列表。这样,我就可以接受任何可迭代对象,如果参数不是可迭代对象,调用立即失败,并提供清晰的错误信息,本章后面有一个这样的例子。当然,如果序列太长或者需要就地修改序列而导致无法复制参数,就不能采用这种方式,此时,使用 isinstance(x, abc.MutableSequence) 更好,如果可以接受任何可迭代对象,也可以调用 iter(x) 函数获得一个迭代器,第 14 章详细讲
模仿 collections.namedtuple 处理 field_names 参数的方式也是一例,field_names 的值可以是单个字符串,以空格或者逗号分隔符,也可以是一个标识符序列,此时可能想使用 isinstance,但是我会使用鸭子类型,如下所示:
使用鸭子类型处理单个字符串或者由字符串组成的可迭代对象:
End of explanation
import collections
Card = collections.namedtuple('Card', ['rank', 'suit'])
class FrenchDeck2(collections.MutableSequence):
ranks = [str(n) for n in range(2, 11)] + list('JQKA')
suits = 'spades diamonds clubs hearts'.split()
def __init__(self):
self._cards = [Card(rank, suit) for suit in self.suits
for rank in self.ranks]
def __len__(self):
return len(self._cards)
def __getitem__(self, position):
return self._cards[position]
def __setitem__(self, position, value): # 为了支持洗牌
self._cards[position] = value
def __delitem__(self, position): # 继承 MutableSequence 必须实现 __delitem__ 方法,这是它的一个抽象方法
del self._cards[position]
def insert(self, position, value): # insert 方法也是 MutableSequence 类的第三个方法
self._cards.insert(position, value)
Explanation: 最后再 Alex Martelli 强调一下建议尽量不要自己实现抽象基类,容易造成灾难性后果,下面通过实例讲解白鹅类型:
定义抽象基类的子类
我们按照 Martelli 的建议,先利用现有抽象基类(collections.MutableSequence),然后再自己定义。在下面例子,我们将 FrenchDeck2 声明为collections.MutableSequence 的子类
End of explanation
import abc
class Tombola(abc.ABC):
@abc.abstractmethod
def load(self, iterable):
'''从可迭代对象中添加元素'''
@abc.abstractmethod # 抽象方法使用此标记
def pick(self):
'''随机删除元素,然后将其返回
如果实例为空,这个方法抛出 LookupError
'''
def loaded(self):
'''如果至少有一个元素,返回 True,否则返回 False'''
return bool(self.inspect()) # 抽象基类中的具体方法只能依赖抽象基类定义的接口(即只能使用抽象基类的其他具体方法,抽象方法或特性)
def inspect(self):
'''返回一个有序元组,由当前元素构成'''
items = []
while 1: # 我们不知道具体子类如何存储元素,为了得到 inspect 结果,不断调用 pick 方法,把 Tombola 清空
try:
items.append(self.pick())
except LookupError:
break
self.load(items) # 再加回去元素
return tuple(sorted(items))
Explanation: Python 不会在模块导入时候检查抽象方法的实现,而是在实例化 FrenchDeck2 类时才会真正的检查。因此,如果没有正确实现某个抽象方法,Python 会抛出 TypeError 异常,并把错误消息设为 “Can't instantiate abstract class FrenchDeck2 with abstract methods __delitem__, insert"。正是这个原因,即使 FrenchDeck2 类不需要 __delitem__ 和 insert 提供的行为,但是因为继承 MutableSequence 抽象基类,必须实现他们。
FrenchDeck2 从 Sequence(因为 MultableSequence 继承了 Sequence) 继承了几个拿来即用的具体方法 __contains__, __iter__, __reversed__, index 和 count。FrenchDeck2 从 MutableSequence 继承了 append,extend,pop,remove 和 __iadd__
在 collections.abc 中,每个抽象基类的具体方法都是作为类的公开接口实现的,因此不用知道实例的内部接口
要想实现子类,可以覆盖抽象基类中继承的方法,以更高效的方式实现,例如,__contains__ 方法会全面扫描序列,如果面门定义的序列按顺序保存元素,就可以重新实现 __contain__ 方法,使用 bisect 函数做二分查找,提高速度
标准库中的抽象基类
Python 2.6 开始,标准库提供了抽象基类,大多数抽象基类在 collections.abc 模块中定义,不过其他地方也有,例如 numbers 和 io 包中包含一些,但是 collections.abc 中的抽象基类最常用,我们看看这个模块有那些抽象基类
标准库有两个 abc 模块,我们讨论的是 collections.abc 为了减少加载时间, Python 3.4 在 collections 包之外实现了这个模块,因此要与 cloections 分开导入。另一个 abc 模块就是 abc,这里定义的是 abc.ABC 类,每个抽象基类都依赖这个类,但是不用导入他,除非重新定义新的抽象基类
下面介绍几个抽象基类:
Iterable, Container, Sized
每个集合都应该继承这 3 个抽象基类,或者至少要实现兼容协议。Iterable 通过 __iter__ 方法支持迭代,Ccontainer 通过 __contains__ 方法支持 in 运算符,Sized 通过 __len__ 方法支持 len 函数
Sequence, Mapping 和 Set
这三个是主要的不可变集合类型,而且各自都有可变的子类。
MappingView
在 Python 3 中,映射方法 .items(), .keys(), .values() 返回的对象分别时 ItemsView,KeysView 和 ValuesView 的实例。前两个类还从 Set 类继承了丰富的接口,包含第 3 章所述的所有运算符
Callable 和 Hashable
这两个抽象基类和集合没有太大关系,只不过因为 collections.abc 是标准库中定义抽象基类的第一个模块,而他们又太重要了,所以才放到 collections.abc 模块中。我从未见过 Callable 或 Hashable 的子类。这两个抽象基类的主要作用是为内置函数 isinstance 提供支持提供支持,以一种安全的方式判断对象能不能调用或三裂
若检查是否能调用,可以使用内置的 callable() 函数,但是没有类似的 hashable() 函数,因此测试对象是否可散列,最好使用 isinstance(my_obj, Hashable)
Iterator
注意它是 Iterable 的子类,我们在第 14 章继续讨论
继 collections.abc 之后,标准库中最有用的抽象基类包是 numbers,我们来介绍一下
抽象基类的数字塔
numbers 包定义的是 “数字塔”(即各个抽象基类的层次结构是线性的),其中 Number 是位于最顶端的超类,随后是 Complex,依次往下,最底端是 Integral 类
Number
Complex
Real
Rational
Integral
因此,要检查一个数是不是整数,可以使用 isinstance(x, numbers.Integral),这样代码就能接受 int, bool(inti 的子类),或者外部库使用 numbers 抽象基类注册的其他类型。为了满足检查需要,你或者你的 API 用户始终可以把兼容的类型注册为 numbers.Integral 的虚拟子类
与之类似,如果一个值可能是浮点数类型,可以使用 isinstance(x, numbers.Real) 检查。这样代码就能接受 bool,int,float,fractions.Fraction 或者外部库(如 Numpy,它做了相应注册)提供非复数的类型
decimal.Decimal 没有注册为 numbers.Real 的虚拟子类,这有写奇怪。没注册的原因时,如果你的程序需要 Decimal 的精度,要防止与其他低精度的数字类型混淆,尤其是浮点数
了解一些现有的抽象基类之后,我们将从零开始实现一个抽象基类,然后开始使用,以此实现白鹅类型。这么做的目的不是鼓励每个人都立即定义抽象基类,而是教你怎么阅读标准库和其他包中的抽象基类源码。
定义并使用一个抽象基类
假如我们要在网站随机显示广告,但是在整个广告清单轮转一遍,不重复显示广告。我们正在构建一个广告管理框架,名叫 ADAM,它的职责之一是,支持用户随机挑选无重复类,我们将为此定义一个抽象基类
收到栈和队列启发,我们将使用现实中物体命名这个抽象基类:宾果机和彩票机是随机从有限集合挑选物品的机器,选出的物品没有重复,直到选完为止
我们将这个抽象基类命名为 Tombola,这是宾果机和打乱数字的滚动容器的意大利名字
Tombla 抽象基类有四个方法,其中两个是抽象方法。
.load(...) 把元素放入容器
.pick() 从容器中随机拿出一个元素,返回选中的元素
下面两个是具体方法:
.loaded() 如果容器中至少有一个元素,返回 True
.inspect() 返回一个有序元组,由容器现有元素构成,不会修改容器内容
抽象基类的定义如下所示:
End of explanation
class Fake(Tombola):
def pick(self):
return 13
Fake
f = Fake() #实例化的时候会报错
Explanation: 其实,抽象方法也可有实现代码,即使实现了,子类也必须覆盖抽象方法,但是在子类中可以用 super() 函数调用抽象方法,为它添加功能,而不是从头实现。@abstractmethod 装饰器用法参见 abc 文档
上面 inspect 方法实现的比较笨拙,不过却表明,有了 pick(),和 load(...) 方法,如果想查看 Tombola 内容,可以先把所有元素挑出,再放回去。这个例子的目的是强调抽象基类可以提供具体方法,只要依赖接口中其他方法就行。Tombola 具体子类知晓内部数据结构,可以覆盖 inspect() 方法,使用更聪明的方式实现
上面的 loaded() 方法看起来不笨,但是耗时间,调用 inspect() 方法构建有序元组只是为了看看序列是不是空。这样做可以,但是在子类做会更好,后文会讲到
注意,实现 inspect() 方法采用的是迂回方式捕获 pick() 方法抛出的 LookupError。pick() 抛出 LookupError 也是接口的一部分,但是在 Python 中无法声明,只能在文档说明
选择使用 LookupError 的原因是,在 Python 异常关系层次中,它是 IndexError 和 KeyError 的父类,这两个是具体实现 Tombola 所有的数据结构最有可能抛出的异常
为了看看抽象基类对接口做的检查,下面我们尝试使用一个有缺陷的实现糊弄 Tombola:
End of explanation
#class Tombola(metaclass=abc.ABCMeta):
# pass
Explanation: 抽象基类语法简介
声明抽象基类的最简单方式是继承 abc.ABC 或其他抽象基类。
然而,abc.ABC 是 Python 3.4 增加的新类,如果使用旧版 Python,无法继承现有抽象基类,必须用 metaclass= 关键字,把值设为 abc.ABCMeta(不是 abc.ABC)。
写成下面这样:
End of explanation
#class Tombola(object): # Python 2
# __metaclass__ = abc.ABCMeta
# pass
Explanation: metaclass= 关键字是 Python 3 引入的。在 Python 2 中必须使用 __metaclass__ 类属性:
End of explanation
# import abc
# class MyABC(abc.ABC):
# @classmethod
# @abc.abstractmethod
# def an_abstract_classmethod(cls, ...):
# pass
Explanation: 元类将在 21 章讲解,我们先将其理解为一种特殊的类,同样也把抽象基类理解为一种特殊的类。例如:“常规的”类不会检查子类,因此这是抽象基类的特殊行为
除了 @abstractmethod 之外,abc 模块还定义了 @abstractclassmethod, @abstractstaticmethodm, @abstractproperty 三个装饰器。然而,后 3 个装饰器从 Python 3.3 废弃了,因为装饰器可以在 @abstractmethod 上对叠,那三个就显得多余了。例如,生成是抽象类方法的推荐方式是:
End of explanation
import random
class BingoCage(Tombola):
def __init__(self, items):
self._randomizer = random.SystemRandom()
self._items = []
self.load(items)
def load(self, items):
self._items.extend(items)
self._randomizer.shuffle(self._items)
def pick(self):
try:
return self._items.pop()
except IndexError:
raise LookupError('pick from empty BingoCage')
def __call__(self):
return self.pick()
Explanation: 在函数上堆叠装饰器的顺序非常重要,@abstractmethod 文档就特别指出:
与其他描述符一起使用时,abstractmethod() 应该放在最里层.
也就是说,在 @abstractmethod 和 def 语句之间不能有其它装饰器
定义 Tombola 抽象基类的子类
定义好 Tombola 抽象基类之后,我们要开发两个具体子类,满足 Tombola 规定的接口。
下面的 BingoCage 类例子是根据第五章例子修改的,使用了更好的随机发生器。BingoCage 实现了所需的首相方法 load 和 pick
从 Tombola 中继承了 loaded 方法,覆盖了 inspect 方法,增加了 __call__ 方法。
End of explanation
import random
class LotteryBlower(Tombola):
def __init__(self, iterable):
self._balls = list(iterable)
def load(self, iterable):
self._balls.extend(iterable)
def pick(self):
try:
position = random.randrange(len(self._balls))
except ValueError:
# 为了兼容 Tombola,我们抛出 LookupError
raise LookupError('pick from empty LotteryBlower')
return self._balls.pop(position)
def loaded(self):
return bool(self._balls)
def inspect(self):
return tuple(sorted(self._balls))
Explanation: random.SystemRandom 使用 os.urandom(...) 函数实现 random API,根据 os 模块文档,这个函数生成“适合用于加密”的随即字节序列
BingoCage 从 Tombola 继承了耗时的 loaded 方法和笨拙的 inspect 方法。这两个方法都可以覆盖,变成下面例子中更快的方法,这里想表达的观点是:我们可以偷懒,直接从抽象基类中继承不是那么理想的具体方法。从 Tombola 中继承的方法没有 BingoCage 自己定义的那么快,不过只要 Tbombola 子类能正确的实现 pick 和 load 方法,就能提供正确的结果
下面是 Tombola 接口的另一种实现,虽然与之前不同,但是完全有效。LotteryBlower 打乱“数字球”后没有提取最后一个,而是提取一个随即位置上的球
End of explanation
from random import randrange
@Tombola.register #注册虚拟子类
class TomboList(list): #继承 list
def pick(self):
if self: # 从 list 继承 __bool__ 方法,列表不为空时候返回 True
position = randrange(len(self))
return self.pop(position) #调用继承自 list 的 pop 方法
else:
raise LookupError('pop from empty TomboList')
load = list.extend # Tombolist.load 和 list.extend 一样
def loaded(self):
return bool(self)
def inspect(self):
return tuple(sorted(self))
#Tombola.register(TomboList) # Python 3.3 之前不能把 register 当做类装饰器使用,必须使用标准的调用语法
Explanation: 有个习惯做法值得指出,在 __init__ 方法中,self._balls 保存的是 list(iterable),而不是 iterable 的引用,这样会 LotterBlower 更灵活,因为 iterable 参数可以是任意可迭代的类型。把元素存入列表中还确保能取出元素。就算 iterable 参数始终传入列表,list(iterable) 会创建参数副本,这依然是好的做法,因为用户可能不希望自己提供的数据被改变
Tombola 的虚拟子类
白鹅类型的一个基本特性(也是值得用水禽来命名的原因):即使不继承,也有办法把一个类注册为抽象基类的虚拟子类。这样做时,我们保证注册的类忠实地实现了抽象基类定义的接口,而 Python 会相信我们从不做检查。如果我们说谎了,那么常规运行时异常会把我们捕获
注册虚拟子类的方式是在抽象基类上调用 register 方法。这么做之后,注册的类会变成抽象基类的虚拟子类,而且 issubclass 和 isinstance 等函数都能识别,但是注册的类不会从抽象基类中继承任何方法或属性。
虚拟子类不会继承注册的抽象基类,而且任何时候都不会检查它是符合抽象基类的接口,即便在实例化时也不会检查。为了避免运行时错误,虚拟子类实现所需的全部方法
register 方法通常作为普通的函数调用,不过也可以作为装饰器使用。在下面的例子,我们使用装饰器语法实现了 TomboList 类,这是 Tombola 的一个虚拟子类
End of explanation
issubclass(TomboList, Tombola)
t = TomboList(range(100))
isinstance(t, Tombola)
Explanation: 注册之后,可以使用 issubclass 和 isinstance 函数判断 TomboList 是不是 Tombola 的子类
End of explanation
TomboList.__mro__
Explanation: 然而,类的继承关系在一个特殊的类中指定 -- __mro__,即方法解析顺序(Method Resolution Order)。这个属性的作用域很简单,按顺序列出类及超类,Python 会按照这个顺序搜索方法。查看 TomboList 类的 __mro__ 属性,你会发现它只列出了 “真实的” 超类,即 list 和 object:
End of explanation
class Struggle:
def __len__(self): return 23
from collections import abc
isinstance(Struggle(), abc.Sized)
issubclass(Struggle, abc.Sized)
Explanation: Python 使用 register 的方式
Python 3.3 之前的版本不能将 register 当做装饰器使用,必须定义类以后像普通函数那样调用。
虽然现在可以当装饰器使用了,但是更常见的做法还是当函数,例如 collections.abc 模块源码中:
Sequence.register(tuple)
Sequence.register(str)
Sequence.register(range)
Sequence.register(memoryview)
鹅的行为可能像鸭子
Alex 讲故事时候说过,即使不注册,抽象基类也能把一个类识别成虚拟子类,下面是他举得一个例子,我添加了一些代码,用 issubclass 来测试:
End of explanation
# class Sized(metaclass = ABCMeta):
# __slots__ = ()
# @abstractmethod
# def __len__(self):
# return 0
# @classmethod
# def __subclasshook__(cls, C):
# if cls is Sized:
# # 对于 C 类以及其超类,如果 `__dict__` 属性中名为 `__len__` 的属性。。。
# if any("__len__" in B.__dict__ for B in C.__mro__):
# return True # 返回 True,表明 C 是 Sized 的虚拟子类
# return NotImplemented #否则,返回 NotImplement,让子类检查
Explanation: 经过 issubclass 函数确认(isinstance 也会得到相同的结论),Struggle 是 abc.Sized 的子类,这是因为 abc.Sized 实现了一个特殊的方法, __subclasshook__。看下面 Sized 类的源码:
End of explanation |
14,320 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Laplace transform
This notebook is a short tutorial of Laplace transform using SymPy.
The main functions to use are laplace_transform and inverse_laplace_transform.
Step1: Let us compute the Laplace transform from variables $t$ to $s$, then, we have the condition that $t>0$ (and real).
Step2: To calculate the Laplace transform of the expression $t^4$, we enter
Step3: This function returns (F, a, cond) where F is the Laplace transform of f, $\mathcal{R}(s)>a$ is the half-plane of convergence, and cond are auxiliary convergence conditions.
If we are not interested in the conditions for the convergence of this transform, we can use noconds=True
Step4: Right now, Sympy does not support the tranformation of derivatives.
If we do
Step5: we don't obtain, the expected
Step6: or,
$$\mathcal{L}\lbrace f^{(n)}(t)\rbrace = s^n F(s) - \sum_{k=1}^{n} s^{n - k} f^{(k - 1)}(0)\, ,$$
in general.
We can still, operate with the trasformation of a differential equation.
For example, let us consider the equation
$$\frac{d f(t)}{dt} = 3f(t) + e^{-t}\, ,$$
that has as Laplace transform
$$sF(s) - f(0) = 3F(s) + \frac{1}{s+1}\, .$$
Step7: We then solve for $F(s)$
Step8: and compute the inverse Laplace transform
Step9: and we verify this using dsolve
Step10: that is equal if $4C_1 = 4f(0) + 1$.
It is common to use practial fraction decomposition when computing inverse
Laplace transforms. We can do this using apart, as follows
Step11: We can also compute the Laplace transform of Heaviside
and Dirac's Delta "functions"
Step12: The next cell change the format of the notebook. | Python Code:
from sympy import *
init_session()
Explanation: Laplace transform
This notebook is a short tutorial of Laplace transform using SymPy.
The main functions to use are laplace_transform and inverse_laplace_transform.
End of explanation
t = symbols("t", real=True, positive=True)
s = symbols("s")
Explanation: Let us compute the Laplace transform from variables $t$ to $s$, then, we have the condition that $t>0$ (and real).
End of explanation
laplace_transform(t**4, t, s)
Explanation: To calculate the Laplace transform of the expression $t^4$, we enter
End of explanation
laplace_transform(t**4, t, s, noconds=True)
fun = 1/((s-2)*(s-1)**2)
fun
inverse_laplace_transform(fun, s, t)
Explanation: This function returns (F, a, cond) where F is the Laplace transform of f, $\mathcal{R}(s)>a$ is the half-plane of convergence, and cond are auxiliary convergence conditions.
If we are not interested in the conditions for the convergence of this transform, we can use noconds=True
End of explanation
laplace_transform(f(t).diff(t), t, s, noconds=True)
Explanation: Right now, SymPy does not support the transformation of derivatives.
If we do
End of explanation
s*LaplaceTransform(f(t), t, s) - f(0)
Explanation: we do not obtain the expected
End of explanation
eq = Eq(s*LaplaceTransform(f(t), t, s) - f(0),
3*LaplaceTransform(f(t), t, s) + 1/(s +1))
eq
Explanation: or,
$$\mathcal{L}\lbrace f^{(n)}(t)\rbrace = s^n F(s) - \sum_{k=1}^{n} s^{n - k} f^{(k - 1)}(0)\, ,$$
in general.
We can still operate with the transformation of a differential equation.
For example, let us consider the equation
$$\frac{d f(t)}{dt} = 3f(t) + e^{-t}\, ,$$
that has as Laplace transform
$$sF(s) - f(0) = 3F(s) + \frac{1}{s+1}\, .$$
End of explanation
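As a small aside, the general rule above can be applied by hand with a helper like the one below. This is only a sketch (not part of SymPy's API), and it assumes the undefined function f created by init_session:
def laplace_of_derivative(f, t, s, n=1):
# L{f^(n)(t)} = s**n F(s) - sum_{k=1..n} s**(n-k) f^(k-1)(0), built manually
F = LaplaceTransform(f(t), t, s)
correction = sum(s**(n - k) * f(t).diff(t, k - 1).subs(t, 0) for k in range(1, n + 1))
return s**n * F - correction
laplace_of_derivative(f, t, s) # reproduces s*LaplaceTransform(f(t), t, s) - f(0)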
sol = solve(eq, LaplaceTransform(f(t), t, s))
sol
Explanation: We then solve for $F(s)$
End of explanation
inverse_laplace_transform(sol[0], s, t)
Explanation: and compute the inverse Laplace transform
End of explanation
factor(dsolve(f(t).diff(t) - 3*f(t) - exp(-t)))
Explanation: and we verify this using dsolve
End of explanation
frac = 1/(x**2*(x**2 + 1))
frac
apart(frac)
Explanation: that is equal if $4C_1 = 4f(0) + 1$.
It is common to use partial fraction decomposition when computing inverse
Laplace transforms. We can do this using apart, as follows
End of explanation
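Tying this back to the transform itself, a small sketch with the decomposition taken in s rather than x:
inverse_laplace_transform(apart(1/(s**2*(s**2 + 1)), s), s, t) # expect t - sin(t), up to a Heaviside factor depending on assumptions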
laplace_transform(Heaviside(t - 3), t, s, noconds=True)
laplace_transform(DiracDelta(t - 2), t, s, noconds=True)
Explanation: We can also compute the Laplace transform of Heaviside
and Dirac's Delta "functions"
End of explanation
from IPython.core.display import HTML
def css_styling():
styles = open('./styles/custom_barba.css', 'r').read()
return HTML(styles)
css_styling()
Explanation: The next cell changes the format of the notebook.
End of explanation |
14,321 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Bootstrap
Import and settings
In this example, we need to import numpy, pandas, and graphviz in addition to lingam.
Step1: Test data
We create test data consisting of 6 variables.
Step2: Bootstrapping
We call bootstrap() method instead of fit(). Here, the second argument specifies the number of bootstrap sampling.
Step3: Causal Directions
Since BootstrapResult object is returned, we can get the ranking of the causal directions extracted by get_causal_direction_counts() method. In the following sample code, n_directions option is limited to the causal directions of the top 8 rankings, and min_causal_effect option is limited to causal directions with a coefficient of 0.01 or more.
Step4: We can check the result by utility function.
Step5: Directed Acyclic Graphs
Also, using the get_directed_acyclic_graph_counts() method, we can get the ranking of the DAGs extracted. In the following sample code, n_dags option is limited to the dags of the top 3 rankings, and min_causal_effect option is limited to causal directions with a coefficient of 0.01 or more.
Step6: We can check the result by utility function.
Step7: Probability
Using the get_probabilities() method, we can get the probability of bootstrapping.
Step8: Total Causal Effects
Using the get_total_causal_effects() method, we can get the list of total causal effect. The total causal effects we can get are dictionary type variable.
We can display the list nicely by assigning it to pandas.DataFrame. Also, we have replaced the variable index with a label below.
Step9: We can easily perform sorting operations with pandas.DataFrame.
Step10: And with pandas.DataFrame, we can easily filter by keywords. The following code extracts the causal direction towards x1.
Step11: Because it holds the raw data of the total causal effect (the original data for calculating the median), it is possible to draw a histogram of the values of the causal effect, as shown below.
Step12: Bootstrap Probability of Path
Using the get_paths() method, we can explore all paths from any variable to any variable and calculate the bootstrap probability for each path. The path will be output as an array of variable indices. For example, the array [3, 0, 1] shows the path from variable X3 through variable X0 to variable X1. | Python Code:
import numpy as np
import pandas as pd
import graphviz
import lingam
from lingam.utils import print_causal_directions, print_dagc, make_dot
print([np.__version__, pd.__version__, graphviz.__version__, lingam.__version__])
np.set_printoptions(precision=3, suppress=True)
np.random.seed(0)
Explanation: Bootstrap
Import and settings
In this example, we need to import numpy, pandas, and graphviz in addition to lingam.
End of explanation
x3 = np.random.uniform(size=1000)
x0 = 3.0*x3 + np.random.uniform(size=1000)
x2 = 6.0*x3 + np.random.uniform(size=1000)
x1 = 3.0*x0 + 2.0*x2 + np.random.uniform(size=1000)
x5 = 4.0*x0 + np.random.uniform(size=1000)
x4 = 8.0*x0 - 1.0*x2 + np.random.uniform(size=1000)
X = pd.DataFrame(np.array([x0, x1, x2, x3, x4, x5]).T ,columns=['x0', 'x1', 'x2', 'x3', 'x4', 'x5'])
X.head()
m = np.array([[0.0, 0.0, 0.0, 3.0, 0.0, 0.0],
[3.0, 0.0, 2.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 6.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[8.0, 0.0,-1.0, 0.0, 0.0, 0.0],
[4.0, 0.0, 0.0, 0.0, 0.0, 0.0]])
make_dot(m)
Explanation: Test data
We create test data consisting of 6 variables.
End of explanation
model = lingam.DirectLiNGAM()
result = model.bootstrap(X, n_sampling=100)
Explanation: Bootstrapping
We call the bootstrap() method instead of fit(). Here, the second argument specifies the number of bootstrap samples.
End of explanation
cdc = result.get_causal_direction_counts(n_directions=8, min_causal_effect=0.01, split_by_causal_effect_sign=True)
Explanation: Causal Directions
Since BootstrapResult object is returned, we can get the ranking of the causal directions extracted by get_causal_direction_counts() method. In the following sample code, n_directions option is limited to the causal directions of the top 8 rankings, and min_causal_effect option is limited to causal directions with a coefficient of 0.01 or more.
End of explanation
print_causal_directions(cdc, 100)
Explanation: We can check the result by utility function.
End of explanation
dagc = result.get_directed_acyclic_graph_counts(n_dags=3, min_causal_effect=0.01, split_by_causal_effect_sign=True)
Explanation: Directed Acyclic Graphs
Also, using the get_directed_acyclic_graph_counts() method, we can get the ranking of the DAGs extracted. In the following sample code, n_dags option is limited to the dags of the top 3 rankings, and min_causal_effect option is limited to causal directions with a coefficient of 0.01 or more.
End of explanation
print_dagc(dagc, 100)
Explanation: We can check the result by utility function.
End of explanation
prob = result.get_probabilities(min_causal_effect=0.01)
print(prob)
Explanation: Probability
Using the get_probabilities() method, we can get the probability of bootstrapping.
End of explanation
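If a visual summary helps, the probability matrix can be rendered as a heatmap — just a sketch, following the adjacency-matrix convention used above (rows are effects, columns are causes):
import matplotlib.pyplot as plt
import seaborn as sns
sns.heatmap(prob, annot=True, cmap='Blues', xticklabels=X.columns, yticklabels=X.columns)
plt.xlabel('from'); plt.ylabel('to')
plt.show()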
causal_effects = result.get_total_causal_effects(min_causal_effect=0.01)
# Assign to pandas.DataFrame for pretty display
df = pd.DataFrame(causal_effects)
labels = [f'x{i}' for i in range(X.shape[1])]
df['from'] = df['from'].apply(lambda x : labels[x])
df['to'] = df['to'].apply(lambda x : labels[x])
df
Explanation: Total Causal Effects
Using the get_total_causal_effects() method, we can get the list of total causal effects. The result comes back as a dictionary, which makes it convenient to hand straight to pandas.DataFrame.
We can display the list nicely by assigning it to pandas.DataFrame. Also, we have replaced the variable index with a label below.
End of explanation
df.sort_values('effect', ascending=False).head()
df.sort_values('probability', ascending=True).head()
Explanation: We can easily perform sorting operations with pandas.DataFrame.
End of explanation
df[df['to']=='x1'].head()
Explanation: And with pandas.DataFrame, we can easily filter by keywords. The following code extracts the causal direction towards x1.
End of explanation
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
%matplotlib inline
from_index = 3 # index of x3
to_index = 0 # index of x0
plt.hist(result.total_effects_[:, to_index, from_index])
Explanation: Because it holds the raw data of the total causal effect (the original data for calculating the median), it is possible to draw a histogram of the values of the causal effect, as shown below.
End of explanation
from_index = 3 # index of x3
to_index = 1 # index of x1
pd.DataFrame(result.get_paths(from_index, to_index))
Explanation: Bootstrap Probability of Path
Using the get_paths() method, we can explore all paths from any variable to any variable and calculate the bootstrap probability for each path. The path will be output as an array of variable indices. For example, the array [3, 0, 1] shows the path from variable X3 through variable X0 to variable X1.
End of explanation |
14,322 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Concepts and data from "An Introduction to Statistical Learning, with applications in R" (Springer, 2013) with permission from the authors
Step1: Acquiring and seeing trends in multidimensional data
Step2: FIGURE 3.1. For the Advertising data, the least squares fit for the regression of sales onto TV is shown. The fit is found by minimizing the sum of squared errors.
Each line segment represents an error, and the fit makes a compromise by averaging their squares. In this case a linear fit captures the essence of the relationship, although it is somewhat deficient in the left of the plot.
Step3: Let $\hat{y}_i = \hat{\beta_0} + \hat{\beta_1}x_i$ be the prediction for $Y$ based on the $i$-th value of $X$.
Step4: The residual is the difference between the model output and the observed output
Step5: The residual is not very useful directly because balances over-estimates and under-estimates to produce the best overall estimate with the least error. We can understand the overall goodness of fit by the residual sum of squares - RSS.
Step6: With this, we can build an independent model for each of the inputs and look at the associates RSS.
Step7: We can look at how well each of these inputs predict the output by visualizing the residuals.
The magnitude of the RSS gives a sense of the error.
Step8: Figure 3.2 - RSS and least squares regression
Regression using least squares finds parameters $b0$ and $b1$ tohat minimize the RSS. The least squares regression line estimates the population regression line.
Figure 3.2 shows (what is claimed to be) the RSS contour and surface around the regression point. The computed analog of Figure 3.2 is shown below, with the role of $b0$ and $b1$ reversed to match the tuple returned from the regression method. The plot is the text is incorrect is some important ways. The RSS is not radially symmetric around ($b0$, $b1$). Lines with a larger intercept and smaller slope or vice versa are very close to the minima, i.e., the surface is nearly flat along the upper-left to lower-right diagonal, especially where fit is not very good, since the output depends on more than this one input.
Just because a process minimizes the error does not mean that the minima is sharply defined or that the error surface has no structure. Below we go a bit beyond the text to illustrate this more fully.
Step9: 3.1.2 Assessing the accuracy of coefficient estimates
The particular minima that is found through least squares regression is effected by the particular sample of the population that is observed and utilized for estimating the coefficients of the underlying population model.
To see this we can generate a synthetic population based on an ideal model plus noise. Here we can peek at the entire population (which in most settings cannot be observed). We then take samples of this population and fit a regression to those. We can then see how these regression lines differ from the population regression line, which is not exactly the ideal model.
Step10: "The property of unbiasedness holds for the least squares coefficient estimates given by (3.4) as well
Step11: 3.1.3 Assessing the Accuracy of the Model
Once we have rejected the null hypothesis in favor of the alternative hypothesis, it is natural to want to quantify the extent to which the model fits the data. The quality of a linear regression fit is typically assessed using two related quantities
Step12: The $R^2$ statistic provides an alternative measure of fit. It takes the form of a proportion—the proportion of variance explained—and so it always takes on a value between 0 and 1, and is independent of the scale of Y.
To calculate $R^2$, we use the formula
$R^2 = \frac{TSS−RSS}{TSS} = 1 − \frac{RSS}{TSS}$
where $TSS = \sum (y_i - \bar{y})^2$ is the total sum of squares. | Python Code:
# HIDDEN
# For Tables reference see http://data8.org/datascience/tables.html
# This useful nonsense should just go at the top of your notebook.
from datascience import *
%matplotlib inline
import matplotlib.pyplot as plots
import numpy as np
from sklearn import linear_model
plots.style.use('fivethirtyeight')
plots.rc('lines', linewidth=1, color='r')
from ipywidgets import interact, interactive, fixed
import ipywidgets as widgets
# datascience version number of last run of this notebook
version.__version__
import sys
sys.path.append("..")
from ml_table import ML_Table
import locale
locale.setlocale( locale.LC_ALL, 'en_US.UTF-8' )
Explanation: Concepts and data from "An Introduction to Statistical Learning, with applications in R" (Springer, 2013) with permission from the authors: G. James, D. Witten, T. Hastie and R. Tibshirani, available at www.StatLearning.com.
For Tables reference see http://data8.org/datascience/tables.html
http://jeffskinnerbox.me/notebooks/matplotlib-2d-and-3d-plotting-in-ipython.html
End of explanation
# Getting the data
advertising = ML_Table.read_table("./data/Advertising.csv")
advertising.relabel(0, "id")
Explanation: Acquiring and seeing trends in multidimensional data
End of explanation
ax = advertising.plot_fit_1d('Sales', 'TV', advertising.regression_1d('Sales', 'TV'))
_ = ax.set_xlim(-20,300)
lr = advertising.linear_regression('Sales', 'TV')
advertising.plot_fit_1d('Sales', 'TV', lr.model)
advertising.lm_summary_1d('Sales', 'TV')
Explanation: FIGURE 3.1. For the Advertising data, the least squares fit for the regression of sales onto TV is shown. The fit is found by minimizing the sum of squared errors.
Each line segment represents an error, and the fit makes a compromise by averaging their squares. In this case a linear fit captures the essence of the relationship, although it is somewhat deficient in the left of the plot.
End of explanation
# Get the actual parameters that are captured within the model
advertising.regression_1d_params('Sales', 'TV')
# Regression yields a model. The computational representation of a model is a function
# That can be applied to an input, 'TV' to get an estimate of an output, 'Sales'
advertise_model_tv = advertising.regression_1d('Sales', 'TV')
# Sales with no TV advertising
advertise_model_tv(0)
# Sales with 100 units TV advertising
advertise_model_tv(100)
# Here's the output of the model applied to the input data
advertise_model_tv(advertising['TV'])
Explanation: Let $\hat{y}_i = \hat{\beta_0} + \hat{\beta_1}x_i$ be the prediction for $Y$ based on the $i$-th value of $X$.
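As a cross-check that does not depend on the ML_Table helpers, the same predictions can be computed directly from the least-squares formulas with plain numpy (a sketch; it only assumes the 'TV' and 'Sales' columns used above):
# Minimal numpy sketch of y_hat_i = b0_hat + b1_hat * x_i (not the ML_Table implementation)
x = np.asarray(advertising['TV'], dtype=float)
y = np.asarray(advertising['Sales'], dtype=float)
b1_hat = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean())**2)  # slope
b0_hat = y.mean() - b1_hat * x.mean()                                         # intercept
y_hat = b0_hat + b1_hat * x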
End of explanation
residual = advertising['Sales'] - advertise_model_tv(advertising['TV'])
residual
Explanation: The residual is the difference between the model output and the observed output
End of explanation
# Residual Sum of Squares
RSS = sum(residual*residual)
RSS
# This is common enough that we have it provided as a method
advertising.RSS_model('Sales', advertising.regression_1d('Sales', 'TV'), 'TV')
# And we should move toward a general regression framework
advertising.RSS_model('Sales', advertising.linear_regression('Sales', 'TV').model, 'TV')
Explanation: The residual is not very useful directly because it balances over-estimates and under-estimates to produce the best overall estimate with the least error. We can understand the overall goodness of fit by the residual sum of squares - RSS.
End of explanation
advertising_models = Table().with_column('Input', ['TV', 'Radio', 'Newspaper'])
advertising_models['Model'] = advertising_models.apply(lambda i: advertising.regression_1d('Sales', i), 'Input')
advertising_models['RSS'] = advertising_models.apply(lambda i, m: advertising.RSS_model('Sales', m, i), ['Input', 'Model'])
advertising_models
advertising_models = Table().with_column('Input', ['TV', 'Radio', 'Newspaper'])
advertising_models['Model'] = advertising_models.apply(lambda i: advertising.linear_regression('Sales', i).model, 'Input')
advertising_models['RSS'] = advertising_models.apply(lambda i, m: advertising.RSS_model('Sales', m, i), ['Input', 'Model'])
advertising_models
Explanation: With this, we can build an independent model for each of the inputs and look at the associated RSS.
End of explanation
for mode, mdl in zip(advertising_models['Input'], advertising_models['Model']) :
advertising.plot_fit_1d('Sales', mode, mdl)
# RSS at arbitrary point
res = lambda b0, b1: advertising.RSS('Sales', b0 + b1*advertising['TV'])
res(7.0325935491276965, 0.047536640433019729)
Explanation: We can look at how well each of these inputs predicts the output by visualizing the residuals.
The magnitude of the RSS gives a sense of the error.
End of explanation
ax = advertising.RSS_contour('Sales', 'TV', sensitivity=0.2)
ax = advertising.RSS_wireframe('Sales', 'TV', sensitivity=0.2)
# The minima point
advertising.linear_regression('Sales', 'TV').params
# Some other points along the trough
points = [(0.042, 8.0), (0.044, 7.6), (0.050, 6.6), (0.054, 6.0)]
ax = advertising.RSS_contour('Sales', 'TV', sensitivity=0.2)
ax.plot([b0 for b1, b0 in points], [b1 for b1, b0 in points], 'ro')
[advertising.RSS('Sales', b0 + b1*advertising['TV']) for b1,b0 in points]
# Models as lines corresponding to points along the near-minimal vectors.
ax = advertising.plot_fit_1d('Sales', 'TV', advertising.linear_regression('Sales', 'TV').model)
for b1, b0 in points:
fit = lambda x: b0 + b1*x
ax.plot([0, 300], [fit(0), fit(300)])
_ = ax.set_xlim(-20,300)
Explanation: Figure 3.2 - RSS and least squares regression
Regression using least squares finds parameters $b0$ and $b1$ that minimize the RSS. The least squares regression line estimates the population regression line.
Figure 3.2 shows (what is claimed to be) the RSS contour and surface around the regression point. The computed analog of Figure 3.2 is shown below, with the roles of $b0$ and $b1$ reversed to match the tuple returned from the regression method. The plot in the text is incorrect in some important ways. The RSS is not radially symmetric around ($b0$, $b1$). Lines with a larger intercept and smaller slope, or vice versa, are very close to the minimum, i.e., the surface is nearly flat along the upper-left to lower-right diagonal, especially where the fit is not very good, since the output depends on more than this one input.
Just because a process minimizes the error does not mean that the minimum is sharply defined or that the error surface has no structure. Below we go a bit beyond the text to illustrate this more fully.
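To see the valley structure without the ML_Table helpers, the RSS can also be evaluated on a plain grid of candidate (intercept, slope) pairs (a sketch; the grid ranges are illustrative):
# Sketch: brute-force RSS surface over a grid of (b0, b1) candidates
x = np.asarray(advertising['TV'], dtype=float)
y = np.asarray(advertising['Sales'], dtype=float)
intercepts = np.linspace(5.0, 9.0, 60)
slopes = np.linspace(0.03, 0.065, 60)
rss_grid = np.array([[np.sum((y - (b0 + b1 * x))**2) for b1 in slopes] for b0 in intercepts])
plots.contour(slopes, intercepts, rss_grid, 30)   # long, nearly flat diagonal valley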
End of explanation
def model (x):
return 3*x + 2
def population(n, noise_scale = 1):
sample = ML_Table.runiform('x', n, -2, 2)
noise = ML_Table.rnorm('e', n, sd=noise_scale)
sample['Y'] = sample.apply(model, 'x') + noise['e']
return sample
data = population(100, 2)
data.scatter('x')
ax = data.plot_fit('Y', data.linear_regression('Y').model)
data.linear_regression('Y').params
# A random sample of the population
sample = data.sample(10)
sample.plot_fit('Y', sample.linear_regression('Y').model)
nsamples = 5
ax = data.plot_fit('Y', data.linear_regression('Y').model, linewidth=3)
for s in range(nsamples):
fit = data.sample(10).linear_regression('Y').model
ax.plot([-2, 2], [fit(-2), fit(2)], linewidth=1)
Explanation: 3.1.2 Assessing the accuracy of coefficient estimates
The particular minimum that is found through least squares regression is affected by the particular sample of the population that is observed and utilized for estimating the coefficients of the underlying population model.
To see this we can generate a synthetic population based on an ideal model plus noise. Here we can peek at the entire population (which in most settings cannot be observed). We then take samples of this population and fit a regression to those. We can then see how these regression lines differ from the population regression line, which is not exactly the ideal model.
End of explanation
adv_sigma = advertising.RSE_model('Sales', advertising.linear_regression('Sales', 'TV').model, 'TV')
adv_sigma
b0, b1 = advertising.linear_regression('Sales', 'TV').params
b0, b1
advertising.RSS_model('Sales', advertising.linear_regression('Sales', 'TV').model, 'TV')
SE_b0, SE_b1 = advertising.SE_1d_params('Sales', 'TV')
SE_b0, SE_b1
# b0 95% confidence interval
(b0-2*SE_b0, b0+2*SE_b0)
# b1 95% confidence interval
(b1-2*SE_b1, b1+2*SE_b1)
# t-statistic of the slope
b0/SE_b0
# t-statistics of the intercept
b1/SE_b1
# Similar to summary of a linear model in R
advertising.lm_summary_1d('Sales', 'TV')
# We can just barely reject the null hypothesis for Newspaper
# advertising affecting sales
advertising.lm_summary_1d('Sales', 'Newspaper')
Explanation: "The property of unbiasedness holds for the least squares coefficient estimates given by (3.4) as well: if we estimate β0 and β1 on the basis of a particular data set, then our estimates won’t be exactly equal to β0 and β1. But if we could average the estimates obtained over a huge number of data sets, then the average of these estimates would be spot on!"
To compute the standard errors associated with $β_0$ and $β_1$, we use the following formulas.
The slope, $b_1$:
$SE(\hat{β_1})^2 = \frac{σ^2}{\sum_{i=1}^n (x_i - \bar{x})^2}$
The intercept, $b_0$:
$SE(\hat{β_0})^2 = σ^2 [\frac{1}{n} + \frac{\bar{x}^2}{\sum_{i=1}^n (x_i - \bar{x})^2} ] $,
where $σ^2 = Var(ε)$.
In general, $σ^2$ is not known, but can be estimated from the data.
The estimate of σ is known as the residual standard error, and is given by the formula
$RSE = \sqrt{RSS/(n − 2)}$.
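These formulas can be checked directly with plain numpy, using the RSE as the plug-in estimate of $\sigma$ (a sketch, not the SE_1d_params implementation):
# Sketch of the standard-error formulas above (columns as in the cells before)
x = np.asarray(advertising['TV'], dtype=float)
y = np.asarray(advertising['Sales'], dtype=float)
n = len(x)
b1_hat = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean())**2)
b0_hat = y.mean() - b1_hat * x.mean()
rss = np.sum((y - (b0_hat + b1_hat * x))**2)
sigma_hat = np.sqrt(rss / (n - 2))                                   # RSE
se_b1 = np.sqrt(sigma_hat**2 / np.sum((x - x.mean())**2))
se_b0 = np.sqrt(sigma_hat**2 * (1.0/n + x.mean()**2 / np.sum((x - x.mean())**2)))
(b0_hat - 2*se_b0, b0_hat + 2*se_b0), (b1_hat - 2*se_b1, b1_hat + 2*se_b1)   # ~95% CIs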
End of explanation
adver_model = advertising.regression_1d('Sales', 'TV')
advertising.RSE_model('Sales', adver_model, 'TV')
Explanation: 3.1.3 Assessing the Accuracy of the Model
Once we have rejected the null hypothesis in favor of the alternative hypothesis, it is natural to want to quantify the extent to which the model fits the data. The quality of a linear regression fit is typically assessed using two related quantities: the residual standard error (RSE) and the $R^2$ statistic.
The RSE provides an absolute measure of lack of fit of the model to the data.
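One way to read the RSE is as a rough percentage error relative to the mean response (a small sketch using the adv_sigma value computed earlier):
# Sketch: RSE relative to the mean response
mean_sales = np.mean(np.asarray(advertising['Sales'], dtype=float))
adv_sigma / mean_sales          # average error as a fraction of the mean response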
End of explanation
advertising.R2_model('Sales', adver_model, 'TV')
# the other models of advertising suggest that there is more going on
advertising.R2_model('Sales', adver_model, 'Radio')
advertising.R2_model('Sales', adver_model, 'Newspaper')
Explanation: The $R^2$ statistic provides an alternative measure of fit. It takes the form of a proportion—the proportion of variance explained—and so it always takes on a value between 0 and 1, and is independent of the scale of Y.
To calculate $R^2$, we use the formula
$R^2 = \frac{TSS−RSS}{TSS} = 1 − \frac{RSS}{TSS}$
where $TSS = \sum (y_i - \bar{y})^2$ is the total sum of squares.
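The same number can be computed by hand from the two sums of squares (a sketch, not the R2_model implementation):
# Sketch: R^2 = 1 - RSS/TSS for the TV-only model defined above
y = np.asarray(advertising['Sales'], dtype=float)
y_hat = adver_model(advertising['TV'])
rss = np.sum((y - y_hat)**2)
tss = np.sum((y - y.mean())**2)
1 - rss / tss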
End of explanation |
14,323 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using MinBLEP to generate a Saw
Step1: Picking up from where we left off with the MinBlep notebook
Step2: The above saw algorithm goes from -1 to 1, but the blep is from 0 to 1, so it needs to be scaled
Step3: An interesting observation here | Python Code:
%pylab inline
import numpy as np
from minblep import generate_min_blep
sample_rate = 44100
Explanation: Using MinBLEP to generate a Saw
End of explanation
plot(generate_min_blep(15, 400))
def gen_pure_saw(osc_freq, sample_rate, num_samples, initial_phase=0):
peak_amplitude = 1.0
two_pi = 2.0 * np.pi
phase = initial_phase
for i in range(0, num_samples):
out_amp = peak_amplitude - (peak_amplitude / np.pi * phase)
phase = phase + ((2 * np.pi * osc_freq) / sample_rate)
if phase >= two_pi:
phase -= two_pi
yield out_amp
plot(list(gen_pure_saw(440, sample_rate, 800)))
# 5hz, 1000 samples per second, 1000 samples, beginning phase is pi
# this should plot 5 cycles, and begin at 0
plot(list(gen_pure_saw(5, 1000, 1000, np.pi)))
blep_buf = generate_min_blep(20, 10)
plot(blep_buf)
Explanation: Picking up from where we left off with the MinBlep notebook:
End of explanation
scaled_blep = [val * 2 - 1 for val in blep_buf]
plot(scaled_blep)
def gen_bl_saw(blep_buffer, osc_freq, sample_rate, num_samples, initial_phase=0):
blep_size = len(blep_buffer)
blep_pointers = []
peak_amplitude = 1.0
two_pi = 2.0 * np.pi
phase = initial_phase
for i in range(0, num_samples):
out_amp = peak_amplitude - (peak_amplitude / np.pi * phase)
for ptr in blep_pointers:
out_amp *= blep_buffer[ptr]
blep_pointers = [(ptr + 1) for ptr in blep_pointers if ptr + 1 < blep_size]
phase = phase + ((2 * np.pi * osc_freq) / sample_rate)
if phase >= two_pi:
phase -= two_pi
blep_pointers.append(0)
yield out_amp
raw_wave = gen_pure_saw(880, sample_rate, 1000, np.pi)
bl_wave = list(gen_bl_saw(scaled_blep, 880, sample_rate, 1000, np.pi))
plot(bl_wave)
def plot_spectrum(signal, sample_rate):
# http://samcarcagno.altervista.org/blog/basic-sound-processing-python/
size = len(signal)
fft_res = np.fft.fft(signal)
fft_unique_points = int(ceil((size+1)/2.0))
p = abs(fft_res[0:fft_unique_points])
p = p / float(size)
p = p ** 2
if size % 2 == 1:
p[1:len(p)] = p[1:len(p)] * 2
else:
p[1:len(p) - 1] = p[1:len(p) - 1] * 2
freq_axis_array = arange(0, fft_unique_points, 1.0) * (sample_rate / size)
plot(freq_axis_array/1000, 10*log10(p))
#xscale('log')
#xlim(xmin=20)
xlabel('Frequency (kHz)')
ylabel('Power (dB)')
plot_spectrum(bl_wave, sample_rate)
plot_spectrum(list(raw_wave), sample_rate)
raw_wave = list(gen_pure_saw(1760, sample_rate, 1000, np.pi))
bl_wave = list(gen_bl_saw(scaled_blep, 1760, sample_rate, 1000, np.pi))
plot(bl_wave)
plot(raw_wave)
plot_spectrum(raw_wave, sample_rate)
plot_spectrum(bl_wave, sample_rate)
other_blep = [val * 2 - 1 for val in generate_min_blep(10, 2)]
bl_wave = list(gen_bl_saw(other_blep, 1760, sample_rate, 1000, np.pi))
plot_spectrum(list(gen_pure_saw(1760, sample_rate, 1000, np.pi)), sample_rate)
plot_spectrum(bl_wave, sample_rate)
plot(other_blep)
lf_blep = [val * 2 - 1 for val in generate_min_blep(10, 2)]
bl_wave = list(gen_bl_saw(lf_blep, 220, sample_rate, 1000, np.pi))
plot_spectrum(list(gen_pure_saw(220, sample_rate, 1000, np.pi)), sample_rate)
plot_spectrum(bl_wave, sample_rate)
lf_blep = [val * 2 - 1 for val in generate_min_blep(50,2)]
bl_wave = list(gen_bl_saw(lf_blep, 220, sample_rate, 1000, np.pi))
plot_spectrum(list(gen_pure_saw(220, sample_rate, 1000, np.pi)), sample_rate)
plot_spectrum(bl_wave, sample_rate)
Explanation: The above saw algorithm goes from -1 to 1, but the blep is from 0 to 1, so it needs to be scaled:
End of explanation
lf_blep = [val * 2 - 1 for val in generate_min_blep(50, 2)]
bl_wave = list(gen_bl_saw(lf_blep, 1760, sample_rate, 1000, np.pi))
plot_spectrum(list(gen_pure_saw(1760, sample_rate, 1000, np.pi)), sample_rate)
plot_spectrum(bl_wave, sample_rate)
lf_blep = [val * 2 - 1 for val in generate_min_blep(50, 8)]
bl_wave = list(gen_bl_saw(lf_blep, 3520, sample_rate, 1000, np.pi))
plot_spectrum(list(gen_pure_saw(3520, sample_rate, 1000, np.pi)), sample_rate)
plot_spectrum(bl_wave, sample_rate)
Explanation: An interesting observation here: In this plot, it really looks like this is doing what we want it to do: the whole spectrum above ~11k falls off pretty sharply. We want that drop-off to happen at Nyquist, though. The issue is that the gen_bl_saw algorithm isn't properly resampling the blep buffer, and since the buffer is 2x oversampled, the drop-off happens at 1/2 Nyquist.
This also indicates that we should expect some aliasing, but that it will be about 90dB quieter than the fundamental, and about 45dB quieter than Nyquist.
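One direction for the fix hinted at here (a rough sketch only, not a drop-in replacement): advance each active blep pointer by the oversampling factor, so the blep is consumed at its own rate and its corner lands at the true Nyquist. This still ignores the fractional-phase offset of the discontinuity, so treat it as an outline of the idea.
def gen_bl_saw_resampled(blep_buffer, oversampling, osc_freq, sample_rate, num_samples, initial_phase=0):
    # Same structure as gen_bl_saw above, but each pointer steps `oversampling` samples at a time.
    blep_size = len(blep_buffer)
    blep_pointers = []
    peak_amplitude = 1.0
    two_pi = 2.0 * np.pi
    phase = initial_phase
    for i in range(0, num_samples):
        out_amp = peak_amplitude - (peak_amplitude / np.pi * phase)
        for ptr in blep_pointers:
            out_amp *= blep_buffer[ptr]
        blep_pointers = [ptr + oversampling for ptr in blep_pointers if ptr + oversampling < blep_size]
        phase = phase + ((2 * np.pi * osc_freq) / sample_rate)
        if phase >= two_pi:
            phase -= two_pi
            blep_pointers.append(0)
        yield out_amp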
End of explanation |
14,324 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<font color="#04B404"><h1 align="center">Machine Learning 2017-2018</h1></font>
<font color="#6E6E6E"><h2 align="center">Practical 4
Step1: 1.1. Linear kernel
Complete the code of the linear_kernel function and check your solution.
Step2: 1.2. Polynomial kernel
Complete the code of the poly_kernel function and check your solution.
Step3: 1.3. Gaussian kernel
Complete the code of the rbf_kernel function and check your solution.
Step4: 2. The SMO algorithm
Next you will complete the SVM class, which represents a classifier based on support vector machines, and you will implement the SMO algorithm inside this class.
2.1. Implement the evaluate_model method
The first thing you must do is complete the evaluate_model method, which receives an array with the data (attributes) of the problem $x$ and computes $f(x)$. Once you have it, you can run the following code cell to check that it works correctly
Step5: 2.2. Complete the rest of the methods of the SVM class
Complete the select_alphas, calculate_eta, update_alphas and update_b methods. Once they are finished, run the following cells to check your implementation.
The first test consists of training the XOR problem
Step6: The next test generates a random problem and solves it with your method and with sklearn. Both solutions should be similar, although yours will be much slower. Try the different kernel types to check your implementation.
Step7: 3. Visualization of simple models
Finally we will use the sklearn implementation to solve simple classification problems in two dimensions. The goal is to understand how the different kernel types (polynomial and RBF) behave on problems that are easy to visualize.
To implement the models we will use the class <a href="http
Step8: The following cell performs the following actions | Python Code:
# Imports
import numpy as np
import svm as svm
from sklearn.metrics.pairwise import polynomial_kernel
from sklearn.metrics.pairwise import rbf_kernel
# Datos de prueba:
n = 10
m = 8
d = 4
x = np.random.randn(n, d)
y = np.random.randn(m, d)
print x.shape
print y.shape
Explanation: <font color="#04B404"><h1 align="center">Machine Learning 2017-2018</h1></font>
<font color="#6E6E6E"><h2 align="center">Practical 4: Support Vector Machines - The SMO Algorithm</h2></font>
In this practical we will implement a simplified version of the SMO algorithm based on <a href="http://cs229.stanford.edu/materials/smo.pdf">these notes</a>. The original SMO algorithm (<a href="https://www.microsoft.com/en-us/research/publication/sequential-minimal-optimization-a-fast-algorithm-for-training-support-vector-machines/">Platt, 1998</a>) uses a somewhat complex heuristic to select the alphas with respect to which to optimize. The version we will implement simplifies the alpha-selection process, at the cost of slower convergence. Once implemented, we will compare the results of our algorithm with those obtained using the <a href="http://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html">SVC</a> class of the sklearn.svm package.
Finally, we will use the sklearn implementation to solve simple two-dimensional classification problems and easily visualize the decision boundary and the margin.
All the code you have to develop must be included in the file svm.py, in the places indicated.
Submission:
The deadline for submission is <font color="#931405">21/12/2017 at 23:59</font>. A single compressed file must be uploaded to the moodle platform with the following content:
The file svm.py with all the code added for this practical.
This notebook with the answers to the questions posed at the end.
1. Implementing the kernels
The first thing we are going to do is implement the functions that compute the kernels. In the file svm.py, complete the code of the functions linear_kernel, poly_kernel and rbf_kernel. Then run the following cells, which compare the results of these functions with equivalent functions from sklearn.
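For orientation only, one possible shape for these three functions is sketched below — note that the comparison cells further down use coef0=1, so the linear and polynomial kernels include a +1/+b term; the exact conventions of svm.py are an assumption here.
import numpy as np

def linear_kernel_sketch(x, y):
    # Matches the check against polynomial_kernel(degree=1, gamma=1, coef0=1): <x_i, y_j> + 1
    return np.dot(x, y.T) + 1

def poly_kernel_sketch(x, y, deg=2, b=1):
    # K[i, j] = (<x_i, y_j> + b)**deg
    return (np.dot(x, y.T) + b) ** deg

def rbf_kernel_sketch(x, y, sigma=1.0):
    # K[i, j] = exp(-||x_i - y_j||^2 / (2*sigma^2))
    sq_dists = np.sum(x**2, axis=1)[:, None] + np.sum(y**2, axis=1)[None, :] - 2 * np.dot(x, y.T)
    return np.exp(-sq_dists / (2.0 * sigma**2))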
End of explanation
# Con tu implementación:
K = svm.linear_kernel(x, y)
print "El array K deberia tener dimensiones: (%d, %d)" % (n, m)
print "A ti te sale un array con dimensiones:", K.shape
# Con la implementación de sklearn:
K_ = polynomial_kernel(x, y, degree=1, gamma=1, coef0=1)
# Diferencia entre tu kernel y el de sklearn (deberia salir practicamente 0):
maxdif = np.max(np.abs(K - K_))
print "Maxima diferencia entre tu implementacion y la de sklearn:", maxdif
Explanation: 1.1. Linear kernel
Complete the code of the linear_kernel function and check your solution.
End of explanation
# Con tu implementación:
K = svm.poly_kernel(x, y, deg=2, b=1)
print "El array K deberia tener dimensiones: (%d, %d)" % (n, m)
print "A ti te sale un array con dimensiones:", K.shape
# Con la implementación de sklearn:
K_ = polynomial_kernel(x, y, degree=2, gamma=1, coef0=1)
# Diferencia entre tu kernel y el de sklearn (deberia salir practicamente 0):
maxdif = np.max(np.abs(K - K_))
print "Maxima diferencia entre tu implementacion y la de sklearn:", maxdif
Explanation: 1.2. Polynomial kernel
Complete the code of the poly_kernel function and check your solution.
End of explanation
s = 1.0
# Con tu implementación:
K = svm.rbf_kernel(x, y, sigma=s)
print "El array K deberia tener dimensiones: (%d, %d)" % (n, m)
print "A ti te sale un array con dimensiones:", K.shape
# Con la implementación de sklearn:
K_ = rbf_kernel(x, y, gamma=1/(2*s**2))
# Diferencia entre tu kernel y el de sklearn (deberia salir practicamente 0):
maxdif = np.max(np.abs(K - K_))
print "Maxima diferencia entre tu implementacion y la de sklearn:", maxdif
Explanation: 1.3. Gaussian kernel
Complete the code of the rbf_kernel function and check your solution.
End of explanation
# Datos de prueba (problema XOR):
x = np.array([[-1, -1], [1, 1], [1, -1], [-1, 1]])
y = np.array([1, 1, -1, -1])
# Alphas y b:
alpha = np.array([0.125, 0.125, 0.125, 0.125])
b = 0
# Clasificador, introducimos la solucion a mano:
svc = svm.SVM(C=1000, kernel="poly", sigma=1, deg=2, b=1)
svc.init_model(alpha, b, x, y)
# Clasificamos los puntos x:
y_ = svc.evaluate_model(x)
# Las predicciones deben ser exactamente iguales que las clases:
print "Predicciones (deberian ser [1, 1, -1, -1]):", y_
Explanation: 2. The SMO algorithm
Next you will complete the SVM class, which represents a classifier based on support vector machines, and you will implement the SMO algorithm inside this class.
2.1. Implement the evaluate_model method
The first thing you must do is complete the evaluate_model method, which receives an array with the data (attributes) of the problem $x$ and computes $f(x)$. Once you have it, you can run the following code cell to check that it works correctly:
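As a reference for what $f(x)$ computes — a sketch only, not the class's actual code; the attribute names (x, y, alpha, b) are an assumption based on init_model and the attributes printed below:
def evaluate_model_sketch(model, x_new, kernel):
    # f(x) = sum_i alpha_i * y_i * K(x_i, x) + b; the sign gives the predicted class
    K = kernel(model.x, x_new)                    # shape (n_train, n_new)
    f = np.dot(model.alpha * model.y, K) + model.b
    return np.sign(f)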
End of explanation
# Prueba con los datos del XOR:
x = np.array([[-1, -1], [1, 1], [1, -1], [-1, 1]])
y = np.array([1, 1, -1, -1])
# Clasificador que entrenamos para resolver el problema:
svc = svm.SVM(C=1000, kernel="poly", sigma=1, deg=2, b=1)
svc.simple_smo(x, y, maxiter = 100, verb=True)
# Imprimimos los alphas y el bias (deberian ser alpha_i = 0.125, b = 0):
print "alpha =", svc.alpha
print "b =", svc.b
# Clasificamos los puntos x (las predicciones deberian ser iguales a las clases reales):
y_ = svc.evaluate_model(x)
print "Predicciones =", y_
Explanation: 2.2. Complete the rest of the methods of the SVM class
Complete the select_alphas, calculate_eta, update_alphas and update_b methods. Once they are finished, run the following cells to check your implementation.
The first test consists of training the XOR problem:
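For orientation, the update these methods implement for a chosen pair (i, j) in the simplified SMO of the CS229 notes looks roughly like this (a sketch with K the kernel matrix on the training data; it is not the svm.py implementation):
def smo_pair_update_sketch(alpha, y, K, b, i, j, C):
    # Prediction errors E_k = f(x_k) - y_k for the two selected points
    f = np.dot(alpha * y, K) + b
    E_i, E_j = f[i] - y[i], f[j] - y[j]
    eta = 2.0 * K[i, j] - K[i, i] - K[j, j]
    if eta >= 0:
        return alpha, b                           # the simplified notes skip this pair
    # Box [L, H] keeping 0 <= alpha <= C and sum_i alpha_i y_i = 0
    if y[i] != y[j]:
        L, H = max(0, alpha[j] - alpha[i]), min(C, C + alpha[j] - alpha[i])
    else:
        L, H = max(0, alpha[i] + alpha[j] - C), min(C, alpha[i] + alpha[j])
    a_i_old, a_j_old = alpha[i], alpha[j]
    alpha[j] = np.clip(a_j_old - y[j] * (E_i - E_j) / eta, L, H)
    alpha[i] = a_i_old + y[i] * y[j] * (a_j_old - alpha[j])
    # b is then recomputed from the b1/b2 expressions in the notes (omitted here)
    return alpha, b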
End of explanation
# Prueba con otros problemas y comparacion con sklearn:
from sklearn.svm import SVC
# Generacion de los datos:
n = 20
X = np.random.rand(n,2)
y = 2.0*(X[:,0] > X[:,1]) -1
# Uso de SVC:
clf = SVC(C=10.0, kernel='rbf', degree=2.0, coef0=1.0, gamma=0.5)
clf.fit(X, y)
print "Resultados con sklearn:"
print " -- Num. vectores de soporte =", clf.dual_coef_.shape[1]
print " -- Bias b =", clf.intercept_[0]
# Uso de tu algoritmo:
svc = svm.SVM(C=10, kernel="rbf", sigma=1.0, deg=2.0, b=1.0)
svc.simple_smo(X, y, maxiter = 500, tol=1.e-15, verb=True, print_every=10)
print "Resultados con tu algoritmo:"
print " -- Num. vectores de soporte =", svc.num_sv
print " -- Bias b =", svc.b
# Comparacion entre las alphas:
a1 = clf.dual_coef_
a2 = (svc.alpha * y)[svc.is_sv]
# Maxima diferencia entre tus alphas y las de sklearn:
maxdif = np.max(np.abs(np.sort(a1) - np.sort(a2)))
print "Maxima diferencia entre tus alphas y las de sklearn:", maxdif
Explanation: The next test generates a random problem and solves it with your method and with sklearn. Both solutions should be similar, although yours will be much slower. Try the different kernel types to check your implementation.
End of explanation
from p4_utils import *
import matplotlib.pyplot as plt
from sklearn.svm import SVC
%matplotlib inline
np.random.seed(19)
Explanation: 3. Visualization of simple models
Finally we will use the sklearn implementation to solve simple classification problems in two dimensions. The goal is to understand how the different kernel types (polynomial and RBF) behave on problems that are easy to visualize.
To implement the models we will use the <a href="http://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html">SVC</a> class of the sklearn.svm package.
First we import some additional modules, set inline mode for the matplotlib figures, and initialize the seed of the random number generator. The p4_utils module contains functions to generate 2D data and visualize the models.
End of explanation
# Creación del problema, datos de entrenamiento y test:
np.random.seed(300)
n = 50
model = 'linear'
ymargin = 0.5
x, y = createDataSet(n, model, ymargin)
xtest, ytest = createDataSet(n, model, ymargin)
# Construcción del clasificador:
clf = SVC(C=10, kernel='linear', degree=1.0, coef0=1.0, gamma=0.1)
clf.fit(x, y)
# Vectores de soporte:
print("Vectores de soporte:")
for i in clf.support_:
print(" [%f, %f] c = %d" % (x[i,0], x[i,1], y[i]))
# Coeficientes a_i y b:
print("Coeficientes a_i:")
print " ", clf.dual_coef_
print("Coeficiente b:")
print " ", clf.intercept_[0]
# Calculo del acierto en los conjuntos de entrenamiento y test:
score_train = clf.score(x, y)
print("Score train = %f" % (score_train))
score_test = clf.score(xtest, ytest)
print("Score test = %f" % (score_test))
# Gráficas:
plt.figure(figsize=(12,6))
plt.subplot(121)
plotModel(x[:,0],x[:,1],y,clf,"Training, score = %f" % (score_train))
for i in clf.support_:
if y[i] == -1:
plt.plot(x[i,0],x[i,1],'ro',ms=10)
else:
plt.plot(x[i,0],x[i,1],'bo',ms=10)
plt.subplot(122)
plotModel(xtest[:,0],xtest[:,1],ytest,clf,"Test, score = %f" % (score_test))
Explanation: The following cell performs the following actions:
Creates a problem with two data sets (training and test) of 50 points each, and two classes (+1 and -1). The boundary separating the classes is linear.
Trains an SVC classifier to separate the two classes, with a linear kernel.
Prints the support vectors, the alphas and the bias.
Computes the accuracy on the training and test sets.
And finally draws the model over the training and test data. The black line is the separating boundary, while the blue and red lines represent the margins for the blue and red classes respectively. The training plot also shows the support vectors.
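plotModel comes from p4_utils, which is not shown in this notebook; an equivalent boundary-and-margin plot can be drawn directly from the classifier's decision_function (a sketch under that assumption):
def plot_svc_boundary(clf, x, y, title=""):
    # f(x) = 0 is the separating boundary; f(x) = +/-1 are the margins
    xx, yy = np.meshgrid(np.linspace(x[:, 0].min() - 0.1, x[:, 0].max() + 0.1, 200),
                         np.linspace(x[:, 1].min() - 0.1, x[:, 1].max() + 0.1, 200))
    zz = clf.decision_function(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)
    plt.scatter(x[:, 0], x[:, 1], c=y, cmap=plt.cm.coolwarm)
    plt.contour(xx, yy, zz, levels=[-1, 0, 1], colors=['b', 'k', 'r'], linestyles=['--', '-', '--'])
    plt.title(title)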
End of explanation |
14,325 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<div style='background-image
Step1: 1. Initialization of setup
Step2: 2. The Mass Matrix
Now we initialize the mass and stiffness matrices. In general, the mass matrix at the elemental level is given
\begin{equation}
M_{ji}^e \ = \ w_j \ \rho (\xi) \ \frac{\mathrm{d}x}{\mathrm{d}\xi} \delta_{ij} \vert_ {\xi = \xi_j}
\end{equation}
Exercise 1
Implement the mass matrix using the integration weights at GLL locations $w$, the Jacobian $J$, and density $\rho$. Then, perform the global assembly of the mass matrix, compute its inverse, and display the inverse mass matrix to visually inspect what it looks like.
Step3: 3. The Stiffness matrix
On the other hand, the general form of the stiffness matrix at the elemental level is
\begin{equation}
K_{ji}^e \ = \ \sum_{k = 1}^{N+1} w_k \mu (\xi) \partial_\xi \ell_j (\xi) \partial_\xi \ell_i (\xi) \left(\frac{\mathrm{d}\xi}{\mathrm{d}x} \right)^2 \frac{\mathrm{d}x}{\mathrm{d}\xi} \vert_{\xi = \xi_k}
\end{equation}
Exercise 2
Implement the stiffness matrix using the integration weights at GLL locations $w$, the Jacobian $J$, and the shear modulus $\mu$. Then, perform the global assembly of the stiffness matrix and display the matrix to visually inspect what it looks like.
Step4: 4. Finite element solution
Finally we implement the spectral element solution using the computed mass $M$ and stiffness $K$ matrices together with a finite differences extrapolation scheme
\begin{equation}
\mathbf{u}(t + dt) = dt^2 (\mathbf{M}^T)^{-1}[\mathbf{f} - \mathbf{K}^T\mathbf{u}] + 2\mathbf{u} - \mathbf{u}(t-dt).
\end{equation} | Python Code:
# Import all necessary libraries, this is a configuration step for the exercise.
# Please run it before the simulation code!
import numpy as np
import matplotlib.pyplot as plt
from gll import gll
from lagrange1st import lagrange1st
from ricker import ricker
# Show the plots in the Notebook.
plt.switch_backend("nbagg")
Explanation: <div style='background-image: url("../../share/images/header.svg") ; padding: 0px ; background-size: cover ; border-radius: 5px ; height: 250px'>
<div style="float: right ; margin: 50px ; padding: 20px ; background: rgba(255 , 255 , 255 , 0.7) ; width: 50% ; height: 150px">
<div style="position: relative ; top: 50% ; transform: translatey(-50%)">
<div style="font-size: xx-large ; font-weight: 900 ; color: rgba(0 , 0 , 0 , 0.8) ; line-height: 100%">Computational Seismology</div>
<div style="font-size: large ; padding-top: 20px ; color: rgba(0 , 0 , 0 , 0.5)">Spectral Element Method - 1D Elastic Wave Equation</div>
</div>
</div>
</div>
Seismo-Live: http://seismo-live.org
Authors:
David Vargas (@dvargas)
Heiner Igel (@heinerigel)
Basic Equations
This notebook presents the numerical solution for the 1D elastic wave equation
\begin{equation}
\rho(x) \partial_t^2 u(x,t) = \partial_x (\mu(x) \partial_x u(x,t)) + f(x,t),
\end{equation}
using the spectral element method. This is done after a series of steps summarized as follows:
1) The wave equation is written in its weak form
2) Apply the stress-free boundary condition after integration by parts
3) Approximate the wave field as a linear combination of some basis
\begin{equation}
u(x,t) \ \approx \ \overline{u}(x,t) \ = \ \sum_{i=1}^{n} u_i(t) \ \varphi_i(x)
\end{equation}
4) Use the same basis functions in $u(x, t)$ as test functions in the weak form, the so-called Galerkin principle.
6) The continuous weak form is written as a system of linear equations by considering the approximated displacement field.
\begin{equation}
\mathbf{M}^T\partial_t^2 \mathbf{u} + \mathbf{K}^T\mathbf{u} = \mathbf{f}
\end{equation}
7) Time extrapolation with centered finite differences scheme
\begin{equation}
\mathbf{u}(t + dt) = dt^2 (\mathbf{M}^T)^{-1}[\mathbf{f} - \mathbf{K}^T\mathbf{u}] + 2\mathbf{u} - \mathbf{u}(t-dt).
\end{equation}
where $\mathbf{M}$ is known as the mass matrix, and $\mathbf{K}$ the stiffness matrix.
The above solution is exactly the same as the one presented for the classic finite-element method. Now we introduce appropriate basis functions and an integration scheme to efficiently solve the system of matrices.
Interpolation with Lagrange Polynomials
At the elemental level (see section 7.4), we introduce as interpolating functions the Lagrange polynomials and use $\xi$ as the space variable representing our elemental domain:
\begin{equation}
\varphi_i \ \rightarrow \ \ell_i^{(N)} (\xi) \ := \ \prod_{j \neq i}^{N+1} \frac{\xi - \xi_j}{\xi_i-\xi_j}, \qquad i,j = 1, 2, \dotsc , N + 1
\end{equation}
Numerical Integration
The integral of a continuous function $f(x)$ can be calculated after replacing $f(x)$ by a polynomial approximation that can be integrated analytically. As interpolating functions we use again the Lagrange polynomials and
obtain Gauss-Lobatto-Legendre quadrature. Here, the GLL points are used to perform the integral.
\begin{equation}
\int_{-1}^1 f(x) \ dx \approx \int_{-1}^1 P_N(x) \ dx = \sum_{i=1}^{N+1}
w_i f(x_i)
\end{equation}
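As a quick illustration of the rule above, the gll() helper used later in this notebook returns the N+1 GLL points and weights, so an integral over [-1, 1] reduces to a weighted sum (a sketch; GLL quadrature with N+1 points is exact for polynomials up to degree 2N-1):
# Sketch: integrate f(xi) = xi**2 over [-1, 1] with GLL quadrature (exact value 2/3)
N = 4
[xi, w] = gll(N)                          # N+1 points and integration weights
approx = np.sum(np.asarray(w) * np.asarray(xi)**2)
print(approx)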
End of explanation
# Initialization of setup
# ---------------------------------------------------------------
nt = 10000 # number of time steps
xmax = 10000. # Length of domain [m]
vs = 2500. # S velocity [m/s]
rho = 2000 # Density [kg/m^3]
mu = rho * vs**2 # Shear modulus mu
N = 3 # Order of Lagrange polynomials
ne = 250 # Number of elements
Tdom = .2 # Dominant period of Ricker source wavelet
iplot = 20 # Plotting each iplot snapshot
# variables for elemental matrices
Me = np.zeros(N+1, dtype = float)
Ke = np.zeros((N+1, N+1), dtype = float)
# ----------------------------------------------------------------
# Initialization of GLL points integration weights
[xi, w] = gll(N) # xi, N+1 coordinates [-1 1] of GLL points
# w Integration weights at GLL locations
# Space domain
le = xmax/ne # Length of elements
# Vector with GLL points
k = 0
xg = np.zeros((N*ne)+1)
xg[k] = 0
for i in range(1,ne+1):
for j in range(0,N):
k = k+1
xg[k] = (i-1)*le + .5*(xi[j+1]+1)*le
# ---------------------------------------------------------------
dxmin = min(np.diff(xg))
eps = 0.1 # Courant value
dt = eps*dxmin/vs # Global time step
# Mapping - Jacobian
J = le/2
Ji = 1/J # Inverse Jacobian
# 1st derivative of Lagrange polynomials
l1d = lagrange1st(N) # Array with GLL as columns for each N+1 polynomial
Explanation: 1. Initialization of setup
End of explanation
#################################################################
# IMPLEMENT THE MASS MATRIX HERE!
#################################################################
#################################################################
# PERFORM THE GLOBAL ASSEMBLY OF M HERE!
#################################################################
#################################################################
# COMPUTE THE INVERSE MASS MATRIX HERE!
#################################################################
#################################################################
# DISPLAY THE INVERSE MASS MATRIX HERE!
#################################################################
Explanation: 2. The Mass Matrix
Now we initialize the mass and stiffness matrices. In general, the mass matrix at the elemental level is given
\begin{equation}
M_{ji}^e \ = \ w_j \ \rho (\xi) \ \frac{\mathrm{d}x}{\mathrm{d}\xi} \delta_{ij} \vert_ {\xi = \xi_j}
\end{equation}
Exercise 1
Implement the mass matrix using the integration weights at GLL locations $w$, the Jacobian $J$, and density $\rho$. Then, perform the global assembly of the mass matrix, compute its inverse, and display the inverse mass matrix to visually inspect what it looks like.
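One possible shape of the solution is sketched below (not the reference solution): it assumes constant density rho, the weights w and Jacobian J from the setup cell, and defines ng = N*ne + 1 global points.
# Sketch only -- the elemental mass matrix is diagonal: Me_i = rho * w_i * J
Me = rho * np.asarray(w) * J
ng = N * ne + 1                            # number of global GLL points
M = np.zeros(ng)
for e in range(ne):                        # global assembly: shared boundary nodes accumulate
    for i in range(N + 1):
        M[e * N + i] += Me[i]
Minv = np.diag(1.0 / M)                    # M is diagonal, so the inverse is elementwise
plt.imshow(Minv)
plt.title('Inverse mass matrix (sketch)')
plt.show()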
End of explanation
#################################################################
# IMPLEMENT THE STIFFNESS MATRIX HERE!
#################################################################
#################################################################
# PERFORM THE GLOBAL ASSEMBLY OF K HERE!
#################################################################
#################################################################
# DISPLAY THE STIFFNESS MATRIX HERE!
#################################################################
Explanation: 3. The Stiffness matrix
On the other hand, the general form of the stiffness matrix at the elemental level is
\begin{equation}
K_{ji}^e \ = \ \sum_{k = 1}^{N+1} w_k \mu (\xi) \partial_\xi \ell_j (\xi) \partial_\xi \ell_i (\xi) \left(\frac{\mathrm{d}\xi}{\mathrm{d}x} \right)^2 \frac{\mathrm{d}x}{\mathrm{d}\xi} \vert_{\xi = \xi_k}
\end{equation}
Exercise 2
Implement the stiffness matrix using the integration weights at GLL locations $w$, the Jacobian $J$, and the shear modulus $\mu$. Then, perform the global assembly of the stiffness matrix and display the matrix to visually inspect what it looks like.
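A matching sketch for the stiffness matrix, assuming constant mu and that l1d[i, k] holds the derivative of the i-th Lagrange polynomial at the k-th GLL point (the indexing convention of lagrange1st is an assumption):
# Sketch only -- elemental stiffness matrix and global assembly
Ke = np.zeros((N + 1, N + 1))
for i in range(N + 1):
    for j in range(N + 1):
        for k in range(N + 1):
            Ke[i, j] += mu * w[k] * l1d[i, k] * l1d[j, k] * Ji**2 * J
ng = N * ne + 1
K = np.zeros((ng, ng))
for e in range(ne):                        # overlapping (N+1) x (N+1) blocks at shared nodes
    K[e * N:(e + 1) * N + 1, e * N:(e + 1) * N + 1] += Ke
plt.imshow(K)
plt.title('Stiffness matrix (sketch)')
plt.show()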
End of explanation
# SE Solution, Time extrapolation
# ---------------------------------------------------------------
# initialize source time function and force vector f
src = ricker(dt,Tdom)
isrc = int(np.floor(ng/2)) # Source location
# Initialization of solution vectors
u = np.zeros(ng)
uold = u
unew = u
f = u
# Initialize animated plot
# ---------------------------------------------------------------
plt.figure(figsize=(10,6))
lines = plt.plot(xg, u, lw=1.5)
plt.title('SEM 1D Animation', size=16)
plt.xlabel(' x (m)')
plt.ylabel(' Amplitude ')
plt.ion() # set interective mode
plt.show()
# ---------------------------------------------------------------
# Time extrapolation
# ---------------------------------------------------------------
for it in range(nt):
# Source initialization
f= np.zeros(ng)
if it < len(src):
f[isrc-1] = src[it-1]
# Time extrapolation
unew = dt**2 * Minv @ (f - K @ u) + 2 * u - uold
uold, u = u, unew
# --------------------------------------
# Animation plot. Display solution
if not it % iplot:
for l in lines:
l.remove()
del l
# --------------------------------------
# Display lines
lines = plt.plot(xg, u, color="black", lw = 1.5)
plt.gcf().canvas.draw()
Explanation: 4. Finite element solution
Finally we implement the spectral element solution using the computed mass $M$ and stiffness $K$ matrices together with a finite differences extrapolation scheme
\begin{equation}
\mathbf{u}(t + dt) = dt^2 (\mathbf{M}^T)^{-1}[\mathbf{f} - \mathbf{K}^T\mathbf{u}] + 2\mathbf{u} - \mathbf{u}(t-dt).
\end{equation}
End of explanation |
14,326 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Read input from JSON records.
Step1: Create a pandas DataFrame
Step2: Create new features
Step3: Filter for PostTypeId == 1 or PostTypeId == 2
Step4: Are any relationships apparent in the raw data and the obvious features?
Step5: Problem
Step7: Feature extraction
Use sklearn's Transformer class to extract features from the original DataFrame.
Step8: Let's use the word frequency vectors in combination with the obvious features we created previously to try to predict PostTypeId.
Step9: Since we're overfitting, we can try reducing the total number of features. One approach to this is to perform the $\chi^2$ statistical test of independence on each feature with respect to the label (PostTypeId) and remove the features that are most independent of the label. Here, we put SelectKBest into the model pipeline and keep only the 10 most dependent features.
Step10: So, the generalization the model improved, but what is the right number of features to keep? For this, we can use model cross-validation. The class GridSearchCV allows us to vary a hyperparameter of the model and compute the model score for each candidate parameter (or set of parameters).
Step11: We can refine our range of hyperparameters to hone in on the best number.
Step12: Finally, we can inspect the confusion matrix to determine the types of errors encounter
True Positive | False Positive
------|------
False Negative | True Negative | Python Code:
lines = []
for part in ("00000", "00001"):
with open("../output/2017-01-03_13.57.34/part-%s" % part) as f:
lines += f.readlines()
print(lines[0])
Explanation: Read input from JSON records.
End of explanation
import pandas as pd
df = pd.read_json('[%s]' % ','.join(lines))
print(df.info())
df.head()
Explanation: Create a pandas DataFrame
End of explanation
df["has_qmark"] = df.Body.apply(lambda s: "?" in s)
df["num_qmarks"] = df.Body.apply(lambda s: s.count("?"))
df["body_length"] = df.Body.apply(lambda s: len(s))
df
Explanation: Create new features
End of explanation
df = df.loc[df.PostTypeId.apply(lambda x: x in [1, 2]), :]
df = df.reset_index(drop=True)
df.head()
n_questions = np.sum(df.PostTypeId == 1)
n_answers = np.sum(df.PostTypeId == 2)
print("No. questions {0} / No. answers {1}".format(n_questions, n_answers))
Explanation: Filter for PostTypeId == 1 or PostTypeId == 2
End of explanation
df.plot.scatter(x="num_qmarks",y="PostTypeId")
df.plot.scatter(x="body_length",y="PostTypeId")
Explanation: Are any relationships apparent in the raw data and the obvious features?
End of explanation
from sklearn.linear_model import RidgeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split, StratifiedShuffleSplit
X = df.loc[:, ['num_qmarks', 'body_length']]
y = df.loc[:, 'PostTypeId']
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.3, random_state=42)
classifiers = [("Ridge", RidgeClassifier()), ("RandomForest", RandomForestClassifier())]
for name, classifier in classifiers:
classifier.fit(X_train, y_train)
print(name + " " + "-"*(60 - len(name)))
print("R2_train: {0}, R2_test: {1}".format(classifier.score(X_train, y_train), classifier.score(X_test, y_test)))
print()
Explanation: Problem:
Can PostTypeId be predicted from the post body? Here, we try the linear RidgeClassifier and the nonlinear RandomForestClassifier and compare the accuracy in the training set to the accuracy in the test set.
From the results, we see that the RandomForestClassifier is more accurate than the linear model, but is actually overfitting the data. The overfitting is likely to improve with more training examples, so let's choose the RF classifier.
End of explanation
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.feature_extraction.text import CountVectorizer
class FSTransformer(BaseEstimator, TransformerMixin):
    """Returns the different feature names."""
def __init__(self, features):
self.features = features
pass
def fit(self, X, y):
return self
def transform(self, df):
return df[self.features].as_matrix()
class CountVecTransformer(BaseEstimator, TransformerMixin):
def __init__(self):
self.vectorizer = CountVectorizer(binary=False)
pass
def fit(self, df, y=None):
self.vectorizer.fit(df.Body)
return self
def transform(self, df):
return self.vectorizer.transform(df.Body).todense()
df.head()
fst = FSTransformer(["has_qmark"])
fst.transform(df)
CountVecTransformer().fit_transform(df)
Explanation: Feature extraction
Use sklearn's Transformer class to extract features from the original DataFrame.
End of explanation
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.metrics import f1_score
model_pipe = Pipeline([
("features",
FeatureUnion([
("derived", FSTransformer(["has_qmark", "num_qmarks", "body_length"])),
("count_vec", CountVecTransformer())
])
),
("clf", RandomForestClassifier())
])
sss = StratifiedShuffleSplit(n_splits=5, test_size=0.3, random_state=42)
X = df
y = df.PostTypeId
for train_index, test_index in sss.split(X.as_matrix(), y.as_matrix()):
X_train, X_test = X.iloc[train_index, :], X.iloc[test_index, :]
y_train, y_test = y.iloc[train_index], y.iloc[test_index]
model_pipe.fit(X_train, y_train)
r2_train = model_pipe.score(X_train, y_train)
r2_test = model_pipe.score(X_test, y_test)
y_pred = model_pipe.predict(X_test)
f1 = f1_score(y_test, y_pred)
print("R2_train: {0} R2_test: {1} f1: {2}".format(r2_train, r2_test, f1))
Explanation: Let's use the word frequency vectors in combination with the obvious features we created previously to try to predict PostTypeId.
End of explanation
from sklearn.feature_selection import SelectKBest, chi2
model_pipe = Pipeline([
("features",
FeatureUnion([
("derived", FSTransformer(["has_qmark", "num_qmarks", "body_length"])),
("count_vec", CountVecTransformer())
])
),
("best_features", SelectKBest(chi2, k=10)),
("clf", RandomForestClassifier())
])
sss = StratifiedShuffleSplit(n_splits=5, test_size=0.3, random_state=42)
X = df
y = df.PostTypeId
for train_index, test_index in sss.split(X.as_matrix(), y.as_matrix()):
X_train, X_test = X.iloc[train_index, :], X.iloc[test_index, :]
y_train, y_test = y.iloc[train_index], y.iloc[test_index]
model_pipe.fit(X_train, y_train)
r2_train = model_pipe.score(X_train, y_train)
r2_test = model_pipe.score(X_test, y_test)
y_pred = model_pipe.predict(X_test)
f1 = f1_score(y_test, y_pred)
print("R2_train: {0} R2_test: {1} f1: {2}".format(r2_train, r2_test, f1))
Explanation: Since we're overfitting, we can try reducing the total number of features. One approach to this is to perform the $\chi^2$ statistical test of independence on each feature with respect to the label (PostTypeId) and remove the features that are most independent of the label. Here, we put SelectKBest into the model pipeline and keep only the 10 most dependent features.
End of explanation
from sklearn.model_selection import GridSearchCV
modelCV = GridSearchCV(model_pipe, {"best_features__k":[3 ** i for i in range(1,7)]})
modelCV.fit(X,y)
cv_accuracy = pd.DataFrame([{**score.parameters, **{"mean_validation_score": score.mean_validation_score}}
for score in modelCV.grid_scores_])
cv_accuracy.plot(x="best_features__k", y="mean_validation_score")
cv_accuracy
Explanation: So, the generalization of the model improved, but what is the right number of features to keep? For this, we can use model cross-validation. The class GridSearchCV allows us to vary a hyperparameter of the model and compute the model score for each candidate parameter (or set of parameters).
End of explanation
modelCV = GridSearchCV(model_pipe, {"best_features__k":list(range(80,120,10))})
modelCV.fit(X,y)
cv_accuracy = pd.DataFrame([{**score.parameters, **{"mean_validation_score": score.mean_validation_score}}
for score in modelCV.grid_scores_])
cv_accuracy.plot(x="best_features__k", y="mean_validation_score")
cv_accuracy
Explanation: We can refine our range of hyperparameters to hone in on the best number.
End of explanation
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, modelCV.predict(X_test))
cm = cm / cm.sum(axis=0)
print(cm)
cm = confusion_matrix(y_test, modelCV.predict(X_test))
cm
plt.imshow(cm, interpolation="nearest", cmap="Blues")
plt.colorbar()
Explanation: Finally, we can inspect the confusion matrix to determine the types of errors encountered
True Positive | False Positive
------|------
False Negative | True Negative
End of explanation |
14,327 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
In group efforts, there is sometimes the impression that there are those who work, and those who talk. A naive question to ask is whether or not the people that tend to talk a lot actually get any work done. This is an obviously and purposefully obtuse question with an interesting answer.
We can use BigBang's newest feature, git data collection, to compare all of the contributors to a project, in this case Scipy, based on their email and git commit activity. The hypothesis in this case was that people who commit a lot will also tend to email a lot, and vice versa, since their involvement in a project would usually require them to do both. This hypothesis was proven to be correct. However, the data reveals many more interesting phenomenon.
Step1: Entity Resolution
Git and Email data comes from two different datatables. To observe a single person's git and email data, we need a way to identify that person across the two different datatables.
To solve this problem, I wrote an entity resolution client that will parse a Pandas dataframe and add a new column to it called "Person-ID" which gives each row an ID that represents one unique contributor. A person may go by many names ("Robert Smith, Rob B. Smith, Bob S., etc.) and use many different emails. However, this client will read through these data tables in one pass and consolidate these identities based on a few strategies.
Step2: After we've run entity resolution on our dataframes, we split the dataframe into slices based on time. So for the entire life-span of the project, we will have NUM_SLICES different segments to analyze. We will be able to look at the git and email data up until that certain date, which can let us analyze these changes over time.
Step3: Merging Data Tables
Now we want to merge these two tables based on their Person-ID values. Basically, we first count how many emails / commits a certain contributor had in a certain slice. We then join all the rows with the same Person-ID to each other, so that we have the number of emails and the number of commits of each person in one row per person in one consolidated dataframe. We then delete all the rows where both of these values aren't defined. These represent people for whom we have git data but not mail data, or vice versa.
Step4: Coloring
We now assign a float value [0 --> 1] to each person. This isn't necessary, but can let us graph these changes in a scatter plot and give each contributor a unique color to differentiate them. This will help us track an individual as their dot travels over time.
Step5: Here we graph our data. Each dot represents a unique contributor's number of emails and commits. As you'll notice, the graph is on a log-log scale.
Step6: Animations
Below this point, you'll find the code for generating animations. This can take a long time (~30 mins) for a large number of slices. However, the pre-generated videos are below.
The first video just shows all the contributors over time without unique colors. The second video has a color for each contributor, but also contains a Matplotlib bug where the minimum x and y values for the axes is not followed.
There is a lot to observe. As to our hypothesis, it's clear that people who email more commit more. In our static graph, we could see many contributors on the x-axis -- people who only email -- but this dynamic graph allows us to see the truth. While it may seem that they're people who only email, the video shows that even these contributors eventually start committing. Most committers don't really get past 10 commits without starting to email the rest of the project, for pretty clear reasons. However, the emailers can "get away with" exclusively emailing for longer, but eventually they too start to commit. In general, not only is there a positive correlation, there's a general trend of everyone edging close to having a stable and relatively equal ratio of commits to emails. | Python Code:
# Load the raw email and git data
url = "http://mail.python.org/pipermail/scipy-dev/"
arx = Archive(url,archive_dir="../archives")
mailInfo = arx.data
repo = repo_loader.get_repo("bigbang")
gitInfo = repo.commit_data;
Explanation: Introduction
In group efforts, there is sometimes the impression that there are those who work, and those who talk. A naive question to ask is whether or not the people that tend to talk a lot actually get any work done. This is an obviously and purposefully obtuse question with an interesting answer.
We can use BigBang's newest feature, git data collection, to compare all of the contributors to a project, in this case Scipy, based on their email and git commit activity. The hypothesis in this case was that people who commit a lot will also tend to email a lot, and vice versa, since their involvement in a project would usually require them to do both. This hypothesis was proven to be correct. However, the data reveals many more interesting phenomenon.
End of explanation
entityResolve = bigbang.entity_resolution.entityResolve
mailAct = mailInfo.apply(entityResolve, axis=1, args =("From",None))
gitAct = gitInfo.apply(entityResolve, axis=1, args =("Committer Email","Committer Name"))
Explanation: Entity Resolution
Git and Email data comes from two different datatables. To observe a single person's git and email data, we need a way to identify that person across the two different datatables.
To solve this problem, I wrote an entity resolution client that will parse a Pandas dataframe and add a new column to it called "Person-ID" which gives each row an ID that represents one unique contributor. A person may go by many names ("Robert Smith, Rob B. Smith, Bob S., etc.) and use many different emails. However, this client will read through these data tables in one pass and consolidate these identities based on a few strategies.
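BigBang's actual resolver lives in bigbang.entity_resolution and is not reproduced here; a deliberately naive sketch of the underlying idea (every alias seen so far maps to one integer ID) could look like this — hypothetical code, not the library's algorithm:
# Hypothetical sketch of alias consolidation (not BigBang's entityResolve)
known_ids = {}
id_counter = [0]

def naive_resolve(row, email_col, name_col=None):
    keys = [str(row[email_col]).strip().lower()]
    if name_col is not None:
        keys.append(str(row[name_col]).strip().lower())
    matched = [known_ids[k] for k in keys if k in known_ids]
    person_id = matched[0] if matched else id_counter[0]
    if not matched:
        id_counter[0] += 1
    for k in keys:                         # register every alias under the same ID
        known_ids[k] = person_id
    row["Person-ID"] = person_id
    return row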
End of explanation
NUM_SLICES = 1500 # Number of animation frames. More means more loading time
mailAct.sort("Date")
gitAct.sort("Time")
def getSlices(df, numSlices):
sliceSize = len(df)/numSlices
slices = []
for i in range(1, numSlices + 1):
start = 0
next = (i)*sliceSize;
next = min(next, len(df)-1) # make sure we don't go out of bounds
slice = df.iloc[start:next]
slices.append(slice)
return slices
mailSlices = getSlices(mailAct, NUM_SLICES)
gitSlices = getSlices(gitAct, NUM_SLICES)
Explanation: After we've run entity resolution on our dataframes, we split the dataframe into slices based on time. So for the entire life-span of the project, we will have NUM_SLICES different segments to analyze. We will be able to look at the git and email data up until that certain date, which can let us analyze these changes over time.
End of explanation
def processSlices(slices) :
for i in range(len(slices)):
slice = slices[i]
slice = slice.groupby("Person-ID").size()
slice.sort()
slices[i] = slice
def concatSlices(slicesA, slicesB) :
# assumes they have the same number of slices
# First is emails, second is commits
ansSlices = []
for i in range(len(slicesA)):
sliceA = slicesA[i]
sliceB = slicesB[i]
ans = pd.concat({"Emails" : sliceA, "Commits": sliceB}, axis = 1)
ans = ans[pd.notnull(ans["Emails"])]
ans = ans[pd.notnull(ans["Commits"])]
ansSlices.append(ans);
return ansSlices
processSlices(mailSlices)
processSlices(gitSlices)
finalSlices = concatSlices(mailSlices, gitSlices)
Explanation: Merging Data Tables
Now we want to merge these two tables based on their Person-ID values. Basically, we first count how many emails / commits a certain contributor had in a certain slice. We then join all the rows with the same Person-ID to each other, so that we have the number of emails and the number of commits of each person in one row per person in one consolidated dataframe. We then delete all the rows where both of these values aren't defined. These represent people for whom we have git data but not mail data, or vice versa.
End of explanation
def idToFloat(id):
return id*1.0/400.0;
for i in range(len(finalSlices)):
slice = finalSlices[i]
toSet = []
for i in slice.index.values:
i = idToFloat(i)
toSet.append(i)
slice["color"] = toSet
Explanation: Coloring
We now assign a float value [0 --> 1] to each person. This isn't necessary, but can let us graph these changes in a scatter plot and give each contributor a unique color to differentiate them. This will help us track an individual as their dot travels over time.
End of explanation
data = finalSlices[len(finalSlices)-1] # Will break if there are 0 slices
fig = plt.figure(figsize=(8, 8))
d = data
x = d["Emails"]
y = d["Commits"]
c = d["color"]
ax = plt.axes(xscale='log', yscale = 'log')
plt.scatter(x, y, c=c, s=75)
plt.ylim(0, 10000)
plt.xlim(0, 10000)
ax.set_xlabel("Emails")
ax.set_ylabel("Commits")
plt.plot([0, 1000],[0, 1000], linewidth=5)
plt.show()
Explanation: Here we graph our data. Each dot represents a unique contributor's number of emails and commits. As you'll notice, the graph is on a log-log scale.
End of explanation
from IPython.display import YouTubeVideo
display(YouTubeVideo('GCcYJBq1Bcc', width=500, height=500))
display(YouTubeVideo('uP-z4jJqxmI', width=500, height=500))
fig = plt.figure(figsize=(8, 8))
a = finalSlices[0]
print(type(plt))
ax = plt.axes(xscale='log', yscale = 'log')
graph, = ax.plot(x ,y, 'o', c='red', alpha=1, markeredgecolor='none')
ax.set_xlabel("Emails")
ax.set_ylabel("Commits")
plt.ylim(0, 10000)
plt.xlim(0, 10000)
def init():
graph.set_data([],[]);
return graph,
def animate(i):
a = finalSlices[i]
x = a["Emails"]
y = a["Commits"]
graph.set_data(x, y)
return graph,
anim = animation.FuncAnimation(fig, animate, init_func=init,
frames=NUM_SLICES, interval=1, blit=True)
anim.save('t1.mp4', fps=15)
def main():
data = finalSlices
first = finalSlices[0]
fig = plt.figure(figsize=(8, 8))
d = data
x = d[0]["Emails"]
y = d[0]["Commits"]
c = d[0]["color"]
ax = plt.axes(xscale='log', yscale='log')
scat = plt.scatter(x, y, c=c, s=100)
plt.ylim(0, 10000)
plt.xlim(0, 10000)
plt.xscale('log')
plt.yscale('log')
ani = animation.FuncAnimation(fig, update_plot, frames=NUM_SLICES,
fargs=(data, scat), blit=True)
ani.save('test.mp4', fps=10)
#plt.show()
def update_plot(i, d, scat):
x = d[i]["Emails"]
y = d[i]["Commits"]
c = d[i]["color"]
plt.cla()
ax = plt.axes()
ax.set_xscale('log')
ax.set_yscale('log')
scat = plt.scatter(x, y, c=c, s=100)
plt.ylim(0, 10000)
plt.xlim(0, 10000)
plt.xlabel("Emails")
plt.ylabel("Commits")
return scat,
main()
Explanation: Animations
Below this point, you'll find the code for generating animations. This can take a long time (~30 mins) for a large number of slices. However, the pre-generated videos are below.
The first video just shows all the contributors over time without unique colors. The second video has a color for each contributor, but also contains a Matplotlib bug where the minimum x and y values for the axes are not followed.
There is a lot to observe. As to our hypothesis, it's clear that people who email more commit more. In our static graph, we could see many contributors on the x-axis -- people who only email -- but this dynamic graph allows us to see the truth. While it may seem that they're people who only email, the video shows that even these contributors eventually start committing. Most committers don't really get past 10 commits without starting to email the rest of the project, for pretty clear reasons. However, the emailers can "get away with" exclusively emailing for longer, but eventually they too start to commit. In general, not only is there a positive correlation, there's a general trend of everyone edging close to having a stable and relatively equal ratio of commits to emails.
End of explanation |
14,328 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Computing source space SNR
This example shows how to compute and plot source space SNR as in
Step1: EEG
Next we do the same for EEG and plot the result on the cortex | Python Code:
# Author: Padma Sundaram <[email protected]>
# Kaisu Lankinen <[email protected]>
#
# License: BSD-3-Clause
import mne
from mne.datasets import sample
from mne.minimum_norm import make_inverse_operator, apply_inverse
import numpy as np
import matplotlib.pyplot as plt
print(__doc__)
data_path = sample.data_path()
subjects_dir = data_path + '/subjects'
# Read data
fname_evoked = data_path + '/MEG/sample/sample_audvis-ave.fif'
evoked = mne.read_evokeds(fname_evoked, condition='Left Auditory',
baseline=(None, 0))
fname_fwd = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif'
fname_cov = data_path + '/MEG/sample/sample_audvis-cov.fif'
fwd = mne.read_forward_solution(fname_fwd)
cov = mne.read_cov(fname_cov)
# Read inverse operator:
inv_op = make_inverse_operator(evoked.info, fwd, cov, fixed=True, verbose=True)
# Calculate MNE:
snr = 3.0
lambda2 = 1.0 / snr ** 2
stc = apply_inverse(evoked, inv_op, lambda2, 'MNE', verbose=True)
# Calculate SNR in source space:
snr_stc = stc.estimate_snr(evoked.info, fwd, cov)
# Plot an average SNR across source points over time:
ave = np.mean(snr_stc.data, axis=0)
fig, ax = plt.subplots()
ax.plot(evoked.times, ave)
ax.set(xlabel='Time (sec)', ylabel='SNR MEG-EEG')
fig.tight_layout()
# Find time point of maximum SNR
maxidx = np.argmax(ave)
# Plot SNR on source space at the time point of maximum SNR:
kwargs = dict(initial_time=evoked.times[maxidx], hemi='split',
views=['lat', 'med'], subjects_dir=subjects_dir, size=(600, 600),
clim=dict(kind='value', lims=(-100, -70, -40)),
transparent=True, colormap='viridis')
brain = snr_stc.plot(**kwargs)
Explanation: Computing source space SNR
This example shows how to compute and plot source space SNR as in
:footcite:GoldenholzEtAl2009.
End of explanation
evoked_eeg = evoked.copy().pick_types(eeg=True, meg=False)
inv_op_eeg = make_inverse_operator(evoked_eeg.info, fwd, cov, fixed=True,
verbose=True)
stc_eeg = apply_inverse(evoked_eeg, inv_op_eeg, lambda2, 'MNE', verbose=True)
snr_stc_eeg = stc_eeg.estimate_snr(evoked_eeg.info, fwd, cov)
brain = snr_stc_eeg.plot(**kwargs)
Explanation: EEG
Next we do the same for EEG and plot the result on the cortex:
End of explanation |
14,329 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
XPCS fitting with lmfit
The experimental X-ray Photon Correlation Spectroscopy (XPCS) data
are fitted with the Intermediate Scattering Factor (ISF) using
lmfit Model (http://lmfit.github.io/lmfit-py/model.html)
Step1: Easily switch between interactive and static matplotlib plots
Step2: This data was provided by Dr. Andrei Fluerasu
L. Li, P. Kwasniewski, D. Oris, L Wiegart, L. Cristofolini, C. Carona and A. Fluerasu ,
"Photon statistics and speckle visibility spectroscopy with partially coherent x-rays"
J. Synchrotron Rad., vol 21, p 1288-1295, 2014.
Step3: Create the Rings Mask
Use the skbeam.core.roi module to create Ring ROIs (ROI Mask) (https://github.com/scikit-beam/scikit-beam/blob/master/skbeam/core/roi.py)
Step4: convert the edge values of the rings to q (reciprocal space)
Step5: Create a labeled array using roi.rings
Step6: Find the experimental auto correlation functions
Use the skbeam.core.correlation module (https://github.com/scikit-beam/scikit-beam/blob/master/skbeam/core/correlation.py)
Step7: Do the fitting
One time correlation data is fitted using the model in skbeam.core.correlation module
(auto_corr_scat_factor)
(https://github.com/scikit-beam/scikit-beam/blob/master/skbeam/core/correlation.py)
Step8: Plot the relaxation rates vs (q_ring_center)**2
Step9: Fitted the Diffusion Coefficinet D0 | Python Code:
# analysis tools from scikit-beam (https://github.com/scikit-beam/scikit-beam/tree/master/skbeam/core)
import skbeam.core.roi as roi
import skbeam.core.correlation as corr
import skbeam.core.utils as utils
from lmfit import Model
# plotting tools from xray_vision (https://github.com/Nikea/xray-vision/blob/master/xray_vision/mpl_plotting/roi.py)
import xray_vision.mpl_plotting as mpl_plot
import numpy as np
import os, sys
import zipfile
import matplotlib as mpl
import matplotlib.pyplot as plt
from matplotlib.ticker import MaxNLocator
from matplotlib.colors import LogNorm
Explanation: XPCS fitting with lmfit
The experimental X-ray Photon Correlation Spectroscopy (XPCS) data
are fitted with the Intermediate Scattering Factor (ISF) using
lmfit Model (http://lmfit.github.io/lmfit-py/model.html)
End of explanation
interactive_mode = False
if interactive_mode:
%matplotlib notebook
else:
%matplotlib inline
backend = mpl.get_backend()
Explanation: Easily switch between interactive and static matplotlib plots
End of explanation
%run download.py
data_dir = "Duke_data/"
duke_rdata = np.load(data_dir+"duke_img_1_5000.npy")
duke_dark = np.load(data_dir+"duke_dark.npy")
duke_data = []
for i in range(duke_rdata.shape[0]):
duke_data.append(duke_rdata[i] - duke_dark)
duke_ndata=np.asarray(duke_data)
# load the mask(s) and mask the data
mask1 = np.load(data_dir+"new_mask4.npy")
mask2 = np.load(data_dir+"Luxi_duke_mask.npy")
N_mask = ~(mask1 + mask2)
mask_data = N_mask*duke_ndata
# get the average image
avg_img = np.average(duke_ndata, axis=0)
# if matplotlib version 1.5 or later
if float('.'.join(mpl.__version__.split('.')[:2])) >= 1.5:
cmap = 'viridis'
else:
cmap = 'Dark2'
# plot the average image data after masking
plt.figure()
plt.imshow(N_mask*avg_img, vmax=1e0, cmap= cmap )
plt.title("Averaged masked data for Duke Silica Gel ")
plt.colorbar()
plt.show()
Explanation: This data was provided by Dr. Andrei Fluerasu
L. Li, P. Kwasniewski, D. Oris, L Wiegart, L. Cristofolini, C. Carona and A. Fluerasu ,
"Photon statistics and speckle visibility spectroscopy with partially coherent x-rays"
J. Synchrotron Rad., vol 21, p 1288-1295, 2014.
End of explanation
inner_radius = 24 # radius of the first ring
width = 1 # width of each ring
spacing = 0 # no spacing between rings
num_rings = 5 # number of rings
center = (133, 143) # center of the speckle pattern
# find the edges of the required rings
edges = roi.ring_edges(inner_radius, width, spacing, num_rings)
edges
Explanation: Create the Rings Mask
Use the skbeam.core.roi module to create Ring ROIs (ROI Mask) (https://github.com/scikit-beam/scikit-beam/blob/master/skbeam/core/roi.py)
End of explanation
dpix = 0.055 # The physical size of the pixels
lambda_ = 1.5498 # wavelength of the X-rays
Ldet = 2200. # detector to sample distance
two_theta = utils.radius_to_twotheta(Ldet, edges*dpix)
q_val = utils.twotheta_to_q(two_theta, lambda_)
q_val
q_ring = np.mean(q_val, axis=1)
q_ring
Explanation: convert the edge values of the rings to q (reciprocal space)
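For reference, the two utility calls above follow the usual small-angle conversion from detector radius to momentum transfer. The sketch below reproduces that conversion with plain numpy as an independent check; the exact formulas used by radius_to_twotheta and twotheta_to_q are an assumption here, not taken from the scikit-beam source.
r = edges * dpix                               # ring edge radii on the detector (mm)
two_theta_check = np.arctan(r / Ldet)          # scattering angle 2-theta (rad)
q_check = 4 * np.pi * np.sin(two_theta_check / 2.0) / lambda_
print(np.mean(q_check, axis=1))                # should be close to q_ring if the assumption holds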
End of explanation
rings = roi.rings(edges, center, avg_img.shape)
mask_data2 = N_mask*duke_data[0:4999]
ring_mask = rings*N_mask
# plot the figure
fig, axes = plt.subplots()
axes.set_title("Ring Mask")
im = mpl_plot.show_label_array(axes, ring_mask, cmap="Dark2")
plt.show()
Explanation: Create a labeled array using roi.rings
End of explanation
num_levels = 7
num_bufs = 8
g2, lag_steps = corr.multi_tau_auto_corr(num_levels, num_bufs, ring_mask,
mask_data2)
exposuretime=0.001;
deadtime=60e-6;
timeperframe = exposuretime+deadtime
lags = lag_steps*timeperframe
roi_names = ['gray', 'orange', 'brown', 'red', 'green']
fig, axes = plt.subplots(num_rings, sharex=True, figsize=(5, 14))
axes[num_rings-1].set_xlabel("lags")
for i, roi_color in zip(range(num_rings), roi_names):
axes[i].set_ylabel("g2")
axes[i].set_title(" Q ring value " + str(q_ring[i]))
axes[i].semilogx(lags, g2[:, i], 'o', markerfacecolor=roi_color, markersize=6)
axes[i].set_ylim(bottom=1, top=np.max(g2[1:, i]))
plt.show()
Explanation: Find the experimental auto correlation functions
Use the skbeam.core.correlation module (https://github.com/scikit-beam/scikit-beam/blob/master/skbeam/core/correlation.py)
End of explanation
mod = Model(corr.auto_corr_scat_factor)
rate = [] # relaxation rate
sx = int( round (np.sqrt(num_rings)) )
if num_rings%sx==0:
sy = int(num_rings/sx)
else:
sy = int(num_rings/sx+1)
fig = plt.figure(figsize=(14, 10))
plt.title('Duke Silica Gel', fontsize=20, y =1.02)
plt.axes(frameon=False)
plt.xticks([])
plt.yticks([])
for i in range(num_rings):
ax = fig.add_subplot(sx, sy, i+1 )
y=g2[1:, i]
result1 = mod.fit(y, lags=lags[1:], beta=.1,
relaxation_rate =.5, baseline=1.0)
rate.append(result1.best_values['relaxation_rate'])
ax.semilogx(lags[1:], y, 'ro')
ax.semilogx(lags[1:], result1.best_fit, '-b')
ax.set_title(" Q= " + '%.5f '%(q_ring[i]) + r'$\AA^{-1}$')
ax.set_ylim([min(y)*.95, max(y[1:]) *1.05])
txts = r'$\gamma$' + r'$ = %.3f$'%(rate[i]) + r'$ s^{-1}$'
ax.text(x =0.015, y=.55, s=txts, fontsize=14, transform=ax.transAxes)
fig.tight_layout()
plt.show()
rate
result1.best_values
Explanation: Do the fitting
One time correlation data is fitted using the model in skbeam.core.correlation module
(auto_corr_scat_factor)
(https://github.com/scikit-beam/scikit-beam/blob/master/skbeam/core/correlation.py)
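For orientation, the fit treats g2 as a single-exponential intermediate scattering factor. The stand-alone sketch below spells out that functional form; it is an assumption about what auto_corr_scat_factor computes, written out for clarity rather than copied from the skbeam implementation.
def g2_single_exponential(lags, beta, relaxation_rate, baseline=1.0):
    # assumed form: g2(tau) = baseline + beta * exp(-2 * Gamma * tau)
    return beta * np.exp(-2 * relaxation_rate * lags) + baseline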
End of explanation
fig, ax = plt.subplots()
ax.plot(q_ring**2, rate, 'ro', ls='--')
ax.set_ylabel('Relaxation rate 'r'$\gamma$'"($s^{-1}$)")
ax.set_xlabel("$q^2$"r'($\AA^{-2}$)')
plt.show()
Explanation: Plot the relaxation rates vs (q_ring_center)**2
End of explanation
D0 = np.polyfit(q_ring**2, rate, 1)
gmfit = np.poly1d(D0)
D0[0] # (Angstroms)^(-2)
fig, ax = plt.subplots()
ax.plot(q_ring**2, rate, 'ro', ls='--')
ax.plot(q_ring**2, gmfit(q_ring**2), ls='-')
ax.set_ylabel('Relaxation rate 'r'$\gamma$'"($s^{-1}$)")
ax.set_xlabel("$q^2$"r'($\AA^{-2}$)')
plt.show()
import skbeam
print(skbeam.__version__)
Explanation: Fit the diffusion coefficient D0
End of explanation |
14,330 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Limits
You can use algebraic methods to calculate the rate of change over a function interval by joining two points on the function with a secant line and measuring its slope. For example, a function might return the distance travelled by a cyclist in a period of time, and you can use a secant line to measure the average velocity between two points in time. However, this doesn't tell you the cyclist's velocity at any single point in time - just the average speed over an interval.
To find the cyclist's velocity at a specific point in time, you need the ability to find the slope of a curve at a given point. Differential Calculus enables us to do this through the use of derivatives. We can use derivatives to find the slope at a specific x value by calculating a delta for x<sub>1</sub> and x<sub>2</sub> values that are infinitesimally close together - so you can think of it as measuring the slope of a tiny straight line that comprises part of the curve.
Introduction to Limits
However, before we can jump straight into derivatives, we need to examine another aspect of differential calculus - the limit of a function; which helps us measure how a function's value changes as the x<sub>2</sub> value approaches x<sub>1</sub>
To better understand limits, let's take a closer look at our function, and note that although we graph the function as a line, it is in fact made up of individual points. Run the following cell to show the points that we've plotted for integer values of x - the line is created by interpolating the points in between
Step1: We know from the function that the f(x) values are calculated by squaring the x value and adding x, so we can easily calculate points in between and show them - run the following code to see this
Step2: Now we can see more clearly that this function line is formed of a continuous series of points, so theoretically for any given value of x there is a point on the line, and there is an adjacent point on either side with a value that is as close to x as possible, but not actually x.
Run the following code to visualize a specific point for x = 5, and try to identify the closest point either side of it
Step3: You can see the point where x is 5, and you can see that there are points shown on the graph that appear to be right next to this point (at x=4.75 and x=5.25). However, if we zoomed in we'd see that there are still gaps that could be filled by other values of x that are even closer to 5; for example, 4.9 and 5.1, or 4.999 and 5.001. If we could zoom infinitely close to the line we'd see that no matter how close a value you use (for example, 4.999999999999), there is always a value that's fractionally closer (for example, 4.9999999999999).
So what we can say is that there is a hypothetical number that's as close as possible to our desired value of x without actually being x, but we can't express it as a real number. Instead, we express it symbolically as a limit, like this
Step4: Look closely at the plot, and note the gap in the line where x = 0. This indicates that the function is not defined here. The domain of the function (its set of possible input values) does not include 0, and its range (the set of possible output values) does not include a value for x=0.
This is a non-continuous function - in other words, it includes at least one gap when plotted (so you couldn't plot it by hand without lifting your pen). Specifically, the function is non-continuous at x=0.
By convention, when a non-continuous function is plotted, the points that form a continuous line (or interval) are shown as a line, and the end of each line where there is a discontinuity is shown as a circle, which is filled if the value at that point is included in the line and empty if the value is not included in the line.
In this case, the function produces two intervals with a gap between them where the function is not defined, so we can show the discontinuous point as an unfilled circle - run the following code to visualize this with Python
Step5: There are a number of reasons a function might be non-continuous. For example, consider the following function
Step6: Now, suppose we have a function like this
Step7: Finding Limits of Functions Graphically
So the question arises, how do we find a value for the limit of a function at a specific point?
Let's explore this function, a
Step8: Note that this function is continuous at all points, there are no gaps in its range. However, the range of the function is {a(x) ≥ 1} (in other words, all real numbers that are greater than or equal to 1). For negative values of x, the function appears to return ever-decreasing values as x gets closer to 0, and for positive values of x, the function appears to return ever-increasing values as x gets further from 0; but it never returns 0.
Let's plot the function for an x value of 0 and find out what the a(0) value is returned
Step9: OK, so a(0) returns 1.
What happens if we use x values that are very slightly higher or lower than 0?
Step10: These x values return a(x) values that are just slightly above 1, and if we were to keep plotting numbers that are increasingly close to 0, for example 0.0000000001 or -0.0000000001, the function would still return a value that is just slightly greater than 1. The limit of function a(x) as x approaches 0, is 1; and the notation to indicate this is
Step11: The output from this function contains a gap in the line where x = 0. It seems that not only does the domain of the function (the values that can be passed in as x) exclude 0; but the range of the function (the set of values that can be returned from it) also excludes 0.
We can't evaluate the function for an x value of 0, but we can see what it returns for a value that is just very slightly less than 0
Step12: We can even try a negative x value that's a little closer to 0.
Step13: So as the value of x gets closer to 0 from the left (negative), the value of b(x) is decreasing towards 0. We can show this with the following notation
Step14: What happens if we decrease the value of x so that it's even closer to 0?
Step15: As with the negative side, as x approaches 0 from the positive side, the value of b(x) gets closer to 0; and we can show that like this
Step16: The plot of the function shows a line in which the c(x) value increases towards 25 as x approaches 5 from the negative side
Step17: What's the limit of d as x approaches 25?
We can plot a few points to help us
Step18: From these plotted values, we can see that as x approaches 25 from the negative side, d(x) is decreasing, and as x approaches 25 from the positive side, d(x) is increasing. As x gets closer to 25, d(x) increases or decreases more significantly.
If we were to plot every fractional value of d(x) for x values between 24.9 and 25, we'd see a line that decreases indefintely, getting closer and closer to the x = 25 vertical line, but never actually reaching it. Similarly, plotting every x value between 25 and 25.1 would result in a line going up indefinitely, but always staying to the right of the vertical x = 25 line.
The x = 25 line in this case is an asymptote - a line to which a curve moves ever closer but never actually reaches. The positive limit for x = 25 in this case in not a real numbered value, but infinity
Step19: Looking at the output, you can see that the function values are getting closer to 1 as x approaches 0 from both sides, so
Step20: As before, you can see that as the x values approach 0 from both sides, the value of the function gets closer to 1, so
Step21: Determining Limits Analytically
We've seen how to estimate limits visually on a graph, and by creating a table of x and f(x) values either side of a point. There are also some mathematical techniques we can use to calculate limits.
Direct Substitution
Recall that our definition for a function to be continuous at a point is that the two-directional limit must exist and that it must be equal to the function value at that point. It therefore follows, that if we know that a function is continuous at a given point, we can determine the limit simply by evaluating the function for that point.
For example, let's consider the following function g
Step22: Now, suppose we need to find the limit of g(x) as x approaches 4. We can try to find this by simply substituting 4 for the x values in the function
Step23: Factorization
OK, now let's try to find the limit of g(x) as x approaches 1.
We know from the function definition that the function is not defined at x = 1, but we're not trying to find the value of g(x) when x equals 1; we're trying to find the limit of g(x) as x approaches 1.
The direct substitution approach won't work in this case
Step24: Rationalization
Let's look at another function
Step25: To find the limit of h(x) as x approaches 4, we can't use the direct substitution method because the function is not defined at that point. However, we can take an alternative approach by multiplying both the numerator and denominator in the function by the conjugate of the numerator to rationalize the square root term (a conjugate is a binomial formed by reversing the sign of the second term of a binomial)
Step26: Rules for Limit Operations
When you are working with functions and limits, you may want to combine limits using arithmetic operations. There are some intuitive rules for doing this.
Let's define two simple functions, j | Python Code:
%matplotlib inline
# Here's the function
def f(x):
return x**2 + x
from matplotlib import pyplot as plt
# Create an array of x values from 0 to 10 to plot
x = list(range(0, 11))
# Get the corresponding y values from the function
y = [f(i) for i in x]
# Set up the graph
plt.xlabel('x')
plt.ylabel('f(x)')
plt.grid()
# Plot the function
plt.plot(x,y, color='lightgrey', marker='o', markeredgecolor='green', markerfacecolor='green')
plt.show()
Explanation: Limits
You can use algebraic methods to calculate the rate of change over a function interval by joining two points on the function with a secant line and measuring its slope. For example, a function might return the distance travelled by a cyclist in a period of time, and you can use a secant line to measure the average velocity between two points in time. However, this doesn't tell you the cyclist's velocity at any single point in time - just the average speed over an interval.
To find the cyclist's velocity at a specific point in time, you need the ability to find the slope of a curve at a given point. Differential Calculus enables us to do this through the use of derivatives. We can use derivatives to find the slope at a specific x value by calculating a delta for x<sub>1</sub> and x<sub>2</sub> values that are infinitesimally close together - so you can think of it as measuring the slope of a tiny straight line that comprises part of the curve.
Introduction to Limits
However, before we can jump straight into derivatives, we need to examine another aspect of differential calculus - the limit of a function; which helps us measure how a function's value changes as the x<sub>2</sub> value approaches x<sub>1</sub>
To better understand limits, let's take a closer look at our function, and note that although we graph the function as a line, it is in fact made up of individual points. Run the following cell to show the points that we've plotted for integer values of x - the line is created by interpolating the points in between:
End of explanation
%matplotlib inline
# Here's the function
def f(x):
return x**2 + x
from matplotlib import pyplot as plt
# Create an array of x values from 0 to 10 to plot
x = list(range(0,5))
x.append(4.25)
x.append(4.5)
x.append(4.75)
x.append(5)
x.append(5.25)
x.append(5.5)
x.append(5.75)
x = x + list(range(6,11))
# Get the corresponding y values from the function
y = [f(i) for i in x]
# Set up the graph
plt.xlabel('x')
plt.ylabel('f(x)')
plt.grid()
# Plot the function
plt.plot(x,y, color='lightgrey', marker='o', markeredgecolor='green', markerfacecolor='green')
plt.show()
Explanation: We know from the function that the f(x) values are calculated by squaring the x value and adding x, so we can easily calculate points in between and show them - run the following code to see this:
End of explanation
%matplotlib inline
# Here's the function
def f(x):
return x**2 + x
from matplotlib import pyplot as plt
# Create an array of x values from 0 to 10 to plot
x = list(range(0,5))
x.append(4.25)
x.append(4.5)
x.append(4.75)
x.append(5)
x.append(5.25)
x.append(5.5)
x.append(5.75)
x = x + list(range(6,11))
# Get the corresponding y values from the function
y = [f(i) for i in x]
# Set up the graph
plt.xlabel('x')
plt.ylabel('f(x)')
plt.grid()
# Plot the function
plt.plot(x,y, color='lightgrey', marker='o', markeredgecolor='green', markerfacecolor='green')
zx = 5
zy = f(zx)
plt.plot(zx, zy, color='red', marker='o', markersize=10)
plt.annotate('x=' + str(zx),(zx, zy), xytext=(zx - 0.5, zy + 5))
# Plot f(x) when x = 5.1
posx = 5.25
posy = f(posx)
plt.plot(posx, posy, color='blue', marker='<', markersize=10)
plt.annotate('x=' + str(posx),(posx, posy), xytext=(posx + 0.5, posy - 1))
# Plot f(x) when x = 4.9
negx = 4.75
negy = f(negx)
plt.plot(negx, negy, color='orange', marker='>', markersize=10)
plt.annotate('x=' + str(negx),(negx, negy), xytext=(negx - 1.5, negy - 1))
plt.show()
Explanation: Now we can see more clearly that this function line is formed of a continuous series of points, so theoretically for any given value of x there is a point on the line, and there is an adjacent point on either side with a value that is as close to x as possible, but not actually x.
Run the following code to visualize a specific point for x = 5, and try to identify the closest point either side of it:
End of explanation
%matplotlib inline
# Define function g
def g(x):
if x != 0:
return -(12/(2*x))**2
# Plot output from function g
from matplotlib import pyplot as plt
# Create an array of x values
x = range(-20, 21)
# Get the corresponding y values from the function
y = [g(a) for a in x]
# Set up the graph
plt.xlabel('x')
plt.ylabel('g(x)')
plt.grid()
# Plot x against g(x)
plt.plot(x,y, color='green')
plt.show()
Explanation: You can see the point where x is 5, and you can see that there are points shown on the graph that appear to be right next to this point (at x=4.75 and x=5.25). However, if we zoomed in we'd see that there are still gaps that could be filled by other values of x that are even closer to 5; for example, 4.9 and 5.1, or 4.999 and 5.001. If we could zoom infinitely close to the line we'd see that no matter how close a value you use (for example, 4.999999999999), there is always a value that's fractionally closer (for example, 4.9999999999999).
So what we can say is that there is a hypothetical number that's as close as possible to our desired value of x without actually being x, but we can't express it as a real number. Instead, we express it symbolically as a limit, like this:
\begin{equation}\lim_{x \to 5} f(x)\end{equation}
This is interpreted as the limit of function f(x) as x approaches 5.
Limits and Continuity
The function f(x) is continuous for all real numbered values of x. Put simply, this means that you can draw the line created by the function without lifting your pen (we'll look at a more formal definition later in this course).
However, this isn't necessarily true of all functions. Consider function g(x) below:
\begin{equation}g(x) = -(\frac{12}{2x})^{2}\end{equation}
This function is a little more complex than the previous one, but the key thing to note is that it requires a division by 2x. Now, ask yourself; what would happen if you applied this function to an x value of 0?
Well, 2 x 0 is 0, and anything divided by 0 is undefined. So the domain of this function does not include 0; in other words, the function is defined when x is any real number such that x is not equal to 0. The function should therefore be written like this:
\begin{equation}g(x) = -(\frac{12}{2x})^{2},\;\; x \ne 0\end{equation}
So why is this important? Let's investigate by running the following Python code to define the function and plot it for a set of arbitrary values:
End of explanation
%matplotlib inline
# Define function g
def g(x):
if x != 0:
return -(12/(2*x))**2
# Plot output from function g
from matplotlib import pyplot as plt
# Create an array of x values
x = range(-20, 21)
# Get the corresponding y values from the function
y = [g(a) for a in x]
# Set up the graph
plt.xlabel('x')
plt.ylabel('g(x)')
plt.grid()
# Plot x against g(x)
plt.plot(x,y, color='green')
# plot a circle at the gap (or close enough anyway!)
xy = (0,g(1))
plt.annotate('O',xy, xytext=(-0.7, -37),fontsize=14,color='green')
plt.show()
Explanation: Look closely at the plot, and note the gap in the line where x = 0. This indicates that the function is not defined here. The domain of the function (its set of possible input values) does not include 0, and its range (the set of possible output values) does not include a value for x=0.
This is a non-continuous function - in other words, it includes at least one gap when plotted (so you couldn't plot it by hand without lifting your pen). Specifically, the function is non-continuous at x=0.
By convention, when a non-continuous function is plotted, the points that form a continuous line (or interval) are shown as a line, and the end of each line where there is a discontinuity is shown as a circle, which is filled if the value at that point is included in the line and empty if the value is not included in the line.
In this case, the function produces two intervals with a gap between them where the function is not defined, so we can show the discontinuous point as an unfilled circle - run the following code to visualize this with Python:
End of explanation
%matplotlib inline
def h(x):
if x >= 0:
import numpy as np
return 2 * np.sqrt(x)
# Plot output from function h
from matplotlib import pyplot as plt
# Create an array of x values
x = range(-20, 21)
# Get the corresponding y values from the function
y = [h(a) for a in x]
# Set up the graph
plt.xlabel('x')
plt.ylabel('h(x)')
plt.grid()
# Plot x against h(x)
plt.plot(x,y, color='green')
# plot a circle close enough to the h(-x) limit for our purposes!
plt.plot(0, h(0), color='green', marker='o', markerfacecolor='green', markersize=10)
plt.show()
Explanation: There are a number of reasons a function might be non-continuous. For example, consider the following function:
\begin{equation}h(x) = 2\sqrt{x},\;\; x \ge 0\end{equation}
Applying this function to a non-negative x value returns a valid output; but for any value where x is negative, the output is undefined, because the square root of a negative value is not a real number.
Here's the Python to plot function h:
End of explanation
%matplotlib inline
def k(x):
import numpy as np
if x <= 0:
return x + 20
else:
return x - 100
# Plot output from function h
from matplotlib import pyplot as plt
# Create an array of x values for each non-contonuous interval
x1 = range(-20, 1)
x2 = range(1, 20)
# Get the corresponding y values from the function
y1 = [k(i) for i in x1]
y2 = [k(i) for i in x2]
# Set up the graph
plt.xlabel('x')
plt.ylabel('k(x)')
plt.grid()
# Plot x against k(x)
plt.plot(x1,y1, color='green')
plt.plot(x2,y2, color='green')
# plot a circle at the interval ends
plt.plot(0, k(0), color='green', marker='o', markerfacecolor='green', markersize=10)
plt.plot(0, k(0.0001), color='green', marker='o', markerfacecolor='w', markersize=10)
plt.show()
Explanation: Now, suppose we have a function like this:
\begin{equation}
k(x) = \begin{cases}
x + 20, & \text{if } x \le 0, \\
x - 100, & \text{otherwise }
\end{cases}
\end{equation}
In this case, the function's domain includes all real numbers, but its output is still non-continuous because of the way different values are returned depending on the value of x. The range of possible outputs for k(x ≤ 0) is ≤ 20, and the range of output values for k(x > 0) is x ≥ -100.
Let's use Python to plot function k:
End of explanation
%matplotlib inline
# Define function a
def a(x):
return x**2 + 1
# Plot output from function a
from matplotlib import pyplot as plt
# Create an array of x values
x = range(-10, 11)
# Get the corresponding y values from the function
y = [a(i) for i in x]
# Set up the graph
plt.xlabel('x')
plt.ylabel('a(x)')
plt.grid()
# Plot x against a(x)
plt.plot(x,y, color='purple')
plt.show()
Explanation: Finding Limits of Functions Graphically
So the question arises, how do we find a value for the limit of a function at a specific point?
Let's explore this function, a:
\begin{equation}a(x) = x^{2} + 1\end{equation}
We can start by plotting it:
End of explanation
%matplotlib inline
# Define function a
def a(x):
return x**2 + 1
# Plot output from function a
from matplotlib import pyplot as plt
# Create an array of x values
x = range(-10, 11)
# Get the corresponding y values from the function
y = [a(i) for i in x]
# Set up the graph
plt.xlabel('x')
plt.ylabel('a(x)')
plt.grid()
# Plot x against a(x)
plt.plot(x,y, color='purple')
# Plot a(x) when x = 0
zx = 0
zy = a(zx)
plt.plot(zx, zy, color='red', marker='o', markersize=10)
plt.annotate(str(zy),(zx, zy), xytext=(zx, zy + 5))
plt.show()
Explanation: Note that this function is continuous at all points; there are no gaps in its range. However, the range of the function is {a(x) ≥ 1} (in other words, all real numbers that are greater than or equal to 1). For negative values of x, the function appears to return ever-decreasing values as x gets closer to 0, and for positive values of x, the function appears to return ever-increasing values as x gets further from 0; but it never returns 0.
Let's plot the function for an x value of 0 and find out what value a(0) returns:
End of explanation
%matplotlib inline
# Define function a
def a(x):
return x**2 + 1
# Plot output from function a
from matplotlib import pyplot as plt
# Create an array of x values
x = range(-10, 11)
# Get the corresponding y values from the function
y = [a(i) for i in x]
# Set up the graph
plt.xlabel('x')
plt.ylabel('a(x)')
plt.grid()
# Plot x against a(x)
plt.plot(x,y, color='purple')
# Plot a(x) when x = 0.1
posx = 0.1
posy = a(posx)
plt.plot(posx, posy, color='blue', marker='<', markersize=10)
plt.annotate(str(posy),(posx, posy), xytext=(posx + 1, posy))
# Plot a(x) when x = -0.1
negx = -0.1
negy = a(negx)
plt.plot(negx, negy, color='orange', marker='>', markersize=10)
plt.annotate(str(negy),(negx, negy), xytext=(negx - 2, negy))
plt.show()
Explanation: OK, so a(0) returns 1.
What happens if we use x values that are very slightly higher or lower than 0?
End of explanation
%matplotlib inline
# Define function b
def b(x):
if x != 0:
return (-2*x**2) * 1/x
# Plot output from function g
from matplotlib import pyplot as plt
# Create an array of x values
x = range(-10, 11)
# Get the corresponding y values from the function
y = [b(i) for i in x]
# Set up the graph
plt.xlabel('x')
plt.ylabel('b(x)')
plt.grid()
# Plot x against b(x)
plt.plot(x,y, color='purple')
plt.show()
Explanation: These x values return a(x) values that are just slightly above 1, and if we were to keep plotting numbers that are increasingly close to 0, for example 0.0000000001 or -0.0000000001, the function would still return a value that is just slightly greater than 1. The limit of function a(x) as x approaches 0, is 1; and the notation to indicate this is:
\begin{equation}\lim_{x \to 0} a(x) = 1 \end{equation}
This reflects a more formal definition of function continuity. Previously, we stated that a function is continuous at a point if you can draw it at that point without lifting your pen. The more mathematical definition is that a function is continuous at a point if the limit of the function as it approaches that point from both directions is equal to the function's value at that point. In this case, as we approach x = 0 from both sides, the limit is 1; and the value of a(0) is also 1; so the function is continuous at x = 0.
Limits at Non-Continuous Points
Let's try another function, which we'll call b:
\begin{equation}b(x) = -2x^{2} \cdot \frac{1}{x},\;\;x\ne0\end{equation}
Note that this function has a domain that includes all real number values of x such that x does not equal 0. In other words, the function will return a valid output for any number other than 0.
Let's create it and plot it with Python:
End of explanation
%matplotlib inline
# Define function b
def b(x):
if x != 0:
return (-2*x**2) * 1/x
# Plot output from function g
from matplotlib import pyplot as plt
# Create an array of x values
x = range(-10, 11)
# Get the corresponding y values from the function
y = [b(i) for i in x]
# Set up the graph
plt.xlabel('x')
plt.ylabel('b(x)')
plt.grid()
# Plot x against b(x)
plt.plot(x,y, color='purple')
# Plot b(x) for x = -0.1
negx = -0.1
negy = b(negx)
plt.plot(negx, negy, color='orange', marker='>', markersize=10)
plt.annotate(str(negy),(negx, negy), xytext=(negx + 1, negy))
plt.show()
Explanation: The output from this function contains a gap in the line where x = 0. It seems that not only does the domain of the function (the values that can be passed in as x) exclude 0; but the range of the function (the set of values that can be returned from it) also excludes 0.
We can't evaluate the function for an x value of 0, but we can see what it returns for a value that is just very slightly less than 0:
End of explanation
%matplotlib inline
# Define function b
def b(x):
if x != 0:
return (-2*x**2) * 1/x
# Plot output from function g
from matplotlib import pyplot as plt
# Create an array of x values
x = range(-10, 11)
# Get the corresponding y values from the function
y = [b(i) for i in x]
# Set up the graph
plt.xlabel('x')
plt.ylabel('b(x)')
plt.grid()
# Plot x against b(x)
plt.plot(x,y, color='purple')
# Plot b(x) for x = -0.0001
negx = -0.0001
negy = b(negx)
plt.plot(negx, negy, color='orange', marker='>', markersize=10)
plt.annotate(str(negy),(negx, negy), xytext=(negx + 1, negy))
plt.show()
Explanation: We can even try a negative x value that's a little closer to 0.
End of explanation
%matplotlib inline
# Define function b
def b(x):
if x != 0:
return (-2*x**2) * 1/x
# Plot output from function g
from matplotlib import pyplot as plt
# Create an array of x values
x = range(-10, 11)
# Get the corresponding y values from the function
y = [b(i) for i in x]
# Set up the graph
plt.xlabel('x')
plt.ylabel('b(x)')
plt.grid()
# Plot x against b(x)
plt.plot(x,y, color='purple')
# Plot b(x) for x = 0.1
posx = 0.1
posy = b(posx)
plt.plot(posx, posy, color='blue', marker='<', markersize=10)
plt.annotate(str(posy),(posx, posy), xytext=(posx + 1, posy))
plt.show()
Explanation: So as the value of x gets closer to 0 from the left (negative), the value of b(x) is decreasing towards 0. We can show this with the following notation:
\begin{equation}\lim_{x \to 0^{-}} b(x) = 0 \end{equation}
Note that the arrow points to 0<sup>-</sup> (with a minus sign) to indicate that we're describing the limit as we approach 0 from the negative side.
So what about the positive side?
Let's see what the function value is when x is 0.1:
End of explanation
%matplotlib inline
# Define function b
def b(x):
if x != 0:
return (-2*x**2) * 1/x
# Plot output from function g
from matplotlib import pyplot as plt
# Create an array of x values
x = range(-10, 11)
# Get the corresponding y values from the function
y = [b(i) for i in x]
# Set up the graph
plt.xlabel('x')
plt.ylabel('b(x)')
plt.grid()
# Plot x against b(x)
plt.plot(x,y, color='purple')
# Plot b(x) for x = 0.0001
posx = 0.0001
posy = b(posx)
plt.plot(posx, posy, color='blue', marker='<', markersize=10)
plt.annotate(str(posy),(posx, posy), xytext=(posx + 1, posy))
plt.show()
Explanation: What happens if we decrease the value of x so that it's even closer to 0?
End of explanation
%matplotlib inline
def c(x):
import numpy as np
    if x <= 5:  # the jump in c(x) is at x = 5, matching the one-sided limits described below
return x + 20
else:
return x - 100
# Plot output from function h
from matplotlib import pyplot as plt
# Create arrays of x values
x1 = range(-20, 6)
x2 = range(6, 21)
# Get the corresponding y values from the function
y1 = [c(i) for i in x1]
y2 = [c(i) for i in x2]
# Set up the graph
plt.xlabel('x')
plt.ylabel('c(x)')
plt.grid()
# Plot x against c(x)
plt.plot(x1,y1, color='purple')
plt.plot(x2,y2, color='purple')
# plot a circle close enough to the c limits for our purposes!
plt.plot(5, c(5), color='purple', marker='o', markerfacecolor='purple', markersize=10)
plt.plot(5, c(5.001), color='purple', marker='o', markerfacecolor='w', markersize=10)
# plot some points from the +ve direction
posx = [20, 15, 10, 6]
posy = [c(i) for i in posx]
plt.scatter(posx, posy, color='blue', marker='<', s=70)
for p in posx:
plt.annotate(str(c(p)),(p, c(p)),xytext=(p, c(p) + 5))
# plot some points from the -ve direction
negx = [-15, -10, -5, 0, 4]
negy = [c(i) for i in negx]
plt.scatter(negx, negy, color='orange', marker='>', s=70)
for n in negx:
plt.annotate(str(c(n)),(n, c(n)),xytext=(n, c(n) + 5))
plt.show()
Explanation: As with the negative side, as x approaches 0 from the positive side, the value of b(x) gets closer to 0; and we can show that like this:
\begin{equation}\lim_{x \to 0^{+}} b(x) = 0 \end{equation}
Now, even though the function is not defined at x = 0, the limit as we approach x = 0 from the negative side is 0, and the limit as we approach x = 0 from the positive side is also 0; so we can say that the overall, or two-sided, limit for the function at x = 0 is 0:
\begin{equation}\lim_{x \to 0} b(x) = 0 \end{equation}
So can we therefore just ignore the gap and say that the function is continuous at x = 0? Well, recall that the formal definition for continuity is that to be continuous at a point, the function's limit as we approach the point in both directions must be equal to the function's value at that point. In this case, the two-sided limit as we approach x = 0 is 0, but b(0) is not defined; so the function is non-continuous at x = 0.
One-Sided Limits
Let's take a look at a different function. We'll call this one c:
\begin{equation}
c(x) = \begin{cases}
x + 20, & \text{if } x \le 0, \\
x - 100, & \text{otherwise }
\end{cases}
\end{equation}
In this case, the function's domain includes all real numbers, but its range is still non-continuous because of the way different values are returned depending on the value of x. The range of possible outputs for c(x ≤ 0) is ≤ 20, and the range of output values for c(x > 0) is x ≥ -100.
Let's use Python to plot function c with some values for c(x) marked on the line
End of explanation
%matplotlib inline
# Define function d
def d(x):
if x != 25:
return 4 / (x - 25)
# Plot output from function d
from matplotlib import pyplot as plt
# Create an array of x values
x = list(range(-100, 24))
x.append(24.9) # Add some fractional x
x.append(25) # values around
x.append(25.1) # 25 for finer-grain results
x = x + list(range(26, 101))
# Get the corresponding y values from the function
y = [d(i) for i in x]
# Set up the graph
plt.xlabel('x')
plt.ylabel('d(x)')
plt.grid()
# Plot x against d(x)
plt.plot(x,y, color='purple')
plt.show()
Explanation: The plot of the function shows a line in which the c(x) value increases towards 25 as x approaches 5 from the negative side:
\begin{equation}\lim_{x \to 5^{-}} c(x) = 25 \end{equation}
However, the c(x) value decreases towards -95 as x approaches 5 from the positive side:
\begin{equation}\lim_{x \to 5^{+}} c(x) = -95 \end{equation}
So what can we say about the two-sided limit of this function at x = 5?
The limit as we approach x = 5 from the negative side is not equal to the limit as we approach x = 5 from the positive side, so no two-sided limit exists for this function at that point:
\begin{equation}\lim_{x \to 5} \text{does not exist} \end{equation}
Asymptotes and Infinity
OK, time to look at another function:
\begin{equation}d(x) = \frac{4}{x - 25},\;\; x \ne 25\end{equation}
End of explanation
%matplotlib inline
# Define function d
def d(x):
if x != 25:
return 4 / (x - 25)
# Plot output from function d
from matplotlib import pyplot as plt
# Create an array of x values
x = list(range(-100, 24))
x.append(24.9) # Add some fractional x
x.append(25) # values around
x.append(25.1) # 25 for finer-grain results
x = x + list(range(26, 101))
# Get the corresponding y values from the function
y = [d(i) for i in x]
# Set up the graph
plt.xlabel('x')
plt.ylabel('d(x)')
plt.grid()
# Plot x against d(x)
plt.plot(x,y, color='purple')
# plot some points from the +ve direction
posx = [75, 50, 30, 25.5, 25.2, 25.1]
posy = [d(i) for i in posx]
plt.scatter(posx, posy, color='blue', marker='<')
for p in posx:
plt.annotate(str(d(p)),(p, d(p)))
# plot some points from the -ve direction
negx = [-55, 0, 23, 24.5, 24.8, 24.9]
negy = [d(i) for i in negx]
plt.scatter(negx, negy, color='orange', marker='>')
for n in negx:
plt.annotate(str(d(n)),(n, d(n)))
plt.show()
Explanation: What's the limit of d as x approaches 25?
We can plot a few points to help us:
End of explanation
# Define function a
def a(x):
return x**2 + 1
import pandas as pd
# Create a dataframe with an x column containing values either side of 0
df = pd.DataFrame ({'x': [-1, -0.5, -0.2, -0.1, -0.01, 0, 0.01, 0.1, 0.2, 0.5, 1]})
# Add an a(x) column by applying the function to x
df['a(x)'] = a(df['x'])
#Display the dataframe
df
Explanation: From these plotted values, we can see that as x approaches 25 from the negative side, d(x) is decreasing, and as x approaches 25 from the positive side, d(x) is increasing. As x gets closer to 25, d(x) increases or decreases more significantly.
If we were to plot every fractional value of d(x) for x values between 24.9 and 25, we'd see a line that decreases indefinitely, getting closer and closer to the x = 25 vertical line, but never actually reaching it. Similarly, plotting every x value between 25 and 25.1 would result in a line going up indefinitely, but always staying to the right of the vertical x = 25 line.
The x = 25 line in this case is an asymptote - a line to which a curve moves ever closer but never actually reaches. The positive limit for x = 25 in this case is not a real-numbered value, but infinity:
\begin{equation}\lim_{x \to 25^{+}} d(x) = \infty \end{equation}
Conversely, the negative limit for x = 25 is negative infinity:
\begin{equation}\lim_{x \to 25^{-}} d(x) = -\infty \end{equation}
Finding Limits Numerically Using a Table
Up to now, we've estimated limits for a point graphically by examining a graph of a function. You can also approximate limits by creating a table of x values and the corresponding function values either side of the point for which you want to find the limits.
For example, let's return to our a function:
\begin{equation}a(x) = x^{2} + 1\end{equation}
If we want to find the limits as x is approaching 0, we can apply the function to some values either side of 0 and view them as a table. Here's some Python code to do that:
End of explanation
# Define function e
def e(x):
if x == 0:
return 5
else:
return 1 + x**2
import pandas as pd
# Create a dataframe with an x column containing values either side of 0
x= [-1, -0.5, -0.2, -0.1, -0.01, 0, 0.01, 0.1, 0.2, 0.5, 1]
y =[e(i) for i in x]
df = pd.DataFrame ({' x':x, 'e(x)': y })
df
Explanation: Looking at the output, you can see that the function values are getting closer to 1 as x approaches 0 from both sides, so:
\begin{equation}\lim_{x \to 0} a(x) = 1 \end{equation}
Additionally, you can see that the actual value of the function when x = 0 is also 1, so:
\begin{equation}\lim_{x \to 0} a(x) = a(0) \end{equation}
Which according to our earlier definition, means that the function is continuous at 0.
However, you should be careful not to assume that the limit when x is approaching 0 will always be the same as the value when x = 0; even when the function is defined for x = 0.
For example, consider the following function:
\begin{equation}
e(x) = \begin{cases}
5, & \text{if } x = 0, \\
1 + x^{2}, & \text{otherwise }
\end{cases}
\end{equation}
Let's see what the function returns for x values either side of 0 in a table:
End of explanation
%matplotlib inline
# Define function e
def e(x):
if x == 0:
return 5
else:
return 1 + x**2
from matplotlib import pyplot as plt
x= [-1, -0.5, -0.2, -0.1, -0.01, 0.01, 0.1, 0.2, 0.5, 1]
y =[e(i) for i in x]
# Set up the graph
plt.xlabel('x')
plt.ylabel('e(x)')
plt.grid()
# Plot x against e(x)
plt.plot(x, y, color='purple')
# (we're cheating slightly - we'll manually plot the discontinous point...)
plt.scatter(0, e(0), color='purple')
# (... and overplot the gap)
plt.plot(0, 1, color='purple', marker='o', markerfacecolor='w', markersize=10)
plt.show()
Explanation: As before, you can see that as the x values approach 0 from both sides, the value of the function gets closer to 1, so:
\begin{equation}\lim_{x \to 0} e(x) = 1 \end{equation}
However the actual value of the function when x = 0 is 5, not 1; so:
\begin{equation}\lim_{x \to 0} e(x) \ne e(0) \end{equation}
Which according to our earlier definition, means that the function is non-continuous at 0.
Run the following cell to see what this looks like as a graph:
End of explanation
%matplotlib inline
# Define function g
def g(x):
if x != 1:
return (x**2 - 1) / (x - 1)
# Plot output from function g
from matplotlib import pyplot as plt
# Create an array of x values
x= range(-20, 21)
y =[g(i) for i in x]
# Set up the graph
plt.xlabel('x')
plt.ylabel('g(x)')
plt.grid()
# Plot x against g(x)
plt.plot(x,y, color='purple')
plt.show()
Explanation: Determining Limits Analytically
We've seen how to estimate limits visually on a graph, and by creating a table of x and f(x) values either side of a point. There are also some mathematical techniques we can use to calculate limits.
Direct Substitution
Recall that our definition for a function to be continuous at a point is that the two-directional limit must exist and that it must be equal to the function value at that point. It therefore follows, that if we know that a function is continuous at a given point, we can determine the limit simply by evaluating the function for that point.
For example, let's consider the following function g:
\begin{equation}g(x) = \frac{x^{2} - 1}{x - 1}, x \ne 1\end{equation}
Run the following code to see this function as a graph:
End of explanation
%matplotlib inline
# Define function g
def g(x):
if x != 1:
return (x**2 - 1) / (x - 1)
# Plot output from function f
from matplotlib import pyplot as plt
# Create an array of x values
x= range(-20, 21)
y =[g(i) for i in x]
# Set the x point we're interested in
zx = 4
plt.xlabel('x')
plt.ylabel('g(x)')
plt.grid()
# Plot x against g(x)
plt.plot(x,y, color='purple')
# Plot g(x) when x = 0
zy = g(zx)
plt.plot(zx, zy, color='red', marker='o', markersize=10)
plt.annotate(str(zy),(zx, zy), xytext=(zx - 2, zy + 1))
plt.show()
print ('Limit as x -> ' + str(zx) + ' = ' + str(zy))
Explanation: Now, suppose we need to find the limit of g(x) as x approaches 4. We can try to find this by simply substituting 4 for the x values in the function:
\begin{equation}g(4) = \frac{4^{2} - 1}{4 - 1}\end{equation}
This simplifies to:
\begin{equation}g(4) = \frac{15}{3}\end{equation}
So:
\begin{equation}\lim_{x \to 4} g(x) = 5\end{equation}
Let's take a look:
End of explanation
%matplotlib inline
# Define function g
def g(x):
if x != 1:
return (x**2 - 1) / (x - 1)
# Plot output from function g
from matplotlib import pyplot as plt
# Create an array of x values
x= range(-20, 21)
y =[g(i) for i in x]
# Set the x point we're interested in
zx = 1
# Calculate the limit of g(x) when x->zx using the factored equation
zy = zx + 1
plt.xlabel('x')
plt.ylabel('g(x)')
plt.grid()
# Plot x against g(x)
plt.plot(x,y, color='purple')
# Plot the limit of g(x)
zy = zx + 1
plt.plot(zx, zy, color='red', marker='o', markersize=10)
plt.annotate(str(zy),(zx, zy), xytext=(zx - 2, zy + 1))
plt.show()
print ('Limit as x -> ' + str(zx) + ' = ' + str(zy))
Explanation: Factorization
OK, now let's try to find the limit of g(x) as x approaches 1.
We know from the function definition that the function is not defined at x = 1, but we're not trying to find the value of g(x) when x equals 1; we're trying to find the limit of g(x) as x approaches 1.
The direct substitution approach won't work in this case:
\begin{equation}g(1) = \frac{1^{2} - 1}{1 - 1}\end{equation}
Simplifies to:
\begin{equation}g(1) = \frac{0}{0}\end{equation}
Anything divided by 0 is undefined; so all we've done is to confirm that the function is not defined at this point. You might be tempted to assume that this means the limit does not exist, but <sup>0</sup>/<sub>0</sub> is a special case; it's what's known as the indeterminate form; and there may be a way to solve this problem another way.
We can factor the x<sup>2</sup> - 1 numerator in the definition of g as (x - 1)(x + 1), so the limit equation can be rewritten like this:
\begin{equation}\lim_{x \to a} g(x) = \frac{(x-1)(x+1)}{x - 1}\end{equation}
The x - 1 in the numerator and the x - 1 in the denominator cancel each other out:
\begin{equation}\lim_{x \to a} g(x)= x+1\end{equation}
So we can now use substitution for x = 1 to calculate the limit as 1 + 1:
\begin{equation}\lim_{x \to 1} g(x) = 2\end{equation}
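As an optional cross-check (not part of the original walkthrough, and assuming the sympy library is available), the same limit can be evaluated symbolically:
import sympy as sp
sym_x = sp.symbols('x')
# confirms the factorization argument: the limit of (x^2 - 1)/(x - 1) as x -> 1 is 2
print(sp.limit((sym_x**2 - 1) / (sym_x - 1), sym_x, 1))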
Let's see what that looks like:
End of explanation
%matplotlib inline
# Define function h
def h(x):
import math
if x >= 0 and x != 4:
return (math.sqrt(x) - 2) / (x - 4)
# Plot output from function h
from matplotlib import pyplot as plt
# Create an array of x values
x= range(-20, 21)
y =[h(i) for i in x]
# Set up the graph
plt.xlabel('x')
plt.ylabel('h(x)')
plt.grid()
# Plot x against h(x)
plt.plot(x,y, color='purple')
plt.show()
Explanation: Rationalization
Let's look at another function:
\begin{equation}h(x) = \frac{\sqrt{x} - 2}{x - 4}, x \ne 4 \text{ and } x \ge 0\end{equation}
Run the following cell to plot this function as a graph:
End of explanation
%matplotlib inline
# Define function h
def h(x):
import math
if x >= 0 and x != 4:
return (math.sqrt(x) - 2) / (x - 4)
# Plot output from function h
from matplotlib import pyplot as plt
# Create an array of x values
x= range(-20, 21)
y =[h(i) for i in x]
# Specify the point we're interested in
zx = 4
# Calculate the limit of f(x) when x->zx using factored equation
import math
zy = 1 / ((math.sqrt(zx)) + 2)
plt.xlabel('x')
plt.ylabel('h(x)')
plt.grid()
# Plot x against h(x)
plt.plot(x,y, color='purple')
# Plot the limit of h(x) when x->zx
plt.plot(zx, zy, color='red', marker='o', markersize=10)
plt.annotate(str(zy),(zx, zy), xytext=(zx + 2, zy))
plt.show()
print ('Limit as x -> ' + str(zx) + ' = ' + str(zy))
Explanation: To find the limit of h(x) as x approaches 4, we can't use the direct substitution method because the function is not defined at that point. However, we can take an alternative approach by multiplying both the numerator and denominator in the function by the conjugate of the numerator to rationalize the square root term (a conjugate is a binomial formed by reversing the sign of the second term of a binomial):
\begin{equation}\lim_{x \to a}h(x) = \frac{\sqrt{x} - 2}{x - 4}\cdot\frac{\sqrt{x} + 2}{\sqrt{x} + 2}\end{equation}
This simplifies to:
\begin{equation}\lim_{x \to a}h(x) = \frac{(\sqrt{x})^{2} - 2^{2}}{(x - 4)({\sqrt{x} + 2})}\end{equation}
The (√x)<sup>2</sup> is x, and 2<sup>2</sup> is 4, so we can simplify the numerator as follows:
\begin{equation}\lim_{x \to a}h(x) = \frac{x - 4}{(x - 4)({\sqrt{x} + 2})}\end{equation}
Now we can cancel out the x - 4 in both the numerator and denominator:
\begin{equation}\lim_{x \to a}h(x) = \frac{1}{{\sqrt{x} + 2}}\end{equation}
So for x approaching 4, this is:
\begin{equation}\lim_{x \to 4}h(x) = \frac{1}{{\sqrt{4} + 2}}\end{equation}
This simplifies to:
\begin{equation}\lim_{x \to 4}h(x) = \frac{1}{2 + 2}\end{equation}
Which is of course:
\begin{equation}\lim_{x \to 4}h(x) = \frac{1}{4}\end{equation}
So the limit of h(x) as x approaches 4 is <sup>1</sup>/<sub>4</sub> or 0.25.
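As an optional cross-check (again assuming sympy is available), the symbolic result agrees with the rationalization above:
import sympy as sp
sym_x = sp.symbols('x')
# confirms the rationalization argument: the limit of (sqrt(x) - 2)/(x - 4) as x -> 4 is 1/4
print(sp.limit((sp.sqrt(sym_x) - 2) / (sym_x - 4), sym_x, 4))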
Let's calculate and plot this with Python:
End of explanation
%matplotlib inline
# Define function j
def j(x):
return x * 2 - 2
# Define function l
def l(x):
return -x * 2 + 4
# Plot output from functions j and l
from matplotlib import pyplot as plt
# Create an array of x values
x = range(-10, 11)
# Get the corresponding y values from the functions
jy = [j(i) for i in x]
ly = [l(i) for i in x]
# Set up the graph
plt.xlabel('x')
plt.xticks(range(-10,11, 1))
plt.ylabel('y')
plt.yticks(range(-30,30, 2))
plt.grid()
# Plot x against j(x)
plt.plot(x,jy, color='green', label='j(x)')
# Plot x against l(x)
plt.plot(x,ly, color='magenta', label='l(x)')
plt.legend()
plt.show()
Explanation: Rules for Limit Operations
When you are working with functions and limits, you may want to combine limits using arithmetic operations. There are some intuitive rules for doing this.
Let's define two simple functions, j:
\begin{equation}j(x) = 2x - 2\end{equation}
and l:
\begin{equation}l(x) = -2x + 4\end{equation}
Run the cell below to plot these functions:
End of explanation |
14,331 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<p style="text-align
Step1: To define a floating point number, you may use one of the following notations
Step2: Arithmetic operations
We can perform arithmetic operations that are common in many programming languages.
- +, -, *, /
- // is a special integer division even if the operands aren't
- x**y is used for $x^y$
- n % k calculates the remainder (modulo) of the integer division of n by k
Try it out!
Step3: Strings
Strings are defined either with single quotes or double quotes. Many other languages interpret them differently.
Step4: The difference between the two is that using double quotes makes it easy to include apostrophes (whereas these would terminate the string if using single quotes)
Step5: Operators
Some of the arithmetic operators can be applied to strings, though they have a different interpretation
- + will concatenate two strings
- * multiplies a string with an integer, i.e. the result is that many copies of the original string.
- % has a very special purpose to fill in values into strings
Python provides a large number of operations to manipulate text strings. Examples are given at https
Step6: Lists
Lists are a construct for holding multiple objects or values of different types. We can dynamically add, replace, or remove elements from a list.
Usually we iterate through a list in order to perform some operations, though we can also address a specific element by its position (index) in the list.
The + and * operators work on lists in a similar way as they do on strings.
Complete documentation at https
Step7: List Comprehension
This technique comes in handy and is often used.
Step8: Conditions
Python uses boolean variables to evaluate conditions. The boolean values True and False are returned when an expression is compared or evaluated.
Notice that variable assignment is done using a single equals operator "=", whereas comparison between two variables is done using the double equals operator "==". The "not equals" operator is marked as "!=".
Step9: The "and", "or" and "not" boolean operators allow building complex boolean expressions, for example
Step10: The "in" operator could be used to check if a specified object exists within an iterable object container, such as a list
Step11: Python uses indentation to define code blocks, instead of brackets. The standard Python indentation is 4 spaces, although tabs and any other space size will work, as long as it is consistent. Notice that code blocks do not need any termination.
Here is an example for using Python's "if" statement using code blocks
Step12: A statement is evaluated as true if one of the following is correct
Step13: "while" loops
While loops repeat as long as a certain boolean condition is met. For example
Step14: "break" and "continue" statements
break is used to exit a for loop or a while loop, whereas continue is used to skip the current block, and return to the "for" or "while" statement. A few examples
Step15: can we use "else" clause for loops?
Step16: Functions and methods
Functions are a convenient way to divide your code into useful blocks, allowing us to order our code, make it more readable, reuse it and save some time. Also functions are a key way to define interfaces so programmers can share their code.
As we have seen on previous tutorials, Python makes use of blocks.
A block is an area of code written in the format of
Step17: Functions may also receive arguments (variables passed from the caller to the function). For example
Step18: Functions may return a value to the caller, using the keyword- 'return' . For example
Step19: How to call functions
Step20: In this exercise you'll use an existing function,
and add your own, to create a fully functional program.
Add a function named list_benefits() that returns the following list of strings
Step21: Methods
Methods are very similar to functions, with the difference that, typically, a method is associated with an object.
Step22: Hint | Python Code:
myint = 7
print myint
print type(myint)
Explanation: <p style="text-align:right;color:red;font-weight:bold;font-size:16pt;padding-bottom:20px">Please, rename this notebook before editing!</p>
The Programming Language Python
References
Here are some references to freshen up on concepts:
- Self-paced online tutorials
- CodeAcademy (13h estimated time) https://www.codecademy.com/tracks/python
- Brief overview with live examples https://www.learnpython.org/en/Welcome
Books
Python for Everybody (HTML, PDF, Kindle) https://www.py4e.com/book
Python Practice Book http://anandology.com/python-practice-book/index.html
Learning Python (free, requires registration to download) https://www.packtpub.com/packt/free-ebook/learning-python
Python 2 vs Python 3
While there are a number of major differences between the versions, the majority of libraries and tools that we are concerned with operate on both. Most changes in Python 3 concern the internal workings and performance, though there are some syntax changes, and some operations behave differently. This page offers a comprehensive look at the key changes: http://sebastianraschka.com/Articles/2014_python_2_3_key_diff.html
We provide both versions on our cluster: the binaries python and python2 for Python 2.7, and python3 for version 3.4.
All assignments are expected to run on Python 2.7
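As a small illustration of two differences that bite most often (print syntax and integer division), the lines below are written for Python 2.7, with the Python 3 behaviour noted in comments:
print 7 / 2        # -> 3 in Python 2 (in Python 3, print(7 / 2) gives 3.5 and print is a function)
print 7 // 2       # -> 3 in both versions (explicit floor division)
print 7 / 2.0      # -> 3.5 in both versions (a float operand forces float division)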
Resources & Important Links
Official web-site of the Python Software Foundation https://www.python.org/
API Reference https://docs.python.org/2.7/
StackOverflow https://stackoverflow.com/questions/tagged/python
The following has been adopted from https://www.learnpython.org/
Variables and Types
One of the conveniences, and pitfalls, is that Python does not require you to explicitly declare variables before using them, as is common in many other programming languages. Python is not statically typed, but rather follows the object oriented paradigm. Every variable in Python is an object.
However, the values that variables hold have a designated data type.
This tutorial will go over a few basic types of variables.
Numbers
Python supports two types of numbers - integers and floating point numbers. Basic arithmetic operations yield different results for integers and floats. Special attention needs to be given when mixing values of different types within expressions, as the results might be unexpected.
To define an integer, use the following syntax:
End of explanation
myfloat = 7.0
print myfloat, type(myfloat)
myfloat = float(42)
print myfloat, type(myfloat)
Explanation: To define a floating point number, you may use one of the following notations:
End of explanation
(numerator, denominator) = 3.5.as_integer_ratio()
print numerator, denominator, numerator/denominator, 1.0*numerator/denominator
Explanation: Arithmetic operations
We can perform arithmetic operations that are common in many programming languages.
- +, -, *, /
- // is a special integer division even if the operands aren't
- x**y is used for $x^y$
- n % k calculates the remainder (modulo) of the integer division of n by k
Try it out!
End of explanation
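For illustration, here is a small example exercising the operators listed above (Python 2 semantics, matching the rest of this tutorial):
print 7 + 3, 7 - 3, 7 * 3, 7 / 3     # integer division: 7 / 3 -> 2
print 7 / 3.0                        # -> 2.333..., once one operand is a float
print 7 // 2.0                       # -> 3.0, floor division also works on floats
print 2 ** 10                        # -> 1024
print 17 % 5                         # -> 2, the remainder of the integer division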
mystring = 'hello'
print(mystring)
mystring = "hello"
print(mystring)
Explanation: Strings
Strings are defined either with single quotes or double quotes. Many other languages interpret them differently.
End of explanation
mystring = "Don't worry about apostrophes"
print(mystring)
Explanation: The difference between the two is that using double quotes makes it easy to include apostrophes (whereas these would terminate the string if using single quotes)
End of explanation
print "The magic number is %.2f!" % (7.0/13.0)
Explanation: Operators
Some of the arithmetic operators can be applied to strings, though they have a different interpretation
- + will concatenate two strings
- * multiplies a string with an integer, i.e. the result is that many copies of the original string.
- % has a very special purpose to fill in values into strings
Python provides a large number of operations to manipulate text strings. Examples are given at https://www.tutorialspoint.com/python/python_strings.htm
For the complete documentation refer to https://docs.python.org/2/library/string.html
String Formatting
Python uses C-style string formatting to create new, formatted strings. The "%" operator is used to format a set of variables enclosed in a "tuple" (a fixed size list), together with a format string, which contains normal text together with "argument specifiers", special symbols like "%s" and "%d".
Some basic argument specifiers you should know:
%s - String (or any object with a string representation, like numbers)
%d - Integers
%f - Floating point numbers
%.<number of digits>f - Floating point numbers with a fixed amount of digits to the right of the dot.
%x/%X - Integers in hex representation (lowercase/uppercase)
End of explanation
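For illustration, the + and * operators applied to strings, alongside a formatted string:
greeting = "Hello" + " " + "world"          # + concatenates strings
print greeting
print "ab" * 3                              # * repeats the string: 'ababab'
print "%s is %d years old" % ("John", 23)   # %s and %d are filled in from a tuple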
mylist = [1, 2, "three", ("a", 7)]
print len(mylist)
print mylist
mylist[3]
mylist + [7]
mylist * 2
Explanation: Lists
Lists are a construct for holding multiple objects or values of different types. We can dynamically add, replace, or remove elements from a list.
Usually we iterate through a list in order to perform some operations, though we can also address a specific element by its position (index) in the list.
The + and * operators work on lists in a similar way as they do on strings.
Complete documentation at https://docs.python.org/2/tutorial/datastructures.html
End of explanation
power_of_twos = [2**k for k in xrange(0,10)]
print power_of_twos
[i*j for i in xrange(1,11) for j in xrange(1,11)]
[ [i*j for i in xrange(1,11) ] for j in xrange(1,11)]
Explanation: List Comprehension
This technique comes in handy and is often used.
End of explanation
x = 2
print(x == 2) # prints out True
print(x == 3) # prints out False
print(x < 3) # prints out True
Explanation: Conditions
Python uses boolean variables to evaluate conditions. The boolean values True and False are returned when an expression is compared or evaluated.
Notice that variable assignment is done using a single equals operator "=", whereas comparison between two variables is done using the double equals operator "==". The "not equals" operator is marked as "!=".
End of explanation
name = "John"
age = 23
if name == "John" and age == 23:
print("Your name is John, and you are also 23 years old.")
if name == "John" or name == "Rick":
print("Your name is either John or Rick.")
Explanation: The "and", "or" and "not" boolean operators allow building complex boolean expressions, for example:
End of explanation
name = "John"
if name in ["John", "Rick"]:
print("Your name is either John or Rick.")
Explanation: The "in" operator could be used to check if a specified object exists within an iterable object container, such as a list:
End of explanation
x = 2
if x == 2:
print("x equals two!")
else:
print("x does not equal to two.")
Explanation: Python uses indentation to define code blocks, instead of brackets. The standard Python indentation is 4 spaces, although tabs and any other space size will work, as long as it is consistent. Notice that code blocks do not need any termination.
Here is an example of using Python's "if" statement with code blocks:
if <statement is true>:
<do something>
....
....
elif <another statement is true>: # else if
<do something else>
....
....
else:
<do another thing>
....
....
End of explanation
# Prints out the numbers 0,1,2,3,4
for x in range(5):
print(x)
# Prints out 3,4,5
for x in range(3, 6):
print(x)
# Prints out 3,5,7
for x in range(3, 8, 2):
print(x)
Explanation: A statement is evaluated as true if one of the following is correct: 1. The "True" boolean variable is given, or calculated using an expression, such as an arithmetic comparison. 2. An object which is not considered "empty" is passed.
Here are some examples for objects which are considered as empty: 1. An empty string: "" 2. An empty list: [] 3. The number zero: 0 4. The false boolean variable: False
Loops
There are two types of loops in Python, for and while.
The "for" loop
For loops iterate over a given sequence. Here is an example:
primes = [2, 3, 5, 7]
for prime in primes:
print(prime)
For loops can iterate over a sequence of numbers using the range and xrange functions. The difference between range and xrange is that the range function returns a new list with numbers of that specified range, whereas xrange returns an iterator, which is more efficient. (Python 3 uses the range function, which acts like xrange). Note that the range function is zero based.
End of explanation
count = 0
while count < 5:
print(count)
count += 1 # This is the same as count = count + 1
## compute the Greatest Common Denominator (GCD)
a = 15
b = 12
while a!=b:
# put smaller number in a
(a, b) = (a, b) if a<b else (b, a)
b = b - a
print "The GCD is %d"%a
Explanation: "while" loops
While loops repeat as long as a certain boolean condition is met. For example:
End of explanation
# Prints out 0,1,2,3,4
count = 0
while True:
print(count)
count += 1
if count >= 5:
break
# Prints out only odd numbers - 1,3,5,7,9
for x in range(10):
# Check if x is even
if x % 2 == 0:
continue
print(x)
Explanation: "break" and "continue" statements
break is used to exit a for loop or a while loop, whereas continue is used to skip the current block, and return to the "for" or "while" statement. A few examples:
End of explanation
# Prints out 0,1,2,3,4 and then it prints "count value reached 5"
count=0
while(count<5):
print(count)
count +=1
else:
print("count value reached %d" %(count))
# Prints out 1,2,3,4
for i in range(1, 10):
if(i%5==0):
break
print(i)
else:
print("this is not printed because for loop is terminated because of break but not due to fail in condition")
Explanation: can we use "else" clause for loops?
End of explanation
def my_function():
print("Hello From My Function!")
my_function()
Explanation: Functions and methods
Functions are a convenient way to divide your code into useful blocks, allowing us to order our code, make it more readable, reuse it and save some time. Also functions are a key way to define interfaces so programmers can share their code.
As we have seen on previous tutorials, Python makes use of blocks.
A block is an area of code written in the format of:
block_head:
1st block line
2nd block line
Where a block line is more Python code (even another block), and the block head is of the following format: block_keyword block_name(argument1,argument2, ...) Block keywords you already know are "if", "for", and "while".
Functions in python are defined using the block keyword "def", followed with the function's name as the block's name. For example:
End of explanation
def my_function_with_args(username, greeting):
print("Hello, %s , From My Function!, I wish you %s"%(username, greeting))
my_function_with_args("Peter", "a wonderful day")
Explanation: Functions may also receive arguments (variables passed from the caller to the function). For example:
End of explanation
def sum_two_numbers(a, b):
return a + b
sum_two_numbers(5,19)
Explanation: Functions may return a value to the caller, using the keyword- 'return' . For example:
End of explanation
def my_function():
print("Hello From My Function!")
def my_function_with_args(username, greeting):
print("Hello, %s , From My Function!, I wish you %s"%(username, greeting))
def sum_two_numbers(a, b):
return a + b
# print(a simple greeting)
my_function()
#prints - "Hello, John Doe, From My Function!, I wish you a great year!"
my_function_with_args("John Doe", "a great year!")
# after this line x will hold the value 3!
x = sum_two_numbers(1,2)
Explanation: How to call functions
End of explanation
# Return the list of benefit strings described above
def list_benefits():
    return ["More organized code", "More readable code", "Easier code reuse",
            "Allowing programmers to share and connect code together"]
# Concatenate " is a benefit of functions!" to the given benefit
def build_sentence(benefit):
    return benefit + " is a benefit of functions!"
def name_the_benefits_of_functions():
list_of_benefits = list_benefits()
for benefit in list_of_benefits:
print(build_sentence(benefit))
name_the_benefits_of_functions()
Explanation: In this exercise you'll use an existing function,
and add your own, to create a fully functional program.
Add a function named list_benefits() that returns the following list of strings: "More organized code", "More readable code", "Easier code reuse", "Allowing programmers to share and connect code together"
Add a function named build_sentence(info) which receives a single argument containing a string and returns a sentence starting with the given string and ending with the string " is a benefit of functions!"
Run and see all the functions work together!
End of explanation
s = "Hello WORLD"
s.lower()
3.75.as_integer_ratio()
Explanation: Methods
Methods are very similar to functions, with the difference that, typically, a method is associated with an object.
End of explanation
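For illustration, a few more methods attached to the string object s defined above:
print s.upper()                    # 'HELLO WORLD'
print s.replace("WORLD", "there")  # 'Hello there'
print s.split(" ")                 # ['Hello', 'WORLD']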
# These two lines are critical to using matplotlib within the noteboook
%matplotlib inline
import matplotlib.pyplot as plt
x = range(10)
x
x = [float(i-50)/50.0 for i in range(100)]
from math import *
y = [ xx**2 for xx in x]
y2 = [xx**3 for xx in x]
plt.plot(x, y, label="x^2")
plt.plot(x, y2, label="x^3")
plt.legend(loc="best")
plt.title("Exponential Functions")
theta = [ pi*0.02*float(t-50) for t in range (100)]
theta[:10]
x = [sin(t) for t in theta]
y = [cos(t) for t in theta]
plt.figure(figsize=(6,6))
plt.plot(x,y)
Explanation: Hint: while typing in the notebook or at the ipython prompt use the [TAB]-key after adding a "." (period) behind an object to see available methods:
Type the name of an already defined object: s
Add a period "." and hit the [TAB]-key: s. $\leftarrow$ This should show a list of available methods to a string.
Plotting something
Let's put some of our knowledge about lists and functions to use.
The following examples will create list of values, and then graph them.
We use a module of the Matplotlib library https://matplotlib.org/ The web-site provides detailed documentation and a wealth of examples.
<img src="https://matplotlib.org/_static/logo2.svg" style="top:5px;width:200px;right:5px;position:absolute" />
End of explanation |
14,332 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Forensic Intelligence Applications
In this section we will explore the use of similar source, score-based, non-anchored LRs for forensic intelligence applications. Score-based LRs can be a valuable tool for relating forensic traces to each other in a manner that pertains to
Step1: Define some characteristics of the dataset for our example
We will generate a dataset meant to emulate the kinds of intel available in pan-European drug seizure analysis. <br>
The variables below will define our dataset for this example, including the number of seizure batches, the number of samples per seizure batch and the relative variability observed in the samples. Some noise parameters are introduced as well and we assume a particular number of marker compounds are measured between seizures.<br>
Remember! This is only example data to illustrate the technology; you can come back here and change the nature of the example data at any time
Step2: Generate a sample data set
The following code cell will generate a new simulated data set according to the parameters defined in the previous code cell
Step3: Examine the generated data
The following code cell will display a 2-dimensional projection of your simulated illicit drugs seizure data set.<br>
The colours indicate similar source seizures. Remember, this is a projection of the higher-dimensional data for plotting purposes only.
Step4: Defining a similarity model to generate score-based likelihood ratios
Here we must define three parameters that will affect the shape of our graph.<br>
* The distance metric will convert the high dimensional data into a set of univariate pair-wise scores between samples.
* The distribution that will be used to model the scores in one dimension must be chosen here.
* The holdout size is a set of samples from our generated data set that will be removed before modelling the distributions.
The holdout samples will be used later to draw graphs. We can verify if useful intel is being provided based on the original seizure identities of the holdout samples. The holdout set is a random sample from the full data generated.
Step5: Dividing the data
The following code cell executes the data division as defined above and reports on the size of your reference collection, holdout set, and the dimensionality of the data. Additionally a graphical sample of the data is displayed with variable magnitudes mapped to a cold->hot colour scale.
Step6: Modeling the general sense of similarity
The following code cell does a lot of the work necessary to get from multivariate data to univariate score-based likelihood ratios
If you attended the lecture from Jacob about score-based LRs then this should be very familiar!
First, pair-wise comparisons are made where source identity is known (because the data was generated with a ground truth)
The comparisons are accumulated into groups based on the source identity (same or different batch)
The accumulated score distributions are modeled using a probability density function
The parameters of those distributions and a graphical representation is displayed
Step7: Relating a score to a likelihood ratio
The following code cell compares the new seizures (remember we separated them into the holdout set early on) against the distributions that we modelled from our forensic reference collection to determine how rare a particular score between two illicit drug profiles is in the context of our cumulative knowledge of the market as defined by our seizure collection.
* The distribution relating to pair-wise similarities between same source samples is plotted in green
* The distribution relating to pair-wise similarities between different source samples is plotted in blue
* The new recovered samples are compared to one another and plotted as dots along the top of the figure
Step8: Set thresholds for edges in your undirected graph
In the following code cell we must set the thresholds for drawing edges between seizures in a graph based on their similarity and the likelihood ratio of observing that similarity given they originate from the same source. The output figure at the bottom of the previous code cell can help you decide on a realistic threshold. <br>
The same points will be used for the graph using similarity scores as for the graph using likelihood ratios so that they can be compared
Step9: Generate a graph using the similarity between samples as a linkage function
The following code block examines the pairwise similarity between hold out samples and then compares that similarity to the threshold you defined.
The weight of the edges is determined by the magnitude of the similarity score (normalized to a range of 0.0-1.0).
* Closer nodes are more similar to one another.
* Edges not meeting the threshold criteria are removed.
* Any unconnected nodes are removed.
Step10: Generate a graph using the likelihood ratio as a linkage function
The following code block examines the likelihood ratio of observing a particular score between pairs of the hold out samples. The scores are then compared against the similarities modeled using our forensic reference samples to determine how rare the observation of such a score is. The weight of the edges is determined by the magnitude of the likelihood ratio of observing the score given the following competing hypotheses | Python Code:
from IPython.core.display import HTML
import os
#def css_styling():
#response = urllib.request.urlopen('https://dl.dropboxusercontent.com/u/24373111/custom.css')
#desktopFile = os.path.expanduser("~\Desktop\EAFS_LR_software\ForensicIntelligence\custom.css")
# styles = open(desktopFile, "r").read()
# return HTML(styles)
## importing nice mathy things
import numpy as np # numericla operators
import matplotlib.pyplot as plt # plotting operators
import matplotlib
from numpy import ndarray
import random # for random data generation
from matplotlib.mlab import PCA as mlabPCA # pca required for plotting higher dimensional data
import matplotlib.mlab as mlab
from scipy.stats import norm
import networkx as nx # for generating and plotting graphs
from scipy.spatial.distance import pdist, cdist, squareform # pairwise distance operators
from scipy import sparse as sp
#css_styling() # defines the style and colour scheme for the markdown and code cells.
Explanation: Forensic Intelligence Applications
In this section we will explore the use of similar source, score-based, non-anchored LRs for forensic intelligence applications. Score-based LRs can be a valuable tool for relating forensic traces to each other in a manner that pertains to:
The relative rarity of a relationship between two traces
The different senses of similarity and their expression in terms of multivariable features
We will explore the use of likelihood ratios as a method for exploring the connectivity between forensic traces, as well as the interactive process of asking different kinds of forensic questions and interpreting the results.
How to work with the Jupyter notebook interface
Each cell contains some code that is executed when you select any cell and:
<br />
<center>click on the execute icon<br />
OR<br />
presses <code>CRTL + ENTER</code><br />
</center>
<br />
In between these code cells are text cells that give explanations about the input and output of these cells as well as information about the operations being performed. In this workshop segment we will work our way down this iPython3 notebook. All the code cells can be edited to change the parameters of our example. We will pause to discuss the output generated at various stages of this process.
Accessing this notebook
After this workshop is over, you will be able to download this notebook any time at:
http://www.score_LR_demo.pendingassimilation.com
Lets get started
Execute the following code cell in order to import al of the Python3 libraries that will be required to run our example. <br/>
This code will also change the colour scheme of all code cells so that you can more easily tell the difference between <br/> instruction cells (black text white background) and code cells syntax coloured text on a black background.
End of explanation
NUMBER_OF_CLASSES = 20
# number of different classes
SAMPLES_PER_CLASS = 400
# we assume balanced classes
NOISINESS_INTRA_CLASS = 0.25
#expresses the spread of the classes (between 0.0 and 1.0 gives workable data)
NOISINESS_INTER_CLASS = 2.5
# expresses the spaces in between the clases (between 5 and 10 times the value of NOISINESS_INTRA_CLASS is nice)
DIMENSIONALITY = 10
# how many features are measured for this multivariate data
FAKE_DIMENSIONS = 2 # these features are drawn from a different (non class specific distribution) so as to have no actual class descriminating power.
if NUMBER_OF_CLASSES * SAMPLES_PER_CLASS > 10000:
print('Too many samples requested, please limit simulated data to 10000 samples to avoid slowing down the server kernel... Using Default values')
print(NUMBER_OF_CLASSES)
Explanation: Define some characteristics of the dataset for our example
We will generate a dataset meant to emulate the kinds of intel availabel in pan-european drug seizure analysis. <br>
The variables bellow will define our dataset for this example, including the number of seizure batches, the number of samples per seizure batch and the relative variability observed in the samples. Some noise parameters are introduced as well and we assume a particular number of marker compounds are measured between seizures.<br>
Remember! This is only example data to illustrate the technology you can come back here and change the nature of the exmaple data any time
End of explanation
##simulate interesting multiclass data
myData = np.empty((0,DIMENSIONALITY), int)
nonInformativeFeats = np.array(random.sample(range(DIMENSIONALITY), FAKE_DIMENSIONS))
labels = np.repeat(range(NUMBER_OF_CLASSES), SAMPLES_PER_CLASS) # integer class labels
#print labels
for x in range(0, NUMBER_OF_CLASSES):
A = np.random.rand(DIMENSIONALITY,DIMENSIONALITY)
cov = np.dot(A,A.transpose())*NOISINESS_INTRA_CLASS # ensure a positive semi-definite covariance matrix, but random relation between variables
mean = np.random.uniform(-NOISINESS_INTER_CLASS, NOISINESS_INTER_CLASS, DIMENSIONALITY) # random n-dimensional mean in a space we can plot easily
#print('random mutlivariate distribution mean for today is', mean)
#print('random positive semi-define matrix for today is', cov)
x = np.random.multivariate_normal(mean,cov,SAMPLES_PER_CLASS)
myData = np.append(myData, np.array(x), axis=0)
# exit here and concatenate
x = np.random.multivariate_normal(np.zeros(FAKE_DIMENSIONS),mlab.identity(FAKE_DIMENSIONS),SAMPLES_PER_CLASS*NUMBER_OF_CLASSES)
myData[:, nonInformativeFeats.astype(int)] = x
# substitute the noninformative dimensions with smaples drawn from the sampe boring distribution
#print myData
Explanation: Generate a sample data set
The following code cell will generate a new simulated data set according to the parameters defined in the previous code cell
End of explanation
## plotting our dataset to see if it is what we expect
names = matplotlib.colors.cnames #colours for plotting
names_temp = dict(names) # work on a copy so we don't mutate matplotlib's colour table when popping items
col_means = myData.mean(axis=0,keepdims=True)
myData = myData - col_means # mean center the data before PCA
col_stds = myData.std(axis=0,keepdims=True)
myData = myData / col_stds # unit variance scaling
results = mlabPCA(myData)# PCA results into and ND array scores, loadings
%matplotlib inline
fig = plt.figure()
ax = fig.add_subplot(111)
ax.axis('equal');
for i in range(NUMBER_OF_CLASSES):
plt.plot(results.Y[labels==i,0],results.Y[labels==i,1], 'o', markersize=7, color=names_temp.popitem()[1], alpha=0.5)
# plot the classes after PCA just for rough idea of their overlap.
plt.xlabel('x_values')
plt.ylabel('y_values')
plt.xlim([-4,4])
plt.ylim([-4,4])
plt.title('Transformed samples with class labels from matplotlib.mlab.PCA()')
plt.show()
Explanation: Examine the generated data
The following code cell will display a 2-dimensional projection of your simulated illicit drugs seizure data set.<br>
The colours indicate similar source seizures. Remember, this is a projection of the higher-dimensional data for plotting purposes only.
End of explanation
DISTANCE_METRIC = 'canberra'
# this can be any of: euclidean, minkowski, cityblock, seuclidean, sqeuclidean, cosine, correlation
# hamming, jaccard, chebyshev, canberra, braycurtis, mahalanobis, yule
DISTRIBUTION = 'normal'
HOLDOUT_SIZE = 0.01
# optional feature selection/masking for different questions
sz = myData.shape
RELEVANT_FEATURES = range(0,sz[1])
#### Later feature selcection occurs here ####
#RELEVANT_FEATURES = [2,5,7] ####
#### Selected features specifically relevant to precursor chemical composition
#####################################
myData = myData[:,RELEVANT_FEATURES]
Explanation: Defining a similarity model to generate score-based likelihood ratios
Here we must define three parameters that will affect the shape of our graph.<br>
* The distance metric will convert the high dimensional data into a set of univariate pair-wise scores between samples.
* The distribution that will be used to model the scores in one dimension must be chosen here.
* The holdout size is a set of sample from our generated data set that will be removed before modelling the distributions.
The holdout samples will be used later to draw graphs. We can verify if useful intel is being provided based on the original seizure identities of the holdout samples. The holdout set is a random sample from the full data generated.
End of explanation
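As a tiny illustrative sketch of what the chosen distance metric produces (the toy array below is not part of the seizure data), pdist on three 2-dimensional samples returns the condensed vector of the three pair-wise scores:
toy = np.array([[0., 0.], [1., 0.], [0., 2.]])
print(pdist(toy, DISTANCE_METRIC))   # one score per pair: (0,1), (0,2), (1,2)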
# divide the dataset
idx = np.random.choice(np.arange(NUMBER_OF_CLASSES*SAMPLES_PER_CLASS), int(NUMBER_OF_CLASSES*SAMPLES_PER_CLASS*HOLDOUT_SIZE), replace=False)
#10% holdout set removed to demonstrate the LR values of new samples!
test_samples = myData[idx,:]
test_labels = labels[idx]
train_samples = np.delete(myData, idx, 0)
train_labels = np.delete(labels, idx)
print(train_samples.shape, 'is the size of the data used to model same-source and different-source distributions')
print(test_samples.shape, 'are the points we will evaluate to see LRs achieved')
fig, axes = plt.subplots(nrows=1, ncols=2)
axes[0].imshow(train_samples[np.random.randint(DIMENSIONALITY,size=DIMENSIONALITY*2),:])
axes[1].imshow(test_samples[np.random.randint(test_samples.shape[0],size=test_samples.shape[0]),:])
plt.show()
Explanation: Dividing the data
The following code cell executes the data division as defined above and reports on the size of your reference collection, holdout set, and the dimensionality of the data. Additionally a graphical sample of the data is displayed with variable magnitudes mapped to a cold->hot colour scale.
End of explanation
#Pairwise distance calculations are going in here
same_dists = np.empty((0,1))
diff_dists = np.empty((0,1))
for labInstance in np.unique(train_labels):
dists = pdist(train_samples[train_labels==labInstance,:],DISTANCE_METRIC)
# this is already the condensed-form (lower triangle) with no duplicate comparisons.
same_dists = np.append(same_dists, np.array(dists))
del dists
dists = cdist(train_samples[train_labels==labInstance,:], train_samples[train_labels!=labInstance,:], DISTANCE_METRIC)
#print dists.shape
train_samples[train_labels!=labInstance,:]
diff_dists = np.append(diff_dists, np.array(dists).flatten())
#print same_dists.shape
#print diff_dists.shape
minval = min(np.min(diff_dists),np.min(same_dists))
maxval = max(np.max(diff_dists),np.max(same_dists))
# plot the histograms to see difference in distributions
# Same source data
mu_s, std_s = norm.fit(same_dists) # fit the intensities wth a normal
plt.hist(same_dists, np.arange(minval, maxval, abs(minval-maxval)/100), normed=1, facecolor='green', alpha=0.65)
y_same = mlab.normpdf(np.arange(minval, maxval, abs(minval-maxval)/100), mu_s, std_s) # estimate the pdf over the plot range
l=plt.plot(np.arange(minval, maxval, abs(minval-maxval)/100), y_same, 'g--', linewidth=1)
# Different source data
mu_d, std_d = norm.fit(diff_dists) # fit the intensities wth a normal
plt.hist(diff_dists, np.arange(minval, maxval, abs(minval-maxval)/100), normed=1, facecolor='blue', alpha=0.65)
y_diff = mlab.normpdf(np.arange(minval, maxval, abs(minval-maxval)/100), mu_d, std_d) # estimate the pdf over the plot range
l=plt.plot(np.arange(minval, maxval, abs(minval-maxval)/100), y_diff, 'b--', linewidth=1)
print('same source comparisons made: ', same_dists.shape[0])
print('diff source comparisons made: ', diff_dists.shape[0])
Explanation: Modeling the general sense of similarity
The following code cell does a lot of the work necessary to get from multivariate data to univariate score-based likelihood ratios
If you attended the lecture from Jacob about score-based LRs then this should be very familiar!
First, pair-wise comparisons are made where source identity is known (because the data was generated with a ground truth)
The comparisons are accumulated into groups based on the source identity (same or different batch)
The accumulated score distributions are modeled using a probability density function
The parameters of those distributions and a graphical representation is displayed
End of explanation
print(' mu same: ', mu_s, ' std same: ', std_s)
print(' mu diff: ', mu_d, ' std diff: ', std_d)
newDists = squareform(pdist((test_samples),DISTANCE_METRIC)) # new samples (unknown group memebership)
l=plt.plot(np.arange(minval, maxval, abs(minval-maxval)/100), y_diff, 'b-', linewidth=1)
l=plt.plot(np.arange(minval, maxval, abs(minval-maxval)/100), y_same, 'g-', linewidth=1)
l=plt.scatter(squareform(newDists), np.ones(squareform(newDists).shape[0], dtype=np.int)*max(y_same))
# plot the new distances compared to the distributions
lr_holder = [];
for element in squareform(newDists):
value = mlab.normpdf(element, mu_s, std_s)/mlab.normpdf(element, mu_d, std_d)
lr_holder.append(value)
#print value
#lr_holder = np.true_divide(lr_holder,1) #inversion becasue now it will be used as a spring for network plotting
newDists[newDists==0] = 0.000001 # avoid divide by zero
newDists = np.true_divide(newDists,newDists.max())
Explanation: Relating a score to a likelihood ratio
The following code cell compares the new seizures (remember we separated them into the holdout set early on) against the distributions that we modelled from our forensic reference collection to determine how rare a particular score between two illicit drug profiles is in the context of our cumulative knowledge of the market as defined by our seizure collection.
* The distribution relating to pair-wise similarities between same source samples is plotted in green
* The distribution relating to pair-wise similarities between different source samples is plotted in blue
* The new recovered samples are compared to one another and plotted as dots along the top of the figure
End of explanation
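As a quick illustrative sketch (relying on the mu_s, std_s, mu_d and std_d fitted in the cell above), the score-based LR for any single score is simply the ratio of the two fitted densities at that score; the example score below is hypothetical, chosen at the centre of the same-source distribution:
example_score = mu_s
lr_example = mlab.normpdf(example_score, mu_s, std_s) / mlab.normpdf(example_score, mu_d, std_d)
print('score = %.3f gives LR = %.2f (LR > 1 supports the same-source proposition)' % (example_score, lr_example))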
# SET A THRESHOLD FOR EDGES:
EDGE_THRESHOLD_DISTANCE = 0.75
EDGE_THRESHOLD_LR = 1.0
Explanation: Set thresholds for edges in your undirected graph
In the following code cell we must set the thresholds for drawing edges between seizures in a graph based on their similarity and the likelihood ratio of observing that similarity given they originate from the same source. The output figure at the bottom of the previous code cell can help you decide on a realistic threshold. <br>
The same points will be used for the graph using similarity scores as for the graph using likelihood ratios so that they can be compared
End of explanation
# Plot just the distance based graph
G = nx.Graph(); # empty graph
I,J,V = sp.find(newDists) # index pairs associated with distance
G.add_weighted_edges_from(np.column_stack((I,J,V))) # distance as edge weight
#pos=nx.spectral_layout(G) # automate layout in 2-d space
#nx.draw(G, pos, node_size=200, edge_color='k', with_labels=True, linewidths=1,prog='neato') # draw
#print G.number_of_edges()
edge_ind_collector = []
for e in G.edges_iter(data=True):
if G[e[0]][e[1]]['weight'] > EDGE_THRESHOLD_DISTANCE:
edge_ind_collector.append(e) # remove edges that indicate weak linkage
G.remove_edges_from(edge_ind_collector)
pos=nx.spring_layout(G) # automate layout in 2-d space
G = nx.convert_node_labels_to_integers(G)
nx.draw(G, pos, node_size=200, edge_color='k', with_labels=True, linewidths=1,prog='neato') # draw
print('node indeces:', G.nodes())
print(' seizure ids:', list(test_labels))
Explanation: Generate a graph using the similarity between samples as a linkage function
The following code block examines the pairwise similarity between hold out samples and then compares that similarity to the threshold you defined.
The weight of the edges is determined by the magnitude of the similarity score (normalized to a range of 0.0-1.0).
* Closer nodes are more similar to one another.
* Edges not meeting the threshold criteria are removed.
* Any unconnected nodes are removed.
End of explanation
# plot the likelihood based graph
#print lr_holder
G2 = nx.Graph(); # empty graph
I,J,V = sp.find(squareform(lr_holder)) # index pairs associated with distance
G2.add_weighted_edges_from(np.column_stack((I,J,V))) # LR value as edge weight
edge_ind_collector = []
for f in G2.edges_iter(data=True):
if G2[f[0]][f[1]]['weight'] < EDGE_THRESHOLD_LR:
#print(f)
edge_ind_collector.append(f) # remove edges that indicate weak linkage
G2.remove_edges_from(edge_ind_collector)
#print G.number_of_edges()
pos=nx.spring_layout(G2) # automate layout in 2-d space
G2 = nx.convert_node_labels_to_integers(G2)
nx.draw(G2, pos, node_size=200, edge_color='k', with_labels=True, linewidths=1,prog='neato')
print('node indeces:', G2.nodes())
print(' seizure ids:', list(test_labels))
Explanation: Generate a graph using the likelihood ratio as a linkage function
The following code block examines the likelihood ratio of observing a particular score between pairs of the hold out samples. The scores are then compared against the similarities modeled using our forensic reference samples to determine how rare the observation of such a score is. The weight of the edges is determined by the magnitude of the likelihood ratio of observing the score given the following competing hypotheses:
* Hp: The samples originate from a common source
* Hd: The samples originate from a different and arbitrary source
As with the graph based on similarity scores alone:
* Edges not meeting the threshold criteria are removed.
End of explanation |
14,333 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
T-Tests and P-Values
Let's say we're running an A/B test. We'll fabricate some data that randomly assigns order amounts from customers in sets A and B, with B being a little bit higher
Step1: The t-statistic is a measure of the difference between the two sets expressed in units of standard error. Put differently, it's the size of the difference relative to the variance in the data. A high t value means there's probably a real difference between the two sets; you have "significance". The P-value is a measure of the probability of an observation lying at extreme t-values; so a low p-value also implies "significance." If you're looking for a "statistically significant" result, you want to see a very low p-value and a high t-statistic (well, a high absolute value of the t-statistic more precisely). In the real world, statisticians seem to put more weight on the p-value result.
Let's change things up so both A and B are just random, generated under the same parameters. So there's no "real" difference between the two
Step2: Our p-value actually got a little lower, and the t-statistic a little larger, but still not enough to declare a real difference. So, you could have reached the right decision with just 10,000 samples instead of 100,000. Even a million samples doesn't help, so if we were to keep running this A/B test for years, you'd never achieve the result you're hoping for
Step3: If we compare the same set to itself, by definition we get a t-statistic of 0 and p-value of 1 | Python Code:
import numpy as np
from scipy import stats
A = np.random.normal(25.0, 5.0, 10000)
B = np.random.normal(26.0, 5.0, 10000)
stats.ttest_ind(A, B)
Explanation: T-Tests and P-Values
Let's say we're running an A/B test. We'll fabricate some data that randomly assigns order amounts from customers in sets A and B, with B being a little bit higher:
End of explanation
B = np.random.normal(25.0, 5.0, 10000)
stats.ttest_ind(A, B)
A = np.random.normal(25.0, 5.0, 100000)
B = np.random.normal(25.0, 5.0, 100000)
stats.ttest_ind(A, B)
Explanation: The t-statistic is a measure of the difference between the two sets expressed in units of standard error. Put differently, it's the size of the difference relative to the variance in the data. A high t value means there's probably a real difference between the two sets; you have "significance". The P-value is a measure of the probability of an observation lying at extreme t-values; so a low p-value also implies "significance." If you're looking for a "statistically significant" result, you want to see a very low p-value and a high t-statistic (well, a high absolute value of the t-statistic more precisely). In the real world, statisticians seem to put more weight on the p-value result.
Let's change things up so both A and B are just random, generated under the same parameters. So there's no "real" difference between the two:
End of explanation
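As a hedged sketch of what the t-statistic measures, the lines below recompute the equal-variance t-statistic by hand for the current A and B (equal sample sizes assumed here); the value should land close to what stats.ttest_ind reports:
n = len(A)
pooled_var = (A.var(ddof=1) + B.var(ddof=1)) / 2.0   # pooled sample variance for equal n
t_manual = (A.mean() - B.mean()) / np.sqrt(pooled_var * 2.0 / n)
print(t_manual)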
A = np.random.normal(25.0, 5.0, 1000000)
B = np.random.normal(25.0, 5.0, 1000000)
stats.ttest_ind(A, B)
Explanation: Our p-value actually got a little lower, and the t-statistic a little larger, but still not enough to declare a real difference. So, you could have reached the right decision with just 10,000 samples instead of 100,000. Even a million samples doesn't help, so if we were to keep running this A/B test for years, you'd never achieve the result you're hoping for:
End of explanation
stats.ttest_ind(A, A)
Explanation: If we compare the same set to itself, by definition we get a t-statistic of 0 and p-value of 1:
End of explanation |
14,334 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Pole-Zero (PZ) simulation example
In this short example we will simulate a simple RLC circuit with the ahkab simulator.
In particular, we consider a series resonant RLC circuit. If you need to refresh your knowledge on 2nd-order filters, you may take a look at this page.
The plan
Step1: Then we create a new circuit object titled 'RLC bandpass', which we name bpf from Band-Pass Filter
Step2: A circuit is made of, internally, components and nodes. For now, our bpf circuit is empty and really of not much use.
We wish to define our nodes, our components, specifying their connection to the appropriate nodes, and inform the circuit instance about what we did.
It sounds complicated, but it is actually very simple, also thanks to the convenience functions add_*() in the Circuit instances (circuit documentation).
We now add the inductor L1, the capacitor C1, the resistor R1 and the input source V1
Step3: Notice that
Step4: The above text defines the same circuit in netlist form. It has the advantage that it's a very concise piece of text and that the syntax resembles (not perfectly yet) that of simulators such as SPICE.
If you prefer to run ahkab from the command line, be sure to check the Netlist syntax doc page and to add the simulation statements, which are missing above.
2. PZ analysis
The analysis is set up easily by calling ahkab.new_pz(). Its signature is
Step5: The results are in the pz_solution object r. It has an interface that works like a dictionary.
Eg. you can do
Step6: Check out the documentation on pz_solution for more.
Let's see what we got
Step7: Note that the results are frequencies expressed in Hz (and not angular frequencies in rad/s).
Graphically, we can see better where the singularities are located
Step8: As expected, we got two complex conjugate poles and a zero in the origin.
The resonance frequency
Let's check that indeed the (undamped) resonance frequency $f_0$ has the expected value from the theory.
It should be
Step9: That's alright.
3. AC analysis
Let's perform an AC analysis
Step10: Next, we use sympy to assemble the transfer functions from the singularities we got from the PZ analysis.
Step11: We need a function to evaluate the absolute value of a transfer function in decibels.
Here it is
Step12: Next we can plot $|H(\omega)|$ in dB and inspect the results visually.
Step13: 4. Symbolic analysis
Next, we setup and run a symbolic analysis.
We set the input source to be 'V1'; in this way, ahkab will calculate all transfer functions, together with low-frequency gain, poles and zeros, with respect to every variable in the circuit.
It is done very similarly to the previous cases
Step14: Notice how the 'symbolic' key corresponds to a tuple of two objects
Step15: In particular, to our transfer function corresponds
Step16: It's easy to show the above entries are a different formulation that corresponds to the theoretical results we introduced at the beginning of this example.
We'll do it graphically. First of all, let's isolate our TF
Step17: We wish to substitute the correct circuit values for R1, L1 and C1 to be able to evaluate the results numerically.
In order to do so, the symbolic_solution class in the results module has a method named as_symbols that takes a string of space-separated symbol names and returns the sympy symbols associated with them (symbolic_solution.as_symbols documentation).
Step18: Did we get the same results, let's say, within a 1 dB accuracy?
Step19: Good.
5. Conclusions
Let's take a look at PZ, AC and symbolic results together | Python Code:
%pylab inline
figsize = (10, 7)
# libraries we need
import ahkab
print "We're using ahkab %s" % ahkab.__version__
Explanation: Pole-Zero (PZ) simulation example
In this short example we will simulate a simple RLC circuit with the ahkab simulator.
In particular, we consider a series resonant RLC circuit. If you need to refresh your knowledge on 2nd-order filters, you may take a look at this page.
The plan: what we'll do
0. A brief analysis of the circuit
This should be done with pen and paper, we'll just mention the results. The circuit is pretty simple, feel free to skip if you find it boring.
1. How to describe the circuit with ahkab
We'll show this:
from Python,
and briefly with a netlist deck.
2. Pole-zero analysis
We will extract poles and zeros.
We'll use them to build input-output transfer function, which we evaluate.
3. AC analysis
We will run an AC analysis to evaluate numerically the transfer function.
4. Symbolic analysis
We'll finally run a symbolic analysis as well.
Once we have the results, we'll substitute for the real circuit values and verify both AC and PZ analysis.
5. Conclusions
We will check that the three PZ, AC and Symbolic analysis match!
The circuit
The circuit we simulate is a very simple one: a series RLC circuit driven by a voltage source V1, with the output taken across the resistor R1.
0. Theory
Once one proves that the current flowing in the only circuit branch in the Laplace domain is given by:
$$I(s) = \frac{1}{L}\cdot\frac{s}{s^2 + 2\alpha\cdot s + \omega_0^2}$$
Where:
$s$ is the Laplace variable, $s = \sigma + j \omega$:
$j$ is the imaginary unit,
$\omega$ is the angular frequency (units rad/s).
$\alpha$ is known as the Neper frequency and it is given by $R/(2L)$,
$\omega_0$ is the (undamped) resonance frequency, equal to $(\sqrt{LC})^{-1}$.
It's easy to show that the pass-band transfer function we consider in our circuit, $V_{OUT}/V_{IN}$, has the expression:
$$H(s) = \frac{V_{OUT}}{V_{IN}}(s) = k_0 \cdot\frac{s}{s^2 + 2\alpha\cdot s + \omega_0^2}$$
Where the coefficient $k_0$ has value $k_0 = R/L$.
Solving for poles and zeros, we get:
One zero:
$z_0$, located in the origin.
Two poles, $p_0$ and $p_1$:
$p_{0,1} = - \alpha \pm \sqrt{\alpha^2 - \omega_0^2}$
1. Describe the circuit with ahkab
Let's call ahkab and describe the circuit above.
First we need to import ahkab:
End of explanation
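Before describing the circuit, here is a quick numerical sanity check of the formulas above for the component values used in this example (R1 = 13 ohm, L1 = 1 uH, C1 = 2.2 pF); this is only an illustration and does not use ahkab at all:
import numpy as np
R, L, C = 13., 1e-6, 2.2e-12
alpha = R / (2 * L)              # Neper frequency [rad/s]
w0 = 1. / np.sqrt(L * C)         # undamped resonance [rad/s]
print "alpha = %g rad/s, w0 = %g rad/s, f0 = %g Hz" % (alpha, w0, w0 / (2 * np.pi))
print "expected poles: -%g +/- j%g rad/s" % (alpha, np.sqrt(w0**2 - alpha**2))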
bpf = ahkab.Circuit('RLC bandpass')
Explanation: Then we create a new circuit object titled 'RLC bandpass', which we name bpf from Band-Pass Filter:
End of explanation
bpf = ahkab.Circuit('RLC bandpass')
bpf.add_inductor('L1', 'in', 'n1', 1e-6)
bpf.add_capacitor('C1', 'n1', 'out', 2.2e-12)
bpf.add_resistor('R1', 'out', bpf.gnd, 13)
# we also give V1 an AC value since we wish to run an AC simulation
# in the following
bpf.add_vsource('V1', 'in', bpf.gnd, dc_value=1, ac_value=1)
Explanation: A circuit is made of, internally, components and nodes. For now, our bpf circuit is empty and really of not much use.
We wish to define our nodes, our components, specifying their connection to the appropriate nodes, and inform the circuit instance about what we did.
It sounds complicated, but it is actually very simple, also thanks to the convenience functions add_*() in the Circuit instances (circuit documentation).
We now add the inductor L1, the capacitor C1, the resistor R1 and the input source V1:
End of explanation
print(bpf)
Explanation: Notice that:
* the nodes to which they get connected ('in', 'n1', 'out'...) are nothing but strings. If you prefer handles, you can call the create_node() method of the circuit instance bpf (create_node documentation.
* Using the convenience methods add_*, the nodes are not explicitely added to the circuit, but they are in fact automatically taken care of behind the hood.
Now we have successfully defined our circuit object bpf.
Let's see what's in there and generate a netlist:
End of explanation
pza = ahkab.new_pz('V1', ('out', bpf.gnd), x0=None, shift=1e3)
r = ahkab.run(bpf, pza)['pz']
Explanation: The above text defines the same circuit in netlist form. It has the advantage that it's a very concise piece of text and that the syntax resembles (not perfectly yet) that of simulators such as SPICE.
If you prefer to run ahkab from the command line, be sure to check the Netlist syntax doc page and to add the simulation statements, which are missing above.
2. PZ analysis
The analysis is set up easily by calling ahkab.new_pz(). Its signature is:
ahkab.new_pz(input_source=None, output_port=None, shift=0.0, MNA=None, outfile=None, x0=u'op', verbose=0)
And you can find the documentation for ahkab.new_pz here.
We will set:
* Input source and output port, to enable the extraction of the zeros.
* the input source is V1,
* the output port is defined between the output node out and ground node (bpf.gnd).
* We need no linearization, since the circuit is linear. Therefore we set x0 to None.
* I inserted a non-zero shift in the initial calculation frequency below. You may want to fiddle a bit with this value, the algorithm internally tries to kick the working frequency away from the exact location of the zeros, since we expect a zero in the origin, we help the simulation find the zero quickly by shifting away the initial working point.
End of explanation
r.keys()
Explanation: The results are in the pz_solution object r. It has an interface that works like a dictionary.
Eg. you can do:
End of explanation
print('Singularities:')
for x, _ in r:
print "* %s = %+g %+gj Hz" % (x, np.real(r[x]), np.imag(r[x]))
Explanation: Check out the documentation on pz_solution for more.
Let's see what we got:
End of explanation
figure(figsize=figsize)
# plot o's for zeros and x's for poles
for x, v in r:
plot(np.real(v), np.imag(v), 'bo'*(x[0]=='z')+'rx'*(x[0]=='p'))
# set axis limits and print some thin axes
xm = 1e6
xlim(-xm*10., xm*10.)
plot(xlim(), [0,0], 'k', alpha=.5, lw=.5)
plot([0,0], ylim(), 'k', alpha=.5, lw=.5)
# plot the distance from the origin of p0 and p1
plot([np.real(r['p0']), 0], [np.imag(r['p0']), 0], 'k--', alpha=.5)
plot([np.real(r['p1']), 0], [np.imag(r['p1']), 0], 'k--', alpha=.5)
# print the distance between p0 and p1
plot([np.real(r['p1']), np.real(r['p0'])], [np.imag(r['p1']), np.imag(r['p0'])], 'k-', alpha=.5, lw=.5)
# label the singularities
text(np.real(r['p1']), np.imag(r['p1'])*1.1, '$p_1$', ha='center', fontsize=20)
text(.4e6, .4e7, '$z_0$', ha='center', fontsize=20)
text(np.real(r['p0']), np.imag(r['p0'])*1.2, '$p_0$', ha='center', va='bottom', fontsize=20)
xlabel('Real [Hz]'); ylabel('Imag [Hz]'); title('Singularities');
Explanation: Note that the results are frequencies expressed in Hz (and not angular frequencies in rad/s).
Graphically, we can see better where the singularities are located:
End of explanation
C = 2.2e-12
L = 1e-6
f0 = 1./(2*np.pi*np.sqrt(L*C))
print 'Resonance frequency from analytic calculations: %g Hz' %f0
alpha = (-r['p0']-r['p1'])/2
a1 = np.real(abs(r['p0'] - r['p1']))/2
f0 = np.sqrt(a1**2 + alpha**2) # undamped f0 = sqrt(f_d^2 + alpha^2), with a1 (damped frequency) and alpha both in Hz
f0 = np.real_if_close(f0)
print 'Resonance frequency from PZ analysis: %g Hz' %f0
Explanation: As expected, we got two complex conjugate poles and a zero in the origin.
The resonance frequency
Let's check that indeed the (undamped) resonance frequency $f_0$ has the expected value from the theory.
It should be:
$$f_0 = \frac{1}{2\pi\sqrt{LC}}$$
Since we have little damping, $f_0$ is very close to the damped resonant frequency in our circuit, given by the absolute value of the imaginary part of either $p_0$ or $p_1$.
In fact, the damped resonant frequency $f_d$ is given by:
$$f_d = \frac{1}{2\pi}\sqrt{\omega_0^2 - \alpha^2}$$
Since this is an example and we have Python at our fingertips, we'll compensate for the frequency pulling due to the damping anyway. That way, the example is analytically correct.
End of explanation
aca = ahkab.new_ac(start=1e8, stop=5e9, points=5e2, x0=None)
rac = ahkab.run(bpf, aca)['ac']
Explanation: That's alright.
3. AC analysis
Let's perform an AC analysis:
End of explanation
import sympy
sympy.init_printing()
from sympy.abc import w
from sympy import I
p0, p1, z0 = sympy.symbols('p0, p1, z0')
k = 13/1e-6 # constant term, can be calculated to be R/L
H = 13/1e-6*(I*w + z0*6.28)/(I*w +p0*6.28)/(I*w + p1*6.28)
Hl = sympy.lambdify(w, H.subs({p0:r['p0'], z0:abs(r['z0']), p1:r['p1']}))
Explanation: Next, we use sympy to assemble the transfer functions from the singularities we got from the PZ analysis.
End of explanation
def dB20(x):
return 20*np.log10(x)
Explanation: We need a function to evaluate the absolute value of a transfer function in decibels.
Here it is:
End of explanation
figure(figsize=figsize)
semilogx(rac.get_x()/2/np.pi, dB20(abs(rac['vout'])), label='TF from AC analysis')
semilogx(rac.get_x()/2/np.pi, dB20(abs(Hl(rac.get_x()))), 'o', ms=4, label='TF from PZ analysis')
legend(); xlabel('Frequency [Hz]'); ylabel('|H(w)| [dB]'); xlim(4e7, 3e8); ylim(-50, 1);
Explanation: Next we can plot $|H(\omega)|$ in dB and inspect the results visually.
End of explanation
symba = ahkab.new_symbolic(source='V1')
rs, tfs = ahkab.run(bpf, symba)['symbolic']
Explanation: 4. Symbolic analysis
Next, we set up and run a symbolic analysis.
We set the input source to be 'V1'; in this way, ahkab will calculate all transfer functions, together with low-frequency gain, poles and zeros, with respect to every variable in the circuit.
It is done very similarly to the previous cases:
End of explanation
print(rs)
print(tfs)
Explanation: Notice how the 'symbolic' key maps to a tuple of two objects: the symbolic results and the TF object that was derived from them.
Let's inspect their contents:
End of explanation
tfs['VOUT/V1']
Explanation: In particular, our transfer function corresponds to the following entry:
End of explanation
Hs = tfs['VOUT/V1']['gain']
Hs
Explanation: It's easy to show that the above entries are just a different formulation of the theoretical results we introduced at the beginning of this example.
We'll do it graphically. First of all, let's isolate our TF:
End of explanation
s, C1, R1, L1 = rs.as_symbols('s C1 R1 L1')
HS = sympy.lambdify(w, Hs.subs({s:I*w, C1:2.2e-12, R1:13., L1:1e-6}))
Explanation: We wish to substitute the actual circuit values for R1, L1 and C1 so that we can evaluate the results numerically.
In order to do so, the symbolic_solution class in the results module has a method named as_symbols that takes a string of space-separated symbol names and returns the sympy symbols associated with them (see the symbolic_solution.as_symbols documentation).
End of explanation
np.allclose(dB20(abs(HS(rac.get_x()))), dB20(abs(Hl(rac.get_x()))), atol=1)
Explanation: Did we get the same results, let's say to within 1 dB?
End of explanation
figure(figsize=figsize); title('Series RLC passband: TFs compared')
semilogx(rac.get_x()/2/np.pi, dB20(abs(rac['vout'])), label='TF from AC analysis')
semilogx(rac.get_x()/2/np.pi, dB20(abs(Hl(rac.get_x()))), 'o', ms=4, label='TF from PZ analysis')
semilogx(rac.get_x()/2/np.pi, dB20(abs(HS(rac.get_x()))), '-', lw=10, alpha=.2, label='TF from symbolic analysis')
vlines(1.07297e+08, *gca().get_ylim(), alpha=.4)
text(7e8/2/np.pi, -45, '$f_d = 107.297\\, \\mathrm{MHz}$', fontsize=20)
legend(); xlabel('Frequency [Hz]'); ylabel('|H(w)| [dB]'); xlim(4e7, 3e8); ylim(-50, 1);
Explanation: Good.
5. Conclusions
Let's take a look at PZ, AC and symbolic results together:
End of explanation |
14,335 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step4: Let me work through CSS Tutorial, while consulting Cascading Style Sheets - Wikipedia, the free encyclopedia.
CSS as a collection of
Step6: Box model
CSS box model - Wikipedia, the free encyclopedia
<img src="https
Step7: Basic exercise
Step12: box-sizing
http | Python Code:
from nbfiddle import Fiddle
# http://www.w3schools.com/css/tryit.asp?filename=trycss_default
Fiddle(
    div_css = """
background-color: #d0e4fe;
h1 {
color: orange;
text-align: center;
}
p {
font-family: "Times New Roman";
font-size: 20px;
}
    """,
    html = """
<h1>My First CSS Example</h1>
<p>This is a paragraph.</p>
    """)
# http://www.w3schools.com/css/tryit.asp?filename=trycss_inline-block_old
Fiddle(
    div_css = """
.floating-box {
float: left;
width: 150px;
height: 75px;
margin: 10px;
border: 3px solid #73AD21;
}
.after-box {
clear: left;
border: 3px solid red;
}
    """,
    html = """
<h2>The Old Way - using float</h2>
<div class="floating-box">Floating box</div>
<div class="floating-box">Floating box</div>
<div class="floating-box">Floating box</div>
<div class="floating-box">Floating box</div>
<div class="floating-box">Floating box</div>
<div class="floating-box">Floating box</div>
<div class="floating-box">Floating box</div>
<div class="floating-box">Floating box</div>
<div class="after-box">Another box, after the floating boxes...</div>
    """,
)
Explanation: Let me work through CSS Tutorial, while consulting Cascading Style Sheets - Wikipedia, the free encyclopedia.
CSS as a collection of:
selectors, attributes, values
Some important CSS attributes:
color
background-color
End of explanation
# using normalize.css
normalize_css_url = "http://yui.yahooapis.com/3.18.0/build/cssnormalize-context/cssnormalize-context-min.css"
Fiddle(
    html = """
<img src="https://upload.wikimedia.org/wikipedia/commons/thumb/6/64/W3C_and_Internet_Explorer_box_models.svg/600px-W3C_and_Internet_Explorer_box_models.svg.png"/ style="width:300px">
    """,
csslibs = (normalize_css_url,)
)
Explanation: Box model
CSS box model - Wikipedia, the free encyclopedia
<img src="https://upload.wikimedia.org/wikipedia/commons/thumb/6/64/W3C_and_Internet_Explorer_box_models.svg/600px-W3C_and_Internet_Explorer_box_models.svg.png"/ style="width:300px">
End of explanation
%%html
<style>
</style>
<img src="https://upload.wikimedia.org/wikipedia/commons/6/6a/Johann_Sebastian_Bach.jpg" style="width: 200px; padding:5px; float:left;">
<p>Johann Sebastian Bach (31 March [O.S. 21 March] 1685 – 28 July 1750) was a German composer and musician
of the Baroque period. He enriched established German styles through his skill in counterpoint,
harmonic and motivic organisation, and the adaptation of rhythms, forms, and textures from abroad,
particularly from Italy and France. Bach's compositions include the Brandenburg Concertos, the Goldberg
Variations, the Mass in B minor, two Passions, and over three hundred cantatas of which around two hundred survive.
His music is revered for its technical command, artistic beauty, and intellectual depth.<p>
<p style="clear: right;">Bach's abilities as an organist were highly respected during his lifetime,
although he was not widely recognised as a great composer until a revival of interest and performances
of his music in the first half of the 19th century. He is now generally regarded as one of the greatest
composers of all time.</p>
Explanation: Basic exercise: flowing text around images
End of explanation
# box-sizing
Fiddle(
    html = """
<div class="box-sizing-content-box">HELLO</div>
<div class="box-sizing-border-box">HELLO</div>
    """,
    div_css = """
.box-sizing-content-box {
box-sizing: content-box;
width: 200px;
padding: 30px;
border: 5px solid red;
}
.box-sizing-border-box {
box-sizing: border-box;
width: 200px;
padding: 30px;
border: 5px solid blue;
}
    """)
# display: block, inline, none
Fiddle(
    html = """
<p class="inline">p.inline</p>
<p class="inline">p.inline</p>
<span class="block">span.block</span>
<span class="block">span.block</span>
<div class="none">none</div>
    """,
    div_css = """
p.inline {
display: inline;
}
span.block {
display: block;
}
div.none {
display: none;
}
    """)
Explanation: box-sizing
http://www.w3schools.com/cssref/css3_pr_box-sizing.asp
content-box: Default. The width and height properties (and min/max properties) includes only the content. Border, padding, or margin are not included
border-box: The width and height properties (and min/max properties) includes content, padding and border, but not the margin
End of explanation |
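The width arithmetic behind the two boxes above is worth spelling out. A small worked example, using the same pixel values as the Fiddle above:
width, padding, border = 200, 30, 5
content_box_total = width + 2*padding + 2*border   # content-box: 270px rendered width
border_box_total = width                            # border-box: 200px rendered width
border_box_content = width - 2*padding - 2*border   # border-box content area shrinks to 130px
print(content_box_total, border_box_total, border_box_content)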
14,336 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
First Order Filters
This notebook goes through calculations of first order low- and high-pass filters.
To start, lets import the libraries that will be used during this tutorial.
Step1: Time Constant
The time constant of an RC circuit, $\tau$, can be describe by the following relation
Step2: Low-Pass (LP) Filter
A low pass filter uses a resistor and capacitor with the voltage read across the capacitor and the input voltage into the resistor. The reactance of the capacitor blocks low frequency signals.
The response of the low pass filter is,
$$ V_o = V_i \frac{1}{\sqrt{1+(\omega R C)^2}} $$
Where $ \omega $ is the frequency and $RC$ is $ \tau $.
Step3: Now, we can find the $-3 dB$ frequency with this discrete data set if we interoplate, or if we just look at the graph and find the point where the Gain in dB is $-3$.
Step4: High Pass Filter
A first order high-pass filter similarly uses a resistor and capacitor, however, the output voltage is measured across the resistor.
The response of the high-pass filter is,
$$ V_o = V_i \frac{\omega R C}{\sqrt{1+ (\omega R C)^2}}$$
Step5: Now, overlaying the graphs | Python Code:
import numpy as np
import plotly.plotly as py
import plotly.graph_objs as go
Explanation: First Order Filters
This notebook goes through calculations of first order low- and high-pass filters.
To start, let's import the libraries that will be used during this tutorial.
End of explanation
# inputs
R = 1000 # resistance in ohms
C = 0.001 # capacitance in farads
tau = R*C
fc = 1/(2*np.pi*tau) # cutoff frequency in Hz
print('Time constant, tau =', tau, 's')
print('Cutoff frequency, f =', fc, 'Hz')
Explanation: Time Constant
The time constant of an RC circuit, $\tau$, can be described by the following relation:
$$ \tau = RC $$
The turnover or cutoff frequency can be determined with the time constant.
$$ f_c = \frac{1}{2 \pi \tau} $$
End of explanation
# inputs
Vi = 5 # volts
# define a sweep of angular frequency spanning well past the cutoff (wc = 1/tau rad/s)
wc = 1/tau  # angular cutoff frequency in rad/s (= 2*pi*fc)
omega = np.linspace(0.01*wc, 5*wc, 1000)
Vo_lp = Vi*1/np.sqrt(1+(omega*tau)**2)
Gdb_lp = 20*np.log10(Vo_lp/Vi) # power gain in dB
# plot with plotly
# Create traces
legend = ['Low Pass Gain']
tracelp = go.Scatter(
x=np.log10(omega),
y=Gdb_lp,
mode='lines',
name=legend[0]
)
# Edit the layout
layout = dict(title='First Order Low-Pass Filter Frequency Response',
              xaxis=dict(title='Log[Angular Frequency (rad/s)]'),
yaxis=dict(title='Power Gain (dB)'),
)
data = [tracelp] # put trace in array (plotly formatting)
fig = dict(data=data, layout=layout)
py.iplot(fig, filename="FirstOrderLPFilter")
Explanation: Low-Pass (LP) Filter
A low pass filter uses a resistor and capacitor with the output voltage read across the capacitor and the input voltage applied to the resistor. The capacitor's reactance is large at low frequencies and small at high frequencies, so high-frequency signals are attenuated at the output.
The response of the low pass filter is,
$$ V_o = V_i \frac{1}{\sqrt{1+(\omega R C)^2}} $$
Where $ \omega $ is the angular frequency (in rad/s) and $RC$ is $ \tau $.
End of explanation
logw_3db = 0 # log10 of the angular frequency where the gain crosses -3 dB (read off the graph)
w_cut = 10**logw_3db
print('Therefore, the angular cutoff frequency is', w_cut, 'rad/s, i.e.', w_cut/(2*np.pi), 'Hz')
Explanation: Now, we can find the $-3 dB$ frequency with this discrete data set if we interpolate, or if we just look at the graph and find the point where the gain in dB is $-3$.
End of explanation
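For a more precise answer than reading the graph, here is a minimal interpolation sketch. It assumes omega, Gdb_lp and tau from the cells above; the arrays are reversed because np.interp needs an increasing x grid.
w_3db = np.interp(-3.0, Gdb_lp[::-1], omega[::-1])
print('-3 dB angular frequency ~', w_3db, 'rad/s (expected 1/tau =', 1/tau, 'rad/s)')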
Vo_hp = Vi*(omega*tau)/np.sqrt(1+(omega*tau)**2)
Gdb_hp = 20*np.log10(Vo_hp/Vi) # power gain in dB
# plot with plotly
# Create traces
legend = ['High Pass Gain']
tracehp = go.Scatter(
x=np.log10(omega),
y=Gdb_hp,
mode='lines',
name=legend[0]
)
# Edit the layout
layout = dict(title='First Order High-Pass Filter Frequency Response',
              xaxis=dict(title='Log[Angular Frequency (rad/s)]'),
yaxis=dict(title='Power Gain (dB)'),
)
data = [tracehp] # put trace in array (plotly formatting)
fig = dict(data=data, layout=layout)
py.iplot(fig, filename="FirstOrderHPFilter")
Explanation: High Pass Filter
A first order high-pass filter similarly uses a resistor and capacitor, however, the output voltage is measured across the resistor.
The response of the high-pass filter is,
$$ V_o = V_i \frac{\omega R C}{\sqrt{1+ (\omega R C)^2}}$$
End of explanation
# Edit the layout
layout = dict(title='First Order High-Pass and Low-Pass Filter Frequency Response',
              xaxis=dict(title='Log[Angular Frequency (rad/s)]'),
yaxis=dict(title='Power Gain (dB)'),
)
data = [tracehp, tracelp] # put trace in array (plotly formatting)
fig = dict(data=data, layout=layout)
py.iplot(fig, filename="FirstOrderFilters")
Explanation: Now, overlaying the graphs
End of explanation |
14,337 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
July 2, 2020
Step1: Define prog constants
Step2: Define dirs, cats, etc
Step3: Define header, format, etc
NOTE
Step4: Read in the matched 2020 SSC + PS1 cat
Step5: Print out header and info of N2020PS1 cat as needed
Step6: Get value added cols
IMPORTANT
Step7: Function to plot one color (y-axis) versus various others pars (x-axis)
Step8: Plot color offsets versus (g-i) color, also against ra, dec | Python Code:
# GENERAL PURPOSE PACKAGES
import os
import glob
import tarfile
from urllib.request import urlretrieve
from datetime import date
import timeit
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
# Import ZI tools
%load_ext autoreload
%autoreload 2
# importing ZI tools:
# import ZItools as zit
# importing ZI tools with KT changes:
import KTtools as ktt
# NOT NEEDED ANY MORE, NO MATCHING OF CATALOGS DONE
# import pyspherematch
Explanation: July 2, 2020: v2: Adding ZI tools with KT additions
This reads in the matched catalog from SDSS_PS1_matching_KTv1.ipynb
Generate plots comparing SDSS with PS1
Use stripe82calibStars_v3.2.dat, from ZI, June 13
This is a copy of ZI's gaia matching code, modified by KT to sdss_PS1_matching_KTv1
End of explanation
# MATCH RAD FOR PYSPHEREMATCH
tol_asec = 3. # matching radius in arc.sec
tol_deg = tol_asec/3600.
# MAX PERMITTED MAG ERROR
max_MAGerr = 0.05
Explanation: Define prog constants
End of explanation
homedir = '/home/user/SDSS_SSC/'
datadir = homedir+'DataDir/'
# matched SDSS + PS1 catalogs
nSSC2PS1 = datadir+'nSSC2PS1_matchedv1.csv'
# SDSS 2020: USE THIS VERSION ONLY - FROM ZI ON JUNE, 13, 2020
newSSC2020_cat = datadir+'stripe82calibStars_v3.2.dat'
# DO NOT USE THese earlier VERSIONs
# SDSS 2020: USE THIS VERSION ONLY - FROM ZI ON JUNE, 9, 2020
# newSSC2020_cat = 'stripe82calibStars_v3.1.dat'
# newSSC2020_cat = 'N2020_stripe82calibStars.dat'
# PS1
PS1_cat = datadir+'PS1_STARS_Stripe82_area.csv'
Explanation: Define dirs, cats, etc
End of explanation
colnames = ['ra','dec','nEpochs','g_Nobs','g_mMed','g_mErr',
'r_Nobs','r_mMed','r_mErr','i_Nobs','i_mMed','i_mErr',
'z_Nobs','z_mMed','z_mErr','raMean','decMean',
'gMeanPSFMag','gMeanPSFMagErr','gMeanPSFMagNpt',
'rMeanPSFMag','rMeanPSFMagErr','rMeanPSFMagNpt',
'iMeanPSFMag','iMeanPSFMagErr','iMeanPSFMagNpt',
'zMeanPSFMag','zMeanPSFMagErr','zMeanPSFMagNpt']
Ndtype = {'nEpochs':'int64','g_Nobs':'int64',\
'r_Nobs':'int64','i_Nobs':'int64','z_Nobs':'int64',\
'gMeanPSFMagNpt':'int64','rMeanPSFMagNpt':'int64',\
'iMeanPSFMagNpt':'int64','zMeanPSFMagNpt':'int64'}
Explanation: Define header, format, etc
NOTE: PS1 header is given in the first row
End of explanation
%%time
n2020PS1 = pd.read_csv(nSSC2PS1,header=0,dtype=Ndtype,comment='#')
nrows,ncols = n2020PS1.shape
print('N2020+PS1, as read: num rows, cols: ',nrows,ncols)
Explanation: Read in the matched 2020 SSC + PS1 cat
End of explanation
n2020PS1.info()
n2020PS1.head()
### code for generating new quantities, such as dra, ddec, colors, differences in mags, etc
### NOTE: matches IS A DATAFRAME WITH COL NAMES AS GIVEN IN THIS FUNCTIONS
def derivedColumns(matches):
matches['dra'] = (matches['ra']-matches['raMean'])*3600
matches['ddec'] = (matches['dec']-matches['decMean'])*3600
ra = matches['ra']
matches['raW'] = np.where(ra > 180, ra-360, ra)
matches['dg'] = matches['g_mMed'] - matches['gMeanPSFMag']
matches['dr'] = matches['r_mMed'] - matches['rMeanPSFMag']
matches['di'] = matches['i_mMed'] - matches['iMeanPSFMag']
matches['dz'] = matches['z_mMed'] - matches['zMeanPSFMag']
matches['gr'] = matches['g_mMed'] - matches['r_mMed']
matches['ri'] = matches['r_mMed'] - matches['i_mMed']
matches['gi'] = matches['g_mMed'] - matches['i_mMed']
matches['iz'] = matches['i_mMed'] - matches['z_mMed']
matches['dgr'] = matches['dg'] - matches['dr']
matches['dri'] = matches['dr'] - matches['di']
matches['diz'] = matches['di'] - matches['dz']
matches['drz'] = matches['dr'] - matches['dz']
matches['dgi'] = matches['dg'] - matches['di']
return
Explanation: Print out header and info of N2020PS1 cat as needed
End of explanation
derivedColumns(n2020PS1)
n2020PS1.head()
Explanation: Get value added cols
IMPORTANT: ORIGINAL DF IS MODIFIED IN THIS CELL
End of explanation
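As a quick sanity check before plotting, the per-band SDSS minus PS1 offsets added above can be summarized directly. This is a small sketch using standard pandas methods on the columns created in the previous cell:
offset_cols = ['dg', 'dr', 'di', 'dz']
print(n2020PS1[offset_cols].median())
print(n2020PS1[offset_cols].quantile([0.25, 0.75]))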
def doOneColor(d, kw):
print('=========== WORKING ON:', kw['Ystr'], '===================')
xVec = d[kw['Xstr']]
yVec = d[kw['Ystr']]
#
# this is where the useable range of color specified by Xstr is defined
# it's really a hack - these limits should be passed via kw...
xMin = 0.5
xMax = 3.1
nBinX = 52
xBin, nPts, medianBin, sigGbin = ktt.fitMedians(xVec, yVec, xMin, xMax,nBinX, 0)
fig,ax = plt.subplots(1,1,figsize=(8,6))
# This is for a scatter plot
# ax.scatter(xVec, yVec, s=0.01, c='blue')
# Replace with hist2D
# get x,y-binning
yMax,yMin = np.quantile(yVec, [0.99,0.01])
yMax,yMin,nBinY = round(yMax,2),round(yMin,2),25
xedges = np.linspace(xMin,xMax,nBinX+1,endpoint=False,dtype=float)
yedges = np.linspace(yMin,yMax,nBinY+1,endpoint=False,dtype=float)
Histo2D, xedges, yedges = np.histogram2d(xVec,yVec, bins=(xedges, yedges))
Histo2D = Histo2D.T
X, Y = np.meshgrid(xedges, yedges)
# cs = ax.pcolormesh(X,Y, Histo2D, cmap='Greys')
# cs = ax.pcolormesh(X,Y, Histo2D, cmap='plasma')
cs = ax.pcolormesh(X,Y, Histo2D, cmap=kw['cmap'])
#ax.scatter(xBin, medianBin, s=5.2, c='yellow')
ax.scatter(xBin, medianBin, s=5.2, c='black', marker='+')
# ax.set_xlim(0.4,3.2)
# ax.set_ylim(-0.5,0.5)
ax.set_xlim(xMin,xMax)
ax.set_ylim(yMin,yMax)
ax.set_xlabel(kw['Xstr'])
ax.set_ylabel(kw['Ystr'])
fig.colorbar(cs, ax=ax, shrink=0.9)
# THERE IS NO ANALYTIC COLOR TERM: linear interpolation of the binned medians!
d['colorfit'] = np.interp(xVec, xBin, medianBin)
# the following line corrects the trend given by the binned medians
d['colorresid'] = d[kw['Ystr']] - d['colorfit']
# note that we only use the restricted range of color for ZP analysis
# 0.3 mag limit is to reject gross outliers
goodC = d[(np.abs(d['colorresid'])<0.3)&(xVec>xMin)&(xVec<xMax)]
### plots
# RA
print(' stats for RA binning medians:')
plotNameRoot = kw['plotNameRoot'] + kw['Ystr']
plotName = plotNameRoot + '_RA.png'
Ylabel =kw['Ystr'] + ' residuals'
kwOC = {"Xstr":'raW', "Xmin":-45, "Xmax":47, "Xlabel":'R.A. (deg)', \
"Ystr":'colorresid', "Ymin":-0.07, "Ymax":0.07,"nBinY":25, "Ylabel":Ylabel, \
"XminBin":-43, "XmaxBin":45, "nBinX":56, "Nsigma":3, "offset":0.01, \
"plotName":plotName, "symbSize":kw['symbSize'], "cmap":kw['cmap']}
ktt.plotdelMag_KT(goodC, kwOC)
print('made plot', plotName)
# Dec
print('-----------')
print(' stats for Dec binning medians:')
plotName = plotNameRoot + '_Dec.png'
kwOC = {"Xstr":'dec', "Xmin":-1.3, "Xmax":1.3, "Xlabel":'Declination (deg)', \
"Ystr":'colorresid', "Ymin":-0.07, "Ymax":0.07,"nBinY":25, "Ylabel":Ylabel, \
"XminBin":-1.26, "XmaxBin":1.26, "nBinX":52, "Nsigma":3, "offset":0.01, \
"plotName":plotName, "symbSize":kw['symbSize'], "cmap":kw['cmap']}
ktt.plotdelMag_KT(goodC, kwOC)
print('made plot', plotName)
# r SDSS
print('-----------')
print(' stats for SDSS r binning medians:')
plotName = plotNameRoot + '_rmag.png'
kwOC = {"Xstr":'r_mMed', "Xmin":14.3, "Xmax":22.2, "Xlabel":'SDSS r (mag)', \
"Ystr":'colorresid', "Ymin":-0.07, "Ymax":0.07,"nBinY":25, "Ylabel":Ylabel, \
"XminBin":14.5, "XmaxBin":21.5, "nBinX":55, "Nsigma":3, "offset":0.01, \
"plotName":plotName, "symbSize":kw['symbSize'], "cmap":kw['cmap']}
ktt.plotdelMag_KT(goodC, kwOC)
print('made plot', plotName)
print('------------------------------------------------------------------')
Explanation: Function to plot one color (y-axis) versus various other parameters (x-axis)
End of explanation
# These keywords set what you want to plot on x
keywords = {"Xstr":'gi', "plotNameRoot":'colorResidPS12_'}
# These keywords are for plot style
keywords["symbSize"] = 0.05
keywords["cmap"] = 'plasma' # gives a good dark background for the overplots
# keywords["cmap"] = 'PuBuGn' # the overplots look washed out
# create a series of plots
# for color in ('dg', 'dr', 'di', 'dz', 'dgr', 'dri', 'diz'):
for color in ('dr', 'dgr'): # use this for testing
keywords["Ystr"] = color
doOneColor(n2020PS1, keywords)
Explanation: Plot color offsets versus (g-i) color, also against ra, dec
End of explanation |
14,338 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2018 The TensorFlow Authors.
Licensed under the Apache License, Version 2.0 (the "License").
Text Generation using a RNN
<table class="tfo-notebook-buttons" align="left"><td>
<a target="_blank" href="https
Step1: Import tensorflow and enable eager execution.
Step2: Download the dataset
In this example, we will use the shakespeare dataset. You can use any other dataset that you like.
Step3: Read the dataset
Step4: Creating dictionaries to map from characters to their indices and vice-versa, which will be used to vectorize the inputs
Step5: Creating the input and output tensors
Vectorizing the input and the target text because our model cannot understand strings only numbers.
But first, we need to create the input and output vectors.
Remember the max_length we set above, we will use it here. We are creating max_length chunks of input, where each input vector is all the characters in that chunk except the last and the target vector is all the characters in that chunk except the first.
For example, consider that the string = 'tensorflow' and the max_length is 9
So, the input = 'tensorflo' and output = 'ensorflow'
After creating the vectors, we convert each character into numbers using the char2idx dictionary we created above.
Step6: Creating batches and shuffling them using tf.data
Step7: Creating the model
We use the Model Subclassing API which gives us full flexibility to create the model and change it however we like. We use 3 layers to define our model.
Embedding layer
GRU layer (you can use an LSTM layer here)
Fully connected layer
Step8: Call the model and set the optimizer and the loss function
Step9: Train the model
Here we will use a custom training loop with the help of GradientTape()
We initialize the hidden state of the model with zeros and shape == (batch_size, number of rnn units). We do this by calling the function defined while creating the model.
Next, we iterate over the dataset(batch by batch) and calculate the predictions and the hidden states associated with that input.
There are a lot of interesting things happening here.
The model gets hidden state(initialized with 0), lets call that H0 and the first batch of input, lets call that I0.
The model then returns the predictions P1 and H1.
For the next batch of input, the model receives I1 and H1.
The interesting thing here is that we pass H1 to the model with I1 which is how the model learns. The context learned from batch to batch is contained in the hidden state.
We continue doing this until the dataset is exhausted and then we start a new epoch and repeat this.
After calculating the predictions, we calculate the loss using the loss function defined above. Then we calculate the gradients of the loss with respect to the model variables(input)
Finally, we take a step in that direction with the help of the optimizer using the apply_gradients function.
Note
Step10: Predicting using our trained model
The below code block is used to generated the text
We start by choosing a start string and initializing the hidden state and setting the number of characters we want to generate.
We get predictions using the start_string and the hidden state
Then we use a multinomial distribution to calculate the index of the predicted word. We use this predicted word as our next input to the model
The hidden state returned by the model is fed back into the model so that it now has more context rather than just one word. After we predict the next word, the modified hidden states are again fed back into the model, which is how it learns as it gets more context from the previously predicted words.
If you see the predictions, the model knows when to capitalize, make paragraphs and the text follows a shakespeare style of writing which is pretty awesome! | Python Code:
!pip install unidecode
Explanation: Copyright 2018 The TensorFlow Authors.
Licensed under the Apache License, Version 2.0 (the "License").
Text Generation using a RNN
<table class="tfo-notebook-buttons" align="left"><td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/generative_examples/text_generation.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td><td>
<a target="_blank" href="https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/eager/python/examples/generative_examples/text_generation.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on Github</a></td></table>
This notebook demonstrates how to generate text with an RNN using tf.keras and eager execution. If you like, you can write a similar model using less code. Here, we show a lower-level implementation that's useful to understand as prework before diving into deeper examples in a similar vein, like Neural Machine Translation with Attention.
This notebook is an end-to-end example. When you run it, it will download a dataset of Shakespeare's writing. We'll use a collection of plays, borrowed from Andrej Karpathy's excellent The Unreasonable Effectiveness of Recurrent Neural Networks. The notebook will train a model, and use it to generate sample output.
Here is the output(with start string='w') after training a single layer GRU for 30 epochs with the default settings below:
```
were to the death of him
And nothing of the field in the view of hell,
When I said, banish him, I will not burn thee that would live.
HENRY BOLINGBROKE:
My gracious uncle--
DUKE OF YORK:
As much disgraced to the court, the gods them speak,
And now in peace himself excuse thee in the world.
HORTENSIO:
Madam, 'tis not the cause of the counterfeit of the earth,
And leave me to the sun that set them on the earth
And leave the world and are revenged for thee.
GLOUCESTER:
I would they were talking with the very name of means
To make a puppet of a guest, and therefore, good Grumio,
Nor arm'd to prison, o' the clouds, of the whole field,
With the admire
With the feeding of thy chair, and we have heard it so,
I thank you, sir, he is a visor friendship with your silly your bed.
SAMPSON:
I do desire to live, I pray: some stand of the minds, make thee remedies
With the enemies of my soul.
MENENIUS:
I'll keep the cause of my mistress.
POLIXENES:
My brother Marcius!
Second Servant:
Will't ple
```
Of course, while some of the sentences are grammatical, most do not make sense. But, consider:
Our model is character based (when we began training, it did not yet know how to spell a valid English word, or that words were even a unit of text).
The structure of the output resembles a play (blocks begin with a speaker name, in all caps similar to the original text). Sentences generally end with a period. If you look at the text from a distance (or don't read the invididual words too closely, it appears as if it's an excerpt from a play).
As a next step, you can experiment training the model on a different dataset - any large text file(ASCII) will do, and you can modify a single line of code below to make that change. Have fun!
Install unidecode library
A helpful library to convert unicode to ASCII.
End of explanation
# Import TensorFlow >= 1.9 and enable eager execution
import tensorflow as tf
# Note: Once you enable eager execution, it cannot be disabled.
tf.enable_eager_execution()
import numpy as np
import re
import random
import unidecode
import time
Explanation: Import tensorflow and enable eager execution.
End of explanation
path_to_file = tf.keras.utils.get_file('shakespeare.txt', 'https://storage.googleapis.com/yashkatariya/shakespeare.txt')
Explanation: Download the dataset
In this example, we will use the shakespeare dataset. You can use any other dataset that you like.
End of explanation
text = unidecode.unidecode(open(path_to_file).read())
# length of text is the number of characters in it
print (len(text))
Explanation: Read the dataset
End of explanation
# unique contains all the unique characters in the file
unique = sorted(set(text))
# creating a mapping from unique characters to indices
char2idx = {u:i for i, u in enumerate(unique)}
idx2char = {i:u for i, u in enumerate(unique)}
# setting the maximum length sentence we want for a single input in characters
max_length = 100
# length of the vocabulary in chars
vocab_size = len(unique)
# the embedding dimension
embedding_dim = 256
# number of RNN (here GRU) units
units = 1024
# batch size
BATCH_SIZE = 64
# buffer size to shuffle our dataset
BUFFER_SIZE = 10000
Explanation: Creating dictionaries to map from characters to their indices and vice-versa, which will be used to vectorize the inputs
End of explanation
input_text = []
target_text = []
for f in range(0, len(text)-max_length, max_length):
inps = text[f:f+max_length]
targ = text[f+1:f+1+max_length]
input_text.append([char2idx[i] for i in inps])
target_text.append([char2idx[t] for t in targ])
print (np.array(input_text).shape)
print (np.array(target_text).shape)
Explanation: Creating the input and output tensors
Vectorizing the input and the target text because our model cannot understand strings only numbers.
But first, we need to create the input and output vectors.
Remember the max_length we set above, we will use it here. We are creating max_length chunks of input, where each input vector is all the characters in that chunk except the last and the target vector is all the characters in that chunk except the first.
For example, consider that the string = 'tensorflow' and the max_length is 9
So, the input = 'tensorflo' and output = 'ensorflow'
After creating the vectors, we convert each character into numbers using the char2idx dictionary we created above.
End of explanation
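A quick way to sanity-check the vectorization is to map the first chunk back to characters with the idx2char dictionary defined above; the second line should simply be the first shifted by one character.
print(''.join(idx2char[i] for i in input_text[0]))
print(''.join(idx2char[t] for t in target_text[0]))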
dataset = tf.data.Dataset.from_tensor_slices((input_text, target_text)).shuffle(BUFFER_SIZE)
dataset = dataset.apply(tf.contrib.data.batch_and_drop_remainder(BATCH_SIZE))
Explanation: Creating batches and shuffling them using tf.data
End of explanation
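Since eager execution is enabled, the dataset can be iterated directly to confirm the batch shapes; both tensors below should come out as (64, 100), i.e. (BATCH_SIZE, max_length).
for inp, target in dataset.take(1):
    print(inp.shape, target.shape)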
class Model(tf.keras.Model):
def __init__(self, vocab_size, embedding_dim, units, batch_size):
super(Model, self).__init__()
self.units = units
self.batch_sz = batch_size
self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)
if tf.test.is_gpu_available():
self.gru = tf.keras.layers.CuDNNGRU(self.units,
return_sequences=True,
return_state=True,
recurrent_initializer='glorot_uniform')
else:
self.gru = tf.keras.layers.GRU(self.units,
return_sequences=True,
return_state=True,
recurrent_activation='sigmoid',
recurrent_initializer='glorot_uniform')
self.fc = tf.keras.layers.Dense(vocab_size)
def call(self, x, hidden):
x = self.embedding(x)
# output shape == (batch_size, max_length, hidden_size)
# states shape == (batch_size, hidden_size)
# states variable to preserve the state of the model
# this will be used to pass at every step to the model while training
output, states = self.gru(x, initial_state=hidden)
# reshaping the output so that we can pass it to the Dense layer
# after reshaping the shape is (batch_size * max_length, hidden_size)
output = tf.reshape(output, (-1, output.shape[2]))
# The dense layer will output predictions for every time_steps(max_length)
# output shape after the dense layer == (max_length * batch_size, vocab_size)
x = self.fc(output)
return x, states
Explanation: Creating the model
We use the Model Subclassing API which gives us full flexibility to create the model and change it however we like. We use 3 layers to define our model.
Embedding layer
GRU layer (you can use an LSTM layer here)
Fully connected layer
End of explanation
model = Model(vocab_size, embedding_dim, units, BATCH_SIZE)
optimizer = tf.train.AdamOptimizer()
# using sparse_softmax_cross_entropy so that we don't have to create one-hot vectors
def loss_function(real, preds):
return tf.losses.sparse_softmax_cross_entropy(labels=real, logits=preds)
Explanation: Call the model and set the optimizer and the loss function
End of explanation
# Training step
EPOCHS = 30
for epoch in range(EPOCHS):
start = time.time()
# initializing the hidden state at the start of every epoch
hidden = model.reset_states()
for (batch, (inp, target)) in enumerate(dataset):
with tf.GradientTape() as tape:
# feeding the hidden state back into the model
# This is the interesting step
predictions, hidden = model(inp, hidden)
# reshaping the target because that's how the
# loss function expects it
target = tf.reshape(target, (-1,))
loss = loss_function(target, predictions)
grads = tape.gradient(loss, model.variables)
optimizer.apply_gradients(zip(grads, model.variables), global_step=tf.train.get_or_create_global_step())
if batch % 100 == 0:
print ('Epoch {} Batch {} Loss {:.4f}'.format(epoch+1,
batch,
loss))
print ('Epoch {} Loss {:.4f}'.format(epoch+1, loss))
print('Time taken for 1 epoch {} sec\n'.format(time.time() - start))
Explanation: Train the model
Here we will use a custom training loop with the help of GradientTape()
We initialize the hidden state of the model with zeros and shape == (batch_size, number of rnn units). We do this by calling the function defined while creating the model.
Next, we iterate over the dataset(batch by batch) and calculate the predictions and the hidden states associated with that input.
There are a lot of interesting things happening here.
The model gets hidden state(initialized with 0), lets call that H0 and the first batch of input, lets call that I0.
The model then returns the predictions P1 and H1.
For the next batch of input, the model receives I1 and H1.
The interesting thing here is that we pass H1 to the model with I1 which is how the model learns. The context learned from batch to batch is contained in the hidden state.
We continue doing this until the dataset is exhausted and then we start a new epoch and repeat this.
After calculating the predictions, we calculate the loss using the loss function defined above. Then we calculate the gradients of the loss with respect to the model variables(input)
Finally, we take a step in that direction with the help of the optimizer using the apply_gradients function.
Note: if you are running this notebook in Colab, which has a Tesla K80 GPU, it takes about 23 seconds per epoch.
End of explanation
# Evaluation step(generating text using the model learned)
# number of characters to generate
num_generate = 1000
# You can change the start string to experiment
start_string = 'Q'
# converting our start string to numbers(vectorizing!)
input_eval = [char2idx[s] for s in start_string]
input_eval = tf.expand_dims(input_eval, 0)
# empty string to store our results
text_generated = ''
# low temperatures results in more predictable text.
# higher temperatures results in more surprising text
# experiment to find the best setting
temperature = 1.0
# hidden state shape == (batch_size, number of rnn units); here batch size == 1
hidden = [tf.zeros((1, units))]
for i in range(num_generate):
predictions, hidden = model(input_eval, hidden)
# using a multinomial distribution to predict the word returned by the model
predictions = predictions / temperature
predicted_id = tf.multinomial(tf.exp(predictions), num_samples=1)[0][0].numpy()
# We pass the predicted word as the next input to the model
# along with the previous hidden state
input_eval = tf.expand_dims([predicted_id], 0)
text_generated += idx2char[predicted_id]
print (start_string + text_generated)
Explanation: Predicting using our trained model
The below code block is used to generate the text
We start by choosing a start string and initializing the hidden state and setting the number of characters we want to generate.
We get predictions using the start_string and the hidden state
Then we use a multinomial distribution to calculate the index of the predicted word. We use this predicted word as our next input to the model
The hidden state returned by the model is fed back into the model so that it now has more context rather than just one word. After we predict the next word, the modified hidden states are again fed back into the model, which is how it learns as it gets more context from the previously predicted words.
If you see the predictions, the model knows when to capitalize and make paragraphs, and the text follows a Shakespearean style of writing, which is pretty awesome!
End of explanation |
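The temperature setting is worth experimenting with. The sketch below reuses the trained model and the helper objects from above to generate a short sample at a few different temperatures; it mirrors the generation loop exactly, just wrapped in an outer loop.
for temperature in (0.5, 1.0, 1.5):
    hidden = [tf.zeros((1, units))]
    input_eval = tf.expand_dims([char2idx['Q']], 0)
    sample = ''
    for i in range(200):
        predictions, hidden = model(input_eval, hidden)
        predictions = predictions / temperature
        predicted_id = tf.multinomial(tf.exp(predictions), num_samples=1)[0][0].numpy()
        input_eval = tf.expand_dims([predicted_id], 0)
        sample += idx2char[predicted_id]
    print('--- temperature', temperature, '---')
    print('Q' + sample)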
14,339 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sentiment Classification & How To "Frame Problems" for a Neural Network
by Andrew Trask
Twitter
Step1: Lesson
Step2: Project 1
Step3: Transforming Text into Numbers
Step4: Project 2
Step5: Project 3 | Python Code:
def pretty_print_review_and_label(i):
print(labels[i] + "\t:\t" + reviews[i][:80] + "...")
g = open('reviews.txt','r') # What we know!
reviews = list(map(lambda x:x[:-1],g.readlines()))
g.close()
g = open('labels.txt','r') # What we WANT to know!
labels = list(map(lambda x:x[:-1].upper(),g.readlines()))
g.close()
len(reviews)
reviews[0]
labels[0]
Explanation: Sentiment Classification & How To "Frame Problems" for a Neural Network
by Andrew Trask
Twitter: @iamtrask
Blog: http://iamtrask.github.io
What You Should Already Know
neural networks, forward and back-propagation
stochastic gradient descent
mean squared error
and train/test splits
Where to Get Help if You Need it
Re-watch previous Udacity Lectures
Leverage the recommended Course Reading Material - Grokking Deep Learning (40% Off: traskud17)
Shoot me a tweet @iamtrask
Tutorial Outline:
Intro: The Importance of "Framing a Problem"
Curate a Dataset
Developing a "Predictive Theory"
PROJECT 1: Quick Theory Validation
Transforming Text to Numbers
PROJECT 2: Creating the Input/Output Data
Putting it all together in a Neural Network
PROJECT 3: Building our Neural Network
Understanding Neural Noise
PROJECT 4: Making Learning Faster by Reducing Noise
Analyzing Inefficiencies in our Network
PROJECT 5: Making our Network Train and Run Faster
Further Noise Reduction
PROJECT 6: Reducing Noise by Strategically Reducing the Vocabulary
Analysis: What's going on in the weights?
Lesson: Curate a Dataset
End of explanation
print("labels.txt \t : \t reviews.txt\n")
pretty_print_review_and_label(2137)
pretty_print_review_and_label(12816)
pretty_print_review_and_label(6267)
pretty_print_review_and_label(21934)
pretty_print_review_and_label(5297)
pretty_print_review_and_label(4998)
Explanation: Lesson: Develop a Predictive Theory
End of explanation
from collections import Counter
import numpy as np
positive_counts = Counter()
negative_counts = Counter()
total_counts = Counter()
for i in range(len(reviews)):
if(labels[i] == 'POSITIVE'):
for word in reviews[i].split(" "):
positive_counts[word] += 1
total_counts[word] += 1
else:
for word in reviews[i].split(" "):
negative_counts[word] += 1
total_counts[word] += 1
positive_counts.most_common()
pos_neg_ratios = Counter()
for term,cnt in list(total_counts.most_common()):
if(cnt > 100):
pos_neg_ratio = positive_counts[term] / float(negative_counts[term]+1)
pos_neg_ratios[term] = pos_neg_ratio
for word,ratio in pos_neg_ratios.most_common():
if(ratio > 1):
pos_neg_ratios[word] = np.log(ratio)
else:
pos_neg_ratios[word] = -np.log((1 / (ratio+0.01)))
# words most frequently seen in a review with a "POSITIVE" label
pos_neg_ratios.most_common()
# words most frequently seen in a review with a "NEGATIVE" label
list(reversed(pos_neg_ratios.most_common()))[0:30]
Explanation: Project 1: Quick Theory Validation
End of explanation
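To spot-check the theory, it helps to look up a few obviously polarized words directly. A small sketch using the Counter built above; the exact values depend on the review data.
for word in ('amazing', 'excellent', 'terrible', 'boring'):
    print(word, pos_neg_ratios[word])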
from IPython.display import Image
review = "This was a horrible, terrible movie."
Image(filename='sentiment_network.png')
review = "The movie was excellent"
Image(filename='sentiment_network_pos.png')
Explanation: Transforming Text into Numbers
End of explanation
vocab = set(total_counts.keys())
vocab_size = len(vocab)
print(vocab_size)
list(vocab)
import numpy as np
layer_0 = np.zeros((1,vocab_size))
layer_0
from IPython.display import Image
Image(filename='sentiment_network.png')
word2index = {}
for i,word in enumerate(vocab):
word2index[word] = i
word2index
def update_input_layer(review):
global layer_0
# clear out previous state, reset the layer to be all 0s
layer_0 *= 0
for word in review.split(" "):
layer_0[0][word2index[word]] += 1
update_input_layer(reviews[0])
layer_0
def get_target_for_label(label):
if(label == 'POSITIVE'):
return 1
else:
return 0
labels[0]
get_target_for_label(labels[0])
labels[1]
get_target_for_label(labels[1])
Explanation: Project 2: Creating the Input/Output Data
End of explanation
import time
import sys
import numpy as np
# Let's tweak our network from before to model these phenomena
class SentimentNetwork:
def __init__(self, reviews,labels,hidden_nodes = 10, learning_rate = 0.1):
# set our random number generator
np.random.seed(1)
self.pre_process_data(reviews, labels)
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
def pre_process_data(self, reviews, labels):
review_vocab = set()
for review in reviews:
for word in review.split(" "):
review_vocab.add(word)
self.review_vocab = list(review_vocab)
label_vocab = set()
for label in labels:
label_vocab.add(label)
self.label_vocab = list(label_vocab)
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
self.word2index = {}
for i, word in enumerate(self.review_vocab):
self.word2index[word] = i
self.label2index = {}
for i, label in enumerate(self.label_vocab):
self.label2index[label] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_0_1 = np.zeros((self.input_nodes,self.hidden_nodes))
self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
self.learning_rate = learning_rate
self.layer_0 = np.zeros((1,input_nodes))
def update_input_layer(self,review):
# clear out previous state, reset the layer to be all 0s
self.layer_0 *= 0
for word in review.split(" "):
if(word in self.word2index.keys()):
self.layer_0[0][self.word2index[word]] = 1
def get_target_for_label(self,label):
if(label == 'POSITIVE'):
return 1
else:
return 0
def sigmoid(self,x):
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
return output * (1 - output)
def train(self, training_reviews, training_labels):
assert(len(training_reviews) == len(training_labels))
correct_so_far = 0
start = time.time()
for i in range(len(training_reviews)):
review = training_reviews[i]
label = training_labels[i]
#### Implement the forward pass here ####
### Forward pass ###
# Input Layer
self.update_input_layer(review)
# Hidden layer
layer_1 = self.layer_0.dot(self.weights_0_1)
# Output layer
layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2))
#### Implement the backward pass here ####
### Backward pass ###
# TODO: Output error
layer_2_error = layer_2 - self.get_target_for_label(label) # Output layer error is the difference between desired target and actual output.
layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)
# TODO: Backpropagated error
layer_1_error = layer_2_delta.dot(self.weights_1_2.T) # errors propagated to the hidden layer
layer_1_delta = layer_1_error # hidden layer gradients - no nonlinearity so it's the same as the error
# TODO: Update the weights
self.weights_1_2 -= layer_1.T.dot(layer_2_delta) * self.learning_rate # update hidden-to-output weights with gradient descent step
self.weights_0_1 -= self.layer_0.T.dot(layer_1_delta) * self.learning_rate # update input-to-hidden weights with gradient descent step
if(np.abs(layer_2_error) < 0.5):
correct_so_far += 1
reviews_per_second = i / float(time.time() - start)
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] + "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] + " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) + " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
correct = 0
start = time.time()
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
reviews_per_second = i / float(time.time() - start)
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ "% #Correct:" + str(correct) + " #Tested:" + str(i+1) + " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
# Input Layer
self.update_input_layer(review.lower())
# Hidden layer
layer_1 = self.layer_0.dot(self.weights_0_1)
# Output layer
layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2))
if(layer_2[0] > 0.5):
return "POSITIVE"
else:
return "NEGATIVE"
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
# evaluate our model before training (just to show how horrible it is)
mlp.test(reviews[-1000:],labels[-1000:])
# train the network
mlp.train(reviews[:-1000],labels[:-1000])
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.01)
# train the network
mlp.train(reviews[:-1000],labels[:-1000])
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.001)
# train the network
mlp.train(reviews[:-1000],labels[:-1000])
Explanation: Project 3: Building a Neural Network
Start with your neural network from the last chapter
3 layer neural network
no non-linearity in hidden layer
use our functions to create the training data
create a "pre_process_data" function to create vocabulary for our training data generating functions
modify "train" to train over the entire corpus
Where to Get Help if You Need it
Re-watch previous week's Udacity Lectures
Chapters 3-5 - Grokking Deep Learning - (40% Off: traskud17)
End of explanation |
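After any of the training runs above, the held-out reviews can be scored again to see whether accuracy has moved above the roughly 50% starting point; a one-line check reusing the network's own test method.
mlp.test(reviews[-1000:],labels[-1000:])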
14,340 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<img src="http
Step1: Market Environment and Portfolio Object
We start by instantiating a market environment object which in particular contains a list of ticker symbols in which we are interested in.
Step2: Using pandas under the hood, the class retrieves historial stock price data from either Yahoo! Finance of Google.
Step3: Basic Statistics
Since no portfolio weights have been provided, the class defaults to equal weights.
Step4: Given these weights you can calculate the portfolio return via the method get_portfolio_return.
Step5: Analogously, you can call get_portfolio_variance to get the historical portfolio variance.
Step6: The class also has a neatly printable string representation.
Step7: Setting Weights
Via the method set_weights the weights of the single portfolio components can be adjusted.
Step8: You cal also easily check results for different weights with changing the attribute values of an object.
Step9: Let us implement a Monte Carlo simulation over potential portfolio weights.
Step10: And the simulation results visualized.
Step11: Optimizing Portfolio Composition
One of the major application areas of the mean-variance portfolio theory and therewith of this DX Analytics class it the optimization of the portfolio composition. Different target functions can be used to this end.
Return
The first target function might be the portfolio return.
Step12: Instead of maximizing the portfolio return without any constraints, you can also set a (sensible/possible) maximum target volatility level as a constraint. Both, in an exact sense ("equality constraint") ...
Step13: ... or just a an upper bound ("inequality constraint").
Step14: Risk
The class also allows you to minimize portfolio risk.
Step15: And, as before, to set constraints (in this case) for the target return level.
Step16: Sharpe Ratio
Often, the target of the portfolio optimization efforts is the so called Sharpe ratio. The mean_variance_portfolio class of DX Analytics assumes a risk-free rate of zero in this context.
Step17: Efficient Frontier
Another application area is to derive the efficient frontier in the mean-variance space. These are all these portfolios for which there is no portfolio with both lower risk and higher return. The method get_efficient_frontier yields the desired results.
Step18: The plot with the random and efficient portfolios.
Step19: Capital Market Line
The capital market line is another key element of the mean-variance portfolio approach representing all those risk-return combinations (in mean-variance space) that are possible to form from a risk-less money market account and the market portfolio (or another appropriate substitute efficient portfolio).
Step20: The following plot illustrates that the capital market line has an ordinate value equal to the risk-free rate (the safe return of the money market account) and is tangent to the efficient frontier.
Step21: Portfolio return and risk of the efficient portfolio used are
Step22: The portfolio composition can be derived as follows.
Step23: Or also in this way.
Step24: Dow Jones Industrial Average
As a larger, more realistic example, consider all symbols of the Dow Jones Industrial Average 30 index.
Step25: Data retrieval in this case takes a bit.
Step26: Given the larger data set now used, efficient frontier ...
Step27: ... and capital market line derivations take also longer. | Python Code:
from dx import *
import seaborn as sns; sns.set()
Explanation: <img src="http://hilpisch.com/tpq_logo.png" alt="The Python Quants" width="45%" align="right" border="4">
Mean-Variance Portfolio Class
Without doubt, the Markowitz (1952) mean-variance portfolio theory is a cornerstone of modern financial theory. This section illustrates the use of the mean_variance_portfolio class to implement this approach.
End of explanation
ma = market_environment('ma', dt.date(2010, 1, 1))
ma.add_list('symbols', ['AAPL', 'GOOG', 'MSFT', 'FB'])
ma.add_constant('source', 'google')
ma.add_constant('final date', dt.date(2014, 3, 1))
Explanation: Market Environment and Portfolio Object
We start by instantiating a market environment object which in particular contains a list of ticker symbols in which we are interested.
End of explanation
%%time
port = mean_variance_portfolio('am_tech_stocks', ma)
# instantiates the portfolio class
# and retrieves all the time series data needed
Explanation: Using pandas under the hood, the class retrieves historical stock price data from either Yahoo! Finance or Google.
End of explanation
port.get_weights()
# defaults to equal weights
Explanation: Basic Statistics
Since no portfolio weights have been provided, the class defaults to equal weights.
End of explanation
port.get_portfolio_return()
# expected (= historical mean) return
Explanation: Given these weights you can calculate the portfolio return via the method get_portfolio_return.
End of explanation
port.get_portfolio_variance()
# expected (= historical) variance
Explanation: Analogously, you can call get_portfolio_variance to get the historical portfolio variance.
End of explanation
print port
# ret. con. is "return contribution"
# given the mean return and the weight
# of the security
Explanation: The class also has a neatly printable string representation.
End of explanation
port.set_weights([0.6, 0.2, 0.1, 0.1])
print port
Explanation: Setting Weights
Via the method set_weights the weights of the single portfolio components can be adjusted.
End of explanation
port.test_weights([0.6, 0.2, 0.1, 0.1])
# returns av. return + vol + Sharpe ratio
# without setting new weights
Explanation: You can also easily check results for different weights by changing the attribute values of an object.
End of explanation
# Monte Carlo simulation of portfolio compositions
rets = []
vols = []
for w in range(500):
weights = np.random.random(4)
weights /= sum(weights)
r, v, sr = port.test_weights(weights)
rets.append(r)
vols.append(v)
rets = np.array(rets)
vols = np.array(vols)
Explanation: Let us implement a Monte Carlo simulation over potential portfolio weights.
End of explanation
import matplotlib.pyplot as plt
%matplotlib inline
plt.figure(figsize=(10, 6))
plt.scatter(vols, rets, c=rets / vols, marker='o')
plt.xlabel('expected volatility')
plt.ylabel('expected return')
plt.colorbar(label='Sharpe ratio')
Explanation: And the simulation results visualized.
End of explanation
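The best of the randomly simulated portfolios already gives a rough lower bound for the Sharpe-optimal result derived below; a small check using the rets and vols arrays from the simulation above.
ibest = np.argmax(rets / vols)
rets[ibest], vols[ibest], (rets / vols)[ibest]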
port.optimize('Return')
# maximizes expected return of portfolio
# no volatility constraint
print port
Explanation: Optimizing Portfolio Composition
One of the major application areas of the mean-variance portfolio theory, and hence of this DX Analytics class, is the optimization of the portfolio composition. Different target functions can be used to this end.
Return
The first target function might be the portfolio return.
End of explanation
port.optimize('Return', constraint=0.225, constraint_type='Exact')
# interpretes volatility constraint as equality
print port
Explanation: Instead of maximizing the portfolio return without any constraints, you can also set a (sensible/possible) maximum target volatility level as a constraint. Both, in an exact sense ("equality constraint") ...
End of explanation
port.optimize('Return', constraint=0.4, constraint_type='Bound')
# interpretes volatility constraint as inequality (upper bound)
print port
Explanation: ... or just a an upper bound ("inequality constraint").
End of explanation
port.optimize('Vol')
# minimizes expected volatility of portfolio
# no return constraint
print port
Explanation: Risk
The class also allows you to minimize portfolio risk.
End of explanation
port.optimize('Vol', constraint=0.175, constraint_type='Exact')
# interpretes return constraint as equality
print port
port.optimize('Vol', constraint=0.20, constraint_type='Bound')
# interpretes return constraint as inequality (upper bound)
print port
Explanation: And, as before, to set constraints (in this case) for the target return level.
End of explanation
port.optimize('Sharpe')
# maximize Sharpe ratio
print port
Explanation: Sharpe Ratio
Often, the target of the portfolio optimization efforts is the so called Sharpe ratio. The mean_variance_portfolio class of DX Analytics assumes a risk-free rate of zero in this context.
End of explanation
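Since a zero risk-free rate is assumed here, the reported Sharpe ratio can be cross-checked by hand from the portfolio statistics; the value below should agree with the one printed above.
port.get_portfolio_return() / port.get_portfolio_variance() ** 0.5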
%%time
evols, erets = port.get_efficient_frontier(100)
# 100 points of the efficient frontier
Explanation: Efficient Frontier
Another application area is to derive the efficient frontier in mean-variance space. These are all those portfolios for which there is no other portfolio with both lower risk and higher return. The method get_efficient_frontier yields the desired results.
End of explanation
plt.figure(figsize=(10, 6))
plt.scatter(vols, rets, c=rets / vols, marker='o')
plt.scatter(evols, erets, c=erets / evols, marker='x')
plt.xlabel('expected volatility')
plt.ylabel('expected return')
plt.colorbar(label='Sharpe ratio')
Explanation: The plot with the random and efficient portfolios.
End of explanation
%%time
cml, optv, optr = port.get_capital_market_line(riskless_asset=0.05)
# capital market line for efficient frontier and risk-less short rate
cml # lambda function for capital market line
Explanation: Capital Market Line
The capital market line is another key element of the mean-variance portfolio approach representing all those risk-return combinations (in mean-variance space) that are possible to form from a risk-less money market account and the market portfolio (or another appropriate substitute efficient portfolio).
End of explanation
plt.figure(figsize=(10, 6))
plt.plot(evols, erets, lw=2.0, label='efficient frontier')
plt.plot((0, 0.4), (cml(0), cml(0.4)), lw=2.0, label='capital market line')
plt.plot(optv, optr, 'r*', markersize=10, label='optimal portfolio')
plt.legend(loc=0)
plt.ylim(0)
plt.xlabel('expected volatility')
plt.ylabel('expected return')
Explanation: The following plot illustrates that the capital market line has an ordinate value equal to the risk-free rate (the safe return of the money market account) and is tangent to the efficient frontier.
End of explanation
optr
optv
Explanation: Portfolio return and risk of the efficient portfolio used are:
End of explanation
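Because the optimal portfolio is the tangency point, evaluating the capital market line at its volatility should give back (approximately) its return; a quick check with the lambda function returned above.
cml(optv), optr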
port.optimize('Vol', constraint=optr, constraint_type='Exact')
print port
Explanation: The portfolio composition can be derived as follows.
End of explanation
port.optimize('Return', constraint=optv, constraint_type='Exact')
print port
Explanation: Or also in this way.
End of explanation
symbols = ['AXP', 'BA', 'CAT', 'CSCO', 'CVX', 'DD', 'DIS', 'GE',
'GS', 'HD', 'IBM', 'INTC', 'JNJ', 'JPM', 'KO', 'MCD', 'MMM',
'MRK', 'MSFT', 'NKE', 'PFE', 'PG', 'T', 'TRV', 'UNH', 'UTX',
'V', 'VZ','WMT', 'XOM']
# all DJIA 30 symbols
ma = market_environment('ma', dt.date(2010, 1, 1))
ma.add_list('symbols', symbols)
ma.add_constant('source', 'google')
ma.add_constant('final date', dt.date(2014, 3, 1))
Explanation: Dow Jones Industrial Average
As a larger, more realistic example, consider all symbols of the Dow Jones Industrial Average 30 index.
End of explanation
%%time
djia = mean_variance_portfolio('djia', ma)
# defining the portfolio and retrieving the data
%%time
djia.optimize('Vol')
print djia.variance, djia.variance ** 0.5
# minimum variance & volatility in decimals
Explanation: Data retrieval in this case takes a bit.
End of explanation
%%time
evols, erets = djia.get_efficient_frontier(25)
# efficient frontier of DJIA
Explanation: Given the larger data set now used, the efficient frontier ...
End of explanation
%%time
cml, optv, optr = djia.get_capital_market_line(riskless_asset=0.01)
# capital market line and optimal (tangent) portfolio
plt.figure(figsize=(10, 6))
plt.plot(evols, erets, lw=2.0, label='efficient frontier')
plt.plot((0, 0.4), (cml(0), cml(0.4)), lw=2.0, label='capital market line')
plt.plot(optv, optr, 'r*', markersize=10, label='optimal portfolio')
plt.legend(loc=0)
plt.ylim(0)
plt.xlabel('expected volatility')
plt.ylabel('expected return')
Explanation: ... and capital market line derivations also take longer.
End of explanation |
14,341 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This is a tutorial on how to compare machine learning methods with the python library scikit-learn. We'll be using the Indian Liver Disease dataset (found here https
Step1: We'll use all columns except Gender for this tutorial. We could use gender by converting it to a numeric value (e.g., 0 for Male, 1 for Female) but for the purposes of this post, we'll just skip this column.
Step2: The 'Dataset' column is the value we are trying to predict...whether the user has liver disease or not, so we'll use that as our "Y" and the other columns for our "X" array.
Step3: Before we run our machine learning models, we need to set a random number to use to seed them. This can be any random number that you'd like it to be. Some people like to use a random number generator but for the purposes of this, I'll just set it to 12 (it could just as easily be 1 or 3 or 1023 or any other number).
Step4: Now we need to set up our models that we'll be testing out. We'll set up a list of the models and give them each a name. Additionally, I'm going to set up the blank arrays/lists for the outcomes and the names of the models to use for comparison.
Step5: We are going to use a k-fold validation to evaluate each algorithm and will run through each model with a for loop, running the analysis and then storing the outcomes into the lists we created above. We'll use a 10-fold cross validation.
Step6: From the above, it looks like the Logistic Regression, Support Vector Machine and Linear Discrimation Analysis methods are providing the best results. If we take a look at a box plot to see what the accuracy is for each cross validation fold, we can see just how good each does relative to each other and their means. | Python Code:
import pandas as pd
import matplotlib.pyplot as plt
plt.rcParams["figure.figsize"] = (20,10)
from sklearn import model_selection
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
#read in the data
data = pd.read_csv('indian_liver_patient.csv')
data.head()
Explanation: This is a tutorial on how to compare machine learning methods with the python library scikit-learn. We'll be using the Indian Liver Disease dataset (found here https://www.kaggle.com/uciml/indian-liver-patient-records).
From the dataset page:
"This data set contains 416 liver patient records and 167 non liver patient records collected from North East of Andhra Pradesh, India. The "Dataset" column is a class label used to divide groups into liver patient (liver disease) or not (no disease). This data set contains 441 male patient records and 142 female patient records."
I've used Jason Brownlee's article (https://machinelearningmastery.com/compare-machine-learning-algorithms-python-scikit-learn/) from 2016 as the basis for this article...I wanted to expand a bit on what he did as well as use a different dataset.
End of explanation
data_to_use = data
del data_to_use['Gender']
data_to_use.dropna(inplace=True)
data_to_use.head()
Explanation: We'll use all columns except Gender for this tutorial. We could use gender by converting it to a numeric value (e.g., 0 for Male, 1 for Female) but for the purposes of this post, we'll just skip this column.
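If you did want to keep it, one quick sketch (assuming the values are spelled exactly 'Male'/'Female', as in this dataset) would be something like
data['Gender'] = data['Gender'].map({'Male': 0, 'Female': 1})
but for this walkthrough we simply drop the column.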
End of explanation
values = data_to_use.values
Y = values[:,9]
X = values[:,0:9]
Explanation: The 'Dataset' column is the value we are trying to predict...whether the user has liver disease or not, so we'll use that as our "Y" and the other columns for our "X" array.
End of explanation
random_seed = 12
Explanation: Before we run our machine learning models, we need to set a random number to use to seed them. This can be any random number that you'd like it to be. Some people like to use a random number generator but for the purposes of this, I'll just set it to 12 (it could just as easily be 1 or 3 or 1023 or any other number).
End of explanation
outcome = []
model_names = []
models = [('LogReg', LogisticRegression()),
('SVM', SVC()),
('DecTree', DecisionTreeClassifier()),
('KNN', KNeighborsClassifier()),
('LinDisc', LinearDiscriminantAnalysis()),
('GaussianNB', GaussianNB())]
Explanation: Now we need to set up our models that we'll be testing out. We'll set up a list of the models and give them each a name. Additionally, I'm going to set up the blank arrays/lists for the outcomes and the names of the models to use for comparison.
End of explanation
for model_name, model in models:
k_fold_validation = model_selection.KFold(n_splits=10, shuffle=True, random_state=random_seed)  # shuffle=True so the random seed actually takes effect
results = model_selection.cross_val_score(model, X, Y, cv=k_fold_validation, scoring='accuracy')
outcome.append(results)
model_names.append(model_name)
output_message = "%s| Mean=%f STD=%f" % (model_name, results.mean(), results.std())
print(output_message)
Explanation: We are going to use a k-fold validation to evaluate each algorithm and will run through each model with a for loop, running the analysis and then storing the outcomes into the lists we created above. We'll use a 10-fold cross validation.
End of explanation
fig = plt.figure()
fig.suptitle('Machine Learning Model Comparison')
ax = fig.add_subplot(111)
plt.boxplot(outcome)
ax.set_xticklabels(model_names)
plt.show()
Explanation: From the above, it looks like the Logistic Regression, Support Vector Machine and Linear Discrimation Analysis methods are providing the best results. If we take a look at a box plot to see what the accuracy is for each cross validation fold, we can see just how good each does relative to each other and their means.
End of explanation |
14,342 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Convolutional Networks
So far we have worked with deep fully-connected networks, using them to explore different optimization strategies and network architectures. Fully-connected networks are a good testbed for experimentation because they are very computationally efficient, but in practice all state-of-the-art results use convolutional networks instead.
First you will implement several layer types that are used in convolutional networks. You will then use these layers to train a convolutional network on the CIFAR-10 dataset.
Step2: Convolution
Step4: Aside
Step5: Convolution
Step6: Max pooling
Step7: Max pooling
Step8: Fast layers
Making convolution and pooling layers fast can be challenging. To spare you the pain, we've provided fast implementations of the forward and backward passes for convolution and pooling layers in the file cs231n/fast_layers.py.
The fast convolution implementation depends on a Cython extension; to compile it you need to run the following from the cs231n directory
Step9: Convolutional "sandwich" layers
Previously we introduced the concept of "sandwich" layers that combine multiple operations into commonly used patterns. In the file cs231n/layer_utils.py you will find sandwich layers that implement a few commonly used patterns for convolutional networks.
Step10: Three-layer ConvNet
Now that you have implemented all the necessary layers, we can put them together into a simple convolutional network.
Open the file cs231n/cnn.py and complete the implementation of the ThreeLayerConvNet class. Run the following cells to help you debug
Step11: Gradient check
After the loss looks reasonable, use numeric gradient checking to make sure that your backward pass is correct. When you use numeric gradient checking you should use a small amount of artifical data and a small number of neurons at each layer.
Step12: Overfit small data
A nice trick is to train your model with just a few training samples. You should be able to overfit small datasets, which will result in very high training accuracy and comparatively low validation accuracy.
Step13: Plotting the loss, training accuracy, and validation accuracy should show clear overfitting
Step14: Train the net
By training the three-layer convolutional network for one epoch, you should achieve greater than 40% accuracy on the training set
Step15: Visualize Filters
You can visualize the first-layer convolutional filters from the trained network by running the following
Step16: Spatial Batch Normalization
We already saw that batch normalization is a very useful technique for training deep fully-connected networks. Batch normalization can also be used for convolutional networks, but we need to tweak it a bit; the modification will be called "spatial batch normalization."
Normally batch-normalization accepts inputs of shape (N, D) and produces outputs of shape (N, D), where we normalize across the minibatch dimension N. For data coming from convolutional layers, batch normalization needs to accept inputs of shape (N, C, H, W) and produce outputs of shape (N, C, H, W) where the N dimension gives the minibatch size and the (H, W) dimensions give the spatial size of the feature map.
If the feature map was produced using convolutions, then we expect the statistics of each feature channel to be relatively consistent both between different images and different locations within the same image. Therefore spatial batch normalization computes a mean and variance for each of the C feature channels by computing statistics over both the minibatch dimension N and the spatial dimensions H and W.
Spatial batch normalization
Step17: Spatial batch normalization
Step18: Experiment!
Experiment and try to get the best performance that you can on CIFAR-10 using a ConvNet. Here are some ideas to get you started | Python Code:
# As usual, a bit of setup
import numpy as np
import matplotlib.pyplot as plt
from cs231n.classifiers.cnn import *
from cs231n.data_utils import get_CIFAR10_data
from cs231n.gradient_check import eval_numerical_gradient_array, eval_numerical_gradient
from cs231n.layers import *
from cs231n.fast_layers import *
from cs231n.solver import Solver
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
"""returns relative error"""
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
# Load the (preprocessed) CIFAR10 data.
data = get_CIFAR10_data()
for k, v in data.iteritems():
print '%s: ' % k, v.shape
Explanation: Convolutional Networks
So far we have worked with deep fully-connected networks, using them to explore different optimization strategies and network architectures. Fully-connected networks are a good testbed for experimentation because they are very computationally efficient, but in practice all state-of-the-art results use convolutional networks instead.
First you will implement several layer types that are used in convolutional networks. You will then use these layers to train a convolutional network on the CIFAR-10 dataset.
End of explanation
x_shape = (2, 3, 4, 4)
w_shape = (3, 3, 4, 4)
x = np.linspace(-0.1, 0.5, num=np.prod(x_shape)).reshape(x_shape)
w = np.linspace(-0.2, 0.3, num=np.prod(w_shape)).reshape(w_shape)
b = np.linspace(-0.1, 0.2, num=3)
conv_param = {'stride': 2, 'pad': 1}
out, _ = conv_forward_naive(x, w, b, conv_param)
correct_out = np.array([[[[-0.08759809, -0.10987781],
[-0.18387192, -0.2109216 ]],
[[ 0.21027089, 0.21661097],
[ 0.22847626, 0.23004637]],
[[ 0.50813986, 0.54309974],
[ 0.64082444, 0.67101435]]],
[[[-0.98053589, -1.03143541],
[-1.19128892, -1.24695841]],
[[ 0.69108355, 0.66880383],
[ 0.59480972, 0.56776003]],
[[ 2.36270298, 2.36904306],
[ 2.38090835, 2.38247847]]]])
# Compare your output to ours; difference should be around 1e-8
print 'Testing conv_forward_naive'
print 'difference: ', rel_error(out, correct_out)
Explanation: Convolution: Naive forward pass
The core of a convolutional network is the convolution operation. In the file cs231n/layers.py, implement the forward pass for the convolution layer in the function conv_forward_naive.
You don't have to worry too much about efficiency at this point; just write the code in whatever way you find most clear.
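A handy sanity check while writing the loops is the standard convolution output-size formula (not specific to this starter code): with padding pad and stride stride,
H_out = 1 + (H + 2 * pad - HH) // stride
W_out = 1 + (W + 2 * pad - WW) // stride
where (HH, WW) is the filter size; for the test below (H = W = 4, HH = WW = 4, pad = 1, stride = 2) this gives a 2 x 2 output per filter.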
You can test your implementation by running the following:
End of explanation
from scipy.misc import imread, imresize
kitten, puppy = imread('kitten.jpg'), imread('puppy.jpg')
# kitten is wide, and puppy is already square
d = kitten.shape[1] - kitten.shape[0]
kitten_cropped = kitten[:, d/2:-d/2, :]
img_size = 200 # Make this smaller if it runs too slow
x = np.zeros((2, 3, img_size, img_size))
x[0, :, :, :] = imresize(puppy, (img_size, img_size)).transpose((2, 0, 1))
x[1, :, :, :] = imresize(kitten_cropped, (img_size, img_size)).transpose((2, 0, 1))
# Set up a convolutional weights holding 2 filters, each 3x3
w = np.zeros((2, 3, 3, 3))
# The first filter converts the image to grayscale.
# Set up the red, green, and blue channels of the filter.
w[0, 0, :, :] = [[0, 0, 0], [0, 0.3, 0], [0, 0, 0]]
w[0, 1, :, :] = [[0, 0, 0], [0, 0.6, 0], [0, 0, 0]]
w[0, 2, :, :] = [[0, 0, 0], [0, 0.1, 0], [0, 0, 0]]
# Second filter detects horizontal edges in the blue channel.
w[1, 2, :, :] = [[1, 2, 1], [0, 0, 0], [-1, -2, -1]]
# Vector of biases. We don't need any bias for the grayscale
# filter, but for the edge detection filter we want to add 128
# to each output so that nothing is negative.
b = np.array([0, 128])
# Compute the result of convolving each input in x with each filter in w,
# offsetting by b, and storing the results in out.
out, _ = conv_forward_naive(x, w, b, {'stride': 1, 'pad': 1})
def imshow_noax(img, normalize=True):
"""Tiny helper to show images as uint8 and remove axis labels"""
if normalize:
img_max, img_min = np.max(img), np.min(img)
img = 255.0 * (img - img_min) / (img_max - img_min)
plt.imshow(img.astype('uint8'))
plt.gca().axis('off')
# Show the original images and the results of the conv operation
plt.subplot(2, 3, 1)
imshow_noax(puppy, normalize=False)
plt.title('Original image')
plt.subplot(2, 3, 2)
imshow_noax(out[0, 0])
plt.title('Grayscale')
plt.subplot(2, 3, 3)
imshow_noax(out[0, 1])
plt.title('Edges')
plt.subplot(2, 3, 4)
imshow_noax(kitten_cropped, normalize=False)
plt.subplot(2, 3, 5)
imshow_noax(out[1, 0])
plt.subplot(2, 3, 6)
imshow_noax(out[1, 1])
plt.show()
Explanation: Aside: Image processing via convolutions
As a fun way to both check your implementation and gain a better understanding of the type of operation that convolutional layers can perform, we will set up an input containing two images and manually set up filters that perform common image processing operations (grayscale conversion and edge detection). The convolution forward pass will apply these operations to each of the input images. We can then visualize the results as a sanity check.
End of explanation
x = np.random.randn(4, 3, 5, 5)
w = np.random.randn(2, 3, 3, 3)
b = np.random.randn(2,)
dout = np.random.randn(4, 2, 5, 5)
conv_param = {'stride': 1, 'pad': 1}
dx_num = eval_numerical_gradient_array(lambda x: conv_forward_naive(x, w, b, conv_param)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: conv_forward_naive(x, w, b, conv_param)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: conv_forward_naive(x, w, b, conv_param)[0], b, dout)
out, cache = conv_forward_naive(x, w, b, conv_param)
dx, dw, db = conv_backward_naive(dout, cache)
# Your errors should be around 1e-9'
print 'Testing conv_backward_naive function'
print 'dx error: ', rel_error(dx, dx_num)
print 'dw error: ', rel_error(dw, dw_num)
print 'db error: ', rel_error(db, db_num)
Explanation: Convolution: Naive backward pass
Implement the backward pass for the convolution operation in the function conv_backward_naive in the file cs231n/layers.py. Again, you don't need to worry too much about computational efficiency.
When you are done, run the following to check your backward pass with a numeric gradient check.
End of explanation
x_shape = (2, 3, 4, 4)
x = np.linspace(-0.3, 0.4, num=np.prod(x_shape)).reshape(x_shape)
pool_param = {'pool_width': 2, 'pool_height': 2, 'stride': 2}
out, _ = max_pool_forward_naive(x, pool_param)
correct_out = np.array([[[[-0.26315789, -0.24842105],
[-0.20421053, -0.18947368]],
[[-0.14526316, -0.13052632],
[-0.08631579, -0.07157895]],
[[-0.02736842, -0.01263158],
[ 0.03157895, 0.04631579]]],
[[[ 0.09052632, 0.10526316],
[ 0.14947368, 0.16421053]],
[[ 0.20842105, 0.22315789],
[ 0.26736842, 0.28210526]],
[[ 0.32631579, 0.34105263],
[ 0.38526316, 0.4 ]]]])
# Compare your output with ours. Difference should be around 1e-8.
print 'Testing max_pool_forward_naive function:'
print 'difference: ', rel_error(out, correct_out)
Explanation: Max pooling: Naive forward
Implement the forward pass for the max-pooling operation in the function max_pool_forward_naive in the file cs231n/layers.py. Again, don't worry too much about computational efficiency.
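As with convolution, it helps to check the expected output size first: H_out = 1 + (H - pool_height) // stride (and analogously for the width), so the 4 x 4 inputs in the test below with a 2 x 2 pool and stride 2 produce 2 x 2 outputs per channel.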
Check your implementation by running the following:
End of explanation
x = np.random.randn(3, 2, 8, 8)
dout = np.random.randn(3, 2, 4, 4)
pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2}
dx_num = eval_numerical_gradient_array(lambda x: max_pool_forward_naive(x, pool_param)[0], x, dout)
out, cache = max_pool_forward_naive(x, pool_param)
dx = max_pool_backward_naive(dout, cache)
# Your error should be around 1e-12
print 'Testing max_pool_backward_naive function:'
print 'dx error: ', rel_error(dx, dx_num)
Explanation: Max pooling: Naive backward
Implement the backward pass for the max-pooling operation in the function max_pool_backward_naive in the file cs231n/layers.py. You don't need to worry about computational efficiency.
Check your implementation with numeric gradient checking by running the following:
End of explanation
from cs231n.fast_layers import conv_forward_fast, conv_backward_fast
from time import time
x = np.random.randn(100, 3, 31, 31)
w = np.random.randn(25, 3, 3, 3)
b = np.random.randn(25,)
dout = np.random.randn(100, 25, 16, 16)
conv_param = {'stride': 2, 'pad': 1}
t0 = time()
out_naive, cache_naive = conv_forward_naive(x, w, b, conv_param)
t1 = time()
out_fast, cache_fast = conv_forward_fast(x, w, b, conv_param)
t2 = time()
print 'Testing conv_forward_fast:'
print 'Naive: %fs' % (t1 - t0)
print 'Fast: %fs' % (t2 - t1)
print 'Speedup: %fx' % ((t1 - t0) / (t2 - t1))
print 'Difference: ', rel_error(out_naive, out_fast)
t0 = time()
dx_naive, dw_naive, db_naive = conv_backward_naive(dout, cache_naive)
t1 = time()
dx_fast, dw_fast, db_fast = conv_backward_fast(dout, cache_fast)
t2 = time()
print '\nTesting conv_backward_fast:'
print 'Naive: %fs' % (t1 - t0)
print 'Fast: %fs' % (t2 - t1)
print 'Speedup: %fx' % ((t1 - t0) / (t2 - t1))
print 'dx difference: ', rel_error(dx_naive, dx_fast)
print 'dw difference: ', rel_error(dw_naive, dw_fast)
print 'db difference: ', rel_error(db_naive, db_fast)
from cs231n.fast_layers import max_pool_forward_fast, max_pool_backward_fast
x = np.random.randn(100, 3, 32, 32)
dout = np.random.randn(100, 3, 16, 16)
pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2}
t0 = time()
out_naive, cache_naive = max_pool_forward_naive(x, pool_param)
t1 = time()
out_fast, cache_fast = max_pool_forward_fast(x, pool_param)
t2 = time()
print 'Testing pool_forward_fast:'
print 'Naive: %fs' % (t1 - t0)
print 'fast: %fs' % (t2 - t1)
print 'speedup: %fx' % ((t1 - t0) / (t2 - t1))
print 'difference: ', rel_error(out_naive, out_fast)
t0 = time()
dx_naive = max_pool_backward_naive(dout, cache_naive)
t1 = time()
dx_fast = max_pool_backward_fast(dout, cache_fast)
t2 = time()
print '\nTesting pool_backward_fast:'
print 'Naive: %fs' % (t1 - t0)
print 'speedup: %fx' % ((t1 - t0) / (t2 - t1))
print 'dx difference: ', rel_error(dx_naive, dx_fast)
Explanation: Fast layers
Making convolution and pooling layers fast can be challenging. To spare you the pain, we've provided fast implementations of the forward and backward passes for convolution and pooling layers in the file cs231n/fast_layers.py.
The fast convolution implementation depends on a Cython extension; to compile it you need to run the following from the cs231n directory:
bash
python setup.py build_ext --inplace
The API for the fast versions of the convolution and pooling layers is exactly the same as the naive versions that you implemented above: the forward pass receives data, weights, and parameters and produces outputs and a cache object; the backward pass recieves upstream derivatives and the cache object and produces gradients with respect to the data and weights.
NOTE: The fast implementation for pooling will only perform optimally if the pooling regions are non-overlapping and tile the input. If these conditions are not met then the fast pooling implementation will not be much faster than the naive implementation.
You can compare the performance of the naive and fast versions of these layers by running the following:
End of explanation
from cs231n.layer_utils import conv_relu_pool_forward, conv_relu_pool_backward
x = np.random.randn(2, 3, 16, 16)
w = np.random.randn(3, 3, 3, 3)
b = np.random.randn(3,)
dout = np.random.randn(2, 3, 8, 8)
conv_param = {'stride': 1, 'pad': 1}
pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2}
out, cache = conv_relu_pool_forward(x, w, b, conv_param, pool_param)
dx, dw, db = conv_relu_pool_backward(dout, cache)
dx_num = eval_numerical_gradient_array(lambda x: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], b, dout)
print 'Testing conv_relu_pool'
print 'dx error: ', rel_error(dx_num, dx)
print 'dw error: ', rel_error(dw_num, dw)
print 'db error: ', rel_error(db_num, db)
from cs231n.layer_utils import conv_relu_forward, conv_relu_backward
x = np.random.randn(2, 3, 8, 8)
w = np.random.randn(3, 3, 3, 3)
b = np.random.randn(3,)
dout = np.random.randn(2, 3, 8, 8)
conv_param = {'stride': 1, 'pad': 1}
out, cache = conv_relu_forward(x, w, b, conv_param)
dx, dw, db = conv_relu_backward(dout, cache)
dx_num = eval_numerical_gradient_array(lambda x: conv_relu_forward(x, w, b, conv_param)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: conv_relu_forward(x, w, b, conv_param)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: conv_relu_forward(x, w, b, conv_param)[0], b, dout)
print 'Testing conv_relu:'
print 'dx error: ', rel_error(dx_num, dx)
print 'dw error: ', rel_error(dw_num, dw)
print 'db error: ', rel_error(db_num, db)
Explanation: Convolutional "sandwich" layers
Previously we introduced the concept of "sandwich" layers that combine multiple operations into commonly used patterns. In the file cs231n/layer_utils.py you will find sandwich layers that implement a few commonly used patterns for convolutional networks.
End of explanation
model = ThreeLayerConvNet()
N = 50
X = np.random.randn(N, 3, 32, 32)
y = np.random.randint(10, size=N)
loss, grads = model.loss(X, y)
print 'Initial loss (no regularization): ', loss, np.log(10)
model.reg = 0.5
loss, grads = model.loss(X, y)
print 'Initial loss (with regularization): ', loss, np.log(10)
Explanation: Three-layer ConvNet
Now that you have implemented all the necessary layers, we can put them together into a simple convolutional network.
Open the file cs231n/cnn.py and complete the implementation of the ThreeLayerConvNet class. Run the following cells to help you debug:
Sanity check loss
After you build a new network, one of the first things you should do is sanity check the loss. When we use the softmax loss, we expect the loss for random weights (and no regularization) to be about log(C) for C classes. When we add regularization this should go up.
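For CIFAR-10 this means C = 10, so the expected initial loss is roughly ln(10) ≈ 2.3026; the check prints np.log(10) next to the computed loss so you can compare the two directly.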
End of explanation
num_inputs = 2
input_dim = (3, 16, 16)
reg = 0.0
num_classes = 10
X = np.random.randn(num_inputs, *input_dim)
y = np.random.randint(num_classes, size=num_inputs)
model = ThreeLayerConvNet(num_filters=3, filter_size=3,
input_dim=input_dim, hidden_dim=7,
dtype=np.float64)
loss, grads = model.loss(X, y)
for param_name in sorted(grads):
f = lambda _: model.loss(X, y)[0]
param_grad_num = eval_numerical_gradient(f, model.params[param_name], verbose=False, h=1e-6)
e = rel_error(param_grad_num, grads[param_name])
print '%s max relative error: %e' % (param_name, rel_error(param_grad_num, grads[param_name]))
Explanation: Gradient check
After the loss looks reasonable, use numeric gradient checking to make sure that your backward pass is correct. When you use numeric gradient checking you should use a small amount of artifical data and a small number of neurons at each layer.
End of explanation
num_train = 100
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
model = ThreeLayerConvNet(weight_scale=1e-2)
solver = Solver(model, small_data,
num_epochs=10, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=True, print_every=1)
solver.train()
Explanation: Overfit small data
A nice trick is to train your model with just a few training samples. You should be able to overfit small datasets, which will result in very high training accuracy and comparatively low validation accuracy.
End of explanation
plt.subplot(2, 1, 1)
plt.plot(solver.loss_history, 'o')
plt.xlabel('iteration')
plt.ylabel('loss')
plt.subplot(2, 1, 2)
plt.plot(solver.train_acc_history, '-o')
plt.plot(solver.val_acc_history, '-o')
plt.legend(['train', 'val'], loc='upper left')
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.show()
Explanation: Plotting the loss, training accuracy, and validation accuracy should show clear overfitting:
End of explanation
model = ThreeLayerConvNet(weight_scale=0.001, hidden_dim=500, reg=0.001)
solver = Solver(model, data,
num_epochs=20, batch_size=500,
update_rule='adadelta',
optim_config={
'rho': 0.95,
'epsilon': 1e-8,
'learning_rate': 0.001
},
verbose=True, print_every=20)
solver.train()
Explanation: Train the net
By training the three-layer convolutional network for one epoch, you should achieve greater than 40% accuracy on the training set:
End of explanation
from cs231n.vis_utils import visualize_grid
grid = visualize_grid(model.params['W1'].transpose(0, 2, 3, 1))
plt.imshow(grid.astype('uint8'))
plt.axis('off')
plt.gcf().set_size_inches(5, 5)
plt.show()
print 'Final accuracy on test data using adadelta: %f' % solver.check_accuracy(data['X_test'], data['y_test'])
plt.subplot(2, 1, 1)
plt.plot(solver.loss_history, '-')
plt.xlabel('iteration')
plt.ylabel('loss')
plt.subplot(2, 1, 2)
plt.plot(solver.train_acc_history, '-')
plt.plot(solver.val_acc_history, '-')
plt.legend(['train', 'val'], loc='upper left')
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.show()
Explanation: Visualize Filters
You can visualize the first-layer convolutional filters from the trained network by running the following:
End of explanation
# Check the training-time forward pass by checking means and variances
# of features both before and after spatial batch normalization
N, C, H, W = 2, 3, 4, 5
x = 4 * np.random.randn(N, C, H, W) + 10
print 'Before spatial batch normalization:'
print ' Shape: ', x.shape
print ' Means: ', x.mean(axis=(0, 2, 3))
print ' Stds: ', x.std(axis=(0, 2, 3))
# Means should be close to zero and stds close to one
gamma, beta = np.ones(C), np.zeros(C)
bn_param = {'mode': 'train'}
out, _ = spatial_batchnorm_forward(x, gamma, beta, bn_param)
print 'After spatial batch normalization:'
print ' Shape: ', out.shape
print ' Means: ', out.mean(axis=(0, 2, 3))
print ' Stds: ', out.std(axis=(0, 2, 3))
# Means should be close to beta and stds close to gamma
gamma, beta = np.asarray([3, 4, 5]), np.asarray([6, 7, 8])
out, _ = spatial_batchnorm_forward(x, gamma, beta, bn_param)
print 'After spatial batch normalization (nontrivial gamma, beta):'
print ' Shape: ', out.shape
print ' Means: ', out.mean(axis=(0, 2, 3))
print ' Stds: ', out.std(axis=(0, 2, 3))
# Check the test-time forward pass by running the training-time
# forward pass many times to warm up the running averages, and then
# checking the means and variances of activations after a test-time
# forward pass.
N, C, H, W = 10, 4, 11, 12
bn_param = {'mode': 'train'}
gamma = np.ones(C)
beta = np.zeros(C)
for t in xrange(50):
x = 2.3 * np.random.randn(N, C, H, W) + 13
spatial_batchnorm_forward(x, gamma, beta, bn_param)
bn_param['mode'] = 'test'
x = 2.3 * np.random.randn(N, C, H, W) + 13
a_norm, _ = spatial_batchnorm_forward(x, gamma, beta, bn_param)
# Means should be close to zero and stds close to one, but will be
# noisier than training-time forward passes.
print 'After spatial batch normalization (test-time):'
print ' means: ', a_norm.mean(axis=(0, 2, 3))
print ' stds: ', a_norm.std(axis=(0, 2, 3))
Explanation: Spatial Batch Normalization
We already saw that batch normalization is a very useful technique for training deep fully-connected networks. Batch normalization can also be used for convolutional networks, but we need to tweak it a bit; the modification will be called "spatial batch normalization."
Normally batch-normalization accepts inputs of shape (N, D) and produces outputs of shape (N, D), where we normalize across the minibatch dimension N. For data coming from convolutional layers, batch normalization needs to accept inputs of shape (N, C, H, W) and produce outputs of shape (N, C, H, W) where the N dimension gives the minibatch size and the (H, W) dimensions give the spatial size of the feature map.
If the feature map was produced using convolutions, then we expect the statistics of each feature channel to be relatively consistent both between different images and different locations within the same image. Therefore spatial batch normalization computes a mean and variance for each of the C feature channels by computing statistics over both the minibatch dimension N and the spatial dimensions H and W.
Spatial batch normalization: forward
In the file cs231n/layers.py, implement the forward pass for spatial batch normalization in the function spatial_batchnorm_forward. Check your implementation by running the following:
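One common way to implement this (a sketch, assuming the vanilla batchnorm_forward you implemented earlier in cs231n/layers.py) is to fold the spatial dimensions into the batch dimension and reuse that function:
N, C, H, W = x.shape
x_flat = x.transpose(0, 2, 3, 1).reshape(-1, C)            # (N*H*W, C): one row per spatial location
out_flat, cache = batchnorm_forward(x_flat, gamma, beta, bn_param)
out = out_flat.reshape(N, H, W, C).transpose(0, 3, 1, 2)   # back to (N, C, H, W)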
End of explanation
N, C, H, W = 2, 3, 4, 5
x = 5 * np.random.randn(N, C, H, W) + 12
gamma = np.random.randn(C)
beta = np.random.randn(C)
dout = np.random.randn(N, C, H, W)
bn_param = {'mode': 'train'}
fx = lambda x: spatial_batchnorm_forward(x, gamma, beta, bn_param)[0]
fg = lambda a: spatial_batchnorm_forward(x, gamma, beta, bn_param)[0]
fb = lambda b: spatial_batchnorm_forward(x, gamma, beta, bn_param)[0]
dx_num = eval_numerical_gradient_array(fx, x, dout)
da_num = eval_numerical_gradient_array(fg, gamma, dout)
db_num = eval_numerical_gradient_array(fb, beta, dout)
_, cache = spatial_batchnorm_forward(x, gamma, beta, bn_param)
dx, dgamma, dbeta = spatial_batchnorm_backward(dout, cache)
print 'dx error: ', rel_error(dx_num, dx)
print 'dgamma error: ', rel_error(da_num, dgamma)
print 'dbeta error: ', rel_error(db_num, dbeta)
Explanation: Spatial batch normalization: backward
In the file cs231n/layers.py, implement the backward pass for spatial batch normalization in the function spatial_batchnorm_backward. Run the following to check your implementation using a numeric gradient check:
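The same reshape trick works in reverse (again assuming a vanilla batchnorm_backward is available): transpose and reshape dout to (N*H*W, C) exactly as in the forward pass, call batchnorm_backward with the stored cache, then reshape dx back to (N, C, H, W); dgamma and dbeta already come out with shape (C,).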
End of explanation
model = ChaudharyNet(hidden_dim=4096, reg=0.5)
solver = Solver(model, data,
num_epochs=50, batch_size=500,
update_rule='adadelta',
optim_config={
'rho': 0.95,
'epsilon': 1e-8,
'learning_rate': 0.001
},
verbose=True, print_every=5)
solver.train()
plt.subplot(2, 1, 1)
plt.plot(solver.loss_history, '-')
plt.xlabel('iteration')
plt.ylabel('loss')
plt.subplot(2, 1, 2)
plt.plot(solver.train_acc_history, '-')
plt.plot(solver.val_acc_history, '-')
plt.legend(['train', 'val'], loc='upper left')
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.show()
model
Explanation: Experiment!
Experiment and try to get the best performance that you can on CIFAR-10 using a ConvNet. Here are some ideas to get you started:
Things you should try:
Filter size: Above we used 7x7; this makes pretty pictures but smaller filters may be more efficient
Number of filters: Above we used 32 filters. Do more or fewer do better?
Batch normalization: Try adding spatial batch normalization after convolution layers and vanilla batch normalization aafter affine layers. Do your networks train faster?
Network architecture: The network above has two layers of trainable parameters. Can you do better with a deeper network? You can implement alternative architectures in the file cs231n/classifiers/convnet.py. Some good architectures to try include:
[conv-relu-pool]xN - conv - relu - [affine]xM - [softmax or SVM]
[conv-relu-pool]XN - [affine]XM - [softmax or SVM]
[conv-relu-conv-relu-pool]xN - [affine]xM - [softmax or SVM]
Tips for training
For each network architecture that you try, you should tune the learning rate and regularization strength. When doing this there are a couple important things to keep in mind:
If the parameters are working well, you should see improvement within a few hundred iterations
Remember the coarse-to-fine approach for hyperparameter tuning: start by testing a large range of hyperparameters for just a few training iterations to find the combinations of parameters that are working at all.
Once you have found some sets of parameters that seem to work, search more finely around these parameters. You may need to train for more epochs.
Going above and beyond
If you are feeling adventurous there are many other features you can implement to try and improve your performance. You are not required to implement any of these; however they would be good things to try for extra credit.
Alternative update steps: For the assignment we implemented SGD+momentum, RMSprop, and Adam; you could try alternatives like AdaGrad or AdaDelta.
Alternative activation functions such as leaky ReLU, parametric ReLU, or MaxOut.
Model ensembles
Data augmentation
If you do decide to implement something extra, clearly describe it in the "Extra Credit Description" cell below.
What we expect
At the very least, you should be able to train a ConvNet that gets at least 65% accuracy on the validation set. This is just a lower bound - if you are careful it should be possible to get accuracies much higher than that! Extra credit points will be awarded for particularly high-scoring models or unique approaches.
You should use the space below to experiment and train your network. The final cell in this notebook should contain the training, validation, and test set accuracies for your final trained network. In this notebook you should also write an explanation of what you did, any additional features that you implemented, and any visualizations or graphs that you make in the process of training and evaluating your network.
Have fun and happy training!
End of explanation |
14,343 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Update for PyTorch 0.4
Step1: Simplicity of using backward()
Step2: The simple operations defined a forward path $z=(2x)^3$, $z$ will be the final output tensor we would like to compute gradient
Step3: The gradients of both $y$ and $z$ are None, since the function returns the gradient for the leaves, which is $x$ in this case. At the very beginning, I was assuming something like this
Step4: Testing the explicit default value, which should give the same result. For the same graph, which is retained, DO NOT forget to zero the gradient before recalculating the gradients.
Step5: Then what about other values, let's try 0.1 and 0.5.
Step6: It looks like the elements of grad_tensors act as scaling factors. Now let's set $x$ to be a $2\times 2$ matrix. Note that $z$ will also be a matrix. (Always use the latest version; backward has been improved a lot since earlier versions, becoming much easier to understand.)
Step7: We can clearly see the gradients of $z$ are computed w.r.t. each dimension of $x$, because the operations are all element-wise.
Then what if we render the output one-dimensional (scalar) while $x$ is two-dimensional. This is a real simplified scenario of neural networks.
$$f(x)=\frac{1}{n}\sum_i^n(2x_i)^3$$
$$\frac{\partial f}{\partial x_i}=\frac{24x_i^2}{n}$$
Step8: We will get complaints if the grad_tensors is specified for the scalar function.
Step9: What is retain_graph doing?
When training a model, the graph will be re-generated for each iteration. Therefore each iteration will consume the graph if the retain_graph is false, in order to keep the graph, we need to set it be true. | Python Code:
import torch as T
import torch.autograd
import numpy as np
Explanation: Update for PyTorch 0.4:
Earlier versions used Variable to wrap tensors with different properties. Since version 0.4, Variable is merged with tensor, in other words, Variable is NOT needed anymore. The flag require_grad can be directly set in tensor. Accordingly, this post is also updated.
Having heard the announcement about Theano from the Bengio lab, as a Theano user I am both happy and sad to see the fading of the old hero, overtaken by many rising stars. Sad to see it is too old to compete with its industrial competitors, and happy to have so many excellent deep learning frameworks to choose from. Recently I started translating some of my old code to PyTorch and have been really impressed by its dynamic nature and clarity. But at the very beginning, I was quite confused by the backward() function when reading the tutorials and documentation. This motivated me to write this post to help other PyTorch beginners ease their understanding a bit. I'll assume that you already know the autograd module and what a Variable is, but are a little confused by the definition of backward().
First let's recall gradient computation in mathematical terms. For an independent variable $x$ (scalar or vector), write whatever operation is applied to $x$ as $y = f(x)$. Then the gradient of $y$ w.r.t. the $x_i$s is
$$\nabla y=\begin{bmatrix}
\frac{\partial y}{\partial x_1}\\
\frac{\partial y}{\partial x_2}\\
\vdots
\end{bmatrix}.$$
Then for a specific point $x=[X_1, X_2, \dots]$, we'll get the gradient of $y$ at that point as a vector. With these notions in mind, the following things were a bit confusing at the beginning:
Mathematically, we would say "the gradients of a function w.r.t. the independent variables", whereas the .grad is attached to the leaf tensors. In Theano and Tensorflow, the computed gradients are stored separately in a variable. But with a moment of adjustment, it is fairly easy to buy that. In PyTorch it is also possible to get the .grad of intermediate tensors with the help of the register_hook function (a short sketch follows at the end of this explanation).
The parameter grad_tensors of the function torch.autograd.backward(variables, grad_tensors=None, retain_graph=None, create_graph=None, retain_variables=None, grad_variables=None) does not make its functionality obvious. (Note that grad_variables is deprecated; use grad_tensors instead.)
What is retain_graph doing?
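For reference, a minimal register_hook sketch (a hypothetical example, not from the original post) that prints the gradient flowing into an intermediate tensor:
x = T.randn(1, 1, requires_grad=True)
y = 2 * x
y.register_hook(lambda grad: print('grad at y:', grad))  # the hook receives dz/dy during backward
z = y ** 3
z.backward()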
End of explanation
'''
Define a scalar variable, set requires_grad to be true to add it to backward path for computing gradients
It is actually very simple to use backward()
first define the computation graph, then call backward()
'''
x = T.randn(1, 1, requires_grad=True) #x is a leaf created by user, thus grad_fn is none
print('x', x)
#define an operation on x
y = 2 * x
print('y', y)
#define one more operation to check the chain rule
z = y ** 3
print('z', z)
Explanation: Simplicity of using backward()
End of explanation
#yes, it is just as simple as this to compute gradients:
z.backward()
print('z gradient:', z.grad)
print('y gradient:', y.grad)
print('x gradient:', x.grad, 'Requires gradient?', x.grad.requires_grad) # note that x.grad is also a tensor
Explanation: The simple operations defined a forward path $z=(2x)^3$; $z$ will be the final output tensor for which we would like to compute the gradient, $dz=24x^2dx$. This is the tensor on which we will call the backward() function.
End of explanation
x = T.randn(1, 1, requires_grad=True) #x is a leaf created by user, thus grad_fn is none
print('x', x)
#define an operation on x
y = 2 * x
#define one more operation to check the chain rule
z = y ** 3
z.backward(retain_graph=True)
print('Keeping the default value of grad_tensors gives')
print('z gradient:', z.grad)
print('y gradient:', y.grad)
print('x gradient:', x.grad)
Explanation: The gradients of both $y$ and $z$ are None, since the function returns the gradient for the leaves, which is $x$ in this case. At the very beginning, I was assuming something like this:
x gradient: None
y gradient: None
z gradient: tensor([11.6105]),
since the gradient is calculated for the final output $z$.
With a moment of thought, we can figure out that this would be practically chaotic if $x$ were a multi-dimensional vector: x.grad should be interpreted as the gradient of $z$ at $x$.
How do we use grad_tensors?
grad_tensors should be a list of torch tensors. In the default case, backward() is applied to a scalar-valued function, and the default value of grad_tensors is then effectively torch.FloatTensor([1]). But why is that? What if we put some other values into it?
Keep the same forward path, then do backward by only setting retain_graph as True.
End of explanation
x.grad.data.zero_()
z.backward(T.Tensor([[1]]), retain_graph=True)
print('Set grad_tensors to 1 gives')
print('z gradient:', z.grad)
print('y gradient:', y.grad)
print('x gradient:', x.grad)
Explanation: Testing the explicit default value, which should give the same result. For the same graph, which is retained, DO NOT forget to zero the gradient before recalculating the gradients.
End of explanation
x.grad.data.zero_()
z.backward(T.Tensor([[0.1]]), retain_graph=True)
print('Set grad_tensors to 0.1 gives')
print('z gradient:', z.grad)
print('y gradient:', y.grad)
print('x gradient:', x.grad)
x.grad.data.zero_()
z.backward(T.FloatTensor([[0.5]]), retain_graph=True)
print('Set grad_tensors to 0.5 gives')
print('z gradient', z.grad)
print('y gradient', y.grad)
print('x gradient', x.grad)
Explanation: Then what about other values, let's try 0.1 and 0.5.
End of explanation
x = T.randn(2, 2, requires_grad=True) #x is a leaf created by user, thus grad_fn is none
print('x', x)
#define an operation on x
y = 2 * x
#define one more operation to check the chain rule
z = y ** 3
print('z shape:', z.size())
z.backward(T.FloatTensor([[1, 1], [1, 1]]), retain_graph=True)
print('x gradient for its all elements:\n', x.grad)
print()
x.grad.data.zero_() #the gradient for x will be accumulated, it needs to be cleared.
z.backward(T.FloatTensor([[0, 1], [0, 1]]), retain_graph=True)
print('x gradient for the second column:\n', x.grad)
print()
x.grad.data.zero_()
z.backward(T.FloatTensor([[1, 1], [0, 0]]), retain_graph=True)
print('x gradient for the first row:\n', x.grad)
Explanation: It looks like the elements of grad_tensors act as scaling factors. Now let's set $x$ to be a $2\times 2$ matrix. Note that $z$ will also be a matrix. (Always use the latest version; backward has been improved a lot since earlier versions, becoming much easier to understand.)
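Formally, backward(v) computes a vector-Jacobian product: for output $z$ and input $x$, x.grad accumulates $J^\top v$ with $J=\partial z/\partial x$. Because every operation here is element-wise, $J$ is diagonal, so each entry of grad_tensors simply scales the matching gradient entry; hence the scaling behaviour observed above.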
End of explanation
x = T.randn(2, 2, requires_grad=True) #x is a leaf created by user, thus grad_fn is none
print('x', x)
#define an operation on x
y = 2 * x
#print('y', y)
#define one more operation to check the chain rule
z = y ** 3
out = z.mean()
print('out', out)
out.backward(retain_graph=True)
print('x gradient:\n', x.grad)
Explanation: We can clearly see the gradients of $z$ are computed w.r.t. each dimension of $x$, because the operations are all element-wise.
Then what if we render the output one-dimensional (scalar) while $x$ is two-dimensional. This is a real simplified scenario of neural networks.
$$f(x)=\frac{1}{n}\sum_i^n(2x_i)^3$$
$$\frac{\partial f}{\partial x_i}=\frac{24x_i^2}{n}$$
End of explanation
x.grad.data.zero_()
out.backward(T.FloatTensor([[1, 1], [1, 1]]), retain_graph=True)
print('x gradient', x.grad)
Explanation: We will get complaints if the grad_tensors is specified for the scalar function.
End of explanation
x = T.randn(2, 2, requires_grad=True) #x is a leaf created by user, thus grad_fn is none
print('x', x)
#define an operation on x
y = 2 * x
#print('y', y)
#define one more operation to check the chain rule
z = y ** 3
out = z.mean()
print('out', out)
out.backward() #without setting retain_graph to be true, it is alright for first time of backward.
print('x gradient', x.grad)
x.grad.data.zero_()
out.backward() #Now we get complaint saying that no graph is available for tracing back.
print('x gradient', x.grad)
Explanation: What is retain_graph doing?
When training a model, the graph will be re-generated for each iteration. Therefore each iteration will consume the graph if retain_graph is false; in order to keep the graph, we need to set it to true.
End of explanation |
14,344 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Youtube videos
Step1: Closures
A closure closes over free variables from their environment.
Step2: Decorators
Decorators are a way to dynamically alter the functionality of your functions. So for example, if you wanted to
log information when a function is run, you could use a decorator to add this functionality without modifying the
source code of your original function.
Step3: Some practical applications of decorators
Step4: Chaining of Decorators
Step5: Let's see if switching the order of decorators helps
Step6: Decorators with Arguments | Python Code:
def square(x):
return x*x
def cube(x):
return x*x*x
# This is a custom-built map function which is going to behave like the built-in map function.
def my_map(func, arg_list):
result = []
for i in arg_list:
result.append(func(i))
return result
squares = my_map(square, [1,2,3,4])
print(squares)
cubes = my_map(cube, [1,2,3,4])
print(cubes)
Explanation: Youtube videos:
* [https://www.youtube.com/watch?v=kr0mpwqttM0&t=34s]
* [https://www.youtube.com/watch?v=swU3c34d2NQ]
* [https://www.youtube.com/watch?v=FsAPt_9Bf3U&t=35s]
* [https://www.youtube.com/watch?v=KlBPCzcQNU8]
First class functions
We can treat functions just like any other object or variable.
End of explanation
def html_tag(tag):
def wrap_text(msg):
print('<{0}>{1}<{0}>'.format(tag, msg))
return wrap_text
print_h1 = html_tag('h1')
print_h1('Test Headline')
print_h1('Another Headline')
print_p = html_tag('p')
print_p('Test Paragraph!')
Explanation: Closures
A closure closes over free variables from their environment.
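A second tiny illustration (hypothetical, not part of the html_tag example): the inner function keeps access to count even after make_counter has returned.
def make_counter():
    count = 0
    def counter():
        nonlocal count
        count += 1
        return count
    return counter
c = make_counter()
print(c(), c())  # 1 2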
End of explanation
def decorator_function(original_function):
def wrapper_function():
print("wrapper executed this before {}".format(original_function.__name__))
return original_function()
return wrapper_function
def display():
print("display function ran!")
decorated_display = decorator_function(display)
decorated_display()
# The above code is functionally the same as below:
def decorator_function(original_function):
def wrapper_function():
print("wrapper executed this before {}".format(original_function.__name__))
return original_function()
return wrapper_function
@decorator_function
def display():
print("display function ran!")
display()
# Lets make our decorator function to work with functions with different number of arguments
# For this we use, *args (arguments) and **kwargs (keyword arguments).
# args and kwargs are convention, you can use any other name you want like *myargs, **yourkeywordargs
def decorator_function(original_function):
def wrapper_function(*args, **kwargs):
print("wrapper executed this before {}".format(original_function.__name__))
return original_function(*args, **kwargs)
return wrapper_function
@decorator_function
def display():
print("display function ran!")
@decorator_function
def display_info(name, age):
print('display_info ran with arguments ({}, {})'.format(name, age))
display()
display_info('John', 25)
# Now let's use a class as a decorator instead of a function
class decorator_class(object):
def __init__(self, original_function):
self.original_function = original_function
def __call__(self, *args, **kwargs): # This method is going to behave just like our wrapper function behaved
print('call method executed this before {}'.format(self.original_function.__name__))
return self.original_function(*args, **kwargs)
@decorator_class
def display():
print("display function ran!")
@decorator_class
def display_info(name, age):
print('display_info ran with arguments ({}, {})'.format(name, age))
display()
display_info('John', 25)
Explanation: Decorators
Decorators are a way to dynamically alter the functionality of your functions. So for example, if you wanted to
log information when a function is run, you could use a decorator to add this functionality without modifying the
source code of your original function.
End of explanation
#Let's say we want to keep track of how many times a specific function was run and what argument were passed to that function
def my_logger(orig_func):
import logging
logging.basicConfig(filename='{}.log'.format(orig_func.__name__), level=logging.INFO) #Generates a log file in the current directory with the name of the original function
def wrapper(*args, **kwargs):
logging.info(
'Ran with args: {}, and kwargs: {}'.format(args, kwargs))
return orig_func(*args, **kwargs)
return wrapper
@my_logger
def display_info(name, age):
print('display_info ran with arguments ({}, {})'.format(name, age))
display_info('John', 25)
# Note:
# Since display_info is decorated with my_logger, the above call is equivalent to:
# decorated_display = my_logger(display_info)
# decorated_display('John', 25)
def my_timer(orig_func):
import time
def wrapper(*args, **kwargs):
t1 = time.time()
result = orig_func(*args, **kwargs)
t2 = time.time() - t1
print('{} ran in: {} sec'.format(orig_func.__name__, t2))
return result
return wrapper
@my_timer
def display_info(name, age):
print('display_info ran with arguments ({}, {})'.format(name, age))
display_info('John', 25)
# Note:
# Since display_info is decorated with my_timer, the above call is equivalent to:
# decorated_display = my_timer(display_info)
# decorated_display('John', 25)
# (or simply put)
# my_timer(display_info)('John', 25)
Explanation: Some practical applications of decorators
End of explanation
@my_timer
@my_logger
def display_info(name, age):
print('display_info ran with arguments ({}, {})'.format(name, age))
display_info('John', 25) # This is equivalent to my_timer(my_logger(display_info))('John', 25)
# The above code will give us some unexpected results.
# Instead of printing "display_info ran in: ---- sec" it prints "wrapper ran in: ---- sec"
Explanation: Chaining of Decorators
End of explanation
@my_logger
@my_timer
def display_info(name, age):
print('display_info ran with arguments ({}, {})'.format(name, age))
display_info('John', 25) # This is equivalent to my_logger(my_timer(display_info))('John', 25)
# Now this would create wrapper.log instead of display_info.log like we expected.
# To understand why wrapper.log is generated instead of display_info.log let's look at the following code.
def display_info(name, age):
print('display_info ran with arguments ({}, {})'.format(name, age))
display_info = my_timer(display_info('John',25))
print(display_info.__name__)
# So how do we solve this problem
# The answer is by using the wraps decorator
# For this we need to import wraps from functools module
from functools import wraps
def my_logger(orig_func):
import logging
logging.basicConfig(filename='{}.log'.format(orig_func.__name__), level=logging.INFO) #Generates a log file in the current directory with the name of the original function
@wraps(orig_func)
def wrapper(*args, **kwargs):
logging.info(
'Ran with args: {}, and kwargs: {}'.format(args, kwargs))
return orig_func(*args, **kwargs)
return wrapper
def my_timer(orig_func):
import time
@wraps(orig_func)
def wrapper(*args, **kwargs):
t1 = time.time()
result = orig_func(*args, **kwargs)
t2 = time.time() - t1
print('{} ran in: {} sec'.format(orig_func.__name__, t2))
return result
return wrapper
@my_logger
@my_timer
def display_info(name, age):
print('display_info ran with arguments ({}, {})'.format(name, age))
display_info('Hank', 22)
Explanation: Let's see if switching the order of decorators helps
End of explanation
def prefix_decorator(prefix):
def decorator_function(original_function):
def wrapper_function(*args, **kwargs):
print(prefix, "Executed before {}".format(original_function.__name__))
result = original_function(*args, **kwargs)
print(prefix, "Executed after {}".format(original_function.__name__), '\n')
return result
return wrapper_function
return decorator_function
@prefix_decorator('LOG:')
def display_info(name, age):
print('display_info ran with arguments ({}, {})'.format(name, age))
display_info('John', 25)
Explanation: Decorators with Arguments
End of explanation |
14,345 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Making New Layers and Models via Subclassing
Learning Objectives
Use Layer class as the combination of state (weights) and computation.
Defer weight creation until the shape of the inputs is known.
Build recursively composable layers.
Compute loss using add_loss() method.
Compute average using add_metric() method.
Enable serialization on layers.
Introduction
This tutorial shows how to build new layers and models via subclassing.
Subclassing is a term that refers to inheriting properties for a new object from a base or superclass object.
Each learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook.
Setup
Step1: The Layer class
Step2: You would use a layer by calling it on some tensor input(s), much like a Python
function.
Step3: Note that the weights w and b are automatically tracked by the layer upon
being set as layer attributes
Step4: Note you also have access to a quicker shortcut for adding weight to a layer
Step5: Layers can have non-trainable weights
Besides trainable weights, you can add non-trainable weights to a layer as
well. Such weights are meant not to be taken into account during
backpropagation, when you are training the layer.
Here's how to add and use a non-trainable weight
Step6: It's part of layer.weights, but it gets categorized as a non-trainable weight
Step7: Best practice
Step8: In many cases, you may not know in advance the size of your inputs, and you
would like to lazily create weights when that value becomes known, some time
after instantiating the layer.
In the Keras API, we recommend creating layer weights in the build(self, input_shape) method of your layer. Like this
Step9: The __call__() method of your layer will automatically run build the first time
it is called. You now have a layer that's lazy and thus easier to use
Step10: Layers are recursively composable
If you assign a Layer instance as an attribute of another Layer, the outer layer
will start tracking the weights of the inner layer.
We recommend creating such sublayers in the __init__() method (since the
sublayers will typically have a build method, they will be built when the
outer layer gets built).
Step11: The add_loss() method
When writing the call() method of a layer, you can create loss tensors that
you will want to use later, when writing your training loop. This is doable by
calling self.add_loss(value)
Step12: These losses (including those created by any inner layer) can be retrieved via
layer.losses. This property is reset at the start of every __call__() to
the top-level layer, so that layer.losses always contains the loss values
created during the last forward pass.
Step13: In addition, the loss property also contains regularization losses created
for the weights of any inner layer
Step14: These losses are meant to be taken into account when writing training loops,
like this
Step15: The add_metric() method
Similarly to add_loss(), layers also have an add_metric() method
for tracking the moving average of a quantity during training.
Consider the following layer
Step16: Metrics tracked in this way are accessible via layer.metrics
Step17: Just like for add_loss(), these metrics are tracked by fit()
Step18: You can optionally enable serialization on your layers
If you need your custom layers to be serializable as part of a
Functional model, you can optionally implement a get_config()
method
Step19: Note that the __init__() method of the base Layer class takes some keyword
arguments, in particular a name and a dtype. It's good practice to pass
these arguments to the parent class in __init__() and to include them in the
layer config
Step20: If you need more flexibility when deserializing the layer from its config, you
can also override the from_config() class method. This is the base
implementation of from_config()
Step25: Privileged mask argument in the call() method
The other privileged argument supported by call() is the mask argument.
You will find it in all Keras RNN layers. A mask is a boolean tensor (one
boolean value per timestep in the input) used to skip certain input timesteps
when processing timeseries data.
Keras will automatically pass the correct mask argument to __call__() for
layers that support it, when a mask is generated by a prior layer.
Mask-generating layers are the Embedding
layer configured with mask_zero=True, and the Masking layer.
To learn more about masking and how to write masking-enabled layers, please
check out the guide
"understanding padding and masking".
The Model class
In general, you will use the Layer class to define inner computation blocks,
and will use the Model class to define the outer model -- the object you
will train.
For instance, in a ResNet50 model, you would have several ResNet blocks
subclassing Layer, and a single Model encompassing the entire ResNet50
network.
The Model class has the same API as Layer, with the following differences
Step26: Let's write a simple training loop on MNIST
Step27: Note that since the VAE is subclassing Model, it features built-in training
loops. So you could also have trained it like this
Step28: Beyond object-oriented development | Python Code:
# Import necessary libraries
import tensorflow as tf
from tensorflow import keras
Explanation: Making New Layers and Models via Subclassing
Learning Objectives
Use Layer class as the combination of state (weights) and computation.
Defer weight creation until the shape of the inputs is known.
Build recursively composable layers.
Compute loss using add_loss() method.
Compute average using add_metric() method.
Enable serialization on layers.
Introduction
This tutorial shows how to build new layers and models via subclassing.
Subclassing is a term that refers to inheriting properties for a new object from a base or superclass object.
Each learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook.
Setup
End of explanation
# Define a Linear class
class Linear(keras.layers.Layer):
def __init__(self, units=32, input_dim=32):
super(Linear, self).__init__()
w_init = tf.random_normal_initializer()
self.w = tf.Variable(
initial_value=w_init(shape=(input_dim, units), dtype="float32"),
trainable=True,
)
b_init = tf.zeros_initializer()
self.b = tf.Variable(
initial_value=b_init(shape=(units,), dtype="float32"), trainable=True
)
def call(self, inputs):
return tf.matmul(inputs, self.w) + self.b
Explanation: The Layer class: the combination of state (weights) and some computation
One of the central abstractions in Keras is the Layer class. A layer
encapsulates both a state (the layer's "weights") and a transformation from
inputs to outputs (a "call", the layer's forward pass).
Here's a densely-connected layer. It has a state: the variables w and b.
End of explanation
x = tf.ones((2, 2))
linear_layer = Linear(4, 2)
y = linear_layer(x)
print(y)
Explanation: You would use a layer by calling it on some tensor input(s), much like a Python
function.
End of explanation
assert linear_layer.weights == [linear_layer.w, linear_layer.b]
Explanation: Note that the weights w and b are automatically tracked by the layer upon
being set as layer attributes:
End of explanation
# TODO
# Use `add_weight()` method for adding weight to a layer
class Linear(keras.layers.Layer):
def __init__(self, units=32, input_dim=32):
super(Linear, self).__init__()
self.w = self.add_weight(
shape=(input_dim, units), initializer="random_normal", trainable=True
)
self.b = self.add_weight(shape=(units,), initializer="zeros", trainable=True)
def call(self, inputs):
return tf.matmul(inputs, self.w) + self.b
x = tf.ones((2, 2))
linear_layer = Linear(4, 2)
y = linear_layer(x)
print(y)
Explanation: Note you also have access to a quicker shortcut for adding weight to a layer:
the add_weight() method:
End of explanation
# Add and use a non-trainable weight
class ComputeSum(keras.layers.Layer):
def __init__(self, input_dim):
super(ComputeSum, self).__init__()
self.total = tf.Variable(initial_value=tf.zeros((input_dim,)), trainable=False)
def call(self, inputs):
self.total.assign_add(tf.reduce_sum(inputs, axis=0))
return self.total
x = tf.ones((2, 2))
my_sum = ComputeSum(2)
y = my_sum(x)
print(y.numpy())
y = my_sum(x)
print(y.numpy())
Explanation: Layers can have non-trainable weights
Besides trainable weights, you can add non-trainable weights to a layer as
well. Such weights are meant not to be taken into account during
backpropagation, when you are training the layer.
Here's how to add and use a non-trainable weight:
End of explanation
print("weights:", len(my_sum.weights))
print("non-trainable weights:", len(my_sum.non_trainable_weights))
# It's not included in the trainable weights:
print("trainable_weights:", my_sum.trainable_weights)
Explanation: It's part of layer.weights, but it gets categorized as a non-trainable weight:
End of explanation
class Linear(keras.layers.Layer):
def __init__(self, units=32, input_dim=32):
super(Linear, self).__init__()
self.w = self.add_weight(
shape=(input_dim, units), initializer="random_normal", trainable=True
)
self.b = self.add_weight(shape=(units,), initializer="zeros", trainable=True)
def call(self, inputs):
return tf.matmul(inputs, self.w) + self.b
Explanation: Best practice: deferring weight creation until the shape of the inputs is known
Our Linear layer above took an input_dim argument that was used to compute
the shape of the weights w and b in __init__():
End of explanation
# TODO
class Linear(keras.layers.Layer):
def __init__(self, units=32):
super(Linear, self).__init__()
self.units = units
def build(self, input_shape):
self.w = self.add_weight(
shape=(input_shape[-1], self.units),
initializer="random_normal",
trainable=True,
)
self.b = self.add_weight(
shape=(self.units,), initializer="random_normal", trainable=True
)
def call(self, inputs):
return tf.matmul(inputs, self.w) + self.b
Explanation: In many cases, you may not know in advance the size of your inputs, and you
would like to lazily create weights when that value becomes known, some time
after instantiating the layer.
In the Keras API, we recommend creating layer weights in the build(self, input_shape) method of your layer. Like this:
End of explanation
# At instantiation, we don't know on what inputs this is going to get called
linear_layer = Linear(32)
# The layer's weights are created dynamically the first time the layer is called
y = linear_layer(x)
Explanation: The __call__() method of your layer will automatically run build the first time
it is called. You now have a layer that's lazy and thus easier to use:
End of explanation
# TODO
# Let's assume we are reusing the Linear class
# with a `build` method that we defined above.
class MLPBlock(keras.layers.Layer):
def __init__(self):
super(MLPBlock, self).__init__()
self.linear_1 = Linear(32)
self.linear_2 = Linear(32)
self.linear_3 = Linear(1)
def call(self, inputs):
x = self.linear_1(inputs)
x = tf.nn.relu(x)
x = self.linear_2(x)
x = tf.nn.relu(x)
return self.linear_3(x)
mlp = MLPBlock()
y = mlp(tf.ones(shape=(3, 64))) # The first call to the `mlp` will create the weights
print("weights:", len(mlp.weights))
print("trainable weights:", len(mlp.trainable_weights))
Explanation: Layers are recursively composable
If you assign a Layer instance as an attribute of another Layer, the outer layer
will start tracking the weights of the inner layer.
We recommend creating such sublayers in the __init__() method (since the
sublayers will typically have a build method, they will be built when the
outer layer gets built).
End of explanation
# A layer that creates an activity regularization loss
class ActivityRegularizationLayer(keras.layers.Layer):
def __init__(self, rate=1e-2):
super(ActivityRegularizationLayer, self).__init__()
self.rate = rate
def call(self, inputs):
self.add_loss(self.rate * tf.reduce_sum(inputs))
return inputs
Explanation: The add_loss() method
When writing the call() method of a layer, you can create loss tensors that
you will want to use later, when writing your training loop. This is doable by
calling self.add_loss(value):
End of explanation
# TODO
class OuterLayer(keras.layers.Layer):
def __init__(self):
super(OuterLayer, self).__init__()
self.activity_reg = ActivityRegularizationLayer(1e-2)
def call(self, inputs):
return self.activity_reg(inputs)
layer = OuterLayer()
assert len(layer.losses) == 0 # No losses yet since the layer has never been called
_ = layer(tf.zeros(1, 1))
assert len(layer.losses) == 1 # We created one loss value
# `layer.losses` gets reset at the start of each __call__
_ = layer(tf.zeros(1, 1))
assert len(layer.losses) == 1 # This is the loss created during the call above
Explanation: These losses (including those created by any inner layer) can be retrieved via
layer.losses. This property is reset at the start of every __call__() to
the top-level layer, so that layer.losses always contains the loss values
created during the last forward pass.
End of explanation
class OuterLayerWithKernelRegularizer(keras.layers.Layer):
def __init__(self):
super(OuterLayerWithKernelRegularizer, self).__init__()
self.dense = keras.layers.Dense(
32, kernel_regularizer=tf.keras.regularizers.l2(1e-3)
)
def call(self, inputs):
return self.dense(inputs)
layer = OuterLayerWithKernelRegularizer()
_ = layer(tf.zeros((1, 1)))
# This is `1e-3 * sum(layer.dense.kernel ** 2)`,
# created by the `kernel_regularizer` above.
print(layer.losses)
Explanation: In addition, the loss property also contains regularization losses created
for the weights of any inner layer:
End of explanation
import numpy as np
inputs = keras.Input(shape=(3,))
outputs = ActivityRegularizationLayer()(inputs)
model = keras.Model(inputs, outputs)
# If there is a loss passed in `compile`, the regularization
# losses get added to it
model.compile(optimizer="adam", loss="mse")
model.fit(np.random.random((2, 3)), np.random.random((2, 3)))
# It's also possible not to pass any loss in `compile`,
# since the model already has a loss to minimize, via the `add_loss`
# call during the forward pass!
model.compile(optimizer="adam")
model.fit(np.random.random((2, 3)), np.random.random((2, 3)))
Explanation: These losses are meant to be taken into account when writing training loops,
like this:
```python
# Instantiate an optimizer.
optimizer = tf.keras.optimizers.SGD(learning_rate=1e-3)
loss_fn = keras.losses.SparseCategoricalCrossentropy(from_logits=True)
# Iterate over the batches of a dataset.
for x_batch_train, y_batch_train in train_dataset:
with tf.GradientTape() as tape:
logits = layer(x_batch_train) # Logits for this minibatch
# Loss value for this minibatch
loss_value = loss_fn(y_batch_train, logits)
# Add extra losses created during this forward pass:
loss_value += sum(model.losses)
grads = tape.gradient(loss_value, model.trainable_weights)
optimizer.apply_gradients(zip(grads, model.trainable_weights))
```
For a detailed guide about writing training loops, see the
guide to writing a training loop from scratch.
These losses also work seamlessly with fit() (they get automatically summed
and added to the main loss, if any):
End of explanation
# TODO
class LogisticEndpoint(keras.layers.Layer):
def __init__(self, name=None):
super(LogisticEndpoint, self).__init__(name=name)
self.loss_fn = keras.losses.BinaryCrossentropy(from_logits=True)
self.accuracy_fn = keras.metrics.BinaryAccuracy()
def call(self, targets, logits, sample_weights=None):
# Compute the training-time loss value and add it
# to the layer using `self.add_loss()`.
loss = self.loss_fn(targets, logits, sample_weights)
self.add_loss(loss)
# Log accuracy as a metric and add it
# to the layer using `self.add_metric()`.
acc = self.accuracy_fn(targets, logits, sample_weights)
self.add_metric(acc, name="accuracy")
# Return the inference-time prediction tensor (for `.predict()`).
return tf.nn.softmax(logits)
Explanation: The add_metric() method
Similarly to add_loss(), layers also have an add_metric() method
for tracking the moving average of a quantity during training.
Consider the following layer: a "logistic endpoint" layer.
It takes as inputs predictions & targets, it computes a loss which it tracks
via add_loss(), and it computes an accuracy scalar, which it tracks via
add_metric().
End of explanation
layer = LogisticEndpoint()
targets = tf.ones((2, 2))
logits = tf.ones((2, 2))
y = layer(targets, logits)
print("layer.metrics:", layer.metrics)
print("current accuracy value:", float(layer.metrics[0].result()))
Explanation: Metrics tracked in this way are accessible via layer.metrics:
End of explanation
inputs = keras.Input(shape=(3,), name="inputs")
targets = keras.Input(shape=(10,), name="targets")
logits = keras.layers.Dense(10)(inputs)
predictions = LogisticEndpoint(name="predictions")(logits, targets)
model = keras.Model(inputs=[inputs, targets], outputs=predictions)
model.compile(optimizer="adam")
data = {
"inputs": np.random.random((3, 3)),
"targets": np.random.random((3, 10)),
}
model.fit(data)
Explanation: Just like for add_loss(), these metrics are tracked by fit():
End of explanation
# TODO
class Linear(keras.layers.Layer):
def __init__(self, units=32):
super(Linear, self).__init__()
self.units = units
def build(self, input_shape):
self.w = self.add_weight(
shape=(input_shape[-1], self.units),
initializer="random_normal",
trainable=True,
)
self.b = self.add_weight(
shape=(self.units,), initializer="random_normal", trainable=True
)
def call(self, inputs):
return tf.matmul(inputs, self.w) + self.b
def get_config(self):
return {"units": self.units}
# You can enable serialization on your layers using `get_config()` method
# Now you can recreate the layer from its config:
layer = Linear(64)
config = layer.get_config()
print(config)
new_layer = Linear.from_config(config)
Explanation: You can optionally enable serialization on your layers
If you need your custom layers to be serializable as part of a
Functional model, you can optionally implement a get_config()
method:
End of explanation
class Linear(keras.layers.Layer):
def __init__(self, units=32, **kwargs):
super(Linear, self).__init__(**kwargs)
self.units = units
def build(self, input_shape):
self.w = self.add_weight(
shape=(input_shape[-1], self.units),
initializer="random_normal",
trainable=True,
)
self.b = self.add_weight(
shape=(self.units,), initializer="random_normal", trainable=True
)
def call(self, inputs):
return tf.matmul(inputs, self.w) + self.b
def get_config(self):
config = super(Linear, self).get_config()
config.update({"units": self.units})
return config
layer = Linear(64)
config = layer.get_config()
print(config)
new_layer = Linear.from_config(config)
Explanation: Note that the __init__() method of the base Layer class takes some keyword
arguments, in particular a name and a dtype. It's good practice to pass
these arguments to the parent class in __init__() and to include them in the
layer config:
End of explanation
class CustomDropout(keras.layers.Layer):
def __init__(self, rate, **kwargs):
super(CustomDropout, self).__init__(**kwargs)
self.rate = rate
def call(self, inputs, training=None):
if training:
return tf.nn.dropout(inputs, rate=self.rate)
return inputs
Explanation: If you need more flexibility when deserializing the layer from its config, you
can also override the from_config() class method. This is the base
implementation of from_config():
python
def from_config(cls, config):
return cls(**config)
To learn more about serialization and saving, see the complete
guide to saving and serializing models.
Privileged training argument in the call() method
Some layers, in particular the BatchNormalization layer and the Dropout
layer, have different behaviors during training and inference. For such
layers, it is standard practice to expose a training (boolean) argument in
the call() method.
By exposing this argument in call(), you enable the built-in training and
evaluation loops (e.g. fit()) to correctly use the layer in training and
inference.
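As an illustrative sketch only (this exact override is my assumption, not taken from the guide), a subclass of the Linear layer defined above could override from_config() to patch up a config before the layer is rebuilt:
```python
class LinearWithDefaults(Linear):  # reuses the Linear class defined above
    @classmethod
    def from_config(cls, config):
        config = dict(config)            # work on a copy of the config
        config.setdefault("units", 32)   # tolerate older configs missing "units"
        return cls(**config)

restored = LinearWithDefaults.from_config({"name": "linear_with_defaults"})
print(restored.units)  # 32
```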
End of explanation
from tensorflow.keras import layers
class Sampling(layers.Layer):
"""Uses (z_mean, z_log_var) to sample z, the vector encoding a digit."""
def call(self, inputs):
z_mean, z_log_var = inputs
batch = tf.shape(z_mean)[0]
dim = tf.shape(z_mean)[1]
epsilon = tf.keras.backend.random_normal(shape=(batch, dim))
return z_mean + tf.exp(0.5 * z_log_var) * epsilon
class Encoder(layers.Layer):
"""Maps MNIST digits to a triplet (z_mean, z_log_var, z)."""
def __init__(self, latent_dim=32, intermediate_dim=64, name="encoder", **kwargs):
super(Encoder, self).__init__(name=name, **kwargs)
self.dense_proj = layers.Dense(intermediate_dim, activation="relu")
self.dense_mean = layers.Dense(latent_dim)
self.dense_log_var = layers.Dense(latent_dim)
self.sampling = Sampling()
def call(self, inputs):
x = self.dense_proj(inputs)
z_mean = self.dense_mean(x)
z_log_var = self.dense_log_var(x)
z = self.sampling((z_mean, z_log_var))
return z_mean, z_log_var, z
class Decoder(layers.Layer):
"""Converts z, the encoded digit vector, back into a readable digit."""
def __init__(self, original_dim, intermediate_dim=64, name="decoder", **kwargs):
super(Decoder, self).__init__(name=name, **kwargs)
self.dense_proj = layers.Dense(intermediate_dim, activation="relu")
self.dense_output = layers.Dense(original_dim, activation="sigmoid")
def call(self, inputs):
x = self.dense_proj(inputs)
return self.dense_output(x)
class VariationalAutoEncoder(keras.Model):
"""Combines the encoder and decoder into an end-to-end model for training."""
def __init__(
self,
original_dim,
intermediate_dim=64,
latent_dim=32,
name="autoencoder",
**kwargs
):
super(VariationalAutoEncoder, self).__init__(name=name, **kwargs)
self.original_dim = original_dim
self.encoder = Encoder(latent_dim=latent_dim, intermediate_dim=intermediate_dim)
self.decoder = Decoder(original_dim, intermediate_dim=intermediate_dim)
def call(self, inputs):
z_mean, z_log_var, z = self.encoder(inputs)
reconstructed = self.decoder(z)
# Add KL divergence regularization loss.
kl_loss = -0.5 * tf.reduce_mean(
z_log_var - tf.square(z_mean) - tf.exp(z_log_var) + 1
)
self.add_loss(kl_loss)
return reconstructed
Explanation: Privileged mask argument in the call() method
The other privileged argument supported by call() is the mask argument.
You will find it in all Keras RNN layers. A mask is a boolean tensor (one
boolean value per timestep in the input) used to skip certain input timesteps
when processing timeseries data.
Keras will automatically pass the correct mask argument to __call__() for
layers that support it, when a mask is generated by a prior layer.
Mask-generating layers are the Embedding
layer configured with mask_zero=True, and the Masking layer.
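As a small illustration (toy shapes assumed; inspecting the `_keras_mask` attribute follows the Keras masking guide and is shown only to make the mask visible):
```python
sample_ids = tf.constant([[3, 7, 0, 0]])  # 0 is the padding token
embedded = layers.Embedding(input_dim=10, output_dim=4, mask_zero=True)(sample_ids)
print(embedded._keras_mask)  # [[ True  True False False]]
```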
To learn more about masking and how to write masking-enabled layers, please
check out the guide
"understanding padding and masking".
The Model class
In general, you will use the Layer class to define inner computation blocks,
and will use the Model class to define the outer model -- the object you
will train.
For instance, in a ResNet50 model, you would have several ResNet blocks
subclassing Layer, and a single Model encompassing the entire ResNet50
network.
The Model class has the same API as Layer, with the following differences:
It exposes built-in training, evaluation, and prediction loops
(model.fit(), model.evaluate(), model.predict()).
It exposes the list of its inner layers, via the model.layers property.
It exposes saving and serialization APIs (save(), save_weights()...)
Effectively, the Layer class corresponds to what we refer to in the
literature as a "layer" (as in "convolution layer" or "recurrent layer") or as
a "block" (as in "ResNet block" or "Inception block").
Meanwhile, the Model class corresponds to what is referred to in the
literature as a "model" (as in "deep learning model") or as a "network" (as in
"deep neural network").
So if you're wondering, "should I use the Layer class or the Model class?",
ask yourself: will I need to call fit() on it? Will I need to call save()
on it? If so, go with Model. If not (either because your class is just a block
in a bigger system, or because you are writing training & saving code yourself),
use Layer.
For instance, we could take our mini-resnet example above, and use it to build
a Model that we could train with fit(), and that we could save with
save_weights():
```python
class ResNet(tf.keras.Model):
def __init__(self, num_classes=1000):
super(ResNet, self).__init__()
self.block_1 = ResNetBlock()
self.block_2 = ResNetBlock()
self.global_pool = layers.GlobalAveragePooling2D()
self.classifier = Dense(num_classes)
def call(self, inputs):
x = self.block_1(inputs)
x = self.block_2(x)
x = self.global_pool(x)
return self.classifier(x)
resnet = ResNet()
dataset = ...
resnet.fit(dataset, epochs=10)
resnet.save(filepath)
```
Putting it all together: an end-to-end example
Here's what you've learned so far:
A Layer encapsulates a state (created in __init__() or build()) and some
computation (defined in call()).
Layers can be recursively nested to create new, bigger computation blocks.
Layers can create and track losses (typically regularization losses) as well
as metrics, via add_loss() and add_metric()
The outer container, the thing you want to train, is a Model. A Model is
just like a Layer, but with added training and serialization utilities.
Let's put all of these things together into an end-to-end example: we're going
to implement a Variational AutoEncoder (VAE). We'll train it on MNIST digits.
Our VAE will be a subclass of Model, built as a nested composition of layers
that subclass Layer. It will feature a regularization loss (KL divergence).
End of explanation
original_dim = 784
vae = VariationalAutoEncoder(original_dim, 64, 32)
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)
mse_loss_fn = tf.keras.losses.MeanSquaredError()
loss_metric = tf.keras.metrics.Mean()
(x_train, _), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(60000, 784).astype("float32") / 255
train_dataset = tf.data.Dataset.from_tensor_slices(x_train)
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)
epochs = 2
# Iterate over epochs.
for epoch in range(epochs):
print("Start of epoch %d" % (epoch,))
# Iterate over the batches of the dataset.
for step, x_batch_train in enumerate(train_dataset):
with tf.GradientTape() as tape:
reconstructed = vae(x_batch_train)
# Compute reconstruction loss
loss = mse_loss_fn(x_batch_train, reconstructed)
loss += sum(vae.losses) # Add KLD regularization loss
grads = tape.gradient(loss, vae.trainable_weights)
optimizer.apply_gradients(zip(grads, vae.trainable_weights))
loss_metric(loss)
if step % 100 == 0:
print("step %d: mean loss = %.4f" % (step, loss_metric.result()))
Explanation: Let's write a simple training loop on MNIST:
End of explanation
vae = VariationalAutoEncoder(784, 64, 32)
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)
vae.compile(optimizer, loss=tf.keras.losses.MeanSquaredError())
vae.fit(x_train, x_train, epochs=2, batch_size=64)
Explanation: Note that since the VAE is subclassing Model, it features built-in training
loops. So you could also have trained it like this:
End of explanation
original_dim = 784
intermediate_dim = 64
latent_dim = 32
# Define encoder model.
original_inputs = tf.keras.Input(shape=(original_dim,), name="encoder_input")
x = layers.Dense(intermediate_dim, activation="relu")(original_inputs)
z_mean = layers.Dense(latent_dim, name="z_mean")(x)
z_log_var = layers.Dense(latent_dim, name="z_log_var")(x)
z = Sampling()((z_mean, z_log_var))
encoder = tf.keras.Model(inputs=original_inputs, outputs=z, name="encoder")
# Define decoder model.
latent_inputs = tf.keras.Input(shape=(latent_dim,), name="z_sampling")
x = layers.Dense(intermediate_dim, activation="relu")(latent_inputs)
outputs = layers.Dense(original_dim, activation="sigmoid")(x)
decoder = tf.keras.Model(inputs=latent_inputs, outputs=outputs, name="decoder")
# Define VAE model.
outputs = decoder(z)
vae = tf.keras.Model(inputs=original_inputs, outputs=outputs, name="vae")
# Add KL divergence regularization loss.
kl_loss = -0.5 * tf.reduce_mean(z_log_var - tf.square(z_mean) - tf.exp(z_log_var) + 1)
vae.add_loss(kl_loss)
# Train.
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)
vae.compile(optimizer, loss=tf.keras.losses.MeanSquaredError())
vae.fit(x_train, x_train, epochs=3, batch_size=64)
Explanation: Beyond object-oriented development: the Functional API
Was this example too much object-oriented development for you? You can also
build models using the Functional API. Importantly,
choosing one style or another does not prevent you from leveraging components
written in the other style: you can always mix-and-match.
For instance, the Functional API example below reuses the same Sampling layer
we defined in the example above:
End of explanation |
14,346 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Demonstrate impact of whitening on source estimates
This example demonstrates the relationship between the noise covariance
estimate and the MNE / dSPM source amplitudes. It computes source estimates for
the SPM faces data and compares proper regularization with insufficient
regularization based on the methods described in [1]. The example demonstrates
that improper regularization can lead to overestimation of source amplitudes.
This example makes use of the previous, non-optimized code path that was used
before implementing the suggestions presented in [1]. Please do not copy the
patterns presented here for your own analysis; this example is purely
illustrative.
<div class="alert alert-info"><h4>Note</h4><p>This example does quite a bit of processing, so even on a
fast machine it can take a couple of minutes to complete.</p></div>
References
.. [1] Engemann D. and Gramfort A. (2015) Automated model selection in
covariance estimation and spatial whitening of MEG and EEG signals,
vol. 108, 328-342, NeuroImage.
Step1: Get data
Step2: Estimate covariances
Step4: Show the resulting source estimates | Python Code:
# Author: Denis A. Engemann <[email protected]>
#
# License: BSD (3-clause)
import os
import os.path as op
import numpy as np
from scipy.misc import imread
import matplotlib.pyplot as plt
import mne
from mne import io
from mne.datasets import spm_face
from mne.minimum_norm import apply_inverse, make_inverse_operator
from mne.cov import compute_covariance
print(__doc__)
Explanation: Demonstrate impact of whitening on source estimates
This example demonstrates the relationship between the noise covariance
estimate and the MNE / dSPM source amplitudes. It computes source estimates for
the SPM faces data and compares proper regularization with insufficient
regularization based on the methods described in [1]. The example demonstrates
that improper regularization can lead to overestimation of source amplitudes.
This example makes use of the previous, non-optimized code path that was used
before implementing the suggestions presented in [1]. Please do not copy the
patterns presented here for your own analysis; this example is purely
illustrative.
<div class="alert alert-info"><h4>Note</h4><p>This example does quite a bit of processing, so even on a
fast machine it can take a couple of minutes to complete.</p></div>
References
.. [1] Engemann D. and Gramfort A. (2015) Automated model selection in
covariance estimation and spatial whitening of MEG and EEG signals,
vol. 108, 328-342, NeuroImage.
End of explanation
data_path = spm_face.data_path()
subjects_dir = data_path + '/subjects'
raw_fname = data_path + '/MEG/spm/SPM_CTF_MEG_example_faces%d_3D.ds'
raw = io.read_raw_ctf(raw_fname % 1) # Take first run
# To save time and memory for this demo, we'll just use the first
# 2.5 minutes (all we need to get 30 total events) and heavily
# resample 480->60 Hz (usually you wouldn't do either of these!)
raw = raw.crop(0, 150.).load_data().resample(60, npad='auto')
picks = mne.pick_types(raw.info, meg=True, exclude='bads')
raw.filter(1, None, n_jobs=1)
events = mne.find_events(raw, stim_channel='UPPT001')
event_ids = {"faces": 1, "scrambled": 2}
tmin, tmax = -0.2, 0.5
baseline = None # no baseline as high-pass is applied
reject = dict(mag=3e-12)
# Make source space
trans = data_path + '/MEG/spm/SPM_CTF_MEG_example_faces1_3D_raw-trans.fif'
src = mne.setup_source_space('spm', fname=None, spacing='oct6',
subjects_dir=subjects_dir, add_dist=False)
bem = data_path + '/subjects/spm/bem/spm-5120-5120-5120-bem-sol.fif'
forward = mne.make_forward_solution(raw.info, trans, src, bem)
forward = mne.convert_forward_solution(forward, surf_ori=True)
del src
# inverse parameters
conditions = 'faces', 'scrambled'
snr = 3.0
lambda2 = 1.0 / snr ** 2
method = 'dSPM'
clim = dict(kind='value', lims=[0, 2.5, 5])
Explanation: Get data
End of explanation
samples_epochs = 5, 15,
method = 'empirical', 'shrunk'
colors = 'steelblue', 'red'
evokeds = list()
stcs = list()
methods_ordered = list()
for n_train in samples_epochs:
# estimate covs based on a subset of samples
# make sure we have the same number of conditions.
events_ = np.concatenate([events[events[:, 2] == id_][:n_train]
for id_ in [event_ids[k] for k in conditions]])
epochs_train = mne.Epochs(raw, events_, event_ids, tmin, tmax, picks=picks,
baseline=baseline, preload=True, reject=reject)
epochs_train.equalize_event_counts(event_ids)
assert len(epochs_train) == 2 * n_train
noise_covs = compute_covariance(
epochs_train, method=method, tmin=None, tmax=0, # baseline only
return_estimators=True) # returns list
# prepare contrast
evokeds = [epochs_train[k].average() for k in conditions]
del epochs_train, events_
# do contrast
# We skip empirical rank estimation that we introduced in response to
# the findings in reference [1] to use the naive code path that
# triggered the behavior described in [1]. The expected true rank is
# 274 for this dataset. Please do not do this with your data but
# rely on the default rank estimator that helps regularizing the
# covariance.
stcs.append(list())
methods_ordered.append(list())
for cov in noise_covs:
inverse_operator = make_inverse_operator(evokeds[0].info, forward,
cov, loose=0.2, depth=0.8,
rank=274)
stc_a, stc_b = (apply_inverse(e, inverse_operator, lambda2, "dSPM",
pick_ori=None) for e in evokeds)
stc = stc_a - stc_b
methods_ordered[-1].append(cov['method'])
stcs[-1].append(stc)
del inverse_operator, evokeds, cov, noise_covs, stc, stc_a, stc_b
del raw, forward # save some memory
Explanation: Estimate covariances
End of explanation
fig, (axes1, axes2) = plt.subplots(2, 3, figsize=(9.5, 6))
def brain_to_mpl(brain):
"""convert image to be usable with matplotlib"""
tmp_path = op.abspath(op.join(op.curdir, 'my_tmp'))
brain.save_imageset(tmp_path, views=['ven'])
im = imread(tmp_path + '_ven.png')
os.remove(tmp_path + '_ven.png')
return im
for ni, (n_train, axes) in enumerate(zip(samples_epochs, (axes1, axes2))):
# compute stc based on worst and best
ax_dynamics = axes[1]
for stc, ax, method, kind, color in zip(stcs[ni],
axes[::2],
methods_ordered[ni],
['best', 'worst'],
colors):
brain = stc.plot(subjects_dir=subjects_dir, hemi='both', clim=clim,
initial_time=0.175)
im = brain_to_mpl(brain)
brain.close()
del brain
ax.axis('off')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
ax.imshow(im)
ax.set_title('{0} ({1} epochs)'.format(kind, n_train * 2))
# plot spatial mean
stc_mean = stc.data.mean(0)
ax_dynamics.plot(stc.times * 1e3, stc_mean,
label='{0} ({1})'.format(method, kind),
color=color)
# plot spatial std
stc_var = stc.data.std(0)
ax_dynamics.fill_between(stc.times * 1e3, stc_mean - stc_var,
stc_mean + stc_var, alpha=0.2, color=color)
# signal dynamics worst and best
ax_dynamics.set_title('{0} epochs'.format(n_train * 2))
ax_dynamics.set_xlabel('Time (ms)')
ax_dynamics.set_ylabel('Source Activation (dSPM)')
ax_dynamics.set_xlim(tmin * 1e3, tmax * 1e3)
ax_dynamics.set_ylim(-3, 3)
ax_dynamics.legend(loc='upper left', fontsize=10)
fig.subplots_adjust(hspace=0.4, left=0.03, right=0.98, wspace=0.07)
fig.canvas.draw()
fig.show()
Explanation: Show the resulting source estimates
End of explanation |
14,347 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Matplotlib Exercise 2
Imports
Step1: Exoplanet properties
Over the past few decades, astronomers have discovered thousands of extrasolar planets. The following paper describes the properties of some of these planets.
http
Step2: Use np.genfromtxt with a delimiter of ',' to read the data into a NumPy array called data
Step3: Make a histogram of the distribution of planetary masses. This will reproduce Figure 2 in the original paper.
Customize your plot to follow Tufte's principles of visualizations.
Customize the box, grid, spines and ticks to match the requirements of this data.
Pick the number of bins for the histogram appropriately.
Step4: Make a scatter plot of the orbital eccentricity (y) versus the semimajor axis. This will reproduce Figure 4 of the original paper. Use a log scale on the x axis.
Customize your plot to follow Tufte's principles of visualizations.
Customize the box, grid, spines and ticks to match the requirements of this data. | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
Explanation: Matplotlib Exercise 2
Imports
End of explanation
!head -n 30 open_exoplanet_catalogue.txt
Explanation: Exoplanet properties
Over the past few decades, astronomers have discovered thousands of extrasolar planets. The following paper describes the properties of some of these planets.
http://iopscience.iop.org/1402-4896/2008/T130/014001
Your job is to reproduce Figures 2 and 4 from this paper using an up-to-date dataset of extrasolar planets found on this GitHub repo:
https://github.com/OpenExoplanetCatalogue/open_exoplanet_catalogue
A text version of the dataset has already been put into this directory. The top of the file has documentation about each column of data:
End of explanation
data = np.genfromtxt('open_exoplanet_catalogue.txt', delimiter=',', comments = '#')
assert data.shape==(1993,24)
Explanation: Use np.genfromtxt with a delimiter of ',' to read the data into a NumPy array called data:
End of explanation
import math
# get rid of the missing values - they seem to cause problems with hist
complete = [x for x in data[:,2] if not math.isnan(x)]
max(complete)
# plt.hist(complete, bins=50)
f = plt.figure(figsize = (6,4))
plt.hist(complete, bins = 25, range = (0,20))
plt.xlabel("Planetary Masses (Jupiter Units)")
plt.ylabel("Frequency")
plt.title("Distribution of Planetary masses less than 20 Jupiter units\n")
assert True # leave for grading
Explanation: Make a histogram of the distribution of planetary masses. This will reproduce Figure 2 in the original paper.
Customize your plot to follow Tufte's principles of visualizations.
Customize the box, grid, spines and ticks to match the requirements of this data.
Pick the number of bins for the histogram appropriately.
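One possible way (my addition, not part of the graded solution) to apply the requested spine and tick cleanup to the current axes:
```python
ax = plt.gca()
ax.spines['top'].set_visible(False)     # drop the box on top and right
ax.spines['right'].set_visible(False)
ax.tick_params(direction='out', length=3)
ax.grid(False)
```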
End of explanation
y = data[:,6]
x = data[:,5]
# plt.scatter(x,y)
plt.semilogx(x,y,'bo', alpha = .2, ms = 4)
plt.xlabel("Semi-Major Axis (AU)")
plt.ylabel("Orbital Eccentricity")
plt.title("Orbital Eccentricity versus Semi-Major Axis\n")
plt.scatter(x,np.log(y))
assert True # leave for grading
Explanation: Make a scatter plot of the orbital eccentricity (y) versus the semimajor axis. This will reproduce Figure 4 of the original paper. Use a log scale on the x axis.
Customize your plot to follow Tufte's principles of visualizations.
Customize the box, grid, spines and ticks to match the requirements of this data.
End of explanation |
14,348 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Minimizing KL Divergence
Let’s see how we could go about minimizing the KL divergence between two probability distributions using gradient descent. To begin, we create a probability distribution with a known mean (0) and variance (2). Then, we create another distribution with random parameters.
Step1: To begin, we create a probability distribution, $p$, with a known mean (0) and variance (2).
Step2: We define a function to compute the KL divergence that excludes probabilities equal to zero.
Step3: Just a quick test
Step4: KL(P||Q) !!
$p$
Step5: Plot the results | Python Code:
import os
import warnings
warnings.filterwarnings('ignore')
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = (4,4) # Make the figures a bit bigger
plt.style.use('fivethirtyeight')
import numpy as np
from scipy.stats import norm
import tensorflow as tf
import seaborn as sns
sns.set()
import math
from tqdm import tqdm
np.random.seed(7)
Explanation: Minimizing KL Divergence
Let’s see how we could go about minimizing the KL divergence between two probability distributions using gradient descent. To begin, we create a probability distribution with a known mean (0) and variance (2). Then, we create another distribution with random parameters.
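For reference, the quantity minimized below is the discrete KL divergence, which matches the `p * tf.log(p / q)` sum used in the code:
$$D_{\mathrm{KL}}(P \,\|\, Q) = \sum_{i} p_i \log \frac{p_i}{q_i}$$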
End of explanation
x = np.arange(-10, 10, 0.1)
x.shape[0]
tf_pdf_shape=(1, x.shape[0])
p = tf.placeholder(tf.float64, shape=tf_pdf_shape)#p_pdf.shape
#mu = tf.Variable(np.zeros(1))
#mu = tf.Variable(tf.truncated_normal((1,), stddev=3.0))
mu = tf.Variable(np.ones(1)*5)
print(mu.dtype)
varq = tf.Variable(np.eye(1))
print(varq.dtype)
normal = tf.exp(-tf.square(x - mu) / (2 * varq))
q = normal / tf.reduce_sum(normal)
learning_rate = 0.01
nb_epochs = 500*2
Explanation: To begin, we create a probability distribution, $p$, with a known mean (0) and variance (2).
End of explanation
kl_divergence = tf.reduce_sum( p * tf.log(p / q))
kl_divergence = tf.reduce_sum(
tf.where(p == 0, tf.zeros(tf_pdf_shape, tf.float64), p * tf.log(p / q))
)
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(kl_divergence)
init = tf.global_variables_initializer()
sess = tf.compat.v1.InteractiveSession()
sess.run(init)
history = []
means = []
variances = []
Explanation: We define a function to compute the KL divergence that excludes probabilities equal to zero.
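An optional NumPy cross-check (my addition, not in the original notebook) of the same masked sum, handy for verifying the TensorFlow version on concrete arrays:
```python
def kl_divergence_np(p_np, q_np):
    mask = p_np > 0  # skip zero-probability entries, as in the tf.where above
    return np.sum(p_np[mask] * np.log(p_np[mask] / q_np[mask]))
```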
End of explanation
m1 = 0
var1 = 2
p_pdf0 = norm.pdf(x, m1, np.sqrt(var1))
p_pdf1 = 1.0 / np.sqrt(var1) / np.sqrt(2 * math.pi) * np.exp(-np.square(x - m1) / (2 * var1))
import matplotlib
plt.plot(p_pdf0)
plt.plot(p_pdf1, marker=",")
Explanation: Just a quick test
End of explanation
m_truth = 0
var_truth = 7
p_pdf0 = norm.pdf(x, m_truth, np.sqrt(var_truth))
p_pdf0 = 1.0 / np.sqrt(var_truth) / np.sqrt(2 * math.pi) * np.exp(-np.square(x - m_truth) / (2 * var_truth))
p_pdf = p_pdf0.reshape(1, -1)
for i in tqdm(range(nb_epochs)):
sess.run(optimizer, { p: p_pdf })
history.append(sess.run(kl_divergence, { p: p_pdf }))
means.append(sess.run(mu)[0])
variances.append(sess.run(varq)[0][0])
if i % 100 == 10:
print(sess.run(mu)[0], sess.run(varq)[0][0])
Explanation: KL(P||Q) !!
$p$ : given (target)
$q$ : variables to learn
Generating values for $p$
End of explanation
len1 = np.shape(means)[0]
alphas = np.linspace(0.1, 1, len1)
rgba_colors = np.zeros((len1,4))
# for red the first column needs to be one
rgba_colors[:,0] = 1.0
# the fourth column needs to be your alphas
rgba_colors[:, 3] = alphas
print(rgba_colors.shape)
grange = range(len1)
print(np.shape(grange))
for mean, variance, g in zip(means, variances, grange):
if g%5 ==0:
q_pdf = norm.pdf(x, mean, np.sqrt(variance))
plt.plot(x, q_pdf.reshape(-1, 1), color=rgba_colors[g])
plt.title('KL(P||Q) = %1.3f' % history[-1])
plt.plot(x, p_pdf.reshape(-1, 1), linewidth=3)
plt.show()
#target
plt.plot(x, p_pdf.reshape(-1, 1), linewidth=5)
#initial
q_pdf = norm.pdf(x, means[0] , np.sqrt(variances[0]))
plt.plot(x, q_pdf.reshape(-1, 1))
#final
q_pdf = norm.pdf(x, means[-1] , np.sqrt(variances[-1]))
plt.plot(x, q_pdf.reshape(-1, 1), color='r')
plt.plot(means)
plt.xlabel('epoch')
plt.ylabel('mean')
plt.plot(variances)
plt.xlabel('epoch')
plt.ylabel('variances')
plt.plot(history)
plt.title('history')
plt.show()
#sess.close()
Explanation: Plot the results
End of explanation |
14,349 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Content index
Correlation analysis --- the cov, diagonal, trace, and corrcoef functions
Polynomial fitting --- the polyfit, polyval, roots, and polyder functions
Computing On-Balance Volume --- the sign and piecewise functions
Step1: 1. Stock correlation analysis
In this example, we use two sample data sets of closing prices. The first company is BHP Billiton (BHP), whose main business is oil, metal, and diamond mining; the second company is Vale (VALE), also a metal mining company, and parts of their businesses overlap. Let's analyze the correlation between their stocks.
Step2: Covariance describes the tendency of two variables to vary together; it is essentially the correlation coefficient before normalization.
Use the cov function to compute the covariance matrix of the stock returns
Step3: Use the correlation coefficient to measure how strongly the two stocks are related. The correlation coefficient ranges from -1 to 1, and the correlation of a data set with itself is 1. Use the corrcoef function to compute the correlation coefficients.
Step4: The correlation coefficient matrix is symmetric about its diagonal: the correlation of BHP with VALE equals the correlation of VALE with BHP. A correlation coefficient of 0.68 suggests that their relationship does not seem very strong.
Determine whether the price movements of the two stocks are in sync
If their difference deviates from the average difference by more than two standard deviations, we consider them out of sync
Step5: This shows that the last closing prices are no longer in sync, so we should not trade for the time being
Step6: 2. Polynomial fitting
NumPy's polyfit function can fit a series of data points with a polynomial, whether or not the points come from a continuous function.
Step7: Ideally, the smaller the difference between the BHP and VALE closing prices, the better. In the limit, the difference could reach zero at some point. Use the roots function to find out when the fitted polynomial reaches zero.
Step8: Find the extrema
Step9: Plot the fitted curve
Step10: 3. Computing On-Balance Volume
Volume reflects the size of price movements, and On-Balance Volume (OBV) is computed from the current day's closing price, the previous day's closing price, and the current day's volume.
Taking the previous day as the base period, we compute the current day's OBV (the base period's OBV can be taken as 0). If today's close is higher than yesterday's close, today's OBV equals the base OBV plus today's volume; otherwise we subtract today's volume.
We need to multiply the volume by a sign determined by the change in the closing price.
Step11: Use NumPy's sign function to return the sign of each element.
Step12: Use NumPy's piecewise function to get the sign of the array elements. piecewise returns values piecewise according to the given conditions.
Step13: 4. Simulating the trading process
The vectorize function can reduce the number of loops in your program; NumPy's vectorize function is the equivalent of Python's map function. We use it to compute the profit of a single trading day.
Step14: We try to buy the stock at a price slightly below the opening price. If that price is not within the day's price range, the buy attempt fails and there is no profit or loss, so we return 0. Otherwise, we sell at the day's closing price, and the profit is the difference between the buy and sell prices. | Python Code:
%matplotlib inline
import numpy as np
from matplotlib.pyplot import plot
from matplotlib.pyplot import show
Explanation: Content index
Correlation analysis --- the cov, diagonal, trace, and corrcoef functions
Polynomial fitting --- the polyfit, polyval, roots, and polyder functions
Computing On-Balance Volume --- the sign and piecewise functions
End of explanation
# First, read in the closing prices of the two stocks and compute the returns
bhp_cp = np.loadtxt('BHP.csv', delimiter=',', usecols=(6,), unpack=True)
vale_cp = np.loadtxt('VALE.csv', delimiter=',', usecols=(6,), unpack=True)
bhp_returns = np.diff(bhp_cp) / bhp_cp[:-1]
vale_returns = np.diff(vale_cp) / vale_cp[:-1]
Explanation: 1. Stock correlation analysis
In this example, we use two sample data sets of closing prices. The first company is BHP Billiton (BHP), whose main business is oil, metal, and diamond mining; the second company is Vale (VALE), also a metal mining company, and parts of their businesses overlap. Let's analyze the correlation between their stocks.
End of explanation
covariance = np.cov(bhp_returns, vale_returns)
print 'Covariance:\n', covariance
# Look at the diagonal elements of the covariance matrix
print 'Covariance diagonal:\n', covariance.diagonal()
# Compute the trace of the matrix, i.e. the sum of the diagonal
print 'Covariance trace:\n', covariance.trace()
# Compute the correlation coefficients: covariance divided by the product of the standard deviations
print 'Correlation coefficient:\n', covariance / (bhp_returns.std() * vale_returns.std())
Explanation: Covariance describes the tendency of two variables to vary together; it is essentially the correlation coefficient before normalization.
Use the cov function to compute the covariance matrix of the stock returns
End of explanation
# Using corrcoef gives a more accurate result
print 'Correlation coefficient:\n', np.corrcoef(bhp_returns, vale_returns)
Explanation: Use the correlation coefficient to measure how strongly the two stocks are related. The correlation coefficient ranges from -1 to 1, and the correlation of a data set with itself is 1. Use the corrcoef function to compute the correlation coefficients.
End of explanation
difference = bhp_cp - vale_cp
avg = np.mean(difference)
dev = np.std(difference)
# Check whether the last closing prices are in sync
print "Out of sync : ", np.abs(difference[-1] - avg) > 2*dev
Explanation: The correlation coefficient matrix is symmetric about its diagonal: the correlation of BHP with VALE equals the correlation of VALE with BHP. A correlation coefficient of 0.68 suggests that their relationship does not seem very strong.
Determine whether the price movements of the two stocks are in sync
If their difference deviates from the average difference by more than two standard deviations, we consider them out of sync
End of explanation
# Plot the return curves
t = np.arange(len(bhp_returns))
plot(t, bhp_returns, lw=1)
plot(t, vale_returns, lw=2)
show()
Explanation: This shows that the last closing prices are no longer in sync, so we should not trade for the time being
End of explanation
# Fit the difference of the two stocks' closing prices with a cubic polynomial
t = np.arange(len(bhp_cp))
poly = np.polyfit(t, bhp_cp-vale_cp, 3)
print "Polynomial fit\n", poly
# Use the fitted polynomial to extrapolate the next value
print "Next value: ", np.polyval(poly, t[-1]+1)
Explanation: 2. Polynomial fitting
NumPy's polyfit function can fit a series of data points with a polynomial, whether or not the points come from a continuous function.
End of explanation
print "Roots: ", np.roots(poly)
Explanation: Ideally, the smaller the difference between the BHP and VALE closing prices, the better. In the limit, the difference could reach zero at some point. Use the roots function to find out when the fitted polynomial reaches zero.
End of explanation
# The extrema are located where the derivative is zero
der = np.polyder(poly)
print "Derivative:\n", der
# This gives the coefficients of the polynomial's derivative
# Find the roots of the derivative, i.e. the extrema of the original polynomial
print "Extremas: ", np.roots(der)
# Cross-check the result by locating the maximum and minimum with argmax and argmin
vals = np.polyval(poly, t)
print "Maximum index: ", np.argmax(vals)
print "Minimum index: ", np.argmin(vals)
Explanation: Find the extrema
End of explanation
plot(t, bhp_cp-vale_cp)
plot(t, vals)
show()
Explanation: Plot the fitted curve
End of explanation
cp, volume = np.loadtxt('BHP.csv', delimiter=',', usecols=(6,7), unpack=True)
change = np.diff(cp)
print "Change:", change
Explanation: 3. Computing On-Balance Volume
Volume reflects the size of price movements, and On-Balance Volume (OBV) is computed from the current day's closing price, the previous day's closing price, and the current day's volume.
Taking the previous day as the base period, we compute the current day's OBV (the base period's OBV can be taken as 0). If today's close is higher than yesterday's close, today's OBV equals the base OBV plus today's volume; otherwise we subtract today's volume.
We need to multiply the volume by a sign determined by the change in the closing price.
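A short sketch (my addition) of the cumulative OBV implied by this rule, using the cp and volume arrays loaded above and taking the base day's OBV as 0:
```python
obv = np.concatenate(([0.0], np.cumsum(np.sign(np.diff(cp)) * volume[1:])))
print "Cumulative OBV:\n", obv
```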
End of explanation
signs = np.sign(change)
print "Signs:\n", signs
Explanation: Use NumPy's sign function to return the sign of each element.
End of explanation
pieces = np.piecewise(change, [change<0, change>0], [-1,1])
print "Pieces:\n", pieces
# Check that the two outputs agree
print "Arrays equal?", np.array_equal(signs, pieces)
# The OBV value depends on the previous day's closing price
print "On balance volume: \n", volume[1:]*signs
Explanation: Use NumPy's piecewise function to get the sign of the array elements. piecewise returns values piecewise according to the given conditions.
End of explanation
# Read in the data
# op is opening price,hp is the highest price
# lp is the lowest price, cp is closing price
op, hp, lp, cp = np.loadtxt('BHP.csv', delimiter=',', usecols=(3,4,5,6), unpack=True)
Explanation: 4. Simulating the trading process
The vectorize function can reduce the number of loops in your program; NumPy's vectorize function is the equivalent of Python's map function. We use it to compute the profit of a single trading day.
End of explanation
def calc_profit(op, high, low, close):
# Buy at the opening price; we ignore how many shares are bought
buy = op
if low < buy < high:
return (close-buy) / buy
else:
return 0
# Vectorize the function so we can avoid an explicit loop
func = np.vectorize(calc_profit)
profits = func(op, hp, lp, cp)
print 'Profits:\n', profits
# Select the trading days with non-zero profit and compute the average
real_trades = profits[profits != 0]
print 'Number of trades:\n', len(real_trades), round(100.0 * len(real_trades)/len(cp), 2),"%"
print "Average profit/loss % :", round(np.mean(real_trades) * 100, 2)
Explanation: We try to buy the stock at a price slightly below the opening price. If that price is not within the day's price range, the buy attempt fails and there is no profit or loss, so we return 0. Otherwise, we sell at the day's closing price, and the profit is the difference between the buy and sell prices.
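For comparison, a plain-Python equivalent of the vectorized call (my addition, using the arrays and calc_profit defined above):
```python
profits_loop = np.array([calc_profit(o, h, l, c) for o, h, l, c in zip(op, hp, lp, cp)])
print "Same result as np.vectorize?", np.allclose(profits, profits_loop)
```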
End of explanation |
14,350 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="http
Step1: Known coordinates of rectangle
Step2: Define the area inside x and y coordinates
Step3: Define boundaries as CLOSED
Step4: Make a new elevation field for display | Python Code:
from landlab import RasterModelGrid
import numpy as np
from landlab.plot.imshow import imshow_grid_at_node
from matplotlib.pyplot import show
%matplotlib inline
mg = RasterModelGrid((10, 10))
Explanation: <a href="http://landlab.github.io"><img style="float: left" src="../../landlab_header.png"></a>
Setting Boundary Conditions: interior rectangle
<hr>
<small>For more Landlab tutorials, click here: <a href="https://landlab.readthedocs.io/en/latest/user_guide/tutorials.html">https://landlab.readthedocs.io/en/latest/user_guide/tutorials.html</a></small>
<hr>
This tutorial illustrates how to modify the boundary conditions of an interior rectangle in the grid if you know the x and y coordinates of the rectangle.
End of explanation
min_x = 2.5
max_x = 5.
min_y = 3.5
max_y = 7.5
Explanation: Known coordinates of rectangle:
End of explanation
x_condition = np.logical_and(mg.x_of_node < max_x, mg.x_of_node > min_x)
y_condition = np.logical_and(mg.y_of_node < max_y, mg.y_of_node > min_y)
my_nodes = np.logical_and(x_condition, y_condition)
Explanation: Define the area inside x and y coordinates:
End of explanation
mg.status_at_node[my_nodes] = mg.BC_NODE_IS_CLOSED
Explanation: Define boundaries as CLOSED:
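A quick optional check (my addition) that the expected interior nodes were closed:
```python
print(np.count_nonzero(mg.status_at_node == mg.BC_NODE_IS_CLOSED))
```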
End of explanation
z = mg.add_zeros('topographic__elevation', at='node')
imshow_grid_at_node(mg, z)
Explanation: Make a new elevation field for display:
End of explanation |
14,351 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Estimating Counts
Think Bayes, Second Edition
Copyright 2020 Allen B. Downey
License
Step1: In the previous chapter we solved problems that involve estimating proportions.
In the Euro problem, we estimated the probability that a coin lands heads up, and in the exercises, you estimated a batting average, the fraction of people who cheat on their taxes, and the chance of shooting down an invading alien.
Clearly, some of these problems are more realistic than others, and some are more useful than others.
In this chapter, we'll work on problems related to counting, or estimating the size of a population.
Again, some of the examples will seem silly, but some of them, like the German Tank problem, have real applications, sometimes in life and death situations.
The Train Problem
I found the train problem
in Frederick Mosteller's, Fifty Challenging Problems in
Probability with Solutions
Step3: Now let's figure out the likelihood of the data.
In a hypothetical fleet of $N$ locomotives, what is the probability that we would see number 60?
If we assume that we are equally likely to see any locomotive, the chance of seeing any particular one is $1/N$.
Here's the function that does the update
Step4: This function might look familiar; it is the same as the update function for the dice problem in the previous chapter.
In terms of likelihood, the train problem is the same as the dice problem.
Here's the update
Step5: Here's what the posterior looks like
Step6: Not surprisingly, all values of $N$ below 60 have been eliminated.
The most likely value, if you had to guess, is 60.
Step7: That might not seem like a very good guess; after all, what are the chances that you just happened to see the train with the highest number?
Nevertheless, if you want to maximize the chance of getting
the answer exactly right, you should guess 60.
But maybe that's not the right goal.
An alternative is to compute the mean of the posterior distribution.
Given a set of possible quantities, $q_i$, and their probabilities, $p_i$, the mean of the distribution is
Step8: Or we can use the method provided by Pmf
Step9: The mean of the posterior is 333, so that might be a good guess if you want to minimize error.
If you played this guessing game over and over, using the mean of the posterior as your estimate would minimize the mean squared error over the long run.
Sensitivity to the Prior
The prior I used in the previous section is uniform from 1 to 1000, but I offered no justification for choosing a uniform distribution or that particular upper bound.
We might wonder whether the posterior distribution is sensitive to the prior.
With so little data---only one observation---it is.
This table shows what happens as we vary the upper bound
Step10: As we vary the upper bound, the posterior mean changes substantially.
So that's bad.
When the posterior is sensitive to the prior, there are two ways to proceed
Step11: The differences are smaller, but apparently three trains are not enough for the posteriors to converge.
Power Law Prior
If more data are not available, another option is to improve the
priors by gathering more background information.
It is probably not reasonable to assume that a train-operating company with 1000 locomotives is just as likely as a company with only 1.
With some effort, we could probably find a list of companies that
operate locomotives in the area of observation.
Or we could interview an expert in rail shipping to gather information about the typical size of companies.
But even without getting into the specifics of railroad economics, we
can make some educated guesses.
In most fields, there are many small companies, fewer medium-sized companies, and only one or two very large companies.
In fact, the distribution of company sizes tends to follow a power law, as Robert Axtell reports in Science (http
Step12: For comparison, here's the uniform prior again.
Step13: Here's what a power law prior looks like, compared to the uniform prior
Step14: Here's the update for both priors.
Step15: And here are the posterior distributions.
Step16: The power law gives less prior probability to high values, which yields lower posterior means, and less sensitivity to the upper bound.
Here's how the posterior means depend on the upper bound when we use a power law prior and observe three trains
Step17: Now the differences are much smaller. In fact,
with an arbitrarily large upper bound, the mean converges on 134.
So the power law prior is more realistic, because it is based on
general information about the size of companies, and it behaves better in practice.
Credible Intervals
So far we have seen two ways to summarize a posterior distribution
Step19: With a power law prior and a dataset of three trains, the result is about 29%.
So 100 trains is the 29th percentile.
Going the other way, suppose we want to compute a particular percentile; for example, the median of a distribution is the 50th percentile.
We can compute it by adding up probabilities until the total exceeds 0.5.
Here's a function that does it
Step20: The loop uses items, which iterates the quantities and probabilities in the distribution.
Inside the loop we add up the probabilities of the quantities in order.
When the total equals or exceeds prob, we return the corresponding quantity.
This function is called quantile because it computes a quantile rather than a percentile.
The difference is the way we specify prob.
If prob is a percentage between 0 and 100, we call the corresponding quantity a percentile.
If prob is a probability between 0 and 1, we call the corresponding quantity a quantile.
Here's how we can use this function to compute the 50th percentile of the posterior distribution
Step21: The result, 113 trains, is the median of the posterior distribution.
Pmf provides a method called quantile that does the same thing.
We can call it like this to compute the 5th and 95th percentiles
Step22: The result is the interval from 91 to 243 trains, which implies
Step23: The German Tank Problem
During World War II, the Economic Warfare Division of the American
Embassy in London used statistical analysis to estimate German
production of tanks and other equipment.
The Western Allies had captured log books, inventories, and repair
records that included chassis and engine serial numbers for individual
tanks.
Analysis of these records indicated that serial numbers were allocated
by manufacturer and tank type in blocks of 100 numbers, that numbers
in each block were used sequentially, and that not all numbers in each
block were used. So the problem of estimating German tank production
could be reduced, within each block of 100 numbers, to a form of the
train problem.
Based on this insight, American and British analysts produced
estimates substantially lower than estimates from other forms
of intelligence. And after the war, records indicated that they were
substantially more accurate.
They performed similar analyses for tires, trucks, rockets, and other
equipment, yielding accurate and actionable economic intelligence.
The German tank problem is historically interesting; it is also a nice
example of real-world application of statistical estimation.
For more on this problem, see this Wikipedia page and Ruggles and Brodie, "An Empirical Approach to Economic Intelligence in World War II", Journal of the American Statistical Association, March 1947, available here.
Informative Priors
Among Bayesians, there are two approaches to choosing prior
distributions. Some recommend choosing the prior that best represents
background information about the problem; in that case the prior
is said to be informative. The problem with using an informative
prior is that people might have different information or
interpret it differently. So informative priors might seem arbitrary.
The alternative is a so-called uninformative prior, which is
intended to be as unrestricted as possible, in order to let the data
speak for itself. In some cases you can identify a unique prior
that has some desirable property, like representing minimal prior
information about the estimated quantity.
Uninformative priors are appealing because they seem more
objective. But I am generally in favor of using informative priors.
Why? First, Bayesian analysis is always based on
modeling decisions. Choosing the prior is one of those decisions, but
it is not the only one, and it might not even be the most subjective.
So even if an uninformative prior is more objective, the entire analysis is still subjective.
Also, for most practical problems, you are likely to be in one of two
situations
Step24: Exercise
Step25: Exercise
Step26: Exercise
Step27: For simplicity, let's assume that all families in the 4+ category have exactly 4 children.
Step28: Exercise | Python Code:
# If we're running on Colab, install empiricaldist
# https://pypi.org/project/empiricaldist/
import sys
IN_COLAB = 'google.colab' in sys.modules
if IN_COLAB:
!pip install empiricaldist
# Get utils.py
from os.path import basename, exists
def download(url):
filename = basename(url)
if not exists(filename):
from urllib.request import urlretrieve
local, _ = urlretrieve(url, filename)
print('Downloaded ' + local)
download('https://github.com/AllenDowney/ThinkBayes2/raw/master/soln/utils.py')
from utils import set_pyplot_params
set_pyplot_params()
Explanation: Estimating Counts
Think Bayes, Second Edition
Copyright 2020 Allen B. Downey
License: Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)
End of explanation
import numpy as np
from empiricaldist import Pmf
hypos = np.arange(1, 1001)
prior = Pmf(1, hypos)
Explanation: In the previous chapter we solved problems that involve estimating proportions.
In the Euro problem, we estimated the probability that a coin lands heads up, and in the exercises, you estimated a batting average, the fraction of people who cheat on their taxes, and the chance of shooting down an invading alien.
Clearly, some of these problems are more realistic than others, and some are more useful than others.
In this chapter, we'll work on problems related to counting, or estimating the size of a population.
Again, some of the examples will seem silly, but some of them, like the German Tank problem, have real applications, sometimes in life and death situations.
The Train Problem
I found the train problem
in Frederick Mosteller's Fifty Challenging Problems in
Probability with Solutions:
"A railroad numbers its locomotives in order 1..N. One day you see a locomotive with the number 60. Estimate how many locomotives the railroad has."
Based on this observation, we know the railroad has 60 or more
locomotives. But how many more? To apply Bayesian reasoning, we
can break this problem into two steps:
What did we know about $N$ before we saw the data?
For any given value of $N$, what is the likelihood of seeing the data (a locomotive with number 60)?
The answer to the first question is the prior. The answer to the
second is the likelihood.
We don't have much basis to choose a prior, so we'll start with
something simple and then consider alternatives.
Let's assume that $N$ is equally likely to be any value from 1 to 1000.
Here's the prior distribution:
End of explanation
def update_train(pmf, data):
    """Update pmf based on new data."""
    hypos = pmf.qs
    likelihood = 1 / hypos
    impossible = (data > hypos)
    likelihood[impossible] = 0
    pmf *= likelihood
    pmf.normalize()
Explanation: Now let's figure out the likelihood of the data.
In a hypothetical fleet of $N$ locomotives, what is the probability that we would see number 60?
If we assume that we are equally likely to see any locomotive, the chance of seeing any particular one is $1/N$.
Here's the function that does the update:
End of explanation
data = 60
posterior = prior.copy()
update_train(posterior, data)
Explanation: This function might look familiar; it is the same as the update function for the dice problem in the previous chapter.
In terms of likelihood, the train problem is the same as the dice problem.
Here's the update:
End of explanation
from utils import decorate
posterior.plot(label='Posterior after train 60', color='C4')
decorate(xlabel='Number of trains',
ylabel='PMF',
title='Posterior distribution')
Explanation: Here's what the posterior looks like:
End of explanation
posterior.max_prob()
Explanation: Not surprisingly, all values of $N$ below 60 have been eliminated.
The most likely value, if you had to guess, is 60.
End of explanation
np.sum(posterior.ps * posterior.qs)
Explanation: That might not seem like a very good guess; after all, what are the chances that you just happened to see the train with the highest number?
Nevertheless, if you want to maximize the chance of getting
the answer exactly right, you should guess 60.
But maybe that's not the right goal.
An alternative is to compute the mean of the posterior distribution.
Given a set of possible quantities, $q_i$, and their probabilities, $p_i$, the mean of the distribution is:
$$\mathrm{mean} = \sum_i p_i q_i$$
Which we can compute like this:
End of explanation
posterior.mean()
Explanation: Or we can use the method provided by Pmf:
End of explanation
import pandas as pd
df = pd.DataFrame(columns=['Posterior mean'])
df.index.name = 'Upper bound'
for high in [500, 1000, 2000]:
hypos = np.arange(1, high+1)
pmf = Pmf(1, hypos)
update_train(pmf, data=60)
df.loc[high] = pmf.mean()
df
Explanation: The mean of the posterior is 333, so that might be a good guess if you want to minimize error.
If you played this guessing game over and over, using the mean of the posterior as your estimate would minimize the mean squared error over the long run.
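A quick way to see this numerically is to compare the expected squared error under the posterior for a few candidate guesses; the small helper below is an added illustration, not part of the book's code:
# Illustrative check (added): expected squared error under the posterior
# for the posterior mean versus the MAP estimate; the mean should be lower.
def expected_squared_error(pmf, guess):
    return np.sum(pmf.ps * (pmf.qs - guess)**2)

(expected_squared_error(posterior, posterior.mean()),
 expected_squared_error(posterior, posterior.max_prob()))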
Sensitivity to the Prior
The prior I used in the previous section is uniform from 1 to 1000, but I offered no justification for choosing a uniform distribution or that particular upper bound.
We might wonder whether the posterior distribution is sensitive to the prior.
With so little data---only one observation---it is.
This table shows what happens as we vary the upper bound:
End of explanation
df = pd.DataFrame(columns=['Posterior mean'])
df.index.name = 'Upper bound'
dataset = [30, 60, 90]
for high in [500, 1000, 2000]:
hypos = np.arange(1, high+1)
pmf = Pmf(1, hypos)
for data in dataset:
update_train(pmf, data)
df.loc[high] = pmf.mean()
df
Explanation: As we vary the upper bound, the posterior mean changes substantially.
So that's bad.
When the posterior is sensitive to the prior, there are two ways to proceed:
Get more data.
Get more background information and choose a better prior.
With more data, posterior distributions based on different priors tend to converge.
For example, suppose that in addition to train 60 we also see trains 30 and 90.
Here's how the posterior means depend on the upper bound of the prior, when we observe three trains:
End of explanation
alpha = 1.0
ps = hypos**(-alpha)
power = Pmf(ps, hypos, name='power law')
power.normalize()
Explanation: The differences are smaller, but apparently three trains are not enough for the posteriors to converge.
Power Law Prior
If more data are not available, another option is to improve the
priors by gathering more background information.
It is probably not reasonable to assume that a train-operating company with 1000 locomotives is just as likely as a company with only 1.
With some effort, we could probably find a list of companies that
operate locomotives in the area of observation.
Or we could interview an expert in rail shipping to gather information about the typical size of companies.
But even without getting into the specifics of railroad economics, we
can make some educated guesses.
In most fields, there are many small companies, fewer medium-sized companies, and only one or two very large companies.
In fact, the distribution of company sizes tends to follow a power law, as Robert Axtell reports in Science (http://www.sciencemag.org/content/293/5536/1818.full.pdf).
This law suggests that if there are 1000 companies with fewer than
10 locomotives, there might be 100 companies with 100 locomotives,
10 companies with 1000, and possibly one company with 10,000 locomotives.
Mathematically, a power law means that the number of companies with a given size, $N$, is proportional to $(1/N)^{\alpha}$, where $\alpha$ is a parameter that is often near 1.
We can construct a power law prior like this:
End of explanation
hypos = np.arange(1, 1001)
uniform = Pmf(1, hypos, name='uniform')
uniform.normalize()
Explanation: For comparison, here's the uniform prior again.
End of explanation
uniform.plot(color='C4')
power.plot(color='C1')
decorate(xlabel='Number of trains',
ylabel='PMF',
title='Prior distributions')
Explanation: Here's what a power law prior looks like, compared to the uniform prior:
End of explanation
dataset = [60]
update_train(uniform, dataset)
update_train(power, dataset)
Explanation: Here's the update for both priors.
End of explanation
uniform.plot(color='C4')
power.plot(color='C1')
decorate(xlabel='Number of trains',
ylabel='PMF',
title='Posterior distributions')
Explanation: And here are the posterior distributions.
End of explanation
df = pd.DataFrame(columns=['Posterior mean'])
df.index.name = 'Upper bound'
alpha = 1.0
dataset = [30, 60, 90]
for high in [500, 1000, 2000]:
hypos = np.arange(1, high+1)
ps = hypos**(-alpha)
power = Pmf(ps, hypos)
for data in dataset:
update_train(power, data)
df.loc[high] = power.mean()
df
Explanation: The power law gives less prior probability to high values, which yields lower posterior means, and less sensitivity to the upper bound.
Here's how the posterior means depend on the upper bound when we use a power law prior and observe three trains:
End of explanation
power.prob_le(100)
Explanation: Now the differences are much smaller. In fact,
with an arbitrarily large upper bound, the mean converges on 134.
So the power law prior is more realistic, because it is based on
general information about the size of companies, and it behaves better in practice.
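As a rough check of that convergence claim, we can rebuild the power law posterior with a much larger upper bound (an added sketch reusing update_train); the mean should land close to the quoted 134:
# Illustrative check (added): power law prior with a much larger upper bound.
alpha = 1.0
hypos_big = np.arange(1, 100001)
power_big = Pmf(hypos_big**(-alpha), hypos_big)
power_big.normalize()
for data in [30, 60, 90]:
    update_train(power_big, data)
power_big.mean()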
Credible Intervals
So far we have seen two ways to summarize a posterior distribution: the value with the highest posterior probability (the MAP) and the posterior mean.
These are both point estimates, that is, single values that estimate the quantity we are interested in.
Another way to summarize a posterior distribution is with percentiles.
If you have taken a standardized test, you might be familiar with percentiles.
For example, if your score is the 90th percentile, that means you did as well as or better than 90% of the people who took the test.
If we are given a value, x, we can compute its percentile rank by finding all values less than or equal to x and adding up their probabilities.
Pmf provides a method that does this computation.
So, for example, we can compute the probability that the company has less than or equal to 100 trains:
End of explanation
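As an added illustration, the same percentile rank can be computed by hand from the quantities and probabilities, which should match the prob_le result above:
# Illustrative check (added): add up the probabilities of all quantities <= 100.
np.sum(power.ps[power.qs <= 100])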
def quantile(pmf, prob):
    """Compute a quantile with the given prob."""
    total = 0
    for q, p in pmf.items():
        total += p
        if total >= prob:
            return q
    return np.nan
Explanation: With a power law prior and a dataset of three trains, the result is about 29%.
So 100 trains is the 29th percentile.
Going the other way, suppose we want to compute a particular percentile; for example, the median of a distribution is the 50th percentile.
We can compute it by adding up probabilities until the total exceeds 0.5.
Here's a function that does it:
End of explanation
quantile(power, 0.5)
Explanation: The loop uses items, which iterates the quantities and probabilities in the distribution.
Inside the loop we add up the probabilities of the quantities in order.
When the total equals or exceeds prob, we return the corresponding quantity.
This function is called quantile because it computes a quantile rather than a percentile.
The difference is the way we specify prob.
If prob is a percentage between 0 and 100, we call the corresponding quantity a percentile.
If prob is a probability between 0 and 1, we call the corresponding quantity a quantile.
Here's how we can use this function to compute the 50th percentile of the posterior distribution:
End of explanation
power.quantile([0.05, 0.95])
Explanation: The result, 113 trains, is the median of the posterior distribution.
Pmf provides a method called quantile that does the same thing.
We can call it like this to compute the 5th and 95th percentiles:
End of explanation
power.credible_interval(0.9)
Explanation: The result is the interval from 91 to 243 trains, which implies:
The probability is 5% that the number of trains is less than or equal to 91.
The probability is 5% that the number of trains is greater than 243.
Therefore the probability is 90% that the number of trains falls between 91 and 243 (excluding 91 and including 243).
For this reason, this interval is called a 90% credible interval.
Pmf also provides credible_interval, which computes an interval that contains the given probability.
End of explanation
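As another added check, the 90% credible interval should contain about 90% of the posterior probability (excluding 91 and including 243):
# Illustrative check (added): probability mass inside the interval above.
power.prob_le(243) - power.prob_le(91)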
# Solution
# I'll use a uniform prior from 1 to 2000
# (we'll see that the probability is small that there are
# more than 2000 people in the room)
hypos = np.arange(1, 2000, 10)
prior = Pmf(1, hypos)
prior.normalize()
# Solution
# We can use the binomial distribution to compute the probability
# of the data for each hypothetical audience size
from scipy.stats import binom
likelihood1 = binom.pmf(2, hypos, 1/365)
likelihood2 = binom.pmf(1, hypos, 1/365)
likelihood3 = binom.pmf(0, hypos, 1/365)
# Solution
# Here's the update
posterior = prior * likelihood1 * likelihood2 * likelihood3
posterior.normalize()
# Solution
# And here's the posterior distribution
posterior.plot(color='C4', label='posterior')
decorate(xlabel='Number of people in the audience',
ylabel='PMF')
# Solution
# If we have to guess the audience size,
# we might use the posterior mean
posterior.mean()
# Solution
# And we can use prob_gt to compute the probability
# of exceeding the capacity of the room.
# It's about 1%, which may or may not satisfy the fire marshal
posterior.prob_gt(1200)
Explanation: The German Tank Problem
During World War II, the Economic Warfare Division of the American
Embassy in London used statistical analysis to estimate German
production of tanks and other equipment.
The Western Allies had captured log books, inventories, and repair
records that included chassis and engine serial numbers for individual
tanks.
Analysis of these records indicated that serial numbers were allocated
by manufacturer and tank type in blocks of 100 numbers, that numbers
in each block were used sequentially, and that not all numbers in each
block were used. So the problem of estimating German tank production
could be reduced, within each block of 100 numbers, to a form of the
train problem.
Based on this insight, American and British analysts produced
estimates substantially lower than estimates from other forms
of intelligence. And after the war, records indicated that they were
substantially more accurate.
They performed similar analyses for tires, trucks, rockets, and other
equipment, yielding accurate and actionable economic intelligence.
The German tank problem is historically interesting; it is also a nice
example of real-world application of statistical estimation.
For more on this problem, see this Wikipedia page and Ruggles and Brodie, "An Empirical Approach to Economic Intelligence in World War II", Journal of the American Statistical Association, March 1947, available here.
Informative Priors
Among Bayesians, there are two approaches to choosing prior
distributions. Some recommend choosing the prior that best represents
background information about the problem; in that case the prior
is said to be informative. The problem with using an informative
prior is that people might have different information or
interpret it differently. So informative priors might seem arbitrary.
The alternative is a so-called uninformative prior, which is
intended to be as unrestricted as possible, in order to let the data
speak for itself. In some cases you can identify a unique prior
that has some desirable property, like representing minimal prior
information about the estimated quantity.
Uninformative priors are appealing because they seem more
objective. But I am generally in favor of using informative priors.
Why? First, Bayesian analysis is always based on
modeling decisions. Choosing the prior is one of those decisions, but
it is not the only one, and it might not even be the most subjective.
So even if an uninformative prior is more objective, the entire analysis is still subjective.
Also, for most practical problems, you are likely to be in one of two
situations: either you have a lot of data or not very much. If you have a lot of data, the choice of the prior doesn't matter;
informative and uninformative priors yield almost the same results.
If you don't have much data, using relevant background information (like the power law distribution) makes a big difference.
And if, as in the German tank problem, you have to make life and death
decisions based on your results, you should probably use all of the
information at your disposal, rather than maintaining the illusion of
objectivity by pretending to know less than you do.
Summary
This chapter introduces the train problem, which turns out to have the same likelihood function as the dice problem, and which can be applied to the German Tank problem.
In all of these examples, the goal is to estimate a count, or the size of a population.
In the next chapter, I'll introduce "odds" as an alternative to probabilities, and Bayes's Rule as an alternative form of Bayes's Theorem.
We'll compute distributions of sums and products, and use them to estimate the number of Members of Congress who are corrupt, among other problems.
But first, you might want to work on these exercises.
Exercises
Exercise: Suppose you are giving a talk in a large lecture hall and the fire marshal interrupts because they think the audience exceeds 1200 people, which is the safe capacity of the room.
You think there are fewer than 1200 people, and you offer to prove it.
It would take too long to count, so you try an experiment:
You ask how many people were born on May 11 and two people raise their hands.
You ask how many were born on May 23 and 1 person raises their hand.
Finally, you ask how many were born on August 1, and no one raises their hand.
How many people are in the audience? What is the probability that there are more than 1200 people?
Hint: Remember the binomial distribution.
End of explanation
# Solution
hypos = np.arange(4, 11)
prior = Pmf(1, hypos)
# Solution
# The probability that the second rabbit is the same as the first is 1/N
# The probability that the third rabbit is different is (N-1)/N
N = hypos
likelihood = (N-1) / N**2
# Solution
posterior = prior * likelihood
posterior.normalize()
posterior.bar(alpha=0.7)
decorate(xlabel='Number of rabbits',
ylabel='PMF',
title='The Rabbit Problem')
Explanation: Exercise: I often see rabbits in the garden behind my house, but it's not easy to tell them apart, so I don't really know how many there are.
Suppose I deploy a motion-sensing camera trap that takes a picture of the first rabbit it sees each day. After three days, I compare the pictures and conclude that two of them are the same rabbit and the other is different.
How many rabbits visit my garden?
To answer this question, we have to think about the prior distribution and the likelihood of the data:
I have sometimes seen four rabbits at the same time, so I know there are at least that many. I would be surprised if there were more than 10. So, at least as a starting place, I think a uniform prior from 4 to 10 is reasonable.
To keep things simple, let's assume that all rabbits who visit my garden are equally likely to be caught by the camera trap in a given day. Let's also assume it is guaranteed that the camera trap gets a picture every day.
End of explanation
# Solution
# Here's the prior distribution of sentences
hypos = np.arange(1, 4)
prior = Pmf(1/3, hypos)
prior
# Solution
# If you visit a prison at a random point in time,
# the probability of observing any given prisoner
# is proportional to the duration of their sentence.
likelihood = hypos
posterior = prior * likelihood
posterior.normalize()
posterior
# Solution
# The mean of the posterior is the average sentence.
# We can divide by 2 to get the average remaining sentence.
posterior.mean() / 2
Explanation: Exercise: Suppose that in the criminal justice system, all prison sentences are either 1, 2, or 3 years, with an equal number of each. One day, you visit a prison and choose a prisoner at random. What is the probability that they are serving a 3-year sentence? What is the average remaining sentence of the prisoners you observe?
End of explanation
import matplotlib.pyplot as plt
qs = [1, 2, 3, 4]
ps = [22, 41, 24, 14]
prior = Pmf(ps, qs)
prior.bar(alpha=0.7)
plt.xticks(qs, ['1 child', '2 children', '3 children', '4+ children'])
decorate(ylabel='PMF',
title='Distribution of family size')
Explanation: Exercise: If I chose a random adult in the U.S., what is the probability that they have a sibling? To be precise, what is the probability that their mother has had at least one other child.
This article from the Pew Research Center provides some relevant data.
From it, I extracted the following distribution of family size for mothers in the U.S. who were 40-44 years old in 2014:
End of explanation
# Solution
# When you choose a person at random, you are more likely to get someone
# from a bigger family; in fact, the chance of choosing someone from
# any given family is proportional to the number of children
likelihood = qs
posterior = prior * likelihood
posterior.normalize()
posterior
# Solution
# The probability that they have a sibling is the probability
# that they do not come from a family of 1
1 - posterior[1]
# Solution
# Or we could use prob_gt again
posterior.prob_gt(1)
Explanation: For simplicity, let's assume that all families in the 4+ category have exactly 4 children.
End of explanation
# Solution
hypos = [200, 2000]
prior = Pmf(1, hypos)
# Solution
likelihood = 1/prior.qs
posterior = prior * likelihood
posterior.normalize()
posterior
# Solution
# According to this analysis, the probability is about 91% that our
# civilization will be short-lived.
# But this conclusion is based on a dubious prior.
# And with so little data, the posterior depends strongly on the prior.
# To see that, run this analysis again with a different prior,
# and see what the results look like.
# What do you think of the Doomsday argument?
Explanation: Exercise: The Doomsday argument is "a probabilistic argument that claims to predict the number of future members of the human species given an estimate of the total number of humans born so far."
Suppose there are only two kinds of intelligent civilizations that can happen in the universe. The "short-lived" kind go extinct after only 200 billion individuals are born. The "long-lived" kind survive until 2,000 billion individuals are born.
And suppose that the two kinds of civilization are equally likely.
Which kind of civilization do you think we live in?
The Doomsday argument says we can use the total number of humans born so far as data.
According to the Population Reference Bureau, the total number of people who have ever lived is about 108 billion.
Since you were born quite recently, let's assume that you are, in fact, human being number 108 billion.
If $N$ is the total number who will ever live and we consider you to be a randomly-chosen person, it is equally likely that you could have been person 1, or $N$, or any number in between.
So what is the probability that you would be number 108 billion?
Given this data and dubious prior, what is the probability that our civilization will be short-lived?
End of explanation |
14,352 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Computer Vision for Image Feature Extraction
Step1: A smaller size allows computation intensive algoritms to run reasonably on a PC
Step2: Part of the canny edge detection algorithm is a gaussian blur.
This image is just to demonstrate that gaussian blur reduces noise while keeping major edges intact.
Step3: Hough Transforms
Finds circles (or ellipsis) in images
Step4: This is largely a sorting and aggregation step
Step5: Equalizes the histogram of the image to stretch image across the spectrum
Step6: On the server the center (cx, cy) and the radius is retrieved from the user
Step7: We surveyed several algorithms
Step8: Circle estimation is a suboptimal measure for symmetry, but sufficient for our purposes. | Python Code:
%pylab inline
from skimage import io
import matplotlib.pyplot as plt
Explanation: Computer Vision for Image Feature Extraction
End of explanation
image = io.imread('uploads/df947b7905ec613a239a1c4d531e8eab45ccbd6d.jpg')
from skimage.transform import rescale
small = rescale(image, 0.1)
imshow(small)
from skimage.color import rgb2gray
gray = rgb2gray(small)
imshow(gray)
Explanation: A smaller size allows computation intensive algoritms to run reasonably on a PC
End of explanation
from skimage.filters import gaussian_filter
blurry = gaussian_filter(gray, 2)
imshow(blurry)
from skimage.feature import canny
# Don't use the blurry image
edges = canny(gray, sigma=2)
imshow(edges)
Explanation: Part of the canny edge detection algorithm is a gaussian blur.
This image is just to demonstrate that gaussian blur reduces noise while keeping major edges intact.
End of explanation
from skimage.transform import hough_circle
hough_radii = np.arange(15, 30)
hough_res = hough_circle(edges, hough_radii)
Explanation: Hough Transforms
Finds circles (or ellipsis) in images
End of explanation
from skimage.feature import peak_local_max
from skimage.draw import circle_perimeter
from skimage.color import gray2rgb
centers = []
accums = []
radii = []
for radius, h in zip(hough_radii, hough_res):
# For each radius, extract two circles
num_peaks = 2
peaks = peak_local_max(h, num_peaks=num_peaks)
centers.extend(peaks)
accums.extend(h[peaks[:, 0], peaks[:, 1]])
radii.extend([radius] * num_peaks)
coin_center = 0
coin_radius = 0
# Draw the most prominent 5 circles
gray_copy = gray2rgb(gray)
for idx in np.argsort(accums)[::-1][:1]:
coin_center = centers[idx]
center_x, center_y = centers[idx]
coin_radius = radii[idx]
cx, cy = circle_perimeter(center_y, center_x, coin_radius)
gray_copy[cy, cx] = (220, 20, 20)
imshow(gray_copy)
Explanation: This is largely a sorting and aggregation step
End of explanation
from skimage.exposure import equalize_hist
equal = equalize_hist(gray)
imshow(equal)
Explanation: Equalizes the histogram of the image to stretch image across the spectrum
End of explanation
y,x = np.ogrid[:gray.shape[0],:gray.shape[1]]
cx = 90
cy = 45
radius = 30
r2 = (x-cx)*(x-cx) + (y-cy)*(y-cy)
mask = r2 <= radius * radius
Explanation: On the server the center (cx, cy) and the radius is retrieved from the user
End of explanation
from skimage.feature import canny
mole_edge = canny(equal, sigma=2, mask=mask)
imshow(mole_edge)
from skimage.measure import find_contours
contours = find_contours(mole_edge, 0.9, fully_connected='high')
fig, ax = plt.subplots()
ax.imshow(equal, interpolation='nearest', cmap=plt.cm.gray)
ax.plot(contours[0][:, 1], contours[0][:, 0], linewidth=1)
from mahotas.polygon import fill_polygon
from skimage.transform import resize
canvas = np.zeros((gray.shape[0], gray.shape[1]))
fill_polygon(contours[0].astype(np.int), canvas)
import numpy.ma as ma
from skimage.color import rgb2hsv
hsv = rgb2hsv(small)
deviations = []
for color in (0,1,2):
masked = ma.array(hsv[:,:,color], mask=~canvas.astype(np.bool))
deviations.append(masked.std())
print(deviations)
Explanation: We surveyed several algorithms:
Canny -> Marching Squares -> Fill Polygon
Faired the best
End of explanation
from skimage.measure import CircleModel
circle_model = CircleModel()
circle_model.estimate(contours[0])
symmetry = circle_model.residuals(contours[0]).mean()
print(symmetry)
diameter = (19.05 / coin_radius) * (circle_model.params[2])
print(diameter )
from skimage.draw import circle
from skimage.color import gray2rgb
gray_copy = gray2rgb(gray)
cx, cy = circle(circle_model.params[1], circle_model.params[0], circle_model.params[2])
gray_copy[cy, cx] = (220, 20, 20)
center_x, center_y = coin_center
cx, cy = circle(center_y, center_x, coin_radius)
gray_copy[cy, cx] = (220, 20, 20)
imshow(gray_copy)
Explanation: Circle estimation is a suboptimal measure for symmetry, but sufficient for our purposes.
End of explanation |
14,353 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tutorial "Algorithmic Methods for Network Analysis with NetworKit" (Part 1)
Welcome to the hands-on session of our tutorial! This tutorial is based on the user guide of NetworKit, our network analysis software. You will learn in this tutorial how to use NetworKit for fundamental tasks in network analysis.
NetworKit can run in your browser (thanks to IPython notebooks) and is still very fast (thanks to C++ code in the background). It is easy to mix text with code and solutions in this environment. Thus, you should be able to obtain your results in a convenient and quick manner. This is not only true for the rather small graphs we use for this tutorial, but for larger instances in production runs as well. In particular you can mix text, code, plots and other rich media in this environment. Since this allows a simplified execution and interpretation of experiments, the interactive approach followed by NetworKit can simplify the cyclic algorithm engineering process significantly (without compromising algorithm performance).
Preparation
Let's start by making NetworKit available in your session. Click into the cell below and hit space-return or click the "Play" button or select "Cell -> Run" in the menu.
Step1: In case a Python warning appears that recommends an update to Python 3.4, simply ignore it for this tutorial. Python 3.3 works just as fine for our purposes.
IPython lets us use familiar shell commands in a Python interpreter. Use one of them now to change into the main directory of your NetworKit installation
Step2: Reading Graphs
Let us start by reading a network from a file on disk
Step3: In the course of this tutorial, we are going to work (among others) on the PGPgiantcompo network, a social network/web of trust in which nodes are PGP keys and an edge represents a signature from one key on another (web of trust). It is distributed with NetworKit as a good starting point.
The Graph Object
Graph is the central class of NetworKit. An object of this type represents an optionally weighted network. In this tutorial we work with undirected graphs, but NetworKit supports directed graphs as well.
Let us inspect several of the methods which the class provides. Maybe the most basic information is the number of nodes and edges in as well as the name of the network.
Step4: NetworKit stores nodes simply as integer indices. Edges are pairs of such indices. The following prints the indices of the first ten nodes and edges, respectively.
Step5: Another very useful feature is to determine if an edge is present and what its weight is. In case of unweighted graphs, edges have the default weight 1.
Step6: Many graph algorithms can be expressed with iterators over nodes or edges. As an example, let us iterate over the nodes to determine how many of them have more than 100 neighbors.
Step7: Interesting Features of a Network
Let us become more concrete | Python Code:
from networkit import *
%matplotlib inline
Explanation: Tutorial "Algorithmic Methods for Network Analysis with NetworKit" (Part 1)
Welcome to the hands-on session of our tutorial! This tutorial is based on the user guide of NetworKit, our network analysis software. You will learn in this tutorial how to use NetworKit for fundamental tasks in network analysis.
NetworKit can run in your browser (thanks to IPython notebooks) and is still very fast (thanks to C++ code in the background). It is easy to mix text with code and solutions in this environment. Thus, you should be able to obtain your results in a convenient and quick manner. This is not only true for the rather small graphs we use for this tutorial, but for larger instances in production runs as well. In particular you can mix text, code, plots and other rich media in this environment. Since this allows a simplified execution and interpretation of experiments, the interactive approach followed by NetworKit can simplify the cyclic algorithm engineering process significantly (without compromising algorithm performance).
Preparation
Let's start by making NetworKit available in your session. Click into the cell below and hit space-return or click the "Play" button or select "Cell -> Run" in the menu.
End of explanation
cd ~/Documents/workspace/NetworKit/
Explanation: In case a Python warning appears that recommends an update to Python 3.4, simply ignore it for this tutorial. Python 3.3 works just as fine for our purposes.
IPython lets us use familiar shell commands in a Python interpreter. Use one of them now to change into the main directory of your NetworKit installation:
End of explanation
G = readGraph("input/PGPgiantcompo.graph", Format.METIS)
Explanation: Reading Graphs
Let us start by reading a network from a file on disk: PGPgiantcompo.graph. NetworKit supports a number of popular graph file formats, among them the METIS adjacency list format. There is a convenient function in the top namespace to read a graph from a file:
End of explanation
n = G.numberOfNodes()
m = G.numberOfEdges()
print(n, m)
G.toString()
Explanation: In the course of this tutorial, we are going to work (among others) on the PGPgiantcompo network, a social network/web of trust in which nodes are PGP keys and an edge represents a signature from one key on another (web of trust). It is distributed with NetworKit as a good starting point.
The Graph Object
Graph is the central class of NetworKit. An object of this type represents an optionally weighted network. In this tutorial we work with undirected graphs, but NetworKit supports directed graphs as well.
Let us inspect several of the methods which the class provides. Maybe the most basic information is the number of nodes and edges in as well as the name of the network.
End of explanation
V = G.nodes()
print(V[:10])
E = G.edges()
print(E[:10])
Explanation: NetworKit stores nodes simply as integer indices. Edges are pairs of such indices. The following prints the indices of the first ten nodes and edges, respectively.
End of explanation
edgeExists = G.hasEdge(42,11)
if edgeExists:
print("Weight of existing edge:", G.weight(42,11))
print("Weight of nonexisting edge:", G.weight(42,12))
Explanation: Another very useful feature is to determine if an edge is present and what its weight is. In case of unweighted graphs, edges have the default weight 1.
End of explanation
count = 0 # counts number of nodes with more than 100 neighbors
for v in G.nodes():
if G.degree(v) > 100:
count = count + 1
print("Number of nodes with more than 100 neighbors: ", count)
Explanation: Many graph algorithms can be expressed with iterators over nodes or edges. As an example, let us iterate over the nodes to determine how many of them have more than 100 neighbors.
End of explanation
# Enter code for Q&A Session #1 here
Explanation: Interesting Features of a Network
Let us become more concrete: In the talk that accompanies this tutorial you learned about basic network features. Go back to the 'Analytics' section of the slides and answer the following questions within the box below, including the code which found your answer (click on the box to enter text). If you need information on method prototypes, you have at least two options: Use the built-in code completion (tab) or the project website, which offers documentation in the form of an automatically generated reference: https://networkit.iti.kit.edu/documentation/ (Python/C++ Documentation in the left navigation bar).
After you answered the questions, go on with Tutorial #2.
Q&A Session #1
Who (which vertex) has the least/most 'friends' and how many?
Answer:
How many neighbors does a vertex have on average?
Answer:
Does the degree distribution follow a power law?
Answer:
How often is the friend of a friend also a friend? Let's go for the average fraction here, other definitions are possible...
Answer:
How many connected components does the graph have?
Answer:
End of explanation |
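One possible starting sketch for the first two Q&A questions, using only the Graph methods introduced above (the remaining questions need other NetworKit modules, e.g. for components and clustering, whose exact API depends on the NetworKit version, so they are left out here):
# Sketch (an assumption, not the tutorial's reference solution):
# min/max degree vertices and the average degree, via basic Graph methods.
degrees = [G.degree(v) for v in G.nodes()]
vMin = min(G.nodes(), key=G.degree)   # vertex with the fewest 'friends'
vMax = max(G.nodes(), key=G.degree)   # vertex with the most 'friends'
print(vMin, G.degree(vMin), vMax, G.degree(vMax))
print(sum(degrees) / len(degrees))    # average degree, also equal to 2*m/n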
14,354 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Young's experiment with wave trains
Initial considerations
Step1: When we studied Young's experiment, we assumed that it was illuminated with monochromatic radiation. In that case, the positions of the irradiance maxima and minima were given by,
<div class="alert alert-error">
Irradiance maxima. $\delta = 2 m \pi \implies \frac{a x}{D} = m \lambda \implies$
$$x_{max} = \frac{m \lambda D}{a}$$
</div>
<div class="alert alert-error">
Irradiance minima. $\delta = (2 m + 1)\pi \implies \frac{a x}{D} = (m + 1/2) \lambda \implies$
$$x_{min} = \frac{(m + 1/2) \lambda D}{a}$$
</div>
As the formulas above show, the positions of the maxima and minima depend on the wavelength $\lambda$. When we illuminate the same experiment with non-monochromatic radiation, we can consider that each wavelength in the spectrum of the radiation forms its own interference pattern. But each pattern has its maxima at slightly different positions. This leads to a reduction of the contrast and, eventually, to the disappearance of the interference fringes. We will study this process in more detail, looking first at a better approximation than a monochromatic wave to the radiation emitted by real light sources, and then at how this kind of radiation affects the interference in a Young experiment.
Wave trains
Although the abstraction of a monochromatic wave is extremely useful, real light sources do not emit such radiation. The reason is simple
Step2: Coherence length
The time over which the phase of the wave stays constant (the time between consecutive jumps) is called the coherence time, which we denote $t_c$.
If we look at the wave train in space, we see a picture similar to the one above: a sinusoid with a period equal to the wavelength $\lambda$ and with phase jumps every certain distance. That distance is called the coherence length ($l_c$), and it is related to the coherence time by
$$l_c = c t_c$$
where $c$ is the speed of light.
Spectral width
A wave train is no longer perfectly monochromatic radiation, that is, radiation with a single wavelength or frequency; it acquires a certain spectral width. We can understand this by noting that, because of the random phase jumps, a wave train is no longer a pure sine or cosine and its time evolution becomes more complex.
The width in frequency (or in wavelength) of a wave train can be obtained with the Fourier transform. That analysis is beyond the scope of this course, but one result that emerges from it will be useful
Step3: What happens if we illuminate Young's experiment with this kind of radiation?
If we illuminate a double slit with a wave train like the one shown above, two waves with the same time evolution arrive at a given point on the screen, one of them delayed with respect to the other. This is due to the difference in optical path travelled by each wave train.
When we superpose both trains (one with a certain delay with respect to the other), the difference between the initial phases of the two waves depends on time. Moreover, since the phase jumps in the wave train are random, that phase difference also changes randomly. The following code shows this difference.
This random difference has a large effect on the total irradiance of the interference pattern.
Recordemos que la intensidad total viene dada por | Python Code:
from IPython.display import Image
Image(filename="EsquemaYoung.png")
Explanation: Young's experiment with wave trains
Initial considerations
End of explanation
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
plt.style.use('fivethirtyeight')
#import ipywidgets as widg
#from IPython.display import display
#####
#PARÁMETROS QUE SE PUEDEN MODIFICAR
#####
Lambda = 5e-7
c0 = 3e8
omega = 2*np.pi*c0/Lambda
T = 2*np.pi/omega
tau = 2*T
###########
time = np.linspace(0,18*T,600)
def campo(t,w,tau0):
numsaltos = (int)(np.floor(t[-1]/tau0))
phi = (np.random.random(numsaltos)-0.5)*4*np.pi
    phi_aux = np.array([np.ones(np.size(t)//numsaltos)*phi[i] for i in range(numsaltos)])  # integer division so np.ones gets an int size
phi_t = np.reshape(phi_aux,np.shape(phi_aux)[0]*np.shape(phi_aux)[1])
phi_t = np.pad(phi_t,(np.size(t)-np.size(phi_t),0),mode='edge')
e1 = np.cos(omega*t + phi_t)
fig,ax = plt.subplots(1,1,figsize=(8,4))
ax.plot(t,e1)
ax.set_xlabel('Tiempo (s)')
ax.set_ylabel('Campo (u.a.)')
return None
campo(time,omega,tau)
Explanation: When we studied Young's experiment, we assumed that it was illuminated with monochromatic radiation. In that case, the positions of the irradiance maxima and minima were given by,
<div class="alert alert-error">
Irradiance maxima. $\delta = 2 m \pi \implies \frac{a x}{D} = m \lambda \implies$
$$x_{max} = \frac{m \lambda D}{a}$$
</div>
<div class="alert alert-error">
Irradiance minima. $\delta = (2 m + 1)\pi \implies \frac{a x}{D} = (m + 1/2) \lambda \implies$
$$x_{min} = \frac{(m + 1/2) \lambda D}{a}$$
</div>
As the formulas above show, the positions of the maxima and minima depend on the wavelength $\lambda$. When we illuminate the same experiment with non-monochromatic radiation, we can consider that each wavelength in the spectrum of the radiation forms its own interference pattern. But each pattern has its maxima at slightly different positions. This leads to a reduction of the contrast and, eventually, to the disappearance of the interference fringes. We will study this process in more detail, looking first at a better approximation than a monochromatic wave to the radiation emitted by real light sources, and then at how this kind of radiation affects the interference in a Young experiment.
Wave trains
Although the abstraction of a monochromatic wave is extremely useful, real light sources do not emit such radiation. The reason is simple: a purely monochromatic wave (that is, a sine or a cosine) has neither beginning nor end, so emitting such a wave would require infinite energy.
The closest we can get to a monochromatic wave is a succession of harmonic wave trains separated from one another by random jumps in the phase of the wave.
The following code shows an example of this kind of wave train.
End of explanation
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
plt.style.use('fivethirtyeight')
import ipywidgets as widg
from IPython.display import display
#####
#PARÁMETROS QUE SE PUEDEN MODIFICAR
#####
Lambda = 5e-7
c0 = 3e8
omega = 2*np.pi*c0/Lambda
T = 2*np.pi/omega
time = np.linspace(0,30*T,1500)
tau = 2*T
###########
def campofft(t,w,tau0):
numsaltos = (int)(np.floor(t[-1]/tau0))
phi = (np.random.random(numsaltos)-0.5)*4*np.pi
    phi_aux = np.array([np.ones(np.size(t)//numsaltos)*phi[i] for i in range(numsaltos)])  # integer division so np.ones gets an int size
phi_t = np.reshape(phi_aux,np.shape(phi_aux)[0]*np.shape(phi_aux)[1])
phi_t = np.pad(phi_t,(np.size(t)-np.size(phi_t),0),mode='edge')
e1 = np.cos(omega*t + phi_t)
fig1,ax1 = plt.subplots(1,2,figsize=(10,4))
ax1[0].plot(t,e1)
ax1[0].set_title('Campo')
ax1[0].set_xlabel('tiempo (s)')
ax1[1].set_ylabel('E(t)')
freq = np.fft.fftfreq(t.shape[0],t[1]-t[0])
e1fft = np.fft.fft(e1)
ax1[1].plot(freq,np.abs(e1fft)**2)
ax1[1].set_xlim(0,0.1e16)
ax1[1].set_title('Espectro del campo')
ax1[1].vlines(omega/(2*np.pi),0,np.max(np.abs(e1fft)**2),'k')
return
campofft(time,omega,tau)
Explanation: Coherence length
The time over which the phase of the wave stays constant (the time between consecutive jumps) is called the coherence time, which we denote $t_c$.
If we look at the wave train in space, we see a picture similar to the one above: a sinusoid with a period equal to the wavelength $\lambda$ and with phase jumps every certain distance. That distance is called the coherence length ($l_c$), and it is related to the coherence time by
$$l_c = c t_c$$
where $c$ is the speed of light.
Spectral width
A wave train is no longer perfectly monochromatic radiation, that is, radiation with a single wavelength or frequency; it acquires a certain spectral width. We can understand this by noting that, because of the random phase jumps, a wave train is no longer a pure sine or cosine and its time evolution becomes more complex.
The width in frequency (or in wavelength) of a wave train can be obtained with the Fourier transform. That analysis is beyond the scope of this course, but one result that emerges from it will be useful: the relation between the spectral width (the range of frequencies present in the radiation, $\Delta \nu$) and the coherence time. This relation is
$$t_c \simeq \frac{1}{\Delta \nu}$$
Taking into account that $\nu = c/\lambda$, we can obtain the relation between the coherence length and the spectral width expressed in wavelengths,
$$l_c \simeq \frac{\lambda^2}{\Delta \lambda}$$
The relation above tells us that the longer the coherence length, the smaller the spectral width of the radiation, in other words, the more monochromatic it is.
End of explanation
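A small added numeric example of the last relation (the wavelength and spectral width below are assumed, LED-like values, not taken from the notebook):
# Illustrative example: coherence length and time for an assumed source
# centred at 600 nm with a 30 nm spectral width.
c0 = 3e8          # speed of light (m/s)
lam = 600e-9      # assumed central wavelength (m)
dlam = 30e-9      # assumed spectral width (m)
lc = lam**2 / dlam   # coherence length, ~1.2e-5 m
tc = lc / c0         # coherence time, ~4e-14 s
print(lc, tc)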
from matplotlib.pyplot import *
from numpy import *
%matplotlib inline
style.use('fivethirtyeight')
#####
#PARÁMETROS QUE SE PUEDEN MODIFICAR
#####
Lambda = 5e-7 # longitud de onda de la radiación de 500 nm
k = 2.0*pi/Lambda
D = 3.5# en metros
a = 0.003 # separación entre fuentes de 3 mm
DeltaLambda = 7e-8 # anchura espectral
###########
lc = (Lambda**2)/DeltaLambda
interfranja = Lambda*D/a
print ("interfranja",interfranja*1e3, "(mm)") # muestra el valor de la interfranja en mm
print( "longitud de coherencia", lc*1e6, "(um)") #muestra el valor de la long. de coherencia en um
x = linspace(-10*interfranja,10*interfranja,500)
I1 = 1 # Consideramos irradiancias normalizadas a un cierto valor.
I2 = 1
X,Y = meshgrid(x,x)
Delta = a*X/D
delta = k*Delta
gamma12 = (1 - np.abs(Delta)/lc)*(np.abs(Delta)<lc)
Itotal = I1 + I2 + 2.0*sqrt(I1*I2)*gamma12*cos(delta)
figure(figsize=(14,5))
subplot(121)
pcolormesh(x*1e3,x*1e3,Itotal,cmap = 'gray',vmin=0,vmax=4)
xlabel("x (mm)")
ylabel("y (mm)")
subplot(122)
plot(x*1e3,Itotal[(int)(x.shape[0]/2),:])
xlim(-5,5)
ylim(0,4)
xlabel("x (mm)")
ylabel("Irradiancia total normalizada");
Explanation: What happens if we illuminate Young's experiment with this kind of radiation?
If we illuminate a double slit with a wave train like the one shown above, two waves with the same time evolution arrive at a given point on the screen, one of them delayed with respect to the other. This is due to the difference in optical path travelled by each wave train.
When we superpose both trains (one with a certain delay with respect to the other), the difference between the initial phases of the two waves depends on time. Moreover, since the phase jumps in the wave train are random, that phase difference also changes randomly. The following code shows this difference.
This random difference has a large effect on the total irradiance of the interference pattern.
Recall that the total irradiance is given by:
$$ I_t = I_1 + I_2 + \epsilon_0 c n < E_1 E_2>_{\tau}$$
In the expression above we have kept, explicitly in the interference term, the time average of the scalar product of the interfering fields. This scalar product leads to
$$\int_0^\tau \cos(k_1 r - \omega t + \phi_1) \cos(k_2 r - \omega t + \phi_2) dt$$
which we can write in terms of the difference of initial phases $\phi_1 - \phi_2$. If this difference varies randomly during the time interval $\tau$, its average is zero and so is the interference term. The total irradiance is then
$$I_t = I_1 + I_2$$
That is, the interference fringes are lost. This happens when the path difference is large enough that the portions of the interfering wave trains with the same phase no longer overlap. Starting from the centre of the screen (zero phase difference between the interfering waves), the fringes are gradually washed out as we move outwards (the contrast decreases progressively) until they disappear completely (zero contrast). At that point the total irradiance is simply the sum of the irradiances of the interfering beams.
The fringes are completely lost when there is no overlap between the portions of the wave trains with the same phase, i.e., when the path difference exceeds the characteristic length of those portions. That length is simply the coherence length. Therefore, the interference is lost when
$$\Delta > l_c$$
where $\Delta$ denotes the path difference between the beams.
The following code shows the interference pattern when Young's experiment is illuminated with a wave train.
End of explanation |
14,355 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Predicting house prices using k-nearest neighbors regression
In this notebook, you will implement k-nearest neighbors regression. You will
Step1: Load in house sales data
For this notebook, we use a subset of the King County housing dataset created by randomly selecting 40% of the houses in the full dataset.
Step2: Import useful functions from previous notebooks
To efficiently compute pairwise distances among data points, we will convert the SFrame into a 2D Numpy array. First import the numpy library and then copy and paste get_numpy_data() from the second notebook of Week 2.
Step3: We will also need the normalize_features() function from Week 5 that normalizes all feature columns to unit norm. Paste this function below.
Step4: Split data into training, test, and validation sets
Step5: Extract features and normalize
Using all of the numerical inputs listed in feature_list, transform the training, test, and validation SFrames into Numpy arrays
Step6: In computing distances, it is crucial to normalize features. Otherwise, for example, the sqft_living feature (typically on the order of thousands) would exert a much larger influence on distance than the bedrooms feature (typically on the order of ones). We divide each column of the training feature matrix by its 2-norm, so that the transformed column has unit norm.
IMPORTANT
Step7: Compute a single distance
To start, let's just explore computing the "distance" between two given houses. We will take our query house to be the first house of the test set and look at the distance between this house and the 10th house of the training set.
To see the features associated with the query house, print the first row (index 0) of the test feature matrix. You should get an 18-dimensional vector whose components are between 0 and 1.
Step8: Now print the 10th row (index 9) of the training feature matrix. Again, you get an 18-dimensional vector with components between 0 and 1.
Step9: QUIZ QUESTION
What is the Euclidean distance between the query house and the 10th house of the training set?
Note
Step10: Compute multiple distances
Of course, to do nearest neighbor regression, we need to compute the distance between our query house and all houses in the training set.
To visualize this nearest-neighbor search, let's first compute the distance from our query house (features_test[0]) to the first 10 houses of the training set (features_train[0
Step11: QUIZ QUESTION
Among the first 10 training houses, which house is the closest to the query house?
Step12: It is computationally inefficient to loop over computing distances to all houses in our training dataset. Fortunately, many of the Numpy functions can be vectorized, applying the same operation over multiple values or vectors. We now walk through this process.
Consider the following loop that computes the element-wise difference between the features of the query house (features_test[0]) and the first 3 training houses (features_train[0
Step13: The subtraction operator (-) in Numpy is vectorized as follows
Step14: Note that the output of this vectorized operation is identical to that of the loop above, which can be verified below
Step15: Aside
Step16: To test the code above, run the following cell, which should output a value -0.0934339605842
Step17: The next step in computing the Euclidean distances is to take these feature-by-feature differences in diff, square each, and take the sum over feature indices. That is, compute the sum of square feature differences for each training house (row in diff).
By default, np.sum sums up everything in the matrix and returns a single number. To instead sum only over a row or column, we need to specify the axis parameter described in the np.sum documentation. In particular, axis=1 computes the sum across each row.
Below, we compute this sum of square feature differences for all training houses and verify that the output for the 16th house in the training set is equivalent to having examined only the 16th row of diff and computing the sum of squares on that row alone.
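A minimal sketch of that verification, assuming diff holds the element-wise differences described above:
# Sketch (assumed variable names): sum of squared feature differences per row,
# checked against the same quantity computed on the 16th row alone.
diff = features_train - features_test[0]
row_sums = np.sum(diff**2, axis=1)
print(row_sums[15])           # 16th house via the vectorized sum
print(np.sum(diff[15]**2))    # same value from that single row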
Step18: With this result in mind, write a single-line expression to compute the Euclidean distances between the query house and all houses in the training set. Assign the result to a variable distances.
Hint
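One possible one-liner, offered as a sketch rather than the assignment's reference answer:
# Sketch: Euclidean distances from the query house to every training house.
distances = np.sqrt(np.sum((features_train - features_test[0])**2, axis=1))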
Step19: To test the code above, run the following cell, which should output a value 0.0237082324496
Step20: Now you are ready to write a function that computes the distances from a query house to all training houses. The function should take two parameters
Step21: QUIZ QUESTIONS
Take the query house to be the third house of the test set (features_test[2]). What is the index of the house in the training set that is closest to this query house?
What is the predicted value of the query house based on 1-nearest neighbor regression?
Step22: Perform k-nearest neighbor regression
For k-nearest neighbors, we need to find a set of k houses in the training set closest to a given query house. We then make predictions based on these k nearest neighbors.
Fetch k-nearest neighbors
Using the functions above, implement a function that takes in
* the value of k;
* the feature matrix for the training houses; and
* the feature vector of the query house
and returns the indices of the k closest training houses. For instance, with 2-nearest neighbor, a return value of [5, 10] would indicate that the 6th and 11th training houses are closest to the query house.
Hint
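For instance, a minimal sketch using np.argsort (the function and parameter names here are assumptions, not the notebook's reference solution):
# Sketch: distances to all training houses, then indices of the k smallest.
def compute_distances(features_instances, features_query):
    # Euclidean distance from the query to every row of the feature matrix
    return np.sqrt(np.sum((features_instances - features_query)**2, axis=1))

def k_nearest_neighbors(k, feature_train, features_query):
    distances = compute_distances(feature_train, features_query)
    return np.argsort(distances)[:k]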
Step23: QUIZ QUESTION
Take the query house to be the third house of the test set (features_test[2]). What are the indices of the 4 training houses closest to the query house?
Step24: Make a single prediction by averaging k nearest neighbor outputs
Now that we know how to find the k-nearest neighbors, write a function that predicts the value of a given query house. For simplicity, take the average of the prices of the k nearest neighbors in the training set. The function should have the following parameters
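A sketch of the averaging step, building on the hypothetical k_nearest_neighbors sketch above (names are assumptions):
# Sketch: predict a query house's price as the mean of its k nearest neighbors.
def predict_output_of_query(k, features_train, output_train, features_query):
    neighbors = k_nearest_neighbors(k, features_train, features_query)
    return np.mean(output_train[neighbors])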
Step25: QUIZ QUESTION
Again taking the query house to be the third house of the test set (features_test[2]), predict the value of the query house using k-nearest neighbors with k=4 and the simple averaging method described and implemented above.
Step26: Compare this predicted value using 4-nearest neighbors to the predicted value using 1-nearest neighbor computed earlier.
Make multiple predictions
Write a function to predict the value of each and every house in a query set. (The query set can be any subset of the dataset, be it the test set or validation set.) The idea is to have a loop where we take each house in the query set as the query house and make a prediction for that specific house. The new function should take the following parameters
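One possible shape for that function, as a sketch reusing the hypothetical helpers above:
# Sketch: loop over every row of the query set and predict each one.
def predict_output(k, features_train, output_train, features_query):
    predictions = []
    for i in range(features_query.shape[0]):
        predictions.append(
            predict_output_of_query(k, features_train, output_train, features_query[i]))
    return predictions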
Step27: QUIZ QUESTION
Make predictions for the first 10 houses in the test set using k-nearest neighbors with k=10.
What is the index of the house in this query set that has the lowest predicted value?
What is the predicted value of this house?
Step28: Choosing the best value of k using a validation set
There remains a question of choosing the value of k to use in making predictions. Here, we use a validation set to choose this value. Write a loop that does the following | Python Code:
import graphlab
graphlab.product_key.set_product_key("C0C2-04B4-D94B-70F6-8771-86F9-C6E1-E122")
Explanation: Predicting house prices using k-nearest neighbors regression
In this notebook, you will implement k-nearest neighbors regression. You will:
* Find the k-nearest neighbors of a given query input
* Predict the output for the query input using the k-nearest neighbors
* Choose the best value of k using a validation set
Fire up GraphLab Create
End of explanation
sales = graphlab.SFrame('kc_house_data_small.gl/kc_house_data_small.gl')
Explanation: Load in house sales data
For this notebook, we use a subset of the King County housing dataset created by randomly selecting 40% of the houses in the full dataset.
End of explanation
import numpy as np # note this allows us to refer to numpy as np instead
def get_numpy_data(data_sframe, features, output):
data_sframe['constant'] = 1 # this is how you add a constant column to an SFrame
# add the column 'constant' to the front of the features list so that we can extract it along with the others:
features = ['constant'] + features # this is how you combine two lists
# select the columns of data_SFrame given by the features list into the SFrame features_sframe (now including constant):
features_sframe = data_sframe[features]
# the following line will convert the features_SFrame into a numpy matrix:
feature_matrix = features_sframe.to_numpy()
# assign the column of data_sframe associated with the output to the SArray output_sarray
output_sarray = data_sframe['price']
# the following will convert the SArray into a numpy array by first converting it to a list
output_array = output_sarray.to_numpy()
return(feature_matrix, output_array)
Explanation: Import useful functions from previous notebooks
To efficiently compute pairwise distances among data points, we will convert the SFrame into a 2D Numpy array. First import the numpy library and then copy and paste get_numpy_data() from the second notebook of Week 2.
End of explanation
def normalize_features(feature_matrix):
norms = np.linalg.norm(feature_matrix, axis=0)
features = feature_matrix / norms
return features, norms
Explanation: We will also need the normalize_features() function from Week 5 that normalizes all feature columns to unit norm. Paste this function below.
End of explanation
(train_and_validation, test) = sales.random_split(.8, seed=1) # initial train/test split
(train, validation) = train_and_validation.random_split(.8, seed=1) # split training set into training and validation sets
Explanation: Split data into training, test, and validation sets
End of explanation
feature_list = ['bedrooms',
'bathrooms',
'sqft_living',
'sqft_lot',
'floors',
'waterfront',
'view',
'condition',
'grade',
'sqft_above',
'sqft_basement',
'yr_built',
'yr_renovated',
'lat',
'long',
'sqft_living15',
'sqft_lot15']
features_train, output_train = get_numpy_data(train, feature_list, 'price')
features_test, output_test = get_numpy_data(test, feature_list, 'price')
features_valid, output_valid = get_numpy_data(validation, feature_list, 'price')
Explanation: Extract features and normalize
Using all of the numerical inputs listed in feature_list, transform the training, test, and validation SFrames into Numpy arrays:
End of explanation
features_train, norms = normalize_features(features_train) # normalize training set features (columns)
features_test = features_test / norms # normalize test set by training set norms
features_valid = features_valid / norms # normalize validation set by training set norms
Explanation: In computing distances, it is crucial to normalize features. Otherwise, for example, the sqft_living feature (typically on the order of thousands) would exert a much larger influence on distance than the bedrooms feature (typically on the order of ones). We divide each column of the training feature matrix by its 2-norm, so that the transformed column has unit norm.
IMPORTANT: Make sure to store the norms of the features in the training set. The features in the test and validation sets must be divided by these same norms, so that the training, test, and validation sets are normalized consistently.
End of explanation
print features_test[0]
Explanation: Compute a single distance
To start, let's just explore computing the "distance" between two given houses. We will take our query house to be the first house of the test set and look at the distance between this house and the 10th house of the training set.
To see the features associated with the query house, print the first row (index 0) of the test feature matrix. You should get an 18-dimensional vector whose components are between 0 and 1.
End of explanation
print features_train[9]
Explanation: Now print the 10th row (index 9) of the training feature matrix. Again, you get an 18-dimensional vector with components between 0 and 1.
End of explanation
print np.sqrt(np.sum((features_train[9]-features_test[0])**2))
Explanation: QUIZ QUESTION
What is the Euclidean distance between the query house and the 10th house of the training set?
Note: Do not use the np.linalg.norm function; use np.sqrt, np.sum, and the power operator (**) instead. The latter approach is more easily adapted to computing multiple distances at once.
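Purely as a cross-check (the note above rules np.linalg.norm out for the assignment itself), the library call should reproduce the same number:
print np.linalg.norm(features_train[9] - features_test[0])  # should agree with the value printed above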
End of explanation
for i in range(0,10):
print str(i) + " : " + str(np.sqrt(np.sum((features_train[i]-features_test[0])**2)))
Explanation: Compute multiple distances
Of course, to do nearest neighbor regression, we need to compute the distance between our query house and all houses in the training set.
To visualize this nearest-neighbor search, let's first compute the distance from our query house (features_test[0]) to the first 10 houses of the training set (features_train[0:10]) and then search for the nearest neighbor within this small set of houses. Through restricting ourselves to a small set of houses to begin with, we can visually scan the list of 10 distances to verify that our code for finding the nearest neighbor is working.
Write a loop to compute the Euclidean distance from the query house to each of the first 10 houses in the training set.
End of explanation
for i in range(0,10):
print str(i) + " : " + str(np.sqrt(np.sum((features_train[i]-features_test[2])**2)))
Explanation: QUIZ QUESTION
Among the first 10 training houses, which house is the closest to the query house?
End of explanation
for i in xrange(3):
print features_train[i]-features_test[0]
# should print 3 vectors of length 18
Explanation: It is computationally inefficient to loop over computing distances to all houses in our training dataset. Fortunately, many of the Numpy functions can be vectorized, applying the same operation over multiple values or vectors. We now walk through this process.
Consider the following loop that computes the element-wise difference between the features of the query house (features_test[0]) and the first 3 training houses (features_train[0:3]):
End of explanation
print features_train[0:3] - features_test[0]
Explanation: The subtraction operator (-) in Numpy is vectorized as follows:
End of explanation
# verify that vectorization works
results = features_train[0:3] - features_test[0]
print results[0] - (features_train[0]-features_test[0])
# should print all 0's if results[0] == (features_train[0]-features_test[0])
print results[1] - (features_train[1]-features_test[0])
# should print all 0's if results[1] == (features_train[1]-features_test[0])
print results[2] - (features_train[2]-features_test[0])
# should print all 0's if results[2] == (features_train[2]-features_test[0])
Explanation: Note that the output of this vectorized operation is identical to that of the loop above, which can be verified below:
End of explanation
diff = features_train[0:len(features_train)] - features_test[0]
Explanation: Aside: it is a good idea to write tests like this cell whenever you are vectorizing a complicated operation.
Perform 1-nearest neighbor regression
Now that we have the element-wise differences, it is not too hard to compute the Euclidean distances between our query house and all of the training houses. First, write a single-line expression to define a variable diff such that diff[i] gives the element-wise difference between the features of the query house and the i-th training house.
End of explanation
print diff[-1].sum() # sum of the feature differences between the query and last training house
# should print -0.0934339605842
Explanation: To test the code above, run the following cell, which should output a value -0.0934339605842:
End of explanation
print np.sum(diff**2, axis=1)[15] # take sum of squares across each row, and print the 16th sum
print np.sum(diff[15]**2) # print the sum of squares for the 16th row -- should be same as above
Explanation: The next step in computing the Euclidean distances is to take these feature-by-feature differences in diff, square each, and take the sum over feature indices. That is, compute the sum of square feature differences for each training house (row in diff).
By default, np.sum sums up everything in the matrix and returns a single number. To instead sum only over a row or column, we need to specify the axis parameter described in the np.sum documentation. In particular, axis=1 computes the sum across each row.
Below, we compute this sum of square feature differences for all training houses and verify that the output for the 16th house in the training set is equivalent to having examined only the 16th row of diff and computing the sum of squares on that row alone.
End of explanation
distances = np.sqrt(np.sum(diff**2, axis=1))
Explanation: With this result in mind, write a single-line expression to compute the Euclidean distances between the query house and all houses in the training set. Assign the result to a variable distances.
Hint: Do not forget to take the square root of the sum of squares.
End of explanation
print distances[100] # Euclidean distance between the query house and the 101th training house
# should print 0.0237082324496
Explanation: To test the code above, run the following cell, which should output a value 0.0237082324496:
End of explanation
def compute_distances(features_instances, features_query):
diff = features_instances[0:len(features_instances)] - features_query
distances = np.sqrt(np.sum(diff**2, axis=1))
return distances
Explanation: Now you are ready to write a function that computes the distances from a query house to all training houses. The function should take two parameters: (i) the matrix of training features and (ii) the single feature vector associated with the query.
End of explanation
distances = compute_distances(features_train, features_test[2])
min = distances[0]
index = 0
for i in xrange(len(distances)):
if(distances[i] < min):
min = distances[i]
index = i
print min
print index
print output_train[382]
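For reference only (not part of the original assignment), np.argmin collapses the search loop above into a single call:
print np.argmin(distances)                # index of the closest training house
print output_train[np.argmin(distances)]  # its price, i.e. the 1-nearest-neighbor prediction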
Explanation: QUIZ QUESTIONS
Take the query house to be the third house of the test set (features_test[2]). What is the index of the house in the training set that is closest to this query house?
What is the predicted value of the query house based on 1-nearest neighbor regression?
End of explanation
def k_nearest_neighbors(k, feature_train, features_query):
distances = compute_distances(features_train, features_query)
neighbors = np.argsort(distances)[0:k]
return neighbors
Explanation: Perform k-nearest neighbor regression
For k-nearest neighbors, we need to find a set of k houses in the training set closest to a given query house. We then make predictions based on these k nearest neighbors.
Fetch k-nearest neighbors
Using the functions above, implement a function that takes in
* the value of k;
* the feature matrix for the training houses; and
* the feature vector of the query house
and returns the indices of the k closest training houses. For instance, with 2-nearest neighbor, a return value of [5, 10] would indicate that the 6th and 11th training houses are closest to the query house.
Hint: Look at the documentation for np.argsort.
End of explanation
print k_nearest_neighbors(4, features_train, features_test[2])
Explanation: QUIZ QUESTION
Take the query house to be third house of the test set (features_test[2]). What are the indices of the 4 training houses closest to the query house?
End of explanation
def predict_output_of_query(k, features_train, output_train, features_query):
neighbors = k_nearest_neighbors(k, features_train, features_query)
prices = output_train[neighbors]
prediction = np.sum(prices)/k
return prediction
Explanation: Make a single prediction by averaging k nearest neighbor outputs
Now that we know how to find the k-nearest neighbors, write a function that predicts the value of a given query house. For simplicity, take the average of the prices of the k nearest neighbors in the training set. The function should have the following parameters:
* the value of k;
* the feature matrix for the training houses;
* the output values (prices) of the training houses; and
* the feature vector of the query house, whose price we are predicting.
The function should return a predicted value of the query house.
Hint: You can extract multiple items from a Numpy array using a list of indices. For instance, output_train[[6, 10]] returns the prices of the 7th and 11th training houses.
End of explanation
print predict_output_of_query(4, features_train, output_train, features_test[2])
Explanation: QUIZ QUESTION
Again taking the query house to be third house of the test set (features_test[2]), predict the value of the query house using k-nearest neighbors with k=4 and the simple averaging method described and implemented above.
End of explanation
def predict_output(k, features_train, output_train, features_query):
predictions = []
for i in xrange(len(features_query)):
prediction = predict_output_of_query(k, features_train, output_train, features_query[i])
predictions.append(prediction)
return predictions
Explanation: Compare this predicted value using 4-nearest neighbors to the predicted value using 1-nearest neighbor computed earlier.
Make multiple predictions
Write a function to predict the value of each and every house in a query set. (The query set can be any subset of the dataset, be it the test set or validation set.) The idea is to have a loop where we take each house in the query set as the query house and make a prediction for that specific house. The new function should take the following parameters:
* the value of k;
* the feature matrix for the training houses;
* the output values (prices) of the training houses; and
* the feature matrix for the query set.
The function should return a set of predicted values, one for each house in the query set.
Hint: To get the number of houses in the query set, use the .shape field of the query features matrix. See the documentation.
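A small aside (not from the original notebook): for a 2-D Numpy feature matrix the hint's .shape approach and plain len() agree, so either works for counting the query houses:
print features_test.shape[0]  # number of query houses, as the hint suggests
print len(features_test)      # same number; predict_output above uses len(features_query)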
End of explanation
print predict_output(10, features_train, output_train,features_test[0:10])
Explanation: QUIZ QUESTION
Make predictions for the first 10 houses in the test set using k-nearest neighbors with k=10.
What is the index of the house in this query set that has the lowest predicted value?
What is the predicted value of this house?
End of explanation
import matplotlib.pyplot as plt
%matplotlib inline
kvals = range(1, 16)
plt.plot(kvals, rss_all,'bo-')
Explanation: Choosing the best value of k using a validation set
There remains a question of choosing the value of k to use in making predictions. Here, we use a validation set to choose this value. Write a loop that does the following:
For k in [1, 2, ..., 15]:
Makes predictions for each house in the VALIDATION set using the k-nearest neighbors from the TRAINING set.
Computes the RSS for these predictions on the VALIDATION set
Stores the RSS computed above in rss_all
Report which k produced the lowest RSS on VALIDATION set.
(Depending on your computing environment, this computation may take 10-15 minutes.)
To visualize the performance as a function of k, plot the RSS on the VALIDATION set for each considered k value:
End of explanation |
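The plotting cell above references rss_all, which is never constructed in the code shown here. A minimal sketch of the missing validation loop (one possible implementation using predict_output() and the validation arrays defined earlier, not the notebook's own solution):
rss_all = []
for k in kvals:
    predictions = predict_output(k, features_train, output_train, features_valid)
    residuals = output_valid - np.array(predictions)
    rss_all.append((residuals ** 2).sum())
print kvals[int(np.argmin(rss_all))]  # k giving the lowest RSS on the VALIDATION set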
14,356 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
If, else, Logic, and Laziness
These commands are Pythons bread and butter! You would do well to pay attention to this lecture because 'if statements' are both very common and very very useful.
There are a few ways to use the if statement within your code, and each way has slightly different syntax. For now, we are going to focus on in-line expressions.
{value} if {condition} else {value_2}
* Caveat
Step1: I would recommend you call this function a few times, to ensure you understand it. Notice that "result" has different values depending on whether the number you gave is/is not divisible by 4.
Notice above when I was speaking of syntax I said the conditions have to be True/False. Let me just quickly show you that it is the case here
Step2: Logic Operators...
Before moving on, I should probably give you a bit more Python vocabulary
Step3: Why does '[0]' equal True and '[]' equal False? The answer is that when we ask Python ‘if a exists’ Python decides that statement is True if a is NOT empty. In other words, an empty string/list is basically the same as not existing and hence returns False. In this particular case exists([0]) is True because the list contains the element "0". This is also why [False] returns True; its True because the list contains the element "False".
The way this works is a bit tricky to get your hand around, but once you do get it you can start writing some really nice idiomatic code.
Readibility counts...
This is a minor detour, but I feel that I should not be simply teaching you guys how to do stuff, I should be trying to teach you guys how to do stuff in the most 'Pythonic' way possible. This is a good juncture to talk a little about style.
Consider the following code
Step4: And & Or
Now we are going to add to the complexity a little by explaining how we can combine expressions into larger ones.
Why might we want to do this?
Well say for instance you want to write some code that returns a number that is divisible by 5 OR divisible by 10. Maybe you want to write some code that checks if a number is odd AND also a perfect square (eg. 9, 25).
As it turns out, Python has the "and"/"or" commands and for the most part they will work how we understand them in English. But, perhaps we should make the effort to be precise. The table below tells you what the output of the operator is for all values of A and B.
Okay, cool. So how do we use and/or in Python? The good news is that syntax is super intuitive
Step6: A note on Python's "Lazy" evaluation...
Lets play a simple game. The rules
Step7: So in the above two experiments notice that in both cases the timesink_function returns True (after waiting 5 secounds). The only difference between the two experiments is the order in which we make the calls (true or wait, wait or true). Notice that wait or true took 5 secs to evaluate, whereas 'true or wait' took 0 secs.
Step8: In experiments 3 and 4 we are comparing 'False or wait' against 'wait or false'. In this case there is no difference in time. Thats because the Truth of 'False or X' or 'X or False' actually simplifies to 'X'. Python cannot take any shortcuts here, it has to workout what X is, and that takes 5 secs.
Step10: In Experiments 5 and 6 are just like 3 and 4, except that we are not looking at the "and" condition. There is once again a time difference. Python gets to be lazy in one case but not in the other.
In short, understanding Python’s lazy evaluation can speed up you code considerably (without any cost to readability). As a general rule, to make use of lazy evaluation all you need to do is put the thing that is fastest to evaluate first and the slowest thing last. For maximum efficiency you would need to figure out what it minimum about of work you need to do in order to arrive at an answer.
Homework
Your challenge this week is to evaluate the following formula as fast as possible | Python Code:
# Takes a number as input, prints whether that number is divisible by 4.
text = input("Give me integer... ")
result = "{} is{}divisible by 4".format(text, "" if int(text) % 4 == 0 else " NOT ")
# ^ this is the important bit here ^
print(result)
Explanation: If, else, Logic, and Laziness
These commands are Python's bread and butter! You would do well to pay attention to this lecture because 'if statements' are both very common and very very useful.
There are a few ways to use the if statement within your code, and each way has slightly different syntax. For now, we are going to focus on in-line expressions.
{value} if {condition} else {value_2}
* Caveat: {condition} must evaluate to a boolean value (True or False).
This code will return {value} if the condition is True. If the condition is False we return whatever is in the "else" bit of the code, i.e. {value_2}. To restate the logic in normal English:
"If the condition is true we do {this} but if the condition is False we do {that}".
Okay, lets show you how this works with an example. You guys remember how 'input' works, right?
End of explanation
print (10 % 4 == 0)
print (16 % 4 == 0)
# Note: "==" is NOT to be confused with "="
Explanation: I would recommend you call this function a few times, to ensure you understand it. Notice that "result" has different values depending on whether the number you gave is/is not divisible by 4.
Notice above when I was speaking of syntax I said the conditions have to be True/False. Let me just quickly show you that it is the case here:
End of explanation
def exists(x):
return True if x else False
print ("if 1 equates to...", exists(10))
print ("if \"\" equates to...", exists("")) # empty string
print ("if 0 equates to...", exists(0)) # remember True/False are 1/0 in Python
print ("if [] equates to...", exists([])) # empty list
print ("if [0] equates to...", exists([0])) # list contains 0, therefore list not empty
print ("if [False] equates to...", exists([False]))
Explanation: Logic Operators...
Before moving on, I should probably give you a bit more Python vocabulary:
SYMBOL + SYNTAX :: MEANING :: :: EXAMPLE ::
if a if a exists if a print(a)
if not a if a does not exist if not a print("NOOO!!!!")
a == b is a equal to b 10 == 10 is True, 5 == 10 is False.
a != b is a not equal to b 10 != 10 is False, 5 != 10 is True.
a > b is a greater than b 10 > 5 is True, 5 > 10 is False
a < b is a less than b 10 < 5 is False, 5 < 10 is True
a >= b is a greater or equal b 10 >= 10 is True
a <= b is a less or equal b 10 <= 10 is True
The above table has a bunch of logical operators, with meanings and examples. For example, ‘is a equal to b?’ which is the ‘==’ symbol. If you remember only one thing from today's lesson please let it be '=='. It comes up a lot, and I mean A LOT.
Anyway, I suspect the most complex of these commands to grasp is the simple 'if a'. Below I have a few more test cases to help you understand how it works. I also have a section on readability today, which will help explain why you will frequently see code like if a != [ ] * rewritten as if a.
End of explanation
a = input("\nGive me variable 'a' : ")
b = input("Now give me variable 'b' : ")
op = input("give me an operator (e.g '==', '!=', '>') : ")
string = "{} {} {}".format(a, op, b)
print("\nThe statement is {}...".format(string), "The statement is {}".format(eval(string)), sep="\n ")
Explanation: Why does '[0]' equal True and '[]' equal False? The answer is that when we ask Python ‘if a exists’ Python decides that statement is True if a is NOT empty. In other words, an empty string/list is basically the same as not existing and hence returns False. In this particular case exists([0]) is True because the list contains the element "0". This is also why [False] returns True; its True because the list contains the element "False".
The way this works is a bit tricky to get your head around, but once you do get it you can start writing some really nice idiomatic code.
Readability counts...
This is a minor detour, but I feel that I should not be simply teaching you guys how to do stuff, I should be trying to teach you guys how to do stuff in the most 'Pythonic' way possible. This is a good juncture to talk a little about style.
Consider the following code:
if variable != []:
{do something}
OR:
if variable == True:
{do something}
The first code snippet asks if 'variable' is an empty list. If it isn't empty we enter the main body of code and do something (see indentation lecture). The second snippet of code asks if our value is equal to the value True. In many cases code like this code can be written to be more 'Pythonic'. You see, both these statements are essentially asking 'if variable exists do X', which means we can refactor this to:
if variable:
{do something}
Alright, let's try another example...
if variable == {value}:
return True
else:
return False
So this code is part of a function. The function returns True if the variable is equal to some value and returns False otherwise. Just as before it is important to note that this code works, BUT, it can be rewritten like this:
return True if variable == {value} else False
But guess what, we can further refactor this code, since "==" always returns a boolean (ie. True/False) the "True if" is simply not necessary. So even better is:
return variable == {value}
Thats four lines of code condensed into one simple expression. Neat huh? Alright, that's enough about readability for today’s lecture, lets move on.
Understanding the operators...
In the code window below I have written a bit of code that will ask you for two variables (a, b) and an operator. It will then tell you whether that condition is True or False.
For example:
a is 10
b is 16
operator is >=
In which case, the code will figure out whether 10 >= 16 is True or False. I'd recommend calling this code a few times with different inputs in order to get a proper feel for what's going on.
End of explanation
x = 10
print(x > 9 and x < 11) # is x greater than 9 AND less than 11. Note, we could refactor this to: 11 > x > 9
print( isinstance(x, str) or isinstance(x, int)) # is x a string OR a integer
print(x % 5 == 0 and x % 2 ==0) # is x divisible by 5 AND 2
# Once again, remember 0 = False and 1 = True
print(0 or 0) # False or False = False
print(1 or 0) # True or False = True
print(0 and 0) # False AND False = False
print(0 and 1) # False AND True = False
a = True
# Basic logic...
print(a or not a) # Tautology, ALWAYS TRUE (for any a)
print(a and not a) # Contradiction, ALWAYS FALSE (for any a)
Explanation: And & Or
Now we are going to add to the complexity a little by explaining how we can combine expressions into larger ones.
Why might we want to do this?
Well say for instance you want to write some code that returns a number that is divisible by 5 OR divisible by 10. Maybe you want to write some code that checks if a number is odd AND also a perfect square (eg. 9, 25).
As it turns out, Python has the "and"/"or" commands and for the most part they will work how we understand them in English. But, perhaps we should make the effort to be precise. The table below tells you what the output of the operator is for all values of A and B.
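The table referred to here did not survive the notebook conversion; as a stand-in (an addition, not the original table), this loop prints both operators for every combination of A and B:
for A in (True, False):
    for B in (True, False):
        print(A, B, "-> A and B =", A and B, "| A or B =", A or B)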
Okay, cool. So how do we use and/or in Python? The good news is that syntax is super intuitive:
{value} and {value_2} returns True/False
{value} or {value_2} returns True/False
Yes, that's right: the keyword for 'and' in Python is the word 'and', and 'or' is likewise the same as in English. Alright, let's run a few examples shall we:
End of explanation
import time
def timesink_function(x):
"""Returns x, after 5 secs"""
time.sleep(5)
return x
## Experiment # 1
print("True or wait(true)...")
t1 = time.time()
_ = True or timesink_function(True) # the important line
t2 = time.time()
print("Time Taken was {} secs".format(round(t2-t1, 1)))
print("\n")
## Experiment # 2
print("wait(true) or true...")
t1 = time.time()
_ = timesink_function(True) or True # the important line
t2 = time.time()
print("Time Taken was {} secs".format(round(t2-t1, 1)))
print("\n")
Explanation: A note on Python's "Lazy" evaluation...
Lets play a simple game. The rules:
1. I give you two statements, P, Q and an operator (operator will always be one of "and/or")
1. Both statements are truth functional (i.e. statements that are True or False)
1. Your job is to evaluate the expression (P operator Q) in the fastest time possible.
1. You get to choose what to evaluate first, i.e. your options are P then Q, or Q then P.
Alright, here's the first question:
P = "There are an infinite number of twin primes"
Q = "7 + 7 is an odd number"
operator = And
So, to once again reiterate the question:
Is (P and Q) True/False and what is the fastest strategy for solving it?
Answer:
The statement (P and Q) is False, and the fastest strategy is to calculate statement Q first.
Why? Well, in order to prove that (P and Q) is False we only have to prove that either P or Q is False. In this case, its pretty easy to see that Q is false, therefore we know the answer of (P and Q) without having to prove the twin primes conjecture.
Alright, lets try one more time.
P = 7 is a prime number
Q = "NP = P" (wiki entry for the np = p problem)
operator = or
Answer:
“The statement (P or Q) is True, and the fastest strategy is to calculate statement P first."
Similar to the problem above, to prove (P or Q) is True we only have to prove either P or Q is True. In this case, we can quickly check 7 is prime and thus we can solve the problem without even attempting to solve the np=p problem (which btw, has a million dollar prize for the first person to prove it).
Okay, so how can we use this information in our Python programmes? Well Python uses the same "lazy" evaluation as we did above; if we know A is True we don't have to calculate B in order to know (A or B) is True.
So how can we take advantage of this? Well, the syntax is simple, Python evaluates left-to-right. So in the case of P or Q Python ALWAYS checks P first. To take advantage then, what we should do is give Python the easiest statement first and then the slower one.
In the case of the first problem all we would have to do to make Python solve it as fast as we did is to give python (Q and P) (in that order).
The code below attempts to convince you of this...
End of explanation
## Experiment 3
print("False or wait(true)...")
t1 = time.time()
_ = False or timesink_function(True) # the important line
t2 = time.time()
print("Time Taken was {} secs".format(round(t2-t1, 1)))
print("\n")
## Experiment 4
print("wait(true) or False...")
t1 = time.time()
_ = timesink_function(True) or False # the important line
t2 = time.time()
print("Time Taken was {} secs".format(round(t2-t1, 1)))
print("\n")
Explanation: So in the above two experiments notice that in both cases the timesink_function returns True (after waiting 5 seconds). The only difference between the two experiments is the order in which we make the calls (true or wait, wait or true). Notice that 'wait or true' took 5 secs to evaluate, whereas 'true or wait' took 0 secs.
End of explanation
## Experiment 5
print("False and wait(true)...")
t1 = time.time()
_ = False and timesink_function(True) # the important line
t2 = time.time()
print("Time Taken was {} secs".format(round(t2-t1, 1)))
print("\n")
## Experiment 6
print("wait(true) and False...")
t1 = time.time()
_ = timesink_function(True) and False # the important line
t2 = time.time()
print("Time Taken was {} secs".format(round(t2-t1, 1)))
print("\n")
Explanation: In experiments 3 and 4 we are comparing 'False or wait' against 'wait or false'. In this case there is no difference in time. That's because the truth of 'False or X' or 'X or False' actually simplifies to 'X'. Python cannot take any shortcuts here; it has to work out what X is, and that takes 5 secs.
End of explanation
import time
import random
from functools import partial
def sleep_bool(x, b):
"""waits for x secs, then returns true/false"""
time.sleep(x)
return b
a = partial(sleep_bool, 2, True)
b = partial(sleep_bool, 3, False)
c = partial(sleep_bool, 4, False)
t1 = time.time()
expression = ( a() or b() ) and ( (b() and not b() ) or c() ) ## <-- change this line
t2 = time.time()
print("Time Taken was {} secs".format(round(t2-t1, 1)))
print("\n")
## Can you beat 9 secs ???
Explanation: Experiments 5 and 6 are just like 3 and 4, except that we are now looking at the "and" condition. There is once again a time difference. Python gets to be lazy in one case but not in the other.
In short, understanding Python's lazy evaluation can speed up your code considerably (without any cost to readability). As a general rule, to make use of lazy evaluation all you need to do is put the thing that is fastest to evaluate first and the slowest thing last. For maximum efficiency you would need to figure out the minimum amount of work needed to arrive at an answer.
Homework
Your challenge this week is to evaluate the following formula as fast as possible:
(a or b) and ( (b and not b) or c )
Where:
a takes 2 seconds to return a true/false value
b takes 3 seconds to return a true/false value
c takes 4 seconds to return a true/false value
Have a bit of play and see how quickly you can make it execute.
Just be careful not to change the meaning of the expression. The parenthesis, just like in maths group the elements. For example "(a or b) and c" is not the same as "a or (b and c)".
End of explanation |
14,357 | Given the following text description, write Python code to implement the functionality described.
Description:
Sort the character array based on ASCII % N
This function takes last element as pivot , places the pivot element at its correct position in sorted array , and places all smaller ( smaller than pivot ) to left of pivot and all greater elements to right of pivot ; pivot ; Index of smaller element ; If current element is smaller than or equal to pivot Instead of values , ASCII % m values are compared ; Increment index of smaller element ; swap ; The main function that implements QuickSort arr [ ] -- > Array to be sorted , low -- > Starting index , high -- > Ending index ; pi is partitioning index , arr [ p ] is now at right place ; Separately sort elements before partition and after partition ; Function to print the given array ; Driver code ; Sort the given array ; Print the sorted array
| Python Code:
def partition(arr , low , high , mod ) :
pivot = ord(arr[high ] ) ;
i =(low - 1 ) ;
piv = pivot % mod ;
for j in range(low , high ) :
a = ord(arr[j ] ) % mod ;
if(a <= piv ) :
i += 1 ;
arr[i ] , arr[j ] = arr[j ] , arr[i ]
arr[i + 1 ] , arr[high ] = arr[high ] , arr[i + 1 ]
return(i + 1 ) ;
def quickSort(arr , low , high , mod ) :
if(low < high ) :
pi = partition(arr , low , high , mod ) ;
quickSort(arr , low , pi - 1 , mod ) ;
quickSort(arr , pi + 1 , high , mod ) ;
return arr
def printArray(arr , size ) :
for i in range(0 , size ) :
print(arr[i ] , end = " ") ;
if __name__== "__main__":
arr =[' g ' , ' e ' , ' e ' , ' k ' , ' s ' ] ;
n = len(arr ) ;
mod = 8 ;
arr = quickSort(arr , 0 , n - 1 , mod ) ;
printArray(arr , n ) ;
|
14,358 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This notebook was created by Sergey Tomin for Workshop
Step1: If you want to see injector_lattice.py file you can run following command (lattice file is very large)
Step2: 1. Design optics calculation of the European XFEL Injector
Remark
For convenience reasons, we define optical functions starting at the gun by backtracking of the optical functions derived from ASTRA (or similar space charge code) at 130 MeV at the entrance to the first quadrupole. The optical functions we thus obtain have obviously nothing to do with the actual beam envelope form the gun to the 130 MeV point.
Because we are working with a linear accelerator, we have to define the initial energy and initial twiss parameters in order to get correct twiss functions along the Injector.
Step3: 2. Tracking in first and second order approximation without any collective effects
Remark
Because of the reasons mentioned above, we start the beam tracking from the first quadrupole after RF cavities.
Loading of beam distribution
In order to perform tracking we have to have beam distribution. We will load beam distribution from a ASTRA file ('beam_distrib.ast'). And we convert the Astra beam distribution to Ocelot format - ParticleArray. ParticleArray is designed for tracking.
In order to work with converters we have to import specific module from ocelot.adaptors
from ocelot.adaptors.astra2ocelot import *
After importing ocelot.adaptors.astra2ocelot we can use converter astraBeam2particleArray() to load and convert.
As you will see beam distribution consists of 200 000 particles (that is why loading can take a few second), charge 250 pC, initial energy is about 6.5 MeV.
ParticleArray is a class which includes several parameters and methods.
* ParticleArray.particles is a 1D numpy array with coordinates of particles in
$$ParticleArray.particles = [\vec{x_0}, \vec{x_1}, ..., \vec{x_n}], $$ where $$\vec{x_n} = (x_n, x_n', y_n, y_n', \tau_n, p_n)$$
* ParticleArray.s is the longitudinal coordinate of the reference particle in [m].
* ParticleArray.E is the energy of the reference particle in [GeV].
* ParticleArray.q_array - is a 1D numpy array of the charges each (macro) particles in [C]
Step4: Selection of the tracking order and lattice for the tracking.
MagneticLattice(sequence, start=None, stop=None, method=MethodTM()) has the following arguments
Step5: Tracking
for tracking we have to define following objects
Step6: Current profile
Step7: Beam distribution | Python Code:
# the output of plotting commands is displayed inline within frontends,
# directly below the code cell that produced it
%matplotlib inline
# this python library provides generic shallow (copy) and deep copy (deepcopy) operations
from copy import deepcopy
# import from Ocelot main modules and functions
from ocelot import *
# import from Ocelot graphical modules
from ocelot.gui.accelerator import *
# import injector lattice
from ocelot.test.workshop.injector_lattice import *
Explanation: This notebook was created by Sergey Tomin for Workshop: Designing future X-ray FELs. Source and license info is on GitHub. August 2016.
Tutorial N2. Tracking.
As an example, we will use lattice file (converted to Ocelot format) of the European XFEL Injector.
This example will cover the following topics:
calculation of the linear optics for the European XFEL Injector.
Tracking of the particles in first and second order approximation without collective effects.
Coordinates
Coordinates in Ocelot are following:
$$ \left (x, \quad x' = \frac{p_x}{p_0} \right), \qquad \left (y, \quad y' = \frac{p_y}{p_0} \right), \qquad \left (\Delta s = c\tau, \quad p = \frac{\Delta E}{p_0 c} \right)$$
Requirements
injector_lattice.py - input file, the Injector lattice.
beam_130MeV.ast - input file, initial beam distribution in ASTRA format.
End of explanation
lat = MagneticLattice(cell, stop=None)
Explanation: If you want to see injector_lattice.py file you can run following command (lattice file is very large):
$ %load injector_lattice.py
The variable cell contains all the elements of the lattice in right order.
Again, Ocelot works with the class MagneticLattice instead of a simple sequence of elements, so we have to run the following command.
End of explanation
# initialization of Twiss object
tws0 = Twiss()
# defining initial twiss parameters
tws0.beta_x = 29.171
tws0.beta_y = 29.171
tws0.alpha_x = 10.955
tws0.alpha_y = 10.955
# defining initial electron energy in GeV
tws0.E = 0.005
# calculate optical functions with initial twiss parameters
tws = twiss(lat, tws0, nPoints=None)
# plotting twiss parameters.
plot_opt_func(lat, tws, top_plot=["Dx", "Dy"], fig_name="i1", legend=False)
plt.show()
Explanation: 1. Design optics calculation of the European XFEL Injector
Remark
For convenience reasons, we define optical functions starting at the gun by backtracking of the optical functions derived from ASTRA (or similar space charge code) at 130 MeV at the entrance to the first quadrupole. The optical functions we thus obtain have obviously nothing to do with the actual beam envelope form the gun to the 130 MeV point.
Because we are working with a linear accelerator, we have to define the initial energy and initial twiss parameters in order to get correct twiss functions along the Injector.
End of explanation
from ocelot.adaptors.astra2ocelot import *
#p_array_init = astraBeam2particleArray(filename='beam_130MeV.ast')
p_array_init = astraBeam2particleArray(filename='beam_130MeV_off_crest.ast')
Explanation: 2. Tracking in first and second order approximation without any collective effects
Remark
Because of the reasons mentioned above, we start the beam tracking from the first quadrupole after RF cavities.
Loading of beam distribution
In order to perform tracking we need a beam distribution. We will load the beam distribution from an ASTRA file ('beam_distrib.ast') and convert it to the Ocelot format - ParticleArray. ParticleArray is designed for tracking.
In order to work with converters we have to import specific module from ocelot.adaptors
from ocelot.adaptors.astra2ocelot import *
After importing ocelot.adaptors.astra2ocelot we can use converter astraBeam2particleArray() to load and convert.
As you will see, the beam distribution consists of 200 000 particles (that is why loading can take a few seconds), the charge is 250 pC, and the initial energy is about 6.5 MeV.
ParticleArray is a class which includes several parameters and methods.
* ParticleArray.particles is a 1D numpy array with coordinates of particles in
$$ParticleArray.particles = [\vec{x_0}, \vec{x_1}, ..., \vec{x_n}], $$ where $$\vec{x_n} = (x_n, x_n', y_n, y_n', \tau_n, p_n)$$
* ParticleArray.s is the longitudinal coordinate of the reference particle in [m].
* ParticleArray.E is the energy of the reference particle in [GeV].
* ParticleArray.q_array - is a 1D numpy array of the charges each (macro) particles in [C]
End of explanation
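Not part of the original notebook: a quick inspection of the loaded distribution, using only the attributes documented in the list above (the printed values depend on the input file):
print(p_array_init.s)              # longitudinal coordinate of the reference particle, m
print(p_array_init.E)              # reference particle energy, GeV
print(p_array_init.q_array.sum())  # total bunch charge in C (the text quotes 250 pC)
print(p_array_init.q_array.size)   # number of macroparticles (the text quotes 200 000)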
# initialization of tracking method
method = MethodTM()
# for second order tracking we have to choose SecondTM
method.global_method = SecondTM
# for first order tracking uncomment next line
# method.global_method = TransferMap
# we will start simulation from the first quadrupole (QI.46.I1) after RF section.
# you can change stop element (and the start element, as well)
# START_73_I1 - marker before Dog leg
# START_96_I1 - marker before Bunch Compresion
lat_t = MagneticLattice(cell, start=QI_46_I1, stop=None, method=method)
Explanation: Selection of the tracking order and lattice for the tracking.
MagneticLattice(sequence, start=None, stop=None, method=MethodTM()) has the following arguments:
* sequence - list of the elements,
* start - first element of the lattice. If None, then lattice starts from the first element of the sequence,
* stop - last element of the lattice. If None, then lattice stops by the last element of the sequence,
* method=MethodTM() - method of the tracking. MethodTM() class assigns transfer map to every element. By default all elements are assigned first order transfer map - TransferMap. One can create one's own map, but there are following predefined maps:
- TransferMap - first order matrices.
- SecondTM - 2nd order matrices.
- KickTM - kick applied.
- RungeKuttaTM - Runge-Kutta integrator is applied, but requires a 3D magnetic field function element.mag_field = lambda x, y, z: (Bx, By, Bz) (see example ocelot/demos/ebeam/tune_shift.py)
End of explanation
navi = Navigator(lat_t)
p_array = deepcopy(p_array_init)
tws_track, p_array = track(lat_t, p_array, navi)
# you can change top_plot argument, for example top_plot=["alpha_x", "alpha_y"]
plot_opt_func(lat_t, tws_track, top_plot=["E"], fig_name=0, legend=False)
plt.show()
Explanation: Tracking
For tracking we have to define the following objects:
* Navigator defines step (dz) of tracking and which, if it exists, physical process will be applied at each step.
In order to add collective effects (Space charge, CSR or wake) method add_physics_proc() must be run.
- **Method:**
* Navigator.add_physics_proc(physics_proc, elem1, elem2)
- physics_proc - physics process, can be CSR, SpaceCharge or Wake,
- elem1 and elem2 - first and last elements between which the physics process will be applied.
track(MagneticLattice, ParticleArray, Navigator) - the function performs tracking of the particles [p_array] through the lattice [lat]. This function also calculates twiss parameters of the beam distribution at each tracking step.
End of explanation
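This tutorial never attaches a physics process, but the add_physics_proc() pattern documented above could be exercised roughly as sketched below. Treat it as a hypothetical illustration: it assumes SpaceCharge() is available from the ocelot imports at the top, that its default settings are usable, and that the markers START_73_I1 and START_96_I1 (mentioned in the comments earlier) are valid first/last elements for the process.
# hypothetical sketch only - not executed in this tutorial
sc = SpaceCharge()                 # assumed constructor, settings left at defaults
navi_sc = Navigator(lat_t)
navi_sc.add_physics_proc(sc, START_73_I1, START_96_I1)  # signature as documented above
p_array_sc = deepcopy(p_array_init)
tws_sc, p_array_sc = track(lat_t, p_array_sc, navi_sc)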
bins_start, hist_start = get_current(p_array, charge=p_array.q_array[0], num_bins=200)
plt.figure(4)
plt.title("current: end")
plt.plot(bins_start*1000, hist_start)
plt.xlabel("s, mm")
plt.ylabel("I, A")
plt.grid(True)
plt.show()
Explanation: Current profile
End of explanation
tau = np.array([p.tau for p in p_array])
dp = np.array([p.p for p in p_array])
x = np.array([p.x for p in p_array])
y = np.array([p.y for p in p_array])
ax1 = plt.subplot(311)
ax1.plot(-tau*1000, x*1000, 'r.')
plt.setp(ax1.get_xticklabels(), visible=False)
plt.ylabel("x, mm")
plt.grid(True)
ax2 = plt.subplot(312, sharex=ax1)
ax2.plot(-tau*1000, y*1000, 'r.')
plt.setp(ax2.get_xticklabels(), visible=False)
plt.ylabel("y, mm")
plt.grid(True)
ax3 = plt.subplot(313, sharex=ax1)
ax3.plot(-tau*1000, dp, 'r.')
plt.ylabel("dE/E")
plt.xlabel("s, mm")
plt.grid(True)
plt.show()
Explanation: Beam distribution
End of explanation |
14,359 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Slicing with pandas, gurobipy.tuplelist, and O(n) slicing
Run a little python script that sets up the performance comparisons.
Step1: The slicing will be over small, medium, and large tables.
Step2: We will run three series of four tests each.
Each series tests
1. slicing with .sloc and pandas
1. slicing with gurobipy.tuplelist
1. slicing with ticdat.Slicer (with the gurobipy enhancement disabled)
1. O(n) slicing
First, we see that with a small table (1,200) rows, the pandas slicing is only somewhat faster than the O(n) slicing, while Slicer slicing is quite a bit faster and tuplelist faster still.
Step3: Next we see that with a table of 31,800 rows, pandas slicing is now ~100 faster than O(n) slicing (but tuplelist and Slicer are still the fastest by far).
Step4: Finally, we see that with a table of 270,000 rows, pandas slicing is ~1000X faster than O(n) slicing. Here, tuplelist is blindingly fast - nearly as much an improvement shows over pandas as pandas shows over O(n). Slicer again comes in a respectably close second.
Step5: Bottom line? pandas isn't really designed with "iterating over indicies and slicing" in mind, so it isn't the absolutely fastest way to write this sort of code. However, pandas also doesn't implement naive O(n) slicing.
For most instances, the .sloc approach to slicing will be fast enough. In general, so long as you use the optimal big-O subroutines, the time to solve a MIP or LP model will be larger than the time to formulate the model. However, in those instances where the slicing is the bottleneck operation, gurobipy.tuplelist or ticdat.Slicer can be used, or the model building code can be refactored to be more pandonic.
Addendum
There was a request to check sum as well as len. Here the results vindicate pandas, in as much as all three "smart" strategies are roughly equivalent. | Python Code:
run prep_for_different_slicings.py
Explanation: Slicing with pandas, gurobipy.tuplelist, and O(n) slicing
Run a little python script that sets up the performance comparisons.
End of explanation
[len(getattr(td, "childTable")) for td in (smallTd, medTd, bigTd)]
Explanation: The slicing will be over small, medium, and large tables.
End of explanation
%timeit checkChildDfLen(smallChildDf, *smallChk)
%timeit checkTupleListLen(smallSmartTupleList, *smallChk)
%timeit checkSlicerLen(smallSlicer, *smallChk)
%timeit checkTupleListLen(smallDumbTupleList, *smallChk)
Explanation: We will run three series of four tests each.
Each series tests
1. slicing with .sloc and pandas
1. slicing with gurobipy.tuplelist
1. slicing with ticdat.Slicer (with the gurobipy enhancement disabled)
1. O(n) slicing
First, we see that with a small table (1,200) rows, the pandas slicing is only somewhat faster than the O(n) slicing, while Slicer slicing is quite a bit faster and tuplelist faster still.
End of explanation
%timeit checkChildDfLen(medChildDf, *medChk)
%timeit checkTupleListLen(medSmartTupleList, *medChk)
%timeit checkSlicerLen(medSlicer, *medChk)
%timeit checkTupleListLen(medDumbTupleList, *medChk)
Explanation: Next we see that with a table of 31,800 rows, pandas slicing is now ~100 faster than O(n) slicing (but tuplelist and Slicer are still the fastest by far).
End of explanation
%timeit checkChildDfLen(bigChildDf, *bigChk)
%timeit checkTupleListLen(bigSmartTupleList, *bigChk)
%timeit checkSlicerLen(bigSlicer, *bigChk)
%timeit checkTupleListLen(bigDumbTupleList, *bigChk)
Explanation: Finally, we see that with a table of 270,000 rows, pandas slicing is ~1000X faster than O(n) slicing. Here, tuplelist is blindingly fast - nearly as much an improvement shows over pandas as pandas shows over O(n). Slicer again comes in a respectably close second.
End of explanation
%timeit checkChildDfSum(bigChildDf, *bigChk)
%timeit checkTupleListSum(bigSmartTupleList, bigTd, *bigChk)
%timeit checkSlicerSum(bigSlicer, bigTd, *bigChk)
%timeit checkTupleListSum(bigDumbTupleList, bigTd, *bigChk)
Explanation: Bottom line? pandas isn't really designed with "iterating over indicies and slicing" in mind, so it isn't the absolutely fastest way to write this sort of code. However, pandas also doesn't implement naive O(n) slicing.
For most instances, the .sloc approach to slicing will be fast enough. In general, so long as you use the optimal big-O subroutines, the time to solve a MIP or LP model will be larger than the time to formulate the model. However, in those instances where the slicing is the bottleneck operation, gurobipy.tuplelist or ticdat.Slicer can be used, or the model building code can be refactored to be more pandonic.
Addendum
There was a request to check sum as well as len. Here the results vindicate pandas, in as much as all three "smart" strategies are roughly equivalent.
End of explanation |
14,360 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
PyGSLIB
Draw
The GSLIb equivalent parameter file is
```
Parameters for DRAW
***
START OF PARAMETERS
Step1: Getting the data ready for work
If the data is in GSLIB format you can use the function pygslib.gslib.read_gslib_file(filename) to import the data into a Pandas DataFrame.
Step2: Testing Draw
Step3: Comparing results with gslib | Python Code:
#general imports
import matplotlib.pyplot as plt
import pygslib
import numpy as np
import pandas as pd
#make the plots inline
%matplotlib inline
Explanation: PyGSLIB
Draw
The GSLIb equivalent parameter file is
```
Parameters for DRAW
***
START OF PARAMETERS:
data/cluster.dat \file with data
3 \ number of variables
1 2 3 \ columns for variables
0 \ column for probabilities (0=equal)
-1.0e21 1.0e21 \ trimming limits
69069 100 \random number seed, number to draw
draw.out \file for realizations
```
End of explanation
#get the data in gslib format into a pandas Dataframe
cluster = pygslib.gslib.read_gslib_file('../datasets/cluster.dat')
print ('\n\t\tCluster Data \n',cluster.tail())
Explanation: Getting the data ready for work
If the data is in GSLIB format you can use the function pygslib.gslib.read_gslib_file(filename) to import the data into a Pandas DataFrame.
End of explanation
print (pygslib.gslib.__draw.draw.__doc__)
cluster['NO-Weight']=1.
parameters_draw = {
'vr' : cluster[['Xlocation','Ylocation','Primary']], # data
'wt' : cluster['NO-Weight'], # weight/prob (use wt[:]=1 for equal probability)
'rseed' : 69069, # random number seed (conditioning cat.)
'ndraw' : 100} # number to draw
vo,sumwts,error = pygslib.gslib.__draw.draw(**parameters_draw)
print ('error ? ', error != 0, error)
print ('is 1./sumwts == nd?', 1./sumwts, len(cluster))
#making the output (which is numpy array) a pandas dataframe for nice printing
dfvo=pd.DataFrame(vo,columns= ['Xlocation','Ylocation','Primary'])
Explanation: Testing Draw
End of explanation
print (dfvo.head(6))
print ('******')
print (dfvo.tail(6))
Explanation: Comparing results with gslib
End of explanation |
14,361 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Training on Cifar 10 Using MXNet and H2O
https
Step1: Step 1
Step2: Let's turn the class label into a factor
Step3: Anytime, especially during training, you can inspect the model in Flow (http
Step4: Predict | Python Code:
%matplotlib inline
import matplotlib
import scipy.io
import matplotlib.pyplot as plt
import cPickle
import numpy as np
from scipy.misc import imsave
from IPython.display import Image, display, HTML
Explanation: Training on Cifar 10 Using MXNet and H2O
https://www.cs.toronto.edu/~kriz/cifar.html
The CIFAR-10 dataset
The CIFAR-10 dataset consists of 60000 32x32 colour images in 10 classes, with 6000 images per class. There are 50000 training images and 10000 test images.
The dataset is divided into five training batches and one test batch, each with 10000 images. The test batch contains exactly 1000 randomly-selected images from each class. The training batches contain the remaining images in random order, but some training batches may contain more images from one class than another. Between them, the training batches contain exactly 5000 images from each class.
End of explanation
!mkdir -p ~/.h2o/datasets/
!wget -c https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz -O ~/.h2o/datasets/cifar-10-python.tar.gz
!tar xvzf ~/.h2o/datasets/cifar-10-python.tar.gz -C ~/.h2o/datasets/
import os.path
with open(os.path.expanduser("~/.h2o/datasets/cifar-10-batches-py/batches.meta")) as fd:
meta = cPickle.load(fd)
print meta
labels = meta['label_names']
labels
def load_cifar10_image_list(filepath):
images = []
labels = []
with open(filepath, 'rb') as fd:
d = cPickle.load(fd)
for image, label, filename in zip(d['data'], d['labels'], d['filenames']):
x = np.array(image)
x = np.dstack((x[:1024], x[1024:2048], x[2048:]))
x = x.reshape(32,32,3)
filename=os.path.expanduser("~/.h2o/datasets/cifar-10-batches-py/"+filename)
imsave(filename, x)
images.append(filename)
labels.append(label)
return images, labels
x_train = []
y_train = []
for batch in range(1,6):
batch_name = os.path.expanduser('~/.h2o/datasets/cifar-10-batches-py/data_batch_%d' % batch)
x,y = load_cifar10_image_list(batch_name)
x_train.extend(x)
y_train.extend(y)
!ls ~/.h2o/datasets/cifar-10-batches-py/ | sed -n '1~5000p' # show every 5000th file
for x in x_train[:10]:
display(Image(filename=x))
[labels[x] for x in y_train[:10]]
len(x_train)
batch_test = os.path.expanduser('~/.h2o/datasets/cifar-10-batches-py/test_batch')
x_test, y_test = load_cifar10_image_list(batch_test)
import h2o
h2o.init()
!nvidia-smi
train_df = {"x0": x_train, "x1": y_train }
test_df = {"x0" : x_test, "x1": y_test }
train_hf = h2o.H2OFrame(train_df)
test_hf = h2o.H2OFrame(test_df)
Explanation: Step 1: Preprocess the data
End of explanation
train_hf['x1'] = train_hf['x1'].asfactor()
test_hf['x1'] = test_hf['x1'].asfactor()
train_hf.head(10)
from h2o.estimators.deepwater import H2ODeepWaterEstimator
deepwater_model = H2ODeepWaterEstimator(
epochs=10, ##
nfolds=3, ## 3-fold cross-validation
learning_rate=2e-3,
mini_batch_size=64,
# problem_type='image', ## autodetected by default
network='vgg',
# network_definition_file="mycnn.json" ## provide your own mxnet .json model
image_shape=[32,32],
channels=3,
gpu=True
)
deepwater_model.train(x=['x0'], y='x1', training_frame=train_hf)
Explanation: Let's turn the class label into a factor
End of explanation
train_error = deepwater_model.model_performance(train=True).mean_per_class_error()
print "training error:", train_error
xval_error = deepwater_model.model_performance(xval=True).mean_per_class_error()
print "cross-validated error:", xval_error
deepwater_model
Explanation: Anytime, especially during training, you can inspect the model in Flow (http://localhost:54321)
Here's the first (of three) cross-validation models:
End of explanation
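If you prefer to stay in Python rather than Flow, the individual fold models can presumably be pulled out with the generic h2o model API; the call below is only a sketch and assumes the usual cross_validation_models accessor also applies to DeepWater models.
cv_models = deepwater_model.cross_validation_models()
cv_models[0]  # the first of the three fold models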
random_test_image_hf = test_hf[int(np.random.random()*len(test_df)),:]['x0']
random_test_image_hf
filename = random_test_image_hf.as_data_frame(use_pandas=False)[1][0]
filename
Image(filename=filename)
pred = deepwater_model.predict(random_test_image_hf)
predlabel = int(pred['predict'].as_data_frame(use_pandas=False)[1][0])
labels[predlabel]
Explanation: Predict
End of explanation |
14,362 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Create a set of pings from "saved-session" to build a set of core client data.
Step1: Remove any pings without a clientId.
Step2: Sanitize the pings and reduce the set of pings to one ping per client per day.
Step3: Output the data to CSV. | Python Code:
update_channel = "beta"
now = dt.datetime.now()
start = now - dt.timedelta(30)
end = now - dt.timedelta(1)
pings = get_pings(sc, app="Fennec", channel=update_channel,
submission_date=(start.strftime("%Y%m%d"), end.strftime("%Y%m%d")),
build_id=("20100101000000", "99999999999999"),
fraction=1)
subset = get_pings_properties(pings, ["clientId",
"application/channel",
"application/version",
"meta/submissionDate",
"environment/profile/creationDate",
"environment/system/os/version",
"environment/system/memoryMB"])
Explanation: Create a set of pings from "saved-session" to build a set of core client data.
End of explanation
subset = subset.filter(lambda p: p["clientId"] is not None)
print subset.first()
Explanation: Remove any pings without a clientId.
End of explanation
def transform(ping):
clientId = ping["clientId"] # Should not be None since we filter those out
profileDate = None
profileDaynum = ping["environment/profile/creationDate"]
if profileDaynum is not None:
profileDate = (dt.date(1970, 1, 1) + dt.timedelta(int(profileDaynum))).strftime("%Y%m%d")
submissionDate = ping["meta/submissionDate"] # Added via the ingestion process so should not be None
channel = ping["application/channel"]
version = ping["application/version"]
os_version = int(ping["environment/system/os/version"])
memory = ping["environment/system/memoryMB"]
if memory is None:
memory = 0
else:
memory = int(memory)
return [clientId, channel, profileDate, submissionDate, version, os_version, memory]
transformed = get_one_ping_per_client(subset).map(transform)
print transformed.first()
Explanation: Sanitize the pings and reduce the set of pings to one ping per client per day.
End of explanation
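get_one_ping_per_client comes from the moztelemetry helpers; conceptually it boils down to keying by clientId and reducing, roughly like the sketch below (not the actual helper implementation).
deduped = (subset
           .map(lambda p: (p["clientId"], p))
           .reduceByKey(lambda a, b: a)   # keep a single ping per client
           .map(lambda kv: kv[1]))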
grouped = pd.DataFrame(transformed.collect(), columns=["clientid", "channel", "profiledate", "submissiondate", "version", "osversion", "memory"])
!mkdir -p ./output
grouped.to_csv("./output/fennec-clients-" + update_channel + "-" + end.strftime("%Y%m%d") + ".csv", index=False)
s3_output = "s3n://net-mozaws-prod-us-west-2-pipeline-analysis/mfinkle/android-clients-" + update_channel
s3_output += "/v1/channel=" + update_channel + "/end_date=" + end.strftime("%Y%m%d")
grouped = sqlContext.createDataFrame(transformed, ["clientid", "channel", "profiledate", "submissiondate", "version", "osversion", "memory"])
grouped.saveAsParquetFile(s3_output)
Explanation: Output the data to CSV.
End of explanation |
14,363 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lasso regression with block updating
Sometimes, it is very useful to update a set of parameters together. For example, variables that are highly correlated are often good to update together. In PyMC 3 block updating is simple, as example will demonstrate.
Here we have a LASSO regression model where the two coefficients are strongly correlated. Normally, we would define the coefficient parameters as a single random variable, but here we define them separately to show how to do block updates.
First we generate some fake data.
Step1: Then define the random variables.
Step2: For most samplers, including Metropolis and HamiltonianMC, simply pass a list of variables to sample as a block. This works with both scalar and array parameters. | Python Code:
%pylab inline
from matplotlib.pylab import *
from pymc3 import *
import numpy as np
d = np.random.normal(size=(3, 30))
d1 = d[0] + 4
d2 = d[1] + 4
yd = .2*d1 +.3*d2 + d[2]
Explanation: Lasso regression with block updating
Sometimes, it is very useful to update a set of parameters together. For example, variables that are highly correlated are often good to update together. In PyMC 3 block updating is simple, as example will demonstrate.
Here we have a LASSO regression model where the two coefficients are strongly correlated. Normally, we would define the coefficient parameters as a single random variable, but here we define them separately to show how to do block updates.
First we generate some fake data.
End of explanation
lam = 3
with Model() as model:
s = Exponential('s', 1)
tau = Uniform('tau', 0, 1000)
b = lam * tau
m1 = Laplace('m1', 0, b)
m2 = Laplace('m2', 0, b)
p = d1*m1 + d2*m2
y = Normal('y', mu=p, sd=s, observed=yd)
Explanation: Then define the random variables.
End of explanation
with model:
start = find_MAP()
step1 = Metropolis([m1, m2])
step2 = Slice([s, tau])
trace = sample(10000, [step1, step2], start=start)
traceplot(trace);
hexbin(trace[m1],trace[m2], gridsize = 50)
Explanation: For most samplers, including Metropolis and HamiltonianMC, simply pass a list of variables to sample as a block. This works with both scalar and array parameters.
End of explanation |
14,364 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Optimization Exercise 1
Imports
Step1: Hat potential
The following potential is often used in Physics and other fields to describe symmetry breaking and is often known as the "hat potential"
Step2: Plot this function over the range $x\in\left[-3,3\right]$ with $b=1.0$ and $a=5.0$
Step3: Write code that finds the two local minima of this function for $b=1.0$ and $a=5.0$.
Use scipy.optimize.minimize to find the minima. You will have to think carefully about how to get this function to find both minima.
Print the x values of the minima.
Plot the function as a blue line.
On the same axes, show the minima as red circles.
Customize your visualization to make it beautiful and effective. | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import scipy.optimize as opt
Explanation: Optimization Exercise 1
Imports
End of explanation
def hat(x,a,b):
return -a*x**2+b*x**4
assert hat(0.0, 1.0, 1.0)==0.0
assert hat(0.0, 1.0, 1.0)==0.0
assert hat(1.0, 10.0, 1.0)==-9.0
Explanation: Hat potential
The following potential is often used in Physics and other fields to describe symmetry breaking and is often known as the "hat potential":
$$ V(x) = -a x^2 + b x^4 $$
Write a function hat(x,a,b) that returns the value of this function:
End of explanation
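As a quick analytic check on the numerical results further down (an added aside, not part of the original exercise), setting the derivative to zero gives the stationary points:
$$ \frac{dV}{dx} = -2 a x + 4 b x^3 = 0 \quad\Rightarrow\quad x = 0 \;\;\text{or}\;\; x = \pm\sqrt{\frac{a}{2b}} $$
so with $a=5.0$ and $b=1.0$ the two local minima sit near $x \approx \pm 1.58$.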
a = 5.0
b = 1.0
x=np.linspace(-3,3,100)
plt.figure(figsize=(9,6))
plt.xlabel('Range'), plt.ylabel('V(x)'), plt.title('Hat Potential')
plt.plot(x, hat(x,a,b))
plt.box(False)
plt.grid(True)
plt.tick_params(axis='x', top='off', direction='out')
plt.tick_params(axis='y', right='off', direction='out');
assert True # leave this to grade the plot
Explanation: Plot this function over the range $x\in\left[-3,3\right]$ with $b=1.0$ and $a=5.0$:
End of explanation
res1 = opt.minimize_scalar(hat, bounds=(-3,0), args=(a,b), method='bounded')
res2 = opt.minimize_scalar(hat, bounds=(0,3), args=(a,b), method='bounded')
print('Local minima: %f, %f' % (res1.x, res2.x))
plt.figure(figsize=(9,6))
plt.xlabel('Range'), plt.ylabel('V(x)')
plt.plot(x, hat(x,a,b), label="Potential")
plt.scatter(res1.x, res1.fun, marker="o", color="r")
plt.scatter(res2.x, res2.fun, marker="o", color="r")
plt.title('Finding Local Minima of Hat Potential')
plt.box(False), plt.grid(True), plt.xlim(-2.5,2.5), plt.ylim(-8,4)
plt.tick_params(axis='x', top='off', direction='out')
plt.tick_params(axis='y', right='off', direction='out')
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.);
assert True # leave this for grading the plot
Explanation: Write code that finds the two local minima of this function for $b=1.0$ and $a=5.0$.
Use scipy.optimize.minimize to find the minima. You will have to think carefully about how to get this function to find both minima.
Print the x values of the minima.
Plot the function as a blue line.
On the same axes, show the minima as red circles.
Customize your visualization to make it beautiful and effective.
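If you want to use scipy.optimize.minimize itself, as the exercise text suggests, one option is to run it from two different starting guesses, one on each side of the central hump (the starting points below are only illustrative):
res_left = opt.minimize(lambda z: hat(z[0], a, b), x0=[-2.0])
res_right = opt.minimize(lambda z: hat(z[0], a, b), x0=[2.0])
print('Local minima: %f, %f' % (res_left.x[0], res_right.x[0]))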
End of explanation |
14,365 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
LDA and NMF on New Job-Skill Matrix
Step1: LDA and NMF
Global arguments
Step2: Training LDA
Step3: Saved results of training
Step4: Evaluation of LDA on test set by perplexity
Step5: Save topics learnt by LDA
Step6: Assigning skill clusters to job posts
The clusters are top-$k$ clusters where we either
+ fix $k$ OR
+ choose $k$ for each JD such that the cumulative prob of $k$ clusters is larger than a certain threshold.
Step7: Cluster assignment analysis
We want to see when the cluster assignment to a job post is clear or fuzzy. The former (latter) means that the list of top clusters assigned to the post has at most 3 clusters (more than 3 clusters), respectively.
First, we look at those posts with clear assignment
Step8: These posts contain lots of skills. Only 25% of them contain no more than 31 skills in each post, so each of the remaining 75% contains at least 31 skills. We can contrast this quartile with the skill distribution in all job posts below.
Step9: Examples of clear vs. fuzzy assignment can be seen in result file.
Cluster assignment statistics
Step10: Correlation between n_top_cluster and n_skill in job posts
We can roughly divide job posts into the 4 following groups based on the above quartiles
Step11: Box plot of mixture size
Step12: The box plot reveals the following
Step13: Probability of top cluster
Step14: NMF
Step15: Building TF-IDF matrix
Need to proceed like LDA, i.e. we need to calculate tfidf for trigram skills, remove them, then calculate tfidf for bigram skills, remove them, then calculate tfidf for unigram skills.
Step16: Training
Step17: Save models
Step18: Evaluation
Step19: Model Comparison | Python Code:
import ja_helpers as ja_helpers; from ja_helpers import *
HOME_DIR = 'd:/larc_projects/job_analytics/'; DATA_DIR = HOME_DIR + 'data/clean/'
RES_DIR = HOME_DIR + 'results/skill_cluster/new/'
skill_df = pd.read_csv(DATA_DIR + 'skill_index.csv')
doc_skill = mmread(DATA_DIR + 'doc_skill.mtx')
skills = skill_df['skill']
print('# skills from the skill index: %d' %len(skills))
n_doc = doc_skill.shape[0]; n_skill = doc_skill.shape[1]
print ('# skills in matrix doc-skill: %d' %n_skill)
print('# documents in matrix doc-skill: %d' %n_doc)
## May not be needed
# doc_index = pd.read_csv(DATA_DIR + 'doc_index.csv')
# jd_docs = doc_index['doc']; print('# JDs: %d' %len(jd_docs))
Explanation: LDA and NMF on New Job-Skill Matrix
End of explanation
ks = range(15, 35, 5) # ks = [15]
n_top_words = 10
Explanation: LDA and NMF
Global arguments:
no. of topics: k in {15, 20, 25, 30}
no. of top words to be printed out in result
End of explanation
print('# docs: {}, # skills: {}'.format(n_doc, n_skill))
in_train, in_test = mkPartition(n_doc, p=80)
doc_skill = doc_skill.tocsr()
lda_X_train, lda_X_test = doc_skill[in_train, :], doc_skill[in_test, :]
beta = 0.1 # or 200/W
lda = trainLDA(beta, ks, trainning_set=lda_X_train)
Explanation: Training LDA
End of explanation
LDA_DIR = RES_DIR + 'lda/'
for k in ks:
doc_topic_distr = lda[k].transform(doc_skill)
fname = RES_DIR + 'doc_{}topic_distr.mtx'.format(k)
with(open(fname, 'w')) as f:
mmwrite(f, doc_topic_distr)
Explanation: Saved results of training:
End of explanation
perp_df = testLDA(lda, ks, test_set=lda_X_test)
perp_df
perp_df.to_csv(LDA_DIR + 'perplexity.csv', index=False)
Explanation: Evaluation of LDA on test set by perplexity
End of explanation
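testLDA lives in ja_helpers; presumably it wraps scikit-learn's built-in perplexity, along the lines of this sketch:
perp = {k: lda[k].perplexity(lda_X_test) for k in ks}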
for k in ks:
# word_dist = pd.DataFrame(lda[k].components_).apply(normalize, axis=1)
# word_dist.to_csv(LDA_DIR + 'lda_word_dist_{}topics.csv'.format(k), index=False)
lda_topics = top_words_df(n_top_words=10, model=lda[k], feature_names=skills)
lda_topics.to_csv(LDA_DIR + '{}topics.csv'.format(k), index=False)
for k in ks:
topic_word_dist = lda[k].components_
fname = LDA_DIR + 'word_dist_{}_topics.mtx'.format(k)
with(open(fname, 'w')) as f:
mmwrite(f, topic_word_dist)
Explanation: Save topics learnt by LDA:
End of explanation
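top_words_df is also a ja_helpers function; the core idea is just an argsort over each topic's component weights, roughly:
def top_words(model, feature_names, n_top_words=10):
    # highest-weighted features per topic, most important first
    names = list(feature_names)
    return [[names[i] for i in comp.argsort()[::-1][:n_top_words]]
            for comp in model.components_]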
clusters = pd.read_csv(LDA_DIR + 'cluster.csv')['cluster']
n_cluster = len(clusters)
doc_index.to_csv(DATA_DIR + 'doc_index.csv', index=False)
doc_topic_distr = lda[15].transform(doc_skill)
with(open(LDA_DIR + 'doc_topic_distr.mtx', 'w')) as f:
mmwrite(f, doc_topic_distr)
thres = 0.4 # 0.5
t0 = time()
# doc_index['top_clusters'] = doc_index.apply(getTopTopics_GT, axis=1, doc_topic_distr=doc_topic_distr, thres=0.5)
# doc_index['n_top_cluster_40'] = doc_index.apply(getTopTopics_GT, axis=1, doc_topic_distr=doc_topic_distr, thres=thres)
doc_index['prob_top_cluster'] = doc_index.apply(getTopTopicProb, axis=1, doc_topic_distr=doc_topic_distr)
print('Done after %.1fs' %(time() - t0))
res = doc_index.query('n_skill >= 2')
res.sort_values('n_skill', ascending=False, inplace=True)
print('No. of JDs in result: %d' %res.shape[0])
res.head()
n_sample = 100
res.head(n_sample).to_csv(LDA_DIR + 'new/cluster_100top_docs.csv', index=False)
res.tail(n_sample).to_csv(LDA_DIR + 'new/cluster_100bottom_docs.csv', index=False)
# res.to_csv(LDA_DIR + 'new/cluster_assign2.csv', index=False)
res.rename(columns={'n_top_cluster_40': 'n_top_cluster'}, inplace=True)
Explanation: Assigning skill clusters to job posts
The clusters are top-$k$ clusters where we either
+ fix $k$ OR
+ choose $k$ for each JD such that the cumulative prob of $k$ clusters is larger than a certain threshold.
End of explanation
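The commented-out getTopTopics_GT call above relies on a ja_helpers function; conceptually it keeps the smallest set of clusters whose probabilities add up past the threshold, something like this sketch (the function name and details here are hypothetical):
import numpy as np

def top_clusters(topic_probs, thres=0.4):
    order = np.argsort(topic_probs)[::-1]      # clusters, most probable first
    cum = np.cumsum(topic_probs[order])
    k = int(np.searchsorted(cum, thres)) + 1   # smallest k whose cumulative prob reaches thres
    return order[:k]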
clear_assign = res.query('n_top_cluster <= 3'); fuzzy_assign = res.query('n_top_cluster > 3')
print('# posts with clear assignment: %d' %clear_assign.shape[0])
print('Distribution of skills in these posts:')
quantile(clear_assign['n_skill'])
Explanation: Cluster assignment analysis
We want to see when the cluster assignment to a job post is clear or fuzzy. The former (latter) means that the list of top clusters assigned to the post has at most 3 clusters (more than 3 clusters), respectively.
First, we look at those posts with clear assignment:
End of explanation
print('Distribution of skills in all posts:')
quantile(res['n_skill'])
fig = plotSkillDist(res)
plt.savefig(LDA_DIR + 'fig/n_skill_hist.jpg')
plt.show(); plt.close()
Explanation: These posts contain lots of skills. Only 25% of them contain no more than 31 skills in each post, so each of the remaining 75% contains at least 31 skills. We can contrast this quartile with the skill distribution in all job posts below.
End of explanation
res = pd.read_csv(LDA_DIR + 'new/cluster_assign.csv')
res.describe().round(2)
Explanation: Examples of clear vs. fuzzy assignment can be seen in result file.
Cluster assignment statistics
End of explanation
g1 = res.query('n_skill < 7'); g2 = res.query('n_skill >= 7 & n_skill < 12')
g3 = res.query('n_skill >= 12 & n_skill < 18'); g4 = res.query('n_skill >= 18')
print('# posts in 4 groups:');
print(','.join([str(g1.shape[0]), str(g2.shape[0]), str(g3.shape[0]), str(g4.shape[0])]))
Explanation: Correlation between n_top_cluster and n_skill in job posts
We can roughly divide job posts into the 4 following groups based on the above quartiles:
+ G1: $ 2 \le $ n_skill $ \le 7 $; G2: $ 7 < $ n_skill $ \le 12 $
+ G3: $ 12 < $ n_skill $ \le 18 $; G4: $ 18 < $ n_skill $ \le 115 $
End of explanation
bp = mixtureSizePlot(g1, g2, g3, g4)
plt.savefig(LDA_DIR + 'fig/boxplot_mixture_size.pdf'); plt.show(); plt.close()
Explanation: Box plot of mixture size
End of explanation
thres = 0.4
fig = errorBarPlot(res, thres=thres)
plt.savefig(LDA_DIR + 'fig/mixture_size_thres{}.jpg'.format(int(thres*100)))
plt.show(); plt.close()
Explanation: The box plot reveals the following:
The median mixture size decreases when we have more skills in a job post. This is expected, as more skills should give a clearer assignment.
When $ 2 \le $ n_skill $ \le 7 $ and $ 12 < $ n_skill $ \le 18 $, the mixture size is respectively 7 and 6 most of the time.
Error bar plot of mixture size
End of explanation
fig = topClusterProbPlot(g1, g2, g3, g4)
plt.savefig(LDA_DIR + 'fig/top_cluster_prob.jpg')
plt.show(); plt.close()
Explanation: Probability of top cluster
End of explanation
NMF_DIR = RES_DIR + 'new/nmf/'
Explanation: NMF
End of explanation
## TODO: this cell assumes `posts` (a DataFrame of cleaned job posts) and `max_n_word` are defined elsewhere
from sklearn.feature_extraction import text as text_manip
tf_idf_vect = text_manip.TfidfVectorizer(vocabulary=skills, ngram_range=(1, max_n_word))
n_instance, n_feat = posts.shape[0], len(skills)
t0 = time()
print('Building tf_idf for %d JDs using %d features (skills)...' %(n_instance, n_feat))
doc_skill_tfidf = tf_idf_vect.fit_transform(posts['clean_text'])
print('Done after %.1fs' %(time()-t0))
Explanation: Building TF-IDF matrix
Need to proceed like LDA, i.e. we need to calculate tfidf for trigram skills, remove them, then calculate tfidf for bigram skills, remove them, then calculate tfidf for unigram skills.
End of explanation
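One way to realize that cascade is sketched below; this is only a sketch, and it assumes posts plus per-length skill lists (e.g. tri_skills, bi_skills, uni_skills) are available elsewhere in the project:
from sklearn.feature_extraction.text import TfidfVectorizer
from scipy.sparse import hstack

def remove_phrases(text, phrases):
    # crude removal so shorter n-grams are not double counted in later passes
    for p in phrases:
        text = text.replace(p, ' ')
    return text

def cascade_tfidf(texts, skill_lists, lengths=(3, 2, 1)):
    blocks = []
    for n, vocab in zip(lengths, skill_lists):
        vect = TfidfVectorizer(vocabulary=vocab, ngram_range=(n, n))
        blocks.append(vect.fit_transform(texts))
        texts = [remove_phrases(t, vocab) for t in texts]
    return hstack(blocks)
# e.g. doc_skill_tfidf = cascade_tfidf(list(posts['clean_text']), [tri_skills, bi_skills, uni_skills])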
rnmf = {k: NMF(n_components=k, random_state=0) for k in ks}
print( "Fitting NMF using random initialization..." )
print('No. of topics, Error, Running time')
rnmf_error = []
for k in ks:
t0 = time()
rnmf[k].fit(X_train)
elapsed = time() - t0
err = rnmf[k].reconstruction_err_
print('%d, %0.1f, %0.1fs' %(k, err, elapsed))
rnmf_error.append(err)
# end
Explanation: Training
End of explanation
nmf_features = tf_idf_vect.get_feature_names()
pd.DataFrame(nmf_features).to_csv(RES_DIR + 'nmf_features.csv', index=False)
for k in ks:
top_words = top_words_df(n_top_words, model=rnmf[k],feature_names=nmf_features)
top_words.to_csv(RES_DIR + 'nmf_{}_topics.csv'.format(k), index=False)
# each word dist is a component in NMF
word_dist = pd.DataFrame(rnmf[k].components_).apply(normalize, axis=1)
word_dist.to_csv(RES_DIR + 'nmf_word_dist_{}topics.csv'.format(k), index=False)
Explanation: Save models:
End of explanation
print('Calculating test errors of random NMF ...')
rnmf_test_error = cal_test_err(mf_models=rnmf)
best_k = ks[np.argmin(rnmf_test_error)]
print('The best no. of topics is %d' %best_k)
rnmf_best = rnmf[best_k]
nmf_fig = plotMetrics(train_metric=rnmf_error, test_metric=rnmf_test_error, model_name='NMF')
nmf_fig.savefig(RES_DIR + 'nmf.pdf')
plt.close(nmf_fig)
Explanation: Evaluation
End of explanation
# Put all model metrics on training & test datasets into 2 data frames
model_list = ['LDA', 'randomNMF']
train_metric = pd.DataFrame({'No. of topics': ks, 'LDA': np.divide(lda_scores, 10**6), 'randomNMF': rnmf_error})
test_metric = pd.DataFrame({'No. of topics': ks, 'LDA': perp, 'randomNMF': rnmf_test_error, })
fig = plt.figure(figsize=(10, 6))
for i, model in enumerate(model_list):
plt.subplot(2, 2, i+1)
plt.subplots_adjust(wspace=.5, hspace=.5)
# train metric
plt.title(model)
plt.plot(ks, train_metric[model], '--')
plt.xlabel('No. of topics')
if model == 'LDA':
plt.ylabel(r'Log likelihood ($\times 10^6$)')
else:
plt.ylabel(r'$\| X_{train} - W_{train} H \|_2$')
plt.grid(True)
plt.xticks(ks)
# test metric
plt.subplot(2, 2, i+3)
plt.title(model)
plt.plot(ks, test_metric[model], 'r')
plt.xlabel('No. of topics')
if model == 'LDA':
plt.ylabel(r'Perplexity')
else:
plt.ylabel(r'$\| X_{test} - W_{test} H \|_2$')
plt.grid(True)
plt.xticks(ks)
# end
plt.show()
fig.savefig(RES_DIR + 'lda_vs_nmf.pdf')
plt.close(fig)
Explanation: Model Comparison
End of explanation |
14,366 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Chapter 12
Modeling and Simulation in Python
Copyright 2021 Allen Downey
License
Step4: Code
Here's the code from the previous notebook that we'll need.
Step5: In the previous chapter I presented the SIR model of infectious disease and used it to model the Freshman Plague at Olin. In this chapter we'll consider metrics intended to quantify the effects of the disease and interventions intended to reduce those effects.
Immunization
Models like this are useful for testing "what if?" scenarios. As an
example, we'll consider the effect of immunization.
Suppose there is a vaccine that causes a student to become immune to the Freshman Plague without being infected. How might you modify the model to capture this effect?
One option is to treat immunization as a shortcut from susceptible to
recovered without going through infectious. We can implement this
feature like this
Step6: add_immunization moves the given fraction of the population from S
to R.
Step7: If we assume that 10% of students are vaccinated at the
beginning of the semester, and the vaccine is 100% effective, we can
simulate the effect like this
Step8: The following figure shows S as a function of time, with and
without immunization.
Step9: Metrics
When we plot a time series, we get a view of everything that happened
when the model ran, but often we want to boil it down to a few numbers
that summarize the outcome. These summary statistics are called
metrics, as we saw in Section xxx.
In the SIR model, we might want to know the time until the peak of the
outbreak, the number of people who are sick at the peak, the number of
students who will still be sick at the end of the semester, or the total number of students who get sick at any point.
As an example, I will focus on the last one --- the total number of sick students --- and we will consider interventions intended to minimize it.
When a person gets infected, they move from S to I, so we can get
the total number of infections by computing the difference in S at the beginning and the end
Step10: Without immunization, almost 47% of the population gets infected at some point. With 10% immunization, only 31% get infected. That's pretty good.
Sweeping Immunization
Now let's see what happens if we administer more vaccines. This
following function sweeps a range of immunization rates
Step11: The parameter of sweep_immunity is an array of immunization rates. The
result is a SweepSeries object that maps from each immunization rate
to the resulting fraction of students ever infected.
The following figure shows a plot of the SweepSeries. Notice that
the x-axis is the immunization rate, not time.
Step13: As the immunization rate increases, the number of infections drops
steeply. If 40% of the students are immunized, fewer than 4% get sick.
That's because immunization has two effects
Step14: The following array represents the range of possible spending.
Step16: compute_factor computes the reduction in beta for a given level of campaign spending.
M is chosen so the transition happens around \$500.
K is the maximum reduction in beta, 20%.
B is chosen by trial and error to yield a curve that seems feasible.
Step17: Here's what it looks like.
Step18: The result is the following function, which
takes spending as a parameter and returns factor, which is the factor
by which beta is reduced
Step19: I use compute_factor to write add_hand_washing, which takes a
System object and a budget, and modifies system.beta to model the
effect of hand washing
Step20: Now we can sweep a range of values for spending and use the simulation
to compute the effect
Step21: Here's how we run it
Step22: The following figure shows the result.
Step23: Below \$200, the campaign has little effect.
At \$800 it has a substantial effect, reducing total infections from more than 45% to about 20%.
Above \$800, the additional benefit is small.
Optimization
Let's put it all together. With a fixed budget of \$1200, we have to
decide how many doses of vaccine to buy and how much to spend on the
hand-washing campaign.
Here are the parameters
Step24: The fraction budget/price_per_dose might not be an integer. int is a
built-in function that converts numbers to integers, rounding down.
We'll sweep the range of possible doses
Step25: In this example we call arange with max_doses+1; it returns a NumPy array with the integers from 0 to max_doses, so both endpoints of the dose range are included.
Then we run the simulation for each element of dose_array
Step26: For each number of doses, we compute the fraction of students we can
immunize, fraction and the remaining budget we can spend on the
campaign, spending. Then we run the simulation with those quantities
and store the number of infections.
The following figure shows the result.
Step29: If we buy no doses of vaccine and spend the entire budget on the campaign, the fraction infected is around 19%. At 4 doses, we have \$800 left for the campaign, and this is the optimal point that minimizes the number of students who get sick.
As we increase the number of doses, we have to cut campaign spending,
which turns out to make things worse. But interestingly, when we get
above 10 doses, the effect of herd immunity starts to kick in, and the
number of sick students goes down again.
Summary
Exercises
Exercise | Python Code:
# install Pint if necessary
try:
import pint
except ImportError:
!pip install pint
# download modsim.py if necessary
from os.path import exists
filename = 'modsim.py'
if not exists(filename):
from urllib.request import urlretrieve
url = 'https://raw.githubusercontent.com/AllenDowney/ModSim/main/'
local, _ = urlretrieve(url+filename, filename)
print('Downloaded ' + local)
# import functions from modsim
from modsim import *
Explanation: Chapter 12
Modeling and Simulation in Python
Copyright 2021 Allen Downey
License: Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International
End of explanation
from modsim import State, System
def make_system(beta, gamma):
Make a system object for the SIR model.
beta: contact rate in days
gamma: recovery rate in days
returns: System object
init = State(S=89, I=1, R=0)
init /= sum(init)
t0 = 0
t_end = 7 * 14
return System(init=init, t0=t0, t_end=t_end,
beta=beta, gamma=gamma)
def update_func(state, t, system):
Update the SIR model.
state: State with variables S, I, R
t: time step
system: System with beta and gamma
returns: State object
s, i, r = state
infected = system.beta * i * s
recovered = system.gamma * i
s -= infected
i += infected - recovered
r += recovered
return State(S=s, I=i, R=r)
from numpy import arange
from modsim import TimeFrame
def run_simulation(system, update_func):
Runs a simulation of the system.
system: System object
update_func: function that updates state
returns: TimeFrame
frame = TimeFrame(columns=system.init.index)
frame.loc[system.t0] = system.init
for t in arange(system.t0, system.t_end):
frame.loc[t+1] = update_func(frame.loc[t], t, system)
return frame
Explanation: Code
Here's the code from the previous notebook that we'll need.
End of explanation
def add_immunization(system, fraction):
system.init.S -= fraction
system.init.R += fraction
Explanation: In the previous chapter I presented the SIR model of infectious disease and used it to model the Freshman Plague at Olin. In this chapter we'll consider metrics intended to quantify the effects of the disease and interventions intended to reduce those effects.
Immunization
Models like this are useful for testing "what if?" scenarios. As an
example, we'll consider the effect of immunization.
Suppose there is a vaccine that causes a student to become immune to the Freshman Plague without being infected. How might you modify the model to capture this effect?
One option is to treat immunization as a shortcut from susceptible to
recovered without going through infectious. We can implement this
feature like this:
End of explanation
tc = 3 # time between contacts in days
tr = 4 # recovery time in days
beta = 1 / tc # contact rate in per day
gamma = 1 / tr # recovery rate in per day
system = make_system(beta, gamma)
results = run_simulation(system, update_func)
Explanation: add_immunization moves the given fraction of the population from S
to R.
End of explanation
system2 = make_system(beta, gamma)
add_immunization(system2, 0.1)
results2 = run_simulation(system2, update_func)
Explanation: If we assume that 10% of students are vaccinated at the
beginning of the semester, and the vaccine is 100% effective, we can
simulate the effect like this:
End of explanation
results.S.plot(label='No immunization')
results2.S.plot(label='10% immunization')
decorate(xlabel='Time (days)',
ylabel='Fraction of population')
Explanation: The following figure shows S as a function of time, with and
without immunization.
End of explanation
def calc_total_infected(results, system):
s_0 = results.S[system.t0]
s_end = results.S[system.t_end]
return s_0 - s_end
calc_total_infected(results, system)
calc_total_infected(results2, system2)
Explanation: Metrics
When we plot a time series, we get a view of everything that happened
when the model ran, but often we want to boil it down to a few numbers
that summarize the outcome. These summary statistics are called
metrics, as we saw in Section xxx.
In the SIR model, we might want to know the time until the peak of the
outbreak, the number of people who are sick at the peak, the number of
students who will still be sick at the end of the semester, or the total number of students who get sick at any point.
As an example, I will focus on the last one --- the total number of sick students --- and we will consider interventions intended to minimize it.
When a person gets infected, they move from S to I, so we can get
the total number of infections by computing the difference in S at the beginning and the end:
End of explanation
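The other metrics mentioned above are just as easy to pull out of the results; for instance (a small added sketch, not from the original text):
def peak_infection_time(results):
    # time step at which the infectious fraction is largest
    return results.I.idxmax()

def peak_infection_fraction(results):
    return results.I.max()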
def sweep_immunity(immunize_array):
sweep = SweepSeries()
for fraction in immunize_array:
sir = make_system(beta, gamma)
add_immunization(sir, fraction)
results = run_simulation(sir, update_func)
sweep[fraction] = calc_total_infected(results, sir)
return sweep
Explanation: Without immunization, almost 47% of the population gets infected at some point. With 10% immunization, only 31% get infected. That's pretty good.
Sweeping Immunization
Now let's see what happens if we administer more vaccines. This
following function sweeps a range of immunization rates:
End of explanation
immunize_array = linspace(0, 1, 21)
infected_sweep = sweep_immunity(immunize_array)
infected_sweep.plot()
decorate(xlabel='Fraction immunized',
ylabel='Total fraction infected',
title='Fraction infected vs. immunization rate')
Explanation: The parameter of sweep_immunity is an array of immunization rates. The
result is a SweepSeries object that maps from each immunization rate
to the resulting fraction of students ever infected.
The following figure shows a plot of the SweepSeries. Notice that
the x-axis is the immunization rate, not time.
End of explanation
from numpy import exp
def logistic(x, A=0, B=1, C=1, M=0, K=1, Q=1, nu=1):
Computes the generalize logistic function.
A: controls the lower bound
B: controls the steepness of the transition
C: not all that useful, AFAIK
M: controls the location of the transition
K: controls the upper bound
Q: shift the transition left or right
nu: affects the symmetry of the transition
returns: float or array
exponent = -B * (x - M)
denom = C + Q * exp(exponent)
return A + (K-A) / denom ** (1/nu)
Explanation: As the immunization rate increases, the number of infections drops
steeply. If 40% of the students are immunized, fewer than 4% get sick.
That's because immunization has two effects: it protects the people who get immunized (of course) but it also protects the rest of the
population.
Reducing the number of "susceptibles" and increasing the number of
"resistants" makes it harder for the disease to spread, because some
fraction of contacts are wasted on people who cannot be infected. This
phenomenon is called herd immunity, and it is an important element
of public health (see http://modsimpy.com/herd).
The steepness of the curve is a blessing and a curse. It's a blessing
because it means we don't have to immunize everyone, and vaccines can
protect the "herd" even if they are not 100% effective.
But it's a curse because a small decrease in immunization can cause a
big increase in infections. In this example, if we drop from 80%
immunization to 60%, that might not be too bad. But if we drop from 40% to 20%, that would trigger a major outbreak, affecting more than 15% of the population. For a serious disease like measles, just to name one, that would be a public health catastrophe.
One use of models like this is to demonstrate phenomena like herd
immunity and to predict the effect of interventions like vaccination.
Another use is to evaluate alternatives and guide decision making. We'll see an example in the next section.
Hand washing
Suppose you are the Dean of Student Life, and you have a budget of just \$1200 to combat the Freshman Plague. You have two options for spending this money:
You can pay for vaccinations, at a rate of \$100 per dose.
You can spend money on a campaign to remind students to wash hands
frequently.
We have already seen how we can model the effect of vaccination. Now
let's think about the hand-washing campaign. We'll have to answer two
questions:
How should we incorporate the effect of hand washing in the model?
How should we quantify the effect of the money we spend on a
hand-washing campaign?
For the sake of simplicity, let's assume that we have data from a
similar campaign at another school showing that a well-funded campaign
can change student behavior enough to reduce the infection rate by 20%.
In terms of the model, hand washing has the effect of reducing beta.
That's not the only way we could incorporate the effect, but it seems
reasonable and it's easy to implement.
Now we have to model the relationship between the money we spend and the
effectiveness of the campaign. Again, let's suppose we have data from
another school that suggests:
If we spend \$500 on posters, materials, and staff time, we can
change student behavior in a way that decreases the effective value of beta by 10%.
If we spend \$1000, the total decrease in beta is almost 20%.
Above \$1000, additional spending has little additional benefit.
Logistic function
To model the effect of a hand-washing campaign, I'll use a generalized logistic function (GLF), which is a convenient function for modeling curves that have a generally sigmoid shape. The parameters of the GLF correspond to various features of the curve in a way that makes it easy to find a function that has the shape you want, based on data or background information about the scenario.
End of explanation
spending = linspace(0, 1200, 21)
Explanation: The following array represents the range of possible spending.
End of explanation
def compute_factor(spending):
Reduction factor as a function of spending.
spending: dollars from 0 to 1200
returns: fractional reduction in beta
return logistic(spending, M=500, K=0.2, B=0.01)
Explanation: compute_factor computes the reduction in beta for a given level of campaign spending.
M is chosen so the transition happens around \$500.
K is the maximum reduction in beta, 20%.
B is chosen by trial and error to yield a curve that seems feasible.
End of explanation
percent_reduction = compute_factor(spending) * 100
plot(spending, percent_reduction)
decorate(xlabel='Hand-washing campaign spending (USD)',
ylabel='Percent reduction in infection rate',
title='Effect of hand washing on infection rate')
Explanation: Here's what it looks like.
End of explanation
def compute_factor(spending):
return logistic(spending, M=500, K=0.2, B=0.01)
Explanation: The result is the following function, which
takes spending as a parameter and returns factor, which is the factor
by which beta is reduced:
End of explanation
def add_hand_washing(system, spending):
factor = compute_factor(spending)
system.beta *= (1 - factor)
Explanation: I use compute_factor to write add_hand_washing, which takes a
System object and a budget, and modifies system.beta to model the
effect of hand washing:
End of explanation
def sweep_hand_washing(spending_array):
sweep = SweepSeries()
for spending in spending_array:
system = make_system(beta, gamma)
add_hand_washing(system, spending)
results = run_simulation(system, update_func)
sweep[spending] = calc_total_infected(results, system)
return sweep
Explanation: Now we can sweep a range of values for spending and use the simulation
to compute the effect:
End of explanation
from numpy import linspace
spending_array = linspace(0, 1200, 20)
infected_sweep2 = sweep_hand_washing(spending_array)
Explanation: Here's how we run it:
End of explanation
infected_sweep2.plot()
decorate(xlabel='Hand-washing campaign spending (USD)',
ylabel='Total fraction infected',
title='Effect of hand washing on total infections')
Explanation: The following figure shows the result.
End of explanation
num_students = 90
budget = 1200
price_per_dose = 100
max_doses = int(budget / price_per_dose)
Explanation: Below \$200, the campaign has little effect.
At \$800 it has a substantial effect, reducing total infections from more than 45% to about 20%.
Above \$800, the additional benefit is small.
Optimization
Let's put it all together. With a fixed budget of \$1200, we have to
decide how many doses of vaccine to buy and how much to spend on the
hand-washing campaign.
Here are the parameters:
End of explanation
dose_array = arange(max_doses+1)
Explanation: The fraction budget/price_per_dose might not be an integer. int is a
built-in function that converts numbers to integers, rounding down.
We'll sweep the range of possible doses:
End of explanation
def sweep_doses(dose_array):
sweep = SweepSeries()
for doses in dose_array:
fraction = doses / num_students
spending = budget - doses * price_per_dose
system = make_system(beta, gamma)
add_immunization(system, fraction)
add_hand_washing(system, spending)
results = run_simulation(system, update_func)
sweep[doses] = calc_total_infected(results, system)
return sweep
Explanation: In this example we call arange with max_doses+1; it returns a NumPy array with the integers from 0 to max_doses, so both endpoints of the dose range are included.
Then we run the simulation for each element of dose_array:
End of explanation
infected_sweep3 = sweep_doses(dose_array)
infected_sweep3.plot()
decorate(xlabel='Doses of vaccine',
ylabel='Total fraction infected',
title='Total infections vs. doses')
Explanation: For each number of doses, we compute the fraction of students we can
immunize, fraction and the remaining budget we can spend on the
campaign, spending. Then we run the simulation with those quantities
and store the number of infections.
The following figure shows the result.
End of explanation
# Solution
There is no unique best answer to this question,
but one simple option is to model quarantine as an
effective reduction in gamma, on the assumption that
quarantine reduces the number of infectious contacts
per infected student.
Another option would be to add a fourth compartment
to the model to track the fraction of the population
in quarantine at each point in time. This approach
would be more complex, and it is not obvious that it
is substantially better.
The following function could be used, like
add_immunization and add_hand_washing, to adjust the
parameters in order to model various interventions.
In this example, `high` is the highest duration of
the infection period, with no quarantine. `low` is
the lowest duration, on the assumption that it takes
some time to identify infectious students.
`fraction` is the fraction of infected students who
are quarantined as soon as they are identified.
def add_quarantine(system, fraction):
Model the effect of quarantine by adjusting gamma.
system: System object
fraction: fraction of students quarantined
# `low` represents the number of days a student
# is infectious if quarantined.
# `high` is the number of days they are infectious
# if not quarantined
low = 1
high = 4
tr = high - fraction * (high-low)
system.gamma = 1 / tr
Explanation: If we buy no doses of vaccine and spend the entire budget on the campaign, the fraction infected is around 19%. At 4 doses, we have \$800 left for the campaign, and this is the optimal point that minimizes the number of students who get sick.
As we increase the number of doses, we have to cut campaign spending,
which turns out to make things worse. But interestingly, when we get
above 10 doses, the effect of herd immunity starts to kick in, and the
number of sick students goes down again.
Summary
Exercises
Exercise: Suppose the price of the vaccine drops to $50 per dose. How does that affect the optimal allocation of the spending?
Exercise: Suppose we have the option to quarantine infected students. For example, a student who feels ill might be moved to an infirmary, or a private dorm room, until they are no longer infectious.
How might you incorporate the effect of quarantine in the SIR model?
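For the first exercise, one minimal approach is simply to change price_per_dose and rerun the sweep; the sketch below reuses the variables and functions defined above.
price_per_dose = 50
max_doses = int(budget / price_per_dose)
dose_array = arange(max_doses+1)
infected_sweep4 = sweep_doses(dose_array)
infected_sweep4.plot()
decorate(xlabel='Doses of vaccine',
         ylabel='Total fraction infected')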
End of explanation |
14,367 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
News Categorization using Multinomial Naive Bayes
The objective of this site is to show how to use Multinomial Naive Bayes method to classify news according to some predefined classes.
The News Aggregator Data Set comes from the UCI Machine Learning Repository.
Lichman, M. (2013). UCI Machine Learning Repository [http
Step1: This way we can refer to pandas by its alias 'pd'. Let's import news aggregator data via Pandas
Step2: Function head gives us the first 5 items in a column (or the first 5 rows in the DataFrame)
Step3: We want to predict the category of a news article based only on its title. Class LabelEncoder allows us to encode labels with values between 0 and n_classes-1.
Step4: Categories are literal labels, but it is better for machine learning algorithms just to work with numbers, so we will encode them using LabelEncoder, which encodes labels with values between 0 and n_classes-1.
Step5: Now we should split our data into two sets
Step6: In order to make the training process easier, scikit-learn provides a Pipeline class that behaves like a compound classifier. The first step should be to tokenize and count the number of occurrence of each word that appears into the news'titles. For that, we will use the CountVectorizer class. Then we will transform the counters to a tf-idf representation using TfidfTransformer class. The last step creates the Naive Bayes classifier
Step7: Now we proceed to fit the Naive Bayes classifier to the train set
Step8: Now we can proceed to apply the classifier to the test set and calculate the predicted values
Step9: sklearn.metrics module includes score functions, performance metrics, and pairwise metrics and distance computations.
accuracy_score
Step10: Let's build a text report showing the main classification metrics with the Precision/Recall/F1-score measures for each element in the test data. | Python Code:
import pandas as pd
Explanation: News Categorization using Multinomial Naive Bayes
The objective of this site is to show how to use Multinomial Naive Bayes method to classify news according to some predefined classes.
The News Aggregator Data Set comes from the UCI Machine Learning Repository.
Lichman, M. (2013). UCI Machine Learning Repository [http://archive.ics.uci.edu/ml]. Irvine, CA: University of California, School of Information and Computer Science.
This specific dataset can be found in the UCI ML Repository at this URL: http://archive.ics.uci.edu/ml/datasets/News+Aggregator
This dataset contains headlines, URLs, and categories for 422,937 news stories collected by a web aggregator between March 10th, 2014 and August 10th, 2014. News categories in this dataset are labelled:
b: business;
t: science and technology;
e: entertainment; and
m: health.
Using Multinomial Naive Bayes method, we will try to predict the category (business, entertainment, etc.) of a news article given only its headline.
Let's begin importing the Pandas (Python Data Analysis Library) module. The import statement is the most common way to gain access to the code in another module.
End of explanation
news = pd.read_csv("uci-news-aggregator.csv")
Explanation: This way we can refer to pandas by its alias 'pd'. Let's import news aggregator data via Pandas
End of explanation
print(news.head())
Explanation: Function head gives us the first 5 items in a column (or the first 5 rows in the DataFrame)
End of explanation
from sklearn.preprocessing import LabelEncoder
encoder = LabelEncoder()
y = encoder.fit_transform(news['CATEGORY'])
print(y[:5])
categories = news['CATEGORY']
titles = news['TITLE']
N = len(titles)
print('Number of news',N)
labels = list(set(categories))
print('possible categories',labels)
for l in labels:
print('number of ',l,' news',len(news.loc[news['CATEGORY'] == l]))
Explanation: We want to predict the category of a news article based only on its title. Class LabelEncoder allows us to encode labels with values between 0 and n_classes-1.
End of explanation
from sklearn.preprocessing import LabelEncoder
encoder = LabelEncoder()
ncategories = encoder.fit_transform(categories)
Explanation: Categories are literal labels, but it is better for machine learning algorithms just to work with numbers, so we will encode them using LabelEncoder, which encodes labels with values between 0 and n_classes-1.
End of explanation
Ntrain = int(N * 0.7)
from sklearn.utils import shuffle
titles, ncategories = shuffle(titles, ncategories, random_state=0)
X_train = titles[:Ntrain]
print('X_train.shape',X_train.shape)
y_train = ncategories[:Ntrain]
print('y_train.shape',y_train.shape)
X_test = titles[Ntrain:]
print('X_test.shape',X_test.shape)
y_test = ncategories[Ntrain:]
print('y_test.shape',y_test.shape)
Explanation: Now we should split our data into two sets:
1. a training set (70%) used to discover potentially predictive relationships, and
2. a test set (30%) used to evaluate whether the discovered relationships hold and to assess the strength and utility of a predictive relationship.
Samples should be first shuffled and then split into a pair of train and test sets. Make sure you permute (shuffle) your training data before fitting the model.
End of explanation
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline
print('Training...')
text_clf = Pipeline([('vect', CountVectorizer()),
('tfidf', TfidfTransformer()),
('clf', MultinomialNB()),
])
Explanation: In order to make the training process easier, scikit-learn provides a Pipeline class that behaves like a compound classifier. The first step should be to tokenize and count the number of occurrence of each word that appears into the news'titles. For that, we will use the CountVectorizer class. Then we will transform the counters to a tf-idf representation using TfidfTransformer class. The last step creates the Naive Bayes classifier
End of explanation
text_clf = text_clf.fit(X_train, y_train)
Explanation: Now we proceed to fit the Naive Bayes classifier to the train set
End of explanation
print('Predicting...')
predicted = text_clf.predict(X_test)
Explanation: Now we can proceed to apply the classifier to the test set and calculate the predicted values
End of explanation
from sklearn import metrics
print('accuracy_score',metrics.accuracy_score(y_test,predicted))
print('Reporting...')
Explanation: sklearn.metrics module includes score functions, performance metrics, and pairwise metrics and distance computations.
accuracy_score: computes subset accuracy; used to compare set of predicted labels for a sample to the corresponding set of true labels
End of explanation
print(metrics.classification_report(y_test, predicted, target_names=encoder.classes_))  # class names in the encoder's order
Explanation: Let's build a text report showing the main classification metrics with the Precision/Recall/F1-score measures for each element in the test data.
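Since the whole point is to categorize a headline on its own, here is a quick usage sketch (the headline string is invented for illustration):
example_title = ["Fed official hints at further interest rate cuts"]
pred = text_clf.predict(example_title)
print(encoder.inverse_transform(pred))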
End of explanation |
14,368 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Multiplying Numpy Arrays
Step2: LAB CHALLENGE | Python Code:
import numpy as np
one_dimensional = np.array([1,1,1,2,3,3,3,3,3])
one_dimensional
one_dimensional.shape # not yet rows & columns
one_dimensional.reshape((9,-1)) # let numpy figure out how many columns
one_dimensional # still the same
one_dimensional.ndim
two_dimensional = one_dimensional.reshape(1,9) # recycle same name
two_dimensional.shape # is now 2D even if just the one row
two_dimensional.ndim
class M:
Symbolic representation of multiply, add
def __init__(self, s):
self.s = str(s)
def __mul__(self, other):
return M(self.s + " * " + other.s) # string
def __add__(self, other):
return M(self.s + " + " + other.s)
def __repr__(self):
return self.s
#Demo
one = M(1)
two = M(2)
print(one * two)
A,B,C = map(M, ['A','B','C']) # create three M type objects
m_array = np.array([A,B,C]) # put them in a numpy array
m_array.dtype # infers type (Object)
m_array = m_array.reshape((-1, len(m_array))) # make this 2 dimensional
m_array.shape # transpose works for > 1 dimension
m_array.T # stand it up (3,1) vs (1,3) shape
m_array.dot(m_array.T) # row dot column i.e. self * self.T
m_array.T[1,0] = M('Z') # transpose is not a copy
m_array # original has changes
m_array * m_array # dot versus element-wise
Explanation: Multiplying Numpy Arrays: dot vs __mul__
Data Science has everything to do with linear algebra.
When we want to do a weighted sum, we can put the weights in a row vector, and what they multiply in a column vector.
Assigning weights, usually iteratively, in response to back propagation, is at the heart of machine learning, from logistic regression to neural networks.
Let's go over the basics of creating row and column vectors, such that dot products become possible.
You will find np.dot(A, B) works the same as A.dot(B) when it comes to numpy arrays.
End of explanation
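To make the weighted-sum point concrete, here is a tiny numeric example (the numbers are made up):
w = np.array([[0.2, 0.3, 0.5]])   # weights as a row vector
v = np.array([[10], [20], [30]])  # values as a column vector
w.dot(v)                          # array([[ 23.]])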
from pandas import Series
A = Series(np.arange(10))
Explanation: LAB CHALLENGE:
Create two arrays of compatible dimensions and form their dot product.
numpy.random.randint is a good source of random numbers (for data).
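One possible solution sketch (shapes and value ranges chosen arbitrarily):
A = np.random.randint(0, 10, size=(2, 3))
B = np.random.randint(0, 10, size=(3, 4))
A.dot(B)   # compatible inner dimensions give a (2, 4) result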
End of explanation |
14,369 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Import important modules and declare important directories
Step1: This is a function that we'll use later to plot the results of a linear SVM classifier
Step2: Load in the sample JSON file and view its contents
Step3: Now, let's create two lists for all the reviews in Ohio
Step4: Let's take a look at the following regression (information is correlated with review length)
Step5: Let's try using Harvard-IV sentiment categories as dependent variables
Step6: NOTE
Step7: Let's plot the overall distribution of ratings aggregated across all of the states
Step8: Let's plot the rating distribution of reviews within each of the states.
Step9: Now let's try to build a simple linear support vector machine
Note, all support vector machine algorithm relies on drawing a separating hyperplane amongst the different classes. This is not necessarily guarenteed to exist. For a complete set of conditions that must be satisfied for this to be an appropriate algorithm to use, please see below
Step10: In order to use the machine learning algorithms in Sci-Kit learn, we first have to initialize a CountVectorizer object. We can use this object creates a matrix representation of each of our words. There are many options that we can specify when we initialize our CountVectorizer object (see documentation for full list) but they essentially all relate to how the words are represented in the final matrix.
Step11: Create dataframe to hold our results from the classification algorithms
Step12: Lets call a linear SVM instance from SK Learn have it train on our subset of reviews. We'll output the results to an output dataframe and then calculate a total accuracy percentage.
Step13: SKLearn uses what's known as a pipeline. Instead of having to declare each of these objects on their own and passing them into each other, we can just create one object with all the necessary options specified and then use that to run the algorithm. For each pipeline below, we specify the vector to be the CountVectorizer object we have defined above, set it to use tfidf, and then specify the classifier that we want to use.
Below, we create a separate pipeline for Random Forest, a Bagged Decision Tree, and Multinomial Logistic Regression. We then append the results to the dataframe that we've already created.
Step14: Test results using all of the states
0.5383 from Naive TF-IDF Linear SVM
0.4567 from Naive TF-IDF Linear SVM using Harvard-IV dictionary
0.5241 from Naive TF-IDF Bagged DT using 100 estimators
0.496 from Naive TF-IDF Bagged DT using 100 estimators and Harvard-IV dictionary
0.5156 from Naive TF-IDF RandomForest and Harvard-IV dictionary
0.53 from Naive TF-IDF RF
0.458 from Naive TF-IDF SVM
As you can see, none of the above classifiers performs significantly better than a fair coin toss. This is most likely due to the heavily skewed distribution of review ratings. There are many reviews that receive 4 or 5 stars, therefore it is likely that the language associated with each review is being confused with each other. We can confirm this by looking at the "confusion matrix" of our predictions.
Step15: Each row and column corresponds to a rating number. For example, element (1,1) is the number of 1 star reviews that were correctly classified. Element (1,2) is the number of 1 star reviews that were incorrectly classified as 2 stars. Therefore, the sum of the diagonal represents the total number of correctly classified reviews. As you can see, the bagged decision tree classifier is classifying many four starred reviews as five starred reviews and vice versa.
This indicates that we can improve our results by using more aggregated categories. For example, we can call all four and five star reviews as "good" and all other review ratings as "bad".
Step16: We draw a heat map for each state below. Longitude is on the Y axis and Latitude is on the X axis. The color coding is as follows
Step17: We run the following linear regression model for each of the states | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib as mpl
import pandas as pd
import json
import pandas as pd
import csv
import os
import re
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn import svm
from sklearn.linear_model import SGDClassifier
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.ensemble import BaggingClassifier
from sklearn.svm import LinearSVC
from sklearn.naive_bayes import MultinomialNB
from sklearn.ensemble import RandomForestClassifier
from sklearn import metrics
from sklearn.pipeline import Pipeline
import numpy as np
from sklearn import datasets, linear_model
from sklearn.linear_model import LinearRegression
import statsmodels.api as sm
from scipy import stats
from statsmodels.sandbox.regression.predstd import wls_prediction_std
Explanation: Import important modules and declare important directories
End of explanation
def plot_coefficients(classifier, feature_names, top_features=20):
coef = classifier.coef_.ravel()[0:200]
top_positive_coefficients = np.argsort(coef)[-top_features:]
top_negative_coefficients = np.argsort(coef)[:top_features]
top_coefficients = np.hstack([top_negative_coefficients, top_positive_coefficients])
#create plot
plt.figure(figsize=(15, 5))
colors = ['red' if c < 0 else 'blue' for c in coef[top_coefficients]]
plt.bar(np.arange(2 * top_features), coef[top_coefficients], color=colors)
feature_names = np.array(feature_names)
plt.xticks(np.arange(1, 1 + 2 * top_features), feature_names[top_coefficients], rotation=60, ha='right')
plt.show()
def bayesian_average():
    pass  # stub: declared here but never implemented or used
#This is the main folder where all the modules and JSON files are stored on my computer.
#You need to change this to the folder path specific to your computer
file_directory = "/Users/robertsonwang/Desktop/Python/Yelp_class/yelp-classification/"
reviews_file = "cleaned_reviews_states_2010.json"
biz_file = "cleaned_business_data.json"
Explanation: This is a function that we'll use later to plot the results of a linear SVM classifier
End of explanation
#This is a smaller subset of our overall Yelp data
#I randomly chose 5000 reviews from each state and filed them into the JSON file
#Note that for the overall dataset, we have about 2 million reviews.
#That's why we need to use a data management system like MongoDB in order to hold all our data
#and to more efficiently manipulate it
reviews_json = json.load(open(file_directory+reviews_file))
biz_json = json.load(open(file_directory+biz_file))
for key in reviews_json.keys():
reviews_json[key] = reviews_json[key][0:5000]
#Let's see how reviews_json is set up
print reviews_json.keys()
#We can see that on the highest level, the dictionary keys are the different states
#Let's look at the first entry under Ohio
print reviews_json['OH'][0]['useful']
#So for each review filed under Ohio, we have many different attributes to choose from
#Let's look at what the review and rating was for the first review filed under Ohio
print reviews_json['OH'][0]['text']
print reviews_json['OH'][0]['stars']
Explanation: Load in the sample JSON file and view its contents
End of explanation
#We want to split up reviews between text and labels for each state
reviews = []
stars = []
for key in reviews_json.keys():
for review in reviews_json[key]:
reviews.append(review['text'])
stars.append(review['stars'])
#Just for demonstration, let's pick out the same review example as above but from our respective lists
print reviews[0]
print stars[0]
Explanation: Now, let's create two lists covering the reviews from all of the states:
One that holds all the reviews
One that holds all the ratings
End of explanation
harvard_dict = pd.read_csv('HIV-4.csv')
negative_words = list(harvard_dict.loc[harvard_dict['Negativ'] == 'Negativ']['Entry'])
positive_words = list(harvard_dict.loc[harvard_dict['Positiv'] == 'Positiv']['Entry'])
#Use word dictionary from Hu and Liu (2004)
negative_words = open('negative-words.txt', 'r').read()
negative_words = negative_words.split('\n')
positive_words = open('positive-words.txt', 'r').read()
positive_words = positive_words.split('\n')
total_words = negative_words + positive_words
total_words = list(set(total_words))
review_length = []
negative_percent = []
positive_percent = []
for review in reviews:
length_words = len(review.split())
neg_words = [x.lower() for x in review.split() if x in negative_words]
pos_words = [x.lower() for x in review.split() if x in positive_words]
negative_percent.append(float(len(neg_words))/float(length_words))
positive_percent.append(float(len(pos_words))/float(length_words))
review_length.append(length_words)
regression_df = pd.DataFrame({'stars':stars, 'review_length':review_length, 'neg_percent': negative_percent, 'positive_percent': positive_percent})
#Standardize the explanatory (independent) variables
std_vars = ['neg_percent', 'positive_percent', 'review_length']
for var in std_vars:
len_std = regression_df[var].std()
len_mu = regression_df[var].mean()
regression_df[var] = [(x - len_mu)/len_std for x in regression_df[var]]
Explanation: Let's take a look at the following regression (information is correlated with review length):
$Rating = \beta_{neg}\,neg + \beta_{pos}\,pos + \beta_{num}\,\text{Std\_NumWords} + \epsilon$
Where:
$neg = \frac{\text{Number of Negative Words}}{\text{Total Number of Words}}$
$pos = \frac{\text{Number of Positive Words}}{\text{Total Number of Words}}$
End of explanation
#The R-squared from using the Harvard dictionary is 0.1; the fit below uses the Hu & Liu word dictionary loaded above
X = np.column_stack((regression_df.review_length,regression_df.neg_percent, regression_df.positive_percent))
y = regression_df.stars
X = sm.add_constant(X)
est = sm.OLS(y, X)
est2 = est.fit()
print(est2.summary())
Explanation: Let's try using the Harvard-IV sentiment categories as explanatory variables
End of explanation
x = np.array(regression_df.stars)
#beta = [3.3648, -0.3227 , 0.5033]
y = [int(round(i)) for i in list(est2.fittedvalues)]
y = np.array(y)
errors = np.subtract(x,y)
np.sum(errors)
fig, ax = plt.subplots(figsize=(5,5))
ax.plot(x, x, 'b', label="data")
ax.plot(x, y, 'o', label="ols")
#ax.plot(x, est2.fittedvalues, 'r--.', label="OLS")
#ax.plot(x, iv_u, 'r--')
#ax.plot(x, iv_l, 'r--')
ax.legend(loc='best');
#Do a QQ plot of the data
fig = sm.qqplot(errors)
plt.show()
Explanation: NOTE: BLUE Estimator does not require normality of errors
Gauss-Markov Theorem states that the ordinary least squares estimate is the best linear unbiased estimator (BLUE) of the regression coefficients ('Best' meaning optimal in terms of minimizing mean squared error) as long as the errors:
(1) have mean zero
(2) are uncorrelated
(3) have constant variance
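As a quick, informal check of these conditions (a minimal sketch reusing the fitted est2 object from above, not part of the original analysis), we can inspect the residual mean and plot the residuals against the fitted values:
```python
# Informal residual diagnostics for the OLS fit above
resid = np.asarray(est2.resid)           # residuals from the statsmodels fit
fitted = np.asarray(est2.fittedvalues)   # fitted ratings

print('Mean residual: %.4f' % resid.mean())   # condition (1): should be close to zero

plt.scatter(fitted, resid, alpha=0.3)
plt.axhline(0, color='r', linestyle='--')
plt.xlabel('Fitted rating')
plt.ylabel('Residual')
plt.title('Residuals vs fitted values (look for non-constant spread)')
plt.show()
```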
End of explanation
star_hist = pd.DataFrame({'Ratings':stars})
star_hist.plot.hist()
Explanation: Let's plot the overall distribution of ratings aggregated across all of the states
End of explanation
df_list = []
states = list(reviews_json.keys())
for state in states:
stars_state = []
for review in reviews_json[state]:
stars_state.append(review['stars'])
star_hist = pd.DataFrame({'Ratings':stars_state})
df_list.append(star_hist)
for i in range(0, len(df_list)):
print states[i] + " Rating Distribution"
df_list[i].plot.hist()
plt.show()
Explanation: Let's plot the rating distribution of reviews within each of the states.
End of explanation
#First let's separate out our dataset into a training sample and a test sample
#We specify a training sample percentage of 80% of our total dataset. This is just a rule of thumb
training_percent = 0.8
train_reviews = reviews[0:int(len(reviews)*training_percent)]
test_reviews = reviews[int(len(reviews)*training_percent):len(reviews)]
train_ratings = stars[0:int(len(stars)*training_percent)]
test_ratings = stars[int(len(stars)*training_percent):len(stars)]
Explanation: Now let's try to build a simple linear support vector machine
Note that all support vector machine algorithms rely on drawing a separating hyperplane between the different classes, which is not necessarily guaranteed to exist. For a complete set of conditions that must be satisfied for this to be an appropriate algorithm to use, please see below:
http://www.unc.edu/~normanp/890part4.pdf
The following is also a good, and more general, introduction to Support Vector Machines:
http://web.mit.edu/6.034/wwwbob/svm-notes-long-08.pdf
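As an aside (a sketch, not used in the original notebook), scikit-learn's train_test_split helper performs the same 80/20 split and also shuffles the data, which the positional slicing above does not:
```python
# Shuffled 80/20 split as an alternative to the positional slicing above
try:
    from sklearn.model_selection import train_test_split
except ImportError:  # older scikit-learn releases
    from sklearn.cross_validation import train_test_split

train_reviews2, test_reviews2, train_ratings2, test_ratings2 = train_test_split(
    reviews, stars, test_size=0.2, random_state=42)
print('%d training / %d test reviews' % (len(train_reviews2), len(test_reviews2)))
```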
End of explanation
vectorizer = CountVectorizer(analyzer = "word", \
tokenizer = None, \
preprocessor = None, \
stop_words = None, \
vocabulary = total_words, \
max_features = 200)
train_data_features = vectorizer.fit_transform(train_reviews)
# Use transform (not fit_transform) on the test set so the vectorizer is only ever fit on training data
test_data_features = vectorizer.transform(test_reviews)
Explanation: In order to use the machine learning algorithms in scikit-learn, we first have to initialize a CountVectorizer object. We can use this object to create a matrix representation of the words in each review. There are many options that we can specify when we initialize our CountVectorizer object (see the documentation for the full list), but they essentially all relate to how the words are represented in the final matrix.
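To make the matrix representation concrete, here is a tiny self-contained sketch (using toy documents, not the Yelp reviews) of what CountVectorizer produces:
```python
# Toy illustration of the document-term matrix built by CountVectorizer
from sklearn.feature_extraction.text import CountVectorizer

toy_docs = ["good food great service", "bad food terrible service", "great great food"]
toy_vec = CountVectorizer()
toy_counts = toy_vec.fit_transform(toy_docs)

print(toy_vec.get_feature_names())   # the learned vocabulary (one column per word)
print(toy_counts.toarray())          # one row per document, entries are word counts
```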
End of explanation
output = pd.DataFrame( data={"Reviews": test_reviews, "Rating": test_ratings} )
Explanation: Create dataframe to hold our results from the classification algorithms
End of explanation
#Let's do the same exercise as above but use TF-IDF, you can learn more about TF-IDF here:
#https://nlp.stanford.edu/IR-book/html/htmledition/tf-idf-weighting-1.html
tf_transformer = TfidfTransformer(use_idf=True)
train_data_features = tf_transformer.fit_transform(train_data_features)
# Apply the IDF weights learned from the training set; refitting on the test set would leak test statistics
test_data_features = tf_transformer.transform(test_data_features)
# Instantiate the linear SVM classifier (LinearSVC is imported above); this definition was missing in the original notebook
lin_svm = LinearSVC()
lin_svm = lin_svm.fit(train_data_features, train_ratings)
lin_svm_result = lin_svm.predict(test_data_features)
output['lin_svm'] = lin_svm_result
output['Accurate'] = np.where(output['Rating'] == output['lin_svm'], 1, 0)
accurate_percentage = float(sum(output['Accurate']))/float(len(output))
print accurate_percentage
#Here we plot the features with the highest absolute value coefficient weight
plot_coefficients(lin_svm, vectorizer.get_feature_names())
Explanation: Let's create a linear SVM instance from scikit-learn and have it train on our subset of reviews. We'll write the results to an output dataframe and then calculate a total accuracy percentage.
End of explanation
# random_forest = Pipeline([('vect', vectorizer),
# ('tfidf', TfidfTransformer()),
# ('clf', RandomForestClassifier())])
# random_forest.set_params(clf__n_estimators=100, clf__criterion='entropy').fit(train_reviews, train_ratings)
# output['random_forest'] = random_forest.predict(test_reviews)
# output['Accurate'] = np.where(output['Rating'] == output['random_forest'], 1, 0)
# accurate_percentage = float(sum(output['Accurate']))/float(len(output))
# print accurate_percentage
# bagged_dt = Pipeline([('vect', vectorizer),
# ('tfidf', TfidfTransformer()),
# ('clf', BaggingClassifier())])
# bagged_dt.set_params(clf__n_estimators=100, clf__n_jobs=1).fit(train_reviews, train_ratings)
# output['bagged_dt'] = bagged_dt.predict(test_reviews)
# output['Accurate'] = np.where(output['Rating'] == output['bagged_dt'], 1, 0)
# accurate_percentage = float(sum(output['Accurate']))/float(len(output))
# print accurate_percentage
multi_logit = Pipeline([('vect', vectorizer),
('tfidf', TfidfTransformer()),
('clf', MultinomialNB())])
multi_logit.set_params(clf__alpha=1, clf__fit_prior = True, clf__class_prior = None).fit(train_reviews, train_ratings)
output['multi_logit'] = multi_logit.predict(test_reviews)
output['Accurate'] = np.where(output['Rating'] == output['multi_logit'], 1, 0)
accurate_percentage = float(sum(output['Accurate']))/float(len(output))
print accurate_percentage
random_forest = Pipeline([('vect', vectorizer),
('tfidf', TfidfTransformer()),
('clf', RandomForestClassifier())])
random_forest.set_params(clf__n_estimators=100, clf__criterion='entropy').fit(train_reviews, train_ratings)
output['random_forest'] = random_forest.predict(test_reviews)
output['Accurate'] = np.where(output['Rating'] == output['random_forest'], 1, 0)
accurate_percentage = float(sum(output['Accurate']))/float(len(output))
print accurate_percentage
Explanation: scikit-learn uses what's known as a pipeline. Instead of having to declare each of these objects on its own and pass them into each other, we can create one object with all the necessary options specified and then use that to run the algorithm. For each pipeline below, we specify the vectorizer to be the CountVectorizer object we defined above, set it to use TF-IDF, and then specify the classifier that we want to use.
Below, we create a separate pipeline for Random Forest, a Bagged Decision Tree, and Multinomial Logistic Regression. We then append the results to the dataframe that we've already created.
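A side benefit of the pipeline interface (shown here as a sketch with hypothetical parameter values, not part of the original analysis) is that hyper-parameters of any step can be tuned with a grid search over the whole text-processing chain:
```python
# Grid search over the Naive Bayes smoothing parameter of the multi_logit pipeline
try:
    from sklearn.model_selection import GridSearchCV
except ImportError:  # older scikit-learn releases
    from sklearn.grid_search import GridSearchCV

param_grid = {'clf__alpha': [0.1, 0.5, 1.0]}   # hypothetical values to try
grid = GridSearchCV(multi_logit, param_grid, cv=3)
grid.fit(train_reviews, train_ratings)
print(grid.best_params_)
print(grid.best_score_)
```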
End of explanation
# NOTE: this requires the bagged_dt pipeline (commented out above) to have been defined and fitted
print metrics.confusion_matrix(test_ratings, bagged_dt.predict(test_reviews), labels = [1, 2, 3, 4, 5])
Explanation: Test results using all of the states
0.5383 from Naive TF-IDF Linear SVM
0.4567 from Naive TF-IDF Linear SVM using Harvard-IV dictionary
0.5241 from Naive TF-IDF Bagged DT using 100 estimators
0.496 from Naive TF-IDF Bagged DT using 100 estimators and Harvard-IV dictionary
0.5156 from Naive TF-IDF RandomForest and Harvard-IV dictionary
0.53 from Naive TF-IDF RF
0.458 from Naive TF-IDF SVM
As you can see, none of the above classifiers performs significantly better than a fair coin toss. This is most likely due to the heavily skewed distribution of review ratings. There are many reviews that receive 4 or 5 stars, therefore it is likely that the language associated with each review is being confused with each other. We can confirm this by looking at the "confusion matrix" of our predictions.
End of explanation
for review in reviews_json[reviews_json.keys()[0]]:
print type(review['date'])
break
latitude_list = []
longitude_list = []
stars_list = []
count_list = []
state_list = []
for biz in biz_json:
stars_list.append(biz['stars'])
latitude_list.append(biz['latitude'])
longitude_list.append(biz['longitude'])
count_list.append(biz['review_count'])
state_list.append(biz['state'])
biz_df = pd.DataFrame({'ratings':stars_list, 'latitude':latitude_list, 'longitude': longitude_list, 'review_count': count_list, 'state':state_list})
Explanation: Each row and column corresponds to a rating number. For example, element (1,1) is the number of 1 star reviews that were correctly classified. Element (1,2) is the number of 1 star reviews that were incorrectly classified as 2 stars. Therefore, the sum of the diagonal represents the total number of correctly classified reviews. As you can see, the bagged decision tree classifier is classifying many four starred reviews as five starred reviews and vice versa.
This indicates that we can improve our results by using more aggregated categories. For example, we can call all four and five star reviews as "good" and all other review ratings as "bad".
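A minimal sketch of that aggregation (not run in the original notebook): collapse the ratings into a binary good/bad label and re-fit one of the pipelines defined above:
```python
# Collapse 4-5 stars to 'good' and 1-3 stars to 'bad', then re-fit the Naive Bayes pipeline
train_binary = ['good' if r >= 4 else 'bad' for r in train_ratings]
test_binary = ['good' if r >= 4 else 'bad' for r in test_ratings]

multi_logit.fit(train_reviews, train_binary)
binary_pred = multi_logit.predict(test_reviews)
print(np.mean(binary_pred == np.array(test_binary)))   # binary accuracy
```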
End of explanation
states = [u'OH', u'NC', u'WI', u'IL', u'AZ', u'NV']
cmap, norm = mpl.colors.from_levels_and_colors([1, 2, 3, 4, 5], ['red', 'orange', 'yellow', 'green', 'blue'], extend = 'max')
for state in states:
state_df = biz_df[biz_df.state == state]
state_df_filt = state_df[(np.abs(state_df.longitude-state_df.longitude.mean()) <= 2*state_df.longitude.std()) \
& (np.abs(state_df.latitude-state_df.latitude.mean()) <= 2*state_df.latitude.std())]
plt.ylim(min(state_df_filt.latitude), max(state_df_filt.latitude))
plt.xlim(min(state_df_filt.longitude), max(state_df_filt.longitude))
plt.scatter(state_df_filt.longitude, state_df_filt.latitude, c=state_df_filt.ratings, cmap=cmap, norm=norm)
plt.show()
print state
Explanation: We draw a heat map for each state below. Longitude is on the X axis and Latitude is on the Y axis. The color coding is as follows:
Red = Rating of 1
Orange = Rating of 2
Yellow = Rating of 3
Green = Rating of 4
Blue = Rating of 5
End of explanation
for state in states:
state_df = biz_df[biz_df.state == state]
state_df_filt = state_df[(np.abs(state_df.longitude-state_df.longitude.mean()) <= 2*state_df.longitude.std()) \
& (np.abs(state_df.latitude-state_df.latitude.mean()) <= 2*state_df.latitude.std())]
state_df_filt['longitude'] = (state_df_filt.longitude - state_df.longitude.mean())/state_df.longitude.std()
state_df_filt['latitude'] = (state_df_filt.latitude - state_df.latitude.mean())/state_df.latitude.std()
state_df_filt['review_count'] = (state_df_filt.review_count - state_df.review_count.mean())/state_df.review_count.std()
X = np.column_stack((state_df_filt.longitude, state_df_filt.latitude, state_df_filt.review_count))
y = state_df_filt.ratings
est = sm.OLS(y, X)
est2 = est.fit()
print(est2.summary())
print state
Explanation: We run the following linear regression model for each of the states:
$Rating = \beta_{1}\,\text{Longitude} + \beta_{2}\,\text{Latitude} + \beta_{3}\,\text{NumReviews} + \epsilon$
End of explanation |
14,370 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Index - Back - Next
Output widgets
Step1: The Output widget can capture and display stdout, stderr and rich output generated by IPython. You can also append output directly to an output widget, or clear it programmatically.
Step2: After the widget is created, direct output to it using a context manager. You can print text to the output area
Step3: Rich output can also be directed to the output area. Anything which displays nicely in a Jupyter notebook will also display well in the Output widget.
Step4: We can even display complex mimetypes, such as nested widgets, in an output widget.
Step5: We can also append outputs to the output widget directly with the convenience methods append_stdout, append_stderr, or append_display_data.
Step6: We can clear the output by either using IPython.display.clear_output within the context manager, or we can call the widget's clear_output method directly.
Step7: clear_output supports the keyword argument wait. With this set to True, the widget contents are not cleared immediately. Instead, they are cleared the next time the widget receives something to display. This can be useful when replacing content in the output widget
Step8: out.capture supports the keyword argument clear_output. Setting this to True will clear the output widget every time the function is invoked, so that you only see the output of the last invocation. With clear_output set to True, you can also pass a wait=True argument to only clear the output once new output is available. Of course, you can also manually clear the output any time as well.
Step9: Output widgets as the foundation for interact
The output widget forms the basis of how interact and related methods are implemented. It can also be used by itself to create rich layouts with widgets and code output. One simple way to customize how an interact UI looks is to use the interactive_output function to hook controls up to a function whose output is captured in the returned output widget. In the next example, we stack the controls vertically and then put the output of the function to the right.
Step10: Debugging errors in callbacks with the output widget
On some platforms, like JupyterLab, output generated by widget callbacks (for instance, functions attached to the .observe method on widget traits, or to the .on_click method on button widgets) are not displayed anywhere. Even on other platforms, it is unclear what cell this output should appear in. This can make debugging errors in callback functions more challenging.
An effective tool for accessing the output of widget callbacks is to decorate the callback with an output widget's capture method. You can then display the widget in a new cell to see the callback output.
Step15: Integrating output widgets with the logging module
While using the .capture decorator works well for understanding and debugging single callbacks, it does not scale to larger applications. Typically, in larger applications, one might use the logging module to print information on the status of the program. However, in the case of widget applications, it is unclear where the logging output should go.
A useful pattern is to create a custom handler that redirects logs to an output widget. The output widget can then be displayed in a new cell to monitor the application while it runs.
Step16: Interacting with output widgets from background threads
Jupyter's display mechanism can be counter-intuitive when displaying output produced by background threads. A background thread's output is printed to whatever cell the main thread is currently writing to. To see this directly, create a thread that repeatedly prints to standard out | Python Code:
import ipywidgets as widgets
Explanation: Index - Back - Next
Output widgets: leveraging Jupyter's display system
End of explanation
out = widgets.Output(layout={'border': '1px solid black'})
out
Explanation: The Output widget can capture and display stdout, stderr and rich output generated by IPython. You can also append output directly to an output widget, or clear it programmatically.
End of explanation
with out:
for i in range(10):
print(i, 'Hello world!')
Explanation: After the widget is created, direct output to it using a context manager. You can print text to the output area:
End of explanation
from IPython.display import YouTubeVideo
with out:
display(YouTubeVideo('eWzY2nGfkXk'))
Explanation: Rich output can also be directed to the output area. Anything which displays nicely in a Jupyter notebook will also display well in the Output widget.
End of explanation
with out:
display(widgets.IntSlider())
Explanation: We can even display complex mimetypes, such as nested widgets, in an output widget.
End of explanation
out = widgets.Output(layout={'border': '1px solid black'})
out.append_stdout('Output appended with append_stdout')
out.append_display_data(YouTubeVideo('eWzY2nGfkXk'))
out
Explanation: We can also append outputs to the output widget directly with the convenience methods append_stdout, append_stderr, or append_display_data.
End of explanation
out.clear_output()
Explanation: We can clear the output by either using IPython.display.clear_output within the context manager, or we can call the widget's clear_output method directly.
End of explanation
@out.capture()
def function_with_captured_output():
print('This goes into the output widget')
raise Exception('As does this')
function_with_captured_output()
Explanation: clear_output supports the keyword argument wait. With this set to True, the widget contents are not cleared immediately. Instead, they are cleared the next time the widget receives something to display. This can be useful when replacing content in the output widget: it allows for smoother transitions by avoiding a jarring resize of the widget following the call to clear_output.
Finally, we can use an output widget to capture all the output produced by a function using the capture decorator.
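A minimal sketch of wait=True (not part of the original notebook): repeatedly replacing the widget contents without the flicker of an intermediate empty state:
```python
import time
from IPython.display import clear_output

counter_out = widgets.Output()
display(counter_out)

with counter_out:
    for i in range(5):
        clear_output(wait=True)   # old content stays visible until the new print below arrives
        print('step', i)
        time.sleep(0.5)
```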
End of explanation
out.clear_output()
Explanation: out.capture supports the keyword argument clear_output. Setting this to True will clear the output widget every time the function is invoked, so that you only see the output of the last invocation. With clear_output set to True, you can also pass a wait=True argument to only clear the output once new output is available. Of course, you can also manually clear the output any time as well.
End of explanation
a = widgets.IntSlider(description='a')
b = widgets.IntSlider(description='b')
c = widgets.IntSlider(description='c')
def f(a, b, c):
print('{}*{}*{}={}'.format(a, b, c, a*b*c))
out = widgets.interactive_output(f, {'a': a, 'b': b, 'c': c})
widgets.HBox([widgets.VBox([a, b, c]), out])
Explanation: Output widgets as the foundation for interact
The output widget forms the basis of how interact and related methods are implemented. It can also be used by itself to create rich layouts with widgets and code output. One simple way to customize how an interact UI looks is to use the interactive_output function to hook controls up to a function whose output is captured in the returned output widget. In the next example, we stack the controls vertically and then put the output of the function to the right.
End of explanation
debug_view = widgets.Output(layout={'border': '1px solid black'})
@debug_view.capture(clear_output=True)
def bad_callback(event):
print('This is about to explode')
return 1.0 / 0.0
button = widgets.Button(
description='click me to raise an exception',
layout={'width': '300px'}
)
button.on_click(bad_callback)
button
debug_view
Explanation: Debugging errors in callbacks with the output widget
On some platforms, like JupyterLab, output generated by widget callbacks (for instance, functions attached to the .observe method on widget traits, or to the .on_click method on button widgets) are not displayed anywhere. Even on other platforms, it is unclear what cell this output should appear in. This can make debugging errors in callback functions more challenging.
An effective tool for accessing the output of widget callbacks is to decorate the callback with an output widget's capture method. You can then display the widget in a new cell to see the callback output.
End of explanation
import ipywidgets as widgets
import logging
class OutputWidgetHandler(logging.Handler):
    """Custom logging handler sending logs to an output widget"""
def __init__(self, *args, **kwargs):
super(OutputWidgetHandler, self).__init__(*args, **kwargs)
layout = {
'width': '100%',
'height': '160px',
'border': '1px solid black'
}
self.out = widgets.Output(layout=layout)
def emit(self, record):
        """Overload of logging.Handler method"""
formatted_record = self.format(record)
new_output = {
'name': 'stdout',
'output_type': 'stream',
'text': formatted_record+'\n'
}
self.out.outputs = (new_output, ) + self.out.outputs
def show_logs(self):
        """Show the logs"""
display(self.out)
def clear_logs(self):
        """Clear the current logs"""
self.out.clear_output()
logger = logging.getLogger(__name__)
handler = OutputWidgetHandler()
handler.setFormatter(logging.Formatter('%(asctime)s - [%(levelname)s] %(message)s'))
logger.addHandler(handler)
logger.setLevel(logging.INFO)
handler.show_logs()
handler.clear_logs()
logger.info('Starting program')
try:
logger.info('About to try something dangerous...')
1.0/0.0
except Exception as e:
logger.exception('An error occurred!')
Explanation: Integrating output widgets with the logging module
While using the .capture decorator works well for understanding and debugging single callbacks, it does not scale to larger applications. Typically, in larger applications, one might use the logging module to print information on the status of the program. However, in the case of widget applications, it is unclear where the logging output should go.
A useful pattern is to create a custom handler that redirects logs to an output widget. The output widget can then be displayed in a new cell to monitor the application while it runs.
End of explanation
import threading
from IPython.display import display, HTML
import ipywidgets as widgets
import time
def thread_func(something, out):
for i in range(1, 5):
time.sleep(0.3)
out.append_stdout('{} {} {}\n'.format(i, '**'*i, something))
out.append_display_data(HTML("<em>All done!</em>"))
display('Display in main thread')
out = widgets.Output()
# Now the key: the container is displayed (while empty) in the main thread
display(out)
thread = threading.Thread(
target=thread_func,
args=("some text", out))
thread.start()
thread.join()
Explanation: Interacting with output widgets from background threads
Jupyter's display mechanism can be counter-intuitive when displaying output produced by background threads. A background thread's output is printed to whatever cell the main thread is currently writing to. To see this directly, create a thread that repeatedly prints to standard out:
```python
import threading
import time
def run():
for i in itertools.count(0):
time.sleep(1)
print('output from background {}'.format(i))
t = threading.Thread(target=run)
t.start()
```
This always prints in the currently active cell, not the cell that started the background thread.
This can lead to surprising behavior in output widgets. During the time in which output is captured by the output widget, any output generated in the notebook, regardless of thread, will go into the output widget.
The best way to avoid surprises is to never use an output widget's context manager in a context where multiple threads generate output. Instead, we can pass an output widget to the function executing in a thread, and use append_display_data(), append_stdout(), or append_stderr() methods to append displayable output to the output widget.
End of explanation |
14,371 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Creating MNE's data structures from scratch
MNE provides mechanisms for creating various core objects directly from
NumPy arrays.
Step1: Creating
Step2: You can also supply more extensive metadata
Step3: <div class="alert alert-info"><h4>Note</h4><p>When assigning new values to the fields of an
Step4: Creating
Step5: It is necessary to supply an "events" array in order to create an Epochs
object. This is of shape(n_events, 3) where the first column is the sample
number (time) of the event, the second column indicates the value from which
the transition is made from (only used when the new value is bigger than the
old one), and the third column is the new event value.
Step6: More information about the event codes
Step7: Finally, we must specify the beginning of an epoch (the end will be inferred
from the sampling frequency and n_samples)
Step8: Now we can create the
Step9: Creating | Python Code:
import mne
import numpy as np
Explanation: Creating MNE's data structures from scratch
MNE provides mechanisms for creating various core objects directly from
NumPy arrays.
End of explanation
# Create some dummy metadata
n_channels = 32
sampling_rate = 200
info = mne.create_info(n_channels, sampling_rate)
print(info)
Explanation: Creating :class:~mne.Info objects
<div class="alert alert-info"><h4>Note</h4><p>for full documentation on the :class:`~mne.Info` object, see
`tut-info-class`. See also `ex-array-classes`.</p></div>
Normally, :class:mne.Info objects are created by the various
data import functions.
However, if you wish to create one from scratch, you can use the
:func:mne.create_info function to initialize the minimally required
fields. Further fields can be assigned later as one would with a regular
dictionary.
The following creates the absolute minimum info structure:
End of explanation
# Names for each channel
channel_names = ['MEG1', 'MEG2', 'Cz', 'Pz', 'EOG']
# The type (mag, grad, eeg, eog, misc, ...) of each channel
channel_types = ['grad', 'grad', 'eeg', 'eeg', 'eog']
# The sampling rate of the recording
sfreq = 1000 # in Hertz
# The EEG channels use the standard naming strategy.
# By supplying the 'montage' parameter, approximate locations
# will be added for them
montage = 'standard_1005'
# Initialize required fields
info = mne.create_info(channel_names, sfreq, channel_types, montage)
# Add some more information
info['description'] = 'My custom dataset'
info['bads'] = ['Pz'] # Names of bad channels
print(info)
Explanation: You can also supply more extensive metadata:
End of explanation
# Generate some random data
data = np.random.randn(5, 1000)
# Initialize an info structure
info = mne.create_info(
ch_names=['MEG1', 'MEG2', 'EEG1', 'EEG2', 'EOG'],
ch_types=['grad', 'grad', 'eeg', 'eeg', 'eog'],
sfreq=100
)
custom_raw = mne.io.RawArray(data, info)
print(custom_raw)
Explanation: <div class="alert alert-info"><h4>Note</h4><p>When assigning new values to the fields of an
:class:`mne.Info` object, it is important that the
fields are consistent:
- The length of the channel information field `chs` must be
`nchan`.
- The length of the `ch_names` field must be `nchan`.
- The `ch_names` field should be consistent with the `name` field
of the channel information contained in `chs`.</p></div>
Creating :class:~mne.io.Raw objects
To create a :class:mne.io.Raw object from scratch, you can use the
:class:mne.io.RawArray class, which implements raw data that is backed by a
numpy array. The correct units for the data are:
V: eeg, eog, seeg, emg, ecg, bio, ecog
T: mag
T/m: grad
M: hbo, hbr
Am: dipole
AU: misc
The :class:mne.io.RawArray constructor simply takes the data matrix and
:class:mne.Info object:
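As a small side note (a sketch, not part of the original tutorial): if your recording happens to be in other units, for example EEG in microvolts, scale it to the SI units listed above before building the RawArray:
```python
# Hypothetical EEG data recorded in microvolts: convert to volts, as MNE expects
data_uV = np.random.randn(2, 1000) * 50.0   # two channels with ~50 microvolt amplitudes
data_V = data_uV * 1e-6                     # microvolts to volts

info_eeg = mne.create_info(ch_names=['EEG1', 'EEG2'], sfreq=100, ch_types=['eeg', 'eeg'])
raw_eeg = mne.io.RawArray(data_V, info_eeg)
print(raw_eeg)
```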
End of explanation
# Generate some random data: 10 epochs, 5 channels, 2 seconds per epoch
sfreq = 100
data = np.random.randn(10, 5, sfreq * 2)
# Initialize an info structure
info = mne.create_info(
ch_names=['MEG1', 'MEG2', 'EEG1', 'EEG2', 'EOG'],
ch_types=['grad', 'grad', 'eeg', 'eeg', 'eog'],
sfreq=sfreq
)
Explanation: Creating :class:~mne.Epochs objects
To create an :class:mne.Epochs object from scratch, you can use the
:class:mne.EpochsArray class, which uses a numpy array directly without
wrapping a raw object. The array must be of shape(n_epochs, n_chans,
n_times). The proper units of measure are listed above.
End of explanation
# Create an event matrix: 10 events with alternating event codes
events = np.array([
[0, 0, 1],
[1, 0, 2],
[2, 0, 1],
[3, 0, 2],
[4, 0, 1],
[5, 0, 2],
[6, 0, 1],
[7, 0, 2],
[8, 0, 1],
[9, 0, 2],
])
Explanation: It is necessary to supply an "events" array in order to create an Epochs
object. This is of shape(n_events, 3) where the first column is the sample
number (time) of the event, the second column indicates the value from which
the transition is made from (only used when the new value is bigger than the
old one), and the third column is the new event value.
End of explanation
event_id = dict(smiling=1, frowning=2)
Explanation: More information about the event codes: subject was either smiling or
frowning
End of explanation
# Trials were cut from -0.1 to 1.0 seconds
tmin = -0.1
Explanation: Finally, we must specify the beginning of an epoch (the end will be inferred
from the sampling frequency and n_samples)
End of explanation
custom_epochs = mne.EpochsArray(data, info, events, tmin, event_id)
print(custom_epochs)
# We can treat the epochs object as we would any other
_ = custom_epochs['smiling'].average().plot(time_unit='s')
Explanation: Now we can create the :class:mne.EpochsArray object
End of explanation
# The averaged data
data_evoked = data.mean(0)
# The number of epochs that were averaged
nave = data.shape[0]
# A comment to describe to evoked (usually the condition name)
comment = "Smiley faces"
# Create the Evoked object
evoked_array = mne.EvokedArray(data_evoked, info, tmin,
comment=comment, nave=nave)
print(evoked_array)
_ = evoked_array.plot(time_unit='s')
Explanation: Creating :class:~mne.Evoked Objects
If you already have data that is collapsed across trials, you may also
directly create an evoked array. Its constructor accepts an array of
shape(n_chans, n_times) in addition to some bookkeeping parameters.
The proper units of measure for the data are listed above.
End of explanation |
14,372 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
GLM
Step1: Load and Prepare Data
We'll use the Hogg 2010 data available at https
Step2: Observe
Step3: Sample
Step4: View Traces
NOTE
Step5: NOTE
Step6: Sample
Step7: View Traces
Step8: Observe
Step9: Sample
Step10: View Traces
Step11: NOTE
Step12: Observe
Step13: Posterior Prediction Plots for OLS vs StudentT vs SignalNoise | Python Code:
%matplotlib inline
%qtconsole --colors=linux
import warnings
warnings.filterwarnings('ignore')
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from scipy import optimize
import pymc3 as pm
import theano as thno
import theano.tensor as T
# configure some basic options
sns.set(style="darkgrid", palette="muted")
pd.set_option('display.notebook_repr_html', True)
plt.rcParams['figure.figsize'] = 12, 8
np.random.seed(0)
Explanation: GLM: Robust Regression with Outlier Detection
A minimal reproducable example of Robust Regression with Outlier Detection using Hogg 2010 Signal vs Noise method.
This is a complementary approach to the Student-T robust regression as illustrated in Thomas Wiecki's notebook in the PyMC3 documentation, that approach is also compared here.
This model returns a robust estimate of linear coefficients and an indication of which datapoints (if any) are outliers.
The likelihood evaluation is essentially a copy of eqn 17 in "Data analysis recipes: Fitting a model to data" - Hogg 2010.
The model is adapted specifically from Jake Vanderplas' implementation (3rd model tested).
The dataset is tiny and hardcoded into this Notebook. It contains errors in both the x and y, but we will deal here with only errors in y.
Note:
Python 3.4 project using latest available PyMC3
Developed using ContinuumIO Anaconda distribution on a Macbook Pro 3GHz i7, 16GB RAM, OSX 10.10.5.
During development I've found that 3 data points are always indicated as outliers, but the remaining ordering of datapoints by decreasing outlier-hood is slightly unstable between runs: the posterior surface appears to have a small number of solutions with similar probability.
Finally, if runs become unstable or Theano throws weird errors, try clearing the cache $> theano-cache clear and rerunning the notebook.
Package Requirements (shown as a conda-env YAML):
```
$> less conda_env_pymc3_examples.yml
name: pymc3_examples
channels:
- defaults
dependencies:
- python=3.4
- ipython
- ipython-notebook
- ipython-qtconsole
- numpy
- scipy
- matplotlib
- pandas
- seaborn
- patsy
- pip
$> conda env create --file conda_env_pymc3_examples.yml
$> source activate pymc3_examples
$> pip install --process-dependency-links git+https://github.com/pymc-devs/pymc3
```
Setup
End of explanation
#### cut & pasted directly from the fetch_hogg2010test() function
## identical to the original dataset as hardcoded in the Hogg 2010 paper
dfhogg = pd.DataFrame(np.array([[1, 201, 592, 61, 9, -0.84],
[2, 244, 401, 25, 4, 0.31],
[3, 47, 583, 38, 11, 0.64],
[4, 287, 402, 15, 7, -0.27],
[5, 203, 495, 21, 5, -0.33],
[6, 58, 173, 15, 9, 0.67],
[7, 210, 479, 27, 4, -0.02],
[8, 202, 504, 14, 4, -0.05],
[9, 198, 510, 30, 11, -0.84],
[10, 158, 416, 16, 7, -0.69],
[11, 165, 393, 14, 5, 0.30],
[12, 201, 442, 25, 5, -0.46],
[13, 157, 317, 52, 5, -0.03],
[14, 131, 311, 16, 6, 0.50],
[15, 166, 400, 34, 6, 0.73],
[16, 160, 337, 31, 5, -0.52],
[17, 186, 423, 42, 9, 0.90],
[18, 125, 334, 26, 8, 0.40],
[19, 218, 533, 16, 6, -0.78],
[20, 146, 344, 22, 5, -0.56]]),
columns=['id','x','y','sigma_y','sigma_x','rho_xy'])
## for convenience zero-base the 'id' and use as index
dfhogg['id'] = dfhogg['id'] - 1
dfhogg.set_index('id', inplace=True)
## standardize (mean center and divide by 1 sd)
dfhoggs = (dfhogg[['x','y']] - dfhogg[['x','y']].mean(0)) / dfhogg[['x','y']].std(0)
dfhoggs['sigma_y'] = dfhogg['sigma_y'] / dfhogg['y'].std(0)
dfhoggs['sigma_x'] = dfhogg['sigma_x'] / dfhogg['x'].std(0)
## create xlims ylims for plotting
xlims = (dfhoggs['x'].min() - np.ptp(dfhoggs['x'])/5
,dfhoggs['x'].max() + np.ptp(dfhoggs['x'])/5)
ylims = (dfhoggs['y'].min() - np.ptp(dfhoggs['y'])/5
,dfhoggs['y'].max() + np.ptp(dfhoggs['y'])/5)
## scatterplot the standardized data
g = sns.FacetGrid(dfhoggs, size=8)
_ = g.map(plt.errorbar, 'x', 'y', 'sigma_y', 'sigma_x', marker="o", ls='')
_ = g.axes[0][0].set_ylim(ylims)
_ = g.axes[0][0].set_xlim(xlims)
plt.subplots_adjust(top=0.92)
_ = g.fig.suptitle('Scatterplot of Hogg 2010 dataset after standardization', fontsize=16)
Explanation: Load and Prepare Data
We'll use the Hogg 2010 data available at https://github.com/astroML/astroML/blob/master/astroML/datasets/hogg2010test.py
It's a very small dataset so for convenience, it's hardcoded below
End of explanation
with pm.Model() as mdl_ols:
## Define weakly informative Normal priors to give Ridge regression
b0 = pm.Normal('b0_intercept', mu=0, sd=100)
b1 = pm.Normal('b1_slope', mu=0, sd=100)
## Define linear model
yest = b0 + b1 * dfhoggs['x']
## Use y error from dataset, convert into theano variable
sigma_y = thno.shared(np.asarray(dfhoggs['sigma_y'],
dtype=thno.config.floatX), name='sigma_y')
## Define Normal likelihood
likelihood = pm.Normal('likelihood', mu=yest, sd=sigma_y, observed=dfhoggs['y'])
Explanation: Observe:
Even judging just by eye, you can see these datapoints mostly fall on / around a straight line with positive gradient
It looks like a few of the datapoints may be outliers from such a line
Create Conventional OLS Model
The linear model is really simple and conventional:
$$\bf{y} = \beta^{T} \bf{X} + \bf{\sigma}$$
where:
$\beta$ = coefs = ${1, \beta_{j \in X_{j}}}$
$\sigma$ = the measured error in $y$ in the dataset sigma_y
Define model
NOTE:
+ We're using a simple linear OLS model with Normally distributed priors so that it behaves like a ridge regression
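As a quick frequentist reference point (a sketch, not part of the original analysis), a weighted least-squares fit using the reported sigma_y gives coefficients we can later compare against the posterior means of b0_intercept and b1_slope:
```python
# Weighted least-squares reference fit; np.polyfit weights should be 1/sigma for Gaussian errors
slope_ref, intercept_ref = np.polyfit(dfhoggs['x'], dfhoggs['y'], 1,
                                      w=1.0 / dfhoggs['sigma_y'])
print('reference intercept = %.3f, slope = %.3f' % (intercept_ref, slope_ref))
```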
End of explanation
with mdl_ols:
## find MAP using Powell, seems to be more robust
start_MAP = pm.find_MAP(fmin=optimize.fmin_powell, disp=True)
## take samples
traces_ols = pm.sample(2000, start=start_MAP, step=pm.NUTS(), progressbar=True)
Explanation: Sample
End of explanation
_ = pm.traceplot(traces_ols[-1000:], figsize=(12,len(traces_ols.varnames)*1.5),
lines={k: v['mean'] for k, v in pm.df_summary(traces_ols[-1000:]).iterrows()})
Explanation: View Traces
NOTE: I'll 'burn' the traces to only retain the final 1000 samples
End of explanation
with pm.Model() as mdl_studentt:
## Define weakly informative Normal priors to give Ridge regression
b0 = pm.Normal('b0_intercept', mu=0, sd=100)
b1 = pm.Normal('b1_slope', mu=0, sd=100)
## Define linear model
yest = b0 + b1 * dfhoggs['x']
## Use y error from dataset, convert into theano variable
sigma_y = thno.shared(np.asarray(dfhoggs['sigma_y'],
dtype=thno.config.floatX), name='sigma_y')
## define prior for Student T degrees of freedom
nu = pm.DiscreteUniform('nu', lower=1, upper=100)
## Define Student T likelihood
likelihood = pm.StudentT('likelihood', mu=yest, sd=sigma_y, nu=nu
,observed=dfhoggs['y'])
Explanation: NOTE: We'll illustrate this OLS fit and compare to the datapoints in the final plot
Create Robust Model: Student-T Method
I've added this brief section in order to directly compare against the Student-T based method demonstrated in Thomas Wiecki's notebook in the PyMC3 documentation
Instead of using a Normal distribution for the likelihood, we use a Student-T, which has fatter tails. In theory this allows outliers to have a smaller mean square error in the likelihood, and thus have less influence on the regression estimation. This method does not produce inlier / outlier flags but is simpler and faster to run than the Signal Vs Noise model below, so a comparison seems worthwhile.
Note: we'll constrain the Student-T 'degrees of freedom' parameter nu to be an integer, but otherwise leave it as just another stochastic to be inferred: no need for prior knowledge.
Define Model
End of explanation
with mdl_studentt:
## find MAP using Powell, seems to be more robust
start_MAP = pm.find_MAP(fmin=optimize.fmin_powell, disp=True)
## two-step sampling to allow Metropolis for nu (which is discrete)
step1 = pm.NUTS([b0, b1])
step2 = pm.Metropolis([nu])
## take samples
traces_studentt = pm.sample(2000, start=start_MAP, step=[step1, step2], progressbar=True)
Explanation: Sample
End of explanation
_ = pm.traceplot(traces_studentt[-1000:]
,figsize=(12,len(traces_studentt.varnames)*1.5)
,lines={k: v['mean'] for k, v in pm.df_summary(traces_studentt[-1000:]).iterrows()})
Explanation: View Traces
End of explanation
def logp_signoise(yobs, is_outlier, yest_in, sigma_y_in, yest_out, sigma_y_out):
'''
Define custom loglikelihood for inliers vs outliers.
NOTE: in this particular case we don't need to use theano's @as_op
decorator because (as stated by Twiecki in conversation) that's only
required if the likelihood cannot be expressed as a theano expression.
We also now get the gradient computation for free.
'''
# likelihood for inliers
pdfs_in = T.exp(-(yobs - yest_in + 1e-4)**2 / (2 * sigma_y_in**2))
pdfs_in /= T.sqrt(2 * np.pi * sigma_y_in**2)
logL_in = T.sum(T.log(pdfs_in) * (1 - is_outlier))
# likelihood for outliers
pdfs_out = T.exp(-(yobs - yest_out + 1e-4)**2 / (2 * (sigma_y_in**2 + sigma_y_out**2)))
pdfs_out /= T.sqrt(2 * np.pi * (sigma_y_in**2 + sigma_y_out**2))
logL_out = T.sum(T.log(pdfs_out) * is_outlier)
return logL_in + logL_out
with pm.Model() as mdl_signoise:
## Define weakly informative Normal priors to give Ridge regression
b0 = pm.Normal('b0_intercept', mu=0, sd=100)
b1 = pm.Normal('b1_slope', mu=0, sd=100)
## Define linear model
yest_in = b0 + b1 * dfhoggs['x']
## Define weakly informative priors for the mean and variance of outliers
yest_out = pm.Normal('yest_out', mu=0, sd=100)
sigma_y_out = pm.HalfNormal('sigma_y_out', sd=100)
## Define Bernoulli inlier / outlier flags according to a hyperprior
## fraction of outliers, itself constrained to [0,.5] for symmetry
frac_outliers = pm.Uniform('frac_outliers', lower=0., upper=.5)
is_outlier = pm.Bernoulli('is_outlier', p=frac_outliers, shape=dfhoggs.shape[0])
## Extract observed y and sigma_y from dataset, encode as theano objects
yobs = thno.shared(np.asarray(dfhoggs['y'], dtype=thno.config.floatX), name='yobs')
sigma_y_in = thno.shared(np.asarray(dfhoggs['sigma_y']
, dtype=thno.config.floatX), name='sigma_y_in')
## Use custom likelihood using DensityDist
likelihood = pm.DensityDist('likelihood', logp_signoise,
observed={'yobs':yobs, 'is_outlier':is_outlier,
'yest_in':yest_in, 'sigma_y_in':sigma_y_in,
'yest_out':yest_out, 'sigma_y_out':sigma_y_out})
Explanation: Observe:
Both parameters b0 and b1 show quite a skew to the right, possibly this is the action of a few samples regressing closer to the OLS estimate which is towards the left
The nu parameter seems very happy to stick at nu = 1, indicating that a fat-tailed Student-T likelihood has a better fit than a thin-tailed (Normal-like) Student-T likelihood.
The inference sampling also ran very quickly, almost as quickly as the conventional OLS
NOTE: We'll illustrate this Student-T fit and compare to the datapoints in the final plot
Create Robust Model with Outliers: Hogg Method
Please read the paper (Hogg 2010) and Jake Vanderplas' code for more complete information about the modelling technique.
The general idea is to create a 'mixture' model whereby datapoints can be described by either the linear model (inliers) or a modified linear model with different mean and larger variance (outliers).
The likelihood is evaluated over a mixture of two likelihoods, one for 'inliers', one for 'outliers'. A Bernoulli distribution is used to randomly assign datapoints in N to either the inlier or outlier groups, and we sample the model as usual to infer robust model parameters and inlier / outlier flags:
$$
\mathcal{logL} = \sum_{i}^{i=N} log \left[ \frac{(1 - B_{i})}{\sqrt{2 \pi \sigma_{in}^{2}}} exp \left( - \frac{(x_{i} - \mu_{in})^{2}}{2\sigma_{in}^{2}} \right) \right] + \sum_{i}^{i=N} log \left[ \frac{B_{i}}{\sqrt{2 \pi (\sigma_{in}^{2} + \sigma_{out}^{2})}} exp \left( - \frac{(x_{i}- \mu_{out})^{2}}{2(\sigma_{in}^{2} + \sigma_{out}^{2})} \right) \right]
$$
where:
$\bf{B}$ is Bernoulli-distributed $B_{i} \in [0_{(inlier)},1_{(outlier)}]$
Define model
End of explanation
with mdl_signoise:
## two-step sampling to create Bernoulli inlier/outlier flags
step1 = pm.NUTS([frac_outliers, yest_out, sigma_y_out, b0, b1])
step2 = pm.BinaryMetropolis([is_outlier], tune_interval=100)
## find MAP using Powell, seems to be more robust
start_MAP = pm.find_MAP(fmin=optimize.fmin_powell, disp=True)
## take samples
traces_signoise = pm.sample(2000, start=start_MAP, step=[step1,step2], progressbar=True)
Explanation: Sample
End of explanation
_ = pm.traceplot(traces_signoise[-1000:], figsize=(12,len(traces_signoise.varnames)*1.5),
lines={k: v['mean'] for k, v in pm.df_summary(traces_signoise[-1000:]).iterrows()})
Explanation: View Traces
End of explanation
outlier_melt = pd.melt(pd.DataFrame(traces_signoise['is_outlier', -1000:],
columns=['[{}]'.format(int(d)) for d in dfhoggs.index]),
var_name='datapoint_id', value_name='is_outlier')
ax0 = sns.pointplot(y='datapoint_id', x='is_outlier', data=outlier_melt,
kind='point', join=False, ci=None, size=4, aspect=2)
_ = ax0.vlines([0,1], 0, 19, ['b','r'], '--')
_ = ax0.set_xlim((-0.1,1.1))
_ = ax0.set_xticks(np.arange(0, 1.1, 0.1))
_ = ax0.set_xticklabels(['{:.0%}'.format(t) for t in np.arange(0,1.1,0.1)])
_ = ax0.yaxis.grid(True, linestyle='-', which='major', color='w', alpha=0.4)
_ = ax0.set_title('Prop. of the trace where datapoint is an outlier')
_ = ax0.set_xlabel('Prop. of the trace where is_outlier == 1')
Explanation: NOTE:
During development I've found that 3 datapoints id=[1,2,3] are always indicated as outliers, but the remaining ordering of datapoints by decreasing outlier-hood is unstable between runs: the posterior surface appears to have a small number of solutions with very similar probability.
The NUTS sampler seems to work okay, and indeed it's a nice opportunity to demonstrate a custom likelihood which is possible to express as a theano function (thus allowing a gradient-based sampler like NUTS). However, with a more complicated dataset, I would spend time understanding this instability and potentially prefer using more samples under Metropolis-Hastings.
Declare Outliers and Compare Plots
View ranges for inliers / outlier predictions
At each step of the traces, each datapoint may be either an inlier or outlier. We hope that the datapoints spend an unequal time being one state or the other, so let's take a look at the simple count of states for each of the 20 datapoints.
End of explanation
cutoff = 5
dfhoggs['outlier'] = np.percentile(traces_signoise[-1000:]['is_outlier'],cutoff, axis=0)
dfhoggs['outlier'].value_counts()
Explanation: Observe:
The plot above shows the number of samples in the traces in which each datapoint is marked as an outlier, expressed as a percentage.
In particular, 3 points [1, 2, 3] spend >=95% of their time as outliers
Contrastingly, points at the other end of the plot close to 0% are our strongest inliers.
For comparison, the mean posterior value of frac_outliers is ~0.35, corresponding to roughly 7 of the 20 datapoints. You can see these 7 datapoints in the plot above, all those with a value >50% or thereabouts.
However, only 3 of these points are outliers >=95% of the time.
See note above regarding instability between runs.
The 95% cutoff we choose is subjective and arbitrary, but I prefer it for now, so let's declare these 3 to be outliers and see how it looks compared to Jake Vanderplas' outliers, which were declared in a slightly different way as points with means above 0.68.
Declare outliers
Note:
+ I will declare outliers to be datapoints that have value == 1 at the 5-percentile cutoff, i.e. in the percentiles from 5 up to 100, their values are 1.
+ Try for yourself altering cutoff to larger values, which leads to an objective ranking of outlier-hood.
End of explanation
g = sns.FacetGrid(dfhoggs, size=8, hue='outlier', hue_order=[True,False],
palette='Set1', legend_out=False)
lm = lambda x, samp: samp['b0_intercept'] + samp['b1_slope'] * x
pm.glm.plot_posterior_predictive(traces_ols[-1000:],
eval=np.linspace(-3, 3, 10), lm=lm, samples=200, color='#22CC00', alpha=.2)
pm.glm.plot_posterior_predictive(traces_studentt[-1000:], lm=lm,
eval=np.linspace(-3, 3, 10), samples=200, color='#FFA500', alpha=.5)
pm.glm.plot_posterior_predictive(traces_signoise[-1000:], lm=lm,
eval=np.linspace(-3, 3, 10), samples=200, color='#357EC7', alpha=.3)
_ = g.map(plt.errorbar, 'x', 'y', 'sigma_y', 'sigma_x', marker="o", ls='').add_legend()
_ = g.axes[0][0].annotate('OLS Fit: Green\nStudent-T Fit: Orange\nSignal Vs Noise Fit: Blue',
size='x-large', xy=(1,0), xycoords='axes fraction',
xytext=(-160,10), textcoords='offset points')
_ = g.axes[0][0].set_ylim(ylims)
_ = g.axes[0][0].set_xlim(xlims)
Explanation: Posterior Prediction Plots for OLS vs StudentT vs SignalNoise
End of explanation |
14,373 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Load an AIA image
Step1: I then go to JPL Horizons (https
Step2: This is assuming the target is at 1 ly away (very far!) | Python Code:
aiamap=sunpy.map.Map('/Users/kkozarev/sunpy/data/sample_data/AIA20110319_105400_0171.fits')
Explanation: Load an AIA image
End of explanation
sunc_1au=SkyCoord(ra='23h53m53.47',dec='-00d39m44.3s', distance=1.*u.au,frame='icrs').transform_to(aiamap.coordinate_frame)
Explanation: I then go to JPL Horizons (https://ssd.jpl.nasa.gov/horizons.cgi) and find out RA and DEC of the solar disk center. I use a geocentric observer because I do not know exactly where SDO is located at that time.
The following is assuming the target is at 1 AU (where the Sun is supposed to be)
End of explanation
sunc_1ly=SkyCoord(ra='23h53m53.47',dec='-00d39m44.3s',
distance=1.*u.lightyear,frame='icrs').transform_to(aiamap.coordinate_frame)
fig = plt.figure(figsize=(8,8))
ax = plt.subplot(projection=aiamap)
aiamap.plot(axes=ax)
aiamap.draw_grid(axes=ax)
aiamap.draw_limb(axes=ax)
ax.plot_coord(sunc_1au, '+w', ms=10, label='Sun Center 1 AU')
ax.plot_coord(sunc_1ly, '*r', ms=10, label='Sun Center 1 LY')
#plt.show()
sunc_1au
sunc_1ly
Explanation: This is assuming the target is at 1 ly away (very far!)
End of explanation |
14,374 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
APS - new snow
Imports
Step1: Parameters, categories and scores
Main control factors
Step2: Weighting
Weights are added if they are independent of the value of the core factor or multiplied if they are related to the core factor.
Step3: The new_snow_24_72h_scores are used to weight the new_snow_24h_scores prior to multiplying it with wind_speed_score.
In order to achieve a smooth fit within the range of interest I added some control points just right outside the normal range for the higher order polynomials.
The temperature evolution during a snowfall can be fitted to a curve which can then be compared to predefined curves/scenarios. The scenario with the best correlation is chosen to define the category.
The type_new_snow_cat can be inferred from evolution_temperature and wind_speed.
In the first place the categories can be set manually.
Score functions
New snow 24 h
Step4: New snow 24-72 h
Step5: Wind speed
Step6: New snow vs. wind speed
Step7: ToDo
retrieve relevant data from a weather station / grid point (e.g. Hallingdal/Hemsedal Holtø in mid-late January)
calculate new_snow_score for some weeks
compare to chosen AP in regional forecast
maybe extent to a larger grid
...continue with hemsedal_jan2016.py in Test
Random scripting testing
Step8: the dict is not sorted and the comparison less than is random... | Python Code:
# -*- coding: utf-8 -*-
%matplotlib inline
from __future__ import print_function
import pylab as plt
import datetime
import numpy as np
plt.rcParams['figure.figsize'] = (14, 6)
Explanation: APS - new snow
Imports
End of explanation
# New snow amount last 24 h 0-60 cm [10 cm intervals]
new_snow_24h_cat = np.array([0, 10, 20, 30, 40, 50, 60])
new_snow_24h_score = np.array([0.1, 0.5, 1.1, 1.3, 1.4, 1.8, 2.0])
# Wind speed 0-100 km/h [0,10,20,30,40,50,60,80,100]
wind_speed_km_cat = np.array([-5, 0, 10, 20, 30, 40, 50, 60, 80, 100, 150])
wind_speed_cat = wind_speed_km_cat / 3.6 # m/s
wind_speed_score = np.array([0.0, 1.8, 2.8, 3.3, 2.6, 1.2, 0.6, 0.3, 0.15, 0.07, 0.0])
Explanation: Parameters, categories and scores
Main control factors
End of explanation
# New snow amount last 24-72h 0-100 cm [0,10,20,30,40,50,60,80,100]
new_snow_24_72h_cat = np.array([0, 10, 20, 30, 40, 50, 60, 80, 100])
new_snow_24_72h_score = np.array([0.8, 1.0, 1.1, 1.2, 1.4, 1.6, 1.8, 2.1, 2.5]) # a weight for new_snow_24h
# Evolution of temperature
evolution_temperature_cat = ["constant very cold", "constant cold", "constant warm", "rise towards 0 deg after snowfall", "substantial cooling after snowfall"]
# Bonding to existing snowpack
bonding_existing_snowpack_cat = ["favorable", "moderate", "poor"]
# Type of new snow
type_new_snow_cat = ["loose-powder", "soft", "packed", "packed and moist"]
Explanation: Weighting
Weights are added if they are independent of the value of the core factor or multiplied if they are related to the core factor.
End of explanation
new_snow_24h_fit = np.polyfit(new_snow_24h_cat, new_snow_24h_score, 2)
score_new_snow_24h = np.poly1d(new_snow_24h_fit)
x = np.arange(0, 60.0)
res = score_new_snow_24h(x)
plt.scatter(new_snow_24h_cat, new_snow_24h_score)
plt.plot(x, res)
Explanation: The new_snow_24_72h_scores are used to weight the new_snow_24h_scores prior to multiplying it with wind_speed_score.
In order to achieve a smooth fit within the range of interest I added some control points just right outside the normal range for the higher order polynomials.
The temperature evolution during a snowfall can be fitted to a curve which can then be compared to predefined curves/scenarios. The scenario with the best correlation is chosen to define the category.
The type_new_snow_cat can be inferred from evolution_temperature and wind_speed.
In the first place the categories can be set manually.
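A minimal sketch (not implemented in this notebook) of how such a scenario match could look, using a hypothetical temperature series and numpy's correlation coefficient:
```python
# Hypothetical hourly temperatures during a snowfall
temp_observed = np.array([-8, -7, -6, -4, -3, -2, -1, -1])

# Illustrative scenario curves of the same length (values are made up)
scenarios = {
    "rise towards 0 deg after snowfall": np.linspace(-8, -1, 8),
    "substantial cooling after snowfall": np.linspace(-2, -9, 8),
    "constant cold": np.array([-7.0, -7.3, -6.9, -7.1, -7.2, -6.8, -7.0, -7.1]),
}

# Choose the scenario whose curve correlates best with the observed evolution
best = max(scenarios, key=lambda k: np.corrcoef(temp_observed, scenarios[k])[0, 1])
print(best)
```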
Score functions
New snow 24 h
End of explanation
new_snow_24_72h_fit = np.polyfit(new_snow_24_72h_cat, new_snow_24_72h_score, 1)
score_new_snow_24_72h = np.poly1d(new_snow_24_72h_fit)
x = np.arange(0, 100.0)
res = score_new_snow_24_72h(x)
plt.scatter(new_snow_24_72h_cat, new_snow_24_72h_score)
plt.plot(x, res)
Explanation: New snow 24-72 h
End of explanation
wind_speed_fit = np.polyfit(wind_speed_cat, wind_speed_score, 5)
score_wind_speed = np.poly1d(wind_speed_fit)
x = np.arange(-5, 150.0 / 3.6)
res = score_wind_speed(x)
plt.scatter(wind_speed_cat, wind_speed_score)
plt.plot(x, res)
Explanation: Wind speed
End of explanation
new_snow = np.matrix(np.arange(0, 60.0))
sns = score_new_snow_24h(new_snow)
# weighted by new snow amount of the previous two days
new_snow_72 = 40
ns_weight = score_new_snow_24_72h(new_snow_72)
sns *= ns_weight
wind_speed = np.matrix(np.arange(0, 100.0 / 3.6))
swp = score_wind_speed(wind_speed)
M = np.multiply(sns, swp.T)
#print(M)
plt.contourf(M)#np.flipud(M.T))
print("Min {0}; Max {1}".format(np.amin(M), np.amax(M)))
plt.colorbar()
plt.xlabel("New snow last 24h [cm]")
plt.ylabel("Wind speed [m/s]")
Explanation: New snow vs. wind speed
End of explanation
new_snow_cat = ["0-5", "5-10", "10-15", "15-20"]
new_snow_thres = {(0, 5): 0.2, (5, 10): 0.5, (10, 15): 1, (15, 20): 3}
wind_cat = ["0-3", "4-7", "8-10", "10-15", "16-30"]
wind_thres = {(0, 3): 0.2, (3, 7): 1, (7, 10): 2, (10, 15): 0.2, (15, 30): 0.01}
new_snow_region = np.array([[0, 4, 6, 18],
[0, 4, 6, 18],
[0, 4, 6, 18]])
wind_region = np.array([[0, 4, 12, 18],
[4, 0, 18, 6],
[18, 12, 6, 0]])
def get_score(a, score_dict):
for key, value in score_dict.items():
if key[0] <= a < key[1]:
# if a < key:
return value
break
return None
Explanation: ToDo
retrieve relevant data from a weather station / grid point (e.g. Hallingdal/Hemsedal Holtø in mid-late January)
calculate new_snow_score for some weeks
compare to chosen AP in regional forecast
maybe extend to a larger grid
...continue with hemsedal_jan2016.py in Test
Random scripting testing
End of explanation
new_snow_region_score = [get_score(a, new_snow_thres) for a in new_snow_region.flatten()]
new_snow_region_score = np.array(new_snow_region_score).reshape(new_snow_region.shape)
print(new_snow_region_score)
wind_region_score = [get_score(a, wind_thres) for a in wind_region.flatten()]
wind_region_score = np.array(wind_region_score).reshape(wind_region.shape)
print(wind_region_score)
print(wind_region_score * new_snow_region_score)
X = np.matrix(np.arange(0, 11.0))
Y = np.matrix(np.arange(10.0, 21.0))
Z = np.multiply(X, Y.T)
print(X)
print(Y.T)
print(Z)
plt.imshow(Z)
print("Min {0}; Max {1}".format(np.amin(Z), np.amax(Z)))
plt.colorbar()
Explanation: note that the dict is not sorted, so the order in which the less-than comparisons are made over its keys is effectively arbitrary...
End of explanation |
14,375 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This notebook provides a couple of examples of how to convert long $\LaTeX$ expressions into sympy format, via Mathematica.
EMRI terms
The first step is to select the equation you want from the original source (possibly obtained from the "Other Formats" link on the paper's arXiv page), and put it in its own file. Here, we have an example named EMRIGWFlux_7PN.tex taken from Fujita (2012). It is best to copy this exactly, without making any changes
The next step is to run this through perl, and let perl make any necessary replacements. Mathematica won't want any \begin{equation} statements, so we remove them first. Next, we probably want to remove the left-hand side of the equation, which is just the variable name that this expression represents. We correct any mistakes in the original (the use of {\rm ln} instead of \ln, for example), and remove anything else Mathematica won't understand.
Step2: Next, we just need to run this through Mathematica, which has a good TeX conversion method. Of course, Mathematica's output is weird — all the function names are capitalized; function arguments come in square brackets; etc. So we just replace them in the output. Then, we go through and print the terms at each order, ready to be copied to another notebook. (Or we could just use the sympy object Flux below.)
NOTE
Step3: Just for fun, let's look at the actual expression
Step6: PN angular momentum
Step8: Binding energy | Python Code:
%%bash
perl -nlw \
-e 's/\\begin\{eqnarray\*\}//g; s/\\end\{eqnarray\*\}//g; ' `# remove environment for Mathematica` \
-e 's/\{dE\\over dt\}=&&\\left\(\{dE\\over dt\}\\right\)_N//;' `# remove definition statement` \
-e 's/\{\\rm ln\}/\\ln/g;' `# Correct bad notation for logarithm` \
-e 's/\\cr//g; s/\\displaystyle//g;' `# These shouldn't have been in there to begin with` \
-e 's/(\\ln\(.\))(\^\{.\})/($1)$2/g;' `# group logarithm powers correctly` \
-e 's/\\\{/(/g; s/\\\}/)/g; s/\[/(/g; s/\]/)/g;' `# convert braces and brackets to parentheses` \
-e 's/\),/\)/;' `# remove final punctuation` \
-e 'print if /\S/;' `# only print line if nonempty` \
EMRIGWFlux_7PN.tex > EMRIGWFlux_7PN_Simplified.tex
#cat EMRIGWFlux_7PN_Simplified.tex
Explanation: This notebook provides a couple of examples of how to convert long $\LaTeX$ expressions into sympy format, via Mathematica.
EMRI terms
The first step is to select the equation you want from the original source (possibly obtained from the "Other Formats" link on the paper's arXiv page), and put it in its own file. Here, we have an example named EMRIGWFlux_7PN.tex taken from Fujita (2012). It is best to copy this exactly, without making any changes
The next step is to run this through perl, and let perl make any necessary replacements. Mathematica won't want any \begin{equation} statements, so we remove them first. Next, we probably want to remove the left-hand side of the equation, which is just the variable name that this expression represents. We correct any mistakes in the original (the use of {\rm ln} instead of \ln, for example), and remove anything else Mathematica won't understand.
End of explanation
MathKernel='/Applications/Local/Mathematica.app/Contents/MacOS/MathKernel'
FluxCommand = r"""
\[Gamma] = EulerGamma;
\[Zeta] = Zeta;
HornerForm[ToExpression[Import[
"EMRIGWFlux_7PN_Simplified.tex",
"Text"], TeXForm]] >> /tmp/Flux.cpp
Exit[];
"""
! {MathKernel} -run '{FluxCommand}' >& /dev/null
Flux = !cat /tmp/Flux.cpp
Flux = ''.join(Flux).replace(' ','').replace('Pi','pi').replace('Log','log').replace('Zeta','zeta').replace('Power','Pow')
Flux = Flux.replace('[','(').replace(']',')').replace('^','**')
Flux = sympify(Flux)
logv = symbols('logv')
FluxDictionary = Poly(Flux.subs('log(v)', logv), Flux.atoms(Symbol).pop()).as_dict()
for key in sorted(FluxDictionary) :
if(key[0]>7) :
print("FluxTerms['IncompleteNonspinning'][{0}] = {1}".format(key[0], FluxDictionary[key].subs(logv, log(v))))
Explanation: Next, we just need to run this through Mathematica, which has a good TeX conversion method. Of course, Mathematica's output is weird — all the function names are capitalized; function arguments come in square brackets; etc. So we just replace them in the output. Then, we go through and print the terms at each order, ready to be copied to another notebook. (Or we could just use the sympy object Flux below.)
NOTE: You will need to adjust the MathKernel path below. On Linux, you will need the executable named math instead.
End of explanation
Flux
Explanation: Just for fun, let's look at the actual expression:
End of explanation
from sympy import *
ellHat= Symbol('ellHat')
nHat= Symbol('nHat')
lambdaHat= Symbol('lambdaHat')
var('v, m, nu, G, c, x');
def MathematicaToSympy(L):
import re
L = ''.join(L).replace(' ','')
L = L.replace(r'\[ScriptL]','ell')
MathematicaCapitalGreek = re.compile(r'\\ \[ Capital(.*?) \]', re.VERBOSE)
L = MathematicaCapitalGreek.sub(r'\1',L)
MathematicaGreek = re.compile(r'\\ \[ (.*?) \]', re.VERBOSE)
L = MathematicaGreek.sub(lambda m: m.group(1).lower(),L)
OverHat = re.compile(r'OverHat\[ (.*?) \]', re.VERBOSE)
L = OverHat.sub(r'\1Hat',L)
Subscript = re.compile(r'Subscript\[ (.*?), (.*?) \]', re.VERBOSE)
L = Subscript.sub(r'\1_\2',L)
Sqrt = re.compile(r'Sqrt\[ (.*?) \]', re.VERBOSE)
L = Sqrt.sub(r'sqrt(\1)',L)
L = L.replace('Pi','pi').replace('Log','log').replace('Zeta','zeta').replace('Power','Pow')
L = L.replace('^','**')
return L
MathKernel='/Applications/Local/Mathematica.app/Contents/MacOS/MathKernel'
MCommand = r"""
\[Gamma] = EulerGamma;
\[Zeta] = Zeta;
HornerForm[ToExpression[Import[
"AngularMomentum.tex",
"Text"], TeXForm],x] >> /tmp/AngularMomentum.cpp
Exit[];
"""
! {MathKernel} -run '{MCommand}' >& /dev/null
L = !cat /tmp/AngularMomentum.cpp
L = MathematicaToSympy(L)
L = sympify(L).subs('sqrt(x)',v).subs('x',v**2).subs('G',1).subs('c',1).simplify()
L_ellHat = horner( (v*L).simplify().expand().coeff(ellHat) )/v
L_nHat = horner( (v*L).simplify().expand().coeff(nHat) )/v
L_lambdaHat = horner( (v*L).simplify().expand().coeff(lambdaHat) )/v
L_set = [L_ellHat, L_nHat, L_lambdaHat]
for n in range(3,9,2):
    print("""AngularMomentum_Spin.AddDerivedVariable('L_SO_{0}',
    ({1})*ellHat
    + ({2})*nHat
    + ({3})*lambdaHat,
    datatype=ellHat.datatype)""".format(n,
        (v*L_ellHat/(m**2*nu)).expand().coeff(v**n).simplify(),
        (v*L_nHat/(m**2*nu)).expand().coeff(v**n).simplify(),
        (v*L_lambdaHat/(m**2*nu)).expand().coeff(v**n).simplify()))
for var in [L_ellHat, L_nHat, L_lambdaHat]:
print(ccode(N(var, 16)))
Explanation: PN angular momentum
End of explanation
%%bash --out tmp
perl -nlw \
-e 's/\\begin\{eqnarray\}//g; s/\\end\{eqnarray\}//g; ' `# remove environment for Mathematica` \
-e 's/\\label\{eB6PN\}//;' `# remove equation label` \
-e 's/\\nonumber//g; s/\\biggl//g; s/\\Biggl//g; s/\\left//g; s/\\right//g;' `# remove irrelevant formatting` \
-e 's/\&//g; s/\\\\//g;' `# remove alignments and newlines` \
-e 's/\\\{/(/g; s/\\\}/)/g; s/\[/(/g; s/\]/)/g;' `# convert braces and brackets to parentheses` \
-e 's/\),/\)/;' `# remove final punctuation` \
-e 'print if /\S/;' `# only print line if nonempty` \
BindingEnergy.tex > BindingEnergy_Simplified.tex
cat BindingEnergy_Simplified.tex
tmp
from __future__ import print_function
Terms = ['E_2', 'E_4', 'E_6', 'E_8', 'E_lnv_8', 'E_10', 'E_lnv_10', 'E_11', 'E_12', 'E_lnv_12']
Expressions = [s[(s.index('=')+1):].replace('\n','') for s in tmp.split('\ne')]
for t,e in zip(Terms,Expressions):
print(e, file=open('/tmp/'+t+'.tex','w+'))
import re
MathKernel='/Applications/Local/Mathematica.app/Contents/MacOS/MathKernel'
BindingEnergyCommand = r"""
\[Gamma] = EulerGamma;
HornerForm[ToExpression[Import[
"/tmp/{0}.tex",
"Text"], TeXForm]] >> /tmp/{0}.cpp
Exit[];
"""
for t in Terms:
! {MathKernel} -run '{BindingEnergyCommand.format(t)}' >& /dev/null
BindingEnergy = !cat /tmp/{t}.cpp
BindingEnergy = ''.join(BindingEnergy).replace(' ','').replace('Pi','pi').replace('Log','log').replace('Nu','nu')
BindingEnergy = BindingEnergy.replace('[','(').replace(']',')').replace('Power','Pow').replace('^','**').replace('\\','')
BindingEnergy = re.sub(r'Subscript\((.*?),([0-9])[.5]*\)\*\*(c|ln)', r'\1_\2__\g<3>1', BindingEnergy)
BindingEnergy = sympify(BindingEnergy)
logv = symbols('logv')
BindingEnergy = BindingEnergy.subs('log(v)', logv)
print("BindingEnergy_NoSpin_B.AddDerivedConstant('{0}', {1})".format(t, BindingEnergy))
print()
import re
from sympy import sympify, symbols
for i in [1]:
BindingEnergy = !cat /tmp/E_lnv_12.cpp
print(BindingEnergy)
BindingEnergy = ''.join(BindingEnergy).replace(' ','').replace('Pi','pi').replace('Log','log').replace('Nu','nu')
print(BindingEnergy)
BindingEnergy = BindingEnergy.replace('[','(').replace(']',')').replace('Power','Pow').replace('^','**').replace('\\','')
print(BindingEnergy)
BindingEnergy = re.sub(r'Subscript\((.*?),([0-9])[.5]*\)\*\*(c|ln)', r'\1_\2__\g<3>1', BindingEnergy)
print(BindingEnergy)
BindingEnergy = sympify(BindingEnergy)
print(BindingEnergy)
logv = symbols('logv')
BindingEnergy = BindingEnergy.subs('log(v)', logv)
print("BindingEnergy_NoSpin_B.AddDerivedConstant('E_lnv_12', {0})".format(BindingEnergy))
print()
nu*(nu*(11*a_6__ln1/2 + nu*(-11*a_6__ln1/2 + 616/27) + 56386/105) - 1967284/8505)
11*self.a_7__ln1/3 + nu*(nu*(11*self.a_6__ln1/2 + nu*(-11*self.a_6__ln1/2 + 616/27 + 56386/105) - 1967284/8505))
Explanation: Binding energy
End of explanation |
14,376 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lesson 10
Python Basic, Lesson 4, v1.0.1, 2016.12 by David.Yi
Python Basic, Lesson 4, v1.0.2, 2017.03 modified by Yimeng.Zhang
v1.1, 2020.4.5, edited by David Yi
Key points of this lesson
Different forms of function parameters
Anonymous functions
Think about it: an exercise on variable-length function parameters
Different forms of function parameters
Positional parameters and default parameters
Positional parameters: they must be passed accurately in order; if the number or order is wrong, the program may misbehave. When calling a function, if the parameter names are written out, the position no longer matters.
Default parameters: the parameter declaration is followed by an assignment giving the default value; if no value is supplied when the function is called, that default takes effect.
Note: all required parameters must come before the default parameters.
Benefits of default parameters:
* Reduce program complexity
* Lower the chance of program errors
* Better compatibility
Variable-length parameters - tuples and dictionaries
Variable-length parameters come in two modes, without keywords and with keywords, corresponding to a tuple and a dict respectively.
With keywords, the variable-length parameters form a dict, and key-value pairs must be supplied.
When passing a dictionary as an argument, you can pass a dict variable directly, or spell out the keys and values in the argument list.
Summary of function parameters
Positional parameters
Default parameters
Tuple parameters, one asterisk
Dictionary parameters, two asterisks, passed as keys and values (a signature combining all four kinds is sketched below)
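For example, a single def line can combine all four kinds of parameters described above:
def demo(a, b=2, *args, **kwargs):
    # a: positional, b: default, args: tuple of extra positional values, kwargs: dict of keyword arguments
    print(a, b, args, kwargs)
demo(1, 3, 4, 5, kind='sum')   # prints: 1 3 (4, 5) {'kind': 'sum'}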
Step1: Anonymous functions
Python allows you to create anonymous functions with the lambda keyword.
A lambda anonymous function does not need the def and return keywords of a regular function; because the code is short, it suits simple one-line calculations.
Step2: Usage characteristics of anonymous functions
They fit one-off scenarios where no name is needed, since naming a function is a bit of a chore;
Keep the function logic simple, and before writing a lambda check that Python does not already provide a built-in with the same behaviour; in real development, do not reinvent the wheel (a typical one-off use is shown below);
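A common one-off use of a lambda is as the key function for sorting:
pairs = [(2, 'b'), (1, 'c'), (3, 'a')]
print(sorted(pairs, key=lambda p: p[1]))   # sorted by the second element: [(3, 'a'), (2, 'b'), (1, 'c')]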
Think about it
函数可变参数练习,参数 kind 中增加 max 这个key,如果设置为 ignore,则输入的数字中的最大数忽略 | Python Code:
# Function default parameters
def cal_0(money, rate=0.1):
return money + money * rate
print(cal_0(100))
print(cal_0(100,0.2))
print(cal_0(rate=0.3,money=100))
# Function default parameters
def cal_1(money, bonus=1000, month=12,a=1, b=2):
i = money * month + bonus
return i
print(cal_1(5000))
print(cal_1(5000, 2000))
print(cal_1(5000, 2000, 10))
# Draw a triangle, n = height
'''
*
***
*****
*******
*********
***********
*************
'''
# Function default parameters
def draw_triangle(n=5):
for i in range(n+1):
print(' '*(n-i),'*'*(2*i-1))
draw_triangle(3)
draw_triangle(7)
# Variable-length function parameters: tuple
# Compute the average
def cal_2(kind, *numbers):
if kind == 'avg':
n = 0
for i in numbers:
n = n + i
return n / len(numbers)
t = cal_2('avg', 1,2,3,4)
print(t)
# Variable-length function parameters: tuple
# Pass the kind of calculation, then the numbers to calculate
def cal_3(kind, *numbers):
if kind == 'avg':
n = 0
for i in numbers:
n = n + i
return n / len(numbers)
if kind == 'sum':
n = 0
for i in numbers:
n = n + i
return n
print(cal_3('avg', 1,2,3,4))
print(cal_3('sum', 1,2,3,4))
# What if you already have a list or tuple and want to pass it as variable-length arguments?
# Put a * in front of the list or tuple to unpack its elements into variable-length arguments:
print(cal_3('sum',*[1,2,3,4]))
# Variable-length function parameters: tuple
# Simplify the program logic
def cal_4(kind, *numbers):
n = 0
for i in numbers:
n = n + i
if kind == 'avg':
return n / len(numbers)
if kind == 'sum':
return n
print(cal_4('avg', 1, 2, 3, 4))
print(cal_4('sum', 1,2,3,4))
# An elegant way to pass tuple and dict parameters
def cal_5(*numbers, **kw):
# Check whether the 'kind' key is present
if 'kind' in kw:
kind_value = kw.get('kind')
n = 0
for i in numbers:
n = n + i
if kind_value == 'avg':
return n / len(numbers)
if kind_value == 'sum':
return n
print(cal_5(1,2,3,4,kind='avg',max='ignore'))
print(cal_5(1,2,3,4,kind='sum'))
Explanation: Lesson 10
Python Basic, Lesson 4, v1.0.1, 2016.12 by David.Yi
Python Basic, Lesson 4, v1.0.2, 2017.03 modified by Yimeng.Zhang
v1.1, 2020.4.5, edited by David Yi
Key points of this lesson
Different forms of function parameters
Anonymous functions
Think about it: an exercise on variable-length function parameters
Different forms of function parameters
Positional parameters and default parameters
Positional parameters: they must be passed accurately in order; if the number or order is wrong, the program may misbehave. When calling a function, if the parameter names are written out, the position no longer matters.
Default parameters: the parameter declaration is followed by an assignment giving the default value; if no value is supplied when the function is called, that default takes effect.
Note: all required parameters must come before the default parameters.
Benefits of default parameters:
* Reduce program complexity
* Lower the chance of program errors
* Better compatibility
Variable-length parameters - tuples and dictionaries
Variable-length parameters come in two modes, without keywords and with keywords, corresponding to a tuple and a dict respectively.
With keywords, the variable-length parameters form a dict, and key-value pairs must be supplied.
When passing a dictionary as an argument, you can pass a dict variable directly, or spell out the keys and values in the argument list.
Summary of function parameters
Positional parameters
Default parameters
Tuple parameters, one asterisk
Dictionary parameters, two asterisks, passed as keys and values
End of explanation
# Equivalent regular function and anonymous (lambda) function
def add(x, y):
return x + y
a = lambda x, y: x + y
a(1,2)
# A lambda can also have default parameters
a = lambda x, y=2 : x + y
a(3)
a(3, 5)
a = lambda x : x * x +40
print(a(2))
Explanation: Anonymous functions
Python allows you to create anonymous functions with the lambda keyword.
A lambda anonymous function does not need the def and return keywords of a regular function; because the code is short, it suits simple one-line calculations.
End of explanation
# Add a max key to kind;
# if max == 'ignore', the largest value is ignored
def cal_6(*numbers, **kind):
if 'kind' in kind:
kind_value = kind.get('kind')
if 'max' in kind:
if kind.get('max') == 'ignore':
numbers = list(numbers)
numbers.remove(max(numbers))
n = 0
for i in numbers:
n = n + i
if kind_value == 'avg':
return n / len(numbers)
if kind_value == 'sum':
return n
print(cal_6(1,2,3,4, kind='avg', max='ignore',min='ignore'))
print(cal_6(1,2,3,4, kind='avg'))
print(cal_6(1,2,3,4, kind='sum'))
# Add a min key to kind;
# if min == 'double', the smallest value is counted twice
def cal_7(*numbers, **kw):
numbers = list(numbers)
if 'kind' in kw:
kind_value = kw.get('kind')
if 'max' in kw:
if kw.get('max') == 'ignore':
numbers.remove(max(numbers))
if 'min' in kw:
if kw.get('min') == 'double':
numbers.append(min(numbers))
n = 0
for i in numbers:
n = n + i
if kind_value == 'avg':
return n / len(numbers)
if kind_value == 'sum':
return n
print(cal_7(1,2,3,4, kind='avg', max='ignore', min='double'))
print(cal_7(1,2,3,4, kind='avg'))
print(cal_7(1,2,3,4, kind='sum'))
Explanation: Usage characteristics of anonymous functions
They fit one-off scenarios where no name is needed, since naming a function is a bit of a chore;
Keep the function logic simple, and before writing a lambda check that Python does not already provide a built-in with the same behaviour; in real development, do not reinvent the wheel;
Think about it
Exercise on variable-length parameters: add a max key to the kind parameter; if it is set to ignore, the largest of the input numbers is ignored
End of explanation |
14,377 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Example 6
Step1: Setup an identical instance of NPTFit to Example 5
Firstly we initialize an instance of nptfit identical to that used in the previous example.
Step2: Evaluate the Likelihood Manually
After configuring for the scan, the instance of nptfit.NPTF now has an associated function ll. This function was passed to MultiNest in the previous example, but we can also manually evaluate it.
The log likelihood function is called as
Step3: To make the point clearer we can fix $n_1$ and $n_2$ to their best-fit values, and calculate a Test Statistic (TS) array as we vary $\log_{10} \left( A^\mathrm{ps}_\mathrm{iso} \right)$. As shown, the likelihood is maximised approximately where MultiNest told us the best-fit point for this parameter was.
Step4: Next we do the same thing for $n_2$. This time we see that this parameter is much more poorly constrained than the value of the normalisation, as the TS is very flat.
NB
Step5: In general $\theta$ will always be a flattened array of the floated parameters. Poisson parameters always occur first, in the order in which they were added (via add_poiss_model), following by non-Poissonian parameters in the order they were added (via add_non_poiss_model). To be explicit if we have $m$ Poissonian templates and $n$ non-Poissonian templates with breaks $\ell_n$, then | Python Code:
# Import relevant modules
%matplotlib inline
%load_ext autoreload
%autoreload 2
import numpy as np
import healpy as hp
import matplotlib.pyplot as plt
from NPTFit import nptfit # module for performing scan
from NPTFit import create_mask as cm # module for creating the mask
from NPTFit import psf_correction as pc # module for determining the PSF correction
from NPTFit import dnds_analysis # module for analysing the output
from __future__ import print_function
Explanation: Example 6: Manual evaluation of non-Poissonian Likelihood
In this example we show how to manually evaluate the non-Poissonian likelihood. This can be used, for example, to interface nptfit with parameter estimation packages other than MultiNest. We also show how to extract the prior cube.
We will take the exact same analysis as considered in the previous example, and show that the likelihood peaks at exactly the same location for the normalisation of the non-Poissonian template.
NB: This example makes use of the Fermi Data, which needs to already be installed. See Example 1 for details.
End of explanation
n = nptfit.NPTF(tag='non-Poissonian_Example')
fermi_data = np.load('fermi_data/fermidata_counts.npy')
fermi_exposure = np.load('fermi_data/fermidata_exposure.npy')
n.load_data(fermi_data, fermi_exposure)
analysis_mask = cm.make_mask_total(mask_ring = True, inner = 0, outer = 5, ring_b = 90, ring_l = 0)
n.load_mask(analysis_mask)
iso = np.load('fermi_data/template_iso.npy')
n.add_template(iso, 'iso')
n.add_poiss_model('iso','$A_\mathrm{iso}$', False, fixed=True, fixed_norm=1.47)
n.add_non_poiss_model('iso',
['$A^\mathrm{ps}_\mathrm{iso}$','$n_1$','$n_2$','$S_b$'],
[[-6,1],[2.05,30],[-2,1.95]],
[True,False,False],
fixed_params = [[3,22.]])
pc_inst = pc.PSFCorrection(psf_sigma_deg=0.1812)
f_ary = pc_inst.f_ary
df_rho_div_f_ary = pc_inst.df_rho_div_f_ary
n.configure_for_scan(f_ary=f_ary, df_rho_div_f_ary=df_rho_div_f_ary, nexp=1)
Explanation: Setup an identical instance of NPTFit to Example 5
Firstly we initialize an instance of nptfit identical to that used in the previous example.
End of explanation
print('Vary A: ', n.ll([-3.52+0.22,2.56,-0.48]), n.ll([-3.52,2.56,-0.48]), n.ll([-3.52-0.24,2.56,-0.48]))
print('Vary n1:', n.ll([-3.52,2.56+0.67,-0.48]), n.ll([-3.52,2.56,-0.48]), n.ll([-3.52,2.56-0.37,-0.48]))
print('Vary n2:', n.ll([-3.52,2.56,-0.48+1.18]), n.ll([-3.52,2.56,-0.48]), n.ll([-3.52,2.56,-0.48-1.02]))
Explanation: Evaluate the Likelihood Manually
After configuring for the scan, the instance of nptfit.NPTF now has an associated function ll. This function was passed to MultiNest in the previous example, but we can also manually evaluate it.
The log likelihood function is called as: ll(theta), where theta is a flattened array of parameters. In the case above:
$$ \theta = \left[ \log_{10} \left( A^\mathrm{ps}_\mathrm{iso} \right), n_1, n_2 \right] $$
As an example we can evaluate it at a few points around the best fit parameters:
End of explanation
Avals = np.arange(-5.5,0.5,0.01)
TSvals_A = np.array([2*(n.ll([-3.52,2.56,-0.48])-n.ll([Avals[i],2.56,-0.48])) for i in range(len(Avals))])
plt.plot(Avals,TSvals_A,color='black', lw=1.5)
plt.axvline(-3.52+0.22,ls='dashed',color='black')
plt.axvline(-3.52,ls='dashed',color='black')
plt.axvline(-3.52-0.24,ls='dashed',color='black')
plt.axhline(0,ls='dashed',color='black')
plt.xlim([-4.0,-3.0])
plt.ylim([-5.0,15.0])
plt.xlabel('$A^\mathrm{ps}_\mathrm{iso}$')
plt.ylabel('$\mathrm{TS}$')
plt.show()
Explanation: To make the point clearer we can fix $n_1$ and $n_2$ to their best-fit values, and calculate a Test Statistic (TS) array as we vary $\log_{10} \left( A^\mathrm{ps}_\mathrm{iso} \right)$. As shown, the likelihood is maximised approximately where MultiNest told us the best-fit point for this parameter was.
End of explanation
n2vals = np.arange(-1.995,1.945,0.01)
TSvals_n2 = np.array([2*(n.ll([-3.52,2.56,-0.48])-n.ll([-3.52,2.56,n2vals[i]])) for i in range(len(n2vals))])
plt.plot(n2vals,TSvals_n2,color='black', lw=1.5)
plt.axvline(-0.48+1.18,ls='dashed',color='black')
plt.axvline(-0.48,ls='dashed',color='black')
plt.axvline(-0.48-1.02,ls='dashed',color='black')
plt.axhline(0,ls='dashed',color='black')
plt.xlim([-2.0,1.5])
plt.ylim([-5.0,15.0])
plt.xlabel('$n_2$')
plt.ylabel('$\mathrm{TS}$')
plt.show()
Explanation: Next we do the same thing for $n_2$. This time we see that this parameter is much more poorly constrained than the value of the normalisation, as the TS is very flat.
NB: it is important not to evaluate breaks exactly at a value of $n=1$. The reason for this is the analytic form of the likelihood involves $(n-1)^{-1}$.
End of explanation
print(n.prior_cube(cube=[1,1,1],ndim=3))
Explanation: In general $\theta$ will always be a flattened array of the floated parameters. Poisson parameters always occur first, in the order in which they were added (via add_poiss_model), following by non-Poissonian parameters in the order they were added (via add_non_poiss_model). To be explicit if we have $m$ Poissonian templates and $n$ non-Poissonian templates with breaks $\ell_n$, then:
$$ \theta = \left[ A_\mathrm{P}^1, \ldots, A_\mathrm{P}^m, A_\mathrm{NP}^1, n_1^1, \ldots, n_{\ell_1+1}^1, S_b^{(1)~1}, \ldots, S_b^{(\ell_1)~1}, \ldots, A_\mathrm{NP}^n, n_1^n, \ldots, n_{\ell_n+1}^n, S_b^{(1)~n}, \ldots, S_b^{(\ell_n)~n} \right]
$$
Fixed parameters are deleted from the list, and any parameter entered with a log flat prior is replaced by $\log_{10}$ of itself.
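Because ll is an ordinary Python function of this flattened array, it can be handed directly to an external optimizer or sampler. As a small illustration (not part of NPTFit itself), the maximum-likelihood point could be located with scipy, restricting each parameter to the prior ranges set above:
from scipy.optimize import minimize
# bounds follow the priors defined earlier: log10(A), n_1, n_2
res = minimize(lambda theta: -n.ll(list(theta)),
               x0=[-3.5, 2.5, -0.5],
               bounds=[(-6, 1), (2.05, 30), (-2, 1.95)],
               method='L-BFGS-B')
print(res.x)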
Extract the Prior Cube Manually
To extract the prior cube, we use the internal function prior_cube. This requires two arguments: 1. cube, the unit cube of dimension equal to the number of floated parameters; and 2. ndim, the number of floated parameters.
End of explanation |
14,378 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Collate outside the notebook
Python files, input files, output file
Set up a PyCharm project
Create a Python file
Run a script
In PyCharm
In the terminal
Input files
Output file
Exercise
Here is another way to run the scripts you produced in the previous tutorials (note
Step1: Now save the file, as 'collate.py', inside the directory 'Scripts' (see above). If you set up a project in PyCharm then the files should automatically be saved in the correct place.
Run the script
In PyCharm
In PyCharm you can run the script using the button, or run it from the menu.
The result will appear in a window at the bottom of the page.
In the terminal
<img src="images/run-script.png" width="50%" style="border
Step2: Now save the file (just 'save', or 'save as' with another name, as 'collate-darwin-tei.py', if you want to keep both scripts) and then
run the new script (run in PyCharm; or type python collate.py or python collate-darwin-tei.py in the terminal). This may take a bit longer than the fox and dog example.
The result will appear below.
Output file
Looking at the result this way is not very practical, especially if we want to save it. It is better to store the result in a new file, which we will call 'outfile' (but you can give it another name if you prefer). We need to add this chunk of code in order to create and open 'outfile'
Step3: If we are going to produce an output in XML/TEI, we can specify that 'outfile' will be an XML file, and the same goes for any other format. Below there are two examples, the first for an XML output file, the second for a JSON output file
Step4: Now we add the outfile chunk to our code above. The new script is
Step5: When we run the script, the result won't appear below anymore. But a new file, 'outfile-tei.xml' has been created in the directory 'Scripts'. Check what's inside!
If you want to change the location of the output file, you can specify a different path. If, for example, you want your output file in the Desktop, you would write | Python Code:
from collatex import *
collation = Collation()
collation.add_plain_witness( "A", "The quick brown fox jumped over the lazy dog.")
collation.add_plain_witness( "B", "The brown fox jumped over the dog." )
collation.add_plain_witness( "C", "The bad fox jumped over the lazy dog.")
table = collate(collation)
print(table)
Explanation: Collate outside the notebook
Python files, input files, output file
Set up a PyCharm project
Create a Python file
Run a script
In PyCharm
In the terminal
Input files
Output file
Exercise
Here is another way to run the scripts you produced in the previous tutorials (note: even if technically they mean different things, we will use the words code, script and program interchangeably). This tutorial assumes that you have already gone through the tutorials on collating plain texts (1 and 2) and on the different collation outputs. Everything that we will do here is also possible in the Jupyter notebook, and certain sections, such as Input files, are a recap of something already seen in the previous tutorials.
In the Command line tutorial, we have briefly seen how to run a Python program. In the terminal, type
python myfile.py
replacing “myfile.py” with the name of your Python program.
Again on file system hygiene: directory 'Scripts'
In this tutorial, we will create Python programs. Where to save the files that you will create? Remember that we created a directory for this workshop, called 'Workshop'. Now let's create a sub-directory, called 'Scripts', to store all our Python programs.
Set up a PyCharm project
If you are using PyCharm for these exercises it is worth setting up a project that will automatically save the files you create to the 'Scripts' directory you just created (see above). To do this open PyCharm and from the File menu select New Project. In the dialogue box that appears, navigate to the 'Scripts' directory you made for this workshop by clicking the button with '...' on it, on the right of the location box. Then click Create. This will create a new project that will save all of the files to the folder you have selected.
Create a Python file
Let's do this step by step. First of all, create a Python file.
Open PyCharm, if you downloaded it before, or another text editor: Notepad++ for Windows or TextWrangler for Mac OS X.
Create a new file and copy and paste the code we used before:
End of explanation
from collatex import *
collation = Collation()
witness_1859 = open( "../fixtures/Darwin/txt/darwin1859_par1.txt", encoding='utf-8' ).read()
witness_1860 = open( "../fixtures/Darwin/txt/darwin1860_par1.txt", encoding='utf-8' ).read()
witness_1861 = open( "../fixtures/Darwin/txt/darwin1861_par1.txt", encoding='utf-8' ).read()
witness_1866 = open( "../fixtures/Darwin/txt/darwin1866_par1.txt", encoding='utf-8' ).read()
witness_1869 = open( "../fixtures/Darwin/txt/darwin1869_par1.txt", encoding='utf-8' ).read()
witness_1872 = open( "../fixtures/Darwin/txt/darwin1872_par1.txt", encoding='utf-8' ).read()
collation.add_plain_witness( "1859", witness_1859 )
collation.add_plain_witness( "1860", witness_1860 )
collation.add_plain_witness( "1861", witness_1861 )
collation.add_plain_witness( "1866", witness_1866 )
collation.add_plain_witness( "1869", witness_1869 )
collation.add_plain_witness( "1872", witness_1872 )
table = collate(collation, output='tei')
print(table)
Explanation: Now save the file, as 'collate.py', inside the directory 'Scripts' (see above). If you set up a project in PyCharm then the files should automatically be saved in the correct place.
Run the script
In PyCharm
In PyCharm you can run the script using the button, or run it from the menu.
The result will appear in a window at the bottom of the page.
In the terminal
<img src="images/run-script.png" width="50%" style="border:1px solid black" align="right"/>
Open the terminal and navigate to the folder where your script is, using the 'cd' command (again, refer to the Command line tutorial, if you don't know what this means). Then type
python collate.py
If you are not in the directory where your script is, you should specify the path for that file. If you are in the Home directory, for example, the command would look like
python Workshop/Scripts/collate.py
The result will appear below in the terminal.
Input files
In the first tutorial, we saw how to use texts stored in files as witnesses for the collation. We used the open command to open each text file and assign the contents to a variable with an appropriately chosen name; and don't forget the encoding="utf-8" bit!
Let's try to do the same in our script 'collate.py', using the data in fixtures/Darwin/txt (only the first paragraph: _par1) and producing an output in XML/TEI. The code will look like this:
End of explanation
outfile = open('outfile.txt', 'w', encoding='utf-8')
Explanation: Now save the file (just 'save', or 'save as' with another name, as 'collate-darwin-tei.py', if you want to keep both scripts) and then
run the new script (run in PyCharm; or type python collate.py or python collate-darwin-tei.py in the terminal). This may take a bit longer than the fox and dog example.
The result will appear below.
Output file
Looking at the result this way is not very practical, especially if we want to save it. It is better to store the result in a new file, which we will call 'outfile' (but you can give it another name if you prefer). We need to add this chunk of code in order to create and open 'outfile':
End of explanation
outfile = open('outfile.xml', 'w', encoding='utf-8')
outfile = open('outfile.json', 'w', encoding='utf-8')
Explanation: If we are going to produce an output in XML/TEI, we can specify that 'outfile' will be an XML file, and the same goes for any other format. Below there are two examples, the first for an XML output file, the second for a JSON output file:
End of explanation
from collatex import *
collation = Collation()
witness_1859 = open( "../fixtures/Darwin/txt/darwin1859_par1.txt", encoding='utf-8' ).read()
witness_1860 = open( "../fixtures/Darwin/txt/darwin1860_par1.txt", encoding='utf-8' ).read()
witness_1861 = open( "../fixtures/Darwin/txt/darwin1861_par1.txt", encoding='utf-8' ).read()
witness_1866 = open( "../fixtures/Darwin/txt/darwin1866_par1.txt", encoding='utf-8' ).read()
witness_1869 = open( "../fixtures/Darwin/txt/darwin1869_par1.txt", encoding='utf-8' ).read()
witness_1872 = open( "../fixtures/Darwin/txt/darwin1872_par1.txt", encoding='utf-8' ).read()
outfile = open('outfile-tei.xml', 'w', encoding='utf-8')
collation.add_plain_witness( "1859", witness_1859 )
collation.add_plain_witness( "1860", witness_1860 )
collation.add_plain_witness( "1861", witness_1861 )
collation.add_plain_witness( "1866", witness_1866 )
collation.add_plain_witness( "1869", witness_1869 )
collation.add_plain_witness( "1872", witness_1872 )
table = collate(collation, output='tei')
print(table, file=outfile)
Explanation: Now we add the outfile chunk to our code above. The new script is:
End of explanation
outfile = open('C:/Users/Elena/Desktop/output.xml', 'w', encoding='utf-8')
Explanation: When we run the script, the result won't appear below anymore. But a new file, 'outfile-tei.xml' has been created in the directory 'Scripts'. Check what's inside!
If you want to change the location of the output file, you can specify a different path. If, for example, you want your output file in the Desktop, you would write
End of explanation |
14,379 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Remote Service Control Manager Handle
Metadata
| Metadata | Value |
|
Step1: Download & Process Security Dataset
Step2: Analytic I
Detects non-system users failing to get a handle of the SCM database.
| Data source | Event Provider | Relationship | Event |
|
Step3: Analytic II
Look for non-system accounts performing privileged operations on protected subsystem objects such as the SCM database
| Data source | Event Provider | Relationship | Event |
|
Step4: Analytic III
Look for inbound network connections to services.exe from other endpoints in the network. Same SourceAddress, but different Hostname
| Data source | Event Provider | Relationship | Event |
|
Step5: Analytic IV
Look for several network connection maded by services.exe from different endpoints to the same destination
| Data source | Event Provider | Relationship | Event |
|
Step6: Analytic V
Look for non-system accounts performing privileged operations on protected subsystem objects such as the SCM database from other endpoints in the network
| Data source | Event Provider | Relationship | Event |
| | Python Code:
from openhunt.mordorutils import *
spark = get_spark()
Explanation: Remote Service Control Manager Handle
Metadata
| Metadata | Value |
|:------------------|:---|
| collaborators | ['@Cyb3rWard0g', '@Cyb3rPandaH'] |
| creation date | 2019/08/26 |
| modification date | 2020/09/20 |
| playbook related | [] |
Hypothesis
Adversaries might be attempting to open up a handle to the service control manager (SCM) database on remote endpoints to check for local admin access in my environment.
Technical Context
Often times, when an adversary lands on an endpoint, the current user does not have local administrator privileges over the compromised system.
While some adversaries consider this situation a dead end, others find it very interesting to identify which machines on the network the current user has administrative access to.
One common way to accomplish this is by attempting to open up a handle to the service control manager (SCM) database on remote endpoints in the network with SC_MANAGER_ALL_ACCESS (0xF003F) access rights.
The Service Control Manager (SCM) is a remote procedure call (RPC) server, so that service configuration and service control programs can manipulate services on remote machines.
Only processes with Administrator privileges are able to open a handle to the SCM database.
This database is also known as the ServicesActive database.
Therefore, it is very effective to check if the current user has administrative or local admin access to other endpoints in the network.
Offensive Tradecraft
An adversary can simply use the Win32 API function OpenSCManagerA to attempt to establish a connection to the service control manager (SCM) on the specified computer and open the service control manager database.
If this succeeds (a non-zero handle is returned), the current user context has local administrator access to the remote host.
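As an illustration only (run from a Windows host; the target hostname below is a placeholder), the same check can be reproduced from Python via ctypes:
import ctypes
SC_MANAGER_ALL_ACCESS = 0xF003F
advapi32 = ctypes.windll.advapi32
advapi32.OpenSCManagerW.restype = ctypes.c_void_p
# a non-zero handle comes back only if the current user is a local admin on the target
handle = advapi32.OpenSCManagerW("\\\\TARGET-HOST", None, SC_MANAGER_ALL_ACCESS)
if handle:
    print("SC_MANAGER_ALL_ACCESS handle obtained -> local admin rights on the target")
    advapi32.CloseServiceHandle(ctypes.c_void_p(handle))
else:
    print("Access denied")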
Additional reading
* https://github.com/OTRF/ThreatHunter-Playbook/tree/master/docs/library/windows/service_control_manager.md
Security Datasets
| Metadata | Value |
|:----------|:----------|
| docs | https://securitydatasets.com/notebooks/atomic/windows/07_discovery/SDWIN-190518224039.html |
| link | https://raw.githubusercontent.com/OTRF/Security-Datasets/master/datasets/atomic/windows/discovery/host/empire_find_localadmin_smb_svcctl_OpenSCManager.zip |
Analytics
Initialize Analytics Engine
End of explanation
sd_file = "https://raw.githubusercontent.com/OTRF/Security-Datasets/master/datasets/atomic/windows/discovery/host/empire_find_localadmin_smb_svcctl_OpenSCManager.zip"
registerMordorSQLTable(spark, sd_file, "sdTable")
Explanation: Download & Process Security Dataset
End of explanation
df = spark.sql(
'''
SELECT `@timestamp`, Hostname, SubjectUserName, ProcessName, ObjectName
FROM sdTable
WHERE LOWER(Channel) = "security"
AND EventID = 4656
AND ObjectType = "SC_MANAGER OBJECT"
AND ObjectName = "ServicesActive"
AND AccessMask = "0xf003f"
AND NOT SubjectLogonId = "0x3e4"
'''
)
df.show(10,False)
Explanation: Analytic I
Detects non-system users failing to get a handle of the SCM database.
| Data source | Event Provider | Relationship | Event |
|:------------|:---------------|--------------|-------|
| File | Microsoft-Windows-Security-Auditing | User requested access File | 4656 |
End of explanation
df = spark.sql(
'''
SELECT `@timestamp`, Hostname, SubjectUserName, ProcessName, ObjectName, PrivilegeList, ObjectServer
FROM sdTable
WHERE LOWER(Channel) = "security"
AND EventID = 4674
AND ObjectType = "SC_MANAGER OBJECT"
AND ObjectName = "ServicesActive"
AND PrivilegeList = "SeTakeOwnershipPrivilege"
AND NOT SubjectLogonId = "0x3e4"
'''
)
df.show(10,False)
Explanation: Analytic II
Look for non-system accounts performing privileged operations on protected subsystem objects such as the SCM database
| Data source | Event Provider | Relationship | Event |
|:------------|:---------------|--------------|-------|
| File | Microsoft-Windows-Security-Auditing | User requested access File | 4674 |
End of explanation
df = spark.sql(
'''
SELECT `@timestamp`, Hostname, Application, SourcePort, SourceAddress, DestPort, DestAddress
FROM sdTable
WHERE LOWER(Channel) = "security"
AND EventID = 5156
AND Application LIKE "%\\\services.exe"
AND LayerRTID = 44
'''
)
df.show(10,False)
Explanation: Analytic III
Look for inbound network connections to services.exe from other endpoints in the network. Same SourceAddress, but different Hostname
| Data source | Event Provider | Relationship | Event |
|:------------|:---------------|--------------|-------|
| Process | Microsoft-Windows-Security-Auditing | Process connected to Port | 5156 |
| Process | Microsoft-Windows-Security-Auditing | Process connected to Ip | 5156 |
End of explanation
df = spark.sql(
'''
SELECT `@timestamp`, Hostname, User, SourcePort, SourceIp, DestinationPort, DestinationIp
FROM sdTable
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 3
AND Image LIKE "%\\\services.exe"
'''
)
df.show(10,False)
Explanation: Analytic IV
Look for several network connections made by services.exe from different endpoints to the same destination
| Data source | Event Provider | Relationship | Event |
|:------------|:---------------|--------------|-------|
| Process | Microsoft-Windows-Security-Auditing | Process connected to Port | 3 |
| Process | Microsoft-Windows-Security-Auditing | Process connected to Ip | 3 |
End of explanation
df = spark.sql(
'''
SELECT o.`@timestamp`, o.Hostname, o.SubjectUserName, o.ObjectType,o.ObjectName, o.PrivilegeList, a.IpAddress
FROM sdTable o
INNER JOIN (
SELECT Hostname,TargetUserName,TargetLogonId,IpAddress
FROM sdTable
WHERE LOWER(Channel) = "security"
AND EventID = 4624
AND LogonType = 3
AND NOT TargetUserName LIKE "%$"
) a
ON o.SubjectLogonId = a.TargetLogonId
WHERE LOWER(o.Channel) = "security"
AND o.EventID = 4656
AND NOT o.SubjectLogonId = "0x3e4"
'''
)
df.show(10,False)
Explanation: Analytic V
Look for non-system accounts performing privileged operations on protected subsystem objects such as the SCM database from other endpoints in the network
| Data source | Event Provider | Relationship | Event |
|:------------|:---------------|--------------|-------|
| Authentication log | Microsoft-Windows-Security-Auditing | User authenticated Host | 4624 |
| File | Microsoft-Windows-Security-Auditing | User requested access File | 4656 |
End of explanation |
14,380 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Chapter 16 - Metric-Predicted Variable on One or Two Groups
16.1 - Estimating the mean and standard deviation of a normal distribution
16.2 - Outliers and robust estimation
Step1: Data
Step2: 16.1 - Estimating the mean and standard deviation of a normal distribution
Model (Kruschke, 2015)
Step3: Figure 16.3
Step4: 16.2 - Outliers and robust estimation
Step5: Figure 16.9
Step6: 16.2 - Two Groups
Model (Kruschke, 2015)
Step7: Figure 16.12 | Python Code:
import pandas as pd
import numpy as np
import pymc3 as pm
import matplotlib.pyplot as plt
import seaborn as sns
import warnings
warnings.filterwarnings("ignore", category=FutureWarning)
from scipy.stats import norm, t
from IPython.display import Image
%matplotlib inline
plt.style.use('seaborn-white')
color = '#87ceeb'
%load_ext watermark
%watermark -p pandas,numpy,pymc3,matplotlib,seaborn,scipy
Explanation: Chapter 16 - Metric-Predicted Variable on One or Two Groups
16.1 - Estimating the mean and standard deviation of a normal distribution
16.2 - Outliers and robust estimation: the t distribution
16.3 - Two Groups
End of explanation
df = pd.read_csv('data/TwoGroupIQ.csv', dtype={'Group':'category'})
df.info()
df.head()
# Mean and standard deviation
df.groupby('Group').agg(['mean', 'std'])
fg = sns.FacetGrid(df, col='Group', height=4)
fg.map(sns.distplot, 'Score', kde=False, color='#87ceeb');
# We are only interested in the scores of group 'Smart Drug'
y = df['Score'][df.Group == 'Smart Drug']
Explanation: Data
End of explanation
Image('images/fig16_2.png', width=300)
with pm.Model() as model:
mu = pm.Normal('mu', y.mean(), sd=y.std())
sigma = pm.Uniform('sigma', y.std()/1000, y.std()*1000)
# PyMC's Normal likelihood can take either precision or standard deviation as an argument.
likelihood = pm.Normal('likelihood', mu, sd=sigma, observed=y)
pm.model_to_graphviz(model)
with model:
trace = pm.sample(2000, cores=4, nuts_kwargs={'target_accept': 0.95})
pm.traceplot(trace);
Explanation: 16.1 - Estimating the mean and standard deviation of a normal distribution
Model (Kruschke, 2015)
End of explanation
fig, [(ax1, ax2), (ax3, ax4)] = plt.subplots(2,2, figsize=(10,6))
font_d = {'size':16}
# Upper left
pm.plot_posterior(trace['mu'], point_estimate='mode', ref_val=100, ax=ax1, color=color)
ax1.set_xlabel('$\mu$', fontdict=font_d)
ax1.set_title('Mean', fontdict=font_d)
# Upper right
tr_len = len(trace)
# Plot only 20 posterior prediction curves.
n_curves = 20
# Create an index of length 20 with which we step through the trace.
stepIdxVec = np.arange(0, tr_len, tr_len//n_curves)
x_range = np.arange(y.min(), y.max())
x = np.tile(x_range.reshape(-1,1), (1,20))
ax2.hist(y, bins=25, density=True, color='steelblue')
ax2.plot(x, norm.pdf(x, trace['mu'][stepIdxVec], trace['sigma'][stepIdxVec]), c=color)
ax2.set_xlabel('y', fontdict=font_d)
ax2.set_title('Data w. Post. Pred.\nN=63')
[ax2.spines[spine].set_visible(False) for spine in ['left', 'right', 'top']]
ax2.yaxis.set_visible(False)
# Lower left
pm.plot_posterior(trace['sigma'], point_estimate='mode', ref_val=15, ax=ax3, color=color)
ax3.set_xlabel('$\sigma$', fontdict=font_d)
ax3.set_title('Std. Dev.', fontdict=font_d)
# Lower right
pm.plot_posterior((trace['mu']-100)/trace['sigma'], point_estimate='mode', ref_val=0,
ax=ax4, color=color)
ax4.set_title('Effect Size', fontdict=font_d)
ax4.set_xlabel('$(\mu - 100)/\sigma$', fontdict=font_d)
plt.tight_layout();
Explanation: Figure 16.3
End of explanation
with pm.Model() as model2:
mu = pm.Normal('mu', y.mean(), sd=y.std())
sigma = pm.Uniform('sigma', y.std()/1000, y.std()*1000)
nu_minus1 = pm.Exponential('nu_minus1', 1/29)
nu = pm.Deterministic('nu', nu_minus1+1)
likelihood = pm.StudentT('likelihood', nu, mu, sd=sigma, observed=y)
pm.model_to_graphviz(model2)
with model2:
trace2 = pm.sample(5000, cores=4, nuts_kwargs={'target_accept': 0.95})
pm.traceplot(trace2);
Explanation: 16.2 - Outliers and robust estimation: the t distribution
Model
End of explanation
fig, [(ax1, ax2), (ax3, ax4), (ax5, ax6)] = plt.subplots(3,2, figsize=(10,8))
# Upper left
pm.plot_posterior(trace2['mu'], point_estimate='mode', ref_val=100, ax=ax1, color=color)
ax1.set_xlabel('$\mu$', fontdict=font_d)
ax1.set_title('Mean', fontdict=font_d)
# Upper right
tr_len = len(trace)
n_curves = 20
stepIdxVec = np.arange(0, tr_len, tr_len//n_curves)
x_range = np.arange(y.min(), y.max())
x = np.tile(x_range.reshape(-1,1), (1,20))
ax2.hist(y, bins=25, density=True, color='steelblue')
ax2.plot(x, norm.pdf(x, trace2['mu'][stepIdxVec], trace2['sigma'][stepIdxVec]), c='#87ceeb')
ax2.set_xlabel('y', fontdict=font_d)
ax2.set_title('Data w. Post. Pred.')
[ax2.spines[spine].set_visible(False) for spine in ['left', 'right', 'top']]
ax2.yaxis.set_visible(False)
# Middle left
pm.plot_posterior(trace2['sigma'], point_estimate='mode', ref_val=15, ax=ax3, color=color)
ax3.set_xlabel('$\sigma$', fontdict=font_d)
ax3.set_title('Std. Dev.', fontdict=font_d)
# Middle right
pm.plot_posterior((trace2['mu']-100)/trace2['sigma'], point_estimate='mode', ref_val=0,
ax=ax4, color=color)
ax4.set_title('Effect Size', fontdict=font_d)
ax4.set_xlabel('$(\mu - 100)/\sigma$', fontdict=font_d)
# Lower left
pm.plot_posterior(np.log10(trace2['nu']), point_estimate='mode', ax=ax5, color=color)
ax5.set_title('Normality', fontdict=font_d)
ax5.set_xlabel(r'log10($\nu$)', fontdict=font_d)
plt.tight_layout();
ax6.set_visible(False)
Explanation: Figure 16.9
End of explanation
Image('images/fig16_11.png', width=400)
grp_idx = df.Group.cat.codes.values
grp_codes = df.Group.cat.categories
n_grps = grp_codes.size
with pm.Model() as model3:
mu = pm.Normal('mu', df.Score.mean(), sd=df.Score.std(), shape=n_grps)
sigma = pm.Uniform('sigma', df.Score.std()/1000, df.Score.std()*1000, shape=n_grps)
nu_minus1 = pm.Exponential('nu_minus1', 1/29)
nu = pm.Deterministic('nu', nu_minus1+1)
likelihood = pm.StudentT('likelihood', nu, mu[grp_idx], sd=sigma[grp_idx], observed=df.Score)
pm.model_to_graphviz(model3)
with model3:
trace3 = pm.sample(5000, cores=4, nuts_kwargs={'target_accept': 0.95})
pm.traceplot(trace3);
Explanation: 16.2 - Two Groups
Model (Kruschke, 2015)
End of explanation
tr3_mu0 = trace3['mu'][:,0]
tr3_mu1 = trace3['mu'][:,1]
tr3_sigma0 = trace3['sigma'][:,0]
tr3_sigma1 = trace3['sigma'][:,1]
tr3_nu = np.log10(trace3['nu'])
fig, axes = plt.subplots(5,2, figsize=(12, 12))
# Left column figs
l_trace_vars = (tr3_mu0, tr3_mu1, tr3_sigma0, tr3_sigma1, tr3_nu)
l_axes_idx = np.arange(5)
l_xlabels = ('$\mu_0$', '$\mu_1$', '$\sigma_0$', '$\sigma_1$', r'log10($\nu$)')
l_titles = ('Placebo Mean', 'Smart Drug Mean', 'Placebo Scale', 'Smart Drug Scale', 'Normality')
for var, ax_i, xlabel, title in zip(l_trace_vars, l_axes_idx, l_xlabels, l_titles):
pm.plot_posterior(var, point_estimate='mode', ax=axes[ax_i,0], color=color)
axes[ax_i,0].set_xlabel(xlabel, font_d)
axes[ax_i,0].set_title(title, font_d)
# Right column figs
tr_len = len(trace3)
n_curves = 20
stepIdxVec = np.arange(0, tr_len, tr_len//n_curves)
x_range = np.arange(df.Score.min(), df.Score.max())
x = np.tile(x_range.reshape(-1,1), (1,20))
# 1
axes[0,1].hist(df.Score[df.Group == 'Placebo'], bins=25, density=True, color='steelblue')
axes[0,1].plot(x, t.pdf(x, loc=tr3_mu0[stepIdxVec], scale=tr3_sigma0[stepIdxVec],
df=trace3['nu'][stepIdxVec]), c='#87ceeb')
axes[0,1].set_xlabel('y', font_d)
[axes[0,1].spines[spine].set_visible(False) for spine in ['left', 'right', 'top']]
axes[0,1].yaxis.set_visible(False)
axes[0,1].set_title('Data for Placebo w. Post. Pred.', font_d)
# 2
axes[1,1].hist(df.Score[df.Group == 'Smart Drug'], bins=25, density=True, color='steelblue')
axes[1,1].plot(x, t.pdf(x, loc=tr3_mu1[stepIdxVec], scale=tr3_sigma1[stepIdxVec],
df=trace3['nu'][stepIdxVec]), c='#87ceeb')
axes[1,1].set_xlabel('y', font_d)
[axes[1,1].spines[spine].set_visible(False) for spine in ['left', 'right', 'top']]
axes[1,1].yaxis.set_visible(False)
axes[1,1].set_title('Data for Smart Drug w. Post. Pred.', font_d)
# 3-5
r_vars = (tr3_mu1-tr3_mu0,
tr3_sigma1-tr3_sigma0,
(tr3_mu1-tr3_mu0)/np.sqrt((tr3_sigma0**2+tr3_sigma1**2)/2))
r_axes_idx = np.arange(start=2, stop=5)
r_xlabels = ('$\mu_1 - \mu_0$',
'$\sigma_1 - \sigma_0$',
r'$\frac{(\mu_1-\mu_0)}{\sqrt{(\sigma_0^2+\sigma_1^2)/2}}$')
r_titles = ('Difference of Means',
'Difference of Scales',
'Effect Size')
for var, ax_i, xlabel, title in zip(r_vars, r_axes_idx, r_xlabels, r_titles):
pm.plot_posterior(var, point_estimate='mode', ref_val=0, ax=axes[ax_i,1], color=color)
axes[ax_i,1].set_xlabel(xlabel, font_d)
axes[ax_i,1].set_title(title, font_d)
plt.tight_layout();
Explanation: Figure 16.12
End of explanation |
14,381 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
In Class Exercise
Step1: In Class Exercise
Returning to the programme you just wrote to calculate the integral of $f(x) = \sin^2 [\frac{1}{x(2-x)}]$, please now add an estimate of the accuracy of your integration. Using
$$\sigma_I = \frac{\sqrt{I(A-I)}}{\sqrt{N}} $$
Step2: In Class Exercise
Now calculate the integral of $f(x) = \sin^2 [\frac{1}{x(2-x)}]$ using the mean value method, and compare the accuracy from 10000 sample to what you obtained previously. | Python Code:
import numpy as np
import matplotlib.pyplot as plt
#This just needed for the Notebook to show plots inline.
%matplotlib inline
#Define the function
def f(x):
fx = (np.sin(1/(x*(2-x))))**2
return fx
#Integrate the function from x=0-2
#Note that you need to know the maximum value of the function
#over this range (which is y=1), and therefore the area of the box
#from which we draw random number is A=2.
N=100000
k=0
for i in range(N):
x=2*np.random.random()
y=np.random.random()
if y<f(x):
k+=1
A=2.
I=A*k/N
print("The integral is equal to I = ",I)
Explanation: In Class Exercise: Monte Carlo Integration
Write a programme following the method of Monte Carlo Integration to calculate
$$ I = \int_0^2 \sin^2 [\frac{1}{x(2-x)}] dx. $$
As you will need to calculate $f(x) = \sin^2 [\frac{1}{x(2-x)}]$ many times please write a user defined function for this part of your programme.
End of explanation
#Calculate the error:
sigmaI = np.sqrt(I*(A-I))/np.sqrt(N)
print("The integral is equal to I = ",I)
print("The error on the integral is equal to sigmaI = {0:.4f}".format(sigmaI))
Explanation: In Class Exercise
Returning to the programme you just wrote to calculate the integral of $f(x) = \sin^2 [\frac{1}{x(2-x)}]$, please now add an estimate of the accuracy of your integration. Using
$$\sigma_I = \frac{\sqrt{I(A-I)}}{\sqrt{N}} $$
End of explanation
#Draw N values from the distribution, and calculate their mean to
#use the mean method for integration.
xvalues=[]
fvalues=[]
for i in range(N):
    # sample x uniformly on [0,2] and record f(x) for every sample (mean value method)
    x=2*np.random.random()
    fvalues.append(f(x))
    xvalues.append(x)
fmean=np.mean(fvalues)
Imean = 2*fmean
Imeansigma = 2*np.sqrt(np.var(fvalues))/np.sqrt(len(fvalues))
print("Mean Method: ")
print("The integral is equal to I = {0:.4f}".format(Imean))
print("The error on the integral is equal to sigmaI = {0:.4f}".format(Imeansigma))
print("The percent error is: {0:.2f} ".format(100*Imeansigma/Imean))
print("**********************")
print("Compare to the `hit or miss` Monte Carlo Method: ")
print("The integral is equal to I = {0:.4f}".format(I))
print("The error on the integral is equal to sigmaI = {0:.4f}".format(sigmaI))
print("The percent error is: {0:.2f} ".format(100*sigmaI/I))
#Using mpl_style file.
import mpl_style
plt.style.use(mpl_style.style1)
plt.plot(xvalues,fvalues,'.')
plt.xlabel('x')
plt.ylabel('f(x)')
plt.show()
np.mean?
Explanation: In Class Exercise
Now calculate the integral of $f(x) = \sin^2 [\frac{1}{x(2-x)}]$ using the mean value method, and compare the accuracy from 10000 samples to what you obtained previously.
End of explanation |
14,382 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
This notebook is a very basic and simple introductory primer to the method of ensembling models, in particular the variant of ensembling known as Stacking. In a nutshell, stacking uses the predictions of a few basic machine learning models (classifiers) as a first level (base), and then uses another model at the second level to predict the output from the earlier first-level predictions.
The Titanic dataset is a prime candidate for introducing this concept as many newcomers to Kaggle start out here. Furthermore even though stacking has been responsible for many a team winning Kaggle competitions there seems to be a dearth of kernels on this topic so I hope this notebook can fill somewhat of that void.
I myself am quite a newcomer to the Kaggle scene as well and the first proper ensembling/stacking script that I managed to chance upon and study was one written in the AllState Severity Claims competition by the great Faron. The material in this notebook borrows heavily from Faron's script although ported to factor in ensembles of classifiers whilst his was ensembles of regressors. Anyway please check out his script here
Step1: Feature Exploration, Engineering and Cleaning
Now we will proceed much like how most kernels in general are structured, and that is to first explore the data on hand, identify possible feature engineering opportunities as well as numerically encode any categorical features.
Step2: Well it is no surprise that our task is to somehow extract the information out of the categorical variables
Feature Engineering
Here, credit must be extended to Sina's very comprehensive and well-thought out notebook for the feature engineering ideas so please check out his work
Titanic Best Working Classifier
Step3: All right so now having cleaned the features and extracted relevant information and dropped the categorical columns our features should now all be numeric, a format suitable to feed into our Machine Learning models. However before we proceed let us generate some simple correlation and distribution plots of our transformed dataset to observe ho
Visualisations
Pearson Correlation Heatmap
let us generate some correlation plots of the features to see how related one feature is to the next. To do so, we will utilise the Seaborn plotting package which allows us to plot heatmaps very conveniently as follows
Step4: Takeaway from the Plots
One thing that the Pearson Correlation plot can tell us is that there are not too many features strongly correlated with one another. This is good from the point of view of feeding these features into your learning model, because it means there isn't much redundant or superfluous data in our training set and each feature carries some unique information. The two most correlated features are Family size and Parch (Parents and Children). I'll still leave both features in for the purposes of this exercise.
Pairplots
Finally let us generate some pairplots to observe the distribution of data from one feature to the other. Once again we use Seaborn to help us.
Step5: Ensembling & Stacking models
After that brief whirlwind detour through feature engineering and formatting, we finally arrive at the meat and gist of this notebook.
Creating a Stacking ensemble
Helpers via Python Classes
Here we invoke the use of Python's classes to help make it more convenient for us. For any newcomers to programming, one normally hears Classes being used in conjunction with Object-Oriented Programming (OOP). In short, a class helps to extend some code/program for creating objects (variables for old-school peeps) as well as to implement functions and methods specific to that class.
In the section of code below, we essentially write a class SklearnHelper that allows one to extend the inbuilt methods (such as train, predict and fit) common to all the Sklearn classifiers. Therefore this cuts out redundancy, as we won't need to write the same methods five times if we want to invoke five different classifiers.
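A minimal sketch of such a wrapper (the class written later in the notebook follows the same idea, though its details may differ):
class SklearnHelper(object):
    def __init__(self, clf, seed=0, params=None):
        params = dict(params or {})
        params['random_state'] = seed
        self.clf = clf(**params)
    def train(self, x_train, y_train):
        self.clf.fit(x_train, y_train)
    def predict(self, x):
        return self.clf.predict(x)
    def fit(self, x, y):
        return self.clf.fit(x, y)
    def feature_importances(self, x, y):
        return self.clf.fit(x, y).feature_importances_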
Step6: Bear with me for those who already know this but for people who have not created classes or objects in Python before, let me explain what the code given above does. In creating my base classifiers, I will only use the models already present in the Sklearn library and therefore only extend the class for that.
def init
Step7: Generating our Base First-Level Models
So now let us prepare five learning models as our first level classification. These models can all be conveniently invoked via the Sklearn library and are listed as follows
Step8: Furthermore, having mentioned Objects and classes within the OOP framework, let us now create 5 objects that represent our 5 learning models via the Helper Sklearn class we defined earlier.
Step9: Creating NumPy arrays out of our train and test sets
Great. Having prepared our first-layer base models, we can now ready the training and test data for input into our classifiers by generating NumPy arrays out of their original dataframes as follows
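A sketch of that conversion, using placeholder dataframes in place of the engineered train/test sets (column names here are illustrative only):
import pandas as pd
train = pd.DataFrame({'Pclass': [1, 3], 'Sex': [0, 1], 'Survived': [1, 0]})
test = pd.DataFrame({'Pclass': [2, 3], 'Sex': [1, 0]})
y_train = train['Survived'].values
x_train = train.drop(['Survived'], axis=1).values   # array of training features
x_test = test.values                                 # array of test features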
Step10: Output of the First level Predictions
We now feed the training and test data into our 5 base classifiers and use the Out-of-Fold prediction function we defined earlier to generate our first level predictions. Allow a handful of minutes for the chunk of code below to run.
Step11: Feature importances generated from the different classifiers
Now, having trained our first-level classifiers, we can utilise a very nifty feature of the Sklearn models: outputting the importances of the various features in the training and test sets with one very simple line of code.
As per the Sklearn documentation, most of the classifiers come built in with an attribute which returns feature importances by simply typing in .feature_importances_. Therefore we will invoke this very useful attribute via the helper function defined earlier and plot the feature importances as such:
Step12: So I have not yet figured out how to assign and store the feature importances outright. Therefore I'll print out the values from the code above and then simply copy and paste into Python lists as below (sorry for the lousy hack)
Step13: Create a dataframe from the lists containing the feature importance data for easy plotting via the Plotly package.
Step14: Interactive feature importances via Plotly scatterplots
I'll use the interactive Plotly package at this juncture to visualise the feature importances values of the different classifiers
Step15: Now let us calculate the mean of all the feature importances and store it as a new column in the feature importance dataframe
Step16: Plotly Barplot of Average Feature Importances
Having obtained the mean feature importance across all our classifiers, we can plot them into a Plotly bar plot as follows
Step17: Second-Level Predictions from the First-level Output
First-level output as new features
Having now obtained our first-level predictions, one can think of them as essentially a new set of features to be used as training data for the next classifier. As per the code below, the new columns are simply the first-level predictions from our earlier classifiers, and we train the next classifier on them.
Step18: Correlation Heatmap of the Second Level Training set
Step19: There have been quite a few articles and Kaggle competition winner write-ups about the merits of base models whose predictions are relatively uncorrelated with one another, as this tends to produce better ensemble scores.
Step20: Having now concatenated and joined both the first-level train and test predictions as x_train and x_test, we can now fit a second-level learning model.
Second level learning model via XGBoost
Here we choose the eXtremely famous library for boosted tree learning model, XGBoost. It was built to optimize large-scale boosted tree algorithms. For further information about the algorithm, check out the official documentation.
Anyways, we call an XGBClassifier and fit it to the first-level train and target data and use the learned model to predict the test data as follows
Step21: Just a quick run down of the XGBoost parameters used in the model | Python Code:
# Load in our libraries
import pandas as pd
import numpy as np
import re
import sklearn
import xgboost as xgb
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
import plotly.offline as py
py.init_notebook_mode(connected=True)
import plotly.graph_objs as go
import plotly.tools as tls
import warnings
warnings.filterwarnings('ignore')
# Going to use these 5 base models for the stacking
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier, GradientBoostingClassifier, ExtraTreesClassifier
from sklearn.svm import SVC
from sklearn.cross_validation import KFold
from sklearn.preprocessing import StandardScaler
from sklearn.learning_curve import validation_curve
from sklearn.grid_search import GridSearchCV
Explanation: Introduction
This notebook is a very basic and simple introductory primer to the method of ensembling models, in particular the variant of ensembling known as Stacking. In a nutshell stacking uses as a first-level (base), the predictions of a few basic machine learning models (classifiers) and then uses another model at the second-level to predict the output from the earlier first-level predictions.
The Titanic dataset is a prime candidate for introducing this concept as many newcomers to Kaggle start out here. Furthermore even though stacking has been responsible for many a team winning Kaggle competitions there seems to be a dearth of kernels on this topic so I hope this notebook can fill somewhat of that void.
I myself am quite a newcomer to the Kaggle scene as well and the first proper ensembling/stacking script that I managed to chance upon and study was one written in the AllState Severity Claims competition by the great Faron. The material in this notebook borrows heavily from Faron's script although ported to factor in ensembles of classifiers whilst his was ensembles of regressors. Anyway please check out his script here:
Stacking Starter : by Faron
Now onto the notebook at hand and I hope that it manages to do justice and convey the concept of ensembling in an intuitive and concise manner. My other standalone Kaggle script which implements exactly the same ensembling steps (albeit with different parameters) discussed below gives a Public LB score of 0.808 which is good enough to get to the top 9% and runs just under 4 minutes. Therefore I am pretty sure there is a lot of room to improve and add on to that script. Anyways please feel free to leave me any comments with regards to how I can improve
End of explanation
# Load in the train and test datasets
train = pd.read_csv('./input/train.csv')
test = pd.read_csv('./input/test.csv')
# Store our passenger ID for easy access
PassengerId = test['PassengerId']
train.head(5)
Explanation: Feature Exploration, Engineering and Cleaning
Now we will proceed much like how most kernels in general are structured, and that is to first explore the data on hand, identify possible feature engineering opportunities as well as numerically encode any categorical features.
End of explanation
def get_Cabin_Class(name):
if(type(name) == float):
name = 'None'
title_search = re.search('[A-Z]', name)
if title_search:
return title_search.group(0)
return 'None'
train.Cabin.apply(get_Cabin_Class).value_counts().to_dict()
#train[train['Cabin'] == 'F G73']
full_data = [train, test]
# Some features of my own that I have added in
# Gives the length of the name
train['Name_length'] = train['Name'].apply(len)
test['Name_length'] = test['Name'].apply(len)
# Feature that tells whether a passenger had a cabin on the Titanic
train['Has_Cabin'] = train["Cabin"].apply(lambda x: 0 if type(x) == float else 1)
test['Has_Cabin'] = test["Cabin"].apply(lambda x: 0 if type(x) == float else 1)
# Feature engineering steps taken from Sina
# Create new feature FamilySize as a combination of SibSp and Parch
for dataset in full_data:
dataset['FamilySize'] = dataset['SibSp'] + dataset['Parch'] + 1
# Create new feature IsAlone from FamilySize
for dataset in full_data:
dataset['IsAlone'] = 0
dataset.loc[dataset['FamilySize'] == 1, 'IsAlone'] = 1
# Remove all NULLS in the Embarked column
for dataset in full_data:
dataset['Embarked'] = dataset['Embarked'].fillna('S')
# Remove all NULLS in the Fare column and create a new feature CategoricalFare
for dataset in full_data:
dataset['Fare'] = dataset['Fare'].fillna(train['Fare'].median())
# Create a New feature CategoricalAge
for dataset in full_data:
age_avg = dataset['Age'].mean()
age_std = dataset['Age'].std()
age_null_count = dataset['Age'].isnull().sum()
age_null_random_list = np.random.randint(age_avg - age_std, age_avg + age_std, size=age_null_count)
dataset['Age'][np.isnan(dataset['Age'])] = age_null_random_list
dataset['Age'] = dataset['Age'].astype(int)
# Define function to extract titles from passenger names
def get_title(name):
title_search = re.search(' ([A-Za-z]+)\.', name)
# If the title exists, extract and return it.
if title_search:
return title_search.group(1)
return ""
# Create a new feature Title, containing the titles of passenger names
for dataset in full_data:
dataset['Title'] = dataset['Name'].apply(get_title)
# def get_Cabin_Class(name):
# if(type(name) == float):
# return 'None'
# title_search = re.search('[A-Z]', name).group(0)
# if (title_search):
# if(title_search == 'T'):
# return 'None'
# return title_search
# return 'None'
# for dataset in full_data:
# dataset['Cabin'] = dataset['Cabin'].apply(get_Cabin_Class)
# Group all non-common titles into one single grouping "Rare"
for dataset in full_data:
dataset['Title'] = dataset['Title'].replace(['Lady', 'Countess','Capt', 'Col','Don', 'Dr', 'Major', 'Rev', 'Sir', 'Jonkheer', 'Dona'], 'Rare')
dataset['Title'] = dataset['Title'].replace('Mlle', 'Miss')
dataset['Title'] = dataset['Title'].replace('Ms', 'Miss')
dataset['Title'] = dataset['Title'].replace('Mme', 'Mrs')
def data_mapping(dataset):
#Mapping Cabin
#cabin = pd.get_dummies(dataset['Cabin'], prefix='Cabin')
# Mapping Sex
#dataset['Sex'] = dataset['Sex'].map( {'female': 0, 'male': 1} ).astype(int)
sex = pd.get_dummies(dataset['Sex'],prefix='Sex')
# Mapping titles
# title_mapping = {"Mr": 1, "Miss": 2, "Mrs": 3, "Master": 4, "Rare": 5}
# dataset['Title'] = dataset['Title'].map(title_mapping)
# dataset['Title'] = dataset['Title'].fillna(0)
title = pd.get_dummies(dataset['Title'],prefix='Title')
# Mapping Embarked
#dataset['Embarked'] = dataset['Embarked'].map( {'S': 0, 'C': 1, 'Q': 2} ).astype(int)
embarked = pd.get_dummies(dataset['Embarked'],prefix='Embarked')
# Mapping Fare
dataset.loc[ dataset['Fare'] <= 7.91, 'Fare'] = 0
dataset.loc[(dataset['Fare'] > 7.91) & (dataset['Fare'] <= 14.454), 'Fare'] = 1
dataset.loc[(dataset['Fare'] > 14.454) & (dataset['Fare'] <= 31), 'Fare'] = 2
dataset.loc[ dataset['Fare'] > 31, 'Fare'] = 3
dataset['Fare'] = dataset['Fare'].astype(int)
#dataset['CategoricalFare'] = pd.qcut(train['Fare'], 4) #Lu's comment: Mapping base on cut result on train set
fare = pd.get_dummies(dataset['Fare'],prefix='Fare')
# Mapping Age
dataset.loc[ dataset['Age'] <= 16, 'Age'] = 0
dataset.loc[(dataset['Age'] > 16) & (dataset['Age'] <= 32), 'Age'] = 1
dataset.loc[(dataset['Age'] > 32) & (dataset['Age'] <= 48), 'Age'] = 2
dataset.loc[(dataset['Age'] > 48) & (dataset['Age'] <= 64), 'Age'] = 3
dataset.loc[ dataset['Age'] > 64, 'Age'] = 4;
dataset['Age'] = dataset['Age'].astype(int)
#dataset['Age'] = pd.cut(dataset['Age'], 5) #Lu's comment: Mapping base on cut result on train set
age = pd.get_dummies(dataset['Age'],prefix='Age')
# Mapping Pclass
pclass = pd.get_dummies(dataset['Pclass'],prefix='Pclass')
#dataset.join([sex,title,embarked,fare,age])
dataset = pd.concat([dataset,sex,title,embarked,fare,age,pclass],axis= 1)
dataset.drop(['Sex','Title','Embarked','Fare','Age','Pclass'], axis=1, inplace=True)
return dataset
train = data_mapping(train)
test = data_mapping(test)
#print(dataset)
# Feature selection
drop_elements = ['PassengerId', 'Name', 'Ticket', 'Cabin', 'SibSp']
train = train.drop(drop_elements, axis = 1)
#train = train.drop(['CategoricalAge', 'CategoricalFare'], axis = 1)
test = test.drop(drop_elements, axis = 1)
train.columns.size
test.columns.size
Explanation: Well it is no surprise that our task is to somehow extract the information out of the categorical variables
Feature Engineering
Here, credit must be extended to Sina's very comprehensive and well-thought out notebook for the feature engineering ideas so please check out his work
Titanic Best Working Classfier : by Sina
End of explanation
colormap = plt.cm.coolwarm
plt.figure(figsize=(22,22))
plt.title('Pearson Correlation of Features', y=1.05, size=15)
sns.heatmap(train.astype(float).corr(),linewidths=0.1,vmax=1.0, square=True, cmap=colormap, linecolor='white', annot=True)
Explanation: All right, so now, having cleaned the features, extracted the relevant information and dropped the categorical columns, our features should all be numeric, a format suitable to feed into our Machine Learning models. However, before we proceed, let us generate some simple correlation and distribution plots of our transformed dataset to observe how the features relate to one another and how they are distributed.
Visualisations
Pearson Correlation Heatmap
Let us generate some correlation plots of the features to see how related one feature is to the next. To do so, we will utilise the Seaborn plotting package, which allows us to plot heatmaps very conveniently, as follows:
End of explanation
g = sns.pairplot(train[['Survived','Name_length','Sex_female','Title_Mr','Fare_3']], hue='Survived', palette = 'seismic',size=1.3,diag_kind = 'kde',diag_kws=dict(shade=True),plot_kws=dict(s=10) )
g.set(xticklabels=[])
Explanation: Takeaway from the Plots
One thing that the Pearson Correlation plot can tell us is that there are not too many features strongly correlated with one another. This is good from the point of view of feeding these features into your learning model, because it means there isn't much redundant or superfluous data in our training set and each feature carries some unique information. The two most correlated features are Family size and Parch (Parents and Children). I'll still leave both features in for the purposes of this exercise.
Pairplots
Finally let us generate some pairplots to observe the distribution of data from one feature to the other. Once again we use Seaborn to help us.
End of explanation
# Some useful parameters which will come in handy later on
ntrain = train.shape[0]
ntest = test.shape[0]
SEED = 0 # for reproducibility
NFOLDS = 5 # set folds for out-of-fold prediction
kf = KFold(ntrain, n_folds= NFOLDS, random_state=SEED)
# Class to extend the Sklearn classifier
class SklearnHelper(object):
def __init__(self, clf, seed=0, params=None):
params['random_state'] = seed
self.clf = clf(**params)
def train(self, x_train, y_train):
self.clf.fit(x_train, y_train)
def predict(self, x):
return self.clf.predict(x)
def fit(self,x,y):
return self.clf.fit(x,y)
def feature_importances(self,x,y):
result = self.clf.fit(x,y).feature_importances_
print(result)
return result
# Class to extend XGboost classifer
ntrain
Explanation: Ensembling & Stacking models
Finally, after that brief whirlwind detour through feature engineering and formatting, we arrive at the meat and gist of this notebook.
Creating a Stacking ensemble
Helpers via Python Classes
Here we invoke the use of Python's classes to help make it more convenient for us. For any newcomers to programming, one normally hears Classes being used in conjunction with Object-Oriented Programming (OOP). In short, a class helps to extend some code/program for creating objects (variables for old-school peeps) as well as to implement functions and methods specific to that class.
In the section of code below, we essentially write a class SklearnHelper that allows one to extend the inbuilt methods (such as train, predict and fit) common to all the Sklearn classifiers. Therefore this cuts out redundancy, as we won't need to write the same methods five times if we want to invoke five different classifiers.
End of explanation
def get_oof(clf, x_train, y_train, x_test):
oof_train = np.zeros((ntrain,)) # n * 1
oof_test = np.zeros((ntest,))
oof_test_skf = np.empty((NFOLDS, ntest))
for i, (train_index, test_index) in enumerate(kf):
x_tr = x_train[train_index]
y_tr = y_train[train_index]
x_te = x_train[test_index]
clf.train(x_tr, y_tr)
oof_train[test_index] = clf.predict(x_te)
oof_test_skf[i, :] = clf.predict(x_test)
oof_test[:] = oof_test_skf.mean(axis=0)
return oof_train.reshape(-1, 1), oof_test.reshape(-1, 1) # 1 * n
Explanation: Bear with me for those who already know this but for people who have not created classes or objects in Python before, let me explain what the code given above does. In creating my base classifiers, I will only use the models already present in the Sklearn library and therefore only extend the class for that.
def __init__ : Python standard for invoking the default constructor for the class. This means that when you want to create an object (classifier), you have to give it the parameters of clf (what sklearn classifier you want), seed (random seed) and params (parameters for the classifiers).
The rest of the code consists of methods of the class which simply call the corresponding methods already existing within the sklearn classifiers.
Out-of-Fold Predictions
Now as alluded to above in the introductory section, stacking uses the predictions of base classifiers as input for training a second-level model. However one cannot simply train the base models on the full training data, generate predictions on the full test set and then output these for the second-level training. This runs the risk of your base model predictions already having "seen" the test set and therefore overfitting when feeding these predictions.
End of explanation
# Put in our parameters for said classifiers
# Random Forest parameters
rf_params = {
'n_jobs': -1,
'n_estimators': 500,
'warm_start': True,
#'max_features': 0.2,
'max_depth': 6,
'min_samples_leaf': 2,
'max_features' : 'sqrt',
'verbose': 0
}
# Extra Trees Parameters
et_params = {
'n_jobs': -1,
'n_estimators':500,
#'max_features': 0.5,
'max_depth': 8,
'min_samples_leaf': 2,
'verbose': 0
}
# AdaBoost parameters
ada_params = {
'n_estimators': 500,
'learning_rate' : 0.75
}
# Gradient Boosting parameters
gb_params = {
'n_estimators': 500,
#'max_features': 0.2,
'max_depth': 5,
'min_samples_leaf': 2,
'verbose': 0
}
# Support Vector Classifier parameters
svc_params = {
'kernel' : 'linear',
'C' : 0.025
}
Explanation: Generating our Base First-Level Models
So now let us prepare five learning models as our first level classification. These models can all be conveniently invoked via the Sklearn library and are listed as follows:
Random Forest classifier
Extra Trees classifier
AdaBoost classifer
Gradient Boosting classifer
Support Vector Machine
Parameters
Just a quick summary of the parameters that we will be listing here for completeness,
n_jobs : Number of cores used for the training process. If set to -1, all cores are used.
n_estimators : Number of classification trees in your learning model (set to 10 by default)
max_depth : Maximum depth of tree, or how much a node should be expanded. Beware if set to too high a number would run the risk of overfitting as one would be growing the tree too deep
verbose : Controls whether you want to output any text during the learning process. A value of 0 suppresses all text while a value of 3 outputs the tree learning process at every iteration.
Please check out the full description via the official Sklearn website. There you will find that there are a whole host of other useful parameters that you can play around with.
End of explanation
# Create 5 objects that represent our 5 models
rf = SklearnHelper(clf=RandomForestClassifier, seed=SEED, params=rf_params)
et = SklearnHelper(clf=ExtraTreesClassifier, seed=SEED, params=et_params)
ada = SklearnHelper(clf=AdaBoostClassifier, seed=SEED, params=ada_params)
gb = SklearnHelper(clf=GradientBoostingClassifier, seed=SEED, params=gb_params)
svc = SklearnHelper(clf=SVC, seed=SEED, params=svc_params)
Explanation: Furthermore, having mentioned objects and classes within the OOP framework, let us now create 5 objects that represent our 5 learning models via the SklearnHelper class we defined earlier.
End of explanation
# Create Numpy arrays of train, test and target ( Survived) dataframes to feed into our models
y_train = train['Survived'].ravel()
train = train.drop(['Survived'], axis=1)
x_train = train.values # Creates an array of the train data
x_test = test.values # Creates an array of the test data
#standardization
stdsc = StandardScaler()
x_train = stdsc.fit_transform(x_train)
x_test = stdsc.transform(x_test)
Explanation: Creating NumPy arrays out of our train and test sets
Great. Having prepared our first-layer base models as such, we can now ready the training and test data for input into our classifiers by generating NumPy arrays out of their original dataframes, as follows:
End of explanation
x_train.shape
# Create our OOF train and test predictions. These base results will be used as new features
et_oof_train, et_oof_test = get_oof(et, x_train, y_train, x_test) # Extra Trees
rf_oof_train, rf_oof_test = get_oof(rf,x_train, y_train, x_test) # Random Forest
ada_oof_train, ada_oof_test = get_oof(ada, x_train, y_train, x_test) # AdaBoost
gb_oof_train, gb_oof_test = get_oof(gb,x_train, y_train, x_test) # Gradient Boost
svc_oof_train, svc_oof_test = get_oof(svc,x_train, y_train, x_test) # Support Vector Classifier
print("Training is complete")
Explanation: Output of the First level Predictions
We now feed the training and test data into our 5 base classifiers and use the Out-of-Fold prediction function we defined earlier to generate our first level predictions. Allow a handful of minutes for the chunk of code below to run.
End of explanation
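# Optional sanity check (a sketch, not part of the original workflow): simple accuracy of
# each base model's out-of-fold training predictions against y_train, to get a feel for how
# the first-level learners perform before stacking them.
for name, oof in [('Extra Trees', et_oof_train), ('Random Forest', rf_oof_train),
                  ('AdaBoost', ada_oof_train), ('Gradient Boost', gb_oof_train),
                  ('SVC', svc_oof_train)]:
    print("{}: {:.4f}".format(name, (oof.ravel() == y_train).mean()))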
rf_features = rf.feature_importances(x_train,y_train)
et_features = et.feature_importances(x_train, y_train)
ada_features = ada.feature_importances(x_train, y_train)
gb_features = gb.feature_importances(x_train,y_train)
Explanation: Feature importances generated from the different classifiers
Now, having trained our first-level classifiers, we can utilise a very nifty feature of the Sklearn models: outputting the importances of the various features in the training and test sets with one very simple line of code.
As per the Sklearn documentation, most of the classifiers come built in with an attribute which returns feature importances by simply typing in .feature_importances_. Therefore we will invoke this very useful attribute via the helper function defined earlier and plot the feature importances as such:
End of explanation
# rf_features = [0.10474135, 0.21837029, 0.04432652, 0.02249159, 0.05432591, 0.02854371
# ,0.07570305, 0.01088129 , 0.24247496, 0.13685733 , 0.06128402]
# et_features = [ 0.12165657, 0.37098307 ,0.03129623 , 0.01591611 , 0.05525811 , 0.028157
# ,0.04589793 , 0.02030357 , 0.17289562 , 0.04853517, 0.08910063]
# ada_features = [0.028 , 0.008 , 0.012 , 0.05866667, 0.032 , 0.008
# ,0.04666667 , 0. , 0.05733333, 0.73866667, 0.01066667]
# gb_features = [ 0.06796144 , 0.03889349 , 0.07237845 , 0.02628645 , 0.11194395, 0.04778854
# ,0.05965792 , 0.02774745, 0.07462718, 0.4593142 , 0.01340093]
Explanation: So I have not yet figured out how to assign and store the feature importances outright. Therefore I'll print out the values from the code above and then simply copy and paste into Python lists as below (sorry for the lousy hack)
End of explanation
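# An alternative to the copy-paste step described above (a small sketch): since the
# SklearnHelper.feature_importances() method defined earlier returns the array, the values
# already held in rf_features, et_features, ada_features and gb_features can simply be
# collected into a dict keyed by model name instead of re-typing printed output.
importances_by_model = {
    'Random Forest': rf_features,
    'Extra Trees': et_features,
    'AdaBoost': ada_features,
    'Gradient Boost': gb_features,
}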
cols = train.columns.values
# Create a dataframe with features
feature_dataframe = pd.DataFrame( {'features': cols,
'Random Forest feature importances': rf_features,
'Extra Trees feature importances': et_features,
'AdaBoost feature importances': ada_features,
'Gradient Boost feature importances': gb_features
})
Explanation: Create a dataframe from the lists containing the feature importance data for easy plotting via the Plotly package.
End of explanation
# Scatter plot
trace = go.Scatter(
y = feature_dataframe['Random Forest feature importances'].values,
x = feature_dataframe['features'].values,
mode='markers',
marker=dict(
sizemode = 'diameter',
sizeref = 1,
size = 25,
# size= feature_dataframe['AdaBoost feature importances'].values,
#color = np.random.randn(500), #set color equal to a variable
color = feature_dataframe['Random Forest feature importances'].values,
colorscale='Portland',
showscale=True
),
text = feature_dataframe['features'].values
)
data = [trace]
layout= go.Layout(
autosize= True,
title= 'Random Forest Feature Importance',
hovermode= 'closest',
# xaxis= dict(
# title= 'Pop',
# ticklen= 5,
# zeroline= False,
# gridwidth= 2,
# ),
yaxis=dict(
title= 'Feature Importance',
ticklen= 5,
gridwidth= 2
),
showlegend= False
)
fig = go.Figure(data=data, layout=layout)
py.iplot(fig,filename='scatter2010')
# Scatter plot
trace = go.Scatter(
y = feature_dataframe['Extra Trees feature importances'].values,
x = feature_dataframe['features'].values,
mode='markers',
marker=dict(
sizemode = 'diameter',
sizeref = 1,
size = 25,
# size= feature_dataframe['AdaBoost feature importances'].values,
#color = np.random.randn(500), #set color equal to a variable
color = feature_dataframe['Extra Trees feature importances'].values,
colorscale='Portland',
showscale=True
),
text = feature_dataframe['features'].values
)
data = [trace]
layout= go.Layout(
autosize= True,
title= 'Extra Trees Feature Importance',
hovermode= 'closest',
# xaxis= dict(
# title= 'Pop',
# ticklen= 5,
# zeroline= False,
# gridwidth= 2,
# ),
yaxis=dict(
title= 'Feature Importance',
ticklen= 5,
gridwidth= 2
),
showlegend= False
)
fig = go.Figure(data=data, layout=layout)
py.iplot(fig,filename='scatter2010')
# Scatter plot
trace = go.Scatter(
y = feature_dataframe['AdaBoost feature importances'].values,
x = feature_dataframe['features'].values,
mode='markers',
marker=dict(
sizemode = 'diameter',
sizeref = 1,
size = 25,
# size= feature_dataframe['AdaBoost feature importances'].values,
#color = np.random.randn(500), #set color equal to a variable
color = feature_dataframe['AdaBoost feature importances'].values,
colorscale='Portland',
showscale=True
),
text = feature_dataframe['features'].values
)
data = [trace]
layout= go.Layout(
autosize= True,
title= 'AdaBoost Feature Importance',
hovermode= 'closest',
# xaxis= dict(
# title= 'Pop',
# ticklen= 5,
# zeroline= False,
# gridwidth= 2,
# ),
yaxis=dict(
title= 'Feature Importance',
ticklen= 5,
gridwidth= 2
),
showlegend= False
)
fig = go.Figure(data=data, layout=layout)
py.iplot(fig,filename='scatter2010')
# Scatter plot
trace = go.Scatter(
y = feature_dataframe['Gradient Boost feature importances'].values,
x = feature_dataframe['features'].values,
mode='markers',
marker=dict(
sizemode = 'diameter',
sizeref = 1,
size = 25,
# size= feature_dataframe['AdaBoost feature importances'].values,
#color = np.random.randn(500), #set color equal to a variable
color = feature_dataframe['Gradient Boost feature importances'].values,
colorscale='Portland',
showscale=True
),
text = feature_dataframe['features'].values
)
data = [trace]
layout= go.Layout(
autosize= True,
title= 'Gradient Boosting Feature Importance',
hovermode= 'closest',
# xaxis= dict(
# title= 'Pop',
# ticklen= 5,
# zeroline= False,
# gridwidth= 2,
# ),
yaxis=dict(
title= 'Feature Importance',
ticklen= 5,
gridwidth= 2
),
showlegend= False
)
fig = go.Figure(data=data, layout=layout)
py.iplot(fig,filename='scatter2010')
Explanation: Interactive feature importances via Plotly scatterplots
I'll use the interactive Plotly package at this juncture to visualise the feature importances values of the different classifiers
End of explanation
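# Optional refactor (sketch): the four nearly identical scatter plots above can be generated
# in a loop over the importance columns instead of repeating the trace/layout boilerplate.
importance_cols = ['Random Forest feature importances', 'Extra Trees feature importances',
                   'AdaBoost feature importances', 'Gradient Boost feature importances']
for col in importance_cols:
    trace = go.Scatter(
        y=feature_dataframe[col].values,
        x=feature_dataframe['features'].values,
        mode='markers',
        marker=dict(sizemode='diameter', sizeref=1, size=25,
                    color=feature_dataframe[col].values,
                    colorscale='Portland', showscale=True),
        text=feature_dataframe['features'].values)
    layout = go.Layout(autosize=True, title=col, hovermode='closest',
                       yaxis=dict(title='Feature Importance', ticklen=5, gridwidth=2),
                       showlegend=False)
    py.iplot(go.Figure(data=[trace], layout=layout), filename='scatter2010')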
# Create the new column containing the average of values
feature_dataframe['mean'] = feature_dataframe.mean(axis= 1) # axis = 1 computes the mean row-wise
feature_dataframe.head(3)
Explanation: Now let us calculate the mean of all the feature importances and store it as a new column in the feature importance dataframe
End of explanation
y = feature_dataframe['mean'].values
x = feature_dataframe['features'].values
data = [go.Bar(
x= x,
y= y,
width = 0.5,
marker=dict(
color = feature_dataframe['mean'].values,
colorscale='Portland',
showscale=True,
reversescale = False
),
opacity=0.6
)]
layout= go.Layout(
autosize= True,
title= 'Barplots of Mean Feature Importance',
hovermode= 'closest',
# xaxis= dict(
# title= 'Pop',
# ticklen= 5,
# zeroline= False,
# gridwidth= 2,
# ),
yaxis=dict(
title= 'Feature Importance',
ticklen= 5,
gridwidth= 2
),
showlegend= False
)
fig = go.Figure(data=data, layout=layout)
py.iplot(fig, filename='bar-direct-labels')
Explanation: Plotly Barplot of Average Feature Importances
Having obtained the mean feature importance across all our classifiers, we can plot them into a Plotly bar plot as follows:
End of explanation
base_predictions_train = pd.DataFrame( {'RandomForest': rf_oof_train.ravel(),
'ExtraTrees': et_oof_train.ravel(),
'AdaBoost': ada_oof_train.ravel(),
'GradientBoost': gb_oof_train.ravel()
})
base_predictions_train.head()
Explanation: Second-Level Predictions from the First-level Output
First-level output as new features
Having now obtained our first-level predictions, one can think of them as essentially a new set of features to be used as training data for the next classifier. As per the code below, the new columns are simply the first-level predictions from our earlier classifiers, and we train the next classifier on them.
End of explanation
data = [
go.Heatmap(
z= base_predictions_train.astype(float).corr().values ,
x=base_predictions_train.columns.values,
y= base_predictions_train.columns.values,
colorscale='Portland',
showscale=True,
reversescale = True
)
]
py.iplot(data, filename='labelled-heatmap')
Explanation: Correlation Heatmap of the Second Level Training set
End of explanation
x_train
x_train = np.concatenate(( et_oof_train, rf_oof_train, ada_oof_train, gb_oof_train, svc_oof_train), axis=1)
x_test = np.concatenate(( et_oof_test, rf_oof_test, ada_oof_test, gb_oof_test, svc_oof_test), axis=1)
x_train.shape
Explanation: There have been quite a few articles and Kaggle competition winner write-ups about the merits of base models whose predictions are relatively uncorrelated with one another, as this tends to produce better ensemble scores.
End of explanation
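# A quick numeric companion to the heatmap above (optional sketch): the mean pairwise
# correlation between the first-level out-of-fold predictions. Lower values suggest the
# base models bring more complementary information to the second level.
corr_values = base_predictions_train.astype(float).corr().values
print("Mean pairwise correlation: {:.4f}".format(
    corr_values[np.triu_indices_from(corr_values, k=1)].mean()))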
gbm = xgb.XGBClassifier(
#learning_rate = 0.02,
n_estimators= 2000,
max_depth= 4,
min_child_weight= 2,
#gamma=1,
gamma=0.9,
subsample=0.8,
colsample_bytree=0.8,
objective= 'binary:logistic',
nthread= -1,
scale_pos_weight=1).fit(
x_train, y_train,
eval_set=[(x_train, y_train)],
eval_metric='logloss',
verbose=True
)
predictions = gbm.predict(x_test)
import matplotlib.pyplot as plt
from sklearn.learning_curve import learning_curve
param_dist = {
"n_estimators": 2000,
"max_depth": 4,
"min_child_weight": 2,
#gamma=1,
"gamma":0.9,
"subsample":0.8,
"colsample_bytree":0.8,
"objective": 'binary:logistic',
"nthread": -1,
"scale_pos_weight":1
}
clf = xgb.XGBClassifier(**param_dist)
train_sizes, train_scores, test_scores =\
learning_curve(estimator=clf,
X=x_train,
y=y_train,
train_sizes=np.linspace(0.1, 1.0, 10),
cv=10,
n_jobs=1)
train_mean = np.mean(train_scores, axis=1)
train_std = np.std(train_scores, axis=1)
test_mean = np.mean(test_scores, axis=1)
test_std = np.std(test_scores, axis=1)
plt.plot(train_sizes, train_mean,
color='blue', marker='o',
markersize=8, label='training accuracy')
plt.fill_between(train_sizes,
train_mean + train_std,
train_mean - train_std,
alpha=0.15, color='blue')
plt.plot(train_sizes, test_mean,
color='green', linestyle='--',
marker='s', markersize=8,
label='validation accuracy')
plt.fill_between(train_sizes,
test_mean + test_std,
test_mean - test_std,
alpha=0.15, color='green')
plt.grid()
plt.xlabel('Number of training samples')
plt.ylabel('Accuracy')
plt.legend(loc='lower right')
plt.ylim([0.8, 1.0])
plt.tight_layout()
# plt.savefig('./figures/learning_curve.png', dpi=300)
plt.show()
#XGBoost eval example
# clf.fit(x_train, y_train,
# eval_set=[(x_train, y_train)],
# eval_metric='logloss',
# verbose=False)
# # Load evals result by calling the evals_result() function
# evals_result = clf.evals_result()
# print('Access logloss metric directly from validation_0:')
# print(evals_result['validation_0']['logloss'])
# print('')
# print('Access metrics through a loop:')
# for e_name, e_mtrs in evals_result.items():
# print('- {}'.format(e_name))
# for e_mtr_name, e_mtr_vals in e_mtrs.items():
# print(' - {}'.format(e_mtr_name))
# print(' - {}'.format(e_mtr_vals))
# print('')
# print('Access complete dict:')
#print(evals_result['validation_0']['logloss'][-1])
xgb_model = xgb.XGBClassifier()
clf = GridSearchCV(xgb_model,
{'max_depth': [3,4,5],
'n_estimators': [2000],
'gamma':[0.8,0.9,1],
"min_child_weight": [2,3],
"subsample":[0.8,0.9],
'colsample_bytree':[0.8],
"scale_pos_weight":[1]}, verbose=1)
clf.fit(x_train,y_train)
print('*' * 30)
print(clf.best_score_)
print('*' * 30)
print(clf.best_params_)
Explanation: Having now concatenated and joined both the first-level train and test predictions as x_train and x_test, we can now fit a second-level learning model.
Second level learning model via XGBoost
Here we choose the eXtremely famous library for boosted tree learning model, XGBoost. It was built to optimize large-scale boosted tree algorithms. For further information about the algorithm, check out the official documentation.
Anyways, we call an XGBClassifier and fit it to the first-level train and target data and use the learned model to predict the test data as follows:
End of explanation
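# Optional sanity check before producing the submission (a sketch; parameters mirror the
# gbm model above): estimate out-of-sample accuracy of the second-level XGBoost model
# with cross-validation on the stacked features.
from sklearn.cross_validation import cross_val_score
second_level = xgb.XGBClassifier(n_estimators=2000, max_depth=4, min_child_weight=2,
                                 gamma=0.9, subsample=0.8, colsample_bytree=0.8,
                                 objective='binary:logistic', nthread=-1, scale_pos_weight=1)
cv_scores = cross_val_score(second_level, x_train, y_train, cv=5)
print("Mean CV accuracy: {:.4f} (+/- {:.4f})".format(cv_scores.mean(), cv_scores.std()))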
# Generate Submission File
StackingSubmission = pd.DataFrame({ 'PassengerId': PassengerId,
'Survived': predictions })
StackingSubmission.to_csv("StackingSubmission.csv", index=False)
Explanation: Just a quick run down of the XGBoost parameters used in the model:
max_depth : How deep you want to grow your tree. Beware if set to too high a number might run the risk of overfitting.
gamma : minimum loss reduction required to make a further partition on a leaf node of the tree. The larger, the more conservative the algorithm will be.
eta : step size shrinkage used in each boosting step to prevent overfitting
Producing the Submission file
Finally having trained and fit all our first-level and second-level models, we can now output the predictions into the proper format for submission to the Titanic competition as follows:
End of explanation |
14,383 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
This notebook gives examples for processing spins monitor data.
Logging data is stored in monitors that are defined within the optimization plan. Every iteration of the optimization saves a log file in the form of a Pickle file, which contains the values of all the monitors at that point in time. To help process this data, spins includes the log_tools module (spins.invdes.problem_graph.log_tools).
There are 3 general ways that these logs can be processed
Step1: Option 1
Step2: Option 2
Step3: Option 3 | Python Code:
## Import libraries necessary for monitor data processing. ##
from matplotlib import pyplot as plt
import numpy as np
import os
import pandas as pd
import pickle
from spins.invdes.problem_graph import log_tools
## Define filenames. ##
# `save_folder` is the full path to the directory containing the Pickle (.pkl) log files from the optimization.
save_folder = os.getcwd()
## Load the logged monitor data and monitor spec information. ##
# `df` is a pandas dataframe containing all the data loaded from the log Pickle (.pkl) files.
df = log_tools.create_log_data_frame(log_tools.load_all_logs(save_folder))
Explanation: Introduction
This notebook gives examples for processing spins monitor data.
Logging data is stored in monitors that are defined within the optimization plan. Every iteration of the optimization saves a log file in the form of a Pickle file, which contains the values of all the monitors at that point in time. To help process this data, spins includes the log_tools module (spins.invdes.problem_graph.log_tools).
There are 3 general ways that these logs can be processed:
1. Defining a monitor_spec file that describes how you want data to be plotted.
2. Use lower-level log_tools functions to load the data and plot the data.
3. Directly load and process the Pickle files.
Prepare the log data and plotting information.
The following three cells import the necessary libraries and load the optimization monitor data so it can be processed.
A monitor specification file is a YAML file that lists all the monitors to be plotted as well as how they should be plotted (e.g. taking magnitude, real part, etc.). The monitor specification file also allows you to join multiple monitors into one plot (e.g. for joining different monitors across different transformations).
Note that the monitor specification can be generated in code if desired (instead of actually saving it to a YAML file).
End of explanation
# `monitor_spec_filename` is the full path to the monitor spec yml file.
monitor_spec_filename = os.path.join(save_folder, "monitor_spec.yml")
# `monitor_descriptions` now contains the information from the monitor_spec.yml file. It follows the format of
# the schema found in `log_tools.monitor_spec`.
monitor_descriptions = log_tools.load_from_yml(monitor_spec_filename)
## Plot all monitor data and save into a pdf file in the project folder. ##
# `summary_out_name` is the full path to the pdf that will be generated containing plots of all the log data.
summary_out_name = os.path.join(save_folder, "summary.pdf")
# This command plots all the monitor data contained in the log files, saves it to the specified pdf file, and
# displays to the screen.
log_tools.plot_monitor_data(df, monitor_descriptions, summary_out_name)
## Print summary of scalar monitor values to screen during optimization without plotting. ##
# This command is useful to quickly view the current optimization state or
# if one is running an optimization somewhere where plotting to a screen is difficult.
log_tools.display_summary(df, monitor_descriptions)
Explanation: Option 1: Using a monitor specification file.
End of explanation
## Get the iterations and data for a specific 1-dimensional scalar monitor (here, power vs iteration is demonstrated)
## for a specific overlap monitor.
# We call `get_joined_scalar_monitors` because we want the monitor data across all iterations rather than
# just the data for particular transformation or iteration number (contrast with `get_single_monitor_data` usage
# below).
joined_monitor_data = log_tools.get_joined_scalar_monitors(
df, "power1300", event_name="optimizing", scalar_operation="magnitude_squared")
# Now, the iteration numbers are stored in the list iterations and the overlap monitor power values are
# stored in the list data. - If desired, these lists can now be exported for plotting in a different program
# or can be plotted manually by the user in python, as demonstrated next.
iterations = joined_monitor_data.iterations
data = joined_monitor_data.data
## Manually plot the power versus iteration data we've retrieved for the monitor of interest. ##
plt.figure()
plt.plot(iterations, data)
plt.xlabel("Iterations")
plt.ylabel("Transmission")
plt.title("Transmission at 1300 nm")
## Get the data for a specific 2-dimensional field slice monitor. ##
# These functions get the monitor information for the monitor name specified above and return the data associated
# with the monitor name. Here we retrieve the last stored field. We can specify `transformation_name` and
# `iteration` to grab data from a particular transformation or iteration.
field_data = log_tools.get_single_monitor_data(df, "field1550")
# `field_data` is now an array with 3 entries, corresponding to the x-, y-, and z- components of the field,
# so we apply a utility function to get the magnitude of the vector.
field_mag = log_tools.process_field(field_data, vector_operation="magnitude")
## Manually plot this 2-dimensional field data. ##
plt.figure()
plt.imshow(np.squeeze(np.array(field_mag.T)), origin="lower")
plt.title("E-Field Magnitude at 1550 nm")
Explanation: Option 2: Using log_tools to extract the data.
The following 2 cells demonstrate extracting specific monitor data of interest in order to export the data or plot it yourself.
End of explanation
with open(os.path.join(save_folder, "step1.pkl"), "rb") as fp:
data = pickle.load(fp)
print("Log time: ", data["time"])
print("Transmission at 1300 nm: ", data["monitor_data"]["power1300"])
Explanation: Option 3: Directly manipulating Pickle files.
This is the most tedious way of accessing the data, as there is one Pickle file per iteration.
However, this enables one to inspect all of the available data.
Note that data formats are subject to change.
End of explanation |
14,384 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This will be a short post presenting a notebook extension that I am using in an internal course I am teaching at my company.
If it is useful to anyone else for showing basic Python (2 and 3, as well as Java and Javascript) to complete beginners, I will be glad.
Code name
Step1: Once this is done, the cell magic should be available for use
Step2: Ahora un ejemplo con javascript | Python Code:
%load_ext tutormagic
Explanation: This will be a short post presenting a notebook extension that I am using in an internal course I am teaching at my company.
If it is useful to anyone else for showing basic Python (2 and 3, as well as Java and Javascript) to complete beginners, I will be glad.
Code name: tutormagic
All this extension does is embed the pythontutor page inside an IFrame, using the code we have defined in a code cell preceded by the %%tutor cell magic.
As I mentioned before, you can write Python2, Python3, Java and Javascript code, which are the languages supported by pythontutor.
Example
First we need to install the extension. It is available on pypi, so you can install it using pip install tutormagic. Once installed, inside an IPython notebook you should load it using:
End of explanation
%%tutor --lang python3
a = 1
b = 2
def add(x, y):
return x + y
c = add(a, b)
Explanation: Once this is done, the cell magic should be available for use:
End of explanation
%%tutor --lang javascript
var a = 1;
var b = 1;
console.log(a + b);
Explanation: Now an example with javascript:
End of explanation |
14,385 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Today, we'll sample a spatially-varying coefficient model, like that discussed in Gelfand (2003). These models are of the form
Step1: This reflects a gradient from left to right, and from bottom to top of a perfectly square grid. While this is highly idealized, we can see how well the model recovers these estimates in the exercise below.
First, though, let's draw some random normal data for the exercise and construct our $y$ vector, letting $\tau^2=1$.
Step2: The aspatial distribution of our data does not betray any specific trending, since we've ensured that our $\beta_1$ and $\beta_2$ surfaces are perfectly independent of one another
Step3: So, before we sample, let's assemble our data matrix and our coordinates. The coordinates are used to compute the spatial kernel function, $\mathbf{H}(\phi)$, which models the spatial similarity in the random component of $\beta$ in space.
Step4: We can sample multiple traces in parallel using spvcm, so below, we will see progressbars for each of the chains independently
Step5: We can see the structure of the model below, with our traceplots showing the sample paths, and the Kernel density estimates on right
Step6: Further, we can extract our estimates from the trace
Step7: And verify that the estimates from all of our chains, though slightly different, look like our target surfaces
Step8: Finally, it is important that our prediction errors in the $\hat{\beta_i}$ estimates are uncorrelated. Below, we can see that, in the map, the surfaces are indeed spatially random
Step9: Finally, we can see that the true and estimated values are strongly correlated | Python Code:
side = np.arange(0,10,1)
grid = np.tile(side, 10)
beta1 = grid.reshape(10,10)
beta2 = np.fliplr(beta1).T
fig, ax = plt.subplots(1,2, figsize=(12*1.6, 6))
sns.heatmap(beta1, ax=ax[0])
sns.heatmap(beta2, ax=ax[1])
plt.show()
Explanation: Today, we'll sample a spatially-varying coefficient model, like that discussed in Gelfand (2003). These models are of the form:
$$ y_i \sim \mathcal{N}(\mathbf{x}_i'\beta_{i.}, \tau^2)$$
where $\beta_{i.}$ reflects the vector of $p$ coefficient estimates local to site $i$.
This is a hierarchical model, where a prior on the $\beta$ effects is assigned as a function of a spatial kernel $\mathbf{H}(\phi)$, relating all $N$ sites to one another as a function of distance and attenuation parameter $\phi$, and an intrinsic covariance among the $\beta$ unrelated to spatial correlation, $\mathbf{T}$. This prior is often stated for a tiling of $\beta$ with $j$ process index changing faster than $i$ site index as:
$$ \vec{\beta} \sim \mathcal{N}(1_N \otimes \alpha, \mathbf{T} \otimes \mathbf{H}(\phi))$$
with $\alpha$ being the $j$-length process mean vector, and $1_N$ being the $N$-length vector of ones.
Then, $\phi$ is often assigned a gamma-distributed prior contingent on the scale of distances reflected in the form of the $\mathbf{H}(.)$ kernel, and $\mathbf{T}$ is assigned an inverse Wishart prior.
This model is amenable to Gibbs sampling, and a Gibbs sampler has been written in Python in spvcm that can efficiently sample these models.
For starters, let's state a simple parameter surface we are interested in fitting:
End of explanation
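# Illustrative sketch only (an assumption, not necessarily the exact kernel spvcm uses):
# one common choice for the spatial kernel H(phi) is an exponential decay in the pairwise
# distance between sites; `coords` and `phi` below are placeholder names.
from scipy.spatial.distance import pdist, squareform
coords = np.random.uniform(0, 10, size=(5, 2))   # placeholder site coordinates
phi = 1.0                                        # placeholder attenuation parameter
H = np.exp(-squareform(pdist(coords)) / phi)     # N x N kernel with ones on the diagonal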
x1, x2 = np.random.normal(0,1,size=(100,2)).T
x1 = x1.reshape(100,1)
x2 = x2.reshape(100,1)
flat_beta1 = beta1.flatten().reshape(100,1)
flat_beta2 = beta2.flatten().reshape(100,1)
y = x1 * flat_beta1 + x2 * flat_beta2 + np.random.normal(0,1,size=(100,1))
Explanation: This reflects a gradient from left to right, and from bottom to top of a perfectly square grid. While this is highly idealized, we can see how well the model recovers these estimates in the exercise below.
First, though, let's draw some random normal data for the exercise and construct our $y$ vector, letting $\tau^2=1$.
End of explanation
f,ax = plt.subplots(1,2, figsize=(12,6))
sns.heatmap(y.reshape(10,10), ax=ax[1])
sns.regplot(beta1.flatten(), beta2.flatten(), ax=ax[0])
Explanation: The aspatial distribution of our data does not betray any specific trending, since we've ensured that our $\beta_1$ and $\beta_2$ surfaces are perfectly independent of one another:
End of explanation
positions = np.array(list(zip(flat_beta1.flatten(),
flat_beta2.flatten())))
X = np.hstack((x1, x2))
Explanation: So, before we sample, let's assemble our data matrix and our coordinates. The coordinates are used to compute the spatial kernel function, $\mathbf{H}(\phi)$, which models the spatial similarity in the random component of $\beta$ in space.
End of explanation
import time as t
m = SVC(y, X, positions, n_samples=0,
starting_values=dict(Phi=.5),
configs=dict(jump=.2))
start = t.time()
m.sample(2000, n_jobs=4)
end = t.time() - start
print('{} seconds elapsed'.format(end))
Explanation: We can sample multiple traces in parallel using spvcm, so below, we will see progressbars for each of the chains independently:
End of explanation
m.trace.plot(burn=1000)
plt.tight_layout()
plt.show()
Explanation: We can see the structure of the model below, with our traceplots showing the sample paths, and the Kernel density estimates on right:
End of explanation
a,b,c,d = np.squeeze(m.trace['Betas'])
est_b0s = np.squeeze(m.trace['Betas'])[:,:,::3].mean(axis=1)
est_b1s = np.squeeze(m.trace['Betas'])[:,:,1::3].mean(axis=1)
est_b2s = np.squeeze(m.trace['Betas'])[:,:,2::3].mean(axis=1)
Explanation: Further, we can extract our estimates from the trace:
End of explanation
f,ax = plt.subplots(4,2, figsize=(16,20),
subplot_kw=dict(aspect='equal'))
cfgs = dict(xticklabels='', yticklabels='',
vmin=0, vmax=9, cmap='viridis')
for i, (b1,b2) in enumerate(zip(est_b1s, est_b2s)):
sns.heatmap(b1.reshape(10,10),ax=ax[i,0], cbar=True, **cfgs)
sns.heatmap(b2.reshape(10,10), ax=ax[i,1], cbar=True, **cfgs)
ax[i,0].set_title('Chain {} $\\beta_1$'.format(i))
ax[i,1].set_title('Chain {} $\\beta_2$'.format(i))
plt.tight_layout()
plt.show()
Explanation: And verify that the estimates from all of our chains, though slightly different, look like our target surfaces:
End of explanation
f,ax = plt.subplots(3,2, figsize=(16,20))
cfgs = dict(xticklabels='', yticklabels='',
vmin=None, vmax=None, cmap='viridis')
b1ref = est_b1s[0].reshape(10,10)
b2ref = est_b2s[0].reshape(10,10)
for i, (b1,b2) in enumerate(zip(est_b1s[1:], est_b2s[1:])):
sns.heatmap(b1ref - b1.reshape(10,10),ax=ax[i,0], cbar=True, **cfgs)
sns.heatmap(b2ref - b2.reshape(10,10), ax=ax[i,1], cbar=True, **cfgs)
ax[i,0].set_title('Chain 1 - Chain {}: $\\beta_1$'.format(i))
ax[i,1].set_title('Chain 1 - Chain {}: $\\beta_2$'.format(i))
plt.tight_layout()
plt.show()
Explanation: Finally, it is important that our prediction errors in the $\hat{\beta_i}$ estimates are uncorrelated. Below, we can see that, in the map, the surfaces are indeed spatially random:
End of explanation
f,ax = plt.subplots(2,4, figsize=(20,10), sharex=True, sharey='row')
[sns.regplot(beta1.flatten(), est_b1.flatten(), color='k',
line_kws=dict(color='orangered'), ax=subax)
for est_b1,subax in zip(est_b1s, ax[0])]
[sns.regplot(beta2.flatten(), est_b2.flatten(), color='k',
line_kws=dict(color='orangered'), ax=subax)
for est_b2,subax in zip(est_b2s, ax[1])]
[subax.set_title("Chain {}".format(i)) for i,subax in enumerate(ax[0])]
Explanation: Finally, we can see that the true and estimated values are strongly correlated:
End of explanation |
14,386 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Procrustes Analysis
Step1: PCA
Step2: Solving b vector
$$b = \Phi^T \left(x - \bar{x}\right)$$ | Python Code:
import pandas
df = pandas.read_csv('muct76-opencv.csv', header=0, usecols=np.arange(2,154), dtype=float)
df.head()
X = df.iloc[:, ::2].values
Y = df.iloc[:, 1::2].values
d = np.hstack((X,Y))
d.shape
import sys
threshold = 1.0e-8
def center(vec):
pivot = int(vec.shape[0]/2)
meanx = np.mean(vec[:pivot])
meany = np.mean(vec[pivot:])
return(meanx, meany)
def calnorm(vec):
vsqsum = np.sum(np.square(vec))
return(vsqsum)
def scale(vec):
vcopy = vec.copy()
vmax = np.max(vec)
if vmax > 2.0:
vcopy = vcopy / vmax
vnorm = calnorm(vcopy)
return (vcopy / np.sqrt(vnorm))
def caldiff(pref, pcmp):
return np.mean(np.sum(np.square(pref - pcmp), axis=1))
def simTransform(pref, pcmp, showerror = False):
err_before = np.mean(np.sum(np.square(pref - pcmp), axis=1))
ref_mean = np.mean(pref, axis=0)
prefcentered = np.asmatrix(pref) - np.asmatrix(ref_mean)
cmp_mean = np.mean(pcmp, axis=0)
pcmpcentered = np.asmatrix(pcmp) - np.asmatrix(cmp_mean)
Sxx = np.sum(np.square(pcmpcentered[:,0]))
Syy = np.sum(np.square(pcmpcentered[:,1]))
Sxxr = prefcentered[:,0].T * pcmpcentered[:,0] #(ref_x, x)
Syyr = prefcentered[:,1].T * pcmpcentered[:,1] #(ref_y, y)
Sxyr = prefcentered[:,1].T * pcmpcentered[:,0] #(ref_y, x)
Syxr = prefcentered[:,0].T * pcmpcentered[:,1] #(ref_x, y)
a = (Sxxr + Syyr)/(Sxx + Syy) #(Sxxr + Syyr) / (Sxx + Syy)
b = (Sxyr - Syxr) / (Sxx + Syy)
a = np.asscalar(a)
b = np.asscalar(b)
Rot = np.matrix([[a, -b],[b, a]])
translation = -Rot * np.asmatrix(cmp_mean).T + np.asmatrix(ref_mean).T
outx, outy = [], []
res = Rot * np.asmatrix(pcmp).T + translation
err_after = np.mean(np.sum(np.square(pref - res.T), axis=1))
if showerror:
print("Error before: %.4f after: %.4f\n"%(err_before, err_after))
return (res.T, err_after)
def align2mean(data):
d = data.copy()
pivot = int(d.shape[1]/2)
for i in range(d.shape[0]):
cx, cy = center(d[i,:])
d[i,:pivot] = d[i,:pivot] - cx
d[i,pivot:] = d[i,pivot:] - cy
#print(cx, cy, center(d[i,:]))
d[i,:] = scale(d[i,:])
norm = calnorm(d[i,:])
d_aligned = d.copy()
pref = np.vstack((d[0,:pivot], d[0,pivot:])).T
print(pref.shape)
mean = pref.copy()
mean_diff = 1
while mean_diff > threshold:
err_sum = 0.0
for i in range(1, d.shape[0]):
p = np.vstack((d[i,:pivot], d[i,pivot:])).T
p_aligned, err = simTransform(mean, p)
d_aligned[i,:] = scale(p_aligned.flatten(order='F'))
err_sum += err
oldmean = mean.copy()
mean = np.mean(d_aligned, axis=0)
mean = scale(mean)
mean = np.reshape(mean, newshape=pref.shape, order='F')
d = d_aligned.copy()
mean_diff = caldiff(oldmean, mean)
sys.stdout.write("SumError: %.4f MeanDiff: %.6f\n"%(err_sum, mean_diff))
return (d_aligned, mean)
d_aligned, mean = align2mean(d)
plt.figure(figsize=(7,7))
plt.gca().set_aspect('equal')
plotFaceShape(mean)
Explanation: Procrustes Analysis:
Translate each example such that its centroid is at origin, and scale them so that $|\mathbf{x}|=1$
Choose the first example as the initial estimate of the mean shape ($\bar{\mathbf{x}}$), and save a copy as a reference ($\mathbf{\bar{x}_0}$)
Align all the examples with the current estimate of the mean ($\bar{\mathbf{x}}$)
Re-estimate mean from the aligned shapes
Apply constraints on the current estimate of the mean by aligning it with the reference $\mathbf{\bar{x}_0}$ and scaling so that $|\bar{\mathbf{x}}|=1$
Repeat steps 3, 4, 5 until convergence
End of explanation
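# Optional sanity check (sketch): per steps 1 and 5 above, every aligned shape and the
# estimated mean should end up with (near-)unit norm once the iteration has converged.
print(np.allclose(np.sum(np.square(d_aligned), axis=1), 1.0, atol=1e-6))
print(np.isclose(np.sum(np.square(mean)), 1.0, atol=1e-6))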
d_aligned.shape
from sklearn.decomposition import PCA
pca = PCA(n_components=8)
pca.fit(d_aligned)
print(pca.explained_variance_ratio_)
cov_mat = np.cov(d_aligned.T)
print(cov_mat.shape)
eig_values, eig_vectors = np.linalg.eig(cov_mat)
print(eig_values.shape, eig_vectors.shape)
num_eigs = 8
Phi_matrix = eig_vectors[:,:num_eigs]
Phi_matrix.shape
Explanation: PCA
End of explanation
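# Optional cross-check (sketch): the leading variance ratios from the manual
# eigen-decomposition above should roughly agree with sklearn's explained_variance_ratio_.
ratios = np.sort(np.real(eig_values))[::-1] / np.real(eig_values).sum()
print(ratios[:num_eigs])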
# * ()
mean_matrix = np.reshape(mean, (152,1), 'F')
d_aligned_matrix = np.matrix(d_aligned)
delta = d_aligned_matrix.T - mean_matrix
b = (np.matrix(Phi_matrix).T * delta).T
b.shape
mean.dump('models/meanshape-ocvfmt.pkl')
eig_vectors.dump('models/eigenvectors-ocvfmt.pkl')
eig_values.dump('models/eigenvalues-ocvfmt.pkl')
Phi_matrix.dump('models/phimatrix.pkl')
b.dump('models/bvector.pkl')
d_aligned.dump('models/alignedfaces.pkl')
mean_matrix.dump('models/meanvector.pkl')
Explanation: Solving b vector
$$b = \Phi^T \left(x - \bar{x}\right)$$
End of explanation |
14,387 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Bokeh Visualization Demo
Recreating Hans Rosling's "The Health and Wealth of Nations"
This notebook is intended to illustrate some of the utilities of the Python Bokeh visualization library.
Step1: Setting up the data
The plot animates with the slider showing the data over time from 1964 to 2013. We can think of each year as a separate static plot, and when the slider moves, we use the Callback to change the data source that is driving the plot.
We could use bokeh-server to drive this change, but as the data is not too big we can also pass all the datasets to the javascript at once and switch between them on the client side.
This means that we need to build one data source for each year that we have data for and are going to switch between using the slider. We build them and add them to a dictionary sources that holds them under a key that is the name of the year prefixed with a _.
Step2: sources looks like this
{'_1964'
Step3: Build the plot
Step4: Build the axes
Step5: Add the background year text
We add this first so it is below all the other glyphs
Step6: Add the bubbles and hover
We add the bubbles using the Circle glyph. We start from the first year of data and that is our source that drives the circles (the other sources will be used later).
plot.add_glyph returns the renderer, and we pass this to the HoverTool so that hover only happens for the bubbles on the page and not other glyph elements.
Step7: Add the legend
We manually build the legend by adding circles and texts to the upper-right portion of the plot.
Step9: Add the slider and callback
Next we add the slider widget and the JS callback code which changes the data of the renderer_source (powering the bubbles / circles) and the data of the text_source (powering background text). After we've set() the data we need to trigger() a change. slider, renderer_source, text_source are all available because we add them as args to Callback.
It is the combination of sources = %s % (js_source_array) in the JS and Callback(args=sources...) that provides the ability to look-up, by year, the JS version of our Python-made ColumnDataSource.
Step10: Render together with a slider
Last but not least, we put the chart and the slider together in a layout and diplay it inline in the notebook. | Python Code:
import numpy as np
import pandas as pd
from bokeh.embed import file_html
from bokeh.io import output_notebook, show
from bokeh.layouts import layout
from bokeh.models import (
ColumnDataSource, Plot, Circle, Range1d, LinearAxis, HoverTool,
Text, SingleIntervalTicker, Slider, CustomJS)
from bokeh.palettes import Spectral6
output_notebook()
import bokeh.sampledata
bokeh.sampledata.download()
Explanation: Bokeh Visualization Demo
Recreating Hans Rosling's "The Health and Wealth of Nations"
This notebook is intended to illustrate some of the utilities of the Python Bokeh visualization library.
End of explanation
def process_data():
from bokeh.sampledata.gapminder import fertility, life_expectancy, population, regions
# Make the column names ints not strings for handling
columns = list(fertility.columns)
years = list(range(int(columns[0]), int(columns[-1])))
rename_dict = dict(zip(columns, years))
fertility = fertility.rename(columns=rename_dict)
life_expectancy = life_expectancy.rename(columns=rename_dict)
population = population.rename(columns=rename_dict)
regions = regions.rename(columns=rename_dict)
# Turn population into bubble sizes. Use min_size and factor to tweak.
scale_factor = 200
population_size = np.sqrt(population / np.pi) / scale_factor
min_size = 3
population_size = population_size.where(population_size >= min_size).fillna(min_size)
# Use pandas categories and categorize & color the regions
regions.Group = regions.Group.astype('category')
regions_list = list(regions.Group.cat.categories)
def get_color(r):
return Spectral6[regions_list.index(r.Group)]
regions['region_color'] = regions.apply(get_color, axis=1)
return fertility, life_expectancy, population_size, regions, years, regions_list
fertility_df, life_expectancy_df, population_df_size, regions_df, years, regions = process_data()
sources = {}
region_color = regions_df['region_color']
region_color.name = 'region_color'
for year in years:
fertility = fertility_df[year]
fertility.name = 'fertility'
life = life_expectancy_df[year]
life.name = 'life'
population = population_df_size[year]
population.name = 'population'
new_df = pd.concat([fertility, life, population, region_color], axis=1)
sources['_' + str(year)] = ColumnDataSource(new_df)
Explanation: Setting up the data
The plot animates with the slider showing the data over time from 1964 to 2013. We can think of each year as a separate static plot, and when the slider moves, we use the Callback to change the data source that is driving the plot.
We could use bokeh-server to drive this change, but as the data is not too big we can also pass all the datasets to the javascript at once and switch between them on the client side.
This means that we need to build one data source for each year that we have data for and are going to switch between using the slider. We build them and add them to a dictionary sources that holds them under a key that is the name of the year prefixed with a _.
End of explanation
dictionary_of_sources = dict(zip([x for x in years], ['_%s' % x for x in years]))
js_source_array = str(dictionary_of_sources).replace("'", "")
Explanation: sources looks like this
{'_1964': <bokeh.models.sources.ColumnDataSource at 0x7f7e7d165cc0>,
'_1965': <bokeh.models.sources.ColumnDataSource at 0x7f7e7d165b00>,
'_1966': <bokeh.models.sources.ColumnDataSource at 0x7f7e7d1656a0>,
'_1967': <bokeh.models.sources.ColumnDataSource at 0x7f7e7d165ef0>,
'_1968': <bokeh.models.sources.ColumnDataSource at 0x7f7e7e9dac18>,
'_1969': <bokeh.models.sources.ColumnDataSource at 0x7f7e7e9da9b0>,
'_1970': <bokeh.models.sources.ColumnDataSource at 0x7f7e7e9da668>,
'_1971': <bokeh.models.sources.ColumnDataSource at 0x7f7e7e9da0f0>...
We will pass this dictionary to the Callback. In doing so, we will find that in our javascript we have an object called, for example 1964 that refers to our ColumnDataSource. Note that we needed the prefixing as JS objects cannot begin with a number.
Finally we construct a string that we can insert into our javascript code to define an object.
The string looks like this: {1962: _1962, 1963: _1963, ....}
Note the keys of this object are integers and the values are the references to our ColumnDataSources from above. So that now, in our JS code, we have an object that's storing all of our ColumnDataSources and we can look them up.
End of explanation
# Set up the plot
xdr = Range1d(1, 9)
ydr = Range1d(20, 100)
plot = Plot(
x_range=xdr,
y_range=ydr,
plot_width=800,
plot_height=400,
outline_line_color=None,
toolbar_location=None,
min_border=20,
)
Explanation: Build the plot
End of explanation
AXIS_FORMATS = dict(
minor_tick_in=None,
minor_tick_out=None,
major_tick_in=None,
major_label_text_font_size="10pt",
major_label_text_font_style="normal",
axis_label_text_font_size="10pt",
axis_line_color='#AAAAAA',
major_tick_line_color='#AAAAAA',
major_label_text_color='#666666',
major_tick_line_cap="round",
axis_line_cap="round",
axis_line_width=1,
major_tick_line_width=1,
)
xaxis = LinearAxis(ticker=SingleIntervalTicker(interval=1), axis_label="Children per woman (total fertility)", **AXIS_FORMATS)
yaxis = LinearAxis(ticker=SingleIntervalTicker(interval=20), axis_label="Life expectancy at birth (years)", **AXIS_FORMATS)
plot.add_layout(xaxis, 'below')
plot.add_layout(yaxis, 'left')
Explanation: Build the axes
End of explanation
# Add the year in background (add before circle)
text_source = ColumnDataSource({'year': ['%s' % years[0]]})
text = Text(x=2, y=35, text='year', text_font_size='150pt', text_color='#EEEEEE')
plot.add_glyph(text_source, text)
Explanation: Add the background year text
We add this first so it is below all the other glyphs
End of explanation
# Add the circle
renderer_source = sources['_%s' % years[0]]
circle_glyph = Circle(
x='fertility', y='life', size='population',
fill_color='region_color', fill_alpha=0.8,
line_color='#7c7e71', line_width=0.5, line_alpha=0.5)
circle_renderer = plot.add_glyph(renderer_source, circle_glyph)
# Add the hover (only against the circle and not other plot elements)
tooltips = "@index"
plot.add_tools(HoverTool(tooltips=tooltips, renderers=[circle_renderer]))
Explanation: Add the bubbles and hover
We add the bubbles using the Circle glyph. We start from the first year of data and that is our source that drives the circles (the other sources will be used later).
plot.add_glyph returns the renderer, and we pass this to the HoverTool so that hover only happens for the bubbles on the page and not other glyph elements.
End of explanation
text_x = 7
text_y = 95
for i, region in enumerate(regions):
plot.add_glyph(Text(x=text_x, y=text_y, text=[region], text_font_size='10pt', text_color='#666666'))
plot.add_glyph(Circle(x=text_x - 0.1, y=text_y + 2, fill_color=Spectral6[i], size=10, line_color=None, fill_alpha=0.8))
text_y = text_y - 5
Explanation: Add the legend
We manually build the legend by adding circles and texts to the upper-right portion of the plot.
End of explanation
# Add the slider
code = """
    var year = slider.get('value'),
        sources = %s,
        new_source_data = sources[year].get('data');
    renderer_source.set('data', new_source_data);
    text_source.set('data', {'year': [String(year)]});
""" % js_source_array
callback = CustomJS(args=sources, code=code)
slider = Slider(start=years[0], end=years[-1], value=1, step=1, title="Year", callback=callback)
callback.args["renderer_source"] = renderer_source
callback.args["slider"] = slider
callback.args["text_source"] = text_source
Explanation: Add the slider and callback
Next we add the slider widget and the JS callback code which changes the data of the renderer_source (powering the bubbles / circles) and the data of the text_source (powering background text). After we've set() the data we need to trigger() a change. slider, renderer_source, text_source are all available because we add them as args to Callback.
It is the combination of sources = %s % (js_source_array) in the JS and Callback(args=sources...) that provides the ability to look-up, by year, the JS version of our Python-made ColumnDataSource.
End of explanation
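Note that the cell above uses the pre-1.0 Bokeh callback API (slider.get/set and the callback= keyword). As a hedged, untested sketch only, assuming a newer Bokeh (1.x or later) where those were replaced by js_on_change and direct .data assignment, the same wiring could look like this (new_code, new_callback and new_slider are illustrative names):
new_code = """
    var year = cb_obj.value,
        sources = %s,
        new_source_data = sources[year].data;
    renderer_source.data = new_source_data;
    text_source.data = {'year': [String(year)]};
""" % js_source_array
new_callback = CustomJS(args=dict(sources, renderer_source=renderer_source,
                                  text_source=text_source), code=new_code)
new_slider = Slider(start=years[0], end=years[-1], value=years[0], step=1, title="Year")
new_slider.js_on_change('value', new_callback)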
# Stick the plot and the slider together
show(layout([[plot], [slider]], sizing_mode='scale_width'))
Explanation: Render together with a slider
Last but not least, we put the chart and the slider together in a layout and display it inline in the notebook.
End of explanation |
14,388 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Implement Logistic Classification for classifying tweets / text
Given a tweet, we will have to decide whether the tweet is positive or negative
Step1: Load and Analyse the dataset
Step2: Processing of the data to create word frequencies list
Step3: Model Training
Step4: Making your own predictions | Python Code:
import numpy as np
import pandas as pd
import nltk
from nltk.corpus import twitter_samples
nltk.download('twitter_samples')
nltk.download('stopwords')
Explanation: Implement Logistic Classification for classifying tweets / text
Given a tweet, we will have to decide whether the tweet is positive or negative
End of explanation
# load positive tweets
positive_tweets = twitter_samples.strings('positive_tweets.json')
positive_tweets[:3]
# load negative tweets
negative_tweets = twitter_samples.strings('negative_tweets.json')
negative_tweets[:3]
## total number of pos and neg tweets
print(f"Total No. of Positive tweets: {len(positive_tweets)}")
print(f'Total No. of Negative tweets: {len(negative_tweets)}')
## generate a train and test dataset with equal combination of pos and neg tweets
## in total 10000 tweets, divided into an 8000-tweet train set and a 2000-tweet test set
train_pos = positive_tweets[:4000]
train_neg = negative_tweets[:4000]
test_pos = positive_tweets[4000:]
test_neg = negative_tweets[4000:]
# combining all of them together
train_data = train_pos + train_neg
test_data = test_pos + test_neg
print(f'Total number of data count train data: {len(train_data)} and test data : {len(test_data)}')
# creating labels for the datasets
train_label = np.append(np.ones((len(train_pos),1)), np.zeros((len(train_neg),1)), axis=0)
test_label = np.append(np.ones((len(test_pos),1)), np.zeros((len(test_neg),1)), axis=0)
print(f'Shape of Train and Test labels : {train_label.shape} and {test_label.shape}')
Explanation: Load and Analyse the dataset
End of explanation
from nltk.corpus import stopwords
import re
def clean_tweet(tweet):
'''
clean the tweet to tokenise, remove stop words and stem the words
'''
stop_words = stopwords.words('english')
#print(f'Total stop words in the vocab: {len(stop_words)}')
tweet = re.sub(r'#','',tweet) ## remove the # symbol
tweet = re.sub(r'https?:\/\/.*[\r\n]*','',tweet) ## remove any hyperlinks
tweet = re.sub(r'^RT[\s]+','',tweet) ## remove any Retweets (RT)
tokenizer = nltk.tokenize.TweetTokenizer(preserve_case=False, strip_handles=True, reduce_len=True)
tweet_token = tokenizer.tokenize(tweet)
tweet_cleaned = []
for word in tweet_token:
if word not in stop_words:
tweet_cleaned.append(word)
return tweet_cleaned
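# Note (added commentary): the docstring above promises stemming, but the loop only
# tokenizes and drops stop words. If stemming is wanted, one hedged option is NLTK's
# PorterStemmer, e.g. `from nltk.stem import PorterStemmer; stemmer = PorterStemmer()`
# and then append `stemmer.stem(word)` instead of `word` inside the loop.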
def build_tweet_frequency(tweets, label):
'''
Build a vocab of tweet word frequencies across corpus.
@input: Tweets - list of tweets
label - Array of tweet sentiments
@output: a dict of (word, label):frequency
'''
label_list = np.squeeze(label).tolist()
freq = {}
for t, l in zip(tweets, label_list):
for word in clean_tweet(t):
word_pair = (word,l)
if word_pair in freq:
freq[word_pair] +=1
else:
freq[word_pair] =1
return freq
train_data[0] ## 0, 500
clean_tweet(train_data[0])
tweet_freq_vocab = build_tweet_frequency(train_data, train_label)
tweet_freq_vocab.get(('sad',0))
def extract_features(tweet, vocab):
'''
Given a tweet and frequency vocab, generate a list of
@input:
tweet - tweet we want to extract features from
vocab - frequency vocab dictionary
@output:
tweet_feature - a numpy array with [label, total_pos_freq, total_neg_freq]
'''
cleaned_tweet = clean_tweet(tweet)
#print(cleaned_tweet)
tweet_feature = np.zeros((1,3))
tweet_feature[0,0] = 1
for words in cleaned_tweet: # iterate over the tweet to get the number of pos and neg tweet freqs
#print(vocab.get((words,1.0),0), " --- ", vocab.get((words,0.0),0))
tweet_feature[0,1] += vocab.get((words,1.0),0)
tweet_feature[0,2] += vocab.get((words,0.0),0)
return tweet_feature
extract_features(train_data[0],tweet_freq_vocab)
extract_features('Hi How are you? I am doing good', tweet_freq_vocab)
Explanation: Processing of the data to create word frequencies list
End of explanation
## Generate the vector word frequency for all of the training tweets
train_X = np.zeros((len(train_data),3))
for i in range(len(train_data)):
train_X[i,:] = extract_features(train_data[i], tweet_freq_vocab)
train_y = train_label
test_X = np.zeros((len(test_data),3))
for i in range(len(test_data)):
test_X[i,:] = extract_features(test_data[i], tweet_freq_vocab)
test_y = test_label
train_X[0:5]
train_y.shape
from sklearn.linear_model import LogisticRegression
model = LogisticRegression(solver='liblinear')
model.fit(train_X, train_y)
predictions = model.predict(test_X)
from sklearn.metrics import accuracy_score
accuracy_score(test_y, predictions)
from sklearn.metrics import classification_report
print(classification_report(test_y,predictions))
Explanation: Model Training
End of explanation
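For completeness, since the title says "implement" but the cell above delegates the fitting to scikit-learn, here is a hedged, from-scratch sketch of logistic regression on the same three features. The tiny learning rate is an assumption to keep the large frequency sums from overflowing the sigmoid; it is not part of the original notebook.
def sigmoid(z):
    return 1 / (1 + np.exp(-z))
def train_logistic_gd(X, y, lr=1e-9, n_iters=1500):
    theta = np.zeros((X.shape[1], 1))
    for _ in range(n_iters):
        h = sigmoid(X @ theta)                     # predicted probabilities
        theta -= lr * (X.T @ (h - y)) / len(y)     # gradient step on the log-loss
    return theta
theta = train_logistic_gd(train_X, train_y)
scratch_preds = (sigmoid(test_X @ theta) >= 0.5).astype(int)
accuracy_score(test_y, scratch_preds)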
my_tweet1 = 'i liked my prediction score. happy with the results'
model.predict(extract_features(my_tweet1,tweet_freq_vocab))
my_tweet2 = 'i am sad with the result of the football match'
model.predict(extract_features(my_tweet2,tweet_freq_vocab))
my_tweet3 = 'shame that i couldnt get an entry to the competition'
model.predict(extract_features(my_tweet3,tweet_freq_vocab))
my_tweet3 = 'this movie should have been great.'
model.predict(extract_features(my_tweet3,tweet_freq_vocab)) ## misclassified example
my_tweet3 = 'i liked my prediction score. not happy with the results'
model.predict(extract_features(my_tweet3,tweet_freq_vocab))
Explanation: Making your own predictions
End of explanation |
14,389 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
Step3: Explore the Data
Play around with view_sentence_range to view different parts of the data.
Step6: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below
Step9: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token
Step11: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
Step13: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step15: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below
Step18: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders
Step21: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The Rnn size should be set using rnn_size
- Initialize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
Step24: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
Step27: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final_state state in the following tuple (Outputs, FinalState)
Step30: Build the Neural Network
Apply the functions you implemented above to
Step33: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements
Step35: Neural Network Training
Hyperparameters
Tune the following parameters
Step37: Build the Graph
Build the graph using the neural network you implemented.
Step39: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
Step41: Save Parameters
Save seq_length and save_dir for generating a new TV script.
Step43: Checkpoint
Step46: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names
Step49: Choose Word
Implement the pick_word() function to select the next word using probabilities.
Step51: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate. | Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
Explanation: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
End of explanation
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
import numpy as np
import problem_unittests as tests
def create_lookup_tables(text):
    """
    Create lookup tables for vocabulary
    :param text: The text of tv scripts split into words
    :return: A tuple of dicts (vocab_to_int, int_to_vocab)
    """
# TODO: Implement Function
return None, None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_create_lookup_tables(create_lookup_tables)
Explanation: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:
- Lookup Table
- Tokenize Punctuation
Lookup Table
To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:
- Dictionary to go from the words to an id, we'll call vocab_to_int
- Dictionary to go from the id to word, we'll call int_to_vocab
Return these dictionaries in the following tuple (vocab_to_int, int_to_vocab)
End of explanation
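One possible body for the stub above, an illustrative sketch rather than the project's reference solution:
def create_lookup_tables(text):
    """Build word<->id lookup tables (sketch)."""
    vocab = set(text)
    vocab_to_int = {word: idx for idx, word in enumerate(vocab)}
    int_to_vocab = {idx: word for word, idx in vocab_to_int.items()}
    return vocab_to_int, int_to_vocab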
def token_lookup():
    """
    Generate a dict to turn punctuation into a token.
    :return: Tokenize dictionary where the key is the punctuation and the value is the token
    """
# TODO: Implement Function
return None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_tokenize(token_lookup)
Explanation: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:
- Period ( . )
- Comma ( , )
- Quotation Mark ( " )
- Semicolon ( ; )
- Exclamation mark ( ! )
- Question mark ( ? )
- Left Parentheses ( ( )
- Right Parentheses ( ) )
- Dash ( -- )
- Return ( \n )
This dictionary will be used to tokenize the symbols and add the delimiter (space) around them. This separates each symbol as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token "dash", try using something like "||dash||".
End of explanation
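A hedged sketch of the dictionary described above (the exact token spellings other than ||Exclamation_Mark|| are assumptions; any unambiguous names work):
def token_lookup():
    """Map punctuation to delimiter-safe tokens (sketch)."""
    return {'.': '||Period||', ',': '||Comma||', '"': '||Quotation_Mark||',
            ';': '||Semicolon||', '!': '||Exclamation_Mark||', '?': '||Question_Mark||',
            '(': '||Left_Parentheses||', ')': '||Right_Parentheses||',
            '--': '||Dash||', '\n': '||Return||'}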
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
Explanation: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below:
- get_inputs
- get_init_cell
- get_embed
- build_rnn
- build_nn
- get_batches
Check the Version of TensorFlow and Access to GPU
End of explanation
def get_inputs():
    """
    Create TF Placeholders for input, targets, and learning rate.
    :return: Tuple (input, targets, learning rate)
    """
# TODO: Implement Function
return None, None, None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_inputs(get_inputs)
Explanation: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Input text placeholder named "input" using the TF Placeholder name parameter.
- Targets placeholder
- Learning Rate placeholder
Return the placeholders in the following tuple (Input, Targets, LearningRate)
End of explanation
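A minimal sketch using the TF 1.x placeholder API this project assumes:
def get_inputs():
    """Create TF placeholders for input, targets and learning rate (sketch)."""
    inputs = tf.placeholder(tf.int32, [None, None], name='input')
    targets = tf.placeholder(tf.int32, [None, None], name='targets')
    learning_rate = tf.placeholder(tf.float32, name='learning_rate')
    return inputs, targets, learning_rate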
def get_init_cell(batch_size, rnn_size):
    """
    Create an RNN Cell and initialize it.
    :param batch_size: Size of batches
    :param rnn_size: Size of RNNs
    :return: Tuple (cell, initialize state)
    """
# TODO: Implement Function
return None, None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_init_cell(get_init_cell)
Explanation: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The Rnn size should be set using rnn_size
- Initialize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
End of explanation
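A hedged sketch with the TF 1.x contrib API the rest of this notebook uses; the choice of two stacked layers is an arbitrary assumption:
def get_init_cell(batch_size, rnn_size):
    """Stack LSTM cells and name the zero state (sketch)."""
    cell = tf.contrib.rnn.MultiRNNCell(
        [tf.contrib.rnn.BasicLSTMCell(rnn_size) for _ in range(2)])
    initial_state = tf.identity(cell.zero_state(batch_size, tf.float32),
                                name='initial_state')
    return cell, initial_state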
def get_embed(input_data, vocab_size, embed_dim):
    """
    Create embedding for <input_data>.
    :param input_data: TF placeholder for text input.
    :param vocab_size: Number of words in vocabulary.
    :param embed_dim: Number of embedding dimensions
    :return: Embedded input.
    """
# TODO: Implement Function
return None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_embed(get_embed)
Explanation: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
End of explanation
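A hedged sketch with a trainable embedding matrix and tf.nn.embedding_lookup:
def get_embed(input_data, vocab_size, embed_dim):
    """Embed the integer-encoded input (sketch)."""
    embedding = tf.Variable(tf.random_uniform((vocab_size, embed_dim), -1, 1))
    return tf.nn.embedding_lookup(embedding, input_data)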
def build_rnn(cell, inputs):
    """
    Create a RNN using a RNN Cell
    :param cell: RNN Cell
    :param inputs: Input text data
    :return: Tuple (Outputs, Final State)
    """
# TODO: Implement Function
return None, None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_rnn(build_rnn)
Explanation: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final_state state in the following tuple (Outputs, FinalState)
End of explanation
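A hedged sketch following the two bullet points above:
def build_rnn(cell, inputs):
    """Run the cell with tf.nn.dynamic_rnn and name the final state (sketch)."""
    outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
    final_state = tf.identity(final_state, name='final_state')
    return outputs, final_state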
def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):
    """
    Build part of the neural network
    :param cell: RNN cell
    :param rnn_size: Size of rnns
    :param input_data: Input data
    :param vocab_size: Vocabulary size
    :param embed_dim: Number of embedding dimensions
    :return: Tuple (Logits, FinalState)
    """
# TODO: Implement Function
return None, None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_nn(build_nn)
Explanation: Build the Neural Network
Apply the functions you implemented above to:
- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.
- Build RNN using cell and your build_rnn(cell, inputs) function.
- Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.
Return the logits and final state in the following tuple (Logits, FinalState)
End of explanation
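A hedged sketch that chains the two helpers above and adds the linear output layer:
def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):
    """Embed, run the RNN, and project to vocab-sized logits (sketch)."""
    embed = get_embed(input_data, vocab_size, embed_dim)
    outputs, final_state = build_rnn(cell, embed)
    logits = tf.contrib.layers.fully_connected(outputs, vocab_size, activation_fn=None)
    return logits, final_state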
def get_batches(int_text, batch_size, seq_length):
    """
    Return batches of input and target
    :param int_text: Text with the words replaced by their ids
    :param batch_size: The size of batch
    :param seq_length: The length of sequence
    :return: Batches as a Numpy array
    """
# TODO: Implement Function
return None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_batches(get_batches)
Explanation: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:
- The first element is a single batch of input with the shape [batch size, sequence length]
- The second element is a single batch of targets with the shape [batch size, sequence length]
If you can't fill the last batch with enough data, drop the last batch.
For example, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20], 3, 2) would return a Numpy array of the following:
```
[
# First Batch
[
# Batch of Input
[[ 1 2], [ 7 8], [13 14]]
# Batch of targets
[[ 2 3], [ 8 9], [14 15]]
]
# Second Batch
[
# Batch of Input
[[ 3 4], [ 9 10], [15 16]]
# Batch of targets
[[ 4 5], [10 11], [16 17]]
]
# Third Batch
[
# Batch of Input
[[ 5 6], [11 12], [17 18]]
# Batch of targets
[[ 6 7], [12 13], [18 1]]
]
]
```
Notice that the last target value in the last batch is the first input value of the first batch. In this case, 1. This is a common technique used when creating sequence batches, although it is rather unintuitive.
End of explanation
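A hedged numpy sketch that reproduces the example output above, including the wrap-around of the very last target:
def get_batches(int_text, batch_size, seq_length):
    """Split the id sequence into (n_batches, 2, batch_size, seq_length) (sketch)."""
    words_per_batch = batch_size * seq_length
    n_batches = len(int_text) // words_per_batch
    xdata = np.array(int_text[:n_batches * words_per_batch])
    ydata = np.roll(xdata, -1)                    # targets = inputs shifted by one
    x_batches = np.split(xdata.reshape(batch_size, -1), n_batches, axis=1)
    y_batches = np.split(ydata.reshape(batch_size, -1), n_batches, axis=1)
    return np.array(list(zip(x_batches, y_batches)))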
# Number of Epochs
num_epochs = None
# Batch Size
batch_size = None
# RNN Size
rnn_size = None
# Embedding Dimension Size
embed_dim = None
# Sequence Length
seq_length = None
# Learning Rate
learning_rate = None
# Show stats for every n number of batches
show_every_n_batches = None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
save_dir = './save'
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set num_epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set embed_dim to the size of the embedding.
Set seq_length to the length of sequence.
Set learning_rate to the learning rate.
Set show_every_n_batches to the number of batches the neural network should print progress.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
Explanation: Save Parameters
Save seq_length and save_dir for generating a new TV script.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
Explanation: Checkpoint
End of explanation
def get_tensors(loaded_graph):
    """
    Get input, initial state, final state, and probabilities tensor from <loaded_graph>
    :param loaded_graph: TensorFlow graph loaded from file
    :return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
    """
# TODO: Implement Function
return None, None, None, None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_tensors(get_tensors)
Explanation: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names:
- "input:0"
- "initial_state:0"
- "final_state:0"
- "probs:0"
Return the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
End of explanation
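A hedged sketch that simply looks up the four named tensors listed above:
def get_tensors(loaded_graph):
    """Fetch the named tensors from the loaded graph (sketch)."""
    return (loaded_graph.get_tensor_by_name('input:0'),
            loaded_graph.get_tensor_by_name('initial_state:0'),
            loaded_graph.get_tensor_by_name('final_state:0'),
            loaded_graph.get_tensor_by_name('probs:0'))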
def pick_word(probabilities, int_to_vocab):
    """
    Pick the next word in the generated text
    :param probabilities: Probabilities of the next word
    :param int_to_vocab: Dictionary of word ids as the keys and words as the values
    :return: String of the predicted word
    """
# TODO: Implement Function
return None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_pick_word(pick_word)
Explanation: Choose Word
Implement the pick_word() function to select the next word using probabilities.
End of explanation
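A hedged sketch that samples from the distribution instead of always taking the argmax (sampling keeps the generated script from looping on the most likely word):
def pick_word(probabilities, int_to_vocab):
    """Sample the next word id according to its probability (sketch)."""
    idx = np.random.choice(len(probabilities), p=probabilities)
    return int_to_vocab[idx]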
gen_length = 200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
Explanation: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate.
End of explanation |
14,390 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Storing and loading questions as a serialized object.
As questions.csv is not easy to use by itself, it might be helpful to turn the csv file into a serialized object. In this case, we can use pickle, Python object serialization.
https
Step1: Let's check how many items in the dictionary.
Step2: Yes, 7949 items is right. How about the question numbers? Is the numbering continuous or not? We might want to check the first and last 10 items.
Step3: Yes, the numbering is not continuous in terms of qid, but that's OK. What about looking at just one question? How can we do it? Just look at qid 1.
Step4: Yes, it's a dictionary, so you can use the usual dictionary methods. Check this out.
Step5: How can we figure out a question's length without tokenizing the question itself?
Step6: Make questions pickled data
As you know, reading the csv and converting it into a dictionary takes more than a minute. Once it is a dictionary, we can save it as pickled data and load it whenever we need it. It is really simple and fast. Look at that!
Wait! We will use gzip.open instead of open because the pickled file is too big, so we will use compression. It's easy, and the compressed file is only about 1/10 the size of the original. Of course, it will take a few more seconds than the plain version.
original
Step7: Yes, now we can load pickled data as a variable.
Step8: Yes, it took only a few seconds. I will save it, commit it, and push it to GitHub, so you can use the pickled data instead of converting questions.csv | Python Code:
import csv
import gzip
import cPickle as pickle
from collections import defaultdict
import yaml
question_reader = csv.reader(open("../data/questions.csv"))
question_header = ["answer", "group", "category", "question", "pos_token"]
questions = defaultdict(dict)
for row in question_reader:
question = {}
row[-1] = yaml.load(row[-1].replace(": u'", ": '"))
qid = int(row.pop(0))
for index, item in enumerate(row):
question[question_header[index]] = item
questions[qid] = question
Explanation: Storing and loading questions as a serialized object.
As questions.csv is not easy to use by itself, it might be helpful to turn the csv file into a serialized object. In this case, we can use pickle, Python object serialization.
https://docs.python.org/2/library/pickle.html
Loading the csv and making it a dictionary
First of all, we need to load the questions.csv file and convert it into a dictionary. If the key of the dictionary is the question id (qid), we can find questions by key.
End of explanation
print len(questions)
Explanation: Let's check how many items in the dictionary.
End of explanation
print sorted(questions.keys())[:10]
print sorted(questions.keys())[-10:]
Explanation: Yes, 7949 items is right. How about the question numbers? Is the numbering continuous or not? We might want to check the first and last 10 items.
End of explanation
questions[1]
Explanation: Yes, the numbering is not continuous in terms of qid, but that's OK. What about looking at just one question? How can we do it? Just look at qid 1.
End of explanation
questions[1].keys()
questions[1]['answer']
questions[1]['pos_token']
questions[1]['pos_token'].keys()
questions[1]['pos_token'].values()
questions[1]['pos_token'].items()
Explanation: Yes, it's a dictionary, so you can use the usual dictionary methods. Check this out.
End of explanation
max(questions[1]['pos_token'].keys())
Explanation: How can we figure out a question's length without tokenizing the question itself?
End of explanation
with gzip.open("questions.pklz", "wb") as output:
pickle.dump(questions, output)
Explanation: Make questions pickled data
As you know, reading the csv and converting it into a dictionary takes more than a minute. Once it is a dictionary, we can save it as pickled data and load it whenever we need it. It is really simple and fast. Look at that!
Wait! We will use gzip.open instead of open because the pickled file is too big, so we will use compression. It's easy, and the compressed file is only about 1/10 the size of the original. Of course, it will take a few more seconds than the plain version.
original: 1 sec in my PC
compressed: 5 sec in my PC
Also, "wb" means writing as binary mode, and "rb" means reading file as binary mode.
End of explanation
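To back up the rough 1/10 size claim above, a hedged comparison of the two files on disk (the plain-pickle filename is just an illustration, not part of the original notebook):
import os
with open("questions_plain.pkl", "wb") as output:
    pickle.dump(questions, output)
print os.path.getsize("questions_plain.pkl"), os.path.getsize("questions.pklz")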
with gzip.open("questions.pklz", "rb") as fp:
questions_new = pickle.load(fp)
print len(questions_new)
Explanation: Yes, now we can load pickled data as a variable.
End of explanation
print questions == questions
print questions == questions_new
questions_new[0] = 1
print questions == questions_new
Explanation: Yes, it took only a few seconds. I will save it, commit it, and push it to GitHub, so you can use the pickled data instead of converting questions.csv
End of explanation |
14,391 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
NDArray Tutorial
In MXNet, NDArray is the core datastructure for all mathematical computations.
An NDArray represents a multidimensional, fixed-size homogenous array.
If you're familiar with the scientific computing python package NumPy,
you might notice that mxnet.ndarray is similar to numpy.ndarray.
Like the corresponding NumPy data structure, MXNet's NDArray enables imperative computation.
So you might wonder, why not just use NumPy?
MXNet offers two compelling advantages.
First, MXNet's NDArray supports fast execution
on a wide range of hardware configurations,
including CPU, GPU, and multi-GPU machines.
MXNet also scales to distribute systems in the cloud.
Second, MXNet's NDArray executes code lazily,
allowing it to automatically parallelize multiple operations
across the available hardware.
The basics
An NDArray is a multidimensional array of numbers with the same type.
We could represent the coordinates of a point in 3D space,
e.g. [2, 1, 6] as a 1D array with shape (3).
Similarly, we could represent a 2D array.
Below, we present an array with length 2 along the first axis and length 3 along the second axis.
[[0, 1, 2]
[3, 4, 5]]
Note that here the use of "dimension" is overloaded.
When we say a 2D array, we mean an array with 2 axes, not an array with two components.
Each NDArray supports some important attributes that you'll often want to query
Step1: We can also create an MXNet NDArray from an numpy.ndarray object
Step2: We can specify the element type with the option dtype, which accepts a numpy type. By default, float32 is used
Step3: If we know the size of the desired NDArray, but not the element values,
MXNet offers several functions to create arrays with placeholder content
Step4: Printing Arrays
When inspecting the contents of an NDArray, it's often convenient
to first extract its contents as a numpy.ndarray using the asnumpy function.
Numpy uses the following layout
Step5: Basic Operations
When applied to NDArrays, the standard arithmetic operators apply elementwise calculations. The returned value is a new array whose content contains the result.
Step6: As in NumPy, * represents element-wise multiplication. For matrix-matrix multiplication, use dot.
Step7: The assignment operators such as += and *= modify arrays in place, and thus don't allocate new memory to create a new array.
Step8: Indexing and Slicing
The slice operator [] applies on axis 0.
Step9: We can also slice a particular axis with the method slice_axis
Step10: Shape Manipulation
Using reshape, we can manipulate any arrays shape as long as the size remains unchanged.
Step11: The concatenate method stacks multiple arrays along the first axis. (Their shapes must be the same along the other axes).
Step12: Reduce
Some functions, like sum and mean reduce arrays to scalars.
Step13: We can also reduce an array along a particular axis
Step14: Broadcast
We can also broadcast an array. Broadcasting operations, duplicate an array's value along an axis with length 1. The following code broadcasts along axis 1
Step15: It's possible to simultaneously broadcast along multiple axes. In the following example, we broadcast along axes 1 and 2
Step16: Broadcasting can be applied automatically when executing some operations, e.g. * and + on arrays of different shapes.
Step17: Copies
When assigning an NDArray to another Python variable, we copy a reference to the same NDArray. However, we often need to make a copy of the data, so that we can manipulate the new array without overwriting the original values.
Step18: The copy method makes a deep copy of the array and its data
Step19: The above code allocates a new NDArray and then assigns to b. When we do not want to allocate additional memory, we can use the copyto method or the slice operator [] instead.
Step20: Advanced Topics
MXNet's NDArray offers some advanced features that differentiate it from the offerings you'll find in most other libraries.
GPU Support
By default, NDArray operators are executed on CPU. But with MXNet, it's easy to switch to another computation resource, such as GPU, when available. Each NDArray's device information is stored in ndarray.context. When MXNet is compiled with flag USE_CUDA=1 and the machine has at least one NVIDIA GPU, we can cause all computations to run on GPU 0 by using context mx.gpu(0), or simply mx.gpu(). When we have access to two or more GPUs, the 2nd GPU is represented by mx.gpu(1), etc.
Step21: We can also explicitly specify the context when creating an array
Step22: Currently, MXNet requires two arrays to sit on the same device for computation. There are several methods for copying data between devices.
Step23: Serialize From/To (Distributed) Filesystems
MXNet offers two simple ways to save (load) data to (from) disk. The first way is to use pickle, as you might with any other Python objects. NDArray is pickle-compatible.
Step24: The second way is to directly dump to disk in binary format by using the save and load methods. We can save/load a single NDArray, or a list of NDArrays
Step25: It's also possible to save or load a dict of NDArrays in this fashion
Step28: The load and save methods are preferable to pickle in two respects
1. When using these methods, you can save data from within the Python interface and then use it later from another lanuage's binding. For example, if we save the data in Python
Step29: Besides analyzing data read and write dependencies, the backend engine is able to schedule computations with no dependency in parallel. For example, in the following code
Step30: Now we issue all workloads at the same time. The backend engine will try to parallel the CPU and GPU computations. | Python Code:
import mxnet as mx
# create a 1-dimensional array with a python list
a = mx.nd.array([1,2,3])
# create a 2-dimensional array with a nested python list
b = mx.nd.array([[1,2,3], [2,3,4]])
{'a.shape':a.shape, 'b.shape':b.shape}
Explanation: NDArray Tutorial
In MXNet, NDArray is the core datastructure for all mathematical computations.
An NDArray represents a multidimensional, fixed-size homogenous array.
If you're familiar with the scientific computing python package NumPy,
you might notice that mxnet.ndarray is similar to numpy.ndarray.
Like the corresponding NumPy data structure, MXNet's NDArray enables imperative computation.
So you might wonder, why not just use NumPy?
MXNet offers two compelling advantages.
First, MXNet's NDArray supports fast execution
on a wide range of hardware configurations,
including CPU, GPU, and multi-GPU machines.
MXNet also scales to distribute systems in the cloud.
Second, MXNet's NDArray executes code lazily,
allowing it to automatically parallelize multiple operations
across the available hardware.
The basics
An NDArray is a multidimensional array of numbers with the same type.
We could represent the coordinates of a point in 3D space,
e.g. [2, 1, 6] as a 1D array with shape (3).
Similarly, we could represent a 2D array.
Below, we present an array with length 2 along the first axis and length 3 along the second axis.
[[0, 1, 2]
[3, 4, 5]]
Note that here the use of "dimension" is overloaded.
When we say a 2D array, we mean an array with 2 axes, not an array with two components.
Each NDArray supports some important attributes that you'll often want to query:
ndarray.shape: The dimensions of the array. It is a tuple of integers indicating the length of the array along each axis. For a matrix with n rows and m columns, its shape will be (n, m).
ndarray.dtype: A numpy type object describing the type of its elements.
ndarray.size: the total number of components in the array - equal to the product of the components of its shape
ndarray.context: The device on which this array is stored, e.g. cpu() or gpu(1).
Array Creation
There are a few different ways to create an NDArray.
We can create an NDArray from a regular Python list or tuple by using the array function:
End of explanation
import numpy as np
import math
c = np.arange(15).reshape(3,5)
# create a 2-dimensional array from a numpy.ndarray object
a = mx.nd.array(c)
{'a.shape':a.shape}
Explanation: We can also create an MXNet NDArray from an numpy.ndarray object:
End of explanation
# float32 is used by default
a = mx.nd.array([1,2,3])
# create an int32 array
b = mx.nd.array([1,2,3], dtype=np.int32)
# create a 16-bit float array
c = mx.nd.array([1.2, 2.3], dtype=np.float16)
(a.dtype, b.dtype, c.dtype)
Explanation: We can specify the element type with the option dtype, which accepts a numpy type. By default, float32 is used:
End of explanation
# create a 2-dimensional array full of zeros with shape (2,3)
a = mx.nd.zeros((2,3))
# create a same shape array full of ones
b = mx.nd.ones((2,3))
# create a same shape array with all elements set to 7
c = mx.nd.full((2,3), 7)
# create a same shape whose initial content is random and
# depends on the state of the memory
d = mx.nd.empty((2,3))
Explanation: If we know the size of the desired NDArray, but not the element values,
MXNet offers several functions to create arrays with placeholder content:
End of explanation
b = mx.nd.arange(18).reshape((3,2,3))
b.asnumpy()
Explanation: Printing Arrays
When inspecting the contents of an NDArray, it's often convenient
to first extract its contents as a numpy.ndarray using the asnumpy function.
Numpy uses the following layout:
- The last axis is printed from left to right,
- The second-to-last is printed from top to bottom,
- The rest are also printed from top to bottom, with each slice separated from the next by an empty line.
End of explanation
a = mx.nd.ones((2,3))
b = mx.nd.ones((2,3))
# elementwise plus
c = a + b
# elementwise minus
d = - c
# elementwise pow and sin, and then transpose
e = mx.nd.sin(c**2).T
# elementwise max
f = mx.nd.maximum(a, c)
f.asnumpy()
Explanation: Basic Operations
When applied to NDArrays, the standard arithmetic operators apply elementwise calculations. The returned value is a new array whose content contains the result.
End of explanation
a = mx.nd.arange(4).reshape((2,2))
b = a * a
c = mx.nd.dot(a,a)
print("b: %s, \n c: %s" % (b.asnumpy(), c.asnumpy()))
Explanation: As in NumPy, * represents element-wise multiplication. For matrix-matrix multiplication, use dot.
End of explanation
a = mx.nd.ones((2,2))
b = mx.nd.ones(a.shape)
b += a
b.asnumpy()
Explanation: The assignment operators such as += and *= modify arrays in place, and thus don't allocate new memory to create a new array.
End of explanation
a = mx.nd.array(np.arange(6).reshape(3,2))
a[1:2] = 1
a[:].asnumpy()
Explanation: Indexing and Slicing
The slice operator [] applies on axis 0.
End of explanation
d = mx.nd.slice_axis(a, axis=1, begin=1, end=2)
d.asnumpy()
Explanation: We can also slice a particular axis with the method slice_axis
End of explanation
a = mx.nd.array(np.arange(24))
b = a.reshape((2,3,4))
b.asnumpy()
Explanation: Shape Manipulation
Using reshape, we can manipulate any arrays shape as long as the size remains unchanged.
End of explanation
a = mx.nd.ones((2,3))
b = mx.nd.ones((2,3))*2
c = mx.nd.concatenate([a,b])
c.asnumpy()
Explanation: The concatenate method stacks multiple arrays along the first axis. (Their shapes must be the same along the other axes).
End of explanation
a = mx.nd.ones((2,3))
b = mx.nd.sum(a)
b.asnumpy()
Explanation: Reduce
Some functions, like sum and mean reduce arrays to scalars.
End of explanation
c = mx.nd.sum_axis(a, axis=1)
c.asnumpy()
Explanation: We can also reduce an array along a particular axis:
End of explanation
a = mx.nd.array(np.arange(6).reshape(6,1))
b = a.broadcast_to((6,4)) #
b.asnumpy()
Explanation: Broadcast
We can also broadcast an array. Broadcasting operations, duplicate an array's value along an axis with length 1. The following code broadcasts along axis 1:
End of explanation
c = a.reshape((2,1,1,3))
d = c.broadcast_to((2,2,2,3))
d.asnumpy()
Explanation: It's possible to simultaneously broadcast along multiple axes. In the following example, we broadcast along axes 1 and 2:
End of explanation
a = mx.nd.ones((3,2))
b = mx.nd.ones((1,2))
c = a + b
c.asnumpy()
Explanation: Broadcasting can be applied automatically when executing some operations, e.g. * and + on arrays of different shapes.
End of explanation
a = mx.nd.ones((2,2))
b = a
b is a
Explanation: Copies
When assigning an NDArray to another Python variable, we copy a reference to the same NDArray. However, we often need to make a copy of the data, so that we can manipulate the new array without overwriting the original values.
End of explanation
b = a.copy()
b is a
Explanation: The copy method makes a deep copy of the array and its data:
End of explanation
b = mx.nd.ones(a.shape)
c = b
c[:] = a
d = b
a.copyto(d)
(c is b, d is b)
Explanation: The above code allocates a new NDArray and then assigns to b. When we do not want to allocate additional memory, we can use the copyto method or the slice operator [] instead.
End of explanation
def f():
a = mx.nd.ones((100,100))
b = mx.nd.ones((100,100))
c = a + b
print(c)
# in default mx.cpu() is used
f()
# change the default context to the first GPU
with mx.Context(mx.gpu()):
f()
Explanation: Advanced Topics
MXNet's NDArray offers some advanced features that differentiate it from the offerings you'll find in most other libraries.
GPU Support
By default, NDArray operators are executed on CPU. But with MXNet, it's easy to switch to another computation resource, such as GPU, when available. Each NDArray's device information is stored in ndarray.context. When MXNet is compiled with flag USE_CUDA=1 and the machine has at least one NVIDIA GPU, we can cause all computations to run on GPU 0 by using context mx.gpu(0), or simply mx.gpu(). When we have access to two or more GPUs, the 2nd GPU is represented by mx.gpu(1), etc.
End of explanation
a = mx.nd.ones((100, 100), mx.gpu(0))
a
Explanation: We can also explicitly specify the context when creating an array:
End of explanation
import mxnet as mx
a = mx.nd.ones((100,100), mx.cpu())
b = mx.nd.ones((100,100), mx.gpu())
c = mx.nd.ones((100,100), mx.gpu())
a.copyto(c) # copy from CPU to GPU
d = b + c
e = b.as_in_context(c.context) + c # same to above
{'d':d, 'e':e}
Explanation: Currently, MXNet requires two arrays to sit on the same device for computation. There are several methods for copying data between devices.
End of explanation
import pickle as pkl
a = mx.nd.ones((2, 3))
# pack and then dump into disk
data = pkl.dumps(a)
pkl.dump(data, open('tmp.pickle', 'wb'))
# load from disk and then unpack
data = pkl.load(open('tmp.pickle', 'rb'))
b = pkl.loads(data)
b.asnumpy()
Explanation: Serialize From/To (Distributed) Filesystems
MXNet offers two simple ways to save (load) data to (from) disk. The first way is to use pickle, as you might with any other Python objects. NDArray is pickle-compatible.
End of explanation
a = mx.nd.ones((2,3))
b = mx.nd.ones((5,6))
mx.nd.save("temp.ndarray", [a,b])
c = mx.nd.load("temp.ndarray")
c
Explanation: The second way is to directly dump to disk in binary format by using the save and load methods. We can save/load a single NDArray, or a list of NDArrays:
End of explanation
d = {'a':a, 'b':b}
mx.nd.save("temp.ndarray", d)
c = mx.nd.load("temp.ndarray")
c
Explanation: It's also possible to save or load a dict of NDArrays in this fashion:
End of explanation
# @@@ AUTOTEST_OUTPUT_IGNORED_CELL
import time
def do(x, n):
    """push computation into the backend engine"""
return [mx.nd.dot(x,x) for i in range(n)]
def wait(x):
    """wait until all results are available"""
for y in x:
y.wait_to_read()
tic = time.time()
a = mx.nd.ones((1000,1000))
b = do(a, 50)
print('time for all computations are pushed into the backend engine:\n %f sec' % (time.time() - tic))
wait(b)
print('time for all computations are finished:\n %f sec' % (time.time() - tic))
Explanation: The load and save methods are preferable to pickle in two respects
1. When using these methods, you can save data from within the Python interface and then use it later from another language's binding. For example, if we save the data in Python:
python
a = mx.nd.ones((2, 3))
mx.save("temp.ndarray", [a,])
we can later load it from R:
```R
a <- mx.nd.load("temp.ndarray")
as.array(a[[1]])
[,1] [,2] [,3]
[1,] 1 1 1
[2,] 1 1 1
2. When a distributed filesystem such as Amazon S3 or Hadoop HDFS is set up, we can directly save to and load from it:
mx.nd.save('s3://mybucket/mydata.ndarray', [a,]) # if compiled with USE_S3=1
mx.nd.save('hdfs///users/myname/mydata.bin', [a,]) # if compiled with USE_HDFS=1
```
Lazy Evaluation and Automatic Parallelization *
MXNet uses lazy evaluation to achieve superior performance.
When we run a=b+1 in Python, the Python thread
just pushes this operation into the backend engine and then returns.
There are two benefits for to this approach:
1. The main Python thread can continue to execute other computations once the previous one is pushed. It is useful for frontend languages with heavy overheads.
2. It is easier for the backend engine to explore further optimization, such as auto parallelization.
The backend engine can resolve data dependencies and schedule the computations correctly. It is transparent to frontend users. We can explicitly call the method wait_to_read on the result array to wait until the computation finishes. Operations that copy data from an array to other packages, such as asnumpy, will implicitly call wait_to_read.
End of explanation
# @@@ AUTOTEST_OUTPUT_IGNORED_CELL
n = 10
a = mx.nd.ones((1000,1000))
b = mx.nd.ones((6000,6000), mx.gpu(1))
tic = time.time()
c = do(a, n)
wait(c)
print('Time to finish the CPU workload: %f sec' % (time.time() - tic))
d = do(b, n)
wait(d)
print('Time to finish both CPU/CPU workloads: %f sec' % (time.time() - tic))
Explanation: Besides analyzing data read and write dependencies, the backend engine is able to schedule computations with no dependency in parallel. For example, in the following code:
python
a = mx.nd.ones((2,3))
b = a + 1
c = a + 2
d = b * c
the second and third lines can be executed in parallel. The following example first runs on CPU and then on GPU:
End of explanation
# @@@ AUTOTEST_OUTPUT_IGNORED_CELL
tic = time.time()
c = do(a, n)
d = do(b, n)
wait(c)
wait(d)
print('Both as finished in: %f sec' % (time.time() - tic))
Explanation: Now we issue all workloads at the same time. The backend engine will try to parallel the CPU and GPU computations.
End of explanation |
14,392 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: \title{myHDL Combinational Logic Elements
Step2: Demultiplexers
\begin{definition}\label{def
Step4: myHDL Module
Step6: myHDL Testing
Step7: Verilog Conversion
Step9: \begin{figure}
\centerline{\includegraphics[width=10cm]{DEMUX1_2_Combo_RTL.png}}
\caption{\label{fig
Step10: PYNQ-Z1 Deployment
Board Circuit
\begin{figure}
\centerline{\includegraphics[width=5cm]{DEMUX12PYNQZ1Circ.png}}
\caption{\label{fig
Step11: Video of Deployment
DEMUX1_2_Combo on PYNQ-Z1 (YouTube)
1 Channel Input
Step13: myHDL Module
Step15: myHDL Testing
Step16: Verilog Conversion
Step18: \begin{figure}
\centerline{\includegraphics[width=10cm]{DEMUX1_4_Combo_RTL.png}}
\caption{\label{fig
Step19: PYNQ-Z1 Deployment
Board Circuit
\begin{figure}
\centerline{\includegraphics[width=5cm]{DEMUX14PYNQZ1Circ.png}}
\caption{\label{fig
Step21: Video of Deployment
DEMUX1_4_Combo on PYNQ-Z1 (YouTube)
1 Channel Input
Step23: myHDL Testing
Step24: Verilog Conversion
Step26: \begin{figure}
\centerline{\includegraphics[width=10cm]{DEMUX1_4_DMS_RTL.png}}
\caption{\label{fig
Step28: PYNQ-Z1 Deployment
Board Circuit
See Board Circuit for "1 Channel Input
Step30: myHDL Testing
Step31: Verilog Conversion
Step33: \begin{figure}
\centerline{\includegraphics[width=10cm]{DEMUX1_2_B_RTL.png}}
\caption{\label{fig
Step35: PYNQ-Z1 Deployment
Board Circuit
See Board Circuit for "1 Channel Input
Step37: myHDL Testing
Step38: Verilog Conversion
Step40: \begin{figure}
\centerline{\includegraphics[width=10cm]{DEMUX1_4_B_RTL.png}}
\caption{\label{fig
Step42: PYNQ-Z1 Deployment
Board Circuit
See Board Circuit for "1 Channel Input
Step43: myHDL Testing
Step44: Verilog Conversion
Step45: \begin{figure}
\centerline{\includegraphics[width=10cm]{DEMUX1_4_BV_RTL.png}}
\caption{\label{fig | Python Code:
#This notebook also uses the `(some) LaTeX environments for Jupyter`
#https://github.com/ProfFan/latex_envs which is part of the
#jupyter_contrib_nbextensions package
from myhdl import *
from myhdlpeek import Peeker
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
from sympy import *
init_printing()
import itertools
#EE drawing tools in python from https://cdelker.bitbucket.io/SchemDraw/
import SchemDraw as schem
import SchemDraw.elements as e
import SchemDraw.logic as l
#https://github.com/jrjohansson/version_information
%load_ext version_information
%version_information myhdl, myhdlpeek, numpy, pandas, matplotlib, sympy, itertools, SchemDraw
#helper functions to read in the .v and .vhd generated files into python
def VerilogTextReader(loc, printresult=True):
with open(f'{loc}.v', 'r') as vText:
VerilogText=vText.read()
if printresult:
print(f'***Verilog modual from {loc}.v***\n\n', VerilogText)
return VerilogText
def VHDLTextReader(loc, printresult=True):
with open(f'{loc}.vhd', 'r') as vText:
VerilogText=vText.read()
if printresult:
print(f'***VHDL modual from {loc}.vhd***\n\n', VerilogText)
return VerilogText
def ConstraintXDCTextReader(loc, printresult=True):
with open(f'{loc}.xdc', 'r') as xdcText:
ConstraintText=xdcText.read()
if printresult:
print(f'***Constraint file from {loc}.xdc***\n\n', ConstraintText)
return ConstraintText
def TruthTabelGenrator(BoolSymFunc):
    """Function to generate a truth table from a sympy boolean expression
    BoolSymFunc: sympy boolean expression
    return TT: a truth table stored in a pandas DataFrame
    """
colsL=sorted([i for i in list(BoolSymFunc.rhs.atoms())], key=lambda x:x.sort_key())
colsR=sorted([i for i in list(BoolSymFunc.lhs.atoms())], key=lambda x:x.sort_key())
bitwidth=len(colsL)
cols=colsL+colsR; cols
TT=pd.DataFrame(columns=cols, index=range(2**bitwidth))
for i in range(2**bitwidth):
inputs=[int(j) for j in list(np.binary_repr(i, bitwidth))]
outputs=BoolSymFunc.rhs.subs({j:v for j, v in zip(colsL, inputs)})
inputs.append(int(bool(outputs)))
TT.iloc[i]=inputs
return TT
Explanation: \title{myHDL Combinational Logic Elements: Demultiplexers (DEMUXs)}
\author{Steven K Armour}
\maketitle
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc" style="margin-top: 1em;"><ul class="toc-item"><li><span><a href="#Refrances" data-toc-modified-id="Refrances-1"><span class="toc-item-num">1 </span>Refrances</a></span></li><li><span><a href="#Libraries-and-Helper-functions" data-toc-modified-id="Libraries-and-Helper-functions-2"><span class="toc-item-num">2 </span>Libraries and Helper functions</a></span></li><li><span><a href="#Demultiplexers" data-toc-modified-id="Demultiplexers-3"><span class="toc-item-num">3 </span>Demultiplexers</a></span></li><li><span><a href="#1-Channel-Input:-2-Channel-Output-demultiplexer-in-Gate-Level-Logic" data-toc-modified-id="1-Channel-Input:-2-Channel-Output-demultiplexer-in-Gate-Level-Logic-4"><span class="toc-item-num">4 </span>1 Channel Input: 2 Channel Output demultiplexer in Gate Level Logic</a></span><ul class="toc-item"><li><span><a href="#Sympy-Expression" data-toc-modified-id="Sympy-Expression-4.1"><span class="toc-item-num">4.1 </span>Sympy Expression</a></span></li><li><span><a href="#myHDL-Module" data-toc-modified-id="myHDL-Module-4.2"><span class="toc-item-num">4.2 </span>myHDL Module</a></span></li><li><span><a href="#myHDL-Testing" data-toc-modified-id="myHDL-Testing-4.3"><span class="toc-item-num">4.3 </span>myHDL Testing</a></span></li><li><span><a href="#Verilog-Conversion" data-toc-modified-id="Verilog-Conversion-4.4"><span class="toc-item-num">4.4 </span>Verilog Conversion</a></span></li><li><span><a href="#myHDL-to-Verilog-Testbech" data-toc-modified-id="myHDL-to-Verilog-Testbech-4.5"><span class="toc-item-num">4.5 </span>myHDL to Verilog Testbech</a></span></li><li><span><a href="#PYNQ-Z1-Deployment" data-toc-modified-id="PYNQ-Z1-Deployment-4.6"><span class="toc-item-num">4.6 </span>PYNQ-Z1 Deployment</a></span><ul class="toc-item"><li><span><a href="#Board-Circuit" data-toc-modified-id="Board-Circuit-4.6.1"><span class="toc-item-num">4.6.1 </span>Board Circuit</a></span></li><li><span><a href="#Board-Constraints" data-toc-modified-id="Board-Constraints-4.6.2"><span class="toc-item-num">4.6.2 </span>Board Constraints</a></span></li><li><span><a href="#Video-of-Deployment" data-toc-modified-id="Video-of-Deployment-4.6.3"><span class="toc-item-num">4.6.3 </span>Video of Deployment</a></span></li></ul></li></ul></li><li><span><a href="#1-Channel-Input:4-Channel-Output-demultiplexer-in-Gate-Level-Logic" data-toc-modified-id="1-Channel-Input:4-Channel-Output-demultiplexer-in-Gate-Level-Logic-5"><span class="toc-item-num">5 </span>1 Channel Input:4 Channel Output demultiplexer in Gate Level Logic</a></span><ul class="toc-item"><li><span><a href="#Sympy-Expression" data-toc-modified-id="Sympy-Expression-5.1"><span class="toc-item-num">5.1 </span>Sympy Expression</a></span></li><li><span><a href="#myHDL-Module" data-toc-modified-id="myHDL-Module-5.2"><span class="toc-item-num">5.2 </span>myHDL Module</a></span></li><li><span><a href="#myHDL-Testing" data-toc-modified-id="myHDL-Testing-5.3"><span class="toc-item-num">5.3 </span>myHDL Testing</a></span></li><li><span><a href="#Verilog-Conversion" data-toc-modified-id="Verilog-Conversion-5.4"><span class="toc-item-num">5.4 </span>Verilog Conversion</a></span></li><li><span><a href="#myHDL-to-Verilog-Testbench" data-toc-modified-id="myHDL-to-Verilog-Testbench-5.5"><span class="toc-item-num">5.5 </span>myHDL to Verilog Testbench</a></span></li><li><span><a href="#PYNQ-Z1-Deployment" data-toc-modified-id="PYNQ-Z1-Deployment-5.6"><span class="toc-item-num">5.6 </span>PYNQ-Z1 Deployment</a></span><ul class="toc-item"><li><span><a 
href="#Board-Circuit" data-toc-modified-id="Board-Circuit-5.6.1"><span class="toc-item-num">5.6.1 </span>Board Circuit</a></span></li><li><span><a href="#Board-Constraints" data-toc-modified-id="Board-Constraints-5.6.2"><span class="toc-item-num">5.6.2 </span>Board Constraints</a></span></li><li><span><a href="#Video-of-Deployment" data-toc-modified-id="Video-of-Deployment-5.6.3"><span class="toc-item-num">5.6.3 </span>Video of Deployment</a></span></li></ul></li></ul></li><li><span><a href="#1-Channel-Input:4-Channel-Output-demultiplexer-via-DEMUX-Stacking" data-toc-modified-id="1-Channel-Input:4-Channel-Output-demultiplexer-via-DEMUX-Stacking-6"><span class="toc-item-num">6 </span>1 Channel Input:4 Channel Output demultiplexer via DEMUX Stacking</a></span><ul class="toc-item"><li><span><a href="#myHDL-Module" data-toc-modified-id="myHDL-Module-6.1"><span class="toc-item-num">6.1 </span>myHDL Module</a></span></li><li><span><a href="#myHDL-Testing" data-toc-modified-id="myHDL-Testing-6.2"><span class="toc-item-num">6.2 </span>myHDL Testing</a></span></li><li><span><a href="#Verilog-Conversion" data-toc-modified-id="Verilog-Conversion-6.3"><span class="toc-item-num">6.3 </span>Verilog Conversion</a></span></li><li><span><a href="#myHDL-to-Verilog-Testbench" data-toc-modified-id="myHDL-to-Verilog-Testbench-6.4"><span class="toc-item-num">6.4 </span>myHDL to Verilog Testbench</a></span></li><li><span><a href="#PYNQ-Z1-Deployment" data-toc-modified-id="PYNQ-Z1-Deployment-6.5"><span class="toc-item-num">6.5 </span>PYNQ-Z1 Deployment</a></span><ul class="toc-item"><li><span><a href="#Board-Circuit" data-toc-modified-id="Board-Circuit-6.5.1"><span class="toc-item-num">6.5.1 </span>Board Circuit</a></span></li><li><span><a href="#Board-Constraint" data-toc-modified-id="Board-Constraint-6.5.2"><span class="toc-item-num">6.5.2 </span>Board Constraint</a></span></li><li><span><a href="#Video-of-Deployment" data-toc-modified-id="Video-of-Deployment-6.5.3"><span class="toc-item-num">6.5.3 </span>Video of Deployment</a></span></li></ul></li></ul></li><li><span><a href="#1:2-DEMUX-via-Behavioral-IF" data-toc-modified-id="1:2-DEMUX-via-Behavioral-IF-7"><span class="toc-item-num">7 </span>1:2 DEMUX via Behavioral IF</a></span><ul class="toc-item"><li><span><a href="#myHDL-Module" data-toc-modified-id="myHDL-Module-7.1"><span class="toc-item-num">7.1 </span>myHDL Module</a></span></li><li><span><a href="#myHDL-Testing" data-toc-modified-id="myHDL-Testing-7.2"><span class="toc-item-num">7.2 </span>myHDL Testing</a></span></li><li><span><a href="#Verilog-Conversion" data-toc-modified-id="Verilog-Conversion-7.3"><span class="toc-item-num">7.3 </span>Verilog Conversion</a></span></li><li><span><a href="#myHDL-to-Verilog-Testbench" data-toc-modified-id="myHDL-to-Verilog-Testbench-7.4"><span class="toc-item-num">7.4 </span>myHDL to Verilog Testbench</a></span></li><li><span><a href="#PYNQ-Z1-Deployment" data-toc-modified-id="PYNQ-Z1-Deployment-7.5"><span class="toc-item-num">7.5 </span>PYNQ-Z1 Deployment</a></span><ul class="toc-item"><li><span><a href="#Board-Circuit" data-toc-modified-id="Board-Circuit-7.5.1"><span class="toc-item-num">7.5.1 </span>Board Circuit</a></span></li><li><span><a href="#Board-Constraint" data-toc-modified-id="Board-Constraint-7.5.2"><span class="toc-item-num">7.5.2 </span>Board Constraint</a></span></li><li><span><a href="#Video-of-Deployment" data-toc-modified-id="Video-of-Deployment-7.5.3"><span class="toc-item-num">7.5.3 </span>Video of 
Deployment</a></span></li></ul></li></ul></li><li><span><a href="#1:4-DEMUX-via-Behavioral-if-elif-else" data-toc-modified-id="1:4-DEMUX-via-Behavioral-if-elif-else-8"><span class="toc-item-num">8 </span>1:4 DEMUX via Behavioral if-elif-else</a></span><ul class="toc-item"><li><span><a href="#myHDL-Module" data-toc-modified-id="myHDL-Module-8.1"><span class="toc-item-num">8.1 </span>myHDL Module</a></span></li><li><span><a href="#myHDL-Testing" data-toc-modified-id="myHDL-Testing-8.2"><span class="toc-item-num">8.2 </span>myHDL Testing</a></span></li><li><span><a href="#Verilog-Conversion" data-toc-modified-id="Verilog-Conversion-8.3"><span class="toc-item-num">8.3 </span>Verilog Conversion</a></span></li><li><span><a href="#myHDL-to-Verilog-Testbench" data-toc-modified-id="myHDL-to-Verilog-Testbench-8.4"><span class="toc-item-num">8.4 </span>myHDL to Verilog Testbench</a></span></li><li><span><a href="#PYNQ-Z1-Deployment" data-toc-modified-id="PYNQ-Z1-Deployment-8.5"><span class="toc-item-num">8.5 </span>PYNQ-Z1 Deployment</a></span><ul class="toc-item"><li><span><a href="#Board-Circuit" data-toc-modified-id="Board-Circuit-8.5.1"><span class="toc-item-num">8.5.1 </span>Board Circuit</a></span></li><li><span><a href="#Board-Constraint" data-toc-modified-id="Board-Constraint-8.5.2"><span class="toc-item-num">8.5.2 </span>Board Constraint</a></span></li><li><span><a href="#Video-of-Deployment" data-toc-modified-id="Video-of-Deployment-8.5.3"><span class="toc-item-num">8.5.3 </span>Video of Deployment</a></span></li></ul></li></ul></li><li><span><a href="#Demultiplexer-1:4-Behavioral-via-Bitvectors" data-toc-modified-id="Demultiplexer-1:4-Behavioral-via-Bitvectors-9"><span class="toc-item-num">9 </span>Demultiplexer 1:4 Behavioral via Bitvectors</a></span><ul class="toc-item"><li><span><a href="#myHDL-Module" data-toc-modified-id="myHDL-Module-9.1"><span class="toc-item-num">9.1 </span>myHDL Module</a></span></li><li><span><a href="#myHDL-Testing" data-toc-modified-id="myHDL-Testing-9.2"><span class="toc-item-num">9.2 </span>myHDL Testing</a></span></li><li><span><a href="#Verilog-Conversion" data-toc-modified-id="Verilog-Conversion-9.3"><span class="toc-item-num">9.3 </span>Verilog Conversion</a></span></li><li><span><a href="#myHDL-to-Verilog-Testbench" data-toc-modified-id="myHDL-to-Verilog-Testbench-9.4"><span class="toc-item-num">9.4 </span>myHDL to Verilog Testbench</a></span></li><li><span><a href="#PYNQ-Z1-Board-Deployment" data-toc-modified-id="PYNQ-Z1-Board-Deployment-9.5"><span class="toc-item-num">9.5 </span>PYNQ-Z1 Board Deployment</a></span><ul class="toc-item"><li><span><a href="#Board-Circuit" data-toc-modified-id="Board-Circuit-9.5.1"><span class="toc-item-num">9.5.1 </span>Board Circuit</a></span></li><li><span><a href="#Board-Constraints" data-toc-modified-id="Board-Constraints-9.5.2"><span class="toc-item-num">9.5.2 </span>Board Constraints</a></span></li><li><span><a href="#Video-of-Deployment" data-toc-modified-id="Video-of-Deployment-9.5.3"><span class="toc-item-num">9.5.3 </span>Video of Deployment</a></span></li></ul></li></ul></li></ul></div>
References
Libraries and Helper functions
End of explanation
x, s, y0, y1=symbols('x, s, y_0, y_1')
y12_0Eq=Eq(y0, ~s&x)
y12_1Eq=Eq(y1, s&x)
y12_0Eq, y12_1Eq
T0=TruthTabelGenrator(y12_0Eq)
T1=TruthTabelGenrator(y12_1Eq)
T10=pd.merge(T1, T0, how='left')
T10
y12_0EqN=lambdify([s, x], y12_0Eq.rhs, dummify=False)
y12_1EqN=lambdify([s, x], y12_1Eq.rhs, dummify=False)
SystmaticVals=np.array(list(itertools.product([0,1], repeat=2)))
print(SystmaticVals)
print(y12_0EqN(SystmaticVals[:, 0], SystmaticVals[:, 1]).astype(int))
print(y12_1EqN(SystmaticVals[:, 0], SystmaticVals[:, 1]).astype(int))
Explanation: Demultiplexers
\begin{definition}\label{def:MUX}
A Demultiplexer, typically referred to as a DEMUX, is a digital (or analog) switching unit that routes a single input channel to one of many output channels via a control input. For a single-input DEMUX with $2^n$ outputs, there are $n$ input selection signals that make up the control word selecting which output channel receives the input.
Thus a DEMUX is the conjugate digital element to the MUX, such that a MUX is an $N:1$ mapping device and a DEMUX is a $1:N$ mapping device.
From a behavioral standpoint, DEMUXs are implemented with the same if-elif-else (case) control statements as a MUX, but for each case all outputs must be specified.
Furthermore, larger DEMUXs are often implemented by stacking smaller DEMUXs, since their governing equations are the individual product terms (the Cartesian product) of a MUX's SOP equation
\end{definition}
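As a quick behavioral reference for the definition above (plain Python for intuition only; this is not part of the myHDL flow):
```python
def demux1_2_ref(x, s):
    # route input x to output 0 when s == 0, to output 1 when s == 1
    return (x if s == 0 else 0, x if s == 1 else 0)

print([demux1_2_ref(1, s) for s in (0, 1)])  # [(1, 0), (0, 1)]
```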
1 Channel Input: 2 Channel Output demultiplexer in Gate Level Logic
\begin{figure}
\centerline{\includegraphics{DEMUX12Gate.png}}
\caption{\label{fig:D12G} 1:2 DEMUX Symbol and Gate internals}
\end{figure}
Sympy Expression
End of explanation
@block
def DEMUX1_2_Combo(x, s, y0, y1):
    """1:2 DEMUX written in full combo
    Inputs:
        x(bool): input feed
        s(bool): channel select
    Outputs:
        y0(bool): output channel 0
        y1(bool): output channel 1
    """
@always_comb
def logic():
y0.next= not s and x
y1.next= s and x
return instances()
Explanation: myHDL Module
End of explanation
TestLen=10
SystmaticVals=list(itertools.product([0,1], repeat=2))
xTVs=np.array([i[1] for i in SystmaticVals]).astype(int)
np.random.seed(15)
xTVs=np.append(xTVs, np.random.randint(0,2, TestLen)).astype(int)
sTVs=np.array([i[0] for i in SystmaticVals]).astype(int)
#the random generator must have a different seed between each generation
#call in order to produce different values for each call
np.random.seed(16)
sTVs=np.append(sTVs, np.random.randint(0,2, TestLen)).astype(int)
TestLen=len(xTVs)
SystmaticVals, sTVs, xTVs
Peeker.clear()
x=Signal(bool(0)); Peeker(x, 'x')
s=Signal(bool(0)); Peeker(s, 's')
y0=Signal(bool(0)); Peeker(y0, 'y0')
y1=Signal(bool(0)); Peeker(y1, 'y1')
DUT=DEMUX1_2_Combo(x, s, y0, y1)
def DEMUX1_2_Combo_TB():
    """myHDL only testbench for module `DEMUX1_2_Combo`"""
@instance
def stimules():
for i in range(TestLen):
x.next=int(xTVs[i])
s.next=int(sTVs[i])
yield delay(1)
raise StopSimulation()
return instances()
sim=Simulation(DUT, DEMUX1_2_Combo_TB(), *Peeker.instances()).run()
Peeker.to_wavedrom('x', 's', 'y0','y1')
DEMUX1_2_ComboData=Peeker.to_dataframe()
DEMUX1_2_ComboData=DEMUX1_2_ComboData[['x', 's', 'y0','y1']]
DEMUX1_2_ComboData
DEMUX1_2_ComboData['y0Ref']=DEMUX1_2_ComboData.apply(lambda row:y12_0EqN(row['s'], row['x']), axis=1).astype(int)
DEMUX1_2_ComboData['y1Ref']=DEMUX1_2_ComboData.apply(lambda row:y12_1EqN(row['s'], row['x']), axis=1).astype(int)
DEMUX1_2_ComboData
Test0=(DEMUX1_2_ComboData['y0']==DEMUX1_2_ComboData['y0Ref']).all()
Test1=(DEMUX1_2_ComboData['y1']==DEMUX1_2_ComboData['y1Ref']).all()
Test=Test0&Test1
print(f'Module `DEMUX1_2_Combo` works as expected: {Test}')
Explanation: myHDL Testing
End of explanation
DUT.convert()
VerilogTextReader('DEMUX1_2_Combo');
Explanation: Verilog Conversion
End of explanation
#create BitVectors
xTVs=intbv(int(''.join(xTVs.astype(str)), 2))[TestLen:]
sTVs=intbv(int(''.join(sTVs.astype(str)), 2))[TestLen:]
xTVs, bin(xTVs), sTVs, bin(sTVs)
@block
def DEMUX1_2_Combo_TBV():
    """myHDL -> testbench for module `DEMUX1_2_Combo`"""
x=Signal(bool(0))
s=Signal(bool(0))
y0=Signal(bool(0))
y1=Signal(bool(0))
@always_comb
def print_data():
print(x, s, y0, y1)
#Test Signal Bit Vectors
xTV=Signal(xTVs)
sTV=Signal(sTVs)
DUT=DEMUX1_2_Combo(x, s, y0, y1)
@instance
def stimules():
for i in range(TestLen):
x.next=int(xTV[i])
s.next=int(sTV[i])
yield delay(1)
raise StopSimulation()
return instances()
TB=DEMUX1_2_Combo_TBV()
TB.convert(hdl="Verilog", initial_values=True)
VerilogTextReader('DEMUX1_2_Combo_TBV');
Explanation: \begin{figure}
\centerline{\includegraphics[width=10cm]{DEMUX1_2_Combo_RTL.png}}
\caption{\label{fig:D12CRTL} DEMUX1_2_Combo RTL schematic; Xilinx Vivado 2017.4}
\end{figure}
\begin{figure}
\centerline{\includegraphics[width=10cm]{DEMUX1_2_Combo_SYN.png}}
\caption{\label{fig:D12CSYN} DEMUX1_2_Combo Synthesized Schematic; Xilinx Vivado 2017.4}
\end{figure}
\begin{figure}
\centerline{\includegraphics[width=10cm]{DEMUX1_2_Combo_IMP.png}}
\caption{\label{fig:D12CIMP} DEMUX1_2_Combo Implementated Schematic; Xilinx Vivado 2017.4}
\end{figure}
myHDL to Verilog Testbench
End of explanation
ConstraintXDCTextReader('DEMUX1_2');
Explanation: PYNQ-Z1 Deployment
Board Circuit
\begin{figure}
\centerline{\includegraphics[width=5cm]{DEMUX12PYNQZ1Circ.png}}
\caption{\label{fig:D12Circ} 1:2 DEMUX PYNQ-Z1 (Non SoC) conceptualized circuit}
\end{figure}
Board Constraints
End of explanation
x, s0, s1, y0, y1, y2, y3=symbols('x, s0, s1, y0, y1, y2, y3')
y14_0Eq=Eq(y0, ~s0&~s1&x)
y14_1Eq=Eq(y1, s0&~s1&x)
y14_2Eq=Eq(y2, ~s0&s1&x)
y14_3Eq=Eq(y3, s0&s1&x)
y14_0Eq, y14_1Eq, y14_2Eq, y14_3Eq
T0=TruthTabelGenrator(y14_0Eq)
T1=TruthTabelGenrator(y14_1Eq)
T2=TruthTabelGenrator(y14_2Eq)
T3=TruthTabelGenrator(y14_3Eq)
T10=pd.merge(T1, T0, how='left')
T20=pd.merge(T2, T10, how='left')
T30=pd.merge(T3, T20, how='left')
T30
y14_0EqN=lambdify([x, s0, s1], y14_0Eq.rhs, dummify=False)
y14_1EqN=lambdify([x, s0, s1], y14_1Eq.rhs, dummify=False)
y14_2EqN=lambdify([x, s0, s1], y14_2Eq.rhs, dummify=False)
y14_3EqN=lambdify([x, s0, s1], y14_3Eq.rhs, dummify=False)
SystmaticVals=np.array(list(itertools.product([0,1], repeat=3)))
print(SystmaticVals)
print(y14_0EqN(SystmaticVals[:, 2], SystmaticVals[:, 1], SystmaticVals[:, 0]).astype(int))
print(y14_1EqN(SystmaticVals[:, 2], SystmaticVals[:, 1], SystmaticVals[:, 0]).astype(int))
print(y14_2EqN(SystmaticVals[:, 2], SystmaticVals[:, 1], SystmaticVals[:, 0]).astype(int))
print(y14_3EqN(SystmaticVals[:, 2], SystmaticVals[:, 1], SystmaticVals[:, 0]).astype(int))
Explanation: Video of Deployment
DEMUX1_2_Combo on PYNQ-Z1 (YouTube)
1 Channel Input:4 Channel Output demultiplexer in Gate Level Logic
Sympy Expression
End of explanation
@block
def DEMUX1_4_Combo(x, s0, s1, y0, y1, y2, y3):
    """1:4 DEMUX written in full combo
    Inputs:
        x(bool): input feed
        s0(bool): channel select 0
        s1(bool): channel select 1
    Outputs:
        y0(bool): output channel 0
        y1(bool): output channel 1
        y2(bool): output channel 2
        y3(bool): output channel 3
    """
@always_comb
def logic():
y0.next= (not s0) and (not s1) and x
y1.next= s0 and (not s1) and x
y2.next= (not s0) and s1 and x
y3.next= s0 and s1 and x
return instances()
Explanation: myHDL Module
End of explanation
TestLen=10
SystmaticVals=list(itertools.product([0,1], repeat=3))
xTVs=np.array([i[2] for i in SystmaticVals]).astype(int)
np.random.seed(15)
xTVs=np.append(xTVs, np.random.randint(0,2, TestLen)).astype(int)
s0TVs=np.array([i[1] for i in SystmaticVals]).astype(int)
#the random generator must have a different seed between each generation
#call in order to produce different values for each call
np.random.seed(16)
s0TVs=np.append(s0TVs, np.random.randint(0,2, TestLen)).astype(int)
s1TVs=np.array([i[0] for i in SystmaticVals]).astype(int)
#the random generator must have a different seed between each generation
#call in order to produce different values for each call
np.random.seed(17)
s1TVs=np.append(s1TVs, np.random.randint(0,2, TestLen)).astype(int)
TestLen=len(xTVs)
SystmaticVals, xTVs, s0TVs, s1TVs
Peeker.clear()
x=Signal(bool(0)); Peeker(x, 'x')
s0=Signal(bool(0)); Peeker(s0, 's0')
s1=Signal(bool(0)); Peeker(s1, 's1')
y0=Signal(bool(0)); Peeker(y0, 'y0')
y1=Signal(bool(0)); Peeker(y1, 'y1')
y2=Signal(bool(0)); Peeker(y2, 'y2')
y3=Signal(bool(0)); Peeker(y3, 'y3')
DUT=DEMUX1_4_Combo(x, s0, s1, y0, y1, y2, y3)
def DEMUX1_4_Combo_TB():
    """myHDL only testbench for module `DEMUX1_4_Combo`"""
@instance
def stimules():
for i in range(TestLen):
x.next=int(xTVs[i])
s0.next=int(s0TVs[i])
s1.next=int(s1TVs[i])
yield delay(1)
raise StopSimulation()
return instances()
sim=Simulation(DUT, DEMUX1_4_Combo_TB(), *Peeker.instances()).run()
Peeker.to_wavedrom('x', 's1', 's0', 'y0', 'y1', 'y2', 'y3')
DEMUX1_4_ComboData=Peeker.to_dataframe()
DEMUX1_4_ComboData=DEMUX1_4_ComboData[['x', 's1', 's0', 'y0', 'y1', 'y2', 'y3']]
DEMUX1_4_ComboData
DEMUX1_4_ComboData['y0Ref']=DEMUX1_4_ComboData.apply(lambda row:y14_0EqN(row['x'], row['s0'], row['s1']), axis=1).astype(int)
DEMUX1_4_ComboData['y1Ref']=DEMUX1_4_ComboData.apply(lambda row:y14_1EqN(row['x'], row['s0'], row['s1']), axis=1).astype(int)
DEMUX1_4_ComboData['y2Ref']=DEMUX1_4_ComboData.apply(lambda row:y14_2EqN(row['x'], row['s0'], row['s1']), axis=1).astype(int)
DEMUX1_4_ComboData['y3Ref']=DEMUX1_4_ComboData.apply(lambda row:y14_3EqN(row['x'], row['s0'], row['s1']), axis=1).astype(int)
DEMUX1_4_ComboData
Test0=(DEMUX1_4_ComboData['y0']==DEMUX1_4_ComboData['y0Ref']).all()
Test1=(DEMUX1_4_ComboData['y1']==DEMUX1_4_ComboData['y1Ref']).all()
Test2=(DEMUX1_4_ComboData['y2']==DEMUX1_4_ComboData['y2Ref']).all()
Test3=(DEMUX1_4_ComboData['y3']==DEMUX1_4_ComboData['y3Ref']).all()
Test=Test0&Test1&Test2&Test3
print(f'Module `DEMUX1_4_Combo` works as expected: {Test}')
Explanation: myHDL Testing
End of explanation
DUT.convert()
VerilogTextReader('DEMUX1_4_Combo');
Explanation: Verilog Conversion
End of explanation
#create BitVectors
xTVs=intbv(int(''.join(xTVs.astype(str)), 2))[TestLen:]
s0TVs=intbv(int(''.join(s0TVs.astype(str)), 2))[TestLen:]
s1TVs=intbv(int(''.join(s1TVs.astype(str)), 2))[TestLen:]
xTVs, bin(xTVs), s0TVs, bin(s0TVs), s1TVs, bin(s1TVs)
@block
def DEMUX1_4_Combo_TBV():
    """myHDL -> testbench for module `DEMUX1_4_Combo`"""
x=Signal(bool(0))
s0=Signal(bool(0))
s1=Signal(bool(0))
y0=Signal(bool(0))
y1=Signal(bool(0))
y2=Signal(bool(0))
y3=Signal(bool(0))
@always_comb
def print_data():
print(x, s0, s1, y0, y1, y2, y3)
#Test Signal Bit Vectors
xTV=Signal(xTVs)
s0TV=Signal(s0TVs)
s1TV=Signal(s1TVs)
DUT=DEMUX1_4_Combo(x, s0, s1, y0, y1, y2, y3)
@instance
def stimules():
for i in range(TestLen):
x.next=int(xTV[i])
s0.next=int(s0TV[i])
s1.next=int(s1TV[i])
yield delay(1)
raise StopSimulation()
return instances()
TB=DEMUX1_4_Combo_TBV()
TB.convert(hdl="Verilog", initial_values=True)
VerilogTextReader('DEMUX1_4_Combo_TBV');
Explanation: \begin{figure}
\centerline{\includegraphics[width=10cm]{DEMUX1_4_Combo_RTL.png}}
\caption{\label{fig:D14CRTL} DEMUX1_4_Combo RTL schematic; Xilinx Vivado 2017.4}
\end{figure}
\begin{figure}
\centerline{\includegraphics[width=10cm]{DEMUX1_4_Combo_SYN.png}}
\caption{\label{fig:D14CSYN} DEMUX1_4_Combo Synthesized Schematic; Xilinx Vivado 2017.4}
\end{figure}
\begin{figure}
\centerline{\includegraphics[width=10cm]{DEMUX1_4_Combo_IMP.png}}
\caption{\label{fig:D14CIMP} DEMUX1_4_Combo Implementated Schematic; Xilinx Vivado 2017.4}
\end{figure}
myHDL to Verilog Testbench
End of explanation
ConstraintXDCTextReader('DEMUX1_4');
Explanation: PYNQ-Z1 Deployment
Board Circuit
\begin{figure}
\centerline{\includegraphics[width=5cm]{DEMUX14PYNQZ1Circ.png}}
\caption{\label{fig:D14Circ} 1:4 DEMUX PYNQ-Z1 (Non SoC) conceptualized circuit}
\end{figure}
Board Constraints
End of explanation
@block
def DEMUX1_4_DMS(x, s0, s1, y0, y1, y2, y3):
    """1:4 DEMUX via DEMUX Stacking
    Inputs:
        x(bool): input feed
        s0(bool): channel select 0
        s1(bool): channel select 1
    Outputs:
        y0(bool): output channel 0
        y1(bool): output channel 1
        y2(bool): output channel 2
        y3(bool): output channel 3
    """
s0_y0y1_WIRE=Signal(bool(0))
s0_y2y3_WIRE=Signal(bool(0))
x_s1_DEMUX=DEMUX1_2_Combo(x, s1, s0_y0y1_WIRE, s0_y2y3_WIRE)
s1_y0y1_DEMUX=DEMUX1_2_Combo(s0_y0y1_WIRE, s0, y0, y1)
s1_y2y3_DEMUX=DEMUX1_2_Combo(s0_y2y3_WIRE, s0, y2, y3)
return instances()
Explanation: Video of Deployment
DEMUX1_4_Combo on PYNQ-Z1 (YouTube)
1 Channel Input:4 Channel Output demultiplexer via DEMUX Stacking
myHDL Module
End of explanation
TestLen=10
SystmaticVals=list(itertools.product([0,1], repeat=3))
xTVs=np.array([i[2] for i in SystmaticVals]).astype(int)
np.random.seed(15)
xTVs=np.append(xTVs, np.random.randint(0,2, TestLen)).astype(int)
s0TVs=np.array([i[1] for i in SystmaticVals]).astype(int)
#the random generator must have a different seed between each generation
#call in order to produce different values for each call
np.random.seed(16)
s0TVs=np.append(s0TVs, np.random.randint(0,2, TestLen)).astype(int)
s1TVs=np.array([i[0] for i in SystmaticVals]).astype(int)
#the random generator must have a different seed between each generation
#call in order to produce different values for each call
np.random.seed(17)
s1TVs=np.append(s1TVs, np.random.randint(0,2, TestLen)).astype(int)
TestLen=len(xTVs)
SystmaticVals, xTVs, s0TVs, s1TVs
Peeker.clear()
x=Signal(bool(0)); Peeker(x, 'x')
s0=Signal(bool(0)); Peeker(s0, 's0')
s1=Signal(bool(0)); Peeker(s1, 's1')
y0=Signal(bool(0)); Peeker(y0, 'y0')
y1=Signal(bool(0)); Peeker(y1, 'y1')
y2=Signal(bool(0)); Peeker(y2, 'y2')
y3=Signal(bool(0)); Peeker(y3, 'y3')
DUT=DEMUX1_4_DMS(x, s0, s1, y0, y1, y2, y3)
def DEMUX1_4_DMS_TB():
    """myHDL only testbench for module `DEMUX1_4_DMS`"""
@instance
def stimules():
for i in range(TestLen):
x.next=int(xTVs[i])
s0.next=int(s0TVs[i])
s1.next=int(s1TVs[i])
yield delay(1)
raise StopSimulation()
return instances()
sim=Simulation(DUT, DEMUX1_4_DMS_TB(), *Peeker.instances()).run()
Peeker.to_wavedrom('x', 's1', 's0', 'y0', 'y1', 'y2', 'y3')
DEMUX1_4_DMSData=Peeker.to_dataframe()
DEMUX1_4_DMSData=DEMUX1_4_DMSData[['x', 's1', 's0', 'y0', 'y1', 'y2', 'y3']]
DEMUX1_4_DMSData
Test=DEMUX1_4_DMSData==DEMUX1_4_ComboData[['x', 's1', 's0', 'y0', 'y1', 'y2', 'y3']]
Test=Test.all().all()
print(f'DEMUX1_4_DMS equivalent to DEMUX1_4_Combo: {Test}')
Explanation: myHDL Testing
End of explanation
DUT.convert()
VerilogTextReader('DEMUX1_4_DMS');
Explanation: Verilog Conversion
End of explanation
#create BitVectors
xTVs=intbv(int(''.join(xTVs.astype(str)), 2))[TestLen:]
s0TVs=intbv(int(''.join(s0TVs.astype(str)), 2))[TestLen:]
s1TVs=intbv(int(''.join(s1TVs.astype(str)), 2))[TestLen:]
xTVs, bin(xTVs), s0TVs, bin(s0TVs), s1TVs, bin(s1TVs)
@block
def DEMUX1_4_DMS_TBV():
    """myHDL -> testbench for module `DEMUX1_4_DMS`"""
x=Signal(bool(0))
s0=Signal(bool(0))
s1=Signal(bool(0))
y0=Signal(bool(0))
y1=Signal(bool(0))
y2=Signal(bool(0))
y3=Signal(bool(0))
@always_comb
def print_data():
print(x, s0, s1, y0, y1, y2, y3)
#Test Signal Bit Vectors
xTV=Signal(xTVs)
s0TV=Signal(s0TVs)
s1TV=Signal(s1TVs)
DUT=DEMUX1_4_DMS(x, s0, s1, y0, y1, y2, y3)
@instance
def stimules():
for i in range(TestLen):
x.next=int(xTV[i])
s0.next=int(s0TV[i])
s1.next=int(s1TV[i])
yield delay(1)
raise StopSimulation()
return instances()
TB=DEMUX1_4_DMS_TBV()
TB.convert(hdl="Verilog", initial_values=True)
VerilogTextReader('DEMUX1_4_DMS_TBV');
Explanation: \begin{figure}
\centerline{\includegraphics[width=10cm]{DEMUX1_4_DMS_RTL.png}}
\caption{\label{fig:D14DMSRTL} DEMUX1_4_DMS RTL schematic; Xilinx Vivado 2017.4}
\end{figure}
\begin{figure}
\centerline{\includegraphics[width=10cm]{DEMUX1_4_DMS_SYN.png}}
\caption{\label{fig:D14DMSSYN} DEMUX1_4_DMS Synthesized Schematic; Xilinx Vivado 2017.4}
\end{figure}
\begin{figure}
\centerline{\includegraphics[width=10cm]{DEMUX1_4_DMS_IMP.png}}
\caption{\label{fig:D14DMSIMP} DEMUX1_4_DMS Implementated Schematic; Xilinx Vivado 2017.4}
\end{figure}
myHDL to Verilog Testbench
End of explanation
@block
def DEMUX1_2_B(x, s, y0, y1):
    """1:2 DEMUX in behavioral modeling
    Inputs:
        x(bool): input feed
        s(bool): channel select
    Outputs:
        y0(bool): output channel 0
        y1(bool): output channel 1
    """
@always_comb
def logic():
if s==0:
#take note that since we have
#two outputs, their next-state values
#must both be set, else the last
#value will persist till it changes
y0.next=x
y1.next=0
else:
y0.next=0
y1.next=x
return instances()
Explanation: PYNQ-Z1 Deployment
Board Circuit
See Board Circuit for "1 Channel Input:4 Channel Output demultiplexer in Gate Level Logic"
Board Constraint
uses same 'DEMUX1_4.xdc' as "# 1 Channel Input:4 Channel Output demultiplexer in Gate Level Logic"
Video of Deployment
DEMUX1_4_DMS on PYNQ-Z1 (YouTube)
1:2 DEMUX via Behavioral IF
myHDL Module
End of explanation
TestLen=10
SystmaticVals=list(itertools.product([0,1], repeat=2))
xTVs=np.array([i[1] for i in SystmaticVals]).astype(int)
np.random.seed(15)
xTVs=np.append(xTVs, np.random.randint(0,2, TestLen)).astype(int)
sTVs=np.array([i[0] for i in SystmaticVals]).astype(int)
#the random genrator must have a differint seed beween each generation
#call in order to produce differint values for each call
np.random.seed(16)
sTVs=np.append(sTVs, np.random.randint(0,2, TestLen)).astype(int)
TestLen=len(xTVs)
SystmaticVals, sTVs, xTVs
Peeker.clear()
x=Signal(bool(0)); Peeker(x, 'x')
s=Signal(bool(0)); Peeker(s, 's')
y0=Signal(bool(0)); Peeker(y0, 'y0')
y1=Signal(bool(0)); Peeker(y1, 'y1')
DUT=DEMUX1_2_B(x, s, y0, y1)
def DEMUX1_2_B_TB():
    """myHDL only testbench for module `DEMUX1_2_B`"""
@instance
def stimules():
for i in range(TestLen):
x.next=int(xTVs[i])
s.next=int(sTVs[i])
yield delay(1)
raise StopSimulation()
return instances()
sim=Simulation(DUT, DEMUX1_2_B_TB(), *Peeker.instances()).run()
Peeker.to_wavedrom('x', 's', 'y0','y1')
DEMUX1_2_BData=Peeker.to_dataframe()
DEMUX1_2_BData=DEMUX1_2_BData[['x', 's', 'y0','y1']]
DEMUX1_2_BData
Test=DEMUX1_2_BData==DEMUX1_2_ComboData[['x', 's', 'y0','y1']]
Test=Test.all().all()
print(f'DEMUX1_2_B is equivalent to DEMUX1_2_Combo: {Test}')
Explanation: myHDL Testing
End of explanation
DUT.convert()
VerilogTextReader('DEMUX1_2_B');
Explanation: Verilog Conversion
End of explanation
#create BitVectors
xTVs=intbv(int(''.join(xTVs.astype(str)), 2))[TestLen:]
sTVs=intbv(int(''.join(sTVs.astype(str)), 2))[TestLen:]
xTVs, bin(xTVs), sTVs, bin(sTVs)
@block
def DEMUX1_2_B_TBV():
    """myHDL -> testbench for module `DEMUX1_2_B`"""
x=Signal(bool(0))
s=Signal(bool(0))
y0=Signal(bool(0))
y1=Signal(bool(0))
@always_comb
def print_data():
print(x, s, y0, y1)
#Test Signal Bit Vectors
xTV=Signal(xTVs)
sTV=Signal(sTVs)
DUT=DEMUX1_2_B(x, s, y0, y1)
@instance
def stimules():
for i in range(TestLen):
x.next=int(xTV[i])
s.next=int(sTV[i])
yield delay(1)
raise StopSimulation()
return instances()
TB=DEMUX1_2_B_TBV()
TB.convert(hdl="Verilog", initial_values=True)
VerilogTextReader('DEMUX1_2_B_TBV');
Explanation: \begin{figure}
\centerline{\includegraphics[width=10cm]{DEMUX1_2_B_RTL.png}}
\caption{\label{fig:D12BRTL} DEMUX1_2_B RTL schematic; Xilinx Vivado 2017.4}
\end{figure}
\begin{figure}
\centerline{\includegraphics[width=10cm]{DEMUX1_2_B_SYN.png}}
\caption{\label{fig:D12BSYN} DEMUX1_2_B Synthesized Schematic; Xilinx Vivado 2017.4}
\end{figure}
\begin{figure}
\centerline{\includegraphics[width=10cm]{DEMUX1_2_B_IMP.png}}
\caption{\label{fig:D12BIMP} DEMUX1_2_B Implementated Schematic; Xilinx Vivado 2017.4}
\end{figure}
myHDL to Verilog Testbench
End of explanation
@block
def DEMUX1_4_B(x, s0, s1, y0, y1, y2, y3):
    """1:4 DEMUX written via behavioral modeling
    Inputs:
        x(bool): input feed
        s0(bool): channel select 0
        s1(bool): channel select 1
    Outputs:
        y0(bool): output channel 0
        y1(bool): output channel 1
        y2(bool): output channel 2
        y3(bool): output channel 3
    """
@always_comb
def logic():
if s0==0 and s1==0:
y0.next=x; y1.next=0
y2.next=0; y3.next=0
elif s0==1 and s1==0:
y0.next=0; y1.next=x
y2.next=0; y3.next=0
elif s0==0 and s1==1:
y0.next=0; y1.next=0
y2.next=x; y3.next=0
else:
y0.next=0; y1.next=0
y2.next=0; y3.next=x
return instances()
Explanation: PYNQ-Z1 Deployment
Board Circuit
See Board Circuit for "1 Channel Input: 2 Channel Output demultiplexer in Gate Level Logic"
Board Constraint
uses same 'DEMUX1_2.xdc' as "1 Channel Input: 2 Channel Output demultiplexer in Gate Level Logic"
Video of Deployment
DEMUX1_2_B on PYNQ-Z1 (YouTube)
1:4 DEMUX via Behavioral if-elif-else
myHDL Module
End of explanation
TestLen=10
SystmaticVals=list(itertools.product([0,1], repeat=3))
xTVs=np.array([i[2] for i in SystmaticVals]).astype(int)
np.random.seed(15)
xTVs=np.append(xTVs, np.random.randint(0,2, TestLen)).astype(int)
s0TVs=np.array([i[1] for i in SystmaticVals]).astype(int)
#the random generator must have a different seed between each generation
#call in order to produce different values for each call
np.random.seed(16)
s0TVs=np.append(s0TVs, np.random.randint(0,2, TestLen)).astype(int)
s1TVs=np.array([i[0] for i in SystmaticVals]).astype(int)
#the random generator must have a different seed between each generation
#call in order to produce different values for each call
np.random.seed(17)
s1TVs=np.append(s1TVs, np.random.randint(0,2, TestLen)).astype(int)
TestLen=len(xTVs)
SystmaticVals, xTVs, s0TVs, s1TVs
Peeker.clear()
x=Signal(bool(0)); Peeker(x, 'x')
s0=Signal(bool(0)); Peeker(s0, 's0')
s1=Signal(bool(0)); Peeker(s1, 's1')
y0=Signal(bool(0)); Peeker(y0, 'y0')
y1=Signal(bool(0)); Peeker(y1, 'y1')
y2=Signal(bool(0)); Peeker(y2, 'y2')
y3=Signal(bool(0)); Peeker(y3, 'y3')
DUT=DEMUX1_4_B(x, s0, s1, y0, y1, y2, y3)
def DEMUX1_4_B_TB():
    """myHDL only testbench for module `DEMUX1_4_B`"""
@instance
def stimules():
for i in range(TestLen):
x.next=int(xTVs[i])
s0.next=int(s0TVs[i])
s1.next=int(s1TVs[i])
yield delay(1)
raise StopSimulation()
return instances()
sim=Simulation(DUT, DEMUX1_4_B_TB(), *Peeker.instances()).run()
Peeker.to_wavedrom('x', 's1', 's0', 'y0', 'y1', 'y2', 'y3')
DEMUX1_4_BData=Peeker.to_dataframe()
DEMUX1_4_BData=DEMUX1_4_BData[['x', 's1', 's0', 'y0', 'y1', 'y2', 'y3']]
DEMUX1_4_BData
Test=DEMUX1_4_BData==DEMUX1_4_ComboData[['x', 's1', 's0', 'y0', 'y1', 'y2', 'y3']]
Test=Test.all().all()
print(f'DEMUX1_4_B equivalent to DEMUX1_4_Combo: {Test}')
Explanation: myHDL Testing
End of explanation
DUT.convert()
VerilogTextReader('DEMUX1_4_B');
Explanation: Verilog Conversion
End of explanation
#create BitVectors
xTVs=intbv(int(''.join(xTVs.astype(str)), 2))[TestLen:]
s0TVs=intbv(int(''.join(s0TVs.astype(str)), 2))[TestLen:]
s1TVs=intbv(int(''.join(s1TVs.astype(str)), 2))[TestLen:]
xTVs, bin(xTVs), s0TVs, bin(s0TVs), s1TVs, bin(s1TVs)
@block
def DEMUX1_4_B_TBV():
    """myHDL -> testbench for module `DEMUX1_4_B`"""
x=Signal(bool(0))
s0=Signal(bool(0))
s1=Signal(bool(0))
y0=Signal(bool(0))
y1=Signal(bool(0))
y2=Signal(bool(0))
y3=Signal(bool(0))
@always_comb
def print_data():
print(x, s0, s1, y0, y1, y2, y3)
#Test Signal Bit Vectors
xTV=Signal(xTVs)
s0TV=Signal(s0TVs)
s1TV=Signal(s1TVs)
DUT=DEMUX1_4_B(x, s0, s1, y0, y1, y2, y3)
@instance
def stimules():
for i in range(TestLen):
x.next=int(xTV[i])
s0.next=int(s0TV[i])
s1.next=int(s1TV[i])
yield delay(1)
raise StopSimulation()
return instances()
TB=DEMUX1_4_B_TBV()
TB.convert(hdl="Verilog", initial_values=True)
VerilogTextReader('DEMUX1_4_B_TBV');
Explanation: \begin{figure}
\centerline{\includegraphics[width=10cm]{DEMUX1_4_B_RTL.png}}
\caption{\label{fig:D14BRTL} DEMUX1_4_B RTL schematic; Xilinx Vivado 2017.4}
\end{figure}
\begin{figure}
\centerline{\includegraphics[width=10cm]{DEMUX1_4_B_SYN.png}}
\caption{\label{fig:D14BSYN} DEMUX1_4_B Synthesized Schematic; Xilinx Vivado 2017.4}
\end{figure}
\begin{figure}
\centerline{\includegraphics[width=10cm]{DEMUX1_4_B_IMP.png}}
\caption{\label{fig:D14BIMP} DEMUX1_4_B Implementated Schematic; Xilinx Vivado 2017.4}
\end{figure}
myHDL to Verilog Testbench
End of explanation
@block
def DEMUX1_4_BV(x, S, Y):
    """1:4 DEMUX written via behavioral modeling with
    bit vectors
    Inputs:
        x(bool): input feed
        S(2bit vector): channel select bitvector;
            min=0, max=3
    Outputs:
        Y(4bit vector): output channel bitvector;
            values min=0, max=15; allowed is: 0,1,2,4,8
            in this application
    """
@always_comb
def logic():
#here concat is used to build up the word
#from the x input
if S==0:
Y.next=concat(intbv(0)[3:], x); '0001'
elif S==1:
Y.next=concat(intbv(0)[2:], x, intbv(0)[1:]); '0010'
elif S==2:
Y.next=concat(intbv(0)[1:], x, intbv(0)[2:]); '0100'
else:
Y.next=concat(x, intbv(0)[3:]); '1000'
return instances()
Explanation: PYNQ-Z1 Deployment
Board Circuit
See Board Circuit for "1 Channel Input:4 Channel Output demultiplexer in Gate Level Logic"
Board Constraint
uses same 'DEMUX1_4.xdc' as "# 1 Channel Input:4 Channel Output demultiplexer in Gate Level Logic"
Video of Deployment
DEMUX1_4_B on PYNQ-Z1 (YouTube)
Demultiplexer 1:4 Behavioral via Bitvectors
myHDL Module
End of explanation
xTVs=np.array([0,1])
xTVs=np.append(xTVs, np.random.randint(0,2,6)).astype(int)
TestLen=len(xTVs)
np.random.seed(12)
STVs=np.arange(0,4)
STVs=np.append(STVs, np.random.randint(0,4, 5))
TestLen, xTVs, STVs
Peeker.clear()
x=Signal(bool(0)); Peeker(x, 'x')
S=Signal(intbv(0)[2:]); Peeker(S, 'S')
Y=Signal(intbv(0)[4:]); Peeker(Y, 'Y')
DUT=DEMUX1_4_BV(x, S, Y)
def DEMUX1_4_BV_TB():
@instance
def stimules():
for i in STVs:
for j in xTVs:
S.next=int(i)
x.next=int(j)
yield delay(1)
raise StopSimulation()
return instances()
sim=Simulation(DUT, DEMUX1_4_BV_TB(), *Peeker.instances()).run()
Peeker.to_wavedrom('x', 'S', 'Y', start_time=0, stop_time=2*TestLen+2)
DEMUX1_4_BVData=Peeker.to_dataframe()
DEMUX1_4_BVData=DEMUX1_4_BVData[['x', 'S', 'Y']]
DEMUX1_4_BVData
DEMUX1_4_BVData['y0']=None; DEMUX1_4_BVData['y1']=None; DEMUX1_4_BVData['y2']=None; DEMUX1_4_BVData['y3']=None
DEMUX1_4_BVData[['y3', 'y2', 'y1', 'y0']]=DEMUX1_4_BVData[['Y']].apply(lambda bv: [int(i) for i in bin(bv, 4)], axis=1, result_type='expand')
DEMUX1_4_BVData['s0']=None; DEMUX1_4_BVData['s1']=None
DEMUX1_4_BVData[['s1', 's0']]=DEMUX1_4_BVData[['S']].apply(lambda bv: [int(i) for i in bin(bv, 2)], axis=1, result_type='expand')
DEMUX1_4_BVData=DEMUX1_4_BVData[['x', 'S', 's0', 's1', 'Y', 'y3', 'y2', 'y1', 'y0']]
DEMUX1_4_BVData
DEMUX1_4_BVData['y0Ref']=DEMUX1_4_BVData.apply(lambda row:y14_0EqN(row['x'], row['s0'], row['s1']), axis=1).astype(int)
DEMUX1_4_BVData['y1Ref']=DEMUX1_4_BVData.apply(lambda row:y14_1EqN(row['x'], row['s0'], row['s1']), axis=1).astype(int)
DEMUX1_4_BVData['y2Ref']=DEMUX1_4_BVData.apply(lambda row:y14_2EqN(row['x'], row['s0'], row['s1']), axis=1).astype(int)
DEMUX1_4_BVData['y3Ref']=DEMUX1_4_BVData.apply(lambda row:y14_3EqN(row['x'], row['s0'], row['s1']), axis=1).astype(int)
DEMUX1_4_BVData
Test=DEMUX1_4_BVData[['y0', 'y1', 'y2', 'y3']].sort_index(inplace=True)==DEMUX1_4_BVData[['y0Ref', 'y1Ref', 'y2Ref', 'y3Ref']].sort_index(inplace=True)
print(f'Module `DEMUX1_4_BV` works as expected: {Test}')
Explanation: myHDL Testing
End of explanation
DUT.convert()
VerilogTextReader('DEMUX1_4_BV');
Explanation: Verilog Conversion
End of explanation
ConstraintXDCTextReader('DEMUX1_4_BV');
Explanation: \begin{figure}
\centerline{\includegraphics[width=10cm]{DEMUX1_4_BV_RTL.png}}
\caption{\label{fig:D14BVRTL} DEMUX1_4_BV RTL schematic; Xilinx Vivado 2017.4}
\end{figure}
\begin{figure}
\centerline{\includegraphics[width=10cm]{DEMUX1_4_BV_SYN.png}}
\caption{\label{fig:D14BVSYN} DEMUX1_4_BV Synthesized Schematic; Xilinx Vivado 2017.4}
\end{figure}
\begin{figure}
\centerline{\includegraphics[width=10cm]{DEMUX1_4_BV_IMP.png}}
\caption{\label{fig:D14BVIMP} DEMUX1_4_BV Implementated Schematic; Xilinx Vivado 2017.4}
\end{figure}
myHDL to Verilog Testbench
(To Do!)
PYNQ-Z1 Board Deployment
Board Circuit
Board Constraints
End of explanation |
14,393 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Imputation
Step1: Create data frame
Step2: Add some missing values
Step3: Confirm the presence of null values
Step4: Create categorical variables
Step5: Create dummy variables
Step6: Impute data
Replace null values using MICE model
MICEData class
Step7: Imputation for one feature
The conditional_formula attribute is a dictionary containing the models that will be used to impute the data for each column. This can be updated to change the imputation model.
Step8: The perturb_params method must be called before running the impute method, that runs the imputation. It updates the specified column in the data attribute.
Step9: Impute all
Step10: Validation | Python Code:
import pandas as pd
import numpy as np
import statsmodels
from statsmodels.imputation import mice
import random
random.seed(10)
Explanation: Imputation
End of explanation
df = pd.read_csv("http://goo.gl/19NKXV")
df.head()
original = df.copy()
original.describe().loc['count',:]
Explanation: Create data frame
End of explanation
def add_nulls(df, n):
new = df.copy()
new.iloc[random.sample(range(new.shape[0]), n), :] = np.nan
return new
df.Cholesterol = add_nulls(df[['Cholesterol']], 20)
df.Smoking = add_nulls(df[['Smoking']], 20)
df.Education = add_nulls(df[['Education']], 20)
df.Age = add_nulls(df[['Age']], 5)
df.BMI = add_nulls(df[['BMI']], 5)
Explanation: Add some missing values
End of explanation
df.describe()
Explanation: Confirm the presence of null values
End of explanation
for col in ['Gender', 'Smoking', 'Education']:
df[col] = df[col].astype('category')
df.dtypes
Explanation: Create categorical variables
End of explanation
df = pd.get_dummies(df);
Explanation: Create dummy variables
End of explanation
imp = mice.MICEData(df)
Explanation: Impute data
Replace null values using MICE model
MICEData class
End of explanation
imp.conditional_formula['BMI']
before = imp.data.BMI.copy()
Explanation: Imputation for one feature
The conditional_formula attribute is a dictionary containing the models that will be used to impute the data for each column. This can be updated to change the imputation model.
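For example, one could swap in a simpler conditioning model for a single column before imputing it (the formula below is purely illustrative; which predictors make sense depends on the data):
```python
# illustrative only -- any patsy-style formula over the other columns could be used
imp.conditional_formula['BMI'] = 'BMI ~ Age + Cholesterol'
```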
End of explanation
imp.perturb_params('BMI')
imp.impute('BMI')
after = imp.data.BMI
import matplotlib.pyplot as plt
plt.clf()
fig, ax = plt.subplots(1, 1)
ax.plot(before, 'or', label='before', alpha=1, ms=8)
ax.plot(after, 'ok', label='after', alpha=0.8, mfc='w', ms=8)
plt.legend();
pd.DataFrame(dict(before=before.describe(), after=after.describe()))
before[before != after]
after[before != after]
Explanation: The perturb_params method must be called before running the impute method, which performs the imputation. It updates the specified column in the data attribute.
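The same perturb/impute pattern extends to several columns in a loop (a sketch; the column names are taken from the frame built above):
```python
for col in ['BMI', 'Age', 'Cholesterol']:
    imp.perturb_params(col)
    imp.impute(col)
```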
End of explanation
imp.update_all(2)
imp.plot_fit_obs('BMI');
imp.plot_fit_obs('Age');
Explanation: Impute all
End of explanation
original.mean()
for col in original.mean().index:
x = original.mean()[col]
y = imp.data[col].mean()
e = abs(x - y) / x
print("{:<12} mean={:>8.2f}, exact={:>8.2f}, error={:>5.2g}%".format(col, x, y, e * 100))
Explanation: Validation
End of explanation |
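One further check that could be added (a sketch; it assumes df still carries the NaN mask introduced earlier -- otherwise keep a copy of that mask before constructing MICEData):
```python
bmi_missing = df['BMI'].isnull()
print(imp.data.loc[bmi_missing, 'BMI'].describe())   # imputed values
print(original.loc[bmi_missing, 'BMI'].describe())   # values they replaced
```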
14,394 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Computing various MNE solutions
This example shows example fixed- and free-orientation source localizations
produced by the minimum-norm variants implemented in MNE-Python
Step1: Fixed orientation
First let's create a fixed-orientation inverse, with the default weighting.
Step2: Let's look at the current estimates using MNE. We'll take the absolute
value of the source estimates to simplify the visualization.
Step3: Next let's use the default noise normalization, dSPM
Step4: And sLORETA
Step5: And finally eLORETA
Step6: Free orientation
Now let's not constrain the orientation of the dipoles at all by creating
a free-orientation inverse.
Step7: Let's look at the current estimates using MNE. We'll take the absolute
value of the source estimates to simplify the visualization.
Step8: Next let's use the default noise normalization, dSPM
Step9: sLORETA
Step10: And finally eLORETA | Python Code:
# Author: Eric Larson <[email protected]>
#
# License: BSD-3-Clause
import mne
from mne.datasets import sample
from mne.minimum_norm import make_inverse_operator, apply_inverse
print(__doc__)
data_path = sample.data_path()
subjects_dir = data_path / 'subjects'
# Read data (just MEG here for speed, though we could use MEG+EEG)
meg_path = data_path / 'MEG' / 'sample'
fname_evoked = meg_path / 'sample_audvis-ave.fif'
evoked = mne.read_evokeds(fname_evoked, condition='Right Auditory',
baseline=(None, 0))
fname_fwd = meg_path / 'sample_audvis-meg-oct-6-fwd.fif'
fname_cov = meg_path / 'sample_audvis-cov.fif'
fwd = mne.read_forward_solution(fname_fwd)
cov = mne.read_cov(fname_cov)
# crop for speed in these examples
evoked.crop(0.05, 0.15)
Explanation: Computing various MNE solutions
This example shows example fixed- and free-orientation source localizations
produced by the minimum-norm variants implemented in MNE-Python:
MNE, dSPM, sLORETA, and eLORETA.
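Since the four estimates below differ only in the method argument, they could also be produced in a loop (a sketch; inv and lambda2 are defined in the cells that follow):
```python
for method in ('MNE', 'dSPM', 'sLORETA', 'eLORETA'):
    stc = abs(apply_inverse(evoked, inv, lambda2, method))
    brain = stc.plot(initial_time=0.08, hemi='lh', subjects_dir=subjects_dir)
```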
End of explanation
inv = make_inverse_operator(evoked.info, fwd, cov, loose=0., depth=0.8,
verbose=True)
Explanation: Fixed orientation
First let's create a fixed-orientation inverse, with the default weighting.
End of explanation
snr = 3.0
lambda2 = 1.0 / snr ** 2
kwargs = dict(initial_time=0.08, hemi='lh', subjects_dir=subjects_dir,
size=(600, 600), clim=dict(kind='percent', lims=[90, 95, 99]),
smoothing_steps=7)
stc = abs(apply_inverse(evoked, inv, lambda2, 'MNE', verbose=True))
brain = stc.plot(figure=1, **kwargs)
brain.add_text(0.1, 0.9, 'MNE', 'title', font_size=14)
Explanation: Let's look at the current estimates using MNE. We'll take the absolute
value of the source estimates to simplify the visualization.
End of explanation
stc = abs(apply_inverse(evoked, inv, lambda2, 'dSPM', verbose=True))
brain = stc.plot(figure=2, **kwargs)
brain.add_text(0.1, 0.9, 'dSPM', 'title', font_size=14)
Explanation: Next let's use the default noise normalization, dSPM:
End of explanation
stc = abs(apply_inverse(evoked, inv, lambda2, 'sLORETA', verbose=True))
brain = stc.plot(figure=3, **kwargs)
brain.add_text(0.1, 0.9, 'sLORETA', 'title', font_size=14)
Explanation: And sLORETA:
End of explanation
stc = abs(apply_inverse(evoked, inv, lambda2, 'eLORETA', verbose=True))
brain = stc.plot(figure=4, **kwargs)
brain.add_text(0.1, 0.9, 'eLORETA', 'title', font_size=14)
del inv
Explanation: And finally eLORETA:
End of explanation
inv = make_inverse_operator(evoked.info, fwd, cov, loose=1., depth=0.8,
verbose=True)
del fwd
Explanation: Free orientation
Now let's not constrain the orientation of the dipoles at all by creating
a free-orientation inverse.
End of explanation
stc = apply_inverse(evoked, inv, lambda2, 'MNE', verbose=True)
brain = stc.plot(figure=5, **kwargs)
brain.add_text(0.1, 0.9, 'MNE', 'title', font_size=14)
Explanation: Let's look at the current estimates using MNE. We'll take the absolute
value of the source estimates to simplify the visualization.
End of explanation
stc = apply_inverse(evoked, inv, lambda2, 'dSPM', verbose=True)
brain = stc.plot(figure=6, **kwargs)
brain.add_text(0.1, 0.9, 'dSPM', 'title', font_size=14)
Explanation: Next let's use the default noise normalization, dSPM:
End of explanation
stc = apply_inverse(evoked, inv, lambda2, 'sLORETA', verbose=True)
brain = stc.plot(figure=7, **kwargs)
brain.add_text(0.1, 0.9, 'sLORETA', 'title', font_size=14)
Explanation: sLORETA:
End of explanation
stc = apply_inverse(evoked, inv, lambda2, 'eLORETA', verbose=True,
method_params=dict(eps=1e-4)) # larger eps just for speed
brain = stc.plot(figure=8, **kwargs)
brain.add_text(0.1, 0.9, 'eLORETA', 'title', font_size=14)
Explanation: And finally eLORETA:
End of explanation |
14,395 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Title
Step1: Method 2
Step2: Method 3 | Python Code:
# Create a list of casualties from battles
battleDeaths = [482, 93, 392, 920, 813, 199, 374, 237, 244]
# Create a function that updates all battle deaths by adding 100
def updated(x): return x + 100
# Create a list that applies updated() to all elements of battleDeaths
list(map(updated, battleDeaths))
Explanation: Title: Apply Operations Over Items In A List
Slug: apply_operations_over_items_in_lists
Summary: Apply Operations Over Items In A List
Date: 2016-05-01 12:00
Category: Python
Tags: Basics
Authors: Chris Albon
Method 1: map()
End of explanation
# Create a list of deaths
casualties = [482, 93, 392, 920, 813, 199, 374, 237, 244]
# Create a variable where we will put the updated casualty numbers
casualtiesUpdated = []
# Create a function that for each item in casualties, adds 10
for x in casualties:
casualtiesUpdated.append(x + 100)
# View casualties variables
casualtiesUpdated
Explanation: Method 2: for x in y
End of explanation
# Map the lambda function x() over casualties
list(map((lambda x: x + 100), casualties))
Explanation: Method 3: lambda functions
End of explanation |
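A fourth approach, not in the original list, is a list comprehension, which reads much like the for-loop but builds the list in a single expression:
```python
# Add 100 to each casualty count with a list comprehension
[x + 100 for x in casualties]
```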
14,396 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Optimierung des fachlichen Schnitts
Vorgehen
Subdomains sind bereits anhand von Namensschemata gebildet
Abhängigkeiten zwischen Subdomains werden über die Abhängigkeitsbeziehung der zugrundeliegenden Typen identifiziert
Step2: Graph-Abfrage-Ergebnis
Als Ergebnis wird die Anzahl der Beziehungen zwischen den Typen der Subdomänen geliefert.
Step3: Visualisierungsdaten
Das Ergebnis wird in eine JSON-Datei abgelegt
Die Daten können direkt zur Visualisierung verwendet werden
Für die Visualisierung wird ein D3 Chord Diagramm genutzt | Python Code:
import py2neo
import pandas as pd
graph= py2neo.Graph()
query="""
MATCH
 (s1:Subdomain)<-[:BELONGS_TO]-
 (type:Type)-[r:DEPENDS_ON*0..1]->
 (dependency:Type)-[:BELONGS_TO]->(s2:Subdomain)
RETURN s1.name as from, s2.name as to, COUNT(r) as x_number
"""
result = graph.run(query).data()
df = pd.DataFrame(result)
Explanation: Optimizing the domain-oriented decomposition
Approach
Subdomains have already been formed based on naming conventions
Dependencies between subdomains are identified via the dependency relationships of the underlying types
End of explanation
df.head(10)
Explanation: Graph query result
The result is the number of relationships between the types of the subdomains.
End of explanation
import json
json_data = df.to_dict(orient='split')['data']
with open ( "chord_data.json", mode='w') as json_file:
json_file.write(json.dumps(json_data, indent=3))
json_data[:10]
Explanation: Visualization data
The result is written to a JSON file
The data can be used directly for the visualization
A D3 chord diagram is used for the visualization
End of explanation |
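If the chord layout needs a square adjacency matrix rather than (from, to, value) triples, the same DataFrame can be pivoted first (a sketch; the column names follow the Cypher query above, and the output file name is arbitrary):
```python
matrix = df.pivot(index='from', columns='to', values='x_number').fillna(0)
matrix.to_json("chord_matrix.json")
```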
14,397 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Convolutional Autoencoder
Sticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data.
Step1: Network Architecture
The encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.
Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data.
What's going on with the decoder
Okay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers aren't. Usually, you'll see deconvolutional layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but it reverse. A stride in the input layer results in a larger stride in the deconvolutional layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 path in a deconvolutional layer. Deconvolution is often called "transpose convolution" which is what you'll find with the TensorFlow API, with tf.nn.conv2d_transpose.
However, deconvolutional layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In this Distill article from Augustus Odena, et al, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with tf.image.resize_images, followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.
Exercise
Step2: Training
As before, here we'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays.
Step3: Denoising
As I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images.
Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.
Exercise
Step4: Checking out the performance
Here I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly great job of removing the noise, even though it's sometimes difficult to tell what the original number is.
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
Explanation: Convolutional Autoencoder
Sticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data.
End of explanation
learning_rate = 0.001
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
# One possible completion, consistent with the shape comments below (TF 1.x tf.layers API)
conv1 = tf.layers.conv2d(inputs_, 16, (3, 3), padding='same', activation=tf.nn.relu)
# Now 28x28x16
maxpool1 = tf.layers.max_pooling2d(conv1, (2, 2), (2, 2), padding='same')
# Now 14x14x16
conv2 = tf.layers.conv2d(maxpool1, 8, (3, 3), padding='same', activation=tf.nn.relu)
# Now 14x14x8
maxpool2 = tf.layers.max_pooling2d(conv2, (2, 2), (2, 2), padding='same')
# Now 7x7x8
conv3 = tf.layers.conv2d(maxpool2, 8, (3, 3), padding='same', activation=tf.nn.relu)
# Now 7x7x8
encoded = tf.layers.max_pooling2d(conv3, (2, 2), (2, 2), padding='same')
# Now 4x4x8
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7, 7))
# Now 7x7x8
conv4 = tf.layers.conv2d(upsample1, 8, (3, 3), padding='same', activation=tf.nn.relu)
# Now 7x7x8
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14, 14))
# Now 14x14x8
conv5 = tf.layers.conv2d(upsample2, 8, (3, 3), padding='same', activation=tf.nn.relu)
# Now 14x14x8
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28, 28))
# Now 28x28x8
conv6 = tf.layers.conv2d(upsample3, 16, (3, 3), padding='same', activation=tf.nn.relu)
# Now 28x28x16
logits = tf.layers.conv2d(conv6, 1, (3, 3), padding='same', activation=None)
# Now 28x28x1
# Pass logits through sigmoid to get the reconstructed image
decoded = tf.nn.sigmoid(logits, name='decoded')
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
Explanation: Network Architecture
The encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.
Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data.
What's going on with the decoder
Okay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers aren't. Usually, you'll see deconvolutional layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the deconvolutional layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 patch in a deconvolutional layer. Deconvolution is often called "transpose convolution", which is what you'll find with the TensorFlow API, as tf.nn.conv2d_transpose.
However, deconvolutional layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In this Distill article from Augustus Odena, et al, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with tf.image.resize_images, followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.
Exercise: Build the network shown above. Remember that a convolutional layer with strides of 1 and 'same' padding won't reduce the height and width. That is, if the input is 28x28 and the convolution layer has stride = 1 and 'same' padding, the convolutional layer will also be 28x28. The max-pool layers are used to reduce the width and height. A stride of 2 will reduce the size by 2. Odena et al claim that nearest neighbor interpolation works best for the upsampling, so make sure to include that as a parameter in tf.image.resize_images or use tf.image.resize_nearest_neighbor.
End of explanation
sess = tf.Session()
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
imgs = batch[0].reshape((-1, 28, 28, 1))
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed = sess.run(decoded, feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
Explanation: Training
As before, here we'll train the network. Instead of flattening the images, though, we can pass them in as 28x28x1 arrays.
End of explanation
learning_rate = 0.001
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
# One possible completion with the suggested 32-32-16 depths (TF 1.x tf.layers API)
conv1 = tf.layers.conv2d(inputs_, 32, (3, 3), padding='same', activation=tf.nn.relu)
# Now 28x28x32
maxpool1 = tf.layers.max_pooling2d(conv1, (2, 2), (2, 2), padding='same')
# Now 14x14x32
conv2 = tf.layers.conv2d(maxpool1, 32, (3, 3), padding='same', activation=tf.nn.relu)
# Now 14x14x32
maxpool2 = tf.layers.max_pooling2d(conv2, (2, 2), (2, 2), padding='same')
# Now 7x7x32
conv3 = tf.layers.conv2d(maxpool2, 16, (3, 3), padding='same', activation=tf.nn.relu)
# Now 7x7x16
encoded = tf.layers.max_pooling2d(conv3, (2, 2), (2, 2), padding='same')
# Now 4x4x16
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7, 7))
# Now 7x7x16
conv4 = tf.layers.conv2d(upsample1, 16, (3, 3), padding='same', activation=tf.nn.relu)
# Now 7x7x16
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14, 14))
# Now 14x14x16
conv5 = tf.layers.conv2d(upsample2, 32, (3, 3), padding='same', activation=tf.nn.relu)
# Now 14x14x32
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28, 28))
# Now 28x28x32
conv6 = tf.layers.conv2d(upsample3, 32, (3, 3), padding='same', activation=tf.nn.relu)
# Now 28x28x32
logits = tf.layers.conv2d(conv6, 1, (3, 3), padding='same', activation=None)
# Now 28x28x1
# Pass logits through sigmoid to get the reconstructed image
decoded = tf.nn.sigmoid(logits, name='decoded')
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
sess = tf.Session()
epochs = 100
batch_size = 200
# Set's how much noise we're adding to the MNIST images
noise_factor = 0.5
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images from the batch
imgs = batch[0].reshape((-1, 28, 28, 1))
# Add random noise to the input images
noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape)
# Clip the images to be between 0 and 1
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
# Noisy images as inputs, original images as targets
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
Explanation: Denoising
As I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use the noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images.
Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.
Exercise: Build the network for the denoising autoencoder. It's the same as before, but with deeper layers. I suggest 32-32-16 for the depths, but you can play with these numbers, or add more layers.
End of explanation
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
noisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape)
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
reconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([noisy_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
Explanation: Checking out the performance
Here I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly great job of removing the noise, even though it's sometimes difficult to tell what the original number is.
End of explanation |
14,398 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
I'm using tensorflow 2.10.0. | Problem:
import tensorflow as tf
import numpy as np
np.random.seed(10)
a = tf.constant(np.random.rand(50, 100, 512))
def g(a):
return tf.expand_dims(a, 2)
result = g(a.__copy__()) |
14,399 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Symbolic derivation qubit-cavity Hamiltonian
Step1: The Jaynes-Cummings model
The Jaynes-Cummings model is one of the most elementary quantum mechanical models of light-matter interaction. It describes a single two-level atom that interacts with a single harmonic-oscillator mode of an electromagnetic cavity.
The Hamiltonian for a two-level system in its eigenbasis can be written as
$$
H = \frac{1}{2}\omega_q \sigma_z
$$
and the Hamiltonian of a quantum harmonic oscillator (cavity or nanomechanical resonator, NR) is
$$
H = \hbar\omega_r (a^\dagger a + 1/2)
$$
$$
H = \hbar\omega_{NR} (b^\dagger b + 1/2)
$$
The atom interacts with the electromagnetic field produced by the cavity (NR) mode $a + a^\dagger$ ($b + b^\dagger$) through its dipole moment. The dipole-transition operator is $\sigma_x$ (which causes transitions between the two dipole states of the atom). The combined atom-cavity Hamiltonian can therefore be written in the form
$$
H =
\hbar \omega_r (a^\dagger a + 1/2)
+
\hbar \omega_{NR} (b^\dagger b + 1/2)
+ \frac{1}{2}\hbar\Omega\sigma_z
+
\hbar
\sigma_x \left( g(a + a^\dagger)
+
\lambda(b + b^\dagger)\right)
$$
Although the Jaynes-Cummings Hamiltonian allows us to evolve a given initial state according to the Schrödinger equation, in an experiment we would like to predict the response of the coupled cavity-NR-qubit system under the influence of driving fields for the cavity and qubit, and also account for the effects of dissipation and dephasing (not treated here).
The external coherent-state inputs may be incorporated in the Jaynes-Cummings Hamiltonian by adding terms involving the amplitudes of the driving fields $E_d$, $E_s$, $E_p$ and their frequencies $\omega_d$, $\omega_s$, $\omega_p$:
$$
H_{cavity} =
E_d \left(e^{i\omega_dt}a +e^{-i\omega_dt}a^\dagger\right)
$$
$$
H_{qubit} =
E_s \left(e^{i\omega_st}\sigma_- +e^{-i\omega_st}\sigma_+\right)
$$
$$
H_{NR} =
E_p \left(e^{i\omega_p t}b + e^{-i\omega_p t}b^\dagger\right)
$$
To obtain the Jaynes-Cummings Hamiltonian
$$
H =
\hbar\omega_r (a^\dagger a + 1/2)
+ \frac{1}{2}\hbar\Omega\sigma_z
+ \hbar \sigma_x \left( g(a + a^\dagger) + \lambda(b + b^\dagger)\right)
$$
we also need to perform a rotating-wave approximation, which simplifies the interaction part of the Hamiltonian. In the following we will begin by looking at how these two Hamiltonians are related.
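For orientation, keeping only the co-rotating (energy-conserving) terms of the $\sigma_x$ coupling gives the familiar Jaynes-Cummings form of the interaction (standard textbook result, stated here without derivation):
$$
H_{int}^{RWA} \approx \hbar g\left(a\,\sigma_+ + a^\dagger \sigma_-\right) + \hbar\lambda\left(b\,\sigma_+ + b^\dagger \sigma_-\right)
$$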
To represent the atom-cavity Hamiltonian in SymPy we create instances of the operator classes BosonOp and SigmaX, SigmaY, and SigmaZ, and use these to construct the Hamiltonian (we work in units where $\hbar = 1$).
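As a minimal sketch of that construction (a preview only; the full code later in this document also defines the NR mode $b$, the drive terms, and the remaining symbols):
from sympy import *
from sympsi import *
from sympsi.boson import *
from sympsi.pauli import *

omega_r, omega_q, g = symbols("omega_r, omega_q, g")
a, sx, sz = BosonOp("a"), SigmaX(), SigmaZ()
# Bare cavity + qubit plus the dipole coupling, with hbar = 1
H_cavity_qubit = omega_r * Dagger(a) * a + omega_q / 2 * sz + g * sx * (a + Dagger(a))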
Step2: First move to the interaction picture
Step3: We introduce the detuning parameter $\Delta_{CPW} = \omega_q - \omega_r$ and substitute it into this expression
Step4: Now, in the rotating-wave approximation we can drop the fast oscillating terms containing the factors $e^{\pm i(\omega_q + \omega_r)t}$
Step5: This is the interaction term of the Jaynes-Cummings model in the interaction picture. If we transform back to the Schrödinger picture we have
Step6: Linearized interaction
Step7: Dispersive Regime | Python Code:
from sympy import *
init_printing()
from sympsi import *
from sympsi.boson import *
from sympsi.pauli import *
Explanation: Symbolic derivation qubit-cavity Hamiltonian
Based on: J. R. Johansson ([email protected]), http://jrjohansson.github.io, and Eunjong Kim.
Setup modules
End of explanation
# CPW, qubit and NR energies
omega_r, omega_q, omega_nr = symbols("omega_r, omega_q, omega_{NR}")
# Coupling CPW-qubit, NR_qubit
g, L, chi, eps = symbols("g, lambda, chi, epsilon")
# Drives and detunings
Delta_d, Delta_s, Delta_p = symbols("Delta_d, Delta_s, Delta_p")
A, B, C = symbols("A,B,C")  # Electric field amplitudes
omega_d, omega_s, omega_p = symbols("omega_d, omega_s, omega_p")  # drive frequencies
# Detuning CPW-qubit, NR-qubit
Delta_CPW, Delta_NR = symbols("Delta_{CPW},Delta_{NR}")
# auxiliary variables
x, y, t, Hsym = symbols("x, y, t, H ")
Delta_SP, Delta_SD = symbols("Delta_{SP},Delta_{SD}")
# omega_r, omega_q, g, Delta_d, Delta_s, t, x, chi, Hsym = symbols("omega_r, omega_q, g, Delta_d, Delta_s, t, x, chi, H")
# A, B, C = symbols("A,B,C") # Electric field amplitude
# omega_d, omega_s = symbols("omega_d, omega_s") #
# omega_nr, L = symbols("omega_{NR},lambda")
# Delta,Delta_t = symbols("Delta,Delta_t")
# y, omega_t = symbols("y, omega_t")
sx, sy, sz, sm, sp = SigmaX(), SigmaY(), SigmaZ(), SigmaMinus(), SigmaPlus()
a = BosonOp("a")
b = BosonOp("b")
H = omega_r * Dagger(a) * a + omega_q/2 * sz + omega_nr * Dagger(b) * b
H_int = sx * (g * (a + Dagger(a)))
H_int_2 = sx *( L * (b + Dagger(b)))
H_drive_r = A * (exp(I*omega_d*t)*a + exp(-I*omega_d*t)*Dagger(a))
H_drive_q = B * (exp(I*omega_s*t)*sm + exp(-I*omega_s*t)*sp)
H_drive_NR = C * (exp(I*omega_p*t)*b + exp(-I*omega_p*t)*Dagger(b))
H_total = H + H_drive_r + H_drive_q + H_drive_NR #+ H_int + H_int_2 +
Eq(Hsym,H_total)
Explanation: The Jaynes-Cummings model
The Jaynes-Cummings model is one of the most elementary quantum mechanical models of light-matter interaction. It describes a single two-level atom that interacts with a single harmonic-oscillator mode of an electromagnetic cavity.
The Hamiltonian for a two-level system in its eigenbasis can be written as
$$
H = \frac{1}{2}\omega_q \sigma_z
$$
and the Hamiltonian of a quantum harmonic oscillator (cavity or nanomechanical resonator, NR) is
$$
H = \hbar\omega_r (a^\dagger a + 1/2)
$$
$$
H = \hbar\omega_{NR} (b^\dagger b + 1/2)
$$
The atom interacts with the electromagnetic field produced by the cavity (NR) mode $a + a^\dagger$ ($b + b^\dagger$) through its dipole moment. The dipole-transition operator is $\sigma_x$ (which causes transitions between the two dipole states of the atom). The combined atom-cavity Hamiltonian can therefore be written in the form
$$
H =
\hbar \omega_r (a^\dagger a + 1/2)
+
\hbar \omega_{NR} (b^\dagger b + 1/2)
+ \frac{1}{2}\hbar\Omega\sigma_z
+
\hbar
\sigma_x \left( g(a + a^\dagger)
+
\lambda(b + b^\dagger)\right)
$$
Although the Jaynes-Cummings Hamiltonian allows us to evolve a given initial state according to the Schrödinger equation, in an experiment we would like to predict the response of the coupled cavity-NR-qubit system under the influence of driving fields for the cavity and qubit, and also account for the effects of dissipation and dephasing (not treated here).
The external coherent-state inputs may be incorporated in the Jaynes-Cummings Hamiltonian by adding terms involving the amplitudes of the driving fields $E_d$, $E_s$, $E_p$ and their frequencies $\omega_d$, $\omega_s$, $\omega_p$:
$$
H_{cavity} =
E_d \left(e^{i\omega_dt}a +e^{-i\omega_dt}a^\dagger\right)
$$
$$
H_{qubit} =
E_s \left(e^{i\omega_st}\sigma_- +e^{-i\omega_st}\sigma_+\right)
$$
$$
H_{NR} =
E_p \left(e^{i\omega_p t}b + e^{-i\omega_p t}b^\dagger\right)
$$
To obtain the Jaynes-Cummings Hamiltonian
$$
H =
\hbar\omega_r (a^\dagger a + 1/2)
+ \frac{1}{2}\hbar\Omega\sigma_z
+ \hbar \sigma_x \left( g(a + a^\dagger) + \lambda(b + b^\dagger)\right)
$$
we also need to perform a rotating-wave approximation, which simplifies the interaction part of the Hamiltonian. In the following we will begin by looking at how these two Hamiltonians are related.
To represent the atom-cavity Hamiltonian in SymPy we create instances of the operator classes BosonOp and SigmaX, SigmaY, and SigmaZ, and use these to construct the Hamiltonian (we work in units where $\hbar = 1$).
End of explanation
U = exp(I * omega_r * t * Dagger(a) * a)
U
H2 = hamiltonian_transformation(U, H_total.expand(),independent = True)
H2
U = exp(I * omega_q * t * sp * sm)
U
H3 = hamiltonian_transformation(U, H2.expand(), independent = True)
H3 = H3.subs(sx, sm + sp).expand()
H3 = powsimp(H3)
H3
Explanation: First move to the interaction picture:
End of explanation
# trick to simplify exponents
def simplify_exp(e):
if isinstance(e, exp):
return exp(simplify(e.exp.expand()))
if isinstance(e, (Add, Mul)):
return type(e)(*(simplify_exp(arg) for arg in e.args))
return e
H4 = simplify_exp(H3).subs(-omega_r + omega_q, Delta_CPW)
H4
Explanation: We introduce the detuning parameter $\Delta_{CPW} = \omega_q - \omega_r$ and substitute it into this expression
End of explanation
H5 = drop_terms_containing(H4, [exp( I * (omega_q + omega_r) * t),
exp(-I * (omega_q + omega_r) * t)])
H5 = drop_c_number_terms(H5.expand())
Eq(Hsym, H5)
Explanation: Now, in the rotating-wave approximation we can drop the fast oscillating terms containing the factors $e^{\pm i(\omega_q + \omega_r)t}$
End of explanation
U = exp(-I * omega_r * t * Dagger(a) * a)
H6 = hamiltonian_transformation(U, H5.expand(), independent =True)
U = exp(-I * omega_q * t * sp * sm)
H7 = hamiltonian_transformation(U, H6.expand(),independent = True)
H8 = simplify_exp(H7).subs(Delta_CPW, omega_q - omega_r)
H8 = simplify_exp(powsimp(H8)).expand()
H8 = drop_c_number_terms(H8)
H = collect(H8, [g])
Eq(Hsym, H)
H = collect(H,[L*sm])
H = collect(H,[L*sp])
H = collect(H,[L*(Dagger(b)+b), A, B ,C])
H = H.subs(sm+sp,sx)
H
Explanation: This is the interaction term of the Jaynes-Cummings model in the interaction picture. If we transform back to the Schrödinger picture we have:
End of explanation
U = exp(I * Dagger(a) * a * omega_d * t)
H1 = hamiltonian_transformation(U, H_total, independent=True)
H1
U = exp(I * Dagger(b) * b * omega_p * t)
H2 = hamiltonian_transformation(U, H1, independent=True)
H2
U = exp(I * Dagger(sm) * sm * omega_s * t)
H3 = hamiltonian_transformation(U, H2, independent=True)
H3
# H4 = simplify_exp(H3).subs(omega_d - omega_s, -Delta_SD)
H4 = H3.collect([A,B,C,Dagger(a)*a,Dagger(b)*b, g, L])
H4
H5 = H4.expand().subs(-sp*sm , -sz/2 ).collect([A,B,C,Dagger(a)*a,Dagger(b)*b, g, L, sz])
H5
H5 = H5.subs(-omega_d + omega_r, -Delta_d).subs(-omega_p + omega_nr, - Delta_p).subs(omega_q/2 - omega_s/2, Delta_s); H5
H_total_01 = H5
Eq(Hsym,H_total_01)
H_total_01 = H_total_01 + H_int + H_int_2;
Eq(Hsym,H_total_01)
U = exp(I * omega_r * t * Dagger(a) * a)
U
H2 = hamiltonian_transformation(U, H_total_01.expand(),independent = True)
H2
U = exp(I * omega_q * t * sp * sm)
U
H3 = hamiltonian_transformation(U, H2.expand(), independent = True)
H3 = H3.subs(sx, sm + sp).expand()
H3 = powsimp(H3)
H3
H4 = simplify_exp(H3).subs(-omega_r + omega_q, Delta_CPW)
H4
H5 = drop_terms_containing(H4, [exp( I * (omega_q + omega_r) * t),
exp(-I * (omega_q + omega_r) * t)])
H5 = drop_c_number_terms(H5.expand())
Eq(Hsym, H5)
U = exp(-I * omega_r * t * Dagger(a) * a)
H6 = hamiltonian_transformation(U, H5.expand(), independent =True)
U = exp(-I * omega_q * t * sp * sm)
H7 = hamiltonian_transformation(U, H6.expand(),independent = True)
H8 = simplify_exp(H7).subs(Delta_CPW, omega_q - omega_r)
H8 = simplify_exp(powsimp(H8)).expand()
H8 = drop_c_number_terms(H8)
H = collect(H8, [g])
H = collect(H,[L*sm])
H = collect(H,[L*sp])
H = collect(H,[L*(Dagger(b)+b), A, B ,C])
# H = H.subs(sm+sp,sx)
H
Explanation: Linearized interaction
End of explanation
U = exp((x * ( -Dagger(a) * sm + a * sp )).expand())
U
H1 = hamiltonian_transformation(U, H, expansion_search=False, N=4).expand()
H1 = qsimplify(H1)
# H1
H2 = drop_terms_containing(H1.expand(), [ x**4,x**5,x**6])
# H2
H3 = H2.subs(x, g/Delta_CPW)
# H3
H4 = drop_c_number_terms(H3)
# H4
H5 = collect(H4, [Dagger(a) * a, sz])
# H5
U = exp(I * omega_r * t * Dagger(a) * a)
H6 = hamiltonian_transformation(U, H5.expand(),independent = True);
# H6
U = exp(I * omega_q * t * Dagger(sm) * sm)
H7 = hamiltonian_transformation(U, H6.expand(),independent=True);
# H7
H8 = drop_terms_containing(H7, [exp(I * omega_r * t), exp(-I * omega_r * t),
exp(I * omega_q * t), exp(-I * omega_q * t)])
# H8
H9 = qsimplify(H8)
H9 = collect(H9, [Dagger(a) * a, sz])
# H9
U = exp(-I * omega_r * t * Dagger(a) * a)
H10 = hamiltonian_transformation(U, H9.expand(),independent = True);
# H10
U = exp(-I * omega_q * t * Dagger(sm) * sm)
H11 = hamiltonian_transformation(U, H10.expand(),independent=True);
# H11
# H11 = drop_c_number_terms(H11)
H12 = qsimplify(H11)
H12 = collect(H12, [Dagger(a) * a, sz])
H12 = H12.subs(omega_r, omega_q-Delta_CPW).expand().collect([Dagger(a)*a, Dagger(b)*b, sz, L*g/2/Delta_CPW]).subs(omega_q-Delta_CPW,omega_r)
Eq(Hsym, H12)
Explanation: Dispersive Regime
End of explanation |
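For reference — this is the standard textbook expression the transformation above is aiming for, not something read directly off the printed output — in the dispersive limit $g \ll |\Delta_{CPW}|$ the qubit-cavity part of the Hamiltonian reduces to
$$
H_{disp} \approx \left(\omega_r + \chi\,\sigma_z\right) a^\dagger a + \frac{1}{2}\left(\omega_q + \chi\right)\sigma_z ,
\qquad
\chi = \frac{g^2}{\Delta_{CPW}} ,
$$
so the cavity frequency is pulled by $\pm\chi$ depending on the qubit state, which is what makes a dispersive readout of the qubit possible.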