ParticleModel.ipynb
###Markdown
# Simplified one-dimensional oil spill model

This notebook implements the model used in the paper "On the use of random walk schemes in oil spill modelling". Executing all the cells in the notebook in order will reproduce the figures from the paper, although with fewer particles and a longer timestep. Adjust the numerical parameters as desired. All citations in the notebook refer to entries in the list of references in the paper.

This model uses an implementation with variable-size arrays to hold the depth and droplet diameter of the submerged particles. The maximum number of particles is $N_p$, so this is the maximum length of the arrays. Surfaced particles are removed, making the arrays shorter. Re-submerged particles are added, making the arrays longer.

## Functions

The cell below contains the functions needed to build a simple random walk model for vertical diffusion, with a consistent implementation of the random walk steplength. The functions are all written so they can be called on an entire array of particle positions (depths) at once.
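As a minimal sketch of this variable-length bookkeeping (illustrative values only, using nothing beyond NumPy), removing surfaced particles is a boolean mask and re-submerging is a concatenation:

```python
import numpy as np

# Illustrative state: depths (m, positive down) and droplet diameters (m)
z = np.array([0.5, 1.5, -0.2, 3.0])    # one particle has crossed the surface
d = np.array([1e-4, 2e-4, 3e-4, 4e-4])

# Surfacing: keep only particles still below the surface -- arrays shrink
mask = z >= 0.0
z, d = z[mask], d[mask]

# Re-submersion: append newly entrained particles -- arrays grow again
z = np.concatenate((z, np.array([0.8, 1.1])))
d = np.concatenate((d, np.array([1.5e-4, 2.5e-4])))
```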
###Code
import numpy as np
import matplotlib.pyplot as plt
from collections import namedtuple

############################
#### Physical constants ####
############################
PhysicalConstants = namedtuple('PhysicalConstants', ('g', 'rho_w', 'nu'))
CONST = PhysicalConstants(g=9.81, # Acceleration due to gravity (m/s**2)
rho_w=1025, # Density of sea water (kg/m**3)
nu=1.358e-6, # Kinematic viscosity of sea water (m**2/s)
)
###############################
#### Numerical derivatives ####
###############################
def ddz(K, z, t):
'''
Numerical derivative of K(z, t).
This function calculates a numerical partial derivative
of K(z, t), with respect to z using forward finite difference.
K: diffusivity as a function of depth (m**2/s)
z: current particle depth (m)
t: current time (s)
'''
dz = 1e-6
return (K(z+dz, t) - K(z, t)) / dz
#############################
#### Random walk schemes ####
#############################
def naivestep(K, z, t, dt):
'''
Solving the naïve equation with the Euler-Maruyama scheme:
dz = sqrt(2K(z,t))*dW
See Visser (1997) and Gräwe (2011) for details.
K: diffusivity as a function of depth (m**2/s)
z: current particle depth (m)
t: current time (s)
dt: timestep (s)
'''
dW = np.random.normal(loc = 0, scale = np.sqrt(dt), size = z.size)
return z + np.sqrt(2*K(z, t))*dW
def correctstep(K, z, t, dt):
'''
Solving the corrected equation with the Euler-Maruyama scheme:
dz = K'(z, t)*dt + sqrt(2K(z,t))*dW
See Visser (1997) and Gräwe (2011) for details.
K: diffusivity as a function of depth (m**2/s)
z: current particle depth (m)
t: current time (s)
dt: timestep (s)
'''
dW = np.random.normal(loc = 0, scale = np.sqrt(dt), size = z.size)
dKdz = ddz(K, z, t)
return z + dKdz*dt + np.sqrt(2*K(z, t))*dW
####################
#### Rise speed ####
####################
def rise_speed(d, rho):
'''
Calculate the rise speed (m/s) of a droplet due to buoyancy.
This scheme uses Stokes' law at small Reynolds numbers, with
a harmonic transition to a constant drag coefficient at high
Reynolds numbers.
See Johansen (2000), Eq. (14) for details.
d: droplet diameter (m)
rho: droplet density (kg/m**3)
'''
# Physical constants
pref = 1.054 # Numerical prefactor
nu = CONST.nu # Kinematic viscosity of seawater (m**2/s)
rho_w = CONST.rho_w # Density of seawater (kg/m**3)
g = CONST.g # Acceleration of gravity (m/s**2)
g_ = g*(rho_w - rho) / rho_w
if g_ == 0.0:
return 0.0*d
else:
w1 = d**2 * g_ / (18*nu)
w2 = np.sqrt(d*abs(g_)) * pref * (g_/np.abs(g_)) # Last bracket sets sign
return w1*w2/(w1+w2)
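
# For example (illustrative check, values computed from the formulas above):
#   rise_speed(np.array([5e-4]), 992) is approximately 2.6e-3 m/s,
#   i.e. a 0.5 mm droplet of this oil rises a few millimetres per second.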
###########################
#### Utility functions ####
###########################
def advect(z, dt, d, rho):
'''
    Return the new particle depth (m) after rising due to buoyancy,
    assuming constant rise speed (at least within a timestep).
z: current droplet depth (m)
dt: timestep (s)
d: droplet diameter (m)
rho: droplet density (kg/m**3)
'''
return z - dt*rise_speed(d, rho)
def reflect(z):
'''
Reflect from surface.
Depth is positive downwards.
z: current droplet depth (m)
'''
# Reflect from surface
z = np.abs(z)
return z
def surface(z, d):
'''
Remove surfaced elements.
This method shortens the array by removing surfaced particles.
z: current droplet depth (m)
d: droplet diameter (m)
'''
    mask = z >= 0.0   # keep only particles that are still submerged
return z[mask], d[mask]
################################
#### Wave-related functions ####
################################
def waveperiod(windspeed, fetch):
'''
Calculate the peak wave period (s)
Based on the JONSWAP model and associated empirical relations.
See Carter (1982) for details
windspeed: windspeed (m/s)
fetch: fetch (m)
'''
# Avoid division by zero
if windspeed > 0:
# Constants for the JONSWAP model:
g = CONST.g # Acceleration of gravity
t_const = 0.286 # Nondimensional period constant.
t_max = 8.134 # Nondimensional period maximum.
# Calculate wave period
t_nodim = t_const * (g * fetch / windspeed**2)**(1./3.)
waveperiod = np.minimum(t_max, t_nodim) * windspeed / g
else:
waveperiod = 1.0
return waveperiod
def waveheight(windspeed, fetch):
'''
Calculate the significant wave height (m)
Based on the JONSWAP model and associated empirical relations.
See Carter (1982) for details
windspeed: windspeed (m/s)
fetch: fetch (m)
'''
# Avoid division by zero
if windspeed > 0:
# Constants for the JONSWAP model:
g = CONST.g # Acceleration of gravity
h_max = 0.243 # Nondimensional height maximum.
h_const = 0.0016 # Nondimensional height constant.
# Calculate wave height
h_nodim = h_const * np.sqrt(g * fetch / windspeed**2)
waveheight = np.minimum(h_max, h_nodim) * windspeed**2 / g
else:
waveheight = 0.0
return waveheight
def jonswap(windspeed, fetch = 170000):
'''
Calculate wave height and wave period, from wind speed
and fetch length. A large default fetch means assuming
fully developed sea (i.e., not fetch-limited).
See Carter (1982) for details.
windspeed: windspeed (m/s)
fetch: fetch length (m)
'''
Hs = waveheight(windspeed, fetch)
Tp = waveperiod(windspeed, fetch)
return Hs, Tp
def Fbw(windspeed, Tp):
'''
Fraction of breaking waves per second.
See Holthuijsen and Herbers (1986)
and Delvigne and Sweeney (1988) for details.
windspeed: windspeed (m/s)
Tp: wave period (s)
'''
if windspeed > 5:
return 0.032 * (windspeed - 5)/Tp
else:
return 0
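
# For example (illustrative values computed from the formulas above):
#   jonswap(9) gives Hs ~ 1.9 m and Tp ~ 7.2 s with the default fetch,
#   and Fbw(9, 7.2) ~ 0.018 breaking waves per second.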
#######################################
#### Entrainment related functions ####
#######################################
def entrainmentrate(windspeed, Tp, Hs, rho, mu, ift):
    '''
    Entrainment rate (s**-1).
    See Li et al. (2017) for details.
    windspeed: windspeed (m/s)
    Tp: wave period (s)
    Hs: wave height (m)
    rho: density of oil (kg/m**3)
    mu: dynamic viscosity of oil (kg/m/s)
    ift: oil-water interfacial tension (N/m)
    '''
# Physical constants
g = CONST.g # Acceleration of gravity (m/s**2)
rho_w = CONST.rho_w # Density of seawater (kg/m**3)
# Model parameters (empirical constants from Li et al. (2017))
a = 4.604 * 1e-10
b = 1.805
c = -1.023
# Rayleigh-Taylor instability maximum diameter:
d0 = 4 * np.sqrt(ift / ((rho_w - rho)*g))
# Ohnesorge number
Oh = mu / np.sqrt(rho * ift * d0)
# Weber number
We = d0 * rho_w * g * Hs / ift
return a * (We**b) * (Oh**c) * Fbw(windspeed, Tp)
def weber_natural_dispersion(rho, mu, ift, Hs, h):
'''
Weber natural dispersion model. Predicts median droplet size D50 (m).
Johansen, 2015.
rho: oil density (kg/m**3)
mu: dynamic viscosity of oil (kg/m/s)
ift: oil-water interfacial tension (N/m, kg/s**2)
Hs: free-fall height or wave amplitude (m)
h: oil film thickness (m)
'''
# Physical parameters
    g = CONST.g # Acceleration of gravity (m/s**2)
# Model parameters from Johansen 2015 (fitted to experiment)
A = 2.251
Bm = 0.027
a = 0.6
# Velocity scale
U = np.sqrt(2*g*Hs)
# Calculate relevant dimensionless numbers for given parameters
We = rho * U**2 * h / ift
# Note that Vi = We/Re
Vi = mu * U / ift
# Calculate D, characteristic (median) droplet size predicted by WMD model
WeMod = We / (1 + Bm * Vi**a)**(1/a)
D50n = h * A / WeMod**a
return D50n
def entrain(z, d, dt, windspeed, h, mu, rho, ift, Np):
'''
Entrainment of droplets.
    This function calculates the number of particles to be entrained,
finds new depths and droplet sizes for those particles, and
appends these to the input arrays of depth and droplet size
for the currently submerged particles.
    Number of particles to entrain is found from the entrainment rate
    due to Li et al. (2017), the intrusion depth is calculated according
    to Delvigne and Sweeney (1988), and the droplet size distribution
    from the Weber natural dispersion model (Johansen 2015).
z: current array of particle depths (m)
d: current array of droplet diameters (m)
dt: timestep (s)
windspeed: windspeed (m/s)
h: oil film thickness (m)
mu: dynamic viscosity of oil (kg/m/s)
rho: oil density (kg/m**3)
    ift: oil-water interfacial tension (N/m)
    Np: maximum total number of particles
returns:
z: array of particle depths with newly entrained particles appended
d: array of droplet diameters with newly entrained particles appended
'''
# Significant wave height and peak wave period
Hs, Tp = jonswap(windspeed)
# Calculate lifetime from entrainment rate
    tau = 1/entrainmentrate(windspeed, Tp, Hs, rho, mu, ift)
# Probability for a droplet to be entrained
p = 1 - np.exp(-dt/tau)
R = np.random.random(Np - len(z))
# Number of entrained droplets
N = np.sum(R < p)
# According to Delvigne & Sweeney (1988), droplets are distributed
# in the interval (1.5 - 0.35)*Hs to (1.5 + 0.35)*Hs
znew = np.random.uniform(low = Hs*(1.5-0.35), high = Hs*(1.5+0.35), size = N)
# Assign new sizes from Johansen distribution
sigma = 0.4 * np.log(10)
D50n = weber_natural_dispersion(rho, mu, ift, Hs, h)
D50v = np.exp(np.log(D50n) + 3*sigma**2)
dnew = np.random.lognormal(mean = np.log(D50v), sigma = sigma, size = N)
# Append newly entrained droplets to existing arrays
z = np.concatenate((z, znew))
d = np.concatenate((d, dnew))
return z, d
###########################################
#### Main function to run a simulation ####
###########################################
def experiment(Z0, D0, Np, Tmax, dt, bins, K, windspeed, h0, mu, ift, rho, randomstep):
'''
Run the model.
Returns the number of submerged particles, the histograms (concentration profile),
the depth and diameters of the particles.
Z0: initial depth of particles (m)
D0: initial diameter of particles (m)
Np: Maximum number of particles
Tmax: total duration of the simulation (s)
dt: timestep (s)
bins: bins for histograms (concentration profiles)
K: diffusivity-function on form K(z, t)
windspeed: windspeed (m/s)
h0: initial oil film thickness (m)
mu: dynamic viscosity of oil (kg/m/s)
ift: interfacial tension (N/m)
rho: oil density (kg/m**3)
randomstep: random walk scheme
'''
# Number of timesteps
Nt = int(Tmax / dt)
# Arrays for z-position (depth) and droplet size
Z = Z0.copy()
D = D0.copy()
# Array to store submerged particle counts
C = np.zeros(Nt)
# Array to store histograms (concentration profiles)
H = np.zeros((Nt, len(bins)-1))
# Time loop
t = 0
for i in range(Nt):
# Count remaining submerged particles
C[i] = len(Z)
# Store histogram
H[i,:] = np.histogram(Z, bins)[0]
# Random displacement
Z = randomstep(K, Z, t, dt)
# Reflect from surface
Z = reflect(Z)
# Rise due to buoyancy
Z = advect(Z, dt, D, rho)
# Remove surfaced (applies to both depth and size arrays)
Z, D = surface(Z, D)
# Calculate oil film thickness
h = h0 * (Np - len(Z)) / Np
# Entrain
        Z, D = entrain(Z, D, dt, windspeed, h, mu, rho, ift, Np)
        # Increment time
        t = dt*(i + 1)
return C, H, Z, D
###Output
_____no_output_____
###Markdown
## Common parameters

The number of particles and the timestep can be adjusted by the user, but these modifications will affect the time required for the simulation.
###Code
##############################
#### Numerical parameters ####
##############################
# Number of particles
Np = 10000
# Total duration of the simulation in seconds
Tmax = 6*3600
# timestep in seconds
dt = 10
# Bins for histograms (concentration profiles)
bins = np.linspace(0, 50, 101)
#############################
#### Scenario parameters ####
#############################
# Oil parameters
## Dynamic viscosity of oil (kg/m/s)
mu = 1.51
## Interfacial tension (N/m)
ift = 0.013
## Oil density (kg/m**3)
rho = 992
## Initial oil film thickness (m)
h0 = 3e-3
# Environmental parameter
## Windspeed (m/s)
windspeed = 9
# Initial arrays of particle positions and droplet sizes
# Empty arrays mean that all particles start at the surface
Z0 = np.array([])
D0 = np.array([])
##############################
#### Diffusivity profiles ####
##############################
# Wave parameters (used to construct profile A)
Hs, Tp = jonswap(windspeed)
def K_A(z, t):
    '''
    Diffusivity function on the form K(z, t).
    Commonly used parameterization of diffusivity,
    due to Ichiye (1967).
    '''
    g = CONST.g
gamma = 2 * 4 * np.pi**2 / (g * Tp**2)
eta0 = 0.028 * Hs**2/Tp
return eta0 * np.exp(-gamma * z)
# Analytical function which has been fitted to
# simulation results from the GOTM turbulence model
# with a wind stress corresponding to wind speed of about 9 m/s.
# (see GOTM input files in separate folder, and PlotProfileB.ipynb)
a, b, c, z0 = (0.00636, 0.088, 1.54, 1.3)
K_B = lambda z, t: a*(z+z0)*np.exp(-(b*(z+z0))**c)
###Output
_____no_output_____
###Markdown
Plot the two diffusivity profiles used
###Code
zc = np.linspace(0, 50, 1000)
fig = plt.figure(figsize = (9, 6))
# Plot diffusivity profiles
plt.plot(K_A(zc, 0), zc, label = 'Profile A', c='k')
plt.plot(K_B(zc, 0), zc, label = 'Profile B', c='k', ls='--')
# Plot entrainment depth
plt.plot([-1, 1], [(1.5-0.35)*Hs, (1.5-0.35)*Hs], c='k', ls=':', label='Entrainment depth')
plt.plot([-1, 1], [(1.5+0.35)*Hs, (1.5+0.35)*Hs], c='k', ls=':')
plt.ylabel('Depth [m]')
plt.xlabel('Diffusivity [$m^2$/s]')
# Limit the horizontal axis
plt.xlim(-0.001, 0.031)
# Flip the vertical axis
plt.ylim(50, 0)
plt.legend(fontsize = 12)
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Model surfacing and entrainment
###Code
# Profile A, naïve scheme
CnA, HnA, ZnA, DnA = experiment(Z0, D0, Np, Tmax, dt, bins, K_A, windspeed, h0, mu, ift, rho, naivestep)
# Profile A, corrected scheme
CeA, HeA, ZeA, DeA = experiment(Z0, D0, Np, Tmax, dt, bins, K_A, windspeed, h0, mu, ift, rho, correctstep)
# Profile B, naïve scheme
CnB, HnB, ZnB, DnB = experiment(Z0, D0, Np, Tmax, dt, bins, K_B, windspeed, h0, mu, ift, rho, naivestep)
# Profile B, corrected scheme
CeB, HeB, ZeB, DeB = experiment(Z0, D0, Np, Tmax, dt, bins, K_B, windspeed, h0, mu, ift, rho, correctstep)
###Output
_____no_output_____
###Markdown
Plot surfaced fraction as function of time
###Code
fig = plt.figure(figsize = (9,6))
times = np.arange(0, Tmax, dt)
# Plot surfaced as function of time in hours
plt.plot(times/3600, 1-CnA/Np, c = '#348ABD', label = 'Profile A, Naïve')
plt.plot(times/3600, 1-CeA/Np, c = '#A60628', label = 'Profile A, Corrected')
plt.plot(times/3600, 1-CnB/Np, '--', c = '#348ABD', label = 'Profile B, Naïve')
plt.plot(times/3600, 1-CeB/Np, '--', c = '#A60628', label = 'Profile B, Corrected')
plt.xlim(0, Tmax/3600)
plt.ylim(0, 1)
plt.ylabel('Surfaced fraction')
plt.xlabel('Time [h]')
plt.legend(fontsize = 12)
plt.tight_layout()
#plt.savefig('surfaced_timeseries.pdf')
###Output
_____no_output_____
###Markdown
Plot (time-averaged) concentration profiles
###Code
fig = plt.figure(figsize = (9,6))
# Calculate average over last 3600 seconds
# Change to Navg = 1 to get snapshot at last timestep
Navg = int(3600 / dt)
# Find midpoints of bins for plotting
dz = bins[1]-bins[0]
midpoints = bins[:-1] + dz/2
# Plot concentration profiles, normalised such that
# the integral of the profile is 1 if everything is submerged
plt.plot(np.mean(HnA[-Navg:,:], axis = 0)/(dz*Np), midpoints, c = '#348ABD', label = 'Profile A, Naïve')
plt.plot(np.mean(HeA[-Navg:,:], axis = 0)/(dz*Np), midpoints, c = '#A60628', label = 'Profile A, Corrected')
plt.plot(np.mean(HnB[-Navg:,:], axis = 0)/(dz*Np), midpoints, '--', c = '#348ABD', label = 'Profile B, Naïve')
plt.plot(np.mean(HeB[-Navg:,:], axis = 0)/(dz*Np), midpoints, '--', c = '#A60628', label = 'Profile B, Corrected')
# Flip the vertical axis
plt.ylim(50, 0)
plt.ylabel('Depth [m]')
plt.xlabel('Concentration [m$^{-1}$]')
plt.legend(fontsize = 12)
plt.tight_layout()
#plt.savefig('concentration_profiles.pdf')
###Output
_____no_output_____
Deep Learning Specialization/Trigger_word_detection_v1a.ipynb
###Markdown
# Trigger Word Detection

Welcome to the final programming assignment of this specialization! In this week's videos, you learned about applying deep learning to speech recognition. In this assignment, you will construct a speech dataset and implement an algorithm for trigger word detection (sometimes also called keyword detection, or wake word detection).

* Trigger word detection is the technology that allows devices like Amazon Alexa, Google Home, Apple Siri, and Baidu DuerOS to wake up upon hearing a certain word.
* For this exercise, our trigger word will be "Activate." Every time it hears you say "activate," it will make a "chiming" sound.
* By the end of this assignment, you will be able to record a clip of yourself talking, and have the algorithm trigger a chime when it detects you saying "activate."
* After completing this assignment, perhaps you can also extend it to run on your laptop so that every time you say "activate" it starts up your favorite app, or turns on a network connected lamp in your house, or triggers some other event?

In this assignment you will learn to:

- Structure a speech recognition project
- Synthesize and process audio recordings to create train/dev datasets
- Train a trigger word detection model and make predictions

## Updates

If you were working on the notebook before this update...

* The current notebook is version "1a".
* You can find your original work saved in the notebook with the previous version name ("v1").
* To view the file directory, go to the menu "File->Open", and this will open a new tab that shows the file directory.

List of updates:

* 2.1: build the model
  * Added sample code to show how to use the Keras layers.
  * Lets students implement the `TimeDistributed` code.
* Spelling, grammar and wording corrections.

Let's get started! Run the following cell to load the packages you are going to use.
###Code
import numpy as np
from pydub import AudioSegment
import random
import sys
import io
import os
import glob
import IPython
from td_utils import *
%matplotlib inline
###Output
_____no_output_____
###Markdown
## 1 - Data synthesis: Creating a speech dataset

Let's start by building a dataset for your trigger word detection algorithm.

* A speech dataset should ideally be as close as possible to the application you will want to run it on.
* In this case, you'd like to detect the word "activate" in working environments (library, home, offices, open-spaces ...).
* Therefore, you need to create recordings with a mix of positive words ("activate") and negative words (random words other than "activate") on different background sounds. Let's see how you can create such a dataset.

### 1.1 - Listening to the data

* One of your friends is helping you out on this project, and they've gone to libraries, cafes, restaurants, homes and offices all around the region to record background noises, as well as snippets of audio of people saying positive/negative words. This dataset includes people speaking in a variety of accents.
* In the raw_data directory, you can find a subset of the raw audio files of the positive words, negative words, and background noise. You will use these audio files to synthesize a dataset to train the model.
  * The "activate" directory contains positive examples of people saying the word "activate".
  * The "negatives" directory contains negative examples of people saying random words other than "activate".
  * There is one word per audio recording.
  * The "backgrounds" directory contains 10 second clips of background noise in different environments.

Run the cells below to listen to some examples.
###Code
IPython.display.Audio("./raw_data/activates/1.wav")
IPython.display.Audio("./raw_data/negatives/4.wav")
IPython.display.Audio("./raw_data/backgrounds/1.wav")
###Output
_____no_output_____
###Markdown
You will use these three types of recordings (positives/negatives/backgrounds) to create a labeled dataset.

### 1.2 - From audio recordings to spectrograms

What really is an audio recording?

* A microphone records little variations in air pressure over time, and it is these little variations in air pressure that your ear also perceives as sound.
* You can think of an audio recording as a long list of numbers measuring the little air pressure changes detected by the microphone.
* We will use audio sampled at 44100 Hz (or 44100 Hertz).
  * This means the microphone gives us 44,100 numbers per second.
  * Thus, a 10 second audio clip is represented by 441,000 numbers (= $10 \times 44,100$).

#### Spectrogram

* It is quite difficult to figure out from this "raw" representation of audio whether the word "activate" was said.
* In order to help your sequence model more easily learn to detect trigger words, we will compute a *spectrogram* of the audio.
* The spectrogram tells us how much different frequencies are present in an audio clip at any moment in time.
* If you've ever taken an advanced class on signal processing or on Fourier transforms:
  * A spectrogram is computed by sliding a window over the raw audio signal, and calculating the most active frequencies in each window using a Fourier transform.
  * If you don't understand the previous sentence, don't worry about it.

Let's look at an example.
###Code
IPython.display.Audio("audio_examples/example_train.wav")
x = graph_spectrogram("audio_examples/example_train.wav")
###Output
_____no_output_____
###Markdown
The graph above represents how active each frequency is (y axis) over a number of time-steps (x axis).

**Figure 1**: Spectrogram of an audio recording

* The color in the spectrogram shows the degree to which different frequencies are present (loud) in the audio at different points in time.
* Green means a certain frequency is more active or more present in the audio clip (louder).
* Blue squares denote less active frequencies.
* The dimension of the output spectrogram depends upon the hyperparameters of the spectrogram software and the length of the input.
* In this notebook, we will be working with 10 second audio clips as the "standard length" for our training examples.
  * The number of timesteps of the spectrogram will be 5511.
* You'll see later that the spectrogram will be the input $x$ into the network, and so $T_x = 5511$.
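The notebook's `graph_spectrogram` helper handles this computation for you. Purely for intuition, here is a hedged sketch of how such a spectrogram could be computed with SciPy; the window length and overlap below are illustrative assumptions, not necessarily the helper's exact settings:

```python
from scipy.io import wavfile
from scipy.signal import spectrogram

rate, data = wavfile.read("audio_examples/example_train.wav")
if data.ndim > 1:
    data = data[:, 0]  # keep a single channel if the file is stereo

# Slide a window over the signal and take a Fourier transform of each
# window; each column of Sxx is the frequency content of one window.
freqs, times, Sxx = spectrogram(data, fs=rate, nperseg=200, noverlap=120)
print(Sxx.shape)  # (number of frequencies, number of time steps)
```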
###Code
_, data = wavfile.read("audio_examples/example_train.wav")
print("Time steps in audio recording before spectrogram", data[:,0].shape)
print("Time steps in input after spectrogram", x.shape)
###Output
_____no_output_____
###Markdown
Now, you can define:
###Code
Tx = 5511 # The number of time steps input to the model from the spectrogram
n_freq = 101 # Number of frequencies input to the model at each time step of the spectrogram
###Output
_____no_output_____
###Markdown
#### Dividing into time-intervals

Note that we may divide a 10 second interval of time with different units (steps).

* Raw audio divides 10 seconds into 441,000 units.
* A spectrogram divides 10 seconds into 5,511 units.
  * $T_x = 5511$
* You will use a Python module `pydub` to synthesize audio, and it divides 10 seconds into 10,000 units.
* The output of our model will divide 10 seconds into 1,375 units.
  * $T_y = 1375$
  * For each of the 1375 time steps, the model predicts whether someone recently finished saying the trigger word "activate."
* All of these are hyperparameters and can be changed (except the 441,000, which is a function of the microphone).
* We have chosen values that are within the standard range used for speech systems.
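Because the same 10 seconds live on several grids, it helps to convert explicitly between them. Here is a small sketch (the ms-to-output-index formula below is the same one used later in `insert_ones`; the helper name is just for illustration):

```python
# All four discretizations span the same 10 seconds of audio
RAW_LEN, TX, PYDUB_LEN, TY = 441000, 5511, 10000, 1375

def ms_to_output_step(t_ms, Ty=TY, clip_ms=PYDUB_LEN):
    """Convert a pydub time (ms) to the corresponding model output index."""
    return int(t_ms * Ty / clip_ms)

print(ms_to_output_step(5000))  # 687, i.e. halfway into the clip
```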
###Code
Ty = 1375 # The number of time steps in the output of our model
###Output
_____no_output_____
###Markdown
### 1.3 - Generating a single training example

#### Benefits of synthesizing data

Because speech data is hard to acquire and label, you will synthesize your training data using the audio clips of activates, negatives, and backgrounds.

* It is quite slow to record lots of 10 second audio clips with random "activates" in them.
* Instead, it is easier to record lots of positives and negative words, and record background noise separately (or download background noise from free online sources).

#### Process for synthesizing an audio clip

* To synthesize a single training example, you will:
  - Pick a random 10 second background audio clip
  - Randomly insert 0-4 audio clips of "activate" into this 10 sec clip
  - Randomly insert 0-2 audio clips of negative words into this 10 sec clip
* Because you had synthesized the word "activate" into the background clip, you know exactly when in the 10 second clip the "activate" makes its appearance.
  * You'll see later that this makes it easier to generate the labels $y^{\langle t \rangle}$ as well.

#### Pydub

* You will use the pydub package to manipulate audio.
* Pydub converts raw audio files into lists of Pydub data structures.
  * Don't worry about the details of the data structures.
* Pydub uses 1 ms as the discretization interval (1 ms is 1 millisecond = 1/1000 seconds).
  * This is why a 10 second clip is always represented using 10,000 steps.
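As a quick illustration of the pydub operations used below (`overlay` mixes one segment onto another without changing the length of the base segment; the file paths are the example files from raw_data):

```python
from pydub import AudioSegment

background = AudioSegment.from_wav("./raw_data/backgrounds/1.wav")
activate = AudioSegment.from_wav("./raw_data/activates/1.wav")

print(len(background))  # pydub lengths are in ms, so ~10000 for a 10 s clip

# Mix the word onto the background starting at 2000 ms; the result
# is still exactly as long as the background clip.
mixed = background.overlay(activate, position=2000)
print(len(mixed) == len(background))  # True
```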
###Code
# Load audio segments using pydub
activates, negatives, backgrounds = load_raw_audio()
print("background len should be 10,000, since it is a 10 sec clip\n" + str(len(backgrounds[0])),"\n")
print("activate[0] len may be around 1000, since an `activate` audio clip is usually around 1 second (but varies a lot) \n" + str(len(activates[0])),"\n")
print("activate[1] len: different `activate` clips can have different lengths\n" + str(len(activates[1])),"\n")
###Output
_____no_output_____
###Markdown
#### Overlaying positive/negative 'word' audio clips on top of the background audio

* Given a 10 second background clip and a short audio clip containing a positive or negative word, you need to be able to "add" the word audio clip on top of the background audio.
* You will be inserting multiple clips of positive/negative words into the background, and you don't want to insert an "activate" or a random word somewhere that overlaps with another clip you had previously added.
  * To ensure that the 'word' audio segments do not overlap when inserted, you will keep track of the times of previously inserted audio clips.
* To be clear, when you insert a 1 second "activate" onto a 10 second clip of cafe noise, **you do not end up with an 11 sec clip.**
  * The resulting audio clip is still 10 seconds long.
  * You'll see later how pydub allows you to do this.

#### Label the positive/negative words

* Recall that the labels $y^{\langle t \rangle}$ represent whether or not someone has just finished saying "activate."
  * $y^{\langle t \rangle} = 1$ when the clip has finished saying "activate".
  * Given a background clip, we can initialize $y^{\langle t \rangle}=0$ for all $t$, since the clip doesn't contain any "activates."
* When you insert or overlay an "activate" clip, you will also update labels for $y^{\langle t \rangle}$.
  * Rather than updating the label of a single time step, we will update 50 steps of the output to have target label 1.
  * Recall from the lecture on trigger word detection that updating several consecutive time steps can make the training data more balanced.
* You will train a GRU (Gated Recurrent Unit) to detect when someone has **finished** saying "activate".

#### Example

* Suppose the synthesized "activate" clip ends at the 5 second mark in the 10 second audio - exactly halfway into the clip.
* Recall that $T_y = 1375$, so timestep $687 = $ `int(1375*0.5)` corresponds to the moment 5 seconds into the audio clip.
* Set $y^{\langle 688 \rangle} = 1$.
* We will allow the GRU to detect "activate" anywhere within a short time-interval **after** this moment, so we actually **set 50 consecutive values** of the label $y^{\langle t \rangle}$ to 1.
  * Specifically, we have $y^{\langle 688 \rangle} = y^{\langle 689 \rangle} = \cdots = y^{\langle 737 \rangle} = 1$.

#### Synthesized data is easier to label

* This is another reason for synthesizing the training data: It's relatively straightforward to generate these labels $y^{\langle t \rangle}$ as described above.
* In contrast, if you have 10 sec of audio recorded on a microphone, it's quite time consuming for a person to listen to it and mark manually exactly when "activate" finished.

#### Visualizing the labels

* Here's a figure illustrating the labels $y^{\langle t \rangle}$ in a clip.
  * We have inserted "activate", "innocent", "activate", "baby."
  * Note that the positive labels "1" are associated only with the positive words.

**Figure 2**

#### Helper functions

To implement the training set synthesis process, you will use the following helper functions. All of these functions will use a 1 ms discretization interval, so the 10 seconds of audio is always discretized into 10,000 steps.

1. `get_random_time_segment(segment_ms)`
   * Retrieves a random time segment from the background audio.
2. `is_overlapping(segment_time, existing_segments)`
   * Checks if a time segment overlaps with existing segments.
3. `insert_audio_clip(background, audio_clip, existing_times)`
   * Inserts an audio segment at a random time in the background audio.
   * Uses the functions `get_random_time_segment` and `is_overlapping`.
4. `insert_ones(y, segment_end_ms)`
   * Inserts additional 1's into the label vector y after the word "activate".

#### Get a random time segment

* The function `get_random_time_segment(segment_ms)` returns a random time segment onto which we can insert an audio clip of duration `segment_ms`.
* Please read through the code to make sure you understand what it is doing.
###Code
def get_random_time_segment(segment_ms):
"""
Gets a random time segment of duration segment_ms in a 10,000 ms audio clip.
Arguments:
segment_ms -- the duration of the audio clip in ms ("ms" stands for "milliseconds")
Returns:
segment_time -- a tuple of (segment_start, segment_end) in ms
"""
segment_start = np.random.randint(low=0, high=10000-segment_ms) # Make sure segment doesn't run past the 10sec background
segment_end = segment_start + segment_ms - 1
return (segment_start, segment_end)
###Output
_____no_output_____
###Markdown
#### Check if audio clips are overlapping

* Suppose you have inserted audio clips at segments (1000,1800) and (3400,4500).
  * The first segment starts at step 1000 and ends at step 1800.
  * The second segment starts at 3400 and ends at 4500.
* If we are considering whether to insert a new audio clip at (3000,3600), does this overlap with one of the previously inserted segments?
  * In this case, (3000,3600) and (3400,4500) overlap, so we should decide against inserting a clip here.
* For the purpose of this function, define (100,200) and (200,250) to be overlapping, since they overlap at timestep 200.
  * (100,199) and (200,250) are non-overlapping.

**Exercise**: Implement `is_overlapping(segment_time, existing_segments)` to check if a new time segment overlaps with any of the previous segments. You will need to carry out 2 steps:

1. Create a "False" flag, that you will later set to "True" if you find that there is an overlap.
2. Loop over the previous_segments' start and end times. Compare these times to the segment's start and end times. If there is an overlap, set the flag defined in (1) as True. You can use:
```python
for ... in ...:
    if ...:
        ...
```
Hint: There is overlap if:
* The new segment starts before the previous segment ends, **and**
* The new segment ends after the previous segment starts.
###Code
# GRADED FUNCTION: is_overlapping
def is_overlapping(segment_time, previous_segments):
"""
Checks if the time of a segment overlaps with the times of existing segments.
Arguments:
segment_time -- a tuple of (segment_start, segment_end) for the new segment
previous_segments -- a list of tuples of (segment_start, segment_end) for the existing segments
Returns:
True if the time segment overlaps with any of the existing segments, False otherwise
"""
segment_start, segment_end = segment_time
### START CODE HERE ### (≈ 4 lines)
# Step 1: Initialize overlap as a "False" flag. (≈ 1 line)
overlap = False
# Step 2: loop over the previous_segments start and end times.
# Compare start/end times and set the flag to True if there is an overlap (≈ 3 lines)
    for previous_start, previous_end in previous_segments:
        if segment_start <= previous_end and segment_end >= previous_start:
            overlap = True
            break
### END CODE HERE ###
return overlap
overlap1 = is_overlapping((950, 1430), [(2000, 2550), (260, 949)])
overlap2 = is_overlapping((2305, 2950), [(824, 1532), (1900, 2305), (3424, 3656)])
print("Overlap 1 = ", overlap1)
print("Overlap 2 = ", overlap2)
###Output
_____no_output_____
###Markdown
**Expected Output**:

**Overlap 1**: False
**Overlap 2**: True

#### Insert audio clip

* Let's use the previous helper functions to insert a new audio clip onto the 10 second background at a random time.
* We will ensure that any newly inserted segment doesn't overlap with previously inserted segments.

**Exercise**: Implement `insert_audio_clip()` to overlay an audio clip onto the background 10 sec clip. You will implement 4 steps:

1. Get the length of the audio clip that is to be inserted. Get a random time segment whose duration equals the duration of the audio clip that is to be inserted.
2. Make sure that the time segment does not overlap with any of the previous time segments. If it is overlapping, then go back to step 1 and pick a new time segment.
3. Append the new time segment to the list of existing time segments. This keeps track of all the segments you've inserted.
4. Overlay the audio clip over the background using pydub. We have implemented this for you.
###Code
# GRADED FUNCTION: insert_audio_clip
def insert_audio_clip(background, audio_clip, previous_segments):
"""
Insert a new audio segment over the background noise at a random time step, ensuring that the
audio segment does not overlap with existing segments.
Arguments:
background -- a 10 second background audio recording.
audio_clip -- the audio clip to be inserted/overlaid.
previous_segments -- times where audio segments have already been placed
    Returns:
    new_background -- the updated background audio
    segment_time -- a tuple of (segment_start, segment_end) in ms for the inserted clip
"""
# Get the duration of the audio clip in ms
segment_ms = len(audio_clip)
### START CODE HERE ###
# Step 1: Use one of the helper functions to pick a random time segment onto which to insert
# the new audio clip. (≈ 1 line)
segment_time = get_random_time_segment(segment_ms)
# Step 2: Check if the new segment_time overlaps with one of the previous_segments. If so, keep
# picking new segment_time at random until it doesn't overlap. (≈ 2 lines)
while is_overlapping(segment_time, previous_segments):
segment_time = get_random_time_segment(segment_ms)
# Step 3: Append the new segment_time to the list of previous_segments (≈ 1 line)
    previous_segments.append(segment_time)
### END CODE HERE ###
# Step 4: Superpose audio segment and background
new_background = background.overlay(audio_clip, position = segment_time[0])
return new_background, segment_time
np.random.seed(5)
audio_clip, segment_time = insert_audio_clip(backgrounds[0], activates[0], [(3790, 4400)])
audio_clip.export("insert_test.wav", format="wav")
print("Segment Time: ", segment_time)
IPython.display.Audio("insert_test.wav")
###Output
_____no_output_____
###Markdown
**Expected Output**

**Segment Time**: (2254, 3169)
###Code
# Expected audio
IPython.display.Audio("audio_examples/insert_reference.wav")
###Output
_____no_output_____
###Markdown
#### Insert ones for the labels of the positive target

* Implement code to update the labels $y^{\langle t \rangle}$, assuming you just inserted an "activate" audio clip.
* In the code below, `y` is a `(1,1375)` dimensional vector, since $T_y = 1375$.
  * If the "activate" audio clip ends at time step $t$, then set $y^{\langle t+1 \rangle} = 1$ and also set the next 49 additional consecutive values to 1.
  * Notice that if the target word appears near the end of the entire audio clip, there may not be 50 additional time steps to set to 1.
  * Make sure you don't run off the end of the array and try to update `y[0][1375]`, since the valid indices are `y[0][0]` through `y[0][1374]` because $T_y = 1375$.
  * So if "activate" ends at step 1370, you would only set `y[0][1371] = y[0][1372] = y[0][1373] = y[0][1374] = 1`.

**Exercise**: Implement `insert_ones()`.

* You can use a for loop.
* If you want to use Python's array slicing operations, you can do so as well.
* If a segment ends at `segment_end_ms` (using a 10,000 step discretization), to convert it to the indexing for the outputs $y$ (using a 1375 step discretization), we will use this formula:
```python
segment_end_y = int(segment_end_ms * Ty / 10000.0)
```
###Code
# GRADED FUNCTION: insert_ones
def insert_ones(y, segment_end_ms):
"""
Update the label vector y. The labels of the 50 output steps strictly after the end of the segment
should be set to 1. By strictly we mean that the label of segment_end_y should be 0 while, the
50 following labels should be ones.
Arguments:
y -- numpy array of shape (1, Ty), the labels of the training example
segment_end_ms -- the end time of the segment in ms
Returns:
y -- updated labels
"""
    # Convert the segment end time from ms to an index on the Ty-step output grid
    segment_end_y = int(segment_end_ms * Ty / 10000.0)
# Add 1 to the correct index in the background label (y)
### START CODE HERE ### (≈ 3 lines)
for i in range(segment_end_y+1, segment_end_y+51):
if i < Ty:
y[0, i] = 1
### END CODE HERE ###
return y
arr1 = insert_ones(np.zeros((1, Ty)), 9700)
plt.plot(insert_ones(arr1, 4251)[0,:])
print("sanity checks:", arr1[0][1333], arr1[0][634], arr1[0][635])
###Output
_____no_output_____
###Markdown
**Expected Output**

**sanity checks**: 0.0 1.0 0.0

#### Creating a training example

Finally, you can use `insert_audio_clip` and `insert_ones` to create a new training example.

**Exercise**: Implement `create_training_example()`. You will need to carry out the following steps:

1. Initialize the label vector $y$ as a numpy array of zeros and shape $(1, T_y)$.
2. Initialize the set of existing segments to an empty list.
3. Randomly select 0 to 4 "activate" audio clips, and insert them onto the 10 second clip. Also insert labels at the correct position in the label vector $y$.
4. Randomly select 0 to 2 negative audio clips, and insert them into the 10 second clip.
###Code
# GRADED FUNCTION: create_training_example
def create_training_example(background, activates, negatives):
"""
Creates a training example with a given background, activates, and negatives.
Arguments:
background -- a 10 second background audio recording
activates -- a list of audio segments of the word "activate"
negatives -- a list of audio segments of random words that are not "activate"
Returns:
x -- the spectrogram of the training example
y -- the label at each time step of the spectrogram
"""
# Set the random seed
np.random.seed(18)
# Make background quieter
background = background - 20
### START CODE HERE ###
# Step 1: Initialize y (label vector) of zeros (≈ 1 line)
y = np.zeros((1, Ty))
# Step 2: Initialize segment times as an empty list (≈ 1 line)
previous_segments = []
### END CODE HERE ###
# Select 0-4 random "activate" audio clips from the entire list of "activates" recordings
number_of_activates = np.random.randint(0, 5)
random_indices = np.random.randint(len(activates), size=number_of_activates)
random_activates = [activates[i] for i in random_indices]
### START CODE HERE ### (≈ 3 lines)
# Step 3: Loop over randomly selected "activate" clips and insert in background
for random_activate in random_activates:
# Insert the audio clip on the background
background, segment_time = insert_audio_clip(background, random_activate, previous_segments)
# Retrieve segment_start and segment_end from segment_time
segment_start, segment_end = segment_time
# Insert labels in "y"
y = insert_ones(y, segment_end_ms=segment_end)
### END CODE HERE ###
# Select 0-2 random negatives audio recordings from the entire list of "negatives" recordings
number_of_negatives = np.random.randint(0, 3)
random_indices = np.random.randint(len(negatives), size=number_of_negatives)
random_negatives = [negatives[i] for i in random_indices]
### START CODE HERE ### (≈ 2 lines)
# Step 4: Loop over randomly selected negative clips and insert in background
for random_negative in random_negatives:
# Insert the audio clip on the background
background, _ = insert_audio_clip(background, random_negative, previous_segments)
### END CODE HERE ###
# Standardize the volume of the audio clip
background = match_target_amplitude(background, -20.0)
# Export new training example
file_handle = background.export("train" + ".wav", format="wav")
print("File (train.wav) was saved in your directory.")
# Get and plot spectrogram of the new recording (background with superposition of positive and negatives)
x = graph_spectrogram("train.wav")
return x, y
x, y = create_training_example(backgrounds[0], activates, negatives)
###Output
_____no_output_____
###Markdown
**Expected Output**

Now you can listen to the training example you created and compare it to the spectrogram generated above.
###Code
IPython.display.Audio("train.wav")
###Output
_____no_output_____
###Markdown
**Expected Output**
###Code
IPython.display.Audio("audio_examples/train_reference.wav")
###Output
_____no_output_____
###Markdown
Finally, you can plot the associated labels for the generated training example.
###Code
plt.plot(y[0])
###Output
_____no_output_____
###Markdown
**Expected Output**

### 1.4 - Full training set

* You've now implemented the code needed to generate a single training example.
* We used this process to generate a large training set.
* To save time, we've already generated a set of training examples.
###Code
# Load preprocessed training examples
X = np.load("./XY_train/X.npy")
Y = np.load("./XY_train/Y.npy")
###Output
_____no_output_____
###Markdown
### 1.5 - Development set

* To test our model, we recorded a development set of 25 examples.
* While our training data is synthesized, we want to create a development set using the same distribution as the real inputs.
  * Thus, we recorded 25 10-second audio clips of people saying "activate" and other random words, and labeled them by hand.
* This follows the principle described in Course 3 "Structuring Machine Learning Projects" that we should create the dev set to be as similar as possible to the test set distribution.
  * This is why our **dev set uses real audio** rather than synthesized audio.
###Code
# Load preprocessed dev set examples
X_dev = np.load("./XY_dev/X_dev.npy")
Y_dev = np.load("./XY_dev/Y_dev.npy")
###Output
_____no_output_____
###Markdown
## 2 - Model

* Now that you've built a dataset, let's write and train a trigger word detection model!
* The model will use 1-D convolutional layers, GRU layers, and dense layers.
* Let's load the packages that will allow you to use these layers in Keras. This might take a minute to load.
###Code
from keras.callbacks import ModelCheckpoint
from keras.models import Model, load_model, Sequential
from keras.layers import Dense, Activation, Dropout, Input, Masking, TimeDistributed, LSTM, Conv1D
from keras.layers import GRU, Bidirectional, BatchNormalization, Reshape
from keras.optimizers import Adam
###Output
_____no_output_____
###Markdown
### 2.1 - Build the model

Our goal is to build a network that will ingest a spectrogram and output a signal when it detects the trigger word. This network will use 4 layers:

* A convolutional layer
* Two GRU layers
* A dense layer

Here is the architecture we will use.

**Figure 3**

#### 1D convolutional layer

One key layer of this model is the 1D convolutional step (near the bottom of Figure 3).

* It inputs the 5511 step spectrogram. Each step is a vector of 101 units.
* It outputs a 1375 step output.
* This output is further processed by multiple layers to get the final $T_y = 1375$ step output.
* This 1D convolutional layer plays a role similar to the 2D convolutions you saw in Course 4, of extracting low-level features and then possibly generating an output of a smaller dimension.
* Computationally, the 1-D conv layer also helps speed up the model because now the GRU can process only 1375 timesteps rather than 5511 timesteps.

#### GRU, dense and sigmoid

* The two GRU layers read the sequence of inputs from left to right.
* A dense plus sigmoid layer makes a prediction for $y^{\langle t \rangle}$.
* Because $y$ is a binary value (0 or 1), we use a sigmoid output at the last layer to estimate the chance of the output being 1, corresponding to the user having just said "activate."

#### Unidirectional RNN

* Note that we use a **unidirectional RNN** rather than a bidirectional RNN.
  * This is really important for trigger word detection, since we want to be able to detect the trigger word almost immediately after it is said.
  * If we used a bidirectional RNN, we would have to wait for the whole 10 sec of audio to be recorded before we could tell if "activate" was said in the first second of the audio clip.

#### Implement the model

Implementing the model can be done in four steps:

**Step 1**: CONV layer. Use `Conv1D()` to implement this, with 196 filters, a filter size of 15 (`kernel_size=15`), and a stride of 4. [conv1d](https://keras.io/layers/convolutional/#conv1d)
```Python
output_x = Conv1D(filters=..., kernel_size=..., strides=...)(input_x)
```
* Follow this with a ReLU activation. Note that we can pass in the name of the desired activation as a string, all in lowercase letters.
```Python
output_x = Activation("...")(input_x)
```
* Follow this with dropout, using a rate of 0.8.
```Python
output_x = Dropout(rate=...)(input_x)
```

**Step 2**: First GRU layer. To generate the GRU layer, use 128 units.
```Python
output_x = GRU(units=..., return_sequences=...)(input_x)
```
* Return sequences instead of just the last time step's prediction to ensure that all the GRU's hidden states are fed to the next layer.
* Follow this with dropout, using a rate of 0.8.
* Follow this with batch normalization. No parameters need to be set.
```Python
output_x = BatchNormalization()(input_x)
```

**Step 3**: Second GRU layer. This has the same specifications as the first GRU layer.
* Follow this with a dropout, batch normalization, and then another dropout.

**Step 4**: Create a time-distributed dense layer as follows:
```Python
X = TimeDistributed(Dense(1, activation = "sigmoid"))(X)
```
This creates a dense layer followed by a sigmoid, so that the parameters used for the dense layer are the same for every time step.

Documentation:
* [Keras documentation on wrappers](https://keras.io/layers/wrappers/).
* To learn more, you can read this blog post [How to Use the TimeDistributed Layer in Keras](https://machinelearningmastery.com/timedistributed-layer-for-long-short-term-memory-networks-in-python/).

**Exercise**: Implement `model()`; the architecture is presented in Figure 3.
###Code
# GRADED FUNCTION: model
def model(input_shape):
"""
Function creating the model's graph in Keras.
Argument:
input_shape -- shape of the model's input data (using Keras conventions)
Returns:
model -- Keras model instance
"""
X_input = Input(shape = input_shape)
### START CODE HERE ###
# Step 1: CONV layer (≈4 lines)
X = Conv1D(filters=196, kernel_size=15, strides=4)(X_input) # CONV1D
X = BatchNormalization()(X) # Batch normalization
X = Activation("relu")(X) # ReLu activation
X = Dropout(0.8)(X) # dropout (use 0.8)
# Step 2: First GRU Layer (≈4 lines)
X = GRU(units=128, return_sequences=True)(X) # GRU (use 128 units and return the sequences)
X = Dropout(0.8)(X) # dropout (use 0.8)
X = BatchNormalization()(X) # Batch normalization
# Step 3: Second GRU Layer (≈4 lines)
X = GRU(units=128, return_sequences=True)(X) # GRU (use 128 units and return the sequences)
X = Dropout(0.8)(X) # dropout (use 0.8)
X = BatchNormalization()(X) # Batch normalization
X = Dropout(0.8)(X) # dropout (use 0.8)
# Step 4: Time-distributed dense layer (see given code in instructions) (≈1 line)
X = TimeDistributed(Dense(1, activation = "sigmoid"))(X) # time distributed (sigmoid)
### END CODE HERE ###
model = Model(inputs = X_input, outputs = X)
return model
model = model(input_shape = (Tx, n_freq))
###Output
_____no_output_____
###Markdown
Let's print the model summary to keep track of the shapes.
###Code
model.summary()
###Output
_____no_output_____
###Markdown
**Expected Output**:

**Total params**: 522,561
**Trainable params**: 521,657
**Non-trainable params**: 904

The output of the network is of shape (None, 1375, 1) while the input is (None, 5511, 101). The Conv1D has reduced the number of steps from 5511 to 1375.

### 2.2 - Fit the model

* Trigger word detection takes a long time to train.
* To save time, we've already trained a model for about 3 hours on a GPU using the architecture you built above, and a large training set of about 4000 examples.
* Let's load the model.
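As a sanity check on that 5511 to 1375 reduction, the standard output-length formula for an unpadded 1D convolution reproduces it exactly:

```python
# n_out = floor((n_in - kernel_size) / stride) + 1 for 'valid' padding
n_in, kernel_size, stride = 5511, 15, 4
n_out = (n_in - kernel_size) // stride + 1
print(n_out)  # 1375
```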
###Code
model = load_model('./models/tr_model.h5')
###Output
_____no_output_____
###Markdown
You can train the model further, using the Adam optimizer and binary cross entropy loss, as follows. This will run quickly because we are training just for one epoch and with a small training set of 26 examples.
###Code
opt = Adam(lr=0.0001, beta_1=0.9, beta_2=0.999, decay=0.01)
model.compile(loss='binary_crossentropy', optimizer=opt, metrics=["accuracy"])
model.fit(X, Y, batch_size = 5, epochs=1)
###Output
_____no_output_____
###Markdown
### 2.3 - Test the model

Finally, let's see how your model performs on the dev set.
###Code
loss, acc = model.evaluate(X_dev, Y_dev)
print("Dev set accuracy = ", acc)
###Output
_____no_output_____
###Markdown
This looks pretty good!

* However, accuracy isn't a great metric for this task.
  * Since the labels are heavily skewed to 0's, a neural network that just outputs 0's would get slightly over 90% accuracy.
* We could define more useful metrics such as F1 score or Precision/Recall.
  * Let's not bother with that here, and instead just empirically see how the model does with some predictions.

## 3 - Making Predictions

Now that you have built a working model for trigger word detection, let's use it to make predictions. This code snippet runs audio (saved in a wav file) through the network.
###Code
def detect_triggerword(filename):
plt.subplot(2, 1, 1)
x = graph_spectrogram(filename)
# the spectrogram outputs (freqs, Tx) and we want (Tx, freqs) to input into the model
x = x.swapaxes(0,1)
x = np.expand_dims(x, axis=0)
predictions = model.predict(x)
plt.subplot(2, 1, 2)
plt.plot(predictions[0,:,0])
plt.ylabel('probability')
plt.show()
return predictions
###Output
_____no_output_____
###Markdown
#### Insert a chime to acknowledge the "activate" trigger

* Once you've estimated the probability of having detected the word "activate" at each output step, you can trigger a "chiming" sound to play when the probability is above a certain threshold.
* $y^{\langle t \rangle}$ might be near 1 for many values in a row after "activate" is said, yet we want to chime only once.
  * So we will insert a chime sound at most once every 75 output steps.
  * This will help prevent us from inserting two chimes for a single instance of "activate".
  * This plays a role similar to non-max suppression from computer vision.
###Code
chime_file = "audio_examples/chime.wav"
def chime_on_activate(filename, predictions, threshold):
audio_clip = AudioSegment.from_wav(filename)
chime = AudioSegment.from_wav(chime_file)
Ty = predictions.shape[1]
# Step 1: Initialize the number of consecutive output steps to 0
consecutive_timesteps = 0
# Step 2: Loop over the output steps in the y
for i in range(Ty):
# Step 3: Increment consecutive output steps
consecutive_timesteps += 1
# Step 4: If prediction is higher than the threshold and more than 75 consecutive output steps have passed
if predictions[0,i,0] > threshold and consecutive_timesteps > 75:
# Step 5: Superpose audio and background using pydub
audio_clip = audio_clip.overlay(chime, position = ((i / Ty) * audio_clip.duration_seconds)*1000)
# Step 6: Reset consecutive output steps to 0
consecutive_timesteps = 0
audio_clip.export("chime_output.wav", format='wav')
###Output
_____no_output_____
###Markdown
### 3.3 - Test on dev examples

Let's explore how our model performs on two unseen audio clips from the development set. Let's first listen to the two dev set clips.
###Code
IPython.display.Audio("./raw_data/dev/1.wav")
IPython.display.Audio("./raw_data/dev/2.wav")
###Output
_____no_output_____
###Markdown
Now let's run the model on these audio clips and see if it adds a chime after "activate"!
###Code
filename = "./raw_data/dev/1.wav"
prediction = detect_triggerword(filename)
chime_on_activate(filename, prediction, 0.5)
IPython.display.Audio("./chime_output.wav")
filename = "./raw_data/dev/2.wav"
prediction = detect_triggerword(filename)
chime_on_activate(filename, prediction, 0.5)
IPython.display.Audio("./chime_output.wav")
###Output
_____no_output_____
###Markdown
## Congratulations

You've come to the end of this assignment! Here's what you should remember:

- Data synthesis is an effective way to create a large training set for speech problems, specifically trigger word detection.
- Using a spectrogram and optionally a 1D conv layer is a common pre-processing step prior to passing audio data to an RNN, GRU or LSTM.
- An end-to-end deep learning approach can be used to build a very effective trigger word detection system.

*Congratulations* on finishing the final assignment! Thank you for sticking with us through the end and for all the hard work you've put into learning deep learning. We hope you have enjoyed the course!

## 4 - Try your own example! (OPTIONAL/UNGRADED)

In this optional and ungraded portion of this notebook, you can try your model on your own audio clips!

* Record a 10 second audio clip of you saying the word "activate" and other random words, and upload it to the Coursera hub as `myaudio.wav`.
  * Be sure to upload the audio as a wav file.
  * If your audio is recorded in a different format (such as mp3) there is free software that you can find online for converting it to wav.
  * If your audio recording is not 10 seconds, the code below will either trim or pad it as needed to make it 10 seconds.
###Code
# Preprocess the audio to the correct format
def preprocess_audio(filename):
# Trim or pad audio segment to 10000ms
padding = AudioSegment.silent(duration=10000)
segment = AudioSegment.from_wav(filename)[:10000]
segment = padding.overlay(segment)
# Set frame rate to 44100
segment = segment.set_frame_rate(44100)
# Export as wav
segment.export(filename, format='wav')
###Output
_____no_output_____
###Markdown
Once you've uploaded your audio file to Coursera, put the path to your file in the variable below.
###Code
your_filename = "audio_examples/my_audio.wav"
preprocess_audio(your_filename)
IPython.display.Audio(your_filename) # listen to the audio you uploaded
###Output
_____no_output_____
###Markdown
Finally, use the model to predict when you say activate in the 10 second audio clip, and trigger a chime. If beeps are not being added appropriately, try to adjust the chime_threshold.
###Code
chime_threshold = 0.5
prediction = detect_triggerword(your_filename)
chime_on_activate(your_filename, prediction, chime_threshold)
IPython.display.Audio("./chime_output.wav")
###Output
_____no_output_____
RecomendationSystem/2-Pyspark-Facto-MovieLens.ipynb
###Markdown
# AI-Frameworks LAB 5

## Introduction to Recommendation Systems with Collaborative Filtering - Part 2bis: Latent Vector-Based Methods with the `SparkML` Library

The objectives of this notebook are the following:

* Use the ALS algorithm to learn a decomposition of the ratings matrix.
* Use the results of the algorithm to make recommendations.

## 1. Introduction

This notebook deals with a classic recommendation problem based on collaborative filtering, using the resources of the [Spark MLlib library](http://spark.apache.org/docs/latest/api/python/pyspark.mllib.html#pyspark.mllib.recommendation.ALS) through the pyspark API. The general problem is described in the [introduction](https://github.com/wikistat/Ateliers-Big-Data/tree/master/3-MovieLens) and in a [course note](http://wikistat.fr/pdf/st-m-datSc3-colFil.pdf) from [Wikistat](http://wikistat.fr/). It is applied to the public data of the [GroupLens](http://grouplens.org/datasets/movielens/) site. The objective is to test the methods and the optimization procedure on the smallest dataset, made up of 100k ratings from 943 users on 1682 movies, where each user has rated at least 20 movies. The larger datasets (1M, 10M, 20M ratings) can be used to "scale up in volume".

This notebook draws on the examples in the [documentation](http://spark.apache.org/docs/latest/api/python/pyspark.mllib.html#pyspark.mllib.recommendation.ALS) and on a [tutorial](https://github.com/jadianes/spark-movie-lens/blob/master/notebooks/building-recommender.ipynb) by [Jose A. Dianes](https://www.codementor.io/jadianes). The topic was also covered at a [Spark Summit](https://databricks-training.s3.amazonaws.com/movie-recommendation-with-mllib.html).

The objective is to use these data alone to propose recommendations. The initial data take the form of a **very sparse** matrix containing ratings or evaluations. **Beware**: the "0"s in the matrix are not ratings but *missing values* - the movie has not yet been seen or rated. An algorithm that meets this objective of *completing a large sparse matrix*, implemented in freely available software, exists in the R library [softImpute](https://cran.r-project.org/web/packages/softImpute/index.html). Its use is described in another [notebook](https://github.com/wikistat/Ateliers-Big-Data/blob/master/3-MovieLens/Atelier-MovieLens-softImpute.ipynb). The [NMF](http://wikistat.fr/pdf/st-m-explo-nmf.pdf) version in [Spark MLlib](http://spark.apache.org/docs/latest/api/python/pyspark.mllib.html#pyspark.mllib.recommendation.ALS) also allows completion.

In contrast, the NMF version included in the [Scikit-learn](http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.NMF.html) library also handles [sparse matrices](http://docs.scipy.org/doc/scipy/reference/sparse.html), but the optimized criterion (least squares) treats the "0"s as zero ratings, not as missing data. *It is not suited to the completion problem*, unlike the MLlib version. One would probably have to use the Python library [nonnegfac](https://github.com/kimjingu/nonnegfac-python) by [Kim et al. (2014)](http://link.springer.com/content/pdf/10.1007%2Fs10898-013-0035-4.pdf); **to be tested**!

In the first part, the smallest file is split into three samples: training, validation and test; the rank of the factorization (the number of latent factors) is optimized by minimizing the error estimated on the validation sample. The largest file is then used to evaluate the impact of the size of the training set.

## 2. Importing the data into HDFS

The data must be stored in a location accessible from all the nodes of the cluster to allow the construction of the distributed dataset (RDD). In a standalone use of *Spark*, they are simply loaded into the current directory.
###Code
sc
###Output
_____no_output_____
###Markdown
The data are read as raw lines of text before being restructured into the proper format of a *sparse matrix*, namely a list of triplets containing the row index, the column index and the rating, for the filled-in values only.
###Code
# Import the data as text into an RDD
small_ratings_raw_data = sc.textFile("movielens_small/ratings.csv")
# Identify and display the first line (header)
small_ratings_raw_data_header = small_ratings_raw_data.take(1)[0]
print(small_ratings_raw_data_header)
# Create RDD without header
all_lines = small_ratings_raw_data.filter(lambda l : l!=small_ratings_raw_data_header)
# Split the fields (user, item, rating) into a new RDD
from pyspark.sql import Row
split_lines = all_lines.map(lambda l : l.split(","))
ratingsRDD = split_lines.map(lambda p: Row(user=int(p[0]), item=int(p[1]),
rating=float(p[2]), timestamp=int(p[3])))
# .cache(): keep the RDD in memory once processed
ratingsRDD.cache()
# Display the first two rows
ratingsRDD.take(2)
# Convert RDD to DataFrame
ratingsDF = spark.createDataFrame(ratingsRDD)
ratingsDF.take(2)
###Output
_____no_output_____
###Markdown
3. Rank optimization on the 10k sample The file contains 10,000 ratings crossing the opinions of one thousand users on the movies, among 1,700, that they have seen. 3.1 Building the samples Random split into three samples: training, validation and test. The rank parameter is optimized by minimizing the estimated error on the validation sample. This strategy, rather than cross-validation, is better suited to massive data.
###Code
tauxTrain=0.6
tauxVal=0.2
tauxTes=0.2
# If the proportions sum to less than 1, the data are subsampled.
(trainDF, validDF, testDF) = ratingsDF.randomSplit([tauxTrain, tauxVal, tauxTes])
# validation and test sets to predict, without the ratings
validDF_P = validDF.select("user", "item")
testDF_P = testDF.select("user", "item")
trainDF.take(2), validDF_P.take(2), testDF_P.take(2)
###Output
_____no_output_____
###Markdown
3.2 Optimizing the rank of the NMF The imputation error of the data, hence the recommendation error, is estimated on the validation sample for different values (a grid) of the rank of the matrix factorization. In principle, the value of the penalization parameter, set to 0.1 by default, should also be optimized. *Important point:* the fitting error of the factorization only takes into account the values listed in the sparse matrix, not the "0"s, which are missing data.
###Code
from pyspark.ml.recommendation import ALS
import math
import collections
# Seed for the random generator
seed = 5
# Max number of iterations (ALS)
maxIter = 10
# L2 regularization (regParam); should also be optimized
regularization_parameter = 0.1
# Grid of rank values to optimize
ranks = [4, 8, 12]
# Initialize variables
# dictionary storing the error for each tested rank
errors = collections.defaultdict(float)
tolerance = 0.02
min_error = float('inf')
best_rank = -1
best_iteration = -1
from pyspark.ml.evaluation import RegressionEvaluator
for rank in ranks:
als = ALS( rank=rank, seed=seed, maxIter=maxIter,
regParam=regularization_parameter)
model = als.fit(trainDF)
    # Predict the validation sample
    predDF = model.transform(validDF).select("prediction","rating")
    # Remove unpredicted rows (users/items absent from the train dataset)
    pred_without_naDF = predDF.na.drop()
    # Compute the RMSE
evaluator = RegressionEvaluator(metricName="rmse", labelCol="rating",
predictionCol="prediction")
rmse = evaluator.evaluate(pred_without_naDF)
print("Root-mean-square error for rank %d = "%rank + str(rmse))
errors[rank] = rmse
if rmse < min_error:
min_error = rmse
best_rank = rank
# Best solution
print('Optimal rank: %s' % best_rank)
###Output
_____no_output_____
###Markdown
3.3 Results and test
###Code
# A few predictions
pred_without_naDF.take(3)
###Output
_____no_output_____
###Markdown
Final prediction on the test sample.
###Code
# Concatenate the train and validation DataFrames
trainValidDF = trainDF.union(validDF)
# Fit a model on the enlarged training DataFrame, with the rank set to its optimal value
als = ALS( rank=best_rank, seed=seed, maxIter=maxIter,
           regParam=regularization_parameter)
model = als.fit(trainValidDF)
# Predict on the test DataFrame
predDF = model.transform(testDF).select("prediction","rating")
# Remove unpredicted rows (users/items absent from the train dataset)
pred_without_naDF = predDF.na.drop()
# Calcul du RMSE
evaluator = RegressionEvaluator(metricName="rmse", labelCol="rating",
predictionCol="prediction")
rmse = evaluator.evaluate(pred_without_naDF)
print("Root-mean-square error for rank %d = "%best_rank + str(rmse))
###Output
_____no_output_____
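###Markdown
The notebook's stated objectives also include using the factorization to make recommendations. As a minimal added sketch (not in the original notebook; `recommendForAllUsers` and `recommendForAllItems` are available on the fitted `ALSModel` in Spark 2.2 and later):
###Code
# Top-5 item recommendations for every user, and top-5 user recommendations for every item
userRecs = model.recommendForAllUsers(5)
itemRecs = model.recommendForAllItems(5)
userRecs.take(1), itemRecs.take(1)
###Output
_____no_output_____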
###Markdown
4 Analysis of the full file MovieLens provides a much larger file with 20M ratings (138,000 users, 27,000 movies). This file is used to extract a test set of two million ratings to reconstruct. The previously optimized parameters (they could no doubt be tuned further) are applied in a succession of estimation / prediction runs with an increasing training sample size. It would have been more elegant to automate the work in a loop, but with the largest data volumes, poorly controlled Spark behaviors can cause out-of-memory crashes. 4.1 Reading the data The file is preprocessed in a similar way.
###Code
# Import the data as text into an RDD
ratings_raw_data = sc.textFile(DATA_PATH+"ratings20M.csv")
# Identify and display the first line (header)
ratings_raw_data_header = ratings_raw_data.take(1)[0]
ratings_raw_data_header
# Create RDD without header
all_lines = ratings_raw_data.filter(lambda l : l!=ratings_raw_data_header)
# Split the fields (user, item, rating) into a new RDD
split_lines = all_lines.map(lambda l : l.split(","))
ratingsRDD = split_lines.map(lambda p: Row(user=int(p[0]), item=int(p[1]),
rating=float(p[2]), timestamp=int(p[3])))
# Display the first two rows
ratingsRDD.take(2)
# Convert RDD to DataFrame
ratingsDF = spark.createDataFrame(ratingsRDD)
ratingsDF.take(2)
###Output
_____no_output_____
###Markdown
4.2 Sampling Extraction of the test sample and, if needed, subsampling of the training sample.
###Code
tauxTest=0.1
# If the proportions sum to less than 1, the data are subsampled.
(trainTotDF, testDF) = ratingsDF.randomSplit([1-tauxTest, tauxTest])
# Subsample the training set in order to
# test with increasing sizes of this sample
tauxEch=0.2
(trainDF, DropData) = trainTotDF.randomSplit([tauxEch, 1-tauxEch])
testDF.take(2), trainDF.take(2)
###Output
_____no_output_____
###Markdown
4.3 Model estimation The model is estimated using the parameter values obtained in the previous step.
###Code
import time
time_start=time.time()
# Seed for the random generator
seed = 5
# Max number of iterations (ALS)
maxIter = 10
# L2 regularization (default value)
regularization_parameter = 0.1
best_rank = 8
# Fit the model with the previously optimized rank
als = ALS(rank=best_rank, seed=seed, maxIter=maxIter,
regParam=regularization_parameter)
model = als.fit(trainDF)
time_end=time.time()
time_als=(time_end - time_start)
print("ALS prend %d s" %(time_als))
###Output
_____no_output_____
###Markdown
4.4 Prediction of the test sample and error
###Code
# Predict the test sample
predDF = model.transform(testDF).select("prediction","rating")
# Remove unpredicted rows (users/items absent from the train dataset)
pred_without_naDF = predDF.na.drop()
# Compute the RMSE
evaluator = RegressionEvaluator(metricName="rmse", labelCol="rating",
predictionCol="prediction")
rmse = evaluator.evaluate(pred_without_naDF)
print("Root-mean-square error for rank %d = "%best_rank + str(rmse))
trainDF.count()
###Output
_____no_output_____
|
_notebooks/2020-08-28-02-Correlation-and-Experimental-Design.ipynb
|
###Markdown
Correlation and Experimental Design> In this chapter, you'll learn how to quantify the strength of a linear relationship between two variables, and explore how confounding variables can affect the relationship between two other variables. You'll also see how a study’s design can influence its results, change how the data should be analyzed, and potentially affect the reliability of your conclusions. This is the summary of the lecture "Introduction to Statistics in Python" via datacamp.- toc: true - badges: true - comments: true - author: Chanseok Kang - categories: [Python, Datacamp, Statistics] - image: images/lmplot_wh.png
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
###Output
_____no_output_____
###Markdown
Correlation- Correlation coefficient - Quantifies the linear relationship between two variables - Number between -1 and 1 - Magnitude corresponds to strength of relationship - Sign (+ or -) corresponds to direction of relationship- Pearson product-moment correlation ($r$) - Most common - $\bar{x}$ = mean of $x$ - $\sigma_x$ = standard deviation of $x$ $$ r = \frac{1}{n} \sum_{i=1}^{n} \frac{(x_i - \bar{x})(y_i - \bar{y})}{\sigma_x \times \sigma_y} $$ - Variations - Kendall's Tau - Spearman's rho Relationships between variablesIn this chapter, you'll be working with a dataset `world_happiness` containing results from the [2019 World Happiness Report](https://worldhappiness.report/ed/2019/). The report scores various countries based on how happy people in that country are. It also ranks each country on various societal aspects such as social support, freedom, corruption, and others. The dataset also includes the GDP per capita and life expectancy for each country.In this exercise, you'll examine the relationship between a country's life expectancy (`life_exp`) and happiness score (`happiness_score`) both visually and quantitatively.
###Code
world_happiness = pd.read_csv('./dataset/world_happiness.csv', index_col=0)
world_happiness.head()
# Create a scatterplot of happiness_score vs. life_exp and show
sns.scatterplot(x='life_exp', y='happiness_score', data=world_happiness);
# Create scatterplot of happiness_score vs. life_exp with trendline
sns.lmplot(x='life_exp', y='happiness_score', data=world_happiness, ci=None);
# Correlation between life_exp and happiness_score
cor = world_happiness['life_exp'].corr(world_happiness['happiness_score'])
print(cor)
###Output
0.7802249053272061
###Markdown
Correlation caveats- Correlation only accounts for linear relationships- Transformation - Certain statistical methods rely on variables having a linear relationship - Correlation coefficient - Linear regression- Correlation does not imply causation - $x$ is correlated with $y$ **does not mean** $x$ causes $y$ What can't correlation measure?While the correlation coefficient is a convenient way to quantify the strength of a relationship between two variables, it's far from perfect. In this exercise, you'll explore one of the caveats of the correlation coefficient by examining the relationship between a country's GDP per capita (`gdp_per_cap`) and happiness score.
###Code
# Scatterplot of gdp_per_cap and life_exp
sns.scatterplot(x='gdp_per_cap', y='life_exp', data=world_happiness);
# Correlation between gdp_per_cap and life_exp
cor = world_happiness['gdp_per_cap'].corr(world_happiness['life_exp'])
print(cor)
###Output
0.7019547642148014
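###Markdown
A tiny synthetic example (added here, not from the lecture) of the first caveat above: a perfect but nonlinear relationship can still produce a Pearson correlation of zero, because the coefficient only measures linear association:
###Code
x = np.linspace(-1, 1, 101)
y = x ** 2                       # deterministic quadratic relationship
print(np.corrcoef(x, y)[0, 1])   # ~0: the linear correlation misses it entirely
###Output
_____no_output_____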
###Markdown
Transforming variablesWhen variables have skewed distributions, they often require a transformation in order to form a linear relationship with another variable so that correlation can be computed. In this exercise, you'll perform a transformation yourself.
###Code
# Scatterplot of happiness_score vs. gdp_per_cap
sns.scatterplot(x='gdp_per_cap', y='happiness_score', data=world_happiness);
# Calculate correlation
cor = world_happiness['gdp_per_cap'].corr(world_happiness['happiness_score'])
print(cor)
# Create log_gdp_per_cap column
world_happiness['log_gdp_per_cap'] = np.log(world_happiness['gdp_per_cap'])
# Scatterplot of log_gdp_per_cap and happiness_score
sns.scatterplot(x='log_gdp_per_cap', y='happiness_score', data=world_happiness);
# Calculate correlation
cor = world_happiness['log_gdp_per_cap'].corr(world_happiness['happiness_score'])
print(cor)
###Output
0.8043146004918288
###Markdown
Does sugar improve happiness?A new column has been added to `world_happiness` called `grams_sugar_per_day`, which contains the average amount of sugar eaten per person per day in each country. In this exercise, you'll examine the effect of a country's average sugar consumption on its happiness score.
###Code
world_happiness = pd.read_csv('./dataset/world_happiness_add_sugar.csv', index_col=0)
world_happiness
# Scatterplot of grams_sugar_per_day and happiness_score
sns.scatterplot(x='grams_sugar_per_day', y='happiness_score', data=world_happiness);
# Correlation between grams_sugar_per_day and happiness_score
cor = world_happiness['grams_sugar_per_day'].corr(world_happiness['happiness_score'])
print(cor)
###Output
0.6939100021829635
|
105_opencv_cropping.ipynb
|
###Markdown
Crop Image with OpenCV Welcome to **[PyImageSearch Plus](http://pyimg.co/plus)** Jupyter Notebooks!This notebook is associated with the [Crop Image with OpenCV](https://www.pyimagesearch.com/2021/01/19/crop-image-with-opencv/) blog post published on 01-19-2021.Only the code for the blog post is here. Most codeblocks have a 1:1 relationship with what you find in the blog post with two exceptions: (1) Python classes are not separate files as they are typically organized with PyImageSearch projects, and (2) Command Line Argument parsing is replaced with an `args` dictionary that you can manipulate as needed.We recommend that you execute (press ▶️) the code block-by-block, as-is, before adjusting parameters and `args` inputs. Once you've verified that the code is working, you are welcome to hack with it and learn from manipulating inputs, settings, and parameters. For more information on using Jupyter and Colab, please refer to these resources:* [Jupyter Notebook User Interface](https://jupyter-notebook.readthedocs.io/en/stable/notebook.htmlnotebook-user-interface)* [Overview of Google Colaboratory Features](https://colab.research.google.com/notebooks/basic_features_overview.ipynb)Happy hacking! Download the code zip file
###Code
!wget https://pyimagesearch-code-downloads.s3-us-west-2.amazonaws.com/opencv-cropping/opencv-cropping.zip
!unzip -qq opencv-cropping.zip
%cd opencv-cropping
###Output
_____no_output_____
###Markdown
Blog Post Code Import Packages
###Code
# import the necessary packages
from matplotlib import pyplot as plt
import numpy as np
import argparse
import cv2
###Output
_____no_output_____
###Markdown
Function to display images in Jupyter Notebooks and Google Colab
###Code
def plt_imshow(title, image):
# convert the image frame BGR to RGB color space and display it
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
plt.imshow(image)
plt.title(title)
plt.grid(False)
plt.show()
###Output
_____no_output_____
###Markdown
Understanding image cropping with OpenCV and NumPy array slicing
###Code
I = np.arange(0, 25)
I
I = I.reshape((5, 5))
I
I[0:3, 0:2]
I[3:5, 1:5]
###Output
_____no_output_____
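###Markdown
One detail worth keeping in mind (an added note, not from the original post): NumPy slices are views, not copies, so a crop shares memory with the source array. Call `.copy()` if you want to modify a crop without touching the original image:
###Code
sub = I[0:3, 0:2]                # a view into I
sub[0, 0] = 99
print(I[0, 0])                   # 99 -- modifying the view changed I as well
sub_copy = I[3:5, 1:5].copy()    # an independent copy, safe to modify
sub_copy[0, 0] = -1
print(I[3, 1])                   # unchanged
###Output
_____no_output_____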
###Markdown
Implementing image cropping with OpenCV
###Code
# # construct the argument parser and parse the arguments
# ap = argparse.ArgumentParser()
# ap.add_argument("-i", "--image", type=str, default="adrian.png",
# help="path to the input image")
# args = vars(ap.parse_args())
# since we are using Jupyter Notebooks we can replace our argument
# parsing code with *hard coded* arguments and values
args = {
"image": "adrian.png"
}
# load the input image and display it to our screen
image = cv2.imread(args["image"])
plt_imshow("Original", image)
# cropping an image with OpenCV is accomplished via simple NumPy
# array slices in startY:endY, startX:endX order -- here we are
# cropping the face from the image (these coordinates were
# determined using photo editing software such as Photoshop,
# GIMP, Paint, etc.)
face = image[85:250, 85:220]
plt_imshow("Face", face)
# apply another image crop, this time extracting the body
body = image[90:450, 0:290]
plt_imshow("Body", body)
###Output
_____no_output_____
|
examples/reference/elements/plotly/Scatter3D.ipynb
|
###Markdown
Title: Scatter3D Element. Dependencies: Matplotlib. Backends: Matplotlib, Plotly.
###Code
import numpy as np
import holoviews as hv
hv.notebook_extension('plotly')
###Output
_____no_output_____
###Markdown
``Scatter3D`` represents three-dimensional coordinates which may be colormapped or scaled in size according to a value. They are therefore very similar to [``Points``](Points.ipynb) and [``Scatter``](Scatter.ipynb) types but have one additional coordinate dimension. Like other 3D elements the camera angle can be controlled using ``azimuth``, ``elevation`` and ``distance`` plot options:
###Code
%%opts Scatter3D [width=500 height=500 camera_zoom=20 color_index=2] (size=5 cmap='fire')
y,x = np.mgrid[-5:5, -5:5] * 0.1
heights = np.sin(x**2+y**2)
hv.Scatter3D(zip(x.flat,y.flat,heights.flat))
###Output
_____no_output_____
###Markdown
Just like all regular 2D elements, ``Scatter3D`` types can be overlaid and will follow the default color cycle:
###Code
%%opts Scatter3D [width=500 height=500] (symbol='x' size=2)
hv.Scatter3D(np.random.randn(100,4), vdims=['Size']) * hv.Scatter3D(np.random.randn(100,4)+2, vdims=['Size'])
###Output
_____no_output_____
###Markdown
Title: Scatter3D Element. Dependencies: Matplotlib. Backends: Matplotlib, Plotly.
###Code
import numpy as np
import holoviews as hv
from holoviews import dim, opts
hv.extension('plotly')
###Output
_____no_output_____
###Markdown
``Scatter3D`` represents three-dimensional coordinates which may be colormapped or scaled in size according to a value. They are therefore very similar to [``Points``](Points.ipynb) and [``Scatter``](Scatter.ipynb) types but have one additional coordinate dimension. Like other 3D elements the camera angle can be controlled using ``azimuth``, ``elevation`` and ``distance`` plot options:
###Code
y,x = np.mgrid[-5:5, -5:5] * 0.1
heights = np.sin(x**2+y**2)
hv.Scatter3D((x.flat, y.flat, heights.flat)).opts(
cmap='fire', color='z', size=5)
###Output
_____no_output_____
###Markdown
Just like all regular 2D elements, ``Scatter3D`` types can be overlaid and will follow the default color cycle:
###Code
(hv.Scatter3D(np.random.randn(100,4), vdims='Size') * hv.Scatter3D(np.random.randn(100,4)+2, vdims='Size')).opts(
opts.Scatter3D(size=(5+dim('Size'))*2, marker='diamond')
)
###Output
_____no_output_____
|
Mission_to_Mars-Starter.ipynb
|
###Markdown
Visit the NASA mars news site
###Code
# Visit the Mars news site
url = 'https://redplanetscience.com/'
browser.visit(url)
# Optional delay for loading the page
browser.is_element_present_by_css('div.list_text', wait_time=1)
# Convert the browser html to a soup object
html = browser.html
news_soup = soup(html, 'html.parser')
slide_elem = news_soup.select_one('div.list_text')
#display the current title content
slide_elem.find('div', class_='content_title')
# Use the parent element to find the first a tag and save it as `news_title`
news_title = slide_elem.find('div', class_='content_title').get_text()
news_title
# Use the parent element to find the paragraph text
news_p = slide_elem.find('div', class_='article_teaser_body').get_text()
news_p
###Output
_____no_output_____
###Markdown
JPL Space Images Featured Image
###Code
# Visit URL
url = 'https://spaceimages-mars.com'
browser.visit(url)
# Find and click the full image button
full_image_link = browser.find_by_tag('button')[1]
full_image_link.click()
# Parse the resulting html with soup
html = browser.html
img_soup = soup(html, 'html.parser')
#print(img_soup.prettify())
img_url_rel = img_soup.find('img', class_='fancybox-image').get('src')
# find the relative image url
img_url_rel
# Use the base url to create an absolute url
img_url=f'https://spaceimages-mars.com/{img_url_rel}'
img_url
###Output
_____no_output_____
###Markdown
Mars Facts
###Code
# Use `pd.read_html` to pull the data from the Mars-Earth Comparison section
# hint use index 0 to find the table
df = pd.read_html('https://galaxyfacts-mars.com/')[0]
df.head()
df.columns=['Description', 'Mars', 'Earth']
df
df.set_index('Description', inplace=True)
df
df.to_html()
###Output
_____no_output_____
###Markdown
Hemispheres
###Code
url = 'https://marshemispheres.com/'
browser.visit(url)
# Create a list to hold the images and titles.
hemisphere_image_urls = []
# Get a list of all of the hemispheres
links = browser.find_by_css('a.product-item img')
# Next, loop through those links, click the link, find the sample anchor, return the href
for i in range(len(links)):
hemisphereInfo = {}
# We have to find the elements on each loop to avoid a stale element exception
browser.find_by_css('a.product-item img')[i].click()
# Next, we find the Sample image anchor tag and extract the href
sample = browser.links.find_by_text('Sample').first
hemisphereInfo['img_url']= sample['href']
# Get Hemisphere title
hemisphereInfo['title'] = browser.find_by_css('h2.title').text
# Append hemisphere object to list
hemisphere_image_urls.append(hemisphereInfo)
# Finally, we navigate backwards
browser.back()
hemisphere_image_urls
browser.quit()
###Output
_____no_output_____
|
examples/Marks/Pyplot/Candles.ipynb
|
###Markdown
OHLC with Ordinal Scale
###Code
from bqplot import OrdinalScale
fig = plt.figure()
plt.scales(scales={'x': OrdinalScale()})
axes_options = {'x': {'label': 'X', 'tick_format': '%d-%m-%Y'},
'y': {'label': 'Y', 'tick_format': '.2f'}}
ohlc3 = plt.ohlc(dates2, np.array(prices2) / 60, marker='candle',
colors=['dodgerblue','orange'],
axes_options=axes_options)
fig
ohlc3.opacities = [0.1, 0.2]
###Output
_____no_output_____
|
notebooks/5. Scenario trees with optimized scenarios and structure (method #2).ipynb
|
###Markdown
We illustrate on a Geometric Brownian Motion (GBM) the second of the two methods to build scenario trees with **optimized scenarios and structure**. This method does not explore the space of tree structures; it builds the structure 'forward' stage-by-stage by meeting a `width_vector` requirement (either given explicitly or calculated optimally). It has the advantage of being practical even when the number of stages and scenarios is large. However, it does not offer the diversity of structures that the other method does. Define a `ScenarioProcess` instance for the GBM
###Code
S_0 = 2 # initial value (at stage 0)
delta_t = 1 # time lag between 2 stages
mu = 0 # drift
sigma = 1 # volatility
###Output
_____no_output_____
###Markdown
The `gbm_recurrence` function below implements the dynamic relation of a GBM: * $S_{t} = S_{t-1} \exp[(\mu - \sigma^2/2) \Delta t + \sigma \epsilon_t\sqrt{\Delta t}], \quad t=1,2,\dots$ where $\epsilon_t$ is a standard normal random variable $N(0,1)$.The discretization of $\epsilon_t$ is done by quasi-Monte Carlo (QMC) and is implemented by the `epsilon_sample_qmc` method.
###Code
def gbm_recurrence(stage, epsilon, scenario_path):
if stage == 0:
return {'S': np.array([S_0])}
else:
return {'S': scenario_path[stage-1]['S'] \
* np.exp((mu - sigma**2 / 2) * delta_t + sigma * np.sqrt(delta_t) * epsilon)}
def epsilon_sample_qmc(n_samples, stage, u=0.5):
return norm.ppf(np.linspace(0, 1-1/n_samples, n_samples) + u / n_samples).reshape(-1, 1)
scenario_process = ScenarioProcess(gbm_recurrence, epsilon_sample_qmc)
###Output
_____no_output_____
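###Markdown
As a quick illustration (added, not in the original notebook), the QMC sampler places the epsilon values at normal quantiles of equally spaced probabilities; with the default `u=0.5` these are the midpoints of equal-probability bins:
###Code
# For n_samples=5 and u=0.5 this evaluates norm.ppf([0.1, 0.3, 0.5, 0.7, 0.9])
epsilon_sample_qmc(5, stage=0)
###Output
_____no_output_____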
###Markdown
Define a `VariabilityProcess` instance
###Code
def lookback_fct(stage, scenario_path):
return scenario_path[stage]['S'][0] + 1
my_variability = VariabilityProcess(lookback_fct)
###Output
_____no_output_____
###Markdown
Forward generation of structure From a given `width_vector`
###Code
scen_tree = ScenarioTree.forward_generation_from_given_width(width_vector=[5,20,50,80,100],
scenario_process=scenario_process,
variability_process=my_variability,
alpha=1)
scen_tree.plot(var_name='S', figsize=(15,15))
###Output
_____no_output_____
###Markdown
Without a `width_vector`
###Code
def lookback_fct(stage, scenario_path):
return scenario_path[stage]['S'][0] + 1
def looknow_fct(stage, epsilon):
return np.exp(epsilon[0])
def average_fct(stage):
return 1
my_variability = VariabilityProcess(lookback_fct, looknow_fct, average_fct)
scen_tree = ScenarioTree.forward_generation(n_stages=5,
n_scenarios=100,
scenario_process=scenario_process,
variability_process=my_variability,
alpha=1)
scen_tree.plot(var_name='S', figsize=(15,15))
###Output
_____no_output_____
|
mmdetection_training/to_coco_dataset.ipynb
|
###Markdown
Loading the train dataframe
###Code
data_dir = '/data/kaggle_data/'
df = pd.read_csv(data_dir + 'train.csv')
df.head()
def rle2mask(rle, img_w, img_h):
    ## transform the string into an array of shape (2, N)
    array = np.fromiter(rle.split(), dtype = np.uint)
    array = array.reshape((-1,2)).T
    array[0] = array[0] - 1  # RLE starts are 1-based; shift them to 0-based indices
    ## decompress the rle encoding (e.g., starts/lengths "3 1 10 2" become pixel indices [2, 9, 10])
    # for faster mask construction
    starts, lengths = array
    mask_decompressed = np.concatenate([np.arange(s, s + l, dtype = np.uint) for s, l in zip(starts, lengths)])
## Building the binary mask
msk_img = np.zeros(img_w * img_h, dtype = np.uint8)
msk_img[mask_decompressed] = 1
msk_img = msk_img.reshape((img_h, img_w))
msk_img = np.asfortranarray(msk_img) ## This is important so pycocotools can handle this object
return msk_img
###Output
_____no_output_____
###Markdown
Minor Sanity Check
###Code
rle = df.loc[0, 'annotation']
print(rle)
plt.imshow(rle2mask(rle, 704, 520));
###Output
118145 6 118849 7 119553 8 120257 8 120961 9 121665 10 122369 12 123074 13 123778 14 124482 15 125186 16 125890 17 126594 18 127298 19 128002 20 128706 21 129410 22 130114 23 130818 24 131523 24 132227 25 132931 25 133635 24 134339 24 135043 23 135748 21 136452 19 137157 16 137864 11 138573 4
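###Markdown
For completeness, a hypothetical inverse helper (not part of the original notebook) that re-encodes a binary mask into the same 1-based RLE convention, making round-trip checks possible:
###Code
def mask2rle(msk_img):
    # Flatten in row-major order (the same order in which rle2mask fills pixels)
    pixels = np.concatenate([[0], msk_img.flatten(), [0]])
    # Indices where the value changes mark run boundaries (1-based starts / exclusive ends)
    runs = np.where(pixels[1:] != pixels[:-1])[0] + 1
    runs[1::2] -= runs[::2]  # turn (start, end) pairs into (start, length)
    return ' '.join(str(r) for r in runs)

# Round-trip check on the annotation decoded above
assert mask2rle(rle2mask(rle, 704, 520)) == rle
###Output
_____no_output_____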
###Markdown
Function that builds the .json file
###Code
from tqdm.notebook import tqdm
from pycocotools import mask as maskUtils
from joblib import Parallel, delayed
def annotate(idx, row, cat_ids):
mask = rle2mask(row['annotation'], row['width'], row['height']) # Binary mask
c_rle = maskUtils.encode(mask) # Encoding it back to rle (coco format)
c_rle['counts'] = c_rle['counts'].decode('utf-8') # converting from binary to utf-8
area = maskUtils.area(c_rle).item() # calculating the area
bbox = maskUtils.toBbox(c_rle).astype(int).tolist() # calculating the bboxes
annotation = {
'segmentation': c_rle,
'bbox': bbox,
'area': area,
'image_id':row['id'],
'category_id':cat_ids[row['cell_type']],
'iscrowd':0,
'id':idx
}
return annotation
def coco_structure(df, workers = 4):
## Building the header
cat_ids = {'astro':1, 'shsy5y':2, 'cort':3}
cats =[{'name':name, 'id':id} for name,id in cat_ids.items()]
images = [{'id':id, 'width':row.width, 'height':row.height, 'file_name':f'train/{id}.png'} for id,row in df.groupby('id').agg('first').iterrows()]
## Building the annotations
annotations = Parallel(n_jobs=workers)(delayed(annotate)(idx, row, cat_ids) for idx, row in tqdm(df.iterrows(), total = len(df)))
return {'categories':cats, 'images':images, 'annotations':annotations}
import json,itertools
root = coco_structure(df)
root['annotations'][19000]
from sklearn.model_selection import train_test_split
train_images, test_images = train_test_split(df.id.unique(), test_size=0.1, random_state=0)
train_df = df.query("id in @train_images")
test_df = df.query("id in @test_images")
assert len(train_df) + len(test_df) == len(df)
train_root = coco_structure(train_df)
val_root = coco_structure(test_df)
mmdet_dir = '/data/mmdet/'
with open(mmdet_dir + 'annotations_full.json', 'w', encoding='utf-8') as f:
json.dump(root, f, ensure_ascii=True, indent=4)
with open(mmdet_dir + 'annotations_train.json', 'w', encoding='utf-8') as f:
json.dump(train_root, f, ensure_ascii=True, indent=4)
with open(mmdet_dir + 'annotations_val.json', 'w', encoding='utf-8') as f:
json.dump(val_root, f, ensure_ascii=True, indent=4)
###Output
_____no_output_____
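###Markdown
A quick way to validate the generated files (an added suggestion, not in the original notebook) is to load them back with `pycocotools`, which parses and indexes the annotations:
###Code
from pycocotools.coco import COCO

# COCO() fails loudly if the JSON does not follow the expected schema
coco = COCO(mmdet_dir + 'annotations_train.json')
print(len(coco.imgs), 'images,', len(coco.anns), 'annotations')
###Output
_____no_output_____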
|
Deep Learning/Course 4 - Convolutional Neural Networks/Art_Generation_with_Neural_Style_Transfer_v3a.ipynb
|
###Markdown
Deep Learning & Art: Neural Style TransferIn this assignment, you will learn about Neural Style Transfer. This algorithm was created by [Gatys et al. (2015).](https://arxiv.org/abs/1508.06576)**In this assignment, you will:**- Implement the neural style transfer algorithm - Generate novel artistic images using your algorithm Most of the algorithms you've studied optimize a cost function to get a set of parameter values. In Neural Style Transfer, you'll optimize a cost function to get pixel values! Updates If you were working on the notebook before this update...* The current notebook is version "3a".* You can find your original work saved in the notebook with the previous version name ("v2") * To view the file directory, go to the menu "File->Open", and this will open a new tab that shows the file directory. List of updates* Use `pprint.PrettyPrinter` to format printing of the vgg model.* computing content cost: clarified and reformatted instructions, fixed broken links, added additional hints for unrolling.* style matrix: clarify two uses of variable "G" by using different notation for gram matrix.* style cost: use distinct notation for gram matrix, added additional hints.* Grammar and wording updates for clarity.* `model_nn`: added hints.
###Code
import os
import sys
import scipy.io
import scipy.misc
import matplotlib.pyplot as plt
from matplotlib.pyplot import imshow
from PIL import Image
from nst_utils import *
import numpy as np
import tensorflow as tf
import pprint
%matplotlib inline
###Output
_____no_output_____
###Markdown
1 - Problem StatementNeural Style Transfer (NST) is one of the most fun techniques in deep learning. As seen below, it merges two images, namely: a **"content" image (C) and a "style" image (S), to create a "generated" image (G**). The generated image G combines the "content" of the image C with the "style" of image S. In this example, you are going to generate an image of the Louvre museum in Paris (content image C), mixed with a painting by Claude Monet, a leader of the impressionist movement (style image S).Let's see how you can do this. 2 - Transfer LearningNeural Style Transfer (NST) uses a previously trained convolutional network, and builds on top of that. The idea of using a network trained on a different task and applying it to a new task is called transfer learning. Following the [original NST paper](https://arxiv.org/abs/1508.06576), we will use the VGG network. Specifically, we'll use VGG-19, a 19-layer version of the VGG network. This model has already been trained on the very large ImageNet database, and thus has learned to recognize a variety of low level features (at the shallower layers) and high level features (at the deeper layers). Run the following code to load parameters from the VGG model. This may take a few seconds.
###Code
pp = pprint.PrettyPrinter(indent=4)
model = load_vgg_model("pretrained-model/imagenet-vgg-verydeep-19.mat")
pp.pprint(model)
###Output
{ 'avgpool1': <tf.Tensor 'AvgPool:0' shape=(1, 150, 200, 64) dtype=float32>,
'avgpool2': <tf.Tensor 'AvgPool_1:0' shape=(1, 75, 100, 128) dtype=float32>,
'avgpool3': <tf.Tensor 'AvgPool_2:0' shape=(1, 38, 50, 256) dtype=float32>,
'avgpool4': <tf.Tensor 'AvgPool_3:0' shape=(1, 19, 25, 512) dtype=float32>,
'avgpool5': <tf.Tensor 'AvgPool_4:0' shape=(1, 10, 13, 512) dtype=float32>,
'conv1_1': <tf.Tensor 'Relu:0' shape=(1, 300, 400, 64) dtype=float32>,
'conv1_2': <tf.Tensor 'Relu_1:0' shape=(1, 300, 400, 64) dtype=float32>,
'conv2_1': <tf.Tensor 'Relu_2:0' shape=(1, 150, 200, 128) dtype=float32>,
'conv2_2': <tf.Tensor 'Relu_3:0' shape=(1, 150, 200, 128) dtype=float32>,
'conv3_1': <tf.Tensor 'Relu_4:0' shape=(1, 75, 100, 256) dtype=float32>,
'conv3_2': <tf.Tensor 'Relu_5:0' shape=(1, 75, 100, 256) dtype=float32>,
'conv3_3': <tf.Tensor 'Relu_6:0' shape=(1, 75, 100, 256) dtype=float32>,
'conv3_4': <tf.Tensor 'Relu_7:0' shape=(1, 75, 100, 256) dtype=float32>,
'conv4_1': <tf.Tensor 'Relu_8:0' shape=(1, 38, 50, 512) dtype=float32>,
'conv4_2': <tf.Tensor 'Relu_9:0' shape=(1, 38, 50, 512) dtype=float32>,
'conv4_3': <tf.Tensor 'Relu_10:0' shape=(1, 38, 50, 512) dtype=float32>,
'conv4_4': <tf.Tensor 'Relu_11:0' shape=(1, 38, 50, 512) dtype=float32>,
'conv5_1': <tf.Tensor 'Relu_12:0' shape=(1, 19, 25, 512) dtype=float32>,
'conv5_2': <tf.Tensor 'Relu_13:0' shape=(1, 19, 25, 512) dtype=float32>,
'conv5_3': <tf.Tensor 'Relu_14:0' shape=(1, 19, 25, 512) dtype=float32>,
'conv5_4': <tf.Tensor 'Relu_15:0' shape=(1, 19, 25, 512) dtype=float32>,
'input': <tf.Variable 'Variable:0' shape=(1, 300, 400, 3) dtype=float32_ref>}
###Markdown
* The model is stored in a python dictionary. * The python dictionary contains key-value pairs for each layer. * The 'key' is the variable name and the 'value' is a tensor for that layer. Assign input image to the model's input layerTo run an image through this network, you just have to feed the image to the model. In TensorFlow, you can do so using the [tf.assign](https://www.tensorflow.org/api_docs/python/tf/assign) function. In particular, you will use the assign function like this: `model["input"].assign(image)` This assigns the image as an input to the model. Activate a layerAfter this, if you want to access the activations of a particular layer, say layer `4_2` when the network is run on this image, you would run a TensorFlow session on the correct tensor `conv4_2`, as follows: `sess.run(model["conv4_2"])` 3 - Neural Style Transfer (NST)We will build the Neural Style Transfer (NST) algorithm in three steps:- Build the content cost function $J_{content}(C,G)$- Build the style cost function $J_{style}(S,G)$- Put it together to get $J(G) = \alpha J_{content}(C,G) + \beta J_{style}(S,G)$. 3.1 - Computing the content costIn our running example, the content image C will be the picture of the Louvre Museum in Paris. Run the code below to see a picture of the Louvre.
###Code
content_image = scipy.misc.imread("images/louvre.jpg")
imshow(content_image);
###Output
_____no_output_____
###Markdown
The content image (C) shows the Louvre museum's pyramid surrounded by old Paris buildings, against a sunny sky with a few clouds.** 3.1.1 - Make generated image G match the content of image C** Shallower versus deeper layers* The shallower layers of a ConvNet tend to detect lower-level features such as edges and simple textures.* The deeper layers tend to detect higher-level features such as more complex textures as well as object classes. Choose a "middle" activation layer $a^{[l]}$We would like the "generated" image G to have similar content as the input image C. Suppose you have chosen some layer's activations to represent the content of an image. * In practice, you'll get the most visually pleasing results if you choose a layer in the **middle** of the network--neither too shallow nor too deep. * (After you have finished this exercise, feel free to come back and experiment with using different layers, to see how the results vary.) Forward propagate image "C"* Set the image C as the input to the pretrained VGG network, and run forward propagation. * Let $a^{(C)}$ be the hidden layer activations in the layer you had chosen. (In lecture, we had written this as $a^{[l](C)}$, but here we'll drop the superscript $[l]$ to simplify the notation.) This will be an $n_H \times n_W \times n_C$ tensor. Forward propagate image "G"* Repeat this process with the image G: Set G as the input, and run forward propagation. * Let $a^{(G)}$ be the corresponding hidden layer activation. Content Cost Function $J_{content}(C,G)$We will define the content cost function as:$$J_{content}(C,G) = \frac{1}{4 \times n_H \times n_W \times n_C}\sum _{ \text{all entries}} (a^{(C)} - a^{(G)})^2\tag{1} $$* Here, $n_H, n_W$ and $n_C$ are the height, width and number of channels of the hidden layer you have chosen, and appear in a normalization term in the cost. * For clarity, note that $a^{(C)}$ and $a^{(G)}$ are the 3D volumes corresponding to a hidden layer's activations. * In order to compute the cost $J_{content}(C,G)$, it might also be convenient to unroll these 3D volumes into a 2D matrix, as shown below.* Technically this unrolling step isn't needed to compute $J_{content}$, but it will be good practice for when you do need to carry out a similar operation later for computing the style cost $J_{style}$. **Exercise:** Compute the "content cost" using TensorFlow. **Instructions**: The 3 steps to implement this function are:1. Retrieve dimensions from `a_G`: - To retrieve dimensions from a tensor `X`, use: `X.get_shape().as_list()`2. Unroll `a_C` and `a_G` as explained in the picture above - You'll likely want to use these functions: [tf.transpose](https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/transpose) and [tf.reshape](https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/reshape).3. Compute the content cost: - You'll likely want to use these functions: [tf.reduce_sum](https://www.tensorflow.org/api_docs/python/tf/reduce_sum), [tf.square](https://www.tensorflow.org/api_docs/python/tf/square) and [tf.subtract](https://www.tensorflow.org/api_docs/python/tf/subtract).
Additional Hints for "Unrolling"* To unroll the tensor, we want the shape to change from $(m,n_H,n_W,n_C)$ to $(m, n_H \times n_W, n_C)$.* `tf.reshape(tensor, shape)` takes a list of integers that represent the desired output shape.* For the `shape` parameter, a `-1` tells the function to choose the correct dimension size so that the output tensor still contains all the values of the original tensor.* So tf.reshape(a_C, shape=[m, n_H * n_W, n_C]) gives the same result as tf.reshape(a_C, shape=[m, -1, n_C]).* If you prefer to re-order the dimensions, you can use `tf.transpose(tensor, perm)`, where `perm` is a list of integers containing the original index of the dimensions. * For example, `tf.transpose(a_C, perm=[0,3,1,2])` changes the dimensions from $(m, n_H, n_W, n_C)$ to $(m, n_C, n_H, n_W)$.* There is more than one way to unroll the tensors.* Notice that it's not necessary to use tf.transpose to 'unroll' the tensors in this case but this is a useful function to practice and understand for other situations that you'll encounter.
###Code
# GRADED FUNCTION: compute_content_cost
def compute_content_cost(a_C, a_G):
"""
Computes the content cost
Arguments:
a_C -- tensor of dimension (1, n_H, n_W, n_C), hidden layer activations representing content of the image C
a_G -- tensor of dimension (1, n_H, n_W, n_C), hidden layer activations representing content of the image G
Returns:
J_content -- scalar that you compute using equation 1 above.
"""
### START CODE HERE ###
# Retrieve dimensions from a_G (≈1 line)
m, n_H, n_W, n_C = a_G.get_shape().as_list()
# Reshape a_C and a_G (≈2 lines)
a_C_unrolled = tf.reshape(a_C, shape=[m, n_H * n_W, n_C])
a_G_unrolled = tf.reshape(a_G, shape=[m, n_H * n_W, n_C])
# compute the cost with tensorflow (≈1 line)
J_content = 1/(4*n_H*n_W*n_C) * tf.reduce_sum(tf.square(tf.subtract(a_C_unrolled, a_G_unrolled)))
### END CODE HERE ###
return J_content
tf.reset_default_graph()
with tf.Session() as test:
tf.set_random_seed(1)
a_C = tf.random_normal([1, 4, 4, 3], mean=1, stddev=4)
a_G = tf.random_normal([1, 4, 4, 3], mean=1, stddev=4)
J_content = compute_content_cost(a_C, a_G)
print("J_content = " + str(J_content.eval()))
###Output
J_content = 6.76559
###Markdown
**Expected Output**: **J_content** 6.76559 What you should remember- The content cost takes a hidden layer activation of the neural network, and measures how different $a^{(C)}$ and $a^{(G)}$ are. - When we minimize the content cost later, this will help make sure $G$ has similar content as $C$. 3.2 - Computing the style costFor our running example, we will use the following style image:
###Code
style_image = scipy.misc.imread("images/monet_800600.jpg")
imshow(style_image);
###Output
_____no_output_____
###Markdown
This was painted in the style of *[impressionism](https://en.wikipedia.org/wiki/Impressionism)*. Let's see how you can now define a "style" cost function $J_{style}(S,G)$. 3.2.1 - Style matrix Gram matrix* The style matrix is also called a "Gram matrix." * In linear algebra, the Gram matrix G of a set of vectors $(v_{1},\dots ,v_{n})$ is the matrix of dot products, whose entries are ${\displaystyle G_{ij} = v_{i}^T v_{j} = np.dot(v_{i}, v_{j}) }$. * In other words, $G_{ij}$ compares how similar $v_i$ is to $v_j$: If they are highly similar, you would expect them to have a large dot product, and thus for $G_{ij}$ to be large. Two meanings of the variable $G$* Note that there is an unfortunate collision in the variable names used here. We are following common terminology used in the literature. * $G$ is used to denote the Style matrix (or Gram matrix) * $G$ also denotes the generated image. * For this assignment, we will use $G_{gram}$ to refer to the Gram matrix, and $G$ to denote the generated image. Compute $G_{gram}$In Neural Style Transfer (NST), you can compute the Style matrix by multiplying the "unrolled" filter matrix with its transpose:$$\mathbf{G}_{gram} = \mathbf{A}_{unrolled} \mathbf{A}_{unrolled}^T$$ $G_{(gram)i,j}$: correlationThe result is a matrix of dimension $(n_C,n_C)$ where $n_C$ is the number of filters (channels). The value $G_{(gram)i,j}$ measures how similar the activations of filter $i$ are to the activations of filter $j$. $G_{(gram),i,i}$: prevalence of patterns or textures* The diagonal elements $G_{(gram)ii}$ measure how "active" a filter $i$ is. * For example, suppose filter $i$ is detecting vertical textures in the image. Then $G_{(gram)ii}$ measures how common vertical textures are in the image as a whole.* If $G_{(gram)ii}$ is large, this means that the image has a lot of vertical texture. By capturing the prevalence of different types of features ($G_{(gram)ii}$), as well as how much different features occur together ($G_{(gram)ij}$), the Style matrix $G_{gram}$ measures the style of an image. **Exercise**:* Using TensorFlow, implement a function that computes the Gram matrix of a matrix A. * The formula is: The gram matrix of A is $G_A = AA^T$. * You may use these functions: [matmul](https://www.tensorflow.org/api_docs/python/tf/matmul) and [transpose](https://www.tensorflow.org/api_docs/python/tf/transpose).
###Code
# GRADED FUNCTION: gram_matrix
def gram_matrix(A):
"""
Argument:
A -- matrix of shape (n_C, n_H*n_W)
Returns:
GA -- Gram matrix of A, of shape (n_C, n_C)
"""
### START CODE HERE ### (≈1 line)
GA = tf.matmul(A, A, transpose_b=True)
### END CODE HERE ###
return GA
tf.reset_default_graph()
with tf.Session() as test:
tf.set_random_seed(1)
A = tf.random_normal([3, 2*1], mean=1, stddev=4)
GA = gram_matrix(A)
print("GA = \n" + str(GA.eval()))
###Output
GA =
[[ 6.42230511 -4.42912197 -2.09668207]
[ -4.42912197 19.46583748 19.56387138]
[ -2.09668207 19.56387138 20.6864624 ]]
###Markdown
**Expected Output**: **GA** [[ 6.42230511 -4.42912197 -2.09668207] [ -4.42912197 19.46583748 19.56387138] [ -2.09668207 19.56387138 20.6864624 ]] 3.2.2 - Style cost Your goal will be to minimize the distance between the Gram matrix of the "style" image S and the gram matrix of the "generated" image G. * For now, we are using only a single hidden layer $a^{[l]}$. * The corresponding style cost for this layer is defined as: $$J_{style}^{[l]}(S,G) = \frac{1}{4 \times {n_C}^2 \times (n_H \times n_W)^2} \sum _{i=1}^{n_C}\sum_{j=1}^{n_C}(G^{(S)}_{(gram)i,j} - G^{(G)}_{(gram)i,j})^2\tag{2} $$* $G_{gram}^{(S)}$ Gram matrix of the "style" image.* $G_{gram}^{(G)}$ Gram matrix of the "generated" image.* Remember, this cost is computed using the hidden layer activations for a particular hidden layer in the network $a^{[l]}$ **Exercise**: Compute the style cost for a single layer. **Instructions**: The 3 steps to implement this function are:1. Retrieve dimensions from the hidden layer activations a_G: - To retrieve dimensions from a tensor X, use: `X.get_shape().as_list()`2. Unroll the hidden layer activations a_S and a_G into 2D matrices, as explained in the picture above (see the images in the sections "computing the content cost" and "style matrix"). - You may use [tf.transpose](https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/transpose) and [tf.reshape](https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/reshape).3. Compute the Style matrix of the images S and G. (Use the function you had previously written.) 4. Compute the Style cost: - You may find [tf.reduce_sum](https://www.tensorflow.org/api_docs/python/tf/reduce_sum), [tf.square](https://www.tensorflow.org/api_docs/python/tf/square) and [tf.subtract](https://www.tensorflow.org/api_docs/python/tf/subtract) useful. Additional Hints* Since the activation dimensions are $(m, n_H, n_W, n_C)$ whereas the desired unrolled matrix shape is $(n_C, n_H*n_W)$, the order of the filter dimension $n_C$ is changed. So `tf.transpose` can be used to change the order of the filter dimension.* for the product $\mathbf{G}_{gram} = \mathbf{A}_{} \mathbf{A}_{}^T$, you will also need to specify the `perm` parameter for the `tf.transpose` function.
###Code
# GRADED FUNCTION: compute_layer_style_cost
def compute_layer_style_cost(a_S, a_G):
"""
Arguments:
a_S -- tensor of dimension (1, n_H, n_W, n_C), hidden layer activations representing style of the image S
a_G -- tensor of dimension (1, n_H, n_W, n_C), hidden layer activations representing style of the image G
Returns:
J_style_layer -- tensor representing a scalar value, style cost defined above by equation (2)
"""
### START CODE HERE ###
# Retrieve dimensions from a_G (≈1 line)
m, n_H, n_W, n_C = a_G.get_shape().as_list()
# Reshape the images to have them of shape (n_C, n_H*n_W) (≈2 lines)
a_S = tf.transpose(tf.reshape(a_S, shape=[-1, n_C]))
a_G = tf.transpose(tf.reshape(a_G, shape=[-1, n_C]))
# Computing gram_matrices for both images S and G (≈2 lines)
GS = gram_matrix(a_S)
GG = gram_matrix(a_G)
# Computing the loss (≈1 line)
J_style_layer = 1 / (2*n_C*n_H*n_W)**2 * tf.reduce_sum((GS-GG)**2)
### END CODE HERE ###
return J_style_layer
tf.reset_default_graph()
with tf.Session() as test:
tf.set_random_seed(1)
a_S = tf.random_normal([1, 4, 4, 3], mean=1, stddev=4)
a_G = tf.random_normal([1, 4, 4, 3], mean=1, stddev=4)
J_style_layer = compute_layer_style_cost(a_S, a_G)
print("J_style_layer = " + str(J_style_layer.eval()))
###Output
J_style_layer = 9.19028
###Markdown
**Expected Output**: **J_style_layer** 9.19028 3.2.3 Style Weights* So far you have captured the style from only one layer. * We'll get better results if we "merge" style costs from several different layers. * Each layer will be given weights ($\lambda^{[l]}$) that reflect how much each layer will contribute to the style.* After completing this exercise, feel free to come back and experiment with different weights to see how it changes the generated image $G$.* By default, we'll give each layer equal weight, and the weights add up to 1. ($\sum_{l}^L\lambda^{[l]} = 1$)
###Code
STYLE_LAYERS = [
('conv1_1', 0.2),
('conv2_1', 0.2),
('conv3_1', 0.2),
('conv4_1', 0.2),
('conv5_1', 0.2)]
###Output
_____no_output_____
###Markdown
You can combine the style costs for different layers as follows:$$J_{style}(S,G) = \sum_{l} \lambda^{[l]} J^{[l]}_{style}(S,G)$$where the values for $\lambda^{[l]}$ are given in `STYLE_LAYERS`. Exercise: compute style cost* We've implemented a compute_style_cost(...) function. * It calls your `compute_layer_style_cost(...)` several times, and weights their results using the values in `STYLE_LAYERS`. * Please read over it to make sure you understand what it's doing. Description of `compute_style_cost`For each layer:* Select the activation (the output tensor) of the current layer.* Get the style of the style image "S" from the current layer.* Get the style of the generated image "G" from the current layer.* Compute the "style cost" for the current layer* Add the weighted style cost to the overall style cost (J_style)Once you're done with the loop: * Return the overall style cost.
###Code
def compute_style_cost(model, STYLE_LAYERS):
"""
Computes the overall style cost from several chosen layers
Arguments:
model -- our tensorflow model
STYLE_LAYERS -- A python list containing:
- the names of the layers we would like to extract style from
- a coefficient for each of them
Returns:
J_style -- tensor representing a scalar value, style cost defined above by equation (2)
"""
# initialize the overall style cost
J_style = 0
for layer_name, coeff in STYLE_LAYERS:
# Select the output tensor of the currently selected layer
out = model[layer_name]
# Set a_S to be the hidden layer activation from the layer we have selected, by running the session on out
a_S = sess.run(out)
# Set a_G to be the hidden layer activation from same layer. Here, a_G references model[layer_name]
# and isn't evaluated yet. Later in the code, we'll assign the image G as the model input, so that
# when we run the session, this will be the activations drawn from the appropriate layer, with G as input.
a_G = out
# Compute style_cost for the current layer
J_style_layer = compute_layer_style_cost(a_S, a_G)
# Add coeff * J_style_layer of this layer to overall style cost
J_style += coeff * J_style_layer
return J_style
###Output
_____no_output_____
###Markdown
**Note**: In the for-loop above, `a_G` is a tensor and hasn't been evaluated yet. It will be evaluated and updated at each iteration when we run the TensorFlow graph in model_nn() below.<!-- How do you choose the coefficients for each layer? The deeper layers capture higher-level concepts, and the features in the deeper layers are less localized in the image relative to each other. So if you want the generated image to softly follow the style image, try choosing larger weights for deeper layers and smaller weights for the first layers. In contrast, if you want the generated image to strongly follow the style image, try choosing smaller weights for deeper layers and larger weights for the first layers!--> What you should remember- The style of an image can be represented using the Gram matrix of a hidden layer's activations. - We get even better results by combining this representation from multiple different layers. - This is in contrast to the content representation, where usually using just a single hidden layer is sufficient.- Minimizing the style cost will cause the image $G$ to follow the style of the image $S$. 3.3 - Defining the total cost to optimize Finally, let's create a cost function that combines the style and the content cost; minimizing it trades off between the two. The formula is: $$J(G) = \alpha J_{content}(C,G) + \beta J_{style}(S,G)$$**Exercise**: Implement the total cost function which includes both the content cost and the style cost.
###Code
# GRADED FUNCTION: total_cost
def total_cost(J_content, J_style, alpha = 10, beta = 40):
"""
Computes the total cost function
Arguments:
J_content -- content cost coded above
J_style -- style cost coded above
alpha -- hyperparameter weighting the importance of the content cost
beta -- hyperparameter weighting the importance of the style cost
Returns:
J -- total cost as defined by the formula above.
"""
### START CODE HERE ### (≈1 line)
J = alpha*J_content + beta*J_style
### END CODE HERE ###
return J
tf.reset_default_graph()
with tf.Session() as test:
np.random.seed(3)
J_content = np.random.randn()
J_style = np.random.randn()
J = total_cost(J_content, J_style)
print("J = " + str(J))
###Output
J = 35.34667875478276
###Markdown
**Expected Output**: **J** 35.34667875478276 What you should remember- The total cost is a linear combination of the content cost $J_{content}(C,G)$ and the style cost $J_{style}(S,G)$.- $\alpha$ and $\beta$ are hyperparameters that control the relative weighting between content and style. 4 - Solving the optimization problem Finally, let's put everything together to implement Neural Style Transfer!Here's what the program will have to do:1. Create an Interactive Session2. Load the content image 3. Load the style image4. Randomly initialize the image to be generated 5. Load the VGG19 model6. Build the TensorFlow graph: - Run the content image through the VGG19 model and compute the content cost - Run the style image through the VGG19 model and compute the style cost - Compute the total cost - Define the optimizer and the learning rate7. Initialize the TensorFlow graph and run it for a large number of iterations, updating the generated image at every step.Let's go through the individual steps in detail. Interactive SessionsYou've previously implemented the overall cost $J(G)$. We'll now set up TensorFlow to optimize this with respect to $G$. * To do so, your program has to reset the graph and use an "[Interactive Session](https://www.tensorflow.org/api_docs/python/tf/InteractiveSession)". * Unlike a regular session, the "Interactive Session" installs itself as the default session to build a graph. * This allows you to run variables without constantly needing to refer to the session object (calling "sess.run()"), which simplifies the code. Start the interactive session.
###Code
# Reset the graph
tf.reset_default_graph()
# Start interactive session
sess = tf.InteractiveSession()
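# Illustration (added): with an InteractiveSession installed as the default,
# a tensor can be evaluated directly, e.g. tf.constant(3.0).eval(),
# instead of the more verbose sess.run(tf.constant(3.0)).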
###Output
_____no_output_____
###Markdown
Content imageLet's load, reshape, and normalize our "content" image (the Louvre museum picture):
###Code
content_image = scipy.misc.imread("images/louvre_small.jpg")
content_image = reshape_and_normalize_image(content_image)
###Output
_____no_output_____
###Markdown
Style imageLet's load, reshape and normalize our "style" image (Claude Monet's painting):
###Code
style_image = scipy.misc.imread("images/monet.jpg")
style_image = reshape_and_normalize_image(style_image)
###Output
_____no_output_____
###Markdown
Generated image correlated with content imageNow, we initialize the "generated" image as a noisy image created from the content_image.* The generated image is slightly correlated with the content image.* By initializing the pixels of the generated image to be mostly noise but slightly correlated with the content image, we help the content of the "generated" image more rapidly match the content of the "content" image. A rough sketch of this blend is shown after the next cell.* Feel free to look in `nst_utils.py` to see the details of `generate_noise_image(...)`; to do so, click "File-->Open..." at the upper-left corner of this Jupyter notebook.
###Code
generated_image = generate_noise_image(content_image)
imshow(generated_image[0]);
###Output
_____no_output_____
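###Markdown
As a rough sketch only (an assumption for illustration, not the actual code in `nst_utils.py`), `generate_noise_image` blends uniform noise with the content image; the `noise_ratio` name and the 0.6 value below are illustrative guesses:
###Code
# Hypothetical sketch of the kind of blend generate_noise_image performs
import numpy as np
noise_ratio = 0.6  # assumed blending weight
noise = np.random.uniform(-20., 20., content_image.shape).astype('float32')
blended_example = noise * noise_ratio + content_image * (1 - noise_ratio)
###Output
_____no_output_____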
###Markdown
Load pre-trained VGG19 modelNext, as explained in part (2), let's load the VGG19 model.
###Code
model = load_vgg_model("pretrained-model/imagenet-vgg-verydeep-19.mat")
###Output
_____no_output_____
###Markdown
Content CostTo get the program to compute the content cost, we will now assign `a_C` and `a_G` to be the appropriate hidden layer activations. We will use layer `conv4_2` to compute the content cost. The code below does the following:1. Assign the content image to be the input to the VGG model.2. Set a_C to be the tensor giving the hidden layer activation for layer "conv4_2".3. Set a_G to be the tensor giving the hidden layer activation for the same layer. 4. Compute the content cost using a_C and a_G.**Note**: At this point, a_G is a tensor and hasn't been evaluated. It will be evaluated and updated at each iteration when we run the Tensorflow graph in model_nn() below.
###Code
# Assign the content image to be the input of the VGG model.
sess.run(model['input'].assign(content_image))
# Select the output tensor of layer conv4_2
out = model['conv4_2']
# Set a_C to be the hidden layer activation from the layer we have selected
a_C = sess.run(out)
# Set a_G to be the hidden layer activation from same layer. Here, a_G references model['conv4_2']
# and isn't evaluated yet. Later in the code, we'll assign the image G as the model input, so that
# when we run the session, this will be the activations drawn from the appropriate layer, with G as input.
a_G = out
# Compute the content cost
J_content = compute_content_cost(a_C, a_G)
###Output
_____no_output_____
###Markdown
Style cost
###Code
# Assign the input of the model to be the "style" image
sess.run(model['input'].assign(style_image))
# Compute the style cost
J_style = compute_style_cost(model, STYLE_LAYERS)
###Output
_____no_output_____
###Markdown
Exercise: total cost* Now that you have J_content and J_style, compute the total cost J by calling `total_cost()`. * Use `alpha = 10` and `beta = 40`.
###Code
### START CODE HERE ### (1 line)
J = total_cost(J_content, J_style, alpha=10, beta=40)
### END CODE HERE ###
###Output
_____no_output_____
###Markdown
Optimizer* Use the Adam optimizer to minimize the total cost `J`.* Use a learning rate of 2.0. * [Adam Optimizer documentation](https://www.tensorflow.org/api_docs/python/tf/train/AdamOptimizer)
###Code
# define optimizer (1 line)
optimizer = tf.train.AdamOptimizer(2.0)
# define train_step (1 line)
train_step = optimizer.minimize(J)
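# Note (added): assuming load_vgg_model builds the VGG weights as constants
# (as in the course's nst_utils), the generated image model['input'] is the only
# trainable variable, so Adam updates the image pixels rather than the network.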
###Output
_____no_output_____
###Markdown
Exercise: implement the model* Implement the model_nn() function. * The function **initializes** the variables of the tensorflow graph, * **assigns** the input image (initial generated image) as the input of the VGG19 model * and **runs** the `train_step` tensor (it was created in the code above this function) for a large number of steps. Hints* To initialize global variables, use this: ```Pythonsess.run(tf.global_variables_initializer())```* Run `sess.run()` to evaluate a variable.* [assign](https://www.tensorflow.org/versions/r1.14/api_docs/python/tf/assign) can be used like this:```pythonmodel["input"].assign(image)```
###Code
def model_nn(sess, input_image, num_iterations = 200):
# Initialize global variables (you need to run the session on the initializer)
### START CODE HERE ### (1 line)
sess.run(tf.global_variables_initializer())
### END CODE HERE ###
# Run the noisy input image (initial generated image) through the model. Use assign().
### START CODE HERE ### (1 line)
sess.run(model['input'].assign(input_image))
### END CODE HERE ###
for i in range(num_iterations):
# Run the session on the train_step to minimize the total cost
### START CODE HERE ### (1 line)
sess.run(train_step)
### END CODE HERE ###
# Compute the generated image by running the session on the current model['input']
### START CODE HERE ### (1 line)
generated_image = sess.run(model['input'])
### END CODE HERE ###
# Print every 20 iterations.
if i%20 == 0:
Jt, Jc, Js = sess.run([J, J_content, J_style])
print("Iteration " + str(i) + " :")
print("total cost = " + str(Jt))
print("content cost = " + str(Jc))
print("style cost = " + str(Js))
# save current generated image in the "/output" directory
save_image("output/" + str(i) + ".png", generated_image)
# save last generated image
save_image('output/generated_image.jpg', generated_image)
return generated_image
###Output
_____no_output_____
###Markdown
Run the following cell to generate an artistic image. It should take about 3min on CPU for every 20 iterations but you start observing attractive results after ≈140 iterations. Neural Style Transfer is generally trained using GPUs.
###Code
model_nn(sess, generated_image)
###Output
Iteration 0 :
total cost = 5.05035e+09
content cost = 7877.67
style cost = 1.26257e+08
Iteration 20 :
total cost = 9.43258e+08
content cost = 15187.1
style cost = 2.35777e+07
Iteration 40 :
total cost = 4.84866e+08
content cost = 16786.0
style cost = 1.21175e+07
Iteration 60 :
total cost = 3.12535e+08
content cost = 17466.7
style cost = 7.80901e+06
Iteration 80 :
total cost = 2.28121e+08
content cost = 17716.5
style cost = 5.6986e+06
Iteration 100 :
total cost = 1.80686e+08
content cost = 17896.0
style cost = 4.51267e+06
Iteration 120 :
total cost = 1.49982e+08
content cost = 18027.8
style cost = 3.74505e+06
Iteration 140 :
total cost = 1.27758e+08
content cost = 18180.9
style cost = 3.18942e+06
Iteration 160 :
total cost = 1.10779e+08
content cost = 18341.4
style cost = 2.76489e+06
Iteration 180 :
total cost = 9.74276e+07
content cost = 18488.1
style cost = 2.43107e+06
|
Classification_on_Gas_Array_Dataset.ipynb
|
###Markdown
EDA on the Gas Array dataset, taken from the UCI Machine Learning Repository
###Code
import os,sys
from scipy import stats
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.animation as animation
import glob
from mpl_toolkits.mplot3d import Axes3D  # registers the '3d' projection used below (needed on older matplotlib)
files= glob.glob("batch*.dat")
for f in files:
df= pd.read_csv(f, sep="\s+",index_col=0, header=None)
for col in df.columns.values:
df[col]= df[col].apply(lambda x: float(str(x).split(":")[1]))
df= df.rename_axis("Gas").reset_index()
df
df.groupby(["Gas"])
gas_1= df[df["Gas"]==1]
gas_1
#When I did PCA, I found that I need to sort the gas since the length did not match for gas 1
df= df.sort_values(by=["Gas"])
###Output
_____no_output_____
###Markdown
PCA for Dimensionality Reduction
###Code
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
# scale the features only; the "Gas" column is the class label and should not be mixed in
X_scaled = scaler.fit_transform(df.drop(columns=["Gas"]))
X_scaled.shape
#Principle Component Analysis
from sklearn.decomposition import PCA
pca = PCA(n_components=3)
xtrain= pca.fit_transform(X_scaled)
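# Added check: how much variance the three principal components retain
print("Explained variance ratio:", pca.explained_variance_ratio_)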
#Finding out the length of each gas
for i in range(1,7):
print("length for gas "+str(i))
print(df[df["Gas"]==i].shape[0])
%matplotlib inline
fig = plt.figure(figsize=(8,8))
ax = fig.add_subplot(111, projection='3d')
plt.rcParams['legend.fontsize'] = 11
ax.plot(xtrain[0:164,0], xtrain[0:164,1], xtrain[0:164,2], 'o', markersize=3, label='Ethanol')
ax.plot(xtrain[165:(164+334),0], xtrain[165:(164+334),1], xtrain[165:(164+334),2], 'o', markersize=3, label='Ethylene')
ax.plot(xtrain[498:597,0], xtrain[498:597,1], xtrain[498:597,2], 'o', markersize=3, label='Ammonia')
ax.plot(xtrain[598:707,0], xtrain[598:707,1], xtrain[598:707,2], 'o', markersize=2.5, label='Acetaldehyde')
ax.plot(xtrain[708:1239,0], xtrain[708:1239,1], xtrain[708:1239,2], 'o', markersize=2.5, label='Acetone')
ax.plot(xtrain[1230:1235,0], xtrain[1230:1235,1], xtrain[1230:1235,2], 'o', markersize=2.5, label='Toluene')
ax.set_xlabel('PC1')
ax.set_xlim3d(-15, 80)
ax.set_ylabel('PC2')
ax.set_ylim3d(-300, 50)
ax.set_zlabel('PC3')
ax.set_zlim3d(0, 5)
ax.legend(loc='upper right')
plt.show()
fig = plt.figure(figsize=(8,8))
ax = fig.add_subplot(111, projection='3d')
plt.rcParams['legend.fontsize'] = 11
ax.plot(xtrain[0:164,2],xtrain[0:164,0], xtrain[0:164,1], 'o', markersize=3, label='Ethanol')
ax.plot(xtrain[165:(164+334),2],xtrain[165:(164+334),0], xtrain[165:(164+334),1], 'o', markersize=3, label='Ethylene')
ax.plot(xtrain[498:597,2], xtrain[498:597,0], xtrain[498:597,1], 'o', markersize=3, label='Ammonia')
ax.plot(xtrain[598:707,2], xtrain[598:707,0], xtrain[598:707,1], 'o', markersize=2.5, label='Acetaldehyde')
ax.plot(xtrain[708:1239,2], xtrain[708:1239,0], xtrain[708:1239,1], 'o', markersize=2.5, label='Acetone')
ax.plot(xtrain[1230:1235,2], xtrain[1230:1235,0], xtrain[1230:1235,1], 'o', markersize=2.5, label='Toluene')
ax.set_xlabel('PC3')
ax.set_ylabel('PC2')
ax.set_zlabel('PC1')
ax.set_zlim3d(0, 20)
ax.legend(loc='upper right')
plt.show()
###Output
_____no_output_____
###Markdown
Using LDA for Classification
###Code
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA
from sklearn.model_selection import train_test_split
y = np.array(df.iloc[:, 0])
scaler = StandardScaler()
# scale the features only; including the "Gas" label column here would leak the answer to the classifier
X_scaled = scaler.fit_transform(df.drop(columns=["Gas"]))
x_train, x_test, y_train, y_test = train_test_split(X_scaled, y, test_size=.1, random_state=42)
lda = LDA(n_components=3)
np.unique(y_train) #Make sure all of our labels are in the training set
lda_object= lda.fit(x_train, y_train)
print(lda_object.score(x_test, y_test))
# The gases separate almost perfectly on the test set
#Creating confusion matrix
from sklearn.metrics import classification_report, confusion_matrix
import seaborn as sns
y_pred= lda_object.predict(x_test)
cm= confusion_matrix(y_test, y_pred)
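# Added: classification_report (imported above) complements the confusion matrix with per-class precision and recall
print(classification_report(y_test, y_pred))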
ax=plt.subplot()
sns.heatmap(cm, annot=True)
targets= list(range(1,7))
ax.set_xlabel("Predicted Labels")
ax.set_ylabel("True Labels")
ax.set_title("Confusion Matrix from LDA")
ax.xaxis.set_ticklabels(targets)
ax.yaxis.set_ticklabels(targets)
###Output
_____no_output_____
|
examples/raster/extract_load_sample_subset_dask.ipynb
|
###Markdown
Load Sample Subset from Extracted Data with dask
###Code
import fnmatch
import geopandas as gpd
import os
import pandas as pd
from pathlib import Path
from eobox.raster import extraction
from eobox import sampledata
%matplotlib inline
dataset = sampledata.get_dataset("s2l1c")
src_vector = dataset["vector_file"]
burn_attribute = "pid" # should be unique for the polygons and not contain zero
src_raster = fnmatch.filter(dataset["raster_files"], "*B0[2,3,4,8]*") # 10 m bands
dst_names = ["_".join(Path(src).stem.split("_")[1::]) for src in src_raster]
extraction_dir = Path("./xxx_uncontrolled/s2l1c_ref__s2l1c/s2_l1c/10m")
extraction.extract(src_vector=src_vector,
burn_attribute=burn_attribute,
src_raster=src_raster,
dst_names=dst_names,
dst_dir=extraction_dir)
df_extracted = extraction.load_extracted(extraction_dir, "*pid.npy")
print(df_extracted.shape)
display(df_extracted.head())
index_29 = (df_extracted["aux_vector_pid"] == 29)
index_29.sum()
print(df_extracted[index_29].shape)
display(df_extracted[index_29].head(2))
display(df_extracted[index_29].tail(2))
df_extracted_29 = extraction.load_extracted(extraction_dir, index=index_29)
print(df_extracted_29.shape)
df_extracted_29.head()
###Output
(109, 7)
###Markdown
Load with dask - WIP
###Code
npy_path_list = extraction.get_paths_of_extracted(extraction_dir)
%load_ext autoreload
%autoreload 2
ddf = extraction.load_extracted_dask(npy_path_list, index=None)
ddf
ddf_29 = extraction.load_extracted_dask(npy_path_list, index=index_29)
ddf_29
df_extracted_29.columns
ddf_29_df = ddf_29.compute()
ddf_29_df.head()
assert ddf_29_df.shape == df_extracted_29.shape
assert (ddf_29_df.columns == df_extracted_29.columns).all()
###Output
_____no_output_____
###Markdown
Note that the index does not match!
###Code
ddf_29_df.index == df_extracted_29.index
###Output
_____no_output_____
###Markdown
And therefore we cannot compare the dataframes.
###Code
ddf_29_df == df_extracted_29
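# Added note: aligning both frames on position makes them comparable,
# assuming the rows were extracted in the same order:
# (ddf_29_df.reset_index(drop=True) == df_extracted_29.reset_index(drop=True)).all()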
###Output
_____no_output_____
|
Presentations/Intro-to-Jupyter-Notebooks/6.How-Can-I-Convert-SQL-to-Notebooks/How-Can-I-Convert-SQL-to-Notebooks.ipynb
|
###Markdown
➕
➕
= ❤
PowerShell to convert .SQL files into Notebooks First up, install the `PowerShellNotebook` module from the PowerShell Gallery...
###Code
Install-Module PowerShellNotebook -Force
###Output
_____no_output_____
###Markdown
(optional step, for demo purposes)
Make a directory to store some .SQL files
###Code
mkdir c:\temp\SQLFiles
###Output
_____no_output_____
###Markdown
Switch to a folder where you have a lot of `.SQL` files.
###Code
cd c:\temp\SQLFiles
###Output
_____no_output_____
###Markdown
If you don't have any .SQL files handy, download some from GitHub
(use the step below.)
###Code
irm https://gist.githubusercontent.com/MsSQLGirl/799d3613c6b3aba58cb4decbb30da139/raw/433ffdcefcbc4db0e5f5c9b53e1e9bde139f885d/SQLSample_01_ServerProperties.sql > '.\SQLSample_01_ServerProperties.sql'
irm https://gist.githubusercontent.com/MsSQLGirl/799d3613c6b3aba58cb4decbb30da139/raw/433ffdcefcbc4db0e5f5c9b53e1e9bde139f885d/SQLSample_02_WWI.sql > '.\SQLSample_02_WWI.sql'
irm https://gist.githubusercontent.com/MsSQLGirl/799d3613c6b3aba58cb4decbb30da139/raw/433ffdcefcbc4db0e5f5c9b53e1e9bde139f885d/SQLSample_03_StringDynamics.sql > '.\SQLSample_03_StringDynamics.sql'
irm https://gist.githubusercontent.com/MsSQLGirl/799d3613c6b3aba58cb4decbb30da139/raw/433ffdcefcbc4db0e5f5c9b53e1e9bde139f885d/SQLSample_04_VariableBatchConundrum.sql > '.\SQLSample_04_VariableBatchConundrum.sql'
irm https://gist.githubusercontent.com/vickyharp/d188b5ab2ceec12896b4a514ea52e5b6/raw/f2e4b1bc4d6a2fb293aebb9989129bd722d6a25e/AdventureWorksAddress.sql > '.\AdventureWorksAddress.sql'
irm https://gist.githubusercontent.com/vickyharp/6c254d63d3de9850b20b5861b061b5f5/raw/0ff7d7c5da9f216fb7534994c8be60fe0e7efaf3/AdventureWorksMultiStatementSBatch.sql > '.\AdventureWorksMultiStatementSBatch.sql'
irm https://raw.githubusercontent.com/microsoft/tigertoolbox/master/BPCheck/Check_BP_Servers.sql > '.\Check_BP_Servers.sql'
###Output
_____no_output_____
###Markdown
Here's the part where it gets good!
Now use `dir` to loop over all the .SQL files in the directory, and use the `ConvertTo-SQLNoteBook` function to turn them into SQL Notebooks.
###Code
dir -Filter *.SQL |
foreach {
ConvertTo-SQLNoteBook -InputFileName $_.FullName -OutputNotebookName (Join-Path -Path (Split-Path -Path $_.FullName -Parent) -ChildPath ($_.Name -replace '.sql', '.ipynb'))
}
###Output
_____no_output_____
###Markdown
Check inside that same directory, and you should now see a bunch of .IPYNB files.
###Code
dir -Filter *.ipynb
###Output
|
Python/Traditional algrothims/Trees/Lightgbm/lightgbm_cheatsheet.ipynb
|
###Markdown
LightGBM usage cheat sheet by 寒小阳 1. Read CSV data and train a model with specified parameters **by 寒小阳**
###Code
# coding: utf-8
import json
import lightgbm as lgb
import pandas as pd
from sklearn.metrics import mean_squared_error
# load the datasets
print('Load data...')
df_train = pd.read_csv('./data/regression.train.txt', header=None, sep='\t')
df_test = pd.read_csv('./data/regression.test.txt', header=None, sep='\t')
# set up the training and test sets
y_train = df_train[0].values
y_test = df_test[0].values
X_train = df_train.drop(0, axis=1).values
X_test = df_test.drop(0, axis=1).values
# build LightGBM Dataset objects
lgb_train = lgb.Dataset(X_train, y_train)
lgb_eval = lgb.Dataset(X_test, y_test, reference=lgb_train)
# settle on a set of parameters
params = {
'task': 'train',
'boosting_type': 'gbdt',
'objective': 'regression',
'metric': {'l2', 'auc'},
'num_leaves': 31,
'learning_rate': 0.05,
'feature_fraction': 0.9,
'bagging_fraction': 0.8,
'bagging_freq': 5,
'verbose': 0
}
print('Start training...')
# train
gbm = lgb.train(params,
lgb_train,
num_boost_round=20,
valid_sets=lgb_eval,
early_stopping_rounds=5)
# save the model
print('Saving model...')
# save the model to a file
gbm.save_model('model.txt')
print('Start predicting...')
# predict
y_pred = gbm.predict(X_test, num_iteration=gbm.best_iteration)
# evaluate
print('The RMSE of prediction is:')
print(mean_squared_error(y_test, y_pred) ** 0.5)
###Output
Load data...
Start training...
[1] valid_0's l2: 0.24288 valid_0's auc: 0.764496
Training until validation scores don't improve for 5 rounds.
[2] valid_0's l2: 0.239307 valid_0's auc: 0.766173
[3] valid_0's l2: 0.235559 valid_0's auc: 0.785547
[4] valid_0's l2: 0.230771 valid_0's auc: 0.797786
[5] valid_0's l2: 0.226297 valid_0's auc: 0.805155
[6] valid_0's l2: 0.223692 valid_0's auc: 0.800979
[7] valid_0's l2: 0.220941 valid_0's auc: 0.806566
[8] valid_0's l2: 0.217982 valid_0's auc: 0.808566
[9] valid_0's l2: 0.215351 valid_0's auc: 0.809041
[10] valid_0's l2: 0.213064 valid_0's auc: 0.805953
[11] valid_0's l2: 0.211053 valid_0's auc: 0.804631
[12] valid_0's l2: 0.209336 valid_0's auc: 0.802922
[13] valid_0's l2: 0.207492 valid_0's auc: 0.802011
[14] valid_0's l2: 0.206016 valid_0's auc: 0.80193
Early stopping, best iteration is:
[9] valid_0's l2: 0.215351 valid_0's auc: 0.809041
Saving model...
Start predicting...
The RMSE of prediction is:
0.4640593794679212
###Markdown
2. Training with sample weights **by 寒小阳**
###Code
# coding: utf-8
import json
import lightgbm as lgb
import pandas as pd
import numpy as np
from sklearn.metrics import mean_squared_error
import warnings
warnings.filterwarnings("ignore")
# load the datasets
print('Loading data...')
df_train = pd.read_csv('./data/binary.train', header=None, sep='\t')
df_test = pd.read_csv('./data/binary.test', header=None, sep='\t')
W_train = pd.read_csv('./data/binary.train.weight', header=None)[0]
W_test = pd.read_csv('./data/binary.test.weight', header=None)[0]
y_train = df_train[0].values
y_test = df_test[0].values
X_train = df_train.drop(0, axis=1).values
X_test = df_test.drop(0, axis=1).values
num_train, num_feature = X_train.shape
# pass the sample weights when loading the data
lgb_train = lgb.Dataset(X_train, y_train,
weight=W_train, free_raw_data=False)
lgb_eval = lgb.Dataset(X_test, y_test, reference=lgb_train,
weight=W_test, free_raw_data=False)
# set the parameters
params = {
'boosting_type': 'gbdt',
'objective': 'binary',
'metric': 'binary_logloss',
'num_leaves': 31,
'learning_rate': 0.05,
'feature_fraction': 0.9,
'bagging_fraction': 0.8,
'bagging_freq': 5,
'verbose': 0
}
# generate feature names
feature_name = ['feature_' + str(col) for col in range(num_feature)]
print('Start training...')
gbm = lgb.train(params,
lgb_train,
num_boost_round=10,
valid_sets=lgb_train, # evaluate on the training set
feature_name=feature_name,
categorical_feature=[21])
###Output
Loading data...
Start training...
[1] training's binary_logloss: 0.68205
[2] training's binary_logloss: 0.673618
[3] training's binary_logloss: 0.665891
[4] training's binary_logloss: 0.656874
[5] training's binary_logloss: 0.648523
[6] training's binary_logloss: 0.641874
[7] training's binary_logloss: 0.636029
[8] training's binary_logloss: 0.629427
[9] training's binary_logloss: 0.623354
[10] training's binary_logloss: 0.617593
###Markdown
3. Model loading and prediction **by 寒小阳**
###Code
# check the feature names
print('Finished 10 rounds of training...')
print('The 7th feature is:')
print(repr(lgb_train.feature_name[6]))
# save the model
gbm.save_model('./model/lgb_model.txt')
# feature names
print('Feature names:')
print(gbm.feature_name())
# feature importances
print('Feature importances:')
print(list(gbm.feature_importance()))
# load the model
print('Loading model for prediction')
bst = lgb.Booster(model_file='./model/lgb_model.txt')
# predict
y_pred = bst.predict(X_test)
# evaluate on the test set
print('The RMSE on the test set is:')
print(mean_squared_error(y_test, y_pred) ** 0.5)
###Output
Finished 10 rounds of training...
The 7th feature is:
'feature_6'
Feature names:
[u'feature_0', u'feature_1', u'feature_2', u'feature_3', u'feature_4', u'feature_5', u'feature_6', u'feature_7', u'feature_8', u'feature_9', u'feature_10', u'feature_11', u'feature_12', u'feature_13', u'feature_14', u'feature_15', u'feature_16', u'feature_17', u'feature_18', u'feature_19', u'feature_20', u'feature_21', u'feature_22', u'feature_23', u'feature_24', u'feature_25', u'feature_26', u'feature_27']
Feature importances:
[8, 5, 1, 19, 7, 33, 2, 0, 2, 10, 5, 2, 0, 9, 3, 3, 0, 2, 2, 5, 1, 0, 36, 3, 33, 45, 29, 35]
Loading model for prediction
The RMSE on the test set is:
0.4629245607636925
###Markdown
4. Continue training from a previous model **by 寒小阳**
###Code
# continue training
# initialize from the model saved in ./model/lgb_model.txt
gbm = lgb.train(params,
lgb_train,
num_boost_round=10,
init_model='./model/lgb_model.txt',
valid_sets=lgb_eval)
print('Initialized from the old model, finished rounds 10-20...')
# adjust hyperparameters during training
# here, for example, the learning rate is decayed step by step
gbm = lgb.train(params,
lgb_train,
num_boost_round=10,
init_model=gbm,
learning_rates=lambda iter: 0.05 * (0.99 ** iter),
valid_sets=lgb_eval)
print('Gradually tuned the learning rate, finished rounds 20-30...')
# adjust other hyperparameters
gbm = lgb.train(params,
lgb_train,
num_boost_round=10,
init_model=gbm,
valid_sets=lgb_eval,
callbacks=[lgb.reset_parameter(bagging_fraction=[0.7] * 5 + [0.6] * 5)])
print('Gradually tuned the bagging fraction, finished rounds 30-40...')
###Output
[11] valid_0's binary_logloss: 0.616177
[12] valid_0's binary_logloss: 0.611792
[13] valid_0's binary_logloss: 0.607043
[14] valid_0's binary_logloss: 0.602314
[15] valid_0's binary_logloss: 0.598433
[16] valid_0's binary_logloss: 0.595238
[17] valid_0's binary_logloss: 0.592047
[18] valid_0's binary_logloss: 0.588673
[19] valid_0's binary_logloss: 0.586084
[20] valid_0's binary_logloss: 0.584033
Initialized from the old model, finished rounds 10-20...
[21] valid_0's binary_logloss: 0.616177
[22] valid_0's binary_logloss: 0.611834
[23] valid_0's binary_logloss: 0.607177
[24] valid_0's binary_logloss: 0.602577
[25] valid_0's binary_logloss: 0.59831
[26] valid_0's binary_logloss: 0.595259
[27] valid_0's binary_logloss: 0.592201
[28] valid_0's binary_logloss: 0.589017
[29] valid_0's binary_logloss: 0.586597
[30] valid_0's binary_logloss: 0.584454
Gradually tuned the learning rate, finished rounds 20-30...
[31] valid_0's binary_logloss: 0.616053
[32] valid_0's binary_logloss: 0.612291
[33] valid_0's binary_logloss: 0.60856
[34] valid_0's binary_logloss: 0.605387
[35] valid_0's binary_logloss: 0.601744
[36] valid_0's binary_logloss: 0.598556
[37] valid_0's binary_logloss: 0.595585
[38] valid_0's binary_logloss: 0.593228
[39] valid_0's binary_logloss: 0.59018
[40] valid_0's binary_logloss: 0.588391
Gradually tuned the bagging fraction, finished rounds 30-40...
###Markdown
5. Custom loss function **by 寒小阳**
###Code
# similar to the interface in xgboost
# a custom objective must return the gradient and the hessian
def loglikelood(preds, train_data):
labels = train_data.get_label()
preds = 1. / (1. + np.exp(-preds))  # sigmoid maps raw scores to probabilities p
grad = preds - labels  # first derivative of the log loss w.r.t. the raw score: p - y
hess = preds * (1. - preds)  # second derivative: p * (1 - p)
return grad, hess
# custom evaluation function
def binary_error(preds, train_data):
labels = train_data.get_label()
return 'error', np.mean(labels != (preds > 0.5)), False
gbm = lgb.train(params,
lgb_train,
num_boost_round=10,
init_model=gbm,
fobj=loglikelood,
feval=binary_error,
valid_sets=lgb_eval)
print('Finished rounds 40-50 with the custom loss function and evaluation metric...')
###Output
[41] valid_0's binary_logloss: 0.614429 valid_0's error: 0.268
[42] valid_0's binary_logloss: 0.610689 valid_0's error: 0.26
[43] valid_0's binary_logloss: 0.606267 valid_0's error: 0.264
[44] valid_0's binary_logloss: 0.601949 valid_0's error: 0.258
[45] valid_0's binary_logloss: 0.597271 valid_0's error: 0.266
[46] valid_0's binary_logloss: 0.593971 valid_0's error: 0.276
[47] valid_0's binary_logloss: 0.591427 valid_0's error: 0.278
[48] valid_0's binary_logloss: 0.588301 valid_0's error: 0.284
[49] valid_0's binary_logloss: 0.586562 valid_0's error: 0.288
[50] valid_0's binary_logloss: 0.584056 valid_0's error: 0.288
Finished rounds 40-50 with the custom loss function and evaluation metric...
###Markdown
Using LightGBM together with sklearn 1. Modeling with LightGBM, evaluating with sklearn **by 寒小阳**
###Code
# coding: utf-8
import lightgbm as lgb
import pandas as pd
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import GridSearchCV
# load the data
print('Loading data...')
df_train = pd.read_csv('./data/regression.train.txt', header=None, sep='\t')
df_test = pd.read_csv('./data/regression.test.txt', header=None, sep='\t')
# extract the features and labels
y_train = df_train[0].values
y_test = df_test[0].values
X_train = df_train.drop(0, axis=1).values
X_test = df_test.drop(0, axis=1).values
print('Start training...')
# initialize an LGBMRegressor directly
# this LightGBM regressor works much like any other sklearn regressor
gbm = lgb.LGBMRegressor(objective='regression',
num_leaves=31,
learning_rate=0.05,
n_estimators=20)
# fit the model with the fit function
gbm.fit(X_train, y_train,
eval_set=[(X_test, y_test)],
eval_metric='l1',
early_stopping_rounds=5)
# predict
print('Start predicting...')
y_pred = gbm.predict(X_test, num_iteration=gbm.best_iteration_)
# evaluate the predictions
print('The RMSE of prediction is:')
print(mean_squared_error(y_test, y_pred) ** 0.5)
###Output
Loading data...
Start training...
[1] valid_0's l1: 0.491735
Training until validation scores don't improve for 5 rounds.
[2] valid_0's l1: 0.486563
[3] valid_0's l1: 0.481489
[4] valid_0's l1: 0.476848
[5] valid_0's l1: 0.47305
[6] valid_0's l1: 0.469049
[7] valid_0's l1: 0.465556
[8] valid_0's l1: 0.462208
[9] valid_0's l1: 0.458676
[10] valid_0's l1: 0.454998
[11] valid_0's l1: 0.452047
[12] valid_0's l1: 0.449158
[13] valid_0's l1: 0.44608
[14] valid_0's l1: 0.443554
[15] valid_0's l1: 0.440643
[16] valid_0's l1: 0.437687
[17] valid_0's l1: 0.435454
[18] valid_0's l1: 0.433288
[19] valid_0's l1: 0.431297
[20] valid_0's l1: 0.428946
Did not meet early stopping. Best iteration is:
[20] valid_0's l1: 0.428946
开始预测...
预测结果的rmse是:
0.4441153344254208
###Markdown
2. Grid search for the best hyperparameters **by 寒小阳**
###Code
# use scikit-learn's cross-validated grid search to pick the best hyperparameters
estimator = lgb.LGBMRegressor(num_leaves=31)
param_grid = {
'learning_rate': [0.01, 0.1, 1],
'n_estimators': [20, 40]
}
gbm = GridSearchCV(estimator, param_grid)
gbm.fit(X_train, y_train)
print('The best hyperparameters found by grid search are:')
print(gbm.best_params_)
###Output
The best hyperparameters found by grid search are:
{'n_estimators': 40, 'learning_rate': 0.1}
###Markdown
3. Plotting and interpretation **by 寒小阳**
###Code
# coding: utf-8
import lightgbm as lgb
import pandas as pd
try:
import matplotlib.pyplot as plt
except ImportError:
raise ImportError('You need to install matplotlib for plotting.')
# load the datasets
print('Loading data...')
df_train = pd.read_csv('./data/regression.train.txt', header=None, sep='\t')
df_test = pd.read_csv('./data/regression.test.txt', header=None, sep='\t')
# extract the features and labels
y_train = df_train[0].values
y_test = df_test[0].values
X_train = df_train.drop(0, axis=1).values
X_test = df_test.drop(0, axis=1).values
# build LightGBM Dataset objects
lgb_train = lgb.Dataset(X_train, y_train)
lgb_test = lgb.Dataset(X_test, y_test, reference=lgb_train)
# set the parameters
params = {
'num_leaves': 5,
'metric': ('l1', 'l2'),
'verbose': 0
}
evals_result = {} # to record eval results for plotting
print('Start training...')
# train
gbm = lgb.train(params,
lgb_train,
num_boost_round=100,
valid_sets=[lgb_train, lgb_test],
feature_name=['f' + str(i + 1) for i in range(28)],
categorical_feature=[21],
evals_result=evals_result,
verbose_eval=10)
print('Plotting metrics during training...')
ax = lgb.plot_metric(evals_result, metric='l1')
plt.show()
print('Plotting feature importances...')
ax = lgb.plot_importance(gbm, max_num_features=10)
plt.show()
print('Plotting the 84th tree...')
ax = lgb.plot_tree(gbm, tree_index=83, figsize=(20, 8), show_info=['split_gain'])
plt.show()
#print('Plotting the 84th tree with graphviz...')
#graph = lgb.create_tree_digraph(gbm, tree_index=83, name='Tree84')
#graph.render(view=True)
###Output
Loading data...
Start training...
[10] training's l2: 0.217995 training's l1: 0.457448 valid_1's l2: 0.21641 valid_1's l1: 0.456464
[20] training's l2: 0.205099 training's l1: 0.436869 valid_1's l2: 0.201616 valid_1's l1: 0.434057
[30] training's l2: 0.197421 training's l1: 0.421302 valid_1's l2: 0.192514 valid_1's l1: 0.417019
[40] training's l2: 0.192856 training's l1: 0.411107 valid_1's l2: 0.187258 valid_1's l1: 0.406303
[50] training's l2: 0.189593 training's l1: 0.403695 valid_1's l2: 0.183688 valid_1's l1: 0.398997
[60] training's l2: 0.187043 training's l1: 0.398704 valid_1's l2: 0.181009 valid_1's l1: 0.393977
[70] training's l2: 0.184982 training's l1: 0.394876 valid_1's l2: 0.178803 valid_1's l1: 0.389805
[80] training's l2: 0.1828 training's l1: 0.391147 valid_1's l2: 0.176799 valid_1's l1: 0.386476
[90] training's l2: 0.180817 training's l1: 0.388101 valid_1's l2: 0.175775 valid_1's l1: 0.384404
[100] training's l2: 0.179171 training's l1: 0.385174 valid_1's l2: 0.175321 valid_1's l1: 0.382929
Plotting metrics during training...
|
demo_notebooks/TEM_VerticalConductor_2D_forward.ipynb
|
###Markdown
Forward SimulationIn this notebook, we compute a single line of AEM data over a conductive plate in a resistive background. We plot the currents and magnetic fields in the subsurface. This notebook generates Figures 1-4 in: Heagy, L., Kang, S., Cockett, R., and Oldenburg, D., 2018, _Open source software for simulations and inversions of airborne electromagnetic data_, AEM 2018 International Workshop on Airborne Electromagnetics
###Code
from SimPEG import EM, Mesh, Maps, Utils
import numpy as np
from scipy.constants import mu_0
from scipy.interpolate import griddata
import matplotlib.pyplot as plt
from matplotlib.colors import LogNorm
from matplotlib import animation, collections
from pymatsolver import Pardiso
import ipywidgets
from IPython.display import HTML
%matplotlib inline
from matplotlib import rcParams
rcParams['font.size'] = 16
###Output
_____no_output_____
###Markdown
Create a Tensor MeshHere, we build a 3D Tensor mesh to run the forward simulation on.
###Code
cs, ncx, ncy, ncz, npad = 50., 20, 1, 20, 10
pad_rate = 1.3
hx = [(cs,npad,-pad_rate), (cs,ncx), (cs,npad,pad_rate)]
hy = [(cs,npad,-pad_rate), (cs,ncy), (cs,npad,pad_rate)]
hz = [(cs,npad,-pad_rate), (cs,ncz), (cs,npad,pad_rate)]
mesh = Mesh.TensorMesh([hx,hy,hz], 'CCC')
print(mesh)
# print diffusion distance and make sure mesh padding goes beyond that
print("diffusion distance: {:1.2e} m".format(1207*np.sqrt(1e3*2.5*1e-3)))
print("mesh extent : {:1.2e} m".format(mesh.hx[-10:].sum()))
###Output
diffusion distance: 1.91e+03 m
mesh extent : 2.77e+03 m
###Markdown
Build a model
###Code
# model parameters
sig_air = 1e-8
sig_half = 1e-3
sig_plate = 1e-1
# put the model on the mesh
blk1 = Utils.ModelBuilder.getIndicesBlock(
np.r_[-49.5, 500, -50],
np.r_[50, -500, -450],
mesh.gridCC
)
sigma = np.ones(mesh.nC)*sig_air
sigma[mesh.gridCC[:,2]<0.] = sig_half
sigma_half = sigma.copy()
sigma[blk1] = sig_plate
xref = 0
zref = -100.
yref = 0.
xlim = [-700., 700.]
ylim = [-700., 700.]
zlim = [-700., 0.]
indx = int(np.argmin(abs(mesh.vectorCCx-xref)))
indy = int(np.argmin(abs(mesh.vectorCCy-yref)))
indz = int(np.argmin(abs(mesh.vectorCCz-zref)))
fig, ax = plt.subplots(1,2, figsize = (12,6))
clim = [1e-3, 1e-1]
dat1 = mesh.plotSlice(sigma, grid=True, ax=ax[0], ind=indz, pcolorOpts={'norm':LogNorm()}, clim=clim)
dat2 = mesh.plotSlice(sigma, grid=True, ax=ax[1], ind=indx, normal='X', pcolorOpts={'norm':LogNorm()}, clim=clim)
cb = plt.colorbar(dat2[0], orientation="horizontal", ax = ax[1])
ax[0].set_xlim(xlim)
ax[0].set_ylim(ylim)
ax[1].set_xlim(xlim)
ax[1].set_ylim(ylim[0], 0.)
plt.tight_layout()
ax[0].set_aspect(1)
ax[1].set_aspect(1)
ax[0].set_title("Conductivity at Z={:1.0f}m".format(mesh.vectorCCz[indz]))
ax[1].set_title("X={:1.0f}m".format(mesh.vectorCCx[indx]))
cb.set_label("$\sigma$ (S/m)")
fig, ax = plt.subplots(1,2, figsize = (12, 6))
clim = [1e-3, 1e-1]
dat1 = mesh.plotSlice(sigma, grid=True, ax=ax[0], ind=indz, pcolorOpts={'norm':LogNorm()}, clim=clim)
dat2 = mesh.plotSlice(sigma, grid=True, ax=ax[1], ind=indy, normal='Y', pcolorOpts={'norm':LogNorm()}, clim=clim)
cb = plt.colorbar(dat2[0], orientation="horizontal", ax = ax[1])
ax[0].set_xlim(xlim)
ax[0].set_ylim(ylim)
ax[1].set_xlim(xlim)
ax[1].set_ylim(zlim)
plt.tight_layout()
ax[0].set_aspect(1)
ax[1].set_aspect(1)
ax[0].set_title("Conductivity at Z={:1.0f} m".format(mesh.vectorCCz[indz]))
ax[1].set_title("X= {:1.0f}m".format(mesh.vectorCCx[indx]))
cb.set_label("$\sigma$ (S/m)")
###Output
_____no_output_____
###Markdown
Assemble the survey
###Code
x = mesh.vectorCCx[np.logical_and(mesh.vectorCCx>-450, mesh.vectorCCx<450)]
# np.save("x", x)
%%time
# assemble the sources and receivers
time = np.logspace(np.log10(5e-5), np.log10(2.5e-3), 21)
srcList = []
for xloc in x:
location = np.array([[xloc, 0., 30.]])
rx_z = EM.TDEM.Rx.Point_dbdt(location, time, 'z')
rx_x = EM.TDEM.Rx.Point_dbdt(location, time, 'x')
src = EM.TDEM.Src.CircularLoop([rx_z, rx_x], orientation='z', loc=location)
srcList.append(src)
###Output
CPU times: user 17.7 ms, sys: 672 µs, total: 18.4 ms
Wall time: 57.9 ms
###Markdown
Set up the forward simulation
###Code
timesteps = [(1e-05, 15), (5e-5, 10), (2e-4, 10)]
prb = EM.TDEM.Problem3D_b(mesh, Solver=Pardiso, verbose=True, timeSteps=timesteps, sigmaMap=Maps.IdentityMap(mesh))
survey = EM.TDEM.Survey(srcList)
survey.pair(prb)
###Output
_____no_output_____
###Markdown
Solve the forward simulation
###Code
%%time
fields = prb.fields(sigma)
###Output
Calculating Initial fields
**************************************************
Calculating fields(m)
**************************************************
Factoring... (dt = 1.000000e-05)
Done
Solving... (tInd = 1)
Done...
Solving... (tInd = 2)
Done...
Solving... (tInd = 3)
Done...
Solving... (tInd = 4)
Done...
Solving... (tInd = 5)
Done...
Solving... (tInd = 6)
Done...
Solving... (tInd = 7)
Done...
Solving... (tInd = 8)
Done...
Solving... (tInd = 9)
Done...
Solving... (tInd = 10)
Done...
Solving... (tInd = 11)
Done...
Solving... (tInd = 12)
Done...
Solving... (tInd = 13)
Done...
Solving... (tInd = 14)
Done...
Solving... (tInd = 15)
Done...
Factoring... (dt = 5.000000e-05)
Done
Solving... (tInd = 16)
Done...
Solving... (tInd = 17)
Done...
Solving... (tInd = 18)
Done...
Solving... (tInd = 19)
Done...
Solving... (tInd = 20)
Done...
Solving... (tInd = 21)
Done...
Solving... (tInd = 22)
Done...
Solving... (tInd = 23)
Done...
Solving... (tInd = 24)
Done...
Solving... (tInd = 25)
Done...
Factoring... (dt = 2.000000e-04)
Done
Solving... (tInd = 26)
Done...
Solving... (tInd = 27)
Done...
Solving... (tInd = 28)
Done...
Solving... (tInd = 29)
Done...
Solving... (tInd = 30)
Done...
Solving... (tInd = 31)
Done...
Solving... (tInd = 32)
Done...
Solving... (tInd = 33)
Done...
Solving... (tInd = 34)
Done...
Solving... (tInd = 35)
Done...
**************************************************
Done calculating fields(m)
**************************************************
CPU times: user 2min 23s, sys: 6.04 s, total: 2min 29s
Wall time: 3min 4s
###Markdown
Compute predicted data
###Code
dpred = survey.dpred(sigma, f=fields)
DPRED = dpred.reshape((survey.nSrc, 2, rx_z.times.size))
# uncomment to add noise
# noise = abs(dpred)*0.05 * np.random.randn(dpred.size) + 1e-14
# dobs = dpred + noise
# DOBS = dobs.reshape((survey.nSrc, 2, rx_z.times.size))
# # uncomment to save
# np.save("dobs", dobs)
# np.save("dpred", dpred)
# 0, 1e-4, 4e-4
rx_z.times[[0, 4, 12]]
fig = plt.figure(figsize=(9, 3.5), dpi=350)
for itime in range(rx_z.times.size):
plt.semilogy(x, -DPRED[:,0,itime], 'k.-', lw=1)
# plt.plot(srcList[5].loc[0][0]*np.ones(3), -DPRED[5,0,[0, 4, 12]] , 's', color="C3")
plt.xlabel("x (m)")
plt.ylabel("Voltage (V/Am$^2$)")
plt.grid(which="both", alpha=0.2)
###Output
_____no_output_____
###Markdown
Plot the fields
###Code
def plot_currents(itime, iSrc, clim=None, ax=None, showcb=True, showit=True):
if ax is None:
fig, ax = plt.subplots(1,2, figsize = (16,8))
location = srcList[iSrc].loc
S = np.kron(np.ones(3), sigma)
j = Utils.sdiag(S) * mesh.aveE2CCV * fields[srcList[iSrc], 'e', itime]
dat1 = mesh.plotSlice(
j, normal='Z', ind=int(indz), vType='CCv', view='vec', ax=ax[0],
range_x=xlim, range_y=ylim, sample_grid = [cs, cs],
pcolorOpts={'norm':LogNorm(), 'cmap':'magma'},
streamOpts={'arrowsize':2, 'color':'k'},
clim=clim, stream_threshold = 1e-12 if clim is not None else None
)
dat2 = mesh.plotSlice(
j, normal='X', ind=int(indx), vType='CCv', view='vec', ax=ax[1],
range_x=xlim, range_y=zlim,
pcolorOpts={'norm':LogNorm(), 'cmap':'magma'},
streamOpts={'arrowsize':2, 'color':'k'},
clim=clim, stream_threshold = 1e-12 if clim is not None else None
)
ax[0].plot(location[0], location[1], 'go', ms=20)
ax[0].text(-600, 500, "Time: {:1.2f} ms".format(prb.times[itime]*1e3), color='w', fontsize=20)
if showcb is True:
cb = plt.colorbar(dat1[0], orientation="horizontal", ax = ax[1])
cb.set_label("Current density (A/m$^2$)")
ax[0].set_aspect(1)
ax[1].set_aspect(1)
ax[0].set_title("Current density at Z={:1.0f}m".format(mesh.vectorCCz[indz]))
ax[1].set_title("X={:1.0f}m".format(mesh.vectorCCx[indx]))
if showit:
plt.show()
return [d for d in dat1 + dat2]
###Output
_____no_output_____
###Markdown
View the current density through time
###Code
ipywidgets.interact(
plot_currents,
itime = ipywidgets.IntSlider(min=1, max=len(prb.times), value=1),
iSrc = ipywidgets.IntSlider(min=0, max=len(srcList), value=5),
clim = ipywidgets.fixed([3e-13, 2e-9]),
ax=ipywidgets.fixed(None),
showit=ipywidgets.fixed(True),
showcb=ipywidgets.fixed(True)
)
def plot_magnetic_flux(itime, iSrc, clim=None, ax=None, showcb=True, showit=True):
if ax is None:
fig, ax = plt.subplots(1,1, figsize = (8,8))
location = srcList[iSrc].loc
b = mesh.aveF2CCV * fields[survey.srcList[iSrc], 'b', itime]
dat1 = mesh.plotSlice(
b, normal='Y', ind=int(indy), vType='CCv', view='vec',
range_x=xlim, range_y=xlim,
ax=ax, pcolorOpts={'norm':LogNorm(), 'cmap':'magma'},
streamOpts={'arrowsize':2, 'color':'k'},
clim=clim, stream_threshold = clim[0] if clim is not None else None
)
ax.plot(location[0], location[2], 'go', ms=15)
ax.text(-600, 500, "Time: {:1.2f} ms".format(prb.times[itime]*1e3), color='w', fontsize=20)
if showcb:
cb = plt.colorbar(dat1[0], ax = ax)
cb.set_label("Magnetic Flux Density (T)")
ax.set_aspect(1)
ax.set_title("Magnetic Flux Density at Y={:1.0f}m".format(mesh.vectorCCy[indy]))
if showit:
plt.show()
return [d for d in dat1 + dat2]
ipywidgets.interact(
plot_magnetic_flux,
itime = ipywidgets.IntSlider(min=1, max=len(prb.times), value=1),
iSrc = ipywidgets.IntSlider(min=0, max=len(srcList), value=5),
clim = ipywidgets.fixed([1e-17, 3e-14]),
ax=ipywidgets.fixed(None),
showit=ipywidgets.fixed(True),
showcb=ipywidgets.fixed(True)
)
###Output
_____no_output_____
###Markdown
Print Figures
###Code
fig, axes = plt.subplots(3,2, figsize = (12,6*3))
iSrc = 5
clim = [3e-13, 2e-9]
for i, itime in enumerate([1, 10, 22]):
ax = axes[i, :]
location = srcList[iSrc].loc
S = np.kron(np.ones(3), sigma)
j = Utils.sdiag(S) * mesh.aveE2CCV * fields[srcList[iSrc], 'e', itime]
dat1 = mesh.plotSlice(
j, normal='Z', ind=int(indz), vType='CCv', view='vec', ax=ax[0],
range_x=xlim, range_y=ylim,
pcolorOpts={'norm':LogNorm(), 'cmap':'magma'},
streamOpts={'arrowsize':2, 'color':'k'},
clim=clim, stream_threshold = 1e-12 if clim is not None else None
)
dat2 = mesh.plotSlice(
j, normal='X', ind=int(indx), vType='CCv', view='vec', ax=ax[1],
range_x=xlim, range_y=zlim,
pcolorOpts={'norm':LogNorm(), 'cmap':'magma'},
streamOpts={'arrowsize':2, 'color':'k'},
clim=clim, stream_threshold = 1e-12 if clim is not None else None
)
ax[0].plot(location[0,0], location[0,1], 'go', ms=20)
ax[0].text(-600, 500, "Time: {:1.2f} ms".format(prb.times[itime]*1e3), color='w', fontsize=20)
cb = plt.colorbar(dat1[0], orientation="horizontal", ax = ax[1])
cb.set_label("Current density (A/m$^2$)")
ax[0].set_aspect(1)
ax[1].set_aspect(1)
if itime == 1:
ax[0].set_title("Current density at Z={:1.0f}m".format(mesh.vectorCCz[indz]))
ax[1].set_title("X={:1.0f}m".format(mesh.vectorCCx[indx]))
else:
ax[0].set_title("")
ax[1].set_title("")
plt.tight_layout()
plt.show()
fig, ax = plt.subplots(1,3, figsize = (15, 5))
fig.subplots_adjust(bottom=0.7)
iSrc = 5
clim = [1e-17, 3e-14]
for a, itime, title in zip(ax, [1, 10, 22], ["a", "b", "c"]):
location = srcList[iSrc].loc
b = mesh.aveF2CCV * fields[survey.srcList[iSrc], 'b', itime]
dat1 = mesh.plotSlice(
b, normal='Y', ind=int(indy), vType='CCv', view='vec',
range_x=xlim, range_y=xlim,
ax=a, pcolorOpts={'norm':LogNorm(), 'cmap':'magma'},
streamOpts={'arrowsize':2, 'color':'k'},
clim=clim, stream_threshold = clim[0] if clim is not None else None
)
if itime > 1:
a.get_yaxis().set_visible(False)
a.plot(location[0], location[2], 'go', ms=15)
a.text(-600, 500, "Time: {:1.2f} ms".format(prb.times[itime]*1e3), color='w', fontsize=20)
a.set_aspect(1)
a.set_title("({})".format(title))
# a.set_title(("Magnetic Flux Density at Y=%.0fm")%(mesh_core.vectorCCy[indy]))
plt.tight_layout()
cbar_ax = fig.add_axes([0.2, -0.05, 0.6, 0.05])
cb = fig.colorbar(dat1[0], cbar_ax, orientation='horizontal')
cb.set_label('Magnetic Flux Density (T)')
###Output
_____no_output_____
###Markdown
Movie of the currents and the magnetic fluxThese are the movies shown in the presentation. They are slow to build; set `make_movie = False` in the cell below to skip building them.
###Code
make_movie = True
save_mp4 = True
iSrc = 5
if make_movie:
fig, ax = plt.subplots(1, 2, figsize = (16,8), dpi=290)
out = plot_currents(itime=1, iSrc=iSrc, clim=[3e-13, 2e-9], ax=ax, showcb=True, showit=False)
def init():
[o.set_array(None) for o in out if isinstance(o, collections.QuadMesh)]
return out
def update(t):
for a in ax:
a.patches = []
a.lines = []
a.clear()
return plot_currents(itime=t, iSrc=iSrc, clim=[3e-13, 2e-9], ax=ax, showcb=False, showit=False)
ani = animation.FuncAnimation(fig, update, np.arange(1, len(prb.times)), init_func=init, blit=False)
if save_mp4:
ani.save(
"currents1.mp4", writer="ffmpeg", fps=3, dpi=250, bitrate=0,
metadata={"title":"TDEM currents in a plate", "artist":"Lindsey Heagy"}
)
anihtml = ani.to_jshtml(fps=3)
HTML(anihtml)
iSrc = 5
clim=[1e-17, 3e-14]
if make_movie:
fig, ax = plt.subplots(1, 1, figsize = (8, 8), dpi=290)
out = plot_magnetic_flux(itime=1, iSrc=iSrc, clim=clim, ax=ax, showcb=True, showit=False)
def init():
[o.set_array(None) for o in out if isinstance(o, collections.QuadMesh)]
return out
def update(t):
ax.patches = []
ax.lines = []
ax.clear()
return plot_magnetic_flux(itime=t, iSrc=iSrc, clim=clim, ax=ax, showcb=False, showit=False)
ani = animation.FuncAnimation(fig, update, np.arange(1, len(prb.times)), init_func=init, blit=False)
if save_mp4:
ani.save(
"magnetic_flux.mp4", writer="ffmpeg", fps=3, dpi=250, bitrate=0,
metadata={"title":"TDEM magnetic flux in a plate", "artist":"Lindsey Heagy"}
)
anihtml = ani.to_jshtml(fps=3)
HTML(anihtml)
###Output
_____no_output_____
###Markdown
Presentation figures
###Code
fig, axes = plt.subplots(3, 3, figsize = (22,6*3))
iSrc = 5
clim_j = [3e-13, 2e-9]
clim_b = [1e-17, 3e-14]
for i, itime in enumerate([1, 10, 22]):
ax = axes[i, :]
location = srcList[iSrc].loc
S = np.kron(np.ones(3), sigma)
j = Utils.sdiag(S) * mesh.aveE2CCV * fields[srcList[iSrc], 'e', itime]
b = mesh.aveF2CCV * fields[survey.srcList[iSrc], 'b', itime]
dat1 = mesh.plotSlice(
j, normal='Z', ind=int(indz), vType='CCv', view='vec', ax=ax[0],
range_x=xlim, range_y=ylim,
pcolorOpts={'norm':LogNorm(), 'cmap':'magma'},
streamOpts={'arrowsize':2, 'color':'k'},
clim=clim_j, stream_threshold = 1e-12 if clim is not None else None
)
dat2 = mesh.plotSlice(
j, normal='X', ind=int(indx), vType='CCv', view='vec', ax=ax[1],
range_x=xlim, range_y=zlim,
pcolorOpts={'norm':LogNorm(), 'cmap':'magma'},
streamOpts={'arrowsize':2, 'color':'k'},
clim=clim_j, stream_threshold = 1e-12 if clim is not None else None
)
dat3 = mesh.plotSlice(
b, normal='Y', ind=int(indy), vType='CCv', view='vec', ax=ax[2],
range_x=xlim, range_y=xlim,
pcolorOpts={'norm':LogNorm(), 'cmap':'magma'},
streamOpts={'arrowsize':2, 'color':'k'},
clim=clim_b, stream_threshold = clim_b[0] if clim is not None else None
)
ax[0].plot(location[0], location[1], 'go', ms=20)
ax[0].text(-600, 550, "Time: {:1.2f} ms".format(prb.times[itime]*1e3), color='w', fontsize=20)
cb = plt.colorbar(dat1[0], orientation="horizontal", ax = ax[1])
cb.set_label("Current density (A/m$^2$)")
cb2 = plt.colorbar(dat3[0], ax=ax[2])
cb2.set_label("Magnetic flux density (T)")
ax[0].set_aspect(1)
ax[1].set_aspect(1)
ax[2].set_aspect(1)
if itime == 1:
ax[0].set_title("Current density, Z={:1.0f}m".format(mesh.vectorCCz[indz]))
ax[1].set_title("Current density, X={:1.0f}m".format(mesh.vectorCCx[indx]))
ax[2].set_title("Magnetic flux, Y=0m")
else:
ax[0].set_title("")
ax[1].set_title("")
ax[2].set_title("")
plt.tight_layout()
plt.show()
###Output
_____no_output_____
|
Ejercicios/02-Clasificador por particiones/clasificador_particiones.ipynb
|
###Markdown
Partition-based Classification - The Histogram Method Julian Ferres - Student ID 101483 Problem statement Let $R_0$ and $R_1$ be two regions and $n$ the number of points, where:- $R_0$ is the triangle with vertices $(1,0)$, $(1,1)$ and $(\frac{1}{2},0)$- $R_1$ is the triangle with vertices $(0,0)$, $(\frac{1}{2},1)$ and $(0,1)$- $n = 10, 100, 1000, 10000, \ldots$ Simulate $n$ points in $\mathbb{R}^2$ as follows: >- Each point belongs to one of the two classes, **_Class 0_** or **_Class 1_**, with probability $\frac{1}{2}$>- Points of class $i$ are uniformly distributed with support on $R_i$, for $i=0,1$ **Using this sample, build a histogram rule that can classify a point not belonging to the sample** Solution
###Code
#Import libraries
import numpy as np
#Plots
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
n = 100000 # sample size
muestra = np.zeros((n,3))
###Output
_____no_output_____
###Markdown
Drawing the sample
###Code
i = 0 # points included so far
while(i < n):
x = np.random.uniform(0,1)
y = np.random.uniform(0,1)
clase = np.random.randint( 0 , 1 + 1 ) # discrete uniform on {0,1}
if (( clase == 0 and abs(2*x-1) < y) or ( clase == 1 and y < 2*x < 2-y )):
muestra[i][0] = x
muestra[i][1] = y
muestra[i][2] = clase
i+=1
clase0 , clase1 = muestra[(muestra[:,2] == 0.)] , muestra[(muestra[:,2] == 1.)]
g = plt.scatter( clase0[:,0] , clase0[:,1] , alpha=0.5, color='darkgreen' , label = 'Class 0');
g = plt.scatter( clase1[:,0] , clase1[:,1] , alpha=0.5, color='darkorange' , label = 'Class 1');
plt.legend()
plt.title("Distributions", fontsize=20)
plt.show()
###Output
_____no_output_____
###Markdown
To build the partition I need the side length of the boxes. From what we saw in class, if $h_n$ is the side length of the boxes, then:$$h_n = \frac {1}{\sqrt[2d]{n}} = n^{-\frac{1}{2d}}$$satisfies the conditions for the histogram rule to be universally consistent. In this case, with two dimensions, $d=2$:
###Code
d = 2
h_n = n **(-(1/(d*2)))
d_n = int(1/h_n) # 1/h_n might not be an integer
particion = np.ndarray((d_n , d_n), dtype = int )
particion.fill(0)
for i in range(n):
x_p , y_p = int(muestra[i,0]/h_n) , int(muestra[i,1]/h_n)
x_p = d_n - 1 if x_p >=d_n else x_p
y_p = d_n - 1 if y_p >=d_n else y_p
particion[y_p , x_p] += 1 if muestra[i,2] else -1
f = lambda x : 1 if (x > 0) else 0 # majority vote: a positive sum means class-1 points dominate the cell
f_vec = np.vectorize(f)
for_heatmap = f_vec(particion) # map every cell to class 0 or 1
particion
for_heatmap
###Output
_____no_output_____
###Markdown
Classification with the histogram method
###Code
dims = (8, 8)
fig, axs = plt.subplots(figsize=dims)
annotable = (n<1000000)
# flip vertically so the first row (low y) is drawn at the bottom, matching the descending y tick labels
g = sns.heatmap(np.flipud(for_heatmap), annot = annotable , linewidths=.5,cmap=['darkgreen','darkorange'],\
cbar = False, annot_kws={"size": 15},\
xticklabels = [round(x/d_n,2) for x in range(d_n)],\
yticklabels = [(round(1-x/d_n,2)) for x in range(1,d_n+1)])
g.set_title('Classified Partitions' , size = 30)
plt.show()
###Output
_____no_output_____
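###Markdown
A small added example (not in the original notebook): once the partition is filled, a new point is classified by mapping it to its cell and returning that cell's majority class. The test point $(0.8, 0.2)$ was chosen because it lies inside the class-1-only region of the sampling conditions above.
###Code
# Classify a new point with the fitted histogram rule
def classify(x, y):
    xi = min(int(x / h_n), d_n - 1)
    yi = min(int(y / h_n), d_n - 1)
    return 1 if particion[yi, xi] > 0 else 0
print(classify(0.8, 0.2))  # should print 1
###Output
_____no_output_____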
|
Courses/Algorithms, Data Structures and Real-Life Python Problems/Section 3 - Algorithms and Code Complexity/Algorithms and Code Complexity.ipynb
|
###Markdown
Algorithms and Code ComplexityThis notebook was created by Eda AYDIN, based on the DATAI Team course on Udemy. Table of Contents* [Algorithms](1)* [Algorithms and Code Complexity](2)* [Big-O Notation](3)* [Big-O | Omega | Theta](4)* [Big-O Examples](5) * [O(1) Constant](5_1) * [O(n) Linear](5_2) * [O($n^3$) Cubic](5_3)* [Calculating Scale of Big-O](6)* [Interview Questions](7) Algorithms- An algorithm is a step-by-step recipe written to solve a problem.
###Code
def calculation(book_number, average_price):
return book_number * average_price
print("You have to pay {}TL".format(calculation(5,50)))
###Output
You have to pay 250TL
###Markdown
Algorithms and Code ComplexityProblem: Arranging numbers from smallest to largestAlgorithms: SortingWe can solve this problem with the Bubble Sort or Selection Sort algorithms, but the main question here is which algorithm works more efficiently. This is where the concept of code complexity comes in.Example: Square the numbers from 1 to n and add them all up.$$\sum_{k=1}^{n} k^{2} = 1^{2} + 2^{2} + 3^{2} + 4^{2} + ... + n^{2} = \frac{n(n+1)(2n+1)}{6}$$
###Code
def square_sum1(n):
# Take an input of n and return the sum of the squares of numbers from 0 to n.
total = 0
for i in range(n+1):
total += i**2
return total
square_sum1(6)
def square_sum2(n):
# Take an input of n and return the sum of the squares of numbers from 0 to n with formula
return int((n * ( n + 1 )*( 2 * n + 1))/6)
square_sum2(6)
###Output
_____no_output_____
###Markdown
**%timeit:** an IPython magic command, built on Python's `timeit` module, that measures the execution time of the given code snippet by running it many times.
###Code
%timeit square_sum1(6)
%timeit square_sum2(6) # faster
###Output
335 ns ± 8.75 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
###Markdown
The timings differ from one computer to another because the hardware and CPU differ. Since runtime is hardware-dependent, **Big-O notation** is used to compare these two methods, not wall-clock time. Big O Notation* Don't compare two algorithms by runtime.* Big-O notation describes the growth of a function.* Big-O analysis is also called asymptotic analysis.* n refers to the size of the input. \begin{matrix}\mathbf{Big-O }& \mathbf{Name} \\1 & Constant\\log(n) & Logarithmic\\n & Linear\\nlog(n) & Log Linear\\n^2 & Quadratic\\n^3 & Cubic\\2^n & Exponential\end{matrix} Big O | Omega | Theta* **Big-O:** how the code we wrote behaves in the **worst** case* **Omega:** how the code we wrote behaves in the **best** case* **Theta:** how the code we wrote behaves in the **average** case
###Code
a = [2,3,4]
b = [3,2,4]
c = [4,3,2]
%timeit next((i for i in a if i == 2), None) # Omega
%timeit next((i for i in b if i == 2), None) # Theta
%timeit next((i for i in c if i == 2), None) # Big - O
###Output
585 ns ± 5.16 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
###Markdown
Big-O Examples O(1) Constant- It doesn't depend on input size.- If the input size is large, the time complexity doesn't change.
###Code
def constant_big_O(list):
print(list[0])
lst = [-5,0,1,2,3,4,5,6]
constant_big_O(lst)
###Output
-5
###Markdown
O(n) Linear- This function runs in linear time: the number of operations in the algorithm is directly proportional to the input size.
###Code
def linear_big_O(list):
for i in list:
print(i)
linear_big_O(lst)
###Output
-5
0
1
2
3
4
5
6
###Markdown
O(n^3) Cubic* It includes nested loops.
###Code
import numpy as np
lst2 = [1,2,3]
def cubic_big_O(list):
a = []
for i in range(0,len(list)):
for j in range(0,len(list)):
for k in range(0,len(list)):
a.append([list[i], list[j],list[k]])
a = np.array(a)
print(a)
cubic_big_O(lst2)
###Output
[[1 1 1]
[1 1 2]
[1 1 3]
[1 2 1]
[1 2 2]
[1 2 3]
[1 3 1]
[1 3 2]
[1 3 3]
[2 1 1]
[2 1 2]
[2 1 3]
[2 2 1]
[2 2 2]
[2 2 3]
[2 3 1]
[2 3 2]
[2 3 3]
[3 1 1]
[3 1 2]
[3 1 3]
[3 2 1]
[3 2 2]
[3 2 3]
[3 3 1]
[3 3 2]
[3 3 3]]
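###Markdown
The table above also lists a logarithmic class with no example in this notebook, so here is an added sketch of the classic O(log(n)) case, binary search, which halves the search range at every step.
###Code
def logarithmic_big_O(sorted_list, target):
    # each iteration halves the remaining range -> O(log n)
    low, high = 0, len(sorted_list) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_list[mid] == target:
            return mid
        elif sorted_list[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1
logarithmic_big_O(list(range(100)), 42)
###Output
_____no_output_____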
###Markdown
Calculating Scale of Big-O* Constant factors are insignificant.* Linear Big-O is directly proportional to the input size.* First example below: O(n)* Second example below: O(2n)* Since constant multipliers (1, 2, 3, etc.) are insignificant, O(2n) simplifies to O(n).
###Code
def linear_big_O(list):
for i in list:
print(i)
linear_big_O([1,3]) # O(n)
def linear_big_O_2(list):
for i in list:
print(i)
for i in list:
print(i)
linear_big_O_2([1,3]) # O(2n)
# O(1+n)
def example(list):
print(list[0]) # O(1) constant
for i in list: # O(n) linear
print(i)
example([1,2,3,4])
###Output
1
1
2
3
4
|
carbon_capture_storage_to_meet_target.ipynb
|
###Markdown
Could We Possibly Reach the Target of Increasing the Number of CCUS Facilities a Hundredfold by 2040?

In 2019, the **International Energy Agency** (IEA) released a scenario in its World Energy Outlook, called the Sustainable Development Scenario (SDS), which highlights that CCUS contributes a 9% reduction of global CO2 emissions by 2050. This reduction is meant to meet the 2015 Paris Agreement. The IEA stated that to reach the 9% contribution by 2050, **the mass of CO2 captured and permanently stored (captured CO2 capacity) must reach 2.8 billion tonnes per annum**. In other words, the **Global CCS Institute**, a world institute for CCS, further stated that to achieve the level outlined in the SDS, **the number of CCUS facilities needs to increase a hundredfold by 2040**.

To grasp what a "hundredfold" means, take a look at this illustration. The [Scottish CCS](https://www.sccs.org.uk/expertise/global-ccs-map) website has a **world map of CCS project** distribution. There are 3 categories: **operational** (operating large-scale facilities), **in planning** (facilities soon to be operational), and **pilot project**. Each of these categories has been counted:
###Code
# current number of CCS projects: 20 operational, 55 in planning, 70 pilot
op = 20
inplan = 55
pilot = 70
###Output
_____no_output_____
###Markdown
It is stated that CCS must increase a hundredfold. That means that by 2040 the number of operational CCS facilities must be 100 times the number this year (2020): the **20 facilities of 2020 must grow to 2,000 facilities by 2040**. There are **20 years** until 2040. We can draw a graph of the number of CCS facilities per year, assuming **the growth is linear**.
###Code
import numpy as np
import matplotlib.pyplot as plt
year = np.arange(2020, 2041, 1)
ccs_facilities_numbers = np.arange(20, 2020, 99)
plt.figure(figsize=(10, 7))
plt.title('Number of CCS Facilities from 2020 to 2040 (IEA 2019 Scenario)', size=17, pad=20)
plt.plot(year, ccs_facilities_numbers, '.-')
plt.xlabel('Year', size=12), plt.ylabel('Number of CCS Facilities', size=12)
plt.xlim(2020, 2040); plt.ylim(0, 2100)
###Output
_____no_output_____
###Markdown
This means **each year we have to add 99 new CCS facilities** ((2000 - 20) / 20 = 99). Is that possible and feasible? I think it's impossible.
###Code
import seaborn as sns
import numpy as np
import matplotlib.pyplot as plt
ccs = np.array([0.09, 3, 0.25, 1, 2.5, 7, 1.5, 0.4, 0.1, 0.85, 0.1, 0.7, 0.1, 0.5, 0.5, 0.6, 1, 0.4, 0.3, 3, 0.35, 1, 0.6, 0.8, 0.8, 0.7, 1, 2, 0.16, 0.1, 4, 0.02, 0.6])
plt.figure(figsize=(10, 7))
plt.title('Distribution Plot of Large-scale CCS Facilities by 2020', size=17, pad=20)
plt.xlabel('Captured CO\u2082 capacity (million tons per annum, Mtpa)', size=12), plt.ylabel('Probability Density', size=12)
sns.distplot(ccs, bins=30, color ='blue', hist_kws=dict(edgecolor="black", linewidth=1))
sum(ccs)
import scipy.stats as stats
stats.describe(ccs)
cap_min = 0.02   # smallest observed capacity (Mtpa); renamed to avoid shadowing built-in min
cap_max = 7      # largest observed capacity (Mtpa); renamed to avoid shadowing built-in max
cap_mode = stats.mode(ccs)
inplan = np.random.triangular(0.02, 0.1, 7, 55)
sns.distplot(inplan, bins=20)
pilot = np.random.triangular(0.02, 0.1, 7, 70)
sns.distplot(pilot, bins=20)
popul_inplan_sum = 55
n = 55
popul_inplan = np.random.multinomial(popul_inplan_sum, np.ones(n)/n, size=1)[0]
sns.distplot(popul_inplan, bins=11)
popul_pilot_sum = 70
n = 70
popul_pilot = np.random.multinomial(popul_pilot_sum, np.ones(n)/n, size=1)[0]
sns.distplot(popul_pilot, bins=14)
inplan_cap = popul_inplan * inplan
pilot_cap = popul_pilot * pilot
sum_inplan_cap = sum(inplan_cap)
sum_inplan_cap
###Output
_____no_output_____
###Markdown
***
###Code
pilot = np.random.triangular(0.02, 0.1, 7, 125)
sns.distplot(pilot, bins=25)
popul_inplan_sum = 125
n = 125
popul_inplan = np.random.multinomial(popul_inplan_sum, np.ones(n)/n, size=1)[0]
sns.distplot(popul_inplan, bins=11)
a = sum(pilot * popul_inplan_sum)
a
###Output
_____no_output_____
###Markdown
Monte Carlo Simulation
###Code
popul_inplan_sum = 75
n = 75
cap = []
for i in range(0, 100):
pilot = np.random.triangular(0.02, 0.1, 5, 125)
popul_inplan = np.random.multinomial(popul_inplan_sum, np.ones(n)/n, size=1)[0]
summ = sum(pilot * popul_inplan_sum)
cap.append(float(summ))
plt.figure(figsize=(10, 7))
plt.title('Monte Carlo Simulation of Total Capacity of 125 Additional CCS Fields', size=17, pad=20)
plt.xlabel('Captured CO\u2082 capacity (million tons per annum, Mtpa)', size=12), plt.ylabel('Probability Density', size=12)
sns.distplot(cap, bins=25, color ='green', hist_kws=dict(edgecolor="black", linewidth=1))
sum(cap)
###Output
_____no_output_____
###Markdown
Rolling Dice
###Code
# generate random integer values
from random import seed
from random import randint
# generate the first 100 random numbers ranging from 1 to 6
record1 = []
for _ in range(100):
value1 = randint(1, 6)
record1.append(float(value1))
# generate the second 100 random numbers ranging from 1 to 6
record2 = []
for _ in range(100):
value2 = randint(1, 6)
record2.append(float(value2))
# multiply the two numbers
result = [r1 * r2 for r1, r2 in zip(record1, record2)]
sns.distplot(result, bins=25)
for i in range(0, 100):
    record1 = []; record2 = []
    for _ in range(100):
        value1 = randint(1, 6)
        record1.append(float(value1))
        value2 = randint(1, 6)
        record2.append(float(value2))
    # element-wise product of the two dice records (list * list is not valid Python)
    multiply = [r1 * r2 for r1, r2 in zip(record1, record2)]
import pandas as pd
df = pd.DataFrame(cap)
df.describe(percentiles=[.10, .40, .60, .90]).T
###Output
_____no_output_____
###Markdown
***
###Code
target = 2800 #Mtpa
xyz = np.random.triangular(3, 4, 10, 100)
numbers = np.random.triangular(10, 25, 100, 100)
cap = xyz * numbers
sumcap = sum(cap)
sumcap
pilot = 70
inplan = 55
pil = np.random.triangular(3, 4, 55)
import numpy as np
import matplotlib.pyplot as plt
#import data
# consists of one column of datapoints such as 2.231, -0.1516, 1.564, etc.
data=np.array([0.09, 3, 0.25, 1, 2.5, 7, 1.5, 0.4, 0.1, 0.85, 0.1, 0.7, 0.1, 0.5, 0.5, 0.6, 1, 0.4, 0.3, 3, 0.35, 1, 0.6, 0.8, 0.8, 0.7, 1, 2, 0.16, 0.1, 4, 0.02, 0.6])
# data=uniform
# normalized histogram of the loaded dataset
hist, bins = np.histogram(data,bins=100,range=(np.min(data),np.max(data)) ,density=True)
width = 0.7 * (bins[1] - bins[0])
center = (bins[:-1] + bins[1:]) / 2
#generate data with double random()
generatedData=np.zeros(33)
maxData=np.max(data)
minData=np.min(data)
i=0
while i<33:
randNo=np.random.rand(1)*(maxData-minData)-np.absolute(minData)
if np.random.rand(1)<=hist[np.argmax(randNo<(center+(bins[1] - bins[0])/2))-1]:
generatedData[i]=randNo
i+=1
#normalized histogram of generatedData
hist2, bins2 = np.histogram(generatedData,bins=100,range=(np.min(data),np.max(data)), density=True)
width2 = 0.7 * (bins2[1] - bins2[0])
center2 = (bins2[:-1] + bins2[1:]) / 2
#plot both histograms
plt.figure(figsize=(20,8))
plt.subplot(1,2,1)
plt.title("Original Data")
sns.distplot(data, bins=50, color ='blue', hist_kws=dict(edgecolor="black", linewidth=1))
plt.subplot(1,2,2)
plt.title("Generated Data")
sns.distplot(generatedData, bins=50, color ='red', hist_kws=dict(edgecolor="black", linewidth=1))
print(generatedData)
import scipy
n = 100; p = 1
k = np.arange(0, 33)
binomial = scipy.stats.binom.pmf(k, n, p)
# plt.plot(k, binomial)
sns.distplot(binomial)
lognormal = np.random.lognormal(0.3, 1, 100)
sns.distplot(lognormal)
###Output
_____no_output_____
|
Tree_Methods_Consulting_Project_SOLUTION.ipynb
|
###Markdown
Tree Methods Consulting Project - SOLUTION

You've been hired by a dog food company to try to predict why some batches of their dog food are spoiling much quicker than intended! Unfortunately this dog food company hasn't upgraded to the latest machinery, meaning that the amounts of the five preservative chemicals they are using can vary a lot, but which is the chemical that has the strongest effect? The dog food company first mixes up a batch of preservative that contains 4 different preservative chemicals (A, B, C, D), which is then completed with a "filler" chemical. The food scientists believe one of the A, B, C, or D preservatives is causing the problem, but need your help to figure out which one! Use Machine Learning with RF to find out which parameter had the most predictive power, thus finding out which chemical causes the early spoiling! So create a model and then find out how you can decide which chemical is the problem!

* Pres_A : Percentage of preservative A in the mix
* Pres_B : Percentage of preservative B in the mix
* Pres_C : Percentage of preservative C in the mix
* Pres_D : Percentage of preservative D in the mix
* Spoiled: Label indicating whether or not the dog food batch was spoiled.

___

**Think carefully about what this problem is really asking you to solve. While we will use Machine Learning to solve this, it won't be with your typical train/test split workflow. If this confuses you, skip ahead to the solution code along walk-through!**

___
###Code
#Tree methods Example
from pyspark.sql import SparkSession
spark = SparkSession.builder.appName('dogfood').getOrCreate()
# Load training data
data = spark.read.csv('dog_food.csv',inferSchema=True,header=True)
data.printSchema()
data.head()
data.describe().show()
# Import VectorAssembler and Vectors
from pyspark.ml.linalg import Vectors
from pyspark.ml.feature import VectorAssembler
data.columns
assembler = VectorAssembler(inputCols=['A', 'B', 'C', 'D'],outputCol="features")
output = assembler.transform(data)
from pyspark.ml.classification import RandomForestClassifier,DecisionTreeClassifier
rfc = DecisionTreeClassifier(labelCol='Spoiled',featuresCol='features')
output.printSchema()
final_data = output.select('features','Spoiled')
final_data.head()
rfc_model = rfc.fit(final_data)
rfc_model.featureImportances
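# A hedged follow-up (not in the original solution): map the importances back to the
# VectorAssembler input columns ['A', 'B', 'C', 'D'] to name the culprit chemical.
# Note that rfc here is actually a DecisionTreeClassifier; a RandomForestClassifier
# exposes the same kind of featureImportances vector.
for col, imp in zip(['A', 'B', 'C', 'D'], rfc_model.featureImportances.toArray()):
    print(col, imp)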
###Output
_____no_output_____
|
examples/8. Three phase equilibria (VLLE).ipynb
|
###Markdown
Vapor Liquid Liquid Equilibrium (VLLE)This notebook exemplifies the three phase equilibria calculation. The VLLE calculation is separated into two cases:- Binary mixtures: due to degrees of freedom restrictions the VLLE is computed at given temperature or pressure. This is solved using the ``vlleb`` function.- Mixtures with three or more components: the VLLE is computed at given global composition, temperature and pressure. This is solved using the ``vlle`` function.To start, the required functions are imported.
###Code
import numpy as np
from phasepy import component, mixture, virialgamma
from phasepy.equilibrium import vlleb, vlle
###Output
_____no_output_____
###Markdown
Binary (two component) mixture VLLE

The VLLE computation for binary mixtures is based on solving the following objective function:

$$ K_{ik} x_{ir} - x_{ik} = 0 \qquad i = 1,...,c \quad k = 2,3 $$

$$ \sum_{i=1}^c x_{ik} = 1 \qquad k = 1, 2, 3$$

Here, $x_{ik}$ is the molar fraction of component $i$ in phase $k$ and $ K_{ik} = x_{ik}/x_{ir} = \hat{\phi}_{ir}/\hat{\phi}_{ik} $ is the equilibrium constant with respect to a reference phase $r$. **Note:** this calculation does not check for the stability of the phases.

In the following code block, the VLLE calculation for the binary mixture of water and mtbe is exemplified. For binary mixtures, the VLLE is computed at either given pressure (``P``) or temperature (``T``). The function ``vlleb`` requires either of those, plus initial guesses for the unknown variables (phase compositions and the unknown specification). First, the mixture and its interaction parameters are set up.
###Code
water = component(name='water', Tc=647.13, Pc=220.55, Zc=0.229, Vc=55.948, w=0.344861,
Ant=[11.64785144, 3797.41566067, -46.77830444],
GC={'H2O':1})
mtbe = component(name='mtbe', Tc=497.1, Pc=34.3, Zc=0.273, Vc=329.0, w=0.266059,
Ant=[9.16238246, 2541.97883529, -50.40534341],
GC={'CH3':3, 'CH3O':1, 'C':1})
mix = mixture(water, mtbe)
mix.unifac()
eos = virialgamma(mix, actmodel='unifac')
###Output
_____no_output_____
###Markdown
The binary VLLE at constant pressure is computed as follows:
###Code
P = 1.01 # bar
T0 = 320.0 # K
x0 = np.array([0.01, 0.99])
w0 = np.array([0.99, 0.01])
y0 = (x0 + w0)/2
vlleb(x0, w0, y0, T0, P, 'P', eos)
###Output
_____no_output_____
###Markdown
Similarly, the binary VLLE at constant temperature is computed below:
###Code
P0 = 1.01 # bar
T = 320.0 # K
x0 = np.array([0.01, 0.99])
w0 = np.array([0.99, 0.01])
y0 = (x0 + w0)/2
vlleb(x0, w0, y0, P0, T, 'T', eos)
###Output
_____no_output_____
###Markdown
Multicomponent mixture VLLE

Phase stability plays a key role during equilibrium computation when dealing with more than two liquid phases. For this purpose the following modified multiphase Rachford-Rice mass balance has been proposed by [Gupta et al.](https://www.sciencedirect.com/science/article/pii/037838129180021M):

$$ \sum_{i=1}^c \frac{z_i (K_{ik} \exp{\theta_k}-1)}{1+ \sum\limits^{\pi}_{\substack{j=1 \\ j \neq r}}{\psi_j (K_{ij}} \exp{\theta_j} -1)} = 0 \qquad k = 1,..., \pi, k \neq r $$

Subject to:

$$ \psi_k \theta_k = 0 $$

In this system of equations, $z_i$ represents the global composition of component $i$, $ K_{ij} = x_{ij}/x_{ir} = \hat{\phi}_{ir}/\hat{\phi}_{ij} $ is the equilibrium constant of component $i$ in phase $j$ with respect to the reference phase $r$, and $\psi_j$ and $\theta_j$ are the phase fraction and stability variable of phase $j$.

The solution strategy is similar to the classic isothermal isobaric two-phase flash. First, a reference phase must be selected; this phase is considered stable during the procedure. In an inner loop, the system of equations is solved using multidimensional Newton's method for phase fractions and stability variables, and then compositions are updated in an outer loop using accelerated successive substitution (ASS). Once the algorithm has converged, the stability variable gives information about the phase: if it takes a value of zero the phase is stable, and if it is positive the phase is not. The proposed successive substitution method can be slow; if that is the case, the algorithm attempts to minimize the Gibbs free energy of the system. This procedure also ensures stable solutions and is solved using SciPy's functions.

$$ min \, {G} = \sum_{k=1}^\pi \sum_{i=1}^c F_{ik} \ln \hat{f}_{ik} $$

The next code block exemplifies the VLLE calculation for the mixture of water, ethanol and MTBE. This is done with the ``vlle`` function, which incorporates the algorithm described above. This function requires the global composition (``z``), temperature (``T``) and pressure (``P``). Additionally, the ``vlle`` function requires initial guesses for the composition of the phases.
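###Markdown
Before calling the library routine, a minimal standalone sketch (not from phasepy) of the two-phase case may help clarify the mass balance above. For two phases the modified Rachford-Rice reduces to the classic equation $\sum_i z_i (K_i - 1)/(1 + \psi (K_i - 1)) = 0$, solved here for the phase fraction $\psi$ with SciPy's ``brentq``; the compositions and $K_i$ values below are made-up illustrative numbers.
###Code
from scipy.optimize import brentq

def rachford_rice(psi, z, K):
    # two-phase Rachford-Rice objective: zero at the equilibrium phase fraction
    return np.sum(z * (K - 1.0) / (1.0 + psi * (K - 1.0)))

z_demo = np.array([0.4, 0.5, 0.1])   # hypothetical global composition
K_demo = np.array([2.5, 0.6, 1.3])   # hypothetical equilibrium constants
# the physical root lies between the poles at 1/(1 - K_max) and 1/(1 - K_min)
lo = 1.0/(1.0 - K_demo.max()) + 1e-10
hi = 1.0/(1.0 - K_demo.min()) - 1e-10
brentq(rachford_rice, lo, hi, args=(z_demo, K_demo))
###Output
_____no_output_____
###Markdown
Returning to the example, the ternary mixture is set up below: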
###Code
ethanol = component(name='ethanol', Tc=514.0, Pc=61.37, Zc=0.241, Vc=168.0, w=0.643558,
Ant=[11.61809279, 3423.0259436, -56.48094263],
GC={'CH3':1, 'CH2':1, 'OH(P)':1})
mix = mixture(water, mtbe)
mix.add_component(ethanol)
mix.unifac()
eos = virialgamma(mix, actmodel='unifac')
###Output
_____no_output_____
###Markdown
Once the ternary mixture has been set up, the VLLE computation is performed as follows:
###Code
P = 1.01 # bar
T = 326. # K
Z = np.array([0.4, 0.5, 0.1])
x0 = np.array([0.95, 0.025, 0.025])
w0 = np.array([0.1, 0.7, 0.2])
y0 = np.array([0.15, 0.8, 0.05])
vlle(x0, w0, y0, Z, T, P, eos, K_tol=1e-11, full_output=True)
###Output
_____no_output_____
###Markdown
Vapor Liquid Liquid Equilibrium (VLLE)This notebook exemplifies the three phase equilibria calculation. The VLLE calculation is separated into two cases:- Binary mixtures: due to degrees of freedom restrictions the VLLE is computed at given temperature or pressure. This is solved using the ``vlleb`` function.- Mixtures with three or more components: the VLLE is computed at given global composition, temperature and pressure. This is solved using the ``vlle`` function.To start, the required functions are imported.
###Code
import numpy as np
from sgtpy import component, mixture, saftvrmie
from sgtpy.equilibrium import vlle, vlleb
###Output
_____no_output_____
###Markdown
--- Binary (two-component) mixture VLLE

The VLLE computation for binary mixtures is based on solving the following objective function:

$$ K_{ik} x_{ir} - x_{ik} = 0 \qquad i = 1,...,c \quad k = 2,3 $$

$$ \sum_{i=1}^c x_{ik} = 1 \qquad k = 1, 2, 3$$

Here, $x_{ik}$ is the molar fraction of component $i$ in phase $k$ and $ K_{ik} = x_{ik}/x_{ir} = \hat{\phi}_{ir}/\hat{\phi}_{ik} $ is the equilibrium constant with respect to a reference phase $r$. **Note:** this calculation does not check for the stability of the phases.

In the following code block, the VLLE calculation for the binary mixture of water and butanol is exemplified. For binary mixtures, the VLLE is computed at either given pressure (``P``) or temperature (``T``). The function ``vlleb`` requires either of those, plus initial guesses for the unknown variables (phase compositions and the unknown specification). First, the mixture and its interaction parameters are set up.
###Code
# creating pure components
water = component('water', ms = 1.7311, sigma = 2.4539 , eps = 110.85,
lambda_r = 8.308, lambda_a = 6., eAB = 1991.07, rcAB = 0.5624,
rdAB = 0.4, sites = [0,2,2], cii = 1.5371939421515458e-20)
butanol = component('butanol2C', ms = 1.9651, sigma = 4.1077 , eps = 277.892,
lambda_r = 10.6689, lambda_a = 6., eAB = 3300.0, rcAB = 0.2615,
rdAB = 0.4, sites = [1,0,1], npol = 1.45, mupol = 1.6609,
cii = 1.5018715324070352e-19)
mix = mixture(water, butanol)
# or
mix = water + butanol
# optimized from experimental LLE
kij, lij = np.array([-0.00736075, -0.00737153])
Kij = np.array([[0, kij], [kij, 0]])
Lij = np.array([[0., lij], [lij, 0]])
# setting interactions corrections
mix.kij_saft(Kij)
mix.lij_saft(Lij)
# or setting interaction between component i=0 (water) and j=1 (butanol)
mix.set_kijsaft(i=0, j=1, kij0=kij)
mix.set_lijsaft(i=0, j=1, lij0=lij)
# creating eos model
eosb = saftvrmie(mix)
###Output
_____no_output_____
###Markdown
The binary VLLE at constant pressure is computed as follows:
###Code
#computed
P = 1.01325e5 # Pa
# initial guesses
x0 = np.array([0.96, 0.06])
w0 = np.array([0.53, 0.47])
y0 = np.array([0.8, 0.2])
T0 = 350. # K
vlleb(x0, w0, y0, T0, P, 'P', eosb, full_output=True)
###Output
_____no_output_____
###Markdown
Similarly, the binary VLLE at constant temperature is computed below:
###Code
T = 350. # K
# initial guesses
x0 = np.array([0.96, 0.06])
w0 = np.array([0.53, 0.47])
y0 = np.array([0.8, 0.2])
P0 = 4e4 # Pa
sol_vlleb = vlleb(x0, w0, y0, P0, T, 'T', eosb, full_output=True)
sol_vlleb
###Output
_____no_output_____
###Markdown
You can also supply initial guesses for the phase volumes (``v0``) or non-bonded association site fractions (``Xass0``), which can come from a previous calculation using the ``full_output=True`` option.
###Code
T = 350. # K
# initial guesses
x0 = np.array([0.96, 0.06])
w0 = np.array([0.53, 0.47])
y0 = np.array([0.8, 0.2])
P0 = 4e4 # Pa
v0 = [sol_vlleb.vx, sol_vlleb.vw, sol_vlleb.vy]
Xass0 = [sol_vlleb.Xassx, sol_vlleb.Xassw, sol_vlleb.Xassy]
# VLLE supplying initial guess for volumes and non-bonded association sites fractions
vlleb(x0, w0, y0, P0, T, 'T', eosb, v0=v0, Xass0=Xass0, full_output=True)
###Output
_____no_output_____
###Markdown
--- Multicomponent mixture VLLE

Phase stability plays a key role during equilibrium computation when dealing with more than two liquid phases. For this purpose the following modified multiphase Rachford-Rice mass balance has been proposed by [Gupta et al.](https://www.sciencedirect.com/science/article/pii/037838129180021M):

$$ \sum_{i=1}^c \frac{z_i (K_{ik} \exp{\theta_k}-1)}{1+ \sum\limits^{\pi}_{\substack{j=1 \\ j \neq r}}{\psi_j (K_{ij}} \exp{\theta_j} -1)} = 0 \qquad k = 1,..., \pi, k \neq r $$

Subject to:

$$ \psi_k \theta_k = 0 $$

In this system of equations, $z_i$ represents the global composition of component $i$, $ K_{ij} = x_{ij}/x_{ir} = \hat{\phi}_{ir}/\hat{\phi}_{ij} $ is the equilibrium constant of component $i$ in phase $j$ with respect to the reference phase $r$, and $\psi_j$ and $\theta_j$ are the phase fraction and stability variable of phase $j$.

The solution strategy is similar to the classic isothermal isobaric two-phase flash. First, a reference phase must be selected; this phase is considered stable during the procedure. In an inner loop, the system of equations is solved using multidimensional Newton's method for phase fractions and stability variables, and then compositions are updated in an outer loop using accelerated successive substitution (ASS). Once the algorithm has converged, the stability variable gives information about the phase: if it takes a value of zero the phase is stable, and if it is positive the phase is not. The proposed successive substitution method can be slow; if that is the case, the algorithm attempts to minimize the Gibbs free energy of the system. This procedure also ensures stable solutions and is solved using SciPy's functions.

$$ min \, {G} = \sum_{k=1}^\pi \sum_{i=1}^c F_{ik} \ln \hat{f}_{ik} $$

The next code block exemplifies the VLLE calculation for the mixture of water, butanol and MTBE. This is done with the ``vlle`` function, which incorporates the algorithm described above. This function requires the global composition (``z``), temperature (``T``) and pressure (``P``). Additionally, the ``vlle`` function requires initial guesses for the composition of the phases.
###Code
water = component('water', ms = 1.7311, sigma = 2.4539 , eps = 110.85,
lambda_r = 8.308, lambda_a = 6., eAB = 1991.07, rcAB = 0.5624,
rdAB = 0.4, sites = [0,2,2], cii = 1.5371939421515455e-20)
butanol = component('butanol2C', ms = 1.9651, sigma = 4.1077 , eps = 277.892,
lambda_r = 10.6689, lambda_a = 6., eAB = 3300.0, rcAB = 0.2615,
rdAB = 0.4, sites = [1,0,1], npol = 1.45, mupol = 1.6609,
cii = 1.5018715324070352e-19)
mtbe = component('mtbe', ms =2.17847383, sigma= 4.19140014, eps = 306.52083841,
lambda_r = 14.74135198, lambda_a = 6.0, npol = 2.95094686,
mupol = 1.3611, sites = [0,0,1], cii =3.5779968517655445e-19 )
mix = mixture(water, butanol)
mix.add_component(mtbe)
# or
mix = water + butanol + mtbe
#butanol water
k12, l12 = np.array([-0.00736075, -0.00737153])
#mtbe butanol
k23 = -0.0029995
l23 = 0.
rc23 = 1.90982649
#mtbe water
k13 = -0.07331438
l13 = 0.
rc13 = 2.84367922
# setting up interaction corrections
Kij = np.array([[0., k12, k13], [k12, 0., k23], [k13, k23, 0.]])
Lij = np.array([[0., l12, l13], [l12, 0., l23], [l13, l23, 0.]])
mix.kij_saft(Kij)
mix.lij_saft(Lij)
# or setting interactions by pairs (water = 0, butanol = 1, mtbe = 2)
mix.set_kijsaft(i=0, j=1, kij0=k12)
mix.set_kijsaft(i=0, j=2, kij0=k13)
mix.set_kijsaft(i=1, j=2, kij0=k23)
mix.set_lijsaft(i=0, j=1, lij0=l12)
mix.set_lijsaft(i=0, j=2, lij0=l13)
mix.set_lijsaft(i=1, j=2, lij0=l23)
eos = saftvrmie(mix)
# setting up induced association manually
#mtbe water
eos.eABij[0,2] = water.eAB / 2
eos.eABij[2,0] = water.eAB / 2
eos.rcij[0,2] = rc13 * 1e-10
eos.rcij[2,0] = rc13 * 1e-10
#mtbe butanol
eos.eABij[2,1] = butanol.eAB / 2
eos.eABij[1,2] = butanol.eAB / 2
eos.rcij[2,1] = rc23 * 1e-10
eos.rcij[1,2] = rc23 * 1e-10
# or by using the eos._set_induced_asso method
# selfasso=0 (water), inducedasso=2 (mtbe)
eos.set_induced_asso(selfasso=0, inducedasso=2, rcij=rc13)
# selfasso=1 (butanol), inducedasso=2 (mtbe)
eos.set_induced_asso(selfasso=1, inducedasso=2, rcij=rc23)
###Output
_____no_output_____
###Markdown
Once the ternary mixture has been set up, the VLLE computation is performed as follows:
###Code
T = 345. #K
P = 1.01325e5 # Pa
# global composition
z = np.array([0.5, 0.3, 0.2])
# initial guesses
x0 = np.array([0.9, 0.05, 0.05])
w0 = np.array([0.45, 0.45, 0.1])
y0 = np.array([0.3, 0.1, 0.6])
sol_vlle = vlle(x0, w0, y0, z, T, P, eos, full_output = True)
sol_vlle
###Output
_____no_output_____
###Markdown
As for the other phase equilibria functions included, you can supply initial guesses for the phase volumes (``v0``) or non-bonded association site fractions (``Xass0``), which can come from a previous calculation using the ``full_output=True`` option.
###Code
T = 345. #K
P = 1.01325e5 # Pa
# global composition
z = np.array([0.5, 0.3, 0.2])
# initial guesses
x0 = np.array([0.9, 0.05, 0.05])
w0 = np.array([0.45, 0.45, 0.1])
y0 = np.array([0.3, 0.1, 0.6])
v0 = sol_vlle.v
Xass0 = sol_vlle.Xass
# VLLE supplying initial guess for volumes and non-bonded association sites fractions
vlle(x0, w0, y0, z, T, P, eos, v0=v0, Xass0=Xass0, full_output = True)
###Output
_____no_output_____
|
others/solutions/pandas/titanic/Titanic-exercise-with-solutions.ipynb
|
###Markdown
Titanic Exercise: Practise Pandas

First of all, import the needed libraries.
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# you will have to import at least two other libraries along the exercise
import random
import seaborn as sns
###Output
_____no_output_____
###Markdown
1. Read in filename and call the variable `titanic` - Explore the `titanic` dataset using `info`, `dtypes` & `describe`
###Code
filename = "titanic.csv"
#your code
titanic = pd.read_csv("utils/titanic.csv")
titanic.head()
titanic.info()
titanic.dtypes
titanic.shape
###Output
_____no_output_____
###Markdown
2. Create a separate dataframe with the columns `['name', 'sex', 'age']`, call it `people`. It can be done in two ways; do both!
###Code
diccionario = {"name": titanic["name"], "sex": titanic["sex"], "age": titanic["age"]}
df = pd.DataFrame(diccionario)
df
#your code
people = titanic[["name","sex","age"]]
people
titanic2 = titanic.copy()
###Output
_____no_output_____
###Markdown
3. Print the output of `people` showing the first three rows and the last four rows, using `append`,`tail` and `head`
###Code
t = people.tail(4)
h = people.head(3)
lo_que_retorna_append = h.append(t)
lo_que_retorna_append
#your code
people.head(3).append(people.tail(4))
###Output
_____no_output_____
###Markdown
4. Slice the rows from 3 to 9, call it `s_titanic`
###Code
s_titanic = titanic.iloc[3:10, :]
s_titanic
#your code
s_titanic = titanic[3:10]
s_titanic
###Output
_____no_output_____
###Markdown
5. Slice the rows from 40 to 63 in reverse order, call it `s_titanic_rev`
###Code
s_titanic_rev = titanic.iloc[40:64][::-1]
s_titanic_rev
#your code
s_titanic_rev = titanic.iloc[63:39:-1]
s_titanic_rev
#your code
s_titanic_rev = titanic[40:64].sort_index(ascending=False)
s_titanic_rev
#your code
s_titanic_rev = titanic[40:64][::-1]
s_titanic_rev
###Output
_____no_output_____
###Markdown
6. Slice the columns from the starting column to `'parch'`, call it `left_columns`
###Code
list(titanic.columns).index("parch")
#your code
posicion_parch = list(titanic.columns).index("parch")
titanic.iloc[:, :posicion_parch+1]
#your code
titanic.loc[:, :"parch"]
###Output
_____no_output_____
###Markdown
7. Slice the columns from `'name'` to `'age'`, call it `middle_columns`
###Code
#your code
middle_columns = titanic.loc[:, "name":"age"]
middle_columns
###Output
_____no_output_____
###Markdown
8. Slice the columns from `'ticket'` to the end, call it `right_columns`
###Code
#your code
right_columns = titanic.loc[:, "ticket":]
right_columns
###Output
_____no_output_____
###Markdown
9. What is the name of the oldest person who died on the Titanic? Were they travelling alone, or did they have any family travelling with them?
###Code
non_survived = titanic[titanic["survived"]==0]
non_survived.loc[non_survived["age"] == non_survived["age"].max()]
#your code
titanic[titanic.age == titanic[titanic.survived == 0].age.max()]
titanic.groupby("survived").max()
###Output
_____no_output_____
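###Markdown
A hedged follow-up for the second part of question 9 (whether the oldest victim travelled alone), using the 'sibsp' and 'parch' columns described below:
###Code
# select the oldest non-survivor and inspect the family-aboard counts
oldest_victim = titanic[titanic.age == titanic[titanic.survived == 0].age.max()]
oldest_victim[['name', 'sibsp', 'parch']]
###Output
_____no_output_____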
###Markdown
In order to answer the second question you should find out which columns give you that info. Usually part of your job as a Data Scientist will be to get to know the dataset you are working with. In this case the relevant columns are the following:
- 'sibsp': Number of Siblings/Spouses Aboard
- 'parch': Number of Parents/Children Aboard

10. Create the list of 5 random numbers of rows from 0 to the length of the dataframe, call it `rows`

ex. `rows = [3,7,99,52,48]` use the `random` library
###Code
range(0, titanic.shape[0])
random.choices([2, 5, 7, 9, 99, 1, 0], k=5)
#your code
rows = random.choices(range(0, titanic.shape[0]), k=5)
rows
###Output
_____no_output_____
###Markdown
This list of numbers is random, so yours could be different.
###Code
list(titanic.columns)
random.sample(list(titanic.columns),3)
#your code
cols = ["embarked", "body", "home.dest"]
cols
###Output
_____no_output_____
###Markdown
12. Use both lists `rows` and `cols` to create a new dataframe
###Code
pd.DataFrame(data=titanic, index=rows, columns=cols)
#your code
titanic.loc[rows,cols]
###Output
_____no_output_____
###Markdown
13. Create a boolean array with the condition of being a woman or a man, using the `sex` column, where **female** is True. Call it `array_fe`. Bonus: rename the column `"sex"` to `"gender"`.
###Code
#your code
titanic = titanic.rename({'sex': 'gender'}, axis=1)
array_fe = titanic.gender == "female"
df_2 = pd.DataFrame(array_fe)
df_2
###Output
_____no_output_____
###Markdown
14. Filter the `titanic` dataframe with the boolean array, call it `woman_titanic`
###Code
#your code
woman_titanic = titanic[array_fe]
woman_titanic
###Output
_____no_output_____
###Markdown
15. How many women were younger than 18? Call the variable `minor_wo`
###Code
#your code
minor_wo = woman_titanic[woman_titanic.age < 18]
len(minor_wo)
###Output
_____no_output_____
###Markdown
16. How many women who were less than 18 actually died? Call the variable `dead_wo`
###Code
#your code
dead_wo = minor_wo[minor_wo.survived == 0]
len(dead_wo)
###Output
_____no_output_____
###Markdown
17. Drop rows with *NaN* in `titanic` with `how='any'` and print the shape
###Code
#your code
titanic.dropna(how="any").shape
###Output
_____no_output_____
###Markdown
18. Drop rows with *NaN* in `titanic` with `how='all'` and print the shape
###Code
titanic.shape
#your code
titanic.dropna(how="all").shape
###Output
_____no_output_____
###Markdown
Check [here](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.dropna.html) why the shapes are different. 19. Drop the columns in `titanic` with more than 1000 missing values and print the remaining columns
###Code
titanic.dropna(axis=1, thresh=len(titanic)-1000).shape
titanic.columns[titanic.isnull().sum() > 1000]
titanic.drop(['cabin', 'body'], axis = 1, inplace = True)
#your code
titanic.columns
###Output
_____no_output_____
###Markdown
20. Calculate the ratio of missing values at the `boat` column.
###Code
titanic.boat.isnull().sum()
titanic.shape[0]
round((823/1309)*100, 2)
#your code
print(round((titanic.boat.isnull().sum()/titanic.shape[0])*100, 2), "% of the data in the column 'boat' are null")
###Output
62.87 % of the data in the column 'boat' are null
###Markdown
21. Group `titanic` by `'pclass'` and aggregate by the columns `age` & `fare`, by `max` and `median` --> `by_class`
###Code
#your code
by_class = titanic.groupby("pclass").agg({"age":["max","median"], "fare":["max", "median"]})
by_class
###Output
_____no_output_____
###Markdown
22. Print the maximum age in each class from `by_class`
###Code
#your code
by_class.age["max"]
###Output
_____no_output_____
###Markdown
23. Print the median fare in each class from `by_class`
###Code
#your code
by_class.fare["median"]
###Output
_____no_output_____
###Markdown
24. Using [`.pivot_table()`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.pivot_table.html) to count how many women or men survived by class, call it `counted`.Don't panic and read the documentation!
###Code
#your code
counted = pd.pivot_table(titanic, values="survived", columns="gender", index="pclass", aggfunc=np.sum)
counted
# the pivot table must exist before computing the female survival ratio
counted["Percent_f"] = counted["female"]/(counted.sum(axis=1))
counted
###Output
_____no_output_____
###Markdown
25. Add a new column with the sum of survived men and women, call it `counted['total']`
###Code
#your code
counted['total'] = counted['female']+counted['male']
counted['total']
###Output
_____no_output_____
###Markdown
26. Sort `counted` by the `'total'` column. In which class did the most people survive?
###Code
#your code
counted.sort_values('total', ascending = False).head(1)
###Output
_____no_output_____
###Markdown
27. Please show only the rows, using a mask, with the following conditions:
- They are women
- From third class
- Younger than 30
- They survived

How many rows fulfill the condition?
###Code
#your code
len(titanic[(titanic.gender == "female") &
(titanic.pclass == 3) &
(titanic.age<30) &
(titanic.survived==1)
])
###Output
_____no_output_____
###Markdown
28. Now, show only the rows using `.loc` with the following conditions:
- They are men
- From first class
- Older than 50
- They died

How many rows fulfill the condition?
###Code
#your code
len(titanic.loc[(titanic.gender == "male")&
(titanic.pclass == 1)&
(titanic.age>50)&
(titanic.survived==0)])
###Output
_____no_output_____
###Markdown
29. Print the unique values of the column `'name'`
###Code
#your code
titanic["name"].unique()
#your code
titanic["name"].nunique()
###Output
_____no_output_____
###Markdown
30. Find out whether any `name` was repeated on the Titanic.

Hint: there were two people with the same name. Who were they?
###Code
titanic.name.value_counts()
#your code
list(titanic.name.value_counts()[titanic.name.value_counts()>1].index)
titanic[titanic.name.duplicated()].name
titanic[titanic.name.duplicated(keep=False)]
def difference():
    if len(titanic["name"]) != len(titanic.groupby(["name"]).sum()):
        print("We have repeated names here!")
    else:
        print("Don't panic, NO duplicates")
    return titanic[titanic.duplicated(["name"])]
difference()
###Output
We have repeated names here!
###Markdown
31. Using `matplotlib` find the appropriate visualization to show the distribution of the column `'age'`
###Code
#your code
hist = titanic.age.hist(bins=25)
print(hist)
# matplotlib's hist returns (counts, bin_edges, patches)
x = plt.hist(titanic.age, bins=30)
print(x[1])
x[0]
###Output
[ 0.17 2.831 5.492 8.153 10.814 13.475 16.136 18.797 21.458 24.119
26.78 29.441 32.102 34.763 37.424 40.085 42.746 45.407 48.068 50.729
53.39 56.051 58.712 61.373 64.034 66.695 69.356 72.017 74.678 77.339
80. ]
###Markdown
32. Using `matplotlib` find the appropriate plot to visualize the column `'gender'`
###Code
#your code
titanic.gender.value_counts().plot(kind='bar')
titanic.gender.value_counts().values
titanic.gender.value_counts().index
titanic.gender.value_counts().values
plt.bar(x=titanic.gender.value_counts().index.values, height=titanic.gender.value_counts().values)
plt.hist(titanic.gender.sort_values(ascending=False))
###Output
_____no_output_____
###Markdown
32b. What if you also plot the column `'gender'` using the function [`countplot`](https://seaborn.pydata.org/generated/seaborn.countplot.html) from the library [`seaborn`](https://seaborn.pydata.org/)?Remember you have never used `seaborn` before, therefore you should install it before importing it.
###Code
#your code
sns.countplot(x="gender",data=titanic)
###Output
_____no_output_____
###Markdown
33. Using the function [`catplot`](https://seaborn.pydata.org/generated/seaborn.catplot.html) from the library `seaborn`, find out if the hypothesis _"Women are more likely to survive shipwrecks"_ is true or not.You should get something like this: 
###Code
#your code
sns.catplot(x="gender", y="survived", kind="bar", data=titanic)
###Output
_____no_output_____
###Markdown
34. Using [`kdeplot`]("https://seaborn.pydata.org/generated/seaborn.kdeplot.html") from `seaborn` represent those who not survived distributed by age.Hint: First you should "filter" the `titanic` dataset where the column "survived" is 0, indexing the column `"age"` only.Arguments you should pass to the function: - color = "red" - label = "Not Survived" - shade = True You should get something like this: 
###Code
#your code
sns.kdeplot(titanic.age[titanic.survived == 0], color = "red", label = "Not Survived", shade = True)
sns.kdeplot(titanic.age[titanic.survived == 1], color="green", label="Survived", shade=True)
###Output
_____no_output_____
|
feature_engineering/notebooks/feature_test_bench.ipynb
|
###Markdown
Import libraries
###Code
import sys
import glob, os
import pandas as pd
import numpy as np
import plotly.plotly as py
import plotly.graph_objs as go
import plotly.offline as offline
from plotly import tools
import time
from scipy.spatial import distance
from scipy import linalg
from scipy import signal
import matplotlib.pyplot as plt
%matplotlib inline
offline.init_notebook_mode()
pd.options.display.float_format = '{:.6f}'.format
pd.options.display.max_rows = 999
pd.options.display.max_columns = 999
sys.path.insert(0, '../../scripts/asset_processor/')
# load the autoreload extension
%load_ext autoreload
# Set extension to reload modules every time before executing code
%autoreload 2
from video_asset_processor import VideoAssetProcessor
from video_metrics import VideoMetrics
###Output
_____no_output_____
###Markdown
Computes a list of metrics for some assets
###Code
path = '../../stream/'
resolutions = [
144,
240,
360,
480,
720,
1080
]
attacks = [
'watermark',
'watermark-345x114',
'watermark-856x856',
'vignette',
'low_bitrate_4',
'low_bitrate_8',
'black_and_white',
# 'flip_vertical',
'rotate_90_clockwise'
]
max_samples = 2
total_videos = 1
assets = np.random.choice(os.listdir(path + '1080p'), total_videos, replace=False)
metrics_list = ['temporal_texture',
'temporal_gaussian',
# 'temporal_threshold_gaussian_difference',
# 'temporal_dct'
]
metrics_df = pd.DataFrame()
init = time.time()
for asset in assets:
original_asset = {'path': path + '1080p/' + asset}
renditions_list = [{'path': path + str(res) + 'p/' + asset} for res in resolutions]
attack_list = [{'path': path + str(res) + 'p_' + attk + '/' + asset} for res in resolutions
for attk in attacks]
renditions_list += attack_list
asset_processor = VideoAssetProcessor(original_asset, renditions_list, metrics_list, max_samples=max_samples)
metrics = asset_processor.process()
metrics_df = metrics_df.append(metrics)
print('That took {0:.2f} seconds to compute'.format(time.time() - init))
metrics_df = metrics_df.reset_index()
metrics_df = metrics_df.drop('index', axis=1)
features = [metric + '-mean' for metric in metrics_list]
columns = ['dimension', 'size', 'title', 'attack'] + features
metrics_df = metrics_df.filter(columns)
attacks = metrics_df['attack'].unique()
metrics_df.head()
legit_df = metrics_df[~metrics_df['attack'].str.contains('_')]
attacks_df = metrics_df[metrics_df['attack'].str.contains('_')]
display(legit_df.head())
display(attacks_df.head())
###Output
_____no_output_____
###Markdown
Feature analysis
###Code
print('Legit assets description:')
display(legit_df[features].astype(float).describe())
print('Attack assets description:')
display(attacks_df[features].astype(float).describe())
for feat in features:
data = []
data.append(go.Box(y=attacks_df[feat], name='Attacks', boxpoints='all',
text=attacks_df['title'] + '-' + attacks_df['attack']))
data.append(go.Box(y=legit_df[feat], name='Legit', boxpoints='all',
text=attacks_df['title'] + '-' + attacks_df['attack']))
layout = go.Layout(
title=feat,
yaxis = go.layout.YAxis(title = feat),
xaxis = go.layout.XAxis(
title = 'Type of asset',
)
)
fig = go.Figure(data=data, layout=layout)
offline.iplot(fig)
for feat in features:
data = []
for res in resolutions:
data.append(go.Box(y=attacks_df[feat], name=str(res) + '-' + attacks_df['attack'], boxpoints='all',
text=attacks_df['title'] + '-' + attacks_df['attack']))
data.append(go.Box(y=legit_df[feat], name=res, boxpoints='all',
text=legit_df['title'] + '-' + legit_df['attack']))
layout = go.Layout(
title=feat,
yaxis = go.layout.YAxis(title = feat),
xaxis = go.layout.XAxis(
title = 'Type of asset',
)
)
fig = go.Figure(data=data, layout=layout)
offline.iplot(fig)
df_corr = attacks_df[features].astype(float).corr()
plt.figure(figsize=(10,10))
corr = df_corr.corr('pearson')
corr.style.background_gradient().set_precision(2)
df_corr = legit_df[features].astype(float).corr()
plt.figure(figsize=(10,10))
corr = df_corr.corr('pearson')
corr.style.background_gradient().set_precision(2)
###Output
_____no_output_____
|
assignments/PythonAdvanceProgramming/Python_Advance_Programming_21.ipynb
|
###Markdown
1. Given a sentence, return the number of words which have the same first and last letter.

Examples

count_same_ends("Pop! goes the balloon") ➞ 1

count_same_ends("And the crowd goes wild!") ➞ 0

count_same_ends("No I am not in a gang.") ➞ 1
###Code
def count_same_ends(string):
# remove special chars except space
string = ''.join(e for e in string if e.isalnum() or e == " ")
# using list comprehension with if conditions to reduce code size
return len([x for x in string.lower().split(' ') if x[0] == x[-1] and len(x) > 1])
print(count_same_ends("Pop! goes the balloon"))
print(count_same_ends("And the crowd goes wild!"))
print(count_same_ends("No I am not in a gang."))
###Output
1
0
1
###Markdown
2. The Atbash cipher is an encryption method in which each letter of a word is replaced with its "mirror" letter in the alphabet: A↔Z; B↔Y; C↔X; etc. Create a function that takes a string and applies the Atbash cipher to it.

Examples

atbash("apple") ➞ "zkkov"

atbash("Hello world!") ➞ "Svool dliow!"

atbash("Christmas is the 25th of December") ➞ "Xsirhgnzh rh gsv 25gs lu Wvxvnyvi"
###Code
def atbash(word:str):
decrypted = ""
chars = "abcdefghijklmnopqrstuvwxyz"
for c in word:
if not c.isalpha():
# if not a letter, don't change it
decrypted += c
elif c.isupper():
decrypted += chars[(-1 * chars.index(c.lower())) -1].upper()
else:
decrypted += chars[(-1 * chars.index(c)) -1] # if index is '0' we need to get -1 as new index
return decrypted
print(atbash("apple"))
print(atbash("Hello world!"))
print(atbash("Christmas is the 25th of December"))
###Output
zkkov
Svool dliow!
Xsirhgnzh rh gsv 25gs lu Wvxvnyvi
###Markdown
3. Create a class Employee that will take a full name as argument, as well as a set of none, one or more keywords. Each instance should have a name and a lastname attribute, plus one more attribute for each of the keywords, if any.

Examples

* john = Employee("John Doe")
* mary = Employee("Mary Major", salary=120000)
* richard = Employee("Richard Roe", salary=110000, height=178)
* giancarlo = Employee("Giancarlo Rossi", salary=115000, height=182, nationality="Italian")

john.name ➞ "John"

mary.lastname ➞ "Major"

richard.height ➞ 178

giancarlo.nationality ➞ "Italian"
###Code
class Employee:
    def __init__(self, full_name, **kwargs):
        self.name = full_name.split(' ')[0]
        self.lastname = full_name.split(' ')[-1]
        # setattr handles any value type safely, unlike building code strings for exec
        for arg in kwargs:
            setattr(self, arg, kwargs[arg])
john = Employee("John Doe")
mary = Employee("Mary Major", salary=120000)
richard = Employee("Richard Roe", salary=110000, height=178)
giancarlo = Employee("Giancarlo Rossi", salary=115000, height=182, nationality="Italian")
print(john.name, mary.lastname, richard.height, giancarlo.nationality)
###Output
John Major 178 Italian
###Markdown
4. Create a function that determines whether each seat can "see" the front-stage. A number can "see" the front-stage if it is strictly greater than the number before it.

Everyone can see the front-stage in the example below:

FRONT STAGE
[[1, 2, 3, 2, 1, 1],
 [2, 4, 4, 3, 2, 2],
 [5, 5, 5, 5, 4, 4],
 [6, 6, 7, 6, 5, 5]]

Starting from the left, 6 > 5 > 2 > 1, so all numbers can see; 6 > 5 > 4 > 2, so all numbers can see, etc.

Not everyone can see the front-stage in the example below:

FRONT STAGE
[[1, 2, 3, 2, 1, 1],
 [2, 4, 4, 3, 2, 2],
 [5, 5, 5, 10, 4, 4],
 [6, 6, 7, 6, 5, 5]]

The 10 is directly in front of the 6 and blocking its view.

The function should return True if every number can see the front-stage, and False if even a single number cannot.

Examples

can_see_stage([[1, 2, 3], [4, 5, 6], [7, 8, 9]]) ➞ True

can_see_stage([[0, 0, 0], [1, 1, 1], [2, 2, 2]]) ➞ True

can_see_stage([[2, 0, 0], [1, 1, 1], [2, 2, 2]]) ➞ False

can_see_stage([[1, 0, 0], [1, 1, 1], [2, 2, 2]]) ➞ False

A number must be strictly smaller than the number directly behind it.
###Code
def can_see_stage(seats):
    # iterate over columns, then over consecutive rows within each column;
    # the original indexing only worked for square seat charts
    for col in range(len(seats[0])):
        for row in range(len(seats) - 1):
            # each number must be strictly smaller than the one directly behind it
            if not (seats[row][col] < seats[row + 1][col]):
                return False
    return True
print(can_see_stage([ [1, 2, 3], [4, 5, 6], [7, 8, 9] ]))
print(can_see_stage([ [0, 0, 0], [1, 1, 1], [2, 2, 2] ]))
print(can_see_stage([ [2, 0, 0], [1, 1, 1], [2, 2, 2] ]))
print(can_see_stage([ [1, 0, 0], [1, 1, 1], [2, 2, 2] ]))
###Output
True
True
False
False
###Markdown
5. Create a Pizza class with the attributes order_number and ingredients (which is given as a list). Only the ingredients will be given as input.

You should also make it possible to choose a ready-made pizza flavour rather than typing out the ingredients manually! As well as creating this Pizza class, hard-code the following pizza flavours.

| Name | Ingredients |
| --- | --- |
| hawaiian | ham, pineapple |
| meat_festival | beef, meatball, bacon |
| garden_feast | spinach, olives, mushroom |

Examples

* p1 = Pizza(["bacon", "parmesan", "ham"]) ➞ order 1
* p2 = Pizza.garden_feast() ➞ order 2

p1.ingredients ➞ ["bacon", "parmesan", "ham"]

p2.ingredients ➞ ["spinach", "olives", "mushroom"]

p1.order_number ➞ 1

p2.order_number ➞ 2
###Code
class Pizza():
    order_counter = 0
    def __init__(self, ingredients):
        self.ingredients = ingredients
        Pizza.order_counter += 1
        # store a per-instance copy so each pizza keeps its own order number
        self.order_number = Pizza.order_counter
    @classmethod
    def garden_feast(cls):
        return cls(["spinach", "olives", "mushroom"])
    @classmethod
    def hawaiian(cls):
        return cls(['ham', 'pineapple'])
    @classmethod
    def meat_festival(cls):
        return cls(['beef', 'meatball', 'bacon'])
p1 = Pizza(["bacon", "parmesan", "ham"])
print(p1.order_number, p1.ingredients)
p2 = Pizza.garden_feast()
print(p2.order_number, p2.ingredients)
p3 = Pizza.meat_festival()
print(p3.order_number, p3.ingredients)
p4 = Pizza.meat_festival()
print(p4.order_number, p4.ingredients)
p5 = Pizza(["momos", "pasta", "chicken"])
print(p5.order_number, p5.ingredients)
###Output
1 ['bacon', 'parmesan', 'ham']
2 ['spinach', 'olives', 'mushroom']
3 ['beef', 'meatball', 'bacon']
4 ['beef', 'meatball', 'bacon']
5 ['momos', 'pasta', 'chicken']
|
doc/AdminBackground/PHY321/CM_Jupyter_Notebooks/Student_Work/CM_Notebook8.ipynb
|
###Markdown
Classical Mechanics - Week 8

Last Week:
- Introduced the SymPy package
- Visualized Potential Energy surfaces
- Explored packages in Python

This Week:
We will (mostly) take a break from learning new concepts in scientific computing, and instead just apply some of our current knowledge to complete Problem Set 8.
###Code
# As usual, we will need packages
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
3. Taylor problem 4.29.

(a) Here we plot the potential energy $U(x)=kx^4$, with $k>0$. Note that the constant $k$ is not given. But it is just an overall scale factor for the potential, so the plot will basically look the same no matter what value you choose for $k$. For the purposes of scaling and keeping things simple, we recommend you set $k=1$.

For 2D plots, we learned two different methods:
- 1) Our original plotting method, using `pyplot`. (Refer back to ***Notebook 1*** for this method.) **Note:** use `plt.plot()` rather than `plt.scatter()` in order to make connected curves in the plot.
- 2) Using `SymPy`. (Refer back to ***Notebook 7*** for this method.) You will have to import the sympy package if you use this method.

It is up to you to decide which method you prefer for making the plot. Use the cell below to define the potential energy function and to make the plot.

Q1.) Qualitatively describe the motion if the mass in this potential is initially stationary at $x=0$ and is given a sharp kick to the right at $t=0$?

✅ Double click this cell, erase its content, and put your answer to the above question here.

(d) After performing the change of variable in part (c) of the problem, you should find that the period of oscillation for the mass is given by

$$\tau=\dfrac{1}{A}\sqrt{\dfrac{m}{k}}I\,,$$

where

$$I=\dfrac{4}{\sqrt{2}}\int_0^1\dfrac{dy}{\sqrt{1-y^4}}$$

is the integral to be evaluated. (Where did the factor of 4 come from?) Note that the integral $I$ is dimensionless. Changing variables to obtain a dimensionless integral is almost always useful, especially when you need to evaluate it numerically. Also, even if we don't know what the value of $I$ is, from this expression we can see explicitly how $\tau$ depends on the parameters $A$, $m$, and $k$.

Now how to do the integral? (Review back to ***Notebook 5*** for ***numerical integration***.) If we tried to use our Trapezoidal Rule routine, we would immediately run into problems. What happens to the integrand in the limit as $y$ goes to 1? Although there are ways to change variables again, so that the Trapezoidal Rule routine would work, it is here where more general integration packages are useful, since they can often handle these integrable singularities without any extra effort. So instead, let's use the `integrate.quad()` function from `SciPy` to do it.

In the cell below, define the function to be integrated and then integrate it using `integrate.quad()`. Don't forget to import the numerical integration routines from `SciPy` first. (Again, refer back to Notebook 5 if you need guidance.)

Q2.) What is the period of oscillation of the mass, as a function of the parameters, and including the calculated numerical factor?

✅ Double click this cell, erase its content, and put your answer to the above question here.

5. Taylor problem 4.37.

(c) It should be possible to write the potential energy function in this problem as

$$U(\phi)=MgR\,f(\phi,m/M)\,,$$

so that the factor $MgR$ is just an overall scale factor. Might as well just set $M=g=R=1$. But the shape of the potential energy does depend on the value of $m/M$.

In the cell below, plot two different $\phi$ vs $U(\phi)$ potential energy lines onto the same graph, one line where $m/M=0.7$ and a second line where $m/M=0.8$. Use your preferred method to make the plot of $\phi$ vs $U(\phi)$.
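###Code
# A hedged sketch (not part of the original student worksheet) for problem 3:
# part (a), plotting U(x) = k*x**4 with k = 1 as recommended above
x = np.linspace(-2, 2, 200)
k = 1
plt.plot(x, k*x**4)
plt.xlabel('x'); plt.ylabel('U(x)')
plt.show()

# part (d): the dimensionless integral I = (4/sqrt(2)) * integral_0^1 dy/sqrt(1 - y**4);
# quad handles the integrable singularity at y = 1 without extra effort
from scipy import integrate
I_val = 4/np.sqrt(2) * integrate.quad(lambda y: 1.0/np.sqrt(1.0 - y**4), 0, 1)[0]
I_val  # roughly 3.71
###Output
_____no_output_____
###Markdown
For problem 5, the exact form of $f(\phi, m/M)$ is an assumption here: the sketch below uses the standard wheel-with-falling-mass form $U(\phi) = MgR\,[(1-\cos\phi) - (m/M)\,\phi]$ with $M=g=R=1$; if your derivation gives a different $f$, swap it in.
###Code
phi = np.linspace(0, 4*np.pi, 400)
for r in [0.7, 0.8]:  # r = m/M (assumed form of f; see note above)
    plt.plot(phi, (1 - np.cos(phi)) - r*phi, label='m/M = {}'.format(r))
plt.axhline(0, color='k', lw=0.5)  # E = 0 for release from rest at phi = 0
plt.xlabel(r'$\phi$'); plt.ylabel(r'$U(\phi)/MgR$')
plt.legend(); plt.show()
###Output
_____no_output_____
###Markdown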
In this problem, you are asked to consider the motion of the system starting from rest at $\phi=0$. What is the total energy in this case, and why is it important for determining the motion of the system?

Try varying the ratio $m/M$ in your plot in order to determine the critical value $r_\mathrm{crit}$. This is defined so that if $m/M < r_\mathrm{crit}$ the system oscillates, whereas if $m/M > r_\mathrm{crit}$ the wheel keeps spinning and the mass $m$ keeps falling (if released from rest at $\phi=0$). Use the cell below.

Q3.) What feature of the plot did you use to determine the critical value of $m/M$? What value did you obtain?

✅ Double click this cell, erase its content, and put your answer to the above question here.

6. Taylor problem 4.38.

(b) After doing the substitution in part (a), you obtained the EXACT period for the pendulum as

$$\tau=\tau_0\dfrac{2}{\pi}K(A^2)\,,$$

where

$$K(A^2)=\int_0^1\dfrac{du}{\sqrt{1-u^2}\sqrt{1-A^2u^2}}\,.$$

The integral is dimensionless, so the period is proportional to $\tau_0$ (the period for small oscillations), but now the proportionality factor depends on the amplitude $\Phi$ of the oscillations, through the dependence on $A=\sin(\Phi/2)$.

As in problem 3, the Trapezoidal Rule method will struggle with this integral, due to the singularity in the integrand as $u$ goes to 1. However, `integrate.quad()` from the `SciPy` package should have no problem with it.

In the cell below, define the function to be integrated and then integrate it using `integrate.quad()` to obtain $K(A^2)$ for the values of $\Phi=\pi/4$, $\Phi=\pi/2$, and $\Phi=3\pi/4$.

Special Functions

The function $K(A^2)$ cannot be written in terms of elementary functions, such as cosine, sine, logarithm, exponential, etc. However, it does pop up enough in mathematics and physics problems, that it is given its own name: the ***complete elliptic integral of the first kind***. This is one of many so-called "special functions" that arise frequently in physics problems. Others are Bessel functions, Legendre functions, etc. They aren't really more complicated than the elementary functions; it's just that we are not as familiar with them. It turns out that `SciPy` has many of these "special functions" already coded up. Try running the following cell. (Use the same values of $\Phi$ as before.)
###Code
# The following line imports the special functions from SciPy:
from scipy import special
# Evaluate for the same values of Phi that you used previously
Phi = np.array([np.pi/4, np.pi/2, 3*np.pi/4])
A = np.sin(Phi/2)
special.ellipk(A**2)
###Output
_____no_output_____
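###Code
# A hedged cross-check (not part of the original worksheet): computing K(A^2) with
# integrate.quad, as requested above, should reproduce special.ellipk
from scipy import integrate
def K_quad(Asq):
    # K(A^2) = int_0^1 du / (sqrt(1 - u^2) * sqrt(1 - A^2 u^2))
    return integrate.quad(lambda u: 1.0/(np.sqrt(1.0 - u**2)*np.sqrt(1.0 - Asq*u**2)), 0, 1)[0]
[K_quad(a**2) for a in A]
###Output
_____no_output_____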
###Markdown
You should have gotten the same answers as before. Now, use the special function to make a plot of $\tau/\tau_0$ as a function of $\Phi$ for $0\le\Phi\le3$ (in radians) in the following cell. As a check, what should $\tau/\tau_0$ be in the limit as $\Phi\rightarrow0$?

Q4.) How well does the small angle approximation work for the period of a pendulum with amplitude $\Phi=\pi/4$? What happens to $\tau$ as $\Phi$ approaches $\pi$? Explain.

✅ Double click this cell, erase its content, and put your answer to the above question here.

7. The last problem.

As we have seen, it is often useful to write things in terms of dimensionless combinations of parameters. Your result for the potential in this problem can be written

$$U(x)=k\alpha^2\,f(y)\,,$$

where $y=x/\alpha$ is dimensionless. Verify this. Thus, the natural distance scale is $\alpha$ and the natural energy scale is $k\alpha^2$. In the cell below, make a plot of $U(x)/(k\alpha^2)$ as a function of $x/\alpha$. (Note that this is equivalent to setting $k=\alpha=1$ in $U(x)$. Why?)

Q5.) From your plot, what is special about the energy $E=k\alpha^2/4$?

✅ Double click this cell, erase its content, and put your answer to the above question here.

Notebook Wrap-up. Run the cell below and copy-paste your answers into their corresponding cells.
###Code
from IPython.display import HTML
HTML(
"""
<iframe
src="https://forms.gle/o2JbpvJeUFYvWQni7"
width="100%"
height="1200px"
frameborder="0"
marginheight="0"
marginwidth="0">
Loading...
</iframe>
"""
)
###Output
_____no_output_____
|
notebooks/gaps.ipynb
|
###Markdown
Violation: Gaps
###Code
p1 = box(0,0,10,10)
p2 = Polygon([(10,10), (12,8), (10,6), (12,4), (10,2), (20,5)])
gdf = geopandas.GeoDataFrame(geometry=[p1,p2])
gdf.plot(edgecolor='k')
geoplanar.gaps(gdf)
g = geoplanar.gaps(gdf)
g.area.values
gdf1 = geoplanar.fill_gaps(gdf)
gdf1.plot(edgecolor='k')
gdf1.area
gdf.area
geoplanar.gaps(gdf1)
###Output
/home/serge/Documents/g/geoplanar/geoplanar/geoplanar/gap.py:48: FutureWarning: Currently, index_parts defaults to True, but in the future, it will default to False to be consistent with Pandas. Use `index_parts=True` to keep the current behavior and True/False to silence the warning.
_gaps = dbu.explode()
###Markdown
The default is to merge the gap with the largest neighboring feature. To merge the gap with the smallest neighboring feature, set ``largest=False``:
###Code
geoplanar.fill_gaps(gdf, largest=False).plot(edgecolor='k')
geoplanar.fill_gaps(gdf, largest=False).area
###Output
/home/serge/Documents/g/geoplanar/geoplanar/geoplanar/gap.py:48: FutureWarning: Currently, index_parts defaults to True, but in the future, it will default to False to be consistent with Pandas. Use `index_parts=True` to keep the current behavior and True/False to silence the warning.
_gaps = dbu.explode()
###Markdown
Checking edge case
###Code
p1 = box(0,0,10,10)
p2 = Polygon([(10,10), (12,8), (10,6), (12,4), (10,2), (20,5)])
p3 = box(17,0,20,2)
gdf = geopandas.GeoDataFrame(geometry=[p1,p2,p3])
gdf.plot(edgecolor='k')
g = geoplanar.gaps(gdf)
g.plot()
geoplanar.fill_gaps(gdf, largest=False).plot(edgecolor='k')
geoplanar.fill_gaps(gdf, largest=True).plot(edgecolor='k')
gdf.plot()
###Output
_____no_output_____
###Markdown
Gap with an inlet (non-gap)
###Code
p1 = box(0,0,10,10)
p2 = Polygon([(10,10), (12,8), (10,6), (12,4), (11,2), (20,5)])
# a true gap with an inlet
gdf = geopandas.GeoDataFrame(geometry=[p1,p2])
gdf.plot(edgecolor='k')
geoplanar.gaps(gdf)
geoplanar.fill_gaps(gdf, largest=False).plot(edgecolor='k')
###Output
/home/serge/Documents/g/geoplanar/geoplanar/geoplanar/gap.py:48: FutureWarning: Currently, index_parts defaults to True, but in the future, it will default to False to be consistent with Pandas. Use `index_parts=True` to keep the current behavior and True/False to silence the warning.
_gaps = dbu.explode()
###Markdown
Selective Correction
###Code
p1 = box(0,0,10,10)
p2 = Polygon([(10,10), (12,8), (10,6), (12,4), (10,2), (20,5)])
p3 = box(17,0,20,2)
gdf = geopandas.GeoDataFrame(geometry=[p1,p2,p3])
gdf.plot(edgecolor='k')
gaps = geoplanar.gaps(gdf)
base = gdf.plot()
gaps.plot(color='red', ax=base)
gaps
g2 = gaps.loc[[2]]
g2
filled = geoplanar.fill_gaps(gdf,g2)
base = filled.plot()
g2.plot(color='red', ax=base)
filled.area
filled.shape
(filled.area==[104, 32,6]).all()
###Output
_____no_output_____
|
4. Numpy.ipynb
|
###Markdown
NumPy is a general-purpose array-processing package. It provides a high-performance multidimensional array object, and tools for working with these arrays. It is the fundamental package for scientific computing with Python.

Array

An array is a data structure that stores values of the same data type. In Python, a list can contain values corresponding to different data types, but an array can only contain values corresponding to the same data type.
###Code
# pip install numpy or conda install numpy - for manually install
#Import NumPy
import numpy as np
###Output
_____no_output_____
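As a quick, hedged illustration of the list-vs-array point above (the variable names are just for this sketch): a list may mix types, while an array coerces everything to a single dtype.
###Code
# A Python list can hold mixed types; a NumPy array coerces to one dtype.
import numpy as np
mixed_list = [1, 'two', 3.0]   # heterogeneous list is fine
arr = np.array([1, 2, 3.0])    # ints are upcast so all elements are float64
print(type(mixed_list[1]))     # <class 'str'>
print(arr.dtype)               # float64
###Output
_____no_output_____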
###Markdown
Single Dimension array
###Code
my_lst=[1,2,3,4,5,6]
arr1 = np.array(my_lst)
print(arr1)
print(type(arr1))
print(arr1.shape) #know only no of columns for 1D array
arr1
arr2 = arr1.reshape(3,2) #convert shape with same no of elements, otherwise error
arr2
###Output
_____no_output_____
###Markdown
Multinested / Multi Dimension array
###Code
my_lst1=[1,2,3,4,5]
my_lst2=[2,3,4,5,6]
my_lst3=[9,7,6,8,9]
arr3 = np.array([my_lst1,my_lst2,my_lst3])
print(arr3)
print(type(arr3))
print(arr3.shape) #know no of row & column
arr3
arr4 = arr3.reshape(5,3) #Convert shape with same no of elements, otherwise error
arr4
###Output
_____no_output_____
###Markdown
1D array Indexing
###Code
arr1
arr1[:] #All elements
arr1[4] #Single element
arr1[3:] #As 1d arry so, from 3rd column to last column
###Output
_____no_output_____
###Markdown
2D array Indexing
###Code
arr3
arr3[:] #All elements
arr3[1:,3:] #From 1st row to last row and from 3rd column to last column
arr3[:,3:] #For all rows, from 3rd column to last column
arr3[1:,:] #From 1st row to last row, for all columns
arr3[0:2,3:5] #Specific part
###Output
_____no_output_____
###Markdown
Built-in functions
###Code
#Initialize
arr = np.linspace(1,10,6) #With n no of equally spaced values in the range
print("1D Linspace: ",arr)
arr1 = np.linspace(1,10,6).reshape(2,3) #With n no of equally spaced values in the range with shape
print("nD Linspace:\n",arr1)
print("\n")
arr = np.arange(1,10,1) #With start, end before, step values
print("1D Initialize: ",arr)
arr1 = np.arange(1,10,1).reshape(3,3) #With start, end before, step values with shape
print("nD Initialize:\n",arr1)
print("\n")
arr[4:]=0 #Replaced values and updated
print("1D Updated:\n",arr)
arr1[1:,1:]=0 #Replaced values and updated with shape
print("nD Updated:\n",arr1)
#copy() function and broadcasting for 1D, nD also possible
arr = np.array([1,2,3,4,5,6,7,8,9])
print("Actual arr: ",arr)
print("After direct copy - (reference is also copying)")
arr1= arr
arr1[4:] = 0
print("arr: ",arr)
print("arr1: ",arr1)
print("\n")
arr = np.array([1,2,3,4,5,6,7,8,9])
print("Actual arr: ",arr)
print("After copy by copy() - (copying in new reference)")
arr1= arr.copy()
arr1[4:] = 0
print("arr: ",arr)
print("arr1: ",arr1)
arr = np.ones(5) #Initialize will all 1 with default float type
print(arr)
arr = np.ones((2,3),dtype=int)
print(arr)
arr = np.random.randint(1,100,8) #default 1D array
print("Definite no of Random Integers within a range:\n",arr)
print("\n")
arr = np.random.randint(1,100,8).reshape(2,4)
print("Definite no of Random Integers within a Range with Shape:\n",arr)
print("\n")
arr = np.random.random_sample((2,5)) #default interval [0,1) and shape optional
print("Random Floats with Shape:\n",arr)
print("\n")
arr = np.random.rand(3,4) #default interval [0,1)
print("Random Sample from Uniform Distribution with Shape:\n",arr)
print("\n")
arr = np.random.randn(3,4)
print("Random Sample from Standard Normal Distribution with Shape:\n",arr)
###Output
Definite no of Random Integers within a range:
[72 35 82 33 64 52 94 88]
Definite no of Random Integers within a Range with Shape:
[[88 35 76 25]
[ 3 88 2 75]]
Random Floats with Shape:
[[0.30646409 0.90274554 0.22210068 0.46209245 0.99929203]
[0.12063093 0.34473951 0.27391851 0.50428395 0.1341161 ]]
Random Sample from Uniform Distribution with Shape:
[[0.03861068 0.49404621 0.02740142 0.35754029]
[0.58093402 0.44863046 0.32756581 0.68461918]
[0.15438759 0.65030393 0.60325935 0.17807852]]
Random Sample from Standard Normal Distribution with Shape:
[[ 0.62770403 -1.24629493 0.29413905 0.59553562]
[ 0.49028797 0.81990155 2.90390159 0.98166222]
[-2.01682907 0.48131821 0.85393762 0.2309463 ]]
###Markdown
Basic Conditions and Operations
###Code
val = 3
arr = np.array([1,2,3,4,5,6,7,8,9])
print("Actual arr: ",arr)
print("\n")
print("Summation with all elements: ",arr+val)
print("Substrtaction with all elements: ",arr-val)
print("Multiplication with all elements: ",arr*val)
print("Division with all elements: ",arr/val)
print("Reminder of all elements: ",arr%val)
print("\n")
print("Checking condition with all elements: ",arr<val)
#Extract elements with condition
print("Elements less than {} are {} ".format(val,arr[arr<val]))
###Output
Actual arr: [1 2 3 4 5 6 7 8 9]
Summation with all elements: [ 4 5 6 7 8 9 10 11 12]
Subtraction with all elements: [-2 -1 0 1 2 3 4 5 6]
Multiplication with all elements: [ 3 6 9 12 15 18 21 24 27]
Division with all elements: [0.33333333 0.66666667 1. 1.33333333 1.66666667 2.
2.33333333 2.66666667 3. ]
Reminder of all elements: [1 2 0 1 2 0 1 2 0]
Checking condition with all elements: [ True True False False False False False False False]
Elements less than 3 are [1 2]
|
0.16/_downloads/plot_read_evoked.ipynb
|
###Markdown
Reading and writing an evoked file

This script shows how to read and write evoked datasets.
###Code
# Author: Alexandre Gramfort <[email protected]>
#
# License: BSD (3-clause)
from mne import read_evokeds
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
fname = data_path + '/MEG/sample/sample_audvis-ave.fif'
# Reading
condition = 'Left Auditory'
evoked = read_evokeds(fname, condition=condition, baseline=(None, 0),
proj=True)
###Output
_____no_output_____
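The title also covers writing; here is a minimal sketch of writing the evoked data back to disk (assuming `mne.write_evokeds`; the output filename below is hypothetical, and evoked filenames conventionally end in `-ave.fif`):
###Code
from mne import write_evokeds

# Write the evoked response to a new FIF file (accepts a single Evoked or a list)
write_evokeds(data_path + '/MEG/sample/sample_audvis-copy-ave.fif', evoked)
###Output
_____no_output_____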
###Markdown
Show result as a butterfly plot. By using `exclude=[]`, bad channels are not excluded and are shown in red.
###Code
evoked.plot(exclude=[], time_unit='s')
# Show result as a 2D image (x: time, y: channels, color: amplitude)
evoked.plot_image(exclude=[], time_unit='s')
###Output
_____no_output_____
|
01_array_gallery.ipynb
|
###Markdown
2D Arrays
###Code
%%capture_png imgs/example.png
#Checkboard
pixX= 15
pixY= 15
array = [[(i+j)%2 for i in range(pixX)] for j in range(pixY)]
disp(array)
%%capture_png imgs/example.png
array=np.full((10,10),10)
disp(array)
%%capture_png imgs/example.png
#Linear diagonal
array = [[i+j for i in range(pixX)] for j in range(pixY)]
disp(array)
%%capture_png imgs/example.png
# alternative with numpy
array=np.fromfunction(lambda i, j: i + j, shape=(15, 15))
disp(array)
%%capture_png imgs/example.png
#Linear in axis1 direction
array= [[i for i in range(pixX)] for j in range(pixY)]
disp(array)
%%capture_png imgs/example.png
#Linear counter hori
array= [[i*pixY+j for i in range(pixX)] for j in range(pixY)]
disp(array)
%%capture_png imgs/example.png
#Random
array = np.random.randint(0, 10, size=(15, 15))
disp(array)
%%capture_png imgs/example.png
# sinudial fromfunction
array=np.fromfunction(lambda i, j: np.sin(j), (15, 15))
disp(array, sep= ".1f")
%%capture_png imgs/example.png
x = np.linspace(0, 15, 15)
y = np.linspace(0, 15, 15)
xx, yy = np.meshgrid(x, y, sparse=False)
disp(xx)
%%capture_png imgs/example.png
x = np.linspace(0, 15, 15)
y = np.linspace(0, 15, 15)
xx, yy = np.meshgrid(x, y, sparse=False)
disp(yy)
%%capture_png imgs/example.png
x = np.linspace(0, 15, 15)
y = np.linspace(0, 15, 15)
xx, yy = np.meshgrid(x,y, sparse= False)
array = xx+yy
disp(array)
%%capture_png imgs/example.png
x= np.arange(-pixX//2,pixX//2)
y= np.arange(-pixY//2,pixY//2)
xx, yy = np.meshgrid(x,y, sparse= False)
array = xx+yy
array[ array <= 0 ] = 0
disp(array)
%%capture_png imgs/example.png
#Linear increase from center in all directions
pixX,pixY=(15,15)
x, y = np.meshgrid(np.linspace(-1,1,pixX), np.linspace(-1,1,pixY),sparse=False)
array = np.sqrt(x**2+y**2)
disp(array, sep='.1f' )
%%capture_png imgs/example.png
#Gaussian
pixX,pixY=(15,15)
x, y = np.meshgrid(np.linspace(-1,1,pixX), np.linspace(-1,1,pixY))
d = np.sqrt(x**2+y**2)
sigma, mu = 1.0, 0.0
array = np.exp(-( (d-mu)**2 / ( 2.0 * sigma**2 ) ) )
disp(array, sep='.1f' )
%%capture_png imgs/example.png
# One Minus Gaussian and smaller sigma
pixX,pixY=(15,15)
x, y = np.meshgrid(np.linspace(-1,1,pixX), np.linspace(-1,1,pixY))
d = np.sqrt(x**2+y**2)
sigma, mu = 0.4, 0.0
array = 1-np.exp(-( (d-mu)**2 / ( 2.0 * sigma**2 ) ) )
disp(array, sep='.1f' )
%%capture_png imgs/example.png
x, y = np.indices((31, 31))
dx=15
dy=15
radius=13.5
circ = (x-dx)**2 + (y-dy)**2 <= radius**2
array = np.zeros(x.shape)
array[circ]=1
disp(array)
%%capture_png imgs/example.png
# spiral # code inspired from : https://stackoverflow.com/questions/36834505/creating-a-spiral-array-in-python
pixX,pixY=(15,15)
def spiral(width, height):
NORTH, S, W, E = (0, -1), (0, 1), (-1, 0), (1, 0) # directions
turn_right = {NORTH: E, E: S, S: W, W: NORTH} # old -> new direction
if width < 1 or height < 1:
raise ValueError
x, y = width // 2, height // 2 # start near the center
dx, dy = NORTH # initial direction
matrix = [[None] * width for _ in range(height)]
count = 0
while True:
count += 1
matrix[y][x] = count # visit
# try to turn right
new_dx, new_dy = turn_right[dx,dy]
new_x, new_y = x + new_dx, y + new_dy
        if (0 <= new_x < width and 0 <= new_y < height and
matrix[new_y][new_x] is None): # can turn right
x, y = new_x, new_y
dx, dy = new_dx, new_dy
else: # try to move straight
x, y = x + dx, y + dy
if not (0 <= x < width and 0 <= y < height):
return matrix # nowhere to go
num_pixels=19
array=spiral(pixX, pixY)
disp(array)
%%capture_png imgs/example.png
# linear_step fromfunction with transition
def linear_step_func(x,x0,x1):
y= np.piecewise(x, [
x < x0,
(x >= x0) & (x <= x1),
x > x1],
[0.,
lambda x: x/(x1-x0)+x0/(x0-x1),
1.]
)
return y
array=np.fromfunction(lambda i, j: linear_step_func(j,3,12), (15, 15))
disp(array,sep='.1f')
%%capture_png imgs/example.png
#from 4 regions
region0 = np.zeros( (10,8) )
region1 = np.ones( (10,7) )
region_top= np.concatenate( [region0,region1] , axis=1)
region2 = np.full( (5,5) , 2)
region3 = np.full( (5,10) ,3)
region_bottom = np.concatenate( [region2,region3] , axis=1)
array= np.concatenate( [region_top,region_bottom] ,axis=0)
disp(array)
%%capture_png imgs/example.png
# prepare some coordinates
x, y = np.indices((8, 8))
cube1 = (x < 3) & (y < 3)
cube2 = (x >= 5) & (y >= 5)
link = np.sqrt(abs(x - y)) <= 1
array = np.zeros(x.shape)
array[link]=14
array[cube1]=10
array[cube2]=30
disp(array)
from pathlib import Path
base_directory = Path.cwd() /"temp_images"
include_text = "example"
suffix = ".png"
file_names = []
for subp in base_directory.rglob("*"):
    cond = True
    # python augmented assignment operator: x &= 3 equals x = x & 3
    cond &= include_text in subp.name
cond &= (suffix == subp.suffix)
if cond is True:
file_names.append(subp)
file_names.sort()
len(file_names)
# save here
file_name = "array_gallery"
base_directory = temp_path
target_directory = Path.cwd() / "imgs"
target_directory.mkdir(parents=True, exist_ok=True)
prefix = file_name # delete files that where created in the past
for file in target_directory.rglob("*"):
if (prefix in file.name):
file.unlink()
paths = sorted(Path(base_directory).iterdir(), key=os.path.getmtime)
dest_names = list(name_snippet_pairs.keys())
new_keys = []
for num, (p,des) in enumerate(zip(paths,dest_names)):
to_path = target_directory / f"{file_name}_{num:03}_{des}"
shutil.copy(p, to_path)
new_keys.append(to_path.name)
new_name_snippet_pairs ={}
new_values = list(name_snippet_pairs.values())
for key, value in zip(new_keys,new_values):
if value.startswith("\n"):
value = value[1:]
if value.endswith("\n"):
value = value[:-1]
value += "\n"
new_name_snippet_pairs[key]=value
with open(f'imgs/{file_name}.json', 'w') as fp:
json.dump(new_name_snippet_pairs, fp,indent=2)
display(new_name_snippet_pairs)
!rm -r $base_directory
!git add .
# %%capture_png final_output.png
# import matplotlib.pyplot as plt
# from mpl_toolkits.axes_grid1 import ImageGrid
# import numpy as np
# plt.rcParams['figure.dpi'] = 150
# fig = plt.figure(figsize=(15., 12.), facecolor= "#89CBEC")
# grid = ImageGrid(fig, 111,
# nrows_ncols=(4, 5),
# axes_pad=0.1,
# )
# for ax, im in zip(grid, file_names):
# im = plt.imread(im)
# ax.axis("off")
# ax.imshow(im)
# plt.show()
#shutil.rmtree(temp_path) # remove images folder
###Output
_____no_output_____
|
noise-contrastive-priors/ncp.ipynb
|
###Markdown
Reliable uncertainty estimates for neural network predictions

I previously wrote about [Bayesian neural networks](https://nbviewer.jupyter.org/github/krasserm/bayesian-machine-learning/blob/dev/bayesian-neural-networks/bayesian_neural_networks.ipynb) and explained how uncertainty estimates can be obtained for network predictions. Uncertainty in predictions that comes from uncertainty in network weights is called *epistemic uncertainty* or model uncertainty. A simple regression example demonstrated how epistemic uncertainty increases in regions outside the training data distribution.

A reader later [experimented](https://github.com/krasserm/bayesian-machine-learning/issues/8) with discontinuous ranges of training data and found that uncertainty estimates are lower than expected in training data "gaps", as shown in the following figure near the center of the $x$ axis. In these out-of-distribution (OOD) regions the network is over-confident in its predictions. One reason for this over-confidence is that weight priors usually impose only weak constraints over network outputs in OOD regions.

If we could instead define a prior in data space directly, we could better control uncertainty estimates for OOD data. A prior in data space better captures assumptions about input-output relationships than priors in weight space. Including such a prior through a loss in data space would allow a network to learn distributions over weights that better generalize to OOD regions i.e. enables a network to output more reliable uncertainty estimates.

This is exactly what the paper [Noise Contrastive Priors for Functional Uncertainty](http://proceedings.mlr.press/v115/hafner20a.html) does. In this article I'll give an introduction to their approach and demonstrate how it fixes over-confidence in OOD regions. I will again use non-linear regression with one-dimensional inputs as an example and plan to cover higher-dimensional inputs in a later article. Application of noise contrastive priors (NCPs) is not limited to Bayesian neural networks; they can also be applied to deterministic neural networks. Here, I'll use a Bayesian neural network and implement it with Tensorflow 2 and [Tensorflow Probability](https://www.tensorflow.org/probability).
###Code
import logging
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
import seaborn as sns
from tensorflow.keras.layers import Input, Dense, Lambda, LeakyReLU
from tensorflow.keras.models import Model
from tensorflow.keras.regularizers import L2
from tensorflow_probability import distributions as tfd
from tensorflow_probability import layers as tfpl
from scipy.stats import norm
from utils import (train,
backprop,
select_bands,
select_subset,
style,
plot_data,
plot_prediction,
plot_uncertainty)
%matplotlib inline
logging.getLogger('tensorflow').setLevel(logging.ERROR)
###Output
_____no_output_____
###Markdown
Training dataset
###Code
rng = np.random.RandomState(123)
###Output
_____no_output_____
###Markdown
The training dataset consists of 40 noisy samples from a sinusoidal function `f`, taken from two distinct regions of the input space (red dots). The gray dots illustrate how the noise level increases with $x$ (heteroskedastic noise).
###Code
def f(x):
"""Sinusoidal function."""
return 0.5 * np.sin(25 * x) + 0.5 * x
def noise(x, slope, rng=np.random):
"""Create heteroskedastic noise."""
noise_std = np.maximum(0.0, x + 1.0) * slope
return rng.normal(0, noise_std).astype(np.float32)
x = np.linspace(-1.0, 1.0, 1000, dtype=np.float32).reshape(-1, 1)
x_test = np.linspace(-1.5, 1.5, 200, dtype=np.float32).reshape(-1, 1)
# Noisy samples from f (with heteroskedastic noise)
y = f(x) + noise(x, slope=0.2, rng=rng)
# Select data from 2 of 5 bands (regions)
x_bands, y_bands = select_bands(x, y, mask=[False, True, False, True, False])
# Select 40 random samples from these regions
x_train, y_train = select_subset(x_bands, y_bands, num=40, rng=rng)
plot_data(x_train, y_train, x, f(x))
plt.scatter(x, y, **style['bg_data'], label='Noisy data')
plt.legend();
###Output
_____no_output_____
###Markdown
The goal is to have a model that outputs lower epistemic uncertainty in training data regions and higher epistemic uncertainty in all other regions, including training data "gaps". In addition to estimating epistemic uncertainty, the model should also estimate *aleatoric uncertainty* i.e. the heteroskedastic noise in the training data.

Noise contrastive estimation

The algorithm developed by the authors of the NCP paper is inspired by [noise contrastive estimation](http://proceedings.mlr.press/v9/gutmann10a.html) (NCE). With noise contrastive estimation a model learns to recognize patterns in training data by contrasting them to random noise. Instead of training a model on training data alone, it is trained in the context of a binary classification task with the goal of discriminating training data from noise data sampled from an artificial noise distribution. Hence, in addition to a trained model, NCE also obtains a binary classifier that can estimate the probability of input data to come from the training distribution or from the noise distribution. This can be used to obtain more reliable uncertainty estimates. For example, a higher probability that an input comes from the noise distribution should result in higher model uncertainty.

Samples from a noise distribution are often obtained by adding random noise to training data. These samples represent OOD data. In practice it is often sufficient to have OOD samples *near* the boundary of the training data distribution to also get reliable uncertainty estimates in other regions of the OOD space. Noise contrastive priors are based on this hypothesis.

Noise contrastive priors

A noise contrastive prior for regression is a joint *data prior* $p(x, y)$ over input $x$ and output $y$. Using the product rule of probability, $p(x, y) = p(x)p(y \mid x)$, it can be defined as the product of an *input prior* $p(x)$ and an *output prior* $p(y \mid x)$. The input prior describes the distribution of OOD data $\tilde{x}$ that are generated from the training data $x$ by adding random noise $\epsilon$, i.e. $\tilde{x} = x + \epsilon$ where $\epsilon \sim \mathcal{N}(0, \sigma_x^2)$. The input prior can therefore be defined as the convolved distribution:

$$p_{nc}(\tilde{x}) = {1 \over N} \sum_{i=1}^N \mathcal{N}(\tilde{x} - x_i \mid 0, \sigma_x^2)\tag{1}$$

where $x_i$ are the inputs from the training dataset and $\sigma_x$ is a hyper-parameter. As described in the paper, models trained with NCPs are quite robust to the size of input noise $\sigma_x$. The following figure visualizes the distribution of training inputs and OOD inputs as histograms, the orange line is the input prior density.
###Code
def perturbe_x(x, y, sigma_x, n=100):
    '''Perturb input x with noise sigma_x (n samples).'''
ood_x = x + np.random.normal(scale=sigma_x, size=(x.shape[0], n))
ood_y = np.tile(y, n)
return ood_x.reshape(-1, 1), ood_y.reshape(-1, 1)
def input_prior_density(x, x_train, sigma_x):
"""Compute input prior density of x."""
return np.mean(norm(0, sigma_x).pdf(x - x_train.reshape(1, -1)), axis=1, keepdims=True)
sigma_x = 0.2
sigma_y = 1.0
ood_x, ood_y = perturbe_x(x_train, y_train, sigma_x, n=25)
ood_density = input_prior_density(ood_x, x_train, sigma_x)
sns.lineplot(x=ood_x.ravel(), y=ood_density.ravel(), color='tab:orange');
sns.histplot(data={'Train inputs': x_train.ravel(), 'OOD inputs': ood_x.ravel()},
element='bars', stat='density', alpha=0.1, common_norm=False)
plt.title('Input prior density')
plt.xlabel('x');
###Output
_____no_output_____
###Markdown
The definition of the output prior is motivated by *data augmentation*. A model should be encouraged to not only predict target $y$ at training input $x$ but also predict the same target at perturbed input $\tilde{x}$ that has been generated from $x$ by adding noise. The output prior is therefore defined as:

$$p_{nc}(\tilde{y} \mid \tilde{x}) = \mathcal{N}(\tilde{y} \mid y, \sigma_y^2)\tag{2}$$

$\sigma_y$ is a hyper-parameter that should cover relatively high prior uncertainty in model output given OOD input. The joint prior $p(x, y)$ is best visualized by sampling values from it and doing a kernel-density estimation from these samples (a density plot from an analytical evaluation is rather "noisy" because of the data augmentation setting).
###Code
def output_prior_dist(y, sigma_y):
"""Create output prior distribution (data augmentation setting)."""
return tfd.Independent(tfd.Normal(y.ravel(), sigma_y))
def sample_joint_prior(ood_x, output_prior, n):
"""Draw n samples from joint prior at ood_x."""
x_sample = np.tile(ood_x.ravel(), n)
y_sample = output_prior.sample(n).numpy().ravel()
return x_sample, y_sample
output_prior = output_prior_dist(ood_y, sigma_y)
x_samples, y_samples = sample_joint_prior(ood_x, output_prior, n=10)
sns.kdeplot(x=x_samples, y=y_samples,
levels=10, thresh=0,
fill=True, cmap='viridis',
cbar=True, cbar_kws={'format': '%.2f'},
gridsize=100, clip=((-1, 1), (-2, 2)))
plot_data(x_train, y_train, x, f(x))
plt.title('Joint prior density')
plt.legend();
###Output
_____no_output_____
###Markdown
Regression models

The regression models used in the following subsections are probabilistic models $p(y \mid x)$ parameterized by the outputs of a neural network given input $x$. All networks have two hidden layers with leaky ReLU activations and 200 units each. The details of the output layers are described along with the individual models. I will first demonstrate how two models without NCPs fail to produce reliable uncertainty estimates in OOD regions and then show how NCPs can fix that.

Deterministic neural network without NCPs

A regression model that uses a deterministic neural network for parameterization can be defined as $p(y \mid x, \boldsymbol{\theta}) = \mathcal{N}(y \mid \mu(x, \boldsymbol{\theta}), \sigma^2(x, \boldsymbol{\theta}))$. Mean $\mu$ and standard deviation $\sigma$ are functions of input $x$ and network weights $\boldsymbol{\theta}$. In a deterministic neural network, $\boldsymbol{\theta}$ are point estimates. Outputs at input $x$ can be generated by sampling from $p(y \mid x, \boldsymbol{\theta})$ where $\mu(x,\boldsymbol{\theta})$ is the expected value of the sampled outputs and $\sigma^2(x, \boldsymbol{\theta})$ their variance. The variance represents aleatoric uncertainty. Given a training dataset $\mathbf{x}, \mathbf{y} = \left\{ x_i, y_i \right\}$ and using $\log p(\mathbf{y} \mid \mathbf{x}, \boldsymbol{\theta}) = \sum_i \log p(y_i \mid x_i, \boldsymbol{\theta})$, a maximum likelihood (ML) estimate of $\boldsymbol{\theta}$ can be obtained by minimizing the negative log likelihood.

$$L(\boldsymbol{\theta}) = - \log p(\mathbf{y} \mid \mathbf{x},\boldsymbol{\theta})\tag{3}$$

A maximum-a-posteriori (MAP) estimate can be obtained by minimizing the following loss function:

$$L(\boldsymbol{\theta}) = - \log p(\mathbf{y} \mid \mathbf{x}, \boldsymbol{\theta}) - \lambda \log p(\boldsymbol{\theta})\tag{4}$$

where $p(\boldsymbol{\theta})$ is an isotropic normal prior over network weights with zero mean. This is also known as L2 regularization with regularization strength $\lambda$.

The following implementation uses a Keras `Lambda` layer to produce $p(y \mid x, \boldsymbol{\theta})$, a `tfd.Normal` distribution, as model output. The `loc` and `scale` parameters of that distribution are set from the output of layers `mu` and `sigma`, respectively. Layer `sigma` uses a [softplus](https://en.wikipedia.org/wiki/Rectifier_(neural_networks)#Softplus) activation function to ensure non-negative output.
###Code
def create_model(n_hidden=200, regularization_strength=0.01):
l2_regularizer = L2(regularization_strength)
leaky_relu = LeakyReLU(alpha=0.2)
x_in = Input(shape=(1,))
x = Dense(n_hidden, activation=leaky_relu, kernel_regularizer=l2_regularizer)(x_in)
x = Dense(n_hidden, activation=leaky_relu, kernel_regularizer=l2_regularizer)(x)
m = Dense(1, name='mu')(x)
s = Dense(1, activation='softplus', name='sigma')(x)
d = Lambda(lambda p: tfd.Normal(loc=p[0], scale=p[1] + 1e-5))((m, s))
return Model(x_in, d)
model = create_model()
###Output
_____no_output_____
###Markdown
To reduce overfitting of the model to the relatively small training set an L2 regularizer is added to the kernel of the hidden layers. The value of the corresponding regularization term in the loss function can be obtained via `model.losses` during training. The negative log likelihood is computed with the `log_prob` method of the distribution returned from a `model` call.
###Code
@tf.function
def train_step(model, optimizer, x, y):
with tf.GradientTape() as tape:
out_dist = model(x, training=True)
nll = -out_dist.log_prob(y)
reg = model.losses
loss = tf.reduce_sum(nll) + tf.reduce_sum(reg)
optimizer.apply_gradients(backprop(model, loss, tape))
return loss, out_dist.mean()
###Output
_____no_output_____
###Markdown
With this specific regression model, we can only predict aleatoric uncertainty via $\sigma(x, \boldsymbol{\theta})$ but not epistemic uncertainty. After training the model, we can plot the expected output $\mu$ together with aleatoric uncertainty. Aleatoric uncertainty increases in training data regions as $x$ increases but is not reliable in OOD regions.
###Code
train(model, x_train, y_train, batch_size=10, epochs=4000, step_fn=train_step)
out_dist = model(x_test)
aleatoric_uncertainty=out_dist.stddev()
expected_output = out_dist.mean()
plt.figure(figsize=(15, 5))
plt.subplot(1, 2, 1)
plot_data(x_train, y_train, x, f(x))
plot_prediction(x_test,
expected_output,
aleatoric_uncertainty=aleatoric_uncertainty)
plt.ylim(-2, 2)
plt.legend()
plt.subplot(1, 2, 2)
plot_uncertainty(x_test,
aleatoric_uncertainty=aleatoric_uncertainty)
plt.ylim(0, 1)
plt.legend();
###Output
_____no_output_____
###Markdown
Bayesian neural network without NCPs

In a Bayesian neural network, we infer a posterior distribution $p(\mathbf{w} \mid \mathbf{x}, \mathbf{y})$ over weights $\mathbf{w}$ given training data $\mathbf{x}$, $\mathbf{y}$ instead of making point estimates with ML or MAP. In general, the true posterior $p(\mathbf{w} \mid \mathbf{x}, \mathbf{y})$ is intractable for a neural network and is often approximated with a variational distribution $q(\mathbf{w} \mid \boldsymbol{\theta}, \mathbf{x}, \mathbf{y})$, or $q(\mathbf{w} \mid \boldsymbol{\theta})$ for short. You can find an introduction to Bayesian neural networks and variational inference in [this article](https://nbviewer.jupyter.org/github/krasserm/bayesian-machine-learning/blob/dev/bayesian-neural-networks/bayesian_neural_networks.ipynb).

Following the conventions in the linked article, I'm using the variable $\mathbf{w}$ for neural network weights that are random variables and $\boldsymbol{\theta}$ for the parameters of the variational distribution and for neural network weights that are deterministic variables. This distinction is useful for models that use both variational and deterministic layers. Here, we will implement a variational approximation only for the `mu` layer i.e. for the layer that produces the expected output $\mu(x, \mathbf{w}, \boldsymbol{\theta})$. This time it additionally depends on weights $\mathbf{w}$ sampled from the variational distribution $q(\mathbf{w} \mid \boldsymbol{\theta})$. The variational distribution $q(\mathbf{w} \mid \boldsymbol{\theta})$ therefore induces a distribution over the expected output $q(\mu \mid x, \boldsymbol{\theta}) = \int \mu(x, \mathbf{w}, \boldsymbol{\theta}) q(\mathbf{w} \mid \boldsymbol{\theta}) d\mathbf{w}$. To generate an output at input $x$ we first sample from the variational distribution $q(\mathbf{w} \mid \boldsymbol{\theta})$ and then use that sample as input for $p(y \mid x, \mathbf{w}, \boldsymbol{\theta}) = \mathcal{N}(y \mid \mu(x, \mathbf{w}, \boldsymbol{\theta}), \sigma^2(x, \boldsymbol{\theta}))$ from which we finally sample an output value $y$. The variance of output values covers both epistemic and aleatoric uncertainty, where aleatoric uncertainty is contributed by $\sigma^2(x, \boldsymbol{\theta})$. The mean of $\mu$ is the expected value of output $y$ and the variance of $\mu$ represents epistemic uncertainty i.e. model uncertainty.

Since the true posterior $p(\mathbf{w} \mid \mathbf{x}, \mathbf{y})$ is intractable in the general case, the predictive distribution $p(y \mid x, \boldsymbol{\theta}) = \int p(y \mid x, \mathbf{w}, \boldsymbol{\theta}) q(\mathbf{w} \mid \boldsymbol{\theta}) d\mathbf{w}$ is intractable too and cannot be used directly for optimizing $\boldsymbol{\theta}$. In the special case of Bayesian inference for layer `mu` only, there should be a tractable solution (I think) but we will assume the general case here and use variational inference. The loss function is therefore the negative variational lower bound.

$$L(\boldsymbol{\theta}) = - \mathbb{E}_{q(\mathbf{w} \mid \boldsymbol{\theta})} \log p(\mathbf{y} \mid \mathbf{x}, \mathbf{w}, \boldsymbol{\theta}) + \mathrm{KL}(q(\mathbf{w} \mid \boldsymbol{\theta}) \mid\mid p(\mathbf{w}))\tag{5}$$

The expectation w.r.t. $q(\mathbf{w} \mid \boldsymbol{\theta})$ is approximated via sampling in a forward pass. 
In a [previous article](https://nbviewer.jupyter.org/github/krasserm/bayesian-machine-learning/blob/dev/bayesian-neural-networks/bayesian_neural_networks.ipynb) I implemented that with a custom `DenseVariational` layer; here I'm using `DenseReparameterization` from Tensorflow Probability. Both model the variational distribution over weights as a factorized normal distribution $q(\mathbf{w} \mid \boldsymbol{\theta})$ and produce a stochastic weight output by sampling from that distribution. They only differ in some implementation details.
###Code
def create_model(n_hidden=200):
leaky_relu = LeakyReLU(alpha=0.2)
x_in = Input(shape=(1,))
x = Dense(n_hidden, activation=leaky_relu)(x_in)
x = Dense(n_hidden, activation=leaky_relu)(x)
m = tfpl.DenseReparameterization(1, name='mu')(x)
s = Dense(1, activation='softplus', name='sigma')(x)
d = Lambda(lambda p: tfd.Normal(loc=p[0], scale=p[1] + 1e-5))((m, s))
return Model(x_in, d)
model = create_model()
###Output
_____no_output_____
###Markdown
The implementation of the loss function follows directly from Equation $(5)$. The KL divergence is added by the variational layer to the `model` object and can be obtained via `model.losses`. When using mini-batches the KL divergence must be divided by the number of batches per epoch. Because of the small training dataset, the KL divergence is further multiplied by `0.1` to lessen the influence of the prior. The likelihood term of the loss function i.e. the first term in Equation $(5)$ is computed via the distribution returned by the model.
###Code
train_size = x_train.shape[0]
batch_size = 10
batches_per_epoch = train_size / batch_size
kl_weight = 1.0 / batches_per_epoch
# Further reduce regularization effect of KL term
# in variational lower bound since we only have a
# small training set (to prevent that posterior
# over weights collapses to prior).
kl_weight = kl_weight * 0.1
@tf.function
def train_step(model, optimizer, x, y, kl_weight=kl_weight):
with tf.GradientTape() as tape:
out_dist = model(x, training=True)
nll = -out_dist.log_prob(y)
kl_div = model.losses[0]
loss = tf.reduce_sum(nll) + kl_weight * kl_div
optimizer.apply_gradients(backprop(model, loss, tape))
return loss, out_dist.mean()
###Output
_____no_output_____
###Markdown
After training, we run the test input `x_test` several times through the network to obtain samples of the stochastic output of layer `mu`. From these samples we compute the mean and variance of $\mu$ i.e. we numerically approximate $q(\mu \mid x, \boldsymbol{\theta})$. The variance of $\mu$ is a measure of epistemic uncertainty. In the next section we'll use an analytical expression for $q(\mu \mid x, \boldsymbol{\theta})$.
###Code
train(model, x_train, y_train, batch_size=batch_size, epochs=8000, step_fn=train_step)
out_dist_means = []
for i in range(100):
out_dist = model(x_test)
out_dist_means.append(out_dist.mean())
aleatoric_uncertainty = model(x_test).stddev()
epistemic_uncertainty = tf.math.reduce_std(out_dist_means, axis=0)
expected_output = tf.reduce_mean(out_dist_means, axis=0)
plt.figure(figsize=(15, 5))
plt.subplot(1, 2, 1)
plot_data(x_train, y_train, x, f(x))
plot_prediction(x_test,
expected_output,
aleatoric_uncertainty=aleatoric_uncertainty,
epistemic_uncertainty=epistemic_uncertainty)
plt.ylim(-2, 2)
plt.legend()
plt.subplot(1, 2, 2)
plot_uncertainty(x_test,
aleatoric_uncertainty=aleatoric_uncertainty,
epistemic_uncertainty=epistemic_uncertainty)
plt.ylim(0, 1)
plt.legend();
###Output
_____no_output_____
###Markdown
As mentioned in the beginning, a Bayesian neural network with a prior over weights is over-confident in the training data "gap" i.e. in the OOD region between the two training data regions. Our intuition tells us that epistemic uncertainty should be higher here. Also the model is over-confident of a linear relationship for input values less than `-0.5`. Bayesian neural network with NCPsRegularizing the variational posterior $q(\mathbf{w} \mid \boldsymbol{\theta})$ to be closer to a prior over weights $p(\mathbf{w})$ by minimizing the KL divergence in Equation $(5)$ doesn't seem to generalize well to all OOD regions. But, as mentioned in the previous section, the variational distribution $q(\mathbf{w} \mid \boldsymbol{\theta})$ induces a distribution $q(\mu \mid x, \boldsymbol{\theta})$ in data space which allows comparison to a noise contrastive prior that is also defined in data space.In particular, for OOD input $\tilde{x}$, sampled from a noise contrastive input prior $p_{nc}(\tilde{x})$, we want the mean distribution $q(\mu \mid \tilde{x}, \boldsymbol{\theta})$ to be close to a *mean prior* $p_{nc}(\tilde{y} \mid \tilde{x})$ which is the output prior defined in Equation $(2)$. In other words, the expected output and epistemic uncertainty should be close to the mean prior for OOD data. This can be achieved by reparameterizing the KL divergence in weight space as KL divergence in output space by replacing $q(\mathbf{w} \mid \boldsymbol{\theta})$ with $q(\mu \mid \tilde{x}, \boldsymbol{\theta})$ and $p(\mathbf{w})$ with $p_{nc}(\tilde{y} \mid \tilde{x})$. Using an OOD dataset $\mathbf{\tilde{x}}, \mathbf{\tilde{y}}$ derived from a training dataset $\mathbf{x}, \mathbf{y}$ the loss function is$$L(\boldsymbol{\theta}) \approx - \mathbb{E}_{q(\mathbf{w} \mid \boldsymbol{\theta})} \log p(\mathbf{y} \mid \mathbf{x}, \mathbf{w}, \boldsymbol{\theta}) + \mathrm{KL}(q(\boldsymbol{\mu} \mid \mathbf{\tilde{x}}, \boldsymbol{\theta}) \mid\mid p_{nc}(\mathbf{\tilde{y}} \mid \mathbf{\tilde{x}}))\tag{6}$$This is an approximation of Equation $(5)$ for reasons explained in Appendix B of the paper. For their experiments, the authors use the opposite direction of the KL divergence without having found a significant difference i.e. they used the loss function$$L(\boldsymbol{\theta}) = - \mathbb{E}_{q(\mathbf{w} \mid \boldsymbol{\theta})} \log p(\mathbf{y} \mid \mathbf{x}, \mathbf{w}, \boldsymbol{\theta}) + \mathrm{KL}(p_{nc}(\mathbf{\tilde{y}} \mid \mathbf{\tilde{x}}) \mid\mid q(\boldsymbol{\mu} \mid \mathbf{\tilde{x}}, \boldsymbol{\theta}))\tag{7}$$This allows an interpretation of the KL divergence as fitting the mean distribution to an empirical OOD distribution (derived from the training dataset) via maximum likelihood using data augmentation. Recall how the definition of $p_{nc}(\tilde{y} \mid \tilde{x})$ in Equation $(2)$ was motivated by data augmentation. The following implementation uses the loss function defined in Equation $(7)$.Since we have a variational approximation only in the linear `mu` layer we can derive an anlytical expression for $q(\mu \mid \tilde{x}, \boldsymbol{\theta})$ using the parameters $\boldsymbol{\theta}$ of the variational distribution $q(\mathbf{w} \mid \boldsymbol{\theta})$ and the output of the second hidden layer (`inputs` in code below). The corresponding implementation is in the (inner) function `mean_dist`. The model is extended to additionally return the mean distribution.
###Code
def mean_dist_fn(variational_layer):
def mean_dist(inputs):
# Assumes that a deterministic bias variable
# is used in variational_layer
bias_mean = variational_layer.bias_posterior.mean()
# Assumes that a random kernel variable
# is used in variational_layer
kernel_mean = variational_layer.kernel_posterior.mean()
kernel_std = variational_layer.kernel_posterior.stddev()
# A Gaussian over kernel k in variational_layer induces
# a Gaussian over output 'mu' (where mu = inputs * k + b):
#
# - q(k) = N(k|k_mean, k_std^2)
#
# - E[inputs * k + b] = inputs * E[k] + b
# = inputs * k_mean + b
# = mu_mean
#
# - Var[inputs * k + b] = inputs^2 * Var[k]
# = inputs^2 * k_std^2
# = mu_var
# = mu_std^2
#
# -q(mu) = N(mu|mu_mean, mu_std^2)
mu_mean = tf.matmul(inputs, kernel_mean) + bias_mean
mu_var = tf.matmul(inputs ** 2, kernel_std ** 2)
mu_std = tf.sqrt(mu_var)
return tfd.Normal(mu_mean, mu_std)
return mean_dist
def create_model(n_hidden=200):
leaky_relu = LeakyReLU(alpha=0.2)
variational_layer = tfpl.DenseReparameterization(1, name='mu')
x_in = Input(shape=(1,))
x = Dense(n_hidden, activation=leaky_relu)(x_in)
x = Dense(n_hidden, activation=leaky_relu)(x)
m = variational_layer(x)
s = Dense(1, activation='softplus', name='sigma')(x)
mean_dist = Lambda(mean_dist_fn(variational_layer))(x)
out_dist = Lambda(lambda p: tfd.Normal(loc=p[0], scale=p[1] + 1e-5))((m, s))
return Model(x_in, [out_dist, mean_dist])
model = create_model()
###Output
_____no_output_____
###Markdown
With the mean distribution returned by the model, the KL divergence can be computed analytically. For models with more variational layers we cannot derive an analytical expression for $q(\mu \mid \tilde{x}, \boldsymbol{\theta})$ and have to estimate the KL divergence using samples from the mean distribution i.e. using the stochastic output of layer `mu` directly. The following implementation computes the KL divergence analytically.
###Code
@tf.function
def train_step(model, optimizer, x, y,
sigma_x=0.5,
sigma_y=1.0,
ood_std_noise=0.1,
ncp_weight=0.1):
# Generate random OOD data from training data
ood_x = x + tf.random.normal(tf.shape(x), stddev=sigma_x)
# NCP output prior (data augmentation setting)
ood_mean_prior = tfd.Normal(y, sigma_y)
with tf.GradientTape() as tape:
# output and mean distribution for training data
out_dist, mean_dist = model(x, training=True)
# output and mean distribution for OOD data
ood_out_dist, ood_mean_dist = model(ood_x, training=True)
# Negative log likelihood of training data
nll = -out_dist.log_prob(y)
# KL divergence between output prior and OOD mean distribution
kl_ood_mean = tfd.kl_divergence(ood_mean_prior, ood_mean_dist)
if ood_std_noise is None:
kl_ood_std = 0.0
else:
# Encourage aleatoric uncertainty to be close to a
# pre-defined noise for OOD data (ood_std_noise)
ood_std_prior = tfd.Normal(0, ood_std_noise)
ood_std_dist = tfd.Normal(0, ood_out_dist.stddev())
kl_ood_std = tfd.kl_divergence(ood_std_prior, ood_std_dist)
loss = tf.reduce_sum(nll + ncp_weight * kl_ood_mean + ncp_weight * kl_ood_std)
optimizer.apply_gradients(backprop(model, loss, tape))
return loss, mean_dist.mean()
###Output
_____no_output_____
###Markdown
The implementation of the loss function uses an additional term that is not present in Equation $(7)$. This term is added if `ood_std_noise` is defined. It encourages the aleatoric uncertainty for OOD data to be close to a predefined `ood_std_noise`, a hyper-parameter that we set to a rather low value so that low aleatoric uncertainty is predicted in OOD regions. This makes sense as we can only reasonably estimate aleatoric noise in regions of existing training data.

After training the model we can get the expected output, epistemic uncertainty and aleatoric uncertainty with a single pass of test inputs through the model. The expected output can be obtained from the mean of the mean distribution, epistemic uncertainty from the variance of the mean distribution and aleatoric uncertainty from the variance of the output distribution (= output of the `sigma` layer).
###Code
train(model, x_train, y_train, batch_size=10, epochs=15000, step_fn=train_step)
out_dist, mean_dist = model(x_test)
aleatoric_uncertainty = out_dist.stddev()
epistemic_uncertainty = mean_dist.stddev()
expected_output = mean_dist.mean()
plt.figure(figsize=(15, 5))
plt.subplot(1, 2, 1)
plot_data(x_train, y_train, x, f(x))
plot_prediction(x_test,
expected_output,
aleatoric_uncertainty=aleatoric_uncertainty,
epistemic_uncertainty=epistemic_uncertainty)
plt.ylim(-2, 2)
plt.legend()
plt.subplot(1, 2, 2)
plot_uncertainty(x_test,
aleatoric_uncertainty=aleatoric_uncertainty,
epistemic_uncertainty=epistemic_uncertainty)
plt.ylim(0, 1)
plt.legend();
###Output
_____no_output_____
|
docs/tutorials/how-to-create-stac-catalogs.ipynb
|
###Markdown
How to create STAC Catalogs

STAC Community Sprint, Arlington, November 7th 2019

This notebook runs through some of the basics of using PySTAC to create a static STAC. It was part of a 30 minute presentation at the [community STAC sprint](https://github.com/radiantearth/community-sprints/tree/master/11052019-arlignton-va) in Arlington, VA in November 2019.

This tutorial will require the `boto3`, `rasterio`, and `shapely` libraries:
###Code
!pip install boto3
!pip install rasterio
!pip install shapely
###Output
Requirement already satisfied: boto3 in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (1.10.8)
Requirement already satisfied: botocore<1.14.0,>=1.13.8 in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (from boto3) (1.13.8)
Requirement already satisfied: s3transfer<0.3.0,>=0.2.0 in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (from boto3) (0.2.1)
Requirement already satisfied: jmespath<1.0.0,>=0.7.1 in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (from boto3) (0.9.4)
Requirement already satisfied: urllib3<1.26,>=1.20; python_version >= "3.4" in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (from botocore<1.14.0,>=1.13.8->boto3) (1.25.6)
Requirement already satisfied: docutils<0.16,>=0.10 in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (from botocore<1.14.0,>=1.13.8->boto3) (0.15.2)
Requirement already satisfied: python-dateutil<3.0.0,>=2.1; python_version >= "2.7" in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (from botocore<1.14.0,>=1.13.8->boto3) (2.8.1)
Requirement already satisfied: six>=1.5 in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (from python-dateutil<3.0.0,>=2.1; python_version >= "2.7"->botocore<1.14.0,>=1.13.8->boto3) (1.12.0)
[33mWARNING: You are using pip version 20.1.1; however, version 20.2 is available.
You should consider upgrading via the '/Users/rob/proj/stac/pystac/venv/bin/python -m pip install --upgrade pip' command.[0m
Requirement already satisfied: rasterio in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (1.1.0)
Requirement already satisfied: numpy in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (from rasterio) (1.17.3)
Requirement already satisfied: snuggs>=1.4.1 in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (from rasterio) (1.4.7)
Requirement already satisfied: click<8,>=4.0 in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (from rasterio) (7.0)
Requirement already satisfied: click-plugins in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (from rasterio) (1.1.1)
Requirement already satisfied: attrs in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (from rasterio) (19.3.0)
Requirement already satisfied: cligj>=0.5 in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (from rasterio) (0.5.0)
Requirement already satisfied: affine in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (from rasterio) (2.3.0)
Requirement already satisfied: pyparsing>=2.1.6 in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (from snuggs>=1.4.1->rasterio) (2.4.2)
[33mWARNING: You are using pip version 20.1.1; however, version 20.2 is available.
You should consider upgrading via the '/Users/rob/proj/stac/pystac/venv/bin/python -m pip install --upgrade pip' command.[0m
Requirement already satisfied: shapely in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (1.6.4.post2)
[33mWARNING: You are using pip version 20.1.1; however, version 20.2 is available.
You should consider upgrading via the '/Users/rob/proj/stac/pystac/venv/bin/python -m pip install --upgrade pip' command.[0m
###Markdown
We can import pystac and access most of the functionality we need with the single import:
###Code
import pystac
###Output
_____no_output_____
###Markdown
Creating a catalog from a local file

To give us some material to work with, let's download a single image from the [Spacenet 5 challenge](https://www.topcoder.com/challenges/30099956). We'll use a temporary directory to save off our single-item STAC.
###Code
import os
import urllib.request
from tempfile import TemporaryDirectory
tmp_dir = TemporaryDirectory()
img_path = os.path.join(tmp_dir.name, 'image.tif')
url = ('http://spacenet-dataset.s3.amazonaws.com/'
'spacenet/SN5_roads/train/AOI_7_Moscow/MS/'
'SN5_roads_train_AOI_7_Moscow_MS_chip996.tif')
urllib.request.urlretrieve(url, img_path)
###Output
_____no_output_____
###Markdown
We want to create a Catalog. Let's check the pydocs for `Catalog` to see what information we'll need. (We use `__doc__` instead of `help()` here to avoid printing out all the docs for the class.)
###Code
print(pystac.Catalog.__doc__)
###Output
A PySTAC Catalog represents a STAC catalog in memory.
A Catalog is a :class:`~pystac.STACObject` that may contain children,
which are instances of :class:`~pystac.Catalog` or :class:`~pystac.Collection`,
as well as :class:`~pystac.Item` s.
Args:
id (str): Identifier for the catalog. Must be unique within the STAC.
description (str): Detailed multi-line description to fully explain the catalog.
`CommonMark 0.28 syntax <http://commonmark.org/>`_ MAY be used for rich text
representation.
title (str or None): Optional short descriptive one-line title for the catalog.
stac_extensions (List[str]): Optional list of extensions the Catalog implements.
href (str or None): Optional HREF for this catalog, which be set as the catalog's
self link's HREF.
Attributes:
id (str): Identifier for the catalog.
description (str): Detailed multi-line description to fully explain the catalog.
title (str or None): Optional short descriptive one-line title for the catalog.
stac_extensions (List[str] or None): Optional list of extensions the Catalog implements.
extra_fields (dict or None): Extra fields that are part of the top-level JSON properties
of the Catalog.
links (List[Link]): A list of :class:`~pystac.Link` objects representing
all links associated with this Catalog.
###Markdown
Let's just give an ID and a description. We don't have to worry about the HREF right now; that will be set later.
###Code
catalog = pystac.Catalog(id='test-catalog', description='Tutorial catalog.')
###Output
_____no_output_____
###Markdown
There are no children or items in the catalog, since we haven't added anything yet.
###Code
print(list(catalog.get_children()))
print(list(catalog.get_items()))
###Output
[]
[]
###Markdown
We'll now create an Item to represent the image. Check the pydocs to see what you need to supply:
###Code
print(pystac.Item.__doc__)
###Output
An Item is the core granular entity in a STAC, containing the core metadata
that enables any client to search or crawl online catalogs of spatial 'assets' -
satellite imagery, derived data, DEM's, etc.
Args:
id (str): Provider identifier. Must be unique within the STAC.
geometry (dict): Defines the full footprint of the asset represented by this item,
formatted according to `RFC 7946, section 3.1 (GeoJSON)
<https://tools.ietf.org/html/rfc7946>`_.
bbox (List[float] or None): Bounding Box of the asset represented by this item using
either 2D or 3D geometries. The length of the array must be 2*n where n is the
number of dimensions. Could also be None in the case of a null geometry.
datetime (datetime or None): Datetime associated with this item. If None,
a start_datetime and end_datetime must be supplied in the properties.
properties (dict): A dictionary of additional metadata for the item.
stac_extensions (List[str]): Optional list of extensions the Item implements.
href (str or None): Optional HREF for this item, which be set as the item's
self link's HREF.
collection (Collection or str): The Collection or Collection ID that this item
belongs to.
extra_fields (dict or None): Extra fields that are part of the top-level JSON properties
of the Item.
Attributes:
id (str): Provider identifier. Unique within the STAC.
geometry (dict): Defines the full footprint of the asset represented by this item,
formatted according to `RFC 7946, section 3.1 (GeoJSON)
<https://tools.ietf.org/html/rfc7946>`_.
bbox (List[float] or None): Bounding Box of the asset represented by this item using
either 2D or 3D geometries. The length of the array is 2*n where n is the
number of dimensions. Could also be None in the case of a null geometry.
datetime (datetime or None): Datetime associated with this item. If None,
the start_datetime and end_datetime in the common_metadata
will supply the datetime range of the Item.
properties (dict): A dictionary of additional metadata for the item.
stac_extensions (List[str] or None): Optional list of extensions the Item implements.
collection (Collection or None): Collection that this item is a part of.
links (List[Link]): A list of :class:`~pystac.Link` objects representing
all links associated with this STACObject.
assets (Dict[str, Asset]): Dictionary of asset objects that can be downloaded,
each with a unique key.
collection_id (str or None): The Collection ID that this item belongs to, if any.
extra_fields (dict or None): Extra fields that are part of the top-level JSON properties
of the Item.
###Markdown
Using [rasterio](https://rasterio.readthedocs.io/en/stable/), we can pull out the bounding box of the image to use for the image metadata. If the image contained a NoData border, we would ideally pull out the footprint and save it as the geometry; in this case, we're working with a small chip that most likely has no NoData values.
###Code
import rasterio
from shapely.geometry import Polygon, mapping
def get_bbox_and_footprint(raster_uri):
with rasterio.open(raster_uri) as ds:
bounds = ds.bounds
bbox = [bounds.left, bounds.bottom, bounds.right, bounds.top]
footprint = Polygon([
[bounds.left, bounds.bottom],
[bounds.left, bounds.top],
[bounds.right, bounds.top],
[bounds.right, bounds.bottom]
])
return (bbox, mapping(footprint))
bbox, footprint = get_bbox_and_footprint(img_path)
print(bbox)
print(footprint)
###Output
[37.6616853489879, 55.73478197572927, 37.66573047610874, 55.73882710285011]
{'type': 'Polygon', 'coordinates': (((37.6616853489879, 55.73478197572927), (37.6616853489879, 55.73882710285011), (37.66573047610874, 55.73882710285011), (37.66573047610874, 55.73478197572927), (37.6616853489879, 55.73478197572927)),)}
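As an aside to the NoData remark above: a hedged sketch of extracting a true footprint from the dataset mask (assuming `rasterio.features.shapes` and Shapely's `unary_union`; the helper name is ours, and it is not needed for this chip):
###Code
from rasterio.features import shapes
from shapely.geometry import shape, mapping
from shapely.ops import unary_union

def get_footprint(raster_uri):
    with rasterio.open(raster_uri) as ds:
        mask = ds.dataset_mask()  # 255 where data is valid, 0 where NoData
        # Polygonize the valid-data regions and union them into one footprint
        polys = [shape(geom) for geom, val
                 in shapes(mask, transform=ds.transform) if val == 255]
    return mapping(unary_union(polys))
###Output
_____no_output_____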
###Markdown
We're also using `datetime.utcnow()` to supply the required datetime property for our Item. Since this is a required property, you might often find yourself making up a time to fill in if you don't know the exact capture time.
###Code
from datetime import datetime
item = pystac.Item(id='local-image',
geometry=footprint,
bbox=bbox,
datetime=datetime.utcnow(),
properties={})
###Output
_____no_output_____
###Markdown
We haven't added it to a catalog yet, so its parent isn't set. Once we add it to the catalog, we can see that it correctly links to its parent.
###Code
item.get_parent() is None
catalog.add_item(item)
item.get_parent()
###Output
_____no_output_____
###Markdown
`describe()` is a useful method on `Catalog` - but be careful when using it on large catalogs, as it will walk the entire tree of the STAC.
###Code
catalog.describe()
###Output
* <Catalog id=test-catalog>
* <Item id=local-image>
###Markdown
Adding Assets

We've created an Item, but there aren't any assets associated with it. Let's create one:
###Code
print(pystac.Asset.__doc__)
item.add_asset(
key='image',
asset=pystac.Asset(
href=img_path,
media_type=pystac.MediaType.GEOTIFF
)
)
###Output
_____no_output_____
###Markdown
At any time we can call `to_dict()` on STAC objects to see how the STAC JSON is shaping up. Notice the asset is now set:
###Code
import json
print(json.dumps(item.to_dict(), indent=4))
###Output
{
"type": "Feature",
"stac_version": "1.0.0-beta.2",
"id": "local-image",
"properties": {
"datetime": "2020-08-03T03:47:48.786929Z"
},
"geometry": {
"type": "Polygon",
"coordinates": [
[
[
37.6616853489879,
55.73478197572927
],
[
37.6616853489879,
55.73882710285011
],
[
37.66573047610874,
55.73882710285011
],
[
37.66573047610874,
55.73478197572927
],
[
37.6616853489879,
55.73478197572927
]
]
]
},
"links": [
{
"rel": "root",
"href": null,
"type": "application/json"
},
{
"rel": "parent",
"href": null,
"type": "application/json"
}
],
"assets": {
"image": {
"href": "/var/folders/sv/zr8j0t4j1f726nhlt3vb8c300000gn/T/tmpt1wuelid/image.tif",
"type": "image/tiff; application=geotiff"
}
},
"bbox": [
37.6616853489879,
55.73478197572927,
37.66573047610874,
55.73882710285011
]
}
###Markdown
Note that the link `href` properties are `null`. This is OK, as we're working with the STAC in memory. Next, we'll talk about writing the catalog out, and how to set those HREFs.
Saving the catalog
As the JSON above indicates, there are no HREFs set on these in-memory items. PySTAC uses the `self` link on STAC objects to track where the file lives. Because we haven't set them, they evaluate to `None`:
###Code
print(catalog.get_self_href() is None)
print(item.get_self_href() is None)
###Output
True
True
###Markdown
In order to set them, we can use `normalize_hrefs`. This method will create a normalized set of HREFs for each STAC object in the catalog, according to the [best practices document](https://github.com/radiantearth/stac-spec/blob/v0.8.1/best-practices.md#catalog-layout)'s recommendations on how to lay out a catalog.
###Code
catalog.normalize_hrefs(os.path.join(tmp_dir.name, 'stac'))
###Output
_____no_output_____
###Markdown
Now that we've normalized to a root directory (the temporary directory), we see that the `self` links are set:
###Code
print(catalog.get_self_href())
print(item.get_self_href())
###Output
/var/folders/sv/zr8j0t4j1f726nhlt3vb8c300000gn/T/tmpt1wuelid/stac/catalog.json
/var/folders/sv/zr8j0t4j1f726nhlt3vb8c300000gn/T/tmpt1wuelid/stac/local-image/local-image.json
###Markdown
We can now call `save` on the catalog, which will recursively save all the STAC objects to their respective self HREFs. Save requires a `CatalogType` to be set. You can review the [API docs](https://pystac.readthedocs.io/en/stable/api.html#catalogtype) on `CatalogType` to see what each type means (unfortunately `help` doesn't show docstrings for attributes).
###Code
catalog.save(catalog_type=pystac.CatalogType.SELF_CONTAINED)
!ls {tmp_dir.name}/stac/*
with open(catalog.get_self_href()) as f:
print(f.read())
with open(item.get_self_href()) as f:
print(f.read())
###Output
{
"type": "Feature",
"stac_version": "1.0.0-beta.2",
"id": "local-image",
"properties": {
"datetime": "2020-08-03T03:47:48.786929Z"
},
"geometry": {
"type": "Polygon",
"coordinates": [
[
[
37.6616853489879,
55.73478197572927
],
[
37.6616853489879,
55.73882710285011
],
[
37.66573047610874,
55.73882710285011
],
[
37.66573047610874,
55.73478197572927
],
[
37.6616853489879,
55.73478197572927
]
]
]
},
"links": [
{
"rel": "root",
"href": "../catalog.json",
"type": "application/json"
},
{
"rel": "parent",
"href": "../catalog.json",
"type": "application/json"
}
],
"assets": {
"image": {
"href": "/var/folders/sv/zr8j0t4j1f726nhlt3vb8c300000gn/T/tmpt1wuelid/image.tif",
"type": "image/tiff; application=geotiff"
}
},
"bbox": [
37.6616853489879,
55.73478197572927,
37.66573047610874,
55.73882710285011
]
}
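###Markdown
Since `help` doesn't show the attribute docstrings, a quick way to see which catalog types exist is simply to list the public attributes of `CatalogType` (a small sketch, not an official API):
###Code
# Expect ABSOLUTE_PUBLISHED, RELATIVE_PUBLISHED and SELF_CONTAINED.
[a for a in dir(pystac.CatalogType) if not a.startswith('_')]
###Output
_____no_output_____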
###Markdown
As you can see, all links are saved with relative paths. That's because we used `catalog_type=CatalogType.SELF_CONTAINED`. If we save an Absolute Published catalog, we'll see absolute paths:
###Code
catalog.save(catalog_type=pystac.CatalogType.ABSOLUTE_PUBLISHED)
###Output
_____no_output_____
###Markdown
Now the links included in the STAC item are all absolute:
###Code
with open(item.get_self_href()) as f:
print(f.read())
###Output
{
"type": "Feature",
"stac_version": "1.0.0-beta.2",
"id": "local-image",
"properties": {
"datetime": "2020-08-03T03:47:48.786929Z"
},
"geometry": {
"type": "Polygon",
"coordinates": [
[
[
37.6616853489879,
55.73478197572927
],
[
37.6616853489879,
55.73882710285011
],
[
37.66573047610874,
55.73882710285011
],
[
37.66573047610874,
55.73478197572927
],
[
37.6616853489879,
55.73478197572927
]
]
]
},
"links": [
{
"rel": "self",
"href": "/var/folders/sv/zr8j0t4j1f726nhlt3vb8c300000gn/T/tmpt1wuelid/stac/local-image/local-image.json",
"type": "application/json"
},
{
"rel": "root",
"href": "/var/folders/sv/zr8j0t4j1f726nhlt3vb8c300000gn/T/tmpt1wuelid/stac/catalog.json",
"type": "application/json"
},
{
"rel": "parent",
"href": "/var/folders/sv/zr8j0t4j1f726nhlt3vb8c300000gn/T/tmpt1wuelid/stac/catalog.json",
"type": "application/json"
}
],
"assets": {
"image": {
"href": "/var/folders/sv/zr8j0t4j1f726nhlt3vb8c300000gn/T/tmpt1wuelid/image.tif",
"type": "image/tiff; application=geotiff"
}
},
"bbox": [
37.6616853489879,
55.73478197572927,
37.66573047610874,
55.73882710285011
]
}
###Markdown
Notice that the Asset HREF is absolute in both cases. We can make the Asset HREF relative to the STAC Item by using `.make_all_asset_hrefs_relative()`:
###Code
catalog.make_all_asset_hrefs_relative()
catalog.save(catalog_type=pystac.CatalogType.SELF_CONTAINED)
with open(item.get_self_href()) as f:
print(f.read())
###Output
{
"type": "Feature",
"stac_version": "1.0.0-beta.2",
"id": "local-image",
"properties": {
"datetime": "2020-08-03T03:47:48.786929Z"
},
"geometry": {
"type": "Polygon",
"coordinates": [
[
[
37.6616853489879,
55.73478197572927
],
[
37.6616853489879,
55.73882710285011
],
[
37.66573047610874,
55.73882710285011
],
[
37.66573047610874,
55.73478197572927
],
[
37.6616853489879,
55.73478197572927
]
]
]
},
"links": [
{
"rel": "root",
"href": "../catalog.json",
"type": "application/json"
},
{
"rel": "parent",
"href": "../catalog.json",
"type": "application/json"
}
],
"assets": {
"image": {
"href": "../../image.tif",
"type": "image/tiff; application=geotiff"
}
},
"bbox": [
37.6616853489879,
55.73478197572927,
37.66573047610874,
55.73882710285011
]
}
###Markdown
Creating an Item that implements the EO extension
In the code above, our item only implemented the core STAC Item specification. With [extensions](https://github.com/radiantearth/stac-spec/tree/v0.9.0/extensions) we can record more information and add additional functionality to the Item. Given that we know this is a World View 3 image that has earth observation data, we can enable the [eo extension](https://github.com/radiantearth/stac-spec/tree/v0.8.1/extensions/eo) to add band information. To add eo information to an item, we'll need to specify some more data. First, let's define the bands of World View 3:
###Code
from pystac.extensions.eo import Band
# From: https://www.spaceimagingme.com/downloads/sensors/datasheets/DG_WorldView3_DS_2014.pdf
wv3_bands = [Band.create(name='Coastal', description='Coastal: 400 - 450 nm', common_name='coastal'),
Band.create(name='Blue', description='Blue: 450 - 510 nm', common_name='blue'),
Band.create(name='Green', description='Green: 510 - 580 nm', common_name='green'),
Band.create(name='Yellow', description='Yellow: 585 - 625 nm', common_name='yellow'),
Band.create(name='Red', description='Red: 630 - 690 nm', common_name='red'),
Band.create(name='Red Edge', description='Red Edge: 705 - 745 nm', common_name='rededge'),
Band.create(name='Near-IR1', description='Near-IR1: 770 - 895 nm', common_name='nir08'),
Band.create(name='Near-IR2', description='Near-IR2: 860 - 1040 nm', common_name='nir09')]
###Output
_____no_output_____
###Markdown
Notice that we used the `.create` method to create new band information. We can now create an Item, enable the eo extension, add the band information, and add it to our catalog:
###Code
eo_item = pystac.Item(id='local-image-eo',
geometry=footprint,
bbox=bbox,
datetime=datetime.utcnow(),
properties={})
eo_item.ext.enable(pystac.Extensions.EO)
eo_item.ext.eo.apply(bands=wv3_bands)
###Output
_____no_output_____
###Markdown
There are also [common metadata](https://github.com/radiantearth/stac-spec/blob/v0.9.0/item-spec/common-metadata.md) fields that we can use to capture additional information about the WorldView 3 imagery:
###Code
eo_item.common_metadata.platform = "Maxar"
eo_item.common_metadata.instrument = "WorldView3"
eo_item.common_metadata.gsd = 0.3
eo_item
###Output
_____no_output_____
###Markdown
We can use the eo extension to add bands to the assets we add to the item:
###Code
eo_ext = eo_item.ext.eo
help(eo_ext.set_bands)
asset = pystac.Asset(href=img_path,
media_type=pystac.MediaType.GEOTIFF)
eo_ext.set_bands(wv3_bands, asset)
eo_item.add_asset("image", asset)
###Output
_____no_output_____
###Markdown
If we look at the asset's JSON representation, we can see the appropriate band indexes are set:
###Code
asset.to_dict()
###Output
_____no_output_____
###Markdown
Let's clear the in-memory catalog, add the EO item, and save to a new STAC:
###Code
catalog.clear_items()
list(catalog.get_items())
catalog.add_item(eo_item)
list(catalog.get_items())
catalog.normalize_and_save(root_href=os.path.join(tmp_dir.name, 'stac-eo'),
catalog_type=pystac.CatalogType.SELF_CONTAINED)
###Output
_____no_output_____
###Markdown
Now, if we read the catalog from the filesystem, PySTAC recognizes that the item implements eo and so uses its functionality, e.g., getting the bands off the asset:
###Code
catalog2 = pystac.read_file(os.path.join(tmp_dir.name, 'stac-eo', 'catalog.json'))
list(catalog2.get_items())
item = next(catalog2.get_all_items())
item.ext.implements('eo')
item.ext.eo.get_bands(item.assets['image'])
###Output
_____no_output_____
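###Markdown
Assuming `get_bands` returns the `Band` objects we created earlier, we can, for example, pull out their common names (a small sketch):
###Code
# List the common_name of each band attached to the 'image' asset.
[b.common_name for b in item.ext.eo.get_bands(item.assets['image'])]
###Output
_____no_output_____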
###Markdown
Collections
Collections are a subtype of Catalog that have some additional properties to make them more searchable. They can also define common properties so that items in the collection don't have to duplicate common data for each item. Let's create a collection to hold common properties between two images from the Spacenet 5 challenge.
First we'll get another image, and its bbox and footprint:
###Code
url2 = ('http://spacenet-dataset.s3.amazonaws.com/'
'spacenet/SN5_roads/train/AOI_7_Moscow/MS/'
'SN5_roads_train_AOI_7_Moscow_MS_chip997.tif')
img_path2 = os.path.join(tmp_dir.name, 'image2.tif')  # distinct name so we don't overwrite the first image
urllib.request.urlretrieve(url2, img_path2)
bbox2, footprint2 = get_bbox_and_footprint(img_path2)
###Output
_____no_output_____
###Markdown
We can take a look at the pydocs for Collection to see what information we need to supply in order to satisfy the spec.
###Code
print(pystac.Collection.__doc__)
###Output
A Collection extends the Catalog spec with additional metadata that helps
enable discovery.
Args:
id (str): Identifier for the collection. Must be unique within the STAC.
description (str): Detailed multi-line description to fully explain the collection.
`CommonMark 0.28 syntax <http://commonmark.org/>`_ MAY be used for rich text
representation.
extent (Extent): Spatial and temporal extents that describe the bounds of
all items contained within this Collection.
title (str or None): Optional short descriptive one-line title for the collection.
stac_extensions (List[str]): Optional list of extensions the Collection implements.
href (str or None): Optional HREF for this collection, which be set as the collection's
self link's HREF.
license (str): Collection's license(s) as a `SPDX License identifier
<https://spdx.org/licenses/>`_, `various`, or `proprietary`. If collection includes
data with multiple different licenses, use `various` and add a link for each.
Defaults to 'proprietary'.
keywords (List[str]): Optional list of keywords describing the collection.
providers (List[Provider]): Optional list of providers of this Collection.
properties (dict): Optional dict of common fields across referenced items.
summaries (dict): An optional map of property summaries,
either a set of values or statistics such as a range.
extra_fields (dict or None): Extra fields that are part of the top-level JSON properties
of the Collection.
Attributes:
id (str): Identifier for the collection.
description (str): Detailed multi-line description to fully explain the collection.
extent (Extent): Spatial and temporal extents that describe the bounds of
all items contained within this Collection.
title (str or None): Optional short descriptive one-line title for the collection.
stac_extensions (List[str]): Optional list of extensions the Collection implements.
keywords (List[str] or None): Optional list of keywords describing the collection.
providers (List[Provider] or None): Optional list of providers of this Collection.
properties (dict or None): Optional dict of common fields across referenced items.
summaries (dict or None): An optional map of property summaries,
either a set of values or statistics such as a range.
links (List[Link]): A list of :class:`~pystac.Link` objects representing
all links associated with this Collection.
extra_fields (dict or None): Extra fields that are part of the top-level JSON properties
of the Catalog.
###Markdown
Beyond what a Catalog requires, a Collection requires a license and an `Extent` that describes the range of space and time that the items it holds occupy.
###Code
print(pystac.Extent.__doc__)
###Output
Describes the spatio-temporal extents of a Collection.
Args:
spatial (SpatialExtent): Potential spatial extent covered by the collection.
temporal (TemporalExtent): Potential temporal extent covered by the collection.
Attributes:
spatial (SpatialExtent): Potential spatial extent covered by the collection.
temporal (TemporalExtent): Potential temporal extent covered by the collection.
###Markdown
An Extent is composed of a SpatialExtent and a TemporalExtent. These hold one or more bounding boxes and time intervals, respectively, that completely cover the items contained in the collections.
Let's start by creating two new items - these will be core Items. We can set these items to implement the `eo` extension by specifying it in the `stac_extensions`.
###Code
collection_item = pystac.Item(id='local-image-col-1',
geometry=footprint,
bbox=bbox,
datetime=datetime.utcnow(),
properties={},
stac_extensions=[pystac.Extensions.EO])
collection_item.common_metadata.gsd = 0.3
collection_item.common_metadata.platform = 'Maxar'
collection_item.common_metadata.instruments = ['WorldView3']
asset = pystac.Asset(href=img_path,
media_type=pystac.MediaType.GEOTIFF)
collection_item.ext.eo.set_bands(wv3_bands, asset)
collection_item.add_asset('image', asset)
collection_item2 = pystac.Item(id='local-image-col-2',
geometry=footprint2,
bbox=bbox2,
datetime=datetime.utcnow(),
properties={},
stac_extensions=[pystac.Extensions.EO])
collection_item2.common_metadata.gsd = 0.3
collection_item2.common_metadata.platform = 'Maxar'
collection_item2.common_metadata.instruments = ['WorldView3']
asset2 = pystac.Asset(href=img_path2,
                      media_type=pystac.MediaType.GEOTIFF)
collection_item2.ext.eo.set_bands([
band for band in wv3_bands if band.name in ["Red", "Green", "Blue"]
], asset2)
collection_item2.add_asset('image', asset2)
###Output
_____no_output_____
###Markdown
We can use our two items' metadata to find out what the proper bounds are:
###Code
from shapely.geometry import shape
unioned_footprint = shape(footprint).union(shape(footprint2))
collection_bbox = list(unioned_footprint.bounds)
spatial_extent = pystac.SpatialExtent(bboxes=[collection_bbox])
collection_interval = sorted([collection_item.datetime, collection_item2.datetime])
temporal_extent = pystac.TemporalExtent(intervals=[collection_interval])
collection_extent = pystac.Extent(spatial=spatial_extent, temporal=temporal_extent)
collection = pystac.Collection(id='wv3-images',
description='Spacenet 5 images over Moscow',
extent=collection_extent,
license='CC-BY-SA-4.0')
###Output
_____no_output_____
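###Markdown
As a quick sanity check, we can serialize the extent we just built; like other PySTAC objects, `Extent` supports `to_dict()` (hedged: the exact JSON layout follows the Collection spec):
###Code
# Inspect the spatial/temporal extent as STAC JSON.
collection_extent.to_dict()
###Output
_____no_output_____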
###Markdown
Now if we add our items to our Collection, and our Collection to our Catalog, we get the following STAC that can be saved:
###Code
collection.add_items([collection_item, collection_item2])
catalog.clear_items()
catalog.clear_children()
catalog.add_child(collection)
catalog.describe()
catalog.normalize_and_save(root_href=os.path.join(tmp_dir.name, 'stac-collection'),
catalog_type=pystac.CatalogType.SELF_CONTAINED)
###Output
_____no_output_____
###Markdown
Cleanup
Don't forget to clean up the temporary directory!
###Code
tmp_dir.cleanup()
###Output
_____no_output_____
###Markdown
Creating a STAC of imagery from Spacenet 5 data
Now, let's take what we've learned and create a Catalog with more data in it.
Allowing PySTAC to read from AWS S3
PySTAC aims to be virtually zero-dependency (notwithstanding the why-isn't-this-in-stdlib datetime-util), so it doesn't have the ability to read from or write to anything but the local file system. However, we can hook into PySTAC's IO in the following way. Learn more about how to use STAC_IO in the [documentation on the topic](https://pystac.readthedocs.io/en/latest/concepts.html#using-stac-io):
###Code
from urllib.parse import urlparse
import boto3
from pystac import STAC_IO
def my_read_method(uri):
parsed = urlparse(uri)
if parsed.scheme == 's3':
bucket = parsed.netloc
key = parsed.path[1:]
s3 = boto3.resource('s3')
obj = s3.Object(bucket, key)
return obj.get()['Body'].read().decode('utf-8')
else:
return STAC_IO.default_read_text_method(uri)
def my_write_method(uri, txt):
parsed = urlparse(uri)
if parsed.scheme == 's3':
bucket = parsed.netloc
key = parsed.path[1:]
s3 = boto3.resource("s3")
s3.Object(bucket, key).put(Body=txt)
else:
STAC_IO.default_write_text_method(uri, txt)
STAC_IO.read_text_method = my_read_method
STAC_IO.write_text_method = my_write_method
###Output
_____no_output_____
###Markdown
We'll need a utility to list keys for reading the lists of files from S3:
###Code
# From https://alexwlchan.net/2017/07/listing-s3-keys/
def get_s3_keys(bucket, prefix):
"""Generate all the keys in an S3 bucket."""
s3 = boto3.client('s3')
kwargs = {'Bucket': bucket, 'Prefix': prefix}
while True:
resp = s3.list_objects_v2(**kwargs)
for obj in resp['Contents']:
yield obj['Key']
try:
kwargs['ContinuationToken'] = resp['NextContinuationToken']
except KeyError:
break
###Output
_____no_output_____
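###Markdown
As a quick usage sketch, we can peek at the first few keys under a prefix without listing the whole bucket:
###Code
from itertools import islice
# Grab just the first three keys from the generator.
list(islice(get_s3_keys(bucket='spacenet-dataset',
                        prefix='spacenet/SN5_roads/train/AOI_7_Moscow/PS-MS'),
            3))
###Output
_____no_output_____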
###Markdown
Let's make a STAC of imagery over Moscow as part of the Spacenet 5 challenge. As a first step, we can list out the imagery and extract IDs from each of the chips.
###Code
moscow_training_chip_uris = list(get_s3_keys(bucket='spacenet-dataset',
prefix='spacenet/SN5_roads/train/AOI_7_Moscow/PS-MS'))
import re
chip_id_to_data = {}
def get_chip_id(uri):
return re.search(r'.*\_chip(\d+)\.', uri).group(1)
for uri in moscow_training_chip_uris:
chip_id = get_chip_id(uri)
chip_id_to_data[chip_id] = { 'img': 's3://spacenet-dataset/{}'.format(uri) }
###Output
_____no_output_____
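###Markdown
A quick check that the regex pulls the numeric chip ID out of a key:
###Code
# Should return '1000' for this key pattern.
get_chip_id('spacenet/SN5_roads/train/AOI_7_Moscow/PS-MS/'
            'SN5_roads_train_AOI_7_Moscow_PS-MS_chip1000.tif')
###Output
_____no_output_____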
###Markdown
For this tutorial, we'll only take a subset of the data.
###Code
chip_id_to_data = dict(list(chip_id_to_data.items())[:10])
chip_id_to_data
###Output
_____no_output_____
###Markdown
Let's turn each of those chips into a STAC Item that represents the image.
###Code
chip_id_to_items = {}
###Output
_____no_output_____
###Markdown
We'll create core `Item`s for our imagery, but mark them with the `eo` extension as we did above, and store the `eo` data in a `Collection`.
Note that the image CRS is WGS 84 (Lat/Lng). If it wasn't, we'd have to reproject the footprint to WGS 84 in order to be compliant with the spec, which can easily be done with [pyproj](https://github.com/pyproj4/pyproj); a sketch of such a reprojection follows the processing loop below.
Here we're taking advantage of `rasterio`'s ability to read S3 URIs, which only grabs the GeoTIFF metadata and does not pull the whole file down.
###Code
for chip_id in chip_id_to_data:
img_uri = chip_id_to_data[chip_id]['img']
print('Processing {}'.format(img_uri))
bbox, footprint = get_bbox_and_footprint(img_uri)
item = pystac.Item(id='img_{}'.format(chip_id),
geometry=footprint,
bbox=bbox,
datetime=datetime.utcnow(),
properties={},
stac_extensions=[pystac.Extensions.EO])
item.common_metadata.gsd = 0.3
item.common_metadata.platform = 'Maxar'
item.common_metadata.instruments = ['WorldView3']
item.ext.eo.bands = wv3_bands
asset = pystac.Asset(href=img_uri,
media_type=pystac.MediaType.COG)
item.ext.eo.set_bands(wv3_bands, asset)
item.add_asset(key='ps-ms', asset=asset)
chip_id_to_items[chip_id] = item
###Output
Processing s3://spacenet-dataset/spacenet/SN5_roads/train/AOI_7_Moscow/PS-MS/SN5_roads_train_AOI_7_Moscow_PS-MS_chip0.tif
Processing s3://spacenet-dataset/spacenet/SN5_roads/train/AOI_7_Moscow/PS-MS/SN5_roads_train_AOI_7_Moscow_PS-MS_chip1.tif
Processing s3://spacenet-dataset/spacenet/SN5_roads/train/AOI_7_Moscow/PS-MS/SN5_roads_train_AOI_7_Moscow_PS-MS_chip10.tif
Processing s3://spacenet-dataset/spacenet/SN5_roads/train/AOI_7_Moscow/PS-MS/SN5_roads_train_AOI_7_Moscow_PS-MS_chip100.tif
Processing s3://spacenet-dataset/spacenet/SN5_roads/train/AOI_7_Moscow/PS-MS/SN5_roads_train_AOI_7_Moscow_PS-MS_chip1000.tif
Processing s3://spacenet-dataset/spacenet/SN5_roads/train/AOI_7_Moscow/PS-MS/SN5_roads_train_AOI_7_Moscow_PS-MS_chip1001.tif
Processing s3://spacenet-dataset/spacenet/SN5_roads/train/AOI_7_Moscow/PS-MS/SN5_roads_train_AOI_7_Moscow_PS-MS_chip1002.tif
Processing s3://spacenet-dataset/spacenet/SN5_roads/train/AOI_7_Moscow/PS-MS/SN5_roads_train_AOI_7_Moscow_PS-MS_chip1003.tif
Processing s3://spacenet-dataset/spacenet/SN5_roads/train/AOI_7_Moscow/PS-MS/SN5_roads_train_AOI_7_Moscow_PS-MS_chip1004.tif
Processing s3://spacenet-dataset/spacenet/SN5_roads/train/AOI_7_Moscow/PS-MS/SN5_roads_train_AOI_7_Moscow_PS-MS_chip1005.tif
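###Markdown
For reference, here is a minimal sketch of the reprojection mentioned above, in case the source imagery were not already in WGS 84. It assumes `pyproj` is installed and that the source CRS is known (the `EPSG:32637` code below is just a placeholder):
###Code
from pyproj import Transformer
from shapely.geometry import shape, mapping
from shapely.ops import transform
def reproject_to_wgs84(geom_dict, src_crs='EPSG:32637'):  # placeholder source CRS
    """Reproject a GeoJSON-like geometry dict to WGS 84 (EPSG:4326)."""
    transformer = Transformer.from_crs(src_crs, 'EPSG:4326', always_xy=True)
    return mapping(transform(transformer.transform, shape(geom_dict)))
###Output
_____no_output_____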
###Markdown
Creating the Collection
All of these images are over Moscow. In Spacenet 5, there are a couple of cities that have imagery, and a Collection is a good way to separate these groups of imagery. We can store all of the common `eo` metadata in the collection.
###Code
from shapely.geometry import (shape, MultiPolygon)
footprints = list(map(lambda i: shape(i.geometry).envelope,
chip_id_to_items.values()))
collection_bbox = MultiPolygon(footprints).bounds
spatial_extent = pystac.SpatialExtent(bboxes=[collection_bbox])
datetimes = sorted(list(map(lambda i: i.datetime,
chip_id_to_items.values())))
temporal_extent = pystac.TemporalExtent(intervals=[[datetimes[0], datetimes[-1]]])
collection_extent = pystac.Extent(spatial=spatial_extent, temporal=temporal_extent)
collection = pystac.Collection(id='wv3-images',
description='Spacenet 5 images over Moscow',
extent=collection_extent,
license='CC-BY-SA-4.0')
collection.add_items(chip_id_to_items.values())
collection.describe()
###Output
* <Collection id=wv3-images>
* <Item id=img_0>
* <Item id=img_1>
* <Item id=img_10>
* <Item id=img_100>
* <Item id=img_1000>
* <Item id=img_1001>
* <Item id=img_1002>
* <Item id=img_1003>
* <Item id=img_1004>
* <Item id=img_1005>
###Markdown
Now, we can create a Catalog and add the collection.
###Code
catalog = pystac.Catalog(id='spacenet5', description='Spacenet 5 Data (Test)')
catalog.add_child(collection)
catalog.describe()
###Output
* <Catalog id=spacenet5>
* <Collection id=wv3-images>
* <Item id=img_0>
* <Item id=img_1>
* <Item id=img_10>
* <Item id=img_100>
* <Item id=img_1000>
* <Item id=img_1001>
* <Item id=img_1002>
* <Item id=img_1003>
* <Item id=img_1004>
* <Item id=img_1005>
###Markdown
Adding items with the label extension to the Spacenet 5 catalog
We can use the [label extension](https://github.com/radiantearth/stac-spec/tree/v0.8.1/extensions/label) of the STAC spec to represent the training data in our STAC. For this, we need to grab the URIs of the GeoJSON road labels:
###Code
moscow_training_geojson_uris = list(get_s3_keys(bucket='spacenet-dataset',
prefix='spacenet/SN5_roads/train/AOI_7_Moscow/geojson_roads_speed/'))
for uri in moscow_training_geojson_uris:
chip_id = get_chip_id(uri)
if chip_id in chip_id_to_data:
chip_id_to_data[chip_id]['label'] = 's3://spacenet-dataset/{}'.format(uri)
###Output
_____no_output_____
###Markdown
We'll add the items to their own subcatalog; since they don't inherit the Collection's `eo` properties, they shouldn't go in the Collection.
###Code
label_catalog = pystac.Catalog(id='spacenet-data-labels', description='Labels for Spacenet 5')
catalog.add_child(label_catalog)
###Output
_____no_output_____
###Markdown
To see the required fields for the label extension we can check the pydocs on the `apply` method of the extension:
###Code
from pystac.extensions import label
print(label.LabelItemExt.apply.__doc__)
###Output
Applies label extension properties to the extended Item.
Args:
label_description (str): A description of the label, how it was created,
and what it is recommended for
label_type (str): An ENUM of either vector label type or raster label type. Use
one of :class:`~pystac.LabelType`.
label_properties (list or None): These are the names of the property field(s) in each
Feature of the label asset's FeatureCollection that contains the classes
(keywords from label:classes if the property defines classes).
If labels are rasters, this should be None.
label_classes (List[LabelClass]): Optional, but reqiured if ussing categorical data.
A list of LabelClasses defining the list of possible class names for each
label:properties. (e.g., tree, building, car, hippo)
label_tasks (List[str]): Recommended to be a subset of 'regression', 'classification',
'detection', or 'segmentation', but may be an arbitrary value.
label_methods: Recommended to be a subset of 'automated' or 'manual',
but may be an arbitrary value.
label_overviews (List[LabelOverview]): Optional list of LabelOverview classes
that store counts (for classification-type data) or summary statistics (for
continuous numerical/regression data).
###Markdown
This loop creates our label items and associates each to the appropriate source image Item.
###Code
for chip_id in chip_id_to_data:
img_item = collection.get_item('img_{}'.format(chip_id))
label_uri = chip_id_to_data[chip_id]['label']
label_item = pystac.Item(id='label_{}'.format(chip_id),
geometry=img_item.geometry,
bbox=img_item.bbox,
datetime=datetime.utcnow(),
properties={},
stac_extensions=[pystac.Extensions.LABEL])
label_item.ext.label.apply(label_description="SpaceNet 5 Road labels",
label_type=label.LabelType.VECTOR,
label_tasks=['segmentation', 'regression'])
label_item.ext.label.add_source(img_item)
label_item.ext.label.add_geojson_labels(label_uri)
label_catalog.add_item(label_item)
###Output
_____no_output_____
###Markdown
Now we have a STAC of training data!
###Code
catalog.describe()
label_item = catalog.get_child('spacenet-data-labels').get_item('label_1')
label_item.to_dict()
###Output
_____no_output_____
###Markdown
How to create STAC Catalogs
STAC Community Sprint, Arlington, November 7th 2019
This notebook runs through some of the basics of using PySTAC to create a static STAC. It was part of a 30-minute presentation at the [community STAC sprint](https://github.com/radiantearth/community-sprints/tree/master/11052019-arlignton-va) in Arlington, VA in November 2019. This tutorial will require the `boto3`, `rasterio`, and `shapely` libraries:
###Code
!pip install boto3
!pip install rasterio
!pip install shapely
###Output
Requirement already satisfied: boto3 in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages
Requirement already satisfied: s3transfer<0.3.0,>=0.2.0 in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (from boto3)
Requirement already satisfied: botocore<1.14.0,>=1.13.8 in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (from boto3)
Requirement already satisfied: jmespath<1.0.0,>=0.7.1 in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (from boto3)
Requirement already satisfied: urllib3<1.26,>=1.20; python_version >= "3.4" in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (from botocore<1.14.0,>=1.13.8->boto3)
Requirement already satisfied: python-dateutil<3.0.0,>=2.1; python_version >= "2.7" in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (from botocore<1.14.0,>=1.13.8->boto3)
Requirement already satisfied: docutils<0.16,>=0.10 in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (from botocore<1.14.0,>=1.13.8->boto3)
Requirement already satisfied: six>=1.5 in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (from python-dateutil<3.0.0,>=2.1; python_version >= "2.7"->botocore<1.14.0,>=1.13.8->boto3)
You are using pip version 9.0.3, however version 19.3.1 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
Requirement already satisfied: rasterio in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages
Requirement already satisfied: numpy in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (from rasterio)
Requirement already satisfied: attrs in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (from rasterio)
Requirement already satisfied: snuggs>=1.4.1 in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (from rasterio)
Requirement already satisfied: click-plugins in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (from rasterio)
Requirement already satisfied: click<8,>=4.0 in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (from rasterio)
Requirement already satisfied: cligj>=0.5 in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (from rasterio)
Requirement already satisfied: affine in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (from rasterio)
Requirement already satisfied: pyparsing>=2.1.6 in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (from snuggs>=1.4.1->rasterio)
You are using pip version 9.0.3, however version 19.3.1 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
Requirement already satisfied: shapely in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages
You are using pip version 9.0.3, however version 19.3.1 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
###Markdown
We can import pystac with the alias `stac` to access all of the API we need (saving a glorious 2 characters):
###Code
import pystac as stac
###Output
_____no_output_____
###Markdown
Creating a catalog from a local file
To give us some material to work with, let's download a single image from the [Spacenet 5 challenge](https://www.topcoder.com/challenges/30099956). We'll use a temporary directory to save off our single-item STAC.
###Code
import os
import urllib.request
from tempfile import TemporaryDirectory
tmp_dir = TemporaryDirectory()
img_path = os.path.join(tmp_dir.name, 'image.tif')
url = ('http://spacenet-dataset.s3.amazonaws.com/'
'spacenet/SN5_roads/train/AOI_7_Moscow/MS/'
'SN5_roads_train_AOI_7_Moscow_MS_chip996.tif')
urllib.request.urlretrieve(url, img_path)
###Output
_____no_output_____
###Markdown
We want to create a Catalog. Let's check the pydocs for `Catalog` to see what information we'll need. (We use `__doc__` instead of `help()` here to avoid printing out all the docs for the class.)
###Code
print(stac.Catalog.__doc__)
###Output
A PySTAC Catalog represents a STAC catalog in memory.
A Catalog is a :class:`~pystac.STACObject` that may contain children,
which are instances of :class:`~pystac.Catalog` or :class:`~pystac.Collection`,
as well as :class:`~pystac.Item` s.
Args:
id (str): Identifier for the catalog. Must be unique within the STAC.
description (str): Detailed multi-line description to fully explain the catalog.
`CommonMark 0.28 syntax <http://commonmark.org/>`_ MAY be used for rich text
representation.
title (str or None): Optional short descriptive one-line title for the catalog.
stac_extensions (List[str]): Optional list of extensions the Catalog implements.
href (str or None): Optional HREF for this catalog, which be set as the catalog's
self link's HREF.
Attributes:
id (str): Identifier for the catalog.
description (str): Detailed multi-line description to fully explain the catalog.
title (str or None): Optional short descriptive one-line title for the catalog.
stac_extensions (List[str] or None): Optional list of extensions the Catalog implements.
links (List[Link]): A list of :class:`~pystac.Link` objects representing
all links associated with this Catalog.
###Markdown
Let's just give an ID and a description. We don't have to worry about the HREF right now; that will be set later.
###Code
catalog = stac.Catalog(id='test-catalog', description='Tutorial catalog.')
###Output
_____no_output_____
###Markdown
There are no children or items in the catalog, since we haven't added anything yet.
###Code
print(list(catalog.get_children()))
print(list(catalog.get_items()))
###Output
[]
[]
###Markdown
We'll now create an Item to represent the image. Check the pydocs to see what you need to supply:
###Code
print(stac.Item.__doc__)
###Output
An Item is the core granular entity in a STAC, containing the core metadata
that enables any client to search or crawl online catalogs of spatial 'assets' -
satellite imagery, derived data, DEM's, etc.
Args:
id (str): Provider identifier. Must be unique within the STAC.
geometry (dict): Defines the full footprint of the asset represented by this item,
formatted according to `RFC 7946, section 3.1 (GeoJSON)
<https://tools.ietf.org/html/rfc7946>`_.
bbox (List[float]): Bounding Box of the asset represented by this item using
either 2D or 3D geometries. The length of the array must be 2*n where n is the
number of dimensions.
datetime (Datetime): Datetime associated with this item.
properties (dict): A dictionary of additional metadata for the item.
stac_extensions (List[str]): Optional list of extensions the Item implements.
href (str or None): Optional HREF for this item, which be set as the item's
self link's HREF.
collection (Collection or str): The Collection or Collection ID that this item
belongs to.
Attributes:
id (str): Provider identifier. Unique within the STAC.
geometry (dict): Defines the full footprint of the asset represented by this item,
formatted according to `RFC 7946, section 3.1 (GeoJSON)
<https://tools.ietf.org/html/rfc7946>`_.
bbox (List[float]): Bounding Box of the asset represented by this item using
either 2D or 3D geometries. The length of the array is 2*n where n is the
number of dimensions.
datetime (Datetime): Datetime associated with this item.
properties (dict): A dictionary of additional metadata for the item.
stac_extensions (List[str] or None): Optional list of extensions the Item implements.
collection (Collection or None): Collection that this item is a part of.
links (List[Link]): A list of :class:`~pystac.Link` objects representing
all links associated with this STACObject.
assets (Dict[str, Asset]): Dictionary of asset objects that can be downloaded,
each with a unique key.
collection_id (str or None): The Collection ID that this item belongs to, if any.
###Markdown
Using [rasterio](https://rasterio.readthedocs.io/en/stable/), we can pull out the bounding box of the image to use for the image metadata. If the image contained a NoData border, we would ideally pull out the footprint and save it as the geometry; in this case, we're working with a small chip that most likely has no NoData values.
###Code
import rasterio
from shapely.geometry import Polygon, mapping
def get_bbox_and_footprint(raster_uri):
with rasterio.open(raster_uri) as ds:
bounds = ds.bounds
bbox = [bounds.left, bounds.bottom, bounds.right, bounds.top]
footprint = Polygon([
[bounds.left, bounds.bottom],
[bounds.left, bounds.top],
[bounds.right, bounds.top],
[bounds.right, bounds.bottom]
])
return (bbox, mapping(footprint))
bbox, footprint = get_bbox_and_footprint(img_path)
print(bbox)
print(footprint)
###Output
[37.6616853489879, 55.73478197572927, 37.66573047610874, 55.73882710285011]
{'type': 'Polygon', 'coordinates': (((37.6616853489879, 55.73478197572927), (37.6616853489879, 55.73882710285011), (37.66573047610874, 55.73882710285011), (37.66573047610874, 55.73478197572927), (37.6616853489879, 55.73478197572927)),)}
###Markdown
We're also using `datetime.utcnow()` to supply the required datetime property for our Item. Since this is a required property, you might often find yourself making up a time to fill in if you don't know the exact capture time.
###Code
from datetime import datetime
item = stac.Item(id='local-image',
geometry=footprint,
bbox=bbox,
datetime=datetime.utcnow(),
properties={})
###Output
_____no_output_____
###Markdown
We haven't added it to a catalog yet, so its parent isn't set. Once we add it to the catalog, we can see it correctly links to its parent.
###Code
item.get_parent() is None
catalog.add_item(item)
item.get_parent()
###Output
_____no_output_____
###Markdown
`describe()` is a useful method on `Catalog` - but be careful when using it on large catalogs, as it will walk the entire tree of the STAC.
###Code
catalog.describe()
###Output
* <Catalog id=test-catalog>
* <Item id=local-image>
###Markdown
Adding Assets
We've created an Item, but there aren't any assets associated with it. Let's create one:
###Code
print(stac.Asset.__doc__)
item.add_asset(key='image', asset=stac.Asset(href=img_path, media_type=stac.MediaType.GEOTIFF))
###Output
_____no_output_____
###Markdown
At any time we can call `to_dict()` on STAC objects to see how the STAC JSON is shaping up. Notice the asset is now set:
###Code
import json
print(json.dumps(item.to_dict(), indent=4))
###Output
{
"type": "Feature",
"stac_version": "0.8.1",
"id": "local-image",
"properties": {
"datetime": "2019-11-05 16:43:22Z"
},
"geometry": {
"type": "Polygon",
"coordinates": [
[
[
37.6616853489879,
55.73478197572927
],
[
37.6616853489879,
55.73882710285011
],
[
37.66573047610874,
55.73882710285011
],
[
37.66573047610874,
55.73478197572927
],
[
37.6616853489879,
55.73478197572927
]
]
]
},
"bbox": [
37.6616853489879,
55.73478197572927,
37.66573047610874,
55.73882710285011
],
"links": [
{
"rel": "root",
"href": null,
"type": "application/json"
},
{
"rel": "parent",
"href": null,
"type": "application/json"
}
],
"assets": {
"image": {
"href": "/var/folders/sv/zr8j0t4j1f726nhlt3vb8c300000gn/T/tmpytmf1tsx/image.tif",
"type": "image/vnd.stac.geotiff"
}
}
}
###Markdown
Note that the link `href` properties are `null`. This is OK, as we're working with the STAC in memory. Next, we'll talk about writing the catalog out, and how to set those HREFs.
Saving the catalog
As the JSON above indicates, there are no HREFs set on these in-memory items. PySTAC uses the `self` link on STAC objects to track where the file lives. Because we haven't set them, they evaluate to `None`:
###Code
print(catalog.get_self_href() is None)
print(item.get_self_href() is None)
###Output
True
True
###Markdown
In order to set them, we can use `normalize_hrefs`. This method will create a normalized set of HREFs for each STAC object in the catalog, according to the [best practices document](https://github.com/radiantearth/stac-spec/blob/v0.8.1/best-practices.md#catalog-layout)'s recommendations on how to lay out a catalog.
###Code
catalog.normalize_hrefs(os.path.join(tmp_dir.name, 'stac'))
###Output
_____no_output_____
###Markdown
Now that we've normalized to a root directory (the temporary directory), we see that the `self` links are set:
###Code
print(catalog.get_self_href())
print(item.get_self_href())
###Output
/var/folders/sv/zr8j0t4j1f726nhlt3vb8c300000gn/T/tmpytmf1tsx/stac/catalog.json
/var/folders/sv/zr8j0t4j1f726nhlt3vb8c300000gn/T/tmpytmf1tsx/stac/local-image/local-image.json
###Markdown
We can now call `save` on the catalog, which will recursively save all the STAC objects to their respective self HREFs. Save requires a `CatalogType` to be set. You can review the [API docs](https://pystac.readthedocs.io/en/stable/api.html#catalogtype) on `CatalogType` to see what each type means (unfortunately `help` doesn't show docstrings for attributes).
###Code
catalog.save(catalog_type=stac.CatalogType.SELF_CONTAINED)
!ls {tmp_dir.name}/stac/*
with open(catalog.get_self_href()) as f:
print(f.read())
with open(item.get_self_href()) as f:
print(f.read())
###Output
{
"type": "Feature",
"stac_version": "0.8.1",
"id": "local-image",
"properties": {
"datetime": "2019-11-05 16:43:22Z"
},
"geometry": {
"type": "Polygon",
"coordinates": [
[
[
37.6616853489879,
55.73478197572927
],
[
37.6616853489879,
55.73882710285011
],
[
37.66573047610874,
55.73882710285011
],
[
37.66573047610874,
55.73478197572927
],
[
37.6616853489879,
55.73478197572927
]
]
]
},
"bbox": [
37.6616853489879,
55.73478197572927,
37.66573047610874,
55.73882710285011
],
"links": [
{
"rel": "root",
"href": "../catalog.json",
"type": "application/json"
},
{
"rel": "parent",
"href": "../catalog.json",
"type": "application/json"
}
],
"assets": {
"image": {
"href": "/var/folders/sv/zr8j0t4j1f726nhlt3vb8c300000gn/T/tmpytmf1tsx/image.tif",
"type": "image/vnd.stac.geotiff"
}
}
}
###Markdown
As you can see, all links are saved with relative paths. That's because we used `catalog_type=CatalogType.SELF_CONTAINED`. If we save an Absolute Published catalog, we'll see absolute paths:
###Code
catalog.save(catalog_type=stac.CatalogType.ABSOLUTE_PUBLISHED)
###Output
_____no_output_____
###Markdown
Now the links included in the STAC item are all absolute:
###Code
with open(item.get_self_href()) as f:
print(f.read())
###Output
{
"type": "Feature",
"stac_version": "0.8.1",
"id": "local-image",
"properties": {
"datetime": "2019-11-05 16:43:22Z"
},
"geometry": {
"type": "Polygon",
"coordinates": [
[
[
37.6616853489879,
55.73478197572927
],
[
37.6616853489879,
55.73882710285011
],
[
37.66573047610874,
55.73882710285011
],
[
37.66573047610874,
55.73478197572927
],
[
37.6616853489879,
55.73478197572927
]
]
]
},
"bbox": [
37.6616853489879,
55.73478197572927,
37.66573047610874,
55.73882710285011
],
"links": [
{
"rel": "self",
"href": "/var/folders/sv/zr8j0t4j1f726nhlt3vb8c300000gn/T/tmpytmf1tsx/stac/local-image/local-image.json",
"type": "application/json"
},
{
"rel": "root",
"href": "/var/folders/sv/zr8j0t4j1f726nhlt3vb8c300000gn/T/tmpytmf1tsx/stac/catalog.json",
"type": "application/json"
},
{
"rel": "parent",
"href": "/var/folders/sv/zr8j0t4j1f726nhlt3vb8c300000gn/T/tmpytmf1tsx/stac/catalog.json",
"type": "application/json"
}
],
"assets": {
"image": {
"href": "/var/folders/sv/zr8j0t4j1f726nhlt3vb8c300000gn/T/tmpytmf1tsx/image.tif",
"type": "image/vnd.stac.geotiff"
}
}
}
###Markdown
Notice that the Asset HREF is absolute in both cases. We can make the Asset HREF relative to the STAC Item by using `.make_all_asset_hrefs_relative()`:
###Code
catalog.make_all_asset_hrefs_relative()
catalog.save(catalog_type=stac.CatalogType.SELF_CONTAINED)
with open(item.get_self_href()) as f:
print(f.read())
###Output
{
"type": "Feature",
"stac_version": "0.8.1",
"id": "local-image",
"properties": {
"datetime": "2019-11-05 16:43:22Z"
},
"geometry": {
"type": "Polygon",
"coordinates": [
[
[
37.6616853489879,
55.73478197572927
],
[
37.6616853489879,
55.73882710285011
],
[
37.66573047610874,
55.73882710285011
],
[
37.66573047610874,
55.73478197572927
],
[
37.6616853489879,
55.73478197572927
]
]
]
},
"bbox": [
37.6616853489879,
55.73478197572927,
37.66573047610874,
55.73882710285011
],
"links": [
{
"rel": "root",
"href": "../catalog.json",
"type": "application/json"
},
{
"rel": "parent",
"href": "../catalog.json",
"type": "application/json"
}
],
"assets": {
"image": {
"href": "../../image.tif",
"type": "image/vnd.stac.geotiff"
}
}
}
###Markdown
Creating an EO Item
In the code above, we encapsulated our imagery as a core STAC item. However, there's more information that we can encapsulate, given that we know this is a World View 3 image. We can do this by creating an `EOItem`, which is an Item that is extended via the [eo extension](https://github.com/radiantearth/stac-spec/tree/v0.8.1/extensions/eo):
###Code
print(stac.EOItem.__doc__)
###Output
EOItem represents a snapshot of the earth for a single date and time.
Args:
id (str): Provider identifier. Must be unique within the STAC.
geometry (dict): Defines the full footprint of the asset represented by this item,
formatted according to `RFC 7946, section 3.1 (GeoJSON)
<https://tools.ietf.org/html/rfc7946>`_.
bbox (List[float]): Bounding Box of the asset represented by this item using
either 2D or 3D geometries. The length of the array must be 2*n where n is the
number of dimensions.
datetime (Datetime): Datetime associated with this item.
properties (dict): A dictionary of additional metadata for the item.
gsd (float): Ground Sample Distance at the sensor.
platform (str): Unique name of the specific platform to which the instrument is attached.
instrument (str): Name of instrument or sensor used (e.g., MODIS, ASTER, OLI, Canon F-1).
bands (List[Band]): This is a list of :class:`~pystac.Band` objects that represent
the available bands.
constellation (str): Optional name of the constellation to which the platform belongs.
epsg (int): Optional `EPSG code <http://www.epsg-registry.org/>`_.
cloud_cover (float): Optional estimate of cloud cover as a percentage (0-100) of the
entire scene. If not available the field should not be provided.
off_nadir (float): Optional viewing angle. The angle from the sensor between
nadir (straight down) and the scene center. Measured in degrees (0-90).
azimuth (float): Optional viewing azimuth angle. The angle measured from the
sub-satellite point (point on the ground below the platform) between the
scene center and true north. Measured clockwise from north in degrees (0-360).
sun_azimuth (float): Optional sun azimuth angle. From the scene center point on
the ground, this is the angle between truth north and the sun. Measured clockwise
in degrees (0-360).
sun_elevation (float): Optional sun elevation angle. The angle from the tangent of
the scene center point to the sun. Measured from the horizon in degrees (0-90).
stac_extensions (List[str]): Optional list of extensions the Item implements.
href (str or None): Optional HREF for this item, which be set as the item's
self link's HREF.
collection (Collection or str): The Collection or Collection ID that this item
belongs to.
Attributes:
id (str): Provider identifier. Unique within the STAC.
geometry (dict): Defines the full footprint of the asset represented by this item,
formatted according to `RFC 7946, section 3.1 (GeoJSON)
<https://tools.ietf.org/html/rfc7946>`_.
bbox (List[float]): Bounding Box of the asset represented by this item using
either 2D or 3D geometries. The length of the array is 2*n where n is the
number of dimensions.
datetime (Datetime): Datetime associated with this item.
properties (dict): A dictionary of additional metadata for the item.
stac_extensions (List[str] or None): Optional list of extensions the Item implements.
collection (Collection or None): Collection that this item is a part of.
gsd (float): Ground Sample Distance at the sensor.
platform (str): Unique name of the specific platform to which the instrument is attached.
instrument (str): Name of instrument or sensor used (e.g., MODIS, ASTER, OLI, Canon F-1).
bands (List[Band]): This is a list of :class:`~pystac.Band` objects that represent
the available bands.
constellation (str or None): Name of the constellation to which the platform belongs.
epsg (int or None): `EPSG code <http://www.epsg-registry.org/>`_.
cloud_cover (float or None): Estimate of cloud cover as a percentage (0-100) of the
entire scene. If not available the field should not be provided.
off_nadir (float or None): Viewing angle. The angle from the sensor between
nadir (straight down) and the scene center. Measured in degrees (0-90).
azimuth (float or None): Viewing azimuth angle. The angle measured from the
sub-satellite point (point on the ground below the platform) between the
scene center and true north. Measured clockwise from north in degrees (0-360).
sun_azimuth (float or None): Sun azimuth angle. From the scene center point on
the ground, this is the angle between truth north and the sun. Measured clockwise
in degrees (0-360).
sun_elevation (float or None): Sun elevation angle. The angle from the tangent of
the scene center point to the sun. Measured from the horizon in degrees (0-90).
links (List[Link]): A list of :class:`~pystac.Link` objects representing
all links associated with this STACObject.
assets (Dict[str, Asset]): Dictionary of asset objects that can be downloaded,
each with a unique key.
collection_id (str or None): The Collection ID that this item belongs to, if any.
###Markdown
To create the EOItem, we'll need to encode some more information. First, let's define the bands of World View 3:
###Code
# From: https://www.spaceimagingme.com/downloads/sensors/datasheets/DG_WorldView3_DS_2014.pdf
wv3_bands = [stac.Band(name='Coastal', description='Coastal: 400 - 450 nm', common_name='coastal'),
stac.Band(name='Blue', description='Blue: 450 - 510 nm', common_name='blue'),
stac.Band(name='Green', description='Green: 510 - 580 nm', common_name='green'),
stac.Band(name='Yellow', description='Yellow: 585 - 625 nm', common_name='yellow'),
stac.Band(name='Red', description='Red: 630 - 690 nm', common_name='red'),
stac.Band(name='Red Edge', description='Red Edge: 705 - 745 nm', common_name='rededge'),
stac.Band(name='Near-IR1', description='Near-IR1: 770 - 895 nm', common_name='nir08'),
stac.Band(name='Near-IR2', description='Near-IR2: 860 - 1040 nm', common_name='nir09')]
###Output
_____no_output_____
###Markdown
We can now create an EO Item, and add it to our catalog:
###Code
eo_item = stac.EOItem(id='local-image-eo',
geometry=footprint,
bbox=bbox,
datetime=datetime.utcnow(),
properties={},
gsd=0.3,
platform="Maxar",
instrument="WorldView3",
bands=wv3_bands)
eo_item
eo_item.add_asset(key='image', asset=stac.EOAsset(href=img_path,
media_type=stac.MediaType.GEOTIFF,
bands=list(range(0,8))))
###Output
_____no_output_____
###Markdown
Let's clear the in-memory catalog, add the EO item, and save to a new STAC:
###Code
catalog.clear_items()
list(catalog.get_items())
catalog.add_item(eo_item)
list(catalog.get_items())
catalog.normalize_and_save(root_href=os.path.join(tmp_dir.name, 'stac-eo'),
catalog_type=stac.CatalogType.SELF_CONTAINED)
###Output
_____no_output_____
###Markdown
Now, if we read the catalog from the filesystem, PySTAC recognizes the EOItem and loads it in with the correct type:
###Code
catalog2 = stac.Catalog.from_file(os.path.join(tmp_dir.name, 'stac-eo', 'catalog.json'))
list(catalog2.get_items())
next(catalog2.get_all_items()).assets
import json
print(json.dumps(eo_item.to_dict(), indent=4))
###Output
{
"type": "Feature",
"stac_version": "0.8.1",
"id": "local-image-eo",
"properties": {
"datetime": "2019-11-05 16:43:28Z",
"eo:gsd": 0.3,
"eo:platform": "Maxar",
"eo:instrument": "WorldView3",
"eo:bands": [
{
"name": "Coastal",
"common_name": "coastal",
"description": "Coastal: 400 - 450 nm"
},
{
"name": "Blue",
"common_name": "blue",
"description": "Blue: 450 - 510 nm"
},
{
"name": "Green",
"common_name": "green",
"description": "Green: 510 - 580 nm"
},
{
"name": "Yellow",
"common_name": "yellow",
"description": "Yellow: 585 - 625 nm"
},
{
"name": "Red",
"common_name": "red",
"description": "Red: 630 - 690 nm"
},
{
"name": "Red Edge",
"common_name": "rededge",
"description": "Red Edge: 705 - 745 nm"
},
{
"name": "Near-IR1",
"common_name": "nir08",
"description": "Near-IR1: 770 - 895 nm"
},
{
"name": "Near-IR2",
"common_name": "nir09",
"description": "Near-IR2: 860 - 1040 nm"
}
]
},
"geometry": {
"type": "Polygon",
"coordinates": [
[
[
37.6616853489879,
55.73478197572927
],
[
37.6616853489879,
55.73882710285011
],
[
37.66573047610874,
55.73882710285011
],
[
37.66573047610874,
55.73478197572927
],
[
37.6616853489879,
55.73478197572927
]
]
]
},
"bbox": [
37.6616853489879,
55.73478197572927,
37.66573047610874,
55.73882710285011
],
"links": [
{
"rel": "self",
"href": "/var/folders/sv/zr8j0t4j1f726nhlt3vb8c300000gn/T/tmpytmf1tsx/stac-eo/local-image-eo/local-image-eo.json",
"type": "application/json"
},
{
"rel": "root",
"href": "../catalog.json",
"type": "application/json"
},
{
"rel": "parent",
"href": "../catalog.json",
"type": "application/json"
}
],
"assets": {
"image": {
"href": "/var/folders/sv/zr8j0t4j1f726nhlt3vb8c300000gn/T/tmpytmf1tsx/image.tif",
"type": "image/vnd.stac.geotiff",
"eo:bands": [
0,
1,
2,
3,
4,
5,
6,
7
]
}
},
"stac_extensions": [
"eo"
]
}
###Markdown
Collections
Collections are a subtype of Catalog that have some additional properties to make them more searchable. They can also define common properties so that items in the collection don't have to duplicate common data for each item. Let's create a collection to hold common properties between two images from the Spacenet 5 challenge.
First we'll get another image, and its bbox and footprint:
###Code
url2 = ('http://spacenet-dataset.s3.amazonaws.com/'
'spacenet/SN5_roads/train/AOI_7_Moscow/MS/'
'SN5_roads_train_AOI_7_Moscow_MS_chip997.tif')
img_path2 = os.path.join(tmp_dir.name, 'image2.tif')  # distinct name so we don't overwrite the first image
urllib.request.urlretrieve(url2, img_path2)
bbox2, footprint2 = get_bbox_and_footprint(img_path2)
###Output
_____no_output_____
###Markdown
We can take a look at the pydocs for Collection to see what information we need to supply in order to satisfy the spec.
###Code
print(stac.Collection.__doc__)
###Output
A Collection extends the Catalog spec with additional metadata that helps
enable discovery.
Args:
id (str): Identifier for the collection. Must be unique within the STAC.
description (str): Detailed multi-line description to fully explain the collection.
`CommonMark 0.28 syntax <http://commonmark.org/>`_ MAY be used for rich text
representation.
extent (Extent): Spatial and temporal extents that describe the bounds of
all items contained within this Collection.
title (str or None): Optional short descriptive one-line title for the collection.
stac_extensions (List[str]): Optional list of extensions the Collection implements.
href (str or None): Optional HREF for this collection, which be set as the collection's
self link's HREF.
license (str): Collection's license(s) as a `SPDX License identifier
<https://spdx.org/licenses/>`_ or `expression
<https://spdx.org/spdx-specification-21-web-version#h.jxpfx0ykyb60>`_. Defaults
to 'proprietary'.
keywords (List[str]): Optional list of keywords describing the collection.
version (str): Optional version of the Collection.
providers (List[Provider]): Optional list of providers of this Collection.
properties (dict): Optional dict of common fields across referenced items.
summaries (dict): An optional map of property summaries,
either a set of values or statistics such as a range.
Attributes:
id (str): Identifier for the collection.
description (str): Detailed multi-line description to fully explain the collection.
extent (Extent): Spatial and temporal extents that describe the bounds of
all items contained within this Collection.
title (str or None): Optional short descriptive one-line title for the collection.
stac_extensions (List[str]): Optional list of extensions the Collection implements.
keywords (List[str] or None): Optional list of keywords describing the collection.
version (str or None): Optional version of the Collection.
providers (List[Provider] or None): Optional list of providers of this Collection.
properties (dict or None): Optional dict of common fields across referenced items.
summaries (dict or None): An optional map of property summaries,
either a set of values or statistics such as a range.
links (List[Link]): A list of :class:`~pystac.Link` objects representing
all links associated with this Collection.
###Markdown
Beyond what a Catalog requires, a Collection requires a license and an `Extent` that describes the range of space and time that the items it holds occupy.
###Code
print(stac.Extent.__doc__)
###Output
Describes the spatio-temporal extents of a Collection.
Args:
spatial (SpatialExtent): Potential spatial extent covered by the collection.
temporal (TemporalExtent): Potential temporal extent covered by the collection.
Attributes:
spatial (SpatialExtent): Potential spatial extent covered by the collection.
temporal (TemporalExtent): Potential temporal extent covered by the collection.
###Markdown
An Extent is composed of a SpatialExtent and a TemporalExtent. These hold one or more bounding boxes and time intervals, respectively, that completely cover the items contained in the collections.
Let's start by creating two new items - these will be core Items, not `EOItems`, although they will be imparted with `eo` information by the collection. This is why we add `eo` to the `stac_extensions`. We are also adding `EOAssets` to the Items, so that the assets have the proper `eo:bands` metadata associated with them:
###Code
collection_item1 = stac.Item(id='local-image-col-1',
geometry=footprint,
bbox=bbox,
datetime=datetime.utcnow(),
properties={},
stac_extensions=['eo'])
collection_item1.add_asset('image', stac.EOAsset(href=img_path,
media_type=stac.MediaType.GEOTIFF,
bands=list(range(0,8))))
collection_item2 = stac.Item(id='local-image-col-2',
geometry=footprint2,
bbox=bbox2,
datetime=datetime.utcnow(),
properties={},
stac_extensions=['eo'])
collection_item2.add_asset('image', stac.EOAsset(href=img_path2,
                                                 media_type=stac.MediaType.GEOTIFF,
                                                 bands=list(range(0,8))))
###Output
_____no_output_____
###Markdown
We can use our two items' metadata to find out what the proper bounds are:
###Code
from shapely.geometry import shape
unioned_footprint = shape(footprint).union(shape(footprint2))
collection_bbox = list(unioned_footprint.bounds)
spatial_extent = stac.SpatialExtent(bboxes=[collection_bbox])
collection_interval = sorted([collection_item1.datetime, collection_item2.datetime])
temporal_extent = stac.TemporalExtent(intervals=[collection_interval])
collection_extent = stac.Extent(spatial=spatial_extent, temporal=temporal_extent)
###Output
_____no_output_____
###Markdown
We can list the common properties for the items, with their proper extension names, and use them in the Collection properties:
###Code
common_properties = { 'eo:bands': [b.to_dict() for b in wv3_bands],
'eo:gsd': 0.3,
'eo:platform': 'Maxar',
'eo:instrument': 'WorldView3'
}
collection = stac.Collection(id='wv3-images',
description='Spacenet 5 images over Moscow',
extent=collection_extent,
properties=common_properties,
license='CC-BY-SA-4.0')
###Output
_____no_output_____
###Markdown
Now if we add our items to our Collection, and our Collection to our Catalog, we get the following STAC that can be saved:
###Code
collection.add_items([collection_item1, collection_item2])
catalog.clear_items()
catalog.clear_children()
catalog.add_child(collection)
catalog.describe()
catalog.normalize_and_save(root_href=os.path.join(tmp_dir.name, 'stac-collection'),
catalog_type=stac.CatalogType.SELF_CONTAINED)
###Output
_____no_output_____
###Markdown
Notice that our collection item does not have any of the `eo` metadata in its properties:
###Code
collection_item1.to_dict()
###Output
_____no_output_____
###Markdown
However, when we read the catalog in, the collection information is merged with the item metadata, and we get `EOItem`s in our STAC:
###Code
catalog3 = stac.Catalog.from_file(os.path.join(tmp_dir.name, 'stac-collection', 'catalog.json'))
catalog3.describe()
col_items = list(catalog3.get_all_items())
col_items[0].bands
###Output
_____no_output_____
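###Markdown
As a quick extra check, the merged items also expose the collection-level `eo` fields as attributes. This is a hypothetical inspection cell, assuming the read-back items are `EOItem`s as described above:
###Code
# Hypothetical check: the collection-level eo fields are merged into each item.
first_item = col_items[0]
print(type(first_item).__name__)
print(first_item.gsd, first_item.platform, first_item.instrument)
###Output
_____no_output_____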
###Markdown
Cleanup: Don't forget to clean up the temporary directory!
###Code
tmp_dir.cleanup()
###Output
_____no_output_____
###Markdown
Creating a STAC of imagery from Spacenet 5 data: Now, let's take what we've learned and create a Catalog with more data in it. Allowing PySTAC to read from AWS S3: PySTAC aims to be virtually zero-dependency (notwithstanding the why-isn't-this-in-stdlib datetime-util), so it doesn't have the ability to read from or write to anything but the local file system. However, we can hook into PySTAC's IO in the following way (a short usage sketch follows the registration cell). Learn more about how to use STAC_IO in the [documentation on the topic](https://pystac.readthedocs.io/en/latest/concepts.html#using-stac-io):
###Code
from urllib.parse import urlparse
import boto3
from pystac import STAC_IO
def my_read_method(uri):
parsed = urlparse(uri)
if parsed.scheme == 's3':
bucket = parsed.netloc
key = parsed.path[1:]
s3 = boto3.resource('s3')
obj = s3.Object(bucket, key)
return obj.get()['Body'].read().decode('utf-8')
else:
return STAC_IO.default_read_text_method(uri)
def my_write_method(uri, txt):
parsed = urlparse(uri)
if parsed.scheme == 's3':
bucket = parsed.netloc
key = parsed.path[1:]
s3 = boto3.resource("s3")
s3.Object(bucket, key).put(Body=txt)
else:
STAC_IO.default_write_text_method(uri, txt)
STAC_IO.read_text_method = my_read_method
STAC_IO.write_text_method = my_write_method
###Output
_____no_output_____
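###Markdown
Once registered, these hooks are used by any PySTAC call that reads or writes text, such as `Catalog.from_file` on an `s3://` HREF. A minimal sketch, assuming a hypothetical bucket name and AWS credentials with read access:
###Code
# Hypothetical sketch: with the read hook above, s3:// HREFs resolve transparently.
# 'my-stac-bucket' is a placeholder, not a bucket used elsewhere in this tutorial.
remote_catalog_uri = 's3://my-stac-bucket/spacenet5/catalog.json'
# remote_catalog = stac.Catalog.from_file(remote_catalog_uri)
###Output
_____no_output_____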
###Markdown
We'll need a utility to list keys for reading the lists of files from S3:
###Code
# From https://alexwlchan.net/2017/07/listing-s3-keys/
def get_s3_keys(bucket, prefix):
"""Generate all the keys in an S3 bucket."""
s3 = boto3.client('s3')
kwargs = {'Bucket': bucket, 'Prefix': prefix}
while True:
resp = s3.list_objects_v2(**kwargs)
for obj in resp['Contents']:
yield obj['Key']
try:
kwargs['ContinuationToken'] = resp['NextContinuationToken']
except KeyError:
break
###Output
_____no_output_____
###Markdown
Let's make a STAC of imagery over Moscow as part of the Spacenet 5 challenge. As a first step, we can list out the imagery and extract IDs from each of the chips.
###Code
moscow_training_chip_uris = list(get_s3_keys(bucket='spacenet-dataset',
prefix='spacenet/SN5_roads/train/AOI_7_Moscow/PS-MS'))
import re
chip_id_to_data = {}
def get_chip_id(uri):
    return re.search(r'.*_chip(\d+)\.', uri).group(1)
for uri in moscow_training_chip_uris:
chip_id = get_chip_id(uri)
chip_id_to_data[chip_id] = { 'img': 's3://spacenet-dataset/{}'.format(uri) }
###Output
_____no_output_____
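###Markdown
A quick sanity check of the chip-id regex on a sample key (hypothetical, but matching the pattern of the keys listed above):
###Code
# The greedy '.*' anchors the match at the final '_chip<digits>.' in the key.
sample_key = 'spacenet/SN5_roads/train/AOI_7_Moscow/PS-MS/SN5_roads_train_AOI_7_Moscow_PS-MS_chip996.tif'
print(get_chip_id(sample_key))  # -> '996'
###Output
_____no_output_____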
###Markdown
For this tutorial, we'll only take a subset of the data.
###Code
chip_id_to_data = dict(list(chip_id_to_data.items())[:10])
chip_id_to_data
###Output
_____no_output_____
###Markdown
Let's turn each of those chips into a STAC Item that represents the image.
###Code
chip_id_to_items = {}
###Output
_____no_output_____
###Markdown
We'll create core `Item`s for our imagery, but mark them with the `eo` extension as we did above, and store the `eo` data in a `Collection`. Note that the image CRS is WGS84 (Lat/Lng). If it weren't, we'd have to reproject the footprint to WGS84 in order to be compliant with the spec, which can easily be done with [pyproj](https://github.com/pyproj4/pyproj) (see the sketch after the loop below). Here we're taking advantage of `rasterio`'s ability to read S3 URIs, which only grabs the GeoTIFF metadata and does not pull the whole file down.
###Code
for chip_id in chip_id_to_data:
img_uri = chip_id_to_data[chip_id]['img']
print('Processing {}'.format(img_uri))
bbox, footprint = get_bbox_and_footprint(img_uri)
item = stac.Item(id='img_{}'.format(chip_id),
geometry=footprint,
bbox=bbox,
datetime=datetime.utcnow(),
properties={},
stac_extensions=['eo'])
item.add_asset(key='ps-ms',
asset=stac.EOAsset(href=img_uri,
media_type=stac.MediaType.COG,
bands=list(range(0, 8))))
chip_id_to_items[chip_id] = item
###Output
Processing s3://spacenet-dataset/spacenet/SN5_roads/train/AOI_7_Moscow/PS-MS/SN5_roads_train_AOI_7_Moscow_PS-MS_chip0.tif
Processing s3://spacenet-dataset/spacenet/SN5_roads/train/AOI_7_Moscow/PS-MS/SN5_roads_train_AOI_7_Moscow_PS-MS_chip1.tif
Processing s3://spacenet-dataset/spacenet/SN5_roads/train/AOI_7_Moscow/PS-MS/SN5_roads_train_AOI_7_Moscow_PS-MS_chip10.tif
Processing s3://spacenet-dataset/spacenet/SN5_roads/train/AOI_7_Moscow/PS-MS/SN5_roads_train_AOI_7_Moscow_PS-MS_chip100.tif
Processing s3://spacenet-dataset/spacenet/SN5_roads/train/AOI_7_Moscow/PS-MS/SN5_roads_train_AOI_7_Moscow_PS-MS_chip1000.tif
Processing s3://spacenet-dataset/spacenet/SN5_roads/train/AOI_7_Moscow/PS-MS/SN5_roads_train_AOI_7_Moscow_PS-MS_chip1001.tif
Processing s3://spacenet-dataset/spacenet/SN5_roads/train/AOI_7_Moscow/PS-MS/SN5_roads_train_AOI_7_Moscow_PS-MS_chip1002.tif
Processing s3://spacenet-dataset/spacenet/SN5_roads/train/AOI_7_Moscow/PS-MS/SN5_roads_train_AOI_7_Moscow_PS-MS_chip1003.tif
Processing s3://spacenet-dataset/spacenet/SN5_roads/train/AOI_7_Moscow/PS-MS/SN5_roads_train_AOI_7_Moscow_PS-MS_chip1004.tif
Processing s3://spacenet-dataset/spacenet/SN5_roads/train/AOI_7_Moscow/PS-MS/SN5_roads_train_AOI_7_Moscow_PS-MS_chip1005.tif
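###Markdown
The loop above assumes the footprints are already in WGS84. A minimal sketch of the reprojection mentioned there, assuming `pyproj` (version 2.2 or later) is installed and a hypothetical source CRS of EPSG:32637 (UTM zone 37N, which covers Moscow):
###Code
# Hypothetical sketch: reproject a GeoJSON footprint to WGS84 with pyproj + shapely.
# EPSG:32637 is an assumed source CRS for illustration; use the raster's actual CRS.
import pyproj
from shapely.geometry import shape, mapping
from shapely.ops import transform

def footprint_to_wgs84(footprint, src_crs='EPSG:32637'):
    project = pyproj.Transformer.from_crs(src_crs, 'EPSG:4326', always_xy=True).transform
    return mapping(transform(project, shape(footprint)))
###Output
_____no_output_____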
###Markdown
Creating the Collection: All of these images are over Moscow. In Spacenet 5, a couple of cities have imagery, which makes per-city Collections a good way to organize it. We can store all of the common `eo` metadata in the collection.
###Code
from shapely.geometry import (shape, MultiPolygon)
footprints = list(map(lambda i: shape(i.geometry).envelope,
chip_id_to_items.values()))
collection_bbox = MultiPolygon(footprints).bounds
spatial_extent = stac.SpatialExtent(bboxes=[collection_bbox])
datetimes = sorted(list(map(lambda i: i.datetime,
chip_id_to_items.values())))
temporal_extent = stac.TemporalExtent(intervals=[[datetimes[0], datetimes[-1]]])
collection_extent = stac.Extent(spatial=spatial_extent, temporal=temporal_extent)
common_properties = { 'eo:bands': [b.to_dict() for b in wv3_bands],
'eo:gsd': 0.3,
'eo:platform': 'Maxar',
'eo:instrument': 'WorldView3'
}
collection = stac.Collection(id='wv3-images',
description='Spacenet 5 images over Moscow',
extent=collection_extent,
properties=common_properties,
license='CC-BY-SA-4.0')
collection.add_items(chip_id_to_items.values())
collection.describe()
###Output
* <Collection id=wv3-images>
* <Item id=img_0>
* <Item id=img_1>
* <Item id=img_10>
* <Item id=img_100>
* <Item id=img_1000>
* <Item id=img_1001>
* <Item id=img_1002>
* <Item id=img_1003>
* <Item id=img_1004>
* <Item id=img_1005>
###Markdown
Now, we can create a Catalog and add the collection.
###Code
catalog = stac.Catalog(id='spacenet5', description='Spacenet 5 Data (Test)')
catalog.add_child(collection)
catalog.describe()
###Output
* <Catalog id=spacenet5>
* <Collection id=wv3-images>
* <Item id=img_0>
* <Item id=img_1>
* <Item id=img_10>
* <Item id=img_100>
* <Item id=img_1000>
* <Item id=img_1001>
* <Item id=img_1002>
* <Item id=img_1003>
* <Item id=img_1004>
* <Item id=img_1005>
###Markdown
Adding label items to the Spacenet 5 catalog: We can use the [label extension](https://github.com/radiantearth/stac-spec/tree/v0.8.1/extensions/label) of the STAC spec to represent the training data in our STAC. For this, we need to grab the URIs of the road GeoJSON files:
###Code
moscow_training_geojson_uris = list(get_s3_keys(bucket='spacenet-dataset',
prefix='spacenet/SN5_roads/train/AOI_7_Moscow/geojson_roads_speed/'))
for uri in moscow_training_geojson_uris:
chip_id = get_chip_id(uri)
if chip_id in chip_id_to_data:
chip_id_to_data[chip_id]['label'] = 's3://spacenet-dataset/{}'.format(uri)
###Output
_____no_output_____
###Markdown
We'll add the LabelItems to their own subcatalog; since they don't inherit the Collection's `eo` properties, they shouldn't go in the Collection.
###Code
label_catalog = stac.Catalog(id='spacenet-data-labels', description='Labels for Spacenet 5')
catalog.add_child(label_catalog)
###Output
_____no_output_____
###Markdown
We can check the pydocs to see what a LabelItem needs in order to fit the spec:
###Code
print(stac.LabelItem.__doc__)
###Output
A Label Item represents a polygon, set of polygons, or raster data defining
labels and label metadata and should be part of a Collection.
Args:
id (str): Provider identifier. Must be unique within the STAC.
geometry (dict): Defines the full footprint of the asset represented by this item,
formatted according to `RFC 7946, section 3.1 (GeoJSON)
<https://tools.ietf.org/html/rfc7946>`_.
bbox (List[float]): Bounding Box of the asset represented by this item using
either 2D or 3D geometries. The length of the array must be 2*n where n is the
number of dimensions.
datetime (Datetime): Datetime associated with this item.
properties (dict): A dictionary of additional metadata for the item.
label_desecription (str): A description of the label, how it was created,
and what it is recommended for
label_type (str): An ENUM of either vector label type or raster label type. Use
one of :class:`~pystac.LabelType`.
label_properties (dict or None): These are the names of the property field(s) in each
Feature of the label asset's FeatureCollection that contains the classes
(keywords from label:classes if the property defines classes).
If labels are rasters, this should be None.
label_classes (List[LabelClass]): Optional, but reqiured if ussing categorical data.
A list of LabelClasses defining the list of possible class names for each
label:properties. (e.g., tree, building, car, hippo)
label_tasks (str): Recommended to be a subset of 'regression', 'classification',
'detection', or 'segmentation', but may be an arbitrary value.
label_methods: Recommended to be a subset of 'automated' or 'manual',
but may be an arbitrary value.
label_overviews (List[LabelOverview]): Optional list of LabelOverview classes
that store counts (for classification-type data) or summary statistics (for
continuous numerical/regression data).
stac_extensions (List[str]): Optional list of extensions the Item implements.
href (str or None): Optional HREF for this item, which be set as the item's
self link's HREF.
collection (Collection): Optional Collection that this item is a part of.
Attributes:
id (str): Provider identifier. Unique within the STAC.
geometry (dict): Defines the full footprint of the asset represented by this item,
formatted according to `RFC 7946, section 3.1 (GeoJSON)
<https://tools.ietf.org/html/rfc7946>`_.
bbox (List[float]): Bounding Box of the asset represented by this item using
either 2D or 3D geometries. The length of the array is 2*n where n is the
number of dimensions.
datetime (Datetime): Datetime associated with this item.
properties (dict): A dictionary of additional metadata for the item.
label_desecription (str): A description of the label, how it was created,
and what it is recommended for
label_type (str): An ENUM of either vector label type or raster label type (one
of :class:`~pystac.LabelType`).
label_properties (dict or None): These are the names of the property field(s) in each
Feature of the label asset's FeatureCollection that contains the classes
(keywords from label:classes if the property defines classes).
If labels are rasters, this should be None.
label_classes (List[LabelClass]): Optional, but reqiured if ussing categorical data.
A list of LabelClasses defining the list of possible class names for each
label:properties. (e.g., tree, building, car, hippo)
label_tasks (str): Tasks these labels apply to. Usually a subset of 'regression',
'classification', 'detection', or 'segmentation', but may be an arbitrary value.
label_methods: Methods used for labeling. Usually a subset of 'automated' or 'manual',
but may be an arbitrary value.
label_overviews (List[LabelOverview]): Optional list of LabelOverview classes
that store counts (for classification-type data) or summary statistics (for
continuous numerical/regression data).
stac_extensions (List[str] or None): Optional list of extensions the Item implements.
collection_id (str or None): The Collection ID that this item belongs to, if any.
See:
`Item fields in the label extension spec <https://github.com/radiantearth/stac-spec/tree/v0.8.1/extensions/label#item-fields>`_
###Markdown
This loop creates our LabelItems and associates each with the appropriate source image Item.
###Code
for chip_id in chip_id_to_data:
img_item = collection.get_item('img_{}'.format(chip_id))
label_uri = chip_id_to_data[chip_id]['label']
label_item = stac.LabelItem(id='label_{}'.format(chip_id),
geometry=img_item.geometry,
bbox=img_item.bbox,
datetime=datetime.utcnow(),
properties={},
label_description="SpaceNet 5 Road labels",
label_type=stac.LabelType.VECTOR,
label_tasks=['segmentation', 'regression'])
label_item.add_source(img_item)
label_item.add_geojson_labels(label_uri)
label_catalog.add_item(label_item)
###Output
_____no_output_____
###Markdown
Now we have a STAC of training data!
###Code
catalog.describe()
label_item = catalog.get_child('spacenet-data-labels').get_item('label_1')
label_item.to_dict()
###Output
_____no_output_____
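###Markdown
With the `STAC_IO` write hook registered earlier, the finished catalog could also be published directly to S3. A minimal sketch (left commented out to avoid accidental writes), assuming a hypothetical bucket you have write access to:
###Code
# Hypothetical sketch: publish via the STAC_IO write hook registered above.
# 'my-stac-bucket' is a placeholder; saving requires s3:PutObject permission.
# catalog.normalize_hrefs('s3://my-stac-bucket/spacenet5')
# catalog.save(catalog_type=stac.CatalogType.ABSOLUTE_PUBLISHED)
###Output
_____no_output_____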
###Markdown
How to create STAC Catalogs STAC Community Sprint, Arlington, November 7th 2019 This notebook runs through some of the basics of using PySTAC to create a static STAC. It was part of a 30 minute presentation at the [community STAC sprint](https://github.com/radiantearth/community-sprints/tree/master/11052019-arlignton-va) in Arlington, VA in November 2019. This tutorial will require the `boto3`, `rasterio`, and `shapely` libraries:
###Code
!pip install boto3
!pip install rasterio
!pip install shapely
###Output
Requirement already satisfied: boto3 in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages
Requirement already satisfied: s3transfer<0.3.0,>=0.2.0 in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (from boto3)
Requirement already satisfied: botocore<1.14.0,>=1.13.8 in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (from boto3)
Requirement already satisfied: jmespath<1.0.0,>=0.7.1 in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (from boto3)
Requirement already satisfied: urllib3<1.26,>=1.20; python_version >= "3.4" in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (from botocore<1.14.0,>=1.13.8->boto3)
Requirement already satisfied: python-dateutil<3.0.0,>=2.1; python_version >= "2.7" in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (from botocore<1.14.0,>=1.13.8->boto3)
Requirement already satisfied: docutils<0.16,>=0.10 in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (from botocore<1.14.0,>=1.13.8->boto3)
Requirement already satisfied: six>=1.5 in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (from python-dateutil<3.0.0,>=2.1; python_version >= "2.7"->botocore<1.14.0,>=1.13.8->boto3)
[33mYou are using pip version 9.0.3, however version 19.3.1 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.[0m
Requirement already satisfied: rasterio in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages
Requirement already satisfied: numpy in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (from rasterio)
Requirement already satisfied: attrs in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (from rasterio)
Requirement already satisfied: snuggs>=1.4.1 in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (from rasterio)
Requirement already satisfied: click-plugins in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (from rasterio)
Requirement already satisfied: click<8,>=4.0 in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (from rasterio)
Requirement already satisfied: cligj>=0.5 in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (from rasterio)
Requirement already satisfied: affine in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (from rasterio)
Requirement already satisfied: pyparsing>=2.1.6 in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (from snuggs>=1.4.1->rasterio)
[33mYou are using pip version 9.0.3, however version 19.3.1 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.[0m
Requirement already satisfied: shapely in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages
[33mYou are using pip version 9.0.3, however version 19.3.1 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.[0m
###Markdown
We can import pystac with the alias `stac` to access all of the API we need (saving a glorious 2 characters):
###Code
import pystac as stac
###Output
_____no_output_____
###Markdown
Creating a catalog from a local file To give us some material to work with, lets download a single image from the [Spacenet 5 challenge](https://www.topcoder.com/challenges/30099956). We'll use a temporary directory to save off our single-item STAC.
###Code
import os
import urllib.request
from tempfile import TemporaryDirectory
tmp_dir = TemporaryDirectory()
img_path = os.path.join(tmp_dir.name, 'image.tif')
url = ('http://spacenet-dataset.s3.amazonaws.com/'
'spacenet/SN5_roads/train/AOI_7_Moscow/MS/'
'SN5_roads_train_AOI_7_Moscow_MS_chip996.tif')
urllib.request.urlretrieve(url, img_path)
###Output
_____no_output_____
###Markdown
We want to create a Catalog. Let's check the pydocs for `Catalog` to see what information we'll need. (We use `__doc__` instead of `help()` here to avoid printing out all the docs for the class.)
###Code
print(stac.Catalog.__doc__)
###Output
A PySTAC Catalog represents a STAC catalog in memory.
A Catalog is a :class:`~pystac.STACObject` that may contain children,
which are instances of :class:`~pystac.Catalog` or :class:`~pystac.Collection`,
as well as :class:`~pystac.Item` s.
Args:
id (str): Identifier for the catalog. Must be unique within the STAC.
description (str): Detailed multi-line description to fully explain the catalog.
`CommonMark 0.28 syntax <http://commonmark.org/>`_ MAY be used for rich text
representation.
title (str or None): Optional short descriptive one-line title for the catalog.
stac_extensions (List[str]): Optional list of extensions the Catalog implements.
href (str or None): Optional HREF for this catalog, which be set as the catalog's
self link's HREF.
Attributes:
id (str): Identifier for the catalog.
description (str): Detailed multi-line description to fully explain the catalog.
title (str or None): Optional short descriptive one-line title for the catalog.
stac_extensions (List[str] or None): Optional list of extensions the Catalog implements.
links (List[Link]): A list of :class:`~pystac.Link` objects representing
all links associated with this Catalog.
###Markdown
Let's just give an ID and a description. We don't have to worry about the HREF right now; that will be set later.
###Code
catalog = stac.Catalog(id='test-catalog', description='Tutorial catalog.')
###Output
_____no_output_____
###Markdown
There are no children or items in the catalog, since we haven't added anything yet.
###Code
print(list(catalog.get_children()))
print(list(catalog.get_items()))
###Output
[]
[]
###Markdown
We'll now create an Item to represent the image. Check the pydocs to see what you need to supply:
###Code
print(stac.Item.__doc__)
###Output
An Item is the core granular entity in a STAC, containing the core metadata
that enables any client to search or crawl online catalogs of spatial 'assets' -
satellite imagery, derived data, DEM's, etc.
Args:
id (str): Provider identifier. Must be unique within the STAC.
geometry (dict): Defines the full footprint of the asset represented by this item,
formatted according to `RFC 7946, section 3.1 (GeoJSON)
<https://tools.ietf.org/html/rfc7946>`_.
bbox (List[float]): Bounding Box of the asset represented by this item using
either 2D or 3D geometries. The length of the array must be 2*n where n is the
number of dimensions.
datetime (Datetime): Datetime associated with this item.
properties (dict): A dictionary of additional metadata for the item.
stac_extensions (List[str]): Optional list of extensions the Item implements.
href (str or None): Optional HREF for this item, which be set as the item's
self link's HREF.
collection (Collection or str): The Collection or Collection ID that this item
belongs to.
Attributes:
id (str): Provider identifier. Unique within the STAC.
geometry (dict): Defines the full footprint of the asset represented by this item,
formatted according to `RFC 7946, section 3.1 (GeoJSON)
<https://tools.ietf.org/html/rfc7946>`_.
bbox (List[float]): Bounding Box of the asset represented by this item using
either 2D or 3D geometries. The length of the array is 2*n where n is the
number of dimensions.
datetime (Datetime): Datetime associated with this item.
properties (dict): A dictionary of additional metadata for the item.
stac_extensions (List[str] or None): Optional list of extensions the Item implements.
collection (Collection or None): Collection that this item is a part of.
links (List[Link]): A list of :class:`~pystac.Link` objects representing
all links associated with this STACObject.
assets (Dict[str, Asset]): Dictionary of asset objects that can be downloaded,
each with a unique key.
collection_id (str or None): The Collection ID that this item belongs to, if any.
###Markdown
Using [rasterio](https://rasterio.readthedocs.io/en/stable/), we can pull out the bounding box of the image to use for the image metadata. If the image contained a NoData border, we would ideally pull out the footprint and save it as the geometry; in this case, we're working with a small chip the most likely has no NoData values.
###Code
import rasterio
from shapely.geometry import Polygon, mapping
def get_bbox_and_footprint(raster_uri):
with rasterio.open(raster_uri) as ds:
bounds = ds.bounds
bbox = [bounds.left, bounds.bottom, bounds.right, bounds.top]
footprint = Polygon([
[bounds.left, bounds.bottom],
[bounds.left, bounds.top],
[bounds.right, bounds.top],
[bounds.right, bounds.bottom]
])
return (bbox, mapping(footprint))
bbox, footprint = get_bbox_and_footprint(img_path)
print(bbox)
print(footprint)
###Output
[37.6616853489879, 55.73478197572927, 37.66573047610874, 55.73882710285011]
{'type': 'Polygon', 'coordinates': (((37.6616853489879, 55.73478197572927), (37.6616853489879, 55.73882710285011), (37.66573047610874, 55.73882710285011), (37.66573047610874, 55.73478197572927), (37.6616853489879, 55.73478197572927)),)}
###Markdown
We're also using `datetime.utcnow()` to supply the required datetime property for our Item. Since this is a required property, you might often find yourself making up a time to fill in if you don't know the exact capture time.
###Code
from datetime import datetime
item = stac.Item(id='local-image',
geometry=footprint,
bbox=bbox,
datetime=datetime.utcnow(),
properties={})
###Output
_____no_output_____
###Markdown
We haven't added it to a catalog yet, so it's parent isn't set. Once we add it to the catalog, we can see it correctly links to it's parent.
###Code
item.get_parent() is None
catalog.add_item(item)
item.get_parent()
###Output
_____no_output_____
###Markdown
`describe()` is a useful method on `Catalog` - but be careful when using it on large catalogs, as it will walk the entire tree of the STAC.
###Code
catalog.describe()
###Output
* <Catalog id=test-catalog>
* <Item id=local-image>
###Markdown
Adding AssetsWe've created an Item, but there aren't any assets associated with it. Let's create one:
###Code
print(stac.Asset.__doc__)
item.add_asset(key='image', asset=stac.Asset(href=img_path, media_type=stac.MediaType.GEOTIFF))
###Output
_____no_output_____
###Markdown
At any time we can call `to_dict()` on STAC objects to see how the STAC JSON is shaping up. Notice the asset is now set:
###Code
import json
print(json.dumps(item.to_dict(), indent=4))
###Output
{
"type": "Feature",
"stac_version": "0.8.1",
"id": "local-image",
"properties": {
"datetime": "2019-11-05 16:43:22Z"
},
"geometry": {
"type": "Polygon",
"coordinates": [
[
[
37.6616853489879,
55.73478197572927
],
[
37.6616853489879,
55.73882710285011
],
[
37.66573047610874,
55.73882710285011
],
[
37.66573047610874,
55.73478197572927
],
[
37.6616853489879,
55.73478197572927
]
]
]
},
"bbox": [
37.6616853489879,
55.73478197572927,
37.66573047610874,
55.73882710285011
],
"links": [
{
"rel": "root",
"href": null,
"type": "application/json"
},
{
"rel": "parent",
"href": null,
"type": "application/json"
}
],
"assets": {
"image": {
"href": "/var/folders/sv/zr8j0t4j1f726nhlt3vb8c300000gn/T/tmpytmf1tsx/image.tif",
"type": "image/vnd.stac.geotiff"
}
}
}
###Markdown
Note that the link `href` properties are `null`. This is OK, as we're working with the STAC in memory. Next, we'll talk about writing the catalog out, and how to set those HREFs. Saving the catalog As the JSON above indicates, there's no HREFs set on these in-memory items. PySTAC uses the `self` link on STAC objects to track where the file lives. Because we haven't set them, they evaluate to `None`:
###Code
print(catalog.get_self_href() is None)
print(item.get_self_href() is None)
###Output
True
True
###Markdown
In order to set them, we can use `normalize_hrefs`. This method will create a normalized set of HREFs for each STAC object in the catalog, according to the [best practices document](https://github.com/radiantearth/stac-spec/blob/v0.8.1/best-practices.mdcatalog-layout)'s recommendations on how to lay out a catalog.
###Code
catalog.normalize_hrefs(os.path.join(tmp_dir.name, 'stac'))
###Output
_____no_output_____
###Markdown
Now that we've normalized to a root directory (the temporary directory), we see that the `self` links are set:
###Code
print(catalog.get_self_href())
print(item.get_self_href())
###Output
/var/folders/sv/zr8j0t4j1f726nhlt3vb8c300000gn/T/tmpytmf1tsx/stac/catalog.json
/var/folders/sv/zr8j0t4j1f726nhlt3vb8c300000gn/T/tmpytmf1tsx/stac/local-image/local-image.json
###Markdown
We can now call `save` on the catalog, which will recursively save all the STAC objects to their respective self HREFs.Save requires a `CatalogType` to be set. You can review the [API docs](https://pystac.readthedocs.io/en/stable/api.htmlcatalogtype) on `CatalogType` to see what each type means (unfortunately `help` doesn't show docstrings for attributes).
###Code
catalog.save(catalog_type=stac.CatalogType.SELF_CONTAINED)
!ls {tmp_dir.name}/stac/*
with open(catalog.get_self_href()) as f:
print(f.read())
with open(item.get_self_href()) as f:
print(f.read())
###Output
{
"type": "Feature",
"stac_version": "0.8.1",
"id": "local-image",
"properties": {
"datetime": "2019-11-05 16:43:22Z"
},
"geometry": {
"type": "Polygon",
"coordinates": [
[
[
37.6616853489879,
55.73478197572927
],
[
37.6616853489879,
55.73882710285011
],
[
37.66573047610874,
55.73882710285011
],
[
37.66573047610874,
55.73478197572927
],
[
37.6616853489879,
55.73478197572927
]
]
]
},
"bbox": [
37.6616853489879,
55.73478197572927,
37.66573047610874,
55.73882710285011
],
"links": [
{
"rel": "root",
"href": "../catalog.json",
"type": "application/json"
},
{
"rel": "parent",
"href": "../catalog.json",
"type": "application/json"
}
],
"assets": {
"image": {
"href": "/var/folders/sv/zr8j0t4j1f726nhlt3vb8c300000gn/T/tmpytmf1tsx/image.tif",
"type": "image/vnd.stac.geotiff"
}
}
}
###Markdown
As you can see, all links are saved with relative paths. That's because we used `catalog_type=CatalogType.SELF_CONTAINED`. If we save an Absolute Published catalog, we'll see absolute paths:
###Code
catalog.save(catalog_type=stac.CatalogType.ABSOLUTE_PUBLISHED)
###Output
_____no_output_____
###Markdown
Now the links included in the STAC item are all absolute:
###Code
with open(item.get_self_href()) as f:
print(f.read())
###Output
{
"type": "Feature",
"stac_version": "0.8.1",
"id": "local-image",
"properties": {
"datetime": "2019-11-05 16:43:22Z"
},
"geometry": {
"type": "Polygon",
"coordinates": [
[
[
37.6616853489879,
55.73478197572927
],
[
37.6616853489879,
55.73882710285011
],
[
37.66573047610874,
55.73882710285011
],
[
37.66573047610874,
55.73478197572927
],
[
37.6616853489879,
55.73478197572927
]
]
]
},
"bbox": [
37.6616853489879,
55.73478197572927,
37.66573047610874,
55.73882710285011
],
"links": [
{
"rel": "self",
"href": "/var/folders/sv/zr8j0t4j1f726nhlt3vb8c300000gn/T/tmpytmf1tsx/stac/local-image/local-image.json",
"type": "application/json"
},
{
"rel": "root",
"href": "/var/folders/sv/zr8j0t4j1f726nhlt3vb8c300000gn/T/tmpytmf1tsx/stac/catalog.json",
"type": "application/json"
},
{
"rel": "parent",
"href": "/var/folders/sv/zr8j0t4j1f726nhlt3vb8c300000gn/T/tmpytmf1tsx/stac/catalog.json",
"type": "application/json"
}
],
"assets": {
"image": {
"href": "/var/folders/sv/zr8j0t4j1f726nhlt3vb8c300000gn/T/tmpytmf1tsx/image.tif",
"type": "image/vnd.stac.geotiff"
}
}
}
###Markdown
Notice that the Asset HREF is absolute in both cases. We can make the Asset HREF relative to the STAC Item by using `.make_all_asset_hrefs_relative()`:
###Code
catalog.make_all_asset_hrefs_relative()
catalog.save(catalog_type=stac.CatalogType.SELF_CONTAINED)
with open(item.get_self_href()) as f:
print(f.read())
###Output
{
"type": "Feature",
"stac_version": "0.8.1",
"id": "local-image",
"properties": {
"datetime": "2019-11-05 16:43:22Z"
},
"geometry": {
"type": "Polygon",
"coordinates": [
[
[
37.6616853489879,
55.73478197572927
],
[
37.6616853489879,
55.73882710285011
],
[
37.66573047610874,
55.73882710285011
],
[
37.66573047610874,
55.73478197572927
],
[
37.6616853489879,
55.73478197572927
]
]
]
},
"bbox": [
37.6616853489879,
55.73478197572927,
37.66573047610874,
55.73882710285011
],
"links": [
{
"rel": "root",
"href": "../catalog.json",
"type": "application/json"
},
{
"rel": "parent",
"href": "../catalog.json",
"type": "application/json"
}
],
"assets": {
"image": {
"href": "../../image.tif",
"type": "image/vnd.stac.geotiff"
}
}
}
###Markdown
Creating an EO ItemIn the code above, we encapsulated our imagery as a core STAC item. However, there's more information that we can encapsulate, given that we know this is a World View 3 image. We can do this by creating an `EOItem`, which is an Item that is extended via the [eo extension](https://github.com/radiantearth/stac-spec/tree/v0.8.1/extensions/eohttps://github.com/radiantearth/stac-spec/tree/v0.8.1/extensions/eo):
###Code
print(stac.EOItem.__doc__)
###Output
EOItem represents a snapshot of the earth for a single date and time.
Args:
id (str): Provider identifier. Must be unique within the STAC.
geometry (dict): Defines the full footprint of the asset represented by this item,
formatted according to `RFC 7946, section 3.1 (GeoJSON)
<https://tools.ietf.org/html/rfc7946>`_.
bbox (List[float]): Bounding Box of the asset represented by this item using
either 2D or 3D geometries. The length of the array must be 2*n where n is the
number of dimensions.
datetime (Datetime): Datetime associated with this item.
properties (dict): A dictionary of additional metadata for the item.
gsd (float): Ground Sample Distance at the sensor.
platform (str): Unique name of the specific platform to which the instrument is attached.
instrument (str): Name of instrument or sensor used (e.g., MODIS, ASTER, OLI, Canon F-1).
bands (List[Band]): This is a list of :class:`~pystac.Band` objects that represent
the available bands.
constellation (str): Optional name of the constellation to which the platform belongs.
epsg (int): Optional `EPSG code <http://www.epsg-registry.org/>`_.
cloud_cover (float): Optional estimate of cloud cover as a percentage (0-100) of the
entire scene. If not available the field should not be provided.
off_nadir (float): Optional viewing angle. The angle from the sensor between
nadir (straight down) and the scene center. Measured in degrees (0-90).
azimuth (float): Optional viewing azimuth angle. The angle measured from the
sub-satellite point (point on the ground below the platform) between the
scene center and true north. Measured clockwise from north in degrees (0-360).
sun_azimuth (float): Optional sun azimuth angle. From the scene center point on
the ground, this is the angle between truth north and the sun. Measured clockwise
in degrees (0-360).
sun_elevation (float): Optional sun elevation angle. The angle from the tangent of
the scene center point to the sun. Measured from the horizon in degrees (0-90).
stac_extensions (List[str]): Optional list of extensions the Item implements.
href (str or None): Optional HREF for this item, which be set as the item's
self link's HREF.
collection (Collection or str): The Collection or Collection ID that this item
belongs to.
Attributes:
id (str): Provider identifier. Unique within the STAC.
geometry (dict): Defines the full footprint of the asset represented by this item,
formatted according to `RFC 7946, section 3.1 (GeoJSON)
<https://tools.ietf.org/html/rfc7946>`_.
bbox (List[float]): Bounding Box of the asset represented by this item using
either 2D or 3D geometries. The length of the array is 2*n where n is the
number of dimensions.
datetime (Datetime): Datetime associated with this item.
properties (dict): A dictionary of additional metadata for the item.
stac_extensions (List[str] or None): Optional list of extensions the Item implements.
collection (Collection or None): Collection that this item is a part of.
gsd (float): Ground Sample Distance at the sensor.
platform (str): Unique name of the specific platform to which the instrument is attached.
instrument (str): Name of instrument or sensor used (e.g., MODIS, ASTER, OLI, Canon F-1).
bands (List[Band]): This is a list of :class:`~pystac.Band` objects that represent
the available bands.
constellation (str or None): Name of the constellation to which the platform belongs.
epsg (int or None): `EPSG code <http://www.epsg-registry.org/>`_.
cloud_cover (float or None): Estimate of cloud cover as a percentage (0-100) of the
entire scene. If not available the field should not be provided.
off_nadir (float or None): Viewing angle. The angle from the sensor between
nadir (straight down) and the scene center. Measured in degrees (0-90).
azimuth (float or None): Viewing azimuth angle. The angle measured from the
sub-satellite point (point on the ground below the platform) between the
scene center and true north. Measured clockwise from north in degrees (0-360).
sun_azimuth (float or None): Sun azimuth angle. From the scene center point on
the ground, this is the angle between truth north and the sun. Measured clockwise
in degrees (0-360).
sun_elevation (float or None): Sun elevation angle. The angle from the tangent of
the scene center point to the sun. Measured from the horizon in degrees (0-90).
links (List[Link]): A list of :class:`~pystac.Link` objects representing
all links associated with this STACObject.
assets (Dict[str, Asset]): Dictionary of asset objects that can be downloaded,
each with a unique key.
collection_id (str or None): The Collection ID that this item belongs to, if any.
###Markdown
To create the EOItem, we'll need to encode some more information. First, let's define the bands of World View 3:
###Code
# From: https://www.spaceimagingme.com/downloads/sensors/datasheets/DG_WorldView3_DS_2014.pdf
wv3_bands = [stac.Band(name='Coastal', description='Coastal: 400 - 450 nm', common_name='coastal'),
stac.Band(name='Blue', description='Blue: 450 - 510 nm', common_name='blue'),
stac.Band(name='Green', description='Green: 510 - 580 nm', common_name='green'),
stac.Band(name='Yellow', description='Yellow: 585 - 625 nm', common_name='yellow'),
stac.Band(name='Red', description='Red: 630 - 690 nm', common_name='red'),
stac.Band(name='Red Edge', description='Red Edge: 705 - 745 nm', common_name='rededge'),
stac.Band(name='Near-IR1', description='Near-IR1: 770 - 895 nm', common_name='nir08'),
stac.Band(name='Near-IR2', description='Near-IR2: 860 - 1040 nm', common_name='nir09')]
###Output
_____no_output_____
###Markdown
We can now create an EO Item, and add it to our catalog:
###Code
eo_item = stac.EOItem(id='local-image-eo',
geometry=footprint,
bbox=bbox,
datetime=datetime.utcnow(),
properties={},
gsd=0.3,
platform="Maxar",
instrument="WorldView3",
bands=wv3_bands)
eo_item
eo_item.add_asset(key='image', asset=stac.EOAsset(href=img_path,
media_type=stac.MediaType.GEOTIFF,
bands=list(range(0,8))))
###Output
_____no_output_____
###Markdown
Let's clear the in-memory catalog, add the EO item, and save to a new STAC:
###Code
catalog.clear_items()
list(catalog.get_items())
catalog.add_item(eo_item)
list(catalog.get_items())
catalog.normalize_and_save(root_href=os.path.join(tmp_dir.name, 'stac-eo'),
catalog_type=stac.CatalogType.SELF_CONTAINED)
###Output
_____no_output_____
###Markdown
Now, if we read the catalog from the filesystem, PySTAC recognizes the EOItem and loads it in with the correct type:
###Code
catalog2 = stac.Catalog.from_file(os.path.join(tmp_dir.name, 'stac-eo', 'catalog.json'))
list(catalog2.get_items())
next(catalog2.get_all_items()).assets
import json
print(json.dumps(eo_item.to_dict(), indent=4))
###Output
{
"type": "Feature",
"stac_version": "0.8.1",
"id": "local-image-eo",
"properties": {
"datetime": "2019-11-05 16:43:28Z",
"eo:gsd": 0.3,
"eo:platform": "Maxar",
"eo:instrument": "WorldView3",
"eo:bands": [
{
"name": "Coastal",
"common_name": "coastal",
"description": "Coastal: 400 - 450 nm"
},
{
"name": "Blue",
"common_name": "blue",
"description": "Blue: 450 - 510 nm"
},
{
"name": "Green",
"common_name": "green",
"description": "Green: 510 - 580 nm"
},
{
"name": "Yellow",
"common_name": "yellow",
"description": "Yellow: 585 - 625 nm"
},
{
"name": "Red",
"common_name": "red",
"description": "Red: 630 - 690 nm"
},
{
"name": "Red Edge",
"common_name": "rededge",
"description": "Red Edge: 705 - 745 nm"
},
{
"name": "Near-IR1",
"common_name": "nir08",
"description": "Near-IR1: 770 - 895 nm"
},
{
"name": "Near-IR2",
"common_name": "nir09",
"description": "Near-IR2: 860 - 1040 nm"
}
]
},
"geometry": {
"type": "Polygon",
"coordinates": [
[
[
37.6616853489879,
55.73478197572927
],
[
37.6616853489879,
55.73882710285011
],
[
37.66573047610874,
55.73882710285011
],
[
37.66573047610874,
55.73478197572927
],
[
37.6616853489879,
55.73478197572927
]
]
]
},
"bbox": [
37.6616853489879,
55.73478197572927,
37.66573047610874,
55.73882710285011
],
"links": [
{
"rel": "self",
"href": "/var/folders/sv/zr8j0t4j1f726nhlt3vb8c300000gn/T/tmpytmf1tsx/stac-eo/local-image-eo/local-image-eo.json",
"type": "application/json"
},
{
"rel": "root",
"href": "../catalog.json",
"type": "application/json"
},
{
"rel": "parent",
"href": "../catalog.json",
"type": "application/json"
}
],
"assets": {
"image": {
"href": "/var/folders/sv/zr8j0t4j1f726nhlt3vb8c300000gn/T/tmpytmf1tsx/image.tif",
"type": "image/vnd.stac.geotiff",
"eo:bands": [
0,
1,
2,
3,
4,
5,
6,
7
]
}
},
"stac_extensions": [
"eo"
]
}
###Markdown
CollectionsCollections are a subtype of Catalog that have some additional properties to make them more searchable. They also can define common properties so that items in the collection don't have to duplicate common data for each item. Let's create a collection to hold common properties between two images from the Spacenet 5 challenge.First we'll get another image, and it's bbox and footprint:
###Code
url2 = ('http://spacenet-dataset.s3.amazonaws.com/'
'spacenet/SN5_roads/train/AOI_7_Moscow/MS/'
'SN5_roads_train_AOI_7_Moscow_MS_chip997.tif')
img_path2 = os.path.join(tmp_dir.name, 'image.tif')
urllib.request.urlretrieve(url2, img_path2)
bbox2, footprint2 = get_bbox_and_footprint(img_path2)
###Output
_____no_output_____
###Markdown
We can take a look at the pydocs for Collection to see what information we need to supply in order to satisfy the spec.
###Code
print(stac.Collection.__doc__)
###Output
A Collection extends the Catalog spec with additional metadata that helps
enable discovery.
Args:
id (str): Identifier for the collection. Must be unique within the STAC.
description (str): Detailed multi-line description to fully explain the collection.
`CommonMark 0.28 syntax <http://commonmark.org/>`_ MAY be used for rich text
representation.
extent (Extent): Spatial and temporal extents that describe the bounds of
all items contained within this Collection.
title (str or None): Optional short descriptive one-line title for the collection.
stac_extensions (List[str]): Optional list of extensions the Collection implements.
href (str or None): Optional HREF for this collection, which be set as the collection's
self link's HREF.
license (str): Collection's license(s) as a `SPDX License identifier
<https://spdx.org/licenses/>`_ or `expression
<https://spdx.org/spdx-specification-21-web-version#h.jxpfx0ykyb60>`_. Defaults
to 'proprietary'.
keywords (List[str]): Optional list of keywords describing the collection.
version (str): Optional version of the Collection.
providers (List[Provider]): Optional list of providers of this Collection.
properties (dict): Optional dict of common fields across referenced items.
summaries (dict): An optional map of property summaries,
either a set of values or statistics such as a range.
Attributes:
id (str): Identifier for the collection.
description (str): Detailed multi-line description to fully explain the collection.
extent (Extent): Spatial and temporal extents that describe the bounds of
all items contained within this Collection.
title (str or None): Optional short descriptive one-line title for the collection.
stac_extensions (List[str]): Optional list of extensions the Collection implements.
keywords (List[str] or None): Optional list of keywords describing the collection.
version (str or None): Optional version of the Collection.
providers (List[Provider] or None): Optional list of providers of this Collection.
properties (dict or None): Optional dict of common fields across referenced items.
summaries (dict or None): An optional map of property summaries,
either a set of values or statistics such as a range.
links (List[Link]): A list of :class:`~pystac.Link` objects representing
all links associated with this Collection.
###Markdown
Beyond what a Catalog reqiures, a Collection requires a license, and an `Extent` that describes the range of space and time that the items it hold occupy.
###Code
print(stac.Extent.__doc__)
###Output
Describes the spatio-temporal extents of a Collection.
Args:
spatial (SpatialExtent): Potential spatial extent covered by the collection.
temporal (TemporalExtent): Potential temporal extent covered by the collection.
Attributes:
spatial (SpatialExtent): Potential spatial extent covered by the collection.
temporal (TemporalExtent): Potential temporal extent covered by the collection.
###Markdown
An Extent is comprised of a SpatialExtent and a TemporalExtent. These hold one or more bounding boxes and time intervals, respectively, that completely cover the items contained in the collections.Let's start with creating two new items - these will be core Items, not `EOItems`, although they will be imparted with `eo` information by the collection. This is why we add `eo` to the `stac_extensions`. We are also adding `EOAssets` to the Items, so that the assets have the proper `eo:bands` metadata associated with them:
###Code
collection_item1 = stac.Item(id='local-image-col-1',
geometry=footprint,
bbox=bbox,
datetime=datetime.utcnow(),
properties={},
stac_extensions=['eo'])
collection_item1.add_asset('image', stac.EOAsset(href=img_path,
media_type=stac.MediaType.GEOTIFF,
bands=list(range(0,8))))
collection_item2 = stac.Item(id='local-image-col-2',
geometry=footprint2,
bbox=bbox2,
datetime=datetime.utcnow(),
properties={},
stac_extensions=['eo'])
collection_item2.add_asset('image', stac.EOAsset(href=img_path,
media_type=stac.MediaType.GEOTIFF,
bands=list(range(0,8))))
###Output
_____no_output_____
###Markdown
We can use our two items' metadata to find out what the proper bounds are:
###Code
from shapely.geometry import shape
unioned_footprint = shape(footprint).union(shape(footprint2))
collection_bbox = list(unioned_footprint.bounds)
spatial_extent = stac.SpatialExtent(bboxes=[collection_bbox])
collection_interval = sorted([collection_item1.datetime, collection_item2.datetime])
temporal_extent = stac.TemporalExtent(intervals=[collection_interval])
collection_extent = stac.Extent(spatial=spatial_extent, temporal=temporal_extent)
###Output
_____no_output_____
###Markdown
We can list the common properties for the items, with their proper extension names, and use it in the Collection properties:
###Code
common_properties = { 'eo:bands': [b.to_dict() for b in wv3_bands],
'eo:gsd': 0.3,
'eo:platform': 'Maxar',
'eo:instrument': 'WorldView3'
}
collection = stac.Collection(id='wv3-images',
description='Spacenet 5 images over Moscow',
extent=collection_extent,
properties=common_properties,
license='CC-BY-SA-4.0')
###Output
_____no_output_____
###Markdown
Now if we add our items to our Collection, and our Collection to our Catalog, we get the following STAC that can be saved:
###Code
collection.add_items([collection_item1, collection_item2])
catalog.clear_items()
catalog.clear_children()
catalog.add_child(collection)
catalog.describe()
catalog.normalize_and_save(root_href=os.path.join(tmp_dir.name, 'stac-collection'),
catalog_type=stac.CatalogType.SELF_CONTAINED)
###Output
_____no_output_____
###Markdown
Notice our collection item does not have any of the `eo` metadata in it's properties:
###Code
collection_item1.to_dict()
###Output
_____no_output_____
###Markdown
However, when we read the catalog in, the collection information is merged with the item metadata, and we get `EOItem`s in our STAC:
###Code
catalog3 = stac.Catalog.from_file(os.path.join(tmp_dir.name, 'stac-collection', 'catalog.json'))
catalog3.describe()
col_items = list(catalog3.get_all_items())
col_items[0].bands
###Output
_____no_output_____
###Markdown
CleanupDon't forget to clean up the temporary directory!
###Code
tmp_dir.cleanup()
###Output
_____no_output_____
###Markdown
Creating a STAC of imagery from Spacenet 5 data Now, let's take what we've learned and create a Catalog with more data in it. Allowing PySTAC to read from AWS S3PySTAC aims to be virtually zero-dependency (notwithstanding the why-isn't-this-in-stdlib datetime-util), so it doesn't have the ability to read from or write to anything but the local file system. However, we can hook into PySTAC's IO in the following way. Learn more about how to use STAC_IO in the [documentation on the topic](https://pystac.readthedocs.io/en/latest/concepts.htmlusing-stac-io):
###Code
from urllib.parse import urlparse
import boto3
from pystac import STAC_IO
def my_read_method(uri):
parsed = urlparse(uri)
if parsed.scheme == 's3':
bucket = parsed.netloc
key = parsed.path[1:]
s3 = boto3.resource('s3')
obj = s3.Object(bucket, key)
return obj.get()['Body'].read().decode('utf-8')
else:
return STAC_IO.default_read_text_method(uri)
def my_write_method(uri, txt):
parsed = urlparse(uri)
if parsed.scheme == 's3':
bucket = parsed.netloc
key = parsed.path[1:]
s3 = boto3.resource("s3")
s3.Object(bucket, key).put(Body=txt)
else:
STAC_IO.default_write_text_method(uri, txt)
STAC_IO.read_text_method = my_read_method
STAC_IO.write_text_method = my_write_method
###Output
_____no_output_____
###Markdown
We'll need a utility to list keys for reading the lists of files from S3:
###Code
# From https://alexwlchan.net/2017/07/listing-s3-keys/
def get_s3_keys(bucket, prefix):
"""Generate all the keys in an S3 bucket."""
s3 = boto3.client('s3')
kwargs = {'Bucket': bucket, 'Prefix': prefix}
while True:
resp = s3.list_objects_v2(**kwargs)
for obj in resp['Contents']:
yield obj['Key']
try:
kwargs['ContinuationToken'] = resp['NextContinuationToken']
except KeyError:
break
###Output
_____no_output_____
###Markdown
Let's make a STAC of imagery over Moscow as part of the Spacenet 5 challenge. As a first step, we can list out the imagery and extract IDs from each of the chips.
###Code
moscow_training_chip_uris = list(get_s3_keys(bucket='spacenet-dataset',
prefix='spacenet/SN5_roads/train/AOI_7_Moscow/PS-MS'))
import re
chip_id_to_data = {}
def get_chip_id(uri):
return re.search(r'.*\_chip(\d+)\.', uri).group(1)
for uri in moscow_training_chip_uris:
chip_id = get_chip_id(uri)
chip_id_to_data[chip_id] = { 'img': 's3://spacenet-dataset/{}'.format(uri) }
###Output
_____no_output_____
###Markdown
For this tutorial, we'll only take a subset of the data.
###Code
chip_id_to_data = dict(list(chip_id_to_data.items())[:10])
chip_id_to_data
###Output
_____no_output_____
###Markdown
Let's turn each of those chips into a STAC Item that represents the image.
###Code
chip_id_to_items = {}
###Output
_____no_output_____
###Markdown
We'll create core `Item`s for our imagery, but mark them with the `eo` extension as we did above, and store the `eo` data in a `Collection`.Note that the image CRS is in WGS:84 (Lat/Lng). If it wasn't, we'd have to reproject the footprint to WGS:84 in order to be compliant with the spec (which can easily be done with [pyproj](https://github.com/pyproj4/pyproj)).Here we're taking advantage of `rasterio`'s ability to read S3 URIs, which only grabs the GeoTIFF metadata and does not pull the whole file down.
###Code
for chip_id in chip_id_to_data:
img_uri = chip_id_to_data[chip_id]['img']
print('Processing {}'.format(img_uri))
bbox, footprint = get_bbox_and_footprint(img_uri)
item = stac.Item(id='img_{}'.format(chip_id),
geometry=footprint,
bbox=bbox,
datetime=datetime.utcnow(),
properties={},
stac_extensions=['eo'])
item.add_asset(key='ps-ms',
asset=stac.EOAsset(href=img_uri,
media_type=stac.MediaType.COG,
bands=list(range(0, 8))))
chip_id_to_items[chip_id] = item
###Output
Processing s3://spacenet-dataset/spacenet/SN5_roads/train/AOI_7_Moscow/PS-MS/SN5_roads_train_AOI_7_Moscow_PS-MS_chip0.tif
Processing s3://spacenet-dataset/spacenet/SN5_roads/train/AOI_7_Moscow/PS-MS/SN5_roads_train_AOI_7_Moscow_PS-MS_chip1.tif
Processing s3://spacenet-dataset/spacenet/SN5_roads/train/AOI_7_Moscow/PS-MS/SN5_roads_train_AOI_7_Moscow_PS-MS_chip10.tif
Processing s3://spacenet-dataset/spacenet/SN5_roads/train/AOI_7_Moscow/PS-MS/SN5_roads_train_AOI_7_Moscow_PS-MS_chip100.tif
Processing s3://spacenet-dataset/spacenet/SN5_roads/train/AOI_7_Moscow/PS-MS/SN5_roads_train_AOI_7_Moscow_PS-MS_chip1000.tif
Processing s3://spacenet-dataset/spacenet/SN5_roads/train/AOI_7_Moscow/PS-MS/SN5_roads_train_AOI_7_Moscow_PS-MS_chip1001.tif
Processing s3://spacenet-dataset/spacenet/SN5_roads/train/AOI_7_Moscow/PS-MS/SN5_roads_train_AOI_7_Moscow_PS-MS_chip1002.tif
Processing s3://spacenet-dataset/spacenet/SN5_roads/train/AOI_7_Moscow/PS-MS/SN5_roads_train_AOI_7_Moscow_PS-MS_chip1003.tif
Processing s3://spacenet-dataset/spacenet/SN5_roads/train/AOI_7_Moscow/PS-MS/SN5_roads_train_AOI_7_Moscow_PS-MS_chip1004.tif
Processing s3://spacenet-dataset/spacenet/SN5_roads/train/AOI_7_Moscow/PS-MS/SN5_roads_train_AOI_7_Moscow_PS-MS_chip1005.tif
###Markdown
Creating the CollectionAll of these images are over Moscow. In Spacenet 5, we have a couple cities that have imagery; a good way to separate these collections of imagery. We can store all of the common `eo` metadata in the collection.
###Code
from shapely.geometry import (shape, MultiPolygon)
footprints = list(map(lambda i: shape(i.geometry).envelope,
chip_id_to_items.values()))
collection_bbox = MultiPolygon(footprints).bounds
spatial_extent = stac.SpatialExtent(bboxes=[collection_bbox])
datetimes = sorted(list(map(lambda i: i.datetime,
chip_id_to_items.values())))
temporal_extent = stac.TemporalExtent(intervals=[[datetimes[0], datetimes[-1]]])
collection_extent = stac.Extent(spatial=spatial_extent, temporal=temporal_extent)
common_properties = { 'eo:bands': [b.to_dict() for b in wv3_bands],
'eo:gsd': 0.3,
'eo:platform': 'Maxar',
'eo:instrument': 'WorldView3'
}
collection = stac.Collection(id='wv3-images',
description='Spacenet 5 images over Moscow',
extent=collection_extent,
properties=common_properties,
license='CC-BY-SA-4.0')
collection.add_items(chip_id_to_items.values())
collection.describe()
###Output
* <Collection id=wv3-images>
* <Item id=img_0>
* <Item id=img_1>
* <Item id=img_10>
* <Item id=img_100>
* <Item id=img_1000>
* <Item id=img_1001>
* <Item id=img_1002>
* <Item id=img_1003>
* <Item id=img_1004>
* <Item id=img_1005>
###Markdown
Now, we can create a Catalog and add the collection.
###Code
catalog = stac.Catalog(id='spacenet5', description='Spacenet 5 Data (Test)')
catalog.add_child(collection)
catalog.describe()
###Output
* <Catalog id=spacenet5>
* <Collection id=wv3-images>
* <Item id=img_0>
* <Item id=img_1>
* <Item id=img_10>
* <Item id=img_100>
* <Item id=img_1000>
* <Item id=img_1001>
* <Item id=img_1002>
* <Item id=img_1003>
* <Item id=img_1004>
* <Item id=img_1005>
###Markdown
Adding label items to the Spacenet 5 catalogWe can use the [label extension](https://github.com/radiantearth/stac-spec/tree/v0.8.1/extensions/label) of the STAC spec to represent the training data in our STAC. For this, we need to grab the URIs of the GeoJSON of roads:
###Code
moscow_training_geojson_uris = list(get_s3_keys(bucket='spacenet-dataset',
prefix='spacenet/SN5_roads/train/AOI_7_Moscow/geojson_roads_speed/'))
for uri in moscow_training_geojson_uris:
chip_id = get_chip_id(uri)
if chip_id in chip_id_to_data:
chip_id_to_data[chip_id]['label'] = 's3://spacenet-dataset/{}'.format(uri)
###Output
_____no_output_____
###Markdown
We'll add the LabelItems to their own subcatalog; since they don't inherit the Collection's `eo` properties, they shouldn't go in the Collection.
###Code
label_catalog = stac.Catalog(id='spacenet-data-labels', description='Labels for Spacenet 5')
catalog.add_child(label_catalog)
###Output
_____no_output_____
###Markdown
We can check the pydocs to see what a LabelItem needs in order to fit the spec:
###Code
print(stac.LabelItem.__doc__)
###Output
A Label Item represents a polygon, set of polygons, or raster data defining
labels and label metadata and should be part of a Collection.
Args:
id (str): Provider identifier. Must be unique within the STAC.
geometry (dict): Defines the full footprint of the asset represented by this item,
formatted according to `RFC 7946, section 3.1 (GeoJSON)
<https://tools.ietf.org/html/rfc7946>`_.
bbox (List[float]): Bounding Box of the asset represented by this item using
either 2D or 3D geometries. The length of the array must be 2*n where n is the
number of dimensions.
datetime (Datetime): Datetime associated with this item.
properties (dict): A dictionary of additional metadata for the item.
label_desecription (str): A description of the label, how it was created,
and what it is recommended for
label_type (str): An ENUM of either vector label type or raster label type. Use
one of :class:`~pystac.LabelType`.
label_properties (dict or None): These are the names of the property field(s) in each
Feature of the label asset's FeatureCollection that contains the classes
(keywords from label:classes if the property defines classes).
If labels are rasters, this should be None.
label_classes (List[LabelClass]): Optional, but reqiured if ussing categorical data.
A list of LabelClasses defining the list of possible class names for each
label:properties. (e.g., tree, building, car, hippo)
label_tasks (str): Recommended to be a subset of 'regression', 'classification',
'detection', or 'segmentation', but may be an arbitrary value.
label_methods: Recommended to be a subset of 'automated' or 'manual',
but may be an arbitrary value.
label_overviews (List[LabelOverview]): Optional list of LabelOverview classes
that store counts (for classification-type data) or summary statistics (for
continuous numerical/regression data).
stac_extensions (List[str]): Optional list of extensions the Item implements.
href (str or None): Optional HREF for this item, which will be set as the item's
self link's HREF.
collection (Collection): Optional Collection that this item is a part of.
Attributes:
id (str): Provider identifier. Unique within the STAC.
geometry (dict): Defines the full footprint of the asset represented by this item,
formatted according to `RFC 7946, section 3.1 (GeoJSON)
<https://tools.ietf.org/html/rfc7946>`_.
bbox (List[float]): Bounding Box of the asset represented by this item using
either 2D or 3D geometries. The length of the array is 2*n where n is the
number of dimensions.
datetime (Datetime): Datetime associated with this item.
properties (dict): A dictionary of additional metadata for the item.
label_description (str): A description of the label, how it was created,
and what it is recommended for
label_type (str): An ENUM of either vector label type or raster label type (one
of :class:`~pystac.LabelType`).
label_properties (dict or None): These are the names of the property field(s) in each
Feature of the label asset's FeatureCollection that contains the classes
(keywords from label:classes if the property defines classes).
If labels are rasters, this should be None.
label_classes (List[LabelClass]): Optional, but required if using categorical data.
A list of LabelClasses defining the list of possible class names for each
label:properties. (e.g., tree, building, car, hippo)
label_tasks (str): Tasks these labels apply to. Usually a subset of 'regression',
'classification', 'detection', or 'segmentation', but may be an arbitrary value.
label_methods: Methods used for labeling. Usually a subset of 'automated' or 'manual',
but may be an arbitrary value.
label_overviews (List[LabelOverview]): Optional list of LabelOverview classes
that store counts (for classification-type data) or summary statistics (for
continuous numerical/regression data).
stac_extensions (List[str] or None): Optional list of extensions the Item implements.
collection_id (str or None): The Collection ID that this item belongs to, if any.
See:
`Item fields in the label extension spec <https://github.com/radiantearth/stac-spec/tree/v0.8.1/extensions/label#item-fields>`_
###Markdown
This loop creates our LabelItems and associates each with the appropriate source image Item.
###Code
for chip_id in chip_id_to_data:
img_item = collection.get_item('img_{}'.format(chip_id))
label_uri = chip_id_to_data[chip_id]['label']
label_item = stac.LabelItem(id='label_{}'.format(chip_id),
geometry=img_item.geometry,
bbox=img_item.bbox,
datetime=datetime.utcnow(),
properties={},
label_description="SpaceNet 5 Road labels",
label_type=stac.LabelType.VECTOR,
label_tasks=['segmentation', 'regression'])
label_item.add_source(img_item)
label_item.add_geojson_labels(label_uri)
label_catalog.add_item(label_item)
###Output
_____no_output_____
###Markdown
Now we have a STAC of training data!
###Code
catalog.describe()
label_item = catalog.get_child('spacenet-data-labels').get_item('label_1')
label_item.to_dict()
###Output
_____no_output_____
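###Markdown
To double-check the structure without walking the whole tree again, we can count items directly. A small sketch using the objects defined above:
###Code
# Hedged sketch: one image Item per chip lives in the collection, and one
# label Item per chip lives in the label subcatalog.
print(len(list(collection.get_items())), 'image items')
print(len(list(label_catalog.get_items())), 'label items')
###Output
_____no_output_____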
###Markdown
How to create STAC Catalogs

STAC Community Sprint, Arlington, November 7th 2019

This notebook runs through some of the basics of using PySTAC to create a static STAC. It was part of a 30-minute presentation at the [community STAC sprint](https://github.com/radiantearth/community-sprints/tree/master/11052019-arlignton-va) in Arlington, VA in November 2019. This tutorial will require the `boto3`, `rasterio`, and `shapely` libraries:
###Code
!pip install boto3
!pip install rasterio
!pip install shapely
###Output
Requirement already satisfied: boto3 in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (1.10.8)
Requirement already satisfied: botocore<1.14.0,>=1.13.8 in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (from boto3) (1.13.8)
Requirement already satisfied: s3transfer<0.3.0,>=0.2.0 in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (from boto3) (0.2.1)
Requirement already satisfied: jmespath<1.0.0,>=0.7.1 in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (from boto3) (0.9.4)
Requirement already satisfied: urllib3<1.26,>=1.20; python_version >= "3.4" in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (from botocore<1.14.0,>=1.13.8->boto3) (1.25.6)
Requirement already satisfied: docutils<0.16,>=0.10 in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (from botocore<1.14.0,>=1.13.8->boto3) (0.15.2)
Requirement already satisfied: python-dateutil<3.0.0,>=2.1; python_version >= "2.7" in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (from botocore<1.14.0,>=1.13.8->boto3) (2.8.1)
Requirement already satisfied: six>=1.5 in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (from python-dateutil<3.0.0,>=2.1; python_version >= "2.7"->botocore<1.14.0,>=1.13.8->boto3) (1.12.0)
WARNING: You are using pip version 20.1.1; however, version 20.2 is available.
You should consider upgrading via the '/Users/rob/proj/stac/pystac/venv/bin/python -m pip install --upgrade pip' command.
Requirement already satisfied: rasterio in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (1.1.0)
Requirement already satisfied: numpy in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (from rasterio) (1.17.3)
Requirement already satisfied: snuggs>=1.4.1 in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (from rasterio) (1.4.7)
Requirement already satisfied: click<8,>=4.0 in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (from rasterio) (7.0)
Requirement already satisfied: click-plugins in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (from rasterio) (1.1.1)
Requirement already satisfied: attrs in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (from rasterio) (19.3.0)
Requirement already satisfied: cligj>=0.5 in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (from rasterio) (0.5.0)
Requirement already satisfied: affine in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (from rasterio) (2.3.0)
Requirement already satisfied: pyparsing>=2.1.6 in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (from snuggs>=1.4.1->rasterio) (2.4.2)
WARNING: You are using pip version 20.1.1; however, version 20.2 is available.
You should consider upgrading via the '/Users/rob/proj/stac/pystac/venv/bin/python -m pip install --upgrade pip' command.
Requirement already satisfied: shapely in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (1.6.4.post2)
WARNING: You are using pip version 20.1.1; however, version 20.2 is available.
You should consider upgrading via the '/Users/rob/proj/stac/pystac/venv/bin/python -m pip install --upgrade pip' command.
###Markdown
We can import pystac and access most of the functionality we need with the single import:
###Code
import pystac
###Output
_____no_output_____
###Markdown
Creating a catalog from a local file

To give us some material to work with, let's download a single image from the [Spacenet 5 challenge](https://www.topcoder.com/challenges/30099956). We'll use a temporary directory to save off our single-item STAC.
###Code
import os
import urllib.request
from tempfile import TemporaryDirectory
tmp_dir = TemporaryDirectory()
img_path = os.path.join(tmp_dir.name, 'image.tif')
url = ('http://spacenet-dataset.s3.amazonaws.com/'
'spacenet/SN5_roads/train/AOI_7_Moscow/MS/'
'SN5_roads_train_AOI_7_Moscow_MS_chip996.tif')
urllib.request.urlretrieve(url, img_path)
###Output
_____no_output_____
###Markdown
We want to create a Catalog. Let's check the pydocs for `Catalog` to see what information we'll need. (We use `__doc__` instead of `help()` here to avoid printing out all the docs for the class.)
###Code
print(pystac.Catalog.__doc__)
###Output
A PySTAC Catalog represents a STAC catalog in memory.
A Catalog is a :class:`~pystac.STACObject` that may contain children,
which are instances of :class:`~pystac.Catalog` or :class:`~pystac.Collection`,
as well as :class:`~pystac.Item` s.
Args:
id (str): Identifier for the catalog. Must be unique within the STAC.
description (str): Detailed multi-line description to fully explain the catalog.
`CommonMark 0.28 syntax <http://commonmark.org/>`_ MAY be used for rich text
representation.
title (str or None): Optional short descriptive one-line title for the catalog.
stac_extensions (List[str]): Optional list of extensions the Catalog implements.
href (str or None): Optional HREF for this catalog, which will be set as the catalog's
self link's HREF.
Attributes:
id (str): Identifier for the catalog.
description (str): Detailed multi-line description to fully explain the catalog.
title (str or None): Optional short descriptive one-line title for the catalog.
stac_extensions (List[str] or None): Optional list of extensions the Catalog implements.
extra_fields (dict or None): Extra fields that are part of the top-level JSON properties
of the Catalog.
links (List[Link]): A list of :class:`~pystac.Link` objects representing
all links associated with this Catalog.
###Markdown
Let's just give an ID and a description. We don't have to worry about the HREF right now; that will be set later.
###Code
catalog = pystac.Catalog(id='test-catalog', description='Tutorial catalog.')
###Output
_____no_output_____
###Markdown
There are no children or items in the catalog, since we haven't added anything yet.
###Code
print(list(catalog.get_children()))
print(list(catalog.get_items()))
###Output
[]
[]
###Markdown
We'll now create an Item to represent the image. Check the pydocs to see what you need to supply:
###Code
print(pystac.Item.__doc__)
###Output
An Item is the core granular entity in a STAC, containing the core metadata
that enables any client to search or crawl online catalogs of spatial 'assets' -
satellite imagery, derived data, DEM's, etc.
Args:
id (str): Provider identifier. Must be unique within the STAC.
geometry (dict): Defines the full footprint of the asset represented by this item,
formatted according to `RFC 7946, section 3.1 (GeoJSON)
<https://tools.ietf.org/html/rfc7946>`_.
bbox (List[float] or None): Bounding Box of the asset represented by this item using
either 2D or 3D geometries. The length of the array must be 2*n where n is the
number of dimensions. Could also be None in the case of a null geometry.
datetime (datetime or None): Datetime associated with this item. If None,
a start_datetime and end_datetime must be supplied in the properties.
properties (dict): A dictionary of additional metadata for the item.
stac_extensions (List[str]): Optional list of extensions the Item implements.
href (str or None): Optional HREF for this item, which will be set as the item's
self link's HREF.
collection (Collection or str): The Collection or Collection ID that this item
belongs to.
extra_fields (dict or None): Extra fields that are part of the top-level JSON properties
of the Item.
Attributes:
id (str): Provider identifier. Unique within the STAC.
geometry (dict): Defines the full footprint of the asset represented by this item,
formatted according to `RFC 7946, section 3.1 (GeoJSON)
<https://tools.ietf.org/html/rfc7946>`_.
bbox (List[float] or None): Bounding Box of the asset represented by this item using
either 2D or 3D geometries. The length of the array is 2*n where n is the
number of dimensions. Could also be None in the case of a null geometry.
datetime (datetime or None): Datetime associated with this item. If None,
the start_datetime and end_datetime in the common_metadata
will supply the datetime range of the Item.
properties (dict): A dictionary of additional metadata for the item.
stac_extensions (List[str] or None): Optional list of extensions the Item implements.
collection (Collection or None): Collection that this item is a part of.
links (List[Link]): A list of :class:`~pystac.Link` objects representing
all links associated with this STACObject.
assets (Dict[str, Asset]): Dictionary of asset objects that can be downloaded,
each with a unique key.
collection_id (str or None): The Collection ID that this item belongs to, if any.
extra_fields (dict or None): Extra fields that are part of the top-level JSON properties
of the Item.
###Markdown
Using [rasterio](https://rasterio.readthedocs.io/en/stable/), we can pull out the bounding box of the image to use for the image metadata. If the image contained a NoData border, we would ideally pull out the footprint and save it as the geometry; in this case, we're working with a small chip that most likely has no NoData values.
###Code
import rasterio
from shapely.geometry import Polygon, mapping
def get_bbox_and_footprint(raster_uri):
with rasterio.open(raster_uri) as ds:
bounds = ds.bounds
bbox = [bounds.left, bounds.bottom, bounds.right, bounds.top]
footprint = Polygon([
[bounds.left, bounds.bottom],
[bounds.left, bounds.top],
[bounds.right, bounds.top],
[bounds.right, bounds.bottom]
])
return (bbox, mapping(footprint))
bbox, footprint = get_bbox_and_footprint(img_path)
print(bbox)
print(footprint)
###Output
[37.6616853489879, 55.73478197572927, 37.66573047610874, 55.73882710285011]
{'type': 'Polygon', 'coordinates': (((37.6616853489879, 55.73478197572927), (37.6616853489879, 55.73882710285011), (37.66573047610874, 55.73882710285011), (37.66573047610874, 55.73478197572927), (37.6616853489879, 55.73478197572927)),)}
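###Markdown
If you do need the true data footprint rather than the raster bounds, one approach is to polygonize the dataset mask. A minimal sketch, assuming NoData regions are encoded in the mask (not needed for these chips):
###Code
import rasterio.features
from shapely.geometry import shape
from shapely.ops import unary_union

def get_data_footprint(raster_uri):
    # Hedged sketch: union the polygons of the valid-data (255) mask regions.
    with rasterio.open(raster_uri) as ds:
        mask = ds.dataset_mask()  # 255 where any band has data, 0 over NoData
        geoms = [shape(geom) for geom, value
                 in rasterio.features.shapes(mask, transform=ds.transform)
                 if value == 255]
    footprint = unary_union(geoms)
    return (list(footprint.bounds), mapping(footprint))
###Output
_____no_output_____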
###Markdown
We're also using `datetime.utcnow()` to supply the required datetime property for our Item. Since this is a required property, you might often find yourself making up a time to fill in if you don't know the exact capture time.
###Code
from datetime import datetime
item = pystac.Item(id='local-image',
geometry=footprint,
bbox=bbox,
datetime=datetime.utcnow(),
properties={})
###Output
_____no_output_____
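###Markdown
If only an acquisition window is known, the spec also allows a null `datetime` with a `start_datetime`/`end_datetime` range in the properties, as the Item pydocs above note. A hedged sketch (the dates below are made up for illustration):
###Code
range_item = pystac.Item(id='local-image-range',
                         geometry=footprint,
                         bbox=bbox,
                         datetime=None,
                         properties={
                             'start_datetime': '2019-01-01T00:00:00Z',
                             'end_datetime': '2019-12-31T23:59:59Z'
                         })
###Output
_____no_output_____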
###Markdown
We haven't added it to a catalog yet, so its parent isn't set. Once we add it to the catalog, we can see it correctly links to its parent.
###Code
item.get_parent() is None
catalog.add_item(item)
item.get_parent()
###Output
_____no_output_____
###Markdown
`describe()` is a useful method on `Catalog` - but be careful when using it on large catalogs, as it will walk the entire tree of the STAC.
###Code
catalog.describe()
###Output
* <Catalog id=test-catalog>
* <Item id=local-image>
###Markdown
Adding Assets

We've created an Item, but there aren't any assets associated with it. Let's create one:
###Code
print(pystac.Asset.__doc__)
item.add_asset(
key='image',
asset=pystac.Asset(
href=img_path,
media_type=pystac.MediaType.GEOTIFF
)
)
###Output
_____no_output_____
###Markdown
At any time we can call `to_dict()` on STAC objects to see how the STAC JSON is shaping up. Notice the asset is now set:
###Code
import json
print(json.dumps(item.to_dict(), indent=4))
###Output
{
"type": "Feature",
"stac_version": "1.0.0-beta.2",
"id": "local-image",
"properties": {
"datetime": "2020-08-03T03:47:48.786929Z"
},
"geometry": {
"type": "Polygon",
"coordinates": [
[
[
37.6616853489879,
55.73478197572927
],
[
37.6616853489879,
55.73882710285011
],
[
37.66573047610874,
55.73882710285011
],
[
37.66573047610874,
55.73478197572927
],
[
37.6616853489879,
55.73478197572927
]
]
]
},
"links": [
{
"rel": "root",
"href": null,
"type": "application/json"
},
{
"rel": "parent",
"href": null,
"type": "application/json"
}
],
"assets": {
"image": {
"href": "/var/folders/sv/zr8j0t4j1f726nhlt3vb8c300000gn/T/tmpt1wuelid/image.tif",
"type": "image/tiff; application=geotiff"
}
},
"bbox": [
37.6616853489879,
55.73478197572927,
37.66573047610874,
55.73882710285011
]
}
###Markdown
Note that the link `href` properties are `null`. This is OK, as we're working with the STAC in memory. Next, we'll talk about writing the catalog out, and how to set those HREFs.

Saving the catalog

As the JSON above indicates, there are no HREFs set on these in-memory items. PySTAC uses the `self` link on STAC objects to track where the file lives. Because we haven't set them, they evaluate to `None`:
###Code
print(catalog.get_self_href() is None)
print(item.get_self_href() is None)
###Output
True
True
###Markdown
In order to set them, we can use `normalize_hrefs`. This method will create a normalized set of HREFs for each STAC object in the catalog, according to the [best practices document](https://github.com/radiantearth/stac-spec/blob/v0.8.1/best-practices.md#catalog-layout)'s recommendations on how to lay out a catalog.
###Code
catalog.normalize_hrefs(os.path.join(tmp_dir.name, 'stac'))
###Output
_____no_output_____
###Markdown
Now that we've normalized to a root directory (the temporary directory), we see that the `self` links are set:
###Code
print(catalog.get_self_href())
print(item.get_self_href())
###Output
/var/folders/sv/zr8j0t4j1f726nhlt3vb8c300000gn/T/tmpt1wuelid/stac/catalog.json
/var/folders/sv/zr8j0t4j1f726nhlt3vb8c300000gn/T/tmpt1wuelid/stac/local-image/local-image.json
###Markdown
We can now call `save` on the catalog, which will recursively save all the STAC objects to their respective self HREFs. Save requires a `CatalogType` to be set. You can review the [API docs](https://pystac.readthedocs.io/en/stable/api.html#catalogtype) on `CatalogType` to see what each type means (unfortunately `help` doesn't show docstrings for attributes).
###Code
catalog.save(catalog_type=pystac.CatalogType.SELF_CONTAINED)
!ls {tmp_dir.name}/stac/*
with open(catalog.get_self_href()) as f:
print(f.read())
with open(item.get_self_href()) as f:
print(f.read())
###Output
{
"type": "Feature",
"stac_version": "1.0.0-beta.2",
"id": "local-image",
"properties": {
"datetime": "2020-08-03T03:47:48.786929Z"
},
"geometry": {
"type": "Polygon",
"coordinates": [
[
[
37.6616853489879,
55.73478197572927
],
[
37.6616853489879,
55.73882710285011
],
[
37.66573047610874,
55.73882710285011
],
[
37.66573047610874,
55.73478197572927
],
[
37.6616853489879,
55.73478197572927
]
]
]
},
"links": [
{
"rel": "root",
"href": "../catalog.json",
"type": "application/json"
},
{
"rel": "parent",
"href": "../catalog.json",
"type": "application/json"
}
],
"assets": {
"image": {
"href": "/var/folders/sv/zr8j0t4j1f726nhlt3vb8c300000gn/T/tmpt1wuelid/image.tif",
"type": "image/tiff; application=geotiff"
}
},
"bbox": [
37.6616853489879,
55.73478197572927,
37.66573047610874,
55.73882710285011
]
}
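###Markdown
For reference, all three catalog types are exposed as constants (a quick sketch; `RELATIVE_PUBLISHED` is the one this tutorial doesn't use):
###Code
print(pystac.CatalogType.SELF_CONTAINED)
print(pystac.CatalogType.RELATIVE_PUBLISHED)
print(pystac.CatalogType.ABSOLUTE_PUBLISHED)
###Output
_____no_output_____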
###Markdown
As you can see, all links are saved with relative paths. That's because we used `catalog_type=CatalogType.SELF_CONTAINED`. If we save an Absolute Published catalog, we'll see absolute paths:
###Code
catalog.save(catalog_type=pystac.CatalogType.ABSOLUTE_PUBLISHED)
###Output
_____no_output_____
###Markdown
Now the links included in the STAC item are all absolute:
###Code
with open(item.get_self_href()) as f:
print(f.read())
###Output
{
"type": "Feature",
"stac_version": "1.0.0-beta.2",
"id": "local-image",
"properties": {
"datetime": "2020-08-03T03:47:48.786929Z"
},
"geometry": {
"type": "Polygon",
"coordinates": [
[
[
37.6616853489879,
55.73478197572927
],
[
37.6616853489879,
55.73882710285011
],
[
37.66573047610874,
55.73882710285011
],
[
37.66573047610874,
55.73478197572927
],
[
37.6616853489879,
55.73478197572927
]
]
]
},
"links": [
{
"rel": "self",
"href": "/var/folders/sv/zr8j0t4j1f726nhlt3vb8c300000gn/T/tmpt1wuelid/stac/local-image/local-image.json",
"type": "application/json"
},
{
"rel": "root",
"href": "/var/folders/sv/zr8j0t4j1f726nhlt3vb8c300000gn/T/tmpt1wuelid/stac/catalog.json",
"type": "application/json"
},
{
"rel": "parent",
"href": "/var/folders/sv/zr8j0t4j1f726nhlt3vb8c300000gn/T/tmpt1wuelid/stac/catalog.json",
"type": "application/json"
}
],
"assets": {
"image": {
"href": "/var/folders/sv/zr8j0t4j1f726nhlt3vb8c300000gn/T/tmpt1wuelid/image.tif",
"type": "image/tiff; application=geotiff"
}
},
"bbox": [
37.6616853489879,
55.73478197572927,
37.66573047610874,
55.73882710285011
]
}
###Markdown
Notice that the Asset HREF is absolute in both cases. We can make the Asset HREF relative to the STAC Item by using `.make_all_asset_hrefs_relative()`:
###Code
catalog.make_all_asset_hrefs_relative()
catalog.save(catalog_type=pystac.CatalogType.SELF_CONTAINED)
with open(item.get_self_href()) as f:
print(f.read())
###Output
{
"type": "Feature",
"stac_version": "1.0.0-beta.2",
"id": "local-image",
"properties": {
"datetime": "2020-08-03T03:47:48.786929Z"
},
"geometry": {
"type": "Polygon",
"coordinates": [
[
[
37.6616853489879,
55.73478197572927
],
[
37.6616853489879,
55.73882710285011
],
[
37.66573047610874,
55.73882710285011
],
[
37.66573047610874,
55.73478197572927
],
[
37.6616853489879,
55.73478197572927
]
]
]
},
"links": [
{
"rel": "root",
"href": "../catalog.json",
"type": "application/json"
},
{
"rel": "parent",
"href": "../catalog.json",
"type": "application/json"
}
],
"assets": {
"image": {
"href": "../../image.tif",
"type": "image/tiff; application=geotiff"
}
},
"bbox": [
37.6616853489879,
55.73478197572927,
37.66573047610874,
55.73882710285011
]
}
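###Markdown
The inverse call also exists if you need absolute asset HREFs again. A small sketch that restores the relative HREFs afterwards, so the rest of the tutorial is unaffected:
###Code
catalog.make_all_asset_hrefs_absolute()
print(item.assets['image'].href)
catalog.make_all_asset_hrefs_relative()  # back to relative asset HREFs
###Output
_____no_output_____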
###Markdown
Creating an Item that implements the EO extension

In the code above, our item only implemented the core STAC Item specification. With [extensions](https://github.com/radiantearth/stac-spec/tree/v0.9.0/extensions) we can record more information and add additional functionality to the Item. Given that we know this is a World View 3 image that has earth observation data, we can enable the [eo extension](https://github.com/radiantearth/stac-spec/tree/v0.8.1/extensions/eo) to add band information. To add eo information to an item, we'll need to specify some more data. First, let's define the bands of World View 3:
###Code
from pystac.extensions.eo import Band
# From: https://www.spaceimagingme.com/downloads/sensors/datasheets/DG_WorldView3_DS_2014.pdf
wv3_bands = [Band.create(name='Coastal', description='Coastal: 400 - 450 nm', common_name='coastal'),
Band.create(name='Blue', description='Blue: 450 - 510 nm', common_name='blue'),
Band.create(name='Green', description='Green: 510 - 580 nm', common_name='green'),
Band.create(name='Yellow', description='Yellow: 585 - 625 nm', common_name='yellow'),
Band.create(name='Red', description='Red: 630 - 690 nm', common_name='red'),
Band.create(name='Red Edge', description='Red Edge: 705 - 745 nm', common_name='rededge'),
Band.create(name='Near-IR1', description='Near-IR1: 770 - 895 nm', common_name='nir08'),
Band.create(name='Near-IR2', description='Near-IR2: 860 - 1040 nm', common_name='nir09')]
###Output
_____no_output_____
###Markdown
Notice that we used the `.create` method to create new band information. We can now create an Item, enable the eo extension, add the band information, and add it to our catalog:
###Code
eo_item = pystac.Item(id='local-image-eo',
geometry=footprint,
bbox=bbox,
datetime=datetime.utcnow(),
properties={})
eo_item.ext.enable(pystac.Extensions.EO)
eo_item.ext.eo.apply(bands=wv3_bands)
###Output
_____no_output_____
###Markdown
There are also [common metadata](https://github.com/radiantearth/stac-spec/blob/v0.9.0/item-spec/common-metadata.md) fields that we can use to capture additional information about the WorldView 3 imagery:
###Code
eo_item.common_metadata.platform = "Maxar"
eo_item.common_metadata.instruments = ["WorldView3"]
eo_item.common_metadata.gsd = 0.3
eo_item
###Output
_____no_output_____
###Markdown
We can use the eo extension to add bands to the assets we add to the item:
###Code
eo_ext = eo_item.ext.eo
help(eo_ext.set_bands)
asset = pystac.Asset(href=img_path,
media_type=pystac.MediaType.GEOTIFF)
eo_ext.set_bands(wv3_bands, asset)
eo_item.add_asset("image", asset)
###Output
_____no_output_____
###Markdown
If we look at the asset's JSON representation, we can see the appropriate band indexes are set:
###Code
asset.to_dict()
###Output
_____no_output_____
###Markdown
Let's clear the in-memory catalog, add the EO item, and save to a new STAC:
###Code
catalog.clear_items()
list(catalog.get_items())
catalog.add_item(eo_item)
list(catalog.get_items())
catalog.normalize_and_save(root_href=os.path.join(tmp_dir.name, 'stac-eo'),
catalog_type=pystac.CatalogType.SELF_CONTAINED)
###Output
_____no_output_____
###Markdown
Now, if we read the catalog from the filesystem, PySTAC recognizes that the item implements eo and so uses its functionality, e.g. getting the bands off the asset:
###Code
catalog2 = pystac.read_file(os.path.join(tmp_dir.name, 'stac-eo', 'catalog.json'))
list(catalog2.get_items())
item = next(catalog2.get_all_items())
item.ext.implements('eo')
item.ext.eo.get_bands(item.assets['image'])
###Output
_____no_output_____
###Markdown
Collections

Collections are a subtype of Catalog with some additional properties to make them more searchable. They can also define common properties so that items in the collection don't have to duplicate that data for each item. Let's create a collection to hold common properties between two images from the Spacenet 5 challenge. First we'll get another image, and its bbox and footprint:
###Code
url2 = ('http://spacenet-dataset.s3.amazonaws.com/'
'spacenet/SN5_roads/train/AOI_7_Moscow/MS/'
'SN5_roads_train_AOI_7_Moscow_MS_chip997.tif')
img_path2 = os.path.join(tmp_dir.name, 'image2.tif')  # distinct path so the first image isn't overwritten
urllib.request.urlretrieve(url2, img_path2)
bbox2, footprint2 = get_bbox_and_footprint(img_path2)
###Output
_____no_output_____
###Markdown
We can take a look at the pydocs for Collection to see what information we need to supply in order to satisfy the spec.
###Code
print(pystac.Collection.__doc__)
###Output
A Collection extends the Catalog spec with additional metadata that helps
enable discovery.
Args:
id (str): Identifier for the collection. Must be unique within the STAC.
description (str): Detailed multi-line description to fully explain the collection.
`CommonMark 0.28 syntax <http://commonmark.org/>`_ MAY be used for rich text
representation.
extent (Extent): Spatial and temporal extents that describe the bounds of
all items contained within this Collection.
title (str or None): Optional short descriptive one-line title for the collection.
stac_extensions (List[str]): Optional list of extensions the Collection implements.
href (str or None): Optional HREF for this collection, which will be set as the collection's
self link's HREF.
license (str): Collection's license(s) as a `SPDX License identifier
<https://spdx.org/licenses/>`_, `various`, or `proprietary`. If collection includes
data with multiple different licenses, use `various` and add a link for each.
Defaults to 'proprietary'.
keywords (List[str]): Optional list of keywords describing the collection.
providers (List[Provider]): Optional list of providers of this Collection.
properties (dict): Optional dict of common fields across referenced items.
summaries (dict): An optional map of property summaries,
either a set of values or statistics such as a range.
extra_fields (dict or None): Extra fields that are part of the top-level JSON properties
of the Collection.
Attributes:
id (str): Identifier for the collection.
description (str): Detailed multi-line description to fully explain the collection.
extent (Extent): Spatial and temporal extents that describe the bounds of
all items contained within this Collection.
title (str or None): Optional short descriptive one-line title for the collection.
stac_extensions (List[str]): Optional list of extensions the Collection implements.
keywords (List[str] or None): Optional list of keywords describing the collection.
providers (List[Provider] or None): Optional list of providers of this Collection.
properties (dict or None): Optional dict of common fields across referenced items.
summaries (dict or None): An optional map of property summaries,
either a set of values or statistics such as a range.
links (List[Link]): A list of :class:`~pystac.Link` objects representing
all links associated with this Collection.
extra_fields (dict or None): Extra fields that are part of the top-level JSON properties
of the Catalog.
###Markdown
Beyond what a Catalog requires, a Collection requires a license, and an `Extent` that describes the range of space and time that the items it holds occupy.
###Code
print(pystac.Extent.__doc__)
###Output
Describes the spatio-temporal extents of a Collection.
Args:
spatial (SpatialExtent): Potential spatial extent covered by the collection.
temporal (TemporalExtent): Potential temporal extent covered by the collection.
Attributes:
spatial (SpatialExtent): Potential spatial extent covered by the collection.
temporal (TemporalExtent): Potential temporal extent covered by the collection.
###Markdown
An Extent is composed of a SpatialExtent and a TemporalExtent. These hold one or more bounding boxes and time intervals, respectively, that completely cover the items contained in the collection. Let's start by creating two new items - these will be core Items. We can set these items to implement the `eo` extension by specifying it in `stac_extensions`.
###Code
collection_item = pystac.Item(id='local-image-col-1',
geometry=footprint,
bbox=bbox,
datetime=datetime.utcnow(),
properties={},
stac_extensions=[pystac.Extensions.EO])
collection_item.common_metadata.gsd = 0.3
collection_item.common_metadata.platform = 'Maxar'
collection_item.common_metadata.instruments = ['WorldView3']
asset = pystac.Asset(href=img_path,
media_type=pystac.MediaType.GEOTIFF)
collection_item.ext.eo.set_bands(wv3_bands, asset)
collection_item.add_asset('image', asset)
collection_item2 = pystac.Item(id='local-image-col-2',
geometry=footprint2,
bbox=bbox2,
datetime=datetime.utcnow(),
properties={},
stac_extensions=[pystac.Extensions.EO])
collection_item2.common_metadata.gsd = 0.3
collection_item2.common_metadata.platform = 'Maxar'
collection_item2.common_metadata.instruments = ['WorldView3']
asset2 = pystac.Asset(href=img_path2,
media_type=pystac.MediaType.GEOTIFF)
collection_item2.ext.eo.set_bands([
band for band in wv3_bands if band.name in ["Red", "Green", "Blue"]
], asset2)
collection_item2.add_asset('image', asset2)
###Output
_____no_output_____
###Markdown
We can use our two items' metadata to find out what the proper bounds are:
###Code
from shapely.geometry import shape
unioned_footprint = shape(footprint).union(shape(footprint2))
collection_bbox = list(unioned_footprint.bounds)
spatial_extent = pystac.SpatialExtent(bboxes=[collection_bbox])
collection_interval = sorted([collection_item.datetime, collection_item2.datetime])
temporal_extent = pystac.TemporalExtent(intervals=[collection_interval])
collection_extent = pystac.Extent(spatial=spatial_extent, temporal=temporal_extent)
collection = pystac.Collection(id='wv3-images',
description='Spacenet 5 images over Moscow',
extent=collection_extent,
license='CC-BY-SA-4.0')
###Output
_____no_output_____
###Markdown
Now if we add our items to our Collection, and our Collection to our Catalog, we get the following STAC that can be saved:
###Code
collection.add_items([collection_item, collection_item2])
catalog.clear_items()
catalog.clear_children()
catalog.add_child(collection)
catalog.describe()
catalog.normalize_and_save(root_href=os.path.join(tmp_dir.name, 'stac-collection'),
catalog_type=pystac.CatalogType.SELF_CONTAINED)
###Output
_____no_output_____
###Markdown
Cleanup

Don't forget to clean up the temporary directory!
###Code
tmp_dir.cleanup()
###Output
_____no_output_____
###Markdown
Creating a STAC of imagery from Spacenet 5 data

Now, let's take what we've learned and create a Catalog with more data in it.

Allowing PySTAC to read from AWS S3

PySTAC aims to be virtually zero-dependency (notwithstanding the why-isn't-this-in-stdlib datetime-util), so it doesn't have the ability to read from or write to anything but the local file system. However, we can hook into PySTAC's IO in the following way. Learn more about how to use STAC_IO in the [documentation on the topic](https://pystac.readthedocs.io/en/latest/concepts.html#using-stac-io):
###Code
from urllib.parse import urlparse
import boto3
from pystac import STAC_IO
def my_read_method(uri):
parsed = urlparse(uri)
if parsed.scheme == 's3':
bucket = parsed.netloc
key = parsed.path[1:]
s3 = boto3.resource('s3')
obj = s3.Object(bucket, key)
return obj.get()['Body'].read().decode('utf-8')
else:
return STAC_IO.default_read_text_method(uri)
def my_write_method(uri, txt):
parsed = urlparse(uri)
if parsed.scheme == 's3':
bucket = parsed.netloc
key = parsed.path[1:]
s3 = boto3.resource("s3")
s3.Object(bucket, key).put(Body=txt)
else:
STAC_IO.default_write_text_method(uri, txt)
STAC_IO.read_text_method = my_read_method
STAC_IO.write_text_method = my_write_method
###Output
_____no_output_____
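###Markdown
With the hooks registered, any PySTAC read or write that touches an `s3://` URI goes through our methods. A hedged sketch (the bucket below is a placeholder, not a real location, so the call is left commented out):
###Code
# Hedged sketch: publish a catalog directly to S3 (placeholder bucket).
# catalog.normalize_and_save(root_href='s3://my-bucket/spacenet-stac',
#                            catalog_type=pystac.CatalogType.ABSOLUTE_PUBLISHED)
###Output
_____no_output_____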
###Markdown
We'll need a utility to list keys for reading the lists of files from S3:
###Code
# From https://alexwlchan.net/2017/07/listing-s3-keys/
def get_s3_keys(bucket, prefix):
"""Generate all the keys in an S3 bucket."""
s3 = boto3.client('s3')
kwargs = {'Bucket': bucket, 'Prefix': prefix}
while True:
resp = s3.list_objects_v2(**kwargs)
for obj in resp['Contents']:
yield obj['Key']
try:
kwargs['ContinuationToken'] = resp['NextContinuationToken']
except KeyError:
break
###Output
_____no_output_____
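###Markdown
A quick check of the helper, peeking at just the first few keys (this assumes AWS credentials are configured, as the rest of this section requires):
###Code
from itertools import islice

for key in islice(get_s3_keys(bucket='spacenet-dataset',
                              prefix='spacenet/SN5_roads/train/AOI_7_Moscow/PS-MS'), 3):
    print(key)
###Output
_____no_output_____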
###Markdown
Let's make a STAC of imagery over Moscow as part of the Spacenet 5 challenge. As a first step, we can list out the imagery and extract IDs from each of the chips.
###Code
moscow_training_chip_uris = list(get_s3_keys(bucket='spacenet-dataset',
prefix='spacenet/SN5_roads/train/AOI_7_Moscow/PS-MS'))
import re
chip_id_to_data = {}
def get_chip_id(uri):
return re.search(r'.*\_chip(\d+)\.', uri).group(1)
for uri in moscow_training_chip_uris:
chip_id = get_chip_id(uri)
chip_id_to_data[chip_id] = { 'img': 's3://spacenet-dataset/{}'.format(uri) }
###Output
_____no_output_____
###Markdown
For this tutorial, we'll only take a subset of the data.
###Code
chip_id_to_data = dict(list(chip_id_to_data.items())[:10])
chip_id_to_data
###Output
_____no_output_____
###Markdown
Let's turn each of those chips into a STAC Item that represents the image.
###Code
chip_id_to_items = {}
###Output
_____no_output_____
###Markdown
We'll create core `Item`s for our imagery, but mark them with the `eo` extension as we did above, and store the `eo` data in a `Collection`. Here we're taking advantage of `rasterio`'s ability to read S3 URIs, which only grabs the GeoTIFF metadata and does not pull the whole file down. Note that the image CRS is WGS 84 (lat/lng); if it weren't, we'd have to reproject the footprint to WGS 84 in order to be compliant with the spec, which can easily be done with [pyproj](https://github.com/pyproj4/pyproj), as in the sketch below:
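###Code
from pyproj import Transformer
from shapely.ops import transform as shapely_transform

def reproject_footprint(footprint_geom, src_crs):
    # Hedged sketch: reproject a shapely geometry to WGS 84 (EPSG:4326).
    # 'src_crs' is an illustrative placeholder for the raster's source CRS;
    # none of this is needed for these chips, which are already in WGS 84.
    transformer = Transformer.from_crs(src_crs, 'EPSG:4326', always_xy=True)
    return shapely_transform(transformer.transform, footprint_geom)
###Output
_____no_output_____
###Markdown
With that aside, let's build an Item for each chip: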
###Code
for chip_id in chip_id_to_data:
img_uri = chip_id_to_data[chip_id]['img']
print('Processing {}'.format(img_uri))
bbox, footprint = get_bbox_and_footprint(img_uri)
item = pystac.Item(id='img_{}'.format(chip_id),
geometry=footprint,
bbox=bbox,
datetime=datetime.utcnow(),
properties={},
stac_extensions=[pystac.Extensions.EO])
item.common_metadata.gsd = 0.3
item.common_metadata.platform = 'Maxar'
item.common_metadata.instruments = ['WorldView3']
item.ext.eo.bands = wv3_bands
asset = pystac.Asset(href=img_uri,
media_type=pystac.MediaType.COG)
item.ext.eo.set_bands(wv3_bands, asset)
item.add_asset(key='ps-ms', asset=asset)
chip_id_to_items[chip_id] = item
###Output
Processing s3://spacenet-dataset/spacenet/SN5_roads/train/AOI_7_Moscow/PS-MS/SN5_roads_train_AOI_7_Moscow_PS-MS_chip0.tif
Processing s3://spacenet-dataset/spacenet/SN5_roads/train/AOI_7_Moscow/PS-MS/SN5_roads_train_AOI_7_Moscow_PS-MS_chip1.tif
Processing s3://spacenet-dataset/spacenet/SN5_roads/train/AOI_7_Moscow/PS-MS/SN5_roads_train_AOI_7_Moscow_PS-MS_chip10.tif
Processing s3://spacenet-dataset/spacenet/SN5_roads/train/AOI_7_Moscow/PS-MS/SN5_roads_train_AOI_7_Moscow_PS-MS_chip100.tif
Processing s3://spacenet-dataset/spacenet/SN5_roads/train/AOI_7_Moscow/PS-MS/SN5_roads_train_AOI_7_Moscow_PS-MS_chip1000.tif
Processing s3://spacenet-dataset/spacenet/SN5_roads/train/AOI_7_Moscow/PS-MS/SN5_roads_train_AOI_7_Moscow_PS-MS_chip1001.tif
Processing s3://spacenet-dataset/spacenet/SN5_roads/train/AOI_7_Moscow/PS-MS/SN5_roads_train_AOI_7_Moscow_PS-MS_chip1002.tif
Processing s3://spacenet-dataset/spacenet/SN5_roads/train/AOI_7_Moscow/PS-MS/SN5_roads_train_AOI_7_Moscow_PS-MS_chip1003.tif
Processing s3://spacenet-dataset/spacenet/SN5_roads/train/AOI_7_Moscow/PS-MS/SN5_roads_train_AOI_7_Moscow_PS-MS_chip1004.tif
Processing s3://spacenet-dataset/spacenet/SN5_roads/train/AOI_7_Moscow/PS-MS/SN5_roads_train_AOI_7_Moscow_PS-MS_chip1005.tif
###Markdown
Creating the Collection

All of these images are over Moscow. In Spacenet 5 there is imagery over a couple of cities, so a Collection per city is a good way to separate the imagery. We can store all of the common `eo` metadata in the collection.
###Code
from shapely.geometry import (shape, MultiPolygon)
footprints = list(map(lambda i: shape(i.geometry).envelope,
chip_id_to_items.values()))
collection_bbox = MultiPolygon(footprints).bounds
spatial_extent = pystac.SpatialExtent(bboxes=[collection_bbox])
datetimes = sorted(list(map(lambda i: i.datetime,
chip_id_to_items.values())))
temporal_extent = pystac.TemporalExtent(intervals=[[datetimes[0], datetimes[-1]]])
collection_extent = pystac.Extent(spatial=spatial_extent, temporal=temporal_extent)
collection = pystac.Collection(id='wv3-images',
description='Spacenet 5 images over Moscow',
extent=collection_extent,
license='CC-BY-SA-4.0')
collection.add_items(chip_id_to_items.values())
collection.describe()
###Output
* <Collection id=wv3-images>
* <Item id=img_0>
* <Item id=img_1>
* <Item id=img_10>
* <Item id=img_100>
* <Item id=img_1000>
* <Item id=img_1001>
* <Item id=img_1002>
* <Item id=img_1003>
* <Item id=img_1004>
* <Item id=img_1005>
###Markdown
Now, we can create a Catalog and add the collection.
###Code
catalog = pystac.Catalog(id='spacenet5', description='Spacenet 5 Data (Test)')
catalog.add_child(collection)
catalog.describe()
###Output
* <Catalog id=spacenet5>
* <Collection id=wv3-images>
* <Item id=img_0>
* <Item id=img_1>
* <Item id=img_10>
* <Item id=img_100>
* <Item id=img_1000>
* <Item id=img_1001>
* <Item id=img_1002>
* <Item id=img_1003>
* <Item id=img_1004>
* <Item id=img_1005>
###Markdown
Adding items with the label extension to the Spacenet 5 catalog

We can use the [label extension](https://github.com/radiantearth/stac-spec/tree/v0.8.1/extensions/label) of the STAC spec to represent the training data in our STAC. For this, we need to grab the URIs of the GeoJSON of roads:
###Code
moscow_training_geojson_uris = list(get_s3_keys(bucket='spacenet-dataset',
prefix='spacenet/SN5_roads/train/AOI_7_Moscow/geojson_roads_speed/'))
for uri in moscow_training_geojson_uris:
chip_id = get_chip_id(uri)
if chip_id in chip_id_to_data:
chip_id_to_data[chip_id]['label'] = 's3://spacenet-dataset/{}'.format(uri)
###Output
_____no_output_____
###Markdown
We'll add the items to their own subcatalog; since they don't inherit the Collection's `eo` properties, they shouldn't go in the Collection.
###Code
label_catalog = pystac.Catalog(id='spacenet-data-labels', description='Labels for Spacenet 5')
catalog.add_child(label_catalog)
###Output
_____no_output_____
###Markdown
To see the required fields for the label extension we can check the pydocs on the `apply` method of the extension:
###Code
from pystac.extensions import label
print(label.LabelItemExt.apply.__doc__)
###Output
Applies label extension properties to the extended Item.
Args:
label_description (str): A description of the label, how it was created,
and what it is recommended for
label_type (str): An ENUM of either vector label type or raster label type. Use
one of :class:`~pystac.LabelType`.
label_properties (list or None): These are the names of the property field(s) in each
Feature of the label asset's FeatureCollection that contains the classes
(keywords from label:classes if the property defines classes).
If labels are rasters, this should be None.
label_classes (List[LabelClass]): Optional, but required if using categorical data.
A list of LabelClasses defining the list of possible class names for each
label:properties. (e.g., tree, building, car, hippo)
label_tasks (List[str]): Recommended to be a subset of 'regression', 'classification',
'detection', or 'segmentation', but may be an arbitrary value.
label_methods: Recommended to be a subset of 'automated' or 'manual',
but may be an arbitrary value.
label_overviews (List[LabelOverview]): Optional list of LabelOverview classes
that store counts (for classification-type data) or summary statistics (for
continuous numerical/regression data).
###Markdown
This loop creates our label items and associates each with the appropriate source image Item.
###Code
for chip_id in chip_id_to_data:
img_item = collection.get_item('img_{}'.format(chip_id))
label_uri = chip_id_to_data[chip_id]['label']
label_item = pystac.Item(id='label_{}'.format(chip_id),
geometry=img_item.geometry,
bbox=img_item.bbox,
datetime=datetime.utcnow(),
properties={},
stac_extensions=[pystac.Extensions.LABEL])
label_item.ext.label.apply(label_description="SpaceNet 5 Road labels",
label_type=label.LabelType.VECTOR,
label_tasks=['segmentation', 'regression'])
label_item.ext.label.add_source(img_item)
label_item.ext.label.add_geojson_labels(label_uri)
label_catalog.add_item(label_item)
###Output
_____no_output_____
###Markdown
Now we have a STAC of training data!
###Code
catalog.describe()
label_item = catalog.get_child('spacenet-data-labels').get_item('label_1')
label_item.to_dict()
###Output
_____no_output_____
###Markdown
How to create STAC Catalogs

STAC Community Sprint, Arlington, November 7th 2019

This notebook runs through some of the basics of using PySTAC to create a static STAC. It was part of a 30-minute presentation at the [community STAC sprint](https://github.com/radiantearth/community-sprints/tree/master/11052019-arlignton-va) in Arlington, VA in November 2019, updated to work with current PySTAC. This tutorial will require the `boto3`, `rasterio`, and `shapely` libraries:
###Code
%pip install boto3 rasterio shapely pystac
###Output
Requirement already satisfied: boto3 in /Users/gadomski/.virtualenvs/pystac/lib/python3.9/site-packages (1.21.28)
Requirement already satisfied: rasterio in /Users/gadomski/.virtualenvs/pystac/lib/python3.9/site-packages (1.2.10)
Requirement already satisfied: shapely in /Users/gadomski/.virtualenvs/pystac/lib/python3.9/site-packages (1.8.1.post1)
Requirement already satisfied: pystac in /Users/gadomski/Code/pystac (1.3.0)
Requirement already satisfied: jmespath<2.0.0,>=0.7.1 in /Users/gadomski/.virtualenvs/pystac/lib/python3.9/site-packages (from boto3) (1.0.0)
Requirement already satisfied: botocore<1.25.0,>=1.24.28 in /Users/gadomski/.virtualenvs/pystac/lib/python3.9/site-packages (from boto3) (1.24.28)
Requirement already satisfied: s3transfer<0.6.0,>=0.5.0 in /Users/gadomski/.virtualenvs/pystac/lib/python3.9/site-packages (from boto3) (0.5.2)
Requirement already satisfied: certifi in /Users/gadomski/.virtualenvs/pystac/lib/python3.9/site-packages (from rasterio) (2021.5.30)
Requirement already satisfied: numpy in /Users/gadomski/.virtualenvs/pystac/lib/python3.9/site-packages (from rasterio) (1.22.3)
Requirement already satisfied: click-plugins in /Users/gadomski/.virtualenvs/pystac/lib/python3.9/site-packages (from rasterio) (1.1.1)
Requirement already satisfied: click>=4.0 in /Users/gadomski/.virtualenvs/pystac/lib/python3.9/site-packages (from rasterio) (8.0.1)
Requirement already satisfied: snuggs>=1.4.1 in /Users/gadomski/.virtualenvs/pystac/lib/python3.9/site-packages (from rasterio) (1.4.7)
Requirement already satisfied: cligj>=0.5 in /Users/gadomski/.virtualenvs/pystac/lib/python3.9/site-packages (from rasterio) (0.7.2)
Requirement already satisfied: affine in /Users/gadomski/.virtualenvs/pystac/lib/python3.9/site-packages (from rasterio) (2.3.1)
Requirement already satisfied: setuptools in /Users/gadomski/.virtualenvs/pystac/lib/python3.9/site-packages (from rasterio) (56.0.0)
Requirement already satisfied: attrs in /Users/gadomski/.virtualenvs/pystac/lib/python3.9/site-packages (from rasterio) (21.2.0)
Requirement already satisfied: python-dateutil>=2.7.0 in /Users/gadomski/.virtualenvs/pystac/lib/python3.9/site-packages (from pystac) (2.8.1)
Requirement already satisfied: urllib3<1.27,>=1.25.4 in /Users/gadomski/.virtualenvs/pystac/lib/python3.9/site-packages (from botocore<1.25.0,>=1.24.28->boto3) (1.26.5)
Requirement already satisfied: six>=1.5 in /Users/gadomski/.virtualenvs/pystac/lib/python3.9/site-packages (from python-dateutil>=2.7.0->pystac) (1.16.0)
Requirement already satisfied: pyparsing>=2.1.6 in /Users/gadomski/.virtualenvs/pystac/lib/python3.9/site-packages (from snuggs>=1.4.1->rasterio) (2.4.7)
###Markdown
We can import pystac and access most of the functionality we need with the single import:
###Code
import pystac
###Output
_____no_output_____
###Markdown
Creating a catalog from a local file

To give us some material to work with, let's download a single image from the [Spacenet 5 challenge](https://www.topcoder.com/challenges/30099956). We'll use a temporary directory to save off our single-item STAC.
###Code
import os
import urllib.request
from tempfile import TemporaryDirectory
tmp_dir = TemporaryDirectory()
img_path = os.path.join(tmp_dir.name, 'image.tif')
url = ('https://spacenet-dataset.s3.amazonaws.com/'
'spacenet/SN5_roads/train/AOI_7_Moscow/MS/'
'SN5_roads_train_AOI_7_Moscow_MS_chip996.tif')
urllib.request.urlretrieve(url, img_path)
###Output
_____no_output_____
###Markdown
We want to create a Catalog. Let's check the pydocs for `Catalog` to see what information we'll need. (We use `__doc__` instead of `help()` here to avoid printing out all the docs for the class.)
###Code
print(pystac.Catalog.__doc__)
###Output
A PySTAC Catalog represents a STAC catalog in memory.
A Catalog is a :class:`~pystac.STACObject` that may contain children,
which are instances of :class:`~pystac.Catalog` or :class:`~pystac.Collection`,
as well as :class:`~pystac.Item` s.
Args:
id : Identifier for the catalog. Must be unique within the STAC.
description : Detailed multi-line description to fully explain the catalog.
`CommonMark 0.28 syntax <https://commonmark.org/>`_ MAY be used for rich
text representation.
title : Optional short descriptive one-line title for the catalog.
stac_extensions : Optional list of extensions the Catalog implements.
href : Optional HREF for this catalog, which will be set as the
catalog's self link's HREF.
catalog_type : Optional catalog type for this catalog. Must
be one of the values in :class:`~pystac.CatalogType`.
###Markdown
Let's just give an ID and a description. We don't have to worry about the HREF right now; that will be set later.
###Code
catalog = pystac.Catalog(id='test-catalog', description='Tutorial catalog.')
###Output
_____no_output_____
###Markdown
There are no children or items in the catalog, since we haven't added anything yet.
###Code
print(list(catalog.get_children()))
print(list(catalog.get_items()))
###Output
[]
[]
###Markdown
We'll now create an Item to represent the image. Check the pydocs to see what you need to supply:
###Code
print(pystac.Item.__doc__)
###Output
An Item is the core granular entity in a STAC, containing the core metadata
that enables any client to search or crawl online catalogs of spatial 'assets' -
satellite imagery, derived data, DEM's, etc.
Args:
id : Provider identifier. Must be unique within the STAC.
geometry : Defines the full footprint of the asset represented by this
item, formatted according to
`RFC 7946, section 3.1 (GeoJSON) <https://tools.ietf.org/html/rfc7946>`_.
bbox : Bounding Box of the asset represented by this item
using either 2D or 3D geometries. The length of the array must be 2*n
where n is the number of dimensions. Could also be None in the case of a
null geometry.
datetime : Datetime associated with this item. If None,
a start_datetime and end_datetime must be supplied in the properties.
properties : A dictionary of additional metadata for the item.
stac_extensions : Optional list of extensions the Item implements.
href : Optional HREF for this item, which will be set as the item's
self link's HREF.
collection : The Collection or Collection ID that this item
belongs to.
extra_fields : Extra fields that are part of the top-level JSON
properties of the Item.
###Markdown
Using [rasterio](https://rasterio.readthedocs.io/en/stable/), we can pull out the bounding box of the image to use for the image metadata. If the image contained a NoData border, we would ideally pull out the footprint and save it as the geometry; in this case, we're working with a small chip that most likely has no NoData values.
###Code
import rasterio
from shapely.geometry import Polygon, mapping
def get_bbox_and_footprint(raster_uri):
with rasterio.open(raster_uri) as ds:
bounds = ds.bounds
bbox = [bounds.left, bounds.bottom, bounds.right, bounds.top]
footprint = Polygon([
[bounds.left, bounds.bottom],
[bounds.left, bounds.top],
[bounds.right, bounds.top],
[bounds.right, bounds.bottom]
])
return (bbox, mapping(footprint))
bbox, footprint = get_bbox_and_footprint(img_path)
print(bbox)
print(footprint)
###Output
[37.6616853489879, 55.73478197572927, 37.66573047610874, 55.73882710285011]
{'type': 'Polygon', 'coordinates': (((37.6616853489879, 55.73478197572927), (37.6616853489879, 55.73882710285011), (37.66573047610874, 55.73882710285011), (37.66573047610874, 55.73478197572927), (37.6616853489879, 55.73478197572927)),)}
###Markdown
We're also using `datetime.utcnow()` to supply the required datetime property for our Item. Since this is a required property, you might often find yourself making up a time to fill in if you don't know the exact capture time.
###Code
from datetime import datetime
item = pystac.Item(id='local-image',
geometry=footprint,
bbox=bbox,
datetime=datetime.utcnow(),
properties={})
###Output
_____no_output_____
###Markdown
We haven't added it to a catalog yet, so its parent isn't set. Once we add it to the catalog, we can see it correctly links to its parent.
###Code
assert item.get_parent() is None
catalog.add_item(item)
item.get_parent()
###Output
_____no_output_____
###Markdown
`describe()` is a useful method on `Catalog` - but be careful when using it on large catalogs, as it will walk the entire tree of the STAC.
###Code
catalog.describe()
###Output
* <Catalog id=test-catalog>
* <Item id=local-image>
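###Markdown
For large catalogs you can iterate lazily instead; `walk()` yields `(catalog, subcatalogs, items)` tuples without printing the whole tree up front. A small sketch:
###Code
for root, _, child_items in catalog.walk():
    print(root.id, [child.id for child in child_items])
###Output
_____no_output_____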
###Markdown
Adding Assets

We've created an Item, but there aren't any assets associated with it. Let's create one:
###Code
print(pystac.Asset.__doc__)
item.add_asset(
key='image',
asset=pystac.Asset(
href=img_path,
media_type=pystac.MediaType.GEOTIFF
)
)
###Output
_____no_output_____
###Markdown
At any time we can call `to_dict()` on STAC objects to see how the STAC JSON is shaping up. Notice the asset is now set:
###Code
import json
print(json.dumps(item.to_dict(), indent=4))
###Output
{
"type": "Feature",
"stac_version": "1.0.0",
"id": "local-image",
"properties": {
"datetime": "2022-03-29T12:47:45.754444Z"
},
"geometry": {
"type": "Polygon",
"coordinates": [
[
[
37.6616853489879,
55.73478197572927
],
[
37.6616853489879,
55.73882710285011
],
[
37.66573047610874,
55.73882710285011
],
[
37.66573047610874,
55.73478197572927
],
[
37.6616853489879,
55.73478197572927
]
]
]
},
"links": [
{
"rel": "root",
"href": null,
"type": "application/json"
},
{
"rel": "parent",
"href": null,
"type": "application/json"
}
],
"assets": {
"image": {
"href": "/var/folders/9z/lnsvqfqj4gs2d1j1nw3vynrm0000gn/T/tmpzpx86d17/image.tif",
"type": "image/tiff; application=geotiff"
}
},
"bbox": [
37.6616853489879,
55.73478197572927,
37.66573047610874,
55.73882710285011
],
"stac_extensions": []
}
###Markdown
Note that the link `href` properties are `null`. This is OK, as we're working with the STAC in memory. Next, we'll talk about writing the catalog out, and how to set those HREFs.

Saving the catalog

As the JSON above indicates, there are no HREFs set on these in-memory items. PySTAC uses the `self` link on STAC objects to track where the file lives. Because we haven't set them, they evaluate to `None`:
###Code
print(catalog.get_self_href() is None)
print(item.get_self_href() is None)
###Output
True
True
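###Markdown
You can also pin a single object to a location explicitly with `set_self_href`. A hedged sketch (the path is illustrative, and we'll use `normalize_hrefs` below instead, so it is left commented out):
###Code
# Hedged sketch: set one object's self HREF by hand.
# item.set_self_href(os.path.join(tmp_dir.name, 'stac', 'local-image', 'local-image.json'))
###Output
_____no_output_____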
###Markdown
In order to set them, we can use `normalize_hrefs`. This method will create a normalized set of HREFs for each STAC object in the catalog, according to the [best practices document](https://github.com/radiantearth/stac-spec/blob/v0.8.1/best-practices.md#catalog-layout)'s recommendations on how to lay out a catalog.
###Code
catalog.normalize_hrefs(os.path.join(tmp_dir.name, 'stac'))
###Output
_____no_output_____
###Markdown
Now that we've normalized to a root directory (the temporary directory), we see that the `self` links are set:
###Code
print(catalog.get_self_href())
print(item.get_self_href())
###Output
/var/folders/9z/lnsvqfqj4gs2d1j1nw3vynrm0000gn/T/tmpzpx86d17/stac/catalog.json
/var/folders/9z/lnsvqfqj4gs2d1j1nw3vynrm0000gn/T/tmpzpx86d17/stac/local-image/local-image.json
###Markdown
We can now call `save` on the catalog, which will recursively save all the STAC objects to their respective self HREFs. Save requires a `CatalogType` to be set. You can review the [API docs](https://pystac.readthedocs.io/en/stable/api.html#catalogtype) on `CatalogType` to see what each type means (unfortunately `help` doesn't show docstrings for attributes).
###Code
catalog.save(catalog_type=pystac.CatalogType.SELF_CONTAINED)
!ls {tmp_dir.name}/stac/*
with open(catalog.self_href) as f:
print(f.read())
with open(item.self_href) as f:
print(f.read())
###Output
{
"type": "Feature",
"stac_version": "1.0.0",
"id": "local-image",
"properties": {
"datetime": "2022-03-29T12:47:45.754444Z"
},
"geometry": {
"type": "Polygon",
"coordinates": [
[
[
37.6616853489879,
55.73478197572927
],
[
37.6616853489879,
55.73882710285011
],
[
37.66573047610874,
55.73882710285011
],
[
37.66573047610874,
55.73478197572927
],
[
37.6616853489879,
55.73478197572927
]
]
]
},
"links": [
{
"rel": "root",
"href": "../catalog.json",
"type": "application/json"
},
{
"rel": "parent",
"href": "../catalog.json",
"type": "application/json"
}
],
"assets": {
"image": {
"href": "/var/folders/9z/lnsvqfqj4gs2d1j1nw3vynrm0000gn/T/tmpzpx86d17/image.tif",
"type": "image/tiff; application=geotiff"
}
},
"bbox": [
37.6616853489879,
55.73478197572927,
37.66573047610874,
55.73882710285011
],
"stac_extensions": []
}
###Markdown
As you can see, all links are saved with relative paths. That's because we used `catalog_type=CatalogType.SELF_CONTAINED`. If we save an Absolute Published catalog, we'll see absolute paths:
###Code
catalog.save(catalog_type=pystac.CatalogType.ABSOLUTE_PUBLISHED)
###Output
_____no_output_____
###Markdown
Now the links included in the STAC item are all absolute:
###Code
with open(item.get_self_href()) as f:
print(f.read())
###Output
{
"type": "Feature",
"stac_version": "1.0.0",
"id": "local-image",
"properties": {
"datetime": "2022-03-29T12:47:45.754444Z"
},
"geometry": {
"type": "Polygon",
"coordinates": [
[
[
37.6616853489879,
55.73478197572927
],
[
37.6616853489879,
55.73882710285011
],
[
37.66573047610874,
55.73882710285011
],
[
37.66573047610874,
55.73478197572927
],
[
37.6616853489879,
55.73478197572927
]
]
]
},
"links": [
{
"rel": "root",
"href": "/var/folders/9z/lnsvqfqj4gs2d1j1nw3vynrm0000gn/T/tmpzpx86d17/stac/catalog.json",
"type": "application/json"
},
{
"rel": "parent",
"href": "/var/folders/9z/lnsvqfqj4gs2d1j1nw3vynrm0000gn/T/tmpzpx86d17/stac/catalog.json",
"type": "application/json"
},
{
"rel": "self",
"href": "/var/folders/9z/lnsvqfqj4gs2d1j1nw3vynrm0000gn/T/tmpzpx86d17/stac/local-image/local-image.json",
"type": "application/json"
}
],
"assets": {
"image": {
"href": "/var/folders/9z/lnsvqfqj4gs2d1j1nw3vynrm0000gn/T/tmpzpx86d17/image.tif",
"type": "image/tiff; application=geotiff"
}
},
"bbox": [
37.6616853489879,
55.73478197572927,
37.66573047610874,
55.73882710285011
],
"stac_extensions": []
}
###Markdown
Notice that the Asset HREF is absolute in both cases. We can make the Asset HREF relative to the STAC Item by using `.make_all_asset_hrefs_relative()`:
###Code
catalog.make_all_asset_hrefs_relative()
catalog.save(catalog_type=pystac.CatalogType.SELF_CONTAINED)
with open(item.get_self_href()) as f:
print(f.read())
###Output
{
"type": "Feature",
"stac_version": "1.0.0",
"id": "local-image",
"properties": {
"datetime": "2022-03-29T12:47:45.754444Z"
},
"geometry": {
"type": "Polygon",
"coordinates": [
[
[
37.6616853489879,
55.73478197572927
],
[
37.6616853489879,
55.73882710285011
],
[
37.66573047610874,
55.73882710285011
],
[
37.66573047610874,
55.73478197572927
],
[
37.6616853489879,
55.73478197572927
]
]
]
},
"links": [
{
"rel": "root",
"href": "../catalog.json",
"type": "application/json"
},
{
"rel": "parent",
"href": "../catalog.json",
"type": "application/json"
}
],
"assets": {
"image": {
"href": "../../image.tif",
"type": "image/tiff; application=geotiff"
}
},
"bbox": [
37.6616853489879,
55.73478197572927,
37.66573047610874,
55.73882710285011
],
"stac_extensions": []
}
###Markdown
Creating an Item that implements the EO extension

In the code above our item only implemented the core STAC Item specification. With [extensions](https://github.com/radiantearth/stac-spec/tree/v0.9.0/extensions) we can record more information and add additional functionality to the Item. Given that we know this is a World View 3 image that has earth observation data, we can enable the [eo extension](https://github.com/radiantearth/stac-spec/tree/v0.8.1/extensions/eo) to add band information. To add eo information to an item we'll need to specify some more data. First, let's define the bands of World View 3:
###Code
from pystac.extensions.eo import Band, EOExtension
# From: https://www.spaceimagingme.com/downloads/sensors/datasheets/DG_WorldView3_DS_2014.pdf
wv3_bands = [Band.create(name='Coastal', description='Coastal: 400 - 450 nm', common_name='coastal'),
Band.create(name='Blue', description='Blue: 450 - 510 nm', common_name='blue'),
Band.create(name='Green', description='Green: 510 - 580 nm', common_name='green'),
Band.create(name='Yellow', description='Yellow: 585 - 625 nm', common_name='yellow'),
Band.create(name='Red', description='Red: 630 - 690 nm', common_name='red'),
Band.create(name='Red Edge', description='Red Edge: 705 - 745 nm', common_name='rededge'),
Band.create(name='Near-IR1', description='Near-IR1: 770 - 895 nm', common_name='nir08'),
Band.create(name='Near-IR2', description='Near-IR2: 860 - 1040 nm', common_name='nir09')]
###Output
_____no_output_____
###Markdown
Notice that we used the `.create` method to create new band information. We can now create an Item, enable the eo extension, add the band information, and add it to our catalog:
###Code
eo_item = pystac.Item(id='local-image-eo',
geometry=footprint,
bbox=bbox,
datetime=datetime.utcnow(),
properties={})
eo = EOExtension.ext(eo_item, add_if_missing=True)
eo.apply(bands=wv3_bands)
###Output
_____no_output_____
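###Markdown
Because we passed `add_if_missing=True`, the eo extension's schema URI has been added to the item's `stac_extensions` list:
###Code
print(eo_item.stac_extensions)
###Output
_____no_output_____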
###Markdown
There are also [common metadata](https://github.com/radiantearth/stac-spec/blob/v0.9.0/item-spec/common-metadata.md) fields that we can use to capture additional information about the WorldView 3 imagery:
###Code
eo_item.common_metadata.platform = "Maxar"
eo_item.common_metadata.instruments = ["WorldView3"]
eo_item.common_metadata.gsd = 0.3
print(eo_item)
###Output
<Item id=local-image-eo>
###Markdown
We can use the eo extension to add bands to the assets we add to the item:
###Code
asset = pystac.Asset(href=img_path,
media_type=pystac.MediaType.GEOTIFF)
eo_item.add_asset("image", asset)
eo_on_asset = EOExtension.ext(eo_item.assets["image"])
eo_on_asset.apply(wv3_bands)
###Output
_____no_output_____
###Markdown
If we look at the asset's JSON representation, we can see the appropriate band indexes are set:
###Code
asset.to_dict()
###Output
_____no_output_____
###Markdown
Let's clear the in-memory catalog, add the EO item, and save to a new STAC:
###Code
catalog.clear_items()
list(catalog.get_items())
catalog.add_item(eo_item)
list(catalog.get_items())
catalog.normalize_and_save(root_href=os.path.join(tmp_dir.name, 'stac-eo'),
catalog_type=pystac.CatalogType.SELF_CONTAINED)
###Output
_____no_output_____
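###Markdown
`normalize_and_save` is a convenience wrapper for the two calls we made separately earlier; the cell above is equivalent to:
###Code
# Equivalent to normalize_and_save (re-running this just re-saves the files):
catalog.normalize_hrefs(os.path.join(tmp_dir.name, 'stac-eo'))
catalog.save(catalog_type=pystac.CatalogType.SELF_CONTAINED)
###Output
_____no_output_____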
###Markdown
Now, if we read the catalog from the filesystem, PySTAC recognizes that the item implements eo and can use its functionality, e.g. getting the bands off the asset:
###Code
catalog2 = pystac.read_file(os.path.join(tmp_dir.name, 'stac-eo', 'catalog.json'))
assert isinstance(catalog2, pystac.Catalog)
list(catalog2.get_items())
item: pystac.Item = next(catalog2.get_all_items())
assert EOExtension.has_extension(item)
eo_on_asset = EOExtension.ext(item.assets["image"])
print(eo_on_asset.bands)
###Output
[<Band name=Coastal>, <Band name=Blue>, <Band name=Green>, <Band name=Yellow>, <Band name=Red>, <Band name=Red Edge>, <Band name=Near-IR1>, <Band name=Near-IR2>]
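###Markdown
The returned `Band` objects carry the full band metadata; for example, we can pull the common names back out:
###Code
print([band.common_name for band in eo_on_asset.bands])
###Output
_____no_output_____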
###Markdown
Collections

Collections are a subtype of Catalog that have some additional properties to make them more searchable. They can also define common properties so that items in the collection don't have to duplicate common data for each item. Let's create a collection to hold common properties between two images from the Spacenet 5 challenge.

First we'll get another image, and its bbox and footprint:
###Code
url2 = ('https://spacenet-dataset.s3.amazonaws.com/'
'spacenet/SN5_roads/train/AOI_7_Moscow/MS/'
'SN5_roads_train_AOI_7_Moscow_MS_chip997.tif')
img_path2 = os.path.join(tmp_dir.name, 'image2.tif')  # distinct filename so we don't overwrite the first image
urllib.request.urlretrieve(url2, img_path2)
bbox2, footprint2 = get_bbox_and_footprint(img_path2)
###Output
_____no_output_____
###Markdown
We can take a look at the pydocs for Collection to see what information we need to supply in order to satisfy the spec.
###Code
print(pystac.Collection.__doc__)
###Output
A Collection extends the Catalog spec with additional metadata that helps
enable discovery.
Args:
id : Identifier for the collection. Must be unique within the STAC.
description : Detailed multi-line description to fully explain the
collection. `CommonMark 0.28 syntax <https://commonmark.org/>`_ MAY
be used for rich text representation.
extent : Spatial and temporal extents that describe the bounds of
all items contained within this Collection.
title : Optional short descriptive one-line title for the
collection.
stac_extensions : Optional list of extensions the Collection
implements.
href : Optional HREF for this collection, which be set as the
collection's self link's HREF.
catalog_type : Optional catalog type for this catalog. Must
be one of the values in :class`~pystac.CatalogType`.
license : Collection's license(s) as a
`SPDX License identifier <https://spdx.org/licenses/>`_,
`various`, or `proprietary`. If collection includes
data with multiple different licenses, use `various` and add a link for
each. Defaults to 'proprietary'.
keywords : Optional list of keywords describing the collection.
providers : Optional list of providers of this Collection.
summaries : An optional map of property summaries,
either a set of values or statistics such as a range.
extra_fields : Extra fields that are part of the top-level
JSON properties of the Collection.
###Markdown
Beyond what a Catalog requires, a Collection requires a license, and an `Extent` that describes the range of space and time that the items it holds occupy.
###Code
print(pystac.Extent.__doc__)
###Output
Describes the spatiotemporal extents of a Collection.
Args:
spatial : Potential spatial extent covered by the collection.
temporal : Potential temporal extent covered by the collection.
extra_fields : Dictionary containing additional top-level fields defined on the
Extent object.
###Markdown
An Extent is composed of a SpatialExtent and a TemporalExtent. These hold one or more bounding boxes and time intervals, respectively, that completely cover the items contained in the collection.

Let's start by creating two new items - these will be core Items. We'll enable the `eo` extension on their image assets with `EOExtension.ext(..., add_if_missing=True)`, which records the extension in each item's `stac_extensions`.
###Code
collection_item = pystac.Item(id='local-image-col-1',
geometry=footprint,
bbox=bbox,
datetime=datetime.utcnow(),
properties={})
collection_item.common_metadata.gsd = 0.3
collection_item.common_metadata.platform = 'Maxar'
collection_item.common_metadata.instruments = ['WorldView3']
asset = pystac.Asset(href=img_path,
media_type=pystac.MediaType.GEOTIFF)
collection_item.add_asset("image", asset)
eo = EOExtension.ext(collection_item.assets["image"], add_if_missing=True)
eo.apply(wv3_bands)
collection_item2 = pystac.Item(id='local-image-col-2',
geometry=footprint2,
bbox=bbox2,
datetime=datetime.utcnow(),
properties={})
collection_item2.common_metadata.gsd = 0.3
collection_item2.common_metadata.platform = 'Maxar'
collection_item2.common_metadata.instruments = ['WorldView3']
asset2 = pystac.Asset(href=img_path2,
media_type=pystac.MediaType.GEOTIFF)
collection_item2.add_asset("image", asset2)
eo = EOExtension.ext(collection_item2.assets["image"], add_if_missing=True)
eo.apply([
band for band in wv3_bands if band.name in ["Red", "Green", "Blue"]
])
###Output
_____no_output_____
###Markdown
We can use our two items' metadata to find out what the proper bounds are:
###Code
from shapely.geometry import shape
unioned_footprint = shape(footprint).union(shape(footprint2))
collection_bbox = list(unioned_footprint.bounds)
spatial_extent = pystac.SpatialExtent(bboxes=[collection_bbox])
collection_interval = sorted([collection_item.datetime, collection_item2.datetime])
temporal_extent = pystac.TemporalExtent(intervals=[collection_interval])
collection_extent = pystac.Extent(spatial=spatial_extent, temporal=temporal_extent)
collection = pystac.Collection(id='wv3-images',
description='Spacenet 5 images over Moscow',
extent=collection_extent,
license='CC-BY-SA-4.0')
###Output
_____no_output_____
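###Markdown
Since every item will share the same platform, instruments, and GSD, the collection is also a natural home for `summaries` (mentioned in the `Collection` docstring above). A minimal sketch using the values we set on the items, with `pystac.Summaries` holding the mapping:
###Code
# Sketch: summarize the common metadata fields across the collection's items
collection.summaries = pystac.Summaries({
    "gsd": [0.3],
    "platform": ["Maxar"],
    "instruments": [["WorldView3"]],
})
###Output
_____no_output_____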
###Markdown
Now if we add our items to our Collection, and our Collection to our Catalog, we get the following STAC that can be saved:
###Code
collection.add_items([collection_item, collection_item2])
catalog.clear_items()
catalog.clear_children()
catalog.add_child(collection)
catalog.describe()
catalog.normalize_and_save(root_href=os.path.join(tmp_dir.name, 'stac-collection'),
catalog_type=pystac.CatalogType.SELF_CONTAINED)
###Output
_____no_output_____
###Markdown
Cleanup

Don't forget to clean up the temporary directory!
###Code
tmp_dir.cleanup()
###Output
_____no_output_____
###Markdown
Creating a STAC of imagery from Spacenet 5 data

Now, let's take what we've learned and create a Catalog with more data in it.

Allowing PySTAC to read from AWS S3

PySTAC aims to be virtually zero-dependency (notwithstanding the why-isn't-this-in-stdlib datetime-util), so it doesn't have the ability to read from or write to anything but the local file system. However, we can hook into PySTAC's IO in the following way. Learn more about how to customize I/O in STAC from the [documentation](https://pystac.readthedocs.io/en/stable/concepts.html#i-o-in-pystac):
###Code
from typing import Union, Any
from urllib.parse import urlparse
import boto3
from pystac import Link
from pystac.stac_io import DefaultStacIO
class CustomStacIO(DefaultStacIO):
    def __init__(self):
        self.s3 = boto3.resource("s3")
        super().__init__()

    def read_text(
        self, source: Union[str, Link], *args: Any, **kwargs: Any
    ) -> str:
        # Resolve a Link to its HREF before parsing the scheme.
        href = source.href if isinstance(source, Link) else str(source)
        parsed = urlparse(href)
        if parsed.scheme == "s3":
            bucket = parsed.netloc
            key = parsed.path[1:]
            obj = self.s3.Object(bucket, key)
            return obj.get()["Body"].read().decode("utf-8")
        else:
            return super().read_text(source, *args, **kwargs)

    def write_text(
        self, dest: Union[str, Link], txt: str, *args: Any, **kwargs: Any
    ) -> None:
        href = dest.href if isinstance(dest, Link) else str(dest)
        parsed = urlparse(href)
        if parsed.scheme == "s3":
            bucket = parsed.netloc
            key = parsed.path[1:]
            self.s3.Object(bucket, key).put(Body=txt, ContentEncoding="utf-8")
        else:
            super().write_text(dest, txt, *args, **kwargs)
###Output
_____no_output_____
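###Markdown
Defining the class isn't enough on its own; following the pattern in the PySTAC I/O documentation linked above, we register it as the default `StacIO` so that reads and writes go through it:
###Code
from pystac.stac_io import StacIO

StacIO.set_default(CustomStacIO)
###Output
_____no_output_____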
###Markdown
We'll need a utility to list keys for reading the lists of files from S3:
###Code
# From https://alexwlchan.net/2017/07/listing-s3-keys/
from botocore import UNSIGNED
from botocore.config import Config
def get_s3_keys(bucket, prefix):
    """Generate all the keys in an S3 bucket."""
    s3 = boto3.client('s3', config=Config(signature_version=UNSIGNED))
    kwargs = {'Bucket': bucket, 'Prefix': prefix}
    while True:
        resp = s3.list_objects_v2(**kwargs)
        for obj in resp['Contents']:
            yield obj['Key']
        try:
            kwargs['ContinuationToken'] = resp['NextContinuationToken']
        except KeyError:
            break
###Output
_____no_output_____
###Markdown
Let's make a STAC of imagery over Moscow as part of the Spacenet 5 challenge. As a first step, we can list out the imagery and extract IDs from each of the chips.
###Code
moscow_training_chip_uris = list(get_s3_keys(bucket='spacenet-dataset',
prefix='spacenet/SN5_roads/train/AOI_7_Moscow/PS-MS/'))
import re
chip_id_to_data = {}
def get_chip_id(uri):
    return re.search(r'.*_chip(\d+)\.', uri).group(1)

for uri in moscow_training_chip_uris:
    chip_id = get_chip_id(uri)
    chip_id_to_data[chip_id] = {'img': 's3://spacenet-dataset/{}'.format(uri)}
###Output
_____no_output_____
###Markdown
For this tutorial, we'll only take a subset of the data.
###Code
chip_id_to_data = dict(list(chip_id_to_data.items())[:10])
chip_id_to_data
###Output
_____no_output_____
###Markdown
Let's turn each of those chips into a STAC Item that represents the image.
###Code
chip_id_to_items = {}
###Output
_____no_output_____
###Markdown
We'll create core `Item`s for our imagery, but mark them with the `eo` extension as we did above, and store the `eo` data in a `Collection`.

Note that the image CRS is WGS 84 (Lat/Lng). If it weren't, we'd have to reproject the footprint to WGS 84 in order to be compliant with the spec, which can easily be done with [pyproj](https://github.com/pyproj4/pyproj) (a short sketch follows the next cell).

Here we're taking advantage of `rasterio`'s ability to read S3 URIs, which only grabs the GeoTIFF metadata and does not pull the whole file down.
###Code
import os
os.environ["AWS_NO_SIGN_REQUEST"] = "true"
for chip_id in chip_id_to_data:
    img_uri = chip_id_to_data[chip_id]['img']
    print('Processing {}'.format(img_uri))
    bbox, footprint = get_bbox_and_footprint(img_uri)
    item = pystac.Item(id='img_{}'.format(chip_id),
                       geometry=footprint,
                       bbox=bbox,
                       datetime=datetime.utcnow(),
                       properties={})
    item.common_metadata.gsd = 0.3
    item.common_metadata.platform = 'Maxar'
    item.common_metadata.instruments = ['WorldView3']
    eo = EOExtension.ext(item, add_if_missing=True)
    eo.bands = wv3_bands
    asset = pystac.Asset(href=img_uri,
                         media_type=pystac.MediaType.COG)
    item.add_asset(key='ps-ms', asset=asset)
    eo = EOExtension.ext(item.assets["ps-ms"])
    eo.bands = wv3_bands
    chip_id_to_items[chip_id] = item
###Output
Processing s3://spacenet-dataset/spacenet/SN5_roads/train/AOI_7_Moscow/PS-MS/SN5_roads_train_AOI_7_Moscow_PS-MS_chip0.tif
Processing s3://spacenet-dataset/spacenet/SN5_roads/train/AOI_7_Moscow/PS-MS/SN5_roads_train_AOI_7_Moscow_PS-MS_chip1.tif
Processing s3://spacenet-dataset/spacenet/SN5_roads/train/AOI_7_Moscow/PS-MS/SN5_roads_train_AOI_7_Moscow_PS-MS_chip10.tif
Processing s3://spacenet-dataset/spacenet/SN5_roads/train/AOI_7_Moscow/PS-MS/SN5_roads_train_AOI_7_Moscow_PS-MS_chip100.tif
Processing s3://spacenet-dataset/spacenet/SN5_roads/train/AOI_7_Moscow/PS-MS/SN5_roads_train_AOI_7_Moscow_PS-MS_chip1000.tif
Processing s3://spacenet-dataset/spacenet/SN5_roads/train/AOI_7_Moscow/PS-MS/SN5_roads_train_AOI_7_Moscow_PS-MS_chip1001.tif
Processing s3://spacenet-dataset/spacenet/SN5_roads/train/AOI_7_Moscow/PS-MS/SN5_roads_train_AOI_7_Moscow_PS-MS_chip1002.tif
Processing s3://spacenet-dataset/spacenet/SN5_roads/train/AOI_7_Moscow/PS-MS/SN5_roads_train_AOI_7_Moscow_PS-MS_chip1003.tif
Processing s3://spacenet-dataset/spacenet/SN5_roads/train/AOI_7_Moscow/PS-MS/SN5_roads_train_AOI_7_Moscow_PS-MS_chip1004.tif
Processing s3://spacenet-dataset/spacenet/SN5_roads/train/AOI_7_Moscow/PS-MS/SN5_roads_train_AOI_7_Moscow_PS-MS_chip1005.tif
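###Markdown
As an aside: if the chips were *not* already in WGS 84, the reprojection mentioned above might look roughly like this hypothetical sketch (the UTM EPSG code is made up for illustration, and the final line is left commented since our footprints need no reprojection):
###Code
# Hypothetical reprojection sketch -- not needed for this data:
from pyproj import Transformer
from shapely.geometry import shape, mapping
from shapely.ops import transform

to_wgs84 = Transformer.from_crs("EPSG:32637", "EPSG:4326", always_xy=True).transform
# footprint_wgs84 = mapping(transform(to_wgs84, shape(footprint_in_utm)))
###Output
_____no_output_____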
###Markdown
Creating the Collection

All of these images are over Moscow. In Spacenet 5 there are a couple of cities that have imagery, and collections are a good way to keep those groups of imagery separate. We can store all of the common `eo` metadata in the collection.
###Code
from shapely.geometry import (shape, MultiPolygon)
footprints = list(map(lambda i: shape(i.geometry).envelope,
chip_id_to_items.values()))
collection_bbox = MultiPolygon(footprints).bounds
spatial_extent = pystac.SpatialExtent(bboxes=[collection_bbox])
datetimes = sorted(list(map(lambda i: i.datetime,
chip_id_to_items.values())))
temporal_extent = pystac.TemporalExtent(intervals=[[datetimes[0], datetimes[-1]]])
collection_extent = pystac.Extent(spatial=spatial_extent, temporal=temporal_extent)
collection = pystac.Collection(id='wv3-images',
description='Spacenet 5 images over Moscow',
extent=collection_extent,
license='CC-BY-SA-4.0')
collection.add_items(chip_id_to_items.values())
collection.describe()
###Output
* <Collection id=wv3-images>
* <Item id=img_0>
* <Item id=img_1>
* <Item id=img_10>
* <Item id=img_100>
* <Item id=img_1000>
* <Item id=img_1001>
* <Item id=img_1002>
* <Item id=img_1003>
* <Item id=img_1004>
* <Item id=img_1005>
###Markdown
Now, we can create a Catalog and add the collection.
###Code
catalog = pystac.Catalog(id='spacenet5', description='Spacenet 5 Data (Test)')
catalog.add_child(collection)
catalog.describe()
###Output
* <Catalog id=spacenet5>
* <Collection id=wv3-images>
* <Item id=img_0>
* <Item id=img_1>
* <Item id=img_10>
* <Item id=img_100>
* <Item id=img_1000>
* <Item id=img_1001>
* <Item id=img_1002>
* <Item id=img_1003>
* <Item id=img_1004>
* <Item id=img_1005>
###Markdown
Adding items with the label extension to the Spacenet 5 catalog

We can use the [label extension](https://github.com/radiantearth/stac-spec/tree/v0.8.1/extensions/label) of the STAC spec to represent the training data in our STAC. For this, we need to grab the URIs of the GeoJSON of roads:
###Code
moscow_training_geojson_uris = list(get_s3_keys(bucket='spacenet-dataset',
prefix='spacenet/SN5_roads/train/AOI_7_Moscow/geojson_roads_speed/'))
for uri in moscow_training_geojson_uris:
    chip_id = get_chip_id(uri)
    if chip_id in chip_id_to_data:
        chip_id_to_data[chip_id]['label'] = 's3://spacenet-dataset/{}'.format(uri)
###Output
_____no_output_____
###Markdown
We'll add the items to their own subcatalog; since they don't inherit the Collection's `eo` properties, they shouldn't go in the Collection.
###Code
label_catalog = pystac.Catalog(id='spacenet-data-labels', description='Labels for Spacenet 5')
catalog.add_child(label_catalog)
###Output
_____no_output_____
###Markdown
To see the required fields for the label extension we can check the pydocs on the `apply` method of the extension:
###Code
from pystac.extensions.label import LabelExtension
print(LabelExtension.apply.__doc__)
###Output
Applies label extension properties to the extended Item.
Args:
label_description : A description of the label, how it was created,
and what it is recommended for
label_type : An Enum of either vector label type or raster label type. Use
one of :class:`~pystac.LabelType`.
label_properties : These are the names of the property field(s) in each
Feature of the label asset's FeatureCollection that contains the classes
(keywords from label:classes if the property defines classes).
If labels are rasters, this should be None.
label_classes : Optional, but required if using categorical data.
A list of :class:`LabelClasses` instances defining the list of possible
class names for each label:properties. (e.g., tree, building, car,
hippo)
label_tasks : Recommended to be a subset of 'regression', 'classification',
'detection', or 'segmentation', but may be an arbitrary value.
label_methods: Recommended to be a subset of 'automated' or 'manual',
but may be an arbitrary value.
label_overviews : Optional list of :class:`LabelOverview` instances
that store counts (for classification-type data) or summary statistics
(for continuous numerical/regression data).
###Markdown
This loop creates our label items and associates each to the appropriate source image Item.
###Code
from pystac.extensions.label import LabelType
for chip_id in chip_id_to_data:
    img_item = collection.get_item('img_{}'.format(chip_id))
    assert img_item
    label_uri = chip_id_to_data[chip_id]['label']
    label_item = pystac.Item(id='label_{}'.format(chip_id),
                             geometry=img_item.geometry,
                             bbox=img_item.bbox,
                             datetime=datetime.utcnow(),
                             properties={})
    label = LabelExtension.ext(label_item, add_if_missing=True)
    label.apply(label_description="SpaceNet 5 Road labels",
                label_type=LabelType.VECTOR,
                label_tasks=['segmentation', 'regression'])
    label.add_source(img_item)
    label.add_geojson_labels(label_uri)
    label_catalog.add_item(label_item)
###Output
_____no_output_____
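###Markdown
If the labels were categorical, we would also pass `label_classes` to `apply`; here is a minimal, hypothetical sketch (our road labels carry continuous speed values, so we didn't need it above):
###Code
from pystac.extensions.label import LabelClasses

# Hypothetical: possible class names for a categorical version of these labels
road_classes = LabelClasses.create(name='class', classes=['road', 'background'])
###Output
_____no_output_____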
###Markdown
Now we have a STAC of training data!
###Code
catalog.describe()
label_item = catalog.get_child('spacenet-data-labels').get_item('label_1')
label_item.to_dict()
###Output
_____no_output_____
###Markdown
How to create STAC Catalogs STAC Community Sprint, Arlington, November 7th 2019 This notebook runs through some of the basics of using PySTAC to create a static STAC. It was part of a 30 minute presentation at the [community STAC sprint](https://github.com/radiantearth/community-sprints/tree/master/11052019-arlignton-va) in Arlington, VA in November 2019. This tutorial will require the `boto3`, `rasterio`, and `shapely` libraries:
###Code
!pip install boto3
!pip install rasterio
!pip install shapely
###Output
Requirement already satisfied: boto3 in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (1.10.8)
Requirement already satisfied: botocore<1.14.0,>=1.13.8 in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (from boto3) (1.13.8)
Requirement already satisfied: s3transfer<0.3.0,>=0.2.0 in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (from boto3) (0.2.1)
Requirement already satisfied: jmespath<1.0.0,>=0.7.1 in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (from boto3) (0.9.4)
Requirement already satisfied: urllib3<1.26,>=1.20; python_version >= "3.4" in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (from botocore<1.14.0,>=1.13.8->boto3) (1.25.6)
Requirement already satisfied: docutils<0.16,>=0.10 in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (from botocore<1.14.0,>=1.13.8->boto3) (0.15.2)
Requirement already satisfied: python-dateutil<3.0.0,>=2.1; python_version >= "2.7" in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (from botocore<1.14.0,>=1.13.8->boto3) (2.8.1)
Requirement already satisfied: six>=1.5 in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (from python-dateutil<3.0.0,>=2.1; python_version >= "2.7"->botocore<1.14.0,>=1.13.8->boto3) (1.12.0)
[33mWARNING: You are using pip version 20.1.1; however, version 20.2 is available.
You should consider upgrading via the '/Users/rob/proj/stac/pystac/venv/bin/python -m pip install --upgrade pip' command.[0m
Requirement already satisfied: rasterio in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (1.1.0)
Requirement already satisfied: numpy in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (from rasterio) (1.17.3)
Requirement already satisfied: snuggs>=1.4.1 in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (from rasterio) (1.4.7)
Requirement already satisfied: click<8,>=4.0 in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (from rasterio) (7.0)
Requirement already satisfied: click-plugins in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (from rasterio) (1.1.1)
Requirement already satisfied: attrs in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (from rasterio) (19.3.0)
Requirement already satisfied: cligj>=0.5 in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (from rasterio) (0.5.0)
Requirement already satisfied: affine in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (from rasterio) (2.3.0)
Requirement already satisfied: pyparsing>=2.1.6 in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (from snuggs>=1.4.1->rasterio) (2.4.2)
[33mWARNING: You are using pip version 20.1.1; however, version 20.2 is available.
You should consider upgrading via the '/Users/rob/proj/stac/pystac/venv/bin/python -m pip install --upgrade pip' command.[0m
Requirement already satisfied: shapely in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (1.6.4.post2)
[33mWARNING: You are using pip version 20.1.1; however, version 20.2 is available.
You should consider upgrading via the '/Users/rob/proj/stac/pystac/venv/bin/python -m pip install --upgrade pip' command.[0m
###Markdown
We can import pystac and access most of the functionality we need with the single import:
###Code
import pystac
###Output
_____no_output_____
###Markdown
Creating a catalog from a local file To give us some material to work with, lets download a single image from the [Spacenet 5 challenge](https://www.topcoder.com/challenges/30099956). We'll use a temporary directory to save off our single-item STAC.
###Code
import os
import urllib.request
from tempfile import TemporaryDirectory
tmp_dir = TemporaryDirectory()
img_path = os.path.join(tmp_dir.name, 'image.tif')
url = ('https://spacenet-dataset.s3.amazonaws.com/'
'spacenet/SN5_roads/train/AOI_7_Moscow/MS/'
'SN5_roads_train_AOI_7_Moscow_MS_chip996.tif')
urllib.request.urlretrieve(url, img_path)
###Output
_____no_output_____
###Markdown
We want to create a Catalog. Let's check the pydocs for `Catalog` to see what information we'll need. (We use `__doc__` instead of `help()` here to avoid printing out all the docs for the class.)
###Code
print(pystac.Catalog.__doc__)
###Output
A PySTAC Catalog represents a STAC catalog in memory.
A Catalog is a :class:`~pystac.STACObject` that may contain children,
which are instances of :class:`~pystac.Catalog` or :class:`~pystac.Collection`,
as well as :class:`~pystac.Item` s.
Args:
id (str): Identifier for the catalog. Must be unique within the STAC.
description (str): Detailed multi-line description to fully explain the catalog.
`CommonMark 0.28 syntax <http://commonmark.org/>`_ MAY be used for rich text
representation.
title (str or None): Optional short descriptive one-line title for the catalog.
stac_extensions (List[str]): Optional list of extensions the Catalog implements.
href (str or None): Optional HREF for this catalog, which be set as the catalog's
self link's HREF.
Attributes:
id (str): Identifier for the catalog.
description (str): Detailed multi-line description to fully explain the catalog.
title (str or None): Optional short descriptive one-line title for the catalog.
stac_extensions (List[str] or None): Optional list of extensions the Catalog implements.
extra_fields (dict or None): Extra fields that are part of the top-level JSON properties
of the Catalog.
links (List[Link]): A list of :class:`~pystac.Link` objects representing
all links associated with this Catalog.
###Markdown
Let's just give an ID and a description. We don't have to worry about the HREF right now; that will be set later.
###Code
catalog = pystac.Catalog(id='test-catalog', description='Tutorial catalog.')
###Output
_____no_output_____
###Markdown
There are no children or items in the catalog, since we haven't added anything yet.
###Code
print(list(catalog.get_children()))
print(list(catalog.get_items()))
###Output
[]
[]
###Markdown
We'll now create an Item to represent the image. Check the pydocs to see what you need to supply:
###Code
print(pystac.Item.__doc__)
###Output
An Item is the core granular entity in a STAC, containing the core metadata
that enables any client to search or crawl online catalogs of spatial 'assets' -
satellite imagery, derived data, DEM's, etc.
Args:
id (str): Provider identifier. Must be unique within the STAC.
geometry (dict): Defines the full footprint of the asset represented by this item,
formatted according to `RFC 7946, section 3.1 (GeoJSON)
<https://tools.ietf.org/html/rfc7946>`_.
bbox (List[float] or None): Bounding Box of the asset represented by this item using
either 2D or 3D geometries. The length of the array must be 2*n where n is the
number of dimensions. Could also be None in the case of a null geometry.
datetime (datetime or None): Datetime associated with this item. If None,
a start_datetime and end_datetime must be supplied in the properties.
properties (dict): A dictionary of additional metadata for the item.
stac_extensions (List[str]): Optional list of extensions the Item implements.
href (str or None): Optional HREF for this item, which be set as the item's
self link's HREF.
collection (Collection or str): The Collection or Collection ID that this item
belongs to.
extra_fields (dict or None): Extra fields that are part of the top-level JSON properties
of the Item.
Attributes:
id (str): Provider identifier. Unique within the STAC.
geometry (dict): Defines the full footprint of the asset represented by this item,
formatted according to `RFC 7946, section 3.1 (GeoJSON)
<https://tools.ietf.org/html/rfc7946>`_.
bbox (List[float] or None): Bounding Box of the asset represented by this item using
either 2D or 3D geometries. The length of the array is 2*n where n is the
number of dimensions. Could also be None in the case of a null geometry.
datetime (datetime or None): Datetime associated with this item. If None,
the start_datetime and end_datetime in the common_metadata
will supply the datetime range of the Item.
properties (dict): A dictionary of additional metadata for the item.
stac_extensions (List[str] or None): Optional list of extensions the Item implements.
collection (Collection or None): Collection that this item is a part of.
links (List[Link]): A list of :class:`~pystac.Link` objects representing
all links associated with this STACObject.
assets (Dict[str, Asset]): Dictionary of asset objects that can be downloaded,
each with a unique key.
collection_id (str or None): The Collection ID that this item belongs to, if any.
extra_fields (dict or None): Extra fields that are part of the top-level JSON properties
of the Item.
###Markdown
Using [rasterio](https://rasterio.readthedocs.io/en/stable/), we can pull out the bounding box of the image to use for the image metadata. If the image contained a NoData border, we would ideally pull out the footprint and save it as the geometry; in this case, we're working with a small chip that most likely has no NoData values.
###Code
import rasterio
from shapely.geometry import Polygon, mapping
def get_bbox_and_footprint(raster_uri):
with rasterio.open(raster_uri) as ds:
bounds = ds.bounds
bbox = [bounds.left, bounds.bottom, bounds.right, bounds.top]
footprint = Polygon([
[bounds.left, bounds.bottom],
[bounds.left, bounds.top],
[bounds.right, bounds.top],
[bounds.right, bounds.bottom]
])
return (bbox, mapping(footprint))
bbox, footprint = get_bbox_and_footprint(img_path)
print(bbox)
print(footprint)
###Output
[37.6616853489879, 55.73478197572927, 37.66573047610874, 55.73882710285011]
{'type': 'Polygon', 'coordinates': (((37.6616853489879, 55.73478197572927), (37.6616853489879, 55.73882710285011), (37.66573047610874, 55.73882710285011), (37.66573047610874, 55.73478197572927), (37.6616853489879, 55.73478197572927)),)}
###Markdown
We're also using `datetime.utcnow()` to supply the required datetime property for our Item. Since this is a required property, you might often find yourself making up a time to fill in if you don't know the exact capture time.
###Code
from datetime import datetime
item = pystac.Item(id='local-image',
geometry=footprint,
bbox=bbox,
datetime=datetime.utcnow(),
properties={})
###Output
_____no_output_____
###Markdown
We haven't added it to a catalog yet, so it's parent isn't set. Once we add it to the catalog, we can see it correctly links to it's parent.
###Code
item.get_parent() is None
catalog.add_item(item)
item.get_parent()
###Output
_____no_output_____
###Markdown
`describe()` is a useful method on `Catalog` - but be careful when using it on large catalogs, as it will walk the entire tree of the STAC.
###Code
catalog.describe()
###Output
* <Catalog id=test-catalog>
* <Item id=local-image>
###Markdown
Adding AssetsWe've created an Item, but there aren't any assets associated with it. Let's create one:
###Code
print(pystac.Asset.__doc__)
item.add_asset(
key='image',
asset=pystac.Asset(
href=img_path,
media_type=pystac.MediaType.GEOTIFF
)
)
###Output
_____no_output_____
###Markdown
At any time we can call `to_dict()` on STAC objects to see how the STAC JSON is shaping up. Notice the asset is now set:
###Code
import json
print(json.dumps(item.to_dict(), indent=4))
###Output
{
"type": "Feature",
"stac_version": "1.0.0-beta.2",
"id": "local-image",
"properties": {
"datetime": "2020-08-03T03:47:48.786929Z"
},
"geometry": {
"type": "Polygon",
"coordinates": [
[
[
37.6616853489879,
55.73478197572927
],
[
37.6616853489879,
55.73882710285011
],
[
37.66573047610874,
55.73882710285011
],
[
37.66573047610874,
55.73478197572927
],
[
37.6616853489879,
55.73478197572927
]
]
]
},
"links": [
{
"rel": "root",
"href": null,
"type": "application/json"
},
{
"rel": "parent",
"href": null,
"type": "application/json"
}
],
"assets": {
"image": {
"href": "/var/folders/sv/zr8j0t4j1f726nhlt3vb8c300000gn/T/tmpt1wuelid/image.tif",
"type": "image/tiff; application=geotiff"
}
},
"bbox": [
37.6616853489879,
55.73478197572927,
37.66573047610874,
55.73882710285011
]
}
###Markdown
Note that the link `href` properties are `null`. This is OK, as we're working with the STAC in memory. Next, we'll talk about writing the catalog out, and how to set those HREFs. Saving the catalog As the JSON above indicates, there's no HREFs set on these in-memory items. PySTAC uses the `self` link on STAC objects to track where the file lives. Because we haven't set them, they evaluate to `None`:
###Code
print(catalog.get_self_href() is None)
print(item.get_self_href() is None)
###Output
True
True
###Markdown
In order to set them, we can use `normalize_hrefs`. This method will create a normalized set of HREFs for each STAC object in the catalog, according to the [best practices document](https://github.com/radiantearth/stac-spec/blob/v0.8.1/best-practices.mdcatalog-layout)'s recommendations on how to lay out a catalog.
###Code
catalog.normalize_hrefs(os.path.join(tmp_dir.name, 'stac'))
###Output
_____no_output_____
###Markdown
Now that we've normalized to a root directory (the temporary directory), we see that the `self` links are set:
###Code
print(catalog.get_self_href())
print(item.get_self_href())
###Output
/var/folders/sv/zr8j0t4j1f726nhlt3vb8c300000gn/T/tmpt1wuelid/stac/catalog.json
/var/folders/sv/zr8j0t4j1f726nhlt3vb8c300000gn/T/tmpt1wuelid/stac/local-image/local-image.json
###Markdown
We can now call `save` on the catalog, which will recursively save all the STAC objects to their respective self HREFs.Save requires a `CatalogType` to be set. You can review the [API docs](https://pystac.readthedocs.io/en/stable/api.htmlcatalogtype) on `CatalogType` to see what each type means (unfortunately `help` doesn't show docstrings for attributes).
###Code
catalog.save(catalog_type=pystac.CatalogType.SELF_CONTAINED)
!ls {tmp_dir.name}/stac/*
with open(catalog.get_self_href()) as f:
print(f.read())
with open(item.get_self_href()) as f:
print(f.read())
###Output
{
"type": "Feature",
"stac_version": "1.0.0-beta.2",
"id": "local-image",
"properties": {
"datetime": "2020-08-03T03:47:48.786929Z"
},
"geometry": {
"type": "Polygon",
"coordinates": [
[
[
37.6616853489879,
55.73478197572927
],
[
37.6616853489879,
55.73882710285011
],
[
37.66573047610874,
55.73882710285011
],
[
37.66573047610874,
55.73478197572927
],
[
37.6616853489879,
55.73478197572927
]
]
]
},
"links": [
{
"rel": "root",
"href": "../catalog.json",
"type": "application/json"
},
{
"rel": "parent",
"href": "../catalog.json",
"type": "application/json"
}
],
"assets": {
"image": {
"href": "/var/folders/sv/zr8j0t4j1f726nhlt3vb8c300000gn/T/tmpt1wuelid/image.tif",
"type": "image/tiff; application=geotiff"
}
},
"bbox": [
37.6616853489879,
55.73478197572927,
37.66573047610874,
55.73882710285011
]
}
###Markdown
As you can see, all links are saved with relative paths. That's because we used `catalog_type=CatalogType.SELF_CONTAINED`. If we save an Absolute Published catalog, we'll see absolute paths:
###Code
catalog.save(catalog_type=pystac.CatalogType.ABSOLUTE_PUBLISHED)
###Output
_____no_output_____
###Markdown
Now the links included in the STAC item are all absolute:
###Code
with open(item.get_self_href()) as f:
print(f.read())
###Output
{
"type": "Feature",
"stac_version": "1.0.0-beta.2",
"id": "local-image",
"properties": {
"datetime": "2020-08-03T03:47:48.786929Z"
},
"geometry": {
"type": "Polygon",
"coordinates": [
[
[
37.6616853489879,
55.73478197572927
],
[
37.6616853489879,
55.73882710285011
],
[
37.66573047610874,
55.73882710285011
],
[
37.66573047610874,
55.73478197572927
],
[
37.6616853489879,
55.73478197572927
]
]
]
},
"links": [
{
"rel": "self",
"href": "/var/folders/sv/zr8j0t4j1f726nhlt3vb8c300000gn/T/tmpt1wuelid/stac/local-image/local-image.json",
"type": "application/json"
},
{
"rel": "root",
"href": "/var/folders/sv/zr8j0t4j1f726nhlt3vb8c300000gn/T/tmpt1wuelid/stac/catalog.json",
"type": "application/json"
},
{
"rel": "parent",
"href": "/var/folders/sv/zr8j0t4j1f726nhlt3vb8c300000gn/T/tmpt1wuelid/stac/catalog.json",
"type": "application/json"
}
],
"assets": {
"image": {
"href": "/var/folders/sv/zr8j0t4j1f726nhlt3vb8c300000gn/T/tmpt1wuelid/image.tif",
"type": "image/tiff; application=geotiff"
}
},
"bbox": [
37.6616853489879,
55.73478197572927,
37.66573047610874,
55.73882710285011
]
}
###Markdown
Notice that the Asset HREF is absolute in both cases. We can make the Asset HREF relative to the STAC Item by using `.make_all_asset_hrefs_relative()`:
###Code
catalog.make_all_asset_hrefs_relative()
catalog.save(catalog_type=pystac.CatalogType.SELF_CONTAINED)
with open(item.get_self_href()) as f:
print(f.read())
###Output
{
"type": "Feature",
"stac_version": "1.0.0-beta.2",
"id": "local-image",
"properties": {
"datetime": "2020-08-03T03:47:48.786929Z"
},
"geometry": {
"type": "Polygon",
"coordinates": [
[
[
37.6616853489879,
55.73478197572927
],
[
37.6616853489879,
55.73882710285011
],
[
37.66573047610874,
55.73882710285011
],
[
37.66573047610874,
55.73478197572927
],
[
37.6616853489879,
55.73478197572927
]
]
]
},
"links": [
{
"rel": "root",
"href": "../catalog.json",
"type": "application/json"
},
{
"rel": "parent",
"href": "../catalog.json",
"type": "application/json"
}
],
"assets": {
"image": {
"href": "../../image.tif",
"type": "image/tiff; application=geotiff"
}
},
"bbox": [
37.6616853489879,
55.73478197572927,
37.66573047610874,
55.73882710285011
]
}
###Markdown
Creating an Item that implements the EO extensionIn the code above our item only implemented the core STAC Item specification. With [extensions](https://github.com/radiantearth/stac-spec/tree/v0.9.0/extensions) we can record more information and add additional functionality to the Item. Given that we know this is a World View 3 image that has earth observation data, we can enable the [eo extension](https://github.com/radiantearth/stac-spec/tree/v0.8.1/extensions/eo) to add band information. To add eo information to an item we'll need to specify some more data. First, let's define the bands of World View 3:
###Code
from pystac.extensions.eo import Band
# From: https://www.spaceimagingme.com/downloads/sensors/datasheets/DG_WorldView3_DS_2014.pdf
wv3_bands = [Band.create(name='Coastal', description='Coastal: 400 - 450 nm', common_name='coastal'),
Band.create(name='Blue', description='Blue: 450 - 510 nm', common_name='blue'),
Band.create(name='Green', description='Green: 510 - 580 nm', common_name='green'),
Band.create(name='Yellow', description='Yellow: 585 - 625 nm', common_name='yellow'),
Band.create(name='Red', description='Red: 630 - 690 nm', common_name='red'),
Band.create(name='Red Edge', description='Red Edge: 705 - 745 nm', common_name='rededge'),
Band.create(name='Near-IR1', description='Near-IR1: 770 - 895 nm', common_name='nir08'),
Band.create(name='Near-IR2', description='Near-IR2: 860 - 1040 nm', common_name='nir09')]
###Output
_____no_output_____
###Markdown
Notice that we used the `.create` method create new band information. We can now create an Item, enable the eo extension, add the band information and add it to our catalog:
###Code
eo_item = pystac.Item(id='local-image-eo',
geometry=footprint,
bbox=bbox,
datetime=datetime.utcnow(),
properties={})
eo_item.ext.enable(pystac.Extensions.EO)
eo_item.ext.eo.apply(bands=wv3_bands)
###Output
_____no_output_____
###Markdown
There are also [common metadata](https://github.com/radiantearth/stac-spec/blob/v0.9.0/item-spec/common-metadata.md) fields that we can use to capture additional information about the WorldView 3 imagery:
###Code
eo_item.common_metadata.platform = "Maxar"
eo_item.common_metadata.instrument="WorldView3"
eo_item.common_metadata.gsd = 0.3
eo_item
###Output
_____no_output_____
###Markdown
We can use the eo extension to add bands to the assets we add to the item:
###Code
eo_ext = eo_item.ext.eo
help(eo_ext.set_bands)
#eo_item.add_asset(key='image', asset=)
asset = pystac.Asset(href=img_path,
media_type=pystac.MediaType.GEOTIFF)
eo_ext.set_bands(wv3_bands, asset)
eo_item.add_asset("image", asset)
###Output
_____no_output_____
###Markdown
If we look at the asset's JSON representation, we can see the appropriate band indexes are set:
###Code
asset.to_dict()
###Output
_____no_output_____
###Markdown
Let's clear the in-memory catalog, add the EO item, and save to a new STAC:
###Code
catalog.clear_items()
list(catalog.get_items())
catalog.add_item(eo_item)
list(catalog.get_items())
catalog.normalize_and_save(root_href=os.path.join(tmp_dir.name, 'stac-eo'),
catalog_type=pystac.CatalogType.SELF_CONTAINED)
###Output
_____no_output_____
###Markdown
Now, if we read the catalog from the filesystem, PySTAC recognizes that the item implements eo and so use it's functionality, e.g. getting the bands off the asset:
###Code
catalog2 = pystac.read_file(os.path.join(tmp_dir.name, 'stac-eo', 'catalog.json'))
list(catalog2.get_items())
item = next(catalog2.get_all_items())
item.ext.implements('eo')
item.ext.eo.get_bands(item.assets['image'])
###Output
_____no_output_____
###Markdown
CollectionsCollections are a subtype of Catalog that have some additional properties to make them more searchable. They also can define common properties so that items in the collection don't have to duplicate common data for each item. Let's create a collection to hold common properties between two images from the Spacenet 5 challenge.First we'll get another image, and it's bbox and footprint:
###Code
url2 = ('https://spacenet-dataset.s3.amazonaws.com/'
'spacenet/SN5_roads/train/AOI_7_Moscow/MS/'
'SN5_roads_train_AOI_7_Moscow_MS_chip997.tif')
img_path2 = os.path.join(tmp_dir.name, 'image.tif')
urllib.request.urlretrieve(url2, img_path2)
bbox2, footprint2 = get_bbox_and_footprint(img_path2)
###Output
_____no_output_____
###Markdown
We can take a look at the pydocs for Collection to see what information we need to supply in order to satisfy the spec.
###Code
print(pystac.Collection.__doc__)
###Output
A Collection extends the Catalog spec with additional metadata that helps
enable discovery.
Args:
id (str): Identifier for the collection. Must be unique within the STAC.
description (str): Detailed multi-line description to fully explain the collection.
`CommonMark 0.28 syntax <http://commonmark.org/>`_ MAY be used for rich text
representation.
extent (Extent): Spatial and temporal extents that describe the bounds of
all items contained within this Collection.
title (str or None): Optional short descriptive one-line title for the collection.
stac_extensions (List[str]): Optional list of extensions the Collection implements.
href (str or None): Optional HREF for this collection, which be set as the collection's
self link's HREF.
license (str): Collection's license(s) as a `SPDX License identifier
<https://spdx.org/licenses/>`_, `various`, or `proprietary`. If collection includes
data with multiple different licenses, use `various` and add a link for each.
Defaults to 'proprietary'.
keywords (List[str]): Optional list of keywords describing the collection.
providers (List[Provider]): Optional list of providers of this Collection.
properties (dict): Optional dict of common fields across referenced items.
summaries (dict): An optional map of property summaries,
either a set of values or statistics such as a range.
extra_fields (dict or None): Extra fields that are part of the top-level JSON properties
of the Collection.
Attributes:
id (str): Identifier for the collection.
description (str): Detailed multi-line description to fully explain the collection.
extent (Extent): Spatial and temporal extents that describe the bounds of
all items contained within this Collection.
title (str or None): Optional short descriptive one-line title for the collection.
stac_extensions (List[str]): Optional list of extensions the Collection implements.
keywords (List[str] or None): Optional list of keywords describing the collection.
providers (List[Provider] or None): Optional list of providers of this Collection.
properties (dict or None): Optional dict of common fields across referenced items.
summaries (dict or None): An optional map of property summaries,
either a set of values or statistics such as a range.
links (List[Link]): A list of :class:`~pystac.Link` objects representing
all links associated with this Collection.
extra_fields (dict or None): Extra fields that are part of the top-level JSON properties
of the Catalog.
###Markdown
Beyond what a Catalog reqiures, a Collection requires a license, and an `Extent` that describes the range of space and time that the items it hold occupy.
###Code
print(pystac.Extent.__doc__)
###Output
Describes the spatio-temporal extents of a Collection.
Args:
spatial (SpatialExtent): Potential spatial extent covered by the collection.
temporal (TemporalExtent): Potential temporal extent covered by the collection.
Attributes:
spatial (SpatialExtent): Potential spatial extent covered by the collection.
temporal (TemporalExtent): Potential temporal extent covered by the collection.
###Markdown
An Extent is comprised of a SpatialExtent and a TemporalExtent. These hold one or more bounding boxes and time intervals, respectively, that completely cover the items contained in the collections.Let's start with creating two new items - these will be core Items. We can set these items to implement the `eo` extension by specifying them in the `stac_extensions`.
###Code
collection_item = pystac.Item(id='local-image-col-1',
geometry=footprint,
bbox=bbox,
datetime=datetime.utcnow(),
properties={},
stac_extensions=[pystac.Extensions.EO])
collection_item.common_metadata.gsd = 0.3
collection_item.common_metadata.platform = 'Maxar'
collection_item.common_metadata.instruments = ['WorldView3']
asset = pystac.Asset(href=img_path,
media_type=pystac.MediaType.GEOTIFF)
collection_item.ext.eo.set_bands(wv3_bands, asset)
collection_item.add_asset('image', asset)
collection_item2 = pystac.Item(id='local-image-col-2',
geometry=footprint2,
bbox=bbox2,
datetime=datetime.utcnow(),
properties={},
stac_extensions=[pystac.Extensions.EO])
collection_item2.common_metadata.gsd = 0.3
collection_item2.common_metadata.platform = 'Maxar'
collection_item2.common_metadata.instruments = ['WorldView3']
asset2 = pystac.Asset(href=img_path,
media_type=pystac.MediaType.GEOTIFF)
collection_item2.ext.eo.set_bands([
band for band in wv3_bands if band.name in ["Red", "Green", "Blue"]
], asset2)
collection_item2.add_asset('image', asset2)
###Output
_____no_output_____
###Markdown
We can use our two items' metadata to find out what the proper bounds are:
###Code
from shapely.geometry import shape
unioned_footprint = shape(footprint).union(shape(footprint2))
collection_bbox = list(unioned_footprint.bounds)
spatial_extent = pystac.SpatialExtent(bboxes=[collection_bbox])
collection_interval = sorted([collection_item.datetime, collection_item2.datetime])
temporal_extent = pystac.TemporalExtent(intervals=[collection_interval])
collection_extent = pystac.Extent(spatial=spatial_extent, temporal=temporal_extent)
collection = pystac.Collection(id='wv3-images',
description='Spacenet 5 images over Moscow',
extent=collection_extent,
license='CC-BY-SA-4.0')
###Output
_____no_output_____
###Markdown
Now if we add our items to our Collection, and our Collection to our Catalog, we get the following STAC that can be saved:
###Code
collection.add_items([collection_item, collection_item2])
catalog.clear_items()
catalog.clear_children()
catalog.add_child(collection)
catalog.describe()
catalog.normalize_and_save(root_href=os.path.join(tmp_dir.name, 'stac-collection'),
catalog_type=pystac.CatalogType.SELF_CONTAINED)
###Output
_____no_output_____
###Markdown
CleanupDon't forget to clean up the temporary directory!
###Code
tmp_dir.cleanup()
###Output
_____no_output_____
###Markdown
Creating a STAC of imagery from Spacenet 5 data Now, let's take what we've learned and create a Catalog with more data in it. Allowing PySTAC to read from AWS S3PySTAC aims to be virtually zero-dependency (notwithstanding the why-isn't-this-in-stdlib datetime-util), so it doesn't have the ability to read from or write to anything but the local file system. However, we can hook into PySTAC's IO in the following way. Learn more about how to use STAC_IO in the [documentation on the topic](https://pystac.readthedocs.io/en/latest/concepts.htmlusing-stac-io):
###Code
from urllib.parse import urlparse
import boto3
from pystac import STAC_IO
def my_read_method(uri):
parsed = urlparse(uri)
if parsed.scheme == 's3':
bucket = parsed.netloc
key = parsed.path[1:]
s3 = boto3.resource('s3')
obj = s3.Object(bucket, key)
return obj.get()['Body'].read().decode('utf-8')
else:
return STAC_IO.default_read_text_method(uri)
def my_write_method(uri, txt):
parsed = urlparse(uri)
if parsed.scheme == 's3':
bucket = parsed.netloc
key = parsed.path[1:]
s3 = boto3.resource("s3")
s3.Object(bucket, key).put(Body=txt)
else:
STAC_IO.default_write_text_method(uri, txt)
STAC_IO.read_text_method = my_read_method
STAC_IO.write_text_method = my_write_method
###Output
_____no_output_____
###Markdown
We'll need a utility to list keys for reading the lists of files from S3:
###Code
# From https://alexwlchan.net/2017/07/listing-s3-keys/
def get_s3_keys(bucket, prefix):
"""Generate all the keys in an S3 bucket."""
s3 = boto3.client('s3')
kwargs = {'Bucket': bucket, 'Prefix': prefix}
while True:
resp = s3.list_objects_v2(**kwargs)
for obj in resp['Contents']:
yield obj['Key']
try:
kwargs['ContinuationToken'] = resp['NextContinuationToken']
except KeyError:
break
###Output
_____no_output_____
###Markdown
Let's make a STAC of imagery over Moscow as part of the Spacenet 5 challenge. As a first step, we can list out the imagery and extract IDs from each of the chips.
###Code
moscow_training_chip_uris = list(get_s3_keys(bucket='spacenet-dataset',
prefix='spacenet/SN5_roads/train/AOI_7_Moscow/PS-MS'))
import re
chip_id_to_data = {}
def get_chip_id(uri):
return re.search(r'.*\_chip(\d+)\.', uri).group(1)
for uri in moscow_training_chip_uris:
chip_id = get_chip_id(uri)
chip_id_to_data[chip_id] = { 'img': 's3://spacenet-dataset/{}'.format(uri) }
###Output
_____no_output_____
###Markdown
For this tutorial, we'll only take a subset of the data.
###Code
chip_id_to_data = dict(list(chip_id_to_data.items())[:10])
chip_id_to_data
###Output
_____no_output_____
###Markdown
Let's turn each of those chips into a STAC Item that represents the image.
###Code
chip_id_to_items = {}
###Output
_____no_output_____
###Markdown
We'll create core `Item`s for our imagery, but mark them with the `eo` extension as we did above, and store the `eo` data in a `Collection`.Note that the image CRS is in WGS:84 (Lat/Lng). If it wasn't, we'd have to reproject the footprint to WGS:84 in order to be compliant with the spec (which can easily be done with [pyproj](https://github.com/pyproj4/pyproj)).Here we're taking advantage of `rasterio`'s ability to read S3 URIs, which only grabs the GeoTIFF metadata and does not pull the whole file down.
###Code
for chip_id in chip_id_to_data:
img_uri = chip_id_to_data[chip_id]['img']
print('Processing {}'.format(img_uri))
bbox, footprint = get_bbox_and_footprint(img_uri)
item = pystac.Item(id='img_{}'.format(chip_id),
geometry=footprint,
bbox=bbox,
datetime=datetime.utcnow(),
properties={},
stac_extensions=[pystac.Extensions.EO])
item.common_metadata.gsd = 0.3
item.common_metadata.platform = 'Maxar'
item.common_metadata.instruments = ['WorldView3']
item.ext.eo.bands = wv3_bands
asset = pystac.Asset(href=img_uri,
media_type=pystac.MediaType.COG)
item.ext.eo.set_bands(wv3_bands, asset)
item.add_asset(key='ps-ms', asset=asset)
chip_id_to_items[chip_id] = item
###Output
Processing s3://spacenet-dataset/spacenet/SN5_roads/train/AOI_7_Moscow/PS-MS/SN5_roads_train_AOI_7_Moscow_PS-MS_chip0.tif
Processing s3://spacenet-dataset/spacenet/SN5_roads/train/AOI_7_Moscow/PS-MS/SN5_roads_train_AOI_7_Moscow_PS-MS_chip1.tif
Processing s3://spacenet-dataset/spacenet/SN5_roads/train/AOI_7_Moscow/PS-MS/SN5_roads_train_AOI_7_Moscow_PS-MS_chip10.tif
Processing s3://spacenet-dataset/spacenet/SN5_roads/train/AOI_7_Moscow/PS-MS/SN5_roads_train_AOI_7_Moscow_PS-MS_chip100.tif
Processing s3://spacenet-dataset/spacenet/SN5_roads/train/AOI_7_Moscow/PS-MS/SN5_roads_train_AOI_7_Moscow_PS-MS_chip1000.tif
Processing s3://spacenet-dataset/spacenet/SN5_roads/train/AOI_7_Moscow/PS-MS/SN5_roads_train_AOI_7_Moscow_PS-MS_chip1001.tif
Processing s3://spacenet-dataset/spacenet/SN5_roads/train/AOI_7_Moscow/PS-MS/SN5_roads_train_AOI_7_Moscow_PS-MS_chip1002.tif
Processing s3://spacenet-dataset/spacenet/SN5_roads/train/AOI_7_Moscow/PS-MS/SN5_roads_train_AOI_7_Moscow_PS-MS_chip1003.tif
###Markdown
How to create STAC Catalogs

STAC Community Sprint, Arlington, November 7th 2019

This notebook runs through some of the basics of using PySTAC to create a static STAC. It was part of a 30 minute presentation at the [community STAC sprint](https://github.com/radiantearth/community-sprints/tree/master/11052019-arlignton-va) in Arlington, VA in November 2019. This tutorial will require the `boto3`, `rasterio`, and `shapely` libraries:
###Code
!pip install boto3
!pip install rasterio
!pip install shapely
###Output
Requirement already satisfied: boto3 in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (1.10.8)
Requirement already satisfied: botocore<1.14.0,>=1.13.8 in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (from boto3) (1.13.8)
Requirement already satisfied: s3transfer<0.3.0,>=0.2.0 in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (from boto3) (0.2.1)
Requirement already satisfied: jmespath<1.0.0,>=0.7.1 in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (from boto3) (0.9.4)
Requirement already satisfied: urllib3<1.26,>=1.20; python_version >= "3.4" in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (from botocore<1.14.0,>=1.13.8->boto3) (1.25.6)
Requirement already satisfied: docutils<0.16,>=0.10 in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (from botocore<1.14.0,>=1.13.8->boto3) (0.15.2)
Requirement already satisfied: python-dateutil<3.0.0,>=2.1; python_version >= "2.7" in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (from botocore<1.14.0,>=1.13.8->boto3) (2.8.1)
Requirement already satisfied: six>=1.5 in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (from python-dateutil<3.0.0,>=2.1; python_version >= "2.7"->botocore<1.14.0,>=1.13.8->boto3) (1.12.0)
WARNING: You are using pip version 20.1.1; however, version 20.2 is available.
You should consider upgrading via the '/Users/rob/proj/stac/pystac/venv/bin/python -m pip install --upgrade pip' command.
Requirement already satisfied: rasterio in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (1.1.0)
Requirement already satisfied: numpy in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (from rasterio) (1.17.3)
Requirement already satisfied: snuggs>=1.4.1 in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (from rasterio) (1.4.7)
Requirement already satisfied: click<8,>=4.0 in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (from rasterio) (7.0)
Requirement already satisfied: click-plugins in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (from rasterio) (1.1.1)
Requirement already satisfied: attrs in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (from rasterio) (19.3.0)
Requirement already satisfied: cligj>=0.5 in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (from rasterio) (0.5.0)
Requirement already satisfied: affine in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (from rasterio) (2.3.0)
Requirement already satisfied: pyparsing>=2.1.6 in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (from snuggs>=1.4.1->rasterio) (2.4.2)
WARNING: You are using pip version 20.1.1; however, version 20.2 is available.
You should consider upgrading via the '/Users/rob/proj/stac/pystac/venv/bin/python -m pip install --upgrade pip' command.
Requirement already satisfied: shapely in /Users/rob/proj/stac/pystac/venv/lib/python3.6/site-packages (1.6.4.post2)
WARNING: You are using pip version 20.1.1; however, version 20.2 is available.
You should consider upgrading via the '/Users/rob/proj/stac/pystac/venv/bin/python -m pip install --upgrade pip' command.
###Markdown
We can import pystac and access most of the functionality we need with the single import:
###Code
import pystac
###Output
_____no_output_____
###Markdown
Creating a catalog from a local file

To give us some material to work with, let's download a single image from the [Spacenet 5 challenge](https://www.topcoder.com/challenges/30099956). We'll use a temporary directory to save off our single-item STAC.
###Code
import os
import urllib.request
from tempfile import TemporaryDirectory
tmp_dir = TemporaryDirectory()
img_path = os.path.join(tmp_dir.name, 'image.tif')
url = ('http://spacenet-dataset.s3.amazonaws.com/'
'spacenet/SN5_roads/train/AOI_7_Moscow/MS/'
'SN5_roads_train_AOI_7_Moscow_MS_chip996.tif')
urllib.request.urlretrieve(url, img_path)
###Output
_____no_output_____
###Markdown
We want to create a Catalog. Let's check the pydocs for `Catalog` to see what information we'll need. (We use `__doc__` instead of `help()` here to avoid printing out all the docs for the class.)
###Code
print(pystac.Catalog.__doc__)
###Output
A PySTAC Catalog represents a STAC catalog in memory.
A Catalog is a :class:`~pystac.STACObject` that may contain children,
which are instances of :class:`~pystac.Catalog` or :class:`~pystac.Collection`,
as well as :class:`~pystac.Item` s.
Args:
id (str): Identifier for the catalog. Must be unique within the STAC.
description (str): Detailed multi-line description to fully explain the catalog.
`CommonMark 0.28 syntax <http://commonmark.org/>`_ MAY be used for rich text
representation.
title (str or None): Optional short descriptive one-line title for the catalog.
stac_extensions (List[str]): Optional list of extensions the Catalog implements.
href (str or None): Optional HREF for this catalog, which will be set as the catalog's
self link's HREF.
Attributes:
id (str): Identifier for the catalog.
description (str): Detailed multi-line description to fully explain the catalog.
title (str or None): Optional short descriptive one-line title for the catalog.
stac_extensions (List[str] or None): Optional list of extensions the Catalog implements.
extra_fields (dict or None): Extra fields that are part of the top-level JSON properties
of the Catalog.
links (List[Link]): A list of :class:`~pystac.Link` objects representing
all links associated with this Catalog.
###Markdown
Let's just give an ID and a description. We don't have to worry about the HREF right now; that will be set later.
###Code
catalog = pystac.Catalog(id='test-catalog', description='Tutorial catalog.')
###Output
_____no_output_____
###Markdown
There are no children or items in the catalog, since we haven't added anything yet.
###Code
print(list(catalog.get_children()))
print(list(catalog.get_items()))
###Output
[]
[]
###Markdown
We'll now create an Item to represent the image. Check the pydocs to see what you need to supply:
###Code
print(pystac.Item.__doc__)
###Output
An Item is the core granular entity in a STAC, containing the core metadata
that enables any client to search or crawl online catalogs of spatial 'assets' -
satellite imagery, derived data, DEM's, etc.
Args:
id (str): Provider identifier. Must be unique within the STAC.
geometry (dict): Defines the full footprint of the asset represented by this item,
formatted according to `RFC 7946, section 3.1 (GeoJSON)
<https://tools.ietf.org/html/rfc7946>`_.
bbox (List[float] or None): Bounding Box of the asset represented by this item using
either 2D or 3D geometries. The length of the array must be 2*n where n is the
number of dimensions. Could also be None in the case of a null geometry.
datetime (datetime or None): Datetime associated with this item. If None,
a start_datetime and end_datetime must be supplied in the properties.
properties (dict): A dictionary of additional metadata for the item.
stac_extensions (List[str]): Optional list of extensions the Item implements.
href (str or None): Optional HREF for this item, which will be set as the item's
self link's HREF.
collection (Collection or str): The Collection or Collection ID that this item
belongs to.
extra_fields (dict or None): Extra fields that are part of the top-level JSON properties
of the Item.
Attributes:
id (str): Provider identifier. Unique within the STAC.
geometry (dict): Defines the full footprint of the asset represented by this item,
formatted according to `RFC 7946, section 3.1 (GeoJSON)
<https://tools.ietf.org/html/rfc7946>`_.
bbox (List[float] or None): Bounding Box of the asset represented by this item using
either 2D or 3D geometries. The length of the array is 2*n where n is the
number of dimensions. Could also be None in the case of a null geometry.
datetime (datetime or None): Datetime associated with this item. If None,
the start_datetime and end_datetime in the common_metadata
will supply the datetime range of the Item.
properties (dict): A dictionary of additional metadata for the item.
stac_extensions (List[str] or None): Optional list of extensions the Item implements.
collection (Collection or None): Collection that this item is a part of.
links (List[Link]): A list of :class:`~pystac.Link` objects representing
all links associated with this STACObject.
assets (Dict[str, Asset]): Dictionary of asset objects that can be downloaded,
each with a unique key.
collection_id (str or None): The Collection ID that this item belongs to, if any.
extra_fields (dict or None): Extra fields that are part of the top-level JSON properties
of the Item.
###Markdown
Using [rasterio](https://rasterio.readthedocs.io/en/stable/), we can pull out the bounding box of the image to use for the image metadata. If the image contained a NoData border, we would ideally pull out the footprint of the valid data and save it as the geometry (a sketch of that approach follows the next cell); in this case, we're working with a small chip that most likely has no NoData values.
###Code
import rasterio
from shapely.geometry import Polygon, mapping
def get_bbox_and_footprint(raster_uri):
with rasterio.open(raster_uri) as ds:
bounds = ds.bounds
bbox = [bounds.left, bounds.bottom, bounds.right, bounds.top]
footprint = Polygon([
[bounds.left, bounds.bottom],
[bounds.left, bounds.top],
[bounds.right, bounds.top],
[bounds.right, bounds.bottom]
])
return (bbox, mapping(footprint))
bbox, footprint = get_bbox_and_footprint(img_path)
print(bbox)
print(footprint)
###Output
[37.6616853489879, 55.73478197572927, 37.66573047610874, 55.73882710285011]
{'type': 'Polygon', 'coordinates': (((37.6616853489879, 55.73478197572927), (37.6616853489879, 55.73882710285011), (37.66573047610874, 55.73882710285011), (37.66573047610874, 55.73478197572927), (37.6616853489879, 55.73478197572927)),)}
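###Markdown
An aside on footprints: if a chip did have a NoData border, the valid-data footprint could be vectorized from the dataset mask instead of using the full bounds. The cell below is an addition to the original tutorial, a minimal sketch assuming the raster's CRS is already WGS 84 (as with these chips); `get_data_footprint` is a hypothetical helper name, not part of the tutorial.
###Code
import rasterio
from rasterio.features import shapes
from shapely.geometry import shape, mapping
from shapely.ops import unary_union

def get_data_footprint(raster_uri):
    with rasterio.open(raster_uri) as ds:
        # dataset_mask() is 255 where pixels are valid and 0 where NoData
        mask = ds.dataset_mask()
        polygons = [shape(geom)
                    for geom, value in shapes(mask, transform=ds.transform)
                    if value == 255]
    # Union the valid-data polygons into a single footprint
    footprint = unary_union(polygons)
    return (list(footprint.bounds), mapping(footprint))
###Output
_____no_output_____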
###Markdown
We're also using `datetime.utcnow()` to supply the required datetime property for our Item. Since this is a required property, you might often find yourself making up a time to fill in if you don't know the exact capture time.
###Code
from datetime import datetime
item = pystac.Item(id='local-image',
geometry=footprint,
bbox=bbox,
datetime=datetime.utcnow(),
properties={})
###Output
_____no_output_____
###Markdown
We haven't added it to a catalog yet, so its parent isn't set. Once we add it to the catalog, we can see that it correctly links to its parent.
###Code
item.get_parent() is None
catalog.add_item(item)
item.get_parent()
###Output
_____no_output_____
###Markdown
`describe()` is a useful method on `Catalog` - but be careful when using it on large catalogs, as it will walk the entire tree of the STAC.
###Code
catalog.describe()
###Output
* <Catalog id=test-catalog>
* <Item id=local-image>
###Markdown
Adding Assets

We've created an Item, but there aren't any assets associated with it. Let's create one:
###Code
print(pystac.Asset.__doc__)
item.add_asset(
key='image',
asset=pystac.Asset(
href=img_path,
media_type=pystac.MediaType.GEOTIFF
)
)
###Output
_____no_output_____
###Markdown
At any time we can call `to_dict()` on STAC objects to see how the STAC JSON is shaping up. Notice the asset is now set:
###Code
import json
print(json.dumps(item.to_dict(), indent=4))
###Output
{
"type": "Feature",
"stac_version": "1.0.0-beta.2",
"id": "local-image",
"properties": {
"datetime": "2020-08-03T03:47:48.786929Z"
},
"geometry": {
"type": "Polygon",
"coordinates": [
[
[
37.6616853489879,
55.73478197572927
],
[
37.6616853489879,
55.73882710285011
],
[
37.66573047610874,
55.73882710285011
],
[
37.66573047610874,
55.73478197572927
],
[
37.6616853489879,
55.73478197572927
]
]
]
},
"links": [
{
"rel": "root",
"href": null,
"type": "application/json"
},
{
"rel": "parent",
"href": null,
"type": "application/json"
}
],
"assets": {
"image": {
"href": "/var/folders/sv/zr8j0t4j1f726nhlt3vb8c300000gn/T/tmpt1wuelid/image.tif",
"type": "image/tiff; application=geotiff"
}
},
"bbox": [
37.6616853489879,
55.73478197572927,
37.66573047610874,
55.73882710285011
]
}
###Markdown
Note that the link `href` properties are `null`. This is OK, as we're working with the STAC in memory. Next, we'll talk about writing the catalog out, and how to set those HREFs.

Saving the catalog

As the JSON above indicates, there are no HREFs set on these in-memory items. PySTAC uses the `self` link on STAC objects to track where the file lives. Because we haven't set them, they evaluate to `None`:
###Code
print(catalog.get_self_href() is None)
print(item.get_self_href() is None)
###Output
True
True
###Markdown
In order to set them, we can use `normalize_hrefs`. This method will create a normalized set of HREFs for each STAC object in the catalog, according to the [best practices document](https://github.com/radiantearth/stac-spec/blob/v0.8.1/best-practices.md#catalog-layout)'s recommendations on how to lay out a catalog.
###Code
catalog.normalize_hrefs(os.path.join(tmp_dir.name, 'stac'))
###Output
_____no_output_____
###Markdown
Now that we've normalized to a root directory (the temporary directory), we see that the `self` links are set:
###Code
print(catalog.get_self_href())
print(item.get_self_href())
###Output
/var/folders/sv/zr8j0t4j1f726nhlt3vb8c300000gn/T/tmpt1wuelid/stac/catalog.json
/var/folders/sv/zr8j0t4j1f726nhlt3vb8c300000gn/T/tmpt1wuelid/stac/local-image/local-image.json
###Markdown
We can now call `save` on the catalog, which will recursively save all the STAC objects to their respective self HREFs. Save requires a `CatalogType` to be set. You can review the [API docs](https://pystac.readthedocs.io/en/stable/api.html#catalogtype) on `CatalogType` to see what each type means (unfortunately `help` doesn't show docstrings for attributes).
###Code
catalog.save(catalog_type=pystac.CatalogType.SELF_CONTAINED)
!ls {tmp_dir.name}/stac/*
with open(catalog.get_self_href()) as f:
print(f.read())
with open(item.get_self_href()) as f:
print(f.read())
###Output
{
"type": "Feature",
"stac_version": "1.0.0-beta.2",
"id": "local-image",
"properties": {
"datetime": "2020-08-03T03:47:48.786929Z"
},
"geometry": {
"type": "Polygon",
"coordinates": [
[
[
37.6616853489879,
55.73478197572927
],
[
37.6616853489879,
55.73882710285011
],
[
37.66573047610874,
55.73882710285011
],
[
37.66573047610874,
55.73478197572927
],
[
37.6616853489879,
55.73478197572927
]
]
]
},
"links": [
{
"rel": "root",
"href": "../catalog.json",
"type": "application/json"
},
{
"rel": "parent",
"href": "../catalog.json",
"type": "application/json"
}
],
"assets": {
"image": {
"href": "/var/folders/sv/zr8j0t4j1f726nhlt3vb8c300000gn/T/tmpt1wuelid/image.tif",
"type": "image/tiff; application=geotiff"
}
},
"bbox": [
37.6616853489879,
55.73478197572927,
37.66573047610874,
55.73882710285011
]
}
###Markdown
As you can see, all links are saved with relative paths. That's because we used `catalog_type=CatalogType.SELF_CONTAINED`. If we save an Absolute Published catalog, we'll see absolute paths:
###Code
catalog.save(catalog_type=pystac.CatalogType.ABSOLUTE_PUBLISHED)
###Output
_____no_output_____
###Markdown
Now the links included in the STAC item are all absolute:
###Code
with open(item.get_self_href()) as f:
print(f.read())
###Output
{
"type": "Feature",
"stac_version": "1.0.0-beta.2",
"id": "local-image",
"properties": {
"datetime": "2020-08-03T03:47:48.786929Z"
},
"geometry": {
"type": "Polygon",
"coordinates": [
[
[
37.6616853489879,
55.73478197572927
],
[
37.6616853489879,
55.73882710285011
],
[
37.66573047610874,
55.73882710285011
],
[
37.66573047610874,
55.73478197572927
],
[
37.6616853489879,
55.73478197572927
]
]
]
},
"links": [
{
"rel": "self",
"href": "/var/folders/sv/zr8j0t4j1f726nhlt3vb8c300000gn/T/tmpt1wuelid/stac/local-image/local-image.json",
"type": "application/json"
},
{
"rel": "root",
"href": "/var/folders/sv/zr8j0t4j1f726nhlt3vb8c300000gn/T/tmpt1wuelid/stac/catalog.json",
"type": "application/json"
},
{
"rel": "parent",
"href": "/var/folders/sv/zr8j0t4j1f726nhlt3vb8c300000gn/T/tmpt1wuelid/stac/catalog.json",
"type": "application/json"
}
],
"assets": {
"image": {
"href": "/var/folders/sv/zr8j0t4j1f726nhlt3vb8c300000gn/T/tmpt1wuelid/image.tif",
"type": "image/tiff; application=geotiff"
}
},
"bbox": [
37.6616853489879,
55.73478197572927,
37.66573047610874,
55.73882710285011
]
}
###Markdown
Notice that the Asset HREF is absolute in both cases. We can make the Asset HREF relative to the STAC Item by using `.make_all_asset_hrefs_relative()`:
###Code
catalog.make_all_asset_hrefs_relative()
catalog.save(catalog_type=pystac.CatalogType.SELF_CONTAINED)
with open(item.get_self_href()) as f:
print(f.read())
###Output
{
"type": "Feature",
"stac_version": "1.0.0-beta.2",
"id": "local-image",
"properties": {
"datetime": "2020-08-03T03:47:48.786929Z"
},
"geometry": {
"type": "Polygon",
"coordinates": [
[
[
37.6616853489879,
55.73478197572927
],
[
37.6616853489879,
55.73882710285011
],
[
37.66573047610874,
55.73882710285011
],
[
37.66573047610874,
55.73478197572927
],
[
37.6616853489879,
55.73478197572927
]
]
]
},
"links": [
{
"rel": "root",
"href": "../catalog.json",
"type": "application/json"
},
{
"rel": "parent",
"href": "../catalog.json",
"type": "application/json"
}
],
"assets": {
"image": {
"href": "../../image.tif",
"type": "image/tiff; application=geotiff"
}
},
"bbox": [
37.6616853489879,
55.73478197572927,
37.66573047610874,
55.73882710285011
]
}
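###Markdown
The inverse operation is also available on `Catalog`: `make_all_asset_hrefs_absolute()` converts the asset HREFs back to absolute paths, which is handy before publishing the catalog to a different location. A quick sketch of the round trip (an addition to the original tutorial):
###Code
catalog.make_all_asset_hrefs_absolute()
catalog.save(catalog_type=pystac.CatalogType.ABSOLUTE_PUBLISHED)
###Output
_____no_output_____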
###Markdown
Creating an Item that implements the EO extension

In the code above our item only implemented the core STAC Item specification. With [extensions](https://github.com/radiantearth/stac-spec/tree/v0.9.0/extensions) we can record more information and add additional functionality to the Item. Given that we know this is a WorldView-3 image with earth observation data, we can enable the [eo extension](https://github.com/radiantearth/stac-spec/tree/v0.8.1/extensions/eo) to add band information. To add eo information to an item, we'll need to specify some more data. First, let's define the bands of WorldView-3:
###Code
from pystac.extensions.eo import Band
# From: https://www.spaceimagingme.com/downloads/sensors/datasheets/DG_WorldView3_DS_2014.pdf
wv3_bands = [Band.create(name='Coastal', description='Coastal: 400 - 450 nm', common_name='coastal'),
Band.create(name='Blue', description='Blue: 450 - 510 nm', common_name='blue'),
Band.create(name='Green', description='Green: 510 - 580 nm', common_name='green'),
Band.create(name='Yellow', description='Yellow: 585 - 625 nm', common_name='yellow'),
Band.create(name='Red', description='Red: 630 - 690 nm', common_name='red'),
Band.create(name='Red Edge', description='Red Edge: 705 - 745 nm', common_name='rededge'),
Band.create(name='Near-IR1', description='Near-IR1: 770 - 895 nm', common_name='nir08'),
Band.create(name='Near-IR2', description='Near-IR2: 860 - 1040 nm', common_name='nir09')]
###Output
_____no_output_____
###Markdown
Notice that we used the `.create` method to create new band information. We can now create an Item, enable the eo extension, add the band information and add it to our catalog:
###Code
eo_item = pystac.Item(id='local-image-eo',
geometry=footprint,
bbox=bbox,
datetime=datetime.utcnow(),
properties={})
eo_item.ext.enable(pystac.Extensions.EO)
eo_item.ext.eo.apply(bands=wv3_bands)
###Output
_____no_output_____
###Markdown
There are also [common metadata](https://github.com/radiantearth/stac-spec/blob/v0.9.0/item-spec/common-metadata.md) fields that we can use to capture additional information about the WorldView 3 imagery:
###Code
eo_item.common_metadata.platform = "Maxar"
eo_item.common_metadata.instrument="WorldView3"
eo_item.common_metadata.gsd = 0.3
eo_item
###Output
_____no_output_____
###Markdown
We can use the eo extension to add bands to the assets we add to the item:
###Code
eo_ext = eo_item.ext.eo
help(eo_ext.set_bands)
#eo_item.add_asset(key='image', asset=)
asset = pystac.Asset(href=img_path,
media_type=pystac.MediaType.GEOTIFF)
eo_ext.set_bands(wv3_bands, asset)
eo_item.add_asset("image", asset)
###Output
_____no_output_____
###Markdown
If we look at the asset's JSON representation, we can see the appropriate band indexes are set:
###Code
asset.to_dict()
###Output
_____no_output_____
###Markdown
Let's clear the in-memory catalog, add the EO item, and save to a new STAC:
###Code
catalog.clear_items()
list(catalog.get_items())
catalog.add_item(eo_item)
list(catalog.get_items())
catalog.normalize_and_save(root_href=os.path.join(tmp_dir.name, 'stac-eo'),
catalog_type=pystac.CatalogType.SELF_CONTAINED)
###Output
_____no_output_____
###Markdown
Now, if we read the catalog from the filesystem, PySTAC recognizes that the item implements eo and so uses its functionality, e.g. getting the bands off the asset:
###Code
catalog2 = pystac.read_file(os.path.join(tmp_dir.name, 'stac-eo', 'catalog.json'))
list(catalog2.get_items())
item = next(catalog2.get_all_items())
item.ext.implements('eo')
item.ext.eo.get_bands(item.assets['image'])
###Output
_____no_output_____
###Markdown
Collections

Collections are a subtype of Catalog that have some additional properties to make them more searchable. They can also define common properties so that items in the collection don't have to duplicate common data for each item. Let's create a collection to hold common properties between two images from the Spacenet 5 challenge.

First we'll get another image, and its bbox and footprint:
###Code
url2 = ('http://spacenet-dataset.s3.amazonaws.com/'
'spacenet/SN5_roads/train/AOI_7_Moscow/MS/'
'SN5_roads_train_AOI_7_Moscow_MS_chip997.tif')
img_path2 = os.path.join(tmp_dir.name, 'image2.tif')  # distinct filename so the first image isn't overwritten
urllib.request.urlretrieve(url2, img_path2)
bbox2, footprint2 = get_bbox_and_footprint(img_path2)
###Output
_____no_output_____
###Markdown
We can take a look at the pydocs for Collection to see what information we need to supply in order to satisfy the spec.
###Code
print(pystac.Collection.__doc__)
###Output
A Collection extends the Catalog spec with additional metadata that helps
enable discovery.
Args:
id (str): Identifier for the collection. Must be unique within the STAC.
description (str): Detailed multi-line description to fully explain the collection.
`CommonMark 0.28 syntax <http://commonmark.org/>`_ MAY be used for rich text
representation.
extent (Extent): Spatial and temporal extents that describe the bounds of
all items contained within this Collection.
title (str or None): Optional short descriptive one-line title for the collection.
stac_extensions (List[str]): Optional list of extensions the Collection implements.
href (str or None): Optional HREF for this collection, which will be set as the collection's
self link's HREF.
license (str): Collection's license(s) as a `SPDX License identifier
<https://spdx.org/licenses/>`_, `various`, or `proprietary`. If collection includes
data with multiple different licenses, use `various` and add a link for each.
Defaults to 'proprietary'.
keywords (List[str]): Optional list of keywords describing the collection.
providers (List[Provider]): Optional list of providers of this Collection.
properties (dict): Optional dict of common fields across referenced items.
summaries (dict): An optional map of property summaries,
either a set of values or statistics such as a range.
extra_fields (dict or None): Extra fields that are part of the top-level JSON properties
of the Collection.
Attributes:
id (str): Identifier for the collection.
description (str): Detailed multi-line description to fully explain the collection.
extent (Extent): Spatial and temporal extents that describe the bounds of
all items contained within this Collection.
title (str or None): Optional short descriptive one-line title for the collection.
stac_extensions (List[str]): Optional list of extensions the Collection implements.
keywords (List[str] or None): Optional list of keywords describing the collection.
providers (List[Provider] or None): Optional list of providers of this Collection.
properties (dict or None): Optional dict of common fields across referenced items.
summaries (dict or None): An optional map of property summaries,
either a set of values or statistics such as a range.
links (List[Link]): A list of :class:`~pystac.Link` objects representing
all links associated with this Collection.
extra_fields (dict or None): Extra fields that are part of the top-level JSON properties
of the Catalog.
###Markdown
Beyond what a Catalog requires, a Collection requires a license, and an `Extent` that describes the range of space and time that the items it holds occupy.
###Code
print(pystac.Extent.__doc__)
###Output
Describes the spatio-temporal extents of a Collection.
Args:
spatial (SpatialExtent): Potential spatial extent covered by the collection.
temporal (TemporalExtent): Potential temporal extent covered by the collection.
Attributes:
spatial (SpatialExtent): Potential spatial extent covered by the collection.
temporal (TemporalExtent): Potential temporal extent covered by the collection.
###Markdown
An Extent is composed of a SpatialExtent and a TemporalExtent. These hold one or more bounding boxes and time intervals, respectively, that completely cover the items contained in the collections.

Let's start with creating two new items - these will be core Items. We can set these items to implement the `eo` extension by specifying it in the `stac_extensions`.
###Code
collection_item = pystac.Item(id='local-image-col-1',
geometry=footprint,
bbox=bbox,
datetime=datetime.utcnow(),
properties={},
stac_extensions=[pystac.Extensions.EO])
collection_item.common_metadata.gsd = 0.3
collection_item.common_metadata.platform = 'Maxar'
collection_item.common_metadata.instruments = ['WorldView3']
asset = pystac.Asset(href=img_path,
media_type=pystac.MediaType.GEOTIFF)
collection_item.ext.eo.set_bands(wv3_bands, asset)
collection_item.add_asset('image', asset)
collection_item2 = pystac.Item(id='local-image-col-2',
geometry=footprint2,
bbox=bbox2,
datetime=datetime.utcnow(),
properties={},
stac_extensions=[pystac.Extensions.EO])
collection_item2.common_metadata.gsd = 0.3
collection_item2.common_metadata.platform = 'Maxar'
collection_item2.common_metadata.instruments = ['WorldView3']
asset2 = pystac.Asset(href=img_path2,
media_type=pystac.MediaType.GEOTIFF)
collection_item2.ext.eo.set_bands([
band for band in wv3_bands if band.name in ["Red", "Green", "Blue"]
], asset2)
collection_item2.add_asset('image', asset2)
###Output
_____no_output_____
###Markdown
We can use our two items' metadata to find out what the proper bounds are:
###Code
from shapely.geometry import shape
unioned_footprint = shape(footprint).union(shape(footprint2))
collection_bbox = list(unioned_footprint.bounds)
spatial_extent = pystac.SpatialExtent(bboxes=[collection_bbox])
collection_interval = sorted([collection_item.datetime, collection_item2.datetime])
temporal_extent = pystac.TemporalExtent(intervals=[collection_interval])
collection_extent = pystac.Extent(spatial=spatial_extent, temporal=temporal_extent)
collection = pystac.Collection(id='wv3-images',
description='Spacenet 5 images over Moscow',
extent=collection_extent,
license='CC-BY-SA-4.0')
###Output
_____no_output_____
###Markdown
Now if we add our items to our Collection, and our Collection to our Catalog, we get the following STAC that can be saved:
###Code
collection.add_items([collection_item, collection_item2])
catalog.clear_items()
catalog.clear_children()
catalog.add_child(collection)
catalog.describe()
catalog.normalize_and_save(root_href=os.path.join(tmp_dir.name, 'stac-collection'),
catalog_type=pystac.CatalogType.SELF_CONTAINED)
###Output
_____no_output_____
###Markdown
Cleanup

Don't forget to clean up the temporary directory!
###Code
tmp_dir.cleanup()
###Output
_____no_output_____
###Markdown
Creating a STAC of imagery from Spacenet 5 data

Now, let's take what we've learned and create a Catalog with more data in it.

Allowing PySTAC to read from AWS S3

PySTAC aims to be virtually zero-dependency (notwithstanding the why-isn't-this-in-stdlib datetime-util), so it doesn't have the ability to read from or write to anything but the local file system. However, we can hook into PySTAC's IO in the following way. Learn more about how to use STAC_IO in the [documentation on the topic](https://pystac.readthedocs.io/en/latest/concepts.html#using-stac-io):
###Code
from urllib.parse import urlparse
import boto3
from pystac import STAC_IO
def my_read_method(uri):
parsed = urlparse(uri)
if parsed.scheme == 's3':
bucket = parsed.netloc
key = parsed.path[1:]
s3 = boto3.resource('s3')
obj = s3.Object(bucket, key)
return obj.get()['Body'].read().decode('utf-8')
else:
return STAC_IO.default_read_text_method(uri)
def my_write_method(uri, txt):
parsed = urlparse(uri)
if parsed.scheme == 's3':
bucket = parsed.netloc
key = parsed.path[1:]
s3 = boto3.resource("s3")
s3.Object(bucket, key).put(Body=txt)
else:
STAC_IO.default_write_text_method(uri, txt)
STAC_IO.read_text_method = my_read_method
STAC_IO.write_text_method = my_write_method
###Output
_____no_output_____
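###Markdown
With these hooks registered, reading works transparently for `s3://` HREFs as well. A hypothetical example (the URI below is made up and assumes a bucket you can read; it is not part of the original tutorial):
###Code
# Any s3:// HREF now resolves through the custom read method above:
# remote_catalog = pystac.read_file('s3://my-bucket/path/to/catalog.json')
###Output
_____no_output_____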
###Markdown
We'll need a utility to list keys for reading the lists of files from S3:
###Code
# From https://alexwlchan.net/2017/07/listing-s3-keys/
def get_s3_keys(bucket, prefix):
"""Generate all the keys in an S3 bucket."""
s3 = boto3.client('s3')
kwargs = {'Bucket': bucket, 'Prefix': prefix}
while True:
resp = s3.list_objects_v2(**kwargs)
for obj in resp['Contents']:
yield obj['Key']
try:
kwargs['ContinuationToken'] = resp['NextContinuationToken']
except KeyError:
break
###Output
_____no_output_____
###Markdown
Let's make a STAC of imagery over Moscow as part of the Spacenet 5 challenge. As a first step, we can list out the imagery and extract IDs from each of the chips.
###Code
moscow_training_chip_uris = list(get_s3_keys(bucket='spacenet-dataset',
prefix='spacenet/SN5_roads/train/AOI_7_Moscow/PS-MS'))
import re
chip_id_to_data = {}
def get_chip_id(uri):
return re.search(r'.*\_chip(\d+)\.', uri).group(1)
for uri in moscow_training_chip_uris:
chip_id = get_chip_id(uri)
chip_id_to_data[chip_id] = { 'img': 's3://spacenet-dataset/{}'.format(uri) }
###Output
_____no_output_____
###Markdown
For this tutorial, we'll only take a subset of the data.
###Code
chip_id_to_data = dict(list(chip_id_to_data.items())[:10])
chip_id_to_data
###Output
_____no_output_____
###Markdown
Let's turn each of those chips into a STAC Item that represents the image.
###Code
chip_id_to_items = {}
###Output
_____no_output_____
###Markdown
We'll create core `Item`s for our imagery, but mark them with the `eo` extension as we did above, and store the `eo` data in a `Collection`.

Note that the image CRS is WGS 84 (lat/lng). If it wasn't, we'd have to reproject the footprint to WGS 84 in order to be compliant with the spec (which can easily be done with [pyproj](https://github.com/pyproj4/pyproj)).

Here we're taking advantage of `rasterio`'s ability to read S3 URIs, which only grabs the GeoTIFF metadata and does not pull the whole file down.
###Code
for chip_id in chip_id_to_data:
img_uri = chip_id_to_data[chip_id]['img']
print('Processing {}'.format(img_uri))
bbox, footprint = get_bbox_and_footprint(img_uri)
item = pystac.Item(id='img_{}'.format(chip_id),
geometry=footprint,
bbox=bbox,
datetime=datetime.utcnow(),
properties={},
stac_extensions=[pystac.Extensions.EO])
item.common_metadata.gsd = 0.3
item.common_metadata.platform = 'Maxar'
item.common_metadata.instruments = ['WorldView3']
item.ext.eo.bands = wv3_bands
asset = pystac.Asset(href=img_uri,
media_type=pystac.MediaType.COG)
item.ext.eo.set_bands(wv3_bands, asset)
item.add_asset(key='ps-ms', asset=asset)
chip_id_to_items[chip_id] = item
###Output
Processing s3://spacenet-dataset/spacenet/SN5_roads/train/AOI_7_Moscow/PS-MS/SN5_roads_train_AOI_7_Moscow_PS-MS_chip0.tif
Processing s3://spacenet-dataset/spacenet/SN5_roads/train/AOI_7_Moscow/PS-MS/SN5_roads_train_AOI_7_Moscow_PS-MS_chip1.tif
Processing s3://spacenet-dataset/spacenet/SN5_roads/train/AOI_7_Moscow/PS-MS/SN5_roads_train_AOI_7_Moscow_PS-MS_chip10.tif
Processing s3://spacenet-dataset/spacenet/SN5_roads/train/AOI_7_Moscow/PS-MS/SN5_roads_train_AOI_7_Moscow_PS-MS_chip100.tif
Processing s3://spacenet-dataset/spacenet/SN5_roads/train/AOI_7_Moscow/PS-MS/SN5_roads_train_AOI_7_Moscow_PS-MS_chip1000.tif
Processing s3://spacenet-dataset/spacenet/SN5_roads/train/AOI_7_Moscow/PS-MS/SN5_roads_train_AOI_7_Moscow_PS-MS_chip1001.tif
Processing s3://spacenet-dataset/spacenet/SN5_roads/train/AOI_7_Moscow/PS-MS/SN5_roads_train_AOI_7_Moscow_PS-MS_chip1002.tif
Processing s3://spacenet-dataset/spacenet/SN5_roads/train/AOI_7_Moscow/PS-MS/SN5_roads_train_AOI_7_Moscow_PS-MS_chip1003.tif
Processing s3://spacenet-dataset/spacenet/SN5_roads/train/AOI_7_Moscow/PS-MS/SN5_roads_train_AOI_7_Moscow_PS-MS_chip1004.tif
Processing s3://spacenet-dataset/spacenet/SN5_roads/train/AOI_7_Moscow/PS-MS/SN5_roads_train_AOI_7_Moscow_PS-MS_chip1005.tif
###Markdown
Creating the Collection

All of these images are over Moscow. In Spacenet 5, imagery covers a couple of cities, so collections are a good way to separate these groups of imagery. We can store all of the common `eo` metadata in the collection.
###Code
from shapely.geometry import (shape, MultiPolygon)
footprints = list(map(lambda i: shape(i.geometry).envelope,
chip_id_to_items.values()))
collection_bbox = MultiPolygon(footprints).bounds
spatial_extent = pystac.SpatialExtent(bboxes=[collection_bbox])
datetimes = sorted(list(map(lambda i: i.datetime,
chip_id_to_items.values())))
temporal_extent = pystac.TemporalExtent(intervals=[[datetimes[0], datetimes[-1]]])
collection_extent = pystac.Extent(spatial=spatial_extent, temporal=temporal_extent)
collection = pystac.Collection(id='wv3-images',
description='Spacenet 5 images over Moscow',
extent=collection_extent,
license='CC-BY-SA-4.0')
collection.add_items(chip_id_to_items.values())
collection.describe()
###Output
* <Collection id=wv3-images>
* <Item id=img_0>
* <Item id=img_1>
* <Item id=img_10>
* <Item id=img_100>
* <Item id=img_1000>
* <Item id=img_1001>
* <Item id=img_1002>
* <Item id=img_1003>
* <Item id=img_1004>
* <Item id=img_1005>
###Markdown
Now, we can create a Catalog and add the collection.
###Code
catalog = pystac.Catalog(id='spacenet5', description='Spacenet 5 Data (Test)')
catalog.add_child(collection)
catalog.describe()
###Output
* <Catalog id=spacenet5>
* <Collection id=wv3-images>
* <Item id=img_0>
* <Item id=img_1>
* <Item id=img_10>
* <Item id=img_100>
* <Item id=img_1000>
* <Item id=img_1001>
* <Item id=img_1002>
* <Item id=img_1003>
* <Item id=img_1004>
* <Item id=img_1005>
###Markdown
Adding items with the label extension to the Spacenet 5 catalog

We can use the [label extension](https://github.com/radiantearth/stac-spec/tree/v0.8.1/extensions/label) of the STAC spec to represent the training data in our STAC. For this, we need to grab the URIs of the GeoJSON of roads:
###Code
moscow_training_geojson_uris = list(get_s3_keys(bucket='spacenet-dataset',
prefix='spacenet/SN5_roads/train/AOI_7_Moscow/geojson_roads_speed/'))
for uri in moscow_training_geojson_uris:
chip_id = get_chip_id(uri)
if chip_id in chip_id_to_data:
chip_id_to_data[chip_id]['label'] = 's3://spacenet-dataset/{}'.format(uri)
###Output
_____no_output_____
###Markdown
We'll add the items to their own subcatalog; since they don't inherit the Collection's `eo` properties, they shouldn't go in the Collection.
###Code
label_catalog = pystac.Catalog(id='spacenet-data-labels', description='Labels for Spacenet 5')
catalog.add_child(label_catalog)
###Output
_____no_output_____
###Markdown
To see the required fields for the label extension we can check the pydocs on the `apply` method of the extension:
###Code
from pystac.extensions import label
print(label.LabelItemExt.apply.__doc__)
###Output
Applies label extension properties to the extended Item.
Args:
label_description (str): A description of the label, how it was created,
and what it is recommended for
label_type (str): An ENUM of either vector label type or raster label type. Use
one of :class:`~pystac.LabelType`.
label_properties (list or None): These are the names of the property field(s) in each
Feature of the label asset's FeatureCollection that contains the classes
(keywords from label:classes if the property defines classes).
If labels are rasters, this should be None.
label_classes (List[LabelClass]): Optional, but required if using categorical data.
A list of LabelClasses defining the list of possible class names for each
label:properties. (e.g., tree, building, car, hippo)
label_tasks (List[str]): Recommended to be a subset of 'regression', 'classification',
'detection', or 'segmentation', but may be an arbitrary value.
label_methods: Recommended to be a subset of 'automated' or 'manual',
but may be an arbitrary value.
label_overviews (List[LabelOverview]): Optional list of LabelOverview classes
that store counts (for classification-type data) or summary statistics (for
continuous numerical/regression data).
###Markdown
This loop creates our label items and associates each to the appropriate source image Item.
###Code
for chip_id in chip_id_to_data:
img_item = collection.get_item('img_{}'.format(chip_id))
label_uri = chip_id_to_data[chip_id]['label']
label_item = pystac.Item(id='label_{}'.format(chip_id),
geometry=img_item.geometry,
bbox=img_item.bbox,
datetime=datetime.utcnow(),
properties={},
stac_extensions=[pystac.Extensions.LABEL])
label_item.ext.label.apply(label_description="SpaceNet 5 Road labels",
label_type=label.LabelType.VECTOR,
label_tasks=['segmentation', 'regression'])
label_item.ext.label.add_source(img_item)
label_item.ext.label.add_geojson_labels(label_uri)
label_catalog.add_item(label_item)
###Output
_____no_output_____
###Markdown
Now we have a STAC of training data!
###Code
catalog.describe()
label_item = catalog.get_child('spacenet-data-labels').get_item('label_1')
label_item.to_dict()
###Output
_____no_output_____
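###Markdown
As an optional last step: because we registered S3-aware read and write methods on `STAC_IO` above, the finished catalog could be written straight to a bucket. A sketch, assuming write access to a bucket you own (`my-stac-bucket` is a hypothetical name, so the lines are left commented out):
###Code
# catalog.normalize_hrefs('s3://my-stac-bucket/spacenet5-stac')
# catalog.save(catalog_type=pystac.CatalogType.ABSOLUTE_PUBLISHED)
###Output
_____no_output_____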
|
week9/in_class_notebooks/week9-193.ipynb
|
###Markdown
**Data Visualization and Exploratory Data Analysis**

Visualization is an important part of data analysis. By presenting information visually, you facilitate the process of its perception, which makes it possible to highlight additional patterns, evaluate the ratios of quantities, and quickly communicate key aspects in the data.

Let's start with a little "memo" that should always be kept in mind when creating any graphs.

How to visualize data and make everyone hate you:
1. Chart **titles** are unnecessary. It is always clear from the graph what data it describes.
2. Do not label under any circumstances both **axes** of the graph. Let the others check their intuition!
3. **Units** are optional. What difference does it make if the quantity was measured in people or in liters!
4. The smaller the **text** on the graph, the sharper the viewer's eyesight.
5. You should try to fit all the **information** that you have in the dataset in one chart. With full titles, transcripts, footnotes. The more text, the more informative!
6. Whenever possible, use as many 3D and special effects as you can. There will be less visual distortion than in 2D!

As an example, consider the pandemic case. Let's use a dataset with promptly updated statistics on coronavirus (COVID-19), which is publicly available on Kaggle: https://www.kaggle.com/imdevskp/corona-virus-report?select=covid_19_clean_complete.csv

The main libraries for visualization in Python that we need today are **matplotlib, seaborn, plotly**.
###Code
# Download required companion packages
!pip install plotly-express
!pip install nbformat==4.2.0
!pip install plotly
import matplotlib.pyplot as plt #the most popular library for making the plots
%matplotlib inline
import numpy as np
import seaborn as sns # on the basis of matplotlib, but fancier
import pandas as pd
import pickle # for object serialization (reading the pickled country-codes file below)
import plotly # dynamic plots
import plotly_express as px
import plotly.graph_objects as go
from plotly.subplots import make_subplots
from plotly.offline import init_notebook_mode
init_notebook_mode(connected=True)
%config InlineBackend.figure_format = 'svg' # graphs in svg look sharper
# Change the default plot size
from pylab import rcParams
rcParams['figure.figsize'] = 7, 5
import warnings
warnings.filterwarnings('ignore')
###Output
_____no_output_____
###Markdown
We read the data and look at the number of countries in the dataset and what time period it covers.
###Code
data = pd.read_csv('./data/covid_19_clean.csv')
data.head(10)
###Output
_____no_output_____
###Markdown
How many countries are there in this table?
###Code
data['Country/Region'].nunique()
data.shape
data.describe()
float(-1.400000e+01)
data.shape
data[data['Active'] >= 0]
data['Active'].sort_values()[:20]
data[data['Country/Region'] == 'Andorra'].iloc[-30:]
data.describe(include=['object'])
###Output
_____no_output_____
###Markdown
How many cases on average were confirmed per report? Measures of central tendency:
###Code
data[data['Country/Region'] == 'India']
data['Confirmed'].mode()
data['Confirmed'].median()
data[data['Date'] == max(data['Date'])]['Confirmed'].median()
data['Confirmed'].mean()
###Output
_____no_output_____
###Markdown
What is the maximum number of confirmed cases in every country?
###Code
data.groupby('Country/Region')['Confirmed'].agg('max').sort_values(ascending=False)[:10]
data.groupby('Country/Region')['Confirmed'].max().sort_values(ascending=False)[:10]
data.groupby('Country/Region')['Confirmed'].agg(['mean', 'std', 'sum'])
###Output
_____no_output_____
###Markdown
More info on groupby: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.groupby.html

* **mean()**: Compute mean of groups
* **sum()**: Compute sum of group values
* **size()**: Compute group sizes
* **count()**: Compute count of group
* **std()**: Standard deviation of groups
* **var()**: Compute variance of groups
* **sem()**: Standard error of the mean of groups
* **describe()**: Generates descriptive statistics
* **first()**: Compute first of group values
* **last()**: Compute last of group values
* **nth()**: Take nth value, or a subset if n is a list
* **min()**: Compute min of group values
* **max()**: Compute max of group values

You can see several characteristics at once (mean, median, prod, sum, std, var) - both in DataFrame and Series:
###Code
data.groupby('Country/Region')['Confirmed'].agg(['mean', 'median', 'std'])
data.pivot_table(columns='WHO Region', index='Date', values='Confirmed', aggfunc='sum')
data[data['WHO Region'] == 'Western Pacific']
data[data['WHO Region'] == 'Western Pacific']['Country/Region'].unique()
data.Confirmed.mean()
true_mean = data[data.Date == max(data.Date)].Confirmed.mean()
data[(data['WHO Region'] == 'Western Pacific') & (data['Confirmed'] > true_mean)]
data[(data['WHO Region'] == 'Western Pacific') & (data['Confirmed'] > true_mean)]['Country/Region'].unique()
some_countries = ['China', 'Singapore', 'Philippines', 'Japan']
data[data['Country/Region'].isin(some_countries)]
###Output
_____no_output_____
###Markdown
Let's make a small report:
###Code
data = pd.read_csv('./data/covid_19_clean.csv')
print("Number of countries: ", data['Country/Region'].nunique())
print(f"Day from {min(data['Date'])} till {max(data['Date'])}, overall {data['Date'].nunique()} days.")
display(data[data['Country/Region'] == 'Russia'].tail())
data['Date'] = pd.to_datetime(data['Date'], format='%Y-%m-%d')
###Output
Number of countries: 187
Day from 2020-01-22 till 2020-07-27, overall 188 days.
###Markdown
The coronavirus pandemic is a clear example of exponential growth. To demonstrate this, let's build a graph of the total number of infected and dead. We will use a linear chart type (**Line Chart**), which can reflect the dynamics of one or several indicators. It is convenient for seeing how a value changes over time.
###Code
data
data['Confirmed'].plot()
data[['Confirmed', 'Deaths', 'Date']].groupby('Date').sum().plot()
# Line chart
ax = data[['Confirmed', 'Deaths', 'Date']].groupby('Date').sum().plot(title='The trend of growing number of confirmed cases')
ax.set_xlabel("Time")
ax.set_ylabel("Number of confirmed cases, 10 mln of people");
# TODO
# Change the title and axes names
###Output
_____no_output_____
###Markdown
The graph above shows us general information around the world. Let's select the 10 most affected countries (based on the results of the last day in the dataset) and show the number of registered cases for each of them on one **Line Chart**. This time, let's try using the **plotly** library.
###Code
# Preparation steps fot the table
# Extract the top 10 countries by the number of confirmed cases
df_top = data[data['Date'] == max(data.Date)]
df_top = df_top.groupby('Country/Region', as_index=False)['Confirmed'].sum()
df_top = df_top.nlargest(10,'Confirmed')
# Extract trend across time
df_trend = data.groupby(['Date','Country/Region'], as_index=False)['Confirmed'].sum()
df_trend = df_trend.merge(df_top, on='Country/Region')
df_trend.rename(columns={'Country/Region' : 'Countries',
'Confirmed_x':'Cases',
'Date' : 'Dates'},
inplace=True)
# Plot a graph
# px stands for plotly_express
px.line(df_trend,
title='Increased number of cases of COVID-19',
x='Dates',
y='Cases',
color='Countries')
###Output
_____no_output_____
###Markdown
Let's put a logarithm on this column.
###Code
# Add a column to visualize the data on a logarithmic scale
df_trend['ln(Cases)'] = np.log(df_trend['Cases'] + 1) # Add 1 for log (0) case
px.line(df_trend,
x='Dates',
y='ln(Cases)',
color='Countries',
title='COVID19 Total Cases growth for top 10 worst affected countries(Logarithmic Scale)')
###Output
_____no_output_____
###Markdown
What interesting conclusions can you draw from this graph? Try to make similar graphs for the deaths and active cases (one possible solution sketch follows the empty cell below).
###Code
# TODO
###Output
_____no_output_____
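###Markdown
One possible solution sketch for the deaths trend, mirroring the construction of `df_trend` above. This is an addition, not the original exercise answer; the 'Active' column can be handled the same way.
###Code
# Top 10 countries by deaths on the last available day
df_top_d = data[data['Date'] == max(data.Date)]
df_top_d = df_top_d.groupby('Country/Region', as_index=False)['Deaths'].sum()
df_top_d = df_top_d.nlargest(10, 'Deaths')

# Deaths trend across time for those countries
df_trend_d = data.groupby(['Date', 'Country/Region'], as_index=False)['Deaths'].sum()
df_trend_d = df_trend_d.merge(df_top_d, on='Country/Region')
df_trend_d.rename(columns={'Country/Region': 'Countries',
                           'Deaths_x': 'Deaths',
                           'Date': 'Dates'},
                  inplace=True)
df_trend_d['ln(Deaths)'] = np.log(df_trend_d['Deaths'] + 1)  # +1 for the log(0) case

px.line(df_trend_d,
        x='Dates',
        y='ln(Deaths)',
        color='Countries',
        title='COVID-19 total deaths growth for the top 10 worst affected countries (logarithmic scale)')
###Output
_____no_output_____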
###Markdown
Another popular chart is the **Pie chart**. Most often, this graph is used to visualize the relationship between parts (ratios).
###Code
# Pie chart
fig = make_subplots(rows=1, cols=2, specs=[[{'type':'domain'}, {'type':'domain'}]])
labels_donut = [country for country in df_top['Country/Region']]
fig.add_trace(go.Pie(labels=labels_donut, hole=.4, hoverinfo="label+percent+name",
values=[cases for cases in df_top.Confirmed],
name="Ratio", ), 1, 1)
labels_pie = [country for country in df_top['Country/Region']]
fig.add_trace(go.Pie(labels=labels_pie, pull=[0, 0, 0.2, 0],
values=[cases for cases in df_top.Confirmed],
name="Ratio"), 1, 2)
fig.update_layout(
title_text="Donut & Pie Chart: Distribution of COVID-19 cases among the top-10 affected countries",
# Add annotations in the center of the donut pies.
annotations=[dict(text=' ', x=0.5, y=0.5, font_size=16, showarrow=False)],
colorway=['rgb(69, 135, 24)', 'rgb(136, 204, 41)', 'rgb(204, 204, 41)',
'rgb(235, 210, 26)', 'rgb(209, 156, 42)', 'rgb(209, 86, 42)', 'rgb(209, 42, 42)', ])
fig.show()
###Output
_____no_output_____
###Markdown
In the line graphs above, we have visualized aggregate country information by the number of cases detected. Now, let's try to plot a daily trend chart by calculating the difference between the current value and the previous day's value. For this purpose, we will use a histogram (**Histogram**). Also, let's add pointers to key events, for example, the lockdown dates in Wuhan (China), Italy and the UK.
###Code
# Histogram
def add_daily_diffs(df):
# 0 because the previous value is unknown
df.loc[0,'Cases_daily'] = 0
df.loc[0,'Deaths_daily'] = 0
for i in range(1, len(df)):
df.loc[i,'Cases_daily'] = df.loc[i,'Confirmed'] - df.loc[i - 1,'Confirmed']
df.loc[i,'Deaths_daily'] = df.loc[i,'Deaths'] - df.loc[i - 1,'Deaths']
return df
df_world = data.groupby('Date', as_index=False)[['Deaths', 'Confirmed']].sum()  # double brackets select a list of columns
df_world = add_daily_diffs(df_world)
fig = go.Figure(data=[
go.Bar(name='The number of cases',
marker={'color': 'rgb(0,100,153)'},
x=df_world.Date,
y=df_world.Cases_daily),
go.Bar(name='The number of deaths', x=df_world.Date, y=df_world.Deaths_daily)
])
fig.update_layout(barmode='overlay', title='Statistics on the number of Confirmed and Deaths from COVID-19 across the world',
annotations=[dict(x='2020-01-23', y=1797, text="Lockdown (Wuhan)",
showarrow=True, arrowhead=1, ax=-100, ay=-200),
dict(x='2020-03-09', y=1797, text="Lockdown (Italy)",
showarrow=True, arrowhead=1, ax=-100, ay=-200),
dict(x='2020-03-23', y=19000, text="Lockdown (UK)",
showarrow=True, arrowhead=1, ax=-100, ay=-200)])
fig.show()
# Save
plotly.offline.plot(fig, filename='my_beautiful_histogram.html', show_link=False)
###Output
_____no_output_____
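###Markdown
A side note on `add_daily_diffs`: the row-by-row loop can be replaced with the vectorized pandas `diff()` method, which subtracts the previous row directly:
###Code
# Equivalent, vectorized computation of the daily differences
df_world['Cases_daily'] = df_world['Confirmed'].diff().fillna(0)
df_world['Deaths_daily'] = df_world['Deaths'].diff().fillna(0)
###Output
_____no_output_____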
###Markdown
A histogram is often mistaken for a bar chart due to its visual similarity, but these charts have different purposes. The histogram shows how the data is distributed over a continuous interval or a specific period of time: frequency runs along its vertical axis, and the intervals or time periods along the horizontal axis.

Let's build the **Bar Chart** now. It can be vertical or horizontal; let's choose the second option. We will build the graph only for the top 20 countries by mortality rate. We calculate this statistic as the ratio of the number of deaths to the number of confirmed cases for each country.

For some countries in the dataset, statistics are presented for each region (for example, for all US states). For such countries, we will keep only one (maximum) value. Alternatively, one could calculate the average over the regions and use that as the indicator for the country.
###Code
# Bar chart
df_mortality = data.query('(Date == "2020-07-17") & (Confirmed > 100)')
df_mortality['mortality'] = df_mortality['Deaths'] / df_mortality['Confirmed']
df_mortality['mortality'] = df_mortality['mortality'].apply(lambda x: round(x, 3))
df_mortality.sort_values('mortality', ascending=False, inplace=True)
# Keep the maximum mortality rate for countries for which statistics are provided for each region.
df_mortality.drop_duplicates(subset=['Country/Region'], keep='first', inplace=True)
fig = px.bar(df_mortality[:20].iloc[::-1],
x='mortality',
y='Country/Region',
labels={'mortality': 'Death rate', 'Country/Region': 'Country'},
title=f'Death rate: top-20 affected countries on 2020-07-17',
text='mortality',
height=800,
orientation='h') # horizontal
fig.show()
# TODO: color the bars with a heat map (using the mortality rate)
# To do this, add the argument color='mortality'
###Output
_____no_output_____
###Markdown
**Heat maps** are quite useful for additional visualization of correlation matrices between features. When there are a lot of features, such a graph lets you quickly assess which features are highly correlated and which have no linear relationship.
###Code
# Heat map
sns.heatmap(data.corr(), annot=True, fmt='.2f', cmap='cividis'); # try another color, e.g.'RdBu'
###Output
_____no_output_____
###Markdown
The scatter plot helps to find the relationship between two indicators. For this, you can use a pairplot, which will immediately display a histogram for each variable and a scatter plot for each pair of variables (along different plot axes).
###Code
# Pairplot
sns_plot = sns.pairplot(data[['Deaths', 'Confirmed']])
sns_plot.savefig('pairplot.png') # save
###Output
_____no_output_____
###Markdown
**Pivot table** can automatically sort and aggregate your data.
###Code
# Pivot table
plt.figure(figsize=(12, 4))
df_new = df_mortality.iloc[:10]
df_new['Confirmed'] = df_new['Confirmed'].astype(int)  # built-in int: np.int is deprecated
df_new['binned_fatalities'] = pd.cut(df_new['Deaths'], 3)
platform_genre_sales = df_new.pivot_table(
index='binned_fatalities',
columns='Country/Region',
values='Confirmed',
aggfunc=sum).fillna(0).applymap(int)
sns.heatmap(platform_genre_sales, annot=True, fmt=".1f", linewidths=0.7, cmap="viridis");
# Geo
# file with abbreviations
with open('./data/countries_codes.pkl', 'rb') as file:
countries_codes = pickle.load(file)
df_map = data.copy()
df_map['Date'] = data['Date'].astype(str)
df_map = df_map.groupby(['Date','Country/Region'], as_index=False)[['Confirmed','Deaths']].sum()
df_map['iso_alpha'] = df_map['Country/Region'].map(countries_codes)
df_map['ln(Confirmed)'] = np.log(df_map.Confirmed + 1)
df_map['ln(Deaths)'] = np.log(df_map.Deaths + 1)
px.choropleth(df_map,
locations="iso_alpha",
color="ln(Confirmed)",
hover_name="Country/Region",
hover_data=["Confirmed"],
animation_frame="Date",
color_continuous_scale=px.colors.sequential.OrRd,
title = 'Total Confirmed Cases growth (Logarithmic Scale)')
###Output
_____no_output_____
|
no_show_medical_appointment/no_show_medical.ipynb
|
###Markdown
No Show Medical Appointment

**IMPORTANT OVERVIEW OF THE DATA SET:**

This dataset collects information from 100k medical appointments in Brazil and is focused on the question of whether or not patients show up for their appointment. A number of characteristics about the patient are included in each row.

✓ ‘ScheduledDay’ tells us on what day the patient set up their appointment.
✓ ‘Neighbourhood’ indicates the location of the hospital.
✓ ‘Scholarship’ indicates whether or not the patient is enrolled in the Brazilian welfare program Bolsa Família.

Be careful about the encoding of the last column: it says ‘No’ if the patient showed up to their appointment, and ‘Yes’ if they did not show up.

**WHAT THE PROJECT IS ABOUT:**
* This uses a csv file from Kaggle called No Show Appointment.

**BELOW ARE THE COLUMNS IN THE DATASET AND WHAT THEY MEAN:**
+ PatientId = ID of the patient
+ AppointmentID = ID of the appointment
+ Gender = Gender of the patient
+ ScheduledDay = The day on which the appointment was scheduled
+ AppointmentDay = The day on which the appointment was planned to occur
+ Age = Age of the patient
+ Neighbourhood = The place where the hospital is located
+ Scholarship = If the patient has a scholarship or not, that is 1 for True and 0 for False
+ Hipertension = If the patient has hypertension or not, that is 1 for True and 0 for False
+ Diabetes = If the patient has diabetes or not, that is 1 for True and 0 for False
+ Alcoholism = If the patient has alcoholism or not, that is 1 for True and 0 for False
+ Handcap = If the patient has a handicap or not, that is 1 for True and 0 for False
+ SMS_received = If the patient received an SMS for the appointment
+ No.show = No-show information. “Yes” means the patient did not come to the appointment, “No” means the patient came to the appointment.

**QUESTIONS TO BE INVESTIGATED WITH THIS DATA SET:**
+ What factors are important for us to know in order to predict if a patient will show up for their scheduled appointment?
+ Which gender missed the most appointments?
+ On what day of the week do patients mostly miss appointments?
###Code
# import modules needed for the investigation
%matplotlib inline
import numpy as np
import pandas as pd
from datetime import datetime as dt
import matplotlib.pyplot as plt
# import the data from the csv and assign it to a variable called df
df = pd.read_csv('noshowappointments-kagglev2-may-2016.csv')
df.head()
# head() shows us the first 5 rows from the top of the data set.
# drop any columns that contain missing values
df_no_missing = df.dropna(axis=1, how='any')
print(df.describe())
df.columns # this gives us the idea of the total columns we have in the data set.
# Here i am renaming some columns to provide some clarity into the data
# like removing spaces
df.rename(columns = { "PatientId":'Patient_Id', 'Hipertension': 'Hypertension', \
"AppointmentID":'Appointment_ID',"ScheduledDay": "Scheduled_Day", \
'AppointmentDay': 'Appointment_Day' }, inplace=True)
df.columns = df.columns.str.replace('-', '_')
df.columns
# this is to present the total rows and the columns we have in the data set
df.shape
# reading from left to right, rows first and then column
df.info()
# this is showing more information about the data set like the type of data they are
# Here we convert the date, time and seconds using pandas' built-in function to make
# the date and time easier to work with
# below is the link for the documentation
# https://pandas.pydata.org/pandas-docs/stable/generated/pandas.to_datetime.html#pandas.to_datetime
df['Scheduled_Day'] = pd.to_datetime(df['Scheduled_Day'])
df['Appointment_Day'] = pd.to_datetime(df['Appointment_Day'])
df.head()
###Output
_____no_output_____
###Markdown
**According to Youth Policy.org**, here is the source: *http://www.youthpolicy.org/factsheets/country/brazil/*- 18 years is considered to be the minimum age for criminal responsibility. + In this analysis, I am assuming that since 18 is the minimum age of responsibility, anyone younger is still a teenager and not yet responsible for their own actions, looking at it from an economic perspective. Below I create a variable named teens_start_age = 18. WHY? * This is because I assume that since they are kids and not yet responsible for themselves, they cannot visit the doctor or go to any medical appointment unless in life-threatening situations.
###Code
teens_start_age = 18
teenagers = df[df.Age < teens_start_age] # no of people below the age of 18
len(teenagers)
print "Here is the number of people below the age of 18 : " + str(len(teenagers))
###Output
Here is the number of people below the age of 18 : 27380
###Markdown
In the next cell, I am taking the range from 18 to 40, + people in this range are responsible by law and can take care of themselves. + a variable was created and set to 40; I am using this as the maximum age by which people have settled down
###Code
settled_down_age = 40 # to ask my mentor if i have to turn this into a function.
settling_down = df[(df['Age'] >= teens_start_age) & (df['Age'] <= settled_down_age)]
len(settling_down)
print "This is the number of people from 18 to 40 of the data set : " + str(len(settling_down))
old_age = 60 # Here i looked at the age over 40
getting_old = df[(df['Age'] > settled_down_age) & (df['Age'] <= old_age)]
len(getting_old)
print "This is the number of people over 40 till 60 years : " + str(len(getting_old))
finally_old = df[df.Age > old_age] # Here i want to know the people above the age of 60
len(finally_old)
print" Here is the nummber of people over 60 years : " + str(len(finally_old))
female_in_the_data = df[df.Gender == "F"] # no of females in the gender columns
male_in_the_data = df[df.Gender == "M"] # no of males in the gender columns
len(male_in_the_data)
len(female_in_the_data)
print " Here is the number of Males : " + str(len(male_in_the_data))
print " Here is the number of Females : "+ str(len(female_in_the_data))
###Output
Here is the number of Males : 38687
Here is the number of Females : 71840
###Markdown
Below is a simple histogram of the ages of the people+ The dashed curve passing through it is the fitted normal distribution (the 'line of best fit')+ Also we have the mean age of the people in the data set + The standard deviation is included
###Code
age_mean = df['Age'].mean() # mean of the age distribution
age_standard_deviation = df['Age'].std() # standard deviation of the age distribution
print('This is the mean :', age_mean)
print('This is the standard deviation :', age_standard_deviation)
%matplotlib inline
import numpy as np
from scipy.stats import norm  # matplotlib.mlab.normpdf was removed from matplotlib
import matplotlib.pyplot as plt
mu = age_mean # mean of distribution
sigma = age_standard_deviation # standard deviation of distribution
num_bins = 50
fig, ax = plt.subplots()
# the histogram of the data
n, bins, patches = ax.hist(df['Age'], num_bins, density=True)  # 'normed' was removed; use density=True
# add a 'best fit' line
y = norm.pdf(bins, mu, sigma)
ax.plot(bins, y, '--')
ax.set_xlabel('Age of the people')
ax.set_ylabel('Probability density')
ax.set_title(r'Histogram of People Age')
#$\mu=, $\sigma=$
# Tweak spacing to prevent clipping of ylabel
fig.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Question 1 + Which gender missed the most appointments? Here I am considering both genders; I want to find out which one missed the most appointments.
###Code
male_no_shows = df[(df['Gender'] == "M") & (df['No_show'] == 'Yes')]
males_no_shows = int(len(male_no_shows))
print "Male that did not show up for appointment : ",males_no_shows
# Here i am trying to get the female in the gender that did not show for the appointment
female_no_shows = df[(df['Gender'] == 'F') & (df['No_show'] == 'Yes')]
females_no_shows = int(len(female_no_shows))
print "Female that did not show up for appointment : ",females_no_shows
# Here, we are looking at the number of males that showed up
male_shows_up = df[(df['Gender'] == "M") & (df['No_show'] == 'No')]
males_shows_up = int(len(male_shows_up))
print " Males that showed up for the appointment : ", males_shows_up
#We are taking a look at the number of females that showed up for the appointment
female_shows_up = df[(df['Gender'] == 'F') & (df['No_show'] == 'No')]
females_shows_up = int(len(female_shows_up))
print " Females that showed up for the Medical appointment: ",females_shows_up
###Output
Females that showed up for the Medical appointment: 57246
###Markdown
Below is a representation of gender vs. no-show + to the left we have a pie chart representing the gender percentages of those who showed up+ to the right we have a pie chart showing the gender percentages of those who did not show up
###Code
grouping_gender_noshow = df.groupby(['Gender','No_show'])
grouping_gender_noshow.size().unstack().plot(kind='pie', subplots = True,
autopct='%1.1f%%', figsize=(15,15)
,title = 'Pie showing the relating of Gender with No Show')
###Output
_____no_output_____
###Markdown
Below is the bar chart representation of no-show grouped by gender+ the blue bars show the people that showed up for the medical appointment + the orange bars show the people that did not show up for the medical appointment
###Code
grouping_gender_noshow.size().unstack().plot(kind='bar', figsize=(15,15), title = 'Bar chart of No show grouped by Gender')
###Output
_____no_output_____
###Markdown
So my conclusion is that, + *the number of males that did not show up is 7725*+ *the number of females that did not show up is 14594*+ *This means that the number of females that do not show up is higher than the number of males that do not show up.*+ *Looking at the analysis more closely, you will see that we also have a greater number of females that showed up.* More analysis I would like to explore here:+ To know the age range of males and females that showed and did not show up for the medical appointment.+ I suspect that a high number of teens could be missing appointments because of their guardians.+ Also, elderly people may have little or no guardian to take them to the appointment. So I am not concluding here; this needs to be investigated further.
###Code
percentage = 100
total_number_shows = len(df)
def calculating_percentage(input_show):
    the_percentage = (input_show / total_number_shows) * percentage
    print(the_percentage, '%')
    return the_percentage
male_not_showed = calculating_percentage(males_no_shows)
female_not_showed = calculating_percentage(females_no_shows)
male_that_showed = calculating_percentage(males_shows_up)
female_that_showed = calculating_percentage(females_shows_up)
###Output
6.98924244755 %
13.204013499 %
28.013064681 %
51.7936793725 %
###Markdown
QUESTION 2 : + On what day of the week did people miss the most appointments? + In the next cell, I formatted Appointment_Day into weekday names so that I can get the day of the week.
###Code
df['Day_of_the_appointment'] = df['Appointment_Day'].dt.day_name() # weekday_name was removed from pandas; day_name() gives e.g. 'Monday'
Monday_Appointment = df[(df['Day_of_the_appointment'] == 'Monday')]
len(Monday_Appointment)
def checking_for_days(appointment_day, day_of_week):
appointment = df[(appointment_day == day_of_week)]
    print(day_of_week + 's : ' + str(len(appointment)))
return len(appointment)
max_of_days = [
checking_for_days(df['Day_of_the_appointment'], 'Monday'),
checking_for_days(df['Day_of_the_appointment'], 'Tuesday'),
checking_for_days(df['Day_of_the_appointment'], 'Wednesday'),
checking_for_days(df['Day_of_the_appointment'], 'Thursday'),
checking_for_days(df['Day_of_the_appointment'], 'Friday'),
checking_for_days(df['Day_of_the_appointment'], 'Saturday'),
checking_for_days(df['Day_of_the_appointment'], 'Sunday')
]
# the largest number of appointments on any single weekday
print('Largest number of appointments on a single weekday :', max(max_of_days))
# Most appointments fell on a Wednesday
df['Day_of_the_appointment'].mode()
%matplotlib inline
df['Day_of_the_appointment'].value_counts().plot(kind="pie", shadow = True, startangle=270, autopct='%1.1f%%', figsize=(10,10),
title = ('Percentage of days of the week'), legend = True)
###Output
_____no_output_____
###Markdown
The above pie chart shows the percentage of how the days of the week appear. What do I mean by this? + each slice shows how many times each day occurred in the data set. However, note that I wanted to see on which day of the week people missed the most appointments, and given the looks of things on the pie chart we can see the percentages of how they occur.+ **Monday** with 20.6%+ **Tuesday** with 23.2%+ **Wednesday** with 23.4%+ **Thursday** with 15.6%+ **Friday** with 17.2%+ **Saturday** with 0%*From the pie chart analysis, what I can deduce is that people missed their appointments most on Wednesday, slightly more than on Tuesday.* The reason for checking this: + I wanted to know if the reason for missing the appointment could be work related or not. What I would like to explore more here: what is the percentage of the working class and the non-working class in this analysis, in order to better understand the result.
###Code
ax = df['Day_of_the_appointment'].value_counts().plot(kind='bar')  # .hist() needs numeric data, so plot the counts as a bar chart
ax.set_xlabel("(Day of the week)")
ax.set_ylabel('Frequency')
###Output
_____no_output_____
|
2020_week_2/Taylor_problem_5.32.ipynb
|
###Markdown
Taylor problem 5.32last revised: 12-Jan-2019 by Dick Furnstahl [[email protected]] **Replace by appropriate expressions.** The equation for an underdamped oscillator, such as a mass on the end of a spring, takes the form $\begin{align} x(t) = e^{-\beta t} [B_1 \cos(\omega_1 t) + B_2 \sin(\omega_1 t)]\end{align}$where$\begin{align} \omega_1 = \sqrt{\omega_0^2 - \beta^2}\end{align}$and the mass is released from rest at position $x_0$ at $t=0$. **Goal: plot $x(t)$ for $0 \leq t \leq 20$, with $x_0 = 1$, $\omega_0=1$, and $\beta = 0.$, 0.02, 0.1, 0.3, and 1.**
###Code
import numpy as np
import matplotlib.pyplot as plt
def underdamped(t, beta, omega_0=1, x_0=1):
"""Solution x(t) for an underdamped harmonic oscillator."""
omega_1 = np.sqrt(omega_0**2 - beta**2)
    B_1 = x_0                      # from x(0) = x_0
    B_2 = beta * x_0 / omega_1     # from xdot(0) = 0 (released from rest)
return np.exp(-beta*t) \
* ( B_1 * np.cos(omega_1*t) + B_2 * np.sin(omega_1*t) )
t_pts = np.arange(0., 20., .01)
betas = [0., 0.02, 0.1, 0.3, 0.9999]
fig = plt.figure(figsize=(10,6))
# look up "python enumerate" to find out how this works!
for i, beta in enumerate(betas):
ax = fig.add_subplot(2, 3, i+1)
ax.plot(t_pts, underdamped(t_pts, beta), color='blue')
ax.set_title(rf'$\beta = {beta:.2f}$')
ax.set_xlabel('t')
ax.set_ylabel('x(t)')
ax.set_ylim(-1.1,1.1)
ax.axhline(0., color='black', alpha=0.3) # lightened black zero line
fig.tight_layout()
fig.savefig('Taylor_problem_5.32.png')  # save ("print") the figure; the filename is our choice
###Output
_____no_output_____
###Markdown
Bonus: Widgetized!
###Code
from ipywidgets import interact, fixed
import ipywidgets as widgets
omega_0 = 1.
def plot_beta(beta):
"""Plot function for underdamped harmonic oscillator."""
t_pts = np.arange(0., 20., .01)
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
ax.plot(t_pts, underdamped(t_pts, beta), color='blue')
ax.set_title(rf'$\beta = {beta:.2f}$')
ax.set_xlabel('t')
ax.set_ylabel('x(t)')
ax.set_ylim(-1.1,1.1)
ax.axhline(0., color='black', alpha=0.3)
fig.tight_layout()
max_value = omega_0 - 0.0001
interact(plot_beta,
beta=widgets.FloatSlider(min=0., max=max_value, step=0.01,
value=0., readout_format='.2f',
continuous_update=False));
###Output
_____no_output_____
###Markdown
Now let's allow for complex numbers! This will enable us to take $\beta > \omega_0$.
###Code
# numpy.lib.scimath version of sqrt handles complex numbers.
# numpy exp, cos, and sin already can.
import numpy.lib.scimath as smath
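# Why scimath? For beta > omega_0 the argument of the square root goes
# negative: np.sqrt(-1.0) returns nan (with a RuntimeWarning), while
# smath.sqrt(-1.0) returns 1j, so the formula below stays valid.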
def all_beta(t, beta, omega_0=1, x_0=1):
"""Solution x(t) for damped harmonic oscillator, allowing for overdamped
as well as underdamped solution.
"""
omega_1 = smath.sqrt(omega_0**2 - beta**2)
return np.real( x_0 * np.exp(-beta*t) \
* (np.cos(omega_1*t) + (beta/omega_1)*np.sin(omega_1*t)) )
from ipywidgets import interact, fixed
import ipywidgets as widgets
omega_0 = 1.
def plot_all_beta(beta):
"""Plot of x(t) for damped harmonic oscillator, allowing for overdamped
as well as underdamped cases."""
t_pts = np.arange(0., 20., .01)
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
ax.plot(t_pts, all_beta(t_pts, beta), color='blue')
ax.set_title(rf'$\beta = {beta:.2f}$')
ax.set_xlabel('t')
ax.set_ylabel('x(t)')
ax.set_ylim(-1.1,1.1)
ax.axhline(0., color='black', alpha=0.3)
fig.tight_layout()
interact(plot_all_beta,
beta=widgets.FloatSlider(min=0., max=2, step=0.01,
value=0., readout_format='.2f',
continuous_update=False));
###Output
_____no_output_____
|
benchmarks/notebooks/baseline-paper/baseline-throughput-latency-paper.ipynb
|
###Markdown
Calculate and plot the number of exchanged messages over time
###Code
import os
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Global variables
###Code
pd.set_option("display.precision", 5)
pd.set_option('display.max_rows', 500)
timeframe_upper_limit = 60 # Seconds after startup that you want to look at
paths = {
# set the topology size as key
# "111": "../../logs/baseline/25-03-2021",
"73": "../../logs/baseline/30-03-2021"
}
exclude_paths = []
num_nodes = 73
height = 2
degree = 8
num_runs = {
73: 9,
111: 5
}
# naming: m_xxx = maintenance_xxx
# naming: d_xxx = data_xxx
d_stamps = []
d_topologies = []
d_node_ids = []
d_runs = []
d_latencies = []
startup_times = {}
for path in paths:
    #print(path)
    for root, dirs, files in os.walk(paths[path]):
        dirs[:] = [directory for directory in dirs if directory not in exclude_paths]
        # print(root)
        # print(dirs)
        # print(files)
        got_startup_time = False
        run = root.split('_')[-1]
        for file in files:
            with open(os.path.join(root, file)) as log:
                node_id = file.split('_')[0][4:]
                for line in log:
if "DATA RECEIVED" in line:
# data messages
elem = line.split( )
if not got_startup_time:
# get timestamp of the first data tuple
got_startup_time = True
startup_times[int(run)] = int(elem[8])
d_node_ids.append(int(node_id))
d_stamps.append( int(elem[8]) ) # unix timestamp
d_topologies.append( int(path) )
d_runs.append(int(run))
d_latencies.append(int(1000000*float(elem[-1])))
d_data = pd.DataFrame(np.column_stack([d_topologies, d_runs, d_node_ids, d_latencies, d_stamps]),
columns=['topology', 'run', 'node_id', 'latency_micros','timestamp'])
# print(startup_time)
d_data.head()
d_data['timestamp'] = d_data.apply(lambda row: row.timestamp - startup_times[row.run], axis=1)
d_data['timestamp_sec'] = d_data['timestamp'].apply(lambda x: x // 1000000000)
d_data
###Output
_____no_output_____
###Markdown
Reduce timeframe
###Code
d_data = d_data[d_data.timestamp_sec <= timeframe_upper_limit]
d_data
###Output
_____no_output_____
###Markdown
Try to find outliers
###Code
d_outliers = d_data.groupby(['topology', 'run', 'node_id', 'timestamp_sec']).size().reset_index(name='number of messages').sort_values(by=['number of messages'], ascending=False, axis=0)
# d_outliers.head()
###Output
_____no_output_____
###Markdown
Compute results
###Code
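# Count data messages per (topology, second), then average over runs and
# normalize by topology size to get a per-node rate.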
d_grouped = d_data.groupby(['topology', 'timestamp_sec']).size().reset_index(name='number of messages')
d_grouped['number of messages'] = d_grouped.apply(lambda row: row['number of messages'] / num_runs[row['topology']], axis=1)
d_grouped['number of messages per node'] = d_grouped.apply(lambda row: row['number of messages'] / row['topology'], axis=1)
# d_grouped
###Output
_____no_output_____
###Markdown
Throughput
###Code
ax = plt.gca()
d_grouped.plot(kind='line',x='timestamp_sec',y='number of messages',ax=ax)
plt.xlabel("[s] since startup")
plt.ylabel("Throughput at root per [s]")
plt.legend([str(num_nodes) +' nodes, ' + str(num_runs[num_nodes]) + ' runs, h=' + str(height) + ', d=' + str(degree)], loc=1)
stepsize=10
ax.xaxis.set_ticks(np.arange(0, timeframe_upper_limit + 1, stepsize))
plt.savefig('baseline-throughput-paper.pdf')
first_30 = d_grouped[d_grouped['timestamp_sec'] <= 30]['number of messages'].sum()
first_50 = d_grouped[d_grouped['timestamp_sec'] <= 50]['number of messages'].sum()
f_5_50 = d_grouped[(d_grouped['timestamp_sec'] >= 5) & (d_grouped['timestamp_sec'] < 50)]['number of messages'].sum()
print('Overall exchanged data within first 30s: ' + str(first_30))
print('Overall exchanged data within first 50s: ' + str(first_50))
print('Overall exchanged data between 5s and 50s: ' + str(f_5_50))
print('Exchanged data between 5s and 50s per second per node: ' + str(f_5_50 / (45 * (num_nodes - 1))))
print('\nOverall exchanged data messages during first 30s per node: ' + str(first_30 / num_nodes))
print('\nOverall exchanged data messages during first 50s per node: ' + str(first_50 / num_nodes))
print('\nOverall throughput at root (messages/s over first 50s): ' + str(first_50 / 50))
###Output
Overall exchanged data within first 30s: 4244.666666666667
Overall exchanged data within first 50s: 7124.333333333334
Overall exchanged data between 5s and 50s: 6479.0
Exchanged data between 5s and 50s per second per node: 1.9996913580246913
Overall exchanged data messages during first 30s per node: 58.14611872146119
Overall exchanged data messages during first 50s per node: 97.59360730593608
Overall throughput at root (messages/s over first 50s): 142.48666666666668
###Markdown
Latency
###Code
d_latency = d_data.groupby(['topology', 'timestamp_sec'], as_index=False)['latency_micros'].mean()
d_latency['latency_micros'] = d_latency['latency_micros'] / 1000
d_latency.rename(columns = {'latency_micros':'latency_millis'}, inplace=True)
d_latency
d_latency = d_latency[d_latency["timestamp_sec"] >= 0]
ax = plt.gca()
d_latency.plot(kind='line',x='timestamp_sec',y='latency_millis',ax=ax)
plt.xlabel("[s] since startup")
plt.ylabel("Mean source-to-sink latency in [ms]")
plt.legend([str(num_nodes) +' nodes, ' + str(num_runs[num_nodes]) + ' runs, h=' + str(height) + ', d=' + str(degree)], loc=1)
stepsize=10
ax.xaxis.set_ticks(np.arange(0, timeframe_upper_limit + 1, stepsize))
plt.savefig('baseline-latency-paper.pdf')
avg_latency = d_latency['latency_millis'].mean()
print("Average latency per packet from source to sink: " + str(avg_latency))
###Output
Average latency per packet from source to sink: 96.501147069178
|
content/lessons/02/Class-Coding-Lab/CCL-Intro-To-Python-Programming-Copy1.ipynb
|
###Markdown
Class Coding Lab: Introduction to Programming The goals of this lab are to help you to understand: 1. the Jupyter and IDLE programming environments 2. basic Python syntax 3. variables and their use 4. how to sequence instructions together into a cohesive program 5. the input() function for input and the print() function for output Let's start with an example: Hello, world! This program asks for your name as input, then says hello to you as output. Most often it's the first program you write when learning a new programming language. Click in the cell below and click the run cell button.
###Code
your_name = input("What is your name? ")
print('Hello there',your_name)
###Output
_____no_output_____
###Markdown
Believe it or not there's a lot going on in this simple two-line program, so let's break it down. - The first line: - Asks you for input, prompting you `What is your Name?` - It then stores your input in the variable `your_name` - The second line: - prints out the following text: `Hello there` - then prints out the contents of the variable `your_name` At this point you might have a few questions. What is a variable? Why do I need it? Why is this two lines? Etc... All will be revealed in time. Variables Variables are names in our code which store values. I think of variables as cardboard boxes. Boxes hold things. Variables hold things. The name of the variable is on the outside of the box (that way you know which box it is), and the value of the variable represents the contents of the box. Variable Assignment **Assignment** is an operation where we store data in our variable. It's like packing something up in the box. In this example we assign the value "USA" to the variable **country**
###Code
# Here's an example of variable assignment.
country = 'USA'
###Output
_____no_output_____
###Markdown
Variable Access What good is storing data if you cannot retrieve it? Lucky for us, retrieving the data in a variable is as simple as calling its name:
###Code
country # This should say 'USA'
###Output
_____no_output_____
###Markdown
At this point you might be thinking: Can I overwrite a variable? The answer, of course, is yes! Just re-assign it a different value:
###Code
country = 'Canada'
###Output
_____no_output_____
###Markdown
You can also access a variable multiple times. Each time it simply gives you its value:
###Code
country, country, country
###Output
_____no_output_____
###Markdown
The Purpose Of Variables Variables play a vital role in programming. Computer instructions have no memory of each other. That is, one line of code has no idea what is happening in the other lines of code. The only way we can "connect" what happens from one line to the next is through variables. For example, if we re-write the Hello, World program at the top of the page without variables, we get the following:
###Code
input("What is your name? ")
print('Hello there')
###Output
_____no_output_____
###Markdown
When you execute this program, notice there is no longer a connection between the input and the output. In fact, the input on line 1 doesn't matter because the output on line 2 doesn't know about it. It cannot, because we never stored the results of the input into a variable! What's in a name? Um, EVERYTHING Computer code serves two equally important purposes: 1. To solve a problem (obviously) 2. To communicate how you solved the problem to another person (hmmm... I didn't think of that!) If our code does something useful, like land a rocket, predict the weather, or calculate month-end account balances, then the chances are 100% certain that *someone else will need to read and understand our code.* Therefore it's just as important that we develop code that is easily understood by both the computer and our colleagues. This starts with the names we choose for our variables. Consider the following program:
###Code
y = input("Enter your city: ")
x = input("Enter your state: ")
print(x,y,'is a nice place to live')
###Output
_____no_output_____
###Markdown
What do `x` and `y` represent? Is there a semantic (design) error in this program? You might find it easy to figure out the answers to these questions, but consider this more human-friendly version:
###Code
state = input("Enter your city: ")
city = input("Enter your state: ")
print(city,state,'is a nice place to live')
###Output
_____no_output_____
###Markdown
Do the aptly-named variables make it easier to find the semantic errors in this second version? You Do It: Finally, re-write this program so that it uses well-thought-out variables AND is semantically correct:
###Code
# TODO: re-write the above program to work as it should, stating "City State is a nice place to live"
city = input("Enter your city: ")
state = input("Enter your state: ")
print(city, state, "is a nice place to live")
###Output
Enter your city: River Vale
Enter your state: New Jersey
River Vale New Jersey is a nice place to live
###Markdown
Now Try This: Now try to write a program which asks for two separate inputs: your first name and your last name. The program should then output `Hello` with your first name and last name. For example, if you enter `Mike` for the first name and `Fudge` for the last name, the program should output `Hello Mike Fudge`**HINTS** - Use appropriate variable names. If you need to create a two-word variable name, use an underscore in place of the space between the words, e.g. `two_words` - You will need a separate set of inputs for each name.
###Code
# TODO: write your code here
first_name = input("Enter your first name: ")
last_name = input("Enter your last name: ")
print("Hello", first_name, last_name)
###Output
Enter your first name: Joe
Enter your last name: Recca
Hello Joe Recca
###Markdown
Variable Concatenation: Your First Operator The `+` symbol is used to combine two variables containing text values. Consider the following example:
###Code
prefix = "re"
suffix = "ment"
root = input("Enter a root word, like 'ship': ")
print( prefix + root + suffix)
###Output
Enter a root word, like 'ship': ship
reshipment
###Markdown
Now Try This Write a program that prompts for three colors as input, then outputs those three colors as a list, informing me which one was the middle (2nd entered) color. For example, if you were to enter `red` then `green` then `blue` the program would output: `Your colors were: red, green, and blue. The middle color was green.`**HINTS** - you'll need three variables, one for each input - you should try to make the program output like my example. This includes commas and the word `and`.
###Code
color1 = input("Enter color 1: ")
color2 = input("Enter color 2: ")
color3 = input("Enter color 3: ")
print("Your colors were:", color1 + ",", color2 + ",", "and", color3 + ".", "The middle color was", color2 + ".")
###Output
Enter color 1: blue
Enter color 2: red
Enter color 3: green
Your colors were: blue, red, and green. The middle color was red.
|
implementations/notebooks_df/code_changes_lines.ipynb
|
###Markdown
Code_Changes_Lines This is the reference implementation for [Code_Changes_Lines](https://github.com/chaoss/wg-evolution/blob/master/metrics/Code_Changes_Lines.md), a metric specified by the [Evolution Working Group](https://github.com/chaoss/wg-evolution) of the [CHAOSS project](https://chaoss.community). Have a look at [README.md](../README.md) to find out how to run this notebook (and others in this directory) as well as to get a better understanding of the purpose of the implementations. The implementation is described in two parts (see below):* Class for computing Code_Changes_Lines* An explanatory analysis of the class' functionality Some more auxiliary information in this notebook:* Examples of the use of the implementation As discussed in the [README](../README.md) file, the scripts required to analyze the data fetched by Perceval are located in the `code_df` package. Due to python's import system, to import modules from a package which is not in the current directory, we have to either add the package to `PYTHONPATH` or simply append a `..` to `sys.path`, so that `code_df` can be successfully imported.
###Code
from datetime import datetime
import matplotlib.pyplot as plt
import sys
sys.path.append('..')
from code_df import utils
from code_df import conditions
from code_df.commit import Commit
%matplotlib inline
class CodeChangesLines(Commit):
"""
Class for Code_Changes_Lines
"""
def _flatten(self, item):
"""
Flatten a raw commit fetched by Perceval into a flat dictionary.
A list with a single flat directory will be returned.
That dictionary will have the elements we need for computing metrics.
The list may be empty, if for some reason the commit should not
be considered.
:param item: raw item fetched by Perceval (dictionary)
:returns: list of a single flat dictionary
"""
creation_date = utils.str_to_date(item['data']['AuthorDate'])
if self.since and (self.since > creation_date):
return []
if self.until and (self.until < creation_date):
return []
code_files = [file['file'] for file in item['data']['files'] if
all(condition.check(file['file'])
for condition in self.is_code)]
if len(code_files) > 0:
flat = {
'repo': item['origin'],
'hash': item['data']['commit'],
'author': item['data']['Author'],
'category': "commit",
'created_date': creation_date,
'committer': item['data']['Commit'],
'commit_date': utils.str_to_date(item['data']['CommitDate']),
'files_no': len(item['data']['files']),
'refs': item['data']['refs'],
'parents': item['data']['parents'],
'files': item['data']['files']
}
# actions
actions = 0
for file in item['data']['files']:
if 'action' in file:
actions += 1
flat['files_action'] = actions
# Merge commit check
if 'Merge' in item['data']:
flat['merge'] = True
else:
flat['merge'] = False
# modifications
modified_lines = 0
for file in item['data']['files']:
                if 'added' in file and 'removed' in file:
try:
modified_lines += int(file['added']) + int(file['removed'])
except ValueError:
# in case of compressed files,
# additions and deletions are "-"
pass
flat['modifications'] = modified_lines
return [flat]
else:
return []
def compute(self):
"""
Compute the number of lines modified in the data fetched
by Perceval.
It computes the sum of the 'modifications' column
in the DataFrame.
:returns modifications_count: The total number of
lines modified (int)
"""
df = self.df
modifications_count = df['modifications'].sum()
return modifications_count
def _agg(self, df, period):
"""
Perform an aggregation operation on a DataFrame or Series
        to find the total number of lines modified in every interval
        of the period specified in the time_series method, like
        'M', 'W', etc.
It adds the number of lines modified for every row in the
series.
:param df: a pandas DataFrame on which the aggregation will be
applied.
:param period: A string which can be any one of the pandas time
series rules:
'W': week
'M': month
'D': day
:returns df: The aggregated dataframe, where aggregations have
been performed on the "modifications"
"""
df = df.resample(period)['modifications'].agg(['sum'])
return df
###Output
_____no_output_____
###Markdown
Performing the Analysis Using the above class, we can perform several kinds of analysis on the JSON data file, fetched by Perceval. For starters, we can perform a simple calculation of the number of modified lines (additions plus deletions) in the file. To make things simple, we will use the `Naive` implementation for deciding whether a given commit affects the source code or not. Again, the naive implementation assumes that all files are part of the source code, and hence, all commits are considered to affect it. The `Naive` implementation is the default option. Counting the total number of modified lines We first read the JSON file containing Perceval data using the `read_json_file` utility function.
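###Code
# Illustrative sketch only (hypothetical class, not the real code_df API):
# a "naive" code-detection condition accepts every file path, so every
# commit is treated as touching source code.
class NaiveSketch:
    def check(self, path):
        return True  # every file counts as source code
###Output
_____no_output_____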
###Code
items = utils.read_json_file('../git-commits.json')
###Output
_____no_output_____
###Markdown
Let's use the `compute` method to count the total number of lines modified after instantiating the above class. Notice that here, we are redefining the `_flatten` method of the `Commit` class, the parent of the `CodeChangesLines` class. The reason for doing this is to add a `modifications` column to the dataframe. This makes it easier to compute this metric. First, we will do the computation without passing any since and until dates. Next, we can pass in the start and end dates as a tuple. The format would be `%Y-%m-%d`.
###Code
changes = CodeChangesLines(items)
print("The total number of modified lines "
"in the file is {}.".format(changes.compute()))
date_since = datetime.strptime("2018-01-01", "%Y-%m-%d")
date_until = datetime.strptime("2018-07-01", "%Y-%m-%d")
changes_dated = CodeChangesLines(items,
date_range=(date_since, date_until))
print("The total number of lines modified between "
"2018-01-01 and 2018-07-01 is {}.".format(changes_dated.compute()))
###Output
The total number of modified lines in the file is 563670.
The total number of lines modified between 2018-01-01 and 2018-07-01 is 192972.
###Markdown
Counting the total number of lines modified by commits excluding merge commits Moving on, let's make use of the `EmptyExclude` and `MergeExclude` classes to filter out empty and merge commits respectively. These classes are sub-classes of the `Commit` class in the `conditions` module. They provide two methods: `check()` and `set_commits`. The `set_commits` method selects commits which satisfy a given condition (like excluding empty commits, for example) and stores the hashes of those commits in the set `included`, an instance variable of all `Commit` classes. The `check()` method checks each commit in the DataFrame created from Perceval data and drops those rows which correspond to commits not in `included`.
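###Code
# Hedged sketch (hypothetical, simplified from the description above; not
# the real code_df.conditions API): a commit condition collects the hashes
# of commits to keep, and check() drops the rows whose hash is excluded.
class MergeExcludeSketch:
    def set_commits(self, df):
        self.included = set(df.loc[~df['merge'], 'hash'])  # keep non-merge commits
    def check(self, df):
        self.set_commits(df)
        return df[df['hash'].isin(self.included)]
###Output
_____no_output_____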
###Code
changes_non_merge = CodeChangesLines(items,
(date_since, date_until),
conds=[conditions.MergeExclude()])
print("The total number of lines modified by non-merge commits between"
" 2018-01-01 and 2018-07-01 is {}.".format(changes_non_merge.compute()))
###Output
The total number of lines modified by non-merge commits between 2018-01-01 and 2018-07-01 is 94047.
###Markdown
Counting the number of lines modified over regular time intervals Using the `time_series` method, it is possible to compute the number of lines modified every month, or every week. This kind of analysis is useful in finding trends over time, as we will see in the cell below. Let's perform a basic analysis: let's see the change in the number of lines modified between the same dates we used above on a weekly basis: 2018-01-01 and 2018-07-01. The Code_Changes_Lines object, `changes_dated`, will be the same as used above.
###Code
weekly_df = changes_dated.time_series(period='W')
###Output
_____no_output_____
###Markdown
Let's see what the dataframe returned by `time_series` looks like. As you will notice, the dataframe has rows corresponding to each and every week between the start and end dates. To do this, we simply set the `created_date` column of the DataFrame `changes_dated.df` as its index and then `resample` it to whatever time period we need. In this case, we have used `W`. We then apply the 'sum' aggregation function on the 'modifications' column of the dataframe.
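###Code
# Toy illustration of the resampling idea (synthetic data, not the Perceval
# DataFrame): with a DatetimeIndex, resample('W') buckets rows by week and
# agg(['sum']) totals the modifications in each bucket.
import pandas as pd
toy = pd.DataFrame({'modifications': [3, 5, 2, 7]},
                   index=pd.to_datetime(['2018-01-01', '2018-01-03',
                                         '2018-01-10', '2018-01-20']))
toy.resample('W')['modifications'].agg(['sum'])
###Output
_____no_output_____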
###Code
weekly_df
###Output
_____no_output_____
###Markdown
Let's plot the dataframe `weekly_df` using matplotlib.pyplot. We use the `seaborn` theme and plot a simple line plot --- lines modified vs time interval. Using the `plt.fill_between` method allows us to "fill up" the area between the line plot and the x axis.
###Code
plt.figure(figsize=[15, 5])
plt.style.use('seaborn')
plt.plot(weekly_df['sum'])
plt.fill_between(y1=weekly_df['sum'], y2=0, x=weekly_df.index)
plt.title("Lines modified");
plt.show()
###Output
_____no_output_____
###Markdown
The same thing can be tried for months, instead of weeks. By passing `'M'` in place of `'W'`, we get a similar dataframe but with only a few rows, due to the larger timescale. Counting line modifications by commits only made on the master branch Another option one has while using this class for analyzing git commit data is to include only those commits for analysis which are on the master branch. To do this, we pass in an object of the `MasterInclude` class as a list to the `conds` parameter while instantiating the `CodeChangesLines` class. We compute the number of commits created on the master branch after `2018-01-01`, which we stored in the `datetime` object, `date_since`.
###Code
changes_only_master = CodeChangesLines(items,
date_range=(date_since, None),
conds=[conditions.MasterInclude()])
print("The total number of lines modified (additions, deletions) by commits made on the master branch "
"after 2018-01-01 is {}.".format(changes_only_master.compute()))
###Output
The total number of lines modified (additions, deletions) by commits made on the master branch after 2018-01-01 is 215191.
###Markdown
Let's do one last thing: the same thing we did in the cell above, but without including empty commits. In this case, we would also need to pass a `conditions.EmptyExclude` object to `conds`. Also, let's exclude those commits which work solely on `markdown` files. We use the `PostfixExclude` class, a sub-class of `Code`, for this.
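###Code
# Hedged sketch (hypothetical, inferred from the name and the usage below;
# not the real code_df API): a postfix-based code check rejects file paths
# ending in an excluded suffix, so commits touching only such files are not
# counted as code changes.
class PostfixExcludeSketch:
    def __init__(self, postfixes=('.md',)):
        self.postfixes = tuple(postfixes)
    def check(self, path):
        return not path.endswith(self.postfixes)
###Output
_____no_output_____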
###Code
changes_non_empty_master = CodeChangesLines(items,
is_code=[conditions.PostfixExclude(postfixes=['.md'])],
conds=[conditions.MasterInclude(), conditions.EmptyExclude()])
print("The total number of lines modified by non-empty commits made on the master branch is: {}".format(changes_non_empty_master.compute()))
###Output
The total number of lines modified by non-empty commits made on the master branch is: 9226
|
code/exploratory/fish_munging.ipynb
|
###Markdown
Munging Load data from Brewster, pre-tidied by Manuel, and drop the spurious column that was the index in the csv.
###Code
df_fish = pd.read_csv("../../data/jones_brewster_2014.csv")
del df_fish['Unnamed: 0']
df_fish.head()
###Output
_____no_output_____
###Markdown
Let's take a quick look at everything we've got.
###Code
plot_kwargs = {
"x_axis_label": "counts",
"y_axis_label": "expt",
"width": 500,
"height": 1000,
"horizontal": True,
}
p = bokeh_catplot.box(data=df_fish, cats="experiment", val="mRNA_cell", **plot_kwargs)
bokeh.io.show(p)
###Output
_____no_output_____
###Markdown
Wait, what are all the experiment labels in the dataset?
###Code
raw_expt_labels = df_fish['experiment'].unique()
raw_expt_labels.sort()
raw_expt_labels
###Output
_____no_output_____
###Markdown
Huh, is this the complete dataset, with constitutive promoters _and_ LacI regulated measurements? Then what is in the regulated file?
###Code
df_reg = pd.read_csv("../../data/jones_brewster_regulated_2014.csv")
reg_labels = df_reg['experiment'].unique()
reg_labels
###Output
_____no_output_____
###Markdown
Uuuuuh, what? Are those duplicates of what's in the main dataset?
###Code
print(len(df_reg[df_reg['experiment'] == 'O3_10ngmL']))
print(len(df_fish[df_fish['experiment'] == 'O3_10ngmL']))
print(len(df_reg[df_reg['experiment'] == 'Oid_0p5ngmL']))
print(len(df_fish[df_fish['experiment'] == 'Oid_0p5ngmL']))
###Output
_____no_output_____
###Markdown
I think the contents of the regulated file either duplicate or are a subset of the contents of the other file... Let's write a quick test function.
###Code
def check_counts_subset(subset_series, total_series):
subset_vals, subset_counts = np.unique(subset_series, return_counts=True)
total_vals, total_counts = np.unique(total_series, return_counts=True)
for i, val in enumerate(subset_vals):
assert val in total_vals, "%r not found in total_series" % val
assert (
subset_counts[i] <= total_counts[np.searchsorted(total_vals, val)]
), "More occurances of %r in subset_series than in total_series!" % val
check_counts_subset([0,1], [0,1,1]) # passes
# check_counts_subset([0,1,2], [0,1,1]) # fails
# check_counts_subset([0,1,2,3], [0,1,2,4]) # fails
check_counts_subset([0,0,1,2,3,3,3], [0,0,1,2,3,4,4,3,3]) # passes
# check_counts_subset([0,0,1,2,3,3], [0,0,1,2,4,3]) # fails
###Output
_____no_output_____
###Markdown
Seems to work. Now use it for reals.
###Code
check_counts_subset(df_reg[df_reg["experiment"] == "Oid_0p5ngmL"]['mRNA_cell'],
df_fish[df_fish["experiment"] == "Oid_0p5ngmL"]['mRNA_cell'])
check_counts_subset(df_reg[df_reg["experiment"] == "O3_10ngmL"]['mRNA_cell'],
df_fish[df_fish["experiment"] == "O3_10ngmL"]['mRNA_cell'])
###Output
_____no_output_____
###Markdown
No assertions raised, so the contents of the regulated file are in fact a subset of the full dataframe. So, I dunno what happened with the regulated file, but I think we can ignore it and work only with the main file. Energies Next, let's get the energies from the supplement of the Brewster/Jones 2012 paper.
###Code
df_energies = pd.read_csv("../../data/brewster_jones_2012.csv")
df_energies.head()
###Output
_____no_output_____
###Markdown
Are all the promoters in the 2012 dataset in the 2014 fish dataset? These are the only constitutive promoters I'm interested in.
###Code
all(item in df_fish.experiment.unique() for item in df_energies.Name)
###Output
_____no_output_____
###Markdown
Splitting into regulated & constitutive data Some of these datasets are not of interest right now, so let's split the data into multiple dataframes for easier downstream handling. The regulated datasets start with O1, O2, or O3. Everything else doesn't. From that everything else, grab the ones that we have energies for, and set aside the rest. Use regex to parse.
###Code
# put all strings that start w/ 'O' in one list
regulated_labels = [label for label in raw_expt_labels if re.match('^O', label)]
# and put all the others in another list
other_labels = [label for label in raw_expt_labels if not re.match('^O', label)]
# from that, split out those we have energies for...
constitutive_labels = [label for label in other_labels if label in tuple(df_energies.Name)]
# ...and those we don't
leftover_labels = [label for label in other_labels if label not in tuple(df_energies.Name)]
leftover_labels
###Output
_____no_output_____
###Markdown
Without more metadata, I don't really know what to do with the leftover labels data, e.g., what good does the aTc concentration do me if I don't know what promoter it was for? Parameter estimation Chi-by-eye to sanity check UV5, 5DL10, and 5DL20 look like good candidates for a closer look; all have decent non-zero expression, and they look different from each other.
###Code
df_slice = df_fish.query("experiment == 'UV5' \
or experiment == '5DL10' \
or experiment == '5DL20'")
df_slice['experiment'].unique()
###Output
_____no_output_____
###Markdown
Now that we've got a more manageable set, let's make ECDFs and chi-by-eye with negative binomial. `scipy.stats` convention is `cdf(k, n, p, loc=0)`, where $n$ is the number of successes we're waiting for and $p$ is probability of success.
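###Code
# Sanity check for the chi-by-eye parameters: in scipy's convention the
# negative binomial with parameters (n, p) has mean n*(1-p)/p and variance
# n*(1-p)/p**2, which helps pick (n, p) pairs that roughly match each ECDF.
import scipy.stats as st
for n, p in [(5, 0.2), (3, 0.4), (0.3, 0.26)]:
    print(n, p, st.nbinom.mean(n, p), st.nbinom.var(n, p))
###Output
_____no_output_____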
###Code
p = bokeh_catplot.ecdf(data=df_slice, cats='experiment', val='mRNA_cell', style='staircase')
# compute upper bound for theoretical CDF plots
u_bound = max(df_slice['mRNA_cell'])
x = np.arange(u_bound+1)
p.line(x, st.nbinom.cdf(x, 5, 0.2))
p.line(x, st.nbinom.cdf(x, 3, 0.4), color='orange')
p.line(x, st.nbinom.cdf(x, .3, 0.26), color='green')
bokeh.io.show(p)
###Output
_____no_output_____
|
Module - Interactive User Input with ipywidgets.ipynb
|
###Markdown
UNCLASSIFIED~~//FOR OFFICIAL USE ONLY~~Transcribed from FOIA Doc ID: 6689695 https://archive.org/details/comp3321 **Note:** The overall classification of this lesson is marked as indicated above, however the material doesn't have portion markings after the second paragraph below. Also, looking through the material it's clear there isn't anything about it that warrants an FOUO categorization, so it appears this lesson was overprotected. (U) ipywidgets (U) ipywidgets is used for making interactive widgets inside your Jupyter notebook. (U) The most basic way to get user input is to use the Python built-in `input` function. For more complicated types of interaction, you can use ipywidgets.
###Code
# input example (not using ipywidgets)
a = input("Give me your input: ")
print("Your input was: " + a)
import ipywidgets
from ipywidgets import *
###Output
_____no_output_____
###Markdown
`interact` is the easiest way to get started with ipywidgets by creating a user interface and automatically calling the specified function.
###Code
def f(x):
return x*2
interact(f,x=10)
def g(check,y):
print ("{} {}".format(check,y))
interact(g,check=True,y="Hi there! ")
###Output
_____no_output_____
###Markdown
But, if you need more flexibility, you can start from scratch by picking a widget and then calling the functionality you want. Hint: you get more widget choices this way.
###Code
IntSlider()
w = IntSlider()
w
###Output
_____no_output_____
###Markdown
You can explicitly display using IPython's display module. Note what happens when you display the same widget more than once!
###Code
from IPython.display import display
display(w)
w.value
###Output
_____no_output_____
###Markdown
Now we have a value from our slider we can use in code. But what other attributes or "keys" does our slider widget have?
###Code
w.max
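# The full list of a widget's synced attributes ("keys") can be listed too:
w.keys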
new_w = IntSlider(max=200)
display(new_w)
###Output
_____no_output_____
###Markdown
You can also close your widget
###Code
w.close()
new_w.close()
###Output
_____no_output_____
###Markdown
Here are all the available widgets:
###Code
list(Widget.widget_types.items())
###Output
_____no_output_____
###Markdown
Categories of widgets**Numeric:** IntSlider, FloatSlider, IntRangeSlider, FloatRangeSlider, IntProgress, FloatProgress, BoundedIntText, BoundedFloatText, IntText, FloatText **Boolean:** ToggleButton, Checkbox, Valid **Selection:** Dropdown, RadioButtons, Select, ToggleButtons, SelectMultiple **String Widgets:** Text, Textarea **Other common:** Button, ColorPicker, HTML, Image
###Code
Dropdown(options=["1", "2", "3", "cat"])
bt = Button(description="Click me!")
display(bt)
###Output
_____no_output_____
###Markdown
Buttons don't do much on their own, so we have to use some event handling. We can define a function with the desired behavior and call it with the button's `on_click` method.
###Code
def clicker(b):
print("Hello World!!!!")
bt.on_click(clicker)
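# Widgets can also notify your code when a trait changes: observe() registers
# a callback that receives a change dict whose 'new' (and 'old') entries hold
# the updated and previous values of the named trait.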
def f(change):
print(change['new'])
w = IntSlider()
display(w)
w.observe(f, names='value')
###Output
_____no_output_____
###Markdown
Wrapping Multiple Widgets in Boxes When working with multiple input widgets, it's often nice to wrap it all in a nice little box. ipywidgets provides a few options for this -- we'll cover HBox (horizontal box) and VBox (vertical box). HBox This will display the widgets horizontally
###Code
fruit_list = Dropdown(
options = ['apple', 'cherry', 'orange', 'plum', 'pear']
)
fruit_label = HTML(
value = 'Select a fruit from the list: '
)
fruit_box = HBox(children=[fruit_label, fruit_list])
fruit_box
###Output
_____no_output_____
###Markdown
VBox This will display the widgets (or boxes) vertically
###Code
num_label = HTML(
value = 'Choose the number of fruits: '
)
num_options = IntSlider(min=1, max=20)
num_box = HBox(children=(num_label, num_options))
type_label = HTML(
value = 'Select the type of fruit: '
)
type_options = RadioButtons(
options=('Under-ripe', 'Ripe', 'Rotten')
)
type_box = HBox(children=(type_label, type_options))
fruit_vbox = VBox(children=(fruit_box, num_box, type_box))
fruit_vbox
###Output
_____no_output_____
###Markdown
Retrieving Values from a Box
###Code
box_values = {}
# the elements in a box can be accessed using the children attribute
for index, box in enumerate(fruit_vbox.children):
for child in box.children:
if type(child) != ipywidgets.widgets.widget_string.HTML:
if index == 0:
print("The selected fruit is:", child.value)
box_values['fruit'] = child.value
elif index == 1:
print("The select number of fruits is: ", str (child.value))
box_values['count'] = child.value
elif index == 2:
print("The selected type of fruit is: ", str(child.value))
box_values['type'] = child.value
box_values
###Output
_____no_output_____
###Markdown
Specify Layout of the Widgets/Boxes
###Code
form_item_layout = Layout(
display = 'flex',
flex_flow = 'row',
justify_content = 'space-between',
width = '70%',
align_items = 'initial',
)
veggie_label = HTML(
value='Select a vegetable from the list: ',
layout=Layout(width='20%', height='65px', border='solid 1px')
)
veggie_options = Dropdown(
options=['corn', 'lettuce', 'tomato', 'potato', 'spinach'],
layout=Layout(width='30%', height='65px')
)
veggie_box = HBox(
children=(veggie_label, veggie_options),
layout=Layout(width='100%', border='solid 2px', height='100px'))
veggie_box
###Output
_____no_output_____
|
Deep Learning/Convolutional Networks/Convolutional Neural Networks/transfer-learning/bottleneck_features.ipynb
|
###Markdown
Convolutional Neural Networks
---
In your upcoming project, you will download pre-computed bottleneck features. In this notebook, we'll show you how to calculate VGG-16 bottleneck features on a toy dataset. Note that unless you have a powerful GPU, computing the bottleneck features takes a significant amount of time.
1. Load and Preprocess Sample Images
Before supplying an image to a pre-trained network in Keras, there are some required preprocessing steps. You will learn more about this in the project; for now, we have implemented this functionality for you in the first code cell of the notebook. We have imported a very small dataset of 8 images and stored the preprocessed image input as `img_input`. Note that the dimensionality of this array is `(8, 224, 224, 3)`. In this case, each of the 8 images is a 3D tensor, with shape `(224, 224, 3)`.
###Code
%tensorflow_version 1.x
from keras.applications.vgg16 import preprocess_input
from keras.preprocessing import image
import numpy as np
import glob
img_paths = glob.glob("images/*.jpg")
def path_to_tensor(img_path):
# loads RGB image as PIL.Image.Image type
img = image.load_img(img_path, target_size=(224, 224))
# convert PIL.Image.Image type to 3D tensor with shape (224, 224, 3)
x = image.img_to_array(img)
# convert 3D tensor to 4D tensor with shape (1, 224, 224, 3) and return 4D tensor
return np.expand_dims(x, axis=0)
def paths_to_tensor(img_paths):
list_of_tensors = [path_to_tensor(img_path) for img_path in img_paths]
return np.vstack(list_of_tensors)
# calculate the image input. you will learn more about how this works in the project!
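# Note: for VGG16, preprocess_input follows the 'caffe' convention -- it
# converts images from RGB to BGR and subtracts the ImageNet per-channel
# means (no scaling to [0, 1]).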
img_input = preprocess_input(paths_to_tensor(img_paths))
print(img_input.shape)
###Output
Using TensorFlow backend.
###Markdown
2. Recap How to Import VGG-16
Recall how we import the VGG-16 network (including the final classification layer) that has been pre-trained on ImageNet.
###Code
from keras.applications.vgg16 import VGG16
model = VGG16()
model.summary()
###Output
WARNING:tensorflow:From /tensorflow-1.15.2/python3.6/tensorflow_core/python/ops/resource_variable_ops.py:1630: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.
Instructions for updating:
If using Keras pass *_constraint arguments to layers.
WARNING:tensorflow:From /tensorflow-1.15.2/python3.6/keras/backend/tensorflow_backend.py:4070: The name tf.nn.max_pool is deprecated. Please use tf.nn.max_pool2d instead.
Model: "vgg16"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) (None, 224, 224, 3) 0
_________________________________________________________________
block1_conv1 (Conv2D) (None, 224, 224, 64) 1792
_________________________________________________________________
block1_conv2 (Conv2D) (None, 224, 224, 64) 36928
_________________________________________________________________
block1_pool (MaxPooling2D) (None, 112, 112, 64) 0
_________________________________________________________________
block2_conv1 (Conv2D) (None, 112, 112, 128) 73856
_________________________________________________________________
block2_conv2 (Conv2D) (None, 112, 112, 128) 147584
_________________________________________________________________
block2_pool (MaxPooling2D) (None, 56, 56, 128) 0
_________________________________________________________________
block3_conv1 (Conv2D) (None, 56, 56, 256) 295168
_________________________________________________________________
block3_conv2 (Conv2D) (None, 56, 56, 256) 590080
_________________________________________________________________
block3_conv3 (Conv2D) (None, 56, 56, 256) 590080
_________________________________________________________________
block3_pool (MaxPooling2D) (None, 28, 28, 256) 0
_________________________________________________________________
block4_conv1 (Conv2D) (None, 28, 28, 512) 1180160
_________________________________________________________________
block4_conv2 (Conv2D) (None, 28, 28, 512) 2359808
_________________________________________________________________
block4_conv3 (Conv2D) (None, 28, 28, 512) 2359808
_________________________________________________________________
block4_pool (MaxPooling2D) (None, 14, 14, 512) 0
_________________________________________________________________
block5_conv1 (Conv2D) (None, 14, 14, 512) 2359808
_________________________________________________________________
block5_conv2 (Conv2D) (None, 14, 14, 512) 2359808
_________________________________________________________________
block5_conv3 (Conv2D) (None, 14, 14, 512) 2359808
_________________________________________________________________
block5_pool (MaxPooling2D) (None, 7, 7, 512) 0
_________________________________________________________________
flatten (Flatten) (None, 25088) 0
_________________________________________________________________
fc1 (Dense) (None, 4096) 102764544
_________________________________________________________________
fc2 (Dense) (None, 4096) 16781312
_________________________________________________________________
predictions (Dense) (None, 1000) 4097000
=================================================================
Total params: 138,357,544
Trainable params: 138,357,544
Non-trainable params: 0
_________________________________________________________________
###Markdown
For this network, `model.predict` returns a 1000-dimensional probability vector containing the predicted probability that an image belongs to each of the 1000 ImageNet categories. The dimensionality of the obtained output from passing `img_input` through the model is `(8, 1000)`. The first value of `8` merely denotes that 8 images were passed through the network.
###Code
model.predict(img_input).shape
###Output
WARNING:tensorflow:From /tensorflow-1.15.2/python3.6/keras/backend/tensorflow_backend.py:422: The name tf.global_variables is deprecated. Please use tf.compat.v1.global_variables instead.
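###Markdown
If you want to turn these probabilities into human-readable labels, Keras ships a `decode_predictions` helper for VGG-16. A minimal sketch, reusing the `img_input` array from above:
###Code
from keras.applications.vgg16 import decode_predictions
preds = model.predict(img_input)
# top-3 ImageNet labels (with probabilities) for the first image
print(decode_predictions(preds, top=3)[0])
###Output
_____no_output_____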
###Markdown
3. Import the VGG-16 Model, with the Final Fully-Connected Layers Removed
When performing transfer learning, we need to remove the final layers of the network, as they are too specific to the ImageNet database.
###Code
from keras.applications.vgg16 import VGG16
model = VGG16(include_top=False)
model.summary()
###Output
Model: "vgg16"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_2 (InputLayer) (None, None, None, 3) 0
_________________________________________________________________
block1_conv1 (Conv2D) (None, None, None, 64) 1792
_________________________________________________________________
block1_conv2 (Conv2D) (None, None, None, 64) 36928
_________________________________________________________________
block1_pool (MaxPooling2D) (None, None, None, 64) 0
_________________________________________________________________
block2_conv1 (Conv2D) (None, None, None, 128) 73856
_________________________________________________________________
block2_conv2 (Conv2D) (None, None, None, 128) 147584
_________________________________________________________________
block2_pool (MaxPooling2D) (None, None, None, 128) 0
_________________________________________________________________
block3_conv1 (Conv2D) (None, None, None, 256) 295168
_________________________________________________________________
block3_conv2 (Conv2D) (None, None, None, 256) 590080
_________________________________________________________________
block3_conv3 (Conv2D) (None, None, None, 256) 590080
_________________________________________________________________
block3_pool (MaxPooling2D) (None, None, None, 256) 0
_________________________________________________________________
block4_conv1 (Conv2D) (None, None, None, 512) 1180160
_________________________________________________________________
block4_conv2 (Conv2D) (None, None, None, 512) 2359808
_________________________________________________________________
block4_conv3 (Conv2D) (None, None, None, 512) 2359808
_________________________________________________________________
block4_pool (MaxPooling2D) (None, None, None, 512) 0
_________________________________________________________________
block5_conv1 (Conv2D) (None, None, None, 512) 2359808
_________________________________________________________________
block5_conv2 (Conv2D) (None, None, None, 512) 2359808
_________________________________________________________________
block5_conv3 (Conv2D) (None, None, None, 512) 2359808
_________________________________________________________________
block5_pool (MaxPooling2D) (None, None, None, 512) 0
=================================================================
Total params: 14,714,688
Trainable params: 14,714,688
Non-trainable params: 0
_________________________________________________________________
###Markdown
4. Extract Output of Final Max Pooling Layer
Now, the network stored in `model` is a truncated version of the VGG-16 network, where the final three fully-connected layers have been removed. In this case, `model.predict` returns a 3D array (with dimensions $7\times 7\times 512$) for each image, corresponding to the final max pooling layer of VGG-16. The dimensionality of the obtained output from passing `img_input` through the model is `(8, 7, 7, 512)`. The first value of `8` merely denotes that 8 images were passed through the network.
###Code
print(model.predict(img_input).shape)
###Output
(8, 7, 7, 512)
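###Markdown
These bottleneck features are the usual starting point for transfer learning: instead of re-training all of VGG-16, you fit a small classifier on top of the extracted features. A minimal sketch, not the project's required architecture; `n_classes` and the one-hot `labels` are assumptions you would supply yourself:
###Code
from keras.models import Sequential
from keras.layers import GlobalAveragePooling2D, Dense

features = model.predict(img_input)  # bottleneck features, shape (8, 7, 7, 512)
n_classes = 10                       # hypothetical number of target categories

top_model = Sequential()
top_model.add(GlobalAveragePooling2D(input_shape=features.shape[1:]))  # (7, 7, 512) -> (512,)
top_model.add(Dense(n_classes, activation='softmax'))
top_model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])
# top_model.fit(features, labels, epochs=20)  # labels: one-hot targets you provide
###Output
_____no_output_____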
|
doc/source/notebooks/models.ipynb
|
###Markdown
Handling models in GPflow--*James Hensman November 2015, January 2016*One of the key ingredients in GPflow is the model class, which allows the user to carefully control parameters. This notebook shows how some of these parameter control features work, and how to build your own model with GPflow. First we'll look at - how to view models and parameters - how to set parameter values - how to constrain parameters (e.g. variance > 0) - how to fix model parameters - how to apply priors to parameters - how to optimize modelsThen we'll show how to build a simple logistic regression model, demonstrating the ease of the parameter framework. GPy users should feel right at home, but there are some small differences.First, let's deal with the usual notebook boilerplate and make a simple GP regression model. See the Regression notebook for specifics of the model: we just want some parameters to play with.
###Code
import GPflow
import numpy as np
#build a very simple GPR model
X = np.random.rand(20,1)
Y = np.sin(12*X) + 0.66*np.cos(25*X) + np.random.randn(20,1)*0.01
m = GPflow.gpr.GPR(X, Y, kern=GPflow.kernels.Matern32(1) + GPflow.kernels.Linear(1))
###Output
_____no_output_____
###Markdown
Viewing, getting and setting parametersYou can display the state of the model in a terminal with `print m` (or `print(m)`), and by simply returning it in a notebook
###Code
print(m)
###Output
model.kern.matern32.variance transform:+ve prior:None
[ 1.]
model.kern.matern32.lengthscales transform:+ve prior:None
[ 1.]
model.kern.linear.variance transform:+ve prior:None
[ 1.]
model.likelihood.variance transform:+ve prior:None
[ 1.]
###Markdown
This model has four parameters. The kernel is made of the sum of two parts: the Matern32 kernel has a variance parameter and a lengthscale parameter; the linear kernel only has a variance parameter. There is also a parameter controlling the variance of the noise, as part of the likelihood. All of the model variables have been initialized at one. Individual parameters can be accessed in the same way as they are displayed in the table: to see all the parameters that are part of the likelihood, do
###Code
m.likelihood
###Output
_____no_output_____
###Markdown
This gets more useful with more complex models! To set the value of a parameter, just assign.
###Code
m.kern.matern32.lengthscales = 0.5
m.likelihood.variance = 0.01
m
###Output
_____no_output_____
###Markdown
Constraints and fixesGPflow helpfully creates a 'free' vector, containing an unconstrained representation of all the variables. Above, all the variables are constrained positive (see right hand table column), the unconstrained representation is given by $\alpha = \log(\exp(\theta)-1)$. You can get at this vector with `m.get_free_state()`.
###Code
print(m.get_free_state())
###Output
[ 0.54132327 -0.43275467 0.54132327 -4.60026653]
###Markdown
Constraints are handled by the `Transform` classes. You might prefer the constraint $\alpha = \log(\theta)$: this is easily done by setting the transform attribute on a parameter:
###Code
m.kern.matern32.lengthscales.transform = GPflow.transforms.Exp()
print(m.get_free_state())
###Output
[ 0.54132327 -0.69314918 0.54132327 -4.60026653]
###Markdown
The second free parameter, representing the unconstrained lengthscale, has changed (though the lengthscale itself remains the same). Another helpful feature is the ability to fix parameters. This is done by simply setting the fixed boolean to True: a 'fixed' notice appears in the representation and the corresponding variable is removed from the free state.
###Code
m.kern.linear.variance.fixed = True
m
print(m.get_free_state())
###Output
[ 0.54132327 -0.69314918 -4.60026653]
###Markdown
To unfix a parameter, just flip the boolean back. The transformation (+ve) reappears.
###Code
m.kern.linear.variance.fixed = False
m
###Output
_____no_output_____
###Markdown
PriorsPriors are set just like transforms and fixes, using members of the `GPflow.priors.` module. Let's set a Gamma prior on the Matern32 variance.
###Code
m.kern.matern32.variance.prior = GPflow.priors.Gamma(2,3)
m
###Output
_____no_output_____
###Markdown
OptimizationOptimization is done by calling `m.optimize()`, which has optional arguments that are passed through to `scipy.optimize.minimize` (we minimize the negative log-likelihood). Variables that have priors are MAP-estimated, i.e. we add the log prior to the log likelihood; the others are ML-estimated.
###Code
m.optimize()
m
###Output
compiling tensorflow function...
done
optimization terminated, setting model state
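###Markdown
Once optimized, the model can be used for prediction in the usual way. A quick sketch on a grid of test inputs, assuming `predict_y` behaves as in the Regression notebook:
###Code
xx = np.linspace(0, 1, 100)[:, None]
mean, var = m.predict_y(xx)   # predictive mean and variance at the test points
print(mean.shape, var.shape)
###Output
_____no_output_____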
###Markdown
Building new modelsTo build new models, you'll need to inherit from `GPflow.model.Model`. Parameters are instantiated with `GPflow.param.Param`. You may also be interested in `GPflow.param.Parameterized` which acts as a 'container' of `Param`s (e.g. kernels are Parameterized). In this very simple demo, we'll implement linear multiclass classification. There will be two parameters: a weight matrix and a 'bias' (offset). The key thing to implement is the `build_likelihood` method, which should return a tensorflow scalar representing the (log) likelihood. Param objects can be used inside `build_likelihood`: they will appear as appropriate (unconstrained) tensors.
###Code
import tensorflow as tf
class LinearMulticlass(GPflow.model.Model):
def __init__(self, X, Y):
GPflow.model.Model.__init__(self) # always call the parent constructor
self.X = X.copy() # X is a numpy array of inputs
self.Y = Y.copy() # Y is a 1-of-k representation of the labels
self.num_data, self.input_dim = X.shape
_, self.num_classes = Y.shape
#make some parameters
self.W = GPflow.param.Param(np.random.randn(self.input_dim, self.num_classes))
self.b = GPflow.param.Param(np.random.randn(self.num_classes))
# ^^ You must make the parameters attributes of the class for
# them to be picked up by the model. i.e. this won't work:
#
# W = GPflow.param.Param(... <-- must be self.W
def build_likelihood(self): # takes no arguments
p = tf.nn.softmax(tf.matmul(self.X, self.W) + self.b) # Param variables are used as tensorflow arrays.
return tf.reduce_sum(tf.log(p) * self.Y) # be sure to return a scalar
###Output
_____no_output_____
###Markdown
...and that's it. Let's build a really simple demo to show that it works.
###Code
X = np.vstack([np.random.randn(10,2) + [2,2],
np.random.randn(10,2) + [-2,2],
np.random.randn(10,2) + [2,-2]])
Y = np.repeat(np.eye(3), 10, 0)
from matplotlib import pyplot as plt
plt.style.use('ggplot')
%matplotlib inline
import matplotlib
matplotlib.rcParams['figure.figsize'] = (12,6)
plt.scatter(X[:,0], X[:,1], 100, np.argmax(Y, 1), lw=2, cmap=plt.cm.viridis)
m = LinearMulticlass(X, Y)
m
m.optimize()
m
xx, yy = np.mgrid[-4:4:200j, -4:4:200j]
X_test = np.vstack([xx.flatten(), yy.flatten()]).T
f_test = np.dot(X_test, m.W.value) + m.b._array
p_test = np.exp(f_test)
p_test /= p_test.sum(1)[:,None]
for i in range(3):
plt.contour(xx, yy, p_test[:,i].reshape(200,200), [0.5], colors='k', linewidths=1)
plt.scatter(X[:,0], X[:,1], 100, np.argmax(Y, 1), lw=2, cmap=plt.cm.viridis)
###Output
_____no_output_____
###Markdown
Handling models in GPflow--*James Hensman November 2015, January 2016*,*Artem Artemev December 2017*One of the key ingredients in GPflow is the model class, which allows the user to carefully control parameters. This notebook shows how some of these parameter control features work, and how to build your own model with GPflow. First we'll look at - How to view models and parameters - How to set parameter values - How to constrain parameters (e.g. variance > 0) - How to fix model parameters - How to apply priors to parameters - How to optimize modelsThen we'll show how to build a simple logistic regression model, demonstrating the ease of the parameter framework. GPy users should feel right at home, but there are some small differences.First, let's deal with the usual notebook boilerplate and make a simple GP regression model. See the Regression notebook for specifics of the model: we just want some parameters to play with.
###Code
import gpflow
import numpy as np
###Output
_____no_output_____
###Markdown
Create a very simple GPR model without building it in TensorFlow graph.
###Code
np.random.seed(1)
X = np.random.rand(20, 1)
Y = np.sin(12 * X) + 0.66 * np.cos(25 * X) + np.random.randn(20,1) * 0.01
with gpflow.defer_build():
m = gpflow.models.GPR(X, Y, kern=gpflow.kernels.Matern32(1) + gpflow.kernels.Linear(1))
###Output
_____no_output_____
###Markdown
Viewing, getting and setting parametersYou can display the state of the model in a terminal with `print(m)`, and by simply returning it in a notebook:
###Code
m
###Output
_____no_output_____
###Markdown
This model has four parameters. The kernel is made of the sum of two parts: the first (counting from zero) is a Matern32 kernel that has a variance parameter and a lengthscale parameter; the second is a linear kernel that only has a variance parameter. There is also a parameter controlling the variance of the noise, as part of the likelihood. All of the model variables have been initialized at one. Individual parameters can be accessed in the same way as they are displayed in the table: to see all the parameters that are part of the likelihood, do
###Code
m.likelihood
###Output
_____no_output_____
###Markdown
This gets more useful with more complex models! To set the value of a parameter, just assign.
###Code
m.kern.kernels[0].lengthscales = 0.5
m.likelihood.variance = 0.01
m
###Output
_____no_output_____
###Markdown
Constraints and trainable variablesGPflow helpfully creates an unconstrained representation of all the variables. Above, all the variables are constrained positive (see right hand table column), the unconstrained representation is given by $\alpha = \log(\exp(\theta)-1)$. `read_trainables()` returns the constrained values:
###Code
m.read_trainables()
###Output
_____no_output_____
###Markdown
Each parameter has an `unconstrained_tensor` attribute that allows accessing the unconstrained value as a tensorflow Tensor (though only after the model has been compiled). We can also check the unconstrained value as follows:
###Code
p = m.kern.kernels[0].lengthscales
p.transform.backward(p.value)
###Output
_____no_output_____
###Markdown
Constraints are handled by the `Transform` classes. You might prefer the constraint $\alpha = \log(\theta)$: this is easily done by changing the transform attribute on a parameter, with one simple condition - the model has not been compiled yet:
###Code
m.kern.kernels[0].lengthscales.transform = gpflow.transforms.Exp()
###Output
_____no_output_____
###Markdown
Though the lengthscale itself remains the same, the unconstrained lengthscale has changed:
###Code
p.transform.backward(p.value)
###Output
_____no_output_____
###Markdown
Another helpful feature is the ability to fix parameters. This is done by simply setting the `trainable` attribute to False: this is shown in the 'trainable' column of the representation, and the corresponding variable is removed from the free state.
###Code
m.kern.kernels[1].variance.trainable = False
m
m.read_trainables()
###Output
_____no_output_____
###Markdown
To unfix a parameter, just flip the boolean back and set the parameter to be trainable again.
###Code
m.kern.kernels[1].variance.trainable = True
m
###Output
_____no_output_____
###Markdown
PriorsPriors are set just like transforms and trainability, using members of the `gpflow.priors` module. Let's set a Gamma prior on the Matern32 variance.
###Code
m.kern.kernels[0].variance.prior = gpflow.priors.Gamma(2, 3)
m
###Output
_____no_output_____
###Markdown
OptimizationOptimization is done by creating an instance of an optimizer, in our case `gpflow.train.ScipyOptimizer` (whose optional arguments are passed through to `scipy.optimize.minimize`; we minimize the negative log-likelihood), and calling the `minimize` method of that optimizer with the model as the optimization target. Variables that have priors are MAP-estimated, i.e. we add the log prior to the log likelihood; otherwise Maximum Likelihood is used.
###Code
m.compile()
opt = gpflow.train.ScipyOptimizer()
opt.minimize(m)
###Output
WARNING:gpflow.logdensities:Shape of x must be 2D at computation.
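###Markdown
After optimization you can inspect the objective directly. A small sketch: `compute_log_likelihood` evaluates the model's log likelihood at the current parameter values, and `read_trainables()` shows where the optimizer ended up:
###Code
print(m.compute_log_likelihood())
m.read_trainables()
###Output
_____no_output_____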
###Markdown
Building new modelsTo build new models, you'll need to inherit from `gpflow.models.Model`. Parameters are instantiated with `gpflow.Param`. You may also be interested in `gpflow.params.Parameterized` which acts as a 'container' of `Param`s (e.g. kernels are Parameterized). In this very simple demo, we'll implement linear multiclass classification. There will be two parameters: a weight matrix and a 'bias' (offset). The key thing to implement is the private `_build_likelihood` method, which should return a tensorflow scalar representing the (log) likelihood. By decorating the function with `@gpflow.params_as_tensors`, Param objects can be used inside `_build_likelihood`: they will appear as appropriate (constrained) tensors.
###Code
import tensorflow as tf
class LinearMulticlass(gpflow.models.Model):
def __init__(self, X, Y, name=None):
super().__init__(name=name) # always call the parent constructor
self.X = X.copy() # X is a numpy array of inputs
self.Y = Y.copy() # Y is a 1-of-k (one-hot) representation of the labels
self.num_data, self.input_dim = X.shape
_, self.num_classes = Y.shape
#make some parameters
self.W = gpflow.Param(np.random.randn(self.input_dim, self.num_classes))
self.b = gpflow.Param(np.random.randn(self.num_classes))
# ^^ You must make the parameters attributes of the class for
# them to be picked up by the model. i.e. this won't work:
#
# W = gpflow.Param(... <-- must be self.W
@gpflow.params_as_tensors
def _build_likelihood(self): # takes no arguments
p = tf.nn.softmax(tf.matmul(self.X, self.W) + self.b) # Param variables are used as tensorflow arrays.
return tf.reduce_sum(tf.log(p) * self.Y) # be sure to return a scalar
###Output
_____no_output_____
###Markdown
...and that's it. Let's build a really simple demo to show that it works.
###Code
np.random.seed(123)
X = np.vstack([np.random.randn(10,2) + [2,2],
np.random.randn(10,2) + [-2,2],
np.random.randn(10,2) + [2,-2]])
Y = np.repeat(np.eye(3), 10, 0)
from matplotlib import pyplot as plt
plt.style.use('ggplot')
%matplotlib inline
import matplotlib
matplotlib.rcParams['figure.figsize'] = (12,6)
plt.scatter(X[:,0], X[:,1], 100, np.argmax(Y, 1), lw=2, cmap=plt.cm.viridis);
m = LinearMulticlass(X, Y)
m
opt = gpflow.train.ScipyOptimizer()
opt.minimize(m)
m
xx, yy = np.mgrid[-4:4:200j, -4:4:200j]
X_test = np.vstack([xx.flatten(), yy.flatten()]).T
f_test = np.dot(X_test, m.W.read_value()) + m.b.read_value()
p_test = np.exp(f_test)
p_test /= p_test.sum(1)[:,None]
plt.figure(figsize=(12, 6))
for i in range(3):
plt.contour(xx, yy, p_test[:,i].reshape(200,200), [0.5], colors='k', linewidths=1)
plt.scatter(X[:,0], X[:,1], 100, np.argmax(Y, 1), lw=2, cmap=plt.cm.viridis);
###Output
_____no_output_____
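###Markdown
As a quick check that the classifier has actually separated the clusters, you can compare predicted and true classes on the training points. A small numpy-only sketch reusing the fitted `W` and `b`:
###Code
f = np.dot(X, m.W.read_value()) + m.b.read_value()
pred = np.argmax(f, axis=1)   # softmax is monotone, so argmax of the logits suffices
true = np.argmax(Y, axis=1)
print('training accuracy:', np.mean(pred == true))
###Output
_____no_output_____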
###Markdown
Handling models in GPflow--*James Hensman November 2015, January 2016*,*Artem Artemev December 2017*One of the key ingredients in GPflow is the model class, which allows the user to carefully control parameters. This notebook shows how some of these parameter control features work, and how to build your own model with GPflow. First we'll look at - How to view models and parameters - How to set parameter values - How to constrain parameters (e.g. variance > 0) - How to fix model parameters - How to apply priors to parameters - How to optimize modelsThen we'll show how to build a simple logistic regression model, demonstrating the ease of the parameter framework. GPy users should feel right at home, but there are some small differences.First, let's deal with the usual notebook boilerplate and make a simple GP regression model. See the Regression notebook for specifics of the model: we just want some parameters to play with.
###Code
import gpflow
import numpy as np
###Output
_____no_output_____
###Markdown
Create a very simple GPR model without building it in TensorFlow graph.
###Code
with gpflow.defer_build():
X = np.random.rand(20, 1)
Y = np.sin(12 * X) + 0.66 * np.cos(25 * X) + np.random.randn(20,1) * 0.01
m = gpflow.models.GPR(X, Y, kern=gpflow.kernels.Matern32(1) + gpflow.kernels.Linear(1))
###Output
_____no_output_____
###Markdown
Viewing, getting and setting parametersYou can display the state of the model in a terminal with `print m` (or `print(m)`), and by simply returning it in a notebook
###Code
m.as_pandas_table()
###Output
_____no_output_____
###Markdown
This model has four parameters. The kernel is made of the sum of two parts: the Matern32 kernel has a variance parameter and a lengthscale parameter; the linear kernel only has a variance parameter. There is also a parameter controlling the variance of the noise, as part of the likelihood. All of the model variables have been initialized at one. Individual parameters can be accessed in the same way as they are displayed in the table: to see all the parameters that are part of the likelihood, do
###Code
m.likelihood.as_pandas_table()
###Output
_____no_output_____
###Markdown
This gets more useful with more complex models! To set the value of a parameter, just assign.
###Code
m.kern.kernels[0].lengthscales = 0.5
m.likelihood.variance = 0.01
m.as_pandas_table()
###Output
_____no_output_____
###Markdown
Constraints and trainable variablesGPflow helpfully creates an unconstrained representation of all the variables. Above, all the variables are constrained positive (see right hand table column), the unconstrained representation is given by $\alpha = \log(\exp(\theta)-1)$.
###Code
m.read_trainables()
###Output
_____no_output_____
###Markdown
Constraints are handled by the `Transform` classes. You might prefer the constraint $\alpha = \log(\theta)$: this is easily done by setting the transform attribute on a parameter, with one simple condition - the model has not been compiled yet:
###Code
m.kern.kernels[0].lengthscales.transform = gpflow.transforms.Exp()
m.read_trainables()
###Output
_____no_output_____
###Markdown
The second free parameter, representing the unconstrained lengthscale, has changed (though the lengthscale itself remains the same). Another helpful feature is the ability to fix parameters. This is done by simply setting the fixed boolean to True: a 'fixed' notice appears in the representation and the corresponding variable is removed from the free state.
###Code
m.kern.kernels[1].variance.trainable = False
m.as_pandas_table()
m.read_trainables()
###Output
_____no_output_____
###Markdown
To unfix a parameter, just flip the boolean back. The transformation (+ve) reappears.
###Code
m.kern.kernels[1].variance.trainable = True
m.as_pandas_table()
###Output
_____no_output_____
###Markdown
PriorsPriors are set just like transforms and fixes, using members of the `gpflow.priors.` module. Let's set a Gamma prior on the Matern32 variance.
###Code
m.kern.kernels[0].variance.prior = gpflow.priors.Gamma(2,3)
m.as_pandas_table()
###Output
_____no_output_____
###Markdown
OptimizationOptimization is done by creating an instance of an optimizer, in our case `gpflow.train.ScipyOptimizer` (whose optional arguments are passed through to `scipy.optimize.minimize`; we minimize the negative log-likelihood), and calling the `minimize` method of that optimizer with the model as the optimization target. Variables that have priors are MAP-estimated, i.e. we add the log prior to the log likelihood; the others are ML-estimated.
###Code
m.compile()
opt = gpflow.train.ScipyOptimizer()
opt.minimize(m)
###Output
INFO:tensorflow:Optimization terminated with:
Message: b'CONVERGENCE: NORM_OF_PROJECTED_GRADIENT_<=_PGTOL'
Objective function value: 2.634113
Number of iterations: 34
Number of functions evaluations: 39
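###Markdown
To see where the optimizer ended up, you can read back the parameter values. A sketch: `read_values()` returns a dictionary of constrained parameter values (assuming it behaves like `read_trainables()` but also includes non-trainable parameters):
###Code
m.read_values()
###Output
_____no_output_____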
###Markdown
Building new modelsTo build new models, you'll need to inherit from `gpflow.models.Model`. Parameters are instantiated with `gpflow.Param`. You may also be interested in `gpflow.params.Parameterized` which acts as a 'container' of `Param`s (e.g. kernels are Parameterized). In this very simple demo, we'll implement linear multiclass classification. There will be two parameters: a weight matrix and a 'bias' (offset). The key thing to implement is the private `_build_likelihood` method, which should return a tensorflow scalar representing the (log) likelihood. Param objects can be used inside `_build_likelihood`: they will appear as appropriate (constrained) tensors.
###Code
import tensorflow as tf
class LinearMulticlass(gpflow.models.Model):
def __init__(self, X, Y, name=None):
super().__init__(name=name) # always call the parent constructor
self.X = X.copy() # X is a numpy array of inputs
self.Y = Y.copy() # Y is a 1-of-k representation of the labels
self.num_data, self.input_dim = X.shape
_, self.num_classes = Y.shape
#make some parameters
self.W = gpflow.Param(np.random.randn(self.input_dim, self.num_classes))
self.b = gpflow.Param(np.random.randn(self.num_classes))
# ^^ You must make the parameters attributes of the class for
# them to be picked up by the model. i.e. this won't work:
#
# W = gpflow.Param(... <-- must be self.W
@gpflow.params_as_tensors
def _build_likelihood(self): # takes no arguments
p = tf.nn.softmax(tf.matmul(self.X, self.W) + self.b) # Param variables are used as tensorflow arrays.
return tf.reduce_sum(tf.log(p) * self.Y) # be sure to return a scalar
###Output
_____no_output_____
###Markdown
...and that's it. Let's build a really simple demo to show that it works.
###Code
X = np.vstack([np.random.randn(10,2) + [2,2],
np.random.randn(10,2) + [-2,2],
np.random.randn(10,2) + [2,-2]])
Y = np.repeat(np.eye(3), 10, 0)
from matplotlib import pyplot as plt
plt.style.use('ggplot')
%matplotlib inline
import matplotlib
matplotlib.rcParams['figure.figsize'] = (12,6)
plt.scatter(X[:,0], X[:,1], 100, np.argmax(Y, 1), lw=2, cmap=plt.cm.viridis)
m = LinearMulticlass(X, Y)
m.as_pandas_table()
opt = gpflow.train.ScipyOptimizer()
opt.minimize(m)
m.as_pandas_table()
xx, yy = np.mgrid[-4:4:200j, -4:4:200j]
X_test = np.vstack([xx.flatten(), yy.flatten()]).T
f_test = np.dot(X_test, m.W.read_value()) + m.b.read_value()
p_test = np.exp(f_test)
p_test /= p_test.sum(1)[:,None]
for i in range(3):
plt.contour(xx, yy, p_test[:,i].reshape(200,200), [0.5], colors='k', linewidths=1)
plt.scatter(X[:,0], X[:,1], 100, np.argmax(Y, 1), lw=2, cmap=plt.cm.viridis)
###Output
_____no_output_____
###Markdown
Handling models in GPflow--*James Hensman November 2015, January 2016*,*Artem Artemev December 2017*One of the key ingredients in GPflow is the model class, which allows the user to carefully control parameters. This notebook shows how some of these parameter control features work, and how to build your own model with GPflow. First we'll look at - How to view models and parameters - How to set parameter values - How to constrain parameters (e.g. variance > 0) - How to fix model parameters - How to apply priors to parameters - How to optimize modelsThen we'll show how to build a simple logistic regression model, demonstrating the ease of the parameter framework. GPy users should feel right at home, but there are some small differences.First, let's deal with the usual notebook boilerplate and make a simple GP regression model. See the Regression notebook for specifics of the model: we just want some parameters to play with.
###Code
import gpflow
import numpy as np
###Output
_____no_output_____
###Markdown
Create a very simple GPR model without building it in TensorFlow graph.
###Code
with gpflow.defer_build():
X = np.random.rand(20, 1)
Y = np.sin(12 * X) + 0.66 * np.cos(25 * X) + np.random.randn(20,1) * 0.01
m = gpflow.models.GPR(X, Y, kern=gpflow.kernels.Matern32(1) + gpflow.kernels.Linear(1))
###Output
_____no_output_____
###Markdown
Viewing, getting and setting parametersYou can display the state of the model in a terminal with `print m` (or `print(m)`), and by simply returning it in a notebook
###Code
m.as_pandas_table()
###Output
_____no_output_____
###Markdown
This model has four parameters. The kernel is made of the sum of two parts: the Matern32 kernel has a variance parameter and a lengthscale parameter; the linear kernel only has a variance parameter. There is also a parameter controlling the variance of the noise, as part of the likelihood. All of the model variables have been initialized at one. Individual parameters can be accessed in the same way as they are displayed in the table: to see all the parameters that are part of the likelihood, do
###Code
m.likelihood.as_pandas_table()
###Output
_____no_output_____
###Markdown
This gets more useful with more complex models! To set the value of a parameter, just assign.
###Code
m.kern.matern32.lengthscales = 0.5
m.likelihood.variance = 0.01
m.as_pandas_table()
###Output
_____no_output_____
###Markdown
Constraints and trainable variablesGPflow helpfully creates an unconstrained representation of all the variables. Above, all the variables are constrained positive (see right hand table column), the unconstrained representation is given by $\alpha = \log(\exp(\theta)-1)$.
###Code
m.read_trainables()
###Output
_____no_output_____
###Markdown
Constraints are handled by the `Transform` classes. You might prefer the constraint $\alpha = \log(\theta)$: this is easily done by setting the transform attribute on a parameter, with one simple condition - the model has not been compiled yet:
###Code
m.kern.matern32.lengthscales.transform = gpflow.transforms.Exp()
m.read_trainables()
###Output
_____no_output_____
###Markdown
The second free parameter, representing the unconstrained lengthscale, has changed (though the lengthscale itself remains the same). Another helpful feature is the ability to fix parameters. This is done by simply setting the fixed boolean to True: a 'fixed' notice appears in the representation and the corresponding variable is removed from the free state.
###Code
m.kern.linear.variance.trainable = False
m.as_pandas_table()
m.read_trainables()
###Output
_____no_output_____
###Markdown
To unfix a parameter, just flip the boolean back. The transformation (+ve) reappears.
###Code
m.kern.linear.variance.trainable = True
m.as_pandas_table()
###Output
_____no_output_____
###Markdown
PriorsPriors are set just like transforms and fixes, using members of the `gpflow.priors.` module. Let's set a Gamma prior on the Matern32 variance.
###Code
m.kern.matern32.variance.prior = gpflow.priors.Gamma(2,3)
m.as_pandas_table()
###Output
_____no_output_____
###Markdown
OptimizationOptimization is done by creating an instance of an optimizer, in our case `gpflow.train.ScipyOptimizer` (whose optional arguments are passed through to `scipy.optimize.minimize`; we minimize the negative log-likelihood), and calling the `minimize` method of that optimizer with the model as the optimization target. Variables that have priors are MAP-estimated, i.e. we add the log prior to the log likelihood; the others are ML-estimated.
###Code
m.compile()
opt = gpflow.train.ScipyOptimizer()
opt.minimize(m)
###Output
INFO:tensorflow:Optimization terminated with:
Message: b'CONVERGENCE: NORM_OF_PROJECTED_GRADIENT_<=_PGTOL'
Objective function value: 2.634113
Number of iterations: 34
Number of functions evaluations: 39
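###Markdown
`ScipyOptimizer` is not the only option: the same `gpflow.train` module wraps gradient-based TensorFlow optimizers. A sketch using Adam; the learning rate and iteration count here are illustrative, not tuned:
###Code
adam = gpflow.train.AdamOptimizer(learning_rate=0.01)
adam.minimize(m, maxiter=1000)
m.as_pandas_table()
###Output
_____no_output_____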
###Markdown
Building new modelsTo build new models, you'll need to inherit from `gpflow.models.Model`. Parameters are instantiated with `gpflow.Param`. You may also be interested in `gpflow.params.Parameterized` which acts as a 'container' of `Param`s (e.g. kernels are Parameterized). In this very simple demo, we'll implement linear multiclass classification. There will be two parameters: a weight matrix and a 'bias' (offset). The key thing to implement is the private `_build_likelihood` method, which should return a tensorflow scalar representing the (log) likelihood. Param objects can be used inside `_build_likelihood`: they will appear as appropriate (constrained) tensors.
###Code
import tensorflow as tf
class LinearMulticlass(gpflow.models.Model):
def __init__(self, X, Y, name=None):
super().__init__(name=name) # always call the parent constructor
self.X = X.copy() # X is a numpy array of inputs
self.Y = Y.copy() # Y is a 1-of-k representation of the labels
self.num_data, self.input_dim = X.shape
_, self.num_classes = Y.shape
#make some parameters
self.W = gpflow.Param(np.random.randn(self.input_dim, self.num_classes))
self.b = gpflow.Param(np.random.randn(self.num_classes))
# ^^ You must make the parameters attributes of the class for
# them to be picked up by the model. i.e. this won't work:
#
# W = gpflow.Param(... <-- must be self.W
@gpflow.params_as_tensors
def _build_likelihood(self): # takes no arguments
p = tf.nn.softmax(tf.matmul(self.X, self.W) + self.b) # Param variables are used as tensorflow arrays.
return tf.reduce_sum(tf.log(p) * self.Y) # be sure to return a scalar
###Output
_____no_output_____
###Markdown
...and that's it. Let's build a really simple demo to show that it works.
###Code
X = np.vstack([np.random.randn(10,2) + [2,2],
np.random.randn(10,2) + [-2,2],
np.random.randn(10,2) + [2,-2]])
Y = np.repeat(np.eye(3), 10, 0)
from matplotlib import pyplot as plt
plt.style.use('ggplot')
%matplotlib inline
import matplotlib
matplotlib.rcParams['figure.figsize'] = (12,6)
plt.scatter(X[:,0], X[:,1], 100, np.argmax(Y, 1), lw=2, cmap=plt.cm.viridis)
m = LinearMulticlass(X, Y)
m.as_pandas_table()
opt = gpflow.train.ScipyOptimizer()
opt.minimize(m)
m.as_pandas_table()
xx, yy = np.mgrid[-4:4:200j, -4:4:200j]
X_test = np.vstack([xx.flatten(), yy.flatten()]).T
f_test = np.dot(X_test, m.W.read_value()) + m.b.read_value()
p_test = np.exp(f_test)
p_test /= p_test.sum(1)[:,None]
for i in range(3):
plt.contour(xx, yy, p_test[:,i].reshape(200,200), [0.5], colors='k', linewidths=1)
plt.scatter(X[:,0], X[:,1], 100, np.argmax(Y, 1), lw=2, cmap=plt.cm.viridis)
###Output
_____no_output_____
|
day_01_notebooks/SPARK_Day1_Victoria.ipynb
|
###Markdown
 SPARK | Day 1 | July 12th, 2021The agenda for today:1. About the Tools2. Introduction to Pandas (What is it?, Reading Excel and CSV Files, Viewing DataFrames)3. Indexing DataFrames4. Cleaning DataFrames5. Version Control - Introduction to Git6. How to Give Feedback About the tools Google Colab**Google Colab** is an online version of Jupyter Notebooks. It has cells where you can write notes and execute lines of code. We will be running the programming language called **Python** within these notebooks.To use Google Colab, you will need to have [Google Chrome](https://www.google.com/chrome/) installed on your computer. You will also need a Google Account to be able to access [Google Drive](https://drive.google.com), where the Colab notebooks can be accessed. Pandas What is Pandas?Pandas are adorable little animals. But they are also an open-source **data analysis and manipulation tool**. It's built on top of Python. Programmers like animals — can you tell?**You can think of Pandas as a cuter, better, and more powerful version of Excel.** Importing the Pandas Library**Pandas** is a library that is built on top of the **Numpy** library.**Libraries** are collections of functions and methods that allow us to perform specific actions. ---Before we can use Pandas and Numpy, we need to **import** their libraries in Python. This is the standard way of importing the Pandas and Numpy libraries. When we import pandas as **pd**, every time we want to access the pandas library we can just write "pd" instead of "pandas". Likewise, importing **numpy** as np allows us to access the numpy library with the shorter **np**.
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Reading Excel (.xlsx) and CSV (.csv) FilesWe can use Pandas to read and import data in different file formats.Two common data file formats are **(1) Excel** and **(2) CSV**. ExcelExcel (.xlsx) files are workbook files that are specifically created for Microsoft Excel, but they work in Google Sheets as well. --- Reading Excel Files To import Excel files in Pandas we use the [pd.read_excel](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_excel.html) function> df = pd.read_excel("path-of-file.xlsx")In this case:* **df** is the variable that we are saving the data frame to* **=** is how we assign data to a variable* **pd** is the name of the *library** **read_excel** is the name of the *function** **path-of-file.xlsx** is the *parameter* (i.e. the name of the file) Exercise 1: Reading your first excel fileTry importing your first excel file. The name of the file is **https://github.com/AutismResearchCentre/Spark_2020_Day_01/blob/master/US_School_Census_2019.xlsx?raw=true** (*note*: the name of the file is in quotations because it is a string. We put all strings in single quotes ('string') or double quotes ("string").) Save the string to a variable called **census_2019**.
###Code
# TO-DO: Read in the US School Census Data file from 2019
path = "https://github.com/AutismResearchCentre/Spark_Datasets/blob/master/US_School_Census_2019.xlsx?raw=true"
# [VARIABLE_NAME] = pd.read_excel([PATH_NAME])
census_2019 = pd.read_excel(path)
###Output
_____no_output_____
###Markdown
To view the data frame just call the name of the variable that you saved the data frame to.
###Code
# Type the name of the variable of the data frame and run the cell
census_2019
###Output
_____no_output_____
###Markdown
CSVCSV stands for Comma Separated Values. It is a plain text file where the values are separated by commas. CSV files can be opened by any spreadsheet program (e.g., Excel, Open Office, Google Sheets). You can also open and edit CSV files in simple text editors (e.g., NotePad). Reading CSV Files To import CSV files in Pandas we use the [pd.read_csv](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html) function`df = pd.read_csv("path-of-file.csv")`In this case:* **df** is the variable that we are saving the data frame to* **=** is how we assign data to a variable* **pd** is the name of the *library** **read_csv** is the name of the *function** **path-of-file.csv** is the *parameter* (i.e. the name of the file) Exercise 2: Reading your first csv fileTry importing your first CSV file. It's very similar to reading an Excel file. The path of the file is **https://raw.githubusercontent.com/AutismResearchCentre/Spark_Datasets/master/US_School_Census_2020.csv** Save the string to a variable called **census_2020**.BONUS: What do you think would happen if we tried to use pd.read_csv to read an Excel file?
###Code
# TO-DO: Read your first CSV file below
census_2020 = pd.read_csv("https://raw.githubusercontent.com/AutismResearchCentre/Spark_Datasets/master/US_School_Census_2020.csv")
###Output
_____no_output_____
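###Markdown
`pd.read_csv` is not limited to comma-separated files: the `sep` parameter lets you read other delimiters. A small sketch using an in-memory tab-separated table (the data here is made up for illustration):
###Code
import io
tsv_text = "name\tage\nAda\t12\nGrace\t13"
pd.read_csv(io.StringIO(tsv_text), sep="\t")  # sep="\t" tells Pandas the values are tab-separated
###Output
_____no_output_____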
###Markdown
Viewing Data FramesThe DataFrame is a powerful tool in Pandas. To be able to use it properly, we need to know how to view the data in our DataFrames.As seen above, by simply calling the name of our DataFrame we can see our rows and columns. If our DataFrame is super large, Jupyter Notebook will just display the first and last rows and columns.
###Code
census_2020 = pd.read_csv("https://raw.githubusercontent.com/AutismResearchCentre/Spark_Datasets/master/US_School_Census_2020.csv")
# Simply call the name of the DataFrame to display table
census_2020
###Output
_____no_output_____
###Markdown
Shape of the Data FrameHow do we know how many data points (rows) we have? How do we know how many features (columns) we have?To determine how much data we have, we can use [df.shape](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.shape.html), which tells us the dimensions of the DataFrame: ```df.shape```* **df** is the name of the Pandas DataFrame* **.shape** is the attribute of the data frame. df.shape will return a tuple in the form of (x, y)* **x** represents the number of rows* **y** represents the number of columns
###Code
census_2020.shape
###Output
_____no_output_____
###Markdown
Therefore, our data frame has **250 rows** and **60 columns**. HeadTo view the first rows of a DataFrame, we can use [df.head()](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.head.html)---```df.head()```By default, `df.head()` will display the **first 5 rows** of the DataFrame.```df.head(n)```Where **n is an integer** value that represents the number of rows. For example, `df.head(15)` will display the first 15 rows.
###Code
census_2020.head()
###Output
_____no_output_____
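###Markdown
For example, passing an integer shows that many rows from the top:
###Code
census_2020.head(10)  # first 10 rows instead of the default 5
###Output
_____no_output_____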
###Markdown
TailTo view the last rows of a DataFrame, we can use [df.tail()](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.tail.html)```df.tail()```By default, `df.tail()` will display the **last 5 rows** of the DataFrame.```df.tail(n)```Where **n is an integer** value that represents the number of rows. For example, `df.tail(15)` will display the last 15 rows. Exercise 3: Males vs FemalesDisplay the last 6 rows of the DataFrame. How many of the students are female? How many are male?
###Code
# TO-DO: Display the last 6 rows of the Census_2020 DataFrame
census_2020.tail(6)
# TO-DO: How many students are female
print ('X students are female, and Y students are male') # Print statments let us see string in the console
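# One possible approach (a sketch, not from the original notebook): the column name
# 'Gender' below is an assumption -- check census_2020.columns for the actual name.
# census_2020['Gender'].value_counts()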
###Output
X students are female, and Y students are male
###Markdown
Indexing DataFrames DataFrame IndexThe **DataFrame index** is the row labels of the table. * DataFrame indexes can be **numbers, words, or phrases*** DataFrame indexes have to be **unique** (you can think of an index as an ID number). We can't have two people with the same ID or name because that would be confusing. Integer DataFrame IndexFor the Census 2020 data, we can see that the DataFrame index is the bolded values **(0, 1, 2, 3 ...)**. This helps us identify each row.
###Code
census_2020.head()
###Output
_____no_output_____
###Markdown
String DataFrame Index**String** is just another way to say a series of characters. Examples of strings include:* "Hello"* "Password123"* "12345"* 'Bob the Builder!'* "$%&$^$^!* *&%*salkdfjlksf"Notice how strings are wrapped in double ("") or single quotations (''). Strings can also be DataFrame indexes. Take the example below of the cities DataFrame. The index is the name of Finnish cities (e.g., Helsinki, Espoo, Tampere)
###Code
# Getting the cities DataFrame
cities = pd.DataFrame(['Helsinki', 'Espoo', 'Tampere', 'Vantaa', 'Oulu'])
cities['Population'] = [643272, 279044, 231853, 223027, 201810]
cities['Total area'] = [715.48, 528.03, 689.59, 240.35, 3817.52]
cities.set_index([0], inplace=True)
# Viewing the cities DataFrame
cities
###Output
_____no_output_____
###Markdown
Setting a DataFrame IndexWe can actually change the index of DataFrames. [df.set_index](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.set_index.html) allows us to set the DataFrame index (row labels) with existing columns```df.set_index('[COLUMN_NAME]', inplace=True)```* **df** is the name of the DataFrame* **.set_index** is the function that we are applying to the DataFrame* **'[COLUMN_NAME]'** is the name of the pre-existing column that we would like to use as the new index* **inplace** gives us the option to save over our pre-existing DataFrame. If we set *inplace=True*, our edited DataFrame overwrites the original. If we set *inplace=False*, then we keep our original DataFrame. By default, inplace is set to False.
###Code
# Changing the index from city name to population
cities.set_index('Population')
# The original cities still exists
cities
# Now lets use set_index with the inplace attribute
cities.set_index('Population', inplace=True)
# When we call cities we can see that we have saved our changes
cities
###Output
_____no_output_____
###Markdown
Exercise 3: Mountains! Below is a table about some mountains. What column would be an appropriate index for the data? Why? Create a new index for the mtns DataFrame. (Do not forget to use the `inplace=True` attribute).
###Code
# TO-DO: Run this cell to get assign the mtns DataFrame
mtns = pd.DataFrame([
{'name': 'Mount Everest',
'height (m)': 8848,
'summited': 1953,
'mountain range': 'Mahalangur Himalaya'},
{'name': 'K2',
'height (m)': 8611,
'summited': 1954,
'mountain range': 'Baltoro Karakoram'},
{'name': 'Kangchenjunga',
'height (m)': 8586,
'summited': 1955,
'mountain range': 'Kangchenjunga Himalaya'},
{'name': 'Lhotse',
'height (m)': 8516,
'summited': 1956,
'mountain range': 'Mahalangur Himalaya'},
])
mtns
# TO-DO: Set an new index for the table
mtns = pd.DataFrame([
{'name': 'Mount Everest',
'height (m)': 8848,
'summited': 1953,
'mountain range': 'Mahalangur Himalaya'},
{'name': 'K2',
'height (m)': 8611,
'summited': 1954,
'mountain range': 'Baltoro Karakoram'},
{'name': 'Kangchenjunga',
'height (m)': 8586,
'summited': 1955,
'mountain range': 'Kangchenjunga Himalaya'},
{'name': 'Lhotse',
'height (m)': 8516,
'summited': 1956,
'mountain range': 'Mahalangur Himalaya'},
])
mtns.set_index('name', inplace=True)
mtns
###Output
_____no_output_____
###Markdown
Loc and Label-based Indexing[df.loc[]](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.loc.html) is used to slice and index a DataFrame based on the labels of the columns and the rows.The inputs of .loc can take several forms, shown below.--- Using Loc for a Single Row```df.loc['ROW_LABEL']```Returns a single row of the DataFrame
###Code
# Read the csv file
hospitals = pd.read_csv("https://raw.githubusercontent.com/AutismResearchCentre/Spark_Datasets/master/Toronto_Hospitals.csv")
# Set the index to Hospital "Name"
hospitals.set_index('Name', inplace=True)
# View hospital DataFrame
hospitals
# Get the row for Holland Bloorview
hospitals.loc['Holland Bloorview'] # Case sensitive
# Get the row for Centric Health Surgical Centre
hospitals.loc['Centric Health Surgical Centre']
###Output
_____no_output_____
###Markdown
Using Loc for a Single Cell```df.loc[ROW_NAME, COLUMN_NAME]```Returns a specific cell of the DataFrame
###Code
# Get the year Etobicoke General was founded
hospitals.loc["Etobicoke General", "Founded"]
# Get the district of Casey House
hospitals.loc["Casey House", "District"]
# Get the former name of Bellwood
hospitals.loc["Bellwood", "Former name(s)"] # Return nan, which indicates missing data
###Output
_____no_output_____
###Markdown
Using Loc to Subset the DataFrameSometimes we only want a subset of our DataFrame. We can use the slice operator (:)```[START_INDEX : END_INDEX]```The slice operator cuts a sequence that starts at the `START_INDEX` and ends with the `END_INDEX`
###Code
# Slicing Rows: Get rows Bellwood to Etobicoke General
hospitals.loc['Bellwood': 'Etobicoke General']
# Slicing Rows and Columns:
# Get Rows from Casey House to Holland Bloorview, and columns from District to Network
hospitals.loc['Casey House': 'Holland Bloorview', 'District' : 'Network']
###Output
_____no_output_____
###Markdown
```[:]```A single colon (with nothing before or after it), means get everything
###Code
# Get the whole hospital DataFrame
hospitals.loc[:] # This is the same as just calling hospitals
# Get all the rows, but only the 'Network' and 'Former name(s)' Columns
hospitals.loc[:, 'Network': 'Former name(s)']
###Output
_____no_output_____
###Markdown
```[: END_INDEX]```Assumes the Start Index is the first entry of the DataFrame```[START_INDEX : ]```Assumes the End Index is the last entry of the DataFrame
###Code
# Get all rows from "Sunnybrook" to the end of the list
hospitals.loc['Sunnybrook':]
# Get all rows up to "Bridgepoint" (Inclusive)
hospitals.loc[ :"Bridgepoint"]
###Output
_____no_output_____
###Markdown
Arrays for IndexingArrays can help us select the specific items (columns or rows) that we need. ```[ INDEX1, INDEX2, INDEX3 ... INDEX8 ]```This helps us select the specific indexes that we want. ```[ INDEX13, INDEX2, INDEX34 ... INDEX2 ]```Indexes do not need to be in order
###Code
# Get the Years that Sunnybrook, Humber River, Mount Sinai were founded
hospitals.loc[['Sunnybrook', 'Humber River', 'Mount Sinai'], 'Founded'] # Notice how the names of the hospitals are in an array
# We can also save the array of of indexes to variables
row_indexes = ['Birchmount', 'Baycrest', 'Toronto General']
column_indexes = ['District', 'Founded']
hospitals.loc[row_indexes, column_indexes]
###Output
_____no_output_____
###Markdown
Exercise 4: Pokedex (Indexing with Pokemon)Below is a table of Generation 1 Pokemon. Answer the questions below.
###Code
pokemon = pd.read_csv("https://raw.githubusercontent.com/AutismResearchCentre/Spark_2020_Day_01/master/Gen1_Pokemon.csv")
pokemon
###Output
_____no_output_____
###Markdown
**Set the index of Pokemon DataFrame to `"Name"`**
###Code
# TO-DO: Set the `"Name"` as the DataFrame Index (Row Labels)
pokemon.set_index('Name', inplace=True)
pokemon
###Output
_____no_output_____
###Markdown
**What type of Pokemon is `"Abra"`?**
###Code
# TO-DO: Find the type of Abra?
pokemon.loc['Abra', 'Type']
###Output
_____no_output_____
###Markdown
**What are the evolutions and numbers of `"Charmander"`, `"Pikachu"` and `"Squirtle"`?**
###Code
# TO-DO: Get the evolutions of these three pokemon
pokemon.loc[['Charmander', 'Pikachu', 'Squirtle'], 'Evolves_Into']
pokemon.loc['Eevee', 'Evolves_Into']
###Output
_____no_output_____
###Markdown
**Cut the table from `"Eevee"` to `"Mewtwo"`. What is the range of Pokedex numbers?**
###Code
# TO-DO: Cut the table from Eevee to Mewtwo
pokemon.loc['Eevee':'Mewtwo']
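# A sketch for the range question (the column name 'Number' is an assumption --
# check pokemon.columns for the actual Pokedex number column):
# subset = pokemon.loc['Eevee':'Mewtwo']
# print(subset['Number'].min(), '-', subset['Number'].max())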
###Output
_____no_output_____
###Markdown
Cleaning DataBefore we can start analyzing data, we need to clean it. IBM Data Analytics estimates that data scientists can spend up to 80% of their time cleaning data. Missing ValuesOne of the biggest data cleaning tasks is the problem of **missing values**. Sources of Missing ValuesData can be missing for a variety of different reasons. For example,* The participant forgot to fill in a field in a survey* Data was lost while transferring files* There was a programming error* Data was not collected for a specific participant We are going to load a csv file with some property data. As you can see, there are some issues with this data.
###Code
# Import libraries
import pandas as pd
import numpy as np
# Read csv file into a pandas DataFrame
df_property = pd.read_csv("https://raw.githubusercontent.com/nguyenjenny/spark_shared_repo/main/datasets/Property_Data.csv")
# Print out the Data
df_property
###Output
_____no_output_____
###Markdown
Features and Data TypesBefore we begin cleaning our data we need to understand what features we have (i.e. what do the columns of the data represent) and what are the expected data types. What are the features?- `PID`: Property ID number- `ST_NUM`: Street number- `ST_NAME`: Street name- `OWN_OCCUPIED`: Is the residence owner occupied- `NUM_BEDROOMS`: Number of bedrooms- `NUM_BATH`: Number of bathrooms- `SQ_FT`: Square footage of the property- `Extra`: Extra column with no data in it What are the expected types?- `PID`: float (decimal number) or int (whole number), something numeric- `ST_NUM`: float (decimal number) or int (whole number), something numeric- `ST_NAME`: string (a mix of letters or numbers)- `OWN_OCCUPIED`: string (where Y = "yes" and N "no")- `NUM_BEDROOMS`: float (decimal number) or int (whole number), something numeric- `NUM_BATH`: float (decimal number) or int (whole number), something numeric- `SQ_FT`: float (decimal number) or int (whole number), something numeric- `Extra`: NaN (missing data) How Pandas Deals with Missing Values (`NaN`)Missing data in pandas is often called `NaN` which stands for **Not a Number**.Here we can see that row `4`, column `"PID"` is nan.
###Code
# Index row 4, column "PID"
df_property.loc[4, "PID"]
###Output
_____no_output_____
###Markdown
How to check if Data is Considered Missing (`isnull`)There is a method called `isnull` that we can use to determine if data is missing.- Missing or null data will return `True`- Non-missing or non-null data will return `False`
###Code
# Check is the data is null or not
df_property.isnull()
###Output
_____no_output_____
###Markdown
We can also check for nulls in a single column
###Code
# We can check a single column
print(df_property['ST_NUM'])
df_property['ST_NUM'].isnull()
# We can also count how many missing/null values we have using the `sum()` method
df_property['ST_NUM'].isnull().sum()
# We can use the sum method on the entire DataFrame too
df_property.isnull().sum()
###Output
_____no_output_____
###Markdown
Standard Missing ValuesPandas will recognize the following automatically as standard missing values- empty cells- "n/a"- "NA"- "na"As we can see, all the items in blue have been imported as `NaN`
###Code
# Return the table
df_property
###Output
_____no_output_____
###Markdown
Non-standard Missing ValuesThere are also some missing values that are not automatically recognized as missing. For example `"--"` is being recognized as a string instead of a missing value. In this case, we tell pandas what our missing values are by using the parameter `na_values`, when we use `pd.read_csv`.
###Code
# Create a list of missing values and save as a variable
missing_values = ['n/a', 'na', 'NaN', 'NA', '--']
# Read csv as a pandas data frame, but use the parameter na_values
df_property = pd.read_csv("https://raw.githubusercontent.com/nguyenjenny/spark_shared_repo/main/datasets/Property_Data.csv", na_values=missing_values)
# Return df_property, now "--" has been replaced with NaN
df_property
###Output
_____no_output_____
###Markdown
Dealing with Errors in the DataThere are also a few errors in the data (yellow cells). We can use `replace` to mass replace incorrect cells. Or, we can fix errors on a case-by-case basis by overwriting them using indexing (`loc`). Replacing Errors (`replace`)This method works well when there's a value in a data set that makes no sense. In the data set we have, "HURLEY" is incorrect, so we can replace it with NaN (`np.nan`). We use the `replace` method.```df.replace("[OLD_VALUE]", "[NEW_VALUE]")```
###Code
# Replace "Hurley" with np.nan
df_property = df_property.replace("HURLEY", np.nan)
# Show data frame
df_property
###Output
_____no_output_____
###Markdown
Overwriting Errors with Indexing (`loc`)To overwrite a data point or error, we simply index it using `loc` and then make it equal to our new value.```df.loc[ "ROW_NAME" , "COLUMN_NAME"] = [NEW_VALUE]```We want to overwrite errors using indexing when we have to deal with data on a case-by-case basis. For example, it doesn't make sense to have a value of `12` for the column `"OWN_OCCUPIED"`. But we don't want to just replace all values of `12` with `np.nan`. For example, we could have a house with a street number of 12, and that's not an error. This is where indexing comes in handy. The row name is `3`, the column name is `"OWN_OCCUPIED"`, and we want to replace this value with `np.nan`
###Code
# Overwrite row 3, column "OWN_OCCUPIED" with np.nan
df_property.loc[3, "OWN_OCCUPIED"] = np.nan
# Show the updated dataframe
df_property
###Output
_____no_output_____
###Markdown
Example 5: Replacing missing data by overwriting with indexingOverwriting with indexing can also be used to replace missing values with actual values. Let's replace the missing value in column `"PID"` with `100005000`
###Code
### TO-DO: Replace this missing ID in the "PID" column with 100005000
df_property.loc[4, 'PID'] = 100005000
df_property
###Output
_____no_output_____
###Markdown
Replacing Missing Data (`fillna`)The `fillna` method is used to replace missing data. ```df = df.fillna(VALUE_TO_FILL)df["COLUMN_NAME"] = df["COLUMN_NAME"].fillna(VALUE_TO_FILL)``` Let's say that if we don't have information about the number of bathrooms (`"NUM_BATH"`), we assume there is only 1 bathroom. In this case, we want the fill value to be 1. We can apply `fillna`
###Code
# Fill all nan values
df_property["NUM_BATH"] = df_property["NUM_BATH"].fillna(1)
# Show the new dataframe
df_property
###Output
_____no_output_____
###Markdown
For number of bedrooms (`"NUM_BEDROOMS"`), we can fill missing data with the mean number of bedrooms
###Code
# Calculate the mean number of bedrooms first and save as variable
mean_beds = df_property["NUM_BEDROOMS"].mean() # You will learn more about this later
print(mean_beds)
# Fill all nan values for NUM_BEDROOMS with the mean
df_property["NUM_BEDROOMS"] = df_property["NUM_BEDROOMS"].fillna(mean_beds)
# Show the new dataframe
df_property
###Output
_____no_output_____
###Markdown
Example 6: Filling missing data for the `"OWN_OCCUPIED"` columnIf data on whether a place is `"OWN_OCCUPIED"` is missing, fill it with `"N"`, meaning that it is not owner occupied.
###Code
### TO-DO: Fill missing "OWN_OCCUPIED" data with "N"
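# A minimal sketch of one solution, reusing the fillna pattern shown above:
df_property["OWN_OCCUPIED"] = df_property["OWN_OCCUPIED"].fillna("N")
df_property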
###Output
_____no_output_____
###Markdown
Dropping rows/indexes and columns with missing data (`dropna`)Sometimes we may not want to use rows/indexes or columns if data is missing. This is where the `dropna` function comes in handy.Documentation for this function is found here: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.dropna.html?highlight=dropnaParameters: - `axis`: the axis that you want to drop data on - can be "index" or "columns" - `axis="index"` drops rows with missing data - `axis="columns"` drops columns with missing data- `how`: how you want to drop the data - can be "any" or "all" - `how="any"` If any NA values are present, drop that row or column - `how="all"` If all values are NA, drop that row or column. Examples Drop columns where all the values are missing```df.dropna(axis="columns", how="all") ```Drop columns where there are any missing values```df.dropna(axis="columns", how="any") ```Drop rows/index where all the values are missing```df.dropna(axis="index", how="all") ```Drop rows/index where there are any missing values```df.dropna(axis="index", how="any") ```
###Code
# Drop columns where all values are missing
df_property.dropna(axis="columns", how="all")
# Drop columns where any values are missing
df_property.dropna(axis="columns", how="any")
###Output
_____no_output_____
###Markdown
Exercise 7: Drop rows/indexes where all of the data is missing, and then rows/indexes where any of the data is missing.
###Code
### TO-DO: drop rows/indexes where all values are missing
### TO-DO: drop rows/indexes where any values are missing
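# A minimal sketch of one solution, mirroring the column examples above:
df_property.dropna(axis="index", how="all")
df_property.dropna(axis="index", how="any")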
###Output
_____no_output_____
###Markdown
GitHub**Git** is a version control system. Installing Git and GitHub 1. Install Git * **Installing Git on Windows** Download and install git from [Git for Windows](https://gitforwindows.org/) * **Installing Git on Mac** Install [Git OSX Installer](https://sourceforge.net/projects/git-osx-installer/files/ ) 2. Create a [GitHub Account](https://github.com/join) 3. Configure Git to your GitHub account * Open git bash * Set a Git username * `$ git config --global user.name "Mona Lisa"` * Confirm that you have set the Git username correctly * `$ git config --global user.name` * `> Mona Lisa` * Set your commit email address in Git * `$ git config --global user.email "[email protected]"` * Confirm that you have set the Git email correctly * `$ git config --global user.email` * `> [email protected]` 4. Install **GitHub for Desktop*** Install [GitHub for Desktop here](https://desktop.github.com/) 5. Share your GitHub username with us in the Zoom chat and we will add you to the repository Git WorkflowBelow is an image of the git workflow and the common commands we use. Remote RepositoryA repo/repository is a place where all the files for a project are stored.The remote repository is the repository shared with everyone and hosted on github.com. Because it's the shared version, we don't want to make changes to it directly. Local RepositoryThe local repository is our version of the repository that we have saved on our personal computers. We can try things out and change things without affecting the remote version of the code that everyone else uses. Working DirectoryWhen we make changes to any of the files in the repository, the changes go into the working directory. We can choose to keep (commit) or discard these changes Staging AreaThe staging area is where you add the files that you have changed, to get them ready to commit `git clone` To get a local repository we first need to clone it using the command git clone `git add`To add files to the staging area we use git add. This moves files from the working directory into the staging area. `git commit`Once we have collected all the files that we would like to move to our local repository, we can use the command `git commit`. When we commit something, we must include a commit message that explains what the changes were. This helps us keep track of the changes in case we need to revert to an old version or solve a bug. `git push`Once we are comfortable with all our changes and commits and we are ready to share them with our team, we `git push` our commits from the local repository to the remote repository `git pull`Other people will make changes to the remote repository. To get those changes and to make sure our local repository is up-to-date, we need to `git pull` all the changes `git checkout`Git checkout is used in more complicated situations when we have to worry about branching. It is used to change branches `git merge`Git merge is used when we want to combine separate branches into a single branch Cloning your first repository1. Open **GitHub for Desktop**2. Find the repository called **Shared Spark Repo** and select `Clone [your_username]/spark_shared_repo`* 3. Clone the repository* 4. Select `View the files of repository in Explorer` to see the files.* You should be able to see all the cloned files saved locally* We have just created our own local repository on our computer! Making your first commit1. 
Open the README.md file located in the local repository in Notepad or any text editor2. Add your name under contributors and save the file * README files are in markdown. So `*` and a space represents a bullet point * 3. When you return to GitHub Desktop, you'll notice that the change will have been detected * Make sure the files you want to commit are ticked off. This moves files from our working directory into the staging area. * Add a **commit message** at the bottom * Commit messages usually start with a **capitalized word** * And start with a **verb in simple tense** (e.g., Add, Change, Remove, Fix) *We have just committed our changes to our local repository! Pushing the changes to remote repositoryThe changes we made in our local repo are ready to be "pushed" to the main branch of your remote repo.To push our changes to the remote repository we can click any of the two circled push buttons. A short sketch of the equivalent command-line workflow is included in the code cell below.
###Code
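# A minimal sketch of the command-line workflow described above (run in a shell,
# not in this notebook); the repository URL is illustrative:
# git clone https://github.com/[your_username]/spark_shared_repo.git
# git pull                                   # make sure the local repo is up to date
# git add README.md                          # stage the changed file
# git commit -m "Add my name to contributor list"
# git push                                   # share the commit with the remote repository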
###Output
_____no_output_____
###Markdown
Fetching and pulling up-to-date changes from the main remote repositoryBefore we make changes to our local repository, we want to get into the habit of fetching and pulling changes from the remote repository to make sure we are working with the most up-to-date version.1. Fetch from origin (i.e. the remote repository) * Fetch gets and lists the commits that have been made on the remote branch * 2. To get those changes onto our local machine, we need to pull them. *  Exercise 8: Adding your name to the contributor listGo through all the steps listed above to add your name to the contributor list. Since we are all editing the same README.md file, we want to make these changes one at a time.**NB: Remember to `git pull` before you add your name, to get the most up-to-date version that your peers have added to** Exercise 9: Add your notebook from day 1 to the GitHub RepositoryThere is a folder in the repository called `day01_notebooks`: https://github.com/nguyenjenny/spark_shared_repo/tree/main/day_01_notebooks1. Download the notebook from Google Colab onto your computer as an .ipynb *  * Rename it `SPARK-Day1-[FirstName].ipynb`2. Copy it to the correct `day01_notebooks` folder in your local repository3. Stage and commit to your local repository * Be sure to add an informative message4. Push the change to the remote repository Accessing data files through GitHubGitHub can be used to host and access data files for Google Colab.Hotel booking demand data is hosted here: https://www.kaggle.com/jessemostipak/hotel-booking-demand?select=hotel_bookings.csvWe can download it to our local machine and copy it to our local repository, then commit and push the datasets to GitHub.In the `datasets` folder of our shared repository (https://github.com/nguyenjenny/spark_shared_repo/tree/main/datasets), we have saved the hotel booking information in the file called Hotel_Bookings.csv* If we click `view raw`, it leads us to the raw CSV file. We can use the URL as the file path when importing our data with pandas* We can use `pd.read_csv("https://raw.githubusercontent.com/nguyenjenny/spark_shared_repo/main/datasets/Hotel_Bookings.csv")` to load our data Peer Programming: Loading and cleaning the hotel data Part 1: Load the hotel booking data using `pd.read_csv` and save it under the variable `df_hotels`The na_values are "non-standard". Look at the data to figure out what they are: https://raw.githubusercontent.com/nguyenjenny/spark_shared_repo/main/datasets/Hotel_Bookings.csv*Hint: there are multiple "non-standard" notations for missing data. It might help to look at the data in Excel.*
###Code
### TO-DO: Load the hotel booking data
import pandas as pd
import numpy as np
### TO_DO: What are the missing values called in this data set
missing_values = [""] # replace with what the missing values are called (hint there are multiple nam)
### ### TO_DO: Import the data; use pd.read_csv to import the missing data, be sure to set `na_values = missing_values`
df_hotels = "" # replace with read statement
missing_values = ["NULL", "missing_data"]
df_hotels = pd.read_csv("https://raw.githubusercontent.com/nguyenjenny/spark_shared_repo/main/datasets/Hotel_Bookings.csv", na_values = missing_values)
###Output
_____no_output_____
###Markdown
Part 2: How many features (columns) and datapoints (rows/indexes) are there?
###Code
# TO-DO: Get the number of features and datapoints
df_hotels.shape
###Output
_____no_output_____
###Markdown
Part 3: How much missing data do you have for each column? Which column has the most missing data? Which column has the least missing data?
###Code
### TO-DO: Figure out how much missing data there is in each column
df_hotels.isnull().sum()
###Output
_____no_output_____
###Markdown
Part 4: Use `loc` to get the `"hotel"`, `"adults"`, `"children"`, `"babies"` for the rows of `40600`, `40667`, `40679` and `41160`. What do you notice about this data?
###Code
### TO-DO: Use loc to get a subset of the dataset
df_hotels.loc[[40600, 40667, 40679, 41160], ['hotel', 'adults', 'children', 'babies']]
###Output
_____no_output_____
###Markdown
Part 5: Fill missing data for `"children"` with the value `0`. Confirm that there is no missing data using `isnull`
###Code
### TO-DO: Fill missing data for the "children" column with the value 0
### TO-DO: Confirm that there is now no missing data in that column
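# A minimal sketch of one solution, using the fillna and isnull patterns from above:
df_hotels["children"] = df_hotels["children"].fillna(0)
df_hotels["children"].isnull().sum()  # should now be 0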
###Output
_____no_output_____
###Markdown
Part 6: Drop all indexes/rows with any missing data and save the result as `df_hotels_processed`. How many data points do you have now?---
###Code
### TO-DO: Drop all indexes/rows with missing data
df_hotels_processed = "" # replace with drop statement
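# A minimal sketch of one solution (dropna pattern from earlier); shape shows the new row count:
df_hotels_processed = df_hotels.dropna(axis="index", how="any")
df_hotels_processed.shape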
###Output
_____no_output_____
|
docker/work/preprocess_knock_R.ipynb
|
###Markdown
Data Science 100 Knocks (Structured Data Processing) - R Introduction- First, run the cell below- It imports the required libraries and reads the data from the database (PostgreSQL)- The libraries expected to be used are imported in the cell below- If there are other libraries you would like to use, install them as needed (you can run any Ubuntu Linux command by prefixing it with !)- Names, addresses, etc. are dummy data and are not real
###Code
require('RPostgreSQL')
require('tidyr')
require('dplyr')
require('stringr')
require('caret')
require('lubridate')
require('rsample')
require('recipes')
host <- 'db'
port <- Sys.getenv()["PG_PORT"]
dbname <- Sys.getenv()["PG_DATABASE"]
user <- Sys.getenv()["PG_USER"]
password <- Sys.getenv()["PG_PASSWORD"]
con <- dbConnect(PostgreSQL(), host=host, port=port, dbname=dbname, user=user, password=password)
df_customer <- dbGetQuery(con,"SELECT * FROM customer")
df_category <- dbGetQuery(con,"SELECT * FROM category")
df_product <- dbGetQuery(con,"SELECT * FROM product")
df_receipt <- dbGetQuery(con,"SELECT * FROM receipt")
df_store <- dbGetQuery(con,"SELECT * FROM store")
df_geocode <- dbGetQuery(con,"SELECT * FROM geocode")
###Output
_____no_output_____
###Markdown
Data Science 100 Knocks (Structured Data Processing) - R Introduction- First, run the cell below- It imports the required libraries and reads the data from the database (PostgreSQL)- The libraries expected to be used are imported in the cell below- If there are other libraries you would like to use, install them with install.packages() as needed- Names, addresses, etc. are dummy data and are not real
###Code
require("RPostgreSQL")
require("tidyr")
require("dplyr")
require("stringr")
require("caret")
require("lubridate")
require("rsample")
require("recipes")
require("themis")
host <- "db"
port <- Sys.getenv()["PG_PORT"]
dbname <- Sys.getenv()["PG_DATABASE"]
user <- Sys.getenv()["PG_USER"]
password <- Sys.getenv()["PG_PASSWORD"]
con <- dbConnect(PostgreSQL(), host=host, port=port, dbname=dbname, user=user, password=password)
df_customer <- dbGetQuery(con,"SELECT * FROM customer")
df_category <- dbGetQuery(con,"SELECT * FROM category")
df_product <- dbGetQuery(con,"SELECT * FROM product")
df_receipt <- dbGetQuery(con,"SELECT * FROM receipt")
df_store <- dbGetQuery(con,"SELECT * FROM store")
df_geocode <- dbGetQuery(con,"SELECT * FROM geocode")
###Output
_____no_output_____
###Markdown
Data Science 100 Knocks (Structured Data Processing) - R Introduction- First, run the cell below- It imports the required libraries and reads the data from the database (PostgreSQL)- The libraries expected to be used are imported in the cell below- If there are other libraries you would like to use, install them as needed (you can run any Ubuntu Linux command by prefixing it with !)- Names, addresses, etc. are dummy data and are not real
###Code
require('RPostgreSQL')
require('tidyr')
require('dplyr')
require('stringr')
require('caret')
require('lubridate')
require('rsample')
require('unbalanced')
host <- 'db'
port <- Sys.getenv()["PG_PORT"]
dbname <- Sys.getenv()["PG_DATABASE"]
user <- Sys.getenv()["PG_USER"]
password <- Sys.getenv()["PG_PASSWORD"]
con <- dbConnect(PostgreSQL(), host=host, port=port, dbname=dbname, user=user, password=password)
df_customer <- dbGetQuery(con,"SELECT * FROM customer")
df_category <- dbGetQuery(con,"SELECT * FROM category")
df_product <- dbGetQuery(con,"SELECT * FROM product")
df_receipt <- dbGetQuery(con,"SELECT * FROM receipt")
df_store <- dbGetQuery(con,"SELECT * FROM store")
df_geocode <- dbGetQuery(con,"SELECT * FROM geocode")
###Output
Loading required package: RPostgreSQL
Loading required package: DBI
Loading required package: tidyr
Loading required package: dplyr
Attaching package: ‘dplyr’
The following objects are masked from ‘package:stats’:
filter, lag
The following objects are masked from ‘package:base’:
intersect, setdiff, setequal, union
Loading required package: stringr
Loading required package: caret
Loading required package: lattice
Loading required package: ggplot2
Loading required package: lubridate
Attaching package: ‘lubridate’
The following objects are masked from ‘package:base’:
date, intersect, setdiff, union
Loading required package: rsample
Loading required package: unbalanced
Warning message in library(package, lib.loc = lib.loc, character.only = TRUE, logical.return = TRUE, :
“there is no package called ‘unbalanced’”
###Markdown
Exercises ---> R-001: Display the first 10 records of all columns of the receipt details data frame (df_receipt) and visually check what kind of data it holds.
###Code
head(df_receipt, 10)
###Output
_____no_output_____
###Markdown
---> R-002: From the receipt details data frame (df_receipt), specify the columns sales date (sales_ymd), customer ID (customer_id), product code (product_cd), and sales amount (amount), in that order, and display 10 records.
###Code
df_receipt %>%
select(sales_ymd, customer_id, product_cd, amount) %>%
head(10)
###Output
_____no_output_____
###Markdown
---> R-003: From the receipt details data frame (df_receipt), specify sales date (sales_ymd), customer ID (customer_id), product code (product_cd), and sales amount (amount), in that order, and display 10 records. However, rename sales_ymd to sales_date while extracting.
###Code
df_receipt %>%
select(sales_ymd, customer_id, product_cd, amount) %>%
rename(sales_date = sales_ymd) %>%
head(10)
###Output
_____no_output_____
###Markdown
---> R-004: From the receipt details data frame (df_receipt), specify sales date (sales_ymd), customer ID (customer_id), product code (product_cd), and sales amount (amount), in that order, and extract the data that satisfies the following condition:> - Customer ID (customer_id) is "CS018205000001"
###Code
df_receipt %>%
select(sales_ymd, customer_id, product_cd, amount)%>%
filter(customer_id == "CS018205000001")
###Output
_____no_output_____
###Markdown
---> R-005: From the receipt details data frame (df_receipt), specify sales date (sales_ymd), customer ID (customer_id), product code (product_cd), and sales amount (amount), in that order, and extract the data that satisfies all of the following conditions:> - Customer ID (customer_id) is "CS018205000001"> - Sales amount (amount) is at least 1,000
###Code
df_receipt %>%
select(sales_ymd, customer_id, product_cd, amount)%>%
filter(customer_id == "CS018205000001" & amount >= 1000)
###Output
_____no_output_____
###Markdown
---> R-006: From the receipt details data frame (df_receipt), specify sales date (sales_ymd), customer ID (customer_id), product code (product_cd), sales quantity (quantity), and sales amount (amount), in that order, and extract the data that satisfies all of the following conditions:> - Customer ID (customer_id) is "CS018205000001"> - Sales amount (amount) is at least 1,000 or sales quantity (quantity) is at least 5
###Code
df_receipt %>%
select(sales_ymd, customer_id, product_cd, quantity, amount) %>%
filter(customer_id == "CS018205000001" & (amount >= 1000 | quantity >= 5))
###Output
_____no_output_____
###Markdown
---> R-007: From the receipt details data frame (df_receipt), specify sales date (sales_ymd), customer ID (customer_id), product code (product_cd), and sales amount (amount), in that order, and extract the data that satisfies all of the following conditions:> - Customer ID (customer_id) is "CS018205000001"> - Sales amount (amount) is between 1,000 and 2,000 inclusive
###Code
df_receipt %>%
select(sales_ymd, customer_id, product_cd, amount) %>%
filter(customer_id == "CS018205000001" & between(amount, 1000, 2000))
###Output
_____no_output_____
###Markdown
---> R-008: From the receipt details data frame (df_receipt), specify sales date (sales_ymd), customer ID (customer_id), product code (product_cd), and sales amount (amount), in that order, and extract the data that satisfies all of the following conditions:> - Customer ID (customer_id) is "CS018205000001"> - Product code (product_cd) is not "P071401019"
###Code
df_receipt %>%
select(sales_ymd, customer_id, product_cd, amount) %>%
filter(customer_id == "CS018205000001" & product_cd != 'P071401019')
###Output
_____no_output_____
###Markdown
---> R-009: Rewrite the OR in the following code into AND without changing the output (by De Morgan's law, !(A | B) is equivalent to !A & !B):`df_store %>% filter(!(prefecture_cd == "13" | floor_area > 900))`
###Code
df_store %>%
filter(prefecture_cd != "13" & floor_area <= 900)
###Output
_____no_output_____
###Markdown
---> R-010: From the store data frame (df_store), extract all columns for stores whose store code (store_cd) begins with "S14", and display only 10 records.
###Code
df_store%>%
filter(grepl("^S14", store_cd))%>%
head(10)
###Output
_____no_output_____
###Markdown
---> R-011: From the customer data frame (df_customer), extract all columns for customers whose customer ID (customer_id) ends with 1, and display only 10 records.
###Code
df_customer %>%
filter(grepl("1$", customer_id)) %>%
head(10)
###Output
_____no_output_____
###Markdown
---> R-012: From the store data frame (df_store), display all columns for the stores located in Yokohama City (横浜市).
###Code
df_store%>%
filter(grepl("横浜市",address))%>%
head(3)
###Output
_____no_output_____
###Markdown
---> R-013: From the customer data frame (df_customer), extract all columns for records whose status code (status_cd) begins with one of the letters A to F, and display only 10 records.
###Code
df_customer %>% filter(grepl("^[A-F]", status_cd)) %>% head(10)
###Output
_____no_output_____
###Markdown
---> R-014: From the customer data frame (df_customer), extract all columns for records whose status code (status_cd) ends with one of the digits 1 to 9, and display only 10 records.
###Code
df_customer %>%
filter(grepl("[1-9]$", status_cd)) %>%
head(10)
###Output
_____no_output_____
###Markdown
---> R-015: From the customer data frame (df_customer), extract all columns for records whose status code (status_cd) begins with one of the letters A to F and ends with one of the digits 1 to 9, and display only 10 records.
###Code
df_customer %>%
filter(grepl("^[A-F].*[1-9]$", status_cd)) %>%
head(10)
###Output
_____no_output_____
###Markdown
---> R-016: From the store data frame (df_store), display all columns for stores whose telephone number (tel_no) matches the pattern 3 digits-3 digits-4 digits.
###Code
df_store%>%
filter(grepl("^[0-9]{3}-[0-9]{3}-[0-9]{4}$", tel_no))%>%
head(3)
###Output
_____no_output_____
###Markdown
---> R-017: Sort the customer data frame (df_customer) by date of birth (birth_day) from oldest to youngest, and display all columns of the first 10 records.
###Code
df_customer %>%
arrange(birth_day) %>%
head(10)
###Output
_____no_output_____
###Markdown
---> R-018: Sort the customer data frame (df_customer) by date of birth (birth_day) from youngest to oldest, and display all columns of the first 10 records.
###Code
df_customer %>%
arrange(desc(birth_day)) %>%
head(10)
###Output
_____no_output_____
###Markdown
---> R-019: For the receipt details data frame (df_receipt), assign ranks in descending order of per-record sales amount (amount) and extract the first 10 records. Display customer ID (customer_id), sales amount (amount), and the assigned rank. Records with equal sales amounts (amount) should be given the same rank.
###Code
df_receipt%>%
mutate(ranking = min_rank(desc(amount)))%>%
arrange(ranking)%>%
slice(1:10)
###Output
_____no_output_____
###Markdown
---> R-020: For the receipt details data frame (df_receipt), assign ranks in descending order of per-record sales amount (amount) and extract the first 10 records. Display customer ID (customer_id), sales amount (amount), and the assigned rank. Records with equal sales amounts (amount) should still be given distinct ranks.
###Code
df_receipt%>%
mutate(ranking = row_number(desc(amount)))%>%
arrange(ranking)%>%
slice(1:10)
###Output
_____no_output_____
###Markdown
---> R-021: Count the number of records in the receipt details data frame (df_receipt).
###Code
nrow(df_receipt)
###Output
_____no_output_____
###Markdown
---> R-022: Count the number of unique customer IDs (customer_id) in the receipt details data frame (df_receipt).
###Code
unique(df_receipt$customer_id)%>%
length()
###Output
_____no_output_____
###Markdown
---> R-023: For the receipt details data frame (df_receipt), sum the sales amount (amount) and sales quantity (quantity) for each store code (store_cd).
###Code
df_receipt%>%
group_by(store_cd)%>%
summarise(amount = sum(amount),
quantity = sum(quantity),
.groups = 'drop')%>%
head()
# .groups = 'drop' removes the grouping
###Output
_____no_output_____
###Markdown
---> R-024: For the receipt details data frame (df_receipt), find the most recent sales date (sales_ymd) for each customer ID (customer_id) and display 10 records.
###Code
df_receipt%>%
group_by(customer_id)%>%
summarise(sales_ymd = max(sales_ymd), .groups = 'drop')%>%
head(10)
###Output
_____no_output_____
###Markdown
---> R-025: For the receipt details data frame (df_receipt), find the oldest sales date (sales_ymd) for each customer ID (customer_id) and display 10 records.
###Code
df_receipt%>%
group_by(customer_id)%>%
summarise(sales_ymd = min(sales_ymd))%>%
head(10)
###Output
_____no_output_____
|
Algorithms Implementations/Monte Carlo/MC_Prediction.ipynb
|
###Markdown
Monte-Carlo Prediction for BlackjackHere is an implementation of the on-policy every-visit Monte Carlo algorithm to evaluate the state value function $v_\pi$ for a given policy $\pi$. The algorithm is implemented using incremental updates.
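As a quick reminder of the incremental form used below: after observing a return $G_t$ from a visit to state $s$, the visit count and value estimate are updated as $N(s) \leftarrow N(s) + 1$ and $V(s) \leftarrow V(s) + \frac{1}{N(s)}\big(G_t - V(s)\big)$, which is equivalent to averaging all returns observed for $s$.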
###Code
import numpy as np
from collections import defaultdict
import matplotlib
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # registers the 3D projection used in the plots below
import sys
import gym
from gym import logger as gymlogger
matplotlib.style.use('ggplot')
gymlogger.set_level(40) #error only
%matplotlib inline
###Output
_____no_output_____
###Markdown
Blackjack environmentA state in Blackjack is defined by three things:1. Player's current sum (1-31)2. Dealer's one showing card (1-10)3. Whether the player holds a usable ace (0 or 1)
###Code
env = gym.make("Blackjack-v0")
print(env.observation_space)
print(env.action_space)
###Output
Tuple(Discrete(32), Discrete(11), Discrete(2))
Discrete(2)
###Markdown
On-Policy Every-visit Monte Carlo Prediction
###Code
def policy1(state):
"""
A sample policy to be evaluated
    - This policy is the same as the one used in Sutton and Barto, Example 5.1.
    - This allows us to check the results against those reported in the book
"""
player_sum, dealer_card, usable_ace = state
if player_sum >= 20:
return 0
return 1
def sample_episode(env, policy):
"""
Sample an episode using given policy
"""
state = env.reset()
episode = []
while True:
action = policy(state)
next_state, reward, done, info = env.step(action)
episode.append((state,action,reward))
if done:
break
state = next_state
return episode
def MC_predict(env, policy, num_episodes=10000, gamma=1.0):
"""
Performs MC on-policy evaluation
Args:
env: Learning enviroment, e.g. Blackjack
policy: Policy to be evaluated
num_episodes: Number of episodes to sample
gamma: Discount factor
Returns:
state_values: dictionary mapping states (tuple) to their values
"""
state_counts = defaultdict(int)
state_values = defaultdict(float)
for episode_num in range(num_episodes):
if (episode_num+1) % 1000 == 0:
print("\rFinished {}/{} episodes".format(episode_num+1,num_episodes), end="")
sys.stdout.flush()
episode = sample_episode(env, policy)
current_return = 0
T = len(episode)
for t in reversed(range(T)):
state, action, reward = episode[t]
current_return = gamma*current_return + reward
state_counts[state] += 1
state_values[state] += (current_return - state_values[state]) / float(state_counts[state])  # incremental mean: V(s) <- V(s) + (G - V(s)) / N(s)
return state_values
###Output
_____no_output_____
###Markdown
Helper Functions to Plot State Values
###Code
# Following function to plot the Blackjack value function is based on the
# implementation here: https://github.com/dennybritz/reinforcement-learning/
def plot_value_function(V, title="Value Function"):
"""
Plots the value function as a surface plot.
"""
min_x, max_x = 11, 22
min_y, max_y = min(k[1] for k in V.keys()), max(k[1] for k in V.keys())
x_range = np.arange(min_x, max_x + 1)
y_range = np.arange(min_y, max_y + 1)
X, Y = np.meshgrid(x_range, y_range)
# Find value for all (x, y) coordinates
Z_noace = np.apply_along_axis(lambda _: V[(_[0], _[1], False)], 2, np.dstack([X, Y]))
Z_ace = np.apply_along_axis(lambda _: V[(_[0], _[1], True)], 2, np.dstack([X, Y]))
def plot_surface(X, Y, Z, ax, title):
surf = ax.plot_surface(X, Y, Z, rstride=1, cstride=1,
cmap=matplotlib.cm.coolwarm, vmin=-1.0, vmax=1.0)
ax.set_xlabel('Player Sum')
ax.set_ylabel('Dealer Showing')
ax.set_zlabel('Value')
ax.set_title(title)
ax.view_init(ax.elev, -120)
fig.colorbar(surf)
fig = plt.figure(figsize=(20, 7))
ax1 = fig.add_subplot(121, projection='3d')
plot_surface(X, Y, Z_noace, ax1, "{} (No Usable Ace)".format(title))
ax2 = fig.add_subplot(122, projection='3d')
plot_surface(X, Y, Z_ace, ax2, "{} (Usable Ace)".format(title))
###Output
_____no_output_____
###Markdown
Run Model and Plot Results
###Code
mc_values_20k = MC_predict(env, policy1, num_episodes=20000)
plot_value_function(mc_values_20k, title="20,000 Steps")
mc_values_500k = MC_predict(env, policy1, num_episodes=500000)
plot_value_function(mc_values_500k, title="500,000 Steps")
###Output
Finished 500000/500000 episodes
|
notebooks/example_cusp_hopf.ipynb
|
###Markdown
Tutorial on how to use the PyCascades Python framework for simulating tipping cascades on complex networks.The core of PyCascades consists of the basic classes for tipping elements, couplings between them, and a tipping_network class that contains information about the network structure between these basic elements, as well as an evolve class, which is able to simulate the dynamics of a tipping network.
###Code
import numpy as np
import matplotlib.pyplot as plt
import pycascades as pc
import seaborn as sns
sns.set(font_scale=1.5)
sns.set_style("whitegrid")
###Output
_____no_output_____
###Markdown
Create two cusp tipping elements
###Code
cusp_element_0 = pc.cusp( a = -4, b = 1, c = 0.2, x_0 = 0.5 )
cusp_element_1 = pc.cusp( a = -4, b = 1, c = 0.0, x_0 = 0.5 )
###Output
_____no_output_____
###Markdown
Create a hopf tipping element.
###Code
hopf_element_1 = pc.hopf( a = 1, c = -1.0)
###Output
_____no_output_____
###Markdown
Create a linear coupling with strength 0.5
###Code
coupling_0 = pc.linear_coupling(strength = 0.5)
###Output
_____no_output_____
###Markdown
We first create a tipping network with two cusp elements.
###Code
net = pc.tipping_network()
net.add_element( cusp_element_0 )
net.add_element( cusp_element_1 )
net.add_coupling( 0, 1, coupling_0 )
###Output
_____no_output_____
###Markdown
Integrate the system.
###Code
initial_state = [0.0, 0.0]
ev = pc.evolve( net, initial_state )
timestep = 0.01
t_end = 30
ev.integrate( timestep , t_end )
time = ev.get_timeseries()[0]
cusp0 = ev.get_timeseries()[1][:,:].T[0]
cusp1 = ev.get_timeseries()[1][:,:].T[1]
###Output
_____no_output_____
###Markdown
We repeat the process, but with one cusp and one hopf element.
###Code
net2 = pc.tipping_network()
net2.add_element( cusp_element_0 )
net2.add_element( hopf_element_1 )
net2.add_coupling( 0, 1, coupling_0 )
initial_state = [0.0, 0.0]
ev = pc.evolve( net2, initial_state )
timestep = 0.01
t_end = 30
ev.integrate( timestep , t_end )
time2 = ev.get_timeseries()[0]
cusp2 = ev.get_timeseries()[1][:,:].T[0]
hopf = np.multiply(ev.get_timeseries()[1][:,:].T[1], np.cos(ev.get_timeseries()[0]))
###Output
_____no_output_____
###Markdown
We plot the results.
###Code
fig, (ax0, ax1) = plt.subplots(1, 2, figsize=(10, 4)) #(default: 8, 6)
ax0.plot(time, cusp0, color="c", linewidth=2.0, label="Cusp-1")
ax0.plot(time, cusp1, color="r", linewidth=2.0, label="Cusp-2")
ax0.set_xlabel("Time [a.u.]")
ax0.set_ylabel("State [a.u.]")
ax0.set_ylim([-1.0, 1.25])
ax0.set_xticks(np.arange(0, 35, 5))
ax0.set_yticks(np.arange(-1.0, 1.5, 0.5))
ax0.legend(loc="lower left")
#ax0.text(x_text, y_text, "$\\mathbf{c}$", fontsize=size)
ax1.plot(time2, cusp2, color="c", linewidth=2.0, label="Cusp")
ax1.plot(time2, hopf, color="r", linewidth=2.0, label="Hopf")
ax1.set_xlabel("Time [a.u.]")
ax1.set_ylabel("State [a.u.]")
ax1.set_ylim([-1.0, 1.25])
ax1.set_xticks(np.arange(0, 35, 5))
ax1.legend(loc="lower left")
#ax1.text(x_text, y_text, "$\\mathbf{d}$", fontsize=size)
fig.tight_layout()
plt.show()
###Output
_____no_output_____
|
Assignment_01_A.ipynb
|
###Markdown
###Code
import pandas as pd # import data analysis module
import io # import input/output module
df = pd.read_csv('content/Car_Price.csv') #reads csv file into a new data frame
df # displays full data frame
df.head()
df.columns # displays labels for all attributes
# list the data types for each column
print(df.dtypes)
df.corr()
###Output
_____no_output_____
###Markdown
To start understanding the (linear) relationship between an individual variable and the price, we can use "regplot", which plots the scatterplot plus the fitted regression line for the data.
###Code
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
# Engine size as potential predictor variable of price
sns.regplot(x="enginesize", y="car_price", data=df)
plt.ylim(0,)
###Output
_____no_output_____
###Markdown
As the engine-size goes up, the price goes up: this indicates a positive direct correlation between these two variables. Engine size seems like a pretty good predictor of price since the regression line is almost a perfect diagonal line.
###Code
df[["enginesize", "car_price"]].corr()
###Output
_____no_output_____
###Markdown
Horsepower could be a potential predictor variable of price.
###Code
df[['horsepower', 'car_price']].corr()
###Output
_____no_output_____
###Markdown
**Weak Linear Relationship**
###Code
#Let's see if "carheight" as a predictor variable of "price".
sns.regplot(x="carheight", y="car_price", data=df)
###Output
_____no_output_____
###Markdown
Car height does not seem like a good predictor of the price at all since the regression line is close to horizontal. Also, the data points are very scattered and far from the fitted line, showing lots of variability. Therefore it is not a reliable variable.
###Code
#We can examine the correlation between 'carheight' and 'car_price' and see it's approximately 0.1193
df[['carheight', 'car_price']].corr()
#Let's calculate the Pearson Correlation Coefficient and P-value of 'enginesize' and 'car_price'.
from scipy import stats
pearson_coef, p_value = stats.pearsonr(df['enginesize'], df['car_price'])
print("The Pearson Correlation Coefficient is", pearson_coef, " with a P-value of P =", p_value)
###Output
The Pearson Correlation Coefficient is 0.8741448025245117 with a P-value of P = 1.3547637598648421e-65
###Markdown
Conclusion: **Since the p-value is < 0.001, the correlation between enginesize and price is statistically significant, and the linear relationship is quite strong (~0.874).** **Horsepower vs Price** **Let's calculate the Pearson Correlation Coefficient and P-value of 'horsepower' and 'price'.**
###Code
pearson_coef, p_value = stats.pearsonr(df['horsepower'], df['car_price'])
print("The Pearson Correlation Coefficient is", pearson_coef, " with a P-value of P =", p_value)
import numpy as np # import numerical module
X=df[['car_ID', 'wheelbase', 'carheight', 'enginesize', 'stroke',
'compressionratio', 'horsepower' ]] # data frame of independent variables
y=df['car_price'] # dependent variable
from scipy import stats # import statistical analysis module
x=df.iloc[:,7] # selects the car_price column by position (column index 7)
print('car_price')
print('mean:',np.mean(x))
print('median:',np.median(x))
print('standard deviation:',x.std())
print('variance:',x.var())
###Output
car_price
mean: 13276.710570731706
median: 10295.0
standard deviation: 7988.85233174315
variance: 63821761.57839796
###Markdown
**Let's load the modules for linear regression**
###Code
from sklearn.linear_model import LinearRegression
lm = LinearRegression()
lm
###Output
_____no_output_____
###Markdown
**How could enginesize help us predict car price?** **Using simple linear regression, we will create a linear function with "enginesize" as the predictor variable and "price" as the response variable.**
###Code
X = df[['enginesize']]
Y = df['car_price']
#Fit the linear model using enginesize.
lm.fit(X,Y)
#We can output a prediction
Yhat=lm.predict(X)
Yhat[0:7]
#What is the value of the intercept (a)?
lm.intercept_
#What is the value of the Slope (b)?
lm.coef_
###Output
_____no_output_____
###Markdown
**What is the final estimated linear model we get?**As we saw above, we should get a final linear model with the structure: Yhat = a + b X. Plugging in the actual values we get: price = -8005.4455 + 167.69 x enginesize **When evaluating our models, not only do we want to visualize the results, but we also want a quantitative measure to determine how accurate the model is.** R-squaredR-squared, also known as the coefficient of determination, is a measure that indicates how close the data is to the fitted regression line. The value of the R-squared is the percentage of variation of the response variable (y) that is explained by a linear model. Model 1: Simple Linear RegressionLet's calculate the R^2
###Code
#enginesize_fit
lm.fit(X, Y)
# Find the R^2
print('The R-square is: ', lm.score(X, Y))
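# Sanity check (a sketch, not from the original notebook): predict the price of a
# hypothetical car with an engine size of 200 using the fitted model above.
# print(lm.predict(pd.DataFrame({'enginesize': [200]})))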
###Output
The R-square is: 0.7641291357806176
|
manuscript_analyses/variants_in_published_libraries/src/Brunello Library Analysis.ipynb
|
###Markdown
Brunello Library Analysis The Brunello sgRNA library was downloaded from the publication by [Doench et al. Nature Biotechnology 2016](https://www.nature.com/articles/nbt.3437), in supplementary table 21. The start and end positions for the BED file were derived from the provided "position of base after cut", because it is known for SpCas9 that the cut occurs 3 bp upstream of the PAM.
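For example, using the 1-based "position of base after cut" p, the code below takes positions p-6 through p+16 for a sense-strand guide and p-17 through p+5 for an antisense guide; each window is 23 bases, covering the 20-nt protospacer plus the 3-nt NGG PAM.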
###Code
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
sns.set_context('paper')
%matplotlib inline
###Output
_____no_output_____
###Markdown
Load file.
###Code
brunello = pd.read_excel('../dat/brunello/STable 21 Brunello.xlsx')
brunello.head()
###Output
_____no_output_____
###Markdown
Convert to BED, including PAM.
###Code
brunello_bed = pd.DataFrame()
brunello_bed['chrom'] = brunello['Genomic Sequence'].str.split('_').str[1].str.split('.').str[0].str.lstrip('0')  # strip only leading zeros, so '000010' becomes '10' rather than '1'
starts = []
stops = []
# use this later to filter variants that occur in "N" position
n_positions = []
for ix, row in brunello.iterrows():
# sense and antisense appear to be mixed up in spreadsheet
if row['Strand'] == 'antisense':
starts.append(row['Position of Base After Cut (1-based)'] - 17)
stops.append(row['Position of Base After Cut (1-based)'] + 5) # to incorporate PAM
n_positions.append(row['Position of Base After Cut (1-based)'] + 3)
else:
starts.append(row['Position of Base After Cut (1-based)'] - 6)
stops.append(row['Position of Base After Cut (1-based)'] + 16)
n_positions.append(row['Position of Base After Cut (1-based)'] - 4)
brunello_bed['start'] = starts
brunello_bed['stop'] = stops
brunello_bed['name'] = brunello.index.astype(str) + '_' + brunello['Target Gene Symbol']
brunello_bed.head()
###Output
_____no_output_____
###Markdown
Save BED file of sgRNA protospacer and PAM coordinates.
###Code
brunello_bed_chr = brunello_bed.copy()
brunello_bed_chr['chrom'] = 'chr' + brunello_bed['chrom'].astype(str)
brunello_bed.to_csv('../dat/brunello/brunello.bed', sep='\t', index=False, header=False)
brunello_bed_chr.to_csv('../dat/brunello/brunello_chr.bed', sep='\t', index=False, header=False)
brunello_bed = pd.read_csv('../dat/brunello/brunello.bed', sep='\t', header=None, names=['chrom','start','stop','name'])
###Output
_____no_output_____
###Markdown
BCFtools command to determine which common variants from the 1000 Genomes Project occur in the sgRNAs: `bcftools view -R ../dat/brunello.bed --min-af 0.05 -G ~/path_to_1kg_dat/1kg_all_autosomal_variants.bcf -O v > ../dat/variants_in_guides_init.vcf` Filter these to make sure none of the identified variants fall at the "N" position of the "NGG" PAM; `n_positions`, defined above, stores the position of the "N" in each PAM.
###Code
unfilt_vars = pd.read_csv('../dat/brunello/variants_in_guides_init.vcf', sep='\t', header=None,
names=['chrom','position','rsid','ref','alt','score','passing','info'], comment='#')
###Output
_____no_output_____
###Markdown
Determine the number of variants present before filtering out variants in the "N" position:
###Code
len(unfilt_vars)
###Output
_____no_output_____
###Markdown
Determine the number of variants that occur in the "N" position.
###Code
len(unfilt_vars[unfilt_vars['position'].isin(n_positions)])
###Output
_____no_output_____
###Markdown
Generate a filtered set of non-N position variants.
###Code
filt_vars = unfilt_vars[~unfilt_vars['position'].isin(n_positions)].copy()
###Output
_____no_output_____
###Markdown
Determine how many sgRNAs are impacted by the non-N variants.
###Code
brunello_guides = brunello_bed.copy().sort_values(by='chrom')
brun_n_vars = []
for ix, row in brunello_guides.iterrows():
chrom = row['chrom']
start = row['start']
stop = row['stop']
guide_positions = list(range(start, stop))
vars_in_guide = filt_vars.query('(chrom == @chrom) and (position >= @start) and (position <= @stop)')
brun_n_vars.append(len(vars_in_guide))
brunello_guides['n_vars'] = brun_n_vars
###Output
_____no_output_____
###Markdown
Identify the percentage of guides that have a common variant in the non-N position.
###Code
100.0 * len(brunello_guides.query('n_vars > 0')) / len(brunello_guides)
###Output
_____no_output_____
###Markdown
Save the results to a file.
###Code
brunello_guides.to_csv('../dat/brunello/brunello_guides_results_1kgp.tsv', sep='\t')
###Output
_____no_output_____
###Markdown
Visualize the number of variants per sgRNA.
###Code
p = sns.countplot(x=brunello_guides['n_vars'], color='dodgerblue')
p.set_yscale('log')
plt.ylabel('Number of gRNAs')
plt.xlabel('Number of Common Variants (MAF >= 5%)')
plt.title('Common Variants from the\n 1000 Genomes Project\n in the Brunello gRNA library')
###Output
_____no_output_____
###Markdown
WTC analysis of Brunello sgRNA library BCFtools command to determine which variants in WTC occur in Brunello gRNAs: `bcftools view -R ../dat/brunello_chr.bed ~/path_to_wtc_dat/wtc.bcf -O v > ../dat/wtc_variants_in_guides_init.vcf`
###Code
unfilt_vars_wtc = pd.read_csv('../dat/brunello/wtc_variants_in_guides_init.vcf', sep='\t', header=None,
names=['chrom','position','random','ref','alt','rsid','score','passing','info','genotype'], comment='#')
###Output
_____no_output_____
###Markdown
Determine the number of variants present before filtering out variants in the "N" position:
###Code
len(unfilt_vars_wtc)
###Output
_____no_output_____
###Markdown
Determine the number of variants that occur in the "N" position.
###Code
len(unfilt_vars_wtc[unfilt_vars_wtc['position'].isin(n_positions)])
###Output
_____no_output_____
###Markdown
Generate a filtered file with only non-N variants.
###Code
filt_vars_wtc = unfilt_vars_wtc[~unfilt_vars_wtc['position'].isin(n_positions)].copy()
###Output
_____no_output_____
###Markdown
Determine how many sgRNAs are impacted by the non-N variants.
###Code
brunello_guides_wtc = brunello_bed_chr.copy().sort_values(by='chrom')
brun_n_vars_wtc = []
for ix, row in brunello_guides_wtc.iterrows():
chrom = row['chrom']
start = row['start']
stop = row['stop']
guide_positions = list(range(start, stop))
vars_in_guide = filt_vars_wtc.query('(chrom == @chrom) and (position >= @start) and (position <= @stop)')
brun_n_vars_wtc.append(len(vars_in_guide))
brunello_guides_wtc['n_vars'] = brun_n_vars_wtc
###Output
_____no_output_____
###Markdown
Identify the percentage of guides that have a common variant in the non-N position.
###Code
100.0 * len(brunello_guides_wtc.query('n_vars > 0')) / len(brunello_guides_wtc)
###Output
_____no_output_____
###Markdown
Save the results to a file.
###Code
brunello_guides_wtc.to_csv('../dat/brunello/brunello_guides_results_wtc.tsv', sep='\t')
p = sns.countplot(x=brunello_guides_wtc['n_vars'], color='dodgerblue')
p.set_yscale('log')
plt.ylabel('Number of gRNAs')
plt.xlabel('Number of Variants')
plt.title('Variants from WTC\n in the Brunello gRNA library')
###Output
_____no_output_____
###Markdown
Put the plots together for 1000 Genomes and WTC and save to a file.
###Code
brunello_guides['Dataset'] = 'Common variants\n (MAF >= 5%)\n 1000 Genomes'
brunello_guides_wtc['Dataset'] = 'WTC variants'
df_overall = pd.concat([brunello_guides, brunello_guides_wtc])
df_overall.to_csv('../dat/brunello/1kg_wtc_brunello_results.tsv', sep='\t', index=False)
p = sns.countplot(x=df_overall['n_vars'], hue=df_overall['Dataset'])
p.set_yscale('log')
plt.ylabel('Number of gRNAs')
plt.xlabel('Number of Variants')
plt.title('Variants from WTC and \nthe 1000 Genomes Project\n in the Brunello gRNA library')
p = sns.catplot('n_vars', kind='count', row='Dataset', hue='Dataset',
data=df_overall, height=3, aspect=2, sharex=False)
(p.set_axis_labels('Number of Variants', 'Number of gRNAs (log scale)')
.set_titles('{row_name} {row_var}', verticalalignment='top'))
p.fig.get_axes()[0].set_yscale('log')
p.fig.suptitle('Variants in the Brunello gRNA library')
p.savefig('../figs/brunello_wtc_1kgp_countplots.pdf', dpi=300,
bbox_inches='tight')
###Output
_____no_output_____
|
notebooks/live_training/tensor-fied_intro_to_tensorflow_LT.ipynb
|
###Markdown
Introduction to TensorFlow, now leveraging tensors! In this notebook, we modify our [intro to TensorFlow notebook](https://github.com/the-deep-learners/TensorFlow-LiveLessons/blob/master/notebooks/point_by_point_intro_to_tensorflow.ipynb) to use tensors in place of our *for* loop. This is a derivation of Jared Ostmeyer's [Naked Tensor](https://github.com/jostmey/NakedTensor/) code. The initial steps are identical to the earlier notebook
###Code
import numpy as np
np.random.seed(42)
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import tensorflow as tf
tf.set_random_seed(42)
xs = [0., 1., 2., 3., 4., 5., 6., 7.]
ys = [-.82, -.94, -.12, .26, .39, .64, 1.02, 1.]
fig, ax = plt.subplots()
_ = ax.scatter(xs, ys)
m = tf.Variable(-0.5)
b = tf.Variable(1.0)
###Output
_____no_output_____
###Markdown
Define the cost as a tensor -- more elegant than a *for* loop and enables distributed computing in TensorFlow
###Code
ys_model = m*xs+b
# One concrete fill-in (sum of squared errors as a single tensor op),
# following the Naked Tensor reference:
total_error = tf.reduce_sum((ys - ys_model)**2)
###Output
_____no_output_____
###Markdown
The remaining steps are also identical to the earlier notebook!
###Code
optimizer_operation = tf.train.GradientDescentOptimizer(learning_rate=0.001).minimize(total_error)
initializer_operation = tf.global_variables_initializer()
with tf.Session() as session:
session.run(initializer_operation)
n_epochs = 1000
for iteration in range(n_epochs):
session.run(optimizer_operation)
slope, intercept = session.run([m, b])
slope
intercept
y_hat = intercept + slope*np.array(xs)
pd.DataFrame(list(zip(ys, y_hat)), columns=['y', 'y_hat'])
fig, ax = plt.subplots()
ax.scatter(xs, ys)
x_min, x_max = ax.get_xlim()
y_min, y_max = intercept, intercept + slope*(x_max-x_min)
ax.plot([x_min, x_max], [y_min, y_max])
_ = ax.set_xlim([x_min, x_max])
###Output
_____no_output_____
|
nbs/models.common.ipynb
|
###Markdown
Table of Contents0.0.1 Conv1d0.0.2 Attentions0.0.3 STFT0.0.4 Global style tokens1 VITS common1.0.1 LayerNorm1.0.2 Flip1.0.3 Log1.0.4 ElementWiseAffine1.0.5 DDSConv1.0.6 ConvFlow1.0.7 WN1.0.8 ResidualCouplingLayer1.0.9 ResBlock
###Code
# default_exp models.common
# export
import numpy as np
from scipy.signal import get_window
import torch
from torch.autograd import Variable
from torch import nn
from torch.nn import functional as F
from torch.nn.utils import remove_weight_norm, weight_norm
from librosa.filters import mel as librosa_mel
from librosa.util import pad_center, tiny
from uberduck_ml_dev.utils.utils import *
from uberduck_ml_dev.vendor.tfcompat.hparam import HParams
###Output
_____no_output_____
###Markdown
Conv1d
###Code
# export
class Conv1d(nn.Module):
def __init__(
self,
in_channels,
out_channels,
kernel_size=1,
stride=1,
padding=None,
dilation=1,
bias=True,
w_init_gain="linear",
):
super().__init__()
if padding is None:
assert kernel_size % 2 == 1
padding = int(dilation * (kernel_size - 1) / 2)
self.conv = nn.Conv1d(
in_channels,
out_channels,
kernel_size=kernel_size,
stride=stride,
padding=padding,
dilation=dilation,
bias=bias,
)
nn.init.xavier_uniform_(
self.conv.weight, gain=nn.init.calculate_gain(w_init_gain)
)
def forward(self, signal):
return self.conv(signal)
# export
class LinearNorm(torch.nn.Module):
def __init__(self, in_dim, out_dim, bias=True, w_init_gain="linear"):
super().__init__()
self.linear_layer = torch.nn.Linear(in_dim, out_dim, bias=bias)
torch.nn.init.xavier_uniform_(
self.linear_layer.weight, gain=torch.nn.init.calculate_gain(w_init_gain)
)
def forward(self, x):
return self.linear_layer(x)
###Output
_____no_output_____
###Markdown
Attentions
###Code
# export
from numpy import finfo
class LocationLayer(nn.Module):
def __init__(self, attention_n_filters, attention_kernel_size, attention_dim):
super(LocationLayer, self).__init__()
padding = int((attention_kernel_size - 1) / 2)
self.location_conv = Conv1d(
2,
attention_n_filters,
kernel_size=attention_kernel_size,
padding=padding,
bias=False,
stride=1,
dilation=1,
)
self.location_dense = LinearNorm(
attention_n_filters, attention_dim, bias=False, w_init_gain="tanh"
)
def forward(self, attention_weights_cat):
processed_attention = self.location_conv(attention_weights_cat)
processed_attention = processed_attention.transpose(1, 2)
processed_attention = self.location_dense(processed_attention)
return processed_attention
class Attention(nn.Module):
def __init__(
self,
attention_rnn_dim,
embedding_dim,
attention_dim,
attention_location_n_filters,
attention_location_kernel_size,
fp16_run,
):
super(Attention, self).__init__()
self.query_layer = LinearNorm(
attention_rnn_dim, attention_dim, bias=False, w_init_gain="tanh"
)
self.memory_layer = LinearNorm(
embedding_dim, attention_dim, bias=False, w_init_gain="tanh"
)
self.v = LinearNorm(attention_dim, 1, bias=False)
self.location_layer = LocationLayer(
attention_location_n_filters, attention_location_kernel_size, attention_dim
)
if fp16_run:
self.score_mask_value = finfo("float16").min
else:
self.score_mask_value = -float("inf")
def get_alignment_energies(self, query, processed_memory, attention_weights_cat):
"""
PARAMS
------
query: decoder output (batch, n_mel_channels * n_frames_per_step)
processed_memory: processed encoder outputs (B, T_in, attention_dim)
attention_weights_cat: cumulative and prev. att weights (B, 2, max_time)
RETURNS
-------
alignment (batch, max_time)
"""
processed_query = self.query_layer(query.unsqueeze(1))
processed_attention_weights = self.location_layer(attention_weights_cat)
energies = self.v(
torch.tanh(processed_query + processed_attention_weights + processed_memory)
)
energies = energies.squeeze(-1)
return energies
def forward(
self,
attention_hidden_state,
memory,
processed_memory,
attention_weights_cat,
mask,
attention_weights=None,
):
"""
PARAMS
------
attention_hidden_state: attention rnn last output
memory: encoder outputs
processed_memory: processed encoder outputs
attention_weights_cat: previous and cumulative attention weights
mask: binary mask for padded data
"""
if attention_weights is None:
alignment = self.get_alignment_energies(
attention_hidden_state, processed_memory, attention_weights_cat
)
if mask is not None:
alignment.data.masked_fill_(mask, self.score_mask_value)
attention_weights = F.softmax(alignment, dim=1)
attention_context = torch.bmm(attention_weights.unsqueeze(1), memory)
attention_context = attention_context.squeeze(1)
return attention_context, attention_weights
from numpy import finfo
finfo("float16").min
F.pad(torch.rand(1, 3, 3), (2, 2), mode="reflect")
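# Smoke test (our own sketch): one step of location-sensitive attention
# with toy dimensions; shapes follow the docstrings above.
_attn = Attention(4, 6, 5, 3, 3, fp16_run=False)
_ctx, _w = _attn(torch.rand(2, 4), torch.rand(2, 7, 6), torch.rand(2, 7, 5), torch.rand(2, 2, 7), mask=None)
assert _ctx.shape == (2, 6) and _w.shape == (2, 7)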
###Output
_____no_output_____
###Markdown
STFT
###Code
# export
class STFT:
"""adapted from Prem Seetharaman's https://github.com/pseeth/pytorch-stft"""
def __init__(
self,
filter_length=1024,
hop_length=256,
win_length=1024,
window="hann",
padding=None,
device="cpu",
rank=None,
):
self.filter_length = filter_length
self.hop_length = hop_length
self.win_length = win_length
self.window = window
self.forward_transform = None
scale = self.filter_length / self.hop_length
fourier_basis = np.fft.fft(np.eye(self.filter_length))
self.padding = padding or (filter_length // 2)
cutoff = int((self.filter_length / 2 + 1))
fourier_basis = np.vstack(
[np.real(fourier_basis[:cutoff, :]), np.imag(fourier_basis[:cutoff, :])]
)
if device == "cuda":
dev = torch.device(f"cuda:{rank}")
forward_basis = torch.cuda.FloatTensor(
fourier_basis[:, None, :], device=dev
)
inverse_basis = torch.cuda.FloatTensor(
np.linalg.pinv(scale * fourier_basis).T[:, None, :].astype(np.float32),
device=dev,
)
else:
forward_basis = torch.FloatTensor(fourier_basis[:, None, :])
inverse_basis = torch.FloatTensor(
np.linalg.pinv(scale * fourier_basis).T[:, None, :].astype(np.float32)
)
if window is not None:
assert filter_length >= win_length
# get window and zero center pad it to filter_length
fft_window = get_window(window, win_length, fftbins=True)
fft_window = pad_center(fft_window, filter_length)
fft_window = torch.from_numpy(fft_window).float()
if device == "cuda":
fft_window = fft_window.cuda(rank)
# window the bases
forward_basis *= fft_window
inverse_basis *= fft_window
self.fft_window = fft_window
self.forward_basis = forward_basis.float()
self.inverse_basis = inverse_basis.float()
def transform(self, input_data):
num_batches = input_data.size(0)
num_samples = input_data.size(1)
self.num_samples = num_samples
# similar to librosa, reflect-pad the input
input_data = input_data.view(num_batches, 1, num_samples)
input_data = F.pad(
input_data.unsqueeze(1),
(self.padding, self.padding, 0, 0,),
mode="reflect",
)
input_data = input_data.squeeze(1)
forward_transform = F.conv1d(
input_data,
Variable(self.forward_basis, requires_grad=False),
stride=self.hop_length,
padding=0,
)
cutoff = self.filter_length // 2 + 1
real_part = forward_transform[:, :cutoff, :]
imag_part = forward_transform[:, cutoff:, :]
magnitude = torch.sqrt(real_part ** 2 + imag_part ** 2)
phase = torch.autograd.Variable(torch.atan2(imag_part.data, real_part.data))
return magnitude, phase
def inverse(self, magnitude, phase):
recombine_magnitude_phase = torch.cat(
[magnitude * torch.cos(phase), magnitude * torch.sin(phase)], dim=1,
)
inverse_transform = F.conv_transpose1d(
recombine_magnitude_phase,
Variable(self.inverse_basis, requires_grad=False),
stride=self.hop_length,
padding=0,
)
if self.window is not None:
window_sum = window_sumsquare(
self.window,
magnitude.size(-1),
hop_length=self.hop_length,
win_length=self.win_length,
n_fft=self.filter_length,
dtype=np.float32,
)
# remove modulation effects
approx_nonzero_indices = torch.from_numpy(
np.where(window_sum > tiny(window_sum))[0]
)
window_sum = torch.autograd.Variable(
torch.from_numpy(window_sum), requires_grad=False
)
window_sum = window_sum.cuda() if magnitude.is_cuda else window_sum
inverse_transform[:, :, approx_nonzero_indices] /= window_sum[
approx_nonzero_indices
]
# scale by hop ratio
inverse_transform *= float(self.filter_length) / self.hop_length
inverse_transform = inverse_transform[:, :, int(self.filter_length / 2) :]
inverse_transform = inverse_transform[:, :, : -int(self.filter_length / 2) :]
return inverse_transform
def forward(self, input_data):
self.magnitude, self.phase = self.transform(input_data)
reconstruction = self.inverse(self.magnitude, self.phase)
return reconstruction
# export
class MelSTFT:
def __init__(
self,
filter_length=1024,
hop_length=256,
win_length=1024,
n_mel_channels=80,
sampling_rate=22050,
mel_fmin=0.0,
mel_fmax=8000.0,
device="cpu",
padding=None,
rank=None,
):
self.n_mel_channels = n_mel_channels
self.sampling_rate = sampling_rate
if padding is None:
padding = filter_length // 2
self.stft_fn = STFT(
filter_length,
hop_length,
win_length,
device=device,
rank=rank,
padding=padding,
)
mel_basis = librosa_mel(
sampling_rate, filter_length, n_mel_channels, mel_fmin, mel_fmax
)
mel_basis = torch.from_numpy(mel_basis).float()
if device == "cuda":
mel_basis = mel_basis.cuda()
self.mel_basis = mel_basis
def spectral_normalize(self, magnitudes):
output = dynamic_range_compression(magnitudes)
return output
def spectral_de_normalize(self, magnitudes):
output = dynamic_range_decompression(magnitudes)
return output
def spec_to_mel(self, spec):
mel_output = torch.matmul(self.mel_basis, spec)
mel_output = self.spectral_normalize(mel_output)
return mel_output
def spectrogram(self, y):
assert y.min() >= -1
assert y.max() <= 1
magnitudes, phases = self.stft_fn.transform(y)
return magnitudes.data
def mel_spectrogram(self, y, ref_level_db=20, magnitude_power=1.5):
"""Computes mel-spectrograms from a batch of waves
PARAMS
------
y: Variable(torch.FloatTensor) with shape (B, T) in range [-1, 1]
RETURNS
-------
mel_output: torch.FloatTensor of shape (B, n_mel_channels, T)
"""
assert y.min() >= -1
assert y.max() <= 1
magnitudes, phases = self.stft_fn.transform(y)
magnitudes = magnitudes.data
return self.spec_to_mel(magnitudes)
def griffin_lim(self, mel_spectrogram, n_iters=30):
mel_dec = self.spectral_de_normalize(mel_spectrogram)
# Float cast required for fp16 training.
mel_dec = mel_dec.transpose(0, 1).cpu().data.float()
spec_from_mel = torch.mm(mel_dec, self.mel_basis).transpose(0, 1)
spec_from_mel *= 1000
out = griffin_lim(spec_from_mel.unsqueeze(0), self.stft_fn, n_iters=n_iters)
return out
from IPython.display import Audio
stft = STFT()
mel_stft = MelSTFT()
mel = mel_stft.mel_spectrogram(torch.clip(torch.randn(1, 1000), -1, 1))
assert mel.shape[0] == 1
assert mel.shape[1] == 80
mel = torch.load("./test/fixtures/stevejobs-1.pt")
aud = mel_stft.griffin_lim(mel)
# hide
Audio(aud, rate=22050)
###Output
_____no_output_____
###Markdown
Global style tokens
###Code
# export
from torch.nn import init
class ReferenceEncoder(nn.Module):
"""
inputs --- [N, Ty/r, n_mels*r] mels
outputs --- [N, ref_enc_gru_size]
"""
def __init__(self, hp):
super().__init__()
K = len(hp.ref_enc_filters)
filters = [1] + hp.ref_enc_filters
convs = [
nn.Conv2d(
in_channels=filters[i],
out_channels=filters[i + 1],
kernel_size=(3, 3),
stride=(2, 2),
padding=(1, 1),
)
for i in range(K)
]
self.convs = nn.ModuleList(convs)
self.bns = nn.ModuleList(
[nn.BatchNorm2d(num_features=hp.ref_enc_filters[i]) for i in range(K)]
)
out_channels = self.calculate_channels(hp.n_mel_channels, 3, 2, 1, K)
self.gru = nn.GRU(
input_size=hp.ref_enc_filters[-1] * out_channels,
hidden_size=hp.ref_enc_gru_size,
batch_first=True,
)
self.n_mel_channels = hp.n_mel_channels
self.ref_enc_gru_size = hp.ref_enc_gru_size
def forward(self, inputs, input_lengths=None):
out = inputs.view(inputs.size(0), 1, -1, self.n_mel_channels)
for conv, bn in zip(self.convs, self.bns):
out = conv(out)
out = bn(out)
out = F.relu(out)
out = out.transpose(1, 2) # [N, Ty//2^K, 128, n_mels//2^K]
N, T = out.size(0), out.size(1)
out = out.contiguous().view(N, T, -1) # [N, Ty//2^K, 128*n_mels//2^K]
if input_lengths is not None:
input_lengths = torch.ceil(input_lengths.float() / 2 ** len(self.convs))
input_lengths = input_lengths.cpu().numpy().astype(int)
out = nn.utils.rnn.pack_padded_sequence(
out, input_lengths, batch_first=True, enforce_sorted=False
)
self.gru.flatten_parameters()
_, out = self.gru(out)
return out.squeeze(0)
def calculate_channels(self, L, kernel_size, stride, pad, n_convs):
for _ in range(n_convs):
L = (L - kernel_size + 2 * pad) // stride + 1
return L
class MultiHeadAttention(nn.Module):
"""
input:
query --- [N, T_q, query_dim]
key --- [N, T_k, key_dim]
output:
out --- [N, T_q, num_units]
"""
def __init__(self, query_dim, key_dim, num_units, num_heads):
super().__init__()
self.num_units = num_units
self.num_heads = num_heads
self.key_dim = key_dim
self.W_query = nn.Linear(
in_features=query_dim, out_features=num_units, bias=False
)
self.W_key = nn.Linear(in_features=key_dim, out_features=num_units, bias=False)
self.W_value = nn.Linear(
in_features=key_dim, out_features=num_units, bias=False
)
def forward(self, query, key):
querys = self.W_query(query) # [N, T_q, num_units]
keys = self.W_key(key) # [N, T_k, num_units]
values = self.W_value(key)
split_size = self.num_units // self.num_heads
querys = torch.stack(
torch.split(querys, split_size, dim=2), dim=0
) # [h, N, T_q, num_units/h]
keys = torch.stack(
torch.split(keys, split_size, dim=2), dim=0
) # [h, N, T_k, num_units/h]
values = torch.stack(
torch.split(values, split_size, dim=2), dim=0
) # [h, N, T_k, num_units/h]
# score = softmax(QK^T / (d_k ** 0.5))
scores = torch.matmul(querys, keys.transpose(2, 3)) # [h, N, T_q, T_k]
scores = scores / (self.key_dim ** 0.5)
scores = F.softmax(scores, dim=3)
# out = score * V
out = torch.matmul(scores, values) # [h, N, T_q, num_units/h]
out = torch.cat(torch.split(out, 1, dim=0), dim=3).squeeze(
0
) # [N, T_q, num_units]
return out
class STL(nn.Module):
"""
inputs --- [N, token_embedding_size//2]
"""
def __init__(self, hp):
super().__init__()
self.embed = nn.Parameter(
torch.FloatTensor(hp.token_num, hp.token_embedding_size // hp.num_heads)
)
d_q = hp.ref_enc_gru_size
d_k = hp.token_embedding_size // hp.num_heads
self.attention = MultiHeadAttention(
query_dim=d_q,
key_dim=d_k,
num_units=hp.token_embedding_size,
num_heads=hp.num_heads,
)
init.normal_(self.embed, mean=0, std=0.5)
def forward(self, inputs):
N = inputs.size(0)
query = inputs.unsqueeze(1)
keys = (
torch.tanh(self.embed).unsqueeze(0).expand(N, -1, -1)
) # [N, token_num, token_embedding_size // num_heads]
style_embed = self.attention(query, keys)
return style_embed
class GST(nn.Module):
def __init__(self, hp):
super().__init__()
self.encoder = ReferenceEncoder(hp)
self.stl = STL(hp)
def forward(self, inputs, input_lengths=None):
enc_out = self.encoder(inputs, input_lengths=input_lengths)
style_embed = self.stl(enc_out)
return style_embed
DEFAULTS = HParams(
n_symbols=100,
symbols_embedding_dim=512,
mask_padding=True,
fp16_run=False,
n_mel_channels=80,
# encoder parameters
encoder_kernel_size=5,
encoder_n_convolutions=3,
encoder_embedding_dim=512,
# decoder parameters
n_frames_per_step=1, # currently only 1 is supported
decoder_rnn_dim=1024,
prenet_dim=256,
prenet_f0_n_layers=1,
prenet_f0_dim=1,
prenet_f0_kernel_size=1,
prenet_rms_dim=0,
prenet_fms_kernel_size=1,
max_decoder_steps=1000,
gate_threshold=0.5,
p_attention_dropout=0.1,
p_decoder_dropout=0.1,
p_teacher_forcing=1.0,
# attention parameters
attention_rnn_dim=1024,
attention_dim=128,
# location layer parameters
attention_location_n_filters=32,
attention_location_kernel_size=31,
# mel post-processing network parameters
postnet_embedding_dim=512,
postnet_kernel_size=5,
postnet_n_convolutions=5,
# speaker_embedding
n_speakers=123, # original nvidia libritts training
speaker_embedding_dim=128,
# reference encoder
with_gst=True,
ref_enc_filters=[32, 32, 64, 64, 128, 128],
ref_enc_size=[3, 3],
ref_enc_strides=[2, 2],
ref_enc_pad=[1, 1],
ref_enc_gru_size=128,
# style token layer
token_embedding_size=256,
token_num=10,
num_heads=8,
)
GST(DEFAULTS)
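# Smoke test (our own sketch): a random batch of 80-bin mel frames should
# yield one style embedding per utterance, of size token_embedding_size.
_gst = GST(DEFAULTS)
_style = _gst(torch.rand(2, 173, 80))
assert _style.shape == (2, 1, DEFAULTS.token_embedding_size)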
###Output
_____no_output_____
###Markdown
VITS common LayerNorm
###Code
# export
class LayerNorm(nn.Module):
def __init__(self, channels, eps=1e-5):
super().__init__()
self.channels = channels
self.eps = eps
self.gamma = nn.Parameter(torch.ones(channels))
self.beta = nn.Parameter(torch.zeros(channels))
def forward(self, x):
x = x.transpose(1, -1)
x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps)
return x.transpose(1, -1)
LayerNorm(3)
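# Minimal check (our addition): LayerNorm normalizes over the channel dim
# of a [B, C, T] tensor and leaves the shape unchanged.
assert LayerNorm(3)(torch.rand(2, 3, 5)).shape == (2, 3, 5)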
###Output
_____no_output_____
###Markdown
Flip
###Code
# export
class Flip(nn.Module):
def forward(self, x, *args, reverse=False, **kwargs):
x = torch.flip(x, [1])
if not reverse:
logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device)
return x, logdet
else:
return x
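# Quick check (our addition): Flip reverses the channel axis and, in the
# forward direction, returns a zero log-determinant (volume-preserving).
_x, _logdet = Flip()(torch.arange(6.).reshape(1, 2, 3))
assert torch.equal(_logdet, torch.zeros(1))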
###Output
_____no_output_____
###Markdown
Log
###Code
# export
class Log(nn.Module):
def forward(self, x, x_mask, reverse=False, **kwargs):
if not reverse:
y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask
logdet = torch.sum(-y, [1, 2])
return y, logdet
else:
x = torch.exp(x) * x_mask
return x
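# Round-trip check (our addition): the reverse pass (exp) undoes the
# forward pass (log) wherever the mask is one.
_x = torch.rand(1, 2, 4) + 0.1
_mask = torch.ones_like(_x)
_y, _ = Log()(_x, _mask)
assert torch.allclose(Log()(_y, _mask, reverse=True), _x)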
###Output
_____no_output_____
###Markdown
ElementWiseAffine
###Code
# export
class ElementwiseAffine(nn.Module):
def __init__(self, channels):
super().__init__()
self.channels = channels
self.m = nn.Parameter(torch.zeros(channels, 1))
self.logs = nn.Parameter(torch.zeros(channels, 1))
def forward(self, x, x_mask, reverse=False, **kwargs):
if not reverse:
y = self.m + torch.exp(self.logs) * x
y = y * x_mask
logdet = torch.sum(self.logs * x_mask, [1, 2])
return y, logdet
else:
x = (x - self.m) * torch.exp(-self.logs) * x_mask
return x
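# Invertibility check (our addition): with zero-initialized m and logs this
# flow is the identity and its log-determinant is zero.
_aff = ElementwiseAffine(2)
_x = torch.rand(1, 2, 4)
_mask = torch.ones_like(_x)
_y, _logdet = _aff(_x, _mask)
assert torch.allclose(_y, _x) and torch.allclose(_logdet, torch.zeros(1))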
###Output
_____no_output_____
###Markdown
DDSConv
###Code
# export
class DDSConv(nn.Module):
"""
Dilated and Depth-Separable Convolution
"""
def __init__(self, channels, kernel_size, n_layers, p_dropout=0.0):
super().__init__()
self.channels = channels
self.kernel_size = kernel_size
self.n_layers = n_layers
self.p_dropout = p_dropout
self.drop = nn.Dropout(p_dropout)
self.convs_sep = nn.ModuleList()
self.convs_1x1 = nn.ModuleList()
self.norms_1 = nn.ModuleList()
self.norms_2 = nn.ModuleList()
for i in range(n_layers):
dilation = kernel_size ** i
padding = (kernel_size * dilation - dilation) // 2
self.convs_sep.append(
nn.Conv1d(
channels,
channels,
kernel_size,
groups=channels,
dilation=dilation,
padding=padding,
)
)
self.convs_1x1.append(nn.Conv1d(channels, channels, 1))
self.norms_1.append(LayerNorm(channels))
self.norms_2.append(LayerNorm(channels))
def forward(self, x, x_mask, g=None):
if g is not None:
x = x + g
for i in range(self.n_layers):
y = self.convs_sep[i](x * x_mask)
y = self.norms_1[i](y)
y = F.gelu(y)
y = self.convs_1x1[i](y)
y = self.norms_2[i](y)
y = F.gelu(y)
y = self.drop(y)
x = x + y
return x * x_mask
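# Shape check (our addition): the dilated depth-separable stack is
# channel- and length-preserving under a [B, 1, T] mask.
_dds = DDSConv(4, kernel_size=3, n_layers=2)
assert _dds(torch.rand(1, 4, 7), torch.ones(1, 1, 7)).shape == (1, 4, 7)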
###Output
_____no_output_____
###Markdown
ConvFlow
###Code
# export
import math
from uberduck_ml_dev.models.transforms import piecewise_rational_quadratic_transform
class ConvFlow(nn.Module):
def __init__(
self,
in_channels,
filter_channels,
kernel_size,
n_layers,
num_bins=10,
# tail_bound=5.0,
tail_bound=10.0,
):
super().__init__()
self.in_channels = in_channels
self.filter_channels = filter_channels
self.kernel_size = kernel_size
self.n_layers = n_layers
self.num_bins = num_bins
self.tail_bound = tail_bound
self.half_channels = in_channels // 2
self.pre = nn.Conv1d(self.half_channels, filter_channels, 1)
self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.0)
self.proj = nn.Conv1d(
filter_channels, self.half_channels * (num_bins * 3 - 1), 1
)
self.proj.weight.data.zero_()
self.proj.bias.data.zero_()
def forward(self, x, x_mask, g=None, reverse=False):
x0, x1 = torch.split(x, [self.half_channels] * 2, 1)
h = self.pre(x0)
h = self.convs(h, x_mask, g=g)
h = self.proj(h) * x_mask
b, c, t = x0.shape
h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?]
unnormalized_widths = h[..., : self.num_bins] / math.sqrt(self.filter_channels)
unnormalized_heights = h[..., self.num_bins : 2 * self.num_bins] / math.sqrt(
self.filter_channels
)
unnormalized_derivatives = h[..., 2 * self.num_bins :]
x1, logabsdet = piecewise_rational_quadratic_transform(
x1,
unnormalized_widths,
unnormalized_heights,
unnormalized_derivatives,
inverse=reverse,
tails="linear",
tail_bound=self.tail_bound,
)
x = torch.cat([x0, x1], 1) * x_mask
logdet = torch.sum(logabsdet * x_mask, [1, 2])
if not reverse:
return x, logdet
else:
return x
cf = ConvFlow(192, 2, 3, 3)
# NOTE(zach): figure out the shape of the forward stuff.
# cf(torch.rand(2, 2, 1), torch.ones(2, 2, 1))
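# Working invocation (our own sketch): in_channels must be even; with the
# zero-initialized projection the flow starts near the identity.
_cf = ConvFlow(in_channels=4, filter_channels=8, kernel_size=3, n_layers=2)
_y, _logdet = _cf(torch.rand(1, 4, 5), torch.ones(1, 1, 5))
assert _y.shape == (1, 4, 5)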
###Output
_____no_output_____
###Markdown
WN
###Code
# export
from uberduck_ml_dev.utils.utils import fused_add_tanh_sigmoid_multiply
class WN(torch.nn.Module):
def __init__(
self,
hidden_channels,
kernel_size,
dilation_rate,
n_layers,
gin_channels=0,
p_dropout=0,
):
super(WN, self).__init__()
assert kernel_size % 2 == 1
self.hidden_channels = hidden_channels
self.kernel_size = kernel_size
self.dilation_rate = dilation_rate
self.n_layers = n_layers
self.gin_channels = gin_channels
self.p_dropout = p_dropout
self.in_layers = torch.nn.ModuleList()
self.res_skip_layers = torch.nn.ModuleList()
self.drop = nn.Dropout(p_dropout)
if gin_channels != 0:
cond_layer = nn.Conv1d(gin_channels, 2 * hidden_channels * n_layers, 1)
self.cond_layer = weight_norm(cond_layer, name="weight")
for i in range(n_layers):
dilation = dilation_rate ** i
padding = int((kernel_size * dilation - dilation) / 2)
in_layer = nn.Conv1d(
hidden_channels,
2 * hidden_channels,
kernel_size,
dilation=dilation,
padding=padding,
)
in_layer = weight_norm(in_layer, name="weight")
self.in_layers.append(in_layer)
# last one is not necessary
if i < n_layers - 1:
res_skip_channels = 2 * hidden_channels
else:
res_skip_channels = hidden_channels
res_skip_layer = nn.Conv1d(hidden_channels, res_skip_channels, 1)
res_skip_layer = weight_norm(res_skip_layer, name="weight")
self.res_skip_layers.append(res_skip_layer)
def forward(self, x, x_mask, g=None, **kwargs):
output = torch.zeros_like(x)
n_channels_tensor = torch.IntTensor([self.hidden_channels])
if g is not None:
g = self.cond_layer(g)
for i in range(self.n_layers):
x_in = self.in_layers[i](x)
if g is not None:
cond_offset = i * 2 * self.hidden_channels
g_l = g[:, cond_offset : cond_offset + 2 * self.hidden_channels, :]
else:
g_l = torch.zeros_like(x_in)
acts = fused_add_tanh_sigmoid_multiply(x_in, g_l, n_channels_tensor)
acts = self.drop(acts)
res_skip_acts = self.res_skip_layers[i](acts)
if i < self.n_layers - 1:
res_acts = res_skip_acts[:, : self.hidden_channels, :]
x = (x + res_acts) * x_mask
output = output + res_skip_acts[:, self.hidden_channels :, :]
else:
output = output + res_skip_acts
return output * x_mask
def remove_weight_norm(self):
if self.gin_channels != 0:
remove_weight_norm(self.cond_layer)
for l in self.in_layers:
remove_weight_norm(l)
for l in self.res_skip_layers:
remove_weight_norm(l)
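# Smoke test (our addition): the gated WaveNet-style stack preserves the
# [B, hidden_channels, T] shape.
_wn = WN(hidden_channels=4, kernel_size=3, dilation_rate=2, n_layers=2)
assert _wn(torch.rand(1, 4, 9), torch.ones(1, 1, 9)).shape == (1, 4, 9)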
###Output
_____no_output_____
###Markdown
ResidualCouplingLayer
###Code
# export
class ResidualCouplingLayer(nn.Module):
def __init__(
self,
channels,
hidden_channels,
kernel_size,
dilation_rate,
n_layers,
p_dropout=0,
gin_channels=0,
mean_only=False,
):
assert channels % 2 == 0, "channels should be divisible by 2"
super().__init__()
self.channels = channels
self.hidden_channels = hidden_channels
self.kernel_size = kernel_size
self.dilation_rate = dilation_rate
self.n_layers = n_layers
self.half_channels = channels // 2
self.mean_only = mean_only
self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1)
self.enc = WN(
hidden_channels,
kernel_size,
dilation_rate,
n_layers,
p_dropout=p_dropout,
gin_channels=gin_channels,
)
self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1)
self.post.weight.data.zero_()
self.post.bias.data.zero_()
def forward(self, x, x_mask, g=None, reverse=False):
x0, x1 = torch.split(x, [self.half_channels] * 2, 1)
h = self.pre(x0) * x_mask
h = self.enc(h, x_mask, g=g)
stats = self.post(h) * x_mask
if not self.mean_only:
m, logs = torch.split(stats, [self.half_channels] * 2, 1)
else:
m = stats
logs = torch.zeros_like(m)
if not reverse:
x1 = m + x1 * torch.exp(logs) * x_mask
x = torch.cat([x0, x1], 1)
logdet = torch.sum(logs, [1, 2])
return x, logdet
else:
x1 = (x1 - m) * torch.exp(-logs) * x_mask
x = torch.cat([x0, x1], 1)
return x
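# Invertibility check (our addition): the reverse pass undoes the forward
# pass; with the zero-initialized post conv this starts as the identity.
_rcl = ResidualCouplingLayer(4, 8, kernel_size=3, dilation_rate=1, n_layers=2)
_x = torch.rand(1, 4, 6)
_mask = torch.ones(1, 1, 6)
_y, _ = _rcl(_x, _mask)
assert torch.allclose(_rcl(_y, _mask, reverse=True), _x, atol=1e-6)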
###Output
_____no_output_____
###Markdown
ResBlock
###Code
# export
class ResBlock1(torch.nn.Module):
def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)):
super(ResBlock1, self).__init__()
self.convs1 = nn.ModuleList(
[
weight_norm(
nn.Conv1d(
channels,
channels,
kernel_size,
1,
dilation=dilation[0],
padding=get_padding(kernel_size, dilation[0]),
)
),
weight_norm(
nn.Conv1d(
channels,
channels,
kernel_size,
1,
dilation=dilation[1],
padding=get_padding(kernel_size, dilation[1]),
)
),
weight_norm(
nn.Conv1d(
channels,
channels,
kernel_size,
1,
dilation=dilation[2],
padding=get_padding(kernel_size, dilation[2]),
)
),
]
)
self.convs1.apply(init_weights)
self.convs2 = nn.ModuleList(
[
weight_norm(
nn.Conv1d(
channels,
channels,
kernel_size,
1,
dilation=1,
padding=get_padding(kernel_size, 1),
)
),
weight_norm(
nn.Conv1d(
channels,
channels,
kernel_size,
1,
dilation=1,
padding=get_padding(kernel_size, 1),
)
),
weight_norm(
nn.Conv1d(
channels,
channels,
kernel_size,
1,
dilation=1,
padding=get_padding(kernel_size, 1),
)
),
]
)
self.convs2.apply(init_weights)
def forward(self, x, x_mask=None):
for c1, c2 in zip(self.convs1, self.convs2):
xt = F.leaky_relu(x, LRELU_SLOPE)
if x_mask is not None:
xt = xt * x_mask
xt = c1(xt)
xt = F.leaky_relu(xt, LRELU_SLOPE)
if x_mask is not None:
xt = xt * x_mask
xt = c2(xt)
x = xt + x
if x_mask is not None:
x = x * x_mask
return x
def remove_weight_norm(self):
for l in self.convs1:
remove_weight_norm(l)
for l in self.convs2:
remove_weight_norm(l)
class ResBlock2(torch.nn.Module):
def __init__(self, channels, kernel_size=3, dilation=(1, 3)):
super(ResBlock2, self).__init__()
self.convs = nn.ModuleList(
[
weight_norm(
nn.Conv1d(
channels,
channels,
kernel_size,
1,
dilation=dilation[0],
padding=get_padding(kernel_size, dilation[0]),
)
),
weight_norm(
nn.Conv1d(
channels,
channels,
kernel_size,
1,
dilation=dilation[1],
padding=get_padding(kernel_size, dilation[1]),
)
),
]
)
self.convs.apply(init_weights)
def forward(self, x, x_mask=None):
for c in self.convs:
xt = F.leaky_relu(x, LRELU_SLOPE)
if x_mask is not None:
xt = xt * x_mask
xt = c(xt)
x = xt + x
if x_mask is not None:
x = x * x_mask
return x
def remove_weight_norm(self):
for l in self.convs:
remove_weight_norm(l)
# export
LRELU_SLOPE = 0.1
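# Smoke test (our addition): both residual blocks preserve [B, C, T].
_x = torch.rand(1, 8, 16)
assert ResBlock1(8)(_x).shape == (1, 8, 16)
assert ResBlock2(8)(_x).shape == (1, 8, 16)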
###Output
_____no_output_____
###Markdown
Table of Contents0.0.1 Conv1d0.0.2 Attentions0.0.3 STFT0.0.4 Global style tokens1 VITS common1.0.1 LayerNorm1.0.2 Flip1.0.3 Log1.0.4 ElementWiseAffine1.0.5 DDSConv1.0.6 ConvFLow1.0.7 WN1.0.8 ResidualCouplingLayer1.0.9 ResBlock
###Code
# default_exp models.common
# export
import numpy as np
from scipy.signal import get_window
import torch
from torch.autograd import Variable
from torch import nn
from torch.nn import functional as F
from torch.nn.utils import remove_weight_norm, weight_norm
from librosa.filters import mel as librosa_mel
from librosa.util import pad_center, tiny
from uberduck_ml_dev.utils.utils import *
from uberduck_ml_dev.vendor.tfcompat.hparam import HParams
###Output
_____no_output_____
###Markdown
Conv1d
###Code
# export
class Conv1d(nn.Module):
def __init__(
self,
in_channels,
out_channels,
kernel_size=1,
stride=1,
padding=None,
dilation=1,
bias=True,
w_init_gain="linear",
):
super().__init__()
if padding is None:
assert kernel_size % 2 == 1
padding = int(dilation * (kernel_size - 1) / 2)
self.conv = nn.Conv1d(
in_channels,
out_channels,
kernel_size=kernel_size,
stride=stride,
padding=padding,
dilation=dilation,
bias=bias,
)
nn.init.xavier_uniform_(
self.conv.weight, gain=nn.init.calculate_gain(w_init_gain)
)
def forward(self, signal):
return self.conv(signal)
# export
class LinearNorm(torch.nn.Module):
def __init__(self, in_dim, out_dim, bias=True, w_init_gain="linear"):
super().__init__()
self.linear_layer = torch.nn.Linear(in_dim, out_dim, bias=bias)
torch.nn.init.xavier_uniform_(
self.linear_layer.weight, gain=torch.nn.init.calculate_gain(w_init_gain)
)
def forward(self, x):
return self.linear_layer(x)
###Output
_____no_output_____
###Markdown
Attentions
###Code
# export
from numpy import finfo
class LocationLayer(nn.Module):
def __init__(self, attention_n_filters, attention_kernel_size, attention_dim):
super(LocationLayer, self).__init__()
padding = int((attention_kernel_size - 1) / 2)
self.location_conv = Conv1d(
2,
attention_n_filters,
kernel_size=attention_kernel_size,
padding=padding,
bias=False,
stride=1,
dilation=1,
)
self.location_dense = LinearNorm(
attention_n_filters, attention_dim, bias=False, w_init_gain="tanh"
)
def forward(self, attention_weights_cat):
processed_attention = self.location_conv(attention_weights_cat)
processed_attention = processed_attention.transpose(1, 2)
processed_attention = self.location_dense(processed_attention)
return processed_attention
class Attention(nn.Module):
def __init__(
self,
attention_rnn_dim,
embedding_dim,
attention_dim,
attention_location_n_filters,
attention_location_kernel_size,
fp16_run,
):
super(Attention, self).__init__()
self.query_layer = LinearNorm(
attention_rnn_dim, attention_dim, bias=False, w_init_gain="tanh"
)
self.memory_layer = LinearNorm(
embedding_dim, attention_dim, bias=False, w_init_gain="tanh"
)
self.v = LinearNorm(attention_dim, 1, bias=False)
self.location_layer = LocationLayer(
attention_location_n_filters, attention_location_kernel_size, attention_dim
)
if fp16_run:
self.score_mask_value = finfo("float16").min
else:
self.score_mask_value = -float("inf")
def get_alignment_energies(self, query, processed_memory, attention_weights_cat):
"""
PARAMS
------
query: decoder output (batch, n_mel_channels * n_frames_per_step)
processed_memory: processed encoder outputs (B, T_in, attention_dim)
attention_weights_cat: cumulative and prev. att weights (B, 2, max_time)
RETURNS
-------
alignment (batch, max_time)
"""
processed_query = self.query_layer(query.unsqueeze(1))
processed_attention_weights = self.location_layer(attention_weights_cat)
energies = self.v(
torch.tanh(processed_query + processed_attention_weights + processed_memory)
)
energies = energies.squeeze(-1)
return energies
def forward(
self,
attention_hidden_state,
memory,
processed_memory,
attention_weights_cat,
mask,
attention_weights=None,
):
"""
PARAMS
------
attention_hidden_state: attention rnn last output
memory: encoder outputs
processed_memory: processed encoder outputs
attention_weights_cat: previous and cummulative attention weights
mask: binary mask for padded data
"""
if attention_weights is None:
alignment = self.get_alignment_energies(
attention_hidden_state, processed_memory, attention_weights_cat
)
if mask is not None:
alignment.data.masked_fill_(mask, self.score_mask_value)
attention_weights = F.softmax(alignment, dim=1)
attention_context = torch.bmm(attention_weights.unsqueeze(1), memory)
attention_context = attention_context.squeeze(1)
return attention_context, attention_weights
from numpy import finfo
finfo("float16").min
F.pad(torch.rand(1, 3, 3), (2, 2), mode="reflect")
###Output
_____no_output_____
###Markdown
STFT
###Code
# export
class STFT:
"""adapted from Prem Seetharaman's https://github.com/pseeth/pytorch-stft"""
def __init__(
self,
filter_length=1024,
hop_length=256,
win_length=1024,
window="hann",
padding=None,
device="cpu",
rank=None,
):
self.filter_length = filter_length
self.hop_length = hop_length
self.win_length = win_length
self.window = window
self.forward_transform = None
scale = self.filter_length / self.hop_length
fourier_basis = np.fft.fft(np.eye(self.filter_length))
self.padding = padding or (filter_length // 2)
cutoff = int((self.filter_length / 2 + 1))
fourier_basis = np.vstack(
[np.real(fourier_basis[:cutoff, :]), np.imag(fourier_basis[:cutoff, :])]
)
if device == "cuda":
dev = torch.device(f"cuda:{rank}")
forward_basis = torch.cuda.FloatTensor(
fourier_basis[:, None, :], device=dev
)
inverse_basis = torch.cuda.FloatTensor(
np.linalg.pinv(scale * fourier_basis).T[:, None, :].astype(np.float32),
device=dev,
)
else:
forward_basis = torch.FloatTensor(fourier_basis[:, None, :])
inverse_basis = torch.FloatTensor(
np.linalg.pinv(scale * fourier_basis).T[:, None, :].astype(np.float32)
)
if window is not None:
assert filter_length >= win_length
# get window and zero center pad it to filter_length
fft_window = get_window(window, win_length, fftbins=True)
fft_window = pad_center(fft_window, filter_length)
fft_window = torch.from_numpy(fft_window).float()
if device == "cuda":
fft_window = fft_window.cuda(rank)
# window the bases
forward_basis *= fft_window
inverse_basis *= fft_window
self.fft_window = fft_window
self.forward_basis = forward_basis.float()
self.inverse_basis = inverse_basis.float()
def transform(self, input_data):
num_batches = input_data.size(0)
num_samples = input_data.size(1)
self.num_samples = num_samples
# similar to librosa, reflect-pad the input
input_data = input_data.view(num_batches, 1, num_samples)
input_data = F.pad(
input_data.unsqueeze(1),
(
self.padding,
self.padding,
0,
0,
),
mode="reflect",
)
input_data = input_data.squeeze(1)
forward_transform = F.conv1d(
input_data,
Variable(self.forward_basis, requires_grad=False),
stride=self.hop_length,
padding=0,
)
cutoff = self.filter_length // 2 + 1
real_part = forward_transform[:, :cutoff, :]
imag_part = forward_transform[:, cutoff:, :]
magnitude = torch.sqrt(real_part ** 2 + imag_part ** 2)
phase = torch.autograd.Variable(torch.atan2(imag_part.data, real_part.data))
return magnitude, phase
def inverse(self, magnitude, phase):
recombine_magnitude_phase = torch.cat(
[magnitude * torch.cos(phase), magnitude * torch.sin(phase)],
dim=1,
)
inverse_transform = F.conv_transpose1d(
recombine_magnitude_phase,
Variable(self.inverse_basis, requires_grad=False),
stride=self.hop_length,
padding=0,
)
if self.window is not None:
window_sum = window_sumsquare(
self.window,
magnitude.size(-1),
hop_length=self.hop_length,
win_length=self.win_length,
n_fft=self.filter_length,
dtype=np.float32,
)
# remove modulation effects
approx_nonzero_indices = torch.from_numpy(
np.where(window_sum > tiny(window_sum))[0]
)
window_sum = torch.autograd.Variable(
torch.from_numpy(window_sum), requires_grad=False
)
window_sum = window_sum.cuda() if magnitude.is_cuda else window_sum
inverse_transform[:, :, approx_nonzero_indices] /= window_sum[
approx_nonzero_indices
]
# scale by hop ratio
inverse_transform *= float(self.filter_length) / self.hop_length
inverse_transform = inverse_transform[:, :, int(self.filter_length / 2) :]
inverse_transform = inverse_transform[:, :, : -int(self.filter_length / 2) :]
return inverse_transform
def forward(self, input_data):
self.magnitude, self.phase = self.transform(input_data)
reconstruction = self.inverse(self.magnitude, self.phase)
return reconstruction
# export
class MelSTFT:
def __init__(
self,
filter_length=1024,
hop_length=256,
win_length=1024,
n_mel_channels=80,
sampling_rate=22050,
mel_fmin=0.0,
mel_fmax=8000.0,
device="cpu",
padding=None,
rank=None,
):
self.n_mel_channels = n_mel_channels
self.sampling_rate = sampling_rate
if padding is None:
padding = filter_length // 2
self.stft_fn = STFT(
filter_length,
hop_length,
win_length,
device=device,
rank=rank,
padding=padding,
)
mel_basis = librosa_mel(
sampling_rate, filter_length, n_mel_channels, mel_fmin, mel_fmax
)
mel_basis = torch.from_numpy(mel_basis).float()
if device == "cuda":
mel_basis = mel_basis.cuda()
self.mel_basis = mel_basis
def spectral_normalize(self, magnitudes):
output = dynamic_range_compression(magnitudes)
return output
def spectral_de_normalize(self, magnitudes):
output = dynamic_range_decompression(magnitudes)
return output
def spec_to_mel(self, spec):
mel_output = torch.matmul(self.mel_basis, spec)
mel_output = self.spectral_normalize(mel_output)
return mel_output
def spectrogram(self, y):
assert y.min() >= -1
assert y.max() <= 1
magnitudes, phases = self.stft_fn.transform(y)
return magnitudes.data
def mel_spectrogram(self, y, ref_level_db=20, magnitude_power=1.5):
"""Computes mel-spectrograms from a batch of waves
PARAMS
------
y: Variable(torch.FloatTensor) with shape (B, T) in range [-1, 1]
RETURNS
-------
mel_output: torch.FloatTensor of shape (B, n_mel_channels, T)
"""
assert y.min() >= -1
assert y.max() <= 1
magnitudes, phases = self.stft_fn.transform(y)
magnitudes = magnitudes.data
return self.spec_to_mel(magnitudes)
def griffin_lim(self, mel_spectrogram, n_iters=30):
mel_dec = self.spectral_de_normalize(mel_spectrogram)
# Float cast required for fp16 training.
mel_dec = mel_dec.transpose(0, 1).cpu().data.float()
spec_from_mel = torch.mm(mel_dec, self.mel_basis).transpose(0, 1)
spec_from_mel *= 1000
out = griffin_lim(spec_from_mel.unsqueeze(0), self.stft_fn, n_iters=n_iters)
return out
from IPython.display import Audio
stft = STFT()
mel_stft = MelSTFT()
mel = mel_stft.mel_spectrogram(torch.clip(torch.randn(1, 1000), -1, 1))
assert mel.shape[0] == 1
assert mel.shape[1] == 80
mel = torch.load("./test/fixtures/stevejobs-1.pt")
aud = mel_stft.griffin_lim(mel)
# hide
Audio(aud, rate=22050)
###Output
_____no_output_____
###Markdown
Global style tokens
###Code
# export
from torch.nn import init
class ReferenceEncoder(nn.Module):
"""
inputs --- [N, Ty/r, n_mels*r] mels
outputs --- [N, ref_enc_gru_size]
"""
def __init__(self, hp):
super().__init__()
K = len(hp.ref_enc_filters)
filters = [1] + hp.ref_enc_filters
convs = [
nn.Conv2d(
in_channels=filters[i],
out_channels=filters[i + 1],
kernel_size=(3, 3),
stride=(2, 2),
padding=(1, 1),
)
for i in range(K)
]
self.convs = nn.ModuleList(convs)
self.bns = nn.ModuleList(
[nn.BatchNorm2d(num_features=hp.ref_enc_filters[i]) for i in range(K)]
)
out_channels = self.calculate_channels(hp.n_mel_channels, 3, 2, 1, K)
self.gru = nn.GRU(
input_size=hp.ref_enc_filters[-1] * out_channels,
hidden_size=hp.ref_enc_gru_size,
batch_first=True,
)
self.n_mel_channels = hp.n_mel_channels
self.ref_enc_gru_size = hp.ref_enc_gru_size
def forward(self, inputs, input_lengths=None):
out = inputs.view(inputs.size(0), 1, -1, self.n_mel_channels)
for conv, bn in zip(self.convs, self.bns):
out = conv(out)
out = bn(out)
out = F.relu(out)
out = out.transpose(1, 2) # [N, Ty//2^K, 128, n_mels//2^K]
N, T = out.size(0), out.size(1)
out = out.contiguous().view(N, T, -1) # [N, Ty//2^K, 128*n_mels//2^K]
if input_lengths is not None:
input_lengths = torch.ceil(input_lengths.float() / 2 ** len(self.convs))
input_lengths = input_lengths.cpu().numpy().astype(int)
out = nn.utils.rnn.pack_padded_sequence(
out, input_lengths, batch_first=True, enforce_sorted=False
)
self.gru.flatten_parameters()
_, out = self.gru(out)
return out.squeeze(0)
def calculate_channels(self, L, kernel_size, stride, pad, n_convs):
for _ in range(n_convs):
L = (L - kernel_size + 2 * pad) // stride + 1
return L
class MultiHeadAttention(nn.Module):
"""
input:
query --- [N, T_q, query_dim]
key --- [N, T_k, key_dim]
output:
out --- [N, T_q, num_units]
"""
def __init__(self, query_dim, key_dim, num_units, num_heads):
super().__init__()
self.num_units = num_units
self.num_heads = num_heads
self.key_dim = key_dim
self.W_query = nn.Linear(
in_features=query_dim, out_features=num_units, bias=False
)
self.W_key = nn.Linear(in_features=key_dim, out_features=num_units, bias=False)
self.W_value = nn.Linear(
in_features=key_dim, out_features=num_units, bias=False
)
def forward(self, query, key):
querys = self.W_query(query) # [N, T_q, num_units]
keys = self.W_key(key) # [N, T_k, num_units]
values = self.W_value(key)
split_size = self.num_units // self.num_heads
querys = torch.stack(
torch.split(querys, split_size, dim=2), dim=0
) # [h, N, T_q, num_units/h]
keys = torch.stack(
torch.split(keys, split_size, dim=2), dim=0
) # [h, N, T_k, num_units/h]
values = torch.stack(
torch.split(values, split_size, dim=2), dim=0
) # [h, N, T_k, num_units/h]
# score = softmax(QK^T / (d_k ** 0.5))
scores = torch.matmul(querys, keys.transpose(2, 3)) # [h, N, T_q, T_k]
scores = scores / (self.key_dim ** 0.5)
scores = F.softmax(scores, dim=3)
# out = score * V
out = torch.matmul(scores, values) # [h, N, T_q, num_units/h]
out = torch.cat(torch.split(out, 1, dim=0), dim=3).squeeze(
0
) # [N, T_q, num_units]
return out
class STL(nn.Module):
"""
inputs --- [N, token_embedding_size//2]
"""
def __init__(self, hp):
super().__init__()
self.embed = nn.Parameter(
torch.FloatTensor(hp.token_num, hp.token_embedding_size // hp.num_heads)
)
d_q = hp.ref_enc_gru_size
d_k = hp.token_embedding_size // hp.num_heads
self.attention = MultiHeadAttention(
query_dim=d_q,
key_dim=d_k,
num_units=hp.token_embedding_size,
num_heads=hp.num_heads,
)
init.normal_(self.embed, mean=0, std=0.5)
def forward(self, inputs):
N = inputs.size(0)
query = inputs.unsqueeze(1)
keys = (
torch.tanh(self.embed).unsqueeze(0).expand(N, -1, -1)
) # [N, token_num, token_embedding_size // num_heads]
style_embed = self.attention(query, keys)
return style_embed
class GST(nn.Module):
def __init__(self, hp):
super().__init__()
self.encoder = ReferenceEncoder(hp)
self.stl = STL(hp)
def forward(self, inputs, input_lengths=None):
enc_out = self.encoder(inputs, input_lengths=input_lengths)
style_embed = self.stl(enc_out)
return style_embed
DEFAULTS = HParams(
n_symbols=100,
symbols_embedding_dim=512,
mask_padding=True,
fp16_run=False,
n_mel_channels=80,
# encoder parameters
encoder_kernel_size=5,
encoder_n_convolutions=3,
encoder_embedding_dim=512,
# decoder parameters
n_frames_per_step=1, # currently only 1 is supported
decoder_rnn_dim=1024,
prenet_dim=256,
prenet_f0_n_layers=1,
prenet_f0_dim=1,
prenet_f0_kernel_size=1,
prenet_rms_dim=0,
prenet_fms_kernel_size=1,
max_decoder_steps=1000,
gate_threshold=0.5,
p_attention_dropout=0.1,
p_decoder_dropout=0.1,
p_teacher_forcing=1.0,
# attention parameters
attention_rnn_dim=1024,
attention_dim=128,
# location layer parameters
attention_location_n_filters=32,
attention_location_kernel_size=31,
# mel post-processing network parameters
postnet_embedding_dim=512,
postnet_kernel_size=5,
postnet_n_convolutions=5,
# speaker_embedding
n_speakers=123, # original nvidia libritts training
speaker_embedding_dim=128,
# reference encoder
with_gst=True,
ref_enc_filters=[32, 32, 64, 64, 128, 128],
ref_enc_size=[3, 3],
ref_enc_strides=[2, 2],
ref_enc_pad=[1, 1],
ref_enc_gru_size=128,
# style token layer
token_embedding_size=256,
token_num=10,
num_heads=8,
)
GST(DEFAULTS)
###Output
_____no_output_____
###Markdown
VITS common LayerNorm
###Code
# export
class LayerNorm(nn.Module):
def __init__(self, channels, eps=1e-5):
super().__init__()
self.channels = channels
self.eps = eps
self.gamma = nn.Parameter(torch.ones(channels))
self.beta = nn.Parameter(torch.zeros(channels))
def forward(self, x):
x = x.transpose(1, -1)
x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps)
return x.transpose(1, -1)
LayerNorm(3)
###Output
_____no_output_____
###Markdown
Flip
###Code
# export
class Flip(nn.Module):
def forward(self, x, *args, reverse=False, **kwargs):
x = torch.flip(x, [1])
if not reverse:
logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device)
return x, logdet
else:
return x
###Output
_____no_output_____
###Markdown
Log
###Code
# export
class Log(nn.Module):
def forward(self, x, x_mask, reverse=False, **kwargs):
if not reverse:
y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask
logdet = torch.sum(-y, [1, 2])
return y, logdet
else:
x = torch.exp(x) * x_mask
return x
###Output
_____no_output_____
###Markdown
ElementWiseAffine
###Code
# export
class ElementwiseAffine(nn.Module):
def __init__(self, channels):
super().__init__()
self.channels = channels
self.m = nn.Parameter(torch.zeros(channels, 1))
self.logs = nn.Parameter(torch.zeros(channels, 1))
def forward(self, x, x_mask, reverse=False, **kwargs):
if not reverse:
y = self.m + torch.exp(self.logs) * x
y = y * x_mask
logdet = torch.sum(self.logs * x_mask, [1, 2])
return y, logdet
else:
x = (x - self.m) * torch.exp(-self.logs) * x_mask
return x
###Output
_____no_output_____
###Markdown
DDSConv
###Code
# export
class DDSConv(nn.Module):
"""
Dialted and Depth-Separable Convolution
"""
def __init__(self, channels, kernel_size, n_layers, p_dropout=0.0):
super().__init__()
self.channels = channels
self.kernel_size = kernel_size
self.n_layers = n_layers
self.p_dropout = p_dropout
self.drop = nn.Dropout(p_dropout)
self.convs_sep = nn.ModuleList()
self.convs_1x1 = nn.ModuleList()
self.norms_1 = nn.ModuleList()
self.norms_2 = nn.ModuleList()
for i in range(n_layers):
dilation = kernel_size ** i
padding = (kernel_size * dilation - dilation) // 2
self.convs_sep.append(
nn.Conv1d(
channels,
channels,
kernel_size,
groups=channels,
dilation=dilation,
padding=padding,
)
)
self.convs_1x1.append(nn.Conv1d(channels, channels, 1))
self.norms_1.append(LayerNorm(channels))
self.norms_2.append(LayerNorm(channels))
def forward(self, x, x_mask, g=None):
if g is not None:
x = x + g
for i in range(self.n_layers):
y = self.convs_sep[i](x * x_mask)
y = self.norms_1[i](y)
y = F.gelu(y)
y = self.convs_1x1[i](y)
y = self.norms_2[i](y)
y = F.gelu(y)
y = self.drop(y)
x = x + y
return x * x_mask
###Output
_____no_output_____
###Markdown
ConvFLow
###Code
# export
import math
from uberduck_ml_dev.models.transforms import piecewise_rational_quadratic_transform
class ConvFlow(nn.Module):
def __init__(
self,
in_channels,
filter_channels,
kernel_size,
n_layers,
num_bins=10,
# tail_bound=5.0,
tail_bound=10.0,
):
super().__init__()
self.in_channels = in_channels
self.filter_channels = filter_channels
self.kernel_size = kernel_size
self.n_layers = n_layers
self.num_bins = num_bins
self.tail_bound = tail_bound
self.half_channels = in_channels // 2
self.pre = nn.Conv1d(self.half_channels, filter_channels, 1)
self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.0)
self.proj = nn.Conv1d(
filter_channels, self.half_channels * (num_bins * 3 - 1), 1
)
self.proj.weight.data.zero_()
self.proj.bias.data.zero_()
def forward(self, x, x_mask, g=None, reverse=False):
x0, x1 = torch.split(x, [self.half_channels] * 2, 1)
h = self.pre(x0)
h = self.convs(h, x_mask, g=g)
h = self.proj(h) * x_mask
b, c, t = x0.shape
h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?]
unnormalized_widths = h[..., : self.num_bins] / math.sqrt(self.filter_channels)
unnormalized_heights = h[..., self.num_bins : 2 * self.num_bins] / math.sqrt(
self.filter_channels
)
unnormalized_derivatives = h[..., 2 * self.num_bins :]
x1, logabsdet = piecewise_rational_quadratic_transform(
x1,
unnormalized_widths,
unnormalized_heights,
unnormalized_derivatives,
inverse=reverse,
tails="linear",
tail_bound=self.tail_bound,
)
x = torch.cat([x0, x1], 1) * x_mask
logdet = torch.sum(logabsdet * x_mask, [1, 2])
if not reverse:
return x, logdet
else:
return x
cf = ConvFlow(192, 2, 3, 3)
# NOTE(zach): figure out the shape of the forward stuff.
# cf(torch.rand(2, 2, 1), torch.ones(2, 2, 1))
###Output
_____no_output_____
###Markdown
WN
###Code
# export
from uberduck_ml_dev.utils.utils import fused_add_tanh_sigmoid_multiply
class WN(torch.nn.Module):
def __init__(
self,
hidden_channels,
kernel_size,
dilation_rate,
n_layers,
gin_channels=0,
p_dropout=0,
):
super(WN, self).__init__()
assert kernel_size % 2 == 1
self.hidden_channels = hidden_channels
self.kernel_size = (kernel_size,)
self.dilation_rate = dilation_rate
self.n_layers = n_layers
self.gin_channels = gin_channels
self.p_dropout = p_dropout
self.in_layers = torch.nn.ModuleList()
self.res_skip_layers = torch.nn.ModuleList()
self.drop = nn.Dropout(p_dropout)
if gin_channels != 0:
cond_layer = nn.Conv1d(gin_channels, 2 * hidden_channels * n_layers, 1)
self.cond_layer = weight_norm(cond_layer, name="weight")
for i in range(n_layers):
dilation = dilation_rate ** i
padding = int((kernel_size * dilation - dilation) / 2)
in_layer = nn.Conv1d(
hidden_channels,
2 * hidden_channels,
kernel_size,
dilation=dilation,
padding=padding,
)
in_layer = weight_norm(in_layer, name="weight")
self.in_layers.append(in_layer)
# last one is not necessary
if i < n_layers - 1:
res_skip_channels = 2 * hidden_channels
else:
res_skip_channels = hidden_channels
res_skip_layer = nn.Conv1d(hidden_channels, res_skip_channels, 1)
res_skip_layer = weight_norm(res_skip_layer, name="weight")
self.res_skip_layers.append(res_skip_layer)
def forward(self, x, x_mask, g=None, **kwargs):
output = torch.zeros_like(x)
n_channels_tensor = torch.IntTensor([self.hidden_channels])
if g is not None:
g = self.cond_layer(g)
for i in range(self.n_layers):
x_in = self.in_layers[i](x)
if g is not None:
cond_offset = i * 2 * self.hidden_channels
g_l = g[:, cond_offset : cond_offset + 2 * self.hidden_channels, :]
else:
g_l = torch.zeros_like(x_in)
acts = fused_add_tanh_sigmoid_multiply(x_in, g_l, n_channels_tensor)
acts = self.drop(acts)
res_skip_acts = self.res_skip_layers[i](acts)
if i < self.n_layers - 1:
res_acts = res_skip_acts[:, : self.hidden_channels, :]
x = (x + res_acts) * x_mask
output = output + res_skip_acts[:, self.hidden_channels :, :]
else:
output = output + res_skip_acts
return output * x_mask
def remove_weight_norm(self):
if self.gin_channels != 0:
remove_weight_norm(self.cond_layer)
for l in self.in_layers:
remove_weight_norm(l)
for l in self.res_skip_layers:
remove_weight_norm(l)
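# A quick shape check with hypothetical demo values: with gin_channels=0 no
# conditioning layer is created, and g defaults to zeros inside forward().
wn_demo = WN(hidden_channels=4, kernel_size=3, dilation_rate=2, n_layers=2)
out_demo = wn_demo(torch.rand(1, 4, 9), torch.ones(1, 1, 9))
assert out_demo.shape == (1, 4, 9)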
###Output
_____no_output_____
###Markdown
ResidualCouplingLayer
###Code
# export
class ResidualCouplingLayer(nn.Module):
def __init__(
self,
channels,
hidden_channels,
kernel_size,
dilation_rate,
n_layers,
p_dropout=0,
gin_channels=0,
mean_only=False,
):
assert channels % 2 == 0, "channels should be divisible by 2"
super().__init__()
self.channels = channels
self.hidden_channels = hidden_channels
self.kernel_size = kernel_size
self.dilation_rate = dilation_rate
self.n_layers = n_layers
self.half_channels = channels // 2
self.mean_only = mean_only
self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1)
self.enc = WN(
hidden_channels,
kernel_size,
dilation_rate,
n_layers,
p_dropout=p_dropout,
gin_channels=gin_channels,
)
self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1)
self.post.weight.data.zero_()
self.post.bias.data.zero_()
def forward(self, x, x_mask, g=None, reverse=False):
x0, x1 = torch.split(x, [self.half_channels] * 2, 1)
h = self.pre(x0) * x_mask
h = self.enc(h, x_mask, g=g)
stats = self.post(h) * x_mask
if not self.mean_only:
m, logs = torch.split(stats, [self.half_channels] * 2, 1)
else:
m = stats
logs = torch.zeros_like(m)
if not reverse:
x1 = m + x1 * torch.exp(logs) * x_mask
x = torch.cat([x0, x1], 1)
logdet = torch.sum(logs, [1, 2])
return x, logdet
else:
x1 = (x1 - m) * torch.exp(-logs) * x_mask
x = torch.cat([x0, x1], 1)
return x
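# A small round-trip check with hypothetical demo values: the zero-initialized
# post projection makes the coupling start as the identity (logdet == 0 at init),
# and reverse=True inverts the forward pass up to float precision, since m and
# logs depend only on the unchanged half x0.
rcl_demo = ResidualCouplingLayer(
    channels=4, hidden_channels=8, kernel_size=3, dilation_rate=1, n_layers=2
)
x_demo = torch.rand(1, 4, 6)
mask_demo = torch.ones(1, 1, 6)
z_demo, logdet_demo = rcl_demo(x_demo, mask_demo)
x_rec = rcl_demo(z_demo, mask_demo, reverse=True)
assert torch.allclose(x_demo, x_rec, atol=1e-6)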
###Output
_____no_output_____
###Markdown
ResBlock
###Code
# export
class ResBlock1(torch.nn.Module):
def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)):
super(ResBlock1, self).__init__()
self.convs1 = nn.ModuleList(
[
weight_norm(
nn.Conv1d(
channels,
channels,
kernel_size,
1,
dilation=dilation[0],
padding=get_padding(kernel_size, dilation[0]),
)
),
weight_norm(
nn.Conv1d(
channels,
channels,
kernel_size,
1,
dilation=dilation[1],
padding=get_padding(kernel_size, dilation[1]),
)
),
weight_norm(
nn.Conv1d(
channels,
channels,
kernel_size,
1,
dilation=dilation[2],
padding=get_padding(kernel_size, dilation[2]),
)
),
]
)
self.convs1.apply(init_weights)
self.convs2 = nn.ModuleList(
[
weight_norm(
nn.Conv1d(
channels,
channels,
kernel_size,
1,
dilation=1,
padding=get_padding(kernel_size, 1),
)
),
weight_norm(
nn.Conv1d(
channels,
channels,
kernel_size,
1,
dilation=1,
padding=get_padding(kernel_size, 1),
)
),
weight_norm(
nn.Conv1d(
channels,
channels,
kernel_size,
1,
dilation=1,
padding=get_padding(kernel_size, 1),
)
),
]
)
self.convs2.apply(init_weights)
def forward(self, x, x_mask=None):
for c1, c2 in zip(self.convs1, self.convs2):
xt = F.leaky_relu(x, LRELU_SLOPE)
if x_mask is not None:
xt = xt * x_mask
xt = c1(xt)
xt = F.leaky_relu(xt, LRELU_SLOPE)
if x_mask is not None:
xt = xt * x_mask
xt = c2(xt)
x = xt + x
if x_mask is not None:
x = x * x_mask
return x
def remove_weight_norm(self):
for l in self.convs1:
remove_weight_norm(l)
for l in self.convs2:
remove_weight_norm(l)
class ResBlock2(torch.nn.Module):
def __init__(self, channels, kernel_size=3, dilation=(1, 3)):
super(ResBlock2, self).__init__()
self.convs = nn.ModuleList(
[
weight_norm(
nn.Conv1d(
channels,
channels,
kernel_size,
1,
dilation=dilation[0],
padding=get_padding(kernel_size, dilation[0]),
)
),
weight_norm(
nn.Conv1d(
channels,
channels,
kernel_size,
1,
dilation=dilation[1],
padding=get_padding(kernel_size, dilation[1]),
)
),
]
)
self.convs.apply(init_weights)
def forward(self, x, x_mask=None):
for c in self.convs:
xt = F.leaky_relu(x, LRELU_SLOPE)
if x_mask is not None:
xt = xt * x_mask
xt = c(xt)
x = xt + x
if x_mask is not None:
x = x * x_mask
return x
def remove_weight_norm(self):
for l in self.convs:
remove_weight_norm(l)
# export
LRELU_SLOPE = 0.1  # negative slope for F.leaky_relu in the ResBlock forward passes above
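# A shape-preservation check with hypothetical demo values. ResBlock1 relies on
# get_padding and init_weights, assumed to come from the star import of
# uberduck_ml_dev.utils.utils earlier in this notebook.
rb_demo = ResBlock1(channels=8, kernel_size=3)
y_demo = rb_demo(torch.rand(1, 8, 16))
assert y_demo.shape == (1, 8, 16)
rb_demo.remove_weight_norm()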
###Output
_____no_output_____
###Markdown
Table of Contents0.0.1 Conv1d0.0.2 Attentions0.0.3 STFT0.0.4 Global style tokens1 VITS common1.0.1 LayerNorm1.0.2 Flip1.0.3 Log1.0.4 ElementWiseAffine1.0.5 DDSConv1.0.6 ConvFLow1.0.7 WN1.0.8 ResidualCouplingLayer1.0.9 ResBlock
###Code
# default_exp models.common
# export
from librosa.filters import mel as librosa_mel
from librosa.util import pad_center, tiny
import numpy as np
from scipy.signal import get_window
import torch
from torch.autograd import Variable
from torch import nn
from torch.nn import functional as F
from torch.nn.utils import remove_weight_norm, weight_norm
from uberduck_ml_dev.utils.utils import *
from uberduck_ml_dev.vendor.tfcompat.hparam import HParams
###Output
_____no_output_____
###Markdown
Conv1d
###Code
# export
class Conv1d(nn.Module):
def __init__(
self,
in_channels,
out_channels,
kernel_size=1,
stride=1,
padding=None,
dilation=1,
bias=True,
w_init_gain="linear",
):
super().__init__()
if padding is None:
assert kernel_size % 2 == 1
padding = int(dilation * (kernel_size - 1) / 2)
self.conv = nn.Conv1d(
in_channels,
out_channels,
kernel_size=kernel_size,
stride=stride,
padding=padding,
dilation=dilation,
bias=bias,
)
nn.init.xavier_uniform_(
self.conv.weight, gain=nn.init.calculate_gain(w_init_gain)
)
def forward(self, signal):
return self.conv(signal)
# export
class LinearNorm(torch.nn.Module):
def __init__(self, in_dim, out_dim, bias=True, w_init_gain="linear"):
super().__init__()
self.linear_layer = torch.nn.Linear(in_dim, out_dim, bias=bias)
torch.nn.init.xavier_uniform_(
self.linear_layer.weight, gain=torch.nn.init.calculate_gain(w_init_gain)
)
def forward(self, x):
return self.linear_layer(x)
###Output
_____no_output_____
###Markdown
Attentions
###Code
# export
from numpy import finfo
class LocationLayer(nn.Module):
def __init__(self, attention_n_filters, attention_kernel_size, attention_dim):
super(LocationLayer, self).__init__()
padding = int((attention_kernel_size - 1) / 2)
self.location_conv = Conv1d(
2,
attention_n_filters,
kernel_size=attention_kernel_size,
padding=padding,
bias=False,
stride=1,
dilation=1,
)
self.location_dense = LinearNorm(
attention_n_filters, attention_dim, bias=False, w_init_gain="tanh"
)
def forward(self, attention_weights_cat):
processed_attention = self.location_conv(attention_weights_cat)
processed_attention = processed_attention.transpose(1, 2)
processed_attention = self.location_dense(processed_attention)
return processed_attention
class Attention(nn.Module):
def __init__(
self,
attention_rnn_dim,
embedding_dim,
attention_dim,
attention_location_n_filters,
attention_location_kernel_size,
fp16_run,
):
super(Attention, self).__init__()
self.query_layer = LinearNorm(
attention_rnn_dim, attention_dim, bias=False, w_init_gain="tanh"
)
self.memory_layer = LinearNorm(
embedding_dim, attention_dim, bias=False, w_init_gain="tanh"
)
self.v = LinearNorm(attention_dim, 1, bias=False)
self.location_layer = LocationLayer(
attention_location_n_filters, attention_location_kernel_size, attention_dim
)
if fp16_run:
self.score_mask_value = finfo("float16").min
else:
self.score_mask_value = -float("inf")
def get_alignment_energies(self, query, processed_memory, attention_weights_cat):
"""
PARAMS
------
query: decoder output (batch, n_mel_channels * n_frames_per_step)
processed_memory: processed encoder outputs (B, T_in, attention_dim)
attention_weights_cat: cumulative and prev. att weights (B, 2, max_time)
RETURNS
-------
alignment (batch, max_time)
"""
processed_query = self.query_layer(query.unsqueeze(1))
processed_attention_weights = self.location_layer(attention_weights_cat)
energies = self.v(
torch.tanh(processed_query + processed_attention_weights + processed_memory)
)
energies = energies.squeeze(-1)
return energies
def forward(
self,
attention_hidden_state,
memory,
processed_memory,
attention_weights_cat,
mask,
attention_weights=None,
):
"""
PARAMS
------
attention_hidden_state: attention rnn last output
memory: encoder outputs
processed_memory: processed encoder outputs
attention_weights_cat: previous and cummulative attention weights
mask: binary mask for padded data
"""
if attention_weights is None:
alignment = self.get_alignment_energies(
attention_hidden_state, processed_memory, attention_weights_cat
)
if mask is not None:
alignment.data.masked_fill_(mask, self.score_mask_value)
attention_weights = F.softmax(alignment, dim=1)
attention_context = torch.bmm(attention_weights.unsqueeze(1), memory)
attention_context = attention_context.squeeze(1)
return attention_context, attention_weights
from numpy import finfo
finfo("float16").min
F.pad(torch.rand(1, 3, 3), (2, 2), mode="reflect")
###Output
_____no_output_____
###Markdown
STFT
###Code
# export
class STFT:
"""adapted from Prem Seetharaman's https://github.com/pseeth/pytorch-stft"""
def __init__(
self,
filter_length=1024,
hop_length=256,
win_length=1024,
window="hann",
padding=None,
device="cpu",
rank=None,
):
self.filter_length = filter_length
self.hop_length = hop_length
self.win_length = win_length
self.window = window
self.forward_transform = None
scale = self.filter_length / self.hop_length
fourier_basis = np.fft.fft(np.eye(self.filter_length))
self.padding = padding or (filter_length // 2)
cutoff = int((self.filter_length / 2 + 1))
fourier_basis = np.vstack(
[np.real(fourier_basis[:cutoff, :]), np.imag(fourier_basis[:cutoff, :])]
)
if device == "cuda":
dev = torch.device(f"cuda:{rank}")
forward_basis = torch.cuda.FloatTensor(
fourier_basis[:, None, :], device=dev
)
inverse_basis = torch.cuda.FloatTensor(
np.linalg.pinv(scale * fourier_basis).T[:, None, :].astype(np.float32),
device=dev,
)
else:
forward_basis = torch.FloatTensor(fourier_basis[:, None, :])
inverse_basis = torch.FloatTensor(
np.linalg.pinv(scale * fourier_basis).T[:, None, :].astype(np.float32)
)
if window is not None:
assert filter_length >= win_length
# get window and zero center pad it to filter_length
fft_window = get_window(window, win_length, fftbins=True)
fft_window = pad_center(fft_window, filter_length)
fft_window = torch.from_numpy(fft_window).float()
if device == "cuda":
fft_window = fft_window.cuda(rank)
# window the bases
forward_basis *= fft_window
inverse_basis *= fft_window
self.fft_window = fft_window
self.forward_basis = forward_basis.float()
self.inverse_basis = inverse_basis.float()
def transform(self, input_data):
num_batches = input_data.size(0)
num_samples = input_data.size(1)
self.num_samples = num_samples
# similar to librosa, reflect-pad the input
input_data = input_data.view(num_batches, 1, num_samples)
input_data = F.pad(
input_data.unsqueeze(1),
(
self.padding,
self.padding,
0,
0,
),
mode="reflect",
)
input_data = input_data.squeeze(1)
forward_transform = F.conv1d(
input_data,
Variable(self.forward_basis, requires_grad=False),
stride=self.hop_length,
padding=0,
)
cutoff = self.filter_length // 2 + 1
real_part = forward_transform[:, :cutoff, :]
imag_part = forward_transform[:, cutoff:, :]
magnitude = torch.sqrt(real_part ** 2 + imag_part ** 2)
phase = torch.autograd.Variable(torch.atan2(imag_part.data, real_part.data))
return magnitude, phase
def inverse(self, magnitude, phase):
recombine_magnitude_phase = torch.cat(
[magnitude * torch.cos(phase), magnitude * torch.sin(phase)],
dim=1,
)
inverse_transform = F.conv_transpose1d(
recombine_magnitude_phase,
Variable(self.inverse_basis, requires_grad=False),
stride=self.hop_length,
padding=0,
)
if self.window is not None:
window_sum = window_sumsquare(
self.window,
magnitude.size(-1),
hop_length=self.hop_length,
win_length=self.win_length,
n_fft=self.filter_length,
dtype=np.float32,
)
# remove modulation effects
approx_nonzero_indices = torch.from_numpy(
np.where(window_sum > tiny(window_sum))[0]
)
window_sum = torch.autograd.Variable(
torch.from_numpy(window_sum), requires_grad=False
)
window_sum = window_sum.cuda() if magnitude.is_cuda else window_sum
inverse_transform[:, :, approx_nonzero_indices] /= window_sum[
approx_nonzero_indices
]
# scale by hop ratio
inverse_transform *= float(self.filter_length) / self.hop_length
inverse_transform = inverse_transform[:, :, int(self.filter_length / 2) :]
inverse_transform = inverse_transform[:, :, : -int(self.filter_length / 2) :]
return inverse_transform
def forward(self, input_data):
self.magnitude, self.phase = self.transform(input_data)
reconstruction = self.inverse(self.magnitude, self.phase)
return reconstruction
# export
class MelSTFT:
def __init__(
self,
filter_length=1024,
hop_length=256,
win_length=1024,
n_mel_channels=80,
sampling_rate=22050,
mel_fmin=0.0,
mel_fmax=8000.0,
device="cpu",
padding=None,
rank=None,
):
self.n_mel_channels = n_mel_channels
self.sampling_rate = sampling_rate
if padding is None:
padding = filter_length // 2
self.stft_fn = STFT(
filter_length,
hop_length,
win_length,
device=device,
rank=rank,
padding=padding,
)
mel_basis = librosa_mel(
sampling_rate, filter_length, n_mel_channels, mel_fmin, mel_fmax
)
mel_basis = torch.from_numpy(mel_basis).float()
if device == "cuda":
mel_basis = mel_basis.cuda()
self.mel_basis = mel_basis
def spectral_normalize(self, magnitudes):
output = dynamic_range_compression(magnitudes)
return output
def spectral_de_normalize(self, magnitudes):
output = dynamic_range_decompression(magnitudes)
return output
def spec_to_mel(self, spec):
mel_output = torch.matmul(self.mel_basis, spec)
mel_output = self.spectral_normalize(mel_output)
return mel_output
def spectrogram(self, y):
assert y.min() >= -1
assert y.max() <= 1
magnitudes, phases = self.stft_fn.transform(y)
return magnitudes.data
def mel_spectrogram(self, y, ref_level_db=20, magnitude_power=1.5):
"""Computes mel-spectrograms from a batch of waves
PARAMS
------
y: Variable(torch.FloatTensor) with shape (B, T) in range [-1, 1]
RETURNS
-------
mel_output: torch.FloatTensor of shape (B, n_mel_channels, T)
"""
assert y.min() >= -1
assert y.max() <= 1
magnitudes, phases = self.stft_fn.transform(y)
magnitudes = magnitudes.data
return self.spec_to_mel(magnitudes)
def griffin_lim(self, mel_spectrogram, n_iters=30):
mel_dec = self.spectral_de_normalize(mel_spectrogram)
# Float cast required for fp16 training.
mel_dec = mel_dec.transpose(0, 1).cpu().data.float()
spec_from_mel = torch.mm(mel_dec, self.mel_basis).transpose(0, 1)
spec_from_mel *= 1000
out = griffin_lim(spec_from_mel.unsqueeze(0), self.stft_fn, n_iters=n_iters)
return out
from IPython.display import Audio
stft = STFT()
mel_stft = MelSTFT()
mel = mel_stft.mel_spectrogram(torch.clip(torch.randn(1, 1000), -1, 1))
assert mel.shape[0] == 1
assert mel.shape[1] == 80
mel = torch.load("./test/fixtures/stevejobs-1.pt")
aud = mel_stft.griffin_lim(mel)
# hide
Audio(aud, rate=22050)
###Output
_____no_output_____
###Markdown
Global style tokens
###Code
# export
from torch.nn import init
class ReferenceEncoder(nn.Module):
"""
inputs --- [N, Ty/r, n_mels*r] mels
outputs --- [N, ref_enc_gru_size]
"""
def __init__(self, hp):
super().__init__()
K = len(hp.ref_enc_filters)
filters = [1] + hp.ref_enc_filters
convs = [
nn.Conv2d(
in_channels=filters[i],
out_channels=filters[i + 1],
kernel_size=(3, 3),
stride=(2, 2),
padding=(1, 1),
)
for i in range(K)
]
self.convs = nn.ModuleList(convs)
self.bns = nn.ModuleList(
[nn.BatchNorm2d(num_features=hp.ref_enc_filters[i]) for i in range(K)]
)
out_channels = self.calculate_channels(hp.n_mel_channels, 3, 2, 1, K)
self.gru = nn.GRU(
input_size=hp.ref_enc_filters[-1] * out_channels,
hidden_size=hp.ref_enc_gru_size,
batch_first=True,
)
self.n_mel_channels = hp.n_mel_channels
self.ref_enc_gru_size = hp.ref_enc_gru_size
def forward(self, inputs, input_lengths=None):
out = inputs.view(inputs.size(0), 1, -1, self.n_mel_channels)
for conv, bn in zip(self.convs, self.bns):
out = conv(out)
out = bn(out)
out = F.relu(out)
out = out.transpose(1, 2) # [N, Ty//2^K, 128, n_mels//2^K]
N, T = out.size(0), out.size(1)
out = out.contiguous().view(N, T, -1) # [N, Ty//2^K, 128*n_mels//2^K]
if input_lengths is not None:
input_lengths = torch.ceil(input_lengths.float() / 2 ** len(self.convs))
input_lengths = input_lengths.cpu().numpy().astype(int)
out = nn.utils.rnn.pack_padded_sequence(
out, input_lengths, batch_first=True, enforce_sorted=False
)
self.gru.flatten_parameters()
_, out = self.gru(out)
return out.squeeze(0)
def calculate_channels(self, L, kernel_size, stride, pad, n_convs):
for _ in range(n_convs):
L = (L - kernel_size + 2 * pad) // stride + 1
return L
class MultiHeadAttention(nn.Module):
"""
input:
query --- [N, T_q, query_dim]
key --- [N, T_k, key_dim]
output:
out --- [N, T_q, num_units]
"""
def __init__(self, query_dim, key_dim, num_units, num_heads):
super().__init__()
self.num_units = num_units
self.num_heads = num_heads
self.key_dim = key_dim
self.W_query = nn.Linear(
in_features=query_dim, out_features=num_units, bias=False
)
self.W_key = nn.Linear(in_features=key_dim, out_features=num_units, bias=False)
self.W_value = nn.Linear(
in_features=key_dim, out_features=num_units, bias=False
)
def forward(self, query, key):
querys = self.W_query(query) # [N, T_q, num_units]
keys = self.W_key(key) # [N, T_k, num_units]
values = self.W_value(key)
split_size = self.num_units // self.num_heads
querys = torch.stack(
torch.split(querys, split_size, dim=2), dim=0
) # [h, N, T_q, num_units/h]
keys = torch.stack(
torch.split(keys, split_size, dim=2), dim=0
) # [h, N, T_k, num_units/h]
values = torch.stack(
torch.split(values, split_size, dim=2), dim=0
) # [h, N, T_k, num_units/h]
# score = softmax(QK^T / (d_k ** 0.5))
scores = torch.matmul(querys, keys.transpose(2, 3)) # [h, N, T_q, T_k]
scores = scores / (self.key_dim ** 0.5)
scores = F.softmax(scores, dim=3)
# out = score * V
out = torch.matmul(scores, values) # [h, N, T_q, num_units/h]
out = torch.cat(torch.split(out, 1, dim=0), dim=3).squeeze(
0
) # [N, T_q, num_units]
return out
class STL(nn.Module):
"""
inputs --- [N, token_embedding_size//2]
"""
def __init__(self, hp):
super().__init__()
self.embed = nn.Parameter(
torch.FloatTensor(hp.token_num, hp.token_embedding_size // hp.num_heads)
)
d_q = hp.ref_enc_gru_size
d_k = hp.token_embedding_size // hp.num_heads
self.attention = MultiHeadAttention(
query_dim=d_q,
key_dim=d_k,
num_units=hp.token_embedding_size,
num_heads=hp.num_heads,
)
init.normal_(self.embed, mean=0, std=0.5)
def forward(self, inputs):
N = inputs.size(0)
query = inputs.unsqueeze(1)
keys = (
torch.tanh(self.embed).unsqueeze(0).expand(N, -1, -1)
) # [N, token_num, token_embedding_size // num_heads]
style_embed = self.attention(query, keys)
return style_embed
class GST(nn.Module):
def __init__(self, hp):
super().__init__()
self.encoder = ReferenceEncoder(hp)
self.stl = STL(hp)
def forward(self, inputs, input_lengths=None):
enc_out = self.encoder(inputs, input_lengths=input_lengths)
style_embed = self.stl(enc_out)
return style_embed
DEFAULTS = HParams(
n_symbols=100,
symbols_embedding_dim=512,
mask_padding=True,
fp16_run=False,
n_mel_channels=80,
# encoder parameters
encoder_kernel_size=5,
encoder_n_convolutions=3,
encoder_embedding_dim=512,
# decoder parameters
n_frames_per_step=1, # currently only 1 is supported
decoder_rnn_dim=1024,
prenet_dim=256,
prenet_f0_n_layers=1,
prenet_f0_dim=1,
prenet_f0_kernel_size=1,
prenet_rms_dim=0,
prenet_fms_kernel_size=1,
max_decoder_steps=1000,
gate_threshold=0.5,
p_attention_dropout=0.1,
p_decoder_dropout=0.1,
p_teacher_forcing=1.0,
# attention parameters
attention_rnn_dim=1024,
attention_dim=128,
# location layer parameters
attention_location_n_filters=32,
attention_location_kernel_size=31,
# mel post-processing network parameters
postnet_embedding_dim=512,
postnet_kernel_size=5,
postnet_n_convolutions=5,
# speaker_embedding
n_speakers=123, # original nvidia libritts training
speaker_embedding_dim=128,
# reference encoder
with_gst=True,
ref_enc_filters=[32, 32, 64, 64, 128, 128],
ref_enc_size=[3, 3],
ref_enc_strides=[2, 2],
ref_enc_pad=[1, 1],
ref_enc_gru_size=128,
# style token layer
token_embedding_size=256,
token_num=10,
num_heads=8,
)
GST(DEFAULTS)
###Output
_____no_output_____
###Markdown
VITS common LayerNorm
###Code
# export
class LayerNorm(nn.Module):
def __init__(self, channels, eps=1e-5):
super().__init__()
self.channels = channels
self.eps = eps
self.gamma = nn.Parameter(torch.ones(channels))
self.beta = nn.Parameter(torch.zeros(channels))
def forward(self, x):
x = x.transpose(1, -1)
x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps)
return x.transpose(1, -1)
LayerNorm(3)
###Output
_____no_output_____
###Markdown
Flip
###Code
# export
class Flip(nn.Module):
def forward(self, x, *args, reverse=False, **kwargs):
x = torch.flip(x, [1])
if not reverse:
logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device)
return x, logdet
else:
return x
###Output
_____no_output_____
###Markdown
Log
###Code
# export
class Log(nn.Module):
def forward(self, x, x_mask, reverse=False, **kwargs):
if not reverse:
y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask
logdet = torch.sum(-y, [1, 2])
return y, logdet
else:
x = torch.exp(x) * x_mask
return x
###Output
_____no_output_____
###Markdown
ElementWiseAffine
###Code
# export
class ElementwiseAffine(nn.Module):
def __init__(self, channels):
super().__init__()
self.channels = channels
self.m = nn.Parameter(torch.zeros(channels, 1))
self.logs = nn.Parameter(torch.zeros(channels, 1))
def forward(self, x, x_mask, reverse=False, **kwargs):
if not reverse:
y = self.m + torch.exp(self.logs) * x
y = y * x_mask
logdet = torch.sum(self.logs * x_mask, [1, 2])
return y, logdet
else:
x = (x - self.m) * torch.exp(-self.logs) * x_mask
return x
###Output
_____no_output_____
###Markdown
DDSConv
###Code
# export
class DDSConv(nn.Module):
"""
Dialted and Depth-Separable Convolution
"""
def __init__(self, channels, kernel_size, n_layers, p_dropout=0.0):
super().__init__()
self.channels = channels
self.kernel_size = kernel_size
self.n_layers = n_layers
self.p_dropout = p_dropout
self.drop = nn.Dropout(p_dropout)
self.convs_sep = nn.ModuleList()
self.convs_1x1 = nn.ModuleList()
self.norms_1 = nn.ModuleList()
self.norms_2 = nn.ModuleList()
for i in range(n_layers):
dilation = kernel_size ** i
padding = (kernel_size * dilation - dilation) // 2
self.convs_sep.append(
nn.Conv1d(
channels,
channels,
kernel_size,
groups=channels,
dilation=dilation,
padding=padding,
)
)
self.convs_1x1.append(nn.Conv1d(channels, channels, 1))
self.norms_1.append(LayerNorm(channels))
self.norms_2.append(LayerNorm(channels))
def forward(self, x, x_mask, g=None):
if g is not None:
x = x + g
for i in range(self.n_layers):
y = self.convs_sep[i](x * x_mask)
y = self.norms_1[i](y)
y = F.gelu(y)
y = self.convs_1x1[i](y)
y = self.norms_2[i](y)
y = F.gelu(y)
y = self.drop(y)
x = x + y
return x * x_mask
###Output
_____no_output_____
###Markdown
ConvFLow
###Code
# export
import math
from uberduck_ml_dev.models.transforms import piecewise_rational_quadratic_transform
class ConvFlow(nn.Module):
def __init__(
self,
in_channels,
filter_channels,
kernel_size,
n_layers,
num_bins=10,
# tail_bound=5.0,
tail_bound=10.0,
):
super().__init__()
self.in_channels = in_channels
self.filter_channels = filter_channels
self.kernel_size = kernel_size
self.n_layers = n_layers
self.num_bins = num_bins
self.tail_bound = tail_bound
self.half_channels = in_channels // 2
self.pre = nn.Conv1d(self.half_channels, filter_channels, 1)
self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.0)
self.proj = nn.Conv1d(
filter_channels, self.half_channels * (num_bins * 3 - 1), 1
)
self.proj.weight.data.zero_()
self.proj.bias.data.zero_()
def forward(self, x, x_mask, g=None, reverse=False):
x0, x1 = torch.split(x, [self.half_channels] * 2, 1)
h = self.pre(x0)
h = self.convs(h, x_mask, g=g)
h = self.proj(h) * x_mask
b, c, t = x0.shape
h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?]
unnormalized_widths = h[..., : self.num_bins] / math.sqrt(self.filter_channels)
unnormalized_heights = h[..., self.num_bins : 2 * self.num_bins] / math.sqrt(
self.filter_channels
)
unnormalized_derivatives = h[..., 2 * self.num_bins :]
x1, logabsdet = piecewise_rational_quadratic_transform(
x1,
unnormalized_widths,
unnormalized_heights,
unnormalized_derivatives,
inverse=reverse,
tails="linear",
tail_bound=self.tail_bound,
)
x = torch.cat([x0, x1], 1) * x_mask
logdet = torch.sum(logabsdet * x_mask, [1, 2])
if not reverse:
return x, logdet
else:
return x
cf = ConvFlow(192, 2, 3, 3)
# NOTE(zach): figure out the shape of the forward stuff.
# cf(torch.rand(2, 2, 1), torch.ones(2, 2, 1))
###Output
_____no_output_____
###Markdown
WN
###Code
# export
from uberduck_ml_dev.utils.utils import fused_add_tanh_sigmoid_multiply
class WN(torch.nn.Module):
def __init__(
self,
hidden_channels,
kernel_size,
dilation_rate,
n_layers,
gin_channels=0,
p_dropout=0,
):
super(WN, self).__init__()
assert kernel_size % 2 == 1
self.hidden_channels = hidden_channels
self.kernel_size = (kernel_size,)
self.dilation_rate = dilation_rate
self.n_layers = n_layers
self.gin_channels = gin_channels
self.p_dropout = p_dropout
self.in_layers = torch.nn.ModuleList()
self.res_skip_layers = torch.nn.ModuleList()
self.drop = nn.Dropout(p_dropout)
if gin_channels != 0:
cond_layer = nn.Conv1d(gin_channels, 2 * hidden_channels * n_layers, 1)
self.cond_layer = weight_norm(cond_layer, name="weight")
for i in range(n_layers):
dilation = dilation_rate ** i
padding = int((kernel_size * dilation - dilation) / 2)
in_layer = nn.Conv1d(
hidden_channels,
2 * hidden_channels,
kernel_size,
dilation=dilation,
padding=padding,
)
in_layer = weight_norm(in_layer, name="weight")
self.in_layers.append(in_layer)
# last one is not necessary
if i < n_layers - 1:
res_skip_channels = 2 * hidden_channels
else:
res_skip_channels = hidden_channels
res_skip_layer = nn.Conv1d(hidden_channels, res_skip_channels, 1)
res_skip_layer = weight_norm(res_skip_layer, name="weight")
self.res_skip_layers.append(res_skip_layer)
def forward(self, x, x_mask, g=None, **kwargs):
output = torch.zeros_like(x)
n_channels_tensor = torch.IntTensor([self.hidden_channels])
if g is not None:
g = self.cond_layer(g)
for i in range(self.n_layers):
x_in = self.in_layers[i](x)
if g is not None:
cond_offset = i * 2 * self.hidden_channels
g_l = g[:, cond_offset : cond_offset + 2 * self.hidden_channels, :]
else:
g_l = torch.zeros_like(x_in)
acts = fused_add_tanh_sigmoid_multiply(x_in, g_l, n_channels_tensor)
acts = self.drop(acts)
res_skip_acts = self.res_skip_layers[i](acts)
if i < self.n_layers - 1:
res_acts = res_skip_acts[:, : self.hidden_channels, :]
x = (x + res_acts) * x_mask
output = output + res_skip_acts[:, self.hidden_channels :, :]
else:
output = output + res_skip_acts
return output * x_mask
def remove_weight_norm(self):
if self.gin_channels != 0:
remove_weight_norm(self.cond_layer)
for l in self.in_layers:
remove_weight_norm(l)
for l in self.res_skip_layers:
remove_weight_norm(l)
###Output
_____no_output_____
###Markdown
ResidualCouplingLayer
###Code
# export
class ResidualCouplingLayer(nn.Module):
def __init__(
self,
channels,
hidden_channels,
kernel_size,
dilation_rate,
n_layers,
p_dropout=0,
gin_channels=0,
mean_only=False,
):
assert channels % 2 == 0, "channels should be divisible by 2"
super().__init__()
self.channels = channels
self.hidden_channels = hidden_channels
self.kernel_size = kernel_size
self.dilation_rate = dilation_rate
self.n_layers = n_layers
self.half_channels = channels // 2
self.mean_only = mean_only
self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1)
self.enc = WN(
hidden_channels,
kernel_size,
dilation_rate,
n_layers,
p_dropout=p_dropout,
gin_channels=gin_channels,
)
self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1)
self.post.weight.data.zero_()
self.post.bias.data.zero_()
def forward(self, x, x_mask, g=None, reverse=False):
x0, x1 = torch.split(x, [self.half_channels] * 2, 1)
h = self.pre(x0) * x_mask
h = self.enc(h, x_mask, g=g)
stats = self.post(h) * x_mask
if not self.mean_only:
m, logs = torch.split(stats, [self.half_channels] * 2, 1)
else:
m = stats
logs = torch.zeros_like(m)
if not reverse:
x1 = m + x1 * torch.exp(logs) * x_mask
x = torch.cat([x0, x1], 1)
logdet = torch.sum(logs, [1, 2])
return x, logdet
else:
x1 = (x1 - m) * torch.exp(-logs) * x_mask
x = torch.cat([x0, x1], 1)
return x
###Output
_____no_output_____
###Markdown
ResBlock
###Code
# export
class ResBlock1(torch.nn.Module):
    def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)):
        super(ResBlock1, self).__init__()
        self.convs1 = nn.ModuleList(
            [
                weight_norm(
                    nn.Conv1d(
                        channels,
                        channels,
                        kernel_size,
                        1,
                        dilation=dilation[0],
                        padding=get_padding(kernel_size, dilation[0]),
                    )
                ),
                weight_norm(
                    nn.Conv1d(
                        channels,
                        channels,
                        kernel_size,
                        1,
                        dilation=dilation[1],
                        padding=get_padding(kernel_size, dilation[1]),
                    )
                ),
                weight_norm(
                    nn.Conv1d(
                        channels,
                        channels,
                        kernel_size,
                        1,
                        dilation=dilation[2],
                        padding=get_padding(kernel_size, dilation[2]),
                    )
                ),
            ]
        )
        self.convs1.apply(init_weights)
        self.convs2 = nn.ModuleList(
            [
                weight_norm(
                    nn.Conv1d(
                        channels,
                        channels,
                        kernel_size,
                        1,
                        dilation=1,
                        padding=get_padding(kernel_size, 1),
                    )
                ),
                weight_norm(
                    nn.Conv1d(
                        channels,
                        channels,
                        kernel_size,
                        1,
                        dilation=1,
                        padding=get_padding(kernel_size, 1),
                    )
                ),
                weight_norm(
                    nn.Conv1d(
                        channels,
                        channels,
                        kernel_size,
                        1,
                        dilation=1,
                        padding=get_padding(kernel_size, 1),
                    )
                ),
            ]
        )
        self.convs2.apply(init_weights)

    def forward(self, x, x_mask=None):
        for c1, c2 in zip(self.convs1, self.convs2):
            xt = F.leaky_relu(x, LRELU_SLOPE)
            if x_mask is not None:
                xt = xt * x_mask
            xt = c1(xt)
            xt = F.leaky_relu(xt, LRELU_SLOPE)
            if x_mask is not None:
                xt = xt * x_mask
            xt = c2(xt)
            x = xt + x
        if x_mask is not None:
            x = x * x_mask
        return x

    def remove_weight_norm(self):
        for l in self.convs1:
            remove_weight_norm(l)
        for l in self.convs2:
            remove_weight_norm(l)

class ResBlock2(torch.nn.Module):
    def __init__(self, channels, kernel_size=3, dilation=(1, 3)):
        super(ResBlock2, self).__init__()
        self.convs = nn.ModuleList(
            [
                weight_norm(
                    nn.Conv1d(
                        channels,
                        channels,
                        kernel_size,
                        1,
                        dilation=dilation[0],
                        padding=get_padding(kernel_size, dilation[0]),
                    )
                ),
                weight_norm(
                    nn.Conv1d(
                        channels,
                        channels,
                        kernel_size,
                        1,
                        dilation=dilation[1],
                        padding=get_padding(kernel_size, dilation[1]),
                    )
                ),
            ]
        )
        self.convs.apply(init_weights)

    def forward(self, x, x_mask=None):
        for c in self.convs:
            xt = F.leaky_relu(x, LRELU_SLOPE)
            if x_mask is not None:
                xt = xt * x_mask
            xt = c(xt)
            x = xt + x
        if x_mask is not None:
            x = x * x_mask
        return x

    def remove_weight_norm(self):
        for l in self.convs:
            remove_weight_norm(l)
# export
LRELU_SLOPE = 0.1
###Output
_____no_output_____
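###Markdown
A quick shape check (sketch): both residual blocks use `get_padding` so that every dilated convolution preserves the time dimension, which is what makes the `xt + x` residual additions valid.
###Code
block = ResBlock1(channels=32, kernel_size=3, dilation=(1, 3, 5))
x = torch.randn(2, 32, 100)
print(block(x).shape)  # torch.Size([2, 32, 100]) — (batch, channels, time) is preserved
###Output
_____no_output_____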
|
silver/.ipynb_checkpoints/C02_Quantum_States_With_Complex_Numbers-checkpoint.ipynb
|
###Markdown
prepared by Maksim Dimitrijev (QLatvia) This cell contains some macros. If there is a problem with displaying mathematical formulas, please run this cell to load these macros. $ \newcommand{\bra}[1]{\langle 1|} $$ \newcommand{\ket}[1]{|1\rangle} $$ \newcommand{\braket}[2]{\langle 1|2\rangle} $$ \newcommand{\dot}[2]{ 1 \cdot 2} $$ \newcommand{\biginner}[2]{\left\langle 1,2\right\rangle} $$ \newcommand{\mymatrix}[2]{\left( \begin{array}{1} 2\end{array} \right)} $$ \newcommand{\myvector}[1]{\mymatrix{c}{1}} $$ \newcommand{\myrvector}[1]{\mymatrix{r}{1}} $$ \newcommand{\mypar}[1]{\left( 1 \right)} $$ \newcommand{\mybigpar}[1]{ \Big( 1 \Big)} $$ \newcommand{\sqrttwo}{\frac{1}{\sqrt{2}}} $$ \newcommand{\dsqrttwo}{\dfrac{1}{\sqrt{2}}} $$ \newcommand{\onehalf}{\frac{1}{2}} $$ \newcommand{\donehalf}{\dfrac{1}{2}} $$ \newcommand{\hadamard}{ \mymatrix{rr}{ \sqrttwo & \sqrttwo \\ \sqrttwo & -\sqrttwo }} $$ \newcommand{\vzero}{\myvector{1\\0}} $$ \newcommand{\vone}{\myvector{0\\1}} $$ \newcommand{\stateplus}{\myvector{ \sqrttwo \\ \sqrttwo } } $$ \newcommand{\stateminus}{ \myrvector{ \sqrttwo \\ -\sqrttwo } } $$ \newcommand{\myarray}[2]{ \begin{array}{1}2\end{array}} $$ \newcommand{\X}{ \mymatrix{cc}{0 & 1 \\ 1 & 0} } $$ \newcommand{\Z}{ \mymatrix{rr}{1 & 0 \\ 0 & -1} } $$ \newcommand{\Htwo}{ \mymatrix{rrrr}{ \frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & \frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} } } $$ \newcommand{\CNOT}{ \mymatrix{cccc}{1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0} } $$ \newcommand{\norm}[1]{ \left\lVert 1 \right\rVert } $$ \newcommand{\pstate}[1]{ \lceil \mspace{-1mu} 1 \mspace{-1.5mu} \rfloor } $ Quantum states with complex numbers The main properties of quantum states do not change whether we are using complex numbers or not. Let's recall the definition we had in Bronze:**Recall: Quantum states with real numbers**When a quantum system is measured, the probability of observing one state is the square of its value.The summation of amplitude squares must be 1 for a valid quantum state.The second property also means that the overall probability must be 1 when we observe a quantum system. If we consider a quantum system as a vector, then the length of such vector should be 1. How complex numbers affect probabilitiesSuppose that we have a quantum state with the amplitude $a+bi$. What is the probability to observe such state when the quantum system is measured? We need a small update to our statement about the probability of the measurement - it is equal to the square of the absolute value of the amplitude. If amplitudes are restricted to real numbers, then this update makes no difference. With complex numbers we obtain the following:$\mathopen|a+bi\mathclose| = \sqrt{a^2+b^2} \implies \mathopen|a+bi\mathclose|^2 = a^2+b^2$.It is easy to see that this calculation works fine if we do not have imaginary part - we just obtain the real part $a^2$. Notice that for the probability $a^2 + b^2$ both real and imaginary part contribute in a similar way - with the square of its value.<!--Let's check the square of the complex number:$(a+bi)^2 = (a+bi)(a+bi) = a^2 + 2abi + b^2i^2 = (a^2-b^2) + 2abi$.In such case we still obtain a complex number, but for a probability we need a real number. 
-->Suppose that we have the following vector, representing a quantum system:$$ \myvector{ \frac{1+i}{\sqrt{3}} \\ -\frac{1}{\sqrt{3}} }.$$This vector represents the state $\frac{1+i}{\sqrt{3}}\ket{0} - \frac{1}{\sqrt{3}}\ket{1}$. After a measurement, we observe state $\ket{1}$ with probability $\mypar{-\frac{1}{\sqrt{3}}}^2 = \frac{1}{3}$. Let's decompose the amplitude of state $\ket{0}$ into the form $a+bi$. Then we obtain $\frac{1}{\sqrt{3}} + \frac{1}{\sqrt{3}}i$, and so the probability of observing state $\ket{0}$ is $\mypar{\frac{1}{\sqrt{3}}}^2 + \mypar{\frac{1}{\sqrt{3}}}^2 = \frac{2}{3}$.
Task 1 Calculate on paper the probabilities to observe state $\ket{0}$ and $\ket{1}$ for each quantum system:$$ \myvector{ \frac{1-i\sqrt{2}}{2} \\ \frac{i}{2} } \mbox{ , } \myvector{ \frac{2i}{\sqrt{6}} \\ \frac{1-i}{\sqrt{6}} } \mbox{ and } \myvector{ \frac{1+i\sqrt{3}}{\sqrt{5}} \\ \frac{-i}{\sqrt{5}} }.$$ click for our solution
Task 2 If the following vectors are valid quantum states, then what can be the values of $a$ and $b$?$$ \ket{v} = \myrvector{0.1 - ai \\ -0.7 \\ 0.4 + 0.3i } ~~~~~ \mbox{and} ~~~~~ \ket{u} = \myrvector{ \frac{1-i}{\sqrt{6}} \\ \frac{1+2i}{\sqrt{b}} \\ -\frac{1}{\sqrt{4}} }.$$
###Code
#
# your code is here
# (you may find the values by hand (in mind) as well)
#
###Output
_____no_output_____
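###Markdown
A quick numerical cross-check for Task 1 (a sketch — Python's built-in complex type is enough here, since `abs()` returns the modulus):
###Code
states = [[(1 - 1j*2**0.5)/2, 1j/2],
          [2j/6**0.5, (1 - 1j)/6**0.5],
          [(1 + 1j*3**0.5)/5**0.5, -1j/5**0.5]]
for state in states:
    probs = [abs(a)**2 for a in state]
    print([round(p, 4) for p in probs], 'sum =', round(sum(probs), 4))
###Output
_____no_output_____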
###Markdown
click for our solution Task 3
Randomly create a 2-dimensional quantum state, where both amplitudes are complex numbers. Write a function that returns a randomly created 2-dimensional quantum state.
Hint:
- Pick four random values between -100 and 100 for the real and imaginary parts of the amplitudes of state 0 and state 1
- Find an appropriate normalization factor to divide each amplitude by so that the length of the quantum state is 1
Repeat several times:
- Randomly pick a quantum state
- Check whether the picked quantum state is valid
_Note:_ You can store your function to use it later by uncommenting the first line.
###Code
#%%writefile random_complex_quantum_state.py
from random import randrange
def random_complex_quantum_state():
    # quantum state
    quantum_state = [0, 0]
    #
    #
    #
    return quantum_state
%%writefile is_quantum_state.py
# testing whether a given quantum state is valid
def is_quantum_state(quantum_state):
    #
    # your code is here
    #
#Use the functions you have written to randomly generate and check quantum states
#
# your code is here
#
###Output
_____no_output_____
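###Markdown
A minimal sketch of one possible solution (an assumption — the official solution is linked above; the `_sketch` names are hypothetical): draw four random integers for the real and imaginary parts, then divide by the vector's length so the amplitude squares sum to 1.
###Code
from random import randrange
def random_complex_quantum_state_sketch():
    norm = 0
    while norm == 0:  # guard against the (very unlikely) all-zero draw
        amplitudes = [complex(randrange(-100, 101), randrange(-100, 101)) for _ in range(2)]
        norm = sum(abs(a)**2 for a in amplitudes) ** 0.5
    return [a / norm for a in amplitudes]
def is_quantum_state_sketch(quantum_state):
    # valid if the squared moduli sum to 1 (up to floating-point error)
    return abs(sum(abs(a)**2 for a in quantum_state) - 1) < 1e-9
for _ in range(5):
    state = random_complex_quantum_state_sketch()
    print(state, is_quantum_state_sketch(state))
###Output
_____no_output_____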
|
tensorflow/TF_Keras_ResNet101_Filter_Shape_Weights_and_MaxPooling_Strided_Convolution_on_Odd_Input.ipynb
|
###Markdown
Setup ResNet101 model
###Code
!wget "https://upload.wikimedia.org/wikipedia/commons/thumb/3/37/African_Bush_Elephant.jpg/1200px-African_Bush_Elephant.jpg" -O "elephant.jpg"
from tensorflow.keras.applications import ResNet101
from tensorflow.keras.preprocessing import image
from tensorflow.keras.applications.resnet import preprocess_input, decode_predictions
import numpy as np
model = ResNet101(weights='imagenet')
img_path = 'elephant.jpg'
img = image.load_img(img_path, target_size=(224, 224))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
preds = model.predict(x)
# decode the results into a list of tuples (class, description, probability)
# (one such list for each sample in the batch)
print('Predicted:', decode_predictions(preds, top=3)[0])
model.summary()
print(len(model.layers[2].weights))
print(model.layers[2].name)
print(model.layers[2].weights[0].shape)
print(model.layers[2].weights[1].shape)
print(model.layers[2].name)
print(model.layers[2].weights)
print(model.layers[7].name)
print(model.layers[7].weights)
###Output
conv2_block1_1_conv
[<tf.Variable 'conv2_block1_1_conv/kernel:0' shape=(1, 1, 64, 64) dtype=float32, numpy=
array([[[[ 0.00160399, -0.00254265, -0.01731922, ..., -0.01651942,
0.01513864, -0.0354646 ],
[ 0.01515921, -0.00627845, 0.00166968, ..., 0.01461547,
0.03117803, 0.11374122],
[ 0.00282309, 0.00229593, 0.03237155, ..., 0.0035359 ,
-0.03562421, -0.00834576],
...,
[-0.02377483, -0.0010978 , -0.02749332, ..., -0.08129182,
-0.00911469, -0.04912051],
[ 0.02666844, 0.00969497, 0.07066278, ..., -0.02592005,
-0.01759226, -0.02110136],
[-0.0223264 , 0.00080224, -0.08305199, ..., 0.006657 ,
0.0217971 , -0.03367427]]]], dtype=float32)>, <tf.Variable 'conv2_block1_1_conv/bias:0' shape=(64,) dtype=float32, numpy=
array([0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], dtype=float32)>]
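###Markdown
Note the bias vector above is all zeros: in these pretrained weights every conv is immediately followed by a BatchNorm layer, whose learned shift makes a conv bias redundant. A quick check (sketch) of how widely that holds:
###Code
# Count the conv layers whose bias is zero everywhere
n_zero_bias = sum(1 for l in model.layers
                  if l.name.endswith('_conv') and np.allclose(l.weights[1].numpy(), 0))
print(n_zero_bias)
###Output
_____no_output_____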
###Markdown
Enumerate layers, get filter shapes and counts
###Code
for i, layer in enumerate(model.layers):
    print(layer.name)
    if layer.weights:
        print(layer.weights[0].shape)
        print(layer.weights[1].shape)
    print('-' * 30)
###Output
input_3
------------------------------
conv1_pad
------------------------------
conv1_conv
(7, 7, 3, 64)
(64,)
------------------------------
conv1_bn
(64,)
(64,)
------------------------------
conv1_relu
------------------------------
pool1_pad
------------------------------
pool1_pool
------------------------------
conv2_block1_1_conv
(1, 1, 64, 64)
(64,)
------------------------------
conv2_block1_1_bn
(64,)
(64,)
------------------------------
conv2_block1_1_relu
------------------------------
conv2_block1_2_conv
(3, 3, 64, 64)
(64,)
------------------------------
conv2_block1_2_bn
(64,)
(64,)
------------------------------
conv2_block1_2_relu
------------------------------
conv2_block1_0_conv
(1, 1, 64, 256)
(256,)
------------------------------
conv2_block1_3_conv
(1, 1, 64, 256)
(256,)
------------------------------
conv2_block1_0_bn
(256,)
(256,)
------------------------------
conv2_block1_3_bn
(256,)
(256,)
------------------------------
conv2_block1_add
------------------------------
conv2_block1_out
------------------------------
conv2_block2_1_conv
(1, 1, 256, 64)
(64,)
------------------------------
conv2_block2_1_bn
(64,)
(64,)
------------------------------
conv2_block2_1_relu
------------------------------
conv2_block2_2_conv
(3, 3, 64, 64)
(64,)
------------------------------
conv2_block2_2_bn
(64,)
(64,)
------------------------------
conv2_block2_2_relu
------------------------------
conv2_block2_3_conv
(1, 1, 64, 256)
(256,)
------------------------------
conv2_block2_3_bn
(256,)
(256,)
------------------------------
conv2_block2_add
------------------------------
conv2_block2_out
------------------------------
conv2_block3_1_conv
(1, 1, 256, 64)
(64,)
------------------------------
conv2_block3_1_bn
(64,)
(64,)
------------------------------
conv2_block3_1_relu
------------------------------
conv2_block3_2_conv
(3, 3, 64, 64)
(64,)
------------------------------
conv2_block3_2_bn
(64,)
(64,)
------------------------------
conv2_block3_2_relu
------------------------------
conv2_block3_3_conv
(1, 1, 64, 256)
(256,)
------------------------------
conv2_block3_3_bn
(256,)
(256,)
------------------------------
conv2_block3_add
------------------------------
conv2_block3_out
------------------------------
conv3_block1_1_conv
(1, 1, 256, 128)
(128,)
------------------------------
conv3_block1_1_bn
(128,)
(128,)
------------------------------
conv3_block1_1_relu
------------------------------
conv3_block1_2_conv
(3, 3, 128, 128)
(128,)
------------------------------
conv3_block1_2_bn
(128,)
(128,)
------------------------------
conv3_block1_2_relu
------------------------------
conv3_block1_0_conv
(1, 1, 256, 512)
(512,)
------------------------------
conv3_block1_3_conv
(1, 1, 128, 512)
(512,)
------------------------------
conv3_block1_0_bn
(512,)
(512,)
------------------------------
conv3_block1_3_bn
(512,)
(512,)
------------------------------
conv3_block1_add
------------------------------
conv3_block1_out
------------------------------
conv3_block2_1_conv
(1, 1, 512, 128)
(128,)
------------------------------
conv3_block2_1_bn
(128,)
(128,)
------------------------------
conv3_block2_1_relu
------------------------------
conv3_block2_2_conv
(3, 3, 128, 128)
(128,)
------------------------------
conv3_block2_2_bn
(128,)
(128,)
------------------------------
conv3_block2_2_relu
------------------------------
conv3_block2_3_conv
(1, 1, 128, 512)
(512,)
------------------------------
conv3_block2_3_bn
(512,)
(512,)
------------------------------
conv3_block2_add
------------------------------
conv3_block2_out
------------------------------
conv3_block3_1_conv
(1, 1, 512, 128)
(128,)
------------------------------
conv3_block3_1_bn
(128,)
(128,)
------------------------------
conv3_block3_1_relu
------------------------------
conv3_block3_2_conv
(3, 3, 128, 128)
(128,)
------------------------------
conv3_block3_2_bn
(128,)
(128,)
------------------------------
conv3_block3_2_relu
------------------------------
conv3_block3_3_conv
(1, 1, 128, 512)
(512,)
------------------------------
conv3_block3_3_bn
(512,)
(512,)
------------------------------
conv3_block3_add
------------------------------
conv3_block3_out
------------------------------
conv3_block4_1_conv
(1, 1, 512, 128)
(128,)
------------------------------
conv3_block4_1_bn
(128,)
(128,)
------------------------------
conv3_block4_1_relu
------------------------------
conv3_block4_2_conv
(3, 3, 128, 128)
(128,)
------------------------------
conv3_block4_2_bn
(128,)
(128,)
------------------------------
conv3_block4_2_relu
------------------------------
conv3_block4_3_conv
(1, 1, 128, 512)
(512,)
------------------------------
conv3_block4_3_bn
(512,)
(512,)
------------------------------
conv3_block4_add
------------------------------
conv3_block4_out
------------------------------
conv4_block1_1_conv
(1, 1, 512, 256)
(256,)
------------------------------
conv4_block1_1_bn
(256,)
(256,)
------------------------------
conv4_block1_1_relu
------------------------------
conv4_block1_2_conv
(3, 3, 256, 256)
(256,)
------------------------------
conv4_block1_2_bn
(256,)
(256,)
------------------------------
conv4_block1_2_relu
------------------------------
conv4_block1_0_conv
(1, 1, 512, 1024)
(1024,)
------------------------------
conv4_block1_3_conv
(1, 1, 256, 1024)
(1024,)
------------------------------
conv4_block1_0_bn
(1024,)
(1024,)
------------------------------
conv4_block1_3_bn
(1024,)
(1024,)
------------------------------
conv4_block1_add
------------------------------
conv4_block1_out
------------------------------
conv4_block2_1_conv
(1, 1, 1024, 256)
(256,)
------------------------------
conv4_block2_1_bn
(256,)
(256,)
------------------------------
conv4_block2_1_relu
------------------------------
conv4_block2_2_conv
(3, 3, 256, 256)
(256,)
------------------------------
conv4_block2_2_bn
(256,)
(256,)
------------------------------
conv4_block2_2_relu
------------------------------
conv4_block2_3_conv
(1, 1, 256, 1024)
(1024,)
------------------------------
conv4_block2_3_bn
(1024,)
(1024,)
------------------------------
conv4_block2_add
------------------------------
conv4_block2_out
------------------------------
conv4_block3_1_conv
(1, 1, 1024, 256)
(256,)
------------------------------
conv4_block3_1_bn
(256,)
(256,)
------------------------------
conv4_block3_1_relu
------------------------------
conv4_block3_2_conv
(3, 3, 256, 256)
(256,)
------------------------------
conv4_block3_2_bn
(256,)
(256,)
------------------------------
conv4_block3_2_relu
------------------------------
conv4_block3_3_conv
(1, 1, 256, 1024)
(1024,)
------------------------------
conv4_block3_3_bn
(1024,)
(1024,)
------------------------------
conv4_block3_add
------------------------------
conv4_block3_out
------------------------------
conv4_block4_1_conv
(1, 1, 1024, 256)
(256,)
------------------------------
conv4_block4_1_bn
(256,)
(256,)
------------------------------
conv4_block4_1_relu
------------------------------
conv4_block4_2_conv
(3, 3, 256, 256)
(256,)
------------------------------
conv4_block4_2_bn
(256,)
(256,)
------------------------------
conv4_block4_2_relu
------------------------------
conv4_block4_3_conv
(1, 1, 256, 1024)
(1024,)
------------------------------
conv4_block4_3_bn
(1024,)
(1024,)
------------------------------
conv4_block4_add
------------------------------
conv4_block4_out
------------------------------
conv4_block5_1_conv
(1, 1, 1024, 256)
(256,)
------------------------------
conv4_block5_1_bn
(256,)
(256,)
------------------------------
conv4_block5_1_relu
------------------------------
conv4_block5_2_conv
(3, 3, 256, 256)
(256,)
------------------------------
conv4_block5_2_bn
(256,)
(256,)
------------------------------
conv4_block5_2_relu
------------------------------
conv4_block5_3_conv
(1, 1, 256, 1024)
(1024,)
------------------------------
conv4_block5_3_bn
(1024,)
(1024,)
------------------------------
conv4_block5_add
------------------------------
conv4_block5_out
------------------------------
conv4_block6_1_conv
(1, 1, 1024, 256)
(256,)
------------------------------
conv4_block6_1_bn
(256,)
(256,)
------------------------------
conv4_block6_1_relu
------------------------------
conv4_block6_2_conv
(3, 3, 256, 256)
(256,)
------------------------------
conv4_block6_2_bn
(256,)
(256,)
------------------------------
conv4_block6_2_relu
------------------------------
conv4_block6_3_conv
(1, 1, 256, 1024)
(1024,)
------------------------------
conv4_block6_3_bn
(1024,)
(1024,)
------------------------------
conv4_block6_add
------------------------------
conv4_block6_out
------------------------------
conv4_block7_1_conv
(1, 1, 1024, 256)
(256,)
------------------------------
conv4_block7_1_bn
(256,)
(256,)
------------------------------
conv4_block7_1_relu
------------------------------
conv4_block7_2_conv
(3, 3, 256, 256)
(256,)
------------------------------
conv4_block7_2_bn
(256,)
(256,)
------------------------------
conv4_block7_2_relu
------------------------------
conv4_block7_3_conv
(1, 1, 256, 1024)
(1024,)
------------------------------
conv4_block7_3_bn
(1024,)
(1024,)
------------------------------
conv4_block7_add
------------------------------
conv4_block7_out
------------------------------
conv4_block8_1_conv
(1, 1, 1024, 256)
(256,)
------------------------------
conv4_block8_1_bn
(256,)
(256,)
------------------------------
conv4_block8_1_relu
------------------------------
conv4_block8_2_conv
(3, 3, 256, 256)
(256,)
------------------------------
conv4_block8_2_bn
(256,)
(256,)
------------------------------
conv4_block8_2_relu
------------------------------
conv4_block8_3_conv
(1, 1, 256, 1024)
(1024,)
------------------------------
conv4_block8_3_bn
(1024,)
(1024,)
------------------------------
conv4_block8_add
------------------------------
conv4_block8_out
------------------------------
conv4_block9_1_conv
(1, 1, 1024, 256)
(256,)
------------------------------
conv4_block9_1_bn
(256,)
(256,)
------------------------------
conv4_block9_1_relu
------------------------------
conv4_block9_2_conv
(3, 3, 256, 256)
(256,)
------------------------------
conv4_block9_2_bn
(256,)
(256,)
------------------------------
conv4_block9_2_relu
------------------------------
conv4_block9_3_conv
(1, 1, 256, 1024)
(1024,)
------------------------------
conv4_block9_3_bn
(1024,)
(1024,)
------------------------------
conv4_block9_add
------------------------------
conv4_block9_out
------------------------------
conv4_block10_1_conv
(1, 1, 1024, 256)
(256,)
------------------------------
conv4_block10_1_bn
(256,)
(256,)
------------------------------
conv4_block10_1_relu
------------------------------
conv4_block10_2_conv
(3, 3, 256, 256)
(256,)
------------------------------
conv4_block10_2_bn
(256,)
(256,)
------------------------------
conv4_block10_2_relu
------------------------------
conv4_block10_3_conv
(1, 1, 256, 1024)
(1024,)
------------------------------
conv4_block10_3_bn
(1024,)
(1024,)
------------------------------
conv4_block10_add
------------------------------
conv4_block10_out
------------------------------
conv4_block11_1_conv
(1, 1, 1024, 256)
(256,)
------------------------------
conv4_block11_1_bn
(256,)
(256,)
------------------------------
conv4_block11_1_relu
------------------------------
conv4_block11_2_conv
(3, 3, 256, 256)
(256,)
------------------------------
conv4_block11_2_bn
(256,)
(256,)
------------------------------
conv4_block11_2_relu
------------------------------
conv4_block11_3_conv
(1, 1, 256, 1024)
(1024,)
------------------------------
conv4_block11_3_bn
(1024,)
(1024,)
------------------------------
conv4_block11_add
------------------------------
conv4_block11_out
------------------------------
conv4_block12_1_conv
(1, 1, 1024, 256)
(256,)
------------------------------
conv4_block12_1_bn
(256,)
(256,)
------------------------------
conv4_block12_1_relu
------------------------------
conv4_block12_2_conv
(3, 3, 256, 256)
(256,)
------------------------------
conv4_block12_2_bn
(256,)
(256,)
------------------------------
conv4_block12_2_relu
------------------------------
conv4_block12_3_conv
(1, 1, 256, 1024)
(1024,)
------------------------------
conv4_block12_3_bn
(1024,)
(1024,)
------------------------------
conv4_block12_add
------------------------------
conv4_block12_out
------------------------------
conv4_block13_1_conv
(1, 1, 1024, 256)
(256,)
------------------------------
conv4_block13_1_bn
(256,)
(256,)
------------------------------
conv4_block13_1_relu
------------------------------
conv4_block13_2_conv
(3, 3, 256, 256)
(256,)
------------------------------
conv4_block13_2_bn
(256,)
(256,)
------------------------------
conv4_block13_2_relu
------------------------------
conv4_block13_3_conv
(1, 1, 256, 1024)
(1024,)
------------------------------
conv4_block13_3_bn
(1024,)
(1024,)
------------------------------
conv4_block13_add
------------------------------
conv4_block13_out
------------------------------
conv4_block14_1_conv
(1, 1, 1024, 256)
(256,)
------------------------------
conv4_block14_1_bn
(256,)
(256,)
------------------------------
conv4_block14_1_relu
------------------------------
conv4_block14_2_conv
(3, 3, 256, 256)
(256,)
------------------------------
conv4_block14_2_bn
(256,)
(256,)
------------------------------
conv4_block14_2_relu
------------------------------
conv4_block14_3_conv
(1, 1, 256, 1024)
(1024,)
------------------------------
conv4_block14_3_bn
(1024,)
(1024,)
------------------------------
conv4_block14_add
------------------------------
conv4_block14_out
------------------------------
conv4_block15_1_conv
(1, 1, 1024, 256)
(256,)
------------------------------
conv4_block15_1_bn
(256,)
(256,)
------------------------------
conv4_block15_1_relu
------------------------------
conv4_block15_2_conv
(3, 3, 256, 256)
(256,)
------------------------------
conv4_block15_2_bn
(256,)
(256,)
------------------------------
conv4_block15_2_relu
------------------------------
conv4_block15_3_conv
(1, 1, 256, 1024)
(1024,)
------------------------------
conv4_block15_3_bn
(1024,)
(1024,)
------------------------------
conv4_block15_add
------------------------------
conv4_block15_out
------------------------------
conv4_block16_1_conv
(1, 1, 1024, 256)
(256,)
------------------------------
conv4_block16_1_bn
(256,)
(256,)
------------------------------
conv4_block16_1_relu
------------------------------
conv4_block16_2_conv
(3, 3, 256, 256)
(256,)
------------------------------
conv4_block16_2_bn
(256,)
(256,)
------------------------------
conv4_block16_2_relu
------------------------------
conv4_block16_3_conv
(1, 1, 256, 1024)
(1024,)
------------------------------
conv4_block16_3_bn
(1024,)
(1024,)
------------------------------
conv4_block16_add
------------------------------
conv4_block16_out
------------------------------
conv4_block17_1_conv
(1, 1, 1024, 256)
(256,)
------------------------------
conv4_block17_1_bn
(256,)
(256,)
------------------------------
conv4_block17_1_relu
------------------------------
conv4_block17_2_conv
(3, 3, 256, 256)
(256,)
------------------------------
conv4_block17_2_bn
(256,)
(256,)
------------------------------
conv4_block17_2_relu
------------------------------
conv4_block17_3_conv
(1, 1, 256, 1024)
(1024,)
------------------------------
conv4_block17_3_bn
(1024,)
(1024,)
------------------------------
conv4_block17_add
------------------------------
conv4_block17_out
------------------------------
conv4_block18_1_conv
(1, 1, 1024, 256)
(256,)
------------------------------
conv4_block18_1_bn
(256,)
(256,)
------------------------------
conv4_block18_1_relu
------------------------------
conv4_block18_2_conv
(3, 3, 256, 256)
(256,)
------------------------------
conv4_block18_2_bn
(256,)
(256,)
------------------------------
conv4_block18_2_relu
------------------------------
conv4_block18_3_conv
(1, 1, 256, 1024)
(1024,)
------------------------------
conv4_block18_3_bn
(1024,)
(1024,)
------------------------------
conv4_block18_add
------------------------------
conv4_block18_out
------------------------------
conv4_block19_1_conv
(1, 1, 1024, 256)
(256,)
------------------------------
conv4_block19_1_bn
(256,)
(256,)
------------------------------
conv4_block19_1_relu
------------------------------
conv4_block19_2_conv
(3, 3, 256, 256)
(256,)
------------------------------
conv4_block19_2_bn
(256,)
(256,)
------------------------------
conv4_block19_2_relu
------------------------------
conv4_block19_3_conv
(1, 1, 256, 1024)
(1024,)
------------------------------
conv4_block19_3_bn
(1024,)
(1024,)
------------------------------
conv4_block19_add
------------------------------
conv4_block19_out
------------------------------
conv4_block20_1_conv
(1, 1, 1024, 256)
(256,)
------------------------------
conv4_block20_1_bn
(256,)
(256,)
------------------------------
conv4_block20_1_relu
------------------------------
conv4_block20_2_conv
(3, 3, 256, 256)
(256,)
------------------------------
conv4_block20_2_bn
(256,)
(256,)
------------------------------
conv4_block20_2_relu
------------------------------
conv4_block20_3_conv
(1, 1, 256, 1024)
(1024,)
------------------------------
conv4_block20_3_bn
(1024,)
(1024,)
------------------------------
conv4_block20_add
------------------------------
conv4_block20_out
------------------------------
conv4_block21_1_conv
(1, 1, 1024, 256)
(256,)
------------------------------
conv4_block21_1_bn
(256,)
(256,)
------------------------------
conv4_block21_1_relu
------------------------------
conv4_block21_2_conv
(3, 3, 256, 256)
(256,)
------------------------------
conv4_block21_2_bn
(256,)
(256,)
------------------------------
conv4_block21_2_relu
------------------------------
conv4_block21_3_conv
(1, 1, 256, 1024)
(1024,)
------------------------------
conv4_block21_3_bn
(1024,)
(1024,)
------------------------------
conv4_block21_add
------------------------------
conv4_block21_out
------------------------------
conv4_block22_1_conv
(1, 1, 1024, 256)
(256,)
------------------------------
conv4_block22_1_bn
(256,)
(256,)
------------------------------
conv4_block22_1_relu
------------------------------
conv4_block22_2_conv
(3, 3, 256, 256)
(256,)
------------------------------
conv4_block22_2_bn
(256,)
(256,)
------------------------------
conv4_block22_2_relu
------------------------------
conv4_block22_3_conv
(1, 1, 256, 1024)
(1024,)
------------------------------
conv4_block22_3_bn
(1024,)
(1024,)
------------------------------
conv4_block22_add
------------------------------
conv4_block22_out
------------------------------
conv4_block23_1_conv
(1, 1, 1024, 256)
(256,)
------------------------------
conv4_block23_1_bn
(256,)
(256,)
------------------------------
conv4_block23_1_relu
------------------------------
conv4_block23_2_conv
(3, 3, 256, 256)
(256,)
------------------------------
conv4_block23_2_bn
(256,)
(256,)
------------------------------
conv4_block23_2_relu
------------------------------
conv4_block23_3_conv
(1, 1, 256, 1024)
(1024,)
------------------------------
conv4_block23_3_bn
(1024,)
(1024,)
------------------------------
conv4_block23_add
------------------------------
conv4_block23_out
------------------------------
conv5_block1_1_conv
(1, 1, 1024, 512)
(512,)
------------------------------
conv5_block1_1_bn
(512,)
(512,)
------------------------------
conv5_block1_1_relu
------------------------------
conv5_block1_2_conv
(3, 3, 512, 512)
(512,)
------------------------------
conv5_block1_2_bn
(512,)
(512,)
------------------------------
conv5_block1_2_relu
------------------------------
conv5_block1_0_conv
(1, 1, 1024, 2048)
(2048,)
------------------------------
conv5_block1_3_conv
(1, 1, 512, 2048)
(2048,)
------------------------------
conv5_block1_0_bn
(2048,)
(2048,)
------------------------------
conv5_block1_3_bn
(2048,)
(2048,)
------------------------------
conv5_block1_add
------------------------------
conv5_block1_out
------------------------------
conv5_block2_1_conv
(1, 1, 2048, 512)
(512,)
------------------------------
conv5_block2_1_bn
(512,)
(512,)
------------------------------
conv5_block2_1_relu
------------------------------
conv5_block2_2_conv
(3, 3, 512, 512)
(512,)
------------------------------
conv5_block2_2_bn
(512,)
(512,)
------------------------------
conv5_block2_2_relu
------------------------------
conv5_block2_3_conv
(1, 1, 512, 2048)
(2048,)
------------------------------
conv5_block2_3_bn
(2048,)
(2048,)
------------------------------
conv5_block2_add
------------------------------
conv5_block2_out
------------------------------
conv5_block3_1_conv
(1, 1, 2048, 512)
(512,)
------------------------------
conv5_block3_1_bn
(512,)
(512,)
------------------------------
conv5_block3_1_relu
------------------------------
conv5_block3_2_conv
(3, 3, 512, 512)
(512,)
------------------------------
conv5_block3_2_bn
(512,)
(512,)
------------------------------
conv5_block3_2_relu
------------------------------
conv5_block3_3_conv
(1, 1, 512, 2048)
(2048,)
------------------------------
conv5_block3_3_bn
(2048,)
(2048,)
------------------------------
conv5_block3_add
------------------------------
conv5_block3_out
------------------------------
avg_pool
------------------------------
predictions
(2048, 1000)
(1000,)
------------------------------
###Markdown
Tensorflow odd size max pooling
###Code
import tensorflow as tf
x = tf.constant(123, shape=(255, 255, 64))
x = tf.reshape(x, [1, 255, 255, 64])
max_pool_2d = tf.keras.layers.MaxPooling2D(pool_size=(2, 2),
strides=(2, 2), padding='valid')
max_pool_2d(x)
import tensorflow as tf
x = tf.constant(1.2, shape=(255, 255, 3))
x = tf.reshape(x, [1, 255, 255, 3])
filters = tf.constant(1, shape=(7, 7, 3, 64))
print(filters.shape)
# Note: 'filters' above only illustrates the kernel shape; the Conv2D layer below
# creates and manages its own kernel weights
c_2d = tf.keras.layers.Conv2D(64, (7, 7), strides=(2, 2), padding='same')
c_2d(x)
###Output
_____no_output_____
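###Markdown
For the odd 255×255 inputs above, the two downsampling ops disagree by one pixel (a sketch of the standard Keras size arithmetic, not framework output): 'valid' pooling gives floor((n - k)/s) + 1, dropping the last row/column, while a 'same'-padded strided convolution gives ceil(n/s).
###Code
# Expected spatial sizes for n=255 (sketch)
import math
n, k, s = 255, 2, 2
print(math.floor((n - k) / s) + 1)  # 127 -> MaxPooling2D(pool_size=2, strides=2, padding='valid')
print(math.ceil(n / s))             # 128 -> Conv2D(strides=2, padding='same')
###Output
_____no_output_____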
|
nbs/05-constraints.ipynb
|
###Markdown
Battery Constraints
The solution must meet constraints of the battery:
- Must only charge between 00:00-15:30
- Must only discharge between 15:30-21:00
- Must not charge or discharge between 21:00-00:00
- Battery must be empty at 00:00 each day
Imports
###Code
#exports
import numpy as np
import pandas as pd
###Output
_____no_output_____
###Markdown
Loading an Example Charging Rate Profile
###Code
df_charge_rate = pd.read_csv('../data/output/latest_submission.csv')
s_charge_rate = df_charge_rate.set_index('datetime')['charge_MW']
s_charge_rate.index = pd.to_datetime(s_charge_rate.index)
s_charge_rate.head()
###Output
_____no_output_____
###Markdown
Checking for Nulls
Before we start doing anything clever we'll do a simple check for null values
###Code
#exports
def check_for_nulls(s_charge_rate):
    assert s_charge_rate.isnull().sum()==0, 'There are null values in the charge rate time-series'
check_for_nulls(s_charge_rate)
###Output
_____no_output_____
###Markdown
Converting a charging schedule to capacity
The solution is given in terms of the battery charge/discharge schedule, but it is also necessary to satisfy constraints on the capacity of the battery (see below). The charge is determined by $C_{t+1} = C_{t} + 0.5B_{t}$.
We'll start by generating the charge state time-series
###Code
#exports
def construct_charge_state_s(s_charge_rate: pd.Series, time_unit: float=0.5) -> pd.Series:
    # each period at B MW adds B*time_unit MWh, so the charge state is the cumulative
    # sum of the rates scaled by the time unit (dividing by 1/time_unit)
    s_charge_state = (s_charge_rate
                      .cumsum()
                      .divide(1/time_unit))
    return s_charge_state
s_charge_state = construct_charge_state_s(s_charge_rate)
s_charge_state.plot()
###Output
_____no_output_____
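###Markdown
A quick toy check of the conversion (a sketch with made-up numbers): charging at a constant 2 MW for three half-hour periods should produce charge states of 1, 2 and 3 MWh.
###Code
s_toy = pd.Series([2.0, 2.0, 2.0],
                  index=pd.date_range('2020-01-01', periods=3, freq='30T'))
construct_charge_state_s(s_toy)  # expect 1.0, 2.0, 3.0
###Output
_____no_output_____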
###Markdown
Checking Capacity Constraints
$0 \leq C \leq C_{max}$
We'll confirm that the bounds of the values in the charging time-series do not fall outside of the 0-6 MWh capacity of the battery
###Code
#exports
doesnt_exceed_charge_state_min = lambda s_charge_state, min_charge=0: (s_charge_state.round(10)<min_charge).sum()==0
doesnt_exceed_charge_state_max = lambda s_charge_state, max_charge=6: (s_charge_state.round(10)>max_charge).sum()==0
def check_capacity_constraints(s_charge_state, min_charge=0, max_charge=6):
    assert doesnt_exceed_charge_state_min(s_charge_state, min_charge), 'The state of charge falls below 0 MWh which is beyond the bounds of possibility'
    assert doesnt_exceed_charge_state_max(s_charge_state, max_charge), 'The state of charge exceeds the 6 MWh capacity'
    return
check_capacity_constraints(s_charge_state)
###Output
_____no_output_____
###Markdown
Checking Full Utilisation
We'll also check that the battery falls to 0 MWh and rises to 6 MWh each day
###Code
#exports
check_all_values_equal = lambda s, value=0: (s==value).mean()==1
charge_state_always_drops_to_0MWh = lambda s_charge_state, min_charge=0: s_charge_state.groupby(s_charge_state.index.date).min().round(10).pipe(check_all_values_equal, min_charge)
charge_state_always_gets_to_6MWh = lambda s_charge_state, max_charge=6: s_charge_state.groupby(s_charge_state.index.date).max().round(10).pipe(check_all_values_equal, max_charge)
def check_full_utilisation(s_charge_state, min_charge=0, max_charge=6):
    assert charge_state_always_drops_to_0MWh(s_charge_state, min_charge), 'The state of charge does not always drop to 0 MWh each day'
    assert charge_state_always_gets_to_6MWh(s_charge_state, max_charge), 'The state of charge does not always rise to 6 MWh each day'
    return
check_full_utilisation(s_charge_state)
###Output
_____no_output_____
###Markdown
Checking Charge Rates
$B_{min} \leq B \leq B_{max}$
We'll then check that the minimum and maximum rates fall inside the -2.5 to 2.5 MW range allowed by the battery
###Code
#exports
doesnt_exceed_charge_rate_min = lambda s_charge_rate, min_rate=-2.5: (s_charge_rate.round(10)<min_rate).sum()==0
doesnt_exceed_charge_rate_max = lambda s_charge_rate, max_rate=2.5: (s_charge_rate.round(10)>max_rate).sum()==0
def check_rate_constraints(s_charge_rate, min_rate=-2.5, max_rate=2.5):
    assert doesnt_exceed_charge_rate_min(s_charge_rate, min_rate), 'The rate of charge falls below the -2.5 MW limit'
    assert doesnt_exceed_charge_rate_max(s_charge_rate, max_rate), 'The rate of charge exceeds the 2.5 MW limit'
    return
check_rate_constraints(s_charge_rate)
###Output
_____no_output_____
###Markdown
Checking Charge/Discharge/Inactive Periods
We can only charge the battery between periods 1 (00:00) and 31 (15:00) inclusive, and discharge between periods 32 (15:30) and 42 (20:30) inclusive. For periods 43 to 48, there should be no activity, and the day must start with $C=0$.
###Code
#exports
charge_is_0_at_midnight = lambda s_charge_state: (s_charge_state.between_time('23:30', '23:59').round(10)==0).mean()==1
all_charge_periods_charge = lambda s_charge_rate, charge_times=('00:00', '15:00'): (s_charge_rate.between_time(charge_times[0], charge_times[1]).round(10) >= 0).mean() == 1
all_discharge_periods_discharge = lambda s_charge_rate, discharge_times=('15:30', '20:30'): (s_charge_rate.between_time(discharge_times[0], discharge_times[1]).round(10) <= 0).mean() == 1
all_inactive_periods_do_nothing = lambda s_charge_rate, inactive_times=('21:00', '23:30'): (s_charge_rate.between_time(inactive_times[0], inactive_times[1]).round(10) == 0).mean() == 1
def check_charging_patterns(s_charge_rate, s_charge_state, charge_times=('00:00', '15:00'), discharge_times=('15:30', '20:30'), inactive_times=('21:00', '23:30')):
    assert charge_is_0_at_midnight(s_charge_state), 'The battery is not always at 0 MWh at midnight'
    assert all_charge_periods_charge(s_charge_rate, charge_times), 'Some of the periods which should only be charging are instead discharging'
    assert all_discharge_periods_discharge(s_charge_rate, discharge_times), 'Some of the periods which should only be discharging are instead charging'
    assert all_inactive_periods_do_nothing(s_charge_rate, inactive_times), 'Some of the periods which should be doing nothing are instead charging/discharging'
    return
check_charging_patterns(s_charge_rate, s_charge_state)
#exports
def schedule_is_legal(s_charge_rate, time_unit=0.5,
                      min_rate=-2.5, max_rate=2.5,
                      min_charge=0, max_charge=6,
                      charge_times=('00:00', '15:00'),
                      discharge_times=('15:30', '20:30'),
                      inactive_times=('21:00', '23:30')):
    """
    Determine if a battery schedule meets the specified constraints
    """
    check_for_nulls(s_charge_rate)
    s_charge_state = construct_charge_state_s(s_charge_rate, time_unit)
    check_capacity_constraints(s_charge_state, min_charge, max_charge)
    check_full_utilisation(s_charge_state, min_charge, max_charge)
    check_rate_constraints(s_charge_rate, min_rate, max_rate)
    check_charging_patterns(s_charge_rate, s_charge_state, charge_times, discharge_times, inactive_times)
    return True
schedule_is_legal(s_charge_rate)
###Output
_____no_output_____
###Markdown
Finally we'll export the relevant code to our `batopt` module
###Code
#hide
from nbdev.export import notebook2script
notebook2script()
###Output
Converted 00-utilities.ipynb.
Converted 01-cleaning.ipynb.
Converted 02-discharging.ipynb.
Converted 03-charging.ipynb.
Converted 04-constraints.ipynb.
Converted 05-pipeline.ipynb.
|
code/Depression_Model_Part_1.ipynb
|
###Markdown
Depression Sentiment Prediction (Part - 1)
1. Helper Libraries and Data Imports
For this Task, I will be using NumPy, Pandas, Matplotlib, NLTK and Wordcloud
###Code
import os
try:
    import re
    import numpy as np
    import pandas as pd
    import nltk
    from matplotlib import pyplot as plt
    from nltk.corpus import stopwords
    from nltk.stem import PorterStemmer
    from nltk.tokenize import word_tokenize
    from wordcloud import WordCloud
    nltk.download('punkt')
    nltk.download('stopwords')
except ImportError:
    print("Installing Required Dependencies, Please restart the Program thereafter")
    os.system('pip install numpy pandas matplotlib wordcloud nltk')
    import nltk  # import after installation, otherwise nltk is undefined here
    nltk.download('punkt')
    nltk.download('stopwords')
# Get the Data
raw_data = pd.read_csv("training.1600000.processed.noemoticon.csv", encoding='latin-1')
raw_data.head()
###Output
_____no_output_____
###Markdown
2. Data Cleaning
As is clearly visible, the column names are not proper and our data consists of a lot of unneeded columns (such as ID (that big number), date of tweet, query flag, user handle ID). We also need to change the target variables (currently (0, 4)) to (0, 1). We shall remove this data dirt so that we can proceed to the next step, which is mild data exploration
###Code
# Let's First give all the columns proper name
new_columns = ['target', 'tweet_id', 'date', 'query_flag', 'user_id', 'text']
raw_data.columns = new_columns
raw_data.sample(5)
# Now that the data is a bit more comprehensible, We will drop all the unwanted columns
unwanted_cols = ['tweet_id', 'date', 'query_flag', 'user_id']
raw_data = raw_data.drop(unwanted_cols, axis=1)
raw_data.head()
# Now, the Data looks a lot more cleaner and better to work on, let's change the target variables
# from 0 (Negative - Depressive) and 4 (Positive) --> 0 (Positive) and 1 (Negative - Depressive)
def change_target(x):
    if int(x) == 0:
        return 1
    else:
        return 0
# Let's apply this helper function to the Dataset
raw_data['target'] = raw_data['target'].apply(change_target)
raw_data.head()
###Output
_____no_output_____
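###Markdown
As an aside (a sketch of an alternative, not the notebook's original approach): the same relabelling can be vectorised as `raw_data['target'] = (raw_data['target'] == 0).astype(int)`, which avoids a Python-level function call per row across the 1.6M tweets.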
###Markdown
3. Data Exploration using WordCloud
As we can see, the first many tweets (which seem negative & depressive from the text too) have been labeled 1 (Negative) and similarly the positive ones are now labeled 0 (Positive). This has made our data very easy to work on! Now let's use Wordcloud to see the different positive and negative words in the form of a WordCloud. I will do this in the following steps:
1. Get a list of all Depressive (Negative) and Positive Words
2. Generate a Wordcloud
3. Show the Wordcloud using ```plt.show()```
###Code
# This is some good old masking technique I prefer to get required data!
depressive_words = " ".join(list(raw_data[raw_data['target'] == 1]['text']))
# Now let's generate our wordcloud
dep_words_cloud = WordCloud(width=256, height=256, collocations=False).generate(depressive_words)
plt.figure(figsize = (7,7))
plt.imshow(dep_words_cloud)
plt.axis('off')
plt.tight_layout(pad = 0)
plt.show()
# Now, let's do the same for Positive Words
positive_words = " ".join(list(raw_data[raw_data['target'] == 0]['text']))
# Generate Wordcloud
positive_words_cloud = WordCloud(width=256, height=256, collocations=False).generate(positive_words)
plt.figure(figsize = (7,7))
plt.imshow(positive_words_cloud)
plt.axis('off')
plt.tight_layout(pad = 0)
plt.show()
###Output
_____no_output_____
###Markdown
4. Feature Extraction
We can clearly see the difference between both the WordClouds. The former WordCloud (the negative one) shows the presence of depressive and negative words. This further reinforces the idea of detecting Depression %-chances from Negative Sentiment Tweets. Now is the time to extract important features from the text as it contains a lot of unnecessary words such as "@-references", hypertext links, general stopwords, etc. We will do this in 2 steps:
1. First, we will apply Regex, remove Stopwords, apply the PorterStemmer Algorithm and some other basic Processing on the data.
2. Since the Data is just utterly huge, I will export this cleaned and extracted data into a new `.csv` file. This will act as a checkpoint and save us some time in the future.
Extra: In the next and final notebook, we will load this new extracted data (which we are going to export in the form of `.csv`) and will perform proper Feature Extraction using the Tf-Idf Vectorizer.
###Code
# Now is the time to use all the NLTK imports we have done before!
# I will create a helper function to save ourself from redundant work
def process_text(text):
    """
    @param: text (Raw Text sentence)
    @return: final_str (Final processed string)
    Function -> It takes a raw string as input and processes it to extract the important features.
    First it converts to lower case, then applies the regex, then tokenizes the words, then removes
    stopwords, then stems the words, then puts the output list back into a string.
    """
    # Convert text to lower case and remove all special characters from it using regex
    # (keep only letters and whitespace)
    text = text.lower()
    text = re.sub(r'[^a-zA-Z\s]', '', text)
    # Tokenize the words using word_tokenize() from the nltk lib
    words = word_tokenize(text)
    # Only take the words whose length is greater than 2
    words = [w for w in words if len(w) > 2]
    # Get the stopwords for the english language
    sw = stopwords.words('english')
    # Get only those words which are not in stopwords (those which are not stopwords)
    words = [word for word in words if word not in sw]
    # Get the PorterStemmer algorithm module
    stemmer = PorterStemmer()
    # Strip the commoner morphological and inflexional endings from the words
    words = [stemmer.stem(word) for word in words]
    # Till this point, we have a list of words; join them back into a single string
    final_str = " ".join(words) + " "
    # Return the final string
    return final_str
%%time
# Let's apply this to our data and export it into a '.csv' file
raw_data['text'] = raw_data['text'].apply(process_text)
raw_data.to_csv('feature_extracted.csv', index=False)
raw_data.head()
###Output
_____no_output_____
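###Markdown
A quick sanity check of the helper on a single made-up tweet (a sketch; the sample text is illustrative only):
###Code
process_text("I'm feeling really sad and hopeless today... http://t.co/xyz @friend")
# -> roughly 'feel realli sad hopeless today httptcoxyz friend ' after stemming
###Output
_____no_output_____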
|
NLTK_Basics.ipynb
|
###Markdown
NLTK includes certain steps to be followed,
1. Tokenization
2. Stopword removal
3. Stemming & Lemmatizing
4. POS Tagging
5. Named entity recognition
The same has been explained in the following codes
###Code
# Importing relevant packages
import nltk
# Reading a text file
file = open('brown_corpus_ca10.txt','r')
# Cleaning the file for further processing
file = file.read().replace('\n','')
# Note: the Brown corpus tags tokens as word/tag, so stripping the '/' fuses each
# word with its POS tag (e.g. 'vincent/np' -> 'vincentnp'), as seen in the output below
data = file.replace('/',"")
data
###Output
_____no_output_____
###Markdown
Step 1 : Tokenization
a) Sentence tokenization is the process of splitting up strings into “sentences”
b) Word tokenization is the process of splitting up “sentences” into individual “words”
Enumerate function in python: the enumerate() method adds a counter to an iterable and returns it in the form of an enumerate object. This object can then be used directly in for loops or be converted into a list of tuples using the list() method.
###Code
# Each line represents one advertisement; let's take a look at the first 10
for i, line in enumerate(data.split('\n')):
    if i > 10:
        break
    print(str(i)+':\t'+line)
###Output
_____no_output_____
###Markdown
a) Sentence Tokenization
###Code
# Importing relevant packages
from nltk import sent_tokenize, word_tokenize
sentences = str(sent_tokenize(data))
###Output
_____no_output_____
###Markdown
b) Word tokenize
###Code
# Tokenizing the sentences into individual words
word_tokenize(sentences)
###Output
_____no_output_____
###Markdown
Step 2 : Eliminating stop words
###Code
# importing relevant libraries
from nltk.corpus import stopwords
# Initialization (set checking is faster in python)
eng_Stopwords = set(stopwords.words('english'))
# Transforming all the letters in the data to lower case
words_lowered = list(map(str.lower, word_tokenize(sentences)))
# forming list comprehensions to eliminate stop words
print([word for word in words_lowered if word not in eng_Stopwords])
# it is necessary to remove the punctuation to simplify
from string import punctuation
eng_StopwordsPunc = eng_Stopwords.union(set(punctuation))
# Removing punctuations
print([words for words in words_lowered if words not in eng_StopwordsPunc])
###Output
_____no_output_____
###Markdown
Step 3 : Stemming & Lemmatization
a) Stemming: Trying to shorten a word to its base dictionary word
b) Lemmatization: Trying to find the root word with linguistic rules (with the use of regexes)
There are two approaches for stemming, i.e. PorterStemmer & LancasterStemmer; the following link gives a comparison:
Link - https://www.datacamp.com/community/tutorials/stemming-lemmatization-python
###Code
# Stemming
from nltk.stem import PorterStemmer
# Initializing
porter = PorterStemmer()
for words in words_lowered:
    print(porter.stem(words))
# Package for lemmatization
from nltk.stem import WordNetLemmatizer
nltk.download('wordnet')  # resource needed by WordNetLemmatizer
wnl = WordNetLemmatizer()
for words in words_lowered:
    print(wnl.lemmatize(words))
###Output
[
``
\tvincentnp
g.np
ierullinp
hashvz
beenben
appointedvbn
temporaryjj
assistantnn
districtnn
attorneynn
,
,
itpps
wasbedz
announcedvbn
mondaynr
byin
charlesnp
e.np
raymondnp
,
,
districtnn-tl
attorneynn-tl
..\tierullinp
willmd
replacevb
desmondnp
d.np
connallnp
whowps
hashvz
beenben
calledvbn
toin
activejj
militaryjj
servicenn
butcc
isbez
expectedvbn
backrb
onin
theat
jobnn
byin
marchnp
31cd
..\tierullinp
,
,
29cd
,
,
hashvz
beenben
practicingvbg
inin
portlandnp
sincein
novembernp
,
,
1959cd
..hepps
isbez
aat
graduatenn
ofin
portlandnp-tl
universitynn-tl
andcc
theat
northwesternjj-tl
collegenn-tl
ofin-tl
lawnn-tl
..hepps
isbez
marriedvbn
andcc
theat
fathernn
ofin
threecd
childrennns
..\thelpingvbg
foreignjj
countriesnns
toto
buildvb
aat
soundjj
politicaljj
structurenn
isbez
moreql
importantjj
thancs
aidingvbg
themppo
economicallyrb
,
,
e.np
m.np
martinnp
,
,
assistantnn
secretarynn
ofin
statenn
forin
economicjj
affairsnns
toldvbd
membersnns
ofin
theat
worldnn-tl
affairsnns-tl
councilnn-tl
mondaynr
nightnn
..\tmartinnp
,
,
whowps
hashvz
beenben
inin
officenn
inin
washingtonnp
,
,
d.np
c.np
,
,
forin
13cd
monthsnns
spokevbd
atin
theat
council'snn
$
annualjj
meetingnn
atin
theat
multnomahnp-tl
hotelnn-tl
..hepps
toldvbd
somedti
350cd
personsnns
thatcs
theat
unitedvbn-tl
states'nns
$
-tl
challengenn
wasbedz
toto
helpvb
countriesnns
buildvb
theirpp
$
ownjj
societiesnns
theirpp
$
ownjj
waysnns
,
,
followingvbg
theirpp
$
ownjj
pathsnns
..\t
``
``
weppss
mustmd
persuadevb
themppo
toto
enjoyvb
aat
waynn
ofin
lifenn
whichwdt
,
,
ifc
not*
identicaljj
,
,
isbez
congenialjj
within
ourspp
$
$
``
''
,
,
hepps
saidvbd
butcc
addingvbg
thatcs
ifc
theyppss
dodo
not*
developvb
theat
kindnn
ofin
societynn
theyppss
themselvesppls
wantvb
itpps
willmd
lackvb
ritiualitynn
andcc
loyaltynn
..patiencenn-hl
neededvbn-hl
insuringvbg
thatcs
theat
countriesnns
havehv
aat
freedomnn
ofin
choicenn
,
,
hepps
saidvbd
,
,
wasbedz
theat
biggestjjt
detrimentnn
toin
theat
sovietnn-tl
unionnn-tl
..\thepps
citedvbd
eastjj-tl
germanynp-tl
wherewrb
afterin
15cd
yearsnns
ofin
sovietnp
rulenn
itpps
hashvz
becomevbn
necessaryjj
toto
buildvb
aat
wallnn
toto
keepvb
theat
peoplenns
inrp
,
,
andcc
addedvbd
,
,
``
``
soql
longrb
ascs
peoplenns
rebelvb
,
,
weppss
mustmd
not*
givevb
uprp
``
''
..\tmartinnp
calledvbd
forin
patiencenn
onin
theat
partnn
ofin
americansnps
..\t
``
``
theat
countriesnns
areber
tryingvbg
toto
buildvb
inin
aat
decadenn
theat
kindnn
ofin
societynn
weppss
tookvbd
aat
centurynn
toto
buildvb
``
''
,
,
hepps
saidvbd
..\tbyin
leavingvbg
ourpp
$
doorsnns
openrb
theat
unitedvbn-tl
statesnns-tl
givesvbz
otherap
peoplesnns
theat
opportunitynn
toto
seevb
usppo
andcc
toto
comparevb
,
,
hepps
saidvbd
..individualjj-hl
helpnn-hl
bestjjt-hl
``
``
weppss
havehv
noat
reasonnn
toto
fearvb
failurenn
,
,
butcc
weppss
mustmd
bebe
extraordinarilyrb
patientjj
``
''
,
,
theat
assistantnn
secretarynn
saidvbd
..\teconomicallyrb
,
,
martinnp
saidvbd
,
,
theat
unitedvbn-tl
statesnns-tl
couldmd
bestvb
helpvb
foreignjj
countriesnns
byin
helpingvbg
themppo
helpvb
themselvesppls
..privatejj
businessnn
isbez
moreql
effectivejj
thancs
governmentnn
aidnn
,
,
hepps
explainedvbd
,
,
becausecs
individualsnns
areber
ablejj
toto
workvb
within
theat
peoplenns
themselvesppls
..\ttheat
unitedvbn-tl
statesnns-tl
mustmd
planvb
toto
absorbvb
theat
exportedvbn
goodsnns
ofin
theat
countrynn
,
,
atin
whatwdt
hepps
termedvbd
aat
``
``
socialjj
costnn
``
''
..\tmartinnp
saidvbd
theat
governmentnn
hashvz
beenben
workingvbg
toto
establishvb
firmerjjr
pricesnns
onin
primaryjj
productsnns
whichwdt
maymd
involvevb
theat
totalnn
incomenn
ofin
onecd
countrynn
..\ttheat
portlandnp
schoolnn
boardnn
wasbedz
askedvbn
mondaynr
toto
takevb
aat
positivejj
standvb
towardsin
developingvbg
andcc
coordinatingvbg
within
portland'snp
$
civiljj
defensenn
moreap
plansnns
forin
theat
city'snn
$
schoolsnns
inin
eventnn
ofin
attacknn
..\tbutcc
thereex
seemedvbd
toto
bebe
somedti
differencenn
ofin
opinionnn
asin
toin
howql
farrb
theat
boardnn
shouldmd
govb
,
,
andcc
whosewp
$
advicenn
itpps
shouldmd
followvb
..\ttheat
boardnn
membersnns
,
,
afterin
hearingvbg
theat
coordinationnn
pleann
fromin
mrs.np
ralphnp
h.np
molvarnp
,
,
1409cd
swnn
maplecrestnp
dr.nn-tl
,
,
saidvbd
theyppss
thoughtvbd
theyppss
hadhvd
alreadyrb
beenben
cooperatingvbg
..\tchairmannn-tl
c.np
richardnp
mearsnp
pointedvbd
outrp
thatcs
perhapsrb
thisdt
wasbedz
not*
strictlyrb
aat
schoolnn
boardnn
problemnn
,
,
inin
casenn
ofin
atomicjj
attacknn
,
,
butcc
thatcs
theat
boardnn
wouldmd
cooperatevb
soql
farrb
ascs
possiblejj
toto
getvb
theat
childrennns
toto
wherewrb
theat
parentsnns
wantedvbd
themppo
toto
govb
..\tdr.nn-tl
melvinnp
w.np
barnesnp
,
,
superintendentnn
,
,
saidvbd
hepps
thoughtvbd
theat
schoolsnns
werebed
waitingvbg
forin
somedti
leadershipnn
,
,
perhapsrb
onin
theat
nationaljj
levelnn
,
,
toto
makevb
surejj
thatcs
whateverwdt
stepsnns
ofin
planningvbg
theyppss
tookvbd
wouldmd
``
``
bebe
moreql
fruitfuljj
``
''
,
,
andcc
thatcs
hepps
hadhvd
foundvbn
thatcs
otherap
schoolnn
districtsnns
werebed
not*
asql
farrb
alongrb
inin
theirpp
$
planningnn
ascs
thisdt
districtnn
..\t
``
``
losnp
angelesnp
hashvz
saidvbn
theyppss
wouldmd
sendvb
theat
childrennns
toin
theirpp
$
homesnns
inin
casenn
ofin
disasternn
``
''
,
,
hepps
saidvbd
..
``
``
nobodypn
reallyrb
expectsvbz
toto
evacuatevb
..ippss
thinkvb
everybodypn
isbez
agreedvbn
thatcs
weppss
needvb
toto
hearvb
somedti
voicenn
onin
theat
nationaljj
levelnn
thatdt
wouldmd
makevb
somedti
sensenn
andcc
inin
whichwdt
weppss
wouldmd
havehv
somedti
confidencenn
inin
followingnn
..\tmrs.np
molvarnp
,
,
whowps
keptvbd
reiteratingvbg
herpp
$
requestnn
thatcs
theyppss
``
``
pleasevb
takevb
aat
standnn
``
''
,
,
saidvbd
,
,
``
``
weppss
mustmd
havehv
faithnn
inin
somebodypn
--
--
onin
theat
localjj
levelnn
,
,
andcc
itpps
wouldn'tmd*
bebe
possiblejj
forcs
everyonepn
toto
rushvb
toin
aat
schoolnn
toto
getvb
theirpp
$
childrennns
``
''
..\tdr.nn-tl
barnesnp
saidvbd
thatcs
thereex
seemedvbd
toto
bebe
feelingnn
thatcs
evacuationnn
plansnns
,
,
evenrb
forin
aat
highjj
schoolnn
wherewrb
thereex
werebed
lotsnns
ofin
carsnns
``
``
mightmd
not*
bebe
realisticjj
andcc
wouldmd
not*
workvb
``
''
..\tmrs.np
molvarnp
askedvbd
againrb
thatcs
theat
boardnn
joinvb
inin
takingvbg
aat
standnn
inin
keepingvbg
within
jacknp
lowe'snp
$
programnn
..theat
boardnn
saidvbd
itpps
thoughtvbd
itpps
hadhvd
gonevbn
asql
farrb
ascs
instructedvbn
soql
farrb
andcc
askedvbd
forin
moreap
informationnn
toto
bebe
broughtvbn
atin
theat
nextap
meetingnn
..\titpps
wasbedz
generallyrb
agreedvbn
thatcs
theat
subjectnn
wasbedz
importantjj
andcc
theat
boardnn
shouldmd
bebe
informedvbn
onin
whatwdt
wasbedz
donevbn
,
,
isbez
goingvbg
toto
bebe
donevbn
andcc
whatwdt
itpps
thoughtvbd
shouldmd
bebe
donevbn
..salemnp-hl
(
(
-hl
apnp-hl
)
)
-hl
--
--
theat
statewidejj
meetingnn
ofin
... (remainder of the tagged Brown-corpus token dump omitted) ...
]
###Markdown
Step 4 : Tagging
Tags each word with its part of speech (such as verb or noun) based on its definition and context
###Code
stop_words = set(stopwords.words('english'))
tokenized = sent_tokenize(data)
for sentence in tokenized:
    wordsList = nltk.word_tokenize(sentence)
    wordsList = [w for w in wordsList if w not in stop_words]  # drop stop words
    tagged = nltk.pos_tag(wordsList)
    print(tagged)
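# Note: the input is a raw Brown-corpus file, so each token still carries its
# fused Brown tag (e.g. 'Vincentnp' = 'Vincent' + 'np'); the Penn Treebank tags
# in the output below are therefore only approximate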
###Output
[('Vincentnp', 'NNP'), ('G.np', 'NNP'), ('Ierullinp', 'NNP'), ('hashvz', 'NN'), ('beenben', 'NN'), ('appointedvbn', 'NN'), ('temporaryjj', 'NN'), ('assistantnn', 'NN'), ('districtnn', 'NN'), ('attorneynn', 'NN'), (',', ','), (',', ','), ('itpps', 'JJ'), ('wasbedz', 'NN'), ('announcedvbn', 'NN'), ('Mondaynr', 'NNP'), ('byin', 'NN'), ('Charlesnp', 'NNP'), ('E.np', 'NNP'), ('Raymondnp', 'NNP'), (',', ','), (',', ','), ('Districtnn-tl', 'NNP'), ('Attorneynn-tl', 'NNP'), ('..', 'NNP'), ('Ierullinp', 'NNP'), ('willmd', 'VBD'), ('replacevb', 'JJ'), ('Desmondnp', 'NNP'), ('D.np', 'NNP'), ('Connallnp', 'NNP'), ('whowps', 'VBD'), ('hashvz', 'JJ'), ('beenben', 'NN'), ('calledvbn', 'NN'), ('toin', 'NN'), ('activejj', 'NN'), ('militaryjj', 'NN'), ('servicenn', 'NN'), ('butcc', 'NN'), ('isbez', 'NN'), ('expectedvbn', 'NN'), ('backrb', 'NN'), ('onin', 'NN'), ('theat', 'NN'), ('jobnn', 'NN'), ('byin', 'NN'), ('Marchnp', 'NNP'), ('31cd', 'CD'), ('..', 'NNP'), ('Ierullinp', 'NNP'), (',', ','), (',', ','), ('29cd', 'CD'), (',', ','), (',', ','), ('hashvz', 'JJ'), ('beenben', 'NN'), ('practicingvbg', 'NN'), ('inin', 'NN'), ('Portlandnp', 'NNP'), ('sincein', 'NN'), ('Novembernp', 'NNP'), (',', ','), (',', ','), ('1959cd', 'CD'), ('..Hepps', 'NN'), ('isbez', 'NN'), ('aat', 'NN'), ('graduatenn', 'NN'), ('ofin', 'IN'), ('Portlandnp-tl', 'NNP'), ('Universitynn-tl', 'NNP'), ('andcc', 'JJ'), ('theat', 'NN'), ('Northwesternjj-tl', 'JJ'), ('Collegenn-tl', 'JJ'), ('ofin-tl', 'JJ'), ('Lawnn-tl', 'NNP'), ('..Hepps', 'NNP'), ('isbez', 'NN'), ('marriedvbn', 'NN'), ('andcc', 'NN'), ('theat', 'NN'), ('fathernn', 'JJ'), ('ofin', 'NN'), ('threecd', 'NN'), ('childrennns', 'NN'), ('..', 'NNP'), ('Helpingvbg', 'NNP'), ('foreignjj', 'NN'), ('countriesnns', 'NN'), ('toto', 'NN'), ('buildvb', 'NN'), ('aat', 'NN'), ('soundjj', 'NN'), ('politicaljj', 'NN'), ('structurenn', 'NN'), ('isbez', 'NN'), ('moreql', 'NN'), ('importantjj', 'JJ'), ('thancs', 'NN'), ('aidingvbg', 'NN'), ('themppo', 'NN'), ('economicallyrb', 'NN'), (',', ','), (',', ','), ('E.np', 'NNP'), ('M.np', 'NNP'), ('Martinnp', 'NNP'), (',', ','), (',', ','), ('assistantnn', 'JJ'), ('secretarynn', 'NN'), ('ofin', 'NN'), ('statenn', 'NN'), ('forin', 'NN'), ('economicjj', 'NN'), ('affairsnns', 'NN'), ('toldvbd', 'NN'), ('membersnns', 'NN'), ('ofin', 'JJ'), ('theat', 'JJ'), ('Worldnn-tl', 'JJ'), ('Affairsnns-tl', 'JJ'), ('Councilnn-tl', 'NNP'), ('Mondaynr', 'NNP'), ('nightnn', 'MD'), ('..', 'VB'), ('Martinnp', 'NNP'), (',', ','), (',', ','), ('whowps', 'VBP'), ('hashvz', 'JJ'), ('beenben', 'NN'), ('inin', 'NN'), ('officenn', 'NN'), ('inin', 'NN'), ('Washingtonnp', 'NNP'), (',', ','), (',', ','), ('D.np', 'NNP'), ('C.np', 'NNP'), (',', ','), (',', ','), ('forin', 'VBD'), ('13cd', 'CD'), ('monthsnns', 'NN'), ('spokevbd', 'NN'), ('atin', 'NN'), ('theat', 'NN'), ("council'snn", 'JJ'), ('$', '$'), ('annualjj', 'JJ'), ('meetingnn', 'NN'), ('atin', 'NN'), ('theat', 'JJ'), ('Multnomahnp-tl', 'JJ'), ('Hotelnn-tl', 'NNP'), ('..Hepps', 'NNP'), ('toldvbd', 'NN'), ('somedti', 'VBD'), ('350cd', 'CD'), ('personsnns', 'NN'), ('thatcs', 'NN'), ('theat', 'NN'), ('Unitedvbn-tl', 'NNP'), ("States'nns", 'NNP'), ('$', '$'), ('-tl', 'NNP'), ('challengenn', 'NN'), ('wasbedz', 'NN'), ('toto', 'NN'), ('helpvb', 'NN'), ('countriesnns', 'NN'), ('buildvb', 'NN'), ('theirpp', 'VBD'), ('$', '$'), ('ownjj', 'JJ'), ('societiesnns', 'NN'), ('theirpp', 'VBD'), ('$', '$'), ('ownjj', 'JJ'), ('waysnns', 'NN'), (',', ','), (',', ','), ('followingvbg', 'VBD'), ('theirpp', 'JJ'), ('$', '$'), ('ownjj', 'JJ'), 
('pathsnns', 'NN'), ('..', 'VBZ'), ('``', '``'), ('``', '``'), ('Weppss', 'NNP'), ('mustmd', 'NN'), ('persuadevb', 'NN'), ('themppo', 'NN'), ('toto', 'NN'), ('enjoyvb', 'NN'), ('aat', 'NN'), ('waynn', 'NN'), ('ofin', 'NN'), ('lifenn', 'NN'), ('whichwdt', 'NN'), (',', ','), (',', ','), ('ifcs', 'JJ'), ('not*', 'NN'), ('identicaljj', 'NN'), (',', ','), (',', ','), ('isbez', 'JJ'), ('congenialjj', 'NN'), ('within', 'IN'), ('ourspp', 'JJ'), ('$', '$'), ('$', '$'), ('``', '``'), ("''", "''"), (',', ','), (',', ','), ('hepps', 'JJ'), ('saidvbd', 'NN'), ('butcc', 'NN'), ('addingvbg', 'NN'), ('thatcs', 'NN'), ('ifcs', 'NN'), ('theyppss', 'NN'), ('dodo', 'NN'), ('not*', 'JJ'), ('developvb', 'NN'), ('theat', 'NN'), ('kindnn', 'NN'), ('ofin', 'NN'), ('societynn', 'NN'), ('theyppss', 'NN'), ('themselvesppls', 'NN'), ('wantvb', 'NN'), ('itpps', 'NN'), ('willmd', 'NN'), ('lackvb', 'NN'), ('ritiualitynn', 'NN'), ('andcc', 'JJ'), ('loyaltynn', 'JJ'), ('..Patiencenn-hl', 'JJ'), ('neededvbn-hl', 'JJ'), ('Insuringvbg', 'NNP'), ('thatcs', 'NN'), ('theat', 'NN'), ('countriesnns', 'NN'), ('havehv', 'NN'), ('aat', 'NN'), ('freedomnn', 'NN'), ('ofin', 'NN'), ('choicenn', 'NN'), (',', ','), (',', ','), ('hepps', 'NN'), ('saidvbd', 'NN'), (',', ','), (',', ','), ('wasbedz', 'JJ'), ('theat', 'NN'), ('biggestjjt', 'NN'), ('detrimentnn', 'NN'), ('toin', 'JJ'), ('theat', 'JJ'), ('Sovietnn-tl', 'JJ'), ('Unionnn-tl', 'JJ'), ('..', 'NN'), ('Hepps', 'NNP'), ('citedvbd', 'VBZ'), ('Eastjj-tl', 'NNP'), ('Germanynp-tl', 'NNP'), ('wherewrb', 'NN'), ('afterin', 'NN'), ('15cd', 'CD'), ('yearsnns', 'NN'), ('ofin', 'NN'), ('Sovietnp', 'NNP'), ('rulenn', 'NN'), ('itpps', 'NN'), ('hashvz', 'NN'), ('becomevbn', 'NN'), ('necessaryjj', 'JJ'), ('toto', 'NN'), ('buildvb', 'NN'), ('aat', 'NN'), ('wallnn', 'NN'), ('toto', 'NN'), ('keepvb', 'VBZ'), ('theat', 'NN'), ('peoplenns', 'NN'), ('inrp', 'NN'), (',', ','), (',', ','), ('andcc', 'JJ'), ('addedvbd', 'NN'), (',', ','), (',', ','), ('``', '``'), ('``', '``'), ('soql', 'JJ'), ('longrb', 'NN'), ('ascs', 'NN'), ('peoplenns', 'NN'), ('rebelvb', 'NN'), (',', ','), (',', ','), ('weppss', 'VBP'), ('mustmd', 'JJ'), ('not*', 'JJ'), ('givevb', 'NN'), ('uprp', 'JJ'), ('``', '``'), ("''", "''"), ('..', 'NN'), ('Martinnp', 'NNP'), ('calledvbd', 'NN'), ('forin', 'NN'), ('patiencenn', 'NN'), ('onin', 'NN'), ('theat', 'NN'), ('partnn', 'NN'), ('ofin', 'NN'), ('Americansnps', 'NNP'), ('..', 'NNP'), ('``', '``'), ('``', '``'), ('Theat', 'NNP'), ('countriesnns', 'JJ'), ('areber', 'NN'), ('tryingvbg', 'NN'), ('toto', 'NN'), ('buildvb', 'NN'), ('inin', 'NN'), ('aat', 'NN'), ('decadenn', 'NN'), ('theat', 'NN'), ('kindnn', 'NN'), ('ofin', 'NN'), ('societynn', 'NN'), ('weppss', 'NN'), ('tookvbd', 'NN'), ('aat', 'NN'), ('centurynn', 'NN'), ('toto', 'NN'), ('buildvb', 'NN'), ('``', '``'), ("''", "''"), (',', ','), (',', ','), ('hepps', 'JJ'), ('saidvbd', 'NN'), ('..', 'NNP'), ('Byin', 'NNP'), ('leavingvbg', 'VBD'), ('ourpp', 'RP'), ('$', '$'), ('doorsnns', 'JJ'), ('openrb', 'JJ'), ('theat', 'NN'), ('Unitedvbn-tl', 'JJ'), ('Statesnns-tl', 'NNP'), ('givesvbz', 'NN'), ('otherap', 'NN'), ('peoplesnns', 'NN'), ('theat', 'NN'), ('opportunitynn', 'JJ'), ('toto', 'NN'), ('seevb', 'NN'), ('usppo', 'JJ'), ('andcc', 'NN'), ('toto', 'NN'), ('comparevb', 'NN'), (',', ','), (',', ','), ('hepps', 'JJ'), ('saidvbd', 'JJ'), ('..Individualjj-hl', 'JJ'), ('helpnn-hl', 'JJ'), ('bestjjt-hl', 'NN'), ('``', '``'), ('``', '``'), ('Weppss', 'NNP'), ('havehv', 'NN'), ('noat', 'NN'), ('reasonnn', 'NN'), ('toto', 'NN'), ('fearvb', 'NN'), 
('failurenn', 'NN'), (',', ','), (',', ','), ('butcc', 'FW'), ('weppss', 'NN'), ('mustmd', 'NN'), ('bebe', 'NN'), ('extraordinarilyrb', 'NN'), ('patientjj', 'NN'), ('``', '``'), ("''", "''"), (',', ','), (',', ','), ('theat', 'NN'), ('assistantnn', 'JJ'), ('secretarynn', 'NN'), ('saidvbd', 'NN'), ('..', 'NNP'), ('Economicallyrb', 'NNP'), (',', ','), (',', ','), ('Martinnp', 'NNP'), ('saidvbd', 'NN'), (',', ','), (',', ','), ('theat', 'JJ'), ('Unitedvbn-tl', 'JJ'), ('Statesnns-tl', 'NNP'), ('couldmd', 'NN'), ('bestvb', 'NN'), ('helpvb', 'NN'), ('foreignjj', 'NN'), ('countriesnns', 'NN'), ('byin', 'NN'), ('helpingvbg', 'NN'), ('themppo', 'NN'), ('helpvb', 'NN'), ('themselvesppls', 'NN'), ('..Privatejj', 'NNP'), ('businessnn', 'NN'), ('isbez', 'NN'), ('moreql', 'NN'), ('effectivejj', 'JJ'), ('thancs', 'NN'), ('governmentnn', 'NN'), ('aidnn', 'NN'), (',', ','), (',', ','), ('hepps', 'NN'), ('explainedvbd', 'NN'), (',', ','), (',', ','), ('becausecs', 'FW'), ('individualsnns', 'JJ'), ('areber', 'NN'), ('ablejj', 'NN'), ('toto', 'NN'), ('workvb', 'NN'), ('within', 'IN'), ('theat', 'NN'), ('peoplenns', 'NN'), ('themselvesppls', 'NN'), ('..', 'NNP'), ('Theat', 'NNP'), ('Unitedvbn-tl', 'JJ'), ('Statesnns-tl', 'NNP'), ('mustmd', 'NN'), ('planvb', 'NN'), ('toto', 'NN'), ('absorbvb', 'VBZ'), ('theat', 'NN'), ('exportedvbn', 'JJ'), ('goodsnns', 'NN'), ('ofin', 'NN'), ('theat', 'NN'), ('countrynn', 'NN'), (',', ','), (',', ','), ('atin', 'JJ'), ('whatwdt', 'NN'), ('hepps', 'NN'), ('termedvbd', 'NN'), ('aat', 'IN'), ('``', '``'), ('``', '``'), ('socialjj', 'JJ'), ('costnn', 'NN'), ('``', '``'), ("''", "''"), ('..', 'NN'), ('Martinnp', 'NNP'), ('saidvbd', 'VBZ'), ('theat', 'NN'), ('governmentnn', 'NN'), ('hashvz', 'NN'), ('beenben', 'NN'), ('workingvbg', 'NN'), ('toto', 'NN'), ('establishvb', 'NN'), ('firmerjjr', 'NN'), ('pricesnns', 'NN'), ('onin', 'NN'), ('primaryjj', 'NN'), ('productsnns', 'NN'), ('whichwdt', 'NN'), ('maymd', 'NN'), ('involvevb', 'NN'), ('theat', 'NN'), ('totalnn', 'NN'), ('incomenn', 'NN'), ('ofin', 'NN'), ('onecd', 'IN'), ('countrynn', 'NN'), ('..', 'NNP'), ('Theat', 'NNP'), ('Portlandnp', 'NNP'), ('schoolnn', 'VBD'), ('boardnn', 'JJ'), ('wasbedz', 'NN'), ('askedvbn', 'NN'), ('Mondaynr', 'NNP'), ('toto', 'NN'), ('takevb', 'NN'), ('aat', 'NN'), ('positivejj', 'NN'), ('standvb', 'NN'), ('towardsin', 'NN'), ('developingvbg', 'NN'), ('andcc', 'NN'), ('coordinatingvbg', 'NN'), ('within', 'IN'), ("Portland'snp", 'NNP'), ('$', '$'), ('civiljj', 'NN'), ('defensenn', 'NN'), ('moreap', 'NN'), ('plansnns', 'NN'), ('forin', 'NN'), ('theat', 'NN'), ("city'snn", 'JJ'), ('$', '$'), ('schoolsnns', 'JJ'), ('inin', 'NN'), ('eventnn', 'NN'), ('ofin', 'NN'), ('attacknn', 'NN'), ('..', 'NNP'), ('Butcc', 'NNP'), ('thereex', 'NN'), ('seemedvbd', 'NN'), ('toto', 'NN'), ('bebe', 'NN'), ('somedti', 'NN'), ('differencenn', 'NN'), ('ofin', 'NN'), ('opinionnn', 'NN'), ('asin', 'NN'), ('toin', 'NN'), ('howql', 'NN'), ('farrb', 'NN'), ('theat', 'NN'), ('boardnn', 'NN'), ('shouldmd', 'NN'), ('govb', 'NN'), (',', ','), (',', ','), ('andcc', 'VBD'), ('whosewp', 'JJ'), ('$', '$'), ('advicenn', 'JJ'), ('itpps', 'NN'), ('shouldmd', 'NN'), ('followvb', 'NN'), ('..', 'NNP'), ('Theat', 'NNP'), ('boardnn', 'NN'), ('membersnns', 'NN'), (',', ','), (',', ','), ('afterin', 'JJ'), ('hearingvbg', 'NN'), ('theat', 'NN'), ('coordinationnn', 'NN'), ('pleann', 'NN'), ('fromin', 'NN'), ('Mrs.np', 'NNP'), ('Ralphnp', 'NNP'), ('H.np', 'NNP'), ('Molvarnp', 'NNP'), (',', ','), (',', ','), ('1409cd', 'CD'), ('SWnn', 'NNP'), 
('Maplecrestnp', 'NNP'), ('Dr.nn-tl', 'NNP'), (',', ','), (',', ','), ('saidvbd', 'JJ'), ('theyppss', 'NN'), ('thoughtvbd', 'NN'), ('theyppss', 'NN'), ('hadhvd', 'NN'), ('alreadyrb', 'NN'), ('beenben', 'NN'), ('cooperatingvbg', 'NN'), ('..', 'NNP'), ('Chairmannn-tl', 'NNP'), ('C.np', 'NNP'), ('Richardnp', 'NNP'), ('Mearsnp', 'NNP'), ('pointedvbd', 'NN'), ('outrp', 'NN'), ('thatcs', 'NN'), ('perhapsrb', 'NN'), ('thisdt', 'NN'), ('wasbedz', 'NN'), ('not*', 'JJ'), ('strictlyrb', 'NN'), ('aat', 'NN'), ('schoolnn', 'NN'), ('boardnn', 'NN'), ('problemnn', 'NN'), (',', ','), (',', ','), ('inin', 'JJ'), ('casenn', 'NN'), ('ofin', 'NN'), ('atomicjj', 'NN'), ('attacknn', 'NN'), (',', ','), (',', ','), ('butcc', 'FW'), ('thatcs', 'JJ'), ('theat', 'NN'), ('boardnn', 'NN'), ('wouldmd', 'NN'), ('cooperatevb', 'NN'), ('soql', 'NN'), ('farrb', 'NN'), ('ascs', 'NN'), ('possiblejj', 'NN'), ('toto', 'NN'), ('getvb', 'NN'), ('theat', 'NN'), ('childrennns', 'NN'), ('toto', 'NN'), ('wherewrb', 'NN'), ('theat', 'NN'), ('parentsnns', 'NN'), ('wantedvbd', 'NN'), ('themppo', 'NN'), ('toto', 'NN'), ('govb', 'NN'), ('..', 'NNP'), ('Dr.nn-tl', 'NNP'), ('Melvinnp', 'NNP'), ('W.np', 'NNP'), ('Barnesnp', 'NNP'), (',', ','), (',', ','), ('superintendentnn', 'NN'), (',', ','), (',', ','), ('saidvbd', 'JJ'), ('hepps', 'NN'), ('thoughtvbd', 'NN'), ('theat', 'NN'), ('schoolsnns', 'NN'), ('werebed', 'VBD'), ('waitingvbg', 'JJ'), ('forin', 'NN'), ('somedti', 'NN'), ('leadershipnn', 'NN'), (',', ','), (',', ','), ('perhapsrb', 'FW'), ('onin', 'FW'), ('theat', 'NN'), ('nationaljj', 'JJ'), ('levelnn', 'NN'), (',', ','), (',', ','), ('toto', 'NN'), ('makevb', 'NN'), ('surejj', 'NN'), ('thatcs', 'NN'), ('whateverwdt', 'NN'), ('stepsnns', 'NN'), ('ofin', 'NN'), ('planningvbg', 'NN'), ('theyppss', 'NN'), ('tookvbd', 'NN'), ('wouldmd', 'VBP'), ('``', '``'), ('``', '``'), ('bebe', 'JJ'), ('moreql', 'NN'), ('fruitfuljj', 'NN'), ('``', '``'), ("''", "''"), (',', ','), (',', ','), ('andcc', 'JJ'), ('thatcs', 'NN'), ('hepps', 'NN'), ('hadhvd', 'NN'), ('foundvbn', 'NN'), ('thatcs', 'NN'), ('otherap', 'NN'), ('schoolnn', 'NN'), ('districtsnns', 'NN'), ('werebed', 'VBD'), ('not*', 'JJ'), ('asql', 'NN'), ('farrb', 'NN'), ('alongrb', 'NN'), ('inin', 'NN'), ('theirpp', 'VBD'), ('$', '$'), ('planningnn', 'JJ'), ('ascs', 'NN'), ('thisdt', 'NN'), ('districtnn', 'NN'), ('..', 'VBZ'), ('``', '``'), ('``', '``'), ('Losnp', 'NNP'), ('Angelesnp', 'NNP'), ('hashvz', 'NN'), ('saidvbn', 'NN'), ('theyppss', 'NN'), ('wouldmd', 'NN'), ('sendvb', 'NN'), ('theat', 'NN'), ('childrennns', 'NN'), ('toin', 'NN'), ('theirpp', 'VBD'), ('$', '$'), ('homesnns', 'JJ'), ('inin', 'NN'), ('casenn', 'NN'), ('ofin', 'NN'), ('disasternn', 'NN'), ('``', '``'), ("''", "''"), (',', ','), (',', ','), ('hepps', 'JJ'), ('saidvbd', 'NN'), ('..', 'VBZ'), ('``', '``'), ('``', '``'), ('Nobodypn', 'NNP'), ('reallyrb', 'NN'), ('expectsvbz', 'NN'), ('toto', 'NN'), ('evacuatevb', 'NN'), ('..Ippss', 'NNP'), ('thinkvb', 'NN'), ('everybodypn', 'NN'), ('isbez', 'NN'), ('agreedvbn', 'NN'), ('thatcs', 'NN'), ('weppss', 'NN'), ('needvb', 'JJ'), ('toto', 'NN'), ('hearvb', 'NN'), ('somedti', 'NN'), ('voicenn', 'NN'), ('onin', 'NN'), ('theat', 'NN'), ('nationaljj', 'JJ'), ('levelnn', 'NN'), ('thatdt', 'NN'), ('wouldmd', 'NN'), ('makevb', 'NN'), ('somedti', 'NN'), ('sensenn', 'NN'), ('andcc', 'NN'), ('inin', 'NN'), ('whichwdt', 'NN'), ('weppss', 'NN'), ('wouldmd', 'NN'), ('havehv', 'NN'), ('somedti', 'NN'), ('confidencenn', 'NN'), ('inin', 'NN'), ('followingnn', 'NN'), ('..', 'NNP'), ('Mrs.np', 
'NNP'), ('Molvarnp', 'NNP'), (',', ','), (',', ','), ('whowps', 'VBP'), ('keptvbd', 'JJ'), ('reiteratingvbg', 'NN'), ('herpp', 'VBD'), ('$', '$'), ('requestnn', 'JJ'), ('thatcs', 'NN'), ('theyppss', 'NN'), ('``', '``'), ('``', '``'), ('pleasevb', 'JJ'), ('takevb', 'NN'), ('aat', 'NN'), ('standnn', 'NN'), ('``', '``'), ("''", "''"), (',', ','), (',', ','), ('saidvbd', 'NN'), (',', ','), (',', ','), ('``', '``'), ('``', '``'), ('Weppss', 'NNP'), ('mustmd', 'NN'), ('havehv', 'NN'), ('faithnn', 'NN'), ('inin', 'NN'), ('somebodypn', 'NN'), ('--', ':'), ('--', ':'), ('onin', 'JJ'), ('theat', 'NN'), ('localjj', 'NN'), ('levelnn', 'NN'), (',', ','), (',', ','), ('andcc', 'JJ'), ('itpps', 'NN'), ("wouldn'tmd*", 'NN'), ('bebe', 'NN'), ('possiblejj', 'NN'), ('forcs', 'NN'), ('everyonepn', 'NN'), ('toto', 'NN'), ('rushvb', 'NN'), ('toin', 'NN'), ('aat', 'NN'), ('schoolnn', 'NN'), ('toto', 'NN'), ('getvb', 'NN'), ('theirpp', 'VBD'), ('$', '$'), ('childrennns', 'NN'), ('``', '``'), ("''", "''"), ('..', 'VBZ'), ('Dr.nn-tl', 'NNP'), ('Barnesnp', 'NNP'), ('saidvbd', 'NN'), ('thatcs', 'NN'), ('thereex', 'NN'), ('seemedvbd', 'NN'), ('toto', 'NN'), ('bebe', 'NN'), ('feelingnn', 'NN'), ('thatcs', 'NN'), ('evacuationnn', 'NN'), ('plansnns', 'NN'), (',', ','), (',', ','), ('evenrb', 'FW'), ('forin', 'JJ'), ('aat', 'NN'), ('highjj', 'NN'), ('schoolnn', 'NN'), ('wherewrb', 'NN'), ('thereex', 'NN'), ('werebed', 'VBD'), ('lotsnns', 'JJ'), ('ofin', 'NN'), ('carsnns', 'NN'), ('``', '``'), ('``', '``'), ('mightmd', 'JJ'), ('not*', 'NN'), ('bebe', 'NN'), ('realisticjj', 'NN'), ('andcc', 'NN'), ('wouldmd', 'NN'), ('not*', 'NN'), ('workvb', 'NN'), ('``', '``'), ("''", "''"), ('..', 'NN'), ('Mrs.np', 'NNP'), ('Molvarnp', 'NNP'), ('askedvbd', 'VBZ'), ('againrb', 'JJ'), ('thatcs', 'NN'), ('theat', 'NN'), ('boardnn', 'NN'), ('joinvb', 'NN'), ('inin', 'NN'), ('takingvbg', 'NN'), ('aat', 'NN'), ('standnn', 'NN'), ('inin', 'NN'), ('keepingvbg', 'NN'), ('within', 'IN'), ('Jacknp', 'NNP'), ("Lowe'snp", 'NNP'), ('$', '$'), ('programnn', 'JJ'), ('..Theat', 'NNP'), ('boardnn', 'NN'), ('saidvbd', 'NN'), ('itpps', 'NN'), ('thoughtvbd', 'NN'), ('itpps', 'NN'), ('hadhvd', 'NN'), ('gonevbn', 'NN'), ('asql', 'NN'), ('farrb', 'NN'), ('ascs', 'NN'), ('instructedvbn', 'NN'), ('soql', 'NN'), ('farrb', 'NN'), ('andcc', 'NN'), ('askedvbd', 'NN'), ('forin', 'NN'), ('moreap', 'NN'), ('informationnn', 'NN'), ('toto', 'NN'), ('bebe', 'NN'), ('broughtvbn', 'NN'), ('atin', 'NN'), ('theat', 'NN'), ('nextap', 'JJ'), ('meetingnn', 'NN'), ('..', 'NNP'), ('Itpps', 'NNP'), ('wasbedz', 'VBD'), ('generallyrb', 'JJ'), ('agreedvbn', 'NN'), ('thatcs', 'NN'), ('theat', 'NN'), ('subjectnn', 'NN'), ('wasbedz', 'NN'), ('importantjj', 'NN'), ('andcc', 'NN'), ('theat', 'NN'), ('boardnn', 'NN'), ('shouldmd', 'NN'), ('bebe', 'NN'), ('informedvbn', 'NN'), ('onin', 'NN'), ('whatwdt', 'NN'), ('wasbedz', 'NN'), ('donevbn', 'NN'), (',', ','), (',', ','), ('isbez', 'JJ'), ('goingvbg', 'NN'), ('toto', 'NN'), ('bebe', 'NN'), ('donevbn', 'NN'), ('andcc', 'NN'), ('whatwdt', 'NN'), ('itpps', 'NN'), ('thoughtvbd', 'NN'), ('shouldmd', 'NN'), ('bebe', 'NN'), ('donevbn', 'VBZ'), ('..Salemnp-hl', 'JJ'), ('(', '('), ('(', '('), ('-hl', 'VB'), ('APnp-hl', 'NNP'), (')', ')'), (')', ')'), ('-hl', 'FW'), ('--', ':'), ('--', ':'), ('Theat', 'NNP'), ('statewidejj', 'JJ'), ('meetingnn', 'NN'), ('ofin', 'NN'), ('warnn', 'NN'), ('mothersnns', 'NN'), ('Tuesdaynr', 'NNP'), ('inin', 'NN'), ('Salemnp', 'NNP'), ('willmd', 'NN'), ('hearvb', 'NN'), ('aat', 'NN'), ('greetingnn', 'NN'), ('fromin', 
'VBD'), ('Gov.nn-tl', 'NNP'), ('Marknp', 'NNP'), ('Hatfieldnp', 'NNP'), ('..', 'NNP'), ('Hatfieldnp', 'NNP'), ('alsorb', 'VBZ'), ('isbez', 'JJ'), ('scheduledvbn', 'NN'), ('toto', 'NN'), ('holdvb', 'NN'), ('aat', 'JJ'), ('publicjj', 'JJ'), ('Unitedvbn-tl', 'JJ'), ('Nationsnns-tl', 'JJ'), ('Daynn-tl', 'NNP'), ('receptionnn', 'NN'), ('inin', 'NN'), ('theat', 'NN'), ('statenn', 'JJ'), ('capitolnn', 'NN'), ('onin', 'NN'), ('Tuesdaynr', 'NNP'), ('..', 'NNP'), ('Hispp', 'NNP'), ('$', '$'), ('schedulenn', 'JJ'), ('callsvbz', 'NN'), ('forin', 'NN'), ('aat', 'NN'), ('noonnn', 'JJ'), ('speechnn', 'NN'), ('Mondaynr', 'NNP'), ('inin', 'NN'), ('Eugenenp', 'NNP'), ('atin', 'VBZ'), ('theat', 'JJ'), ('Emeraldnn-tl', 'JJ'), ('Empirenn-tl', 'JJ'), ('Kiwanisnp-tl', 'NNP'), ('Clubnn-tl', 'NNP'), ('..', 'NNP'), ('Hepps', 'NNP'), ('willmd', 'VBD'), ('speakvb', 'JJ'), ('toin', 'JJ'), ('Willamettenp-tl', 'JJ'), ('Universitynn-tl', 'JJ'), ('Youngjj-tl', 'JJ'), ('Republicansnps', 'NNP'), ('Thursdaynr', 'NNP'), ('nightnn', 'MD'), ('inin', 'VB'), ('Salemnp', 'NNP'), ('..', 'NNP'), ('Onin', 'NNP'), ('Fridaynr', 'NNP'), ('hepps', 'NN'), ('willmd', 'NN'), ('govb', 'NN'), ('toin', 'NN'), ('Portlandnp', 'NNP'), ('forin', 'VBZ'), ('theat', 'NN'), ('swearingnn', 'JJ'), ('inin', 'NN'), ('ofin', 'NN'), ('Deannp', 'NNP'), ('Brysonnp', 'NNP'), ('ascs', 'VBD'), ('Multnomahnp-tl', 'NNP'), ('Countynn-tl', 'NNP'), ('Circuitnn-tl', 'NNP'), ('Judgenn-tl', 'NNP'), ('..', 'NNP'), ('Hepps', 'NNP'), ('willmd', 'VBD'), ('attendvb', 'JJ'), ('aat', 'NN'), ('meetingnn', 'NN'), ('ofin', 'NN'), ('theat', 'NN'), ('Republicannp', 'NNP'), ('Statenn-tl', 'NNP'), ('Centraljj-tl', 'NNP'), ('Committeenn-tl', 'NNP'), ('Saturdaynr', 'NNP'), ('inin', 'NN'), ('Portlandnp', 'NNP'), ('andcc', 'NN'), ('seevb', 'NN'), ('theat', 'NN'), ('Washington-Oregonnp', 'NNP'), ('footballnn', 'NN'), ('gamenn', 'NN'), ('..', 'JJ'), ('Beavertonnp-tl', 'JJ'), ('Schoolnn-tl', 'JJ'), ('Districtnn-tl', 'JJ'), ('No.nn-tl', 'JJ'), ('48cd-tl', 'JJ'), ('boardnn', 'NN'), ('membersnns', 'NN'), ('examinedvbd', 'NN'), ('blueprintsnns', 'NN'), ('andcc', 'NN'), ('specificationsnns', 'NN'), ('forin', 'NN'), ('twocd', 'NN'), ('proposedvbn', 'NN'), ('juniorjj', 'NN'), ('highjj', 'NN'), ('schoolsnns', 'NN'), ('atin', 'NN'), ('aat', 'NN'), ('Mondaynr', 'NNP'), ('nightnn', 'CC'), ('workshopnn', 'JJ'), ('sessionnn', 'NN'), ('..', 'NNP'), ('Aat', 'NNP'), ('bondnn', 'NN'), ('issuenn', 'NN'), ('whichwdt', 'NN'), ('wouldmd', 'NN'), ('havehv', 'NN'), ('providedvbn', 'NN'), ('somedti', 'VBD'), ('$', '$'), ('3.5nns', 'CD'), ('millioncd', 'NN'), ('forin', 'NN'), ('constructionnn', 'NN'), ('ofin', 'NN'), ('theat', 'NN'), ('twocd', 'JJ'), ('900-studentjj', 'CD'), ('schoolsnns', 'NN'), ('wasbedz', 'NN'), ('defeatedvbn', 'NN'), ('byin', 'NN'), ('districtnn', 'NN'), ('votersnns', 'NN'), ('inin', 'NN'), ('Januarynp', 'NNP'), ('..', 'NNP'), ('Lastap', 'NNP'), ('weeknn', 'VBD'), ('theat', 'NN'), ('boardnn', 'NN'), (',', ','), (',', ','), ('byin', 'VBD'), ('aat', 'RB'), ('4cd', 'CD'), ('toin', 'NNS'), ('3cd', 'CD'), ('votenn', 'NN'), (',', ','), (',', ','), ('decidedvbd', 'JJ'), ('toto', 'NN'), ('askvb', 'NN'), ('votersnns', 'NN'), ('whethercs', 'NN'), ('theyppss', 'NN'), ('prefervb', 'JJ'), ('theat', 'NN'), ('6-3-3cd', 'JJ'), ('(', '('), ('(', '('), ('juniorjj', 'NN'), ('highjj', 'NN'), ('schoolnn', 'NN'), (')', ')'), (')', ')'), ('systemnn', 'VBP'), ('orcc', 'JJ'), ('theat', 'NN'), ('8-4cd', 'JJ'), ('systemnn', 'NN'), ('..Boardnn', 'NNP'), ('membersnns', 'NN'), ('indicatedvbd', 'NN'), ('Mondaynr', 'NNP'), 
('nightnn', 'CC'), ('thisdt', 'JJ'), ('wouldmd', 'NN'), ('bebe', 'NN'), ('donevbn', 'NN'), ('byin', 'NN'), ('anat', 'NN'), ('advisorynn', 'NN'), ('pollnn', 'NN'), ('toto', 'NN'), ('bebe', 'NN'), ('takenvbn', 'NN'), ('onin', 'NN'), ('Nov.np', 'NNP'), ('15cd', 'CD'), (',', ','), (',', ','), ('theat', 'NN'), ('sameap', 'JJ'), ('datenn', 'NN'), ('ascs', 'NN'), ('aat', 'VBD'), ('$', '$'), ('581,000nns', 'CD'), ('bondnn', 'NN'), ('electionnn', 'NN'), ('forin', 'NN'), ('theat', 'NN'), ('constructionnn', 'NN'), ('ofin', 'NN'), ('threecd', 'NN'), ('newjj', 'JJ'), ('elementaryjj', 'NN'), ('schoolsnns', 'NN'), ('..', 'JJ'), ('Secretarynn-tl', 'JJ'), ('ofin-tl', 'JJ'), ('Labornn-tl', 'NNP'), ('Arthurnp', 'NNP'), ('Goldbergnp', 'NNP'), ('willmd', 'VBD'), ('speakvb', 'JJ'), ('Sundaynr', 'NNP'), ('nightnn', 'NN'), ('atin', 'NN'), ('theat', 'JJ'), ('Masonicjj-tl', 'JJ'), ('Templenn-tl', 'NNP'), ('atin', 'NN'), ('aat', 'JJ'), ('$', '$'), ('25-a-platenn', 'JJ'), ('dinnernn', 'NN'), ('honoringvbg', 'NN'), ('Sen.nn-tl', 'NNP'), ('Waynenp', 'NNP'), ('L.np', 'NNP'), ('Morsenp', 'NNP'), (',', ','), (',', ','), ('Ajnn', 'NNP'), ('..', 'NNP'), ('Theat', 'NNP'), ('dinnernn', 'NN'), ('isbez', 'NN'), ('sponsoredvbn', 'NN'), ('byin', 'NN'), ('organizedvbn', 'NN'), ('labornn', 'NN'), ('andcc', 'NN'), ('isbez', 'NN'), ('scheduledvbn', 'VBD'), ('forin', 'JJ'), ('7cd', 'CD'), ('p.m.rb', 'JJ'), ('..', 'JJ'), ('Secretarynn-tl', 'NNP'), ('Goldbergnp', 'NNP'), ('andcc', 'IN'), ('Sen.nn-tl', 'NNP'), ('Morsenp', 'NNP'), ('willmd', 'NN'), ('holdvb', 'NN'), ('aat', 'NN'), ('jointnn', 'NN'), ('pressnn', 'NN'), ('conferencenn', 'NN'), ('atin', 'NN'), ('theat', 'NN'), ('Rooseveltnp', 'NNP'), ('Hotelnn-tl', 'NNP'), ('atin', 'NN'), ('4:30cd', 'CD'), ('p.m.rb', 'NN'), ('Sundaynr', 'NNP'), (',', ','), (',', ','), ('Blainenp', 'NNP'), ('Whipplenp', 'NNP'), (',', ','), (',', ','), ('executivenn', 'FW'), ('secretarynn', 'FW'), ('ofin', 'FW'), ('theat', 'JJ'), ('Democraticjj-tl', 'JJ'), ('Partynn-tl', 'NNP'), ('ofin', 'NN'), ('Oregonnp', 'NNP'), (',', ','), (',', ','), ('reportedvbd', 'VBP'), ('Tuesdaynr', 'NNP'), ('..', 'NNP'), ('Otherap', 'NNP'), ('speakersnns', 'VBD'), ('forin', 'JJ'), ('theat', 'NN'), ('fund-raisingnn', 'JJ'), ('dinnernn', 'NN'), ('includevb', 'JJ'), ('Reps.nns-tl', 'NNP'), ('Edithnp', 'NNP'), ('Greennp', 'NNP'), ('andcc', 'VBZ'), ('Alnp', 'NNP'), ('Ullmannp', 'NNP'), (',', ','), (',', ','), ('Labornn-tl', 'JJ'), ('Commissionernn-tl', 'NNP'), ('Normannp', 'NNP'), ('Nilsennp', 'NNP'), ('andcc', 'VBD'), ('Mayornn-tl', 'NNP'), ('Terrynp', 'NNP'), ('Schrunknp', 'NNP'), (',', ','), (',', ','), ('allabn', 'JJ'), ('Democratsnps', 'NNP'), ('..Oaknn-tl-hl', 'JJ'), ('Grovenn-tl-hl', 'NNP'), ('(', '('), ('(', '('), ('-hl', 'VB'), ('specialjj-hl', 'NN'), (')', ')'), (')', ')'), ('-hl', 'FW'), ('--', ':'), ('--', ':'), ('Threecd', 'NNP'), ('positionsnns', 'VBP'), ('onin', 'JJ'), ('theat', 'NN'), ('Oaknn-tl', 'JJ'), ('Lodgenn-tl', 'JJ'), ('Waternn-tl', 'NNP'), ('districtnn', 'NN'), ('boardnn', 'NN'), ('ofin', 'NN'), ('directorsnns', 'NN'), ('havehv', 'NN'), ('attractedvbn', 'VBD'), ('11cd', 'CD'), ('candidatesnns', 'NN'), ('..Theat', 'NN'), ('electionnn', 'JJ'), ('willmd', 'NN'), ('bebe', 'NN'), ('Dec.np', 'NNP'), ('4cd', 'CD'), ('fromin', 'NN'), ('8cd', 'CD'), ('a.m.rb', 'NN'), ('toin', 'NN'), ('8cd', 'CD'), ('p.m.rb', 'NN'), ('..Pollsnns', 'NNP'), ('willmd', 'NN'), ('bebe', 'NN'), ('inin', 'NN'), ('theat', 'NN'), ('waternn', 'NN'), ('officenn', 'IN'), ('..', 'NNP'), ('Incumbentjj', 'NNP'), ('Richardnp', 'NNP'), ('Salternp', 'NNP'), 
('seeksvbz', 'VBD'), ('re-electionnn', 'JJ'), ('andcc', 'NN'), ('isbez', 'NN'), ('opposedvbn', 'NN'), ('byin', 'NN'), ('Donaldnp', 'NNP'), ('Huffmannp', 'NNP'), ('forin', 'JJ'), ('theat', 'NN'), ('five-yearjj', 'JJ'), ('termnn', 'NN'), ('..Incumbentjj', 'NNP'), ('Williamnp', 'NNP'), ('Brodnp', 'NNP'), ('isbez', 'NN'), ('opposedvbn', 'NN'), ('inin', 'NN'), ('hispp', 'VBD'), ('$', '$'), ('re-electionnn', 'JJ'), ('bidnn', 'NN'), ('byin', 'NN'), ('Barbaranp', 'NNP'), ('Njustnp', 'NNP'), (',', ','), (',', ','), ('Milesnp', 'NNP'), ('C.np', 'NNP'), ('Bubeniknp', 'NNP'), ('andcc', 'VBZ'), ('Franknp', 'NNP'), ('Leenp', 'NNP'), ('..', 'NNP'), ('Fivecd', 'NNP'), ('candidatesnns', 'NN'), ('seekvb', 'NN'), ('theat', 'NN'), ('placenn', 'NN'), ('vacatedvbn', 'NN'), ('byin', 'IN'), ('Secretarynn-tl', 'NNP'), ('Hughnp', 'NNP'), ('G.np', 'NNP'), ('Stoutnp', 'NNP'), ('..Seekingvbg', 'NNP'), ('thisdt', 'VBD'), ('two-yearjj', 'JJ'), ('termnn', 'JJ'), ('areber', 'NN'), ('Jamesnp', 'NNP'), ('Culbertsonnp', 'NNP'), (',', ','), (',', ','), ('Dwightnp', 'NNP'), ('M.np', 'NNP'), ('Steevesnp', 'NNP'), (',', ','), (',', ','), ('Jamesnp', 'NNP'), ('C.np', 'NNP'), ('Pierseenp', 'NNP'), (',', ','), (',', ','), ('W.M.np', 'NNP'), ('Sextonnp', 'NNP'), ('andcc', 'VBZ'), ('Theodorenp', 'NNP'), ('W.np', 'NNP'), ('Heitschmidtnp', 'NNP'), ('..', 'NNP'), ('Aat', 'NNP'), ('strongerjjr', 'VBD'), ('standnn', 'JJ'), ('onin', 'NN'), ('theirpp', 'VBD'), ('$', '$'), ('beliefsnns', 'JJ'), ('andcc', 'NN'), ('aat', 'NN'), ('firmerjjr', 'NN'), ('graspnn', 'NN'), ('onin', 'NN'), ('theirpp', 'VBD'), ('$', '$'), ('futurenn', 'JJ'), ('werebed', 'VBN'), ('takenvbn', 'JJ'), ('Fridaynr', 'NNP'), ('byin', 'NN'), ('delegatesnns', 'NN'), ('toin', 'NN'), ('theat', 'NN'), ('29thod', 'CD'), ('generaljj', 'NN'), ('councilnn', 'NN'), ('ofin', 'JJ'), ('theat', 'JJ'), ('Assembliesnns-tl', 'JJ'), ('ofin-tl', 'JJ'), ('Godnp-tl', 'NNP'), (',', ','), (',', ','), ('inin', 'JJ'), ('sessionnn', 'NN'), ('atin', 'NN'), ('theat', 'JJ'), ('Memorialjj-tl', 'JJ'), ('Coliseumnp-tl', 'NNP'), ('..', 'NNP'), ('Theat', 'NNP'), ('councilnn', 'NN'), ('revisedvbd', 'NN'), (',', ','), (',', ','), ('inin', 'JJ'), ('anat', 'NN'), ('effortnn', 'NN'), ('toto', 'NN'), ('strengthenvb', 'NN'), (',', ','), (',', ','), ('theat', 'NN'), ("denomination'snn", 'VBD'), ('$', '$'), ('16cd', 'CD'), ('basicjj', 'NN'), ('beliefsnns', 'NN'), ('adoptedvbn', 'NN'), ('inin', 'NN'), ('1966cd', 'CD'), ('..', 'JJ'), ('Theat', 'NNP'), ('changesnns', 'NN'), (',', ','), (',', ','), ('unanimouslyrb', 'JJ'), ('adoptedvbn', 'NN'), (',', ','), (',', ','), ('werebed', 'VBD'), ('feltvbn', 'JJ'), ('necessaryjj', 'JJ'), ('inin', 'NN'), ('theat', 'NN'), ('facenn', 'JJ'), ('ofin', 'NN'), ('modernjj', 'NN'), ('trendsnns', 'NN'), ('awayrb', 'IN'), ('fromin', 'JJ'), ('theat', 'NN'), ('Biblenp', 'NNP'), ('..Theat', 'NNP'), ('councilnn', 'NN'), ('agreedvbd', 'NN'), ('itpps', 'NN'), ('shouldmd', 'NN'), ('moreql', 'NN'), ('firmlyrb', 'NN'), ('statevb', 'VBD'), ('itspp', 'JJ'), ('$', '$'), ('beliefnn', 'JJ'), ('inin', 'NN'), ('andcc', 'NN'), ('dependencenn', 'NN'), ('onin', 'NN'), ('theat', 'NN'), ('Biblenp', 'NNP'), ('..', 'NNP'), ('Atin', 'NNP'), ('theat', 'NN'), ('adoptionnn', 'NN'), (',', ','), (',', ','), ('theat', 'NN'), ('Rev.np', 'NNP'), ('T.np', 'NNP'), ('F.np', 'NNP'), ('Zimmermannp', 'NNP'), (',', ','), (',', ','), ('generaljj', 'NN'), ('superintendentnn', 'NN'), (',', ','), (',', ','), ('commentedvbd', 'NN'), (',', ','), (',', ','), ('``', '``'), ('``', '``'), ('Theat-tl', 'JJ'), ('Assembliesnns-tl', 'JJ'), 
('ofin-tl', 'JJ'), ('Godnp', 'NNP'), ('hashvz', 'NN'), ('beenben', 'NN'), ('aat', 'NN'), ('bulwarknn', 'NN'), ('forin', 'NN'), ('fundamentalismnn', 'NN'), ('inin', 'NN'), ('thesedts', 'NN'), ('modernjj', 'NN'), ('daysnns', 'NN'), ('andcc', 'NN'), ('hashvz', 'NN'), (',', ','), (',', ','), ('withoutin', 'NN'), ('compromisenn', 'NN'), (',', ','), (',', ','), ('stoodvbd', 'JJ'), ('forin', 'NN'), ('theat', 'NN'), ('greatjj', 'NN'), ('truthsnns', 'NN'), ('ofin', 'NN'), ('theat', 'NN'), ('Biblenp', 'NNP'), ('forin', 'NN'), ('whichwdt', 'NN'), ('mennns', 'NN'), ('inin', 'NN'), ('theat', 'NN'), ('pastnn', 'NN'), ('havehv', 'NN'), ('beenben', 'NN'), ('willingjj', 'NN'), ('toto', 'NN'), ('givevb', 'NN'), ('theirpp', 'VBD'), ('$', '$'), ('livesnns', 'FW'), ('``', '``'), ("''", "''"), ('..Newjj-hl', 'JJ'), ('pointnn-hl', 'JJ'), ('addedvbn-hl', 'JJ'), ('Manyap', 'NNP'), ('changesnns', 'NN'), ('involvedvbd', 'NN'), ('minorjj', 'NN'), ('editingnn', 'NN'), ('andcc', 'NN'), ('clarificationnn', 'NN'), (';', ':'), ('.', '.')]
[(';', ':'), ('.howeverwrb', 'NN'), (',', ','), (',', ','), ('theat', 'NN'), ('firstod', 'NN'), ('beliefnn', 'NN'), ('stoodvbd', 'NN'), ('forin', 'NN'), ('entirejj', 'NN'), ('revisionnn', 'NN'), ('within', 'IN'), ('aat', 'JJ'), ('newjj', 'JJ'), ('thirdod', 'NN'), ('pointnn', 'NN'), ('addedvbn', 'NN'), ('toin', 'NN'), ('theat', 'NN'), ('listnn', 'JJ'), ('..', 'NNP'), ('Theat', 'NNP'), ('firstod', 'VBD'), ('ofin', 'RB'), ('16cd', 'CD'), ('beliefsnns', 'NN'), ('ofin', 'NN'), ('theat', 'NN'), ('denominationnn', 'NN'), (',', ','), (',', ','), ('nowrb', 'JJ'), ('readsvbz', 'NN'), (':', ':'), (':', ':'), ('``', '``'), ('``', '``'), ('Theat', 'NNP'), ('scripturesnns', 'NN'), (',', ','), (',', ','), ('bothabx', 'VBD'), ('Oldjj-tl', 'NNP'), ('andcc', 'JJ'), ('Newjj-tl', 'NNP'), ('Testamentnn-tl', 'NNP'), (',', ','), (',', ','), ('areber', 'VBP'), ('verballyrb', 'JJ'), ('inspiredvbn', 'NN'), ('ofin', 'NN'), ('Godnp', 'NNP'), ('andcc', 'VBZ'), ('areber', 'RB'), ('theat', 'JJ'), ('revelationnn', 'NN'), ('ofin', 'NN'), ('Godnp', 'NNP'), ('toin', 'NN'), ('mannn', 'NN'), (',', ','), (',', ','), ('theat', 'NN'), ('infalliblejj', 'NN'), (',', ','), (',', ','), ('authoritativejj', 'JJ'), ('rulenn', 'NN'), ('ofin', 'NN'), ('faithnn', 'NN'), ('andcc', 'NN'), ('conductnn', 'NN'), ('``', '``'), ("''", "''"), ('..', 'VBZ'), ('Theat', 'NNP'), ('thirdod', 'NN'), ('beliefnn', 'NN'), (',', ','), (',', ','), ('inin', 'JJ'), ('sixcd', 'NN'), ('pointsnns', 'NN'), (',', ','), (',', ','), ('emphasizesvbz', 'JJ'), ('theat', 'NN'), ('Dietynn-tl', 'NNP'), ('ofin', 'NN'), ('theat', 'NN'), ('Lordnn-tl', 'NNP'), ('Jesusnp', 'NNP'), ('Christnp', 'NNP'), (',', ','), (',', ','), ('andcc', 'NN'), (':', ':'), (':', ':'), ('--', ':'), ('--', ':'), ('emphasizesvbz', 'JJ'), ('theat', 'NN'), ('Virginnn-tl', 'NNP'), ('birthnn', 'NN'), ('--', ':'), ('--', ':'), ('theat', 'NN'), ('sinlessjj', 'JJ'), ('lifenn', 'NN'), ('ofin', 'NN'), ('Christnp', 'NNP'), ('--', ':'), ('--', ':'), ('Hispp', 'NNP'), ('$', '$'), ('miraclesnns', 'CD'), ('--', ':'), ('--', ':'), ('Hispp', 'NNP'), ('$', '$'), ('substitutionaryjj', 'JJ'), ('worknn', 'NN'), ('onin', 'NN'), ('theat', 'NN'), ('crossnn', 'NN'), ('--', ':'), ('--', ':'), ('Hispp', 'NNP'), ('$', '$'), ('bodilyjj', 'NN'), ('resurrectionnn', 'NN'), ('fromin', 'NN'), ('theat', 'NN'), ('deadjj', 'NN'), ('--', ':'), ('--', ':'), ('andcc', 'JJ'), ('Hispp', 'NNP'), ('$', '$'), ('exaltationnn', 'JJ'), ('toin', 'NN'), ('theat', 'NN'), ('rightjj', 'NN'), ('handnn', 'NN'), ('ofin', 'JJ'), ('Godnp', 'NNP'), ('..Supernn-hl', 'JJ'), ('againrb-hl', 'JJ'), ('electedvbn-hl', 'JJ'), ('Fridaynr', 'NNP'), ('afternoonnn', 'NN'), ('theat', 'NN'), ('Rev.np', 'NNP'), ('T.np', 'NNP'), ('F.np', 'NNP'), ('Zimmermannp', 'NNP'), ('wasbedz', 'VBD'), ('reelectedvbn', 'JJ'), ('forin', 'NN'), ('hispp', 'VBD'), ('$', '$'), ('secondod', 'JJ'), ('consecutivejj', 'NN'), ('two-yearjj', 'JJ'), ('termnn', 'NN'), ('ascs', 'NN'), ('generaljj', 'NN'), ('superintendentnn', 'JJ'), ('ofin', 'JJ'), ('Assembliesnns-tl', 'JJ'), ('ofin-tl', 'JJ'), ('Godnp-tl', 'NNP'), ('..Hispp', 'NNP'), ('$', '$'), ('officesnns', 'JJ'), ('areber', 'NN'), ('inin', 'NN'), ('Springfieldnp', 'NNP'), (',', ','), (',', ','), ('Mo.np', 'NNP'), ('..Electionnn', 'NNP'), ('camevbd', 'NN'), ('onin', 'NN'), ('theat', 'NN'), ('nominatingvbg', 'JJ'), ('ballotnn', 'NN'), ('..', 'NNP'), ('Fridaynr', 'NNP'), ('nightnn', 'RB'), ('theat', 'VBD'), ('delegatesnns', 'JJ'), ('heardvbd', 'NN'), ('theat', 'NN'), ('neednn', 'JJ'), ('forin', 'NN'), ('theirpp', 'VBD'), ('$', '$'), 
('forthcomingjj', 'JJ'), ('programnn', 'NN'), (',', ','), (',', ','), ('``', '``'), ('``', '``'), ('Breakthroughnn-tl', 'JJ'), ('``', '``'), ("''", "''"), ('scheduledvbn', 'NN'), ('toto', 'NN'), ('fillvb', 'NN'), ('theat', 'NN'), ('churchesnns', 'NN'), ('forin', 'NN'), ('theat', 'NN'), ('nextap', 'JJ'), ('twocd', 'NN'), ('yearsnns', 'NN'), ('..Inin', 'NNP'), ('hispp', 'VBD'), ('$', '$'), ('openingvbg', 'JJ'), ('addressnn', 'NN'), ('Wednesdaynr', 'NNP'), ('theat', 'NN'), ('Rev.np', 'NNP'), ('Mr.np', 'NNP'), ('Zimmermannp', 'NNP'), (',', ','), (',', ','), ('urgedvbd', 'JJ'), ('theat', 'NN'), ('delegatesnns', 'NN'), ('toto', 'NN'), ('considervb', 'NN'), ('aat', 'IN'), ('10-yearjj', 'JJ'), ('expansionnn', 'NN'), ('programnn', 'NN'), (',', ','), (',', ','), ('within', 'IN'), ('``', '``'), ('``', '``'), ('Breakthroughnn-tl', 'JJ'), ('``', '``'), ("''", "''"), ('theat', 'NN'), ('themenn', 'VBD'), ('forin', 'JJ'), ('theat', 'NN'), ('firstod', 'NN'), ('twocd', 'NN'), ('yearsnns', 'NN'), ('..', 'NNP'), ('Theat', 'NNP'), ('Rev.np', 'NNP'), ('R.np', 'NNP'), ('L.np', 'NNP'), ('Brandtnp', 'NNP'), (',', ','), (',', ','), ('nationaljj', 'JJ'), ('secretarynn', 'NN'), ('ofin', 'NN'), ('theat', 'NN'), ('homenr', 'NN'), ('missionsnns', 'NN'), ('departmentnn', 'NN'), (',', ','), (',', ','), ('stressedvbd', 'JJ'), ('theat', 'NN'), ('neednn', 'JJ'), ('forin', 'NN'), ('theat', 'NN'), ('firstod', 'JJ'), ('twocd', 'NN'), ("years'nns", 'RB'), ('$', '$'), ('worknn', 'JJ'), ('..', 'NNP'), ('``', '``'), ('``', '``'), ('Surveysnns', 'NNP'), ('showvb', 'NN'), ('thatcs', 'NN'), ('onecd', 'NN'), ('outin', 'NN'), ('ofin', 'NN'), ('threecd', 'NN'), ('Americansnps', 'NNP'), ('hashvz', 'NN'), ('vitaljj', 'NN'), ('contactnn', 'NN'), ('within', 'IN'), ('theat', 'NN'), ('churchnn', 'NN'), ('..Thisdt', 'NNP'), ('meansvbz', 'NN'), ('thatcs', 'NN'), ('moreap', 'VBD'), ('thanin', 'JJ'), ('100cd', 'CD'), ('millioncd', 'NN'), ('havehv', 'NN'), ('noat', 'NN'), ('vitaljj', 'NN'), ('touchnn', 'NN'), ('within', 'IN'), ('theat', 'NN'), ('churchnn', 'NN'), ('orcc', 'NN'), ('religiousjj', 'NN'), ('lifenn', 'VBZ'), ('``', '``'), ("''", "''"), (',', ','), (',', ','), ('hepps', 'JJ'), ('toldvbd', 'NN'), ('delegatesnns', 'NN'), ('Fridaynr', 'NNP'), ('..Churchnn-hl', 'JJ'), ('losesvbz-hl', 'JJ'), ('pacenn-hl', 'JJ'), ('Talkingvbg', 'NNP'), ('ofin', 'NN'), ('theat', 'NN'), ('rapidjj', 'NN'), ('populationnn', 'NN'), ('growthnn', 'NN'), ('(', '('), ('(', '('), ('upwardsrb', 'JJ'), ('ofin', 'NN'), ('12,000cd', 'CD'), ('babiesnns', 'NN'), ('bornvbn', 'NN'), ('dailyrb', 'NN'), (')', ')'), (')', ')'), ('within', 'IN'), ('anat', 'NN'), ('immigrantnn', 'NN'), ('enteringvbg', 'JJ'), ('theat', 'JJ'), ('Unitedvbn-tl', 'JJ'), ('Statesnns-tl', 'JJ'), ('everyat', 'NN'), ('1-12cd', 'JJ'), ('minutesnns', 'NN'), (',', ','), (',', ','), ('hepps', 'VBD'), ('saidvbd', 'JJ'), ('``', '``'), ('``', '``'), ('ourpp', 'JJ'), ('$', '$'), ('organizationnn', 'JJ'), ('hashvz', 'NN'), ('not*', 'NN'), ('beenben', 'NN'), ('keepingvbg', 'NN'), ('pacenn', 'NN'), ('within', 'IN'), ('thisdt', 'JJ'), ('challengenn', 'NN'), ('``', '``'), ("''", "''"), ('..', 'VBZ'), ('``', '``'), ('``', '``'), ('Inin', 'NNP'), ('35cd', 'CD'), ('yearsnns', 'NN'), ('weppss', 'NN'), ('havehv', 'NN'), ('openedvbn', 'VBD'), ('7,000cd', 'CD'), ('churchesnns', 'NN'), ('``', '``'), ("''", "''"), (',', ','), (',', ','), ('theat', 'NN'), ('Rev.np', 'NNP'), ('Mr.np', 'NNP'), ('Brandtnp', 'NNP'), ('saidvbd', 'NN'), (',', ','), (',', ','), ('addingvbg', 'JJ'), ('thatcs', 'NN'), ('theat', 'NN'), ('denominationnn', 
'NN'), ('hadhvd', 'NN'), ('aat', 'NN'), ('nationaljj', 'JJ'), ('goalnn', 'NN'), ('ofin', 'NN'), ('onecd', 'NN'), ('churchnn', 'NN'), ('forin', 'NN'), ('everyat', 'VBD'), ('10,000cd', 'CD'), ('personsnns', 'NN'), ('..', 'NN'), ('``', '``'), ('``', '``'), ('Inin', 'NNP'), ('thisdt', 'NN'), ('lightnn', 'NN'), ('weppss', 'VBD'), ('needvb', 'RB'), ('1,000cd', 'CD'), ('churchesnns', 'NNS'), ('inin', 'JJ'), ('Illinoisnp', 'NNP'), (',', ','), (',', ','), ('wherewrb', 'VBP'), ('weppss', 'JJ'), ('havehv', 'NN'), ('200cd', 'CD'), (';', ':'), ('.', '.')]
[(';', ':'), ('.800cd', 'CD'), ('inin', 'JJ'), ('Southernjj-tl', 'JJ'), ('Newjj-tl', 'NNP'), ('Englandnp', 'NNP'), (',', ','), (',', ','), ('weppss', 'VBD'), ('havehv', 'JJ'), ('60cd', 'CD'), (';', ':'), ('.', '.')]
[(';', ':'), ('.weppss', 'CC'), ('needvb', '$'), ('100cd', 'CD'), ('inin', 'JJ'), ('Rhodenp-tl', 'JJ'), ('Islandnn-tl', 'NNP'), (',', ','), (',', ','), ('weppss', 'VBP'), ('havehv', 'JJ'), ('nonepn', 'FW'), ('``', '``'), ("''", "''"), (',', ','), (',', ','), ('hepps', 'JJ'), ('saidvbd', 'NN'), ('..', 'NNP'), ('Toto', 'NNP'), ('stepvb', 'VBD'), ('uprp', 'JJ'), ('theat', 'NN'), ("denomination'snn", 'JJ'), ('$', '$'), ('programnn', 'NN'), (',', ','), (',', ','), ('theat', 'NN'), ('Rev.np', 'NNP'), ('Mr.np', 'NNP'), ('Brandtnp', 'NNP'), ('suggestedvbd', 'VBD'), ('theat', 'NN'), ('visionnn', 'NN'), ('ofin', 'VBD'), ('8,000cd', 'CD'), ('newjj', 'JJ'), ('Assembliesnns-tl', 'JJ'), ('ofin-tl', 'JJ'), ('Godnp-tl', 'NNP'), ('churchesnns', 'NN'), ('inin', 'NN'), ('theat', 'NN'), ('nextap', 'JJ'), ('10cd', 'CD'), ('yearsnns', 'NN'), ('..', 'NNP'), ('Toto', 'NNP'), ('accomplishvb', 'VBZ'), ('thisdt', 'JJ'), ('wouldmd', 'NN'), ('necessitatevb', 'NN'), ('somedti', 'NN'), ('changesnns', 'NN'), ('inin', 'NN'), ('methodsnns', 'NN'), (',', ','), (',', ','), ('hepps', 'JJ'), ('saidvbd', 'NN'), ('..', 'NN'), ("''", "''"), ('churchnn-hl', 'JJ'), ('meetsvbz-hl', 'JJ'), ('changenn-hl', 'NN'), ('``', '``'), ('``', '``'), ('``', '``'), ('Theat', 'NNP'), ("church'snn", 'JJ'), ('$', '$'), ('abilitynn', 'JJ'), ('toto', 'NN'), ('changevb', 'NN'), ('herpp', 'VBD'), ('$', '$'), ('methodsnns', 'CD'), ('isbez', 'NN'), ('goingvbg', 'NN'), ('toto', 'NN'), ('determinevb', 'NN'), ('herpp', 'VBD'), ('$', '$'), ('abilitynn', 'JJ'), ('toto', 'NN'), ('meetvb', 'NN'), ('theat', 'NN'), ('challengenn', 'NN'), ('ofin', 'NN'), ('thisdt', 'NN'), ('hournn', 'NN'), ('``', '``'), ("''", "''"), ('..', 'NN'), ('Aat', 'NNP'), ('capsulenn', 'NN'), ('viewnn', 'NN'), ('ofin', 'NN'), ('proposedvbn', 'NN'), ('plansnns', 'NN'), ('includesvbz', 'NN'), (':', ':'), (':', ':'), ('--', ':'), ('--', ':'), ('Encouragingvbg', 'NNP'), ('byin', 'NN'), ('everyat', 'NN'), ('meansnns', 'NN'), (',', ','), (',', ','), ('allabn', 'JJ'), ('existingvbg', 'JJ'), ('Assembliesnns-tl', 'JJ'), ('ofin-tl', 'JJ'), ('Godnp-tl', 'NNP'), ('churchesnns', 'NN'), ('toto', 'NN'), ('startvb', 'NN'), ('newjj', 'JJ'), ('churchesnns', 'NN'), ('..', 'NNP'), ('--', ':'), ('--', ':'), ('Engagingvbg', 'NNP'), ('maturejj', 'NN'), (',', ','), (',', ','), ('experiencedvbn', 'FW'), ('mennns', 'FW'), ('toto', 'NN'), ('pioneervb', 'NN'), ('orcc', 'NN'), ('openvb', 'NN'), ('newjj', 'JJ'), ('churchesnns', 'NN'), ('inin', 'NN'), ('strategicjj', 'NN'), ('populationnn', 'NN'), ('centersnns', 'NN'), ('..', 'NNP'), ('--', ':'), ('--', ':'), ('Surroundingvbg', 'NNP'), ('pioneernn', 'NN'), ('pastorsnns', 'NN'), ('within', 'IN'), ('vocationaljj', 'JJ'), ('volunteersnns', 'NN'), ('(', '('), ('(', '('), ('laymennns', 'NN'), (',', ','), (',', ','), ('whowps', 'VBP'), ('willmd', 'JJ'), ('bebe', 'NN'), ('urgedvbn', 'JJ'), ('toto', 'NN'), ('movevb', 'NN'), ('intoin', 'NN'), ('theat', 'NN'), ('areann', 'JJ'), ('ofin', 'NN'), ('newjj', 'NN'), ('churchesnns', 'NN'), ('inin', 'NN'), ('theat', 'NN'), ('interestnn', 'JJ'), ('ofin', 'NN'), ('lendingvbg', 'NN'), ('theirpp', 'VBD'), ('$', '$'), ('supportnn', 'JJ'), ('toin', 'NN'), ('theat', 'NN'), ('newjj', 'JJ'), ('projectnn', 'NN'), (')', ')'), (')', ')'), ('..', 'FW'), ('--', ':'), ('--', ':'), ('Arrangingvbg', 'NNP'), ('forin', 'VBP'), ('ministerialjj', 'NN'), ('graduatesnns', 'NN'), ('toto', 'NN'), ('spendvb', 'JJ'), ('fromin', 'JJ'), ('6-12cd', 'JJ'), ('monthsnns', 'NN'), ('ascs', 'NN'), ('apprenticesnns', 'JJ'), ('inin', 'JJ'), ('well-establishedjj', 'JJ'), 
('churchesnns', 'NN'), ('..', 'JJ'), ('U.S.np-tl', 'JJ'), ('Dist.nn-tl', 'NNP'), ('Judgenn-tl', 'NNP'), ('Charlesnp', 'NNP'), ('L.np', 'NNP'), ('Powellnp', 'NNP'), ('deniedvbd', 'NN'), ('allabn', 'NN'), ('motionsnns', 'NN'), ('madevbn', 'NN'), ('byin', 'NN'), ('defensenn', 'NN'), ('attorneysnns', 'NN'), ('Mondaynr', 'NNP'), ('inin', 'NN'), ("Portland'snp", 'NNP'), ('$', '$'), ('insurancenn', 'JJ'), ('fraudnn', 'NN'), ('trialnn', 'NN'), ('..', 'NNP'), ('Denialsnns', 'NNP'), ('werebed', 'VBD'), ('ofin', 'JJ'), ('motionsnns', 'NN'), ('ofin', 'NN'), ('dismissalnn', 'NN'), (',', ','), (',', ','), ('continuancenn', 'NN'), (',', ','), (',', ','), ('mistrialnn', 'NN'), (',', ','), (',', ','), ('separatejj', 'JJ'), ('trialnn', 'NN'), (',', ','), (',', ','), ('acquittalnn', 'NN'), (',', ','), (',', ','), ('strikingnn', 'JJ'), ('ofin', 'NN'), ('testimonynn', 'NN'), ('andcc', 'NN'), ('directedvbn', 'NN'), ('verdictnn', 'NN'), ('..', 'NNP'), ('Inin', 'NNP'), ('denyingvbg', 'NN'), ('motionsnns', 'NN'), ('forin', 'NN'), ('dismissalnn', 'NN'), (',', ','), (',', ','), ('Judgenn-tl', 'NNP'), ('Powellnp', 'NNP'), ('statedvbd', 'NN'), ('thatcs', 'NN'), ('massnn', 'NN'), ('trialsnns', 'NN'), ('havehv', 'NN'), ('beenben', 'NN'), ('upheldvbn', 'JJ'), ('ascs', 'NN'), ('properjj', 'NN'), ('inin', 'NN'), ('otherap', 'NN'), ('courtsnns', 'NN'), ('andcc', 'NN'), ('thatcs', 'IN'), ('``', '``'), ('``', '``'), ('aat', 'JJ'), ('personnn', 'NN'), ('maymd', 'NN'), ('joinvb', 'NN'), ('aat', 'NN'), ('conspiracynn', 'NN'), ('withoutin', 'NN'), ('knowingvbg', 'NN'), ('whowps', 'NN'), ('allabn', 'NN'), ('ofin', 'NN'), ('theat', 'NN'), ('conspiratorsnns', 'JJ'), ('areber', 'NN'), ('``', '``'), ("''", "''"), ('..', 'VBZ'), ('Attorneynn', 'NNP'), ('Dwightnp', 'NNP'), ('L.np', 'NNP'), ('Schwabnp', 'NNP'), (',', ','), (',', ','), ('inin', 'JJ'), ('behalfnn', 'NN'), ('ofin', 'NN'), ('defendantnn', 'NN'), ('Philipnp', 'NNP'), ('Weinsteinnp', 'NNP'), (',', ','), (',', ','), ('arguedvbd', 'JJ'), ('thereex', 'NN'), ('isbez', 'NN'), ('noat', 'NN'), ('evidencenn', 'NN'), ('linkingvbg', 'NN'), ('Weinsteinnp', 'NNP'), ('toin', 'NN'), ('theat', 'NN'), ('conspiracynn', 'NN'), (',', ','), (',', ','), ('butcc', 'VBD'), ('Judgenn-tl', 'NNP'), ('Powellnp', 'NNP'), ('declaredvbd', 'NN'), ('thisdt', 'NN'), ('isbez', 'NN'), ('aat', 'NN'), ('matternn', 'NN'), ('forcs', 'NN'), ('theat', 'NN'), ('jurynn', 'NN'), ('toto', 'NN'), ('decidevb', 'JJ'), ('..Proofnn-hl', 'JJ'), ('lacknn-hl', 'JJ'), ('chargedvbn-hl', 'JJ'), ('Schwabnp', 'NNP'), ('alsorb', 'NN'), ('declaredvbd', 'NN'), ('thereex', 'NN'), ('isbez', 'NN'), ('noat', 'NN'), ('proofnn', 'NN'), ('ofin', 'NN'), ("Weinstein'snp", 'NNP'), ('$', '$'), ('enteringvbg', 'JJ'), ('aat', 'NN'), ('conspiracynn', 'NN'), ('toto', 'NN'), ('usevb', 'JJ'), ('theat', 'NN'), ('U.S.np', 'NNP'), ('mailsnns', 'NN'), ('toto', 'NN'), ('defraudvb', 'NN'), (',', ','), (',', ','), ('toin', 'VB'), ('whichwdt', 'JJ'), ('federaljj', 'NN'), ('prosecutornn', 'NN'), ('A.np', 'NNP'), ('Lawrencenp', 'NNP'), ('Burbanknp', 'NNP'), ('repliedvbd', 'NN'), (':', ':'), (':', ':'), ('``', '``'), ('``', '``'), ('Itpps', 'NNP'), ('isbez', 'NN'), ('not*', 'NN'), ('necessaryjj', 'JJ'), ('thatcs', 'NN'), ('aat', 'NN'), ('defendantnn', 'NN'), ('actuallyrb', 'NN'), ('havehv', 'NN'), ('conpiredvbn', 'NN'), ('toto', 'NN'), ('usevb', 'JJ'), ('theat', 'NN'), ('U.S.np', 'NNP'), ('mailsnns', 'NN'), ('toto', 'NN'), ('defraudvb', 'NN'), ('asql', 'NN'), ('longrb', 'NN'), ('ascs', 'NN'), ('thereex', 'NN'), ('isbez', 'NN'), ('evidencenn', 'NN'), ('ofin', 
'NN'), ('aat', 'NN'), ('conspiracynn', 'NN'), (',', ','), (',', ','), ('andcc', 'JJ'), ('theat', 'NN'), ('mailsnns', 'NN'), ('werebed', 'VBD'), ('thenrb', 'JJ'), ('usedvbn', 'JJ'), ('toto', 'NN'), ('carryvb', 'NN'), ('itppo', 'NN'), ('outrp', 'IN'), ('``', '``'), ("''", "''"), ('..', 'NN'), ('Inin', 'NNP'), ('theat', 'NN'), ('afternoonnn', 'NN'), (',', ','), (',', ','), ('defensenn', 'JJ'), ('attorneysnns', 'NN'), ('beganvbd', 'NN'), ('theat', 'NN'), ('presentationnn', 'NN'), ('ofin', 'NN'), ('theirpp', 'VBD'), ('$', '$'), ('casesnns', 'NN'), ('within', 'IN'), ('openingvbg', 'JJ'), ('statementsnns', 'NN'), (',', ','), (',', ','), ('somedti', 'JJ'), ('ofin', 'NN'), ('whichwdt', 'NN'), ('hadhvd', 'NN'), ('beenben', 'NN'), ('deferredvbn', 'NN'), ('untilin', 'JJ'), ('afterin', 'JJ'), ('theat', 'NN'), ('governmentnn', 'NN'), ('hadhvd', 'NN'), ('calledvbn', 'NN'), ('witnessesnns', 'NN'), ('andcc', 'NN'), ('presentedvbn', 'NN'), ('itspp', 'JJ'), ('$', '$'), ('casenn', 'NNS'), ('..', 'VBP')]
###Markdown
Step 5 : Named Entity Recognition
Named Entity Recognition, also known as entity extraction, classifies the named entities present in a text into pre-defined categories such as individuals, companies, places, organizations, cities, dates, product terminologies, etc.
###Code
sentences = nltk.sent_tokenize(data)
tokenized_sentences = [nltk.word_tokenize(sentence) for sentence in sentences]
tagged_sentences = [nltk.pos_tag(sentence) for sentence in tokenized_sentences]
chunked_sentences = nltk.ne_chunk_sents(tagged_sentences, binary=True)
def extract_entity_names(x):
    entity_names = []
    if hasattr(x, 'label') and x.label:  # hasattr checks whether x has a 'label' attribute, i.e. x is a Tree rather than a plain (word, tag) tuple
        if x.label() == 'NE':
            entity_names.append(' '.join([word[0] for word in x]))
        else:
            for word in x:
                entity_names.extend(extract_entity_names(word))
    return entity_names
entity_names = []
for tree in chunked_sentences:
entity_names.extend(extract_entity_names(tree))
# Print unique entity names
print (set(entity_names))
###Output
{'Tuesdaynr', 'Dwightnp', 'Jamesnp Culbertsonnp', 'Eugenenp', 'Donaldnp Huffmannp', 'Salemnp', 'Biblenp', 'Milesnp', 'Portlandnp', 'Franknp Leenp', 'Americansnps', 'Barbaranp Njustnp', 'Martinnp', 'Alnp Ullmannp', 'Blainenp Whipplenp', 'SWnn Maplecrestnp', 'Oregonnp', 'Jamesnp', 'Attorneynn Dwightnp', 'Sovietnp', 'Philipnp Weinsteinnp', 'Jacknp', 'Ajnn', 'Hepps', 'Deannp Brysonnp', 'Losnp Angelesnp', 'Vincentnp'}
###Markdown
Module 2 (Python 3)
Basic NLP Tasks with NLTK
###Code
import nltk
from nltk.book import *
###Output
_____no_output_____
###Markdown
Counting vocabulary of words
###Code
text7
sent7
len(sent7)
len(text7)
len(set(text7))
list(set(text7))[:10]
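# Illustrative extra: lexical diversity, the fraction of unique tokens
len(set(text7)) / len(text7)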
###Output
_____no_output_____
###Markdown
Frequency of words
###Code
dist = FreqDist(text7)
len(dist)
vocab1 = dist.keys()
#vocab1[:10]
# In Python 3 dict.keys() returns an iterable view instead of a list
list(vocab1)[:10]
dist['four']
freqwords = [w for w in vocab1 if len(w) > 5 and dist[w] > 100]
freqwords
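# Illustrative extra: FreqDist also provides most_common() for the top
# (word, count) pairs
dist.most_common(10)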
###Output
_____no_output_____
###Markdown
Normalization and stemming
###Code
input1 = "List listed lists listing listings"
words1 = input1.lower().split(' ')
words1
porter = nltk.PorterStemmer()
[porter.stem(t) for t in words1]
###Output
_____no_output_____
###Markdown
Lemmatization
###Code
udhr = nltk.corpus.udhr.words('English-Latin1')
udhr[:20]
[porter.stem(t) for t in udhr[:20]] # stemming: the results are not always valid words (compare with the lemmatizer below)
WNlemma = nltk.WordNetLemmatizer()
[WNlemma.lemmatize(t) for t in udhr[:20]]
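# lemmatize() assumes the noun POS by default; passing pos='v' treats the
# word as a verb (illustrative extra)
WNlemma.lemmatize('drinking', pos='v') # -> 'drink'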
###Output
_____no_output_____
###Markdown
Tokenization
###Code
text11 = "Children shouldn't drink a sugary drink before bed."
text11.split(' ')
nltk.word_tokenize(text11)
text12 = "This is the first sentence. A gallon of milk in the U.S. costs $2.99. Is this the third sentence? Yes, it is!"
sentences = nltk.sent_tokenize(text12)
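# Punkt-based sent_tokenize should keep 'U.S.' and '$2.99' inside a single
# sentence rather than splitting at their internal periods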
len(sentences)
sentences
###Output
_____no_output_____
###Markdown
Advanced NLP Tasks with NLTK
POS tagging
###Code
nltk.help.upenn_tagset('MD')
text13 = nltk.word_tokenize(text11)
nltk.pos_tag(text13)
text14 = nltk.word_tokenize("Visiting aunts can be a nuisance")
nltk.pos_tag(text14)
# Parsing sentence structure
text15 = nltk.word_tokenize("Alice loves Bob")
grammar = nltk.CFG.fromstring("""
S -> NP VP
VP -> V NP
NP -> 'Alice' | 'Bob'
V -> 'loves'
""")
parser = nltk.ChartParser(grammar)
trees = parser.parse_all(text15)
for tree in trees:
print(tree)
text16 = nltk.word_tokenize("I saw the man with a telescope")
grammar1 = nltk.data.load('mygrammar.cfg')
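# 'mygrammar.cfg' is an external file whose contents aren't shown here; a
# grammar producing both attachment readings might look like this
# (hypothetical contents):
# S -> NP VP
# PP -> P NP
# NP -> Det N | Det N PP | 'I'
# VP -> V NP | VP PP
# Det -> 'a' | 'the'
# N -> 'man' | 'telescope'
# V -> 'saw'
# P -> 'with'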
grammar1
parser = nltk.ChartParser(grammar1)
trees = parser.parse_all(text16)
for tree in trees:
print(tree)
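# If 'mygrammar.cfg' is unavailable, an inline grammar with the same
# PP-attachment ambiguity works as a sketch (the file's actual contents
# may differ):
grammar_guess = nltk.CFG.fromstring("""
S -> NP VP
PP -> P NP
NP -> Det N | Det N PP | 'I'
VP -> V NP | VP PP
Det -> 'a' | 'the'
N -> 'man' | 'telescope'
V -> 'saw'
P -> 'with'
""")
for tree in nltk.ChartParser(grammar_guess).parse_all(text16):
    print(tree)  # two trees: the PP attaches to 'man' or to 'saw'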
from nltk.corpus import treebank
text17 = treebank.parsed_sents('wsj_0001.mrg')[0]
print(text17)
###Output
(S
(NP-SBJ
(NP (NNP Pierre) (NNP Vinken))
(, ,)
(ADJP (NP (CD 61) (NNS years)) (JJ old))
(, ,))
(VP
(MD will)
(VP
(VB join)
(NP (DT the) (NN board))
(PP-CLR (IN as) (NP (DT a) (JJ nonexecutive) (NN director)))
(NP-TMP (NNP Nov.) (CD 29))))
(. .))
###Markdown
POS tagging and parsing ambiguity
###Code
text18 = nltk.word_tokenize("The old man the boat")
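# 'man' is the verb here ("the old [people] man the boat"); a statistical
# tagger typically mislabels it as a noun, illustrating POS ambiguity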
nltk.pos_tag(text18)
text19 = nltk.word_tokenize("Colorless green ideas sleep furiously")
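# Chomsky's classic example: syntactically well-formed but semantically
# meaningless, so the tagger can still assign plausible tags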
nltk.pos_tag(text19)
###Output
_____no_output_____
|
nbs/09-pipeline.ipynb
|
###Markdown
Pipeline Imports
###Code
#exports
import numpy as np
import pandas as pd
import os
from sklearn.ensemble import RandomForestRegressor
from dagster import execute_pipeline, pipeline, solid, Field
from batopt import clean, discharge, charge, constraints, pv
import FEAutils as hlp
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
End-to-End. We're now going to combine these steps into a pipeline using Dagster. First we'll create the individual components.
###Code
@solid()
def load_data(_, raw_data_dir: str):
loaded_data = dict()
loaded_data['pv'] = clean.load_training_dataset(raw_data_dir, 'pv')
loaded_data['demand'] = clean.load_training_dataset(raw_data_dir, 'demand')
loaded_data['weather'] = clean.load_training_dataset(raw_data_dir, 'weather', dt_idx_freq='H')
return loaded_data
@solid()
def clean_data(_, loaded_data, raw_data_dir: str, intermediate_data_dir: str):
# Cleaning
cleaned_data = dict()
cleaned_data['pv'] = (loaded_data['pv']
.pipe(clean.pv_anomalies_to_nan)
.pipe(clean.interpolate_missing_panel_temps, loaded_data['weather'])
.pipe(clean.interpolate_missing_site_irradiance, loaded_data['weather'])
.pipe(clean.interpolate_missing_site_power)
)
cleaned_data['weather'] = clean.interpolate_missing_weather_solar(loaded_data['pv'], loaded_data['weather'])
cleaned_data['weather'] = clean.interpolate_missing_temps(cleaned_data['weather'], 'temp_location4')
cleaned_data['demand'] = loaded_data['demand']
# Saving
    if not os.path.exists(intermediate_data_dir):
os.mkdir(intermediate_data_dir)
set_num = clean.identify_latest_set_num(raw_data_dir)
cleaned_data['pv'].to_csv(f'{intermediate_data_dir}/pv_set{set_num}.csv')
cleaned_data['demand'].to_csv(f'{intermediate_data_dir}/demand_set{set_num}.csv')
cleaned_data['weather'].to_csv(f'{intermediate_data_dir}/weather_set{set_num}.csv')
return intermediate_data_dir
@solid()
def fit_and_save_pv_model(_, intermediate_data_dir: str, pv_model_fp: str, model_params: dict):
X, y = pv.prepare_training_input_data(intermediate_data_dir)
pv.fit_and_save_pv_model(X, y, pv_model_fp, model_class=RandomForestRegressor, **model_params)
return True
@solid()
def fit_and_save_discharge_model(_, intermediate_data_dir: str, discharge_opt_model_fp: str, model_params: dict):
X, y = discharge.prepare_training_input_data(intermediate_data_dir)
discharge.fit_and_save_model(X, y, discharge_opt_model_fp, **model_params)
return True
@solid()
def construct_battery_profile(_, charge_model_success: bool, discharge_model_success: bool, intermediate_data_dir: str, raw_data_dir: str, discharge_opt_model_fp: str, pv_model_fp: str, start_time: str):
assert charge_model_success and discharge_model_success, 'Model training was unsuccessful'
s_discharge_profile = discharge.optimise_test_discharge_profile(raw_data_dir, intermediate_data_dir, discharge_opt_model_fp)
s_charge_profile = pv.optimise_test_charge_profile(raw_data_dir, intermediate_data_dir, pv_model_fp, start_time=start_time)
s_battery_profile = (s_charge_profile + s_discharge_profile).fillna(0)
s_battery_profile.name = 'charge_MW'
return s_battery_profile
@solid()
def check_and_save_battery_profile(_, s_battery_profile, output_data_dir: str):
# Check that solution meets battery constraints
assert constraints.schedule_is_legal(s_battery_profile), 'Solution violates constraints'
# Saving
    if not os.path.exists(output_data_dir):
os.mkdir(output_data_dir)
s_battery_profile.index = s_battery_profile.index.tz_convert('UTC').tz_convert(None)
s_battery_profile.to_csv(f'{output_data_dir}/latest_submission.csv')
return
###Output
_____no_output_____
###Markdown
Then we'll combine them in a pipeline
###Code
@pipeline
def end_to_end_pipeline():
# loading and cleaning
loaded_data = load_data()
intermediate_data_dir = clean_data(loaded_data)
# charging
charge_model_success = fit_and_save_pv_model(intermediate_data_dir)
    # discharging
discharge_model_success = fit_and_save_discharge_model(intermediate_data_dir)
# combining and saving
s_battery_profile = construct_battery_profile(charge_model_success, discharge_model_success, intermediate_data_dir)
check_and_save_battery_profile(s_battery_profile)
###Output
_____no_output_____
###Markdown
Which we'll now run as a test
###Code
run_config = {
'solids': {
'load_data': {
'inputs': {
'raw_data_dir': '../data/raw',
},
},
'clean_data': {
'inputs': {
'raw_data_dir': '../data/raw',
'intermediate_data_dir': '../data/intermediate',
},
},
'fit_and_save_discharge_model': {
'inputs': {
'discharge_opt_model_fp': '../models/discharge_opt.sav',
'model_params': {
'criterion': 'mse',
'bootstrap': True,
'max_depth': 32,
'max_features': 'auto',
'min_samples_leaf': 1,
'min_samples_split': 4,
'n_estimators': 74
}
},
},
'fit_and_save_pv_model': {
'inputs': {
'pv_model_fp': '../models/pv_model.sav',
'model_params': {
'bootstrap': True,
'criterion': 'mse',
'max_depth': 5,
'max_features': 'sqrt',
'min_samples_leaf': 1,
'min_samples_split': 2,
'n_estimators': 150
}
},
},
'construct_battery_profile': {
'inputs': {
'raw_data_dir': '../data/raw',
'discharge_opt_model_fp': '../models/discharge_opt.sav',
'pv_model_fp': '../models/pv_model.sav',
'start_time': '08:00',
},
},
'check_and_save_battery_profile': {
'inputs': {
'output_data_dir': '../data/output'
},
},
}
}
execute_pipeline(end_to_end_pipeline, run_config=run_config)
###Output
2021-03-19 01:22:42 - dagster - DEBUG - end_to_end_pipeline - 01f88437-2b8c-4581-9982-3f76523c2874 - 17436 - ENGINE_EVENT - Starting initialization of resources [asset_store].
2021-03-19 01:22:42 - dagster - DEBUG - end_to_end_pipeline - 01f88437-2b8c-4581-9982-3f76523c2874 - 17436 - ENGINE_EVENT - Finished initialization of resources [asset_store].
2021-03-19 01:22:42 - dagster - DEBUG - end_to_end_pipeline - 01f88437-2b8c-4581-9982-3f76523c2874 - 17436 - PIPELINE_START - Started execution of pipeline "end_to_end_pipeline".
2021-03-19 01:22:42 - dagster - DEBUG - end_to_end_pipeline - 01f88437-2b8c-4581-9982-3f76523c2874 - 17436 - ENGINE_EVENT - Executing steps in process (pid: 17436)
2021-03-19 01:22:42 - dagster - DEBUG - end_to_end_pipeline - 01f88437-2b8c-4581-9982-3f76523c2874 - 17436 - load_data.compute - STEP_START - Started execution of step "load_data.compute".
2021-03-19 01:22:42 - dagster - DEBUG - end_to_end_pipeline - 01f88437-2b8c-4581-9982-3f76523c2874 - 17436 - load_data.compute - STEP_INPUT - Got input "raw_data_dir" of type "String". (Type check passed).
2021-03-19 01:22:42 - dagster - DEBUG - end_to_end_pipeline - 01f88437-2b8c-4581-9982-3f76523c2874 - 17436 - load_data.compute - STEP_OUTPUT - Yielded output "result" of type "Any". (Type check passed).
2021-03-19 01:22:42 - dagster - DEBUG - end_to_end_pipeline - 01f88437-2b8c-4581-9982-3f76523c2874 - 17436 - load_data.compute - OBJECT_STORE_OPERATION - Stored intermediate object for output result in memory object store using pickle.
2021-03-19 01:22:42 - dagster - DEBUG - end_to_end_pipeline - 01f88437-2b8c-4581-9982-3f76523c2874 - 17436 - load_data.compute - STEP_SUCCESS - Finished execution of step "load_data.compute" in 253ms.
2021-03-19 01:22:42 - dagster - DEBUG - end_to_end_pipeline - 01f88437-2b8c-4581-9982-3f76523c2874 - 17436 - clean_data.compute - STEP_START - Started execution of step "clean_data.compute".
2021-03-19 01:22:42 - dagster - DEBUG - end_to_end_pipeline - 01f88437-2b8c-4581-9982-3f76523c2874 - 17436 - clean_data.compute - OBJECT_STORE_OPERATION - Retrieved intermediate object for input loaded_data in memory object store using pickle.
2021-03-19 01:22:42 - dagster - DEBUG - end_to_end_pipeline - 01f88437-2b8c-4581-9982-3f76523c2874 - 17436 - clean_data.compute - STEP_INPUT - Got input "loaded_data" of type "Any". (Type check passed).
2021-03-19 01:22:42 - dagster - DEBUG - end_to_end_pipeline - 01f88437-2b8c-4581-9982-3f76523c2874 - 17436 - clean_data.compute - STEP_INPUT - Got input "raw_data_dir" of type "String". (Type check passed).
2021-03-19 01:22:42 - dagster - DEBUG - end_to_end_pipeline - 01f88437-2b8c-4581-9982-3f76523c2874 - 17436 - clean_data.compute - STEP_INPUT - Got input "intermediate_data_dir" of type "String". (Type check passed).
2021-03-19 01:24:43 - dagster - DEBUG - end_to_end_pipeline - 01f88437-2b8c-4581-9982-3f76523c2874 - 17436 - clean_data.compute - STEP_OUTPUT - Yielded output "result" of type "Any". (Type check passed).
2021-03-19 01:24:43 - dagster - DEBUG - end_to_end_pipeline - 01f88437-2b8c-4581-9982-3f76523c2874 - 17436 - clean_data.compute - OBJECT_STORE_OPERATION - Stored intermediate object for output result in memory object store using pickle.
2021-03-19 01:24:43 - dagster - DEBUG - end_to_end_pipeline - 01f88437-2b8c-4581-9982-3f76523c2874 - 17436 - clean_data.compute - STEP_SUCCESS - Finished execution of step "clean_data.compute" in 2m1s.
2021-03-19 01:24:43 - dagster - DEBUG - end_to_end_pipeline - 01f88437-2b8c-4581-9982-3f76523c2874 - 17436 - fit_and_save_discharge_model.compute - STEP_START - Started execution of step "fit_and_save_discharge_model.compute".
2021-03-19 01:24:43 - dagster - DEBUG - end_to_end_pipeline - 01f88437-2b8c-4581-9982-3f76523c2874 - 17436 - fit_and_save_discharge_model.compute - OBJECT_STORE_OPERATION - Retrieved intermediate object for input intermediate_data_dir in memory object store using pickle.
2021-03-19 01:24:43 - dagster - DEBUG - end_to_end_pipeline - 01f88437-2b8c-4581-9982-3f76523c2874 - 17436 - fit_and_save_discharge_model.compute - STEP_INPUT - Got input "intermediate_data_dir" of type "String". (Type check passed).
2021-03-19 01:24:43 - dagster - DEBUG - end_to_end_pipeline - 01f88437-2b8c-4581-9982-3f76523c2874 - 17436 - fit_and_save_discharge_model.compute - STEP_INPUT - Got input "discharge_opt_model_fp" of type "String". (Type check passed).
2021-03-19 01:24:43 - dagster - DEBUG - end_to_end_pipeline - 01f88437-2b8c-4581-9982-3f76523c2874 - 17436 - fit_and_save_discharge_model.compute - STEP_INPUT - Got input "model_params" of type "dict". (Type check passed).
2021-03-19 01:24:43 - dagster - DEBUG - end_to_end_pipeline - 01f88437-2b8c-4581-9982-3f76523c2874 - 17436 - fit_and_save_discharge_model.compute - STEP_OUTPUT - Yielded output "result" of type "Any". (Type check passed).
2021-03-19 01:24:43 - dagster - DEBUG - end_to_end_pipeline - 01f88437-2b8c-4581-9982-3f76523c2874 - 17436 - fit_and_save_discharge_model.compute - OBJECT_STORE_OPERATION - Stored intermediate object for output result in memory object store using pickle.
2021-03-19 01:24:43 - dagster - DEBUG - end_to_end_pipeline - 01f88437-2b8c-4581-9982-3f76523c2874 - 17436 - fit_and_save_discharge_model.compute - STEP_SUCCESS - Finished execution of step "fit_and_save_discharge_model.compute" in 5.44ms.
2021-03-19 01:24:43 - dagster - DEBUG - end_to_end_pipeline - 01f88437-2b8c-4581-9982-3f76523c2874 - 17436 - fit_and_save_pv_model.compute - STEP_START - Started execution of step "fit_and_save_pv_model.compute".
2021-03-19 01:24:43 - dagster - DEBUG - end_to_end_pipeline - 01f88437-2b8c-4581-9982-3f76523c2874 - 17436 - fit_and_save_pv_model.compute - OBJECT_STORE_OPERATION - Retrieved intermediate object for input intermediate_data_dir in memory object store using pickle.
2021-03-19 01:24:43 - dagster - DEBUG - end_to_end_pipeline - 01f88437-2b8c-4581-9982-3f76523c2874 - 17436 - fit_and_save_pv_model.compute - STEP_INPUT - Got input "intermediate_data_dir" of type "String". (Type check passed).
2021-03-19 01:24:43 - dagster - DEBUG - end_to_end_pipeline - 01f88437-2b8c-4581-9982-3f76523c2874 - 17436 - fit_and_save_pv_model.compute - STEP_INPUT - Got input "pv_model_fp" of type "String". (Type check passed).
2021-03-19 01:24:43 - dagster - DEBUG - end_to_end_pipeline - 01f88437-2b8c-4581-9982-3f76523c2874 - 17436 - fit_and_save_pv_model.compute - STEP_INPUT - Got input "model_params" of type "dict". (Type check passed).
2021-03-19 01:24:43 - dagster - DEBUG - end_to_end_pipeline - 01f88437-2b8c-4581-9982-3f76523c2874 - 17436 - fit_and_save_pv_model.compute - STEP_OUTPUT - Yielded output "result" of type "Any". (Type check passed).
2021-03-19 01:24:43 - dagster - DEBUG - end_to_end_pipeline - 01f88437-2b8c-4581-9982-3f76523c2874 - 17436 - fit_and_save_pv_model.compute - OBJECT_STORE_OPERATION - Stored intermediate object for output result in memory object store using pickle.
2021-03-19 01:24:43 - dagster - DEBUG - end_to_end_pipeline - 01f88437-2b8c-4581-9982-3f76523c2874 - 17436 - fit_and_save_pv_model.compute - STEP_SUCCESS - Finished execution of step "fit_and_save_pv_model.compute" in 6.29ms.
2021-03-19 01:24:43 - dagster - DEBUG - end_to_end_pipeline - 01f88437-2b8c-4581-9982-3f76523c2874 - 17436 - construct_battery_profile.compute - STEP_START - Started execution of step "construct_battery_profile.compute".
2021-03-19 01:24:43 - dagster - DEBUG - end_to_end_pipeline - 01f88437-2b8c-4581-9982-3f76523c2874 - 17436 - construct_battery_profile.compute - OBJECT_STORE_OPERATION - Retrieved intermediate object for input charge_model_success in memory object store using pickle.
2021-03-19 01:24:43 - dagster - DEBUG - end_to_end_pipeline - 01f88437-2b8c-4581-9982-3f76523c2874 - 17436 - construct_battery_profile.compute - OBJECT_STORE_OPERATION - Retrieved intermediate object for input discharge_model_success in memory object store using pickle.
2021-03-19 01:24:43 - dagster - DEBUG - end_to_end_pipeline - 01f88437-2b8c-4581-9982-3f76523c2874 - 17436 - construct_battery_profile.compute - OBJECT_STORE_OPERATION - Retrieved intermediate object for input intermediate_data_dir in memory object store using pickle.
2021-03-19 01:24:43 - dagster - DEBUG - end_to_end_pipeline - 01f88437-2b8c-4581-9982-3f76523c2874 - 17436 - construct_battery_profile.compute - STEP_INPUT - Got input "charge_model_success" of type "Bool". (Type check passed).
2021-03-19 01:24:43 - dagster - DEBUG - end_to_end_pipeline - 01f88437-2b8c-4581-9982-3f76523c2874 - 17436 - construct_battery_profile.compute - STEP_INPUT - Got input "discharge_model_success" of type "Bool". (Type check passed).
2021-03-19 01:24:43 - dagster - DEBUG - end_to_end_pipeline - 01f88437-2b8c-4581-9982-3f76523c2874 - 17436 - construct_battery_profile.compute - STEP_INPUT - Got input "intermediate_data_dir" of type "String". (Type check passed).
2021-03-19 01:24:43 - dagster - DEBUG - end_to_end_pipeline - 01f88437-2b8c-4581-9982-3f76523c2874 - 17436 - construct_battery_profile.compute - STEP_INPUT - Got input "raw_data_dir" of type "String". (Type check passed).
2021-03-19 01:24:43 - dagster - DEBUG - end_to_end_pipeline - 01f88437-2b8c-4581-9982-3f76523c2874 - 17436 - construct_battery_profile.compute - STEP_INPUT - Got input "discharge_opt_model_fp" of type "String". (Type check passed).
2021-03-19 01:24:43 - dagster - DEBUG - end_to_end_pipeline - 01f88437-2b8c-4581-9982-3f76523c2874 - 17436 - construct_battery_profile.compute - STEP_INPUT - Got input "pv_model_fp" of type "String". (Type check passed).
2021-03-19 01:24:43 - dagster - DEBUG - end_to_end_pipeline - 01f88437-2b8c-4581-9982-3f76523c2874 - 17436 - construct_battery_profile.compute - STEP_INPUT - Got input "start_time" of type "String". (Type check passed).
2021-03-19 01:24:46 - dagster - DEBUG - end_to_end_pipeline - 01f88437-2b8c-4581-9982-3f76523c2874 - 17436 - construct_battery_profile.compute - STEP_OUTPUT - Yielded output "result" of type "Any". (Type check passed).
2021-03-19 01:24:46 - dagster - DEBUG - end_to_end_pipeline - 01f88437-2b8c-4581-9982-3f76523c2874 - 17436 - construct_battery_profile.compute - OBJECT_STORE_OPERATION - Stored intermediate object for output result in memory object store using pickle.
2021-03-19 01:24:46 - dagster - DEBUG - end_to_end_pipeline - 01f88437-2b8c-4581-9982-3f76523c2874 - 17436 - construct_battery_profile.compute - STEP_SUCCESS - Finished execution of step "construct_battery_profile.compute" in 2.58s.
2021-03-19 01:24:46 - dagster - DEBUG - end_to_end_pipeline - 01f88437-2b8c-4581-9982-3f76523c2874 - 17436 - check_and_save_battery_profile.compute - STEP_START - Started execution of step "check_and_save_battery_profile.compute".
2021-03-19 01:24:46 - dagster - DEBUG - end_to_end_pipeline - 01f88437-2b8c-4581-9982-3f76523c2874 - 17436 - check_and_save_battery_profile.compute - OBJECT_STORE_OPERATION - Retrieved intermediate object for input s_battery_profile in memory object store using pickle.
2021-03-19 01:24:46 - dagster - DEBUG - end_to_end_pipeline - 01f88437-2b8c-4581-9982-3f76523c2874 - 17436 - check_and_save_battery_profile.compute - STEP_INPUT - Got input "s_battery_profile" of type "Any". (Type check passed).
2021-03-19 01:24:46 - dagster - DEBUG - end_to_end_pipeline - 01f88437-2b8c-4581-9982-3f76523c2874 - 17436 - check_and_save_battery_profile.compute - STEP_INPUT - Got input "output_data_dir" of type "String". (Type check passed).
2021-03-19 01:24:46 - dagster - DEBUG - end_to_end_pipeline - 01f88437-2b8c-4581-9982-3f76523c2874 - 17436 - check_and_save_battery_profile.compute - STEP_OUTPUT - Yielded output "result" of type "Any". (Type check passed).
2021-03-19 01:24:46 - dagster - DEBUG - end_to_end_pipeline - 01f88437-2b8c-4581-9982-3f76523c2874 - 17436 - check_and_save_battery_profile.compute - OBJECT_STORE_OPERATION - Stored intermediate object for output result in memory object store using pickle.
2021-03-19 01:24:46 - dagster - DEBUG - end_to_end_pipeline - 01f88437-2b8c-4581-9982-3f76523c2874 - 17436 - check_and_save_battery_profile.compute - STEP_SUCCESS - Finished execution of step "check_and_save_battery_profile.compute" in 15ms.
2021-03-19 01:24:46 - dagster - DEBUG - end_to_end_pipeline - 01f88437-2b8c-4581-9982-3f76523c2874 - 17436 - ENGINE_EVENT - Finished steps in process (pid: 17436) in 2m4s
2021-03-19 01:24:46 - dagster - DEBUG - end_to_end_pipeline - 01f88437-2b8c-4581-9982-3f76523c2874 - 17436 - PIPELINE_SUCCESS - Finished execution of pipeline "end_to_end_pipeline".
###Markdown
We'll then visualise the latest charging profile
###Code
df_latest_submission = pd.read_csv('../data/output/latest_submission.csv')
s_latest_submission = df_latest_submission.set_index('datetime')['charge_MW']
s_latest_submission.index = pd.to_datetime(s_latest_submission.index)
# Plotting
fig, ax = plt.subplots(dpi=250)
s_latest_submission.plot(ax=ax)
ax.set_xlabel('')
ax.set_ylabel('Charge (MW)')
hlp.hide_spines(ax)
fig.tight_layout()
fig.savefig('../img/latest_submission.png', dpi=250)
###Output
_____no_output_____
###Markdown
Finally we'll export the relevant code to our `batopt` module
###Code
#hide
from nbdev.export import notebook2script
notebook2script()
###Output
Converted 00-utilities.ipynb.
Converted 01-retrieval.ipynb.
Converted 02-cleaning.ipynb.
Converted 03-charging.ipynb.
Converted 04-discharging.ipynb.
Converted 05-constraints.ipynb.
Converted 06-tuning.ipynb.
Converted 07-pv-forecast.ipynb.
Converted 08-christmas.ipynb.
Converted 09-pipeline.ipynb.
|
notebooks/02z1__Template-Basic.ipynb
|
###Markdown
Notebook Title Author: Author's Name Affiliation: Institute/University License: Applicable License > Abstract: Outline the contents of the notebook.
###Code
# Import all modules/packages used in the notebook
# Initial import to allow requesting data
from viresclient import SwarmRequest
import datetime as dt
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
1. Fetching data
###Code
request = SwarmRequest()
# Listing of data accessible (help for selection, does not need to be run)
request.available_collections(details=False)
# Helper tool to show available parameters for collection (e.g. MAG)
request.available_measurements("MAG")
# Listing of available models
request.available_models(details=False)
# Application of filters
request.set_range_filter(parameter="Latitude",
minimum=0,
maximum=90)
request.set_range_filter("Longitude", 0, 90);
# Set collection identifier (see available_collections for options)
request.set_collection("SW_OPER_MAGA_LR_1B")
# Set measurements (see available_measurements)
request.set_products(
measurements=["F", "B_NEC"],
#models=["CHAOS-Core", "MCO_SHA_2D"],
sampling_step="PT10S"
);
# Data request
data = request.get_between(
    # 2019-01-01 00:00:00
    start_time = dt.datetime(2019,1,1, 0),
    # 2019-01-01 01:00:00
    end_time = dt.datetime(2019,1,1, 1)
)
# Convert to pandas dataframe
df = data.as_dataframe()
df.head()
# or as xarray dataset
ds = data.as_xarray()
ds
###Output
_____no_output_____
###Markdown
2. Plotting data
###Code
# Create plot
ax = df.plot(
y=["F"],
figsize=(15,5),
grid=True
)
ax.set_xlabel("Timestamp")
ax.set_ylabel("[nT]");
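# Sketch (assumes the xarray dataset `ds` built above): plot the three
# B_NEC vector components against time in one figure
ds["B_NEC"].plot.line(x="Timestamp")
plt.ylabel("[nT]");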
###Output
_____no_output_____
###Markdown
Notebook Title Author: Author's Name Affiliation: Institute/University License: Applicable License > Abstract: Outline the contents of the notebook.
###Code
# Display important package versions used
# Edit this according to what you use
# - this will help others to reproduce your results
# - it may also help to trace issues if package changes break your code
%load_ext watermark
%watermark -i -v -p viresclient,pandas,xarray,matplotlib
# Import all modules/packages used in the notebook
# Initial import to allow requesting data
from viresclient import SwarmRequest
import datetime as dt
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
1. Fetching data
###Code
request = SwarmRequest()
# Listing of data accessible (help for selection, does not need to be run)
request.available_collections(details=False)
# Helper tool to show available parameters for collection (e.g. MAG)
request.available_measurements("MAG")
# Listing of available models
request.available_models(details=False)
# Application of filters
request.set_range_filter(parameter="Latitude",
minimum=0,
maximum=90)
request.set_range_filter("Longitude", 0, 90);
# Set collection identifier (see available_collections for options)
request.set_collection("SW_OPER_MAGA_LR_1B")
# Set measurements (see available_measurements)
request.set_products(
measurements=["F", "B_NEC"],
#models=["CHAOS-Core", "MCO_SHA_2D"],
sampling_step="PT10S"
);
# Data request
data = request.get_between(
    # 2019-01-01 00:00:00
    start_time = dt.datetime(2019,1,1, 0),
    # 2019-01-01 01:00:00
    end_time = dt.datetime(2019,1,1, 1)
)
# Convert to pandas dataframe
df = data.as_dataframe()
df.head()
# or as xarray dataset
ds = data.as_xarray()
ds
###Output
_____no_output_____
###Markdown
2. Plotting data
###Code
# Create plot
ax = df.plot(
y=["F"],
figsize=(15,5),
grid=True
)
ax.set_xlabel("Timestamp")
ax.set_ylabel("[nT]");
###Output
_____no_output_____
|
docs/tutorials/gradients.ipynb
|
###Markdown
Copyright 2020 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Calculate gradients This tutorial explores gradient calculation algorithms for the expectation values of quantum circuits. Calculating the gradient of the expectation value of a certain observable in a quantum circuit is an involved process. Unlike traditional machine learning transformations such as matrix multiplication or vector addition, expectation values of observables do not have analytic gradient formulas that are always easy to write down. As a result, there are different quantum gradient calculation methods that come in handy for different scenarios. This tutorial compares and contrasts two different differentiation schemes. Setup
###Code
!pip install tensorflow==2.3.1
###Output
_____no_output_____
###Markdown
Install TensorFlow Quantum:
###Code
!pip install tensorflow-quantum
###Output
_____no_output_____
###Markdown
Now import TensorFlow and the module dependencies:
###Code
import tensorflow as tf
import tensorflow_quantum as tfq
import cirq
import sympy
import numpy as np
# visualization tools
%matplotlib inline
import matplotlib.pyplot as plt
from cirq.contrib.svg import SVGCircuit
###Output
_____no_output_____
###Markdown
1. Preliminary Let's make the notion of gradient calculation for quantum circuits a little more concrete. Suppose you have a parameterized circuit like this one:
###Code
qubit = cirq.GridQubit(0, 0)
my_circuit = cirq.Circuit(cirq.Y(qubit)**sympy.Symbol('alpha'))
SVGCircuit(my_circuit)
###Output
_____no_output_____
###Markdown
Along with an observable:
###Code
pauli_x = cirq.X(qubit)
pauli_x
###Output
_____no_output_____
###Markdown
Looking at this operator you know that $⟨Y(\alpha)| X | Y(\alpha)⟩ = \sin(\pi \alpha)$
###Code
def my_expectation(op, alpha):
"""Compute ⟨Y(alpha)| `op` | Y(alpha)⟩"""
params = {'alpha': alpha}
sim = cirq.Simulator()
final_state_vector = sim.simulate(my_circuit, params).final_state_vector
return op.expectation_from_state_vector(final_state_vector, {qubit: 0}).real
my_alpha = 0.3
print("Expectation=", my_expectation(pauli_x, my_alpha))
print("Sin Formula=", np.sin(np.pi * my_alpha))
###Output
_____no_output_____
###Markdown
and if you define $f_{1}(\alpha) = ⟨Y(\alpha)| X | Y(\alpha)⟩$ then $f_{1}^{'}(\alpha) = \pi \cos(\pi \alpha)$. Let's check this:
###Code
def my_grad(obs, alpha, eps=0.01):
f_x = my_expectation(obs, alpha)
f_x_prime = my_expectation(obs, alpha + eps)
return ((f_x_prime - f_x) / eps).real
print('Finite difference:', my_grad(pauli_x, my_alpha))
print('Cosine formula: ', np.pi * np.cos(np.pi * my_alpha))
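# A central-difference variant (a sketch): typically an O(eps**2) estimate,
# versus O(eps) for the forward difference above
def my_grad_central(obs, alpha, eps=0.01):
    return (my_expectation(obs, alpha + eps) - my_expectation(obs, alpha - eps)) / (2 * eps)

print('Central difference:', my_grad_central(pauli_x, my_alpha))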
###Output
_____no_output_____
###Markdown
2. The need for a differentiator With larger circuits, you won't always be so lucky to have a formula that precisely calculates the gradients of a given quantum circuit. In the event that a simple formula isn't enough to calculate the gradient, the `tfq.differentiators.Differentiator` class allows you to define algorithms for computing the gradients of your circuits. For instance you can recreate the above example in TensorFlow Quantum (TFQ) with:
###Code
expectation_calculation = tfq.layers.Expectation(
differentiator=tfq.differentiators.ForwardDifference(grid_spacing=0.01))
expectation_calculation(my_circuit,
operators=pauli_x,
symbol_names=['alpha'],
symbol_values=[[my_alpha]])
###Output
_____no_output_____
###Markdown
However, if you switch to estimating expectation based on sampling (what would happen on a true device) the values can change a little bit. This means you now have an imperfect estimate:
###Code
sampled_expectation_calculation = tfq.layers.SampledExpectation(
differentiator=tfq.differentiators.ForwardDifference(grid_spacing=0.01))
sampled_expectation_calculation(my_circuit,
operators=pauli_x,
repetitions=500,
symbol_names=['alpha'],
symbol_values=[[my_alpha]])
###Output
_____no_output_____
###Markdown
This can quickly compound into a serious accuracy problem when it comes to gradients:
###Code
# Make input_points = [batch_size, 1] array.
input_points = np.linspace(0, 5, 200)[:, np.newaxis].astype(np.float32)
exact_outputs = expectation_calculation(my_circuit,
operators=pauli_x,
symbol_names=['alpha'],
symbol_values=input_points)
imperfect_outputs = sampled_expectation_calculation(my_circuit,
operators=pauli_x,
repetitions=500,
symbol_names=['alpha'],
symbol_values=input_points)
plt.title('Forward Pass Values')
plt.xlabel('$x$')
plt.ylabel('$f(x)$')
plt.plot(input_points, exact_outputs, label='Analytic')
plt.plot(input_points, imperfect_outputs, label='Sampled')
plt.legend()
# Gradients are a much different story.
values_tensor = tf.convert_to_tensor(input_points)
with tf.GradientTape() as g:
g.watch(values_tensor)
exact_outputs = expectation_calculation(my_circuit,
operators=pauli_x,
symbol_names=['alpha'],
symbol_values=values_tensor)
analytic_finite_diff_gradients = g.gradient(exact_outputs, values_tensor)
with tf.GradientTape() as g:
g.watch(values_tensor)
imperfect_outputs = sampled_expectation_calculation(
my_circuit,
operators=pauli_x,
repetitions=500,
symbol_names=['alpha'],
symbol_values=values_tensor)
sampled_finite_diff_gradients = g.gradient(imperfect_outputs, values_tensor)
plt.title('Gradient Values')
plt.xlabel('$x$')
plt.ylabel('$f^{\'}(x)$')
plt.plot(input_points, analytic_finite_diff_gradients, label='Analytic')
plt.plot(input_points, sampled_finite_diff_gradients, label='Sampled')
plt.legend()
###Output
_____no_output_____
###Markdown
Here you can see that although the finite difference formula is fast at computing the gradients in the analytical case, it was far too noisy for the sampling-based methods. More careful techniques must be used to ensure a good gradient can be calculated. Next you will look at a much slower technique that wouldn't be as well suited for analytical expectation gradient calculations, but does perform much better in the real-world sample-based case:
###Code
# A smarter differentiation scheme.
gradient_safe_sampled_expectation = tfq.layers.SampledExpectation(
differentiator=tfq.differentiators.ParameterShift())
with tf.GradientTape() as g:
g.watch(values_tensor)
imperfect_outputs = gradient_safe_sampled_expectation(
my_circuit,
operators=pauli_x,
repetitions=500,
symbol_names=['alpha'],
symbol_values=values_tensor)
sampled_param_shift_gradients = g.gradient(imperfect_outputs, values_tensor)
plt.title('Gradient Values')
plt.xlabel('$x$')
plt.ylabel('$f^{\'}(x)$')
plt.plot(input_points, analytic_finite_diff_gradients, label='Analytic')
plt.plot(input_points, sampled_param_shift_gradients, label='Sampled')
plt.legend()
###Output
_____no_output_____
###Markdown
From the above you can see that certain differentiators are best used for particular research scenarios. In general, the slower sample-based methods that are robust to device noise, etc., are great differentiators when testing or implementing algorithms in a more "real world" setting. Faster methods like finite difference are great for analytical calculations when you want higher throughput, but aren't yet concerned with the device viability of your algorithm. 3. Multiple observables Let's introduce a second observable and see how TensorFlow Quantum supports multiple observables for a single circuit.
###Code
pauli_z = cirq.Z(qubit)
pauli_z
###Output
_____no_output_____
###Markdown
If this observable is used with the same circuit as before, then you have $f_{2}(\alpha) = ⟨Y(\alpha)| Z | Y(\alpha)⟩ = \cos(\pi \alpha)$ and $f_{2}^{'}(\alpha) = -\pi \sin(\pi \alpha)$. Perform a quick check:
###Code
test_value = 0.
print('Finite difference:', my_grad(pauli_z, test_value))
print('Sin formula: ', -np.pi * np.sin(np.pi * test_value))
###Output
_____no_output_____
###Markdown
It's a match (close enough). Now if you define $g(\alpha) = f_{1}(\alpha) + f_{2}(\alpha)$ then $g'(\alpha) = f_{1}^{'}(\alpha) + f^{'}_{2}(\alpha)$. Defining more than one observable in TensorFlow Quantum to use along with a circuit is equivalent to adding on more terms to $g$. This means that the gradient of a particular symbol in a circuit is equal to the sum of the gradients with regards to each observable for that symbol applied to that circuit. This is compatible with TensorFlow gradient taking and backpropagation (where you give the sum of the gradients over all observables as the gradient for a particular symbol).
###Code
sum_of_outputs = tfq.layers.Expectation(
differentiator=tfq.differentiators.ForwardDifference(grid_spacing=0.01))
sum_of_outputs(my_circuit,
operators=[pauli_x, pauli_z],
symbol_names=['alpha'],
symbol_values=[[test_value]])
###Output
_____no_output_____
###Markdown
Here you see the first entry is the expectation w.r.t Pauli X, and the second is the expectation w.r.t Pauli Z. Now when you take the gradient:
###Code
test_value_tensor = tf.convert_to_tensor([[test_value]])
with tf.GradientTape() as g:
g.watch(test_value_tensor)
outputs = sum_of_outputs(my_circuit,
operators=[pauli_x, pauli_z],
symbol_names=['alpha'],
symbol_values=test_value_tensor)
sum_of_gradients = g.gradient(outputs, test_value_tensor)
print(my_grad(pauli_x, test_value) + my_grad(pauli_z, test_value))
print(sum_of_gradients.numpy())
###Output
_____no_output_____
###Markdown
Here you have verified that the sum of the gradients for each observable is indeed the gradient of $\alpha$. This behavior is supported by all TensorFlow Quantum differentiators and plays a crucial role in the compatibility with the rest of TensorFlow. 4. Advanced usage All differentiators that exist inside of TensorFlow Quantum subclass `tfq.differentiators.Differentiator`. To implement a differentiator, a user must implement one of two interfaces. The standard is to implement `get_gradient_circuits`, which tells the base class which circuits to measure to obtain an estimate of the gradient. Alternatively, you can overload `differentiate_analytic` and `differentiate_sampled`; the class `tfq.differentiators.Adjoint` takes this route. The following uses TensorFlow Quantum to implement the gradient of a circuit. You will use a small example of parameter shifting. Recall the circuit you defined above, $|\alpha⟩ = Y^{\alpha}|0⟩$. As before, you can define a function as the expectation value of this circuit against the $X$ observable, $f(\alpha) = ⟨\alpha|X|\alpha⟩$. Using [parameter shift rules](https://pennylane.ai/qml/glossary/parameter_shift.html), for this circuit, you can find that the derivative is $$\frac{\partial}{\partial \alpha} f(\alpha) = \frac{\pi}{2} f\left(\alpha + \frac{1}{2}\right) - \frac{ \pi}{2} f\left(\alpha - \frac{1}{2}\right)$$ The `get_gradient_circuits` function returns the components of this derivative.
###Code
class MyDifferentiator(tfq.differentiators.Differentiator):
"""A Toy differentiator for <Y^alpha | X |Y^alpha>."""
def __init__(self):
pass
def get_gradient_circuits(self, programs, symbol_names, symbol_values):
"""Return circuits to compute gradients for given forward pass circuits.
Every gradient on a quantum computer can be computed via measurements
of transformed quantum circuits. Here, you implement a custom gradient
for a specific circuit. For a real differentiator, you will need to
implement this function in a more general way. See the differentiator
implementations in the TFQ library for examples.
"""
# The two terms in the derivative are the same circuit...
batch_programs = tf.stack([programs, programs], axis=1)
# ... with shifted parameter values.
shift = tf.constant(1/2)
forward = symbol_values + shift
backward = symbol_values - shift
batch_symbol_values = tf.stack([forward, backward], axis=1)
# Weights are the coefficients of the terms in the derivative.
num_program_copies = tf.shape(batch_programs)[0]
batch_weights = tf.tile(tf.constant([[[np.pi/2, -np.pi/2]]]),
[num_program_copies, 1, 1])
# The index map simply says which weights go with which circuits.
batch_mapper = tf.tile(
tf.constant([[[0, 1]]]), [num_program_copies, 1, 1])
return (batch_programs, symbol_names, batch_symbol_values,
batch_weights, batch_mapper)
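# Quick sanity check (a sketch, calling the differentiator internals
# directly): expect two shifted copies of each input program
test_programs = tfq.convert_to_tensor([my_circuit])
test_values = tf.constant([[0.5]])
progs, names, vals, weights, mapper = MyDifferentiator().get_gradient_circuits(
    test_programs, tf.constant(['alpha']), test_values)
print(progs.shape, vals.shape, weights.shape, mapper.shape)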
###Output
_____no_output_____
###Markdown
The `Differentiator` base class uses the components returned from `get_gradient_circuits` to calculate the derivative, as in the parameter shift formula you saw above. This new differentiator can now be used with existing `tfq.layer` objects:
###Code
custom_dif = MyDifferentiator()
custom_grad_expectation = tfq.layers.Expectation(differentiator=custom_dif)
# Now let's get the gradients with finite diff.
with tf.GradientTape() as g:
g.watch(values_tensor)
exact_outputs = expectation_calculation(my_circuit,
operators=[pauli_x],
symbol_names=['alpha'],
symbol_values=values_tensor)
analytic_finite_diff_gradients = g.gradient(exact_outputs, values_tensor)
# Now let's get the gradients with custom diff.
with tf.GradientTape() as g:
g.watch(values_tensor)
my_outputs = custom_grad_expectation(my_circuit,
operators=[pauli_x],
symbol_names=['alpha'],
symbol_values=values_tensor)
my_gradients = g.gradient(my_outputs, values_tensor)
plt.subplot(1, 2, 1)
plt.title('Exact Gradient')
plt.plot(input_points, analytic_finite_diff_gradients.numpy())
plt.xlabel('x')
plt.ylabel('f(x)')
plt.subplot(1, 2, 2)
plt.title('My Gradient')
plt.plot(input_points, my_gradients.numpy())
plt.xlabel('x')
###Output
_____no_output_____
###Markdown
This new differentiator can now be used to generate differentiable ops. Key Point: A differentiator that has been previously attached to an op must be refreshed before attaching to a new op, because a differentiator may only be attached to one op at a time.
###Code
# Create a noisy sample based expectation op.
expectation_sampled = tfq.get_sampled_expectation_op(
cirq.DensityMatrixSimulator(noise=cirq.depolarize(0.01)))
# Make it differentiable with your differentiator:
# Remember to refresh the differentiator before attaching the new op
custom_dif.refresh()
differentiable_op = custom_dif.generate_differentiable_op(
sampled_op=expectation_sampled)
# Prep op inputs.
circuit_tensor = tfq.convert_to_tensor([my_circuit])
op_tensor = tfq.convert_to_tensor([[pauli_x]])
single_value = tf.convert_to_tensor([[my_alpha]])
num_samples_tensor = tf.convert_to_tensor([[5000]])
with tf.GradientTape() as g:
g.watch(single_value)
forward_output = differentiable_op(circuit_tensor, ['alpha'], single_value,
op_tensor, num_samples_tensor)
my_gradients = g.gradient(forward_output, single_value)
print('---TFQ---')
print('Forward: ', forward_output.numpy())
print('Gradient:', my_gradients.numpy())
print('---Original---')
print('Forward: ', my_expectation(pauli_x, my_alpha))
print('Gradient:', my_grad(pauli_x, my_alpha))
###Output
_____no_output_____
###Markdown
Copyright 2020 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Calculate gradients This tutorial explores gradient calculation algorithms for the expectation values of quantum circuits. Calculating the gradient of the expectation value of a certain observable in a quantum circuit is an involved process. Unlike traditional machine learning transformations such as matrix multiplication or vector addition, expectation values of observables do not have analytic gradient formulas that are always easy to write down. As a result, there are different quantum gradient calculation methods that come in handy for different scenarios. This tutorial compares and contrasts two different differentiation schemes. Setup
###Code
!pip install tensorflow==2.4.1
###Output
_____no_output_____
###Markdown
Install TensorFlow Quantum:
###Code
!pip install tensorflow-quantum
# Update package resources to account for version changes.
import importlib, pkg_resources
importlib.reload(pkg_resources)
###Output
_____no_output_____
###Markdown
Now import TensorFlow and the module dependencies:
###Code
import tensorflow as tf
import tensorflow_quantum as tfq
import cirq
import sympy
import numpy as np
# visualization tools
%matplotlib inline
import matplotlib.pyplot as plt
from cirq.contrib.svg import SVGCircuit
###Output
_____no_output_____
###Markdown
1. Preliminary Let's make the notion of gradient calculation for quantum circuits a little more concrete. Suppose you have a parameterized circuit like this one:
###Code
qubit = cirq.GridQubit(0, 0)
my_circuit = cirq.Circuit(cirq.Y(qubit)**sympy.Symbol('alpha'))
SVGCircuit(my_circuit)
###Output
_____no_output_____
###Markdown
Along with an observable:
###Code
pauli_x = cirq.X(qubit)
pauli_x
###Output
_____no_output_____
###Markdown
Looking at this operator you know that $⟨Y(\alpha)| X | Y(\alpha)⟩ = \sin(\pi \alpha)$
###Code
def my_expectation(op, alpha):
"""Compute ⟨Y(alpha)| `op` | Y(alpha)⟩"""
params = {'alpha': alpha}
sim = cirq.Simulator()
final_state_vector = sim.simulate(my_circuit, params).final_state_vector
return op.expectation_from_state_vector(final_state_vector, {qubit: 0}).real
my_alpha = 0.3
print("Expectation=", my_expectation(pauli_x, my_alpha))
print("Sin Formula=", np.sin(np.pi * my_alpha))
###Output
_____no_output_____
###Markdown
and if you define $f_{1}(\alpha) = ⟨Y(\alpha)| X | Y(\alpha)⟩$ then $f_{1}^{'}(\alpha) = \pi \cos(\pi \alpha)$. Let's check this:
###Code
def my_grad(obs, alpha, eps=0.01):
f_x = my_expectation(obs, alpha)
f_x_prime = my_expectation(obs, alpha + eps)
return ((f_x_prime - f_x) / eps).real
print('Finite difference:', my_grad(pauli_x, my_alpha))
print('Cosine formula: ', np.pi * np.cos(np.pi * my_alpha))
###Output
_____no_output_____
###Markdown
2. The need for a differentiator With larger circuits, you won't always be so lucky to have a formula that precisely calculates the gradients of a given quantum circuit. In the event that a simple formula isn't enough to calculate the gradient, the `tfq.differentiators.Differentiator` class allows you to define algorithms for computing the gradients of your circuits. For instance you can recreate the above example in TensorFlow Quantum (TFQ) with:
###Code
expectation_calculation = tfq.layers.Expectation(
differentiator=tfq.differentiators.ForwardDifference(grid_spacing=0.01))
expectation_calculation(my_circuit,
operators=pauli_x,
symbol_names=['alpha'],
symbol_values=[[my_alpha]])
###Output
_____no_output_____
###Markdown
However, if you switch to estimating expectation based on sampling (what would happen on a true device) the values can change a little bit. This means you now have an imperfect estimate:
###Code
sampled_expectation_calculation = tfq.layers.SampledExpectation(
differentiator=tfq.differentiators.ForwardDifference(grid_spacing=0.01))
sampled_expectation_calculation(my_circuit,
operators=pauli_x,
repetitions=500,
symbol_names=['alpha'],
symbol_values=[[my_alpha]])
###Output
_____no_output_____
###Markdown
This can quickly compound into a serious accuracy problem when it comes to gradients:
###Code
# Make input_points = [batch_size, 1] array.
input_points = np.linspace(0, 5, 200)[:, np.newaxis].astype(np.float32)
exact_outputs = expectation_calculation(my_circuit,
operators=pauli_x,
symbol_names=['alpha'],
symbol_values=input_points)
imperfect_outputs = sampled_expectation_calculation(my_circuit,
operators=pauli_x,
repetitions=500,
symbol_names=['alpha'],
symbol_values=input_points)
plt.title('Forward Pass Values')
plt.xlabel('$x$')
plt.ylabel('$f(x)$')
plt.plot(input_points, exact_outputs, label='Analytic')
plt.plot(input_points, imperfect_outputs, label='Sampled')
plt.legend()
# Gradients are a much different story.
values_tensor = tf.convert_to_tensor(input_points)
with tf.GradientTape() as g:
g.watch(values_tensor)
exact_outputs = expectation_calculation(my_circuit,
operators=pauli_x,
symbol_names=['alpha'],
symbol_values=values_tensor)
analytic_finite_diff_gradients = g.gradient(exact_outputs, values_tensor)
with tf.GradientTape() as g:
g.watch(values_tensor)
imperfect_outputs = sampled_expectation_calculation(
my_circuit,
operators=pauli_x,
repetitions=500,
symbol_names=['alpha'],
symbol_values=values_tensor)
sampled_finite_diff_gradients = g.gradient(imperfect_outputs, values_tensor)
plt.title('Gradient Values')
plt.xlabel('$x$')
plt.ylabel('$f^{\'}(x)$')
plt.plot(input_points, analytic_finite_diff_gradients, label='Analytic')
plt.plot(input_points, sampled_finite_diff_gradients, label='Sampled')
plt.legend()
###Output
_____no_output_____
###Markdown
Here you can see that although the finite difference formula is fast at computing the gradients in the analytical case, it was far too noisy for the sampling-based methods. More careful techniques must be used to ensure a good gradient can be calculated. Next you will look at a much slower technique that wouldn't be as well suited for analytical expectation gradient calculations, but does perform much better in the real-world sample-based case:
###Code
# A smarter differentiation scheme.
gradient_safe_sampled_expectation = tfq.layers.SampledExpectation(
differentiator=tfq.differentiators.ParameterShift())
with tf.GradientTape() as g:
g.watch(values_tensor)
imperfect_outputs = gradient_safe_sampled_expectation(
my_circuit,
operators=pauli_x,
repetitions=500,
symbol_names=['alpha'],
symbol_values=values_tensor)
sampled_param_shift_gradients = g.gradient(imperfect_outputs, values_tensor)
plt.title('Gradient Values')
plt.xlabel('$x$')
plt.ylabel('$f^{\'}(x)$')
plt.plot(input_points, analytic_finite_diff_gradients, label='Analytic')
plt.plot(input_points, sampled_param_shift_gradients, label='Sampled')
plt.legend()
###Output
_____no_output_____
###Markdown
From the above you can see that certain differentiators are best used for particular research scenarios. In general, the slower sample-based methods that are robust to device noise, etc., are great differentiators when testing or implementing algorithms in a more "real world" setting. Faster methods like finite difference are great for analytical calculations when you want higher throughput, but aren't yet concerned with the device viability of your algorithm. 3. Multiple observables Let's introduce a second observable and see how TensorFlow Quantum supports multiple observables for a single circuit.
###Code
pauli_z = cirq.Z(qubit)
pauli_z
###Output
_____no_output_____
###Markdown
If this observable is used with the same circuit as before, then you have $f_{2}(\alpha) = ⟨Y(\alpha)| Z | Y(\alpha)⟩ = \cos(\pi \alpha)$ and $f_{2}^{'}(\alpha) = -\pi \sin(\pi \alpha)$. Perform a quick check:
###Code
test_value = 0.
print('Finite difference:', my_grad(pauli_z, test_value))
print('Sin formula: ', -np.pi * np.sin(np.pi * test_value))
###Output
_____no_output_____
###Markdown
It's a match (close enough). Now if you define $g(\alpha) = f_{1}(\alpha) + f_{2}(\alpha)$ then $g'(\alpha) = f_{1}^{'}(\alpha) + f^{'}_{2}(\alpha)$. Defining more than one observable in TensorFlow Quantum to use along with a circuit is equivalent to adding on more terms to $g$. This means that the gradient of a particular symbol in a circuit is equal to the sum of the gradients with regards to each observable for that symbol applied to that circuit. This is compatible with TensorFlow gradient taking and backpropagation (where you give the sum of the gradients over all observables as the gradient for a particular symbol).
###Code
sum_of_outputs = tfq.layers.Expectation(
differentiator=tfq.differentiators.ForwardDifference(grid_spacing=0.01))
sum_of_outputs(my_circuit,
operators=[pauli_x, pauli_z],
symbol_names=['alpha'],
symbol_values=[[test_value]])
###Output
_____no_output_____
###Markdown
Here you see the first entry is the expectation w.r.t Pauli X, and the second is the expectation w.r.t Pauli Z. Now when you take the gradient:
###Code
test_value_tensor = tf.convert_to_tensor([[test_value]])
with tf.GradientTape() as g:
g.watch(test_value_tensor)
outputs = sum_of_outputs(my_circuit,
operators=[pauli_x, pauli_z],
symbol_names=['alpha'],
symbol_values=test_value_tensor)
sum_of_gradients = g.gradient(outputs, test_value_tensor)
print(my_grad(pauli_x, test_value) + my_grad(pauli_z, test_value))
print(sum_of_gradients.numpy())
###Output
_____no_output_____
###Markdown
Here you have verified that the sum of the gradients for each observable is indeed the gradient of $\alpha$. This behavior is supported by all TensorFlow Quantum differentiators and plays a crucial role in the compatibility with the rest of TensorFlow. 4. Advanced usage All differentiators that exist inside of TensorFlow Quantum subclass `tfq.differentiators.Differentiator`. To implement a differentiator, a user must implement one of two interfaces. The standard is to implement `get_gradient_circuits`, which tells the base class which circuits to measure to obtain an estimate of the gradient. Alternatively, you can overload `differentiate_analytic` and `differentiate_sampled`; the class `tfq.differentiators.Adjoint` takes this route. The following uses TensorFlow Quantum to implement the gradient of a circuit. You will use a small example of parameter shifting. Recall the circuit you defined above, $|\alpha⟩ = Y^{\alpha}|0⟩$. As before, you can define a function as the expectation value of this circuit against the $X$ observable, $f(\alpha) = ⟨\alpha|X|\alpha⟩$. Using [parameter shift rules](https://pennylane.ai/qml/glossary/parameter_shift.html), for this circuit, you can find that the derivative is $$\frac{\partial}{\partial \alpha} f(\alpha) = \frac{\pi}{2} f\left(\alpha + \frac{1}{2}\right) - \frac{ \pi}{2} f\left(\alpha - \frac{1}{2}\right)$$ The `get_gradient_circuits` function returns the components of this derivative.
###Code
class MyDifferentiator(tfq.differentiators.Differentiator):
"""A Toy differentiator for <Y^alpha | X |Y^alpha>."""
def __init__(self):
pass
def get_gradient_circuits(self, programs, symbol_names, symbol_values):
"""Return circuits to compute gradients for given forward pass circuits.
Every gradient on a quantum computer can be computed via measurements
of transformed quantum circuits. Here, you implement a custom gradient
for a specific circuit. For a real differentiator, you will need to
implement this function in a more general way. See the differentiator
implementations in the TFQ library for examples.
"""
# The two terms in the derivative are the same circuit...
batch_programs = tf.stack([programs, programs], axis=1)
# ... with shifted parameter values.
shift = tf.constant(1/2)
forward = symbol_values + shift
backward = symbol_values - shift
batch_symbol_values = tf.stack([forward, backward], axis=1)
# Weights are the coefficients of the terms in the derivative.
num_program_copies = tf.shape(batch_programs)[0]
batch_weights = tf.tile(tf.constant([[[np.pi/2, -np.pi/2]]]),
[num_program_copies, 1, 1])
# The index map simply says which weights go with which circuits.
batch_mapper = tf.tile(
tf.constant([[[0, 1]]]), [num_program_copies, 1, 1])
return (batch_programs, symbol_names, batch_symbol_values,
batch_weights, batch_mapper)
###Output
_____no_output_____
###Markdown
The `Differentiator` base class uses the components returned from `get_gradient_circuits` to calculate the derivative, as in the parameter shift formula you saw above. This new differentiator can now be used with existing `tfq.layer` objects:
###Code
custom_dif = MyDifferentiator()
custom_grad_expectation = tfq.layers.Expectation(differentiator=custom_dif)
# Now let's get the gradients with finite diff.
with tf.GradientTape() as g:
g.watch(values_tensor)
exact_outputs = expectation_calculation(my_circuit,
operators=[pauli_x],
symbol_names=['alpha'],
symbol_values=values_tensor)
analytic_finite_diff_gradients = g.gradient(exact_outputs, values_tensor)
# Now let's get the gradients with custom diff.
with tf.GradientTape() as g:
g.watch(values_tensor)
my_outputs = custom_grad_expectation(my_circuit,
operators=[pauli_x],
symbol_names=['alpha'],
symbol_values=values_tensor)
my_gradients = g.gradient(my_outputs, values_tensor)
plt.subplot(1, 2, 1)
plt.title('Exact Gradient')
plt.plot(input_points, analytic_finite_diff_gradients.numpy())
plt.xlabel('x')
plt.ylabel('f(x)')
plt.subplot(1, 2, 2)
plt.title('My Gradient')
plt.plot(input_points, my_gradients.numpy())
plt.xlabel('x')
###Output
_____no_output_____
###Markdown
This new differentiator can now be used to generate differentiable ops. Key Point: A differentiator that has been previously attached to an op must be refreshed before attaching to a new op, because a differentiator may only be attached to one op at a time.
###Code
# Create a noisy sample based expectation op.
expectation_sampled = tfq.get_sampled_expectation_op(
cirq.DensityMatrixSimulator(noise=cirq.depolarize(0.01)))
# Make it differentiable with your differentiator:
# Remember to refresh the differentiator before attaching the new op
custom_dif.refresh()
differentiable_op = custom_dif.generate_differentiable_op(
sampled_op=expectation_sampled)
# Prep op inputs.
circuit_tensor = tfq.convert_to_tensor([my_circuit])
op_tensor = tfq.convert_to_tensor([[pauli_x]])
single_value = tf.convert_to_tensor([[my_alpha]])
num_samples_tensor = tf.convert_to_tensor([[5000]])
with tf.GradientTape() as g:
g.watch(single_value)
forward_output = differentiable_op(circuit_tensor, ['alpha'], single_value,
op_tensor, num_samples_tensor)
my_gradients = g.gradient(forward_output, single_value)
print('---TFQ---')
print('Forward: ', forward_output.numpy())
print('Gradient:', my_gradients.numpy())
print('---Original---')
print('Forward: ', my_expectation(pauli_x, my_alpha))
print('Gradient:', my_grad(pauli_x, my_alpha))
###Output
_____no_output_____
###Markdown
Copyright 2020 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Calculate gradients This tutorial explores gradient calculation algorithms for the expectation values of quantum circuits. Calculating the gradient of the expectation value of a certain observable in a quantum circuit is an involved process. Unlike traditional machine learning transformations such as matrix multiplication or vector addition, expectation values of observables do not have analytic gradient formulas that are always easy to write down. As a result, there are different quantum gradient calculation methods that come in handy for different scenarios. This tutorial compares and contrasts two different differentiation schemes. Setup
###Code
!pip install tensorflow==2.1.0
###Output
_____no_output_____
###Markdown
Install TensorFlow Quantum:
###Code
!pip install tensorflow-quantum cirq==0.7.0
###Output
_____no_output_____
###Markdown
Now import TensorFlow and the module dependencies:
###Code
import tensorflow as tf
import tensorflow_quantum as tfq
import cirq
import sympy
import numpy as np
# visualization tools
%matplotlib inline
import matplotlib.pyplot as plt
from cirq.contrib.svg import SVGCircuit
###Output
_____no_output_____
###Markdown
1. Preliminary Let's make the notion of gradient calculation for quantum circuits a little more concrete. Suppose you have a parameterized circuit like this one:
###Code
qubit = cirq.GridQubit(0, 0)
my_circuit = cirq.Circuit(cirq.Y(qubit)**sympy.Symbol('alpha'))
SVGCircuit(my_circuit)
###Output
_____no_output_____
###Markdown
Along with an observable:
###Code
pauli_x = cirq.X(qubit)
pauli_x
###Output
_____no_output_____
###Markdown
Looking at this operator you know that $⟨Y(\alpha)| X | Y(\alpha)⟩ = \sin(\pi \alpha)$
###Code
def my_expectation(op, alpha):
"""Compute ⟨Y(alpha)| `op` | Y(alpha)⟩"""
params = {'alpha': alpha}
sim = cirq.Simulator()
final_state = sim.simulate(my_circuit, params).final_state
return op.expectation_from_wavefunction(final_state, {qubit: 0}).real
my_alpha = 0.3
print("Expectation=", my_expectation(pauli_x, my_alpha))
print("Sin Formula=", np.sin(np.pi * my_alpha))
###Output
_____no_output_____
###Markdown
and if you define $f_{1}(\alpha) = ⟨Y(\alpha)| X | Y(\alpha)⟩$ then $f_{1}^{'}(\alpha) = \pi \cos(\pi \alpha)$. Let's check this:
###Code
def my_grad(obs, alpha, eps=0.01):
grad = 0
f_x = my_expectation(obs, alpha)
f_x_prime = my_expectation(obs, alpha + eps)
return ((f_x_prime - f_x) / eps).real
print('Finite difference:', my_grad(pauli_x, my_alpha))
print('Cosine formula: ', np.pi * np.cos(np.pi * my_alpha))
###Output
_____no_output_____
###Markdown
2. The need for a differentiatorWith larger circuits, you won't always be so lucky to have a formula that precisely calculates the gradients of a given quantum circuit. In the event that a simple formula isn't enough to calculate the gradient, the `tfq.differentiators.Differentiator` class allows you to define algorithms for computing the gradients of your circuits. For instance you can recreate the above example in TensorFlow Quantum (TFQ) with:
###Code
expectation_calculation = tfq.layers.Expectation(
differentiator=tfq.differentiators.ForwardDifference(grid_spacing=0.01))
expectation_calculation(my_circuit,
operators=pauli_x,
symbol_names=['alpha'],
symbol_values=[[my_alpha]])
###Output
_____no_output_____
###Markdown
However, if you switch to estimating expectation based on sampling (what would happen on a true device) the values can change a little bit. This means you now have an imperfect estimate:
###Code
sampled_expectation_calculation = tfq.layers.SampledExpectation(
differentiator=tfq.differentiators.ForwardDifference(grid_spacing=0.01))
sampled_expectation_calculation(my_circuit,
operators=pauli_x,
repetitions=500,
symbol_names=['alpha'],
symbol_values=[[my_alpha]])
###Output
_____no_output_____
###Markdown
This can quickly compound into a serious accuracy problem when it comes to gradients:
###Code
# Make input_points = [batch_size, 1] array.
input_points = np.linspace(0, 5, 200)[:, np.newaxis].astype(np.float32)
exact_outputs = expectation_calculation(my_circuit,
operators=pauli_x,
symbol_names=['alpha'],
symbol_values=input_points)
imperfect_outputs = sampled_expectation_calculation(my_circuit,
operators=pauli_x,
repetitions=500,
symbol_names=['alpha'],
symbol_values=input_points)
plt.title('Forward Pass Values')
plt.xlabel('$x$')
plt.ylabel('$f(x)$')
plt.plot(input_points, exact_outputs, label='Analytic')
plt.plot(input_points, imperfect_outputs, label='Sampled')
plt.legend()
# Gradients are a much different story.
values_tensor = tf.convert_to_tensor(input_points)
with tf.GradientTape() as g:
g.watch(values_tensor)
exact_outputs = expectation_calculation(my_circuit,
operators=pauli_x,
symbol_names=['alpha'],
symbol_values=values_tensor)
analytic_finite_diff_gradients = g.gradient(exact_outputs, values_tensor)
with tf.GradientTape() as g:
g.watch(values_tensor)
imperfect_outputs = sampled_expectation_calculation(
my_circuit,
operators=pauli_x,
repetitions=500,
symbol_names=['alpha'],
symbol_values=values_tensor)
sampled_finite_diff_gradients = g.gradient(imperfect_outputs, values_tensor)
plt.title('Gradient Values')
plt.xlabel('$x$')
plt.ylabel('$f^{\'}(x)$')
plt.plot(input_points, analytic_finite_diff_gradients, label='Analytic')
plt.plot(input_points, sampled_finite_diff_gradients, label='Sampled')
plt.legend()
###Output
_____no_output_____
###Markdown
Here you can see that although the finite difference formula is fast to compute the gradients themselves in the analytical case, when it came to the sampling based methods it was far too noisy. More careful techniques must be used to ensure a good gradient can be calculated. Next you will look at a much slower technique that wouldn't be as well suited for analytical expectation gradient calculations, but does perform much better in the real-world sample based case:
###Code
# A smarter differentiation scheme.
gradient_safe_sampled_expectation = tfq.layers.SampledExpectation(
differentiator=tfq.differentiators.ParameterShift())
with tf.GradientTape() as g:
g.watch(values_tensor)
imperfect_outputs = gradient_safe_sampled_expectation(
my_circuit,
operators=pauli_x,
repetitions=500,
symbol_names=['alpha'],
symbol_values=values_tensor)
sampled_param_shift_gradients = g.gradient(imperfect_outputs, values_tensor)
plt.title('Gradient Values')
plt.xlabel('$x$')
plt.ylabel('$f^{\'}(x)$')
plt.plot(input_points, analytic_finite_diff_gradients, label='Analytic')
plt.plot(input_points, sampled_param_shift_gradients, label='Sampled')
plt.legend()
###Output
_____no_output_____
###Markdown
From the above you can see that certain differentiators are best used for particular research scenarios. In general, the slower sample-based methods that are robust to device noise, etc., are great differentiators when testing or implementing algorithms in a more "real world" setting. Faster methods like finite difference are great for analytical calculations and you want higher throughput, but aren't yet concerned with the device viability of your algorithm. 3. Multiple observablesLet's introduce a second observable and see how TensorFlow Quantum supports multiple observables for a single circuit.
###Code
pauli_z = cirq.Z(qubit)
pauli_z
###Output
_____no_output_____
###Markdown
If this observable is used with the same circuit as before, then you have $f_{2}(\alpha) = ⟨Y(\alpha)| Z | Y(\alpha)⟩ = \cos(\pi \alpha)$ and $f_{2}^{'}(\alpha) = -\pi \sin(\pi \alpha)$. Perform a quick check:
###Code
test_value = 0.
print('Finite difference:', my_grad(pauli_z, test_value))
print('Sin formula: ', -np.pi * np.sin(np.pi * test_value))
###Output
_____no_output_____
###Markdown
It's a match (close enough).Now if you define $g(\alpha) = f_{1}(\alpha) + f_{2}(\alpha)$ then $g'(\alpha) = f_{1}^{'}(\alpha) + f^{'}_{2}(\alpha)$. Defining more than one observable in TensorFlow Quantum to use along with a circuit is equivalent to adding on more terms to $g$.This means that the gradient of a particular symbol in a circuit is equal to the sum of the gradients with regards to each observable for that symbol applied to that circuit. This is compatible with TensorFlow gradient taking and backpropagation (where you give the sum of the gradients over all observables as the gradient for a particular symbol).
###Code
sum_of_outputs = tfq.layers.Expectation(
differentiator=tfq.differentiators.ForwardDifference(grid_spacing=0.01))
sum_of_outputs(my_circuit,
operators=[pauli_x, pauli_z],
symbol_names=['alpha'],
symbol_values=[[test_value]])
###Output
_____no_output_____
###Markdown
Here you see the first entry is the expectation w.r.t Pauli X, and the second is the expectation w.r.t Pauli Z. Now when you take the gradient:
###Code
test_value_tensor = tf.convert_to_tensor([[test_value]])
with tf.GradientTape() as g:
g.watch(test_value_tensor)
outputs = sum_of_outputs(my_circuit,
operators=[pauli_x, pauli_z],
symbol_names=['alpha'],
symbol_values=test_value_tensor)
sum_of_gradients = g.gradient(outputs, test_value_tensor)
print(my_grad(pauli_x, test_value) + my_grad(pauli_z, test_value))
print(sum_of_gradients.numpy())
###Output
_____no_output_____
###Markdown
Here you have verified that the sum of the gradients for each observable is indeed the gradient of $\alpha$. This behavior is supported by all TensorFlow Quantum differentiators and plays a crucial role in the compatibility with the rest of TensorFlow. 4. Advanced usageHere you will learn how to define your own custom differentiation routines for quantum circuits.All differentiators that exist inside of TensorFlow Quantum subclass `tfq.differentiators.Differentiator`. A differentiator must implement `differentiate_analytic` and `differentiate_sampled`.The following uses TensorFlow Quantum constructs to implement the closed form solution from the first part of this tutorial.
###Code
class MyDifferentiator(tfq.differentiators.Differentiator):
"""A Toy differentiator for <Y^alpha | X |Y^alpha>."""
def __init__(self):
pass
@tf.function
def _compute_gradient(self, symbol_values):
"""Compute the gradient based on symbol_values."""
# f(x) = sin(pi * x)
# f'(x) = pi * cos(pi * x)
return tf.cast(tf.cos(symbol_values * np.pi) * np.pi, tf.float32)
@tf.function
def differentiate_analytic(self, programs, symbol_names, symbol_values,
pauli_sums, forward_pass_vals, grad):
"""Specify how to differentiate a circuit with analytical expectation.
This is called at graph runtime by TensorFlow. `differentiate_analytic`
should calculate the gradient of a batch of circuits and return it
formatted as indicated below. See
`tfq.differentiators.ForwardDifference` for an example.
Args:
programs: `tf.Tensor` of strings with shape [batch_size] containing
the string representations of the circuits to be executed.
symbol_names: `tf.Tensor` of strings with shape [n_params], which
is used to specify the order in which the values in
`symbol_values` should be placed inside of the circuits in
`programs`.
symbol_values: `tf.Tensor` of real numbers with shape
[batch_size, n_params] specifying parameter values to resolve
into the circuits specified by programs, following the ordering
dictated by `symbol_names`.
pauli_sums: `tf.Tensor` of strings with shape [batch_size, n_ops]
containing the string representation of the operators that will
be used on all of the circuits in the expectation calculations.
forward_pass_vals: `tf.Tensor` of real numbers with shape
[batch_size, n_ops] containing the output of the forward pass
through the op you are differentiating.
grad: `tf.Tensor` of real numbers with shape [batch_size, n_ops]
representing the gradient backpropagated to the output of the
op you are differentiating through.
Returns:
A `tf.Tensor` with the same shape as `symbol_values` representing
the gradient backpropagated to the `symbol_values` input of the op
you are differentiating through.
"""
# Computing gradients just based off of symbol_values.
return self._compute_gradient(symbol_values) * grad
@tf.function
def differentiate_sampled(self, programs, symbol_names, symbol_values,
pauli_sums, num_samples, forward_pass_vals, grad):
"""Specify how to differentiate a circuit with sampled expectation.
This is called at graph runtime by TensorFlow. `differentiate_sampled`
should calculate the gradient of a batch of circuits and return it
formatted as indicated below. See
`tfq.differentiators.ForwardDifference` for an example.
Args:
programs: `tf.Tensor` of strings with shape [batch_size] containing
the string representations of the circuits to be executed.
symbol_names: `tf.Tensor` of strings with shape [n_params], which
is used to specify the order in which the values in
`symbol_values` should be placed inside of the circuits in
`programs`.
symbol_values: `tf.Tensor` of real numbers with shape
[batch_size, n_params] specifying parameter values to resolve
into the circuits specified by programs, following the ordering
dictated by `symbol_names`.
pauli_sums: `tf.Tensor` of strings with shape [batch_size, n_ops]
containing the string representation of the operators that will
be used on all of the circuits in the expectation calculations.
num_samples: `tf.Tensor` of positive integers representing the
number of samples per term in each term of pauli_sums used
during the forward pass.
forward_pass_vals: `tf.Tensor` of real numbers with shape
[batch_size, n_ops] containing the output of the forward pass
through the op you are differentiating.
grad: `tf.Tensor` of real numbers with shape [batch_size, n_ops]
representing the gradient backpropagated to the output of the
op you are differentiating through.
Returns:
A `tf.Tensor` with the same shape as `symbol_values` representing
the gradient backpropagated to the `symbol_values` input of the op
you are differentiating through.
"""
return self._compute_gradient(symbol_values) * grad
###Output
_____no_output_____
###Markdown
This new differentiator can now be used with existing `tfq.layer` objects:
###Code
custom_dif = MyDifferentiator()
custom_grad_expectation = tfq.layers.Expectation(differentiator=custom_dif)
# Now let's get the gradients with finite diff.
with tf.GradientTape() as g:
g.watch(values_tensor)
exact_outputs = expectation_calculation(my_circuit,
operators=[pauli_x],
symbol_names=['alpha'],
symbol_values=values_tensor)
analytic_finite_diff_gradients = g.gradient(exact_outputs, values_tensor)
# Now let's get the gradients with custom diff.
with tf.GradientTape() as g:
g.watch(values_tensor)
my_outputs = custom_grad_expectation(my_circuit,
operators=[pauli_x],
symbol_names=['alpha'],
symbol_values=values_tensor)
my_gradients = g.gradient(my_outputs, values_tensor)
plt.subplot(1, 2, 1)
plt.title('Exact Gradient')
plt.plot(input_points, analytic_finite_diff_gradients.numpy())
plt.xlabel('x')
plt.ylabel('f(x)')
plt.subplot(1, 2, 2)
plt.title('My Gradient')
plt.plot(input_points, my_gradients.numpy())
plt.xlabel('x')
###Output
_____no_output_____
###Markdown
This new differentiator can now be used to generate differentiable ops.Key Point: A differentiator that has been previously attached to an op must be refreshed before attaching to a new op, because a differentiator may only be attached to one op at a time.
###Code
# Create a noisy sample based expectation op.
expectation_sampled = tfq.get_sampled_expectation_op(
cirq.DensityMatrixSimulator(noise=cirq.depolarize(0.01)))
# Make it differentiable with your differentiator:
# Remember to refresh the differentiator before attaching the new op
custom_dif.refresh()
differentiable_op = custom_dif.generate_differentiable_op(
sampled_op=expectation_sampled)
# Prep op inputs.
circuit_tensor = tfq.convert_to_tensor([my_circuit])
op_tensor = tfq.convert_to_tensor([[pauli_x]])
single_value = tf.convert_to_tensor([[my_alpha]])
num_samples_tensor = tf.convert_to_tensor([[1000]])
with tf.GradientTape() as g:
g.watch(single_value)
forward_output = differentiable_op(circuit_tensor, ['alpha'], single_value,
op_tensor, num_samples_tensor)
my_gradients = g.gradient(forward_output, single_value)
print('---TFQ---')
print('Foward: ', forward_output.numpy())
print('Gradient:', my_gradients.numpy())
print('---Original---')
print('Forward: ', my_expectation(pauli_x, my_alpha))
print('Gradient:', my_grad(pauli_x, my_alpha))
###Output
_____no_output_____
###Markdown
Copyright 2020 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Calculate gradients This tutorial explores gradient calculation algorithms for the expectation values of quantum circuits. Calculating the gradient of the expectation value of a certain observable in a quantum circuit is an involved process. Unlike traditional machine learning transformations such as matrix multiplication or vector addition, expectation values of observables do not have analytic gradient formulas that are always easy to write down. As a result, there are different quantum gradient calculation methods that come in handy for different scenarios. This tutorial compares and contrasts two different differentiation schemes. Setup
###Code
!pip install tensorflow==2.7.0
###Output
_____no_output_____
###Markdown
Install TensorFlow Quantum:
###Code
!pip install tensorflow-quantum
# Update package resources to account for version changes.
import importlib, pkg_resources
importlib.reload(pkg_resources)
###Output
_____no_output_____
###Markdown
Now import TensorFlow and the module dependencies:
###Code
import tensorflow as tf
import tensorflow_quantum as tfq
import cirq
import sympy
import numpy as np
# visualization tools
%matplotlib inline
import matplotlib.pyplot as plt
from cirq.contrib.svg import SVGCircuit
###Output
_____no_output_____
###Markdown
1. Preliminary: Let's make the notion of gradient calculation for quantum circuits a little more concrete. Suppose you have a parameterized circuit like this one:
###Code
qubit = cirq.GridQubit(0, 0)
my_circuit = cirq.Circuit(cirq.Y(qubit)**sympy.Symbol('alpha'))
SVGCircuit(my_circuit)
###Output
_____no_output_____
###Markdown
Along with an observable:
###Code
pauli_x = cirq.X(qubit)
pauli_x
###Output
_____no_output_____
###Markdown
Looking at this operator, you know that $⟨Y(\alpha)| X | Y(\alpha)⟩ = \sin(\pi \alpha)$.
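One way to see this (a standard derivation, using cirq's convention $Y^{\alpha} = e^{i\pi\alpha/2}\,e^{-i\pi\alpha Y/2}$): the prepared state is $Y^{\alpha}|0⟩ = e^{i\pi\alpha/2}\left(\cos\tfrac{\pi\alpha}{2}|0⟩ + \sin\tfrac{\pi\alpha}{2}|1⟩\right)$, the global phase cancels in the expectation, and $⟨Y(\alpha)| X | Y(\alpha)⟩ = 2\cos\tfrac{\pi\alpha}{2}\sin\tfrac{\pi\alpha}{2} = \sin(\pi\alpha)$.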
###Code
def my_expectation(op, alpha):
"""Compute ⟨Y(alpha)| `op` | Y(alpha)⟩"""
params = {'alpha': alpha}
sim = cirq.Simulator()
final_state_vector = sim.simulate(my_circuit, params).final_state_vector
return op.expectation_from_state_vector(final_state_vector, {qubit: 0}).real
my_alpha = 0.3
print("Expectation=", my_expectation(pauli_x, my_alpha))
print("Sin Formula=", np.sin(np.pi * my_alpha))
###Output
_____no_output_____
###Markdown
and if you define $f_{1}(\alpha) = ⟨Y(\alpha)| X | Y(\alpha)⟩$ then $f_{1}^{'}(\alpha) = \pi \cos(\pi \alpha)$. Let's check this:
###Code
def my_grad(obs, alpha, eps=0.01):
    # Forward-difference approximation of the derivative, with O(eps) error.
    f_x = my_expectation(obs, alpha)
    f_x_prime = my_expectation(obs, alpha + eps)
    return ((f_x_prime - f_x) / eps).real
print('Finite difference:', my_grad(pauli_x, my_alpha))
print('Cosine formula: ', np.pi * np.cos(np.pi * my_alpha))
###Output
_____no_output_____
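###Markdown
The forward difference above has $O(\epsilon)$ truncation error. A central-difference variant (the helper `my_grad_central` below is our own sketch, not part of the original tutorial) reduces this to $O(\epsilon^{2})$ at the cost of one extra circuit evaluation:
###Code
def my_grad_central(obs, alpha, eps=0.01):
    # Central difference: truncation error O(eps**2) instead of O(eps).
    f_plus = my_expectation(obs, alpha + eps)
    f_minus = my_expectation(obs, alpha - eps)
    return ((f_plus - f_minus) / (2 * eps)).real

print('Central difference:', my_grad_central(pauli_x, my_alpha))
print('Cosine formula:   ', np.pi * np.cos(np.pi * my_alpha))
###Output
_____no_output_____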
###Markdown
2. The need for a differentiator: With larger circuits, you won't always be so lucky to have a formula that precisely calculates the gradients of a given quantum circuit. In the event that a simple formula isn't enough to calculate the gradient, the `tfq.differentiators.Differentiator` class allows you to define algorithms for computing the gradients of your circuits. For instance you can recreate the above example in TensorFlow Quantum (TFQ) with:
###Code
expectation_calculation = tfq.layers.Expectation(
differentiator=tfq.differentiators.ForwardDifference(grid_spacing=0.01))
expectation_calculation(my_circuit,
operators=pauli_x,
symbol_names=['alpha'],
symbol_values=[[my_alpha]])
###Output
_____no_output_____
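###Markdown
Under the hood (for its default first-order setting), `ForwardDifference(grid_spacing=0.01)` estimates each derivative with the same forward-difference formula used earlier, $f'(\alpha) \approx \left(f(\alpha + h) - f(\alpha)\right)/h$ with $h = 0.01$, applied to the analytic expectation values.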
###Markdown
However, if you switch to estimating the expectation based on sampling (what would happen on a true device), the values can change a little bit. This means you now have an imperfect estimate:
###Code
sampled_expectation_calculation = tfq.layers.SampledExpectation(
differentiator=tfq.differentiators.ForwardDifference(grid_spacing=0.01))
sampled_expectation_calculation(my_circuit,
operators=pauli_x,
repetitions=500,
symbol_names=['alpha'],
symbol_values=[[my_alpha]])
###Output
_____no_output_____
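###Markdown
The deviation you see here is shot noise: each expectation is the mean of 500 $\pm 1$-valued measurements, so its standard error is at most $1/\sqrt{500} \approx 0.045$.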
###Markdown
This can quickly compound into a serious accuracy problem when it comes to gradients:
###Code
# Make input_points = [batch_size, 1] array.
input_points = np.linspace(0, 5, 200)[:, np.newaxis].astype(np.float32)
exact_outputs = expectation_calculation(my_circuit,
operators=pauli_x,
symbol_names=['alpha'],
symbol_values=input_points)
imperfect_outputs = sampled_expectation_calculation(my_circuit,
operators=pauli_x,
repetitions=500,
symbol_names=['alpha'],
symbol_values=input_points)
plt.title('Forward Pass Values')
plt.xlabel('$x$')
plt.ylabel('$f(x)$')
plt.plot(input_points, exact_outputs, label='Analytic')
plt.plot(input_points, imperfect_outputs, label='Sampled')
plt.legend()
# Gradients are a much different story.
values_tensor = tf.convert_to_tensor(input_points)
with tf.GradientTape() as g:
g.watch(values_tensor)
exact_outputs = expectation_calculation(my_circuit,
operators=pauli_x,
symbol_names=['alpha'],
symbol_values=values_tensor)
analytic_finite_diff_gradients = g.gradient(exact_outputs, values_tensor)
with tf.GradientTape() as g:
g.watch(values_tensor)
imperfect_outputs = sampled_expectation_calculation(
my_circuit,
operators=pauli_x,
repetitions=500,
symbol_names=['alpha'],
symbol_values=values_tensor)
sampled_finite_diff_gradients = g.gradient(imperfect_outputs, values_tensor)
plt.title('Gradient Values')
plt.xlabel('$x$')
plt.ylabel('$f^{\'}(x)$')
plt.plot(input_points, analytic_finite_diff_gradients, label='Analytic')
plt.plot(input_points, sampled_finite_diff_gradients, label='Sampled')
plt.legend()
###Output
_____no_output_____
###Markdown
Here you can see that although the finite-difference formula is fast at computing gradients in the analytical case, it is far too noisy for the sampling-based methods. More careful techniques must be used to ensure a good gradient can be calculated. Next you will look at a much slower technique that wouldn't be as well suited for analytical expectation gradient calculations, but performs much better in the real-world, sample-based case:
###Code
# A smarter differentiation scheme.
gradient_safe_sampled_expectation = tfq.layers.SampledExpectation(
differentiator=tfq.differentiators.ParameterShift())
with tf.GradientTape() as g:
g.watch(values_tensor)
imperfect_outputs = gradient_safe_sampled_expectation(
my_circuit,
operators=pauli_x,
repetitions=500,
symbol_names=['alpha'],
symbol_values=values_tensor)
sampled_param_shift_gradients = g.gradient(imperfect_outputs, values_tensor)
plt.title('Gradient Values')
plt.xlabel('$x$')
plt.ylabel('$f^{\'}(x)$')
plt.plot(input_points, analytic_finite_diff_gradients, label='Analytic')
plt.plot(input_points, sampled_param_shift_gradients, label='Sampled')
plt.legend()
###Output
_____no_output_____
###Markdown
From the above you can see that certain differentiators are best used for particular research scenarios. In general, the slower sample-based methods that are robust to device noise, etc., are great differentiators when testing or implementing algorithms in a more "real world" setting. Faster methods like finite difference are great when you are doing analytical calculations and want higher throughput, but aren't yet concerned with the device viability of your algorithm. 3. Multiple observables: Let's introduce a second observable and see how TensorFlow Quantum supports multiple observables for a single circuit.
###Code
pauli_z = cirq.Z(qubit)
pauli_z
###Output
_____no_output_____
###Markdown
If this observable is used with the same circuit as before, then you have $f_{2}(\alpha) = ⟨Y(\alpha)| Z | Y(\alpha)⟩ = \cos(\pi \alpha)$ and $f_{2}^{'}(\alpha) = -\pi \sin(\pi \alpha)$. Perform a quick check:
###Code
test_value = 0.
print('Finite difference:', my_grad(pauli_z, test_value))
print('Sin formula: ', -np.pi * np.sin(np.pi * test_value))
###Output
_____no_output_____
###Markdown
It's a match (close enough). Now if you define $g(\alpha) = f_{1}(\alpha) + f_{2}(\alpha)$ then $g'(\alpha) = f_{1}^{'}(\alpha) + f^{'}_{2}(\alpha)$. Defining more than one observable in TensorFlow Quantum to use along with a circuit is equivalent to adding on more terms to $g$. This means that the gradient of a particular symbol in a circuit is equal to the sum of the gradients with respect to each observable for that symbol applied to that circuit. This is compatible with TensorFlow gradient computation and backpropagation (where you give the sum of the gradients over all observables as the gradient for a particular symbol).
###Code
sum_of_outputs = tfq.layers.Expectation(
differentiator=tfq.differentiators.ForwardDifference(grid_spacing=0.01))
sum_of_outputs(my_circuit,
operators=[pauli_x, pauli_z],
symbol_names=['alpha'],
symbol_values=[[test_value]])
###Output
_____no_output_____
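###Markdown
At `test_value = 0` the two entries should come out close to $f_{1}(0) = \sin(0) = 0$ and $f_{2}(0) = \cos(0) = 1$.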
###Markdown
Here you see that the first entry is the expectation w.r.t. Pauli X, and the second is the expectation w.r.t. Pauli Z. Now when you take the gradient:
###Code
test_value_tensor = tf.convert_to_tensor([[test_value]])
with tf.GradientTape() as g:
g.watch(test_value_tensor)
outputs = sum_of_outputs(my_circuit,
operators=[pauli_x, pauli_z],
symbol_names=['alpha'],
symbol_values=test_value_tensor)
sum_of_gradients = g.gradient(outputs, test_value_tensor)
print(my_grad(pauli_x, test_value) + my_grad(pauli_z, test_value))
print(sum_of_gradients.numpy())
###Output
_____no_output_____
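###Markdown
Both printed values should be close to $g'(0) = f_{1}^{'}(0) + f_{2}^{'}(0) = \pi\cos(0) - \pi\sin(0) = \pi \approx 3.1416$.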
###Markdown
Here you have verified that the sum of the gradients for each observable is indeed the gradient of $\alpha$. This behavior is supported by all TensorFlow Quantum differentiators and plays a crucial role in the compatibility with the rest of TensorFlow. 4. Advanced usage: All differentiators that exist inside of TensorFlow Quantum subclass `tfq.differentiators.Differentiator`. To implement a differentiator, a user must implement one of two interfaces. The standard is to implement `get_gradient_circuits`, which tells the base class which circuits to measure to obtain an estimate of the gradient. Alternatively, you can overload `differentiate_analytic` and `differentiate_sampled`; the class `tfq.differentiators.Adjoint` takes this route. The following uses TensorFlow Quantum to implement the gradient of a circuit. You will use a small example of parameter shifting. Recall the circuit you defined above, $|\alpha⟩ = Y^{\alpha}|0⟩$. As before, you can define a function as the expectation value of this circuit against the $X$ observable, $f(\alpha) = ⟨\alpha|X|\alpha⟩$. Using [parameter shift rules](https://pennylane.ai/qml/glossary/parameter_shift.html), for this circuit, you can find that the derivative is $$\frac{\partial}{\partial \alpha} f(\alpha) = \frac{\pi}{2} f\left(\alpha + \frac{1}{2}\right) - \frac{\pi}{2} f\left(\alpha - \frac{1}{2}\right)$$ The `get_gradient_circuits` function returns the components of this derivative.
###Code
class MyDifferentiator(tfq.differentiators.Differentiator):
"""A Toy differentiator for <Y^alpha | X |Y^alpha>."""
def __init__(self):
pass
def get_gradient_circuits(self, programs, symbol_names, symbol_values):
"""Return circuits to compute gradients for given forward pass circuits.
Every gradient on a quantum computer can be computed via measurements
of transformed quantum circuits. Here, you implement a custom gradient
for a specific circuit. For a real differentiator, you will need to
implement this function in a more general way. See the differentiator
implementations in the TFQ library for examples.
"""
# The two terms in the derivative are the same circuit...
batch_programs = tf.stack([programs, programs], axis=1)
# ... with shifted parameter values.
shift = tf.constant(1/2)
forward = symbol_values + shift
backward = symbol_values - shift
batch_symbol_values = tf.stack([forward, backward], axis=1)
# Weights are the coefficients of the terms in the derivative.
num_program_copies = tf.shape(batch_programs)[0]
batch_weights = tf.tile(tf.constant([[[np.pi/2, -np.pi/2]]]),
[num_program_copies, 1, 1])
# The index map simply says which weights go with which circuits.
batch_mapper = tf.tile(
tf.constant([[[0, 1]]]), [num_program_copies, 1, 1])
return (batch_programs, symbol_names, batch_symbol_values,
batch_weights, batch_mapper)
###Output
_____no_output_____
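###Markdown
Before wiring this into TFQ, you can check the parameter-shift identity numerically with the `my_expectation` helper from earlier (a quick sanity check, not part of the original tutorial):
###Code
# pi/2 * f(alpha + 1/2) - pi/2 * f(alpha - 1/2) should equal pi * cos(pi * alpha).
shifted = (np.pi / 2) * (my_expectation(pauli_x, my_alpha + 0.5) -
                         my_expectation(pauli_x, my_alpha - 0.5))
print('Parameter shift:', shifted)
print('Cosine formula: ', np.pi * np.cos(np.pi * my_alpha))
###Output
_____no_output_____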
###Markdown
The `Differentiator` base class uses the components returned from `get_gradient_circuits` to calculate the derivative, as in the parameter shift formula you saw above. This new differentiator can now be used with existing `tfq.layer` objects:
###Code
custom_dif = MyDifferentiator()
custom_grad_expectation = tfq.layers.Expectation(differentiator=custom_dif)
# Now let's get the gradients with finite diff.
with tf.GradientTape() as g:
g.watch(values_tensor)
exact_outputs = expectation_calculation(my_circuit,
operators=[pauli_x],
symbol_names=['alpha'],
symbol_values=values_tensor)
analytic_finite_diff_gradients = g.gradient(exact_outputs, values_tensor)
# Now let's get the gradients with custom diff.
with tf.GradientTape() as g:
g.watch(values_tensor)
my_outputs = custom_grad_expectation(my_circuit,
operators=[pauli_x],
symbol_names=['alpha'],
symbol_values=values_tensor)
my_gradients = g.gradient(my_outputs, values_tensor)
plt.subplot(1, 2, 1)
plt.title('Exact Gradient')
plt.plot(input_points, analytic_finite_diff_gradients.numpy())
plt.xlabel('x')
plt.ylabel('f(x)')
plt.subplot(1, 2, 2)
plt.title('My Gradient')
plt.plot(input_points, my_gradients.numpy())
plt.xlabel('x')
###Output
_____no_output_____
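###Markdown
Since `custom_grad_expectation` is differentiable end to end, it can be dropped straight into a TensorFlow training loop. Below is a minimal sketch (the variable name, learning rate, and step count are arbitrary choices of ours, not from the original tutorial) that uses the custom gradient to take a few descent steps on $f(\alpha)$:
###Code
alpha_var = tf.Variable([[0.3]], dtype=tf.float32)
opt = tf.keras.optimizers.SGD(learning_rate=0.1)
for step in range(10):
    with tf.GradientTape() as tape:
        # Variables are watched automatically by the tape.
        out = custom_grad_expectation(my_circuit,
                                      operators=[pauli_x],
                                      symbol_names=['alpha'],
                                      symbol_values=alpha_var)
        loss = tf.reduce_sum(out)
    grads = tape.gradient(loss, [alpha_var])
    opt.apply_gradients(zip(grads, [alpha_var]))
print('alpha after a few descent steps:', alpha_var.numpy())
###Output
_____no_output_____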
###Markdown
This new differentiator can now be used to generate differentiable ops. Key Point: A differentiator that has been previously attached to an op must be refreshed before attaching to a new op, because a differentiator may only be attached to one op at a time.
###Code
# Create a noisy sample based expectation op.
expectation_sampled = tfq.get_sampled_expectation_op(
cirq.DensityMatrixSimulator(noise=cirq.depolarize(0.01)))
# Make it differentiable with your differentiator:
# Remember to refresh the differentiator before attaching the new op
custom_dif.refresh()
differentiable_op = custom_dif.generate_differentiable_op(
sampled_op=expectation_sampled)
# Prep op inputs.
circuit_tensor = tfq.convert_to_tensor([my_circuit])
op_tensor = tfq.convert_to_tensor([[pauli_x]])
single_value = tf.convert_to_tensor([[my_alpha]])
num_samples_tensor = tf.convert_to_tensor([[5000]])
with tf.GradientTape() as g:
g.watch(single_value)
forward_output = differentiable_op(circuit_tensor, ['alpha'], single_value,
op_tensor, num_samples_tensor)
my_gradients = g.gradient(forward_output, single_value)
print('---TFQ---')
print('Forward: ', forward_output.numpy())
print('Gradient:', my_gradients.numpy())
print('---Original---')
print('Forward: ', my_expectation(pauli_x, my_alpha))
print('Gradient:', my_grad(pauli_x, my_alpha))
###Output
_____no_output_____
###Markdown
Copyright 2020 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Calculate gradients This tutorial explores gradient calculation algorithms for the expectation values of quantum circuits. Calculating the gradient of the expectation value of a certain observable in a quantum circuit is an involved process. Unlike traditional machine learning transformations such as matrix multiplication or vector addition, expectation values of observables do not have analytic gradient formulas that are always easy to write down. As a result, there are different quantum gradient calculation methods that come in handy for different scenarios. This tutorial compares and contrasts two different differentiation schemes. Setup
###Code
!pip install tensorflow==2.1.0
###Output
_____no_output_____
###Markdown
Install TensorFlow Quantum:
###Code
!pip install tensorflow-quantum
###Output
_____no_output_____
###Markdown
Now import TensorFlow and the module dependencies:
###Code
import tensorflow as tf
import tensorflow_quantum as tfq
import cirq
import sympy
import numpy as np
# visualization tools
%matplotlib inline
import matplotlib.pyplot as plt
from cirq.contrib.svg import SVGCircuit
###Output
_____no_output_____
###Markdown
1. Preliminary: Let's make the notion of gradient calculation for quantum circuits a little more concrete. Suppose you have a parameterized circuit like this one:
###Code
qubit = cirq.GridQubit(0, 0)
my_circuit = cirq.Circuit(cirq.Y(qubit)**sympy.Symbol('alpha'))
SVGCircuit(my_circuit)
###Output
_____no_output_____
###Markdown
Along with an observable:
###Code
pauli_x = cirq.X(qubit)
pauli_x
###Output
_____no_output_____
###Markdown
Looking at this operator, you know that $⟨Y(\alpha)| X | Y(\alpha)⟩ = \sin(\pi \alpha)$.
###Code
def my_expectation(op, alpha):
"""Compute ⟨Y(alpha)| `op` | Y(alpha)⟩"""
params = {'alpha': alpha}
sim = cirq.Simulator()
final_state = sim.simulate(my_circuit, params).final_state
return op.expectation_from_wavefunction(final_state, {qubit: 0}).real
my_alpha = 0.3
print("Expectation=", my_expectation(pauli_x, my_alpha))
print("Sin Formula=", np.sin(np.pi * my_alpha))
###Output
_____no_output_____
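###Markdown
Note that this cell uses the older cirq API: `final_state` and `expectation_from_wavefunction` were later renamed `final_state_vector` and `expectation_from_state_vector` (as used in the newer copy of this tutorial above), so this version of the notebook requires a correspondingly old cirq release.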
###Markdown
and if you define $f_{1}(\alpha) = ⟨Y(\alpha)| X | Y(\alpha)⟩$ then $f_{1}^{'}(\alpha) = \pi \cos(\pi \alpha)$. Let's check this:
###Code
def my_grad(obs, alpha, eps=0.01):
    # Forward-difference approximation of the derivative, with O(eps) error.
    f_x = my_expectation(obs, alpha)
    f_x_prime = my_expectation(obs, alpha + eps)
    return ((f_x_prime - f_x) / eps).real
print('Finite difference:', my_grad(pauli_x, my_alpha))
print('Cosine formula: ', np.pi * np.cos(np.pi * my_alpha))
###Output
_____no_output_____
###Markdown
2. The need for a differentiator: With larger circuits, you won't always be so lucky to have a formula that precisely calculates the gradients of a given quantum circuit. In the event that a simple formula isn't enough to calculate the gradient, the `tfq.differentiators.Differentiator` class allows you to define algorithms for computing the gradients of your circuits. For instance you can recreate the above example in TensorFlow Quantum (TFQ) with:
###Code
expectation_calculation = tfq.layers.Expectation(
differentiator=tfq.differentiators.ForwardDifference(grid_spacing=0.01))
expectation_calculation(my_circuit,
operators=pauli_x,
symbol_names=['alpha'],
symbol_values=[[my_alpha]])
###Output
_____no_output_____
###Markdown
However, if you switch to estimating the expectation based on sampling (what would happen on a true device), the values can change a little bit. This means you now have an imperfect estimate:
###Code
sampled_expectation_calculation = tfq.layers.SampledExpectation(
differentiator=tfq.differentiators.ForwardDifference(grid_spacing=0.01))
sampled_expectation_calculation(my_circuit,
operators=pauli_x,
repetitions=500,
symbol_names=['alpha'],
symbol_values=[[my_alpha]])
###Output
_____no_output_____
###Markdown
This can quickly compound into a serious accuracy problem when it comes to gradients:
###Code
# Make input_points = [batch_size, 1] array.
input_points = np.linspace(0, 5, 200)[:, np.newaxis].astype(np.float32)
exact_outputs = expectation_calculation(my_circuit,
operators=pauli_x,
symbol_names=['alpha'],
symbol_values=input_points)
imperfect_outputs = sampled_expectation_calculation(my_circuit,
operators=pauli_x,
repetitions=500,
symbol_names=['alpha'],
symbol_values=input_points)
plt.title('Forward Pass Values')
plt.xlabel('$x$')
plt.ylabel('$f(x)$')
plt.plot(input_points, exact_outputs, label='Analytic')
plt.plot(input_points, imperfect_outputs, label='Sampled')
plt.legend()
# Gradients are a much different story.
values_tensor = tf.convert_to_tensor(input_points)
with tf.GradientTape() as g:
g.watch(values_tensor)
exact_outputs = expectation_calculation(my_circuit,
operators=pauli_x,
symbol_names=['alpha'],
symbol_values=values_tensor)
analytic_finite_diff_gradients = g.gradient(exact_outputs, values_tensor)
with tf.GradientTape() as g:
g.watch(values_tensor)
imperfect_outputs = sampled_expectation_calculation(
my_circuit,
operators=pauli_x,
repetitions=500,
symbol_names=['alpha'],
symbol_values=values_tensor)
sampled_finite_diff_gradients = g.gradient(imperfect_outputs, values_tensor)
plt.title('Gradient Values')
plt.xlabel('$x$')
plt.ylabel('$f^{\'}(x)$')
plt.plot(input_points, analytic_finite_diff_gradients, label='Analytic')
plt.plot(input_points, sampled_finite_diff_gradients, label='Sampled')
plt.legend()
###Output
_____no_output_____
###Markdown
Here you can see that although the finite-difference formula is fast at computing gradients in the analytical case, it is far too noisy for the sampling-based methods. More careful techniques must be used to ensure a good gradient can be calculated. Next you will look at a much slower technique that wouldn't be as well suited for analytical expectation gradient calculations, but performs much better in the real-world, sample-based case:
###Code
# A smarter differentiation scheme.
gradient_safe_sampled_expectation = tfq.layers.SampledExpectation(
differentiator=tfq.differentiators.ParameterShift())
with tf.GradientTape() as g:
g.watch(values_tensor)
imperfect_outputs = gradient_safe_sampled_expectation(
my_circuit,
operators=pauli_x,
repetitions=500,
symbol_names=['alpha'],
symbol_values=values_tensor)
sampled_param_shift_gradients = g.gradient(imperfect_outputs, values_tensor)
plt.title('Gradient Values')
plt.xlabel('$x$')
plt.ylabel('$f^{\'}(x)$')
plt.plot(input_points, analytic_finite_diff_gradients, label='Analytic')
plt.plot(input_points, sampled_param_shift_gradients, label='Sampled')
plt.legend()
###Output
_____no_output_____
###Markdown
From the above you can see that certain differentiators are best used for particular research scenarios. In general, the slower sample-based methods that are robust to device noise, etc., are great differentiators when testing or implementing algorithms in a more "real world" setting. Faster methods like finite difference are great when you are doing analytical calculations and want higher throughput, but aren't yet concerned with the device viability of your algorithm. 3. Multiple observables: Let's introduce a second observable and see how TensorFlow Quantum supports multiple observables for a single circuit.
###Code
pauli_z = cirq.Z(qubit)
pauli_z
###Output
_____no_output_____
###Markdown
If this observable is used with the same circuit as before, then you have $f_{2}(\alpha) = ⟨Y(\alpha)| Z | Y(\alpha)⟩ = \cos(\pi \alpha)$ and $f_{2}^{'}(\alpha) = -\pi \sin(\pi \alpha)$. Perform a quick check:
###Code
test_value = 0.
print('Finite difference:', my_grad(pauli_z, test_value))
print('Sin formula: ', -np.pi * np.sin(np.pi * test_value))
###Output
_____no_output_____
###Markdown
It's a match (close enough). Now if you define $g(\alpha) = f_{1}(\alpha) + f_{2}(\alpha)$ then $g'(\alpha) = f_{1}^{'}(\alpha) + f^{'}_{2}(\alpha)$. Defining more than one observable in TensorFlow Quantum to use along with a circuit is equivalent to adding on more terms to $g$. This means that the gradient of a particular symbol in a circuit is equal to the sum of the gradients with respect to each observable for that symbol applied to that circuit. This is compatible with TensorFlow gradient computation and backpropagation (where you give the sum of the gradients over all observables as the gradient for a particular symbol).
###Code
sum_of_outputs = tfq.layers.Expectation(
differentiator=tfq.differentiators.ForwardDifference(grid_spacing=0.01))
sum_of_outputs(my_circuit,
operators=[pauli_x, pauli_z],
symbol_names=['alpha'],
symbol_values=[[test_value]])
###Output
_____no_output_____
###Markdown
Here you see that the first entry is the expectation w.r.t. Pauli X, and the second is the expectation w.r.t. Pauli Z. Now when you take the gradient:
###Code
test_value_tensor = tf.convert_to_tensor([[test_value]])
with tf.GradientTape() as g:
g.watch(test_value_tensor)
outputs = sum_of_outputs(my_circuit,
operators=[pauli_x, pauli_z],
symbol_names=['alpha'],
symbol_values=test_value_tensor)
sum_of_gradients = g.gradient(outputs, test_value_tensor)
print(my_grad(pauli_x, test_value) + my_grad(pauli_z, test_value))
print(sum_of_gradients.numpy())
###Output
_____no_output_____
###Markdown
Here you have verified that the sum of the gradients for each observable is indeed the gradient of $\alpha$. This behavior is supported by all TensorFlow Quantum differentiators and plays a crucial role in the compatibility with the rest of TensorFlow. 4. Advanced usage: Here you will learn how to define your own custom differentiation routines for quantum circuits. All differentiators that exist inside of TensorFlow Quantum subclass `tfq.differentiators.Differentiator`. A differentiator must implement `differentiate_analytic` and `differentiate_sampled`. The following uses TensorFlow Quantum constructs to implement the closed-form solution from the first part of this tutorial.
###Code
class MyDifferentiator(tfq.differentiators.Differentiator):
"""A Toy differentiator for <Y^alpha | X |Y^alpha>."""
def __init__(self):
pass
@tf.function
def _compute_gradient(self, symbol_values):
"""Compute the gradient based on symbol_values."""
# f(x) = sin(pi * x)
# f'(x) = pi * cos(pi * x)
return tf.cast(tf.cos(symbol_values * np.pi) * np.pi, tf.float32)
@tf.function
def differentiate_analytic(self, programs, symbol_names, symbol_values,
pauli_sums, forward_pass_vals, grad):
"""Specify how to differentiate a circuit with analytical expectation.
This is called at graph runtime by TensorFlow. `differentiate_analytic`
should calculate the gradient of a batch of circuits and return it
formatted as indicated below. See
`tfq.differentiators.ForwardDifference` for an example.
Args:
programs: `tf.Tensor` of strings with shape [batch_size] containing
the string representations of the circuits to be executed.
symbol_names: `tf.Tensor` of strings with shape [n_params], which
is used to specify the order in which the values in
`symbol_values` should be placed inside of the circuits in
`programs`.
symbol_values: `tf.Tensor` of real numbers with shape
[batch_size, n_params] specifying parameter values to resolve
into the circuits specified by programs, following the ordering
dictated by `symbol_names`.
pauli_sums: `tf.Tensor` of strings with shape [batch_size, n_ops]
containing the string representation of the operators that will
be used on all of the circuits in the expectation calculations.
forward_pass_vals: `tf.Tensor` of real numbers with shape
[batch_size, n_ops] containing the output of the forward pass
through the op you are differentiating.
grad: `tf.Tensor` of real numbers with shape [batch_size, n_ops]
representing the gradient backpropagated to the output of the
op you are differentiating through.
Returns:
A `tf.Tensor` with the same shape as `symbol_values` representing
the gradient backpropagated to the `symbol_values` input of the op
you are differentiating through.
"""
        # This toy ignores `programs` and `pauli_sums`: the gradient depends
        # only on `symbol_values`, and the chain rule is applied by
        # multiplying with the incoming backpropagated `grad`.
return self._compute_gradient(symbol_values) * grad
@tf.function
def differentiate_sampled(self, programs, symbol_names, symbol_values,
pauli_sums, num_samples, forward_pass_vals, grad):
"""Specify how to differentiate a circuit with sampled expectation.
This is called at graph runtime by TensorFlow. `differentiate_sampled`
should calculate the gradient of a batch of circuits and return it
formatted as indicated below. See
`tfq.differentiators.ForwardDifference` for an example.
Args:
programs: `tf.Tensor` of strings with shape [batch_size] containing
the string representations of the circuits to be executed.
symbol_names: `tf.Tensor` of strings with shape [n_params], which
is used to specify the order in which the values in
`symbol_values` should be placed inside of the circuits in
`programs`.
symbol_values: `tf.Tensor` of real numbers with shape
[batch_size, n_params] specifying parameter values to resolve
into the circuits specified by programs, following the ordering
dictated by `symbol_names`.
pauli_sums: `tf.Tensor` of strings with shape [batch_size, n_ops]
containing the string representation of the operators that will
be used on all of the circuits in the expectation calculations.
num_samples: `tf.Tensor` of positive integers representing the
number of samples per term in each term of pauli_sums used
during the forward pass.
forward_pass_vals: `tf.Tensor` of real numbers with shape
[batch_size, n_ops] containing the output of the forward pass
through the op you are differentiating.
grad: `tf.Tensor` of real numbers with shape [batch_size, n_ops]
representing the gradient backpropagated to the output of the
op you are differentiating through.
Returns:
A `tf.Tensor` with the same shape as `symbol_values` representing
the gradient backpropagated to the `symbol_values` input of the op
you are differentiating through.
"""
return self._compute_gradient(symbol_values) * grad
###Output
_____no_output_____
###Markdown
This new differentiator can now be used with existing `tfq.layers` objects:
###Code
custom_dif = MyDifferentiator()
custom_grad_expectation = tfq.layers.Expectation(differentiator=custom_dif)
# Now let's get the gradients with finite diff.
with tf.GradientTape() as g:
g.watch(values_tensor)
exact_outputs = expectation_calculation(my_circuit,
operators=[pauli_x],
symbol_names=['alpha'],
symbol_values=values_tensor)
analytic_finite_diff_gradients = g.gradient(exact_outputs, values_tensor)
# Now let's get the gradients with custom diff.
with tf.GradientTape() as g:
g.watch(values_tensor)
my_outputs = custom_grad_expectation(my_circuit,
operators=[pauli_x],
symbol_names=['alpha'],
symbol_values=values_tensor)
my_gradients = g.gradient(my_outputs, values_tensor)
plt.subplot(1, 2, 1)
plt.title('Exact Gradient')
plt.plot(input_points, analytic_finite_diff_gradients.numpy())
plt.xlabel('x')
plt.ylabel('$f^{\'}(x)$')
plt.subplot(1, 2, 2)
plt.title('My Gradient')
plt.plot(input_points, my_gradients.numpy())
plt.xlabel('x')
###Output
_____no_output_____
###Markdown
This new differentiator can now be used to generate differentiable ops. Key Point: a differentiator that has been previously attached to an op must be refreshed before attaching to a new op, because a differentiator may only be attached to one op at a time.
###Code
# Create a noisy sample based expectation op.
expectation_sampled = tfq.get_sampled_expectation_op(
cirq.DensityMatrixSimulator(noise=cirq.depolarize(0.01)))
# Make it differentiable with your differentiator:
# Remember to refresh the differentiator before attaching the new op
custom_dif.refresh()
differentiable_op = custom_dif.generate_differentiable_op(
sampled_op=expectation_sampled)
# Prep op inputs.
circuit_tensor = tfq.convert_to_tensor([my_circuit])
op_tensor = tfq.convert_to_tensor([[pauli_x]])
single_value = tf.convert_to_tensor([[my_alpha]])
num_samples_tensor = tf.convert_to_tensor([[1000]])
with tf.GradientTape() as g:
g.watch(single_value)
forward_output = differentiable_op(circuit_tensor, ['alpha'], single_value,
op_tensor, num_samples_tensor)
my_gradients = g.gradient(forward_output, single_value)
print('---TFQ---')
print('Forward: ', forward_output.numpy())
print('Gradient:', my_gradients.numpy())
print('---Original---')
print('Forward: ', my_expectation(pauli_x, my_alpha))
print('Gradient:', my_grad(pauli_x, my_alpha))
###Output
_____no_output_____
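###Markdown
As a rough sanity check on these numbers: with $\alpha = 0.3$, expect Forward $\approx \sin(0.3\pi) \approx 0.809$ from both rows (the TFQ row up to shot noise from the 1000 samples and a small attenuation from the 1% depolarizing channel), and Gradient $\approx \pi\cos(0.3\pi) \approx 1.847$ from both the custom closed form and the finite-difference `my_grad`.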
###Markdown
Copyright 2020 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Calculate gradients View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook This tutorial explores gradient calculation algorithms for the expectation values of quantum circuits.Calculating the gradient of the expectation value of a certain observable in a quantum circuit is an involved process. Expectation values of observables do not have the luxury of having analytic gradient formulas that are always easy to write down—unlike traditional machine learning transformations such as matrix multiplication or vector addition that have analytic gradient formulas which are easy to write down. As a result, there are different quantum gradient calculation methods that come in handy for different scenarios. This tutorial compares and contrasts two different differentiation schemes. Setup
###Code
try:
%tensorflow_version 2.x
except Exception:
pass
###Output
_____no_output_____
###Markdown
Install TensorFlow Quantum:Note: This may require restarting the Colab runtime (*Runtime > Restart Runtime*).
###Code
!pip install tensorflow-quantum
###Output
_____no_output_____
###Markdown
Now import TensorFlow and the module dependencies:
###Code
import tensorflow as tf
import tensorflow_quantum as tfq
import cirq
import sympy
import numpy as np
# visualization tools
%matplotlib inline
import matplotlib.pyplot as plt
from cirq.contrib.svg import SVGCircuit
###Output
_____no_output_____
###Markdown
1. PreliminaryLet's make the notion of gradient calculation for quantum circuits a little more concrete. Suppose you have a parameterized circuit like this one:
###Code
qubit = cirq.GridQubit(0, 0)
my_circuit = cirq.Circuit(cirq.Y(qubit)**sympy.Symbol('alpha'))
SVGCircuit(my_circuit)
###Output
_____no_output_____
###Markdown
Along with an observable:
###Code
pauli_x = cirq.X(qubit)
pauli_x
###Output
_____no_output_____
###Markdown
Looking at this operator you know that $⟨Y(\alpha)| X | Y(\alpha)⟩ = \sin(\pi \alpha)$
###Code
def my_expectation(op, alpha):
"""Compute ⟨Y(alpha)| `op` | Y(alpha)⟩"""
params = {'alpha': alpha}
sim = cirq.Simulator()
final_state = sim.simulate(my_circuit, params).final_state
return op.expectation_from_wavefunction(final_state, {qubit: 0}).real
my_alpha = 0.3
print("Expectation=", my_expectation(pauli_x, my_alpha))
print("Sin Formula=", np.sin(np.pi * my_alpha))
###Output
_____no_output_____
###Markdown
and if you define $f_{1}(\alpha) = ⟨Y(\alpha)| X | Y(\alpha)⟩$ then $f_{1}^{'}(\alpha) = \pi \cos(\pi \alpha)$. Let's check this:
###Code
def my_grad(obs, alpha, eps=0.01):
grad = 0
f_x = my_expectation(obs, alpha)
f_x_prime = my_expectation(obs, alpha + eps)
return ((f_x_prime - f_x) / eps).real
print('Finite difference:', my_grad(pauli_x, my_alpha))
print('Cosine formula: ', np.pi * np.cos(np.pi * my_alpha))
###Output
_____no_output_____
###Markdown
2. The need for a differentiatorWith larger circuits, you won't always be so lucky to have a formula that precisely calculates the gradients of a given quantum circuit. In the event that a simple formula isn't enough to calculate the gradient, the `tfq.differentiators.Differentiator` class allows you to define algorithms for computing the gradients of your circuits. For instance you can recreate the above example in TensorFlow Quantum (TFQ) with:
###Code
expectation_calculation = tfq.layers.Expectation(
differentiator=tfq.differentiators.ForwardDifference(grid_spacing=0.01))
expectation_calculation(my_circuit,
operators=pauli_x,
symbol_names=['alpha'],
symbol_values=[[my_alpha]])
###Output
_____no_output_____
###Markdown
However, if you switch to estimating expectation based on sampling (what would happen on a true device) the values can change a little bit. This means you now have an imperfect estimate:
###Code
sampled_expectation_calculation = tfq.layers.SampledExpectation(
differentiator=tfq.differentiators.ForwardDifference(grid_spacing=0.01))
sampled_expectation_calculation(my_circuit,
operators=pauli_x,
repetitions=500,
symbol_names=['alpha'],
symbol_values=[[my_alpha]])
###Output
_____no_output_____
###Markdown
This can quickly compound into a serious accuracy problem when it comes to gradients:
###Code
# Make input_points = [batch_size, 1] array.
input_points = np.linspace(0, 5, 200)[:, np.newaxis].astype(np.float32)
exact_outputs = expectation_calculation(my_circuit,
operators=pauli_x,
symbol_names=['alpha'],
symbol_values=input_points)
imperfect_outputs = sampled_expectation_calculation(my_circuit,
operators=pauli_x,
repetitions=500,
symbol_names=['alpha'],
symbol_values=input_points)
plt.title('Forward Pass Values')
plt.xlabel('$x$')
plt.ylabel('$f(x)$')
plt.plot(input_points, exact_outputs, label='Analytic')
plt.plot(input_points, imperfect_outputs, label='Sampled')
plt.legend()
# Gradients are a much different story.
values_tensor = tf.convert_to_tensor(input_points)
with tf.GradientTape() as g:
g.watch(values_tensor)
exact_outputs = expectation_calculation(my_circuit,
operators=pauli_x,
symbol_names=['alpha'],
symbol_values=values_tensor)
analytic_finite_diff_gradients = g.gradient(exact_outputs, values_tensor)
with tf.GradientTape() as g:
g.watch(values_tensor)
imperfect_outputs = sampled_expectation_calculation(
my_circuit,
operators=pauli_x,
repetitions=500,
symbol_names=['alpha'],
symbol_values=values_tensor)
sampled_finite_diff_gradients = g.gradient(imperfect_outputs, values_tensor)
plt.title('Gradient Values')
plt.xlabel('$x$')
plt.ylabel('$f^{\'}(x)$')
plt.plot(input_points, analytic_finite_diff_gradients, label='Analytic')
plt.plot(input_points, sampled_finite_diff_gradients, label='Sampled')
plt.legend()
###Output
_____no_output_____
###Markdown
Here you can see that although the finite difference formula is fast to compute the gradients themselves in the analytical case, when it came to the sampling based methods it was far too noisy. More careful techniques must be used to ensure a good gradient can be calculated. Next you will look at a much slower technique that wouldn't be as well suited for analytical expectation gradient calculations, but does perform much better in the real-world sample based case:
###Code
# A smarter differentiation scheme.
gradient_safe_sampled_expectation = tfq.layers.SampledExpectation(
differentiator=tfq.differentiators.ParameterShift())
with tf.GradientTape() as g:
g.watch(values_tensor)
imperfect_outputs = gradient_safe_sampled_expectation(
my_circuit,
operators=pauli_x,
repetitions=500,
symbol_names=['alpha'],
symbol_values=values_tensor)
sampled_param_shift_gradients = g.gradient(imperfect_outputs, values_tensor)
plt.title('Gradient Values')
plt.xlabel('$x$')
plt.ylabel('$f^{\'}(x)$')
plt.plot(input_points, analytic_finite_diff_gradients, label='Analytic')
plt.plot(input_points, sampled_param_shift_gradients, label='Sampled')
plt.legend()
###Output
_____no_output_____
###Markdown
From the above you can see that certain differentiators are best used for particular research scenarios. In general, the slower sample-based methods that are robust to device noise, etc., are great differentiators when testing or implementing algorithms in a more "real world" setting. Faster methods like finite difference are great for analytical calculations and you want higher throughput, but aren't yet concerned with the device viability of your algorithm. 3. Multiple observablesLet's introduce a second observable and see how TensorFlow Quantum supports multiple observables for a single circuit.
###Code
pauli_z = cirq.Z(qubit)
pauli_z
###Output
_____no_output_____
###Markdown
If this observable is used with the same circuit as before, then you have $f_{2}(\alpha) = ⟨Y(\alpha)| Z | Y(\alpha)⟩ = \cos(\pi \alpha)$ and $f_{2}^{'}(\alpha) = -\pi \sin(\pi \alpha)$. Perform a quick check:
###Code
test_value = 0.
print('Finite difference:', my_grad(pauli_z, test_value))
print('Sin formula: ', -np.pi * np.sin(np.pi * test_value))
###Output
_____no_output_____
###Markdown
It's a match (close enough).Now if you define $g(\alpha) = f_{1}(\alpha) + f_{2}(\alpha)$ then $g'(\alpha) = f_{1}^{'}(\alpha) + f^{'}_{2}(\alpha)$. Defining more than one observable in TensorFlow Quantum to use along with a circuit is equivalent to adding on more terms to $g$.This means that the gradient of a particular symbol in a circuit is equal to the sum of the gradients with regards to each observable for that symbol applied to that circuit. This is compatible with TensorFlow gradient taking and backpropagation (where you give the sum of the gradients over all observables as the gradient for a particular symbol).
###Code
sum_of_outputs = tfq.layers.Expectation(
differentiator=tfq.differentiators.ForwardDifference(grid_spacing=0.01))
sum_of_outputs(my_circuit,
operators=[pauli_x, pauli_z],
symbol_names=['alpha'],
symbol_values=[[test_value]])
###Output
_____no_output_____
###Markdown
Here you see the first entry is the expectation w.r.t Pauli X, and the second is the expectation w.r.t Pauli Z. Now when you take the gradient:
###Code
test_value_tensor = tf.convert_to_tensor([[test_value]])
with tf.GradientTape() as g:
g.watch(test_value_tensor)
outputs = sum_of_outputs(my_circuit,
operators=[pauli_x, pauli_z],
symbol_names=['alpha'],
symbol_values=test_value_tensor)
sum_of_gradients = g.gradient(outputs, test_value_tensor)
print(my_grad(pauli_x, test_value) + my_grad(pauli_z, test_value))
print(sum_of_gradients.numpy())
###Output
_____no_output_____
###Markdown
Here you have verified that the sum of the gradients for each observable is indeed the gradient of $\alpha$. This behavior is supported by all TensorFlow Quantum differentiators and plays a crucial role in the compatibility with the rest of TensorFlow. 4. Advanced usageHere you will learn how to define your own custom differentiation routines for quantum circuits.All differentiators that exist inside of TensorFlow Quantum subclass `tfq.differentiators.Differentiator`. A differentiator must implement `differentiate_analytic` and `differentiate_sampled`.The following uses TensorFlow Quantum constructs to implement the closed form solution from the first part of this tutorial.
###Code
class MyDifferentiator(tfq.differentiators.Differentiator):
"""A Toy differentiator for <Y^alpha | X |Y^alpha>."""
def __init__(self):
pass
@tf.function
def _compute_gradient(self, symbol_values):
"""Compute the gradient based on symbol_values."""
# f(x) = sin(pi * x)
# f'(x) = pi * cos(pi * x)
return tf.cast(tf.cos(symbol_values * np.pi) * np.pi, tf.float32)
@tf.function
def differentiate_analytic(self, programs, symbol_names, symbol_values,
pauli_sums, forward_pass_vals, grad):
"""Specify how to differentiate a circuit with analytical expectation.
This is called at graph runtime by TensorFlow. `differentiate_analytic`
should calculate the gradient of a batch of circuits and return it
formatted as indicated below. See
`tfq.differentiators.ForwardDifference` for an example.
Args:
programs: `tf.Tensor` of strings with shape [batch_size] containing
the string representations of the circuits to be executed.
symbol_names: `tf.Tensor` of strings with shape [n_params], which
is used to specify the order in which the values in
`symbol_values` should be placed inside of the circuits in
`programs`.
symbol_values: `tf.Tensor` of real numbers with shape
[batch_size, n_params] specifying parameter values to resolve
into the circuits specified by programs, following the ordering
dictated by `symbol_names`.
pauli_sums: `tf.Tensor` of strings with shape [batch_size, n_ops]
containing the string representation of the operators that will
be used on all of the circuits in the expectation calculations.
forward_pass_vals: `tf.Tensor` of real numbers with shape
[batch_size, n_ops] containing the output of the forward pass
through the op you are differentiating.
grad: `tf.Tensor` of real numbers with shape [batch_size, n_ops]
representing the gradient backpropagated to the output of the
op you are differentiating through.
Returns:
A `tf.Tensor` with the same shape as `symbol_values` representing
the gradient backpropagated to the `symbol_values` input of the op
you are differentiating through.
"""
# Computing gradients just based off of symbol_values.
return self._compute_gradient(symbol_values) * grad
@tf.function
def differentiate_sampled(self, programs, symbol_names, symbol_values,
pauli_sums, num_samples, forward_pass_vals, grad):
"""Specify how to differentiate a circuit with sampled expectation.
This is called at graph runtime by TensorFlow. `differentiate_sampled`
should calculate the gradient of a batch of circuits and return it
formatted as indicated below. See
`tfq.differentiators.ForwardDifference` for an example.
Args:
programs: `tf.Tensor` of strings with shape [batch_size] containing
the string representations of the circuits to be executed.
symbol_names: `tf.Tensor` of strings with shape [n_params], which
is used to specify the order in which the values in
`symbol_values` should be placed inside of the circuits in
`programs`.
symbol_values: `tf.Tensor` of real numbers with shape
[batch_size, n_params] specifying parameter values to resolve
into the circuits specified by programs, following the ordering
dictated by `symbol_names`.
pauli_sums: `tf.Tensor` of strings with shape [batch_size, n_ops]
containing the string representation of the operators that will
be used on all of the circuits in the expectation calculations.
num_samples: `tf.Tensor` of positive integers representing the
number of samples per term in each term of pauli_sums used
during the forward pass.
forward_pass_vals: `tf.Tensor` of real numbers with shape
[batch_size, n_ops] containing the output of the forward pass
through the op you are differentiating.
grad: `tf.Tensor` of real numbers with shape [batch_size, n_ops]
representing the gradient backpropagated to the output of the
op you are differentiating through.
Returns:
A `tf.Tensor` with the same shape as `symbol_values` representing
the gradient backpropagated to the `symbol_values` input of the op
you are differentiating through.
"""
return self._compute_gradient(symbol_values) * grad
###Output
_____no_output_____
###Markdown
This new differentiator can now be used with existing `tfq.layer` objects:
###Code
custom_dif = MyDifferentiator()
custom_grad_expectation = tfq.layers.Expectation(differentiator=custom_dif)
# Now let's get the gradients with finite diff.
with tf.GradientTape() as g:
g.watch(values_tensor)
exact_outputs = expectation_calculation(my_circuit,
operators=[pauli_x],
symbol_names=['alpha'],
symbol_values=values_tensor)
analytic_finite_diff_gradients = g.gradient(exact_outputs, values_tensor)
# Now let's get the gradients with custom diff.
with tf.GradientTape() as g:
g.watch(values_tensor)
my_outputs = custom_grad_expectation(my_circuit,
operators=[pauli_x],
symbol_names=['alpha'],
symbol_values=values_tensor)
my_gradients = g.gradient(my_outputs, values_tensor)
plt.subplot(1, 2, 1)
plt.title('Exact Gradient')
plt.plot(input_points, analytic_finite_diff_gradients.numpy())
plt.xlabel('x')
plt.ylabel('f(x)')
plt.subplot(1, 2, 2)
plt.title('My Gradient')
plt.plot(input_points, my_gradients.numpy())
plt.xlabel('x')
###Output
_____no_output_____
###Markdown
This new differentiator can now be used to generate differentiable ops.Key Point: A differentiator that has been previously attached to an op must be refreshed before attaching to a new op, because a differentiator may only be attached to one op at a time.
###Code
# Create a noisy sample based expectation op.
expectation_sampled = tfq.get_sampled_expectation_op(
cirq.DensityMatrixSimulator(noise=cirq.depolarize(0.01)))
# Make it differentiable with your differentiator:
# Remember to refresh the differentiator before attaching the new op
custom_dif.refresh()
differentiable_op = custom_dif.generate_differentiable_op(
sampled_op=expectation_sampled)
# Prep op inputs.
circuit_tensor = tfq.convert_to_tensor([my_circuit])
op_tensor = tfq.convert_to_tensor([[pauli_x]])
single_value = tf.convert_to_tensor([[my_alpha]])
num_samples_tensor = tf.convert_to_tensor([[1000]])
with tf.GradientTape() as g:
g.watch(single_value)
forward_output = differentiable_op(circuit_tensor, ['alpha'], single_value,
op_tensor, num_samples_tensor)
my_gradients = g.gradient(forward_output, single_value)
print('---TFQ---')
print('Foward: ', forward_output.numpy())
print('Gradient:', my_gradients.numpy())
print('---Original---')
print('Forward: ', my_expectation(pauli_x, my_alpha))
print('Gradient:', my_grad(pauli_x, my_alpha))
###Output
_____no_output_____
###Markdown
Copyright 2020 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Calculate gradients View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook This tutorial explores gradient calculation algorithms for the expectation values of quantum circuits.Calculating the gradient of the expectation value of a certain observable in a quantum circuit is an involved process. Expectation values of observables do not have the luxury of having analytic gradient formulas that are always easy to write down—unlike traditional machine learning transformations such as matrix multiplication or vector addition that have analytic gradient formulas which are easy to write down. As a result, there are different quantum gradient calculation methods that come in handy for different scenarios. This tutorial compares and contrasts two different differentiation schemes. Setup
###Code
!pip install tensorflow==2.3.1
###Output
_____no_output_____
###Markdown
Install TensorFlow Quantum:
###Code
!pip install tensorflow-quantum
###Output
_____no_output_____
###Markdown
Now import TensorFlow and the module dependencies:
###Code
import tensorflow as tf
import tensorflow_quantum as tfq
import cirq
import sympy
import numpy as np
# visualization tools
%matplotlib inline
import matplotlib.pyplot as plt
from cirq.contrib.svg import SVGCircuit
###Output
_____no_output_____
###Markdown
1. PreliminaryLet's make the notion of gradient calculation for quantum circuits a little more concrete. Suppose you have a parameterized circuit like this one:
###Code
qubit = cirq.GridQubit(0, 0)
my_circuit = cirq.Circuit(cirq.Y(qubit)**sympy.Symbol('alpha'))
SVGCircuit(my_circuit)
###Output
_____no_output_____
###Markdown
Along with an observable:
###Code
pauli_x = cirq.X(qubit)
pauli_x
###Output
_____no_output_____
###Markdown
Looking at this operator you know that $⟨Y(\alpha)| X | Y(\alpha)⟩ = \sin(\pi \alpha)$
###Code
def my_expectation(op, alpha):
"""Compute ⟨Y(alpha)| `op` | Y(alpha)⟩"""
params = {'alpha': alpha}
sim = cirq.Simulator()
final_state_vector = sim.simulate(my_circuit, params).final_state_vector
return op.expectation_from_state_vector(final_state_vector, {qubit: 0}).real
my_alpha = 0.3
print("Expectation=", my_expectation(pauli_x, my_alpha))
print("Sin Formula=", np.sin(np.pi * my_alpha))
###Output
_____no_output_____
###Markdown
and if you define $f_{1}(\alpha) = ⟨Y(\alpha)| X | Y(\alpha)⟩$ then $f_{1}^{'}(\alpha) = \pi \cos(\pi \alpha)$. Let's check this:
###Code
def my_grad(obs, alpha, eps=0.01):
grad = 0
f_x = my_expectation(obs, alpha)
f_x_prime = my_expectation(obs, alpha + eps)
return ((f_x_prime - f_x) / eps).real
print('Finite difference:', my_grad(pauli_x, my_alpha))
print('Cosine formula: ', np.pi * np.cos(np.pi * my_alpha))
###Output
_____no_output_____
###Markdown
2. The need for a differentiatorWith larger circuits, you won't always be so lucky to have a formula that precisely calculates the gradients of a given quantum circuit. In the event that a simple formula isn't enough to calculate the gradient, the `tfq.differentiators.Differentiator` class allows you to define algorithms for computing the gradients of your circuits. For instance you can recreate the above example in TensorFlow Quantum (TFQ) with:
###Code
expectation_calculation = tfq.layers.Expectation(
differentiator=tfq.differentiators.ForwardDifference(grid_spacing=0.01))
expectation_calculation(my_circuit,
operators=pauli_x,
symbol_names=['alpha'],
symbol_values=[[my_alpha]])
###Output
_____no_output_____
###Markdown
However, if you switch to estimating expectation based on sampling (what would happen on a true device) the values can change a little bit. This means you now have an imperfect estimate:
###Code
sampled_expectation_calculation = tfq.layers.SampledExpectation(
differentiator=tfq.differentiators.ForwardDifference(grid_spacing=0.01))
sampled_expectation_calculation(my_circuit,
operators=pauli_x,
repetitions=500,
symbol_names=['alpha'],
symbol_values=[[my_alpha]])
###Output
_____no_output_____
###Markdown
This can quickly compound into a serious accuracy problem when it comes to gradients:
###Code
# Make input_points = [batch_size, 1] array.
input_points = np.linspace(0, 5, 200)[:, np.newaxis].astype(np.float32)
exact_outputs = expectation_calculation(my_circuit,
operators=pauli_x,
symbol_names=['alpha'],
symbol_values=input_points)
imperfect_outputs = sampled_expectation_calculation(my_circuit,
operators=pauli_x,
repetitions=500,
symbol_names=['alpha'],
symbol_values=input_points)
plt.title('Forward Pass Values')
plt.xlabel('$x$')
plt.ylabel('$f(x)$')
plt.plot(input_points, exact_outputs, label='Analytic')
plt.plot(input_points, imperfect_outputs, label='Sampled')
plt.legend()
# Gradients are a much different story.
values_tensor = tf.convert_to_tensor(input_points)
with tf.GradientTape() as g:
g.watch(values_tensor)
exact_outputs = expectation_calculation(my_circuit,
operators=pauli_x,
symbol_names=['alpha'],
symbol_values=values_tensor)
analytic_finite_diff_gradients = g.gradient(exact_outputs, values_tensor)
with tf.GradientTape() as g:
g.watch(values_tensor)
imperfect_outputs = sampled_expectation_calculation(
my_circuit,
operators=pauli_x,
repetitions=500,
symbol_names=['alpha'],
symbol_values=values_tensor)
sampled_finite_diff_gradients = g.gradient(imperfect_outputs, values_tensor)
plt.title('Gradient Values')
plt.xlabel('$x$')
plt.ylabel('$f^{\'}(x)$')
plt.plot(input_points, analytic_finite_diff_gradients, label='Analytic')
plt.plot(input_points, sampled_finite_diff_gradients, label='Sampled')
plt.legend()
###Output
_____no_output_____
###Markdown
Here you can see that although the finite difference formula is fast to compute the gradients themselves in the analytical case, when it came to the sampling based methods it was far too noisy. More careful techniques must be used to ensure a good gradient can be calculated. Next you will look at a much slower technique that wouldn't be as well suited for analytical expectation gradient calculations, but does perform much better in the real-world sample based case:
###Code
# A smarter differentiation scheme.
gradient_safe_sampled_expectation = tfq.layers.SampledExpectation(
differentiator=tfq.differentiators.ParameterShift())
with tf.GradientTape() as g:
g.watch(values_tensor)
imperfect_outputs = gradient_safe_sampled_expectation(
my_circuit,
operators=pauli_x,
repetitions=500,
symbol_names=['alpha'],
symbol_values=values_tensor)
sampled_param_shift_gradients = g.gradient(imperfect_outputs, values_tensor)
plt.title('Gradient Values')
plt.xlabel('$x$')
plt.ylabel('$f^{\'}(x)$')
plt.plot(input_points, analytic_finite_diff_gradients, label='Analytic')
plt.plot(input_points, sampled_param_shift_gradients, label='Sampled')
plt.legend()
###Output
_____no_output_____
###Markdown
From the above you can see that certain differentiators are best used for particular research scenarios. In general, the slower sample-based methods that are robust to device noise, etc., are great differentiators when testing or implementing algorithms in a more "real world" setting. Faster methods like finite difference are great for analytical calculations and you want higher throughput, but aren't yet concerned with the device viability of your algorithm. 3. Multiple observablesLet's introduce a second observable and see how TensorFlow Quantum supports multiple observables for a single circuit.
###Code
pauli_z = cirq.Z(qubit)
pauli_z
###Output
_____no_output_____
###Markdown
If this observable is used with the same circuit as before, then you have $f_{2}(\alpha) = ⟨Y(\alpha)| Z | Y(\alpha)⟩ = \cos(\pi \alpha)$ and $f_{2}^{'}(\alpha) = -\pi \sin(\pi \alpha)$. Perform a quick check:
###Code
test_value = 0.
print('Finite difference:', my_grad(pauli_z, test_value))
print('Sin formula: ', -np.pi * np.sin(np.pi * test_value))
###Output
_____no_output_____
###Markdown
It's a match (close enough).Now if you define $g(\alpha) = f_{1}(\alpha) + f_{2}(\alpha)$ then $g'(\alpha) = f_{1}^{'}(\alpha) + f^{'}_{2}(\alpha)$. Defining more than one observable in TensorFlow Quantum to use along with a circuit is equivalent to adding on more terms to $g$.This means that the gradient of a particular symbol in a circuit is equal to the sum of the gradients with regards to each observable for that symbol applied to that circuit. This is compatible with TensorFlow gradient taking and backpropagation (where you give the sum of the gradients over all observables as the gradient for a particular symbol).
###Code
sum_of_outputs = tfq.layers.Expectation(
differentiator=tfq.differentiators.ForwardDifference(grid_spacing=0.01))
sum_of_outputs(my_circuit,
operators=[pauli_x, pauli_z],
symbol_names=['alpha'],
symbol_values=[[test_value]])
###Output
_____no_output_____
###Markdown
Here you see the first entry is the expectation w.r.t Pauli X, and the second is the expectation w.r.t Pauli Z. Now when you take the gradient:
###Code
test_value_tensor = tf.convert_to_tensor([[test_value]])
with tf.GradientTape() as g:
g.watch(test_value_tensor)
outputs = sum_of_outputs(my_circuit,
operators=[pauli_x, pauli_z],
symbol_names=['alpha'],
symbol_values=test_value_tensor)
sum_of_gradients = g.gradient(outputs, test_value_tensor)
print(my_grad(pauli_x, test_value) + my_grad(pauli_z, test_value))
print(sum_of_gradients.numpy())
###Output
_____no_output_____
###Markdown
Here you have verified that the sum of the gradients for each observable is indeed the gradient of $\alpha$. This behavior is supported by all TensorFlow Quantum differentiators and plays a crucial role in the compatibility with the rest of TensorFlow. 4. Advanced usageHere you will learn how to define your own custom differentiation routines for quantum circuits.All differentiators that exist inside of TensorFlow Quantum subclass `tfq.differentiators.Differentiator`. A differentiator must implement `differentiate_analytic` and `differentiate_sampled`.The following uses TensorFlow Quantum constructs to implement the closed form solution from the first part of this tutorial.
###Code
class MyDifferentiator(tfq.differentiators.Differentiator):
"""A Toy differentiator for <Y^alpha | X |Y^alpha>."""
def __init__(self):
pass
@tf.function
def get_gradient_circuits(self, programs, symbol_names, symbol_values):
"""Return circuits to compute gradients for given forward pass circuits.
When implementing a gradient, it is often useful to describe the
intermediate computations in terms of transformed versions of the input
circuits. The details are beyond the scope of this tutorial, but interested
users should check out the differentiator implementations in the TFQ library
for examples.
"""
raise NotImplementedError(
"Gradient circuits are not implemented in this tutorial.")
@tf.function
def _compute_gradient(self, symbol_values):
"""Compute the gradient based on symbol_values."""
# f(x) = sin(pi * x)
# f'(x) = pi * cos(pi * x)
return tf.cast(tf.cos(symbol_values * np.pi) * np.pi, tf.float32)
@tf.function
def differentiate_analytic(self, programs, symbol_names, symbol_values,
pauli_sums, forward_pass_vals, grad):
"""Specify how to differentiate a circuit with analytical expectation.
This is called at graph runtime by TensorFlow. `differentiate_analytic`
should calculate the gradient of a batch of circuits and return it
formatted as indicated below. See
`tfq.differentiators.ForwardDifference` for an example.
Args:
programs: `tf.Tensor` of strings with shape [batch_size] containing
the string representations of the circuits to be executed.
symbol_names: `tf.Tensor` of strings with shape [n_params], which
is used to specify the order in which the values in
`symbol_values` should be placed inside of the circuits in
`programs`.
symbol_values: `tf.Tensor` of real numbers with shape
[batch_size, n_params] specifying parameter values to resolve
into the circuits specified by programs, following the ordering
dictated by `symbol_names`.
pauli_sums: `tf.Tensor` of strings with shape [batch_size, n_ops]
containing the string representation of the operators that will
be used on all of the circuits in the expectation calculations.
forward_pass_vals: `tf.Tensor` of real numbers with shape
[batch_size, n_ops] containing the output of the forward pass
through the op you are differentiating.
grad: `tf.Tensor` of real numbers with shape [batch_size, n_ops]
representing the gradient backpropagated to the output of the
op you are differentiating through.
Returns:
A `tf.Tensor` with the same shape as `symbol_values` representing
the gradient backpropagated to the `symbol_values` input of the op
you are differentiating through.
"""
# Computing gradients just based off of symbol_values.
return self._compute_gradient(symbol_values) * grad
@tf.function
def differentiate_sampled(self, programs, symbol_names, symbol_values,
pauli_sums, num_samples, forward_pass_vals, grad):
"""Specify how to differentiate a circuit with sampled expectation.
This is called at graph runtime by TensorFlow. `differentiate_sampled`
should calculate the gradient of a batch of circuits and return it
formatted as indicated below. See
`tfq.differentiators.ForwardDifference` for an example.
Args:
programs: `tf.Tensor` of strings with shape [batch_size] containing
the string representations of the circuits to be executed.
symbol_names: `tf.Tensor` of strings with shape [n_params], which
is used to specify the order in which the values in
`symbol_values` should be placed inside of the circuits in
`programs`.
symbol_values: `tf.Tensor` of real numbers with shape
[batch_size, n_params] specifying parameter values to resolve
into the circuits specified by programs, following the ordering
dictated by `symbol_names`.
pauli_sums: `tf.Tensor` of strings with shape [batch_size, n_ops]
containing the string representation of the operators that will
be used on all of the circuits in the expectation calculations.
num_samples: `tf.Tensor` of positive integers representing the
number of samples per term in each term of pauli_sums used
during the forward pass.
forward_pass_vals: `tf.Tensor` of real numbers with shape
[batch_size, n_ops] containing the output of the forward pass
through the op you are differentiating.
grad: `tf.Tensor` of real numbers with shape [batch_size, n_ops]
representing the gradient backpropagated to the output of the
op you are differentiating through.
Returns:
A `tf.Tensor` with the same shape as `symbol_values` representing
the gradient backpropagated to the `symbol_values` input of the op
you are differentiating through.
"""
return self._compute_gradient(symbol_values) * grad
###Output
_____no_output_____
###Markdown
This new differentiator can now be used with existing `tfq.layer` objects:
###Code
custom_dif = MyDifferentiator()
custom_grad_expectation = tfq.layers.Expectation(differentiator=custom_dif)
# Now let's get the gradients with finite diff.
with tf.GradientTape() as g:
g.watch(values_tensor)
exact_outputs = expectation_calculation(my_circuit,
operators=[pauli_x],
symbol_names=['alpha'],
symbol_values=values_tensor)
analytic_finite_diff_gradients = g.gradient(exact_outputs, values_tensor)
# Now let's get the gradients with custom diff.
with tf.GradientTape() as g:
g.watch(values_tensor)
my_outputs = custom_grad_expectation(my_circuit,
operators=[pauli_x],
symbol_names=['alpha'],
symbol_values=values_tensor)
my_gradients = g.gradient(my_outputs, values_tensor)
plt.subplot(1, 2, 1)
plt.title('Exact Gradient')
plt.plot(input_points, analytic_finite_diff_gradients.numpy())
plt.xlabel('x')
plt.ylabel('f(x)')
plt.subplot(1, 2, 2)
plt.title('My Gradient')
plt.plot(input_points, my_gradients.numpy())
plt.xlabel('x')
###Output
_____no_output_____
###Markdown
This new differentiator can now be used to generate differentiable ops.Key Point: A differentiator that has been previously attached to an op must be refreshed before attaching to a new op, because a differentiator may only be attached to one op at a time.
###Code
# Create a noisy sample based expectation op.
expectation_sampled = tfq.get_sampled_expectation_op(
cirq.DensityMatrixSimulator(noise=cirq.depolarize(0.01)))
# Make it differentiable with your differentiator:
# Remember to refresh the differentiator before attaching the new op
custom_dif.refresh()
differentiable_op = custom_dif.generate_differentiable_op(
sampled_op=expectation_sampled)
# Prep op inputs.
circuit_tensor = tfq.convert_to_tensor([my_circuit])
op_tensor = tfq.convert_to_tensor([[pauli_x]])
single_value = tf.convert_to_tensor([[my_alpha]])
num_samples_tensor = tf.convert_to_tensor([[1000]])
with tf.GradientTape() as g:
g.watch(single_value)
forward_output = differentiable_op(circuit_tensor, ['alpha'], single_value,
op_tensor, num_samples_tensor)
my_gradients = g.gradient(forward_output, single_value)
print('---TFQ---')
print('Foward: ', forward_output.numpy())
print('Gradient:', my_gradients.numpy())
print('---Original---')
print('Forward: ', my_expectation(pauli_x, my_alpha))
print('Gradient:', my_grad(pauli_x, my_alpha))
###Output
_____no_output_____
###Markdown
Copyright 2020 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Calculate gradients View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook This tutorial explores gradient calculation algorithms for the expectation values of quantum circuits.Calculating the gradient of the expectation value of a certain observable in a quantum circuit is an involved process. Expectation values of observables do not have the luxury of having analytic gradient formulas that are always easy to write down—unlike traditional machine learning transformations such as matrix multiplication or vector addition that have analytic gradient formulas which are easy to write down. As a result, there are different quantum gradient calculation methods that come in handy for different scenarios. This tutorial compares and contrasts two different differentiation schemes. Setup
###Code
try:
%tensorflow_version 2.x
except Exception:
pass
###Output
_____no_output_____
###Markdown
Install TensorFlow Quantum:
###Code
!pip install tensorflow-quantum
###Output
_____no_output_____
###Markdown
Now import TensorFlow and the module dependencies:
###Code
import tensorflow as tf
import tensorflow_quantum as tfq
import cirq
import sympy
import numpy as np
# visualization tools
%matplotlib inline
import matplotlib.pyplot as plt
from cirq.contrib.svg import SVGCircuit
###Output
_____no_output_____
###Markdown
1. PreliminaryLet's make the notion of gradient calculation for quantum circuits a little more concrete. Suppose you have a parameterized circuit like this one:
###Code
qubit = cirq.GridQubit(0, 0)
my_circuit = cirq.Circuit(cirq.Y(qubit)**sympy.Symbol('alpha'))
SVGCircuit(my_circuit)
###Output
_____no_output_____
###Markdown
Along with an observable:
###Code
pauli_x = cirq.X(qubit)
pauli_x
###Output
_____no_output_____
###Markdown
Looking at this operator you know that $⟨Y(\alpha)| X | Y(\alpha)⟩ = \sin(\pi \alpha)$
###Code
def my_expectation(op, alpha):
"""Compute ⟨Y(alpha)| `op` | Y(alpha)⟩"""
params = {'alpha': alpha}
sim = cirq.Simulator()
final_state = sim.simulate(my_circuit, params).final_state
return op.expectation_from_wavefunction(final_state, {qubit: 0}).real
my_alpha = 0.3
print("Expectation=", my_expectation(pauli_x, my_alpha))
print("Sin Formula=", np.sin(np.pi * my_alpha))
###Output
_____no_output_____
###Markdown
and if you define $f_{1}(\alpha) = ⟨Y(\alpha)| X | Y(\alpha)⟩$ then $f_{1}^{'}(\alpha) = \pi \cos(\pi \alpha)$. Let's check this:
###Code
def my_grad(obs, alpha, eps=0.01):
grad = 0
f_x = my_expectation(obs, alpha)
f_x_prime = my_expectation(obs, alpha + eps)
return ((f_x_prime - f_x) / eps).real
print('Finite difference:', my_grad(pauli_x, my_alpha))
print('Cosine formula: ', np.pi * np.cos(np.pi * my_alpha))
###Output
_____no_output_____
###Markdown
2. The need for a differentiatorWith larger circuits, you won't always be so lucky to have a formula that precisely calculates the gradients of a given quantum circuit. In the event that a simple formula isn't enough to calculate the gradient, the `tfq.differentiators.Differentiator` class allows you to define algorithms for computing the gradients of your circuits. For instance you can recreate the above example in TensorFlow Quantum (TFQ) with:
###Code
expectation_calculation = tfq.layers.Expectation(
differentiator=tfq.differentiators.ForwardDifference(grid_spacing=0.01))
expectation_calculation(my_circuit,
operators=pauli_x,
symbol_names=['alpha'],
symbol_values=[[my_alpha]])
###Output
_____no_output_____
###Markdown
However, if you switch to estimating expectation based on sampling (what would happen on a true device) the values can change a little bit. This means you now have an imperfect estimate:
###Code
sampled_expectation_calculation = tfq.layers.SampledExpectation(
differentiator=tfq.differentiators.ForwardDifference(grid_spacing=0.01))
sampled_expectation_calculation(my_circuit,
operators=pauli_x,
repetitions=500,
symbol_names=['alpha'],
symbol_values=[[my_alpha]])
###Output
_____no_output_____
###Markdown
This can quickly compound into a serious accuracy problem when it comes to gradients:
###Code
# Make input_points = [batch_size, 1] array.
input_points = np.linspace(0, 5, 200)[:, np.newaxis].astype(np.float32)
exact_outputs = expectation_calculation(my_circuit,
operators=pauli_x,
symbol_names=['alpha'],
symbol_values=input_points)
imperfect_outputs = sampled_expectation_calculation(my_circuit,
operators=pauli_x,
repetitions=500,
symbol_names=['alpha'],
symbol_values=input_points)
plt.title('Forward Pass Values')
plt.xlabel('$x$')
plt.ylabel('$f(x)$')
plt.plot(input_points, exact_outputs, label='Analytic')
plt.plot(input_points, imperfect_outputs, label='Sampled')
plt.legend()
# Gradients are a much different story.
values_tensor = tf.convert_to_tensor(input_points)
with tf.GradientTape() as g:
g.watch(values_tensor)
exact_outputs = expectation_calculation(my_circuit,
operators=pauli_x,
symbol_names=['alpha'],
symbol_values=values_tensor)
analytic_finite_diff_gradients = g.gradient(exact_outputs, values_tensor)
with tf.GradientTape() as g:
g.watch(values_tensor)
imperfect_outputs = sampled_expectation_calculation(
my_circuit,
operators=pauli_x,
repetitions=500,
symbol_names=['alpha'],
symbol_values=values_tensor)
sampled_finite_diff_gradients = g.gradient(imperfect_outputs, values_tensor)
plt.title('Gradient Values')
plt.xlabel('$x$')
plt.ylabel('$f^{\'}(x)$')
plt.plot(input_points, analytic_finite_diff_gradients, label='Analytic')
plt.plot(input_points, sampled_finite_diff_gradients, label='Sampled')
plt.legend()
###Output
_____no_output_____
###Markdown
Here you can see that although the finite difference formula is fast to compute the gradients themselves in the analytical case, when it came to the sampling based methods it was far too noisy. More careful techniques must be used to ensure a good gradient can be calculated. Next you will look at a much slower technique that wouldn't be as well suited for analytical expectation gradient calculations, but does perform much better in the real-world sample based case:
###Code
# A smarter differentiation scheme.
gradient_safe_sampled_expectation = tfq.layers.SampledExpectation(
differentiator=tfq.differentiators.ParameterShift())
with tf.GradientTape() as g:
g.watch(values_tensor)
imperfect_outputs = gradient_safe_sampled_expectation(
my_circuit,
operators=pauli_x,
repetitions=500,
symbol_names=['alpha'],
symbol_values=values_tensor)
sampled_param_shift_gradients = g.gradient(imperfect_outputs, values_tensor)
plt.title('Gradient Values')
plt.xlabel('$x$')
plt.ylabel('$f^{\'}(x)$')
plt.plot(input_points, analytic_finite_diff_gradients, label='Analytic')
plt.plot(input_points, sampled_param_shift_gradients, label='Sampled')
plt.legend()
###Output
_____no_output_____
###Markdown
From the above you can see that certain differentiators are best used for particular research scenarios. In general, the slower sample-based methods that are robust to device noise, etc., are great differentiators when testing or implementing algorithms in a more "real world" setting. Faster methods like finite difference are great for analytical calculations and you want higher throughput, but aren't yet concerned with the device viability of your algorithm. 3. Multiple observablesLet's introduce a second observable and see how TensorFlow Quantum supports multiple observables for a single circuit.
###Code
pauli_z = cirq.Z(qubit)
pauli_z
###Output
_____no_output_____
###Markdown
If this observable is used with the same circuit as before, then you have $f_{2}(\alpha) = ⟨Y(\alpha)| Z | Y(\alpha)⟩ = \cos(\pi \alpha)$ and $f_{2}^{'}(\alpha) = -\pi \sin(\pi \alpha)$. Perform a quick check:
###Code
test_value = 0.
print('Finite difference:', my_grad(pauli_z, test_value))
print('Sin formula: ', -np.pi * np.sin(np.pi * test_value))
###Output
_____no_output_____
###Markdown
It's a match (close enough).Now if you define $g(\alpha) = f_{1}(\alpha) + f_{2}(\alpha)$ then $g'(\alpha) = f_{1}^{'}(\alpha) + f^{'}_{2}(\alpha)$. Defining more than one observable in TensorFlow Quantum to use along with a circuit is equivalent to adding on more terms to $g$.This means that the gradient of a particular symbol in a circuit is equal to the sum of the gradients with regards to each observable for that symbol applied to that circuit. This is compatible with TensorFlow gradient taking and backpropagation (where you give the sum of the gradients over all observables as the gradient for a particular symbol).
###Code
sum_of_outputs = tfq.layers.Expectation(
differentiator=tfq.differentiators.ForwardDifference(grid_spacing=0.01))
sum_of_outputs(my_circuit,
operators=[pauli_x, pauli_z],
symbol_names=['alpha'],
symbol_values=[[test_value]])
###Output
_____no_output_____
###Markdown
Here you see the first entry is the expectation w.r.t Pauli X, and the second is the expectation w.r.t Pauli Z. Now when you take the gradient:
###Code
test_value_tensor = tf.convert_to_tensor([[test_value]])
with tf.GradientTape() as g:
g.watch(test_value_tensor)
outputs = sum_of_outputs(my_circuit,
operators=[pauli_x, pauli_z],
symbol_names=['alpha'],
symbol_values=test_value_tensor)
sum_of_gradients = g.gradient(outputs, test_value_tensor)
print(my_grad(pauli_x, test_value) + my_grad(pauli_z, test_value))
print(sum_of_gradients.numpy())
###Output
_____no_output_____
###Markdown
Here you have verified that the sum of the gradients for each observable is indeed the gradient of $\alpha$. This behavior is supported by all TensorFlow Quantum differentiators and plays a crucial role in the compatibility with the rest of TensorFlow. 4. Advanced usageHere you will learn how to define your own custom differentiation routines for quantum circuits.All differentiators that exist inside of TensorFlow Quantum subclass `tfq.differentiators.Differentiator`. A differentiator must implement `differentiate_analytic` and `differentiate_sampled`.The following uses TensorFlow Quantum constructs to implement the closed form solution from the first part of this tutorial.
###Code
class MyDifferentiator(tfq.differentiators.Differentiator):
"""A Toy differentiator for <Y^alpha | X |Y^alpha>."""
def __init__(self):
pass
@tf.function
def _compute_gradient(self, symbol_values):
"""Compute the gradient based on symbol_values."""
# f(x) = sin(pi * x)
# f'(x) = pi * cos(pi * x)
return tf.cast(tf.cos(symbol_values * np.pi) * np.pi, tf.float32)
@tf.function
def differentiate_analytic(self, programs, symbol_names, symbol_values,
pauli_sums, forward_pass_vals, grad):
"""Specify how to differentiate a circuit with analytical expectation.
This is called at graph runtime by TensorFlow. `differentiate_analytic`
should calculate the gradient of a batch of circuits and return it
formatted as indicated below. See
`tfq.differentiators.ForwardDifference` for an example.
Args:
programs: `tf.Tensor` of strings with shape [batch_size] containing
the string representations of the circuits to be executed.
symbol_names: `tf.Tensor` of strings with shape [n_params], which
is used to specify the order in which the values in
`symbol_values` should be placed inside of the circuits in
`programs`.
symbol_values: `tf.Tensor` of real numbers with shape
[batch_size, n_params] specifying parameter values to resolve
into the circuits specified by programs, following the ordering
dictated by `symbol_names`.
pauli_sums: `tf.Tensor` of strings with shape [batch_size, n_ops]
containing the string representation of the operators that will
be used on all of the circuits in the expectation calculations.
forward_pass_vals: `tf.Tensor` of real numbers with shape
[batch_size, n_ops] containing the output of the forward pass
through the op you are differentiating.
grad: `tf.Tensor` of real numbers with shape [batch_size, n_ops]
representing the gradient backpropagated to the output of the
op you are differentiating through.
Returns:
A `tf.Tensor` with the same shape as `symbol_values` representing
the gradient backpropagated to the `symbol_values` input of the op
you are differentiating through.
"""
# Computing gradients just based off of symbol_values.
return self._compute_gradient(symbol_values) * grad
@tf.function
def differentiate_sampled(self, programs, symbol_names, symbol_values,
pauli_sums, num_samples, forward_pass_vals, grad):
"""Specify how to differentiate a circuit with sampled expectation.
This is called at graph runtime by TensorFlow. `differentiate_sampled`
should calculate the gradient of a batch of circuits and return it
formatted as indicated below. See
`tfq.differentiators.ForwardDifference` for an example.
Args:
programs: `tf.Tensor` of strings with shape [batch_size] containing
the string representations of the circuits to be executed.
symbol_names: `tf.Tensor` of strings with shape [n_params], which
is used to specify the order in which the values in
`symbol_values` should be placed inside of the circuits in
`programs`.
symbol_values: `tf.Tensor` of real numbers with shape
[batch_size, n_params] specifying parameter values to resolve
into the circuits specified by programs, following the ordering
dictated by `symbol_names`.
pauli_sums: `tf.Tensor` of strings with shape [batch_size, n_ops]
containing the string representation of the operators that will
be used on all of the circuits in the expectation calculations.
num_samples: `tf.Tensor` of positive integers representing the
number of samples per term in each term of pauli_sums used
during the forward pass.
forward_pass_vals: `tf.Tensor` of real numbers with shape
[batch_size, n_ops] containing the output of the forward pass
through the op you are differentiating.
grad: `tf.Tensor` of real numbers with shape [batch_size, n_ops]
representing the gradient backpropagated to the output of the
op you are differentiating through.
Returns:
A `tf.Tensor` with the same shape as `symbol_values` representing
the gradient backpropagated to the `symbol_values` input of the op
you are differentiating through.
"""
return self._compute_gradient(symbol_values) * grad
###Output
_____no_output_____
###Markdown
This new differentiator can now be used with existing `tfq.layer` objects:
###Code
custom_dif = MyDifferentiator()
custom_grad_expectation = tfq.layers.Expectation(differentiator=custom_dif)
# Now let's get the gradients with finite diff.
with tf.GradientTape() as g:
g.watch(values_tensor)
exact_outputs = expectation_calculation(my_circuit,
operators=[pauli_x],
symbol_names=['alpha'],
symbol_values=values_tensor)
analytic_finite_diff_gradients = g.gradient(exact_outputs, values_tensor)
# Now let's get the gradients with custom diff.
with tf.GradientTape() as g:
g.watch(values_tensor)
my_outputs = custom_grad_expectation(my_circuit,
operators=[pauli_x],
symbol_names=['alpha'],
symbol_values=values_tensor)
my_gradients = g.gradient(my_outputs, values_tensor)
plt.subplot(1, 2, 1)
plt.title('Exact Gradient')
plt.plot(input_points, analytic_finite_diff_gradients.numpy())
plt.xlabel('x')
plt.ylabel('f(x)')
plt.subplot(1, 2, 2)
plt.title('My Gradient')
plt.plot(input_points, my_gradients.numpy())
plt.xlabel('x')
###Output
_____no_output_____
###Markdown
This new differentiator can now be used to generate differentiable ops.Key Point: A differentiator that has been previously attached to an op must be refreshed before attaching to a new op, because a differentiator may only be attached to one op at a time.
###Code
# Create a noisy sample based expectation op.
expectation_sampled = tfq.get_sampled_expectation_op(
cirq.DensityMatrixSimulator(noise=cirq.depolarize(0.01)))
# Make it differentiable with your differentiator:
# Remember to refresh the differentiator before attaching the new op
custom_dif.refresh()
differentiable_op = custom_dif.generate_differentiable_op(
sampled_op=expectation_sampled)
# Prep op inputs.
circuit_tensor = tfq.convert_to_tensor([my_circuit])
op_tensor = tfq.convert_to_tensor([[pauli_x]])
single_value = tf.convert_to_tensor([[my_alpha]])
num_samples_tensor = tf.convert_to_tensor([[1000]])
with tf.GradientTape() as g:
g.watch(single_value)
forward_output = differentiable_op(circuit_tensor, ['alpha'], single_value,
op_tensor, num_samples_tensor)
my_gradients = g.gradient(forward_output, single_value)
print('---TFQ---')
print('Forward: ', forward_output.numpy())
print('Gradient:', my_gradients.numpy())
print('---Original---')
print('Forward: ', my_expectation(pauli_x, my_alpha))
print('Gradient:', my_grad(pauli_x, my_alpha))
###Output
_____no_output_____
|
Notebooks/ObservationPlotting.ipynb
|
###Markdown
Plotting Observations * **Products used:** [ga_ls8c_wofs_2](https://explorer.digitalearth.africa/ga_ls8c_wofs_2),[ga_ls8c_wofs_2_summary ](https://explorer.digitalearth.africa/ga_ls8c_wofs_2_summary) BackgroundTBA DescriptionThis notebook explains how you can perform validation analysis for the WOfS derived product using a collected ground-truth dataset and window-based sampling. The notebook demonstrates how to:1. Plot the count of clear observations in each month for the validation points *** Getting startedTo run this analysis, run all the cells in the notebook, starting with the "Load packages" cell.After finishing the analysis, you can modify some values in the "Analysis parameters" cell and re-run the analysis to load WOFLs for a different location or time period. Load packagesImport Python packages that are used for the analysis.
###Code
%matplotlib inline
import time
import datacube
from datacube.utils import masking, geometry
import sys
import os
import dask
import rasterio, rasterio.features
import xarray
import glob
import numpy as np
import pandas as pd
import seaborn as sn
import geopandas as gpd
import subprocess as sp
import matplotlib.pyplot as plt
import scipy, scipy.ndimage
import warnings
warnings.filterwarnings("ignore") #this will suppress the warnings for multiple UTM zones in your AOI
sys.path.append("../Scripts")
from rasterio.mask import mask
from geopandas import GeoSeries, GeoDataFrame
from shapely.geometry import Point
from deafrica_plotting import map_shapefile,display_map, rgb
from deafrica_spatialtools import xr_rasterize
from deafrica_datahandling import wofs_fuser, mostcommon_crs,load_ard,deepcopy
from deafrica_dask import create_local_dask_cluster
#for parallelisation
from multiprocessing import Pool, Manager
import multiprocessing as mp
from tqdm import tqdm
sn.set()
sn.set_theme(color_codes=True)
###Output
_____no_output_____
###Markdown
Connect to the datacubeActivate the datacube database, which provides functionality for loading and displaying stored Earth observation data.
###Code
dc = datacube.Datacube(app='WOfS_accuracy')
###Output
_____no_output_____
###Markdown
Analysis parameters To analyse validation points collected by each partner institution, we need to obtain WOfS surface water observation data that corresponds with the labelled input data locations. Loading Dataset 1. Load validation points for each partner institutions as a list of observations each has a location and month * Load the cleaned validation file as ESRI `shapefile` * Inspect the shapefile
###Code
#Read the final table of analysis for each AEZ zone
CEO = '../Supplementary_data/Validation/Refined/NewAnalysis/Continent/WOfS_processed/Intitutions/Point_Based/AEZs/ValidPoints/Africa_ValidationPoints.csv'
input_data = pd.read_csv(CEO,delimiter=",")
input_data=input_data.drop(['Unnamed: 0'], axis=1)
input_data.head()
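# transform('count') broadcasts each month's clear-observation count back onto every row of that month, so it can be plotted per point.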
input_data['CL_OBS_count'] = input_data.groupby('MONTH')['CLEAR_OBS'].transform('count')
input_data
###Output
_____no_output_____
###Markdown
Demonstrating Clear Observation in Each Month
###Code
import calendar
input_data['MONTH'] = input_data['MONTH'].apply(lambda x: calendar.month_abbr[x])
input_data.MONTH=input_data.MONTH.str.capitalize() #capitalizes the series
d={i:e for e,i in enumerate(calendar.month_abbr)} #maps month abbreviation -> month number
input_data.reindex(input_data.MONTH.map(d).sort_values().index) #display rows in calendar order (result is not assigned back)
input_data = input_data.rename(columns={'CL_OBS_count':'Number of Valid Points','MONTH':'Month'})
#In order to plot the count of clear observation for each month in each AEZ and examine the seasonality
Months = ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec']
g = sn.catplot(x='Month', y='Number of Valid Points', kind='bar', data=input_data, order=Months); #you can add palette='Set1' to change the color scheme
g.fig.suptitle('Africa');
#sn.histplot(data=input_data,x='MONTH',hue='Clear_Obs', multiple='stack',bins=25).set_title('Number of WOfS Clear Observations in Sahel AEZ');
###Output
_____no_output_____
###Markdown
Working on histogram
###Code
#Reading the classification table extracted from 0.9 thresholding of the frequency for each AEZ extracted from the WOfS_Validation_Africa notebook
#SummaryTable = '../Supplementary_data/Validation/Refined/Continent/AEZs_Assessment/AEZs_Classification/Africa_WOfS_Validation_Class_Eastern_T0.9.csv'
SummaryTable = '../Supplementary_data/Validation/Refined/Continent/AEZ_count/AEZs_Classification/Africa_WOfS_Validation_Class_Southern_T0.9.csv'
CLF = pd.read_csv(SummaryTable,delimiter=",")
CLF
CLF=CLF.drop(['Unnamed: 0','MONTH','ACTUAL','CLEAR_OBS','CLASS_WET','Actual_Sum','PREDICTION','WOfS_Sum', 'Actual_count','WOfS_count','geometry','WOfS_Wet_Sum','WOfS_Clear_Sum'], axis=1)
CLF
count = CLF.groupby('CLASS',as_index=False,sort=False).last()
count
sn.set()
sn.set_theme(color_codes=True)
ax1 = sn.displot(CLF, x="CEO_FREQUENCY", hue="CLASS");
ax2 = sn.displot(CLF, x="WOfS_FREQUENCY", hue="CLASS");
ax2._legend.remove()
sn.relplot(x="WOfS_FREQUENCY", y="CEO_FREQUENCY", hue="CLASS",size='WOfS_FREQUENCY',sizes=(10,150), data=CLF);
sn.displot(CLF, x="CEO_FREQUENCY", hue="CLASS", kind='kde');
sn.histplot(CLF, x="CEO_FREQUENCY");
sn.displot(CLF, x="CEO_FREQUENCY", kind='kde');
Sample_ID = CLF[['CLASS','CEO_FREQUENCY','WOfS_FREQUENCY']]
sn.pairplot(Sample_ID, hue='CLASS', height=2.5);
sn.pairplot(Sample_ID,hue='CLASS',diag_kind='kde',kind='scatter',palette='husl');
print(datacube.__version__)
###Output
_____no_output_____
###Markdown
*** Additional information**License:** The code in this notebook is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0). Digital Earth Africa data is licensed under the [Creative Commons by Attribution 4.0](https://creativecommons.org/licenses/by/4.0/) license.**Contact:** If you need assistance, please post a question on the [Open Data Cube Slack channel](http://slack.opendatacube.org/) or on the [GIS Stack Exchange](https://gis.stackexchange.com/questions/ask?tags=open-data-cube) using the `open-data-cube` tag (you can view previously asked questions [here](https://gis.stackexchange.com/questions/tagged/open-data-cube)).If you would like to report an issue with this notebook, you can file one on [Github](https://github.com/digitalearthafrica/deafrica-sandbox-notebooks).**Last modified:** September 2020**Compatible datacube version:** TagsBrowse all available tags on the DE Africa User Guide's [Tags Index](https://) (placeholder as this does not exist yet)
###Code
**Tags**: :index:`WOfS`, :index:`fractional cover`, :index:`deafrica_plotting`, :index:`deafrica_datahandling`, :index:`display_map`, :index:`wofs_fuser`, :index:`WOFL`, :index:`masking`
###Output
_____no_output_____
|
pacbook/_build/jupyter_execute/PAC Interactive Inclusion Analysis.ipynb
|
###Markdown
Corporate PAC Inclusion Analysis Does your company's Political Action Committee have a blind spot with respect to inclusion?
###Code
# Execute this cell to load the notebook's style sheet, then ignore it
from IPython.core.display import HTML
css_file = 'PAC.css'
HTML(open(css_file, "r").read())
import pandas as pd
import numpy as np
from ipywidgets import interact, interactive, fixed, interact_manual, Layout
import ipywidgets as widgets
import matplotlib.pyplot as plt
from matplotlib.ticker import FuncFormatter
from IPython.display import Javascript, display, HTML
races = ['Asian', 'Black', 'Hispanic', 'Other Races', 'White']
genders = ['Male', 'Female']
pac_selection = widgets.Select(
options=['BlackRock', 'Leidos', 'Google'],
value='BlackRock',
description='Select PAC:',
disabled=False,
)
pac_text = widgets.Text(value="BlackRock")
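# Widget callback: when the PAC selection changes, record the new value in
# pac_text and re-execute the cells below so every chart refreshes.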
def handle_change(change):
if change['type'] == 'change' and change['name'] == 'value':
print("changed to %s" % change['new'])
pac_text.value = change['new']
display(Javascript('IPython.notebook.execute_cells_below()'))
pac_selection.observe(handle_change)
display(pac_selection)
def display_race_charts(input_df, main_title):
#strip Total column from dataframe
df = input_df.iloc[:,:-1]
count = len(df.index)
fig, axes = plt.subplots(1,count, figsize=(12,3))
if count > 1:
for ax, idx in zip(axes, df.index):
ax.pie(df.loc[idx], labels=df.columns, radius=1, autopct='%1.1f%%', explode=(0, 0, 0, 0, 0.1), shadow=True)
ax.set(ylabel='', title=idx, aspect='equal')
else:
ax = df.iloc[0].plot(kind = 'pie', labels=df.columns, radius=1, autopct='%1.1f%%', explode=(0,0,0,0,0.1), shadow=True)
ax.set(ylabel='', title=df.index[0], aspect='equal')
fig.suptitle(main_title, fontsize=18)
fig.tight_layout()
fig.subplots_adjust(top=0.80)
plt.show()
def display_race_summary_charts(input_df, main_title):
df = input_df
# With subplots=True, pandas draws one pie per column, so allocate one
# axis per column and hand the whole array of axes to DataFrame.plot.
fig, axes = plt.subplots(1, len(df.columns), figsize=(12,3))
df.plot(kind='pie', radius=1, autopct='%1.1f%%', ax=axes, subplots=True, shadow=True)
for ax in np.atleast_1d(axes):
    ax.set(ylabel='', aspect='equal')
fig.suptitle(main_title, fontsize=18)
fig.tight_layout()
fig.subplots_adjust(top=0.80)
plt.show()
def display_gender_charts(input_df, main_title):
#strip Total column from dataframe
df = input_df.iloc[:,:-1]
count = len(df.index)
fig, axes = plt.subplots(1,count, figsize=(12,3))
if count > 1:
for ax, idx in (zip(axes, df.index)):
ax.pie(df.loc[idx], labels=df.columns, radius=1, autopct='%1.1f%%', explode=(0.1, 0), shadow=True)
ax.set(ylabel='', title=idx, aspect='equal')
else:
ax = df.iloc[0].plot(kind = 'pie', labels=df.columns, radius=1, autopct='%1.1f%%', explode=(0.1, 0), shadow=True)
ax.set(ylabel='', title=df.index[0], aspect='equal')
fig.suptitle(main_title, fontsize=18)
fig.tight_layout()
fig.subplots_adjust(top=0.80)
plt.show()
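# Closure used as a pie-chart label formatter: converts each slice's
# percentage back into a whole-dollar amount for display.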
def autopct_format(values):
def my_format(pct):
total = sum(values)
val = int(round(pct*total/100.0))
return '${v:d}'.format(v=val)
return my_format
def display_money_charts(input_df, main_title):
#strip Total column from dataframe
df = input_df.iloc[:,:-1]
#Reverse order dataframe
df = df.iloc[::-1]
fig, axes = plt.subplots(1,2, figsize=(12,3))
for ax, idx in zip(axes, df.index):
ax.pie(df.loc[idx], labels=df.columns, radius=1, autopct = autopct_format(df.loc[idx]), shadow=True)
ax.set(ylabel='', title=idx, aspect='equal')
fig.suptitle(main_title, fontsize=18)
fig.tight_layout()
fig.subplots_adjust(top=0.80)
plt.show()
def display_money_bar(input_df, main_title):
#strip Total column from dataframe
df = input_df.iloc[:,:-1]
#Reverse order dataframe
#df = df.iloc[::-1]
ax = df.plot(kind='bar', title =main_title, figsize=(15, 10), legend=True, fontsize=12)
ax.set_xlabel("Party", fontsize=12)
ax.set_ylabel("PAC Disbursment Dollars", fontsize=12)
for p in ax.patches:
ax.annotate('$'+str(p.get_height()), (p.get_x() * 1.005, p.get_height() * 1.005))
plt.show()
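# Rename a single index label in place; used below to relabel party
# names for clearer chart titles.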
def change_index_values(df, old, new):
as_list = df.index.tolist()
idx = as_list.index(old)
as_list[idx] = new
df.index = as_list
def currency(x, pos):
'The two args are the value and tick position'
if x >= 1000000:
return '${:1.1f}M'.format(x*1e-6)
return '${:1.0f}K'.format(x*1e-3)
def display_bar(df):
fig, ax = plt.subplots()
df.plot(kind = 'barh', ax=ax)
fmt = FuncFormatter(currency)
ax.xaxis.set_major_formatter(fmt)
ax.xaxis.grid()
def race_totals(df):
total = 0
for r in races:
if r in df.columns:
total = total + df[r]
return total
def gender_totals(df):
total = 0
for r in genders:
if r in df.columns:
total = total + df[r]
return total
congress116_df = pd.read_csv("../data/116th_congress_190103.csv")
#data cleaning
#split race & ethnicity
congress116_df[['Race','Ethnicity']] = congress116_df.raceEthnicity.str.split(" - ", 1, expand=True,)
#remove independents
congress116_df = congress116_df[congress116_df.party != "Independent"]
#remove non-voting members
congress116_df['raceEthnicity'].replace('', np.nan, inplace=True)
congress116_df.dropna(subset=['raceEthnicity'], inplace=True)
#Fix Jared Golden's missing gender
congress116_df.at[240, 'gender'] = 'M'
#change Native American and Pacific Islander to "Other Races"
congress116_df.loc[(congress116_df.Race == 'Native American'),'Race']='Other Races'
congress116_df.loc[(congress116_df.Race == 'Pacific Islander'),'Race']='Other Races'
#change M & F to Male and Female
congress116_df.loc[(congress116_df.gender == 'M'),'gender']='Male'
congress116_df.loc[(congress116_df.gender == 'F'),'gender']='Female'
#remove nan
congress116_df = congress116_df.apply(lambda x: x.fillna(0) if x.dtype.kind in 'biufc' else x.fillna(''))
#convert floats to ints
#congress116_df = congress116_df.convert_dtypes()
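# Cross-tabulate members of Congress: rows = party, columns = race/gender,
# values = member counts.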
race_df = congress116_df.groupby(['party','Race']).agg('size').unstack()
gender_df = congress116_df.groupby(['party','gender']).agg('size').unstack()
#remove NaN
race_df = race_df.apply(lambda x: x.fillna(0) if x.dtype.kind in 'biufc' else x.fillna(''))
gender_df = gender_df.apply(lambda x: x.fillna(0) if x.dtype.kind in 'biufc' else x.fillna(''))
#add total column
race_df['Total'] = race_totals(race_df)
gender_df['Total'] = gender_totals(gender_df)
#race_df['Total'] = race_df['Asian'] + race_df['Black'] + race_df['Hispanic'] + race_df['Other Races'] + race_df['White']
#gender_df['Total'] = gender_df['Male'] + gender_df['Female']
#load 2017 US race and gender data
us_race_df = pd.read_csv("../data/us_race_2017.csv")
us_race_df.set_index('Category', inplace=True)
us_gender_df = pd.read_csv("../data/us_gender_2017.csv")
us_gender_df.set_index('Category', inplace=True)
#concatenate race dataframes
race_df = pd.concat([race_df, us_race_df])
#concatenate gender dataframes
gender_df = pd.concat([gender_df, us_gender_df])
#convert floats to ints
race_df = race_df.convert_dtypes()
gender_df = gender_df.convert_dtypes()
#change index values for clearer presentation
change_index_values(race_df, "Democrat", "Democrats")
change_index_values(race_df, "Republican", "Republicans")
change_index_values(gender_df, "Democrat", "Democrats")
change_index_values(gender_df, "Republican", "Republican")
#align PAC data
PAC_2017_2018_df = pd.read_csv("../data/"+pac_text.value+"PAC-2017-2018-disbursements.csv")
ThunderboltPAC_2017_2018_df = pd.read_csv("../data/thunderboltPAC-2017-2018-disbursements.csv")
NewPAC_2017_2018_df = pd.read_csv("../data/newPAC-2017-2018-disbursements.csv")
#remove nan
PAC_2017_2018_df = PAC_2017_2018_df.apply(lambda x: x.fillna(0) if x.dtype.kind in 'biufc' else x.fillna(''))
ThunderboltPAC_2017_2018_df = ThunderboltPAC_2017_2018_df.apply(lambda x: x.fillna(0) if x.dtype.kind in 'biufc' else x.fillna(''))
NewPAC_2017_2018_df = NewPAC_2017_2018_df.apply(lambda x: x.fillna(0) if x.dtype.kind in 'biufc' else x.fillna(''))
#convert floats to ints
PAC_2017_2018_df.candidate_office_district = PAC_2017_2018_df.candidate_office_district.astype(int)
ThunderboltPAC_2017_2018_df.candidate_office_district = ThunderboltPAC_2017_2018_df.candidate_office_district.astype(int)
NewPAC_2017_2018_df.candidate_office_district = NewPAC_2017_2018_df.candidate_office_district.astype(int)
#convert join columns from all caps to capitalized
PAC_2017_2018_df.candidate_first_name = PAC_2017_2018_df.candidate_first_name.str.title()
PAC_2017_2018_df.candidate_last_name = PAC_2017_2018_df.candidate_last_name.str.title()
PAC_2017_2018_df.candidate_middle_name = PAC_2017_2018_df.candidate_middle_name.str.title()
ThunderboltPAC_2017_2018_df.candidate_first_name = ThunderboltPAC_2017_2018_df.candidate_first_name.str.title()
ThunderboltPAC_2017_2018_df.candidate_last_name = ThunderboltPAC_2017_2018_df.candidate_last_name.str.title()
ThunderboltPAC_2017_2018_df.candidate_middle_name = ThunderboltPAC_2017_2018_df.candidate_middle_name.str.title()
NewPAC_2017_2018_df.candidate_first_name = NewPAC_2017_2018_df.candidate_first_name.str.title()
NewPAC_2017_2018_df.candidate_last_name = NewPAC_2017_2018_df.candidate_last_name.str.title()
NewPAC_2017_2018_df.candidate_middle_name = NewPAC_2017_2018_df.candidate_middle_name.str.title()
#join PAC and congress demographic dataframes
PAC_merge_df = pd.merge(PAC_2017_2018_df, congress116_df, how='left', left_on=['candidate_last_name','candidate_office_district','candidate_office_state'], right_on = ['lastName','district','state'])
TBPAC_merge_df = pd.merge(ThunderboltPAC_2017_2018_df, congress116_df, how='left', left_on=['candidate_last_name','candidate_office_district','candidate_office_state'], right_on = ['lastName','district','state'])
NewPAC_merge_df = pd.merge(NewPAC_2017_2018_df, congress116_df, how='left', left_on=['candidate_last_name','candidate_office_district','candidate_office_state'], right_on = ['lastName','district','state'])
#remove nan
PAC_merge_df = PAC_merge_df.apply(lambda x: x.fillna(0) if x.dtype.kind in 'biufc' else x.fillna(''))
TBPAC_merge_df = TBPAC_merge_df.apply(lambda x: x.fillna(0) if x.dtype.kind in 'biufc' else x.fillna(''))
NewPAC_merge_df = NewPAC_merge_df.apply(lambda x: x.fillna(0) if x.dtype.kind in 'biufc' else x.fillna(''))
#subsets
winners_2018_df = PAC_merge_df.loc[PAC_merge_df['Race'] != ""]
#possible loser candidates: named recipients with no matching sitting member
possible_2018_losers_df = PAC_merge_df.loc[(PAC_merge_df['candidate_last_name'] != "") & (PAC_merge_df['Race'] == "")]
untraced_disbursements_df = PAC_merge_df.loc[PAC_merge_df['Race'] == ""]
PAC_race_df = winners_2018_df.groupby(['party','Race']).agg('size').unstack()
PAC_gender_df = winners_2018_df.groupby(['party','gender']).agg('size').unstack()
PAC_money_race_df = winners_2018_df.groupby(['party', 'Race'])['disbursement_amount'].agg('sum').unstack()
PAC_money_gender_df = winners_2018_df.groupby(['party', 'gender'])['disbursement_amount'].agg('sum').unstack()
#remove NaN
PAC_race_df = PAC_race_df.apply(lambda x: x.fillna(0) if x.dtype.kind in 'biufc' else x.fillna(''))
PAC_gender_df = PAC_gender_df.apply(lambda x: x.fillna(0) if x.dtype.kind in 'biufc' else x.fillna(''))
PAC_money_race_df = PAC_money_race_df.apply(lambda x: x.fillna(0) if x.dtype.kind in 'biufc' else x.fillna(''))
PAC_money_gender_df = PAC_money_gender_df.apply(lambda x: x.fillna(0) if x.dtype.kind in 'biufc' else x.fillna(''))
#add total column
PAC_race_df['Total'] = race_totals(PAC_race_df)
PAC_gender_df['Total'] = gender_totals(PAC_gender_df)
PAC_money_race_df['Total'] = race_totals(PAC_money_race_df)
PAC_money_gender_df['Total'] = gender_totals(PAC_money_gender_df)
#PAC_race_df['Total'] = PAC_race_df['Asian'] + PAC_race_df['Black'] + PAC_race_df['Hispanic'] + PAC_race_df['Other Races'] + PAC_race_df['White']
#PAC_gender_df['Total'] = PAC_gender_df['Male'] + PAC_gender_df['Female']
#PAC_money_race_df['Total'] = PAC_money_race_df['Asian'] + PAC_money_race_df['Black'] + PAC_money_race_df['Hispanic'] + PAC_money_race_df['Other Races'] + PAC_money_race_df['White']
#PAC_money_gender_df['Total'] = PAC_money_gender_df['Male'] + PAC_money_gender_df['Female']
#concatenate race dataframes
PAC_race_df = pd.concat([PAC_race_df, us_race_df])
#concatenate gender dataframes
PAC_gender_df = pd.concat([PAC_gender_df, us_gender_df])
#convert floats to ints
PAC_race_df = PAC_race_df.convert_dtypes()
PAC_gender_df = PAC_gender_df.convert_dtypes()
PAC_money_race_df = PAC_money_race_df.convert_dtypes()
PAC_money_gender_df = PAC_money_gender_df.convert_dtypes()
#remove NaN
PAC_race_df = PAC_race_df.apply(lambda x: x.fillna(0) if x.dtype.kind in 'biufc' else x.fillna(''))
PAC_gender_df = PAC_gender_df.apply(lambda x: x.fillna(0) if x.dtype.kind in 'biufc' else x.fillna(''))
PAC_money_race_df = PAC_money_race_df.apply(lambda x: x.fillna(0) if x.dtype.kind in 'biufc' else x.fillna(''))
PAC_money_gender_df = PAC_money_gender_df.apply(lambda x: x.fillna(0) if x.dtype.kind in 'biufc' else x.fillna(''))
#change index values for clearer presentation
change_index_values(PAC_race_df, "Democrat", "Democratic Recipiants")
change_index_values(PAC_race_df, "Republican", "Republican Recipiants")
change_index_values(PAC_gender_df, "Democrat", "Democratic Recipiants")
change_index_values(PAC_gender_df, "Republican", "Republican Recipiants")
change_index_values(PAC_money_race_df, "Democrat", "Democratic Recipiants")
change_index_values(PAC_money_race_df, "Republican", "Republican Recipiants")
change_index_values(PAC_money_gender_df, "Democrat", "Democratic Recipiants")
change_index_values(PAC_money_gender_df, "Republican", "Republican Recipiants")
#rearrange columns
race_cols = races + ['Total'] #copy so the module-level race/gender lists aren't mutated
gender_cols = genders + ['Total']
PAC_race_df = PAC_race_df[race_cols]
PAC_gender_df = PAC_gender_df[gender_cols]
###Output
_____no_output_____
###Markdown
PAC disbursements based on inclusion and diversity. In this analysis we take a closer look at the diversity of the United States overall and the diversity of the current US Congress by party. We will analyze diversity based on gender and race and compare the parties in general and then dive into the diversity of the specific candidates that received money from the selected PAC. All of this data is publicly available.
###Code
display_gender_charts(gender_df.iloc[2:3], "US Population by Gender")
display_gender_charts(gender_df.iloc[0:2], "Current Congress by Gender")
display_gender_charts(PAC_gender_df.iloc[0:2], "Percent of PAC Disbursements to Members of the 116th Congress by Gender")
display_gender_charts(PAC_gender_df.iloc[0:2].sum().to_frame(name='recipients').transpose(), "PAC Disbursement to candidates by Gender")
###Output
_____no_output_____
###Markdown
PAC disbursements based on racial diversity
###Code
display_race_charts(race_df.iloc[2:3], "US Population by Race")
display_race_charts(race_df.iloc[0:2], "Current Congress by Race")
display_race_charts(PAC_race_df.iloc[0:2], "Percent of PAC Disbursements to Members of the 116th Congress by Race")
display_race_charts(PAC_race_df.iloc[0:2].sum().to_frame(name='recipients').transpose(), "PAC Disbursement to candidates by Race")
display_money_charts(PAC_money_race_df, "Amount of PAC Disbursements to Members of the 116th Congress by Race")
df = PAC_merge_df.groupby('Race')['disbursement_amount'].sum().sort_values()
as_list = df.index.tolist()
idx = as_list.index('')
as_list[idx] = 'candidates not elected or other PACs or RNCC/DCCC'
df.index = as_list
display_bar(df)
###Output
_____no_output_____
|
.ipynb_checkpoints/ecephys_quickstart-checkpoint.ipynb
|
###Markdown
Extracellular Electrophysiology Data Quick StartA short introduction to the Visual Coding Neuropixels data and SDK. For more information, see the full reference notebook.Contents-------------* peristimulus time histograms* image classification
###Code
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from allensdk.brain_observatory.ecephys.ecephys_project_cache import EcephysProjectCache
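# NOTE: the next few lines reference `session` and `cache`, which are only
# created in the cells below; run those cells first if executing top to bottom.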
st = session.get_stimulus_table()
gab_stims = st.loc[st.stimulus_name=='natural_movie_three',:]
sp = session.stimulus_presentations
sp.loc[sp.stimulus_name=='natural_movie_three',:]
movie = cache.get_natural_movie_template(3)
###Output
_____no_output_____
###Markdown
The `EcephysProjectCache` is the main entry point to the Visual Coding Neuropixels dataset. It allows you to download data for individual recording sessions and view cross-session summary information.
###Code
movie.shape
# this path determines where downloaded data will be stored
manifest_path = os.path.join('/local1/ecephys_cache_dir/', "manifest.json")
cache = EcephysProjectCache.from_warehouse(manifest=manifest_path)
print(cache.get_all_session_types())
###Output
['brain_observatory_1.1', 'functional_connectivity']
###Markdown
This dataset contains sessions in which two sets of stimuli were presented. The `"brain_observatory_1.1"` sessions are (almost exactly) the same as Visual Coding 2P sessions.
###Code
sessions = cache.get_session_table()
brain_observatory_type_sessions = sessions[sessions["session_type"] == "brain_observatory_1.1"]
brain_observatory_type_sessions.tail()
###Output
_____no_output_____
###Markdown
peristimulus time histograms We are going to pick a session arbitrarily and download its spike data.
###Code
session_id = 791319847
session = cache.get_session_data(session_id)
session.stimulus_presentations
###Output
_____no_output_____
###Markdown
We can get a high-level summary of this session by accessing its `metadata` attribute:
###Code
session.metadata
###Output
c:\users\fritz\.conda\envs\py36\lib\site-packages\allensdk\brain_observatory\ecephys\ecephys_project_api\ecephys_project_warehouse_api.py:291: FutureWarning: Conversion of the second argument of issubdtype from `bool` to `np.generic` is deprecated. In future, it will be treated as `np.bool_ == np.dtype(bool).type`.
pv_is_bool = np.issubdtype(output["p_value_rf"].values[0], np.bool)
###Markdown
We can also take a look at how many units were recorded in each brain structure:
###Code
session.structurewise_unit_counts
###Output
_____no_output_____
###Markdown
Now that we've gotten spike data, we can create peristimulus time histograms.
###Code
presentations = session.get_stimulus_table("flashes")
units = session.units[session.units["ecephys_structure_acronym"] == 'VISp']
time_step = 0.01
time_bins = np.arange(-0.1, 0.5 + time_step, time_step)
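# Bin spikes from -100 ms to +500 ms around each flash onset, in 10 ms steps,
# for every VISp unit and every flash presentation.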
histograms = session.presentationwise_spike_counts(
stimulus_presentation_ids=presentations.index.values,
bin_edges=time_bins,
unit_ids=units.index.values
)
histograms.coords
presentations
mean_histograms = histograms.mean(dim="stimulus_presentation_id")
fig, ax = plt.subplots(figsize=(8, 8))
ax.pcolormesh(
mean_histograms["time_relative_to_stimulus_onset"],
np.arange(mean_histograms["unit_id"].size),
mean_histograms.T,
vmin=0,
vmax=1
)
ax.set_ylabel("unit", fontsize=24)
ax.set_xlabel("time relative to stimulus onset (s)", fontsize=24)
ax.set_title("peristimulus time histograms for VISp units on flash presentations", fontsize=24)
plt.show()
###Output
_____no_output_____
###Markdown
image classification First, we need to extract spikes. We will do this using `EcephysSession.presentationwise_spike_times`, which returns spikes annotated by the unit that emitted them and the stimulus presentation during which they were emitted.
###Code
scene_presentations = session.get_stimulus_table("natural_scenes")
visp_units = session.units[session.units["ecephys_structure_acronym"] == "VISp"]
spikes = session.presentationwise_spike_times(
stimulus_presentation_ids=scene_presentations.index.values,
unit_ids=visp_units.index.values[:]
)
spikes
###Output
_____no_output_____
###Markdown
Next, we will convert these into a num_presentations X num_units matrix, which will serve as our input data.
###Code
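# Count spikes per (stimulus presentation, unit) pair; the dummy "count"
# column gives groupby(...).count() something to tally.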
spikes["count"] = np.zeros(spikes.shape[0])
spikes = spikes.groupby(["stimulus_presentation_id", "unit_id"]).count()
design = pd.pivot_table(
spikes,
values="count",
index="stimulus_presentation_id",
columns="unit_id",
fill_value=0.0,
aggfunc=np.sum
)
design
###Output
_____no_output_____
###Markdown
... with targets being the numeric identifiers of the images presented.
###Code
targets = scene_presentations.loc[design.index.values, "frame"]
targets
from sklearn import svm
from sklearn.model_selection import KFold
from sklearn.metrics import confusion_matrix
design_arr = design.values.astype(float)
targets_arr = targets.values.astype(int)
labels = np.unique(targets_arr)
accuracies = []
confusions = []
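# 5-fold cross-validation: fit an RBF-kernel SVM on the spike-count vectors
# and score how well it predicts which natural scene was shown.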
for train_indices, test_indices in KFold(n_splits=5).split(design_arr):
clf = svm.SVC(gamma="scale", kernel="rbf")
clf.fit(design_arr[train_indices], targets_arr[train_indices])
test_targets = targets_arr[test_indices]
test_predictions = clf.predict(design_arr[test_indices])
accuracy = 1 - (np.count_nonzero(test_predictions - test_targets) / test_predictions.size)
print(accuracy)
accuracies.append(accuracy)
confusions.append(confusion_matrix(y_true=test_targets, y_pred=test_predictions, labels=labels))
print(f"mean accuracy: {np.mean(accuracy)}")
print(f"chance: {1/labels.size}")
###Output
mean accuracy: 0.473109243697479
chance: 0.008403361344537815
###Markdown
imagewise performance
###Code
mean_confusion = np.mean(confusions, axis=0)
fig, ax = plt.subplots(figsize=(8, 8))
img = ax.imshow(mean_confusion)
fig.colorbar(img)
ax.set_ylabel("actual")
ax.set_xlabel("predicted")
plt.show()
best = labels[np.argmax(np.diag(mean_confusion))]
worst = labels[np.argmin(np.diag(mean_confusion))]
fig, ax = plt.subplots(1, 2, figsize=(16, 8))
best_image = cache.get_natural_scene_template(best)
ax[0].imshow(best_image, cmap=plt.cm.gray)
ax[0].set_title("most decodable", fontsize=24)
worst_image = cache.get_natural_scene_template(worst)
ax[1].imshow(worst_image, cmap=plt.cm.gray)
ax[1].set_title("least decodable", fontsize=24)
plt.show()
###Output
_____no_output_____
|
vanderplas/PythonDataScienceHandbook-master/notebooks/03.09-Pivot-Tables.ipynb
|
###Markdown
*This notebook contains an excerpt from the [Python Data Science Handbook](http://shop.oreilly.com/product/0636920034919.do) by Jake VanderPlas; the content is available [on GitHub](https://github.com/jakevdp/PythonDataScienceHandbook).**The text is released under the [CC-BY-NC-ND license](https://creativecommons.org/licenses/by-nc-nd/3.0/us/legalcode), and code is released under the [MIT license](https://opensource.org/licenses/MIT). If you find this content useful, please consider supporting the work by [buying the book](http://shop.oreilly.com/product/0636920034919.do)!* Pivot Tables We have seen how the ``GroupBy`` abstraction lets us explore relationships within a dataset.A *pivot table* is a similar operation that is commonly seen in spreadsheets and other programs that operate on tabular data.The pivot table takes simple column-wise data as input, and groups the entries into a two-dimensional table that provides a multidimensional summarization of the data.The difference between pivot tables and ``GroupBy`` can sometimes cause confusion; it helps me to think of pivot tables as essentially a *multidimensional* version of ``GroupBy`` aggregation.That is, you split-apply-combine, but both the split and the combine happen across not a one-dimensional index, but across a two-dimensional grid. Motivating Pivot TablesFor the examples in this section, we'll use the database of passengers on the *Titanic*, available through the Seaborn library (see [Visualization With Seaborn](04.14-Visualization-With-Seaborn.ipynb)):
###Code
import numpy as np
import pandas as pd
import seaborn as sns
titanic = sns.load_dataset('titanic')
titanic.head()
###Output
_____no_output_____
###Markdown
This contains a wealth of information on each passenger of that ill-fated voyage, including gender, age, class, fare paid, and much more. Pivot Tables by HandTo start learning more about this data, we might begin by grouping according to gender, survival status, or some combination thereof.If you have read the previous section, you might be tempted to apply a ``GroupBy`` operation–for example, let's look at survival rate by gender:
###Code
titanic.groupby('sex')[['survived']].mean()
###Output
_____no_output_____
###Markdown
This immediately gives us some insight: overall, three of every four females on board survived, while only one in five males survived!This is useful, but we might like to go one step deeper and look at survival by both sex and, say, class.Using the vocabulary of ``GroupBy``, we might proceed using something like this:we *group by* class and gender, *select* survival, *apply* a mean aggregate, *combine* the resulting groups, and then *unstack* the hierarchical index to reveal the hidden multidimensionality. In code:
###Code
titanic.groupby(['sex', 'class'])['survived'].aggregate('mean').unstack()
###Output
_____no_output_____
###Markdown
This gives us a better idea of how both gender and class affected survival, but the code is starting to look a bit garbled.While each step of this pipeline makes sense in light of the tools we've previously discussed, the long string of code is not particularly easy to read or use.This two-dimensional ``GroupBy`` is common enough that Pandas includes a convenience routine, ``pivot_table``, which succinctly handles this type of multi-dimensional aggregation. Pivot Table SyntaxHere is the equivalent to the preceding operation using the ``pivot_table`` method of ``DataFrame``s:
###Code
titanic.pivot_table('survived', index='sex', columns='class')
###Output
_____no_output_____
###Markdown
This is eminently more readable than the ``groupby`` approach, and produces the same result.As you might expect of an early 20th-century transatlantic cruise, the survival gradient favors both women and higher classes.First-class women survived with near certainty (hi, Rose!), while only one in ten third-class men survived (sorry, Jack!). Multi-level pivot tablesJust as in the ``GroupBy``, the grouping in pivot tables can be specified with multiple levels, and via a number of options.For example, we might be interested in looking at age as a third dimension.We'll bin the age using the ``pd.cut`` function:
###Code
age = pd.cut(titanic['age'], [0, 18, 80])
titanic.pivot_table('survived', ['sex', age], 'class')
###Output
_____no_output_____
###Markdown
We can apply the same strategy when working with the columns as well; let's add info on the fare paid using ``pd.qcut`` to automatically compute quantiles:
###Code
fare = pd.qcut(titanic['fare'], 2)
titanic.pivot_table('survived', ['sex', age], [fare, 'class'])
###Output
_____no_output_____
###Markdown
The result is a four-dimensional aggregation with hierarchical indices (see [Hierarchical Indexing](03.05-Hierarchical-Indexing.ipynb)), shown in a grid demonstrating the relationship between the values. Additional pivot table optionsThe full call signature of the ``pivot_table`` method of ``DataFrame``s is as follows:```python call signature as of Pandas 0.18DataFrame.pivot_table(data, values=None, index=None, columns=None, aggfunc='mean', fill_value=None, margins=False, dropna=True, margins_name='All')```We've already seen examples of the first three arguments; here we'll take a quick look at the remaining ones.Two of the options, ``fill_value`` and ``dropna``, have to do with missing data and are fairly straightforward; a brief illustration of ``fill_value`` appears below.The ``aggfunc`` keyword controls what type of aggregation is applied, which is a mean by default.As in the GroupBy, the aggregation specification can be a string representing one of several common choices (e.g., ``'sum'``, ``'mean'``, ``'count'``, ``'min'``, ``'max'``, etc.) or a function that implements an aggregation (e.g., ``np.sum``, ``min``, ``sum``, etc.).Additionally, it can be specified as a dictionary mapping a column to any of the above desired options:
###Code
titanic.pivot_table(index='sex', columns='class',
aggfunc={'survived':sum, 'fare':'mean'})
###Output
_____no_output_____
###Markdown
Notice also here that we've omitted the ``values`` keyword; when specifying a mapping for ``aggfunc``, this is determined automatically. Before moving on, here is the brief ``fill_value`` illustration promised above.
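###Code
# fill_value sketch: the fine-grained (sex, age) x (fare, class) grid built
# earlier contains empty combinations; fill_value replaces the resulting NaN
# cells (filling a survival *rate* with 0 is purely illustrative)
titanic.pivot_table('survived', ['sex', age], [fare, 'class'], fill_value=0)
###Output
_____no_output_____
###Markdown
At times it's useful to compute totals along each grouping.This can be done via the ``margins`` keyword: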
###Code
titanic.pivot_table('survived', index='sex', columns='class', margins=True)
###Output
_____no_output_____
###Markdown
Here this automatically gives us information about the class-agnostic survival rate by gender, the gender-agnostic survival rate by class, and the overall survival rate of 38%.The margin label can be specified with the ``margins_name`` keyword, which defaults to ``"All"``. Example: Birthrate DataAs a more interesting example, let's take a look at the freely available data on births in the United States, provided by the Centers for Disease Control (CDC).This data can be found at https://raw.githubusercontent.com/jakevdp/data-CDCbirths/master/births.csv(this dataset has been analyzed rather extensively by Andrew Gelman and his group; see, for example, [this blog post](http://andrewgelman.com/2012/06/14/cool-ass-signal-processing-using-gaussian-processes/)):
###Code
# shell command to download the data:
# !curl -O https://raw.githubusercontent.com/jakevdp/data-CDCbirths/master/births.csv
births = pd.read_csv('data/births.csv')
###Output
_____no_output_____
###Markdown
Taking a look at the data, we see that it's relatively simple–it contains the number of births grouped by date and gender:
###Code
births.head()
###Output
_____no_output_____
###Markdown
We can start to understand this data a bit more by using a pivot table.Let's add a decade column, and take a look at male and female births as a function of decade:
###Code
births['decade'] = 10 * (births['year'] // 10)
births.pivot_table('births', index='decade', columns='gender', aggfunc='sum')
###Output
_____no_output_____
###Markdown
We immediately see that male births outnumber female births in every decade.To see this trend a bit more clearly, we can use the built-in plotting tools in Pandas to visualize the total number of births by year (see [Introduction to Matplotlib](04.00-Introduction-To-Matplotlib.ipynb) for a discussion of plotting with Matplotlib):
###Code
%matplotlib inline
import matplotlib.pyplot as plt
sns.set() # use Seaborn styles
births.pivot_table('births', index='year', columns='gender', aggfunc='sum').plot()
plt.ylabel('total births per year');
###Output
_____no_output_____
###Markdown
With a simple pivot table and ``plot()`` method, we can immediately see the annual trend in births by gender. By eye, it appears that over the past 50 years male births have outnumbered female births by around 5%; the quick sketch below checks that ratio directly.
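###Code
# a quick check of the ~5% figure (added sketch): in this CSV the gender
# column uses 'M'/'F' codes, so compute the yearly male/female birth ratio
by_year = births.pivot_table('births', index='year', columns='gender', aggfunc='sum')
(by_year['M'] / by_year['F']).mean()
###Output
_____no_output_____
###Markdown
Further data explorationThough this doesn't necessarily relate to the pivot table, there are a few more interesting features we can pull out of this dataset using the Pandas tools covered up to this point.We must start by cleaning the data a bit, removing outliers caused by mistyped dates (e.g., June 31st) or missing values (e.g., June 99th).One easy way to remove these all at once is to cut outliers; we'll do this via a robust sigma-clipping operation: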
###Code
quartiles = np.percentile(births['births'], [25, 50, 75])
mu = quartiles[1]
sig = 0.74 * (quartiles[2] - quartiles[0])
###Output
_____no_output_____
###Markdown
Here the median ``mu`` is a robust estimate of the sample mean, and the final line is a robust estimate of the sample standard deviation: for a Gaussian distribution the interquartile range is about 1.349σ, so σ ≈ 0.74 × IQR (You can learn more about sigma-clipping operations in a book I coauthored with Željko Ivezić, Andrew J. Connolly, and Alexander Gray: ["Statistics, Data Mining, and Machine Learning in Astronomy"](http://press.princeton.edu/titles/10159.html) (Princeton University Press, 2014)).With this we can use the ``query()`` method (discussed further in [High-Performance Pandas: ``eval()`` and ``query()``](03.12-Performance-Eval-and-Query.ipynb)) to filter out rows with births outside these values:
###Code
births = births.query('(births > @mu - 5 * @sig) & (births < @mu + 5 * @sig)')
###Output
_____no_output_____
###Markdown
Next we set the ``day`` column to integers; previously it had been a string because some columns in the dataset contained the value ``'null'``:
###Code
# set 'day' column to integer; it originally was a string due to nulls
births['day'] = births['day'].astype(int)
###Output
_____no_output_____
###Markdown
Finally, we can combine the day, month, and year to create a Date index (see [Working with Time Series](03.11-Working-with-Time-Series.ipynb)).This allows us to quickly compute the weekday corresponding to each row:
###Code
# create a datetime index from the year, month, day
births.index = pd.to_datetime(10000 * births.year +
100 * births.month +
births.day, format='%Y%m%d')
births['dayofweek'] = births.index.dayofweek
###Output
_____no_output_____
###Markdown
Using this we can plot births by weekday for several decades:
###Code
import matplotlib.pyplot as plt
import matplotlib as mpl
births.pivot_table('births', index='dayofweek',
columns='decade', aggfunc='mean').plot()
plt.gca().set_xticklabels(['Mon', 'Tues', 'Wed', 'Thurs', 'Fri', 'Sat', 'Sun'])
plt.ylabel('mean births by day');
###Output
_____no_output_____
###Markdown
Apparently births are slightly less common on weekends than on weekdays! Note that the 1990s and 2000s are missing because the CDC data contains only the month of birth starting in 1989.Another interesting view is to plot the mean number of births by the day of the *year*.Let's first group the data by month and day separately:
###Code
births_by_date = births.pivot_table('births',
[births.index.month, births.index.day])
births_by_date.head()
###Output
_____no_output_____
###Markdown
The result is a multi-index over months and days.To make this easily plottable, let's turn these months and days into a date by associating them with a dummy year variable (making sure to choose a leap year so February 29th is correctly handled!)
###Code
# pd.datetime was removed in newer pandas; pd.Timestamp is the equivalent here
births_by_date.index = [pd.Timestamp(2012, month, day)
                        for (month, day) in births_by_date.index]
births_by_date.head()
###Output
_____no_output_____
###Markdown
Focusing on the month and day only, we now have a time series reflecting the average number of births by date of the year.From this, we can use the ``plot`` method to plot the data. It reveals some interesting trends:
###Code
# Plot the results
fig, ax = plt.subplots(figsize=(12, 4))
births_by_date.plot(ax=ax);
###Output
_____no_output_____
|
Research/GoogleSmartPhone/code/GSDC2_AssessAndCleanData.ipynb
|
###Markdown
Load Libraries
###Code
import numpy as np
import pandas as pd
from glob import glob
import os
import matplotlib.pyplot as plt
from tqdm.notebook import tqdm
from pathlib import Path
import plotly.express as px
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.utils.data import TensorDataset, DataLoader
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler, MinMaxScaler
###Output
_____no_output_____
###Markdown
Set Path
###Code
data_dir = Path("../input/google-smartphone-decimeter-challenge")
###Output
_____no_output_____
###Markdown
Load Data
###Code
df_train = pd.read_pickle(str(data_dir / "gsdc_train.pkl.gzip"))
df_test = pd.read_pickle(str(data_dir / "gsdc_test.pkl.gzip"))
###Output
_____no_output_____
###Markdown
Check Dataset
###Code
print(df_train.shape)
df_train.head()
print(df_test.shape)
df_test.head()
df_train.info(verbose = True, memory_usage= True, null_counts=True)
df_test.info(verbose = True, memory_usage= True, null_counts=True)
for col in df_train.columns:
    print(f"KEY {col}")
    print(f"train dtype: {df_train[col].dtype}, null: {df_train[col].isna().mean()}")
    if col in df_test.columns:
        print(f"test dtype: {df_test[col].dtype}, null: {df_test[col].isna().mean()}")
    print("")
###Output
KEY collectionName
train dtype: object, null: 0.0
test dtype: object, null: 0.0
KEY phoneName
train dtype: object, null: 0.0
test dtype: object, null: 0.0
KEY millisSinceGpsEpoch
train dtype: int64, null: 0.0
test dtype: int64, null: 0.0
KEY latDeg
train dtype: float64, null: 0.0
test dtype: float64, null: 0.0
KEY lngDeg
train dtype: float64, null: 0.0
test dtype: float64, null: 0.0
KEY heightAboveWgs84EllipsoidM
train dtype: float64, null: 0.0
test dtype: float64, null: 0.0
KEY phone
train dtype: object, null: 0.0
test dtype: object, null: 0.0
KEY timeSinceFirstFixSeconds
train dtype: float64, null: 0.0
KEY hDop
train dtype: float64, null: 0.0
KEY vDop
train dtype: float64, null: 0.0
KEY speedMps
train dtype: float64, null: 0.0
KEY courseDegree
train dtype: float64, null: 0.0
KEY t_latDeg
train dtype: float64, null: 0.0
KEY t_lngDeg
train dtype: float64, null: 0.0
KEY t_heightAboveWgs84EllipsoidM
train dtype: float64, null: 0.0
KEY constellationType
train dtype: float64, null: 0.47261348235903217
test dtype: float64, null: 0.3933060796187395
KEY svid
train dtype: float64, null: 0.47261348235903217
test dtype: float64, null: 0.3933060796187395
KEY signalType
train dtype: object, null: 0.47261348235903217
test dtype: object, null: 0.3933060796187395
KEY receivedSvTimeInGpsNanos
train dtype: float64, null: 0.47261348235903217
test dtype: float64, null: 0.3933060796187395
KEY xSatPosM
train dtype: float64, null: 0.47261348235903217
test dtype: float64, null: 0.3933060796187395
KEY ySatPosM
train dtype: float64, null: 0.47261348235903217
test dtype: float64, null: 0.3933060796187395
KEY zSatPosM
train dtype: float64, null: 0.47261348235903217
test dtype: float64, null: 0.3933060796187395
KEY xSatVelMps
train dtype: float64, null: 0.47261348235903217
test dtype: float64, null: 0.3933060796187395
KEY ySatVelMps
train dtype: float64, null: 0.47261348235903217
test dtype: float64, null: 0.3933060796187395
KEY zSatVelMps
train dtype: float64, null: 0.47261348235903217
test dtype: float64, null: 0.3933060796187395
KEY satClkBiasM
train dtype: float64, null: 0.47261348235903217
test dtype: float64, null: 0.3933060796187395
KEY satClkDriftMps
train dtype: float64, null: 0.47261348235903217
test dtype: float64, null: 0.3933060796187395
KEY rawPrM
train dtype: float64, null: 0.47261348235903217
test dtype: float64, null: 0.3933060796187395
KEY rawPrUncM
train dtype: float64, null: 0.47261348235903217
test dtype: float64, null: 0.3933060796187395
KEY isrbM
train dtype: float64, null: 0.47261348235903217
test dtype: float64, null: 0.3933060796187395
KEY ionoDelayM
train dtype: float64, null: 0.47261348235903217
test dtype: float64, null: 0.3933060796187395
KEY tropoDelayM
train dtype: float64, null: 0.47261348235903217
test dtype: float64, null: 0.3933060796187395
KEY utcTimeMillis
train dtype: float64, null: 0.7212848898296051
test dtype: float64, null: 0.487856065408915
KEY elapsedRealtimeNanos
train dtype: float64, null: 0.7212848898296051
test dtype: float64, null: 0.487856065408915
KEY yawDeg
train dtype: float64, null: 0.7212848898296051
test dtype: float64, null: 0.487856065408915
KEY rollDeg
train dtype: float64, null: 0.7212848898296051
test dtype: float64, null: 0.487856065408915
KEY pitchDeg
train dtype: float64, null: 0.7212848898296051
test dtype: float64, null: 0.487856065408915
KEY utcTimeMillis_Status
train dtype: float64, null: 0.14724916629867066
test dtype: float64, null: 0.11254180967579738
KEY SignalCount
train dtype: float64, null: 0.14724916629867066
test dtype: float64, null: 0.11254180967579738
KEY SignalIndex
train dtype: float64, null: 0.14724916629867066
test dtype: float64, null: 0.11254180967579738
KEY ConstellationType
train dtype: float64, null: 0.14724916629867066
test dtype: float64, null: 0.11254180967579738
KEY Svid
train dtype: float64, null: 0.14724916629867066
test dtype: float64, null: 0.11254180967579738
KEY CarrierFrequencyHz
train dtype: float64, null: 0.14858917939425317
test dtype: float64, null: 0.11434536431803773
KEY Cn0DbHz
train dtype: float64, null: 0.14724916629867066
test dtype: float64, null: 0.11254180967579738
KEY AzimuthDegrees
train dtype: float64, null: 0.14724916629867066
test dtype: float64, null: 0.11254180967579738
KEY ElevationDegrees
train dtype: float64, null: 0.14724916629867066
test dtype: float64, null: 0.11254180967579738
KEY UsedInFix
train dtype: float64, null: 0.14724916629867066
test dtype: float64, null: 0.11254180967579738
KEY HasAlmanacData
train dtype: float64, null: 0.14724916629867066
test dtype: float64, null: 0.11254180967579738
KEY HasEphemerisData
train dtype: float64, null: 0.14724916629867066
test dtype: float64, null: 0.11254180967579738
KEY BasebandCn0DbHz
train dtype: float64, null: 0.8446346180201306
test dtype: float64, null: 0.7579957589139322
KEY utcTimeMillis_UncalMag
train dtype: float64, null: 0.1333008481673798
test dtype: float64, null: 0.12249961742780316
KEY elapsedRealtimeNanos_UncalMag
train dtype: float64, null: 0.1333008481673798
test dtype: float64, null: 0.12249961742780316
KEY UncalMagXMicroT
train dtype: float64, null: 0.1333008481673798
test dtype: float64, null: 0.12249961742780316
KEY UncalMagYMicroT
train dtype: float64, null: 0.1333008481673798
test dtype: float64, null: 0.12249961742780316
KEY UncalMagZMicroT
train dtype: float64, null: 0.1333008481673798
test dtype: float64, null: 0.12249961742780316
KEY BiasXMicroT
train dtype: float64, null: 0.43065432230360434
test dtype: float64, null: 0.487856065408915
KEY BiasYMicroT
train dtype: float64, null: 0.43065432230360434
test dtype: float64, null: 0.487856065408915
KEY BiasZMicroT
train dtype: float64, null: 0.43065432230360434
test dtype: float64, null: 0.487856065408915
KEY utcTimeMillis_UncalAccel
train dtype: float64, null: 0.1333008481673798
test dtype: float64, null: 0.12249961742780316
KEY elapsedRealtimeNanos_UncalAccel
train dtype: float64, null: 0.1333008481673798
test dtype: float64, null: 0.12249961742780316
KEY UncalAccelXMps2
train dtype: float64, null: 0.1333008481673798
test dtype: float64, null: 0.12249961742780316
KEY UncalAccelYMps2
train dtype: float64, null: 0.1333008481673798
test dtype: float64, null: 0.12249961742780316
KEY UncalAccelZMps2
train dtype: float64, null: 0.1333008481673798
test dtype: float64, null: 0.12249961742780316
KEY BiasXMps2
train dtype: float64, null: 0.43065432230360434
test dtype: float64, null: 0.487856065408915
KEY BiasYMps2
train dtype: float64, null: 0.43065432230360434
test dtype: float64, null: 0.487856065408915
KEY BiasZMps2
train dtype: float64, null: 0.43065432230360434
test dtype: float64, null: 0.487856065408915
KEY utcTimeMillis_UncalGyro
train dtype: float64, null: 0.1333008481673798
test dtype: float64, null: 0.12249961742780316
KEY elapsedRealtimeNanos_UncalGyro
train dtype: float64, null: 0.1333008481673798
test dtype: float64, null: 0.12249961742780316
KEY UncalGyroXRadPerSec
train dtype: float64, null: 0.1333008481673798
test dtype: float64, null: 0.12249961742780316
KEY UncalGyroYRadPerSec
train dtype: float64, null: 0.1333008481673798
test dtype: float64, null: 0.12249961742780316
KEY UncalGyroZRadPerSec
train dtype: float64, null: 0.1333008481673798
test dtype: float64, null: 0.12249961742780316
KEY DriftXRadPerSec
train dtype: float64, null: 0.43065432230360434
test dtype: object, null: 0.487856065408915
KEY DriftYRadPerSec
train dtype: float64, null: 0.43065432230360434
test dtype: float64, null: 0.487856065408915
KEY DriftZRadPerSec
train dtype: float64, null: 0.43065432230360434
test dtype: float64, null: 0.487856065408915
KEY utcTimeMillis_Raw
train dtype: int64, null: 0.0
test dtype: int64, null: 0.0
KEY TimeNanos
train dtype: int64, null: 0.0
test dtype: int64, null: 0.0
KEY LeapSecond
train dtype: float64, null: 0.6503174917391238
test dtype: float64, null: 0.6804975624685744
KEY TimeUncertaintyNanos
train dtype: float64, null: 1.0
test dtype: float64, null: 1.0
KEY FullBiasNanos
train dtype: int64, null: 0.0
test dtype: int64, null: 0.0
KEY BiasNanos
train dtype: float64, null: 0.0
test dtype: float64, null: 0.0
KEY BiasUncertaintyNanos
train dtype: float64, null: 0.0
test dtype: float64, null: 0.0
KEY DriftNanosPerSecond
train dtype: float64, null: 0.10787866790516362
test dtype: float64, null: 0.06347419277266467
KEY DriftUncertaintyNanosPerSecond
train dtype: float64, null: 0.10787866790516362
test dtype: float64, null: 0.06347419277266467
KEY HardwareClockDiscontinuityCount
train dtype: int64, null: 0.0
test dtype: int64, null: 0.0
KEY Svid_Raw
train dtype: int64, null: 0.0
test dtype: int64, null: 0.0
KEY TimeOffsetNanos
train dtype: float64, null: 0.0
test dtype: float64, null: 0.0
KEY State
train dtype: int64, null: 0.0
test dtype: int64, null: 0.0
KEY ReceivedSvTimeNanos
train dtype: int64, null: 0.0
test dtype: int64, null: 0.0
KEY ReceivedSvTimeUncertaintyNanos
train dtype: int64, null: 0.0
test dtype: int64, null: 0.0
KEY Cn0DbHz_Raw
train dtype: float64, null: 0.0
test dtype: float64, null: 0.0
KEY PseudorangeRateMetersPerSecond
train dtype: float64, null: 0.0
test dtype: float64, null: 0.0
KEY PseudorangeRateUncertaintyMetersPerSecond
train dtype: float64, null: 0.0
test dtype: float64, null: 0.0
KEY AccumulatedDeltaRangeState
train dtype: int64, null: 0.0
test dtype: int64, null: 0.0
KEY AccumulatedDeltaRangeMeters
train dtype: float64, null: 0.0
test dtype: float64, null: 0.0
KEY AccumulatedDeltaRangeUncertaintyMeters
train dtype: float64, null: 0.0
test dtype: float64, null: 0.0
KEY CarrierFrequencyHz_Raw
train dtype: float64, null: 0.0
test dtype: float64, null: 0.0
KEY CarrierCycles
train dtype: float64, null: 1.0
test dtype: float64, null: 1.0
KEY CarrierPhase
train dtype: float64, null: 1.0
test dtype: float64, null: 1.0
KEY CarrierPhaseUncertainty
train dtype: float64, null: 1.0
test dtype: float64, null: 1.0
KEY MultipathIndicator
train dtype: int64, null: 0.0
test dtype: int64, null: 0.0
KEY SnrInDb
train dtype: float64, null: 1.0
test dtype: float64, null: 1.0
KEY ConstellationType_Raw
train dtype: int64, null: 0.0
test dtype: int64, null: 0.0
KEY AgcDb
train dtype: float64, null: 0.06000365458116977
test dtype: float64, null: 0.04684869816146733
KEY BasebandCn0DbHz_Raw
train dtype: float64, null: 0.8020815885246151
test dtype: float64, null: 0.7114859104125222
KEY FullInterSignalBiasNanos
train dtype: float64, null: 0.8020815885246151
test dtype: float64, null: 0.7114859104125222
KEY FullInterSignalBiasUncertaintyNanos
train dtype: float64, null: 0.8020815885246151
test dtype: float64, null: 0.7114859104125222
KEY SatelliteInterSignalBiasNanos
train dtype: float64, null: 0.891854852217874
test dtype: float64, null: 0.8104846643202238
KEY SatelliteInterSignalBiasUncertaintyNanos
train dtype: float64, null: 0.891854852217874
test dtype: float64, null: 0.8104846643202238
KEY CodeType
train dtype: object, null: 0.36728540756193756
test dtype: object, null: 0.3653564479811119
KEY ChipsetElapsedRealtimeNanos
train dtype: float64, null: 0.62987467832072
test dtype: float64, null: 0.5401919419364711
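###Markdown
The same per-column null fractions can be tabulated more compactly (a sketch; columns that exist only in the train set show NaN in the test column):
###Code
# compact view of the null fractions printed above
pd.DataFrame({
    "train_null_frac": df_train.isna().mean(),
    "test_null_frac": df_test.isna().mean(),
})
###Output
_____no_output_____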
###Markdown
Data Assess and Clean Remove empty columns
###Code
drop_cols = []
for col in df_train.columns:
    if col in df_test.columns:
        if df_train[col].isna().mean() == 1 or df_test[col].isna().mean() == 1:
            drop_cols.append(col)
df_train.drop(columns = drop_cols, inplace = True)
df_test.drop(columns = drop_cols, inplace = True)
###Output
_____no_output_____
###Markdown
Change data type
###Code
# Check whether any column has a different dtype between train and test.
for col in df_train.columns:
    if col in df_test.columns and df_train[col].dtype != df_test[col].dtype:
        print(f"KEY {col}")
        print(df_train[col].dtype, df_test[col].dtype)
        print("")
col = 'DriftXRadPerSec'
df_temp = df_test.copy()
df_temp[col] = df_temp[col].apply(lambda x: x if type(x) == float else x + "-4" if x[-1] == 'E' else x)
df_temp[col] = df_temp[col].apply(lambda x: x if type(x) == float else float(x))
df_test = df_temp.copy()
# Re-check that no column dtypes differ between train and test anymore.
for col in df_train.columns:
    if col in df_test.columns and df_train[col].dtype != df_test[col].dtype:
        print(f"KEY {col}")
        print(df_train[col].dtype, df_test[col].dtype)
        print("")
###Output
_____no_output_____
###Markdown
Output
###Code
df_train.to_pickle(str(data_dir / "gsdc_cleaned_train.pkl.gzip"))
df_test.to_pickle(str(data_dir / "gsdc_cleaned_test.pkl.gzip"))
###Output
_____no_output_____
|
hail/python/hail/docs/tutorials/05-filter-annotate.ipynb
|
###Markdown
Filtering and Annotation Tutorial FilterYou can filter the rows of a table with [Table.filter](https://hail.is/docs/0.2/hail.Table.html#hail.Table.filter). This returns a table of those rows for which the expression evaluates to `True`.
###Code
import hail as hl
import seaborn
hl.utils.get_movie_lens('data/')
users = hl.read_table('data/users.ht')
users.filter(users.occupation == 'programmer').count()
###Output
_____no_output_____
###Markdown
AnnotateYou can add new fields to a table with [annotate](https://hail.is/docs/0.2/hail.Table.html#hail.Table.annotate). Let's first aggregate statistics of the `age` field, then add a `cleaned_occupation` column that masks uninformative occupations.
###Code
stats = users.aggregate(hl.agg.stats(users.age))
missing_occupations = hl.set(['other', 'none'])
t = users.annotate(
cleaned_occupation = hl.cond(missing_occupations.contains(users.occupation),
hl.null('str'),
users.occupation))
t.show()
###Output
_____no_output_____
###Markdown
Note: `annotate` is functional: it doesn't mutate `users`, but returns a new table. This is also true of `filter`. In fact, all operations in Hail are functional.
###Code
users.describe()
###Output
_____no_output_____
###Markdown
There are two other annotate methods: [select](https://hail.is/docs/0.2/hail.Table.html#hail.Table.select) and [transmute](https://hail.is/docs/0.2/hail.Table.html#hail.Table.transmute). `select` returns a table with the key and an entirely new set of value fields. `transmute` replaces any fields mentioned on the right-hand side with the new fields, but leaves unmentioned fields unchanged. `transmute` is useful for transforming data into a new form. How about some examples?
###Code
(users.select(len_occupation = hl.len(users.occupation))
.describe())
(users.transmute(
cleaned_occupation = hl.cond(missing_occupations.contains(users.occupation),
hl.null(hl.tstr),
users.occupation))
.describe())
###Output
_____no_output_____
###Markdown
Finally, you can add global fields with [annotate_globals](https://hail.is/docs/0.2/hail.Table.html#hail.Table.annotate_globals). Globals are useful for storing metadata about a dataset or storing small data structures like sets and maps.
###Code
t = users.annotate_globals(cohort = 5, cloudable = hl.set(['sample1', 'sample10', 'sample15']))
t.describe()
t.cloudable
hl.eval(t.cloudable)
###Output
_____no_output_____
###Markdown
Filtering and Annotation Tutorial FilterYou can filter the rows of a table with [Table.filter](https://hail.is/docs/0.2/hail.Table.html#hail.Table.filter). This returns a table of those rows for which the expression evaluates to `True`.
###Code
import hail as hl
hl.utils.get_movie_lens('data/')
users = hl.read_table('data/users.ht')
users.filter(users.occupation == 'programmer').count()
###Output
_____no_output_____
###Markdown
AnnotateYou can add new fields to a table with [annotate](https://hail.is/docs/0.2/hail.Table.html#hail.Table.annotate). Let's first aggregate statistics of the `age` field, then add a `cleaned_occupation` column that masks uninformative occupations.
###Code
stats = users.aggregate(hl.agg.stats(users.age))
missing_occupations = hl.set(['other', 'none'])
t = users.annotate(
cleaned_occupation = hl.cond(missing_occupations.contains(users.occupation),
hl.null('str'),
users.occupation))
t.show()
###Output
_____no_output_____
###Markdown
Note: `annotate` is functional: it doesn't mutate `users`, but returns a new table. This is also true of `filter`. In fact, all operations in Hail are functional.
###Code
users.describe()
###Output
_____no_output_____
###Markdown
There are two other annotate methods: [select](https://hail.is/docs/0.2/hail.Table.html#hail.Table.select) and [transmute](https://hail.is/docs/0.2/hail.Table.html#hail.Table.transmute). `select` returns a table with the key and an entirely new set of value fields. `transmute` replaces any fields mentioned on the right-hand side with the new fields, but leaves unmentioned fields unchanged. `transmute` is useful for transforming data into a new form. How about some examples?
###Code
(users.select(len_occupation = hl.len(users.occupation))
.describe())
(users.transmute(
cleaned_occupation = hl.cond(missing_occupations.contains(users.occupation),
hl.null(hl.tstr),
users.occupation))
.describe())
###Output
_____no_output_____
###Markdown
Finally, you can add global fields with [annotate_globals](https://hail.is/docs/0.2/hail.Table.html#hail.Table.annotate_globals). Globals are useful for storing metadata about a dataset or storing small data structures like sets and maps.
###Code
t = users.annotate_globals(cohort = 5, cloudable = hl.set(['sample1', 'sample10', 'sample15']))
t.describe()
t.cloudable
hl.eval(t.cloudable)
###Output
_____no_output_____
###Markdown
Filtering and Annotation Tutorial FilterYou can filter the rows of a table with [Table.filter](https://hail.is/docs/0.2/hail.Table.html#hail.Table.filter). This returns a table of those rows for which the expression evaluates to `True`.
###Code
import hail as hl
hl.utils.get_movie_lens('data/')
users = hl.read_table('data/users.ht')
users.filter(users.occupation == 'programmer').count()
###Output
_____no_output_____
###Markdown
We can also express this query in multiple ways using [aggregations](https://hail.is/docs/0.2/tutorials/04-aggregation.html):
###Code
users.aggregate(hl.agg.filter(users.occupation == 'programmer', hl.agg.count()))
users.aggregate(hl.agg.counter(users.occupation == 'programmer'))[True]
###Output
_____no_output_____
###Markdown
AnnotateYou can add new fields to a table with [annotate](https://hail.is/docs/0.2/hail.Table.html#hail.Table.annotate). As an example, let's create a new column called `cleaned_occupation` that replaces `occupation` entries labeled 'other' or 'none' with missing values.
###Code
missing_occupations = hl.set(['other', 'none'])
t = users.annotate(
cleaned_occupation = hl.if_else(missing_occupations.contains(users.occupation),
hl.missing('str'),
users.occupation))
t.show()
###Output
_____no_output_____
###Markdown
Compare this to what we had before:
###Code
users.show()
###Output
_____no_output_____
###Markdown
Note: `annotate` is functional: it doesn't mutate `users`, but returns a new table. This is also true of `filter`. In fact, all operations in Hail are functional.
###Code
users.describe()
###Output
_____no_output_____
###Markdown
Select and TransmuteThere are two other annotate methods: [select](https://hail.is/docs/0.2/hail.Table.html#hail.Table.select) and [transmute](https://hail.is/docs/0.2/hail.Table.html#hail.Table.transmute). `select` allows you to create new tables from old ones by selecting existing fields, or creating new ones. First, let's extract the `sex` and `occupation` fields:
###Code
users.select(users.sex, users.occupation).show()
###Output
_____no_output_____
###Markdown
We can also create a new field that stores the age relative to the average. Note that new fields *must* be assigned a name (in this case `mean_shifted_age`):
###Code
mean_age = round(users.aggregate(hl.agg.stats(users.age)).mean)
users.select(users.sex, users.occupation, mean_shifted_age = users.age - mean_age).show()
###Output
_____no_output_____
###Markdown
`transmute` replaces any fields mentioned on the right-hand side with the new fields, but leaves unmentioned fields unchanged. `transmute` is useful for transforming data into a new form. Compare the following two snippets of code. The second is identical to the first, with `transmute` replacing `select`.
###Code
missing_occupations = hl.set(['other', 'none'])
t = users.select(
cleaned_occupation = hl.if_else(missing_occupations.contains(users.occupation),
hl.missing('str'),
users.occupation))
t.show()
missing_occupations = hl.set(['other', 'none'])
t = users.transmute(
cleaned_occupation = hl.if_else(missing_occupations.contains(users.occupation),
hl.missing('str'),
users.occupation))
t.show()
###Output
_____no_output_____
###Markdown
Global Fields Finally, you can add global fields with [annotate_globals](https://hail.is/docs/0.2/hail.Table.html#hail.Table.annotate_globals). Globals are useful for storing metadata about a dataset or storing small data structures like sets and maps.
###Code
t = users.annotate_globals(cohort = 5, cloudable = hl.set(['sample1', 'sample10', 'sample15']))
t.describe()
t.cloudable
hl.eval(t.cloudable)
###Output
_____no_output_____
###Markdown
Filtering and Annotation Tutorial FilterYou can filter the rows of a table with [Table.filter](https://hail.is/docs/0.2/hail.Table.html#hail.Table.filter). This returns a table of those rows for which the expression evaluates to `True`.
###Code
import hail as hl
hl.utils.get_movie_lens('data/')
users = hl.read_table('data/users.ht')
users.filter(users.occupation == 'programmer').count()
###Output
_____no_output_____
###Markdown
We can also express this query in multiple ways using [aggregations](https://hail.is/docs/0.2/tutorials/04-aggregation.html):
###Code
users.aggregate(hl.agg.filter(users.occupation == 'programmer', hl.agg.count()))
users.aggregate(hl.agg.counter(users.occupation == 'programmer'))[True]
###Output
_____no_output_____
###Markdown
AnnotateYou can add new fields to a table with [annotate](https://hail.is/docs/0.2/hail.Table.html#hail.Table.annotate). As an example, let's create a new column called `cleaned_occupation` that replaces `occupation` entries labeled 'other' or 'none' with missing values.
###Code
missing_occupations = hl.set(['other', 'none'])
t = users.annotate(
cleaned_occupation = hl.if_else(missing_occupations.contains(users.occupation),
hl.null('str'),
users.occupation))
t.show()
###Output
_____no_output_____
###Markdown
Compare this to what we had before:
###Code
users.show()
###Output
_____no_output_____
###Markdown
Note: `annotate` is functional: it doesn't mutate `users`, but returns a new table. This is also true of `filter`. In fact, all operations in Hail are functional.
###Code
users.describe()
###Output
_____no_output_____
###Markdown
Select and TransmuteThere are two other annotate methods: [select](https://hail.is/docs/0.2/hail.Table.html#hail.Table.select) and [transmute](https://hail.is/docs/0.2/hail.Table.html#hail.Table.transmute). `select` allows you to create new tables from old ones by selecting existing fields, or creating new ones. First, let's extract the `sex` and `occupation` fields:
###Code
users.select(users.sex, users.occupation).show()
###Output
_____no_output_____
###Markdown
We can also create a new field that stores the age relative to the average. Note that new fields *must* be assigned a name (in this case `mean_shifted_age`):
###Code
mean_age = round(users.aggregate(hl.agg.stats(users.age)).mean)
users.select(users.sex, users.occupation, mean_shifted_age = users.age - mean_age).show()
###Output
_____no_output_____
###Markdown
`transmute` replaces any fields mentioned on the right-hand side with the new fields, but leaves unmentioned fields unchanged. `transmute` is useful for transforming data into a new form. Compare the following two snippets of code. The second is identical to the first, with `transmute` replacing `select`.
###Code
missing_occupations = hl.set(['other', 'none'])
t = users.select(
cleaned_occupation = hl.if_else(missing_occupations.contains(users.occupation),
hl.null('str'),
users.occupation))
t.show()
missing_occupations = hl.set(['other', 'none'])
t = users.transmute(
cleaned_occupation = hl.if_else(missing_occupations.contains(users.occupation),
hl.null('str'),
users.occupation))
t.show()
###Output
_____no_output_____
###Markdown
Global Fields Finally, you can add global fields with [annotate_globals](https://hail.is/docs/0.2/hail.Table.html#hail.Table.annotate_globals). Globals are useful for storing metadata about a dataset or storing small data structures like sets and maps.
###Code
t = users.annotate_globals(cohort = 5, cloudable = hl.set(['sample1', 'sample10', 'sample15']))
t.describe()
t.cloudable
hl.eval(t.cloudable)
###Output
_____no_output_____
###Markdown
Filtering and Annotation Tutorial FilterYou can filter the rows of a table with [Table.filter](https://hail.is/docs/0.2/hail.Table.html#hail.Table.filter). This returns a table of those rows for which the expression evaluates to `True`.
###Code
import hail as hl
hl.utils.get_movie_lens('data/')
users = hl.read_table('data/users.ht')
users.filter(users.occupation == 'programmer').count()
###Output
_____no_output_____
###Markdown
We can also express this query in multiple ways using [aggregations](https://hail.is/docs/0.2/tutorials/04-aggregation.html):
###Code
users.aggregate(hl.agg.filter(users.occupation == 'programmer', hl.agg.count()))
users.aggregate(hl.agg.counter(users.occupation == 'programmer'))[True]
###Output
_____no_output_____
###Markdown
AnnotateYou can add new fields to a table with [annotate](https://hail.is/docs/0.2/hail.Table.html#hail.Table.annotate). As an example, let's create a new column called `cleaned_occupation` that replaces `occupation` entries labeled 'other' or 'none' with missing values.
###Code
missing_occupations = hl.set(['other', 'none'])
t = users.annotate(
cleaned_occupation = hl.cond(missing_occupations.contains(users.occupation),
hl.null('str'),
users.occupation))
t.show()
###Output
_____no_output_____
###Markdown
Compare this to what we had before:
###Code
users.show()
###Output
_____no_output_____
###Markdown
Note: `annotate` is functional: it doesn't mutate `users`, but returns a new table. This is also true of `filter`. In fact, all operations in Hail are functional.
###Code
users.describe()
###Output
_____no_output_____
###Markdown
Select and TransmuteThere are two other annotate methods: [select](https://hail.is/docs/0.2/hail.Table.html#hail.Table.select) and [transmute](https://hail.is/docs/0.2/hail.Table.html#hail.Table.transmute). `select` allows you to create new tables from old ones by selecting existing fields, or creating new ones. First, let's extract the `sex` and `occupation` fields:
###Code
users.select(users.sex, users.occupation).show()
###Output
_____no_output_____
###Markdown
We can also create a new field that stores the age relative to the average. Note that new fields *must* be assigned a name (in this case `mean_shifted_age`):
###Code
mean_age = round(users.aggregate(hl.agg.stats(users.age)).mean)
users.select(users.sex, users.occupation, mean_shifted_age = users.age - mean_age).show()
###Output
_____no_output_____
###Markdown
`transmute` replaces any fields mentioned on the right-hand side with the new fields, but leaves unmentioned fields unchanged. `transmute` is useful for transforming data into a new form. Compare the following two snippets of code. The second is identical to the first, with `transmute` replacing `select`.
###Code
missing_occupations = hl.set(['other', 'none'])
t = users.select(
cleaned_occupation = hl.cond(missing_occupations.contains(users.occupation),
hl.null('str'),
users.occupation))
t.show()
missing_occupations = hl.set(['other', 'none'])
t = users.transmute(
cleaned_occupation = hl.cond(missing_occupations.contains(users.occupation),
hl.null('str'),
users.occupation))
t.show()
###Output
_____no_output_____
###Markdown
Global Fields Finally, you can add global fields with [annotate_globals](https://hail.is/docs/0.2/hail.Table.html#hail.Table.annotate_globals). Globals are useful for storing metadata about a dataset or storing small data structures like sets and maps.
###Code
t = users.annotate_globals(cohort = 5, cloudable = hl.set(['sample1', 'sample10', 'sample15']))
t.describe()
t.cloudable
hl.eval(t.cloudable)
###Output
_____no_output_____
|
assignments/assignment9/Assignment9_Model1_CoQA_dataset.ipynb
|
###Markdown
Model v1
###Code
import torch
import torch.nn as nn
import torch.optim as optim
import pandas as pd
from torchtext.data import Example,Field, BucketIterator,Dataset
import spacy
import numpy as np
import random
import math
import time
SEED = 1234
random.seed(SEED)
np.random.seed(SEED)
torch.manual_seed(SEED)
torch.cuda.manual_seed(SEED)
torch.backends.cudnn.deterministic = True
pd.set_option('display.max_colwidth', None)  # -1 is deprecated in newer pandas; None shows full cell contents
train=pd.read_json('http://downloads.cs.stanford.edu/nlp/data/coqa/coqa-train-v1.0.json')
test=pd.read_json('http://downloads.cs.stanford.edu/nlp/data/coqa/coqa-dev-v1.0.json')
train['data'][0]['story']
import time
def data_prep(dt):
st=time.time()
story=[]
question=[]
answer=[]
for i in range(0,dt.data.shape[0]):
sty=dt.data[i]['story']
for j in range(0,len(dt.data[i]['questions'])):
story.append(sty)
question.append(dt.data[i]['questions'][j]['input_text'])
answer.append(dt.data[i]['answers'][j]['input_text'])
print(time.time()-st)
return(story,question,answer)
# import time
# def data_prep(dt):
# st=time.time()
# story=[]
# question=[]
# answer=[]
# for i in range(0,dt.data.shape[0]):
# sty=dt.data[i]['story']
# story.append(sty)
# q=[]
# a=[]
# for j in range(0,len(dt.data[i]['questions'])):
# q.append(dt.data[i]['questions'][j]['input_text'])
# q1=" ".join(q)
# a.append(dt.data[i]['answers'][j]['input_text'])
# a1=" ".join(a)
# question.append(q1)
# answer.append(a1)
# print(time.time()-st)
# return(story,question,answer)
st_train,question_train,answer_train=data_prep(train)
st_train[0]
question_train[0]
st_train,question_train,answer_train=data_prep(train)
train_data=pd.DataFrame(
{'story': st_train,
'question': question_train,
'answer': answer_train
})
train_data.shape
MAX_LEN = 100
train_data['story'] = train_data['story'].apply(lambda x: ' '.join(x.split(' ')[:MAX_LEN]))
train_data['story'][0]
len(train_data['story'][0])
train_data['sty_ques']=train_data['story']+" "+train_data['question']
train_data.head()
st_test,question_test,answer_test=data_prep(test)
test_data=pd.DataFrame(
{'story': st_test,
'question': question_test,
'answer': answer_test
})
test_data.shape
test_data['story'] = test_data['story'].apply(lambda x: " ".join(x.split(' ')[:MAX_LEN]))
test_data['sty_ques']=test_data['story']+" "+test_data['question']
test_data.head()
###Output
_____no_output_____
###Markdown
Obtaining only relevant fields from the dataset
###Code
train_cols=train_data[['sty_ques','answer']]
test_cols=test_data[['sty_ques','answer']]
train_cols.shape
test_cols.shape
###Output
_____no_output_____
###Markdown
Create our fields to process our data
###Code
question = Field(tokenize='spacy',
init_token='<sos>',
eos_token='<eos>',
lower=True)
answer = Field(tokenize='spacy',
init_token='<sos>',
eos_token='<eos>',
lower=True)
fields = [('questions', question),('answers',answer)]
fields
example = [Example.fromlist([train_cols.sty_ques[i],train_cols.answer[i]], fields) for i in range(train_cols.shape[0])]
questionDataset = Dataset(example, fields)
#(train, valid) = questionDataset.split(split_ratio=[0.80, 0.20], random_state=random.seed(SEED))
train=questionDataset
print(len(train))
print(vars(train.examples[0]))
print(vars(train.examples[0]))
example1 = [Example.fromlist([test_cols.sty_ques[i],test_cols.answer[i]], fields) for i in range(test_cols.shape[0])]
valid = Dataset(example1, fields)
len(valid)
MAX_VOCAB_SIZE = 25_000
question.build_vocab(train,max_size = MAX_VOCAB_SIZE,
vectors = "glove.6B.100d",unk_init = torch.Tensor.normal_,min_freq = 2)
answer.build_vocab(train,max_size = MAX_VOCAB_SIZE,
vectors = "glove.6B.100d",unk_init = torch.Tensor.normal_,min_freq = 2)
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
device
BATCH_SIZE = 64
train_iterator, valid_iterator = BucketIterator.splits(
(train, valid),
batch_size = BATCH_SIZE,
device = device,sort=False)
class Encoder(nn.Module):
def __init__(self, input_dim, emb_dim, hid_dim, dropout):
super().__init__()
self.hid_dim = hid_dim
self.embedding = nn.Embedding(input_dim, emb_dim) #no dropout as only one layer!
self.rnn = nn.GRU(emb_dim, hid_dim)
self.dropout = nn.Dropout(dropout)
def forward(self, src):
#src = [src len, batch size]
embedded = self.dropout(self.embedding(src))
#embedded = [src len, batch size, emb dim]
outputs, hidden = self.rnn(embedded) #no cell state!
#outputs = [src len, batch size, hid dim * n directions]
#hidden = [n layers * n directions, batch size, hid dim]
#outputs are always from the top hidden layer
return hidden
class Decoder(nn.Module):
def __init__(self, output_dim, emb_dim, hid_dim, dropout):
super().__init__()
self.hid_dim = hid_dim
self.output_dim = output_dim
self.embedding = nn.Embedding(output_dim, emb_dim)
self.rnn = nn.GRU(emb_dim + hid_dim, hid_dim)
self.fc_out = nn.Linear(emb_dim + hid_dim * 2, output_dim)
self.dropout = nn.Dropout(dropout)
def forward(self, input, hidden, context):
#input = [batch size]
#hidden = [n layers * n directions, batch size, hid dim]
#context = [n layers * n directions, batch size, hid dim]
#n layers and n directions in the decoder will both always be 1, therefore:
#hidden = [1, batch size, hid dim]
#context = [1, batch size, hid dim]
input = input.unsqueeze(0)
#input = [1, batch size]
embedded = self.dropout(self.embedding(input))
#embedded = [1, batch size, emb dim]
emb_con = torch.cat((embedded, context), dim = 2)
#emb_con = [1, batch size, emb dim + hid dim]
output, hidden = self.rnn(emb_con, hidden)
#output = [seq len, batch size, hid dim * n directions]
#hidden = [n layers * n directions, batch size, hid dim]
#seq len, n layers and n directions will always be 1 in the decoder, therefore:
#output = [1, batch size, hid dim]
#hidden = [1, batch size, hid dim]
output = torch.cat((embedded.squeeze(0), hidden.squeeze(0), context.squeeze(0)),
dim = 1)
#output = [batch size, emb dim + hid dim * 2]
prediction = self.fc_out(output)
#prediction = [batch size, output dim]
return prediction, hidden
class Seq2Seq(nn.Module):
def __init__(self, encoder, decoder, device):
super().__init__()
self.encoder = encoder
self.decoder = decoder
self.device = device
assert encoder.hid_dim == decoder.hid_dim, \
"Hidden dimensions of encoder and decoder must be equal!"
def forward(self, src, trg, teacher_forcing_ratio = 0.5):
#src = [src len, batch size]
#trg = [trg len, batch size]
#teacher_forcing_ratio is probability to use teacher forcing
#e.g. if teacher_forcing_ratio is 0.75 we use ground-truth inputs 75% of the time
batch_size = trg.shape[1]
trg_len = trg.shape[0]
trg_vocab_size = self.decoder.output_dim
#tensor to store decoder outputs
outputs = torch.zeros(trg_len, batch_size, trg_vocab_size).to(self.device)
#last hidden state of the encoder is the context
context = self.encoder(src)
#context also used as the initial hidden state of the decoder
hidden = context
#first input to the decoder is the <sos> tokens
input = trg[0,:]
for t in range(1, trg_len):
#insert input token embedding, previous hidden state and the context state
#receive output tensor (predictions) and new hidden state
output, hidden = self.decoder(input, hidden, context)
#place predictions in a tensor holding predictions for each token
outputs[t] = output
#decide if we are going to use teacher forcing or not
teacher_force = random.random() < teacher_forcing_ratio
#get the highest predicted token from our predictions
top1 = output.argmax(1)
#if teacher forcing, use actual next token as next input
#if not, use predicted token
input = trg[t] if teacher_force else top1
return outputs
INPUT_DIM = len(question.vocab)
OUTPUT_DIM = len(answer.vocab)
ENC_EMB_DIM = 256
DEC_EMB_DIM = 256
HID_DIM = 512
ENC_DROPOUT = 0.5
DEC_DROPOUT = 0.5
enc = Encoder(INPUT_DIM, ENC_EMB_DIM, HID_DIM, ENC_DROPOUT)
dec = Decoder(OUTPUT_DIM, DEC_EMB_DIM, HID_DIM, DEC_DROPOUT)
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = Seq2Seq(enc, dec, device).to(device)
print(INPUT_DIM)
print(OUTPUT_DIM)
def init_weights(m):
for name, param in m.named_parameters():
nn.init.normal_(param.data, mean=0, std=0.01)
model.apply(init_weights)
optimizer = optim.Adam(model.parameters())
TRG_PAD_IDX = answer.vocab.stoi[answer.pad_token]
criterion = nn.CrossEntropyLoss(ignore_index = TRG_PAD_IDX)
def train(model, iterator, optimizer, criterion, clip):
model.train()
epoch_loss = 0
for i, batch in enumerate(iterator):
src = batch.questions
trg = batch.answers
optimizer.zero_grad()
output = model(src, trg)
#trg = [trg len, batch size]
#output = [trg len, batch size, output dim]
output_dim = output.shape[-1]
output = output[1:].view(-1, output_dim)
trg = trg[1:].view(-1)
#trg = [(trg len - 1) * batch size]
#output = [(trg len - 1) * batch size, output dim]
loss = criterion(output, trg)
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), clip)
optimizer.step()
epoch_loss += loss.item()
return epoch_loss / len(iterator)
def evaluate(model, iterator, criterion):
model.eval()
epoch_loss = 0
with torch.no_grad():
for i, batch in enumerate(iterator):
src = batch.questions
trg = batch.answers
output = model(src, trg, 0) #turn off teacher forcing
#trg = [trg len, batch size]
#output = [trg len, batch size, output dim]
output_dim = output.shape[-1]
output = output[1:].view(-1, output_dim)
trg = trg[1:].view(-1)
#trg = [(trg len - 1) * batch size]
#output = [(trg len - 1) * batch size, output dim]
loss = criterion(output, trg)
epoch_loss += loss.item()
return epoch_loss / len(iterator)
def epoch_time(start_time, end_time):
elapsed_time = end_time - start_time
elapsed_mins = int(elapsed_time / 60)
elapsed_secs = int(elapsed_time - (elapsed_mins * 60))
return elapsed_mins, elapsed_secs
N_EPOCHS = 10
CLIP = 1
best_valid_loss = float('inf')
for epoch in range(N_EPOCHS):
start_time = time.time()
train_loss = train(model, train_iterator, optimizer, criterion, CLIP)
valid_loss = evaluate(model, valid_iterator, criterion)
end_time = time.time()
epoch_mins, epoch_secs = epoch_time(start_time, end_time)
if valid_loss < best_valid_loss:
best_valid_loss = valid_loss
torch.save(model.state_dict(), 'tut2-model.pt')
print(f'Epoch: {epoch+1:02} | Time: {epoch_mins}m {epoch_secs}s')
print(f'\tTrain Loss: {train_loss:.3f} | Train PPL: {math.exp(train_loss):7.3f}')
print(f'\t Val. Loss: {valid_loss:.3f} | Val. PPL: {math.exp(valid_loss):7.3f}')
###Output
100%|█████████▉| 399960/400000 [00:30<00:00, 26128.40it/s]
###Markdown
###Code
model.load_state_dict(torch.load('tut2-model.pt'))
# evaluate() returns a single loss value, and only train/valid iterators were
# built above, so score the reloaded model on the validation iterator
test_loss = evaluate(model, valid_iterator, criterion)
print(f'| Test Loss: {test_loss:.3f} | Test PPL: {math.exp(test_loss):7.3f} |')
###Output
_____no_output_____
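###Markdown
The cells above only report perplexity; a minimal greedy-decoding sketch (not part of the training code, and assuming the legacy torchtext ``Field`` API used above) shows how the trained model could generate an answer token by token:
###Code
def greedy_answer(model, src_sentence, max_len=30):
    # sketch: encode the (story + question) string once, then feed the
    # decoder its own argmax predictions until <eos> or max_len tokens
    model.eval()
    tokens = [question.init_token] + question.preprocess(src_sentence) + [question.eos_token]
    src = torch.LongTensor([question.vocab.stoi[t] for t in tokens]).unsqueeze(1).to(device)
    with torch.no_grad():
        context = model.encoder(src)
        hidden = context
        inp = torch.LongTensor([answer.vocab.stoi[answer.init_token]]).to(device)
        out_tokens = []
        for _ in range(max_len):
            output, hidden = model.decoder(inp, hidden, context)
            top1 = output.argmax(1)
            tok = answer.vocab.itos[top1.item()]
            if tok == answer.eos_token:
                break
            out_tokens.append(tok)
            inp = top1
    return ' '.join(out_tokens)

greedy_answer(model, test_cols.sty_ques[0])
###Output
_____no_output_____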
|
notebooks/01.0-custom-parsing/03.0-swamp-sparrow-custom-parsing.ipynb
|
###Markdown
Swamp sparrow custom parsing- This dataset has: - A number of CSVs with individual, element, and song information- The data was originally in [luscinia](https://rflachlan.github.io/Luscinia/) format (h2 db). I exported individual tables as CSVs so that I could import them into python. - Because the data is already well annotated, all this notebook does is save data as WAV and generate JSONs corresponding to the data. - Dataset origin: - https://figshare.com/articles/SwampSparrow_luscdb_zip/5625310
###Code
%load_ext autoreload
%autoreload 2
DATASET_ID = 'swamp_sparrow'
from avgn.utils.general import prepare_env
prepare_env()
###Output
The autoreload extension is already loaded. To reload it, use:
%reload_ext autoreload
env: CUDA_VISIBLE_DEVICES=GPU
###Markdown
Import relevant packages
###Code
from joblib import Parallel, delayed
from tqdm.autonotebook import tqdm
import pandas as pd
pd.options.display.max_columns = None
import librosa
from datetime import datetime
import numpy as np
import avgn
from avgn.custom_parsing.lachlan_swampsparrow import string2int16, annotate_bouts
from avgn.utils.paths import DATA_DIR
###Output
_____no_output_____
###Markdown
Load data in original format
###Code
# create a unique datetime identifier for the files output by this notebook
DT_ID = datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
DT_ID
DSLOC = avgn.utils.paths.Path('/mnt/cube/Datasets/swampsparrow/swampsparrow-xml/')
individual = pd.read_csv(DSLOC / 'INDIVIDUAL.csv')
individual[:3]
syllables = pd.read_csv(DSLOC / 'SYLLABLE.csv')
syllables[:3]
elements = pd.read_csv(DSLOC / 'ELEMENT.csv')
elements[:3]
songdata = pd.read_csv(DSLOC / 'SONGDATA.csv')
songdata[:3]
wavs = pd.read_csv(DSLOC / 'WAVS.csv')
wavs[:3]
###Output
_____no_output_____
###Markdown
generate wavs and textgrids
###Code
with Parallel(n_jobs=1, verbose=10) as parallel:
parallel(
delayed(annotate_bouts)(
row,
songdata[songdata.ID == row.ID].iloc[0],
individual[individual.ID == songdata[songdata.ID == row.ID].iloc[0].INDIVIDUALID].iloc[0],
elements[elements.SONGID == row.SONGID],
syllables[syllables.SONGID == row.SONGID],
DT_ID)
for idx, row in tqdm(wavs.iterrows(), total=len(wavs))
);
"""for idx, row in tqdm(wavs.iterrows(), total=len(wavs)):
annotate_bouts(
row,
songdata[songdata.ID == row.ID].iloc[0],
individual[individual.ID == songdata[songdata.ID == row.ID].iloc[0].INDIVIDUALID].iloc[0],
elements[elements.SONGID == row.SONGID],
syllables[syllables.SONGID == row.SONGID],
DT_ID
)"""
###Output
_____no_output_____
|
Imbalance-Dataset/.ipynb_checkpoints/Handle-imbalance-data-with-smote-and-ann-checkpoint.ipynb
|
###Markdown
Import package
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import torch
from torch import nn, optim
from jcopdl.callback import Callback, set_config
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
device
###Output
_____no_output_____
###Markdown
Load Data
###Code
df = pd.read_csv("activity_km_07_01.csv")
df.head()
###Output
_____no_output_____
###Markdown
Rename the columns that contain spaces
###Code
df.columns = df.columns.str.replace(" ","_")
df.head()
###Output
_____no_output_____
###Markdown
Check Missing Values
###Code
df.shape
print("Total Rows : ", df.shape[0])
print("Total Cols : ", df.shape[1])
print("Total Missing Values : ", df.isnull().sum().sum())
pd.DataFrame({
'Missing_values': df.isnull().sum(),
'Persentase_missing_values (%)': (df.isnull().sum()/len(df))*100
})
###Output
_____no_output_____
###Markdown
Check the target value counts
###Code
df.aksi.value_counts().to_frame()
###Output
_____no_output_____
###Markdown
Handle Missing Values
###Code
miss = pd.DataFrame({
'Missing_values': df.isnull().sum(),
'Persentase_missing_values (%)': (df.isnull().sum()/len(df))*100
})
miss
###Output
_____no_output_____
###Markdown
Suhu (temperature)
###Code
mean_temp = float(df.suhu.mean())
mean_temp
df.suhu.fillna(value=mean_temp, inplace=True)
###Output
_____no_output_____
###Markdown
cahaya (light)
###Code
mode_light = df.cahaya.mode()[0]
mode_light
df.cahaya.fillna(value=mode_light, inplace=True)
###Output
_____no_output_____
###Markdown
PH
###Code
mean_ph = float(df.PH.mean())
mean_ph
df.PH.fillna(value=mean_ph, inplace=True)
###Output
_____no_output_____
###Markdown
PPM
###Code
mean_ppm = float(df.PPM.mean())
mean_ppm
df.PPM.fillna(value=mean_ppm, inplace=True)
miss = pd.DataFrame({
'Missing_values': df.isnull().sum(),
'Persentase_missing_values (%)': (df.isnull().sum()/len(df))*100
})
miss
###Output
_____no_output_____
###Markdown
Handle & detect outliers
###Code
num_cols = np.array(['PH', 'suhu', 'PPM', 'tinggi_air']).reshape(2,2)
nrows=2
ncols=2
fig, ax = plt.subplots(nrows=nrows, ncols=ncols, figsize=(20,10))
for i in range(nrows):
for j in range(ncols):
ax[i][j].set_title(num_cols[i][j])
df[[num_cols[i][j]]].boxplot(ax=ax[i][j])
plt.show()
###Output
_____no_output_____
###Markdown
Suhu `keep only rows where suhu is greater than zero`
###Code
df[df.suhu < 0]
df = df[df.suhu > 0]
###Output
_____no_output_____
###Markdown
Tinggi air
###Code
df[df.tinggi_air < 8000]['tinggi_air'].max()
###Output
_____no_output_____
###Markdown
`keep values below 700`
###Code
df = df[df.tinggi_air < 700]
###Output
_____no_output_____
###Markdown
Visualize again
###Code
nrows=2
ncols=2
fig, ax = plt.subplots(nrows=nrows, ncols=ncols, figsize=(20,10))
for i in range(nrows):
for j in range(ncols):
ax[i][j].set_title(num_cols[i][j])
df[[num_cols[i][j]]].boxplot(ax=ax[i][j])
plt.show()
###Output
_____no_output_____
###Markdown
`For the PH feature we do not treat these values as outliers, since they still fall within the valid pH range (1-14)` Check for imbalanced data
###Code
plt.figure(figsize=(20,6))
sns.set_context('notebook', font_scale=1.5)
sns.countplot('aksi',data=df, palette="Set1")
plt.annotate(''+str(df['aksi'][df['aksi']=='Hidupkan Lampu dan Pompa nutrisi TDS'].count()), xy=(-0.2, 250), xytext=(-0.03, 40), size=15, color='r')
plt.annotate(''+str(df['aksi'][df['aksi']=='Tidak melakukan apa-apa'].count()), xy=(-0.2, 100), xytext=(0.97, 200), size=15, color='w')
plt.annotate(''+str(df['aksi'][df['aksi']=='Hidupkan Lampu'].count()), xy=(-0.2, 100), xytext=(1.97, 30), size=15, color='w')
plt.annotate(''+str(df['aksi'][df['aksi']=='Hidupkan Pompa nutrisi TDS'].count()), xy=(-0.2, 100), xytext=(2.97, 30), size=15, color='purple')
plt.tight_layout()
plt.show()
df.intensitas_air.unique()
###Output
_____no_output_____
###Markdown
Encoding
###Code
df_backup = df.copy(deep=True)
# applymap with a dict for label encoding
mapping = {'Rendah sekali': 1,
'Rendah': 2,
'Cukup': 3,
'Tinggi': 4}
df.intensitas_air = df[['intensitas_air']].applymap(mapping.get)
# # one hot encoding cahaya
# cahaya = pd.get_dummies(df.cahaya, prefix='cahaya_')
# df = pd.concat([df, cahaya], axis=1)
# df.drop(columns='cahaya', inplace=True)
# df.head()
df.cahaya.unique()
# applymap with a dict for label encoding
mapping2 = {'Tidak ada': 0,
'Ada': 1}
df.cahaya = df[['cahaya']].applymap(mapping2.get)
# applymap with a dict for label encoding
mapping3 = {'Tidak melakukan apa-apa': 1,
'Hidupkan Lampu': 2,
'Hidupkan Pompa nutrisi TDS': 3,
'Hidupkan Lampu dan Pompa nutrisi TDS': 4}
df.aksi = df[['aksi']].applymap(mapping3.get)
df.head()
df.corr()['aksi'].to_frame()
X = df.drop(columns="aksi")
y = df.aksi
###Output
_____no_output_____
###Markdown
Handle the imbalance with SMOTE
###Code
from imblearn.over_sampling import SMOTE
sm = SMOTE(random_state=42)
X_sm, y_sm = sm.fit_resample(X, y)
print(f'''Shape of X before SMOTE: {X.shape}
Shape of X after SMOTE: {X_sm.shape}''')
X_sm
df_smote = pd.concat([X_sm, y_sm], axis=1)
df_smote.head()
df_smote.isnull().sum()
###Output
_____no_output_____
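###Markdown
A quick check that SMOTE balanced the classes (a sketch; the `concat` above already relies on `fit_resample` returning pandas objects):
###Code
print(y.value_counts())
print(y_sm.value_counts())
###Output
_____no_output_____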
###Markdown
Visualize
###Code
df[df['aksi'] == 4].shape
df_smote[df_smote['aksi'] == 4].shape
###Output
_____no_output_____
###Markdown
Convert the numeric codes back to categories for visualization
###Code
df_smote['intensitas_air'].unique()
{value: key for key, value in mapping.items()}
df_smote['intensitas_air'] = df_smote[['intensitas_air']].applymap({value: key for key, value in mapping.items()}.get)
df_smote['cahaya'] = df_smote[['cahaya']].applymap({value: key for key, value in mapping2.items()}.get)
df_smote['aksi'] = df_smote[['aksi']].applymap({value: key for key, value in mapping3.items()}.get)
df_smote.head()
###Output
_____no_output_____
###Markdown
PH
###Code
import matplotlib.colors as mcolors
plt.figure(figsize=(25,9))
df_smote.groupby('aksi')['PH'].mean().plot(kind='bar',color=['tab:blue', 'tab:orange', 'tab:green', 'tab:red'])
plt.xticks(rotation=0)
plt.title("Rata-rata PH")
plt.tight_layout()
plt.show()
ph_aksi = df_smote.groupby('aksi')['PH'].mean().reset_index()
ph_aksi
ph_aksi.iloc[0]['PH']
plt.figure(figsize=(25,9))
splot = sns.barplot(x='aksi', y='PH', data=ph_aksi)
plt.title("Rata-rata PH")
# plt.annotate(str(ph_aksi.iloc[0]['PH']), xy=(-0.4, 2), xytext=(-0.29, 2), size=20, color='w')
# plt.annotate(str(ph_aksi.iloc[1]['PH']), xy=(0.7, 3), xytext=(0.7, 3), size=20, color='w')
# plt.annotate(str(ph_aksi.iloc[2]['PH']), xy=(1.7, 6), xytext=(1.7, 6), size=20, color='w')
# plt.annotate(str(ph_aksi.iloc[3]['PH']), xy=(1.7, 3), xytext=(2.7, 3), size=20, color='w')
for p in splot.patches:
splot.annotate(format(p.get_height(), '.1f'), (p.get_x() + p.get_width() / 2., p.get_height()), ha = 'center', va = 'center', xytext = (0, 10), textcoords = 'offset points')
plt.show()
###Output
_____no_output_____
###Markdown
Intensitas Air
###Code
df_smote.intensitas_air.unique()
plt.figure(figsize=(25,10))
splot = sns.countplot(x='intensitas_air', hue='aksi', data=df_smote)
for p in splot.patches:
splot.annotate(format(p.get_height(), '.1f'), (p.get_x() + p.get_width() / 2., p.get_height()), ha = 'center', va = 'center', xytext = (0, 10), textcoords = 'offset points')
plt.title("Value count intensitas air")
plt.legend(bbox_to_anchor=(0.9,1))
plt.show()
###Output
_____no_output_____
###Markdown
Cahaya
###Code
plt.figure(figsize=(25,10))
splot = sns.countplot(x='cahaya', hue='aksi', data=df_smote)
for p in splot.patches:
splot.annotate(format(p.get_height(), '.1f'), (p.get_x() + p.get_width() / 2., p.get_height()), ha = 'center', va = 'center', xytext = (0, 10), textcoords = 'offset points')
plt.title("Value count cahaya")
plt.legend(bbox_to_anchor=(1,1))
plt.show()
###Output
_____no_output_____
###Markdown
Suhu
###Code
plt.figure(figsize=(25,9))
splot = sns.barplot(x='aksi', y='suhu', data=df_smote.groupby('aksi')['suhu'].mean().reset_index())
for p in splot.patches:
splot.annotate(format(p.get_height(), '.1f'), (p.get_x() + p.get_width() / 2., p.get_height()), ha = 'center', va = 'center', xytext = (0, 10), textcoords = 'offset points')
plt.title("Rata-rata suhu")
plt.show()
###Output
_____no_output_____
###Markdown
PPM
###Code
plt.figure(figsize=(25,9))
splot = sns.barplot(x='aksi', y='PPM', data=df_smote.groupby('aksi')['PPM'].mean().reset_index())
for p in splot.patches:
splot.annotate(format(p.get_height(), '.1f'), (p.get_x() + p.get_width() / 2., p.get_height()), ha = 'center', va = 'center', xytext = (0, 10), textcoords = 'offset points')
plt.title("Rata-rata PPM")
plt.show()
###Output
_____no_output_____
###Markdown
Tinggi air
###Code
plt.figure(figsize=(25,10))
splot = sns.barplot(x='aksi', y='tinggi_air', data=df_smote.groupby('aksi')['tinggi_air'].mean().reset_index())
for p in splot.patches:
splot.annotate(format(p.get_height(), '.1f'), (p.get_x() + p.get_width() / 2., p.get_height()), ha = 'center', va = 'center', xytext = (0, 10), textcoords = 'offset points')
plt.title("Rata-rata tinggi_air")
plt.show()
df_smote.isnull().sum()
###Output
_____no_output_____
###Markdown
Encode again
###Code
df_smote['intensitas_air'] = df_smote[['intensitas_air']].applymap(mapping.get)
df_smote['cahaya'] = df_smote[['cahaya']].applymap(mapping2.get)
# df_smote['aksi'] = df_smote[['aksi']].applymap(mapping3.get)
# one-hot encode aksi
aksi = pd.get_dummies(df_smote.aksi, prefix='aksi')
df_smote = pd.concat([df_smote, aksi], axis=1)
df_smote.drop(columns='aksi', inplace=True)
df_smote.head()
df_smote.isnull().sum()
###Output
_____no_output_____
###Markdown
Build ANN model
###Code
import tensorflow as tf
from tensorflow import keras
from sklearn.model_selection import train_test_split
feature = list(df_smote.columns[:6])
target = list(df_smote.columns[6:])
target
df_smote.isnull().sum()
X = df_smote[feature]
y = df_smote[target]
from sklearn.preprocessing import RobustScaler
scaler = RobustScaler()
# transform data
X_scaled = scaler.fit_transform(X)
print(X_scaled)
X_train, X_test, y_train, y_test = train_test_split(X_scaled, y, test_size=0.2, stratify=y, random_state=42)
X_train.shape, X_test.shape, y_train.shape, y_test.shape
model = keras.Sequential([
keras.layers.Dense(units=8, input_shape=(6,)),
keras.layers.Dense(units=16, activation='relu'),
keras.layers.Dense(units=4, activation='softmax')
])
# compile the model
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),
loss=tf.keras.losses.categorical_crossentropy,
metrics=['accuracy'])
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint
callback=EarlyStopping(monitor="val_loss",patience=50, verbose=1, mode='min')
check_point = ModelCheckpoint('best_model.h5', monitor='val_accuracy', mode='max', verbose=1, save_best_only=True)
# train the model
history = model.fit(X_train, y_train, epochs=100,
callbacks=[callback, check_point],
validation_split=0.1)
from tensorflow.keras.models import load_model
# load the best model back from file for reuse
best_model = load_model('best_model.h5')
loss, acc = best_model.evaluate(X_test, y_test, verbose=1)
print('Test accuracy (best model): %.3f' % acc)
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'], c='r')
plt.show()
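# Also worth plotting the accuracy curves; these history keys exist
# because compile() was given metrics=['accuracy'] (a quick sketch).
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'], c='r')
plt.show()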
###Output
_____no_output_____
|
nlp-2019-autumn/lecture01/assignment-pattern-match.ipynb
|
###Markdown
A pattern-matching chatbot Pattern Match Whether a machine can hold a conversation has long been an important mark of machine intelligence. Alan Turing proposed a way to test a machine's intelligence: examine whether a human can tell, from the conversation alone, whether the other party is a machine or a real person; if humans cannot tell them apart, we call the machine "intelligent". This test later became known as the "Turing test", and the story was adapted into a famous film, The Imitation Game. Since Turing used this as the mark of machine intelligence, the task is clearly complex. From the 1960s onward, many scientists have tried to solve it from different angles, and to this day only parts of the problem have been solved. There are many ways to build a chatbot today; in this assignment we provide a quick, template-based way to configure one.
###Code
def is_variable(pattern):
    return pattern.startswith('?') and all(s.isalpha() for s in pattern[1:])
def pattern_match(pattern, saying):
if not pattern or not saying: return []
    if is_variable(pattern[0]):
return [(pattern[0], saying[0])] + pattern_match(pattern[1:], saying[1:])
else:
if pattern[0] != saying[0]:
return []
else:
return pattern_match(pattern[1:], saying[1:])
pattern_match('?A + ?B = ?C'.split(), '3 + 2 = 5'.split())
def substitute(rule, parsed_rules):
if not rule:
return []
return [parsed_rules.get(rule[0], rule[0])] + substitute(rule[1:], parsed_rules)
got_patterns = pattern_match('I want ?X'.split(), 'I want Iphone'.split())
' '.join(substitute('What if you mean if you got ?X'.split(), dict(got_patterns)))
defined_patterns = {
"I need ?X": ["Image you will get ?X soon", "Why do you need ?X ?"],
"My ?X told me something": ["Talk about more about your ?X", "How do you think about your ?X ?"]
}
import random
def get_response(saying, rules=defined_patterns):
for pattern in rules:
got_patterns = pattern_match(pattern.split(), saying.split())
if got_patterns:
return ' '.join(substitute(random.choice(rules[pattern]).split(), dict(got_patterns)))
else:
return "I don't konwn how to response you."
print(get_response('I need Iphone'))
print(get_response('My Mother told me something'))
print(get_response("It's fine day"))
###Output
Imagine you will get Iphone soon
How do you think about your Mother ?
I don't know how to respond to you.
###Markdown
Segment Match The version above can already hold a basic conversation, but our patterns match token by token: "I need iPhone" matches "I need ?X", while "I need an iPhone" does not. To solve this, we introduce a new variable type "?\*X"; the extra star (\*) means the variable can match multiple tokens
###Code
def is_pattern_segment(pattern):
return pattern.startswith('?*') and all(s.isalpha() for s in pattern[2:])
is_pattern_segment('?*Hello')
fail = [True, None]
def is_match(rest, saying):
if not rest or not saying:
return True
if not all(a.isalpha() for a in rest[0]):
return True
if rest[0] != saying[0]:
return False
return is_match(rest[1:], saying[1:])
def segment_match(pattern, saying):
seg_pattern, rest = pattern[0], pattern[1:]
seg_pattern = seg_pattern.replace('?*', '?')
if not rest: return (seg_pattern, saying), len(saying)
for i, token in enumerate(saying):
if rest[0] == token and is_match(rest[1:], saying[(i+1):]):
return (seg_pattern, saying[:i]), i
return (seg_pattern, saying), len(saying)
def pattern_match_with_seg(pattern, saying):
if not pattern or not saying: return []
    if is_variable(pattern[0]):
return [(pattern[0], saying[0])] + pattern_match_with_seg(pattern[1:] ,saying[1:])
elif is_pattern_segment(pattern[0]):
match, index = segment_match(pattern, saying)
return [match] + pattern_match_with_seg(pattern[1:], saying[index:])
elif pattern[0] == saying[0]:
return pattern_match_with_seg(pattern[1:], saying[1:])
else:
return [True, None]
pattern_match_with_seg('?*P is very good!'.split(), 'My dog is very good!'.split())
response_pair = {
'I need ?X': [
"Why do you neeed ?X"
],
"I dont like my ?X": ["What bad things did ?X do for you?"]
}
def pattern2dict(patterns):
return {k: ' '.join(v) if isinstance(v, list) else v for k, v in patterns}
got_patterns = pattern_match_with_seg('I need ?*X'.split(), 'I need an ipad and an macbookpro'.split())
print(' '.join(substitute("Why do you neeed ?X".split(), pattern2dict(got_patterns))))
print(' '.join(substitute('What bad things did ?X for you?'.split(), pattern2dict(
pattern_match_with_seg('I dont like my ?X'.split(), 'I dont like my Boss'.split())))))
# ("?*X hello ?*Y", "Hi, how do you do")
' '.join(substitute('Hi, how do you do'.split(),
                    pattern2dict(pattern_match_with_seg('?*X hello ?*Y'.split(), 'hello Mike'.split()))))
###Output
_____no_output_____
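###Markdown
A standalone check of segment_match (a sketch, assuming the definitions above have run): the segment variable absorbs tokens only until the rest of the pattern can match.
###Code
segment_match('?*X hello'.split(), 'hi there hello'.split())
# expected: (('?X', ['hi', 'there']), 2)
###Output
_____no_output_____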
###Markdown
Assignment Problem 1: Write a program, get_response(saying, response_rules), whose input is a string plus the rules we defined (for example the patterns written above) and whose output is a reply.
###Code
import random
defined_patterns = {
"I need ?*x": ["Image you will get ?x soon", "Why do you need ?x ?"],
"My ?*X told me something": ["Talk about more about your ?X", "How do you think about your ?X ?"],
}
def get_response(saying, rules=defined_patterns):
for pattern in rules:
got_patterns = pattern_match_with_seg(pattern.split(), saying.split())
print(got_patterns)
if got_patterns:
return ' '.join(substitute(random.choice(rules[pattern]).split(), pattern2dict(got_patterns)))
else:
return "I don't konwn how to response you."
get_response('I need an Ipad')
###Output
[('?x', ['an', 'Ipad'])]
###Markdown
Problem 2: Rewrite the program above so that it supports Chinese input. *Hint*: you may need the jieba word segmenter
###Code
rule_responses = {
'?*x hello ?*y': ['How do you do'],
'?*x I want ?*y': ['what would it mean if you got ?y', 'Why do you want ?y', 'Suppose you got ?y soon'],
'?*x if ?*y': ['Do you really think its likely that ?y', 'Do you wish that ?y', 'What do you think about ?y', 'Really-- if ?y'],
'?*x no ?*y': ['why not?', 'You are being a negative', 'Are you saying \'No\' just to be negative?'],
'?*x I was ?*y': ['Were you really', 'Perhaps I already knew you were ?y', 'Why do you tell me you were ?y now?'],
'?*x I feel ?*y': ['Do you often feel ?y ?', 'What other feelings do you have?'],
'?*x你好?*y': ['你好呀', '请告诉我你的问题'],
'?*x我想?*y': ['你觉得?y有什么意义呢?', '为什么你想?y', '你可以想想你很快就可以?y了'],
'?*x我想要?*y': ['?x想问你,你觉得?y有什么意义呢?', '为什么你想?y', '?x觉得... 你可以想想你很快就可以有?y了', '你看?x像?y不', '我看你就像?y'],
'?*x喜欢?*y': ['喜欢?y的哪里?', '?y有什么好的呢?', '你想要?y吗?'],
'?*x讨厌?*y': ['?y怎么会那么讨厌呢?', '讨厌?y的哪里?', '?y有什么不好呢?', '你不想要?y吗?'],
'?*xAI?*y': ['你为什么要提AI的事情?', '你为什么觉得AI要解决你的问题?'],
'?*x机器人?*y': ['你为什么要提机器人的事情?', '你为什么觉得机器人要解决你的问题?'],
'?*x对不起?*y': ['不用道歉', '你为什么觉得你需要道歉呢?'],
'?*x我记得?*y': ['你经常会想起这个吗?', '除了?y你还会想起什么吗?', '你为什么和我提起?y'],
'?*x如果?*y': ['你真的觉得?y会发生吗?', '你希望?y吗?', '真的吗?如果?y的话', '关于?y你怎么想?'],
'?*x我?*z梦见?*y':['真的吗? --- ?y', '你在醒着的时候,以前想象过?y吗?', '你以前梦见过?y吗'],
'?*x妈妈?*y': ['你家里除了?y还有谁?', '嗯嗯,多说一点和你家里有关系的', '她对你影响很大吗?'],
'?*x爸爸?*y': ['你家里除了?y还有谁?', '嗯嗯,多说一点和你家里有关系的', '他对你影响很大吗?', '每当你想起你爸爸的时候, 你还会想起其他的吗?'],
'?*x我愿意?*y': ['我可以帮你?y吗?', '你可以解释一下,为什么想?y'],
'?*x我很难过,因为?*y': ['我听到你这么说, 也很难过', '?y不应该让你这么难过的'],
'?*x难过?*y': ['我听到你这么说, 也很难过',
'不应该让你这么难过的,你觉得你拥有什么,就会不难过?',
'你觉得事情变成什么样,你就不难过了?'],
'?*x就像?*y': ['你觉得?x和?y有什么相似性?', '?x和?y真的有关系吗?', '怎么说?'],
'?*x和?*y都?*z': ['你觉得?z有什么问题吗?', '?z会对你有什么影响呢?'],
'?*x和?*y一样?*z': ['你觉得?z有什么问题吗?', '?z会对你有什么影响呢?'],
'?*x我是?*y': ['真的吗?', '?x想告诉你,或许我早就知道你是?y', '你为什么现在才告诉我你是?y'],
'?*x我是?*y吗': ['如果你是?y会怎么样呢?', '你觉得你是?y吗', '如果你是?y,那一位着什么?'],
'?*x你是?*y吗': ['你为什么会对我是不是?y感兴趣?', '那你希望我是?y吗', '你要是喜欢, 我就会是?y'],
'?*x你是?*y' : ['为什么你觉得我是?y'],
'?*x因为?*y' : ['?y是真正的原因吗?', '你觉得会有其他原因吗?'],
'?*x我不能?*y': ['你或许现在就能?*y', '如果你能?*y,会怎样呢?'],
'?*x我觉得?*y': ['你经常这样感觉吗?', '除了到这个,你还有什么其他的感觉吗?'],
'?*x我?*y你?*z': ['其实很有可能我们互相?y'],
'?*x你为什么不?*y': ['你自己为什么不?y', '你觉得我不会?y', '等我心情好了,我就?y'],
'?*x好的?*y': ['好的', '你是一个很正能量的人'],
'?*x嗯嗯?*y': ['好的', '你是一个很正能量的人'],
'?*x不嘛?*y': ['为什么不?', '你有一点负能量', '你说 不,是想表达不想的意思吗?'],
'?*x不要?*y': ['为什么不?', '你有一点负能量', '你说 不,是想表达不想的意思吗?'],
'?*x有些人?*y': ['具体是哪些人呢?'],
'?*x有的人?*y': ['具体是哪些人呢?'],
'?*x某些人?*y': ['具体是哪些人呢?'],
'?*x每个人?*y': ['我确定不是人人都是', '你能想到一点特殊情况吗?', '例如谁?', '你看到的其实只是一小部分人'],
'?*x所有人?*y': ['我确定不是人人都是', '你能想到一点特殊情况吗?', '例如谁?', '你看到的其实只是一小部分人'],
'?*x总是?*y': ['你能想到一些其他情况吗?', '例如什么时候?', '你具体是说哪一次?', '真的---总是吗?'],
'?*x一直?*y': ['你能想到一些其他情况吗?', '例如什么时候?', '你具体是说哪一次?', '真的---总是吗?'],
'?*x或许?*y': ['你看起来不太确定'],
'?*x可能?*y': ['你看起来不太确定'],
'?*x电视很不错': ['嗯嗯,?x是狠不错', '?x我也很喜欢'],
'?*x他们是?*y吗?': ['你觉得他们可能不是?y?'],
'?*x': ['很有趣', '请继续', '我不太确定我很理解你说的, 能稍微详细解释一下吗?']
}
import os
import jieba
import random
fail = (True, None)
def is_match(rest, saying):
if not rest or not saying:
return True
    # stop here if the next pattern token is itself a variable (contains '?')
if not all(a.isalpha() for a in rest[0]):
return True
if rest[0] != saying[0]:
return False
return is_match(rest[1:], saying[1:])
def segment_match(pattern, saying):
seg_pattern, rest = pattern[0], pattern[1:]
seg_pattern = seg_pattern.replace('?*', '?')
if not rest: return (seg_pattern, saying), len(saying)
for i, token in enumerate(saying):
if rest[0] == token and is_match(rest[1:], saying[(i+1):]):
return (seg_pattern, saying[:i]), i
return (seg_pattern, saying), len(saying)
def pattern_match_with_seg(pattern, saying):
if all([not pattern, not saying]): return []
if any([not pattern, not saying]): return [fail]
    if is_variable(pattern[0]):
return [(pattern[0], saying[0])] + pattern_match_with_seg(pattern[1:] ,saying[1:])
elif is_pattern_segment(pattern[0]):
match, index = segment_match(pattern, saying)
return [match] + pattern_match_with_seg(pattern[1:], saying[index:])
elif pattern[0] == saying[0]:
return pattern_match_with_seg(pattern[1:], saying[1:])
else:
return [fail]
def is_pattern_match(got_patterns):
if not got_patterns:
return False
for k, v in got_patterns:
if k is True and v is None:
return False
return True
def merge_token(tokens):
if len(tokens) < 2:
return tokens
if tokens[0] == '?' and tokens[1].isalpha():
return [''.join(tokens[:2])] + merge_token(tokens[2:])
if len(tokens) > 2 and tokens[0] == '?' and tokens[1] == '*' and tokens[2].isalpha():
return [''.join(tokens[:3])] + merge_token(tokens[3:])
return [tokens[0]] + merge_token(tokens[1:])
def is_chinese(string):
    # True if the string contains at least one CJK character
    # (the original returned after checking only the first character)
    return any('\u4e00' <= ch <= '\u9fff' for ch in string)
def cut_chinese(string):
tokens = list(jieba.cut(string))
return merge_token(tokens)
def cut(string):
if is_chinese(string):
return cut_chinese(string)
else:
return string.split()
def get_response(saying, rules=rule_responses):
for pattern in rules:
got_patterns = pattern_match_with_seg(cut(pattern), cut(saying))
# print(pattern, got_patterns)
if is_pattern_match(got_patterns):
tokens = substitute(cut(random.choice(rules[pattern])), pattern2dict(got_patterns))
if is_chinese(saying):
return ''.join(tokens)
else:
return ' '.join(tokens)
else:
return "I don't konwn how to response you."
print(get_response('Mike hello John'))
print(get_response('Mike John I want an Ipad'))
print(get_response('亮剑电视很不错'))
print(get_response('你好'))
cut('?*x所有人?y')
###Output
_____no_output_____
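###Markdown
A minimal interactive loop on top of get_response (a sketch; left commented out so the notebook still runs non-interactively):
###Code
# while True:
#     saying = input('> ')
#     if saying.strip() in ('q', 'quit'):
#         break
#     print(get_response(saying))
###Output
_____no_output_____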
|
EDA/TEAM-1_insurance_case_study.ipynb
|
###Markdown
Importing GDrive
###Code
from google.colab import drive
drive.mount('/gdrive')
###Output
Go to this URL in a browser: https://accounts.google.com/o/oauth2/auth?client_id=947318989803-6bn6qk8qdgf4n4g3pfee6491hc0brc4i.apps.googleusercontent.com&redirect_uri=urn%3aietf%3awg%3aoauth%3a2.0%3aoob&scope=email%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdocs.test%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdrive%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdrive.photos.readonly%20https%3a%2f%2fwww.googleapis.com%2fauth%2fpeopleapi.readonly&response_type=code
Enter your authorization code:
··········
Mounted at /gdrive
###Markdown
Importing libraries
###Code
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Defining path
###Code
path = "/gdrive/My Drive/ML:March2020/Assignments/data/"
data=pd.read_csv(path+"Insurance case study.csv")
data.head()
###Output
_____no_output_____
###Markdown
Knowing Data
###Code
data.dtypes
data.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 60 entries, 0 to 59
Data columns (total 9 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Respon-dent 60 non-null int64
1 Concept Rating 60 non-null int64
2 Current Insurance Supplier 60 non-null object
3 Age 60 non-null int64
4 Marital Status 60 non-null object
5 Number of Cars 60 non-null int64
6 Average Age of Car(s) 60 non-null float64
7 Number of Trips 60 non-null int64
8 Unnamed: 8 0 non-null float64
dtypes: float64(2), int64(5), object(2)
memory usage: 4.3+ KB
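###Markdown
The `Unnamed: 8` column has 0 non-null values, so it carries no information; a minimal sketch of dropping it before encoding:
###Code
data = data.drop(columns=['Unnamed: 8'])
###Output
_____no_output_____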
###Markdown
Label encoding categorical data
###Code
from sklearn.preprocessing import LabelEncoder
CurrentInsuranceSupplier_labelencoder = LabelEncoder()
data["Current Insurance Supplier "] = CurrentInsuranceSupplier_labelencoder.fit_transform(data["Current Insurance Supplier "])
MaritalStatus_labelencoder = LabelEncoder()
data["Marital Status "] =MaritalStatus_labelencoder.fit_transform(data["Marital Status "])
data.head(10)
data.describe()
###Output
_____no_output_____
###Markdown
Various graphs for understanding
###Code
sns.distplot(data['Concept Rating '],kde=True) # distribution plot
sns.barplot(x='Age',y="Respon-dent ",data=data) #Bar plot
sns.barplot(x='Average Age of Car(s)',y='Number of Cars',data=data) #Bar plot
sns.distplot(data['Number of Trips'],kde=True) # distribution plot
sns.scatterplot(x="Average Age of Car(s)", y="Number of Trips", data=data) #scatter plot
sns.scatterplot(x="Number of Cars", y="Number of Trips", data=data) #scatter plot
sns.boxplot(x="Current Insurance Supplier ", y="Concept Rating ",data=data) #Box plot
sns.boxplot(x="Current Insurance Supplier ", y="Age",data=data) #Box plot
sns.boxplot(x="Marital Status ", y="Age",data=data) #Box plot
sns.catplot(x="Concept Rating ", y="Age",col="Current Insurance Supplier ", kind = 'bar',data=data, palette = "rainbow") #Categorical plot
sns.catplot(x="Average Age of Car(s)", y="Number of Trips",col="Number of Cars", kind = 'bar',data=data, palette = "rainbow") #Categorical plot
sns.catplot(x="Number of Cars", y="Number of Trips",col="Current Insurance Supplier ", kind = 'bar',data=data, palette = "rainbow") #Categorical plot
sns.catplot(x="Number of Cars", y="Number of Trips",col="Marital Status ", kind = 'bar',data=data, palette = "rainbow") #Categorical plot
sns.catplot(x="Age", y="Number of Cars",col="Marital Status ", kind = 'bar',data=data, palette = "rainbow") #Categorical plot
sns.catplot(x="Concept Rating ", y="Number of Trips",col="Current Insurance Supplier ", kind = 'bar',data=data, palette = "rainbow") #Categorical plot
###Output
_____no_output_____
|
_notebooks/unfinished/standard-neural-network.ipynb
|
###Markdown
refer to https://towardsdatascience.com/coding-a-2-layer-neural-network-from-scratch-in-python-4dd022d19fd2
###Code
class dlnet:
def __init__(self, x, y):
self.X=x
self.Y=y
self.Yh=np.zeros((1,self.Y.shape[1]))
self.L=2
self.dims = [9, 15, 1]
self.param = {}
self.ch = {}
self.grad = {}
self.loss = []
self.lr=0.003
self.sam = self.Y.shape[1]
###Output
_____no_output_____
###Code
def nInit(self):
np.random.seed(1)
self.param['W1'] = np.random.randn(self.dims[1], self.dims[0]) / np.sqrt(self.dims[0])
self.param['b1'] = np.zeros((self.dims[1], 1))
self.param['W2'] = np.random.randn(self.dims[2], self.dims[1]) / np.sqrt(self.dims[1])
self.param['b2'] = np.zeros((self.dims[2], 1))
return
def Sigmoid(Z):
return 1/(1+np.exp(-Z))
def Relu(Z):
return np.maximum(0,Z)
def forward(self):
Z1 = self.param['W1'].dot(self.X) + self.param['b1']
A1 = Relu(Z1)
self.ch['Z1'],self.ch['A1']=Z1,A1
Z2 = self.param['W2'].dot(A1) + self.param['b2']
A2 = Sigmoid(Z2)
self.ch['Z2'],self.ch['A2']=Z2,A2
self.Yh=A2
loss=self.nloss(A2)
return self.Yh, loss
def nloss(self,Yh):
loss = (1./self.sam) * (-np.dot(self.Y,np.log(Yh).T) - np.dot(1-self.Y, np.log(1-Yh).T))
return loss
###Output
_____no_output_____
###Code
def dRelu(x):
x[x<=0] = 0
x[x>0] = 1
return x
def dSigmoid(Z):
s = 1/(1+np.exp(-Z))
dZ = s * (1-s)
return dZ
def backward(self):
dLoss_Yh = - (np.divide(self.Y, self.Yh ) - np.divide(1 - self.Y, 1 - self.Yh))
dLoss_Z2 = dLoss_Yh * dSigmoid(self.ch['Z2'])
dLoss_A1 = np.dot(self.param["W2"].T,dLoss_Z2)
dLoss_W2 = 1./self.ch['A1'].shape[1] * np.dot(dLoss_Z2,self.ch['A1'].T)
dLoss_b2 = 1./self.ch['A1'].shape[1] * np.dot(dLoss_Z2, np.ones([dLoss_Z2.shape[1],1]))
dLoss_Z1 = dLoss_A1 * dRelu(self.ch['Z1'])
dLoss_A0 = np.dot(self.param["W1"].T,dLoss_Z1)
dLoss_W1 = 1./self.X.shape[1] * np.dot(dLoss_Z1,self.X.T)
dLoss_b1 = 1./self.X.shape[1] * np.dot(dLoss_Z1, np.ones([dLoss_Z1.shape[1],1]))
self.param["W1"] = self.param["W1"] - self.lr * dLoss_W1
self.param["b1"] = self.param["b1"] - self.lr * dLoss_b1
self.param["W2"] = self.param["W2"] - self.lr * dLoss_W2
self.param["b2"] = self.param["b2"] - self.lr * dLoss_b2
def gd(self,X, Y, iter = 3000):
    np.random.seed(1)
    self.nInit()
    for i in range(0, iter):
        Yh, loss=self.forward()
        self.backward()
        if i % 500 == 0:
            print ("Cost after iteration %i: %f" %(i, loss))
            self.loss.append(loss)
    return

# gd (like nInit, forward and backward above) is written as a dlnet method,
# so in the source article these defs live inside the class body.
# Assumes the feature matrix x and labels y are already defined.
nn = dlnet(x,y)
nn.gd(x, y, iter = 15000)
###Output
_____no_output_____
|
1_Array_creation_routines.ipynb
|
###Markdown
Array creation routines Ones and zeros
###Code
import numpy as np
###Output
_____no_output_____
###Markdown
Create a new array of 2*2 integers, without initializing entries. Let X = np.array([1,2,3], [4,5,6], np.int32). Create a new array with the same shape and type as X.
###Code
X = np.array([[1,2,3], [4,5,6]], np.int32)
###Output
_____no_output_____
###Markdown
Create a 3-D array with ones on the diagonal and zeros elsewhere. Create a new array of 3*2 float numbers, filled with ones. Let x = np.arange(4, dtype=np.int64). Create an array of ones with the same shape and type as X.
###Code
x = np.arange(4, dtype=np.int64)
###Output
_____no_output_____
###Markdown
Create a new array of 3*2 float numbers, filled with zeros. Let x = np.arange(4, dtype=np.int64). Create an array of zeros with the same shape and type as X.
###Code
x = np.arange(4, dtype=np.int64)
###Output
_____no_output_____
###Markdown
Array creation routines Ones and zeros
###Code
import numpy as np # import the NumPy library
###Output
_____no_output_____
###Markdown
Create a new array of 2*2 integers, without initializing entries.
###Code
np.empty([2,2], int) # returns a new uninitialized array of the given shape and type (np.zeros would initialize it)
###Output
_____no_output_____
###Markdown
Let X = np.array([1,2,3], [4,5,6], np.int32). Create a new array with the same shape and type as X.
###Code
X = np.array([[1,2,3], [4,5,6]], np.int32)
np.empty_like(X) # returns a new uninitialized array with the same shape and dtype as the given array
###Output
_____no_output_____
###Markdown
Create a 3-D array with ones on the diagonal and zeros elsewhere.
###Code
np.eye(3) # returns a 2-D array with ones on the diagonal and zeros elsewhere
###Output
_____no_output_____
###Markdown
Create a new array of 3*2 float numbers, filled with ones.
###Code
np.ones([3,2]) # returns a new array of the given shape and type, filled with ones
###Output
_____no_output_____
###Markdown
Let x = np.arange(4, dtype=np.int64). Create an array of ones with the same shape and type as X.
###Code
x = np.arange(4, dtype=np.int64) # returns a 1-D array of evenly spaced values within the given interval
np.ones_like(x) # returns a new array of ones with the same shape and dtype as the given array
###Output
_____no_output_____
###Markdown
Create a new array of 3*2 float numbers, filled with zeros.
###Code
np.zeros([3,2]) # returns a new array of the given shape and type, filled with zeros
###Output
_____no_output_____
###Markdown
Let x = np.arange(4, dtype=np.int64). Create an array of zeros with the same shape and type as X.
###Code
x = np.arange(4, dtype=np.int64) # returns a 1-D array of evenly spaced values within the given interval
np.zeros_like(x) # returns a new array of zeros with the same shape and dtype as the given array
###Output
_____no_output_____
###Markdown
Create a new array of 2*5 uints, filled with 6.
###Code
np.full((2, 5),6,dtype=np.uint) # returns a new array of the given shape and type, filled with the specified value
###Output
_____no_output_____
###Markdown
Let x = np.arange(4, dtype=np.int64). Create an array of 6's with the same shape and type as X.
###Code
x = np.arange(4, dtype=np.int64)
np.full_like(x, 6) # returns a new array filled with the given value, with the same shape and dtype as x
###Output
_____no_output_____
###Markdown
From existing data Create an array of [1, 2, 3].
###Code
np.array([1,2,3]) # creates an array from a list or tuple
###Output
_____no_output_____
###Markdown
Let x = [1, 2]. Convert it into an array.
###Code
x = [1,2]
np.asarray(x) # converts a sequence to a NumPy array; if x is already an ndarray, it is returned unchanged
###Output
_____no_output_____
###Markdown
Let X = np.array([[1, 2], [3, 4]]). Convert it into a matrix.
###Code
X = np.array([[1, 2], [3, 4]])
np.asmatrix(X) # interprets the input as a matrix
###Output
_____no_output_____
###Markdown
Let x = [1, 2]. Conver it into an array of `float`.
###Code
x = [1, 2]
np.asfarray(x) # converts the input to a float64 array
###Output
_____no_output_____
###Markdown
Let x = np.array([30]). Convert it into scalar of its single element, i.e. 30.
###Code
x = np.array([30])
x.item() # converts a single-element array to its scalar equivalent
###Output
_____no_output_____
###Markdown
Let x = np.array([1, 2, 3]). Create a array copy of x, which has a different id from x.
###Code
x = np.array([1, 2, 3])
y = np.copy(x) # returns an array copy of the given object
print(id(x),x)
print(id(y),y)
###Output
2080842828256 [1 2 3]
2080842826016 [1 2 3]
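###Markdown
For contrast, plain assignment does not copy — both names refer to the same object (a quick sketch):
###Code
z = x  # no copy: z is the same object as x
print(id(z) == id(x))  # True
###Output
_____no_output_____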
###Markdown
Numerical ranges Create an array of 2, 4, 6, 8, ..., 100.
###Code
np.arange(2, 101, 2) # returns a 1-D array of evenly spaced values within the given interval
###Output
_____no_output_____
###Markdown
Create a 1-D array of 50 evenly spaced elements between 3. and 10., inclusive.
###Code
np.linspace(3,10,50) # returns a 1-D array of the given number of elements, evenly spaced over the interval
###Output
_____no_output_____
###Markdown
Create a 1-D array of 50 element spaced evenly on a log scale between 3. and 10., exclusive.
###Code
np.logspace(3,10,50,endpoint=False) # returns a 1-D array of the given number of elements, evenly spaced on a log scale over the interval
###Output
_____no_output_____
###Markdown
Building matrices Let X = np.array([[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11]]). Get the diagonal of X, that is, [0, 5, 10].
###Code
X = np.array([[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11]])
np.diagonal(X) # returns the specified diagonals of the array
###Output
_____no_output_____
###Markdown
Create a 2-D array whose diagonal equals [1, 2, 3, 4] and 0's elsewhere.
###Code
np.diagflat([1,2,3,4], k=0) # flattens the input and builds a diagonal array from it (k is the diagonal index)
###Output
_____no_output_____
###Markdown
Create an array which looks like below.array([[ 0., 0., 0., 0., 0.], [ 1., 0., 0., 0., 0.], [ 1., 1., 0., 0., 0.]])
###Code
np.tri(3, M=5, k=-1, dtype='float') # returns a triangular array with ones at and below the given diagonal and zeros above it
###Output
_____no_output_____
###Markdown
Create an array which looks like below.array([[ 0, 0, 0], [ 4, 0, 0], [ 7, 8, 0], [10, 11, 12]])
###Code
X = np.arange(1,13).reshape(4,3)
np.tril(X,-1) # lower triangle: zeroes out all elements above the given diagonal
###Output
_____no_output_____
###Markdown
Create an array which looks like below. array([[ 1, 2, 3], [ 4, 5, 6], [ 0, 8, 9], [ 0, 0, 12]])
###Code
X = np.arange(1,13).reshape(4,3)
np.triu(X,-1) # upper triangle: zeroes out all elements below the given diagonal
###Output
_____no_output_____
###Markdown
Array creation routines Ones and zeros
###Code
import numpy as np
###Output
_____no_output_____
###Markdown
Create a new array of 2*2 integers, without initializing entries.
###Code
np.empty((2,2), int)  # uninitialized, as the task asks (np.zeros would initialize the entries)
###Output
_____no_output_____
###Markdown
Let X = np.array([1,2,3], [4,5,6], np.int32). Create a new array with the same shape and type as X.
###Code
X = np.array([[1,2,3], [4,5,6]], np.int32)
X
np.empty_like(X)
###Output
_____no_output_____
###Markdown
Create a 3-D array with ones on the diagonal and zeros elsewhere.
###Code
np.eye(3)
np.identity(3)
###Output
_____no_output_____
###Markdown
Create a new array of 3*2 float numbers, filled with ones.
###Code
np.ones([3,2],dtype=float)
###Output
_____no_output_____
###Markdown
Let x = np.arange(4, dtype=np.int64). Create an array of ones with the same shape and type as X.
###Code
x = np.arange(4, dtype=np.int64)
np.ones_like(x)
###Output
_____no_output_____
###Markdown
Create a new array of 3*2 float numbers, filled with zeros.
###Code
np.zeros([3,2],dtype=float)
###Output
_____no_output_____
###Markdown
Let x = np.arange(4, dtype=np.int64). Create an array of zeros with the same shape and type as X.
###Code
x = np.arange(4, dtype=np.int64)
np.zeros_like(x)
###Output
_____no_output_____
###Markdown
Create a new array of 2*5 uints, filled with 6.
###Code
np.full((2,5),6,dtype=np.uint32)
###Output
_____no_output_____
###Markdown
Let x = np.arange(4, dtype=np.int64). Create an array of 6's with the same shape and type as X.
###Code
x = np.arange(4, dtype=np.int64)
np.full_like(x, 6)
###Output
_____no_output_____
###Markdown
From existing data Create an array of [1, 2, 3].
###Code
np.array([1,2,3])
###Output
_____no_output_____
###Markdown
Let x = [1, 2]. Convert it into an array.
###Code
x = [1,2]
np.asarray(x)
###Output
_____no_output_____
###Markdown
Let X = np.array([[1, 2], [3, 4]]). Convert it into a matrix.
###Code
X = np.array([[1, 2], [3, 4]])
np.asmatrix(X)
###Output
_____no_output_____
###Markdown
Let x = [1, 2]. Conver it into an array of `float`.
###Code
x = [1, 2]
np.asfarray(x)
###Output
_____no_output_____
###Markdown
Let x = np.array([30]). Convert it into scalar of its single element, i.e. 30.
###Code
x = np.array([30])
x.item()  # np.asscalar is deprecated/removed in recent NumPy; item() is the current equivalent
###Output
_____no_output_____
###Markdown
Let x = np.array([1, 2, 3]). Create a array copy of x, which has a different id from x.
###Code
x = np.array([1, 2, 3])
y =np.copy(x)
print (id(x),id(y))
###Output
1606853064384 1606853064464
###Markdown
Numerical ranges Create an array of 2, 4, 6, 8, ..., 100.
###Code
np.arange(2,102,2)
###Output
_____no_output_____
###Markdown
Create a 1-D array of 50 evenly spaced elements between 3. and 10., inclusive.
###Code
np.linspace(3.,10.,50)
###Output
_____no_output_____
###Markdown
Create a 1-D array of 50 element spaced evenly on a log scale between 3. and 10., exclusive.
###Code
np.logspace(3.,10.,50,endpoint=False)
###Output
_____no_output_____
###Markdown
Building matrices Let X = np.array([[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11]]). Get the diagonal of X, that is, [0, 5, 10].
###Code
X = np.array([[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11]])
X.diagonal()
###Output
_____no_output_____
###Markdown
Create a 2-D array whose diagonal equals [1, 2, 3, 4] and 0's elsewhere.
###Code
np.diag(np.arange(4)+1)
###Output
_____no_output_____
###Markdown
Create an array which looks like below.array([[ 0., 0., 0., 0., 0.], [ 1., 0., 0., 0., 0.], [ 1., 1., 0., 0., 0.]])
###Code
np.tri(3,5,-1)
###Output
_____no_output_____
###Markdown
Create an array which looks like below.array([[ 0, 0, 0], [ 4, 0, 0], [ 7, 8, 0], [10, 11, 12]])
###Code
np.tril(np.arange(1,13).reshape(4,3),-1)  # np.tril: lower triangle of an array
###Output
_____no_output_____
###Markdown
Create an array which looks like below. array([[ 1, 2, 3], [ 4, 5, 6], [ 0, 8, 9], [ 0, 0, 12]])
###Code
np.triu(np.arange(1,13).reshape(4,3),-1)
###Output
_____no_output_____
###Markdown
Create a new array of 2*5 uints, filled with 6. Let x = np.arange(4, dtype=np.int64). Create an array of 6's with the same shape and type as X.
###Code
x = np.arange(4, dtype=np.int64)
###Output
_____no_output_____
###Markdown
From existing data Create an array of [1, 2, 3]. Let x = [1, 2]. Convert it into an array.
###Code
x = [1,2]
###Output
_____no_output_____
###Markdown
Let X = np.array([[1, 2], [3, 4]]). Convert it into a matrix.
###Code
X = np.array([[1, 2], [3, 4]])
###Output
_____no_output_____
###Markdown
Let x = [1, 2]. Conver it into an array of `float`.
###Code
x = [1, 2]
###Output
_____no_output_____
###Markdown
Let x = np.array([30]). Convert it into scalar of its single element, i.e. 30.
###Code
x = np.array([30])
###Output
_____no_output_____
###Markdown
Let x = np.array([1, 2, 3]). Create a array copy of x, which has a different id from x.
###Code
x = np.array([1, 2, 3])
###Output
70140352 [1 2 3]
70140752 [1 2 3]
###Markdown
Numerical ranges Create an array of 2, 4, 6, 8, ..., 100. Create a 1-D array of 50 evenly spaced elements between 3. and 10., inclusive. Create a 1-D array of 50 element spaced evenly on a log scale between 3. and 10., exclusive. Building matrices Let X = np.array([[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11]]). Get the diagonal of X, that is, [0, 5, 10].
###Code
X = np.array([[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11]])
###Output
_____no_output_____
|
notebooks/01_overview_of_learning_tasks.ipynb
|
###Markdown
Time series learning tasks* What is machine learning with time series? * How is it different from standard machine learning?
###Code
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
%matplotlib inline
###Output
_____no_output_____
###Markdown
Learning objectivesYou'll learn about* different time series learning tasks* how to tell them apart--- Single seriesTime series comes in many shapes and forms. As an example, consider that we observe a chemical process in a [bioreactor](https://en.wikipedia.org/wiki/Bioreactor). We may observe the repeated sensor readings for the pressure over time from a single bioreactor run.
###Code
from utils import load_pressure
pressure = load_pressure()
fig, ax = plt.subplots(1, figsize=(16, 4))
pressure.plot(ax=ax)
ax.set(ylabel="Pressure", xlabel="Time");
###Output
_____no_output_____
###Markdown
Suppose you only have a single time series, what are some real-world problems that you encounter and may want to solve with machine learning? > * Time series annotation (e.g. outlier/anomaly detection, segmentation)> * Forecasting--- Multiple time seriesYou may observe multiple time series. There are two ways in which this can happen: Multivariate time seriesHere we observe two or more variables over time, with variables representing *different kinds of measurements* within a single *experimental unit* (e.g. readings from different sensors of a single chemical process).
###Code
from utils import load_temperature
temperature = load_temperature()
fig, (ax0, ax1) = plt.subplots(nrows=2, figsize=(16, 8), sharex=True)
pressure.plot(ax=ax0)
temperature.plot(ax=ax1)
ax0.set(ylabel="Pressure")
ax1.set(ylabel="Temperature", xlabel="Time");
###Output
_____no_output_____
###Markdown
Suppose you have multivariate time series, what are some real-world problems that you encounter and may want to solve with machine learning? > * Time series annotation with additional variables> * Forecasting with exogenous variables> * Vector forecasting (forecasting multiple series at the same time)--- Panel data Sometimes also called longitudinal data, here we observe multiple independent instances of the *same kind(s) of measurements* over time, e.g. sensor readings from multiple separate chemical processes).
###Code
from utils import load_experiments
experiments = load_experiments(variables="pressure")
fig, ax = plt.subplots(1, figsize=(16, 4))
experiments.sample(5).T.plot(ax=ax)
ax.set(ylabel="Pressure", xlabel="Time");
###Output
_____no_output_____
|
data_structure/enum.ipynb
|
###Markdown
Creating Enumerations A new enumeration is defined using the class syntax by subclassing Enum and adding class attributes describing the values
###Code
import enum
class BugStatus(enum.Enum):
new = 7
incomplete = 6
invalid = 5
wont_fix = 4
in_progress = 3
fix_committed = 2
fix_released = 1
print("member name: {}".format(BugStatus.wont_fix.name))
print("member value: {}".format(BugStatus.wont_fix.value))
###Output
_____no_output_____
###Markdown
Access to enumeration members and their attributes
###Code
BugStatus(7)
# to access enum members by name, use item access
BugStatus['wont_fix']
###Output
_____no_output_____
###Markdown
Iteration Iterating over the enum class produces the individual members of the enumeration
###Code
for status in BugStatus:
print("{}:{}".format(status.name, status.value))
###Output
_____no_output_____
###Markdown
**Note**: The members are produced in the order they are declared in the class definition; the names and values are not used to sort them. Comparing Enums Because enumeration members are not ordered, they support only equality and identity tests
###Code
actual_state = BugStatus.wont_fix
desired_state = BugStatus.fix_released
print('Equality:',
actual_state == desired_state,
actual_state == BugStatus.wont_fix)
print('Identity:',
actual_state is desired_state,
actual_state is BugStatus.wont_fix)
print('Ordered by value:')
try:
print('\n'.join(' ' + s.name for s in sorted(BugStatus)))
except TypeError as err:
print(' Cannot sort: {}'.format(err))
###Output
_____no_output_____
###Markdown
IntEnum Use the `IntEnum` class for enumerations whose members need to behave more like numbers, for example, to support comparisons
###Code
class BugStatusInt(enum.IntEnum):
new = 7
incomplete = 6
invalid = 5
wont_fix = 4
in_progress = 3
fix_committed = 2
fix_released = 1
actual_state = BugStatusInt.wont_fix
desired_state = BugStatusInt.fix_released
print('Equality:',
actual_state == desired_state,
actual_state == BugStatusInt.wont_fix)
print('Identity:',
actual_state is desired_state,
actual_state is BugStatusInt.wont_fix)
print("comparison: ")
print("new is bigger then invalid:", BugStatusInt.new > BugStatusInt.invalid)
print('Ordered by value:')
print('\n'.join(' ' + s.name for s in sorted(BugStatusInt)))
###Output
_____no_output_____
###Markdown
Unique Enumeration Values Enum members with the same value are tracked as alias references to the same member object. Aliases do not cause repeated values to be present in the iterator for the Enum
###Code
class BugStatusUnique(enum.Enum):
new = 7
incomplete = 6
invalid = 5
wont_fix = 4
in_progress = 3
fix_committed = 2
fix_released = 1
by_design = 4
closed = 1
for status in BugStatusUnique:
print('{:15} = {}'.format(status.name, status.value))
print('\nSame: by_design is wont_fix: ',
BugStatusUnique.by_design is BugStatusUnique.wont_fix)
print('Same: closed is fix_released: ',
BugStatusUnique.closed is BugStatusUnique.fix_released)
###Output
_____no_output_____
###Markdown
**To require all members to have unique values, add the decorator**
###Code
@enum.unique
class BugStatusUniqueDecorator(enum.Enum):
new = 7
incomplete = 6
invalid = 5
wont_fix = 4
in_progress = 3
fix_committed = 2
fix_released = 1
# This will trigger an error with unique applied.
by_design = 4
closed = 1
###Output
_____no_output_____
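###Markdown
Since the duplicate value makes the decorated class fail at definition time, the error can be demonstrated safely (a sketch):
###Code
try:
    @enum.unique
    class DuplicatedStatus(enum.Enum):
        a = 1
        b = 1
except ValueError as err:
    print(err)
###Output
_____no_output_____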
###Markdown
Members with repeated values trigger a `ValueError` exception when the `Enum` class is being interpreted. Create Enumerations Programmatically
###Code
BugStatus = enum.Enum(
value='BugStatus',
names=('fix_released fix_committed in_progress '
'wont_fix invalid incomplete new'),
)
print('Member: {}'.format(BugStatus.new))
print('\nAll members:')
for status in BugStatus:
print('{:15} = {}'.format(status.name, status.value))
###Output
_____no_output_____
###Markdown
value: the name of the enumeration. names: lists the members of the enumeration; if a string is passed, it is split on whitespace and commas, and the member values are assigned starting at 1. If you want to control the values associated with members, the names string can be replaced with a sequence of two-part tuples or a dictionary mapping names to values
###Code
BugStatus = enum.Enum(
value='BugStatus',
names=[
('new', 7),
('incomplete', 6),
('invalid', 5),
('wont_fix', 4),
('in_progress', 3),
('fix_committed', 2),
('fix_released', 1),
],
)
print('All members:')
for status in BugStatus:
print('{:15} = {}'.format(status.name, status.value))
###Output
_____no_output_____
###Markdown
Non-integer Member Values Enum member values are not restricted to integers. In fact, any type of object can be associated with a member. If the value is a tuple, the members are passed as individual arguments to __init__().
###Code
class BugStatus(enum.Enum):
new = (7, ['incomplete',
'invalid',
'wont_fix',
'in_progress'])
incomplete = (6, ['new', 'wont_fix'])
invalid = (5, ['new'])
wont_fix = (4, ['new'])
in_progress = (3, ['new', 'fix_committed'])
fix_committed = (2, ['in_progress', 'fix_released'])
fix_released = (1, ['new'])
def __init__(self, num, transitions):
self.num = num
self.transitions = transitions
def can_transition(self, new_state):
return new_state.name in self.transitions
print('Name:', BugStatus.in_progress)
print('Value:', BugStatus.in_progress.value)
print('Custom attribute:', BugStatus.in_progress.transitions)
print('Using attribute:',
BugStatus.in_progress.can_transition(BugStatus.new))
BugStatus.new.transitions
###Output
_____no_output_____
###Markdown
For more complex cases, tuples might become unwieldy. Since member values can be any type of object, dictionaries can be used for cases where there are a lot of separate attributes to track for each enum value. Complex values are passed directly to __init__() as the only argument other than self.
###Code
import enum
class BugStatus(enum.Enum):
new = {
'num': 7,
'transitions': [
'incomplete',
'invalid',
'wont_fix',
'in_progress',
],
}
incomplete = {
'num': 6,
'transitions': ['new', 'wont_fix'],
}
invalid = {
'num': 5,
'transitions': ['new'],
}
wont_fix = {
'num': 4,
'transitions': ['new'],
}
in_progress = {
'num': 3,
'transitions': ['new', 'fix_committed'],
}
fix_committed = {
'num': 2,
'transitions': ['in_progress', 'fix_released'],
}
fix_released = {
'num': 1,
'transitions': ['new'],
}
def __init__(self, vals):
self.num = vals['num']
self.transitions = vals['transitions']
def can_transition(self, new_state):
return new_state.name in self.transitions
print('Name:', BugStatus.in_progress)
print('Value:', BugStatus.in_progress.value)
print('Custom attribute:', BugStatus.in_progress.transitions)
print('Using attribute:',
BugStatus.in_progress.can_transition(BugStatus.new))
###Output
_____no_output_____
|
Monelit Data Science/5. Lunes/NLP/reading_preprocessing.ipynb
|
###Markdown
Reading and preprocessing documents
###Code
# Required packages
import re
import pandas as pd
import os
import PyPDF2
import time
import sys
import pickle
# Packages for text preprocessing
import nltk
###Output
_____no_output_____
###Markdown
Store the documents in a database
###Code
#---------------------------------------
# Load each pdf into a Python list
#---------------------------------------
#scriptPath = 'C:\\Users\\User\\Desktop\\DS4A_workspace\\Final Project\\DS4A-Final_Project-Team_28\\Personal\\Camilo'
# Absolute working directory
# Needed for relative calls from inside loops
scriptPath = sys.path[0]
os.chdir(scriptPath + '/Last_conpes_pdfs')
# Directory containing the pdfs
files = os.listdir(scriptPath + '/Last_conpes_pdfs')
# Empty list to fill with each loaded pdf
Database = []
# Loop that opens each pdf in Python
for FILE in files:
    # If it ends in '.pdf', read it in binary mode and append it to the list
    if FILE.endswith('.pdf'):
        data = open(FILE,'rb')
        Database.append(data)
###Output
_____no_output_____
###Markdown
Let's look at the first 10 files
###Code
Database[0:10]
###Output
_____no_output_____
###Markdown
Read the documents with a loop With the PyPDF2 package we can read pdf files easily. However, several documents are not in a machine-readable format, meaning they would require Optical Character Recognition (OCR). Here we skip those documents, since they are a minority.
###Code
# Empty dataframe that will store each PDF's name and text
text_table = pd.DataFrame(index = [0], columns = ['PDF','Text'])
fileIndex = 0
# Track execution time
t0 = time.time()
# For each pdf that can be read
for file in files:
    # Read as binary
    pdfFileObj = open(file,'rb')
    # PyPDF2 magic
    pdfReader = PyPDF2.PdfFileReader(pdfFileObj)
    # Start the page counter
    startPage = 0
    # Empty string to be filled with text
    text = ''
    cleanText = ''
    # While the counter is below the number of pages
    while startPage <= pdfReader.numPages-1:
        pageObj = pdfReader.getPage(startPage)
        text += pageObj.extractText()
        startPage += 1
    pdfFileObj.close()
    # For each character in the text
    for myWord in text:
        # Skip line breaks
        if myWord != '\n':
            cleanText += myWord
    # Temporary object holding the clean text
    text = cleanText
    # Create an empty row
    newRow = pd.DataFrame(index = [0], columns = ['PDF', 'Text'])
    # Fill the row with the file name and text
    newRow.iloc[0]['PDF'] = file
    newRow.iloc[0]['Text'] = text
    # Concatenate the new row to the dataframe created outside the loop
    text_table = pd.concat([text_table, newRow], ignore_index=True)
t1 = time.time()
# Total execution time
total = t1-t0
###Output
_____no_output_____
###Markdown
Let's look at the dataframe we created
###Code
text_table = text_table.iloc[1:]
text_table
###Output
_____no_output_____
###Markdown
Text cleaning In natural language processing, for a computer to interpret words as numbers, the words must first be transformed. The text is homogenized by removing every element that does not contribute to its true meaning. Among those elements we find: - Special characters. - Short, misspelled words. - Accents, when they are inconsistent (in Spanish, fixing spelling is harder than in English). - Stop words: articles, connectors, and any words that count as stop words in context.
###Code
# Remove characters that are not letters (the regex keeps accented Spanish letters)
def remove_noChar(words):
    return [re.sub(u"[^a-zA-ZñÑáéíóúÁÉÍÓÚ ]","", word) for word in words]
# Remove stopwords
def remove_sw(words,sw_list):
    return [word for word in words if word not in sw_list]
# Remove short words
def remove_shortW(words):
    return [word for word in words if len(word) > 2]
# Remove accents
# Not necessary for documents with correct spelling
def remove_tilde(words):
    return [r_tilde(word) for word in words]
# Accent replacement
def r_tilde(word):
    w=[]
    for letra in list(word):
        if letra == 'á': letra = 'a'
        if letra == 'é': letra = 'e'
        if letra == 'í': letra = 'i'
        if letra == 'ó': letra = 'o'
        if letra == 'ú': letra = 'u'
        w += letra
    return ''.join(w)
# Function that ties the functions above together
def preProc_docs(documentos):
    documentos = [re.sub(r'[^\w\s]',' ',proy) for proy in documentos]
    documentos = [[word for word in texto.lower().split()] for texto in documentos]
    documentos = [remove_noChar(text) for text in documentos]
    #documentos = [remove_tilde(text) for text in documentos]
    documentos = [remove_sw(words_lst, all_stopwords) for words_lst in documentos]
    documentos = [remove_shortW(text) for text in documentos]
    return [' '.join(item) for item in documentos]
nltk.download('words')
# stopwords file
with open(scriptPath + '/stop_words_spanish.txt', 'rb') as f:
sw_spanish = f.read().decode('latin-1').replace(u'\r', u'').split(u'\n')
sw_spanish
#---------------------------------------------------------------------------------
# nltk stopword set
default_stopwords = nltk.corpus.stopwords.words('spanish')
# Combine both stopword sets
all_stopwords = list(set(default_stopwords) | set(sw_spanish) )
all_stopwords
###Output
_____no_output_____
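###Markdown
A quick sanity check of the cleaning pipeline on a toy string (hypothetical text, not from the corpus):
###Code
preProc_docs(['El Documento CONPES: versión aprobada, 2021!'])
###Output
_____no_output_____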
###Markdown
Finally, with the cleaning functions ready and the stop word list loaded, we proceed to clean the text. Run the cleaning
###Code
# Run the cleaning and time how long it takes
t0 = time.time()
clean_text = pd.DataFrame(text_table.Text)
clean_text = clean_text.apply(preProc_docs)
t1 = time.time()
# about 16 seconds to process all the text
total = t1-t0
###Output
_____no_output_____
###Markdown
Extract relevant information with a regular expression
###Code
#re.search(r'DEPARTAMENTO NACIONAL DE PLANEACIÓN\.(.*?)Departamento Nacional de Planeación', text_table.Text[5])
titles = []
for i in range(1, len(clean_text)):
    try:
        temp = re.search('nacional planeación(.*)versión aprobada', clean_text.Text.iloc[i])
        temp2 = temp.group(1).strip()
        # strip the boilerplate header if present (a no-op on the lowercased, cleaned text)
        temp3 = temp2.replace('CONSEJO NACIONAL DE POLÍTICA ECONÓMICA Y SOCIAL REPÚBLICA DE COLOMBIA DEPARTAMENTO NACIONAL DE PLANEACIÓN ', '')
        titles.append(temp3)  # was temp2, which left temp3 unused
    except AttributeError:
        # re.search returned None: the pattern was not found in this document
        pass
#except Exception:
# temp = re.search('Documento CONPES(.*)Versión aprobada', text_table.Text[i])
#if not os.path.isfile(filename):
# try:
# urllib.request.urlretrieve(url, filename)
# except Exception:
# pass
len(titles)
#-------------------------------------------------------------------------------
#titles_xlsx = pd.read_excel('C:\\Users\\User\\Desktop\\conpes_list.xlsx')
#titles_2 = pd.DataFrame(titles_xlsx.titulo)
#titles_2_clean = titles_2.apply(preProc_docs).titulo.tolist()
titles
###Output
_____no_output_____
###Markdown
Store
###Code
# use a context manager so the file handle is closed properly
with open("titles.p", "wb") as f:
    pickle.dump(titles, f)
###Output
_____no_output_____
|
module2-sql-for-analysis/Module2_Assignment.ipynb
|
###Markdown
Test the SQLite Load - File is Uploaded Locally in Colab
###Code
!pip install psycopg2-binary
# Imports
import sqlite3
# Queries
query1 = 'SELECT COUNT(character_id) \
FROM charactercreator_character;'
# File is uploaded to Colab
conn = sqlite3.connect('rpg_db.sqlite3')
# Execute queries
curs1 = conn.cursor()
print('Total characters =', curs1.execute(query1).fetchall())
# Close cursors and commit
curs1.close()
conn.commit()
###Output
_____no_output_____
###Markdown
Insert the RPG Data Into PostgreSQL
###Code
# Imports
import pandas as pd
from sqlalchemy import create_engine
import psycopg2
# Connect to sqlite RPG DB
sl_conn = sqlite3.connect('rpg_db.sqlite3')
sl_cur = sl_conn.cursor()
# Test the Connection
sl_cur.execute('SELECT * FROM charactercreator_character')
sl_cc_table = sl_cur.fetchall()
sl_cc_table[0]
# Create Postgres engine
db_string = 'postgres://wsatpgnz:[email protected]:5432/wsatpgnz'
engine = create_engine(db_string)
# Connect to Postgres DB
pg_conn = engine.connect()
# Convert charactercreator_character to Dataframe
df = pd.read_sql('SELECT * FROM charactercreator_character', sl_conn)
df
#pg_conn.execute('DROP TABLE "public"."charactercreator_character"')
# Store the Dataframe in the Postgres DB
table_names_dict = {'charactercreator_character': 'character_id'}
for table_name, primary_key in table_names_dict.items():
df = pd.read_sql(f"SELECT * FROM {table_name}", sl_conn)
df = df.set_index(primary_key, verify_integrity=True)
df.to_sql(table_name, pg_conn)
# Close cursors and commit
sl_cur.close()
sl_conn.commit()
###Output
_____no_output_____
###Markdown
Import Dataset
###Code
import psycopg2
import pandas as pd
df = pd.read_csv('titanic.csv')
df.head(30)
df['Name'] = df['Name'].str.replace("'", '`')
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 887 entries, 0 to 886
Data columns (total 8 columns):
Survived 887 non-null int64
Pclass 887 non-null int64
Name 887 non-null object
Sex 887 non-null object
Age 887 non-null float64
Siblings/Spouses Aboard 887 non-null int64
Parents/Children Aboard 887 non-null int64
Fare 887 non-null float64
dtypes: float64(2), int64(4), object(2)
memory usage: 55.5+ KB
###Markdown
Connect to PostgreSQL
###Code
dbname = 'rbdyjzel'
user = 'rbdyjzel'
password = 'PqamaH9-y5MGtTxPP__tkNvls9Nj0WWQ' # Don't commit this!
host = 'raja.db.elephantsql.com'
pg_conn = psycopg2.connect(dbname=dbname, user=user,
password=password, host=host)
pg_curs = pg_conn.cursor()
###Output
_____no_output_____
###Markdown
Transform pandas dataframe to SQL
###Code
#df.to_sql(name='titanic', con=sl_conn, if_exists='replace')
import sqlite3
# Convert to SQLite3
sl_conn = sqlite3.connect('titanic.sqlite3')
df.to_sql(name='titanic', con=sl_conn, if_exists='replace')
sl_curs = sl_conn.cursor()
human = sl_curs.execute('SELECT * FROM titanic;').fetchall()
human[0]
df.head(1)
create_titanic_table = """
CREATE TABLE titanic (
id SERIAL PRIMARY KEY,
survived INT,
pclass INT,
name VARCHAR(100),
sex VARCHAR(10),
age INT,
siblings_or_spouses_aboard INT,
parents_children_aboard INT,
fare FLOAT
)
"""
pg_curs.execute(create_titanic_table)
attempted_insert = """
INSERT INTO titanic
VALUES""" + str(human[0])
attempted_insert
pg_curs.execute(attempted_insert)
pg_curs.execute('SELECT * FROM titanic;')
pg_human = pg_curs.fetchall()
human[28]
pg_human[0]
for humans in human[1:]:
insert_human = """
INSERT INTO titanic
VALUES """ + str(humans)
pg_curs.execute(insert_human)
pg_conn.commit()
pg_human[-2]
for humans, pg_humans in zip(human, pg_human):
    assert humans == pg_humans  # compare row by row (the loop variables, not the whole lists)
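# Hedged aside (not from the original notebook): psycopg2 can bind parameters
# itself, which is safer than building INSERT statements with str(tuple).
# A sketch only, assuming the same `titanic` table and `human` rows; left
# commented out so the rows above are not inserted a second time.
# insert_sql = 'INSERT INTO titanic VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s)'
# pg_curs.executemany(insert_sql, human)
# pg_conn.commit()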
###Output
_____no_output_____
|
foundation/applied-statistics/class_material_day_3/SF Salaries Exercise- Solutions.ipynb
|
###Markdown
SF Salaries Exercise - Solutions. We will be using the [SF Salaries Dataset](https://www.kaggle.com/kaggle/sf-salaries) from Kaggle! Just follow along and complete the tasks outlined in bold below. The tasks will get harder and harder as you go along. **Import pandas as pd.**
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
**Read Salaries.csv as a dataframe called sal.**
###Code
sal = pd.read_csv('Salaries.csv')
###Output
_____no_output_____
###Markdown
**Check the head of the DataFrame.**
###Code
sal.head()
###Output
_____no_output_____
###Markdown
**Use the .info() method to find out how many entries there are.**
###Code
sal.info() # 148654 Entries
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 148654 entries, 0 to 148653
Data columns (total 13 columns):
Id 148654 non-null int64
EmployeeName 148654 non-null object
JobTitle 148654 non-null object
BasePay 148045 non-null float64
OvertimePay 148650 non-null float64
OtherPay 148650 non-null float64
Benefits 112491 non-null float64
TotalPay 148654 non-null float64
TotalPayBenefits 148654 non-null float64
Year 148654 non-null int64
Notes 0 non-null float64
Agency 148654 non-null object
Status 0 non-null float64
dtypes: float64(8), int64(2), object(3)
memory usage: 14.7+ MB
###Markdown
**What is the average BasePay?**
###Code
sal['BasePay'].mean()
###Output
_____no_output_____
###Markdown
**What is the highest amount of OvertimePay in the dataset?**
###Code
sal['OvertimePay'].max()
###Output
_____no_output_____
###Markdown
**What is the job title of JOSEPH DRISCOLL? Note: use all caps, otherwise you may get an answer that doesn't match up (there is also a lowercase Joseph Driscoll).**
###Code
sal[sal['EmployeeName']=='JOSEPH DRISCOLL']['JobTitle']
###Output
_____no_output_____
###Markdown
**How much does JOSEPH DRISCOLL make (including benefits)?**
###Code
sal[sal['EmployeeName']=='JOSEPH DRISCOLL']['TotalPayBenefits']
###Output
_____no_output_____
###Markdown
**What is the name of the highest-paid person (including benefits)?**
###Code
sal[sal['TotalPayBenefits']== sal['TotalPayBenefits'].max()] #['EmployeeName']
# or
# sal.loc[sal['TotalPayBenefits'].idxmax()]
###Output
_____no_output_____
###Markdown
**What is the name of the lowest-paid person (including benefits)? Do you notice something strange about how much he or she is paid?**
###Code
sal[sal['TotalPayBenefits']== sal['TotalPayBenefits'].min()] #['EmployeeName']
# or
# sal.loc[sal['TotalPayBenefits'].idxmax()]['EmployeeName']
## IT'S NEGATIVE!! VERY STRANGE
###Output
_____no_output_____
###Markdown
**What was the average (mean) BasePay of all employees per year (2011-2014)?**
###Code
sal.groupby('Year').mean()['BasePay']
###Output
_____no_output_____
###Markdown
**How many unique job titles are there?**
###Code
sal['JobTitle'].nunique()
###Output
_____no_output_____
###Markdown
**What are the top 5 most common jobs?**
###Code
sal['JobTitle'].value_counts().head(5)
###Output
_____no_output_____
###Markdown
**How many job titles were represented by only one person in 2013 (i.e. job titles with only one occurrence in 2013)?**
###Code
sum(sal[sal['Year']==2013]['JobTitle'].value_counts() == 1) # pretty tricky way to do this...
###Output
_____no_output_____
###Markdown
**How many people have the word Chief in their job title? (This is pretty tricky.)**
###Code
def chief_string(title):
if 'chief' in title.lower():
return True
else:
return False
sum(sal['JobTitle'].apply(lambda x: chief_string(x)))
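# Equivalent vectorized alternative (my addition; same result, assuming no NaN titles):
print(sal['JobTitle'].str.lower().str.contains('chief').sum())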
###Output
_____no_output_____
###Markdown
**Bonus: Is there a correlation between the length of the job title string and salary?**
###Code
sal['title_len'] = sal['JobTitle'].apply(len)
sal[['title_len','TotalPayBenefits']].corr() # No correlation.
###Output
_____no_output_____
|
Pytorch_BSNetConv.ipynb
|
###Markdown
###Code
!pip install kornia
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
import sys
import os
import torch.optim as optim
import torchvision
from torchvision import datasets, transforms
from scipy import io
import torch.utils.data
import scipy
import matplotlib.pyplot as plt
from torch.utils.data import Dataset, DataLoader
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
!pip install -U spectral
if not (os.path.isfile('/content/Indian_pines_corrected.mat')):
!wget http://www.ehu.eus/ccwintco/uploads/6/67/Indian_pines_corrected.mat
if not (os.path.isfile('/content/Indian_pines_gt.mat')):
!wget http://www.ehu.eus/ccwintco/uploads/c/c4/Indian_pines_gt.mat
import scipy.io as sio
def loadData():
data = sio.loadmat('Indian_pines_corrected.mat')['indian_pines_corrected']
labels = sio.loadmat('Indian_pines_gt.mat')['indian_pines_gt']
return data, labels
def padWithZeros(X, margin=2):
## From: https://github.com/gokriznastic/HybridSN/blob/master/Hybrid-Spectral-Net.ipynb
newX = np.zeros((X.shape[0] + 2 * margin, X.shape[1] + 2* margin, X.shape[2]))
x_offset = margin
y_offset = margin
newX[x_offset:X.shape[0] + x_offset, y_offset:X.shape[1] + y_offset, :] = X
return newX
def createImageCubes(X, y, windowSize=5, removeZeroLabels = True):
## From: https://github.com/gokriznastic/HybridSN/blob/master/Hybrid-Spectral-Net.ipynb
margin = int((windowSize - 1) / 2)
zeroPaddedX = padWithZeros(X, margin=margin)
# split patches
patchesData = np.zeros((X.shape[0] * X.shape[1], windowSize, windowSize, X.shape[2]), dtype=np.uint8)
patchesLabels = np.zeros((X.shape[0] * X.shape[1]), dtype=np.uint8)
patchIndex = 0
for r in range(margin, zeroPaddedX.shape[0] - margin):
for c in range(margin, zeroPaddedX.shape[1] - margin):
patch = zeroPaddedX[r - margin:r + margin + 1, c - margin:c + margin + 1]
patchesData[patchIndex, :, :, :] = patch
patchesLabels[patchIndex] = y[r-margin, c-margin]
patchIndex = patchIndex + 1
if removeZeroLabels:
patchesData = patchesData[patchesLabels>0,:,:,:]
patchesLabels = patchesLabels[patchesLabels>0]
patchesLabels -= 1
return patchesData, patchesLabels
class HyperSpectralDataset(Dataset):
"""HyperSpectral dataset."""
def __init__(self,data_url,label_url):
self.data = np.array(scipy.io.loadmat('/content/'+data_url.split('/')[-1])[data_url.split('/')[-1].split('.')[0].lower()])
self.targets = np.array(scipy.io.loadmat('/content/'+label_url.split('/')[-1])[label_url.split('/')[-1].split('.')[0].lower()])
self.data, self.targets = createImageCubes(self.data,self.targets, windowSize=7)
self.data = torch.Tensor(self.data)
self.data = self.data.permute(0,3,1,2)
print(self.data.shape)
def __len__(self):
return self.data.shape[0]
def __getitem__(self, idx):
return self.data[idx,:,:,:] , self.targets[idx]
data_train = HyperSpectralDataset('Indian_pines_corrected.mat','Indian_pines_gt.mat')
train_loader = DataLoader(data_train, batch_size=16, shuffle=True)
print(data_train.__getitem__(0)[0].shape)
print(data_train.__len__())
class BSNET_Conv(nn.Module):
def __init__(self,):
super(BSNET_Conv, self).__init__()
self.conv1 = nn.Sequential(
nn.Conv2d(200,64,(3,3),1,0),
nn.ReLU(True))
self.conv1_1 = nn.Sequential(
nn.Conv2d(200,128,(3,3),1,0),
nn.ReLU(True))
self.conv1_2 = nn.Sequential(
nn.Conv2d(128,64,(3,3),1,0),
nn.ReLU(True))
self.deconv1_2 = nn.Sequential(
nn.ConvTranspose2d(64,64,(3,3),1,0),
nn.ReLU(True))
self.deconv1_1 = nn.Sequential(
nn.ConvTranspose2d(64,128,(3,3),1,0),
nn.ReLU(True))
self.conv2_1 = nn.Sequential(
nn.Conv2d(128,200,(1,1),1,0),
nn.Sigmoid())
self.fc1 = nn.Sequential(
nn.Linear(64,128),
nn.ReLU(True))
self.fc2 = nn.Sequential(
nn.Linear(128,200),
nn.Sigmoid())
self.gp=nn.AvgPool2d(5)
def BAM(self,x):
x = self.conv1(x)
x = self.gp(x)
x = x.view(-1,64)
x = self.fc1(x)
x = self.fc2(x)
x = x.view(-1,1,1,200)
x = x.permute(0,3,1,2)
return x
def RecNet(self,x):
x = self.conv1_1(x)
x = self.conv1_2(x)
x = self.deconv1_2(x)
x = self.deconv1_1(x)
x = self.conv2_1(x)
return x
def forward(self,x):
BRW = self.BAM(x)
x = x*BRW
ret = self.RecNet(x)
return ret
model = BSNET_Conv().to(device)
from torchsummary import summary
summary(model,(200,7,7),batch_size=16)
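# Added smoke test: the reconstruction network should preserve the input shape
# (2 is an arbitrary batch size; 200 bands and 7x7 patches follow the Indian
# Pines setup above).
with torch.no_grad():
    dummy = torch.randn(2, 200, 7, 7).to(device)
    print(model(dummy).shape)  # expected: torch.Size([2, 200, 7, 7])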
top = 25
import skimage
import kornia
# note: a module-level `global` statement has no effect; the declaration
# belongs inside train() below, where bsnlist is actually assigned
ssim = kornia.losses.SSIM(5, reduction='none')
psnr = kornia.losses.PSNRLoss(2500)
from skimage import measure
ssim_list = []
psnr_list = []
l1_list = []
channel_weight_list = []
def train(epoch):
    global bsnlist  # the band list computed at the end of train() is meant to be module-level
    model.train()
    ENTROPY = torch.zeros(200)
for batch_idx, (data, __) in enumerate(train_loader):
data = data.to(device)
optimizer.zero_grad()
output = model(data)
loss = F.l1_loss(output,data)
loss.backward()
optimizer.step()
D = output.detach().cpu().numpy()
for i in range(0,200):
ENTROPY[i]+=skimage.measure.shannon_entropy(D[:,i,:,:])
if batch_idx % (0.5*len(train_loader)) == 0:
L1 = loss.item()
print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
epoch, batch_idx * len(data), len(train_loader.dataset),
100. * batch_idx / len(train_loader),L1))
l1_list.append(L1)
ssim_val = torch.mean(ssim(data,output))
print("SSIM: {}".format(ssim_val))
ssim_list.append(ssim_val)
psnr_val = psnr(data,output)
print("PSNR: {}".format(psnr_val))
psnr_list.append(psnr_val)
ENTROPY = np.array(ENTROPY)
bsnlist = np.asarray(ENTROPY.argsort()[-top:][::-1])
print('Top {} bands with Entropy ->'.format(top),list(bsnlist))
for epoch in range(0, 50):
train(epoch)
bsnlist = [80, 97, 43, 71, 72, 7, 92, 151, 134, 87, 100, 15, 10, 84, 95, 38, 106, 122, 3, 34, 37, 48, 36, 86, 190]  # hard-coded band list (apparently from an earlier run) used by the analysis below
x,xx,xxx = psnr_list,ssim_list,l1_list
print(len(x)),print(len(xx)),print(len(xxx))
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
plt.figure(figsize=(20,10))
plt.xlabel('Epoch',fontsize=50)
plt.ylabel('PSNR',fontsize=50)
plt.xticks(fontsize=40)
plt.yticks(np.arange(0,100 , 10.0),fontsize=40)
plt.ylim(10,100)
plt.plot(x,linewidth=5.0)
plt.savefig('PSNR-IN.pdf')
plt.show()
plt.figure(figsize=(20,10))
plt.xlabel('Epoch',fontsize=50)
plt.ylabel('SSIM',fontsize=50)
plt.xticks(fontsize=40)
plt.yticks(fontsize=40)
plt.plot(xx,linewidth=5.0)
plt.savefig('SSIM-IN.pdf')
plt.show()
plt.figure(figsize=(20,10))
plt.xlabel('Epoch',fontsize=50)
plt.ylabel('L1 Reconstruction loss',fontsize=50)
plt.xticks(fontsize=40)
plt.yticks(fontsize=40)
plt.plot(xxx,linewidth=5.0)
plt.savefig('L1-IN.pdf')
plt.show()
from scipy.stats import entropy
def MeanSpectralDivergence(band_subset):
n_row, n_column, n_band = band_subset.shape
N = n_row * n_column
hist = []
for i in range(n_band):
hist_, _ = np.histogram(band_subset[:, :, i], 256)
hist.append(hist_ / N)
hist = np.asarray(hist)
hist[np.nonzero(hist <= 0)] = 1e-20
info_div = 0
for b_i in range(n_band):
for b_j in range(n_band):
band_i = hist[b_i].reshape(-1)/np.sum(hist[b_i])
band_j = hist[b_j].reshape(-1)/np.sum(hist[b_j])
entr_ij = entropy(band_i, band_j)
entr_ji = entropy(band_j, band_i)
entr_sum = entr_ij + entr_ji
info_div += entr_sum
msd = info_div * 2 / (n_band * (n_band - 1))
return msd
def MeanSpectralAngle(band_subset):
"""
Spectral Angle (SA) is defined as the angle between two bands.
We use Mean SA (MSA) to quantify the redundancy among a band set.
i-th band B_i, and j-th band B_j,
    SA_ij = arccos( B_i^T * B_j / (||B_i|| * ||B_j||) )
    MSA = 2 / (n * (n - 1)) * sum(SA_ij)
Ref:
[1] GONG MAOGUO, ZHANG MINGYANG, YUAN YUAN. Unsupervised Band Selection Based on Evolutionary Multiobjective
Optimization for Hyperspectral Images [J]. IEEE Transactions on Geoscience and Remote Sensing, 2016, 54(1): 544-57.
:param band_subset: with shape (n_row, n_clm, n_band)
:return:
"""
n_row, n_column, n_band = band_subset.shape
spectral_angle = 0
    for i in range(n_band):
        for j in range(n_band):
            # index along the band axis; band_subset has shape (n_row, n_clm, n_band)
            band_i = band_subset[:, :, i].reshape(-1)
            band_j = band_subset[:, :, j].reshape(-1)
lower = np.sum(band_i ** 2) ** 0.5 * np.sum(band_j ** 2) ** 0.5
higher = np.dot(band_i, band_j)
if higher / lower > 1.:
angle_ij = np.arccos(1. - 1e-16)
# print('1-higher-lower', higher - lower)
# elif higher / lower < -1.:
# angle_ij = np.arccos(1e-8 - 1.)
# print('2-higher-lower', higher - lower)
else:
angle_ij = np.arccos(higher / lower)
spectral_angle += angle_ij
msa = spectral_angle * 2 / (n_band * (n_band - 1))
return msa
import skimage
from skimage import measure
def sumentr(band_subset,X):
nbands = len(band_subset)
ENTROPY=np.ones(nbands)
for i in range(0,len(band_subset)):
ENTROPY[i]+=skimage.measure.shannon_entropy(X[:,:,band_subset[i]])
return np.sum(ENTROPY)
def MSA(bsnlist):
X, _ = loadData()
print('[',end=" ")
for a in range(2,len(bsnlist)):
band_subset_list = []
for i in bsnlist[:a]:
band_subset_list.append(X[:,:,i])
band_subset = np.array(band_subset_list)
band_subset = np.stack(band_subset,axis =2)
print(MeanSpectralAngle(band_subset),end=" ")
if a!= len(bsnlist)-1:
print(",",end=" ")
print(']')
def MSD(bsnlist):
X, _ = loadData()
print('[',end=" ")
for a in range(2,len(bsnlist)):
band_subset_list = []
for i in bsnlist[:a]:
band_subset_list.append(X[:,:,i])
band_subset = np.array(band_subset_list)
band_subset = np.stack(band_subset,axis =2)
print(MeanSpectralDivergence(band_subset),end=" ")
if a!= len(bsnlist)-1:
print(",",end=" ")
print(']')
def EntropySum(bsnlist):
X, _ = loadData()
print('[',end=" ")
for a in range(2,len(bsnlist)):
band_subset_list = []
for i in bsnlist[:a]:
band_subset_list.append(X[:,:,i])
band_subset = np.array(band_subset_list)
band_subset = np.stack(band_subset,axis =2)
print(sumentr(bsnlist[:a],X),end=" ")
if a!= len(bsnlist)-1:
print(",",end=" ")
print(']')
MSA(bsnlist)
MSD(bsnlist)
EntropySum(bsnlist)
dabs = [ 33.12881252113703 , 27.699269748852547 , 31.189050523240567 , 25.407158521806743 , 21.23842365661258 , 21.4600299169822 , 20.86248085946583 , 20.17228040657472 , 20.53299041484558 , 21.1335634998955 , 19.59117061832842 , 20.946230004039375 , 22.843494279707382 , 21.596483466175062 , 21.633130554147392 , 22.832050045391185 , 23.112561570936894 , 23.938250673675114 , 24.27697303727743 , 24.67049003424132 , 24.818116958133697 , 24.450204537801287 , 25.019421764795172 ]
bsnetconv = [ 24.926582298710684 , 21.330938461786815 , 23.04076040826106 , 21.645181998356264 , 18.691557180047244 , 20.7226257192415 , 20.180950404677475 , 18.92119239091796 , 18.14126793229048 , 18.054941145300337 , 19.2913419518319 , 21.458442690479096 , 22.376986846120094 , 26.539316782854403 , 26.364677531292276 , 26.004356791026886 , 26.06563007931373 , 27.562615680165703 , 26.816233958923476 , 26.75423730463073 , 26.76546728344344 , 26.651876628889074 , 26.170407693767313 ]
pca = [ 64.6659495569616 , 44.206964175291155 , 56.974405048963185 , 47.303760042785385 , 39.8940534876976 , 34.768743086455515 , 30.563590015282664 , 27.347606401064958 , 25.73579551531729 , 23.562059922897653 , 24.332531206971105 , 22.65318676880641 , 21.20680313199034 , 19.950365482269632 , 18.7957381872366 , 17.785780071254095 , 16.84197341100759 , 16.71704973304585 , 15.959359012341034 , 16.69296720295007 , 16.303398054778945 , 15.775575036839665 , 15.247189808858215 ]
spabs = [ 51.546751478947485 , 41.56190968855882 , 34.1507585379382 , 32.18755647433454 , 31.393008047463585 , 31.02527459658134 , 30.212480943960397 , 33.42180148091237 , 32.627589457381625 , 30.811290451152864 , 31.07497311582388 , 29.193101794721112 , 28.174085856682574 , 27.110556610241026 , 26.16396012024104 , 27.642474793195948 , 26.97927639524588 , 26.802185442574 , 26.733570979934218 , 25.614498829087168 , 24.57496106936372 , 24.260774948635653 , 24.535411090068447 ]
snmf = [ 42.687271482734026 , 69.98650272134581 , 65.56190884814379 , 64.78830503377719 , 60.283392581094056 , 57.29725635316855 , 61.48424023193987 , 65.9111624844873 , 69.81263992889625 , 66.0216268025207 , 63.44659867282022 , 59.12927876180595 , 55.89468878602123 , 54.131703617998376 , 56.680276749080825 , 59.53217131059314 , 57.16351130033321 , 54.9461367723193 , 55.23628180002861 , 54.62510055278423 , 53.74485500301176 , 52.97448803455957 , 51.9084356071723 ]
issc = [ 18.282704191681795 , 35.29174781838125 , 33.52621667208111 , 34.7570094214297 , 34.693446545983406 , 33.8470987598166 , 42.36183874938314 , 38.34479910743488 , 38.34974051412382 , 35.28287700260462 , 32.65494379097696 , 32.312139823186655 , 30.307662525527835 , 29.98966839606608 , 29.269512799967384 , 29.912423244699333 , 29.038917745855983 , 28.929037072912795 , 28.672306590798843 , 28.505889476998565 , 28.182865736586837 , 28.759689061354372 , 28.95934175252772 ]
new = [ 0.3815454878000646 , 6.781090755133797 , 9.950440828878444 , 11.237053937027174 , 27.088203156192495 , 28.32426713263673 , 31.61098768365176 , 29.18215151414016 , 30.623415605281778 , 28.31521833089382 , 27.449306705475106 , 26.808790797203535 , 26.64422017203365 , 26.15949242450031 , 24.537110949551423 , 26.60549266437586 , 25.726471570464867 , 25.336850074623072 , 24.637817130631845 , 23.95399234128613 , 23.29746143739684 , 22.647506727415077 , 22.878749722275444 ]
NSBands = list((i for i in range(2,25)))
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
methods = [dabs,bsnetconv,pca,spabs,snmf,new]
for i in methods: print(len(i))
markerstylelist = ["8","1","2","4","*","3","5"]
scatar = []
f = plt.figure(num=None, figsize=(12, 8), dpi=80, facecolor='w', edgecolor='k')
for method in methods:
PLOT = plt.plot(NSBands,method,markersize=30)
SCATTER = plt.scatter(NSBands,method,s=100)
scatar.append(SCATTER)
plt.xlabel('Number of Selected Bands',fontsize=40)
plt.ylabel('MSD',fontsize=40)
plt.xticks(fontsize=30)
plt.yticks(fontsize=30)
plt.ylim(1,80)
plt.xlim(1,25)
plt.legend(scatar,['DARecNet-BS','BSNet-Conv','PCA','SpaBS','SNMF','New'],loc='best',fontsize='xx-large',shadow=True,prop={'size': 26},bbox_to_anchor=(0.5, 0.5, 0.5, 0.5))
plt.show()
f.savefig("MSD-IN.pdf", bbox_inches='tight')
###Output
_____no_output_____
|
05_NN/NN_03_TB.ipynb
|
###Markdown
TensorBoard is used. Launch it with `tensorboard --logdir=./logs` (note that this notebook writes its summaries to `\\xor_logs_01`, so point `--logdir` at that directory)
###Code
import tensorflow as tf
import numpy as np
x_data = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=np.float32)
y_data = np.array([[0], [1], [1], [0]], dtype=np.float32)
X = tf.placeholder(tf.float32, [None, 2], name='x-input')
Y = tf.placeholder(tf.float32, [None, 1], name='y-input')
with tf.name_scope("layer1") as scope:
W1 = tf.Variable(tf.random_normal([2,2]),name="Weight_one")
b1 = tf.Variable(tf.random_normal([2]),name="Bias_one")
layer_one = tf.sigmoid(tf.matmul(X,W1) + b1)
w1_histogram = tf.summary.histogram("Weight_one", W1)
b1_histogram = tf.summary.histogram("Bias-one",b1)
layer_one_histogram = tf.summary.histogram("layer1",layer_one)
###Output
_____no_output_____
###Markdown
Going deeper
###Code
with tf.name_scope("layer2") as scope:
W2 = tf.Variable(tf.random_normal([2,1]),name="Weight_two")
b2 = tf.Variable(tf.random_normal([1]),name="Bias_two")
Hypothesis = tf.sigmoid(tf.matmul(layer_one,W2) + b2)
w2_histogram = tf.summary.histogram("Weight_two", W2)
b2_histogram = tf.summary.histogram("Bias-two",b2)
layer_two_histogram = tf.summary.histogram("Hypothesis",Hypothesis)
with tf.name_scope("cost") as scope:
cost = -tf.reduce_mean(Y * tf.log(Hypothesis) + (1 - Y) * tf.log(1 - Hypothesis))
cost_summ = tf.summary.scalar("cost", cost)
with tf.name_scope("train") as scope:
train = tf.train.GradientDescentOptimizer(learning_rate=0.01).minimize(cost)
predicted = tf.cast(Hypothesis > 0.5, dtype=tf.float32)
accuracy = tf.reduce_mean(tf.cast(tf.equal(predicted, Y), dtype=tf.float32))
accuracy_summ = tf.summary.scalar("accuracy", accuracy)
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)
merged_summary = tf.summary.merge_all()
writer = tf.summary.FileWriter("\\xor_logs_01")
writer.add_graph(sess.graph)
for step in range(10001):
summary, _ = sess.run([merged_summary, train], feed_dict={X: x_data, Y: y_data})
writer.add_summary(summary, global_step=step)
if step % 500 == 0:
print(step, sess.run(cost, feed_dict={X: x_data, Y: y_data}), sess.run([W1, W2]))
h, c, a = sess.run([Hypothesis, predicted, accuracy], feed_dict={X: x_data, Y: y_data})
print("\nHypothesis: ", h, "\nCorrect: ", c, "\nAccuracy: ", a)
###Output
Hypothesis: [[ 0.49660048]
[ 0.54613131]
[ 0.47485587]
[ 0.47427541]]
Correct: [[ 0.]
[ 1.]
[ 0.]
[ 0.]]
Accuracy: 0.75
|
New Jupyter Notebooks/Classification/Iris Dataset/.ipynb_checkpoints/LogisticRegressionWithIris-checkpoint.ipynb
|
###Markdown
Exercise Option 1 - Standard Difficulty. Answer the following questions. You can also use the graph below, if seeing the data visually helps you understand it. 1. In the above cell, the expected class predictions should be [0, 1, 2], because the first datapoint of each class was used. If the model were not giving the expected output, possible reasons could be that the data values chosen for testing were outliers, or that logistic regression does not predict this data well. 2. The probabilities relate to the class predictions because each class prediction is simply the class with the highest probability. The model is more or less confident in its predictions based on the logit function. 3. If a coefficient is negative, then taking a datapoint and increasing the value of the feature with that negative coefficient should decrease the probability the model outputs. 4. The two features do not predict the iris data very well. Although the model predicted correctly for the three datapoints tested, the confidence for one of the predictions was only 62%. Also, if you look at the graph of the dataset with the two features, you can see that there is overlap between versicolor and virginica, which would explain the 62% confidence. 5. As seen below, using all the different feature pair combinations, the best pair was petal length and petal width. I know this because, for each combination, I calculated how confident the model was for the expected outputs. This finding actually aligns with what I found about the iris dataset with a decision tree: most decision tree nodes separated the data based on petal length and petal width, i.e. they were the best predictors.
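To make point 3 concrete, here is a small numeric sketch (my own illustration, not part of the assignment): in a logistic model, adding a negative contribution to the linear term always lowers the output probability.

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

z = 0.5                  # some linear combination w.x + b
print(sigmoid(z))        # ~0.62
print(sigmoid(z - 1.0))  # raising a feature whose coefficient is -1.0 by one unit: ~0.38
```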
###Code
feature_pairs = [[0, 1], [0, 2], [0, 3], [1, 2], [1, 3], [2, 3]]
feature_pair_probabilities = []
feature_pair_models = []
# generate model and probabilities for each feature pair
for feature_pair in feature_pairs:
feature_pair_data = iris.data[:,feature_pair] # get data of feature pair
feature_pair_model = linear_model.LogisticRegression() # make model
feature_pair_model.fit(feature_pair_data, iris.target) # fit model to feature pair
feature_pair_inputs = [feature_pair_data[0], feature_pair_data[start_class_two], feature_pair_data[start_class_three]] # make new inputs with only the feature pair
feature_pair_probabilities.append(feature_pair_model.predict_proba(feature_pair_inputs)) # push class probabilities for specific feature pair to array
feature_pair_models.append(feature_pair_model) # push model to array for later use
best_pair_score = 0 # scale from 0-1, how well feature pair performed based on how close it was to expected outputs
best_pair = [] # feature pair with best score
# https://stackoverflow.com/questions/23828242/how-to-set-a-python-variable-to-undefined
best_model = None # the most accurate model
index = 0 # for indexing
for feature_pair_probability in feature_pair_probabilities:
# print probabilities for feature pair
print('Probabilities for {} & {}:\n{}'.format(iris.feature_names[feature_pairs[index][0]], iris.feature_names[feature_pairs[index][1]], feature_pair_probability))
    # score = mean probability assigned to the expected class (the diagonal entries)
    feature_pair_score = (feature_pair_probability[0][0]
                          + feature_pair_probability[1][1]
                          + feature_pair_probability[2][2]) / 3
# if it's better than current best feature pair score, update it
if (feature_pair_score > best_pair_score):
best_pair_score = feature_pair_score
best_pair = feature_pairs[index]
best_model = feature_pair_models[index]
# index
index += 1
# print info on the best feature pair
print('Best pair: {} & {}, with score: {}'.format(iris.feature_names[best_pair[0]], iris.feature_names[best_pair[1]], best_pair_score))
###Output
Probabilities for sepal length (cm) & sepal width (cm):
[[0.92347315 0.0585081 0.01801875]
[0.00176572 0.1981595 0.80007478]
[0.05009604 0.37235578 0.57754818]]
Probabilities for sepal length (cm) & petal length (cm):
[[0.97521958 0.02478036 0.00000005]
[0.00105972 0.7765676 0.22237268]
[0.00000087 0.01201376 0.98798537]]
Probabilities for sepal length (cm) & petal width (cm):
[[0.92831857 0.07157087 0.00011056]
[0.00559652 0.62519423 0.36920925]
[0.00001365 0.03313084 0.96685551]]
Probabilities for sepal width (cm) & petal length (cm):
[[0.98200697 0.01799299 0.00000004]
[0.00633308 0.66738949 0.32627743]
[0.0000065 0.01584795 0.98414555]]
Probabilities for sepal width (cm) & petal width (cm):
[[0.95767161 0.04223211 0.00009628]
[0.10787733 0.65224682 0.23987585]
[0.00009767 0.02361994 0.97628239]]
Probabilities for petal length (cm) & petal width (cm):
[[0.97983058 0.02016939 0.00000003]
[0.0024148 0.77883567 0.21874952]
[0.00000027 0.00463449 0.99536524]]
Best pair: petal length (cm) & petal width (cm), with score: 0.9180104999500484
###Markdown
Exercise Option 2 - Advanced Difficulty. As seen below, to show model fit, I graphed the dataset against the predictions of the model. If you look at the graph, it seems the model incorrectly predicted only 4 or 5 of the datapoints.
###Code
# https://scikit-learn.org/stable/auto_examples/linear_model/plot_iris_logistic.html#sphx-glr-auto-examples-linear-model-plot-iris-logistic-py
# Got help from Huxley for this.
# I use the model using the best pair of features for this.
X = iris.data[:, best_pair] # input data, use the best two features
Y = iris.target # expected values of input data
x1 = X[:, 0] # datapoints with only first feature
x2 = X[:, 1] # datapoits with only second feature
# get the min and max values of dataset, with 0.5 padding added
x_min, x_max = x1.min()-0.5, x1.max()+0.5
y_min, y_max = x2.min()-0.5, x2.max()+0.5
step = .01 # step size in the mesh
# https://numpy.org/doc/stable/reference/generated/numpy.arange.html
# https://numpy.org/doc/stable/reference/generated/numpy.meshgrid.html
xx, yy = numpy.meshgrid(numpy.arange(x_min, x_max, step), numpy.arange(y_min, y_max, step))
#print(xx)
#print(yy)
#print(numpy.c_[xx.ravel(), yy.ravel()])
# https://numpy.org/doc/stable/reference/generated/numpy.c_.html
# https://numpy.org/doc/stable/reference/generated/numpy.ravel.html
Z = best_model.predict(numpy.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
plt.figure() # make figure
# https://matplotlib.org/3.1.0/tutorials/colors/colormaps.html
plt.pcolormesh(xx, yy, Z, cmap='Pastel1', shading='auto')
# https://matplotlib.org/api/_as_gen/matplotlib.pyplot.scatter.html
plt.scatter(x1, x2, c=Y, edgecolors='black', cmap=plt.cm.Paired) # plot all the training points with their expected colors
# https://thispointer.com/python-capitalize-the-first-letter-of-each-word-in-a-string/#:~:text=Use%20title()%20to%20capitalize,of%20word%20to%20lower%20case.
# https://www.askpython.com/python/string/remove-character-from-string-python
plt.xlabel(iris.feature_names[best_pair[0]].replace('(cm)', '').title()) # get the name of the x-axis feature, remove units, and capitalize each word
plt.ylabel(iris.feature_names[best_pair[1]].replace('(cm)', '').title()) # get the name of the y-axis feature, remove units, and capitalize each word
# https://matplotlib.org/3.1.1/api/_as_gen/matplotlib.pyplot.xlim.html
plt.xlim(x_min, x_max) # set min and max of x-axis
plt.ylim(y_min, y_max) # set min and max of y-axis
# https://matplotlib.org/api/_as_gen/matplotlib.pyplot.xticks.html
plt.xticks(()) # remove ticks on x-axis
plt.yticks(()) # remove ticks on y-axis
plt.show() # show graph
###Output
_____no_output_____
|
examples/Using_HoloGrid.ipynb
|
###Markdown
Using the `GridEditor`. [Hologridgen](https://github.com/pygridgen/hologridgen) is an interactive tool for the generation of orthonormal grids using [pygridgen](https://github.com/pygridgen/pygridgen) and the [HoloViz](holoviz.org) tool suite, for use within [Jupyter notebooks](https://jupyter.org/) or deployable with [Panel](panel.pyviz.org). This notebook will demonstrate how you can use the primary class in `hologridgen`, called `GridEditor`. You can install `hologridgen` with conda into a Python 3.7 or 3.8 environment as follows:

```
conda install -c jlstevens -c conda-forge hologridgen
```

With `hologridgen` installed, first we import [GeoViews](http://geoviews.org/) and [GeoPandas](http://geopandas.org/) and load the GeoViews bokeh extension:
###Code
import geopandas as gpd
import geoviews as gv
gv.extension('bokeh')
###Output
_____no_output_____
###Markdown
`GridEditor` Basics Now we can import `GridEditor` from `hologridgen`:
###Code
from hologridgen import GridEditor
###Output
_____no_output_____
###Markdown
This class can then be instantiated and given a handle.
###Code
editor = GridEditor()
###Output
_____no_output_____
###Markdown
The main entrypoint on the instance is then the `.view()` method, which we will call shortly. This method displays the Bokeh plot with the following set of tools in the side bar. By default, the 'Tap' tool is selected, as indicated by the blue line on the side. To start defining a grid boundary, select the 'Point Draw' tool and click four times within the axes to define a square (the node marked with the triangle is the start node that was added first). Then hit the 'Generate mesh' button. Have a go replicating the above screenshot in the area below:
###Code
editor.view()
###Output
_____no_output_____
###Markdown
Note that you can do the following:

* You can move the nodes with the 'Point Draw' tool by click-dragging them
* You can delete nodes with the 'Point Draw' tool by clicking them and then hitting `Backspace`
* You can change the grid resolution by changing the `Xres` and `Yres` values before hitting 'Generate mesh'
* You can hide any mesh you have generated by hitting the 'Hide mesh' button.

Now that you have defined a boundary, how can you access it programmatically? To get the corresponding geopandas `DataFrame`, simply use the `.boundary` property:
###Code
editor.boundary
###Output
_____no_output_____
###Markdown
You'll notice that the values in the `x`, `y` and `geometry` columns are not latitudes and longitudes. This is because these values are in the [Web Mercator](https://en.wikipedia.org/wiki/Web_Mercator_projection) projection. What if you want the last generated `pygridgen.grid.Gridgen` object? If available, this can be found using the `.grid` property:
###Code
editor.grid
###Output
_____no_output_____
###Markdown
Setting node polarities. You might have noticed in the previous example that if you draw *five* boundary nodes, the 'Generate mesh' button is greyed out. This is because the total polarity of the nodes has to add up to *four* for mesh generation to be available. To define more complex boundaries, you therefore need to set the node polarity (`beta` values) appropriately. This is done by selecting the 'Tap' tool and clicking on the nodes. Red nodes (by default) contribute +1 to polarity, blue nodes contribute -1 to polarity, and the hollow nodes are neutral, adding zero to the total polarity. The following screenshot shows a more complex boundary you can define using both the 'Point Draw' and 'Tap' tools. Have a go defining this boundary in the view below (and inspecting `.boundary` afterwards, if you wish):
###Code
complex_boundary = GridEditor()
complex_boundary.view()
###Output
_____no_output_____
###Markdown
Have a go playing with the 'Node size' and 'Edge width' sliders (which should be self-evident) as well as the 'Background' drop down that lets you select from a variety of map tile sources. Once you are happy with a boundary, you can also download the corresponding GeoJSON file by clicking the 'Download boundary.geojson' button. Inserting nodes into edgesYou may have noticed one button that hasn't been mentioned yet, namely 'Insert points' which allows you to insert new points into an existing edge. To do this, you need to activate the 'Tap' tool again to use it for its second function of selecting edges. This the 'Tap' tool active, you can click an existing edge to select it (you can see when an edge is selected as the non-selected edges will then be muted in color). With an edge selected, you can now hit the 'Insert points' button to insert new nodes into the edge. As before, you can move these nodes around with the 'Point Draw' tool:Note that you can reset your selection by clicking away from any node or boundary edge. Try inserting and moving around new nodes in one of the boundaries you have defined above. Setting options in the constructorAll the widgets and settings describe so far can be set using keywords in the `GridEditor` constructor. For instance, you can run:
###Code
customized_editor = GridEditor(node_size=20, edge_width=4, xres=10, yres=40, background='EsriOceanBase')
customized_editor.view()
###Output
_____no_output_____
###Markdown
This sets a larger (custom) node size and edge width as well as a custom resolution for the grid along the `x` and `y` directions. All these options are [`param`](https://param.holoviz.org/) `Parameters`, and you can read more about the allowed values (and the docstrings for these settings) by running:

```python
gv.help(GridEditor)
```

Note that not all parameters are exposed as widgets, such as `custom_background` and `focus`. We'll describe these features next. Setting a custom background. By setting the `custom_background` parameter to a HoloViews or GeoViews element, you can use any background you wish. For instance, the following line (corresponding to a [GeoViews example](http://geoviews.org/index.html)) defines a set of polygons for the various countries of the world, colored by population (with Bokeh hover information enabled):
###Code
custom_polygons = gv.Polygons(gpd.read_file(gpd.datasets.get_path('naturalearth_lowres')),
vdims=['pop_est', ('name', 'Country')]).opts(
tools=['hover'], width=600, alpha=0.4, cmap='plasma'
)
###Output
_____no_output_____
###Markdown
Now we simply supply this element in the constructor to the `custom_background` parameter:
###Code
editor_with_custom_background = GridEditor(custom_background=custom_polygons, xres=25, yres=30, node_size=20)
editor_with_custom_background.view()
###Output
_____no_output_____
###Markdown
Note that the Bokeh hover tool is still enabled and the rest of the `GridEditor`'s functionality remains the same as before. Setting the `focus`Once a boundary is defined, it is important to be able to tweak the orthonormal grid. One tool that makes this possible is `pygridgen`'s `Focus` class. Here is a simple example of defining a `Focus` instance:
###Code
import pygridgen as pgg
focus = pgg.Focus()
focus.add_focus(0.50, 'y', factor=5, extent=0.25)
###Output
_____no_output_____
###Markdown
Now you can apply this focus the next the time 'Generate Mesh' button is hit by setting the `focus` parameter on the `GridEditor` instance. For example, to set a focus on the first example in the notebook we can execute:
###Code
editor.focus = focus # Go back and hit 'Generate mesh' in the simple `editor` example above
###Output
_____no_output_____
###Markdown
To remove the `Focus` definition from the mesh generation process, simply set the parameter to `None`:
###Code
editor.focus = None
###Output
_____no_output_____
###Markdown
Saving and loading `GridEditor` state. You can easily serialize the state of `GridEditor` by using the `.data` property. This makes it easy to save a boundary editing session to disk (e.g. as a JSON file) and resume your work later. The serialized data can be passed to `GridEditor` as the first (optional) positional argument:
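For example, a possible round trip through a JSON file (a sketch; it assumes the `.data` payload is JSON-serializable, which may not hold for every session):

```python
import json

# persist the session
with open('boundary_session.json', 'w') as f:
    json.dump(customized_editor.data, f)

# ...later, restore it
with open('boundary_session.json') as f:
    restored = GridEditor(json.load(f))
```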
###Code
serialized_data = customized_editor.data # Inspect this in the notebook (e.g use print)
restored_editor = GridEditor(serialized_data)
restored_editor.view()
###Output
_____no_output_____
|
2. Data Cleaning.ipynb
|
###Markdown
Load Dataset. First, read the combined data from $2014$ to $2018$ from local disk.
###Code
data = pd.read_csv('./data/data_2014_2018.csv')
data.head()
data.info(verbose=False)
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1642574 entries, 0 to 1642573
Columns: 88 entries, loan_amnt to year
dtypes: float64(25), int64(39), object(24)
memory usage: 1.1+ GB
###Markdown
Data Cleaning 1. Features collected after the loan is issued. For this problem, we will assume that our model runs at the moment one begins to apply for the loan. Thus, there should be no information about the user's payment behavior.
###Code
ongo_columns = ['funded_amnt', 'funded_amnt_inv', 'issue_d', 'pymnt_plan', 'out_prncp',
'out_prncp_inv', 'total_pymnt', 'total_pymnt_inv', 'total_rec_prncp',
'policy_code', 'total_rec_int', 'total_rec_late_fee', 'recoveries',
'collection_recovery_fee', 'last_pymnt_d', 'last_pymnt_amnt',
'last_credit_pull_d', 'hardship_flag', 'disbursement_method',
'debt_settlement_flag']
data = data.drop(labels=ongo_columns, axis=1)
data.info(verbose=False)
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1642574 entries, 0 to 1642573
Columns: 68 entries, loan_amnt to year
dtypes: float64(14), int64(37), object(17)
memory usage: 852.2+ MB
###Markdown
2. Parsing `loan_status`
###Code
data['loan_status'].value_counts()
###Output
_____no_output_____
###Markdown
There are several different kinds of loan status. Based on the explanation from [Lending Club](https://help.lendingclub.com/hc/en-us/articles/215488038-What-do-the-different-Note-statuses-mean-), the explanation for each status is listed below:

| Loan Status | Explanation |
| --------------- | ----------------------------------------------------- |
| Current | Loan is up to date on all outstanding payments |
| Fully Paid | Loan has been fully repaid, either at the expiration of the 3- or 5-year term or as a result of a prepayment |
| Default | Loan has not been current for 121 days or more |
| Charged Off | Loan for which there is no longer a reasonable expectation of further payments. Generally, Charge Off occurs no later than 30 days after the Default status is reached. Upon Charge Off, the remaining principal balance of the Note is deducted from the account balance |
| In Grace Period | Loan is past due but within the 15-day grace period |
| Late (16-30) | Loan has not been current for 16 to 30 days |
| Late (31-120) | Loan has not been current for 31 to 120 days |

For this project, we don't care about loans in **Current** status. Instead, we are more interested in whether a loan is **Good** or **Bad**. Here, we consider a loan **Good** if it will be fully paid, and **Bad** if it is **Charged Off**, **Default**, or **Late (16-30 days, or 31-120 days)**. Loans in the **Grace Period** are removed from our data due to uncertainty.
###Code
# only keep the data that we are certainty about their final status
used_status = ['Charged Off', 'Fully Paid', 'Late (16-30 days)', 'Late (31-120 days)', 'Default']
data = data[data['loan_status'].isin(used_status)]
# Encoding the `loan_status`
# status 1: 'Charged Off', 'Late (16-30 days)', 'Late (31-120 days)', 'Default'
# status 0: 'Fully Paid'
data['target'] = 1
data.loc[data['loan_status'] == 'Fully Paid', 'target'] = 0
data.info(verbose=False)
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 820870 entries, 0 to 1642573
Columns: 69 entries, loan_amnt to target
dtypes: float64(14), int64(38), object(17)
memory usage: 438.4+ MB
###Markdown
3. Split into training and test dataset. After the above procedures, we have reduced the dataset size from $1,642,574$ to $820,870$ rows, and the features from $88$ to $68$ (including the `target`).
###Code
# calculate the number of records for each year
year_count = data.groupby('year')['target'].count().reset_index()
year_count = year_count.rename(columns={'target': 'counts'})
year_count['ratio'] = year_count['counts'] / len(data)
year_count
# visualize the time effect
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(16, 6))
sns.countplot(x='year', data=data, ax=ax[0])
ax[0].set_xlabel('Year', fontsize=14)
ax[0].set_ylabel('Count', fontsize=14)
sns.barplot(x='year', y='target', data=data, ax=ax[1])
ax[1].set_xlabel('Year', fontsize=14)
ax[1].set_ylabel('Default Ratio', fontsize=14)
plt.tight_layout()
plt.show()
# drop useless features
data = data.drop(labels='loan_status', axis=1)
data.info(verbose=False)
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 820870 entries, 0 to 1642573
Columns: 68 entries, loan_amnt to target
dtypes: float64(14), int64(38), object(16)
memory usage: 472.1+ MB
###Markdown
Now, let's split the data into training and test sets. Based on the above information, we decide to split the data according to year. More specifically, since the data after $2016$ accounts for about $12\%$ of all the data, we will use the data from $2014$ through $2016$ as the training set and the data from $2017$ through $2018$ as the test set.
###Code
# split into train and test set
train = data[data['year'] < 2017]
test = data[data['year'] >= 2017]
# save to disk
train.to_csv('./data/train.csv', index=False)
test.to_csv('./data/test.csv', index=False)
print('Training set:\t', train.shape, '\t', round(len(train) / len(data), 4))
print('Test set:\t', test.shape, '\t', round(len(test) / len(data), 4))
###Output
Training set: (722143, 68) 0.8797
Test set: (98727, 68) 0.1203
|
faiss/faiss_index_experiment.ipynb
|
###Markdown
###Code
!apt install libomp-dev
!python -m pip install --upgrade faiss faiss-gpu
import shutil
import urllib.request as request
from contextlib import closing
# first we download the Sift1M dataset
with closing(request.urlopen('ftp://ftp.irisa.fr/local/texmex/corpus/sift.tar.gz')) as r:
with open('sift.tar.gz', 'wb') as f:
shutil.copyfileobj(r, f)
import tarfile
tar = tarfile.open('sift.tar.gz', 'r:gz')
tar.extractall()
import numpy as np
def read_fvecs(fp):
a = np.fromfile(fp, dtype='int32')
d = a[0]
return a.reshape(-1, d+1)[:, 1:].copy().view('float32')
# the data to be searched
xb = read_fvecs('./sift/sift_base.fvecs') #1M samples
# some query vectors
xq = read_fvecs('./sift/sift_query.fvecs')
# extract a single query vector
xq = xq[0].reshape(1, xq.shape[1])
xq.shape
xb.shape
xq
###Output
_____no_output_____
###Markdown
IndexFlatL2
- a balance of search speed and search quality
- In most cases, higher search quality means lower search speed.
- With a billion vectors and 100 queries per minute, an exhaustive search would take a very long time.
- Running that would also require insane hardware, so with exhaustive search we cannot have a fast index, but let's see how it is used~
###Code
d = 128
k = 10  # how many results we want
import faiss
index = faiss.IndexFlatIP(d)  # slightly faster than L2; in practice the difference is negligible
index.add(xb)
%%time
D, I = index.search(xq, k)  # top-k distances and indices for the query vector
I  # indices of the 10 nearest neighbors
baseline = I[0].tolist()
baseline
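# Added helper: recall@k of an approximate index against the exact baseline.
# np.in1d flags which of the exact top-k ids the ANN index also returned.
def recall_at_k(approx_ids, exact_ids=baseline):
    return np.in1d(exact_ids, approx_ids).mean()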
###Output
_____no_output_____
###Markdown
LSH (Locality Sensitive Hashing)
- 50 : 50
- hashing function: minimizes collisions, where values end up duplicated in the hash buckets
- a structure similar to Python's dictionary
- find the nearby buckets first, then restrict the search scope to them
- no need to search everywhere
- very intuitive
###Code
nbits = d*5  # scales with the dimensionality of the data
index = faiss.IndexLSH(d,nbits)
index.add(xb)
%%time
D, I = index.search(xq, k)
np.in1d(baseline, I)
###Output
_____no_output_____
###Markdown
- LSH 알고리즘을 통해 얻은 10개의 결과값.- 대부분이 일치함.- 합리적으로 좋은 회수율.- 시간이 빨라짐.- nbits를 조정하여 더 좋은 결괏값을 얻을 수 있음.- recall graph를 보면 차원을 증가시키면 좋은 recall을 얻을 수 있지만, 차원의 저주가 있다.- 낮은 차원일 수록 좋다. HNSW(Hierarchical Navigable Small World)- small world graph를 탐색.- hops로 접근- 인덱스 추가에 더 오래 걸림(더 정확하게 하기 위해)- ef_construction은 크게 해도될것같고.. (대신 메모리 소비 많음)- ef_search는 잘 조절
###Code
M = 16  # number of connections per vertex (typically in the 16-50 range)
ef_search = 16  # search depth: how much of the network to explore at query time (higher or lower)
ef_construction = 64  # how thoroughly the graph is built at construction time; it does not affect search time, so set it high
index = faiss.IndexHNSWFlat(d, M)
index.hnsw.efSearch = ef_search
index.hnsw.efConstruction = ef_construction
index.add(xb)
%%time
D, I = index.search(xq, k)
np.in1d(baseline, I)
###Output
_____no_output_____
###Markdown
IVF (Inverted File Index)
- Super cool index
- based on clustering the datapoints
- Voronoi cells, a.k.a. Dirichlet tessellation
- compute the distance to the centroid of each region
- compare only against the vectors inside the nearest region(s)

Edge Problem
- a query may land near the edge of one of the cells; nprobe is the number of regions to search, and if it is set to 1,
- a nearby centroid's region will never be searched, because the cell edge blocks it. This is solved by widening the search scope (increasing nprobe) so that several cells are searched.
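A small illustrative sweep of `nprobe` (my sketch; it assumes the IVF index built in the next cell and the `baseline` list from the flat index):

```python
# run after building and training the IVF index below
for nprobe in (1, 2, 4, 8):
    index.nprobe = nprobe
    D, I = index.search(xq, k)
    print(nprobe, np.in1d(baseline, I).mean())  # recall@k vs. search scope
```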
###Code
nlist = 128  # number of centroids to partition the data into
quantizer = faiss.IndexFlatIP(d)  # how the final search within the cells is performed
index = faiss.IndexIVFFlat(quantizer, d, nlist)
index.is_trained
index.train(xb)  # unlike the previous indexes, IVF requires training: pass in all vectors before use (it is not slow)
index.add(xb)
index.nprobe = 2  # increasing this to 2 gave better recall
%%time
D, I = index.search(xq, k)  # aside from the low-quality options, this seems to be the fastest
np.in1d(baseline, I)
###Output
_____no_output_____
|
notebooks/karpathy_game.ipynb
|
###Markdown
Average Reward over time
###Code
g.plot_reward(smoothing=100)
###Output
_____no_output_____
###Markdown
Visualizing what the agent is seeingStarting with the ray pointing all the way right, we have one row per ray in clockwise order.The numbers for each ray are the following:- first three numbers are normalized distances to the closest visible (intersecting with the ray) object. If no object is visible then all of them are $1$. If there's many objects in sight, then only the closest one is visible. The numbers represent distance to friend, enemy and wall in order.- the last two numbers represent the speed of moving object (x and y components). Speed of wall is ... zero.Finally the last two numbers in the representation correspond to speed of the hero.
###Code
g.__class__ = KarpathyGame
np.set_printoptions(formatter={'float': (lambda x: '%.2f' % (x,))})
x = g.observe()
new_shape = (x[:-2].shape[0]//g.eye_observation_size, g.eye_observation_size)
print(x[:-2].reshape(new_shape))
print(x[-2:])
g.to_html()
###Output
[[1.00 1.00 0.34 1.00 0.52 0.53]
[1.00 1.00 0.34 1.00 0.52 0.53]
[1.00 1.00 1.00 1.00 0.00 0.00]
[1.00 1.00 1.00 1.00 0.00 0.00]
[1.00 1.00 1.00 1.00 0.00 0.00]
[1.00 1.00 1.00 1.00 0.00 0.00]
[1.00 1.00 1.00 1.00 0.00 0.00]
[0.69 1.00 1.00 1.00 0.50 0.18]
[1.00 1.00 1.00 1.00 0.00 0.00]
[1.00 1.00 1.00 1.00 0.00 0.00]
[1.00 1.00 1.00 1.00 0.00 0.00]
[1.00 1.00 1.00 1.00 0.00 0.00]
[1.00 1.00 1.00 0.79 0.00 0.00]
[1.00 1.00 1.00 0.68 0.00 0.00]
[1.00 0.43 1.00 1.00 -0.00 0.54]
[1.00 0.39 1.00 1.00 -0.00 0.54]
[1.00 1.00 1.00 0.56 0.00 0.00]
[1.00 1.00 1.00 0.57 0.00 0.00]
[1.00 1.00 1.00 0.61 0.00 0.00]
[1.00 1.00 1.00 0.68 0.00 0.00]
[1.00 1.00 1.00 0.79 0.00 0.00]
[1.00 1.00 1.00 1.00 0.00 0.00]
[1.00 1.00 1.00 1.00 0.00 0.00]
[1.00 1.00 1.00 1.00 0.00 0.00]
[1.00 1.00 1.00 1.00 0.00 0.00]
[1.00 1.00 1.00 1.00 0.00 0.00]
[1.00 1.00 1.00 1.00 0.00 0.00]
[1.00 1.00 1.00 1.00 0.00 0.00]
[1.00 1.00 1.00 1.00 0.00 0.00]
[1.00 1.00 1.00 1.00 0.00 0.00]
[1.00 1.00 1.00 1.00 0.00 0.00]
[0.69 1.00 1.00 1.00 0.81 0.33]]
[-1.03 0.75]
###Markdown
Average Reward over time
###Code
g.plot_reward(smoothing=100)
session.run(current_controller.target_network_update)
current_controller.q_network.input_layer.Ws[0].eval()
current_controller.target_q_network.input_layer.Ws[0].eval()
###Output
_____no_output_____
###Markdown
Visualizing what the agent is seeingStarting with the ray pointing all the way right, we have one row per ray in clockwise order.The numbers for each ray are the following:- first three numbers are normalized distances to the closest visible (intersecting with the ray) object. If no object is visible then all of them are $1$. If there's many objects in sight, then only the closest one is visible. The numbers represent distance to friend, enemy and wall in order.- the last two numbers represent the speed of moving object (x and y components). Speed of wall is ... zero.Finally the last two numbers in the representation correspond to speed of the hero.
###Code
g.__class__ = KarpathyGame
np.set_printoptions(formatter={'float': (lambda x: '%.2f' % (x,))})
x = g.observe()
new_shape = (x[:-4].shape[0]//g.eye_observation_size, g.eye_observation_size)
print(x[:-4].reshape(new_shape))
print(x[-4:])
g.to_html()
###Output
[[1.00 0.55 1.00 -0.42 -0.40]
[1.00 1.00 1.00 0.00 0.00]
[1.00 1.00 1.00 0.00 0.00]
[1.00 0.83 1.00 0.42 0.63]
[1.00 1.00 1.00 0.00 0.00]
[1.00 1.00 1.00 0.00 0.00]
[1.00 1.00 1.00 0.00 0.00]
[1.00 1.00 1.00 0.00 0.00]
[1.00 1.00 1.00 0.00 0.00]
[1.00 1.00 1.00 0.00 0.00]
[1.00 1.00 1.00 0.00 0.00]
[1.00 1.00 1.00 0.00 0.00]
[1.00 1.00 1.00 0.00 0.00]
[1.00 1.00 1.00 0.00 0.00]
[1.00 1.00 1.00 0.00 0.00]
[1.00 0.44 1.00 -0.18 0.76]
[1.00 0.46 1.00 -0.18 0.76]
[1.00 1.00 1.00 0.00 0.00]
[0.89 1.00 1.00 -0.95 0.78]
[1.00 1.00 1.00 0.00 0.00]
[1.00 1.00 1.00 0.00 0.00]
[1.00 0.44 1.00 0.45 -0.81]
[1.00 0.20 1.00 -0.64 0.14]
[1.00 0.19 1.00 -0.64 0.14]
[1.00 0.21 1.00 -0.64 0.14]
[1.00 0.57 1.00 0.56 0.78]
[1.00 1.00 1.00 0.00 0.00]
[1.00 1.00 1.00 0.00 0.00]
[1.00 0.92 1.00 0.41 0.77]
[1.00 1.00 1.00 0.00 0.00]
[1.00 1.00 1.00 0.00 0.00]
[1.00 1.00 1.00 0.00 0.00]]
[1.00 -0.94 -0.25 0.46]
|
u1/02_PythonPrimer.ipynb
|
###Markdown
First Steps in Python. Variables. Python is fundamentally object-oriented. That means not only that you can program in an object-oriented style in Python; the guiding principle is: *everything in Python is an object*. So even basic data types such as `int`, `float` and `str`, as well as functions, are objects. Variables in Python are therefore always references to objects. Like many other scripting languages, Python is dynamically typed. That means you do not have to declare a type when defining a variable. Python automatically derives the appropriate type, or rather it chooses the "best fitting" type. You can therefore define variables in Python as follows:
###Code
a = 42
b = 1.23
ab = 'Hallo'
###Output
_____no_output_____
###Markdown
The same rules apply to variable names as to identifiers in C/C++. Variables must start with a letter or an underscore and, from the second character on, may consist of any sequence of letters, digits and underscores. There are, however, some conventions for choosing identifiers. Variables starting with two underscores are considered *private*; names with two underscores at the beginning and the end are reserved for special attributes and methods ("*magic methods*"). The scalar, i.e. elementary or non-composite, data types in Python are:- `int` integers- `float` floating-point numbers with 64-bit precision- `complex` complex numbers- `bool` Boolean type with the values `True` and `False`- `NoneType` signals the absence of a reference, similar to `NULL` or `NIL` in other languages. The types `str` for strings and `bytes` for sequences of 8-bit (unsigned) values (for processing binary data) belong to the *sequential data types*. Operations. For most data types, the familiar operations (`+`, `-`, `*`, `/`) exist with their usual meaning. In addition there is the operator `//` for integer division, the modulo operator `%`, and the power operator `**`.
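A short added demonstration of these scalar types:

```python
for value in (42, 1.23, 3+4j, True, None):
    print(type(value).__name__, '->', value)
```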
###Code
a = 2 + 1.23
b = 22.2//3
c = "Hallo " + "Welt"
d = 2**8
print(a, b, c, d)
###Output
_____no_output_____
###Markdown
The `print` function, as used above, is needed quite often. If you call it with an (arbitrary) sequence of parameters, a suitable *print* method is called for each variable according to its type. In Python this method is called `__str__()`; it roughly corresponds to the `toString()` method in Java. To get formatted output, you can pass a format string with placeholders, much like the `printf` function in C. The variables to be substituted at the placeholders are then supplied via the modulo operator. For our example above, this looks like this:
###Code
print("a = %f, b = %d, c = %s, d = %s" % (a, b, c, d))
###Output
_____no_output_____
###Markdown
Python is strongly typed. That is, variables always have a definite type and no implicit type conversion can take place anywhere. Every change of type requires an explicit type conversion. An expression like `"Hallo"+2` cannot be evaluated, because the `+` operation is not defined for a string and an integer. In that case you can perform an explicit conversion, e.g. from `int` to `str`:
###Code
"Hallo" + str(2)
###Output
_____no_output_____
###Markdown
Sequential data types. Sequential data types are a class of data types that manage sequences of **identical or different elements**. Lists and tuples can hold arbitrary sequences of data. The stored elements have a defined order and can be accessed via unique indices. Lists are mutable, i.e. you can change, delete or add individual elements. Tuples are immutable. This means that every modification creates a completely new object containing the changed elements.
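A small added example of the difference in mutability:

```python
a = [1, 2, 3]
a[0] = 99            # fine: lists are mutable
b = (1, 2, 3)
try:
    b[0] = 99        # tuples are immutable
except TypeError as e:
    print(e)         # 'tuple' object does not support item assignment
```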
###Code
a = [3.23, 7.0, "Hallo Welt", 256]
b = (3.23, 7.0, "Hallo Welt", 256)
print("Liste a = %s\nTupel b =%s" % (a,b) )
print("Das dritte Element von b ist " + b[2])
print("Gleiche Referenz? %s. Gleicher Inhalt? %s" % (a == b, set(a)==set(b)))
###Output
_____no_output_____
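###Markdown
A minimal sketch of the mutability difference, with made-up values: the list can be changed in place, while the same operation on a tuple raises a `TypeError`.
###Code
nums = [1, 2, 3]
nums[0] = 99          # fine: lists are mutable
print(nums)
point = (1, 2, 3)
try:
    point[0] = 99     # tuples are immutable
except TypeError as e:
    print('TypeError:', e)
###Output
_____no_output_____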
###Markdown
The example above brings us directly to the next data type, sets. As with sets in mathematics, a set in Python can contain each object only once. So if we turn the list [4,4,4,4,3,3,3,2,2,1] into a set, it has the following elements:
###Code
set([4,4,4,4,3,3,3,2,2,1])
###Output
_____no_output_____
###Markdown
The elements now not only appear just once, they have also been reordered. Do not be misled here: the elements of a set are always unordered. You cannot rely on any particular ordering, even if the output sometimes looks sorted.

Another built-in collection type is the dictionary (the literal German translation *Wörterbuch* does not fit too well here). Dictionaries are a set of *key-value pairs*: every value in the dictionary is stored under a freely chosen key and can be accessed via that key.
###Code
haupstaedte = {"DE" : "Berlin", "FR" : "Paris", "US" : "Washington", "CH" : "Zurich"}
print(haupstaedte["FR"])
haupstaedte["US"] = "Washington, D.C."
print(haupstaedte["US"])
###Output
_____no_output_____
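###Markdown
A few more common dictionary operations, shown as a short sketch reusing the `haupstaedte` dictionary from above: membership tests check the keys, and `get()` allows a default for missing keys.
###Code
print("DE" in haupstaedte)                 # True: 'in' checks the keys
print(haupstaedte.get("IT", "unknown"))    # default value, since there is no entry for 'IT'
print(len(haupstaedte))                    # number of key-value pairs
###Output
_____no_output_____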
###Markdown
Functions

Functions in Python are defined with the keyword `def`. The syntax of a function definition looks like this:

```python
def myfunc(arg1, arg2, ..., argN):
    '''documentation'''
    # code
    return <value>
```

This defines the function `myfunc`, which can be called with the parameters `arg1, arg2, ..., argN`.

Here we also see another Python concept that we have not mentioned so far: code is structured into blocks via **indentation**. For a function this means that the body must be indented one level relative to the function definition. If the body contains further control structures, e.g. loops or conditions, additional indentation levels are needed. Consider the following example:
###Code
def gib_was_aus():
print("Eins")
print("Zwei")
print("Drei")
gib_was_aus()
###Output
_____no_output_____
###Markdown
First, a function `gib_was_aus` is defined. The statement `print("Drei")` is no longer indented and therefore no longer belongs to the function. Functions can be defined almost anywhere, e.g. also inside other functions.

Values are returned, as in other programming languages, with the keyword `return`. If several values need to be returned, they can for example be packed into a tuple:
###Code
def inc(a, b, c):
return a+1, b+1, c+"B"
a=b=1
c="A"
a,b,c = inc(a,b,c)
print(a,b,c)
###Output
_____no_output_____
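###Markdown
Parameters can also be given default values, so callers may omit them, and the docstring from the definition template above can be read back with `help()`. A small sketch:
###Code
def greet(name, greeting="Hallo"):
    '''Return a greeting for the given name.'''
    return greeting + " " + name

print(greet("Welt"))            # uses the default greeting
print(greet("World", "Hello")) # overrides it
###Output
_____no_output_____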
###Markdown
Branching

We have not yet covered control structures, i.e. branches and loops. A condition or branch works in Python (as usual) via an `if`-`else` construct. Here, too, blocks are structured by indentation.
###Code
a=2
if a==0:
print("a ist Null")
else:
print("a ist nicht Null")
###Output
_____no_output_____
###Markdown
To avoid deep nesting, there is also an `elif` statement:
###Code
a=2
if a<0:
print("a ist negativ")
elif a>0:
print("a ist positiv")
else:
print("a ist Null")
###Output
_____no_output_____
###Markdown
Loops

Python has two loop types, `while` and `for`, the latter with a somewhat unusual syntax. The `while` loop works as in many familiar programming languages:
###Code
i = 5
while i>0:
print(i)
i -= 2
###Output
_____no_output_____
###Markdown
Unlike in C/C++ or Java, a `for` loop in Python does not run over a counter variable but over *the elements of an iterable data type*. We have already met some examples of iterable types among the sequential data types. For instance, we can visit all elements of a dictionary with a `for` loop:
###Code
haupstaedte = {"DE" : "Berlin", "FR" : "Paris", "US" : "Washington", "CH" : "Zurich"}
for s in haupstaedte:
print(haupstaedte[s])
###Output
_____no_output_____
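###Markdown
If you need keys and values at the same time, iterate over `items()`, which yields the key-value pairs. A short sketch using the same dictionary:
###Code
for code, city in haupstaedte.items():
    print(code, "->", city)
###Output
_____no_output_____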
###Markdown
We see that the loop variable takes on all the keys of the dictionary. With a list, the loop iterates over all values:
###Code
a = [3.23, 7.0, "Hallo Welt", 256]
for s in a:
print(s)
###Output
_____no_output_____
###Markdown
Besides the sequential data types, **generators** and similar lazy objects also provide iterable sequences of values. The best-known example is `range()`. `range` can take several arguments. With a single argument `E`, it runs from 0 to `E-1`. `range(S, E)` runs from `S` to `E-1`, and `range(S, E, K)` runs from `S` to `E-1` with step size `K`.
###Code
print("Ein Parameter:", end=" ")
for s in range(5): print(s, end=" ")
print("\nZwei Parameter:", end=" ")
for s in range(2,5): print(s, end=" ")
print("\nDrei Parameter:", end=" ")
for s in range(0,5,2): print(s, end=" ")
###Output
_____no_output_____
###Markdown
By the way, the extra argument `end=" "` in the `print` statements above suppresses the line break; without this parameter, all values would be printed one per line.

This concludes our first Python *crash course*. You have now seen the most important elements of Python syntax. Of course, these examples only show a small excerpt; the language is considerably larger, and many concepts, such as classes and modules, have not even been mentioned. The best way to learn is to simply try Python out by adapting and modifying existing examples. Python notebooks are an ideal environment for that: you can experiment with code in the code cells, and use the markdown cells for notes documenting your code or describing your steps.

Exercises

The following notebooks may also contain exercises for you to work on independently. You can submit your solutions via the Jupyter notebook. To test the submission process, here is a mini exercise: implement a Python function that returns the string *Hello World*!
###Code
def hello():
return "Hello World"
assert hello()=="Hello World"
###Output
_____no_output_____
|
learning_notebooks/Part 1 - Predicting timeseries in the real world new.ipynb
|
###Markdown
Part 1 - Predicting timeseries in the real world

Predicting in the real world is much less theoretical than what we showed you in the previous BLUs. As with so much, experience plays a big part in things like tuning hyper-parameters and choosing models, but at the end of the day you are going to follow best practices when you can, grid search while your CPU allows you, and avoid complicated problems with libraries and APIs whenever those present themselves.

Let's go predict some timeseries.
###Code
import utils
from sklearn.metrics import r2_score
import pandas as pd
import numpy as np
import statsmodels.api as sm # <--- Yay! API!
%matplotlib inline
import matplotlib.pyplot as plt
plt.rcParams["figure.figsize"] = (16, 5)
import itertools
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)
warnings.filterwarnings("ignore") # specify to ignore warning messages
airlines = utils.load_airline_data()
###Output
_____no_output_____
###Markdown
Predicting out of time

We will start, as so often happens, with a confession. Same dataset as we had in the previous BLU:
###Code
airlines = airlines[:'1957']
###Output
_____no_output_____
###Markdown
Predicting the Airlines dataset in the real world

Alright! Same problem as BLU2, but this time without caring so much about low-level stuff. Let's have some fun!
###Code
airlines.plot(figsize=(16, 4));
###Output
_____no_output_____
###Markdown
This is the SARIMAX. What does that stand for?

The **Autoregressive Integrated Moving Average** part we know from BLU2. _(well... kind of anyway)_ Now what about the new bits?

- **`Seasonal`**: as the name suggests, this model can actually deal with seasonality. Coool....
- **`With Exogenous`** roughly means we can add external information. For instance we can include the temperature time series to predict ice cream sales, which is surely useful. Exogenous variables, as you learned in [BLU2](https://github.com/LDSSA/batch2-BLU02/blob/master/Learning%20Notebooks/BLU02%20-%20Learning%20Notebook%20-%20Part%201%20of%203%20-%20Time%20series%20modelling%20concepts.ipynb), are introduced from or produced outside the organism or system, and don't change with the predictions of the system.

What are the parameters? These we already know from [BLU](https://github.com/LDSSA/batch2-BLU02/blob/master/Learning%20Notebooks/BLU02%20-%20Learning%20Notebook%20-%20Part%203%20of%203%20-%20Prediction.ipynb):

> p = 0
> d = 1
> q = 1

But now we have a second bunch. The first 3 are the same as before, but for the seasonal part:

> P = 1
> D = 1
> Q = 1

The last new parameter, `S`, is an integer giving the periodicity (number of periods in a season). We normally have a decent intuition for this parameter:

- If we have daily data and suspect we may have weekly trends, we may want `S` to be 7.
- If the data is monthly and we think the time of the year may count, maybe try `S` at 12.

> S = 12

To know the SARIMAX in detail you can and should take a closer look at [the documentation](http://www.statsmodels.org/dev/generated/statsmodels.tsa.statespace.sarimax.SARIMAX.html). But for now, let's just do exactly what we did in BLU2, without the crazy low-level details:
###Code
model = sm.tsa.statespace.SARIMAX(airlines, # <-- holy crap just passed it pandas? No ".values"? No .diff?
order=(0, 1, 1), # <-- keeping our order as before in BLU2
seasonal_order=(1, 1, 1, 12)) # <-- We'll get into how we found these hyper params later
###Output
_____no_output_____
###Markdown
Now to fit the model
###Code
results = model.fit()
###Output
_____no_output_____
###Markdown
And get predictions:
###Code
pred = results.get_prediction(start=airlines.index.min(), # <--- Start at the first point we have
dynamic=False) # <--- Dynamic means "use only the data
# from the past, we'll use this eventually
mean_predictions = pred.predicted_mean
print('Can this possibly return a... %s OH MY GOD IT DID THAT IS SO AWESOME!!' % type(mean_predictions))
###Output
Can this possibly return a... <class 'pandas.core.series.Series'> OH MY GOD IT DID THAT IS SO AWESOME!!
###Markdown
Done. Seriously, check this out:
###Code
airlines.plot(label='observed')
mean_predictions.plot(label='One-step ahead Forecast with dynamic=False', alpha=.7)
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Boum. Confidence intervals? Why yes please:
###Code
pred_ci = pred.conf_int()
airlines.plot(label='observed')
mean_predictions.plot(label='One-step ahead Forecast with dynamic=False', alpha=.7)
# Let's use some matplotlib code to fill between the upper and lower confidence bound with grey
plt.fill_between(pred_ci.index,
pred_ci['lower passengers_thousands'],
pred_ci['upper passengers_thousands'],
color='k',
alpha=.2)
plt.ylim([0, 700])
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Kind of makes sense: at the start we didn't have enough data to predict much, so the uncertainty band is pretty insane.
###Code
def plot_predictions(series_, pred_):
"""
Remember Sam told us to build functions as we go? Let's not write this stuff again.
"""
mean_predictions_ = pred_.predicted_mean
pred_ci_ = pred_.conf_int()
series_.plot(label='observed')
mean_predictions_.plot(label='predicted',
alpha=.7)
plt.fill_between(pred_ci_.index,
pred_ci_['lower passengers_thousands'],
pred_ci_['upper passengers_thousands'],
color='k',
alpha=.2)
plt.ylim([0, 700])
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Now, what if we had stopped feeding it data, and asked it to predict the last 30 periods?
###Code
# We want to make 30 steps out of time
train_up_to_step = len(airlines) - 30
# remember the dynamic argument? From this index on, predictions feed on prior forecasts only
pred = results.get_prediction(start=airlines.index.min(),
dynamic=train_up_to_step)
plot_predictions(series_=airlines, pred_=pred)
###Output
_____no_output_____
###Markdown
Pretty cool, we can see the uncertainty increasing as we move. Also, what if we wanted to forecast outside of our "known" dates? For this we will use the `get_forecast` method, which allows us to go beyond what data we have:
###Code
forecast = results.get_forecast(steps=15)
forecast_ci = forecast.conf_int()
plot_predictions(series_=airlines, pred_=forecast)
###Output
_____no_output_____
###Markdown
Looks like a decent forecast! [Or is it](https://thumbs.gfycat.com/BitterRingedGrayling-size_restricted.gif)? Let's quantify the quality of our predictions.

Validation metrics

There are two metrics we will use to validate our results. The first one, $R^{2}$, should be familiar to you from [SLU12](https://goo.gl/6Nvqgs). It is generally used only to validate the test set results.

_Optional bit: the demonstration of why $R^{2}$ is unsuitable for the training set is beyond the scope here, but the intuition is simple enough: a really complex model can overfit and get a high $R^{2}$, so if we use it as the metric to optimize when choosing the model, we incentivize it to overfit._

Let's evaluate some test set results with R2:
###Code
pred = results.get_prediction(start=train_up_to_step,
dynamic=False)
y_pred = pred.predicted_mean
y_true = airlines.iloc[train_up_to_step::]
# Compute the mean square error
r2 = r2_score(y_pred=y_pred, y_true=y_true)
print('The R2 of our forecasts is {}'.format(round(r2, 2)))
###Output
The R2 of our forecasts is 0.97
###Markdown
AIC

As we mentioned, $R^{2}$ is limited when applied to the training set. This is where AIC is a better choice. The AIC (Akaike information criterion) is a metric that simultaneously measures how well the model fits the data while controlling for how complex the model is: if the model is very complex, the expectation of how well it must fit the data also goes up. It is therefore useful for comparing models. If you (for some weird reason) feel compelled to calculate it by hand, [this post](https://stats.stackexchange.com/questions/87345/calculating-aic-by-hand-in-r?utm_medium=organic&utm_source=google_rich_qa&utm_campaign=google_rich_qa) explains how to do so. Then again, it's sunny and beautiful outside, and statsmodels has got your back.
###Code
results.aic
###Output
_____no_output_____
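###Markdown
As a sanity check, the AIC can also be recomputed by hand from the fitted log-likelihood via AIC = 2k - 2 ln(L). This is a sketch under the assumption that k is simply the number of estimated parameters; statsmodels may count parameters slightly differently for some model configurations, so small deviations are possible.
###Code
k = len(results.params)               # number of estimated parameters (assumption)
manual_aic = 2 * k - 2 * results.llf  # results.llf is the fitted log-likelihood
print(manual_aic, results.aic)
###Output
_____no_output_____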
###Markdown
Hyper parameter optimization

Now, I've been using some very specific parameters:

> p = 0
> d = 1
> q = 1
> P = 1
> D = 1
> Q = 1
> S = 12

I discovered these parameters by doing one of the following:

> A) _~~developing a strong intuition about how statsmodels works and about trends in the airline industry~~_
> B) throwing a hyper parameter optimizer at the problem and making myself a nice cup of tea while it ran.

Let's build a Hyper Parameter Optimizer (fancy!)
###Code
p = d = q = P = D = Q = range(0, 2) # <--- all of the parameters between 0 and 2
S = [7, 14] # <-- let's pretend we have a couple of hypotheses
params_combinations = list(itertools.product(p, d, q, P, D, Q, S))
inputs = [list(x) for x in params_combinations]
###Output
_____no_output_____
###Markdown
Great. Now, for each set of params, let's get the aic:
###Code
def get_aic(series_, params):
# unpack the params
p, d, q, P, D, Q, S = params
# fit a model with those params
model = sm.tsa.statespace.SARIMAX(series_,
order=(p, d, q),
seasonal_order=(P, D, Q, S),
enforce_stationarity=False,
enforce_invertibility=False)
# fit the model
results = model.fit()
# return the aic
return results.aic
###Output
_____no_output_____
###Markdown
Run, Forrest, run!
###Code
%%time
aic_scores = {}
params_index = {}
for i in range(len(inputs)):
try:
param_set = inputs[i]
aic = get_aic(airlines, param_set)
aic_scores[i] = aic
params_index[i] = param_set
# this will fail sometimes with impossible parameter combinations.
# ... and I'm too lazy to remember what they are.
except Exception as e:
continue
###Output
CPU times: user 33.7 s, sys: 1.87 s, total: 35.5 s
Wall time: 41.8 s
###Markdown
Wrangle these results into a usable dataframe _(note: don't worry if you don't understand this code too well for now)_
###Code
temp = pd.DataFrame(params_index).T
temp.columns = ['p', 'd', 'q', 'P', 'D', 'Q', 'S']
temp['aic'] = pd.Series(aic_scores)
temp.sort_values('aic').head()
###Output
_____no_output_____
###Markdown
Great! What were the best params?
###Code
best_model_params = temp.aic.idxmin()
temp.loc[best_model_params]
###Output
_____no_output_____
###Markdown
Great! Let's fit that model then:
###Code
best_model = sm.tsa.statespace.SARIMAX(airlines,
order=(0, 1, 1),
seasonal_order=(1, 1, 1, 12),
enforce_stationarity=False,
enforce_invertibility=False)
results = best_model.fit()
predictions_best_model = results.get_prediction(dynamic=len(airlines) - 10)
plot_predictions(series_=airlines, pred_=predictions_best_model)
###Output
_____no_output_____
###Markdown
Using season and exogenous variables

So far, we have been using only the endogenous variable to create predictions. Also, the time series we used are... kind of easy. Highly seasonal and periodic, even though the variance might increase over time. But they are easy. So, what about making things a "little bit" harder? Like, for example, predicting US GDP growth.
###Code
import pandas as pd
from statsmodels import api as sm
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings('ignore')
###Output
_____no_output_____
###Markdown
If you want additional insights on each of the time series of this dataset, check this page at [thebalance.com](https://www.thebalance.com/components-of-gdp-explanation-formula-and-chart-3306015).
###Code
data = pd.read_csv('../data/US_Production_Q_Data_Growth_Rates.csv')
data.Year = pd.to_datetime(data.Year)
data = data.set_index('Year')
data.head(10)
data['GDP Growth'].plot(figsize=(16, 6));
###Output
_____no_output_____
###Markdown
Can you see any seasonality? Me neither. But you might notice that, from time to time, there is a big drop in GDP Growth (to negative values). That is related to economic cycles. Hmmm... cycles... maybe a cyclical component is present?

**ALSO**, remember the 2007 crisis? Well, it is no surprise that we had the biggest recession near ~2009. Let's plot all time series together to see if there is a pattern.
###Code
data.plot(figsize=(16, 6));
###Output
_____no_output_____
###Markdown
Well...maybe it wasn't a good idea. Let's instead make several plots, one for each pair (GDP Growth, OTHER TIME SERIES)
###Code
from itertools import product
for gdp, other in product(['GDP Growth'], data.drop('GDP Growth', axis=1).columns):
data[[gdp, other]].plot(figsize=(16, 6))
plt.show()
###Output
_____no_output_____
###Markdown
By visual inspection, we see that Consumption Growth and Labor Growth follow GDP Growth like two puppies. So, one reasonable hypothesis is: those two time series are highly predictive of GDP Growth. Instead of trying to find the best model using grid search, we will let you explore the effect of each exogenous variable and model parameter on the forecast. First, let's prepare the train-test split and also add some cyclical features for month and decade.
###Code
import numpy as np
X = data.drop('GDP Growth', axis=1)
X['month (cosine)'] = np.cos(2 * np.pi * X.index.month/ 12)
X['month (sin)'] = np.sin(2 * np.pi * X.index.month/ 12)
X['decade (cosine)'] = np.cos(2 * np.pi * (X.index.year % 10) / 10)
X['decade (sine)'] = np.sin(2 * np.pi * (X.index.year % 10) / 10)
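# Why sine/cosine pairs? A single integer feature would put the end of a cycle
# maximally far from its start (e.g. December vs. January); the two trigonometric
# features place neighbouring periods close together in feature space.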
y = data['GDP Growth']
train_percentage = 0.5
train_size = int(X.shape[0] * train_percentage)
X_train, y_train = X.iloc[:train_size], y.iloc[:train_size]
X_test, y_test = X.iloc[train_size:], y.iloc[train_size:]
from ipywidgets import interact
from ipywidgets import fixed
from ipywidgets import FloatSlider
from ipywidgets import IntSlider
from ipywidgets import Checkbox
from ipywidgets import Dropdown
from ipywidgets import SelectMultiple
def sarimax_fit_and_plot(endo_train, endo_test,
exog_train=None, exog_test=None,
use_exog=False,
exog_selected=None,
trend='n',
maxiter=50,
forecast_steps=None,
p=1, d=0, q=0,
P=0, D=0, Q=0, s=0):
if use_exog:
exog_selected = list(exog_selected)
exog_train = exog_train[exog_selected]
exog_test = exog_test[exog_selected]
if use_exog:
sarimax = sm.tsa.SARIMAX(endo_train, exog=exog_train,
order=(p, d, q), trend=trend,
seasonal_order=(P, D, Q, s))
else:
sarimax = sm.tsa.SARIMAX(endo_train,
order=(p, d, q), trend=trend,
seasonal_order=(P, D, Q, s))
# note: trend is a model parameter (set above), not a fit() argument
sarimax_results = sarimax.fit(maxiter=maxiter)
if forecast_steps is None:
forecast_steps = len(y_test)
if use_exog:
exog_test = exog_test.iloc[:forecast_steps]
forecast = sarimax_results.get_forecast(
steps=forecast_steps, exog=exog_test).predicted_mean
else:
forecast = sarimax_results.get_forecast(
steps=forecast_steps).predicted_mean
plot_forecast_target(forecast, endo_test)
def plot_forecast_target(forecast, target):
plt.figure(figsize=(16, 6))
forecast.plot(label="prediction")
target = target.iloc[:len(forecast)]
target.plot(label="target")
plt.ylabel(target.name)
plt.legend()
plt.title("R²: {}".format(r2_score(target, forecast)))
interact(sarimax_fit_and_plot,
endo_train=fixed(y_train),
endo_test=fixed(y_test),
exog_train=fixed(X_train),
exog_test=fixed(X_test),
use_exog=Dropdown(options=[True, False]),
exog_selected=SelectMultiple(
options=list(X.columns),
value=[X.columns[0]],
rows=X.shape[1],
description='Exogenous features',
disabled=False
),
trend=Dropdown(options=['n','c','t','ct']),
maxiter=IntSlider(value=50,
min=10,
max=5000,
step=10),
forecast_steps=IntSlider(value=10,
min=1,
max=len(y_test),
step=1),
p=IntSlider(value=1,
min=0,
max=20,
step=1),
d=IntSlider(value=0,
min=0,
max=2,
step=1),
q=IntSlider(value=0,
min=0,
max=10,
step=1),
P=IntSlider(value=0,
min=0,
max=10,
step=1),
D=IntSlider(value=0,
min=0,
max=2,
step=1),
Q=IntSlider(value=0,
min=0,
max=10,
step=1),
s=IntSlider(value=0,
min=0,
max=40,
step=1));
###Output
_____no_output_____
|
.ipynb_checkpoints/categorical_embedding-checkpoint.ipynb
|
###Markdown
Visualize dataset using categorical embedding

The dataset for this project originates from the [UCI Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Census+Income). The dataset was donated by Ron Kohavi and Barry Becker, after being published in the article _"Scaling Up the Accuracy of Naive-Bayes Classifiers: A Decision-Tree Hybrid"_. The article by Ron Kohavi can be found [online](https://www.aaai.org/Papers/KDD/1996/KDD96-033.pdf). The data we investigate here contains small changes to the original dataset, such as removing the `'fnlwgt'` feature and records with missing or ill-formatted entries.
###Code
import warnings
warnings.simplefilter('ignore')
# Import libraries necessary for this project
import numpy as np
import pandas as pd
from time import time
from IPython.display import display # Allows the use of display() for DataFrames
# Import sklearn.preprocessing.StandardScaler
from sklearn.preprocessing import MinMaxScaler
# Import train_test_split
from sklearn.model_selection import train_test_split
# Import two metrics from sklearn - fbeta_score and accuracy_score
from sklearn.metrics import fbeta_score
from sklearn.metrics import accuracy_score
# Import the three supervised learning models from sklearn
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
# Import 'GridSearchCV', 'make_scorer', and any other necessary libraries
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import make_scorer
# Import functionality for cloning a model
from sklearn.base import clone
import altair as alt
alt.data_transformers.enable('json')
# Pretty display for notebooks
%matplotlib inline
from gensim.models import Word2Vec
# TSNE
import time
from sklearn.manifold import TSNE
# Load the Census dataset
data = pd.read_csv("census.csv")
data.head()
###Output
_____no_output_____
###Markdown
**Featureset Exploration**

* **age**: Continuous.
* **workclass**: Categorical - Private, Self-emp-not-inc, Self-emp-inc, Federal-gov, Local-gov, State-gov, Without-pay, Never-worked.
* **education**: Categorical - Bachelors, Some-college, 11th, HS-grad, Prof-school, Assoc-acdm, Assoc-voc, 9th, 7th-8th, 12th, Masters, 1st-4th, 10th, Doctorate, 5th-6th, Preschool.
* **education-num**: Continuous.
* **marital-status**: Categorical - Married-civ-spouse, Divorced, Never-married, Separated, Widowed, Married-spouse-absent, Married-AF-spouse.
* **occupation**: Categorical - Tech-support, Craft-repair, Other-service, Sales, Exec-managerial, Prof-specialty, Handlers-cleaners, Machine-op-inspct, Adm-clerical, Farming-fishing, Transport-moving, Priv-house-serv, Protective-serv, Armed-Forces.
* **relationship**: Categorical - Wife, Own-child, Husband, Not-in-family, Other-relative, Unmarried.
* **race**: Categorical - Black, White, Asian-Pac-Islander, Amer-Indian-Eskimo, Other.
* **sex**: Categorical - Female, Male.
* **capital-gain**: Continuous.
* **capital-loss**: Continuous.
* **hours-per-week**: Continuous.
* **native-country**: Categorical - United-States, Cambodia, England, Puerto-Rico, Canada, Germany, Outlying-US(Guam-USVI-etc), India, Japan, Greece, South, China, Cuba, Iran, Honduras, Philippines, Italy, Poland, Jamaica, Vietnam, Mexico, Portugal, Ireland, France, Dominican-Republic, Laos, Ecuador, Taiwan, Haiti, Columbia, Hungary, Guatemala, Nicaragua, Scotland, Thailand, Yugoslavia, El-Salvador, Trinadad&Tobago, Peru, Hong, Holand-Netherlands.

Preparing the Data

Before data can be used as input for machine learning algorithms, it often must be cleaned, formatted, and restructured. Fortunately, for this dataset there are no invalid or missing entries we must deal with; however, there are some qualities about certain features that must be adjusted.

Normalizing Numerical Features

In addition to performing transformations on features that are highly skewed, it is often good practice to perform some type of scaling on numerical features. Applying a scaling to the data does not change the shape of each feature's distribution (such as `'capital-gain'` or `'capital-loss'` above); however, normalization ensures that each feature is treated equally when applying supervised learners. Note that once scaling is applied, observing the data in its raw form will no longer have the same original meaning.
###Code
# Initialize a scaler, then apply it to the features
scaler = MinMaxScaler() # default=(0, 1)
numerical = ['age', 'education-num', 'capital-gain', 'capital-loss', 'hours-per-week']
numerical_normalized = ['age_normalized', 'education-num_normalized', 'capital-gain_normalized',
'capital-loss_normalized', 'hours-per-week_normalized']
normalized_data = data
normalized_data[numerical_normalized] = data[numerical]
normalized_data[numerical_normalized] = scaler.fit_transform(normalized_data[numerical_normalized])
normalized_data.head()
categorical = ['workclass', 'education_level', 'marital-status', 'occupation', 'relationship',
'race', 'sex', 'native-country']
def getSentence(x):
arr = []
for col in categorical:
arr.append(x[col].strip())
return arr
normalized_data['combined-categories'] = normalized_data.apply(getSentence, axis=1)
normalized_data.head()
# window size of 8 includes all words in context
# ns_exponent of 0.0 samples all words equally
model = Word2Vec(list(normalized_data['combined-categories']), min_count=1, size=32, window=8, ns_exponent=0.0,
workers=8, iter=100)
# test vector for one of the categorical values
model['Private']
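# Optionally inspect the learned embedding space: categories that often co-occur
# in the same row should end up close together (gensim 3.x API, matching the
# model['Private'] indexing above).
model.most_similar('Private', topn=5)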
def getCategoryArray(x):
arr = []
for col in categorical:
arr.append(model[x[col].strip()])
return np.mean(np.array(arr), axis=0)
normalized_data['categories_arr'] = normalized_data.apply(getCategoryArray, axis=1)
normalized_data.head()
normalizer = np.amax(list(normalized_data['categories_arr'])) - np.amin(list(normalized_data['categories_arr']))
def getCombinedArray(x):
arr = x['categories_arr']
for col in numerical_normalized:
arr = np.append(arr, x[col]*normalizer)
return arr
normalized_data['combined_arr'] = normalized_data.apply(getCombinedArray, axis=1)
normalized_data.head()
###Output
_____no_output_____
###Markdown
Visualize
###Code
def performTSNE(df, col, out_x='x', out_y='y', verbose=1, perplexity=40, n_iter=500):
'''
perform tSNE (t-distributed Stochastic Neighbor Embedding).
A tool to visualize high-dimensional data. It converts similarities between data points to joint probabilities and tries to minimize the Kullback-Leibler divergence between the joint probabilities of the low-dimensional embedding and the high-dimensional data.
input:
df - data frame
col - name of col with embedding data
out_x - name of column to store x-coordinates
out_y - name of column to store y-cordinates
verbose - verbosity level passed to TSNE (0 = silent)
perplexity - related to the number of nearest neighbors - usually a value between 5 and 50
n_iter - number of iterations
output:
input df with 2 additional columns for x and y co-ordinates
'''
X = np.array(list(df[col]))
time_start = time.time()
tsne = TSNE(n_components=2, verbose=verbose, random_state=32, perplexity=perplexity, n_iter=n_iter)
tsne_results = tsne.fit_transform(X)
if verbose > 0:
print('t-SNE done! Time elapsed: {} seconds'.format(time.time()-time_start))
df[out_x] = tsne_results[:,0]
df[out_y] = tsne_results[:,1]
return df
normalized_data = performTSNE(normalized_data, 'combined_arr')
normalized_data.head()
###Output
_____no_output_____
###Markdown
Now that we have the x and y co-ordinates, we can get rid of the extra columns.
###Code
to_drop = ['age_normalized', 'education-num_normalized', 'capital-gain_normalized', 'capital-loss_normalized',
'hours-per-week_normalized', 'categories_arr', 'combined_arr']
#to be used in modeling
features = pd.DataFrame(normalized_data['combined_arr'].tolist())
normalized_data.drop(columns=to_drop, inplace=True)
scatter_chart = alt.Chart(normalized_data).mark_circle().encode(
x='x',
y='y',
color = 'income',
tooltip=['age', 'workclass', 'education_level', 'education-num',
'marital-status', 'occupation', 'relationship', 'race', 'sex',
'capital-gain', 'capital-loss', 'hours-per-week', 'native-country']
).properties(
width=700,
height=700
).interactive()
scatter_chart.display()
scatter_chart.save('scatter_chart.html')
###Output
_____no_output_____
###Markdown
Effect of Gender, Marital Status and Race
###Code
sex_selection = alt.selection_multi(fields=['sex'], name='sex')
marital_selection = alt.selection_multi(fields=['marital-status'], name='marital')
race_selection = alt.selection_multi(fields=['race'], name='race')
scatter_color = alt.condition(sex_selection | marital_selection | race_selection,
alt.Color('income:N'),
alt.value('lightgray'))
sex_color = alt.condition(sex_selection,
#alt.value("#e45756"),
alt.Color('income:N'),
alt.value('lightgray'))
marital_color = alt.condition(marital_selection,
#alt.value("#72b7b2"),
alt.Color('income:N'),
alt.value('lightgray'))
race_color = alt.condition(race_selection,
#alt.value("#54a24b"),
alt.Color('income:N'),
alt.value('lightgray'))
sex_bar = alt.Chart(normalized_data).mark_bar().encode(
x= alt.X('count()', axis=alt.Axis(title='')),
y= alt.Y('sex:N', sort='-x',
axis=alt.Axis(title='',
labelFontSize=12,
ticks=False)),
color=sex_color
).properties(
width=200,
height=60,
title="Sex"
).add_selection(
sex_selection
)
marital_bar = alt.Chart(normalized_data).mark_bar().encode(
x= alt.X('count()', axis=alt.Axis(title='')),
y= alt.Y('marital-status:N', sort='-x',
axis=alt.Axis(title='',
labelFontSize=12,
ticks=False)),
color=marital_color
).properties(
width=200,
height=225,
title="Marital Status"
).add_selection(
marital_selection
)
race_bar = alt.Chart(normalized_data).mark_bar().encode(
x= alt.X('count()', axis=alt.Axis(title='')),
y= alt.Y('race:N', sort='-x',
axis=alt.Axis(title='',
labelFontSize=12,
ticks=False)),
color=race_color
).properties(
width=200,
height=125,
title="Race"
).add_selection(
race_selection
)
scatter = alt.Chart(normalized_data).mark_circle().encode(
x='x',
y='y',
color = scatter_color,
tooltip=['age', 'workclass', 'education_level', 'education-num',
'marital-status', 'occupation', 'relationship', 'race', 'sex',
'capital-gain', 'capital-loss', 'hours-per-week', 'native-country']
).properties(
width=500,
height=500
).add_selection(alt.selection_single())
# selection_single is a workaround for a vega-lite bug, needed to show the tooltip
gender_marital_race_chart = (scatter | (sex_bar & marital_bar & race_bar)).configure_legend(
orient='bottom'
)
gender_marital_race_chart.display()
gender_marital_race_chart.save('gender_marital_race_chart.html')
###Output
_____no_output_____
###Markdown
Effect of Education Level, Workclass and Occupation
###Code
workclass_selection = alt.selection_multi(fields=['workclass'], name='workclass')
education_level_selection = alt.selection_multi(fields=['education_level'], name='education_level')
occupation_selection = alt.selection_multi(fields=['occupation'], name='occupation')
scatter_color = alt.condition(workclass_selection | education_level_selection | occupation_selection,
alt.Color('income:N'),
alt.value('lightgray'))
workclass_color = alt.condition(workclass_selection,
alt.Color('income:N'),
alt.value('lightgray'))
education_level_color = alt.condition(education_level_selection,
alt.Color('income:N'),
alt.value('lightgray'))
occupation_color = alt.condition(occupation_selection,
alt.Color('income:N'),
alt.value('lightgray'))
workclass_bar = alt.Chart(normalized_data).mark_bar().encode(
x= alt.X('count()', axis=alt.Axis(title='')),
y= alt.Y('workclass:N', sort='-x',
axis=alt.Axis(title='',
labelFontSize=12,
ticks=False)),
color=workclass_color
).properties(
width=200,
height=80,
title="Workclass"
).add_selection(
workclass_selection
)
education_level_bar = alt.Chart(normalized_data).mark_bar().encode(
x= alt.X('count()', axis=alt.Axis(title='')),
y= alt.Y('education_level:N', sort='-x',
axis=alt.Axis(title='',
labelFontSize=12,
ticks=False)),
color=education_level_color
).properties(
width=200,
height=225,
title="Education Level"
).add_selection(
education_level_selection
)
occupation_bar = alt.Chart(normalized_data).mark_bar().encode(
x= alt.X('count()', axis=alt.Axis(title='')),
y= alt.Y('occupation:N', sort='-x',
axis=alt.Axis(title='',
labelFontSize=12,
ticks=False)),
color=occupation_color
).properties(
width=200,
height=150,
title="Occupation"
).add_selection(
occupation_selection
)
scatter = alt.Chart(normalized_data).mark_circle().encode(
x='x',
y='y',
color = scatter_color,
tooltip=['age', 'workclass', 'education_level', 'education-num',
'marital-status', 'occupation', 'relationship', 'race', 'sex',
'capital-gain', 'capital-loss', 'hours-per-week', 'native-country']
).properties(
width=500,
height=500
).add_selection(alt.selection_single())
# selection_single is a workaround for a vega-lite bug, needed to show the tooltip
workclass_education_occupation_chart = (scatter | (workclass_bar & education_level_bar & occupation_bar)).configure_legend(
orient='bottom'
)
workclass_education_occupation_chart.display()
workclass_education_occupation_chart.save('workclass_education_occupation_chart.html')
###Output
_____no_output_____
###Markdown
Modeling
###Code
income = normalized_data['income'].map({'<=50K':0, '>50K':1})
###Output
_____no_output_____
###Markdown
Shuffle and Split Data
###Code
# Split the 'features' and 'income' data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(features,
income,
test_size = 0.2,
random_state = 0)
# Show the results of the split
print("Training set has {:,} samples.".format(X_train.shape[0]))
print("Testing set has {:,} samples.".format(X_test.shape[0]))
# Initialize the classifier
clf = RandomForestClassifier(random_state=23)
# Create the parameters list you wish to tune, using a dictionary if needed.
parameters = {'n_estimators':[60, 75],'max_depth':[12, 14],'min_samples_leaf':[5, 6]}
# Make an fbeta_score scoring object using make_scorer()
scorer = make_scorer(fbeta_score, beta=0.5)
# Perform grid search on the classifier using 'scorer' as the scoring method using GridSearchCV()
grid_obj = GridSearchCV(clf, parameters, scoring=scorer)
# Fit the grid search object to the training data and find the optimal parameters using fit()
grid_fit = grid_obj.fit(X_train,y_train)
# Get the estimator
best_clf = grid_fit.best_estimator_
print(grid_fit.best_estimator_)
# Make predictions using the unoptimized and model
predictions = (clf.fit(X_train, y_train)).predict(X_test)
time_start = time.time()
best_predictions = (best_clf.fit(X_train, y_train)).predict(X_test)
time_end = time.time()
# Report the before-and-after scores
print("Unoptimized model\n------")
print("Accuracy score on testing data: {:.4f}".format(accuracy_score(y_test, predictions)))
print("F-score on testing data: {:.4f}".format(fbeta_score(y_test, predictions, beta = 0.5)))
print("\nOptimized Model\n------")
print("Final accuracy score on the testing data: {:.4f}".format(accuracy_score(y_test, best_predictions)))
print("Final F-score on the testing data: {:.4f}".format(fbeta_score(y_test, best_predictions, beta = 0.5)))
print("Time taken {:.4f}".format(time_end - time_start))
###Output
RandomForestClassifier(bootstrap=True, ccp_alpha=0.0, class_weight=None,
criterion='gini', max_depth=14, max_features='auto',
max_leaf_nodes=None, max_samples=None,
min_impurity_decrease=0.0, min_impurity_split=None,
min_samples_leaf=5, min_samples_split=2,
min_weight_fraction_leaf=0.0, n_estimators=75,
n_jobs=None, oob_score=False, random_state=23, verbose=0,
warm_start=False)
Unoptimized model
------
Accuracy score on testing data: 0.8377
F-score on testing data: 0.6700
Optimized Model
------
Final accuracy score on the testing data: 0.8605
Final F-score on the testing data: 0.7290
Time taken 5.1834
|
utilities/video-analysis/notebooks/resnet/resnet50/resnet50-http-icpu-onnx/create_resnet50_icpu_container_image.ipynb
|
###Markdown
Create a Local Docker Image

In this section, we will create an IoT Edge module: a Docker container image with an HTTP web server that has a scoring REST endpoint.

Get Global Variables
###Code
import sys
sys.path.append('../../../common')
from env_variables import *
###Output
_____no_output_____
###Markdown
Create Web Application & Inference Server for Our ML Solution
###Code
%%writefile $lvaExtensionPath/app.py
import threading
import cv2
import numpy as np
import io
import onnxruntime
import json
import logging
import linecache
import sys
from score import MLModel, PrintGetExceptionDetails
from flask import Flask, request, jsonify, Response
logging.basicConfig(level=logging.DEBUG)
app = Flask(__name__)
inferenceEngine = MLModel()
@app.route("/score", methods = ['POST'])
def scoreRRS():
global inferenceEngine
try:
# get request as byte stream
reqBody = request.get_data(False)
# convert from byte stream
inMemFile = io.BytesIO(reqBody)
# load a sample image
inMemFile.seek(0)
fileBytes = np.asarray(bytearray(inMemFile.read()), dtype=np.uint8)
cvImage = cv2.imdecode(fileBytes, cv2.IMREAD_COLOR)
# Infer Image
detectedObjects = inferenceEngine.Score(cvImage)
if len(detectedObjects) > 0:
respBody = {
"inferences" : detectedObjects
}
respBody = json.dumps(respBody)
logging.info("[LVAX] Sending response.")
return Response(respBody, status= 200, mimetype ='application/json')
else:
logging.info("[LVAX] Sending empty response.")
return Response(status= 204)
except:
PrintGetExceptionDetails()
return Response(response='Exception occurred while processing the image.', status=500)
@app.route("/")
def healthy():
return "Healthy"
if __name__ == "__main__":
app.run(host='127.0.0.1', port=8888)
###Output
_____no_output_____
###Markdown
8888 is the internal port of the web server app that listens for requests. Next, we will map it to a different port to expose it externally.
###Code
%%writefile $lvaExtensionPath/wsgi.py
from app import app as application
def create():
application.run(host='127.0.0.1', port=8888)
import os
os.makedirs(os.path.join(lvaExtensionPath, "nginx"), exist_ok=True)
###Output
_____no_output_____
###Markdown
The exposed port of the web app is now 80, while the internal one is still 8888.
###Code
%%writefile $lvaExtensionPath/nginx/app
server {
listen 80;
server_name _;
location / {
include proxy_params;
proxy_pass http://127.0.0.1:8888;
proxy_connect_timeout 5000s;
proxy_read_timeout 5000s;
}
}
%%writefile $lvaExtensionPath/gunicorn_logging.conf
[loggers]
keys=root, gunicorn.error
[handlers]
keys=console
[formatters]
keys=json
[logger_root]
level=INFO
handlers=console
[logger_gunicorn.error]
level=ERROR
handlers=console
propagate=0
qualname=gunicorn.error
[handler_console]
class=StreamHandler
formatter=json
args=(sys.stdout, )
[formatter_json]
class=jsonlogging.JSONFormatter
%%writefile $lvaExtensionPath/kill_supervisor.py
import sys
import os
import signal
def write_stdout(s):
sys.stdout.write(s)
sys.stdout.flush()
# this function is modified from the code and knowledge found here: http://supervisord.org/events.html#example-event-listener-implementation
def main():
while 1:
write_stdout('[LVAX] READY\n')
# wait for the event on stdin that supervisord will send
line = sys.stdin.readline()
write_stdout('[LVAX] Terminating supervisor with this event: ' + line)
try:
# supervisord writes its pid to its file from which we read it here, see supervisord.conf
pidfile = open('/tmp/supervisord.pid','r')
pid = int(pidfile.readline())
os.kill(pid, signal.SIGQUIT)
except Exception as e:
write_stdout('[LVAX] Could not terminate supervisor: ' + e.strerror + '\n')
write_stdout('[LVAX] RESULT 2\nOK')
main()
import os
os.makedirs(os.path.join(lvaExtensionPath, "etc"), exist_ok=True)
%%writefile $lvaExtensionPath/etc/supervisord.conf
[supervisord]
logfile=/tmp/supervisord.log ; (main log file;default $CWD/supervisord.log)
logfile_maxbytes=50MB ; (max main logfile bytes b4 rotation;default 50MB)
logfile_backups=10 ; (num of main logfile rotation backups;default 10)
loglevel=info ; (log level;default info; others: debug,warn,trace)
pidfile=/tmp/supervisord.pid ; (supervisord pidfile;default supervisord.pid)
nodaemon=true ; (start in foreground if true;default false)
minfds=1024 ; (min. avail startup file descriptors;default 1024)
minprocs=200 ; (min. avail process descriptors;default 200)
[program:gunicorn]
command=bash -c "gunicorn --workers 1 -m 007 --timeout 100000 --capture-output --error-logfile - --log-level debug --log-config gunicorn_logging.conf \"wsgi:create()\""
directory=/lvaExtension
redirect_stderr=true
stdout_logfile =/dev/stdout
stdout_logfile_maxbytes=0
startretries=2
startsecs=20
[program:nginx]
command=/usr/sbin/nginx -g "daemon off;"
startretries=2
startsecs=5
priority=3
[eventlistener:program_exit]
command=python kill_supervisor.py
directory=/lvaExtension
events=PROCESS_STATE_FATAL
priority=2
###Output
_____no_output_____
###Markdown
Create a Docker File to Containerize the ML Solution and Web App Server
###Code
%%writefile $lvaExtensionPath/Dockerfile
FROM ubuntu:18.04
ENV WORK_DIR=/lvaExtension
WORKDIR ${WORK_DIR}
RUN apt-get update && apt-get install -y --no-install-recommends wget ca-certificates
COPY etc /etc
RUN apt-get update && apt-get install -y --no-install-recommends \
python3-pip python3-dev libglib2.0-0 libsm6 libxext6 libxrender-dev nginx supervisor runit python3-setuptools
RUN cd /usr/local/bin \
&& ln -s /usr/bin/python3 python \
&& pip3 install --upgrade pip \
&& pip install numpy onnxruntime flask pillow gunicorn opencv-python json-logging-py
RUN apt-get update && apt-get install -y --no-install-recommends \
libgl1-mesa-dev
COPY . ${WORK_DIR}/
RUN rm -rf /var/lib/apt/lists/* \
&& apt-get clean \
&& rm /etc/nginx/sites-enabled/default \
&& cp ${WORK_DIR}/nginx/app /etc/nginx/sites-available/ \
&& ln -s /etc/nginx/sites-available/app /etc/nginx/sites-enabled/
EXPOSE 80
CMD ["supervisord", "-c", "/lvaExtension/etc/supervisord.conf"]
###Output
_____no_output_____
###Markdown
Create a Local Docker Image

Finally, we will create a Docker image locally. We will later host the image in a container registry like Docker Hub, Azure Container Registry, or a local registry.

To run the following code snippet, you must have the prerequisites mentioned in [the requirements page](../../../common/requirements.md). Most notably, we are running the `docker` command without `sudo`.

> [!WARNING]
> Please ensure that Docker is running before executing the cell below. Execution of the cell below may take several minutes.
###Code
!docker build -t $containerImageName --file ./$lvaExtensionPath/Dockerfile ./$lvaExtensionPath
###Output
_____no_output_____
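###Markdown
Once the build succeeds, a quick local smoke test might look like the cell below. This is a sketch under assumptions: the tag stored in `$containerImageName`, a free host port 8080, and a local `sample.jpg` to score; uncomment and adjust the names to your setup.
###Code
# !docker run --name lvaextension -p 8080:80 -d $containerImageName
# !curl http://localhost:8080/                # the root endpoint should answer "Healthy"
# !curl -X POST --data-binary @sample.jpg http://localhost:8080/score
###Output
_____no_output_____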
|
scripts/python_scripts/.ipynb_checkpoints/plot_expression_data_BD-checkpoint.ipynb
|
###Markdown
Plotting expression data of given gene IDs

01. This script plots the expression data of defined gene modules from WGCNA. The modules are named by colors (e.g. red, blue, etc.). The script accepts a color name as input and plots the expression values of all the genes in that module.

02. This script plots the expression data of any given list of gene IDs. It accepts a gene list (robin IDs or NCBI IDs) and plots the expression values of the given genes.

Housekeeping

You start by specifying the path on your machine and the species you want to investigate. Next you import all the modules used for this script.
###Code
### Housekeeping
#
# load modules
import sqlite3 # to connect to database
import pandas as pd # data anaylis handeling
import numpy as np # following three are for plotting
import matplotlib.pyplot as plt
import seaborn as sns
import colorsys
import matplotlib
#
# specify path to folder were your data files are, or your database is
#path = '/Users/roos_brouns/Dropbox/Ant-fungus/02_scripts/Git_Das_folder2/Das_et_al_2022a'
path = '/Users/biplabendudas/Documents/GitHub/Das_et_al_2022a'
#
# specify species
species = 'ophio_cflo'
# Define a helper that scales the lightness of a color, used to build plotting palettes.
# source: https://stackoverflow.com/questions/37765197/darken-or-lighten-a-color-in-matplotlib
def scale_lightness(rgb, scale_l):
# convert rgb to hls
h, l, s = colorsys.rgb_to_hls(*rgb)
# manipulate h, l, s values and return as rgb
return colorsys.hls_to_rgb(h, min(1, l * scale_l), s = s)
###Output
_____no_output_____
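###Markdown
A quick look at what `scale_lightness` produces, as a sketch with an assumed base color: scale factors below 1 darken the color, factors above 1 lighten it.
###Code
base = matplotlib.colors.ColorConverter.to_rgb('red')
shades = [scale_lightness(base, s) for s in [0.3, 0.6, 1, 1.1, 1.25]]
sns.palplot(shades)
###Output
_____no_output_____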
###Markdown
Load in the data

The next step is to load in the data. This can be done via a connection to a database, as in this example, or by reading an Excel/CSV file with the data. The database used in this tutorial can be made with another script [REF].
###Code
### Load in data
#
# Connect to the database
conn = sqlite3.connect(f'{path}/data/databases/new_TC6_fungal_data.db')
#
# read data from DB into dataframe
exp_val = pd.read_sql_query(f"SELECT * from {species}_fpkm", conn)
### Clean data
#
# drop 'start' and 'end' columns
exp_val.drop(['start','end'], axis=1, inplace=True)
###Output
_____no_output_____
###Markdown
Part 01. Plotting expression values of modules

This next part of the code plots the expression values of all the genes in a defined module. The modules have been defined with WGCNA [REF].

* This code will ask for color names of the defined modules as input
###Code
### Part 01. Plot expression values of modules
#
# A. Get the data you want to plot
#
# Load in the data with the gene ID's and their assigned module
all_modules = pd.read_csv(f'{path}/results/networks/{species}_gene_IDs_and_module_identity.csv')
# Clean the data by dropping unneeded columns
all_modules.drop(['start', 'end'], axis=1, inplace=True)
#
#
# Input the colors of the modules you want to plot
# source : https://pynative.com/python-accept-list-input-from-user/
input_string = input('Enter elements of a list separated by space ')
print("\n")
module_list = input_string.split()
# print list
print('giving input colors', module_list)
# try:
for color in module_list:
# Define the module you want to plot
module_name = color
# select all the data of that module
module_data = all_modules.loc[all_modules['module_identity'] == module_name]
# select the gene IDs of the module
module_IDs = pd.DataFrame(module_data['gene_ID_robin'])
# get the expression values of the selected gene ID's in the module
exp_val_module = module_IDs.merge(exp_val, on='gene_ID_robin', how='left')
#
# B. Plot the data
#
# Transform the data so we can plot
t_exp_val_module = exp_val_module.T
t_exp_val_module.drop(['gene_ID_robin','gene_ID_ncbi'], axis=0, inplace=True)
#
# Make the color palette for plotting
color = matplotlib.colors.ColorConverter.to_rgb(module_name)
rgbs = [scale_lightness(color, scale) for scale in [0.3, .6, 1, 1.1, 1.25]]
# Show the palette color
# sns.palplot(rgbs)
#
# set background and the color palette
sns.set_theme()
sns.set_palette(rgbs)
#
# Plot the gene expression values against the time, without legend and with titles
ax = t_exp_val_module.plot(legend=False)
ax.set_title(species[0].upper()+f'{species[1:len(species)]}-{module_name} gene expression', fontsize=15, fontweight='bold')
ax.set_xlabel('Time point')
ax.set_ylabel('Expression value (FPKM)')
#
### Done.
# except:
# print('Wrong input was given: No color module was defined or the given color does not exist as module.')
# ### X-axis try outs
# label = t_exp_val_module.index
# ndx = t_exp_val_module.index
# time_points = 2+np.arange(len(t_exp_val_module))*2
# exp_val_module
# sns.set_palette('Greens')
# t_exp_val_module.plot(legend=False)
# #plt.xticks(time_points)
# #plt.xticks(time_points, ndx)
###Output
_____no_output_____
|
ipynb/cellflow.ipynb
|
###Markdown
A step towards a reactive notebook

This is a proof-of-concept for an API that aims to automate the data flow in a notebook. It uses the IPython magic functions `onchange` to specify how variables depend on each other in a cell, and `compute` to get the desired results. The dependency resolution is automatically taken care of, and computations only occur when needed.
###Code
%load_ext cellflow
%cellflow_configure -v
# initialization
a = 2
###Output
_____no_output_____
###Markdown
Any change in the value of `a` will cause `b` to be re-computed:
###Code
%%onchange a -> b
b = a - 1
###Output
_____no_output_____
###Markdown
Any change in the value of `a` or `b` will cause `c` and `d` to be re-computed:
###Code
%%onchange a, b -> c, d
c = a + b
d = a - b
###Output
_____no_output_____
###Markdown
Until now, no computation has actually taken place. It has to be explicitly requested through the `compute` magic function, which will figure out the optimal way to compute the results:
###Code
%%compute c, d
print()
print(f'c = {c}')
print(f'd = {d}')
###Output
The data flow consists of the following paths:
a -> c
a -> d
a -> b -> c
a -> b -> d
Looking at target variable c in path: a -> c
Variable c is also in path: a -> b -> c
And other variables have to be computed first
Looking at target variable d in path: a -> d
Variable d is also in path: a -> b -> d
And other variables have to be computed first
Looking at target variable b in path: a -> b -> c
Variable b is also in path: a -> b -> d
Which doesn't prevent computing it
Variable a has changed
Computing:
b = a - 1
Looking at target variable b in path: a -> b -> d
No change in dependencies
Looking at target variable c in path: a -> c
Variable c is also in path: b -> c
Which doesn't prevent computing it
Variable a has changed
Computing:
c = a + b
d = a - b
Looking at target variable d in path: a -> d
Variable d is also in path: b -> d
Which doesn't prevent computing it
No change in dependencies
Looking at target variable c in path: b -> c
No change in dependencies
Looking at target variable d in path: b -> d
No change in dependencies
All done!
c = 3
d = 1
###Markdown
If no dependency has changed, there will actually be no computation:
###Code
%%compute c, d
print()
print(f'c = {c}')
print(f'd = {d}')
###Output
The data flow consists of the following paths:
a -> c
a -> d
a -> b -> c
a -> b -> d
Looking at target variable c in path: a -> c
Variable c is also in path: a -> b -> c
And other variables have to be computed first
Looking at target variable d in path: a -> d
Variable d is also in path: a -> b -> d
And other variables have to be computed first
Looking at target variable b in path: a -> b -> c
Variable b is also in path: a -> b -> d
Which doesn't prevent computing it
No change in dependencies
Looking at target variable b in path: a -> b -> d
No change in dependencies
Looking at target variable c in path: a -> c
Variable c is also in path: b -> c
Which doesn't prevent computing it
No change in dependencies
Looking at target variable d in path: a -> d
Variable d is also in path: b -> d
Which doesn't prevent computing it
No change in dependencies
Looking at target variable c in path: b -> c
No change in dependencies
Looking at target variable d in path: b -> d
No change in dependencies
All done!
c = 3
d = 1
###Markdown
But a change in the dependencies will cause some or all of the variables to be re-computed:
###Code
a = 3
%%compute c, d
print()
print(f'c = {c}')
print(f'd = {d}')
###Output
The data flow consists of the following paths:
a -> c
a -> d
a -> b -> c
a -> b -> d
Looking at target variable c in path: a -> c
Variable c is also in path: a -> b -> c
And other variables have to be computed first
Looking at target variable d in path: a -> d
Variable d is also in path: a -> b -> d
And other variables have to be computed first
Looking at target variable b in path: a -> b -> c
Variable b is also in path: a -> b -> d
Which doesn't prevent computing it
Variable a has changed
Computing:
b = a - 1
Looking at target variable b in path: a -> b -> d
No change in dependencies
Looking at target variable c in path: a -> c
Variable c is also in path: b -> c
Which doesn't prevent computing it
Variable a has changed
Computing:
c = a + b
d = a - b
Looking at target variable d in path: a -> d
Variable d is also in path: b -> d
Which doesn't prevent computing it
No change in dependencies
Looking at target variable c in path: b -> c
No change in dependencies
Looking at target variable d in path: b -> d
No change in dependencies
All done!
c = 5
d = 1
###Markdown
Again, only what is needed is re-computed:
###Code
%%compute c, d
print()
print(f'c = {c}')
print(f'd = {d}')
###Output
The data flow consists of the following paths:
a -> c
a -> d
a -> b -> c
a -> b -> d
Looking at target variable c in path: a -> c
Variable c is also in path: a -> b -> c
And other variables have to be computed first
Looking at target variable d in path: a -> d
Variable d is also in path: a -> b -> d
And other variables have to be computed first
Looking at target variable b in path: a -> b -> c
Variable b is also in path: a -> b -> d
Which doesn't prevent computing it
No change in dependencies
Looking at target variable b in path: a -> b -> d
No change in dependencies
Looking at target variable c in path: a -> c
Variable c is also in path: b -> c
Which doesn't prevent computing it
No change in dependencies
Looking at target variable d in path: a -> d
Variable d is also in path: b -> d
Which doesn't prevent computing it
No change in dependencies
Looking at target variable c in path: b -> c
No change in dependencies
Looking at target variable d in path: b -> d
No change in dependencies
All done!
c = 5
d = 1
###Markdown
The data flow can be hacked. Here we modify the intermediate variable `b` (which would normally be computed as `b = a - 1`), causing the final results to be re-computed:
###Code
b = 10
%%compute c
print()
print(f'c = {c}')
print(f'd = {d}')
%%compute d
print()
print(f'c = {c}')
print(f'd = {d}')
###Output
The data flow consists of the following paths:
a -> d
a -> b -> d
Looking at target variable d in path: a -> d
Variable d is also in path: a -> b -> d
And other variables have to be computed first
Looking at target variable b in path: a -> b -> d
No change in dependencies
Looking at target variable d in path: a -> d
Variable d is also in path: b -> d
Which doesn't prevent computing it
No change in dependencies
Looking at target variable d in path: b -> d
No change in dependencies
All done!
c = 13
d = -7
|
mlflow-project/sklearn_elasticnet_wine/train.ipynb
|
###Markdown
MLflow Training Tutorial

This `train.ipynb` Jupyter notebook predicts the quality of wine using [sklearn.linear_model.ElasticNet](http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.ElasticNet.html).

> This is the Jupyter notebook version of the `train.py` example.

Attribution
* The data set used in this example is from http://archive.ics.uci.edu/ml/datasets/Wine+Quality
* P. Cortez, A. Cerdeira, F. Almeida, T. Matos and J. Reis.
* Modeling wine preferences by data mining from physicochemical properties. In Decision Support Systems, Elsevier, 47(4):547-553, 2009.
###Code
# Wine Quality Sample
def train(in_alpha, in_l1_ratio):
import os
import warnings
import sys
import pandas as pd
import numpy as np
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score
from sklearn.model_selection import train_test_split
from sklearn.linear_model import ElasticNet
import mlflow
import mlflow.sklearn
import logging
logging.basicConfig(level=logging.WARN)
logger = logging.getLogger(__name__)
def eval_metrics(actual, pred):
rmse = np.sqrt(mean_squared_error(actual, pred))
mae = mean_absolute_error(actual, pred)
r2 = r2_score(actual, pred)
return rmse, mae, r2
warnings.filterwarnings("ignore")
np.random.seed(40)
# Read the wine-quality csv file from the URL
csv_url =\
'http://archive.ics.uci.edu/ml/machine-learning-databases/wine-quality/winequality-red.csv'
try:
data = pd.read_csv(csv_url, sep=';')
except Exception as e:
logger.exception(
"Unable to download training & test CSV, check your internet connection. Error: %s", e)
# Split the data into training and test sets. (0.75, 0.25) split.
train, test = train_test_split(data)
# The predicted column is "quality" which is a scalar from [3, 9]
train_x = train.drop(["quality"], axis=1)
test_x = test.drop(["quality"], axis=1)
train_y = train[["quality"]]
test_y = test[["quality"]]
# Set default values if no alpha is provided
    if in_alpha is None:
alpha = 0.5
else:
alpha = float(in_alpha)
# Set default values if no l1_ratio is provided
    if in_l1_ratio is None:
l1_ratio = 0.5
else:
l1_ratio = float(in_l1_ratio)
# Useful for multiple runs (only doing one run in this sample notebook)
with mlflow.start_run():
# Execute ElasticNet
lr = ElasticNet(alpha=alpha, l1_ratio=l1_ratio, random_state=42)
lr.fit(train_x, train_y)
# Evaluate Metrics
predicted_qualities = lr.predict(test_x)
(rmse, mae, r2) = eval_metrics(test_y, predicted_qualities)
# Print out metrics
print("Elasticnet model (alpha=%f, l1_ratio=%f):" % (alpha, l1_ratio))
print(" RMSE: %s" % rmse)
print(" MAE: %s" % mae)
print(" R2: %s" % r2)
# Log parameter, metrics, and model to MLflow
mlflow.log_param("alpha", alpha)
mlflow.log_param("l1_ratio", l1_ratio)
mlflow.log_metric("rmse", rmse)
mlflow.log_metric("r2", r2)
mlflow.log_metric("mae", mae)
mlflow.sklearn.log_model(lr, "model")
train(0.5, 0.5)
train(0.2, 0.2)
train(0.1, 0.1)
###Output
Elasticnet model (alpha=0.100000, l1_ratio=0.100000):
RMSE: 0.7792546522251949
MAE: 0.6112547988118587
R2: 0.2157063843066196
|
Data Analysis Stocker.ipynb
|
###Markdown
Checking the differences when using different data periods
###Code
error1 = tomorrow(Microsoft, years = 1)[1]
error2 = tomorrow(Microsoft, years = 2)[1]
error3 = tomorrow(Microsoft, years = 3)[1]
print('Error using 1 year of data:', error1, '%')
print('Error using 2 years of data:', error2, '%')
print('Error using 3 years of data:', error3, '%')
###Output
[*********************100%***********************] 1 of 1 completed
[*********************100%***********************] 1 of 1 completed
[*********************100%***********************] 1 of 1 completed
Error using 1 year of data: 0.968 %
Error using 2 years of data: 1.255 %
Error using 3 years of data: 1.087 %
###Markdown
Apparently it is better to use only data from the last year. This may be due to how the model learns: if only the most recent behavior is involved in the model, it makes a better prediction. Now let's check the difference using different numbers of previous days as the input steps:
###Code
error1 = tomorrow(Microsoft, steps = 1)[1]
error2 = tomorrow(Microsoft, steps = 10)[1]
error3 = tomorrow(Microsoft, steps = 20)[1]
print('Error using 1 previous day of data:', error1, '%')
print('Error using 10 previous days of data:', error2, '%')
print('Error using 20 previous days of data:', error3, '%')
###Output
[*********************100%***********************] 1 of 1 completed
[*********************100%***********************] 1 of 1 completed
[*********************100%***********************] 1 of 1 completed
Error using 1 previous day of data: 1.014 %
Error using 10 previous days of data: 2.62 %
Error using 20 previous days of data: 1.194 %
###Markdown
Again, we see that the most recent data is the best option. This does not mean that the model is not taking into account the behavior over the whole time period. Now let's check the difference using different features:
###Code
error1 = tomorrow(Microsoft, features=['Open'])[1]
error2 = tomorrow(Microsoft, features=['Low'])[1]
error3 = tomorrow(Microsoft, features=['High'])[1]
error4 = tomorrow(Microsoft, features=['Volume'])[1]
error5 = tomorrow(Microsoft, features=['Adj Close'])[1]
error6 = tomorrow(Microsoft, features=['Interest'])[1]
error7 = tomorrow(Microsoft, features=['Wiki_views'])[1]
error8 = tomorrow(Microsoft, features=['RSI', '%K', '%R'])[1]
error9 = tomorrow(Microsoft)[1]
error10 = tomorrow(Microsoft, features=['Open','Low','High','Volume',
'Adj Close', 'Interest',
'Wiki_views', 'RSI', '%K', '%R'])[1]
error11 = tomorrow(Microsoft, features=['Open', 'Low', 'High', 'Volume',
                                        'Adj Close'])[1]
print('Error by including Open prices:',error1,'%')
print('Error by including Low prices:',error2,'%')
print('Error by including High prices:',error3,'%')
print('Error by including Volume:',error4,'%')
print('Error by including Adj Close prices:',error5,'%')
print('Error by including Interest:',error6,'%')
print('Error by including Wiki_views:',error7,'%')
print('Error by including indicators:',error8,'%')
print('Error by including only Close prices:',error9,'%')
print('Error by including all the features:',error10,'%')
print('Error by including the features from Yahoo Finance:',error11,'%')
###Output
[*********************100%***********************] 1 of 1 completed
[*********************100%***********************] 1 of 1 completed
[*********************100%***********************] 1 of 1 completed
[*********************100%***********************] 1 of 1 completed
[*********************100%***********************] 1 of 1 completed
[*********************100%***********************] 1 of 1 completed
[*********************100%***********************] 1 of 1 completed
[*********************100%***********************] 1 of 1 completed
[*********************100%***********************] 1 of 1 completed
[*********************100%***********************] 1 of 1 completed
[*********************100%***********************] 1 of 1 completed
Error by including Open prices: 0.943 %
Error by including Low prices: 0.977 %
Error by including High prices: 1.137 %
Error by including Volume: 0.961 %
Error by including Adj Close prices: 1.003 %
Error by including Interest: 1.071 %
Error by including Wiki_views: 0.967 %
Error by including indicators: 1.07 %
Error by including only Close prices: 1.231 %
Error by including all the features: 0.896 %
Error by including the features from Yahoo Finance: 0.823 %
###Markdown
Using only the previous closing price gives an error close to that of using all the features, so it is a good option to save time while running the code. In any case, it is recommended to try several cases with different features and choose the case with the lowest error. Finally, let's check the Pearson correlation coefficient of each feature against the closing prices.
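As a rough equivalent without stocker — a minimal sketch, assuming a hypothetical DataFrame `prices` of the downloaded Microsoft history with the feature columns named as above — pandas computes Pearson correlations directly:
```python
# .corr() defaults to the Pearson correlation coefficient; selecting the
# 'Close' column gives each feature's correlation with the closing prices.
pearson_vs_close = prices.corr()['Close'].sort_values(ascending=False)
print(pearson_vs_close)
```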
###Code
from stocker.get_data import correlation
corr = correlation(Microsoft, interest = True, wiki_views = True, indicators = True)
corr
Microsoft_prediction = stocker.predict.tomorrow('MSFT')
Microsoft_prediction
# [Predicted price, error (%), date of the next market open]
###Output
_____no_output_____
|
examples/VectorScanner.ipynb
|
###Markdown
Load Damagescanner package
###Code
import os
import numpy
import pandas
from damagescanner.core import VectorScanner
data_path = '..'
###Output
_____no_output_____
###Markdown
Read input data
###Code
inun_map = os.path.join(data_path,'data','inundation','inundation_map.tif')
landuse = os.path.join(data_path,'data','landuse','landuse.shp')
###Output
_____no_output_____
###Markdown
Create dummy maximum damage dictionary and curves DataFrame
###Code
maxdam = {"grass":5,
"forest":10,
"orchard":50,
"residential":200,
"industrial":300,
"retail":300,
"farmland":10,
"cemetery":15,
"construction":10,
"meadow":5,
"farmyard":5,
"scrub":5,
"allotments":10,
"reservoir":5,
"static_caravan":100,
"commercial":300}
curves = numpy.array(
[[0,0],
[50,0.2],
[100,0.4],
[150,0.6],
[200,0.8],
[250,1]])
curves = numpy.concatenate((curves,
numpy.transpose(numpy.array([curves[:,1]]*(len(maxdam)-1)))),
axis=1)
curves = pandas.DataFrame(curves)
curves.columns = ['depth']+list(maxdam.keys())
curves.set_index('depth',inplace=True)
###Output
_____no_output_____
###Markdown
Run the VectorScanner
###Code
%%time
# run the VectorScanner and return the landuse map with damage values
loss_df = VectorScanner(landuse,inun_map,curves,maxdam)
###Output
Get unique shapes: 100%|██████████████████████████████████████████████████████████| 3848/3848 [00:28<00:00, 135.87it/s]
Estimate damages: 100%|█████████████████████████████████████████████████████████| 9773/9773 [00:00<00:00, 13083.56it/s]
Damage per object: 100%|██████████████████████████████████████████████████████████| 6615/6615 [00:31<00:00, 211.64it/s]
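###Markdown
The returned `loss_df` can then be summarized per land-use class. A minimal sketch — the column names used here ('landuse', 'damage') are assumptions and may differ between damagescanner versions, so check `loss_df.columns` first:
```python
# Total estimated damage per land-use class, largest first (assumed column names).
total_per_landuse = loss_df.groupby('landuse')['damage'].sum().sort_values(ascending=False)
print(total_per_landuse)
```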
|
Movie Ratings Recency.ipynb
|
###Markdown
IntroductionFounded in 1997, MovieLens is a website and online community that generates movie recommendations for users via collaborative filtering. Users begin by creating an account and rating some number of movies they have previously watched on a scale of 0.5 stars to 5 stars. MovieLens's recommendation algorithm then uses the user's ratings, as well as other users' rating patterns, to generate movie recommendations for the user. As the user rates more movies over time, the algorithm updates the recommendations displayed, becoming more personalized (and thus theoretically more successful) over time.MovieLens makes its datasets, consisting of some 20 million ratings covering over 27,000 movies, available to the public. These datasets include the numerical rating and the associated user ID, movie ID, and date and time of rating. For this study, we'll begin with an up-to-date random sample of 100K ratings offered by MovieLens; if necessary for adequate significance levels, we'll also examine their full dataset of 20 million ratings. Loading the DataThe function load_and_clean imports the specified datasets, reformats dates, and merges the ratings data with the movies data.
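The `load_and_clean` helper used below comes from a local module; its internals are not shown in this notebook. A minimal sketch of what such a function might look like — an assumption, not the actual implementation — given the columns used later (`datetime`, `month`, `weekday`):
```python
import pandas as pd

def load_and_clean(movies_path, ratings_path):
    """Load the MovieLens CSVs, convert timestamps, and merge on movieId."""
    movies = pd.read_csv(movies_path)
    ratings = pd.read_csv(ratings_path)
    # MovieLens stores rating times as Unix timestamps (seconds)
    ratings["datetime"] = pd.to_datetime(ratings["timestamp"], unit="s")
    ratings["month"] = ratings["datetime"].dt.month
    ratings["weekday"] = ratings["datetime"].dt.weekday
    return ratings.merge(movies, on="movieId")
```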
###Code
import numpy as np
import scipy as sp
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
sns.set()
%matplotlib inline
# load small data
from load_and_clean import load_and_clean
smallRatings_df = load_and_clean("ml-latest-small/movies.csv", "ml-latest-small/ratings.csv")
smallRatings_df.head()
###Output
_____no_output_____
###Markdown
Exploring the DataLooking at the distribution of scores pictured below, two features stand out. First, although the site allows half-star ratings (0.5, 1.5, etc), users give whole-star ratings (1.0, 2.0, etc) much more often. This accounts for the dips in the half-star values of the plot on the upper left. If we round all the ratings up to the nearest full star as in the plot on the upper right, we can see the shape of the distribution more clearly. As the bottom plot shows, the distribution is roughly Gaussian with a leftward skew.
###Code
rating_mean = smallRatings_df["rating"].mean()
rating_std = smallRatings_df["rating"].std()
plt.figure(figsize=(15,5))
plt.subplot(121)
sns.countplot("rating", data=smallRatings_df)
plt.title("Rating distribution (mean = {}, std = {})".format(round(rating_mean, 2), round(rating_std, 2)))
plt.subplot(122)
roundedRatings = (smallRatings_df["rating"] + .01).round() # add .01 to avoid skew from default banker's rounding
sns.countplot(roundedRatings)
plt.title("Rating distribution (rounded up to nearest whole star)")
plt.show()
plt.figure(figsize=(15,5))
sns.distplot(roundedRatings, bins=5, kde_kws={'bw':.8})
plt.title("Rounded ratings distribution with Gaussian approximation")
plt.show()
###Output
_____no_output_____
###Markdown
The plots below show the total number of ratings submitted on particular months of the year, days of the week, and years since 1996. There are ready-to-hand intuitive explanations for some of the differences exhibited here. For instance, while the data doesn't prove causality, the significant differences across months do correlate with typical U.S. work patterns. November and December feature significant holiday time for many jobs, but also short daylight and cold weather in many regions. We would expect this combination to increase indoor leisure time, which could include both movie-watching and movie-rating. The vacation-heavy summer months are also slightly elevated. The only result that doesn't have an obvious corollary in U.S. work cycles is the secondary ratings spike in April.For weekdays, it's unsurprising that the evenings most commonly used for social gatherings (Wednesday, Friday, and Saturday) feature lower movie-rating totals. People are more likely to be at home and on their computers on less socially busy nights. As for the yearly distribution, given the irregularity of the pattern and the rarity of multi-year institutional cycles in our culture, there are no clearly significant trends.
###Code
plt.figure(figsize=(15,5))
plt.subplot(121)
sns.countplot("month", data=smallRatings_df)
plt.title("Total ratings in each calendar month")
plt.subplot(122)
sns.countplot("weekday", data=smallRatings_df)
plt.title("Total ratings on each weekday")
plt.show()
plt.figure(figsize=(15,5))
sns.countplot(smallRatings_df["datetime"].dt.year)
plt.title("Total ratings each year")
plt.show()
###Output
_____no_output_____
###Markdown
As we can see below, the distribution of how many ratings each user submitted is roughly geometric. The median number of ratings for a single user is 71 (the mean, heavily distorted by outliers, rises to 149).
###Code
user_df = smallRatings_df.groupby("userId").agg({"movieId":"count", "rating":"mean"})
user_df.rename(index=str, columns={"movieId":"rating_count"}, inplace=True)
plt.figure(figsize=(15,5))
plt.subplot(121)
user_df["rating_count"].hist(range=(0,800), bins=20)
plt.title("User rating count distribution (limit 800)")
plt.subplot(122)
user_df["rating_count"].hist(range=(0,200), bins=20)
plt.title("User rating count distribution (limit 200)")
print(user_df["rating_count"].mean())
plt.show()
###Output
149.03725782414307
###Markdown
Central QuestionOne major difficulty in assessing the ratings in this dataset is that participants give ratings at different time intervals from their actual viewing. Some ratings represent ratings for a movie that the user watched just an hour beforehand and remembers in great detail, while others represent ratings for movies that the user watched many years beforehand and retains only a vague impression of. Given the effects of memory over time, shifts in taste, and intervening life experiences, ratings of recently vs non-recently watched movies may be measuring very different attributes of the viewer-movie interaction. Interpreting them as equivalent may lead to faulty conclusions and gloss over valuable insights.We can roughly encapsulate these considerations in a single question: how do users' ratings of recently watched movies (say, within the past week) differ from their ratings of movies they watched long ago? HypothesisMy hypothesis is that ratings not given recently after watching a movie will tend to regress to the average in users' minds, leading to a smaller degree of dispersion among ratings. Concretely, my hypothesis is that the standard deviation will be lower for movies not rated recently after watching than for movies rated recently.It will also be interesting to examine the mean. My secondary hypothesis is that the mean rating will be slightly lower for movies not rated recently after watching than for movies rated recently. MethodologyOn the face of it, our question seems difficult to answer based on the available data. Because MovieLens does not ask the user how long ago or on what date she watched a given movie (rating a movie is a one-click process), we don't have any direct time interval data. However, we can make some inferences based on usage patterns. Broadly, we can posit three typical usage patterns for rating movies: Profile-building: the user rates multiple movies she has watched at some point - not necessarily recently - in order to generate initial recommendations. (Includes the initial site visit.) Targeted rating: the user logs in specifically to rate one or more movies she has recently watched Hybrid rating: the user logs in specifically to rate one or more movies she has recently watched, and then also rates one or more non-recently-watched movies while online.Notably, none of these typical patterns include going online just to rate a single movie that the user watched long ago - it's hard to imagine what motive would prompt that behavior. This is crucial, because it means that we can distinguish recent vs non-recent ratings based on how many movies the user rated that day. Specifically, for the purposes of answering our question, we will divide the data into three groups: Singleton ratings: ratings given on days on which that user recorded no other ratings Small-batch ratings: ratings given on days on which that user recorded two to five ratings Large-batch ratings: ratings given on days on which that user recorded more than five ratingsWe will discard the small-batch ratings as ambiguous, since these could plausibly represent several recently watched movies (usage pattern 2), several non-recently-watched movies (usage pattern 1), or a mixture of the two (usage pattern 3). 
We will compare the singleton ratings, which almost exclusively represent recently watched movies, against the large-batch ratings, which predominantly represent movies watched more than one week before being rated.For each of these sets – the singleton ratings and the large-batch ratings – we will take the mean and the standard deviation. As they are on the same scale, we do not need to normalize. We will compare these metrics and run t-tests to determine whether these differences are significant, defined as a p-value of less than 5%.Given the large size of the full dataset (20 million ratings), it would make good pragmatic sense to begin with evaluating a smaller random subset of 100,000 ratings, which we will divide into singleton, small-batch, and large-batch ratings. If the p-values produced from these subsets are close to or greater than 5%, we will run the tests again on the entire dataset. Conclusions and applicationsThis test will be successful if it demonstrates a significant difference between the singleton and large-batch rating datasets – in either direction. Even if that difference is the opposite of the hypothesized difference, the test will have produced interesting and useful information about the dataset. The lower the p-value of the difference is, the more significant the result is. Knowing the nature of this difference will allow us to take recency of rating into account as we analyze the dataset for recommendation algorithm optimization and other purposes.On the other hand, if there are no significant differences between the two rating sets, this means that one of the premises of the experiment was misguided: either the singletons vs large-batch ratings do not predominantly represent recent vs non-recent ratings (respectively) as supposed, or recent vs non-recent ratings do not actually display any statistically significant differences.The main application of this study would be toward improving the recommendation algorithm. A simple approach to movie recommendations would treat all ratings given by a user equally. The results of this experiment, however, may give us both reason and means to normalize ratings for recency. Significant differences between recent and non-recent ratings would also give us additional justification to weight ratings according to recency (with batch size and various aspects of individual user patterns serving as proxies for recency). Specifically, an optimal recommendation algorithm will likely attach more weight to recent movies. For one thing, the accuracy and detail of a memory diminish over time. But equally important is the consideration that individual taste is a moving target for every individual over time. This means that even a perfect memory of how much a user enjoyed a movie five years ago will be a very imperfect indicator of how much they would like a similar movie today. How much the user enjoyed a movie five hours ago, while still subject to a host of variable factors, will be more reliable. If this line of reasoning in favor of recent ratings is correct, the datasets of recent vs non-recent ratings will show differences in their distribution. The experiment outlined here would not be sufficient to prove that a recency-weighted algorithm will be more effective, but it would lend evidential support to such an argument. Further researchA reasonable next step in investigating and using the results of this study would be to build a recommendation algorithm that uses recency-based weighting and normalization. 
This new algorithm could then be tested against the original algorithm on MovieLens. Provided an adequate number of users to establish significance, a simple A/B test would be adequate; if the number of regular users is smaller, we could randomize the recommendation algorithm for each login or page reload rather than each user. Success would be based on the average subsequent user rating of recommended movies.While A/B testing could confirm these results and give us new data, there are also ways we could dig further into the data we have. Specifically, we could perform further study of recency differentials (the gap between recent and non-recent ratings). Looking at recency differentials for different users could help us categorize different types of viewers by recency effects. Perhaps more practically, looking at recency differentials for different movies may help us determine which types of movies age positively vs negatively in viewers' memories over time. This could help generate re-watch recommendations, or predict demand for DVD sales and other post-release merchandising. Bonus: basic executionFrom the basic execution below, we can see that the mean values and standard deviations of the singleton vs. large-batch ratings differ slightly, but significantly (p = 0.00000000049). Both my primary and secondary hypotheses (that singleton ratings would have a higher average and a higher standard deviation) turned out to be false. But given the extremely low p-value, the experiment was still successful in producing significant information about real differences between recent and non-recent ratings.The plots below also reveal another significant difference: singleton ratings are much more likely to utilize half-star ratings (0.5, etc). This supports (but does not prove) the assumption that singleton ratings represent recent movies, which are remembered in much greater detail.The next step in this experiment, before pursuing the further research options listed above, would be to try toggling the thresholds between singleton, small-batch, and large-batch ratings and seeing how that affects these basic results.
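The `split` helper imported below comes from a local `experiment` module; its internals are not shown, but a minimal sketch of the grouping logic described in the Methodology section — an assumption, not the actual implementation — might look like:
```python
def split(ratings_df):
    """Split ratings into singleton and large-batch sets by per-user daily counts."""
    df = ratings_df.copy()
    df["date"] = df["datetime"].dt.date
    # number of ratings each user recorded on each calendar day
    daily_counts = df.groupby(["userId", "date"])["rating"].transform("count")
    singletons_df = df[daily_counts == 1]    # exactly one rating that day
    batches_df = df[daily_counts > 5]        # more than five ratings that day
    return singletons_df, batches_df         # small batches (2-5) are discarded
```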
###Code
from experiment import split, contrastPlots
singletons_df, batches_df = split(smallRatings_df)
contrastPlots(singletons_df, batches_df)
p_val = sp.stats.ttest_ind(singletons_df["rating"], batches_df["rating"])[1]
print("P-value for singleton vs. large-batch ratings: {}".format(p_val))
###Output
_____no_output_____
|
SubsurfaceDataAnalytics_NeuralNetv2.ipynb
|
###Markdown
Visualize the DatasetLet's visualize the train and test data split on a single scatter plot.* we want to make sure it is fair* ensure that the test samples are not clustered or too far away from the training data.We will look at the original data and normalized data, the input to the neural network.
###Code
plt.subplot(211)
plt.plot(X2_train['Depth'].values,y2_train['Nporosity'].values, 'o', markerfacecolor='red', markeredgecolor='black', alpha=0.4, label = "Train")
plt.plot(X2_test['Depth'].values,y2_test['Nporosity'].values, 'o', markerfacecolor='blue', markeredgecolor='black', alpha=0.4, label = "Test")
plt.title('Standard Normal Porosity vs. Depth')
plt.xlabel('Z (m)')
plt.ylabel('NPorosity')
plt.legend()
plt.subplots_adjust(left=0.0, bottom=0.0, right=2.0, top=1.0, wspace=0.2, hspace=0.2)
plt.subplot(212)
plt.plot(X2_train['norm_Depth'].values,y2_train['norm_Nporosity'].values, 'o', markerfacecolor='red', markeredgecolor='black', alpha=0.4, label = "Train")
plt.plot(X2_test['norm_Depth'].values,y2_test['norm_Nporosity'].values, 'o', markerfacecolor='blue', markeredgecolor='black', alpha=0.4, label = "Test")
plt.title('Normalized Standard Normal Porosity vs. Normalized Depth')
plt.xlabel('Normalized Z')
plt.ylabel('Normalized NPorosity')
plt.legend()
plt.subplots_adjust(left=0.0, bottom=0.0, right=2.0, top=1.5, wspace=0.2, hspace=0.3)
plt.show()
###Output
_____no_output_____
###Markdown
Specify the Prediction LocationsGiven this training and testing data, let's specify the prediction locations over the range of the observed depths at regularly spaced $nbins$ locations.
###Code
# Specify the prediction locations
nbins = 1000
depth_bins = np.linspace(depth_min, depth_max, nbins) # set the bins for prediction
norm_depth_bins = (depth_bins-depth_min)/(depth_max-depth_min)*2-1 # use normalized bins
###Output
_____no_output_____
###Markdown
Build and Train a Simple Neural NetworkFor our first model we will build a simple model with: * 1 predictor feature - depth ($d$)* 1 response feature - normal score porosity ($N\{\phi\}$)we will build a model to predict normal score porosity from depth over all locations in our model $\bf{u} \in AOI$. \begin{equation}N\{\phi(\bf{u})\} = \hat{f} (d(\bf{u}))\end{equation}and use this model to support the prediction of porosity between the wells. Design, Train and Test a Neural NetworkIn the following code we use keras / tensorflow to:1. **Design the Network** - we use a fully connected, feed forward neural network with one node in the input and output layers, to receive the normalized depth and output the normalized (normal score) porosity. We found by trial and error, given the complexity of the dataset, that we required a significant network width (about 500 nodes) and a network depth of at least 4 hidden layers. 2. **Select the Optimizer** - we selected the adam optimizer (Kingma and Ba, 2015). This optimizer is computationally efficient and is well suited to problems with noisy data and sparse gradients. It is an extension of stochastic gradient descent with the addition of an adaptive gradient algorithm that calculates per-parameter learning rates to improve learning with sparse gradients, and root mean square propagation that sets the learning rate based on the recent magnitudes of the gradients for each parameter to improve performance with noisy data. We include stochastic gradient descent for experimentation. * we found a learning rate of 0.01 to 0.001 works well * we found the rate of decay parameters of $\beta_1=0.9$ and $\beta_2=0.999$ performed well 3. **Compile the Machine** - specify the optimizer, loss function and the metric for model training. 4. **Train the Network** - fit / train the model parameters with a specified batch size over a specified number of epochs. We specify the train and test normalized predictor and response features.Then we visualize the model in the original units.
###Code
# Design the neural network
model_2 = Sequential([
Dense(1, activation='linear', input_shape=(1,)), # input layer
Dense(3, activation='relu'),
# Dense(50, activation='relu'),
# Dense(50, activation='relu'),
# Dense(20, activation='relu'), # uncomment these to add hidden layers
# Dense(20, activation='relu'),
# Dense(20, activation='relu'),
# Dense(20, activation='relu'),
Dense(1, activation='linear'), # output layer
])
# Select the Optimizer
adam = keras.optimizers.Adam(lr=0.003, beta_1=0.9, beta_2=0.999, epsilon=None, decay=0.0, amsgrad=False) # adam optimizer
#sgd = keras.optimizers.SGD(lr=0.001, momentum=0.0, decay = 0.0, nesterov=False) # stochastic gradient descent
# Compile the Machine
model_2.compile(optimizer=adam,loss='mse',metrics=['accuracy'])
# Train the Network
hist_2 = model_2.fit(X2_train['norm_Depth'], y2_train['norm_Nporosity'],
batch_size=5, epochs=200,
validation_data=(X2_test['norm_Depth'], y2_test['norm_Nporosity']),verbose = 0)
# Predict with the Network
pred_norm_Nporosity = model_2.predict(np.array(norm_depth_bins)) # predict with our ANN
pred_Nporosity = ((pred_norm_Nporosity + 1)/2*(Npor_max - Npor_min)+Npor_min) # backtransform from [-1,1] to original units
# Plot the Model Predictions
plt.subplot(1,1,1)
plt.plot(depth_bins,pred_Nporosity,'black',linewidth=2)
plt.plot(X2_train['Depth'].values,y2_train['Nporosity'].values, 'o', markerfacecolor='red', markeredgecolor='black', alpha=1.0, label = "Train")
plt.plot(X2_test['Depth'].values,y2_test['Nporosity'].values, 'o', markerfacecolor='blue', markeredgecolor='black', alpha=1.0, label = "Test")
plt.xlabel('Depth (m)')
plt.ylabel('Porosity (fraction)')
plt.legend()
plt.subplots_adjust(left=0.0, bottom=0.0, right=2.0, top=0.8, wspace=0.2, hspace=0.2)
plt.show()
###Output
_____no_output_____
###Markdown
Evaluation of the ModelFor my specified artificial neural network design and optimization parameters I have a very flexible model to fit the data.* artificial neural networks live up to their designation as **Universal Function Approximators**Let's check the training curve, the loss function for our model over the training and testing datasets.* **square loss** ($L_2$ loss) is:\begin{equation}L_2 = \sum_{\bf{u}_{\alpha} \in AOI} \left(y(\bf{u}_{\alpha}) - \hat{f}\left(x_1(\bf{u}_{\alpha}),\ldots,x_m(\bf{u}_{\alpha})\right)\right)^2\end{equation}* this is a measure of the inaccuracy over the available dataWe can see the progress of the model over epochs in the reduction of training and testing error.* we can observe that the model matches the training data after about 200 epochs, but continues to improve up to 1,000 epochs
###Code
# Plot the Loss vs. Training Epoch
plt.subplot(1,1,1)
plt.plot(hist_2.history['loss'])
plt.plot(hist_2.history['val_loss'])
plt.title('Loss function of Artificial Neural Net Example #2')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper right')
plt.ylim(0,0.4)
plt.grid()
plt.tight_layout()
plt.subplots_adjust(left=0.0, bottom=0.0, right=2.0, top=0.8, wspace=0.2, hspace=0.2)
plt.show()
###Output
_____no_output_____
###Markdown
Some ObservationsWe performed a set of experiments and made the following observations that may help you experiment with the design and training of this artificial neural network. Each machine was trained over 1000 epochs with a batch size of 5. For a 1 $\times$ 100 $\times$ 100 $\times$ 1 neural network:* Learning Rate of 0.001 still converging at 1000 epochs, missing some features* Learning Rate of 0.1 - stuck in a local minimum as a step functionFor a 1 $\times$ 500 $\times$ 500 $\times$ 1 neural network:* Learning Rate of 0.01 - 0.001 for a close fit to training data* Learning Rate of 0.1 - stuck in a local minimum as a line* Learning Rate of $\le$ 0.0001 still converging at 1000 epochs, missing some featuresFor a 1 $\times$ 500 $\times$ 500 $\times$ 500 $\times$ 1 neural network:* Learning Rate of 0.01 - 0.001 for a close fit to training data* Learning Rate of 0.1 - stuck as a line* Learning Rate of $\le$ 0.0001 still converging at 1000 epochs, missing some features Visualizing the Neural NetThere are some methods available to interrogate the artificial neural net.* neural net summary* weightsHere's the summary from our neural net. It lists, by layer, the number of nodes and number of parameters.
###Code
model_2.summary() # artificial neural network design and number of parameters
###Output
Model: "sequential_2"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_7 (Dense) (None, 1) 2
_________________________________________________________________
dense_8 (Dense) (None, 3) 6
_________________________________________________________________
dense_9 (Dense) (None, 1) 4
=================================================================
Total params: 12
Trainable params: 12
Non-trainable params: 0
_________________________________________________________________
###Markdown
We can also see the actual trained weights for each node in each layer.
###Code
for layer in model_2.layers: # weights for the trained artificial neural network
g = layer.get_config()
h = layer.get_weights()
print(g)
print(h)
print('\n')
###Output
{'name': 'dense_7', 'trainable': True, 'batch_input_shape': (None, 1), 'dtype': 'float32', 'units': 1, 'activation': 'linear', 'use_bias': True, 'kernel_initializer': {'class_name': 'GlorotUniform', 'config': {'seed': None}}, 'bias_initializer': {'class_name': 'Zeros', 'config': {}}, 'kernel_regularizer': None, 'bias_regularizer': None, 'activity_regularizer': None, 'kernel_constraint': None, 'bias_constraint': None}
[array([[1.3643419]], dtype=float32), array([0.06368703], dtype=float32)]
{'name': 'dense_8', 'trainable': True, 'dtype': 'float32', 'units': 3, 'activation': 'relu', 'use_bias': True, 'kernel_initializer': {'class_name': 'GlorotUniform', 'config': {'seed': None}}, 'bias_initializer': {'class_name': 'Zeros', 'config': {}}, 'kernel_regularizer': None, 'bias_regularizer': None, 'activity_regularizer': None, 'kernel_constraint': None, 'bias_constraint': None}
[array([[-0.42711794, -0.64516276, 1.1085054 ]], dtype=float32), array([-0.183792 , -0.28067875, -0.7691435 ], dtype=float32)]
{'name': 'dense_9', 'trainable': True, 'dtype': 'float32', 'units': 1, 'activation': 'linear', 'use_bias': True, 'kernel_initializer': {'class_name': 'GlorotUniform', 'config': {'seed': None}}, 'bias_initializer': {'class_name': 'Zeros', 'config': {}}, 'kernel_regularizer': None, 'bias_regularizer': None, 'activity_regularizer': None, 'kernel_constraint': None, 'bias_constraint': None}
[array([[-1.2426817 ],
[-0.5781366 ],
[ 0.51588565]], dtype=float32), array([0.00843373], dtype=float32)]
###Markdown
Subsurface Data Analytics Artificial Neural Networks for Prediction in Python Michael Pyrcz, Associate Professor, University of Texas at Austin [Twitter](https://twitter.com/geostatsguy) | [GitHub](https://github.com/GeostatsGuy) | [Website](http://michaelpyrcz.com) | [GoogleScholar](https://scholar.google.com/citations?user=QVZ20eQAAAAJ&hl=en&oi=ao) | [Book](https://www.amazon.com/Geostatistical-Reservoir-Modeling-Michael-Pyrcz/dp/0199731446) | [YouTube](https://www.youtube.com/channel/UCLqEr-xV-ceHdXXXrTId5ig) | [LinkedIn](https://www.linkedin.com/in/michael-pyrcz-61a648a1) | [GeostatsPy](https://github.com/GeostatsGuy/GeostatsPy) Honggeun Jo, Graduate Student, The University of Texas at Austin [LinkedIn](https://www.linkedin.com/in/honggeun-jo/?originalSubdomain=kr) | [Twitter](https://twitter.com/HonggeunJ) PGE 383 Exercise: Artificial Neural Networks for Subsurface Modeling in Python Here's a simple workflow demonstration of artificial neural networks for subsurface modeling workflows. This should help you get started with building subsurface models that use data analytics and machine learning. Here are some basic details about neural networks. Neural NetworksA machine learning method for supervised learning for classification and regression analysis. Here are some key aspects of artificial neural networks.**Basic Design** *"...a computing system made up of a number of simple, highly interconnected processing elements, which process information by their dynamic state response to external inputs."* Caudill (1989). **Nature-inspired Computing** based on the neuronal structure in the brain, including many interconnected, simple processing units, known as nodes, that are capable of complicated emergent pattern detection due to a large number of nodes and interconnectivity.**Training and Testing** just like any other predictive model (e.g. linear regression, decision trees and support vector machines) we perform training to fit parameters and testing to tune hyperparameters. Here we observe the error with training and testing datasets, but do not demonstrate tuning of the hyperparameters. **Parameters** are the weights applied to each connection and a bias term applied to each node. For a single node in an artificial neural network, this includes the slope terms, $\beta_i$, and the bias term, $\beta_{0}$.\begin{equation}Y = \sum_{i=1}^{m} \beta_i X_i + \beta_0\end{equation}it can be seen that the number of parameters increases rapidly as we increase the number of nodes and the connectivity between the nodes.**Layers** the typical artificial neural net is structured with an **input layer**, with one node for each of the $m$ predictor features, $X_1,\ldots,X_m$. There is an **output layer**, with one node for each of the $r$ response features, $Y_1,\ldots,Y_r$. There may be one or more layers of nodes between the input and output layers, known as **hidden layer(s)**. **Connections** are the linkages between the nodes in adjacent layers. For example, in a fully connected artificial neural network, all the input nodes are connected to all of the nodes in the first layer, then all of the nodes in the first layer are connected to the next layer and so forth. Each connection includes a weight parameter as indicated above.**Nodes** receive the weighted signal from the connected previous layer nodes, sum and then apply this result to the **activation** function in the node. Some example activation functions include:* **Binary** the node fires or not.
This is represented by a Heaviside step function.* **Identity** the input is passed to the output $f(x) = x$* **Linear** the node passes a signal that increases linearly with the weighted input.* **Logistic** also known as sigmoid or soft step $f(x) = \frac{1}{1+e^{-x}}$the node output is the nonlinear activation function applied to the linearly weighted inputs. This is fed to all nodes in the next layer.**Training Cycles** - the presentation of a batch of data, forward application of the current prediction model to make estimates, calculation of error and then backpropagation of error to correct the artificial neural network parameters to reduce the error over all of the batches.**Batch** is the set of training data for each training cycle of forward prediction and back propagation of error, drawn to train for each iteration. There is a trade-off: a larger batch results in more computational time per iteration, but a more accurate estimate of the error to adjust the weights. Smaller batches result in a noisier estimate of the error, but faster epochs; this results in faster learning and even possibly more robust models.**Epochs** - a set of training cycles, batches covering all available training data. **Local Minimums** - if one calculated the error hypersurface over the range of model parameters it would ideally be a hyperparaboloid with a single global minimum error solution. But this error hypersurface is rough, and it is possible to get stuck in a local minimum. **Learning Rate** and **Momentum** coefficients are introduced to avoid getting stuck in local minimums.* **Momentum** is a hyperparameter to control the use of information from the weight update over the last epoch for consideration in the current epoch. This can be accomplished with an update vector, $v_i$, and a momentum parameter, $\alpha$, to calculate the current weight update, $v_{i+1}$, given the new update $\theta_{i+1}$.\begin{equation}v_{i+1} = \alpha v_i + \theta_{i+1}\end{equation}* **Learning Rate** is a hyperparameter that controls the adjustment of the weights in response to the gradient indicated by backpropagation of error. A single-node forward pass and a momentum-style update are sketched in the short code block below. Applications to subsurface modelingWe demonstrate the estimation of normal score transformed porosity from depth. This would be useful for building a vertical trend model. * modeling the complicated relationship between porosity and depth. Limitations of Neural Network EstimationSince we demonstrate the use of an artificial neural network to estimate porosity from sparsely sampled data over depth, we should comment on limitations of our artificial neural networks for this estimation problem:* does not honor the well data* does not honor the histogram of the data* does not honor spatial correlation * does not honor the multivariate relationship* generally low interpretability models* requires a large number of data for effective training* high model complexity with high model variance Workflow GoalsLearn the basics of machine learning in python to predict subsurface features. This includes:* Loading and visualizing sample data* Trying out neural nets Objective In the PGE 383: Subsurface Machine Learning Class, I want to provide hands-on experience with building subsurface modeling workflows. Python provides an excellent vehicle to accomplish this. I have coded a package called GeostatsPy with GSLIB: Geostatistical Library (Deutsch and Journel, 1998) functionality that provides basic building blocks for building subsurface modeling workflows. 
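As a quick illustration of the node equations above — a minimal sketch, not part of the original exercise, with all numbers hypothetical — here is a single-node forward pass with a logistic activation, followed by a momentum-style update:
```python
import numpy as np

def logistic(x):
    """Logistic (sigmoid) activation: f(x) = 1 / (1 + exp(-x))."""
    return 1.0 / (1.0 + np.exp(-x))

def node_output(x, beta, beta0):
    """Single-node forward pass: activation applied to the weighted sum plus bias."""
    return logistic(np.dot(beta, x) + beta0)

x = np.array([0.2, -0.5])               # hypothetical inputs from the previous layer
beta = np.array([0.7, 1.1])             # hypothetical connection weights
print(node_output(x, beta, beta0=0.1))  # a value in (0, 1), fed to the next layer

# Momentum-style update: v_{i+1} = alpha * v_i + theta_{i+1},
# where theta_{i+1} is the new gradient-based weight update.
alpha, v_i, theta = 0.9, 0.02, -0.01    # hypothetical values
v_next = alpha * v_i + theta
print(v_next)
```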
The objective is to remove the hurdles of subsurface modeling workflow construction by providing building blocks and sufficient examples. This is not a coding class per se, but we need the ability to 'script' workflows working with numerical methods. Getting StartedHere are the steps to get set up in Python with the GeostatsPy package:1. Install Anaconda 3 on your machine (https://www.anaconda.com/download/). 2. From Anaconda Navigator (within Anaconda3 group), go to the environment tab, click on base (root) green arrow and open a terminal. 3. In the terminal type: pip install geostatspy. 4. Open Jupyter and in the top block get started by copy and pasting the code block below from this Jupyter Notebook to start using the geostatspy functionality. You will need to copy the data file to your working directory. They are available here:* Tabular data - 12_sample_data.csv found [here](https://github.com/GeostatsGuy/GeoDataSets/blob/master/12_sample_data.csv).There are examples below using these functions. You can go here to see a list of the available functions, https://git.io/fh4eX, other example workflows and source code. Import Required PackagesWe will also need some standard packages. These should have been installed with Anaconda 3.
###Code
import geostatspy.GSLIB as GSLIB # GSLIB utilities, visualization and wrapper
import geostatspy.geostats as geostats # GSLIB methods converted to Python
###Output
_____no_output_____
###Markdown
We will also need some standard packages. These should have been installed with Anaconda 3.
###Code
import numpy as np # ndarrys for gridded data
import pandas as pd # DataFrames for tabular data
import os # set working directory, run executables
import matplotlib.pyplot as plt # for plotting
import seaborn as sns # for plotting
import warnings # suppress warnings from seaborn pairplot
from sklearn.model_selection import train_test_split # train / test DataFrame split
###Output
_____no_output_____
###Markdown
We will also need the following packages to train and test our artificial neural nets:* Tensorflow - open source machine learning * Keras - high level application programing interface (API) to build and train modelsMore information is available at [tensorflow install](https://www.tensorflow.org/install).* This workflow was designed with tensorflow version 1.13.1 and does not work with version 2.0.0 alpha.To check your current version of tensorflow you could run the next block of code.
###Code
import tensorflow as tf
tf.__version__ # check the installed version of tensorflow
###Output
_____no_output_____
###Markdown
Let's import all of the tensorflow and keras methods that we will need in our workflow.
###Code
import tensorflow as tf
from keras.models import Sequential, load_model
from keras.layers.core import Dense, Dropout, Activation
from keras.utils import np_utils
import keras
from tensorflow.python.keras import backend as k
###Output
_____no_output_____
###Markdown
Set the working directoryI always like to do this so I don't lose files and to simplify subsequent read and writes (avoid including the full address each time).
###Code
os.chdir("C:/PGE383")
###Output
_____no_output_____
###Markdown
Loading DataLet's load the provided 1D spatial dataset '1D_Porosity.csv'. It is a comma delimited file with: * Depth ($m$)* Normal Score Porosity It is common to transform properties to standard normal for geostatistical workflows.We load it with the pandas 'read_csv' function into a data frame we called 'df2' and then preview it to make sure it loaded correctly.**Python Tip: using functions from a package** just type the label for the package that we declared at the beginning:```pythonimport pandas as pd```so we can access the pandas function 'read_csv' with the command: ```pythonpd.read_csv()```but read_csv has required input parameters. The essential one is the name of the file. For our circumstance all the other default parameters are fine. If you want to see all the possible parameters for this function, just go to the docs [here](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html). * The docs are always helpful* There is often a lot of flexibility for Python functions, possible through using various input parametersAlso, the program has an output, a pandas DataFrame loaded from the data. So we have to specify the name / variable representing that new object.```pythondf2 = pd.read_csv("1D_Porosity.csv") ```Let's run this command to load the data and then look at the resulting DataFrame to ensure that we loaded it.
###Code
df2 = pd.read_csv("1D_Porosity.csv") # read a .csv file in as a DataFrame
df2.head() # display the first samples in the table as a preview
###Output
_____no_output_____
###Markdown
Data NormalizationWe must normalize the features before we apply them in an artificial neural network model. These are the motivations for this normalization:* remove the impact of scale of different types of data (i.e., depth varies between $[0,10]$, but porosity only varies between $[-3,3.0]$).* activation functions in artificial neural networks are designed to be more sensitive to values of nodes closer to 0.0 (i.e., this results in a higher gradient and improves backpropagation in training)Let's normalize each feature. * We apply the min-max normalization by hand to force both the predictor and response features to be bounded on $[-1,1]$.* It is easy to backtransform given we keep track of the original min and max values
###Code
depth_min = df2['Depth'].values.min(); depth_max = df2['Depth'].values.max()
Npor_min = df2['Nporosity'].values.min(); Npor_max = df2['Nporosity'].values.max()
df2['norm_Depth'] = (df2['Depth'] - depth_min)/(depth_max - depth_min) * 2 - 1
df2['norm_Nporosity'] = (df2['Nporosity'] - Npor_min)/(Npor_max - Npor_min) * 2 - 1
df2.head()
###Output
_____no_output_____
###Markdown
It is also a good idea to check the summary statistics. * All normalized features should now range from -1.0 to 1.0
###Code
df2.describe().transpose()
###Output
_____no_output_____
###Markdown
Separation of Training and Testing DataWe also need to split our data into training / testing datasets so that we:* can train our artificial neural networks using the training data * while testing their performance with the withheld testing (validation) data.
###Code
X2 = df2.iloc[:,[0,2]] # extract the predictor feature - depth (original and normalized)
y2 = df2.iloc[:,[1,3]] # extract the response feature - normal score porosity (original and normalized)
X2_train, X2_test, y2_train, y2_test = train_test_split(X2, y2, test_size=0.2, random_state=73073)
###Output
_____no_output_____
###Markdown
Visualize the DatasetLet's visualize the train and test data split on a single scatter plot.* we want to make sure it is fair* ensure that the test samples are not clustered or too far away from the training data.We will look at the original data and normalized data, the input to the neural network.
###Code
plt.subplot(211)
plt.plot(X2_train['Depth'].values,y2_train['Nporosity'].values, 'o', markerfacecolor='red', markeredgecolor='black', alpha=0.4, label = "Train")
plt.plot(X2_test['Depth'].values,y2_test['Nporosity'].values, 'o', markerfacecolor='blue', markeredgecolor='black', alpha=0.4, label = "Test")
plt.title('Standard Normal Porosity vs. Depth')
plt.xlabel('Z (m)')
plt.ylabel('NPorosity')
plt.legend()
plt.subplots_adjust(left=0.0, bottom=0.0, right=2.0, top=1.0, wspace=0.2, hspace=0.2)
plt.subplot(212)
plt.plot(X2_train['norm_Depth'].values,y2_train['norm_Nporosity'].values, 'o', markerfacecolor='red', markeredgecolor='black', alpha=0.4, label = "Train")
plt.plot(X2_test['norm_Depth'].values,y2_test['norm_Nporosity'].values, 'o', markerfacecolor='blue', markeredgecolor='black', alpha=0.4, label = "Test")
plt.title('Normalized Standard Normal Porosity vs. Normalized Depth')
plt.xlabel('Normalized Z')
plt.ylabel('Normalized NPorosity')
plt.legend()
plt.subplots_adjust(left=0.0, bottom=0.0, right=2.0, top=1.5, wspace=0.2, hspace=0.3)
plt.show()
###Output
_____no_output_____
###Markdown
Specify the Prediction LocationsGiven this training and testing data, let's specify the prediction locations over the range of the observed depths at regularly spaced $nbins$ locations.
###Code
# Specify the prediction locations
nbins = 1000
depth_bins = np.linspace(depth_min, depth_max, nbins) # set the bins for prediction
norm_depth_bins = (depth_bins-depth_min)/(depth_max-depth_min)*2-1 # use normalized bins
###Output
_____no_output_____
###Markdown
Build and Train a Simple Neural NetworkFor our first model we will build a simple model with: * 1 predictor feature - depth ($d$)* 1 response feature - normal score porosity ($N\{\phi\}$)we will build a model to predict normal score porosity from depth over all locations in our model $\bf{u} \in AOI$. \begin{equation}N\{\phi(\bf{u})\} = \hat{f} (d(\bf{u}))\end{equation}and use this model to support the prediction of porosity between the wells. Design, Train and Test a Neural NetworkIn the following code we use keras / tensorflow to:1. **Design the Network** - we use a fully connected, feed forward neural network with one node in the input and output layers, to receive the normalized depth and output the normalized (normal score) porosity. We found by trial and error, given the complexity of the dataset, that we required a significant network width (about 500 nodes) and a network depth of at least 4 hidden layers. 2. **Select the Optimizer** - we selected the adam optimizer (Kingma and Ba, 2015). This optimizer is computationally efficient and is well suited to problems with noisy data and sparse gradients. It is an extension of stochastic gradient descent with the addition of an adaptive gradient algorithm that calculates per-parameter learning rates to improve learning with sparse gradients, and root mean square propagation that sets the learning rate based on the recent magnitudes of the gradients for each parameter to improve performance with noisy data. We include stochastic gradient descent for experimentation. * we found a learning rate of 0.01 to 0.001 works well * we found the rate of decay parameters of $\beta_1=0.9$ and $\beta_2=0.999$ performed well 3. **Compile the Machine** - specify the optimizer, loss function and the metric for model training. 4. **Train the Network** - fit / train the model parameters with a specified batch size over a specified number of epochs. We specify the train and test normalized predictor and response features.Then we visualize the model in the original units.
###Code
# Design the neural network
model_2 = Sequential([
Dense(1, activation='linear', input_shape=(1,)), # input layer
Dense(500, activation='relu'),
Dense(500, activation='relu'),
Dense(500, activation='relu'),
# Dense(100, activation='relu'), # uncomment these to add hidden layers
# Dense(100, activation='relu'),
# Dense(100, activation='relu'),
# Dense(100, activation='relu'),
Dense(1, activation='linear'), # output layer
])
# Select the Optimizer
adam = keras.optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=None, decay=0.0, amsgrad=False) # adam optimizer
#sgd = keras.optimizers.SGD(lr=0.001, momentum=0.0, decay = 0.0, nesterov=False) # stochastic gradient descent
# Compile the Machine
model_2.compile(optimizer=adam,loss='mse',metrics=['accuracy'])
# Train the Network
hist_2 = model_2.fit(X2_train['norm_Depth'], y2_train['norm_Nporosity'],
batch_size=5, epochs=1000,
validation_data=(X2_test['norm_Depth'], y2_test['norm_Nporosity']),verbose = 0)
# Predict with the Network
pred_norm_Nporosity = model_2.predict(np.array(norm_depth_bins)) # predict with our ANN
pred_Nporosity = ((pred_norm_Nporosity + 1)/2*(Npor_max - Npor_min)+Npor_min) # backtransform from [-1,1] to original units
# Plot the Model Predictions
plt.subplot(1,1,1)
plt.plot(depth_bins,pred_Nporosity,'black',linewidth=2)
plt.plot(X2_train['Depth'].values,y2_train['Nporosity'].values, 'o', markerfacecolor='red', markeredgecolor='black', alpha=1.0, label = "Train")
plt.plot(X2_test['Depth'].values,y2_test['Nporosity'].values, 'o', markerfacecolor='blue', markeredgecolor='black', alpha=1.0, label = "Test")
plt.xlabel('Depth (m)')
plt.ylabel('Porosity (fraction)')
plt.legend()
plt.subplots_adjust(left=0.0, bottom=0.0, right=2.0, top=0.8, wspace=0.2, hspace=0.2)
plt.show()
###Output
_____no_output_____
###Markdown
Evaluation of the ModelFor my specified artificial neural network design and optimization parameters I have a very flexible model to fit the data.* artificial neural networks live up to their designation as **Universal Function Approximators**Let's check the training curve, the loss function for our model over the training and testing datasets.* **square loss** ($L_2$ loss) is:\begin{equation}L_2 = \sum_{\bf{u}_{\alpha} \in AOI} \left(y(\bf{u}_{\alpha}) - \hat{f}\left(x_1(\bf{u}_{\alpha}),\ldots,x_m(\bf{u}_{\alpha})\right)\right)^2\end{equation}* this is a measure of the inaccuracy over the available dataWe can see the progress of the model over epochs in the reduction of training and testing error.* we can observe that the model matches the training data after about 200 epochs, but continues to improve up to 1,000 epochs
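As a quick check connecting this formula to the code — a minimal sketch, not part of the original workflow — the `mse` loss that keras reports is this sum of squared differences divided by the number of samples, so it can be recomputed directly from the trained model (using the `model_2`, `hist_2`, and test arrays defined above):
```python
import numpy as np

# Recompute the mean squared error on the withheld test data and compare
# with the final validation loss recorded during training; the two should
# closely agree (up to the weight updates within the final epoch).
y_hat = model_2.predict(X2_test['norm_Depth'].values).flatten()
mse_test = np.mean((y2_test['norm_Nporosity'].values - y_hat)**2)
print(mse_test, hist_2.history['val_loss'][-1])
```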
###Code
# Plot the Loss vs. Training Epoch
plt.subplot(1,1,1)
plt.plot(hist_2.history['loss'])
plt.plot(hist_2.history['val_loss'])
plt.title('Loss function of Artificial Neural Net Example #2')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper right')
plt.ylim(0,0.4)
plt.grid()
plt.tight_layout()
plt.subplots_adjust(left=0.0, bottom=0.0, right=2.0, top=0.8, wspace=0.2, hspace=0.2)
plt.show()
###Output
_____no_output_____
###Markdown
Some ObservationsWe performed a set of experiments and made the following observations that may help you experiment with the design and training of this artificial neural network. Each machine was trained over 1000 epochs with a batch size of 5. For a 1 $\times$ 100 $\times$ 100 $\times$ 1 neural network:* Learning Rate of 0.001 still converging at 1000 epochs, missing some features* Learning Rate of 0.1 - stuck in a local minimum as a step functionFor a 1 $\times$ 500 $\times$ 500 $\times$ 1 neural network:* Learning Rate of 0.01 - 0.001 for a close fit to training data* Learning Rate of 0.1 - stuck in a local minimum as a line* Learning Rate of $\le$ 0.0001 still converging at 1000 epochs, missing some featuresFor a 1 $\times$ 500 $\times$ 500 $\times$ 500 $\times$ 1 neural network:* Learning Rate of 0.01 - 0.001 for a close fit to training data* Learning Rate of 0.1 - stuck as a line* Learning Rate of $\le$ 0.0001 still converging at 1000 epochs, missing some features Visualizing the Neural NetThere are some methods available to interrogate the artificial neural net.* neural net summary* weightsHere's the summary from our neural net. It lists, by layer, the number of nodes and number of parameters.
###Code
model_2.summary() # artificial neural network design and number of parameters
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_535 (Dense) (None, 1) 2
_________________________________________________________________
dense_536 (Dense) (None, 500) 1000
_________________________________________________________________
dense_537 (Dense) (None, 500) 250500
_________________________________________________________________
dense_538 (Dense) (None, 500) 250500
_________________________________________________________________
dense_539 (Dense) (None, 1) 501
=================================================================
Total params: 502,503
Trainable params: 502,503
Non-trainable params: 0
_________________________________________________________________
###Markdown
We can also see the actual trained weights for each node in each layer.
###Code
for layer in model_2.layers: # weights for the trained artificial neural network
g = layer.get_config()
h = layer.get_weights()
print(g)
print(h)
print('\n')
###Output
{'name': 'dense_535', 'trainable': True, 'batch_input_shape': (None, 1), 'dtype': 'float32', 'units': 1, 'activation': 'linear', 'use_bias': True, 'kernel_initializer': {'class_name': 'VarianceScaling', 'config': {'scale': 1.0, 'mode': 'fan_avg', 'distribution': 'uniform', 'seed': None}}, 'bias_initializer': {'class_name': 'Zeros', 'config': {}}, 'kernel_regularizer': None, 'bias_regularizer': None, 'activity_regularizer': None, 'kernel_constraint': None, 'bias_constraint': None}
[array([[-1.324188]], dtype=float32), array([-0.01764077], dtype=float32)]
{'name': 'dense_536', 'trainable': True, 'units': 500, 'activation': 'relu', 'use_bias': True, 'kernel_initializer': {'class_name': 'VarianceScaling', 'config': {'scale': 1.0, 'mode': 'fan_avg', 'distribution': 'uniform', 'seed': None}}, 'bias_initializer': {'class_name': 'Zeros', 'config': {}}, 'kernel_regularizer': None, 'bias_regularizer': None, 'activity_regularizer': None, 'kernel_constraint': None, 'bias_constraint': None}
[array([[ 4.37869839e-02, 2.84792539e-02, -5.04958704e-02,
1.33315518e-01, 3.78291379e-03, -1.06391739e-02,
9.11150649e-02, -1.02796906e-03, -1.50301322e-01,
-6.21361472e-02, -2.31712814e-02, 4.74340748e-03,
8.10861066e-02, -8.89362991e-02, 2.98289418e-01,
4.41729911e-02, 4.47279736e-02, 2.22163871e-02,
4.29757684e-02, 6.64823875e-02, 3.37973014e-02,
2.44364385e-02, -6.22198582e-02, 8.36473256e-02,
-1.49021158e-02, 8.82090256e-02, 3.54946870e-03,
-1.29045490e-02, 8.74734744e-02, -1.06779337e-02,
-5.43587096e-02, 1.33621832e-03, 1.16167612e-01,
-6.61311031e-04, -9.01760723e-06, -4.26420793e-02,
7.38960207e-02, 3.16858850e-02, 1.09584540e-01,
7.82321021e-02, 3.61922011e-02, -1.04301557e-01,
8.85869563e-02, 9.70736817e-02, -5.80134802e-03,
8.00451189e-02, -1.30036043e-03, 3.91601166e-03,
1.01504810e-01, 3.23649906e-02, 5.36048077e-02,
-8.75119120e-02, 7.06093982e-02, 9.37056392e-02,
9.51493159e-02, -3.00738541e-03, 7.92166516e-02,
-4.90556529e-04, -1.29743107e-02, -7.81329200e-02,
1.32052168e-01, -8.99229273e-02, -6.77816384e-03,
-1.38963714e-01, 4.36128713e-02, -6.23008721e-02,
-6.70676082e-02, -1.58895971e-04, 4.52702381e-02,
6.76621273e-02, -1.48024713e-03, 1.09858453e-01,
-9.12376912e-04, 6.73749018e-03, 5.63484197e-03,
-3.38859251e-03, 1.16737701e-01, -7.53363296e-02,
1.29707232e-01, -1.08739913e-01, -9.22356173e-03,
1.16427720e-01, -7.81598166e-02, -6.16153656e-03,
8.80591869e-02, -6.53325766e-02, 3.85513045e-02,
1.08171615e-03, 9.70692188e-02, -4.51390678e-03,
-1.91461872e-02, -9.73084494e-02, -2.10419614e-02,
-8.55254233e-02, 1.29724205e-01, -1.06441669e-01,
4.56053168e-02, -7.28177875e-02, -7.60430004e-04,
5.38264588e-02, -1.00965656e-01, -6.52337819e-02,
1.06089935e-02, -7.60561451e-02, -6.80339411e-02,
-6.55979514e-02, 1.01904668e-01, 4.16511856e-03,
4.18214388e-02, 3.36837284e-02, -6.89377934e-02,
-3.13841109e-03, 7.37283900e-02, 1.30472451e-01,
3.95321772e-02, -6.40012696e-02, -2.65501030e-02,
1.35376722e-01, 5.19507155e-02, 3.28973262e-03,
9.72523764e-02, 6.78595230e-02, -7.61842132e-02,
5.09470887e-02, -5.81806116e-02, 8.02602246e-02,
-1.05615310e-01, -7.84106031e-02, -8.20061564e-03,
-2.09269673e-02, -6.52782246e-02, -3.76430005e-02,
-4.43258230e-03, -4.43456182e-03, -7.33490521e-03,
4.39224727e-02, 7.80225322e-02, -8.18999037e-02,
-3.16527747e-02, 1.35007845e-02, -1.23362601e-01,
6.97992444e-02, -4.31013806e-03, 2.70868577e-02,
7.12058842e-02, -1.16116460e-03, -4.60769923e-04,
-4.54251692e-02, 6.35423064e-02, -2.99977348e-03,
-1.34880468e-01, 6.31999224e-02, -6.48167804e-02,
-6.69402704e-02, 6.27936516e-03, -3.10638584e-02,
-9.25559103e-02, -1.11442834e-01, 3.28482166e-02,
-1.14701122e-01, -5.21795750e-02, -6.13903143e-02,
-7.53833354e-02, -3.98605177e-03, 3.29084210e-02,
-1.16867386e-02, -5.33676371e-02, -4.76962626e-02,
-1.55353293e-01, 7.93459788e-02, 4.12472570e-03,
-6.63398355e-02, 4.17984463e-02, -4.13348991e-03,
-3.65724275e-03, 2.94227749e-02, -6.86958060e-02,
8.13451335e-02, -1.81635027e-03, 5.97945638e-02,
-7.60767385e-02, -5.49900644e-02, 1.34785116e-01,
-1.07490271e-01, 1.29407361e-01, -1.05475530e-03,
4.20478433e-02, -8.41621831e-02, 9.80847478e-02,
7.03405812e-02, 8.84528086e-02, 2.63709179e-03,
9.27885473e-02, -1.14496797e-01, 5.28439842e-02,
-3.86079773e-02, 4.00377586e-02, 3.54537810e-03,
-7.69117326e-02, -1.08731799e-02, 6.50035217e-02,
6.63219616e-02, -4.19028290e-02, 5.35250939e-02,
8.24268088e-02, 4.13128324e-02, 5.90596795e-02,
-2.60966900e-03, -6.19906597e-02, -4.01285430e-03,
-4.59489822e-02, 1.15634732e-01, 9.54596475e-02,
3.70567292e-02, -6.69994131e-02, 3.28783505e-02,
-7.60106593e-02, -9.85587314e-02, -1.66328298e-03,
-5.84144657e-03, -1.33208230e-01, -9.32852104e-02,
8.55539441e-02, 4.49849386e-03, 4.29949239e-02,
3.05023566e-02, -9.41249058e-02, -5.18078580e-02,
9.61371660e-02, 8.67340341e-02, -2.08380865e-03,
-1.32435605e-01, 3.22007686e-02, -9.46334153e-02,
1.68362688e-02, -1.11437447e-01, -7.92395920e-02,
-7.11047575e-02, -2.16089971e-02, -1.10117979e-01,
6.64046081e-03, 8.86755735e-02, -3.29512381e-03,
-7.75940418e-02, -1.31690934e-01, 6.67404830e-02,
8.91118497e-03, -1.77239031e-02, -8.86819586e-02,
6.95541948e-02, 6.89079836e-02, -1.05990872e-01,
-6.73175603e-03, -6.64992407e-02, 9.71645042e-02,
1.55092170e-03, -1.54509470e-01, -7.50039071e-02,
1.23657798e-02, -1.34944007e-01, -6.35182112e-02,
1.14698768e-01, -9.12414715e-02, -1.27469366e-02,
-1.27320290e-01, -6.35384917e-02, 8.69745612e-02,
-1.01650916e-01, -7.39746392e-02, -1.11858472e-01,
-4.81140334e-03, 5.06917983e-02, -1.02960981e-01,
-6.23498410e-02, -1.90573558e-03, -7.14417920e-02,
-8.64325166e-02, 2.62962803e-02, 7.69598118e-04,
7.82793504e-04, 8.48663449e-02, 5.03086671e-03,
-9.86906141e-02, -6.83365166e-02, -8.57298437e-04,
-7.14763552e-02, -2.47936677e-02, 4.49170657e-02,
2.82813385e-02, 6.86316267e-02, 2.87176669e-02,
-6.49383990e-04, 1.10460266e-01, 4.37190309e-02,
-1.34818971e-01, 2.25369576e-02, 7.59827569e-02,
8.77600238e-02, 6.10029958e-02, 4.76534851e-02,
-1.09301433e-01, 9.92182493e-02, 9.33573022e-02,
-1.11727640e-01, 7.03948066e-02, 6.28180876e-02,
-1.55644551e-01, 3.15748453e-02, 3.85251343e-02,
7.26150498e-02, -2.99699232e-02, -1.16008695e-03,
9.00451466e-02, 3.72605994e-02, -7.57500231e-02,
1.35603398e-01, 3.50316837e-02, 1.13473386e-01,
-7.59589225e-02, -6.93566352e-02, -9.32576507e-02,
-5.85489348e-02, -8.36537704e-02, 6.00345293e-03,
-8.75586644e-02, 1.75075140e-02, 5.72374538e-02,
-1.57084502e-02, 3.77423353e-02, -1.36674359e-03,
3.36039974e-03, -1.06565133e-01, -1.29231559e-02,
-6.47024736e-02, -5.83822392e-02, 9.99464691e-02,
-8.59773308e-02, -2.25808448e-03, 1.04079396e-01,
-3.04516940e-03, -1.45849600e-01, -8.17036256e-02,
-1.57709438e-02, 1.06820256e-01, -4.14584093e-02,
-6.64309459e-03, -9.44048446e-03, 8.36263970e-03,
1.09452441e-01, 7.22158607e-03, -6.70162402e-03,
1.76369622e-02, -4.00551409e-02, 3.14997211e-02,
-6.75998926e-02, 5.34251817e-02, -6.50570542e-02,
-5.72608598e-02, -2.12053079e-02, -6.81426972e-02,
1.92111805e-02, -2.33714134e-02, -2.11180479e-04,
1.36419702e-02, 6.11771382e-02, -5.63817704e-03,
5.86114563e-02, 8.18122774e-02, 8.32198001e-03,
-8.27094913e-02, 4.07909937e-02, 8.27760771e-02,
6.55070022e-02, -1.61201786e-02, -8.37071761e-02,
-1.21987864e-01, 1.02040082e-01, 6.02825582e-02,
-6.75648451e-02, 7.14957565e-02, 2.08943477e-03,
4.29181755e-02, -2.25497922e-03, 6.37466386e-02,
-6.50759786e-02, -9.48397517e-02, 4.51110490e-02,
1.01239242e-01, -8.45591426e-02, 3.51027623e-02,
-7.04090968e-02, -6.20335378e-02, 5.60532324e-02,
-6.05542287e-02, 8.46762434e-02, 1.10914316e-02,
6.13659620e-02, 2.12732601e-04, 3.68024129e-03,
-6.93032220e-02, -2.75107697e-02, -1.10175148e-01,
-2.30825553e-03, -8.83723274e-02, -5.68870977e-02,
-1.47443516e-02, 5.29175289e-02, 1.02081917e-01,
1.03125438e-01, -5.90750249e-03, -1.71697848e-02,
4.59938950e-04, -9.67270415e-03, 1.13146283e-01,
6.02231696e-02, -1.22713171e-01, -1.56251043e-01,
9.25789773e-02, 3.61172482e-02, -6.59704860e-03,
6.58418387e-02, -1.08241461e-01, 4.66979221e-02,
1.89785305e-02, 1.06163993e-01, -7.98181817e-03,
-4.21184935e-02, -6.92812204e-02, -8.86960998e-02,
-6.53993636e-02, -7.85087645e-02, -8.98909271e-02,
9.31996405e-02, 3.45884226e-02, 8.91835988e-02,
6.24386482e-02, -4.41202847e-03, 6.15031570e-02,
-7.02832565e-02, 1.05966171e-02, -1.43899932e-03,
-7.76754320e-02, 1.02125727e-01, 8.34097266e-02,
-1.43925697e-01, 8.42536539e-02, 1.04465811e-02,
2.24737227e-02, 1.08246863e-01, -7.89064392e-02,
-9.83206779e-02, -1.22306362e-01, 4.55920026e-02,
1.05681658e-01, -2.09155977e-02, 3.43536325e-02,
-8.47020000e-02, 5.27732074e-02, 5.98774552e-02,
-1.04057558e-01, -1.83242336e-02, 7.08967596e-02,
-6.99171498e-02, -6.37240848e-03, -2.10936777e-02,
5.89776859e-02, 8.36545303e-02, -1.14212818e-01,
-1.50420722e-02, 7.64402226e-02, 7.43056163e-02,
1.21164434e-01, -1.32907426e-03, -4.82582068e-03,
2.17562038e-02, 2.86421478e-02, -5.22685573e-02,
-8.04568008e-02, 1.20499551e-01, -1.01798251e-01,
1.02965400e-01, -9.50997025e-02, -2.22027255e-03,
7.36168995e-02, 1.10508129e-03, -3.12446966e-03,
3.03575695e-02, -1.19504379e-02, 6.95811212e-02,
-6.69124594e-04, 8.70308727e-02, 3.32550704e-02,
-7.08441362e-02, 4.81908470e-02, 8.01099166e-02,
1.68886909e-03, 9.67642516e-02, -5.53571247e-02,
2.30133021e-03, 6.30277395e-02]], dtype=float32), array([-0.06139961, -0.03823963, -0.04796872, -0.05715915, -0.00778893,
-0.01715333, -0.07307467, -0.00414138, -0.02369456, -0.08908577,
-0.03416378, -0.01094732, -0.08148513, -0.06651676, -0.04846705,
-0.06414326, -0.06192571, -0.03751549, -0.0619099 , -0.07249781,
-0.05136329, 0.01927103, -0.05907007, -0.08442453, -0.02177054,
-0.06876581, -0.00651649, -0.02827632, -0.06884521, -0.01586316,
-0.05935439, -0.01289208, -0.05038578, -0.00867271, -0.00318428,
0.0086913 , -0.08008435, -0.0433035 , -0.04697267, -0.07393831,
-0.05754452, -0.07528708, -0.06741227, -0.076387 , 0.03964805,
-0.07952945, -0.005366 , 0.02763389, -0.07387943, -0.0549034 ,
-0.07615414, -0.08007966, -0.08798408, -0.07791902, -0.07160194,
-0.00560842, -0.08591481, -0.00418956, -0.02050515, -0.05918604,
-0.07379995, -0.06774189, -0.01120751, -0.01974001, 0.06067615,
-0.05945587, -0.05904902, 0.00732201, -0.06428739, -0.07017349,
-0.0086476 , -0.0485802 , -0.00730538, -0.00957363, -0.02154209,
-0.01012385, -0.0503656 , -0.07115868, -0.06906632, 0.00565668,
-0.01403928, -0.07305516, -0.07933737, -0.00866085, -0.0698074 ,
-0.07289582, -0.05830758, -0.00272358, -0.07188021, -0.01362477,
-0.02701662, -0.07330002, -0.02126301, -0.08914766, -0.08186267,
-0.07523412, -0.06335444, -0.06751571, -0.00684798, -0.07694793,
-0.07039765, -0.06945851, -0.02247573, -0.07047946, -0.06347238,
-0.04554536, -0.04426727, -0.0142412 , -0.05859767, -0.05376985,
-0.07195932, -0.00791719, -0.07501336, -0.05634447, -0.0550543 ,
-0.0462756 , -0.04563486, -0.05884227, -0.07370295, -0.00788396,
-0.04158321, -0.0841307 , -0.07055681, -0.0639838 , -0.05372795,
-0.08102303, -0.07511374, -0.05606232, -0.02532341, -0.02959966,
-0.06019864, -0.05242248, -0.0084167 , -0.00598817, -0.01137267,
-0.06449889, -0.08436804, -0.07252833, -0.0444572 , -0.01972921,
-0.04350067, -0.08468529, -0.00869756, -0.04184554, -0.08907367,
-0.0059012 , -0.00231363, -0.06242609, -0.07555081, 0.0111065 ,
-0.00878023, -0.06532616, -0.06585679, -0.05300143, -0.01283845,
0.05271176, -0.0663313 , -0.07893655, -0.05373202, -0.08252703,
-0.07084762, -0.06485971, -0.07689745, -0.01427657, -0.05416669,
-0.02175184, -0.0562396 , -0.06432167, -0.02325281, -0.06684574,
-0.00705054, -0.06296429, -0.05777869, -0.00748384, -0.00635496,
-0.04159978, -0.06685322, -0.08389334, -0.00701761, -0.06432831,
-0.07090707, -0.06264 , -0.08206632, -0.07564244, -0.05886821,
-0.0063959 , -0.05845111, -0.06204287, -0.04233636, -0.07327709,
-0.07834864, -0.00707027, -0.09887744, 0.00626668, -0.06312662,
-0.02920069, -0.05897494, -0.00603426, -0.05829883, -0.02059165,
-0.07834075, -0.06683242, -0.05705149, -0.0762232 , -0.08900511,
-0.05739731, -0.07109588, -0.00592516, -0.05978489, -0.0183938 ,
-0.05917713, -0.04931708, -0.07244904, -0.05095007, -0.06334067,
-0.05363588, -0.06938779, -0.07172804, -0.00444791, -0.01391948,
-0.03241966, -0.07023546, -0.09296511, -0.00707521, -0.06007748,
-0.05099807, -0.08633383, -0.04671507, -0.07162772, -0.06880689,
-0.00580111, -0.02292861, -0.05301842, -0.0699292 , 0.06440304,
-0.08127811, -0.08215243, -0.06553537, -0.02928443, -0.07802734,
-0.01139031, -0.07959687, 0.01446193, -0.07216932, 0.00841058,
-0.06963574, -0.01869061, -0.0314908 , -0.06402976, -0.08380255,
-0.08380487, -0.07963423, -0.01089369, -0.06037915, -0.07224868,
-0.00275239, -0.02820986, -0.07644273, -0.01820268, -0.02047534,
-0.06006921, -0.04915273, -0.0652189 , -0.01797788, -0.00674913,
-0.06562325, -0.09353849, 0.01572672, -0.06633594, -0.07833622,
-0.00790852, -0.06969513, -0.07658083, -0.06411042, -0.00743337,
-0.06299569, -0.06349686, 0.06481276, -0.01390679, -0.00704703,
-0.09151021, -0.00897039, -0.07108843, -0.07116202, -0.0073884 ,
-0.07649622, -0.03437965, -0.06293312, -0.04763817, -0.08573087,
-0.04416954, -0.00300061, -0.07880595, -0.0616023 , -0.01828985,
-0.0342025 , -0.07971473, -0.08888055, -0.07533889, -0.06609005,
-0.07731427, -0.07258897, -0.07445047, -0.08132768, -0.06696977,
-0.07822786, -0.05202796, -0.05250061, -0.05390753, -0.09042215,
-0.02522271, -0.00187201, -0.08000803, -0.04903469, -0.06784555,
-0.0589343 , -0.05476155, -0.05010034, -0.05756522, -0.06287936,
-0.06581006, -0.06560832, -0.07855322, -0.00921494, -0.06341241,
-0.03245432, -0.08061009, -0.02204132, -0.05269145, -0.00488693,
-0.00603298, 0.00657092, -0.02193722, -0.06552897, -0.06532465,
-0.07232498, -0.07811823, 0.02950191, -0.04523056, -0.0042063 ,
-0.0276743 , -0.08286427, -0.0218486 , -0.04605348, -0.03906796,
-0.0113944 , -0.0318488 , -0.01176713, -0.04728981, -0.01139557,
-0.01032085, -0.02429256, -0.05450579, -0.04662224, -0.04982534,
-0.07490676, -0.06650402, -0.06072304, -0.03004125, -0.06273857,
-0.03457063, -0.00580207, -0.0016215 , -0.02143812, -0.07632421,
-0.01378029, -0.07292905, -0.0883406 , 0.03496349, -0.08809979,
-0.05501696, -0.07620736, -0.0675406 , -0.0245028 , -0.06243804,
-0.08766349, -0.06441812, -0.07074441, -0.07693358, -0.07767762,
-0.00305608, 0.00857855, -0.01002404, -0.06823479, -0.05937157,
-0.0650647 , -0.06624278, -0.04493498, -0.05777024, -0.04969624,
-0.06784492, -0.04617 , -0.0673632 , -0.02127056, -0.06717695,
-0.01669092, -0.0772848 , -0.00140384, -0.00521039, -0.07037175,
-0.01954671, -0.08139505, -0.00493742, -0.06340863, -0.06051 ,
-0.020957 , -0.07684536, -0.10320909, -0.07542219, -0.02146349,
-0.02428293, -0.00908613, -0.02114126, -0.04844717, -0.07506659,
-0.04155113, -0.02611309, -0.0735454 , -0.0500888 , -0.01164846,
-0.07853115, -0.0765461 , 0.06416114, -0.02982672, -0.0457687 ,
-0.01188941, -0.06042792, -0.07090043, -0.0842066 , -0.06803164,
-0.08185688, -0.06315702, -0.10026774, -0.049329 , -0.09501091,
-0.07488233, -0.00600216, -0.07636529, -0.0779068 , -0.01502587,
-0.00658086, -0.02787442, -0.08036617, -0.06611934, 0.00686911,
-0.09045056, -0.01738705, 0.08067761, -0.04652357, -0.05989062,
-0.07276973, -0.04131578, -0.06277292, -0.04539846, -0.03031358,
-0.05429706, -0.06148601, -0.05069135, -0.06731895, -0.07334394,
-0.02559559, -0.08864345, -0.07282297, -0.01017689, -0.0348955 ,
-0.08250091, -0.06658982, -0.00376104, -0.02169728, -0.07390127,
-0.07335062, -0.05235442, -0.00371226, -0.00705077, 0.0043261 ,
-0.04688119, -0.05291606, 0.00359649, -0.06370793, 0.00477501,
-0.04499554, -0.07048909, -0.00963487, -0.07721131, -0.00331341,
-0.00476873, -0.04947318, -0.01665951, -0.08469487, -0.00331149,
-0.06853346, -0.0532644 , -0.06600736, -0.07018627, -0.08656831,
-0.01211547, -0.04298428, -0.06073886, -0.00776081, -0.07503507],
dtype=float32)]
###Markdown
Subsurface Data Analytics Artificial Neural Networks for Prediction in Python Michael Pyrcz, Associate Professor, University of Texas at Austin [Twitter](https://twitter.com/geostatsguy) | [GitHub](https://github.com/GeostatsGuy) | [Website](http://michaelpyrcz.com) | [GoogleScholar](https://scholar.google.com/citations?user=QVZ20eQAAAAJ&hl=en&oi=ao) | [Book](https://www.amazon.com/Geostatistical-Reservoir-Modeling-Michael-Pyrcz/dp/0199731446) | [YouTube](https://www.youtube.com/channel/UCLqEr-xV-ceHdXXXrTId5ig) | [LinkedIn](https://www.linkedin.com/in/michael-pyrcz-61a648a1) | [GeostatsPy](https://github.com/GeostatsGuy/GeostatsPy) Honggeun Jo, Graduate Student, The University of Texas at Austin [LinkedIn](https://www.linkedin.com/in/honggeun-jo/?originalSubdomain=kr) | [Twitter](https://twitter.com/HonggeunJ) PGE 383 Exercise: Artificial Neural Networks for Subsurface Modeling in Python Here's a simple workflow, a demonstration of artificial neural networks for subsurface modeling workflows. This should help you get started with building subsurface models that use data analytics and machine learning. Here are some basic details about neural networks. Neural NetworksA machine learning method for supervised learning for classification and regression analysis. Here are some key aspects of artificial neural networks.**Basic Design** *"...a computing system made up of a number of simple, highly interconnected processing elements, which process information by their dynamic state response to external inputs."* Caudill (1989). **Nature-inspired Computing** based on the neuronal structure in the brain, including many interconnected, simple processing units, known as nodes, that are capable of complicated emergent pattern detection due to a large number of nodes and interconnectivity.**Training and Testing** just like any other predictive model (e.g. linear regression, decision trees and support vector machines) we perform training to fit parameters and testing to tune hyperparameters. Here we observe the error with training and testing datasets, but do not demonstrate tuning of the hyperparameters. **Parameters** are the weights applied to each connection and a bias term applied to each node. For a single node in an artificial neural network, this includes the slope terms, $\beta_i$, and the bias term, $\beta_{0}$.\begin{equation}Y = \sum_{i=1}^m \beta_i X_i + \beta_0\end{equation}it can be seen that the number of parameters increases rapidly as we increase the number of nodes and the connectivity between the nodes.**Layers** the typical artificial neural net is structured with an **input layer**, with one node for each $m$ predictor feature, $X_1,\ldots,X_m$. There is an **output layer**, with one node for each $r$ response feature, $Y_1,\ldots,Y_r$. There may be one or more layers of nodes between the input and output layers, known as **hidden layer(s)**. **Connections** are the linkages between the nodes in adjacent layers. For example, in a fully connected artificial neural network, all the input nodes are connected to all of the nodes in the first layer, then all of the nodes in the first layer are connected to the next layer and so forth. Each connection includes a weight parameter as indicated above.**Nodes** receive the weighted signal from the connected previous layer nodes, sum and then apply this result to the **activation** function in the node. Some example activation functions include:* **Binary** the node fires or not. 
This is represented by a Heaviside step function.* **Identity** the input is passed to the output $f(x) = x$* **Linear** the node passes a signal that increases linearly with the weighted input.* **Logistic** also known as sigmoid or soft step $f(x) = \frac{1}{1+e^{-x}}$the node output is the nonlinear activation function applied to the linearly weighted inputs. This is fed to all nodes in the next layer.**Training Cycles** - the presentation of a batch of data, forward application of the current prediction model to make estimates, calculation of error and then backpropagation of error to correct the artificial neural network parameters to reduce the error over all of the batches.**Batch** is the set of training data for each training cycle of forward prediction and back propagation of error, drawn to train for each iteration. There is a trade-off: a larger batch results in more computational time per iteration, but a more accurate estimate of the error to adjust the weights. Smaller batches result in a noisier estimate of the error, but faster epochs; this results in faster learning and even possibly more robust models.**Epochs** - a set of training cycles, batches covering all available training data. **Local Minimums** - if one calculated the error hypersurface over the range of model parameters it would be hyperparabolic; there is a global minimum error solution. But this error hypersurface is rough and it is possible to be stuck in a local minimum. **Learning Rate** and **Momentum** coefficients are introduced to avoid getting stuck in local minimums.* **Momentum** is a hyperparameter to control the use of information from the weight update over the last epoch for consideration in the current epoch. This can be accomplished with an update vector, $v_i$, and a momentum parameter, $\alpha$, to calculate the current weight update, $v_{i+1}$, given the new update $\theta_{i+1}$.\begin{equation}v_{i+1} = \alpha v_i + \theta_{i+1}\end{equation}* **Learning Rate** is a hyperparameter that controls the adjustment of the weights in response to the gradient indicated by backpropagation of error Applications to subsurface modelingWe demonstrate the estimation of normal score transformed porosity from depth. This would be useful for building a vertical trend model. * modeling the complicated relationship between porosity and depth. Limitations of Neural Network EstimationSince we demonstrate the use of an artificial neural network to estimate porosity from sparsely sampled data over depth, we should comment on limitations of our artificial neural networks for this estimation problem:* does not honor the well data* does not honor the histogram of the data* does not honor spatial correlation * does not honor the multivariate relationship* generally low interpretability models* requires a large number of data for effective training* high model complexity with high model variance Workflow GoalsLearn the basics of machine learning in Python to predict subsurface features. This includes:* Loading and visualizing sample data* Trying out neural nets Objective In the PGE 383: Subsurface Machine Learning Class, I want to provide hands-on experience with building subsurface modeling workflows. Python provides an excellent vehicle to accomplish this. I have coded a package called GeostatsPy with GSLIB: Geostatistical Library (Deutsch and Journel, 1998) functionality that provides basic building blocks for building subsurface modeling workflows. 
The objective is to remove the hurdles of subsurface modeling workflow construction by providing building blocks and sufficient examples. This is not a coding class per se, but we need the ability to 'script' workflows working with numerical methods. Getting StartedHere are the steps to get set up in Python with the GeostatsPy package:1. Install Anaconda 3 on your machine (https://www.anaconda.com/download/). 2. From Anaconda Navigator (within the Anaconda3 group), go to the environment tab, click on the base (root) green arrow and open a terminal. 3. In the terminal type: pip install geostatspy. 4. Open Jupyter and in the top block get started by copying and pasting the code block below from this Jupyter Notebook to start using the geostatspy functionality. You will need to copy the data file to your working directory. It is available here:* Tabular data - 12_sample_data.csv found [here](https://github.com/GeostatsGuy/GeoDataSets/blob/master/12_sample_data.csv).There are examples below with these functions. You can go here to see a list of the available functions, https://git.io/fh4eX, other example workflows and source code. Import Required PackagesWe will also need some standard packages. These should have been installed with Anaconda 3.
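To make the node and momentum equations above concrete, here is a minimal sketch of a single node's forward pass with a logistic activation and one momentum-style weight update; all input values, weights, and the gradient update are illustrative assumptions.
```python
import numpy as np

def logistic(x):  # sigmoid / soft step activation
    return 1.0 / (1.0 + np.exp(-x))

X = np.array([0.5, -1.2, 2.0])     # signals from the previous layer (assumed)
beta = np.array([0.1, 0.4, -0.3])  # connection weights (assumed)
beta0 = 0.05                       # bias term (assumed)

node_output = logistic(np.dot(beta, X) + beta0)  # weighted sum, then activation

alpha = 0.9    # momentum parameter (assumed)
v = 0.0        # previous update vector
theta = -0.02  # new gradient-based update (assumed)
v = alpha*v + theta  # momentum update: v_{i+1} = alpha*v_i + theta_{i+1}
```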
###Code
import geostatspy.GSLIB as GSLIB # GSLIB utilities, visualization and wrapper
import geostatspy.geostats as geostats # GSLIB methods converted to Python
###Output
_____no_output_____
###Markdown
We will also need some standard packages. These should have been installed with Anaconda 3.
###Code
import numpy as np # ndarrys for gridded data
import pandas as pd # DataFrames for tabular data
import os # set working directory, run executables
import matplotlib.pyplot as plt # for plotting
import seaborn as sns # for plotting
import warnings # suppress warnings from seaborn pairplot
from sklearn.model_selection import train_test_split # train / test DataFrame split
###Output
_____no_output_____
###Markdown
We will also need the following packages to train and test our artificial neural nets:* Tensorflow - open source machine learning * Keras - high level application programming interface (API) to build and train modelsMore information is available at [tensorflow install](https://www.tensorflow.org/install).* This workflow was designed with tensorflow version 1.13.1 and does not work with version 2.0.0 alpha. To check your current version of tensorflow you can run the next block of code.
###Code
import tensorflow as tf
tf.__version__ # check the installed version of tensorflow
###Output
_____no_output_____
###Markdown
Let's import all of the tensorflow and keras methods that we will need in our workflow.
###Code
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation, Dropout
#from keras.models import Sequential, load_model
#from keras.layers.core import Dense, Dropout, Activation
from tensorflow.keras.optimizers import Adam
from tensorflow.python.keras import backend as k
###Output
_____no_output_____
###Markdown
Set the working directoryI always like to do this so I don't lose files and to simplify subsequent read and writes (avoid including the full address each time).
###Code
os.chdir("c:/PGE383")
###Output
_____no_output_____
###Markdown
Loading DataLet's load the provided multivariate, spatial dataset '12_sample_data.csv'. It is a comma delimited file with: * Depth ($m$)* Normal Score Porosity It is common to transform properties to standard normal for geostatistical workflows.We load it with the pandas 'read_csv' function into a data frame we called 'df' and then preview it to make sure it loaded correctly.**Python Tip: using functions from a package** just type the label for the package that we declared at the beginning:```pythonimport pandas as pd```so we can access the pandas function 'read_csv' with the command: ```pythonpd.read_csv()```but read_csv has required input parameters. The essential one is the name of the file. For our circumstance all the other default parameters are fine. If you want to see all the possible parameters for this function, just go to the docs [here](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html). * The docs are always helpful* There is often a lot of flexibility for Python functions, possible through using various input parameters. Also, the program has an output, a pandas DataFrame loaded from the data. So we have to specify the name / variable representing that new object.```pythondf = pd.read_csv("1D_Porosity.csv") ```Let's run this command to load the data and then look at the resulting DataFrame to ensure that we loaded it.
###Code
df2 = pd.read_csv("1D_Porosity.csv") # read a .csv file in as a DataFrame
df2.head() # display the first 5 samples in the table as a preview
###Output
_____no_output_____
###Markdown
Data NormalizationWe must normalize the features before we apply them in an artificial neural network model. These are the motivations for this normalization:* remove the impact of scale of different types of data (i.e., depth varies between $[0,10]$, but porosity only varies between $[-3,3.0]$).* activation functions in artificial neural networks are designed to be more sensitive to values of nodes closer to 0.0 (i.e., results in higher gradients and improves backpropagation in training)Let's normalize each feature. * We apply the min max normalization by-hand to force both the predictor and response features to be bound to $[-1,1]$.* It is easy to backtransform given we keep track of the original min and max values
###Code
depth_min = df2['Depth'].values.min(); depth_max = df2['Depth'].values.max()
Npor_min = df2['Nporosity'].values.min(); Npor_max = df2['Nporosity'].values.max()
df2['norm_Depth'] = (df2['Depth'] - depth_min)/(depth_max - depth_min) * 2 - 1
df2['norm_Nporosity'] = (df2['Nporosity'] - Npor_min)/(Npor_max - Npor_min) * 2 - 1
df2.head()
###Output
_____no_output_____
###Markdown
It is also a good idea to check the summary statistics. * All normalized features should now range from -1.0 to 1.0
###Code
df2.describe().transpose()
###Output
_____no_output_____
###Markdown
Separation of Training and Testing DataWe also need to split our data into training / testing datasets so that we:* can train our artificial neural networks using the training data * while testing their performance with the withheld testing (validation) data.
###Code
X2 = df2.iloc[:,[0,2]] # extract the predictor feature - depth (original and normalized)
y2 = df2.iloc[:,[1,3]] # extract the response feature - normal score porosity (original and normalized)
X2_train, X2_test, y2_train, y2_test = train_test_split(X2, y2, test_size=0.2, random_state=73073)
###Output
_____no_output_____
###Markdown
Visualize the DatasetLet's visualize the train and test data split on a single scatter plot.* we want to make sure it is fair* ensure that the test samples are not clustered or too far away from the training data.We will look at the original data and normalized data, the input to the neural network.
###Code
plt.subplot(211)
plt.plot(X2_train['Depth'].values,y2_train['Nporosity'].values, 'o', markerfacecolor='red', markeredgecolor='black', alpha=0.4, label = "Train")
plt.plot(X2_test['Depth'].values,y2_test['Nporosity'].values, 'o', markerfacecolor='blue', markeredgecolor='black', alpha=0.4, label = "Test")
plt.title('Standard Normal Porosity vs. Depth')
plt.xlabel('Z (m)')
plt.ylabel('NPorosity')
plt.legend()
plt.subplots_adjust(left=0.0, bottom=0.0, right=2.0, top=1.0, wspace=0.2, hspace=0.2)
plt.subplot(212)
plt.plot(X2_train['norm_Depth'].values,y2_train['norm_Nporosity'].values, 'o', markerfacecolor='red', markeredgecolor='black', alpha=0.4, label = "Train")
plt.plot(X2_test['norm_Depth'].values,y2_test['norm_Nporosity'].values, 'o', markerfacecolor='blue', markeredgecolor='black', alpha=0.4, label = "Test")
plt.title('Normalized Standard Normal Porosity vs. Normalized Depth')
plt.xlabel('Normalized Z')
plt.ylabel('Normalized NPorosity')
plt.legend()
plt.subplots_adjust(left=0.0, bottom=0.0, right=2.0, top=1.5, wspace=0.2, hspace=0.3)
plt.show()
###Output
_____no_output_____
###Markdown
Specify the Prediction LocationsGiven this training and testing data, let's specify the prediction locations over the range of the observed depths at regularly spaced $nbins$ locations.
###Code
# Specify the prediction locations
nbins = 1000
depth_bins = np.linspace(depth_min, depth_max, nbins) # set the bins for prediction
norm_depth_bins = (depth_bins-depth_min)/(depth_max-depth_min)*2-1 # use normalized bins
###Output
_____no_output_____
###Markdown
Build and Train a Simple Neural NetworkFor our first model we will build a simple model with: * 1 predictor feature - depth ($d$)* 1 response feature - normal score porosity ($N\{\phi\}$)we will build a model to predict normal score porosity from depth over all locations in our model $\bf{u} \in AOI$. \begin{equation}N\{\phi(\bf{u})\} = \hat{f} (d(\bf{u}))\end{equation}and use this model to support the prediction of porosity between the wells. Design, Train and Test a Neural NetworkIn the following code we use keras / tensorflow to:1. **Design the Network** - we use a fully connected, feed forward neural network with one node in the input and output layers, to receive the normalized depth and output the normalized (normal score) porosity. We found by trial and error, given the complexity of the dataset, that we required a significant network width (about 500 nodes) and a network depth of at least 4 hidden layers. 2. **Select the Optimizer** - we selected the adam optimizer (Kingma and Ba, 2015). This optimizer is computationally efficient and is well suited to problems with noisy data and sparse gradients. It is an extension of stochastic gradient descent with the addition of an adaptive gradient algorithm that calculates per-parameter learning rates to improve learning with sparse gradients, and root mean square propagation that sets the learning rate based on the recent magnitudes of the gradients for each parameter to improve performance with noisy data. We include stochastic gradient descent for experimentation. * we found a learning rate of 0.01 to 0.001 works well * we found the rate of decay parameters of $\beta_1=0.9$ and $\beta_2=0.999$ performed well 3. **Compile the Machine** - specify the optimizer, loss function and the metric for model training. 4. **Train the Network** - fit / train the model parameters with a specified batch size over a specified number of epochs. We specify the train and test normalized predictor and response features.Then we visualize the model in the original units.
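For reference, since we rely on the adam optimizer below, a standard statement of its update (Kingma and Ba, 2015), not part of the original write-up, maintains exponentially decaying averages of the gradient $g_t$ and its square, using the rate of decay parameters $\beta_1$ and $\beta_2$ noted above:\begin{equation}m_t = \beta_1 m_{t-1} + (1-\beta_1) g_t, \quad v_t = \beta_2 v_{t-1} + (1-\beta_2) g_t^2\end{equation}\begin{equation}\hat{m}_t = \frac{m_t}{1-\beta_1^t}, \quad \hat{v}_t = \frac{v_t}{1-\beta_2^t}, \quad \theta_t = \theta_{t-1} - \frac{\eta \, \hat{m}_t}{\sqrt{\hat{v}_t} + \epsilon}\end{equation}where $\eta$ is the learning rate and $\epsilon$ is a small constant for numerical stability.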
###Code
# Design the neural network
model_2 = Sequential([
Dense(1, activation='linear', input_shape=(1,)), # input layer
Dense(50, activation='relu'),
Dense(50, activation='relu'),
Dense(50, activation='relu'),
Dense(20, activation='relu'), # uncomment the layers below to add more hidden layers
# Dense(20, activation='relu'),
# Dense(20, activation='relu'),
# Dense(20, activation='relu'),
Dense(1, activation='linear'), # output layer
])
# Select the Optimizer
adam = Adam(lr=0.003, beta_1=0.9, beta_2=0.999, epsilon=None, decay=0.0, amsgrad=False) # adam optimizer
#sgd = keras.optimizers.SGD(lr=0.001, momentum=0.0, decay = 0.0, nesterov=False) # stochastic gradient descent
# Compile the Machine
model_2.compile(optimizer=adam,loss='mse',metrics=['accuracy'])
# Train the Network
hist_2 = model_2.fit(X2_train['norm_Depth'], y2_train['norm_Nporosity'],
batch_size=5, epochs=200,
validation_data=(X2_test['norm_Depth'], y2_test['norm_Nporosity']),verbose = 0)
# Predict with the Network
pred_norm_Nporosity = model_2.predict(np.array(norm_depth_bins)) # predict with our ANN
pred_Nporosity = ((pred_norm_Nporosity + 1)/2*(Npor_max - Npor_min)+Npor_min) # back-transform to original units
# Plot the Model Predictions
plt.subplot(1,1,1)
plt.plot(depth_bins,pred_Nporosity,'black',linewidth=2)
plt.plot(X2_train['Depth'].values,y2_train['Nporosity'].values, 'o', markerfacecolor='red', markeredgecolor='black', alpha=1.0, label = "Train")
plt.plot(X2_test['Depth'].values,y2_test['Nporosity'].values, 'o', markerfacecolor='blue', markeredgecolor='black', alpha=1.0, label = "Test")
plt.xlabel('Depth (m)')
plt.ylabel('Porosity (fraction)')
plt.legend()
plt.subplots_adjust(left=0.0, bottom=0.0, right=2.0, top=0.8, wspace=0.2, hspace=0.2)
plt.show()
###Output
_____no_output_____
###Markdown
Evaluation of the ModelFor my specified artificial neural network design and optimization parameters I have a very flexible model to fit the data.* artificial neural networks live up to their designation as **Universal Function Approximators**Let's check the training curve, the loss function for our model over the training and testing datasets.* **square loss** ($L_2$ loss) is:\begin{equation}L_2 = \sum_{\bf{u}_{\alpha} \in AOI} \left(y(\bf{u}_{\alpha}) - \hat{f}\left(x_1(\bf{u}_{\alpha}),\ldots,x_m(\bf{u}_{\alpha})\right)\right)^2\end{equation}* this is a measure of the inaccuracy over the available dataWe can see the progress of the model over epochs in the reduction of training and testing error.* we can observe that the model matches the training data after about 200 epochs, but continues to improve up to 1,000 epochs
###Code
# Plot the Loss vs. Training Epoch
plt.subplot(1,1,1)
plt.plot(hist_2.history['loss'])
plt.plot(hist_2.history['val_loss'])
plt.title('Loss function of Artificial Neural Net Example #2')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper right')
plt.ylim(0,0.4)
plt.grid()
plt.tight_layout()
plt.subplots_adjust(left=0.0, bottom=0.0, right=2.0, top=0.8, wspace=0.2, hspace=0.2)
plt.show()
###Output
_____no_output_____
###Markdown
Some ObservationsWe performed a set of experiments and made the following observations that may help you experiment with the design and training of this artificial neural network. Each machine was trained over 1000 epochs with a batch size of 5. For a 1 $\times$ 100 $\times$ 100 $\times$ 1 neural network:* Learning Rate of 0.001 still converging at 1000 epochs, missing some features* Learning Rate of 0.1 - stuck in a local minimum as a step functionFor a 1 $\times$ 500 $\times$ 500 $\times$ 1 neural network:* Learning Rate of 0.01 - 0.001 for a close fit to training data* Learning Rate of 0.1 - stuck in a local minimum as a line* Learning Rate of $\le$ 0.0001 still converging at 1000 epochs, missing some featuresFor a 1 $\times$ 500 $\times$ 500 $\times$ 500 $\times$ 1 neural network:* Learning Rate of 0.01 - 0.001 for a close fit to training data* Learning Rate of 0.1 - stuck as a line* Learning Rate of $\le$ 0.0001 still converging at 1000 epochs, missing some features Visualizing the Neural NetThere are some methods available to interrogate the artificial neural net.* neural net summary* weightsHere's the summary from our neural net. It lists, by layer, the number of nodes and number of parameters.
###Code
model_2.summary() # artificial neural network design and number of parameters
###Output
Model: "sequential_4"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_12 (Dense) (None, 1) 2
_________________________________________________________________
dense_13 (Dense) (None, 50) 100
_________________________________________________________________
dense_14 (Dense) (None, 50) 2550
_________________________________________________________________
dense_15 (Dense) (None, 1) 51
=================================================================
Total params: 2,703
Trainable params: 2,703
Non-trainable params: 0
_________________________________________________________________
###Markdown
We can also see the actual trained weights for each node in each layer.
###Code
for layer in model_2.layers: # weights for the trained artificial neural network
g = layer.get_config()
h = layer.get_weights()
print(g)
print(h)
print('\n')
###Output
{'name': 'dense', 'trainable': True, 'batch_input_shape': (None, 1), 'dtype': 'float32', 'units': 1, 'activation': 'linear', 'use_bias': True, 'kernel_initializer': {'class_name': 'GlorotUniform', 'config': {'seed': None}}, 'bias_initializer': {'class_name': 'Zeros', 'config': {}}, 'kernel_regularizer': None, 'bias_regularizer': None, 'activity_regularizer': None, 'kernel_constraint': None, 'bias_constraint': None}
[array([[-0.6884202]], dtype=float32), array([0.1548591], dtype=float32)]
{'name': 'dense_1', 'trainable': True, 'dtype': 'float32', 'units': 1, 'activation': 'relu', 'use_bias': True, 'kernel_initializer': {'class_name': 'GlorotUniform', 'config': {'seed': None}}, 'bias_initializer': {'class_name': 'Zeros', 'config': {}}, 'kernel_regularizer': None, 'bias_regularizer': None, 'activity_regularizer': None, 'kernel_constraint': None, 'bias_constraint': None}
[array([[-1.1786897]], dtype=float32), array([-0.15738262], dtype=float32)]
{'name': 'dense_2', 'trainable': True, 'dtype': 'float32', 'units': 1, 'activation': 'linear', 'use_bias': True, 'kernel_initializer': {'class_name': 'GlorotUniform', 'config': {'seed': None}}, 'bias_initializer': {'class_name': 'Zeros', 'config': {}}, 'kernel_regularizer': None, 'bias_regularizer': None, 'activity_regularizer': None, 'kernel_constraint': None, 'bias_constraint': None}
[array([[1.4482979]], dtype=float32), array([-0.16542506], dtype=float32)]
###Markdown
Subsurface Data Analytics Artificial Neural Networks for Prediction in Python Michael Pyrcz, Associate Professor, University of Texas at Austin [Twitter](https://twitter.com/geostatsguy) | [GitHub](https://github.com/GeostatsGuy) | [Website](http://michaelpyrcz.com) | [GoogleScholar](https://scholar.google.com/citations?user=QVZ20eQAAAAJ&hl=en&oi=ao) | [Book](https://www.amazon.com/Geostatistical-Reservoir-Modeling-Michael-Pyrcz/dp/0199731446) | [YouTube](https://www.youtube.com/channel/UCLqEr-xV-ceHdXXXrTId5ig) | [LinkedIn](https://www.linkedin.com/in/michael-pyrcz-61a648a1) | [GeostatsPy](https://github.com/GeostatsGuy/GeostatsPy) Honggeun Jo, Graduate Student, The University of Texas at Austin [LinkedIn](https://www.linkedin.com/in/honggeun-jo/?originalSubdomain=kr) | [Twitter](https://twitter.com/HonggeunJ) PGE 383 Exercise: Artificial Neural Networks for Subsurface Modeling in Python Here's a simple workflow, a demonstration of artificial neural networks for subsurface modeling workflows. This should help you get started with building subsurface models that use data analytics and machine learning. Here are some basic details about neural networks. Neural NetworksA machine learning method for supervised learning for classification and regression analysis. Here are some key aspects of artificial neural networks.**Basic Design** *"...a computing system made up of a number of simple, highly interconnected processing elements, which process information by their dynamic state response to external inputs."* Caudill (1989). **Nature-inspired Computing** based on the neuronal structure in the brain, including many interconnected, simple processing units, known as nodes, that are capable of complicated emergent pattern detection due to a large number of nodes and interconnectivity.**Training and Testing** just like any other predictive model (e.g. linear regression, decision trees and support vector machines) we perform training to fit parameters and testing to tune hyperparameters. Here we observe the error with training and testing datasets, but do not demonstrate tuning of the hyperparameters. **Parameters** are the weights applied to each connection and a bias term applied to each node. For a single node in an artificial neural network, this includes the slope terms, $\beta_i$, and the bias term, $\beta_{0}$.\begin{equation}Y = \sum_{i=1}^m \beta_i X_i + \beta_0\end{equation}it can be seen that the number of parameters increases rapidly as we increase the number of nodes and the connectivity between the nodes.**Layers** the typical artificial neural net is structured with an **input layer**, with one node for each $m$ predictor feature, $X_1,\ldots,X_m$. There is an **output layer**, with one node for each $r$ response feature, $Y_1,\ldots,Y_r$. There may be one or more layers of nodes between the input and output layers, known as **hidden layer(s)**. **Connections** are the linkages between the nodes in adjacent layers. For example, in a fully connected artificial neural network, all the input nodes are connected to all of the nodes in the first layer, then all of the nodes in the first layer are connected to the next layer and so forth. Each connection includes a weight parameter as indicated above.**Nodes** receive the weighted signal from the connected previous layer nodes, sum and then apply this result to the **activation** function in the node. Some example activation functions include:* **Binary** the node fires or not. 
This is represented by a Heaviside step function.* **Identity** the input is passed to the output $f(x) = x$* **Linear** the node passes a signal that increases linearly with the weighted input.* **Logistic** also known as sigmoid or soft step $f(x) = \frac{1}{1+e^{-x}}$the node output is the nonlinear activation function applied to the linearly weighted inputs. This is fed to all nodes in the next layer.**Training Cycles** - the presentation of a batch of data, forward application of the current prediction model to make estimates, calculation of error and then backpropagation of error to correct the artificial neural network parameters to reduce the error over all of the batches.**Batch** is the set of training data for each training cycle of forward prediction and back propagation of error, drawn to train for each iteration. There is a trade-off: a larger batch results in more computational time per iteration, but a more accurate estimate of the error to adjust the weights. Smaller batches result in a noisier estimate of the error, but faster epochs; this results in faster learning and even possibly more robust models.**Epochs** - a set of training cycles, batches covering all available training data. **Local Minimums** - if one calculated the error hypersurface over the range of model parameters it would be hyperparabolic; there is a global minimum error solution. But this error hypersurface is rough and it is possible to be stuck in a local minimum. **Learning Rate** and **Momentum** coefficients are introduced to avoid getting stuck in local minimums.* **Momentum** is a hyperparameter to control the use of information from the weight update over the last epoch for consideration in the current epoch. This can be accomplished with an update vector, $v_i$, and a momentum parameter, $\alpha$, to calculate the current weight update, $v_{i+1}$, given the new update $\theta_{i+1}$.\begin{equation}v_{i+1} = \alpha v_i + \theta_{i+1}\end{equation}* **Learning Rate** is a hyperparameter that controls the adjustment of the weights in response to the gradient indicated by backpropagation of error Applications to subsurface modelingWe demonstrate the estimation of normal score transformed porosity from depth. This would be useful for building a vertical trend model. * modeling the complicated relationship between porosity and depth. Limitations of Neural Network EstimationSince we demonstrate the use of an artificial neural network to estimate porosity from sparsely sampled data over depth, we should comment on limitations of our artificial neural networks for this estimation problem:* does not honor the well data* does not honor the histogram of the data* does not honor spatial correlation * does not honor the multivariate relationship* generally low interpretability models* requires a large number of data for effective training* high model complexity with high model variance Workflow GoalsLearn the basics of machine learning in Python to predict subsurface features. This includes:* Loading and visualizing sample data* Trying out neural nets Objective In the PGE 383: Subsurface Machine Learning Class, I want to provide hands-on experience with building subsurface modeling workflows. Python provides an excellent vehicle to accomplish this. I have coded a package called GeostatsPy with GSLIB: Geostatistical Library (Deutsch and Journel, 1998) functionality that provides basic building blocks for building subsurface modeling workflows. 
The objective is to remove the hurdles of subsurface modeling workflow construction by providing building blocks and sufficient examples. This is not a coding class per se, but we need the ability to 'script' workflows working with numerical methods. Getting StartedHere are the steps to get set up in Python with the GeostatsPy package:1. Install Anaconda 3 on your machine (https://www.anaconda.com/download/). 2. From Anaconda Navigator (within the Anaconda3 group), go to the environment tab, click on the base (root) green arrow and open a terminal. 3. In the terminal type: pip install geostatspy. 4. Open Jupyter and in the top block get started by copying and pasting the code block below from this Jupyter Notebook to start using the geostatspy functionality. You will need to copy the data file to your working directory. It is available here:* Tabular data - 12_sample_data.csv found [here](https://github.com/GeostatsGuy/GeoDataSets/blob/master/12_sample_data.csv).There are examples below with these functions. You can go here to see a list of the available functions, https://git.io/fh4eX, other example workflows and source code. Import Required PackagesWe will also need some standard packages. These should have been installed with Anaconda 3.
###Code
import geostatspy.GSLIB as GSLIB # GSLIB utilities, visualization and wrapper
import geostatspy.geostats as geostats # GSLIB methods converted to Python
###Output
_____no_output_____
###Markdown
We will also need some standard packages. These should have been installed with Anaconda 3.
###Code
import numpy as np # ndarrys for gridded data
import pandas as pd # DataFrames for tabular data
import os # set working directory, run executables
import matplotlib.pyplot as plt # for plotting
import seaborn as sns # for plotting
import warnings # suppress warnings from seaborn pairplot
from sklearn.model_selection import train_test_split # train / test DataFrame split
###Output
_____no_output_____
###Markdown
We will also need the following packages to train and test our artificial neural nets:* Tensorflow - open source machine learning, with the Keras module - a high level application programming interface (API) to build and train modelsMore information is available at [tensorflow install](https://www.tensorflow.org/install).* This workflow was designed with tensorflow version > 2.0. To import tensorflow, set the memory growth, and check your current version of tensorflow, you can run the next block of code.
###Code
import tensorflow as tf
physical_devices = tf.config.list_physical_devices('GPU')
tf.config.experimental.set_memory_growth(physical_devices[0], True)
tf.__version__ # check the installed version of tensorflow
###Output
_____no_output_____
###Markdown
Let's import all of the tensorflow and keras methods that we will need in our workflow.
###Code
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation, Dropout
#from keras.models import Sequential, load_model
#from keras.layers.core import Dense, Dropout, Activation
from tensorflow.keras.optimizers import Adam
from tensorflow.python.keras import backend as k
###Output
_____no_output_____
###Markdown
Set the working directoryI always like to do this so I don't lose files and to simplify subsequent read and writes (avoid including the full address each time).
###Code
os.chdir("c:/PGE383")
###Output
_____no_output_____
###Markdown
Loading DataLet's load the provided multivariate, spatial dataset '12_sample_data.csv'. It is a comma delimited file with: * Depth ($m$)* Normal Score Porosity It is common to transform properties to standard normal for geostatistical workflows.We load it with the pandas 'read_csv' function into a data frame we called 'df' and then preview it to make sure it loaded correctly.**Python Tip: using functions from a package** just type the label for the package that we declared at the beginning:```pythonimport pandas as pd```so we can access the pandas function 'read_csv' with the command: ```pythonpd.read_csv()```but read_csv has required input parameters. The essential one is the name of the file. For our circumstance all the other default parameters are fine. If you want to see all the possible parameters for this function, just go to the docs [here](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html). * The docs are always helpful* There is often a lot of flexibility for Python functions, possible through using various input parameters. Also, the program has an output, a pandas DataFrame loaded from the data. So we have to specify the name / variable representing that new object.```pythondf = pd.read_csv("1D_Porosity.csv") ```Let's run this command to load the data and then look at the resulting DataFrame to ensure that we loaded it.
###Code
df2 = pd.read_csv("1D_Porosity.csv") # read a .csv file in as a DataFrame
df2.head() # display the first 5 samples in the table as a preview
###Output
_____no_output_____
###Markdown
Data NormalizationWe must normalize the features before we apply them in an artificial neural network model. These are the motivations for this normalization:* remove the impact of scale of different types of data (i.e., depth varies between $[0,10]$, but porosity only varies between $[-3,3.0]$).* activation functions in artificial neural networks are designed to be more sensitive to values of nodes closer to 0.0 (i.e., results in higher gradients and improves backpropagation in training)Let's normalize each feature. * We apply the min max normalization by-hand to force both the predictor and response features to be bound to $[-1,1]$.* It is easy to backtransform given we keep track of the original min and max values
###Code
depth_min = df2['Depth'].values.min(); depth_max = df2['Depth'].values.max()
Npor_min = df2['Nporosity'].values.min(); Npor_max = df2['Nporosity'].values.max()
df2['norm_Depth'] = (df2['Depth'] - depth_min)/(depth_max - depth_min) * 2 - 1
df2['norm_Nporosity'] = (df2['Nporosity'] - Npor_min)/(Npor_max - Npor_min) * 2 - 1
df2.head()
###Output
_____no_output_____
###Markdown
It is also a good idea to check the summary statistics. * All normalized features should now range from -1.0 to 1.0
###Code
df2.describe().transpose()
###Output
_____no_output_____
###Markdown
Separation of Training and Testing DataWe also need to split our data into training / testing datasets so that we:* can train our artificial neural networks using the training data * while testing their performance with the withheld testing (validation) data.
###Code
X2 = df2.iloc[:,[0,2]] # extract the predictor feature - depth (original and normalized)
y2 = df2.iloc[:,[1,3]] # extract the response feature - normal score porosity (original and normalized)
X2_train, X2_test, y2_train, y2_test = train_test_split(X2, y2, test_size=0.2, random_state=73073)
###Output
_____no_output_____
|
concepts/Data Structures/02 Stacks/07 Minimum bracket reversals.ipynb
|
###Markdown
Problem StatementGiven an input string consisting of only `{` and `}`, figure out the minimum number of reversals required to make the brackets balanced. For example:* For `input_string = "}}}}`, the number of reversals required is `2`.* For `input_string = "}{}}`, the number of reversals required is `1`.If the brackets cannot be balanced, return `-1` to indicate that it is not possible to balance them.
###Code
class LinkedListNode:
def __init__(self, data):
self.data = data
self.next = None
class Stack:
def __init__(self):
self.num_elements = 0
self.head = None
def push(self, data):
new_node = LinkedListNode(data)
if self.head is None:
self.head = new_node
else:
new_node.next = self.head
self.head = new_node
self.num_elements += 1
def pop(self):
if self.is_empty():
return None
temp = self.head.data
self.head = self.head.next
self.num_elements -= 1
return temp
def top(self):
if self.head is None:
return None
return self.head.data
def size(self):
return self.num_elements
def is_empty(self):
return self.num_elements == 0
def minimum_bracket_reversals(input_string):
"""
Calculate the number of reversals to fix the brackets
Args:
input_string(string): Strings to be used for bracket reversal calculation
Returns:
int: Number of bracket reversals needed
"""
# TODO: Write function here
pass
def test_function(test_case):
input_string = test_case[0]
expected_output = test_case[1]
output = minimum_bracket_reversals(input_string)
if output == expected_output:
print("Pass")
else:
print("Fail")
test_case_1 = ["}}}}", 2]
test_function(test_case_1)
test_case_2 = ["}}{{", 2]
test_function(test_case_2)
test_case_3 = ["{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{}}}}}", 13]
test_function(test_case_3)
test_case_4 = ["}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{", 2]
test_function(test_case_4)
test_case_5 = ["}}{}{}{}{}{}{}{}{}{}{}{}{}{}{}", 1]
test_function(test_case_5)
###Output
_____no_output_____
###Markdown
Hide Solution
###Code
def minimum_bracket_reversals(input_string):
if len(input_string) % 2 == 1: # an odd-length string can never be balanced
return -1
stack = Stack()
count = 0
for bracket in input_string:
if stack.is_empty():
stack.push(bracket)
else:
top = stack.top()
if top != bracket:
if top == '{': # '{' followed by '}' is a matched pair, discard it
stack.pop()
continue
stack.push(bracket) # keep unmatched brackets on the stack
ls = list()
while not stack.is_empty(): # the stack now holds only unmatched brackets
first = stack.pop()
second = stack.pop()
ls.append(first)
ls.append(second)
if first == '}' and second == '}': # a '}}' pair needs one reversal
count += 1
elif first == '{' and second == '}': # a '}{' pair (in string order) needs two reversals
count += 2
elif first == '{' and second == '{': # a '{{' pair needs one reversal
count += 1
return count
###Output
_____no_output_____
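###Markdown
Running the provided test cases against this solution is a quick sanity check (expected values come from the test cases defined above):
###Code
test_function(test_case_1)  # "}}}}" -> 2
test_function(test_case_2)  # "}}{{" -> 2
test_function(test_case_3)  # 31 opening brackets, 5 closing -> 13
test_function(test_case_4)  # alternating "}{" pairs -> 2
test_function(test_case_5)  # "}}" followed by matched pairs -> 1
###Output
_____no_output_____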
|
ML-Python/Week2/submissions/A3/A3_Guru.ipynb
|
###Markdown
Generalized Linear Model4 Features ==> 1 Target. Employs Gradient Descent. It works.
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
x ==> Array of Feature Values; x[:,1] returns a column of training values for a single feature (column 0 will be the prepended bias column of ones). y ==> Result Array: the values the Hypothesis fn should train to.
###Code
train_data = pd.read_csv('train_data.csv',header = 0)
y = train_data["Target"]
y = y.values #convert to ndarray
train_data = train_data.drop("Target", axis=1)
x = train_data.values
x = np.c_[np.ones(x.shape[0]),x]
def calculateCost (HypothesisFn,y,x):
distance = HypothesisFn - y
return (np.sum((distance)**2)/(2*distance.size))
def derivativeOf(HypothesisFn,y,x):
distance = HypothesisFn - y
deriv = np.dot(x.transpose(), distance)
deriv /= y.shape[0]
return deriv
Parameters = np.zeros(5) #initialize parameters
HypothesisFn = np.dot(x,Parameters)
###Output
_____no_output_____
###Markdown
If anyone's wondering why I didn't transpose my Parameters array, one dimensional arrays do not need to be transposed to be multiplied. np.dot figures it out on its own. np.dot is the real bae
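A minimal illustration (made-up numbers; `demo_x` and `demo_p` are just hypothetical names):
###Code
# np.dot with a 1-D array: numpy treats it as a row or column vector as
# needed, so no explicit transpose is required.
demo_x = np.array([[1., 2.], [3., 4.]])  # 2 samples x 2 "features"
demo_p = np.array([0.5, 0.5])            # 1-D parameter array
print(np.dot(demo_x, demo_p))            # [1.5 3.5]
###Output
_____no_output_____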
###Code
LearningRate = 0.1
while True:
Parameters = Parameters - LearningRate*derivativeOf(HypothesisFn,y,x)
HypothesisFn = np.dot(x,Parameters)
cost = calculateCost (HypothesisFn,y,x)
if ( cost <= 10**(-25)):
break
# 10^-25 seems pretty negligible. = ¯\_(ツ)_/¯
print ("\nThe Hypothesis function is {0}".format(Parameters[0])),
for i in range(1,Parameters.size):
print (" + {0}x{1}".format(Parameters[i],i)),
test_input = pd.read_csv('test_input.csv',header = 0)
test_input = test_input.values
test_input = np.c_[np.ones(test_input.shape[0]),test_input]
prediction = np.dot (test_input, Parameters)
test_input = np.delete(test_input, (0), axis=1)
test_input = np.c_[test_input,prediction]
output = pd.DataFrame(data=test_input, columns = ["Variable 1","Variable 2","Variable 3","Variable 4","Prediction"])
output.to_csv('test_output.csv', index=False, header=True, sep=',')
print(output) #Boom shakalaka
###Output
Variable 1 Variable 2 Variable 3 Variable 4 Prediction
0 0.052917 0.315560 0.769792 0.242442 3.314985
1 0.736748 0.774736 0.991972 0.686699 5.876583
2 0.072012 0.649908 0.472070 0.879730 5.022247
3 0.564943 0.859830 0.764108 0.991852 6.293258
4 0.913741 0.827315 0.411412 0.885002 6.162375
5 0.183902 0.559515 0.327523 0.730312 4.576910
6 0.263851 0.968906 0.487362 0.949212 6.248639
7 0.567881 0.254971 0.942230 0.814473 4.263874
8 0.324696 0.158970 0.227014 0.410909 2.994508
9 0.280444 0.200198 0.964338 0.860580 3.951514
10 0.185000 0.651180 0.022720 0.722323 4.725106
11 0.344684 0.843482 0.647393 0.331496 5.231817
12 0.224112 0.570966 0.137944 0.880133 4.743091
13 0.585966 0.794656 0.742884 0.587200 5.603770
14 0.455460 0.199887 0.237061 0.948117 3.873507
15 0.145781 0.976496 0.706553 0.466245 5.693815
16 0.105116 0.285242 0.265898 0.406501 3.242849
17 0.982609 0.911043 0.358867 0.617593 6.126405
18 0.999271 0.755277 0.886075 0.937326 6.263868
19 0.647837 0.140072 0.893775 0.319280 3.339608
20 0.793952 0.369704 0.355454 0.814200 4.535348
21 0.381051 0.409095 0.565238 0.763939 4.390016
22 0.845477 0.787822 0.267364 0.379387 5.314161
23 0.067374 0.751306 0.175546 0.874231 5.203306
24 0.736657 0.071955 0.139063 0.431615 3.006095
25 0.350636 0.682030 0.389500 0.554403 4.892420
26 0.859244 0.642913 0.882985 0.492440 5.269921
27 0.139187 0.800644 0.774105 0.348804 5.025113
28 0.476021 0.076429 0.834784 0.035051 2.646669
29 0.325099 0.658965 0.566500 0.760976 5.128210
30 0.961533 0.838505 0.256368 0.152980 5.276231
31 0.338888 0.877312 0.968079 0.862380 6.114902
32 0.058088 0.553975 0.319993 0.941502 4.722646
33 0.736978 0.852461 0.824838 0.671308 6.030080
34 0.405489 0.969086 0.440965 0.673975 5.996756
35 0.206186 0.528878 0.995515 0.588026 4.608132
36 0.880115 0.573087 0.452686 0.708280 5.145927
37 0.631965 0.321172 0.058512 0.331713 3.554654
38 0.175711 0.510245 0.057679 0.116826 3.556717
39 0.571311 0.679296 0.973027 0.930148 5.747043
40 0.057735 0.968774 0.828177 0.692998 5.933670
41 0.855015 0.099155 0.384450 0.534479 3.405493
42 0.909414 0.607041 0.054595 0.544184 4.904533
43 0.755819 0.766392 0.591364 0.985343 6.056822
44 0.094780 0.330283 0.657516 0.860141 4.094019
45 0.959639 0.916353 0.991050 0.368245 6.092215
46 0.996029 0.020010 0.099648 0.824563 3.490729
47 0.193439 0.621190 0.076266 0.667906 4.594009
48 0.234836 0.464753 0.341920 0.218517 3.701314
49 0.311490 0.920181 0.707429 0.134068 5.233442
50 0.265221 0.954385 0.303192 0.539907 5.628531
51 0.574388 0.863050 0.580298 0.534555 5.676266
52 0.351915 0.775155 0.196954 0.305331 4.800062
53 0.260872 0.519215 0.598623 0.802096 4.708636
54 0.697415 0.687054 0.268074 0.721872 5.309147
55 0.133789 0.918394 0.019873 0.308387 5.019777
56 0.712013 0.727818 0.568780 0.800287 5.670229
57 0.263746 0.758232 0.820299 0.302350 4.945134
58 0.493948 0.613297 0.216240 0.029928 4.069417
59 0.919843 0.714592 0.973155 0.470763 5.549610
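###Markdown
As a cross-check (a sketch, not part of the original submission; `lstsq_params` is a hypothetical name), the same coefficients can be recovered in closed form with ordinary least squares. Since the loop above drives the cost to near zero, the two solutions should agree closely.
###Code
# Closed-form least-squares fit: compare against the gradient-descent
# Parameters found above.
lstsq_params = np.linalg.lstsq(x, y)[0]
print(lstsq_params)
print(Parameters)
###Output
_____no_output_____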
|
Python-For-Data-Analysis/Chapter 3 Basics/3.5 Files and OS.ipynb
|
###Markdown
Files and OS in Python Version 0.0 Use of sys module to get system's default encoding
###Code
import sys
sys.getdefaultencoding()
###Output
_____no_output_____
###Markdown
Files in Python (seek,tell,open,read,write)
###Code
x = open("C:/Users/sanka/Desktop/sanky.txt", "r+")  # open for reading and writing; the file must already exist
x.write("hello how ya doing")  # write starting at position 0
x.seek(12)  # move the file cursor to byte 12
print(x.tell())  # tell() reports the current cursor position
x.seek(0)  # rewind to the start of the file
print(x.readlines())  # read all lines from the cursor onward
x.close()
###Output
_____no_output_____
###Markdown
Use with ... as ... to open files without worrying about closing them
###Code
with open("C:/Users/sanka/Desktop/sanky.txt","w") as f:
f.write("hello how ya doing again")
###Output
_____no_output_____
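###Markdown
Reading the file back (a minimal sketch) confirms the write and shows the same pattern; the handle is closed automatically when the block exits:
###Code
# Read the same file back; no explicit close() is needed inside a with-block
with open("C:/Users/sanka/Desktop/sanky.txt", "r") as f:
    print(f.read())
###Output
_____no_output_____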
|
Chinese_rationality_analasis/training.ipynb
|
###Markdown
Tutorial for Chinese Sentiment analysis with hotel review data Dependencies: Python 3.5, numpy, pickle, keras, tensorflow, [jieba](https://github.com/fxsjy/jieba) Optional for plotting: pylab, scipy
###Code
from os import listdir
from os.path import isfile, join
import jieba
import codecs
from langconv import * # convert Traditional Chinese characters to Simplified Chinese characters
import pickle
import random
from keras.models import Sequential
from keras.layers.embeddings import Embedding
from keras.layers.recurrent import GRU
from keras.preprocessing.text import Tokenizer
from keras.layers.core import Dense
from keras.utils import to_categorical
from keras.preprocessing.sequence import pad_sequences
from keras.callbacks import TensorBoard
###Output
/home/yingshaoxo/.local/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
from ._conv import register_converters as _register_converters
Using TensorFlow backend.
###Markdown
Helper function to pickle and load stuff
###Code
def __pickleStuff(filename, stuff):
save_stuff = open(filename, "wb")
pickle.dump(stuff, save_stuff)
save_stuff.close()
def __loadStuff(filename):
saved_stuff = open(filename,"rb")
stuff = pickle.load(saved_stuff)
saved_stuff.close()
return stuff
###Output
_____no_output_____
###Markdown
Get lists of files, positive and negative files
###Code
dataBaseDirPos = "./Data/positive/"
dataBaseDirNeg = "./Data/negative/"
positiveFiles = [dataBaseDirPos + f for f in listdir(dataBaseDirPos) if isfile(join(dataBaseDirPos, f)) and '.txt' in f]
negativeFiles = [dataBaseDirNeg + f for f in listdir(dataBaseDirNeg) if isfile(join(dataBaseDirNeg, f)) and '.txt' in f]
###Output
_____no_output_____
###Markdown
Show length of samples
###Code
print(len(positiveFiles))
print(len(negativeFiles))
print()
print(positiveFiles)
print(negativeFiles)
###Output
6
4
['./Data/positive/diary.txt', './Data/positive/msgs.txt', './Data/positive/theory.txt', './Data/positive/mind.txt', './Data/positive/drafts.txt', './Data/positive/saying.txt']
['./Data/negative/QQZoneComments.txt', './Data/negative/DuanZi.txt', './Data/negative/SiBuDeJieDianzi.txt', './Data/negative/BilibiliComments.txt']
###Markdown
Have a look at what's in a file (one positive sample)
###Code
filename = positiveFiles[0]
with codecs.open(filename, "r", encoding="utf-8", errors="ignore") as doc_file:
text=doc_file.read()
print(text[:200])
###Output
在这个世界上我能活多久?是空留无一物还是另类?我不知道,也不会去想。
世界总是要我们给予什么,但残酷的命运无情的夺走我们的一切。
时间在这时已停止,只留下一串串时间的印记串联起的文字。
因此才有了这本日记,他是属于自己的,没人偷看。
这是一片自由的天空,任自己遨游,飞跃时间的限制,让我们能在年老的时候说:瞧!这就是青春,我的宝贵时间就是那样过的!
——————————————
天空一如
###Markdown
Test removing stop wordsDemo what it looks like to tokenize the sentence and remove stop words.
###Code
filename = positiveFiles[1]
with codecs.open(filename, "r", encoding="utf-8", errors="ignore") as doc_file:
text=doc_file.read()[:200]
text = text.replace("\n", "")
text = text.replace("\r", "")
print("==Orginal==:\n\r{}".format(text))
stopwords = [ line.rstrip() for line in codecs.open('./Data/chinese_stop_words.txt',"r", encoding="utf-8") ]
seg_list = jieba.cut(text, cut_all=False)
final =[]
seg_list = list(seg_list)
for seg in seg_list:
if seg not in stopwords:
final.append(seg)
print("==Tokenized==\tToken count:{}\n\r{}".format(len(seg_list)," ".join(seg_list)))
print("==Stop Words Removed==\tToken count:{}\n\r{}".format(len(final)," ".join(final)))
###Output
Building prefix dict from the default dictionary ...
Loading model from cache /tmp/jieba.cache
###Markdown
Prepare "doucments", a list of tuplesSome files contain abnormal encoding characters which encoding GB2312 will complain about. Solution: read as bytes then decode as GB2312 line by line, skip lines with abnormal encodings. We also convert any traditional Chinese characters to simplified Chinese characters.
###Code
documents = []
positive_nums = 0
negative_nums = 0
for filename in positiveFiles:
with open(filename, "r", encoding="utf-8", errors="ignore") as f:
text = f.read()
all_text = Converter('zh-hans').convert(text)# Convert from traditional to simplified Chinese
text_list = all_text.split("\n\n——————————————\n\n")
for text in text_list:
#text = text.replace("\n", "")
#text = text.replace("\r", "")
documents.append((text, "pos"))
positive_nums += 1
for filename in negativeFiles:
with open(filename, "r", encoding="utf-8", errors="ignore") as f:
text = f.read()
all_text = Converter('zh-hans').convert(text)# Convert from traditional to simplified Chinese
text_list = all_text.split("\n\n——————————————\n\n")
for text in text_list:
#text = text.replace("\n", "")
#text = text.replace("\r", "")
documents.append((text, "neg"))
negative_nums += 1
print('positive_nums:', positive_nums)
print('negative_nums:', negative_nums)
###Output
positive_nums: 8739
negative_nums: 13422
###Markdown
Optional step to save/load the documents as pickle file
###Code
# Uncomment those two lines to save/load the documents for later use since the step above takes a while
# __pickleStuff("./Data/chinese_sentiment_corpus.p", documents)
# documents = __loadStuff("./Data/chinese_sentiment_corpus.p")
print(len(documents))
print(documents[-4:-1])
###Output
22161
[('每天都做,但还没研究过,现在好了哈哈', 'neg'), ('极限6分钟,四分钟开始全身抖动', 'neg'), ('(=・ω・=)', 'neg')]
###Markdown
shuffle the data
###Code
random.shuffle(documents)
###Output
_____no_output_____
###Markdown
Prepare the input and output for the modelEach input (text sample) will be a list of tokens; the output will be one token ("pos" or "neg"). The stopwords are not removed here since the dataset is relatively small and removing the stop words would not save much training time.
###Code
# Tokenize only
totalX = []
totalY = [str(doc[1]) for doc in documents]
for doc in documents:
seg_list = jieba.cut(doc[0], cut_all=False)
seg_list = list(seg_list)
totalX.append(seg_list)
#Switch to below code to experiment with removing stop words
# Tokenize and remove stop words
# totalX = []
# totalY = [str(doc[1]) for doc in documents]
# stopwords = [ line.rstrip() for line in codecs.open('./Data/chinese_stop_words.txt',"r", encoding="utf-8") ]
# for doc in documents:
# seg_list = jieba.cut(doc[0], cut_all=False)
# seg_list = list(seg_list)
# Uncomment below code to experiment with removing stop words
# final =[]
# for seg in seg_list:
# if seg not in stopwords:
# final.append(seg)
# totalX.append(final)
###Output
_____no_output_____
###Markdown
Visualize distribution of sentence lengthDecide the max input sequence length; here we cover up to 60% of the sentences. The longer the input sequence, the more training time it will take, but it could improve prediction accuracy.
###Code
import numpy as np
import scipy.stats as stats
import pylab as pl
h = sorted([len(sentence) for sentence in totalX])
maxLength = h[int(len(h) * 0.60)]
print("Max length is: ",h[len(h)-1])
print("60% cover length up to: ",maxLength)
h = h[:5000]  # drop the extreme tail so the plot stays readable
fit = stats.norm.pdf(h, np.mean(h), np.std(h))  # fitted normal density for reference
pl.plot(h, fit, '-o')
pl.hist(h, density=True)  # draw a normalized histogram of the data
pl.show()
###Output
Max length is: 2677
60% cover length up to: 16
###Markdown
Words to number tokens, paddingPad each input sequence to the max input length if it is shorter. Save the input tokenizer, since we need to use the same tokenizer for our new prediction data.
###Code
totalX = [" ".join(wordslist) for wordslist in totalX] # Keras Tokenizer expect the words tokens to be seperated by space
input_tokenizer = Tokenizer(30000) # Initial vocab size
input_tokenizer.fit_on_texts(totalX)
vocab_size = len(input_tokenizer.word_index) + 1
print("input vocab_size:",vocab_size)
totalX = np.array(pad_sequences(input_tokenizer.texts_to_sequences(totalX), maxlen=maxLength))
__pickleStuff("./Data/input_tokenizer_chinese.p", input_tokenizer)
###Output
input vocab_size: 44932
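###Markdown
A minimal illustration of the tokenization-to-padding pipeline (a sketch; `demo` is a hypothetical name reusing a sentence from the corpus shown earlier, and the exact integer ids depend on the fitted vocabulary):
###Code
# Tokenize one sentence, map the tokens to integer ids with the fitted
# tokenizer, then left-pad with zeros up to maxLength. Note that
# texts_to_sequences silently drops out-of-vocabulary tokens.
demo = [" ".join(jieba.cut("每天都做,但还没研究过", cut_all=False))]
demo_seq = input_tokenizer.texts_to_sequences(demo)
print(demo_seq)
print(pad_sequences(demo_seq, maxlen=maxLength))
###Output
_____no_output_____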
###Markdown
Output, array of 0s and 1s
###Code
target_tokenizer = Tokenizer(3)
target_tokenizer.fit_on_texts(totalY)
print("output vocab_size:",len(target_tokenizer.word_index) + 1)
totalY = np.array(target_tokenizer.texts_to_sequences(totalY)) -1
totalY = totalY.reshape(totalY.shape[0])
totalY[40:50]
###Output
_____no_output_____
###Markdown
Turn output 0s and 1s to categories(one-hot vectors)
###Code
totalY = to_categorical(totalY, num_classes=2)
totalY[40:50]
output_dimen = totalY.shape[1] # which is 2
###Output
_____no_output_____
###Markdown
Save meta data for later prediction: maxLength: the input sequence length; vocab_size: the input vocab size; output_dimen: 2 in this example (pos or neg); sentiment_tag: either ["neg","pos"] or ["pos","neg"], matching the target tokenizer.
###Code
target_reverse_word_index = {v: k for k, v in list(target_tokenizer.word_index.items())}
sentiment_tag = [target_reverse_word_index[1],target_reverse_word_index[2]]
metaData = {"maxLength":maxLength,"vocab_size":vocab_size,"output_dimen":output_dimen,"sentiment_tag":sentiment_tag}
__pickleStuff("./Data/meta_sentiment_chinese.p", metaData)
###Output
_____no_output_____
###Markdown
Build the Model, train and save itThe training run is logged to TensorBoard; we can look at it by cd-ing into the directory "./Graph/sentiment_chinese" and running "python -m tensorflow.tensorboard --logdir=."
###Code
embedding_dim = 256
model = Sequential()
model.add(Embedding(vocab_size, embedding_dim,input_length = maxLength))
# Each input would have a size of (maxLength x 256) and each of these 256 sized vectors are fed into the GRU layer one at a time.
# All the intermediate outputs are collected and then passed on to the second GRU layer.
model.add(GRU(256, dropout=0.9, return_sequences=True))
# Using the intermediate outputs, we pass them to another GRU layer and collect the final output only this time
model.add(GRU(256, dropout=0.9))
# The output is then sent to a fully connected layer that would give us our final output_dim classes
model.add(Dense(output_dimen, activation='softmax'))
# We use the adam optimizer instead of standard SGD since it converges much faster
tbCallBack = TensorBoard(log_dir='./Graph/sentiment_chinese', histogram_freq=0,
write_graph=True, write_images=True)
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()
model.fit(totalX, totalY, validation_split=0.1, batch_size=32, epochs=20, verbose=1, callbacks=[tbCallBack])
model.save('./Data/sentiment_chinese_model.HDF5')
print("Saved model!")
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
embedding_1 (Embedding) (None, 16, 256) 11502592
_________________________________________________________________
gru_1 (GRU) (None, 16, 256) 393984
_________________________________________________________________
gru_2 (GRU) (None, 256) 393984
_________________________________________________________________
dense_1 (Dense) (None, 2) 514
=================================================================
Total params: 12,291,074
Trainable params: 12,291,074
Non-trainable params: 0
_________________________________________________________________
Train on 19944 samples, validate on 2217 samples
Epoch 1/20
19944/19944 [==============================] - 130s 6ms/step - loss: 0.4441 - acc: 0.7968 - val_loss: 0.2851 - val_acc: 0.8841
Epoch 2/20
19944/19944 [==============================] - 130s 7ms/step - loss: 0.2677 - acc: 0.8968 - val_loss: 0.2330 - val_acc: 0.9089
Epoch 3/20
19944/19944 [==============================] - 132s 7ms/step - loss: 0.2083 - acc: 0.9202 - val_loss: 0.2311 - val_acc: 0.9093
Epoch 4/20
19944/19944 [==============================] - 132s 7ms/step - loss: 0.1680 - acc: 0.9368 - val_loss: 0.2418 - val_acc: 0.9107
Epoch 5/20
19944/19944 [==============================] - 132s 7ms/step - loss: 0.1451 - acc: 0.9488 - val_loss: 0.2546 - val_acc: 0.9053
Epoch 6/20
19944/19944 [==============================] - 132s 7ms/step - loss: 0.1263 - acc: 0.9552 - val_loss: 0.2664 - val_acc: 0.9098
Epoch 7/20
19944/19944 [==============================] - 132s 7ms/step - loss: 0.1098 - acc: 0.9611 - val_loss: 0.2856 - val_acc: 0.9089
Epoch 8/20
19944/19944 [==============================] - 132s 7ms/step - loss: 0.0929 - acc: 0.9659 - val_loss: 0.2932 - val_acc: 0.9102
Epoch 9/20
19944/19944 [==============================] - 132s 7ms/step - loss: 0.0823 - acc: 0.9705 - val_loss: 0.2887 - val_acc: 0.9084
Epoch 10/20
19944/19944 [==============================] - 133s 7ms/step - loss: 0.0732 - acc: 0.9748 - val_loss: 0.3213 - val_acc: 0.9071
Epoch 11/20
19944/19944 [==============================] - 134s 7ms/step - loss: 0.0687 - acc: 0.9755 - val_loss: 0.3645 - val_acc: 0.9089
Epoch 12/20
19944/19944 [==============================] - 134s 7ms/step - loss: 0.0615 - acc: 0.9786 - val_loss: 0.3509 - val_acc: 0.9111
Epoch 13/20
19944/19944 [==============================] - 134s 7ms/step - loss: 0.0599 - acc: 0.9787 - val_loss: 0.3168 - val_acc: 0.9102
Epoch 14/20
19944/19944 [==============================] - 134s 7ms/step - loss: 0.0532 - acc: 0.9811 - val_loss: 0.3981 - val_acc: 0.9093
Epoch 15/20
19944/19944 [==============================] - 134s 7ms/step - loss: 0.0483 - acc: 0.9828 - val_loss: 0.3934 - val_acc: 0.9111
Epoch 16/20
19944/19944 [==============================] - 134s 7ms/step - loss: 0.0475 - acc: 0.9831 - val_loss: 0.4414 - val_acc: 0.9084
Epoch 17/20
19944/19944 [==============================] - 134s 7ms/step - loss: 0.0469 - acc: 0.9834 - val_loss: 0.4706 - val_acc: 0.9048
Epoch 18/20
19944/19944 [==============================] - 134s 7ms/step - loss: 0.0435 - acc: 0.9842 - val_loss: 0.4752 - val_acc: 0.9057
Epoch 19/20
19944/19944 [==============================] - 134s 7ms/step - loss: 0.0434 - acc: 0.9849 - val_loss: 0.4922 - val_acc: 0.9057
Epoch 20/20
19944/19944 [==============================] - 134s 7ms/step - loss: 0.0414 - acc: 0.9854 - val_loss: 0.4394 - val_acc: 0.9075
Saved model!
###Markdown
Below is the prediction code. First, a function to load the meta data and the model we just trained.
###Code
model = None
sentiment_tag = None
maxLength = None
def loadModel():
global model, sentiment_tag, maxLength
metaData = __loadStuff("./Data/meta_sentiment_chinese.p")
maxLength = metaData.get("maxLength")
vocab_size = metaData.get("vocab_size")
output_dimen = metaData.get("output_dimen")
sentiment_tag = metaData.get("sentiment_tag")
embedding_dim = 256
if model is None:
model = Sequential()
model.add(Embedding(vocab_size, embedding_dim, input_length=maxLength))
# Each input would have a size of (maxLength x 256) and each of these 256 sized vectors are fed into the GRU layer one at a time.
# All the intermediate outputs are collected and then passed on to the second GRU layer.
model.add(GRU(256, dropout=0.9, return_sequences=True))
# Using the intermediate outputs, we pass them to another GRU layer and collect the final output only this time
model.add(GRU(256, dropout=0.9))
# The output is then sent to a fully connected layer that would give us our final output_dim classes
model.add(Dense(output_dimen, activation='softmax'))
# We use the adam optimizer instead of standard SGD since it converges much faster
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.load_weights('./Data/sentiment_chinese_model.HDF5')
model.summary()
print("Model weights loaded!")
###Output
_____no_output_____
###Markdown
Functions to convert sentence to model input, and predict result
###Code
def findFeatures(text):
text=Converter('zh-hans').convert(text)
text = text.replace("\n", "")
text = text.replace("\r", "")
seg_list = jieba.cut(text, cut_all=False)
seg_list = list(seg_list)
text = " ".join(seg_list)
textArray = [text]
    input_tokenizer_load = __loadStuff("./Data/input_tokenizer_chinese.p")  # reloaded on every call for simplicity; could be cached
    textArray = np.array(pad_sequences(input_tokenizer_load.texts_to_sequences(textArray), maxlen=maxLength))
    return textArray
def predictResult(text):
    if model is None:
        print("Please run \"loadModel\" first.")
        return None
    features = findFeatures(text)
    predicted = model.predict(features)[0] # we have only one sentence to predict, so take index 0
    predicted = np.array(predicted)
    probab = predicted.max()
    prediction = sentiment_tag[predicted.argmax()]
    return prediction, probab
###Output
_____no_output_____
###Markdown
Calling the load model function
###Code
loadModel()
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
embedding_2 (Embedding) (None, 16, 256) 11502592
_________________________________________________________________
gru_3 (GRU) (None, 16, 256) 393984
_________________________________________________________________
gru_4 (GRU) (None, 256) 393984
_________________________________________________________________
dense_2 (Dense) (None, 2) 514
=================================================================
Total params: 12,291,074
Trainable params: 12,291,074
Non-trainable params: 0
_________________________________________________________________
Model weights loaded!
###Markdown
Try some new comments, feel free to try your ownThe result tuple consists of the predicted result and its likelihood.
###Code
predictResult("还好,床很大而且很干净,前台很友好,很满意,下次还来。")
predictResult("房间有点小但是设备还齐全,没有异味。")
predictResult("房间还算干净,一般般吧,短住还凑合。")
predictResult("开始不太满意,前台好说话换了一间,房间很干净没有异味。")
predictResult("以前从没有出现过这种情况,这一定有问题")
predictResult("需求决定人的行为")
predictResult("我不同意你所说的每一个字,但我誓死捍卫你说话的权力")
predictResult("凡夫俗子只关心如何去打发时间,而略具才华的人却考虑如何应用时间")
predictResult("清华大学的傻逼们,请出来说句话")
predictResult("我好可怜奥")
predictResult("好久都没有听到一首这样有韵味的歌了!")
predictResult("在一个傍晚的偏远小镇上,街道上寒冷凄清,几乎看不到路人,只有几盏闪烁的霓虹灯,渲染着寂寥的风景。")
predictResult("走开,女大十八变不知道啊")
predictResult("踢个球右腿被干了,瓜皮瓜皮")
predictResult("终于他梁的忙完这些稀里糊涂的东西了,爆炸")
predictResult("大家都是平等的")
saying = """
never give up
I'm born to do this
有希望在的地方,痛苦也成欢乐。
信仰是人生杠杆的支撑点,具备这个支撑点,才可能成为一个强而有力的人;信仰是事业的大门,没有正确的信仰,注定做不出伟大的事业。
哲学是有严密逻辑系统的宇宙观,它研究宇宙的性质、宇宙内万事万物演化的总规律、人在宇宙中的位置等等一些很基本的问题
伟人与平凡人的差别在于,伟人的胸中并不是没有不自信的时候,只是他能够在不自信时调整自己,从而从不自信中走出来,以达到自信的旺盛的精神状态
别人是自己的镜子,自己应该在别人成功与失败的教训中避免不幸的重现。
劣书是损害我们精神思想的毒药。
I love losing face
陈述性的讲演不会被当成 negative
偏激的、平庸的、不讲逻辑的才会
生死狙击是这两年兴起的一款页游
A teacher from a community college addressed a sympathetic audience.
你怕是个傻子
好耶好耶,妈妈有爸爸了
小学生们要喷就喷点有营养的好么
SB游戏
本人玩这个英雄联盟也有几千场了,打这么多场下来,不说100%的场次, 至少90%的场次是属于以下类型的。1,己方3路全爆或者敌方3路全爆2.赢是躺赢,输是凯瑞。3一方默契到爆每次抓人先人一步,或者无脑团,每次团得比对方快几秒。这个游戏秒人速度大家是有目共睹的,任何一个小小的失误都会导致被秒,团灭或者队友之间的胡喷,而且请记住,你是绝对无法彻底控制一场对战的随机性的。在这个战局优劣瞬息万变的游戏,5个随机的人打另外5个随机的人,又有各式各样的阵容克制,单个英雄之间的克制,还有暴击率。在这样一个随机性游戏里面,概率事件变得如此之多的游戏,很有可能这个游戏需要的运气量比你打牌或者赌钱的运气更多,前提是运气能量化的话。能决定你输或者赢得跟你技术关系真不大,不管你是翻盘局,少胜多,还是你凯瑞了,或者你带崩全局。都说明不了你,你队友或者你对手很垃圾或者很NB。综上经常开比赛,描述英雄联盟是一个多需要技术多注重竞技性的游戏,来洗脑这个只能玩路人局的你,舔着B脸说自己是竞技游戏的,真的是太垃圾了。
"""
text_list = [text for text in saying.split('\n') if text.strip('\n ') != '']
for text in text_list:
print(text[:88], '\n', predictResult(text), '\n'*2)
###Output
never give up
('pos', 0.9998504)
I'm born to do this
('pos', 0.998686)
有希望在的地方,痛苦也成欢乐。
('pos', 0.9998497)
信仰是人生杠杆的支撑点,具备这个支撑点,才可能成为一个强而有力的人;信仰是事业的大门,没有正确的信仰,注定做不出伟大的事业。
('pos', 0.99976236)
哲学是有严密逻辑系统的宇宙观,它研究宇宙的性质、宇宙内万事万物演化的总规律、人在宇宙中的位置等等一些很基本的问题
('pos', 0.9996941)
伟人与平凡人的差别在于,伟人的胸中并不是没有不自信的时候,只是他能够在不自信时调整自己,从而从不自信中走出来,以达到自信的旺盛的精神状态
('pos', 0.9994443)
别人是自己的镜子,自己应该在别人成功与失败的教训中避免不幸的重现。
('pos', 0.9998518)
劣书是损害我们精神思想的毒药。
('neg', 0.9961482)
I love losing face
('pos', 0.98438025)
陈述性的讲演不会被当成 negative
('pos', 0.6916256)
偏激的、平庸的、不讲逻辑的才会
('pos', 0.9996351)
生死狙击是这两年兴起的一款页游
('pos', 0.87719715)
A teacher from a community college addressed a sympathetic audience.
('pos', 0.95462114)
你怕是个傻子
('neg', 0.9991398)
好耶好耶,妈妈有爸爸了
('neg', 0.999869)
小学生们要喷就喷点有营养的好么
('neg', 0.99981564)
SB游戏
('neg', 0.949867)
本人玩这个英雄联盟也有几千场了,打这么多场下来,不说100%的场次, 至少90%的场次是属于以下类型的。1,己方3路全爆或者敌方3路全爆2.赢是躺赢,输是凯瑞。3一方默契到爆每
('neg', 0.9943099)
|
notebooks/Dstripes/adversarial/basic/inference_adversarial/dense/VAE/pokemonIVAAE_Dense_reconst_1ellwlb_05sharpdiff.ipynb
|
###Markdown
Settings
###Code
%env TF_KERAS = 1
import os
sep_local = os.path.sep
import sys
sys.path.append('..'+sep_local+'..')
print(sep_local)
os.chdir('..'+sep_local+'..'+sep_local+'..'+sep_local+'..'+sep_local+'..')
print(os.getcwd())
import tensorflow as tf
print(tf.__version__)
###Output
_____no_output_____
###Markdown
Dataset loading
###Code
dataset_name='Dstripes'
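# NOTE: this copy of the notebook omits the cell that defines the data
# generators and hyperparameters used below. The values here are assumed,
# illustrative placeholders -- not the original experiment settings.
image_size = (64, 64, 3)   # assumed image shape
batch_size = 32            # assumed batch size
latents_dim = 32           # assumed latent dimensionality
intermediate_dim = 256     # assumed hidden-layer width
# training_generator and testing_generator are assumed to be defined in the
# omitted data-loading code.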
import tensorflow as tf
train_ds = tf.data.Dataset.from_generator(
lambda: training_generator,
output_types=tf.float32 ,
output_shapes=tf.TensorShape((batch_size, ) + image_size)
)
test_ds = tf.data.Dataset.from_generator(
lambda: testing_generator,
output_types=tf.float32 ,
output_shapes=tf.TensorShape((batch_size, ) + image_size)
)
_instance_scale=1.0
for data in train_ds:
_instance_scale = float(data[0].numpy().max())
break
_instance_scale
import numpy as np
from collections.abc import Iterable
inputs_shape = image_size  # inputs_shape is used here, so set it before this point
if isinstance(inputs_shape, Iterable):
    _outputs_shape = np.prod(inputs_shape)
else:
    _outputs_shape = inputs_shape
_outputs_shape
###Output
_____no_output_____
###Markdown
Model's Layers definition
###Code
menc_lays = [tf.keras.layers.Dense(units=intermediate_dim//2, activation='relu'),
tf.keras.layers.Dense(units=intermediate_dim//2, activation='relu'),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(units=latents_dim)]
venc_lays = [tf.keras.layers.Dense(units=intermediate_dim//2, activation='relu'),
tf.keras.layers.Dense(units=intermediate_dim//2, activation='relu'),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(units=latents_dim)]
dec_lays = [tf.keras.layers.Dense(units=latents_dim, activation='relu'),
tf.keras.layers.Dense(units=intermediate_dim, activation='relu'),
tf.keras.layers.Dense(units=_outputs_shape),
tf.keras.layers.Reshape(inputs_shape)]
###Output
_____no_output_____
###Markdown
Model definition
###Code
model_name = dataset_name+'IVAAE_Dense_reconst_1ellwlb_05sharpdiff'
experiments_dir='experiments'+sep_local+model_name
from training.autoencoding_basic.autoencoders.autoencoder import autoencoder as AE
inputs_shape=image_size
variables_params = \
[
{
'name': 'inference_mean',
'inputs_shape':inputs_shape,
'outputs_shape':latents_dim,
'layers': menc_lays
},
{
'name': 'inference_logvariance',
'inputs_shape':inputs_shape,
'outputs_shape':latents_dim,
'layers': venc_lays
},
{
'name': 'generative',
'inputs_shape':latents_dim,
'outputs_shape':inputs_shape,
'layers':dec_lays
}
]
from utils.data_and_files.file_utils import create_if_not_exist
_restore = os.path.join(experiments_dir, 'var_save_dir')
create_if_not_exist(_restore)
_restore
#to restore trained model, set filepath=_restore
from statistical.basic_adversarial_losses import \
create_inference_discriminator_real_losses, \
create_inference_discriminator_fake_losses, \
create_inference_generator_fake_losses
inference_discriminator_losses = {
'inference_discriminator_real_outputs': create_inference_discriminator_real_losses,
'inference_discriminator_fake_outputs': create_inference_discriminator_fake_losses,
'inference_generator_fake_outputs': create_inference_generator_fake_losses,
}
ae = AE(
name=model_name,
latents_dim=latents_dim,
batch_size=batch_size,
variables_params=variables_params,
filepath=None
)
from evaluation.quantitive_metrics.sharp_difference import prepare_sharpdiff
from statistical.losses_utilities import similarity_to_distance
from statistical.ae_losses import expected_loglikelihood_with_lower_bound as ellwlb
discr2gen_rate = 0.001
gen2trad_rate = 0.1
ae.compile(
loss={'x_logits': lambda x_true, x_logits: ellwlb(x_true, x_logits)+ 0.5*similarity_to_distance(prepare_sharpdiff([ae.batch_size]+ae.get_inputs_shape()))(x_true, x_logits)},
adversarial_losses=inference_discriminator_losses,
adversarial_weights={'generator_weight': gen2trad_rate, 'discriminator_weight': discr2gen_rate}
)
###Output
_____no_output_____
###Markdown
Callbacks
###Code
from training.callbacks.sample_generation import SampleGeneration
from training.callbacks.save_model import ModelSaver
es = tf.keras.callbacks.EarlyStopping(
monitor='loss',
min_delta=1e-12,
patience=12,
verbose=1,
restore_best_weights=False
)
ms = ModelSaver(filepath=_restore)
csv_dir = os.path.join(experiments_dir, 'csv_dir')
create_if_not_exist(csv_dir)
csv_dir = os.path.join(csv_dir, ae.name+'.csv')
csv_log = tf.keras.callbacks.CSVLogger(csv_dir, append=True)
csv_dir
image_gen_dir = os.path.join(experiments_dir, 'image_gen_dir')
create_if_not_exist(image_gen_dir)
sg = SampleGeneration(latents_shape=latents_dim, filepath=image_gen_dir, gen_freq=5, save_img=True, gray_plot=False)
###Output
_____no_output_____
###Markdown
Model Training
###Code
ae.fit(
x=train_ds,
input_kw=None,
steps_per_epoch=int(1e4),
epochs=int(1e6),
verbose=2,
callbacks=[ es, ms, csv_log, sg],
workers=-1,
use_multiprocessing=True,
validation_data=test_ds,
validation_steps=int(1e4)
)
###Output
_____no_output_____
###Markdown
Model Evaluation inception_score
###Code
from evaluation.generativity_metrics.inception_metrics import inception_score
is_mean, is_sigma = inception_score(ae, tolerance_threshold=1e-6, max_iteration=200)
print(f'inception_score mean: {is_mean}, sigma: {is_sigma}')
###Output
_____no_output_____
###Markdown
Frechet_inception_distance
###Code
from evaluation.generativity_metrics.inception_metrics import frechet_inception_distance
fis_score = frechet_inception_distance(ae, training_generator, tolerance_threshold=1e-6, max_iteration=10, batch_size=32)
print(f'frechet inception distance: {fis_score}')
###Output
_____no_output_____
###Markdown
perceptual_path_length_score
###Code
from evaluation.generativity_metrics.perceptual_path_length import perceptual_path_length_score
ppl_mean_score = perceptual_path_length_score(ae, training_generator, tolerance_threshold=1e-6, max_iteration=200, batch_size=32)
print(f'perceptual path length score: {ppl_mean_score}')
###Output
_____no_output_____
###Markdown
precision score
###Code
from evaluation.generativity_metrics.precision_recall import precision_score
_precision_score = precision_score(ae, training_generator, tolerance_threshold=1e-6, max_iteration=200)
print(f'precision score: {_precision_score}')
###Output
_____no_output_____
###Markdown
recall score
###Code
from evaluation.generativity_metrics.precision_recall import recall_score
_recall_score = recall_score(ae, training_generator, tolerance_threshold=1e-6, max_iteration=200)
print(f'recall score: {_recall_score}')
###Output
_____no_output_____
###Markdown
Image Generation image reconstruction Training dataset
###Code
%load_ext autoreload
%autoreload 2
from training.generators.image_generation_testing import reconstruct_from_a_batch
from utils.data_and_files.file_utils import create_if_not_exist
save_dir = os.path.join(experiments_dir, 'reconstruct_training_images_like_a_batch_dir')
create_if_not_exist(save_dir)
reconstruct_from_a_batch(ae, training_generator, save_dir)
from utils.data_and_files.file_utils import create_if_not_exist
save_dir = os.path.join(experiments_dir, 'reconstruct_testing_images_like_a_batch_dir')
create_if_not_exist(save_dir)
reconstruct_from_a_batch(ae, testing_generator, save_dir)
###Output
_____no_output_____
###Markdown
with Randomness
###Code
from training.generators.image_generation_testing import generate_images_like_a_batch
from utils.data_and_files.file_utils import create_if_not_exist
save_dir = os.path.join(experiments_dir, 'generate_training_images_like_a_batch_dir')
create_if_not_exist(save_dir)
generate_images_like_a_batch(ae, training_generator, save_dir)
from utils.data_and_files.file_utils import create_if_not_exist
save_dir = os.path.join(experiments_dir, 'generate_testing_images_like_a_batch_dir')
create_if_not_exist(save_dir)
generate_images_like_a_batch(ae, testing_generator, save_dir)
###Output
_____no_output_____
###Markdown
Complete Randomness
###Code
from training.generators.image_generation_testing import generate_images_randomly
from utils.data_and_files.file_utils import create_if_not_exist
save_dir = os.path.join(experiments_dir, 'random_synthetic_dir')
create_if_not_exist(save_dir)
generate_images_randomly(ae, save_dir)
from training.generators.image_generation_testing import interpolate_a_batch
from utils.data_and_files.file_utils import create_if_not_exist
save_dir = os.path.join(experiments_dir, 'interpolate_dir')
create_if_not_exist(save_dir)
interpolate_a_batch(ae, testing_generator, save_dir)
###Output
100%|██████████| 15/15 [00:00<00:00, 19.90it/s]
|
deep-learnining-specialization/2. improving deep neural networks/resources/Optimization methods.ipynb
|
###Markdown
Optimization MethodsUntil now, you've always used Gradient Descent to update the parameters and minimize the cost. In this notebook, you will learn more advanced optimization methods that can speed up learning and perhaps even get you to a better final value for the cost function. Having a good optimization algorithm can be the difference between waiting days vs. just a few hours to get a good result. Gradient descent goes "downhill" on a cost function $J$. Think of it as trying to do this: **Figure 1** : **Minimizing the cost is like finding the lowest point in a hilly landscape** At each step of the training, you update your parameters following a certain direction to try to get to the lowest possible point. **Notations**: As usual, $\frac{\partial J}{\partial a } = $ `da` for any variable `a`.To get started, run the following code to import the libraries you will need.
###Code
import numpy as np
import matplotlib.pyplot as plt
import scipy.io
import math
import sklearn
import sklearn.datasets
from opt_utils import load_params_and_grads, initialize_parameters, forward_propagation, backward_propagation
from opt_utils import compute_cost, predict, predict_dec, plot_decision_boundary, load_dataset
from testCases import *
%matplotlib inline
plt.rcParams['figure.figsize'] = (7.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
###Output
_____no_output_____
###Markdown
1 - Gradient DescentA simple optimization method in machine learning is gradient descent (GD). When you take gradient steps with respect to all $m$ examples on each step, it is also called Batch Gradient Descent. **Warm-up exercise**: Implement the gradient descent update rule. The gradient descent rule is, for $l = 1, ..., L$: $$ W^{[l]} = W^{[l]} - \alpha \text{ } dW^{[l]} \tag{1}$$$$ b^{[l]} = b^{[l]} - \alpha \text{ } db^{[l]} \tag{2}$$where L is the number of layers and $\alpha$ is the learning rate. All parameters should be stored in the `parameters` dictionary. Note that the iterator `l` starts at 0 in the `for` loop while the first parameters are $W^{[1]}$ and $b^{[1]}$. You need to shift `l` to `l+1` when coding.
###Code
# GRADED FUNCTION: update_parameters_with_gd
def update_parameters_with_gd(parameters, grads, learning_rate):
"""
Update parameters using one step of gradient descent
Arguments:
parameters -- python dictionary containing your parameters to be updated:
parameters['W' + str(l)] = Wl
parameters['b' + str(l)] = bl
grads -- python dictionary containing your gradients to update each parameters:
grads['dW' + str(l)] = dWl
grads['db' + str(l)] = dbl
learning_rate -- the learning rate, scalar.
Returns:
parameters -- python dictionary containing your updated parameters
"""
L = len(parameters) // 2 # number of layers in the neural networks
# Update rule for each parameter
for l in range(L):
### START CODE HERE ### (approx. 2 lines)
parameters["W" + str(l+1)] = parameters["W" + str(l+1)] - learning_rate * grads["dW" + str(l+1)]
parameters["b" + str(l+1)] = parameters["b" + str(l+1)] - learning_rate * grads["db" + str(l+1)]
### END CODE HERE ###
return parameters
parameters, grads, learning_rate = update_parameters_with_gd_test_case()
parameters = update_parameters_with_gd(parameters, grads, learning_rate)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
###Output
W1 = [[ 1.63535156 -0.62320365 -0.53718766]
[-1.07799357 0.85639907 -2.29470142]]
b1 = [[ 1.74604067]
[-0.75184921]]
W2 = [[ 0.32171798 -0.25467393 1.46902454]
[-2.05617317 -0.31554548 -0.3756023 ]
[ 1.1404819 -1.09976462 -0.1612551 ]]
b2 = [[-0.88020257]
[ 0.02561572]
[ 0.57539477]]
###Markdown
**Expected Output**: **W1** [[ 1.63535156 -0.62320365 -0.53718766] [-1.07799357 0.85639907 -2.29470142]] **b1** [[ 1.74604067] [-0.75184921]] **W2** [[ 0.32171798 -0.25467393 1.46902454] [-2.05617317 -0.31554548 -0.3756023 ] [ 1.1404819 -1.09976462 -0.1612551 ]] **b2** [[-0.88020257] [ 0.02561572] [ 0.57539477]] A variant of this is Stochastic Gradient Descent (SGD), which is equivalent to mini-batch gradient descent where each mini-batch has just 1 example. The update rule that you have just implemented does not change. What changes is that you would be computing gradients on just one training example at a time, rather than on the whole training set. The code examples below illustrate the difference between stochastic gradient descent and (batch) gradient descent. - **(Batch) Gradient Descent**:

```python
X = data_input
Y = labels
parameters = initialize_parameters(layers_dims)
for i in range(0, num_iterations):
    # Forward propagation
    a, caches = forward_propagation(X, parameters)
    # Compute cost.
    cost = compute_cost(a, Y)
    # Backward propagation.
    grads = backward_propagation(a, caches, parameters)
    # Update parameters.
    parameters = update_parameters(parameters, grads)
```

- **Stochastic Gradient Descent**:

```python
X = data_input
Y = labels
parameters = initialize_parameters(layers_dims)
for i in range(0, num_iterations):
    for j in range(0, m):
        # Forward propagation
        a, caches = forward_propagation(X[:,j], parameters)
        # Compute cost
        cost = compute_cost(a, Y[:,j])
        # Backward propagation
        grads = backward_propagation(a, caches, parameters)
        # Update parameters.
        parameters = update_parameters(parameters, grads)
```

In Stochastic Gradient Descent, you use only 1 training example before updating the gradients. When the training set is large, SGD can be faster. But the parameters will "oscillate" toward the minimum rather than converge smoothly. Here is an illustration of this: **Figure 1** : **SGD vs GD** "+" denotes a minimum of the cost. SGD leads to many oscillations to reach convergence. But each step is a lot faster to compute for SGD than for GD, as it uses only one training example (vs. the whole batch for GD). **Note** also that implementing SGD requires 3 for-loops in total:1. Over the number of iterations2. Over the $m$ training examples3. Over the layers (to update all parameters, from $(W^{[1]},b^{[1]})$ to $(W^{[L]},b^{[L]})$)In practice, you'll often get faster results if you use neither the whole training set nor only one training example to perform each update. Mini-batch gradient descent uses an intermediate number of examples for each step. With mini-batch gradient descent, you loop over the mini-batches instead of looping over individual training examples. **Figure 2** : **SGD vs Mini-Batch GD** "+" denotes a minimum of the cost. Using mini-batches in your optimization algorithm often leads to faster optimization. **What you should remember**:- The difference between gradient descent, mini-batch gradient descent and stochastic gradient descent is the number of examples you use to perform one update step.- You have to tune a learning rate hyperparameter $\alpha$.- With a well-tuned mini-batch size, usually it outperforms either gradient descent or stochastic gradient descent (particularly when the training set is large). 2 - Mini-Batch Gradient descentLet's learn how to build mini-batches from the training set (X, Y).There are two steps:- **Shuffle**: Create a shuffled version of the training set (X, Y) as shown below. Each column of X and Y represents a training example. 
Note that the random shuffling is done synchronously between X and Y, such that after the shuffling the $i^{th}$ column of X is the example corresponding to the $i^{th}$ label in Y. The shuffling step ensures that examples will be split randomly into different mini-batches. - **Partition**: Partition the shuffled (X, Y) into mini-batches of size `mini_batch_size` (here 64). Note that the number of training examples is not always divisible by `mini_batch_size`. The last mini batch might be smaller, but you don't need to worry about this. When the final mini-batch is smaller than the full `mini_batch_size`, it will look like this: **Exercise**: Implement `random_mini_batches`. We coded the shuffling part for you. To help you with the partitioning step, we give you the following code that selects the indexes for the $1^{st}$ and $2^{nd}$ mini-batches:

```python
first_mini_batch_X = shuffled_X[:, 0 : mini_batch_size]
second_mini_batch_X = shuffled_X[:, mini_batch_size : 2 * mini_batch_size]
...
```

Note that the last mini-batch might end up smaller than `mini_batch_size=64`. Let $\lfloor s \rfloor$ represent $s$ rounded down to the nearest integer (this is `math.floor(s)` in Python). If the total number of examples is not a multiple of `mini_batch_size=64` then there will be $\lfloor \frac{m}{mini\_batch\_size}\rfloor$ mini-batches with a full 64 examples, and the number of examples in the final mini-batch will be $m - mini\_batch\_size \times \lfloor \frac{m}{mini\_batch\_size}\rfloor$.
###Code
# GRADED FUNCTION: random_mini_batches
def random_mini_batches(X, Y, mini_batch_size = 64, seed = 0):
"""
Creates a list of random minibatches from (X, Y)
Arguments:
X -- input data, of shape (input size, number of examples)
Y -- true "label" vector (1 for blue dot / 0 for red dot), of shape (1, number of examples)
mini_batch_size -- size of the mini-batches, integer
Returns:
mini_batches -- list of synchronous (mini_batch_X, mini_batch_Y)
"""
np.random.seed(seed) # To make your "random" minibatches the same as ours
m = X.shape[1] # number of training examples
mini_batches = []
# Step 1: Shuffle (X, Y)
permutation = list(np.random.permutation(m))
shuffled_X = X[:, permutation]
shuffled_Y = Y[:, permutation].reshape((1,m))
# Step 2: Partition (shuffled_X, shuffled_Y). Minus the end case.
num_complete_minibatches = math.floor(m/mini_batch_size) # number of mini batches of size mini_batch_size in your partitioning
for k in range(0, num_complete_minibatches):
### START CODE HERE ### (approx. 2 lines)
mini_batch_X = shuffled_X[:, k * mini_batch_size : (k + 1) * mini_batch_size]
mini_batch_Y = shuffled_Y[:, k * mini_batch_size : (k + 1) * mini_batch_size]
### END CODE HERE ###
mini_batch = (mini_batch_X, mini_batch_Y)
mini_batches.append(mini_batch)
# Handling the end case (last mini-batch < mini_batch_size)
if m % mini_batch_size != 0:
### START CODE HERE ### (approx. 2 lines)
        mini_batch_X = shuffled_X[:, num_complete_minibatches * mini_batch_size:]
        mini_batch_Y = shuffled_Y[:, num_complete_minibatches * mini_batch_size:]
### END CODE HERE ###
mini_batch = (mini_batch_X, mini_batch_Y)
mini_batches.append(mini_batch)
return mini_batches
X_assess, Y_assess, mini_batch_size = random_mini_batches_test_case()
mini_batches = random_mini_batches(X_assess, Y_assess, mini_batch_size)
print ("shape of the 1st mini_batch_X: " + str(mini_batches[0][0].shape))
print ("shape of the 2nd mini_batch_X: " + str(mini_batches[1][0].shape))
print ("shape of the 3rd mini_batch_X: " + str(mini_batches[2][0].shape))
print ("shape of the 1st mini_batch_Y: " + str(mini_batches[0][1].shape))
print ("shape of the 2nd mini_batch_Y: " + str(mini_batches[1][1].shape))
print ("shape of the 3rd mini_batch_Y: " + str(mini_batches[2][1].shape))
print ("mini batch sanity check: " + str(mini_batches[0][0][0][0:3]))
###Output
shape of the 1st mini_batch_X: (12288, 64)
shape of the 2nd mini_batch_X: (12288, 64)
shape of the 3rd mini_batch_X: (12288, 20)
shape of the 1st mini_batch_Y: (1, 64)
shape of the 2nd mini_batch_Y: (1, 64)
shape of the 3rd mini_batch_Y: (1, 20)
mini batch sanity check: [ 0.90085595 -0.7612069 0.2344157 ]
###Markdown
**Expected Output**: **shape of the 1st mini_batch_X** (12288, 64) **shape of the 2nd mini_batch_X** (12288, 64) **shape of the 3rd mini_batch_X** (12288, 20) **shape of the 1st mini_batch_Y** (1, 64) **shape of the 2nd mini_batch_Y** (1, 64) **shape of the 3rd mini_batch_Y** (1, 20) **mini batch sanity check** [ 0.90085595 -0.7612069 0.2344157 ] **What you should remember**:- Shuffling and Partitioning are the two steps required to build mini-batches- Powers of two are often chosen to be the mini-batch size, e.g., 16, 32, 64, 128. 3 - MomentumBecause mini-batch gradient descent makes a parameter update after seeing just a subset of examples, the direction of the update has some variance, and so the path taken by mini-batch gradient descent will "oscillate" toward convergence. Using momentum can reduce these oscillations. Momentum takes into account the past gradients to smooth out the update. We will store the 'direction' of the previous gradients in the variable $v$. Formally, this will be the exponentially weighted average of the gradient on previous steps. You can also think of $v$ as the "velocity" of a ball rolling downhill, building up speed (and momentum) according to the direction of the gradient/slope of the hill. **Figure 3**: The red arrows show the direction taken by one step of mini-batch gradient descent with momentum. The blue points show the direction of the gradient (with respect to the current mini-batch) on each step. Rather than just following the gradient, we let the gradient influence $v$ and then take a step in the direction of $v$. **Exercise**: Initialize the velocity. The velocity, $v$, is a python dictionary that needs to be initialized with arrays of zeros. Its keys are the same as those in the `grads` dictionary, that is: for $l = 1, ..., L$:

```python
v["dW" + str(l+1)] = ...  # numpy array of zeros with the same shape as parameters["W" + str(l+1)]
v["db" + str(l+1)] = ...  # numpy array of zeros with the same shape as parameters["b" + str(l+1)]
```

**Note** that the iterator l starts at 0 in the for loop while the first parameters are v["dW1"] and v["db1"] (that's a "one" on the superscript). This is why we are shifting l to l+1 in the `for` loop.
###Code
# GRADED FUNCTION: initialize_velocity
def initialize_velocity(parameters):
"""
Initializes the velocity as a python dictionary with:
- keys: "dW1", "db1", ..., "dWL", "dbL"
- values: numpy arrays of zeros of the same shape as the corresponding gradients/parameters.
Arguments:
parameters -- python dictionary containing your parameters.
parameters['W' + str(l)] = Wl
parameters['b' + str(l)] = bl
Returns:
v -- python dictionary containing the current velocity.
v['dW' + str(l)] = velocity of dWl
v['db' + str(l)] = velocity of dbl
"""
L = len(parameters) // 2 # number of layers in the neural networks
v = {}
# Initialize velocity
for l in range(L):
### START CODE HERE ### (approx. 2 lines)
v["dW" + str(l+1)] = np.zeros(parameters["W" + str(l+1)].shape)
v["db" + str(l+1)] = np.zeros(parameters["b" + str(l+1)].shape)
### END CODE HERE ###
return v
parameters = initialize_velocity_test_case()
v = initialize_velocity(parameters)
print("v[\"dW1\"] = " + str(v["dW1"]))
print("v[\"db1\"] = " + str(v["db1"]))
print("v[\"dW2\"] = " + str(v["dW2"]))
print("v[\"db2\"] = " + str(v["db2"]))
###Output
v["dW1"] = [[ 0. 0. 0.]
[ 0. 0. 0.]]
v["db1"] = [[ 0.]
[ 0.]]
v["dW2"] = [[ 0. 0. 0.]
[ 0. 0. 0.]
[ 0. 0. 0.]]
v["db2"] = [[ 0.]
[ 0.]
[ 0.]]
###Markdown
**Expected Output**: **v["dW1"]** [[ 0. 0. 0.] [ 0. 0. 0.]] **v["db1"]** [[ 0.] [ 0.]] **v["dW2"]** [[ 0. 0. 0.] [ 0. 0. 0.] [ 0. 0. 0.]] **v["db2"]** [[ 0.] [ 0.] [ 0.]] **Exercise**: Now, implement the parameters update with momentum. The momentum update rule is, for $l = 1, ..., L$: $$ \begin{cases}v_{dW^{[l]}} = \beta v_{dW^{[l]}} + (1 - \beta) dW^{[l]} \\W^{[l]} = W^{[l]} - \alpha v_{dW^{[l]}}\end{cases}\tag{3}$$$$\begin{cases}v_{db^{[l]}} = \beta v_{db^{[l]}} + (1 - \beta) db^{[l]} \\b^{[l]} = b^{[l]} - \alpha v_{db^{[l]}} \end{cases}\tag{4}$$where L is the number of layers, $\beta$ is the momentum and $\alpha$ is the learning rate. All parameters should be stored in the `parameters` dictionary. Note that the iterator `l` starts at 0 in the `for` loop while the first parameters are $W^{[1]}$ and $b^{[1]}$ (that's a "one" on the superscript). So you will need to shift `l` to `l+1` when coding.
###Code
# GRADED FUNCTION: update_parameters_with_momentum
def update_parameters_with_momentum(parameters, grads, v, beta, learning_rate):
"""
Update parameters using Momentum
Arguments:
parameters -- python dictionary containing your parameters:
parameters['W' + str(l)] = Wl
parameters['b' + str(l)] = bl
grads -- python dictionary containing your gradients for each parameters:
grads['dW' + str(l)] = dWl
grads['db' + str(l)] = dbl
v -- python dictionary containing the current velocity:
v['dW' + str(l)] = ...
v['db' + str(l)] = ...
beta -- the momentum hyperparameter, scalar
learning_rate -- the learning rate, scalar
Returns:
parameters -- python dictionary containing your updated parameters
v -- python dictionary containing your updated velocities
"""
L = len(parameters) // 2 # number of layers in the neural networks
# Momentum update for each parameter
for l in range(L):
### START CODE HERE ### (approx. 4 lines)
# compute velocities
v["dW" + str(l+1)] = beta * v["dW" + str(l+1)] + (1 - beta) * grads["dW" + str(l+1)]
v["db" + str(l+1)] = beta * v["db" + str(l+1)] + (1 - beta) * grads["db" + str(l+1)]
# update parameters
parameters["W" + str(l+1)] = parameters["W" + str(l+1)] - learning_rate * v["dW" + str(l+1)]
parameters["b" + str(l+1)] = parameters["b" + str(l+1)] - learning_rate * v["db" + str(l+1)]
### END CODE HERE ###
return parameters, v
parameters, grads, v = update_parameters_with_momentum_test_case()
parameters, v = update_parameters_with_momentum(parameters, grads, v, beta = 0.9, learning_rate = 0.01)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
print("v[\"dW1\"] = " + str(v["dW1"]))
print("v[\"db1\"] = " + str(v["db1"]))
print("v[\"dW2\"] = " + str(v["dW2"]))
print("v[\"db2\"] = " + str(v["db2"]))
###Output
W1 = [[ 1.62544598 -0.61290114 -0.52907334]
[-1.07347112 0.86450677 -2.30085497]]
b1 = [[ 1.74493465]
[-0.76027113]]
W2 = [[ 0.31930698 -0.24990073 1.4627996 ]
[-2.05974396 -0.32173003 -0.38320915]
[ 1.13444069 -1.0998786 -0.1713109 ]]
b2 = [[-0.87809283]
[ 0.04055394]
[ 0.58207317]]
v["dW1"] = [[-0.11006192 0.11447237 0.09015907]
[ 0.05024943 0.09008559 -0.06837279]]
v["db1"] = [[-0.01228902]
[-0.09357694]]
v["dW2"] = [[-0.02678881 0.05303555 -0.06916608]
[-0.03967535 -0.06871727 -0.08452056]
[-0.06712461 -0.00126646 -0.11173103]]
v["db2"] = [[ 0.02344157]
[ 0.16598022]
[ 0.07420442]]
###Markdown
**Expected Output**: **W1** [[ 1.62544598 -0.61290114 -0.52907334] [-1.07347112 0.86450677 -2.30085497]] **b1** [[ 1.74493465] [-0.76027113]] **W2** [[ 0.31930698 -0.24990073 1.4627996 ] [-2.05974396 -0.32173003 -0.38320915] [ 1.13444069 -1.0998786 -0.1713109 ]] **b2** [[-0.87809283] [ 0.04055394] [ 0.58207317]] **v["dW1"]** [[-0.11006192 0.11447237 0.09015907] [ 0.05024943 0.09008559 -0.06837279]] **v["db1"]** [[-0.01228902] [-0.09357694]] **v["dW2"]** [[-0.02678881 0.05303555 -0.06916608] [-0.03967535 -0.06871727 -0.08452056] [-0.06712461 -0.00126646 -0.11173103]] **v["db2"]** [[ 0.02344157] [ 0.16598022] [ 0.07420442]] **Note** that:- The velocity is initialized with zeros. So the algorithm will take a few iterations to "build up" velocity and start to take bigger steps.- If $\beta = 0$, then this just becomes standard gradient descent without momentum. **How do you choose $\beta$?**- The larger the momentum $\beta$ is, the smoother the update, because we take the past gradients into account more. But if $\beta$ is too big, it could also smooth out the updates too much. - Common values for $\beta$ range from 0.8 to 0.999. If you don't feel inclined to tune this, $\beta = 0.9$ is often a reasonable default. - Tuning the optimal $\beta$ for your model might require trying several values to see what works best in terms of reducing the value of the cost function $J$. **What you should remember**:- Momentum takes past gradients into account to smooth out the steps of gradient descent. It can be applied with batch gradient descent, mini-batch gradient descent or stochastic gradient descent.- You have to tune a momentum hyperparameter $\beta$ and a learning rate $\alpha$. 4 - AdamAdam is one of the most effective optimization algorithms for training neural networks. It combines ideas from RMSProp (described in lecture) and Momentum. **How does Adam work?**1. It calculates an exponentially weighted average of past gradients, and stores it in variables $v$ (before bias correction) and $v^{corrected}$ (with bias correction). 2. It calculates an exponentially weighted average of the squares of the past gradients, and stores it in variables $s$ (before bias correction) and $s^{corrected}$ (with bias correction). 3. It updates parameters in a direction based on combining information from "1" and "2".The update rule is, for $l = 1, ..., L$: $$\begin{cases}v_{dW^{[l]}} = \beta_1 v_{dW^{[l]}} + (1 - \beta_1) \frac{\partial \mathcal{J} }{ \partial W^{[l]} } \\v^{corrected}_{dW^{[l]}} = \frac{v_{dW^{[l]}}}{1 - (\beta_1)^t} \\s_{dW^{[l]}} = \beta_2 s_{dW^{[l]}} + (1 - \beta_2) (\frac{\partial \mathcal{J} }{\partial W^{[l]} })^2 \\s^{corrected}_{dW^{[l]}} = \frac{s_{dW^{[l]}}}{1 - (\beta_2)^t} \\W^{[l]} = W^{[l]} - \alpha \frac{v^{corrected}_{dW^{[l]}}}{\sqrt{s^{corrected}_{dW^{[l]}}} + \varepsilon}\end{cases}$$where:- t counts the number of steps taken by Adam - L is the number of layers- $\beta_1$ and $\beta_2$ are hyperparameters that control the two exponentially weighted averages. - $\alpha$ is the learning rate- $\varepsilon$ is a very small number to avoid dividing by zeroAs usual, we will store all parameters in the `parameters` dictionary **Exercise**: Initialize the Adam variables $v, s$ which keep track of the past information.**Instruction**: The variables $v, s$ are python dictionaries that need to be initialized with arrays of zeros. Their keys are the same as for `grads`, that is:for $l = 1, ..., L$:

```python
v["dW" + str(l+1)] = ...  # numpy array of zeros with the same shape as parameters["W" + str(l+1)]
v["db" + str(l+1)] = ...  # numpy array of zeros with the same shape as parameters["b" + str(l+1)]
s["dW" + str(l+1)] = ...  # numpy array of zeros with the same shape as parameters["W" + str(l+1)]
s["db" + str(l+1)] = ...  # numpy array of zeros with the same shape as parameters["b" + str(l+1)]
```
###Code
# GRADED FUNCTION: initialize_adam
def initialize_adam(parameters) :
"""
Initializes v and s as two python dictionaries with:
- keys: "dW1", "db1", ..., "dWL", "dbL"
- values: numpy arrays of zeros of the same shape as the corresponding gradients/parameters.
Arguments:
parameters -- python dictionary containing your parameters.
parameters["W" + str(l)] = Wl
parameters["b" + str(l)] = bl
Returns:
v -- python dictionary that will contain the exponentially weighted average of the gradient.
v["dW" + str(l)] = ...
v["db" + str(l)] = ...
s -- python dictionary that will contain the exponentially weighted average of the squared gradient.
s["dW" + str(l)] = ...
s["db" + str(l)] = ...
"""
    L = len(parameters) // 2  # number of layers in the neural network
v = {}
s = {}
# Initialize v, s. Input: "parameters". Outputs: "v, s".
for l in range(L):
### START CODE HERE ### (approx. 4 lines)
v["dW" + str(l+1)] = np.zeros(parameters["W" + str(l+1)].shape)
v["db" + str(l+1)] = np.zeros(parameters["b" + str(l+1)].shape)
s["dW" + str(l+1)] = np.zeros(parameters["W" + str(l+1)].shape)
s["db" + str(l+1)] = np.zeros(parameters["b" + str(l+1)].shape)
### END CODE HERE ###
return v, s
parameters = initialize_adam_test_case()
v, s = initialize_adam(parameters)
print("v[\"dW1\"] = " + str(v["dW1"]))
print("v[\"db1\"] = " + str(v["db1"]))
print("v[\"dW2\"] = " + str(v["dW2"]))
print("v[\"db2\"] = " + str(v["db2"]))
print("s[\"dW1\"] = " + str(s["dW1"]))
print("s[\"db1\"] = " + str(s["db1"]))
print("s[\"dW2\"] = " + str(s["dW2"]))
print("s[\"db2\"] = " + str(s["db2"]))
###Output
v["dW1"] = [[ 0. 0. 0.]
[ 0. 0. 0.]]
v["db1"] = [[ 0.]
[ 0.]]
v["dW2"] = [[ 0. 0. 0.]
[ 0. 0. 0.]
[ 0. 0. 0.]]
v["db2"] = [[ 0.]
[ 0.]
[ 0.]]
s["dW1"] = [[ 0. 0. 0.]
[ 0. 0. 0.]]
s["db1"] = [[ 0.]
[ 0.]]
s["dW2"] = [[ 0. 0. 0.]
[ 0. 0. 0.]
[ 0. 0. 0.]]
s["db2"] = [[ 0.]
[ 0.]
[ 0.]]
###Markdown
**Expected Output**:

**v["dW1"]** = [[ 0.  0.  0.]
 [ 0.  0.  0.]]
**v["db1"]** = [[ 0.]
 [ 0.]]
**v["dW2"]** = [[ 0.  0.  0.]
 [ 0.  0.  0.]
 [ 0.  0.  0.]]
**v["db2"]** = [[ 0.]
 [ 0.]
 [ 0.]]
**s["dW1"]** = [[ 0.  0.  0.]
 [ 0.  0.  0.]]
**s["db1"]** = [[ 0.]
 [ 0.]]
**s["dW2"]** = [[ 0.  0.  0.]
 [ 0.  0.  0.]
 [ 0.  0.  0.]]
**s["db2"]** = [[ 0.]
 [ 0.]
 [ 0.]]

**Exercise**: Now, implement the parameters update with Adam. Recall the general update rule is, for $l = 1, ..., L$:

$$\begin{cases}
v_{W^{[l]}} = \beta_1 v_{W^{[l]}} + (1 - \beta_1) \frac{\partial J }{ \partial W^{[l]} } \\
v^{corrected}_{W^{[l]}} = \frac{v_{W^{[l]}}}{1 - (\beta_1)^t} \\
s_{W^{[l]}} = \beta_2 s_{W^{[l]}} + (1 - \beta_2) (\frac{\partial J }{\partial W^{[l]} })^2 \\
s^{corrected}_{W^{[l]}} = \frac{s_{W^{[l]}}}{1 - (\beta_2)^t} \\
W^{[l]} = W^{[l]} - \alpha \frac{v^{corrected}_{W^{[l]}}}{\sqrt{s^{corrected}_{W^{[l]}}}+\varepsilon}
\end{cases}$$

**Note** that the iterator `l` starts at 0 in the `for` loop while the first parameters are $W^{[1]}$ and $b^{[1]}$. You need to shift `l` to `l+1` when coding.
###Code
# GRADED FUNCTION: update_parameters_with_adam
def update_parameters_with_adam(parameters, grads, v, s, t, learning_rate = 0.01,
beta1 = 0.9, beta2 = 0.999, epsilon = 1e-8):
"""
Update parameters using Adam
Arguments:
parameters -- python dictionary containing your parameters:
parameters['W' + str(l)] = Wl
parameters['b' + str(l)] = bl
grads -- python dictionary containing your gradients for each parameters:
grads['dW' + str(l)] = dWl
grads['db' + str(l)] = dbl
v -- Adam variable, moving average of the first gradient, python dictionary
    s -- Adam variable, moving average of the squared gradient, python dictionary
    t -- Adam update step counter, used for bias correction
learning_rate -- the learning rate, scalar.
beta1 -- Exponential decay hyperparameter for the first moment estimates
beta2 -- Exponential decay hyperparameter for the second moment estimates
epsilon -- hyperparameter preventing division by zero in Adam updates
Returns:
parameters -- python dictionary containing your updated parameters
v -- Adam variable, moving average of the first gradient, python dictionary
s -- Adam variable, moving average of the squared gradient, python dictionary
"""
    L = len(parameters) // 2  # number of layers in the neural network
v_corrected = {} # Initializing first moment estimate, python dictionary
s_corrected = {} # Initializing second moment estimate, python dictionary
# Perform Adam update on all parameters
for l in range(L):
# Moving average of the gradients. Inputs: "v, grads, beta1". Output: "v".
### START CODE HERE ### (approx. 2 lines)
v["dW" + str(l+1)] = beta1 * v["dW" + str(l+1)] + (1 - beta1) * grads["dW" + str(l+1)]
v["db" + str(l+1)] = beta1 * v["db" + str(l+1)] + (1 - beta1) * grads["db" + str(l+1)]
### END CODE HERE ###
# Compute bias-corrected first moment estimate. Inputs: "v, beta1, t". Output: "v_corrected".
### START CODE HERE ### (approx. 2 lines)
v_corrected["dW" + str(l+1)] = v["dW" + str(l+1)] / (1 - beta1**t)
v_corrected["db" + str(l+1)] = v["db" + str(l+1)] / (1 - beta1**t)
### END CODE HERE ###
# Moving average of the squared gradients. Inputs: "s, grads, beta2". Output: "s".
### START CODE HERE ### (approx. 2 lines)
s["dW" + str(l+1)] = beta2 * s["dW" + str(l+1)] + (1 - beta2) * np.square(grads["dW" + str(l+1)])
s["db" + str(l+1)] = beta2 * s["db" + str(l+1)] + (1 - beta2) * np.square(grads["db" + str(l+1)])
### END CODE HERE ###
# Compute bias-corrected second raw moment estimate. Inputs: "s, beta2, t". Output: "s_corrected".
### START CODE HERE ### (approx. 2 lines)
s_corrected["dW" + str(l+1)] = s["dW" + str(l+1)] / (1 - beta2**t)
s_corrected["db" + str(l+1)] = s["db" + str(l+1)] / (1 - beta2**t)
### END CODE HERE ###
# Update parameters. Inputs: "parameters, learning_rate, v_corrected, s_corrected, epsilon". Output: "parameters".
### START CODE HERE ### (approx. 2 lines)
parameters["W" + str(l+1)] = parameters["W" + str(l+1)] - learning_rate * v_corrected["dW" + str(l+1)] / (np.sqrt(s_corrected["dW" + str(l+1)]) + epsilon)
parameters["b" + str(l+1)] = parameters["b" + str(l+1)] - learning_rate * v_corrected["db" + str(l+1)] / (np.sqrt(s_corrected["db" + str(l+1)]) + epsilon)
### END CODE HERE ###
return parameters, v, s
parameters, grads, v, s = update_parameters_with_adam_test_case()
parameters, v, s = update_parameters_with_adam(parameters, grads, v, s, t = 2)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
print("v[\"dW1\"] = " + str(v["dW1"]))
print("v[\"db1\"] = " + str(v["db1"]))
print("v[\"dW2\"] = " + str(v["dW2"]))
print("v[\"db2\"] = " + str(v["db2"]))
print("s[\"dW1\"] = " + str(s["dW1"]))
print("s[\"db1\"] = " + str(s["db1"]))
print("s[\"dW2\"] = " + str(s["dW2"]))
print("s[\"db2\"] = " + str(s["db2"]))
###Output
W1 = [[ 1.63178673 -0.61919778 -0.53561312]
[-1.08040999 0.85796626 -2.29409733]]
b1 = [[ 1.75225313]
[-0.75376553]]
W2 = [[ 0.32648046 -0.25681174 1.46954931]
[-2.05269934 -0.31497584 -0.37661299]
[ 1.14121081 -1.09244991 -0.16498684]]
b2 = [[-0.88529979]
[ 0.03477238]
[ 0.57537385]]
v["dW1"] = [[-0.11006192 0.11447237 0.09015907]
[ 0.05024943 0.09008559 -0.06837279]]
v["db1"] = [[-0.01228902]
[-0.09357694]]
v["dW2"] = [[-0.02678881 0.05303555 -0.06916608]
[-0.03967535 -0.06871727 -0.08452056]
[-0.06712461 -0.00126646 -0.11173103]]
v["db2"] = [[ 0.02344157]
[ 0.16598022]
[ 0.07420442]]
s["dW1"] = [[ 0.00121136 0.00131039 0.00081287]
[ 0.0002525 0.00081154 0.00046748]]
s["db1"] = [[ 1.51020075e-05]
[ 8.75664434e-04]]
s["dW2"] = [[ 7.17640232e-05 2.81276921e-04 4.78394595e-04]
[ 1.57413361e-04 4.72206320e-04 7.14372576e-04]
[ 4.50571368e-04 1.60392066e-07 1.24838242e-03]]
s["db2"] = [[ 5.49507194e-05]
[ 2.75494327e-03]
[ 5.50629536e-04]]
###Markdown
**Expected Output**:

**W1** = [[ 1.63178673 -0.61919778 -0.53561312]
 [-1.08040999  0.85796626 -2.29409733]]
**b1** = [[ 1.75225313]
 [-0.75376553]]
**W2** = [[ 0.32648046 -0.25681174  1.46954931]
 [-2.05269934 -0.31497584 -0.37661299]
 [ 1.14121081 -1.09245036 -0.16498684]]
**b2** = [[-0.88529978]
 [ 0.03477238]
 [ 0.57537385]]
**v["dW1"]** = [[-0.11006192  0.11447237  0.09015907]
 [ 0.05024943  0.09008559 -0.06837279]]
**v["db1"]** = [[-0.01228902]
 [-0.09357694]]
**v["dW2"]** = [[-0.02678881  0.05303555 -0.06916608]
 [-0.03967535 -0.06871727 -0.08452056]
 [-0.06712461 -0.00126646 -0.11173103]]
**v["db2"]** = [[ 0.02344157]
 [ 0.16598022]
 [ 0.07420442]]
**s["dW1"]** = [[ 0.00121136  0.00131039  0.00081287]
 [ 0.0002525   0.00081154  0.00046748]]
**s["db1"]** = [[ 1.51020075e-05]
 [ 8.75664434e-04]]
**s["dW2"]** = [[ 7.17640232e-05  2.81276921e-04  4.78394595e-04]
 [ 1.57413361e-04  4.72206320e-04  7.14372576e-04]
 [ 4.50571368e-04  1.60392066e-07  1.24838242e-03]]
**s["db2"]** = [[ 5.49507194e-05]
 [ 2.75494327e-03]
 [ 5.50629536e-04]]

You now have three working optimization algorithms (mini-batch gradient descent, Momentum, Adam). Let's implement a model with each of these optimizers and observe the difference.

5 - Model with different optimization algorithms

Let's use the following "moons" dataset to test the different optimization methods. (The dataset is named "moons" because the data from each of the two classes looks a bit like a crescent-shaped moon.)
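`load_dataset()` in the next cell is provided by the course utilities; as a hedged sketch, it presumably wraps scikit-learn's `make_moons` generator. The sample count, noise level, and seed below are assumptions, not the course's actual values.
```python
# A sketch under stated assumptions, not the actual course loader.
from sklearn.datasets import make_moons

def load_dataset_sketch(n_samples=300, noise=0.2, seed=3):
    X, Y = make_moons(n_samples=n_samples, noise=noise, random_state=seed)
    # Transpose to the shapes the model expects: X of shape (2, m), Y of shape (1, m)
    return X.T, Y.reshape(1, -1)
```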
###Code
train_X, train_Y = load_dataset()
###Output
_____no_output_____
###Markdown
We have already implemented a 3-layer neural network. You will train it with:
- Mini-batch **Gradient Descent**: it will call your function:
    - `update_parameters_with_gd()`
- Mini-batch **Momentum**: it will call your functions:
    - `initialize_velocity()` and `update_parameters_with_momentum()`
- Mini-batch **Adam**: it will call your functions:
    - `initialize_adam()` and `update_parameters_with_adam()`

A hedged sketch of the `random_mini_batches` helper that drives all three is shown below.
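The real `random_mini_batches` was implemented earlier in the assignment and lives in the course utilities; this sketch only illustrates the shuffle-and-partition logic it presumably follows.
```python
import numpy as np

def random_mini_batches_sketch(X, Y, mini_batch_size=64, seed=0):
    # Shuffle the columns (examples) with a fixed seed, then slice into
    # consecutive mini-batches; the last batch may be smaller than the rest.
    np.random.seed(seed)
    m = X.shape[1]
    permutation = np.random.permutation(m)
    shuffled_X, shuffled_Y = X[:, permutation], Y[:, permutation]
    return [(shuffled_X[:, k:k + mini_batch_size],
             shuffled_Y[:, k:k + mini_batch_size])
            for k in range(0, m, mini_batch_size)]
```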
###Code
def model(X, Y, layers_dims, optimizer, learning_rate = 0.0007, mini_batch_size = 64, beta = 0.9,
beta1 = 0.9, beta2 = 0.999, epsilon = 1e-8, num_epochs = 10000, print_cost = True):
"""
3-layer neural network model which can be run in different optimizer modes.
Arguments:
X -- input data, of shape (2, number of examples)
Y -- true "label" vector (1 for blue dot / 0 for red dot), of shape (1, number of examples)
    layers_dims -- python list, containing the size of each layer
    optimizer -- the optimizer to use, one of "gd", "momentum" or "adam"
learning_rate -- the learning rate, scalar.
mini_batch_size -- the size of a mini batch
beta -- Momentum hyperparameter
beta1 -- Exponential decay hyperparameter for the past gradients estimates
beta2 -- Exponential decay hyperparameter for the past squared gradients estimates
epsilon -- hyperparameter preventing division by zero in Adam updates
num_epochs -- number of epochs
print_cost -- True to print the cost every 1000 epochs
Returns:
parameters -- python dictionary containing your updated parameters
"""
    L = len(layers_dims)  # number of layers in the neural network
costs = [] # to keep track of the cost
t = 0 # initializing the counter required for Adam update
seed = 10 # For grading purposes, so that your "random" minibatches are the same as ours
# Initialize parameters
parameters = initialize_parameters(layers_dims)
# Initialize the optimizer
if optimizer == "gd":
pass # no initialization required for gradient descent
elif optimizer == "momentum":
v = initialize_velocity(parameters)
elif optimizer == "adam":
v, s = initialize_adam(parameters)
# Optimization loop
for i in range(num_epochs):
        # Define the random minibatches. We increment the seed to reshuffle the dataset differently after each epoch
seed = seed + 1
minibatches = random_mini_batches(X, Y, mini_batch_size, seed)
for minibatch in minibatches:
# Select a minibatch
(minibatch_X, minibatch_Y) = minibatch
# Forward propagation
a3, caches = forward_propagation(minibatch_X, parameters)
# Compute cost
cost = compute_cost(a3, minibatch_Y)
# Backward propagation
grads = backward_propagation(minibatch_X, minibatch_Y, caches)
# Update parameters
if optimizer == "gd":
parameters = update_parameters_with_gd(parameters, grads, learning_rate)
elif optimizer == "momentum":
parameters, v = update_parameters_with_momentum(parameters, grads, v, beta, learning_rate)
elif optimizer == "adam":
t = t + 1 # Adam counter
parameters, v, s = update_parameters_with_adam(parameters, grads, v, s,
t, learning_rate, beta1, beta2, epsilon)
        # Print the cost every 1000 epochs
if print_cost and i % 1000 == 0:
print ("Cost after epoch %i: %f" %(i, cost))
if print_cost and i % 100 == 0:
costs.append(cost)
# plot the cost
plt.plot(costs)
plt.ylabel('cost')
plt.xlabel('epochs (per 100)')
plt.title("Learning rate = " + str(learning_rate))
plt.show()
return parameters
###Output
_____no_output_____
###Markdown
You will now run this 3-layer neural network with each of the 3 optimization methods.

5.1 - Mini-batch Gradient Descent

Run the following code to see how the model does with mini-batch gradient descent.
###Code
# train 3-layer model
layers_dims = [train_X.shape[0], 5, 2, 1]
parameters = model(train_X, train_Y, layers_dims, optimizer = "gd")
# Predict
predictions = predict(train_X, train_Y, parameters)
# Plot decision boundary
plt.title("Model with Gradient Descent optimization")
axes = plt.gca()
axes.set_xlim([-1.5,2.5])
axes.set_ylim([-1,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
###Output
Cost after epoch 0: 0.690736
Cost after epoch 1000: 0.685273
Cost after epoch 2000: 0.647072
Cost after epoch 3000: 0.619525
Cost after epoch 4000: 0.576584
Cost after epoch 5000: 0.607243
Cost after epoch 6000: 0.529403
Cost after epoch 7000: 0.460768
Cost after epoch 8000: 0.465586
Cost after epoch 9000: 0.464518
###Markdown
5.2 - Mini-batch Gradient Descent with Momentum

Run the following code to see how the model does with momentum. Because this example is relatively simple, the gains from using momentum are small; but for more complex problems you might see bigger gains.
###Code
# train 3-layer model
layers_dims = [train_X.shape[0], 5, 2, 1]
parameters = model(train_X, train_Y, layers_dims, beta = 0.9, optimizer = "momentum")
# Predict
predictions = predict(train_X, train_Y, parameters)
# Plot decision boundary
plt.title("Model with Momentum optimization")
axes = plt.gca()
axes.set_xlim([-1.5,2.5])
axes.set_ylim([-1,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
###Output
Cost after epoch 0: 0.690741
Cost after epoch 1000: 0.685341
Cost after epoch 2000: 0.647145
Cost after epoch 3000: 0.619594
Cost after epoch 4000: 0.576665
Cost after epoch 5000: 0.607324
Cost after epoch 6000: 0.529476
Cost after epoch 7000: 0.460936
Cost after epoch 8000: 0.465780
Cost after epoch 9000: 0.464740
###Markdown
5.3 - Mini-batch Gradient Descent with Adam

Run the following code to see how the model does with Adam.
###Code
# train 3-layer model
layers_dims = [train_X.shape[0], 5, 2, 1]
parameters = model(train_X, train_Y, layers_dims, optimizer = "adam")
# Predict
predictions = predict(train_X, train_Y, parameters)
# Plot decision boundary
plt.title("Model with Adam optimization")
axes = plt.gca()
axes.set_xlim([-1.5,2.5])
axes.set_ylim([-1,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
###Output
Cost after epoch 0: 0.690552
Cost after epoch 1000: 0.185567
Cost after epoch 2000: 0.150852
Cost after epoch 3000: 0.074454
Cost after epoch 4000: 0.125936
Cost after epoch 5000: 0.104235
Cost after epoch 6000: 0.100552
Cost after epoch 7000: 0.031601
Cost after epoch 8000: 0.111709
Cost after epoch 9000: 0.197648
onshore7-Copy1.ipynb
###Markdown
Per-country OLS regression of project year on power and wind speed (`year ~ power + wind`), filtered to countries whose fitted intercept falls in a plausible range (1990 to 2020).
###Code
import pandas as pd
import statsmodels.formula.api as sm  # provides the formula-based ols()

# Fit a separate OLS model (year ~ power + wind) for each country
wdi = wd.set_index('ISO_CODE')
R = {}
for i in wdi.index.unique():
    R[i] = sm.ols(formula="year ~ power + wind", data=wdi.loc[i]).fit().params
# One row of fitted coefficients per country
P = pd.DataFrame(R).T
# Keep countries whose fitted intercept is a plausible project year
P[(P['Intercept'] > 1990) & (P['Intercept'] < 2020)]
P[(P['Intercept'] > 1990) & (P['Intercept'] < 2020)].hist(bins=20)
###Output
_____no_output_____
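To judge whether a single country's fit is trustworthy, one can inspect the full regression diagnostics. A hypothetical usage sketch follows; 'DEU' is an assumed ISO code, so substitute any code present in `wd`.
```python
# Hypothetical usage: full OLS diagnostics (R^2, t-statistics, ...)
# for one country's fit; 'DEU' is an assumed ISO code.
result = sm.ols(formula="year ~ power + wind",
                data=wd.set_index('ISO_CODE').loc['DEU']).fit()
print(result.summary())
```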
###Markdown
Number of projects in each country
###Code
# Count projects per country; groupby().size() returns a Series indexed
# by ISO code, so the assignment below aligns on index rather than
# relying on dictionary ordering.
C = wd.groupby('ISO_CODE').size()
C
P['projects'] = C
###Output
_____no_output_____