| Unnamed: 0 (int64, 0–16k) | text_prompt (string, lengths 110–62.1k) | code_prompt (string, lengths 37–152k) |
---|---|---|
12,200 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
File system proxies
This notebook demonstrates file system proxies to files and directories, which are basically strings (paths) with additional methods for creation, opening, moving, deleting, renaming, linking and browsing. Three file systems are currently supported
Step1: This notebook will create directories and files in a context manager that cleans up everything afterwards
Step2: This context manager will be used as follows
Step3: Proxy
A proxy to an existing or non-existing path (file or directory) can be created from a string
Step4: Create directories
You can use the proxy's "mkdir" method to create a directory (can be done recursively)
Step5: Browse directories
Browse directories that do not necessarily exist by indexing or iteration
Step6: Browse directories that exist using the "ls" method or iteration
Step7: Create files
Files can be created with the "open" method which gives you the underlying file IO object, or with the "mkfile" method which does the same but combines it with writing data.
Step8: Move/rename
Moving files or directories can be done with the "move" method (aliases
Step9: Copy
Copying files or directories can be done with the "copy" method (alias
Step10: Links
Hard and soft links are supported. The difference is that when deleting the link destination, the hard link still exists (file is not actually deleted) while the soft link is broken.
Step11: Cross-device links
HDF5 also supports cross-device links (called external links)
Step12: Devices
If a proxy is derived from another proxy, they share the same device. Therefore if the device is already opened by one proxy, it will be open for the other proxy. Re-opening is allowed but does not actually do anything (no reference counting either).
Step13: Access modes
Files can be opened in whatever mode the underlying file system supports (read, read/write, append, truncate,...). The default access mode is defined on proxy initialization. If no mode is specified, it will be read/write and create when not existing (local file system
Step14: Nexus
Nexus proxies inherit from HDF5 proxies with additional functionality to enforce Nexus standards.
Step15: Apart from the creation of Nexus groups, Nexus fields and attributes are currently neither enforced, restricted, nor dealt with automatically. For example
Step16: Default plotting
Plottable data (regular, n-D data with associated axes) can be provided as an NXdata instance. When several NXdata groups are present, only one of them can be marked as the default.
Step17: Data processing
Results of data processing can be saved as NXprocess instances. | Python Code:
from spectrocrunch.io import fs,localfs,h5fs,nxfs
Explanation: File system proxies
This notebook demonstrates file system proxies to files and directories, which are basically strings (paths) with additional methods for creation, opening, moving, deleting, renaming, linking and browsing. Three file systems are currently supported:
* Local file system
* HDF5 file system
* Nexus file system
End of explanation
from contextlib import contextmanager
@contextmanager
def temppath(fsystem=None):
with localfs.Path('.').temp() as root:
root.mkdir()
if fsystem=='nexus':
root = nxfs.Path(root['test.nx']).mkdir()
print('Nexus root: {}'.format(root))
elif fsystem=='hdf5':
root = h5fs.Path(root['test.h5']).mkdir()
print('HDF5 root: {}'.format(root))
else:
print('Local root: {}'.format(root))
yield root
Explanation: This notebook will create directories and files in a context manager that cleans up everything afterwards:
End of explanation
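For readers without spectrocrunch at hand, the same clean-up pattern can be sketched with the standard library alone; tempfile.TemporaryDirectory already behaves as a self-deleting context manager (the HDF5/Nexus branches above are specific to spectrocrunch and are not reproduced here):
import os
import tempfile
from contextlib import contextmanager

@contextmanager
def temppath_stdlib():
    # TemporaryDirectory removes the directory and everything inside it on exit
    with tempfile.TemporaryDirectory() as root:
        print('Local root: {}'.format(root))
        yield root

with temppath_stdlib() as root:
    open(os.path.join(root, 'example.txt'), 'w').close()
# once the block exits, the directory and example.txt are gone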
for fsystem in ['local','hdf5']:
with temppath(fsystem) as root:
root.ls()
print('')
Explanation: This context manager will be used as follows:
End of explanation
import os
with temppath() as root:
path = os.path.join(root.path,'subdir')
print('String: '+path)
proxy = localfs.Path(path)
print('Local proxy: '+str(proxy))
assert(not proxy.exists)
print('')
path = os.path.join(root.path,'test.h5:/subdir')
print('String: '+path)
proxy = h5fs.Path(path)
print('HDF5 proxy: '+str(proxy))
assert(not proxy.exists)
Explanation: Proxy
A proxy to an existing or non-existing path (file or directory) can be created from a string:
End of explanation
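The closest standard-library analogue is pathlib.Path, which likewise wraps a path string in an object with extra methods and may refer to a path that does not exist yet; a minimal sketch for comparison (pure stdlib, not the spectrocrunch API):
from pathlib import Path

p = Path('some_dir') / 'subdir'   # just a path value; nothing is created on disk
print('String form: {}'.format(p))
assert not p.exists()             # like a proxy, it can point at a non-existing path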
for fsystem in ['local','hdf5']:
with temppath(fsystem=fsystem) as root:
subdir = root['dir']['subdir']
print('Device: {}'.format(subdir.device))
print('Path: '+subdir.path)
print('Location: '+subdir.location)
assert(not subdir.exists)
subdir.mkdir(recursive=True)
assert(subdir.exists)
print('')
Explanation: Create directories
You can use the proxy's "mkdir" method to create a directory (can be done recursively)
End of explanation
for fsystem in ['local','hdf5']:
with temppath(fsystem=fsystem) as root:
print('Indexing:')
path = root['dir']['subdir']
assert(path['.'] == path)
assert(path['..'] == root['dir'])
assert(path['..'] == path.parent)
assert(path == path['..']['subdir'])
print('Iterate up from {}:'.format(path.path))
for p in path.iterup:
print(p)
print('')
Explanation: Browse directories
Browse directories that do not necessarily exist by indexing or iteration:
End of explanation
for fsystem in ['local','hdf5']:
with temppath(fsystem=fsystem) as root:
path = root['dir']['subdir'].mkdir()
for subdir in ['dira','dirb']:
path[subdir].mkdir()
print('Iterate {}:'.format(path.path))
for subdir in path:
print(subdir)
print('ls:')
root.ls(recursive=True)
print('')
Explanation: Browse directories that exist using the "ls" method or iteration:
End of explanation
for fsystem in ['local','hdf5']:
    with temppath(fsystem=fsystem) as root:
        if fsystem=='hdf5':
data = range(10)
else:
data = 'Hello World'
path = root['file1.txt'].mkfile(data=data)
root.ls(stats=True)
with path.open(mode='r') as fileio:
print('IO object: {}'.format(type(fileio)))
content = path.read()
print('Content: {}'.format(content))
        if fsystem=='hdf5':
assert(tuple(data)==tuple(content))
else:
assert(data==content)
print('')
Explanation: Create files
Files can be created with the "open" method which gives you the underlying file IO object, or with the "mkfile" method which does the same but combines it with writing data.
End of explanation
for fsystem in ['local','hdf5']:
with temppath(fsystem=fsystem) as root:
path = root['dir1'].mkdir()['test1.txt'].mkfile(data='Hello World')
print('\nMove test1.txt to test2.txt:')
path = path.move('test2.txt')
root.ls(recursive=True)
print('\nMove test2.txt to ../dir2/test3.txt:')
path = path.move('../dir2/test3.txt')
root.ls(recursive=True)
print('\nMove test3.txt to ..:')
path = path.move('..')
root.ls(recursive=True)
print('\n\n')
Explanation: Move/rename
Moving files or directories can be done with the "move" method (aliases: "mv" and "rename"). The destination can be a proxy or a string.
End of explanation
for fsystem in ['local','hdf5']:
with temppath(fsystem=fsystem) as root:
path = root['dir1'].mkdir()['test1.txt'].mkfile(data='Hello World')
print('\nCopy test1.txt to test2.txt')
path = path.copy('test2.txt')
root.ls(recursive=True)
print('\nCopy test2.txt to ../dir2/test3.txt')
path = path.copy('../dir2/test3.txt')
root.ls(recursive=True)
print('\nCopy test3.txt to ../dir1/test1.txt (force overwrite)')
try:
path = path.copy(root['dir1']['test1.txt'])
except fs.AlreadyExists:
path = path.copy(root['dir1']['test1.txt'],force=True)
else:
raise RuntimeError('Copy should not work')
root.ls(recursive=True)
print('\nMove test1.txt to test2.txt (force overwrite)')
try:
path = path.move('test2.txt')
except fs.AlreadyExists:
path = path.move('test2.txt',force=True)
else:
raise RuntimeError('Move should not work')
root.ls(recursive=True)
print('\n\n')
Explanation: Copy
Copying files or directories can be done with the "copy" method (alias: "cp"). The destination can be a proxy or a string. Existing destinations are replaced when "force=True".
End of explanation
for fsystem in ['local','hdf5']:
with temppath(fsystem=fsystem) as root:
stats = fsystem=='hdf5'
path = root['dir1'].mkdir()['file.txt'].mkfile(data='Hello World')
soft = root['soft.txt'].link(path)
hard = root['hard.txt'].link(path,soft=False)
soft2 = root['soft2.txt'].link('hard.txt')
assert(soft.read()=='Hello World')
assert(soft2.read()=='Hello World')
assert(hard.read()=='Hello World')
root.ls(recursive=True,stats=stats)
print('\nRemove file.txt')
soft.linkdest().remove(recursive=True)
assert(soft.lexists)
assert(not soft.exists)
assert(soft2.read()=='Hello World')
assert(hard.read()=='Hello World')
root.ls(recursive=True,stats=stats)
print('\n\n')
Explanation: Links
Hard and soft links are supported. The difference is that when deleting the link destination, the hard link still exists (file is not actually deleted) while the soft link is broken.
End of explanation
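For the local file system the hard/soft distinction is the ordinary POSIX one; a minimal standard-library sketch (os.link for hard links, os.symlink for soft links) shows the behaviour described above when the link target is deleted:
import os
import tempfile

with tempfile.TemporaryDirectory() as d:
    target = os.path.join(d, 'file.txt')
    with open(target, 'w') as f:
        f.write('Hello World')
    hard = os.path.join(d, 'hard.txt')
    soft = os.path.join(d, 'soft.txt')
    os.link(target, hard)      # hard link: a second name for the same inode
    os.symlink(target, soft)   # soft link: a path that points at the target
    os.remove(target)
    assert os.path.exists(hard)      # the data is still reachable through the hard link
    assert not os.path.exists(soft)  # the soft link is broken...
    assert os.path.islink(soft)      # ...although the link entry itself still exists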
with temppath(fsystem='hdf5') as root1:
root2 = h5fs.Path(root1.device.parent['external.h5']).mkdir()
path = root1['dir1'].mkdir()['a'].mkfile(data='Hello World')
lnk = root2['alnk'].link(path)
root1.ls(recursive=True,stats=True)
print('')
root2.ls(recursive=True,stats=True)
path = lnk.linkdest().rename('b')
print('\n\nLink destination renamed:')
root1.ls(recursive=True,stats=True)
print('')
root2.ls(recursive=True,stats=True)
lnk = root2['blnk'].link(path)
root2['alnk'].remove()
print('\n\nLink to renamed destination:')
root1.ls(recursive=True,stats=True)
print('')
root2.ls(recursive=True,stats=True)
lnk.linkdest().copy(root2['b'])
print('\n\nCopy destination to local tree:')
root1.ls(recursive=True,stats=True)
print('')
root2.ls(recursive=True,stats=True)
lnk.linkdest().move(root2['c'])
print('\n\nMove destination to local tree:')
root1.ls(recursive=True,stats=True)
print('')
root2.ls(recursive=True,stats=True)
root2['b'].move(root1['dir1'])
print('\n\nMove from local tree to destination:')
root1.ls(recursive=True,stats=True)
print('')
root2.ls(recursive=True,stats=True)
root2['blnk'].remove(recursive=True)
print('\n\nRemove link recursively:')
root1.ls(recursive=True,stats=True)
print('')
root2.ls(recursive=True,stats=True)
Explanation: Cross-device links
HDF5 also supports cross-device links (called external links):
End of explanation
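Under the hood these cross-device links are HDF5 external links; a minimal sketch with plain h5py (file names are illustrative) shows the mechanism that the proxies wrap:
import h5py

with h5py.File('test.h5', 'w') as f:
    f.create_group('dir1')
    f['dir1'].create_dataset('a', data='Hello World')

with h5py.File('external.h5', 'w') as g:
    # an external link stores only (file name, internal path); it is resolved on access
    g['alnk'] = h5py.ExternalLink('test.h5', '/dir1/a')

with h5py.File('external.h5', 'r') as g:
    print(g['alnk'][()])  # follows the link into test.h5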
with temppath(fsystem='hdf5') as root1:
# Shared device
deviceid = id(root1.device)
path = root1['dir1']['dir2'].mkdir()
assert(deviceid==id(path.device))
txtfile = path['file.txt'].open(data='Hello World')
assert(deviceid==id(path.device))
# Shared device is opened once
with root1['dir1'].open() as grp1:
assert(root1['dir1']['dir2'].device.isopen())
with root1['dir1']['dir2'].open() as grp2:
assert(grp1['dir2']==grp2)
# Same device but not shared
root2 = h5fs.Path(root1.location,mode='r')
assert(deviceid!=id(root2.device))
Explanation: Devices
If a proxy is derived from another proxy, they share the same device. Therefore if the device is already opened by one proxy, it will be open for the other proxy. Re-opening is allowed but does not actually do anything (no reference counting either).
End of explanation
with temppath(fsystem='hdf5') as root:
print('Default proxy mode: {}'.format(root.openparams['mode']))
rootreadonly = h5fs.Path(root.location,mode='r')
print('Read-only proxy mode: {}'.format(rootreadonly.openparams['mode']))
# Write a file
txtfile = root['file.txt'].mkfile(data='123',mode='x')
txtfile = root['file.txt'].mkfile(data='abcd',mode='w')
assert(txtfile.read()=='abcd')
# Try to write a file in the wrong mode (default proxy):
for mode in ['r','x']:
try:
txtfile = root['file.txt'].mkfile(data='123',mode=mode)
except (IOError,fs.AlreadyExists):
pass
else:
raise RuntimeError('Should throw an error')
# Try to write a file in the wrong mode (read-only proxy):
for mode in ['r','x','w','a']:
try:
txtfile = rootreadonly['file.txt'].mkfile(data='123',mode=mode)
except (IOError,ValueError,fs.AlreadyExists):
pass
else:
raise RuntimeError('Should throw an error')
assert(txtfile.read()=='abcd')
Explanation: Access modes
Files can be opened in whatever mode the underlying file system supports (read, read/write, append, truncate, ...). The default access mode is defined on proxy initialization. If no mode is specified, it will be read/write, creating the file when it does not exist (local file system: mode='a+', HDF5 file system: mode='a'). This default can be overridden when opening the file, but not every override is allowed (for example, you will never be able to open a file with write access when the proxy is read-only). Proxies derived from each other inherit the default access mode.
End of explanation
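The mode letters behave like those of Python's builtin open (and the corresponding h5py modes): 'r' is read-only, 'w' truncates, 'x' creates a new file and fails if it already exists, and the append-style modes create the file when it is missing. A small stdlib sketch of the 'x' versus 'w' behaviour that the asserts above rely on:
import os
import tempfile

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, 'file.txt')
    with open(path, mode='x') as f:  # 'x': exclusive creation
        f.write('123')
    with open(path, mode='w') as f:  # 'w': truncate and overwrite
        f.write('abcd')
    try:
        open(path, mode='x')
    except FileExistsError:
        pass                         # 'x' refuses to touch an existing file
    with open(path) as f:
        assert f.read() == 'abcd'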
with temppath(fsystem='nexus') as root:
root.nxentry(name='test0001')
entry = root.new_nxentry()
subentry = entry.nxsubentry('subentry')
instrument = entry.nxinstrument()
det1 = entry.nxdetector('xia')
det2 = entry.nxdetector('pco')
measurement = entry.measurement()
nxmonochromator = entry.nxmonochromator()
positioners = entry.positioners()
application = entry.application('xrf')
root.ls(recursive=True,stats=True)
Explanation: Nexus
Nexus proxies inherit from HDF5 proxies with additional functionality to enforce Nexus standards.
End of explanation
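In NeXus these groups (NXentry, NXinstrument, NXdetector, ...) are plain HDF5 groups whose NX_class attribute names the class, which is presumably what helpers such as nxentry and nxdetector write for you; a minimal h5py sketch of the convention (names are illustrative, not the spectrocrunch internals):
import h5py

with h5py.File('test.nx', 'w') as f:
    entry = f.create_group('test0001')
    entry.attrs['NX_class'] = 'NXentry'
    instrument = entry.create_group('instrument')
    instrument.attrs['NX_class'] = 'NXinstrument'
    detector = instrument.create_group('xia')
    detector.attrs['NX_class'] = 'NXdetector'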
with temppath(fsystem='nexus') as root:
entry = root.new_nxentry()
instrument = entry.nxinstrument()
name = instrument['name']
if not name.exists:
name = name.mkfile(data='ID21 Scanning X-ray Microscope')
name.update_stats(short_name='ID21')
root.ls(stats=True,recursive=True)
Explanation: Apart from the creation of Nexus groups, Nexus fields and attributes are currently neither enforced, restricted, nor dealt with automatically. For example:
End of explanation
with temppath(fsystem='nexus') as root:
entry = root.nxentry(name='entry.1')
entry = root.new_nxentry()
# Add axes
positioners = entry.positioners()
shape = (2,20,30)
positioners.add_axis('y',range(shape[1]),units='um',title='vertical')
positioners.add_axis('x',range(shape[2]),units='um',title='horizontal')
energy = 'energy',range(5,5+shape[0]),{'units':'keV','title':'energy'}
# Add plotable groups (NXdata)
for i in [1,2]:
nxdata = entry.nxdata('plot'+str(i))
# Add signals
for signal in ['A','B','C']:
signal = nxdata.add_signal(name=signal,dtype=float,shape=shape)
# Add axes (by name or by value)
nxdata.set_axes(energy,'y','x')
assert('energy' in positioners)
# Select default plot
plotselect = entry['plotselect'].link(entry['plot1'])
plotselect.mark_default()
assert(root.default==plotselect)
assert(plotselect.signal.name=='C')
plotselect.default_signal('B')
assert(plotselect.signal.name=='B')
root.ls(recursive=True,stats=True)
Explanation: Default plotting
Plottable data (regular, n-D data with associated axes) can be provided as an NXdata instance. When several NXdata groups are present, only one of them can be marked as the default.
End of explanation
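The underlying NeXus convention is attribute based: an NXdata group carries a 'signal' attribute naming the plottable dataset and an 'axes' attribute listing the axis datasets, while a 'default' attribute on the parent points at the group to plot first. A minimal h5py sketch of those attributes, assuming nothing about the spectrocrunch helpers used above:
import h5py
import numpy as np

with h5py.File('plot.nx', 'w') as f:
    entry = f.create_group('entry')
    entry.attrs['NX_class'] = 'NXentry'
    nxdata = entry.create_group('plot1')
    nxdata.attrs['NX_class'] = 'NXdata'
    nxdata.create_dataset('B', data=np.zeros((2, 20, 30)))
    nxdata.create_dataset('energy', data=np.arange(5, 7))
    nxdata.create_dataset('y', data=np.arange(20))
    nxdata.create_dataset('x', data=np.arange(30))
    nxdata.attrs['signal'] = 'B'
    nxdata.attrs['axes'] = ['energy', 'y', 'x']
    f.attrs['default'] = 'entry'      # which NXentry a viewer should open first
    entry.attrs['default'] = 'plot1'  # which NXdata group is the default plot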
with temppath(fsystem='nexus') as root:
entry = root.new_nxentry()
# Results of a process with certain parameters
parameters = {'a':1,'b':2}
process = entry.nxprocess(name='proc1',parameters=parameters)
assert(process.config.read()==parameters)
process.results['result1'].write(data=range(10))
process.results['result2'].write(data=range(20))
# Retrieve the first process again
process = entry.nxprocess(name='proc1',parameters=parameters)
assert(process.config.read()==parameters)
# Results of a second process based on the first
process2 = entry.nxprocess(name='proc2',dependencies=[process],parameters=parameters)
assert(next(iter(process2.dependencies)).linkdest()==process)
root.ls(recursive=True,stats=True)
Explanation: Data processing
Results of data processing can be saved as NXprocess instances.
End of explanation |
12,201 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Linear classifier on sensor data with plot patterns and filters
Here decoding, a.k.a MVPA or supervised machine learning, is applied to M/EEG
data in sensor space. Fit a linear classifier with the LinearModel object
providing topographical patterns which are more neurophysiologically
interpretable [1]_ than the classifier filters (weight vectors).
The patterns explain how the MEG and EEG data were generated from the
discriminant neural sources which are extracted by the filters.
Note that the patterns and filters are more similar to each other for MEG data than for EEG data,
because the noise is less spatially correlated in MEG than in EEG.
References
.. [1] Haufe, S., Meinecke, F., Görgen, K., Dähne, S., Haynes, J.-D.,
Blankertz, B., & Bießmann, F. (2014). On the interpretation of
weight vectors of linear models in multivariate neuroimaging.
NeuroImage, 87, 96–110. doi
Step1: Set parameters
Step2: Decoding in sensor space using a LogisticRegression classifier
Step3: Let's do the same on EEG data using a scikit-learn pipeline | Python Code:
# Authors: Alexandre Gramfort <[email protected]>
# Romain Trachel <[email protected]>
# Jean-Remi King <[email protected]>
#
# License: BSD (3-clause)
import mne
from mne import io, EvokedArray
from mne.datasets import sample
from mne.decoding import Vectorizer, get_coef
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
# import a linear classifier from mne.decoding
from mne.decoding import LinearModel
print(__doc__)
data_path = sample.data_path()
Explanation: Linear classifier on sensor data with plot patterns and filters
Here decoding, a.k.a MVPA or supervised machine learning, is applied to M/EEG
data in sensor space. Fit a linear classifier with the LinearModel object
providing topographical patterns which are more neurophysiologically
interpretable [1]_ than the classifier filters (weight vectors).
The patterns explain how the MEG and EEG data were generated from the
discriminant neural sources which are extracted by the filters.
Note that the patterns and filters are more similar to each other for MEG data than for EEG data,
because the noise is less spatially correlated in MEG than in EEG.
References
.. [1] Haufe, S., Meinecke, F., Görgen, K., Dähne, S., Haynes, J.-D.,
Blankertz, B., & Bießmann, F. (2014). On the interpretation of
weight vectors of linear models in multivariate neuroimaging.
NeuroImage, 87, 96–110. doi:10.1016/j.neuroimage.2013.10.067
End of explanation
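The relationship in [1] is short enough to sketch: for a linear model with filter (weight vector) w, the corresponding pattern is the data covariance applied to the filter, scaled by the variance of the extracted source, which is presumably what LinearModel computes for patterns_. A toy sketch, assuming X is an (n_samples, n_features) array and w a fitted coefficient vector:
import numpy as np

def haufe_pattern(X, w):
    # pattern a = Cov(X) @ w / Var(w^T X); see Haufe et al. (2014)
    Xc = X - X.mean(axis=0)
    cov_X = np.cov(Xc, rowvar=False)
    source = Xc @ w
    return cov_X @ w / source.var()

rng = np.random.RandomState(0)
X_toy = rng.randn(200, 2)
w_toy = np.array([1.0, -0.5])
print(haufe_pattern(X_toy, w_toy))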
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
tmin, tmax = -0.1, 0.4
event_id = dict(aud_l=1, vis_l=3)
# Setup for reading the raw data
raw = io.read_raw_fif(raw_fname, preload=True)
raw.filter(.5, 25, fir_design='firwin')
events = mne.read_events(event_fname)
# Read epochs
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True,
decim=2, baseline=None, preload=True)
labels = epochs.events[:, -1]
# get MEG and EEG data
meg_epochs = epochs.copy().pick_types(meg=True, eeg=False)
meg_data = meg_epochs.get_data().reshape(len(labels), -1)
Explanation: Set parameters
End of explanation
clf = LogisticRegression(solver='lbfgs')
scaler = StandardScaler()
# create a linear model with LogisticRegression
model = LinearModel(clf)
# fit the classifier on MEG data
X = scaler.fit_transform(meg_data)
model.fit(X, labels)
# Extract and plot spatial filters and spatial patterns
for name, coef in (('patterns', model.patterns_), ('filters', model.filters_)):
# We fitted the linear model onto Z-scored data. To make the filters
# interpretable, we must reverse this normalization step
coef = scaler.inverse_transform([coef])[0]
# The data was vectorized to fit a single model across all time points and
# all channels. We thus reshape it:
coef = coef.reshape(len(meg_epochs.ch_names), -1)
# Plot
evoked = EvokedArray(coef, meg_epochs.info, tmin=epochs.tmin)
evoked.plot_topomap(title='MEG %s' % name, time_unit='s')
Explanation: Decoding in sensor space using a LogisticRegression classifier
End of explanation
X = epochs.pick_types(meg=False, eeg=True)
y = epochs.events[:, 2]
# Define a unique pipeline to sequentially:
clf = make_pipeline(
Vectorizer(), # 1) vectorize across time and channels
StandardScaler(), # 2) normalize features across trials
LinearModel(
LogisticRegression(solver='lbfgs'))) # 3) fits a logistic regression
clf.fit(X, y)
# Extract and plot patterns and filters
for name in ('patterns_', 'filters_'):
# The `inverse_transform` parameter will call this method on any estimator
# contained in the pipeline, in reverse order.
coef = get_coef(clf, name, inverse_transform=True)
evoked = EvokedArray(coef, epochs.info, tmin=epochs.tmin)
evoked.plot_topomap(title='EEG %s' % name[:-1], time_unit='s')
Explanation: Let's do the same on EEG data using a scikit-learn pipeline
End of explanation |
12,202 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Ordinary Differential Equations Exercise 1
Imports
Step2: Euler's method
Euler's method is the simplest numerical approach for solving a first order ODE numerically. Given the differential equation
$$ \frac{dy}{dx} = f(y(x), x) $$
with the initial condition
Step4: The midpoint method is another numerical method for solving the above differential equation. In general it is more accurate than the Euler method. It uses the update equation
Step6: You are now going to solve the following differential equation
Step7: In the following cell you are going to solve the above ODE using four different algorithms | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
from scipy.integrate import odeint
from IPython.html.widgets import interact, fixed
Explanation: Ordinary Differential Equations Exercise 1
Imports
End of explanation
def solve_euler(derivs, y0, x):
    """Solve a 1d ODE using Euler's method.

    Parameters
    ----------
    derivs : function
        The derivative of the diff-eq with the signature derivs(y, x) where
        y and x are floats.
    y0 : float
        The initial condition y[0] = y(x[0]).
    x : np.ndarray, list, tuple
        The array of times at which to solve the diff-eq.

    Returns
    -------
    y : np.ndarray
        Array of solutions y[i] = y(x[i])
    """
    y = np.empty(len(x))
    y[0] = y0
    for i in range(len(x) - 1):
        h = x[i + 1] - x[i]
        y[i + 1] = y[i] + h * derivs(y[i], x[i])
    return y
solve_euler(lambda y, x: 1, 0, [0,1,2])
assert np.allclose(solve_euler(lambda y, x: 1, 0, [0,1,2]), [0,1,2])
Explanation: Euler's method
Euler's method is the simplest numerical approach for solving a first order ODE numerically. Given the differential equation
$$ \frac{dy}{dx} = f(y(x), x) $$
with the initial condition:
$$ y(x_0)=y_0 $$
Euler's method performs updates using the equations:
$$ y_{n+1} = y_n + h f(y_n,x_n) $$
$$ h = x_{n+1} - x_n $$
Write a function solve_euler that implements the Euler method for a 1d ODE and follows the specification described in the docstring:
End of explanation
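A quick sanity check of solve_euler, assuming the derivs(y, x) signature documented above: for dy/dx = y with y(0) = 1 the exact answer at x = 1 is e, and with a small step the Euler estimate should land within a few percent of it:
import numpy as np

x_test = np.linspace(0, 1, 101)
y_test = solve_euler(lambda y, x: y, 1.0, x_test)
print(y_test[-1], np.exp(1))  # roughly 2.705 versus 2.71828...
assert abs(y_test[-1] - np.exp(1)) < 0.05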
def solve_midpoint(derivs, y0, x):
    """Solve a 1d ODE using the Midpoint method.

    Parameters
    ----------
    derivs : function
        The derivative of the diff-eq with the signature derivs(y, x) where y
        and x are floats.
    y0 : float
        The initial condition y[0] = y(x[0]).
    x : np.ndarray, list, tuple
        The array of times at which to solve the diff-eq.

    Returns
    -------
    y : np.ndarray
        Array of solutions y[i] = y(x[i])
    """
    y = np.empty(len(x))
    y[0] = y0
    for i in range(len(x) - 1):
        h = x[i + 1] - x[i]
        # evaluate the slope at the midpoint of the interval
        y[i + 1] = y[i] + h * derivs(y[i] + 0.5 * h * derivs(y[i], x[i]), x[i] + 0.5 * h)
    return y
assert np.allclose(solve_midpoint(lambda y, x: 1, 0, [0,1,2]), [0,1,2])
Explanation: The midpoint method is another numerical method for solving the above differential equation. In general it is more accurate than the Euler method. It uses the update equation:
$$ y_{n+1} = y_n + h f\left(y_n+\frac{h}{2}f(y_n,x_n),x_n+\frac{h}{2}\right) $$
Write a function solve_midpoint that implements the midpoint method for a 1d ODE and follows the specification described in the docstring:
End of explanation
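The accuracy claim can be checked directly: halving the step size roughly halves the Euler error (first order) but cuts the midpoint error by about a factor of four (second order). A small sketch on dy/dx = y again:
import numpy as np

for n in (11, 21, 41):
    x_test = np.linspace(0, 1, n)
    err_euler = abs(solve_euler(lambda y, x: y, 1.0, x_test)[-1] - np.exp(1))
    err_mid = abs(solve_midpoint(lambda y, x: y, 1.0, x_test)[-1] - np.exp(1))
    print('h=%.3f  Euler error=%.2e  Midpoint error=%.2e'
          % (x_test[1] - x_test[0], err_euler, err_mid))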
def solve_exact(x):
    """Compute the exact solution to dy/dx = x + 2y with y(0) = 0.

    Parameters
    ----------
    x : np.ndarray
        Array of x values to compute the solution at.

    Returns
    -------
    y : np.ndarray
        Array of solutions at y[i] = y(x[i]).
    """
    x = np.asarray(x, dtype=float)
    return 0.25 * np.exp(2 * x) - 0.5 * x - 0.25
assert np.allclose(solve_exact(np.array([0,1,2])),np.array([0., 1.09726402, 12.39953751]))
Explanation: You are now going to solve the following differential equation:
$$
\frac{dy}{dx} = x + 2y
$$
which has the analytical solution:
$$
y(x) = 0.25 e^{2x} - 0.5 x - 0.25
$$
First, write a solve_exact function that compute the exact solution and follows the specification described in the docstring:
End of explanation
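For reference, the closed-form solution follows from the integrating-factor method together with the initial condition y(0) = 0 used below:
$$ \frac{dy}{dx} - 2y = x
\quad\Rightarrow\quad
\frac{d}{dx}\left(e^{-2x}\,y\right) = x\,e^{-2x} $$
$$ e^{-2x}\,y = \int x\,e^{-2x}\,dx = -\frac{x}{2}e^{-2x} - \frac{1}{4}e^{-2x} + C $$
$$ y(x) = C\,e^{2x} - \frac{x}{2} - \frac{1}{4}, \qquad y(0)=0 \;\Rightarrow\; C = \frac{1}{4} $$
which recovers $y(x) = 0.25\,e^{2x} - 0.5\,x - 0.25$.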
odeint?
x = np.linspace(0, 1, 11)

def derivs(y, x):
    # dy/dx = x + 2y, with the (y, x) signature expected by the solvers above and by odeint
    return x + 2 * y

E = solve_euler(derivs, 0, x)
M = solve_midpoint(derivs, 0, x)
O = odeint(derivs, 0, x)[:, 0]  # odeint returns shape (N, 1); keep the single column
X = solve_exact(x)
print('Euler=',E)
print('Midpoint=',M)
print('Odeint=',O)
print('Exact=',X)
plt.figure(figsize=(10,10))
plt.subplot(2,1,1)
plt.plot(x,E,label='Euler')
plt.plot(x,M,label='Midpoint')
plt.plot(x,O,label='Odeint')
plt.plot(x,X,label='Exact')
plt.xlabel('x')
plt.ylabel('y')
plt.title('ODE Y vs X')
plt.legend()
plt.subplot(2,1,2)
plt.plot(x, np.abs(E - X), label='Euler')
plt.plot(x, np.abs(M - X), label='Midpoint')
plt.plot(x, np.abs(O - X), label='Odeint')
plt.xlabel('x')
plt.ylabel('|y - y_exact|')
plt.legend()
assert True # leave this for grading the plots
Explanation: In the following cell you are going to solve the above ODE using four different algorithms:
Euler's method
Midpoint method
odeint
Exact
Here are the details:
Generate an array of x values with $N=11$ points over the interval $[0,1]$ ($h=0.1$).
Define the derivs function for the above differential equation.
Using the solve_euler, solve_midpoint, odeint and solve_exact functions to compute
the solutions using the 4 approaches.
Visualize the solutions on a sigle figure with two subplots:
Plot the $y(x)$ versus $x$ for each of the 4 approaches.
Plot $\left|y(x)-y_{exact}(x)\right|$ versus $x$ for each of the 3 numerical approaches.
Your visualization should have legends, labeled axes, titles and be customized for beauty and effectiveness.
While your final plot will use $N=10$ points, first try making $N$ larger and smaller to see how that affects the errors of the different approaches.
End of explanation |
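To see the effect of N mentioned above without re-plotting, a small sketch tabulating the maximum absolute error of each method as N grows, reusing the functions defined earlier:
import numpy as np

for N in (11, 51, 201):
    xs = np.linspace(0, 1, N)
    exact = solve_exact(xs)
    for name, sol in (('Euler', solve_euler(derivs, 0, xs)),
                      ('Midpoint', solve_midpoint(derivs, 0, xs)),
                      ('odeint', odeint(derivs, 0, xs)[:, 0])):
        print('N=%3d  %-8s  max error=%.2e' % (N, name, np.max(np.abs(sol - exact))))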
12,203 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Land
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Description
Is Required
Step7: 1.4. Land Atmosphere Flux Exchanges
Is Required
Step8: 1.5. Atmospheric Coupling Treatment
Is Required
Step9: 1.6. Land Cover
Is Required
Step10: 1.7. Land Cover Change
Is Required
Step11: 1.8. Tiling
Is Required
Step12: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required
Step13: 2.2. Water
Is Required
Step14: 2.3. Carbon
Is Required
Step15: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required
Step16: 3.2. Time Step
Is Required
Step17: 3.3. Timestepping Method
Is Required
Step18: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required
Step19: 4.2. Code Version
Is Required
Step20: 4.3. Code Languages
Is Required
Step21: 5. Grid
Land surface grid
5.1. Overview
Is Required
Step22: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required
Step23: 6.2. Matches Atmosphere Grid
Is Required
Step24: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required
Step25: 7.2. Total Depth
Is Required
Step26: 8. Soil
Land surface soil
8.1. Overview
Is Required
Step27: 8.2. Heat Water Coupling
Is Required
Step28: 8.3. Number Of Soil layers
Is Required
Step29: 8.4. Prognostic Variables
Is Required
Step30: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required
Step31: 9.2. Structure
Is Required
Step32: 9.3. Texture
Is Required
Step33: 9.4. Organic Matter
Is Required
Step34: 9.5. Albedo
Is Required
Step35: 9.6. Water Table
Is Required
Step36: 9.7. Continuously Varying Soil Depth
Is Required
Step37: 9.8. Soil Depth
Is Required
Step38: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required
Step39: 10.2. Functions
Is Required
Step40: 10.3. Direct Diffuse
Is Required
Step41: 10.4. Number Of Wavelength Bands
Is Required
Step42: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required
Step43: 11.2. Time Step
Is Required
Step44: 11.3. Tiling
Is Required
Step45: 11.4. Vertical Discretisation
Is Required
Step46: 11.5. Number Of Ground Water Layers
Is Required
Step47: 11.6. Lateral Connectivity
Is Required
Step48: 11.7. Method
Is Required
Step49: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required
Step50: 12.2. Ice Storage Method
Is Required
Step51: 12.3. Permafrost
Is Required
Step52: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required
Step53: 13.2. Types
Is Required
Step54: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required
Step55: 14.2. Time Step
Is Required
Step56: 14.3. Tiling
Is Required
Step57: 14.4. Vertical Discretisation
Is Required
Step58: 14.5. Heat Storage
Is Required
Step59: 14.6. Processes
Is Required
Step60: 15. Snow
Land surface snow
15.1. Overview
Is Required
Step61: 15.2. Tiling
Is Required
Step62: 15.3. Number Of Snow Layers
Is Required
Step63: 15.4. Density
Is Required
Step64: 15.5. Water Equivalent
Is Required
Step65: 15.6. Heat Content
Is Required
Step66: 15.7. Temperature
Is Required
Step67: 15.8. Liquid Water Content
Is Required
Step68: 15.9. Snow Cover Fractions
Is Required
Step69: 15.10. Processes
Is Required
Step70: 15.11. Prognostic Variables
Is Required
Step71: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required
Step72: 16.2. Functions
Is Required
Step73: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required
Step74: 17.2. Time Step
Is Required
Step75: 17.3. Dynamic Vegetation
Is Required
Step76: 17.4. Tiling
Is Required
Step77: 17.5. Vegetation Representation
Is Required
Step78: 17.6. Vegetation Types
Is Required
Step79: 17.7. Biome Types
Is Required
Step80: 17.8. Vegetation Time Variation
Is Required
Step81: 17.9. Vegetation Map
Is Required
Step82: 17.10. Interception
Is Required
Step83: 17.11. Phenology
Is Required
Step84: 17.12. Phenology Description
Is Required
Step85: 17.13. Leaf Area Index
Is Required
Step86: 17.14. Leaf Area Index Description
Is Required
Step87: 17.15. Biomass
Is Required
Step88: 17.16. Biomass Description
Is Required
Step89: 17.17. Biogeography
Is Required
Step90: 17.18. Biogeography Description
Is Required
Step91: 17.19. Stomatal Resistance
Is Required
Step92: 17.20. Stomatal Resistance Description
Is Required
Step93: 17.21. Prognostic Variables
Is Required
Step94: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required
Step95: 18.2. Tiling
Is Required
Step96: 18.3. Number Of Surface Temperatures
Is Required
Step97: 18.4. Evaporation
Is Required
Step98: 18.5. Processes
Is Required
Step99: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required
Step100: 19.2. Tiling
Is Required
Step101: 19.3. Time Step
Is Required
Step102: 19.4. Anthropogenic Carbon
Is Required
Step103: 19.5. Prognostic Variables
Is Required
Step104: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required
Step105: 20.2. Carbon Pools
Is Required
Step106: 20.3. Forest Stand Dynamics
Is Required
Step107: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required
Step108: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required
Step109: 22.2. Growth Respiration
Is Required
Step110: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required
Step111: 23.2. Allocation Bins
Is Required
Step112: 23.3. Allocation Fractions
Is Required
Step113: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required
Step114: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required
Step115: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required
Step116: 26.2. Carbon Pools
Is Required
Step117: 26.3. Decomposition
Is Required
Step118: 26.4. Method
Is Required
Step119: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required
Step120: 27.2. Carbon Pools
Is Required
Step121: 27.3. Decomposition
Is Required
Step122: 27.4. Method
Is Required
Step123: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required
Step124: 28.2. Emitted Greenhouse Gases
Is Required
Step125: 28.3. Decomposition
Is Required
Step126: 28.4. Impact On Soil Properties
Is Required
Step127: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required
Step128: 29.2. Tiling
Is Required
Step129: 29.3. Time Step
Is Required
Step130: 29.4. Prognostic Variables
Is Required
Step131: 30. River Routing
Land surface river routing
30.1. Overview
Is Required
Step132: 30.2. Tiling
Is Required
Step133: 30.3. Time Step
Is Required
Step134: 30.4. Grid Inherited From Land Surface
Is Required
Step135: 30.5. Grid Description
Is Required
Step136: 30.6. Number Of Reservoirs
Is Required
Step137: 30.7. Water Re Evaporation
Is Required
Step138: 30.8. Coupled To Atmosphere
Is Required
Step139: 30.9. Coupled To Land
Is Required
Step140: 30.10. Quantities Exchanged With Atmosphere
Is Required
Step141: 30.11. Basin Flow Direction Map
Is Required
Step142: 30.12. Flooding
Is Required
Step143: 30.13. Prognostic Variables
Is Required
Step144: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required
Step145: 31.2. Quantities Transported
Is Required
Step146: 32. Lakes
Land surface lakes
32.1. Overview
Is Required
Step147: 32.2. Coupling With Rivers
Is Required
Step148: 32.3. Time Step
Is Required
Step149: 32.4. Quantities Exchanged With Rivers
Is Required
Step150: 32.5. Vertical Grid
Is Required
Step151: 32.6. Prognostic Variables
Is Required
Step152: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required
Step153: 33.2. Albedo
Is Required
Step154: 33.3. Dynamics
Is Required
Step155: 33.4. Dynamic Lake Extent
Is Required
Step156: 33.5. Endorheic Basins
Is Required
Step157: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'messy-consortium', 'emac-2-53-vol', 'land')
Explanation: ES-DOC CMIP6 Model Properties - Land
MIP Era: CMIP6
Institute: MESSY-CONSORTIUM
Source ID: EMAC-2-53-VOL
Topic: Land
Sub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes.
Properties: 154 (96 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:10
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code (e.g. MOSES2.2)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.3. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the processes modelled (e.g. dynamic vegetation, prognostic albedo, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Land Atmosphere Flux Exchanges
Is Required: FALSE Type: ENUM Cardinality: 0.N
Fluxes exchanged with the atmopshere.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Atmospheric Coupling Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.6. Land Cover
Is Required: TRUE Type: ENUM Cardinality: 1.N
Types of land cover defined in the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.7. Land Cover Change
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how land cover change is managed (e.g. the use of net or gross transitions)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.8. Tiling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how energy is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Water
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how water is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Carbon
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a time step dependent on the frequency of atmosphere coupling?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Overall timestep of land surface model (i.e. time between calls)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Timestepping Method
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of time stepping method and associated time step(s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Grid
Land surface grid
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the horizontal grid (not including any tiling)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the horizontal grid match the atmosphere?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the vertical grid in the soil (not including any tiling)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.2. Total Depth
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The total depth of the soil (in metres)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Soil
Land surface soil
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of soil in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Heat Water Coupling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the coupling between heat and water in the soil
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 8.3. Number Of Soil layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the soil scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of soil map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil structure map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Texture
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil texture map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Organic Matter
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil organic matter map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Albedo
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil albedo map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.6. Water Table
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil water table map, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 9.7. Continuously Varying Soil Depth
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the soil properties vary continuously with depth?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.8. Soil Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil depth map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow free albedo prognostic?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, describe the dependencies of the snow free albedo calculations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.3. Direct Diffuse
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe the distinction between direct and diffuse albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 10.4. Number Of Wavelength Bands
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If prognostic, enter the number of wavelength bands used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the soil hydrological model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river soil hydrology in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil hydrology tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.5. Number Of Ground Water Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers that may contain water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.6. Lateral Connectivity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe the lateral connectivity between tiles
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.7. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
The hydrological dynamics scheme in the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
How many soil layers may contain ground ice
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Ice Storage Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of ice storage
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Permafrost
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of permafrost, if any, within the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General describe how drainage is included in the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
Different types of runoff represented by the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how heat treatment properties are defined
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil heat scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil heat treatment tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.5. Heat Storage
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the method of heat storage
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.6. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe processes included in the treatment of soil heat
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Snow
Land surface snow
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of snow in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Number Of Snow Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of snow levels used in the land surface scheme/model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.4. Density
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow density
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.5. Water Equivalent
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the snow water equivalent
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.6. Heat Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the heat content of snow
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.7. Temperature
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow temperature
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.8. Liquid Water Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow liquid water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.9. Snow Cover Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify cover fractions used in the surface snow scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.10. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Snow related processes in the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.11. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the snow scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of snow-covered land albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
*If prognostic, specify what the snow albedo is a function of*
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vegetation in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 17.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of vegetation scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 17.3. Dynamic Vegetation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there dynamic evolution of vegetation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.4. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vegetation tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.5. Vegetation Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Vegetation classification used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.6. Vegetation Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of vegetation types in the classification, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.7. Biome Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of biome types in the classification, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.8. Vegetation Time Variation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How the vegetation fractions in each tile are varying with time
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.9. Vegetation Map
Is Required: FALSE Type: STRING Cardinality: 0.1
If vegetation fractions are not dynamically updated, describe the vegetation map used (common name and reference, if possible)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 17.10. Interception
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is vegetation interception of rainwater represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.11. Phenology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation phenology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.12. Phenology Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation phenology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.13. Leaf Area Index
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation leaf area index
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.14. Leaf Area Index Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of leaf area index
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.15. Biomass
Is Required: TRUE Type: ENUM Cardinality: 1.1
*Treatment of vegetation biomass *
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.16. Biomass Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biomass
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.17. Biogeography
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation biogeography
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.18. Biogeography Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biogeography
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.19. Stomatal Resistance
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify what the vegetation stomatal resistance depends on
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.20. Stomatal Resistance Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation stomatal resistance
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.21. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the vegetation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of energy balance in land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the energy balance tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 18.3. Number Of Surface Temperatures
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.4. Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify the formulation method for land surface evaporation, from soil and vegetation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe which processes are included in the energy balance scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of carbon cycle in land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the carbon cycle tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of carbon cycle in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19.4. Anthropogenic Carbon
Is Required: FALSE Type: ENUM Cardinality: 0.N
Describe the treatment of the anthropogenic carbon pool
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.5. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the carbon scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.3. Forest Stand Dynamics
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of forest stand dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, nitrogen dependence, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for maintenance respiration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Growth Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for growth respiration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the allocation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. Allocation Bins
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify distinct carbon bins used in allocation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.3. Allocation Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how the fractions of allocation are calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the phenology scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the mortality scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is permafrost included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.2. Emitted Greenhouse Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
List the GHGs emitted
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.4. Impact On Soil Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the impact of permafrost on soil properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the nitrogen cycle in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the nitrogen cycle tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 29.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of nitrogen cycle in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the nitrogen scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30. River Routing
Land surface river routing
30.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of river routing in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the river routing tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river routing scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.4. Grid Inherited From Land Surface
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the grid inherited from land surface?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.5. Grid Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of grid, if not inherited from land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.6. Number Of Reservoirs
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of reservoirs
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.7. Water Re Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
TODO
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.8. Coupled To Atmosphere
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Is river routing coupled to the atmosphere model component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.9. Coupled To Land
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the coupling between land and rivers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.10. Quantities Exchanged With Atmosphere
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled to the atmosphere, which quantities are exchanged between river routing and the atmosphere model components?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.11. Basin Flow Direction Map
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of basin flow direction map is being used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.12. Flooding
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the representation of flooding, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.13. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the river routing
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify how rivers are discharged to the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.2. Quantities Transported
Is Required: TRUE Type: ENUM Cardinality: 1.N
Quantities that are exchanged from river-routing to the ocean model component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32. Lakes
Land surface lakes
32.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lakes in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.2. Coupling With Rivers
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are lakes coupled to the river routing model component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 32.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of lake scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.4. Quantities Exchanged With Rivers
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled with rivers, which quantities are exchanged between the lakes and rivers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.5. Vertical Grid
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vertical grid of lakes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the lake scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is lake ice included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33.2. Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of lake albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33.3. Dynamics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which dynamics of lakes are treated? horizontal, vertical, etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33.4. Dynamic Lake Extent
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a dynamic lake extent scheme included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33.5. Endorheic Basins
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Basins not flowing to ocean included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of wetlands, if any
End of explanation |
12,204 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Insights from Baby Names Data
Author Information
Step1: II- Predictive Analysis
II-1 Most popular name of all time
Step2: Now, let's find the most popular male and female names of all times
Step3: And the winner for most popular male and female baby names since 1910 are
Step4: The winner in the male category is James. 493865 baby boys were named 'James' from 1910 to 2014.
On the female side 'Mary' is the winner. 3730856 baby girls were named 'Mary' from 1910 to 2014.
II-2 Most Gender Ambigious Name in 2013 and 1945
We quantify the popularity of a gender ambigious name with 'name' in year x by
Step5: It is intesting to notice number gender ambigious names more than doubles since 1945. I believe this is a general trend which could more predominantly observerd in liberal and urban cities in the US.
II-3,4 Names with largest decrease and increase in number since 1980
Step6: II-5 Other Names with largest decrease and increase in number since 1980
Let's see for what other names large differentials are observed betseen 1980 and 2014.
Step7: III- Insights
III-1 Trend in the number of gender ambigious names
As mentioned in Section II-2 we expect the number of gender ambigious names to increase over the years. That trend is most probably related to changes in perspective of the society in the gender-equality issues. But let's not pretend to be a sociologist here
Step8: A quick google seaerch revealst that in 2003 and 2004 landmark years in the process of leagalization of same-sex marriage
Step9: Now, the other peak has happened in 1989. It turns out Berlin wall came down in 1989. But also Denmark became the first country to legalize same sex marriage.
III-2 Clustering of the US States using baby names
Now we try to see if the states cluster in terms of how their people name their babies. We'll first extract all the baby names (male and female) used in 2014 and generate feature vectors for each state using the counts for each name.
Step10: Next, we'll perform dimentionality reduction using principle component analysis and we'll retain only two of the componets. Scikit-learn's RandomizedPCA implementation is choosen for its efficiencty.
We note that it is important to normalize the data since baby name counts are correlated with the population of states. Our goal is to cluster the states by the distribution of different names.
Step11: It is interesting to observe CA and TX being obvious outliers. We have squeezed many dimansions into only two therefore it not easy to comment on the meaning of principle componenets. However it is tempting to conclude that the first principal component is directly proportional to the Hispanic population since both CA and TX has huge values in that direction. And with taking the rist of getting ahead of ourselves we can say that the other direction could well be related to the Asian population percentage. And it is not surprising to see CA having the largest coefficient in that direction
Step12: Finally we employ a K-means clustering algorithm to the data with reduced to 2 dimensions.
Step13: We'll conclude by listing the states under each cluster. For that aim we downloaded a csv file from http
Step14: Finally, let's list the states under each cluster
Step15: We'll avoid trying to give too much isight looking at these clusters as we mentioned before a lot of dimentions are pressed into two and it is questionable if these clusters are meaningful in an obvious sense.
Some ideas for further investigation | Python Code:
import os
from mpl_toolkits.basemap import Basemap
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
data_folder = os.path.join('data')
file_names = []
for f in os.listdir(data_folder):
file_names.append(os.path.join(data_folder,f))
del file_names[file_names.index(os.path.join(data_folder,'StateReadMe.pdf'))]
Explanation: Insights from Baby Names Data
Author Information:
Oguz Semerci<br>
[email protected]<br>
Summary of the investigation
We report on descriptive statistics as well as a few insights mined from a data set of state-by-state baby name counts from 1910 to 2014. We present the following descriptive statistics:
The most popular male and female names of all time
favorite gender-neutral names in 1945 and 2013
Names with the biggest decrease and increase in popularity since 1980
We extract the following insights from the dataset:
Increase in the popularity of gender-ambiguous names
Correlation of the increased tendency to use gender-neutral names with landmark events leading to the legalization of same-sex marriage
Dimensionality reduction (randomized PCA) of the data, comments on the first two principal components, and K-means clustering of the states.
I- Data Preparation
Here we quote the official description of the data set:
For each of the 50 states and the District of Columbia we created a file called SC.txt, where SC is the state's postal code.
Each record in a file has the format: 2-digit state code, sex (M = male or F = female), 4-digit year of birth (starting with 1910), the 2-15 character name, and the number of occurrences of the name. Fields are delimited with a comma. Each file is sorted first on sex, then year of birth, and then on number of occurrences in descending order. When there is a tie on the number of occurrences names are listed in alphabetical order. This sorting makes it easy to determine a name's rank. The first record for each sex & year of birth has rank 1, the second record has rank 2, and so forth.
To safeguard privacy, we restrict our list of names to those with at least 5 occurrences. If a name has less than 5 occurrences for a year of birth in any state, the sum of the state counts for that year will be less than the national count.
One can say the data sets look clean except for some ambiguities in baby names. For example in RI data we have the following for the year 1992:
RI,F,1992,Kaitlyn,37
RI,F,1992,Katelyn,36
One might argue that both versions of the name Katelyn are phonetically the same and should be counted together. If they were counted together, that would change the rank of the name Katelyn by about 10 places. Normalizing the data for such instances is out of the scope of this analysis (a rough sketch of how it could be done is given below); however, we'll keep it in mind when analyzing the results.
Below, we sequentially process each file and extract relevant data without loading all data to memory at once. Let's first get a list of all the file names:
End of explanation
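Although name normalization is left out of this analysis, the following is a rough sketch of how spelling variants could be merged before counting; the variant map is hypothetical and only illustrates the idea.
# Sketch: merge phonetically equivalent spellings onto a canonical form (hypothetical map)
variant_map = {'Katelyn': 'Kaitlyn', 'Katelynn': 'Kaitlyn'}  # illustrative entries only

def normalize_name(name):
    # return the canonical spelling if the name is a known variant, otherwise the name itself
    return variant_map.get(name, name)

print(normalize_name('Katelyn'))  # -> Kaitlyn
print(normalize_name('Mary'))     # -> Mary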
# we collect yearly count data for each name in the data set using the following dictionary format:
# dict = {'name': {'count': []}} where count[0] is the count for 1910 and count[-1] is the count for 2014
N_years = 2014-1910+1
names_dict_M = {}
names_dict_F = {}
for fname in file_names:
with open(fname,'r') as f:
for line in f:
state, gender, year, name, count = line.split(',')
year = int(year)
count = int(count)
if gender == 'M':
if name in names_dict_M:
# name already in the dict, update the count for appropriate year
names_dict_M[name]['count'][year-1910] += count
else:
# create an entry for the name
names_dict_M[name] = {'count': [0]*N_years}
names_dict_M[name]['count'][year-1910] += count
elif gender == 'F':
if name in names_dict_F:
# name already in the dict, update the count for appropriate year
names_dict_F[name]['count'][year-1910] += count
else:
# create an entry for the name
names_dict_F[name] = {'count': [0]*N_years}
names_dict_F[name]['count'][year-1910] += count
Explanation: II- Predictive Analysis
II-1 Most popular name of all time
End of explanation
#lets extract tuples as (name, total_count) and sort them
male_overall = [(n, sum(names_dict_M[n]['count'])) for n in names_dict_M.keys()]
male_overall.sort(key = lambda x: x[1], reverse = True)
female_overall = [(n, sum(names_dict_F[n]['count'])) for n in names_dict_F.keys()]
female_overall.sort(key = lambda x: x[1], reverse = True)
Explanation: Now, let's find the most popular male and female names of all times:
End of explanation
print('Male:')
print('{}: {}'.format(male_overall[0][0], male_overall[0][1]))
print('\nFemale:')
print('{}: {}'.format(female_overall[0][0], female_overall[0][1]))
width = 0.6
fig = plt.figure(figsize = (12,3))
ax = plt.subplot(121)
ax.bar(np.arange(10), [c for n,c in male_overall[:10]], width = width)
ax.set_xticks(np.arange(10) + width/2)
ax.set_xticklabels([n for n,c in male_overall[:10]], rotation = 90)
ax.set_title('10 Most Popular Male Names since 1910')
ax.set_ylabel('name count')
ax = plt.subplot(122)
ax.bar(np.arange(10), [c for n,c in female_overall[:10]], width = width)
ax.set_xticks(np.arange(10) + width/2)
ax.set_xticklabels([n for n,c in female_overall[:10]], rotation = 90)
ax.set_title('10 Most Popular Female Names since 1910')
ax.set_ylabel('name count')
plt.tight_layout()
plt.show()
Explanation: And the winner for most popular male and female baby names since 1910 are:
End of explanation
#lets extract tuples as (name, count[2013]) and sort them with count
male_2013 = [(n, names_dict_M[n]['count'][2013-1910])
for n in names_dict_M.keys()
if names_dict_M[n]['count'][2013-1910] > 0]
female_2013 = [(n, names_dict_F[n]['count'][2013-1910])
for n in names_dict_F.keys()
if names_dict_F[n]['count'][2013-1910] > 0]
male_1945 = [(n, names_dict_M[n]['count'][1945-1910])
for n in names_dict_M.keys()
if names_dict_M[n]['count'][1945-1910] > 0]
female_1945 = [(n, names_dict_F[n]['count'][1945-1910])
for n in names_dict_F.keys()
if names_dict_F[n]['count'][1945-1910] > 0]
#first find gender ambigious names in 2013:
gender_ambigious_names = set([n for n, _ in male_2013]) & set([n for n, _ in female_2013])
gender_ambigious_names = [(
n,min(names_dict_M[n]['count'][2013-1910],
names_dict_F[n]['count'][2013-1910])
)
for n in gender_ambigious_names]
#sort the tuples such that most popular names are at top
gender_ambigious_names.sort(key = lambda x: x[1], reverse = True)
print('In 2013 there were {} gender ambiguous names and the most popular one was {}'
.format(len(gender_ambigious_names), gender_ambigious_names[0][0]))
width = 0.6
fig = plt.figure(figsize = (12,3))
ax = plt.subplot(121)
ax.bar(np.arange(10), [c for n,c in gender_ambigious_names[:10]], width = width)
ax.set_xticks(np.arange(10) + width/2)
ax.set_xticklabels([n for n,c in gender_ambigious_names[:10]], rotation = 90)
ax.set_title('10 Most Popular Gender Ambigious Names in 2013')
ax.set_ylabel('name count')
gender_ambigious_names = set([n for n, _ in male_1945]) & set([n for n, _ in female_1945])
gender_ambigious_names = [(
n,min(names_dict_M[n]['count'][1945-1910],
names_dict_F[n]['count'][1945-1910])
)
for n in gender_ambigious_names]
#sort the tuples such that most popular names are at top
gender_ambigious_names.sort(key = lambda x: x[1], reverse = True)
print('In 1945 there were {} gender ambiguous names and the most popular one was {}'
.format(len(gender_ambigious_names), gender_ambigious_names[0][0]))
ax2 = plt.subplot(122)
ax2.bar(np.arange(10), [c for n,c in gender_ambigious_names[:10]], width = width)
ax2.set_xticks(np.arange(10) + width/2)
ax2.set_xticklabels([n for n,c in gender_ambigious_names[:10]], rotation = 90)
ax2.set_title('10 Most Popular Gender Ambigious Names in 1945')
ax2.set_ylabel('name count')
plt.tight_layout()
plt.show()
Explanation: The winner in the male category is James. 493865 baby boys were named 'James' from 1910 to 2014.
On the female side 'Mary' is the winner. 3730856 baby girls were named 'Mary' from 1910 to 2014.
II-2 Most Gender Ambiguous Name in 2013 and 1945
We quantify the popularity of a gender ambiguous name 'name' in year x as the minimum of {number of male babies born in year x with name 'name', number of female babies born in year x with name 'name'}.
End of explanation
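Stated as code, the metric is simply the smaller of the male and female counts for a given year; the helper below restates the computation done inline in the cell above and assumes the names_dict_M / names_dict_F dictionaries and N_years built earlier.
def ambiguity(name, year):
    # popularity of a gender-ambiguous name in a given year:
    # the smaller of its male and female counts (0 if the name is unused for either sex)
    m = names_dict_M.get(name, {'count': [0]*N_years})['count'][year - 1910]
    f = names_dict_F.get(name, {'count': [0]*N_years})['count'][year - 1910]
    return min(m, f)

print(ambiguity('Riley', 2013))  # example query; 'Riley' is just an illustrative name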
male_diff = [ (n, names_dict_M[n]['count'][-1] - names_dict_M[n]['count'][1980-1910]) for n in names_dict_M.keys() ]
female_diff = [ (n, names_dict_F[n]['count'][-1] - names_dict_F[n]['count'][1980-1910]) for n in names_dict_F.keys() ]
male_diff.sort(key = lambda x: x[1], reverse = True)
female_diff.sort(key = lambda x: x[1], reverse = True)
print('Male name with most increase in popularity is {}'.format(male_diff[0][0]))
print('Count for {} increased from {} to {} from 1980 to 2014'.format(male_diff[0][0],
names_dict_M[male_diff[0][0]]['count'][1980-1910],
names_dict_M[male_diff[0][0]]['count'][-1]))
print('\nFemale name with most increase in popularity is {}'.format(female_diff[0][0]))
print('Count for {} increased from {} to {} from 1980 to 2014'.format(female_diff[0][0],
names_dict_F[female_diff[0][0]]['count'][1980-1910],
names_dict_F[female_diff[0][0]]['count'][-1]))
print('\nMale name with most deccrease in popularity is {}'.format(male_diff[-1][0]))
print('Count for {} decreased from {} to {} from 1980 to 2014'.format(male_diff[-1][0],
names_dict_M[male_diff[-1][0]]['count'][1980-1910],
names_dict_M[male_diff[-1][0]]['count'][-1]))
print('\nFemale name with most deccrease in popularity is {}'.format(female_diff[-1][0]))
print('Count for {} decreased from {} to {} from 1980 to 2014'.format(female_diff[-1][0],
names_dict_F[female_diff[-1][0]]['count'][1980-1910],
names_dict_F[female_diff[-1][0]]['count'][-1]))
Explanation: It is interesting to notice that the number of gender-ambiguous names more than doubled since 1945. I believe this is a general trend which can be observed more predominantly in liberal and urban cities in the US.
II-3,4 Names with largest decrease and increase in number since 1980
End of explanation
print('Male names with largest increase in popularity along with increase rate:')
for n, c in male_diff[:5]:
print('{}: {}'.format(n,c))
print('\nFemale names with largest increase in popularity along with increase rate:')
for n, c in female_diff[:5]:
print('{}: {}'.format(n,c))
print('\nMale names with largest decrease in popularity along with decrease rate:')
for n, c in male_diff[-1:-5:-1]:
print('{}: {}'.format(n,c))
print('\nFemale names with largest decrease in popularity along with decrease rate:')
for n, c in female_diff[-1:-5:-1]:
print('{}: {}'.format(n,c))
Explanation: II-5 Other Names with largest decrease and increase in number since 1980
Let's see for which other names large differentials are observed between 1980 and 2014.
End of explanation
count = [0]*(2014-1910+1)
for year in range(0,2014-1910+1):
male_names = [n for n in names_dict_M.keys() if names_dict_M[n]['count'][year] > 0]
female_names = [n for n in names_dict_F.keys() if names_dict_F[n]['count'][year] > 0]
count[year] = len(set(male_names) & set(female_names))
fit = np.polyfit(range(0,2014-1910+1),count,1)
fit_fn = np.poly1d(fit)
fig = plt.figure(figsize = (15,3))
plt.plot(range(0,2014-1910+1), count, label = 'data')
plt.plot(range(0,2014-1910+1), fit_fn(range(0,2014-1910+1)), '--k', label = 'linear fit')
plt.legend(loc = 'lower right')
plt.title('Trend in the number of gender ambigious names from 1910 to 2014')
plt.xticks([0,1960-1910,2014-1910], ['1910', '1960', '2014'])
plt.xlabel('years')
plt.xlim([0,2014-1910])
plt.grid()
plt.show()
print('There is a peak in year {}.'.format(1910 + count.index(max(count))))
#what are the most popular gender ambigious names in 2004:
male_2004 = [(n, names_dict_M[n]['count'][2004-1910])
for n in names_dict_M.keys()
if names_dict_M[n]['count'][2004-1910] > 0]
female_2004 = [(n, names_dict_F[n]['count'][2004-1910])
for n in names_dict_F.keys()
if names_dict_F[n]['count'][2004-1910] > 0]
gender_ambigious_names = set([n for n, _ in male_2004]) & set([n for n, _ in female_2004])
gender_ambigious_names = [(
n,min(names_dict_M[n]['count'][2004-1910],
names_dict_F[n]['count'][2004-1910])
)
for n in gender_ambigious_names]
#sort the tuples such that most popular names are at top
gender_ambigious_names.sort(key = lambda x: x[1], reverse = True)
print('In 2004 there were {} gender ambiguous names and here are the most popular ones:'
.format(len(gender_ambigious_names)))
for n,c in gender_ambigious_names[:3]:
print('{}: {}'.format(n,c))
Explanation: III- Insights
III-1 Trend in the number of gender-ambiguous names
As mentioned in Section II-2, we expect the number of gender-ambiguous names to increase over the years. That trend is most probably related to changes in society's perspective on gender-equality issues. But let's not pretend to be sociologists here :). Below, we plot the trend as well as a linear fit to it.
End of explanation
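To attach a number to this trend, one can read the slope off the linear fit computed in the cell above (fit[0] is the leading coefficient of the degree-1 polynomial returned by np.polyfit):
# Average yearly change in the number of gender-ambiguous names, from the fitted trend line
slope = fit[0]
print('The fitted trend adds roughly {:.1f} gender-ambiguous names per year'.format(slope))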
# zero out the 2004 peak and find the year of the next-largest peak
count[2004-1910] = 0
1910 + count.index(max(count))
Explanation: A quick Google search reveals that 2003 and 2004 were landmark years in the process of legalization of same-sex marriage:
Goodridge v. Dept. of Public Health, 798 N.E.2d 941 (Mass. 2003), is a landmark state appellate court case dealing with same-sex marriage in Massachusetts. The November 18, 2003, decision was the first by a U.S. state's highest court to find that same-sex couples had the right to marry. Despite numerous attempts to delay the ruling, and to reverse it, the first marriage licenses were issued to same-sex couples on May 17, 2004, and the ruling has been in full effect since that date. (https://en.wikipedia.org/wiki/Goodridge_v._Department_of_Public_Health)
Maybe there is some correlation here! Perhaps people were preferring gender-neutral names around such events. It would be interesting to look into the other peak, which happened before 2004.
End of explanation
#find all the male nd female names for 2014
male_names = [n for n in names_dict_M.keys() if names_dict_M[n]['count'][-1] > 0]
female_names = [n for n in names_dict_F.keys() if names_dict_F[n]['count'][-1] > 0]
#create a map names to indexes
#we'll make sure to have two feature's associated with gender-neutral names
name2index_male = {}
for i,n in enumerate(male_names):
name2index_male[n] = i
male_name_count = len(male_names)
name2index_female = {}
for i,n in enumerate(female_names):
name2index_female[n] = i + male_name_count
states = []
#data with counts for all the names in 2014 for each state in its rows:
X = []
for fname in file_names:
states.append(fname[-6:-4])
#temporary sample vector for current state
temp = [0]*(len(name2index_male)+len(name2index_female))
#read the file for the current state
with open(fname,'r') as f:
for line in f:
state, gender, year, name, count = line.split(',')
year = int(year)
if year == 2014:
count = float(count)
if gender == 'M':
feature_index = name2index_male[name]
else:
feature_index = name2index_female[name]
temp[feature_index] = count
X.append(temp)
X = np.array(X)
print('Data matrix X has shape: {}'.format(X.shape))
#check if sparse to see if it makes sense to transform X to a sparse matrix
from scipy.sparse import csr_matrix, issparse
issparse(X)
Explanation: The other peak happened in 1989. It turns out the Berlin Wall came down in 1989, and Denmark became the first country to legally recognize same-sex partnerships that same year.
III-2 Clustering of the US States using baby names
Now we try to see if the states cluster in terms of how their people name their babies. We'll first extract all the baby names (male and female) used in 2014 and generate feature vectors for each state using the counts for each name.
End of explanation
# normalize the counts for each state by the total number of babies born there in 2014
for i in range(X.shape[0]):
X[i,:] = X[i,:] / np.sum(X[i,:])
from sklearn.decomposition import RandomizedPCA
from sklearn.preprocessing import StandardScaler
X = StandardScaler().fit_transform(X)
pca = RandomizedPCA(n_components = 2)
pca.fit(X)
X_pca = pca.transform(X)
fig = plt.figure(figsize = (6,6))
plt.scatter(X_pca[:,0],X_pca[:,1])
# plt.xlim([-1,2])
# plt.ylim([-2,3])
for i in range(len(states)):
plt.annotate(states[i], (X_pca[i,0], X_pca[i,1]))
plt.xlabel("first principal component")
plt.ylabel("second principal component")
plt.title("States projected to first two principle components")
plt.show()
Explanation: Next, we'll perform dimensionality reduction using principal component analysis and retain only two of the components. Scikit-learn's RandomizedPCA implementation is chosen for its efficiency.
We note that it is important to normalize the data since baby name counts are correlated with the population of states. Our goal is to cluster the states by the distribution of different names.
End of explanation
ind2keep = [i for i in range(len(states)) if states[i] not in ['NY', 'FL', 'CA', 'TX']]
X_pca = X_pca[ind2keep,:]
states = [states[i] for i in ind2keep]
X_pca = StandardScaler().fit_transform(X_pca)
fig = plt.figure(figsize = (13,6))
ax1 = plt.subplot(121)
ax1.scatter(X_pca[:,0],X_pca[:,1])
# plt.xlim([-1,2])
# plt.ylim([-2,3])
for i in range(len(states)):
ax1.annotate(states[i], (X_pca[i,0], X_pca[i,1]))
ax1.set_xlabel("first principal component")
ax1.set_ylabel("second principal component")
ax1.set_title('States')
ax2 = plt.subplot(122)
ax2.scatter(X_pca[:,0],X_pca[:,1])
ax2.set_xlim([-1.5,1.1])
ax2.set_ylim([-1.5,0.5])
for i in range(len(states)):
ax2.annotate(states[i], (X_pca[i,0], X_pca[i,1]))
ax2.set_xlabel("first principal component")
ax2.set_ylabel("second principal component")
ax2.set_title('States - Zoomed in to the lower left corner')
plt.show()
Explanation: It is interesting to observe that CA and TX are obvious outliers. We have squeezed many dimensions into only two, so it is not easy to comment on the meaning of the principal components. However, it is tempting to conclude that the first principal component is directly proportional to the Hispanic population, since both CA and TX have large values in that direction. And, at the risk of getting ahead of ourselves, we can say that the other direction could well be related to the Asian population percentage; it is not surprising to see CA having the largest coefficient in that direction: (https://en.wikipedia.org/wiki/Demographics_of_Asian_Americans).
Now let's remove NY, FL, CA and TX from the data set, standardize the features and zoom into that big cluster:
End of explanation
from sklearn.cluster import KMeans
kmeans = KMeans(n_clusters = 3, init='k-means++')
kmeans.fit(X_pca)
y_pred = kmeans.predict(X_pca)
fig = plt.figure(figsize = (15,15))
ax1 = plt.subplot(111)
ax1.scatter(X_pca[:,0],X_pca[:,1], c = y_pred, s= 100)
for i in range(len(states)):
ax1.annotate(states[i], (X_pca[i,0], X_pca[i,1]))
ax1.set_xlabel("first principal component")
ax1.set_ylabel("second principal component")
ax1.set_title('States Clustered by K-means')
plt.show()
Explanation: Finally, we apply a K-means clustering algorithm to the data reduced to 2 dimensions.
End of explanation
state_dict = {}
import re
with open('states.csv', 'r') as f:
for line in f:
name, abbrv = re.sub('["\n]', '', line).split(',')
state_dict[abbrv] = name
Explanation: We'll conclude by listing the states under each cluster. To that end, we downloaded a CSV file from http://www.fonz.net/blog/archives/2008/04/06/csv-of-states-and-state-abbreviations/ that contains state names and their abbreviations. Let's load that file and get a map of abbreviations to full state names.
End of explanation
print('Blue cluster:')
print('--------------')
print(', '.join([state_dict[states[i]] for i in range(len(states)) if y_pred[i] == 0 ]))
print('\nGreen cluster:')
print('--------------')
print(', '.join([state_dict[states[i]] for i in range(len(states)) if y_pred[i] == 1 ]))
print('\nRed cluster:')
print('--------------')
print(', '.join([state_dict[states[i]] for i in range(len(states)) if y_pred[i] == 2 ]))
Explanation: Finally, let's list the states under each cluster:
End of explanation
!ipython nbconvert baby_names.ipynb
Explanation: We'll avoid trying to read too much insight into these clusters; as we mentioned before, a lot of dimensions are pressed into two, and it is questionable whether these clusters are meaningful in an obvious sense.
Some ideas for further investigation:
If we had more time, it would have been possible to extract other interesting information from this data set. Here are a few examples that come to mind:
State-by-state population change (a quick sketch is given below).
Analysis of diversity and demographics of immigration.
More informed cluster analysis by classifying names into demographics.
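For the first idea, a rough sketch could simply aggregate the total number of recorded births per state and year as a crude proxy for population change (this reuses the per-state files and the file_names list from earlier in this notebook; it is not part of the original analysis):
# Sketch only: births[state][year] = total recorded count
births = {}
for fname in file_names:
    with open(fname, 'r') as f:
        for line in f:
            state, gender, year, name, count = line.split(',')
            births.setdefault(state, {})
            births[state][int(year)] = births[state].get(int(year), 0) + int(count)
# e.g. the first few years for California
print(sorted(births['CA'].items())[:3])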
End of explanation |
12,205 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Ardumoto Example
This example shows how to use the
Ardumoto on the board.
Ardumoto supports two DC motor driving.
There are also instructions
on how to hook up the shield.
Motor A and Motor B are connected as below to the Arduino pins
Step1: 1. Use Microblaze to control the Ardumoto shield
First let's define a few constants.
Step2: Now we can use Microblaze program to control the shield.
Step3: 2. Set pin and polarity configurations
We have to first initialize the device.
Step4: We can then set motor A and B to have the same polarity.
Step5: 3. Set direction and speed for each motor
Step6: Now let us set motor A speed to be 10% of the maximum speed.
Step7: Set speed for motor B to be the maximum.
Step8: Run each individual motor for a few seconds.
Step9: 4. Run both motors together
The following cell will run both motors in the same direction,
but with different speeds.
Step10: Again, the rotation of the motor depends on the wiring to the shield.
In our setup, the following cell will result in two motors rotating
in opposite directions. | Python Code:
from pynq.overlays.base import BaseOverlay
base = BaseOverlay("base.bit")
Explanation: Ardumoto Example
This example shows how to use the
Ardumoto on the board.
Ardumoto supports two DC motor driving.
There are also instructions
on how to hook up the shield.
Motor A and Motor B are connected as below to the Arduino pins:
Default connections
| Pin number | Functionality |
|------------|----------------------------------|
| 2 | Direction control for motor A |
| 3 | PWM control (speed) for motor A |
| 4 | Direction control for motor B |
| 11 | PWM control (speed) for motor B |
Alternate Connections
| Pin number | Functionality |
|------------|----------------------------------|
| 8 | Direction control for motor A |
| 9 | PWM control (speed) for motor A |
| 7 | Direction control for motor B |
| 10 | PWM control (speed) for motor B |
In this notebook, we will assume the first (default) pin configuration.
There are multiple ways to hook up the motor to the shield, as shown below:
In this notebook, we will assume the second way in the above picture.
End of explanation
MOTOR_A = 0
MOTOR_B = 1
POLAR_DEFAULT = 0
POLAR_REVERSE = 1
FORWARD = 0
BACKWARD = 1
Explanation: 1. Use Microblaze to control the Ardumoto shield
First let's define a few constants.
End of explanation
%%microblaze base.ARDUINO
#include "xio_switch.h"
#include "gpio.h"
#include "timer.h"
#define DEFAULT_PERIOD 625998
#define DEFAULT_DUTY 312998
#define PWM_A_PIN 3
#define PWM_B_PIN 11
#define DIR_A_PIN 2
#define DIR_B_PIN 4
typedef enum motor {
MOTOR_A = 0,
MOTOR_B = 1,
}motor_e;
static unsigned int pol_a = 0, pol_b = 0;
static unsigned int dir_a = 0, dir_b = 0;
static unsigned int duty_a = 50, duty_b = 50;
static timer timer_a;
static timer timer_b;
static gpio gpio_a;
static gpio gpio_b;
unsigned int init_ardumoto(){
timer_a = timer_open_device(0);
timer_b = timer_open_device(5);
set_pin(PWM_A_PIN, PWM0);
set_pin(PWM_B_PIN, PWM5);
gpio_a = gpio_open(DIR_A_PIN);
gpio_b = gpio_open(DIR_B_PIN);
gpio_set_direction(gpio_a, GPIO_OUT);
gpio_set_direction(gpio_b, GPIO_OUT);
return 0;
}
void configure_polar(unsigned int motor, unsigned int polarity){
if (motor == MOTOR_A) {
pol_a = polarity;
}else if (motor == MOTOR_B) {
pol_b = polarity;
}
}
void set_direction(unsigned int motor, unsigned int direction){
if (motor == MOTOR_A){
dir_a = (direction)? pol_a : !pol_a;
}
else if (motor == MOTOR_B){
dir_b = (direction)? pol_b : !pol_b;
}
}
void set_speed(unsigned int motor, unsigned int speed){
if (motor == MOTOR_A) {
duty_a = speed;
} else if (motor == MOTOR_B) {
duty_b = speed;
}
}
void run(unsigned int motor){
if (motor == MOTOR_A) {
gpio_write(gpio_a, dir_a);
timer_pwm_generate(timer_a, DEFAULT_PERIOD,
duty_a*DEFAULT_PERIOD/100);
}else if(motor == MOTOR_B) {
gpio_write(gpio_b, dir_b);
timer_pwm_generate(timer_b, DEFAULT_PERIOD,
duty_b*DEFAULT_PERIOD/100);
}
}
void stop(unsigned int motor){
if (motor == MOTOR_A) {
timer_pwm_stop(timer_a);
}else if (motor == MOTOR_B){
timer_pwm_stop(timer_b);
}
}
Explanation: Now we can use Microblaze program to control the shield.
End of explanation
init_ardumoto()
Explanation: 2. Set pin and polarity configurations
We have to first initialize the device.
End of explanation
configure_polar(MOTOR_A, POLAR_DEFAULT)
configure_polar(MOTOR_B, POLAR_DEFAULT)
Explanation: We can then set motor A and B to have the same polarity.
End of explanation
set_direction(MOTOR_A, FORWARD)
set_direction(MOTOR_B, FORWARD)
Explanation: 3. Set direction and speed for each motor
End of explanation
set_speed(MOTOR_A, 10)
Explanation: Now let us set motor A speed to be 10% of the maximum speed.
End of explanation
set_speed(MOTOR_B, 99)
Explanation: Set speed for motor B to be the maximum.
End of explanation
from time import sleep
run(MOTOR_A)
sleep(3)
stop(MOTOR_A)
sleep(1)
run(MOTOR_B)
sleep(3)
stop(MOTOR_B)
Explanation: Run each individual motor for a few seconds.
End of explanation
run(MOTOR_A)
run(MOTOR_B)
sleep(2)
stop(MOTOR_A)
stop(MOTOR_B)
Explanation: 4. Run both motors together
The following cell will run both motors in the same direction,
but with different speeds.
End of explanation
set_direction(MOTOR_A, FORWARD)
set_speed(MOTOR_A, 50)
set_direction(MOTOR_B, BACKWARD)
set_speed(MOTOR_B, 50)
run(MOTOR_A)
run(MOTOR_B)
sleep(3)
stop(MOTOR_A)
stop(MOTOR_B)
Explanation: Again, the rotation of the motor depends on the wiring to the shield.
In our setup, the following cell will result in two motors rotating
in opposite directions.
End of explanation |
12,206 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Bayesian deep-learning
Launch in Google Colab
Bayesian deep-learning network using dropout layers to perform Monte Carlo approximations for quantifying model uncertainty.
Overview
This notebook uses the fashion MNIST dataset and a Bayesian deep-learning model. If the Google Cloud TPU is attached to the notebook, the model can utilize the TPU to accelerate the training and inference performance.
Learning goals
Build a Bayesian deep-learning network in Keras
Create and compile the model under a distribution strategy that uses TPUs
Run Bayesian inference
Instructions
<h3> Train on Google Colab using TPU <a href="https
Step1: Specify variables
Step2: Fashion MNIST dataset
The fashion MNIST dataset is available as a tf.keras.datasets.
Step3: Define the Bayesian deep-learning model
The following example uses a single layer conv-net with a dropout layer for doing the Monte Carlo approximations during Bayesian inference.
Step4: Using the TPU
To use the TPU for training and inference, first the TPU device needs to be initialized. Then the model has to be built and compiled specifically to use the TPU.
Step5: Train
Download pre-trained weights
Step6: Training the fashion MNIST Bayesian deep-learning model
Step7: Bayesian inference
The inference step is repeated over and over again to obtain the model uncertainty associated with each prediction class. Unlike in the regular deep-learning architecture, each inference step returns a different set of probabilities for each class. The final accuracy is calculated as the class-wise mean of all the probabilities. The model uncertainty is numerically represented as the class-wise standard deviation of all the probabilities.
Step8: Visualize predictions | Python Code:
%tensorflow_version 2.x
import os
import numpy as np
import tensorflow as tf
from tqdm import tqdm
from matplotlib import pyplot
%matplotlib inline
print("Tensorflow version " + tf.__version__)
Explanation: Bayesian deep-learning
Launch in Google Colab
Bayesian deep-learning network using dropout layers to perform Monte Carlo approximations for quantifying model uncertainty.
Overview
This notebook uses the fashion MNIST dataset and a Bayesian deep-learning model. If the Google Cloud TPU is attached to the notebook, the model can utilize the TPU to accelerate the training and inference performance.
Learning goals
Build a Bayesian deep-learning network in Keras
Create and compile the model under a distribution strategy that uses TPUs
Run Bayesian inference
Instructions
<h3> Train on Google Colab using TPU <a href="https://colab.research.google.com/"><img valign="middle" src="https://raw.githubusercontent.com/rahulremanan/python_tutorial/master/Machine_Vision/07_Bayesian_deep_learning/media/tpu-hexagon.png" width="50"></a></h3>
On the main menu, click Runtime and select Change runtime type. Set "TPU" as the hardware accelerator.
Click Runtime again and select Runtime > Run All. You can also run the cells manually with Shift-ENTER.
A quick word about TPUs
TPUs are currently available only in the Google Cloud. They are designed to read the data directly from Google Cloud Storage (GCS). Therefore, local datasets need to be either stored in the cloud instance memory to pass it to the TPU or as a GCS bucket so that the TPU can access it. For developers, this means that the typical generator functions that can handle CPUs or GPUs will therefore fail when trying to use TPUs, necessitating custom TPU specific generator functions. In this notebook, we are using the first approach by storing the entire fashion MNIST dataset in the instance memory. This approach of handling the dataset without a generator function works well in this particular case due to the manageable size of the dataset.
Bayesian deep-learning using Fashion MNIST, Keras and TPUs
Import
End of explanation
WEIGHTS_FILE='./bayesian_fashionMNIST.h5'
GITHUB_REPO='https://github.com/rahulremanan/python_tutorial/'
WEIGHTS_URL='{}raw/master/Machine_Vision/07_Bayesian_deep_learning/weights/bayesian_fashionMNIST.h5'.format(GITHUB_REPO)
LABEL_NAMES = ['t_shirt','trouser','pullover','dress','coat','sandal','shirt','sneaker','bag','ankle_boots']
Explanation: Specify variables
End of explanation
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()
# add empty color dimension
x_train = np.expand_dims(x_train, -1)
x_test = np.expand_dims(x_test, -1)
Explanation: Fashion MNIST dataset
The fashion MNIST dataset is available as a tf.keras.datasets.
End of explanation
def fashionMNIST_model(input_data,dropout_rate=0.5,model_name="Bayesian_fashionMNIST",enable_bayesian_inference=True):
inputs = tf.keras.Input(shape=(input_data.shape[1:]))
x = tf.keras.layers.Conv2D(128,(3,3))(inputs)
x = tf.keras.layers.MaxPooling2D(pool_size=(2,2),strides=(2,2))(x)
x = tf.keras.layers.Activation('elu')(x)
x = tf.keras.layers.Dropout(dropout_rate)(x,training=enable_bayesian_inference)
x = tf.keras.layers.Flatten()(x)
x = tf.keras.layers.Dense(10)(x)
outputs = tf.keras.layers.Activation('softmax')(x)
model = tf.keras.Model(inputs=inputs,outputs=outputs,name=model_name)
return model
Explanation: Define the Bayesian deep-learning model
The following example uses a single layer conv-net with a dropout layer for doing the Monte Carlo approximations during Bayesian inference.
End of explanation
tpu = tf.distribute.cluster_resolver.TPUClusterResolver()
print('Running on TPU ', tpu.cluster_spec().as_dict()['worker'])
tf.config.experimental_connect_to_cluster(tpu)
tf.tpu.experimental.initialize_tpu_system(tpu)
strategy = tf.distribute.experimental.TPUStrategy(tpu)
print("REPLICAS: ", strategy.num_replicas_in_sync)
with strategy.scope():
bayesian_model = fashionMNIST_model(x_train,enable_bayesian_inference=True)
bayesian_model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
loss=tf.keras.losses.sparse_categorical_crossentropy,
metrics=[tf.keras.metrics.sparse_categorical_accuracy])
Explanation: Using the TPU
To use the TPU for training and inference, first the TPU device needs to be initialized. Then the model has to be built and compiled specifically to use the TPU.
End of explanation
if not os.path.exists(WEIGHTS_FILE):
!wget {WEIGHTS_URL} -O {WEIGHTS_FILE}
if os.path.exists(WEIGHTS_FILE):
bayesian_model.load_weights(WEIGHTS_FILE)
print('Loaded pre-trained weights: {} ...'.format(WEIGHTS_FILE))
Explanation: Train
Download pre-trained weights
End of explanation
bayesian_model.fit(x_train.astype(np.float32),y_train.astype(np.float32),
epochs=5,
steps_per_epoch=60,
validation_data=(x_test.astype(np.float32),y_test.astype(np.float32)),
validation_freq=1)
bayesian_model.save_weights(WEIGHTS_FILE,overwrite=True)
Explanation: Training the fashion MNIST Bayesian deep-learning model
End of explanation
with strategy.scope():
bayesian_model = fashionMNIST_model(x_train,enable_bayesian_inference=True)
bayesian_model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
loss=tf.keras.losses.sparse_categorical_crossentropy,
metrics=[tf.keras.metrics.sparse_categorical_accuracy])
bayesian_model.load_weights(WEIGHTS_FILE)
preds=[]
num_bayesian_inference_steps=10
for i in tqdm(range(num_bayesian_inference_steps)):
preds.append(bayesian_model.predict(x_test[:16].astype(np.float32)))
mean_preds=np.mean(np.asarray(preds),axis=0)
stdev_preds=np.std(np.asarray(preds),axis=0)
Explanation: Bayesian inference
The inference step is repeated over and over again to obtain the model uncertainty associated with each prediction class. Unlike in the regular deep-learning architecture, each inference step returns a different set of probabilities for each class. The final accuracy is calculated as the class-wise mean of all the probabilities. The model uncertainty is numerically represented as the class-wise standard deviation of all the probabilities.
End of explanation
def plot_predictions(images,ground_truths,
preds_acc,preds_stdev=None,
label_names=None,
enable_bayesian_inference=True):
n = images.shape[0]
nc = int(np.ceil(n / 4))
f, axes = pyplot.subplots(nc, 4)
for i in range(nc * 4):
y = i // 4
x = i % 4
axes[x, y].axis('off')
label = label_names[np.argmax(preds_acc[i])]
ground_truth=label_names[ground_truths[i]]
accuracy = np.max(preds_acc[i])
if enable_bayesian_inference and preds_stdev is not None:
confidence = preds_stdev[i][np.argmax(preds_acc[i])]
if i > n:
continue
axes[x, y].imshow(images[i])
if enable_bayesian_inference and preds_stdev is not None:
axes[x, y].text(0.5,0.5, '\nLabel (Actual): {} ({})'.format(label,ground_truth) +
'\nAccuracy: {}, \nUncertainty: {}\n'.format(str(round(accuracy,2)),
str(round(confidence,2))),
fontsize=10)
else:
axes[x, y].text(0.5,0.5, '\nLabel: {}'.format(label) +
'\nAccuracy: {} \n'.format(str(round(accuracy,2))),
fontsize=10)
pyplot.gcf().set_size_inches(16,16)
plot_predictions(np.squeeze(x_test[:16]), y_test[:16],
mean_preds,stdev_preds,
label_names=LABEL_NAMES,
enable_bayesian_inference=True)
Explanation: Visualize predictions
End of explanation |
12,207 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Comparison of batchflow performance with tf and torch
Models
Firstly, we comapre torch and tf versions of VGG16. Train them on MNIST.
TensorFlow model
Step1: ... then restart kernel to clear GPU
Torch model
Step2: torch model is slightly faster.
Research
Now let us compare research performances. We have the same experiment scheme and use four GPU units.
TensorFlow model
Step3: Torch model | Python Code:
%%time
%run ./tf_model.py
Explanation: Comparison of batchflow performance with tf and torch
Models
First, we compare the torch and tf versions of VGG16 and train them on MNIST.
TensorFlow model
End of explanation
%%time
%run ./torch_model.py
Explanation: ... then restart kernel to clear GPU
Torch model
End of explanation
import numpy as np
from multiprocessing import Process, Queue
import time
import matplotlib.pyplot as plt
import nvidia_smi
gpu_list = [2, 4, 5, 6]
def get_utilization(gpu_list):
nvidia_smi.nvmlInit()
handle = [nvidia_smi.nvmlDeviceGetHandleByIndex(i) for i in gpu_list]
res = [nvidia_smi.nvmlDeviceGetUtilizationRates(item) for item in handle]
return time.time(), [item.gpu for item in res]
def gpu_stat(gpu_list, forward, back):
res = []
while forward.empty():
time.sleep(0.5)
res.append(get_utilization(gpu_list))
back.put(res)
def plot(res, gpu_list):
times = np.array([item[0] for item in res])
times = times - times[0]
utilization = np.array([[item[1][j] for item in res] for j in range(len(gpu_list))])
plt.figure(figsize=(15, 3))
_ = [plt.plot(times, utilization[i]) for i in range(len(gpu_list))]
plt.show()
forward = Queue()
back = Queue()
p = Process(target=gpu_stat, args=(gpu_list, forward, back))
p.start()
%%time
%run ./tf_research.py 4
forward.put('stop')
tf_res = back.get()
plot(tf_res, gpu_list)
forward = Queue()
back = Queue()
p = Process(target=gpu_stat, args=([2], forward, back))
p.start()
%%time
%run ./tf_research.py 1
forward.put('stop')
tf_res = back.get()
plot(tf_res, [2])
Explanation: torch model is slightly faster.
Research
Now let us compare research performances. We have the same experiment scheme and use four GPU units.
TensorFlow model
End of explanation
forward = Queue()
back = Queue()
p = Process(target=gpu_stat, args=(gpu_list, forward, back))
p.start()
%%time
%run ./torch_research.py 4
forward.put('stop')
torch_res = back.get()
plot(torch_res, gpu_list)
forward = Queue()
back = Queue()
p = Process(target=gpu_stat, args=([2], forward, back))
p.start()
%%time
%run ./torch_research.py 1
forward.put('stop')
torch_res = back.get()
plot(torch_res, [2])
Explanation: Torch model
End of explanation |
12,208 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using Yellow Brick to Explore and Model the Famous Iris Dataset
Exploration Notebook by
Step1: Terminology
150 observations (n=150)
Step2: Import the Good Stuff
Step3: Feature Exploration with RadViz
Step4: Setosas tend to have the largest septal-width. This can could be a great predictor.
Then, let's remove setosa from the training set and see fi we can find any differentiation between veriscolor and virginica.
Remove Setosa from the training set
Step5: Try the Covariance Visualizer
Step6: This covariance chart is not intereptatble as they don't have labels. Also there shouldn't be half numbers in labels.
More Feature Exploration
Step7: This clearly demonstrates the separation between features - especially petal_length and petal_width. One concern is that this demonstraction data might be obsured by the scaling of the features and add noise to the intepretation.
Feature Exploration
Step8: The scaled dataset makes it easier to see the separation between classes for each of the features.
*TODO - Add scaling option to PararalCordinates and potentially other visualizers
Now that we have some features, Let's Evaluate Classifiers
From the feature selection phase, we determined that petal_length and petal_width seem to have the best separation.
Step9: Note
Step10: Model Selection | Python Code:
# read the iris data into a DataFrame
import pandas as pd
url = 'http://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data'
col_names = ['sepal_length', 'sepal_width', 'petal_length', 'petal_width', 'species']
iris = pd.read_csv(url, header=None, names=col_names)
iris.head()
Explanation: Using Yellow Brick to Explore and Model the Famous Iris Dataset
Exploration Notebook by:
Nathan Danielsen
Prema Damodaran
Review of the iris dataset
End of explanation
# map each iris species to a number
iris['species_num'] = iris.species.map({'Iris-setosa':0, 'Iris-versicolor':1, 'Iris-virginica':2})
Explanation: Terminology
150 observations (n=150): each observation is one iris flower
4 features (p=4): sepal length, sepal width, petal length, and petal width
Response: iris species
Classification problem, since the response is categorical (these numbers are verified in the quick check below)
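A quick check of these numbers against the loaded DataFrame (not part of the original notebook):
# Verify the terminology above
print(iris.shape)                   # (150, 6): 150 observations, 4 features + species + species_num
print(iris.species.value_counts())  # 50 observations per species, 3 species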
Lightly Preprocess the Dataset
End of explanation
import yellowbrick as yb
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (12, 8)
from yellowbrick.features.rankd import Rank2D
from yellowbrick.features.radviz import RadViz
from yellowbrick.features.pcoords import ParallelCoordinates
Explanation: Import the Good Stuff
End of explanation
# Specify the features of interest and the classes of the target
features = ['sepal_length', 'sepal_width', 'petal_length', 'petal_width']
classes = ['Iris-setosa', 'Iris-versicolor', 'Iris-virginica']
# Extract the numpy arrays from the data frame
X = iris[features].as_matrix()
y = iris.species_num.as_matrix()
visualizer = RadViz(classes=classes, features=features)
visualizer.fit(X, y) # Fit the data to the visualizer
visualizer.transform(X) # Transform the data
visualizer.show() # Draw/show/show the data
Explanation: Feature Exploration with RadViz
End of explanation
# Specify the features of interest and the classes of the target
iris_subset = iris[iris.species_num!=0]
features = ['sepal_length', 'sepal_width', 'petal_length', 'petal_width']
classes = ['Iris-setosa','Iris-versicolor', 'Iris-virginica'] # but have to leave in more than two classes
# Extract the numpy arrays from the data frame
X = iris_subset[features].as_matrix()
y = iris_subset.species_num.as_matrix()
assert y.shape[0] == X.shape[0]
visualizer = RadViz(classes=classes, features=features)
visualizer.fit(X, y) # Fit the data to the visualizer
visualizer.transform(X) # Transform the data
visualizer.show() # Draw/show/show the data
Explanation: Setosas tend to have the largest septal-width. This can could be a great predictor.
Then, let's remove setosa from the training set and see fi we can find any differentiation between veriscolor and virginica.
Remove Setosa from the training set
End of explanation
# Specify the features of interest and the classes of the target
features = ['sepal_length', 'sepal_width', 'petal_length', 'petal_width']
classes = ['Iris-setosa', 'Iris-versicolor', 'Iris-virginica']
# Extract the numpy arrays from the data frame
X = iris[features].as_matrix()
y = iris.species_num.as_matrix()
visualizer = Rank2D(features=features, algorithm='covariance')
visualizer.fit(X, y) # Fit the data to the visualizer
visualizer.transform(X) # Transform the data
visualizer.show() # Draw/show/show the data
Explanation: Try the Covariance Visualizer
End of explanation
features = ['sepal_length', 'sepal_width', 'petal_length', 'petal_width']
classes = ['Iris-setosa', 'Iris-versicolor', 'Iris-virginica']
# Extract the numpy arrays from the data frame
X = iris[features].as_matrix()
y = iris.species_num.values
assert y.shape[0] == X.shape[0]
visualizer = ParallelCoordinates(classes=classes, features=features)
visualizer.fit(X, y) # Fit the data to the visualizer
visualizer.transform(X) # Transform the data
visualizer.show() # Draw/show/show the data
Explanation: This covariance chart is not interpretable, as the axes don't have feature labels. Also, there shouldn't be half-integer values in the tick labels.
More Feature Exploration: Look at Parallel Coordinates for all Species
End of explanation
from sklearn import preprocessing
features = ['sepal_length', 'sepal_width', 'petal_length', 'petal_width']
classes = ['Iris-setosa', 'Iris-versicolor', 'Iris-virginica']
# Extract the numpy arrays from the data frame
X = iris[features].as_matrix()
X_scaled = preprocessing.scale(X)
y = iris.species_num.values
assert y.shape[0] == X.shape[0]
visualizer = ParallelCoordinates(classes=classes, features=features)
visualizer.fit(X_scaled, y) # Fit the data to the visualizer
visualizer.transform(X) # Transform the data
visualizer.show()
Explanation: This clearly demonstrates the separation between features - especially petal_length and petal_width. One concern is that this separation might be obscured by the different scaling of the features, which adds noise to the interpretation.
Feature Exploration: ParallelCoordinates with Scaling
End of explanation
# Classifier Evaluation Imports
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from yellowbrick.classifier import ClassificationReport, ClassBalance
features = ['petal_length', 'petal_width']
classes = ['Iris-setosa', 'Iris-versicolor', 'Iris-virginica']
# Extract the numpy arrays from the data frame
X = iris[features].as_matrix()
y = iris.species_num.as_matrix()
assert y.shape[0] == X.shape[0]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
assert y_train.shape[0] == X_train.shape[0]
assert y_test.shape[0] == X_test.shape[0]
# Instantiate the classification model and visualizer
bayes = GaussianNB()
visualizer = ClassificationReport(bayes, classes=classes)
visualizer.fit(X_train, y_train) # Fit the training data to the visualizer
visualizer.score(X_test, y_test) # Evaluate the model on the test data
g = visualizer.show() # Draw/show/show the data
visualizer
Explanation: The scaled dataset makes it easier to see the separation between classes for each of the features.
*TODO - Add scaling option to ParallelCoordinates and potentially other visualizers
Now that we have some features, Let's Evaluate Classifiers
From the feature selection phase, we determined that petal_length and petal_width seem to have the best separation.
End of explanation
# Classifier Evaluation Imports
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import train_test_split
from yellowbrick.classifier import ClassificationReport, ClassBalance
features = ['petal_length', 'petal_width']
classes = ['Iris-setosa', 'Iris-versicolor', 'Iris-virginica']
# Extract the numpy arrays from the data frame
X = iris[features].as_matrix()
y = iris.species_num.as_matrix()
assert y.shape[0] == X.shape[0]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
assert y_train.shape[0] == X_train.shape[0]
assert y_test.shape[0] == X_test.shape[0]
# Instantiate the classification model and visualizer
bayes = MultinomialNB()
visualizer = ClassificationReport(bayes)# classes=classes)
visualizer.fit(X_train, y_train) # Fit the training data to the visualizer
visualizer.score(X_test, y_test) # Evaluate the model on the test data
g = visualizer.show() # Draw/show/show the data
Explanation: Note: There seems to be some sort of bug in the draw/fit methods.
Let's try a Naive Bayes classifier, since the previous one didn't work
End of explanation
features = ['petal_length', 'petal_width']
classes = ['Iris-setosa', 'Iris-versicolor', 'Iris-virginica']
# Extract the numpy arrays from the data frame
X = iris[features].as_matrix()
y = iris.species_num.as_matrix()
assert y.shape[0] == X.shape[0]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1)
assert y_train.shape[0] == X_train.shape[0]
assert y_test.shape[0] == X_test.shape[0]
test = pd.DataFrame(y_test, columns=['species'])
test.species.value_counts() # The test train split provides unbalanced classes
from sklearn.ensemble import RandomForestClassifier
from yellowbrick.classifier import ClassificationReport
# Instantiate the classification model and visualizer
forest = RandomForestClassifier()
visualizer = ClassBalance(forest, classes=classes)
visualizer.fit(X_train, y_train) # Fit the training data to the visualizer
visualizer.score(X_test, y_test) # Evaluate the model on the test data
g = visualizer.show() # Draw/show/show the data
Explanation: Model Selection: Random Forest Classification
End of explanation |
12,209 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Perform a 4-terminal calculation with 2 crossed Carbon chains.
Running a two-terminal calculation with TranSiesta is a breeze compared to running $N>2$-electrode calculations. When performing $N>2$-electrode calculations an endless combination of different applied bias settings become apparent.
This will be reflected in an even more verbose input for TranSiesta to describe all the 4 electrodes, contours, chemical potentials etc.
This example will primarily create the geometries, and then you should perform data analysis.
Step1: Exercises
Run the two electrodes (RUN_ELEC_X/Y.fdf).
Run the TranSiesta analyzation step (see fdf flag
Step2: Plot bond (vector) currents, below is a skeleton code to do this, look in the sisl manual for extraction of vector current
Step4: Performing NEGF calculations for $N$-electrodes with an applied bias is extremely difficult and one should first take on this task once the traditional TranSiesta 2-electrode setup is a breeze.
One of the main difficulties in performing good $N$-electrode calculations is the Poisson solution and the boundary conditions. These are more difficult to empose when having more than 2 electrodes or 1 electrode (only 2 electrode setups are easy).
In a 2 terminal device it is obvious how the applied bias is located (a ramp between the two electrodes), however,
when dealing with more than 2 elecrodes all electrodes may have a different chemical potential and thus the variations in how to apply the bias becomes infinite.
Your first task is to read through RUN.fdf and figure out which electrode has which chemical potential and draw a small schematic of it.
Calculate the NEGF with a bias of 0.5 eV (please use this command line, for details of options refer to the Siesta manual)
Step5: TranSiesta's method of setting boundary conditions for $N$-electrodes is extremely crude since it only fixes the potential on the electrodes. Instead of letting TranSiesta apply the boundary conditions we can provide an external solution to the Poisson problem with proper boundary conditions of the electrodes.
Below is a method to solve the Poisson problem in Python using pyamg. It takes quite some time, so be patient.
You don't need to understand the below script (but I won't hold you back if you want to carefully read it through ;)) | Python Code:
chain = sisl.Geometry([[0,0,0]], atoms=sisl.Atom[6], sc=[1.4, 1.4, 11])
elec_x = chain.tile(4, axis=0).add_vacuum(11 - 1.4, 1)
elec_x.write('ELEC_X.fdf')
elec_y = chain.tile(4, axis=1).add_vacuum(11 - 1.4, 0)
elec_y.write('ELEC_Y.fdf')
chain_x = elec_x.tile(4, axis=0)
chain_y = elec_y.tile(4, axis=1)
chain_x = chain_x.translate(-chain_x.center(what='xyz'))
chain_y = chain_y.translate(-chain_y.center(what='xyz'))
device = chain_x.append(chain_y.translate([0, 0, -chain.cell[2, 2] + 2.1]), 2)
# Correct the y-direction vacuum
device = device.add_vacuum(chain_y.cell[1, 1] - chain_x.cell[1,1], 1)
device = device.translate(device.center(what='cell'))
device.write('DEVICE.fdf')
device.write('DEVICE.xyz')
Explanation: Perform a 4-terminal calculation with 2 crossed Carbon chains.
Running a two-terminal calculation with TranSiesta is a breeze compared to running $N>2$-electrode calculations. When performing $N>2$-electrode calculations an endless combination of different applied bias settings become apparent.
This will be reflected in an even more verbose input for TranSiesta to describe all the 4 electrodes, contours, chemical potentials etc.
This example will primarily create the geometries, and then you should perform data analysis.
End of explanation
tbt = sisl.get_sile('siesta.TBT.nc')
Explanation: Exercises
Run the two electrodes (RUN_ELEC_X/Y.fdf).
Run the TranSiesta analysis step (see fdf flag: TS.Analyze) and determine the optimal pivoting scheme used.
If you are interested you may try to use the worst pivoting scheme and see if it affects the execution time (however, this system is very small, so the time difference may be very small).
After the analysis and adding the resulting pivoting scheme to RUN.fdf, run the device (RUN.fdf).
Try and extract similar data as done in TB 6. At least plot one of the DOS quantities (a minimal sketch is given below).
Extend your DOS plot to be orbitally resolved by extracting only subsets of DOS; in this regard also play with the norm keyword, and try to plot the DOS per $s$, the sum of $p$, etc. for the orbitals on the Carbon atoms.
A file named siesta.ORB_INDX has been created by Siesta which contains the orbital information per atom; this should give you access to the indices for extraction.
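For the DOS part of this exercise, a minimal sketch is given below. It assumes the opened TBT.nc file exposes the energy grid and DOS through sisl's .E, .DOS() and .ADOS() methods (check the documentation of your sisl version for the exact signatures and for the atoms/orbitals/norm keywords mentioned above).
# Minimal DOS sketch (assumption: tbt.E, tbt.DOS(), tbt.ADOS() and tbt.elecs as in recent sisl versions)
import matplotlib.pyplot as plt
E = tbt.E  # energy grid used in the calculation
plt.plot(E, tbt.DOS(), label='Green function DOS')
plt.plot(E, tbt.ADOS(tbt.elecs[0]), label='spectral DOS from the first electrode')
plt.xlabel('E [eV]'); plt.ylabel('DOS [1/eV]')
plt.legend(); plt.show()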
End of explanation
xy = tbt.geometry.xyz[:, :]
J1 = # fill in the corresponding code here ()
plt.quiver(xy[:, 0], xy[:, 1], J1[:, 0], J1[:, 1]);
Explanation: Plot bond (vector) currents, below is a skeleton code to do this, look in the sisl manual for extraction of vector current
End of explanation
def plot_grid(grid, plane_dist=1):
Plot a cut through the grid
z_index = grid.index(plane_dist, 2)
x, y = np.mgrid[:grid.shape[0], :grid.shape[1]]
dcell = grid.dcell
x, y = x * dcell[0, 0] + y * dcell[1, 0], x * dcell[0, 1] + y * dcell[1, 1]
fig, ax = plt.subplots(1, 1)
im = ax.contourf(x, y, grid.grid[:, :, z_index])
ax.set_xlabel(r'$x$ [Ang]'); ax.set_ylabel(r'$y$ [Ang]')
ax.set_title('Potential difference [eV]')
# Also plot the atomic coordinates
xyz = grid.geometry.xyz
ax.scatter(xyz[:, 0], xyz[:, 1], 50, 'k', alpha=.6)
fig.colorbar(im);
# Read in the two different grids:
grid0 = sisl.get_sile('siesta.VH').read_grid()
no_guess = sisl.get_sile('no_guess_0.5.VH').read_grid()
# Specify the geometry so we can add the atoms to the plot
no_guess.set_geometry(device)
plot_grid(no_guess - grid0, 1.) # replace with the correct z-distance
Explanation: Performing NEGF calculations for $N$-electrodes with an applied bias is extremely difficult and one should first take on this task once the traditional TranSiesta 2-electrode setup is a breeze.
One of the main difficulties in performing good $N$-electrode calculations is the Poisson solution and the boundary conditions. These are more difficult to impose when having more than 2 electrodes, or only 1 (2-electrode setups are the easy case).
In a 2 terminal device it is obvious how the applied bias is located (a ramp between the two electrodes), however,
when dealing with more than 2 electrodes all electrodes may have a different chemical potential, and thus the variations in how to apply the bias become endless.
Your first task is to read through RUN.fdf and figure out which electrode has which chemical potential and draw a small schematic of it.
Calculate the NEGF with a bias of 0.5 eV (please use this command line, for details of options refer to the Siesta manual):
siesta -L no_guess_0.5 -V 0.5:eV RUN.fdf > no_guess_0.5.out
which applies a bias of 0.5 eV. Read through the output and find the warning which justifies the name no_guess.
- There should now be 2 files that lets you plot the bias potential profile, siesta.VH and no_guess_0.5.VH.
Use the below method to plot the plane right between the two carbon chains (HINT: calculating the $z$-value at the center between the two chains may be done using a method for the Geometry object)
End of explanation
# Define the boundary conditions in the unit-cell
bc = [['dirichlet', 'dirichlet'],
['dirichlet', 'dirichlet'],
['neumann', 'neumann']]
# Import the required machinery for solving the boundary conditions
# There is also a command-line utility to do this from a siesta.TBT.nc file with
# some easier to use command-line options. It isn't fully automated but almost.
from sisl_toolbox.transiesta.poisson.poisson_explicit import solve_poisson
device_name = device.copy()
# define the electrodes in the device together with their potential
elecs = {}
# X-left
device_name['elec-x-1'] = np.arange(elec_x.na)
elecs['elec-x-1'] = 1.
# X-right
device_name['elec-x-2'] = np.arange(chain_x.na - elec_x.na, chain_x.na)
elecs['elec-x-2'] = 1.
# Y-left
device_name['elec-y-1'] = np.arange(chain_x.na, chain_x.na + elec_y.na)
elecs['elec-y-1'] = -1.
# Y-right
device_name['elec-y-2'] = np.arange(device.na - elec_y.na, device.na)
elecs['elec-y-2'] = -1.
# Now solve
print("Starting solution... Please hold... This takes time! :)")
grid = solve_poisson(device_name, grid0.shape,
dtype=np.float32, tolerance=2e-6,
boundary=bc, radius=1.7, **elecs)
print("Done... storing boundary conditions to disk")
grid.write('V.TSV.nc')
Explanation: TranSiesta's method of setting boundary conditions for $N$-electrodes is extremely crude since it only fixes the potential on the electrodes. Instead of letting TranSiesta apply the boundary conditions we can provide an external solution to the Poisson problem with proper boundary conditions of the electrodes.
Below is a method to solve the Poisson problem in Python using pyamg. It takes quite some time, so be patient.
You don't need to understand the below script (but I won't hold you back if you want to carefully read it through ;))
End of explanation |
12,210 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Gradient Checking
Welcome to the final assignment for this week! In this assignment you will learn to implement and use gradient checking.
You are part of a team working to make mobile payments available globally, and are asked to build a deep learning model to detect fraud--whenever someone makes a payment, you want to see if the payment might be fraudulent, such as if the user's account has been taken over by a hacker.
But backpropagation is quite challenging to implement, and sometimes has bugs. Because this is a mission-critical application, your company's CEO wants to be really certain that your implementation of backpropagation is correct. Your CEO says, "Give me a proof that your backpropagation is actually working!" To give this reassurance, you are going to use "gradient checking".
Let's do it!
Step2: 1) How does gradient checking work?
Backpropagation computes the gradients $\frac{\partial J}{\partial \theta}$, where $\theta$ denotes the parameters of the model. $J$ is computed using forward propagation and your loss function.
Because forward propagation is relatively easy to implement, you're confident you got that right, and so you're almost 100% sure that you're computing the cost $J$ correctly. Thus, you can use your code for computing $J$ to verify the code for computing $\frac{\partial J}{\partial \theta}$.
Let's look back at the definition of a derivative (or gradient)
Step4: Expected Output
Step6: Expected Output
Step8: Expected Output
Step10: Now, run backward propagation.
Step12: You obtained some results on the fraud detection test set but you are not 100% sure of your model. Nobody's perfect! Let's implement gradient checking to verify if your gradients are correct.
How does gradient checking work?.
As in 1) and 2), you want to compare "gradapprox" to the gradient computed by backpropagation. The formula is still | Python Code:
# Packages
import numpy as np
from testCases import *
from gc_utils import sigmoid, relu, dictionary_to_vector, vector_to_dictionary, gradients_to_vector
Explanation: Gradient Checking
Welcome to the final assignment for this week! In this assignment you will learn to implement and use gradient checking.
You are part of a team working to make mobile payments available globally, and are asked to build a deep learning model to detect fraud--whenever someone makes a payment, you want to see if the payment might be fraudulent, such as if the user's account has been taken over by a hacker.
But backpropagation is quite challenging to implement, and sometimes has bugs. Because this is a mission-critical application, your company's CEO wants to be really certain that your implementation of backpropagation is correct. Your CEO says, "Give me a proof that your backpropagation is actually working!" To give this reassurance, you are going to use "gradient checking".
Let's do it!
End of explanation
# GRADED FUNCTION: forward_propagation
def forward_propagation(x, theta):
Implement the linear forward propagation (compute J) presented in Figure 1 (J(theta) = theta * x)
Arguments:
x -- a real-valued input
theta -- our parameter, a real number as well
Returns:
J -- the value of function J, computed using the formula J(theta) = theta * x
### START CODE HERE ### (approx. 1 line)
J = theta * x
### END CODE HERE ###
return J
x, theta = 2, 4
J = forward_propagation(x, theta)
print ("J = " + str(J))
Explanation: 1) How does gradient checking work?
Backpropagation computes the gradients $\frac{\partial J}{\partial \theta}$, where $\theta$ denotes the parameters of the model. $J$ is computed using forward propagation and your loss function.
Because forward propagation is relatively easy to implement, you're confident you got that right, and so you're almost 100% sure that you're computing the cost $J$ correctly. Thus, you can use your code for computing $J$ to verify the code for computing $\frac{\partial J}{\partial \theta}$.
Let's look back at the definition of a derivative (or gradient):
$$ \frac{\partial J}{\partial \theta} = \lim_{\varepsilon \to 0} \frac{J(\theta + \varepsilon) - J(\theta - \varepsilon)}{2 \varepsilon} \tag{1}$$
If you're not familiar with the "$\displaystyle \lim_{\varepsilon \to 0}$" notation, it's just a way of saying "when $\varepsilon$ is really really small."
We know the following:
$\frac{\partial J}{\partial \theta}$ is what you want to make sure you're computing correctly.
You can compute $J(\theta + \varepsilon)$ and $J(\theta - \varepsilon)$ (in the case that $\theta$ is a real number), since you're confident your implementation for $J$ is correct.
Lets use equation (1) and a small value for $\varepsilon$ to convince your CEO that your code for computing $\frac{\partial J}{\partial \theta}$ is correct!
2) 1-dimensional gradient checking
Consider a 1D linear function $J(\theta) = \theta x$. The model contains only a single real-valued parameter $\theta$, and takes $x$ as input.
You will implement code to compute $J(.)$ and its derivative $\frac{\partial J}{\partial \theta}$. You will then use gradient checking to make sure your derivative computation for $J$ is correct.
<img src="images/1Dgrad_kiank.png" style="width:600px;height:250px;">
<caption><center> <u> Figure 1 </u>: 1D linear model<br> </center></caption>
The diagram above shows the key computation steps: First start with $x$, then evaluate the function $J(x)$ ("forward propagation"). Then compute the derivative $\frac{\partial J}{\partial \theta}$ ("backward propagation").
Exercise: implement "forward propagation" and "backward propagation" for this simple function. I.e., compute both $J(.)$ ("forward propagation") and its derivative with respect to $\theta$ ("backward propagation"), in two separate functions.
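As a quick numerical illustration of formula (1) (not part of the graded exercise), the centered difference for $J(\theta) = \theta x$ at $x = 2$, $\theta = 4$ already recovers the analytic derivative $x = 2$:
# Quick numerical check of formula (1) on J(theta) = theta * x (illustration only)
eps = 1e-7
approx = (forward_propagation(2, 4 + eps) - forward_propagation(2, 4 - eps)) / (2 * eps)
print(approx)  # approximately 2.0 = x = dJ/dtheta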
End of explanation
# GRADED FUNCTION: backward_propagation
def backward_propagation(x, theta):
Computes the derivative of J with respect to theta (see Figure 1).
Arguments:
x -- a real-valued input
theta -- our parameter, a real number as well
Returns:
dtheta -- the gradient of the cost with respect to theta
### START CODE HERE ### (approx. 1 line)
dtheta = x
### END CODE HERE ###
return dtheta
x, theta = 2, 4
dtheta = backward_propagation(x, theta)
print ("dtheta = " + str(dtheta))
Explanation: Expected Output:
<table style=>
<tr>
<td> ** J ** </td>
<td> 8</td>
</tr>
</table>
Exercise: Now, implement the backward propagation step (derivative computation) of Figure 1. That is, compute the derivative of $J(\theta) = \theta x$ with respect to $\theta$. To save you from doing the calculus, you should get $dtheta = \frac{\partial J}{\partial \theta} = x$.
End of explanation
# GRADED FUNCTION: gradient_check
def gradient_check(x, theta, epsilon = 1e-7):
Implement the backward propagation presented in Figure 1.
Arguments:
x -- a real-valued input
theta -- our parameter, a real number as well
epsilon -- tiny shift to the input to compute approximated gradient with formula(1)
Returns:
difference -- difference (2) between the approximated gradient and the backward propagation gradient
# Compute gradapprox using left side of formula (1). epsilon is small enough, you don't need to worry about the limit.
### START CODE HERE ### (approx. 5 lines)
thetaplus = theta + epsilon # Step 1
thetaminus = theta - epsilon # Step 2
J_plus = forward_propagation(x, thetaplus) # Step 3
J_minus = forward_propagation(x, thetaminus) # Step 4
gradapprox = (J_plus - J_minus) / (2 * epsilon) # Step 5
### END CODE HERE ###
# Check if gradapprox is close enough to the output of backward_propagation()
### START CODE HERE ### (approx. 1 line)
grad = backward_propagation(x, theta)
### END CODE HERE ###
### START CODE HERE ### (approx. 1 line)
numerator = np.linalg.norm(grad - gradapprox) # Step 1'
denominator = np.linalg.norm(grad) + np.linalg.norm(gradapprox) # Step 2'
difference = numerator / denominator # Step 3'
### END CODE HERE ###
if difference < 1e-7:
print ("The gradient is correct!")
else:
print ("The gradient is wrong!")
return difference
x, theta = 2, 4
difference = gradient_check(x, theta)
print("difference = " + str(difference))
Explanation: Expected Output:
<table>
<tr>
<td> ** dtheta ** </td>
<td> 2 </td>
</tr>
</table>
Exercise: To show that the backward_propagation() function is correctly computing the gradient $\frac{\partial J}{\partial \theta}$, let's implement gradient checking.
Instructions:
- First compute "gradapprox" using the formula above (1) and a small value of $\varepsilon$. Here are the Steps to follow:
1. $\theta^{+} = \theta + \varepsilon$
2. $\theta^{-} = \theta - \varepsilon$
3. $J^{+} = J(\theta^{+})$
4. $J^{-} = J(\theta^{-})$
5. $gradapprox = \frac{J^{+} - J^{-}}{2 \varepsilon}$
- Then compute the gradient using backward propagation, and store the result in a variable "grad"
- Finally, compute the relative difference between "gradapprox" and the "grad" using the following formula:
$$ difference = \frac {\mid\mid grad - gradapprox \mid\mid_2}{\mid\mid grad \mid\mid_2 + \mid\mid gradapprox \mid\mid_2} \tag{2}$$
You will need 3 Steps to compute this formula:
- 1'. compute the numerator using np.linalg.norm(...)
- 2'. compute the denominator. You will need to call np.linalg.norm(...) twice.
- 3'. divide them.
- If this difference is small (say less than $10^{-7}$), you can be quite confident that you have computed your gradient correctly. Otherwise, there may be a mistake in the gradient computation.
End of explanation
def forward_propagation_n(X, Y, parameters):
Implements the forward propagation (and computes the cost) presented in Figure 3.
Arguments:
X -- training set for m examples
Y -- labels for m examples
parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3":
W1 -- weight matrix of shape (5, 4)
b1 -- bias vector of shape (5, 1)
W2 -- weight matrix of shape (3, 5)
b2 -- bias vector of shape (3, 1)
W3 -- weight matrix of shape (1, 3)
b3 -- bias vector of shape (1, 1)
Returns:
cost -- the cost function (logistic cost for one example)
# retrieve parameters
m = X.shape[1]
W1 = parameters["W1"]
b1 = parameters["b1"]
W2 = parameters["W2"]
b2 = parameters["b2"]
W3 = parameters["W3"]
b3 = parameters["b3"]
# LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID
Z1 = np.dot(W1, X) + b1
A1 = relu(Z1)
Z2 = np.dot(W2, A1) + b2
A2 = relu(Z2)
Z3 = np.dot(W3, A2) + b3
A3 = sigmoid(Z3)
# Cost
logprobs = np.multiply(-np.log(A3),Y) + np.multiply(-np.log(1 - A3), 1 - Y)
cost = 1./m * np.sum(logprobs)
cache = (Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3)
return cost, cache
Explanation: Expected Output:
The gradient is correct!
<table>
<tr>
<td> ** difference ** </td>
<td> 2.9193358103083e-10 </td>
</tr>
</table>
Congrats, the difference is smaller than the $10^{-7}$ threshold. So you can have high confidence that you've correctly computed the gradient in backward_propagation().
Now, in the more general case, your cost function $J$ has more than a single 1D input. When you are training a neural network, $\theta$ actually consists of multiple matrices $W^{[l]}$ and biases $b^{[l]}$! It is important to know how to do a gradient check with higher-dimensional inputs. Let's do it!
3) N-dimensional gradient checking
The following figure describes the forward and backward propagation of your fraud detection model.
<img src="images/NDgrad_kiank.png" style="width:600px;height:400px;">
<caption><center> <u> Figure 2 </u>: deep neural network<br>LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID</center></caption>
Let's look at your implementations for forward propagation and backward propagation.
End of explanation
def backward_propagation_n(X, Y, cache):
Implement the backward propagation presented in figure 2.
Arguments:
X -- input datapoint, of shape (input size, 1)
Y -- true "label"
cache -- cache output from forward_propagation_n()
Returns:
gradients -- A dictionary with the gradients of the cost with respect to each parameter, activation and pre-activation variables.
m = X.shape[1]
(Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3) = cache
dZ3 = A3 - Y
dW3 = 1./m * np.dot(dZ3, A2.T)
db3 = 1./m * np.sum(dZ3, axis=1, keepdims = True)
dA2 = np.dot(W3.T, dZ3)
dZ2 = np.multiply(dA2, np.int64(A2 > 0))
dW2 = 1./m * np.dot(dZ2, A1.T) * 2
db2 = 1./m * np.sum(dZ2, axis=1, keepdims = True)
dA1 = np.dot(W2.T, dZ2)
dZ1 = np.multiply(dA1, np.int64(A1 > 0))
dW1 = 1./m * np.dot(dZ1, X.T)
db1 = 4./m * np.sum(dZ1, axis=1, keepdims = True)
gradients = {"dZ3": dZ3, "dW3": dW3, "db3": db3,
"dA2": dA2, "dZ2": dZ2, "dW2": dW2, "db2": db2,
"dA1": dA1, "dZ1": dZ1, "dW1": dW1, "db1": db1}
return gradients
Explanation: Now, run backward propagation.
End of explanation
# GRADED FUNCTION: gradient_check_n
def gradient_check_n(parameters, gradients, X, Y, epsilon = 1e-7):
    """
    Checks if backward_propagation_n computes correctly the gradient of the cost output by forward_propagation_n

    Arguments:
    parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3":
    grad -- output of backward_propagation_n, contains gradients of the cost with respect to the parameters.
    x -- input datapoint, of shape (input size, 1)
    y -- true "label"
    epsilon -- tiny shift to the input to compute approximated gradient with formula(1)

    Returns:
    difference -- difference (2) between the approximated gradient and the backward propagation gradient
    """
    # Set-up variables
    parameters_values, _ = dictionary_to_vector(parameters)
    grad = gradients_to_vector(gradients)
    num_parameters = parameters_values.shape[0]
    J_plus = np.zeros((num_parameters, 1))
    J_minus = np.zeros((num_parameters, 1))
    gradapprox = np.zeros((num_parameters, 1))

    # Compute gradapprox
    for i in range(num_parameters):
        # Compute J_plus[i]. Inputs: "parameters_values, epsilon". Output = "J_plus[i]".
        # "_" is used because the helper returns two values but we only care about the first one
        ### START CODE HERE ### (approx. 3 lines)
        thetaplus = np.copy(parameters_values)                                        # Step 1
        thetaplus[i][0] = thetaplus[i][0] + epsilon                                   # Step 2
        J_plus[i], _ = forward_propagation_n(X, Y, vector_to_dictionary(thetaplus))   # Step 3
        ### END CODE HERE ###

        # Compute J_minus[i]. Inputs: "parameters_values, epsilon". Output = "J_minus[i]".
        ### START CODE HERE ### (approx. 3 lines)
        thetaminus = np.copy(parameters_values)                                         # Step 1
        thetaminus[i][0] = thetaminus[i][0] - epsilon                                   # Step 2
        J_minus[i], _ = forward_propagation_n(X, Y, vector_to_dictionary(thetaminus))   # Step 3
        ### END CODE HERE ###

        # Compute gradapprox[i]
        ### START CODE HERE ### (approx. 1 line)
        gradapprox[i] = (J_plus[i] - J_minus[i]) / (2 * epsilon)
        ### END CODE HERE ###

    # Compare gradapprox to backward propagation gradients by computing difference.
    ### START CODE HERE ### (approx. 1 line)
    numerator = np.linalg.norm(grad - gradapprox)                     # Step 1'
    denominator = np.linalg.norm(grad) + np.linalg.norm(gradapprox)   # Step 2'
    difference = numerator / denominator                              # Step 3'
    ### END CODE HERE ###

    if difference > 1e-7:
        print ("\033[93m" + "There is a mistake in the backward propagation! difference = " + str(difference) + "\033[0m")
    else:
        print ("\033[92m" + "Your backward propagation works perfectly fine! difference = " + str(difference) + "\033[0m")

    return difference
X, Y, parameters = gradient_check_n_test_case()
cost, cache = forward_propagation_n(X, Y, parameters)
gradients = backward_propagation_n(X, Y, cache)
difference = gradient_check_n(parameters, gradients, X, Y)
Explanation: You obtained some results on the fraud detection test set but you are not 100% sure of your model. Nobody's perfect! Let's implement gradient checking to verify if your gradients are correct.
How does gradient checking work?
As in 1) and 2), you want to compare "gradapprox" to the gradient computed by backpropagation. The formula is still:
$$ \frac{\partial J}{\partial \theta} = \lim_{\varepsilon \to 0} \frac{J(\theta + \varepsilon) - J(\theta - \varepsilon)}{2 \varepsilon} \tag{1}$$
However, $\theta$ is not a scalar anymore. It is a dictionary called "parameters". We implemented a function "dictionary_to_vector()" for you. It converts the "parameters" dictionary into a vector called "values", obtained by reshaping all parameters (W1, b1, W2, b2, W3, b3) into vectors and concatenating them.
The inverse function is "vector_to_dictionary" which outputs back the "parameters" dictionary.
<img src="images/dictionary_to_vector.png" style="width:600px;height:400px;">
<caption><center> <u> Figure 2 </u>: dictionary_to_vector() and vector_to_dictionary()<br> You will need these functions in gradient_check_n()</center></caption>
We have also converted the "gradients" dictionary into a vector "grad" using gradients_to_vector(). You don't need to worry about that.
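For intuition, here is a minimal sketch of what such a flattening helper could look like. This is only an illustration under assumed parameter names and ordering, not the course's exact implementation:
```python
import numpy as np

def dictionary_to_vector_sketch(parameters):
    # Flatten every parameter array into a column and stack them into one long vector.
    keys = ["W1", "b1", "W2", "b2", "W3", "b3"]              # assumed ordering
    chunks = [parameters[k].reshape(-1, 1) for k in keys]
    return np.concatenate(chunks, axis=0), keys

def vector_to_dictionary_sketch(theta, parameters):
    # Inverse operation: slice the long vector back into arrays of the original shapes.
    restored, start = {}, 0
    for k in ["W1", "b1", "W2", "b2", "W3", "b3"]:
        size = parameters[k].size
        restored[k] = theta[start:start + size].reshape(parameters[k].shape)
        start += size
    return restored
```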
Exercise: Implement gradient_check_n().
Instructions: Here is pseudo-code that will help you implement the gradient check.
For each i in num_parameters:
- To compute J_plus[i]:
1. Set $\theta^{+}$ to np.copy(parameters_values)
2. Set $\theta^{+}_i$ to $\theta^{+}_i + \varepsilon$
3. Calculate $J^{+}_i$ using forward_propagation_n(x, y, vector_to_dictionary($\theta^{+}$ )).
- To compute J_minus[i]: do the same thing with $\theta^{-}$
- Compute $gradapprox[i] = \frac{J^{+}_i - J^{-}_i}{2 \varepsilon}$
Thus, you get a vector gradapprox, where gradapprox[i] is an approximation of the gradient with respect to parameter_values[i]. You can now compare this gradapprox vector to the gradients vector from backpropagation. Just like for the 1D case (Steps 1', 2', 3'), compute:
$$ difference = \frac {\| grad - gradapprox \|_2}{\| grad \|_2 + \| gradapprox \|_2 } \tag{3}$$
End of explanation |
12,211 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The racist neighborhood
Step1: Suppose we have a neighborhood. This neighborhood is a matrix or grid in which each resident can occupy one cell.
Step2: Here we can see a small neighborhood with blue and red residents. There are also free cells that nobody occupies.
Each resident is affected by the 8 cells around them. In principle they do not mind having neighbors of a different color, but if the proportion of neighbors of their own color is only 1/3 or less, they will feel uncomfortable and want to leave.
In the plot above, uncomfortable residents are drawn with grey corners, while comfortable ones are not.
These uncomfortable residents will move as soon as they can. To represent this, we go through the list of residents, detect the ones who want to move, and relocate each of them to a new randomly chosen empty cell.
This represents one 'step'.
Step3: We can see that although the individuals do not prefer any kind of segregation, and their only condition is that at least 1/3 of their neighbors share their color, after only a few steps segregation into homogeneous groups has appeared as an emergent property.
Step4: How can we avoid this situation?
There is a simple way to avoid it
Step5: Extending the algorithm
Now that we have seen how this model works, let's play a bit with its parameters. Can you predict how these numbers will affect a three-color neighborhood?
Step6: Surprise! Did you expect this?
We can also look at how they have evolved
Step7: What do you think will happen in an identical but more socially aware neighborhood? | Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import vecindario as vc
Explanation: The racist neighborhood: Schelling's segregation model
Racial segregation has been a problem in many parts of the world for a long time. Even though various groups have made a great effort to solve it, many countries remain segregated along ethnic, religious, gender, wealth and other lines. Why is it such a hard problem to solve?
In 1971 the American economist Thomas Schelling created an agent-based model that could help explain why segregation is such a difficult problem to fight. His segregation model showed that individuals, or "agents", who were not especially demanding about their surroundings still tended to segregate over time. Although the model is remarkably simple, it gives an interesting perspective on how individuals can end up segregating even without any particular desire to do so.
(Translated from http://nifty.stanford.edu/2014/mccown-schelling-model-segregation/ )
(Link to Schelling's original paper for Harvard: http://www.stat.berkeley.edu/~aldous/157/Papers/Schelling_Seg_Models.pdf )
Setup
For this exercise we will use numpy, matplotlib and the code in the vecindario folder.
End of explanation
mundo, colores = vc.crear_mundo()
vc.vecin_print(mundo, colores, 10, 0)
Explanation: Suppose we have a neighborhood. This neighborhood is a matrix or grid in which each resident can occupy one cell.
End of explanation
n = vc.step_mudanza(mundo, colores, 1, 10)
Explanation: Here we can see a small neighborhood with blue and red residents. There are also free cells that nobody occupies.
Each resident is affected by the 8 cells around them. In principle they do not mind having neighbors of a different color, but if the proportion of neighbors of their own color is only 1/3 or less, they will feel uncomfortable and want to leave.
In the plot above, uncomfortable residents are drawn with grey corners, while comfortable ones are not.
These uncomfortable residents will move as soon as they can. To represent this, we go through the list of residents, detect the ones who want to move, and relocate each of them to a new randomly chosen empty cell.
This represents one 'step'.
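For intuition, a minimal sketch of the kind of satisfaction test a step like this relies on. The vecindario module's real implementation is not shown in this notebook, so the grid encoding below (0 = empty, 1/2 = colors) is an assumption:
```python
import numpy as np

def is_unhappy(grid, i, j, threshold=1/3):
    # A resident is unhappy if fewer than `threshold` of its occupied
    # 8-neighbourhood cells share its colour.
    colour = grid[i, j]
    if colour == 0:                      # empty cell, nobody to be unhappy
        return False
    window = grid[max(i-1, 0):i+2, max(j-1, 0):j+2]
    occupied_cells = window[window != 0]
    same = np.count_nonzero(occupied_cells == colour) - 1   # exclude the cell itself
    occupied = occupied_cells.size - 1
    return occupied > 0 and same / occupied < threshold
```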
End of explanation
n = vc.step_multiple(mundo, colores, n, 10)
Explanation: We can see that although the individuals do not prefer any kind of segregation, and their only condition is that at least 1/3 of their neighbors share their color, after only a few steps segregation into homogeneous groups has appeared as an emergent property.
End of explanation
mundo, colores = vc.crear_mundo(intom = 90)
n = vc.step_multiple(mundo, colores, 0, 10, numsteps=100)
Explanation: How can we avoid this situation?
There is a simple way to avoid it: have the individuals actively work against segregation. Implementing this is very easy: it is enough that they also feel uncomfortable when more than 90% of their neighbors are the same as them to obtain substantial changes in the segregation of the group.
End of explanation
dim = 50 # Side length of the matrix
vacios = 10 # Percentage of empty cells
colors = 3 # Number of colors
prop = [0.33, 0.33]
n = 0
intolerance = 33 # Minimum percentage of identical neighbors
intom = 100 # Maximum percentage of identical neighbors
rad = 1 # Radius within which the neighbors' colors are checked
historia_felicidad = [] # Here we will store the group's happiness at each step
historia_segregacion = [] # And here, the segregation value.
# Let's see how the simulation starts, with the residents placed at random.
par, datacolor = vc.crear_mundo(dim, colors= colors, prop= prop, vacios = vacios,
intolerance=intolerance, intom= intom, rad = rad,
h_fel = historia_felicidad, h_seg = historia_segregacion)
vc.vecin_print(par, datacolor, dim, n)
# What will happen after 50 steps?
n = vc.step_multiple(par, datacolor, n, dim,
historia_felicidad, historia_segregacion, 50)
Explanation: Extending the algorithm
Now that we have seen how this model works, let's play a bit with its parameters. Can you predict how these numbers will affect a three-color neighborhood?
End of explanation
vc.evolucion(historia_felicidad, historia_segregacion)
Explanation: Surprise! Did you expect this?
We can also look at how they have evolved:
End of explanation
intom = 90 # Maximum percentage of identical neighbors
n = 0 # We start again from 0
historia_felicidad = [] # Here we will store the group's happiness at each step
historia_segregacion = [] # And here, the segregation value.
# Let's see how the simulation starts, with the residents placed at random.
par, datacolor = vc.crear_mundo(dim, colors= colors, prop= prop, vacios = vacios,
intolerance=intolerance, intom= intom, rad = rad,
h_fel = historia_felicidad, h_seg = historia_segregacion)
vc.vecin_print(par, datacolor, dim, n)
# What will happen after 50 steps?
n = vc.step_multiple(par, datacolor, n, dim,
historia_felicidad, historia_segregacion, 50)
vc.evolucion(historia_felicidad, historia_segregacion)
Explanation: What do you think will happen in an identical but more socially aware neighborhood?
End of explanation |
12,212 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Compute seed based time-frequency connectivity in sensor space
Computes the connectivity between a seed-gradiometer close to the visual cortex
and all other gradiometers. The connectivity is computed in the time-frequency
domain using Morlet wavelets and the debiased Squared Weighted Phase Lag Index
[1] is used as connectivity metric.
[1] Vinck et al. "An improved index of phase-synchronization for electro-
physiological data in the presence of volume-conduction, noise and
sample-size bias" NeuroImage, vol. 55, no. 4, pp. 1548-1565, Apr. 2011.
Step1: Set parameters | Python Code:
# Author: Martin Luessi <[email protected]>
#
# License: BSD (3-clause)
import numpy as np
import mne
from mne import io
from mne.connectivity import spectral_connectivity, seed_target_indices
from mne.datasets import sample
from mne.time_frequency import AverageTFR
print(__doc__)
Explanation: Compute seed based time-frequency connectivity in sensor space
Computes the connectivity between a seed-gradiometer close to the visual cortex
and all other gradiometers. The connectivity is computed in the time-frequency
domain using Morlet wavelets and the debiased Squared Weighted Phase Lag Index
[1] is used as connectivity metric.
[1] Vinck et al. "An improved index of phase-synchronization for electro-
physiological data in the presence of volume-conduction, noise and
sample-size bias" NeuroImage, vol. 55, no. 4, pp. 1548-1565, Apr. 2011.
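As a quick illustration of what seed_target_indices produces (two aligned index arrays pairing the seed channel with every target), independent of the MEG data used below:
```python
import numpy as np
from mne.connectivity import seed_target_indices

# Pair seed channel 2 with targets 0..4; returns a (seed_idx, target_idx) tuple of arrays
seeds, targets = seed_target_indices(2, np.arange(5))
print(seeds)    # [2 2 2 2 2]
print(targets)  # [0 1 2 3 4]
```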
End of explanation
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
# Setup for reading the raw data
raw = io.read_raw_fif(raw_fname)
events = mne.read_events(event_fname)
# Add a bad channel
raw.info['bads'] += ['MEG 2443']
# Pick MEG gradiometers
picks = mne.pick_types(raw.info, meg='grad', eeg=False, stim=False, eog=True,
exclude='bads')
# Create epochs for left-visual condition
event_id, tmin, tmax = 3, -0.2, 0.5
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=(None, 0), reject=dict(grad=4000e-13, eog=150e-6),
preload=True)
# Use 'MEG 2343' as seed
seed_ch = 'MEG 2343'
picks_ch_names = [raw.ch_names[i] for i in picks]
# Create seed-target indices for connectivity computation
seed = picks_ch_names.index(seed_ch)
targets = np.arange(len(picks))
indices = seed_target_indices(seed, targets)
# Define wavelet frequencies and number of cycles
cwt_frequencies = np.arange(7, 30, 2)
cwt_n_cycles = cwt_frequencies / 7.
# Run the connectivity analysis using 2 parallel jobs
sfreq = raw.info['sfreq'] # the sampling frequency
con, freqs, times, _, _ = spectral_connectivity(
epochs, indices=indices,
method='wpli2_debiased', mode='cwt_morlet', sfreq=sfreq,
cwt_frequencies=cwt_frequencies, cwt_n_cycles=cwt_n_cycles, n_jobs=1)
# Mark the seed channel with a value of 1.0, so we can see it in the plot
con[np.where(indices[1] == seed)] = 1.0
# Show topography of connectivity from seed
title = 'WPLI2 - Visual - Seed %s' % seed_ch
layout = mne.find_layout(epochs.info, 'meg') # use full layout
tfr = AverageTFR(epochs.info, con, times, freqs, len(epochs))
tfr.plot_topo(fig_facecolor='w', font_color='k', border='k')
Explanation: Set parameters
End of explanation |
12,213 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Neural Network
Learning Objectives
Step1: Next, we'll load our data set.
Step2: Examine the data
It's a good idea to get to know your data a little bit before you work with it.
We'll print out a quick summary of a few useful statistics on each column.
This will include things like mean, standard deviation, max, min, and various quantiles.
Step3: This data is at the city block level, so these features reflect the total number of rooms in that block, or the total number of people who live on that block, respectively. Let's create a different, more appropriate feature. Because we are predicting the price of a single house, we should try to make all our features correspond to a single house as well
Step4: Build a neural network model
In this exercise, we'll be trying to predict median_house_value. It will be our label (sometimes also called a target). We'll use the remaining columns as our input features.
To train our model, we'll first use the LinearRegressor interface. Then, we'll change to DNNRegressor | Python Code:
import math
import shutil
import numpy as np
import pandas as pd
import tensorflow as tf
tf.logging.set_verbosity(tf.logging.INFO)
pd.options.display.max_rows = 10
pd.options.display.float_format = '{:.1f}'.format
Explanation: Neural Network
Learning Objectives:
* Use the DNNRegressor class in TensorFlow to predict median housing price
The data is based on 1990 census data from California. This data is at the city block level, so these features reflect the total number of rooms in that block, or the total number of people who live on that block, respectively.
Let's use a set of features to predict house value.
## Set Up
In this first cell, we'll load the necessary libraries.
End of explanation
df = pd.read_csv("https://storage.googleapis.com/ml_universities/california_housing_train.csv", sep=",")
Explanation: Next, we'll load our data set.
End of explanation
df.head()
df.describe()
Explanation: Examine the data
It's a good idea to get to know your data a little bit before you work with it.
We'll print out a quick summary of a few useful statistics on each column.
This will include things like mean, standard deviation, max, min, and various quantiles.
End of explanation
df['num_rooms'] = df['total_rooms'] / df['households']
df['num_bedrooms'] = df['total_bedrooms'] / df['households']
df['persons_per_house'] = df['population'] / df['households']
df.describe()
df.drop(['total_rooms', 'total_bedrooms', 'population', 'households'], axis = 1, inplace = True)
df.describe()
Explanation: This data is at the city block level, so these features reflect the total number of rooms in that block, or the total number of people who live on that block, respectively. Let's create a different, more appropriate feature. Because we are predicting the price of a single house, we should try to make all our features correspond to a single house as well
End of explanation
featcols = {
colname : tf.feature_column.numeric_column(colname) \
for colname in 'housing_median_age,median_income,num_rooms,num_bedrooms,persons_per_house'.split(',')
}
# Bucketize lat, lon so it's not so high-res; California is mostly N-S, so more lats than lons
featcols['longitude'] = tf.feature_column.bucketized_column(tf.feature_column.numeric_column('longitude'),
np.linspace(-124.3, -114.3, 5).tolist())
featcols['latitude'] = tf.feature_column.bucketized_column(tf.feature_column.numeric_column('latitude'),
np.linspace(32.5, 42, 10).tolist())
featcols.keys()
# Split into train and eval
msk = np.random.rand(len(df)) < 0.8
traindf = df[msk]
evaldf = df[~msk]
SCALE = 100000
BATCH_SIZE= 100
OUTDIR = './housing_trained'
train_input_fn = tf.estimator.inputs.pandas_input_fn(x = traindf[list(featcols.keys())],
y = traindf["median_house_value"] / SCALE,
num_epochs = None,
batch_size = BATCH_SIZE,
shuffle = True)
eval_input_fn = tf.estimator.inputs.pandas_input_fn(x = evaldf[list(featcols.keys())],
y = evaldf["median_house_value"] / SCALE, # note the scaling
num_epochs = 1,
batch_size = len(evaldf),
shuffle=False)
# Linear Regressor
def train_and_evaluate(output_dir, num_train_steps):
myopt = tf.train.FtrlOptimizer(learning_rate = 0.01) # note the learning rate
estimator = tf.estimator.LinearRegressor(
model_dir = output_dir,
feature_columns = featcols.values(),
optimizer = myopt)
#Add rmse evaluation metric
def rmse(labels, predictions):
pred_values = tf.cast(predictions['predictions'],tf.float64)
return {'rmse': tf.metrics.root_mean_squared_error(labels*SCALE, pred_values*SCALE)}
estimator = tf.contrib.estimator.add_metrics(estimator,rmse)
train_spec=tf.estimator.TrainSpec(
input_fn = train_input_fn,
max_steps = num_train_steps)
eval_spec=tf.estimator.EvalSpec(
input_fn = eval_input_fn,
steps = None,
start_delay_secs = 1, # start evaluating after N seconds
throttle_secs = 10, # evaluate every N seconds
)
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
# Run training
shutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time
train_and_evaluate(OUTDIR, num_train_steps = (100 * len(traindf)) / BATCH_SIZE)
# DNN Regressor
def train_and_evaluate(output_dir, num_train_steps):
myopt = tf.train.FtrlOptimizer(learning_rate = 0.01) # note the learning rate
estimator = tf.estimator.DNNRegressor( # TODO filled in: one possible DNN Regressor (hidden_units chosen arbitrarily; tune as needed)
model_dir = output_dir,
hidden_units = [100, 50, 20],
feature_columns = featcols.values(),
optimizer = myopt)
#Add rmse evaluation metric
def rmse(labels, predictions):
pred_values = tf.cast(predictions['predictions'],tf.float64)
return {'rmse': tf.metrics.root_mean_squared_error(labels*SCALE, pred_values*SCALE)}
estimator = tf.contrib.estimator.add_metrics(estimator,rmse)
train_spec=tf.estimator.TrainSpec(
input_fn = train_input_fn,
max_steps = num_train_steps)
eval_spec=tf.estimator.EvalSpec(
input_fn = eval_input_fn,
steps = None,
start_delay_secs = 1, # start evaluating after N seconds
throttle_secs = 10, # evaluate every N seconds
)
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
# Run training
shutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time
tf.summary.FileWriterCache.clear() # ensure filewriter cache is clear for TensorBoard events file
train_and_evaluate(OUTDIR, num_train_steps = (100 * len(traindf)) / BATCH_SIZE)
Explanation: Build a neural network model
In this exercise, we'll be trying to predict median_house_value. It will be our label (sometimes also called a target). We'll use the remaining columns as our input features.
To train our model, we'll first use the LinearRegressor interface. Then, we'll change to DNNRegressor
End of explanation |
12,214 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
What is NetCDF?
<img src='http
Step1: mode='r' is the default.
mode='a' opens an existing file and allows for appending (does not clobber existing data)
format can be one of
NETCDF3_CLASSIC
NETCDF3_64BIT
NETCDF4_CLASSIC
NETCDF4 (default).
NETCDF4_CLASSIC uses HDF5 for the underlying storage layer (as does NETCDF4) but enforces the classic netCDF 3 data model so data can be read with older clients.
Load essential modules
Step2: Just to be safe, make sure dataset is not already open
Step3: Creating dimensions
The ncfile object we created is a container for dimensions, variables, and attributes. First, let's create some dimensions using the createDimension method.
Every dimension has a name and a length.
The name is a string that is used to specify the dimension to be used when creating a variable, and as a key to access the dimension object in the ncfile.dimensions dictionary.
Setting the dimension length to 0 or None makes it unlimited, so it can grow.
For NETCDF4 files, any variable's dimension can be unlimited.
For NETCDF4_CLASSIC and NETCDF3* files, only one per variable can be unlimited, and it must be the leftmost (slowest varying) dimension.
Decide what dimensions our data will have
Step4: Creating attributes
netCDF attributes can be created just like you would for any python object.
Best to adhere to established conventions (like the CF conventions)
We won't try to adhere to any specific convention here though.
Step5: You can also easily delete a netCDF attribute of a Dataset by using delncattr method
Step6: Creating variables
Now let's add some variables and store some data in them.
A variable has a name, a type, a shape, and some data values.
The shape of a variable is specified by a tuple of dimension names.
A variable should also have some named attributes, such as 'units', that describe the data.
The createVariable method takes 3 mandatory args.
the 1st argument is the variable name (a string). This is used as the key to access the variable object from the variables dictionary.
the 2nd argument is the datatype (most numpy datatypes supported).
the third argument is a tuple containing the dimension names (the dimensions must be created first). Unless this is a NETCDF4 file, any unlimited dimension must be the leftmost one.
there are lots of optional arguments (many of which are only relevant when format='NETCDF4') to control compression, chunking, fill_value, etc.
Step7: Define a 3D variable to hold the data
Step8: Pre-defined variable attributes (read only)
The netCDF4 module provides some useful pre-defined Python attributes for netCDF variables, such as dimensions, shape, dtype, ndim.
Note
Step9: Writing data
To write data a netCDF variable object, just treat it like a numpy array and assign values to a slice.
Step10: You can just treat a netCDF Variable object like a numpy array and assign values to it.
However, unlike numpy arrays, variables automatically grow along unlimited dimensions
The above writes the whole 3D variable all at once, but you can write it a slice at a time instead.
Let's add another time slice....
Step11: Note that we have not yet written any data to the time variable. It automatically grew as we appended data along the time dimension to the variable temp, but the data are missing.
Step12: Dashes indicate masked values (where data have not yet been written).
Now, to work with time objects we will need some extra imports
Step13: Closing a netCDF file
It's important to close a netCDF file you opened for writing
Step14: Check again using ncdump utility
Step15: Appending data to NetCDF dataset
Step16: Create an averaged array using the existing "air_temperature" field
Step17: Write the data
Step18: Open the resulting dataset and plot some data
Step19: Open the file for reading
Step20: First, try this handy method of extracting variables
Step21: References
This notebook is built upon the great materials of the Unidata Python Workshop | Python Code:
import os
path_to_file = os.path.join(os.pardir, 'data', 'new.nc')
Explanation: What is NetCDF?
<img src='http://www.unidata.ucar.edu/images/logos/netcdf-50x50.png'>
NetCDF (network Common Data Form) is a set of interfaces for array-oriented data access and a freely distributed collection of data access libraries for C, Fortran, C++, Java, and other languages.
NetCDF data are:
Self-Describing. A netCDF file includes information about the data it contains.
Portable. A netCDF file can be accessed by computers with different ways of storing integers, characters, and floating-point numbers.
Scalable. A small subset of a large dataset may be accessed efficiently.
Appendable. Data may be appended to a properly structured netCDF file without copying the dataset or redefining its structure.
Sharable. One writer and multiple readers may simultaneously access the same netCDF file.
Archivable. Access to all earlier forms of netCDF data will be supported by current and future versions of the software.
Opening a file, creating a new Dataset
Let's create an empty NetCDF file named '../data/new.nc', opened for writing. Note, opening a file with 'w' will clobber any existing data (unless clobber=False is used, in which case an exception is raised if the file already exists).
End of explanation
from __future__ import division, print_function # py2to3 compatibility
import netCDF4 as nc
import numpy as np
print('NetCDF package version: {}'.format(nc.__version__))
Explanation: mode='r' is the default.
mode='a' opens an existing file and allows for appending (does not clobber existing data)
format can be one of
NETCDF3_CLASSIC
NETCDF3_64BIT
NETCDF4_CLASSIC
NETCDF4 (default).
NETCDF4_CLASSIC uses HDF5 for the underlying storage layer (as does NETCDF4) but enforces the classic netCDF 3 data model so data can be read with older clients.
Load essential modules
End of explanation
try:
ncfile.close()
except:
pass
# another way of checking this:
# if ncfile.isopen():
# ncfile.close()
ncfile = nc.Dataset(path_to_file, mode='w',
format='NETCDF4_CLASSIC')
print(ncfile)
Explanation: Just to be safe, make sure dataset is not already open
End of explanation
nlat = 73
nlon = 144
lat_dim = ncfile.createDimension('lat', nlat) # latitude axis
lon_dim = ncfile.createDimension('lon', nlon) # longitude axis
time_dim = ncfile.createDimension('time', None) # unlimited axis
for dim in ncfile.dimensions.items():
print(dim)
Explanation: Creating dimensions
The ncfile object we created is a container for dimensions, variables, and attributes. First, let's create some dimensions using the createDimension method.
Every dimension has a name and a length.
The name is a string that is used to specify the dimension to be used when creating a variable, and as a key to access the dimension object in the ncfile.dimensions dictionary.
Setting the dimension length to 0 or None makes it unlimited, so it can grow.
For NETCDF4 files, any variable's dimension can be unlimited.
For NETCDF4_CLASSIC and NETCDF3* files, only one per variable can be unlimited, and it must be the leftmost (slowest varying) dimension.
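Once created, dimensions can be inspected directly; a small illustrative snippet, assuming the ncfile and dimensions created in the next code cell:
```python
# Query a dimension's length and whether it is unlimited
print(len(ncfile.dimensions['lat']))             # 73
print(ncfile.dimensions['time'].isunlimited())   # True
```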
Decide what dimensions our data will have
End of explanation
ncfile.author = 'UEA Python Group'
ncfile.title='My model data'
print(ncfile)
Explanation: Creating attributes
netCDF attributes can be created just like you would for any python object.
Best to adhere to established conventions (like the CF conventions)
We won't try to adhere to any specific convention here though.
End of explanation
ncfile.some_unnecessary_attribute = '123456'
ncfile.delncattr('some_unnecessary_attribute')
Explanation: You can also easily delete a netCDF attribute of a Dataset by using delncattr method:
End of explanation
# Define two variables with the same names as dimensions,
# a conventional way to define "coordinate variables".
lat = ncfile.createVariable('lat', np.float32, ('lat',))
lat.units = 'degrees_north'
lat.long_name = 'latitude'
#
lon = ncfile.createVariable('lon', np.float32, ('lon',))
lon.units = 'degrees_east'
lon.long_name = 'longitude'
#
time = ncfile.createVariable('time', np.float64, ('time',))
time.units = 'hours since 1800-01-01'
time.long_name = 'time'
Explanation: Creating variables
Now let's add some variables and store some data in them.
A variable has a name, a type, a shape, and some data values.
The shape of a variable is specified by a tuple of dimension names.
A variable should also have some named attributes, such as 'units', that describe the data.
The createVariable method takes 3 mandatory args.
the 1st argument is the variable name (a string). This is used as the key to access the variable object from the variables dictionary.
the 2nd argument is the datatype (most numpy datatypes supported).
the third argument is a tuple containing the dimension names (the dimensions must be created first). Unless this is a NETCDF4 file, any unlimited dimension must be the leftmost one.
there are lots of optional arguments (many of which are only relevant when format='NETCDF4') to control compression, chunking, fill_value, etc.
End of explanation
temp = ncfile.createVariable('temp', np.float64,
('time', 'lat', 'lon')) # note: unlimited dimension is leftmost
temp.units = 'K' # degrees Kelvin
temp.standard_name = 'air_temperature' # this is a CF standard name
print(temp)
Explanation: Define a 3D variable to hold the data
End of explanation
print("Some pre-defined attributes for variable temp:\n")
print("temp.dimensions:", temp.dimensions)
print("temp.shape:", temp.shape)
print("temp.dtype:", temp.dtype)
print("temp.ndim:", temp.ndim)
Explanation: Pre-defined variable attributes (read only)
The netCDF4 module provides some useful pre-defined Python attributes for netCDF variables, such as dimensions, shape, dtype, ndim.
Note: since no data has been written yet, the length of the 'time' dimension is 0.
End of explanation
# Write latitudes, longitudes.
# Note: the ":" is necessary in these "write" statements
lat[:] = -90. + (180 / nlat) * np.arange(nlat) # south pole to north pole
lon[:] = (180 / nlat) * np.arange(nlon) # Greenwich meridian eastward
ntimes = 5 # 5 Time slices to begin with
# create a 3D array of random numbers
data_arr = np.random.uniform(low=280, high=330, size=(ntimes, nlat, nlon))
# Write the data. This writes the whole 3D netCDF variable all at once.
temp[:] = data_arr # Appends data along unlimited dimension
Explanation: Writing data
To write data to a netCDF variable object, just treat it like a numpy array and assign values to a slice.
End of explanation
# create a 2D array of random numbers
data_slice = np.random.uniform(low=270, high=290, size=(nlat, nlon))
temp[5, :, :] = data_slice # Appends the 6th time slice
print(" Wrote more data, temp.shape is now ", temp.shape)
Explanation: You can just treat a netCDF Variable object like a numpy array and assign values to it.
However, unlike numpy arrays, variables automatically grow along unlimited dimensions
The above writes the whole 3D variable all at once, but you can write it a slice at a time instead.
Let's add another time slice....
End of explanation
print(time)
times_arr = time[:]
print(type(times_arr), times_arr)
Explanation: Note that we have not yet written any data to the time variable. It automatically grew as we appended data along the time dimension to the variable temp, but the data are missing.
End of explanation
import datetime as dt
from netCDF4 import date2num, num2date
# 1st 6 days of October.
dates = [dt.datetime(2016, 10, 1, 0),
dt.datetime(2016, 10, 2, 0),
dt.datetime(2016, 10, 3, 0),
dt.datetime(2016, 10, 4, 0),
dt.datetime(2016, 10, 5, 0),
dt.datetime(2016, 10, 6, 0)]
print('\n'.join([str(i) for i in dates]))
times = date2num(dates, time.units)
print(times, time.units) # numeric values
time[:] = times
# read time data back, convert to datetime instances, check values.
print(num2date(time[:], time.units))
Explanation: Dashes indicate masked values (where data have not yet been written).
Now, to work with time objects we will need some extra imports:
End of explanation
# first print the Dataset object to see what we've got
print(ncfile)
# close the Dataset.
ncfile.close()
Explanation: Closing a netCDF file
It's important to close a netCDF file you opened for writing:
flushes buffers to make sure all data gets written
releases memory resources used by open netCDF files
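If your netCDF4-python version supports it, a with-block is a convenient way to guarantee the file gets closed; a small sketch (not used in this notebook):
```python
# Dataset works as a context manager, so the file is closed automatically
with nc.Dataset(path_to_file, mode='a') as ds:
    ds.history = 'appended an attribute inside a with-block'
```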
End of explanation
!ncdump -h ../data/new.nc
Explanation: Check again using ncdump utility
End of explanation
ncfile = nc.Dataset(path_to_file, 'a')
temp_ave = ncfile.createVariable('zonal_mean_temp',
np.float64, ('time', 'lat'))
temp_ave.units = 'K'
temp_ave.standard_name = 'zonally_averaged_air_temperature'
print(temp_ave)
Explanation: Appending data to NetCDF dataset
End of explanation
temp = ncfile.variables['temp'][:]
print(temp.shape)
ave_arr = np.mean(temp[:], axis=2)
print(ave_arr.shape)
Explanation: Create an averaged array using the existing "air_temperature" field:
End of explanation
temp_ave[:] = ave_arr # again, note the square brackets!
ncfile.close()
Explanation: Write the data
End of explanation
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: Open the resulting dataset and plot some data
End of explanation
ncfile = nc.Dataset(path_to_file, 'r')
Explanation: Open the file for reading
End of explanation
try:
ncfile.get_variables_by_attributes(units='K')
ncfile.get_variables_by_attributes(ndim=1)
except:
pass
t = ncfile.variables['zonal_mean_temp']
lats = ncfile.variables['lat']
times = ncfile.variables['time']
dt = num2date(times[:], times.units)
fig, ax = plt.subplots(figsize=(10, 6))
p = ax.contourf(lats[:], dt, t[:], cmap='inferno')
cb = fig.colorbar(p, ax=ax)
ax.tick_params(labelsize=20)
ax.set_xlabel(lats.long_name, fontsize=22)
ax.set_ylabel(times.long_name, fontsize=22)
ax.set_title('{} ({})'.format(t.standard_name.replace('_', ' '), t.units), fontsize=20)
print('Here is the plot')
Explanation: First, try this handy method of extracting variables: get_variables_by_attributes. Note: it's available in netCDF4>1.2.0.
End of explanation
HTML(html)
Explanation: References
This notebook is built upon the great materials of the Unidata Python Workshop:
* https://github.com/Unidata/unidata-python-workshop/blob/master/notebooks/netCDF-Writing.ipynb
Other interesting and useful projects using netcdf4-python
xarray: N-dimensional variant of the core pandas data structure that can operate on netcdf variables.
Iris: their data model creates a data abstraction layer which isolates analysis and visualisation code from data format specifics. Can also handle GRIB and PP formats.
Biggus: Virtual large arrays (from netcdf variables) with lazy evaluation.
cf-python: Implements the CF data model for the reading, writing and processing of data and metadata.
End of explanation |
12,215 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
SciPy
ล to je SciPy?
SciPy je nadgradnja NumPy paketa, i sadrลพi veliki broj numeriฤkih algoritama za cijeli niz podruฤja. Ovdje su pobrojana neka nama zanimljivija
Step1: Narvno, moลพemo uฤitati i samo podpaket koji nas zanima, u ovom sluฤaju za linearnu algebru.
Step2: Specijalne funkcije
Kao primjer pogledajmo Besselove funkcije
Step3: Numeriฤka integracija
Step4: quad funkcija se koriste za numeriฤku integracije (quad ... jer se na engleskom taj proces zove kvadratura). dblquad sluลพi za dvostruke, a tplquad za trostruke integrale.
Jednostavan primjer, raฤunamo
$$\begin{equation} \int_0^1 x\, \mathrm{d}x \end{equation}$$
Step6: Ove funkcije imaju puno opcionalnih argumenata. Ako ลพelimo funkciji koju integriramo proslijediti dodatne parametre (vidi ovdje), moลพemo korisiti varijablu args.
Step7: Koriลกtenje anonimnih funkcija
Step8: U viลกe dimenzija
Step9: Obiฤne diferencijalne jednadลพbe (ODJ)
SciPy nudi dvije moguฤnosti rjeลกavanja ODJ
Step10: Sustav ODJ zapisujemo kao
Step12: Ovo su jednadลพe s wiki stranice
Step13: Jednostavna animacija, kasnije ฤemo vidjeti kako moลพemo napravit bolju animaciju.
Step15: Priguลกeni dinamiฤki oscilator
Opis problema moลพete proฤitati ovdje
Step16: Fourierova transformacija
Paket je fftpack
Step17: Primjenimo Fourierovu transformaciju na prethodni primjer harmoniฤkog oscilatora.
Step18: Kako je signal realan, spektar je simetriฤan. Stoga nam je dosta nacrtati pozitivne frekvencije.
Step19: Linearna algebra
Detaljna dokumentacija
Step20: $A X = B$
Step21: Svojstveni problem
\begin{equation}\displaystyle A v = \lambda v\end{equation}
Step22: Svojstveni vektori su stupci u evecs
Step23: To nije sve, postoje i specijalizirane funkcije, kao npr. eigh za hermitske matrice
Matriฤne operacije
Step24: Rijetke matrice
Viลกe informacija na http
Step25: Pametniji naฤin kreiranja rijetke matrice.
Step26: Vektor v smo mogli konstruirati i drugaฤije (vidjeli smo primjere u predavanju o NumPy-ju), no uvijek trebamo doฤi do dvodimenzionalnog niza. Za razliku od MATLAB-a u kojemu su svi nizovi 2D, u NumPy-ju 1D niz nije isto ลกto i matrica $n\times 1$ ili $1\times n$.
Npr. jedna moguฤnost je
v = array([[1,2,3,4]]).T
Step27: Optimizacija
Viลกe na http
Step28: Nalaลพenje minimuma
Step29: Nalaลพenje rjeลกenja jednadลพbi
Problem oblika $f(x) = 0$ se rjeลกava fsolve funkcijom.
Step30: Interpolacija
Funkcija interp1d, za dane nizove $x$ i $y$ koordinata vraฤa objekt koji se ponaลกa kao funkcija.
Step31: Statistika
Viลกe na http
Step32: Osnovna statistika | Python Code:
from scipy import *
Explanation: SciPy
What is SciPy?
SciPy builds on the NumPy package and contains a large number of numerical algorithms for a whole range of areas. Listed here are some of the ones most interesting for us:
Special functions (scipy.special)
Integration (scipy.integrate)
Optimization (scipy.optimize)
Interpolation (scipy.interpolate)
Fourier transform (scipy.fftpack)
Linear algebra (scipy.linalg)
Linear algebra with sparse matrices (scipy.sparse)
Statistics (scipy.stats)
Image processing (scipy.ndimage)
For the last two areas there are also more advanced packages. For images we have already used e.g. scikit-image, and for statistics we will use the pandas package.
We load the SciPy package via the scipy module.
End of explanation
import scipy.linalg as la
Explanation: Of course, we can also load just the subpackage we are interested in, in this case the one for linear algebra.
End of explanation
# jn, yn: Besselove funkcije prvog i drugog reda s realnim stupnjem
# jn_zeros, yn_zeros: raฤunaju pripadne nultoฤke
from scipy.special import jn, yn, jn_zeros, yn_zeros
n = 0 # stupanj
x = 0.0
print ("J_{}({}) = {:f}".format(n, x, jn(n, x)))
x = 1.0
print ("Y_{}({}) = {:f}".format(n, x, yn(n, x)))
from pylab import *
%matplotlib inline
x = linspace(0, 10, 100)
fig, ax = subplots()
for n in range(4):
ax.plot(x, jn(n, x), label=r"$J_%d(x)$" % n)
ax.legend();
n = 0 # stupanj
m = 4 # broj nultoฤaka za izraฤunati
jn_zeros(n, m)
Explanation: Special functions
As an example, let's look at the Bessel functions:
End of explanation
from scipy.integrate import quad, dblquad, tplquad
Explanation: Numerical integration
End of explanation
def f(x):
return x
x_donje = 0
x_gornje = 1
rez, abserr = quad(f, x_donje, x_gornje)
print ("Rezultat = {}, apsolutna greลกka = {}".format(rez,abserr))
Explanation: The quad function is used for numerical integration (quad ... because in English the process is called quadrature). dblquad is used for double and tplquad for triple integrals.
As a simple example, we compute
$$\begin{equation} \int_0^1 x\, \mathrm{d}x \end{equation}$$
End of explanation
def integrand(x, n):
    """Bessel function of the first kind of order n."""
    return jn(n, x)
x_d = 0
x_g = 10
rez, abserr = quad(integrand, x_d, x_g, args=(3,))
print (rez, abserr)
Explanation: These functions have many optional arguments. If we want to pass extra parameters to the function being integrated (see here), we can use the args argument.
End of explanation
rez, abserr = quad(lambda x: exp(-x ** 2), -Inf, Inf)
print ("numeriฤki = {}, {}".format(rez, abserr))
egzaktno = sqrt(pi)
print ("egzaktno = {}".format(egzaktno))
Explanation: Using anonymous functions:
End of explanation
def integrand(x, y):
return exp(-x**2-y**2)
x_d = 0
x_g = 10
y_d = 0
y_g = 10
# ovdje je a = x_d, b = x_g, g(x) = y_d, h(x) = y_g
# g(x) i h(x) trebaju biti funkcije!
rez, abserr = dblquad(integrand, x_d, x_g, lambda x : y_d, lambda x: y_g)
print (rez, abserr)
Explanation: In more dimensions:
\begin{equation}
\int_a^b \int_{g(x)}^{h(x)} f(x,y)\,\mathrm{d}y\mathrm{d}x
\end{equation}
End of explanation
from scipy.integrate import odeint, ode
Explanation: Ordinary differential equations (ODEs)
SciPy offers two ways of solving ODEs: the odeint function and the ode class. We will demonstrate odeint.
End of explanation
from IPython.display import Image
Image(url='http://upload.wikimedia.org/wikipedia/commons/c/c9/Double-compound-pendulum-dimensioned.svg')
Explanation: We write a system of ODEs as:
$y' = f(y, t)$
where
$y = [y_1(t), y_2(t), ..., y_n(t)]$,
We also need the initial conditions $y(0)$.
This is the syntax:
y_t = odeint(f, y_0, t)
t is the array of time points at which we want the ODE solved
y_t is an array with one row for each time in t, whose columns give the solution y_i(t) at that time
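A minimal, self-contained illustration of that call signature (simple exponential decay, unrelated to the pendulum example below):
```python
import numpy as np
from scipy.integrate import odeint

def decay(y, t):
    return -0.5 * y          # dy/dt = -0.5 y

t = np.linspace(0, 10, 50)
y_t = odeint(decay, 1.0, t)  # one row per time point, one column per state variable
print(y_t.shape)             # (50, 1)
```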
The double pendulum
Problem description: http://en.wikipedia.org/wiki/Double_pendulum
End of explanation
g = 9.82
L = 0.5
m = 0.1
def dx(x, t):
    """Right-hand side of the ODE system."""
    x1, x2, x3, x4 = x[0], x[1], x[2], x[3]
    dx1 = 6.0/(m*L**2) * (2 * x3 - 3 * cos(x1-x2) * x4)/(16 - 9 * cos(x1-x2)**2)
    dx2 = 6.0/(m*L**2) * (8 * x4 - 3 * cos(x1-x2) * x3)/(16 - 9 * cos(x1-x2)**2)
    dx3 = -0.5 * m * L**2 * ( dx1 * dx2 * sin(x1-x2) + 3 * (g/L) * sin(x1))
    dx4 = -0.5 * m * L**2 * (-dx1 * dx2 * sin(x1-x2) + (g/L) * sin(x2))
    return [dx1, dx2, dx3, dx4]
# poฤetni uvjet
x0 = [pi/4, pi/2, 0, 0]
# niz vremena
t = linspace(0, 10, 250)
# rjeลกenje ODJ
x = odeint(dx, x0, t)
# nacrtajmo rjeลกenje
# crtamo kuteve
fig, axes = subplots(1,2, figsize=(12,4))
axes[0].plot(t, x[:, 0], 'r', label="theta1")
axes[0].plot(t, x[:, 1], 'b', label="theta2")
x1 = + L * sin(x[:, 0])
y1 = - L * cos(x[:, 0])
x2 = x1 + L * sin(x[:, 1])
y2 = y1 - L * cos(x[:, 1])
axes[1].plot(x1, y1, 'r', label="njihalo1")
axes[1].plot(x2, y2, 'b', label="njihalo2")
axes[1].set_ylim([-1, 0])
axes[1].set_xlim([1, -1]);
Explanation: These are the equations from the wiki page:
${\dot \theta_1} = \frac{6}{m\ell^2} \frac{ 2 p_{\theta_1} - 3 \cos(\theta_1-\theta_2) p_{\theta_2}}{16 - 9 \cos^2(\theta_1-\theta_2)}$
${\dot \theta_2} = \frac{6}{m\ell^2} \frac{ 8 p_{\theta_2} - 3 \cos(\theta_1-\theta_2) p_{\theta_1}}{16 - 9 \cos^2(\theta_1-\theta_2)}.$
${\dot p_{\theta_1}} = -\frac{1}{2} m \ell^2 \left [ {\dot \theta_1} {\dot \theta_2} \sin (\theta_1-\theta_2) + 3 \frac{g}{\ell} \sin \theta_1 \right ]$
${\dot p_{\theta_2}} = -\frac{1}{2} m \ell^2 \left [ -{\dot \theta_1} {\dot \theta_2} \sin (\theta_1-\theta_2) + \frac{g}{\ell} \sin \theta_2 \right]$
We define:
$x = [\theta_1, \theta_2, p_{\theta_1}, p_{\theta_2}]$
End of explanation
from IPython.display import display,clear_output
import time
fig, ax = subplots(figsize=(4,4))
for t_idx, tt in enumerate(t[:200]):
x1 = + L * sin(x[t_idx, 0])
y1 = - L * cos(x[t_idx, 0])
x2 = x1 + L * sin(x[t_idx, 1])
y2 = y1 - L * cos(x[t_idx, 1])
ax.cla()
ax.plot([0, x1], [0, y1], 'r.-')
ax.plot([x1, x2], [y1, y2], 'b.-')
ax.set_ylim([-1.5, 0.5])
ax.set_xlim([1, -1])
display(fig)
clear_output(wait=True)
time.sleep(0.03)
Explanation: A simple animation; later we will see how to make a better one.
End of explanation
def dy(y, t, zeta, w0):
    """Right-hand side of the ODE for the damped harmonic oscillator."""
    x, p = y[0], y[1]
    dx = p
    dp = -2 * zeta * w0 * p - w0**2 * x
    return [dx, dp]
# poฤetno stanje:
y0 = [1.0, 0.0]
# vremena, frekvencija
t = linspace(0, 10, 1000)
w0 = 2*pi*1.0
# rjeลกavamo ODJ za tri vrste priguลกenja
y1 = odeint(dy, y0, t, args=(0.0, w0)) # neguลกeno
y2 = odeint(dy, y0, t, args=(0.2, w0)) # podguลกeno
y3 = odeint(dy, y0, t, args=(1.0, w0)) # kritiฤko guลกenje
y4 = odeint(dy, y0, t, args=(5.0, w0)) # preguลกeno
fig, ax = subplots()
ax.plot(t, y1[:,0], 'k', label="neguลกeno", linewidth=0.25)
ax.plot(t, y2[:,0], 'r', label="podguลกeno")
ax.plot(t, y3[:,0], 'b', label="kritiฤko guลกenje")
ax.plot(t, y4[:,0], 'g', label="preguลกeno")
ax.legend();
Explanation: The damped oscillator
A description of the problem is available here: http://en.wikipedia.org/wiki/Damping
The equation is
\begin{equation} \frac{\mathrm{d}^2x}{\mathrm{d}t^2} + 2\zeta\omega_0\frac{\mathrm{d}x}{\mathrm{d}t} + \omega^2_0 x = 0 \end{equation}
$x$ is the oscillator position,
$\omega_0$ the frequency,
$\zeta$ the damping coefficient.
We define $p = \frac{\mathrm{d}x}{\mathrm{d}t}$:
\begin{equation} \frac{\mathrm{d}p}{\mathrm{d}t} = - 2\zeta\omega_0 p - \omega^2_0 x \end{equation}
\begin{equation} \frac{\mathrm{d}x}{\mathrm{d}t} = p \end{equation}
End of explanation
from scipy.fftpack import *
Explanation: The Fourier transform
The package is fftpack:
End of explanation
N = len(t)
dt = t[1]-t[0]
# y2 je rjeลกenje podguลกenog harmoniฤkog oscilatora
F = fft(y2[:,0])
# izraฤunajmo frekvencije
w = fftfreq(N, dt)
fig, ax = subplots(figsize=(9,3))
ax.plot(w, abs(F));
Explanation: Let's apply the Fourier transform to the previous harmonic oscillator example.
End of explanation
indeksi = where(w > 0)
w_pos = w[indeksi]
F_pos = F[indeksi]
fig, ax = subplots(figsize=(9,3))
ax.plot(w_pos, abs(F_pos))
ax.set_xlim(0, 5);
Explanation: Since the signal is real, the spectrum is symmetric, so it is enough to plot the positive frequencies.
End of explanation
A = array([[1,2,-1], [4,5,6], [7,8,9]])
b = array([1,2,3])
x = solve(A, b)
x
# provjera
dot(A, x) - b
Explanation: Linear algebra
Detailed documentation: http://docs.scipy.org/doc/scipy/reference/linalg.html
We will not go through all of the functions.
Systems of linear equations
$A x = b$
End of explanation
A = rand(3,3)
B = rand(3,3)
X = solve(A, B)
X
# provjera
norm(dot(A, X) - B)
Explanation: $A X = B$
End of explanation
evals = eigvals(A)
evals
evals, evecs = eig(A)
evals
evecs
Explanation: The eigenvalue problem
\begin{equation}\displaystyle A v = \lambda v\end{equation}
End of explanation
n = 1
norm(dot(A, evecs[:,n]) - evals[n] * evecs[:,n])
Explanation: The eigenvectors are the columns of evecs:
End of explanation
# inverz
inv(A)
# determinanta
det(A)
# razne norme
norm(A, ord=2), norm(A, ord=Inf)
Explanation: That is not all; there are also specialized functions, such as eigh for Hermitian matrices
Matrix operations
End of explanation
from scipy.sparse import *
# gusta matrica
M = array([[1,0,0,0], [0,3,0,0], [0,1,1,0], [1,0,0,1]]); M
# pretvorimo je u rijetku matricu
A = csr_matrix(M); A
# vratimo natrag
A.todense()
Explanation: Sparse matrices
More information at http://en.wikipedia.org/wiki/Sparse_matrix
There are several sparse matrix formats; we will not go into the details.
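As one example of those formats, a COO (coordinate) matrix can be built directly from row/column/value triplets; a small illustrative sketch that rebuilds the same matrix M used below:
```python
from scipy.sparse import coo_matrix

row = [0, 1, 2, 2, 3, 3]
col = [0, 1, 1, 2, 0, 3]
val = [1, 3, 1, 1, 1, 1]
C = coo_matrix((val, (row, col)), shape=(4, 4))
print(C.todense())
```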
End of explanation
A = lil_matrix((4,4)) # prazna 4x4 rijetka matrica
A[0,0] = 1
A[1,1] = 3
A[2,2] = A[2,1] = 1
A[3,3] = A[3,0] = 1
A
A.todense()
# konvertiranje
A = csr_matrix(A); A
A = csc_matrix(A); A
A.todense()
(A * A).todense()
(A @ A).todense()
dot(A,A)
v = array([1,2,3,4])[:,newaxis]; v
Explanation: A smarter way of creating a sparse matrix.
End of explanation
# rijetka matrica puta vektor
A * v
A.todense() * v
Explanation: We could have constructed the vector v differently (we saw examples in the NumPy lecture), but we always need to end up with a two-dimensional array. Unlike MATLAB, where all arrays are 2D, in NumPy a 1D array is not the same thing as an $n\times 1$ or $1\times n$ matrix.
For example, one possibility is
v = array([[1,2,3,4]]).T
End of explanation
from scipy import optimize
Explanation: Optimization
More at http://scipy-lectures.github.com/advanced/mathematical_optimization/index.html
The module is optimize:
End of explanation
def f(x):
return 4*x**3 + (x-2)**2 + x**4
fig, ax = subplots()
x = linspace(-5, 3, 100)
ax.plot(x, f(x));
x_min = optimize.fmin_bfgs(f, -2)
x_min
optimize.fmin_bfgs(f, 0.5)
optimize.brent(f)
optimize.fminbound(f, -4, 2)
Explanation: Finding minima
End of explanation
omega_c = 3.0
def f(omega):
return tan(2*pi*omega) - omega_c/omega
import numpy as np
np.seterr(divide='ignore')
fig, ax = subplots(figsize=(10,4))
x = linspace(0, 3, 1000)
y = f(x)
maska = where(abs(y) > 50)
x[maska] = y[maska] = NaN # da se rijeลกimo asimptote
ax.plot(x, y)
ax.plot([0, 3], [0, 0], 'k')
ax.set_ylim(-5,5);
optimize.fsolve(f, 0.1)
optimize.fsolve(f, 0.6)
optimize.fsolve(f, 1.1)
Explanation: Finding roots of equations
A problem of the form $f(x) = 0$ is solved with the fsolve function.
End of explanation
from scipy.interpolate import *
n = arange(0, 10)
x = linspace(0, 9, 100)
y_meas = sin(n) + 0.1 * randn(len(n)) # ubacujemo malo ลกuma
y_real = sin(x)
linear_interpolation = interp1d(n, y_meas)
y_interp1 = linear_interpolation(x)
cubic_interpolation = interp1d(n, y_meas, kind='cubic')
y_interp2 = cubic_interpolation(x)
fig, ax = subplots(figsize=(10,4))
ax.plot(n, y_meas, 'bs', label='podaci sa ลกumom')
ax.plot(x, y_real, 'k', lw=2, label='originalna funkcija')
ax.plot(x, y_interp1, 'r', label='linearna interpolacija')
ax.plot(x, y_interp2, 'g', label='kubiฤna interpolacija')
ax.legend(loc=3);
Explanation: Interpolation
Given arrays of $x$ and $y$ coordinates, the interp1d function returns an object that behaves like a function.
End of explanation
from scipy import stats
# sluฤajna varijabla s Poissionovom distribucijom
X = stats.poisson(3.5)
n = arange(0,15)
fig, axes = subplots(2,1, sharex=True)
# kumulativna distribucija (CDF)
axes[0].step(n, X.cdf(n))
# histogram 1000 sluฤajnih realizacija od X
axes[1].hist(X.rvs(size=1000));
# normalna distribucija
Y = stats.norm()
x = linspace(-5,5,100)
fig, axes = subplots(3,1, sharex=True)
# PDF
axes[0].plot(x, Y.pdf(x))
# CDF
axes[1].plot(x, Y.cdf(x));
# histogram
axes[2].hist(Y.rvs(size=1000), bins=50);
Explanation: Statistics
More at http://docs.scipy.org/doc/scipy/reference/stats.html.
Later we will work with the more powerful pandas package.
End of explanation
X.mean(), X.std(), X.var()
Y.mean(), Y.std(), Y.var()
from verzije import *
from IPython.display import HTML
HTML(print_sysinfo()+info_packages('numpy,scipy,matplotlib'))
Explanation: Basic statistics:
End of explanation |
12,216 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
DAT210x - Programming with Python for DS
Module5- Lab7
Step1: A Convenience Function
This method is for your visualization convenience only. You aren't expected to know how to put this together yourself, although you should be able to follow the code by now
Step2: The Assignment
Load in the dataset, identify nans, and set proper headers. Be sure to verify the rows line up by looking at the file in a text editor.
Step3: Copy out the status column into a slice, then drop it from the main dataframe. Always verify you properly executed the drop by double checking (printing out the resulting operating)! Many people forget to set the right axis here.
If you goofed up on loading the dataset and notice you have a sample column, this would be a good place to drop that too if you haven't already.
Step4: With the labels safely extracted from the dataset, replace any nan values with the mean feature / column value
Step5: Do train_test_split. Use the same variable names as on the EdX platform in the reading material, but set the random_state=7 for reproducibility, and keep the test_size at 0.5 (50%).
Step6: Experiment with the basic SKLearn preprocessing scalers. We know that the features consist of different units mixed in together, so it might be reasonable to assume feature scaling is necessary. Print out a description of the dataset, post transformation. Recall
Step7: Dimensionality Reduction
PCA and Isomap are your new best friends
Step8: Train your model against data_train, then transform both data_train and data_test using your model. You can save the results right back into the variables themselves.
Step9: Implement and train KNeighborsClassifier on your projected 2D training data here. You can name your variable knmodel. You can use any K value from 1 - 15, so play around with it and see what results you can come up. Your goal is to find a good balance where you aren't too specific (low-K), nor are you too general (high-K). You should also experiment with how changing the weights parameter affects the results.
Step10: Be sure to always keep the domain of the problem in mind! It's WAY more important to errantly classify a benign tumor as malignant, and have it removed, than to incorrectly leave a malignant tumor, believing it to be benign, and then having the patient progress in cancer. Since the UDF weights don't give you any class information, the only way to introduce this data into SKLearn's KNN Classifier is by "baking" it into your data. For example, randomly reducing the ratio of benign samples compared to malignant samples from the training set.
Calculate and display the accuracy of the testing set | Python Code:
import random, math
import pandas as pd
import numpy as np
import scipy.io
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
plt.style.use('ggplot') # Look Pretty (only matplotlib.pyplot is imported, so use the plt alias)
# Leave this alone until indicated:
Test_PCA = False
Explanation: DAT210x - Programming with Python for DS
Module5- Lab7
End of explanation
def plotDecisionBoundary(model, X, y):
print("Plotting...")
fig = plt.figure()
ax = fig.add_subplot(111)
padding = 0.1
resolution = 0.1
#(2 for benign, 4 for malignant)
colors = {2:'royalblue', 4:'lightsalmon'}
# Calculate the boundaries
x_min, x_max = X[:, 0].min(), X[:, 0].max()
y_min, y_max = X[:, 1].min(), X[:, 1].max()
x_range = x_max - x_min
y_range = y_max - y_min
x_min -= x_range * padding
y_min -= y_range * padding
x_max += x_range * padding
y_max += y_range * padding
# Create a 2D Grid Matrix. The values stored in the matrix
# are the predictions of the class at at said location
xx, yy = np.meshgrid(np.arange(x_min, x_max, resolution),
np.arange(y_min, y_max, resolution))
# What class does the classifier say?
Z = model.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
# Plot the contour map
plt.contourf(xx, yy, Z, cmap=plt.cm.seismic)
plt.axis('tight')
# Plot your testing points as well...
for label in np.unique(y):
indices = np.where(y == label)
plt.scatter(X[indices, 0], X[indices, 1], c=colors[label], alpha=0.8)
p = model.get_params()
plt.title('K = ' + str(p['n_neighbors']))
plt.show()
Explanation: A Convenience Function
This method is for your visualization convenience only. You aren't expected to know how to put this together yourself, although you should be able to follow the code by now:
End of explanation
# .. your code here ..
Explanation: The Assignment
Load in the dataset, identify nans, and set proper headers. Be sure to verify the rows line up by looking at the file in a text editor.
End of explanation
# .. your code here ..
Explanation: Copy out the status column into a slice, then drop it from the main dataframe. Always verify you properly executed the drop by double checking (printing out the resulting operating)! Many people forget to set the right axis here.
If you goofed up on loading the dataset and notice you have a sample column, this would be a good place to drop that too if you haven't already.
End of explanation
# .. your code here ..
Explanation: With the labels safely extracted from the dataset, replace any nan values with the mean feature / column value:
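One possible way to do that, assuming your dataframe is called df as in the earlier cells of this lab:
```python
# Replace remaining NaNs with each column's mean value
df = df.fillna(df.mean())
```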
End of explanation
# .. your code here ..
Explanation: Do train_test_split. Use the same variable names as on the EdX platform in the reading material, but set the random_state=7 for reproducibility, and keep the test_size at 0.5 (50%).
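A sketch of what that could look like; the data_train/data_test names follow the later instructions, while X and y here stand for whatever you called your features and labels:
```python
from sklearn.model_selection import train_test_split

data_train, data_test, label_train, label_test = train_test_split(
    X, y, test_size=0.5, random_state=7)
```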
End of explanation
# .. your code here ..
Explanation: Experiment with the basic SKLearn preprocessing scalers. We know that the features consist of different units mixed in together, so it might be reasonable to assume feature scaling is necessary. Print out a description of the dataset, post transformation. Recall: when you do pre-processing, which portion of the dataset is your model trained upon? Also which portion(s) of your dataset actually get transformed?
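For example, with one of the basic scalers (fit only on the training portion, then transform both splits):
```python
from sklearn import preprocessing

scaler = preprocessing.StandardScaler()   # try Normalizer, MinMaxScaler, RobustScaler too
data_train = scaler.fit_transform(data_train)
data_test = scaler.transform(data_test)
```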
End of explanation
model = None
if Test_PCA:
print('Computing 2D Principle Components')
# TODO: Implement PCA here. Save your model into the variable 'model'.
# You should reduce down to two dimensions.
# .. your code here ..
else:
print('Computing 2D Isomap Manifold')
# TODO: Implement Isomap here. Save your model into the variable 'model'
# Experiment with K values from 5-10.
# You should reduce down to two dimensions.
# .. your code here ..
Explanation: Dimensionality Reduction
PCA and Isomap are your new best friends
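A possible way to fill in the two TODO branches above (the component and neighbour counts are arbitrary starting points, not prescribed values):
```python
from sklearn.decomposition import PCA
from sklearn.manifold import Isomap

if Test_PCA:
    model = PCA(n_components=2)
else:
    model = Isomap(n_neighbors=5, n_components=2)
```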
End of explanation
# .. your code here ..
Explanation: Train your model against data_train, then transform both data_train and data_test using your model. You can save the results right back into the variables themselves.
End of explanation
# .. your code here ..
Explanation: Implement and train KNeighborsClassifier on your projected 2D training data here. You can name your variable knmodel. You can use any K value from 1 - 15, so play around with it and see what results you can come up. Your goal is to find a good balance where you aren't too specific (low-K), nor are you too general (high-K). You should also experiment with how changing the weights parameter affects the results.
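One possible implementation; K and the weights setting are just starting values to experiment with, and label_train is assumed to hold the training labels:
```python
from sklearn.neighbors import KNeighborsClassifier

knmodel = KNeighborsClassifier(n_neighbors=7, weights='uniform')
knmodel.fit(data_train, label_train)
```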
End of explanation
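One possible configuration (K and the weighting scheme are values to experiment with, not prescribed answers):
from sklearn.neighbors import KNeighborsClassifier
knmodel = KNeighborsClassifier(n_neighbors=5, weights='uniform')   # also try weights='distance'
knmodel.fit(X_train, y_train)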
# .. your code changes above ..
plotDecisionBoundary(knmodel, X_test, y_test)
Explanation: Be sure to always keep the domain of the problem in mind! It's WAY more important to errantly classify a benign tumor as malignant, and have it removed, than to incorrectly leave a malignant tumor, believing it to be benign, and then having the patient progress in cancer. Since the UDF weights don't give you any class information, the only way to introduce this data into SKLearn's KNN Classifier is by "baking" it into your data. For example, randomly reducing the ratio of benign samples compared to malignant samples from the training set.
Calculate and display the accuracy of the testing set:
End of explanation |
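A one-line sketch using the classifier's built-in scorer (assumes the knmodel and test split from the sketches above):
print(knmodel.score(X_test, y_test))   # mean accuracy on the held-out half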
12,217 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
University of Zagreb
Faculty of Electrical Engineering and Computing
Machine Learning 2018/2019
http
Step1: Exercises
1. Simple regression
You are given the example set $\mathcal{D}={(x^{(i)},y^{(i)})}_{i=1}^4 = {(0,4),(1,1),(2,2),(4,5)}$. Represent the examples with a matrix $\mathbf{X}$ of dimensions $N\times n$ (here $4\times 1$) and a label vector $\textbf{y}$ of dimensions $N\times 1$ (here $4\times 1$), as follows
Step2: (a)
Study the PolynomialFeatures function from the sklearn library and use it to generate the design matrix $\mathbf{\Phi}$ that does not map into a higher-dimensional space (each example only gets a dummy one appended; $m=n+1$).
Step3: (b)
Get acquainted with the linalg module. Compute the weights $\mathbf{w}$ of the linear regression model as $\mathbf{w}=(\mathbf{\Phi}^\intercal\mathbf{\Phi})^{-1}\mathbf{\Phi}^\intercal\mathbf{y}$. Then verify that the same result can be obtained by computing the pseudoinverse $\mathbf{\Phi}^+$ of the design matrix, i.e. $\mathbf{w}=\mathbf{\Phi}^+\mathbf{y}$, using the pinv function.
Step4: For clarity, in what follows the vector $\mathbf{x}$ with the added dummy one $x_0=1$ is denoted $\tilde{\mathbf{x}}$.
(c)
Plot the examples from $\mathcal{D}$ and the function $h(\tilde{\mathbf{x}})=\mathbf{w}^\intercal\tilde{\mathbf{x}}$. Compute the training error according to $E(h|\mathcal{D})=\frac{1}{2}\sum_{i=1}^N(\tilde{\mathbf{y}}^{(i)} - h(\tilde{\mathbf{x}}))^2$. You may use the mean squared error function mean_squared_error from the sklearn.metrics module.
Q
Step5: (d)
Convince yourself that for the examples in $\mathcal{D}$ the weights $\mathbf{w}$ cannot be found by solving the system $\mathbf{w}=\mathbf{\Phi}^{-1}\mathbf{y}$, and that the pseudoinverse really is needed.
Q
Step6: (e)
Study the LinearRegression class from the sklearn.linear_model module. Verify that the weights computed by that function (available through the coef_ and intercept_ attributes) equal the ones you computed above. Compute the model predictions (the predict method) and verify that the training error is identical to the one you computed earlier.
Step7: 2. Polynomial regression and the effect of noise
(a)
Let us now consider regression on a larger number of examples. Use the function make_labels(X, f, noise=0), which takes a matrix of unlabelled examples $\mathbf{X}_{N\times n}$ and generates the vector of their labels $\mathbf{y}_{N\times 1}$. Labels are generated as $y^{(i)} = f(x^{(i)})+\mathcal{N}(0,\sigma^2)$, where $f
Step8: Plot that set with the scatter function.
Step9: (b)
Train a polynomial regression model of degree $d=3$. On the same plot show the learned model $h(\mathbf{x})=\mathbf{w}^\intercal\tilde{\mathbf{x}}$ and the training examples. Compute the model's training error.
Step10: 3. Model selection
(a)
On the data set from exercise 2, train five linear regression models $\mathcal{H}_d$ of different complexity, where $d$ is the polynomial degree, $d\in{1,3,5,10,20}$. Show the training set and the functions $h_d(\mathbf{x})$ for all five models on the same plot (we recommend using plot inside a for loop). Compute the training error of each model.
Q
Step11: (b)
Split the example set from exercise 2 with the function cross_validation.train_test_split into a training set and a test set in a ratio of 1
Step12: (c)
Model accuracy depends on (1) its complexity (polynomial degree $d$), (2) the number of examples $N$, and (3) the amount of noise. To analyse this, draw the error plots as in 3b, but for all combinations of the number of examples $N\in{100,200,1000}$ and noise level $\sigma\in{100,200,500}$ (9 plots in total). Use the subplots function to lay the plots out neatly in a $3\times 3$ table. The data are generated in the same way as in exercise 2.
NB
Step13: 4. Regularized regression
(a)
In the experiments above we did not use regularization. Let us first return to the example from exercise 1. On the examples from that exercise compute the weights $\mathbf{w}$ for a polynomial regression model of degree $d=3$ with L2 regularization (so-called ridge regression), according to $\mathbf{w}=(\mathbf{\Phi}^\intercal\mathbf{\Phi}+\lambda\mathbf{I})^{-1}\mathbf{\Phi}^\intercal\mathbf{y}$. Compute the weights for regularization factors $\lambda=0$, $\lambda=1$ and $\lambda=10$ and compare the obtained weights.
Q
Step14: (b)
Study the Ridge class from the sklearn.linear_model module, which implements the L2-regularized regression model. The parameter $\alpha$ corresponds to the parameter $\lambda$. Apply the model to the same examples as in the previous exercise and print the weights $\mathbf{w}$ (the coef_ and intercept_ attributes).
Q
Step15: 5. Regularized polynomial regression
(a)
Let us return to the case of $N=50$ randomly generated examples from exercise 2. Train polynomial regression models $\mathcal{H}_{\lambda,d}$ for $\lambda\in{0,100}$ and $d\in{2,10}$ (four models in total). Sketch the corresponding functions $h(\mathbf{x})$ and the examples (on one plot; we recommend using plot inside a for loop).
Q
Step16: (b)
As in exercise 3b, split the examples into a training set and a test set in a ratio of 1
Step17: 6. L1 regularization and L2 regularization
The purpose of regularization is to push the model weights $\mathbf{w}$ towards zero, so that the model is as simple as possible. Model complexity can be characterized by the norm of the corresponding weight vector $\mathbf{w}$, typically the L2 norm or the L1 norm. For a trained model we can also compute the number of non-zero features, i.e. the L0 norm, using the following function
Step18: (a)
For this exercise use the training and test sets from exercise 3b. Train L2-regularized polynomial regression models of degree $d=20$, varying the hyperparameter $\lambda$ over the range ${1,2,\dots,100}$. For each trained model compute the L{0,1,2} norms of the weight vector $\mathbf{w}$ and show them as a function of $\lambda$.
Q
Step19: (b)
The main advantage of L1-regularized regression (or LASSO regression) over L2-regularized regression is that L1-regularized regression produces sparse models, i.e. models in which many weights are pulled to zero. Show that this is indeed the case by repeating the above experiment with L1-regularized regression, implemented in the Lasso class of the sklearn.linear_model module.
Step20: 7. Features of different scales
In practice we often encounter data in which the features do not all have the same magnitude. One example of such a set is the regression data set grades, in which a student's grade point average at university (1--5) is predicted from two features
Step21: a)
Plot the dependence of the target value (y-axis) on the first and on the second feature (x-axis). Draw two separate plots.
Step22: b)
Train an L2-regularized regression model ($\lambda = 0.01$) on the data grades_X and grades_y
Step23: Now repeat the above experiment, but first scale the data grades_X and grades_y and store them in the variables grades_X_fixed and grades_y_fixed. For that purpose, use StandardScaler.
Step24: Q
Step25: Again, train an L2-regularized regression model ($\lambda = 0.01$) on this set.
Step26: Q
Step27: Q | Python Code:
# Load the basic libraries...
import numpy as np
import sklearn
import matplotlib.pyplot as plt
import scipy as sp
%pylab inline
Explanation: University of Zagreb
Faculty of Electrical Engineering and Computing
Machine Learning 2018/2019
http://www.fer.unizg.hr/predmet/su
Laboratory exercise 1: Regression
Version: 1.1
Last updated: 12 October 2018
(c) 2015-2018 Jan Šnajder, Domagoj Alagić, Mladen Karan
Published: 12 October 2018
Submission deadline: 22 October 2018 at 07:00h
Instructions
The first laboratory exercise consists of eight tasks. Follow the instructions given in the text cells below. Solving the exercise comes down to completing this notebook: inserting one or more cells below the task text, writing the appropriate code and evaluating the cells.
Make sure you fully understand the code you have written. When submitting the exercise, you must be able, at the request of the teaching assistant (or demonstrator), to modify and re-evaluate your code. Furthermore, you must understand the theoretical foundations of what you are doing, within the scope of what we covered in the lectures. Below some tasks you will also find questions that serve as guidelines for a better understanding of the material (do not write the answers to the questions in the notebook). Therefore, do not limit yourself to just solving the task, but feel free to experiment. That is precisely the purpose of these exercises.
You should work on the exercises on your own. You may consult others about the general approach to a solution, but in the end you must do the exercise yourself. Otherwise the exercise is pointless.
End of explanation
X = np.array([[0],[1],[2],[4]])
y = np.array([4,1,2,5])
Explanation: Exercises
1. Simple regression
You are given the example set $\mathcal{D}={(x^{(i)},y^{(i)})}_{i=1}^4 = {(0,4),(1,1),(2,2),(4,5)}$. Represent the examples with a matrix $\mathbf{X}$ of dimensions $N\times n$ (here $4\times 1$) and a label vector $\textbf{y}$ of dimensions $N\times 1$ (here $4\times 1$), as follows:
End of explanation
from sklearn.preprocessing import PolynomialFeatures
Phi = PolynomialFeatures(1, False, True).fit_transform(X)
print(Phi)
Explanation: (a)
Study the PolynomialFeatures function from the sklearn library and use it to generate the design matrix $\mathbf{\Phi}$ that does not map into a higher-dimensional space (each example only gets a dummy one appended; $m=n+1$).
End of explanation
from numpy import linalg
w = np.dot(np.dot(np.linalg.inv(np.dot(np.transpose(Phi), Phi)), np.transpose(Phi)), y)
print(w)
w2 = np.dot(np.linalg.pinv(Phi), y)
print(w2)
Explanation: (b)
Get acquainted with the linalg module. Compute the weights $\mathbf{w}$ of the linear regression model as $\mathbf{w}=(\mathbf{\Phi}^\intercal\mathbf{\Phi})^{-1}\mathbf{\Phi}^\intercal\mathbf{y}$. Then verify that the same result can be obtained by computing the pseudoinverse $\mathbf{\Phi}^+$ of the design matrix, i.e. $\mathbf{w}=\mathbf{\Phi}^+\mathbf{y}$, using the pinv function.
End of explanation
from sklearn.metrics import mean_squared_error
h = np.dot(Phi, w)
print (h)
error = mean_squared_error(y, h)
print (error)
plt.plot(X, y, '+', X, h, linewidth = 1)
plt.axis([-3, 6, -1, 7])
Explanation: For clarity, in what follows the vector $\mathbf{x}$ with the added dummy one $x_0=1$ is denoted $\tilde{\mathbf{x}}$.
(c)
Plot the examples from $\mathcal{D}$ and the function $h(\tilde{\mathbf{x}})=\mathbf{w}^\intercal\tilde{\mathbf{x}}$. Compute the training error according to $E(h|\mathcal{D})=\frac{1}{2}\sum_{i=1}^N(\tilde{\mathbf{y}}^{(i)} - h(\tilde{\mathbf{x}}))^2$. You may use the mean squared error function mean_squared_error from the sklearn.metrics module.
Q: The error function $E(h|\mathcal{D})$ defined above and the mean squared error function are not entirely identical. What is the difference? Which one is more "realistic"?
End of explanation
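For reference, the two quantities differ only by a constant factor, so they rank models identically; a short worked relation (this derivation is an addition, not part of the original hand-out):
$$E(h|\mathcal{D})=\frac{1}{2}\sum_{i=1}^N\big(y^{(i)}-h(\tilde{\mathbf{x}}^{(i)})\big)^2=\frac{N}{2}\cdot\mathrm{MSE}(h|\mathcal{D}).$$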
try:
np.dot(np.linalg.inv(Phi), y)
except np.linalg.LinAlgError as err:
print(err)
Explanation: (d)
Convince yourself that for the examples in $\mathcal{D}$ the weights $\mathbf{w}$ cannot be found by solving the system $\mathbf{w}=\mathbf{\Phi}^{-1}\mathbf{y}$, and that the pseudoinverse really is needed.
Q: Why is that the case? Could the problem be solved by mapping the examples into a higher dimension? If so, would that always work, regardless of the example set $\mathcal{D}$? Show on an example.
End of explanation
from sklearn.linear_model import LinearRegression
lr = LinearRegression().fit(Phi, y)
w2 = [lr.intercept_, lr.coef_[1]]
h2 = lr.predict(Phi)
error2 = mean_squared_error(y, h)
print ('staro: ')
print (w)
print (h)
print (error)
print('novo: ')
print (w2)
print (h2)
print (error2)
Explanation: (e)
Study the LinearRegression class from the sklearn.linear_model module. Verify that the weights computed by that function (available through the coef_ and intercept_ attributes) equal the ones you computed above. Compute the model predictions (the predict method) and verify that the training error is identical to the one you computed earlier.
End of explanation
from numpy.random import normal
def make_labels(X, f, noise=0) :
return map(lambda x : f(x) + (normal(0,noise) if noise>0 else 0), X)
def make_instances(x1, x2, N) :
return sp.array([np.array([x]) for x in np.linspace(x1,x2,N)])
N = 50
sigma = 200
fun = lambda x :5 + x - 2*x**2 - 5*x**3
x = make_instances(-5, 5, N)
y = list(make_labels(x, fun, sigma))
y6a = y
x6a = x
Explanation: 2. Polynomial regression and the effect of noise
(a)
Let us now consider regression on a larger number of examples. Use the function make_labels(X, f, noise=0), which takes a matrix of unlabelled examples $\mathbf{X}_{N\times n}$ and generates the vector of their labels $\mathbf{y}_{N\times 1}$. Labels are generated as $y^{(i)} = f(x^{(i)})+\mathcal{N}(0,\sigma^2)$, where $f:\mathbb{R}^n\to\mathbb{R}$ is the true function that generated the data (which in reality is unknown to us), and $\sigma$ is the standard deviation of the Gaussian noise, defined by the noise parameter. The numpy.random.normal function is used to generate the noise.
Generate a training set of $N=50$ examples uniformly distributed in the interval $[-5,5]$ using the function $f(x) = 5 + x -2 x^2 -5 x^3$ with noise $\sigma=200$:
End of explanation
plt.figure(figsize=(10, 5))
plt.plot(x, fun(x), 'r', linewidth = 1)
plt.scatter(x, y)
Explanation: Plot that set with the scatter function.
End of explanation
from sklearn.preprocessing import PolynomialFeatures
Phi = PolynomialFeatures(3).fit_transform(x.reshape(-1, 1))
w = np.dot(np.linalg.pinv(Phi), y)
h = np.dot(Phi, w)
error = mean_squared_error(y, h)
print(error)
plt.figure(figsize=(10,5))
plt.scatter(x, y)
plt.plot(x, h, 'r', linewidth=1)
Explanation: (b)
Train a polynomial regression model of degree $d=3$. On the same plot show the learned model $h(\mathbf{x})=\mathbf{w}^\intercal\tilde{\mathbf{x}}$ and the training examples. Compute the model's training error.
End of explanation
Phi_d = [];
w_d = [];
h_d = [];
err_d = [];
d = [1, 3, 5, 10, 20]
for i in d:
Phi_d.append(PolynomialFeatures(i).fit_transform(x.reshape(-1,1)))
for i in range(0, len(d)):
w_d.insert(i, np.dot(np.linalg.pinv(Phi_d[i]), y))
h_d.insert(i, np.dot(Phi_d[i], w_d[i]))
for i in range(0, len(d)):
err_d.insert(i, mean_squared_error(y, h_d[i]))
print (str(d[i]) + ': ' + str(err_d[i]))
fig = plt.figure(figsize=(15, 20))
fig.subplots_adjust(wspace=0.2)
for i in range(0, len(d)):
ax = fig.add_subplot(5, 2, i+1)
ax.scatter(x, y);
ax.plot(x, h_d[i], 'r', linewidth = 1)
Explanation: 3. Model selection
(a)
On the data set from exercise 2, train five linear regression models $\mathcal{H}_d$ of different complexity, where $d$ is the polynomial degree, $d\in{1,3,5,10,20}$. Show the training set and the functions $h_d(\mathbf{x})$ for all five models on the same plot (we recommend using plot inside a for loop). Compute the training error of each model.
Q: Which model has the smallest training error and why?
End of explanation
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(x, y, test_size = 0.5)
err_train = [];
err_test = [];
d = range(0, 20)
for i in d:
Phi_train = PolynomialFeatures(i).fit_transform(X_train.reshape(-1, 1))
Phi_test = PolynomialFeatures(i).fit_transform(X_test.reshape(-1, 1))
w_train = np.dot(np.linalg.pinv(Phi_train), y_train)
h_train = np.dot(Phi_train, w_train)
h_test = np.dot(Phi_test, w_train)
err_train.insert(i, np.log(mean_squared_error(y_train, h_train)))
err_test.insert(i, np.log(mean_squared_error(y_test, h_test)))
plt.figure(figsize=(10,5))
plt.plot(d, err_train, d, err_test)
plt.grid()
Explanation: (b)
Split the example set from exercise 2 with the function cross_validation.train_test_split into a training set and a test set in a 1:1 ratio. On a single plot show the training error and the test error for polynomial regression models $\mathcal{H}_d$, with the polynomial degree $d$ in the range $d\in [1,2,\ldots,20]$. For precision, draw the functions $h(\mathbf{x})$ over the whole example set (but compute the generalization error, of course, only on the test set). Since the squared error grows quickly for higher polynomial degrees, instead of plotting the error values directly, plot their logarithms.
NB: The split into the training set and the test set must be identical for all of the models.
Q: Is the result in line with expectations? Which model would you choose and why?
Q: Run the plotting several times. What is the problem? Would the problem be equally pronounced if we had more examples? Why?
End of explanation
N2 = [100, 200, 1000];
sigma = [100, 200, 500];
X_train4c_temp = [];
X_test4c_temp = [];
y_train4c_temp = [];
y_test4c_temp = [];
x_tmp = np.linspace(-5, 5, 1000);
X_train, X_test = train_test_split(x_tmp, test_size = 0.5)
for i in range(0, 3):
y_tmp_train = list(make_labels(X_train, fun, sigma[i]))
y_tmp_test = list(make_labels(X_test, fun, sigma[i]))
for j in range(0,3):
X_train4c_temp.append(X_train[0:int(N2[j]/2)])
X_test4c_temp.append(X_test[0:int(N2[j]/2)])
y_train4c_temp.append(y_tmp_train[0:int(N2[j]/2)])
y_test4c_temp.append(y_tmp_test[0:int(N2[j]/2)])
err_tr = [];
err_tst = [];
for i in range(0, 9):
X_train4c = X_train4c_temp[i]
X_test4c = X_test4c_temp[i]
y_train4c = y_train4c_temp[i]
y_test4c = y_test4c_temp[i]
err_train4c = [];
err_test4c = [];
d4c = range(0, 20)
for j in d4c:
Phi_train4c = PolynomialFeatures(j).fit_transform(X_train4c.reshape(-1, 1))
Phi_test4c = PolynomialFeatures(j).fit_transform(X_test4c.reshape(-1, 1))
w_train4c = np.dot(np.linalg.pinv(Phi_train4c), y_train4c)
h_train4c = np.dot(Phi_train4c, w_train4c)
h_test4c = np.dot(Phi_test4c, w_train4c)
err_train4c.insert(j, np.log(mean_squared_error(y_train4c, h_train4c)))
err_test4c.insert(j, np.log(mean_squared_error(y_test4c, h_test4c)))
err_tr.append(err_train4c);
err_tst.append(err_test4c);
fig = plt.figure(figsize=(15, 10))
fig.subplots_adjust(wspace=0.2, hspace = 0.35)
Nn = [100, 200, 1000, 100, 200, 1000, 100, 200, 1000]
sgm = [100, 100, 100, 200, 200, 200, 500, 500, 500]
for i in range(0, 9):
ax = fig.add_subplot(3, 3, i+1)
plt.plot(d, err_tr[i], d, err_tst[i]); grid;
ax.grid();
Explanation: (c)
Model accuracy depends on (1) its complexity (polynomial degree $d$), (2) the number of examples $N$, and (3) the amount of noise. To analyse this, draw the error plots as in 3b, but for all combinations of the number of examples $N\in{100,200,1000}$ and noise level $\sigma\in{100,200,500}$ (9 plots in total). Use the subplots function to lay the plots out neatly in a $3\times 3$ table. The data are generated in the same way as in exercise 2.
NB: Make sure that all plots are generated over comparable data sets, in the following way. First generate all 1000 examples and split them into a training set and a test set (two sets of 500 examples each). Then make three different versions of both the training set and the test set, each with a different amount of noise (2x3=6 versions of the data in total). To simulate the size of the data set, sample one third, two thirds and all of the data from those 6 data sets. This gives 18 data sets -- a training set and a test set for each of the nine plots.
Q: Are the results as expected? Explain.
End of explanation
lam = [0, 1, 10]
y = np.array([4,1,2,5])
Phi4a = PolynomialFeatures(3).fit_transform(X)
w_L2 = [];
def w_reg(lam):
t1 = np.dot(Phi4a.T, Phi4a) + np.dot(lam, np.eye(4))
t2 = np.dot(np.linalg.inv(t1), Phi4a.T)
return np.dot(t2, y)
for i in range(0, 3):
w_L2.insert(i, w_reg(lam[i]))
print (w_reg(lam[i]))
Explanation: 4. Regularized regression
(a)
In the experiments above we did not use regularization. Let us first return to the example from exercise 1. On the examples from that exercise compute the weights $\mathbf{w}$ for a polynomial regression model of degree $d=3$ with L2 regularization (so-called ridge regression), according to $\mathbf{w}=(\mathbf{\Phi}^\intercal\mathbf{\Phi}+\lambda\mathbf{I})^{-1}\mathbf{\Phi}^\intercal\mathbf{y}$. Compute the weights for regularization factors $\lambda=0$, $\lambda=1$ and $\lambda=10$ and compare the obtained weights.
Q: What are the dimensions of the matrix that needs to be inverted?
Q: How do the obtained weights differ, and is that difference expected? Explain.
End of explanation
from sklearn.linear_model import Ridge
for i in lam:
w = [];
w_L22 = Ridge(alpha = i).fit(Phi4a, y)
w.append(w_L22.intercept_)
for i in range(0, len(w_L22.coef_[1:])):
w.append(w_L22.coef_[i])
print (w)
Explanation: (b)
Study the Ridge class from the sklearn.linear_model module, which implements the L2-regularized regression model. The parameter $\alpha$ corresponds to the parameter $\lambda$. Apply the model to the same examples as in the previous exercise and print the weights $\mathbf{w}$ (the coef_ and intercept_ attributes).
Q: Are the weights identical to those from exercise 4a? If not, explain why that is and how you would fix it.
End of explanation
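One way to make the two sets of weights comparable is to leave the bias weight out of the penalty in the closed-form solution; a hedged sketch of that idea (an illustration added here, not part of the original task):
lam = 1
I_nobias = np.eye(Phi4a.shape[1])
I_nobias[0, 0] = 0   # do not regularize the dummy/bias weight
w_nobias = np.dot(np.dot(np.linalg.inv(np.dot(Phi4a.T, Phi4a) + lam * I_nobias), Phi4a.T), y)
print(w_nobias)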
x5a = linspace(-5, 5, 50);
f = (5 + x5a - 2*(x5a**2) - 5*(x5a**3));
y5a = f + normal(0, 200, 50);
lamd = [0, 100]
dd = [2, 10]
h5a = []
for i in lamd:
for j in dd:
Phi5a = PolynomialFeatures(j).fit_transform(x5a.reshape(-1,1))
w_5a = np.dot(np.dot(np.linalg.inv(np.dot(Phi5a.T, Phi5a) + np.dot(i, np.eye(j+1))), Phi5a.T), y5a);
h_5a = np.dot(Phi5a, w_5a)
h5a.append(h_5a)
lamdd = [0, 0, 100, 100]
ddd = [2, 10, 2, 10]
fig = plt.figure(figsize=(15, 10))
fig.subplots_adjust(wspace=0.2, hspace = 0.2)
for i in range(0, len(lamdd)):
ax = fig.add_subplot(2, 2, i+1)
plt.plot(x5a, h5a[i], 'r', linewidth = 2)
plt.scatter(x5a, y5a);
Explanation: 5. Regularized polynomial regression
(a)
Let us return to the case of $N=50$ randomly generated examples from exercise 2. Train polynomial regression models $\mathcal{H}_{\lambda,d}$ for $\lambda\in{0,100}$ and $d\in{2,10}$ (four models in total). Sketch the corresponding functions $h(\mathbf{x})$ and the examples (on one plot; we recommend using plot inside a for loop).
Q: Are the results as expected? Explain.
End of explanation
X5a_train, X5a_test, y5a_train, y5a_test = train_test_split(x5a, y5a, test_size = 0.5)
err5a_train = [];
err5a_test = [];
d = 20;
lambda5a = range(0, 50)
for i in lambda5a:
Phi5a_train = PolynomialFeatures(d).fit_transform(X5a_train.reshape(-1, 1))
Phi5a_test = PolynomialFeatures(d).fit_transform(X5a_test.reshape(-1, 1))
w5a_train = np.dot(np.dot(np.linalg.inv(np.dot(Phi5a_train.T, Phi5a_train) + np.dot(i, np.eye(d+1))), Phi5a_train.T), y5a_train);
h5a_train = np.dot(Phi5a_train, w5a_train)
h5a_test = np.dot(Phi5a_test, w5a_train)
err5a_train.insert(i, np.log(mean_squared_error(y5a_train, h5a_train)))
err5a_test.insert(i, np.log(mean_squared_error(y5a_test, h5a_test)))
plt.figure(figsize=(8,4))
plt.plot(lambda5a, err5a_train, lambda5a, err5a_test);
plt.grid(), plt.xlabel('$\lambda$'), plt.ylabel('err');
plt.legend(['ucenje', 'ispitna'], loc='best');
Explanation: (b)
As in exercise 3b, split the examples into a training set and a test set in a 1:1 ratio. Show the curves of the logarithms of the training error and the test error for the model $\mathcal{H}_{d=20,\lambda}$, varying the regularization factor $\lambda$ over the range $\lambda\in{0,1,\dots,50}$.
Q: Which side of the plot corresponds to the overfitting region and which to the underfitting region? Why?
Q: Which value of $\lambda$ would you choose based on these plots and why?
End of explanation
def nonzeroes(coef, tol=1e-6):
return len(coef) - len(coef[sp.isclose(0, coef, atol=tol)])
Explanation: 6. L1 regularization and L2 regularization
The purpose of regularization is to push the model weights $\mathbf{w}$ towards zero, so that the model is as simple as possible. Model complexity can be characterized by the norm of the corresponding weight vector $\mathbf{w}$, typically the L2 norm or the L1 norm. For a trained model we can also compute the number of non-zero features, i.e. the L0 norm, using the following function:
End of explanation
from sklearn.linear_model import Ridge
from sklearn.linear_model import Lasso
lambda6a = range(1,100)
d6a = 20
X6a_train, X6a_test, y6a_train, y6a_test = train_test_split(x6a, y6a, test_size = 0.5)
Phi6a_train = PolynomialFeatures(d6a).fit_transform(X6a_train.reshape(-1,1))
L0 = [];
L1 = [];
L2 = [];
L1_norm = lambda w: sum(abs(w));
L2_norm = lambda w: math.sqrt(np.dot(w.T, w));
for i in lambda6a:
w6a = np.dot(np.dot(np.linalg.inv(np.dot(Phi6a_train.T, Phi6a_train) + np.dot(i, np.eye(d6a+1))), Phi6a_train.T), y6a_train);
L0.append(nonzeroes(w6a))
L1.append(L1_norm(w6a))
L2.append(L2_norm(w6a))
plot(lambda6a, L0, lambda6a, L1, lambda6a, L2, linewidth = 1)
grid()
Explanation: (a)
For this exercise use the training and test sets from exercise 3b. Train L2-regularized polynomial regression models of degree $d=20$, varying the hyperparameter $\lambda$ over the range ${1,2,\dots,100}$. For each trained model compute the L{0,1,2} norms of the weight vector $\mathbf{w}$ and show them as a function of $\lambda$.
Q: Explain the shape of both curves. Will the curve for $\|\mathbf{w}\|_2$ reach zero? Why? Is that a problem? Why?
Q: For $\lambda=100$, what percentage of the model weights equals zero, i.e. how sparse is the model?
End of explanation
L0 = [];
L1 = [];
L2 = [];
for i in lambda6a:
lass = Lasso(alpha = i, tol = 0.115).fit(Phi6a_train, y6a_train)
w6b = lass.coef_
L0.append(nonzeroes(w6b))
L1.append(L1_norm(w6b))
L2.append(L2_norm(w6b))
plot(lambda6a, L0, lambda6a, L1, lambda6a, L2, linewidth = 1)
legend(['L0', 'L1', 'L2'], loc = 'best')
grid()
Explanation: (b)
The main advantage of L1-regularized regression (or LASSO regression) over L2-regularized regression is that L1-regularized regression produces sparse models, i.e. models in which many weights are pulled to zero. Show that this is indeed the case by repeating the above experiment with L1-regularized regression, implemented in the Lasso class of the sklearn.linear_model module.
End of explanation
n_data_points = 500
np.random.seed(69)
# Generate the entrance-exam score data using a normal distribution and clip it to the interval [1, 3000].
exam_score = np.random.normal(loc=1500.0, scale = 500.0, size = n_data_points)
exam_score = np.round(exam_score)
exam_score[exam_score > 3000] = 3000
exam_score[exam_score < 0] = 0
# Generate the high-school grade data using a normal distribution and clip it to the interval [1, 5].
grade_in_highschool = np.random.normal(loc=3, scale = 2.0, size = n_data_points)
grade_in_highschool[grade_in_highschool > 5] = 5
grade_in_highschool[grade_in_highschool < 1] = 1
# Design matrix.
grades_X = np.array([exam_score,grade_in_highschool]).T
# Finally, generate the output values.
rand_noise = np.random.normal(loc=0.0, scale = 0.5, size = n_data_points)
exam_influence = 0.9
grades_y = ((exam_score / 3000.0) * (exam_influence) + (grade_in_highschool / 5.0) \
* (1.0 - exam_influence)) * 5.0 + rand_noise
grades_y[grades_y < 1] = 1
grades_y[grades_y > 5] = 5
Explanation: 7. Features of different scales
In practice we often encounter data in which the features do not all have the same magnitude. One example of such a set is the regression data set grades, in which a student's grade point average at university (1--5) is predicted from two features: the score on the entrance exam (1--3000) and the high-school grade average. The university grade average is computed as a weighted sum of these two features with added noise.
Use the following code to generate this data set.
End of explanation
plt.figure()
plot(exam_score, grades_y, 'r+')
grid()
plt.figure()
plot(grade_in_highschool, grades_y, 'g+')
grid()
Explanation: a)
Plot the dependence of the target value (y-axis) on the first and on the second feature (x-axis). Draw two separate plots.
End of explanation
w = Ridge(alpha = 0.01).fit(grades_X, grades_y).coef_
print(w)
Explanation: b)
Train an L2-regularized regression model ($\lambda = 0.01$) on the data grades_X and grades_y:
End of explanation
from sklearn.preprocessing import StandardScaler
#grades_y.reshape(-1, 1)
scaler = StandardScaler()
scaler.fit(grades_X)
grades_X_fixed = scaler.transform(grades_X)
scaler2 = StandardScaler()
scaler2.fit(grades_y.reshape(-1, 1))
grades_y_fixed = scaler2.transform(grades_y.reshape(-1, 1))
Explanation: Now repeat the above experiment, but first scale the data grades_X and grades_y and store them in the variables grades_X_fixed and grades_y_fixed. For that purpose, use StandardScaler.
End of explanation
grades_X_fixed_colinear = [[g[0],g[1],g[1]] for g in grades_X_fixed]
Explanation: Q: Looking at the plots from subtask (a), which feature should have the larger magnitude, i.e. importance, when predicting the university grade average? Do the weights match your intuition? Explain.
8. Multicollinearity and matrix condition
a)
Create the data set grades_X_fixed_colinear by duplicating the last column (the high-school grade) of the set grades_X_fixed from exercise 7b. This effectively introduces perfect multicollinearity.
End of explanation
w = Ridge(alpha = 0.01).fit(grades_X_fixed_colinear, grades_y_fixed).coef_
print(w)
Explanation: Again, train an L2-regularized regression model ($\lambda = 0.01$) on this set.
End of explanation
w_001s = []
w_1000s = []
for i in range(10):
X_001, X_1000, y_001, y_1000 = train_test_split(grades_X_fixed_colinear, grades_y_fixed, test_size = 0.5)
w_001 = Ridge(alpha = 0.01).fit(X_001, y_001).coef_
w_1000 = Ridge(alpha = 0.01).fit(X_1000, y_1000).coef_
w_001s.append(w_001[0])
w_1000s.append(w_1000[0])
#print(w_001)
#print(w_1000)
#print()
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaler.fit(w_001s)
print(scaler.mean_)
scaler = StandardScaler()
scaler.fit(w_1000s)
print(scaler.mean_)
Explanation: Q: Compare the magnitudes of the weights with those you obtained in exercise 7b. What happened?
b)
Randomly sample 50% of the elements from the set grades_X_fixed_colinear and train two L2-regularized regression models, one with $\lambda=0.01$ and one with $\lambda=1000$. Repeat this experiment 10 times (each time with a different 50% subset). For each model, print the obtained weight vector in all 10 repetitions and print the standard deviation of each of the weights (six standard deviations in total, each computed over 10 values).
End of explanation
lam = 0.01
phi = grades_X_fixed_colinear
s = np.add(np.dot(np.transpose(phi), phi), lam * np.identity(np.shape(phi)[1]))
print(np.linalg.cond(s))
lam = 10
phi = grades_X_fixed_colinear
s = np.add(np.dot(np.transpose(phi), phi), lam * np.identity(np.shape(phi)[1]))
print(np.linalg.cond(s))
Explanation: Q: How does regularization affect the stability of the weights?
Q: Are the coefficients of the same magnitude as in the previous experiment? Explain why.
c)
Using numpy.linalg.cond, compute the condition number of the matrix $\mathbf{\Phi}^\intercal\mathbf{\Phi}+\lambda\mathbf{I}$, where $\mathbf{\Phi}$ is the design matrix (grades_fixed_X_colinear). Repeat for both $\lambda=0.01$ and $\lambda=10$.
End of explanation |
12,218 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Dataset handling
Scikit-multilearn provides methods to load, save and manipulate multi-label data sets in two formats
Step1: Loading scikit-multilearn data format is easier as it stores more information than the ARFF file, all you need to do is specify the path to the data set file.
Step2: If the filename argument is not None this dictionary is saved as a bzip2 compressed pickle and the function does not return anything.
scikit-multilearn repository
Step3: The following benchmark data sets, originally provided in the MULAN data repository are provided in train, test, and undivided variants. The undivided variant contains the complete data set, before the train/test split.
Step4: Variants
Step5: Scikit-multilearn can automatically download the data sets for you, similar to scikit-learn's data set API.
The data is stored by default in the subfolder scikit_ml_learn_data of your SCIKIT_ML_LEARN_DATA environment variable. If the variable is not set, the data is stored in ~/scikit_ml_learn_data.
To download a data set use the
Step6: ARFF files
The most common way for storing multi-label data is the ARFF file format created by the WEKA library. You can find many benchmark data sets in ARFF format on the MULAN data repository.
Loading both dense and sparse ARFF files is simple in scikit-multilearn, just use
Step7: Loading multi-label ARFF files requires additional information as the number or placement of labels, is not indicated in the format directly.
Step8: Different software expects labels in different parts of the ARFF file
Step9: There are two ways to save ARFF data
Step10: Or if you also want the metadata
Step11: As you can see scikit-multilearn encodes nominal types by default as integers, and converts the input space to floats, while the output space to binary indicators 0/1 represented as integers. To change this behavior specify your own params to load_from_arff as described in the API documentation.
Step12: If you want to save ARFF files, you can use the
Step13: Let's say we want to save a subset of the data in a sparse format and with labels at the begining of the file. | Python Code:
from skmultilearn.dataset import load_dataset_dump, save_dataset_dump
Explanation: Dataset handling
Scikit-multilearn provides methods to load, save and manipulate multi-label data sets in two formats:
a scikit-multilearn pickle of data set in scipy sparse format
the traditional ARFF file format
The functionality is provided in the :mod:skmultilearn.dataset module.
Scikit-multilearn also provides a repository of the most popular benchmark data sets in the scipy sparse format and convenience functions to access them.
scikit-multilearn format
End of explanation
X, y, feature_names, label_names = load_dataset_dump('_static/example.pkl.bz2')
X, y, feature_names[:3], label_names[:3]
save_dataset_dump(X[:10,:4], y[:10, :3], feature_names[:4], label_names[:3], filename=None)
Explanation: Loading scikit-multilearn data format is easier as it stores more information than the ARFF file, all you need to do is specify the path to the data set file.
End of explanation
from skmultilearn.dataset import available_data_sets
Explanation: If the filename argument is not None this dictionary is saved as a bzip2 compressed pickle and the function does not return anything.
scikit-multilearn repository
End of explanation
set([x[0] for x in available_data_sets().keys()])
Explanation: The following benchmark data sets, originally provided in the MULAN data repository are provided in train, test, and undivided variants. The undivided variant contains the complete data set, before the train/test split.
End of explanation
set([x[1] for x in available_data_sets().keys()])
Explanation: Variants:
End of explanation
from skmultilearn.dataset import load_dataset
X, y, feature_names, label_names = load_dataset('scene', 'train')
X, y, feature_names[:3], label_names[:3]
Explanation: Scikit-multilearn can automatically download the data sets for you, similar to scikit-learn's data set API.
The data is stored by default in the subfolder scikit_ml_learn_data of your SCIKIT_ML_LEARN_DATA environment variable. If the variable is not set, the data is stored in ~/scikit_ml_learn_data.
To download a data set use the :meth:load_dataset function.
End of explanation
from skmultilearn.dataset import load_from_arff
Explanation: ARFF files
The most common way for storing multi-label data is the ARFF file format created by the WEKA library. You can find many benchmark data sets in ARFF format on the MULAN data repository.
Loading both dense and sparse ARFF files is simple in scikit-multilearn, just use :func:load_from_arff, like this:
End of explanation
path_to_arff_file = '_static/example.arff'
label_count = 7
Explanation: Loading multi-label ARFF files requires additional information as the number or placement of labels, is not indicated in the format directly.
End of explanation
label_location="end"
Explanation: Different software expects labels in different parts of the ARFF file:
MEKA expects labels to appear at the beginning of the file
MULAN expects labels to appear at the end of the file
As the example.arff comes from MULAN, we set the label location to end.
End of explanation
arff_file_is_sparse = False
X, y = load_from_arff(
path_to_arff_file,
label_count=label_count,
label_location=label_location,
load_sparse=arff_file_is_sparse
)
Explanation: There are two ways to save ARFF data:
- dense, where the file contains a complete dump of the data set row by row, including places where the value is 0
- sparse, as a dictionary of keys, where for each row the non-zero elements are listed with their index
The example file is not sparse, that's why we set the load_sparse argument to False
End of explanation
X, y, feature_names, label_names = load_from_arff(
path_to_arff_file,
label_count=label_count,
label_location=label_location,
load_sparse=arff_file_is_sparse,
return_attribute_definitions=True
)
Explanation: Or if you also want the metadata: feature and label names:
End of explanation
X, y, feature_names[:3], label_names[:3]
Explanation: As you can see scikit-multilearn encodes nominal types by default as integers, and converts the input space to floats, while the output space to binary indicators 0/1 represented as integers. To change this behavior specify your own params to load_from_arff as described in the API documentation.
End of explanation
from skmultilearn.dataset import save_to_arff
Explanation: If you want to save ARFF files, you can use the :meth:save_to_arff function, which can either return a string containing an ARFF dump of the data set, or save it to a provided file when the filename argument is passed.
End of explanation
print(save_to_arff(X[:10,:4], y[:10, :3], label_location='start', save_sparse=True))
Explanation: Let's say we want to save a subset of the data in a sparse format and with labels at the beginning of the file.
End of explanation |
12,219 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
LCS Demo 1
LCS Workshop- Educational LCS - eLCS
Outcome
Step1: Display final population
Step2: Visualise classifiers | Python Code:
import numpy as np
import matplotlib.pyplot as plt
headerList = np.array([])
dataList = []
arraylist = np.array([])
# Open the file for reading.
with open('ExampleRun_eLCS_10000_RulePop.txt', 'r') as infile:
headerList = infile.readline().rstrip('\n').split('\t') #strip off first row
for line in infile:
lineList = line.strip('\n').split('\t')
# arraylist = [float(i) for i in lineList]
dataList.append(lineList)
infile.close()
#my_list = data.strip('\n').split('\t')
#print(headerList)
#print(dataList)
#print(np.shape(dataList))
Explanation: LCS Demo 1
LCS Workshop- Educational LCS - eLCS
Outcome: Learn the concept and use of Learning Classifier Systems (LCSs)
Instructors: Dr Ryan Urbanowicz, Dr Will Browne And Dr Karthik Kuber,
The following topics will be covered in a series of hands-on exercises and demonstrations:
1. LCS in a Nutshell
2. LCS Concepts
3. LCS Functional Cycle
4. LCS Adaptability
5. LCS Applications (toy and real problems)
<p style="color:red;">Welcome to the Educational Learning Classifier System (eLCS).</p>
It has the core elements of the functionality that help define the concept of LCSs. It's in the same family as the fully featured ExSTraCS system, so it is easy to transfer to a state-of-the-art LCS from this shallow learning curve.
eLCS complements the forthcoming Textbook on Learning Classifier Systems. Each demo is paired with one of the chapters in the textbook. Therefore, there are 5 different versions of an educational learning classifier system (eLCS), as relevant functionality (code) is added to eLCS at each stage. This builds up the eLCS algorithm in its entirety from Demo 1 through to 5. Demo 6 showcases how ExSTraCS may be applied to a real-world data mining example, i.e. large-scale bioinformatics.
Demo 1 Understanding of what an LCS is attempting – how does it classify the training data?
Demo 2 Matching and Covering
Demo 3 Prediction, Rule Population Evaluations, GA Rule Discovery and Parental Selection
Demo 4 Deletion and Niche GA + Subsumption
Demo 5 Complete eLCS applied to a complex (toy) problem
Bonus Demo 6 ExSTraCS applied to a real-world data mining example
All code is in Python. This newest version is coded in Python 3.4. Here it is to be run in the Jupyter platform (http://jupyter.org/), as it supports interactive data science.
Each demo version only includes the minimum code needed to perform the functions it was designed for. This way users can start by examining the simplest version of the code and progress onwards. The demo exercises are to implement several functions in eLCS and view results in a spreadsheet, a text file or Python-based graphics (preferably).
Set-up and introduction to Jupyter
Please see http://jupyter.org/ on how to set-up Jupyter with Python 3.
Please download eLCS_1.ipynb, …, eLCS_5.ipynb from GitHub
1. Jupyter can be extended to hide individual code in cells
All of the necessary code is below, separated into 'cells' for descriptive purposes. It is verbose and can make it difficult to see important method-code compared with infrastructure-code, so it is nice to be able to hide code on occasion. Jupyter needs the hide_code extension available from: https://github.com/kirbs-/hide_code
It can be downloaded and extracted into the Python directory.
"pip install hide_code" from the command prompt then installs the extension. Note that a reboot of the server is needed!
Then, under the 'Cell Toolbar' drop-down menu, it is possible to toggle 'Hide Code', for your viewing pleasure.
Name: eLCS_Run.py
Authors: Ryan Urbanowicz - Written at Dartmouth College, Hanover, NH, USA
Contact: [email protected]
Created: November 1, 2013
Description: To run e-LCS, run this module. A properly formatted configuration file, including all run parameters must be included with the path to that file given below. In this example, the configuration file has been included locally, so only the file name is required.
eLCS: Educational Learning Classifier System - A basic LCS coded for educational purposes. This LCS algorithm uses supervised learning, and thus is most
similar to "UCS", an LCS algorithm published by Ester Bernado-Mansilla and Josep Garrell-Guiu (2003) which in turn is based heavily on "XCS", an LCS
algorithm published by Stewart Wilson (1995).
Copyright (C) 2013 Ryan Urbanowicz
This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the
Free Software Foundation; either version 3 of the License, or (at your option) any later version.
This program is distributed in the hope that it will be useful but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABLILITY
or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation,
Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
The purpose of an LCS is to classify, which it does through a population of rules. We have included, as a rule population example, a real run of the complete (Demo 5) eLCS after 10000 iterations (eLCS has already learned the problem with perfect accuracy) (see ExampleRun eLCS 10000 RulePop.txt).
We have removed some of the rule parameter columns to keep this example as simple as possible. Users can look below to examine rules, or are encouraged to open the rule population in Excel, try sorting rules by numerosity, accuracy, or initial time stamp and examine basic rule properties. Instead of manually selecting a small set of rules to include as an example rule population for this first Demo, it is good to be initially exposed to what a complete rule population might look like.
The conditions in the rules include (A 0, A 1, R 0, R 1, R 2, and R 3), making up the multiplexer problem address (A) and register (R) bits. Class is labelled as Phenotype, since eLCS handles both discrete and continuous endpoints, which are better generalized as a phenotype. Also included in the file are the following rule parameters: fitness, accuracy, numerosity, TimeStamp, Initial TimeStamp, and Specificity (just the fraction of specified attributes in a given rule). The rule population is initially ordered by Initial TimeStamp, i.e., the iteration in which the rule was originally introduced to the population.
Load in existing file rule population
End of explanation
import pandas
import numpy
from IPython.display import display
indexR = ['Rule'+str(i) for i in range(1, len(dataList)+1)]
df = pandas.DataFrame(dataList, index=indexR,columns=headerList)
display(df)
Explanation: Display final population:
End of explanation
import numpy as np
import matplotlib.pyplot as p
from matplotlib import gridspec
for j in range(1, 5): #len(dataList)+1):
dL = dataList[j] #0 is indice of first row of data, 1 is second so can loop!
print(dL)
for i in range(0,len(dL)-1):
if (dL[i] =='0'): dL[i] = 0
if (dL[i] =='1'): dL[i] = 1
if (dL[i] =='#'): dL[i] = 2
print(dL)
c = np.array([[dL[0],dL[1],dL[2],dL[3],dL[4],dL[5]]])
a = np.array([[dL[6]]])
gs = gridspec.GridSpec(1, 2, width_ratios=[6, 1])
print(c)
print(a)
p.subplot(gs[0])
p.imshow(c, interpolation="nearest")
p.axis('off')
p.subplot(gs[1])
p.imshow(a)
p.axis('off')
p.tight_layout()
p.show()
Explanation: Visualise classifiers
End of explanation |
12,220 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sentiment Analysis on "Jallikattu" with Twitter Data Feed <h3 style="color
Step1: We will create a Twitter API handle for fetching data
In order to qualify for a Twitter API handle you need to be a Phone Verified Twitter user.
Go to the Twitter settings page twitter.com/settings/account
Choose the Mobile tab on the left pane, then enter your phone number and verify by OTP
Now you should be able to register a new API handle for your account for programmatic tweeting
Now go to the Twitter Application Management page
Click the Create New App button
Enter a unique App name (global namespace); you might have to try a few times to get it right
Description can be anything you wish
website can be some <yourname>.com, you don't really have to own the domain
Leave the callback URL empty, agree to the terms and conditions unconditionally
Click create
You can find the API credentials in the Application Management console
Choose the App and go to the keys and access tokens tab to get API_KEY, API_SECRET, ACCESS_TOKEN and ACCESS_TOKEN_SECRET
RUN THE CODE BLOCK BELOW ONLY THE FIRST TIME YOU CONFIGURE THE TWITTER API
Step2: From second run you can load the credentials securely form stored file
If you want to check the credentials uncomment the last line in below code block
Step3: Creating an Open Auth Instance
With the created api and token we will open an open auth instance to authenticate our twitter account.
If you feel that your twitter api credentials have been compromised you can just generate a new set of access token-secret pair, access token is like RSA to authenticate your api key.
Step4: Twitter API Handle
Tweepy comes with a Twitter API wrapper class called 'API', passing the open auth instance to this API creates a live Twitter handle to our account.
ATTENTION
Step5: Inspiration for this Project
I drew inspiration for this project from the ongoing issue on traditional bull fighting AKA Jallikattu. Here I'm trying to read the pulse of the people based on tweets.
We are searching for the keyword Jallikattu in Twitter's public tweets; from the returned search result we are taking 150 tweets to do our Sentiment Analysis. Please don't go for a large number of tweets; there is an upper limit of 450 tweets. For more on API rate limits, check out the Twitter Developer Doc.
Step6: Processing Tweets
Once we get the tweets, we will iterate through the tweets and do the following operations
1. Pass the tweet text to TextBlob to process the tweet
2. Processed tweets will have two attributes
* Polarity, which is a numerical value between -1 and 1; the sentiment of the text can be inferred from this.
* Subjectivity, which shows whether the text is stated as a fact or an opinion; the value ranges from 0 to 1
3. For each tweet we will find the sentiment of the text (positive, neutral or negative) and update a counter variable accordingly; this counter is later plotted as a pie chart.
4. Then we pass the tweet text to a regular expression to extract hash tags, which we later use to create an awesome word cloud visualization.
Step7: Sentiment Analysis
We can see that the majority is neutral, which is contributed by
1. Tweets with media only (photo, video)
2. Tweets in regional languages. TextBlob does not work on our Indian languages.
3. Some tweets contain only stop words or words that do not give any positive or negative perspective.
4. Polarity is calculated by the number of positive words like "great, awesome, etc." or negative words like "hate, bad, etc"
One more point to note is that TextBlob is not a complete NLP package; it does not do context-aware search. Such sophisticated deep learning abilities are available only with the likes of Google.
Step8: Simple Word Cloud with Twitter #tags
Let us visualize the tags used for Jallikattu by creating a tag cloud. The wordcloud package takes a single string of tags separated by whitespace. We will concatenate the tags and pass it to the generate method to create a tag cloud image.
Step9: Masked Word Cloud
The tag cloud can be masked using a grayscale stencil image; the wordcloud package neatly arranges the words inside the mask image. I have superimposed the generated word cloud image onto the mask image to provide detailing; otherwise the background of the word cloud would be white and the words would appear to be hanging in space.
Inorder to make the image superimposing work well, we need to manipulate image transparency using image alpha channel. If you look at the visual only fine detail of mask image is seen in the tag cloud this is bacause word cloud is layed on mask image and the transparency of word cloud image is 90% so only 10% of mask image is seen. | Python Code:
# import tweepy for twitter datastream and textblob for processing tweets
import tweepy
import textblob
# wordcloud package is used to produce the cool masked tag cloud above
from wordcloud import WordCloud
# pickle to serialize/deserialize python objects
import pickle
# regex package to extract hasttags from tweets
import re
# os for loading files from local system, matplotlib, np and PIL for ploting
from os import path
import matplotlib.pyplot as plt
import numpy as np
from PIL import Image
Explanation: Sentiment Analysis on "Jallikattu" with Twitter Data Feed <h3 style="color:red;">#DataScienceForSocialCause</h3>
Twitter is flooded with the Jallikattu issue; let us find people's sentiment with Data Science tools. The approach is as follows
* Register a Twitter API handle for data feed
* Pull out tweets on search query 'jallikattu'
* Using NLP packages find the sentiment of the tweet (Positive, Neutral or Negative)
* Plot pie chart of the sentiment
* Plot a masked word cloud of tags used
The final output we expect is a masked word cloud of popular tags used on Twitter, with font size proportional to the frequency of use. Let's dive in ...
Loading necessary packages
In particular we will be using tweepy to register an api handle with twitter and get the data feed. Tweepy Document
TextBlob package to determine the sentiment of the tweets. TextBlob Document
End of explanation
# make sure to exclued this folder in git ignore
path_to_cred_file = path.abspath('../restricted/api_credentials.p')
# we will store twitter handle credentials in a pickle file (object de-serialization)
# code for pickling credentials need to be run only once during initial configuration
# fill the following dictionary with your twitter credentials
twitter_credentials = {'api_key':'API_KEY', \
'api_secret':'API_SECRET', \
'access_token':'ACCESS_TOKEN', \
'access_token_secret':'ACCESS_TOKEN_SECRET'}
pickle.dump(twitter_credentials,open(path_to_cred_file, "wb"))
print("Pickled credentials saved to :\n"+path_to_cred_file+"\n")
print("\n".join(["{:20} : {}".format(key,value) for key,value in twitter_credentials.items()]))
Explanation: We will create a Twitter API handle for fetching data
In order to qualify for a Twitter API handle you need to be a Phone Verified Twitter user.
Go to the Twitter settings page twitter.com/settings/account
Choose the Mobile tab on the left pane, then enter your phone number and verify by OTP
Now you should be able to register a new API handle for your account for programmatic tweeting
Now go to the Twitter Application Management page
Click the Create New App button
Enter a unique App name (global namespace); you might have to try a few times to get it right
Description can be anything you wish
website can be some <yourname>.com, you don't really have to own the domain
Leave the callback URL empty, agree to the terms and conditions unconditionally
Click create
You can find the API credentials in the Application Management console
Choose the App and go to the keys and access tokens tab to get API_KEY, API_SECRET, ACCESS_TOKEN and ACCESS_TOKEN_SECRET
RUN THE CODE BLOCK BELOW ONLY THE FIRST TIME YOU CONFIGURE THE TWITTER API
End of explanation
# make sure to exclued this folder in git ignore
path_to_cred_file = path.abspath('../restricted/api_credentials.p')
# load saved twitter credentials
twitter_credentials = pickle.load(open(path_to_cred_file,'rb'))
#print("\n".join(["{:20} : {}".format(key,value) for key,value in twitter_credentials.items()]))
Explanation: From the second run onwards you can load the credentials securely from the stored file
If you want to check the credentials, uncomment the last line in the code block below
End of explanation
# lets create an open authentication handler and initialize it with our twitter handlers api key
auth = tweepy.OAuthHandler(twitter_credentials['api_key'],twitter_credentials['api_secret'])
# access token is like password for the api key,
auth.set_access_token(twitter_credentials['access_token'],twitter_credentials['access_token_secret'])
Explanation: Creating an Open Auth Instance
With the created API key and token we will open an OAuth instance to authenticate our Twitter account.
If you feel that your Twitter API credentials have been compromised, you can just generate a new access token-secret pair; the access token is like a password used to authenticate your API key.
End of explanation
# lets create an instance of twitter api wrapper
api = tweepy.API(auth)
# lets do some self check
user = api.me()
print("{}\n{}".format(user.name,user.location))
Explanation: Twitter API Handle
Tweepy comes with a Twitter API wrapper class called 'API'; passing the open auth instance to this API creates a live Twitter handle to our account.
ATTENTION: Please be aware that this is a handle to your own account, not a pseudo account; if you tweet something with it, it will be your tweet. This is the reason I took care not to expose my API credentials; if you expose them, anyone can mess up your Twitter account.
Let's open the Twitter handle and print the Name and Location of the Twitter account owner; you should be seeing your name.
End of explanation
# now lets get some data to check the sentiment on it
# lets search for key word jallikattu and check the sentiment on it
query = 'jallikattu'
tweet_cnt = 150
peta_tweets = api.search(q=query,count=tweet_cnt)
Explanation: Inspiration for this Project
I drew inspiration for this project from the ongoing issue on traditional bull fighting AKA Jallikattu. Here I'm trying to read the pulse of the people based on tweets.
We are searching for the keyword Jallikattu in Twitter's public tweets; from the returned search result we are taking 150 tweets to do our Sentiment Analysis. Please don't go for a large number of tweets; there is an upper limit of 450 tweets. For more on API rate limits, check out the Twitter Developer Doc.
End of explanation
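If you ever need more than a single page of results, tweepy's Cursor helper can paginate the same search; this is only a sketch (assuming the tweepy version used above, where api.search exists) and is not part of the original workflow.
# optional pagination sketch; functionally equivalent to the api.search call above
paged_tweets = [status for status in tweepy.Cursor(api.search, q=query).items(tweet_cnt)]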
# lets go over the tweets
sentiment_polarity = [0,0,0]
tags = []
for tweet in peta_tweets:
processed_tweet = textblob.TextBlob(tweet.text)
polarity = processed_tweet.sentiment.polarity
upd_index = 0 if polarity > 0 else (1 if polarity == 0 else 2)
sentiment_polarity[upd_index] = sentiment_polarity[upd_index]+1
tags.extend(re.findall(r"#(\w+)", tweet.text))
#print(tweet.text)
#print(processed_tweet.sentiment,'\n')
sentiment_label = ['Positive','Neutral','Negative']
#print("\n".join(["{:8} tweets count {}".format(s,val) for s,val in zip(sentiment_label,sentiment_polarity)]))
# plotting sentiment pie chart
colors = ['yellowgreen', 'gold', 'coral']
# lets explode the positive sentiment for visual appeal
explode = (0.1, 0, 0)
plt.pie(sentiment_polarity,labels=sentiment_label,colors=colors,explode=explode,shadow=True,autopct='%1.1f%%')
plt.axis('equal')
plt.legend(bbox_to_anchor=(1.3,1))
plt.title('Twitter Sentiment on \"'+query+'\"')
plt.show()
Explanation: Processing Tweets
Once we get the tweets, we will iterate through them and do the following operations
1. Pass the tweet text to TextBlob to process the tweet
2. Processed tweets will have two attributes
* Polarity, a numerical value between -1 and 1, from which the sentiment of the text can be inferred.
* Subjectivity, which shows whether the text is stated as a fact or an opinion; the value ranges from 0 to 1
3. For each tweet we will find the sentiment of the text (positive, neutral or negative) and update a counter variable accordingly; this counter is later plotted as a pie chart.
4. Then we pass the tweet text to a regular expression to extract hash tags, which we later use to create an awesome word cloud visualization.
End of explanation
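As a quick standalone illustration of the two sentiment attributes and the hash-tag regex described above, you can run something like the sketch below on a toy string; the string is invented and the polarity/subjectivity values you get depend on TextBlob's lexicon.
# minimal sketch, independent of the Twitter data
sample_text = "Jallikattu is our pride #jallikattu #tradition"
sample = textblob.TextBlob(sample_text)
print(sample.sentiment.polarity)       # number between -1 and 1
print(sample.sentiment.subjectivity)   # number between 0 and 1
print(re.findall(r"#(\w+)", sample_text))  # ['jallikattu', 'tradition']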
# lets process the hash tags in the tweets and make a word cloud visualization
# normalizing tags by converting all tags to lowercase
tags = [t.lower() for t in tags]
# get unique count of tags to take count for each
uniq_tags = list(set(tags))
tag_count = []
# for each unique hash tag take frequency of occurrence
for tag in uniq_tags:
tag_count.append((tag,tags.count(tag)))
# lets print the top five tags
tag_count =sorted(tag_count,key=lambda x:-x[1])[:5]
print("\n".join(["{:8} {}".format(tag,val) for tag,val in tag_count]))
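For reference, the same top-five frequency count can be written more compactly (and in a single pass) with collections.Counter; this is just an alternative sketch, not a change to the original code.
# equivalent top-5 count using the standard library
from collections import Counter
print(Counter(tags).most_common(5))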
Explanation: Sentiment Analysis
We can see that the majority is neutral, which is contributed by
1. Tweets with media only (photo, video)
2. Tweets in regional languages; TextBlob does not work on Indian languages.
3. Some tweets contain only stop words or words that do not give any positive or negative perspective.
4. Polarity is driven by the number of positive words like "great, awesome, etc." or negative words like "hate, bad, etc."
One more point to note is that TextBlob is not a complete NLP package: it does not do context-aware analysis; such sophisticated deep learning abilities are available only from the likes of Google.
End of explanation
# we will create a vivid tag cloud visualization
# creating a single string of texts from tags, the tag's font size is proportional to its frequency
text = " ".join(tags)
# this generates an image from the long string, if you wish you may save it to local
wc = WordCloud().generate(text)
# we will display the image with matplotlibs image show, removed x and y axis ticks
plt.imshow(wc)
plt.axis("off")
plt.show()
Explanation: Simple Word Cloud with Twitter #tags
Let us visualize the tags used for Jallikattu by creating a tag cloud. The wordcloud package takes a single string of tags separated by whitespace. We will concatenate the tags and pass the resulting string to the generate method to create a tag cloud image.
End of explanation
# we can also create a masked word cloud from the tags by using grayscale image as stencil
# lets load the mask image from local
bull_mask = np.array(Image.open(path.abspath('../asset/bull_mask_1.jpg')))
wc_mask = WordCloud(background_color="white", mask=bull_mask).generate(text)
mask_image = plt.imshow(bull_mask, cmap=plt.cm.gray)
word_cloud = plt.imshow(wc_mask,alpha=0.9)
plt.axis("off")
plt.title("Twitter Hash Tag Word Cloud for "+query)
plt.show()
Explanation: Masked Word Cloud
The tag cloud can be masked using a grayscale stencil image; the wordcloud package neatly arranges the words inside the mask image. I have superimposed the generated word cloud image onto the mask image to provide some detail; otherwise the background of the word cloud would be white and the words would appear to hang in space.
In order to make the superimposition work well, we need to manipulate image transparency using the alpha channel. If you look at the visual, only fine detail of the mask image is seen in the tag cloud; this is because the word cloud is laid on top of the mask image with an alpha of 0.9, so only about 10% of the mask image shows through.
End of explanation |
12,221 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
A mass on a spring experiences a force described by Hooke's law.
For a displacement $x$, the force is
$$F=-kx,$$
where $k$ is the spring constant with units of N/m.
The equation of motion is
$$ F = ma $$
or
$$ -k x = m a .$$
Because acceleration is the second derivative of displacement, this is
a differential equation,
$$ \frac{d^2}{dt^2} x = -\frac{k}{m} x.$$
The solution to this equation is harmonic motion, for example
$$ x(t) = A\sin\omega t,$$
where $A$ is some amplitude and $\omega = \sqrt{k/m}$.
This can be verified by plugging the solution into the differential equation.
The angular frequency $\omega$ is related to the frequency $f$ and the period $T$ by
$$f = \omega/2\pi$$ and $$T=2\pi/\omega$$
We can illustrate this rather trivial case with an interactive plot.
Step1: We want to generalize this result to several masses connected by several springs.
The spring constant as a second derivative of potential
The force is related to the potential energy by
$$ F = -\frac{d}{dx}V(x).$$
This equation comes directly from the definition that work is force times distance.
Integrating this, we find the potential energy of a mass on a spring,
$$ V(x) = \frac{1}{2}kx^2. $$
In fact, the spring constant can be defined to be the second derivative of the potential,
$$ k = \frac{d^2}{dx^2} V(x).$$ We take the value of the second derivative at the minimum
of the potential, which assumes that the oscillations are not very far from equilibrium.
We see that Hooke's law is simply
$$F = -\frac{d^2 V(x)}{dx^2} x, $$
where the second derivative is evaluated at the minimum of the potential.
For a general potential, we can write the equation of motion as
$$ \frac{d^2}{dt^2} x = -\frac{1}{m}\frac{d^2V(x)}{dx^2} x.$$
The expression on the right hand side is known as the dynamical matrix,
though this is a trivial 1x1 matrix.
Two masses connected by a spring
Now the potential depends on two coordinates,
$$ V(x_1, x_2) = \frac{1}{2} k (x_1 - x_2 - d)^2,$$
where $d$ is the equilibrium separation of the particles.
Now the force on each particle depends on the positions of both of the particles,
$$
\begin{pmatrix}F_1 \\ F_2\end{pmatrix}
= -
\begin{pmatrix}
\frac{\partial^2 V}{\partial x_1^2} &
\frac{\partial^2 V}{\partial x_1\partial x_2} \\
\frac{\partial^2 V}{\partial x_1\partial x_2} &
\frac{\partial^2 V}{\partial x_2^2} \\
\end{pmatrix}
\begin{pmatrix}x_1 \\ x_2\end{pmatrix}
$$
After performing the derivatives, we find
$$
\begin{pmatrix}F_1 \\ F_2\end{pmatrix}
= -
\begin{pmatrix}
k & -k \\
-k & k \\
\end{pmatrix}
\begin{pmatrix}x_1 \\ x_2\end{pmatrix}
$$
The equations of motion are coupled,
$$
\begin{pmatrix}
\frac{d^2x_1}{dt^2} \\
\frac{d^2x_2}{dt^2} \\
\end{pmatrix}
= -
\begin{pmatrix}
k/m & -k/m \\
-k/m & k/m \\
\end{pmatrix}
\begin{pmatrix}x_1 \\ x_2\end{pmatrix}
$$
To decouple the equations, we find the eigenvalues and eigenvectors.
Step2: The frequencies of the two modes of vibration are (in multiples of $\sqrt{k/m}$)
Step3: The first mode is a vibrational mode where the masses vibrate against each other (moving in opposite directions). This can be seen from the eigenvector.
Step4: The second mode is a translation mode with zero frequency: both masses move in the same direction.
Step5: We can interactively illustrate the vibrational mode.
Step6: Finding the dynamical matrix with numerical derivatives
We start from a function $V(x)$. If we want to calculate a derivative,
we just use the difference formula but don't take the delta too small.
Using $\delta x = 10^{-6}$ is safe.
$$
F = -\frac{dV(x)}{dx} \approx
\frac{V(x+\Delta x) - V(x-\Delta x)}{2\Delta x}
$$
Note that it is more accurate to do this symmetric difference formula
than it would be to use the usual forward derivative from calculus class.
It's easy to see this formula is just calculating the slope of the function using points near $x$.
Step7: Next, we can find the second derivative by using the difference formula twice.
We find the nice expression,
$$
\frac{d^2V}{dx^2} \approx \frac{V(x+\Delta x) - 2V(x) + V(x-\Delta x)}{(\Delta x)^2}.
$$
This formula has the nice interpretation of comparing the value of $V(x)$ to
the average of points on either side. If it is equal to the average, the line
is straight and the second derivative is zero.
If the average of the outer values is larger than $V(x)$, then the ends curve upward,
and the second derivative is positive.
Likewise, if the average of the outer values is less than $V(x)$, then the ends curve downward,
and the second derivative is negative.
Step8: Now we can use these derivative formulas to calculate the dynamical matrix
for the two masses on one spring. We'll use $k=1$ and $m=1$ for simplicity.
import matplotlib.pyplot as plt
%matplotlib inline
from IPython.html import widgets
def make_plot(t):
fig, ax = plt.subplots()
x,y = 0,0
plt.plot(x, y, 'k.')
plt.plot(x + 0.3 * t, y, 'bo')
plt.xlim(-1,1)
plt.ylim(-1,1)
widgets.interact(make_plot, t=(-1,1,0.1))
Explanation: Introduction
A mass on a spring experiences a force described by Hooke's law.
For a displacement $x$, the force is
$$F=-kx,$$
where $k$ is the spring constant with units of N/m.
The equation of motion is
$$ F = ma $$
or
$$ -k x = m a .$$
Because acceleration is the second derivative of displacement, this is
a differential equation,
$$ \frac{d^2}{dt^2} x = -\frac{k}{m} x.$$
The solution to this equation is harmonic motion, for example
$$ x(t) = A\sin\omega t,$$
where $A$ is some amplitude and $\omega = \sqrt{k/m}$.
This can be verified by plugging the solution into the differential equation.
The angular frequency $\omega$ is related to the frequency $f$ and the period $T$ by
$$f = \omega/2\pi$$ and $$T=2\pi/\omega$$
We can illustrate this rather trivial case with an interactive plot.
End of explanation
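The claim that $x(t) = A\sin\omega t$ satisfies the equation of motion can also be checked symbolically; the short sketch below assumes sympy is installed and is not part of the original notebook (the suffixed names are only there to avoid clashing with variables used later).
# symbolic sanity check: d^2x/dt^2 + (k/m) x should simplify to 0
import sympy as sp
t_s, A_s, k_s, m_s = sp.symbols('t A k m', positive=True)
x_s = A_s * sp.sin(sp.sqrt(k_s / m_s) * t_s)
sp.simplify(sp.diff(x_s, t_s, 2) + (k_s / m_s) * x_s)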
import numpy as np
a = np.array([[1, -1], [-1, 1]])
freq, vectors = np.linalg.eig(a)
vectors = vectors.transpose()
Explanation: We want to generalize this result to several masses connected by several springs.
The spring constant as a second derivative of potential
The force is related to the potential energy by
$$ F = -\frac{d}{dx}V(x).$$
This equation comes directly from the definition that work is force times distance.
Integrating this, we find the potential energy of a mass on a spring,
$$ V(x) = \frac{1}{2}kx^2. $$
In fact, the spring constant can be defined to be the second derivative of the potential,
$$ k = \frac{d^2}{dx^2} V(x).$$ We take the value of the second derivative at the minimum
of the potential, which assumes that the oscillations are not very far from equilibrium.
We see that Hooke's law is simply
$$F = -\frac{d^2 V(x)}{dx^2} x, $$
where the second derivative is evaluated at the minimum of the potential.
For a general potential, we can write the equation of motion as
$$ \frac{d^2}{dt^2} x = -\frac{1}{m}\frac{d^2V(x)}{dx^2} x.$$
The expression on the right hand side is known as the dynamical matrix,
though this is a trivial 1x1 matrix.
Two masses connected by a spring
Now the potential depends on two coordinates,
$$ V(x_1, x_2) = \frac{1}{2} k (x_1 - x_2 - d)^2,$$
where $d$ is the equilibrium separation of the particles.
Now the force on each particle depends on the positions of both of the particles,
$$
\begin{pmatrix}F_1 \\ F_2\end{pmatrix}
= -
\begin{pmatrix}
\frac{\partial^2 V}{\partial x_1^2} &
\frac{\partial^2 V}{\partial x_1\partial x_2} \\
\frac{\partial^2 V}{\partial x_1\partial x_2} &
\frac{\partial^2 V}{\partial x_2^2} \\
\end{pmatrix}
\begin{pmatrix}x_1 \\ x_2\end{pmatrix}
$$
After performing the derivatives, we find
$$
\begin{pmatrix}F_1 \\ F_2\end{pmatrix}
= -
\begin{pmatrix}
k & -k \\
-k & k \\
\end{pmatrix}
\begin{pmatrix}x_1 \\ x_2\end{pmatrix}
$$
The equations of motion are coupled,
$$
\begin{pmatrix}
\frac{d^2x_1}{dt^2} \\
\frac{d^2x_2}{dt^2} \\
\end{pmatrix}
= -
\begin{pmatrix}
k/m & -k/m \\
-k/m & k/m \\
\end{pmatrix}
\begin{pmatrix}x_1 \\ x_2\end{pmatrix}
$$
To decouple the equations, we find the eigenvalues and eigenvectors.
End of explanation
freq
Explanation: The frequencies of the two modes of vibration are (in multiples of $\sqrt{k/m}$)
End of explanation
vectors[0]
Explanation: The first mode is a vibrational mode where the masses vibrate against each other (moving in opposite directions). This can be seen from the eigenvector.
End of explanation
vectors[1]
Explanation: The second mode is a translation mode with zero frequency: both masses move in the same direction.
End of explanation
import matplotlib.pyplot as plt
%matplotlib inline
from IPython.html import widgets
def make_plot(t):
fig, ax = plt.subplots()
x,y = np.array([-1,1]), np.array([0,0])
plt.plot(x, y, 'k.')
plt.plot(x + 0.3 * vectors[0] * t, y, 'bo')
plt.xlim(-1.5,1.5)
plt.ylim(-1.5,1.5)
widgets.interact(make_plot, t=(-1,1,0.1))
Explanation: We can interactively illustrate the vibrational mode.
End of explanation
def V(x):
return 0.5 * x**2
deltax = 1e-6
def F_approx(x):
return ( V(x + deltax) - V(x - deltax) ) / (2 * deltax)
[(x, F_approx(x)) for x in np.linspace(-2,2,9)]
Explanation: Finding the dynamical matrix with numerical derivatives
We start from a function $V(x)$. If we want to calculate a derivative,
we just use the difference formula but don't take the delta too small.
Using $\delta x = 10^{-6}$ is safe.
$$
F = -\frac{dV(x)}{dx} \approx
\frac{V(x+\Delta x) - V(x-\Delta x)}{2\Delta x}
$$
Note that it is more accurate to do this symmetric difference formula
than it would be to use the usual forward derivative from calculus class.
It's easy to see this formula is just calculating the slope of the function using points near $x$.
End of explanation
def dV2dx2_approx(x):
return ( V(x + deltax) - 2 * V(x) + V(x - deltax) ) / deltax**2
[(x, dV2dx2_approx(x)) for x in np.linspace(-2,2,9)]
Explanation: Next, we can find the second derivative by using the difference formula twice.
We find the nice expression,
$$
\frac{d^2V}{dx^2} \approx \frac{V(x+\Delta x) - 2V(x) + V(x-\Delta x)}{(\Delta x)^2}.
$$
This formula has the nice interpretation of comparing the value of $V(x)$ to
the average of points on either side. If it is equal to the average, the line
is straight and the second derivative is zero.
If the average of the outer values is larger than $V(x)$, then the ends curve upward,
and the second derivative is positive.
Likewise, if the average of the outer values is less than $V(x)$, then the ends curve downward,
and the second derivative is negative.
End of explanation
def V2(x1, x2):
return 0.5 * (x1 - x2)**2
x1, x2 = -1, 1
mat = np.array(
[[(V2(x1+deltax, x2) - 2 * V2(x1,x2) + V2(x1-deltax, x2)) / deltax**2 ,
(V2(x1+deltax, x2+deltax) - V2(x1-deltax, x2+deltax)
- V2(x1+deltax, x2-deltax) + V2(x1-deltax, x2-deltax)) / (2*deltax)**2],
[(V2(x1+deltax, x2+deltax) - V2(x1-deltax, x2+deltax)
- V2(x1+deltax, x2-deltax) + V2(x1-deltax, x2-deltax)) / (2*deltax)**2,
(V2(x1, x2+deltax) - 2 * V2(x1,x2) + V2(x1, x2-deltax)) / deltax**2 ]]
)
mat
freq, vectors = np.linalg.eig(mat)
vectors = vectors.transpose()
for f,v in zip(freq, vectors):
    print("frequency", f, ", eigenvector", v)
Explanation: Now we can use these derivative formulas to calculate the dynamical matrix
for the two masses on one spring. We'll use $k=1$ and $m=1$ for simplicity.
End of explanation |
12,222 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Modeling and Simulation in Python
Experiments with different ODE solvers
Copyright 2019 Allen Downey
License
Step1: Glucose minimal model
Read the data.
Step2: Interpolate the insulin data.
Step3: Initialize the parameters
Step4: To estimate basal levels, we'll use the concentrations at t=0.
Step5: Create the initial conditions.
Step8: Make the System object.
Step10: Numerical solution
In the previous chapter, we approximated the differential equations with difference equations, and solved them using run_simulation.
In this chapter, we solve the differential equation numerically using run_euler...
Instead of an update function, we provide a slope function that evaluates the right-hand side of the differential equations. We don't have to do the update part; the solver does it for us.
Step11: We can test the slope function with the initial conditions.
Step12: Here's how we run the ODE solver.
Step13: results is a TimeFrame with one row for each time step and one column for each state variable
Step14: Plotting the results from run_simulation and run_euler, we can see that they are not very different.
Step15: The differences in G are less than 1%.
Step16: Dropping pennies
I'll start by getting the units we need from Pint.
Step17: And defining the initial state.
Step18: Acceleration due to gravity is about 9.8 m / s$^2$.
Step19: When we call odeint, we need an array of timestamps where we want to compute the solution.
I'll start with a duration of 10 seconds.
Step20: Now we make a System object.
Step22: And define the slope function.
Step23: It's always a good idea to test the slope function with the initial conditions.
Step25: Now we're ready to call run_euler
Step26: Here are the results
Step27: And here's position as a function of time
Step29: Onto the sidewalk
To figure out when the penny hit the sidewalk, we can use crossings, which finds the times where a Series passes through a given value.
Step30: For this example there should be just one crossing, the time when the penny hits the sidewalk.
Step31: We can compare that to the exact result. Without air resistance, we have
$v = -g t$
and
$y = 381 - g t^2 / 2$
Setting $y=0$ and solving for $t$ yields
$t = \sqrt{\frac{2 y_{init}}{g}}$
Step33: The estimate is accurate to about 10 decimal places.
Events
Instead of running the simulation until the penny goes through the sidewalk, it would be better to detect the point where the penny hits the sidewalk and stop. run_ralston provides exactly the tool we need, event functions.
Here's an event function that returns the height of the penny above the sidewalk
Step34: And here's how we pass it to run_ralston. The solver should run until the event function returns 0, and then terminate.
Step35: The message from the solver indicates the solver stopped because the event we wanted to detect happened.
Here are the results
Step36: With the events option, the solver returns the actual time steps it computed, which are not necessarily equally spaced.
The last time step is when the event occurred
Step37: The result is accurate to about 15 decimal places.
We can also check the velocity of the penny when it hits the sidewalk
Step38: And convert to kilometers per hour. | Python Code:
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
init = State(y = 2)
system = System(init=init, t_0=1, t_end=3)
def slope_func(state, t, system):
[y] = state
dydt = y + t
return [dydt]
results, details = run_euler(system, slope_func)
get_last_value(results.y)
Explanation: Modeling and Simulation in Python
Experiments with different ODE solvers
Copyright 2019 Allen Downey
License: Creative Commons Attribution 4.0 International
End of explanation
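As a sanity check on that first run_euler call, this toy equation has a closed-form solution; the sketch below (assuming a reasonably recent sympy) evaluates it at t_end = 3 so you can compare it with the value printed above.
# closed-form comparison sketch for dy/dt = y + t with y(1) = 2
import sympy as sp
t_sym = sp.symbols('t')
y_sym = sp.Function('y')
exact = sp.dsolve(sp.Eq(y_sym(t_sym).diff(t_sym), y_sym(t_sym) + t_sym), y_sym(t_sym), ics={y_sym(1): 2})
exact.rhs.subs(t_sym, 3).evalf()   # exact y(3) = 4*exp(2) - 4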
data = pd.read_csv('data/glucose_insulin.csv', index_col='time');
Explanation: Glucose minimal model
Read the data.
End of explanation
I = interpolate(data.insulin)
Explanation: Interpolate the insulin data.
End of explanation
G0 = 290
k1 = 0.03
k2 = 0.02
k3 = 1e-05
Explanation: Initialize the parameters
End of explanation
Gb = data.glucose[0]
Ib = data.insulin[0]
Explanation: To estimate basal levels, we'll use the concentrations at t=0.
End of explanation
init = State(G=G0, X=0)
Explanation: Create the initial conditions.
End of explanation
t_0 = get_first_label(data)
t_end = get_last_label(data)
system = System(G0=G0, k1=k1, k2=k2, k3=k3,
init=init, Gb=Gb, Ib=Ib, I=I,
t_0=t_0, t_end=t_end, dt=2)
def update_func(state, t, system):
Updates the glucose minimal model.
state: State object
t: time in min
system: System object
returns: State object
G, X = state
k1, k2, k3 = system.k1, system.k2, system.k3
I, Ib, Gb = system.I, system.Ib, system.Gb
dt = system.dt
dGdt = -k1 * (G - Gb) - X*G
dXdt = k3 * (I(t) - Ib) - k2 * X
G += dGdt * dt
X += dXdt * dt
return State(G=G, X=X)
def run_simulation(system, update_func):
Runs a simulation of the system.
system: System object
update_func: function that updates state
returns: TimeFrame
t_0, t_end, dt = system.t_0, system.t_end, system.dt
frame = TimeFrame(columns=init.index)
frame.row[t_0] = init
ts = linrange(t_0, t_end, dt)
for t in ts:
frame.row[t+dt] = update_func(frame.row[t], t, system)
return frame
%time results = run_simulation(system, update_func);
results
Explanation: Make the System object.
End of explanation
def slope_func(state, t, system):
Computes derivatives of the glucose minimal model.
state: State object
t: time in min
system: System object
returns: derivatives of G and X
G, X = state
k1, k2, k3 = system.k1, system.k2, system.k3
I, Ib, Gb = system.I, system.Ib, system.Gb
dGdt = -k1 * (G - Gb) - X*G
dXdt = k3 * (I(t) - Ib) - k2 * X
return dGdt, dXdt
Explanation: Numerical solution
In the previous chapter, we approximated the differential equations with difference equations, and solved them using run_simulation.
In this chapter, we solve the differential equation numerically using run_euler...
Instead of an update function, we provide a slope function that evaluates the right-hand side of the differential equations. We don't have to do the update part; the solver does it for us.
End of explanation
slope_func(init, 0, system)
Explanation: We can test the slope function with the initial conditions.
End of explanation
system = System(G0=G0, k1=k1, k2=k2, k3=k3,
init=init, Gb=Gb, Ib=Ib, I=I,
t_0=t_0, t_end=t_end, dt=1)
%time results2, details = run_euler(system, slope_func)
Explanation: Here's how we run the ODE solver.
End of explanation
results2
Explanation: results is a TimeFrame with one row for each time step and one column for each state variable:
End of explanation
plot(results.G, '-')
plot(results2.G, '-')
plot(data.glucose, 'bo')
Explanation: Plotting the results from run_simulation and run_euler, we can see that they are not very different.
End of explanation
diff = results.G - results2.G
percent_diff = diff / results2.G * 100
max(abs(percent_diff.dropna()))
Explanation: The differences in G are less than 1%.
End of explanation
m = UNITS.meter
s = UNITS.second
Explanation: Dropping pennies
I'll start by getting the units we need from Pint.
End of explanation
init = State(y=381 * m,
v=0 * m/s)
Explanation: And defining the initial state.
End of explanation
g = 9.8 * m/s**2
Explanation: Acceleration due to gravity is about 9.8 m / s$^2$.
End of explanation
t_end = 10 * s
Explanation: When we call odeint, we need an array of timestamps where we want to compute the solution.
I'll start with a duration of 10 seconds.
End of explanation
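The text above mentions odeint even though the rest of this notebook uses run_euler and run_ralston, which do not need a timestamp array. For comparison only, here is a hedged sketch of how the same constant-gravity system could be handed to scipy's odeint, using plain floats because odeint does not understand Pint units.
# comparison sketch; not used by the rest of the notebook
from scipy.integrate import odeint
import numpy as np

def penny_slope(state, t, g_mag=9.8):
    # returns [dy/dt, dv/dt] for free fall with constant gravity
    y, v = state
    return [v, -g_mag]

ts = np.linspace(0, 10, 101)                   # the array of timestamps mentioned above
trajectory = odeint(penny_slope, [381.0, 0.0], ts)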
system = System(init=init, g=g, t_end=t_end)
Explanation: Now we make a System object.
End of explanation
def slope_func(state, t, system):
Compute derivatives of the state.
state: position, velocity
t: time
system: System object containing `g`
returns: derivatives of y and v
y, v = state
g = system.g
dydt = v
dvdt = -g
return dydt, dvdt
Explanation: And define the slope function.
End of explanation
dydt, dvdt = slope_func(system.init, 0, system)
print(dydt)
print(dvdt)
Explanation: It's always a good idea to test the slope function with the initial conditions.
End of explanation
system.set(dt=0.1*s)
results, details = run_euler(system, slope_func, max_step=0.5)
details.message
results
def crossings(series, value):
Find the labels where the series passes through value.
The labels in series must be increasing numerical values.
series: Series
value: number
returns: sequence of labels
units = get_units(series.values[0])
values = magnitudes(series - value)
interp = InterpolatedUnivariateSpline(series.index, values)
return interp.roots()
t_crossings = crossings(results.y, 0)
system.set(dt=0.1*s)
results, details = run_ralston(system, slope_func, max_step=0.5)
details.message
t_crossings = crossings(results.y, 0)
Explanation: Now we're ready to call run_euler
End of explanation
results
Explanation: Here are the results:
End of explanation
def plot_position(results):
plot(results.y, label='y')
decorate(xlabel='Time (s)',
ylabel='Position (m)')
plot_position(results)
savefig('figs/chap09-fig01.pdf')
Explanation: And here's position as a function of time:
End of explanation
def crossings(series, value):
Find the labels where the series passes through value.
The labels in series must be increasing numerical values.
series: Series
value: number
returns: sequence of labels
units = get_units(series.values[0])
values = magnitudes(series - value)
interp = InterpolatedUnivariateSpline(series.index, values)
return interp.roots()
t_crossings = crossings(results.y, 0)
Explanation: Onto the sidewalk
To figure out when the penny hit the sidewalk, we can use crossings, which finds the times where a Series passes through a given value.
End of explanation
t_sidewalk = t_crossings[0] * s
Explanation: For this example there should be just one crossing, the time when the penny hits the sidewalk.
End of explanation
sqrt(2 * init.y / g)
Explanation: We can compare that to the exact result. Without air resistance, we have
$v = -g t$
and
$y = 381 - g t^2 / 2$
Setting $y=0$ and solving for $t$ yields
$t = \sqrt{\frac{2 y_{init}}{g}}$
End of explanation
def event_func(state, t, system):
Return the height of the penny above the sidewalk.
y, v = state
return y
Explanation: The estimate is accurate to about 10 decimal places.
Events
Instead of running the simulation until the penny goes through the sidewalk, it would be better to detect the point where the penny hits the sidewalk and stop. run_ralston provides exactly the tool we need, event functions.
Here's an event function that returns the height of the penny above the sidewalk:
End of explanation
results, details = run_ralston(system, slope_func, events=event_func)
details
Explanation: And here's how we pass it to run_ralston. The solver should run until the event function returns 0, and then terminate.
End of explanation
results
Explanation: The message from the solver indicates the solver stopped because the event we wanted to detect happened.
Here are the results:
End of explanation
t_sidewalk = get_last_label(results) * s
Explanation: With the events option, the solver returns the actual time steps it computed, which are not necessarily equally spaced.
The last time step is when the event occurred:
End of explanation
v_sidewalk = get_last_value(results.v)
Explanation: The result is accurate to about 15 decimal places.
We can also check the velocity of the penny when it hits the sidewalk:
End of explanation
km = UNITS.kilometer
h = UNITS.hour
v_sidewalk.to(km / h)
Explanation: And convert to kilometers per hour.
End of explanation |
12,223 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
CrowdTruth vs. MACE vs. Majority Vote for Recognizing Textual Entailment Annotation
This notebook contains a comparative analysis on the task of recognizing textual entailment between three approaches
Step1: Declaring a pre-processing configuration
The pre-processing configuration defines how to interpret the raw crowdsourcing input. To do this, we need to define a configuration class. First, we import the default CrowdTruth configuration class
Step2: Our test class inherits the default configuration DefaultConfig, while also declaring some additional attributes that are specific to the Recognizing Textual Entailment task
Step3: Pre-processing the input data
After declaring the configuration of our input file, we are ready to pre-process the crowd data
Step4: Computing the CrowdTruth metrics
The pre-processed data can then be used to calculate the CrowdTruth metrics. results is a dict object that contains the quality metrics for the sentences, annotations and crowd workers.
Step5: CrowdTruth Sentence Quality Score
The sentence metrics are stored in results["units"]. The uqs column in results["units"] contains the sentence quality scores, capturing the overall workers agreement over each sentences. The uqs_initial column in results["units"] contains the initial sentence quality scores, before appling the CrowdTruth metrics.
Step6: The histograms above show that the final sentence quality scores are nicely distributed, with both lower and high quality sentences. We also observe that, overall, the sentence quality score increased after applying the CrowdTruth metrics, compared to the initial sentence quality scores.
The sentence quality score is a powerful measure to understand how clear the sentence is and the suitability of the sentence to be used as training data for various machine learning models.
The unit_annotation_score column in results["units"] contains the sentence-annotation scores, capturing the likelihood that an annotation is expressed in a sentence. For each sentence, we store a dictionary mapping each annotation to its sentence-annotation score.
Step7: Example of a clear unit based on the CrowdTruth metrics
First, we sort the sentence metrics stored in results["units"] based on the sentence quality score (uqs), in ascending order. Thus, the most clear sentences are found at the tail of the new structure
Step8: We print the most clear unit, which is the last unit in sortedUQS
Step9: The unit below is very clear because the text contains high overlap with the hypothesis. The relevance can be observed in the following parts of the hypothesis and of the text
Step10: Example of an unclear unit based on the CrowdTruth metrics
We use the same structure as above and we print the most unclear unit, which is the first unit in sortedUQS
Step11: The unit below is very unclear because the text and the hypothesis contain overlapping words such as "1990" and "apartheid" and phrases that could be related, such as "South Africa" - "ANC" or "abolished" - "were to be lifted", but a clear relevance between the two can not be shown.
Step12: CrowdTruth Worker Quality Scores
The worker metrics are stored in results["workers"]. The wqs columns in results["workers"] contains the worker quality scores, capturing the overall agreement between one worker and all the other workers. The wqs_initial column in results["workers"] contains the initial worker quality scores, before appling the CrowdTruth metrics.
Step13: The histograms above shows the worker quality scores and the initial worker quality scores. We observe that the worker quality scores are distributed across a wide spectrum, from low to high quality workers. Furthermore, the worker quality scores seem to have a more normal distribution after computing the CrowdTruth iterations, compared to the initial worker quality scores.
Low worker quality scores can be used to identify spam workers, or workers that have misunderstood the annotation task. Similarly, high worker quality scores can be used to identify well performing workers.
CrowdTruth Annotation Quality Score
The annotation metrics are stored in results["annotations"]. The aqs column contains the annotation quality scores, capturing the overall worker agreement over one annotation.
Step14: In the dataframe above we observe that after iteratively computing the sentence quality scores and the worker quality scores the overall agreement on the annotations increased. This can be seen when comparing the annotation quality scores with the initial annotation quality scores.
MACE for Recognizing Textual Entailment Annotation
We first pre-processed the crowd results to create compatible files for running the MACE tool.
Each row in a csv file should point to a unit in the dataset and each column in the csv file should point to a worker. The content of the csv file captures the worker answer for that particular unit (or remains empty if the worker did not annotate that unit).
The following implementation of MACE has been used in these experiments
Step15: For each sentence and each annotation, MACE computes the sentence annotation probability score, which shows the probability of each annotation to be expressed in the sentence. MACE sentence annotation probability score is similar to the CrowdTruth sentence-annotation score.
Step16: For each worker in the annotators set we have MACE worker competence score, which is similar to the CrowdTruth worker quality score.
Step17: CrowdTruth vs. MACE on Worker Quality
We read the worker quality scores as returned by CrowdTruth and MACE and merge the two dataframes
Step18: Plot the quality scores of the workers as computed by both CrowdTruth and MACE
Step19: In the plot above we observe that MACE favours extreme values, which means that the identified low quality workers will have very low scores, e.g., below 0.2 and the best workers will have quality scores of 1.0, or very close to 1.0. On the other side, CrowdTruth has a smaller interval of values, starting from around 0.25 to 0.9.
Following, we compute the correlation between the two values using Spearman correlation and Kendall's tau correlation, to see whether the two values are correlated. More exactly, we want to see whether, overall, both metrics identify as low quality or high quality similar workers, or they are really divergent in their outcome.
Step20: Spearman correlation shows shows a strong to very strong correlation between the two computed values, and the correlation is significant. This means that overall, even if the two metrics provide different values, they are indeed correlated and low quality workers receive low scores and high quality workers receive higher scores from both aggregation methods.
Step21: Even with Kendall's tau rank correlation, we observe a moderate to strong correlation between the two computed values, where the correlation is significant.
Further, we compute the difference of the two quality scores and we check one worker for which the difference is very high.
Step22: We take for example the worker with the id "A2QPX2MS844TYJ" and check the overall disagreement among the workers on the units annotated by them. MACE rated the worker with a quality score of 0.01 while CrowdTruth rated the worker with a quality score of 0.43
What we observe in the dataframe below, where we show the units annotated by the worker "A2QPX2MS844TYJ", is that the worker "A2QPX2MS844TYJ" annotated, in general, units with high disagreement, i.e., which are not very clear. While MACE marked the worker as low quality because it seems that they always picked the same answer, CrowdTruth also considered the difficulty of the units, and thus, giving it a higher weight.
Step23: CrowdTruth vs. MACE vs. Majority Vote on Annotation Performance
Next, we look into the crowd performance in terms of F1-score compared to expert annotations. We compare the crowd performance given the three aggregation methods
Step24: The following two functions compute the F1-score of the crowd compared to the expert annotations. The first function computes the F1-score at every sentence-annotation score threshold. The second function computes the F1-score for the majority vote approach, i.e., when at least half of the workers picked the answer.
Step25: F1-score for the annotation "1" or "true" | Python Code:
#Read the input file into a pandas DataFrame
import pandas as pd
test_data = pd.read_csv("../data/rte.standardized.csv")
test_data.head()
Explanation: CrowdTruth vs. MACE vs. Majority Vote for Recognizing Textual Entailment Annotation
This notebook contains a comparative analysis on the task of recognizing textual entailment between three approaches:
CrowdTruth
MACE (a probabilistic model that computes competence estimates of the individual annotators and the most likely answer to each item [1])
Majority Vote (the most common crowd annotation aggregation method).
[1] Dirk Hovy, Taylor Berg-Kirkpatrick, Ashish Vaswani, and Eduard Hovy (2013): Learning Whom to Trust with MACE. In: Proceedings of NAACL-HLT 2013.
First we describe the task. Then, we apply the CrowdTruth metrics and identify clear and unclear example sentences. We then apply MACE. In the final part we perform two comparisons:
CrowdTruth vs. MACE: workers' quality
CrowdTruth vs. MACE vs. Majority Vote: metrics performance in terms of F1-score (compared to expert, ground truth annotations)
Data: This notebook uses the data gathered in the "Recognizing Textual Entailment" crowdsourcing experiment published in Rion Snow, Brendan O'Connor, Dan Jurafsky, and Andrew Y. Ng: Cheap and fast - but is it good? Evaluating non-expert annotations for natural language tasks. EMNLP 2008, pages 254-263.
Task Description: Given two sentences, the crowd has to choose whether the second hypothesis sentence can be inferred from the first sentence (binary choice, true/false). Following, we provide an example from the aforementioned publication:
Text: โCrude Oil Prices Slumpโ
Hypothesis: โOil prices dropโ
A screenshot of the task as it appeared to workers can be seen at the following repository.
The dataset for this task was downloaded from the following repository, which contains the raw output from the crowd on AMT. Currently, you can find the processed input file in the folder named data. Besides the raw crowd annotations, the processed file also contains the text and the hypothesis that needs to be tested with the given text, which were given as input to the crowd.
End of explanation
import crowdtruth
from crowdtruth.configuration import DefaultConfig
Explanation: Declaring a pre-processing configuration
The pre-processing configuration defines how to interpret the raw crowdsourcing input. To do this, we need to define a configuration class. First, we import the default CrowdTruth configuration class:
End of explanation
class TestConfig(DefaultConfig):
inputColumns = ["gold", "task", "text", "hypothesis"]
outputColumns = ["response"]
customPlatformColumns = ["!amt_annotation_ids", "orig_id", "!amt_worker_ids", "start", "end"]
# processing of a closed task
open_ended_task = False
annotation_vector = ["0", "1"]
def processJudgments(self, judgments):
# pre-process output to match the values in annotation_vector
for col in self.outputColumns:
# transform to lowercase
judgments[col] = judgments[col].apply(lambda x: str(x).lower())
return judgments
Explanation: Our test class inherits the default configuration DefaultConfig, while also declaring some additional attributes that are specific to the Recognizing Textual Entailment task:
inputColumns: list of input columns from the .csv file with the input data
outputColumns: list of output columns from the .csv file with the answers from the workers
customPlatformColumns: a list of columns from the .csv file that defines a standard annotation tasks, in the following order - judgment id, unit id, worker id, started time, submitted time. This variable is used for input files that do not come from AMT or FigureEight (formarly known as CrowdFlower).
annotation_separator: string that separates between the crowd annotations in outputColumns
open_ended_task: boolean variable defining whether the task is open-ended (i.e. the possible crowd annotations are not known beforehand, like in the case of free text input); in the task that we are processing, workers pick the answers from a pre-defined list, therefore the task is not open ended, and this variable is set to False
annotation_vector: list of possible crowd answers, mandatory to declare when open_ended_task is False; for our task, this is the list of relations
processJudgments: method that defines processing of the raw crowd data; for this task, we process the crowd answers to correspond to the values in annotation_vector
The complete configuration class is declared below:
End of explanation
data, config = crowdtruth.load(
file = "../data/rte.standardized.csv",
config = TestConfig()
)
data['judgments'].head()
Explanation: Pre-processing the input data
After declaring the configuration of our input file, we are ready to pre-process the crowd data:
End of explanation
results = crowdtruth.run(data, config)
Explanation: Computing the CrowdTruth metrics
The pre-processed data can then be used to calculate the CrowdTruth metrics. results is a dict object that contains the quality metrics for the sentences, annotations and crowd workers.
End of explanation
results["units"].head()
# Distribution of the sentence quality scores and the initial sentence quality scores
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = 15, 5
plt.subplot(1, 2, 1)
plt.hist(results["units"]["uqs"])
plt.ylim(0,270)
plt.xlabel("Sentence Quality Score")
plt.ylabel("#Sentences")
plt.subplot(1, 2, 2)
plt.hist(results["units"]["uqs_initial"])
plt.ylim(0,270)
plt.xlabel("Initial Sentence Quality Score")
plt.ylabel("# Units")
Explanation: CrowdTruth Sentence Quality Score
The sentence metrics are stored in results["units"]. The uqs column in results["units"] contains the sentence quality scores, capturing the overall worker agreement over each sentence. The uqs_initial column in results["units"] contains the initial sentence quality scores, before applying the CrowdTruth metrics.
End of explanation
results["units"]["unit_annotation_score"].head()
Explanation: The histograms above show that the final sentence quality scores are nicely distributed, with both low and high quality sentences. We also observe that, overall, the sentence quality score increased after applying the CrowdTruth metrics, compared to the initial sentence quality scores.
The sentence quality score is a powerful measure to understand how clear the sentence is and the suitability of the sentence to be used as training data for various machine learning models.
The unit_annotation_score column in results["units"] contains the sentence-annotation scores, capturing the likelihood that an annotation is expressed in a sentence. For each sentence, we store a dictionary mapping each annotation to its sentence-annotation score.
End of explanation
sortedUQS = results["units"].sort_values(["uqs"])
sortedUQS = sortedUQS.reset_index()
Explanation: Example of a clear unit based on the CrowdTruth metrics
First, we sort the sentence metrics stored in results["units"] based on the sentence quality score (uqs), in ascending order. Thus, the most clear sentences are found at the tail of the new structure:
End of explanation
sortedUQS.tail(1)
Explanation: We print the most clear unit, which is the last unit in sortedUQS:
End of explanation
print("Hypothesis: %s" % sortedUQS["input.hypothesis"].iloc[len(sortedUQS.index)-1])
print("Text: %s" % sortedUQS["input.text"].iloc[len(sortedUQS.index)-1])
print("Expert Answer: %s" % sortedUQS["input.gold"].iloc[len(sortedUQS.index)-1])
print("Crowd Answer with CrowdTruth: %s" % sortedUQS["unit_annotation_score"].iloc[len(sortedUQS.index)-1])
print("Crowd Answer without CrowdTruth: %s" % sortedUQS["unit_annotation_score_initial"].iloc[len(sortedUQS.index)-1])
Explanation: The unit below is very clear because the text contains high overlap with the hypothesis. The relevance can be observed in the following parts of the hypothesis and of the text: "Pamplona fiesta has been celebrated for centuries" and "The centuries-old Pamplona fiesta".
End of explanation
sortedUQS.head(1)
Explanation: Example of an unclear unit based on the CrowdTruth metrics
We use the same structure as above and we print the most unclear unit, which is the first unit in sortedUQS:
End of explanation
print("Hypothesis: %s" % sortedUQS["input.hypothesis"].iloc[0])
print("Text: %s" % sortedUQS["input.text"].iloc[0])
print("Expert Answer: %s" % sortedUQS["input.gold"].iloc[0])
print("Crowd Answer with CrowdTruth: %s" % sortedUQS["unit_annotation_score"].iloc[0])
print("Crowd Answer without CrowdTruth: %s" % sortedUQS["unit_annotation_score_initial"].iloc[0])
Explanation: The unit below is very unclear because the text and the hypothesis contain overlapping words such as "1990" and "apartheid" and phrases that could be related, such as "South Africa" - "ANC" or "abolished" - "were to be lifted", but a clear relevance between the two can not be shown.
End of explanation
results["workers"].head()
# Distribution of the worker quality scores and the initial worker quality scores
plt.rcParams['figure.figsize'] = 15, 5
plt.subplot(1, 2, 1)
plt.hist(results["workers"]["wqs"])
plt.ylim(0,50)
plt.xlabel("Worker Quality Score")
plt.ylabel("#Workers")
plt.subplot(1, 2, 2)
plt.hist(results["workers"]["wqs_initial"])
plt.ylim(0,50)
plt.xlabel("Initial Worker Quality Score")
plt.ylabel("#Workers")
Explanation: CrowdTruth Worker Quality Scores
The worker metrics are stored in results["workers"]. The wqs column in results["workers"] contains the worker quality scores, capturing the overall agreement between one worker and all the other workers. The wqs_initial column in results["workers"] contains the initial worker quality scores, before applying the CrowdTruth metrics.
End of explanation
results["annotations"]
Explanation: The histograms above show the worker quality scores and the initial worker quality scores. We observe that the worker quality scores are distributed across a wide spectrum, from low to high quality workers. Furthermore, the worker quality scores seem to have a more normal distribution after computing the CrowdTruth iterations, compared to the initial worker quality scores.
Low worker quality scores can be used to identify spam workers, or workers that have misunderstood the annotation task. Similarly, high worker quality scores can be used to identify well performing workers.
CrowdTruth Annotation Quality Score
The annotation metrics are stored in results["annotations"]. The aqs column contains the annotation quality scores, capturing the overall worker agreement over one annotation.
End of explanation
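As one concrete way to act on the point about spotting spam workers, the sketch below filters the worker table on an arbitrary, assumed threshold of 0.3; the threshold is an illustration, not part of the original analysis.
# illustrative only: flag workers whose CrowdTruth quality score falls below an assumed 0.3 cutoff
low_quality_workers = results["workers"][results["workers"]["wqs"] < 0.3]
low_quality_workers.head()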
# MACE input file sample
import numpy as np
mace_test_data = pd.read_csv("../data/mace_rte.standardized.csv", header=None)
mace_test_data = test_data.replace(np.nan, '', regex=True)
mace_test_data.head()
Explanation: In the dataframe above we observe that after iteratively computing the sentence quality scores and the worker quality scores the overall agreement on the annotations increased. This can be seen when comparing the annotation quality scores with the initial annotation quality scores.
MACE for Recognizing Textual Entailment Annotation
We first pre-processed the crowd results to create compatible files for running the MACE tool.
Each row in a csv file should point to a unit in the dataset and each column in the csv file should point to a worker. The content of the csv file captures the worker answer for that particular unit (or remains empty if the worker did not annotate that unit).
The following implementation of MACE has been used in these experiments: https://github.com/dirkhovy/MACE.
End of explanation
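To make that expected layout concrete, here is a purely illustrative toy matrix (invented values, not the real RTE data): three units as rows, three workers as columns, and blanks where a worker skipped a unit.
# toy sketch of the MACE input format described above
import pandas as pd
toy_mace_input = pd.DataFrame(
    [["1", "0", ""],
     ["",  "1", "1"],
     ["0", "",  "1"]])
# MACE expects no header row and no index column in the csv
toy_mace_input.to_csv("toy_mace_input.csv", header=False, index=False)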
# MACE sentence annotation probability scores:
import pandas as pd
mace_data = pd.read_csv("../data/results/mace_units_rte.csv")
mace_data.head()
Explanation: For each sentence and each annotation, MACE computes the sentence annotation probability score, which shows the probability of each annotation to be expressed in the sentence. MACE sentence annotation probability score is similar to the CrowdTruth sentence-annotation score.
End of explanation
# MACE worker competence scores
mace_workers = pd.read_csv("../data/results/mace_workers_rte.csv")
mace_workers.head()
Explanation: For each worker in the annotators set we have MACE worker competence score, which is similar to the CrowdTruth worker quality score.
End of explanation
mace_workers = pd.read_csv("../data/results/mace_workers_rte.csv")
crowdtruth_workers = pd.read_csv("../data/results/crowdtruth_workers_rte.csv")
workers_scores = pd.merge(mace_workers, crowdtruth_workers, on='worker')
workers_scores = workers_scores.sort_values(["wqs"])
workers_scores.head()
Explanation: CrowdTruth vs. MACE on Worker Quality
We read the worker quality scores as returned by CrowdTruth and MACE and merge the two dataframes:
End of explanation
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
plt.scatter(
workers_scores["competence"],
workers_scores["wqs"],
)
plt.plot([0, 1], [0, 1], 'red', linewidth=1)
plt.title("Worker Quality Score")
plt.xlabel("MACE")
plt.ylabel("CrowdTruth")
Explanation: Plot the quality scores of the workers as computed by both CrowdTruth and MACE:
End of explanation
from scipy.stats import spearmanr
x = workers_scores["wqs"]
x_corr = workers_scores["competence"]
corr, p_value = spearmanr(x, x_corr)
print ("correlation: ", corr)
print ("p-value: ", p_value)
Explanation: In the plot above we observe that MACE favours extreme values, which means that the identified low quality workers will have very low scores, e.g., below 0.2 and the best workers will have quality scores of 1.0, or very close to 1.0. On the other side, CrowdTruth has a smaller interval of values, starting from around 0.25 to 0.9.
Following, we compute the correlation between the two values using Spearman correlation and Kendall's tau correlation, to see whether the two values are correlated. More exactly, we want to see whether, overall, both metrics identify as low quality or high quality similar workers, or they are really divergent in their outcome.
End of explanation
from scipy.stats import kendalltau
x1 = workers_scores["wqs"]
x2 = workers_scores["competence"]
tau, p_value = kendalltau(x1, x2)
print ("correlation: ", tau)
print ("p-value: ", p_value)
Explanation: Spearman correlation shows a strong to very strong correlation between the two computed values, and the correlation is significant. This means that overall, even if the two metrics provide different values, they are indeed correlated: low quality workers receive low scores and high quality workers receive higher scores from both aggregation methods.
End of explanation
workers_scores["diff"] = workers_scores["wqs"] - workers_scores["competence"]
workers_scores = workers_scores.sort_values(["diff"])
workers_scores.tail(5)
Explanation: Even with Kendall's tau rank correlation, we observe a moderate to strong correlation between the two computed values, where the correlation is significant.
Further, we compute the difference of the two quality scores and we check one worker for which the difference is very high.
End of explanation
# Sample of sentences annotated by worker A2QPX2MS844TYJ
units = list(test_data[test_data["!amt_worker_ids"] == "A2QPX2MS844TYJ"]["orig_id"])
units_df = sortedUQS[sortedUQS["unit"].isin(units)]
units_df = units_df.sort_values(["uqs_initial"])
units_df
Explanation: We take for example the worker with the id "A2QPX2MS844TYJ" and check the overall disagreement among the workers on the units annotated by them. MACE rated the worker with a quality score of 0.01 while CrowdTruth rated the worker with a quality score of 0.43
What we observe in the dataframe below, where we show the units annotated by the worker "A2QPX2MS844TYJ", is that this worker annotated, in general, units with high disagreement, i.e., units that are not very clear. While MACE marked the worker as low quality because it seems that they always picked the same answer, CrowdTruth also considered the difficulty of the units, and thus gave the worker a higher weight.
End of explanation
import pandas as pd
import numpy as np
mace = pd.read_csv("../data/results/mace_units_rte.csv")
crowdtruth = pd.read_csv("../data/results/crowdtruth_units_rte.csv")
Explanation: CrowdTruth vs. MACE vs. Majority Vote on Annotation Performance
Next, we look into the crowd performance in terms of F1-score compared to expert annotations. We compare the crowd performance given the three aggregation methods: CrowdTruth, MACE and Majority Vote. We read the result files as given by MACE and CrowdTruth.
End of explanation
def compute_F1_score(dataset, label, gold_column, gold_value):
nyt_f1 = np.zeros(shape=(100, 2))
    for idx in range(0, 100):
thresh = (idx + 1) / 100.0
tp = 0
fp = 0
tn = 0
fn = 0
for gt_idx in range(0, len(dataset.index)):
if dataset[label].iloc[gt_idx] >= thresh:
if dataset[gold_column].iloc[gt_idx] == gold_value:
tp = tp + 1.0
else:
fp = fp + 1.0
else:
if dataset[gold_column].iloc[gt_idx] == gold_value:
fn = fn + 1.0
else:
tn = tn + 1.0
nyt_f1[idx, 0] = thresh
if tp != 0:
nyt_f1[idx, 1] = 2.0 * tp / (2.0 * tp + fp + fn)
else:
nyt_f1[idx, 1] = 0
return nyt_f1
def compute_majority_vote(dataset, label, gold_column, gold_value):
tp = 0
fp = 0
tn = 0
fn = 0
for j in range(len(dataset.index)):
if dataset[label].iloc[j] >= 0.5:
if dataset[gold_column].iloc[j] == gold_value:
tp = tp + 1.0
else:
fp = fp + 1.0
else:
if dataset[gold_column].iloc[j] == gold_value:
fn = fn + 1.0
else:
tn = tn + 1.0
return 2.0 * tp / (2.0 * tp + fp + fn)
Explanation: The following two functions compute the F1-score of the crowd compared to the expert annotations. The first function computes the F1-score at every sentence-annotation score threshold. The second function computes the F1-score for the majority vote approach, i.e., when at least half of the workers picked the answer.
End of explanation
F1_crowdtruth = compute_F1_score(crowdtruth, "true", "gold", 1)
print("Best CrowdTruth F1 score for annotation 'true': ", F1_crowdtruth[F1_crowdtruth[:,1].argsort()][-1:])
F1_mace = compute_F1_score(mace, "true", "gold", 1)
print("Best MACE F1 score for annotation 'true': ", F1_mace[F1_mace[:,1].argsort()][-1:])
F1_majority_vote = compute_majority_vote(crowdtruth, 'true_initial', "gold", 1)
print("Majority Vote F1 score for annotation 'true': ", F1_majority_vote)
Explanation: F1-score for the annotation "1" or "true":
End of explanation |
12,224 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Deep dive into the Qumulo API python bindings
This data is current as of Qumulo Version 3.3.1
Step1: Inspect all of the Qumulo python bindings and show methods
Step2: Create a new python REST client instance and login.
Step3: Show Qumulo API python client library version and Qumulo cluster software version
Step4: Active Directory (ad) - list AD status
Step5: analytics - timeseries data
Step6: Authentication and Authorization (auth) - get related identities
Step7: cluster - list nodes
Step8: dns - resolve names from ip addresses
Step9: File System (fs) - list files and read a file
Step10: groups - list all groups in Qumulo
Step11: network - show current connection counts to all nodes
Step12: nfs - list shares
Step13: node_state - get current node state
Step14: quota - list first 20 quotas and capacity used for each
Step15: smb - list shares
Step16: snapshots - list directory-level snapshots
Step17: support - check support/monitoring status
Step18: time_config - get current time status
Step19: users - list users | Python Code:
!pip show qumulo_api
import qumulo
import os
import io
import glob
import re
import time
from datetime import datetime
import dateutil.parser as date_parser
from qumulo.rest_client import RestClient
%%javascript
// this will prevent the large output window below from being boxed in.
IPython.OutputArea.auto_scroll_threshold = 9999;
# set your environment variables or fill in the variables below
API_HOSTNAME = os.environ['API_HOSTNAME'] if 'API_HOSTNAME' in os.environ else 'product.eng.qumulo.com'
API_USER = os.environ['API_USER'] if 'API_USER' in os.environ else 'admin'
API_PASSWORD = os.environ['API_PASSWORD'] if 'API_PASSWORD' in os.environ else ''
Explanation: Deep dive into the Qumulo API python bindings
<span style="background-color: #FFBBAA; padding: 5px;">This data is current as of Qumulo Version 3.3.1</span>
This notebook helps a user explore the Qumulo API python bindings. First, it will list out all of the supported Qumulo API python modules and functions. Don't forget to run <code>pip install qumulo_api</code> first.
After listing all of the defined python functions, it demonstrates one (or more) functions inside each of the python modules. All code examples below will only read data and configuration from your Qumulo cluster.
End of explanation
qumulo_lib_path = os.path.dirname(qumulo.__file__) + '/rest'
total_matches = 0
for f in glob.glob(qumulo_lib_path + '/*.py'):
file_name = os.path.basename(f)
if file_name == '__init__.py':
continue
print("")
print("-"*80)
print("Area: %s" % (file_name, ))
c = open(f, 'r').read()
rx_str = '@request.request[ \r\n]+def ([^(]+)\([ \r\n]*conninfo,[ \r\n]*credentials([^\)]*)(.*?)(return|yield)'
ms = re.findall(rx_str, c, re.S|re.M)
for m in ms:
total_matches += 1
func_name = m[0]
# get arguments
args = []
arg_ms = m[1].split(',')
for arg_m in arg_ms:
if arg_m.strip() != "":
args.append(re.sub('=.*', '', arg_m.strip()))
# method
method = "GET"
method_m = re.search('method[ ]*=[ ]*"([A-Z]+)', m[2])
if method_m is not None:
method = method_m.group(1)
# uri, currently more work for fs methods
uri = "/"
uri_m = re.search('uri[ ]*=.*?"([^"]+)', m[2])
if uri_m is not None:
uri = uri_m.group(1)
uri_m = re.search('uri[ ]*=.*?\'([^\']+)', m[2])
if uri_m is not None:
uri = uri_m.group(1)
print(" rc.%s.%s(%s)" % (file_name.replace('.py', ''),
func_name,
', '.join(args[:4]) + (' ...' if len(args)>4 else '')))
Explanation: Inspect all of the Qumulo python bindings and show methods
End of explanation
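As a rough cross-check of the regex scan above, the same inventory can be pulled with Python's own introspection tools. This is only a sketch (not part of the original notebook): it assumes a Python 3 interpreter and that each qumulo.rest module imports cleanly on its own.
import importlib
import inspect
import pkgutil

import qumulo.rest

# List every module-level function in each qumulo.rest module (cross-check only).
for module_info in pkgutil.iter_modules(qumulo.rest.__path__):
    try:
        module = importlib.import_module('qumulo.rest.' + module_info.name)
    except ImportError:
        continue
    names = sorted(n for n, obj in inspect.getmembers(module, inspect.isfunction))
    print(module_info.name, '->', ', '.join(names))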
# Create a new reset client and login.
rc = RestClient(API_HOSTNAME, 8000)
rc.login(API_USER, API_PASSWORD)
Explanation: Create a new python REST client instance and login.
End of explanation
# if qumulo_api is installed via pip, this will return its version.
cmd_output = !pip show qumulo_api
# parse out results if "pip show"
pip_version = '! Unknown !'
for line in cmd_output:
parts = line.split(':')
if parts[0] == 'Version':
pip_version = parts[1]
print("Qumulo API python library version: %s" % (pip_version.strip(),))
# get the Qumulo cluster software version via the API
cluster_version = rc.version.version()
print("Qumulo Cluster software version: %s" % (cluster_version['revision_id'].replace('Qumulo Core ', ''),))
# How old is the current build on the Qumulo cluster?
build_time = int(date_parser.parse(cluster_version['build_date']).strftime('%s'))
cur_time = time.time()
print("Qumulo Cluster software version is: %d days old." % ((cur_time - build_time)/(60*60*24),))
Explanation: Show Qumulo API python client library version and Qumulo cluster software version
End of explanation
# Show the current status of the Cluster's AD relationship
rc.ad.list_ad()
Explanation: Active Directory (ad) - list AD status
End of explanation
# Get the latest minute's metrics from the timeseries data endpoint.
# This data is used on the Qumulo web application's dashboard home page.
# Show the average value for the last minute for each series.
data = rc.analytics.time_series_get(begin_time=int(time.time() - 60))
for series in data:
# skip totals since they are duplicated by the other metrics
if 'total' in series['id']:
continue
print("%22s - %11s" % (series['id'],
round(sum(series['values']) / len(series['values']), 1)))
Explanation: analytics - timeseries data
End of explanation
for a in rc.auth.local_username_to_all_related_identities('tommy'):
print("%(id_type)s - %(id_value)s" % a)
Explanation: Authentication and Authorization (auth) - get related identities
End of explanation
for n in rc.cluster.list_nodes():
print("%(node_name)s/%(id)s - %(model_number)s" % n)
Explanation: cluster - list nodes
End of explanation
for d in rc.dns.resolve_ips_to_names(['127.0.0.1', '10.20.217.62', '192.168.0.1', '192.168.1.1',
'192.168.154.1', '172.16.1.1', '10.120.246.43', '10.10.1.1']):
print("%(ip_address)15s - %(result)10s - %(hostname)s" % d)
Explanation: dns - resolve names from ip addresses
End of explanation
path = '/test/'
dir_ent = rc.fs.read_directory(path=path)
for d in dir_ent['files']:
if d['type'] == 'FS_FILE_TYPE_FILE':
fw = io.BytesIO()
print("Read file %(name)s which is %(size)s bytes, and print first 80 bytes" % d)
rc.fs.read_file(fw, path = path + d['name'])
print(fw.getvalue()[:80])
break
Explanation: File System (fs) - list files and read a file
End of explanation
for g in rc.groups.list_groups():
print("%(gid)6s %(id)6s %(name)16s %(sid)50s" % g)
Explanation: groups - list all groups in Qumulo
End of explanation
for c in rc.network.connections():
print("Node %2s connection count: %4s ---- First %s: %s" % (
c['id'],
len(c['connections']),
min(len(c['connections']), 10),
', '.join([d['network_address'] + '/' + d['type'].replace('CONNECTION_TYPE_', '')
for d in c['connections']])))
Explanation: network - show current connection counts to all nodes
End of explanation
for share in rc.nfs.nfs_list_exports():
print("%(export_path)s -> %(fs_path)s - %(description)s" % share)
Explanation: nfs - list shares
End of explanation
print(rc.node_state.get_node_state())
# (only prints the state of the node the rest client is currently connected to)
Explanation: node_state - get current node state
End of explanation
quota_count = 0
for qd in rc.quota.get_all_quotas_with_status():
for q in qd['quotas']:
quota_count += 1
if quota_count > 20:
break
print("%(path)s - id: %(id)s - %(capacity_usage)s bytes used of %(limit)s" % q)
Explanation: quota - list first 20 quotas and capacity used for each
End of explanation
for share in rc.smb.smb_list_shares():
print("%(share_name)s -> %(fs_path)s - %(description)s" % share)
Explanation: smb - list shares
End of explanation
for snap in rc.snapshot.list_snapshots()['entries'][:10]:
print("%(name)s - %(source_file_id)s - %(directory_name)s - %(timestamp)s" % snap)
Explanation: snapshots - list directory-level snapshots
End of explanation
rc.support.get_config()
Explanation: support - check support/monitoring status
End of explanation
rc.time_config.get_time_status()
Explanation: time_config - get current time status
End of explanation
for user in rc.users.list_users():
print("%(name)10s - %(uid)5s - %(id)5s - %(primary_group)4s - %(sid)s" % user)
Explanation: users - list users
End of explanation |
12,225 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Andrews Curves
D. F. Andrews introduced 'Andrews Curves' in his 1972 paper for plotting high dimensional data in two dimensions. The underlying principle is simple
Step2: Andrews Curves for iris dataset
Step3: PCA | Python Code:
import numpy as np

def andrews_curves(data, granularity=1000):
    """
    Parameters
    ----------
    data : array like
        ith row is the ith observation
        jth column is the jth feature
        Size (m, n) => m replicates with n features
    granularity : int
        linspace granularity for theta

    Returns
    -------
    matrix : array
        Size (m, granularity)
    """
n_obs, n_features = data.shape
theta = np.linspace(-np.pi, np.pi, granularity)
# transpose
theta = np.reshape(theta, (-1, theta.shape[0]))
t = np.arange(1, np.floor(n_features/2)+1)
t = np.reshape(t, (t.shape[0], 1))
sin_bases = np.sin(t*theta)
cos_bases = np.cos(t*theta)
if n_features % 2 == 0:
# Remove the last row of cosine bases
# for even values
cos_bases = cos_bases[:-1,:]
c = np.empty((sin_bases.shape[0] + cos_bases.shape[0], sin_bases.shape[1] ),
dtype=sin_bases.dtype)
c[0::2,:] = sin_bases
c[1::2,:] = cos_bases
constant = 1/np.sqrt(2) * np.ones((1, c.shape[1]))
matrix = np.vstack([constant, c])
return (np.dot(data,matrix))
Explanation: Andrews Curves
D. F. Andrews introduced 'Andrews Curves' in his 1972 paper for plotting high dimensional data in two dimensions. The underlying principle is simple: embed the high dimensional data in a space of functions and then visualize these functions.
Consider A $d$ dimensional data point $\mathbf{x} = (x_1, x_2, \dots, x_d)$. Define the following function:
$$f_x(t) = \begin{cases}
\frac{x_1}{\sqrt{2}} + x_2 \sin(t) + x_3 \cos(t) + x_4 \sin (2t) + x_5\cos(2t) + \dots + x_{2k} \sin(kt) + x_{2k+1} \cos(kt) + \dots + x_{d-2}\sin( (\frac{d}{2} -1)t) + x_{d-1}\cos( (\frac{d}{2} -1)t) + x_{d} \sin(\frac{d}{2}t) & d \text{ even}\
\frac{x_1}{\sqrt{2}} + x_2 \sin(t) + x_3 \cos(t) + x_4 \sin (2t) + x_5\cos(2t) + \dots + x_{2k} \sin(kt) + x_{2k+1} \cos(kt) + \dots + x_{d-3}\sin( \frac{d-3}{2} t) + x_{d-2}\cos( \frac{d-3}{2}t) + x_{d-1} \sin(\frac{d-1}{2}t) + x_{d} \cos(\frac{d-1}{2}t)) & d \text{ odd}\
\end{cases}
$$
This representation yields one dimensional projections, which may reveal clustering, outliers or other patterns that occur in this subspace. All such one dimensional projections can then be plotted on one graph.
Properties
Andrews Curves have some interesting properties that make them useful as a 2D tool:
Mean
If $\bar{\mathbf{x}}$ represents the mean of the $\mathbf{x_i}$ over $n$ observations, $\bar{\mathbf{x}} = \frac{1}{n} \sum_{i=1}^{n}\mathbf{x_i}$, then
$$ f_{\bar{\mathbf{x}}}(t) = \frac{1}{n} \sum_{i=1}^{n} f_{\mathbf{x_i}}(t)$$
Proof:
We consider an even $d$ (the odd case is analogous).
\begin{align}
f_{\bar{\mathbf{x}}}(t) &= \frac{\bar{\mathbf{x_1}}}{\sqrt{2}} + \bar{\mathbf{x_2}} \sin(t) + \bar{\mathbf{x_3}} \cos(t) + \bar{\mathbf{x_4}} \sin(2t) + \bar{\mathbf{x_5}} \cos(2t) + \dots + \bar{\mathbf{x_d}} \sin(\frac{d}{2}t) \
&= \frac{\sum_{j=1}^n x_{1j}}{n\sqrt{2}} + \frac{\sum_{j=1}^n x_{2j}}{n} \sin(t) + \frac{\sum_{j=1}^n x_{3j}}{n} \cos(t) + \frac{\sum_{j=1}^n x_{4j}}{n}\sin(2t) + \frac{\sum_{j=1}^n x_{5j}}{n}\cos(2t) + \dots + \frac{\sum_{j=1}^n x_{dj}}{n} \sin(\frac{d}{2}t)\
&= \frac{1}{n} \sum_{i=1}^n f_{x_i} (t)
\end{align}
Distance
Euclidean distance is preserved. Consider two points $\mathbf{x}$ and $\mathbf{y}$
$$||\mathbf{x} - \mathbf{y}||_2^2 = \sum_{j=1}^d |x_j-y_j|^2$$
Let's consider $||f_{\mathbf{x}}(t) - f_{\mathbf{y}}(t) ||_2^2 = \int_{-\pi}^{\pi} (f_{\mathbf{x}}(t) - f_{\mathbf{y}}(t))^2 dt $
\begin{align}
\int_{-\pi}^{\pi} (f_{\mathbf{x}}(t) - f_{\mathbf{y}}(t))^2 dt &= \frac{(x_1-y_1)^2}{2}(2\pi) + \int_{-\pi}^{\pi} (x_2-y_2)^2 \sin^2{t}\ dt + \int_{-\pi}^{\pi} (x_3-y_3)^2 \cos^2{t}\ dt + \int_{-\pi}^{\pi} (x_4-y_4)^2 \sin^2{2t}\ dt + \int_{-\pi}^{\pi} (x_5-y_5)^2 \cos^2{2t}\ dt + \dots
\end{align}
\begin{align}
\int_{-\pi}^{\pi} \sin^2 (kt) dt &= \frac{1}{k}\int_{-k\pi}^{k\pi} \sin^2 (t') dt'\
&= \frac{1}{k} \left( \frac{\int_{-k\pi}^{k\pi} (1-\cos{(2t')})dt'}{2} \right)\
&= \frac{1}{k} \frac{2k\pi}{2}\
&= \pi\
\int_{-\pi}^{\pi} \cos^2 (kt) dt &= \int_{-\pi}^{\pi} (1-\sin^2 (kt)) dt\
&= 2\pi-\pi\
&= \pi
\end{align}
Thus,
\begin{align}
\int_{-\pi}^{\pi} (f_{\mathbf{x}}(t) - f_{\mathbf{y}}(t))^2 dt &= \pi ||\mathbf{x} - \mathbf{y}||_2^2
\end{align}
Variance
If the $d$ features/components are all independent and have a common variance $\sigma^2$
Then
\begin{align}
\text{Var}f_{\mathbf{x}(t)} &= \text{Var} \left(\frac{x_1}{\sqrt{2}} + x_2 \sin(t) + x_3 \cos(t) + x_4 \sin (2t) + x_5\cos(2t) + \dots + x_{2k} \sin(kt) + x_{2k+1} \cos(kt) + \dots + x_{d-2}\sin( (\frac{d}{2} -1)t) + x_{d-1}\cos( (\frac{d}{2} -1)t) + x_{d} \sin(\frac{d}{2}t) \right)\
&= \sigma^2 \left( \frac{1}{2} + \sin^2 + \cos^2 t + \sin^2 2t + \cos^2 2t + \dots \right)\
&= \begin{cases}
\sigma^2(\frac{1}{2} + \frac{k-1}{2}) & d \text{ odd }\
\sigma^2(\frac{1}{2} + \frac{k}{2} - 1 + \sin^2 {\frac{kt}{2}} ) & d \text{ even }\
\end{cases}\
&= \begin{cases}
\frac{k\sigma^2}{2} & d \text{ odd }\
\sigma^2(\frac{k-1}{2} + \sin^2 {\frac{kt}{2}} ) & d \text{ even }\
\end{cases}
\end{align}
In the even case the variance is bounded between $[\sigma^2(\frac{k-1}{2}), \sigma^2(\frac{k+1}{2})]$
Since the variance is independent of $t$, the plotted functions will be smooth!
Interpretation
Clustering
Functions close together, forming a band imply the corresponding points are also close in the euclidean space
Test of significance at particular values of $t$
To test $f_{\mathbf{x}}(t) = f_{\mathbf{y}}(t)$ for some hypothesized $\mathbf{y}$, and assuming that $\text{Var}[f_{\mathbf{x}}(t)]$ is known, testing can be done using the usual $z$ score:
$$
z = \frac{f_{\mathbf{x}}(t)-f_{\mathbf{y}}(t)}{(\text{Var}[{f_{\mathbf{x}}(t)}])^{\frac{1}{2}}}
$$
assuming that the components $x_i$ are independent normal random variables.
Detecting outliers
If the components $x_i$ are independent normal, $ x_i \sim \mathcal{N}(\mu_i, \sigma^2)$, then $\frac{||\mathbf{x}-\mathbf{\mu}||^2}{\sigma^2}$ follows a $\chi^2_d$ distribution.
Consider a vector $v = \frac{f_\mathbf{1}(t)}{||f_\mathbf{1}(t)||}$ then :
\begin{align}
|(\mathbf{x}-\mathbf{\mu})'v|^2 &= \frac{||f_{\mathbf{x}}(t) - f_{\mathbf{\mu}}(t)||^2 }{||f_\mathbf{1}(t)||^2}
\frac{||f_{\mathbf{x}}(t) - f_{\mathbf{\mu}}(t)||^2 }{||f_\mathbf{1}(t)||^2} &\leq \chi_d^2(\alpha)
\end{align}
Now,
\begin{align}
||f_\mathbf{1}(t)||^2 &= \frac{1}{2} + \sin^2 + \cos^2 t + \dots + \
&\leq \frac{d+1}{2}
\end{align}
Thus,
\begin{align}
||f_{\mathbf{x}}(t) - f_{\mathbf{\mu}}(t)||^2 \leq \sigma^2 ||f_\mathbf{1}(t)||^2 \chi^2_d(\alpha) &\leq \sigma^2 \frac{d+1}{2} \chi^2_d(\alpha)\
\end{align}
Linear relationships
The "Sandwich" theorem: If $\mathbf{y}$ lies on a line joining $\mathbf{x}$ and $\mathbf{z}$, then $\forall t$ : $f_\mathbf{y}(t)$ lies between $f_\mathbf{x}(t)$ and $f_\mathbf{z}(t)$. This is straightforward.
End of explanation
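The plotting cells below rely on a setup cell that is not shown here (pandas/matplotlib imports, a colour list named CB_color_cycle, and scikit-learn's PCA). The following is a minimal stand-in plus a quick numerical check of the mean and distance properties derived above; it is a sketch, not part of the original notebook, and the palette values are placeholders rather than the author's actual colours.
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches
from sklearn.decomposition import PCA

# Placeholder palette (assumed setup); single-letter codes keep the later plot calls valid.
CB_color_cycle = ['b', 'r', 'g', 'm', 'c']

# Sanity check of the two properties above on random data.
rng = np.random.RandomState(0)
X_check = rng.randn(5, 4)                      # 5 observations, 4 features
curves = andrews_curves(X_check, granularity=2001)
t_grid = np.linspace(-np.pi, np.pi, 2001)

# Mean property: the curve of the mean equals the mean of the curves.
mean_curve = andrews_curves(X_check.mean(axis=0, keepdims=True), granularity=2001)
print(np.allclose(mean_curve, curves.mean(axis=0)))          # True

# Distance property: integral of the squared difference ~ pi * squared euclidean distance.
lhs = np.trapz((curves[0] - curves[1]) ** 2, t_grid)
rhs = np.pi * np.sum((X_check[0] - X_check[1]) ** 2)
print(lhs, rhs)                                              # approximately equal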
df = pd.read_csv('https://raw.githubusercontent.com/pandas-dev/pandas/master/pandas/tests/data/iris.csv')
df_grouped = df.groupby('Name')
df_setosa = df.query("Name=='Iris-setosa'")
fig, ax = plt.subplots(figsize=(8,8))
index = 0
patches = []
for key, group in df_grouped:
group = group.drop('Name', axis=1)
for row in andrews_curves(group.as_matrix()):
plot = ax.plot(row, CB_color_cycle[index])
patch = mpatches.Patch(color=CB_color_cycle[index], label=key)
index +=1
patches.append(patch)
ax.legend(handles=patches)
fig.tight_layout()
Explanation: Andrews Curves for iris dataset
End of explanation
X = df[['SepalLength', 'SepalWidth', 'PetalLength', 'PetalWidth']]
y = df['Name'].astype('category').cat.codes
target_names = df['Name'].astype('category').unique()
pca = PCA(n_components=2)
X_r = pca.fit(X).transform(X)
fig, ax = plt.subplots(figsize=(8,8))
colors = CB_color_cycle[:3]
lw = 2
for color, i, target_name in zip(colors, [0, 1, 2], target_names):
plt.scatter(X_r[y == i, 0], X_r[y == i, 1], color=color, alpha=.8, lw=lw,
label=target_name)
ax.legend(loc='best', shadow=False, scatterpoints=1)
ax.set_xlabel('Variance explained: {:.2f}'.format(pca.explained_variance_ratio_[0]))
ax.set_ylabel('Variance explained: {:.2f}'.format(pca.explained_variance_ratio_[1]))
ax.set_title('PCA of IRIS dataset')
fig.tight_layout()
Explanation: PCA
End of explanation |
12,226 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<header class="w3-container w3-teal">
<img src="images/utfsm.png" alt="" height="100px" align="left"/>
<img src="images/mat.png" alt="" height="100px" align="right"/>
</header>
<br/><br/><br/><br/><br/>
MAT281
Aplicaciones de la Matemática en la Ingeniería
Sebastián Flores
https
Step1: Ejemplo 3D
Consideremos los siguientes datos
Step2: Los datos
Supondremos que tenemos $m$ datos.
Cada dato $x^{(i)}$, $i=1,\dots,$ $m$ tiene $n$ componentes,
$x^{(i)} = (x^{(i)}_1, ..., x^{(i)}_n)$.
Conocemos ademรกs el valor (etiqueta) asociado a $x^{(i)}$ que llamaremos $y^{(i)}$, $i=1,\dots, m$ .
Modelo
Nuestra hipรณtesis de modelo lineal puede escribirse como
$$\begin{aligned}
h_{\theta}(x) &= \theta_0 + \theta_1 x_1 + \theta_2 x_2 + ... + \theta_n x_n \
&= \begin{bmatrix}\theta_0 & \theta_1 & \theta_2 & \dots & \theta_n\end{bmatrix} \begin{bmatrix}1 \ x_1 \x_2 \ \vdots \ x_n\end{bmatrix} \
&= \theta^T \begin{bmatrix}1\x\end{bmatrix} = \begin{bmatrix}1 & x^T\end{bmatrix} \theta \end{aligned}$$
Modelo
Definiremos $x^{(i)}0 =1$, de modo que
$h{\theta}(x^{(i)}) = (x^{(i)})^T \theta $ y buscamos el vector de parรกmetros
$$\theta = \begin{bmatrix}\theta_0 \ \theta_1 \ \theta_2 \ \vdots \ \theta_n\end{bmatrix}$$
Modelo
Definamos las matrices
$$\begin{aligned}
Y &= \begin{bmatrix}y^{(1)} \ y^{(2)} \ \vdots \ y^{(m)}\end{bmatrix}\end{aligned}$$
y
$$\begin{aligned}
X =
\begin{bmatrix}
1 & x^{(1)}_1 & \dots & x^{(1)}_n \
1 & x^{(2)}_1 & \dots & x^{(2)}_n \
\vdots & \vdots & & \vdots \
1 & x^{(m)}_1 & \dots & x^{(m)}_n \
\end{bmatrix}
=
\begin{bmatrix}
- (x^{(1)})^T - \
- (x^{(2)})^T - \
\vdots \
- (x^{(m)})^T - \
\end{bmatrix}\end{aligned}$$
Modelo
Luego la evaluaciรณn
de todos los datos puede escribirse matricialmente como
$$\begin{aligned}
X \theta &=
\begin{bmatrix}
1 & x_1^{(1)} & ... & x_n^{(1)} \
\vdots & \vdots & & \vdots \
1 & x_1^{(m)} & ... & x_n^{(m)} \
\end{bmatrix}
\begin{bmatrix}\theta_0 \ \theta_1 \ \vdots \ \theta_n\end{bmatrix} \
& =
\begin{bmatrix}
1 \theta_0 + x^{(1)}_1 \theta_1 + ... + x^{(1)}_n \theta_n \
\vdots \
1 \theta_0 + x^{(m)}_1 \theta_1 + ... + x^{(m)}_n \theta_n \
\end{bmatrix} \
& =
\begin{bmatrix}
h(x^{(1)}) \
\vdots \
h(x^{(m)})
\end{bmatrix}\end{aligned}$$
Modelo
Nuestro problema es
encontrar un โbuenโ conjunto de valores $\theta$ de modo que
$$\begin{aligned}
\begin{bmatrix}
h(x^{(1)}) \
h(x^{(2)}) \
\vdots \
h(x^{(m)})
\end{bmatrix}
\approx
\begin{bmatrix}y^{(1)} \ y^{(2)} \ \vdots \ y^{(m)}\end{bmatrix}\end{aligned}$$
es decir, que $$X \theta \approx Y$$
Modelo
Para encontrar el mejor vector $\theta$ podrรญamos definir una funciรณn de costo $J(\theta)$ de la siguiente manera
Step3: Aproximaciรณn Machine Learning
Least Mean Squares
Resultados
Step4: Aproximaciรณn Matemรกtica
Resultados
Step5: Anรกlisis del ejemplo
Las ecuaciones
normales y sklearn entregan el mismo resultado
Step6: Aplicaciรณn a Iris Dataset
Busquemos aplicar una relaciรณn lineal a cada clase. Para ello utilizamos 3 atributos para predecir el cuarto. | Python Code:
%%bash
head data/x01.txt -n 60
import numpy as np
from matplotlib import pyplot as plt
# Plot of data
data = np.loadtxt("data/x01.txt", skiprows=33)
x = data[:,1]
y = data[:,2]
plt.figure(figsize=(16,8))
plt.plot(x, y, 'rs')
plt.xlabel("brain weight")
plt.ylabel("body weight")
plt.show()
import numpy as np
from matplotlib import pyplot as plt
# Plot of data
data = np.loadtxt("data/x01.txt", skiprows=33)
x = np.log(data[:,1])
y = np.log(data[:,2])
plt.figure(figsize=(16,8))
plt.plot(x, y, 'rs')
plt.xlabel("log brain weight")
plt.ylabel("log body weight")
plt.show()
Explanation: <header class="w3-container w3-teal">
<img src="images/utfsm.png" alt="" height="100px" align="left"/>
<img src="images/mat.png" alt="" height="100px" align="right"/>
</header>
<br/><br/><br/><br/><br/>
MAT281
Aplicaciones de la Matemรกtica en la Ingenierรญa
Sebastiรกn Flores
https://www.github.com/usantamaria/mat281
Clase anterior
Clustering
* ยฟCรณmo se llamaba el algoritmo que vimos?
* ยฟCuรกndo funcionaba y cuรกndo fallaba?
ยฟQuรฉ veremos hoy?
Regresiรณn lineal.
ยฟPorquรฉ veremos ese contenido?
Porque regresiรณn lineal es universalmente utilizado, y la derivaciรณn del mรฉtodo nos entrega importantes consideraciones sobre su implementaciรณn, sus hipรณtesis y sus posibles extensiones.
Ejemplo 2D
Consideremos los siguientes datos:
link
End of explanation
%%bash
head data/x06.txt -n 40
%matplotlib gtk
import numpy as np
from matplotlib import pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
# Plot of data
data = np.loadtxt("data/x06.txt", skiprows=37)
x = data[:,1]
y = data[:,2]
z = data[:,3]
fig = plt.figure(figsize=(16,8))
ax = fig.add_subplot(111, projection='3d')
ax.scatter(x, y, z, 'rs')
plt.xlabel("age [days]")
plt.ylabel("Water Temperature [C]")
plt.title("Length")
plt.show()
import numpy as np
from matplotlib import pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
# Plot of data
data = np.loadtxt("data/x06.txt", skiprows=37)
x = np.log(data[:,1])
y = np.log(data[:,2])
z = np.log(data[:,3])
fig = plt.figure(figsize=(16,8))
ax = fig.add_subplot(111, projection='3d')
ax.scatter(x, y, z, 'rs')
plt.xlabel("age [days]")
plt.ylabel("Water Temperature [C]")
plt.title("Length")
plt.show()
Explanation: Ejemplo 3D
Consideremos los siguientes datos
End of explanation
import numpy as np
from numpy.linalg import norm
def lms_regression_slow(X, Y, theta, tol=1E-6):
converged = False
alpha = 0.01/len(Y)
while not converged:
gradient = 0.
for xiT, yi in zip(X,Y):
hi = np.dot(theta, xiT)
gradient += (hi - yi)*xiT.T
new_theta = theta - alpha * gradient
converged = norm(theta-new_theta) < tol * norm(theta)
theta = new_theta
return theta
def lms_regression_fast(X, Y, theta, tol=1E-6):
converged = False
alpha = 0.01/len(Y)
theta = theta.reshape(X.shape[1], 1)
A = np.dot(X.T,X)
b = np.dot(X.T, Y)
while not converged:
gradient = np.dot(A, theta) - b
new_theta = theta - alpha * gradient
converged = norm(theta-new_theta) < tol * norm(theta)
theta = new_theta
return theta
m = 1000
t = np.linspace(0,1,m)
x = 2 + 2*t
y = 300 + 100*t
X = np.array([np.ones(m), x]).T
Y = y.reshape(m,1)
theta_0 = np.array([[0.0,0.0]])
theta = lms_regression_fast(X, Y, theta_0)
print theta
Explanation: Los datos
Supondremos que tenemos $m$ datos.
Cada dato $x^{(i)}$, $i=1,\dots,$ $m$ tiene $n$ componentes,
$x^{(i)} = (x^{(i)}_1, ..., x^{(i)}_n)$.
Conocemos ademรกs el valor (etiqueta) asociado a $x^{(i)}$ que llamaremos $y^{(i)}$, $i=1,\dots, m$ .
Modelo
Nuestra hipรณtesis de modelo lineal puede escribirse como
$$\begin{aligned}
h_{\theta}(x) &= \theta_0 + \theta_1 x_1 + \theta_2 x_2 + ... + \theta_n x_n \
&= \begin{bmatrix}\theta_0 & \theta_1 & \theta_2 & \dots & \theta_n\end{bmatrix} \begin{bmatrix}1 \ x_1 \x_2 \ \vdots \ x_n\end{bmatrix} \
&= \theta^T \begin{bmatrix}1\x\end{bmatrix} = \begin{bmatrix}1 & x^T\end{bmatrix} \theta \end{aligned}$$
Modelo
Definiremos $x^{(i)}0 =1$, de modo que
$h{\theta}(x^{(i)}) = (x^{(i)})^T \theta $ y buscamos el vector de parรกmetros
$$\theta = \begin{bmatrix}\theta_0 \ \theta_1 \ \theta_2 \ \vdots \ \theta_n\end{bmatrix}$$
Modelo
Definamos las matrices
$$\begin{aligned}
Y &= \begin{bmatrix}y^{(1)} \ y^{(2)} \ \vdots \ y^{(m)}\end{bmatrix}\end{aligned}$$
y
$$\begin{aligned}
X =
\begin{bmatrix}
1 & x^{(1)}_1 & \dots & x^{(1)}_n \
1 & x^{(2)}_1 & \dots & x^{(2)}_n \
\vdots & \vdots & & \vdots \
1 & x^{(m)}_1 & \dots & x^{(m)}_n \
\end{bmatrix}
=
\begin{bmatrix}
- (x^{(1)})^T - \
- (x^{(2)})^T - \
\vdots \
- (x^{(m)})^T - \
\end{bmatrix}\end{aligned}$$
Modelo
Luego la evaluaciรณn
de todos los datos puede escribirse matricialmente como
$$\begin{aligned}
X \theta &=
\begin{bmatrix}
1 & x_1^{(1)} & ... & x_n^{(1)} \
\vdots & \vdots & & \vdots \
1 & x_1^{(m)} & ... & x_n^{(m)} \
\end{bmatrix}
\begin{bmatrix}\theta_0 \ \theta_1 \ \vdots \ \theta_n\end{bmatrix} \
& =
\begin{bmatrix}
1 \theta_0 + x^{(1)}_1 \theta_1 + ... + x^{(1)}_n \theta_n \
\vdots \
1 \theta_0 + x^{(m)}_1 \theta_1 + ... + x^{(m)}_n \theta_n \
\end{bmatrix} \
& =
\begin{bmatrix}
h(x^{(1)}) \
\vdots \
h(x^{(m)})
\end{bmatrix}\end{aligned}$$
Modelo
Nuestro problema es
encontrar un โbuenโ conjunto de valores $\theta$ de modo que
$$\begin{aligned}
\begin{bmatrix}
h(x^{(1)}) \
h(x^{(2)}) \
\vdots \
h(x^{(m)})
\end{bmatrix}
\approx
\begin{bmatrix}y^{(1)} \ y^{(2)} \ \vdots \ y^{(m)}\end{bmatrix}\end{aligned}$$
es decir, que $$X \theta \approx Y$$
Modelo
Para encontrar el mejor vector $\theta$ podrรญamos definir una funciรณn de costo $J(\theta)$ de la siguiente manera:
$$J(\theta) = \frac{1}{2} \sum_{i=1}^{m} \left( h_{\theta}(x^{(i)}) - y^{(i)}\right)^2$$
El mejor vector $\theta$ serรญa aquel que permite minimizar la norma 2 entre la predicciรณn y el valor real.
Aproximaciรณn Ingenieril
ยฟCรณmo podemos resolver el problema
en el menor nรบmero de pasos?
Deseamos resolver el sistema $$A \theta = b$$ con
$A \in \mathbb{R}^{m \times n}$ y $m > n$ (La matrix $A$ es skinny).
ยฟCรณmo resolvemos?
Aproximaciรณn Ingenieril
Bueno,
si $A \in \mathbb{R}^{m \times n}$, entonces
$A^T \in \mathbb{R}^{n \times m}$ y la multiplicaciรณn estรก bien definida
y obtengo un sistema lineal $n \times n$. $$(A^T A) \ \theta = A^T b$$ Si la
matriz $A^T A$ es invertible, el sistema se puede solucionar โsin mayor
reparoโ. $$\theta = (A^T A)^{-1} A^T b$$
Aproximaciรณn Ingenieril
En
nuestro caso, obtendrรญamos $$\theta = (X^T X)^{-1} X^T Y$$ Esta
respuesta, aunque correcta, no admite interpretaciones y no permite
generalizar a otros casos mรกs generales.
En particular...
ยฟQuรฉ relaciรณn tiene con la funciรณn de costo (no) utilizada?
ยฟQuรฉ pasa si $A^T A$ no es invertible?
Aproximaciรณn Machine Learning
ยฟCรณmo podemos obtener una
buena aproximaciรณn para $\theta$?
Queremos encontrar $\theta^*$ que minimice $J(\theta)$.
Basta con utilizar una buena rutina de optimizaciรณn para cumplir con
dicho objetivo.
En particular, una elecciรณn natural es tomar la direcciรณn de mayor
descenso, es decir, el mรฉtodo del mรกximo descenso (gradient descent).
$$\theta^{(n+1)} = \theta^{(n)} - \alpha \nabla_{\theta} J(\theta^{(n)})$$
donde $\alpha >0$ es la tasa de aprendizaje.
Aproximaciรณn Machine Learning
En
nuestro caso, puesto que tenemos
$$J(\theta) = \frac{1}{2} \sum_{i=1}^{m} \left( h_{\theta}(x^{(i)}) - y^{(i)}\right)^2$$
se tiene que
$$\begin{aligned}
\frac{\partial J(\theta)}{\partial \theta_k} &=
\frac{\partial }{\partial \theta_k} \frac{1}{2} \sum_{i=1}^{m} \left( h_{\theta}(x^{(i)}) - y^{(i)}\right)^2 \
&= \frac{1}{2} \sum_{i=1}^{m} 2 \left( h_{\theta}(x^{(i)}) - y^{(i)}\right) \frac{\partial h_{\theta}(x^{(i)})}{\partial \theta_k} \
&= \sum_{i=1}^{m} \left( h_{\theta}(x^{(i)}) - y^{(i)}\right) x^{(i)}_k\end{aligned}$$
Aproximaciรณn Machine Learning
Este
algoritmo se llama Least Mean Squares
$$\begin{aligned}
\theta^{(n+1)} & = \theta^{(n)} - \alpha \nabla_{\theta} J(\theta^{(n)}) \
\frac{\partial J(\theta)}{\partial \theta_k}
&= \sum_{i=1}^{m} \left( h_{\theta}(x^{(i)}) - y^{(i)}\right) x^{(i)}_k\end{aligned}$$
OBS: La elecciรณn de $\alpha$ es crucial para la convergencia. En
particular, $0.01/m$ funciona bien.
End of explanation
import numpy as np
from numpy.linalg import norm
def matrix_regression(X, Y, theta, tol=1E-6):
A = np.dot(X.T,X)
b = np.dot(X.T,Y)
sol = np.linalg.solve(A,b)
return sol.flatten()
m = 100
t = np.linspace(0,1,m)
x = 2 + 2*t
y = 300 + 100*t
X = np.array([np.ones(m), x]).T
Y = y.reshape(m,1)
theta_0 = np.array([[0.0,0.0]])
theta = matrix_regression(X, Y, theta_0)
print theta
Explanation: Aproximaciรณn Machine Learning
Least Mean Squares
Resultados:
Tarda del orden de 4 segundos para un problema ridรญculamente
pequeรฑo.
Precisiรณn no es tan buena como esperรกbamos $\theta=(199.45, 50.17)$
en vez de $(200, 50)$.
ยฟHay algo mejor que se pueda hacer?
Interpretaciรณn Matemรกtica
ยฟCรณmo podemos obtener una
justificaciรณn para la ecuaciรณn normal?
Necesitamos los siguientes ingredientes:
$$\begin{aligned}
\nabla_x &(x^T A x) = A x + A^T x \
\nabla_x &(b^T x) = b \end{aligned}$$
Interpretaciรณn Matemรกtica
Se tiene
$$\begin{aligned}
J(\theta)
&= \frac{1}{2} \sum_{i=1}^{m} \left( h_{\theta}(x^{(i)}) - y^{(i)}\right)^2 \
&= \frac{1}{2} \sum_{i=1}^{m} \left( h_{\theta}(x^{(i)}) - y^{(i)}\right) \left( h_{\theta}(x^{(i)}) - y^{(i)}\right) \
&= \frac{1}{2} \left( X \theta - Y \right)^T \left( X \theta - Y \right) \
&= \frac{1}{2} \left( \theta^T X^T - Y^T \right) \left( X \theta - Y \right) \
&= \frac{1}{2} \left( \theta^T X^T X \theta - \theta^T X^T Y - Y^T X \theta + Y^T Y \right) \
&= \frac{1}{2} \left( \theta^T X^T X \theta - 2 (Y^T X) \theta + Y^T Y \right)\end{aligned}$$
Interpretaciรณn Matemรกtica
Aplicando a cada uno de los tรฉrminos, obtenemos:
$$\begin{aligned}
\nabla_\theta ( \theta^T X^T X \theta ) &= X^T X \theta + (X^T X)^T \theta \
& = 2 X^T X \theta\end{aligned}$$
tambiรฉn se tiene
$$\begin{aligned}
\nabla_\theta ( Y^T X \theta ) &= (Y^T X) ^T\
&= X^T Y\end{aligned}$$
y por รบltimo
$$\begin{aligned}
\nabla_\theta ( Y^T Y ) = 0\end{aligned}$$
Interpretaciรณn Matemรกtica
Por lo tanto se tiene que
$$\begin{aligned}
\nabla_\theta J(\theta)
& = \nabla_\theta \frac{1}{2} \left( \theta^T X^T X \theta - 2 (Y^T X) \theta + Y^T Y \right) \
&= \frac{1}{2} ( 2 X^T X \theta - 2 X^T Y + 0 ) \
&= X^T X \theta - X^T Y \end{aligned}$$
Interpretaciรณn Matemรกtica
Esto significa que el problema $$\min_\theta J(\theta)$$ se resuelve al
hacer todas las derivadas parciales iguales a cero (ie, gradiente igual
a cero) $$\nabla_\theta J(\theta) = 0$$ lo cual en nuestro caso se
convierte convenientemente a la ecuaciรณn normal $$X^T X \theta = X^T Y$$
y se tiene $$\theta = (X^T X)^{-1} X^T Y$$
End of explanation
from sklearn import datasets
# The data
boston = datasets.load_boston()
X = boston.data
Y = boston.target
#print X
#print Y
# What's the data
print boston.DESCR
from sklearn import datasets
import numpy as np
#from mat281_code.lms import lms_regression as lms
from sklearn import linear_model
# The data
boston = datasets.load_boston()
X = boston.data
Y = boston.target
m = X.shape[0]
# Normalization of data
#X_train_aux = (X-X.min(axis=0))/(X.max(axis=0)-X.min(axis=0))
#Y_train_aux = (Y-Y.min())/(Y.max()-Y.min())
X_train_aux = (X-X.mean(axis=0))/X.std(axis=0)
Y_train_aux = (Y-Y.mean())/Y.std()
# Put in shape for normal equations
X_train = np.hstack([np.ones([m,1]), X_train_aux])
Y_train = Y_train_aux.reshape(m,1)
# Direct Solution
theta = np.linalg.solve(np.dot(X_train.T, X_train),
np.dot(X_train.T, Y_train))
print theta.flatten()
# sklearn solution
regr = linear_model.LinearRegression()
regr.fit(X_train_aux, Y_train_aux)
theta = regr.intercept_, regr.coef_
print theta
# LMS Solution - Tarda mucho
#theta0 = Y_train.mean()/X_train.mean(axis=0)/X.shape[1]
#theta = lms(X_train, Y_train, theta0)
#print theta
Explanation: Aproximaciรณn Matemรกtica
Resultados:
Tarda mucho menos que LMS para un problema pequeรฑo.
Precisiรณn es buena: $(200, 50)$.
Problema potencial es tener suficiente memoria RAM: $X^T X$ puede ser una matriz costosa de conseguir, aunque es sรณlo de tamaรฑo $n\times n$, con $n$ el tamaรฑo del vector $\theta$.
Interpretaciรณn Probabilรญstica
ยฟPorquรฉ la funciรณn de costo $J(\theta)$ en norma $2$ resulta adecuada?
Asumamos que outputs e inputs estรกn relacionados mediante
$$y^{(i)}= \theta^T x^{(i)}+ \varepsilon^{(i)}$$ donde
$\varepsilon^{(i)}$ es un error que captura efectos sin modelar o ruido
de mediciรณn.
Supongamos que los $\varepsilon^{(i)}$ se distribuyen
de manera idรฉntica e independientemente de acuerdo a una distribuciรณn
gausiana de media $0$ y varianza $\sigma^2$.
$$\varepsilon^{(i)}\sim \mathcal{N}(0, \sigma^2)$$
Interpretaciรณn Probabilรญstica
Cabe destacar que:
$\theta$ no es una variable aleatoria, es un parรกmetro
(desconocido).
$\varepsilon^{(i)}$, $x^{(i)}$ y $y^{(i)}$ son variables aleatorias.
$\varepsilon^{(i)}\sim \mathcal{N}(0, \sigma^2)$
$y^{(i)} \ | \ x^{(i)}; \theta \sim \mathcal{N}(\theta^T x^{(i)}, \sigma^2)$
pues $y^{(i)}= \theta^T x^{(i)}+ \varepsilon^{(i)}$
Interpretaciรณn Probabilรญstica
Tenemos entonces que
$$\mathbb{P}[\varepsilon^{(i)}] = \frac{1}{\sqrt{2\pi}\sigma} \exp\left( -\frac{(\varepsilon^{(i)})^2}{2\sigma^2} \right)$$
y por tanto
$$\mathbb{P}[y^{(i)}\ | \ x^{(i)}; \theta ] = \frac{1}{\sqrt{2\pi}\sigma} \exp\left( -\frac{(y^{(i)}- \theta^T x^{(i)})^2}{2\sigma^2} \right)$$
Interpretaciรณn Probabilรญstica
La funciรณn de verosimilitud $L(\theta)$ nos
permite entender que tan probable es encontrar los datos observados,
para una elecciรณn del parรกmetro $\theta$.
$$\begin{aligned}
L(\theta)
&= \prod_{i=1}^{m} \mathbb{P}[y^{(i)}| x^{(i)}; \theta ] \
&= \prod_{i=1}^{m} \frac{1}{\sqrt{2\pi}\sigma} \exp\left( -\frac{(y^{(i)}- \theta^T x^{(i)})^2}{2\sigma^2} \right) \
&= \frac{1}{(\sqrt{2\pi}\sigma)^{m}} \exp\left( - \sum_{i=1}^{m} \frac{(y^{(i)}- \theta^T x^{(i)})^2}{2\sigma^2} \right)\end{aligned}$$
Nos gustarรญa encontrar el parรกmetro $\theta$ que mรกs probablemente haya
generado los datos observados, es decir, el parรกmetro $\theta$ que
maximiza la funciรณn de verosimilitud.
Interpretaciรณn Probabilรญstica
Maximizar la verosimilitud $L(\theta)$ es lo mismo que maximizar la
funciรณn de log-verosimitud $l(\theta)=\log(L(\theta))$ puesto que
$\log$ es una funciรณn monรณtonamente creciente.
Maximizar $-f(\theta)$ es lo mismo que minimizar $f(\theta)$.
Interpretaciรณn Probabilรญstica
$$\begin{aligned}
l(\theta)
&= \log( L(\theta) \
&= \log\left[ \frac{1}{(\sqrt{2\pi}\sigma)^{m}} \exp\left( - \sum_{i=1}^{m} \frac{(y^{(i)}- \theta^T x^{(i)})^2}{2\sigma^2} \right) \right] \
&= - m \log (\sqrt{2\pi} \sigma) - \frac{1}{\sigma^2} \frac{1}{2} \sum_{i=1}^{m} \left( y^{(i)} - \theta^T x^{(i)}\right)^2 \end{aligned}$$
Es decir, la funciรณn costo $J(\theta)$ cuadrรกtica puede interpretarse
como un intento por encontrar el parรกmetro $\theta$ mรกs probable bajo la
hipรณtesis que el error en $y$ es gaussiano.
Aspectos Prรกcticos
ยฟCรณmo se aplica regresiรณn en realidad?
Al realizar regresiรณn, conviene normalizar/estandarizar los datos, es
decir transformarlos para que tengan una escala comรบn:
Utilizando la media y la desviaciรณn estรกndar
$$\frac{x_i-\overline{x_i}}{\sigma_{x_i}}$$
Utilizando mรญnimos y mรกximos
$$\frac{x_i-\min{x_i}}{\max{x_i} - \min{x_i}}$$
Aspectos Prรกcticos
ยฟPorquรฉ normalizar?
Los valores numรฉricos poseen escalas de magnitud distintas.
Las variables tienen distintos significados fรญsicos.
Algoritmos funcionan mejor.
Interpretaciรณn de resultados es mรกs sencilla.
Ejemplo en Boston House-price
End of explanation
%matplotlib inline
from sklearn import datasets
import matplotlib.pyplot as plt
iris = datasets.load_iris()
def plot(dataset, ax, i, j):
ax.scatter(dataset.data[:,i], dataset.data[:,j], c=dataset.target, s=50)
ax.set_xlabel(dataset.feature_names[i], fontsize=20)
ax.set_ylabel(dataset.feature_names[j], fontsize=20)
# row and column sharing
f, ((ax1, ax2), (ax3, ax4), (ax5,ax6)) = plt.subplots(3, 2, figsize=(16,16))
plot(iris, ax1, 0, 1)
plot(iris, ax2, 0, 2)
plot(iris, ax3, 1, 2)
plot(iris, ax4, 0, 3)
plot(iris, ax5, 1, 3)
plot(iris, ax6, 2, 3)
f.tight_layout()
plt.show()
Explanation: Anรกlisis del ejemplo
Las ecuaciones
normales y sklearn entregan el mismo resultado:
$$\begin{aligned}
\theta = (& 0.00, -0.10, 0.12, 0.02, 0.07, -0.22, \
& 0.29, 0.00, -0.34, 0.29, -0.23, -0.22, 0.09, -0.41 )\end{aligned}$$
Mientras que el algoritmo lms entrega
$$\begin{aligned}
\theta = (&0.00, -0.10, 0.12, 0.02, 0.07, -0.21, \
&0.29, 0.00, -0.34, 0.29, -0.23, -0.21, 0.09, -0.41 )\end{aligned}$$
Si las variables son
CRIM, ZN, INDUS, CHAS, NOX, RM, AGE, DIS, RAD, TAX, PTRATIO, B, LSTAT, MEDV
ยฟCuรกles variables tienen mรกs impacto en el precio de la vivienda?
Anรกlisis del ejemplo
$\theta_0=+0.00$.
$\theta_1 = -0.10$: CRIM, per capita crime rate by town.
$\theta_2 = +0.12$: ZN, proportion of residential land zoned for
lots over 25,000 sq.ft.
$\theta_3 = +0.02$: INDUS, proportion of non-retail business acres
per town
$\theta_4 = +0.07$: CHAS, Charles River dummy variable (= 1 if tract
bounds river; 0 otherwise)
$\theta_5 = -0.22$: NOX, nitric oxides concentration (parts per 10
million)
$\theta_6 = +0.29$: RM, average number of rooms per dwelling
$\theta_7 = +0.00$: AGE, proportion of owner-occupied units built
prior to 1940
$\theta_8 = -0.34$: DIS, weighted distances to five Boston
employment centres
$\theta_9 = +0.29$: RAD, index of accessibility to radial highways
$\theta_{10} = -0.23$: TAX, full-value property-tax rate per
\$10,000
$\theta_{11} = -0.22$: PTRATIO pupil-teacher ratio by town
$\theta_{12} = +0.09$: B, $1000(Bk - 0.63)^2$ where Bk is the
proportion of blacks by town
$\theta_{13} = -0.41$: LSTAT, % lower status of the population
Anรกlisis del ejemplo
ยฟEs posible graficar la soluciรณn?
ยฟCรณmo sabemos si el modelo es bueno?
ยฟCuรกl es el error de entrenamiento? ยฟCuรกl es el error de predicciรณn?
ยฟPodemos utilizar el modelo para realizar predicciones?
Aplicaciรณn a Iris Dataset
Recordemos el Iris Dataset.
End of explanation
# REVISAR
import numpy as np
from sklearn import datasets
from sklearn import linear_model
# Loading the data
iris = datasets.load_iris()
X = iris.data
iris_label = iris.target
# Apply linear regression to each model
predictions = {}
regr = linear_model.LinearRegression(fit_intercept=True, normalize=False)
for label in range(0,3):
X_train = X[iris_label==label][:,:-1]
Y_train = X[iris_label==label][:,-1]
regr.fit(X_train, Y_train) # Still must add the column of 1
theta = regr.intercept_, regr.coef_
print theta
Y_pred = regr.predict(X_train)
predictions[label] = Y_pred
print "Error", np.linalg.norm(Y_train-Y_pred,2)/len(Y_pred)
%matplotlib inline
from sklearn import datasets
import matplotlib.pyplot as plt
iris = datasets.load_iris()
def plot(dataset, ax, i, j):
colors = {0:"r", 1:"b", 2:"g"}
markers = {0:"s", 1:"o", 2:"<"}
for label in range(3):
x = dataset.data[:,i][dataset.target==label]
y = dataset.data[:,j][dataset.target==label]
ax.scatter(x, y, c=colors[label], marker=markers[label], s=50)
if j==3:
ax.scatter(x, predictions[label], c="w", marker=markers[label],
s=50)
ax.set_xlabel(dataset.feature_names[i], fontsize=20)
ax.set_ylabel(dataset.feature_names[j], fontsize=20)
# row and column sharing
f, (ax1, ax2, ax3) = plt.subplots(3, 1, figsize=(16,16))
plot(iris, ax1, 0, 3)
plot(iris, ax2, 1, 3)
plot(iris, ax3, 2, 3)
f.tight_layout()
plt.show()
Explanation: Aplicación a Iris Dataset
Busquemos aplicar una relación lineal a cada clase. Para ello utilizamos 3 atributos para predecir el cuarto.
End of explanation |
12,227 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1>Network Analysis with Python</h1>
<li>Networks are connected bi-directional graphs
<li>Nodes mark the entities in a network
<li>Edges mark the relationships in a network
<h2>Examples of networks</h2>
<li>Facebook friends
<li>Other social networks
<li>transportation networks
<li>Power grids
<li>Internet routers
<li>Activity networks
<li>Many others
<h2>Questions we're interested in</h2>
<li>Shortest path between two nodes
<li>Connectedness
<li>Centrality
<li>Clustering
<li>Communicability
<h1>networkx</h1>
<li>Python package for networks
<li>Nodes and edges can contain data
<li>Nodes can be (hashable!) python objects
<h3>Constructing a simple network</h3>
<b>Necessary imports</b>
Step1: <h1>Add labels to the nodes</h1>
Step2: <h4>Simple queries on the network</h4>
Step3: <h3>Iterating over a network</h3>
Step4: <h3>Types of graph</h3>
Step5: <h4>Shortest path</h4>
Step6: <h2>Weighted Edges</h2>
<li>Example
Step7: <h4>Now we can construct the distance matrix api url</h4>
Step8: <h4>Then let's get the distances and construct a graph</h4>
Step9: <h4>Functionalize this for reuse</h4>
Step10: <h4>Test the function by drawing it with node and edge labels</h4>
Step11: <h3>Yikes! Unreadable!</h3>
<li>Let's see what the edge weights are</li>
Step12: <h4>Let's make this readable</h4>
Step13: <h4>Now let's look at the graph</h4>
Step14: <h4>Let's remove a few edges (randomly)</h4>
Step15: <h4>And draw it again</h4>
Step16: <h4>Shortest path and shortest duration</h4>
Step17: <h2>Graph drawing options</h2>
<li>networkx uses matplotlib to draw graphs
<li>limited, but useful, functionalities
<h3>Let's take a look!</h3>
<b>Differentiating edges by weight</b>
Step18: <h4>highlight the shortest path</h4>
Step19: <b>Question</b> How would you remove edge labels from all but the shortest path?
<h4>Working with a network</h4>
<b>Given an address, generate a <i>sorted by distance</i> list of all other addresses
Step20: <b>Get all paths from one location to another</b>
Step21: <h2>Social networks</h2>
<br>
We will use the <a href="https
Step22: <h2>Too much data for this class so let's cut it down</h2>
Step23: <h3>Make the graph</h3>
Step24: <h4>Let's remove disconnected nodes</h4>
Step25: <h3>Start looking at different aspects of the graph</h3>
Step26: <h3>Graph components</h3>
<li>Let's see the number of connected components
<li>And then each connected component
Step27: <h4>Largest connected component subgraph</h4>
Step28: <h4>Smallest connected component</h4>
Step29: <h4>Max degree. The yelp user with the most friends</h4>
Step30: <h2>Network analysis algorithms</h2>
https
Step31: <h3>Node 0 has two neighbors
Step32: <h3>Closeness centrality is a measure of how near a node is to every other node in a network</h3>
<h3>The higher the closeness centrality, the more central a node is</h3>
<h3>Roughly, because it can get to more nodes in shorter jumps</h3>
Step33: <h3>Understanding closeness centrality</h3>
Step34: <li>n=4
<li>shortest paths from 2 (2-0
Step35: <h2>Betweenness centrality</h2>
<h3>measures of the extent to which a node is connected to other nodes that are not connected to each other. </h3>
<h3>Itโs a measure of the degree to which a node serves as a connector</h3>
<h3>Example
Step36: <h3>When the graph is fully connected, no shortest paths go through the node. So the numerator is zero</h3>
Step37: <h3>There are 12 shortest paths in total</h3>
<h3>Two go through 0 (1, 0, 2) and (2, 0, 1)</h3>
<h3> Betweeness centrality
Step38: <h3>Dispersion in fully connected graphs</h3>
<li>Eccentricity
Step39: <h2>Diameter</h2>
The longest shortest path in the graph
<h2>Periphery</h2>
The nodes with the longest shortest paths (the peripheral nodes)
Step40: <h3>Cliques</h3>
A clique is a subgraph in which every node is connected to every other node
Step41: <h3>Center | Python Code:
import networkx as nx
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
simple_network = nx.Graph()
nodes = [1,2,3,4,5,6,7,8]
edges = [(1,2),(2,3),(1,3),(4,5),(2,7),(1,9),(3,4),(4,5),(4,9),(5,6),(7,8),(8,9)]
simple_network.add_nodes_from(nodes)
simple_network.add_edges_from(edges)
nx.draw(simple_network)
Explanation: <h1>Network Analysis with Python</h1>
<li>Networks are connected bi-directional graphs
<li>Nodes mark the entities in a network
<li>Edges mark the relationships in a network
<h2>Examples of networks</h2>
<li>Facebook friends
<li>Other social networks
<li>transportation networks
<li>Power grids
<li>Internet routers
<li>Activity networks
<li>Many others
<h2>Questions we're interested in</h2>
<li>Shortest path between two nodes
<li>Connectedness
<li>Centrality
<li>Clustering
<li>Communicability
<h1>networkx</h1>
<li>Python package for networks
<li>Nodes and edges can contain data
<li>Nodes can be (hashable!) python objects
<h3>Constructing a simple network</h3>
<b>Necessary imports</b>
End of explanation
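Since the bullets above say nodes and edges can contain data, here is a small sketch (not from the original notebook) of attaching and reading attributes; it uses the networkx 1.x attribute-access style that the rest of this notebook follows.
g = nx.Graph()
g.add_node('Alice', role='student')          # node attribute
g.add_edge('Alice', 'Bob', weight=3)         # edge attribute
print(g.node['Alice']['role'])               # 'student'
print(g.get_edge_data('Alice', 'Bob'))       # {'weight': 3}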
pos=nx.spring_layout(simple_network) # positions for all nodes
# nodes
nx.draw_networkx_nodes(simple_network,pos,
node_color='r',
node_size=500,
alpha=0.8)
# edges
#nx.draw_networkx_edges(sub_graph,pos,width=1.0,alpha=0.5)
nx.draw_networkx_edges(simple_network,pos,
edgelist=edges,
width=8,alpha=0.5,edge_color='b')
node_name={}
for node in simple_network.nodes():
node_name[node]=str(node)
nx.draw_networkx_labels(simple_network,pos,node_name,font_size=16)
plt.axis('off')
plt.show() # display
Explanation: <h1>Add labels to the nodes</h1>
End of explanation
simple_network.has_edge(2,9)
#simple_network.has_node(2)
#simple_network.number_of_edges()
#simple_network.number_of_nodes()
#simple_network.order()
#len(simple_network)
Explanation: <h4>Simple queries on the network</h4>
End of explanation
for n in simple_network.nodes_iter():
print(n)
for a in simple_network.adjacency_iter():
print(a)
for e in simple_network.edges_iter():
print(e)
for d in simple_network.degree_iter():
print(d)
Explanation: <h3>Iterating over a network</h3>
End of explanation
G = nx.Graph()        # undirected simple graph
d = nx.DiGraph()      # directed simple graph
m = nx.MultiGraph()   # undirected with parallel edges
h = nx.MultiDiGraph() # directed with parallel edges
Explanation: <h3>Types of graph</h3>
End of explanation
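A quick sketch (not in the original notebook) of why the variants matter: in a DiGraph the edge direction is significant, and a MultiGraph keeps parallel edges.
dg = nx.DiGraph()
dg.add_edge('a', 'b')
print(dg.has_edge('a', 'b'))   # True
print(dg.has_edge('b', 'a'))   # False - the reverse edge was never added

mg = nx.MultiGraph()
mg.add_edge(1, 2)
mg.add_edge(1, 2)              # parallel edge is kept
print(mg.number_of_edges())    # 2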
print(nx.shortest_path(simple_network,6,8))
print(nx.shortest_path_length(simple_network,6,8))
Explanation: <h4>Shortest path</h4>
End of explanation
#Our geocoding data getter is useful here!
def get_json_data(response,country,types):
data = response.json()
result_list = list()
for result in data['results']:
if not country == 'ALL':
if not country in [x['long_name'] for x in result['address_components'] if 'country' in x['types']]:
continue
address = result['formatted_address']
lat = result['geometry']['location']['lat']
lng = result['geometry']['location']['lng']
if types:
result_list.append((address,lat,lng,result['types']))
else:
result_list.append((address,lat,lng))
return result_list
def get_geolocation_data(address_string,format="JSON",country="ALL",types=False):
format = format.lower()
address = '_'.join(address_string.split())
url = 'https://maps.googleapis.com/maps/api/geocode/%s?address=%s' %(format,address)
try:
import requests
response=requests.get(url)
if not response.status_code == 200: return None
func='get_'+format+'_data'
return globals()[func](response,country,types)
except:
return None
def get_lat_lon(address):
data = get_geolocation_data(address,format='JSON')
return str(data[0][1]) + ',' + str(data[0][2])
get_lat_lon('New York, NY')
Explanation: <h2>Weighted Edges</h2>
<li>Example: A network of travel times between locations
<h4>We can use Google Distance Matrix API to get travel times</h4>
<li>Uses addresses to construct a distance matrix
<li>Free version uses latitudes and longitudes
<li>We can find latitudes and longitudes using the function we wrote as homework
<h4>We'll add a get_lat_lon function to our geocoding function to return lat,lon in google's required format</h4>
End of explanation
addresses = [
"Columbia University, New York, NY",
"Amity Hall Uptown, Amsterdam Avenue, New York, NY",
"Ellington in the Park, Riverside Drive, New York, NY",
'Chaiwali, Lenox Avenue, New York, NY',
"Grant's Tomb, West 122nd Street, New York, NY",
'Pisticci, La Salle Street, New York, NY',
'Nicholas Roerich Museum, West 107th Street, New York, NY',
'Audubon Terrace, Broadway, New York, NY',
'Apollo Theater, New York, NY'
]
latlons=''
for address in addresses:
latlon=get_lat_lon(address)
latlons += latlon + '|'
print(latlons)
distance_url = 'https://maps.googleapis.com/maps/api/distancematrix/json?origins='
distance_url+=latlons
distance_url+='&destinations='
distance_url+=latlons
#Set the mode walking, driving, cycling
mode='walking'
distance_url+='&mode='+mode
print(distance_url)
Explanation: <h4>Now we can construct the distance matrix api url</h4>
End of explanation
import requests
data=requests.get(distance_url).json()
all_rows = data['rows']
address_graph=nx.Graph()
address_graph.add_nodes_from(addresses)
for i in range(len(all_rows)):
origin = addresses[i]
for j in range(len(all_rows[i]['elements'])):
duration = all_rows[i]['elements'][j]['duration']['value']
destination = addresses[j]
address_graph.add_edge(origin,destination,d=duration)
#print(origin,destination,duration)
nx.draw(address_graph)
nx.draw(address_graph)
Explanation: <h4>Then let's get the distances and construct a graph</h4>
End of explanation
def get_route_graph(address_list,mode='walking'):
latlons=''
    for address in address_list:
latlon=get_lat_lon(address)
latlons += latlon + '|'
distance_url = 'https://maps.googleapis.com/maps/api/distancematrix/json?origins='
distance_url+=latlons
distance_url+='&destinations='
distance_url+=latlons
    # mode ('walking', 'driving', 'bicycling', ...) comes from the function argument
distance_url+='&mode='+mode
import requests
data=requests.get(distance_url).json()
all_rows = data['rows']
address_graph = nx.Graph()
address_graph.add_nodes_from(addresses)
for i in range(len(all_rows)):
origin = addresses[i]
for j in range(len(all_rows[i]['elements'])):
if i==j:
continue
duration = all_rows[i]['elements'][j]['duration']['value']
destination = addresses[j]
address_graph.add_edge(origin,destination,d=duration)
return address_graph
address_graph = get_route_graph(addresses)
Explanation: <h4>Functionalize this for reuse</h4>
End of explanation
for edge in address_graph.edges():
print(edge,address_graph.get_edge_data(*edge))
for n in address_graph.edges_iter():
print(n)
address_graph = get_route_graph(addresses)
pos=nx.circular_layout(address_graph) # positions for all nodes
# nodes
nx.draw_networkx_nodes(address_graph,pos,
node_color='r',
node_size=2000,
alpha=0.001)
# edges
nx.draw_networkx_edges(address_graph,pos,edgelist=address_graph.edges(),width=8,alpha=0.5,edge_color='b')
nx.draw_networkx_edge_labels(address_graph,pos,font_size=10)
node_name={}
for node in address_graph.nodes():
node_name[node]=str(node)
nx.draw_networkx_labels(address_graph,pos,node_name,font_size=16)
plt.axis('off')
plt.show() # display
Explanation: <h4>Test the function by drawing it with node and edge labels</h4>
End of explanation
for edge in address_graph.edges():
print(edge,address_graph.get_edge_data(*edge))
Explanation: <h3>Yikes! Unreadable!</h3>
<li>Let's see what the edge weights are</li>
End of explanation
for edge in address_graph.edges():
duration = address_graph.get_edge_data(*edge)['d']
address_graph.get_edge_data(*edge)['d'] = int(duration/60)
print(address_graph.get_edge_data(*edge))
Explanation: <h4>Let's make this readable</h4>
End of explanation
pos=nx.circular_layout(address_graph) # positions for all nodes
fig=plt.figure(1,figsize=(12,12)) #Let's draw a big graph so that it is clearer
# nodes
nx.draw_networkx_nodes(address_graph,pos,
node_color='r',
node_size=2000,
alpha=0.001)
# edges
nx.draw_networkx_edges(address_graph,pos,edgelist=address_graph.edges(),width=8,alpha=0.5,edge_color='b')
nx.draw_networkx_edge_labels(address_graph,pos,font_size=10)
node_name={}
for node in address_graph.nodes():
node_name[node]=str(node)
nx.draw_networkx_labels(address_graph,pos,node_name,font_size=16)
#fig.axis('off')
fig.show() # display
def get_route_graph(address_list,mode='walking'):
latlons=''
    for address in address_list:
latlon=get_lat_lon(address)
latlons += latlon + '|'
distance_url = 'https://maps.googleapis.com/maps/api/distancematrix/json?origins='
distance_url+=latlons
distance_url+='&destinations='
distance_url+=latlons
    # mode ('walking', 'driving', 'bicycling', ...) comes from the function argument
distance_url+='&mode='+mode
import requests
data=requests.get(distance_url).json()
all_rows = data['rows']
address_graph = nx.Graph()
address_graph.add_nodes_from(addresses)
for i in range(len(all_rows)):
origin = addresses[i]
for j in range(len(all_rows[i]['elements'])):
if i==j:
continue
duration = all_rows[i]['elements'][j]['duration']['value']
destination = addresses[j]
address_graph.add_edge(origin,destination,d=int(duration/60))
return address_graph
address_graph = get_route_graph(addresses)
Explanation: <h4>Now let's look at the graph</h4>
End of explanation
for edge in address_graph.edges():
import random
r = random.random()
    if r < 0.75: # get rid of roughly 75% of the edges
address_graph.remove_edge(*edge)
Explanation: <h4>Let's remove a few edges (randomly)</h4>
End of explanation
pos=nx.circular_layout(address_graph) # positions for all nodes
plt.figure(1,figsize=(12,12)) #Let's draw a big graph so that it is clearer
# nodes
nx.draw_networkx_nodes(address_graph,pos,
node_color='r',
node_size=2000,
alpha=0.001)
# edges
nx.draw_networkx_edges(address_graph,pos,edgelist=address_graph.edges(),width=8,alpha=0.5,edge_color='b')
nx.draw_networkx_edge_labels(address_graph,pos,font_size=7)
node_name={}
for node in address_graph.nodes():
node_name[node]=str(node)
nx.draw_networkx_labels(address_graph,pos,node_name,font_size=16)
#fig.axis('off')
plt.show() # display
print(addresses)
Explanation: <h4>And draw it again</h4>
End of explanation
print(nx.shortest_path(address_graph,'Amity Hall Uptown, Amsterdam Avenue, New York, NY', 'Chaiwali, Lenox Avenue, New York, NY'))
print(nx.dijkstra_path(address_graph,'Amity Hall Uptown, Amsterdam Avenue, New York, NY', 'Chaiwali, Lenox Avenue, New York, NY'))
print(nx.dijkstra_path_length (address_graph,'Amity Hall Uptown, Amsterdam Avenue, New York, NY', 'Chaiwali, Lenox Avenue, New York, NY',weight='d'))
#[print(n1,n2,nx.shortest_path_length(n1,n2),nx.dijkstra_path_length(n1,n2,weight='d')) for n1 in address_graph.nodes() for n2 in address_graph.nodes()]
[print(n1,n2,
nx.shortest_path_length(address_graph,n1,n2),
nx.dijkstra_path_length(address_graph,n1,n2,weight='d'),
) for n1 in address_graph.nodes() for n2 in address_graph.nodes() if not n1 == n2]
for edge in address_graph.edges():
print(edge,address_graph.get_edge_data(*edge))
Explanation: <h4>Shortest path and shortest duration</h4>
End of explanation
#Divide edges into two groups based on weight
#Easily extendable to n-groups
elarge=[(u,v) for (u,v,d) in address_graph.edges(data=True) if d['d'] >5]
esmall=[(u,v) for (u,v,d) in address_graph.edges(data=True) if d['d'] <=5]
pos=nx.spring_layout(address_graph) # positions for all nodes
plt.figure(1,figsize=(12,12)) #Let's draw a big graph so that it is clearer
# nodes
nx.draw_networkx_nodes(address_graph,pos,node_size=700)
# edges. draw the larger weight edges in solid lines and smaller weight edges in dashed lines
nx.draw_networkx_edges(address_graph,pos,edgelist=elarge,
width=6)
nx.draw_networkx_edges(address_graph,pos,edgelist=esmall,
width=6,alpha=0.5,edge_color='b',style='dashed')
# labels
nx.draw_networkx_labels(address_graph,pos,font_size=20,font_family='sans-serif')
nx.draw_networkx_edge_labels(address_graph,pos,font_size=7)
plt.axis('off')
#plt.savefig("address_graph.png") # save as png if you need to use it in a report or web app
plt.show() # display
Explanation: <h2>Graph drawing options</h2>
<li>networkx uses matplotlib to draw graphs
<li>limited, but useful, functionalities
<h3>Let's take a look!</h3>
<b>Differentiating edges by weight</b>
End of explanation
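Another option worth knowing (a sketch, not from the original notebook): instead of bucketing edges into two groups, the edge weights themselves can be passed as a list of line widths.
pos = nx.spring_layout(address_graph)
widths = [d['d'] for (u, v, d) in address_graph.edges(data=True)]
plt.figure(figsize=(8, 8))
nx.draw_networkx_nodes(address_graph, pos, node_size=300)
nx.draw_networkx_edges(address_graph, pos, width=widths, alpha=0.5)
plt.axis('off')
plt.show()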
origin = 'Amity Hall Uptown, Amsterdam Avenue, New York, NY'
destination = 'Chaiwali, Lenox Avenue, New York, NY'
shortest_path = nx.dijkstra_path(address_graph,origin,destination)
shortest_path_edges = list()
for i in range(len(shortest_path)-1):
shortest_path_edges.append((shortest_path[i],shortest_path[i+1]))
shortest_path_edges.append((shortest_path[i+1],shortest_path[i]))
path_edges=list()
other_edges=list()
node_label_list = dict()
node_label_list = {n:'' for n in address_graph.nodes()}
for edge in address_graph.edges():
if edge in shortest_path_edges:
path_edges.append(edge)
node_label_list[edge[0]] = edge[0]
node_label_list[edge[1]] = edge[1]
else:
other_edges.append(edge)
pos=nx.spring_layout(address_graph) # positions for all nodes
fig=plt.figure(1,figsize=(12,12))
# nodes
nx.draw_networkx_nodes(address_graph,pos,node_size=700)
# edges. draw the larger weight edges in solid lines and smaller weight edges in dashed lines
nx.draw_networkx_edges(address_graph,pos,edgelist=path_edges,
width=6)
nx.draw_networkx_edges(address_graph,pos,edgelist=other_edges,
width=6,alpha=0.5,edge_color='b',style='dashed')
# labels
nx.draw_networkx_labels(address_graph,pos,font_size=20,font_family='sans-serif',labels=node_label_list)
nx.draw_networkx_edge_labels(address_graph,pos,font_size=7)
plt.axis('off')
#plt.savefig("address_graph.png") # save as png if you need to use it in a report or web app
plt.show() # display
Explanation: <h4>highlight the shortest path</h4>
End of explanation
location = 'Amity Hall Uptown, Amsterdam Avenue, New York, NY'
distance_list = list()
for node in address_graph.nodes():
if node == location:
continue
distance = nx.dijkstra_path_length(address_graph,location,node)
distance_list.append((node,distance))
from operator import itemgetter
print(sorted(distance_list,key=itemgetter(1)))
Explanation: <b>Question</b> How would you remove edge labels from all but the shortest path?
<h4>Working with a network</h4>
<b>Given an address, generate a <i>sorted by distance</i> list of all other addresses
End of explanation
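One possible answer to the question above (a sketch, assuming the pos layout and the path_edges list built in the shortest-path drawing cell are still in scope): pass draw_networkx_edge_labels only the labels for the shortest-path edges, so every other edge stays unlabeled.
# Label only the edges on the shortest path; combine with the node/edge drawing calls above.
path_edge_labels = {(u, v): address_graph[u][v]['d'] for (u, v) in path_edges}
nx.draw_networkx_edge_labels(address_graph, pos, edge_labels=path_edge_labels, font_size=10)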
list(nx.all_simple_paths(address_graph,'Amity Hall Uptown, Amsterdam Avenue, New York, NY','Chaiwali, Lenox Avenue, New York, NY'))
nx.all_simple_paths(address_graph,
'Amity Hall Uptown, Amsterdam Avenue, New York, NY',
'Chaiwali, Lenox Avenue, New York, NY')
Explanation: <b>Get all paths from one location to another</b>
End of explanation
import json
import datetime
datafile='yelp_academic_dataset_user.json'
user_id_count = 1
user_id_dict = dict()
with open(datafile,'r') as f:
for line in f:
data = json.loads(line)
user_id = data.get('user_id')
friends = data.get('friends')
try:
user_id_dict[user_id]
except:
user_id_dict[user_id] = user_id_count
user_id_count+=1
user_data=list()
friends_data=list()
with open(datafile,'r') as f:
count=0
for line in f:
data=json.loads(line)
user_id=user_id_dict[data.get('user_id')]
name=data.get('name')
review_count=data.get('review_count')
average_stars=data.get('average_stars')
yelping_since=datetime.datetime.strptime(data.get('yelping_since'),"%Y-%m").date()
fans=data.get('fans')
user_friends=data.get('friends')
for i in range(len(user_friends)):
user_friends[i] = user_id_dict[user_friends[i]]
user_data.append([user_id,name,review_count,average_stars,yelping_since,fans])
friends_data.append([user_id,user_friends])
count+=1
print(count)
friends_data[0:10]
Explanation: <h2>Social networks</h2>
<br>
We will use the <a href="https://www.yelp.com/dataset_challenge">Yelp database challenge</a><br>
Data on:
users,
businesses,
reviews,
tips (try the mushroom burger!),
check-in (special offers from yelp)
<h3>We'll use the data in the users file (yelp_academic_dataset_user.json)</h3>
<h4>Read the data from the data file and create several list variables to hold the data</h4>
<li>You could also use objects to store the data </li>
End of explanation
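As a quick illustration of the "use objects" bullet above, here is a hedged sketch with collections.namedtuple; the field names are my own labels for the six columns assembled into user_data in the cell above and are not part of the Yelp schema.
# A sketch of storing each record as a lightweight object instead of a plain list.
from collections import namedtuple
YelpUser = namedtuple('YelpUser', ['user_id', 'name', 'review_count',
                                   'average_stars', 'yelping_since', 'fans'])
user_objects = [YelpUser(*row) for row in user_data]
print(user_objects[0].name, user_objects[0].review_count)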
#Select a random(ish) list of nodes
friends_of_list = [1,5,15,100,2200,3700,13500,23800,45901,78643,112112,198034,267123,298078,301200,353216]
node_super_set = set(friends_of_list)
#Get a superset of these nodes - the friends they are connected to
for n in friends_of_list:
friends = friends_data[n-1][1]
node_super_set = node_super_set.union({f for f in friends})
node_super_list = list(node_super_set)
#Collect node data and edges for these nodes
node_data = dict()
edge_list = list()
for node in node_super_list:
node_data[node]=user_data[node-1]
friends = friends_data[node-1][1]
edges = [(node,e) for e in friends if e in node_super_list]
edge_list.extend(edges)
print(len(edge_list),len(node_super_list),len(node_data))
for e in edge_list:
if e[0] in node_super_list:
continue
if e[1] in node_super_list:
continue
print(e[0],e[1])
Explanation: <h2>Too much data for this class so let's cut it down</h2>
End of explanation
import networkx as nx
friend_graph=nx.Graph()
friend_graph.add_nodes_from(node_super_list)
friend_graph.add_edges_from(edge_list)
print(friend_graph.number_of_nodes(),friend_graph.number_of_edges())
#Querying the graph
len(friend_graph.neighbors(1))
nx.draw(friend_graph)
Explanation: <h3>Make the graph</h3>
End of explanation
count = 0
for n in friend_graph.nodes_iter():
if friend_graph.degree(n) == 1:
print(n)
nodes = friend_graph.nodes()
for node in nodes:
if friend_graph.degree(node) == 0:
friend_graph.remove_node(node)
pos=nx.spring_layout(friend_graph) # positions for all nodes
fig = plt.figure(1,figsize=(12,12))
#pos
# nodes
nx.draw_networkx_nodes(friend_graph,pos,
node_color='r',
node_size=500,
alpha=0.8)
# edges
nx.draw_networkx_edges(friend_graph,pos,width=1.0,alpha=0.5)
nx.draw_networkx_edges(friend_graph,pos,
width=8,alpha=0.5,edge_color='b')
node_name={}
for node in friend_graph.nodes():
node_name[node]=str(node)
nx.draw_networkx_labels(friend_graph,pos,node_name,font_size=16)
fig.show()
Explanation: <h4>Let's remove disconnected nodes</h4>
End of explanation
nx.shortest_path(friend_graph,100219,19671)
nx.shortest_path_length(friend_graph,167099,47622)
Explanation: <h3>Start looking at different aspects of the graph</h3>
End of explanation
print(len(list(nx.connected_components(friend_graph))))
for comp in nx.connected_components(friend_graph):
print(comp)
Explanation: <h3>Graph components</h3>
<li>Let's see the number of connected components
<li>And then each connected component
End of explanation
largest_size=0
largest_graph = None
for g in nx.connected_component_subgraphs(friend_graph):
if len(g) > largest_size:
largest_size = len(g)
largest_graph = g
nx.draw(largest_graph)
Explanation: <h4>Largest connected component subgraph</h4>
End of explanation
smallest_size=100000
smallest_graph = None
for g in nx.connected_component_subgraphs(friend_graph):
if len(g) < smallest_size:
smallest_size = len(g)
smallest_graph = g
nx.draw(smallest_graph)
#Find out node degrees in the graph
nx.degree(friend_graph)
Explanation: <h4>Smallest connected component</h4>
End of explanation
#Highest degree
print(max(nx.degree(friend_graph).values()))
#Node with highest degree value
degrees = nx.degree(friend_graph)
print(max(degrees,key=degrees.get))
Explanation: <h4>Max degree. The yelp user with the most friends</h4>
End of explanation
pos=nx.spring_layout(friend_graph) # positions for all nodes
fig = plt.figure(1,figsize=(12,12))
#pos
# nodes
nx.draw_networkx_nodes(friend_graph,pos,
node_color='r',
node_size=500,
alpha=0.8)
# edges
nx.draw_networkx_edges(friend_graph,pos,width=1.0,alpha=0.5)
nx.draw_networkx_edges(friend_graph,pos,
                       edgelist=friend_graph.edges(max(degrees, key=degrees.get)),  # highlight the most-connected user's edges
width=8,alpha=0.5,edge_color='b')
node_name={}
for node in friend_graph.nodes():
node_name[node]=str(node)
nx.draw_networkx_labels(friend_graph,pos,node_name,font_size=16)
fig.show()
nx.clustering(friend_graph)
nx.average_clustering(friend_graph)
G=nx.complete_graph(4)
nx.draw(G)
nx.clustering(G)
G.remove_edge(1,2)
pos=nx.spring_layout(G) # positions for all nodes
# nodes
nx.draw_networkx_nodes(G,pos,
node_color='r',
node_size=500,
alpha=0.8)
# edges
#nx.draw_networkx_edges(sub_graph,pos,width=1.0,alpha=0.5)
nx.draw_networkx_edges(G,pos,
edgelist=G.edges(),
width=8,alpha=0.5,edge_color='b')
node_name={}
for node in G.nodes():
node_name[node]=str(node)
nx.draw_networkx_labels(G,pos,node_name,font_size=16)
plt.axis('off')
plt.show() # display
nx.clustering(G)
Explanation: <h2>Network analysis algorithms</h2>
https://networkx.github.io/documentation/networkx-1.10/reference/algorithms.html
<h3>Clustering</h3>
Clustering is a measure of how closely knit the nodes in a graph are. We can measure the degree to which a node belongs to a cluster and the degree to which the graph is clustered
- Node clustering coefficient: A measure that shows the degree to which a node belongs to a cluster
- Graph clustering coefficient: A measure that shows the degree to which a graph is clustered
End of explanation
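To make the node clustering coefficient concrete, here is a hand computation for a single node, compared against networkx; a sketch that assumes friend_graph from the cells above and the list-returning networkx 1.x API used throughout this notebook.
# A sketch: clustering coefficient = edges that exist among a node's neighbors / possible edges.
from itertools import combinations
node = friend_graph.nodes()[0]
neighbors = friend_graph.neighbors(node)
k = len(neighbors)
actual = sum(1 for u, v in combinations(neighbors, 2) if friend_graph.has_edge(u, v))
possible = k * (k - 1) / 2.0
print(actual / possible if possible else 0.0, nx.clustering(friend_graph, node))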
from networkx.algorithms.centrality import closeness_centrality, communicability
Explanation: <h3>Node 0 has two neighbors: 1 and 2. Of the three possible edges, only two are actually present. So, its clustering coefficient is 2/3 or 0.667</h3>
<h2>Centrality and communicability</h2>
<b>Centrality</b> deals with identifying the most important nodes in a graph<p>
<b>Communicability</b> measures how easy it is to send a message from node i to node j
<li>closeness_centrality: (n-1)/sum(shortest path to all other nodes)
<li>betweenness_centrality: fraction of pair shortest paths that pass through node n
<li>degree centrality: fraction of nodes that n is connected to
<li>communicability: the sum of all walks from one node to every other node
End of explanation
type(closeness_centrality(friend_graph))
from collections import OrderedDict
cc = OrderedDict(sorted(
closeness_centrality(friend_graph).items(),
key = lambda x: x[1],
reverse = True))
cc
Explanation: <h3>Closeness centrality is a measure of how near a node is to every other node in a network</h3>
<h3>The higher the closeness centrality, the more central a node is</h3>
<h3>Roughly, because it can get to more nodes in shorter jumps</h3>
End of explanation
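A quick sanity check of the (n-1)/sum(shortest path lengths) formula, done by hand for one node and compared with networkx; the sketch uses largest_graph from the connected-components cells, since the formula is simplest when every node is reachable.
# A sketch verifying closeness centrality on a connected subgraph.
g = largest_graph
node = g.nodes()[0]
lengths = nx.shortest_path_length(g, source=node)
manual = (len(g) - 1) / float(sum(lengths.values()))
print(manual, nx.closeness_centrality(g, node))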
G=nx.complete_graph(4)
nx.closeness_centrality(G)
G.remove_edge(1,2)
pos=nx.spring_layout(G) # positions for all nodes
# nodes
nx.draw_networkx_nodes(G,pos,
node_color='r',
node_size=500,
alpha=0.8)
# edges
#nx.draw_networkx_edges(sub_graph,pos,width=1.0,alpha=0.5)
nx.draw_networkx_edges(G,pos,
edgelist=G.edges(),
width=8,alpha=0.5,edge_color='b')
node_name={}
for node in G.nodes():
node_name[node]=str(node)
nx.draw_networkx_labels(G,pos,node_name,font_size=16)
plt.axis('off')
plt.show() # display
nx.closeness_centrality(G)
Explanation: <h3>Understanding closeness centrality</h3>
End of explanation
G = nx.Graph([(0,1),(1,2),(1,5),(5,4),(2,4),(2,3),(4,3),(3,6)])
nx.communicability(G)
#Define a layout for the graph
pos=nx.spring_layout(G) # positions for all nodes
# draw the nodes: red, sized, transperancy
nx.draw_networkx_nodes(G,pos,
node_color='r',
node_size=500,
alpha=1)
# draw the edges
nx.draw_networkx_edges(G,pos,
width=8,alpha=0.5,edge_color='b')
node_name={}
for node in G.nodes():
node_name[node]=str(node)
nx.draw_networkx_labels(G,pos,node_name,font_size=16)
plt.axis('off')
plt.show() # display
# communicability is the sum of closed walks of different lengths between nodes.
#communicability(friend_graph) #Costly operation, we won't do this. Try it at home!
Explanation: <li>n=4
<li>shortest paths from 2 (2-0:1, 2-3:1, 2-1:2)
<li> (n-1)/sum = 3/4 = 0.75
<h2>Communicability</h2>
A measure of the degree to which one node can communicate with another<p>
Takes into account all paths between pairs of nodes<p>
The more paths, the higher the communicability
End of explanation
G=nx.complete_graph(4)
nx.betweenness_centrality(G)
Explanation: <h2>Betweenness centrality</h2>
<h3>measures of the extent to which a node is connected to other nodes that are not connected to each other. </h3>
<h3>It's a measure of the degree to which a node serves as a connector</h3>
<h3>Example: a traffic bottleneck</h3>
<h4>The number of shortest paths that go through node n/total number of shortest paths</h4>
End of explanation
G.remove_edge(1,2)
nx.betweenness_centrality(G)
#Define a layout for the graph
pos=nx.spring_layout(G) # positions for all nodes
# draw the nodes: red, sized, transperancy
nx.draw_networkx_nodes(G,pos,
node_color='r',
node_size=500,
alpha=1)
# draw the edges
nx.draw_networkx_edges(G,pos,
width=8,alpha=0.5,edge_color='b')
node_name={}
for node in G.nodes():
node_name[node]=str(node)
nx.draw_networkx_labels(G,pos,node_name,font_size=16)
plt.axis('off')
plt.show() # display
nx.all_pairs_shortest_path(G)
Explanation: <h3>When the graph is fully connected, no shortest paths go through the node. So the numerator is zero</h3>
End of explanation
nx.betweenness_centrality(friend_graph)
Explanation: <h3>There are 12 shortest paths in total</h3>
<h3>Two go through 0 (1, 0, 2) and (2, 0, 1)</h3>
<h3> Betweenness centrality: 2/12</h3>
End of explanation
G = nx.complete_graph(4)
nx.eccentricity(G)
G.remove_edge(1,2)
nx.eccentricity(G)
Explanation: <h3>Dispersion in fully connected graphs</h3>
<li>Eccentricity: the max distance from one node to all other nodes (least eccentric is more central)
<li>diameter: the max eccentricity of all nodes in a graph (the longest shortest path)
<li>periphery: the set of nodes with eccentricity = diameter
End of explanation
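These quantities can be checked directly from shortest-path lengths; a sketch using the 4-node graph G defined just above.
# A sketch: eccentricity of a node is the maximum shortest-path length from it.
manual_ecc = {n: max(nx.shortest_path_length(G, n).values()) for n in G.nodes()}
print(manual_ecc)
print(nx.eccentricity(G))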
nx.diameter(G)
nx.periphery(G)
nx.diameter(friend_graph)
nx.periphery(friend_graph)
G = nx.complete_graph(4)
print(nx.diameter(G))
print(nx.periphery(G))
G.remove_edge(1,2)
print(nx.diameter(G))
print(nx.periphery(G))
Explanation: <h2>Diameter</h2>
The longest shortest path in the graph
<h2>Periphery</h2>
The nodes with the longest shortest paths (the peripheral nodes)
End of explanation
from networkx.algorithms.clique import find_cliques, cliques_containing_node
for clique in find_cliques(friend_graph):
print(clique)
cliques_containing_node(friend_graph,2)
#nx.draw(nx.make_max_clique_graph(friend_graph))
Explanation: <h3>Cliques</h3>
A clique is a subgraph in which every node is connected to every other node
End of explanation
from networkx.algorithms.distance_measures import center
center(largest_graph)
Explanation: <h3>Center: The set of nodes that are the most central (they have the smallest distance to any other node)</h3>
The graph must be connected (otherwise some eccentricities would be infinite)
End of explanation |
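Equivalently, the center is the set of nodes whose eccentricity equals the radius (the minimum eccentricity); a sketch using largest_graph from above.
# A sketch: recover the center from eccentricities; it should agree with the center(...) call above.
ecc = nx.eccentricity(largest_graph)
radius = min(ecc.values())
print([n for n, e in ecc.items() if e == radius])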
12,228 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Mean reversion
By Evgenia "Jenny" Nitishinskaya and Delaney Granizo-Mackenzie
Notebook released under the Creative Commons Attribution 4.0 License.
Mean-reversion strategies are those relying on the assumption that a variable deviating far from its observed mean will tend to reverse direction and revert to the mean. We expect it to go down if it is unusually high, and go up if it is unusually low. Why would this be the case? One explanation is that the deviations we are observing are random fluctuations, which are 0 in expectation. In this notebook, we will just focus on how to build strategies that take advantage of mean reversion when it is observed.
Single-stock mean reversion
Mean reversion in the context of a stock price implies that periods of the price being far below the mean are followed by periods of the price going up, and vice versa. We can take advantage of this by buying long when the price is lower than expected, and selling short when the price is higher than expected. We can plot the price of a stock along with the mean of the prices up to each day to see whether the price reverts to the mean.
Step1: Note that since we are computing the running average, "reverting to the mean" does not necessarily mean going as high or as low as it did before.
In order to trade using this strategy, we need to quantify what it means for the price to be higher or lower than expected. It's useful to compute the z-score of the price on each day, which tells us how many standard deviations away from the mean a value is
Step2: The danger of applying mean reversion to a single stock is that it exposes us to the movement of the market and the success or failure of the individual company, among other factors. If there is a persistent trend affecting the price of the security, we will find ourselves consitently undervaluing (if the price is moving steadily upward) or overvaluing (if the price is falling) the asset. Below we discuss two strategies that mitigate this risk.
Mean reversion portfolio
Instead of taking the mean of the historical returns on an asset, we can look at the mean of the returns on all of the stocks in, say, the S&P 500. Hypothesizing that the worst-performing stocks last period will do better this period (that is, they are likely to be undervalued) and vice versa, we go long in stocks that performed poorly and short in stocks that performed well.
This approach has the advantage of being market-neutral, so that we do not treat stocks as undervalued just because the market as a whole is falling, or overvalued when the market is rising. Furthermore, by including a large number of securities in portfolio, we are likely to encounter many cases where our prediction is correct.
To construct a portfolio which takes advantage of mean reversion, we first select a universe, such as all S&P 500 stocks or the top-traded stocks on the NYSE. From this universe, we rebalance our portfolio every period (say, every week) by going short in the stocks in the bottom 20% of returns over the last period and long in the stocks in the top 20% of returns. If a stock is in neither of those quintiles, we do not include it in our portfolio.
We can construct a toy example using sector ETFs instead of a large basket of stocks
Step3: We hypothesize that the stocks which do well for the first week will regress after another month, while those which do poorly at first will appreciate in value.
Step4: The returns look like they could be anticorrelated, but what would have happened if we had followed the mean-reversion strategy when we examined the past week's returns?
Step5: An example trading algorithm implementing this strategy in detail can be found in the associated lecture materials.
Pairs trading
In pairs trading, the quantity we are examining is the distance between two securities, which we expect to revert back to its mean. For this to be a reasonable assumption, we need the two securities to be statistically <i>cointegrated</i>. In practice, two companies whose products are substitutes for each other are often cointegrated. That is, they generally move together due to shifts in the market and in their specific industry, and move little relative to each other.
How do we incorporate the prediction about their difference into our portfolio? Suppose we are looking at two securities X and Y. Then we go long in X and short in Y when the two are closer together than expected, and short in X and long in Y when the two are far apart. In this way we remain neutral to the market, industry, and other shifts that cause X and Y to move together, while making money on their difference reverting to the mean. We can quantify "closer than expected" as the difference having a z-score of less than -1, and "farther apart than expected" as a z-score greater than 1. This is easier to picture if X's price is higher than Y's, but the end result is the same in either case.
Using the coint function from statsmodels, let's check whether HP and Microsoft stock prices are cointegrated.
Step6: The p-value is low, so the two series are cointegrated. Next we need to find the mean of the difference. We'll compute the cumulative moving average - that is, the average of all the values up to each day - as though we were looking at the data every day without knowing the future.
Step7: In some cases, we may instead want our mean to refer only to the moving average, excluding data from too long ago. Below we can see the difference between the cumulative moving average and the 60-day running average.
Step8: From here our trading strategy is identical to that for a single security, where we replace the asset with the spread X-Y. When we short the spread, we buy Y and sell X, and vice versa for going long. We'll be using the CMA for the mean, but you can easily change it to see the difference. Keep in mind, however, that what works well with this data may not be suited for other situations, and each definition of the mean will sometimes outperform the other. | Python Code:
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
# Load the pricing data for a stock
start = '2012-01-01'
end = '2015-01-01'
pricing = get_pricing('MCD', fields='price', start_date=start, end_date=end)
# Compute the cumulative moving average of the price
mu = [pricing[:i].mean() for i in range(len(pricing))]
# Plot the price and the moving average
_, ax = plt.subplots()
ax.plot(pricing)
ticks = ax.get_xticks()
ax.set_xticklabels([pricing.index[i].date() for i in ticks[:-1]]) # Label x-axis with dates
ax.plot(mu);
Explanation: Mean reversion
By Evgenia "Jenny" Nitishinskaya and Delaney Granizo-Mackenzie
Notebook released under the Creative Commons Attribution 4.0 License.
Mean-reversion strategies are those relying on the assumption that a variable deviating far from its observed mean will tend to reverse direction and revert to the mean. We expect it to go down if it is unusually high, and go up if it is unusually low. Why would this be the case? One explanation is that the deviations we are observing are random fluctuations, which are 0 in expectation. In this notebook, we will just focus on how to build strategies that take advantage of mean reversion when it is observed.
Single-stock mean reversion
Mean reversion in the context of a stock price implies that periods of the price being far below the mean are followed by periods of the price going up, and vice versa. We can take advantage of this by buying long when the price is lower than expected, and selling short when the price is higher than expected. We can plot the price of a stock along with the mean of the prices up to each day to see whether the price reverts to the mean.
End of explanation
# Compute the z-scores for each day using the historical data up to that day
zscores = [(pricing[i] - mu[i]) / np.std(pricing[:i]) for i in range(len(pricing))]
# Start with no money and no positions
money = 0
count = 0
for i in range(len(pricing)):
# Sell short if the z-score is > 1
if zscores[i] > 1:
money += pricing[i]
count -= 1
# Buy long if the z-score is < 1
elif zscores[i] < -1:
money -= pricing[i]
count += 1
# Clear positions if the z-score between -.5 and .5
elif abs(zscores[i]) < 0.5:
money += count*pricing[i]
count = 0
print money
Explanation: Note that since we are computing the running average, "reverting to the mean" does not necessarily mean going as high or as low as it did before.
In order to trade using this strategy, we need to quantify what it means for the price to be higher or lower than expected. It's useful to compute the z-score of the price on each day, which tells us how many standard deviations away from the mean a value is:
$$ z = \frac{x - \mu}{\sigma} $$
where $x$ is the value, $\mu$ is the mean of the data set, and $\sigma$ is its standard deviation. So a price with a z-score $> 1$ is more than one standard deviation above the mean, and we will sell short when this happens. If the price on a day has a z-score $< -1$, we will buy long. If the price is within half a standard deviation of the mean, we will clear all positions.
End of explanation
# Fetch pricing data for 10 sector ETFs and plot their returns
assets = ['XLU', 'XLB', 'XLI', 'XLV', 'XLF', 'XLE', 'XLK', 'XLY', 'XLP', 'XBI']
data = get_pricing(assets, start_date='2015-01-01', end_date='2015-02-06').loc['price', :, :]
returns = data.pct_change()[1:]
returns.plot(figsize=(10,7), colors=['r', 'g', 'b', 'k', 'c', 'm', 'orange',
'chartreuse', 'slateblue', 'silver'])
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
plt.ylabel('Returns')
# Convert to numpy array to make manipulation easier
data = np.array(data);
Explanation: The danger of applying mean reversion to a single stock is that it exposes us to the movement of the market and the success or failure of the individual company, among other factors. If there is a persistent trend affecting the price of the security, we will find ourselves consitently undervaluing (if the price is moving steadily upward) or overvaluing (if the price is falling) the asset. Below we discuss two strategies that mitigate this risk.
Mean reversion portfolio
Instead of taking the mean of the historical returns on an asset, we can look at the mean of the returns on all of the stocks in, say, the S&P 500. Hypothesizing that the worst-performing stocks last period will do better this period (that is, they are likely to be undervalued) and vice versa, we go long in stocks that performed poorly and short in stocks that performed well.
This approach has the advantage of being market-neutral, so that we do not treat stocks as undervalued just because the market as a whole is falling, or overvalued when the market is rising. Furthermore, by including a large number of securities in portfolio, we are likely to encounter many cases where our prediction is correct.
To construct a portfolio which takes advantage of mean reversion, we first select a universe, such as all S&P 500 stocks or the top-traded stocks on the NYSE. From this universe, we rebalance our portfolio every period (say, every week) by going short in the stocks in the bottom 20% of returns over the last period and long in the stocks in the top 20% of returns. If a stock is in neither of those quintiles, we do not include it in our portfolio.
We can construct a toy example using sector ETFs instead of a large basket of stocks:
End of explanation
# For each security, take the return for the first week
wreturns = (data[4] - data[0])/data[0]
# Rank securities by return, with 0 being the lowest return
order = wreturns.argsort()
ranks = order.argsort()
# For each security, take the return for the month following the first week
# Normalization for the time period doesn't matter since we're only using the returns to rank them
mreturns = (data[-1] - data[5])/data[5]
order2 = mreturns.argsort()
ranks2 = order2.argsort()
# Plot the returns for the first week vs returns for the next month to visualize them
plt.scatter(wreturns, mreturns)
plt.xlabel('Returns for the first week')
plt.ylabel('Returns for the following month');
Explanation: We hypothesize that the stocks which do well for the first week will regress after another month, while those which do poorly at first will appreciate in value.
End of explanation
# Go long (by one share each) in the bottom 20% of securities and short in the top 20%
longs = np.array([int(x < 2)for x in ranks])
shorts = np.array([int(x > 7) for x in ranks])
print 'Going long in:', [assets[i] for i in range(len(assets)) if longs[i]]
print 'Going short in:', [assets[i] for i in range(len(assets)) if shorts[i]]
# Resolve all positions and calculate how much we would have earned
print 'Yield:', sum((data[-1] - data[4])*(longs - shorts))
Explanation: The returns look like they could be anticorrelated, but what would have happened if we had followed the mean-reversion strategy when we examined the past week's returns?
End of explanation
from statsmodels.tsa.stattools import coint
# Load pricing data for HP and Microsoft
X = get_pricing('MSFT', fields='price', start_date=start, end_date=end)
Y = get_pricing('HPQ', fields='price', start_date=start, end_date=end)
# Compute the p-value for the cointegration of the two series
_, pvalue, _ = coint(X,Y)
print pvalue
Explanation: An example trading algorithm implementing this strategy in detail can be found in the associated lecture materials.
Pairs trading
In pairs trading, the quantity we are examining is the distance between two securities, which we expect to revert back to its mean. For this to be a reasonable assumption, we need the two securities to be statistically <i>cointegrated</i>. In practice, two companies whose products are substitutes for each other are often cointegrated. That is, they generally move together due to shifts in the market and in their specific industry, and move little relative to each other.
How do we incorporate the prediction about their difference into our portfolio? Suppose we are looking at two securities X and Y. Then we go long in X and short in Y when the two are closer together than expected, and short in X and long in Y when the two are far apart. In this way we remain neutral to the market, industry, and other shifts that cause X and Y to move together, while making money on their difference reverting to the mean. We can quantify "closer than expected" as the difference having a z-score of less than -1, and "farther apart than expected" as a z-score greater than 1. This is easier to picture if X's price is higher than Y's, but the end result is the same in either case.
Using the coint function from statsmodels, let's check whether HP and Microsoft stock prices are cointegrated.
End of explanation
# Plot their difference and the cumulative moving average of their difference
diff = X - Y
mu = [diff[:i].mean() for i in range(len(diff))]
plt.plot(diff)
plt.plot(mu);
Explanation: The p-value is low, so the two series are cointegrated. Next we need to find the mean of the difference. We'll compute the cumulative moving average - that is, the average of all the values up to each day - as though we were looking at the data every day without knowing the future.
End of explanation
mu_60d = pd.rolling_mean(diff, window=60)
plt.plot(diff, label='X-Y')
plt.plot(mu, label='CMA')
plt.plot(mu_60d, label='60d MA')
plt.legend();
Explanation: In some cases, we may instead want our mean to refer only to the moving average, excluding data from too long ago. Below we can see the difference between the cumulative moving average and the 60-day running average.
End of explanation
# Compute the z-score of the difference on each day
zscores = [(diff[i] - mu[i]) / np.std(diff[:i]) for i in range(len(diff))]
# Start with no money and no positions
money = 0
count = 0
for i in range(len(diff)):
# Sell short if the z-score is > 1
if zscores[i] > 1:
money += diff[i]
count -= 1
# Buy long if the z-score is < 1
elif zscores[i] < -1:
money -= diff[i]
count += 1
# Clear positions if the z-score between -.5 and .5
elif abs(zscores[i]) < 0.5:
money += count*diff[i]
count = 0
print money
Explanation: From here our trading strategy is identical to that for a single security, where we replace the asset with the spread X-Y. When we short the spread, we buy Y and sell X, and vice versa for going long. We'll be using the CMA for the mean, but you can easily change it to see the difference. Keep in mind, however, that what works well with this data may not be suited for other situations, and each definition of the mean will sometimes outperform the other.
End of explanation |
12,229 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Landice
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Grid
4. Glaciers
5. Ice
6. Ice --> Mass Balance
7. Ice --> Mass Balance --> Basal
8. Ice --> Mass Balance --> Frontal
9. Ice --> Dynamics
1. Key Properties
Land ice key properties
1.1. Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Ice Albedo
Is Required
Step7: 1.4. Atmospheric Coupling Variables
Is Required
Step8: 1.5. Oceanic Coupling Variables
Is Required
Step9: 1.6. Prognostic Variables
Is Required
Step10: 2. Key Properties --> Software Properties
Software properties of land ice code
2.1. Repository
Is Required
Step11: 2.2. Code Version
Is Required
Step12: 2.3. Code Languages
Is Required
Step13: 3. Grid
Land ice grid
3.1. Overview
Is Required
Step14: 3.2. Adaptive Grid
Is Required
Step15: 3.3. Base Resolution
Is Required
Step16: 3.4. Resolution Limit
Is Required
Step17: 3.5. Projection
Is Required
Step18: 4. Glaciers
Land ice glaciers
4.1. Overview
Is Required
Step19: 4.2. Description
Is Required
Step20: 4.3. Dynamic Areal Extent
Is Required
Step21: 5. Ice
Ice sheet and ice shelf
5.1. Overview
Is Required
Step22: 5.2. Grounding Line Method
Is Required
Step23: 5.3. Ice Sheet
Is Required
Step24: 5.4. Ice Shelf
Is Required
Step25: 6. Ice --> Mass Balance
Description of the surface mass balance treatment
6.1. Surface Mass Balance
Is Required
Step26: 7. Ice --> Mass Balance --> Basal
Description of basal melting
7.1. Bedrock
Is Required
Step27: 7.2. Ocean
Is Required
Step28: 8. Ice --> Mass Balance --> Frontal
Description of calving/melting from the ice shelf front
8.1. Calving
Is Required
Step29: 8.2. Melting
Is Required
Step30: 9. Ice --> Dynamics
**
9.1. Description
Is Required
Step31: 9.2. Approximation
Is Required
Step32: 9.3. Adaptive Timestep
Is Required
Step33: 9.4. Timestep
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'hammoz-consortium', 'sandbox-1', 'landice')
Explanation: ES-DOC CMIP6 Model Properties - Landice
MIP Era: CMIP6
Institute: HAMMOZ-CONSORTIUM
Source ID: SANDBOX-1
Topic: Landice
Sub-Topics: Glaciers, Ice.
Properties: 30 (21 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:03
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Grid
4. Glaciers
5. Ice
6. Ice --> Mass Balance
7. Ice --> Mass Balance --> Basal
8. Ice --> Mass Balance --> Frontal
9. Ice --> Dynamics
1. Key Properties
Land ice key properties
1.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.ice_albedo')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "function of ice age"
# "function of ice density"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Ice Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify how ice albedo is modelled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Atmospheric Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the atmosphere and ice (e.g. orography, ice mass)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Oceanic Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the ocean and ice
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice velocity"
# "ice thickness"
# "ice temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which variables are prognostically calculated in the ice model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of land ice code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Grid
Land ice grid
3.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.2. Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is an adaptive grid being used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.base_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Base Resolution
Is Required: TRUE Type: FLOAT Cardinality: 1.1
The base resolution (in metres), before any adaptation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.resolution_limit')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Resolution Limit
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If an adaptive grid is being used, what is the limit of the resolution (in metres)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.projection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.5. Projection
Is Required: TRUE Type: STRING Cardinality: 1.1
The projection of the land ice grid (e.g. albers_equal_area)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Glaciers
Land ice glaciers
4.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of glaciers in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of glaciers, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 4.3. Dynamic Areal Extent
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does the model include a dynamic glacial extent?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Ice
Ice sheet and ice shelf
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the ice sheet and ice shelf in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.grounding_line_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grounding line prescribed"
# "flux prescribed (Schoof)"
# "fixed grid size"
# "moving grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 5.2. Grounding Line Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the technique used for modelling the grounding line in the ice sheet-ice shelf coupling
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_sheet')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.3. Ice Sheet
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice sheets simulated?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_shelf')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.4. Ice Shelf
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice shelves simulated?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Ice --> Mass Balance
Description of the surface mass balance treatment
6.1. Surface Mass Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how and where the surface mass balance (SMB) is calculated. Include the temporal coupling frequency from the atmosphere, whether or not a separate SMB model is used, and if so details of this model, such as its resolution
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Ice --> Mass Balance --> Basal
Description of basal melting
7.1. Bedrock
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over bedrock
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Ocean
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Ice --> Mass Balance --> Frontal
Description of calving/melting from the ice shelf front
8.1. Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of calving from the front of the ice shelf
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Melting
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of melting from the front of the ice shelf
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Ice --> Dynamics
**
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of ice sheet and ice shelf dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.approximation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SIA"
# "SAA"
# "full stokes"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.2. Approximation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Approximation type used in modelling ice dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 9.3. Adaptive Timestep
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there an adaptive time scheme for the ice scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.4. Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep (in seconds) of the ice scheme. If the timestep is adaptive, then state a representative timestep.
End of explanation |
12,230 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Input and output
Currently, the only supported approach for loading and saving ensembles in medusa is via pickle. pickle is the Python module that serializes and de-serializes Python objects (i.e. converts to/from a binary representation). This is an intentional design choice--as medusa matures, we will identify a feasible route for standardization through an extension to the Systems Biology Markup Language (SBML), which is the de facto standard for sharing genome-scale metabolic network reconstructions.
To load an ensemble, use the load function from the pickle module
Step1: To save an ensemble, you can pickle it with | Python Code:
import medusa
from pickle import load
with open("../medusa/test/data/Staphylococcus_aureus_ensemble.pickle", 'rb') as infile:
ensemble = load(infile)
Explanation: Input and output
Currently, the only supported approach for loading and saving ensembles in medusa is via pickle. pickle is the Python module that serializes and de-serializes Python objects (i.e. converts to/from a binary representation). This is an intentional design choice--as medusa matures, we will identify a feasible route for standardization through an extension to the Systems Biology Markup Language (SBML), which is the de facto standard for sharing genome-scale metabolic network reconstructions.
To load an ensemble, use the load function from the pickle module:
End of explanation
save_dir = ("../medusa/test/data/Staphylococcus_aureus_repickled.pickle")
ensemble.to_pickle(save_dir)
Explanation: To save an ensemble, you can pickle it with:
End of explanation |
12,231 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Getting a config parser
The pypmj-module uses a configuration file in which all information about the JCMsuite-installation, data storage, servers and so on are set. This makes pypmj very flexible, as you can generate as many configuration files as you like. Here, we show how to easily set up your configuration using the config_tools shipped with pypmj.
We first import the config_tools.
Step1: We can get a suitable config parser for convenient setting of our preferences.
Step2: This parser already contains some default values and the standard sections
Step3: We will go through the different sections and show which values can to be set.
Note
Step4: Storage
Set up a base folder into which all the simulation data should be stored. The SimulationSet class of pypmj offers a convenient way to organize your simulations inside this folder. You can also set the special value 'CWD', which will cause that current working directory will be used instead.
Step5: Data
To keep your projects in one place, you can set a global projects folder. If you initialize a JCMProject unsing the JCMProject-class of pypmj, you can then give the path to your project relative to this directory. pypmj will leave the contents if these folders untouched and copy the contents to a working directory. If you don't like to use a global folder, you can also pass absolute paths to JCMProject.
Step6: Note
Step7: JCMsuite
It is assumed that your installation(s) of JCMsuite are in a fixed directory, which is configured using the root key. That way, you can change the version of JCMsuite to use easily by only changing the directory name with the key dir. Some versions of JCMsuite provide different kernels, which can be set using the kay kernel.
Step8: Logging
For the logging, you can specify the logging level ('DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL', or 'NOTSET'), whether or not to write a log-file and if status mails should be send by the run_simusets_in_save_mode utility function. For the latter, you further need to configure the mail server used by smtplib.SMTP .
Step9: Adding servers
Finally, you can add one or more servers which can be used by the JCMdaemon. Have a look at the doc string to see the possible configurations
Step10: Minimally, the localhost needs to be added, because otherwise there will be no resources for the JCMdaemon. This is done by using 'localhost' as the hostname and your local username as the login
Step11: But you may have additional server power. Let's assume you have installed JCMsuite on a server called myserver which you can reach via ssh by typing ssh [email protected]. The directory into which your JCMsuite version(s) is(are) installed may be /path/on/server/to/your/jcm_installations. The JCMsuite directory name needs to be the same as configured in the section JCMsuite under key dir! You may further want to set a nickname to manage all your servers later more easily, e.g. myserver. Finally, you want to set 6 workers and 6 threads per worker as a default. Then just write
Step12: Note
Step13: Using the configuration file with pypmj
Using a specific configuration file is easily done by setting the environment variable 'PYPMJ_CONFIG_FILE'. If this is not set, pypmj will look for a config.cfg in the current working directory. Setting the environment variable can be done using the os module | Python Code:
import config_tools as ct
Explanation: Getting a config parser
The pypmj-module uses a configuration file in which all information about the JCMsuite-installation, data storage, servers and so on are set. This makes pypmj very flexible, as you can generate as many configuration files as you like. Here, we show how to easily set up your configuration using the config_tools shipped with pypmj.
We first import the config_tools.
End of explanation
config = ct.get_config_parser()
Explanation: We can get a suitable config parser for convenient setting of our preferences.
End of explanation
config.sections()
Explanation: This parser already contains some default values and the standard sections:
End of explanation
# config.set('User', 'email', 'your_address@your_provider.com')
Explanation: We will go through the different sections and show which values can to be set.
Note: If a configuration option is not set, a default value will be used by pypmj. So you only need to uncomment and set the options that you like.
Sections
User
Set your e-mail address here if you like to receive status e-mail.
End of explanation
# config.set('Storage', 'base', '/path/to/your/global/storage/folder')
Explanation: Storage
Set up a base folder into which all the simulation data should be stored. The SimulationSet class of pypmj offers a convenient way to organize your simulations inside this folder. You can also set the special value 'CWD', which will cause that current working directory will be used instead.
End of explanation
# config.set('Data', 'projects', 'project/collection/folder')
Explanation: Data
To keep your projects in one place, you can set a global projects folder. If you initialize a JCMProject unsing the JCMProject-class of pypmj, you can then give the path to your project relative to this directory. pypmj will leave the contents if these folders untouched and copy the contents to a working directory. If you don't like to use a global folder, you can also pass absolute paths to JCMProject.
End of explanation
# config.set('Data', 'refractiveIndexDatabase', '/path/to/your/RefractiveIndex/database')
Explanation: Note: Be sure that this path is set to the project-folder shipped with pypmj to successfully run the Using pypmj - the mie2D-project notebook.
If you are using the materials-extension of pypmj, a RefractiveIndex database is needed and the path is configured here. Please contact one of the maintainers of pypmj for info on such a database.
End of explanation
config.set('JCMsuite', 'root', '/path/to/your/parent/JCMsuite/install/dir')
config.set('JCMsuite', 'dir', 'JCMsuite_X_Y_Z') # <- this is simply the folder name
config.set('JCMsuite', 'kernel', 3)
Explanation: JCMsuite
It is assumed that your installation(s) of JCMsuite are in a fixed directory, which is configured using the root key. That way, you can change the version of JCMsuite to use easily by only changing the directory name with the key dir. Some versions of JCMsuite provide different kernels, which can be set using the kay kernel.
End of explanation
# config.set('Logging', 'level', 'INFO')
# config.set('Logging', 'write_logfile', True)
# config.set('Logging', 'log_directory', 'logs') # <- can be a relative or an absolute path
# config.set('Logging', 'log_filename', 'from_date')
# config.set('Logging', 'send_mail', True)
# config.set('Logging', 'mail_server', 'localhost')
Explanation: Logging
For the logging, you can specify the logging level ('DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL', or 'NOTSET'), whether or not to write a log-file and if status mails should be send by the run_simusets_in_save_mode utility function. For the latter, you further need to configure the mail server used by smtplib.SMTP .
End of explanation
ct.add_server?
Explanation: Adding servers
Finally, you can add one or more servers which can be used by the JCMdaemon. Have a look at the doc string to see the possible configurations:
End of explanation
# ct.add_server(config, 'localhost',
# multiplicity_default=1,
# n_threads_default=1)
Explanation: Minimally, the localhost needs to be added, because otherwise there will be no resources for the JCMdaemon. This is done by using 'localhost' as the hostname and your local username as the login:
End of explanation
# ct.add_server(config, 'myserver.something.com', 'YOUR_LOGIN',
# JCM_root='/path/on/server/to/your/jcm_installations',
# multiplicity_default=6,
# n_threads_default=6,
# nickname='myserver')
Explanation: But you may have additional server power. Let's assume you have installed JCMsuite on a server called myserver which you can reach via ssh by typing ssh [email protected]. The directory into which your JCMsuite version(s) is(are) installed may be /path/on/server/to/your/jcm_installations. The JCMsuite directory name needs to be the same as configured in the section JCMsuite under key dir! You may further want to set a nickname to manage all your servers later more easily, e.g. myserver. Finally, you want to set 6 workers and 6 threads per worker as a default. Then just write:
End of explanation
ct.write_config_file(config, 'config.cfg')
Explanation: Note: You will need a password-free login to these servers.
Saving the configuration file
So you are done and all that is left is saving the configuration to a config file:
End of explanation
import os
os.environ['PYPMJ_CONFIG_FILE'] = '/path/to/your/config.cfg'
Explanation: Using the configuration file with pypmj
Using a specific configuration file is easily done by setting the environment variable 'PYPMJ_CONFIG_FILE'. If this is not set, pypmj will look for a config.cfg in the current working directory. Setting the environment variable can be done using the os module:
End of explanation |
12,232 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Example
Step1: First start by seeing that there does exist a SparkContext object in the sc variable
Step2: Now let's load an RDD with some interesting data. We have the GDELT event data set on our VM as a tab-delimited text file. (Due to VM storage and compute power limitation, we only choose year 2001.)
We use a local file this time, the path is
Step3: Now we'll just confirm that all the Goldstein values are indeed between -10 and 10.
Count and print out the max and min value of the 31st field.
Here we compute the histogram. And print it out.
Plot the histogram.
We can also plot the number of events each day for the 10 countries that have the most events in the second half of year 2001.
First we can see the number of unique countries that are available. Note that we filter out events that don't list a country code.
Show the distinct country codes, the 8th field.
Here we convert each event into counts. Aggregate by country and day, for all events in the second half of 2001.
First, filter the raw events. Keep the events for the second half of 2001. Also filter out events that don't list a country code.
Count how many qualified events we have.
Transform the events into key-value pair, key is (countrycode (8th), date (2nd)), value is event count.
((code, date), count)
Show the first five.
Step4: Aggregate the events by country and transform the country_day_counts to (country, time, counts), where time and counts can be later used for drawing. Note the time and its corresponding count should be sorted according to time.
Show the first item.
Plot the figure, x axis is the time and y axis is the event count. Plot for the 10 countries with most events.
What's the big spike for the line above?
Try to see what's going on using reduce and max.
Looks like it was the day after September 11th. | Python Code:
import findspark
import os
findspark.init('/home/ubuntu/shortcourse/spark-1.5.1-bin-hadoop2.6')
from pyspark import SparkContext, SparkConf
conf = SparkConf().setAppName("pyspark-example").setMaster("local[2]")
sc = SparkContext(conf=conf)
Explanation: Example: Use pyspark to process GDELT event data
GDELT: Global Database of Events, Language, and Tone
http://www.gdeltproject.org/
Column Header: http://gdeltproject.org/data/lookups/CSV.header.historical.txt
CountryCode: http://gdeltproject.org/data/lookups/CAMEO.country.txt
More doc: http://gdeltproject.org/data.html#rawdatafiles
Prepare pyspark environment
End of explanation
print sc
Explanation: First start by seeing that there does exist a SparkContext object in the sc variable:
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
Explanation: Now let's load an RDD with some interesting data. We have the GDELT event data set on our VM as a tab-delimited text file. (Due to VM storage and compute power limitation, we only choose year 2001.)
We use a local file this time, the path is: '/home/ubuntu/shortcourse/data/gdelt'.
Please read the file, and map each line to a single word list.
Let's see what an object in the RDD looks like.
Take the first element from the created RDD.
Let's count the number of events we have.
We should see about 5 million events at our disposal.
The GDELT event data set collects geopolitical events that occur around the world. Each event is tagged with a Goldstein scale value that measures the potential for the event to destabilize the country. Let's compute and plot a histogram of the Goldstein scale values across all the events in the database. The Goldstein scale value is present in the 31st field.
First, let's make sure that plotting images are set to be displayed inline (see the IPython docs):
End of explanation
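The loading steps described above are left as an exercise in the original notebook; one possible sketch, using the local path quoted in the text and a tab split for the GDELT records.
# A sketch of the loading exercise, not the course's official solution.
events = sc.textFile('/home/ubuntu/shortcourse/data/gdelt').map(lambda line: line.split('\t'))
print events.first()
print events.count()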
# some helper functions to convert a date string to a float value indicating the time within the year, in seconds
from dateutil.parser import parse as parse_date
epoch = parse_date('20010101')
def td2s(td):
return (td.microseconds + (td.seconds + td.days * 24 * 3600) * 1000000) / 1e6
def day2unix(day):
return td2s(parse_date(day) - epoch)
Explanation: Now we'll just confirm that all the Goldstein values are indeed between -10 and 10.
Count and print out the max and min value of the 31st field.
Here we compute the histogram. And print it out.
Plot the histogram.
We can also plot the number of events each day for the 10 countries that have the most events in the second half of year 2001.
First we can see the number of unique countries that are available. Note that we filter out events that don't list a country code.
Show the distinct country codes (the 8th field).
Here we convert each event into counts. Aggregate by country and day, for all events in the second half of 2001.
First, filter the raw events. Keep the events for the second half of 2001. Also filter out events that don't list a country code.
Count how many qualified events we have.
Transform the events into key-value pair, key is (countrycode (8th), date (2nd)), value is event count.
((code, date), count)
Show the first five.
End of explanation
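A possible sketch of the filtering and counting described above (names are illustrative and assume the gdelt RDD from the previous sketch):
# keep events from the second half of 2001 that list a country code (8th field, index 7; date is the 2nd field, index 1)
filtered = gdelt.filter(lambda f: f[7] != '' and f[1] >= '20010701')
filtered.count()
# ((countrycode, date), count) pairs
country_day_counts = filtered.map(lambda f: ((f[7], f[1]), 1)).reduceByKey(lambda a, b: a + b)
country_day_counts.take(5)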
# stop the spark context
sc.stop()
Explanation: Aggregate the events by country and transform the country_day_counts to (country, time, counts), where time and counts can be later used for drawing. Note the time and its corresponding count should be sorted according to time.
Show the first item.
Plot the figure, x axis is the time and y axis is the event count. Plot for the 10 countries with most events.
What's the big spike for the line above?
Try to see what's going on using reduce and max.
Looks like it was the day after September 11th.
End of explanation |
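Continuing the sketch above, the per-country aggregation and the search for the spike could be written as follows (again illustrative, not the original solution):
# (country, [(date, count), ...]) sorted by date, ready for plotting
country_series = country_day_counts.map(lambda kv: (kv[0][0], (kv[0][1], kv[1]))).groupByKey().map(lambda kv: (kv[0], sorted(kv[1])))
country_series.first()
# largest single-day count over all countries; the date lands on 20010912
country_day_counts.map(lambda kv: (kv[1], kv[0])).max()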
12,233 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
There is a lot of room for feature engineering the 8 qualitative features, but we'll reserve it for later
Step1: From now we try a range of estimators and use GridSearch to iteratively tune their hyperparameters | Python Code:
#Drop quantitative features for which most samples take 0 or 1
for cols in quan:
if train_c[cols].mean() < 0.01 or train_c[cols].mean() > 0.99:
train_c.drop(cols, inplace=True, axis=1)
test_c.drop(cols, inplace=True, axis=1)
#For now we only use the quantitative features left to make predictions
quan_features = train_c.columns[8:-1]
from sklearn.metrics import r2_score
from sklearn.model_selection import GridSearchCV
import warnings
warnings.filterwarnings('ignore')
Explanation: There is a lot of room for feature engineering the 8 qualitative features, but we'll reserve it for later
End of explanation
from sklearn.linear_model import Ridge
ridge = Ridge()
ridge_cv = GridSearchCV(estimator=ridge, param_grid={'alpha':np.arange(1, 50, 1)}, cv=5)
ridge_cv.fit(train_c[quan_features], train_c.label)
ridge_cv.best_score_
from sklearn.linear_model import Lasso
lasso = Lasso()
lasso_cv = GridSearchCV(estimator=lasso, param_grid={'alpha':np.arange(0, 0.05, 0.005)}, cv=5)
lasso_cv.fit(train_c[quan_features], train_c.label)
lasso_cv.best_score_
from sklearn.ensemble import RandomForestRegressor
rf = RandomForestRegressor()
params = {'max_depth':np.arange(5,8),
'min_samples_split':np.arange(3, 6)}
rf_cv = GridSearchCV(estimator=rf, param_grid=params, cv=5)
rf_cv.fit(train_c[quan_features], train_c.label)
rf_cv.best_score_
from sklearn.linear_model import ElasticNet
en = ElasticNet()
params = {'alpha':np.arange(0.01, 0.05, 0.005),
'l1_ratio': np.arange(0.1, 0.9, 0.1)}
en_cv = GridSearchCV(estimator=en, param_grid=params, cv=5)
en_cv.fit(train_c[quan_features], train_c.label)
en_cv.best_score_
from mlxtend.regressor import StackingRegressor
from sklearn.linear_model import LinearRegression
lin=LinearRegression()
basic_regressors= [ridge_cv.best_estimator_, lasso_cv.best_estimator_,
rf_cv.best_estimator_, en_cv.best_estimator_]
stacker=StackingRegressor(regressors=basic_regressors, meta_regressor=lin)
stacker.fit(train_c[quan_features], train_c.label)
pred = stacker.predict(train_c[quan_features])
r2_score(train_c.label, pred)
import pandas as pd  # pd is used below; assumed already imported earlier in the notebook
result = pd.DataFrame()
result['ID']=test.ID
result['y']=stacker.predict(test_c[quan_features])
result.to_csv('./stackedprediction.csv', index=False)
Explanation: From now we try a range of estimators and use GridSearch to iteratively tune their hyperparameters
End of explanation |
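As a hedged addition (not part of the original notebook), the in-sample R^2 above can be complemented by a cross-validated estimate, since mlxtend's StackingRegressor follows the scikit-learn estimator API:
from sklearn.model_selection import cross_val_score
cv_scores = cross_val_score(stacker, train_c[quan_features], train_c.label, cv=5, scoring='r2')
print(cv_scores.mean(), cv_scores.std())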
12,234 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Simple Dendritic Gated Networks in numpy
This colab implements a Dendritic Gated Network (DGN) solving a regression (using quadratic loss) or a binary classification problem (using Bernoulli log loss).
See our paper titled "A rapid and efficient learning rule for biological neural circuits" for details of the DGN model.
Some implementation details
Step1: Choose classification or regression
Step2: Load dataset
Step5: DGN inference/update
Step6: Define architecture
Step7: Initialise weights and gating parameters
Step8: Train | Python Code:
# Copyright 2021 DeepMind Technologies Limited. All rights reserved.
#
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import numpy as np
from sklearn import datasets
from sklearn import preprocessing
from sklearn import model_selection
from typing import List, Optional
Explanation: Simple Dendritic Gated Networks in numpy
This colab implements a Dendritic Gated Network (DGN) solving a regression (using quadratic loss) or a binary classification problem (using Bernoulli log loss).
See our paper titled "A rapid and efficient learning rule for biological neural circuits" for details of the DGN model.
Some implementation details:
- We utilize sklearn.datasets.load_breast_cancer for binary classification and sklearn.datasets.load_diabetes for regression.
- This code is meant for educational purposes only. It is not optimized for high-performance, both in terms of computational efficiency and quality of fit.
- Network is trained on 80% of the dataset and tested on the rest. For classification, we report log loss (negative log likelihood) and accuracy (percentage of correctly identified labels). For regression, we report MSE expressed in units of target variance.
End of explanation
do_classification = True # if False, does regression
Explanation: Choose classification or regression
End of explanation
if do_classification:
features, targets = datasets.load_breast_cancer(return_X_y=True)
else:
features, targets = datasets.load_diabetes(return_X_y=True)
x_train, x_test, y_train, y_test = model_selection.train_test_split(
features, targets, test_size=0.2, random_state=0)
n_features = x_train.shape[-1]
# Input features are centered and scaled to unit variance:
feature_encoder = preprocessing.StandardScaler()
x_train = feature_encoder.fit_transform(x_train)
x_test = feature_encoder.transform(x_test)
if not do_classification:
# Continuous targets are centered and scaled to unit variance:
target_encoder = preprocessing.StandardScaler()
y_train = np.squeeze(target_encoder.fit_transform(y_train[:, np.newaxis]))
y_test = np.squeeze(target_encoder.transform(y_test[:, np.newaxis]))
Explanation: Load dataset
End of explanation
def step_square_loss(inputs: np.ndarray,
weights: List[np.ndarray],
hyperplanes: List[np.ndarray],
hyperplane_bias_magnitude: Optional[float] = 1.,
learning_rate: Optional[float] = 1e-5,
target: Optional[float] = None,
update: bool = False,
):
Implements a DGN inference/update using square loss.
r_in = inputs
side_info = np.hstack([hyperplane_bias_magnitude, inputs])
for w, h in zip(weights, hyperplanes): # loop over layers
r_in = np.hstack([1., r_in]) # add biases
gate_values = np.heaviside(h.dot(side_info), 0).astype(bool)
effective_weights = gate_values.dot(w).sum(axis=1)
r_out = effective_weights.dot(r_in)
if update:
grad = (r_out[:, None] - target) * r_in[None]
w -= learning_rate * gate_values[:, :, None] * grad[:, None]
r_in = r_out
loss = (target - r_out)**2 / 2
return r_out, loss
def sigmoid(x): # numerically stable sigmoid
return np.exp(-np.logaddexp(0, -x))
def inverse_sigmoid(x):
return np.log(x/(1-x))
def step_bernoulli(inputs: np.ndarray,
weights: List[np.ndarray],
hyperplanes: List[np.ndarray],
hyperplane_bias_magnitude: Optional[float] = 1.,
learning_rate: Optional[float] = 1e-5,
epsilon: float = 0.01,
target: Optional[float] = None,
update: bool = False,
):
Implements a DGN inference/update using Bernoulli log loss.
r_in = np.clip(sigmoid(inputs), epsilon, 1-epsilon)
side_info = np.hstack([hyperplane_bias_magnitude, inputs])
for w, h in zip(weights, hyperplanes): # loop over layers
r_in = np.hstack([sigmoid(1.), r_in]) # add biases
h_in = inverse_sigmoid(r_in)
gate_values = np.heaviside(h.dot(side_info), 0).astype(bool)
effective_weights = gate_values.dot(w).sum(axis=1)
h_out = effective_weights.dot(h_in)
r_out_unclipped = sigmoid(h_out)
r_out = np.clip(r_out_unclipped, epsilon, 1 - epsilon)
if update:
update_indicator = np.abs(target - r_out_unclipped) > epsilon
grad = (r_out[:, None] - target) * h_in[None] * update_indicator[:, None]
w -= learning_rate * gate_values[:, :, None] * grad[:, None]
r_in = r_out
loss = - (target * np.log(r_out) + (1 - target) * np.log(1 - r_out))
return r_out, loss
def forward_pass(step_fn, x, y, weights, hyperplanes, learning_rate, update):
losses, outputs = np.zeros(len(y)), np.zeros(len(y))
for i, (x_i, y_i) in enumerate(zip(x, y)):
outputs[i], losses[i] = step_fn(x_i, weights, hyperplanes, target=y_i,
learning_rate=learning_rate, update=update)
return np.mean(losses), outputs
Explanation: DGN inference/update
End of explanation
# number of neurons per layer, the last element must be 1
n_neurons = np.array([100, 10, 1])
n_branches = 20 # number of dendritic branches per neuron
Explanation: Define architecture
End of explanation
n_inputs = np.hstack([n_features + 1, n_neurons[:-1] + 1]) # 1 for the bias
dgn_weights = [np.zeros((n_neuron, n_branches, n_input))
for n_neuron, n_input in zip(n_neurons, n_inputs)]
# Fixing random seed for reproducibility:
np.random.seed(12345)
dgn_hyperplanes = [
np.random.normal(0, 1, size=(n_neuron, n_branches, n_features + 1))
for n_neuron in n_neurons]
# By default, the weight parameters are drawn from a normalised Gaussian:
dgn_hyperplanes = [
h_ / np.linalg.norm(h_[:, :, :-1], axis=(1, 2))[:, None, None]
for h_ in dgn_hyperplanes]
Explanation: Initialise weights and gating parameters
End of explanation
if do_classification:
eta = 1e-4
n_epochs = 3
step = step_bernoulli
else:
eta = 1e-5
n_epochs = 10
step = step_square_loss
if do_classification:
step = step_bernoulli
else:
step = step_square_loss
print('Training on {} problem for {} epochs with learning rate {}.'.format(
['regression', 'classification'][do_classification], n_epochs, eta))
print('This may take a minute. Please be patient...')
for epoch in range(0, n_epochs + 1):
train_loss, train_pred = forward_pass(
step, x_train, y_train, dgn_weights,
dgn_hyperplanes, eta, update=(epoch > 0))
test_loss, test_pred = forward_pass(
step, x_test, y_test, dgn_weights,
dgn_hyperplanes, eta, update=False)
to_print = 'epoch: {}, test loss: {:.3f} (train: {:.3f})'.format(
epoch, test_loss, train_loss)
if do_classification:
accuracy_train = np.mean(np.round(train_pred) == y_train)
accuracy = np.mean(np.round(test_pred) == y_test)
to_print += ', test accuracy: {:.3f} (train: {:.3f})'.format(
accuracy, accuracy_train)
print(to_print)
Explanation: Train
End of explanation |
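A small sanity check, not part of the original colab: run a single trained-network prediction on one held-out example (this assumes the classification branch, where step_bernoulli returns a probability and a loss):
pred, _ = step_bernoulli(x_test[0], dgn_weights, dgn_hyperplanes, target=y_test[0], update=False)
print('predicted probability:', pred, 'true label:', y_test[0])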
12,235 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
EfficientNetV2 Tutorial
Step1: 0.2 View graph in TensorBoard
Step2: 1. inference
Step3: 2. Finetune EfficientNetV2 on CIFAR10. | Python Code:
%%capture
#@title
!pip install tensorflow_addons
import os
import sys
import tensorflow.compat.v1 as tf
# Download source code.
if "efficientnetv2" not in os.getcwd():
!git clone --depth 1 https://github.com/google/automl
os.chdir('automl/efficientnetv2')
sys.path.append('.')
else:
!git pull
def download(m):
if m not in os.listdir():
!wget https://storage.googleapis.com/cloud-tpu-checkpoints/efficientnet/v2/{m}.tgz
!tar zxf {m}.tgz
ckpt_path = os.path.join(os.getcwd(), m)
return ckpt_path
Explanation: EfficientNetV2 Tutorial: inference, eval, and training
<table align="left"><td>
<a target="_blank" href="https://github.com/google/automl/blob/master/efficientnetv2/tutorial.ipynb">
<img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on github
</a>
</td><td>
<a target="_blank" href="https://colab.sandbox.google.com/github/google/automl/blob/master/efficientnetv2/tutorial.ipynb">
<img width=32px src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td></table>
0. Install and view graph.
0.1 Install package and download source code/image.
End of explanation
MODEL = 'efficientnetv2-b0' #@param
import effnetv2_model
with tf.compat.v1.Graph().as_default():
model = effnetv2_model.EffNetV2Model(model_name=MODEL)
_ = model(tf.ones([1, 224, 224, 3]), training=False)
tf.io.gfile.mkdir('tb')
train_writer = tf.summary.FileWriter('tb')
train_writer.add_graph(tf.get_default_graph())
train_writer.flush()
%load_ext tensorboard
%tensorboard --logdir tb
Explanation: 0.2 View graph in TensorBoard
End of explanation
MODEL = 'efficientnetv2-b0' #@param
# Download checkpoint.
ckpt_path = download(MODEL)
if tf.io.gfile.isdir(ckpt_path):
ckpt_path = tf.train.latest_checkpoint(ckpt_path)
# Download label map file
!wget https://storage.googleapis.com/cloud-tpu-checkpoints/efficientnet/eval_data/labels_map.txt -O labels_map.txt
labels_map = 'labels_map.txt'
# Download images
image_file = 'panda.jpg'
!wget https://upload.wikimedia.org/wikipedia/commons/f/fe/Giant_Panda_in_Beijing_Zoo_1.JPG -O {image_file}
# Build model
tf.keras.backend.clear_session()
model = effnetv2_model.EffNetV2Model(model_name=MODEL)
_ = model(tf.ones([1, 224, 224, 3]), training=False)
model.load_weights(ckpt_path)
cfg = model.cfg
# Run inference for a given image
import preprocessing
image = tf.io.read_file(image_file)
image = preprocessing.preprocess_image(
image, cfg.eval.isize, is_training=False, augname=cfg.data.augname)
logits = model(tf.expand_dims(image, 0), False)
# Output classes and probability
pred = tf.keras.layers.Softmax()(logits)
idx = tf.argsort(logits[0])[::-1][:5].numpy()
import ast
classes = ast.literal_eval(open(labels_map, "r").read())
for i, id in enumerate(idx):
print(f'top {i+1} ({pred[0][id]*100:.1f}%): {classes[id]} ')
from IPython import display
display.display(display.Image(image_file))
Explanation: 1. inference
End of explanation
!python main_tf2.py --mode=traineval --model_name=efficientnetv2-b0 --dataset_cfg=cifar10Ft --model_dir={MODEL}_finetune --hparam_str="train.ft_init_ckpt={MODEL},runtime.strategy=gpus,train.batch_size=64"
Explanation: 2. Finetune EfficientNetV2 on CIFAR10.
End of explanation |
12,236 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<div align="right">Python [conda env
Step1: <a id="globals" name="globals"></a>
Using globals() and .items() to get Names
Step2: <a id="__name__" name="__name__"></a>
name of Methods
Step3: <a id="loop_4_values" name="loop_4_values"></a>
Looping over variables in globals() to get Values | Python Code:
from dill.source import getname # run this cell first before any cells below
peg1 = [1,2]
peg2 = [3]
peg3 = [5,4]
# this example shows what at first would appear to be unexpected behavior
# it illustrates some concepts though later cells of this notebook show how to effectively use dill.getname
def move_from(source, target):
# this did not work at all:
print("Does not work:\n Move %d from %s to %s next.\n" %(source[-1], getname(source), getname(target)))
print("Resolves to stored Value, not the names:")
print("Move %d from %s to %s next." %(source[-1], getname(*source), getname(*target)))
move_from(peg1, peg3)
print(getname(peg1)) # note how no value is returned for variables
# but the name is returned for functions
getname(move_from)
def test_another_way_fun(fun, x, y):
fun(x, y)
return getname(fun)
test_another_way_fun(move_from, peg1, peg2) # only move_from() resolves to its original name
Explanation: <div align="right">Python [conda env:PY27_Test]</div>
What if you want to use the name of things in your code to do something? The most common use case would be to pass a value into a function and then have that function ouput the name of the variable containing the original value as well as the desired output. Additionally this could allow us to track function calls to other functions by outputting the names of what function got called when it gets used.
Much of the content in this notebook was proposed or inspired on this posting thread: Stack Overflow Post On this Topic
TOC
Tests with dill library - getname()
Using globals() and .items() to get Names
Simplest Answer to get Names of Methods
Looping over variables in globals() to get Values
<a id="getname" name="getname"></a>
Tests with dill library - getname()
End of explanation
### try these lines after some variables and functions are defined:
# globals()
# globals().items()
### result is long and rather messy ...
# this code is from the stack overflow post: http://stackoverflow.com/a/1538399/7525365
def variable_for_value(value):
for n,v in globals().items():
if v == value:
return n
return None
variable_for_value(peg1)
variable_for_value(move_from)
print(variable_for_value(peg1[0])) # cannot pass in an index or pointer and get the name
print(variable_for_value(*peg2)) # None is returned
# this is probably because it is treating the above literally as if you were asking for
# thing named '*peg2' or thing named peg1[0]
# thing doesn't exist? It does this:
x = variable_for_value('does_not_exist') # to even get nonexistent thing to pass in have to wrap in quotes and treat as string
# otherwise, NameError is triggered before we can see how it handles it
print(x)
def move_from(source, target):
print("Move %d from %s to %s next." %(source[-1], getname(*source), getname(*target)))
def test_another_way_fun(fun, x, y):
fun(x, y)
lst = [getname(fun), variable_for_value(x), variable_for_value(y)]
return lst
test_another_way_fun(move_from, peg1, peg3)
# using just variable_for_value
def move_from(source, target):
print("Move %d from %s to %s next." %(source[-1], getname(*source), getname(*target)))
def test_another_way_fun(fun, x, y):
fun(x, y)
lst = [variable_for_value(fun), variable_for_value(x), variable_for_value(y)]
return lst
test_another_way_fun(move_from, peg1, peg3)
'''Odd symptom to be aware of ... in earlier tests, somehow when re-running one of the cells above,
the last value returned as "_" instead of its true name. Despite numerous edits to code in this cell
to attempt to replicate the symptom, it does not seem to recur. It is not known if this is a random glitch
or something that may occur again under the right conditions. '''
def test_another_way_fun_multiTest(fun, x, y):
fun(x, y)
lst = [getname(fun), variable_for_value(fun), variable_for_value(x), variable_for_value(y)]
lst2 = [getname(fun), variable_for_value(fun), variable_for_value(x), variable_for_value(y)]
lst3 = [lst, lst2]
return lst3
test_another_way_fun_multiTest(move_from, peg1, peg3)
Explanation: <a id="globals" name="globals"></a>
Using globals() and .items() to get Names
End of explanation
# shortcut when you know that it is a method being passed in:
def pass_in_someMethod(fun):
return fun.__name__
pass_in_someMethod(test_another_way_fun_multiTest)
# can't do this with variables (.__name__ attribute is only on methods for built-ins of the language) ...
try:
pass_in_someMethod(peg1)
except Exception as ee:
print(type(ee))
print(ee)
# in action ... finding out what got passed in to functions:
def variable_for_value2(value):
# a copy of value_for_variable to test something
for n,v in globals().items():
if v == value:
return n
return None
def test_functions(fun, *args):
# this function assumes that *args is either:
# * a single function or variable being passed in
# * 2 variables that form the parameters
print("Function call:\n" + ("-"*32))
# print(len(args))
if len(args) == 1:
if str(type(*args)) == "<type 'function'>":
str_args = str(*args)
else:
str_args = variable_for_value(*args)
else:
str_args = variable_for_value(args[0]) + ", " + variable_for_value(args[1])
print(fun.__name__ + "(" + str_args + ")" + "\n" + ("-"*32))
print("Return Value: ")
print(fun(*args))
# For use in testing some functions:
def justSayit(addTxt):
rtnVal= "Say it " + addTxt + "."
return rtnVal
txt = "with love"
funList = [pass_in_someMethod, move_from, variable_for_value, justSayit]
# 1 arg 2 args 1 arg 1 args
funList2 = [pass_in_someMethod, variable_for_value2, variable_for_value ]
# funList2: is list of functions that can all return the name of a function passed into them
# each is given the same function to evaluate
# one unexpected symptom: when the function looks up its own name, it returns the fn argument that did it
# in all other tests, the original function name passed in to the test function is returned
print("Output below is function call first, then return of function (both functions return names in this case):")
for fn in funList2:
# in real world: loop like this would use same type and number of input for each fun
test_functions(fn, variable_for_value)
print("")
# more tests using funList which has some different things to test with
print("Ouput below is function call first, then return of function:")
print("#"*32)
test_functions(funList[0], test_another_way_fun_multiTest)
print("")
test_functions(funList[1], peg1, peg3)
print("")
test_functions(funList[2], peg2)
print("")
test_functions(funList[3], txt)
print("")
# turning some of this code into an object
class GetDeclaredName(object):
'''Get Name of Variable or Function Passed Into Object. Use inside methods to identify what got passed in via args.'''
def name_of_declaredEement(self, value):
''' name_of_declaredElement -->\n\nreturns name of declared element (variable or function) passed into it.'''
# code modified from this Stack Overflow post: http://stackoverflow.com/a/1538399/7525365
for n,v in globals().items():
if v == value:
return n
return None
## idea for future development: add in test_function that can take any combination of arguments
gdn = GetDeclaredName()
GetDeclaredName.name_of_declaredEement(gdn, variable_for_value) # Python 2.7 and 3.6 compliant syntax
animal = "dog"
GetDeclaredName.name_of_declaredEement(gdn, animal) # Python 2.7 and 3.6 compliant syntax
Explanation: <a id="__name__" name="__name__"></a>
name of Methods
End of explanation
# getting the value of variables by their name on globals()
x = peg1
y = peg2
z = peg3
my_list = ["x", "y", "z"] # x, y, z have been previously defined
for name in my_list:
print("handling variable %s" %name)
bla = globals()[name] # accessing the value at the index for name
print("bla: %s" %bla)
Explanation: <a id="loop_4_values" name="loop_4_values"></a>
Looping over variables in globals() to get Values
End of explanation |
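A hedged alternative that is not in the original notebook: the standard-library inspect module can look up names in the caller's namespace, which avoids relying on globals() inside the helper:
import inspect
def names_for_value(value):
    # hypothetical helper: return every name in the caller's namespace bound to this exact object
    caller_locals = inspect.currentframe().f_back.f_locals
    return [name for name, val in caller_locals.items() if val is value]
print(names_for_value(peg1))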
12,237 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
IRIS Dataset
Processes selected samples from the iris dataset to fit the specifications of the sliding windows used in images from the FLIR thermal camera.
Step1: A reference image we gathered
Step2: Process IRIS images
Step3: Putting it all together
Step4: Review Cropping Performance
Step5: Load Selected IRIS dataset
Loads and processes images from IRIS dataset which have participant facing mostly toward camera and in the middle of the frame. | Python Code:
import matplotlib.pyplot as plt
import numpy as np
import os
from skimage import data
from skimage import io
from skimage.transform import resize
from skimage.color import rgb2gray
from scipy.misc import bytescale
%matplotlib inline
WINDOW_WIDTH = 18
WINDOW_HEIGHT = 26
sample_iris_frame = data.imread('external_data/iris/vivek/Expression/ex1/L-1074.bmp')
plt.imshow(sample_iris_frame)
saved_face_regions = np.load('face_regions.npy')
def load_saved_face_region(face_regions, w,h):
for row in saved_face_regions:
# some images may not exist after data cleaning
if os.path.isfile(row[0]):
img = data.imread(row[0])
x = row[1][0]
y = row[1][1]
yield img[y:y+WINDOW_HEIGHT,x:x+WINDOW_WIDTH]
our_faces = list(load_saved_face_region(saved_face_regions, WINDOW_WIDTH, WINDOW_HEIGHT))
Explanation: IRIS Dataset
Processes selected samples from the iris dataset to fit the specifications of the sliding windows used in images from the FLIR thermal camera.
End of explanation
plt.imshow(our_faces[6])
Explanation: A reference image we gathered
End of explanation
def crop(img, size, corner):
y = corner[0]
x = corner[1]
h = size[0]
w = size[1]
return img[y:y+h, x:x+w]
# Values which specify how to crop IRIS images to fit sliding window
ROI_start_y = 13
ROI_start_x = 100
ROI_height = 207
ROI_scale = ROI_height / WINDOW_HEIGHT
ROI_width = int(np.floor(ROI_scale * WINDOW_WIDTH))
cropped_iris_sample = crop(sample_iris_frame, (ROI_height, ROI_width), (ROI_start_y, ROI_start_x))
plt.imshow(cropped_iris_sample)
resized_sample = resize(cropped_iris_sample, (WINDOW_HEIGHT, WINDOW_WIDTH))
plt.imshow(resized_sample)
plt.imshow(bytescale(rgb2gray(resized_sample), cmin=0.0, cmax=1.0))
Explanation: Process IRIS images
End of explanation
def process_iris_sample(frame):
global ROI_start_y, ROI_start_x, ROI_height, ROI_width
cropped_frame = crop(frame, (ROI_height, ROI_width), (ROI_start_y, ROI_start_x))
resized_frame = resize(cropped_frame, (WINDOW_HEIGHT, WINDOW_WIDTH))
gray_frame = rgb2gray(resized_frame)
return bytescale(gray_frame, cmin=0.0, cmax=1.0)
plt.imshow(process_iris_sample(sample_iris_frame))
Explanation: Putting it all together
End of explanation
sample_2 = data.imread('external_data/iris/Gribok/Expression/ex2/L-1142.bmp')
plt.imshow(sample_2)
plt.imshow(process_iris_sample(sample_2))
Explanation: Review Cropping Performance
End of explanation
raw_iris = io.imread_collection('external_data/selected_iris/*.bmp')
len(raw_iris)
processed_iris = list(map(process_iris_sample, raw_iris))
plt.imshow(processed_iris[0])
Explanation: Load Selected IRIS dataset
Loads and processes images from IRIS dataset which have participant facing mostly toward camera and in the middle of the frame.
End of explanation |
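A small optional step (assumed, not from the original notebook): persist the processed windows so later experiments can skip the cropping pipeline:
processed_array = np.stack(processed_iris)               # expected shape (n_samples, 26, 18)
np.save('processed_iris_windows.npy', processed_array)   # hypothetical output filename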
12,238 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1A.1 - Integral and the rectangle method - correction
Approximating the computation of an integral with the rectangle method.
Step1: Computing the integral
Step2: We need to write the function that computes the integral.
Computing with a target precision
We try every value of $n$ until the desired precision is reached, assuming it is reached as soon as the relative difference between two successive values is smaller than the precision.
Step3: The second number indicates the value of $n$ that was needed.
Step4: A faster computation
Think of a piano. At iteration $n$ we compute the white keys, at iteration $n+1$ the black keys: at iteration $n$ we compute the $k$ red rectangles, then the $k$ green ones at $n+1$, then the $2k$ ones at $n+2$.
Step5: What explains the differences in precision? For this integral we want to compute $\int_0^1 x dx$. At iteration $n$ we have
Step6: Each small rectangle has width $h=\frac{b-a}{n}$ and a height error of at most $M'h$ where $M'$ is an upper bound of the derivative, $M'=\underset{x \in [a,b]}{\sup} \left|f'(x)\right|$. The error margin satisfies
Step7: A recalcitrant function
The Riemann integral converges for any function whose set of discontinuities has measure zero. Numerically, things are different: one can build such a function for which the computation of the integral does not converge.
$$f(x) = \left\{ \begin{array}{ll}2 & \text{ if } x = k2^{-n}, \; k,n \in \mathbb{N} \\ 1 & \text{ otherwise } \end{array}\right.$$
Step8: The function is bounded and integrable because it is constant everywhere except on a set of measure zero, smaller than the set of rationals. Yet the corresponding numerical computation does not converge because, for some values of $n$, most of the sampled points fall in that set.
Monte Carlo computation
This computation simply consists in drawing random values in the integration interval and taking their average.
Step9: The function can be called several times and the obtained values averaged, so as to get an algorithm that stops once a given precision is reached.
Step10: When to stop?
The Monte Carlo integral is a random variable which is an average: $I_n(f)=\frac{1}{n} \sum_{i=1}^n f((b-a)U_i + a)$ where $U_i$ is a uniform random variable on $[0,1]$. $f((b-a)U_i + a)$ is a random variable which is in general not uniform; it is bounded whenever $f$ is bounded. By the central limit theorem, $\sqrt{n}I_n(f)$ tends to a normal law whose variance can be bounded by $M^2$ where $M=\max\{f(x) \,|\, x \in [a,b]\}$. It then suffices to choose $n$ large enough for the 95% confidence interval to be small enough. If $p$ is the precision, $\frac{1.96 M}{\sqrt{n}} < \frac{p}{2}$ and therefore $n > \frac{16M^2}{p^2}$.
For a precision of $10^{-2}$
Step11: Timing
The timings are not really comparable since the stopping conditions of the functions do not correspond to the same precision. | Python Code:
%matplotlib inline
from jyquickhelper import add_notebook_menu
add_notebook_menu()
Explanation: 1A.1 - Integral and the rectangle method - correction
Approximating the computation of an integral with the rectangle method.
End of explanation
a = -2
b = 3
n = 20
import math
f = lambda x: x * math.cos (x)
f(4)
def integrale(f, a, b, n):
somme = 0
h = float(b-a) / n
x = a
for i in range(0, n + 1):
somme += f(x) * h
x += h
return somme
# Check with a simple case.
integrale(lambda x: x, 0, 1, 10)
integrale(f, a, b, n)
Explanation: Computing the integral
End of explanation
def integrale_precise(f, a, b, n0, precision):
val = integrale(f, a, b, n0)
val0 = None
while val0 is None or abs(val - val0) / val0 > precision:
val0 = val
n0 += 1
val = integrale(f, a, b, n0)
return val, n0
integrale_precise(lambda x: x, 0, 1, 10, 1e-4)
Explanation: We need to write the function that computes the integral.
Computing with a target precision
We try every value of $n$ until the desired precision is reached, assuming it is reached as soon as the relative difference between two successive values is smaller than the precision.
End of explanation
integrale_precise(f, a, b, n, 1e-4)
Explanation: The second number indicates the value of $n$ that was needed.
End of explanation
from pyquickhelper.helpgen import NbImage
NbImage("images/int2.png", width=400)
def integrale_precise_2n(f, a, b, n0, precision):
val = integrale(f, a, b, n0)
val0 = None
h = float(b-a) / n0
while val0 is None or abs(val - val0) / val0 > precision:
val0 = val
n0 *= 2
h /= 2
val = (val + integrale(f, a + h, b, n0)) / 2
return val, n0
integrale_precise_2n(lambda x: x, 0, 1, 10, 1e-4)
integrale_precise_2n(f, a, b, n, 1e-4)
Explanation: A faster computation
Think of a piano. At iteration $n$ we compute the white keys, at iteration $n+1$ the black keys: at iteration $n$ we compute the $k$ red rectangles, then the $k$ green ones at $n+1$, then the $2k$ ones at $n+2$.
End of explanation
NbImage("images/marge.png", width=400)
Explanation: What explains the differences in precision? For this integral we want to compute $\int_0^1 x dx$. At iteration $n$ we have:
$$I(n)=\frac{1}{n}\sum_{i=1}^n \frac{i}{n} = \frac{n(n+1)}{2n^2} = \frac{n+1}{2n} = \frac{1}{2} + \frac{1}{2n}$$
We deduce that $I(n+1) - I(n) = \frac{1}{2n+2} - \frac{1}{2n} \sim O(\frac{1}{n^2})$. In other words, the algorithm stops as soon as $\frac{1}{n^2} < precision$, but the sum of the differences that would remain to be computed if we kept going is not negligible:
$$\sum_{n>k} \left(I(n+1) - I(n)\right) \sim \frac{1}{k}$$
For this integral, the algorithm stops while the remaining distance to cover is still as large as the difference between two successive values.
The second version doubles the number of computations at each step, even though it avoids recomputing the same values. Once a series has been started, it has to be carried through to the end. It is worthwhile when the function to integrate is expensive to evaluate.
The integral converges for any regulated function, or whenever the set of its discontinuities has measure zero. Another property can be obtained if the function is assumed to be $C^1$ (continuous derivative): the derivative admits an upper bound, and this bound is used to bound the error margin.
End of explanation
def integrale_precision(f, a, b, n):
somme = 0
h = float(b-a) / n
x = a
max_fp = 0
last_f = 0
for i in range(0, n + 1):
fx = f(x)
somme += f(x) * h
x += h
if last_f is not None:
md = abs(fx - last_f) / h
max_fp = max(max_fp, md)
last_f = fx
return somme, max_fp * n * h**2
def integrale_precise_derivee(f, a, b, n0, precision):
val, prec = integrale_precision(f, a, b, n0)
val0 = None
while val0 is None or prec > precision:
val0 = val
n0 += 1
val, prec = integrale_precision(f, a, b, n0)
return val, n0
integrale_precise_derivee(lambda x: x, 0, 1, 10, 1e-3)
Explanation: Each small rectangle has width $h=\frac{b-a}{n}$ and a height error of at most $M'h$ where $M'$ is an upper bound of the derivative, $M'=\underset{x \in [a,b]}{\sup} \left|f'(x)\right|$. The error margin satisfies:
$$m(h) \leqslant nM'h^2 = \frac{M'(b-a)^2}{n}$$
This is a decreasing sequence, and to reach a precision $p$ it suffices to choose $n$ such that:
$$n > \frac{M'(b-a)^2}{p}$$
We now need to estimate an upper bound for the derivative. Since the function is evaluated at a set of points $x_i=(b-a)\frac{i}{n} + a \in [a,b]$, this can be done by computing:
$$M' \sim \max_i\left\{ \frac{\left|f(x_{i+1}) - f(x_i)\right|}{x_{i+1} - x_i}\right\}$$
If $f(x)=x^2$, $a=0$, $b=1$, then $n = \frac{1}{p}$.
End of explanation
import math
def bizarre(x, n):
if x == 0:
return 1
kn = int(math.log(n) / math.log(2)) + 1
a = 2**kn * x
d = abs(int(a + 1e-10) - a)
if d < 1e-10:
return 2
else:
return 1
bizarre(0.33, 8), bizarre(0.5, 8), bizarre(0.125, 8)
def integrale_bizarre(f, a, b, n):
somme = 0
h = float(b-a) / n
x = a
for i in range(0, n + 1):
# same function, but n is passed as well
somme += f(x, n) * h
x += h
return somme
px = list(range(1,257))
py = [integrale_bizarre(bizarre, 0, 1, i) for i in px]
integrale_bizarre(bizarre, 0, 1, 8)
integrale_bizarre(bizarre, 0, 1, 16)
integrale_bizarre(bizarre, 0, 1, 7)
import matplotlib.pyplot as plt
fig, ax = plt.subplots(1,1)
ax.plot(px, py)
Explanation: A recalcitrant function
The Riemann integral converges for any function whose set of discontinuities has measure zero. Numerically, things are different: one can build such a function for which the computation of the integral does not converge.
$$f(x) = \left\{ \begin{array}{ll}2 & \text{ if } x = k2^{-n}, \; k,n \in \mathbb{N} \\ 1 & \text{ otherwise } \end{array}\right.$$
End of explanation
import random
def integrale_mc(f, a, b, n):
somme = 0
for i in range(0, n):
x = random.uniform(a, b)
somme += f(x)
return somme / n
# Check with a simple case.
integrale_mc(lambda x: x, 0, 1, 100)
Explanation: The function is bounded and integrable because it is constant everywhere except on a set of measure zero, smaller than the set of rationals. Yet the corresponding numerical computation does not converge because, for some values of $n$, most of the sampled points fall in that set.
Monte Carlo computation
This computation simply consists in drawing random values in the integration interval and taking their average.
End of explanation
def integrale_mc_precise(f, a, b, n0, precision):
val = integrale(f, a, b, n0)
moy = val
moy0 = None
nb = 1
while moy0 is None or abs(moy - moy0) / moy0 > precision:
val += integrale_mc(f, a, b, n0)
nb += 1
moy0 = moy
moy = val / nb
return moy, n0
integrale_mc_precise(lambda x: x, 0, 1, 100, 1e-4)
Explanation: The function can be called several times and the obtained values averaged, so as to get an algorithm that stops once a given precision is reached.
End of explanation
integrale_mc(lambda x: x, 0, 1, int(16e4))
Explanation: When to stop?
The Monte Carlo integral is a random variable which is an average: $I_n(f)=\frac{1}{n} \sum_{i=1}^n f((b-a)U_i + a)$ where $U_i$ is a uniform random variable on $[0,1]$. $f((b-a)U_i + a)$ is a random variable which is in general not uniform; it is bounded whenever $f$ is bounded. By the central limit theorem, $\sqrt{n}I_n(f)$ tends to a normal law whose variance can be bounded by $M^2$ where $M=\max\{f(x) \,|\, x \in [a,b]\}$. It then suffices to choose $n$ large enough for the 95% confidence interval to be small enough. If $p$ is the precision, $\frac{1.96 M}{\sqrt{n}} < \frac{p}{2}$ and therefore $n > \frac{16M^2}{p^2}$.
For a precision of $10^{-2}$:
End of explanation
%timeit integrale_precise(f, a, b, n, 1e-4)
%timeit integrale_precise_2n(f, a, b, n, 1e-4)
%timeit integrale_mc_precise(f, a, b, n, 1e-4)
%timeit integrale_precise(lambda x: x, 0, 1, 10, 1e-4)
%timeit integrale_precise_2n(lambda x: x, 0, 1, 10, 1e-4)
%timeit integrale_mc_precise(lambda x: x, 0, 1, 10, 1e-4)
Explanation: Timing
The timings are not really comparable since the stopping conditions of the functions do not correspond to the same precision.
End of explanation |
12,239 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Parameter space coverage 3D graphs
See the parameter-space-coverage notebook for more information.
Step1: Set the sample size
Step2: Uniform
Step3: Stepped
Set the step size. I'm using a larger value than in the parameter-space-coverage notebook (0.05 compared to 0.01) so that the quantization effect is more visible in 3 dimensions. | Python Code:
import random
import numpy as np
import plotly.plotly as py
import plotly.graph_objs as go
import plotly.offline as offline
offline.init_notebook_mode(connected=True)
Explanation: Parameter space coverage 3D graphs
See the parameter-space-coverage notebook for more information.
End of explanation
sample_size = 1000
def plot_3d_scatter(X, Y, Z, filename):
trace = go.Scatter3d(
x=X,
y=Y,
z=Z,
mode='markers',
marker=dict(
size=4,
#line=dict(
# color='rgba(217, 217, 217, 0.14)',
# width=0.5
#),
opacity=0.5
)
)
data = [trace]
layout = go.Layout(
margin=dict(
l=0,
r=0,
b=0,
t=0
)
)
fig = go.Figure(data=data, layout=layout)
return py.iplot(fig, filename=filename)
Explanation: Set the sample size:
End of explanation
X = [random.uniform(0, 1) for i in range(sample_size)]
Y = [random.uniform(0, 1) for i in range(sample_size)]
Z = [random.uniform(0, 1) for i in range(sample_size)]
plot_3d_scatter(X, Y, Z, 'paramspace-uniform')
Explanation: Uniform
End of explanation
step_size = 0.05
all_points = []
for x in np.arange(0, 1, step_size):
for y in np.arange(0, 1, step_size):
for z in np.arange(0, 1, step_size):
all_points.append((x, y, z))
print("Number of parameter value combinations: {:,}".format(len(all_points)))
sample = random.sample(all_points, sample_size)
X, Y, Z = zip(*sample) # unzip sample
plot_3d_scatter(X, Y, Z, 'paramspace-stepped')
Explanation: Stepped
Set the step size. I'm using a larger value than in the parameter-space-coverage notebook (0.05 compared to 0.01) so that the quantization effect is more visible in 3 dimensions.
End of explanation |
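A quick, optional check (not in the original notebook) of how strongly the stepped scheme quantises each axis:
# the stepped sample can take at most 1/step_size = 20 distinct values per axis
print(len(set(X)), len(set(Y)), len(set(Z)))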
12,240 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A jupyter notebook is a browser-based environment that integrates
Step1: Create a variable
Step2: Print out the value of the variable
Step3: or even easier
Step4: Datatypes
In computer programming, a data type is a classification identifying one of various types that data
can have.
The most common data type we will see in this class are
Step5: NumPy (Numerical Python) is the fundamental package for scientific computing with Python.
Load the numpy library
Step6: pi and e are built-in constants
Step7: Here is a link to all Numpy math functions.
Arrays
Each element of the array has a Value
The position of each Value is called its Index
Our basic unit will be the NumPy array
Step8: Indexing
Step9: Slices
x[start
Step10: There are lots of different methods that can be applied to a NumPy array
Step11: Help about a function
Step12: NumPy math works over an entire array
Step13: Masking - The key to fast programs
Step14: Fancy masking
Step15: Sorting
Step16: Control Flow
Like all computer languages, Python supports the standard types of control flows including
Step17: For loops are different in python.
You do not need to specify the beginning and end values of the loop
Step18: Loops are slow in Python. Do not use them if you do not have to!
Step19: Functions
In computer science, a function (also called a procedure, method, subroutine, or routine) is a portion
of code within a larger program that performs a specific task and is relatively independent of the
remaining code. The big advantage of a function is that it breaks a program into smaller, easier
to understand pieces. It also makes debugging easier. A function can also be reused in another
program.
The basic idea of a function is that it will take various values, do something with them, and return a result. The variables in a function are local. That means that they do not affect anything outside the function.
Below is a simple example of a function that solves the equation
Step20: The results of one function can be used as the input to another function
Step21: Creating Arrays
Numpy has a wide variety of ways of creating arrays | Python Code:
print("Hello World!")
# lines that begin with a # are treated as comment lines and not executed
# print("This line is not printed")
print("This line is printed")
Explanation: A jupyter notebook is a browser-based environment that integrates:
A Kernel (python)
Text
Executable code
Plots and images
Rendered mathematical equations
Cell
The basic unit of a jupyter notebook is a cell. A cell can contain any of the above elements.
In a notebook, to run a cell of code, hit Shift-Enter. This executes the cell and puts the cursor in the next cell below, or makes a new one if you are at the end. Alternately, you can use:
Alt-Enter to force the creation of a new cell unconditionally (useful when inserting new content in the middle of an existing notebook).
Control-Enter executes the cell and keeps the cursor in the same cell, useful for quick experimentation of snippets that you don't need to keep permanently.
Hello World
End of explanation
g = 3.0 * 2.0
Explanation: Create a variable
End of explanation
print(g)
Explanation: Print out the value of the variable
End of explanation
g
Explanation: or even easier:
End of explanation
a = 1
b = 2.3
c = 2.3e4
d = True
e = "Spam"
type(a), type(b), type(c), type(d), type(e)
a + b, type(a + b)
c + d, type(c + d) # True = 1
a + e
str(a) + e
Explanation: Datatypes
In computer programming, a data type is a classification identifying one of various types that data
can have.
The most common data type we will see in this class are:
Integers (int): Integers are the classic cardinal numbers: ... -3, -2, -1, 0, 1, 2, 3, 4, ...
Floating Point (float): Floating Point are numbers with a decimal point: 1.2, 34.98, -67,23354435, ...
Floating point values can also be expressed in scientific notation: 1e3 = 1000
Booleans (bool): Booleans types can only have one of two values: True or False. In many languages 0 is considered False, and any other value is considered True.
Strings (str): Strings can be composed of one or more characters: โaโ, โspamโ, โspam spam eggs and spamโ. Usually quotes (โ) are used to specify a string. For example โ12โ would refer to the string, not the integer.
Collections of Data Types
Scalar: A single value of any data type.
List: A collection of values. May be mixed data types. (1, 2.34, โSpamโ, True) including lists of lists: (1, (1,2,3), (3,4))
Array: A collection of values. Must be same data type. [1,2,3,4] or [1.2, 4.5, 2.6] or [True, False, False] or [โSpamโ, โEggsโ, โSpamโ]
Matrix: A multi-dimensional array: [[1,2], [3,4]] (an array of arrays).
End of explanation
import numpy as np
Explanation: NumPy (Numerical Python) is the fundamental package for scientific computing with Python.
Load the numpy library:
End of explanation
np.pi, np.e
Explanation: pi and e are built-in constants:
End of explanation
np.random.seed(42) # set the seed - everyone gets the same random numbers
x = np.random.randint(1,10,20) # 20 random ints between 1 and 10
x
Explanation: Here is a link to all Numpy math functions.
Arrays
Each element of the array has a Value
The position of each Value is called its Index
Our basic unit will be the NumPy array
End of explanation
x[0] # The Value at Index = 0
x[-1] # The last Value in the array x
Explanation: Indexing
End of explanation
x
x[0:4] # first 4 items
x[:4] # same
x[0:4:2] # first four item, step = 2
x[3::-1] # first four items backwards, step = -1
x[::-1] # Reverse the array x
print(x[-5:]) # last 5 elements of the array x
Explanation: Slices
x[start:stop:step]
start is the first Index that you want [default = first element]
stop is the first Index that you do not want [default = last element]
step defines size of step and whether you are moving forwards (positive) or backwards (negative) [default = 1]
End of explanation
x.size # Number of elements in x
x.mean() # Average of the elements in x
x.sum() # Total of the elements in x
x[-5:].sum() # Total of last 5 elements in x
x.cumsum() # Cumulative sum
x.cumsum()/x.sum() # Cumulative percentage
x.
Explanation: There are lots of different methods that can be applied to a NumPy array
End of explanation
?x.min
Explanation: Help about a function:
End of explanation
y = x * 2
y
sin(x) # need to Numpy's math functions
np.sin(x)
Explanation: NumPy math works over an entire array:
End of explanation
mask1 = np.where(x>5)
x, mask1
x[mask1], y[mask1]
mask2 = np.where((x>3) & (x<7))
x[mask2]
Explanation: Masking - The key to fast programs
End of explanation
mask3 = np.where(x >= 8)
x[mask3]
# Set all values of x that match mask3 to 0
x[mask3] = 0
x
mask4 = np.where(x != 0)
mask4
#Add 10 to every value of x that matches mask4:
x[mask4] += 100
x
Explanation: Fancy masking
End of explanation
np.random.seed(13) # set the seed - everyone gets the same random numbers
z = np.random.randint(1,10,20) # 20 random ints between 1 and 10
z
np.sort(z)
np.sort(z)[0:4]
# Returns the indices that would sort an array
np.argsort(z)
z, z[np.argsort(z)]
maskS = np.argsort(z)
z, z[maskS]
Explanation: Sorting
End of explanation
xx = -1
if xx > 0:
print("This number is positive")
else:
print("This number is NOT positive")
xx = 0
if xx > 0:
print("This number is positive")
elif xx == 0:
print("This number is zero")
else:
print("This number is negative")
Explanation: Control Flow
Like all computer languages, Python supports the standard types of control flows including:
IF statements
FOR loops
End of explanation
z
for value in z:
print(value)
for idx,val in enumerate(z):
print(idx,val)
for idx,val in enumerate(z):
if (val > 5):
z[idx] = 0
for idx,val in enumerate(z):
print(idx,val)
Explanation: For loops are different in python.
You do not need to specify the beginning and end values of the loop
End of explanation
np.random.seed(42)
BigZ = np.random.random(10000) # 10,000 value array
BigZ[:10]
# This is slow!
for Idx,Val in enumerate(BigZ):
if (Val > 0.5):
BigZ[Idx] = 0
BigZ[:10]
%%timeit
for Idx,Val in enumerate(BigZ):
if (Val > 0.5):
BigZ[Idx] = 0
# Masks are MUCH faster
mask = np.where(BigZ>0.5)
BigZ[mask] = 0
BigZ[:10]
%%timeit -o
mask = np.where(BigZ>0.5)
BigZ[mask] = 0
Explanation: Loops are slow in Python. Do not use them if you do not have to!
End of explanation
def find_f(x,y):
result = (x ** 2) * np.sin(y) # assign the variable result the value of the function
return result # return the value of the function to the main program
np.random.seed(42)
array_x = np.random.rand(10) * 10
array_y = np.random.rand(10) * 2.0 * np.pi
array_x, array_y
value_f = find_f(array_x,array_y)
value_f
Explanation: Functions
In computer science, a function (also called a procedure, method, subroutine, or routine) is a portion
of code within a larger program that performs a specific task and is relatively independent of the
remaining code. The big advantage of a function is that it breaks a program into smaller, easier
to understand pieces. It also makes debugging easier. A function can also be reused in another
program.
The basic idea of a function is that it will take various values, do something with them, and return a result. The variables in a function are local. That means that they do not affect anything outside the function.
Below is a simple example of a function that solves the equation:
$ f(x,y) = x^2\ sin(y)$
In the example the name of the function is find_f (you can name functions what ever you want). The function find_f takes two arguments x and y, and returns the value of the equation to the main program. In the main program a variable named value_f is assigned the value returned by find_f. Notice that in the main program the function find_f is called using the arguments array_x and array_y. Since the variables in the function are local, you do not have name them x and y in the main program.
End of explanation
def find_g(z):
result = z / np.e
return result
find_g(value_f)
find_g(find_f(array_x,array_y))
Explanation: The results of one function can be used as the input to another function
End of explanation
# a new array filled with zeros
array_0 = np.zeros(10)
array_0
# a new array filled with ones
array_1 = np.ones(10)
array_1
# a new array filled with evenly spaced values within a given interval
array_2 = np.arange(10,20)
array_2
# a new array filled with evenly spaced numbers over a specified interval (start, stop, num)
array_3 = np.linspace(10,20,5)
array_3
# a new array filled with evenly spaced numbers over a log scale. (start, stop, num, base)
array_4 = np.logspace(1,2,5,10)
array_4
Explanation: Creating Arrays
Numpy has a wide variety of ways of creating arrays: Array creation routines
End of explanation |
12,241 |
Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
build your model
| Python Code::
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense, Input, Flatten
model = Sequential([
Input(shape=(28,28,1,)),
Flatten(),
Dense(units=84, activation="relu"),
Dense(units=10, activation="softmax"),
])
print (model.summary())
|
12,242 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Hands-on LH1
Step1: Scattering parameters measurement
The following figure illustrates the measurement setup, and the adopted port indexing convention.
<img src="./LH1_Mode-Converter_data/setup2.png">
Below we import the measurements data generated by the network analyser. These data consist in an ASCII file (tab separated file), with the following usual header
Step2: The measurement contains $N$ frequency points, where $N$ is
Step3: Let's have a look to these data, for example by plotting the amplitude of the $S_{10}$ parameter, that is the ratio of the power coming from port 0 to port 1 (which corresponds to $S_{21}$ in the network analyser file).
Step4: OK. Let's do the same for the second and third measurements performed, that is for the power transferred from port 0 to ports 2 and 3.
Step5: Nice. Now, let's stop and think. The purpose of the mode converter is the transfer the power from the fundamental mode of rectangular waveguides, namely the $\mathrm{TE}{10}$, into a higher order mode, the $\mathrm{TE}{30}$. Once these mode conversion achieved, thin metallic wall septum are located in zero E-field regions, which allows to split the power into three independant waveguides.
Dividing the power by 3, is equivalent in decibels to
Step6: Thus, ideally, the three transmission scattering parameters should be equal to -4.77 dB at the operational frequency, 3.7 GHz in our case. Clearly from the previous figure we see that it is not the case. The power splitting is unbalanced, and more power is directed to port 2 than to ports 1 and 3 at 3.7 GHz. In conclusion of this first serie of measurements
Step7: Thus, we can convert the (dB,degree) data into natural (real,imaginary) numbers
Step8: Let's check the power conservation. If the calibration has been correctly performed and if the conduction losses are negligible, one has
Step9: We are close to 1. The difference is the conduction losses, but also the intrinsic measurement error. Let's see how much power is lost in the device in terms of percent
Step10: Electric field measurements
In the previous section we figured out that the mode converter was not working properly as a 3-way splitter; indeed the power splitting is unbalanced. It's now time to understand why, since the guy who performed the RF modeling of the stucture is very good and is sure that his design is correct. In order to dig into this problem, we propose to probe the electric field after the mode converter but before the thin septum splitter
Step11: Let's check if we have the same number of points for amplitude and phase
Step12: What does it look like?
Step13: In natural values
Step14: From the previous amplitude figures, one can remark that the measureed amplitude is not ideal
Step15: Least Square solving with Python scipy routines
Step16: We prescribe additional information concerning the field at the edge
Step17: We deduce that there is 76% of TE30 mode and 13% of TE10 mode, 3% of TE40 and 50
Step18: Least Square Equation Solving (Manually)
We can also directly solve the problem, by defining
$$
\phi_{ij} = \sin\left(\frac{j \pi}{a} x_i \right) e^{-j \beta_j z_i}
$$
Then
$$
\vec{a} = \left( \phi^T \phi \right)^{-1} \phi^T \vec{E}_{meas}
$$
Step19: This evaluation gives 71% of TE30 mode and 9 % for TE10.
Step20: Least Square Equation Solving (with linalg.lstsq)
Same as in previous section, but using the Python library that do the job for you.
Step21: This gives 86% as TE30 mode.
Step22: 2D plot view
Step23: Finding the mode content using the Fast Fourier Transform
An other solution could be to deduce the mode content from a Fourier analysis of the electric field.
We recall that the total electric field measured on a row $\ell={1,3}$ is
Step24: Clearly, there is something strange wite the analytical spectrum! This is in fact normal, since the initial field "wideband" (from $x\in [0,a]$) is not large enough, which leads to a reduced precision in spectral dimension. So the solution would be to consider that the field is not terminated at the boundaries $x=0$ and $x=a$ and instead to consider it as infinite (undefinite Fourier integral)
Step25: Using this technique, we clearly see the TE30 mode is the dominant one.
Using orthonormalization properties
In this section we use the fact that waveguide modes form a complete spectrum. The mode base is orthonormal, thus that
Step26: We can see on the picture above that the TE30 mode is dominant, but that there is some TE10 and other mode also present. Almost 81 to 85% of the TE30 modes, and between 5-8% for the TE10 mode.
Solving a linear system
This approach is derived from A.G.Bailey et al. paper, Experimental Determination of Higher Order Mode Conversion in a Multimode Waveguide. Here, the electric field in the waveguide is supposed to be
Step27: The latter can be expressed as
Step28: And we recall the definition of the Dirac Delta function
Step29: CSS Styling | Python Code:
# This line configures matplotlib to show figures embedded in the notebook,
# and also import the numpy library
%pylab
%matplotlib inline
Explanation: Hands-on LH1: the $\mathrm{TE}_{10}$-$\mathrm{TE}_{30}$ Mode Converter
Introduction
The Tore Supra Lower Hybrid Launchers are equipped with $\mathrm{TE}_{10}$-$\mathrm{TE}_{30}$ mode converters, a waveguide structure which converts the RF power from one propagation mode to another, in order to split the power by three in the poloidal direction. The electric field topology in this device is illustrated in the following figure.
<img src="./LH1_Mode-Converter_data/Efield.png">
During this hands-on, students are introduced to RF measurements and analysis. Before measuring the RF performance of a mode converter, the students first calibrate the RF measurement apparatus, a (Vectorial) Network Analyser. This device measures the scattering parameters (or S-parameters) between two ports, generally referred to as ports 1 and 2. The scattering parameters are defined in terms of incident and reflected waves at the ports, and are widely used at microwave frequencies, where it becomes difficult to measure voltages and currents directly.
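Throughout the hands-on, amplitudes read from the network analyser in dB are converted back and forth to linear ratios. As a quick reminder (a small illustrative sketch, not part of the original measurement script), a power ratio in dB is recovered with 10**(S_dB/10) and an amplitude ratio with 10**(S_dB/20):
import numpy as np
S_dB = -4.77                      # the ideal value for a 3-way power split (see below)
power_ratio = 10**(S_dB/10)       # ~1/3 of the incident power
amplitude_ratio = 10**(S_dB/20)   # corresponding wave-amplitude ratio
print(power_ratio, amplitude_ratio)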
Before starting, let's import the necessary Python libraries:
End of explanation
%cd C:\Users\JH218595\Documents\Notebooks\TP Master Fusion
CDM1 = np.loadtxt('./LH1_Mode-Converter_data/CDM01', skiprows=4)
f_1 = CDM1[:,0]
Explanation: Scattering parameters measurement
The following figure illustrates the measurement setup, and the adopted port indexing convention.
<img src="./LH1_Mode-Converter_data/setup2.png">
Below we import the measurement data generated by the network analyser. These data consist of an ASCII (tab-separated) file with the usual header,
where we have, by column number:
The frequency in Hertz,
The amplitude of the $S_{11}$ parameter in decibel (dB),
The phase of the $S_{11}$ parameter in degree,
The amplitude of the $S_{21}$ parameter in decibel (dB),
The phase of the $S_{21}$ parameter in degree,
and etc for $S_{12}$ and $S_{22}$.
Below we import the first measurement data, performed between the port indexed "0" of the mode converter (the input, corresponding to the port 1 of the network analyser) and the port indexed "1" of the mode converter (corresponding to port 2 of the network analyser).
End of explanation
len(f_1)
Explanation: The measurement contains $N$ frequency points, where $N$ is:
End of explanation
S10_dB = CDM1[:,3]
plot(f_1/1e9,S10_dB, lw=2)
xlabel('f [GHz]')
ylabel('Amplitude [dB]')
grid('on')
title('$S_{10}$ amplitude vs frequency')
Explanation: Let's have a look to these data, for example by plotting the amplitude of the $S_{10}$ parameter, that is the ratio of the power coming from port 0 to port 1 (which corresponds to $S_{21}$ in the network analyser file).
End of explanation
CDM2 = loadtxt('LH1_Mode-Converter_data/CDM02', skiprows=4)
CDM3 = loadtxt('LH1_Mode-Converter_data/CDM03', skiprows=4)
f_2 = CDM2[:,0]
f_3 = CDM3[:,0]
S20_dB = CDM2[:,3]
S30_dB = CDM3[:,3]
S00_dB = CDM1[:,1]
plot(f_1/1e9, S10_dB, f_2/1e9, S20_dB, f_3/1e9, S30_dB, lw=2)
xlabel('f [GHz]')
ylabel('Amplitude [dB]')
grid('on')
title('$S_{i0}$, $i=1,2,3$: amplitude vs frequency')
legend(('$S_{10}$','$S_{20}$', '$S_{30}$'),loc='best')
idx = np.argmin(np.abs(f_1/1e9 - 3.7))
print(f'Measured values at 3.7 GHz: {S10_dB[idx]}, {S20_dB[idx]} and {S30_dB[idx]} dB')
Explanation: OK. Let's do the same for the second and third measurements performed, that is for the power transferred from port 0 to ports 2 and 3.
End of explanation
10*log10(1.0/3.0)
Explanation: Nice. Now, let's stop and think. The purpose of the mode converter is to transfer the power from the fundamental mode of rectangular waveguides, namely the $\mathrm{TE}_{10}$, into a higher-order mode, the $\mathrm{TE}_{30}$. Once this mode conversion is achieved, thin metallic septum walls are located in zero E-field regions, which allows the power to be split into three independent waveguides.
Dividing the power by 3 is equivalent in decibels to:
End of explanation
def dBdegree_2_natural(ampl_dB, phase_deg):
amp = 10**(ampl_dB/20)
phase_rad = np.pi/180*phase_deg
return amp*np.exp(1j*phase_rad)
Explanation: Thus, ideally, the three transmission scattering parameters should be equal to -4.77 dB at the operational frequency, 3.7 GHz in our case. Clearly from the previous figure we see that it is not the case. The power splitting is unbalanced, and more power is directed to port 2 than to ports 1 and 3 at 3.7 GHz. In conclusion of this first series of measurements: this mode converter is not working properly. The big question is: "why?".
Before continuing, it may be useful to define a function that converts a (dB, degree) pair into a natural complex number:
End of explanation
S00 = dBdegree_2_natural(CDM1[:,1],CDM1[:,2])
S10 = dBdegree_2_natural(CDM1[:,3],CDM1[:,4])
S20 = dBdegree_2_natural(CDM2[:,3],CDM2[:,4])
S30 = dBdegree_2_natural(CDM3[:,3],CDM3[:,4])
Explanation: Thus, we can convert the (dB,degree) data into natural (real,imaginary) numbers
End of explanation
# Check the power conservation in dB : reflected+incident = all the power
plot(f_1/1e9, 10**(S00_dB/10)+10**(S10_dB/10)+10**(S20_dB/10)+10**(S30_dB/10))
ylim([0, 1])
plot(f_1/1e9, sum([abs(S00)**2,abs(S10)**2,abs(S20)**2,abs(S30)**2],axis=0))
xlabel('f [GHz]')
grid('on')
ylim([0, 1])
Explanation: Let's check the power conservation. If the calibration has been correctly performed and if the conduction losses are negligible, one has: $$\sum_{j=0\ldots 3} \left| S_{j0} \right|^2 = 1$$
Let's try:
End of explanation
plot(f_1/1e9, 100*(1-sum([abs(S00)**2,abs(S10)**2,abs(S20)**2,abs(S30)**2],axis=0)))
xlabel('f [GHz]')
ylabel('Fraction of RF power lost in the device [%]')
grid('on')
Explanation: We are close to 1. The difference is the conduction losses, but also the intrinsic measurement error. Let's see how much power is lost in the device in terms of percent :
End of explanation
# Electric field measurement
# Measurement Hands-On Group #1
# Feb 2013
# columns correspond to hole rows 1,2,3
ampl_dB = -1.0*np.array([ # use a numpy array in order to avoid the caveat to multiply a list by a float
[31.3, 32.0, 31.2],
[30.6, 31.2, 30.2],
[33.2, 33.3, 32.5],
[42.8, 42.0, 41.7],
[40.1, 41.3, 40.6],
[32.7, 33.5, 32.7],
[31.2, 31.4, 30.9],
[32.5, 33.0, 32.8],
[39.1, 39.9, 40.9],
[46.6, 44.0, 42.0],
[34.8, 33.2, 33.2],
[32.4, 30.6, 30.6],
[33.3, 30.9, 32.0]])
phase_deg = np.array(
[[-111.7, 64.9, -119.2],
[-111.9, 67.2, -119.9],
[-110.7, 67.1, -119.8],
[-99.1, 75.7, -119.7],
[55.90, -127.4, 60.1],
[60.4, -123.3, 59.2],
[64.3, -117.4, 62.8],
[61.3, -119.7, 63.3],
[50.9, -118.3, 68.1],
[-95.8, 70.5, -120.9],
[-107.9, 74.1, -116.0],
[-112.0, 71.2, -112.6],
[-116.6, 72.4, -116.0]])
Explanation: Electric field measurements
In the previous section we figured out that the mode converter was not working properly as a 3-way splitter; indeed the power splitting is unbalanced. It's now time to understand why, since the guy who performed the RF modeling of the structure is very good and is sure that his design is correct. In order to dig into this problem, we propose to probe the electric field after the mode converter but before the thin septum splitter: the objective is to "see" what the electric field topology looks like after the mode conversion (is it as we expect it should be?). But by the way, how should it be?
If the mode converter has performed well, the totality of the electromagnetic field should behave as a $\mathrm{TE}_{30}$ mode.
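For reference, here is a minimal sketch (not in the original notebook) of the ideal TE30 transverse profile we would then expect, using the oversized-waveguide width a = 192 mm defined later in the notebook:
import numpy as np
import matplotlib.pyplot as plt
a = 192e-3                              # broad-side width of the oversized waveguide
x = np.linspace(0, a, 200)
plt.plot(x*1e3, np.sin(3*np.pi*x/a))    # pure TE30 profile: two nulls inside the aperture
plt.xlabel('x [mm]')
plt.ylabel('ideal TE30 profile [a.u.]')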
End of explanation
shape(ampl_dB) == shape(phase_deg)
Explanation: Let's check if we have the same number of points for amplitude and phase:
End of explanation
f = 3.7e9   # measurement frequency
a = 192e-3  # broad-side width of the oversized waveguide
b = 40e-3
## Only N measurements are taken, at the N measurement abscissae
# definition of the locations of the N abscissae:
x_mes = 24e-3 + arange(0, 12*12e-3, step=12e-3) # model @3.7GHz, after S. Berio
x = linspace(0, a, 100) # theoretical values
## Measurements taken from S. Berio's PhD thesis, p. 30
# definition of the locations of the 3 rows of "measurements"
#z_mes = [0, 28*1e-3, 100*1e-3]  # superseded by the values on the next line
z_mes = [0, 53*1e-3, 106*1e-3]
fig, (ax1,ax2) = plt.subplots(2,1,sharex=True)
ax1.plot(x_mes*1e3, ampl_dB, '-o')
ax1.set_ylabel('amplitude [dB]')
ax1.grid(True)
ax1.set_title('Amplitude and phase measurements (3 rows)')
ax2.plot(x_mes*1e3, phase_deg, '--.')
ax2.set_xlabel('Measurement location x [mm] #')
ax2.set_ylabel('(relative) phase [deg]')
ax2.grid(True)
ax2.set_ylim(-180, 180)
fig.savefig('mode-converter_measurements_dB.png', dpi=150)
Explanation: What does it look like?
End of explanation
measures = dBdegree_2_natural(ampl_dB, phase_deg)
fig, (ax1,ax2) = plt.subplots(2,1,sharex=True)
ax1.plot(x_mes*1e3, real(measures),'-x')
ax1.set_ylabel('amplitude [a.u.]')
ax1.grid(True)
ax1.set_title('Amplitude and phase measurements (3 rows)')
ax2.plot(x_mes*1e3, phase_deg, '--.')
ax2.set_xlabel('Measurement location x [mm] #')
ax2.set_ylabel('(relative) phase [deg]')
ax2.grid(True)
ax2.set_ylim(-180, 180)
fig.savefig('mode-converter_measurements.png', dpi=150)
Explanation: In natural values
End of explanation
from scipy.constants import c
def Ey(a_n, x, z, wg_a=192e-3, f=3.7e9):
'''
Evaluates the electric field at the (x,z) location
x and z should be scalars
'''
k0 = 2*pi*f/c
sin_n = np.zeros_like(a_n, dtype='complex')
beta_n = np.zeros_like(a_n, dtype='complex')
exp_n = np.zeros_like(a_n, dtype='complex')
exp_n2 = np.zeros_like(a_n, dtype='complex')
N_modes = len(a_n)
for n in np.arange(N_modes):
# Guided wavenumber
# use a negative imaginary part for the square root
# in order to insure the convergence of the exponential
if k0**2 - ((n+1)*pi/wg_a)**2 >= 0:
beta_n[n] = np.sqrt(k0**2 - ((n+1)*pi/wg_a)**2)
else:
beta_n[n] = -1j*np.sqrt(((n+1)*pi/wg_a)**2 - k0**2)
exp_n[n] = np.exp(-1j*beta_n[n]*z)
sin_n[n] = np.sin((n+1)*pi/wg_a*x)
# sum of the modes
Ey = np.sum(a_n*sin_n*exp_n)
return Ey
u_n = np.array([0.5, 0, 1,])
x_test = linspace(0, 192e-3, 201)
z_test = zeros_like(x_test)
E_test = zeros_like(x_test, dtype='complex')
for idx in range(len(x_test)):
E_test[idx] = Ey(u_n, x_test[idx], z_test[idx])
plot(x_test, real(E_test), lw=2)
xlabel('x [m]')
ylabel('|$E_y$| [a.u.]')
grid()
Explanation: From the previous amplitude figures, one can remark that the measureed amplitude is not ideal: the maxima and the minima seem not have exactly the same values. Thus is seems that the mode after the mode converter is not a pure $\mathrm{TE}_{30}$, but probably a mixture of various modes. The question is : what is that mixture of modes? Our objective is to deduce that from the Efield probe data.
Electromagnetic model
Let's define first some usefull functions:
The following function calculates the electric field at a point $(x,z)$ of the waveguide of shape $(a,b)$ for $N$ modes:
$$
E_y (x,z) = \sum_{n=1}^N a_n \sin\left(\frac{n\pi}{a}x \right) e^{-j \beta_n z}
$$
where
$$
\beta_n = \sqrt{k_0^2 - \left(\frac{n\pi}{a}\right)^2}
$$
End of explanation
print(x_mes)
print(z_mes)
Emeas = dBdegree_2_natural(ampl_dB, phase_deg)
Explanation: Least Square solving with Python scipy routines
End of explanation
x_mes = hstack((0, x_mes, a))
Emeas = vstack((array([0,0,0]), Emeas, array([0,0,0]) ))
# Let's reshape x_mes and z_mes vectors
# in order to get position vectors with the same length
# x -> [x1 ... x13 x1 ... x13 x1 ... x13]
# z -> [z1 ... z1 z2 ... z2 z3 ... z3]
XX = tile(x_mes, len(z_mes))
ZZ = repeat(z_mes, len(x_mes))
# and the same for the measurements :
# Emes -> [E(x1,z1) ... E(x13,z1) E(x1,z2) ... E(x13,z2) E(x1,z3) ... E(x13,z3)]
EEmeas = reshape(Emeas, size(Emeas), order='F') # order='F' is important to get the correct ordering
def optim_fun(a, x, z):
Emodel = zeros_like(x, dtype='complex')
for idx in range(len(x)):
Emodel[idx] = Ey(a, x[idx], z[idx])
y = EEmeas - Emodel
return y.real**2 + y.imag**2
from scipy.optimize import leastsq
a0 = np.array([1,0,1,0,0])
sol=leastsq(optim_fun, a0, args=(XX, ZZ))
a_sol = sol[0]
print(abs(a_sol)/norm(sol[0],1)*100)
Explanation: We prescribe additional information concerning the field at the edges: we know the field should be zero at x=0 and x=a.
End of explanation
x_test = linspace(0, a, 201)
def subplote(x_test, z, a, Emes, ax):
E_test = zeros_like(x_test, dtype='complex')
for idx in range(len(x_test)):
E_test[idx] = Ey(a, x_test[idx], z)
ax.plot(x_test*1e3, real(E_test), lw=2)
ax.plot(x_mes*1e3, real(Emes), 'x', ms=16, lw=2)
ax.set_ylabel('|$E_y$| [a.u.]')
ax.grid(True)
fig, axes = plt.subplots(3,1, sharex=True)
subplote(x_test, z_mes[0], a_sol, Emeas[:,0], ax=axes[0])
subplote(x_test, z_mes[1], a_sol, Emeas[:,1], ax=axes[1])
subplote(x_test, z_mes[2], a_sol, Emeas[:,2], ax=axes[2])
axes[-1].set_xlabel('x [mm]')
axes[1].legend(('Reconstruction', 'Measurement'), loc='best')
fig.tight_layout()
fig.savefig('moce_converter_reconstruction.png', dpi=150)
Explanation: We deduce that there is 76% of TE30 mode and 13% of TE10 mode, 3% of TE40 and 50
End of explanation
def phi(n, x, z, wg_a=192e-3, f=3.7e9):
k0 = 2*pi*f/c
if k0**2 - (n*pi/wg_a)**2 >= 0:
beta_n = np.sqrt(k0**2 - (n*pi/wg_a)**2)
else:
beta_n = -1j*np.sqrt((n*pi/wg_a)**2 - k0**2)
return np.sin(n*pi/wg_a*x) * np.exp(-1j*beta_n*z)
MAT = np.array([phi(1, XX, ZZ), phi(2, XX, ZZ), phi(3, XX, ZZ), phi(4, XX, ZZ), phi(5, XX, ZZ)]).T
a_sol = np.linalg.inv(np.dot(MAT.T, MAT)).dot(MAT.T).dot(EEmeas)
print(abs(a_sol)/norm(a_sol,1)*100)
Explanation: Least Square Equation Solving (Manually)
We can also directly solve the problem, by defining
$$
\phi_{ij} = \sin\left(\frac{j \pi}{a} x_i \right) e^{-j \beta_j z_i}
$$
Then
$$
\vec{a} = \left( \phi^T \phi \right)^{-1} \phi^T \vec{E}_{meas}
$$
End of explanation
x_test = linspace(0, a, 201)
def subplote(x_test, z, a, Emes):
E_test = zeros_like(x_test, dtype='complex')
for idx in range(len(x_test)):
E_test[idx] = Ey(a, x_test[idx], z)
plot(x_test, real(E_test), lw=2)
plot(x_mes, real(Emes), 'x', ms=15, lw=2)
xlabel('x [m]')
ylabel('|$E_y$| [a.u.]')
grid()
subplot(311)
subplote(x_test, z_mes[0], a_sol, Emeas[:,0])
title('z1')
subplot(312)
subplote(x_test, z_mes[1], a_sol, Emeas[:,1])
title('z2')
subplot(313)
subplote(x_test, z_mes[2], a_sol, Emeas[:,2])
title('z3')
Explanation: This evaluation gives 71% of TE30 mode and 9 % for TE10.
End of explanation
from numpy.linalg import lstsq
C, res, _, _ = lstsq(MAT, EEmeas, rcond=None)
print(abs(C)/norm(C,1)*100)
Explanation: Least Square Equation Solving (with linalg.lstsq)
Same as in the previous section, but using the Python library that does the job for you.
End of explanation
x_test = linspace(0, a, 201)
def subplote(x_test, z, a, Emes):
E_test = zeros_like(x_test, dtype='complex')
for idx in range(len(x_test)):
E_test[idx] = Ey(a, x_test[idx], z)
plot(x_test, real(E_test), lw=2)
plot(x_mes, real(Emes), 'x', ms=15, lw=2)
xlabel('x [m]')
ylabel('|$E_y$| [a.u.]')
grid()
subplot(311)
subplote(x_test, z_mes[0], C, Emeas[:,0])
title('z1')
subplot(312)
subplote(x_test, z_mes[1], C, Emeas[:,1])
title('z2')
subplot(313)
subplote(x_test, z_mes[2], C, Emeas[:,2])
title('z3')
Explanation: This gives 86% as TE30 mode.
End of explanation
x = linspace(0, a, 201)
z = linspace(-0.1e-2, 15e-2, 301)
XX, ZZ = meshgrid(x,z)
XX2 = reshape(XX, len(x)*len(z))
ZZ2 = reshape(ZZ, len(x)*len(z))
E_2D = zeros_like(XX2, dtype='complex')
for idx in range(len(XX2)):
E_2D[idx] = Ey(a_sol, XX2[idx], ZZ2[idx])
E_2D = reshape(E_2D, (len(z), len(x)))
pcolormesh(ZZ, XX, real(E_2D))
xlabel('z [m]', size=14)
ylabel('x [m]', size=14)
for idx in range(3):
axvline(z_mes[idx], color='k', ls='--', lw=2)
xlim(z[0], z[-1])
ylim(x[0], x[-1])
Explanation: 2D plot view
End of explanation
def Etilde_analytic(kx, Em, a):
Etilde = np.zeros_like(kx, dtype='complex')
for m, Ei in enumerate(Em, start=1):
Etilde += Ei \
* ((-1)**(m) * np.exp(1j*kx*a) - 1) \
* (m*pi/a)/ (kx**2 - (m*pi/a)**2)
return Etilde
# Ideal spectrum
ky_ana = np.linspace(0, 7*pi/a, num=101)
Etilde_ideal = Etilde_analytic(ky_ana, array([0.2, 0, 0.5, 0, 0.1]), a)
fig, ax = plt.subplots()
ax.plot(ky_ana, np.abs(Etilde_ideal))
# shows where the modes 1,2,... are
for mode_index in range(8):
axvline(mode_index*pi/a, color='#888888', linestyle='--')
xticks(arange(0,8)*pi/a, ('0', '$\pi/a$', '$2\pi/a$', '$3\pi/a$', '$4\pi/a$', '$5\pi/a$', '$6\pi/a$', '$7\pi/a$'), size=16)
ax.set_xlabel('$k_y$', size=16)
Explanation: Finding the mode content using the Fast Fourier Transform
Another solution could be to deduce the mode content from a Fourier analysis of the electric field.
We recall that the total electric field measured on a row $\ell={1,3}$ is :
$$
E_{tot}(x,z_\ell) = \sum_{m=1}^M E_m \sin\left( \frac{m\pi}{a} x\right) e^{-j\beta_m z} \;\;, x \in [0,a]
$$
where $E_m$ is complex valued, ie. $E_m=A_m e^{j\phi_m}$. Our goal is to deduce these coefficients $E_m$, from the measurements $E_{tot}$.
The Fourier transform of the field is defined by:
$$
\tilde{E}_{tot} =
\frac{1}{2\pi}
\iint E_{tot}(x,z) e^{j k_x x + j k_z z} \,dx \, dz
$$
which leads to:
$$
\tilde{E}_{tot}(k_x,k_z) =
\frac{1}{2\pi}
\sum_{m=1}^M E_m
\int_z \int_{x=0}^a
\sin\left( \frac{m\pi}{a} x\right) e^{j k_x x + j (k_z-\beta_m) z} \,dx \, dz
$$
$$
\tilde{E}_{tot}(k_x,k_z) =
\frac{1}{2\pi}
\sum_{m=1}^M E_m
\int_z e^{j (k_z-\beta_m) z} \, dz
\cdot
\int_{x=0}^a \sin\left( \frac{m\pi}{a} x\right) e^{j k_x x} \,dx
$$
So,
$$
\tilde{E}_{tot}(k_x,k_z) =
\sum_{m=1}^M E_m
\delta(k_z-\beta_m)
\frac{\frac{m\pi}{a}}{k_x^2 - \left(\frac{m\pi}{a}\right)^2 }
\left(
(-1)^m e^{j k_x a} -1
\right)
$$
where we used the following results:
$$
\int_{x=0}^a\sin\left( \frac{m\pi}{a} x\right) e^{j k_x x} \,dx
=
\frac{\frac{m\pi}{a}}{k_x^2 - \left(\frac{m\pi}{a}\right)^2 }
\left(
(-1)^m e^{j k_x a} -1
\right)
$$
and:
$$
\int_z e^{j (k_z-\beta_m) z} \, dz
=
2\pi \delta(k_z-\beta_m)
$$
Need a proof? Look at the end of this notebook.
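A quick numerical sanity check of that closed form (an illustrative sketch, not part of the original notebook): compare direct quadrature of the integrand with the analytical expression for one arbitrary choice of (m, k_x).
import numpy as np
from scipy.integrate import quad
a, m, kx = 192e-3, 3, 30.0   # arbitrary test values
num = quad(lambda x: np.sin(m*np.pi/a*x)*np.cos(kx*x), 0, a)[0] \
      + 1j*quad(lambda x: np.sin(m*np.pi/a*x)*np.sin(kx*x), 0, a)[0]
ana = (m*np.pi/a)/(kx**2 - (m*np.pi/a)**2) * ((-1)**m*np.exp(1j*kx*a) - 1)
print(num, ana)              # both values should agree to numerical precision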
The latter formula can be simplified for even or odd $m$ values, but it can also be implemented directly:
End of explanation
# index of the measurement row (0, 1 or 2)
# TODO : find a way to use all the rows ?
index_row = 0
x_mes2 = x_mes
E_mes2 = Emeas[:,index_row]
# interpolating the measurements in order to have smoother initial values
from scipy.interpolate import InterpolatedUnivariateSpline
ius_re = InterpolatedUnivariateSpline(x_mes2, real(E_mes2))
ius_im = InterpolatedUnivariateSpline(x_mes2, imag(E_mes2))
x_mes3 = linspace(x_mes2[0], x_mes2[-1], 101)
E_mes3 = ius_re(x_mes3) + 1j*ius_im(x_mes3)
# Padding the measurements by replicating the data,
# in order to have a better fourier domain precision
# (the larger spatial wideband, the better the precision)
# The trick is to replicate correctly, taking into account the symmetry of the field
x_mes4 = np.pad(x_mes3, (2**13,), 'reflect', reflect_type='odd')
E_mes4 = np.pad(E_mes3, (2**13,), 'reflect', reflect_type='odd')
fig, ax = plt.subplots(2,1)
ax[0].plot(x_mes4, real(E_mes4), lw=2) # replicated interpolated data
ax[0].plot(x_mes3, real(E_mes3), lw=2) # interpolated data
ax[0].plot(x_mes, real(Emeas[:,index_row]), 'x', ms=8) # initial data
ax[0].set_xlim(-3*a, 4*a)
ax[0].axvline(0, color='#999999')
ax[0].axvline(a, color='#999999')
ax[0].set_ylabel('real E')
ax[1].plot(x_mes4, imag(E_mes4), lw=2) # replicated interpolated data
ax[1].plot(x_mes3, imag(E_mes3), lw=2) # interpolated data
ax[1].plot(x_mes, imag(Emeas[:,index_row]), 'x', ms=8)
ax[1].set_xlim(-3*a, 4*a)
ax[1].axvline(0, color='#999999')
ax[1].axvline(a, color='#999999')
ax[1].set_xlabel('x [m]')
ax[1].set_ylabel('imag E')
from numpy.fft import fft, fftshift, fftfreq
# Calcul the numerical spectrum
ky_num = 2*pi*fftshift(fftfreq(len(x_mes4), d=x_mes4[1]-x_mes4[0]))
Etilde_num = fftshift(fft(E_mes4))
fig,ax=subplots()
ax.plot(ky_num, abs(Etilde_num)/max(abs(Etilde_num)), lw=2, color='r')
ax.set_xlim(0, 1.5)
# shows where the modes 1,2,... are
for mode_index in range(8):
ax.axvline(mode_index*pi/a, color='#888888', linestyle='--')
ax.set_xticks(np.arange(0,8)*pi/a)
ax.set_xticklabels([0] + [f'${m}\pi/a$' for m in range(1,8)])
ax.set_xlabel('$k_y$', size=16)
# TODO : calculates the relative height of the peak to deduce the mode % content
Explanation: Clearly, there is something strange with the analytical spectrum! This is in fact normal, since the initial spatial extent of the field (from $x\in [0,a]$) is not large enough, which leads to a reduced precision in the spectral dimension. So the solution would be to consider that the field is not terminated at the boundaries $x=0$ and $x=a$ and instead to consider it as infinite (indefinite Fourier integral):
$$
\tilde{E}_{tot}(k_x,k_z) =
\frac{1}{2\pi}
\sum_{m=1}^M E_m
\int_z e^{j (k_z-\beta_m) z} \, dz
\cdot
\int_{x=-\infty}^{+\infty} \sin\left( \frac{m\pi}{a} x\right) e^{j k_x x} \,dx
$$
Now we calculate the spectrum of the field from the spatial samples, using the Fast Fourier Transform:
End of explanation
def interpolate_measurements(row_index, num_points=501):
# Add two points, at x=0 and x=a, to the measurements.
# They are the edges of the waveguides, thus the field is zero here.
x = x_mes
E = Emeas[:,row_index]
# interpolating the measurements in order to have smoother initial values
from scipy.interpolate import InterpolatedUnivariateSpline
ius_re = InterpolatedUnivariateSpline(x, real(E))
ius_im = InterpolatedUnivariateSpline(x, imag(E))
x2 = linspace(x[0], x[-1], num_points)
E2 = ius_re(x2) + 1j*ius_im(x2)
return x2, E2
x2, E2 = interpolate_measurements(1)
plot(x2, real(E2), x2, imag(E2))
xlabel('x [m]')
title('interpolated measurements')
def orthonorm(x, E, number_of_modes=5):
Im = []
for m in arange(1, number_of_modes+1):
integrande = E * sin(m*pi/a*x)
Im.append(2/a*trapz(x, integrande))
return asarray(Im)
colors = ['b','g','r']
for idx_row in [0,1,2]:
x2, E2 = interpolate_measurements(idx_row)
Im = orthonorm(x2, E2)
scatter(arange(1,len(Im)+1), abs(Im), color=colors[idx_row-1])
print(abs(Im)/norm(Im,1)*100)
grid(True)
xlabel('m', size=16)
ylim(0,0.04)
Explanation: Using this technique, we clearly see the TE30 mode is the dominant one.
Using orthonormalization properties
In this section we use the fact that the waveguide modes form a complete basis. The mode basis is orthogonal, such that:
$$
\int_0^a
\sin\left( \frac{m \pi}{a} x \right)
\sin\left( \frac{n \pi}{a} x \right)
dx =
\left\{
\begin{array}{lr}
a/2 & \mathrm{if} \; m=n \\
0 & \mathrm{if} \; m\neq n
\end{array}
\right.
$$
For a given $z$, we thus multiply the measured data by $\sin(m \pi x/a)$ and integrate over $[0,a]$:
End of explanation
import sympy as s
s.init_printing() # render formula nicely
x_, k_x_ = s.symbols('x k_x', real=True)
a_ = s.symbols('a', positive=True)
m_ = s.symbols('m', integer=True, positive=True)
I = s.integrate( s.sin(m_*s.pi/a_*x_) * s.exp(s.I*k_x_*x_), (x_, 0, a_))
I.simplify()
Explanation: We can see in the figure above that the TE30 mode is dominant, but that some TE10 and other modes are also present: roughly 81 to 85% of the content is TE30, and between 5 and 8% is TE10.
Solving a linear system
This approach is derived from the paper by A. G. Bailey et al., Experimental Determination of Higher Order Mode Conversion in a Multimode Waveguide. Here, the electric field in the waveguide is assumed to be:
$$
E_y(x,z) =
\sum_{m=1}^4
a_m \sin\left(\frac{m\pi}{a}x\right) e^{- j\beta_m z}
+
b_m \sin\left(\frac{m\pi}{a}x\right) e^{+ j\beta_m z}
$$
Since measurements give both real and imaginary parts of the previous expression, in the form of magnitude $A(x,z)$ and phase $\gamma(x,z)$, one has:
$$
A(x,z) \cos\gamma(x,z) =
\sum_{m=1}^4
|a_m|
\sin\left(\frac{m\pi}{a}x\right)
\cos\left(\theta_m - \beta_m z\right)
+
|b_m|
\sin\left(\frac{m\pi}{a}x\right)
\cos\left(\theta_m - \beta_m z\right)
$$
and
$$
A(x,z) \sin\gamma(x,z) =
\sum_{m=1}^4
|a_m| \sin\left(\frac{m\pi}{a}x\right) \sin\left(\theta_m - \beta_m z\right)
+
|b_m| \sin\left(\frac{m\pi}{a}x\right) \sin\left(\theta_m - \beta_m z\right)
$$
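A possible least-squares implementation of this idea (a sketch, not present in the original notebook; it reuses the x_mes, z_mes and EEmeas arrays built in the least-squares section above, and the same beta_n convention as the Ey function): build one column per forward and per backward mode and solve jointly for the complex a_m and b_m amplitudes.
import numpy as np
from scipy.constants import c
def mode_column(m, x, z, wg_a=192e-3, f=3.7e9, backward=False):
    k0 = 2*np.pi*f/c
    arg = k0**2 - (m*np.pi/wg_a)**2
    beta = np.sqrt(arg) if arg >= 0 else -1j*np.sqrt(-arg)   # evanescent modes decay
    sign = +1 if backward else -1
    return np.sin(m*np.pi/wg_a*x) * np.exp(sign*1j*beta*z)
XXb = np.tile(x_mes, len(z_mes))                 # same flattening as in the least-squares section
ZZb = np.repeat(np.asarray(z_mes), len(x_mes))
cols = [mode_column(m, XXb, ZZb) for m in range(1, 5)]
cols += [mode_column(m, XXb, ZZb, backward=True) for m in range(1, 5)]
PHI = np.array(cols).T
ab, *_ = np.linalg.lstsq(PHI, EEmeas, rcond=None)
print(np.abs(ab) / np.abs(ab).sum() * 100)       # relative forward/backward mode content, in percent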
Appendix
Fourier transform integral calculation
Here, using SymPy we want to calculate the integral:
$$
\int_{x=0}^a\sin\left( \frac{m\pi}{a} x\right) e^{j k_x x} \,dx
$$
End of explanation
# Unfortunately, SymPy does not know how to compute the FT of a sine:
s.fourier_transform(s.sin(m_*pi/a_*x_),x_, k_x_, noconds=True)
Explanation: The latter can be expressed as:
$$
\frac{\frac{m\pi}{a}}{k_x^2 - \left(\frac{m\pi}{a}\right)^2 }
\left(
(-1)^m e^{j k_x a} -1
\right)
$$
In the case of an integral over the entire real line ($x\in\mathbb{R}$):
End of explanation
# This function calculates the FFT of the field
# and the corresponding wavenumber axis.
# This function is not used in this notebook, and just given here for example.
# (The wavenumber axis can be constructed instead with the fftfreq function)
def calculate_spectrum(x, E, f=3.7e9):
k0 = 2*pi*f/c
lambda0 = c/f
# fourier domain points
B = 2**18
Efft = np.fft.fftshift(np.fft.fft(E,B))
# fourier domain bins
dx = x[1] - x[0] # assumes spatial period is constant
df = 1/(B*dx)
K = arange(-B/2,+B/2)
# spatial frequency bins
Fz= K*df
# spatial wavenumber kx
kx= (2*pi)*Fz
# "power density" spectrum
p = (dx)**2/lambda0 * Efft;
return(kx,p)
Explanation: And we recall the definition of the Dirac Delta function:
$$
\int_z e^{j (k_z-\beta_m) z} \, dz
=
2\pi \delta(k_z-\beta_m)
$$
Numerical Fourier Transform - option 2
End of explanation
from IPython.core.display import HTML
def css_styling():
styles = open("../../styles/custom.css", "r").read()
return HTML(styles)
css_styling()
Explanation: CSS Styling
End of explanation |
12,243 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Spring 2017 Data Bootcamp Final Project by Colleen Jin dj928, Yingying Chen yc1875
Analysis On Relation Between News Sentiment And Market Portfolio
In this project, we use two sets of data to draw insights on how media sentiment can be an indicator for the financial sector. For the financial data, we plan to use daily return of the market index <font color='green'>(^GSPC)</font>, which is a good indicator for market fluctuation; for media sentiment, we use summarized information of news pieces from top 10 most popular press because of their stronger influence in shaping people's perception of events that are happening in the world.
Both sets of data are real-time, which means the source files are of the moment and need to be loaded each time the analysis is performed. The sentiment analysis library returns a <font color='green'>polarity</font> score (-1.0 to 1.0) and a <font color='green'>subjectivity</font> score (0.0 to 1.0) for the news stories. Using quantified sentiment analysis, we juxtapose the two time series and observe whether they present any correlation, and search for potential causality. For example, we may test the hypothesis that when polarity among the daily news posts is higher (i.e., more positive), the financial market that same day is more likely to rise. The rest of the notebook is a step-by-step walkthrough.
Modules used in this notebook
Step1: PART 1
Step2: Some values may be missing in the <font color='green'>article</font> column. For example, if there is no information for the key <font color='green'>author</font> of news pieces from BBC, it will show <font color='green'>None</font> where the <font color='green'>author</font> information should have been. Therefore, we need to convert <font color='green'>Nonetype</font> entries to string type, because the <font color='green'>.append()</font> method for a <font color='green'>list</font> cannot pass values of <font color='green'>Nonetype</font>. We will use the <font color='green'>.append()</font> method later for displaying sentiment analysis results.
Step3: Contents of the column named <font color='green'>articles</font> are of <font color='green'>dict</font> type; each row contains information including <font color='green'>author</font>, <font color='green'>title</font>, <font color='green'>description</font>, <font color='green'>url</font>, <font color='green'>urlToImage</font> and <font color='green'>publishedAt</font>, among which <font color='green'>title</font> is selected for main analysis.
Step4: The <font color='green'>tags</font> method performs part-of-speech tagging (for example, <font color='green'>NNP</font> stands for a singular proper noun).
Step5: A loop prints all the news titles, which are later used for sentiment analysis.
Step6: All descriptions for the 100 news posts are printed in the same way as above; their presence is useful for adding accuracy for our sentiment analysis by providing more words on the same topic as the titles.
Step7: PART 2
Step8: PART 3
Step9: From the TextBlob module, the <font color='green'>.sentiment</font> method returns results in the form of <font color='green'>namedtuples</font>. Elements in <font color='green'>namedtuples</font> can only be printed after being appended into the form of a <font color='green'>list</font>. Therefore, we use a <font color='green'>list</font> named <font color='green'>tests_title</font> to store all the results from our sentiment tests on the news titles.
Step10: We create a list named <font color='green'>list_polarity_title</font> to store polarity scores for news titles.
Step11: Similarly, we create a list of subjectivity scores for news titles.
Step12: 'description'
We use the <font color='green'>.sentiment</font> method again to calculate the <font color='green'>polarity</font> and <font color='green'>subjectivity</font> of each <font color='green'>description</font>. As mentioned above, analysing the descriptions makes the final results more versatile and hopefully more accurate.
Step13: We create a list of polarity scores for news descriptions by appending each polarity score to the list named <font color='green'>list_polarity_description</font>.
Step14: Same as above, we create a list of subjectivity for news descriptions.
Step15: Now we have four lists of data
Step16: We transpose the dataframe to make it compatible with the .plot() method.
Step17: -Analysis by news press
Apparently, the 100 news posts standing alone don't convey much information. For a better perspective, we need to group the scores by the press they belong to, under the assumption that posts from the same press are much more likely to embed a uniform tone. We create a list named <font color='green'>new_T_polarity</font> to store the sum of the polarity scores of news titles for each press. Then we do the same operation on the subjectivity scores.
Step18: Graph for scores by news press
Step19: -Analysis by date
We have loaded news titles and descriptions over 2 weeks and stored them in a csv file called all_news.csv. We then calculated an average news polarity score for each day. We then graph the news polarity score to see how it has changed over time.
Step20: Graph for scores by date
Step21: Part 4
Step22: We create a .csv file called yahoo.csv to store the financial data upon each import.
Step23: PART 5 CORRELATION BETWEEN NEWS POLARITY AND S&P 500
Step24: Estimate correlation between polarity scores and S&P500 index
Step25: A parametic estimation for Yahoo daily return by news polarity
Step26: A non-parametic estimation for Yahoo daily return by news polarity | Python Code:
%matplotlib inline
# import necessary packages
import pandas as pd
import matplotlib.pyplot as plt
from pandas_datareader import data
from datetime import datetime
import numpy as np
from textblob import TextBlob
import csv
from wordcloud import WordCloud,ImageColorGenerator
#from scipy.misc import imread
import string
Explanation: Spring 2017 Data Bootcamp Final Project by Colleen Jin dj928, Yingying Chen yc1875
Analysis On Relation Between News Sentiment And Market Portfolio
In this project, we use two sets of data to draw insights on how media sentiment can be an indicator for the financial sector. For the financial data, we plan to use daily return of the market index <font color='green'>(^GSPC)</font>, which is a good indicator for market fluctuation; for media sentiment, we use summarized information of news pieces from top 10 most popular press because of their stronger influence in shaping people's perception of events that are happening in the world.
Both sets of data are real-time, which means the source files are of the moment and need to be loaded each time the analysis is performed. The sentiment analysis library returns a <font color='green'>polarity</font> score (-1.0 to 1.0) and a <font color='green'>subjectivity</font> score (0.0 to 1.0) for the news stories. Using quantified sentiment analysis, we juxtapose the two time series and observe whether they present any correlation, and search for potential causality. For example, we may test the hypothesis that when polarity among the daily news posts is higher (i.e., more positive), the financial market that same day is more likely to rise. The rest of the notebook is a step-by-step walkthrough.
Modules used in this notebook:
TextBlob: its library provides an API for common natural language processing <font color='green'>(NLP)</font> tasks, including part-of-speech tagging, noun phrase extraction, sentiment analysis, classification, translation, etc.
Non-Parametric Regression: a category of regression analysis in which the predictor does not take a predetermined form but is constructed according to information derived from the data.
WordCloud
Data sources:
News API: We use a news api provided by NewsAPI.org to load real-time news headlines (in the form of JSON metadata), then apply methods mainly from Python's TextBlob module to conduct sentiment analysis. We seleced 10 publish houses by their popularity (please see the ranking of news press here).
S&P 500 index open and closing price derived from Yahoo Finance.
End of explanation
cnn = pd.read_json('https://newsapi.org/v1/articles?source=cnn&sortBy=top&apiKey=bdc0623102e94a7586137f02a51e0518')
nyt= pd.read_json('https://newsapi.org/v1/articles?source=the-new-york-times&sortBy=top&apiKey=bdc0623102e94a7586137f02a51e0518')
wsp=pd.read_json('https://newsapi.org/v1/articles?source=the-washington-post&sortBy=top&apiKey=bdc0623102e94a7586137f02a51e0518')
bbc=pd.read_json("https://newsapi.org/v1/articles?source=bbc-news&sortBy=top&apiKey=bdc0623102e94a7586137f02a51e0518")
abc=pd.read_json("https://newsapi.org/v1/articles?source=abc-news-au&sortBy=top&apiKey=bdc0623102e94a7586137f02a51e0518")
#google = pd.read_json(" https://newsapi.org/v1/articles?source=google-news&sortBy=top&apiKey=bdc0623102e94a7586137f02a51e0518")
ft = pd.read_json("https://newsapi.org/v1/articles?source=financial-times&sortBy=top&apiKey=bdc0623102e94a7586137f02a51e0518")
bloomberg = pd.read_json("https://newsapi.org/v1/articles?source=bloomberg&sortBy=top&apiKey=bdc0623102e94a7586137f02a51e0518")
economist = pd.read_json("https://newsapi.org/v1/articles?source=the-economist&sortBy=top&apiKey=bdc0623102e94a7586137f02a51e0518")
wsj = pd.read_json("https://newsapi.org/v1/articles?source=the-wall-street-journal&sortBy=top&apiKey=bdc0623102e94a7586137f02a51e0518")
total = [wsj, cnn, nyt, wsp, bbc, abc, ft, bloomberg, economist]
total1 = pd.concat(total, ignore_index=True)
total1
Explanation: PART 1: NEWS COLLECTION - pd.read_json()
We use <font color='green'>pd.read_json()</font> to import real-time news information (top 10 posts from each publisher). These news items are stored separately as dataframes and combined into one collective dataframe. (News API powered by NewsAPI.org)**
The news press consists of
* CNN,
* The New York Times,
* Washington Post,
* BBC News,
* ABC News,
* Financial Times,
* Bloomberg.
End of explanation
k = 0
while k < len(total1):
if total1['articles'][k]['description'] is None:
total1['articles'][k]['description'] = 'None'
k += 1
j = 0
while j < len(total1):
print(type(total1['articles'][j]['description']))
j += 1
# now all entries are of type string, regardless whether there is real contents.
l = 0
while l < len(total1):
if total1['articles'][l]['title'] is None:
total1['articles'][l]['title'] = 'None'
l += 1
p = 0
while p < len(total1):
print(type(total1['articles'][p]['title']))
p += 1
# now all entries are of type string, regardless whether there is real contents.
Explanation: Some values may be missing in the <font color='green'>article</font> column. For example, if there is no information for the key <font color='green'>author</font> of news pieces from BBC, it will show <font color='green'>None</font> where the <font color='green'>author</font> information should have been. Therefore, we need to convert <font color='green'>Nonetype</font> entries to string type, because the <font color='green'>.append()</font> method for a <font color='green'>list</font> cannot pass values of <font color='green'>Nonetype</font>. We will use the <font color='green'>.append()</font> method later for displaying sentiment analysis results.
End of explanation
# write the news posts into a new .csv file
n_rows = len(total1.index)
articles = total1['articles']
result = csv.writer(open('result.csv','a'))
result.writerow(['PublishedAt','Title','description'])
for i in range(0,n_rows):
line = [articles[i]['publishedAt'],articles[i]['title'],articles[i]['description']]
result.writerow(line)
# print the first item in the 'articles' series as an example.
articles[0]
# type of each entry in the 'articles' column is 'dict'
type(articles[0])
# keys of the 'dict' variables are 'author', 'publishedAt', 'urlToImage', 'description', 'title', 'url'
articles[0].keys()
Explanation: Contents of the column named <font color='green'>articles</font> are of <font color='green'>dict</font> type; each row contains information including <font color='green'>author</font>, <font color='green'>title</font>, <font color='green'>description</font>, <font color='green'>url</font>, <font color='green'>urlToImage</font> and <font color='green'>publishedAt</font>, among which <font color='green'>title</font> is selected for main analysis.
End of explanation
blob = TextBlob(str(articles[0]['title']))
blob.tags
Explanation: The <font color='green'>tags</font> method performs part-of-speech tagging (for example, <font color='green'>NNP</font> stands for a singular proper noun).
End of explanation
i = 0
while i < n_rows:
blob = TextBlob(articles[i]['title'])
print(1 + i, ". ", blob, sep = "")
i += 1
Explanation: A loop prints all the news titles, which are later used for sentiment analysis.
End of explanation
j = 0
while j < n_rows:
blob1 = TextBlob(str(articles[j]['description']))
print(1 + j, ". ", blob1, sep = "")
j += 1
Explanation: All descriptions for the 100 news posts are printed in the same way as above; their presence is useful for adding accuracy for our sentiment analysis by providing more words on the same topic as the titles.
End of explanation
#write the csv file into a txt file called entire_text.txt
contents = csv.reader(open('result.csv','r'))
texts = open('entire_text.txt','w')
list_of_text = []
for row in contents:
line = row[2].encode('utf-8')
line = str(line.decode())
list_of_text.append(line)
texts.writelines(list_of_text)
text=open("entire_text.txt",'r')
text=text.read()
wordcloud = WordCloud().generate(text)
#display the generated image
plt.imshow(wordcloud, interpolation='bilinear')
plt.axis("off")
# increase max_font_size and change backgroud color to white
wordcloud = WordCloud(max_font_size=40).generate(text)
wordcloud = WordCloud(max_words=200,background_color='white',max_font_size=100).generate(text)
plt.figure()
plt.imshow(wordcloud, interpolation="bilinear")
plt.axis("off")
plt.show()
Explanation: PART 2: WORD CLOUD
A word cloud of news titles can provide us with a direct and vivid impression of the most frequently discussed topics in today's news reports. The topic/person/event that prevails among the top news pieces appears in the largest font, occupies the center space and displays the most salient colors.
In a visually pleasant way, a word cloud gives us a hint for the news sentiment of the day.
Code referred to https://github.com/amueller/word_cloud/blob/master/examples/simple.py
End of explanation
# a loop to show sentiment analysis results of the 100 titles
n = 0
while n < n_rows:
print(TextBlob(articles[n]['title']).sentiment)
n += 1
Explanation: PART 3: SENTIMENT ANALYSIS
We use the <font color='green'>.sentiment</font> method from <font color='green'>TextBlob</font> to calculate the polarity and subjectivity of each <font color='green'>title</font>.
The <font color='green'>sentiment</font> property returns an output in the form of <font color='green'>namedtuple</font> (Sentiment(polarity, subjectivity)). The polarity score is a float within the range [-1.0, 1.0]. The subjectivity is a float within the range [0.0, 1.0] where 0.0 is very objective and 1.0 is very subjective.
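For instance, on a single arbitrary sentence (illustrative only, not one of the loaded headlines), the two fields can be read directly off the returned namedtuple:
from textblob import TextBlob
example = TextBlob("The market rallied strongly after the upbeat earnings report.")
print(example.sentiment)                                          # Sentiment(polarity=..., subjectivity=...)
print(example.sentiment.polarity, example.sentiment.subjectivity)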
End of explanation
N = 0
tests_title = []
while N < n_rows:
tests_title.append(TextBlob(articles[N]['title']).sentiment)
N += 1
Explanation: From the TextBlob module, the <font color='green'>.sentiment</font> method returns results in the form of <font color='green'>namedtuples</font>. Elements in <font color='green'>namedtuples</font> can only be printed after being appended into the form of a <font color='green'>list</font>. Therefore, we use a <font color='green'>list</font> named <font color='green'>tests_title</font> to store all the results from our sentiment tests on the news titles.
End of explanation
list_polarity_title = [] # this list contains all titles polarity scores.
for test in tests_title:
list_polarity_title.append(test.polarity)
Explanation: We create a list named <font color='green'>list_polarity_title</font> to store polarity scores for news titles.
End of explanation
list_subjectivity_title = [] # this list contains all titles subjectivity scores.
for test in tests_title:
list_subjectivity_title.append(test.subjectivity)
Explanation: Similarly, we create a list of subjectivity scores for news titles.
End of explanation
m = 0
while m < n_rows:
print(TextBlob(articles[m]['description']).sentiment)
m += 1
M = 0
tests_description = []
while M < n_rows:
tests_description.append(TextBlob(articles[M]['description']).sentiment)
M += 1
Explanation: 'description'
We use the <font color='green'>.sentiment</font> method again to calculate the <font color='green'>polarity</font> and <font color='green'>subjectivity</font> of each <font color='green'>description</font>. As mentioned above, analysing the descriptions makes the final results more versatile and hopefully more accurate.
End of explanation
list_polarity_description = [] # this list contains all descriptions' polarity scores.
for test in tests_description:
list_polarity_description.append(test.polarity)
Explanation: We create a list of polarity scores for news descriptions by appending each polarity score to the list named <font color='green'>list_polarity_description</font>.
End of explanation
list_subjectivity_description = [] # this list contains all descriptions' subjectivity scores.
for test in tests_description:
list_subjectivity_description.append(test.subjectivity)
Explanation: Same as above, we create a list of subjectivity for news descriptions.
End of explanation
total_score = [list_polarity_title, list_subjectivity_title, list_polarity_description, list_subjectivity_description]
labels = ['T_polarity', 'T_subjectivity', 'D_polarity', 'D_subjectivity']
df = pd.DataFrame.from_records(total_score, index = labels)
df
Explanation: Now we have four lists of data:
1. list_polarity_title
2. list_subjectivity_title
3. list_polarity_description
4. list_subjectivity_description
We convert the four lists of data into one dataframe for drawing plots.
End of explanation
df = df.transpose()
df
# this plot shows scores for all 100 news posts.
df.plot()
Explanation: We transpose the dataframe to make it compatible with the .plot() method.
End of explanation
c_T_polarity = df['T_polarity']
new_T_polarity = []
B = 0
C = 0
while B < n_rows:
sum = 0
while C < B + 10:
sum += c_T_polarity[C]
C += 1
new_T_polarity.append(sum)
B += 10
new_T_polarity
# The presses are in order: wsj, cnn, nyt, wsp, bbc, abc, ft, bloomberg and economist.
c_T_subjectivity = df['T_subjectivity']
new_T_subjectivity = []
D = 0
E = 0
while D < n_rows:
sum = 0
while E < D + 10:
sum += c_T_subjectivity[E]
E += 1
new_T_subjectivity.append(sum)
D += 10
new_T_subjectivity
c_D_polarity = df['D_polarity']
new_D_polarity = []
F = 0
G = 0
while F < n_rows:
sum = 0
while G < F + 10:
sum += c_D_polarity[G]
G += 1
new_D_polarity.append(sum)
F += 10
new_D_polarity
c_D_subjectivity = df['D_subjectivity']
new_D_subjectivity = []
H = 0
I = 0
while H < n_rows:
sum = 0
while I < H + 10:
sum += c_D_subjectivity[I]
I += 1
new_D_subjectivity.append(sum)
H += 10
new_D_subjectivity
total_score_bypublishhouse = [new_T_polarity, new_T_subjectivity, new_D_polarity, new_D_subjectivity]
df1 = pd.DataFrame.from_records(total_score_bypublishhouse, index = labels)
df1
# change the column labels to press house.
new_columns = ['wsj', 'cnn', 'nyt', 'wsp', 'guardian', 'abc', 'ft', 'bloomberg', 'economist']
df1.columns = new_columns
df1
Explanation: -Analysis by news press
Apparently, the 100 news posts standing alone don't convey much information. For a better perspective, we need to group the scores by the press they belong to, under the assumption that posts from the same press are much more likely to embed a uniform tone. We create a list named <font color='green'>new_T_polarity</font> to store the sum of the polarity scores of news titles for each press. Then we do the same operation on the subjectivity scores.
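The code below does this grouping with explicit while loops over chunks of 10 rows. An equivalent, more idiomatic pandas alternative (a sketch that assumes, as in this notebook, 10 consecutive rows per press) would be:
press_labels = np.repeat(['wsj', 'cnn', 'nyt', 'wsp', 'bbc', 'abc', 'ft', 'bloomberg', 'economist'], 10)
df_by_press = df.groupby(press_labels[:len(df)], sort=False).sum()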
End of explanation
#colors = [(x/10.0, x/20.0, 0.75) for x in range(n_rows)]
df1.plot(kind = 'bar', legend = True, figsize = (15, 2), colormap='Paired', grid = True)
# place the legend above the subplot and use all the expended width.
plt.legend(bbox_to_anchor=(0., 1.02, 1., .102), loc=3,
ncol=10, mode="expand", borderaxespad=0.)
bar_color = 'orange'
row = df1.iloc[0]
row.plot(kind = 'bar', title = "Polarity for news titles by news press", color = bar_color, grid = True)
Explanation: Graph for scores by news press
End of explanation
contents = csv.reader(open('all_news.csv','r', encoding = "ISO-8859-1"))
result = csv.writer(open('entire_result.csv','w'))
result.writerow(['Date','polarity'])
for row in contents:
comment = row[2]
blob = TextBlob(comment)
polarity = blob.sentiment.polarity
line = [row[0],polarity]
result.writerow(line)
data = pd.read_csv('entire_result.csv')
data
#group the data by date
data=data.groupby('Date', as_index=False)['polarity'].mean()
#convert column "Date" to a date data type
data['Date'] = pd.to_datetime(data['Date'])
#sort the data by date ascending
data=data.sort_values(by="Date", axis=0, ascending=True, inplace=False, kind='quicksort')
data
Explanation: -Analysis by date
We have loaded news titles and descriptions over 2 weeks and stored them in a csv file called all_news.csv. We then calculated an average news polarity score for each day. We then graph the news polarity score to see how it has changed over time.
End of explanation
data.plot(x='Date', kind='bar', title='Polarity for news titles by date', grid=True, color='orange')
Explanation: Graph for scores by date
End of explanation
from yahoo_finance import Share
# '^GSPC' is the market symble for S&P 500 Index
yahoo = Share('^GSPC')
print(yahoo.get_open())
print(yahoo.get_price())
print(yahoo.get_trade_datetime())
from pprint import pprint
pprint(yahoo.get_historical('2017-04-09', '2017-05-09'))
Explanation: Part 4: S&P 500 INDEX
Using the <font color='green'>yahoo_finance</font> module in Python, we will eventually compare the sentiment analysis of the news posts with the movement of the market index.
End of explanation
from yahoo_finance import Share
yahoo = Share('^GSPC')
dataset = yahoo.get_historical('2017-04-27','2017-05-09')
result = csv.writer(open('yahoo.csv','w'))
result.writerow(['Date','Low','High'])
for i in range(0,len(dataset)):
line = [dataset[i]['Date'],dataset[i]['Low'],dataset[i]['High']]
result.writerow(line)
yahoo = pd.read_csv('yahoo.csv')
yahoo
#convert column "Date" to a date data type
yahoo['Date'] = pd.to_datetime(yahoo['Date'])
#sort the data by date ascending
yahoo=yahoo.sort_values(by="Date", axis=0, ascending=True, inplace=False, kind='quicksort')
yahoo
type(data['Date'])
type(yahoo['Date'])
Explanation: We create a .csv file called yahoo.csv to store the financial data upon each import.
End of explanation
#join yahoo and data together on "Date"
result = pd.merge(data, yahoo,on='Date')
result
result_len = len(result)
yahoo.plot(x="Date",figsize=(6, 2),title='Yahoo Finance')
data.plot(x='Date',figsize=(6, 2),title='News Title Polarity')
Explanation: PART 5 CORRELATION BETWEEN NEWS POLARITY AND S&P 500
End of explanation
import numpy
low=result['Low']
high=result['High']
polarity=result['polarity']
numpy.corrcoef(low, polarity)
#from the data we have, we can conclude that news polarity and S&P500 index are positively correlated
numpy.corrcoef(high, polarity)
numpy.corrcoef(high, low)
#a scatterplot for news polarity and Yahoo daily return of the market index
result.plot.scatter(x="polarity", y="Low")
Explanation: Estimate correlation between polarity scores and S&P500 index
End of explanation
#a parametic estimation for Yahoo daily return by news polarity
import seaborn as sns
#lmplot plots the data with the fitted regression line through it.
sns.lmplot(x="polarity", y="Low", data=result, ci=95)  # ci is the size of the confidence interval, in percent
Explanation: A parametric estimation for Yahoo daily return by news polarity
End of explanation
import pyqt_fit.nonparam_regression as smooth
from pyqt_fit import npr_methods
k0 = smooth.NonParamRegression(polarity, low, method=npr_methods.SpatialAverage())
k0.fit()
grid = np.r_[-0.05:0.05:0.01]
plt.plot(grid, k0(grid), label="Spatial Averaging", linewidth=2)
plt.legend(loc='best')
Explanation: A non-parametric estimation for Yahoo daily return by news polarity
End of explanation |
12,244 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Simulation Progress
Christian Kongsgaard
RIBuild Meeting 19-09-2019
Simulated Projects
Step1: Simulation Time Stats
Step2: Compute Resources
500 - 1000 cores available
250 - 500 Delphin Jobs running in parallel
Actual Average Simulation Time
Number of simulated projects divided by time since the first simulation finished
Step3: Estimated Number of Simulations 01-01-2020
Step4: Convergence Plots
Step5: Design 1d_exterior_CalciumSilicateBoard_592_125_705_50_SD001
Insulation System
Step6: What is missing?
1) Computing the mould at the interior interface.
Currently, medium-resistant properties are used at the interface.
2) Surface Temperatures should we compute them? | Python Code:
current_projects = get_simulated_projects_count()
print(f'There are currently {current_projects} simulated Delphin projects in the database')
Explanation: Simulation Progress
Christian Kongsgaard
RIBuild Meeting 19-09-2019
Simulated Projects
End of explanation
times = get_simulation_time()
for key in times.keys():
if key == 'gt_250':
print(f'\tLonger simulation time than 250min: {times[key]}')
else:
print(f'\t{key.upper()}:\t{times[key]:.2f} min')
get_simulation_time_plot()
print(f'\tMEDIAN: {times["median"]:.2f} min\t\tMEAN: {times["mean"]:.2f} min')
Explanation: Simulation Time Stats
End of explanation
actual_avg = get_actual_average_simulation_time()
print(f'{actual_avg:.2f} projects/min')
Explanation: Compute Resources
500 - 1000 cores available
250 - 500 Delphin Jobs running in parallel
Actual Average Simulation Time
Number of simulated projects divided by time since the first simulation finished
End of explanation
import datetime
minutes_left = (datetime.datetime(year=2020, month=1, day=1) - datetime.datetime.now()).total_seconds()/60
projects_left = actual_avg * minutes_left
estimated_projects = current_projects + projects_left
print(f'Current Simulated Projects:\t\t{current_projects}\nEstimated Projects at 01-01-2020:\t{int(estimated_projects)}')
Explanation: Estimated Number of Simulations 01-01-2020
End of explanation
stats = get_convergence_mould()
print(f'Iteration:\t35\t36\t37')
for key in stats.keys():
values = "\t".join([f'{item:.3f}' for item in stats[key][-3:]])
print(f'\t{key.upper()}:\t{values}')
estimate_future_convergence(stats, 'mould', 45)
stats = get_convergence_heatloss()
print(f'Iteration:\t35\t36\t37')
for key in stats.keys():
values = "\t".join([f'{item:.3f}' for item in stats[key][-3:]])
print(f'\t{key.upper()}:\t{values}')
estimate_future_convergence(stats, 'heat loss', 45)
get_convergence_heatloss('all')
print(f'The only design with an error above 1 is: 1d_exterior_CalciumSilicateBoard_592_125_705_50_SD001')
Explanation: Convergence Plots
End of explanation
get_mould_cdf()
from delphin_6_automation.database_interactions.db_templates import result_processed_entry
heat_losses = result_processed_entry.ProcessedResult.objects.only('thresholds.heat_loss')
def compute_cdf(results: list, quantity: str):
quantities = [doc.thresholds[quantity] for doc in results if doc.thresholds[quantity] > 0.5 * 10**7]
hist, edges = np.histogram(quantities, density=True, bins=200)
dx = edges[1] - edges[0]
cdf = np.cumsum(hist) * dx
return edges[1:], cdf
def get_heatloss_cdf(results):
#results = result_processed_entry.ProcessedResult.objects.only('thresholds.heat_loss')
x, y = compute_cdf(results, 'heat_loss')
plt.figure(figsize=figure_size)
plt.plot(x, y)
plt.title('Cumulative Distribution Function\nHeat Loss')
plt.xlabel('Wh')
plt.ylabel('Ratio')
plt.xlim(0, 1.25*10**8)
get_heatloss_cdf(heat_losses)
Explanation: Design 1d_exterior_CalciumSilicateBoard_592_125_705_50_SD001
Insulation System:
* Insulation Material: RemmersCalciumsilikatSchimmelsanierplatte2_592.m6
* Plaster Material: GlueMortarForClimateBoard_705.m6
* Finish: KlimaputzMKKQuickmix_125.m6
* Thickness: 50mm
SD Value: 0.01
Result Plots
End of explanation
mongo_setup.global_end_ssh(server)
Explanation: What is missing?
1) Computing the mould at the interior interface.
Currently, medium-resistant properties are used at the interface.
2) Surface temperatures: should we compute them?
End of explanation |
12,245 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Mass Maps From Mass-Luminosity Inference Posterior
In this notebook we start to explore the potential of using a mass-luminosity relation posterior to refine mass maps.
Content
Step2: Probability Functions
Step3: Results
Step4: Turning into Probabilistic Catalogue | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
from matplotlib import rc
rc('text', usetex=True)
from bigmali.grid import Grid
from bigmali.prior import TinkerPrior
from bigmali.hyperparameter import get
import numpy as np
from scipy.stats import lognorm
from numpy.random import normal
#globals that functions rely on
grid = Grid()
prior = TinkerPrior(grid)
a_seed = get()[:-1]
S_seed = get()[-1]
mass_points = prior.fetch(grid.snap(0)).mass[2:-2] # cut edges
tmp = np.loadtxt('/Users/user/Code/PanglossNotebooks/MassLuminosityProject/SummerResearch/mass_mapping.txt')
z_data = tmp[:,0]
lobs_data = tmp[:,1]
mass_data = tmp[:,2]
ra_data = tmp[:,3]
dec_data = tmp[:,4]
sigobs = 0.05
def fast_lognormal(mu, sigma, x):
return (1/(x * sigma * np.sqrt(2 * np.pi))) * np.exp(- 0.5 * (np.log(x) - np.log(mu)) ** 2 / sigma ** 2)
Explanation: Mass Maps From Mass-Luminosity Inference Posterior
In this notebook we start to explore the potential of using a mass-luminosity relation posterior to refine mass maps.
Content:
- Math
- Imports, Constants, Utils, Data
- Probability Functions
- Results
- Discussion
Math
Inferring mass from the mass-luminosity relation posterior ...
\begin{align}
P(M|L_{obs},z,\sigma_L^{obs}) &= \iint P(M|\alpha, S, L_{obs}, z)P(\alpha, S|L_{obs},z,\sigma_L^{obs})\ d\alpha dS\\
&\propto \iiint P(L_{obs}| L,\sigma_L^{obs})P(L|M,\alpha,S,z)P(M|z)P(\alpha, S|L_{obs},z,\sigma_L^{obs})\ dLd\alpha dS\\
&\approx \frac{P(M|z)}{n_{\alpha,S}}\sum_{\alpha,S \sim P(\alpha, S|L_{obs},z,\sigma_L^{obs})}\left( \frac{1}{n_L}\sum_{L\sim P(L|M,\alpha,S,z)}P(L_{obs}|L,\sigma_L^{obs})\right)\\
&= \frac{P(M|z)}{n_{\alpha,S}}\sum_{\alpha,S \sim P(\alpha, S|L_{obs},z,\sigma_L^{obs})}f(M;\alpha,S,z)
\end{align}
Refine for individual halo ...
\begin{align}
P(M_k|L_{obs},z,\sigma_L^{obs}) &= \iint P(M_k|\alpha, S, L_{obs\ k}, z_k)P(\alpha, S|L_{obs},z,\sigma_L^{obs})\ d\alpha dS\\
&\propto \iiint P(L_{obs\ k}| L_k,\sigma_L^{obs})P(L_k|M_k,\alpha,S,z_k)P(M_k|z_k)P(\alpha, S|L_{obs},z,\sigma_L^{obs})\ dLd\alpha dS\\
&\approx \frac{P(M_k|z_k)}{n_{\alpha,S}}\sum_{\alpha,S \sim P(\alpha, S|L_{obs},z,\sigma_L^{obs})}\left( \frac{1}{n_L}\sum_{L\sim P(L_k|M_k,\alpha,S,z_k)}P(L_{obs\ k}|L_k,\sigma_L^{obs})\right)\\
&=\frac{P(M_k|z_k)}{n_{\alpha,S}}\sum_{\alpha,S \sim P(\alpha, S|L_{obs},z,\sigma_L^{obs})}f(M_k;\alpha,S,z_k)
\end{align}
Can also factor it more conventionally for MCMC ...
\begin{align}
\underbrace{P(M_k|L_{obs},z,\sigma_L^{obs})}{posterior}
&\propto \underbrace{P(M_k|z_k)}{prior}\underbrace{\iiint P(L_{obs\ k}| L_k,\sigma_L^{obs})P(L_k|M_k,\alpha,S,z_k)P(\alpha, S|L_{obs},z,\sigma_L^{obs})\ dLd\alpha dS}_{likelihood}\
\end{align}
In the code we have the following naming convention:
- p1 for $P(M|z)$
- p2 for $P(\alpha, S|L_{obs},z,\sigma_L^{obs})$
- p3 for $P(L_k|M_k,\alpha,S,z_k)$
- p4 for $P(L_{obs\ k}|L_k, \sigma^{obs}_L)$
We use the terms eval and samp to help distinguish between evaluating a distribution and sampling from it.
Imports, Constants, Utils, Data
End of explanation
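Before using it below, a quick sanity check (a sketch with arbitrary values) that the fast_lognormal helper above matches scipy's lognorm parameterization:
# both calls should print the same density, since fast_lognormal(mu, sigma, x)
# is the lognormal pdf with shape sigma and scale mu
print(fast_lognormal(2.0, 0.3, 1.5))
print(lognorm(0.3, scale=2.0).pdf(1.5))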
def p1_eval(zk):
return prior.fetch(grid.snap(zk)).prob[2:-2]
def p2_samp(nas=100):
"""a is fixed at the hyperseed; S is drawn from a normal distribution centered at the hyperseed."""
return normal(S_seed, S_seed / 10, size=nas)
def p3_samp(mk, a, S, zk, nl=100):
mu_lum = np.exp(a[0]) * ((mk / a[2]) ** a[1]) * ((1 + zk) ** (a[3]))
return lognorm(S, scale=mu_lum).rvs(nl)
def p4_eval(lobsk, lk, sigobs):
return fast_lognormal(lk, sigobs, lobsk)
def f(a, S, zk, lobsk, nl=100):
ans = []
for mk in mass_points:
tot = 0
for x in p3_samp(mk, a, S, zk, nl):
tot += p4_eval(lobsk, x, sigobs)
ans.append(tot / nl)
return ans
def mass_dist(ind=1, nas=10, nl=100):
lobsk = lobs_data[ind]
zk = z_data[ind]
tot = np.zeros(len(mass_points))
for S in p2_samp(nas):
tot += f(a_seed, S, zk, lobsk, nl)
prop = p1_eval(zk) * tot / nas
return prop / np.trapz(prop, x=mass_points)
Explanation: Probability Functions
End of explanation
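As a quick check (a sketch using small sample sizes to keep it fast), the returned distribution should integrate to roughly 1 over the mass grid, since mass_dist normalizes with np.trapz:
dist_check = mass_dist(ind=1, nas=5, nl=50)
print(np.trapz(dist_check, x=mass_points))  # should be ~1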
plt.subplot(3,3,1)
dist = p1_eval(zk)
plt.plot(mass_points, dist)
plt.gca().set_xscale('log')
plt.gca().set_yscale('log')
plt.ylim([10**-25, 10])
plt.xlim([mass_points.min(), mass_points.max()])
plt.title('Prior')
plt.xlabel(r'Mass $(M_\odot)$')
plt.ylabel('Density')
for ind in range(2,9):
plt.subplot(3,3,ind)
dist = mass_dist(ind)
plt.plot(mass_points, dist, alpha=0.6, linewidth=2)
plt.xlim([mass_points.min(), mass_points.max()])
plt.gca().set_xscale('log')
plt.gca().set_yscale('log')
plt.ylim([10**-25, 10])
plt.gca().axvline(mass_data[ind], color='red', linewidth=2, alpha=0.6)
plt.title('Mass Distribution')
plt.xlabel(r'Mass $(M_\odot)$')
plt.ylabel('Density')
# most massive
ind = np.argmax(mass_data)
plt.subplot(3,3,9)
dist = mass_dist(ind)
plt.plot(mass_points, dist, alpha=0.6, linewidth=2)
plt.gca().set_xscale('log')
plt.gca().set_yscale('log')
plt.xlim([mass_points.min(), mass_points.max()])
plt.ylim([10**-25, 10])
plt.gca().axvline(mass_data[ind], color='red', linewidth=2, alpha=0.6)
plt.title('Mass Distribution')
plt.xlabel(r'Mass $(M_\odot)$')
plt.ylabel('Density')
# plt.tight_layout()
plt.gcf().set_size_inches((10,6))
Explanation: Results
End of explanation
index = list(range(2, 9)) + [np.argmax(mass_data)]  # list() keeps this working under Python 3
plt.title('Simple Sketch of Field of View')
plt.scatter(ra_data[index], dec_data[index] , s=np.log(mass_data[index]), alpha=0.6)
plt.xlabel('ra')
plt.ylabel('dec');
Explanation: Turning into Probabilistic Catalogue
End of explanation |
12,246 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Float $Y_i$ & Float $\alpha_{MLT}$
First, we load the appropriate libraries and data file. MCMC trials where all quantities are permitted to float happened during Run 05. Note that the metallicity uncertainty for measurements where no observational uncertainties were provided is assumed to be ±0.2 dex.
Step1: As before, we can confirm that distances were accurately recovered and we can check on how well metallicities were recovered compared to the measured value.
Step2: Distances are well recovered, as anticipated. Metallicities are scattered about the zero-point with perhaps a tendency for predicting systematically higher metallicities between $-0.40$ and $-0.20$ dex. Typical scatter appears to be around $\pm0.2$ dex, consistent with the assumed metallicity uncertainty. Points in red are those that have a measured metallicity uncertainty. In general, those metallicities are better recovered, perhaps owing to tighter constraints.
Define relative errors and errors normalized to observational uncertainties.
Step3: Recovery of observed fundamental properties.
Step4: There is considerably better recovery of stellar fundamental properties once variations in helium abundance and the convective mixing length parameter are permitted. All points, with the exception of one, lie within the 99% confidence interval. We can explore whether systematic errors still remain in the sample, although from the above figure we can gather that such systematic effects may be small.
First as a function of bolometric flux and angular diameter,
Step5: The average error rises as one moves toward lower bolometric fluxes and smaller angular diameters. These points are effectively all M dwarfs. This illustrates, quite well, that problems for the lowest-mass stars are most resilient to variations in stellar model input parameters, thus preserving the trends present in the data where $\alpha_{MLT}$ and $Y_i$ are fixed. The growth of the trend is therefore attributable to the model's steadily increasing resistance to change resulting from modifications to input parameters.
As a function of stellar mass and effective temperature,
Step6: These figures nicely illustrate the aforementioned phenomenon that models grow increasingly resilient to variations in input parameters as stellar mass (and effective temperature) decreases. Note that it should be possible to quantify the significance of any potential rising trend toward lower masses or temperatures with statistical tests, if one so desires.
For the moment, we can look at how the tunable parameters vary. Starting with helium abundance,
Step7: Should probably provide some analysis.
Now we can look at the mixing length parameter,
Step8: NOTE
Step9: Now plot the Bonaca et al values against those we derived, | Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
data = np.genfromtxt('data/run05_kde_props_tmp3.txt')
data = np.array([x for x in data if x[30] > -0.5]) # remove stars that our outside of the model grid
Explanation: Float $Y_i$ & Float $\alpha_{MLT}$
First, we load the appropriate libraries and data file. MCMC trials where all quantities are permitted to float happened during Run 05. Note that the metallicity uncertainty for measurements where no observational uncertainties were provided is assumed to be ±0.2 dex.
End of explanation
fig, ax = plt.subplots(1, 2, figsize=(10, 5))
# distance recovery diagnostic
distance_limits = (0.0, 20.0)
ax[0].set_xlabel('Observed Distance (pc)', fontsize=22.)
ax[0].set_ylabel('Inferred Distance (pc)', fontsize=22.)
ax[0].set_xlim(distance_limits)
ax[0].set_ylim(distance_limits)
ax[0].plot(distance_limits, distance_limits, '--', lw=3, color="#444444")
ax[0].plot(1.0/data[:, 20], data[:, 4], 'o', markersize=9.0, color='#4682B4')
# metallicity recovery diagnostic
quoted_err = np.array([x for x in data if x[31] > 0.0])
FeH_limits = (-0.5, 0.5)
ax[1].set_xlabel('Observed [Fe/H] (dex)', fontsize=22.)
ax[1].set_ylabel('Inferred [M/H] (dex)', fontsize=22.)
ax[1].set_xlim(FeH_limits)
ax[1].set_ylim(FeH_limits)
ax[1].plot(FeH_limits, FeH_limits, '--', lw=3, color="#444444")
ax[1].plot(data[:, 30], data[:, 1], 'o', markersize=9.0, color='#4682B4')
ax[1].plot(quoted_err[:, 30], quoted_err[:, 1], 'o', markersize=9.0, color='#800000')
# auto-adjust subplot spacing
fig.tight_layout()
Explanation: As before, we can confirm that distances were accurately recovered and we can check on how well metallicities were recovered compared to the measured value.
End of explanation
# relative errors
dTheta = (data[:,18] - data[:,8])/data[:,18]
dTeff = (data[:,24] - 10**data[:,6])/data[:,24]
dFbol = (data[:,22] - 10**(data[:,7]+ 8.0))/data[:,22]
# uncertainty normalized errors
dTheta_sigma = (data[:,18] - data[:,8])/data[:,19]
dTeff_sigma = (data[:,24] - 10**data[:,6])/data[:,25]
dFbol_sigma = (data[:,22] - 10**(data[:,7] + 8.0))/data[:,23]
Explanation: Distances are well recovered, as anticipated. Metallicities are scattered about the zero-point with perhaps a tendency for predicting systematically higher metallicities between $-0.40$ and $-0.20$ dex. Typical scatter appears to be around $\pm0.2$ dex, consistent with the assumed metallicity uncertainty. Points in red are those that have a measured metallicity uncertainty. In general, those metallicities are better recovered, perhaps owing to tighter constraints.
Define relative errors and errors normalized to observational uncertainties.
End of explanation
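As a rough numerical check of that ~0.2 dex scatter (a sketch; following the plotting code above, column 1 holds the inferred [M/H] and column 30 the observed [Fe/H]):
# RMS difference between inferred and observed metallicities
print(np.round(np.std(data[:, 1] - data[:, 30]), 3))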
from matplotlib.patches import Ellipse
fig, ax = plt.subplots(1, 1, figsize=(8, 8))
# set axis labels
ax.set_xlabel('$\\Delta F_{\\rm bol} / \\sigma$', fontsize=22.)
ax.set_ylabel('$\\Delta \\Theta / \\sigma$', fontsize=22.)
ax.set_xlim(-3.0, 3.0)
ax.set_ylim(-4.0, 4.0)
# plot 68% and 99% confidence intervals
ells = [Ellipse(xy=(0.0, 0.0), width=2.*x, height=2.*x, angle=0.0, lw=3, fill=False,
linestyle='dashed', edgecolor='#333333') for x in [1.0, 3.0]]
for e in ells:
ax.add_artist(e)
# plot recovery diagnostics (uncertainty normalized errors)
ax.plot([-3.0, 3.0], [ 0.0, 0.0], '--', lw=2, color="#444444")
ax.plot([ 0.0, 0.0], [-4.0, 4.0], '--', lw=2, color="#444444")
ax.plot(dFbol_sigma, dTheta_sigma, 'o', markersize=9.0, color='#4682B4')
Explanation: Recovery of observed fundamental properties.
End of explanation
fig, ax = plt.subplots(2, 2, figsize=(10, 8), sharex=False, sharey=True)
ax[1, 0].set_xlabel('Bolometric Flux (erg s$^{-1}$ cm$^{-2}$)', fontsize=20.)
ax[1, 1].set_xlabel('Angular Diameter (mas)', fontsize=20.)
ax[1, 0].set_ylabel('$\\Delta \\Theta / \\sigma$', fontsize=20.)
ax[0, 0].set_ylabel('$\\Delta F_{\\rm bol} / \\sigma$', fontsize=20.)
# vs bolometric flux
ax[0, 0].semilogx([0.1, 1.0e3], [0.0, 0.0], '--', lw=2, color='#444444')
ax[1, 0].semilogx([0.1, 1.0e3], [0.0, 0.0], '--', lw=2, color='#444444')
ax[0, 0].semilogx(data[:, 22], dFbol_sigma, 'o', markersize=9.0, color='#4682B4')
ax[1, 0].semilogx(data[:, 22], dTheta_sigma, 'o', markersize=9.0, color='#4682B4')
# vs angular diameter
ax[0, 1].plot([0.0, 7.0], [0.0, 0.0], '--', lw=2, color='#444444')
ax[1, 1].plot([0.0, 7.0], [0.0, 0.0], '--', lw=2, color='#444444')
ax[0, 1].plot(data[:, 18], dFbol_sigma, 'o', markersize=9.0, color='#4682B4')
ax[1, 1].plot(data[:, 18], dTheta_sigma, 'o', markersize=9.0, color='#4682B4')
fig.tight_layout()
Explanation: There is considerably better recovery of stellar fundamental properties once variations in helium abundance and the convective mixing length parameter are permitted. All points, with the exception of one, lie within the 99% confidence interval. We can explore whether systematic errors still remain in the sample, although from the above figure we can gather that such systematic effects may be small.
First as a function of bolometric flux and angular diameter,
End of explanation
fig, ax = plt.subplots(2, 2, figsize=(10, 8), sharex=False, sharey=True)
ax[1, 0].set_xlabel('Mass (M$_{\\odot}$)', fontsize=20.)
ax[1, 1].set_xlabel('Effective Temperature (K)', fontsize=20.)
ax[1, 0].set_ylabel('$\\Delta \\Theta / \\sigma$', fontsize=20.)
ax[0, 0].set_ylabel('$\\Delta F_{\\rm bol} / \\sigma$', fontsize=20.)
# vs mass
ax[0, 0].plot([0.0, 1.0], [0.0, 0.0], '--', lw=2, color='#444444')
ax[1, 0].plot([0.0, 1.0], [0.0, 0.0], '--', lw=2, color='#444444')
ax[0, 0].plot(data[:, 0], dFbol_sigma, 'o', markersize=9.0, color='#4682B4')
ax[1, 0].plot(data[:, 0], dTheta_sigma, 'o', markersize=9.0, color='#4682B4')
# vs effective temperature
ax[0, 1].plot([2500., 6000.], [0.0, 0.0], '--', lw=2, color='#444444')
ax[1, 1].plot([2500., 6000.], [0.0, 0.0], '--', lw=2, color='#444444')
ax[0, 1].plot(data[:,24], dFbol_sigma, 'o', markersize=9.0, color='#4682B4')
ax[1, 1].plot(data[:,24], dTheta_sigma, 'o', markersize=9.0, color='#4682B4')
fig.tight_layout()
Explanation: The average error rises as one moves toward lower bolometric fluxes and smaller angular diameters. These points are effectively all M dwarfs. This illustrates, quite well, that problems for the lowest-mass stars are most resilient to variations in stellar model input parameters, thus preserving the trends present in the data where $\alpha_{MLT}$ and $Y_i$ are fixed. The growth of the trend is therefore attributable to the model's steadily increasing resistance to change resulting from modifications to input parameters.
As a function of stellar mass and effective temperature,
End of explanation
fig, ax = plt.subplots(2, 2, figsize=(12, 12), sharey=True)
ax[0, 0].set_xlabel('Mass (M$_{\\odot}$)', fontsize=20.)
ax[0, 1].set_xlabel('Effective Temperature (K)', fontsize=20.)
ax[1, 0].set_xlabel('Heavy Element Mass Fraction, $Z_i$', fontsize=20.)
ax[1, 1].set_xlabel('Mixing Length Parameter', fontsize=20.)
ax[1, 0].set_ylabel('Helium Mass Fraction, $Y_i$', fontsize=20.)
ax[0, 0].set_ylabel('Helium Mass Fraction, $Y_i$', fontsize=20.)
for x in ax:
for y in x:
y.tick_params(which='major', axis='both', length=10., labelsize=16.)
Z_init = (1.0 - data[:, 2])/(10.0**(-1.0*(data[:, 1] + np.log10(0.026579))) + 1.0)
# Helium abundance variation
ax[0, 0].plot(data[:, 0], data[:, 2], 'o', markersize=9.0, color='#4682B4')
ax[1, 0].plot(Z_init, data[:, 2], 'o', markersize=9.0, color='#4682B4')
ax[0, 1].plot(data[:,24], data[:, 2], 'o', markersize=9.0, color='#4682B4')
ax[1, 1].plot(data[:, 5], data[:, 2], 'o', markersize=9.0, color='#4682B4')
fig.tight_layout()
Explanation: These figures nicely illustrate the aforementioned phenomenon that models grow increasingly resilient to variations in input parameters as stellar mass (and effective temperature) decreases. Note that it should be possible to quantify the significance of any potential rising trend toward lower masses or temperatures with statistical tests, if one so desires.
For the moment, we can look at how the tunable parameters vary. Starting with helium abundance,
End of explanation
fig, ax = plt.subplots(2, 2, figsize=(12, 12), sharey=True)
ax[0, 0].set_xlabel('Mass (M$_{\\odot}$)', fontsize=20.)
ax[0, 1].set_xlabel('Effective Temperature (K)', fontsize=20.)
ax[1, 0].set_xlabel('Metallicity, [M/H] (dex)', fontsize=20.)
ax[1, 1].set_xlabel('$\\log (g)$', fontsize=20.)
ax[1, 0].set_ylabel('Mixing Length Parameter', fontsize=20.)
ax[0, 0].set_ylabel('Mixing Length Parameter', fontsize=20.)
for x in ax:
for y in x:
y.tick_params(which='major', axis='both', length=10., labelsize=16.)
Log_g = np.log10(6.67e-8*data[:,0]*1.989e33/(data[:,26]*6.956e10)**2)
# mixing length parameter variation
ax[0, 0].plot(data[:, 0], data[:, 5], 'o', markersize=9.0, color='#4682B4')
ax[1, 0].plot(data[:, 1], data[:, 5], 'o', markersize=9.0, color='#4682B4')
ax[0, 1].plot(10**data[:, 6], data[:, 5], 'o', markersize=9.0, color='#4682B4')
ax[1, 1].plot(Log_g, data[:, 5], 'o', markersize=9.0, color='#4682B4')
# points of reference (Sun, HD 189733)
ax[0, 0].plot([1.0, 0.80], [1.884, 1.231], '*', markersize=15.0, color='#DC143C')
ax[0, 1].plot([5778., 4883.], [1.884, 1.231], '*', markersize=15.0, color='#DC143C')
ax[1, 0].plot([0.01876, 0.01614], [1.884, 1.231], '*', markersize=15.0, color='#DC143C')
ax[1, 1].plot([4.43, 4.54], [1.884, 1.231], '*', markersize=15.0, color='#DC143C')
fig.tight_layout()
Explanation: Should probably provide some analysis.
Now we can look at the mixing length parameter,
End of explanation
B12_coeffs = [-12.77, 0.54, 3.18, 0.52] # from Table 1: Trilinear analysis
B12_alphas = B12_coeffs[0] + B12_coeffs[1]*Log_g + B12_coeffs[2]*data[:,6] + B12_coeffs[3]*data[:,1]
Explanation: NOTE: values for the Sun plotted above are drawn from our solar-calibrated model.
We can also compare the inferred mixing length to the values obtained by extrapolating the Bonaca et al. (2012, ApJL, 755, L12) relation. The latter is valid for stars with $3.8 \le \log(g) \le 4.5$, $5000 \le T_{\rm eff} \le 6700$ K, and $-0.65 \le \textrm{[Fe/H]} \le +0.35$, but here we extrapolate to see how the mixing length parameter might evolve toward cooler temperatures.
End of explanation
fig, ax = plt.subplots(1, 1, figsize=(8, 8))
ax.set_xlabel('Bonaca et al. $\\alpha_{\\rm MLT}$', fontsize=20.)
ax.set_ylabel('This work, $\\alpha_{\\rm MLT}$', fontsize=20.)
ax.tick_params(which='major', axis='both', length=10., labelsize=16.)
# one-to-one relation
ax.plot([1.0, 1.59], [1.0, 1.59], '--', lw=2, color='#444444')
# compare values
ax.errorbar(B12_alphas, data[:,5], yerr=data[:,14], fmt='o', lw=2, markersize=9.0, color='#4682B4')
Explanation: Now plot the Bonaca et al values against those we derived,
End of explanation |
12,247 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Disaggregation and Metrics
Step1: Dividing data into train and test set
Step2: Let us use building 1 for demo purposes
Step3: Let's split data at April 30th
Step4: The REDD data set has appliance-level data sampled every 3 or 4 seconds and mains data sampled every 1 second. Let us verify this.
To allow disaggregation to be done on any arbitrarily large dataset, disaggregation output is dumped to disk chunk-by-chunk
Step5: Since both of these are sampled at different frequencies, we will downsample both to 1 minute resolution. We will also select the top-5 appliances in terms of energy consumption and use them for training our FHMM and CO models.
Selecting top-5 appliances
Step6: Training and disaggregation
FHMM | Python Code:
from __future__ import print_function, division
import time
from matplotlib import rcParams
import matplotlib.pyplot as plt
%matplotlib inline
rcParams['figure.figsize'] = (13, 6)
plt.style.use('ggplot')
from nilmtk import DataSet, TimeFrame, MeterGroup, HDFDataStore
from nilmtk.disaggregate import CombinatorialOptimisation
Explanation: Disaggregation and Metrics
End of explanation
train = DataSet('/data/REDD/redd.h5')
test = DataSet('/data/REDD/redd.h5')
Explanation: Dividing data into train and test set
End of explanation
building = 1
train.buildings[building].elec.mains().plot()
Explanation: Let us use building 1 for demo purposes
End of explanation
train.set_window(end="30-4-2011")
test.set_window(start="30-4-2011")
train_elec = train.buildings[1].elec
test_elec = test.buildings[1].elec
train_elec.mains().plot()
test_elec.mains().plot()
Explanation: Let's split data at April 30th
End of explanation
fridge_meter = train_elec['fridge']
fridge_df = fridge_meter.load().next()
fridge_df.head()
mains = train_elec.mains()
mains_df = mains.load().next()
mains_df.head()
Explanation: The REDD data set has appliance-level data sampled every 3 or 4 seconds and mains data sampled every 1 second. Let us verify this.
To allow disaggregation to be done on any arbitrarily large dataset, disaggregation output is dumped to disk chunk-by-chunk:
End of explanation
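One quick way to check those sampling rates (a sketch; it assumes the loaded DataFrames carry a time index, as nilmtk's loaders provide) is to look at the median spacing between consecutive timestamps:
# median time between samples for the fridge and the mains
print(fridge_df.index.to_series().diff().median())
print(mains_df.index.to_series().diff().median())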
top_5_train_elec = train_elec.submeters().select_top_k(k=5)
top_5_train_elec
Explanation: Since both of these are sampled at different frequencies, we will downsample both to 1 minute resolution. We will also select the top-5 appliances in terms of energy consumption and use them for training our FHMM and CO models.
Selecting top-5 appliances
End of explanation
start = time.time()
from nilmtk.disaggregate import fhmm_exact
fhmm = fhmm_exact.FHMM()
# Note that we have given the sample period to downsample the data to 1 minute
fhmm.train(top_5_train_elec, sample_period=60)
end = time.time()
print("Runtime =", end-start, "seconds.")
disag_filename = '/data/REDD/redd-disag-fhmm.h5'
output = HDFDataStore(disag_filename, 'w')
# Note that we have mentioned to disaggregate after converting to a sample period of 60 seconds
fhmm.disaggregate(test_elec.mains(), output, sample_period=60)
output.close()
disag_fhmm = DataSet(disag_filename)
disag_fhmm_elec = disag_fhmm.buildings[building].elec
from nilmtk.metrics import f1_score
f1_fhmm = f1_score(disag_fhmm_elec, test_elec)
f1_fhmm.index = disag_fhmm_elec.get_labels(f1_fhmm.index)
f1_fhmm.plot(kind='barh')
plt.ylabel('appliance');
plt.xlabel('f-score');
plt.title("FHMM");
start = time.time()
from nilmtk.disaggregate import CombinatorialOptimisation
co = CombinatorialOptimisation()
# Note that we have given the sample period to downsample the data to 1 minute
co.train(top_5_train_elec, sample_period=60)
end = time.time()
print("Runtime =", end-start, "seconds.")
disag_filename = '/data/REDD/redd-disag-co.h5'
output = HDFDataStore(disag_filename, 'w')
# Note that we have mentioned to disaggregate after converting to a sample period of 60 seconds
co.disaggregate(test_elec.mains(), output, sample_period=60)
output.close()
disag_co = DataSet(disag_filename)
disag_co_elec = disag_co.buildings[building].elec
from nilmtk.metrics import f1_score
f1_co= f1_score(disag_co_elec, test_elec)
f1_co.index = disag_co_elec.get_labels(f1_co.index)
f1_co.plot(kind='barh')
plt.ylabel('appliance');
plt.xlabel('f-score');
plt.title("CO");
Explanation: Training and disaggregation
FHMM
End of explanation |
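To put the two algorithms side by side, a small sketch (assuming the two f-score Series share the same appliance labels) that plots them in a single bar chart:
import pandas as pd
comparison = pd.concat([f1_fhmm, f1_co], axis=1, keys=['FHMM', 'CO'])
comparison.plot(kind='barh')
plt.xlabel('f-score');
plt.title('FHMM vs CO');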
12,248 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Support Vector Machines
Step1: Kernel SVMs
Predictions in a kernel-SVM are made using the formula
$$
\hat{y} = \alpha_0 + \alpha_1 y_1 k(\mathbf{x^{(1)}}, \mathbf{x}) + ... + \alpha_n y_n k(\mathbf{x^{(n)}}, \mathbf{x})> 0
$$
$$
0 \leq \alpha_i \leq C
$$
Radial basis function (Gaussian) kernel | Python Code:
from sklearn.datasets import load_digits
from sklearn.cross_validation import train_test_split
digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(digits.data / 16., digits.target % 2, random_state=2)
from sklearn.svm import LinearSVC, SVC
linear_svc = LinearSVC(loss="hinge").fit(X_train, y_train)
svc = SVC(kernel="linear").fit(X_train, y_train)
np.mean(linear_svc.predict(X_test) == svc.predict(X_test))
Explanation: Support Vector Machines
End of explanation
from sklearn.metrics.pairwise import rbf_kernel
line = np.linspace(-3, 3, 100)[:, np.newaxis]
kernel_value = rbf_kernel([[0]], line, gamma=1)
plt.plot(line, kernel_value.T)
from plots import plot_svm_interactive
plot_svm_interactive()
svc = SVC().fit(X_train, y_train)
svc.score(X_test, y_test)
Cs = [0.001, 0.01, 0.1, 1, 10, 100]
gammas = [0.001, 0.01, 0.1, 1, 10, 100]
from sklearn.grid_search import GridSearchCV
param_grid = {'C': Cs, 'gamma' : gammas}
grid_search = GridSearchCV(SVC(), param_grid, cv=5)
grid_search.fit(X_train, y_train)
grid_search.score(X_test, y_test)
# We extract just the scores
scores = [x[1] for x in grid_search.grid_scores_]
scores = np.array(scores).reshape(6, 6)
plt.matshow(scores)
plt.xlabel('gamma')
plt.ylabel('C')
plt.colorbar()
plt.xticks(np.arange(6), param_grid['gamma'])
plt.yticks(np.arange(6), param_grid['C']);
Explanation: Kernel SVMs
Predictions in a kernel-SVM are made using the formula
$$
\hat{y} = \alpha_0 + \alpha_1 y_1 k(\mathbf{x^{(1)}}, \mathbf{x}) + ... + \alpha_n y_n k(\mathbf{x^{(n)}}, \mathbf{x})> 0
$$
$$
0 \leq \alpha_i \leq C
$$
Radial basis function (Gaussian) kernel:
$$k(\mathbf{x}, \mathbf{x'}) = \exp(-\gamma ||\mathbf{x} - \mathbf{x'}||^2)$$
End of explanation |
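To make the kernel formula concrete, a minimal sketch that evaluates the RBF kernel by hand for two arbitrary points and compares it with sklearn's rbf_kernel:
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
x, y, gamma = np.array([[0.0]]), np.array([[1.5]]), 1.0
manual = np.exp(-gamma * np.sum((x - y) ** 2))  # exp(-gamma * ||x - y||^2)
print(manual, rbf_kernel(x, y, gamma=gamma)[0, 0])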
12,249 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2019 The Google Research Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http
Step1: Necessary packages and functions call
rossmann_data_loading
Step2: Data loading & Select source, target, validation datasets
Load source, target, validation dataset and save those datasets as source.csv, target.csv, valid.csv in './repo/data_files/' directory.
If users have their own source.csv, target.csv, valid.csv, the users can skip this cell and just save those files to './repo/data_files/' directory .
Input
Step3: Data preprocessing
Extract features and labels from source.csv, valid.csv, target.csv in './repo/data_files/' directory.
Normalize the features of source, validation, and target sets.
Step4: Run DVRL
Input
Step5: Evaluations
In this notebook, we use LightGBM as the predictor model for evaluation purposes (but you can also replace it with another model).
Here, we use Root Mean Squared Percentage Error (RMSPE) as the performance metric.
DVRL Performance
DVRL learns robustly although the training data has a different distribution from the target data distribution, using the guidance from the small validation data (which comes from the target distribution) via reinforcement learning.
* Train predictive model with weighted optimization using estimated data values by DVRL as the weights. | Python Code:
# Uses pip3 to install necessary package (lightgbm)
!pip3 install lightgbm
# Resets the IPython kernel to import the installed package.
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
import os
from git import Repo
# Current working directory
repo_dir = os.getcwd() + '/repo'
if not os.path.exists(repo_dir):
os.makedirs(repo_dir)
# Clones github repository
if not os.listdir(repo_dir):
git_url = "https://github.com/google-research/google-research.git"
Repo.clone_from(git_url, repo_dir)
Explanation: Copyright 2019 The Google Research Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Domain Adaptation using DVRL
Jinsung Yoon, Sercan O Arik, Tomas Pfister, "Data Valuation using Reinforcement Learning", arXiv preprint arXiv:1909.11671 (2019) - https://arxiv.org/abs/1909.11671
This notebook describes the user-guide of a domain adaptation application using "Data Valuation using Reinforcement Learning (DVRL)".
We consider the scenario where the training dataset comes from a substantially different distribution from the validation and testing sets. Data valuation is expected to be beneficial for this task by selecting the samples from the training dataset that best match the distribution of the validation dataset.
You need:
Source / Target / Validation Datasets
* If there is no explicit validation set, users can utilize a small portion of target set as the validation set and the remaining as the target set.
* If users come with their own source / target / validation datasets, the users should save those files as 'source.csv', 'target.csv', 'valid.csv' in './repo/data_files/' directory.
Requirements
We use the Rossmann store sales dataset (https://www.kaggle.com/c/rossmann-store-sales) as an example in this notebook. Please download the dataset (rossmann-store-sales.zip) from the following link (https://www.kaggle.com/c/rossmann-store-sales/data) and save it to the './repo/data_files/' directory after cloning the github repository.
Prerequisite
Download lightgbm package.
Clone https://github.com/google-research/google-research.git to the current directory.
End of explanation
import numpy as np
import tensorflow as tf
import lightgbm
# Sets current directory
os.chdir(repo_dir)
from dvrl.data_loading import load_rossmann_data, preprocess_data
from dvrl import dvrl
from dvrl.dvrl_metrics import learn_with_dvrl, learn_with_baseline
Explanation: Necessary packages and functions call
rossmann_data_loading: Data loader for rossmann dataset.
data_preprocess: Data extraction and normalization.
dvrl_regress: Data valuation function for regression problem.
metrics: Evaluation metrics of the quality of data valuation in domain adaptation setting.
End of explanation
# The number of source / validation / target samples (79%/1%/20%)
dict_no = dict()
dict_no['source'] = 667027 # 79% of data
dict_no['valid'] = 8443 # 1% of data
# Selects a setting and target store type
setting = 'train-on-rest'
target_store_type = 'B'
# Loads data and selects source, target, validation datasets
load_rossmann_data(dict_no, setting, target_store_type)
print('Finished data loading.')
Explanation: Data loading & Select source, target, validation datasets
Load source, target, validation dataset and save those datasets as source.csv, target.csv, valid.csv in './repo/data_files/' directory.
If users have their own source.csv, target.csv, valid.csv, the users can skip this cell and just save those files to './repo/data_files/' directory .
Input:
* dict_no: The number of source / valid / target samples. We use 79% / 1% / 20% as the ratio of each dataset.
* settings: 'train-on-all', 'train-on-rest', 'train-on-specific'.
* target_store_type: Target store types ('A','B','C','D').
For instance, to evaluate the performance of store type 'A', (1) 'train-on-all' setting uses the entire source dataset, (2) 'train-on-rest' setting uses the source samples with store type 'B', 'C', and 'D', (3) 'train-on-specific' setting uses the source samples with store type 'A'. Therefore, 'train-on-rest' has the maximum distribution differences between source and target datasets.
End of explanation
# Normalization methods: either 'minmax' or 'standard'
normalization = 'minmax'
# Extracts features and labels. Then, normalizes features.
x_source, y_source, x_valid, y_valid, x_target, y_target, _ = \
preprocess_data(normalization, 'source.csv', 'valid.csv', 'target.csv')
print('Finished data preprocess.')
Explanation: Data preprocessing
Extract features and labels from source.csv, valid.csv, target.csv in './repo/data_files/' directory.
Normalize the features of source, validation, and target sets.
End of explanation
# Resets the graph
tf.reset_default_graph()
# Defines the problem
problem = 'regression'
# Network parameters
parameters = dict()
parameters['hidden_dim'] = 100
parameters['comb_dim'] = 10
parameters['iterations'] = 1000
parameters['activation'] = tf.nn.tanh
parameters['layer_number'] = 5
parameters['batch_size'] = 50000
parameters['learning_rate'] = 0.001
# Defines predictive model
pred_model = lightgbm.LGBMRegressor()
# Sets checkpoint file name
checkpoint_file_name = './tmp/model.ckpt'
# Defines flag for using stochastic gradient descent / pre-trained model
flags = {'sgd': False, 'pretrain': False}
# Initializes DVRL
dvrl_class = dvrl.Dvrl(x_source, y_source, x_valid, y_valid, problem, pred_model, parameters, checkpoint_file_name, flags)
# Trains DVRL
dvrl_class.train_dvrl('rmspe')
# Estimates data values
dve_out = dvrl_class.data_valuator(x_source, y_source)
# Predicts with DVRL
y_target_hat = dvrl_class.dvrl_predictor(x_target)
print('Finished data valuation.')
Explanation: Run DVRL
Input:
data valuator network parameters: Set network parameters of data valuator.
pred_model: The predictor model that predicts the output from the input. Any machine learning model (e.g. a neural network or ensemble decision tree) can be used as the predictor model, as long as it has fit and predict (for regression)/predict_proba (for classification) as its subfunctions. Fit can be implemented using multiple backpropagation iterations.
Output:
data_valuator: Function that uses training set as inputs to estimate data values.
dvrl_predictor: Function that predicts labels of the testing samples.
dve_out: Estimated data values of the entire training samples.
End of explanation
# Defines evaluation model
eval_model = lightgbm.LGBMRegressor()
# DVRL-weighted learning
dvrl_perf = learn_with_dvrl(dve_out, eval_model,
x_source, y_source, x_valid, y_valid, x_target, y_target, 'rmspe')
# Baseline prediction performance (treat all training samples equally)
base_perf = learn_with_baseline(eval_model, x_source, y_source, x_target, y_target, 'rmspe')
print('Finished evaluation.')
print('DVRL learning performance: ' + str(np.round(dvrl_perf, 4)))
print('Baseline performance: ' + str(np.round(base_perf, 4)))
Explanation: Evaluations
In this notebook, we use LightGBM as the predictor model for evaluation purposes (but you can also replace it with another model).
Here, we use Root Mean Squared Percentage Error (RMSPE) as the performance metric.
DVRL Performance
DVRL learns robustly although the training data has a different distribution from the target data distribution, using the guidance from the small validation data (which comes from the target distribution) via reinforcement learning.
* Train predictive model with weighted optimization using estimated data values by DVRL as the weights.
End of explanation |
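For reference, a minimal sketch of the RMSPE metric referenced above (the actual implementation lives inside the dvrl package; this is the usual definition and assumes the targets are nonzero):
import numpy as np
def rmspe(y_true, y_pred):
    # root mean squared percentage error
    return np.sqrt(np.mean(((y_true - y_pred) / y_true) ** 2))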
12,250 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Analyzing patient data
Words are useful, but what's more useful are the sentences and stories we build with them.
A lot of powerful tools are built into languages like Python, even more live in the libraries they are used to build
We need to import a library called NumPy
Use this library to do fancy things with numbers (e.g. if you have matrices or arrays).
Step1: Importing a library akin to getting lab equipment out of a locker and setting up on bench
Libraries provide additional functionality
With NumPy loaded we can read the CSV into python.
Step2: numpy.loadtxt() is a function call, runs loadtxt in numpy
uses dot notation to access thing.component
two parameters
Step3: print above shows several things at once by separating with commas
think of a variable as putting a sticky note on a value
means assigning a value to one variable does not change the value of other variables.
Step4: weight_lb doesn't remember where its value came from
it isn't automatically updated when weight_kg changes - not like spreadsheets
whos #ipython command to see what variables & mods you have
What does each variable contain after each statement in the following program
Step5: we can also assign an array of values to a variable
rerun numpy.loadtxt and save its result
Step6: data refers to N-dimensional array
data corresponds to patients' inflammation
let's look at the shape of the data
Step7: data has 60 rows and 40 columns
when we created data with numpy it also creates members or attributes
extra info describes data like adjective does a noun
dot notation to access members
Step8: programming languages like MATLAB and R start counting at 1
languages in the C family (C++, Java, Perl & Python) count from 0
we have MxN array in python, indices go from 0 to M-1 on the first axis and 0 to N-1 on second
indices are (row, column)
Step9: slice 0
Step10: we don't have to include the upper and lower bound
python uses 0 by default if we don't include lower
no upper slice runs to the axis
Step11: A section of an array is called a slice. We can take slices of character strings as well
Step12: operation on arrays is done on each individual element of the array
Step13: we can also do arithmetic operation with another array of same shape (same dims)
Step14: we can do more than simple arithmetic
let's take average inflammation for patients
Step15: mean is a method of the array (function)
variables are nouns, methods are verbs - they are what the thing knows how to do
for mean we need empty () parentheses even if we aren't passing in parameters, to tell python to go do something
data.shape doesn't need () because it's just a description
NumPy arrays have lots of useful methods
Step16: however, we are usually more interested in partial stats, e.g. max value per patient or the avg value per day
we can create a new subset array of the data we want
Step17: but we don't need to create a smaller array; instead we can combine selection and method call
Step18: what if we need max inflammation for all patients, or the average for each day?
most array methods let us specify the axis we want to work on
Step19: let's visualize this data with matplotlib library
first we import the pyplot module from matplotlib
Step20: nice, but ipython/jupyter provides us with 'magic' functions and one lets us display our plot inline
% indicates an ipython magic function
Step21: now let's look at avg inflammation over days (columns)
Step22: avg per day across all patients is stored in the var ave_inflammation
matplotlib.pyplot create and display a line graph of those values
results a linear rise and fall, which is suspicious
based on other studies we expect a sharper rise and slower fall
let's look at two other stats
Step23: max values rise and fall smoothly, while min seems to be a step function
neither seem likely
we can group into a single figure using subplots
script below uses a number of new commands
matplotlib.pyplot.figure() creates the plotting space
figsize tells python how big
each plot is placed into the figure using add_subplot
1st val = how many rows, 2nd refers to the total number of subplot columns, 3rd denotes which plot you are referencing (left to right)
each plot in a diff variable (axes1, axes2, axes3)
we use set_xlabel() & set_ylabel() to set the titles of the axes | Python Code:
import numpy
Explanation: Analyzing patient data
Words are useful, but what's more useful are the sentences and stories we build with them.
A lot of powerful tools are built into languages like Python, even more live in the libraries they are used to build
We need to import a library called NumPy
Use this library to do fancy things with numbers (e.g. if you have matrices or arrays).
End of explanation
#assuming the data file is in the data/ folder
numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')
Explanation: Importing a library akin to getting lab equipment out of a locker and setting up on bench
Libraries provide additional functionality
With NumPy loaded we can read the CSV into python.
End of explanation
weight_kg = 55 #assigns value 55 to weight_kg
print(weight_kg) #we can print to the screen
print('weight in pounds:', 2.2 * weight_kg) # do arithmetic with it
print?
weight_kg = 57.5 #change variable's value by assign new value
print('weight in kilograms is now :', weight_kg)
Explanation: numpy.loadtxt() is a function call, runs loadtxt in numpy
uses dot notation to access thing.component
two parameters: filename and delimiter - both characters
we didn't save in memory using a variable
variables in python must start with letter & are case sensitive
assignment operator is =
let's look at assigning a single value to a variable
End of explanation
weight_lb = 2.2 * weight_kg #example let's store patients weight in pounds
print('weight in kilograms: ', weight_kg, 'and in pounds', weight_lb)
weight_kg = 100.0 #now change weight_kg
print('weight in kilograms is now:', weight_kg, 'and weight in pounds is still:', weight_lb)
Explanation: print above shows several things at once by separating with commas
think of a variable as putting a sticky note on a value
means assigning a value to one variable does not change the value of other variables.
End of explanation
whos
Explanation: weight_lb doesn't remember where its value came from
it isn't automatically updated when weight_kg changes - not like spreadsheets
whos #ipython command to see what variables & mods you have
What does each variable contain after each statement in the following program:
python
mass = 47.5
age = 122
mass = mass * 2.0
age = age - 20
What does the following program print out?
python
first, second = 'Grace', 'Hopper'
third, fourth = second, first
print(third, fourth)
End of explanation
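One way to check the two exercises above is simply to run them; the comments track each variable after every statement (Hopper Grace is what the final print shows):
mass = 47.5          # mass: 47.5
age = 122            # mass: 47.5, age: 122
mass = mass * 2.0    # mass: 95.0, age: 122
age = age - 20       # mass: 95.0, age: 102

first, second = 'Grace', 'Hopper'
third, fourth = second, first
print(third, fourth)  # prints: Hopper Grace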
data = numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')
print(data) #statement above doesn't produce output, let's print
print(type(data)) #we can get type of object
Explanation: we can also assign an array of values to a variable
rerun numpy.loadtxt and save its result
End of explanation
print(data.shape)
Explanation: data refers to N-dimensional array
data corresponds to patients' inflammation
let's look at the shape of the data
End of explanation
print('first value in data', data[0,0]) #use index in square brackets
data[30,20] # get the middle value - notice here i didn't use print
Explanation: data has 60 rows and 40 columns
when we created data with numpy, it also created members or attributes
this extra info describes data like an adjective describes a noun
dot notation to access members
End of explanation
data[0:4, 0:10] #select whole sections of matrix, 1st 10 days & 4 patients
Explanation: programming languages like MATLAB and R start counting at 1
languages in the C family (C++, Java, Perl & Python) count from 0
we have MxN array in python, indices go from 0 to M-1 on the first axis and 0 to N-1 on second
indices are (row, column)
End of explanation
data[5:10,0:10]
Explanation: slice 0:4 means start at 0 and go up to but not include 4
up-to-but-not-including takes a bit of getting used to
End of explanation
data[:3, 36:]
Explanation: we don't have to include the upper and lower bound
python uses 0 by default if we don't include the lower bound
with no upper bound, the slice runs to the end of the axis
: will include everything
End of explanation
element = 'oxygen'
print('first three characters:', element[0:3])
print('last three characters:', element[3:6])
print(element[:4])
print(element[4:])
print(element[:])
#oxygen
print(element[-1])
print(element[-2])
print(element[2:-1])
doubledata = data * 2.0 #we can perform math on array
Explanation: A section of an array is called a slice. We can take slices of character strings as well:
python
element = 'oxygen'
print('first three characters:', element[0:3])
print('last three characters:', element[3:6])
first three characters: oxy
last three characters: gen
What is the value of element[:4]? What about element[4:]? Or element[:]?
What is element[-1]? What is element[-2]? Given those answers, explain what element[1:-1] does.
End of explanation
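Working through those string-slicing questions (the expected values are noted as comments):
print(element[:4])    # 'oxyg'
print(element[4:])    # 'en'
print(element[:])     # 'oxygen'
print(element[-1])    # 'n'
print(element[-2])    # 'e'
print(element[1:-1])  # 'xyge', everything except the first and last character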
doubledata
data[:3, 36:]
doubledata[:3, 36:]
Explanation: operations on arrays are done on each individual element of the array
End of explanation
tripledata = doubledata + data
print('tripledata:')
print(tripledata[:3, 36:])
Explanation: we can also do arithmetic operations with another array of the same shape (same dims)
End of explanation
print(data.mean())
Explanation: we can do more than simple arithmetic
let's take average inflammation for patients
End of explanation
print('maximum inflammation: ', data.max())
print('minimum inflammation: ', data.min())
print('standard deviation:', data.std())
Explanation: mean is a method of the array (function)
variables are nouns, methods are verbs - they are what the thing knows how to do
for mean we need empty () parentheses even if we aren't passing in parameters, to tell python to go do something
data.shape doesn't need () because it's just a description
NumPy arrays have lots of useful methods:
End of explanation
patient_0 = data[0, :] #0 on first axis, everythign on second
print('maximum inflammation for patient 0: ', patient_0.max())
Explanation: however, we are usually more interested in partial stats, e.g. max value per patient or the avg value per day
we can create a new subset array of the data we want
End of explanation
data[2, :].max() #max inflammation of patient 2
Explanation: but we don't need to create a smaller array; instead we can combine selection and method call:
End of explanation
print(data.mean(axis=0))
print(data.mean(axis=0).shape) #Nx1 vector of averages
print(data.mean(axis=1)) #avg inflam per patient across all days
print(data.mean(axis=1).shape)
Explanation: what if we need max inflammation for all patients, or the average for each day?
most array methods let us specify the axis we want to work on
End of explanation
import matplotlib.pyplot
image = matplotlib.pyplot.imshow(data)
matplotlib.pyplot.imshow?
Explanation: let's visualize this data with matplotlib library
first we import the pyplot module from matplotlib
End of explanation
%matplotlib inline
numpy.mean?
Explanation: nice, but ipython/jupyter provides us with 'magic' functions and one lets us display our plot inline
% indicates an ipython magic function
End of explanation
ave_inflammation = data.mean(axis = 0)
ave_plot = matplotlib.pyplot.plot(ave_inflammation)
matplotlib.pyplot.show(ave_plot)
Explanation: now let's look at avg inflammation over days (columns)
End of explanation
max_plot = matplotlib.pyplot.plot(data.max(axis=0))
matplotlib.pyplot.show(max_plot)
min_plot = matplotlib.pyplot.plot(data.min(axis=0))
matplotlib.pyplot.show(min_plot)
Explanation: avg per day across all patients is stored in the var ave_inflammation
we use matplotlib.pyplot to create and display a line graph of those values
the result is a linear rise and fall, which is suspicious
based on other studies we expect a sharper rise and slower fall
let's look at two other stats
End of explanation
import numpy
import matplotlib.pyplot
data = numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')
fig = matplotlib.pyplot.figure(figsize=(10.0, 3.0))
axes1 = fig.add_subplot(1, 3, 1)
axes2 = fig.add_subplot(1, 3, 2)
axes3 = fig.add_subplot(1, 3, 3)
axes1.set_ylabel('average')
axes1.plot(data.mean(axis=0))
axes2.set_ylabel('max')
axes2.plot(data.max(axis=0))
axes3.set_ylabel('min')
axes3.plot(data.min(axis=0))
fig.tight_layout()
matplotlib.pyplot.show()
Explanation: max values rise and fall smoothly, while min seems to be a step function
neither seem likely
we can group into a single figure using subplots
script below uses a number of new commands
matplotlib.pyplot.figure() creates the plotting space
figsize tells python how big
each plot is placed into the figure using add_subplot
1st val = how many rows, 2nd refers to the total number of subplot columns, 3rd denotes which plot you are referencing (left to right)
each plot in a diff variable (axes1, axes2, axes3)
we use set_xlabel() & set_ylabel() to set the titles of the axes
End of explanation |
12,251 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Grid algorithm for a beta-binomial hierarchical model
Bayesian Inference with PyMC
Copyright 2021 Allen B. Downey
License
Step2: Heart Attack Data
This example is based on Chapter 10 of Probability and Bayesian Modeling; it uses data on death rates due to heart attack for patients treated at various hospitals in New York City.
We can use Pandas to read the data into a DataFrame.
Step3: The columns we need are Cases, which is the number of patients treated at each hospital, and Deaths, which is the number of those patients who died.
Step4: Hospital Data with PyMC
Here's a hierarchical model that estimates the death rate for each hospital, and simultaneously estimates the distribution of rates across hospitals.
Step5: Here's the graph representation of the model, showing that the observable is an array of 13 values.
Step6: Here are the posterior distributions of alpha and beta.
Step7: And we can extract the posterior distributions of the xs.
Step8: As an example, here's the posterior distribution of x for the first hospital.
Step9: Just one update
Step10: Here's the graphical representation of the model.
Step11: The grid priors
Step12: The joint distribution of hyperparameters
Step13: Joint prior of alpha, beta, and x
Step14: We can speed this up by computing just $x^{\alpha-1} (1-x)^{\beta-1}$ and skipping the terms that don't depend on x
Step15: The following function computes the marginal distributions.
Step16: And let's confirm that the marginal distributions are what they are supposed to be.
Step17: The Update
Step18: Multiple updates
Step19: One at a time | Python Code:
# If we're running on Colab, install libraries
import sys
IN_COLAB = 'google.colab' in sys.modules
if IN_COLAB:
!pip install pymc3
!pip install arviz
!pip install empiricaldist
# PyMC generates a FutureWarning we don't need to deal with yet
import warnings
warnings.filterwarnings("ignore", category=FutureWarning)
import matplotlib.pyplot as plt
def legend(**options):
"""Make a legend only if there are labels."""
handles, labels = plt.gca().get_legend_handles_labels()
if len(labels):
plt.legend(**options)
def decorate(**options):
plt.gca().set(**options)
legend()
plt.tight_layout()
from empiricaldist import Cdf
def compare_cdf(pmf, sample):
pmf.make_cdf().plot(label='grid')
Cdf.from_seq(sample).plot(label='mcmc')
print(pmf.mean(), sample.mean())
decorate()
from empiricaldist import Pmf
def make_pmf(ps, qs, name):
pmf = Pmf(ps, qs)
pmf.normalize()
pmf.index.name = name
return pmf
Explanation: Grid algorithm for a beta-binomial hierarchical model
Bayesian Inference with PyMC
Copyright 2021 Allen B. Downey
License: Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)
End of explanation
import os
filename = 'DeathHeartAttackManhattan.csv'
if not os.path.exists(filename):
!wget https://github.com/AllenDowney/BayesianInferencePyMC/raw/main/DeathHeartAttackManhattan.csv
import pandas as pd
df = pd.read_csv(filename)
df
Explanation: Heart Attack Data
This example is based on Chapter 10 of Probability and Bayesian Modeling; it uses data on death rates due to heart attack for patients treated at various hospitals in New York City.
We can use Pandas to read the data into a DataFrame.
End of explanation
data_ns = df['Cases'].values
data_ks = df['Deaths'].values
Explanation: The columns we need are Cases, which is the number of patients treated at each hospital, and Deaths, which is the number of those patients who died.
End of explanation
import pymc3 as pm
def make_model():
with pm.Model() as model:
alpha = pm.Gamma('alpha', alpha=4, beta=0.5)
beta = pm.Gamma('beta', alpha=4, beta=0.5)
xs = pm.Beta('xs', alpha, beta, shape=len(data_ns))
ks = pm.Binomial('ks', n=data_ns, p=xs, observed=data_ks)
return model
%time model = make_model()
with model:
%time trace = pm.sample(500)
Explanation: Hospital Data with PyMC
Here's a hierarchical model that estimates the death rate for each hospital, and simultaneously estimates the distribution of rates across hospitals.
End of explanation
pm.model_to_graphviz(model)
Explanation: Here's the graph representation of the model, showing that the observable is an array of 13 values.
End of explanation
import arviz as az
with model:
az.plot_posterior(trace, var_names=['alpha', 'beta'])
Explanation: Here are the posterior distributions of alpha and beta.
End of explanation
trace_xs = trace['xs'].transpose()
trace_xs.shape
Explanation: And we can extract the posterior distributions of the xs.
End of explanation
with model:
az.plot_posterior(trace_xs[0])
Explanation: As an example, here's the posterior distribution of x for the first hospital.
End of explanation
i = 3
data_n = data_ns[i]
data_k = data_ks[i]
def make_model1():
with pm.Model() as model1:
alpha = pm.Gamma('alpha', alpha=4, beta=0.5)
beta = pm.Gamma('beta', alpha=4, beta=0.5)
x = pm.Beta('x', alpha, beta)
k = pm.Binomial('k', n=data_n, p=x, observed=data_k)
return model1
model1 = make_model1()
with model1:
pred1 = pm.sample_prior_predictive(1000)
trace1 = pm.sample(500)
Explanation: Just one update
End of explanation
pm.model_to_graphviz(model1)
Cdf.from_seq(pred1['alpha']).plot(label='prior', color='gray')
Cdf.from_seq(trace1['alpha']).plot(label='posterior')
decorate(title='Distribution of alpha')
Cdf.from_seq(pred1['beta']).plot(label='prior', color='gray')
Cdf.from_seq(trace1['beta']).plot(label='posterior')
decorate(title='Distribution of beta')
Cdf.from_seq(pred1['x']).plot(label='prior', color='gray')
Cdf.from_seq(trace1['x']).plot(label='posterior')
decorate(title='Distribution of x')
Explanation: Here's the graphical representation of the model.
End of explanation
import numpy as np
from scipy.stats import gamma
alpha = 4
beta = 0.5
alphas = np.linspace(0.1, 30, 100)
ps = gamma(alpha, scale=1/beta).pdf(alphas)
prior_alpha = make_pmf(ps, alphas, 'alpha')
compare_cdf(prior_alpha, pred1['alpha'])
decorate(title='Prior distribution of alpha')
betas = np.linspace(0.1, 50, 90)
ps = gamma(alpha, scale=1/beta).pdf(betas)
prior_beta = make_pmf(ps, betas, 'beta')
compare_cdf(prior_beta, pred1['beta'])
decorate(title='Prior distribution of beta')
Explanation: The grid priors
End of explanation
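As a quick sanity check (a sketch): a Gamma(alpha=4, beta=0.5) prior has mean alpha/beta = 8, so both grid priors should have means close to 8, up to truncation of the grid:
print(prior_alpha.mean(), prior_beta.mean())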
def make_hyper(prior_alpha, prior_beta):
PA, PB = np.meshgrid(prior_alpha.ps, prior_beta.ps, indexing='ij')
hyper = PA * PB
return hyper
prior_hyper = make_hyper(prior_alpha, prior_beta)
prior_hyper.shape
import pandas as pd
from utils import plot_contour
plot_contour(pd.DataFrame(prior_hyper,
index=prior_alpha.index,
columns=prior_beta.index))
decorate(title="Joint prior of alpha and beta")
(prior_hyper == 0).sum()
Explanation: The joint distribution of hyperparameters
End of explanation
A, B, X = np.meshgrid(alphas, betas, xs, indexing='ij')
from scipy.stats import beta as betadist
%time betapdf = betadist.pdf(X, A, B)
Explanation: Joint prior of alpha, beta, and x
End of explanation
xs = np.linspace(0.01, 0.99, 99)
logx = np.log(xs)
logy = np.log(1-xs)
logpdf = (A-1) * logx + (B-1) * logy
betapdf = np.exp(logpdf)
totals = betapdf.sum(axis=2)
shape = totals.shape + (1,)
betapdf /= totals.reshape(shape)
def make_prior(hyper):
# reshape hyper so we can multiply along axis 0
shape = hyper.shape + (1,)
prior = betapdf * hyper.reshape(shape)
return prior
%time prior = make_prior(prior_hyper)
prior.sum()
Explanation: We can speed this up by computing just $x^{\alpha-1} (1-x)^{\beta-1}$ and skipping the terms that don't depend on x
End of explanation
def marginal(joint, axis):
axes = [i for i in range(3) if i != axis]
return joint.sum(axis=tuple(axes))
Explanation: The following function computes the marginal distributions.
End of explanation
prior_alpha.plot()
marginal_alpha = Pmf(marginal(prior, 0), prior_alpha.qs)
marginal_alpha.plot()
decorate(title='Checking the marginal distribution of alpha')
prior_beta.plot()
marginal_beta = Pmf(marginal(prior, 1), prior_beta.qs)
marginal_beta.plot()
decorate(title='Checking the marginal distribution of beta')
prior_x = Pmf(marginal(prior, 2), xs)
prior_x.plot()
decorate(title='Prior distribution of x')
marginal_x = Pmf(marginal(prior, 2), xs)
compare_cdf(marginal_x, pred1['x'])
decorate(title='Checking the marginal distribution of x')
def get_hyper(joint):
return joint.sum(axis=2)
hyper = get_hyper(prior)
plot_contour(pd.DataFrame(hyper,
index=prior_alpha.index,
columns=prior_beta.index))
decorate(title="Joint prior of alpha and beta")
Explanation: And let's confirm that the marginal distributions are what they are supposed to be.
End of explanation
from scipy.stats import binom
like_x = binom.pmf(data_k, data_n, xs)
like_x.shape
plt.plot(xs, like_x)
decorate(title='Likelihood of the data')
def update(prior, data):
n, k = data
like_x = binom.pmf(k, n, xs)
posterior = prior * like_x
posterior /= posterior.sum()
return posterior
data = data_n, data_k
%time posterior = update(prior, data)
marginal_alpha = Pmf(marginal(posterior, 0), prior_alpha.qs)
compare_cdf(marginal_alpha, trace1['alpha'])
marginal_beta = Pmf(marginal(posterior, 1), prior_beta.qs)
compare_cdf(marginal_beta, trace1['beta'])
marginal_x = Pmf(marginal(posterior, 2), xs)
compare_cdf(marginal_x, trace1['x'])
marginal_x.mean(), trace1['x'].mean()
posterior_hyper = get_hyper(posterior)
plot_contour(pd.DataFrame(posterior_hyper,
index=prior_alpha.index,
columns=prior_beta.index))
decorate(title="Joint posterior of alpha and beta")
like_hyper = posterior_hyper / prior_hyper
plot_contour(pd.DataFrame(like_hyper,
index=prior_alpha.index,
columns=prior_beta.index))
decorate(title="Likelihood of alpha and beta")
Explanation: The Update
End of explanation
prior = make_prior(prior_hyper)
prior.shape
def multiple_updates(prior, ns, ks, xs):
for data in zip(ns, ks):
print(data)
posterior = update(prior, data)
hyper = get_hyper(posterior)
prior = make_prior(hyper)
return posterior
%time posterior = multiple_updates(prior, data_ns, data_ks, xs)
marginal_alpha = Pmf(marginal(posterior, 0), prior_alpha.qs)
compare_cdf(marginal_alpha, trace['alpha'])
marginal_beta = Pmf(marginal(posterior, 1), prior_beta.qs)
compare_cdf(marginal_beta, trace['beta'])
marginal_x = Pmf(marginal(posterior, 2), prior_x.qs)
compare_cdf(marginal_x, trace_xs[-1])
posterior_hyper = get_hyper(posterior)
plot_contour(pd.DataFrame(posterior_hyper,
index=prior_alpha.index,
columns=prior_beta.index))
decorate(title="Joint posterior of alpha and beta")
like_hyper = posterior_hyper / prior_hyper
plot_contour(pd.DataFrame(like_hyper,
index=prior_alpha.index,
columns=prior_beta.index))
decorate(title="Likelihood of alpha and beta")
Explanation: Multiple updates
End of explanation
def compute_likes_hyper(ns, ks):
shape = ns.shape + alphas.shape + betas.shape
likes_hyper = np.empty(shape)
for i, data in enumerate(zip(ns, ks)):
print(data)
n, k = data
like_x = binom.pmf(k, n, xs)
posterior = betapdf * like_x
likes_hyper[i] = posterior.sum(axis=2)
print(likes_hyper[i].sum())
return likes_hyper
%time likes_hyper = compute_likes_hyper(data_ns, data_ks)
likes_hyper.sum()
like_hyper_all = likes_hyper.prod(axis=0)
like_hyper_all.sum()
plot_contour(pd.DataFrame(like_hyper_all,
index=alphas,
columns=betas))
decorate(title="Likelihood of alpha and beta")
posterior_hyper_all = prior_hyper * like_hyper_all
posterior_hyper_all /= posterior_hyper_all.sum()
np.allclose(posterior_hyper_all, posterior_hyper)
marginal_alpha2 = Pmf(posterior_hyper_all.sum(axis=1), prior_alpha.qs)
marginal_alpha2.make_cdf().plot()
marginal_alpha.make_cdf().plot()
np.allclose(marginal_alpha, marginal_alpha2)
marginal_beta2 = Pmf(posterior_hyper_all.sum(axis=0), prior_beta.qs)
marginal_beta2.make_cdf().plot()
marginal_beta.make_cdf().plot()
np.allclose(marginal_beta, marginal_beta2)
plot_contour(pd.DataFrame(posterior_hyper_all,
index=alphas,
columns=betas))
decorate(title="Joint posterior of alpha and beta")
i = 3
data = data_ns[i], data_ks[i]
data
hyper_i = prior_hyper * like_hyper_all / likes_hyper[i]
hyper_i.sum()
prior_i = make_prior(hyper_i)
posterior_i = update(prior_i, data)
Pmf(marginal(posterior_i, 0), prior_alpha.qs).make_cdf().plot()
marginal_alpha.make_cdf().plot()
Pmf(marginal(posterior_i, 1), prior_beta.qs).make_cdf().plot()
marginal_beta.make_cdf().plot()
marginal_alpha = Pmf(marginal(posterior_i, 0), prior_alpha.qs)
marginal_beta = Pmf(marginal(posterior_i, 1), prior_beta.qs)
marginal_x = Pmf(marginal(posterior_i, 2), prior_x.qs)
compare_cdf(marginal_alpha, trace['alpha'])
compare_cdf(marginal_beta, trace['beta'])
compare_cdf(marginal_x, trace_xs[i])
def compute_all_marginals(ns, ks):
prior = prior_hyper * like_hyper_all
for i, data in enumerate(zip(ns, ks)):
print(data)
n, k = data
hyper_i = prior / likes_hyper[i]
prior_i = make_prior(hyper_i)
posterior_i = update(prior_i, data)
marginal_x = Pmf(marginal(posterior_i, 2), xs)
marginal_x.make_cdf().plot()
%time compute_all_marginals(data_ns, data_ks)
Explanation: One at a time
End of explanation |
12,252 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Python Coffee, November 5, 2015
Import required libraries
Step1: The previous import code requires that you have pandas, numpy and matplotlib installed. If you are using conda
you already have all of this libraries installed. Otherwise, use pip to install them. The magic command %matplotlib inline loads the required variables and tools needed to embed matplotlib figures in a ipython notebook.
Import optional libraries to use plotly.
Plot.ly is a cloud based visualization tool, which has a mature python API. It is very useful to create profesional looking and interactive plots, that are
shared publicly on the cloud; so be careful on publishing only data that you want (and can) share.
Installing plot.ly is done easily with pip or conda, but it requires you to create an account and then require a API token. If you don't want to install it, you can jump this section.
Step2: Import data file with pandas
Step3: df is an instance of the pandas object (data structure) pandas.DataFrame. A DataFrame instance has several methods (functions) to operate on the object. For example, it is easy to display the data for a first exploration of what it contains using .head()
Step4: A DataFrame can be converted into a numpy array by using the method .values
Step5: For numpy experts, there are also methods to access the data using the numpy conventions. If you want to extract the data at the coordinate (0,1) you can do
Step6: You can also use the column names and index keys to extract, for example, the name of the first antenna in a baseline pair from row 3
Step7: DataFrames are objects containing tabular data that can be grouped by columns and then used to aggregate data. Let's say you want to obtain the mean frequency for the baselines and the number of channels used
Step8: Plot.ly
Step9: Matplotlib | Python Code:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
%matplotlib inline
Explanation: Python Coffee, November 5, 2015
Import required libraries
End of explanation
import plotly.tools as tls
import plotly.plotly as py
import cufflinks as cf
import plotly
plotly.offline.init_notebook_mode()
cf.offline.go_offline()
Explanation: The previous import code requires that you have pandas, numpy and matplotlib installed. If you are using conda
you already have all of these libraries installed. Otherwise, use pip to install them. The magic command %matplotlib inline loads the required variables and tools needed to embed matplotlib figures in an IPython notebook.
Import optional libraries to use plotly.
Plot.ly is a cloud based visualization tool, which has a mature Python API. It is very useful to create professional looking and interactive plots that are
shared publicly on the cloud; so be careful to publish only data that you want (and can) share.
Installing plot.ly is done easily with pip or conda, but it requires you to create an account and then request an API token. If you don't want to install it, you can skip this section.
End of explanation
df = pd.read_csv('data_files/baseline_channels_phase.txt', sep=' ')
Explanation: Import data file with pandas
End of explanation
df.head()
Explanation: df is an instance of the pandas object (data structure) pandas.DataFrame. A DataFrame instance has several methods (functions) to operate on the object. For example, it is easy to display the data for a first exploration of what it contains using .head()
End of explanation
df.values
Explanation: A DataFrame can be converted into a numpy array by using the method .values:
End of explanation
df.iloc[0,1]
Explanation: For numpy experts, there are also methods to access the data using the numpy conventions. If you want to extract the data at the coordinate (0,1) you can do:
End of explanation
df.ix[3, 'ant1name']
Explanation: You can also use the column names and index keys to extract, for example, the name of the first antenna in a baseline pair from row 3:
End of explanation
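A small aside, not from the original talk: .ix has since been deprecated in pandas; the equivalent label-based lookup uses .loc.
# Modern pandas equivalent of df.ix[3, 'ant1name']
df.loc[3, 'ant1name']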
data_group = df.groupby(['ant1name', 'ant2name'])
df2 = data_group.agg({'freq': np.mean, 'chan': np.count_nonzero}).reset_index()
df2.head()
data_raw = df.groupby(['ant1name', 'ant2name', 'chan']).y.mean()
data_raw.head(30)
data_raw.unstack().head(20)
pd.options.display.max_columns = 200
data_raw.unstack().head(20)
data_raw = data_raw.unstack().reset_index()
data_raw.head()
data_raw.to_excel('test.xls', index=False)
todegclean = np.degrees(np.arcsin(np.sin(np.radians(data_raw.iloc[:,2:]))))
todegclean.head()
todegclean['mean'] = todegclean.mean(axis=1)
todegclean.head()
data_clean = todegclean.iloc[:,:-1].apply(lambda x: x - todegclean.iloc[:,-1])
data_clean.head(20)
data_ready = pd.merge(data_raw[['ant1name', 'ant2name']], todegclean, left_index=True, right_index=True)
data_ready.head()
Explanation: DataFrames are objects containing tabular data that can be grouped by columns and then used to aggregate data. Let's say you want to obtain the mean frequency for the baselines and the number of channels used:
End of explanation
data_clean2 = data_clean.unstack().reset_index().copy()
data_clean2.query('100 < level_1 < 200')
data_clean2.query('100 < level_1 < 200').iplot(kind='scatter3d', x='chan', y='level_1', mode='markers', z=0, size=6,
title='Phase BL', filename='phase_test', width=1, opacity=0.8, colors='blue', symbol='circle',
layout={'scene': {'aspectratio': {'x': 1, 'y': 3, 'z': 0.7}}})
ploting = data_clean2.query('100 < level_1 < 200').figure(kind='scatter3d', x='chan', y='level_1', mode='markers', z=0, size=6,
title='Phase BL', filename='phase_test', width=1, opacity=0.8, colors='blue', symbol='circle',
layout={'scene': {'aspectratio': {'x': 1, 'y': 3, 'z': 0.7}}})
# ploting
ploting.data[0]['marker']['color'] = 'blue'
ploting.data[0]['marker']['line'] = {'color': 'blue', 'width': 0.5}
ploting.data[0]['marker']['opacity'] = 0.5
plotly.offline.iplot(ploting)
Explanation: Plot.ly
End of explanation
fig=plt.figure()
ax=fig.gca(projection='3d')
X = np.arange(0, data_clean.shape[1],1)
Y = np.arange(0, data_clean.shape[0],1)
X, Y = np.meshgrid(X,Y)
surf = ax.scatter(X, Y, data_clean, '.', c=data_clean,s=2,lw=0,cmap='winter')
%matplotlib notebook
fig=plt.figure()
ax=fig.gca(projection='3d')
X = np.arange(0, data_clean.shape[1],1)
Y = np.arange(0, data_clean.shape[0],1)
X, Y = np.meshgrid(X,Y)
surf = ax.scatter(X, Y, data_clean, '.', c=data_clean,s=2,lw=0,cmap='winter')
data_clean2.plot(kind='scatter', x='chan', y=0)
import seaborn as sns
data_clean2.plot(kind='scatter', x='level_1', y=0)
data_ready['noise'] = todegclean.iloc[:,2:].std(axis=1)
data_ready[['ant1name', 'ant2name', 'noise']].head(10)
corr = data_ready[['ant1name', 'ant2name', 'noise']].pivot_table(index=['ant1name'], columns=['ant2name'])
corr.columns.levels[1]
corr2 = pd.DataFrame(corr.values, index=corr.index.values, columns=corr.columns.levels[1].values)
corr2.head(10)
f, ax = plt.subplots(figsize=(11, 9))
cmap = sns.diverging_palette(220, 10, as_cmap=True)
sns.heatmap(corr2, cmap=cmap,
square=True, xticklabels=5, yticklabels=5,
linewidths=.5, cbar_kws={"shrink": .5}, ax=ax)
?sns.heatmap
Explanation: Matplotlib
End of explanation |
12,253 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exercise Introduction
We will return to the automatic rotation problem you worked on in the previous exercise. But we'll add data augmentation to improve your model.
The model specification and compilation steps don't change when you start using data augmentation. The code you've already worked with for specifying and compiling a model is in the cell below. Run it so you'll be ready to work on data augmentation.
Step1: 1) Fit the Model Using Data Augmentation
Here is some code to set up some ImageDataGenerators. Run it, and then answer the questions below about it.
Step2: Why do we need both a generator with augmentation and a generator without augmentation? After thinking about it, check out the solution below.
Step3: 2) Choosing Augmentation Types
ImageDataGenerator offers many types of data augmentation. For example, one argument is rotation_range. This rotates each image by a random amount that can be up to whatever value you specify.
Would it be sensible to use automatic rotation for this problem? Why or why not?
Step4: 3) Code
Fill in the missing pieces in the following code. We've supplied some boilerplate. You need to think about what ImageDataGenerator is used for each data source.
Step5: 4) Did Data Augmentation Help?
How could you test whether data augmentation improved your model accuracy? | Python Code:
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Flatten, GlobalAveragePooling2D
num_classes = 2
resnet_weights_path = '../input/resnet50/resnet50_weights_tf_dim_ordering_tf_kernels_notop.h5'
my_new_model = Sequential()
my_new_model.add(ResNet50(include_top=False, pooling='avg', weights=resnet_weights_path))
my_new_model.add(Dense(num_classes, activation='softmax'))
my_new_model.layers[0].trainable = False
my_new_model.compile(optimizer='sgd', loss='categorical_crossentropy', metrics=['accuracy'])
# Set up code checking
from learntools.core import binder
binder.bind(globals())
from learntools.deep_learning.exercise_5 import *
print("Setup Complete")
Explanation: Exercise Introduction
We will return to the automatic rotation problem you worked on in the previous exercise. But we'll add data augmentation to improve your model.
The model specification and compilation steps don't change when you start using data augmentation. The code you've already worked with for specifying and compiling a model is in the cell below. Run it so you'll be ready to work on data augmentation.
End of explanation
from tensorflow.keras.applications.resnet50 import preprocess_input
from tensorflow.keras.preprocessing.image import ImageDataGenerator
image_size = 224
# Specify the values for all arguments to data_generator_with_aug.
data_generator_with_aug = ImageDataGenerator(preprocessing_function=preprocess_input,
horizontal_flip = True,
width_shift_range = 0.1,
height_shift_range = 0.1)
data_generator_no_aug = ImageDataGenerator(preprocessing_function=preprocess_input)
Explanation: 1) Fit the Model Using Data Augmentation
Here is some code to set up some ImageDataGenerators. Run it, and then answer the questions below about it.
End of explanation
# Check your answer (Run this code cell to receive credit!)
q_1.solution()
Explanation: Why do we need both a generator with augmentation and a generator without augmentation? After thinking about it, check out the solution below.
End of explanation
# Check your answer (Run this code cell to receive credit!)
q_2.solution()
Explanation: 2) Choosing Augmentation Types
ImageDataGenerator offers many types of data augmentation. For example, one argument is rotation_range. This rotates each image by a random amount that can be up to whatever value you specify.
Would it be sensible to use automatic rotation for this problem? Why or why not?
End of explanation
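For reference only (this snippet is illustrative and not part of the exercise), a rotation-based generator would be configured like this; think about whether such a transform makes sense for a rotation-detection problem.
# Hypothetical generator using rotation_range: each image is rotated by a random angle up to 30 degrees.
data_generator_with_rotation = ImageDataGenerator(
    preprocessing_function=preprocess_input,
    rotation_range=30)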
# Specify which type of ImageDataGenerator above is to load in training data
train_generator = data_generator_with_aug.flow_from_directory(
directory = '../input/dogs-gone-sideways/images/train',
target_size=(image_size, image_size),
batch_size=12,
class_mode='categorical')
# Specify which type of ImageDataGenerator above is to load in validation data
validation_generator = data_generator_no_aug.flow_from_directory(
directory = '../input/dogs-gone-sideways/images/val',
target_size=(image_size, image_size),
class_mode='categorical')
my_new_model.fit_generator(
____, # if you don't know what argument goes first, try the hint
epochs = 3,
steps_per_epoch=19,
validation_data=____)
# Check your answer
q_3.check()
# q_3.hint()
# q_3.solution()
#%%RM_IF(PROD)%%
train_generator = data_generator_with_aug.flow_from_directory(
directory = '../input/dogs-gone-sideways/images/train',
target_size=(image_size, image_size),
batch_size=12,
class_mode='categorical')
# Specify which type of ImageDataGenerator above is to load in validation data
validation_generator = data_generator_no_aug.flow_from_directory(
directory = '../input/dogs-gone-sideways/images/val',
target_size=(image_size, image_size),
class_mode='categorical')
my_new_model.fit_generator(
train_generator,
epochs = 3,
steps_per_epoch=19,
validation_data=validation_generator)
q_3.assert_check_passed()
Explanation: 3) Code
Fill in the missing pieces in the following code. We've supplied some boilerplate. You need to think about what ImageDataGenerator is used for each data source.
End of explanation
# Check your answer (Run this code cell to receive credit!)
q_4.solution()
Explanation: 4) Did Data Augmentation Help?
How could you test whether data augmentation improved your model accuracy?
End of explanation |
12,254 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Convolutional Autoencoder on MNIST dataset
Learning Objective
1. Build an autoencoder architecture (consisting of an encoder and decoder) in Keras
2. Define the loss using the reconstructive error
3. Define a training step for the autoencoder using tf.GradientTape()
4. Train the autoencoder on the MNIST dataset
Introduction
This notebook demonstrates how to build and train a convolutional autoencoder.
Autoencoders consist of two models
Step1: Next, we'll define some of the environment variables we'll use in this notebook. Note that we are setting the EMBED_DIM to be 64. This is the dimension of the latent space for our autoencoder.
Step2: Load and prepare the dataset
For this notebook, we will use the MNIST dataset to train the autoencoder. The encoder will map the handwritten digits into the latent space, to force a lower dimensional representation and the decoder will then map the encoding back.
Step3: Next, we define our input pipeline using tf.data. The pipeline below reads in train_images as tensor slices and then shuffles and batches the examples for training.
Step4: Create the encoder and decoder models
Both our encoder and decoder models will be defined using the Keras Sequential API.
The Encoder
The encoder uses tf.keras.layers.Conv2D layers to map the image into a lower-dimensional latent space. We will start with an image of size 28x28x1 and then use convolution layers to map into a final Dense layer.
Exercise. Complete the code below to create the CNN-based encoder model. Your model should have input_shape to be 28x28x1 and end with a final Dense layer the size of embed_dim.
Step5: The Decoder
The decoder uses tf.keras.layers.Conv2DTranspose (upsampling) layers to produce an image from the latent space. We will start with a Dense layer with the same input shape as embed_dim, then upsample several times until you reach the desired image size of 28x28x1.
Exercise. Complete the code below to create the decoder model. Start with a Dense layer that takes as input a tensor of size embed_dim. Use tf.keras.layers.Conv2DTranspose over multiple layers to upsample so that the final layer has shape 28x28x1 (the shape of our original MNIST digits).
Hint
Step6: Finally, we stitch the encoder and decoder models together to create our autoencoder.
Step7: Using .summary() we can have a high-level summary of the full autoencoder model as well as the individual encoder and decoder. Note how the shapes of the tensors mirror each other as data is passed through the encoder and then the decoder.
Step8: Next, we define the loss for our autoencoder model. The loss we will use is the reconstruction error. This loss is similar to the MSE loss we've commonly use for regression. Here we are applying this error pixel-wise to compare the original MNIST image and the image reconstructed from the decoder.
Step9: Optimizer for the autoencoder
Next we define the optimizer for model, specifying the learning rate.
Step10: Save checkpoints
This notebook also demonstrates how to save and restore models, which can be helpful in case a long running training task is interrupted.
Step11: Define the training loop
Next, we define the training loop for training our autoencoder. The train step will use tf.GradientTape() to keep track of gradient steps through training.
Exercise.
Complete the code below to define the training loop for our autoencoder. Notice the use of tf.function below. This annotation causes the function train_step to be "compiled". The train_step function takes as input a batch of images and passes them through the ae_model. The gradient is then computed on the loss against the ae_model output and the original image. In the code below, you should
- define ae_gradients. This is the gradient of the autoencoder loss with respect to the variables of the ae_model.
- create the gradient_variables by assigning each ae_gradient computed above to it's respective training variable.
- apply the gradient step using the optimizer
Step12: We use the train_step function above to define training of our autoencoder. Note here, the train function takes as argument the tf.data dataset and the number of epochs for training.
Step13: Generate and save images.
We'll use a small helper function to generate images and save them.
Step14: Let's see how our model performs before any training. We'll take as input the first 16 digits of the MNIST test set. Right now they just look like random noise.
Step15: Train the model
Call the train() method defined above to train the autoencoder model.
We'll print the resulting images as training progresses. At the beginning of the training, the decoded images look like random noise. As training progresses, the model outputs will look increasingly better. After about 50 epochs, they resemble MNIST digits. This may take about one or two minutes / epoch
Step16: Create a GIF
Lastly, we'll create a gif that shows the progression of our produced images through training. | Python Code:
import glob
import os
import time
import imageio
import matplotlib.pyplot as plt
import numpy as np
import PIL
import tensorflow as tf
from IPython import display
from tensorflow.keras import layers
Explanation: Convolutional Autoencoder on MNIST dataset
Learning Objective
1. Build an autoencoder architecture (consisting of an encoder and decoder) in Keras
2. Define the loss using the reconstructive error
3. Define a training step for the autoencoder using tf.GradientTape()
4. Train the autoencoder on the MNIST dataset
Introduction
This notebook demonstrates how to build and train a convolutional autoencoder.
Autoencoders consist of two models: an encoder and a decoder.
<img src="../assets/autoencoder2.png" width="600">
In this notebook we'll build an autoencoder to recreate MNIST digits. This notebook demonstrates this process on the MNIST dataset. The following animation shows a series of images produced by the generator as it was trained for 100 epochs. The images increasingly resemble hand written digits as the autoencoder learns to reconstruct the original images.
<img src="../assets/autoencoder.gif">
Import TensorFlow and other libraries
End of explanation
np.random.seed(1)
tf.random.set_seed(1)
BATCH_SIZE = 128
BUFFER_SIZE = 60000
EPOCHS = 60
LR = 1e-2
EMBED_DIM = 64 # intermediate_dim
Explanation: Next, we'll define some of the environment variables we'll use in this notebook. Note that we are setting the EMBED_DIM to be 64. This is the dimension of the latent space for our autoencoder.
End of explanation
(train_images, _), (test_images, _) = tf.keras.datasets.mnist.load_data()
train_images = train_images.reshape(train_images.shape[0], 28, 28, 1).astype(
"float32"
)
train_images = (train_images - 127.5) / 127.5 # Normalize the images to [-1, 1]
Explanation: Load and prepare the dataset
For this notebook, we will use the MNIST dataset to train the autoencoder. The encoder will map the handwritten digits into the latent space, to force a lower dimensional representation and the decoder will then map the encoding back.
End of explanation
# Batch and shuffle the data
train_dataset = tf.data.Dataset.from_tensor_slices(train_images)
train_dataset = train_dataset.shuffle(BUFFER_SIZE).batch(BATCH_SIZE)
train_dataset = train_dataset.prefetch(BATCH_SIZE * 4)
test_images = test_images.reshape(test_images.shape[0], 28, 28, 1).astype(
"float32"
)
test_images = (test_images - 127.5) / 127.5 # Normalize the images to [-1, 1]
Explanation: Next, we define our input pipeline using tf.data. The pipeline below reads in train_images as tensor slices and then shuffles and batches the examples for training.
End of explanation
# TODO 1.
def make_encoder(embed_dim):
model = tf.keras.Sequential(name="encoder")
model.add(
layers.Conv2D(
64, (5, 5), strides=(2, 2), padding="same", input_shape=[28, 28, 1]
)
)
model.add(layers.LeakyReLU())
model.add(layers.Dropout(0.3))
model.add(layers.Conv2D(128, (5, 5), strides=(2, 2), padding="same"))
model.add(layers.LeakyReLU())
model.add(layers.Dropout(0.3))
model.add(layers.Flatten())
model.add(layers.Dense(embed_dim))
assert model.output_shape == (None, embed_dim)
return model
Explanation: Create the encoder and decoder models
Both our encoder and decoder models will be defined using the Keras Sequential API.
The Encoder
The encoder uses tf.keras.layers.Conv2D layers to map the image into a lower-dimensional latent space. We will start with an image of size 28x28x1 and then use convolution layers to map into a final Dense layer.
Exercise. Complete the code below to create the CNN-based encoder model. Your model should have input_shape to be 28x28x1 and end with a final Dense layer the size of embed_dim.
End of explanation
# TODO 1.
def make_decoder(embed_dim):
model = tf.keras.Sequential(name="decoder")
model.add(layers.Dense(embed_dim, use_bias=False, input_shape=(embed_dim,)))
model.add(layers.Dense(6272, use_bias=False, input_shape=(embed_dim,)))
model.add(layers.BatchNormalization())
model.add(layers.LeakyReLU())
model.add(layers.Reshape((7, 7, 128)))
model.add(
layers.Conv2DTranspose(
128, (5, 5), strides=(1, 1), padding="same", use_bias=False
)
)
model.add(layers.BatchNormalization())
model.add(layers.LeakyReLU())
model.add(
layers.Conv2DTranspose(
64, (5, 5), strides=(2, 2), padding="same", use_bias=False
)
)
model.add(layers.BatchNormalization())
model.add(layers.LeakyReLU())
model.add(
layers.Conv2DTranspose(
1,
(5, 5),
strides=(2, 2),
padding="same",
use_bias=False,
activation="tanh",
)
)
assert model.output_shape == (None, 28, 28, 1)
return model
Explanation: The Decoder
The decoder uses tf.keras.layers.Conv2DTranspose (upsampling) layers to produce an image from the latent space. We will start with a Dense layer with the same input shape as embed_dim, then upsample several times until you reach the desired image size of 28x28x1.
Exercise. Complete the code below to create the decoder model. Start with a Dense layer that takes as input a tensor of size embed_dim. Use tf.keras.layers.Conv2DTranspose over multiple layers to upsample so that the final layer has shape 28x28x1 (the shape of our original MNIST digits).
Hint: Experiment with using BatchNormalization or different activation functions like LeakyReLU.
End of explanation
ae_model = tf.keras.models.Sequential(
[make_encoder(EMBED_DIM), make_decoder(EMBED_DIM)]
)
Explanation: Finally, we stitch the encoder and decoder models together to create our autoencoder.
End of explanation
ae_model.summary()
make_encoder(EMBED_DIM).summary()
make_decoder(EMBED_DIM).summary()
Explanation: Using .summary() we can have a high-level summary of the full autoencoder model as well as the individual encoder and decoder. Note how the shapes of the tensors mirror each other as data is passed through the encoder and then the decoder.
End of explanation
# TODO 2.
def loss(model, original):
reconstruction_error = tf.reduce_mean(
tf.square(tf.subtract(model(original), original))
)
return reconstruction_error
Explanation: Next, we define the loss for our autoencoder model. The loss we will use is the reconstruction error. This loss is similar to the MSE loss we commonly use for regression. Here we are applying this error pixel-wise to compare the original MNIST image and the image reconstructed from the decoder.
End of explanation
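As a quick illustrative check (not part of the lab): a "model" that returns its input unchanged reconstructs perfectly, so the pixel-wise error is exactly zero.
# Illustrative only: the identity mapping gives a reconstruction error of 0.
sample = tf.zeros((1, 28, 28, 1))
print(loss(lambda x: x, sample).numpy())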
optimizer = tf.keras.optimizers.SGD(lr=LR)
Explanation: Optimizer for the autoencoder
Next we define the optimizer for the model, specifying the learning rate.
End of explanation
checkpoint_dir = "./ae_training_checkpoints"
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
checkpoint = tf.train.Checkpoint(optimizer=optimizer, model=ae_model)
Explanation: Save checkpoints
This notebook also demonstrates how to save and restore models, which can be helpful in case a long running training task is interrupted.
End of explanation
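To resume training later, one would restore the most recent checkpoint, as sketched below (on the first run there is nothing to restore yet, which is fine).
# Restore the latest checkpoint, if any; safe to call even when none exists yet.
checkpoint.restore(tf.train.latest_checkpoint(checkpoint_dir))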
# TODO 3.
@tf.function
def train_step(images):
    with tf.GradientTape() as tape:
        # Record the forward pass so its gradient can be taken: reconstruction loss for this batch.
        reconstruction_loss = loss(ae_model, images)
    # Gradient of the loss with respect to every trainable variable of the autoencoder.
    ae_gradients = tape.gradient(reconstruction_loss, ae_model.trainable_variables)
    # Pair each gradient with its corresponding variable and apply a single optimizer step.
    gradient_variables = zip(ae_gradients, ae_model.trainable_variables)
    optimizer.apply_gradients(gradient_variables)
Explanation: Define the training loop
Next, we define the training loop for training our autoencoder. The train step will use tf.GradientTape() to keep track of gradient steps through training.
Exercise.
Complete the code below to define the training loop for our autoencoder. Notice the use of tf.function below. This annotation causes the function train_step to be "compiled". The train_step function takes as input a batch of images and passes them through the ae_model. The gradient is then computed on the loss against the ae_model output and the original image. In the code below, you should
- define ae_gradients. This is the gradient of the autoencoder loss with respect to the variables of the ae_model.
- create the gradient_variables by assigning each ae_gradient computed above to its respective training variable.
- apply the gradient step using the optimizer
End of explanation
def train(dataset, epochs):
for epoch in range(epochs):
start = time.time()
for image_batch in dataset:
train_step(image_batch)
# Produce images for the GIF as we go
display.clear_output(wait=True)
generate_and_save_images(ae_model, epoch + 1, test_images[:16, :, :, :])
# Save the model every 5 epochs
if (epoch + 1) % 5 == 0:
checkpoint.save(file_prefix=checkpoint_prefix)
print(f"Time for epoch {epoch + 1} is {time.time() - start} sec")
# Generate after the final epoch
display.clear_output(wait=True)
generate_and_save_images(ae_model, epochs, test_images[:16, :, :, :])
Explanation: We use the train_step function above to define training of our autoencoder. Note here, the train function takes as argument the tf.data dataset and the number of epochs for training.
End of explanation
def generate_and_save_images(model, epoch, test_input):
# Notice `training` is set to False.
# This is so all layers run in inference mode (batchnorm).
predictions = model(test_input, training=False)
fig = plt.figure(figsize=(4, 4))
for i in range(predictions.shape[0]):
plt.subplot(4, 4, i + 1)
pixels = predictions[i, :, :] * 127.5 + 127.5
pixels = np.array(pixels, dtype="float")
pixels = pixels.reshape((28, 28))
plt.imshow(pixels, cmap="gray")
plt.axis("off")
plt.savefig(f"image_at_epoch_{epoch:04d}.png")
plt.show()
Explanation: Generate and save images.
We'll use a small helper function to generate images and save them.
End of explanation
generate_and_save_images(ae_model, 4, test_images[:16, :, :, :])
Explanation: Let's see how our model performs before any training. We'll take as input the first 16 digits of the MNIST test set. Right now they just look like random noise.
End of explanation
# TODO 4.
train(train_dataset, EPOCHS)
Explanation: Train the model
Call the train() method defined above to train the autoencoder model.
We'll print the resulting images as training progresses. At the beginning of the training, the decoded images look like random noise. As training progresses, the model outputs will look increasingly better. After about 50 epochs, they resemble MNIST digits. This may take about one or two minutes / epoch
End of explanation
# Display a single image using the epoch number
def display_image(epoch_no):
    return PIL.Image.open(f"image_at_epoch_{epoch_no:04d}.png")
display_image(EPOCHS)
anim_file = "autoencoder.gif"
with imageio.get_writer(anim_file, mode="I") as writer:
    filenames = glob.glob("image_at_epoch_*.png")
filenames = sorted(filenames)
last = -1
for i, filename in enumerate(filenames):
frame = 2 * (i**0.5)
if round(frame) > round(last):
last = frame
else:
continue
image = imageio.imread(filename)
writer.append_data(image)
image = imageio.imread(filename)
writer.append_data(image)
import IPython
if IPython.version_info > (6, 2, 0, ""):
display.Image(filename=anim_file)
Explanation: Create a GIF
Lastly, we'll create a gif that shows the progression of our produced images through training.
End of explanation |
12,255 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1A.algo - Dynamic programming and shortest paths
Dynamic programming is a style of computation that shows up in many algorithms. It applies as soon as those algorithms can be written in a recursive form.
Step1: Dynamic programming is a way of solving, in a uniform manner, a class of optimization problems that all satisfy the same property. We assume the problem $P$ can be split into several parts $P_1$, $P_2$, ... If $S$ is the optimal solution of problem $P$, then each part $S_1$, $S_2$, ... of that solution, applied to the sub-problems, is also optimal.
For example, we look for the shortest path $c(A,B)$ between cities $A$ and $B$. If it goes through city $M$, then the paths $c(A,M)+c(M,B) = c(A,B)$ are also the shortest paths between cities $A,M$ and $M,B$. The proof is a simple argument by contradiction
Step2: This file can be read either with the pandas module introduced in session 10, TD 10
Step3: The values member behaves like a matrix, a list of lists
Step4: We can also use the small example presented in session 4 on files, TD 4
Step5: Each line defines a trip between two cities done in one go, without any stop. Accents have been removed from the file.
Exercise 1
Build the list of cities without duplicates.
Exercise 2
Build a dictionary { (a,b)
Step6: Exercise 7
What is the best assignment of the skis to the skiers?
Exercise 8
What are the costs of the two algorithms (shortest path and skis)?
Extensions
Step7: This file must be decompressed with 7zip if you are using pyensae < 0.8. On Linux (and Mac), you will have to use the tar command described here. | Python Code:
from jyquickhelper import add_notebook_menu
add_notebook_menu()
Explanation: 1A.algo - Dynamic programming and shortest paths
Dynamic programming is a style of computation that shows up in many algorithms. It applies as soon as those algorithms can be written in a recursive form.
End of explanation
import pyensae.datasource
pyensae.datasource.download_data("matrix_distance_7398.zip", website = "xd")
Explanation: Dynamic programming is a way of solving, in a uniform manner, a class of optimization problems that all satisfy the same property. We assume the problem $P$ can be split into several parts $P_1$, $P_2$, ... If $S$ is the optimal solution of problem $P$, then each part $S_1$, $S_2$, ... of that solution, applied to the sub-problems, is also optimal.
For example, we look for the shortest path $c(A,B)$ between cities $A$ and $B$. If it goes through city $M$, then the paths $c(A,M)+c(M,B) = c(A,B)$ are also the shortest paths between cities $A,M$ and $M,B$. The proof is a simple argument by contradiction: if the distance $c(A,M)$ were not optimal, it would be possible to build a shorter path between cities $A$ and $B$, which contradicts the initial assumption.
As a rule, these problems have a simple expression as a recurrence: if we know how to solve the problem for a sample of size $n$ and call that solution $S(n)$, then we can easily derive the solution $S(n+1)$ from $S(n)$. Sometimes the recurrence goes further back: $S(n+1) = f(S(n), S(n-1), ..., S(0))$.
The data
We retrieve the file matrix_distance_7398.txt from matrix_distance_7398.zip, which contains distances between various cities (not all of them).
End of explanation
import pandas
df = pandas.read_csv("matrix_distance_7398.txt", sep="\t", header=None, names=["v1","v2","distance"])
df.head()
Explanation: This file can be read either with the pandas module introduced in session 10, TD 10:
End of explanation
matrice = df.values
matrice[:5]
Explanation: The values member behaves like a matrix, a list of lists:
End of explanation
with open ("matrix_distance_7398.txt", "r") as f :
matrice = [ row.strip(' \n').split('\t') for row in f.readlines() ]
for row in matrice:
row[2] = float(row[2])
print(matrice[:5])
Explanation: We can also use the small example presented in session 4 on files, TD 4: Modules, files, regular expressions. The data comes as a matrix. The first two columns are strings, the last one is a numerical value that must be converted.
End of explanation
import random
skieurs = [ random.gauss(1.75, 0.1) for i in range(0,10) ]
paires = [ random.gauss(1.75, 0.1) for i in range(0,15) ]
skieurs.sort()
paires.sort()
print(skieurs)
print(paires)
Explanation: Each line defines a trip between two cities done in one go, without any stop. Accents have been removed from the file.
Exercise 1
Build the list of cities without duplicates.
Exercise 2
Build a dictionary { (a,b) : d, (b,a) : d } where a, b are cities and d is the distance between them.
We want to compute the distance between the city of Charleville-Mezieres and Bordeaux. Does that distance appear in the list of distances we have?
Shortest path algorithm
We create an array d[v] that contains, or will contain, the optimal distance between city v and Charleville-Mezieres. The value we are looking for is d['Bordeaux']. The array is initialized as follows:
d['Charleville-Mezieres'] = 0
d[v] = infinity for every $v \neq$ 'Charleville-Mezieres'.
Exercise 3
Which entries can be filled in easily first?
Exercise 4
Given a city $v$ and another one $w$, we notice that $d[w] > d[v] + dist[w,v]$. What do you propose to do? Deduce an algorithm that determines the shortest distance between Charleville-Mezieres and Bordeaux.
If the solution still eludes you, you can take inspiration from Dijkstra's algorithm.
Assigning the skis
This problem is an example where we first have to prove that the solution satisfies a certain property before a dynamic programming solution can be applied to it.
$N=10$ skiers walk into a shop to rent 10 pairs of skis (out of $M>N$). We want to give each of them a pair that fits (we assume the size of a pair of skis should be as close as possible to the skier's height). We therefore want to minimize:
$\arg \min_\sigma \sum_{i=1}^{N} \left| t_i - s_{\sigma(i)} \right|$
Where $\sigma$ is a set of $N$ pairs of skis among $M$ (an arrangement, to be more precise).
At first sight, the solution has to be searched for in the set of arrangements of $N$ pairs out of $M$. But if we sort the pairs and the skiers by increasing size, $t_1 \leqslant t_2 \leqslant ... \leqslant t_N$ (skier heights) and $s_1 \leqslant s_2 \leqslant ... \leqslant s_M$ (ski sizes), solving the problem amounts to taking the skiers in increasing order and placing each one in front of a pair in the order the pairs come. It is as if we inserted gaps into the sequence of skiers without changing their order:
$\begin{array}{|c|c|c|c|c|c|c|c|c|c|c|c|} \hline t_1 & & t_2 & t_3 & & & t_4 & ... & t_{N-1} & & t_{N} & \\ \hline s_1 & s_2 & s_3 & s_4 & s_5 & s_6 & s_7 & ... & s_{M-3} & s_{M-2} & s_{M-1} & s_M \\ \hline \end{array}$
Optional exercise
We first have to prove that the algorithm suggested above does yield the optimal solution.
Exercise 5
After sorting the skiers and the pairs by increasing size, we define:
$p(n,m) = \sum_{i=1}^{n} \left| t_i - s_{\sigma_m^*(i)} \right|$
Where $\sigma_m^*$ is the best possible choice of $n$ pairs of skis among the first $m$. Express $p(n,m)$ as a recurrence (as a function of $p(n,m-1)$ and $p(n-1,m-1)$). We assume that a skier without a pair of skis corresponds to the case where the pair has size zero.
Exercise 6
Write a function that computes the error of the optimal assignment. For instance, you can pick skiers and pairs with random sizes.
End of explanation
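One possible sketch of the recurrence described above (an illustration, not the official correction): p(n, m) = min( p(n, m-1), p(n-1, m-1) + |t_n - s_m| ), with p(0, m) = 0. It uses the sorted skieurs and paires lists generated in the previous cell.
def best_assignment_error(skieurs, paires):
    # p[n][m] = best total error when the first n skiers use pairs chosen among the first m.
    N, M = len(skieurs), len(paires)
    INF = float("inf")
    p = [[INF] * (M + 1) for _ in range(N + 1)]
    for m in range(M + 1):
        p[0][m] = 0.0
    for n in range(1, N + 1):
        for m in range(n, M + 1):
            skip = p[n][m - 1]  # pair m is left unused
            take = p[n - 1][m - 1] + abs(skieurs[n - 1] - paires[m - 1])  # skier n gets pair m
            p[n][m] = min(skip, take)
    return p[N][M]

print(best_assignment_error(skieurs, paires))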
import pyensae.datasource
files = pyensae.datasource.download_data("facebook.tar.gz",website="http://snap.stanford.edu/data/")
fe = [ f for f in files if "edge" in f ]
fe
Explanation: Exercise 7
What is the best assignment of the skis to the skiers?
Exercise 8
What are the costs of the two algorithms (shortest path and skis)?
Extensions: degrees of separation on Facebook
The shortest path in a graph is one of the best-known algorithms in programming. It finds the solution at a polynomial cost - each iteration is $O(n^2)$. Dynamic programming captures the shift from a combinatorial view to a recursive understanding of the same problem. For the shortest path, the combinatorial approach consists in enumerating every path of the graph; the dynamic approach consists in showing that this combinatorial approach leads to a highly redundant computation. Let $e(v,w)$ be the matrix of road lengths, with $e(v,w) = \infty$ if no road exists between cities $v$ and $w$. We assume that $e(v,w)=e(w,v)$. The array d is built iteratively and recursively as follows:
Step 0
$d(v) = \infty, \, \forall v \in V$
Step $n$
$d(v) = \left\{ \begin{array}{ll} 0 & \text{if } v = v_0 \\ \min \{ d(w) + e(v,w) \, | \, w \in V \} & \text{otherwise} \end{array} \right.$ where $v_0 =$ 'Charleville-Mezieres'
As long as step $n$ keeps making updates (i.e. $\sum_v d(v)$ decreases), step $n$ is repeated. The same algorithm can be applied to determine the degree of separation in a social network. It applies almost as-is, provided we define what a city and a distance between cities mean in this new graph. You can test your ideas on this example graph: Social circles: Facebook. Dijkstra's algorithm computes the shortest path between two nodes of a graph; the Bellman-Ford algorithm is a variant that computes the shortest-path distances from one node to every other node of the graph.
End of explanation
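One possible sketch of the iterative update described above. It assumes `villes` (the list of cities from exercise 1) and `dist` (the dictionary { (a, b): d } from exercise 2) have been built; both names are hypothetical here.
d = {v: float("inf") for v in villes}
d["Charleville-Mezieres"] = 0
updated = True
while updated:  # repeat step n while at least one distance still decreases
    updated = False
    for (a, b), length in dist.items():
        if d[a] + length < d[b]:
            d[b] = d[a] + length
            updated = True
print(d["Bordeaux"])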
import pandas
df = pandas.read_csv("facebook/1912.edges", sep=" ", names=["v1","v2"])
print(df.shape)
df.head()
Explanation: This file must be decompressed with 7zip if you are using pyensae < 0.8. On Linux (and Mac), you will have to use the tar command described here.
End of explanation |
12,256 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
What is NLP?
Natural Language Processing (NLP) is often taught at the academic level from the perspective of computational linguists. However, as data scientists, we have a richer view of the natural language world - unstructured data that by its very nature has latent information that is important to humans. NLP practitioners have benefitted from machine learning techniques to unlock meaning from large corpora, and in this tutorial we'll explore how to do that using Python, the Natural Language Toolkit (NLTK) and Gensim.
NLTK is an excellent library for machine-learning based NLP, written in Python by experts from both academia and industry. Python allows you to create rich data applications rapidly, iterating on hypotheses. The combination of Python + NLTK means that you can easily add language-aware data products to your larger analytical workflows and applications.
Quick Overview of NLTK
NLTK was written by two eminent computational linguists, Steven Bird (Senior Research Associate of the LDC and professor at the University of Melbourne) and Ewan Klein (Professor of Linguistics at Edinburgh University). The NTLK library provides a combination of natural language corpora, lexical resources, and example grammars with language processing algorithms, methodologies and demonstrations for a very Pythonic "batteries included" view of natural language processing.
As such, NLTK is perfect for research-driven (hypothesis-driven) workflows for agile data science.
Installing NLTK
This notebook has a few dependencies, most of which can be installed via the python package manger - pip.
Python 2.7+ or 3.5+ (Anaconda is ok)
NLTK
The NLTK corpora
The BeautifulSoup library
The gensim libary
Once you have Python and pip installed you can install NLTK from the terminal as follows
Step1: Methods for Working with Sample NLTK Corpora
To explore much of the built-in corpus, use the following methods
Step2: fileids()
Step3: text.Text()
The nltk.text.Text class is a wrapper around a sequence of simple (string) tokens - intended only for the initial exploration of text usually via the Python REPL. It has the following methods
Step4: concordance()
The concordance function performs a search for the given token and then also provides the surrounding context.
Step5: similar()
Given some context surrounding a word, we can discover similar words, e.g. words that that occur frequently in the same context and with a similar distribution
Step6: As you can see, this takes a bit of time to build the index in memory, one of the reasons it's not suggested to use this class in production code.
common_contexts()
Now that we can do searching and similarity, we find the common contexts of a set of words.
Step7: your turn, go ahead and explore similar words and contexts - what does the common context mean?
dispersion_plot()
NLTK also uses matplotlib and pylab to display graphs and charts that can show dispersions and frequency. This is especially interesting for the corpus of innagural addresses given by U.S. presidents.
Step8: Stopwords
Step9: These corpora export several vital methods
Step10: sents()
Step11: words()
Step12: raw()
Be careful!
Step13: Your turn! Explore some of the text in the available corpora
<a id='freqdist'></a>
Frequency Analyses
In statistical machine learning approaches to NLP, the very first thing we need to do is count things - especially the unigrams that appear in the text and their relationships to each other. NLTK provides two excellent classes to enable these frequency analyses
Step14: counts()
Step15: most_common()
The n most common tokens in the corpus
Step16: counts.max()
The most frequent token in the corpus.
Step17: counts.hapaxes()
A list of all hapax legomena (words that only appear one time in the corpus).
Step18: counts.freq()
The percentage of the corpus for the given token.
Step19: counts.plot()
Plot the frequencies of the n most commonly occuring words.
Step20: ConditionalFreqDist()
Step21: Your turn
Step22: Preprocessing Text
NLTK is great at the preprocessing of raw text - it provides the following tools for dividing text into its constituent parts
Step23: All of these taggers work pretty well - but you can (and should train them on your own corpora).
<a id='lemmatize'></a>
Stemming and Lemmatization
We have an immense number of word forms as you can see from our various counts in the FreqDist above - it is helpful for many applications to normalize these word forms (especially applications like search) into some canonical word for further exploration. In English (and many other languages) - morphological context indicates gender, tense, quantity, etc. but these subtleties might not be necessary
Step25: Note that the lemmatizer has to load the WordNet corpus which takes a bit.
Typical normalization of text for use as features in machine learning models looks something like this
Step26: <a id='nerc'></a>
Named Entity Recognition
NLTK has an excellent MaxEnt backed Named Entity Recognizer that is trained on the Penn Treebank. You can also retrain the chunker if you'd like - the code is very readable to extend it with a Gazette or otherwise.
<a id='chunk'></a>
Step27: You can also wrap the Stanford NER system, which many of you are also probably used to using.
Step28: Parsing
Parsing is a difficult NLP task due to structural ambiguities in text. As the length of sentences increases, so does the number of possible trees.
Step30: Similar to how you might write a compiler or an interpreter; parsing starts with a grammar that defines the construction of phrases and terminal entities.
Step31: NLTK does come with some large grammars; but if constructing your own domain specific grammar isn't your thing; then you can use the Stanford parser (so long as you're willing to pay for it). | Python Code:
# Take a moment to explore what is in this directory
dir(nltk)
Explanation: What is NLP?
Natural Language Processing (NLP) is often taught at the academic level from the perspective of computational linguists. However, as data scientists, we have a richer view of the natural language world - unstructured data that by its very nature has latent information that is important to humans. NLP practitioners have benefitted from machine learning techniques to unlock meaning from large corpora, and in this tutorial we'll explore how to do that using Python, the Natural Language Toolkit (NLTK) and Gensim.
NLTK is an excellent library for machine-learning based NLP, written in Python by experts from both academia and industry. Python allows you to create rich data applications rapidly, iterating on hypotheses. The combination of Python + NLTK means that you can easily add language-aware data products to your larger analytical workflows and applications.
Quick Overview of NLTK
NLTK was written by two eminent computational linguists, Steven Bird (Senior Research Associate of the LDC and professor at the University of Melbourne) and Ewan Klein (Professor of Linguistics at Edinburgh University). The NLTK library provides a combination of natural language corpora, lexical resources, and example grammars with language processing algorithms, methodologies and demonstrations for a very Pythonic "batteries included" view of natural language processing.
As such, NLTK is perfect for research-driven (hypothesis-driven) workflows for agile data science.
Installing NLTK
This notebook has a few dependencies, most of which can be installed via the python package manager - pip.
Python 2.7+ or 3.5+ (Anaconda is ok)
NLTK
The NLTK corpora
The BeautifulSoup library
The gensim library
Once you have Python and pip installed you can install NLTK from the terminal as follows:
bash
~$ pip install nltk
~$ pip install matplotlib
~$ pip install beautifulsoup4
~$ pip install gensim
Note that these will also install Numpy and Scipy if they aren't already installed.
What NLTK Includes
tokenization, stemming, and tagging
chunking and parsing
language modeling
classification and clustering
logical semantics
NLTK is a useful pedagogical resource for learning NLP with Python and serves as a starting place for producing production grade code that requires natural language analysis. It is also important to understand what NLTK is not.
What NLTK is Not
Production ready out of the box
Lightweight
Generally applicable
Magic
NLTK provides a variety of tools that can be used to explore the linguistic domain but is not a lightweight dependency that can be easily included in other workflows, especially those that require unit and integration testing or other build processes. This stems from the fact that NLTK includes a lot of added code but also a rich and complete library of corpora that power the built-in algorithms.
The Good Parts of NLTK
Preprocessing
segmentation
tokenization
Part-of-Speech (PoS) tagging
Word level processing
WordNet
Lemmatization
Stemming
NGrams
Utilities
Tree
FreqDist
ConditionalFreqDist
Streaming CorpusReaders
Classification
Maximum Entropy
Naive Bayes
Decision Tree
Chunking
Named Entity Recognition
Parsers Galore!
The Bad parts of NLTK
Syntactic Parsing
No included grammar (not a black box)
No Feature/Dependency Parsing
No included feature grammar
The sem package
Toy only (lambda-calculus & first order logic)
Lots of extra stuff (heavyweight dependency)
papers, chat programs, alignments, etc.
Knowing the good and the bad parts will help you explore NLTK further - looking into the source code to extract the material you need, then moving that code to production. We will explore NLTK in more detail in the rest of this notebook.
Obtaining and Exploring the NLTK Corpora
NLTK ships with a variety of corpora, let's use a few of them to do some work. To download the NLTK corpora, open a Python interpreter:
python
import nltk
nltk.download()
This will open up a window with which you can download the various corpora and models to a specified location. For now, go ahead and download it all as we will be exploring as much of NLTK as we can. Also take note of the download_directory - you're going to want to know where that is so you can get a detailed look at the corpora that's included. I usually export an environment variable to track this. You can do this from your terminal:
~$ export NLTK_DATA=/path/to/nltk_data
End of explanation
# Lists the various corpora and CorpusReader classes in the nltk.corpus module
for name in dir(nltk.corpus):
    if name.islower() and not name.startswith('_'): print(name)
Explanation: Methods for Working with Sample NLTK Corpora
To explore much of the built-in corpus, use the following methods:
End of explanation
# You can explore the titles with:
print(nltk.corpus.gutenberg.fileids())
# For a specific corpus, list the fileids that are available:
print(nltk.corpus.shakespeare.fileids())
Explanation: fileids()
End of explanation
hamlet = nltk.text.Text(nltk.corpus.gutenberg.words('shakespeare-hamlet.txt'))
Explanation: text.Text()
The nltk.text.Text class is a wrapper around a sequence of simple (string) tokens - intended only for the initial exploration of text usually via the Python REPL. It has the following methods:
common_contexts
concordance
collocations
count
plot
findall
index
You shouldn't use this class in production level systems, but it is useful to explore (small) snippets of text in a meaningful fashion.
For example, you can get access to the text from Hamlet as follows:
End of explanation
hamlet.concordance("king", 55, lines=10)
Explanation: concordance()
The concordance function performs a search for the given token and then also provides the surrounding context.
End of explanation
print(hamlet.similar("marriage"))
austen = nltk.text.Text(nltk.corpus.gutenberg.words("austen-sense.txt"))
print()
print(austen.similar("marriage"))
Explanation: similar()
Given some context surrounding a word, we can discover similar words, e.g. words that occur frequently in the same context and with a similar distribution: Distributional similarity:
Note ContextIndex.similar_words(word) calculates the similarity score for each word as the sum of the products of frequencies in each context. Text.similar() simply counts the number of unique contexts the words share.
http://bit.ly/2a2udIr
End of explanation
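For comparison, a small sketch of the ContextIndex scoring mentioned above, using the same Austen tokens loaded earlier; it ranks words by summed context-frequency products rather than by the number of shared contexts.
# Build a ContextIndex directly and ask for distributionally similar words.
idx = nltk.text.ContextIndex(nltk.corpus.gutenberg.words("austen-sense.txt"))
print(idx.similar_words("marriage"))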
hamlet.common_contexts(["king", "father"])
Explanation: As you can see, this takes a bit of time to build the index in memory, one of the reasons it's not suggested to use this class in production code.
common_contexts()
Now that we can do searching and similarity, we find the common contexts of a set of words.
End of explanation
inaugural = nltk.text.Text(nltk.corpus.inaugural.words())
inaugural.dispersion_plot(["citizens", "democracy", "freedom", "duty", "America"])
Explanation: your turn, go ahead and explore similar words and contexts - what does the common context mean?
dispersion_plot()
NLTK also uses matplotlib and pylab to display graphs and charts that can show dispersions and frequency. This is especially interesting for the corpus of inaugural addresses given by U.S. presidents.
End of explanation
print(nltk.corpus.stopwords.fileids())
nltk.corpus.stopwords.words('english')
import string
print(string.punctuation)
Explanation: Stopwords
End of explanation
corpus = nltk.corpus.brown
print(corpus.paras())
Explanation: These corpora export several vital methods:
paras (iterate through each paragraph)
sents (iterate through each sentence)
words (iterate through each word)
raw (get access to the raw text)
paras()
End of explanation
print(corpus.sents())
Explanation: sents()
End of explanation
print(corpus.words())
Explanation: words()
End of explanation
print(corpus.raw()[:200]) # Be careful!
Explanation: raw()
Be careful!
End of explanation
reuters = nltk.corpus.reuters # Corpus of news articles
counts = nltk.FreqDist(reuters.words())
vocab = len(counts.keys())
words = sum(counts.values())
lexdiv = float(words) / float(vocab)
print("Corpus has %i types and %i tokens for a lexical diversity of %0.3f" % (vocab, words, lexdiv))
Explanation: Your turn! Explore some of the text in the available corpora
<a id='freqdist'></a>
Frequency Analyses
In statistical machine learning approaches to NLP, the very first thing we need to do is count things - especially the unigrams that appear in the text and their relationships to each other. NLTK provides two excellent classes to enable these frequency analyses:
FreqDist
ConditionalFreqDist
And these two classes serve as the foundation for most of the probability and statistical analyses that we will conduct.
Zipf's Law
Zipf's law states that given some corpus of natural language utterances, the frequency of any word is inversely proportional to its rank in the frequency table. Thus the most frequent word will occur approximately twice as often as the second most frequent word, three times as often as the third most frequent word, etc.: the rank-frequency distribution is an inverse relation. Read more on Wikipedia.
First we will compute the following:
The count of words
The vocabulary (unique words)
The lexical diversity (the ratio of word count to vocabulary)
End of explanation
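To see Zipf's law directly, here is a minimal sketch (it assumes the counts FreqDist built above is still in scope) that plots frequency against rank on log-log axes, where the relationship should look roughly linear:
import matplotlib.pyplot as plt
freqs = sorted(counts.values(), reverse=True)   # token frequencies, highest first
ranks = range(1, len(freqs) + 1)                # rank 1 = most frequent token
plt.loglog(ranks, freqs, marker=".", linestyle="none")
plt.xlabel("rank (log scale)")
plt.ylabel("frequency (log scale)")
plt.title("Rank-frequency plot for the Reuters corpus")
plt.show()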
counts.B()
Explanation: counts.B()
The number of bins (distinct tokens) in the frequency distribution.
End of explanation
print(counts.most_common(40))
Explanation: most_common()
The n most common tokens in the corpus
End of explanation
print(counts.max())
Explanation: counts.max()
The most frequent token in the corpus.
End of explanation
print(counts.hapaxes()[0:10])
Explanation: counts.hapaxes()
A list of all hapax legomena (words that only appear one time in the corpus).
End of explanation
counts.freq('stipulate') * 100
Explanation: counts.freq()
The percentage of the corpus for the given token.
End of explanation
counts.plot(50, cumulative=False)
# By setting cumulative to True, we can visualize the cumulative counts of the _n_ most common words.
counts.plot(50, cumulative=True)
Explanation: counts.plot()
Plot the frequencies of the n most commonly occurring words.
End of explanation
from itertools import chain
brown = nltk.corpus.brown
categories = brown.categories()
counts = nltk.ConditionalFreqDist(chain(*[[(cat, word) for word in brown.words(categories=cat)] for cat in categories]))
for category, dist in counts.items():
vocab = len(dist.keys())
tokens = sum(dist.values())
lexdiv = float(tokens) / float(vocab)
print("%s: %i types with %i tokens and lexical diversity of %0.3f" % (category, vocab, tokens, lexdiv))
Explanation: ConditionalFreqDist()
End of explanation
for ngram in nltk.ngrams(["The", "bear", "walked", "in", "the", "woods", "at", "midnight"], 5):
print(ngram)
Explanation: Your turn: compute the conditional frequency distribution of bigrams in a corpus
Hint: pair each word with the word that follows it (nltk.bigrams) and feed those pairs to ConditionalFreqDist - see the sketch after this cell.
<a id='ngram'></a>
End of explanation
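One possible answer to the exercise above, sketched with nltk.bigrams (the choice of the Brown corpus is just an example):
brown_bigrams = nltk.bigrams(nltk.corpus.brown.words())      # (word, next word) pairs
bigram_counts = nltk.ConditionalFreqDist(brown_bigrams)      # condition = first word of the pair
print(bigram_counts["the"].most_common(10))                  # words that most often follow "the"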
import bs4
from readability.readability import Document
# Tags to extract as paragraphs from the HTML text
TAGS = [
'h1', 'h2', 'h3', 'h4', 'h5', 'h6', 'h7', 'p', 'li'
]
def read_html(path):
with open(path, 'r') as f:
# Transform the document into a readability paper summary
html = Document(f.read()).summary()
# Parse the HTML using BeautifulSoup
soup = bs4.BeautifulSoup(html)
# Extract the paragraph delimiting elements
for tag in soup.find_all(TAGS):
# Get the HTML node text
yield tag.get_text()
for paragraph in read_html('fixtures/nrRB0.html'):
print(paragraph + "\n")
text = u"Medical personnel returning to New York and New Jersey from the Ebola-riddled countries in West Africa will be automatically quarantined if they had direct contact with an infected person, officials announced Friday. New York Gov. Andrew Cuomo (D) and New Jersey Gov. Chris Christie (R) announced the decision at a joint news conference Friday at 7 World Trade Center. โWe have to do more,โ Cuomo said. โItโs too serious of a situation to leave it to the honor system of compliance.โ They said that public-health officials at John F. Kennedy and Newark Liberty international airports, where enhanced screening for Ebola is taking place, would make the determination on who would be quarantined. Anyone who had direct contact with an Ebola patient in Liberia, Sierra Leone or Guinea will be quarantined. In addition, anyone who traveled there but had no such contact would be actively monitored and possibly quarantined, authorities said. This news came a day after a doctor who had treated Ebola patients in Guinea was diagnosed in Manhattan, becoming the fourth person diagnosed with the virus in the United States and the first outside of Dallas. And the decision came not long after a health-care worker who had treated Ebola patients arrived at Newark, one of five airports where people traveling from West Africa to the United States are encountering the stricter screening rules."
for sent in nltk.sent_tokenize(text):
print(sent)
print()
for sent in nltk.sent_tokenize(text):
print(list(nltk.wordpunct_tokenize(sent)))
print()
for sent in nltk.sent_tokenize(text):
print(list(nltk.pos_tag(nltk.word_tokenize(sent))))
print()
Explanation: Preprocessing Text
NLTK is great at the preprocessing of raw text - it provides the following tools for dividing text into its constituent parts:
<a id='tokenize'></a>
<a id='segment'></a>
- sent_tokenize: a Punkt sentence tokenizer:
This tokenizer divides a text into a list of sentences, by using an unsupervised algorithm to build a model for abbreviation words, collocations, and words that start sentences. It must be trained on a large collection of plaintext in the target language before it can be used.
However, Punkt is designed to learn parameters (a list of abbreviations, etc.) unsupervised from a corpus similar to the target domain. The pre-packaged models may therefore be unsuitable: use PunktSentenceTokenizer(text) to learn parameters from the given text.
word_tokenize: a Treebank tokenizer
The Treebank tokenizer uses regular expressions to tokenize text as in Penn Treebank. This is the method that is invoked by word_tokenize(). It assumes that the text has already been segmented into sentences, e.g. using sent_tokenize().
<a id='pos'></a>
- pos_tag: a maximum entropy tagger trained on the Penn Treebank
There are several other taggers including (notably) the BrillTagger as well as the BrillTrainer to train your own tagger or tagset.
End of explanation
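A sketch of training Punkt on your own material, as suggested above; the Austen text is only a stand-in for a large plain-text string from your target domain:
from nltk.tokenize.punkt import PunktSentenceTokenizer
raw_text = nltk.corpus.gutenberg.raw("austen-sense.txt")   # stand-in for your own domain text
punkt = PunktSentenceTokenizer(raw_text)                   # learns abbreviations, collocations, etc. unsupervised
print(punkt.tokenize(text)[:2])                            # re-split the news paragraph from above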
from nltk.stem.snowball import SnowballStemmer
from nltk.stem.lancaster import LancasterStemmer
from nltk.stem.porter import PorterStemmer
text = list(nltk.word_tokenize("The women running in the fog passed bunnies working as computer scientists."))
snowball = SnowballStemmer('english')
lancaster = LancasterStemmer()
porter = PorterStemmer()
for stemmer in (snowball, lancaster, porter):
stemmed_text = [stemmer.stem(t) for t in text]
print(" ".join(stemmed_text))
from nltk.stem.wordnet import WordNetLemmatizer
# Note: use part of speech tag, we'll see this in machine learning!
lemmatizer = WordNetLemmatizer()
lemmas = [lemmatizer.lemmatize(t) for t in text]
print(" ".join(lemmas))
Explanation: All of these taggers work pretty well - but you can (and should) train them on your own corpora.
<a id='lemmatize'></a>
Stemming and Lemmatization
We have an immense number of word forms as you can see from our various counts in the FreqDist above - it is helpful for many applications to normalize these word forms (especially applications like search) into some canonical word for further exploration. In English (and many other languages), morphological context indicates gender, tense, quantity, etc., but these subtleties might not be necessary:
<a id='stemming'></a>
Stemming = chop off affixes to get the root stem of the word:
running --> run
flowers --> flower
geese --> geese
Lemmatization = look up word form in a lexicon to get canonical lemma
women --> woman
foxes --> fox
sheep --> sheep
There are several stemmers available:
- Lancaster (English, newer and aggressive)
- Porter (English, original stemmer)
- Snowball (Many languages, newest)
<a id='wordnet'></a>
The Lemmatizer uses the WordNet lexicon
End of explanation
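A small side-by-side sketch of the difference, reusing the stemmer and lemmatizer classes imported above on the example word forms from this section:
stemmer = SnowballStemmer("english")
wn_lemmatizer = WordNetLemmatizer()
for word in ["running", "flowers", "geese", "women", "foxes", "sheep"]:
    print(word, "->", stemmer.stem(word), "(stem) /", wn_lemmatizer.lemmatize(word), "(lemma)")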
import string
from nltk.corpus import wordnet as wn
## Module constants
lemmatizer = WordNetLemmatizer()
stopwords = set(nltk.corpus.stopwords.words('english'))
punctuation = string.punctuation
def tagwn(tag):
    """Returns the WordNet tag from the Penn Treebank tag."""
return {
'N': wn.NOUN,
'V': wn.VERB,
'R': wn.ADV,
'J': wn.ADJ
}.get(tag[0], wn.NOUN)
def normalize(text):
for token, tag in nltk.pos_tag(nltk.wordpunct_tokenize(text)):
#if you're going to do part of speech tagging, do it here
token = token.lower()
        if token in stopwords or token in punctuation:   # skip stopwords and punctuation
continue
token = lemmatizer.lemmatize(token, tagwn(tag))
yield token
print(list(normalize("The eagle flies at midnight.")))
Explanation: Note that the lemmatizer has to load the WordNet corpus which takes a bit.
Typical normalization of text for use as features in machine learning models looks something like this:
End of explanation
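One more quick sketch showing why the part-of-speech tag matters here; the expected effect (stopwords and punctuation dropped, verbs and plural nouns lemmatized) is noted in the comment:
print(list(normalize("The women were running past the geese, laughing loudly.")))
# should yield roughly ['woman', 'run', 'goose', 'laugh', 'loudly']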
print(nltk.ne_chunk(nltk.pos_tag(nltk.word_tokenize("John Smith is from the United States of America and works at Microsoft Research Labs"))))
Explanation: <a id='nerc'></a>
Named Entity Recognition
NLTK has an excellent MaxEnt backed Named Entity Recognizer that is trained on the Penn Treebank. You can also retrain the chunker if you'd like - the code is very readable to extend it with a Gazette or otherwise.
<a id='chunk'></a>
End of explanation
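A sketch of walking the ne_chunk tree to pull out just the labelled entities (the extract_entities helper is ours, not part of NLTK):
def extract_entities(sent):
    tree = nltk.ne_chunk(nltk.pos_tag(nltk.word_tokenize(sent)))
    for subtree in tree.subtrees():
        if subtree.label() != "S":                     # skip the sentence-level node
            yield subtree.label(), " ".join(word for word, tag in subtree.leaves())

print(list(extract_entities("John Smith works at Microsoft Research Labs in the United States of America")))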
import os
from nltk.tag import StanfordNERTagger
# change the paths below to point to wherever you unzipped the Stanford NER download file
stanford_root = '/Users/benjamin/Development/stanford-ner-2014-01-04'
stanford_data = os.path.join(stanford_root, 'classifiers/english.all.3class.distsim.crf.ser.gz')
stanford_jar = os.path.join(stanford_root, 'stanford-ner-2014-01-04.jar')
st = StanfordNERTagger(stanford_data, stanford_jar, 'utf-8')
for i in st.tag("John Smith is from the United States of America and works at Microsoft Research Labs".split()):
print('[' + i[1] + '] ' + i[0])
Explanation: You can also wrap the Stanford NER system, which many of you are also probably used to using.
End of explanation
for name in dir(nltk.parse):
if not name.startswith('_'): print(name)
Explanation: Parsing
Parsing is a difficult NLP task due to structural ambiguities in text. As the length of sentences increases, so does the number of possible trees.
End of explanation
grammar = nltk.grammar.CFG.fromstring("""
S -> NP PUNCT | NP
NP -> N N | ADJP NP | DET N | DET ADJP
ADJP -> ADJ NP | ADJ N
DET -> 'an' | 'the' | 'a' | 'that'
N -> 'airplane' | 'runway' | 'lawn' | 'chair' | 'person'
ADJ -> 'red' | 'slow' | 'tired' | 'long'
PUNCT -> '.'
""")
def parse(sent):
sent = sent.lower()
parser = nltk.parse.ChartParser(grammar)
for p in parser.parse(nltk.word_tokenize(sent)):
yield p
for tree in parse("the long runway"):
tree.pprint()
tree[0].draw()
Explanation: Similar to how you might write a compiler or an interpreter; parsing starts with a grammar that defines the construction of phrases and terminal entities.
End of explanation
from nltk.parse.stanford import StanfordParser
# change the paths below to point to wherever you unzipped the Stanford parser download file
stanford_root = '/Users/benjamin/Development/stanford-parser-full-2014-10-31'
stanford_model = os.path.join(stanford_root, 'stanford-parser-3.5.0-models.jar')
stanford_jar = os.path.join(stanford_root, 'stanford-parser.jar')
st = StanfordParser(stanford_model, stanford_jar)
sent = "The man hit the building with the baseball bat."
for tree in st.parse(nltk.wordpunct_tokenize(sent)):
tree.pprint()
tree.draw()
Explanation: NLTK does come with some large grammars; but if constructing your own domain specific grammar isn't your thing; then you can use the Stanford parser (so long as you're willing to pay for it).
End of explanation |
12,257 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
RNNs tutorial
Step1: An LSTM/RNN overview
Step2: Note that when we create the builder, it adds the internal RNN parameters to the model.
We do not need to care about them, but they will be optimized together with the rest of the network's parameters.
Step3: If our LSTM/RNN was one layer deep, y2 would be equal to the hidden state. However, since it is 2 layers deep, y2 is only the hidden state (= output) of the last layer.
If we want access to all of the hidden states (the output of both the first and the last layers), we could use the .h() method, which returns a list of expressions, one for each layer
Step4: The same interface that we saw until now for the LSTM, holds also for the Simple RNN
Step5: To summarize, when calling .add_input(x) on an RNNState what happens is that the state creates a new RNN/LSTM column, passing it
Step6: As we can see, the LSTM has two extra state expressions (one for each hidden layer) before the outputs h.
Extra options in the RNN/LSTM interface
Stack LSTM The RNN's are shaped as a stack
Step7: Aside
Step8: This is convenient.
What if we do not care about .s() and .h(), and do not need to access the previous vectors? In such cases
we can use the transduce(xs) method instead of add_inputs(xs).
transduce takes in a sequence of Expressions, and returns a sequence of Expressions.
As a consequence of not returning RNNStates, transduce is much more memory efficient than add_inputs or a series of calls to add_input.
Step9: Character-level LSTM
Now that we know the basics of RNNs, let's build a character-level LSTM language-model.
We have a sequence LSTM that, at each step, gets as input a character, and needs to predict the next character.
Step10: Notice that
Step11: The model seems to learn the sentence quite well.
Somewhat surprisingly, the Simple-RNN model learns quicker than the LSTM!
How can that be?
The answer is that we are cheating a bit. The sentence we are trying to learn
has each letter-bigram exactly once. This means a simple trigram model can memorize
it very well.
Try it out with more complex sequences. | Python Code:
# we assume that we have the dynet module in your path.
# OUTDATED: we also assume that LD_LIBRARY_PATH includes a pointer to where libcnn_shared.so is.
from dynet import *
Explanation: RNNs tutorial
End of explanation
model = Model()
NUM_LAYERS=2
INPUT_DIM=50
HIDDEN_DIM=10
builder = LSTMBuilder(NUM_LAYERS, INPUT_DIM, HIDDEN_DIM, model)
# or:
# builder = SimpleRNNBuilder(NUM_LAYERS, INPUT_DIM, HIDDEN_DIM, model)
Explanation: An LSTM/RNN overview:
A (1-layer) RNN can be thought of as a sequence of cells, $h_1,...,h_k$, where $h_i$ indicates the time dimension.
Each cell $h_i$ has an input $x_i$ and an output $r_i$. In addition to $x_i$, cell $h_i$ also receives $r_{i-1}$ as input.
In a deep (multi-layer) RNN, we don't have a sequence, but a grid. That is we have several layers of sequences:
$h_1^3,...,h_k^3$
$h_1^2,...,h_k^2$
$h_1^1,...h_k^1$,
Let $r_i^j$ be the output of cell $h_i^j$. Then:
The input to $h_i^1$ is $x_i$ and $r_{i-1}^1$.
The input to $h_i^2$ is $r_i^1$ and $r_{i-1}^2$,
and so on.
The LSTM (RNN) Interface
RNN / LSTM / GRU follow the same interface. We have a "builder" which is in charge of creating and defining the parameters for the sequence.
End of explanation
s0 = builder.initial_state()
x1 = vecInput(INPUT_DIM)
s1=s0.add_input(x1)
y1 = s1.output()
# here, we add x1 to the RNN, and the output we get from the top is y (a HIDEN_DIM-dim vector)
y1.npvalue().shape
s2=s1.add_input(x1) # we can add another input
y2=s2.output()
Explanation: Note that when we create the builder, it adds the internal RNN parameters to the model.
We do not need to care about them, but they will be optimized together with the rest of the network's parameters.
End of explanation
print s2.h()
Explanation: If our LSTM/RNN was one layer deep, y2 would be equal to the hidden state. However, since it is 2 layers deep, y2 is only the hidden state (= output) of the last layer.
If we want access to all of the hidden states (the output of both the first and the last layers), we could use the .h() method, which returns a list of expressions, one for each layer:
End of explanation
# create a simple rnn builder
rnnbuilder=SimpleRNNBuilder(NUM_LAYERS, INPUT_DIM, HIDDEN_DIM, model)
# initialize a new graph, and a new sequence
rs0 = rnnbuilder.initial_state()
# add inputs
rs1 = rs0.add_input(x1)
ry1 = rs1.output()
print "all layers:", s1.h()
print s1.s()
Explanation: The same interface that we saw until now for the LSTM holds also for the Simple RNN:
End of explanation
rnn_h = rs1.h()
rnn_s = rs1.s()
print "RNN h:", rnn_h
print "RNN s:", rnn_s
lstm_h = s1.h()
lstm_s = s1.s()
print "LSTM h:", lstm_h
print "LSTM s:", lstm_s
Explanation: To summarize, when calling .add_input(x) on an RNNState what happens is that the state creates a new RNN/LSTM column, passing it:
1. the state of the current RNN column
2. the input x
The state is then returned, and we can call it's output() method to get the output y, which is the output at the top of the column. We can access the outputs of all the layers (not only the last one) using the .h() method of the state.
.s() The internal state of the RNN may be more involved than just the outputs $h$. This is the case for the LSTM, that keeps an extra "memory" cell, that is used when calculating $h$, and which is also passed to the next column. To access the entire hidden state, we use the .s() method.
The output of .s() differs by the type of RNN being used. For the simple-RNN, it is the same as .h(). For the LSTM, it is more involved.
End of explanation
s2=s1.add_input(x1)
s3=s2.add_input(x1)
s4=s3.add_input(x1)
# let's continue s3 with a new input.
s5=s3.add_input(x1)
# we now have two different sequences:
# s0,s1,s2,s3,s4
# s0,s1,s2,s3,s5
# the two sequences share parameters.
assert(s5.prev() == s3)
assert(s4.prev() == s3)
s6=s3.prev().add_input(x1)
# we now have an additional sequence:
# s0,s1,s2,s6
s6.h()
s6.s()
Explanation: As we can see, the LSTM has two extra state expressions (one for each hidden layer) before the outputs h.
Extra options in the RNN/LSTM interface
Stack LSTM The RNN's are shaped as a stack: we can remove the top and continue from the previous state.
This is done either by remembering the previous state and continuing it with a new .add_input(), or using
we can access the previous state of a given state using the .prev() method of state.
Initializing a new sequence with a given state When we call builder.initial_state(), we are assuming the state has random /0 initialization. If we want, we can specify a list of expressions that will serve as the initial state. The expected format is the same as the results of a call to .final_s(). TODO: this is not supported yet.
End of explanation
state = rnnbuilder.initial_state()
xs = [x1,x1,x1]
states = state.add_inputs(xs)
outputs = [s.output() for s in states]
hs = [s.h() for s in states]
print outputs, hs
Explanation: Aside: memory efficient transduction
The RNNState interface is convenient, and allows for incremental input construction.
However, sometimes we know the sequence of inputs in advance, and care only about the sequence of
output expressions. In this case, we can use the add_inputs(xs) method, where xs is a list of Expression.
End of explanation
state = rnnbuilder.initial_state()
xs = [x1,x1,x1]
outputs = state.transduce(xs)
print outputs
Explanation: This is convenient.
What if we do not care about .s() and .h(), and do not need to access the previous vectors? In such cases
we can use the transduce(xs) method instead of add_inputs(xs).
transduce takes in a sequence of Expressions, and returns a sequence of Expressions.
As a consequence of not returning RNNStates, transduce is much more memory efficient than add_inputs or a series of calls to add_input.
End of explanation
import random
from collections import defaultdict
from itertools import count
import sys
LAYERS = 2
INPUT_DIM = 50
HIDDEN_DIM = 50
characters = list("abcdefghijklmnopqrstuvwxyz ")
characters.append("<EOS>")
int2char = list(characters)
char2int = {c:i for i,c in enumerate(characters)}
VOCAB_SIZE = len(characters)
model = Model()
srnn = SimpleRNNBuilder(LAYERS, INPUT_DIM, HIDDEN_DIM, model)
lstm = LSTMBuilder(LAYERS, INPUT_DIM, HIDDEN_DIM, model)
params = {}
params["lookup"] = model.add_lookup_parameters((VOCAB_SIZE, INPUT_DIM))
params["R"] = model.add_parameters((VOCAB_SIZE, HIDDEN_DIM))
params["bias"] = model.add_parameters((VOCAB_SIZE))
# return compute loss of RNN for one sentence
def do_one_sentence(rnn, sentence):
# setup the sentence
renew_cg()
s0 = rnn.initial_state()
R = parameter(params["R"])
bias = parameter(params["bias"])
lookup = params["lookup"]
sentence = ["<EOS>"] + list(sentence) + ["<EOS>"]
sentence = [char2int[c] for c in sentence]
s = s0
loss = []
for char,next_char in zip(sentence,sentence[1:]):
s = s.add_input(lookup[char])
probs = softmax(R*s.output() + bias)
loss.append( -log(pick(probs,next_char)) )
loss = esum(loss)
return loss
# generate from model:
def generate(rnn):
def sample(probs):
rnd = random.random()
for i,p in enumerate(probs):
rnd -= p
if rnd <= 0: break
return i
# setup the sentence
renew_cg()
s0 = rnn.initial_state()
R = parameter(params["R"])
bias = parameter(params["bias"])
lookup = params["lookup"]
s = s0.add_input(lookup[char2int["<EOS>"]])
out=[]
while True:
probs = softmax(R*s.output() + bias)
probs = probs.vec_value()
next_char = sample(probs)
out.append(int2char[next_char])
if out[-1] == "<EOS>": break
s = s.add_input(lookup[next_char])
return "".join(out[:-1]) # strip the <EOS>
# train, and generate every 5 samples
def train(rnn, sentence):
trainer = SimpleSGDTrainer(model)
for i in xrange(200):
loss = do_one_sentence(rnn, sentence)
loss_value = loss.value()
loss.backward()
trainer.update()
if i % 5 == 0:
print loss_value,
print generate(rnn)
Explanation: Character-level LSTM
Now that we know the basics of RNNs, let's build a character-level LSTM language-model.
We have a sequence LSTM that, at each step, gets as input a character, and needs to predict the next character.
End of explanation
sentence = "a quick brown fox jumped over the lazy dog"
train(srnn, sentence)
sentence = "a quick brown fox jumped over the lazy dog"
train(lstm, sentence)
Explanation: Notice that:
1. We pass the same rnn-builder to do_one_sentence over and over again.
We must re-use the same rnn-builder, as this is where the shared parameters are kept.
2. We renew_cg() before each sentence -- because we want to have a new graph (new network) for this sentence.
The parameters will be shared through the model and the shared rnn-builder.
End of explanation
train(srnn, "these pretzels are making me thirsty")
Explanation: The model seems to learn the sentence quite well.
Somewhat surprisingly, the Simple-RNN model learns quicker than the LSTM!
How can that be?
The answer is that we are cheating a bit. The sentence we are trying to learn
has each letter-bigram exactly once. This means a simple trigram model can memorize
it very well.
Try it out with more complex sequences.
End of explanation |
12,258 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1 Simple Octave/MATLAB Function
As a quick warm up, create a function to return a 5x5 identity matrix.
Step1: 2 Linear Regression with One Variable
In this part of this exercise, you will implement linear regression with one variable to predict profits for a food truck. Suppose you are the CEO of a restaurant franchise and are considering different cities for opening a new outlet. The chain already has trucks in various cities and you have data for profits and populations from the cities.
You would like to use this data to help you select which city to expand to next. The file ex1data1.txt contains the dataset for our linear regression problem. The first column is the population of a city and the second column is
the profit of a food truck in that city. A negative value for profit indicates a loss.
2.1 Plotting the Data
Before starting on any task, it is often useful to understand the data by visualizing it. For this dataset, you can use a scatter plot to visualize the data, since it has only two properties to plot (profit and population). (Many other problems that you will encounter in real life are multi-dimensional and can't be plotted on a 2-d plot.)
Step2: 2.2 Gradient Descent
In this part, you will fit the linear regression parameters $\theta$ to our dataset using gradient descent.
2.2.1 Update Equations
The objective of linear regression is to minimize the cost function
$$
J\left( \theta \right) = \frac{1}{2m} \sum_{i=1}^m \left( h_\theta \left( x^{\left( i\right)} \right) - y^{\left( i \right)} \right)^2
$$
where $h_\theta\left( x \right)$ is the hypothesis given by the linear model
$$
h_\theta\left( x \right) = \theta^\intercal x = \theta_0 + \theta_1 x_1
$$
Recall that the parameters of your model are the $\theta_j$ values. These are the values you will adjust to minimize cost $J(\theta)$. One way to do this is to use the batch gradient descent algorithm. In batch gradient descent, each iteration performs the update
$$
\theta_j := \theta_j - \alpha\frac{1}{m}\sum_{i=1}^m \left( h_\theta\left( x^{\left( i\right)} \right) - y^{\left(i\right)}\right) x_j^{\left(i\right)} \;\;\;\;\;\;\;\;\;\; \text{simultaneously update } \theta_j \text{ for all } j \text{.}
$$
With each step of gradient descent, your parameters $\theta_j$ come closer to the optimal values that will achieve the lowest cost $J(\theta)$.
Step3: Let's make the (totally random) guess that $\theta_0$ = 0 and $\theta_1$ = 0. In that case, we have the following output from the hypothesis function.
Step6: 2.2.3 Computing the Cost $J(\theta)$
Now, we can define our actual hypothesis function for linear regression with a single variable.
Step8: Gradient Descent
Now we'll actually implement the gradient descent algorithm. Keep in mind that the cost $J(\theta)$ is parameterized by the vector $\theta$, not $X$ and $y$. That is, we minimize $J(\theta)$ by changing $\theta$. We initialize the initial parameters to 0 and the learning rate alpha to 0.01.
Step9: After running the batch gradient descent algorithm, we can plot the convergence of $J(\theta)$ over the number of iterations.
Step10: 2.4 Visualizing $J(\theta)$ | Python Code:
A = np.eye(5)
print(A)
Explanation: 1 Simple Octave/MATLAB Function
As a quick warm up, create a function to return a 5x5 identity matrix.
End of explanation
datafile = 'ex1\\ex1data1.txt'
df = pd.read_csv(datafile, header=None, names=['Population', 'Profit'])
def plot_data(x, y):
plt.figure(figsize=(10, 6))
plt.plot(x, y, '.', label='Training Data')
plt.xlabel("Population of City in 10,000s", fontsize=16)
plt.ylabel("Profit in $10,000s", fontsize=16)
import os
import sys
import datetime as dt
fp_list_master = ['C:', 'Users', 'szahn', 'Dropbox', 'Statistics & Machine Learning', 'coursera_ml_notes']
fp = os.sep.join(fp_list_master)
fp_fig = fp + os.sep + 'LaTeX Notes' + os.sep + 'Figures'
print(os.path.isdir(fp), os.path.isdir(fp_fig))
plot_data(df['Population'], df['Profit'])
#plt.savefig(fp_fig + os.sep + 'linreg_hw_2_1_plot_data.pdf')
Explanation: 2 Linear Regression with One Variable
In this part of this exercise, you will implement linear regression with one variable to predict profits for a food truck. Suppose you are the CEO of a restaurant franchise and are considering different cities for opening a new outlet. The chain already has trucks in various cities and you have data for profits and populations from the cities.
You would like to use this data to help you select which city to expand to next. The file ex1data1.txt contains the dataset for our linear regression problem. The first column is the population of a city and the second column is
the profit of a food truck in that city. A negative value for profit indicates a loss.
2.1 Plotting the Data
Before starting on any task, it is often useful to understand the data by visualizing it. For this dataset, you can use a scatter plot to visualize the data, since it has only two properties to plot (profit and population). (Many other problems that you will encounter in real life are multi-dimensional and can't be plotted on a 2-d plot.)
End of explanation
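Before plotting, a quick look at the loaded frame can help (a small sketch using the df defined above):
print(df.head())
print(df.describe())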
# set the number of training examples
m = len(df['Population'])
# create an array from the dataframe (missing column for x_0 values)
X = df['Population'].values
# add in the first column of the array for x_0 values
X = X[:, np.newaxis]
X = np.insert(X, 0, 1, axis=1)
y = df['Profit'].values
y = y[:, np.newaxis]
Explanation: 2.2 Gradient Descent
In this part, you will fit the linear regression parameters $\theta$ to our dataset using gradient descent.
2.2.1 Update Equations
The objective of linear regression is to minimize the cost function
$$
J\left( \theta \right) = \frac{1}{2m} \sum_{i=1}^m \left( h_\theta \left( x^{\left( i\right)} \right) - y^{\left( i \right)} \right)^2
$$
where $h_\theta\left( x \right)$ is the hypothesis given by the linear model
$$
h_\theta\left( x \right) = \theta^\intercal x = \theta_0 + \theta_1 x_1
$$
Recall that the parameters of your model are the $\theta_j$ values. These are the values you will adjust to minimize cost $J(\theta)$. One way to do this is to use the batch gradient descent algorithm. In batch gradient descent, each iteration performs the update
$$
\theta_j := \theta_j - \alpha\frac{1}{m}\sum_{i=1}^m \left( h_\theta\left( x^{\left( i\right)} \right) - y^{\left(i\right)}\right) x_j^{\left(i\right)} \;\;\;\;\;\;\;\;\;\; \text{simultaneously update } \theta_j \text{ for all } j \text{.}
$$
With each step of gradient descent, your parameters $\theta_j$ come closer to the optimal values that will achieve the lowest cost $J(\theta)$.
2.2.2 Implementation
In the following lines, we add another dimension to our data to accommodate the $\theta_0$ intercept term.
End of explanation
theta_values = np.array([[0.], [0]])
print(theta_values.shape)
print(X.shape, end='\n\n')
_ = np.dot(X, theta_values)
print(_.shape)
Explanation: Let's make the (totally random) guess that $\theta_0$ = 0 and $\theta_1$ = 0. In that case, we have the following output from the hypothesis function.
End of explanation
# define the hypothesis
def h(theta, X):
    """Takes the dot product of the matrix X and the vector theta,
    yielding a predicted result."""
return np.dot(X, theta)
def compute_cost(X, y, theta):
    """Takes the design matrix X and output vector y, and computes the cost of
    the parameters stored in the vector theta.

    The dimensions must be as follows:
    - X must be m x n
    - y must be m x 1
    - theta must be n x 1
    """
m = len(y)
J = 1 / (2*m) * np.dot((np.dot(X, theta) - y).T, (np.dot(X, theta) - y))
return J
# define column vector theta = [[0], [0]]
theta = np.zeros((2, 1))
# compute the cost function for our existing X and y, with our new theta vector
# verify that the cost for our theta of zeros is 32.07
compute_cost(X, y, theta)
Explanation: 2.2.3 Computing the Cost $J(\theta)$
Now, we can define our actual hypothesis function for linear regression with a single variable.
End of explanation
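For reference, the implementation above uses the fully vectorized form of the cost, which is equivalent to the summation given earlier:
$$
J(\theta) = \frac{1}{2m}\left( X\theta - y\right)^\intercal \left( X\theta - y \right)
$$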
def gradient_descent(X, y, theta, alpha, num_iters):
m = len(y)
J_history = []
theta_history = []
for i in range(num_iters):
J_history.append(float(compute_cost(X, y, theta)))
theta_history.append(theta)
theta = theta - (alpha / m) * np.dot(X.T, (np.dot(X, theta) - y))
return theta, J_history, theta_history
# set up some initial parameters for gradient descent
theta_initial = np.zeros((2, 1))
iterations = 1500
alpha = 0.01
theta_final, J_hist, theta_hist = gradient_descent(X, y,
theta_initial,
alpha, iterations)
Explanation: Gradient Descent
Now we'll actually implement the gradient descent algorithm. Keep in mind that the cost $J(\theta)$ is parameterized by the vector $\theta$, not $X$ and $y$. That is, we minimize $J(\theta)$ by changing $\theta$. We initialize the initial parameters to 0 and the learning rate alpha to 0.01.
End of explanation
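A quick use of the fitted parameters (a sketch; population is expressed in units of 10,000s, as in the plot labels):
for population in (3.5, 7.0):
    profit = float(theta_final[0] + theta_final[1] * population)
    print("Population {:,.0f}: predicted profit ${:,.2f}".format(population * 10000, profit * 10000))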
def plot_cost_convergence(J_history):
abscissa = list(range(len(J_history)))
ordinate = J_history
plt.figure(figsize=(10, 6))
plt.plot(abscissa, ordinate, '.')
plt.title('Convergence of the Cost Function', fontsize=24)
plt.xticks(fontsize=16)
plt.yticks(fontsize=16)
plt.xlabel('Iteration Number', fontsize=18)
plt.ylabel('Cost Function', fontsize=18)
plt.xlim(min(abscissa) - max(abscissa) * 0.05, 1.05 * max(abscissa))
plot_cost_convergence(J_hist)
plt.ylim(4.3, 6.9)
#plt.savefig(fp_fig + os.sep + 'linreg_hw_2_4_viz_j_of_theta.pdf')
plot_data(df['Population'], df['Profit'])
x_min = min(df.Population)
x_max = max(df.Population)
abscissa = np.linspace(x_min, x_max, 50)
hypot = lambda x: theta_final[0] + theta_final[1] * x
ordinate = [hypot(x) for x in abscissa]
plt.plot(abscissa, ordinate, label='Hypothesis h(x) = {:.2f} + {:.2f}x'.format(
float(theta_final[0]), float(theta_final[1])), color='indianred')
plt.legend(loc=4, frameon=True, fontsize=16)
# plt.savefig(fp_fig + os.sep + 'linreg_hw_2_3_plot_lin_reg.pdf')
Explanation: After running the batch gradient descent algorithm, we can plot the convergence of $J(\theta)$ over the number of iterations.
End of explanation
from mpl_toolkits.mplot3d import axes3d, Axes3D
from matplotlib import cm
fig = plt.figure(figsize=(12, 12))
ax = fig.gca(projection='3d')
theta_0_vals = np.linspace(-10, 10, 100)
theta_1_vals = np.linspace(-1, 4, 100)
theta1, theta2, cost = [], [], []
for t0 in theta_0_vals:
for t1 in theta_1_vals:
theta1.append(t0)
theta2.append(t1)
theta_array = np.array([[t0], [t1]])
cost.append(compute_cost(X, y, theta_array))
scat = ax.scatter(theta1, theta2, cost,
c=np.abs(cost), cmap=plt.get_cmap('rainbow'))
plt.xlabel(r'$\theta_0$', fontsize=24)
plt.ylabel(r'$\theta_1$', fontsize=24)
plt.title(r'Cost Function by $\theta_0$ and $\theta_1$', fontsize=24)
theta_0_hist = [x[0] for x in theta_hist]
theta_1_hist = [x[1] for x in theta_hist]
theta_hist_end = len(theta_0_hist) - 1
fig = plt.figure(figsize=(12, 12))
ax = fig.gca(projection='3d')
theta_0_vals = np.linspace(-10, 10, 100)
theta_1_vals = np.linspace(-1, 4, 100)
theta1, theta2, cost = [], [], []
for t0 in theta_0_vals:
for t1 in theta_1_vals:
theta1.append(t0)
theta2.append(t1)
theta_array = np.array([[t0], [t1]])
cost.append(compute_cost(X, y, theta_array))
scat = ax.scatter(theta1, theta2, cost,
c=np.abs(cost), cmap=plt.get_cmap('rainbow'))
plt.plot(theta_0_hist, theta_1_hist, J_hist, 'r',
label='Cost Minimization Path')
plt.plot(theta_0_hist[0], theta_1_hist[0], J_hist[0], 'ro',
label='Cost Minimization Start')
plt.plot(theta_0_hist[theta_hist_end],
theta_1_hist[theta_hist_end],
J_hist[theta_hist_end], 'co', label='Cost Minimization Finish')
plt.xlabel(r'$\theta_0$', fontsize=24)
plt.ylabel(r'$\theta_1$', fontsize=24)
plt.title(r'Cost Function Minimization', fontsize=24)
plt.legend(fontsize=12)
plt.savefig(fp_fig + os.sep + 'linreg_hw_2_4_plot_surface_plot.pdf')
Explanation: 2.4 Visualizing $J(\theta)$
End of explanation |
12,259 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<header class="w3-container w3-teal">
<img src="images/utfsm.png" alt="" height="100px" align="left"/>
<img src="images/mat.png" alt="" height="100px" align="right"/>
</header>
<br/><br/><br/><br/><br/>
MAT281
Laboratorio Aplicaciones de la Matemรกtica en la Ingenierรญa
Modelamiento y Errores
INSTRUCCIONES
Anoten su nombre y rol en la celda siguiente.
Desarrollen los problemas de manera secuencial.
Guarden constantemente con Ctr-S para evitar sorpresas.
Reemplacen en las celdas de cรณdigo donde diga #FIX_ME por el cรณdigo correspondiente.
Ejecuten cada celda de cรณdigo utilizando Ctr-Enter
Step1: Problema
Step3: Desafรญo 1 (10%)
ยฟQue significado tiene la siguiente expresiรณn en el cรณdigo para cargar los datos? ยฟPorquรฉ se realiza?
abalone[0] = conversion_sexo[abalone[0]]
Respuesta
...
3. Exploraciรณn visual de los datos
A continuaciรณn se le provee cรณdigo para generar visualizaciones de los datos. Ejecute secuencialmente e interprete los grรกficos.
Step4: Desafรญo 2 (20%)
ยฟQuรฉ variables parecen a priori mรกs relevantes para explicar el nรบmero de anillos? ยฟEl sexo del abalone es relevante?
Respuesta
...
4. Entrenando y Testeando los Modelos
A continuaciรณn se entrega el cรณdigo necesario para entrenar los distintos modelos y realizar una predicciรณn.
Se proveen ejemplos de utilizaciรณn.
Modelo A
$$ \log(A) = \alpha_0 + \alpha_1 W_1 + \alpha_2 W_2 +\alpha_3 W_3 +\alpha_4 W_4 + \alpha_5 S + \alpha_6 \log L + \alpha_7 \log D+ \alpha_8 \log H$$
Step5: Modelo B
$$ \log(A) = \beta_0 + \beta_1 W_1 + \beta_2 W_2 +\beta_3 W_3 +\beta W_4 + \beta_5 \log( L D H ) $$
Step6: Modelo C
Si $S=male$
Step7: Desafรญo 3 (20%)
Realice un grรกfico en el cual se comparan simultรกneamente el nรบmero de anillos reales vs el nรบmero de anillos estimados con los modelos A, B y C, รบnicamente para el caso de los abalones de sexo masculino.
Step8: 5. Obteniendo el error de mediciรณn
Utilice Holdout Set o Cross Validation para obtener una estimaciรณn razonable del error predictivo de los modelos A, B y C. Justifique la decisiรณn realizada. No se entrega la implementaciรณn numรฉrica de los mรฉtodos, pero puede basarse en los cรณdigos provistos en clases | Python Code:
#Configuracion para recargar mรณdulos y librerรญas cada vez
%reload_ext autoreload
%autoreload 2
%matplotlib inline
from mat281_code.lab import *
from IPython.core.display import HTML
HTML(open("style/mat281.css", "r").read())
alumno_1 = ("Sebastian Flores", "2004001-7")
alumno_2 = ("Maria Jose Vargas", "2004007-8")
HTML(greetings(alumno_1, alumno_2))
Explanation: <header class="w3-container w3-teal">
<img src="images/utfsm.png" alt="" height="100px" align="left"/>
<img src="images/mat.png" alt="" height="100px" align="right"/>
</header>
<br/><br/><br/><br/><br/>
MAT281
Laboratorio Aplicaciones de la Matemรกtica en la Ingenierรญa
Modelamiento y Errores
INSTRUCCIONES
Anoten su nombre y rol en la celda siguiente.
Desarrollen los problemas de manera secuencial.
Guarden constantemente con Ctr-S para evitar sorpresas.
Reemplacen en las celdas de cรณdigo donde diga #FIX_ME por el cรณdigo correspondiente.
Ejecuten cada celda de cรณdigo utilizando Ctr-Enter
End of explanation
%%bash
head data/abalone.data.txt
import numpy as np
# Cargando los datos
data = []
fh = open("data/abalone.data.txt","r")
# Estructura de datos:
# 'sex','length','diameter','height','weight.whole','weight.shucked','weight.viscera','weight.shell','rings'
# Ejemplo de linea
conversion_sexo = {"M":+1, "I":0, "F":-1}
for line in fh:
abalone = line.split(",")
abalone[0] = conversion_sexo[abalone[0]]
data.append([float(x) for x in abalone])
fh.close()
# Convertir lista a array
data = np.array(data)
# Limpiando datos erroneos (todos los valores excepto sexo deben ser estricamente positivos)
mask = np.all(data[:,1:]>0, axis=1)
data = data[mask]
# Imprimir datos (opcional)
#print data[:10]
Explanation: Problema: Abalone Dataset
Los datos Abalone Dataset corresponden a medidas fรญsicas de abulones u orejas marinas (abalones), una especie de caracoles marinos comestibles. Este set de datos fue descrito por Sam Waugh para su tesis de doctorado, en la cual utilizรณ los datos para ilustrar el comportamiento de algoritmos de clasificaciรณn. Desde entonces, se ha utilizado para verificar algoritmos de clasificaciรณn y regresiรณn.
<img src="images/abalone.jpg" alt="" width="600px" align="middle"/>
La base de datos contiene mediciones a 4177 abalones, donde las mediciones posibles son sexo ($S$), largo ($L$), diametro $D$, altura $H$, peso entero $W_1$, peso sin concha $W_2$, peso de visceras $W_3$, peso de concha $W_4$ y el nรบmero de anillos $N$.
Buscaremos predecir el nรบmero de anillos, utilizando las otras variables.
Modelos propuestos
Los modelos propuestos son los siguientes:
Modelo A
$$ \log(A) = \alpha_0 + \alpha_1 W_1 + \alpha_2 W_2 +\alpha_3 W_3 +\alpha_4 W_4 + \alpha_5 S + \alpha_6 \log L + \alpha_7 \log D+ \alpha_8 \log H$$
Modelo B
$$ \log(A) = \beta_0 + \beta_1 W_1 + \beta_2 W_2 +\beta_3 W_3 +\beta W_4 + \beta_5 \log( L D H ) $$
Modelo C
Si $S=male$:
$$ \log(A) = \theta_0^M + \theta_1^M W_2 + \theta_2^M W_4 + \theta_3^M \log( L D H ) $$
Si $S=female$
$$ \log(A) = \theta_0^F + \theta_1^F W_2 + \theta_2^F W_4 + \theta_3^F \log( L D H ) $$
Si $S=indefined$
$$ \log(A) = \theta_0^I + \theta_1^I W_2 + \theta_2^I W_4 + \theta_3^I \log( L D H ) $$
1. Descargando los datos
Descargue el archivo a analizar desde el siguiente link:
http://archive.ics.uci.edu/ml/machine-learning-databases/abalone/abalone.data
Guarde el archivo en la carpeta Lab03/data/ con el nombre abalone.data.txt
2. Leyendo los datos
A continuaciรณn se le provee cierto cรณdigo para que lea los datos desde el archivo data/abalone.data.txt y los cargue en un arreglo en numpy.
End of explanation
from matplotlib import pyplot as plt
def plot(data, i, j):
label = ['Sexo',
'Largo',
'Diametro',
'Altura',
'Peso Entero',
'Peso Desconchado',
'Peso Viscera',
'Peso Concha',
'Numero Anillos']
M_mask = data[:,0] == +1
I_mask = data[:,0] == 0
F_mask = data[:,0] == -1
plt.figure(figsize=(16,8))
plt.plot(data[:,i][M_mask], data[:,j][M_mask], "og", label="M")
plt.plot(data[:,i][F_mask], data[:,j][F_mask], "sr", label="F")
plt.plot(data[:,i][I_mask], data[:,j][I_mask], "<b", label="I")
plt.xlabel(label[i])
plt.ylabel(label[j])
plt.legend()
plt.show()
Sandbox: Cambie los parรกmetros para obtener distintos grรกficos
Indices:
0:'Sexo',
1:'Largo',
2:'Diametro',
3:'Altura',
4:'Peso Entero',
5:'Peso Desconchado',
6:'Peso Viscera',
7:'Peso Concha',
8:'Numero Anillos'
plot(data, 1, 8)
Explanation: Desafรญo 1 (10%)
ยฟQue significado tiene la siguiente expresiรณn en el cรณdigo para cargar los datos? ยฟPorquรฉ se realiza?
abalone[0] = conversion_sexo[abalone[0]]
Respuesta
...
3. Exploraciรณn visual de los datos
A continuaciรณn se le provee cรณdigo para generar visualizaciones de los datos. Ejecute secuencialmente e interprete los grรกficos.
End of explanation
# 'sex','length','diameter','height','weight.whole','weight.shucked','weight.viscera','weight.shell','rings'
# Entrenando el modelo A
def train_model_A(data):
y = np.log(data[:,-1])
X = data.copy()
X[:,0] = 1.0
X[:,1:4] = np.log(X[:,1:4])
coeffs = np.linalg.lstsq(X, y)[0]
return coeffs
# Testeando el modelo A
def test_model_A(data, coeffs):
X = data.copy()
X[:,0] = 1.0
X[:,1:4] = np.log(X[:,1:4])
ln_anillos = np.dot(X, coeffs)
return np.exp(ln_anillos)
# Obtener valores y prediccion
coeffs_A = train_model_A(data)
y_pred = test_model_A(data, coeffs_A)
# Mostrar graficamente
y_data = data[:,-1]
plt.figure(figsize=(16,8))
plt.plot(y_data, y_pred, "x")
plt.plot(y_data, y_data, "k-")
plt.show()
Explanation: Desafรญo 2 (20%)
ยฟQuรฉ variables parecen a priori mรกs relevantes para explicar el nรบmero de anillos? ยฟEl sexo del abalone es relevante?
Respuesta
...
4. Entrenando y Testeando los Modelos
A continuaciรณn se entrega el cรณdigo necesario para entrenar los distintos modelos y realizar una predicciรณn.
Se proveen ejemplos de utilizaciรณn.
Modelo A
$$ \log(A) = \alpha_0 + \alpha_1 W_1 + \alpha_2 W_2 +\alpha_3 W_3 +\alpha_4 W_4 + \alpha_5 S + \alpha_6 \log L + \alpha_7 \log D+ \alpha_8 \log H$$
End of explanation
# 'sex','length','diameter','height','weight.whole','weight.shucked','weight.viscera','weight.shell','rings'
# Entrenando el modelo B
def train_model_B(data):
y = np.log(data[:,-1])
X = np.ones([data.shape[0],6])
X[:,0] = 1.0
X[:,1:5] = data[:,4:8]
X[:,5] = np.log(data[:,1]*data[:,2]*data[:,3])
coeffs = np.linalg.lstsq(X, y)[0]
return coeffs
# Testeando el modelo B
def test_model_B(data, coeffs):
X = np.ones([data.shape[0],6])
X[:,0] = 1.0
X[:,1:5] = data[:,4:8]
X[:,5] = np.log(data[:,1]*data[:,2]*data[:,3])
ln_anillos = np.dot(X, coeffs)
return np.round(np.exp(ln_anillos))
# Obtener valores y prediccion
coeffs_B = train_model_B(data)
y_pred = test_model_B(data, coeffs_B)
# Mostrar graficamente
plt.figure(figsize=(16,8))
plt.plot(y_data, y_pred, "x")
plt.plot(y_data, y_data, "k-")
plt.show()
Explanation: Modelo B
$$ \log(A) = \beta_0 + \beta_1 W_1 + \beta_2 W_2 +\beta_3 W_3 +\beta W_4 + \beta_5 \log( L D H ) $$
End of explanation
# 'sex','length','diameter','height','weight.whole','weight.shucked','weight.viscera','weight.shell','rings'
# Entrenando el modelo C
def train_model_C(data):
mask_I = data[:,0] == 0
mask_M = data[:,0] == +1
mask_F = data[:,0] == -1
y = np.log(data[:,-1])
X = np.ones([data.shape[0], 4])
X[:,0] = 1.0
X[:,1] = data[:,5]
X[:,2] = data[:,7]
X[:,3] = np.log(data[:,1]*data[:,2]*data[:,3])
coeffs_I = np.linalg.lstsq(X[mask_I], y[mask_I])[0]
coeffs_M = np.linalg.lstsq(X[mask_M], y[mask_M])[0]
coeffs_F = np.linalg.lstsq(X[mask_F], y[mask_F])[0]
return (coeffs_I, coeffs_M, coeffs_F)
# Testeando el modelo C
def test_model_C(data, coeffs):
mask_I = data[:,0] == 0
mask_M = data[:,0] == +1
mask_F = data[:,0] == -1
y = np.log(data[:,-1])
X = np.ones([data.shape[0], 4])
X[:,0] = 1.0
X[:,1] = data[:,5]
X[:,2] = data[:,7]
X[:,3] = np.log(data[:,1]*data[:,2]*data[:,3])
# Fill up the solution
ln_anillos = np.zeros(data[:,0].shape)
ln_anillos[mask_I] = np.dot(X[mask_I], coeffs[0])
ln_anillos[mask_M] = np.dot(X[mask_M], coeffs[1])
ln_anillos[mask_F] = np.dot(X[mask_F], coeffs[-1])
return np.round(np.exp(ln_anillos))
# Obtener valores y prediccion
coeffs_C = train_model_C(data)
y_pred = test_model_C(data, coeffs_C)
# Mostrar graficamente
plt.figure(figsize=(16,8))
plt.plot(y_data, y_pred, "x")
plt.plot(y_data, y_data, "k-")
plt.show()
Explanation: Modelo C
Si $S=male$:
$$ \log(A) = \theta_0^M + \theta_1^M W_2 + \theta_2^M W_4 + \theta_3^M \log( L D H ) $$
Si $S=female$
$$ \log(A) = \theta_0^F + \theta_1^F W_2 + \theta_2^F W_4 + \theta_3^F \log( L D H ) $$
Si $S=indefined$
$$ \log(A) = \theta_0^I + \theta_1^I W_2 + \theta_2^I W_4 + \theta_3^I \log( L D H ) $$
End of explanation
# Realice aqui su grafico
plt.figure(figsize=(16,8))
plt.plot()
plt.show()
Explanation: Desafรญo 3 (20%)
Realice un grรกfico en el cual se comparan simultรกneamente el nรบmero de anillos reales vs el nรบmero de anillos estimados con los modelos A, B y C, รบnicamente para el caso de los abalones de sexo masculino.
End of explanation
# Implemente aquรญ su algoritmo para obtener el error predictivo de los mรฉtodos
Explanation: 5. Obteniendo el error de mediciรณn
Utilice Holdout Set o Cross Validation para obtener una estimaciรณn razonable del error predictivo de los modelos A, B y C. Justifique la decisiรณn realizada. No se entrega la implementaciรณn numรฉrica de los mรฉtodos, pero puede basarse en los cรณdigos provistos en clases:
Holdout Set
Cross Validation
End of explanation |
12,260 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Verification of the FUSED-Wind wrapper
common inputs
Step2: FUSED-Wind implementation
Step3: pure python implementation
Step4: Asserting new implementation
Step5: There was a bug corrected in the new implementation of the GCL model
Time comparison
New implementation is wrapped inside fusedwind
Step6: Pandas
Step7: WD uncertainty
Normally distributed wind direction uncertainty (reference wind direction, not for individual turbines).
Step8: Uniformly distributed wind direction uncertainty (bin/sectors definition) | Python Code:
wf.WindFarm?
v80 = wt.WindTurbine('Vestas v80 2MW offshore','V80_2MW_offshore.dat',70,40)
HR1 = wf.WindFarm(name='Horns Rev 1',yml='hornsrev.yml')#,v80)
WD = range(0,360,1)
Explanation: Verification of the FUSED-Wind wrapper
common inputs
End of explanation
##Fused inputs
inputs = dict(
wind_speed=8.0,
roughness=0.0001,
TI=0.05,
NG=4,
sup='lin',
wt_layout = fused_gcl.generate_GenericWindFarmTurbineLayout(HR1))
fgcl = fused_gcl.FGCLarsen()
# Setting the inputs
for k,v in inputs.iteritems():
setattr(fgcl, k, v)
fP_WF = np.zeros([len(WD)])
for iwd, wd in enumerate(WD):
fgcl.wind_direction = wd
fgcl.run()
fP_WF[iwd] = fgcl.power
Explanation: FUSED-Wind implementation
End of explanation
P_WF = np.zeros([len(WD)])
P_WF_v0 = np.zeros([len(WD)])
for iwd, wd in enumerate(WD):
P_WT,U_WT,CT_WT = GCLarsen(WS=8.0,z0=0.0001,TI=0.05,WD=wd,WF=HR1,NG=4,sup='lin')
P_WF[iwd] = P_WT.sum()
P_WT,U_WT,CT_WT = GCLarsen_v0(WS=8.0,z0=0.0001,TI=0.05,WD=wd,WF=HR1,NG=4,sup='lin')
P_WF_v0[iwd] = P_WT.sum()
fig, ax = plt.subplots()
ax.plot(WD,P_WF/(HR1.WT[0].get_P(8.0)*HR1.nWT),'-o', label='python')
ax.plot(WD,P_WF_v0/(HR1.WT[0].get_P(8.0)*HR1.nWT),'-d', label='python v0')
ax.set_xlabel('wd [deg]')
ax.set_ylabel('Wind farm efficiency [-]')
ax.set_title(HR1.name)
ax.legend(loc=3)
plt.savefig(HR1.name+'_Power_wd_360.pdf')
Explanation: pure python implementation
End of explanation
WD = 261.05
P_WT,U_WT,CT_WT = GCLarsen_v0(WS=10.,z0=0.0001,TI=0.1,WD=WD,WF=HR1, NG=5, sup='quad')
P_WT_2,U_WT_2,CT_WT_2 = GCLarsen(WS=10.,z0=0.0001,TI=0.1,WD=WD,WF=HR1, NG=5, sup='quad')
np.testing.assert_array_almost_equal(U_WT,U_WT_2)
np.testing.assert_array_almost_equal(P_WT,P_WT_2)
Explanation: Asserting new implementation
End of explanation
WD = range(0,360,1)
%%timeit
fP_WF = np.zeros([len(WD)])
for iwd, wd in enumerate(WD):
fgcl.wind_direction = wd
fgcl.run()
fP_WF[iwd] = fgcl.power
%%timeit
#%%prun -s cumulative #profiling
P_WF = np.zeros([len(WD)])
for iwd, wd in enumerate(WD):
P_WT,U_WT,CT_WT = GCLarsen(WS=8.0,z0=0.0001,TI=0.05,WD=wd,WF=HR1,NG=4,sup='lin')
P_WF[iwd] = P_WT.sum()
Explanation: A bug was corrected in the new implementation of the GCL model.
Time comparison
The new implementation is wrapped inside fusedwind
End of explanation
df=pd.DataFrame(data=P_WF, index=WD, columns=['P_WF'])
df.plot()
Explanation: Pandas
End of explanation
P_WF_GAv8 = np.zeros([len(WD)])
P_WF_GAv16 = np.zeros([len(WD)])
for iwd, wd in enumerate(WD):
P_WT_GAv,U_WT,CT_WT = GCL_P_GaussQ_Norm_U_WD(meanWD=wd,stdWD=2.5,NG_P=8, WS=8.0,z0=0.0001,TI=0.05,WF=HR1,NG=4,sup='lin')
P_WF_GAv8[iwd] = P_WT_GAv.sum()
P_WT_GAv,U_WT,CT_WT = GCL_P_GaussQ_Norm_U_WD(meanWD=wd,stdWD=2.5,NG_P=16, WS=8.0,z0=0.0001,TI=0.05,WF=HR1,NG=4,sup='lin')
P_WF_GAv16[iwd] = P_WT_GAv.sum()
fig, ax = plt.subplots()
fig.set_size_inches([12,6])
ax.plot(WD,P_WF/(HR1.WT.get_P(8.0)*HR1.nWT),'-o', label='Pure python')
ax.plot(WD,fP_WF/(HR1.WT.get_P(8.0)*HR1.nWT),'-d', label='FUSED wrapper')
ax.plot(WD,P_WF_GAv16/(HR1.WT.get_P(8.0)*HR1.nWT),'-', label='Gauss Avg. FUSED wrapper, NG_P = 16')
ax.plot(WD,P_WF_GAv8/(HR1.WT.get_P(8.0)*HR1.nWT),'-', label='Gauss Avg. FUSED wrapper, NG_P = 8')
ax.set_xlabel('wd [deg]')
ax.set_ylabel('Wind farm efficiency [-]')
ax.set_title(HR1.name)
ax.legend(loc=3)
plt.savefig(HR1.name+'_Power_wd_360.pdf')
Explanation: WD uncertainty
Normally distributed wind direction uncertainty (reference wind direction, not for individual turbines).
End of explanation
P_WF_GA_u8 = np.zeros([len(WD)])
for iwd, wd in enumerate(WD):
P_WT_GAv,U_WT,CT_WT = GCL_P_GaussQ_Uni_U_WD(meanWD=wd,U_WD=2.5,NG_P=8, WS=8.0,z0=0.0001,TI=0.05,WF=HR1,NG=4,sup='lin')
P_WF_GA_u8[iwd] = P_WT_GAv.sum()
fig, ax = plt.subplots()
fig.set_size_inches([12,6])
ax.plot(WD,fP_WF/(HR1.WT.get_P(8.0)*HR1.nWT),'-d', label='FUSED wrapper')
ax.plot(WD,P_WF_GAv8/(HR1.WT.get_P(8.0)*HR1.nWT),'-', label='Gauss Quad. Normal, NG_P = 8')
ax.plot(WD,P_WF_GA_u8/(HR1.WT.get_P(8.0)*HR1.nWT),'-', label='Gauss Quad. Uniform, NG_P = 8')
ax.set_xlabel('wd [deg]')
ax.set_ylabel('Wind farm efficiency [-]')
ax.set_title(HR1.name)
ax.legend(loc=3)
plt.savefig(HR1.name+'_Power_wd_360.pdf')
Explanation: Uniformly distributed wind direction uncertainty (bin/sectors definition)
End of explanation |
12,261 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Contents
This notebook covers the basics of creating TransferFunction object, obtaining time and energy resolved responses, plotting them and using IO methods available. Finally, artificial responses are introduced which provide a way for quick testing.
Setup
Set up some useful libraries.
Step1: Import relevant stingray libraries.
Step2: Creating TransferFunction
A transfer function can be initialized by passing a 2-d array containing time across the first dimension and energy across the second. For example, if the 2-d array is defined by arr, then arr[1][5] defines a time of 5 units and energy of 1 unit.
For the purpose of this tutorial, we have stored a 2-d array in a text file named intensity.txt. The script to generate this file is explained in Data Preparation notebook.
Step3: Initialize transfer function by passing the array defined above.
Step4: By default, time and energy spacing across both axes are set to 1. However, they can be changed by supplying additional parameters dt and de.
Obtaining Time-Resolved Response
The 2-d transfer function can be converted into a time-resolved/energy-averaged response.
Step5: This sets time parameter which can be accessed by transfer.time
Step6: Additionally, energy interval over which to average, can be specified by specifying e0 and e1 parameters.
Obtaining Energy-Resolved Response
Energy-resolved/time-averaged response can be also be formed from 2-d transfer function.
Step7: This sets energy parameter which can be accessed by transfer.energy
Step8: Plotting Responses
TransferFunction() creates plots of time-resolved, energy-resolved and 2-d responses. These plots can be saved by setting save parameter.
Step9: By enabling save=True parameter, the plots can be also saved.
IO
TransferFunction can be saved in pickle format and retrieved later.
Step10: Saved files can be read using static read() method.
Step11: Artificial Responses
For quick testing, two helper impulse response models are provided.
1- Simple IR
simple_ir() allows to define an impulse response of constant height. It takes in time resolution starting time, width and intensity as arguments.
Step12: 2- Relativistic IR
A more realistic impulse response mimicking black hole dynamics can be created using relativistic_ir(). Its arguments are | Python Code:
import numpy as np
from matplotlib import pyplot as plt
%matplotlib inline
Explanation: Contents
This notebook covers the basics of creating TransferFunction object, obtaining time and energy resolved responses, plotting them and using IO methods available. Finally, artificial responses are introduced which provide a way for quick testing.
Setup
Set up some useful libraries.
End of explanation
from stingray.simulator.transfer import TransferFunction
from stingray.simulator.transfer import simple_ir, relativistic_ir
Explanation: Import relevant stingray libraries.
End of explanation
response = np.loadtxt('intensity.txt')
Explanation: Creating TransferFunction
A transfer function can be initialized by passing a 2-d array containing time across the first dimension and energy across the second. For example, if the 2-d array is defined by arr, then arr[1][5] defines a time of 5 units and energy of 1 unit.
For the purpose of this tutorial, we have stored a 2-d array in a text file named intensity.txt. The script to generate this file is explained in Data Preparation notebook.
End of explanation
transfer = TransferFunction(response)
transfer.data.shape
Explanation: Initialize transfer function by passing the array defined above.
End of explanation
transfer.time_response()
Explanation: By default, time and energy spacing across both axes are set to 1. However, they can be changed by supplying additional parameters dt and de.
Obtaining Time-Resolved Response
The 2-d transfer function can be converted into a time-resolved/energy-averaged response.
End of explanation
transfer.time[1:10]
Explanation: This sets time parameter which can be accessed by transfer.time
End of explanation
transfer.energy_response()
Explanation: Additionally, energy interval over which to average, can be specified by specifying e0 and e1 parameters.
Obtaining Energy-Resolved Response
Energy-resolved/time-averaged response can be also be formed from 2-d transfer function.
End of explanation
transfer.energy[1:10]
Explanation: This sets energy parameter which can be accessed by transfer.energy
End of explanation
transfer.plot(response='2d')
transfer.plot(response='time')
transfer.plot(response='energy')
Explanation: Plotting Responses
TransferFunction() creates plots of time-resolved, energy-resolved and 2-d responses. These plots can be saved by setting the save parameter.
End of explanation
transfer.write('transfer.pickle')
Explanation: By enabling save=True parameter, the plots can be also saved.
IO
TransferFunction can be saved in pickle format and retrieved later.
End of explanation
transfer_new = TransferFunction.read('transfer.pickle')
transfer_new.time[1:10]
Explanation: Saved files can be read using static read() method.
End of explanation
s_ir = simple_ir(dt=0.125, start=10, width=5, intensity=0.1)
plt.plot(s_ir)
Explanation: Artificial Responses
For quick testing, two helper impulse response models are provided.
1- Simple IR
simple_ir() allows to define an impulse response of constant height. It takes in time resolution starting time, width and intensity as arguments.
End of explanation
r_ir = relativistic_ir(dt=0.125)
plt.plot(r_ir)
Explanation: 2- Relativistic IR
A more realistic impulse response mimicking black hole dynamics can be created using relativistic_ir(). Its arguments are: time_resolution, primary peak time, secondary peak time, end time, primary peak value, secondary peak value, rise slope and decay slope. These paramaters are set to appropriate values by default.
End of explanation |
12,262 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Basic usage of Sklearn
Step1: Text processing with Scikit learn
We can use CountVectorizer to extract a bag of words representation from a collection of documents, using the SciKit-Learn method fit_transform. We will use a list of strings as documents.
Step2: Array vector for the first document
Step3: Number of times word "hard" occurs
Step4: Using the 20 Newsgroups dataset
We are going to fetch just some categories so that it doesn't take that long to download the docs.
Step5: Creating a CountVectorizer object
Step6: We can now see how frequently the word algorithm occurs in the subset of the 20Newgroups collection we are considering.
Step7: How many terms were extracted? use get_feature_names()
Step8: CountVectorizer can do more preprocessing. This can be stopword removal.
Step9: More preprocessing
For stemming and more advanced preprocessing, supplement SciKit Learn with another Python library, NLTK. Up next.
More advanced preprocessing with NLTK
NLTK is described in detail in a book by Bird, Klein and Loper available online
Step10: Create an English stemmer
http
Step11: NLTK for text analytics
NERs
Sentiment analysis
Extracting information from social media.
Step12: Integrating NLTK with SciKit's vectorizer
NLTK Stemmer
The stemmer can be used to stem documents before feeding into SciKit's vectorizer, thus obtaining a more compact index.
One way to do this is to define a new class StemmedCountVectorizer extending CountVectorizer by redifining the method build_analyzer() that handles preprocessing and tokenization.
http
Step13: If we modify build_analyzer() to apply the NLTK stemmer to the output of default build_analyzer(), we get a version that does stemming as well
Step14: So now we can create an instance of this class
Step15: Use this vectorizer to extract features
Compare this result to around 35,000 features we obtained using the unstemmed version.
Step16: Notes
You should always experiment and see if it is good to use stemming with your problem set. It might not be the best thing to do.
SOLR works for processing larger datasets, since Python and SciKit-Learn become less effective, and more industrial strength software is required. One example of such software is Apache SOLR, an open source indexing package available from | Python Code:
import sklearn
import numpy as np
import matplotlib.pyplot as plt
data = np.array([[1,2], [2,3], [3,4], [4,5], [5,6]])
x = data[:,0]
y = data[:,1]
data, x, y
Explanation: Basic usage of Sklearn
End of explanation
from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer(min_df = 1)
content = ["How to format my hard disk", " Hard disk format problems "]
# fit_transform returns array of two rows, one per 'document'.
# each row has 7 elements, each element being the number of items
# a given feature occurred in that document.
X = vectorizer.fit_transform(content)
vectorizer.get_feature_names(), X.toarray()
Explanation: Text processing with Scikit learn
We can use CountVectorizer to extract a bag of words representation from a collection of documents, using the SciKit-Learn method fit_transform. We will use a list of strings as documents.
End of explanation
X.toarray()[0]
Explanation: Array vector for the first document
End of explanation
X.toarray()[1][vectorizer.get_feature_names().index('hard')]
Explanation: Number of times word "hard" occurs
End of explanation
from sklearn.datasets import fetch_20newsgroups
categories = ['alt.atheism', 'soc.religion.christian',
'comp.graphics', 'sci.med']
twenty_train = fetch_20newsgroups(subset='train',
categories=categories, shuffle=True,
random_state=42)
Explanation: Using the 20 Newsgroups dataset
We are going to fetch just some categories so that it doesn't take that long to download the docs.
End of explanation
from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer()
train_counts = vectorizer.fit_transform(twenty_train.data)
Explanation: Creating a CountVectorizer object
End of explanation
vectorizer.vocabulary_.get(u'algorithm')
Explanation: We can now check that the word algorithm was extracted from the subset of the 20 Newsgroups collection we are considering; note that vocabulary_ maps each term to its column index in the count matrix, not to its frequency.
End of explanation
len(vectorizer.get_feature_names())
Explanation: How many terms were extracted? use get_feature_names()
End of explanation
vectorizer = CountVectorizer(stop_words='english')
sorted(vectorizer.get_stop_words())[0:20]
Explanation: CountVectorizer can do more preprocessing. This can be stopword removal.
End of explanation
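As a quick sketch reusing objects already defined above (illustrative only), the effect of stop-word removal on the vocabulary size can be checked directly:
vectorizer_sw = CountVectorizer(stop_words='english')
counts_sw = vectorizer_sw.fit_transform(twenty_train.data)
len(vectorizer_sw.get_feature_names())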
import nltk
Explanation: More preprocessing
For stemming and more advanced preprocessing, supplement SciKit Learn with another Python library, NLTK. Up next.
More advanced preprocessing with NLTK
NLTK is described in detail in a book by Bird, Klein and Loper available online:
http://www.nltk.org/book_1ed/ for version 2.7 of python
About NLTK
It is not the best
It is very easy to use
You should read the book linked above to get familiar with the package and with text preprocessing.
End of explanation
s = nltk.stem.SnowballStemmer('english')
s.stem("cats"), s.stem("ran"), s.stem("jumped")
Explanation: Create an English stemmer
http://www.nltk.org/howto/stem.html for general intro.
http://www.nltk.org/api/nltk.stem.html for more details (including languages covered).
End of explanation
from nltk.tokenize import word_tokenize
text = word_tokenize("And now for something completely different")
nltk.pos_tag(text)
Explanation: NLTK for text analytics
NERs
Sentiment analysis
Extracting information from social media.
End of explanation
from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer(stop_words="english")
analyze = vectorizer.build_analyzer()
analyze("John bought carrots and potatoes")
Explanation: Integrating NLTK with SciKit's vectorizer
NLTK Stemmer
The stemmer can be used to stem documents before feeding into SciKit's vectorizer, thus obtaining a more compact index.
One way to do this is to define a new class StemmedCountVectorizer extending CountVectorizer by redefining the method build_analyzer() that handles preprocessing and tokenization.
http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html
build_analyzer() takes a string as input and outputs a list of tokens.
End of explanation
import nltk.stem
english_stemmer = nltk.stem.SnowballStemmer('english')
class StemmedCountVectorizer(CountVectorizer):
def build_analyzer(self):
        analyzer = super(StemmedCountVectorizer, self).build_analyzer()
        return lambda doc: (english_stemmer.stem(w) for w in analyzer(doc))
Explanation: If we modify build_analyzer() to apply the NLTK stemmer to the output of default build_analyzer(), we get a version that does stemming as well:
End of explanation
stem_vectorizer = StemmedCountVectorizer(min_df=1,
stop_words='english')
stem_analyze = stem_vectorizer.build_analyzer()
Y = stem_analyze("John bought carrots and potatoes")
[tok for tok in Y]
Explanation: So now we can create an instance of this class:
End of explanation
from sklearn.datasets import fetch_20newsgroups
categories = ['alt.atheism', 'soc.religion.christian',
'comp.graphics', 'sci.med']
twenty_train = fetch_20newsgroups(subset='train',
categories=categories,
shuffle=True, random_state=42)
train_counts = stem_vectorizer.fit_transform(twenty_train.data)
len(stem_vectorizer.get_feature_names())
Explanation: Use this vectorizer to extract features
Compare this result to around 35,000 features we obtained using the unstemmed version.
End of explanation
!ipython nbconvert --to script Lab1\ Text\ processing\ with\ python.ipynb
Explanation: Notes
You should always experiment to see whether stemming actually helps on your problem set; it is not always the best thing to do.
For processing larger datasets, Python and SciKit-Learn become less effective and more industrial-strength software is required. One example of such software is Apache SOLR, an open-source indexing package available from:
http://lucene.apache.org/solr/
It produces Lucene-style indices that can be used by text analytics packages such as Mahout.
Elastic http://www.elastic.co/
End of explanation |
12,263 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Config files with specifications
Epoch files
Epoch files are config files which specify a set of options repeatedly on different date/times, represented by sections of the config file. When a value for an option for a given date is requested, the value in the last epoch (section) before the given date is returned.
All options do not need to be set in each epoch. Values are inherited from epoch to epoch in chronological order.
Epoch files have a specification for each option -- a type and default value.
Step1: Specify an epoch parser with an epoch filename and the specification filename. | Python Code:
%cat epochs_spec.cfg
%cat epochs.cfg
Explanation: Config files with specifications
Epoch files
Epoch files are config files which specify a set of options repeatedly on different date/times, represented by sections of the config file. When a value for an option for a given date is requested, the value in the last epoch (section) before the given date is returned.
All options do not need to be set in each epoch. Values are inherited from epoch to epoch in chronological order.
Epoch files have a specification for each option -- a type and default value.
End of explanation
ep = burin.config.EpochParser('epochs.cfg', 'epochs_spec.cfg')
ep.is_valid()
ep.get('cal_version', date='20180101')
ep.get('cal_version', date='20180101.120000')
Explanation: Specify an epoch parser with an epoch filename and the specification filename.
End of explanation |
12,264 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
Lists have a very simple method to insert elements: | Problem:
import numpy as np
a = np.array([[1,2],[3,4]])
pos = [1, 2]
element = np.array([[3, 5], [6, 6]])
pos = np.array(pos) - np.arange(len(element))
a = np.insert(a, pos, element, axis=0) |
12,265 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Peak Magnetic Field Strength
Magnetic models of young stars have their peak magnetic field strength prescribed where $R = 0.5 R_{\star}$. This works well and permits models of young stars with strong surface magnetic fields ($\langle{\rm B}f\rangle$) to converge and evolve without issue. We can understand this by plotting how the peak magnetic field strength varies as a function of depth in the dipole magnetic field models. The magnetic field profile in this case is prescribed to be
\begin{equation}
B(R) = B_{\rm surf} \left(\frac{R_{\star}}{R}\right)^3.
\end{equation}
We can therefore easily visualize the peak magnetic field strength as a function of the surface magnetic field strength and the cut-off radius where we define the peak strength.
Step1: which leads to the following figure,
Step2: It is clear that interior magnetic field strengths remain below 100 kG for all values of the surface field strength when the peak magnetic field strength is assigned to a radial fraction greater than approximately $0.40 R_{\star}$. These values for magnetic field strengths ($< 100 \textrm{ kG}$) permit convergence as they produce values of the magnetic pressure and magnetic energy density that are well below values for the gas pressure and the internal energy of the ambient plasma.
This is important to understand once we introduce magnetic fields in young stars that later develop a radiative core during their pre-main-sequence evolution. Since the code was initially designed to treat the effects of magnetic fields in main-sequence stars, the development of a radiative core was not explicitly considered. Instead, radiative cores existed prior to the inclusion of a magnetic field. Since most stars have radiative cores that extend no lower than $0.40 R_{\star}$, there was no need to be concerned about convergence outside of strong surface magnetic fields. As a result, we defined the peak magnetic field strength to occur at the base of the tachocline (radiative-convection zone interface).
Fully convective stars were assigned peak magnetic field strengths at $0.30 R_{\star}$, which is just beyond the maximum depth observed for the convection zone in partially convective stars and is also where the peak magnetic field strength appeared (a very slight peak) in 3D MHD models of fully convective dynamos (Browning et al. 2008).
Pre-main-sequence stars therefore pose an interesting problem
Step3: In the figure above, the standard track (dark grey) is slightly cooler than the unperturbed magnetic tracks due to a difference in the depth at which the surface boundary conditions are defined. The above track is used only for illustration, as $\tau_{\rm ross} = 10$ standard models have not been computed, yet.
Now, at a higher mass where magnetic models have not previously been computed.
Step4: Differences in the effective temperature frame are considerable; nearly 1000 K cooler in the case where magnetic fields are implemented! We can now push the models further with the new boundary definition. For example,
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
radial_points = np.arange(0.01, 1.0, 0.01) # units of Rstar
bfield_scaling = radial_points**(-3.0) # see equation (1)
bfield_surface = np.arange(0.5, 4.1, 0.5) # units of kiloGauss
Explanation: Peak Magnetic Field Strength
Magnetic models of young stars have their peak magnetic field strength prescribed where $R = 0.5 R_{\star}$. This works well and permits models of young stars with strong surface magnetic fields ($\langle{\rm B}f\rangle$) to converge and evolve without issue. We can understand this by plotting how the peak magnetic field strength varies as a function of depth in the dipole magnetic field models. The magnetic field profile in this case is prescribed to be
\begin{equation}
B(R) = B_{\rm surf} \left(\frac{R_{\star}}{R}\right)^3.
\end{equation}
We can therefore easily visualize the peak magnetic field strength as a function of the surface magnetic field strength and the cut-off radius where we define the peak strength.
End of explanation
fig, ax = plt.subplots(1, 1, figsize=(8.0, 8.0))
# configure axis labels and limits
ax.set_xlabel('Radius Fraction, $R/R_{\\star}$', fontsize=20.)
ax.set_ylabel('Magnetic Field Strength (kG)', fontsize=20.)
ax.set_ylim(0.1, 1.0e3)
ax.tick_params(which='major', axis='both', length=10.0, labelsize=16.)
ax.grid(True)
for bf in bfield_surface:
ax.semilogy(radial_points, bf*bfield_scaling, '-', lw=3)
fig.tight_layout()
Explanation: which leads to the following figure,
End of explanation
filename = 'm0400_GAS07_p000_p0_y26_mlt2.040_mag25kG.trk'
std_iso = np.genfromtxt('../../evolve/dmestar/trk/gas07/p000/a0/amlt2202/m0400_GAS07_p000_p0_y26_mlt2.202.trk')
old_iso = np.genfromtxt('../../evolve/dmestar/trk/gas07/p000/a0/amlt2040/mag25kG/{0}'.format(filename))
new_iso = np.genfromtxt('../../evolve/models/tmp/{0}'.format(filename))
fig , ax = plt.subplots(3, 1, figsize=(8, 12), sharex=True)
ax[0].set_title('$M = 0.40 M_{\\odot}$ with $\\langle{\\rm B}f\\rangle$ = 2.5 kG', family='serif', fontsize=20.)
ax[2].set_xlabel('Age (Myr)', fontsize=20.)
ax[0].set_ylabel('Effective Temperature (K)', fontsize=20.)
ax[1].set_ylabel('Luminosity ($L_{\\odot}$)', fontsize=20.)
ax[2].set_ylabel('Radius ($R_{\\odot}$)', fontsize=20.)
for axis in ax:
axis.set_xlim(1.0e-1, 1.0e1)
axis.tick_params(which='major', axis='both', length=10.0, labelsize=16.)
# Temperature
ax[0].semilogx(old_iso[:,0]/1.0e6, 10**old_iso[:,1], '-', lw=3, color='#800000')
ax[0].semilogx(new_iso[:,0]/1.0e6, 10**new_iso[:,1], '-', lw=2, color='#1e90ff')
ax[0].semilogx(std_iso[:,0]/1.0e6, 10**std_iso[:,1], '-', lw=4, color='#555555')
# Luminosity
ax[1].semilogx(old_iso[:,0]/1.0e6, 10**old_iso[:,3], '-', lw=3, color='#800000')
ax[1].semilogx(new_iso[:,0]/1.0e6, 10**new_iso[:,3], '-', lw=2, color='#1e90ff')
ax[1].semilogx(std_iso[:,0]/1.0e6, 10**std_iso[:,3], '-', lw=4, color='#555555')
# Radius
ax[2].semilogx(old_iso[:,0]/1.0e6, 10**old_iso[:,4], '-', lw=3, color='#800000')
ax[2].semilogx(new_iso[:,0]/1.0e6, 10**new_iso[:,4], '-', lw=2, color='#1e90ff')
ax[2].semilogx(std_iso[:,0]/1.0e6, 10**std_iso[:,4], '-', lw=4, color='#555555')
Explanation: It is clear that interior magnetic field strengths remain below 100 kG for all values of the surface field strength when the peak magnetic field strength is assigned to a radial fraction greater than approximately $0.40 R_{\star}$. These values for magnetic field strengths ($< 100 \textrm{ kG}$) permit convergence as they produce values of the magnetic pressure and magnetic energy density that are well below values for the gas pressure and the internal energy of the ambient plasma.
This is important to understand once we introduce magnetic fields in young stars that later develop a radiative core during their pre-main-sequence evolution. Since the code was initially designed to treat the effects of magnetic fields in main-sequence stars, the development of a radiative core was not explicitly considered. Instead, radiative cores existed prior to the inclusion of a magnetic field. Since most stars have radiative cores that extend no lower than $0.40 R_{\star}$, there was no need to be concerned about convergence outside of strong surface magnetic fields. As a result, we defined the peak magnetic field strength to occur at the base of the tachocline (radiative-convection zone interface).
Fully convective stars were assigned peak magnetic field strengths at $0.30 R_{\star}$, which is just beyond the maximum depth observed for the convection zone in partially convective stars and is also where the peak magnetic field strength appeared (a very slight peak) in 3D MHD models of fully convective dynamos (Browning et al. 2008).
Pre-main-sequence stars therefore pose an interesting problem: stars that eventually become partially convective on the main sequence start off as fully convective stars along the Hayashi track. However, once a radiative core begins developing at the beginning of the Henyey track, the radius at which the peak magnetic field strength is defined jumps from $0.50 R_{\star}$ to $0.12 R_{\star}$, which causes an immense increase in the peak magnetic field strength. Such a deep convection zone is not seen in main-sequence low-mass stars (they'd become fully convective, instead). We therefore need to control for the evolutionary development of a radiative core and the receding convection zone boundary.
Revising the Boundary Definition
Possible solutions:
1. Define all peak magnetic field strengths at the tachocline or $0.5 R_{\star}$, whichever is larger.
2. Define the peak magnetic field strength at $(R_{\star} - R_{\rm bcz}) / 2$.
Solution 1 is now implemented in an experimental version and was quite easy to add. Within the magnetic field module, the tachocline definition function was adjusted to select between the peak magnetic field strength location in a fully convective model (fc_tach) and the radius of the tachocline (r_tach), depending on which is larger. That is
fortran
r_tach = max(fc_tach, r_tach)
Test models with masses between $0.40 M_{\odot}$ and $0.65 M_{\odot}$ are now running. Models at the lower mass end of this range can be compared with previous converged models.
End of explanation
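A quick numerical sanity check of the revised definition, reusing the arrays defined earlier (a sketch, not part of the original models): pinning the peak at $0.5 R_{\star}$ makes the peak strength $(1/0.5)^3 = 8$ times the surface value, which stays well below the ~100 kG convergence limit for the surface fields considered here.
peak_radius = 0.5  # revised minimum radius of the peak field location, in units of Rstar
peak_bfield = bfield_surface * peak_radius**(-3.0)  # 8x the surface strength, per equation (1)
print(peak_bfield.max())  # about 32 kG for a 4.0 kG surface field, comfortably below 100 kG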
filename = 'm0600_GAS07_p000_p0_y26_mlt2.040_mag25kG.trk'
std_iso = np.genfromtxt('../../evolve/dmestar/trk/gas07/p000/a0/amlt2202/m0600_GAS07_p000_p0_y26_mlt2.202.trk')
new_iso = np.genfromtxt('../../evolve/models/tmp/{0}'.format(filename))
fig , ax = plt.subplots(3, 1, figsize=(8, 12), sharex=True)
ax[0].set_title('$M = 0.60 M_{\\odot}$ with $\\langle{\\rm B}f\\rangle$ = 2.5 kG', family='serif', fontsize=20.)
ax[2].set_xlabel('Age (Myr)', fontsize=20.)
ax[0].set_ylabel('Effective Temperature (K)', fontsize=20.)
ax[1].set_ylabel('Luminosity ($L_{\\odot}$)', fontsize=20.)
ax[2].set_ylabel('Radius ($R_{\\odot}$)', fontsize=20.)
ax[1].set_ylim(0.0, 5.0)
ax[2].set_ylim(0.0, 5.0)
for axis in ax:
axis.set_xlim(1.0e-1, 1.0e1)
axis.tick_params(which='major', axis='both', length=10.0, labelsize=16.)
# Temperature
ax[0].semilogx(new_iso[:,0]/1.0e6, 10**new_iso[:,1], '-', lw=2, color='#1e90ff')
ax[0].semilogx(std_iso[:,0]/1.0e6, 10**std_iso[:,1], '-', lw=4, color='#555555')
# Luminosity
ax[1].semilogx(new_iso[:,0]/1.0e6, 10**new_iso[:,3], '-', lw=2, color='#1e90ff')
ax[1].semilogx(std_iso[:,0]/1.0e6, 10**std_iso[:,3], '-', lw=4, color='#555555')
# Radius
ax[2].semilogx(new_iso[:,0]/1.0e6, 10**new_iso[:,4], '-', lw=2, color='#1e90ff')
ax[2].semilogx(std_iso[:,0]/1.0e6, 10**std_iso[:,4], '-', lw=4, color='#555555')
Explanation: In the figure above, the standard track (dark grey) is slightly cooler than the unperturbed magnetic tracks due to a difference in the depth at which the surface boundary conditions are defined. The above track is used only for illustration, as $\tau_{\rm ross} = 10$ standard models have not been computed, yet.
Now, at a higher mass where magnetic models have not previously been computed.
End of explanation
filename = 'm0900_GAS07_p000_p0_y26_mlt2.040_mag25kG.trk'
std_iso = np.genfromtxt('../../evolve/dmestar/trk/gas07/p000/a0/amlt2202/m0900_GAS07_p000_p0_y26_mlt2.202.trk')
new_iso = np.genfromtxt('../../evolve/models/tmp/{0}'.format(filename))
fig , ax = plt.subplots(3, 1, figsize=(8, 12), sharex=True)
ax[0].set_title('$M = 0.90 M_{\\odot}$ with $\\langle{\\rm B}f\\rangle$ = 2.5 kG', family='serif', fontsize=20.)
ax[2].set_xlabel('Age (Myr)', fontsize=20.)
ax[0].set_ylabel('Effective Temperature (K)', fontsize=20.)
ax[1].set_ylabel('Luminosity ($L_{\\odot}$)', fontsize=20.)
ax[2].set_ylabel('Radius ($R_{\\odot}$)', fontsize=20.)
ax[1].set_ylim(0.0, 7.0)
ax[2].set_ylim(1.0, 7.0)
for axis in ax:
axis.set_xlim(1.0e-1, 1.0e1)
axis.tick_params(which='major', axis='both', length=10.0, labelsize=16.)
# Temperature
ax[0].semilogx(new_iso[:,0]/1.0e6, 10**new_iso[:,1], '-', lw=2, color='#1e90ff')
ax[0].semilogx(std_iso[:,0]/1.0e6, 10**std_iso[:,1], '-', lw=4, color='#555555')
# Luminosity
ax[1].semilogx(new_iso[:,0]/1.0e6, 10**new_iso[:,3], '-', lw=2, color='#1e90ff')
ax[1].semilogx(std_iso[:,0]/1.0e6, 10**std_iso[:,3], '-', lw=4, color='#555555')
# Radius
ax[2].semilogx(new_iso[:,0]/1.0e6, 10**new_iso[:,4], '-', lw=2, color='#1e90ff')
ax[2].semilogx(std_iso[:,0]/1.0e6, 10**std_iso[:,4], '-', lw=4, color='#555555')
Explanation: Differences in the effective temperature frame are considerable; nearly 1000 K cooler in the case where magnetic fields are implemented! We can now push the models further with the new boundary definition. For example,
End of explanation |
12,266 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Download the list of occultation periods from the MOC at Berkeley.
Note that the occultation periods typically only are stored at Berkeley for the future and not for the past. So this is only really useful for observation planning.
Step1: Download the NuSTAR TLE archive.
This contains every two-line element (TLE) that we've received for the whole mission. We'll expand on how to use this later.
The times, line1, and line2 elements are now the TLE elements for each epoch.
Step2: Here is where we define the observing window that we want to use.
Note that tstart and tend must be in the future otherwise you won't find any occultation times and sunlight_periods will return an error.
Step3: We want to know how to orient NuSTAR for the Sun.
We can more or less pick any angle that we want. But this angle has to be specified a little in advance so that the NuSTAR SOC can plan the "slew in" maneuvers. Below puts DET0 in the top left corner (north-east with respect to RA/Dec coordinates).
This is what you tell the SOC you want the "Sky PA angle" to be.
Step4: This is where you actually make the Mosaic for Orbit 1
Step5: This is where you actually make the Mosaic for Orbit 2
Step6: This is where you actually make the Mosaic for Orbit 3
Step7: This is where you actually make the Mosaic for Orbit 4 | Python Code:
fname = io.download_occultation_times(outdir='../data/')
print(fname)
Explanation: Download the list of occultation periods from the MOC at Berkeley.
Note that the occultation periods are typically only stored at Berkeley for the future and not for the past, so this is really only useful for observation planning.
End of explanation
tlefile = io.download_tle(outdir='../data')
print(tlefile)
times, line1, line2 = io.read_tle_file(tlefile)
Explanation: Download the NuSTAR TLE archive.
This contains every two-line element (TLE) that we've received for the whole mission. We'll expand on how to use this later.
The times, line1, and line2 elements are now the TLE elements for each epoch.
End of explanation
tstart = '2019-04-25T22:00:00'
tend = '2019-04-26T23:00:00'
orbits = planning.sunlight_periods(fname, tstart, tend)
Explanation: Here is where we define the observing window that we want to use.
Note that tstart and tend must be in the future otherwise you won't find any occultation times and sunlight_periods will return an error.
End of explanation
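Before committing to one orbit it can help to list what sunlight_periods returned; a hedged sketch (assuming orbits behaves like a list of start/end pairs, as its use below suggests):
for ind, orbit in enumerate(orbits):
    print(ind, orbit)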
pa = planning.get_nustar_roll(tstart, 0)
print(tstart)
print("NuSTAR Roll angle for Det0 in NE quadrant: {}".format(pa))
Explanation: We want to know how to orient NuSTAR for the Sun.
We can more or less pick any angle that we want. But this angle has to be specified a little in advance so that the NuSTAR SOC can plan the "slew in" maneuvers. Below puts DET0 in the top left corner (north-east with respect to RA/Dec coordinates).
This is what you tell the SOC you want the "Sky PA angle" to be.
End of explanation
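Because any Sky PA angle can in principle be requested, here is a hedged example for a different orientation (the 45-degree value is purely illustrative):
pa_alt = planning.get_nustar_roll(tstart, 45)
print("NuSTAR Roll angle for a 45-degree Sky PA: {}".format(pa_alt))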
# Just use the first orbit...or choose one. This may download a ton of deltat.preds, which is a known
# bug to be fixed.
orbit = orbits[0].copy()
print(orbit)
#...adjust the index above to get the correct orbit. Then uncomment below.
planning.make_mosaic(orbit, make_regions=True, outfile='orbit1_mosaic.txt', write_output=True)
Explanation: This is where you actually make the Mosaic for Orbit 1
End of explanation
# Just use the first orbit...or choose one. This may download a ton of deltat.preds, which is a known
# bug to be fixed.
orbit = orbits[1].copy()
print(orbit)
#...adjust the index above to get the correct orbit. Then uncomment below.
planning.make_mosaic(orbit, make_regions=True, outfile='orbit2_mosaic.txt', write_output=True)
Explanation: This is where you actually make the Mosaic for Orbit 2
End of explanation
# Just use the first orbit...or choose one. This may download a ton of deltat.preds, which is a known
# bug to be fixed.
orbit = orbits[2].copy()
print(orbit)
#...adjust the index above to get the correct orbit. Then uncomment below.
planning.make_mosaic(orbit, make_regions=True, outfile='orbit3_mosaic.txt', write_output=True)
Explanation: This is where you actually make the Mosaic for Orbit 3
End of explanation
# Just use the first orbit...or choose one. This may download a ton of deltat.preds, which is a known
# bug to be fixed.
orbit = orbits[3].copy()
print(orbit)
#...adjust the index above to get the correct orbit. Then uncomment below.
planning.make_mosaic(orbit, make_regions=True, outfile='orbit4_mosaic.txt', write_output=True)
Explanation: This is where you actually make the Mosaic for Orbit 4
End of explanation |
12,267 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The Image Mark
Image is a Mark object, used to visualize images in standard format (png, jpg etc...), in a bqplot Figure
It takes as input an ipywidgets Image widget
The ipywidgets Image
Step1: Using pyplot's imshow to display the image
Step2: Displaying the image inside a bqplot Figure
Step3: Mixing with other marks
Image is a mark like any other, so they can be mixed and matched together.
Step4: Its traits (attributes) will also respond dynamically to a change from the backend | Python Code:
import os
import ipywidgets as widgets
import bqplot.pyplot as plt
from bqplot import LinearScale
image_path = os.path.abspath('../../data_files/trees.jpg')
with open(image_path, 'rb') as f:
raw_image = f.read()
ipyimage = widgets.Image(value=raw_image, format='jpg')
ipyimage
Explanation: The Image Mark
Image is a Mark object used to visualize images in standard formats (png, jpg, etc.) inside a bqplot Figure.
It takes as input an ipywidgets Image widget
The ipywidgets Image
End of explanation
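As a hedged alternative (assuming an ipywidgets release that provides the from_file helper), the same widget can be constructed without opening the file manually:
ipyimage_alt = widgets.Image.from_file(image_path)
ipyimage_alt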
plt.figure(padding_y=0)
axes_options = {'x': {'visible': False}, 'y': {'visible': False}}
plt.imshow(image_path, 'filename')
plt.show()
Explanation: Using pyplot's imshow to display the image
End of explanation
fig = plt.figure(title='Trees', padding_x=0, padding_y=0)
image = plt.imshow(ipyimage, 'widget')
fig
Explanation: Displaying the image inside a bqplot Figure
End of explanation
fig = plt.figure(padding_x=0, padding_y=0)
plt.scales(scales={'x': LinearScale(min=-1, max=2),
'y': LinearScale(min=-0.5, max=2)})
image = plt.imshow(ipyimage, format='widget')
plt.plot([0, 1, 1, 0, 0], [0, 0, 1, 1, 0], 'r')
fig
Explanation: Mixing with other marks
Image is a mark like any other, so they can be mixed and matched together.
End of explanation
# Full screen
image.x = [-1, 2]
image.y = [-.5, 2]
Explanation: Its traits (attributes) will also respond dynamically to a change from the backend
End of explanation |
12,268 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Visualize channel over epochs as images in sensor topography
This will produce what is sometimes called event related
potential / field (ERP/ERF) images.
One sensor topography plot is produced with the evoked field images from
the selected channels.
Step1: Set parameters
Step2: Show event related fields images | Python Code:
# Authors: Alexandre Gramfort <[email protected]>
# Denis Engemann <[email protected]>
#
# License: BSD (3-clause)
import matplotlib.pyplot as plt
import mne
from mne import io
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
Explanation: Visualize channel over epochs as images in sensor topography
This will produce what is sometimes called event related
potential / field (ERP/ERF) images.
One sensor topography plot is produced with the evoked field images from
the selected channels.
End of explanation
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
event_id, tmin, tmax = 1, -0.2, 0.5
# Setup for reading the raw data
raw = io.Raw(raw_fname)
events = mne.read_events(event_fname)
# Set up pick list: EEG + MEG - bad channels (modify to your needs)
raw.info['bads'] = ['MEG 2443', 'EEG 053']
picks = mne.pick_types(raw.info, meg='grad', eeg=False, stim=True, eog=True,
exclude='bads')
# Read epochs
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True,
picks=picks, baseline=(None, 0), preload=True,
reject=dict(grad=4000e-13, eog=150e-6))
Explanation: Set parameters
End of explanation
layout = mne.find_layout(epochs.info, 'meg') # use full layout
title = 'ERF images - MNE sample data'
mne.viz.plot_topo_image_epochs(epochs, layout, sigma=0.5, vmin=-200, vmax=200,
colorbar=True, title=title)
plt.show()
Explanation: Show event related fields images
End of explanation |
12,269 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to Python 2
Command-Line Programs
<section class="objectives panel panel-warning">
<div class="panel-heading">
<h2 id="learning-objectives"><span class="fa fa-certificate"></span>Learning Objectives</h2>
</div>
<div class="panel-body">
<ul>
<li>Use the values of command-line arguments in a program.</li>
<li>Handle flags and files separately in a command-line program.</li>
<li>Read data from standard input in a program so that it can be used in a pipeline.</li>
</ul>
</div>
</section>
The IPython Notebook and other interactive tools are great for prototyping code and exploring data, but sooner or later we will want to use our program in a pipeline or run it in a shell script to process thousands of data files. In order to do that, we need to make our programs work like other Unix command-line tools. For example, we may want a program that reads a dataset and prints the average inflammation per patient.
<aside class="callout panel panel-info">
<div class="panel-heading">
<h2 id="switching-to-shell-commands"><span class="fa fa-certificate"></span>Switching to Shell Commands</h2>
</div>
<div class="panel-body">
<p>In this lesson we are switching from typing commands in a Python interpreter to typing commands in a shell terminal window (such as bash). When you see a <code>$</code> in front of a command that tells you to run that command in the shell rather than the Python interpreter.</p>
</div>
</aside>
This program does exactly what we want - it prints the average inflammation per patient for a given file.
We might also want to look at the minimum of the first four lines
or the maximum inflammations in several files one after another
Step1: This function gets the name of the script from sys.argv[0], because thatโs where itโs always put, and the name of the file to process from sys.argv[1]. Hereโs a simple test
Step2: and run that
Step3: <section class="challenge panel panel-success">
<div class="panel-heading">
<h2 id="finding-particular-files"><span class="fa fa-pencil"></span>Finding particular files</h2>
</div>
<div class="panel-body">
<p>Using the <code>glob</code> module introduced earlier, write a simple version of <code>ls</code> that shows files in the current directory with a particular suffix. A call to this script should look like this
Step5: <section class="challenge panel panel-success">
<div class="panel-heading">
<h2 id="changing-flags"><span class="fa fa-pencil"></span>Changing flags</h2>
</div>
<div class="panel-body">
<p>Rewrite <code>readings.py</code> so that it uses <code>-n</code>, <code>-m</code>, and <code>-x</code> instead of <code>--min</code>, <code>--mean</code>, and <code>--max</code> respectively. Is the code easier to read? Is the program easier to understand?</p>
</div>
</section>
<section class="challenge panel panel-success">
<div class="panel-heading">
<h2 id="adding-a-help-message"><span class="fa fa-pencil"></span>Adding a help message</h2>
</div>
<div class="panel-body">
<p>Separately, modify <code>readings.py</code> so that if no parameters are given (i.e., no action is specified and no filenames are given), it prints a message explaining how it should be used.</p>
</div>
</section>
Step6: <section class="challenge panel panel-success">
<div class="panel-heading">
<h2 id="adding-a-default-action"><span class="fa fa-pencil"></span>Adding a default action</h2>
</div>
<div class="panel-body">
<p>Separately, modify <code>readings.py</code> so that if no action is given it displays the means of the data.</p>
</div>
</section>
Step7: <section class="challenge panel panel-success">
<div class="panel-heading">
<h2 id="a-file-checker"><span class="fa fa-pencil"></span>A file-checker</h2>
</div>
<div class="panel-body">
<p>Write a program called <code>check.py</code> that takes the names of one or more inflammation data files as arguments and checks that all the files have the same number of rows and columns. What is the best way to test your program?</p>
</div>
</section>
Step8: <section class="challenge panel panel-success">
<div class="panel-heading">
<h2 id="counting-lines"><span class="fa fa-pencil"></span>Counting lines</h2>
</div>
<div class="panel-body">
<p>Write a program called <code>line-count.py</code> that works like the Unix <code>wc</code> command | Python Code:
import sys
import numpy
def main():
script = sys.argv[0]
filename = sys.argv[1]
data = numpy.loadtxt(filename, delimiter=',')
for m in data.mean(axis=1):
print(m)
Explanation: Introduction to Python 2
Command-Line Programs
<section class="objectives panel panel-warning">
<div class="panel-heading">
<h2 id="learning-objectives"><span class="fa fa-certificate"></span>Learning Objectives</h2>
</div>
<div class="panel-body">
<ul>
<li>Use the values of command-line arguments in a program.</li>
<li>Handle flags and files separately in a command-line program.</li>
<li>Read data from standard input in a program so that it can be used in a pipeline.</li>
</ul>
</div>
</section>
The IPython Notebook and other interactive tools are great for prototyping code and exploring data, but sooner or later we will want to use our program in a pipeline or run it in a shell script to process thousands of data files. In order to do that, we need to make our programs work like other Unix command-line tools. For example, we may want a program that reads a dataset and prints the average inflammation per patient.
<aside class="callout panel panel-info">
<div class="panel-heading">
<h2 id="switching-to-shell-commands"><span class="fa fa-certificate"></span>Switching to Shell Commands</h2>
</div>
<div class="panel-body">
<p>In this lesson we are switching from typing commands in a Python interpreter to typing commands in a shell terminal window (such as bash). When you see a <code>$</code> in front of a command that tells you to run that command in the shell rather than the Python interpreter.</p>
</div>
</aside>
This program does exactly what we want - it prints the average inflammation per patient for a given file.
We might also want to look at the minimum of the first four lines
or the maximum inflammations in several files one after another:
Our scripts should do the following:
If no filename is given on the command line, read data from standard input.
If one or more filenames are given, read data from them and report statistics for each file separately.
Use the --min, --mean, or --max flag to determine what statistic to print.
To make this work, we need to know how to handle command-line arguments in a program, and how to get at standard input. Weโll tackle these questions in turn below.
Command-Line Arguments
Using the text editor of your choice, save the following in a text file called sys-version.py:
The first line imports a library called sys, which is short for โsystemโ. It defines values such as sys.version, which describes which version of Python we are running. We can run this script from the command line like this:
Create another file called argv-list.py and save the following text to it.
The strange name argv stands for โargument valuesโ. Whenever Python runs a program, it takes all of the values given on the command line and puts them in the list sys.argv so that the program can determine what they were. If we run this program with no arguments:
the only thing in the list is the full path to our script, which is always sys.argv[0]. If we run it with a few arguments, however:
then Python adds each of those arguments to that magic list.
With this in hand, letโs build a version of readings.py that always prints the per-patient mean of a single data file. The first step is to write a function that outlines our implementation, and a placeholder for the function that does the actual work. By convention this function is usually called main, though we can call it whatever we want:
End of explanation
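The narrative above refers to argv-list.py without reproducing it; a minimal sketch of what such a script could contain (an assumption based on the description, not taken from this document):
import sys
print('sys.argv is', sys.argv)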
import sys
import numpy
def main():
script = sys.argv[0]
filename = sys.argv[1]
data = numpy.loadtxt(filename, delimiter=',')
for m in data.mean(axis=1):
print(m)
main()
Explanation: This function gets the name of the script from sys.argv[0], because that's where it's always put, and the name of the file to process from sys.argv[1]. Here's a simple test:
There is no output because we have defined a function, but haven't actually called it. Let's add a call to main:
End of explanation
from __future__ import division, print_function
import sys
def main():
action = sys.argv[1]
number1 = int(sys.argv[2])
number2 = int(sys.argv[3])
assert action in ['add', 'subtract']
if action == 'add':
print(number1 + number2)
else:
print(number1 - number2)
main()
Explanation: and run that:
<aside class="callout panel panel-info">
<div class="panel-heading">
<h2 id="the-right-way-to-do-it"><span class="fa fa-certificate"></span>The Right Way to Do It</h2>
</div>
<div class="panel-body">
<p>If our programs can take complex parameters or multiple filenames, we shouldnโt handle <code>sys.argv</code> directly. Instead, we should use Pythonโs <code>argparse</code> library, which handles common cases in a systematic way, and also makes it easy for us to provide sensible error messages for our users.</p>
</div>
</aside>
Handling Multiple Files
The next step is to teach our program how to handle multiple files. Since 60 lines of output per file is a lot to page through, weโll start by using three smaller files, each of which has three days of data for two patients:
Using small data files as input also allows us to check our results more easily: here, for example, we can see that our program is calculating the mean correctly for each line, whereas we were really taking it on faith before. This is yet another rule of programming: test the simple things first.
We want our program to process each file separately, so we need a loop that executes once for each filename. If we specify the files on the command line, the filenames will be in sys.argv, but we need to be careful: sys.argv[0] will always be the name of our script, rather than the name of a file. We also need to handle an unknown number of filenames, since our program could be run for any number of files.
The solution to both problems is to loop over the contents of sys.argv[1:]. The โ1โ tells Python to start the slice at location 1, so the programโs name isnโt included; since weโve left off the upper bound, the slice runs to the end of the list, and includes all the filenames. Hereโs our changed program readings-03.py:
and here it is in action:
<aside class="callout panel panel-info">
<div class="panel-heading">
<h2 id="the-right-way-to-do-it-1"><span class="fa fa-certificate"></span>The Right Way to Do It</h2>
</div>
<div class="panel-body">
<p>At this point, we have created three versions of our script called <code>readings-01.py</code>, <code>readings-02.py</code>, and <code>readings-03.py</code>. We wouldnโt do this in real life: instead, we would have one file called <code>readings.py</code> that we committed to version control every time we got an enhancement working. For teaching, though, we need all the successive versions side by side.</p>
</div>
</aside>
Handling Command-Line Flags
The next step is to teach our program to pay attention to the --min, --mean, and --max flags. These always appear before the names of the files, so we could just do this:
This works:
but there are several things wrong with it:
main is too large to read comfortably.
If action isnโt one of the three recognized flags, the program loads each file but does nothing with it (because none of the branches in the conditional match). Silent failures like this are always hard to debug.
This version pulls the processing of each file out of the loop into a function of its own. It also checks that action is one of the allowed flags before doing any processing, so that the program fails fast:
This is four lines longer than its predecessor, but broken into more digestible chunks of 8 and 12 lines.
Python has a module named argparse that helps handle complex command-line flags. We will not cover this module in this lesson but you can go to Tshepang Lekhonkhobe's Argparse tutorial that is part of Python's Official Documentation.
Handling Standard Input
The next thing our program has to do is read data from standard input if no filenames are given so that we can put it in a pipeline, redirect input to it, and so on. Letโs experiment in another script called count-stdin.py:
This little program reads lines from a special โfileโ called sys.stdin, which is automatically connected to the programโs standard input. We donโt have to open it โ Python and the operating system take care of that when the program starts up โ but we can do almost anything with it that we could do to a regular file. Letโs try running it as if it were a regular command-line program:
A common mistake is to try to run something that reads from standard input like this:
i.e., to forget the < character that redirects the file to standard input. In this case, thereโs nothing in standard input, so the program waits at the start of the loop for someone to type something on the keyboard. Since thereโs no way for us to do this, our program is stuck, and we have to halt it using the Interrupt option from the Kernel menu in the Notebook.
We now need to rewrite the program so that it loads data from sys.stdin if no filenames are provided. Luckily, numpy.loadtxt can handle either a filename or an open file as its first parameter, so we donโt actually need to change process. That leaves main:
Letโs try it out:
Thatโs better. In fact, thatโs done: the program now does everything we set out to do.
<section class="challenge panel panel-success">
<div class="panel-heading">
<h2 id="arithmetic-on-the-command-line"><span class="fa fa-pencil"></span>Arithmetic on the command line</h2>
</div>
<div class="panel-body">
<p>Write a command-line program that does addition and subtraction:</p>
<pre class="sourceCode python"><code class="sourceCode python">$ python arith.py add <span class="dv">1</span> <span class="dv">2</span></code></pre>
<pre class="output"><code>3</code></pre>
<pre class="sourceCode python"><code class="sourceCode python">$ python arith.py subtract <span class="dv">3</span> <span class="dv">4</span></code></pre>
<pre class="output"><code>-1</code></pre>
</div>
</section>
End of explanation
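The callout above recommends argparse for anything beyond trivial flag handling; a minimal hedged sketch of what a readings.py front end could look like with it (not part of the original lesson):
import argparse
parser = argparse.ArgumentParser(description='Report per-patient inflammation statistics.')
parser.add_argument('--min', dest='action', action='store_const', const='--min')
parser.add_argument('--mean', dest='action', action='store_const', const='--mean')
parser.add_argument('--max', dest='action', action='store_const', const='--max')
parser.add_argument('filenames', nargs='*', help='zero or more CSV files; standard input is used if none are given')
args = parser.parse_args()
print(args.action, args.filenames)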
from __future__ import division, print_function
import sys
import glob
def main():
suffix = sys.argv[1]
files = glob.glob('*.' + suffix)
for file in files:
print(file)
main()
Explanation: <section class="challenge panel panel-success">
<div class="panel-heading">
<h2 id="finding-particular-files"><span class="fa fa-pencil"></span>Finding particular files</h2>
</div>
<div class="panel-body">
<p>Using the <code>glob</code> module introduced earlier, write a simple version of <code>ls</code> that shows files in the current directory with a particular suffix. A call to this script should look like this:</p>
<pre class="sourceCode python"><code class="sourceCode python">$ python my_ls.py py</code></pre>
<pre class="output"><code>left.py
right.py
zero.py</code></pre>
</div>
</section>
End of explanation
from __future__ import division, print_function
import sys
import numpy
def main():
if len(sys.argv) < 2:
        print("""This program should be called with an action and a filename or list of filenames, like so:
$ python readings.py [action] [filename(s)]
In the above, action is either '--min', '--mean', or '--max'.
filename(s) is a filename or several filenames, or a file in standard input.""")
        return
script = sys.argv[0]
action = sys.argv[1]
filenames = sys.argv[2:]
assert action in ['--min', '--mean', '--max'], \
'Action is not one of --min, --mean, or --max: ' + action
if len(filenames) == 0:
process(sys.stdin, action)
else:
for f in filenames:
process(f, action)
def process(filename, action):
data = numpy.loadtxt(filename, delimiter=',')
if action == '--min':
values = data.min(axis=1)
elif action == '--mean':
values = data.mean(axis=1)
elif action == '--max':
values = data.max(axis=1)
for m in values:
print(m)
main()
Explanation: <section class="challenge panel panel-success">
<div class="panel-heading">
<h2 id="changing-flags"><span class="fa fa-pencil"></span>Changing flags</h2>
</div>
<div class="panel-body">
<p>Rewrite <code>readings.py</code> so that it uses <code>-n</code>, <code>-m</code>, and <code>-x</code> instead of <code>--min</code>, <code>--mean</code>, and <code>--max</code> respectively. Is the code easier to read? Is the program easier to understand?</p>
</div>
</section>
<section class="challenge panel panel-success">
<div class="panel-heading">
<h2 id="adding-a-help-message"><span class="fa fa-pencil"></span>Adding a help message</h2>
</div>
<div class="panel-body">
<p>Separately, modify <code>readings.py</code> so that if no parameters are given (i.e., no action is specified and no filenames are given), it prints a message explaining how it should be used.</p>
</div>
</section>
End of explanation
def main():
script = sys.argv[0]
if len(sys.argv) > 1:
action = sys.argv[1]
else:
action = '--mean'
filenames = sys.argv[2:]
assert action in ['--min', '--mean', '--max'], \
'Action is not one of --min, --mean, or --max: ' + action
if len(filenames) == 0:
process(sys.stdin, action)
else:
for f in filenames:
process(f, action)
Explanation: <section class="challenge panel panel-success">
<div class="panel-heading">
<h2 id="adding-a-default-action"><span class="fa fa-pencil"></span>Adding a default action</h2>
</div>
<div class="panel-body">
<p>Separately, modify <code>readings.py</code> so that if no action is given it displays the means of the data.</p>
</div>
</section>
End of explanation
from __future__ import division, print_function
import sys
import numpy as np
def main():
script = sys.argv[0]
filenames = sys.argv[1:]
    print(filenames[0])
shape0 = np.loadtxt(filenames[0]).shape
for file in filenames[1:]:
assert np.loadtxt(file).shape == shape0, 'Shape of {} does not match'.format(file)
main()
from __future__ import division, print_function
import sys
import numpy as np
def main():
script = sys.argv[0]
filenames = sys.argv[1:]
shape0 = np.loadtxt(filenames[0], delimiter=',').shape
for file in filenames[1:]:
assert np.loadtxt(file, delimiter=',').shape == shape0, 'Shape of {} does not match'.format(file)
main()
Explanation: <section class="challenge panel panel-success">
<div class="panel-heading">
<h2 id="a-file-checker"><span class="fa fa-pencil"></span>A file-checker</h2>
</div>
<div class="panel-body">
<p>Write a program called <code>check.py</code> that takes the names of one or more inflammation data files as arguments and checks that all the files have the same number of rows and columns. What is the best way to test your program?</p>
</div>
</section>
End of explanation
import sys
def main():
if len(sys.argv) < 2:
count = 0
for line in sys.stdin:
count += 1
print(count, 'lines in standard input')
else:
total_lines = 0
filenames = sys.argv[1:]
for file in filenames:
            file_contents = open(file).readlines()
            print('Lines in', file, ':', len(file_contents))
            total_lines += len(file_contents)
print('Total number of lines:', total_lines)
Explanation: <section class="challenge panel panel-success">
<div class="panel-heading">
<h2 id="counting-lines"><span class="fa fa-pencil"></span>Counting lines</h2>
</div>
<div class="panel-body">
<p>Write a program called <code>line-count.py</code> that works like the Unix <code>wc</code> command:</p>
<ul>
<li>If no filenames are given, it reports the number of lines in standard input.</li>
<li>If one or more filenames are given, it reports the number of lines in each, followed by the total number of lines.</li>
</ul>
</div>
</section>
End of explanation |
12,270 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Image features exercise
Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the assignments page on the course website.
We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels.
All of your work for this exercise will be done in this notebook.
Step1: Load data
Similar to previous exercises, we will load CIFAR-10 data from disk.
Step2: Extract Features
For each image we will compute a Histogram of Oriented
Gradients (HOG) as well as a color histogram using the hue channel in HSV
color space. We form our final feature vector for each image by concatenating
the HOG and color histogram feature vectors.
Roughly speaking, HOG should capture the texture of the image while ignoring
color information, and the color histogram represents the color of the input
image while ignoring texture. As a result, we expect that using both together
ought to work better than using either alone. Verifying this assumption would
be a good thing to try for the bonus section.
The hog_feature and color_histogram_hsv functions both operate on a single
image and return a feature vector for that image. The extract_features
function takes a set of images and a list of feature functions and evaluates
each feature function on each image, storing the results in a matrix where
each column is the concatenation of all feature vectors for a single image.
Step3: Train SVM on features
Using the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
Step4: Inline question 1 | Python Code:
from __future__ import print_function
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
Explanation: Image features exercise
Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the assignments page on the course website.
We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels.
All of your work for this exercise will be done in this notebook.
End of explanation
from cs231n.features import color_histogram_hsv, hog_feature
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = list(range(num_training, num_training + num_validation))
X_val = X_train[mask]
y_val = y_train[mask]
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
return X_train, y_train, X_val, y_val, X_test, y_test
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
Explanation: Load data
Similar to previous exercises, we will load CIFAR-10 data from disk.
End of explanation
from cs231n.features import *
num_color_bins = 10 # Number of bins in the color histogram
feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]
X_train_feats = extract_features(X_train, feature_fns, verbose=True)
X_val_feats = extract_features(X_val, feature_fns)
X_test_feats = extract_features(X_test, feature_fns)
# Preprocessing: Subtract the mean feature
mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
X_train_feats -= mean_feat
X_val_feats -= mean_feat
X_test_feats -= mean_feat
# Preprocessing: Divide by standard deviation. This ensures that each feature
# has roughly the same scale.
std_feat = np.std(X_train_feats, axis=0, keepdims=True)
X_train_feats /= std_feat
X_val_feats /= std_feat
X_test_feats /= std_feat
# Preprocessing: Add a bias dimension
X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])
X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])
X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])
Explanation: Extract Features
For each image we will compute a Histogram of Oriented
Gradients (HOG) as well as a color histogram using the hue channel in HSV
color space. We form our final feature vector for each image by concatenating
the HOG and color histogram feature vectors.
Roughly speaking, HOG should capture the texture of the image while ignoring
color information, and the color histogram represents the color of the input
image while ignoring texture. As a result, we expect that using both together
ought to work better than using either alone. Verifying this assumption would
be a good thing to try for the bonus section.
The hog_feature and color_histogram_hsv functions both operate on a single
image and return a feature vector for that image. The extract_features
function takes a set of images and a list of feature functions and evaluates
each feature function on each image, storing the results in a matrix where
each column is the concatenation of all feature vectors for a single image.
End of explanation
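As a small hedged illustration of the extract_features interface described above (names reused from earlier cells; the HOG-only comparison is not part of the assignment):
X_train_hog = extract_features(X_train, [hog_feature], verbose=True)
print(X_train_hog.shape)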
# Use the validation set to tune the learning rate and regularization strength
from cs231n.classifiers.linear_classifier import LinearSVM
learning_rates = [1e-9, 1e-8, 1e-7]
regularization_strengths = [5e4, 5e5, 5e6]
#learning_rates = list(map(lambda x: x*1e-9, np.arange(0.9, 2, 0.1)))
#regularization_strengths = list(map(lambda x: x*1e4, np.arange(1, 10)))
results = {}
best_val = -1
best_svm = None
iters = 2000
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained classifer in best_svm. You might also want to play #
# with different numbers of bins in the color histogram. If you are careful #
# you should be able to get accuracy of near 0.44 on the validation set. #
################################################################################
for lr in learning_rates:
for reg in regularization_strengths:
print('Training with lr={0}, reg={1}'.format(lr, reg))
svm = LinearSVM()
loss_hist = svm.train(X_train_feats, y_train, learning_rate=lr, reg=reg, num_iters=iters)
y_train_pred = svm.predict(X_train_feats)
y_val_pred = svm.predict(X_val_feats)
train_accuracy = np.mean(y_train == y_train_pred)
validation_accuracy = np.mean(y_val == y_val_pred)
if validation_accuracy > best_val:
best_val = validation_accuracy
best_svm = svm
results[(lr, reg)] = (validation_accuracy, train_accuracy)
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
# Evaluate your trained SVM on the test set
y_test_pred = best_svm.predict(X_test_feats)
test_accuracy = np.mean(y_test == y_test_pred)
print(test_accuracy)
# An important way to gain intuition about how an algorithm works is to
# visualize the mistakes that it makes. In this visualization, we show examples
# of images that are misclassified by our current system. The first column
# shows images that our system labeled as "plane" but whose true label is
# something other than "plane".
examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
idxs = np.random.choice(idxs, examples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
plt.imshow(X_test[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls_name)
plt.show()
Explanation: Train SVM on features
Using the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
End of explanation
print(X_train_feats.shape)
from cs231n.classifiers.neural_net import TwoLayerNet
input_dim = X_train_feats.shape[1]
hidden_dim = 500
num_classes = 10
best_net = None
################################################################################
# TODO: Train a two-layer neural network on image features. You may want to #
# cross-validate various parameters as in previous sections. Store your best #
# model in the best_net variable. #
################################################################################
learning_rates = np.arange(0.1, 1.6, 0.1)
regularization_params = [1e-5, 1e-4, 1e-3, 1e-2, 1e-1, 1]
results = {}
best_val_accuracy = 0
for lr in learning_rates:
for reg in regularization_params:
net = TwoLayerNet(input_dim, hidden_dim, num_classes)
stats = net.train(X_train_feats, y_train, X_val_feats, y_val, num_iters=2000, batch_size=200,
learning_rate=lr, learning_rate_decay=0.95, reg=reg)
val_accuracy = (net.predict(X_val_feats) == y_val).mean()
if val_accuracy > best_val_accuracy:
best_val_accuracy = val_accuracy
best_net = net
print('LR: {0} REG: {1} ACC: {2}'.format(lr, reg, val_accuracy))
print('best validation accuracy achieved during cross-validation: {0}'.format(best_val_accuracy))
net = best_net
################################################################################
# END OF YOUR CODE #
################################################################################
# Run your neural net classifier on the test set. You should be able to
# get more than 55% accuracy.
test_acc = (best_net.predict(X_test_feats) == y_test).mean()
print(test_acc)
Explanation: Inline question 1:
Describe the misclassification results that you see. Do they make sense?
It makes sense given that we are using color-histogram features, so for some results the background colour appears to drive the prediction. For example, images with a blue or flat background get labelled as "plane", trucks are confused with cars (similar street and background), and vice versa.
Neural Network on image features
Earlier in this assignment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels.
For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.
End of explanation |
12,271 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
.. _tut_intro_pyton
Step1: If you come from a background of matlab, remember that indexing in python
starts from zero | Python Code:
a = 3
print(type(a))
b = [1, 2.5, 'This is a string']
print(type(b))
c = 'Hello world!'
print(type(c))
Explanation: .. _tut_intro_pyton:
Introduction to Python
Python is a modern, general-purpose, object-oriented, high-level programming
language. First make sure you have a working python environment and
dependencies (see :ref:install_python_and_mne_python). If you are
completely new to python, don't worry, it's just like any other programming
language, only easier. Here are a few great resources to get you started:
SciPy lectures <http://scipy-lectures.github.io>_
Learn X in Y minutes: Python <https://learnxinyminutes.com/docs/python/>_
NumPy for MATLAB users <https://docs.scipy.org/doc/numpy-dev/user/numpy-for-matlab-users.html>_ # noqa
We highly recommend watching the Scipy videos and reading through these
sites to get a sense of how scientific computing is done in Python.
Here are few bulletin points to familiarise yourself with python:
Everything is dynamically typed. No need to declare simple data
structures or variables separately.
End of explanation
a = [1, 2, 3, 4]
print('This is the zeroth value in the list: {}'.format(a[0]))
Explanation: If you come from a background of matlab, remember that indexing in python
starts from zero:
End of explanation |
12,272 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Two particle equilibrium
If you haven't read the One particle equilibrium notebook yet, go and read it now.
In the previous notebook we showed that we can use Magpy to compute the correct thermal equilibrium for a single particle. However, we also need to check that the interactions are correctly implemented by simulating the thermal equilibrium of multiple interacting particles.
In this notebook we'll simulate an ensemble of two particle systems with Magpy. Instead of computing the distribution analytically, we will use the Metropolis Markov-Chain Monte-Carlo technique to generate the correct equilibrium.
Acknowledgements
Many thanks to Jonathon Waters for the terse python implementation of the Metropolis algorithm!
Problem setup
In this example the system comprises two identical particles separated by a distance $R$. The particles have their anisotropy axes in the same direction. We are interested in the following four variables
Step1: Metropolis MCMC
Energy terms
Anisotropy
The energy contribution from the anisotropy of a single particle $i$ is
Step2: Dipolar interaction energy
The energy contribution from $N$ particles $j=1,2,\dots,N$ interacting with a single particle $i$
Step3: Total energy
The total energy contribution from a single particle in the ensemble is
Step4: The Monte-Carlo algorithm
Initialise each spin in the system
Randomly choose a particle in the system and change its orientation
Compute $\Delta E$, the change in total energy arising from changing the particle's orientation
if
$\Delta E<0$ then we accept the new state and store it
$\Delta E>0$ we accept the new state and store it with probability $p=e^{-\Delta E/(k_BT)}$
otherwise we reject the new state
Return to 2 until desired number of samples
Once we run this loop many times, we'll have a list of accepted samples of the system state. The distribution of this ensemble of states is guaranteed to converge to the true distribution. Monte-Carlo is much faster than numerical integration methods when we have many particles.
Step5: Parameter set up
Now we set the parameters for the two particle system. Both particles are identical and have their anisotropy axes aligned with the $z$ direction.
Step6: Run the MCMC sampler!
This will take some time
Step7: Magpy - Dynamical Simulation
We now use Magpy to simulate a large ensemble of the identical two-particle system. Once the ensemble has reached a stationary distribution, we determine the distribution of magnetisation angles over the ensemble. We expect this distribution to match the equilibrium distribution determined by the MCMC sampler.
Define the Magpy model
Step8: Simulate the ensemble!
Now we run the dynamical simulation using an implicit solver. Each model is simulated for 1ns.
Step9: Compute the final state
We use the Results.final_state() function to determine the state of each member of the ensemble after 1ns of simulation. The magnetisation angle is computed as the cosine of the $z$-axis component of magnetisation.
Step10: Compare results
Single variable comparison
Below we compare the magnetisation angle distribution for a single particle as simulated with Magpy and the MCMC algorithm.
Step11: The results look to be a good match!
Joint distribution comparison
Below we compare the joint distribution of $\theta_0$ and $\theta_1$ (the magnetisation angle of both particles). In other words, this is the probability distribution over the entire state space. It is important to compare the joint distributions because the two particles interact with one another, creating a dependence between the two magnetisation angles.
Step12: Alternatively compare using a kernel density function
An alternative method to visually compare the two distributions is to construct a kernel density estimate from one set of results and overlay it on a histogram of the other.
Step13: Sanity check | Python Code:
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import axes3d
from tqdm import tqdm_notebook
#import tqdm
import magpy as mp
%matplotlib inline
Explanation: Two particle equilibrium
If you haven't read the One particle equilibrium notebook yet, go and read it now.
In the previous notebook we showed that we can use Magpy to compute the correct thermal equilibrium for a single particle. However, we also need to check that the interactions are correctly implemented by simulating the thermal equilibrium of multiple interacting particles.
In this notebook we'll simulate an ensemble of two particle systems with Magpy. Instead of computing the distribution analytically, we will use the Metropolis Markov-Chain Monte-Carlo technique to generate the correct equilibrium.
Acknowledgements
Many thanks to Jonathon Waters for the terse python implementation of the Metropolis algorithm!
Problem setup
In this example the system comprises two identical particles separated by a distance $R$. The particles have their anisotropy axes in the same direction. We are interested in the following four variables: the angle between the particles' moments and the anisotropy axis, $\theta_1,\theta_2$, and the rotational (azimuth) angle of the particles around the anisotropy axis, $\phi_1,\phi_2$.
Modules
End of explanation
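# Small helper (added for illustration; not part of the original notebook):
# recover the (theta, phi) angles defined in the problem setup from a Cartesian
# moment vector, assuming the anisotropy axis is z and the moment has unit length.
def moment_to_angles(m):
    theta = np.arccos(m[2])       # angle between the moment and the anisotropy (z) axis
    phi = np.arctan2(m[1], m[0])  # azimuth angle around the anisotropy axis
    return theta, phi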
def e_anisotropy(moments, anisotropy_axes, V, K, particle_id):
cos_t = np.sum(moments[particle_id, :]*anisotropy_axes[particle_id, :])
return -K*V*cos_t**2
Explanation: Metropolis MCMC
Energy terms
Anisotropy
The energy contribution from the uniaxial anisotropy of a single particle $i$, with anisotropy axis $\hat{n}_i$, is:
$$E^a_i=-K_iV_i\left(\vec{m}_i\cdot\hat{n}_i\right)^2$$
End of explanation
def e_dipole(moments, positions, Ms, V, particle_id):
mu_0 = mp.core.get_mu0()
mask = np.ones(moments.shape[0], dtype=bool)
mask[particle_id] = False
rs = positions[mask]-positions[particle_id, :]
mod_rs = np.linalg.norm(rs, axis=1)
rs[:, 0] = rs[:, 0] / mod_rs
rs[:, 1] = rs[:, 1] / mod_rs
rs[:, 2] = rs[:, 2] / mod_rs
m1_m2 = np.sum(moments[particle_id, :]*moments[mask], axis=1)
m1_r = np.sum(moments[particle_id, :]*rs, axis=1)
m2_r = np.sum(moments[mask]*rs, axis=1)
numer = (V**2)*(Ms**2)*mu_0*(3*m1_r*m2_r - m1_m2)
denom = 4*np.pi*np.power(mod_rs, 3)
return -np.sum(numer/denom)
Explanation: Dipolar interaction energy
The energy contribution from $N$ particles $j=1,2,\dots,N$ interacting with a single particle $i$, where $\hat{r}_{ij}$ is the unit vector from particle $i$ to particle $j$, is:
$$E^d_{i} = -\sum_j\frac{\mu_0 V_i V_j M_s^2 \left(3 (\vec{m}_i\cdot\hat{r}_{ij})(\vec{m}_j\cdot\hat{r}_{ij}) - \vec{m}_i\cdot\vec{m}_j\right)}{4\pi\left|\vec{r}_{ij}\right|^3}$$
End of explanation
def e_total(moments, positions, anisotropy_axes, Ms, V, K, particle_id):
return (
e_dipole(moments, positions, Ms, V, particle_id)
+ e_anisotropy(moments, anisotropy_axes, V, K, particle_id)
)
Explanation: Total energy
The total energy contribution from a single particle in the ensemble is:
$$E_i=E^a_i+E^d_i$$
End of explanation
def sphere_point():
theta = 2*np.pi*np.random.rand()
phi = np.arccos(1-2*np.random.rand())
return np.array([np.sin(phi)*np.cos(theta), np.sin(phi)*np.sin(theta), np.cos(phi)])
def MH(positions, ani_axis, spins, Neq, Nsamps, SampRate, Ms, V, K, T, seed=42):
np.random.seed(seed)
k_b = mp.core.get_KB()
test = np.copy(spins)
Ntot = Neq+Nsamps*SampRate
Out = np.zeros([spins.shape[0], spins.shape[1], Nsamps])
ns = 0
for n in tqdm_notebook(range(Ntot)):
# pick a random spin
i = int(np.random.rand(1)*positions.shape[0])
# pick a random dir
test[i, :] = sphere_point()
dE = e_total(test, positions, ani_axis, Ms, V, K, i) - \
e_total(moments, positions, ani_axis, Ms, V, K, i)
if(np.random.rand(1) < np.exp(-dE/(k_b*T))):
spins[i, :] = test[i, :]
else:
test[i, :] = spins[i, :]
if (n >= Neq and (n-Neq)%SampRate == 0):
Out[:, :, ns] = np.copy(spins)
ns += 1
return Out
Explanation: The Monte-Carlo algorithm
Initialise each spin in the system
Randomly choose a particle in the system and change its orientation
Compute $\Delta E$, the change in total energy arising from changing the particle's orientation
if
$\Delta E<0$ then we accept the new state and store it
$\Delta E>0$ we accept the new state and store it with probability $p=e^{-\Delta E/(k_BT)}$
otherwise we reject the new state
Return to 2 until desired number of samples
Once we run this loop many times, we'll have a list of accepted samples of the system state. The distribution of this ensemble of states is guaranteed to converge to the true distribution. Monte-Carlo is much faster than numerical integration methods when we have many particles.
End of explanation
N = 2 # Two particles
T = 330 # temperature
K = 1e5 # anisotropy strength
R = 9e-9 # distance between two particles
r = 7e-9 # radius of the particles
V = 4./3 * np.pi * r**3 # volume of particle
Ms = 4e5 # saturation magnetisation
# particle 1 particle 2
positions = np.array([[0., 0., 0.], [0., 0., R]])
moments = np.array([sphere_point(), sphere_point()])
anisotropy_axes = np.array([[0., 0., 1.], [0., 0., 1.]])
Explanation: Parameter set up
Now we set the parameters for the two particle system. Both particles are identical and have their anisotropy axes aligned with the $z$ direction.
End of explanation
output = MH(positions, anisotropy_axes, moments, 100000, 600000, 20, Ms, V, K, T, 0)
thetas = np.arccos(output[:, 2, :])
plt.hist(thetas[0], bins=50, normed=True)
plt.title('Magnetisation angle histogram (MCMC)')
plt.xlabel('Magnetisation angle $\\theta$ rads')
plt.ylabel('Probability $p(\\theta)$');
Explanation: Run the MCMC sampler!
This will take some time
End of explanation
# additionally we must specify damping
alpha = 0.1
# We build a model of the two particles
base_model = mp.Model(
anisotropy=[K,K],
anisotropy_axis=anisotropy_axes,
damping=alpha,
location=positions,
magnetisation=Ms,
magnetisation_direction=moments,
radius=[r, r],
temperature=T
)
# Create an ensemble of 50,000 identical models
ensemble = mp.EnsembleModel(50000, base_model)
Explanation: Magpy - Dynamical Simulation
We now use Magpy to simulate a large ensemble of the identical two-particle system. Once the ensemble has reached a stationary distribution, we determine the distribution of magnetisation angles over the ensemble. We expect this distribution to match the equilibrium distribution determined by the MCMC sampler.
Define the Magpy model
End of explanation
res = ensemble.simulate(end_time=1e-9, time_step=1e-12,
max_samples=500, random_state=1002,
n_jobs=-1, implicit_solve=True,
interactions=True)
Explanation: Simulate the ensemble!
Now we run the dynamical simulation using an implicit solver. Each model is simulated for 1ns.
End of explanation
m_z0 = np.array([state['z'][0] for state in res.final_state()])/Ms
m_z1 = np.array([state['z'][1] for state in res.final_state()])/Ms
theta0 = np.arccos(m_z0)
theta1 = np.arccos(m_z1)
Explanation: Compute the final state
We use the Results.final_state() function to determine the state of each member of the ensemble after 1ns of simulation. The magnetisation angle is computed by taking the arccosine of the $z$-component of the normalised magnetisation.
End of explanation
plt.hist(theta0, bins=50, alpha=0.5, normed=True, label='magpy')
plt.hist(thetas[0], bins=50, alpha=0.5, normed=True, label='MCMC')
plt.legend();
plt.xlabel('Magnetisation angle $\\theta$ (rads)')
plt.ylabel('Probability $p(\\theta)$');
Explanation: Compare results
Single variable comparison
Below we compare the magnetisation angle distribution for a single particle as simulated with Magpy and the MCMC algorithm.
End of explanation
fg, axs = plt.subplots(ncols=2, figsize=(11,4), sharey=True)
histdat = axs[0].hist2d(theta0, theta1, bins=16, normed=True)
axs[1].hist2d(thetas[0], thetas[1], bins=histdat[1], normed=True);
for ax, title in zip(axs, ['Magpy', 'MCMC']):
ax.set_xlabel('Magnetisation angle $\\theta_0$')
ax.set_ylabel('Magnetisation angle $\\theta_1$')
ax.set_title(title)
fg.colorbar(histdat[3], ax=axs.tolist());
Explanation: The results look to be a good match!
Joint distribution comparison
Below we compare the joint distribution of $\theta_0$ and $\theta_1$ (the magnetisation angle of both particles). In other words, this is the probability distribution over the entire state space. It is important to compare the joint distributions because the two particles interact with one another, creating a dependence between the two magnetisation angles.
End of explanation
from scipy.stats import gaussian_kde
kde = gaussian_kde(thetas)
tgrid_x = np.linspace(theta0.min(), theta0.max(), 16)
tgrid_y = np.linspace(theta1.min(), theta1.max(), 16)
tgrid_x, tgrid_y = np.meshgrid(tgrid_x, tgrid_y)
Z = np.reshape(kde(np.vstack([tgrid_x.ravel(), tgrid_y.ravel()])).T, tgrid_x.shape)
fg, ax = plt.subplots(figsize=(9,5))
hist = ax.hist2d(theta0, theta1, bins=16, normed=True)
contour = ax.contour(tgrid_x, tgrid_y, Z, cmap='hot_r')
fg.colorbar(contour, label='MCMC')
fg.colorbar(hist[3], label='Magpy')
ax.set_xlabel('Magnetisation angle $\\theta_0$')
ax.set_ylabel('Magnetisation angle $\\theta_1$');
Explanation: Alternatively compare using a kernel density function
An alternative method to visually compare the two distributions is to construct a kernel density estimate from one set of results and overlay it on a histogram of the other.
End of explanation
res_noi = ensemble.simulate(end_time=1e-9, time_step=1e-12,
max_samples=500, random_state=1002,
n_jobs=-1, implicit_solve=True,
interactions=False)
m_z0 = np.array([state['z'][0] for state in res_noi.final_state()])/Ms
m_z1 = np.array([state['z'][1] for state in res_noi.final_state()])/Ms
theta0_noi = np.arccos(m_z0)
theta1_noi = np.arccos(m_z1)
plt.hist(theta0, bins=50, normed=True, alpha=0.4, label='Magpy')
plt.hist(theta0_noi, bins=50, normed=True, alpha=0.4, label='Magpy (no inter.)');
plt.hist(thetas[0], bins=50, histtype='step', lw=2, normed=True, alpha=0.4, label='MCMC')
plt.legend();
plt.xlabel('Magnetisation angle $\\theta_0$ rads')
plt.ylabel('Probability $p(\\theta_0)$');
plt.title('Comparison of $\\theta_0$ distribution');
fg, ax = plt.subplots(figsize=(9,5))
hist = ax.hist2d(theta0_noi, theta1_noi, bins=16, normed=True)
contour = ax.contour(tgrid_x, tgrid_y, Z, cmap='hot_r')
fg.colorbar(contour, label='MCMC')
fg.colorbar(hist[3], label='Magpy')
ax.set_xlabel('Magnetisation angle $\\theta_0$')
ax.set_ylabel('Magnetisation angle $\\theta_1$');
Explanation: Sanity check: no interactions
To ensure that the interactions are having a significant effect on the joint distribution, we simulate the same system but with the interactions disabled (simply set interactions=False).
End of explanation |
12,273 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data in Quilt is organized in terms of data packages. A data package is a logical group of files, directories, and metadata.
Initializing a package
To edit a new empty package, use the package constructor
Step1: To edit a preexisting package, we need to first make sure to install the package
Step2: Use browse to edit the package
Step3: For more information on accessing existing packages see the section "Installing a Package".
Adding data to a package
Use the set and set_dir commands to add individual files and whole directories, respectively, to a Package
Step4: The first parameter to these functions is the logical key, which will determine where the file lives within the package. So after running the commands above our package will look like this
Step5: The second parameter is the physical key, which states the file's actual location. The physical key may point to either a local file or a remote object (with an s3
Step6: Another useful trick. Use "." to set the contents of the package to that of the current directory
Step7: Deleting data in a package
Use delete to remove entries from a package
Step8: Note that this will only remove this piece of data from the package. It will not delete the actual data itself.
Adding metadata to a package
Packages support metadata anywhere in the package. To set metadata on package entries or directories, use the meta argument
Step9: You can also set metadata on the package as a whole using set_meta. | Python Code:
import quilt3
p = quilt3.Package()
Explanation: Data in Quilt is organized in terms of data packages. A data package is a logical group of files, directories, and metadata.
Initializing a package
To edit a new empty package, use the package constructor:
End of explanation
quilt3.Package.install(
"examples/hurdat",
"s3://quilt-example",
)
Explanation: To edit a preexisting package, we need to first make sure to install the package:
End of explanation
p = quilt3.Package.browse('examples/hurdat')
Explanation: Use browse to edit the package:
End of explanation
# add entries individually using `set`
# ie p.set("foo.csv", "/local/path/foo.csv"),
# p.set("bar.csv", "s3://bucket/path/bar.csv")
# create test data
with open("data.csv", "w") as f:
f.write("id, value\na, 42")
p = quilt3.Package()
p.set("data.csv", "data.csv")
p.set("banner.png", "s3://quilt-example/imgs/banner.png")
# or grab everything in a directory at once using `set_dir`
# ie p.set_dir("stuff/", "/path/to/stuff/"),
# p.set_dir("things/", "s3://path/to/things/")
# create test directory
import os
os.mkdir("data")
p.set_dir("stuff/", "./data/")
p.set_dir("imgs/", "s3://quilt-example/imgs/")
Explanation: For more information on accessing existing packages see the section "Installing a Package".
Adding data to a package
Use the set and set_dir commands to add individual files and whole directories, respectively, to a Package:
End of explanation
p
Explanation: The first parameter to these functions is the logical key, which will determine where the file lives within the package. So after running the commands above our package will look like this:
End of explanation
# assuming data.csv is in the current directory
p = quilt3.Package()
p.set("data.csv")
Explanation: The second parameter is the physical key, which states the file's actual location. The physical key may point to either a local file or a remote object (with an s3:// path).
If the physical key and the logical key are the same, you may omit the second argument:
End of explanation
# switch to a test directory and create some test files
import os
%cd data/
os.mkdir("stuff")
with open("new_data.csv", "w") as f:
f.write("id, value\na, 42")
# set the contents of the package to that of the current directory
p.set_dir(".", ".")
Explanation: Another useful trick. Use "." to set the contents of the package to that of the current directory:
End of explanation
p.delete("data.csv")
Explanation: Deleting data in a package
Use delete to remove entries from a package:
End of explanation
p = quilt3.Package()
p.set("data.csv", "new_data.csv", meta={"type": "csv"})
p.set_dir("stuff/", "stuff/", meta={"origin": "unknown"})
Explanation: Note that this will only remove this piece of data from the package. It will not delete the actual data itself.
Adding metadata to a package
Packages support metadata anywhere in the package. To set metadata on package entries or directories, use the meta argument:
End of explanation
# set metadata on a package
p.set_meta({"package-type": "demo"})
Explanation: You can also set metadata on the package as a whole using set_meta.
End of explanation |
12,274 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
RNNs tutorial
Step1: An LSTM/RNN overview
Step2: Note that when we create the builder, it adds the internal RNN parameters to the ParameterCollection.
We do not need to care about them, but they will be optimized together with the rest of the network's parameters.
Step3: If our LSTM/RNN was one layer deep, y2 would be equal to the hidden state. However, since it is 2 layers deep, y2 is only the hidden state (= output) of the last layer.
If we were to want access to the all the hidden state (the output of both the first and the last layers), we could use the .h() method, which returns a list of expressions, one for each layer
Step4: The same interface that we saw until now for the LSTM, holds also for the Simple RNN
Step5: To summarize, when calling .add_input(x) on an RNNState what happens is that the state creates a new RNN/LSTM column, passing it
Step6: As we can see, the LSTM has two extra state expressions (one for each hidden layer) before the outputs h.
Extra options in the RNN/LSTM interface
Stack LSTM The RNN's are shaped as a stack
Step7: Aside
Step8: This is convenient.
What if we do not care about .s() and .h(), and do not need to access the previous vectors? In such cases
we can use the transduce(xs) method instead of add_inputs(xs).
transduce takes in a sequence of Expressions, and returns a sequence of Expressions.
As a consequence of not returning RNNStates, transduce is much more memory efficient than add_inputs or a series of calls to add_input.
Step9: Character-level LSTM
Now that we know the basics of RNNs, let's build a character-level LSTM language-model.
We have a sequence LSTM that, at each step, gets as input a character, and needs to predict the next character.
Step10: Notice that
Step11: The model seem to learn the sentence quite well.
Somewhat surprisingly, the Simple-RNN model learn quicker than the LSTM!
How can that be?
The answer is that we are cheating a bit. The sentence we are trying to learn
has each letter-bigram exactly once. This means a simple trigram model can memorize
it very well.
Try it out with more complex sequences. | Python Code:
# we assume that we have the dynet module in your path.
# OUTDATED: we also assume that LD_LIBRARY_PATH includes a pointer to where libcnn_shared.so is.
import dynet as dy
Explanation: RNNs tutorial
End of explanation
pc = dy.ParameterCollection()
NUM_LAYERS=2
INPUT_DIM=50
HIDDEN_DIM=10
builder = dy.LSTMBuilder(NUM_LAYERS, INPUT_DIM, HIDDEN_DIM, pc)
# or:
# builder = dy.SimpleRNNBuilder(NUM_LAYERS, INPUT_DIM, HIDDEN_DIM, pc)
Explanation: An LSTM/RNN overview:
A (1-layer) RNN can be thought of as a sequence of cells, $h_1,...,h_k$, where $h_i$ indicates the time dimension.
Each cell $h_i$ has an input $x_i$ and an output $r_i$. In addition to $x_i$, cell $h_i$ also receives $r_{i-1}$ as input.
In a deep (multi-layer) RNN, we don't have a sequence, but a grid. That is we have several layers of sequences:
$h_1^3,...,h_k^3$
$h_1^2,...,h_k^2$
$h_1^1,...h_k^1$,
Let $r_i^j$ be the output of cell $h_i^j$. Then:
The input to $h_i^1$ is $x_i$ and $r_{i-1}^1$.
The input to $h_i^2$ is $r_i^1$ and $r_{i-1}^2$,
and so on.
The LSTM (RNN) Interface
RNN / LSTM / GRU follow the same interface. We have a "builder" which is in charge of defining the parameters for the sequence.
End of explanation
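# A minimal pure-Python sketch (added for illustration; this is not the DyNet API)
# of the layer wiring described above: layer 1 consumes the inputs, layer 2 consumes
# layer 1's outputs, and each layer also feeds its previous output back into itself.
def unroll_two_layers(xs, cell1, cell2, r1_init=0.0, r2_init=0.0):
    r1, r2, outputs = r1_init, r2_init, []
    for x in xs:
        r1 = cell1(x, r1)    # input to h_i^1: x_i and r_{i-1}^1
        r2 = cell2(r1, r2)   # input to h_i^2: r_i^1 and r_{i-1}^2
        outputs.append(r2)   # the output at the top of column i
    return outputs
print unroll_two_layers([1., 2., 3.], lambda x, r: x + 0.5*r, lambda x, r: x - 0.1*r)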
s0 = builder.initial_state()
x1 = dy.vecInput(INPUT_DIM)
s1=s0.add_input(x1)
y1 = s1.output()
# here, we add x1 to the RNN, and the output we get from the top is y (a HIDDEN_DIM-dim vector)
y1.npvalue().shape
s2=s1.add_input(x1) # we can add another input
y2=s2.output()
Explanation: Note that when we create the builder, it adds the internal RNN parameters to the ParameterCollection.
We do not need to care about them, but they will be optimized together with the rest of the network's parameters.
End of explanation
print s2.h()
Explanation: If our LSTM/RNN was one layer deep, y2 would be equal to the hidden state. However, since it is 2 layers deep, y2 is only the hidden state (= output) of the last layer.
If we want access to all of the hidden states (the outputs of both the first and the last layer), we can use the .h() method, which returns a list of expressions, one for each layer:
End of explanation
# create a simple rnn builder
rnnbuilder=dy.SimpleRNNBuilder(NUM_LAYERS, INPUT_DIM, HIDDEN_DIM, pc)
# initialize a new graph, and a new sequence
rs0 = rnnbuilder.initial_state()
# add inputs
rs1 = rs0.add_input(x1)
ry1 = rs1.output()
print "all layers:", s1.h()
print s1.s()
Explanation: The same interface that we saw until now for the LSTM also holds for the Simple RNN:
End of explanation
rnn_h = rs1.h()
rnn_s = rs1.s()
print "RNN h:", rnn_h
print "RNN s:", rnn_s
lstm_h = s1.h()
lstm_s = s1.s()
print "LSTM h:", lstm_h
print "LSTM s:", lstm_s
Explanation: To summarize, when calling .add_input(x) on an RNNState what happens is that the state creates a new RNN/LSTM column, passing it:
1. the state of the current RNN column
2. the input x
The state is then returned, and we can call its output() method to get the output y, which is the output at the top of the column. We can access the outputs of all the layers (not only the last one) using the .h() method of the state.
.s() The internal state of the RNN may be more involved than just the outputs $h$. This is the case for the LSTM, that keeps an extra "memory" cell, that is used when calculating $h$, and which is also passed to the next column. To access the entire hidden state, we use the .s() method.
The output of .s() differs by the type of RNN being used. For the simple-RNN, it is the same as .h(). For the LSTM, it is more involved.
End of explanation
s2=s1.add_input(x1)
s3=s2.add_input(x1)
s4=s3.add_input(x1)
# let's continue s3 with a new input.
s5=s3.add_input(x1)
# we now have two different sequences:
# s0,s1,s2,s3,s4
# s0,s1,s2,s3,s5
# the two sequences share parameters.
assert(s5.prev() == s3)
assert(s4.prev() == s3)
s6=s3.prev().add_input(x1)
# we now have an additional sequence:
# s0,s1,s2,s6
s6.h()
s6.s()
Explanation: As we can see, the LSTM has two extra state expressions (one for each hidden layer) before the outputs h.
Extra options in the RNN/LSTM interface
Stack LSTM The RNNs are shaped as a stack: we can remove the top and continue from the previous state.
This is done either by remembering the previous state and continuing it with a new .add_input(), or by
accessing the previous state of a given state using the .prev() method of state.
Initializing a new sequence with a given state When we call builder.initial_state(), we are assuming the state has the default (zero) initialization. If we want, we can specify a list of expressions that will serve as the initial state. The expected format is the same as the results of a call to .final_s(). TODO: this is not supported yet.
End of explanation
state = rnnbuilder.initial_state()
xs = [x1,x1,x1]
states = state.add_inputs(xs)
outputs = [s.output() for s in states]
hs = [s.h() for s in states]
print outputs, hs
Explanation: Aside: memory efficient transduction
The RNNState interface is convenient, and allows for incremental input construction.
However, sometimes we know the sequence of inputs in advance, and care only about the sequence of
output expressions. In this case, we can use the add_inputs(xs) method, where xs is a list of Expression.
End of explanation
state = rnnbuilder.initial_state()
xs = [x1,x1,x1]
outputs = state.transduce(xs)
print outputs
Explanation: This is convenient.
What if we do not care about .s() and .h(), and do not need to access the previous vectors? In such cases
we can use the transduce(xs) method instead of add_inputs(xs).
transduce takes in a sequence of Expressions, and returns a sequence of Expressions.
As a consequence of not returning RNNStates, transduce is much more memory efficient than add_inputs or a series of calls to add_input.
End of explanation
import random
from collections import defaultdict
from itertools import count
import sys
LAYERS = 2
INPUT_DIM = 50
HIDDEN_DIM = 50
characters = list("abcdefghijklmnopqrstuvwxyz ")
characters.append("<EOS>")
int2char = list(characters)
char2int = {c:i for i,c in enumerate(characters)}
VOCAB_SIZE = len(characters)
pc = dy.ParameterCollection()
srnn = dy.SimpleRNNBuilder(LAYERS, INPUT_DIM, HIDDEN_DIM, pc)
lstm = dy.LSTMBuilder(LAYERS, INPUT_DIM, HIDDEN_DIM, pc)
params = {}
params["lookup"] = pc.add_lookup_parameters((VOCAB_SIZE, INPUT_DIM))
params["R"] = pc.add_parameters((VOCAB_SIZE, HIDDEN_DIM))
params["bias"] = pc.add_parameters((VOCAB_SIZE))
# return compute loss of RNN for one sentence
def do_one_sentence(rnn, sentence):
# setup the sentence
dy.renew_cg()
s0 = rnn.initial_state()
R = dy.parameter(params["R"])
bias = dy.parameter(params["bias"])
lookup = params["lookup"]
sentence = ["<EOS>"] + list(sentence) + ["<EOS>"]
sentence = [char2int[c] for c in sentence]
s = s0
loss = []
for char,next_char in zip(sentence,sentence[1:]):
s = s.add_input(lookup[char])
probs = dy.softmax(R*s.output() + bias)
loss.append( -dy.log(dy.pick(probs,next_char)) )
loss = dy.esum(loss)
return loss
# generate from model:
def generate(rnn):
def sample(probs):
rnd = random.random()
for i,p in enumerate(probs):
rnd -= p
if rnd <= 0: break
return i
# setup the sentence
dy.renew_cg()
s0 = rnn.initial_state()
R = dy.parameter(params["R"])
bias = dy.parameter(params["bias"])
lookup = params["lookup"]
s = s0.add_input(lookup[char2int["<EOS>"]])
out=[]
while True:
probs = dy.softmax(R*s.output() + bias)
probs = probs.vec_value()
next_char = sample(probs)
out.append(int2char[next_char])
if out[-1] == "<EOS>": break
s = s.add_input(lookup[next_char])
return "".join(out[:-1]) # strip the <EOS>
# train, and generate every 5 samples
def train(rnn, sentence):
trainer = dy.SimpleSGDTrainer(pc)
for i in xrange(200):
loss = do_one_sentence(rnn, sentence)
loss_value = loss.value()
loss.backward()
trainer.update()
if i % 5 == 0:
print loss_value,
print generate(rnn)
Explanation: Character-level LSTM
Now that we know the basics of RNNs, let's build a character-level LSTM language-model.
We have a sequence LSTM that, at each step, gets as input a character, and needs to predict the next character.
End of explanation
sentence = "a quick brown fox jumped over the lazy dog"
train(srnn, sentence)
sentence = "a quick brown fox jumped over the lazy dog"
train(lstm, sentence)
Explanation: Notice that:
1. We pass the same rnn-builder to do_one_sentence over and over again.
We must re-use the same rnn-builder, as this is where the shared parameters are kept.
2. We call dy.renew_cg() before each sentence -- because we want to have a new graph (new network) for this sentence.
The parameters will be shared through the model and the shared rnn-builder.
End of explanation
train(srnn, "these pretzels are making me thirsty")
Explanation: The model seems to learn the sentence quite well.
Somewhat surprisingly, the Simple-RNN model learns quicker than the LSTM!
How can that be?
The answer is that we are cheating a bit. The sentence we are trying to learn
has each letter-bigram exactly once. This means a simple trigram model can memorize
it very well.
Try it out with more complex sequences.
End of explanation |
12,275 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Bayesian Imputation
Real-world datasets often contain many missing values. In those situations, we have to either remove those missing data (also known as "complete case") or replace them by some values. Though using complete case is pretty straightforward, it is only applicable when the number of missing entries is so small that throwing away those entries would not affect much the power of the analysis we are conducting on the data. The second strategy, also known as imputation, is more applicable and will be our focus in this tutorial.
Probably the most popular way to perform imputation is to fill a missing value with the mean, median, or mode of its corresponding feature. In that case, we implicitly assume that the feature containing missing values has no correlation with the remaining features of our dataset. This is a pretty strong assumption and might not be true in general. In addition, it does not encode any uncertainty that we might put on those values. Below, we will construct a Bayesian setting to resolve those issues. In particular, given a model on the dataset, we will
create a generative model for the feature with missing value
and consider missing values as unobserved latent variables.
Step1: Dataset
The data is taken from the competition Titanic
Step2: Look at the data info, we know that there are missing data at Age, Cabin, and Embarked columns. Although Cabin is an important feature (because the position of a cabin in the ship can affect the chance of people in that cabin to survive), we will skip it in this tutorial for simplicity. In the dataset, there are many categorical columns and two numerical columns Age and Fare. Let's first look at the distribution of those categorical columns
Step3: Prepare data
First, we will merge rare groups in SibSp and Parch columns together. In addition, we'll fill 2 missing entries in Embarked by the mode S. Note that we can make a generative model for those missing entries in Embarked but let's skip doing so for simplicity.
Step4: Looking closer at the data, we can observe that each name contains a title. We know that age is correlated with the title of the name
Step5: We will make a new column Title, where rare titles are merged into one group Misc..
Step6: Now, it is ready to turn the dataframe, which includes categorical values, into numpy arrays. We also perform standardization (a good practice for regression models) for Age column.
Step7: Modelling
First, we want to note that in NumPyro, the following models
python
def model1a()
Step8: Note that in the model, the prior for age is dist.Normal(age_mu, age_sigma), where the values of age_mu and age_sigma depend on title. Because there are missing values in age, we will encode those missing values in the latent parameter age_impute. Then we can replace NaN entries in age with the vector age_impute.
Sampling
We will use MCMC with NUTS kernel to sample both regression coefficients and imputed values.
Step9: To double check that the assumption "age is correlated with title" is reasonable, let's look at the inferred age by title. Recall that we performed standardization on age, so here we need to scale back to the original domain.
Step10: The infered result confirms our assumption that Age is correlated with Title
Step11: So far so good, we have many information about the regression coefficients together with imputed values and their uncertainties. Let's inspect those results a bit
Step12: This is a pretty good result using a simple logistic regression model. Let's see how the model performs if we don't use Bayesian imputation here. | Python Code:
!pip install -q numpyro@git+https://github.com/pyro-ppl/numpyro
# first, we need some imports
import os
from IPython.display import set_matplotlib_formats
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
from jax import numpy as jnp
from jax import random
from jax.scipy.special import expit
import numpyro
from numpyro import distributions as dist
from numpyro.distributions import constraints
from numpyro.infer import MCMC, NUTS, Predictive
plt.style.use("seaborn")
if "NUMPYRO_SPHINXBUILD" in os.environ:
set_matplotlib_formats("svg")
assert numpyro.__version__.startswith("0.9.1")
Explanation: Bayesian Imputation
Real-world datasets often contain many missing values. In those situations, we have to either remove those missing data (also known as "complete case") or replace them by some values. Though using complete case is pretty straightforward, it is only applicable when the number of missing entries is so small that throwing away those entries would not affect much the power of the analysis we are conducting on the data. The second strategy, also known as imputation, is more applicable and will be our focus in this tutorial.
Probably the most popular way to perform imputation is to fill a missing value with the mean, median, or mode of its corresponding feature. In that case, we implicitly assume that the feature containing missing values has no correlation with the remaining features of our dataset. This is a pretty strong assumption and might not be true in general. In addition, it does not encode any uncertainty that we might put on those values. Below, we will construct a Bayesian setting to resolve those issues. In particular, given a model on the dataset, we will
create a generative model for the feature with missing value
and consider missing values as unobserved latent variables.
End of explanation
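# Toy sketch (added for illustration; not part of the original tutorial): a missing
# entry treated as an unobserved latent variable. The NaN position gets its own
# numpyro.sample site and is spliced back into the observed vector -- the same
# pattern that the Modelling section below applies to the Titanic `age` column.
toy = np.array([0.2, np.nan, -0.5, 1.1])
def toy_model(x):
    nan_idx = np.nonzero(np.isnan(x))[0]
    x_impute = numpyro.sample(
        "x_impute", dist.Normal(0, 1).expand([len(nan_idx)]).mask(False)
    )
    x_filled = jnp.asarray(x).at[nan_idx].set(x_impute)
    numpyro.sample("x", dist.Normal(0, 1), obs=x_filled)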
train_df = pd.read_csv(
"https://raw.githubusercontent.com/agconti/kaggle-titanic/master/data/train.csv"
)
train_df.info()
train_df.head()
Explanation: Dataset
The data is taken from the competition Titanic: Machine Learning from Disaster hosted on kaggle. It contains information of passengers in the Titanic accident such as name, age, gender,... And our target is to predict if a person is more likely to survive.
End of explanation
for col in ["Survived", "Pclass", "Sex", "SibSp", "Parch", "Embarked"]:
print(train_df[col].value_counts(), end="\n\n")
Explanation: Look at the data info, we know that there are missing data at Age, Cabin, and Embarked columns. Although Cabin is an important feature (because the position of a cabin in the ship can affect the chance of people in that cabin to survive), we will skip it in this tutorial for simplicity. In the dataset, there are many categorical columns and two numerical columns Age and Fare. Let's first look at the distribution of those categorical columns:
End of explanation
train_df.SibSp.clip(0, 1, inplace=True)
train_df.Parch.clip(0, 2, inplace=True)
train_df.Embarked.fillna("S", inplace=True)
Explanation: Prepare data
First, we will merge rare groups in SibSp and Parch columns together. In addition, we'll fill 2 missing entries in Embarked by the mode S. Note that we can make a generative model for those missing entries in Embarked but let's skip doing so for simplicity.
End of explanation
train_df.Name.str.split(", ").str.get(1).str.split(" ").str.get(0).value_counts()
Explanation: Looking closer at the data, we can observe that each name contains a title. We know that age is correlated with the title of the name: e.g. those with Mrs. would be older than those with Miss. (on average) so it might be good to create that feature. The distribution of titles is:
End of explanation
train_df["Title"] = (
train_df.Name.str.split(", ")
.str.get(1)
.str.split(" ")
.str.get(0)
.apply(lambda x: x if x in ["Mr.", "Miss.", "Mrs.", "Master."] else "Misc.")
)
Explanation: We will make a new column Title, where rare titles are merged into one group Misc..
End of explanation
title_cat = pd.CategoricalDtype(
categories=["Mr.", "Miss.", "Mrs.", "Master.", "Misc."], ordered=True
)
embarked_cat = pd.CategoricalDtype(categories=["S", "C", "Q"], ordered=True)
age_mean, age_std = train_df.Age.mean(), train_df.Age.std()
data = dict(
age=train_df.Age.pipe(lambda x: (x - age_mean) / age_std).values,
pclass=train_df.Pclass.values - 1,
title=train_df.Title.astype(title_cat).cat.codes.values,
sex=(train_df.Sex == "male").astype(int).values,
sibsp=train_df.SibSp.values,
parch=train_df.Parch.values,
embarked=train_df.Embarked.astype(embarked_cat).cat.codes.values,
)
survived = train_df.Survived.values
# compute the age mean for each title
age_notnan = data["age"][jnp.isfinite(data["age"])]
title_notnan = data["title"][jnp.isfinite(data["age"])]
age_mean_by_title = jnp.stack([age_notnan[title_notnan == i].mean() for i in range(5)])
Explanation: Now, it is ready to turn the dataframe, which includes categorical values, into numpy arrays. We also perform standardization (a good practice for regression models) for Age column.
End of explanation
def model(
age, pclass, title, sex, sibsp, parch, embarked, survived=None, bayesian_impute=True
):
b_pclass = numpyro.sample("b_Pclass", dist.Normal(0, 1).expand([3]))
b_title = numpyro.sample("b_Title", dist.Normal(0, 1).expand([5]))
b_sex = numpyro.sample("b_Sex", dist.Normal(0, 1).expand([2]))
b_sibsp = numpyro.sample("b_SibSp", dist.Normal(0, 1).expand([2]))
b_parch = numpyro.sample("b_Parch", dist.Normal(0, 1).expand([3]))
b_embarked = numpyro.sample("b_Embarked", dist.Normal(0, 1).expand([3]))
# impute age by Title
isnan = np.isnan(age)
age_nanidx = np.nonzero(isnan)[0]
if bayesian_impute:
age_mu = numpyro.sample("age_mu", dist.Normal(0, 1).expand([5]))
age_mu = age_mu[title]
age_sigma = numpyro.sample("age_sigma", dist.Normal(0, 1).expand([5]))
age_sigma = age_sigma[title]
age_impute = numpyro.sample(
"age_impute",
dist.Normal(age_mu[age_nanidx], age_sigma[age_nanidx]).mask(False),
)
age = jnp.asarray(age).at[age_nanidx].set(age_impute)
numpyro.sample("age", dist.Normal(age_mu, age_sigma), obs=age)
else:
# fill missing data by the mean of ages for each title
age_impute = age_mean_by_title[title][age_nanidx]
age = jnp.asarray(age).at[age_nanidx].set(age_impute)
a = numpyro.sample("a", dist.Normal(0, 1))
b_age = numpyro.sample("b_Age", dist.Normal(0, 1))
logits = a + b_age * age
logits = logits + b_title[title] + b_pclass[pclass] + b_sex[sex]
logits = logits + b_sibsp[sibsp] + b_parch[parch] + b_embarked[embarked]
numpyro.sample("survived", dist.Bernoulli(logits=logits), obs=survived)
Explanation: Modelling
First, we want to note that in NumPyro, the following models
python
def model1a():
x = numpyro.sample("x", dist.Normal(0, 1).expand([10]))
and
python
def model1b():
x = numpyro.sample("x", dist.Normal(0, 1).expand([10]).mask(False))
numpyro.sample("x_obs", dist.Normal(0, 1).expand([10]), obs=x)
are equivalent in the sense that both of them have
the same latent sites x drawn from dist.Normal(0, 1) prior,
and the same log densities dist.Normal(0, 1).log_prob(x).
Now, assume that we observed the last 6 values of x (non-observed entries take value NaN), the typical model will be
python
def model2a(x):
x_impute = numpyro.sample("x_impute", dist.Normal(0, 1).expand([4]))
x_obs = numpyro.sample("x_obs", dist.Normal(0, 1).expand([6]), obs=x[4:])
x_imputed = jnp.concatenate([x_impute, x_obs])
or with the usage of mask,
python
def model2b(x):
x_impute = numpyro.sample("x_impute", dist.Normal(0, 1).expand([4]).mask(False))
x_imputed = jnp.concatenate([x_impute, x[4:]])
numpyro.sample("x", dist.Normal(0, 1).expand([10]), obs=x_imputed)
Both approaches to model the partial observed data x are equivalent. For the model below, we will use the latter method.
End of explanation
mcmc = MCMC(NUTS(model), num_warmup=1000, num_samples=1000)
mcmc.run(random.PRNGKey(0), **data, survived=survived)
mcmc.print_summary()
Explanation: Note that in the model, the prior for age is dist.Normal(age_mu, age_sigma), where the values of age_mu and age_sigma depend on title. Because there are missing values in age, we will encode those missing values in the latent parameter age_impute. Then we can replace NaN entries in age with the vector age_impute.
Sampling
We will use MCMC with NUTS kernel to sample both regression coefficients and imputed values.
End of explanation
age_by_title = age_mean + age_std * mcmc.get_samples()["age_mu"].mean(axis=0)
dict(zip(title_cat.categories, age_by_title))
Explanation: To double check that the assumption "age is correlated with title" is reasonable, let's look at the inferred age by title. Recall that we performed standardization on age, so here we need to scale back to the original domain.
End of explanation
train_df.groupby("Title")["Age"].mean()
Explanation: The inferred result confirms our assumption that Age is correlated with Title:
those with the Master. title have a rather small age (in other words, they are children on the ship) compared to the other groups,
those with the Mrs. title have a larger age than those with the Miss. title (on average).
We can also see that the result is similar to the actual statistical mean of Age given Title in our training dataset:
End of explanation
posterior = mcmc.get_samples()
survived_pred = Predictive(model, posterior)(random.PRNGKey(1), **data)["survived"]
survived_pred = (survived_pred.mean(axis=0) >= 0.5).astype(jnp.uint8)
print("Accuracy:", (survived_pred == survived).sum() / survived.shape[0])
confusion_matrix = pd.crosstab(
pd.Series(survived, name="actual"), pd.Series(survived_pred, name="predict")
)
confusion_matrix / confusion_matrix.sum(axis=1)
Explanation: So far so good, we have a lot of information about the regression coefficients together with imputed values and their uncertainties. Let's inspect those results a bit:
The mean value -0.44 of b_Age implies that those with smaller ages have a better chance of surviving.
The mean value (1.11, -1.07) of b_Sex implies that female passengers have a higher chance of surviving than male passengers.
Prediction
In NumPyro, we can use Predictive utility for making predictions from posterior samples. Let's check how well the model performs on the training dataset. For simplicity, we will get a survived prediction for each posterior sample and perform the majority rule on the predictions.
End of explanation
mcmc.run(random.PRNGKey(2), **data, survived=survived, bayesian_impute=False)
posterior_1 = mcmc.get_samples()
survived_pred_1 = Predictive(model, posterior_1)(random.PRNGKey(2), **data)["survived"]
survived_pred_1 = (survived_pred_1.mean(axis=0) >= 0.5).astype(jnp.uint8)
print("Accuracy:", (survived_pred_1 == survived).sum() / survived.shape[0])
confusion_matrix = pd.crosstab(
pd.Series(survived, name="actual"), pd.Series(survived_pred_1, name="predict")
)
confusion_matrix / confusion_matrix.sum(axis=1)
Explanation: This is a pretty good result using a simple logistic regression model. Let's see how the model performs if we don't use Bayesian imputation here.
End of explanation |
12,276 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Landice
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Grid
4. Glaciers
5. Ice
6. Ice --> Mass Balance
7. Ice --> Mass Balance --> Basal
8. Ice --> Mass Balance --> Frontal
9. Ice --> Dynamics
1. Key Properties
Land ice key properties
1.1. Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Ice Albedo
Is Required
Step7: 1.4. Atmospheric Coupling Variables
Is Required
Step8: 1.5. Oceanic Coupling Variables
Is Required
Step9: 1.6. Prognostic Variables
Is Required
Step10: 2. Key Properties --> Software Properties
Software properties of land ice code
2.1. Repository
Is Required
Step11: 2.2. Code Version
Is Required
Step12: 2.3. Code Languages
Is Required
Step13: 3. Grid
Land ice grid
3.1. Overview
Is Required
Step14: 3.2. Adaptive Grid
Is Required
Step15: 3.3. Base Resolution
Is Required
Step16: 3.4. Resolution Limit
Is Required
Step17: 3.5. Projection
Is Required
Step18: 4. Glaciers
Land ice glaciers
4.1. Overview
Is Required
Step19: 4.2. Description
Is Required
Step20: 4.3. Dynamic Areal Extent
Is Required
Step21: 5. Ice
Ice sheet and ice shelf
5.1. Overview
Is Required
Step22: 5.2. Grounding Line Method
Is Required
Step23: 5.3. Ice Sheet
Is Required
Step24: 5.4. Ice Shelf
Is Required
Step25: 6. Ice --> Mass Balance
Description of the surface mass balance treatment
6.1. Surface Mass Balance
Is Required
Step26: 7. Ice --> Mass Balance --> Basal
Description of basal melting
7.1. Bedrock
Is Required
Step27: 7.2. Ocean
Is Required
Step28: 8. Ice --> Mass Balance --> Frontal
Description of claving/melting from the ice shelf front
8.1. Calving
Is Required
Step29: 8.2. Melting
Is Required
Step30: 9. Ice --> Dynamics
**
9.1. Description
Is Required
Step31: 9.2. Approximation
Is Required
Step32: 9.3. Adaptive Timestep
Is Required
Step33: 9.4. Timestep
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'nerc', 'sandbox-2', 'landice')
Explanation: ES-DOC CMIP6 Model Properties - Landice
MIP Era: CMIP6
Institute: NERC
Source ID: SANDBOX-2
Topic: Landice
Sub-Topics: Glaciers, Ice.
Properties: 30 (21 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:27
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Grid
4. Glaciers
5. Ice
6. Ice --> Mass Balance
7. Ice --> Mass Balance --> Basal
8. Ice --> Mass Balance --> Frontal
9. Ice --> Dynamics
1. Key Properties
Land ice key properties
1.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of the land ice model code
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.ice_albedo')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "function of ice age"
# "function of ice density"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Ice Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify how ice albedo is modelled
End of explanation
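For reference, recording a choice for an ENUM property just means passing one of the valid strings listed above to DOC.set_value; for a 1.N property, repeated calls are assumed to record several choices. The value below is only a placeholder pattern, not a statement about the SANDBOX-2 model, which is why it is left commented out.
# Example pattern only - replace with the real choice(s) for this model:
# DOC.set_value("function of ice age")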
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Atmospheric Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the atmosphere and ice (e.g. orography, ice mass)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Oceanic Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the ocean and ice
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice velocity"
# "ice thickness"
# "ice temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which variables are prognostically calculated in the ice model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of land ice code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Grid
Land ice grid
3.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.2. Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is an adaptive grid being used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.base_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Base Resolution
Is Required: TRUE Type: FLOAT Cardinality: 1.1
The base resolution (in metres), before any adaption
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.resolution_limit')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Resolution Limit
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If an adaptive grid is being used, what is the limit of the resolution (in metres)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.projection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.5. Projection
Is Required: TRUE Type: STRING Cardinality: 1.1
The projection of the land ice grid (e.g. albers_equal_area)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Glaciers
Land ice glaciers
4.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of glaciers in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of glaciers, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 4.3. Dynamic Areal Extent
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does the model include a dynamic glacial extent?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Ice
Ice sheet and ice shelf
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the ice sheet and ice shelf in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.grounding_line_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grounding line prescribed"
# "flux prescribed (Schoof)"
# "fixed grid size"
# "moving grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 5.2. Grounding Line Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the technique used for modelling the grounding line in the ice sheet-ice shelf coupling
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_sheet')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.3. Ice Sheet
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice sheets simulated?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_shelf')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.4. Ice Shelf
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice shelves simulated?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Ice --> Mass Balance
Description of the surface mass balance treatment
6.1. Surface Mass Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how and where the surface mass balance (SMB) is calculated. Include the temporal coupling frequency from the atmosphere, whether or not a separate SMB model is used, and if so details of this model, such as its resolution
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Ice --> Mass Balance --> Basal
Description of basal melting
7.1. Bedrock
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over bedrock
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Ocean
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Ice --> Mass Balance --> Frontal
Description of calving/melting from the ice shelf front
8.1. Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of calving from the front of the ice shelf
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Melting
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of melting from the front of the ice shelf
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Ice --> Dynamics
**
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of ice sheet and ice shelf dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.approximation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SIA"
# "SAA"
# "full stokes"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.2. Approximation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Approximation type used in modelling ice dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 9.3. Adaptive Timestep
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there an adaptive time scheme for the ice scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.4. Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep (in seconds) of the ice scheme. If the timestep is adaptive, then state a representative timestep.
End of explanation |
12,277 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<span style="color
Step1: <span style="color
Step2: <span style="color
Step3: <span style="color
Step4: <span style="color
Step5: <span style="color | Python Code:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mpl
pd.set_option('max_columns', 50)
mpl.rcParams['lines.linewidth'] = 2
%matplotlib inline
Explanation: <span style="color:black; font-family:Helvetica; font-size:2.5em;">Practical Code to Calculating Customer Lifetime Value (CLV)</span>
<span style="color:gray; font-family:Helvetica; font-size:1em;"><b>Customer Lifetime Value (CLV)</b> is an estimation of the entire net profit attributed to a single customer. Itโs an important metric to understand because it helps businesses determine how much is too much to spend on advertising to acquire a single customer.</span>
End of explanation
data = pd.read_csv('/Users/crucker/Desktop/clv_transactions.csv')
data.head(6)
data.tail(6)
Transactions = data['CustomerID'].count()
Customers = data['CustomerID'].max()
MinTransactionDate = data['TransactionDate'].min()
MaxTransactionDate = data['TransactionDate'].max()
Amount = data['Amount'].sum()
summary = [Transactions, Customers, MinTransactionDate, MaxTransactionDate, round(Amount, 2)]
summary
Explanation: <span style="color:black; font-family:Helvetica; font-size:2.5em;">Data Exploration</span>
<span style="color:gray; font-family:Helvetica; font-size:1em;">For this example weโll calculate CLV from a dataset of roughly 4,200 transactions.</span>
End of explanation
# use a separate name here so the raw transactions in `data` are not overwritten
summary_table = {'Transactions': [4181],
                 'Customers': [1000],
                 'MinTransactionDate': ['2010-01-04'],
                 'MaxTransactionDate': ['2015-12-31'],
                 'Amount': [33729.91]}
df = pd.DataFrame(summary_table, index = [''])
df
TransactionsPerCustomer = round(Transactions / Customers, 2)
TransactionsPerCustomer
AmountPerTransaction = round(Amount / Transactions, 2)
AmountPerTransaction
AmountPerCustomer = round(Amount / Customers, 2)
AmountPerCustomer
Explanation: <span style="color:gray; font-family:Helvetica; font-size:1em;">As with any analysis, the first thing weโll do is look at some basic summary statistics.</span>
End of explanation
summary_table = {'TransactionsPerCustomer': [4.0],
                 'AmountPerTransaction': [8.07],
                 'AmountPerCustomer': [33.73]}
df = pd.DataFrame(summary_table, index = [''])
df
more_summary = [TransactionsPerCustomer, AmountPerTransaction, AmountPerCustomer]
more_summary
Explanation: <span style="color:gray; font-family:Helvetica; font-size:1em;">Note that the data consists of 1000 customers who made transactions between 2010 and 2015. Furthermore, each customer made about 4 transactions for 8 bucks a piece, totaling close to $34. This amount can be considered a lower bound on CLV since itโs the total amount spent by each customer, but we still expect existing customers to make future purchases.</span>
End of explanation
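One simple way to move beyond that lower bound is the classic back-of-the-envelope CLV formula: average transaction value, times purchase frequency per year, times an expected retention horizon. The sketch below reuses the quantities computed above; the ten-year horizon is an assumption chosen for illustration, not something estimated from this dataset.
# Rough CLV estimate (illustrative only)
years_observed = 6                                   # data spans 2010 through 2015
transactions_per_year = TransactionsPerCustomer / years_observed
expected_lifespan_years = 10                         # assumed retention horizon
clv_estimate = AmountPerTransaction * transactions_per_year * expected_lifespan_years
round(clv_estimate, 2)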
data.loc[data['Amount'] >= 29.99]
import seaborn as sns
sns.set(color_codes=True)
Explanation: <span style="color:gray; font-family:Helvetica; font-size:1em;">We need to consider outlier transactions and should remove the transactions from the data entirely. Here we inspect the largest transactions.</span>
End of explanation
plt.title('Distribution of Transaction Amounts', fontsize=14, fontweight="bold")
sns.distplot(data.Amount, color='#3498db')
Explanation: <span style="color:black; font-family:Helvetica; font-size:2.5em;">Plotting Univariate Distributions</span>
<span style="color:gray; font-family:Helvetica; font-size:1em;">We could use a statistical test to check for outliers, but here itโs pretty clear that none exist. Plotting the entire distribution of transaction amounts should give us more confidence in our assertion.</span>
End of explanation |
12,278 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Basic Input and Output
Basic Output - print()
In Python, we talk about the terminal - by this we really just mean the screen, or maybe a window on the screen.
Python 3 can output to the terminal using the print() function. In the very early days of computers, there weren't any screens - terminal output always went to a printer, hence telling a computer to print() would spew something out of a printer. Nowadays, we still use the print() function, but the output goes to a screen!
You just
* type print
* open a bracket (
* put in the stuff you want to show on the screen or terminal
* close the bracket )
python
print(42)
print('Boris')
Try it yourself. This bit of python will print the number 42 and the name 'Boris'.
Fix the last two lines to show the number 47 and the name 'Jane'.
Step1: If you've done it right, the error messages should go away and your output should look like this
Step2: If you've done it right, your output should look like this
Step3: If you've done it right, your output should look like this
Step4: Try to run the following code then fix the error message
(Hint
Step5: If you've done it right, your output should look like this, unless you aren't Katie | Python Code:
print(42)
print('Boris')
pint[47
print'Jane
Explanation: Basic Input and Output
Basic Output - print()
In Python, we talk about the terminal - by this we really just mean the screen, or maybe a window on the screen.
Python 3 can output to the terminal using the print() function. In the very early days of computers, there weren't any screens - terminal output always went to a printer, hence telling a computer to print() would spew something out of a printer. Nowadays, we still use the print() function, but the output goes to a screen!
You just
* type print
* open a bracket (
* put in the stuff you want to show on the screen or terminal
* close the bracket )
python
print(42)
print('Boris')
Try it yourself. This bit of python will print the number 42 and the name 'Boris'.
Fix the last two lines to show the number 47 and the name 'Jane'.
End of explanation
print(42, 'Boris')
Explanation: If you've done it right, the error messages should go away and your output should look like this:
42
Boris
47
Jane
Printing More Than One Thing
You can put 42 Boris 47 Jane on the screen using just one print() function instead of four.
Change the following line to print 42 Boris 47 Jane using commas to separate each part.
End of explanation
print(42, '\nBoris', 47, 'Jane')
Explanation: If you've done it right, your output should look like this:
42 Boris 47 Jane
Printing on a New Line
If you want to have items printed on new lines, you can use the \n special character.
python
print(42, '\nBoris')
Change the following code to use \n to get each item on a separate line
End of explanation
some_text = input('Please enter some text: ')
print(some_text)
Explanation: If you've done it right, your output should look like this:
42
Boris
47
Jane
Basic User Input
You the human will use your computing device by typing on the keyboard, moving and clicking the mouse, touching or pinching on a phone touchscreen, talking into a microphone etc. The easiest method for beginner programmers is the keyboard.
Think about the last time you ordered anything online: the webpage asks for your name, you type it into the correct box, then your address etc. Your answers are stored in a variable (more on that later). Type in some text and press enter to see how it works:
(Don't try to change the program code here, just run it, then type in something).
End of explanation
name = input('Please enter your name: ')
print('It is a pleasure to meet you 'name,'.')
Explanation: Try to run the following code then fix the error message
(Hint: there's a missing comma somewhere...)
End of explanation
name = input('What is your name? ')
seats = int(input('How many seats do you want to book? '))
height_metres = float(input('What is your height in metres? '))
print ('\nCustomer Report:', name, 'booking', seats, 'seats.')
Explanation: If you've done it right, your output should look like this, unless you aren't Katie:
Please enter your name: Katie
It is a pleasure to meet you Katie.
Number or Text Input?
Python easily handles situations when you want to input a number or text.
This will accept any input at all and store it as text.
python
name = input('What is your name? ')
This will accept whole numbers only (whole numbers are officially known as 'integers'). The program will crash if the user types in anything that is not a whole number.
python
seats = int(input('How many seats do you want to book? '))
This will accept decimal numbers, which are officially called real or floating point numbers. It will still accept integer numbers too. The program will crash if the user types in anything that is not a number.
python
height_metres = float(input('What is your height in metres? '))
Type your answers to the following questions - try to get each input function to crash!
End of explanation |
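If you want the number questions to survive bad input instead of crashing, one way (shown purely for reference - it is not needed for the exercise above) is to wrap the conversion in a try/except and keep asking until the answer converts cleanly.
# A sketch of crash-proof whole-number input
while True:
    reply = input('How many seats do you want to book? ')
    try:
        seats = int(reply)
        break                      # got a whole number, stop asking
    except ValueError:
        print(reply, 'is not a whole number - please try again.')
print('You asked for', seats, 'seats.')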
12,279 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
CrowdTruth for Sparse Multiple Choice Tasks
Step1: Declaring a pre-processing configuration
The pre-processing configuration defines how to interpret the raw crowdsourcing input. To do this, we need to define a configuration class. First, we import the default CrowdTruth configuration class
Step2: Our test class inherits the default configuration DefaultConfig, while also declaring some additional attributes that are specific to the Relation Extraction task
Step3: Pre-processing the input data
After declaring the configuration of our input file, we are ready to pre-process the crowd data
Step4: Computing the CrowdTruth metrics
The pre-processed data can then be used to calculate the CrowdTruth metrics
Step5: results is a dict object that contains the quality metrics for sentences, events and crowd workers.
The sentence metrics are stored in results["units"]
Step6: The uqs column in results["units"] contains the sentence quality scores, capturing the overall workers agreement over each sentence. Here we plot its histogram
Step7: The unit_annotation_score column in results["units"] contains the sentence-event scores, capturing the likelihood that an event is expressed in a sentence. For each sentence, we store a dictionary mapping each event to its sentence-event score.
Step8: The worker metrics are stored in results["workers"]
Step9: The wqs column in results["workers"] contains the worker quality scores, capturing the overall agreement between one worker and all the other workers.
Step10: Open to Closed Task Transformation
The goal of this crowdsourcing task is to understand how clearly a word or a word phrase is expressing an event or an action across all the sentences in the dataset and not at the level of a single sentence as previously. Therefore, in the remainder of this tutorial we show how to translate an open task to a closed task by processing both the input units and the annotations of a crowdsourcing task.
The answers from the crowd are stored in the selected_events column.
Step11: As you already know, each word can be expressed in a canonical form, i.e., as a lemma. For example, the words
Step12: The following functions create the values of the annotation vector and extract the lemma of the events selected by each worker.
Step13: Effect on CrowdTruth metrics
Finally, we can compare the effect of the transformation from an open task to a closed task on the CrowdTruth sentence quality score. | Python Code:
import pandas as pd
test_data = pd.read_csv("../data/event-text-sparse-multiple-choice.csv")
test_data.head()
Explanation: CrowdTruth for Sparse Multiple Choice Tasks: Event Extraction
In this tutorial, we will apply CrowdTruth metrics to a sparse multiple choice crowdsourcing task for Event Extraction from sentences. The workers were asked to read a sentence and then pick from a multiple choice list which are the words or word phrases in the sentence that are events or actions. The options available in the multiple choice list change with the input sentence. The task was executed on FigureEight. For more crowdsourcing annotation task examples, click here.
We will also show how to translate an open task to a closed task by processing both the input units and the annotations of a crowdsourcing task, and how this impacts the results of the CrowdTruth quality metrics. We start with an open-ended extraction task, where the crowd was asked to read a sentence and then pick from a multiple choice list which are the words or word phrases in the sentence that are events or actions.
To replicate this experiment, the code used to design and implement this crowdsourcing annotation template is available here: template, css, javascript.
This is a screenshot of the task as it appeared to workers:
A sample dataset for this task is available in this file, containing raw output from the crowd on FigureEight. Download the file and place it in a folder named data that has the same root as this notebook. Now you can check your data:
End of explanation
import crowdtruth
from crowdtruth.configuration import DefaultConfig
Explanation: Declaring a pre-processing configuration
The pre-processing configuration defines how to interpret the raw crowdsourcing input. To do this, we need to define a configuration class. First, we import the default CrowdTruth configuration class:
End of explanation
class TestConfig(DefaultConfig):
inputColumns = ["doc_id", "events", "events_count", "original_sentence", "processed_sentence", "sentence_id", "tokens"]
outputColumns = ["selected_events"]
annotation_separator = ","
    # open-ended task: the possible annotations are not known in advance
open_ended_task = True
def processJudgments(self, judgments):
# pre-process output to match the values in annotation_vector
for col in self.outputColumns:
# transform to lowercase
judgments[col] = judgments[col].apply(lambda x: str(x).lower())
# remove square brackets from annotations
judgments[col] = judgments[col].apply(lambda x: str(x).replace('[',''))
judgments[col] = judgments[col].apply(lambda x: str(x).replace(']',''))
# remove the quotes around the annotations
judgments[col] = judgments[col].apply(lambda x: str(x).replace('"',''))
return judgments
Explanation: Our test class inherits the default configuration DefaultConfig, while also declaring some additional attributes that are specific to the Event Extraction task:
inputColumns: list of input columns from the .csv file with the input data
outputColumns: list of output columns from the .csv file with the answers from the workers
annotation_separator: string that separates between the crowd annotations in outputColumns
open_ended_task: boolean variable defining whether the task is open-ended (i.e. the possible crowd annotations are not known beforehand, like in the case of free text input); in this first version of the task the annotations are treated as open-ended, so the variable is set to True; later in the tutorial we transform the task into a closed one and set it to False
annotation_vector: list of possible crowd answers, mandatory to declare when open_ended_task is False; for our task, this is the list of all events that were given as input to the crowd in at least one sentence
processJudgments: method that defines processing of the raw crowd data; for this task, we process the crowd answers to correspond to the values in annotation_vector
The complete configuration class is declared below:
End of explanation
data_open, config = crowdtruth.load(
file = "../data/event-text-sparse-multiple-choice.csv",
config = TestConfig()
)
data_open['judgments'].head()
Explanation: Pre-processing the input data
After declaring the configuration of our input file, we are ready to pre-process the crowd data:
End of explanation
results_open = crowdtruth.run(data_open, config)
Explanation: Computing the CrowdTruth metrics
The pre-processed data can then be used to calculate the CrowdTruth metrics:
End of explanation
results_open["units"].head()
Explanation: results is a dict object that contains the quality metrics for sentences, events and crowd workers.
The sentence metrics are stored in results["units"]:
End of explanation
import matplotlib.pyplot as plt
%matplotlib inline
plt.hist(results_open["units"]["uqs"])
plt.xlabel("Sentence Quality Score")
plt.ylabel("Sentences")
Explanation: The uqs column in results["units"] contains the sentence quality scores, capturing the overall workers agreement over each sentence. Here we plot its histogram:
End of explanation
results_open["units"]["unit_annotation_score"].head(10)
Explanation: The unit_annotation_score column in results["units"] contains the sentence-event scores, capturing the likelihood that an event is expressed in a sentence. For each sentence, we store a dictionary mapping each event to its sentence-event score.
End of explanation
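Since each entry of unit_annotation_score is described above as a dictionary of event -> score, a quick way to see which event the crowd scored highest in every sentence is to take the arg-max of each dictionary (a small sketch, assuming the entries are plain Python dicts):
# highest-scoring event per sentence
best_event = results_open["units"]["unit_annotation_score"].apply(
    lambda scores: max(scores, key=scores.get))
best_event.head(10)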
results_open["workers"].head()
Explanation: The worker metrics are stored in results["workers"]:
End of explanation
plt.hist(results_open["workers"]["wqs"])
plt.xlabel("Worker Quality Score")
plt.ylabel("Workers")
Explanation: The wqs column in results["workers"] contains the worker quality scores, capturing the overall agreement between one worker and all the other workers.
End of explanation
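Because low-quality workers are the likeliest spam contributors, sorting the worker table by wqs is a handy follow-up check (a sketch using standard pandas operations on the DataFrame shown above):
# workers with the lowest quality scores
results_open["workers"].sort_values("wqs").head(10)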
test_data["selected_events"][0:30]
Explanation: Open to Closed Task Transformation
The goal of this crowdsourcing task is to understand how clearly a word or a word phrase is expressing an event or an action across all the sentences in the dataset and not at the level of a single sentence as previously. Therefore, in the remainder of this tutorial we show how to translate an open task to a closed task by processing both the input units and the annotations of a crowdsourcing task.
The answers from the crowd are stored in the selected_events column.
End of explanation
import nltk
nltk.download('punkt')
nltk.download('averaged_perceptron_tagger')
nltk.download('wordnet')
from nltk.stem import WordNetLemmatizer
from nltk.corpus import wordnet
def nltk2wn_tag(nltk_tag):
if nltk_tag.startswith('J'):
return wordnet.ADJ
elif nltk_tag.startswith('V'):
return wordnet.VERB
elif nltk_tag.startswith('N'):
return wordnet.NOUN
elif nltk_tag.startswith('R'):
return wordnet.ADV
else:
return None
def lemmatize_events(event):
nltk_tagged = nltk.pos_tag(nltk.word_tokenize(str(event.lower().split("__")[0])))
wn_tagged = map(lambda x: (str(x[0]), nltk2wn_tag(x[1])), nltk_tagged)
res_words = []
for word, tag in wn_tagged:
if tag is None:
res_word = wordnet._morphy(str(word), wordnet.NOUN)
if res_word == []:
res_words.append(str(word))
else:
if len(res_word) == 1:
res_words.append(str(res_word[0]))
else:
res_words.append(str(res_word[1]))
else:
res_word = wordnet._morphy(str(word), tag)
if res_word == []:
res_words.append(str(word))
else:
if len(res_word) == 1:
res_words.append(str(res_word[0]))
else:
res_words.append(str(res_word[1]))
lematized_keyword = " ".join(res_words)
return lematized_keyword
Explanation: As you already know, each word can be expressed in a canonical form, i.e., as a lemma. For example, the words: run, runs, running, they all have the lemma run. As you can see in the previous cell, events in text can appear under multiple forms. To evaluate the clarity of each event, we will process both the input units and the crowd annotations to refer to a word in its canonical form, i.e., we will lemmatize them.
Following, we define the function used to lemmatize the options that are shown to the workers in the crowdsourcing task:
End of explanation
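As a quick sanity check of the lemmatizer defined above, the example words from the text should all collapse to the same lemma (the expected output is indicative only):
print([lemmatize_events(w) for w in ["run", "runs", "running"]])   # each should reduce to the lemma 'run'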
def define_annotation_vector(eventsList):
events = []
for i in range(len(eventsList)):
currentEvents = eventsList[i].split("###")
for j in range(len(currentEvents)):
if currentEvents[j] != "no_event":
lematized_keyword = lemmatize_events(currentEvents[j])
if lematized_keyword not in events:
events.append(lematized_keyword)
events.append("no_event")
return events
def lemmatize_keywords(keywords, separator):
keywords_list = keywords.split(separator)
lematized_keywords = []
for keyword in keywords_list:
lematized_keyword = lemmatize_events(keyword)
lematized_keywords.append(lematized_keyword)
return separator.join(lematized_keywords)
class TestConfig(DefaultConfig):
inputColumns = ["doc_id", "events", "events_count", "original_sentence", "processed_sentence", "sentence_id", "tokens"]
outputColumns = ["selected_events"]
annotation_separator = ","
# processing of a closed task
open_ended_task = False
annotation_vector = define_annotation_vector(test_data["events"])
def processJudgments(self, judgments):
# pre-process output to match the values in annotation_vector
for col in self.outputColumns:
# transform to lowercase
judgments[col] = judgments[col].apply(lambda x: str(x).lower())
# remove square brackets from annotations
judgments[col] = judgments[col].apply(lambda x: str(x).replace("[",""))
judgments[col] = judgments[col].apply(lambda x: str(x).replace("]",""))
# remove the quotes around the annotations
judgments[col] = judgments[col].apply(lambda x: str(x).replace('"',''))
judgments[col] = judgments[col].apply(lambda x: lemmatize_keywords(str(x), self.annotation_separator))
return judgments
data_closed, config = crowdtruth.load(
file = "../data/event-text-sparse-multiple-choice.csv",
config = TestConfig()
)
data_closed['judgments'].head()
results_closed = crowdtruth.run(data_closed, config)
results_closed["annotations"]
Explanation: The following functions create the values of the annotation vector and extract the lemma of the events selected by each worker.
End of explanation
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
plt.scatter(
results_open["units"]["uqs"],
results_closed["units"]["uqs"],
)
plt.plot([0, 1], [0, 1], 'red', linewidth=1)
plt.title("Sentence Quality Score")
plt.xlabel("open task")
plt.ylabel("closed task")
plt.scatter(
results_open["workers"]["wqs"],
results_closed["workers"]["wqs"],
)
plt.plot([0, 1], [0, 1], 'red', linewidth=1)
plt.title("Worker Quality Score")
plt.xlabel("open task")
plt.ylabel("closed task")
Explanation: Effect on CrowdTruth metrics
Finally, we can compare the effect of the transformation from an open task to a closed task on the CrowdTruth sentence quality score.
End of explanation |
12,280 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<small><i>This notebook was prepared by Donne Martin. Source and license info is on GitHub.</i></small>
Challenge Notebook
Problem
Step1: Unit Test
The following unit test is expected to fail until you solve the challenge. | Python Code:
def list_of_chars(list_chars):
# TODO: Implement me
if li
return list_chars[::-1]
Explanation: <small><i>This notebook was prepared by Donne Martin. Source and license info is on GitHub.</i></small>
Challenge Notebook
Problem: Implement a function to reverse a string (a list of characters), in-place.
Constraints
Test Cases
Algorithm
Code
Unit Test
Solution Notebook
Constraints
Can I assume the string is ASCII?
Yes
Note: Unicode strings could require special handling depending on your language
Since we need to do this in-place, it seems we cannot use the slice operator or the reversed function?
Correct
Since Python string are immutable, can I use a list of characters instead?
Yes
Test Cases
None -> None
[''] -> ['']
['f', 'o', 'o', ' ', 'b', 'a', 'r'] -> ['r', 'a', 'b', ' ', 'o', 'o', 'f']
Algorithm
Refer to the Solution Notebook. If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start.
Code
End of explanation
# %load test_reverse_string.py
from nose.tools import assert_equal
class TestReverse(object):
def test_reverse(self):
assert_equal(list_of_chars(None), None)
assert_equal(list_of_chars(['']), [''])
assert_equal(list_of_chars(
['f', 'o', 'o', ' ', 'b', 'a', 'r']),
['r', 'a', 'b', ' ', 'o', 'o', 'f'])
print('Success: test_reverse')
def main():
test = TestReverse()
test.test_reverse()
if __name__ == '__main__':
main()
Explanation: Unit Test
The following unit test is expected to fail until you solve the challenge.
End of explanation |
12,281 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Intro to deep learning in Chainer
Welcome - this interactive tutorial will introduce you to deep learning in Chainer, to prepare for the DIY practical tutorial.
0. iPython
First off, you need to know how to run code & see the results. When you see Exercise, it is an exercise for you to do. Try to read & do each exercise in order, to understand what is going on!
Exercise - run the next cell by selecting it & pressing Ctrl+Enter
Step1: This is an iPython Notebook. You can write whatever Python code you like here - output like the print will be shown below the cell, and the final result is also shown (the result of a + 200).
Note - your Python code is running on a server I've set up (which has everything you need), not on your local machine.
Exercise - save the notebook (do this regularly), by pressing Ctrl+s (or the save icon)
Hint - if you are struggling what to write at any point, try pressing Tab - iPython should try to offer some sensible completions. If you want to know what a function does, try Shift+Tab to bring up documentation.
1. Numpy
Next we'll import the libraries we need...
Step2: Now we'll learn how to use these libraries to create deep learning functions (later, in the full tutorial, we'll use this to train a handwriting recognizer).
Here are two ways to create a numpy array
Step3: A np.array is a multidimensional array - a very flexible thing, it can be
Step4: Now we just need a few ways of working with these arrays - here are some examples of things that you can do
Step5: I won't explain all of this in detail, but have a play around with arrays, see what you can do with the above operations.
Exercise - try to use your numpy operations to find the following with M
Step6: 2. Chainer
We'll use numpy to get data in & out of Chainer, which is our deep learning library, but Chainer will do most of the data processing.
Here is how you get some data into Chainer, use a linear operation to change its shape, and get the result back out again
Step7: This may not seem particularly special, but this is the heart of a deep learning function. Take an input array, make various transformations that mess around with the shape, and produce an output array.
Some concepts
Step8: If you can do all of this, you're ready to create a deep learning function.
In the last step, you may have noticed something interesting - the parameters inside the link change every time it is re-created. This is because deep learning functions start off random! Random functions don't sound too useful, so later we're going to learn how to "teach" them to be useful functions.
3. Plotting curves
We've provided a very simple log plotting library, dlt.Log, demonstrated below | Python Code:
a = 100
print("a is", a)
a + 200
Explanation: Intro to deep learning in Chainer
Welcome - this interactive tutorial will introduce you to deep learning in Chainer, to prepare for the DIY practical tutorial.
0. iPython
First off, you need to know how to run code & see the results. When you see Exercise, it is an exercise for you to do. Try to read & do each exercise in order, to understand what is going on!
Exercise - run the next cell by selecting it & pressing Ctrl+Enter
End of explanation
%matplotlib inline
import dlt
import numpy as np
import chainer as C
Explanation: This is an iPython Notebook. You can write whatever Python code you like here - output like the print will be shown below the cell, and the final result is also shown (the result of a + 200).
Note - your Python code is running on a server I've set up (which has everything you need), not on your local machine.
Exercise - save the notebook (do this regularly), by pressing Ctrl+s (or the save icon)
Hint - if you are struggling what to write at any point, try pressing Tab - iPython should try to offer some sensible completions. If you want to know what a function does, try Shift+Tab to bring up documentation.
1. Numpy
Next we'll import the libraries we need...
End of explanation
a = np.array([1, 2, 3, 4, 5], dtype=np.int32)
print("a =", a)
print("a.shape =", a.shape)
print()
b = np.zeros((2, 3), dtype=np.float32)
print("b =", b)
print("b.shape =", b.shape)
Explanation: Now we'll learn how to use these libraries to create deep learning functions (later, in the full tutorial, we'll use this to train a handwriting recognizer).
Here are two ways to create a numpy array:
End of explanation
# EXERCISE
# 1. an array scalar containing the integer 5
# 2. a (10, 20) array of zeros
# 3. a (3, 3) array of different numbers (hint: use a list-of-lists)
Explanation: A np.array is a multidimensional array - a very flexible thing, it can be:
- 0-dimensional (a number, like 5)
- 1-dimensional (a vector, like a above)
- 2-dimensional (a matrix, like b above)
- N-dimensional (...)
It can also contain either whole numbers (np.int32) or real numbers (np.float32).
OK, I've done a bit much now - time for you...
Exercise - create the following numpy arrays, and print out the shape:
End of explanation
x = np.array([[1, 2, 3], [4, 5, 6]], dtype=np.int32)
print("x =\n%s" % x)
print()
# Indexing
print("x[0, 1] =", x[0, 1]) # 0th row, 1st column
print("x[1, 1] =", x[1, 1]) # 1st row, 1st column
print()
# Slicing
print("x[0, :] =", x[0, :]) # 0th row, all columns
print("x[:, 2] =", x[:, 2]) # 2nd column, all rows
print("x[1, :] =", x[1, :]) # 1st row, all columns
print("x[1, 0:2] =", x[1, 0:2]) # 1st row, first two columns
print()
# Other numpy functions (there are very many more...)
print("np.argmax(x[0, :]) =", np.argmax(x[0, :])) # Find the index of the maximum element in the 0th row
Explanation: Now we just need a few ways of working with these arrays - here are some examples of things that you can do:
End of explanation
M = np.arange(900, dtype=np.float32).reshape(45, 20)
print(M.shape)
# EXERCISE
# 1. print out row number 0 (hint, it should be shape (20,))
# 2. print out row number 34
# 3. select column 15, print out the shape
# 4. select rows 30-40 inclusive, columns 5-8 inclusive, print out the shape (hint: should be (11, 4))
Explanation: I won't explain all of this in detail, but have a play around with arrays, see what you can do with the above operations.
Exercise - try to use your numpy operations to find the following with M:
End of explanation
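For reference, one possible set of answers for M (remember that a slice like 30:41 stops just before 41):
print(M[0, :])                 # 1. row 0 - shape (20,)
print(M[34, :])                # 2. row 34
print(M[:, 15].shape)          # 3. column 15 - shape (45,)
print(M[30:41, 5:9].shape)     # 4. rows 30-40, columns 5-8 - shape (11, 4)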
a = C.Variable(np.zeros((10, 20), dtype=np.float32))
print("a.data.shape =", a.data.shape)
transformation = C.links.Linear(20, 30)
b = transformation(a)
print("b.data.shape =", b.data.shape)
c = C.functions.tanh(b)
print("c.data.shape =", c.data.shape)
Explanation: 2. Chainer
We'll use numpy to get data in & out of Chainer, which is our deep learning library, but Chainer will do most of the data processing.
Here is how you get some data into Chainer, use a linear operation to change its shape, and get the result back out again:
End of explanation
# EXERCISE
# 1. Create an array, shape (2, 3) of various float numbers, put it in a variable
a = None # your array here
# 2. Print out tanh(a) (for the whole array)
# 3. Create a linear link of shape (3, 5) - this means it takes (N, 3) and produces (N, 5)
mylink = None # your link here
# 4. Use your link to transform `a`, then take the tanh, check the shape of the result
# 5. Uncomment the following; what happens when you re-run the code?
# print("W =", mylink.W.data)
Explanation: This may not seem particularly special, but this is the heart of a deep learning function. Take an input array, make various transformations that mess around with the shape, and produce an output array.
Some concepts:
- A Variable holds an array - this is some data going through the function
- A Link contains some parameters (these start random), which process an input Variable, and produce an output Variable.
- A Function is a Link without any parameters (like sin, cos, tan, tanh, max... so many more...)
Exercise - use Chainer to calculate the following:
End of explanation
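For reference, one possible way to complete the Chainer exercise (the numbers in the array are arbitrary):
a = C.Variable(np.array([[0.1, 0.2, 0.3],
                         [0.4, 0.5, 0.6]], dtype=np.float32))   # 1. shape (2, 3)
print(C.functions.tanh(a).data)                                 # 2. tanh of the whole array
mylink = C.links.Linear(3, 5)                                   # 3. takes (N, 3), produces (N, 5)
result = C.functions.tanh(mylink(a))
print(result.data.shape)                                        # 4. should be (2, 5)
print("W =", mylink.W.data)                                     # 5. different every time the link is re-created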
log = dlt.Log()
for i in range(100):
# The first argument "loss" says which plot to put the value on
# The second argument "train" gives it a name on that plot
# The third argument is the y-value
log.add("loss", "train", i)
log.add("loss", "valid", 2 * i)
log.show()
Explanation: If you can do all of this, you're ready to create a deep learning function.
In the last step, you may have noticed something interesting - the parameters inside the link change every time it is re-created. This is because deep learning functions start off random! Random functions don't sound too useful, so later we're going to learn how to "teach" them to be useful functions.
3. Plotting curves
We've provided a very simple log plotting library, dlt.Log, demonstrated below:
End of explanation |
12,282 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Hypocycloid definition and animation
Deriving the parametric equations of a hypocycloid
On May 11 @fermatslibrary posted a gif file, https
Step1: We refer to the figure in the above cell to explain how we get the parameterization of the hypocycloid generated by a fixed point of a circle of center $O'_0$ and radius r, rolling without slipping along the circle
of center O and radius $R>r$.
Suppose that initially the hypocycloid generating point, $P$, is located at $(R,0)$.
After the small circle was rolling along the greater circle a length corresponding to an angle of measure, $t$, it reaches the point $P'$ on the circle $C(O'_t, r)$.
Rolling without slipping means that the length the arc $\stackrel{\frown}{PQ}$ of the greater circle equals the length of the arc $\stackrel{\frown}{P'Q}$ on the smaller one, i.e $Rt=r\omega$, where $\omega$ is the measure of the non-oriented angle $\widehat{P'O'_tQ}$ (i.e. we consider $\omega>0$) . Thus $\omega=(R/r)t$
The center $O'_t$ has the coordinates $x=(R-r)\cos(t), (R-r)\sin(t)$. The clockwise parameterization of the circle $C(O'_t,r)$ with respect to the coordinate system $x'O'_ty'$ is as follows
Step2: The arbitrary point $A$ on the rolling circle has, for t=0, the coordinates
Step3: Set the layout of the plot
Step4: Define the base circle
Step5: Prepare data for animation to be uploaded to Plotly cloud
Step6: Set data for each animation frame
Step7: Animate the generation of a hypocycloid with 3 cusps(i.e. $R/r=3$)
Step8: Hypocycloid with four cusps (astroid)
Step9: Degenerate hypocycloid (R/r=2) | Python Code:
from IPython.display import Image
Image(filename='generate-hypocycloid.png')
Explanation: Hypocycloid definition and animation
Deriving the parametric equations of a hypocycloid
On May 11 @fermatslibrary posted a gif file, https://twitter.com/fermatslibrary/status/862659602776805379, illustrating the motion of eight cocircular points. The Fermat's Library followers found it so fascinating that the tweet picked up more than 1000 likes and 800 retweets. Soon after I saw the gif I created a similar Python Plotly animation
although the tweet did not mention how it was generated. @plotlygraphs tweeted a link
to my Jupyter notebook presenting the animation code.
How did I manage to reproduce it so fast? Here I explain the secret:
At first sight you might think that the gif displays an illusory rectilinear motion of the eight points, but the motion is real. I noticed that the moving points lie on a circle rolling inside another circle, and I knew that a fixed point on a rolling circle describes a curve called a hypocycloid. In the particular case when the ratio of the two radii is 2 the hypocycloid degenerates to a diameter in the base (fixed) circle.
In this Jupyter notebook I deduce the parametric equations of a hypocycloid, animate its construction
and explain why when $R/r=2$ any point on the rolling circle runs a diameter in the base circle.
End of explanation
Image(filename='hypocycloid-2r.png')
Explanation: We refer to the figure in the above cell to explain how we get the parameterization of the hypocycloid generated by a fixed point of a circle of center $O'_0$ and radius r, rolling without slipping along the circle
of center O and radius $R>r$.
Suppose that initially the hypocycloid generating point, $P$, is located at $(R,0)$.
After the small circle has rolled along the greater circle through an arc corresponding to an angle of measure $t$, the generating point reaches the position $P'$ on the circle $C(O'_t, r)$.
Rolling without slipping means that the length of the arc $\stackrel{\frown}{PQ}$ of the greater circle equals the length of the arc $\stackrel{\frown}{P'Q}$ on the smaller one, i.e. $Rt=r\omega$, where $\omega$ is the measure of the non-oriented angle $\widehat{P'O'_tQ}$ (i.e. we consider $\omega>0$). Thus $\omega=(R/r)t$.
The center $O'_t$ has the coordinates $x=(R-r)\cos(t), y=(R-r)\sin(t)$. The clockwise parameterization of the circle $C(O'_t,r)$ with respect to the coordinate system $x'O'_ty'$ is as follows:
$$\begin{array}{llr}
x'(\tau)&=&r\cos(\tau)\
y'(\tau)&=&-r\sin(\tau),
\end{array}$$
$\tau\in[0,2\pi]$.
Hence the point $P'$ on the hypocycloid has the coordinates:
$x'=r\cos(\omega-t), y'=-r\sin(\omega-t)$, and with respect to $xOy$, the coordinates:
$x=(R-r)\cos(t)+r\cos(\omega-t), y=(R-r)\sin(t)-r\sin(\omega-t)$.
Replacing $\omega=(R/r)t$ we get the parameterization of the hypocycloid generated by the initial point $P$:
$$\begin{array}{lll}
x(t)&=&(R-r)\cos(t)+r\cos(t(R-r)/r)\
y(t)&=&(R-r)\sin(t)-r\sin(t(R-r)/r), \quad t\in[0,2\pi]
\end{array}$$
If $R/r=2$ the parametric equations of the corresponding hypocycloid are:
$$\begin{array}{lll}
x(t)&=&2r\cos(t)\
y(t)&=&0
\end{array}$$
i.e. the moving point $P$ runs the diameter $y=0$, from the position $(R=2r, 0)$ to $(-R,0)$ when $t\in[0,\pi]$,
and back to $(R,0)$, for $t\in[\pi, 2\pi]$.
What about the trajectory of any other point, $A$, on the rolling circle that at $t=0$ has the angular coordinate $\varphi$ with respect to the center $O'_0$?
We show that it is also a diameter in the base circle, referring to the figure in the next cell that is a particularization of
the above figure to the case $R=2r$.
End of explanation
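A quick numerical check of the parameterization is easy: for R=2r the y-coordinate should vanish identically, exactly as the degenerate case above predicts.
import numpy as np
R, r = 1.0, 0.5
t = np.linspace(0, 2 * np.pi, 7)
x = (R - r) * np.cos(t) + r * np.cos(t * (R - r) / r)
y = (R - r) * np.sin(t) - r * np.sin(t * (R - r) / r)
print(np.round(x, 3))   # runs along the diameter
print(np.round(y, 3))   # numerically zero everywhere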
import numpy as np
from numpy import pi, cos, sin
import copy
import plotly.plotly as py
from plotly.grid_objs import Grid, Column
import time
Explanation: The arbitrary point $A$ on the rolling circle has, for t=0, the coordinates:
$x=r+r\cos(\varphi), y=r\sin(\varphi)$.
The angle $\widehat{QO'_tP'}=\omega$ is in this case $2t$, and $\widehat{B'O'_tP'}=t$. Since $\widehat{A'O'_tP'}=\varphi$, we get that the position of the fixed point on the smaller circle, after rolling along an arc of length $r(2t-\varphi)$,
is $A'(x(t)=r\cos(t)+r\cos(t-\varphi), y(t)=r\sin(t)-r\sin(t-\varphi))$, with $\varphi$ constant, and $t$ variable in the interval $[\varphi, 2\pi+\varphi]$.
Let us show that $y(t)/x(t)=$constant for all $t$, i.e. the generating point of the hypocycloid lies on a segment of line (diameter in the base circle):
$$\displaystyle\frac{y(t)}{x(t)}=\frac{r\sin(t)-r\sin(t-\varphi)}{r\cos(t)+r\cos(t-\varphi)}=\left{\begin{array}{ll}\tan(\varphi/2)& \mbox{if}\:\: t=\varphi/2\
\displaystyle\frac{2\cos(t-\varphi/2)\sin(\varphi/2)}{2\cos(t-\varphi/2)\cos(\varphi/2)}=\tan(\varphi/2)& \mbox{if}\:\: t\neq\varphi/2 \end{array}\right.$$
Hence the @fermatslibrary animation, illustrated by a Python Plotly code in my Jupyter notebook, displays the motion of the eight points placed on the rolling
circle of radius $r=R/2$, along the corresponding diameters in the base circle.
Animating the hypocycloid generation
End of explanation
axis=dict(showline=False,
zeroline=False,
showgrid=False,
showticklabels=False,
range=[-1.1,1.1],
autorange=False,
title=''
)
layout=dict(title='',
font=dict(family='Balto'),
autosize=False,
width=600,
height=600,
showlegend=False,
xaxis=dict(axis),
yaxis=dict(axis),
hovermode='closest',
shapes=[],
updatemenus=[dict(type='buttons',
showactive=False,
y=1,
x=1.2,
xanchor='right',
yanchor='top',
pad=dict(l=10),
buttons=[dict(label='Play',
method='animate',
args=[None, dict(frame=dict(duration=90, redraw=False),
transition=dict(duration=0),
fromcurrent=True,
mode='immediate'
)]
)]
)]
)
Explanation: Set the layout of the plot:
End of explanation
layout['shapes'].append(dict(type= 'circle',
layer= 'below',
xref= 'x',
yref='y',
fillcolor= 'rgba(245,245,245, 0.95)',
x0= -1.005,
y0= -1.005,
x1= 1.005,
y1= 1.005,
line= dict(color= 'rgb(40,40,40)', width=2
)
)
)
def circle(C, rad):
#C=center, rad=radius
theta=np.linspace(0,1,100)
return C[0]+rad*cos(2*pi*theta), C[1]-rad*sin(2*pi*theta)
Explanation: Define the base circle:
End of explanation
def set_my_columns(R=1.0, ratio=3):
#R=the radius of base circle
#ratio=R/r, where r=is the radius of the rolling circle
r=R/float(ratio)
xrol, yrol=circle([R-r, 0], 0)
my_columns=[Column(xrol, 'xrol'), Column(yrol, 'yrol')]
my_columns.append(Column([R-r, R], 'xrad'))
my_columns.append(Column([0,0], 'yrad'))
my_columns.append(Column([R], 'xstart'))
my_columns.append(Column([0], 'ystart'))
a=R-r
b=(R-r)/float(r)
frames=[]
t=np.linspace(0,1,50)
xpts=[]
ypts=[]
for k in range(t.shape[0]):
X,Y=circle([a*cos(2*pi*t[k]), a*sin(2*pi*t[k])], r)
my_columns.append(Column(X, 'xrcirc{}'.format(k+1)))
my_columns.append(Column(Y, 'yrcirc{}'.format(k+1)))
#The generator point has the coordinates(xp,yp)
xp=a*cos(2*pi*t[k])+r*cos(2*pi*b*t[k])
yp=a*sin(2*pi*t[k])-r*sin(2*pi*b*t[k])
xpts.append(xp)
ypts.append(yp)
my_columns.append(Column([a*cos(2*pi*t[k]), xp], 'xrad{}'.format(k+1)))
my_columns.append(Column([a*sin(2*pi*t[k]), yp], 'yrad{}'.format(k+1)))
my_columns.append(Column(copy.deepcopy(xpts), 'xpt{}'.format(k+1)))
my_columns.append(Column(copy.deepcopy(ypts), 'ypt{}'.format(k+1)))
return t, Grid(my_columns)
def set_data(grid):
return [dict(xsrc=grid.get_column_reference('xrol'),#rolling circle
ysrc= grid.get_column_reference('yrol'),
mode='lines',
line=dict(width=2, color='blue'),
name='',
),
dict(xsrc=grid.get_column_reference('xrad'),#radius in the rolling circle
ysrc= grid.get_column_reference('yrad'),
mode='markers+lines',
line=dict(width=1.5, color='blue'),
marker=dict(size=4, color='blue'),
name=''),
dict(xsrc=grid.get_column_reference('xstart'),#starting point on the hypocycloid
ysrc= grid.get_column_reference('ystart'),
mode='marker+lines',
line=dict(width=2, color='red', shape='spline'),
name='')
]
Explanation: Prepare data for animation to be uploaded to Plotly cloud:
End of explanation
def set_frames(t, grid):
return [dict(data=[dict(xsrc=grid.get_column_reference('xrcirc{}'.format(k+1)),#update rolling circ position
ysrc=grid.get_column_reference('yrcirc{}'.format(k+1))
),
dict(xsrc=grid.get_column_reference('xrad{}'.format(k+1)),#update the radius
ysrc=grid.get_column_reference('yrad{}'.format(k+1))#of generating point
),
dict(xsrc=grid.get_column_reference('xpt{}'.format(k+1)),#update hypocycloid arc
ysrc=grid.get_column_reference('ypt{}'.format(k+1))
)
],
traces=[0,1,2]) for k in range(t.shape[0])
]
Explanation: Set data for each animation frame:
End of explanation
py.sign_in('empet', 'my_api_key')#access my Plotly account
t, grid=set_my_columns(R=1, ratio=3)
py.grid_ops.upload(grid, 'animdata-hypo3'+str(time.time()), auto_open=False)#upload data to Plotly cloud
data1=set_data(grid)
frames1=set_frames(t, grid)
title='Hypocycloid with '+str(3)+' cusps, '+'<br>generated by a fixed point of a circle rolling inside another circle; R/r=3'
layout.update(title=title)
fig1=dict(data=data1, layout=layout, frames=frames1)
py.icreate_animations(fig1, filename='anim-hypocycl3'+str(time.time()))
Explanation: Animate the generation of a hypocycloid with 3 cusps(i.e. $R/r=3$):
End of explanation
t, grid=set_my_columns(R=1, ratio=4)
py.grid_ops.upload(grid, 'animdata-hypo4'+str(time.time()), auto_open=False)#upload data to Plotly cloud
data2=set_data(grid)
frames2=set_frames(t, grid)
title2='Hypocycloid with '+str(4)+' cusps, '+'<br>generated by a fixed point of a circle rolling inside another circle; R/r=4'
layout.update(title=title2)
fig2=dict(data=data2, layout=layout, frames=frames2)
py.icreate_animations(fig2, filename='anim-hypocycl4'+str(time.time()))
Explanation: Hypocycloid with four cusps (astroid):
End of explanation
t, grid=set_my_columns(R=1, ratio=2)
py.grid_ops.upload(grid, 'animdata-hypo2'+str(time.time()), auto_open=False)#upload data to Plotly cloud
data3=set_data(grid)
frames3=set_frames(t, grid)
title3='Degenerate Hypocycloid; R/r=2'
layout.update(title=title3)
fig3=dict(data=data3, layout=layout, frames=frames3)
py.icreate_animations(fig3, filename='anim-hypocycl2'+str(time.time()))
from IPython.core.display import HTML
def css_styling():
styles = open("./custom.css", "r").read()
return HTML(styles)
css_styling()
Explanation: Degenerate hypocycloid (R/r=2):
End of explanation |
12,283 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Rock Paper Scissors with Python
Here's a couple of versions. The first is the most verbose, most explicit version. It plays best two of three.
Step1: This version removes the explicit ties and just says if player1 == player2 it's a tie. Less code, same result.
Step2: This version takes the explicit version and plays a million games of Rock Paper Scissors. The result is ... pretty equal. | Python Code:
import random
choices = ["Rock", "Paper", "Scissors"]
def choice():
selection = random.choice(choices)
return selection
def winner(player1, player2):
if player1 == "Rock" and player2 == "Rock":
result = "Tie"
elif player1 == "Rock" and player2 == "Paper":
result = "Player 2 wins"
elif player1 == "Rock" and player2 == "Scissors":
result = "Player 1 wins"
elif player1 == "Paper" and player2 == "Paper":
result = "Tie"
elif player1 == "Paper" and player2 == "Rock":
result = "Player 1 wins"
elif player1 == "Paper" and player2 == "Scissors":
result = "Player 2 wins"
elif player1 == "Scissors" and player2 == "Scissors":
result = "Tie"
elif player1 == "Scissors" and player2 == "Rock":
result = "Player 2 wins"
elif player1 == "Scissors" and player2 == "Paper":
result = "Player 1 wins"
return result
player_1_wins = 0
player_2_wins = 0
while player_1_wins != 2 and player_2_wins != 2:
player_1 = choice()
player_2 = choice()
winr = winner(player_1, player_2)
print("Player 1 plays %s" % player_1)
print("Player 2 plays %s" % player_2)
print(winr)
# Here's where we'll overwrite the counts of the wins to determine best two of three
if winr == "Player 1 wins":
player_1_wins = player_1_wins + 1
print("Player 1 total wins: %i" % player_1_wins)
elif winr == "Player 2 wins":
player_2_wins = player_2_wins + 1
print("Player 2 total wins: %i" % player_2_wins)
else:
pass
Explanation: Rock Paper Scissors with Python
Here are a couple of versions. The first is the most verbose, most explicit version. It plays best two of three.
End of explanation
import random
choices = ["Rock", "Paper", "Scissors"]
def choice():
selection = random.choice(choices)
return selection
def winner(player1, player2):
if player1 == player2:
result = "Tie"
elif player1 == "Rock" and player2 == "Paper":
result = "Player 2 wins"
elif player1 == "Rock" and player2 == "Scissors":
result = "Player 1 wins"
elif player1 == "Paper" and player2 == "Rock":
result = "Player 1 wins"
elif player1 == "Paper" and player2 == "Scissors":
result = "Player 2 wins"
elif player1 == "Scissors" and player2 == "Rock":
result = "Player 2 wins"
elif player1 == "Scissors" and player2 == "Paper":
result = "Player 1 wins"
return result
player_1_wins = 0
player_2_wins = 0
while player_1_wins != 2 and player_2_wins != 2:
player_1 = choice()
player_2 = choice()
winr = winner(player_1, player_2)
print("Player 1 plays %s" % player_1)
print("Player 2 plays %s" % player_2)
print(winr)
# Here's where we'll overwrite the counts of the wins to determine best two of three
if winr == "Player 1 wins":
player_1_wins = player_1_wins + 1
print("Player 1 total wins: %i" % player_1_wins)
elif winr == "Player 2 wins":
player_2_wins = player_2_wins + 1
print("Player 2 total wins: %i" % player_2_wins)
else:
pass
Explanation: This version removes the explicit ties and just says if player1 == player2 it's a tie. Less code, same result.
End of explanation
import random
choices = ["Rock", "Paper", "Scissors"]
def choice():
selection = random.choice(choices)
return selection
def winner(player1, player2):
if player1 == "Rock" and player2 == "Rock":
result = "Tie"
elif player1 == "Rock" and player2 == "Paper":
result = "Player 2 wins"
elif player1 == "Rock" and player2 == "Scissors":
result = "Player 1 wins"
elif player1 == "Paper" and player2 == "Paper":
result = "Tie"
elif player1 == "Paper" and player2 == "Rock":
result = "Player 1 wins"
elif player1 == "Paper" and player2 == "Scissors":
result = "Player 2 wins"
elif player1 == "Scissors" and player2 == "Scissors":
result = "Tie"
elif player1 == "Scissors" and player2 == "Rock":
result = "Player 2 wins"
elif player1 == "Scissors" and player2 == "Paper":
result = "Player 1 wins"
return result
player_1_wins = 0
player_2_wins = 0
ties = 0
for i in range(1000000):
player_1 = choice()
player_2 = "Scissors"
winr = winner(player_1, player_2)
#print("Player 1 plays %s" % player_1)
#print("Player 2 plays %s" % player_2)
#print(winr)
# Here's where we'll overwrite the counts of the wins to determine best two of three
if winr == "Player 1 wins":
player_1_wins = player_1_wins + 1
#print("Player 1 total wins: %i" % player_1_wins)
elif winr == "Player 2 wins":
player_2_wins = player_2_wins + 1
#print("Player 2 total wins: %i" % player_2_wins)
else:
ties = ties + 1
print("Player 1 won %s games. Player 2 won %s. There were %s ties." % (player_1_wins, player_2_wins, ties))
Explanation: This version takes the explicit version and plays a million games. In this run player 2 is pinned to Scissors while player 1 picks uniformly at random, so wins, losses, and ties still come out roughly equal (about a third each).
End of explanation |
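# A further tightening, just a sketch of my own (not one of the versions above):
# encode what beats what in a dictionary so the winner function needs no chain of
# explicit comparisons.
import random

choices = ["Rock", "Paper", "Scissors"]
beats = {"Rock": "Scissors", "Paper": "Rock", "Scissors": "Paper"}  # key beats value

def winner(player1, player2):
    if player1 == player2:
        return "Tie"
    return "Player 1 wins" if beats[player1] == player2 else "Player 2 wins"

player_1 = random.choice(choices)
player_2 = random.choice(choices)
print("Player 1 plays %s" % player_1)
print("Player 2 plays %s" % player_2)
print(winner(player_1, player_2))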
12,284 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
wk1.4
warm-up
Instructions
Step1: Testing membership
Step2: Subsets and supersets
Step3: Removing items
Step4: Iterating over sets
Big takeaway
Step5: Set operations
Intersection
Any element which is in both $S_1$ and $S_2$ will appear in their intersection.
Step6: Union
The union is the merger of two sets. Any element in $S_1$ or $S_2$ will appear in their union.
Step7: Symmetric difference (xor)
the set of elements which are in one of either set, but not in both.
Step8: Set Difference
Elements in $S_1$ but not in $S_2$ | Python Code:
# How to make a set
a = {1, 2, 3}
type(a)
# Getting a set from a list
b = set([1, 2, 3])
a == b
# How to make a frozen set
a = frozenset({1, 2, 3})
# Getting a set from a list
b = frozenset([1, 2, 3])
# Getting a set from a string
set("obtuse")
# Getting a set from a dictionary
c = set({'a':1, 'b':2})
type(c)
# Getting a set from a tuple
c = set(('a','b'))
type(c)
# Sets do not contain duplicates
a = {1, 2, 2, 3, 3, 3, 3}
a
# Sets do not support indexing (because they don't preserve order)
a[2]
# Sets cannot be used for dictionary keys because they are mutable but frozensets can be used for dictionary keys.
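# For example (a quick sketch): a frozenset works as a dictionary key, a plain set does not.
fs = frozenset({1, 2})
d = {fs: "ok"}          # fine, frozensets are hashable
# d = {{1, 2}: "no"}    # would raise TypeError: unhashable type: 'set'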
# Adding elements to a set
s = set([12, 26, 54])
s.add(32)
s # If we try to add 32 again, nothing will happen
# Updating a set using an iterable
s.update([26, 12, 9, 14]) # once again, note that adding duplicates has no effect.
s
# making copies of sets
s2 = s.copy()
Explanation: wk1.4
warm-up
Instructions: For each of the following problems, fill out the answer underneath and submit the finished quiz to me via personal slack message. You may consult your notes but please do not use the internet.
assign the number 8 to a variable eight.
set b equal to eight.
print b.
Write a boolean expression that will return true if x is 'a' or 'b' and false otherwise.
Write a boolean expression that returns true if and only if x is greater than ten and x is odd.
write a function that takes a parameter, n, and then returns n (unchanged).
write a function that takes a string, str_, and prints the string three times (once per line).
Write a program to prompt the user for hours and rate per hour to compute gross pay.
Enter Hours: 35
Enter Rate: 2.75
Pay: 96.25
given a str1 = "Hello " and a str2 = "World", how can we concatenate (join together) str1 to str2?
given a str1 = "Hello", how can we index str1 to get the 'o'? Give two different ways.
given a str1 = "Hi", what operation can we do to the string to output "HiHiHiHi"?
make a list, lst, containing the numbers 0 through 10.
append the string 'hi' to the list
remove the 4 from the lst
how can you check if 5 is in the lst (your expression should return True if 5 is in the lst, and False otherwise)
write a loop that prints each element from 0 through 9
write a loop that prints each element from your lst.
write a loop that prints out the element multiplied by two for each element from 0 through 9.
write a loop that will count from 0 to infinity.
write a statement that checks if a variable var is empty.
make a tuple containing a single element 'a'
make a tuple containing two elements, 'a' and 'b'
given a tuple containing 'Dicaprio' and 43, unpack the tuple with the variables name and age.
make an empty dictionary, dct.
add the key value pairs 'one'/1, 'two'/2, 'three'/3, 'four'/4
change the value of three to 'tres'
delete the key value pair 'two'/2.
write the following loops over dct:
a loop that gets the keys
a loop that gets the values
a loop that prints the key value pairs (not tuple)
a loop that prints tuples of the key value pairs.
why might we use a dictionary over a list of tuples?
Give a definition of the following:
mutability/immutability
homogeneous/heterogenous datatypes
overflow
abstraction
modularization
For each of the following datatypes, write M for mutable or I for immutable, HO for homogeneous or HE for heterogenous:
ex. blub: MHO (note blub is not a datatype we will be going over in this class)
string
list
tuple
dictionary
what is the difference between printing output from a function vs. returning output from a function?
what is a variable?
what is the difference between aliasing and copying? What type of datatypes does aliasing apply to? Why do we prefer to copy?
Sets and frozen sets
A big difference: sets are mutable, frozen sets are not
End of explanation
32 in s
55 in s
Explanation: Testing membership
End of explanation
s.issubset(set([32, 8, 9, 12, 14, -4, 54, 26, 19]))
s.issuperset(set([9, 12]))
# Note that subset and superset testing works on other iterables
s.issuperset([32, 9])
# We can also use <= and >= respectively for subset and superset testing
set([4, 5, 7]) <= set([4, 5, 7, 9])
set([9, 12, 15]) >= set([9, 12])
Explanation: Subsets and supersets
End of explanation
s = set([1,2,3,4,5,6])
s.pop()
s.remove(3)
s.remove(9) # Removing an item that isn't in the set causes an error
s.discard(9) # discard is the same as remove but doesn't throw an error
s.clear() # removes everything
s
Explanation: Removing items
End of explanation
s = set("blerg")
for char in s:
print(char)
Explanation: Iterating over sets
Big takeaway: you can do it but good luck guessing the order
End of explanation
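# If a predictable order matters, one option (a small sketch, not part of the lesson above)
# is to sort the set into a list first.
s = set("blerg")
for char in sorted(s):   # sorted() returns a list, so the order is deterministic
    print(char)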
s1 = set([4, 6, 9])
s2 = set([1, 6, 8])
s1.intersection(s2)
s1 & s2
s1.intersection_update(s2) # updates s1 with the intersection of s1 and s2
s1
Explanation: Set operations
Intersection
Any element which is in both $S_1$ and $S_2$ will appear in their intersection.
End of explanation
s1 = set([4, 6, 9])
s2 = set([1, 6, 8])
s1.union(s2)
s1 | s2
# To update using union, simply use update
Explanation: Union
The union is the merger of two sets. Any element in $S_1$ or $S_2$ will appear in their union.
End of explanation
s1 = {8, 1, 6, 5, 3}
print(s1)
s2.update([7])
print(s2)
s1.symmetric_difference(s2)
(s1 | s2) - (s1 & s2) == s1 ^ s2
s1 ^ s2
s1.symmetric_difference_update(s2)
s1
Explanation: Symmetric difference (xor)
the set of elements which are in one of either set, but not in both.
End of explanation
s1 = set([4, 6, 9])
s2 = set([1, 6, 8])
s1.difference(s2)
s1 - s2
s1.difference_update(s2)
s1
Explanation: Set Difference
Elements in $S_1$ but not in $S_2$
End of explanation |
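# Closing note (my addition): each of the four operations above also has an in-place
# "update" form with an operator shorthand.
s1 = {4, 6, 9}
s2 = {1, 6, 8}
s1 -= s2            # same as s1.difference_update(s2)
print(s1)
s1 |= s2            # same as s1.update(s2)
print(s1)
s1 &= {6, 9, 42}    # same as s1.intersection_update({6, 9, 42})
print(s1)
s1 ^= {6, 7}        # same as s1.symmetric_difference_update({6, 7})
print(s1)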
12,285 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
้
่ฏป็ฌ่ฎฐ
ไฝ่
๏ผๆน่ทๆ
Email
Step1: ndarray ๆฏ ๅๆๆฐๆฎๅค็ปดๅฎนๅจ๏ผthat is to say, ๆๆๅ
็ด ๅฟ
้กปๆฏๅ็ฑปๅ็ใ
ๆฏไธชๆฐ็ป้ฝๆไธไธช shape ๏ผไธไธช่กจ็คบๅ็ปดๅบฆๅคงๅฐ็ๅ
็ฅ๏ผๅไธไธช dtype ๏ผไธไธช็จไบ่ฏดๆๆฐ็ปๆฐๆฎ็ฑปๅ็ๅฏน่ฑก๏ผ๏ผ
Step2: ่ฝ็ถๅคงๅคๆฐๆฐๆฎๅๆๅทฅไฝไธ้่ฆๆทฑๅ
ฅ็่งฃNumpy๏ผไฝๆฏ็ฒพ้้ขๅๆฐ็ป็็ผ็จๅๆ็ปดๆนๅผๆฏๆไธบ Python ็งๅญฆ่ฎก็ฎ่พพไบบ็ไธๅคงๆญฅ้ชคใ
ๆณจๆ๏ผ็ฌฌไธ็็ฟป่ฏ็ๆฌไธญๆไธชๆนๆณจ๏ผ่ฏดโๆฌไนฆไธญ็ๆฐ็ปใNumpyๆฐ็ปใndarray ๅบๆฌๆ็้ฝๆฏๅไธๆ ทไธ่ฅฟ๏ผๅณ ndarray ๅฏน่ฑกโ
ๅๅปบ ndarray
ๅๅปบๆฐ็ปๆ็ฎๅ็ๅๆณๅฐฑๆฏไฝฟ็จ array ๅฝๆฐใๅฎๆฅๅไธๅๅบๅ่ก็ๅฏน่ฑก๏ผๅ
ๆฌๅ
ถไปๆฐ็ป๏ผ๏ผ็ถๅไบง็ไธไธชๆฐ็ๅซๆไผ ๅ
ฅๆฐๆฎ็ NumPy ๆฐ็ปใไปฅๅ่กจ่ฝฌๆขไธบๆฐ็ปๆนๅผไธบไพ๏ผ
Step3: ๅตๅฅๅบๅ๏ผๆฏๅฆ็ฑไธ็ป็ญ้ฟๅ่กจ็ปๆ็ๅ่กจ๏ผ๏ผๅฐไผ่ขซ่ฝฌๆขไธบไธไธชๅค็ปดๆฐ็ป๏ผ
Step4: ้ค้ๆพ็คบ่ฏดๆ๏ผnp.array ไผๅฐ่ฏไธบๆฐๅปบ็่ฟไธชๆฐ็ปๆจๆญๅบไธไธช่พไธบๅ้็ๆฐๆฎ็ฑปๅใๆฐๆฎ็ฑปๅไฟๅญๅจไธไธช็นๆฎ็ dtype ๅฏน่ฑกไธญใๆฏๅฆ่ฏด๏ผๅจไธ้ข็ไธคไธชexamplesไธญ๏ผๆไปฌๆ
Step5: ้ค np.array ไนๅค๏ผ่ฟๆไธไบๅฝๆฐๅฏไปฅๆฐๅปบๆฐ็ปใๆฏๅฆ๏ผzeros ๅ ones ๅๅซๅฏๅๅปบๆๅฎ้ฟๅบฆๆๅฝข็ถ็ๅ
จ 0 ๅ ๅ
จ 1 ๆฐ็ปใempty ๅฏๅๅปบไธไธชๆฒกๆไปปไฝๅ
ทไฝๅผ็ๆฐ็ปใ่ฆ็จ่ฟไบๆนๆณๅๅปบๅค็ปดๆฐ็ป๏ผๅช้่ฆไผ ๅ
ฅไธไธช่กจ็คบๅฝข็ถ็ๅ
็ฅๅณๅฏ๏ผ
Step6: ่ญฆๅ ่ฎคไธบ np.emptry ไผ่ฟๅๅ
จ 0 ๆฐ็ป็ๆณๆณๆฏไธๅฎๅ
จ็ใๅพๅคๆ
ๅตไธ๏ผๅฆไธๆ็คบ๏ผ๏ผๅฎ่ฟๅ็้ฝๆฏไธไบๆชๅๅงๅ็ๅๅพๅผใ
arange ๆฏ Python ๅ
็ฝฎๅฝๆฐrange ็ๆฐ็ป็๏ผ
Step7: ไธ่กจๅๅบไบไธไบๆฐ็ปๅๅปบๅฝๆฐใ็ฑไบNumpyๅ
ณๆณจ็ๆฏๆฐๅผ่ฎก็ฎ๏ผๅ ๆญค๏ผๅฆๆๆฒกๆ็นๅซ็ๅถๅฎ๏ผๆฐๆฎ็ฑปๅไธ่ฌ้ฝๆฏ float64ใ
|ๅฝๆฐ | ่ฏดๆ |
|-------------|---------------|
| array | ๅฐ่พๅ
ฅๆฐๆฎ(ๅ่กจใๅ
็ฅใๆฐๅญๆ่
ๅ
ถไปๆฐๆฎ็ฑปๅ)่ฝฌๆขไธบ ndarrayใ่ฆไนๆจๆญๅบ dtype๏ผ่ฆไนๆพ็คบๅฐๆๅฎdtypeใ้ป่ฎค็ดๆฅๅคๅถ่พๅ
ฅๆฐๆฎ|
| asarray | ๅฐ่พๅ
ฅ่ฝฌไธบ ndarray๏ผๅฆๆ่พๅ
ฅๆฌ่บซๅฐฑๆฏไธไธชndarrayๅฐฑไธ่ฟ่กๅคๅถ|
| arange | ็ฑปไผผไบpythonๅ
็ฝฎ็range,ไฝๆฏ่ฟๅ็ๆฏไธไธชndarray,่ไธๆฏไธไธชๅ่กจ|
| onesใones_like | ๆ นๆฎๆๅฎ็ๅฝข็ถๅdtypeๅๅปบไธไธชๅ
จ1ๆฐ็ปใones_likeไปฅๅฆไธไธชๆฐ็ปไธบๅๆฐ๏ผๅนถๆ นๆฎๅ
ถๅฝข็ถๅdtypeๅๅปบไธไธชๅ
จ1ๆฐ็ป|
|zerosใzeros_like | ็ฑปไผผไธ่ฟฐๅฝไปค๏ผๅชๆฏๆนไธบๅ
จ0ๆฐ็ป|
|emptyใempty_like|ๅๅปบๆฐๆฐ็ป๏ผๅชๅ้
ๅ
ๅญ็ฉบ้ดไฝไธๅกซๅ
ไปปไฝๅผ|
|eyeใidentity|ๅๅปบไธไธชๆญฃๆน็N * N ๅไฝ็ฉ้ต๏ผๅฏน่ง็บฟไธบ1๏ผๅ
ถไฝไธบ0๏ผ|
Step8: ndarray ็ๆฐๆฎ็ฑปๅ
Recently I just moved from Shanghai to Kyoto, so I had stopped taking notes for almost two weeks.
From now on, I will continue writing these notes. Let's note~
YWFANG @Kyoto University November, 2017
dtype()
dtype ๆฏไธไธช็นๆฎ็ๅฏน่ฑก๏ผๅฎๅซๆndarrayๅฐไธๅๅ
ๅญ่งฃ้ไธบ็นๅฎๆฐๆฎ็ฑปๅ็ๆ้ไฟกๆฏ๏ผ
Step9: dtype ๆฏ NumPy ๅผบๅคง็ๅๅ ไนไธใๅจๅคๆฐๆ
ๅตไธ๏ผๅฎไปฌ็ดๆฅๆ ๅฐๅฐ็ธๅบ็ๆบๅจ่กจ็คบ๏ผ่ฟไฝฟๅพโ่ฏปๅ็ฃ็ไธ็ไบ่ฟๅถๆฐๆฎๆตโไปฅๅโ้ๆไฝ็บง่ฏญ่จ๏ผๅฆfortran"็ญๅทฅไฝๅๅพ็ฎๅใ
ไธ่กจ่ฎฐๅฝไบNumPyๆๆฏๆ็ๅ
จ้จๆฐๆฎ็ฑปๅ๏ผ๏ผ่ฎฐไธไฝๆฒกๆๅ
ณ็ณป๏ผๅๅผๅง่ฎฐไธไฝไนๅพๆญฃๅธธ๏ผ
|็ฑปๅ|็ฑปๅไปฃ็ |่ฏดๆ
|-------------|---------------|
|int8ใunit8| i1ใu1| ๆ็ฌฆๅทๅๆ ็ฌฆๅท็8ไฝ๏ผ1ไธชๅญ่๏ผๆดๅ|
|int16ใunit16| i2ใu2| ๆ็ฌฆๅทๅๆ ็ฌฆๅท็16ไฝ๏ผ2ๅญ่๏ผๆดๅ|
|int32ใunit32| i4ใu4| ใใใ32ไฝใใใ|
|int64ใunit64| i8ใu8|ใใใ64ไฝใใใ|
| float16| f2| ๅ็ฒพๅบฆๆตฎ็นๆฐ|
| flaot32| f4ๆ่
f| ๆ ๅๅ็ฒพๅบฆๆตฎ็นๆฐ๏ผไธC็floatๅ
ผๅฎน|
| float64| f8ๆd | ๆ ๅๅ็ฒพๅบฆๆตฎ็นๆฐ๏ผไธC็doubleๅPython็floatๅฏน่ฑกๅ
ผๅฎน|
|float128| f16ๆ่
g| ๆฉๅฑ็ฒพๅบฆๆตฎ็นๆฐ|
|complex64ใcomplex128|c8ใc16| ๅๅซ็จไธคไธช32ไฝใ64ไฝๆ128ไฝๆตฎ็นๆฐ่กจ็คบ็ๅคๆฐ|
|complex256|c32|ๅคๆฐ|
| bool|๏ผ|ๅญๅจTrue ๆFlase ๅผ็ๅธๅฐ็ฑปๅ|
|object | O | Pythonๅค่ฑก็ฑปๅ|
| string_|S|ๅบๅฎ้ฟๅบฆ็ๅญ็ฌฆไธฒ็ฑปๅ๏ผๆฏไธชๅญ็ฌฆ1ไธชๅญ่๏ผใไพๅฆ๏ผ่ฆๅๅปบไธไธช้ฟๅบฆไฝ10็ๅญ็ฌฆไธฒ๏ผๅบไฝฟ็จS10|
|unicode|U|ๅบๅฎ้ฟๅบฆ็unicode็ฑปๅ๏ผๅญ่ๆฐ็ฑๅนณๅฐๅณๅฎ๏ผใ่ทๅญ็ฌฆไธฒๅฎไนๆนๅผไธๆ ท๏ผๅฆU10๏ผ|
ๆไปฌๅฏไปฅ้่ฟ ndarray ็ astype ๆนๆณๆพ็คบๅฐ่ฝฌๆขๅ
ถdtype๏ผ
Step10: In the above example, an integer array was converted into a floating array.
In the following example, I will show you how to convert a float array to an int array. You will see that, if I cast some floating point numbers to be of interger type, the decimal part will be truncated.
Step11: If you have an array of strings representing numbers, you can also use 'astype' to convert them into numberic form
Step12: In addition, we can use another arrayโs dtype attribute
Step13: ็นๆฐ๏ผๆฏๅฆfloat64ๅfloat32๏ผๅช่ฝ่กจ็คบ่ฟไผผ็ๅๆฐๅผใๅ ๆญคๅคๆ่ฎก็ฎไธญ๏ผ็ฑไบๅฏ่ฝ็งฏ็ดฏ็ๆตฎ็น้่ฏฏ๏ผๆฏ่พๆตฎ็นๆฐๅญๅคงๅฐๆถ๏ผๅช่ฝๅจไธๅฎ็ๅฐๆฐไฝๆฐไปฅๅ
ๆๆใ
ๆฐ็ปๅๆ ้ไน้ด็่ฟ็ฎ
ๆฐๆฎ็ไพฟๅฉไนๅคๅจไบๅณไฝฟๆไปฌไธ็จloop๏ผไนๅฏไปฅๅฏนๆน้ๆฐๆฎ่ฟ่ก่ฟ็ฎๅๆไฝใ่ฟ็งๆนๅผ้ๅธธๅซๅโ็ข้ๅโ๏ผvectorization๏ผใๅคงๅฐ็ธ็ญ็ๆฐ็ปไน้ด็ไปปไฝ็ฎๆฐ่ฟ็ฎ้ฝไผๅฐ่ฟ็ฎๅบ็จๅฐๅ
็ด ็บง๏ผ
Step14: ๅๆ ทๅฐ๏ผๅฝๆฐ็ปไธๆ ้่ฟ่ก็ฎๆฐ่ฟ็ฎๆถ๏ผไนไผ้ๅๅฐๅไธชๅ
็ด
Step15: ไธๅๅคงๅฐ็ๆฐ็ปไน้ด็่ฟ็ฎๅซๅๅนฟๆญ broadcasting๏ผๆไปฌไนๅ่ฟไผๅจ็ฌฌ12็ซ ่ฟ่กๆทฑๅบฆ็ๅญฆไน ใ
ๅบๆฌ็็ดขๅผๅๅ็
NumPy ๆฐ็ป็็ดขๅผๆฏไธไธชๅ
ๅฎนไธฐๅฏ็ไธป้ข๏ผๅ ไธบ้ๅๆฐๆฎๅญ้ๆ่
ๅไธชๅ
็ด ็ๆนๅผ้ๅธธๅคใไธ็ปดๆฐ็ปๅพ็ฎๅใไป่กจ้ข็๏ผๅฎไปฌ่ทpythonๅ่กจ็ๅ่ฝๅทฎไธๅคใ
Step16: ๅฆไธ้ขไพๅญไธญ็ๅฐ็้ฃ็ง๏ผๅฝๆไปฌๅฐๆ ้่ตๅผ็ปไธไธชๅ็ๆถ๏ผarr[5
Step17: ็ฑไบpythonๅธธ็จๆฅๅค็ๅคงๆฐๆฎ๏ผ่ฟ็ง้่ฟๆไฝๆฐ็ป่งๅพๅฐฑๅฏไปฅๆนๅๆบๆฐ็ป็ๆนๅผ๏ผๅฏไปฅ้ฟๅ
ๅฏนๆฐๆฎ็ๅๅคๅคๅถๆๅธฆๆฅ็ๆง่ฝๅๅ
ๅญ้ฎ้ขใ
ๅฆๆๆไปฌๆณ่ฆๅพๅฐ็ๆฏไธไธชๆฐ็ปๅ็็ๅฏๆฌ๏ผ่ไธๆฏ่งๅพ๏ผๅฐฑ้่ฆๆพๅผๅฐ่ฟ่กๅคๅถๆไฝ๏ผไพๅฆ
Step18: ๅฏนไบ้ซ็ปดๆฐ็ป๏ผ่ฝๅ็ไบๆ
ๆดๅคใๅจไธไธชไบ็ปดๆฐ็ปไธญ๏ผๅไธช็ดขๅผไฝ็ฝฎไธ็ๅ
็ด ไธๅๆฏๆ ้๏ผ่ๆฏไธ็ปดๆฐ็ป๏ผ
Step19: ๅ ๆญคๅฏไปฅๅฏนๅไธชๅ
็ด ่ฟ่ก้ๅฝ็่ฎฟ้ฎ๏ผไธ่ฟ่ฟๆ ท้่ฆๅ็ไบๆ
ๆ็นๅคใๆไปฌๅฏไปฅไผ ๅ
ฅไธไธชไปฅ้ๅท้ๅผ็็ดขๅผๅ่กจๆฅ้ๅบๅไธชๅ
็ด ใไนๅฐฑๆฏ่ฏด๏ผไธ้ขไธค็งๆนๅผๆฏ็ญไปท็๏ผ
Step20: ไธๅพ่ฏดๆไบไบ็ปดๆฐ็ป็็ดขๅผๆนๅผ
ๅจๅค็ปดๆฐ็ปไธญ๏ผๅฆๆ็็ฅไบๅ้ข็็ดขๅผ๏ผๅ่ฟๅๅฏน่ฑกไผๆฏไธไธช็ปดๅบฆไฝไธ็น็ndarray๏ผๅฎๅซๆ้ซไธ็บง็ปดๅบฆไธ็ๆๆๆฐๆฎ๏ผใ
่ฟ้ไธญๆ็็ไฝ่
็นๅซ่ฏดๆไบไธ้ข่ฟๅฅ่ฏใๆฌๅทๅค้ข็โ็ปดๅบฆโๆฏไธ็ปดใไบ็ปดใไธ็ปดไน็ฑป็ๆๆ๏ผ่ๆฌๅทๅค้ข็ๅบ่ฏฅ็่งฃไธบโ่ฝดโใไนๅฐฑๆฏ่ฏด๏ผ่ฟ้ๆ็ๆฏโ่ฟๅ็ไฝ็ปดๅบฆๆฐ็ปๅซๆๅๅง้ซ็ปดๅบฆๆฐ็ปๆๆก่ฝดไธ็ๆๆๆฐๆฎใ
ไธ้ข็ไธชไพๅญๆฅ็่งฃ๏ผ
Step21: ๆ ้ๅผๅๆฐๅผ้ฝๅฏไปฅ่ตๅผ็ป arr3d[0]
Step22: ๆณจๆ๏ผไธ้ขๆๆ้ๅๆฐ็ปๅญ้็ไพๅญไธญ๏ผ่ฟๅ็ๆฐ็ป้ฝๆฏ่งๅพใ
ๅ็็ดขๅผ
ndarray ็ๅ็่ฏญๆณ่ทpythonๅ่กจ่ฟๆ ท็ไธ็ปดๅฏน่ฑกๅทฎไธๅค๏ผ
Step23: ้ซ็ปดๅบฆๅฏน่ฑก็่ฑๆ ทๆดๅค๏ผๆไปฌๅฏไปฅๅจไธไธชๆ่
ๅคไธช่ฝดไธ่ฟ่กๅ็ใไนๅฏไปฅ่ทๆดๆฐ็ดขๅผๆททๅไฝฟ็จใ
Step24: ไธ่ฟฐๆไปฌๅฏไปฅ็ๅบ๏ผ่ฟ้็ๅ็ๆฏๆฒฟ็็ฌฌ0่ฝด๏ผๅณ็ฌฌไธไธช่ฝด๏ผๅ็็ใๆขๅฅ่ฏ่ฏด๏ผๅ็ๆฏๆฒฟ็ไธไธช่ฝดๅ้ๅๅ
็ด ็ใๆไปฌๅฏไปฅๅๆฌกไผ ๅ
ฅๅคไธชๅ็๏ผๅฐฑๅไผ ๅ
ฅๅคไธช็ดขๅผ้ฃๆ ท๏ผ
Step25: ๅไธ่ฟฐ่ฟๆ ท็ๅ็ๆนๅผ๏ผๅช่ฝๅพๅฐ็ธๅ็ปดๆฐ็ๆฐ็ป่งๅพใๆไปฌ่ฟๅฏไปฅๅฐๆดๆฐ็ดขๅผไธๅ็ๆททๅไฝฟ็จ๏ผไป่ๅพๅฐไฝ็บฌๅบฆ็ๅ็๏ผ
Step26: ่ช็ถๅฐ๏ผๅฏนๅ็่กจ่พพๅผ็่ตๅผๆไฝไนไผ่ขซๆฉๆฃๅฐๆดไธช้ๅบ๏ผ
Step27: ๅธๅฐๅ็ดขๅผ
ๆฅ็่ฟๆ ทไธไธชไพๅญ๏ผๅ่ฎพๆไปฌๆไธไธช็จไบๅญๅจๆฐๆฎ็ๆฐ็ปไปฅๅไธไธชๅญๅจๅงๅ็ๆฐ็ป๏ผๅซๆ้ๅค้กน๏ผใๅจ่ฟ้๏ผๆๅฐไฝฟ็จ numpy.random ไธญ็randnๅฝๆฐ็ๆไธไบๆญฃๆๅๅธ็้ๆบๆฐๆฎใ
Step28: ๅ่ฎพ names ๆฐ็ปไธญ็ๆฏไธชๅๅญ้ฝๅฏนๅบ dataๆฐ็ปไธญ็ไธ่ก๏ผ่ๆไปฌๆณ่ฆ้ๅบๅฏนๅบไบๅๅญโBob'็ๆๆ่กใ่ท็ฎๆฐ่ฟ็ฎไธๆ ท๏ผๆฐ็ป็ๆฏ่พ่ฟ็ฎ๏ผๅฆ==๏ผไนๆฏ็ข้ๅ็ใๅ ๆญค๏ผๅฏนไบnamesๅๅญ็ฌฆไธฒ"Bob"็ๆฏ่พ่ฟ็ฎๅฐไผไบง็ไธไธชboolean array
Step29: ่ฟไธชBoolean arrayๅฏไปฅ็จไบๆฐ็ป็ดขๅผ๏ผThis boolean array can be passed when indexing the array
Step30: ๅฝๅฉ็จๅธๅฐๅๆฐ็ป่ฟ่ก็ดขๅผๆถๅ๏ผๅฟ
้กปๆณจๆๅธๅฐๅๆฐ็ป็้ฟๅบฆ้่ฆไธ่ขซ็ดขๅผ็่ฝด้ฟๅบฆไธ่ดใๆญคๅค๏ผ่ฟๅฏไปฅๅฐๅธๅฐๅๆฐ็ป่ทๅ็ใๆดๆฐ๏ผๆ่
ๆดๆฐๅบๅ๏ผ็จๅๅฏนๆญค่ฟ่ก่ฏฆ็ป็ไป็ป๏ผๆททๅไฝฟ็จ
Step31: ่ฆ้ๆฉ้คไบwillไปฅๅค็ๅ
ถไปๅผ๏ผๆขๅฏไปฅไฝฟ็จไธ็ญไบ็ฌฆๅท(!=)๏ผไนๅฏไปฅ้่ฟ็ฌฆๅท๏ผ-๏ผๅฏนๆกไปถ่ฟ่กๅฆๅฎ
Step32: ๅฆๆๆไปฌ่ฆ้ๅ่ฟไธไธชๅๅญไธญ็ไธคไธช่ฟ่ก็ปๅๆฅๅบ็จๅคไธชๅธๅฐๆกไปถ๏ผ้่ฆไฝฟ็จ&๏ผๅ๏ผใ|๏ผๆ๏ผไน็ฑป็ๅธๅฐ่ฟ็ฎ็ฌฆ๏ผ๏ผๆณจๆ๏ผpythonๅ
ณ้ฎๅญandๅorๅจๅธๅฐๅๆฐ็ปไธญๆฏๆ ๆ็๏ผ
Step33: ๅผๅพๆณจๆ็ๆฏ๏ผ้่ฟๅธๅฐ็ดขๅผ้ๅๆฐ็ปไธญ็ๆฐๆฎ๏ผๅฐๆปๆฏๅๅปบๆฐๆฎ็ๅฏๆฌ๏ผๅณไฝฟ่ฟๅไธๆจกไธๆ ท็ๆฐ็ปไนๆฏๅฆๆญคใ
้่ฟๅธๅฐๅๆฐ็ป่ฎพ็ฝฎๅผๆฏไธ็งๅธธ็จ็ๆนๆณใไธบไบๅฐdataไธญ็ๆๆ่ดๆฐๅไธบ0๏ผๆไปฌๅช้่ฆ
Step34: ้่ฟไธ็ปดๅธๅฐๆฐ็ป่ฎพ็ฝฎๆด่กๆๅ็ๅผไนๅพ็ฎๅ๏ผ
Step35: ่ฑๅผ็ดขๅผ
fancy indexing๏ผๅณ่ฑๅผ็ดขๅผ๏ผๆฏไธไธชNumPyไธไธๆฏ่ฏญ๏ผไปฃๆๅฉ็จๆดๆฐๆฐ็ป่ฟ่ก็ดขๅผใ
Step36: ไธบไบไปฅ็นๅฎ้กบๅบ้ๅ่กๅญ้๏ผๅช้ไผ ๅ
ฅไธไธช็จไบๆๅฎ้กบๅบ็ๆดๆฐๅ่กจๆndarrayๅณๅฏ๏ผ
Step37: ไธ้ข็ไปฃ็ ็๏ผๆไปฌ็จไธไธชๅ่กจ[4,3,0,6]ๅฐฑ้ๅบไบarra1ไธญ็็ฌฌ4๏ผ3๏ผ0๏ผ6็ๅญ้ใ
ๅฆๆๆไปฌไฝฟ็จ่ดๆฐ่ฟ่ก็ดขๅผ๏ผๅ้ๆฉ็้กบๅบๅฐๆฏไปๆซๅฐพๅฐๅผๅคดใ
ๆณจๆ-0ๅ0ๆฏไธๆ ท็๏ผ่ฟๆฏๅผๅคด็็ฌฌไธ่กไฝไธบ0. ่ฟๆฏๅผๅพๆณจๆ็ๅฐๆนใ
Step38: ไธๆฌกไผ ๅ
ฅๅคไธช็ดขๅผๆฐ็ปไผไผๆฏ่พ็นๅซใๅฎ่ฟๅ็ๆฏไธไธชไธ็ปดๆฐ็ป๏ผๅ
ถไธญ็ๅ
็ด ๅฏนๅบๅไธช็ดขๅผๅ
็ป๏ผ
Step39: ไปไธ่ฟฐไปฃ็ ็็ปๆ็ไธ้พ็ๅบ๏ผๅพๅบๆฅ็็ปๆๆฏ[1,0] [5,3] [7,1] ๅ [2,2]
้ฃไนๆไน้ๅ็ฉ้ต็่กๅๅญ้ๅข๏ผไธ้ข๏ผๆไปฌๅช้่ฆ็จๅพฎๆนๅจไธไปฃ็ ๅณๅฏๅฎ็ฐ๏ผ๏ผ่ฟ้จๅๆๅฅฝๅ่ฏปๅ ้ๅไนฆ๏ผๅญๅฅไธๅฅฝ็่งฃ๏ผ
Step40: ๆญคๅค๏ผ่ฟๅฏไปฅไฝฟ็จ np.ix_ๅฝๆฐๆฅๅฎ็ฐไธ่ฟฐ็ๅ่ฝ๏ผๅฎๅฏไปฅๅฐไธคไธชไธ็ปดๆดๆฐๆฐ็ป่ฝฌๆขไธบไธไธช็จไบ้ๅๆนๅฝขๅบๅ็็ดขๅผๅจ๏ผ
Step41: It should be mentioned that, ่ฑๅผ็ดขๅผไธๅ็ไธไธๆ ท๏ผๅฎๆปๆฏๅฐๆฐๆฎๅคๅถๅฐๆฐๆฐ็ปไธญใ
ๆฐ็ป่ฝฌ็ฝฎๅ่ฝดๅฏน็งฐ
่ฝฌ็ฝฎ๏ผๅณ transpose๏ผๆฏ้ๅก็ไธ็ง้่ฆ็นๆฎๅฝขๅผ๏ผๅฎ่ฟๅ็ๆฏๅๆฐๆฎ็่งๅพ๏ผไธไผ่ฟ่กไปปไฝๅคๅถๆไฝ๏ผใๆฐ็ปไธ็ฆๆtransposeๆนๆณ๏ผ่ฟๆไธไธช็นๆฎ็Tๅฑๆงใ
Step42: ๅฝๆไปฌ่ฟ่ก็ฉ้ต้ข็ฎๆถๅ๏ผ่ฟ่ก้่ฆ็จๅฐ่ฝฌ็ฝฎๆไฝใไพๅฆ๏ผ่ฆ็จ np.dot่ฎก็ฎ็ฉ้ตๅ
็งฏX$^T$X๏ผ
Step43: ๅฏนไบๆด้ซ็ปด็ๆฐ็ป๏ผtranspose ๆถ้่ฆๅพๅฐไธไธช็ฑ่ฝด็ผๅท็ปๆ็ๅ
็ฅๆ่ฝๅฏน่ฟไบ่ฝด่ฟ่ก่ฝฌ็ฝฎ๏ผ่ฟไธชๅฏ่ฝไธๅฅฝ็่งฃ๏ผๅพๅค้
่ฏปๅ ๆฌก๏ผ๏ผ
Step44: ไปไธ้ขๅ ไธชไพๅญ๏ผๆไปฌๅฏไปฅ็ๅบ๏ผๅฏนไบ็ฎๅ็ไฝ็ปด็ฉ้ต๏ผไฝฟ็จ.Tๅฐฑๅฏไปฅๅฎ็ฐ่ฝฌ็ฝฎ๏ผๆฏ็ซๅชๆฏ่ฟ่ก่ฝดๅฏนๆข่ๅทฒ๏ผไฝๆฏๅฏนไบ้ซ็ปดๆฐ็ป๏ผๅฐฑๆพๅพ้บป็ฆๅฅฝๅคใndarray่ฟๆไธไธชswapaxesๆนๆณ๏ผๅฎ้่ฆๆฅๅไธๅฏน่ฝด็ผๅท๏ผ(ๆณจๆswapaxesไนๆฏ่ฟๅๆบๆฐๆฎ็่งๅพ๏ผๅนถไธไผ่ฟ่กไปปไฝๅคๅถๆไฝใ)
Step45: ้็จๅฝๆฐ๏ผๅฟซ้็ๅ
็ด ็บงๆฐ็ปๅฝๆฐใ
้็จๅฝๆฐ๏ผๅณufuc๏ผๆฏไธ็งๅฏนndarrayไธญๅฏนๆฐๆฎๆง่กๅ
็ด ็บง่ฟ็ฎๅฏนๅฝๆฐใๆไปฌๅฏไปฅๅฐๅ
ถ็ไฝ็ฎๅๅฏนๅฝๆฐ๏ผๆฅๅไธไธชๆ่
ๅคไธชๆ ้ๅผ๏ผๅนถไบง็ไธไธชๆ่
ๅคไธชๆ ้ๅผ๏ผ็็ข้ๅๅ
่ฃ
ๅจใ
่ฎธๅค unfunc ้ฝๆฏ็ฎๅ็ๅ
็ด ็บงๅไฝ๏ผๅฆsqrtๅexp๏ผ
Step46: ไธ่ฟฐ่ฟไบ้ฝๆฏไธๅ
๏ผunary๏ผufuncใๅฆๅคไธไบ๏ผๅฆaddๆmaximum๏ผๆฅๅ2ไธชๆฐ็ป๏ผๅ ๆญคไนๅซไบๅ
binary ufunc๏ผ๏ผๅนถ่ฟๅไธไธช็ปๆๆฐ็ป๏ผ
Step47: ๆญคๅค๏ผๆไธๅฐ้จๅ็ufunc๏ผๅฎไปฌๅฏไปฅ่ฟๅๅคไธชๆฐ็ปใmofๅฐฑๆฏไธไธชไพๅญ๏ผๅฎๆฏPythonๅ
็ฝฎๅฝๆฐ
divmod็็ข้ๅ็ๆฌ๏ผ็จไบๅ็ฆปๆตฎ็นๆฐ็ป็ๅฐๆฐๅๆดๆฐ้จๅใ้่ฟไธ้ข็ไพๅญ๏ผๆไปฌไผๅ็ฐ๏ผmofๅ
ถๅฎๅพๅฐ็ๆฏๅ ไธชๆฐ็ป็ปๆ็tuple
Step48: ไธ่กจไธญๅๅบไบไธไบไธๅ
ๅไบๅ
ufunc
ไธๅ
ufunc
|ๅฝๆฐ|่ฏดๆ|
|------|-----|
|abs, fabs|่ฎก็ฎๆดๆฐใๆตฎ็นๆฐๅ่ดๆฐ็็ปๅฏนๅผใๅฏนไบๅคๆฐๅผ๏ผๅฏไปฅไฝฟ็จๆดๅฟซ็fabs|
|sqrt|่ฎก็ฎๅๅ
็ด ็ๅนณๆนๆ นใ็ธๅฝไบ arr 0.5|
|square|่ฎก็ฎๅๅ
็ด ็ๅนณๆนใ็ธๅฝไบๆฏ arr 2 |
|exp|่ฎก็ฎๅๅ
็ด ็eๆๆฐ๏ผๅณ e$^x$|
|log,log10,log2,log1p|ๅๅซๅฏนๅบ่ช็ถๅฏนๆฐ๏ผไปฅeไธบๅบ๏ผ๏ผๅบๆฐๆฏ10็log๏ผๅบๆฐๆฏ2็log๏ผไปฅๅlog(1+x)|
|sign|่ฎก็ฎๅๅ
็ด ็ๆญฃ่ดๅท๏ผ1ไปฃ่กจๆดๆฐ๏ผ0ไปฃ่กจ้ถ๏ผ-1ไปฃ่กจ่ดๆฐ|
|ceil|่ฎก็ฎๅๅ
็ด ็ceilingๅผ๏ผๅณๅคงไบ็ญไบ่ฏฅๅผ็ๆๅฐๆดๆฐ|
|floor|่ฎก็ฎๅๅ
็ด ็floorๅผ๏ผๅณๅฐไบ็ญไบ่ฏฅๅผ็ๆๅคงๆดๆฐ|
|rint|ๅฐๅๅ
็ด ไนๅ่ไบๅ
ฅๅฐๆๆฅ่ฟ็ๆดๆฐ๏ผไฟ็dtype|
|modf|ๅฐๆฐ็ป็ๅฐๆฐๅๆดๆฐ้จๅไปฅไธคไธช็ฌ็ซ็ๆฐ็ปๅฝขๅผ่ฟๅ|
|isnan| ่ฟๅไธไธช่กจ็คบโๅชไบๅผๆฏNaN๏ผ่ฟไธๆฏไธไธชๆฐๅญ๏ผโ็booleanๆฐ็ป|
|isfiniteใisinf|ๅๅซ่ฟๅไธไธช่กจ็คบโๅชไบๅ
็ด ๆฏๆ็ฉท็๏ผ้inf๏ผ้NaN๏ผโ ๆ่
โๅชไบๅ
็ด ๆฏๆ ็ฉท็โ็ๅธๅฐๅๆฐ็ป|
|cosใcoshใsinใsinhใtan๏ผtanh|ๆฎ้ๅๅๅๆฒๅไธ่งๅฝๆฐ|
|arccosใarccoshใarcsinใarcsinhใarctanใarctanh|ๅไธ่งๅฝๆฐ|
|logical_not| ่ฎก็ฎๅไธชๅ
็ด not x็็ๅผใ็ธๅฝไบ-arr|
ไบๅ
ufunc
|ๅฝๆฐ|่ฏดๆ|
|------|-----|
|add|ๅฐๆฐ็ปไธญๅฏนๅบ็ๅ
็ด ็ธๅ |
|substract|ไป็ฌฌไธไธชๆฐ็ปไธญๅๅป็ฌฌไบไธชๆฐ็ปไธญ็ๅ
็ด |
|multiply|ๆฐ็ปๅ
็ด ็ธไน|
|divideใfloor_divide|้คๆณๆๅไธๅๆด้คๆณ๏ผไธขๅผไฝๆฐ๏ผ|
|power|ๅฏน็ฌฌไธไธชๆฐ็ปไธญ็ๅ
็ด A๏ผๆ นๆฎ็ฌฌไบไธชๆฐ็ปไธญ็็ธๅบๅฅฝๅ
็ด B๏ผ่ฎก็ฎA$^B$|
|maximum, fmax|ๅ
็ด ็บง็ๆๅคงๅผ่ฎก็ฎใfmaxๅฐๅฟฝ็ฅNaN|
|minimumใfmin|ๅ
็ด ็บง็ๆๅฐๅผ่ฎก็ฎใfminๅฐๅฟฝ็ฅNaN|
|mod|ๅ
็ด ็บง็ๆฑๆจก่ฎก็ฎ๏ผ้คๆณ็ไฝๆฐ๏ผ|
|copysign|ๅฐ็ฌฌไบไธชๆฐ็ปไธญ็ๅผ็็ฌฆๅทๅคๅถ็ป็ฌฌไธไธชๆฐ็ปไธญ็ๅผ|
|greaterใgreater_equalใlessใless_equalใequalใnot_equal|ๆง่กๅ
็ด ็บง็ๆฏ่พ่ฟ็ฎ๏ผๆ็ปไบง็booleanๅๆฐ็ปใ็ธๅฝไบไธญ็ผ่ฟ็ฎ>, >=, <, <=, ==, !=|
|logical_andใlogical_orใlogical_xor | ๆง่กๅ
็ด ็บง็็ๅผ้ป่พ่ฟ็ฎใ็ธๅฝไบไธญ็ผ่ฟ็ฎ็ฌฆ '&'๏ผ'$|$'๏ผ'^'|
Step49: ๅฉ็จๆฐ็ป่ฟ่กๆฐๆฎๅค็
NumPyๆฐ็ป็็ข้ๅๅจๅพๅคง็จๅบฆไธ็ฎๅไบๆฐๆฎๅค็ๆนๅผใไธ่ฌ่่จ๏ผ็ข้ๅ่ฟ็ฎ่ฆๆฏ็ญไปท็็บฏpythonๆนๅผๅฟซ1-2ไธชๆฐ้็บง๏ผๅฐคๅ
ถๆฏๅจๆฐๅผ่ฎก็ฎๅค็่ฟ็จไธญ่ฟไธชไผๅฟๆดๅ ็ๆๆพใๅจๅ้ข็็ฌฌ12็ซ ่ไธญ๏ผๆไปฌๅฐไบ่งฃๅฐๅนฟๆญ๏ผๅฎๆฏไธ็ง้ๅฏน็ข้ๅ่ฎก็ฎ็ๅผบๅคงๆๆฎตใ
ๅ่ฎพๆไปฌๆณ่ฆๅจไธ็ปๅผ๏ผ็ฝๆ ผๅ๏ผไธ่ฎก็ฎsqrt(x^2+y^2)ใๆไปฌๅฝ็ถๅฏไปฅ้ๆฉ็จloop็ๆนๅผๆฅ่ฎก็ฎ๏ผไฝๆฏๆไปฌๅจ่ฟ้ไฝฟ็จๆฐ็ป็ๆนๆณใ
np.meshgrid ๅฝๆฐๆฅๅไธคไธชไธ็ปดๆฐ็ป๏ผๅนถไบง็ไธคไธชไบ็ปด็ฉ้ต๏ผๅฏน่ฑ่ฏญไธคไธชๆฐ็ปไธญๆๆ็(x,y)ๅฏน๏ผ๏ผ
Step50: ็ฐๅจ๏ผๆไปฌๆฅ่ฎก็ฎxsไบๆฌกๆนไธysไบๆฌกๆน็ๅ๏ผ
Step51: ๆไปฌ่ฏ็ๅฐไธ่ฟฐ่ฟไธชzๅฝๆฐ็ปๅบๆฅ
Step52: ไธ้ข๏ผๆไปฌๅชไฝฟ็จไบๅพโ็โ็็น๏ผๆฅไธๆฅ๏ผๆไปฌๅฐ่ฏไฝฟ็จๅพๅฏ้็็น๏ผ่ฟๆ ทๆๅฉไบๆไปฌๅฏ่งๅsqrt(x^2+y^2)่ฟไธชๅฝๆฐใ
Step53: ๅฐๆกไปถ้ป่พ่กจ่ฟฐไธบๆฐ็ป่ฟ็ฎ
Expressing conditional logic as array operations
numpy.where ๅฝๆฐๆฏไธๅ
่กจ่พพๅผ x if condition else y ็็ข้ๅ็ๆฌใๅ่ฎพๆไปฌๆไธไธชboolean ๆฐ็ปๅไธคไธชๅผๆฐ็ปใ
Step54: ๅ่ฎพๆไปฌๆณ่ฆๆ นๆฎ cond ไธญ็ๅผๆฅๅณๅฎๆไปฌๆฏ้ๅ xarr ่ฟๆฏ yarr ็ๅผใๅฝ cond ไธญ็ๅผไธบ True ๆถ๏ผๆไปฌ้ๅ xarr ไธญ็ๅผ๏ผๅฆๅ้็จ yarr ไธญ็ๆฐๅผใ
pythonไธญๅ่กจๆจๅฏผๅผ็ๅๆณๅฆไธๆ็คบ๏ผ
Step55: It has multiple problems here. First, it will not be fast for large arrages (because all the work is being done in interpreted python code,ๅณ็บฏpythonๅค็)๏ผsecond, it will not work with multidimensional array,ๅณๆ ๆณๅค็ๅค็ปดๆฐ็ปใ
ๅฆๆๆไปฌไฝฟ็จ np.where๏ผwe can wirte this code very concisely๏ผ
Step56: np.where็็ฌฌไบไธชๅ็ฌฌไธไธชๅๆฐไธๅฟ
ๆฏๆฐ็ป๏ผๅฎไปฌๅฏไปฅๆฏๆ ้ใๅจๆฐๆฎๅๆๅทฅไฝไธญ๏ผwhere ้ๅธธ็จไบๆ นๆฎๅฆไธไธชๆฐ็ป่ไบง็ไธไธชๆฐ็ๆฐ็ปใๅ่ฎพๆไธไธช็ฑ้ๆบๆฐๆฎ็ปๆ็็ฉ้ต๏ผๆไปฌๆณๅฐๆๆๆญฃ็ๅผๆฟๆขไธบ2๏ผๆๆ่ดๅผๆนไธบ-2ใ้ฃไนๆไปฌๅฏไปฅๅไธบ๏ผ
Step57: ๅฆๆๆไปฌๅช้่ฆๆ่ด็ๅผๆนไธบ -3๏ผ ้ฃไนๆไปฌๅฏไปฅ็จ
Step58: Highlight๏ผ ๆไปฌๅฏไปฅไฝฟ็จwhere่กจ็ฐๆดๅ ๅคๆ็้ป่พใๆณ่ฑก่ฟๆ ทไธไธชไพๅญ๏ผๆไธคไธชboolean array๏ผๅๅซๅซๅcond1ๅconda2๏ผๅธๆไฝฟ็จๅ็งไธๅ็ๅธๅฐๅผ็ปๅๅฎ็ฐไธๅ็่ตๅผๆไฝ.
ๅฆๆๆไปฌไธ็จwhere๏ผ้ฃไน่ฟไธชpseudo code ็้ป่พๅคงๆฆๅฆไธ
่ฝ็ถไธๆฏ้ฃไนๅฎนๆ็ๅบๆฅ๏ผๆไปฌๅฏไปฅไฝฟ็จ where ็ๅตๅฅๆฅๅฎ็ฐไธ่ฟฐ็pseudocode้ป่พ
np.where(conda1 & conda2, 0,
np.where(conda1, 1,
np.where(conda2, 2, 3)))
ๅจ่ฟไธช็นๆฎ็ไพๅญไธญ๏ผๆไปฌ่ฟๅฏไปฅๅฉ็จโๅธๅฐๅผๅจ่ฎก็ฎ่ฟ็จไธญ่ขซๅฝไฝ0ๆ่
1ๅค็โ่ฟไธชไบๅฎ๏ผๅฐไธ่ฟฐresult็็ปๆๆนๅๆ
Step59: ็ฐๅจๆไปฌๆฅๅบ็จไธไธ้ข็ๅตๅฅnp.where
Step60: ๆฐๅญฆๅ็ป่ฎกๆนๆณ Mathematical and Statical Methods
ๆไปฌๅฏไปฅไฝฟ็จๆฐ็ปไธ็ไธๅฅๆฐๅญฆๅฝๆฐๅฏนๆดไธชๆฐ็ปๆ่
ๆฐ็ป็ๆไธช่ฝดๅไธ็ๆฐๆฎ่ฟ่ก็ป่ฎก่ฎก็ฎใYou can use aggregations (often called reductions) like 'sum', 'mean', and 'std' either by calling the array instance method or using the top-level Numpy function.
Step61: ไธ้ขไปฃ็ ไธญ๏ผๆไบง็ไบไธไบ enormally distributed random data๏ผๅนถไธ็จimshow function ๆ่ฟไธชไบ็ปดๆฐ็ป็ป็ปไบๅบๆฅใๆไปฌๅฏไปฅไฝฟ็จ aggregate statistics ๅไธไบ่ฎก็ฎ. ๏ผๅ
ถๅฎๆๅจๅ้ขๅทฒ็ป็จๅฐ่ฟไบ่ฟไบ array ๅฎไพๆนๆณใ
Step62: mean ๅ sum ่ฟ็ฑป็ๅฝๆฐๅฏไปฅๆฅๅไธไธช axis ๅๆฐ ๏ผ็จไบ่ฎก็ฎ่ฏฅ่ฝดๅไธ็็ป่ฎกๅผ๏ผ๏ผๆ็ป็ปๆๆฏไธไธช็ธๅฏนไบๅๆฐ็ปๅฐไบไธ็ปด็ๆฐ็ป๏ผ
Step63: ๅ
ถไปๅฆ โcumsumโ๏ผ โcumprodโ ่ฟ็ฑปๅฝๆฐๆนๆณๅนถไธ่ๅ๏ผ่ๆฏไบง็ไธไธช็ฑไธญ้ด็ปๆ็ปๆ็ๆฐ็ป๏ผ
English
Step64: In multidimensional arrays, accumulation functions like cumsum return an array of the same size, but with the partial aggregates computed along the indicated axis according to each lower dimensional slice
Step65: ็จไบBooleanๆฐ็ป็ๆนๆณ
ๅจไธ่ฟฐๆนๆณไธญ๏ผๅธๅฐๅผไผ่ขซๅผบๅถ่ฝฌๆขไธบ 1 ๏ผTrue๏ผ ๅ 0 ๏ผFalse๏ผใๅ ๆญค๏ผsum ็ปๅธธ่ขซ็จๆฅๅฏนBooleanๆฐ็ปไธญ็Trueๅผ่ฎก็ฎ๏ผ
Step66: ๅฆๅค่ฟๆไธคไธชๆนๆณ any ๅ all๏ผๅฎไปฌๅฏน Boolean array ๅพๆ็จใany็จไบๆต่ฏๆฐ็ปไธญๆฏๅฆๅญๅจไธไธชๆๅคไธชTrue๏ผ่allๅๆฃๆฅๆฐ็ปไธญๆๆๅผๆฏๅฆ้ฝๆฏTrueใ
Step67: ๆๅบ Sorting
่ทPythonๅ
็ฝฎ็ๅ่กจไธๆ ท๏ผNumPy ๆฐ็ปไนๅฏไปฅ้่ฟ sort ๆนๆณๅฐฑๅฐๆๅบ
Step68: ๅฏนไบๅค็ปดๆฐ็ป๏ผๅช่ฆๆไปฌ็ปๅฎ็กฎๅฎ็่ฝด็ผๅท๏ผๅฎๅฐฑไผๆฒฟ็็นๅฎ็่ฝด่ฟ่กๆๅบใๆไปฌ่ฟ้ๆฟไธไธชไบ็ปดๆฐ็ปไธพไพ
Step69: The top-level method 'np.sort' returns a sorted copy of an array instead of modifying the array in-place. ่ฟไธช้่ฆๆไปฌๅบๅ np.sort ๅๆฐ็ปๅฎไพ sort ็ๅฐๆนใ
Step70: ๆฐ็ป sort ็ๅบ็จไนไธ๏ผๅฐฑๆฏ็กฎๅฎๆฐ็ป็ๅไฝๆฐ(quantile)ใ
A quick-and-dirty way to compute the quantiles of an array is to sort it, and select the value at a particular rank.
Step71: ไธ้ขๆไปฌๅชๆฏไฝฟ็จไบๅพๅฐ็ๆฐ็ป๏ผๆไปฌไธ็ผๅฐฑๅฏไปฅ็ๅบๅๅไฝๆฐไธ็ๆฐๅผ๏ผๅฝๆฐ็ปๅๅพๅพๅคงๆถๅ๏ผๆ่ฝๅธๆพๅบ sort ็ไพฟๆทใไพๅฆ๏ผ
Step72: ๅ
ณไบ NumPy ๆๅบๆนๆณไปฅๅ่ฏธๅฆ้ดๆฅๆๅบไน็ฑป็้ซ็บงๆๆฏ๏ผๆไปฌๅจ็ฌฌ12็ซ ่ฟไผ่ฏฆ็ป็่ฎจ่ฎบ๏ผๅจ Pandas ไธญไนๆไธไบ็นๅซ็ๆๆฐๆๆฏใ
Unique and Other Set Logic ๅฏไธๅไปฅๅๅ
ถไป้ๅ้ป่พ
NumPy ๆไพไบไธไบ้ๅฏนไธ็ปดndarray็ๅบๆฌ้ๅ่ฟ็ฎใๅ
ถไธญๅฏ่ฝๆๅธธ็จ็ๆฏ np.unique๏ผๅฎ็จไบๆพๅบๆฐ็ปไธญ็ๅฏไธๅผ(ไนๅฐฑๆฏ่ฏด่ฟไธชๅผๅจๆฐ็ปไธญๅชๆไธไธช)ๅนถ่ฟๅๅทฒๆๅบ็็ปๆใ
Step73: ๆไปฌๅฏไปฅๆฟ็ไธ np.unique ็ญไปท็็บฏpythonไปฃ็ ๆฅๆฏ่พไธไธ๏ผContrast no.unique with the pure Python alternative
Step74: Anotehr function, np.in1d, tests membership of the values in one array in another, returning a boolean array.
ๅฆไธไธชๅฝๆฐnp.in1d็จไบๆต่ฏไธไธชๆฐ็ป็ๅผๅจๅฆไธไธชๆฐ็ปไธญ็ๆๅ่ตๆ ผ๏ผ่ฟๅไธไธชBoolean array
Step75: ่ฟ้็ปๅบไธไบ NumPy ไธญ็ๅบๆฌ้ๅๅฝๆฐ๏ผset function๏ผ
Array set operations
|ๅฝๆฐ|่ฏดๆ|
|------|-----|
|unique(x)|่ฎก็ฎxไธญ็ๅฏไธๅ
็ด ๏ผๅนถ่ฟๅๆๅบ็ปๆ|
|intersect1d(x,y)|่ฎก็ฎxๅy็ๅ
ฌๅ
ฑๅ
็ด ๏ผๅนถไธ่ฟๅๆๅบ็ปๆ|
|union1d(x,y)|่ฎก็ฎxๅy็ๅนถ้๏ผๅนถ่ฟๅๆๅบ็ปๆ|
|in1d(x,y)|ๅพๅฐไธไธช่กจ็คบโx็ๅ
็ด ๆฏๅฆๅ
ๅซไบyโ็ๅธๅฐๅๆฐ็ป|
|setdiff1d(x,y)|้ๅ็ๅทฎ๏ผๅณๅ
็ด ๅจxไธญไธไธๅจyไธญ|
|setxor1d(x,y)|้ๅ็ๅฏน็งฐๅทฎ๏ผๅณๅญๅจไบไธไธชๆฐ็ปไธญไฝไธๅๆถๅญๅจไบไธคไธชๆฐ็ปไธญ็ๅ
็ด ๏ผ็ธๅฝไบๆฏๅผๆ|
Step76: File Input and Output with Arrays ็จไบๆฐ็ป็ๆไปถ่พๅ
ฅ่พๅบ
NumPy ๅฏไปฅ็จๆฅ่ฏปๅ็ฃ็ไธญ็ๆๆฌๆฐๆฎๅไบ่ฟๅถๆฐๆฎใๅจ่ฟไธช็ซ ่ไธญ๏ผๆไปฌๅฐๅช่ฎจ่ฎบ NumPy ๅ
ๅปบ็ไบ่ฟๅถๆ ผๅผ๏ผ่ฟไธป่ฆๆฏๅ ไธบๅคง้จๅpython็จๆทๆดๅๆฌข็จpandasๅๅ
ถไปๅทฅๅ
ทๆฅ่ฏปๅๆๆฌๅ่กจๆ ผๆฐๆฎ๏ผ่ฟๅจไนๅ็็ซ ่ไธญไผ่ฟ่ก่ฎจ่ฎบ
ๅฐๆฐ็ปไปฅไบ่ฟๅถๆ ผๅผไฟๅญๅฐ็ฃ็
np.save ๅ np.load ๆฏ่ฏปๅ็ฃ็ๆฐ็ปๆฐๆฎ็ไธคไธชไธป่ฆๅฝๆฐใ้ป่ฎคๆ
ๅตไธ๏ผๆฐ็ปๆฏไปฅๆชๅ็ผฉ็ๅๅงไบ่ฟๅถๆ ผๅผไฟๅญๅจๆฉๅฑๅไธบ .npy ็ๆไปถไธญ็ใ
Step77: ruๅฆๆๆไปถ่ทฏๅพๆซๅฐพๆฒกๆๆฉๅฑๅ .npy๏ผ้ฃไน่ฟไธชๆฉๅฑๅไผ่ขซ่ชๅจ่กฅๅ
จใ็ถๅๅฐฑๅฏไปฅ้่ฟ np.load ่ฏปๅ็ฃ็ไธ็ๆฐ็ป๏ผ
Step78: ้่ฟ np.savez ๅฏไปฅๅฐๅคไธชๆฐ็ปไฟๅญๅฐไธไธชuncompressed npzๆไปถไธญ๏ผๆณจๆๅไนฆๅไธญๆ็ฟป่ฏ็็ฌฌไธ็้ฝๆ่ฟไธชnpz่ฏดๆไบๆฏๅ็ผฉๆไปถ๏ผ่ฟไธชๆฏ้่ฏฏ็๏ผไฝๆฏๅไฝ่
็ฌฌไบ็๏ผๅณๅฉ็จpython 3็็ๆฌๅทฒ็ปๆดๆญฃไบ๏ผๆไนๆฅ้
ไบ NumPy ็ๆๆกฃ๏ผnp.savezไฟๅญ็ๅนถไธๆฏๅ็ผฉๆไปถ๏ผๅฆๆ่ฆๅ็ผฉๆไปถ๏ผๅฏไปฅไฝฟ็จ np.savez_compressed๏ผ๏ผๅฐๆฐ็ปไปฅๅ
ณ้ฎๅญๅๆฐ็ๅฝขๅผไผ ๅ
ฅๅณๅฏ๏ผ
Step79: When loading an .npz file, we get back a dict-like object ๏ผๆไปฌๅพๅฐ็ๆฏไธไธช็ฑปไผผๅญๅ
ธ็ๅฏน่ฑก๏ผthat laods the individual arrays lazily (่ฏฅๅฏน่ฑกไผๅฏนๅไธชๆฐ็ป่ฟ่กๅปถ่ฟๅ ่ฝฝ)
Step80: ๅญๅๆๆฌๆไปถ
ไปๆไปถๅ ่ฝฝๆๆฌๆฏไธชๅพๆ ๅ็pythonไปปๅก๏ผไธ่ฟpython็ๆไปถ่ฏปๅๅฝๆฐๅพๅฎนๆๅฆๅๅญฆ่
ๆ็ณๆถ๏ผๅ ๆญค่ฟ้ๆไปฌไธป่ฆไป็ป pandas ไธญ็ read_csv ๅ read_table ๅฝๆฐใๆๆถ๏ผๆไปฌ้่ฆ็จๅฐ np.loadtxt ๆ่
ๆดไธบไธ้จๅ็ np.genfromtxt ๅฐๆฐๆฎ่ฎฐ่ฝฝๅฐๆฎ้็ NumPy ๆฐ็ปไธญใ
่ฟไบๅฝๆฐ้ฝๆ่ฎธๅค้้กนๅฏไพไฝฟ็จ๏ผๆๅฎๅ็งๅ้็ฌฆใ้ๅฏน็นๅฎๅ็่ฝฌๆขๅจๅฝๆฐใ้่ฆ่ทณ่ฟ็่กๆฐ็ญใ่ฟ้๏ผไปฅไธไธช็ฎๅ็้ๅทๅๅฒๆไปถ ๏ผCSV) ไฝไธบ
example๏ผ
Step81: ่ฏฅๆไปถๅฏไปฅ่ขซๅ ่ฝฝๅฐไธไธชไบ็ปดๆฐ็ปไธญ๏ผๅฆไธๆ็คบ๏ผ
Step82: np.savetxt ๆง่ก็ๆฏ็ธๅ็ๆไฝ๏ผๅฐๆฐ็ปๅๅฐไปฅๆ็งๅ้็ฌฆๅๅผ็ๆๆฌๆไปถไธญๅปใ genfromtxt ่ท loadtxt ๅทฎไธๅค๏ผๅชไธ่ฟๅฎ้ขๅ็ๆฏ็ปๆๅๆฐ็ปๅ็ผบๅคฑๆฐๆฎๅค็ใๅจ12็ซ ไธญ๏ผๆไปฌ่ฟไผไป็ป่ฎจ่ฎบ็ปๆๅๆฐ็ป็็ฅ่ฏใ
Step83: Linear Algebra
Linear algebra, like matrix multiplication, decompositions, determinants, and other square matrix math, is an important part of any array library. Unlike MATLAB, multiplying two two-dimensional arrays with * gives an element-wise product instead of a matrix dot product. Thus, there is a function 'dot', both an array method and a function in the numpy namespace, for matrix multiplication
Step84: x.dot(y) is equivalent to np.dot(x,y)
Step85: A matrix product between a 2D array and a suitably sized 1D array result in a 1D array
Step86: numpy.linalg ไธญๆไธ็ปๆ ๅ็็ฉ้ตๅ่งฃ่ฟ็ฎไปฅๅ่ฏธๅฆๆฑ้่กๅๅผไน็ฑป็ไธ่ฅฟใๅฎไปฌ่ท matlab ๅ R ็ญ่ฏญ่จๆไฝฟ็จ็ๆฏ็ธๅ็่กไธๆ ๅ็บง Fortran ๅบ๏ผๅฆ BLASใLAPACKใIntel MKL ๏ผๅฏ่ฝๆ๏ผ่ฟไธชๅๅณไบๆไฝฟ็จ็ NumPy ็ๆฌ๏ผ็ญ๏ผ
Step87: ่ฟ้๏ผๆๅฏนๅธธ็จ numpy.linalg ๅฝๆฐ่ฟ่กไธไบๆกไพๅฏน่ฏดๆ๏ผๅไนฆๅฉ็จไบไธไธช่กจๆ ผ๏ผไฝๆฏๆ่ชๅทฑไธบไบ็่ฟๆฌไนฆไน้ๅคๅไบๅ ไธช่กจๆ ผไบ๏ผ่ฎฐๅฟๆ
ๅตๅนถไธไฝณ๏ผๅฏ่ฝ่ฟๆฏไธไธชๅฝๆฐไธไธชไพๅญๅฏน่ฟ็งๆนๆณๆดๅ ๅฎนๆ่ฎฉไบบ่ฎฐๅฟๆทฑๅปไธไบใ๏ผ2017/12/25๏ผ
|ๅฝๆฐ|่ฏดๆ|
|---|---|
|diag|ไปฅไธ็ปดๆฐ็ป็ๅฝขๅผ่ฟๅๆน้ต็ๅฏน่ง็บฟ๏ผๆ้ๅฏน่ง็บฟ๏ผๅ
็ด |
Step88: |ๅฝๆฐ|่ฏดๆ|
|---|---|
|dot|matrix multiplication, ็ฉ้ตไนๆณ|
่ฟๅทฒ็ปๅจๅ้ขไธพ่ฟไพๅญ๏ผ่ฟ้็ฅไบใ
|ๅฝๆฐ|่ฏดๆ|
|---|---|
|trace|่ฎก็ฎๅฏน่ง็บฟๅ
็ด ็ๅ|
Step89: |ๅฝๆฐ|่ฏดๆ|
|---|---|
|det|่ฎก็ฎ็ฉ้ต็่กๅๅผ|
Step90: |ๅฝๆฐ|่ฏดๆ|
|---|---|
|eig|่ฎก็ฎๆน้ต็ๆฌๅพๅผๅๆฌๅพๅ้|
Step91: |ๅฝๆฐ|่ฏดๆ|
|---|---|
|inv|่ฎก็ฎๆน้ต็้|
Step92: |ๅฝๆฐ|่ฏดๆ|
|---|---|
|pinv|่ฎก็ฎ็ฉ้ต็Moore-Penroseไผช้|
Compute the Moore-Penrose pseudo-inverse of a matrix
Step93: |ๅฝๆฐ|่ฏดๆ|
|---|---|
|qr|copute the QR decompisition|
ไธ้ขๆ่ฟไบ๏ผๆญคๅค็ฅ
|ๅฝๆฐ|่ฏดๆ|
|---|---|
|svd|่ฎก็ฎๅฅๅผๅผๅ่งฃ๏ผ compute the singular value decomposition SVD|
Step94: |ๅฝๆฐ|่ฏดๆ|
|---|---|
|solve|่ฎก็ฎๆน็จ็ป Ax = b๏ผ ๅ
ถไธญ A ไธบไธไธชๆน้ต|
Note๏ผThe solutions are computed using LAPACK routine_gesv. 'a' must be square and of full-rank, i.e., all rows (or, equivalently, columns) must be linearly independent; if either is not true, use lstsq for the least-squares best 'solutions' of the system/equation.
Step95: |ๅฝๆฐ|่ฏดๆ|
|---|---|
|lstsq|่ฎก็ฎๆน็จ็ป $ax = b$ ็ๆๅฐไบไน่งฃ|
numpy.linalg.lstsq(a, b, rcond=-1)
Solve the equation $a x = b$ by computing a vector $x$ that minimizes the Euclidean 2-norm $||b - ax||^2$. The equation may be under-, well-, or over-determined (i.e. the number of linearly independent rows of $a$ can be less than, equal to, or greater than its number of linearly independent columns). If $a$ is square and of full rank, then $x$ (but for round-off error) is the "exact" solution of the equation. (revised from the scipy.org documentation)
Fit a line, $y = mx + c$, through some noisy data-points
Step96: ้่ฟๆฅ็ไธ้ขxๅy็็น๏ผๆไปฌๅฏไปฅๅ็ฐ่ฟไธชๆฒ็บฟ็ๅคงๆฆๆ็ๅจ1ๅทฆๅณ๏ผ่ๅจy่ฝดไธ็cut offๅจ-1ๅทฆๅณใ
ๆไปฌๅฏไปฅ้ๆฐๅไธไธไธ้ข่ฟไธช็บฟๆงๆน็จ๏ผ$y$ = A p, ๆญคๅค A = [ [x, 1] ] ๅนถไธ p = [[m], [c]]ใ็ฐๅจๆไปฌไฝฟ็จ lstsq ๅป่งฃ p
Step97: ้ๆบๆฐ็ๆ Pseudorandom Number Generation
numpy.random ๆจกๅๅฏน PYthon ๅ
็ฝฎๅฏน random ่ฟ่กไบ่กฅๅ
๏ผๅขๅ ไบไธไบๆ็ฌ็ๆ้ๆบๆ ทๆฌๅฏนๅฝๆฐใไพๅฆ๏ผๆไปฌๅฏไปฅ็จnormalๆฅๅพๅฐไธไธชๆ ๅๆญฃๆๅๅธๅฏน 4 * 4 ๆ ทๆฌๆฐ็ป๏ผ
Step98: ไธๆญคๅฏนๆฏๅฐ๏ผๅจpythonๅฏนๅ
็ฝฎrandomๅฝๆฐไธญ๏ผไธๆฌกๅช่ฝ็ๆไธไธชๆ ทๆฌๅผใไธ้ขๆไปฌๅฐฑๆฅๅฏนๆฏไธ่ฟไธค็งๆนๆณๅฏนๅบๅซ๏ผๆไปฌๅฐไผ็ๅฐ numpyไธญๅฏนๆจกๅๆๆดไผ่ถๅฏนๆ็๏ผ
Step99: ไธ่กจๅๅบไบ numpy.random ไธญ็้จๅๅฝๆฐใๅจไธไธ่ไธญ๏ผๆ้จๅฐ็ปๅบไธไบๅฉ็จ่ฟไบๅฝๆฐไธๆฌกๆง็ๆๅคง้ๆ ทๆฌๅผ็ๆกไพใ
|ๅฝๆฐ|่ฏดๆ|
|---|---|
|seed|็กฎๅฎ้ๆบๆฐ็ๆๅจ็็งๅญ|
|permutation|่ฟๅไธไธชๅบๅ็้ๆบๆๅๆ่ฟๅไธไธช้ๆบๆๅ็่ๅด|
|shuffle|ๅฏนไธไธชๅบๅๅฐฑๅฐ้ๆบๆๅ|
|rand|ไบง็ๅๅๅๅธ็ๆ ทๆฌๅผ|
|randint|ไป็ปๅฎ็ไธไธ้่ๅดๅ
้ๆบ้ๅๆดๆฐ|
|randn|ไบง็ๆญฃๆๅๅธ๏ผๅนณๅๅผไธบ0๏ผๆ ๅๅทฎไธบ1๏ผ็ๆ ทๆฌๅผ๏ผ็ฑปไผผไบmatlabๆฅๅฃ|
|binomial|ไบง็ไบ้กนๅๅธ็ๆ ทๆฌๅผ|
|normal|ไบง็ๆญฃๆ๏ผ้ซๆฏ๏ผๅๅธ็ๆ ทๆฌๅผ|
|beta|ไบง็Betaๅๅธ็ๆ ทๆฌๅผ|
|chisquare|ไบง็ๅกๆนๅๅธ็ๆ ทๆฌๅผ|
|gamma|ไบง็Gammaๅๅธ็ๆ ทๆฌๅผ|
|uniform|ไบง็ๅจ[0,1)ไธญๅๅๅๅธ็ๆ ทๆฌๅผ|
่ไพ๏ผ้ๆบๆผซๆญฅ random walks
้ๆบๆผซๆญฅๆฏ่ฏดๆๆฐ็ปๆไฝๆๅฅฝ็ๆกไพไนไธใ็ฐๅจ๏ผๆไปฌๆฅ่่ไธไธช็ฎๅ็้ๆบๆผซๆญฅ๏ผๆไปฌไป0ๅผๅง๏ผๅนถไธไปฅ1ๆ่
-1ไฝไธบstep width๏ผ1ๅ-1ๅบ็ฐ็ๆฆ็ๆฏๅ็ญ็ใ็ถๅๆไปฌ่ตฐ1000ๆญฅ๏ผๆไปฌๅฏไปฅ็็ๆไปฌไผ่ตฐๅบไปไนๆ ท็่ฝจ่ฟน
Step100: ๆณจๆๆไธ้ข็ไปฃ็ ่ทๅไนฆไธ็ๅบๅซ๏ผไธป่ฆๅจไบๆๅนถไธๆฏpython่ช่บซ็random standard libraryใๆไฝฟ็จ็ๆฏnumpy.random๏ผ่ฟไธคไธชๆฏๆๅบๅซ็๏ผ่ฟไธป่ฆๅจไบ
numpy.random.randint(a, b)๏ผ่ฟๅ็ๅผๆฏa ~ (b-1)ไน้ด็ๆดๆฐๅผ๏ผๅ
ๆฌa ๅ b-1๏ผ๏ผ
่python่ชๅธฆ็random.randint(a,b) ่ฟๅ็ๅผๆฏ a ๏ฝ bไน้ด็ๆดๆฐๅผ๏ผๅ
ๆฌaๅb๏ผ
Step101: ไธ้ข็walkๆฐๅผ๏ผๅ
ถๅฎๅฐฑๆฏ้ๆบๆฐ็็ดฏ่ฎกๅใไธ่ฟไธ้ข็ๆนๅผไธญ๏ผๆ้จ้ฝๆฏ่ตฐไธๆญฅ็ถๅไบง็ไธไธช้ๆบๆฐ๏ผๅ
ถๅฎๆไปฌๅฏไปฅ็จnumpy.random.randintไธๆฌกๆงๅฐไบง็Nไธช้ๆบๆฐ๏ผ่ฟ้ไปฅN=1000ไธบไพ
Step102: A more complicated statics is the 'first crossing time', the step at which the random walk reaches a particular value. Here we might want to know how long it look the random walk to get at least 10 steps aways from the origin 0 in either direction. np.abs(walk) >= 10 gives us a boolean array indicating where the walk has reached or exceeded 10, but we want the index of the first 10 or -10.
Step103: ไธๆฌกๆจกๆๅคไธช้ๆบๆผซๆญฅ simulating many random walks at once
ๅฆๆๅธๆๆจกๆๅคไธช้ๆบๆผซๆญฅ่ฟ็จ๏ผๅช้่ฆๅฏนไธ้ขๅฏนไปฃ็ ๅไธ็นๅพฎ่ฐใๆไปฌๅช้่ฆ็ปnumpy.random ไผ ๅ
ฅไธไธชไบๅ
ๅ
็ฅๅณๅฏไบง็ไธไธชไบ็ปดๆฐ็ป๏ผ็ถๅๆไปฌๅฐฑ่ฝไธๆฌกๆง่ฎก็ฎ5000ไธช้ๆบๆผซๆญฅ่ฟ็จ๏ผไธ่กไธไธช๏ผ็็ดฏ่ฎกๅไบใ
Step104: ๅพๅฐ่ฟไบๆฐๆฎๅ๏ผๆไปฌๅฏไปฅๆฅ่ฎก็ฎๅบ30ๆ่
-30็ๆๅฐ็ฉฟ่ถๆถ้ดใ่ฟ้ๅพ่ฆ็จๅพฎๅจไธไธ่ๅญ๏ผๅ ไธบไธๆฏ5000ไธช่ฟ็จ้ฝๅฐ่พพไบ30ใๆไปฌๅฏไปฅ็จanyๆนๆณๆฅๅฏนๆญค่ฟ่กๆฃๆฅ
Step105: ็ถๅๆไปฌๅๅฉ็จ่ฟไธชboolean array้ๅบๅชไบ็ฉฟ่ถไบ30๏ผ็ปๅฏนๅผ๏ผ็้ๆบๆผซๆญฅ๏ผ่ก๏ผ๏ผๅนถ่ฐ็จargmaxๅจ่ฝด1ไธ่ทๅ็ฉฟ่ถๆถ้ด๏ผ
Step106: ่ฟ้่ฏทๅฐ่ฏๅ
ถไปๅๅธๆนๅผๅพๅฐๆผซๆญฅๆฐๆฎใๅช้่ฆไฝฟ็จไธๅ็้ๆบๆฐ็ๆๅฝๆฐๅณๅฏใไพๅฆ๏ผnormal ็จไบ็ๆๆๅฎๅๅผๅๆ ๅๅทฎ็ๆญฃๆๅๅธๆฐๆฎ
Step107: Appendix for chapter04-note
date
Step108: ไปไธ้ขๅฏไปฅ็ๅบ๏ผnumpyๆฐ็ปๆไฝๆถๅๆฏๅฏน่ขซไฝ็จๅฏนๆๆๅ
็ด ็๏ผ่ฟไธไบๅฎไฝฟๅพๆฐ็ป่ฎก็ฎ้ฝๅๅพ็ฎๅๅๅฟซ้ใๆฏๅฆๆไปฌๅฏไปฅๅฟซ้ๅฐ่ฎก็ฎๅค้กนๅผ๏ผ
Step109: NumPyๆไพไบไธไบ้็จๅฝๆฐ้ๅ๏ผไปไปฌไน่ฝๅฏนๆฐ็ป่ฟ่ก็ดๆฅๅฏนๆไฝใ่ฟไบ้็จๅฝๆฐๅฏไปฅไฝไธบmathๆจกๅไธญๆๅฏนๅบๅฝๆฐๅฏนๆฟไปฃใ็คบไพๅฆไธ๏ผ
Step110: ไฝฟ็จNumPyไธญ็้็จๅฝๆฐ๏ผๅ
ถๆ็่ฆๆฏๅฏนๆฐ็ป่ฟ่ก่ฟญไปฃ็ถๅไฝฟ็จmathๆจกๅไธญ็ๅฝๆฐๆฏๆฌกๅชๅค็ไธไธชๅ
็ด ๅฟซไธๆฐๅใๅ ๆญค๏ผๅช่ฆๆๅฏ่ฝๅฐฑๅบ่ฏฅ็ดๆฅไฝฟ็จ่ฟไบ้็จๅฝๆฐใ
ๅจๅบๅฑ๏ผNumPy ๆฐ็ป็ๅ
ๅฑๅ้
ๆนๅผๅCๅFortranๆฏไธๆ ท็ใไปไปฌๅจๅคงๅ็่ฟ็ปญๅ
ๅญไธญๅญๅจใๆญฃๅ ๅฆๆญค๏ผNumPyๆ่ฝๅๅปบๆฏ้ๅธธPythonๅ่กจๅคง่ฎธๅค็ๆฐ็ปใไพๅฆ๏ผๅฆๆๅๅๅปบ10000 * 10000็ไบ็ปดๆตฎ็นๅฝๆฐ๏ผ่ฟๅฏนnumpy่่จๆฏๅพ่ฝปๆพ็ไบๆ
๏ผ
Step111: ๆๆ็้็จๆไฝไป็ถๅฏไปฅๅๆถๆฝๅ ไบๆๆ็ๅ
็ด ไนไธ๏ผ
Step112: ๅ
ณไบNumPy๏ผไธไธช็นๅซๅผๅพๆ่ตท็ๆน้ขๅฐฑๆฏNumPyๆฉๅฑไบpythonๅ่กจ็็ดขๅผๅ่ฝโโๅฐคๅ
ถๆฏ้ๅฏนๅค็ปดๆฐ็ปๆถๆดๆฏๅฆๆญคใ็ฐๅจๆไปฌๆฅๆๅปบไธไธช็ฎๅ็ไบ็ปดๆฐ็ป็ถๅๅไธไบ็ฎๅ็experiment
Step113: 3.10 ็ฉ้ตๅ็บฟๆงไปฃๆฐ็่ฎก็ฎ in "python cookbook"
3.10.1 Question
ๅฆไฝๅฉ็จpythonๆฅ่ฟ่ก็ฉ้ตไนๆณ๏ผๆฑ่กๅๅผ๏ผ่งฃๅณ็บฟๆงๆน็จ็ญ็ญ
3.10.2 ่งฃๅณๆนๆก
NumPy ไธญๆไธช matrix ๅฏน่ฑกๅฏไปฅ็จๆฅๅค็่ฟ็งๆ
ๅตใmatrix ๅฏน่ฑกๅไธ่ฟฐ3.9ไธญๆ่ฟฐ็ๆฐ็ปๅฏน่ฑกๆไบ็ฑปไผผ๏ผไฝๆฏๅจ่ฎก็ฎๆถ้ตๅพช็บฟๆงไปฃๆฐ่งๅใไธ้ข็ไพๅญๅฑ็คบไบๅ ไธช้่ฆ็็นๆง๏ผ
Step114: ๆดๅค็ๆไฝๅฏไปฅๅจnumpy.linalgๅญๆจกๅไธญๆพๅฐ๏ผไพๅฆ๏ผ | Python Code:
import numpy.random as nrandom
data = nrandom.randn(3,2)
data
data*10
data + data
Explanation: ้
่ฏป็ฌ่ฎฐ
ไฝ่
๏ผๆน่ทๆ
Email: [email protected]
ๆถ้ด๏ผๅงไบ2017ๅนด9ๆ12ๆฅ๏ผ ็ปๆๅไฝไบ
็ฌฌๅ็ซ ็ฌ่ฎฐๅงไบ2017ๅนด10ๆ17ๆฅ๏ผ็ปๆไบ2018ๅนด1ๆ6ๆฅ
็ฌฌๅ็ซ Numpyๅบ็ก๏ผๆฐ็ปๅ็ข้่ฎก็ฎ
ๆถ้ด๏ผ 2017ๅนด10ๆ17ๆฅๆฉๆจ
Numpy๏ผๅณ numerical python็็ฎ็งฐ๏ผๆฏ้ซๆง่ฝ็งๅญฆ่ฎก็ฎๅๆฐๆฎๅๆ็ๅบ็กๅ
ใๅฎๆฏๆฌไนฆๆไป็ป็ๅ ไนๆๆ้ซ็บงๅทฅๅ
ท็ๆๅปบๅบ็กใๅ
ถ้จๅๅ่ฝๅฆไธ๏ผ
ndarray๏ผไธไธชๅ
ทๆ็ข้็ฎๆฐ่ฟ็ฎๅๅคๆๅนฟๆญ่ฝๅ็ๅฟซ้ไธ่็็ฉบ้ด็ๅค็ปดๆฐ็ป
ๅจไธ้่ฆๅพช็ฏ็ๆ
ๅตไธ๏ผ็จไบๅฏนๆฐ็ปๅฟซ้่ฟ็ฎ็ๆ ๅๆฐๅญฆๅฝๆฐ
็จไบ่ฏปๅ็ฃ็ๆฐๆฎ็ๅทฅๅ
ทไปฅๅ็จไบๆไฝๅ
ๅญๆ ๅฐๆไปถ็ๅทฅๅ
ท
็บฟๆงไปฃๆฐใ้ๆบๆฐ็ๆไปฅๅๅ
้ๅถๅๅ
็จไบ้ๆ็ฑ CใC++ใFortran ็ญ่ฏญ่จ็ผๅ็ไปฃ็ ็ๅทฅๅ
ท
Numpy ๆฌ่บซๅ่ฝไธๅคๆ๏ผไฝๆฏ็่งฃ Numpy ๆๅฉไบๆด้ซๆๅฐไฝฟ็จ่ฏธๅฆ Pandas ไน็ฑป็ๅทฅๅ
ทใ
ๅไนฆไฝ่
ไธป่ฆไปไบๆฐๆฎๅๆ๏ผๆไปฅไปๅ
ณๆณจ็ๅ่ฝไธป่ฆ้ไธญไบ๏ผ
็จไบๆฐๆฎๆด็ๅๆธ
็ใๅญ้ๆ้ ๅ่ฟๆปคใ่ฝฌๆข็ญๅฟซ้็็ข้ๅๆฐ็ป่ฟ็ฎ
ๅธธ็จ็ๆฐ็ป็ฎๆณ๏ผๅฆๆๅบใๅฏไธๅใ้ๅ่ฟ็ฎ็ญใ
้ซๆๅฐๆ่ฟฐ็ป่ฎกๅๆฐๆฎ่ๅ/ๆ่ฆ่ฟ็ฎ
็จไบๅผๆๆฐๆฎ้็ๅๅนถ/่ฟๆฅ่ฟ็ฎ็ๆฐๆฎๅๅ
ณ็ณปๅๆฐๆฎ่ฟ็ฎ
ๅฐๆกไปถ้ป่พ่กจ่ฟฐไธบๆฐ็ป่กจ่พพๅผ๏ผ่ไธๆฏๅธฆๆif-elif-elseๅๆฏ็ๅพช็ฏ๏ผ
ๆฐๆฎ็ๅ็ป่ฟ็ฎ๏ผ่ๅใ่ฝฌๆขใๅฝๆฐๅบ็จ็ญ๏ผ็ฌฌไบ็ซ ๅฐๅฏนๆญค่ฟ่ก่ฏฆ็ป่งฃ้ใ
ๆณจ๏ผๅปบ่ฎฎๆปๆฏไฝฟ็จ import numpy as np๏ผ ่ไธๆฏ็จ from numpy import *
Numpy ็ ndarray๏ผไธ็งๅค็ปดๆฐ็ปๅฏน่ฑก
ๆถ้ด๏ผ 2017ๅนด10ๆ18ๆฅๆ
Numpy ไธไธช้่ฆ็น็นๅฐฑๆฏๅ
ถ N ็ปดๆฐ็ปๅฏน่ฑก๏ผๅณ ndarray๏ผ่ฏฅๅฏน่ฑกๆฏไธไธชๅฟซ้่็ตๆดป็ๆฐๆฎ้ๅฎนๅจใๆไปฌๅฏไปฅๅฉ็จ่ฟ็งๆฐ็ปๅฏนๆดๅๆฐๆฎ่ฟ่กไธไบ่ฟ็ฎ๏ผๅฎ็่ฏญๆณ่ทๆ ้ๅ
็ด ไน้ด็่ฟ็ฎ็ธๅ๏ผ
End of explanation
data.shape # ๆฐ็ป็็ปดๆฐ๏ผๅณ่กๆฐๅๅๆฐ
data.dtype #ๆฐ็ปไธญๅ
็ด ็็ฑปๅ
data.size #ๆฐ็ป็ๅคงๅฐ
dataconversion = data.astype('int8')
print('data is: ', data)
print('\n dataconversion is ', dataconversion)
Explanation: ndarray ๆฏ ๅๆๆฐๆฎๅค็ปดๅฎนๅจ๏ผthat is to say, ๆๆๅ
็ด ๅฟ
้กปๆฏๅ็ฑปๅ็ใ
ๆฏไธชๆฐ็ป้ฝๆไธไธช shape ๏ผไธไธช่กจ็คบๅ็ปดๅบฆๅคงๅฐ็ๅ
็ฅ๏ผๅไธไธช dtype ๏ผไธไธช็จไบ่ฏดๆๆฐ็ปๆฐๆฎ็ฑปๅ็ๅฏน่ฑก๏ผ๏ผ
End of explanation
import numpy as np
data1 = [2,3,3,5,6,9]
array1 = np.array(data1)
print('data1 is ', type(data1))
print('array1 is ', type(array1))
data1[:]
array1
print(array1)
print(array1.dtype)
print(array1.shape)
print(array1.size)
Explanation: ่ฝ็ถๅคงๅคๆฐๆฐๆฎๅๆๅทฅไฝไธ้่ฆๆทฑๅ
ฅ็่งฃNumpy๏ผไฝๆฏ็ฒพ้้ขๅๆฐ็ป็็ผ็จๅๆ็ปดๆนๅผๆฏๆไธบ Python ็งๅญฆ่ฎก็ฎ่พพไบบ็ไธๅคงๆญฅ้ชคใ
ๆณจๆ๏ผ็ฌฌไธ็็ฟป่ฏ็ๆฌไธญๆไธชๆนๆณจ๏ผ่ฏดโๆฌไนฆไธญ็ๆฐ็ปใNumpyๆฐ็ปใndarray ๅบๆฌๆ็้ฝๆฏๅไธๆ ทไธ่ฅฟ๏ผๅณ ndarray ๅฏน่ฑกโ
ๅๅปบ ndarray
ๅๅปบๆฐ็ปๆ็ฎๅ็ๅๆณๅฐฑๆฏไฝฟ็จ array ๅฝๆฐใๅฎๆฅๅไธๅๅบๅ่ก็ๅฏน่ฑก๏ผๅ
ๆฌๅ
ถไปๆฐ็ป๏ผ๏ผ็ถๅไบง็ไธไธชๆฐ็ๅซๆไผ ๅ
ฅๆฐๆฎ็ NumPy ๆฐ็ปใไปฅๅ่กจ่ฝฌๆขไธบๆฐ็ปๆนๅผไธบไพ๏ผ
End of explanation
import numpy as np
data2=[[23,5,5,6], [4,56,2,8],[3,5,6,7],[2,3,4,5]]
arr2=np.array(data2)
arr2
arr2.ndim #Number of array dimensions.
arr2.shape
arr2.size
Explanation: ๅตๅฅๅบๅ๏ผๆฏๅฆ็ฑไธ็ป็ญ้ฟๅ่กจ็ปๆ็ๅ่กจ๏ผ๏ผๅฐไผ่ขซ่ฝฌๆขไธบไธไธชๅค็ปดๆฐ็ป๏ผ
End of explanation
data.dtype
arr2.dtype
Explanation: ้ค้ๆพ็คบ่ฏดๆ๏ผnp.array ไผๅฐ่ฏไธบๆฐๅปบ็่ฟไธชๆฐ็ปๆจๆญๅบไธไธช่พไธบๅ้็ๆฐๆฎ็ฑปๅใๆฐๆฎ็ฑปๅไฟๅญๅจไธไธช็นๆฎ็ dtype ๅฏน่ฑกไธญใๆฏๅฆ่ฏด๏ผๅจไธ้ข็ไธคไธชexamplesไธญ๏ผๆไปฌๆ
End of explanation
np.zeros(10)
arr4 = np.zeros((3,6,3))
arr4
arr4.ndim
arr3 = np.empty((2,4,2))
arr3
arr3.ndim
arr5 = np.empty((2,3,4,2))
arr5
Explanation: ้ค np.array ไนๅค๏ผ่ฟๆไธไบๅฝๆฐๅฏไปฅๆฐๅปบๆฐ็ปใๆฏๅฆ๏ผzeros ๅ ones ๅๅซๅฏๅๅปบๆๅฎ้ฟๅบฆๆๅฝข็ถ็ๅ
จ 0 ๅ ๅ
จ 1 ๆฐ็ปใempty ๅฏๅๅปบไธไธชๆฒกๆไปปไฝๅ
ทไฝๅผ็ๆฐ็ปใ่ฆ็จ่ฟไบๆนๆณๅๅปบๅค็ปดๆฐ็ป๏ผๅช้่ฆไผ ๅ
ฅไธไธช่กจ็คบๅฝข็ถ็ๅ
็ฅๅณๅฏ๏ผ
End of explanation
np.arange(15)
np.arange(2)
Explanation: ่ญฆๅ ่ฎคไธบ np.emptry ไผ่ฟๅๅ
จ 0 ๆฐ็ป็ๆณๆณๆฏไธๅฎๅ
จ็ใๅพๅคๆ
ๅตไธ๏ผๅฆไธๆ็คบ๏ผ๏ผๅฎ่ฟๅ็้ฝๆฏไธไบๆชๅๅงๅ็ๅๅพๅผใ
arange ๆฏ Python ๅ
็ฝฎๅฝๆฐrange ็ๆฐ็ป็๏ผ
End of explanation
data1 = (1,2,3,4)
np.asarray(data1)
np.array(data1)
data2 = ([2,2])
np.asarray(data2)
import numpy as np
np.arange(15)
# ones / zeros / empty / eye / identity examples:
np.ones(19)
np.zeros(10)
np.empty(4)
np.eye(3)
np.eye(4)
np.identity(2)
np.identity(3)
Explanation: ไธ่กจๅๅบไบไธไบๆฐ็ปๅๅปบๅฝๆฐใ็ฑไบNumpyๅ
ณๆณจ็ๆฏๆฐๅผ่ฎก็ฎ๏ผๅ ๆญค๏ผๅฆๆๆฒกๆ็นๅซ็ๅถๅฎ๏ผๆฐๆฎ็ฑปๅไธ่ฌ้ฝๆฏ float64ใ
|ๅฝๆฐ | ่ฏดๆ |
|-------------|---------------|
| array | ๅฐ่พๅ
ฅๆฐๆฎ(ๅ่กจใๅ
็ฅใๆฐๅญๆ่
ๅ
ถไปๆฐๆฎ็ฑปๅ)่ฝฌๆขไธบ ndarrayใ่ฆไนๆจๆญๅบ dtype๏ผ่ฆไนๆพ็คบๅฐๆๅฎdtypeใ้ป่ฎค็ดๆฅๅคๅถ่พๅ
ฅๆฐๆฎ|
| asarray | ๅฐ่พๅ
ฅ่ฝฌไธบ ndarray๏ผๅฆๆ่พๅ
ฅๆฌ่บซๅฐฑๆฏไธไธชndarrayๅฐฑไธ่ฟ่กๅคๅถ|
| arange | ็ฑปไผผไบpythonๅ
็ฝฎ็range,ไฝๆฏ่ฟๅ็ๆฏไธไธชndarray,่ไธๆฏไธไธชๅ่กจ|
| onesใones_like | ๆ นๆฎๆๅฎ็ๅฝข็ถๅdtypeๅๅปบไธไธชๅ
จ1ๆฐ็ปใones_likeไปฅๅฆไธไธชๆฐ็ปไธบๅๆฐ๏ผๅนถๆ นๆฎๅ
ถๅฝข็ถๅdtypeๅๅปบไธไธชๅ
จ1ๆฐ็ป|
|zerosใzeros_like | ็ฑปไผผไธ่ฟฐๅฝไปค๏ผๅชๆฏๆนไธบๅ
จ0ๆฐ็ป|
|emptyใempty_like|ๅๅปบๆฐๆฐ็ป๏ผๅชๅ้
ๅ
ๅญ็ฉบ้ดไฝไธๅกซๅ
ไปปไฝๅผ|
|eyeใidentity|ๅๅปบไธไธชๆญฃๆน็N * N ๅไฝ็ฉ้ต๏ผๅฏน่ง็บฟไธบ1๏ผๅ
ถไฝไธบ0๏ผ|
End of explanation
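# A quick sketch of the *_like variants from the table above (they are not shown in the
# cells so far): they reuse the shape and dtype of an existing array.
import numpy as np
template = np.arange(6, dtype='f8').reshape(2, 3)
print(np.ones_like(template))    # 2x3 array of 1.0, same dtype as template
print(np.zeros_like(template))   # 2x3 array of 0.0
print(np.empty_like(template))   # 2x3 of uninitialized values -- do not rely on them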
import numpy as np
arr1 = np.array([1,2,3], dtype = np.float64)
arr2 = np.array([1,2,3], dtype = np.int32)
arr1.dtype
arr2.dtype
Explanation: ndarray ็ๆฐๆฎ็ฑปๅ
Recently I just moved from Shanghai to Kyoto, so I had stopped taking notes for almost two weeks.
From now on, I will continue writing these notes. Let's note~
YWFANG @Kyoto University November, 2017
dtype()
dtype ๆฏไธไธช็นๆฎ็ๅฏน่ฑก๏ผๅฎๅซๆndarrayๅฐไธๅๅ
ๅญ่งฃ้ไธบ็นๅฎๆฐๆฎ็ฑปๅ็ๆ้ไฟกๆฏ๏ผ
End of explanation
import numpy as np
arr = np.array([1,2,3,4,5], dtype='i2')
print(arr.dtype)
print(arr)
float_arr = arr.astype(np.float64)
float_arr.dtype
Explanation: dtype ๆฏ NumPy ๅผบๅคง็ๅๅ ไนไธใๅจๅคๆฐๆ
ๅตไธ๏ผๅฎไปฌ็ดๆฅๆ ๅฐๅฐ็ธๅบ็ๆบๅจ่กจ็คบ๏ผ่ฟไฝฟๅพโ่ฏปๅ็ฃ็ไธ็ไบ่ฟๅถๆฐๆฎๆตโไปฅๅโ้ๆไฝ็บง่ฏญ่จ๏ผๅฆfortran"็ญๅทฅไฝๅๅพ็ฎๅใ
ไธ่กจ่ฎฐๅฝไบNumPyๆๆฏๆ็ๅ
จ้จๆฐๆฎ็ฑปๅ๏ผ๏ผ่ฎฐไธไฝๆฒกๆๅ
ณ็ณป๏ผๅๅผๅง่ฎฐไธไฝไนๅพๆญฃๅธธ๏ผ
|็ฑปๅ|็ฑปๅไปฃ็ |่ฏดๆ
|-------------|---------------|
|int8ใunit8| i1ใu1| ๆ็ฌฆๅทๅๆ ็ฌฆๅท็8ไฝ๏ผ1ไธชๅญ่๏ผๆดๅ|
|int16ใunit16| i2ใu2| ๆ็ฌฆๅทๅๆ ็ฌฆๅท็16ไฝ๏ผ2ๅญ่๏ผๆดๅ|
|int32ใunit32| i4ใu4| ใใใ32ไฝใใใ|
|int64ใunit64| i8ใu8|ใใใ64ไฝใใใ|
| float16| f2| ๅ็ฒพๅบฆๆตฎ็นๆฐ|
| flaot32| f4ๆ่
f| ๆ ๅๅ็ฒพๅบฆๆตฎ็นๆฐ๏ผไธC็floatๅ
ผๅฎน|
| float64| f8ๆd | ๆ ๅๅ็ฒพๅบฆๆตฎ็นๆฐ๏ผไธC็doubleๅPython็floatๅฏน่ฑกๅ
ผๅฎน|
|float128| f16ๆ่
g| ๆฉๅฑ็ฒพๅบฆๆตฎ็นๆฐ|
|complex64ใcomplex128|c8ใc16| ๅๅซ็จไธคไธช32ไฝใ64ไฝๆ128ไฝๆตฎ็นๆฐ่กจ็คบ็ๅคๆฐ|
|complex256|c32|ๅคๆฐ|
| bool|๏ผ|ๅญๅจTrue ๆFlase ๅผ็ๅธๅฐ็ฑปๅ|
|object | O | Pythonๅค่ฑก็ฑปๅ|
| string_|S|ๅบๅฎ้ฟๅบฆ็ๅญ็ฌฆไธฒ็ฑปๅ๏ผๆฏไธชๅญ็ฌฆ1ไธชๅญ่๏ผใไพๅฆ๏ผ่ฆๅๅปบไธไธช้ฟๅบฆไฝ10็ๅญ็ฌฆไธฒ๏ผๅบไฝฟ็จS10|
|unicode|U|ๅบๅฎ้ฟๅบฆ็unicode็ฑปๅ๏ผๅญ่ๆฐ็ฑๅนณๅฐๅณๅฎ๏ผใ่ทๅญ็ฌฆไธฒๅฎไนๆนๅผไธๆ ท๏ผๅฆU10๏ผ|
ๆไปฌๅฏไปฅ้่ฟ ndarray ็ astype ๆนๆณๆพ็คบๅฐ่ฝฌๆขๅ
ถdtype๏ผ
End of explanation
import numpy as np
arr = np.array([1.2, 2.3, 4.5, 53.4,3.2,4.2])
print(arr.dtype)
print(arr)
print(id(arr)) #memoery address of arr
print('\n')
#conversion to integer
int_arr = arr.astype(np.int32)
print(int_arr.dtype)
print(int_arr)
Explanation: In the above example, an integer array was converted into a floating array.
In the following example, I will show you how to convert a float array to an int array. You will see that, if I cast some floating point numbers to be of interger type, the decimal part will be truncated.
End of explanation
import numpy as np
num_strings_arr = np.array(['1.25', '-9.6', '42'], dtype = np.string_)
print(num_strings_arr)
print(num_strings_arr.dtype)
float_arr = num_strings_arr.astype(np.float64)
# num_strings_arr.astype(float)
print(float_arr.dtype)
print(float_arr)
# alternatively, we can use a lazy writing
float1_arr = num_strings_arr.astype(float)
print(float_arr.dtype)
print(float_arr)
Explanation: If you have an array of strings representing numbers, you can also use 'astype' to convert them into numberic form:
End of explanation
# in this example, we can see that the int_arry will converted into
# a floating array, in particular, the dtype of calibers was used
# during the conversion using astype(calibers.dtype)
import numpy as np
int_array = np.arange(10)
print(int_array, int_array.dtype)
calibers = np.array([.22, .20, .23,.45, .44], dtype=np.float64)
print(calibers , calibers.dtype)
int_array_new = int_array.astype(calibers.dtype)
print(int_array_new, int_array_new.dtype)
#when stating an array, we can use the short code in the table to assign
# the dtype of the array
# for example
import numpy as np
empty_array = np.empty(8, dtype='u4')
print(empty_array)
print('\n')
zero_array = np.zeros(12, dtype='u4')
print(zero_array, zero_array.dtype)
print('\n')
one_array = np.ones(9, dtype='f8')
print(one_array, one_array.dtype)
print(*one_array)
Explanation: In addition, we can use another arrayโs dtype attribute:
End of explanation
import numpy as np
arr = np.array([[1., 2., 3.,],[3.,5.,6.]])
print(arr.shape)
print(arr)
arr*arr
arr-arr
arr+arr
Explanation: ็นๆฐ๏ผๆฏๅฆfloat64ๅfloat32๏ผๅช่ฝ่กจ็คบ่ฟไผผ็ๅๆฐๅผใๅ ๆญคๅคๆ่ฎก็ฎไธญ๏ผ็ฑไบๅฏ่ฝ็งฏ็ดฏ็ๆตฎ็น้่ฏฏ๏ผๆฏ่พๆตฎ็นๆฐๅญๅคงๅฐๆถ๏ผๅช่ฝๅจไธๅฎ็ๅฐๆฐไฝๆฐไปฅๅ
ๆๆใ
ๆฐ็ปๅๆ ้ไน้ด็่ฟ็ฎ
ๆฐๆฎ็ไพฟๅฉไนๅคๅจไบๅณไฝฟๆไปฌไธ็จloop๏ผไนๅฏไปฅๅฏนๆน้ๆฐๆฎ่ฟ่ก่ฟ็ฎๅๆไฝใ่ฟ็งๆนๅผ้ๅธธๅซๅโ็ข้ๅโ๏ผvectorization๏ผใๅคงๅฐ็ธ็ญ็ๆฐ็ปไน้ด็ไปปไฝ็ฎๆฐ่ฟ็ฎ้ฝไผๅฐ่ฟ็ฎๅบ็จๅฐๅ
็ด ็บง๏ผ
End of explanation
1/arr
arr*2
Explanation: ๅๆ ทๅฐ๏ผๅฝๆฐ็ปไธๆ ้่ฟ่ก็ฎๆฐ่ฟ็ฎๆถ๏ผไนไผ้ๅๅฐๅไธชๅ
็ด
End of explanation
import numpy as np
arr = np.arange(10, dtype='i1')
print(arr)
print(arr.dtype)
print(arr[0],arr[5])
print(arr[0:2])
arr[5:8]=12
print(arr)
#ไฝไธบๅฏนๆฏ๏ผๆไปฌๅ้กพไธไนๅๅ่กจ็ไธไบๆไฝ
list1=[0,1,2,3,4,5,6,7,8,9]
print(list1[:])
print(list1[0:2])
list1[5] = 12
print(list1[:])
list1[5:8]=12 #่ฟ้ๆฏ่ทๆฐ็ปๅพไธๅ็ๅฐๆน
#ๅฆๆไธไฝฟ็จไธไธชiterable๏ผ่ฟ้ๅนถๆ ๆณ่ตๅผ
print(list1[:])
Explanation: ไธๅๅคงๅฐ็ๆฐ็ปไน้ด็่ฟ็ฎๅซๅๅนฟๆญ broadcasting๏ผๆไปฌไนๅ่ฟไผๅจ็ฌฌ12็ซ ่ฟ่กๆทฑๅบฆ็ๅญฆไน ใ
ๅบๆฌ็็ดขๅผๅๅ็
NumPy ๆฐ็ป็็ดขๅผๆฏไธไธชๅ
ๅฎนไธฐๅฏ็ไธป้ข๏ผๅ ไธบ้ๅๆฐๆฎๅญ้ๆ่
ๅไธชๅ
็ด ็ๆนๅผ้ๅธธๅคใไธ็ปดๆฐ็ปๅพ็ฎๅใไป่กจ้ข็๏ผๅฎไปฌ่ทpythonๅ่กจ็ๅ่ฝๅทฎไธๅคใ
End of explanation
import numpy as np
arr = np.arange(10)
print(arr)
arr_slice = arr[5:8]
arr_slice[1] = 12345
print(arr)
arr_slice[:]=123
print(arr)
Explanation: ๅฆไธ้ขไพๅญไธญ็ๅฐ็้ฃ็ง๏ผๅฝๆไปฌๅฐๆ ้่ตๅผ็ปไธไธชๅ็ๆถ๏ผarr[5:8]=12)๏ผ่ฏฅๅผไผ่ชๅจไผ ๆญ๏ผไนๅฐฑๆฏ12็ซ ๅฐ้ๅฐ็broadcasting๏ผๅฐๆดไธช้ๅบใ่ทๅ่กจๆ้่ฆ็ๅบๅซๅจไบ๏ผๆฐ็ปๅ็ๆฏๅๅงๆฐ็ป็่งๅพใ่ฟๆๅณ็ๆฐๆฎไธไผ่ขซๅคๅถ๏ผ่งๅพไธไปปไฝ็ไฟฎๆน้ฝไผ็ดๆฅๅๆ ๅฐๆบๆฐ็ปไธใ
End of explanation
import numpy as np
arr = np.arange(10)
arr_slice = arr[5:8]
arr_slice[1] = 12345
arr1 = arr[5:8]
print(arr1)
arr2 = arr[5:8].copy()
print(arr2)
#in this example๏ผarr1ไป็ถๆฏๆฐ็ป็่งๅพ๏ผ
#ไฝๆฏarr2ๅทฒ็ปๆฏ้่ฟๅคๅถๅพๅฐ็ๅฏๆฌไบ
arr[5:8]=78
print('arr1 = ', arr1)
print('arr2 = ', arr2)
Explanation: ็ฑไบpythonๅธธ็จๆฅๅค็ๅคงๆฐๆฎ๏ผ่ฟ็ง้่ฟๆไฝๆฐ็ป่งๅพๅฐฑๅฏไปฅๆนๅๆบๆฐ็ป็ๆนๅผ๏ผๅฏไปฅ้ฟๅ
ๅฏนๆฐๆฎ็ๅๅคๅคๅถๆๅธฆๆฅ็ๆง่ฝๅๅ
ๅญ้ฎ้ขใ
ๅฆๆๆไปฌๆณ่ฆๅพๅฐ็ๆฏไธไธชๆฐ็ปๅ็็ๅฏๆฌ๏ผ่ไธๆฏ่งๅพ๏ผๅฐฑ้่ฆๆพๅผๅฐ่ฟ่กๅคๅถๆไฝ๏ผไพๅฆ
End of explanation
import numpy as np
arr2d = np.array([[1,2,3],[4,5,6],[7,8,9]])
arr2d[2]
Explanation: ๅฏนไบ้ซ็ปดๆฐ็ป๏ผ่ฝๅ็ไบๆ
ๆดๅคใๅจไธไธชไบ็ปดๆฐ็ปไธญ๏ผๅไธช็ดขๅผไฝ็ฝฎไธ็ๅ
็ด ไธๅๆฏๆ ้๏ผ่ๆฏไธ็ปดๆฐ็ป๏ผ
End of explanation
arr2d[0][2]
arr2d[0,2]
Explanation: ๅ ๆญคๅฏไปฅๅฏนๅไธชๅ
็ด ่ฟ่ก้ๅฝ็่ฎฟ้ฎ๏ผไธ่ฟ่ฟๆ ท้่ฆๅ็ไบๆ
ๆ็นๅคใๆไปฌๅฏไปฅไผ ๅ
ฅไธไธชไปฅ้ๅท้ๅผ็็ดขๅผๅ่กจๆฅ้ๅบๅไธชๅ
็ด ใไนๅฐฑๆฏ่ฏด๏ผไธ้ขไธค็งๆนๅผๆฏ็ญไปท็๏ผ
End of explanation
import numpy as np
arr3d = np.array([[[1,2,3],[4,5,6]],[[7,8,9],[10,11,12]]])
print(arr3d)
arr3d[0] #ๅฎๆฏไธไธช 2*3 ๆฐ็ป
Explanation: ไธๅพ่ฏดๆไบไบ็ปดๆฐ็ป็็ดขๅผๆนๅผ
ๅจๅค็ปดๆฐ็ปไธญ๏ผๅฆๆ็็ฅไบๅ้ข็็ดขๅผ๏ผๅ่ฟๅๅฏน่ฑกไผๆฏไธไธช็ปดๅบฆไฝไธ็น็ndarray๏ผๅฎๅซๆ้ซไธ็บง็ปดๅบฆไธ็ๆๆๆฐๆฎ๏ผใ
่ฟ้ไธญๆ็็ไฝ่
็นๅซ่ฏดๆไบไธ้ข่ฟๅฅ่ฏใๆฌๅทๅค้ข็โ็ปดๅบฆโๆฏไธ็ปดใไบ็ปดใไธ็ปดไน็ฑป็ๆๆ๏ผ่ๆฌๅทๅค้ข็ๅบ่ฏฅ็่งฃไธบโ่ฝดโใไนๅฐฑๆฏ่ฏด๏ผ่ฟ้ๆ็ๆฏโ่ฟๅ็ไฝ็ปดๅบฆๆฐ็ปๅซๆๅๅง้ซ็ปดๅบฆๆฐ็ปๆๆก่ฝดไธ็ๆๆๆฐๆฎใ
ไธ้ข็ไธชไพๅญๆฅ็่งฃ๏ผ
End of explanation
arr3d[0] = 42
print(arr3d)
print(arr3d[0,1])
print(arr3d[1,0])
Explanation: ๆ ้ๅผๅๆฐๅผ้ฝๅฏไปฅ่ตๅผ็ป arr3d[0]:
End of explanation
import numpy as np
arr = np.arange(10)
print(arr)
arr[4]=54
print(arr[1:6])
Explanation: ๆณจๆ๏ผไธ้ขๆๆ้ๅๆฐ็ปๅญ้็ไพๅญไธญ๏ผ่ฟๅ็ๆฐ็ป้ฝๆฏ่งๅพใ
ๅ็็ดขๅผ
ndarray ็ๅ็่ฏญๆณ่ทpythonๅ่กจ่ฟๆ ท็ไธ็ปดๅฏน่ฑกๅทฎไธๅค๏ผ
End of explanation
import numpy as np
arr2d = np.array([[2,3,4],[3,5,5],[3,5,5]])
print(arr2d)
arr2d[:2]
Explanation: ้ซ็ปดๅบฆๅฏน่ฑก็่ฑๆ ทๆดๅค๏ผๆไปฌๅฏไปฅๅจไธไธชๆ่
ๅคไธช่ฝดไธ่ฟ่กๅ็ใไนๅฏไปฅ่ทๆดๆฐ็ดขๅผๆททๅไฝฟ็จใ
End of explanation
arr2d[:2, :2]
Explanation: ไธ่ฟฐๆไปฌๅฏไปฅ็ๅบ๏ผ่ฟ้็ๅ็ๆฏๆฒฟ็็ฌฌ0่ฝด๏ผๅณ็ฌฌไธไธช่ฝด๏ผๅ็็ใๆขๅฅ่ฏ่ฏด๏ผๅ็ๆฏๆฒฟ็ไธไธช่ฝดๅ้ๅๅ
็ด ็ใๆไปฌๅฏไปฅๅๆฌกไผ ๅ
ฅๅคไธชๅ็๏ผๅฐฑๅไผ ๅ
ฅๅคไธช็ดขๅผ้ฃๆ ท๏ผ
End of explanation
arr2d[2,:2]
arr2d[:,:1] #่ฟ้๏ผๆไปฌๅฎ็ฐไบๅฏน้ซ็ปด่ฝด่ฟ่กไบๅ็
Explanation: ๅไธ่ฟฐ่ฟๆ ท็ๅ็ๆนๅผ๏ผๅช่ฝๅพๅฐ็ธๅ็ปดๆฐ็ๆฐ็ป่งๅพใๆไปฌ่ฟๅฏไปฅๅฐๆดๆฐ็ดขๅผไธๅ็ๆททๅไฝฟ็จ๏ผไป่ๅพๅฐไฝ็บฌๅบฆ็ๅ็๏ผ
End of explanation
arr2d[:,:1] = 0
print(arr2d)
Explanation: ่ช็ถๅฐ๏ผๅฏนๅ็่กจ่พพๅผ็่ตๅผๆไฝไนไผ่ขซๆฉๆฃๅฐๆดไธช้ๅบ๏ผ
End of explanation
%reset
import numpy as np
from numpy.random import randn
names = np.array(['Bob', 'Joe', 'Will', 'Bob', 'Will', 'Joe', 'Joe'])
#please make a comparison, if you use
# names = np.array(['Bob', 'Joe', 'Will', 'Bob', 'Will', 'Joe', 'Joe'], dtype='S4')
print(names, names.dtype)
type(names)
print('\n')
data = randn(7,4)
print(data, data.dtype, data.shape)
type(data)
Explanation: ๅธๅฐๅ็ดขๅผ
ๆฅ็่ฟๆ ทไธไธชไพๅญ๏ผๅ่ฎพๆไปฌๆไธไธช็จไบๅญๅจๆฐๆฎ็ๆฐ็ปไปฅๅไธไธชๅญๅจๅงๅ็ๆฐ็ป๏ผๅซๆ้ๅค้กน๏ผใๅจ่ฟ้๏ผๆๅฐไฝฟ็จ numpy.random ไธญ็randnๅฝๆฐ็ๆไธไบๆญฃๆๅๅธ็้ๆบๆฐๆฎใ
End of explanation
names == 'Will'
Explanation: ๅ่ฎพ names ๆฐ็ปไธญ็ๆฏไธชๅๅญ้ฝๅฏนๅบ dataๆฐ็ปไธญ็ไธ่ก๏ผ่ๆไปฌๆณ่ฆ้ๅบๅฏนๅบไบๅๅญโBob'็ๆๆ่กใ่ท็ฎๆฐ่ฟ็ฎไธๆ ท๏ผๆฐ็ป็ๆฏ่พ่ฟ็ฎ๏ผๅฆ==๏ผไนๆฏ็ข้ๅ็ใๅ ๆญค๏ผๅฏนไบnamesๅๅญ็ฌฆไธฒ"Bob"็ๆฏ่พ่ฟ็ฎๅฐไผไบง็ไธไธชboolean array
End of explanation
data[names =='Will']
Explanation: ่ฟไธชBoolean arrayๅฏไปฅ็จไบๆฐ็ป็ดขๅผ๏ผThis boolean array can be passed when indexing the array:
End of explanation
data[names =='Will', 2:]
data[names =='Will', 2]
Explanation: ๅฝๅฉ็จๅธๅฐๅๆฐ็ป่ฟ่ก็ดขๅผๆถๅ๏ผๅฟ
้กปๆณจๆๅธๅฐๅๆฐ็ป็้ฟๅบฆ้่ฆไธ่ขซ็ดขๅผ็่ฝด้ฟๅบฆไธ่ดใๆญคๅค๏ผ่ฟๅฏไปฅๅฐๅธๅฐๅๆฐ็ป่ทๅ็ใๆดๆฐ๏ผๆ่
ๆดๆฐๅบๅ๏ผ็จๅๅฏนๆญค่ฟ่ก่ฏฆ็ป็ไป็ป๏ผๆททๅไฝฟ็จ:
End of explanation
names != 'Will'
print(data[names != 'Will'])
data[-(names == 'Will')]
#this '-' was discarded in python3, alternatively we
# use '~'
data[~(names == 'Bob')]
# in python2, it should be
# data[-(names == 'Bob')]
Explanation: ่ฆ้ๆฉ้คไบwillไปฅๅค็ๅ
ถไปๅผ๏ผๆขๅฏไปฅไฝฟ็จไธ็ญไบ็ฌฆๅท(!=)๏ผไนๅฏไปฅ้่ฟ็ฌฆๅท๏ผ-๏ผๅฏนๆกไปถ่ฟ่กๅฆๅฎ
End of explanation
mask = (names =='Bob') | (names == 'Will')
mask
data[mask]
Explanation: ๅฆๆๆไปฌ่ฆ้ๅ่ฟไธไธชๅๅญไธญ็ไธคไธช่ฟ่ก็ปๅๆฅๅบ็จๅคไธชๅธๅฐๆกไปถ๏ผ้่ฆไฝฟ็จ&๏ผๅ๏ผใ|๏ผๆ๏ผไน็ฑป็ๅธๅฐ่ฟ็ฎ็ฌฆ๏ผ๏ผๆณจๆ๏ผpythonๅ
ณ้ฎๅญandๅorๅจๅธๅฐๅๆฐ็ปไธญๆฏๆ ๆ็๏ผ
End of explanation
data[data<0] = 0
data
Explanation: ๅผๅพๆณจๆ็ๆฏ๏ผ้่ฟๅธๅฐ็ดขๅผ้ๅๆฐ็ปไธญ็ๆฐๆฎ๏ผๅฐๆปๆฏๅๅปบๆฐๆฎ็ๅฏๆฌ๏ผๅณไฝฟ่ฟๅไธๆจกไธๆ ท็ๆฐ็ปไนๆฏๅฆๆญคใ
้่ฟๅธๅฐๅๆฐ็ป่ฎพ็ฝฎๅผๆฏไธ็งๅธธ็จ็ๆนๆณใไธบไบๅฐdataไธญ็ๆๆ่ดๆฐๅไธบ0๏ผๆไปฌๅช้่ฆ
End of explanation
data[names != 'Will'] = 7
data
Explanation: ้่ฟไธ็ปดๅธๅฐๆฐ็ป่ฎพ็ฝฎๆด่กๆๅ็ๅผไนๅพ็ฎๅ๏ผ
End of explanation
#Suppose we had an 8 ร 4 array:
import numpy as np
arr1 = np.zeros((8,4))
print(arr1)
print('\n')
for i in range(8):
arr1[i] = i+1
print(arr1)
Explanation: ่ฑๅผ็ดขๅผ
fancy indexing๏ผๅณ่ฑๅผ็ดขๅผ๏ผๆฏไธไธชNumPyไธไธๆฏ่ฏญ๏ผไปฃๆๅฉ็จๆดๆฐๆฐ็ป่ฟ่ก็ดขๅผใ
End of explanation
arr1[[4,3,0,6]]
Explanation: ไธบไบไปฅ็นๅฎ้กบๅบ้ๅ่กๅญ้๏ผๅช้ไผ ๅ
ฅไธไธช็จไบๆๅฎ้กบๅบ็ๆดๆฐๅ่กจๆndarrayๅณๅฏ๏ผ
End of explanation
arr1[[-4,-3,-1,-6,-0]]
Explanation: ไธ้ข็ไปฃ็ ็๏ผๆไปฌ็จไธไธชๅ่กจ[4,3,0,6]ๅฐฑ้ๅบไบarra1ไธญ็็ฌฌ4๏ผ3๏ผ0๏ผ6็ๅญ้ใ
ๅฆๆๆไปฌไฝฟ็จ่ดๆฐ่ฟ่ก็ดขๅผ๏ผๅ้ๆฉ็้กบๅบๅฐๆฏไปๆซๅฐพๅฐๅผๅคดใ
ๆณจๆ-0ๅ0ๆฏไธๆ ท็๏ผ่ฟๆฏๅผๅคด็็ฌฌไธ่กไฝไธบ0. ่ฟๆฏๅผๅพๆณจๆ็ๅฐๆนใ
End of explanation
# ๅจ็ฌฌ12็ซ ๏ผๆไปฌไผๅฑๅผ่ฎฒ่ฎฒreshape๏ผๅจ่ฟไธชไพๅญไธญ๏ผๆไปฌๅชๆฏไฝฟ็จ reshape
import numpy as np
arr = np.arange(32).reshape((8,4))
print(arr)
print('\n')
arr_select = arr[[1,5,7,2],[0,3,1,2]]
print(arr_select)
Explanation: ไธๆฌกไผ ๅ
ฅๅคไธช็ดขๅผๆฐ็ปไผไผๆฏ่พ็นๅซใๅฎ่ฟๅ็ๆฏไธไธชไธ็ปดๆฐ็ป๏ผๅ
ถไธญ็ๅ
็ด ๅฏนๅบๅไธช็ดขๅผๅ
็ป๏ผ
End of explanation
import numpy as np
arr = np.arange(32).reshape((8,4))
print(arr)
print('\n')
arr_select = arr[[1,5,7,2]][:, [0,3,1,2]]
#1 5 7 2 ้ๅ่ก
#0 3 2 1 ้ๅๅ
print(arr_select)
Explanation: ไปไธ่ฟฐไปฃ็ ็็ปๆ็ไธ้พ็ๅบ๏ผๅพๅบๆฅ็็ปๆๆฏ[1,0] [5,3] [7,1] ๅ [2,2]
้ฃไนๆไน้ๅ็ฉ้ต็่กๅๅญ้ๅข๏ผไธ้ข๏ผๆไปฌๅช้่ฆ็จๅพฎๆนๅจไธไปฃ็ ๅณๅฏๅฎ็ฐ๏ผ๏ผ่ฟ้จๅๆๅฅฝๅ่ฏปๅ ้ๅไนฆ๏ผๅญๅฅไธๅฅฝ็่งฃ๏ผ
End of explanation
import numpy as np
arr = np.arange(32).reshape((8,4))
print(arr)
print('\n')
arr_select = arr[np.ix_([1,5,7,2],[0,3,1,2])]
print(arr_select)
Explanation: ๆญคๅค๏ผ่ฟๅฏไปฅไฝฟ็จ np.ix_ๅฝๆฐๆฅๅฎ็ฐไธ่ฟฐ็ๅ่ฝ๏ผๅฎๅฏไปฅๅฐไธคไธชไธ็ปดๆดๆฐๆฐ็ป่ฝฌๆขไธบไธไธช็จไบ้ๅๆนๅฝขๅบๅ็็ดขๅผๅจ๏ผ
End of explanation
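# A quick sanity check (my own sketch): fancy indexing hands back a copy, not a view,
# unlike slicing.
import numpy as np
arr = np.arange(32).reshape((8, 4))
selection = arr[[1, 5, 7, 2]]    # fancy indexing -> copy
selection[:] = 0
print(arr[[1, 5, 7, 2]])         # the original rows are unchanged
slice_view = arr[1:3]            # slicing -> view
slice_view[:] = -1
print(arr[1:3])                  # the original rows ARE changed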
import numpy as np
arr = np.arange(15).reshape((3,5))
print(arr)
print(arr.T)
print('\n')
print(arr)
Explanation: It should be mentioned that, ่ฑๅผ็ดขๅผไธๅ็ไธไธๆ ท๏ผๅฎๆปๆฏๅฐๆฐๆฎๅคๅถๅฐๆฐๆฐ็ปไธญใ
ๆฐ็ป่ฝฌ็ฝฎๅ่ฝดๅฏน็งฐ
่ฝฌ็ฝฎ๏ผๅณ transpose๏ผๆฏ้ๅก็ไธ็ง้่ฆ็นๆฎๅฝขๅผ๏ผๅฎ่ฟๅ็ๆฏๅๆฐๆฎ็่งๅพ๏ผไธไผ่ฟ่กไปปไฝๅคๅถๆไฝ๏ผใๆฐ็ปไธ็ฆๆtransposeๆนๆณ๏ผ่ฟๆไธไธช็นๆฎ็Tๅฑๆงใ
End of explanation
import numpy as np
from numpy.random import randn
arr = randn(6,3)
print(arr, '\n')
np.dot(arr.T, arr)
Explanation: ๅฝๆไปฌ่ฟ่ก็ฉ้ต้ข็ฎๆถๅ๏ผ่ฟ่ก้่ฆ็จๅฐ่ฝฌ็ฝฎๆไฝใไพๅฆ๏ผ่ฆ็จ np.dot่ฎก็ฎ็ฉ้ตๅ
็งฏX$^T$X๏ผ
End of explanation
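# Side note (my addition): since Python 3.5 the same product can be written with the
# @ operator, which dispatches to the same matrix-multiplication routine.
import numpy as np
from numpy.random import randn
arr = randn(6, 3)
print(np.allclose(arr.T @ arr, np.dot(arr.T, arr)))   # True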
#่ฟ้ๆ็ฎๅไธพไธชไพๅญ
import numpy as np
arr = np.arange(16).reshape((2,2,4))
print(arr)
arr_transpose = arr.transpose((1, 0, 2))  # a full permutation of the axes is required for a 3D array
print(arr_transpose)
Explanation: ๅฏนไบๆด้ซ็ปด็ๆฐ็ป๏ผtranspose ๆถ้่ฆๅพๅฐไธไธช็ฑ่ฝด็ผๅท็ปๆ็ๅ
็ฅๆ่ฝๅฏน่ฟไบ่ฝด่ฟ่ก่ฝฌ็ฝฎ๏ผ่ฟไธชๅฏ่ฝไธๅฅฝ็่งฃ๏ผๅพๅค้
่ฏปๅ ๆฌก๏ผ๏ผ
End of explanation
import numpy as np
arr = np.arange(18).reshape(3,3,2)
print(arr, '\n')
arr_axes1 = arr.swapaxes(0,1)
print(arr_axes1)
print('\n')
arr_axes2 = arr.swapaxes(1,2)
print(arr_axes2)
Explanation: ไปไธ้ขๅ ไธชไพๅญ๏ผๆไปฌๅฏไปฅ็ๅบ๏ผๅฏนไบ็ฎๅ็ไฝ็ปด็ฉ้ต๏ผไฝฟ็จ.Tๅฐฑๅฏไปฅๅฎ็ฐ่ฝฌ็ฝฎ๏ผๆฏ็ซๅชๆฏ่ฟ่ก่ฝดๅฏนๆข่ๅทฒ๏ผไฝๆฏๅฏนไบ้ซ็ปดๆฐ็ป๏ผๅฐฑๆพๅพ้บป็ฆๅฅฝๅคใndarray่ฟๆไธไธชswapaxesๆนๆณ๏ผๅฎ้่ฆๆฅๅไธๅฏน่ฝด็ผๅท๏ผ(ๆณจๆswapaxesไนๆฏ่ฟๅๆบๆฐๆฎ็่งๅพ๏ผๅนถไธไผ่ฟ่กไปปไฝๅคๅถๆไฝใ)
End of explanation
import numpy as np
arr = np.arange(10)
print(arr, '\n')
print(np.sqrt(arr))
print(arr,'\n')
np.exp(arr) #the results are e^N (N = 0, 1, 2,...)
Explanation: ้็จๅฝๆฐ๏ผๅฟซ้็ๅ
็ด ็บงๆฐ็ปๅฝๆฐใ
้็จๅฝๆฐ๏ผๅณufuc๏ผๆฏไธ็งๅฏนndarrayไธญๅฏนๆฐๆฎๆง่กๅ
็ด ็บง่ฟ็ฎๅฏนๅฝๆฐใๆไปฌๅฏไปฅๅฐๅ
ถ็ไฝ็ฎๅๅฏนๅฝๆฐ๏ผๆฅๅไธไธชๆ่
ๅคไธชๆ ้ๅผ๏ผๅนถไบง็ไธไธชๆ่
ๅคไธชๆ ้ๅผ๏ผ็็ข้ๅๅ
่ฃ
ๅจใ
่ฎธๅค unfunc ้ฝๆฏ็ฎๅ็ๅ
็ด ็บงๅไฝ๏ผๅฆsqrtๅexp๏ผ
End of explanation
import numpy as np
from numpy.random import randn
x = randn(8)
print(x,'\n')
y = randn(8)
print(y,'\n')
max_number = np.maximum(x,y)
print(max_number,'\n')
Explanation: ไธ่ฟฐ่ฟไบ้ฝๆฏไธๅ
๏ผunary๏ผufuncใๅฆๅคไธไบ๏ผๅฆaddๆmaximum๏ผๆฅๅ2ไธชๆฐ็ป๏ผๅ ๆญคไนๅซไบๅ
binary ufunc๏ผ๏ผๅนถ่ฟๅไธไธช็ปๆๆฐ็ป๏ผ
End of explanation
import numpy as np
from numpy.random import randn
arr = randn(7)*5
print(arr,'\n')
arr_1 = np.modf(arr)
print(arr_1)
print(type(arr_1))
print(arr_1[1])
Explanation: ๆญคๅค๏ผๆไธๅฐ้จๅ็ufunc๏ผๅฎไปฌๅฏไปฅ่ฟๅๅคไธชๆฐ็ปใmofๅฐฑๆฏไธไธชไพๅญ๏ผๅฎๆฏPythonๅ
็ฝฎๅฝๆฐ
divmod็็ข้ๅ็ๆฌ๏ผ็จไบๅ็ฆปๆตฎ็นๆฐ็ป็ๅฐๆฐๅๆดๆฐ้จๅใ้่ฟไธ้ข็ไพๅญ๏ผๆไปฌไผๅ็ฐ๏ผmofๅ
ถๅฎๅพๅฐ็ๆฏๅ ไธชๆฐ็ป็ปๆ็tuple
End of explanation
import numpy as np
from numpy.random import randn
new = randn(10)
new
np.sign(new)
import numpy as np
new = np.arange(10)
new = new+0.1
print(new,'\n')
np.ceil(new)
import numpy as np
from numpy.random import randn
new = randn(10)
print(new,'\n')
print('rint function:', np.rint(new))
print('isnan function: ', np.isnan(new))
print('isfinite function', np.isfinite(new))
print('isinf function: ', np.isinf(new))
print('logical_not function: ', np.logical_not(new))
#Revieing some knowledge I have learnt
import numpy as np
arr1 = np.arange(16,dtype='i4').reshape(2,2,4)
arr2 = np.arange(10,dtype='float')
print(arr1)
print('\n')
print(arr2)
print('\n')
arr3=arr1.copy()
arr3[1]=23
print(arr3.astype(arr2.dtype))
sum(arr1,arr3)
print('mean value = ', arr1.mean(), '\n' 'max value is ',
arr1.max(), '\n' 'std root = ', arr1.std(), '\n'
'The sum of all the elements = ', arr1.cumsum(),
'\n' 'The multipy of all the elements = ', arr1.cumprod())
Explanation: ไธ่กจไธญๅๅบไบไธไบไธๅ
ๅไบๅ
ufunc
ไธๅ
ufunc
|ๅฝๆฐ|่ฏดๆ|
|------|-----|
|abs, fabs|่ฎก็ฎๆดๆฐใๆตฎ็นๆฐๅ่ดๆฐ็็ปๅฏนๅผใๅฏนไบๅคๆฐๅผ๏ผๅฏไปฅไฝฟ็จๆดๅฟซ็fabs|
|sqrt|่ฎก็ฎๅๅ
็ด ็ๅนณๆนๆ นใ็ธๅฝไบ arr 0.5|
|square|่ฎก็ฎๅๅ
็ด ็ๅนณๆนใ็ธๅฝไบๆฏ arr 2 |
|exp|่ฎก็ฎๅๅ
็ด ็eๆๆฐ๏ผๅณ e$^x$|
|log,log10,log2,log1p|ๅๅซๅฏนๅบ่ช็ถๅฏนๆฐ๏ผไปฅeไธบๅบ๏ผ๏ผๅบๆฐๆฏ10็log๏ผๅบๆฐๆฏ2็log๏ผไปฅๅlog(1+x)|
|sign|่ฎก็ฎๅๅ
็ด ็ๆญฃ่ดๅท๏ผ1ไปฃ่กจๆดๆฐ๏ผ0ไปฃ่กจ้ถ๏ผ-1ไปฃ่กจ่ดๆฐ|
|ceil|่ฎก็ฎๅๅ
็ด ็ceilingๅผ๏ผๅณๅคงไบ็ญไบ่ฏฅๅผ็ๆๅฐๆดๆฐ|
|floor|่ฎก็ฎๅๅ
็ด ็floorๅผ๏ผๅณๅฐไบ็ญไบ่ฏฅๅผ็ๆๅคงๆดๆฐ|
|rint|ๅฐๅๅ
็ด ไนๅ่ไบๅ
ฅๅฐๆๆฅ่ฟ็ๆดๆฐ๏ผไฟ็dtype|
|modf|ๅฐๆฐ็ป็ๅฐๆฐๅๆดๆฐ้จๅไปฅไธคไธช็ฌ็ซ็ๆฐ็ปๅฝขๅผ่ฟๅ|
|isnan| ่ฟๅไธไธช่กจ็คบโๅชไบๅผๆฏNaN๏ผ่ฟไธๆฏไธไธชๆฐๅญ๏ผโ็booleanๆฐ็ป|
|isfiniteใisinf|ๅๅซ่ฟๅไธไธช่กจ็คบโๅชไบๅ
็ด ๆฏๆ็ฉท็๏ผ้inf๏ผ้NaN๏ผโ ๆ่
โๅชไบๅ
็ด ๆฏๆ ็ฉท็โ็ๅธๅฐๅๆฐ็ป|
|cosใcoshใsinใsinhใtan๏ผtanh|ๆฎ้ๅๅๅๆฒๅไธ่งๅฝๆฐ|
|arccosใarccoshใarcsinใarcsinhใarctanใarctanh|ๅไธ่งๅฝๆฐ|
|logical_not| ่ฎก็ฎๅไธชๅ
็ด not x็็ๅผใ็ธๅฝไบ-arr|
ไบๅ
ufunc
|ๅฝๆฐ|่ฏดๆ|
|------|-----|
|add|ๅฐๆฐ็ปไธญๅฏนๅบ็ๅ
็ด ็ธๅ |
|substract|ไป็ฌฌไธไธชๆฐ็ปไธญๅๅป็ฌฌไบไธชๆฐ็ปไธญ็ๅ
็ด |
|multiply|ๆฐ็ปๅ
็ด ็ธไน|
|divideใfloor_divide|้คๆณๆๅไธๅๆด้คๆณ๏ผไธขๅผไฝๆฐ๏ผ|
|power|ๅฏน็ฌฌไธไธชๆฐ็ปไธญ็ๅ
็ด A๏ผๆ นๆฎ็ฌฌไบไธชๆฐ็ปไธญ็็ธๅบๅฅฝๅ
็ด B๏ผ่ฎก็ฎA$^B$|
|maximum, fmax|ๅ
็ด ็บง็ๆๅคงๅผ่ฎก็ฎใfmaxๅฐๅฟฝ็ฅNaN|
|minimumใfmin|ๅ
็ด ็บง็ๆๅฐๅผ่ฎก็ฎใfminๅฐๅฟฝ็ฅNaN|
|mod|ๅ
็ด ็บง็ๆฑๆจก่ฎก็ฎ๏ผ้คๆณ็ไฝๆฐ๏ผ|
|copysign|ๅฐ็ฌฌไบไธชๆฐ็ปไธญ็ๅผ็็ฌฆๅทๅคๅถ็ป็ฌฌไธไธชๆฐ็ปไธญ็ๅผ|
|greaterใgreater_equalใlessใless_equalใequalใnot_equal|ๆง่กๅ
็ด ็บง็ๆฏ่พ่ฟ็ฎ๏ผๆ็ปไบง็booleanๅๆฐ็ปใ็ธๅฝไบไธญ็ผ่ฟ็ฎ>, >=, <, <=, ==, !=|
|logical_andใlogical_orใlogical_xor | ๆง่กๅ
็ด ็บง็็ๅผ้ป่พ่ฟ็ฎใ็ธๅฝไบไธญ็ผ่ฟ็ฎ็ฌฆ '&'๏ผ'$|$'๏ผ'^'|
End of explanation
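# A few of the binary ufuncs from the table above in action (a short sketch):
import numpy as np
a = np.array([1.0, 4.0, np.nan, 7.0])
b = np.array([2.0, 3.0, 5.0, np.nan])
print(np.add(a, b))          # element-wise a + b
print(np.maximum(a, b))      # element-wise max, propagates NaN
print(np.fmax(a, b))         # element-wise max, ignores NaN where it can
print(np.greater(a, b))      # element-wise a > b
print(np.floor_divide(np.array([7, 8, 9]), 2))   # integer division, drops the remainder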
import numpy as np
points = np.arange(-1,1,0.5) # ไบง็4ไธช้ด้ๅไธบ0.5็็นใ
print(points[:10],'\n')
xs, ys = np.meshgrid(points, points)
print('xs is \n',xs,'\n')
print('transposed xs is \n', xs.T)
print('ys is \n', ys, '\n')
Explanation: ๅฉ็จๆฐ็ป่ฟ่กๆฐๆฎๅค็
NumPyๆฐ็ป็็ข้ๅๅจๅพๅคง็จๅบฆไธ็ฎๅไบๆฐๆฎๅค็ๆนๅผใไธ่ฌ่่จ๏ผ็ข้ๅ่ฟ็ฎ่ฆๆฏ็ญไปท็็บฏpythonๆนๅผๅฟซ1-2ไธชๆฐ้็บง๏ผๅฐคๅ
ถๆฏๅจๆฐๅผ่ฎก็ฎๅค็่ฟ็จไธญ่ฟไธชไผๅฟๆดๅ ็ๆๆพใๅจๅ้ข็็ฌฌ12็ซ ่ไธญ๏ผๆไปฌๅฐไบ่งฃๅฐๅนฟๆญ๏ผๅฎๆฏไธ็ง้ๅฏน็ข้ๅ่ฎก็ฎ็ๅผบๅคงๆๆฎตใ
ๅ่ฎพๆไปฌๆณ่ฆๅจไธ็ปๅผ๏ผ็ฝๆ ผๅ๏ผไธ่ฎก็ฎsqrt(x^2+y^2)ใๆไปฌๅฝ็ถๅฏไปฅ้ๆฉ็จloop็ๆนๅผๆฅ่ฎก็ฎ๏ผไฝๆฏๆไปฌๅจ่ฟ้ไฝฟ็จๆฐ็ป็ๆนๆณใ
np.meshgrid ๅฝๆฐๆฅๅไธคไธชไธ็ปดๆฐ็ป๏ผๅนถไบง็ไธคไธชไบ็ปด็ฉ้ต๏ผๅฏน่ฑ่ฏญไธคไธชๆฐ็ปไธญๆๆ็(x,y)ๅฏน๏ผ๏ผ
End of explanation
z = np.sqrt(xs**2 + ys**2)
print('z = \n', z)
Explanation: ็ฐๅจ๏ผๆไปฌๆฅ่ฎก็ฎxsไบๆฌกๆนไธysไบๆฌกๆน็ๅ๏ผ
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
#Here, the matplotlib function 'imshow' was used
# to create an image plot from a 2D array of function values
plt.imshow(z, cmap=plt.cm.gray);
plt.colorbar()
Explanation: Let's try plotting the z array computed above:
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
arr1=np.arange(-1,1,0.001)
print(arr1)
xs1,ys1=np.meshgrid(arr1,arr1)
#print(xs)
z1 = np.sqrt(xs1**2+ys1**2)
print(z1)
plt.imshow(z1, cmap=plt.cm.gray)
plt.colorbar()
Explanation: Above we only used very coarsely spaced points. Next we use much more densely spaced points, which helps us visualize the function sqrt(x^2 + y^2).
End of explanation
import numpy as np
xarr = np.array([1.1, 1.2, 1.3, 1.4, 1.5])
yarr = np.array([2.1, 2.2, 2.3, 2.4, 2.5])
cond = np.array([True, False, True, True, False])
Explanation: Expressing conditional logic as array operations
The numpy.where function is a vectorized version of the ternary expression x if condition else y. Suppose we had a boolean array and two arrays of values:
End of explanation
result = [(x if c else y)
for x, y, c in zip(xarr, yarr, cond)]
print(result)
Explanation: Suppose we wanted to take a value from xarr whenever the corresponding value in cond is True, and otherwise take the value from yarr.
A Python list comprehension doing this would look as follows:
End of explanation
result_where = np.where(cond, xarr, yarr)
print(result_where)
Explanation: This has multiple problems. First, it will not be fast for large arrays (because all the work is being done in interpreted, pure Python code); second, it will not work with multidimensional arrays.
If we use np.where instead, we can write this code very concisely:
End of explanation
from numpy.random import randn
import numpy as np
arr_a = randn(10)
print(arr_a)
arr_b = np.where(arr_a <0, -2, 2)
print(arr_a)
print(arr_b)
Explanation: The second and third arguments to np.where don't need to be arrays; they can be scalars. A typical use of where in data analysis is to produce a new array of values based on another array. Suppose we had a matrix of randomly generated data and we wanted to replace all positive values with 2 and all negative values with -2. We can write:
End of explanation
arr_c = np.where(arr_a < 0, -3, arr_a)
print(arr_c)
Explanation: If we only want to set the negative values to -3, we can use:
End of explanation
# Illustrative only: cond1 and cond2 are boolean arrays (defined in the example below)
result = 1*(cond1 & ~cond2) + 2*(cond2 & ~cond1) + 3*~(cond1 | cond2)
#I wouldn't particularly recommend this style; the original author dropped this
#discussion in the 2017 (second) edition of the book.
Explanation: Highlight: we can use where to express more complicated logic. Imagine an example with two boolean arrays, cond1 and cond2, where we want to assign a different value for each of the four possible combinations of boolean values.
Without where, the pseudocode for this logic would be an if/elif/else chain over the four cases.
Although it is not immediately obvious, the same logic can be expressed with nested where calls:
np.where(cond1 & cond2, 0,
np.where(cond1, 1,
np.where(cond2, 2, 3)))
In this particular example, we can also take advantage of the fact that boolean values are treated as 0 or 1 in calculations, and rewrite result as the arithmetic expression shown a couple of cells above.
End of explanation
import numpy as np
from numpy.random import randn
x1 = randn(10)
y1 = randn(10)
cond1 = np.where(x1<0, True, False)
cond2 = np.where(y1>0, True, False)
result=np.where(cond1 & cond2, 0,
np.where(cond1, 1,
np.where(cond2, 2, 3)))
print(result)
Explanation: Now let's apply the nested np.where from above (a comparison with the boolean-arithmetic version is sketched right after this explanation):
End of explanation
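# For comparison (my own addition, assuming cond1, cond2 and result from the previous
# cell are still in scope): the same four-way logic written with boolean arithmetic,
# relying on True/False being treated as 1/0 in calculations.
import numpy as np
result_arith = (0 * (cond1 & cond2)
                + 1 * (cond1 & ~cond2)
                + 2 * (cond2 & ~cond1)
                + 3 * ~(cond1 | cond2))
print(np.array_equal(result, result_arith))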
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
arr = np.random.randn(5,4)
print(arr)
plt.imshow(arr, cmap=plt.cm.gray)
plt.colorbar()
arr[0,2]
arr[0,3]
Explanation: Mathematical and Statistical Methods
A set of mathematical functions that compute statistics about an entire array or about the data along an axis are accessible as array methods. You can use aggregations (often called reductions) like 'sum', 'mean', and 'std' either by calling the array instance method or using the top-level NumPy function.
End of explanation
arr.mean() #่ฟ้็ไฝฟ็จๆนๆณๅฐฑๆฏไฝไธบ array instance method
np.mean(arr) # ่ฟ้็ไฝฟ็จๆนๆณๅฐฑๆฏ top-level Numpy function
arr.sum()
Explanation: In the code above, I generated some normally distributed random data and used the imshow function to plot this two-dimensional array. We can now use aggregate statistics to do some computations (in fact, some of these array instance methods were already used earlier).
End of explanation
arr.mean(axis=1) # compute mean across the columns
arr.mean(axis = 0 ) # compute mean down the rows
#If the axis = 0 / axis = 1 results look confusing, revisit the earlier section on
#indexing two-dimensional arrays, in particular the figure of NumPy array element
#indexing that shows the relative directions of axis 0 and axis 1.
Explanation: Functions like mean and sum take an optional axis argument (to compute the statistic over the given axis), and the result is an array with one fewer dimension than the original:
End of explanation
arr1 = np.array([0,1,2,3,4,5,6,7])
arr1.cumsum()
Explanation: Other methods like cumsum and cumprod do not aggregate; instead they produce an array of the intermediate results:
End of explanation
arr2 = np.array([[0,1,2],[3,4,5],[6,7,8]])
arr2.cumsum(0)
Explanation: In multidimensional arrays, accumulation functions like cumsum return an array of the same size, but with the partial aggregates computed along the indicated axis according to each lower dimensional slice:
End of explanation
import numpy as np
from numpy.random import randn
arr = randn(100)
print(arr)
(arr>0).sum() # ๆญฃๅผ็ไธชๆฐ
Explanation: Methods for Boolean arrays
In the methods above, boolean values are coerced to 1 (True) and 0 (False). Thus, sum is often used to count the True values in a boolean array:
End of explanation
bools = np.array([False, False, True, False])
bools.any() # any of them if True, then the return result is True
bools.all() # all of them should be True, otherwise the return result is False
Explanation: There are two additional methods, any and all, that are especially useful for boolean arrays. any tests whether one or more values in an array is True, while all checks whether every value is True.
End of explanation
import numpy as np
from numpy.random import randn
arr_a = randn(8)
print(arr_a)
arr_a.sort() # ๆณจๆ๏ผๅฎๅฐ็ดๆฅๆนๅๆฐ็ปๆฌ่บซ
print(arr_a)
Explanation: Sorting
Like Python's built-in list type, NumPy arrays can be sorted in place with the sort method:
End of explanation
import numpy as np
arr_b = randn(4,5)
print(arr_b)
arr_c = arr_b.copy()
print('\n')
print(arr_c)
arr_b.sort(1) # sort along axis = 1: each 1D row is sorted individually
print(arr_b)
arr_c[2].sort() #here we only sort the 1D sub-array with index 2
print(arr_c)
Explanation: For multidimensional arrays, sorting is done along the given axis number. Here is an example with a two-dimensional array:
End of explanation
np.sort(arr_c)
print(arr_c, '\n')
print(np.sort(arr_c))
Explanation: The top-level method np.sort returns a sorted copy of an array instead of modifying the array in place. This is what distinguishes np.sort from the array's own sort instance method.
End of explanation
short_arr = randn(10)
short_arr.sort()
print(short_arr, '\n', len(short_arr))
short_arr[int(0.1*len(short_arr))] #value at the 10% quantile position
short_arr[int(0*len(short_arr))] #value at the 0% quantile (the minimum)
Explanation: One application of sorting an array is computing its quantiles.
A quick-and-dirty way to compute the quantiles of an array is to sort it, and select the value at a particular rank.
End of explanation
large = randn(2000)
large.sort()
large[int(0.5*len(large))]
Explanation: Above we only used a very small array, where the quantile values can be read off at a glance. The convenience of sort shows up when the array becomes large. For example (a quick cross-check against np.percentile is sketched right after this explanation):
End of explanation
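# Cross-check against np.percentile (my own addition, assuming `large` from the
# previous cell is in scope): the two values should be close, though the
# interpolation methods differ slightly.
import numpy as np
print(large[int(0.05 * len(large))])
print(np.percentile(large, 5))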
import numpy as np
names = np.array(['Bob', 'Joe', 'Will', 'Bob', 'Will', 'Joe', 'Joe'])
np.unique(names)
ints = np.array([3,3,41,4424,523,523,22,22,43]
)
np.unique(ints)
Explanation: NumPy's sorting methods, along with more advanced techniques such as indirect sorts, are discussed in more detail in Chapter 12; Pandas also has some specialized sorting features of its own.
Unique and Other Set Logic
NumPy has some basic set operations for one-dimensional ndarrays. Probably the most commonly used one is np.unique, which returns the sorted, distinct values in an array.
End of explanation
sorted(set(names))
sorted(set(ints))
Explanation: Contrast np.unique with the equivalent pure Python alternative:
End of explanation
values = np.array([6,623,43,22,3])
np.in1d(values,[6,43,22])
Explanation: Another function, np.in1d, tests membership of the values in one array in another, returning a boolean array:
End of explanation
np.unique(values)
np.intersect1d([3,6],[3,22,43])
np.union1d([3,6],[3,22,43])
np.in1d([3,6],[3,22,43])
np.in1d([3,6],[3,6,22])
np.setdiff1d([3,22,6],[6])
np.setxor1d([3,22,6],[6])
Explanation: Here are some of the basic set functions in NumPy:
Array set operations
|Function|Description|
|------|-----|
|unique(x)|Compute the sorted, unique elements in x|
|intersect1d(x, y)|Compute the sorted, common elements in x and y|
|union1d(x, y)|Compute the sorted union of elements|
|in1d(x, y)|Compute a boolean array indicating whether each element of x is contained in y|
|setdiff1d(x, y)|Set difference: elements in x that are not in y|
|setxor1d(x, y)|Set symmetric difference: elements that are in either of the arrays, but not both (an exclusive or)|
End of explanation
import numpy as np
arr_c = np.arange(10)
np.save('./chapter04/some_array',arr_c)
Explanation: File Input and Output with Arrays
NumPy is able to save and load data to and from disk in text or binary format. In this section we only discuss NumPy's built-in binary format, mainly because most Python users prefer pandas and other tools for loading text and tabular data; those are discussed in later chapters.
Storing arrays on disk in binary format
np.save and np.load are the two workhorse functions for saving and loading array data on disk. By default, arrays are saved in an uncompressed raw binary format with the file extension .npy.
End of explanation
np.load('./chapter04/some_array.npy') # note: the .npy extension must be written out when loading
Explanation: If the file path does not already end in .npy when saving, the extension is appended automatically. The array on disk can then be loaded with np.load:
End of explanation
np.savez('./chapter04/array_archive.npz', a = arr_c, b = arr_c)
np.savez_compressed('./chapter04/array_compressed.npz', a1 = arr_c,
b1 = arr_c)
Explanation: np.savez saves multiple arrays in a single uncompressed .npz archive. (Note: the first Chinese translation of the book described the .npz file as compressed, which is wrong; the second edition, based on Python 3, corrects this. The NumPy documentation confirms that np.savez does not compress; use np.savez_compressed if compression is wanted.) Pass the arrays as keyword arguments:
End of explanation
arch = np.load('./chapter04/array_archive.npz')
arch['b']
arch = np.load('./chapter04/array_compressed.npz')
arch['b1']
Explanation: When loading an .npz file, we get back a dict-like object that loads the individual arrays lazily:
End of explanation
!cat ./chapter04/array_ex.txt #for windows system, use !type
Explanation: Saving and loading text files
Loading text from files is a fairly standard Python task, though Python's file-reading functions can easily confuse newcomers, so we will mainly focus on the read_csv and read_table functions in pandas. Occasionally it is useful to load data into plain NumPy arrays using np.loadtxt or the more specialized np.genfromtxt.
These functions have many options: specifying various delimiters, converter functions for particular columns, the number of rows to skip, and so on. Here we take a simple comma-separated (CSV) file as an example:
End of explanation
arr = np.loadtxt('chapter04/array_ex.txt', delimiter = ',')
arr
print(arr)
Explanation: The file can then be loaded into a two-dimensional array, as shown below:
End of explanation
np.savetxt('./chapter04/array_ex-savetxt.txt', arr)
!cat chapter04/array_ex-savetxt.txt
Explanation: np.savetxt performs the inverse operation: writing an array to a delimited text file. genfromtxt is similar to loadtxt but is geared towards structured arrays and missing-data handling; structured arrays are covered in Chapter 12.
End of explanation
x = np.array([[1., 2., 3.],[4., 5., 6.]])
y = np.array([[6.,23.,], [-1, 7], [8, 9]])
x
y
x.dot(y)
Explanation: Linear Algebra
Linear algebra, like matrix multiplication, decompositions, determinants, and other square matrix math, is an important part of any array library. Unlike MATLAB, multiplying two two-dimensional arrays with * is an element-wise product instead of a matrix dot product. Thus, there is a function 'dot', both an array method and a function in the numpy namespace, for matrix multiplication:
End of explanation
import numpy as np
np.dot(x,y)
Explanation: x.dot(y) is equivalent to np.dot(x,y)
End of explanation
np.dot(x, np.ones(3))
import numpy as np
x1 = np.array([[2,2],[3,3]])
y1 = np.array([[1,1],[1,1]])
dotvalue1=x1.dot(y1)
dotvalue2=np.dot(x1,y1)
print('dotvalue1 = \n', dotvalue1, '\n' 'dotvalue2 = \n', dotvalue2)
Explanation: A matrix product between a 2D array and a suitably sized 1D array result in a 1D array:
End of explanation
from numpy.linalg import inv, qr
from numpy.random import randn
X = randn(5,5)
mat = X.T.dot(X)
inv(mat)
mat.dot(inv(mat))
q, r = qr(mat)
# QR decomposition is decomposition of a matrix A into a product
# A = Q R
# where Q is an orthogonal matrix, and R is upper triangular matrix
q
q.dot(q.T) # we can see that the result is (approximately) the identity matrix
r
Explanation: numpy.linalg has a standard set of matrix decompositions and things like inverse and determinant. These are implemented under the hood via the same industry-standard Fortran libraries used in MATLAB and R, such as BLAS, LAPACK, or possibly Intel MKL (depending on your NumPy build):
End of explanation
import numpy as np
from numpy.random import randn
arr_d = np.arange(10)
arr_e= randn(16).reshape(4,4)
print(arr_d, '\n', arr_e)
np.diag(arr_d) # for a 1D array, diag builds a square matrix with it on the diagonal
np.diag(arr_e) # for a 2D (or higher-dimensional) input, diag returns the diagonal elements
Explanation: Here I go through the commonly used numpy.linalg functions one example at a time. The book uses a single table, but I have already copied several tables while reading it and they don't stick very well; one function with one example each is probably easier to remember. (2017/12/25)
|Function|Description|
|---|---|
|diag|Return the diagonal (or off-diagonal) elements of a square matrix as a 1D array, or convert a 1D array into a square matrix with zeros off the diagonal|
End of explanation
np.trace(arr_e)
np.trace(np.diag(arr_d))
arr_f = np.arange(64).reshape(4,4,4)
print(arr_f
)
np.trace(arr_f[1])
#Note that for multidimensional arrays we can also take the trace of a lower-dimensional slice
Explanation: |Function|Description|
|---|---|
|dot|Matrix multiplication|
dot was already demonstrated above, so it is skipped here.
|Function|Description|
|---|---|
|trace|Compute the sum of the diagonal elements|
End of explanation
from numpy.linalg import det
det(arr_e)
Explanation: |Function|Description|
|---|---|
|det|Compute the matrix determinant|
End of explanation
import numpy as np
from numpy.linalg import eig
eig(arr_e)
# computes the eigenvalues and eigenvectors of a square matrix
#For the mathematical background of this function, see a linear algebra textbook
Explanation: |Function|Description|
|---|---|
|eig|Compute the eigenvalues and eigenvectors of a square matrix|
End of explanation
from numpy.linalg import inv
# if 'a' is a matrix object,
# the return value is a matrix as well
a = np.array([[1., 2.], [3., 4.]])
ainv = inv(a)
print(a, '\n', ainv)
# inverses of several matrices can be computed at once:
b = np.array([
[
[1.,2.],[3., 4.]
],
[
[1., 3.],[3., 5.]
]
])
binv = inv(b)
binv
Explanation: |Function|Description|
|---|---|
|inv|Compute the inverse of a square matrix|
End of explanation
from numpy.linalg import pinv
from numpy.random import randn
c = randn(9,6)
bpinv = pinv(c)
bpinv
Explanation: |Function|Description|
|---|---|
|pinv|Compute the Moore-Penrose pseudo-inverse of a matrix|
The pseudo-inverse of a matrix, $A^+$, is defined as "the matrix that 'solves' [the least-squares problem] Ax = b", i.e., if $\bar{x}$ is said solution, then $A^+$ is the matrix such that $\bar{x}$ = $A^+$b. (A small sanity check of the result above is sketched right after this explanation.)
For more information, please refer to linear algebra books.
End of explanation
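# Sanity check of the pseudo-inverse computed above (my own addition, assuming c and
# bpinv from the previous cell are in scope): a true pseudo-inverse satisfies
# c @ pinv(c) @ c == c up to floating-point error.
import numpy as np
print(np.allclose(c.dot(bpinv).dot(c), c))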
import numpy as np
from numpy.random import randn
a = randn(9,6) + 1j*randn(9,6)
a
# Reconstruction based on full SVD
# factors the matrix a as u * np.diag(s) * V,
# where u and v are unitary and s is a 1D array of a's
# singular values
U, s, V = np.linalg.svd(a, full_matrices = True)
U.shape, s.shape, V.shape
s
Explanation: |Function|Description|
|---|---|
|qr|Compute the QR decomposition|
qr was shown above, so it is skipped here.
|Function|Description|
|---|---|
|svd|Compute the singular value decomposition (SVD)|
(A reconstruction check is sketched right after this explanation.)
End of explanation
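# Reconstruction check for the SVD above (my own addition, assuming a, U, s, V from
# the previous cell are in scope): with full_matrices=True, U is (9, 9), s has 6
# entries and V is (6, 6), so only the first 6 columns of U are needed to rebuild a.
import numpy as np
a_rebuilt = np.dot(U[:, :6] * s, V)
print(np.allclose(a, a_rebuilt))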
# Solve the systems of equatons 3* x0 + x1 = 9 and x0 + 2*x1 =8
from numpy.linalg import solve
a = np.array([[3,1],[1,2]])
b = np.array([9,8])
x = solve(a,b)
a
b
x
# check that the solution is correct:
np.allclose(np.dot(a, x), b)
Explanation: |Function|Description|
|---|---|
|solve|Solve the linear system Ax = b for x, where A is a square matrix|
Note: the solutions are computed using the LAPACK routine _gesv. 'a' must be square and of full rank, i.e., all rows (or, equivalently, columns) must be linearly independent; if either condition fails, use lstsq for the least-squares best "solution" of the system/equation.
End of explanation
x = np.array([0,1,2,3])
y = np.array([-1, 0.2, 0.9, 2.1])
Explanation: |Function|Description|
|---|---|
|lstsq|Compute the least-squares solution to $ax = b$|
numpy.linalg.lstsq(a, b, rcond=-1)
Return the least-squares solution to the equation $a$ $x$ = $b$ by computing a vector $x$ that minimizes the Euclidean 2-norm $||b - ax||^2$. The equation may be under-, over-, or well-determined (i.e. the number of linearly independent rows of $a$ can be less than, equal to, or greater than its number of linearly independent columns). If $a$ is square and of full rank, then $x$ (but for round-off error) is the "exact" solution of the equation. (adapted from the scipy.org documentation)
Fit a line, $y = mx + c$, through some noisy data-points:
End of explanation
A = np.vstack([x, np.ones(len(x))]).T
# np.vstack stacks arrays on top of each other in order;
# here it stacks x and np.ones(4) together
A # i.e. A = [[x, 1]]
from numpy.linalg import lstsq
m, c = lstsq(A, y)[0]
print(m, c)
# Now plot the data together with the fitted line
####basic settings started
import matplotlib.style
import matplotlib as mpl
mpl.style.use('classic')
mpl.rcParams['figure.facecolor'] = '1'
#if choose the grey backgroud, use 0.75
mpl.rcParams['figure.figsize'] = [6.4,4.8]
mpl.rcParams['lines.linewidth'] = 1.5
mpl.rcParams['legend.fancybox'] = True
#####basic settings finished
%matplotlib inline
# plot inline jupyter
import matplotlib.pyplot as plt
# plot orginal data (i.e. four points)
plt.plot(x, y, 'o', label = 'Original data', ms =14)
# plot the fitted line using red line style and linewidth = 2
plt.plot(x, m*x + c, 'r', lw=2, label = 'Fitted line')
# plot the legend
plt.legend()
# plot grid
plt.grid()
plt.show()
#Since numpy.vstack was used above, here is a quick aside on how vstack works
# the opposite operation of vstack is vsplit
import numpy as np
a = np.array([1,2,3])
b = np.array([4,5,6])
ab = np.vstack((a, b))
m, n= np.vsplit(ab, 2) # split ab into 2 pieces, stored in m and n respectively
print(ab)
print(m)
Explanation: By examining the x and y points above, we can see that the slope of the line is roughly 1 and the intercept on the y axis is around -1.
We can rewrite the linear equation as $y$ = A p, where A = [[x, 1]] and p = [[m], [c]]. Now we use lstsq to solve for p:
End of explanation
import numpy as np
samples = np.random.normal(size=(4,4))
samples
Explanation: Pseudorandom Number Generation
The numpy.random module supplements the built-in Python random module with functions for efficiently generating whole arrays of sample values. For example, we can get a 4 x 4 array of samples from the standard normal distribution using normal:
End of explanation
from random import normalvariate
N = 1000000
%timeit samples = [normalvariate(0,1) for _ in range(N)]
#note: the code printed in the Chinese translation of the book is wrong here;
#the translator wrote xrange, which only exists in Python 2
%timeit np.random.normal(size=N)
Explanation: By contrast, Python's built-in random module only samples one value at a time. Let's compare the two approaches; we will see that the numpy module is dramatically faster:
End of explanation
####basic settings started
import matplotlib.style
import matplotlib as mpl
mpl.style.use('classic')
mpl.rcParams['figure.facecolor'] = '1'
#if choose the grey backgroud, use 0.75
mpl.rcParams['figure.figsize'] = [6.4,4.8]
mpl.rcParams['lines.linewidth'] = 1.5
mpl.rcParams['legend.fancybox'] = True
#####basic settings finished
%matplotlib inline
# plot inline jupyter
import matplotlib.pyplot as plt
import numpy as np
position = 0 # initial position
walk = []
steps = 1000
for _ in range(steps):
stepwidth = 1 if np.random.randint(0,2) else -1
position += stepwidth
walk.append(position)
#print(walk)
#plot this trajectory
plt.plot(walk[:1000])
Explanation: The table below lists some of the functions in numpy.random. In the next section I'll give some examples that use these functions to generate large batches of sample values at once. (A short demonstration of a few of them appears right after this explanation.)
|Function|Description|
|---|---|
|seed|Seed the random number generator|
|permutation|Return a random permutation of a sequence, or a permuted range|
|shuffle|Randomly permute a sequence in place|
|rand|Draw samples from a uniform distribution|
|randint|Draw random integers from a given low-to-high range|
|randn|Draw samples from a normal distribution with mean 0 and standard deviation 1 (MATLAB-like interface)|
|binomial|Draw samples from a binomial distribution|
|normal|Draw samples from a normal (Gaussian) distribution|
|beta|Draw samples from a beta distribution|
|chisquare|Draw samples from a chi-square distribution|
|gamma|Draw samples from a gamma distribution|
|uniform|Draw samples from a uniform [0, 1) distribution|
Example: random walks
Random walks are one of the best illustrations of array operations. Consider a simple random walk starting at 0, with steps of 1 or -1 occurring with equal probability. We take 1000 steps and look at the trajectory we get.
End of explanation
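# Small demonstration of a few functions from the table above (my own addition):
# seed makes results reproducible, permutation returns a shuffled copy,
# shuffle permutes in place, and binomial draws counts of successes.
import numpy as np
np.random.seed(12345)
print(np.random.permutation(5))        # a shuffled range 0..4
arr = np.arange(5)
np.random.shuffle(arr)                 # in-place shuffle
print(arr)
print(np.random.binomial(10, 0.5, size=5))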
import random
position = 0 # initial position
walk = []
steps = 1000
for _ in range(steps):
stepwidth = 1 if random.randint(0,1) else -1
position += stepwidth
walk.append(position)
#print(walk)
#plot this trajectory
plt.plot(walk[:1000])
Explanation: Note the difference between my earlier code and the book: I used numpy.random rather than Python's own random standard library. The two differ in an important way:
numpy.random.randint(a, b) returns integers between a and (b-1), i.e. including a and b-1 but excluding b, while
Python's built-in random.randint(a, b) returns integers between a and b, including both a and b.
(A quick check of the two conventions is sketched below.)
End of explanation
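# Quick check of the two randint conventions described above (my own addition):
# numpy's upper bound is exclusive, the standard library's is inclusive.
import random
import numpy as np
print(set(np.random.randint(0, 3, size=1000)))          # {0, 1, 2}: 3 is excluded
print(set(random.randint(0, 3) for _ in range(1000)))   # {0, 1, 2, 3}: 3 is included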
nsteps = 1000
import numpy as np
draws = np.random.randint(0, 2, size = nsteps)
steps = np.where(draws > 0, 1, -1)
walk = steps.cumsum()
walk.min()
walk.max()
Explanation: The walk values above are simply the cumulative sum of the random steps. In the versions above I drew one random number per step, but we can use numpy.random.randint to draw all N random numbers at once; here N = 1000:
End of explanation
(np.abs(walk) >= 10).argmax()
#Note that using argmax here is not always efficient because
#it always makes a full scan of the array. In this special case,
#once a True is observed we know it to be the maximum value.
Explanation: A more complicated statistic is the 'first crossing time', the step at which the random walk reaches a particular value. Here we might want to know how long it took the random walk to get at least 10 steps away from the origin 0 in either direction. np.abs(walk) >= 10 gives us a boolean array indicating where the walk has reached or exceeded 10, but we want the index of the first 10 or -10.
End of explanation
import numpy as np
nwalks = 5000
nsteps = 1000
draws = np.random.randint(0, 2, size=(nwalks, nsteps)) # 0 or 1
steps = np.where(draws > 0, 1, -1)
walks = steps.cumsum(1)
walks
walks.max()
walks.min()
Explanation: Simulating many random walks at once
If we want to simulate many random walks, we only need a small tweak to the code above. Passing a 2-tuple as the size to numpy.random produces a two-dimensional array, and then we can compute the cumulative sums of 5000 random walks (one per row) in one shot.
End of explanation
hits30 = (np.abs(walks) >= 30).any(1)
hits30
hits30.sum() # number of walks that reached 30 or -30
Explanation: Out of these walks, we can compute the minimum crossing time to 30 or -30. This takes a little thought, because not all 5000 of the walks actually reach 30. We can check for that with the any method:
End of explanation
crossing_times= (np.abs(walks[hits30]) >= 30).argmax(1)
crossing_times.mean()
Explanation: We then use this boolean array to select the rows (walks) that actually crossed the absolute 30 level, and call argmax across axis 1 to get the crossing times:
End of explanation
steps = np.random.normal(loc=0, scale=0.25,
size=(nwalks,nsteps))
steps
Explanation: Feel free to experiment with other distributions for the steps; just use a different random number generation function. For example, normal generates normally distributed steps with a given mean and standard deviation:
End of explanation
#python list
x = [1,2,3,4]
y = [5,6,7,8]
x*2
x+10
x+y
#numpy arrays
import numpy as np
ax = np.array([1,2,3,4])
ay = np.array([5,6,7,8])
ax*2
ax+10
ax+ay
Explanation: Appendix for chapter04-note
date: 2018 Feb.
I add some notes on array operations here; the reference is the Python Cookbook by David Beazley et al.
3.9 Calculating with large numerical arrays (from "Python Cookbook")
We need to perform calculations on large datasets such as arrays or grids. When doing heavy numerical work, always use NumPy rather than plain Python list arithmetic: NumPy is far more efficient. The example below illustrates the difference between lists and NumPy arrays:
End of explanation
def f(x):
return 3*x**2 + 2*x +7
f(ax)
Explanation: As shown above, NumPy array operations are applied to every element at once, which makes array computations simple and fast. For example, we can quickly evaluate a polynomial:
End of explanation
np.sqrt(ax)
np.cos(ax)
Explanation: NumPy provides a collection of "universal functions" that also operate directly on arrays. They can be used as replacements for the corresponding functions in the math module. For example:
End of explanation
grid = np.zeros(shape=(10000,10000), dtype=float)
grid
Explanation: Using NumPy's universal functions is many times faster than iterating over the array and calling math-module functions one element at a time, so prefer the universal functions whenever possible. (A rough timing comparison is sketched right after this explanation.)
Under the hood, NumPy arrays are allocated the same way as in C or Fortran: they live in large contiguous blocks of memory. Because of this, NumPy can create arrays far larger than would be practical with ordinary Python lists. For example, a 10000 x 10000 two-dimensional array of floats is no problem for NumPy:
End of explanation
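# Rough timing comparison backing up the claim above (my own addition; exact numbers
# depend on the machine): np.sqrt on a whole array vs. math.sqrt in a Python loop.
import math
import time
import numpy as np
vals = np.random.rand(1000000)
t0 = time.perf_counter(); _ = np.sqrt(vals); t1 = time.perf_counter()
t2 = time.perf_counter(); _ = [math.sqrt(v) for v in vals]; t3 = time.perf_counter()
print('ufunc: %.4fs, loop: %.4fs' % (t1 - t0, t3 - t2))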
grid+10
np.sin(grid+10)
Explanation: All of the universal operations can still be applied to every element at once:
End of explanation
import numpy as np
x=list(range(1,5))
y=list(range(5,9))
z=list(range(9,13))
a = np.array(x)
b = np.array(y)
c = np.array(z)
array1 = np.concatenate((a, b, c), axis=0)
array2 = np.stack((a, b, c), axis=0)
array1
array2
#select row 1
array2[1]
#select column 1
array2[:,1]
array2[1:3,1:3]
array2[1:3,1:3] += 10
array2
#broadcast a row vector across an operation on all rows
array2+[100,101,102,103]
array2
#conditional assigan on an array
np.where(a < 10, a, 10) #a < 10 is the condition; where it is true, return a. np.where was introduced earlier in this chapter
Explanation: One especially notable aspect of NumPy is that it extends Python's list indexing functionality, particularly for multidimensional arrays. Let's build a simple two-dimensional array and run a few simple experiments:
End of explanation
import numpy as np
m = np.matrix([[1,-2,3],[0,4,5],[7,8,-9]])
m
#Return the transpose
m.T
#Return the inverse
m.I
# create a vector and multiply
v = np.matrix([[2],[3],[4]])
v
m*v
Explanation: 3.10 Matrix and linear algebra calculations (from "Python Cookbook")
3.10.1 Question
How do we use Python to do matrix multiplication, compute determinants, solve linear equations, and so on?
3.10.2 Solution
NumPy has a matrix object that handles these cases. matrix objects are similar to the array objects described in 3.9 above, but they follow linear algebra rules during computation. The example below demonstrates a few important features:
End of explanation
import numpy as np
import numpy.linalg as nlg
#Determinant
nlg.det(m)
#Eigenvalues
nlg.eigvals(m)
#Solve for x in mx = v
x = nlg.solve(m,v)
x
m*x
v
Explanation: More operations can be found in the numpy.linalg submodule, for example:
End of explanation |
12,286 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Overlays
Spatial overlays allow you to compare two GeoDataFrames containing polygon or multipolygon geometries
and create a new GeoDataFrame with the new geometries representing the spatial combination and
merged properties. This allows you to answer questions like
What are the demographics of the census tracts within 1000 ft of the highway?
The basic idea is demonstrated by the graphic below but keep in mind that overlays operate at the dataframe level,
not on individual geometries, and the properties from both are retained
Now we can load up two GeoDataFrames containing (multi)polygon geometries...
Step1: The first dataframe contains multipolygons of the NYC boros
Step2: And the second GeoDataFrame is a sequentially generated set of circles in the same geographic space. We'll plot these with a different color palette.
Step3: The geopandas.tools.overlay function takes three arguments
Step4: And take a look at the attributes; we see that the attributes from both of the original GeoDataFrames are retained.
Step5: Now let's look at the other how operations | Python Code:
%matplotlib inline
from shapely.geometry import Point
from geopandas import datasets, GeoDataFrame, read_file
from geopandas.tools import overlay
# NYC Boros
zippath = datasets.get_path('nybb')
polydf = read_file(zippath)
# Generate some circles
b = [int(x) for x in polydf.total_bounds]
N = 10
polydf2 = GeoDataFrame([
{'geometry': Point(x, y).buffer(10000), 'value1': x + y, 'value2': x - y}
for x, y in zip(range(b[0], b[2], int((b[2] - b[0]) / N)),
range(b[1], b[3], int((b[3] - b[1]) / N)))])
Explanation: Overlays
Spatial overlays allow you to compare two GeoDataFrames containing polygon or multipolygon geometries
and create a new GeoDataFrame with the new geometries representing the spatial combination and
merged properties. This allows you to answer questions like
What are the demographics of the census tracts within 1000 ft of the highway?
The basic idea is demonstrated by the graphic below but keep in mind that overlays operate at the dataframe level,
not on individual geometries, and the properties from both are retained
Now we can load up two GeoDataFrames containing (multi)polygon geometries...
End of explanation
polydf.plot()
Explanation: The first dataframe contains multipolygons of the NYC boros
End of explanation
polydf2.plot(cmap='tab20b')
Explanation: And the second GeoDataFrame is a sequentially generated set of circles in the same geographic space. We'll plot these with a different color palette.
End of explanation
newdf = polydf.overlay(polydf2, how="intersection")
newdf.plot(cmap='tab20b')
Explanation: The geopandas.tools.overlay function takes three arguments:
df1
df2
how
Where how can be one of:
['intersection',
'union',
'identity',
'symmetric_difference',
'difference']
So let's identify the areas (and attributes) where both dataframes intersect using the overlay method.
End of explanation
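# The same overlay is also available as the module-level function imported at the top
# (a small sketch; it is equivalent to the .overlay method used above):
newdf_fn = overlay(polydf, polydf2, how="intersection")
newdf_fn.head()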
polydf.head()
polydf2.head()
newdf.head()
Explanation: And take a look at the attributes; we see that the attributes from both of the original GeoDataFrames are retained.
End of explanation
newdf = polydf.overlay(polydf2, how="union")
newdf.plot(cmap='tab20b')
newdf = polydf.overlay(polydf2, how="identity")
newdf.plot(cmap='tab20b')
newdf = polydf.overlay(polydf2, how="symmetric_difference")
newdf.plot(cmap='tab20b')
newdf = polydf.overlay(polydf2, how="difference")
newdf.plot(cmap='tab20b')
Explanation: Now let's look at the other how operations:
End of explanation |
12,287 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Algorithms Exercise 2
Imports
Step2: Peak finding
Write a function find_peaks that finds and returns the indices of the local maxima in a sequence. Your function should
Step3: Here is a string with the first 10000 digits of $\pi$ (after the decimal). Write code to perform the following | Python Code:
%matplotlib inline
from matplotlib import pyplot as plt
import seaborn as sns
import numpy as np
Explanation: Algorithms Exercise 2
Imports
End of explanation
def find_peaks(a):
    """Find the indices of the local maxima in a sequence."""
n = 0
x = []
if a[n] > a[n+1]:
x.append(n)
while n < len(a) - 2:
n = n + 1
if a[n] > a[n+1] and a[n] > a[n-1]:
x.append(n)
if a[n+1] > a[n]:
x.append(n+1)
y = np.asarray(x)
return y
print(find_peaks([2,0,1,0,2,0,1]))
p1 = find_peaks([2,0,1,0,2,0,1])
assert np.allclose(p1, np.array([0,2,4,6]))
p2 = find_peaks(np.array([0,1,2,3]))
assert np.allclose(p2, np.array([3]))
p3 = find_peaks([3,2,1,0])
assert np.allclose(p3, np.array([0]))
Explanation: Peak finding
Write a function find_peaks that finds and returns the indices of the local maxima in a sequence. Your function should:
Properly handle local maxima at the endpoints of the input array.
Return a Numpy array of integer indices.
Handle any Python iterable as input.
End of explanation
from sympy import pi, N
pi_digits_str = str(N(pi, 10001))[2:]
w = []
for ints in pi_digits_str:
w.append(ints)
x = find_peaks(w)
plt.hist(np.diff(x), bins = 20)
plt.show()
assert True # use this for grading the pi digits histogram
Explanation: Here is a string with the first 10000 digits of $\pi$ (after the decimal). Write code to perform the following:
Convert that string to a Numpy array of integers.
Find the indices of the local maxima in the digits of $\pi$.
Use np.diff to find the distances between consequtive local maxima.
Visualize that distribution using an appropriately customized histogram.
End of explanation |
12,288 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Cross-Validation on the Iris Dataset
Here is an example on you to split the data on the iris dataset.
Let's re-use the results of the 2D PCA of the iris dataset
in order to explore clustering. First we need to repeat
some of the code from the previous notebook
Step1: First we need to shuffle the order of the samples and the
target to ensure that all classes are well represented on
both sides of the split
Step2: We can now split the data using a 2/3 - 1/3 ratio
Step3: We can now re-train a new linear classifier on the training set only
Step4: To evaluate its quality we can compute the average number
of correct classifications on the test set | Python Code:
# all of this is taken from the notebook '04_iris_clustering.ipynb'
import numpy as np
from sklearn.datasets import load_iris
iris = load_iris()
X = iris.data
y = iris.target
n_samples, n_features = iris.data.shape
print(n_samples)
Explanation: Cross-Validation on the Iris Dataset
Here is an example on you to split the data on the iris dataset.
Let's re-use the results of the 2D PCA of the iris dataset
in order to explore clustering. First we need to repeat
some of the code from the previous notebook:
End of explanation
indices = np.arange(n_samples)
indices[:10]
np.random.RandomState(42).shuffle(indices)
indices[:10]
X = iris.data[indices]
y = iris.target[indices]
Explanation: First we need to shuffle the order of the samples and the
target to ensure that all classes are well represented on
both sides of the split:
End of explanation
split = (n_samples * 2) // 3  # integer division so it can be used as a slice index
X_train, X_test = X[:split], X[split:]
y_train, y_test = y[:split], y[split:]
X_train.shape
X_test.shape
y_train.shape
y_test.shape
Explanation: We can now split the data using a 2/3 - 1/3 ratio:
End of explanation
from sklearn.svm import SVC
clf = SVC(kernel='linear').fit(X_train, y_train)
Explanation: We can now re-train a new linear classifier on the training set only:
End of explanation
np.mean(clf.predict(X_test) == y_test)
Explanation: To evaluate its quality we can compute the average number
of correct classifications on the test set:
End of explanation |
12,289 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
Step1: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
Step2: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower over all ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.
Step3: Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
Step4: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
Step5: Splitting the data into training, testing, and validation sets
We'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
Step6: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
Step7: Time to build the network
Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters
Step8: Unit tests
Run these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly before you start trying to train it. These tests must all be successful to pass the project.
Step9: Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
Choose the number of iterations
This is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, if you use too many iterations, then the model will not generalize well to other data; this is called overfitting. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. As you start overfitting, you'll see the training loss continue to decrease while the validation loss starts to increase.
Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
Choose the number of hidden nodes
The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.
Step10: Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly. | Python Code:
%matplotlib inline
#%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
Explanation: Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
End of explanation
data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
rides.head()
Explanation: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
End of explanation
rides[:24*10].plot(x='dteday', y='cnt')
plt.show()
Explanation: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower over all ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.
End of explanation
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
rides = pd.concat([rides, dummies], axis=1)
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
Explanation: Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
End of explanation
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
mean, std = data[each].mean(), data[each].std()
scaled_features[each] = [mean, std]
data.loc[:, each] = (data[each] - mean)/std
Explanation: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
End of explanation
# Save data for approximately the last 21 days
test_data = data[-21*24:]
# Now remove the test data from the data set
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
Explanation: Splitting the data into training, testing, and validation sets
We'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
End of explanation
# Hold out the last 60 days or so of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
Explanation: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
End of explanation
class NeuralNetwork(object):
def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_input_to_hidden = np.random.normal(0.0, self.input_nodes**-0.5,
(self.input_nodes, self.hidden_nodes))
self.weights_hidden_to_output = np.random.normal(0.0, self.hidden_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
self.lr = learning_rate
#### TODO: Set self.activation_function to your implemented sigmoid function ####
#
# Note: in Python, you can define a function with a lambda expression,
# as shown below.
self.activation_function = lambda x : 1 / (1 + np.exp(-x)) # Replace 0 with your sigmoid calculation.
### If the lambda code above is not something you're familiar with,
# You can uncomment out the following three lines and put your
# implementation there instead.
#
#def sigmoid(x):
# return 0 # Replace 0 with your sigmoid calculation here
#self.activation_function = sigmoid
def train(self, X, y):
''' Train the network on batch of features and targets.
Arguments
---------
features: 2D array, each row is one data record, each column is a feature
targets: 1D array of target values
'''
if len(y.shape) == 1:
y = y[:,None]
n_records = X.shape[0]
delta_weights = [None,None]
delta_weights[0] = np.zeros(self.weights_input_to_hidden.shape)
delta_weights[1] = np.zeros(self.weights_hidden_to_output.shape)
h = [None,None]
a = [None,None]
#print(X.shape,self.weights_input_to_hidden.shape)
#print(y.shape)
# Python 3.5 introduced the @ operator for matrix multiplication.
# Which version of Python are you using?
h[0] = np.matmul(X, self.weights_input_to_hidden).T
a[0] = self.activation_function(h[0])
h[1] = np.matmul(a[0].T, self.weights_hidden_to_output)
a[1] = h[1]
delta = [None,None]
#print(a[1].shape)
delta[1] = (y - a[1])# * a[1] * (1. - a[1])
f_prime = a[0] * (1. - a[0])
delta[0] = self.weights_hidden_to_output * delta[1].T * f_prime
ddw = [None,None]
ddw[1] = np.matmul(a[0], delta[1])
ddw[0] = np.matmul(X.T, delta[0].T)
#print(n_records)
delta_weights[0] += ddw[0]
delta_weights[1] += ddw[1]
# TODO: Update the weights - Replace these values with your calculations.
self.weights_hidden_to_output += delta_weights[1] * self.lr / n_records # update hidden-to-output weights with gradient descent step
self.weights_input_to_hidden += delta_weights[0] * self.lr / n_records # update input-to-hidden weights with gradient descent step
def run(self, features):
''' Run a forward pass through the network with input features
Arguments
---------
features: 1D array of feature values
'''
#### Implement the forward pass here ####
# TODO: Hidden layer - replace these values with the appropriate calculations.
hidden_inputs = np.matmul(features, self.weights_input_to_hidden) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs)# signals from hidden layer
# TODO: Output layer - Replace these values with the appropriate calculations.
final_inputs = hidden_outputs # signals into final output layer
final_outputs = np.matmul(final_inputs, self.weights_hidden_to_output) # signals from final output layer
return final_outputs
def MSE(y, Y):
return np.mean((y-Y)**2)
Explanation: Time to build the network
Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.
<img src="assets/neural_network.png" width=300px>
The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression, the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called forward propagation.
We use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called backpropagation.
Hint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.
Below, you have these tasks:
1. Implement the sigmoid function to use as the activation function. Set self.activation_function in __init__ to your sigmoid function.
2. Implement the forward pass in the train method.
3. Implement the backpropagation algorithm in the train method, including calculating the output error.
4. Implement the forward pass in the run method.
End of explanation
import unittest
inputs = np.array([[0.5, -0.2, 0.1]])
targets = np.array([[0.4]])
test_w_i_h = np.array([[0.1, -0.2],
[0.4, 0.5],
[-0.3, 0.2]])
test_w_h_o = np.array([[0.3],
[-0.1]])
class TestMethods(unittest.TestCase):
##########
# Unit tests for data loading
##########
def test_data_path(self):
# Test that file path to dataset has been unaltered
self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')
def test_data_loaded(self):
# Test that data frame loaded
self.assertTrue(isinstance(rides, pd.DataFrame))
##########
# Unit tests for network functionality
##########
def test_activation(self):
network = NeuralNetwork(3, 2, 1, 0.5)
# Test that the activation function is a sigmoid
self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))
def test_train(self):
# Test that weights are updated correctly on training
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
network.train(inputs, targets)
self.assertTrue(np.allclose(network.weights_hidden_to_output,
np.array([[ 0.37275328],
[-0.03172939]])))
self.assertTrue(np.allclose(network.weights_input_to_hidden,
np.array([[ 0.10562014, -0.20185996],
[0.39775194, 0.50074398],
[-0.29887597, 0.19962801]])))
def test_run(self):
# Test correctness of run method
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
self.assertTrue(np.allclose(network.run(inputs), 0.09998924))
suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
Explanation: Unit tests
Run these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly before you start trying to train it. These tests must all be successful to pass the project.
End of explanation
import sys
### Set the hyperparameters here ###
iterations = 5000
learning_rate = 0.3
hidden_nodes = 9
output_nodes = 1
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
for ii in range(iterations):
# Go through a random batch of 128 records from the training data set
batch = np.random.choice(train_features.index, size=128)
X, y = train_features.ix[batch].values, train_targets.ix[batch]['cnt']
network.train(X, y)
# Printing out the training progress
train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values)
val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values)
sys.stdout.write("\rProgress: {:2.1f}".format(100 * ii/float(iterations)) \
+ "% ... Training loss: " + str(train_loss)[:5] \
+ " ... Validation loss: " + str(val_loss)[:5])
sys.stdout.flush()
losses['train'].append(train_loss)
losses['validation'].append(val_loss)
fig, ax = plt.subplots(figsize=(13,6))
ax.plot(losses['train'], label='Training loss')
ax.plot(losses['validation'], label='Validation loss')
ax.set_title('it={0},lr={1},n={2} loss={3:.3f}'.format(iterations, learning_rate, hidden_nodes,
losses['validation'][-1]))
ax.grid(axis='y')
_ = ax.legend()
#_ = fig.ylim()
Explanation: Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
Choose the number of iterations
This is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, if you use too many iterations, then the model will not generalize well to other data; this is called overfitting. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. As you start overfitting, you'll see the training loss continue to decrease while the validation loss starts to increase.
Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
Choose the number of hidden nodes
The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.
End of explanation
fig, (ax,err) = plt.subplots(2,1, figsize=(13,13))
mean, std = scaled_features['cnt']
predictions = network.run(test_features).T*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()
ax.grid()
dates = pd.to_datetime(rides.ix[test_data.index]['dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
mean, std = scaled_features['cnt']
err.plot((test_targets['cnt']*std + mean).values - predictions[0], label='Error')
err.set_xlim(right=len(predictions))
err.grid()
#ax.legend()
err.set_xticks(np.arange(len(dates))[12::24])
_ = err.set_xticklabels(dates[12::24], rotation=45)
Explanation: Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
End of explanation |
12,290 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A Simple Autoencoder
We'll start off by building a simple autoencoder to compress the MNIST dataset. With autoencoders, we pass input data through an encoder that makes a compressed representation of the input. Then, this representation is passed through a decoder to reconstruct the input data. Generally the encoder and decoder will be built with neural networks, then trained on example data.
In this notebook, we'll build a simple network architecture for the encoder and decoder. Let's get started by importing our libraries and getting the dataset.
Step1: Below I'm plotting an example image from the MNIST dataset. These are 28x28 grayscale images of handwritten digits.
We'll train an autoencoder with these images by flattening them into 784 length vectors. The images from this dataset are already normalized such that the values are between 0 and 1. Let's start by building basically the simplest autoencoder with a single ReLU hidden layer. This layer will be used as the compressed representation. Then, the encoder is the input layer and the hidden layer. The decoder is the hidden layer and the output layer. Since the images are normalized between 0 and 1, we need to use a sigmoid activation on the output layer to get values matching the input.
Exercise
Step2: Training
Step3: Here I'll write a bit of code to train the network. I'm not too interested in validation here, so I'll just monitor the training loss.
Calling mnist.train.next_batch(batch_size) will return a tuple of (images, labels). We're not concerned with the labels here; we just need the images. Otherwise this is pretty straightforward training with TensorFlow. We initialize the variables with sess.run(tf.global_variables_initializer()). Then, run the optimizer and get the loss with batch_cost, _ = sess.run([cost, opt], feed_dict=feed).
Step4: Checking out the results
Below I've plotted some of the test images along with their reconstructions. For the most part these look pretty good except for some blurriness in some parts. | Python Code:
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
Explanation: A Simple Autoencoder
We'll start off by building a simple autoencoder to compress the MNIST dataset. With autoencoders, we pass input data through an encoder that makes a compressed representation of the input. Then, this representation is passed through a decoder to reconstruct the input data. Generally the encoder and decoder will be built with neural networks, then trained on example data.
In this notebook, we'll build a simple network architecture for the encoder and decoder. Let's get started by importing our libraries and getting the dataset.
End of explanation
# Size of the encoding layer (the hidden layer)
encoding_dim = 32 # feel free to change this value
size = mnist.train.images.shape[1]
# Input and target placeholders
inputs_ = tf.placeholder(dtype=tf.float32, shape=(None, size), name="inputs")
targets_ = tf.placeholder(dtype=tf.float32, shape=(None, size), name="targets")
# Output of hidden layer, single fully connected layer here with ReLU activation
encoded = tf.layers.dense(inputs_, encoding_dim, activation=tf.nn.relu)
# Output layer logits, fully connected layer with no activation
logits = tf.layers.dense(encoded, size, activation=None)
# Sigmoid output from logits
decoded = tf.nn.sigmoid(logits, name='output')
# Sigmoid cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Mean of the loss
cost = tf.reduce_mean(loss)
# Adam optimizer
opt = tf.train.AdamOptimizer(0.001).minimize(cost)
Explanation: Below I'm plotting an example image from the MNIST dataset. These are 28x28 grayscale images of handwritten digits.
We'll train an autoencoder with these images by flattening them into 784 length vectors. The images from this dataset are already normalized such that the values are between 0 and 1. Let's start by building basically the simplest autoencoder with a single ReLU hidden layer. This layer will be used as the compressed representation. Then, the encoder is the input layer and the hidden layer. The decoder is the hidden layer and the output layer. Since the images are normalized between 0 and 1, we need to use a sigmoid activation on the output layer to get values matching the input.
Exercise: Build the graph for the autoencoder in the cell below. The input images will be flattened into 784 length vectors. The targets are the same as the inputs. And there should be one hidden layer with a ReLU activation and an output layer with a sigmoid activation. Feel free to use TensorFlow's higher level API, tf.layers. For instance, you would use tf.layers.dense(inputs, units, activation=tf.nn.relu) to create a fully connected layer with a ReLU activation. The loss should be calculated with the cross-entropy loss, there is a convenient TensorFlow function for this tf.nn.sigmoid_cross_entropy_with_logits (documentation). You should note that tf.nn.sigmoid_cross_entropy_with_logits takes the logits, but to get the reconstructed images you'll need to pass the logits through the sigmoid function.
End of explanation
# Create the session
sess = tf.Session()
Explanation: Training
End of explanation
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
feed = {inputs_: batch[0], targets_: batch[0]}
batch_cost, _ = sess.run([cost, opt], feed_dict=feed)
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
Explanation: Here I'll write a bit of code to train the network. I'm not too interested in validation here, so I'll just monitor the training loss.
Calling mnist.train.next_batch(batch_size) will return a tuple of (images, labels). We're not concerned with the labels here; we just need the images. Otherwise this is pretty straightforward training with TensorFlow. We initialize the variables with sess.run(tf.global_variables_initializer()). Then, run the optimizer and get the loss with batch_cost, _ = sess.run([cost, opt], feed_dict=feed).
End of explanation
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed, compressed = sess.run([decoded, encoded], feed_dict={inputs_: in_imgs})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
Explanation: Checking out the results
Below I've plotted some of the test images along with their reconstructions. For the most part these look pretty good except for some blurriness in some parts.
End of explanation |
12,291 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
XCS Tutorial
This is the official tutorial for the xcs package for Python 3. You can find the latest release and get updates on the project's status at the project home page.
What is XCS?
XCS is a Python 3 implementation of the XCS algorithm as described in the 2001 paper, An Algorithmic Description of XCS, by
Martin Butz and Stewart Wilson. XCS is a type of Learning Classifier System (LCS), a machine learning algorithm that utilizes a genetic algorithm acting on a rule-based system, to solve a reinforcement learning problem.
In its canonical form, XCS accepts a fixed-width string of bits as its input, and attempts to select the best action from a predetermined list of choices using an evolving set of rules that match inputs and offer appropriate suggestions. It then receives a reward signal indicating the quality of its decision, which it uses to adjust the rule set that was used to make the decision. This process is subsequently repeated, allowing the algorithm to evaluate the changes it has already made and further refine the rule set.
A key feature of XCS is that, unlike many other machine learning algorithms, it not only learns the optimal input/output mapping, but also produces a minimal set of rules for describing that mapping. This is a big advantage over other learning algorithms such as neural networks whose models are largely opaque to human analysis, making XCS an important tool in any data scientist's tool belt.
The XCS library provides not only an implementation of the standard XCS algorithm, but a set of interfaces which together constitute a framework for implementing and experimenting with other LCS variants. Future plans for the XCS library include continued expansion of the tool set with additional algorithms, and refinement of the interface to support reinforcement learning algorithms in general.
Terminology
Being both a reinforcement learning algorithm and an evolutionary algorithm, XCS requires an understanding of terms pertaining to both.
Situation
A situation is just another term for an input received by the classifier.
Action
An action is an output produced by the classifier.
Scenario
A series of situations, each of which the algorithm must respond to in order with an appropriate action in order to maximize the total reward received for each action. A scenario may be dynamic, meaning that later training cycles can be affected by earlier actions, or static, meaning that each training cycle is independent of the actions that came before it.
Classifier Rule
A classifier rule, sometimes referred to as just a rule or a classifier, is a pairing between a condition, describing which situations can be matched, and a suggested action. Each classifier rule has an associated prediction indicating the expected reward if the suggested action is taken when the condition matches the situation, a fitness indicating its suitability for reproduction and continued use in the population, and a numerosity value which indicates the number of (virtual) instances of the rule in the population. (There are other parameters associated with each rule, as well, but these are visibly important ones.)
Classifier Set
Also referred to as the population, this is the collection of all rules currently used and tracked by the classifier. The genetic algorithm operates on this set of rules over time to optimize them for accuracy and generality in their descriptiveness of the problem space. Note that the population is virtual, meaning that if the same rule has multiple copies in the population, it is represented only once, with an associated numerosity value to indicate the number of virtual instances of the rule in the population.
Match Set
The match set is the set of rules which match against the current situation.
Action Set
The action set is the set of rules which match against the current situation and recommend the selected action. Thus the action set is a subset of the match set. In fact, the match set can be seen as a collection of mutually exclusive and competing action sets, from which only one is to be selected.
Reward
The reward is a floating point value which acts as the signal the algorithm attempts to maximize. There are three types of reward that are commonly mentioned with respect to temporal difference learning algorithms. The immediate reward (aka raw reward) is the original, unaltered reward value returned by the scenario in response to each action. The expected future reward is the estimated payoff for later reward cycles, specifically excluding the current one; the prediction of the action set on the next reward cycle acts in this role in the canonical XCS algorithm. The payoff or combined reward is the combined sum of the immediate reward, plus the discounted expected future reward. (Discounted means the value is multiplied by a non-negative coefficient whose value is less than 1, which causes the algorithm to value immediate reward more highly than reward received later on.) The term reward, when used alone, is generally used to mean the immediate reward.
Prediction
A prediction is an estimate by a classifier rule or an action set as to the payoff expected to be received by taking the suggested action in the given situation. The prediction of an action set is formed by taking the fitness-weighted average of the predictions made by the individual rules within it.
Fitness
Fitness is another floating point value similar in function to the reward, except that in this case it is an internal signal defined by the algorithm itself, which is then used as a guide for selection of which rules are to act as parents to the next generation. Each rule in the population has its own associated fitness value. In XCS, as opposed to strength-based LCS variants such as ZCS, the fitness is actually based on the accuracy of each rule's reward prediction, as opposed to its size. Thus a rule with a very low expected reward can have a high fitness provided it is accurate in its prediction of low reward, whereas a rule with very high expected reward may have low fitness because the reward it receives varies widely from one reward cycle to the next. Using reward prediction accuracy instead of reward prediction size helps XCS find rules that describe the problem in a stable, predictable way.
Installation
To install xcs, you will of course need a Python 3 interpreter. The latest version of the standard CPython distribution is available for download from the Python Software Foundation, or if you prefer a download that comes with a long list of top-notch machine learning and scientific computing packages already built for you, I recommend Anaconda from Continuum Analytics.
Starting with Python 3.4, the standard CPython distribution includes the package installation tool, pip. Anaconda comes with pip regardless of the Python version. If you have pip, installation of xcs is straightforward
Step1: Then we import the xcs module and run the built-in test() function. By default, the test() function runs the canonical XCS algorithm on the 11-bit (3-bit address) MUX problem for 10,000 steps.
Step2: ```
INFO
Step3: The XCSAlgorithm class contains the actual XCS algorithm implementation. The ClassifierSet class is used to represent the algorithm's state, in the form of a set of classifier rules. MUXProblem is the classic multiplexer problem, which defaults to 3 address bits (11 bits total). ScenarioObserver is a wrapper for scenarios which logs the inputs, actions, and rewards as the algorithm attempts to solve the problem.
Now that we've imported the necessary tools, we can define the actual problem, telling it to give the algorithm 10,000 reward cycles to attempt to learn the appropriate input/output mapping, and wrapping it with an observer so we can see the algorithm's progress.
Step4: Next, we'll create the algorithm which will be used to manage the classifier set and learn the mapping defined by the problem we have selected
Step5: The algorithm's parameters are set to appropriate defaults for most problems, but it is straightforward to modify them if it becomes necessary.
Step6: Here we have selected an exploration probability of .1, which will sacrifice most (9 out of 10) learning opportunities in favor of taking advantage of what has already been learned so far. This makes sense in a real-time learning environment; a lower value is more appropriate in cases where the classifier is being trained in advance or is being used simply to learn a minimal rule set. The discount factor is set to 0, since future rewards are not affected at all by the currently selected action. (This is not strictly necessary, since the scenario will inform the algorithm that reward chaining should not be used, but it is useful to highlight this fact.) We have also elected to turn on GA and action set subsumption, which help the system to converge to the minimal effective rule set more quickly in some types of scenarios.
Next, we create the classifier set
Step7: The algorithm does the work for us, initializing the classifier set as it deems appropriate for the scenario we have provided. It provides the classifier set with the possible actions that can be taken in the given scenario; these are necessary for the classifier set to perform covering operations when the algorithm determines that the classifiers in the population provide insufficient coverage for a particular situation. (Covering is the addition to the population of a randomly generated classifier rule whose condition matches the current situation.)
And finally, this is where all the magic happens
Step8: We pass the scenario to the classifier set and ask it to run to learn the appropriate input/output mapping. It executes training cycles until the scenario dictates that training should stop. Note that if you wish to see the progress as the algorithm interacts with the scenario, you will need to set the logging level to INFO, as described in the previous section, before calling the run() method.
Now we can observe the fruits of our labors.
Step9: ```
10001#10100 => True
Time Stamp
Step10: Defining New Scenario Types
To define a new scenario type, inherit from the Scenario abstract class defined in the xcs.scenarios submodule. Suppose, as an example, that we wish to test the algorithm's ability to find a single important input bit from among a large number of irrelevant input bits.
Step11: We defined a new class, HaystackProblem, to represent this test case, which inherits from Scenario to ensure that we cannot instantiate the problem until the appropriate methods have been implemented.
Now let's define an __init__ method for this class. We'll need a parameter, training_cycles, to determine how many reward cycles the algorithm has to identify the "needle", and another parameter, input_size, to determine how big the "haystack" is.
Step12: The input_size is saved as a member for later use. Likewise, the value of training_cycles was saved in two places
Step13: The implementations for the property and the methods other than sense() and execute() will be trivial, so let's start with those
Step14: Now we are going to get into the meat of the problem. We want to give the algorithm a random string of bits of size input_size and have it pick out the location of the needle bit through trial and error, by telling us what it thinks the value of the needle bit is. For this to be a useful test, the needle bit needs to be in a fixed location, which we have not yet defined. Let's choose a random bit from among inputs on each run.
Step15: The sense() method is going to create a string of random bits of size input_size and return it. But first it will pick out the value of the needle bit, located at needle_index, and store it in a new member, needle_value, so that execute(action) will know what the correct value for action is.
Step16: Now we need to define the execute(action) method. In order to give the algorithm appropriate feedback to make problem solvable, we should return a high reward when it guesses the correct value for the needle bit, and a low value otherwise. Thus we will return a 1 when the action is the value of the needle bit, and a 0 otherwise. We must also make sure to decrement the remaining cycles to prevent the problem from running indefinitely.
Step17: We have now defined all of the methods that Scenario requires. Let's give it a test run.
Step18: ```
INFO
Step19: ```
INFO | Python Code:
import logging
logging.root.setLevel(logging.INFO)
Explanation: XCS Tutorial
This is the official tutorial for the xcs package for Python 3. You can find the latest release and get updates on the project's status at the project home page.
What is XCS?
XCS is a Python 3 implementation of the XCS algorithm as described in the 2001 paper, An Algorithmic Description of XCS, by
Martin Butz and Stewart Wilson. XCS is a type of Learning Classifier System (LCS), a machine learning algorithm that utilizes a genetic algorithm acting on a rule-based system, to solve a reinforcement learning problem.
In its canonical form, XCS accepts a fixed-width string of bits as its input, and attempts to select the best action from a predetermined list of choices using an evolving set of rules that match inputs and offer appropriate suggestions. It then receives a reward signal indicating the quality of its decision, which it uses to adjust the rule set that was used to make the decision. This process is subsequently repeated, allowing the algorithm to evaluate the changes it has already made and further refine the rule set.
A key feature of XCS is that, unlike many other machine learning algorithms, it not only learns the optimal input/output mapping, but also produces a minimal set of rules for describing that mapping. This is a big advantage over other learning algorithms such as neural networks whose models are largely opaque to human analysis, making XCS an important tool in any data scientist's tool belt.
The XCS library provides not only an implementation of the standard XCS algorithm, but a set of interfaces which together constitute a framework for implementing and experimenting with other LCS variants. Future plans for the XCS library include continued expansion of the tool set with additional algorithms, and refinement of the interface to support reinforcement learning algorithms in general.
Terminology
Being both a reinforcement learning algorithm and an evolutionary algorithm, XCS requires an understanding of terms pertaining to both.
Situation
A situation is just another term for an input received by the classifier.
Action
An action is an output produced by the classifier.
Scenario
A series of situations, each of which the algorithm must respond to in order with an appropriate action in order to maximize the total reward received for each action. A scenario may be dynamic, meaning that later training cycles can be affected by earlier actions, or static, meaning that each training cycle is independent of the actions that came before it.
Classifier Rule
A classifier rule, sometimes referred to as just a rule or a classifier, is a pairing between a condition, describing which situations can be matched, and a suggested action. Each classifier rule has an associated prediction indicating the expected reward if the suggested action is taken when the condition matches the situation, a fitness indicating its suitability for reproduction and continued use in the population, and a numerosity value which indicates the number of (virtual) instances of the rule in the population. (There are other parameters associated with each rule, as well, but these are visibly important ones.)
Classifier Set
Also referred to as the population, this is the collection of all rules currently used and tracked by the classifier. The genetic algorithm operates on this set of rules over time to optimize them for accuracy and generality in their descriptiveness of the problem space. Note that the population is virtual, meaning that if the same rule has multiple copies in the population, it is represented only once, with an associated numerosity value to indicate the number of virtual instances of the rule in the population.
Match Set
The match set is the set of rules which match against the current situation.
Action Set
The action set is the set of rules which match against the current situation and recommend the selected action. Thus the action set is a subset of the match set. In fact, the match set can be seen as a collection of mutually exclusive and competing action sets, from which only one is to be selected.
Reward
The reward is a floating point value which acts as the signal the algorithm attempts to maximize. There are three types of reward that are commonly mentioned with respect to temporal difference learning algorithms. The immediate reward (aka raw reward) is the original, unaltered reward value returned by the scenario in response to each action. The expected future reward is the estimated payoff for later reward cycles, specifically excluding the current one; the prediction of the action set on the next reward cycle acts in this role in the canonical XCS algorithm. The payoff or combined reward is the combined sum of the immediate reward, plus the discounted expected future reward. (Discounted means the value is multiplied by a non-negative coefficient whose value is less than 1, which causes the algorithm to value immediate reward more highly than reward received later on.) The term reward, when used alone, is generally used to mean the immediate reward.
Prediction
A prediction is an estimate by a classifier rule or an action set as to the payoff expected to be received by taking the suggested action in the given situation. The prediction of an action set is formed by taking the fitness-weighted average of the predictions made by the individual rules within it.
Fitness
Fitness is another floating point value similar in function to the reward, except that in this case it is an internal signal defined by the algorithm itself, which is then used as a guide for selection of which rules are to act as parents to the next generation. Each rule in the population has its own associated fitness value. In XCS, as opposed to strength-based LCS variants such as ZCS, the fitness is actually based on the accuracy of each rule's reward prediction, as opposed to its size. Thus a rule with a very low expected reward can have a high fitness provided it is accurate in its prediction of low reward, whereas a rule with very high expected reward may have low fitness because the reward it receives varies widely from one reward cycle to the next. Using reward prediction accuracy instead of reward prediction size helps XCS find rules that describe the problem in a stable, predictable way.
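To make the arithmetic behind these last few definitions concrete, here is a small illustrative sketch of how an action set's prediction and a discounted payoff could be computed. The rule values, discount factor, and reward below are invented for the example and are not taken from the xcs package's internals.
```
# Hypothetical rules in one action set: each has a reward prediction and a fitness.
rules = [
    {'prediction': 800.0, 'fitness': 0.90},
    {'prediction': 600.0, 'fitness': 0.30},
]

# Action set prediction: the fitness-weighted average of the rules' predictions.
total_fitness = sum(r['fitness'] for r in rules)
action_set_prediction = sum(r['prediction'] * r['fitness'] for r in rules) / total_fitness

# Payoff (combined reward): immediate reward plus the discounted expected future reward.
immediate_reward = 1.0
discount_factor = 0.71  # non-negative and less than 1
payoff = immediate_reward + discount_factor * action_set_prediction

print(round(action_set_prediction, 2), round(payoff, 2))
```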
Installation
To install xcs, you will of course need a Python 3 interpreter. The latest version of the standard CPython distribution is available for download from the Python Software Foundation, or if you prefer a download that comes with a long list of top-notch machine learning and scientific computing packages already built for you, I recommend Anaconda from Continuum Analytics.
Starting with Python 3.4, the standard CPython distribution includes the package installation tool, pip. Anaconda comes with pip regardless of the Python version. If you have pip, installation of xcs is straightforward:
pip install xcs
If all goes as planned, you should see a message like this:
Successfully installed xcs-1.0.0
If for some reason you are unable to use pip, you can still install xcs manually. The latest release can be found here or here. Download the zip file, unpack it, and cd into the directory. Then run:
python setup.py install
You should see a message indicating that the package was successfully installed.
Testing the Newly Installed Package
Let's start things off with a quick test, to verify that everything has been installed properly. First, fire up the Python interpreter. We'll set up Python's built-in logging system so we can see the test's progress.
End of explanation
import xcs
xcs.test()
Explanation: Then we import the xcs module and run the built-in test() function. By default, the test() function runs the canonical XCS algorithm on the 11-bit (3-bit address) MUX problem for 10,000 steps.
End of explanation
from xcs import XCSAlgorithm
from xcs.scenarios import MUXProblem, ScenarioObserver
Explanation: ```
INFO:xcs.scenarios:Possible actions:
INFO:xcs.scenarios: False
INFO:xcs.scenarios: True
INFO:xcs.scenarios:Steps completed: 0
INFO:xcs.scenarios:Average reward per step: 0.00000
INFO:xcs.scenarios:Steps completed: 100
INFO:xcs.scenarios:Average reward per step: 0.57000
INFO:xcs.scenarios:Steps completed: 200
INFO:xcs.scenarios:Average reward per step: 0.58500
.
.
.
001#0###### => False
Time Stamp: 9980
Average Reward: 1.0
Error: 0.0
Fitness: 0.8161150828153352
Experience: 236
Action Set Size: 25.03847865419106
Numerosity: 9
11#######11 => True
Time Stamp: 9994
Average Reward: 1.0
Error: 0.0
Fitness: 0.9749473121531844
Experience: 428
Action Set Size: 20.685392494947063
Numerosity: 11
INFO:xcs:Total time: 15.05068 seconds
```
Your results may vary somewhat from what is shown here. XCS relies on randomization to discover new rules, so unless you set the random seed with random.seed(), each run will be different.
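For a repeatable run, you can seed Python's random module before calling the test function. A minimal sketch (the seed value is arbitrary):
```
import random
import xcs

random.seed(42)  # any fixed value makes repeated runs reproducible
xcs.test()
```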
Usage
Now we'll run through a quick demo of how to use existing algorithms and problems. This is essentially the same code that appears in the test() function we called above.
First, we're going to need to import a few things:
End of explanation
scenario = ScenarioObserver(MUXProblem(50000))
Explanation: The XCSAlgorithm class contains the actual XCS algorithm implementation. The ClassifierSet class is used to represent the algorithm's state, in the form of a set of classifier rules. MUXProblem is the classic multiplexer problem, which defaults to 3 address bits (11 bits total). ScenarioObserver is a wrapper for scenarios which logs the inputs, actions, and rewards as the algorithm attempts to solve the problem.
Now that we've imported the necessary tools, we can define the actual problem, telling it to give the algorithm 10,000 reward cycles to attempt to learn the appropriate input/output mapping, and wrapping it with an observer so we can see the algorithm's progress.
End of explanation
algorithm = XCSAlgorithm()
Explanation: Next, we'll create the algorithm which will be used to manage the classifier set and learn the mapping defined by the problem we have selected:
End of explanation
algorithm.exploration_probability = .1
algorithm.discount_factor = 0
algorithm.do_ga_subsumption = True
algorithm.do_action_set_subsumption = True
Explanation: The algorithm's parameters are set to appropriate defaults for most problems, but it is straightforward to modify them if it becomes necessary.
End of explanation
model = algorithm.new_model(scenario)
Explanation: Here we have selected an exploration probability of .1, which will sacrifice most (9 out of 10) learning opportunities in favor of taking advantage of what has already been learned so far. This makes sense in a real-time learning environment; a lower value is more appropriate in cases where the classifier is being trained in advance or is being used simply to learn a minimal rule set. The discount factor is set to 0, since future rewards are not affected at all by the currently selected action. (This is not strictly necessary, since the scenario will inform the algorithm that reward chaining should not be used, but it is useful to highlight this fact.) We have also elected to turn on GA and action set subsumption, which help the system to converge to the minimal effective rule set more quickly in some types of scenarios.
Next, we create the classifier set:
End of explanation
model.run(scenario, learn=True)
Explanation: The algorithm does the work for us, initializing the classifier set as it deems appropriate for the scenario we have provided. It provides the classifier set with the possible actions that can be taken in the given scenario; these are necessary for the classifier set to perform covering operations when the algorithm determines that the classifiers in the population provide insufficient coverage for a particular situation. (Covering is the addition to the population of a randomly generated classifier rule whose condition matches the current situation.)
And finally, this is where all the magic happens:
End of explanation
print(model)
Explanation: We pass the scenario to the classifier set and ask it to run to learn the appropriate input/output mapping. It executes training cycles until the scenario dictates that training should stop. Note that if you wish to see the progress as the algorithm interacts with the scenario, you will need to set the logging level to INFO, as described in the previous section, before calling the run() method.
Now we can observe the fruits of our labors.
End of explanation
print(len(model))
for rule in model:
if rule.fitness > .5 and rule.experience >= 10:
print(rule.condition, '=>', rule.action, ' [%.5f]' % rule.fitness)
Explanation: ```
10001#10100 => True
Time Stamp: 41601
Average Reward: 1e-05
Error: 1e-05
Fitness: 1e-05
Experience: 0
Action Set Size: 1
Numerosity: 1
00#00100#00 => True
Time Stamp: 48589
Average Reward: 1e-05
Error: 1e-05
Fitness: 1e-05
Experience: 0
Action Set Size: 1
Numerosity: 1
.
.
.
1111######1 => True
Time Stamp: 49968
Average Reward: 1.0
Error: 0.0
Fitness: 0.9654542879926405
Experience: 131
Action Set Size: 27.598176294274904
Numerosity: 10
010##1##### => True
Time Stamp: 49962
Average Reward: 1.0
Error: 0.0
Fitness: 0.8516265524887351
Experience: 1257
Action Set Size: 27.21325456027306
Numerosity: 13
```
This gives us a printout of each classifier rule, in the form condition => action, followed by various stats about the rule pertaining to the algorithm we selected. The classifier set can also be accessed as an iterable container:
End of explanation
from xcs.scenarios import Scenario
class HaystackProblem(Scenario):
pass
Explanation: Defining New Scenario Types
To define a new scenario type, inherit from the Scenario abstract class defined in the xcs.scenarios submodule. Suppose, as an example, that we wish to test the algorithm's ability to find a single important input bit from among a large number of irrelevant input bits.
End of explanation
from xcs.scenarios import Scenario
class HaystackProblem(Scenario):
def __init__(self, training_cycles=1000, input_size=500):
self.input_size = input_size
self.possible_actions = (True, False)
self.initial_training_cycles = training_cycles
self.remaining_cycles = training_cycles
Explanation: We defined a new class, HaystackProblem, to represent this test case, which inherits from Scenario to ensure that we cannot instantiate the problem until the appropriate methods have been implemented.
Now let's define an __init__ method for this class. We'll need a parameter, training_cycles, to determine how many reward cycles the algorithm has to identify the "needle", and another parameter, input_size, to determine how big the "haystack" is.
End of explanation
problem = HaystackProblem()
Explanation: The input_size is saved as a member for later use. Likewise, the value of training_cycles was saved in two places: the remaining_cycles member, which tells the instance how many training cycles remain for the current run, and the initial_training_cycles member, which the instance will use to reset remaining_cycles to the original value for a new run.
We also defined the possible_actions member, which we set to (True, False). This is the value we will return when the algorithm asks for the possible actions. We will expect the algorithm to return True when the needle bit is set, and False when the needle bit is clear, in order to indicate that it has correctly identified the needle's location.
Now let's define some methods for the class. The Scenario base class defines several abstract methods, and one abstract property:
* is_dynamic is a property with a Boolean value that indicates whether the actions from one reward cycle can affect the rewards or situations of later reward cycles.
* get_possible_actions() is a method that should return the actions the algorithm can take.
* reset() should restart the problem for a new run.
* sense() should return a new input (the "situation").
* execute(action) should accept an action from among those returned by get_possible_actions(), in response to the last situation that was returned by sense(). It should then return a reward value indicating how well the algorithm is doing at responding correctly to each situation.
* more() should return a Boolean value to indicate whether the algorithm has remaining reward cycles in which to learn.
The abstract methods and the property must each be defined, or we will get a TypeError when we attempt to instantiate the class:
End of explanation
from xcs.scenarios import Scenario
class HaystackProblem(Scenario):
def __init__(self, training_cycles=1000, input_size=500):
self.input_size = input_size
self.possible_actions = (True, False)
self.initial_training_cycles = training_cycles
self.remaining_cycles = training_cycles
@property
def is_dynamic(self):
return False
def get_possible_actions(self):
return self.possible_actions
def reset(self):
self.remaining_cycles = self.initial_training_cycles
def more(self):
return self.remaining_cycles > 0
Explanation: The implementations for the property and the methods other than sense() and execute() will be trivial, so let's start with those:
End of explanation
import random
from xcs.scenarios import Scenario
class HaystackProblem(Scenario):
def __init__(self, training_cycles=1000, input_size=500):
self.input_size = input_size
self.possible_actions = (True, False)
self.initial_training_cycles = training_cycles
self.remaining_cycles = training_cycles
self.needle_index = random.randrange(input_size)
@property
def is_dynamic(self):
return False
def get_possible_actions(self):
return self.possible_actions
def reset(self):
self.remaining_cycles = self.initial_training_cycles
self.needle_index = random.randrange(self.input_size)
def more(self):
return self.remaining_cycles > 0
Explanation: Now we are going to get into the meat of the problem. We want to give the algorithm a random string of bits of size input_size and have it pick out the location of the needle bit through trial and error, by telling us what it thinks the value of the needle bit is. For this to be a useful test, the needle bit needs to be in a fixed location, which we have not yet defined. Let's choose a random bit from among inputs on each run.
End of explanation
import random
from xcs.scenarios import Scenario
from xcs.bitstrings import BitString
class HaystackProblem(Scenario):
def __init__(self, training_cycles=1000, input_size=500):
self.input_size = input_size
self.possible_actions = (True, False)
self.initial_training_cycles = training_cycles
self.remaining_cycles = training_cycles
self.needle_index = random.randrange(input_size)
self.needle_value = None
@property
def is_dynamic(self):
return False
def get_possible_actions(self):
return self.possible_actions
def reset(self):
self.remaining_cycles = self.initial_training_cycles
self.needle_index = random.randrange(self.input_size)
def more(self):
return self.remaining_cycles > 0
def sense(self):
haystack = BitString.random(self.input_size)
self.needle_value = haystack[self.needle_index]
return haystack
Explanation: The sense() method is going to create a string of random bits of size input_size and return it. But first it will pick out the value of the needle bit, located at needle_index, and store it in a new member, needle_value, so that execute(action) will know what the correct value for action is.
End of explanation
import random
from xcs.scenarios import Scenario
from xcs.bitstrings import BitString
class HaystackProblem(Scenario):
def __init__(self, training_cycles=1000, input_size=500):
self.input_size = input_size
self.possible_actions = (True, False)
self.initial_training_cycles = training_cycles
self.remaining_cycles = training_cycles
self.needle_index = random.randrange(input_size)
self.needle_value = None
@property
def is_dynamic(self):
return False
def get_possible_actions(self):
return self.possible_actions
def reset(self):
self.remaining_cycles = self.initial_training_cycles
self.needle_index = random.randrange(self.input_size)
def more(self):
return self.remaining_cycles > 0
def sense(self):
haystack = BitString.random(self.input_size)
self.needle_value = haystack[self.needle_index]
return haystack
def execute(self, action):
self.remaining_cycles -= 1
return action == self.needle_value
Explanation: Now we need to define the execute(action) method. In order to give the algorithm appropriate feedback to make problem solvable, we should return a high reward when it guesses the correct value for the needle bit, and a low value otherwise. Thus we will return a 1 when the action is the value of the needle bit, and a 0 otherwise. We must also make sure to decrement the remaining cycles to prevent the problem from running indefinitely.
End of explanation
import logging
import xcs
from xcs.scenarios import ScenarioObserver
# Setup logging so we can see the test run as it progresses.
logging.root.setLevel(logging.INFO)
# Create the scenario instance
problem = HaystackProblem()
# Wrap the scenario instance in an observer so progress gets logged,
# and pass it on to the test() function.
xcs.test(scenario=ScenarioObserver(problem))
Explanation: We have now defined all of the methods that Scenario requires. Let's give it a test run.
End of explanation
problem = HaystackProblem(training_cycles=10000, input_size=100)
xcs.test(scenario=ScenarioObserver(problem))
Explanation: ```
INFO:xcs.scenarios:Possible actions:
INFO:xcs.scenarios: False
INFO:xcs.scenarios: True
INFO:xcs.scenarios:Steps completed: 0
INFO:xcs.scenarios:Average reward per step: 0.00000
INFO:xcs.scenarios:Steps completed: 100
INFO:xcs.scenarios:Average reward per step: 0.55000
.
.
.
INFO:xcs.scenarios:Steps completed: 900
INFO:xcs.scenarios:Average reward per step: 0.51667
INFO:xcs.scenarios:Steps completed: 1000
INFO:xcs.scenarios:Average reward per step: 0.50900
INFO:xcs.scenarios:Run completed.
INFO:xcs.scenarios:Total steps: 1000
INFO:xcs.scenarios:Total reward received: 509.00000
INFO:xcs.scenarios:Average reward per step: 0.50900
INFO:xcs:Classifiers:
010#11110##001###01#101001#00#1##100110##11#111#00#00#1#10#10#1110#100110#1#1100#10#111#1011100###1#1##1#0#1##011#1#0#0##1011010011#0#0101#00#01#0#0##01101##100#00010111##111010#100110##1101110##11#01110##1#0#110#000#010#1011##10#00#0#101011#000000##11#00#1#0110#0110100010##0100011#1#0###11#110#0###1##0100##1#11#1##101####111011#01#110101011001#110110#011111##1#0##1010#011000101001#10#10#0#00##1#110##1011100#1111##01#00#11#010001100#10####01###010001###1##1110#10####100#0#01#0#10##100####1110#00 => False
Time Stamp: 169
Average Reward: 0.0
Error: 0.0
Fitness: 0.15000850000000002
Experience: 1
Action Set Size: 1.0
Numerosity: 1
11##101#1###11101#0010####01#111##100011010###10##01#1100#010#11##01011#00##0#0#1001111#0#11011100010100101#1#1#01#0001000##101100###11#1#1111011110010#01010#101010###010##010##001#1#10#1001##0#1101111##0#0#0#1#11#01011000####111#1#1##10110##1###1#1#00#110##00000#11101110010###01#0#11#1###1#1#01#100110####11##0000#01#0#0011#01##10#100##00##010111##0#1#100#0##10#01000000001#00##1#11001#1011##1##1100011#1###01#####0#0111111#00#1101101##101#01#101#11##001#0000#1011#01#0#11#0#0#0##0#1010#0#01110110# => False
Time Stamp: 254
Average Reward: 0.0
Error: 0.0
Fitness: 0.15000850000000002
Experience: 1
Action Set Size: 1.0
Numerosity: 1
.
.
.
10010010010110#1#01###000100##0#0##0###01#1#1#100101#01#110#0##011#0100#0#1111001##01010##0#1#01011110#0#100110#00##1100##1011##1##0#0####111##111##000##01#001##110##10#01#0#1#00#110#100#10#1#0#1100#010#110##1011##1110#0#01#00#011#0001110#1110#0110111#0#101#01#101#00#0#1110100#1##0#101101#1###11#11###001100010###0#111101##1#111#111010#1##0011##00111000##11110#0#01#0#0#0#1#0#110000###00110##10001001011111#001101#11#111##01#0#1#10#1##000######0110##01#1#010#011#11#001##10111#1101#0#1001##011#10 => True
Time Stamp: 996
Average Reward: 1.0
Error: 0.0
Fitness: 0.15000850000000002
Experience: 1
Action Set Size: 1.0
Numerosity: 1
0101#0010100011#11##1100##001001###010#111001#####111001#1011#1100#1111#00101111#0#1011##1#1###00001011001#10##00###101##011111##1#00#1011001###10001###11####1##1#01#0#1#0#11100001110##11#001001#01#####0110#011011#0#111#1111##0#110111001#100#011111100110#11####0##01#100#11#1000#10#00#00#0#0#1##0100#100#11###01#1100##1###000##01#10#0#0001#0100#10#1#001#11####1001#110#0##11#0#0100#010##0#011100##11#0#11101#000000010#00101#0#0#11110#0010#1100#11#01#11##10#10#10#1100#1#00#0100#10#1##10#00011010100#0 => True
Time Stamp: 998
Average Reward: 1.0
Error: 0.0
Fitness: 0.15000850000000002
Experience: 1
Action Set Size: 1.0
Numerosity: 1
INFO:xcs:Total time: 2.65542 seconds
```
Hmm, the classifier set didn't do so hot. Maybe we've found a weakness in the algorithm, or maybe some different parameter settings will improve its performance. Let's reduce the size of the haystack and give it more reward cycles so we can see whether it's learning at all.
End of explanation
problem = HaystackProblem(training_cycles=10000, input_size=500)
algorithm = xcs.XCSAlgorithm()
# Default parameter settings in test()
algorithm.exploration_probability = .1
# Modified parameter settings
algorithm.ga_threshold = 1
algorithm.crossover_probability = .5
algorithm.do_action_set_subsumption = True
algorithm.do_ga_subsumption = False
algorithm.wildcard_probability = .998
algorithm.deletion_threshold = 1
algorithm.mutation_probability = .002
xcs.test(algorithm, scenario=ScenarioObserver(problem))
Explanation: ```
INFO:xcs.scenarios:Possible actions:
INFO:xcs.scenarios: False
INFO:xcs.scenarios: True
INFO:xcs.scenarios:Steps completed: 0
INFO:xcs.scenarios:Average reward per step: 0.00000
INFO:xcs.scenarios:Steps completed: 100
INFO:xcs.scenarios:Average reward per step: 0.47000
.
.
.
INFO:xcs.scenarios:Steps completed: 9900
INFO:xcs.scenarios:Average reward per step: 0.49222
INFO:xcs.scenarios:Steps completed: 10000
INFO:xcs.scenarios:Average reward per step: 0.49210
INFO:xcs.scenarios:Run completed.
INFO:xcs.scenarios:Total steps: 10000
INFO:xcs.scenarios:Total reward received: 4921.00000
INFO:xcs.scenarios:Average reward per step: 0.49210
INFO:xcs:Classifiers:
11#1001##0110000#101####001010##111111#1110#00#0100#11100#1###0110110####11#011##0#0#1###011#1#11001 => False
Time Stamp: 9771
Average Reward: 1.0
Error: 0.0
Fitness: 8.5e-07
Experience: 0
Action Set Size: 1
Numerosity: 1
00001100##1010#01111101001#0###0#10#10#11###10#1#0#0#11#11010111111#0#01#111#0#100#00#10000111##000 => False
Time Stamp: 8972
Average Reward: 0.0
Error: 0.0
Fitness: 0.15000850000000002
Experience: 1
Action Set Size: 1.0
Numerosity: 1
.
.
.
100#0010010###0#1001#1#0100##0#1##101#011#0#0101110#1111#11#000##0#1#0##001#1110##001011###1001##01# => True
Time Stamp: 9993
Average Reward: 1.0
Error: 0.0
Fitness: 0.15000850000000002
Experience: 1
Action Set Size: 1.0
Numerosity: 1
10#100##110##00#001##0#100100#00#1110##100##1#1##1111###00#0#1#1##00#010##00011#10#1#11##0#0#01100#0 => False
Time Stamp: 9997
Average Reward: 1.0
Error: 0.0
Fitness: 0.15000850000000002
Experience: 1
Action Set Size: 1.0
Numerosity: 1
INFO:xcs:Total time: 21.50882 seconds
```
It appears the algorithm isn't learning at all, at least not at a visible rate. But after a few rounds of playing with the parameter values, it becomes apparent that with the correct settings and sufficient training cycles, it is possible for the algorithm to handle the new scenario.
End of explanation |
12,292 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<center>
<img src="../img/ods_stickers.jpg">
ะัะบััััะน ะบััั ะฟะพ ะผะฐัะธะฝะฝะพะผั ะพะฑััะตะฝะธั
</center>
ะะฒัะพั ะผะฐัะตัะธะฐะปะฐ
Step1: ะัะฝะพะฒะฝัะผะธ ััััะบัััะฐะผะธ ะดะฐะฝะฝัั
ะฒ Pandas ัะฒะปััััั ะบะปะฐััั Series ะธ DataFrame. ะะตัะฒัะน ะธะท ะฝะธั
ะฟัะตะดััะฐะฒะปัะตั ัะพะฑะพะน ะพะดะฝะพะผะตัะฝัะน ะธะฝะดะตะบัะธัะพะฒะฐะฝะฝัะน ะผะฐััะธะฒ ะดะฐะฝะฝัั
ะฝะตะบะพัะพัะพะณะพ ัะธะบัะธัะพะฒะฐะฝะฝะพะณะพ ัะธะฟะฐ. ะัะพัะพะน - ััะพ ะดะฒัั
ะผะตัะฝะฐั ััััะบัััะฐ ะดะฐะฝะฝัั
, ะฟัะตะดััะฐะฒะปัััะฐั ัะพะฑะพะน ัะฐะฑะปะธัั, ะบะฐะถะดัะน ััะพะปะฑะตั ะบะพัะพัะพะน ัะพะดะตัะถะธั ะดะฐะฝะฝัะต ะพะดะฝะพะณะพ ัะธะฟะฐ. ะะพะถะฝะพ ะฟัะตะดััะฐะฒะปััั ะตั ะบะฐะบ ัะปะพะฒะฐัั ะพะฑัะตะบัะพะฒ ัะธะฟะฐ Series. ะกัััะบัััะฐ DataFrame ะพัะปะธัะฝะพ ะฟะพะดั
ะพะดะธั ะดะปั ะฟัะตะดััะฐะฒะปะตะฝะธั ัะตะฐะปัะฝัั
ะดะฐะฝะฝัั
Step2: ะะฝะดะตะบัะธัะพะฒะฐะฝะธะต ะฒะพะทะผะพะถะฝะพ ะฒ ะฒะธะดะต s.Name ะธะปะธ s['Name'].
Step3: Series ะฟะพะดะดะตัะถะธะฒะฐะตั ะฟัะพะฟััะบะธ ะฒ ะดะฐะฝะฝัั
.
Step4: ะะฑัะตะบัั Series ะฟะพั
ะพะถะธ ะฝะฐ ndarray ะธ ะผะพะณัั ะฑััั ะฟะตัะตะดะฐะฝั ะฒ ะบะฐัะตััะฒะต ะฐัะณัะผะตะฝัะพะฒ ะฑะพะปััะธะฝััะฒั ััะฝะบัะธะน ะธะท Numpy.
Step5: DataFrame
ะกะพะทะดะฐะฝะธะต ะธ ะธะทะผะตะฝะตะฝะธะต
ะะตัะตะนะดัะผ ะบ ัะฐััะผะพััะตะฝะธั ะพะฑัะตะบัะพะฒ ัะธะฟะฐ DataFrame. ะขะฐะบะพะน ะพะฑัะตะบั ะผะพะถะฝะพ ัะพะทะดะฐัั ะธะท ะผะฐััะธะฒะฐ numpy, ัะบะฐะทะฐะฒ ะฝะฐะทะฒะฐะฝะธั ัััะพะบ ะธ ััะพะปะฑัะพะฒ.
Step6: ะะปััะตัะฝะฐัะธะฒะฝัะผ ัะฟะพัะพะฑะพะผ ัะฒะปัะตััั ัะพะทะดะฐะฝะธะต DataFrame ะธะท ัะปะพะฒะฐัั numpy ะผะฐััะธะฒะพะฒ ะธะปะธ ัะฟะธัะบะพะฒ.
Step7: ะะฑัะฐัะตะฝะธะต ะบ ัะปะตะผะตะฝัะฐะผ (ะธะปะธ ัะตะปัะผ ะบััะบะฐะผ ััะตะนะผะฐ)
Step8: ะะทะผะตะฝะตะฝะธะต ัะปะตะผะตะฝัะพะฒ ะธ ะดะพะฑะฐะฒะปะตะฝะธะต ะฝะพะฒัั
Step9: ะะฑัะฐะฑะพัะบะฐ ะฟัะพะฟััะตะฝะฝัั
ะทะฝะฐัะตะฝะธะน
Step10: ะัะปะตะฒะฐ ะผะฐัะบะฐ ะดะปั ะฟัะพะฟััะตะฝะฝัั
ะทะฝะฐัะตะฝะธะน (True - ัะฐะผ, ะณะดะต ะฑัะป ะฟัะพะฟััะบ, ะธะฝะฐัะต - False)
Step11: ะะพะถะฝะพ ัะดะฐะปะธัั ะฒัะต ัััะพะบะธ, ะณะดะต ะตััั ั
ะพัั ะฑั ะพะดะธะฝ ะฟัะพะฟััะบ.
Step12: ะัะพะฟััะบะธ ะผะพะถะฝะพ ะทะฐะผะตะฝะธัั ะบะฐะบะธะผ-ัะพ ะทะฝะฐัะตะฝะธะตะผ.
Step13: ะัะธะผะตั ะฟะตัะฒะธัะฝะพะณะพ ะฐะฝะฐะปะธะทะฐ ะดะฐะฝะฝัั
ั Pandas
ะงัะตะฝะธะต ะธะท ัะฐะนะปะฐ ะธ ะฟะตัะฒะธัะฝัะน ะฐะฝะฐะปะธะท
ะะดะฝะฐะบะพ ะฝะฐ ะฟัะฐะบัะธะบะต DataFrame, ั ะบะพัะพััะผ ะฝะฐะผ ะฟัะตะดััะพะธั ัะฐะฑะพัะฐัั, ะฝะตะพะฑั
ะพะดะธะผะพ ััะธัะฐัั ะธะท ะฝะตะบะพัะพัะพะณะพ ัะฐะนะปะฐ. ะ ะฐััะผะพััะธะผ ัะฐะฑะพัั ั DataFrame ะฝะฐ ะฟัะธะผะตัะต ัะปะตะดัััะตะณะพ ะฝะฐะฑะพัะฐ ะดะฐะฝะฝัั
. ะะปั ะบะฐะถะดัะพะณะพ ะพะฟัะพัะตะฝะฝะพะณะพ ะธะผะตะตััั ัะปะตะดัััะฐั ะธะฝัะพัะผะฐัะธั
Step14: ะะพัะผะพััะธะผ ะฝะฐ ัะฐะทะผะตั ะดะฐะฝะฝัั
ะธ ะฝะฐะทะฒะฐะฝะธั ะฟัะธะทะฝะฐะบะพะฒ.
Step15: ะัะธ ัะฐะฑะพัะต ั ะฑะพะปััะธะผะธ ะพะฑััะผะฐะผะธ ะดะฐะฝะฝัั
ะฑัะฒะฐะตั ัะดะพะฑะฝะพ ะฟะพัะผะพััะตัั ัะพะปัะบะพ ะฝะฐ ะฝะตะฑะพะปััะธะต ัะฐััะธ ััะตะนะผะฐ (ะฝะฐะฟัะธะผะตั, ะฝะฐัะฐะปะพ).
Step16: ะะตัะพะด describe ะฟะพะบะฐะทัะฒะฐะตั ะพัะฝะพะฒะฝัะต ััะฐัะธััะธัะตัะบะธะต ั
ะฐัะฐะบัะตัะธััะธะบะธ ะดะฐะฝะฝัั
ะฟะพ ะบะฐะถะดะพะผั ะฟัะธะทะฝะฐะบั
Step17: DataFrame ะผะพะถะฝะพ ะพััะพััะธัะพะฒะฐัั ะฟะพ ะทะฝะฐัะตะฝะธั ะบะฐะบะพะณะพ-ะฝะธะฑัะดั ะธะท ะฟัะธะทะฝะฐะบะพะฒ. ะ ะฝะฐัะตะผ ัะปััะฐะต, ะฝะฐะฟัะธะผะตั, ะฟะพ ัะฐะทะผะตัั ะทะฐัะฐะฑะพัะฝะพะน ะฟะปะฐัั.
Step18: ะะฝะดะตะบัะฐัะธั ะธ ะธะทะฒะปะตัะตะฝะธะต ะดะฐะฝะฝัั
DataFrame ะผะพะถะฝะพ ะธะฝะดะตะบัะธัะพะฒะฐัั ะฟะพ-ัะฐะทะฝะพะผั. ะ ัะฒัะทะธ ั ััะธะผ ัะฐััะผะพััะธะผ ัะฐะทะปะธัะฝัะต ัะฟะพัะพะฑั ะธะฝะดะตะบัะฐัะธะธ ะธ ะธะทะฒะปะตัะตะฝะธั ะฝัะถะฝัั
ะฝะฐะผ ะดะฐะฝะฝัั
ะธะท DataFrame ะฝะฐ ะฟัะธะผะตัะต ะฟัะพัััั
ะฒะพะฟัะพัะพะฒ.
ะะปั ะธะทะฒะปะตัะตะฝะธั ะพัะดะตะปัะฝะพะณะพ ััะพะปะฑัะฐ ะผะพะถะฝะพ ะธัะฟะพะปัะทะพะฒะฐัั ะบะพะฝััััะบัะธั ะฒะธะดะฐ DataFrame['Name']. ะะพัะฟะพะปัะทัะตะผัั ััะธะผ ะดะปั ะพัะฒะตัะฐ ะฝะฐ ะฒะพะฟัะพั
Step19: ะัะตะฝั ัะดะพะฑะฝะพะน ัะฒะปัะตััั ะปะพะณะธัะตัะบะฐั ะธะฝะดะตะบัะฐัะธั DataFrame ะฟะพ ะพะดะฝะพะผั ััะพะปะฑัั. ะัะณะปัะดะธั ะพะฝะฐ ัะปะตะดัััะธะผ ะพะฑัะฐะทะพะผ
Step20: ะะฐะบะพะฒะฐ ะผะฐะบัะธะผะฐะปัะฝะฐั ะทะฐัะฐะฑะพัะฝะฐั ะฟะปะฐัะฐ ััะตะดะธ ะผัะถัะธะฝ, ะธะผะตััะธั
ัะปะตะฝััะฒะพ ะฒ ะฟัะพััะพัะทะต, ะธ ั ะพะฟััะพะผ ัะฐะฑะพัั ะดะพ 10 ะปะตั?
Step21: ะัะธะผะตะฝะตะฝะธะต ััะฝะบัะธะธ ะบ ะบะฐะถะดะพะผั ััะพะปะฑัั
Step22: ะััะฟะฟะธัะพะฒะฐะฝะธะต ะดะฐะฝะฝัั
ะฒ ะทะฐะฒะธัะธะผะพััะธ ะพั ะทะฝะฐัะตะฝะธั ะฟัะธะทะฝะฐะบะฐ looks ะธ ะฟะพะดััะตั ััะตะดะฝะตะณะพ ะทะฝะฐัะตะฝะธั ะฟะพ ะบะฐะถะดะพะผั ััะพะปะฑัั ะฒ ะบะฐะถะดะพะน ะณััะฟะฟะต.
Step23: ะะฑัะฐัะตะฝะธะต ะบ ะบะพะฝะบัะตัะฝะพะน ะณััะฟะฟะต
Step24: ะะธะทัะฐะปะธะทะฐัะธั ะฒ Pandas
ะะตัะพะด scatter_matrix ะฟะพะทะฒะพะปัะตั ะฒะธะทัะฐะปะธะทะธัะพะฒะฐัั ะฟะพะฟะฐัะฝัะต ะทะฐะฒะธัะธะผะพััะธ ะผะตะถะดั ะฟัะธะทะฝะฐะบะฐะผะธ (ะฐ ัะฐะบะถะต ัะฐัะฟัะตะดะตะปะตะฝะธะต ะบะฐะถะดะพะณะพ ะฟัะธะทะฝะฐะบะฐ ะฝะฐ ะดะธะฐะณะพะฝะฐะปะธ). ะัะพะดะตะปะฐะตะผ ััะพ ะดะปั ะฝะตะฑะธะฝะฐัะฝัั
ะฟัะธะทะฝะฐะบะพะฒ.
Step25: ะะปั ะบะฐะถะดะพะณะพ ะฟัะธะทะฝะฐะบะฐ ะผะพะถะฝะพ ะฟะพัััะพะธัั ะพัะดะตะปัะฝัั ะณะธััะพะณัะฐะผะผั
Step26: ะะปะธ ััะฐะทั ะดะปั ะฒัะตั
Step27: ะะพะปะตะทะฝัะผ ัะฐะบะถะต ัะฒะปัะตััั ะณัะฐัะธะบ ัะธะฟะฐ box plot ("ััะธะบ ั ััะฐะผะธ"). ะะฝ ะฟะพะทะฒะพะปัะตั ะบะพะผะฟะฐะบัะฝะพ ะฒะธะทัะฐะปะธะทะธัะพะฒะฐัั ะพัะฝะพะฒะฝัะต ั
ะฐัะฐะบัะตัะธััะธะบะธ (ะผะตะดะธะฐะฝั, ะฝะธะถะฝะธะน ะธ ะฒะตัั
ะฝะธะน ะบะฒะฐััะธะปะธ, ะผะธะฝะธะผะฐะปัะฝะพะต ะธ ะผะฐะบัะธะผะฐะปัะฝะพะต ะทะฝะฐัะตะฝะธะต, ะฒัะฑัะพัั) ัะฐัะฟัะตะดะตะปะตะฝะธั ะฟัะธะทะฝะฐะบะพะฒ.
Step28: ะะพะถะฝะพ ัะดะตะปะฐัั ััะพ, ัะณััะฟะฟะธัะพะฒะฐะฒ ะดะฐะฝะฝัะต ะฟะพ ะบะฐะบะพะผั-ะปะธะฑะพ ะดััะณะพะผั ะฟัะธะทะฝะฐะบั | Python Code:
# Python 2 and 3 compatibility
# pip install future
from __future__ import (absolute_import, division,
print_function, unicode_literals)
# turn off Anaconda warnings
import warnings
warnings.simplefilter('ignore')
import pandas as pd
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
Explanation: <center>
<img src="../img/ods_stickers.jpg">
Open Machine Learning Course
</center>
Author of the material: Yury Kashnitsky, programmer-researcher at Mail.ru Group and senior lecturer at the Faculty of Computer Science of the Higher School of Economics
<center>Topic 1. Exploratory data analysis with Pandas</center>
<center>Part 1. Overview of the Pandas library</center>
Pandas is a Python library that provides extensive capabilities for data analysis. It makes it very convenient to load, process, and analyze tabular data using SQL-like queries. In combination with the Matplotlib and Seaborn libraries, it also offers convenient visual analysis of tabular data.
End of explanation
salaries = pd.Series([400, 300, 200, 250],
index = ['Andrew', 'Bob',
'Charles', 'Ann'])
print(salaries)
salaries[salaries > 250]
Explanation: The main data structures in Pandas are the Series and DataFrame classes. The former is a one-dimensional indexed array of data of some fixed type. The latter is a two-dimensional data structure, a table whose every column contains data of a single type. You can think of it as a dictionary of Series objects. The DataFrame structure is well suited for representing real data: rows correspond to the feature descriptions of individual objects, and columns correspond to features.
To start with, let's look at simple examples of creating such objects and of the operations that can be performed on them.
Series
Creating a Series object from 5 elements indexed by letters:
End of explanation
print(salaries.Andrew == salaries['Andrew'])
salaries['Carl'] = np.nan
salaries.fillna(salaries.median(), inplace=True)
salaries
Explanation: Indexing is possible in the form s.Name or s['Name'].
End of explanation
salaries.c = np.nan # Series can contain missing values
print(salaries)
Explanation: Series supports missing values in the data.
End of explanation
print('Second element of salaries is', salaries[1], '\n')
# Smart indexing
print(salaries[:3], '\n')
print('There are', len(salaries[salaries > 0]), 'positive elements in salaries\n')
# Series objects can be the arguments for Numpy functions
print(np.exp(salaries))
Explanation: Series objects are similar to ndarray and can be passed as arguments to most NumPy functions.
End of explanation
df1 = pd.DataFrame(np.random.randn(5, 3),
index=['o1', 'o2', 'o3', 'o4', 'o5'],
columns=['f1', 'f2', 'f3'])
df1
Explanation: DataFrame
Creation and modification
Let's move on to DataFrame objects. Such an object can be created from a numpy array by specifying the row and column names.
End of explanation
df2 = pd.DataFrame({'A': np.random.random(5),
'B': ['a', 'b', 'c', 'd', 'e'],
'C': np.arange(5) > 2})
df2
Explanation: An alternative way is to create a DataFrame from a dictionary of numpy arrays or lists.
End of explanation
print('The element in position 3, B is', df2.at[3, 'B'], '\n')
print(df2.loc[[1, 4], ['A', 'B']])
Explanation: Accessing elements (or whole chunks of the frame):
End of explanation
df2.at[2, 'B'] = 'f'
df2
df2.loc[5] = [3.1415, 'c', False]
df2
df1.columns = ['A', 'B', 'C']
df3 = df1.append(df2)
df3
Explanation: Changing elements and adding new ones:
End of explanation
df1.at['o2', 'A'] = np.nan
df1.at['o4', 'C'] = np.nan
df1
Explanation: Handling missing values
End of explanation
pd.isnull(df1)
Explanation: A boolean mask for missing values (True where a value is missing, False otherwise):
End of explanation
df1.dropna(how='any')
Explanation: You can drop all rows that contain at least one missing value.
End of explanation
df1.fillna(0)
Explanation: Missing values can be filled in with some value.
End of explanation
df = pd.read_csv('../data/beauty.csv', sep = ';')
Explanation: An example of exploratory data analysis with Pandas
Reading from a file and a first look at the data
In practice, the DataFrame we need to work with usually has to be read from a file. We will look at working with a DataFrame using the following dataset. For every surveyed person we have: the hourly wage, work experience, education, physical attractiveness (on a scale from 1 to 5), and several binary features: gender, marital status, state of health (good/poor), union membership, skin colour (white/black), and employment in the service sector (yes/no).
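Right after loading, a quick optional check that the file was parsed as expected (added example, using the same df as above):
df.info() # column dtypes and non-null counts
df.isnull().sum() # missing values per column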
End of explanation
print(df.shape)
print(df.columns.values)
df.head(10)
Explanation: Let us look at the size of the data and at the feature names.
End of explanation
df.head(4)
Explanation: When working with large amounts of data it is often convenient to look at only a small part of the frame (for example, its head).
End of explanation
df.describe()
Explanation: The describe method shows the basic statistical characteristics of the data for each feature: the number of non-missing values, the mean, the standard deviation, the range, the median, and the 0.25 and 0.75 quartiles.
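Individual statistics can of course be computed directly as well, for example (added illustration):
df['wage'].median()
df['wage'].quantile([0.25, 0.75])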
End of explanation
df.sort_values(by='wage', ascending = False).head()
df.sort_values(by=['female', 'wage'],
ascending=[True, False]).head()
Explanation: A DataFrame can be sorted by the value of one of its features, in our case, for example, by wage.
End of explanation
df['goodhlth'].mean()
Explanation: Indexing and extracting data
A DataFrame can be indexed in many different ways. Let us go through the various ways of indexing and extracting the data we need by answering a few simple questions.
A single column can be extracted with the DataFrame['Name'] construction. Let us use it to answer the question: what share of the surveyed people are in good health?
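The same share can also be obtained with value_counts (added alternative):
df['goodhlth'].value_counts(normalize=True)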
End of explanation
df[df['female'] == 1].head()
df[(df['goodhlth'] == 1) &
(df['female'] == 1)].head()
df[(df['female'] == 0)]['wage'].mean() - \
df[(df['female'] == 1)]['wage'].mean()
Explanation: Boolean indexing of a DataFrame by a single column is very convenient. It looks like df[P(df['Name'])], where P is some logical condition that is checked for every element of the column Name. The result of such indexing is a DataFrame consisting only of the rows that satisfy the condition P on the column Name. Let us use it to answer the question: what is the average wage among women?
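The same pattern extends to any combination of conditions, for example (added illustration):
df[(df['female'] == 1) & (df['union'] == 1)]['wage'].median()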
End of explanation
df[(df['female'] == 0) & (df['union'] == 1)
& (df['exper'] < 10)]['wage'].max()
Explanation: What is the maximum wage among men who are union members and have less than 10 years of work experience?
End of explanation
df.apply(np.mean)
Explanation: Applying a function to every column:
End of explanation
df['looks'].describe()
g = df.groupby('looks')
for (i, sub_df) in g:
print(sub_df['wage'].mean(), sub_df['looks'].mean())
Explanation: Grouping the data by the value of the looks feature and computing the mean of every column within each group.
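A more compact way to collect several group statistics at once is agg (added alternative to the loop above):
df.groupby('looks')['wage'].agg(['mean', 'median', 'count'])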
End of explanation
d1 = g.get_group(1)
d1
Explanation: Accessing a specific group:
End of explanation
pd.plotting.scatter_matrix(df[['wage', 'exper', 'educ', 'looks']], # pd.scatter_matrix in older pandas versions
figsize=(15, 15), diagonal='kde')
plt.show()
Explanation: Visualization in Pandas
The scatter_matrix function visualizes the pairwise relationships between features (as well as the distribution of each feature on the diagonal). Let us do this for the non-binary features.
End of explanation
df['looks'].hist()
Explanation: A separate histogram can be plotted for each feature:
End of explanation
df.hist(color = 'k', bins = 30, figsize=(15,10))
plt.show()
Explanation: Or for all of them at once:
End of explanation
df.boxplot(column='exper', by='looks')
plt.show()
Explanation: A box plot ("box-and-whiskers" plot) is also useful. It compactly visualizes the main characteristics of a feature's distribution (the median, the lower and upper quartiles, the minimum and maximum values, and outliers).
End of explanation
df.boxplot(column='exper', by=['female', 'black'],
figsize=(10,10))
plt.show()
Explanation: The same can be done after grouping the data by some other feature:
End of explanation |
12,293 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Duhamel Integral
Problem Data
Step1: Natural Frequency, Damped Frequency
Step2: Computation
Preliminaries
We chose a time step and we compute a number of constants of the integration procedure that depend on the time step
Step3: We initialize a time variable
Step4: We compute the load, the sines and the cosines of $\omega_D t$ and their products
Step5: The main (and only) loop in our code, we initialize A, B and a container for saving the deflections x,
then we compute the next values of A and B, the next value of x is eventually appended to the container.
Step6: It is necessary to plot the response. | Python Code:
M = 600000
T = 0.6
z = 0.10
p0 = 400000
t0, t1, t2, t3 = 0.0, 1.0, 3.0, 6.0
Explanation: Duhamel Integral
Problem Data
End of explanation
wn = 2*np.pi/T
wd = wn*np.sqrt(1-z**2)
Explanation: Natural Frequency, Damped Frequency
End of explanation
dt = 0.05
edt = np.exp(-z*wn*dt)
fac = dt/(2*M*wd)
Explanation: Computation
Preliminaries
We choose a time step and compute a number of constants of the integration procedure that depend on the time step
End of explanation
t = dt*np.arange(1+int(t3/dt))
Explanation: We initialize a time variable
End of explanation
p = np.where(t<=t1, p0*(t-t0)/(t1-t0), np.where(t<t2, p0*(1-(t-t1)/(t2-t1)), 0))
s = np.sin(wd*t)
c = np.cos(wd*t)
sp = s*p
cp = c*p
plt.plot(t, p/1000)
plt.xlabel('Time/s')
plt.ylabel('Force/kN')
plt.xlim((t0,t3))
plt.grid();
Explanation: We compute the load, the sines and the cosines of $\omega_D t$ and their products
End of explanation
A, B, x = 0, 0, [0]
for i, _ in enumerate(t[1:], 1):
A = A*edt+fac*(cp[i-1]*edt+cp[i])
B = B*edt+fac*(sp[i-1]*edt+sp[i])
x.append(A*s[i]-B*c[i])
Explanation: The main (and only) loop in our code, we initialize A, B and a container for saving the deflections x,
then we compute the next values of A and B, the next value of x is eventually appended to the container.
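For reference (added here as a worked statement of what A and B approximate), the response being evaluated is the Duhamel integral for an under-damped SDOF system,
$$x(t) = \frac{1}{M\,\omega_D}\int_0^t p(\tau)\, e^{-\zeta\omega_n (t-\tau)}\, \sin\big(\omega_D (t-\tau)\big)\, \mathrm{d}\tau = A(t)\,\sin\omega_D t - B(t)\,\cos\omega_D t,$$
$$A(t) = \frac{1}{M\omega_D}\int_0^t p(\tau)\, e^{-\zeta\omega_n (t-\tau)}\cos(\omega_D \tau)\,\mathrm{d}\tau, \qquad B(t) = \frac{1}{M\omega_D}\int_0^t p(\tau)\, e^{-\zeta\omega_n (t-\tau)}\sin(\omega_D \tau)\,\mathrm{d}\tau,$$
and the loop updates A and B with the trapezoidal rule, which is where the constants edt and fac come from.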
End of explanation
x = np.array(x)
k = M*wn**2
Dst = p/k
plt.plot(t, x*1000)
plt.plot(t, Dst*1000)
plt.xlabel('Time/s')
plt.ylabel('Deflection/mm')
plt.xlim((t0,t3))
plt.grid()
plt.show();
Explanation: Finally, we plot the computed dynamic response (converted to mm) together with the static deflection Dst = p/k for comparison.
End of explanation |
12,294 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The TensorFlow Authors.
Step1: ๊ฐ์ค์น ํด๋ฌ์คํฐ๋ง ์ข
ํฉ ๊ฐ์ด๋
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https
Step2: ํด๋ฌ์คํฐ๋ง๋ ๋ชจ๋ธ ์ ์ํ๊ธฐ
์ ์ฒด ๋ชจ๋ธ ํด๋ฌ์คํฐ๋ง(์์ฐจ์ ๋ฐ ํจ์ํ)
๋ชจ๋ธ ์ ํ์ฑ ๊ฐ์ ์ ์ํ ํ
Step3: ์ผ๋ถ ๋ ์ด์ด ํด๋ฌ์คํฐ๋ง(์์ฐจ์ ๋ฐ ๊ธฐ๋ฅ์ ๋ชจ๋ธ)
๋ชจ๋ธ ์ ํ์ฑ ๊ฐ์ ์ ์ํ ํ
Step4: ์ฌ์ฉ์ ์ ์ Keras ๋ ์ด์ด ํด๋ฌ์คํฐ๋ง ๋๋ ํด๋ฌ์คํธ๋ง ๋์ ๋ ์ด์ด์ ๊ฐ์ค์น ์ง์ ํ๊ธฐ
tfmot.clustering.keras.ClusterableLayer๋ ๋ ๊ฐ์ง ์ฌ์ฉ ์ฌ๋ก๋ฅผ ์ ๊ณตํฉ๋๋ค.
์ฌ์ฉ์ ์ ์ Keras ๋ ์ด์ด๋ฅผ ํฌํจํ์ฌ ๊ธฐ๋ณธ์ ์ผ๋ก ์ง์ํ์ง ์๋ ๋ชจ๋ ๋ ์ด์ด๋ฅผ ํด๋ฌ์คํฐ๋งํฉ๋๋ค.
ํด๋ฌ์คํฐ๋งํ ์ง์ ๋ ์ด์ด์ ๊ฐ์ค์น๋ฅผ ์ง์ ํฉ๋๋ค.
์๋ฅผ ๋ค์ด API๋ ๊ธฐ๋ณธ์ ์ผ๋ก Dense ๋ ์ด์ด์ ์ปค๋๋ง ํด๋ฌ์คํฐ๋งํฉ๋๋ค. ์๋์ ์์๋ ๋ฐ์ด์ด์ค๋ฅผ ํด๋ฌ์คํฐ๋งํ๋๋ก ์ด๋ฅผ ์์ ํ๋ ๋ฐฉ๋ฒ์ ๋ณด์ฌ์ค๋๋ค. Keras ๋ ์ด์ด์์ ํ์ํ ๊ฒฝ์ฐ get_clusterable_weights ํจ์๋ฅผ ์ฌ์ ์ํด์ผ ํฉ๋๋ค. ์ฌ๊ธฐ์ ํด๋ฌ์คํฐ๋งํ ํ์ต ๊ฐ๋ฅํ ๋ณ์์ ์ด๋ฆ๊ณผ ํ์ต ๊ฐ๋ฅํ ๋ณ์ ์์ฒด๋ฅผ ์ง์ ํฉ๋๋ค. ์๋ฅผ ๋ค์ด ๋น ๋ชฉ๋ก []์ ๋ฐํํ๋ฉด ๊ฐ์ค์น๋ฅผ ํด๋ฌ์คํฐ๋งํ ์ ์์ต๋๋ค.
์ผ๋ฐ์ ์ธ ์ค์
Step5: tfmot.clustering.keras.ClusterableLayer๋ฅผ ์ฌ์ฉํ์ฌ Keras ์ฌ์ฉ์ ์ ์ ๋ ์ด์ด๋ฅผ ํด๋ฌ์คํฐ๋งํ ์๋ ์์ต๋๋ค. ์ด๋ ๊ฒ ํ๋ ค๋ฉด ํ์์ ๊ฐ์ด tf.keras.Layer๋ฅผ ํ์ฅํ๊ณ __init__, call, build ํจ์๋ฅผ ๊ตฌํํฉ๋๋ค. ๋จ, clusterable_layer.ClusterableLayer ํด๋์ค๋ ํ์ฅํ๊ณ ๋ค์๋ ๊ตฌํํด์ผ ํฉ๋๋ค.
get_clusterable_weights, ์ฌ๊ธฐ์์ ์์ ๊ฐ์ด ํด๋ฌ์คํฐ๋งํ ๊ฐ์ค์น ์ปค๋์ ์ง์ ํฉ๋๋ค.
get_clusterable_algorithm, ์ฌ๊ธฐ์์ ๊ฐ์ค์น ํ
์์ ํด๋ฌ์คํฐ๋ง ์๊ณ ๋ฆฌ์ฆ์ ์ง์ ํฉ๋๋ค. ์ด๋ ํด๋ฌ์คํฐ๋ง์ ์ํด ์ฌ์ฉ์ ์ ์ ๋ ์ด์ด ๊ฐ์ค์น์ ๋ชจ์์ ์ง์ ํด์ผ ํ๊ธฐ ๋๋ฌธ์
๋๋ค. ๋ฐํ๋ ํด๋ฌ์คํฐ๋ง ์๊ณ ๋ฆฌ์ฆ ํด๋์ค๋ clustering_algorithm.ClusteringAlgorithm ํด๋์ค์์ ํ์๋์ด์ผ ํ๋ฉฐ get_pulling_indices ํจ์๋ฅผ ๋ฎ์ด์จ์ผ ํฉ๋๋ค. 1D, 2D, 3D ๋ฑ๊ธ์ ๊ฐ์ค์น๋ฅผ ์ง์ํ๋ ์ด ํจ์์ ์์๋ ์ฌ๊ธฐ์์ ํ์ธํ ์ ์์ต๋๋ค.
์ด ์ฌ์ฉ ์ฌ๋ก์ ์์๋ ์ฌ๊ธฐ์์ ํ์ธํ ์ ์์ต๋๋ค.
ํด๋ฌ์คํฐ๋ง๋ ๋ชจ๋ธ ๊ฒ์ฌ ๋ฐ ์ญ์ง๋ ฌํํ๊ธฐ
์ฌ์ฉ ์ฌ๋ก
Step6: ํด๋ฌ์คํฐ๋ง๋ ๋ชจ๋ธ์ ์ ํ์ฑ ๊ฐ์ ํ๊ธฐ
ํน์ ์ฌ์ฉ ์ฌ๋ก์ ๋ํด ๊ณ ๋ คํ ์ ์๋ ํ์ ์๊ฐํฉ๋๋ค.
์ผํธ๋ก์ด๋ ์ด๊ธฐํ(Centroid initialization)๋ ์ต์ข
์ต์ ํ๋ ๋ชจ๋ธ์ ์ ํ์ฑ์์ ์ค์ํ ์ญํ ์ ํฉ๋๋ค. ์ผ๋ฐ์ ์ผ๋ก kmeans++ ์ด๊ธฐํ๋ ์ ํ, ๋ฐ๋ ๋ฐ ์์ ์ด๊ธฐํ๋ณด๋ค ์ฑ๋ฅ์ด ์ฐ์ํฉ๋๋ค. kmeans++๋ฅผ ์ฌ์ฉํ์ง ์์ ๊ฒฝ์ฐ ์ ํ ์ด๊ธฐํ๋ ํฐ ๊ฐ์ค์น๋ฅผ ๋์น๋ ๊ฒฝํฅ์ด ์๊ธฐ ๋๋ฌธ์ ๋ฐ๋ ๋ฐ ์์ ์ด๊ธฐํ๋ณด๋ค ์ฑ๋ฅ์ด ๋ ์ฐ์ํฉ๋๋ค. ๊ทธ๋ฌ๋, ๋ฐ๋ ์ด๊ธฐํ๋ ๋ฐ์ด๋ชจ๋ฌ ๋ถํฌ๊ฐ ์๋ ๊ฐ์ค์น์ ํด๋ฌ์คํฐ๋ฅผ ๊ฑฐ์ ์ฌ์ฉํ์ง ์๋ ๊ฒฝ์ฐ ๋ ๋์ ์ ํ์ฑ์ ์ ๊ณตํ๋ ๊ฒ์ผ๋ก ๊ด์ฐฐ๋์์ต๋๋ค.
ํด๋ฌ์คํฐ๋ง๋ ๋ชจ๋ธ์ ๋ฏธ์ธ ์กฐ์ ํ ๋ ํ๋ จ์ ์ฌ์ฉ๋๋ ํ์ต๋ฅ ๋ณด๋ค ๋ฎ์ ํ์ต๋ฅ ์ ์ค์ ํฉ๋๋ค.
๋ชจ๋ธ ์ ํ์ฑ์ ๊ฐ์ ํ๊ธฐ ์ํ ์ผ๋ฐ์ ์ธ ์์ด๋์ด๋ฅผ ์ป์ผ๋ ค๋ฉด "ํด๋ฌ์คํฐ๋ง๋ ๋ชจ๋ธ ์ ์ํ๊ธฐ"์์ ์ฌ์ฉ ์ฌ๋ก์ ๋ํ ํ์ ์ดํด๋ณด์ธ์.
๋ฐฐํฌ
ํฌ๊ธฐ ์์ถ์ผ๋ก ๋ชจ๋ธ ๋ด๋ณด๋ด๊ธฐ
์ผ๋ฐ์ ์ธ ์ค์ | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
! pip install -q tensorflow-model-optimization
import tensorflow as tf
import numpy as np
import tempfile
import os
import tensorflow_model_optimization as tfmot
input_dim = 20
output_dim = 20
x_train = np.random.randn(1, input_dim).astype(np.float32)
y_train = tf.keras.utils.to_categorical(np.random.randn(1), num_classes=output_dim)
def setup_model():
model = tf.keras.Sequential([
tf.keras.layers.Dense(input_dim, input_shape=[input_dim]),
tf.keras.layers.Flatten()
])
return model
def train_model(model):
model.compile(
loss=tf.keras.losses.categorical_crossentropy,
optimizer='adam',
metrics=['accuracy']
)
model.summary()
model.fit(x_train, y_train)
return model
def save_model_weights(model):
_, pretrained_weights = tempfile.mkstemp('.h5')
model.save_weights(pretrained_weights)
return pretrained_weights
def setup_pretrained_weights():
model= setup_model()
model = train_model(model)
pretrained_weights = save_model_weights(model)
return pretrained_weights
def setup_pretrained_model():
model = setup_model()
pretrained_weights = setup_pretrained_weights()
model.load_weights(pretrained_weights)
return model
def save_model_file(model):
_, keras_file = tempfile.mkstemp('.h5')
model.save(keras_file, include_optimizer=False)
return keras_file
def get_gzipped_model_size(model):
# It returns the size of the gzipped model in bytes.
import os
import zipfile
keras_file = save_model_file(model)
_, zipped_file = tempfile.mkstemp('.zip')
with zipfile.ZipFile(zipped_file, 'w', compression=zipfile.ZIP_DEFLATED) as f:
f.write(keras_file)
return os.path.getsize(zipped_file)
setup_model()
pretrained_weights = setup_pretrained_weights()
Explanation: Comprehensive guide to weight clustering
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/model_optimization/guide/clustering/clustering_comprehensive_guide"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/model_optimization/guide/clustering/clustering_comprehensive_guide.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ko/model_optimization/guide/clustering/clustering_comprehensive_guide.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/model_optimization/guide/clustering/clustering_comprehensive_guide.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a></td>
</table>
This is the comprehensive guide for weight clustering, part of the TensorFlow Model Optimization Toolkit.
This page documents various use cases and shows how to use the API for each one. Once you know which APIs you need, look up the parameters and the low-level details in the API docs.
To see the benefits of weight clustering and what is supported, check the overview.
For a single end-to-end example, see the weight clustering example.
This guide covers the following use cases:
Define a clustered model.
Checkpoint and deserialize a clustered model.
Improve the accuracy of the clustered model.
For deployment only, you must take steps to see compression benefits.
Setup
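As a rough end-to-end sketch of the flow covered in the sections below (illustrative only; it reuses the helper functions defined above, and the number of clusters is an arbitrary example value):
sketch_params = {'number_of_clusters': 8, 'cluster_centroids_init': tfmot.clustering.keras.CentroidInitialization.KMEANS_PLUS_PLUS}
clustered = tfmot.clustering.keras.cluster_weights(setup_pretrained_model(), **sketch_params)
clustered.compile(loss=tf.keras.losses.categorical_crossentropy, optimizer='adam', metrics=['accuracy'])
clustered.fit(x_train, y_train)
final = tfmot.clustering.keras.strip_clustering(clustered)
print("gzipped size of stripped model:", get_gzipped_model_size(final))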
End of explanation
import tensorflow_model_optimization as tfmot
cluster_weights = tfmot.clustering.keras.cluster_weights
CentroidInitialization = tfmot.clustering.keras.CentroidInitialization
clustering_params = {
'number_of_clusters': 3,
'cluster_centroids_init': CentroidInitialization.KMEANS_PLUS_PLUS
}
model = setup_model()
model.load_weights(pretrained_weights)
clustered_model = cluster_weights(model, **clustering_params)
clustered_model.summary()
Explanation: Define a clustered model
Cluster a whole model (sequential and functional)
Tips for better model accuracy:
Pass a pre-trained model with acceptable accuracy to this API. Training a model from scratch with clustering results in subpar accuracy.
In some cases, clustering certain layers has a detrimental effect on model accuracy. Check "Cluster some layers" to see how to skip clustering the layers that affect accuracy the most.
To cluster all layers, apply tfmot.clustering.keras.cluster_weights to the model.
End of explanation
# Create a base model
base_model = setup_model()
base_model.load_weights(pretrained_weights)
# Helper function uses `cluster_weights` to make only
# the Dense layers train with clustering
def apply_clustering_to_dense(layer):
if isinstance(layer, tf.keras.layers.Dense):
return cluster_weights(layer, **clustering_params)
return layer
# Use `tf.keras.models.clone_model` to apply `apply_clustering_to_dense`
# to the layers of the model.
clustered_model = tf.keras.models.clone_model(
base_model,
clone_function=apply_clustering_to_dense,
)
clustered_model.summary()
Explanation: Cluster some layers (sequential and functional models)
Tips for better model accuracy:
You must pass a pre-trained model with acceptable accuracy to this API. Training models from scratch with clustering results in subpar accuracy.
Cluster the later layers, which have more redundant parameters (e.g. tf.keras.layers.Dense, tf.keras.layers.Conv2D), rather than the early layers.
Freeze the early layers prior to the clustered layers during fine-tuning, and treat the number of frozen layers as a hyperparameter. Empirically, freezing most of the early layers is ideal for the current clustering API.
Avoid clustering critical layers (e.g. the attention mechanism).
More: the tfmot.clustering.keras.cluster_weights API docs provide details on how to vary the clustering configuration per layer.
End of explanation
class MyDenseLayer(tf.keras.layers.Dense, tfmot.clustering.keras.ClusterableLayer):
def get_clusterable_weights(self):
# Cluster kernel and bias. This is just an example, clustering
# bias usually hurts model accuracy.
return [('kernel', self.kernel), ('bias', self.bias)]
# Use `cluster_weights` to make the `MyDenseLayer` layer train with clustering as usual.
model_for_clustering = tf.keras.Sequential([
tfmot.clustering.keras.cluster_weights(MyDenseLayer(20, input_shape=[input_dim]), **clustering_params),
tf.keras.layers.Flatten()
])
model_for_clustering.summary()
Explanation: Cluster a custom Keras layer, or specify which weights of a layer to cluster
tfmot.clustering.keras.ClusterableLayer serves two use cases:
Cluster any layer that is not supported natively, including a custom Keras layer.
Specify which weights of a supported layer are to be clustered.
For example, the API defaults to clustering only the kernel of a Dense layer. The example below shows how to modify it so that the bias is clustered as well. When deriving from a Keras layer, you need to override the get_clusterable_weights function, where you specify the names of the trainable variables to be clustered together with the trainable variables themselves. For example, returning an empty list [] means that no weights can be clustered.
Common mistake: clustering the bias usually hurts model accuracy too much.
End of explanation
# Define the model.
base_model = setup_model()
base_model.load_weights(pretrained_weights)
clustered_model = cluster_weights(base_model, **clustering_params)
# Save or checkpoint the model.
_, keras_model_file = tempfile.mkstemp('.h5')
clustered_model.save(keras_model_file, include_optimizer=True)
# `cluster_scope` is needed for deserializing HDF5 models.
with tfmot.clustering.keras.cluster_scope():
loaded_model = tf.keras.models.load_model(keras_model_file)
loaded_model.summary()
Explanation: A custom Keras layer can also be clustered using tfmot.clustering.keras.ClusterableLayer. To do this, extend tf.keras.Layer as usual and implement the __init__, call and build functions, but you also need to extend the clusterable_layer.ClusterableLayer class and implement:
get_clusterable_weights, where you specify the weight kernels to be clustered, as shown above.
get_clusterable_algorithm, where you specify the clustering algorithm for the weight tensor. This is needed because the shape of the custom layer weights has to be specified for clustering. The returned clustering algorithm class must be derived from the clustering_algorithm.ClusteringAlgorithm class and must override the get_pulling_indices function. An example of this function, which supports weights of ranks 1D, 2D and 3D, can be found here.
An example of this use case can be found here.
Checkpoint and deserialize a clustered model
Use case: this code is only needed for the HDF5 model format (not for HDF5 weights or other formats).
End of explanation
model = setup_model()
clustered_model = cluster_weights(model, **clustering_params)
clustered_model.compile(
loss=tf.keras.losses.categorical_crossentropy,
optimizer='adam',
metrics=['accuracy']
)
clustered_model.fit(
x_train,
y_train
)
final_model = tfmot.clustering.keras.strip_clustering(clustered_model)
print("final model")
final_model.summary()
print("\n")
print("Size of gzipped clustered model without stripping: %.2f bytes"
% (get_gzipped_model_size(clustered_model)))
print("Size of gzipped clustered model with stripping: %.2f bytes"
% (get_gzipped_model_size(final_model)))
Explanation: Improve the accuracy of the clustered model
Tips to consider for your specific use case:
Centroid initialization plays a key role in the accuracy of the final optimized model. In general, kmeans++ initialization outperforms linear, density and random initialization. When kmeans++ is not used, linear initialization tends to outperform density and random initialization, since it does not tend to miss large weights. However, density initialization has been observed to give better accuracy in cases where very few clusters are used on weights with bimodal distributions.
When fine-tuning the clustered model, set a learning rate that is lower than the one used in training.
For general ideas to improve model accuracy, look for tips for your use case under "Define a clustered model".
Deployment
Export model with size compression
Common mistake: to see the compression benefits of clustering, both <code>strip_clustering</code> and the application of a standard compression algorithm (e.g. via gzip) are necessary.
End of explanation |
12,295 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Iterators and Generators
In this section of the course we will be learning about the difference between iteration and generation in Python and how to construct our own Generators with the yield statement. Generators allow us to generate as we go along, instead of holding everything in memory.
We've touch on this topic in the past when discussing the range() function in Python 2 and the similar xrange(), with the difference being the xrange() was a generator.
Lets explore a little deep. We've learned how to create functions with def and the return statement. Generator functions allow us to write a function that can send back a value and then later resume to pick up where it left off. This type of function is a generator in Python, allowing us to generate a sequence of values over time. The main difference in syntax will be the use of a yield statement.
In most aspects, a generator function will appear very similar to a normal function. The main difference is when a generator function is compiled they become an object that support an iteration protocol. That means when they are called in your code the don't actually return a value and then exit, the generator functions will automatically suspend and resume their execution and state around the last point of value generation. The main advantage here is that instead of having to compute an entire series of values upfront and the generator functions can be suspended, this feature is known as state suspension.
๏ฟผ๏ฟผTo start getting a better understanding of generators, lets go ahead and see how we can create some.
Step1: Great! Now since we have a generator function we don't have to keep track of every single cube we created.
Generators are best for calculating large sets of results (particularly in calculations that involve loops themselves) in cases where we donโt want to allocate the memory for all of the results at the same time.
As we've noted in previous lectures (such as range()) many Standard Library functions that return lists in Python 2 have been modified to return generators in Python 3 because generators.
Lets create another example generator which calculates fibonacci numbers
Step2: What is this was a normal function, what would it look like?
Step3: Notice that if we call some huge value of n (like 100000) the second function will have to keep track of every single result, when in our case we actually only care about the previous result to generate the next one!
next() and iter() built-in functions
A key to fully understanding generators is the next function() and the iter() function.
The next function allows us to access the next element in a sequence. Lets check it out
Step4: After yielding all the values next() caused a StopIteration error. What this error informs us of is that all the values have been yielded.
You might be wondering that why donโt we get this error while using a for loop? The for loop automatically catches this error and stops calling next.
Lets go ahead and check out how to use iter(). You remember that strings are iterables
Step5: But that doesn't mean the string itself is an iterator! We can check this with the next() function
Step6: Interesting, this means that a string object supports iteration, but we can not directly iterate over it as we could with a generator function. The iter() function allows us to do just that! | Python Code:
# Generator function for the cube of numbers (power of 3)
def gencubes(n):
for num in range(n):
yield num**3
for x in gencubes(10):
print x
Explanation: Iterators and Generators
In this section of the course we will be learning about the difference between iteration and generation in Python and how to construct our own Generators with the yield statement. Generators allow us to generate as we go along, instead of holding everything in memory.
We've touch on this topic in the past when discussing the range() function in Python 2 and the similar xrange(), with the difference being the xrange() was a generator.
Lets explore a little deep. We've learned how to create functions with def and the return statement. Generator functions allow us to write a function that can send back a value and then later resume to pick up where it left off. This type of function is a generator in Python, allowing us to generate a sequence of values over time. The main difference in syntax will be the use of a yield statement.
In most aspects, a generator function will appear very similar to a normal function. The main difference is when a generator function is compiled they become an object that support an iteration protocol. That means when they are called in your code the don't actually return a value and then exit, the generator functions will automatically suspend and resume their execution and state around the last point of value generation. The main advantage here is that instead of having to compute an entire series of values upfront and the generator functions can be suspended, this feature is known as state suspension.
๏ฟผ๏ฟผTo start getting a better understanding of generators, lets go ahead and see how we can create some.
End of explanation
def genfibon(n):
'''
Generate a fibonnaci sequence up to n
'''
a = 1
b = 1
for i in range(n):
yield a
a,b = b,a+b
for num in genfibon(10):
print num
Explanation: Great! Now since we have a generator function we don't have to keep track of every single cube we created.
Generators are best for calculating large sets of results (particularly in calculations that involve loops themselves) in cases where we donโt want to allocate the memory for all of the results at the same time.
As we've noted in previous lectures (such as range()) many Standard Library functions that return lists in Python 2 have been modified to return generators in Python 3 because generators.
Lets create another example generator which calculates fibonacci numbers:
End of explanation
def fibon(n):
a = 1
b = 1
output = []
for i in range(n):
output.append(a)
a,b = b,a+b
return output
fibon(10)
Explanation: What is this was a normal function, what would it look like?
End of explanation
def simple_gen():
for x in range(3):
yield x
# Assign simple_gen
g = simple_gen()
print next(g)
print next(g)
print next(g)
print next(g)
Explanation: Notice that if we call some huge value of n (like 100000) the second function will have to keep track of every single result, when in our case we actually only care about the previous result to generate the next one!
next() and iter() built-in functions
A key to fully understanding generators is the next function() and the iter() function.
The next function allows us to access the next element in a sequence. Lets check it out:
End of explanation
s = 'hello'
#Iterate over string
for let in s:
print let
Explanation: After yielding all the values next() caused a StopIteration error. What this error informs us of is that all the values have been yielded.
You might be wondering that why donโt we get this error while using a for loop? The for loop automatically catches this error and stops calling next.
Lets go ahead and check out how to use iter(). You remember that strings are iterables:
End of explanation
next(s)
Explanation: But that doesn't mean the string itself is an iterator! We can check this with the next() function:
End of explanation
s_iter = iter(s)
next(s_iter)
next(s_iter)
Explanation: Interesting, this means that a string object supports iteration, but we can not directly iterate over it as we could with a generator function. The iter() function allows us to do just that!
End of explanation |
12,296 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Parallel Monto-Carlo options pricing
This notebook shows how to use IPython.parallel to do Monte-Carlo options pricing in parallel. We will compute the price of a large number of options for different strike prices and volatilities.
Problem setup
Step1: Here are the basic parameters for our computation.
Step3: Monte-Carlo option pricing function
The following function computes the price of a single option. It returns the call and put prices for both European and Asian style options.
Step4: We can time a single call of this function using the %timeit magic
Step5: Parallel computation across strike prices and volatilities
The Client is used to setup the calculation and works with all engines.
Step6: A LoadBalancedView is an interface to the engines that provides dynamic load
balancing at the expense of not knowing which engine will execute the code.
Step7: Submit tasks for each (strike, sigma) pair. Again, we use the %%timeit magic to time the entire computation.
Step8: Process and visualize results
Retrieve the results using the get method
Step9: Assemble the result into a structured NumPy array.
Step10: Plot the value of the European call in (volatility, strike) space.
Step11: Plot the value of the Asian call in (volatility, strike) space.
Step12: Plot the value of the European put in (volatility, strike) space.
Step13: Plot the value of the Asian put in (volatility, strike) space. | Python Code:
%pylab inline
import sys
import time
from IPython.parallel import Client
import numpy as np
Explanation: Parallel Monte-Carlo options pricing
This notebook shows how to use IPython.parallel to do Monte-Carlo options pricing in parallel. We will compute the price of a large number of options for different strike prices and volatilities.
Problem setup
End of explanation
price = 100.0 # Initial price
rate = 0.05 # Interest rate
days = 260 # Days to expiration
paths = 10000 # Number of MC paths
n_strikes = 6 # Number of strike values
min_strike = 90.0 # Min strike price
max_strike = 110.0 # Max strike price
n_sigmas = 5 # Number of volatility values
min_sigma = 0.1 # Min volatility
max_sigma = 0.4 # Max volatility
strike_vals = np.linspace(min_strike, max_strike, n_strikes)
sigma_vals = np.linspace(min_sigma, max_sigma, n_sigmas)
print "Strike prices: ", strike_vals
print "Volatilities: ", sigma_vals
Explanation: Here are the basic parameters for our computation.
End of explanation
def price_option(S=100.0, K=100.0, sigma=0.25, r=0.05, days=260, paths=10000):
Price European and Asian options using a Monte Carlo method.
Parameters
----------
S : float
The initial price of the stock.
K : float
The strike price of the option.
sigma : float
The volatility of the stock.
r : float
The risk free interest rate.
days : int
The number of days until the option expires.
paths : int
The number of Monte Carlo paths used to price the option.
Returns
-------
A tuple of (E. call, E. put, A. call, A. put) option prices.
import numpy as np
from math import exp,sqrt
h = 1.0/days
const1 = exp((r-0.5*sigma**2)*h)
const2 = sigma*sqrt(h)
stock_price = S*np.ones(paths, dtype='float64')
stock_price_sum = np.zeros(paths, dtype='float64')
for j in range(days):
growth_factor = const1*np.exp(const2*np.random.standard_normal(paths))
stock_price = stock_price*growth_factor
stock_price_sum = stock_price_sum + stock_price
stock_price_avg = stock_price_sum/days
zeros = np.zeros(paths, dtype='float64')
r_factor = exp(-r*h*days)
euro_put = r_factor*np.mean(np.maximum(zeros, K-stock_price))
asian_put = r_factor*np.mean(np.maximum(zeros, K-stock_price_avg))
euro_call = r_factor*np.mean(np.maximum(zeros, stock_price-K))
asian_call = r_factor*np.mean(np.maximum(zeros, stock_price_avg-K))
return (euro_call, euro_put, asian_call, asian_put)
Explanation: Monte-Carlo option pricing function
The following function computes the price of a single option. It returns the call and put prices for both European and Asian style options.
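For reference (added note), each Monte-Carlo path advances the stock price under the risk-neutral measure as
$$S_{t+h} = S_t \exp\left[(r - \tfrac{1}{2}\sigma^2)\,h + \sigma\sqrt{h}\,Z\right], \qquad Z \sim \mathcal{N}(0,1),$$
which is what the const1, const2 and growth_factor lines implement; the option prices are then the discounted averages of the payoffs, e.g. the European call is $e^{-rT}\,\mathbb{E}[\max(S_T-K,\,0)]$.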
End of explanation
%timeit -n1 -r1 print price_option(S=100.0, K=100.0, sigma=0.25, r=0.05, days=260, paths=10000)
Explanation: We can time a single call of this function using the %timeit magic:
End of explanation
c = Client(profile="default")
Explanation: Parallel computation across strike prices and volatilities
The Client is used to setup the calculation and works with all engines.
End of explanation
view = c.load_balanced_view()
Explanation: A LoadBalancedView is an interface to the engines that provides dynamic load
balancing at the expense of not knowing which engine will execute the code.
End of explanation
%%timeit -n1 -r1
async_results = []
for strike in strike_vals:
for sigma in sigma_vals:
# This line submits the tasks for parallel computation.
ar = view.apply_async(price_option, price, strike, sigma, rate, days, paths)
async_results.append(ar)
c.wait(async_results) # Wait until all tasks are done.
len(async_results)
Explanation: Submit tasks for each (strike, sigma) pair. Again, we use the %%timeit magic to time the entire computation.
End of explanation
results = [ar.get() for ar in async_results]
Explanation: Process and visualize results
Retrieve the results using the get method:
End of explanation
prices = np.empty(n_strikes*n_sigmas,
dtype=[('ecall',float),('eput',float),('acall',float),('aput',float)]
)
for i, price in enumerate(results):
prices[i] = tuple(price)
prices.shape = (n_strikes, n_sigmas)
Explanation: Assemble the result into a structured NumPy array.
End of explanation
plt.figure()
plt.contourf(sigma_vals, strike_vals, prices['ecall'])
plt.axis('tight')
plt.colorbar()
plt.title('European Call')
plt.xlabel("Volatility")
plt.ylabel("Strike Price")
Explanation: Plot the value of the European call in (volatility, strike) space.
End of explanation
plt.figure()
plt.contourf(sigma_vals, strike_vals, prices['acall'])
plt.axis('tight')
plt.colorbar()
plt.title("Asian Call")
plt.xlabel("Volatility")
plt.ylabel("Strike Price")
Explanation: Plot the value of the Asian call in (volatility, strike) space.
End of explanation
plt.figure()
plt.contourf(sigma_vals, strike_vals, prices['eput'])
plt.axis('tight')
plt.colorbar()
plt.title("European Put")
plt.xlabel("Volatility")
plt.ylabel("Strike Price")
Explanation: Plot the value of the European put in (volatility, strike) space.
End of explanation
plt.figure()
plt.contourf(sigma_vals, strike_vals, prices['aput'])
plt.axis('tight')
plt.colorbar()
plt.title("Asian Put")
plt.xlabel("Volatility")
plt.ylabel("Strike Price")
Explanation: Plot the value of the Asian put in (volatility, strike) space.
End of explanation |
12,297 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Atherosclerosis of the Aorta
Also known as heart disease or hardening of the arteries. This disease is the number one killer of Americans.
Step1: Peptic Ulcers
There have been long-standing connections noticed between ulcers and atherosclerosis. Partiaully due to smokers having a higher than average incidence of peptic ulcers and atherosclerosis. You can see an editorial in the British Medical Journal all the way back in the 1970's discussing this.
Hearing Loss
From an article from the Journal of Atherosclerosis in 2012
Step2: Arthritis
From the Crohn's and Colitis Foundation of America | Python Code:
print_synonyms('dx::440.0', model)
Explanation: Atherosclerosis of the Aorta
Also known as heart disease or hardening of the arteries. This disease is the number one killer of Americans.
End of explanation
#Crohn's Disease
print_synonyms('dx::555.9', model)
Explanation: Peptic Ulcers
There have long been noted connections between ulcers and atherosclerosis, partially because smokers have a higher than average incidence of both peptic ulcers and atherosclerosis. You can see an editorial in the British Medical Journal as far back as the 1970s discussing this.
Hearing Loss
From an article from the Journal of Atherosclerosis in 2012:
Sensorineural hearing loss seemed to be associated with vascular endothelial dysfunction and an increased cardiovascular risk
Knee Joint Replacements
These procedures are common among those with osteoarthritis and there has been a solid correlation between osteoarthritis and atherosclerosis in the literature.
Crohn's Disease
Crohn's disease is a type of inflammatory bowel disease that is caused by a combination of environmental, immune and bacterial factors. Let's see if we can recover some of these connections from the data.
End of explanation
print_synonyms_filt('dx::042', model, 'rx')
Explanation: Arthritis
From the Crohn's and Colitis Foundation of America:
Arthritis, or inflammation of the joints, is the most common extraintestinal complication of IBD. It may affect as many as 25% of people with Crohnโs disease or ulcerative colitis. Although arthritis is typically associated with advancing age, in IBD it often strikes the youngest patients.
Dental Abscesses
While not much medical literature exists with a specific link between dental abscesses and Crohn's (there are general oral issues noted here), you do see lengthy discussions on the Crohn's forums about abscesses being a common occurrence with Crohn's.
Yeast Infections
Candidiasis of skin and nails is a form of yeast infection on the skin. From the journal "Critical Review of Microbiology" here.
It is widely accepted that Candidia could result from an inappropriate inflammatory response to intestinal microorganisms in a genetically susceptible host. Most studies to date have concerned the involvement of bacteria in disease progression. In addition to bacteria, there appears to be a possible link between the commensal yeast Candida albicans and disease development.
Drugs associated with HIV/AIDS
The notion of a 'synonym' can also find connections between clinical data types. Here we look for the drugs most associated with HIV/AIDS
End of explanation |
12,298 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
adrasteia
01 Gaia Universe Model
Step1: Read in the data
Step2: I had to modify the raw data to get it to read in conveniently. I try not to modify raw data formats (for reproducibility purposes) but there didn't seem to be a convenient way otherwise. The problem was the Vtype column is undefined for most of the file, so a fixed-width-file appears to have no column there, which screws up the last few columns. I simply labeled the first three columns as "ajun" to make it look like there was something there. So I will just drop those rows.
Step3: These are the input stars. The absolute magnitude versus effective temperature.
Step4: This is sort-of a Malmquist bias plot. At a given distance, you can detect the brighter stars.
Step5: This is a typical proper motion scatter plot. | Python Code:
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
%config InlineBackend.figure_format = "retina"
import pandas as pd
sns.set_context("talk")
Explanation: adrasteia
01 Gaia Universe Model: Milky Way Sample
gully
March 2016
Tasks:
- Read in the data
- Make a plot
End of explanation
names = ['byte_range', 'data_type', 'col_ID', 'desc']
fwf_cols = pd.read_fwf('../data/synthetic/gum_mw_columns.tsv',names=names)
fwf_cols.head()
col_names = fwf_cols.col_ID.values
Explanation: Read in the data
End of explanation
gum_mw_alt = pd.read_fwf('../data/synthetic/gum_mw.sam', names=col_names)
gum_mw_alt.drop(["Vamp", "Vper", "Vphase", "Vtype"], inplace=True, axis=1)
gum_mw_alt.head()
col_names
gum = gum_mw_alt
plt.figure(figsize=[5, 8])
#plt.plot(gum['V-I'], gum.Mbol, '.')
plt.plot(gum.Teff, gum.Mbol, '.')
plt.xlim(10000, 2000)
plt.ylim(20, -5)
plt.xlabel("$T_{\mathrm{eff}}$")
plt.ylabel("$M_{\mathrm{bol}}$");
Explanation: I had to modify the raw data to get it to read in conveniently. I try not to modify raw data formats (for reproducibility purposes) but there didn't seem to be a convenient way otherwise. The problem was the Vtype column is undefined for most of the file, so a fixed-width-file appears to have no column there, which screws up the last few columns. I simply labeled the first three columns as "ajun" to make it look like there was something there. So I will just drop those rows.
End of explanation
plt.figure(figsize=[8, 8])
#plt.plot(gum['V-I'], gum.Mbol, '.')
sc = plt.scatter(gum.r/1000.0, gum.Gmag, c=gum.Teff, s=20, marker='o', vmin=2000, vmax=10000, cmap="Spectral")
plt.xlabel("$d$ (kpc)")
plt.ylabel("$G$")
plt.hlines(12, 0, 10, colors = 'b', linestyles='--')
plt.colorbar(sc)
plt.ylim(20, 5)
plt.xlim(0, 10)
Explanation: These are the input stars. The absolute magnitude versus effective temperature.
End of explanation
plt.figure(figsize=[8, 8])
plt.plot(gum.pmRA, gum.pmDE, '.', alpha=0.2)
plt.xlabel("$\delta_{\mathrm{RA}}$ (mas/yr)")
plt.ylabel("$\delta_{\mathrm{DEC}}$ (mas/yr)")
Explanation: This is sort-of a Malmquist bias plot. At a given distance, you can detect the brighter stars.
End of explanation
plt.figure(figsize=[8, 8])
pm = np.sqrt(gum.pmDE**2 + gum.pmDE**2)
plt.plot(gum.r/1000.0, pm, '.', alpha=0.5)
plt.xlabel("$d$ (kpc)")
plt.ylabel("$\delta$ (mas/yr)")
Explanation: This is a typical proper motion scatter plot.
End of explanation |
12,299 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Evaluation, Cross-Validation, and Model Selection
By Heiko Strathmann - <a href="mailto
Step1: Types of splitting strategies
As said earlier Cross-validation is based upon splitting the data into multiple partitions. Shogun has various strategies for this. The base class for them is CSplittingStrategy.
K-fold cross-validation
Formally, this is achieved via partitioning a dataset $X$ of size $|X|=n$ into $k \leq n$ disjoint partitions $X_i\subseteq X$ such that $X_1 \cup X_2 \cup \dots \cup X_n = X$ and $X_i\cap X_j=\emptyset$ for all $i\neq j$. Then, the algorithm is executed on all $k$ possibilities of merging $k-1$ partitions and subsequently tested on the remaining partition. This results in $k$ performances which are evaluated in some metric of choice (Shogun support multiple ones). The procedure can be repeated (on different splits) in order to obtain less variance in the estimate. See [1] for a nice review on cross-validation using different performance measures.
Step2: Stratified cross-validation
On classificaiton data, the best choice is stratified cross-validation. This divides the data in such way that the fraction of labels in each partition is roughly the same, which reduces the variance of the performance estimate quite a bit, in particular for data with more than two classes. In Shogun this is implemented by CStratifiedCrossValidationSplitting class.
Step3: Leave One Out cross-validation
Leave One Out Cross-validation holds out one sample as the validation set. It is thus a special case of K-fold cross-validation with $k=n$ where $n$ is number of samples. It is implemented in LOOCrossValidationSplitting class.
Let us visualize the generated folds on the toy data.
Step4: Stratified splitting takes care that each fold has almost the same number of samples from each class. This is not the case with normal splitting which usually leads to imbalanced folds.
Toy example
Step5: Ok, we now have performed classification on the training data. How good did this work? We can easily do this for many different performance measures.
Step6: Note how for example error rate is 1-accuracy. All of those numbers represent the training error, i.e. the ability of the classifier to explain the given data.
Now, the training error is zero. This seems good at first. But is this setting of the parameters a good idea? No! A good performance on the training data alone does not mean anything. A simple look up table is able to produce zero error on training data. What we want is that our methods generalises the input data somehow to perform well on unseen data. We will now use cross-validation to estimate the performance on such.
We will use CStratifiedCrossValidationSplitting, which accepts a reference to the labels and the number of partitions as parameters. This instance is then passed to the class CCrossValidation, which does the estimation using the desired splitting strategy. The latter class can take all algorithms that are implemented against the CMachine interface.
Step7: Now this is incredibly bad compared to the training error. In fact, it is very close to random performance (0.5). The lesson
Step8: It is better to average a number of different runs of cross-validation in this case. A nice side effect of this is that the results can be used to estimate error intervals for a given confidence rate.
Step9: Using this machinery, it is very easy to compare multiple kernel parameters against each other to find the best one. It is even possible to compare a different kernel.
Step10: This gives a brute-force way to select paramters of any algorithm implemented under the CMachine interface. The cool thing about this is, that it is also possible to compare different model families against each other. Below, we compare a a number of regression models in Shogun on the Boston Housing dataset.
Regression problem and cross-validation
Various regression models in Shogun are now used to predict house prices using the boston housing dataset. Cross-validation is used to find best parameters and also test the performance of the models.
Step11: Let us use cross-validation to compare various values of tau paramter for ridge regression (Regression notebook). We will use MeanSquaredError as the performance metric. Note that normal splitting is used since it might be impossible to generate "good" splits using Stratified splitting in case of regression since we have continous values for labels.
Step12: A low value of error certifies a good pick for the tau paramter which should be easy to conclude from the plots. In case of Ridge Regression the value of tau i.e. the amount of regularization doesn't seem to matter but does seem to in case of Kernel Ridge Regression. One interpretation of this could be the lack of over fitting in the feature space for ridge regression and the occurence of over fitting in the new kernel space in which Kernel Ridge Regression operates. </br> Next we will compare a range of values for the width of Gaussian Kernel used in Kernel Ridge Regression
Step13: The values for the kernel parameter and tau may not be independent of each other, so the values we have may not be optimal. A brute force way to do this would be to try all the pairs of these values but it is only feasible for a low number of parameters.
Step14: Let us approximately pick the good parameters using the plots. Now that we have the best parameters, let us compare the various regression models on the data set.
Step15: Model selection using Grid Search
A standard way of selecting the best parameters of a learning algorithm is by Grid Search. This is done by an exhaustive search of a specified parameter space. CModelSelectionParameters is used to select various parameters and their ranges to be used for model selection. A tree like structure is used where the nodes can be CSGObject or the parameters to the object. The range of values to be searched for the parameters is set using build_values() method.
Step16: Next we will create CModelSelectionParameters instance with a kernel object which has to be appended the root node. The kernel object itself will be append with a kernel width parameter which is the parameter we wish to search. | Python Code:
%pylab inline
%matplotlib inline
# include all Shogun classes
from modshogun import *
# generate some ultra easy training data
gray()
n=20
title('Toy data for binary classification')
X=hstack((randn(2,n), randn(2,n)+1))
Y=hstack((-ones(n), ones(n)))
_=scatter(X[0], X[1], c=Y , s=100)
p1 = Rectangle((0, 0), 1, 1, fc="w")
p2 = Rectangle((0, 0), 1, 1, fc="k")
legend((p1, p2), ["Class 1", "Class 2"], loc=2)
# training data in Shogun representation
features=RealFeatures(X)
labels=BinaryLabels(Y)
Explanation: Evaluation, Cross-Validation, and Model Selection
By Heiko Strathmann - <a href="mailto:[email protected]">[email protected]</a> - <a href="github.com/karlnapf">github.com/karlnapf</a> - <a href="herrstrathmann.de">herrstrathmann.de</a>. Based on the model selection framework of his <a href="http://www.google-melange.com/gsoc/project/google/gsoc2011/XXX">Google summer of code 2011 project</a> | Saurabh Mahindre - <a href="https://github.com/Saurabh7">github.com/Saurabh7</a> as a part of <a href="http://www.google-melange.com/gsoc/project/details/google/gsoc2014/saurabh7/5750085036015616">Google Summer of Code 2014 project</a> mentored by - Heiko Strathmann
This notebook illustrates the evaluation of prediction algorithms in Shogun using <a href="http://en.wikipedia.org/wiki/Cross-validation_(statistics)">cross-validation</a>, and selecting their parameters using <a href="http://en.wikipedia.org/wiki/Hyperparameter_optimization">grid-search</a>. We demonstrate this for a toy example on <a href="http://en.wikipedia.org/wiki/Binary_classification">Binary Classification</a> using <a href="http://en.wikipedia.org/wiki/Support_vector_machine">Support Vector Machines</a> and also a regression problem on a real world dataset.
General Idea
Splitting Strategies
K-fold cross-validation
Stratified cross-validation
Example: Binary classification
Example: Regression
Model Selection: Grid Search
General Idea
Cross validation aims to estimate an algorithm's performance on unseen data. For example, one might be interested in the average classification accuracy of a Support Vector Machine when being applied to new data, that it was not trained on. This is important in order to compare the performance different algorithms on the same target. Most crucial is the point that the data that was used for running/training the algorithm is not used for testing. Different algorithms here also can mean different parameters of the same algorithm. Thus, cross-validation can be used to tune parameters of learning algorithms, as well as comparing different families of algorithms against each other. Cross-validation estimates are related to the marginal likelihood in Bayesian statistics in the sense that using them for selecting models avoids overfitting.
Evaluating an algorithm's performance on training data should be avoided since the learner may adjust to very specific random features of the training data which are not very important to the general relation. This is called overfitting. Maximising performance on the training examples usually results in algorithms explaining the noise in data (rather than actual patterns), which leads to bad performance on unseen data. This is one of the reasons behind splitting the data and using different splits for training and testing, which can be done using cross-validation.
Let us generate some toy data for binary classification to try cross validation on.
End of explanation
k=5
normal_split=CrossValidationSplitting(labels, k)
Explanation: Types of splitting strategies
As said earlier Cross-validation is based upon splitting the data into multiple partitions. Shogun has various strategies for this. The base class for them is CSplittingStrategy.
K-fold cross-validation
Formally, this is achieved via partitioning a dataset $X$ of size $|X|=n$ into $k \leq n$ disjoint partitions $X_i\subseteq X$ such that $X_1 \cup X_2 \cup \dots \cup X_n = X$ and $X_i\cap X_j=\emptyset$ for all $i\neq j$. Then, the algorithm is executed on all $k$ possibilities of merging $k-1$ partitions and subsequently tested on the remaining partition. This results in $k$ performances which are evaluated in some metric of choice (Shogun support multiple ones). The procedure can be repeated (on different splits) in order to obtain less variance in the estimate. See [1] for a nice review on cross-validation using different performance measures.
End of explanation
stratified_split=StratifiedCrossValidationSplitting(labels, k)
Explanation: Stratified cross-validation
On classification data, the best choice is stratified cross-validation. This divides the data in such a way that the fraction of labels in each partition is roughly the same, which reduces the variance of the performance estimate quite a bit, in particular for data with more than two classes. In Shogun this is implemented by the CStratifiedCrossValidationSplitting class.
End of explanation
split_strategies=[stratified_split, normal_split]
#code to visualize splitting
def get_folds(split, num):
split.build_subsets()
x=[]
y=[]
lab=[]
for j in range(num):
indices=split.generate_subset_indices(j)
x_=[]
y_=[]
lab_=[]
for i in range(len(indices)):
x_.append(X[0][indices[i]])
y_.append(X[1][indices[i]])
lab_.append(Y[indices[i]])
x.append(x_)
y.append(y_)
lab.append(lab_)
return x, y, lab
def plot_folds(split_strategies, num):
for i in range(len(split_strategies)):
x, y, lab=get_folds(split_strategies[i], num)
figure(figsize=(18,4))
gray()
suptitle(split_strategies[i].get_name(), fontsize=12)
for j in range(0, num):
subplot(1, num, (j+1), title='Fold %s' %(j+1))
scatter(x[j], y[j], c=lab[j], s=100)
_=plot_folds(split_strategies, 4)
Explanation: Leave One Out cross-validation
Leave One Out cross-validation holds out one sample as the validation set. It is thus a special case of K-fold cross-validation with $k=n$, where $n$ is the number of samples. It is implemented in the LOOCrossValidationSplitting class.
Let us visualize the generated folds on the toy data.
End of explanation
# define SVM with a small rbf kernel (always normalise the kernel!)
C=1
kernel=GaussianKernel(2, 0.001)
kernel.init(features, features)
kernel.set_normalizer(SqrtDiagKernelNormalizer())
classifier=LibSVM(C, kernel, labels)
# train
_=classifier.train()
Explanation: Stratified splitting takes care that each fold has almost the same number of samples from each class. This is not the case with normal splitting which usually leads to imbalanced folds.
Toy example: Binary Support Vector Classification
Following the example from above, we will tune the performance of a SVM on the binary classification problem. We will
demonstrate how to evaluate a loss function or metric on a given algorithm
then learn how to estimate this metric for the algorithm performing on unseen data
and finally use those techniques to tune the parameters to obtain the best possible results.
The involved methods are
LibSVM as the binary classification algorithm
the area under the ROC curve (AUC) as performance metric
three different kernels to compare
End of explanation
# instantiate a number of Shogun performance measures
metrics=[ROCEvaluation(), AccuracyMeasure(), ErrorRateMeasure(), F1Measure(), PrecisionMeasure(), RecallMeasure(), SpecificityMeasure()]
for metric in metrics:
print metric.get_name(), metric.evaluate(classifier.apply(features), labels)
Explanation: OK, we have now performed classification on the training data. How well did this work? We can easily check this with many different performance measures.
End of explanation
metric=AccuracyMeasure()
cross=CrossValidation(classifier, features, labels, stratified_split, metric)
# perform the cross-validation, note that this call involved a lot of computation
result=cross.evaluate()
# the result needs to be cast to CrossValidationResult
result=CrossValidationResult.obtain_from_generic(result)
# this class contains a field "mean" which contains the mean performance metric
print "Testing", metric.get_name(), result.mean
Explanation: Note how, for example, the error rate is 1-accuracy. All of these numbers represent the training error, i.e. the ability of the classifier to explain the given data.
Now, the training error is zero. This seems good at first. But is this setting of the parameters a good idea? No! A good performance on the training data alone does not mean anything. A simple look-up table is able to produce zero error on training data. What we want is for our method to generalise from the input data so that it performs well on unseen data. We will now use cross-validation to estimate the performance on such data.
We will use CStratifiedCrossValidationSplitting, which accepts a reference to the labels and the number of partitions as parameters. This instance is then passed to the class CCrossValidation, which does the estimation using the desired splitting strategy. The latter class can take all algorithms that are implemented against the CMachine interface.
End of explanation
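The stratified_split object used in the code above was created earlier in the notebook; a minimal sketch of its construction, following the constructor description just given (the choice of 5 partitions is an assumption):
# hypothetical re-construction of the splitting strategy used above
num_folds=5
stratified_split=StratifiedCrossValidationSplitting(labels, num_folds)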
print "Testing", metric.get_name(), [CrossValidationResult.obtain_from_generic(cross.evaluate()).mean for _ in range(10)]
Explanation: Now this is incredibly bad compared to the training error. In fact, it is very close to random performance (0.5). The lesson: Never judge your algorithms based on the performance on training data!
Note that for small data sizes, the cross-validation estimates are quite noisy. If we run it multiple times, we get different results.
End of explanation
# average over 25 cross-validation runs
cross.set_num_runs(25)
# perform x-validation (now even more expensive)
result=cross.evaluate()
result=CrossValidationResult.obtain_from_generic(result)
print "Testing cross-validation mean %.2f " \
% (result.mean)
Explanation: It is better to average a number of different runs of cross-validation in this case. A nice side effect of this is that the results can be used to estimate error intervals for a given confidence level.
End of explanation
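If an explicit interval is wanted, one simple, hedged option that uses no dedicated Shogun functionality is to repeat the averaged cross-validation a few times and apply a normal approximation to the resulting means (the number of repetitions and the 95% level are illustrative choices):
# hedged sketch: approximate 95% interval from repeated cross-validation estimates
means=array([CrossValidationResult.obtain_from_generic(cross.evaluate()).mean for _ in range(10)])
print "Testing %s: %.2f +/- %.2f" % (metric.get_name(), means.mean(), 1.96*means.std()/sqrt(len(means)))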
widths=2**linspace(-5,25,10)
results=zeros(len(widths))
for i in range(len(results)):
kernel.set_width(widths[i])
result=CrossValidationResult.obtain_from_generic(cross.evaluate())
results[i]=result.mean
plot(log2(widths), results, 'blue')
xlabel("log2 Kernel width")
ylabel(metric.get_name())
_=title("Accuracy for different kernel widths")
print "Best Gaussian kernel width %.2f" % widths[results.argmax()], "gives", results.max()
# compare this with a linear kernel
classifier.set_kernel(LinearKernel())
lin_k=CrossValidationResult.obtain_from_generic(cross.evaluate())
plot([log2(widths[0]), log2(widths[-1])], [lin_k.mean, lin_k.mean], 'r')
print "Linear kernel gives", lin_k.mean
_=legend(["Gaussian", "Linear"], loc="lower center")
Explanation: Using this machinery, it is very easy to compare multiple kernel parameters against each other to find the best one. It is even possible to compare a different kernel.
End of explanation
feats=RealFeatures(CSVFile('../../../data/uci/housing/fm_housing.dat'))
labels=RegressionLabels(CSVFile('../../../data/uci/housing/housing_label.dat'))
preproc=RescaleFeatures()
preproc.init(feats)
feats.add_preprocessor(preproc)
feats.apply_preprocessor(True)
#Regression models
ls=LeastSquaresRegression(feats, labels)
tau=1
rr=LinearRidgeRegression(tau, feats, labels)
width=1
tau=1
kernel=GaussianKernel(feats, feats, width)
kernel.set_normalizer(SqrtDiagKernelNormalizer())
krr=KernelRidgeRegression(tau, kernel, labels)
regression_models=[ls, rr, krr]
Explanation: This gives a brute-force way to select parameters of any algorithm implemented under the CMachine interface. The cool thing about this is that it is also possible to compare different model families against each other. Below, we compare a number of regression models in Shogun on the Boston Housing dataset.
Regression problem and cross-validation
Various regression models in Shogun are now used to predict house prices using the Boston Housing dataset. Cross-validation is used to find the best parameters and to test the performance of the models.
End of explanation
n=30
taus = logspace(-4, 1, n)
#5-fold cross-validation
k=5
split=CrossValidationSplitting(labels, k)
metric=MeanSquaredError()
cross=CrossValidation(rr, feats, labels, split, metric)
cross.set_num_runs(50)
errors=[]
for tau in taus:
#set necessary parameter
rr.set_tau(tau)
result=cross.evaluate()
result=CrossValidationResult.obtain_from_generic(result)
#Enlist mean error for all runs
errors.append(result.mean)
figure(figsize=(20,6))
suptitle("Finding best (tau) parameter using cross-validation", fontsize=12)
p=subplot(121)
title("Ridge Regression")
plot(taus, errors, linewidth=3)
p.set_xscale('log')
p.set_ylim([0, 80])
xlabel("Taus")
ylabel("Mean Squared Error")
cross=CrossValidation(krr, feats, labels, split, metric)
cross.set_num_runs(50)
errors=[]
for tau in taus:
krr.set_tau(tau)
result=cross.evaluate()
result=CrossValidationResult.obtain_from_generic(result)
#print tau, "error", result.mean
errors.append(result.mean)
p2=subplot(122)
title("Kernel Ridge regression")
plot(taus, errors, linewidth=3)
p2.set_xscale('log')
xlabel("Taus")
_=ylabel("Mean Squared Error")
Explanation: Let us use cross-validation to compare various values of the tau parameter for ridge regression (see the Regression notebook). We will use MeanSquaredError as the performance metric. Note that normal splitting is used here, since it might be impossible to generate "good" splits using stratified splitting in the case of regression, where the labels take continuous values.
End of explanation
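As a reminder of what tau controls (standard textbook formulation, not taken from this notebook): linear ridge regression minimizes a squared loss plus an $\ell_2$ penalty weighted by $\tau$,
$$\min_{\mathbf{w}} \|\mathbf{y} - \mathbf{X}\mathbf{w}\|^2 + \tau\,\|\mathbf{w}\|^2,$$
while kernel ridge regression solves the analogous problem in the kernel-induced feature space, with coefficients $\boldsymbol{\alpha} = (\mathbf{K} + \tau\mathbf{I})^{-1}\mathbf{y}$. Larger $\tau$ therefore means stronger regularization in both cases.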
n=50
widths=logspace(-2, 3, n)
krr.set_tau(0.1)
metric=MeanSquaredError()
k=5
split=CrossValidationSplitting(labels, k)
cross=CrossValidation(krr, feats, labels, split, metric)
cross.set_num_runs(10)
errors=[]
for width in widths:
kernel.set_width(width)
result=cross.evaluate()
result=CrossValidationResult.obtain_from_generic(result)
#print width, "error", result.mean
errors.append(result.mean)
figure(figsize=(15,5))
p=subplot(121)
title("Finding best width using cross-validation")
plot(widths, errors, linewidth=3)
p.set_xscale('log')
xlabel("Widths")
_=ylabel("Mean Squared Error")
Explanation: A low error indicates a good pick for the tau parameter, which should be easy to conclude from the plots. In the case of Ridge Regression the value of tau, i.e. the amount of regularization, doesn't seem to matter much, but it does in the case of Kernel Ridge Regression. One interpretation of this could be the lack of overfitting in the original feature space for Ridge Regression and the occurrence of overfitting in the kernel space in which Kernel Ridge Regression operates.
Next we will compare a range of values for the width of the Gaussian kernel used in Kernel Ridge Regression.
End of explanation
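To help interpret the width values: the Gaussian (RBF) kernel here is commonly parameterized as $k(\mathbf{x},\mathbf{x}') = \exp\left(-\|\mathbf{x}-\mathbf{x}'\|^2/\text{width}\right)$, so width plays the role of $2\sigma^2$ (this parameterization is an assumption about Shogun's convention). Very small widths give highly flexible, nearly memorizing models; very large widths make the kernel almost constant.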
n=40
taus = logspace(-3, 0, n)
widths=logspace(-1, 4, n)
cross=CrossValidation(krr, feats, labels, split, metric)
cross.set_num_runs(1)
x, y=meshgrid(taus, widths)
grid=array((ravel(x), ravel(y)))
print grid.shape
errors=[]
for i in range(0, n*n):
krr.set_tau(grid[:,i][0])
kernel.set_width(grid[:,i][1])
result=cross.evaluate()
result=CrossValidationResult.obtain_from_generic(result)
errors.append(result.mean)
errors=array(errors).reshape((n, n))
from mpl_toolkits.mplot3d import Axes3D
#taus = logspace(0.5, 1, n)
jet()
fig=figure(figsize=(15,7))
ax=subplot(121)
c=pcolor(x, y, errors)
_=contour(x, y, errors, linewidths=1, colors='black')
_=colorbar(c)
xlabel('Taus')
ylabel('Widths')
ax.set_xscale('log')
ax.set_yscale('log')
ax1=fig.add_subplot(122, projection='3d')
ax1.plot_wireframe(log10(y),log10(x), errors, linewidths=2, alpha=0.6)
ax1.view_init(30,-40)
xlabel('Taus')
ylabel('Widths')
_=ax1.set_zlabel('Error')
Explanation: The values for the kernel width and tau may not be independent of each other, so the values we found separately may not be jointly optimal. A brute-force way to handle this is to try all pairs of values, which is only feasible for a small number of parameters.
End of explanation
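Instead of only eyeballing the contour plot, the minimising pair can be read off directly; a small sketch using just the errors array and grid from the cell above (the flat ordering of errors matches the columns of grid):
# hedged sketch: locate the (tau, width) pair with the smallest mean squared error
best=errors.argmin()
print "Best tau %.4f, best width %.2f, error %.2f" % (grid[0, best], grid[1, best], errors.min())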
#use the best parameters
rr.set_tau(1)
krr.set_tau(0.05)
kernel.set_width(2)
title_='Performance on Boston Housing dataset'
print "%50s" %title_
for machine in regression_models:
metric=MeanSquaredError()
cross=CrossValidation(machine, feats, labels, split, metric)
cross.set_num_runs(25)
result=cross.evaluate()
result=CrossValidationResult.obtain_from_generic(result)
print "-"*80
print "|", "%30s" % machine.get_name(),"|", "%20s" %metric.get_name(),"|","%20s" %result.mean ,"|"
print "-"*80
Explanation: Let us pick approximately good parameters from the plots. Now that we have the best parameters, let us compare the various regression models on the dataset.
End of explanation
#Root
param_tree_root=ModelSelectionParameters()
#Parameter tau
tau=ModelSelectionParameters("tau")
param_tree_root.append_child(tau)
# besides R_LINEAR, the range types R_EXP and R_LOG are also available
min_value=0.01
max_value=1
type_=R_LINEAR
step=0.05
base=2
tau.build_values(min_value, max_value, type_, step, base)
Explanation: Model selection using Grid Search
A standard way of selecting the best parameters of a learning algorithm is grid search, i.e. an exhaustive search over a specified parameter space. CModelSelectionParameters is used to select the parameters and their ranges to be used for model selection. A tree-like structure is used, where the nodes can be CSGObjects or parameters of those objects. The range of values to be searched for a parameter is set using the build_values() method.
End of explanation
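To check that the search tree is assembled as intended, it can be printed; this assumes print_tree() is available on the root node in the Shogun version at hand (the same kind of inspection is done on the selected parameter combination further below):
param_tree_root.print_tree()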
#kernel object
param_gaussian_kernel=ModelSelectionParameters("kernel", kernel)
gaussian_kernel_width=ModelSelectionParameters("log_width")
gaussian_kernel_width.build_values(0.1, 6.0, R_LINEAR, 0.5, 2.0)
#kernel parameter
param_gaussian_kernel.append_child(gaussian_kernel_width)
param_tree_root.append_child(param_gaussian_kernel)
# cross validation instance used
cross_validation=CrossValidation(krr, feats, labels, split, metric)
cross_validation.set_num_runs(1)
# model selection instance
model_selection=GridSearchModelSelection(cross_validation, param_tree_root)
print_state=False
# TODO: enable it once crossval has been fixed
#best_parameters=model_selection.select_model(print_state)
#best_parameters.apply_to_machine(krr)
#best_parameters.print_tree()
result=cross_validation.evaluate()
result=CrossValidationResult.obtain_from_generic(result)
print 'Error with Best parameters:', result.mean
Explanation: Next we will create a CModelSelectionParameters instance with a kernel object, which has to be appended to the root node. The kernel object itself is appended with a kernel width parameter, which is the parameter we wish to search over.
End of explanation