Unnamed: 0 (int64, 0-16k) | text_prompt (stringlengths, 110-62.1k) | code_prompt (stringlengths, 37-152k)
---|---|---|
13,800 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This iPython notebook is a very basic example of how to read in and manipulate data using the GWpy software package. If you have LIGO.org authentication, instructions on how to run this iPython notebook from a LIGO data grid computing cluster can be found on this wiki.
The first step is to import the GWpy TimeSeries module and read in data. This example uses publicly available data sets, so anyone can install GWpy and run this.
Step1: For our first example, we'll plot a simple time series of the data that we read in.
Step2: If we want to study the output data in the frequency domain, we can also plot the amplitude spectral density
(ASD) using the same data object. In this example, we measure the amplitude spectral density by averaging several FFTs of the time-series with 4 seconds per FFT and 2 seconds of overlap between FFTs.
Step3: To tie it all together, we can look at this signal in the time-frequency plane using a spectrogram. Once again we set the length of our FFT and the overlap in seconds. Since spectrogram2 returns a power spectral density, we take the square root to obtain the amplitude spectral density as a function of time and frequency. We then divide the spectrogram by the median amplitude spectral density in each frequency bin to normalize the plot. | Python Code:
# %matplotlib inline is only needed in iPython notebooks for inline image rendering
# If running this code as a script, comment out or omit the %matplotlib inline line
%matplotlib inline
from gwpy.timeseries import TimeSeries
data = TimeSeries.fetch_open_data('L1', 1126259446, 1126259478)
# on a LIGO Data Grid cluster, you can use the following to grab data (leave off frametype if using NDS2):
# data = TimeSeries.get('L1:GDS-CALIB_STRAIN', 1126259446, 1126259478, frametype='L1_HOFT_C00')
Explanation: This iPython notebook is a very basic example of how to read in and manipulate data using the GWpy software package. If you have LIGO.org authentication, instructions on how to run this iPython notebook from a LIGO data grid computing cluster can be found on this wiki.
The first step is to import the GWpy TimeSeries module and read in data. This example uses publicly available data sets, so anyone can install GWpy and run this.
End of explanation
ts = data.plot() # Generate a time series plot from the data
ax = ts.gca() # Grab the current axes of the plot
ax.set_ylabel('Gravitational-wave amplitude [strain]')
ax.set_title('LIGO Livingston Observatory data')
# to save this figure:
# ts.savefig('/full/path/to/image.png')
Explanation: For our first example, we'll plot a simple time series of the data that we read in.
End of explanation
spec = data.asd(4,2) # Calculate the amplitude spectral density of the data
specfig = spec.plot() # Plot the ASD
ax = specfig.gca()
ax.set_xlabel('Frequency [Hz]')
ax.set_xlim(30, 1024)
ax.set_ylabel(r'Noise ASD [1/$\sqrt{\mathrm{Hz}}$]')
ax.set_ylim(1e-24, 1e-19)
ax.grid(True, 'both', 'both')
# to save this figure:
# spec.savefig('/full/path/to/image.png')
Explanation: If we want to study the output data in the frequency domain, we can also plot the amplitude spectral density
(ASD) using the same data object. In this example, we measure the amplitude spectral density by averaging several FFTs of the time-series with 4 seconds per FFT and 2 seconds of overlap between FFTs.
End of explanation
specgram = data.spectrogram2(fftlength=2,overlap=1.75) ** (1/2.) # Generate a spectrogram from the data
medratio = specgram.ratio('median') # Generate a normalized spectrogram
specgramfig = medratio.plot(norm='log', vmin=0.1, vmax=10) # Plot the normalized spectrogram
specgramfig.set_ylim(30,1024)
specgramfig.set_yscale('log')
specgramfig.add_colorbar(label='Amplitude relative to median',cmap='YlGnBu_r')
# to save this figure:
# specgramfig.savefig('/full/path/to/image.png')
Explanation: To tie it all together, we can look at this signal in the time-frequency plane using a spectrogram. Once again we set the length of our FFT and the overlap in seconds. Since spectrogram2 returns a power spectral density, we take the square root to obtain the amplitude spectral density as a function of time and frequency. We then divide the spectrogram by the median amplitude spectral density in each frequency bin to normalize the plot.
End of explanation |
13,801 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The next block caches metadata for retrieving Supreme Court transcripts into a CSV file
Step1: Should put a csv reading block here for faster processing | Python Code:
import csv
with open("judicialMetadata.csv", "w+") as metadata:
header = allRecentRecords[0].keys()
writer = csv.DictWriter(metadata, fieldnames=header)
writer.writerows(allRecentRecords)
Explanation: The next block caches metadata for retrieving Supreme Court transcripts into a CSV file
End of explanation
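As the notes suggest, the cached metadata can be read back from judicialMetadata.csv for faster processing. The following is a minimal sketch, not part of the original notebook; it assumes the file was written with a header row, as in the block above.
import csv
with open("judicialMetadata.csv", newline="") as metadata:
    allRecentRecords = list(csv.DictReader(metadata))
print(len(allRecentRecords), "records loaded from cache")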
import pdfquery
import requests
def getPDFTree(url, tempURL):
pdf = requests.get(url)
with open(tempURL, "wb+", buffering=0) as fp:
fp.write(pdf.content)
pdfTree = pdfquery.PDFQuery(fp)
pdfTree.load()
return pdfTree
url = "https://www.supremecourt.gov/oral_arguments/argument_transcripts/2000/00-6374.pdf"
tempURL = "temp/argument_transcripts/2004/04-603.x,l"
pdfTree = getPDFTree(url, tempURL)
pdfTree.tree.write("temp/argument_transcripts/2006/05-85.xml", pretty_print=True, encoding="utf-8")
TOContents = pdfTree.pq('LTTextLineHorizontal:contains("C O N T E N T S ")')
assert len(TOContents) > 0, "Table of contents is formatted differently"
import re
with open("temp/argument_transcripts/2004/04-603.xml") as fp:
text = fp.read()
tagRegex = r"\</?[^\<\>/]*\>"
text = re.sub(tagRegex, "", text)
text = re.sub(r'\n\s*[0-9]+', "\n", text)
text = text.replace('FOURTEENTH STREET, N.W.WASHINGTON, D.C. 20005(800) FOR DEPO ALDERSON REPORTING COMPANY, INC. 1111 FOURTEENTH STREET, N.W. SUITE 400 WASHINGTON, D.C. 20005 (202)289-2260 (800) FOR DEPO', '')
tocLength = text.find("C O N T E N T S ")
print(text[tocLength : ])
Explanation: Should put a csv reading block here for faster processing
End of explanation |
13,802 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Forecast Tutorial
This tutorial will walk through forecast data from Unidata forecast model data using the forecast.py module within pvlib.
Table of contents
Step1: GFS (0.5 deg)
Step2: GFS (0.25 deg)
Step3: NAM
Step4: NDFD
Step5: RAP
Step6: HRRR
Step7: HRRR (ESRL)
Step8: Quick power calculation | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
# built in python modules
import datetime
import os
# python add-ons
import numpy as np
import pandas as pd
# for accessing UNIDATA THREDD servers
from siphon.catalog import TDSCatalog
from siphon.ncss import NCSS
import pvlib
from pvlib.forecast import GFS, HRRR_ESRL, NAM, NDFD, HRRR, RAP
# Choose a location and time.
# Tucson, AZ
latitude = 32.2
longitude = -110.9
tz = 'America/Phoenix'
start = pd.Timestamp(datetime.date.today(), tz=tz) # today's date
end = start + pd.Timedelta(days=7) # 7 days from today
print(start, end)
Explanation: Forecast Tutorial
This tutorial will walk through forecast data from Unidata forecast model data using the forecast.py module within pvlib.
Table of contents:
1. Setup
2. Initialize and Test Each Forecast Model
This tutorial has been tested against the following package versions:
* Python 3.5.2
* IPython 5.0.0
* pandas 0.18.0
* matplotlib 1.5.1
* netcdf4 1.2.1
* siphon 0.4.0
It should work with other Python and Pandas versions. It requires pvlib >= 0.3.0 and IPython >= 3.0.
Authors:
* Derek Groenendyk (@moonraker), University of Arizona, November 2015
* Will Holmgren (@wholmgren), University of Arizona, November 2015, January 2016, April 2016, July 2016
Setup
End of explanation
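To confirm which versions are installed in your environment, a quick check like the one below can help. This is a minimal sketch, not part of the original tutorial; it assumes the listed packages are importable under their usual module names.
import matplotlib
import netCDF4
import pandas
import pvlib
import siphon
for pkg in (pvlib, pandas, matplotlib, netCDF4, siphon):
    print(pkg.__name__, pkg.__version__)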
from pvlib.forecast import GFS, HRRR_ESRL, NAM, NDFD, HRRR, RAP
# GFS model, defaults to 0.5 degree resolution
fm = GFS()
# retrieve data
data = fm.get_data(latitude, longitude, start, end)
data
data = fm.process_data(data)
data[['ghi', 'dni', 'dhi']].plot()
cs = fm.location.get_clearsky(data.index)
fig, ax = plt.subplots()
cs['ghi'].plot(ax=ax, label='ineichen')
data['ghi'].plot(ax=ax, label='gfs+larson')
ax.set_ylabel('ghi')
ax.legend()
fig, ax = plt.subplots()
cs['dni'].plot(ax=ax, label='ineichen')
data['dni'].plot(ax=ax, label='gfs+larson')
ax.set_ylabel('ghi')
ax.legend()
# retrieve data
data = fm.get_processed_data(latitude, longitude, start, end)
data
data['temp_air'].plot()
plt.ylabel('temperature (%s)' % fm.units['temp_air'])
cloud_vars = ['total_clouds', 'low_clouds', 'mid_clouds', 'high_clouds']
for varname in cloud_vars:
data[varname].plot()
plt.ylabel('Cloud cover' + ' %')
plt.xlabel('Forecast Time ('+str(data.index.tz)+')')
plt.title('GFS 0.5 deg')
plt.legend(bbox_to_anchor=(1.18,1.0))
total_cloud_cover = data['total_clouds']
total_cloud_cover.plot(color='r', linewidth=2)
plt.ylabel('Total cloud cover' + ' (%s)' % fm.units['total_clouds'])
plt.xlabel('Forecast Time ('+str(data.index.tz)+')')
plt.title('GFS 0.5 deg')
Explanation: GFS (0.5 deg)
End of explanation
# GFS model at 0.25 degree resolution
fm = GFS(resolution='quarter')
# retrieve data
data = fm.get_processed_data(latitude, longitude, start, end)
for varname in cloud_vars:
data[varname].plot(ls='-', linewidth=2)
plt.ylabel('Cloud cover' + ' %')
plt.xlabel('Forecast Time ('+str(data.index.tz)+')')
plt.title('GFS 0.25 deg')
plt.legend(bbox_to_anchor=(1.18,1.0))
data
Explanation: GFS (0.25 deg)
End of explanation
fm = NAM()
# retrieve data
data = fm.get_processed_data(latitude, longitude, start, end)
for varname in cloud_vars:
data[varname].plot(ls='-', linewidth=2)
plt.ylabel('Cloud cover' + ' %')
plt.xlabel('Forecast Time ('+str(data.index.tz)+')')
plt.title('NAM')
plt.legend(bbox_to_anchor=(1.18,1.0))
data['ghi'].plot(linewidth=2, ls='-')
plt.ylabel('GHI W/m**2')
plt.xlabel('Forecast Time ('+str(data.index.tz)+')')
data
Explanation: NAM
End of explanation
fm = NDFD()
# retrieve data
data = fm.get_processed_data(latitude, longitude, start, end)
total_cloud_cover = data['total_clouds']
temp = data['temp_air']
wind = data['wind_speed']
total_cloud_cover.plot(color='r', linewidth=2)
plt.ylabel('Total cloud cover' + ' (%s)' % fm.units['total_clouds'])
plt.xlabel('Forecast Time ('+str(data.index.tz)+')')
plt.title('NDFD')
plt.ylim(0,100)
temp.plot(color='r', linewidth=2)
plt.ylabel('Temperature' + ' (%s)' % fm.units['temp_air'])
plt.xlabel('Forecast Time ('+str(data.index.tz)+')')
wind.plot(color='r', linewidth=2)
plt.ylabel('Wind Speed' + ' (%s)' % fm.units['wind_speed'])
plt.xlabel('Forecast Time ('+str(data.index.tz)+')')
data
Explanation: NDFD
End of explanation
fm = RAP(resolution=20)
# retrieve data
data = fm.get_processed_data(latitude, longitude, start, end)
cloud_vars = ['total_clouds', 'high_clouds', 'mid_clouds', 'low_clouds']
for varname in cloud_vars:
data[varname].plot(ls='-', linewidth=2)
plt.ylabel('Cloud cover' + ' %')
plt.xlabel('Forecast Time ('+str(data.index.tz)+')')
plt.title('RAP')
plt.legend(bbox_to_anchor=(1.18,1.0))
data
Explanation: RAP
End of explanation
fm = HRRR()
data_raw = fm.get_data(latitude, longitude, start, end)
# The HRRR model pulls in u, v winds for 2 layers above ground (10 m, 80 m)
# They are labeled as _0, _1 in the raw data
data_raw
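# Sketch (not in the original notebook): the u, v components can be combined into a
# wind speed directly. The column names below are assumptions; inspect data_raw.columns
# for the actual labels of the 10 m ('_0') and 80 m ('_1') layers.
u_col, v_col = 'wind_speed_u_1', 'wind_speed_v_1'
if u_col in data_raw.columns and v_col in data_raw.columns:
    wind_80m = np.hypot(data_raw[u_col], data_raw[v_col])
    wind_80m.plot()
    plt.ylabel('80 m wind speed (m/s)')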
data = fm.get_processed_data(latitude, longitude, start, end)
cloud_vars = ['total_clouds', 'high_clouds', 'mid_clouds', 'low_clouds']
for varname in cloud_vars:
data[varname].plot(ls='-', linewidth=2)
plt.ylabel('Cloud cover' + ' %')
plt.xlabel('Forecast Time ('+str(data.index.tz)+')')
plt.title('HRRR')
plt.legend(bbox_to_anchor=(1.18,1.0))
data['temp_air'].plot(color='r', linewidth=2)
plt.ylabel('Temperature' + ' (%s)' % fm.units['temp_air'])
plt.xlabel('Forecast Time ('+str(data.index.tz)+')')
data['wind_speed'].plot(color='r', linewidth=2)
plt.ylabel('Wind Speed' + ' (%s)' % fm.units['wind_speed'])
plt.xlabel('Forecast Time ('+str(data.index.tz)+')')
data
Explanation: HRRR
End of explanation
fm = HRRR_ESRL()
# retrieve data
data = fm.get_processed_data(latitude, longitude, start, end)
cloud_vars = ['total_clouds','high_clouds','mid_clouds','low_clouds']
for varname in cloud_vars:
data[varname].plot(ls='-', linewidth=2)
plt.ylabel('Cloud cover' + ' %')
plt.xlabel('Forecast Time ('+str(data.index.tz)+')')
plt.title('HRRR_ESRL')
plt.legend(bbox_to_anchor=(1.18,1.0))
data['ghi'].plot(linewidth=2, ls='-')
plt.ylabel('GHI W/m**2')
plt.xlabel('Forecast Time ('+str(data.index.tz)+')')
Explanation: HRRR (ESRL)
End of explanation
from pvlib.pvsystem import PVSystem, retrieve_sam
from pvlib.modelchain import ModelChain
sandia_modules = retrieve_sam('SandiaMod')
sapm_inverters = retrieve_sam('cecinverter')
module = sandia_modules['Canadian_Solar_CS5P_220M___2009_']
inverter = sapm_inverters['ABB__MICRO_0_25_I_OUTD_US_208_208V__CEC_2014_']
system = PVSystem(module_parameters=module,
inverter_parameters=inverter)
# fx is a common abbreviation for forecast
fx_model = GFS()
fx_data = fx_model.get_processed_data(latitude, longitude, start, end)
# use a ModelChain object to calculate modeling intermediates
mc = ModelChain(system, fx_model.location,
orientation_strategy='south_at_latitude_tilt')
# extract relevant data for model chain
mc.run_model(fx_data.index, weather=fx_data)
mc.total_irrad.plot()
mc.temps.plot()
mc.ac.plot()
Explanation: Quick power calculation
End of explanation |
13,803 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Atmos
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Family
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required
Step9: 2.2. Canonical Horizontal Resolution
Is Required
Step10: 2.3. Range Horizontal Resolution
Is Required
Step11: 2.4. Number Of Vertical Levels
Is Required
Step12: 2.5. High Top
Is Required
Step13: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required
Step14: 3.2. Timestep Shortwave Radiative Transfer
Is Required
Step15: 3.3. Timestep Longwave Radiative Transfer
Is Required
Step16: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required
Step17: 4.2. Changes
Is Required
Step18: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required
Step19: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required
Step20: 6.2. Scheme Method
Is Required
Step21: 6.3. Scheme Order
Is Required
Step22: 6.4. Horizontal Pole
Is Required
Step23: 6.5. Grid Type
Is Required
Step24: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required
Step25: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required
Step26: 8.2. Name
Is Required
Step27: 8.3. Timestepping Type
Is Required
Step28: 8.4. Prognostic Variables
Is Required
Step29: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required
Step30: 9.2. Top Heat
Is Required
Step31: 9.3. Top Wind
Is Required
Step32: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required
Step33: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required
Step34: 11.2. Scheme Method
Is Required
Step35: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required
Step36: 12.2. Scheme Characteristics
Is Required
Step37: 12.3. Conserved Quantities
Is Required
Step38: 12.4. Conservation Method
Is Required
Step39: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required
Step40: 13.2. Scheme Characteristics
Is Required
Step41: 13.3. Scheme Staggering Type
Is Required
Step42: 13.4. Conserved Quantities
Is Required
Step43: 13.5. Conservation Method
Is Required
Step44: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required
Step45: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required
Step46: 15.2. Name
Is Required
Step47: 15.3. Spectral Integration
Is Required
Step48: 15.4. Transport Calculation
Is Required
Step49: 15.5. Spectral Intervals
Is Required
Step50: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required
Step51: 16.2. ODS
Is Required
Step52: 16.3. Other Flourinated Gases
Is Required
Step53: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required
Step54: 17.2. Physical Representation
Is Required
Step55: 17.3. Optical Methods
Is Required
Step56: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required
Step57: 18.2. Physical Representation
Is Required
Step58: 18.3. Optical Methods
Is Required
Step59: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required
Step60: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required
Step61: 20.2. Physical Representation
Is Required
Step62: 20.3. Optical Methods
Is Required
Step63: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required
Step64: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required
Step65: 22.2. Name
Is Required
Step66: 22.3. Spectral Integration
Is Required
Step67: 22.4. Transport Calculation
Is Required
Step68: 22.5. Spectral Intervals
Is Required
Step69: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required
Step70: 23.2. ODS
Is Required
Step71: 23.3. Other Flourinated Gases
Is Required
Step72: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required
Step73: 24.2. Physical Reprenstation
Is Required
Step74: 24.3. Optical Methods
Is Required
Step75: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required
Step76: 25.2. Physical Representation
Is Required
Step77: 25.3. Optical Methods
Is Required
Step78: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required
Step79: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required
Step80: 27.2. Physical Representation
Is Required
Step81: 27.3. Optical Methods
Is Required
Step82: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required
Step83: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required
Step84: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required
Step85: 30.2. Scheme Type
Is Required
Step86: 30.3. Closure Order
Is Required
Step87: 30.4. Counter Gradient
Is Required
Step88: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required
Step89: 31.2. Scheme Type
Is Required
Step90: 31.3. Scheme Method
Is Required
Step91: 31.4. Processes
Is Required
Step92: 31.5. Microphysics
Is Required
Step93: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required
Step94: 32.2. Scheme Type
Is Required
Step95: 32.3. Scheme Method
Is Required
Step96: 32.4. Processes
Is Required
Step97: 32.5. Microphysics
Is Required
Step98: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required
Step99: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required
Step100: 34.2. Hydrometeors
Is Required
Step101: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required
Step102: 35.2. Processes
Is Required
Step103: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required
Step104: 36.2. Name
Is Required
Step105: 36.3. Atmos Coupling
Is Required
Step106: 36.4. Uses Separate Treatment
Is Required
Step107: 36.5. Processes
Is Required
Step108: 36.6. Prognostic Scheme
Is Required
Step109: 36.7. Diagnostic Scheme
Is Required
Step110: 36.8. Prognostic Variables
Is Required
Step111: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required
Step112: 37.2. Cloud Inhomogeneity
Is Required
Step113: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required
Step114: 38.2. Function Name
Is Required
Step115: 38.3. Function Order
Is Required
Step116: 38.4. Convection Coupling
Is Required
Step117: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required
Step118: 39.2. Function Name
Is Required
Step119: 39.3. Function Order
Is Required
Step120: 39.4. Convection Coupling
Is Required
Step121: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required
Step122: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required
Step123: 41.2. Top Height Direction
Is Required
Step124: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required
Step125: 42.2. Number Of Grid Points
Is Required
Step126: 42.3. Number Of Sub Columns
Is Required
Step127: 42.4. Number Of Levels
Is Required
Step128: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required
Step129: 43.2. Type
Is Required
Step130: 43.3. Gas Absorption
Is Required
Step131: 43.4. Effective Radius
Is Required
Step132: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required
Step133: 44.2. Overlap
Is Required
Step134: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required
Step135: 45.2. Sponge Layer
Is Required
Step136: 45.3. Background
Is Required
Step137: 45.4. Subgrid Scale Orography
Is Required
Step138: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required
Step139: 46.2. Source Mechanisms
Is Required
Step140: 46.3. Calculation Method
Is Required
Step141: 46.4. Propagation Scheme
Is Required
Step142: 46.5. Dissipation Scheme
Is Required
Step143: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required
Step144: 47.2. Source Mechanisms
Is Required
Step145: 47.3. Calculation Method
Is Required
Step146: 47.4. Propagation Scheme
Is Required
Step147: 47.5. Dissipation Scheme
Is Required
Step148: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required
Step149: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required
Step150: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required
Step151: 50.2. Fixed Value
Is Required
Step152: 50.3. Transient Characteristics
Is Required
Step153: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required
Step154: 51.2. Fixed Reference Date
Is Required
Step155: 51.3. Transient Method
Is Required
Step156: 51.4. Computation Method
Is Required
Step157: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required
Step158: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required
Step159: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'csir-csiro', 'sandbox-2', 'atmos')
Explanation: ES-DOC CMIP6 Model Properties - Atmos
MIP Era: CMIP6
Institute: CSIR-CSIRO
Source ID: SANDBOX-2
Topic: Atmos
Sub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos.
Properties: 156 (127 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:54
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmosphere model code (CAM 4.0, ARPEGE 3.2,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of atmospheric model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the atmosphere.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 1 deg (Equator) - 0.5 deg
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.4. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on the computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 2.5. High Top
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required: TRUE Type: STRING Cardinality: 1.1
Timestep for the dynamics, e.g. 30 min.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.2. Timestep Shortwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the shortwave radiative transfer, e.g. 1.5 hours.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Timestep Longwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the longwave radiative transfer, e.g. 3 hours.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the orography.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
Explanation: 4.2. Changes
Is Required: TRUE Type: ENUM Cardinality: 1.N
If the orography type is modified describe the time adaptation changes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of grid discretisation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
# TODO - please enter value(s)
Explanation: 6.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.3. Scheme Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation function order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.4. Horizontal Pole
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal discretisation pole singularity treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.5. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type of vertical coordinate system
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere dynamical core
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the dynamical core of the model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Timestepping Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Timestepping framework type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of the model prognostic variables
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Top boundary condition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Top Heat
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary heat treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Top Wind
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary wind treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required: FALSE Type: ENUM Cardinality: 0.1
Type of lateral boundary condition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Horizontal diffusion scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal diffusion scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Tracer advection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.3. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme conserved quantities
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.4. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracer advection scheme conservation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Momentum advection schemes name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.3. Scheme Staggering Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme staggering type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.4. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme conserved quantities
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme conservation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required: TRUE Type: ENUM Cardinality: 1.N
Aerosols whose radiative effect is taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of shortwave radiation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shortwave radiation scheme spectral integration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shortwave radiation transport calculation methods
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Shortwave radiation scheme number of spectral intervals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.3. Other Flourinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other flourinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud ice crystals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud liquid droplets
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with aerosols
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with gases
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of longwave radiation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the longwave radiation scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Longwave radiation scheme spectral integration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Longwave radiation transport calculation methods
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 22.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Longwave radiation scheme number of spectral intervals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.3. Other Fluorinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud ice crystals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud liquid droplets
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with aerosols
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with gases
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere convection and turbulence
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Boundary layer turbulence scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Boundary layer turbulence scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Closure Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Boundary layer turbulence scheme closure order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.4. Counter Gradient
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Uses boundary layer turbulence scheme counter gradient
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Deep convection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of deep convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeors and water vapor from updrafts.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Shallow convection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
shallow convection scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
Explanation: 32.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
shallow convection scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of shallow convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for shallow convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of large scale cloud microphysics and precipitation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the large scale precipitation parameterisation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 34.2. Hydrometeors
Is Required: TRUE Type: ENUM Cardinality: 1.N
Precipitating hydrometeors taken into account in the large scale precipitation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the microphysics parameterisation scheme used for large scale clouds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 35.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Large scale cloud microphysics processes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the atmosphere cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
Explanation: 36.3. Atmos Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Atmosphere components that are linked to the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.4. Uses Separate Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Different cloud schemes for the different types of clouds (convective, stratiform and boundary layer)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.6. Prognostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a prognostic scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.7. Diagnostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a diagnostic scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.8. Prognostic Variables
Is Required: FALSE Type: ENUM Cardinality: 0.N
List the prognostic variables used by the cloud scheme, if applicable.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required: FALSE Type: ENUM Cardinality: 0.1
Method for taking into account overlapping of cloud layers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.2. Cloud Inhomogeneity
Is Required: FALSE Type: STRING Cardinality: 0.1
Method for taking into account cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
Explanation: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale water distribution type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 38.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale water distribution function name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 38.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale water distribution function type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
Explanation: 38.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale water distribution coupling with convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
Explanation: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale ice distribution type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 39.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale ice distribution function name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 39.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale ice distribution function type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
Explanation: 39.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale ice distribution coupling with convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of observation simulator characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator ISSCP top height estimation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41.2. Top Height Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator ISSCP top height direction
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator COSP run configuration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.2. Number Of Grid Points
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of grid points
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.3. Number Of Sub Columns
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of sub-columns used to simulate sub-grid variability
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.4. Number Of Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of levels
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Cloud simulator radar frequency (Hz)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 43.2. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator radar type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 43.3. Gas Absorption
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses gas absorption
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 43.4. Effective Radius
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses effective radius
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator lidar ice type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 44.2. Overlap
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator lidar overlap
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of gravity wave parameterisation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.2. Sponge Layer
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sponge layer in the upper levels in order to avoid gravity wave reflection at the top.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.3. Background
Is Required: TRUE Type: ENUM Cardinality: 1.1
Background wave distribution
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.4. Subgrid Scale Orography
Is Required: TRUE Type: ENUM Cardinality: 1.N
Subgrid scale orography effects taken into account.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the orographic gravity wave scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave source mechanisms
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave calculation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave propagation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave dissipation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the non-orographic gravity wave scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave source mechanisms
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
Explanation: 47.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave calculation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave propagation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave dissipation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of solar insolation of the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required: TRUE Type: ENUM Cardinality: 1.N
Pathways for the solar forcing of the atmosphere model domain
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
Explanation: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the solar constant.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 50.2. Fixed Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If the solar constant is fixed, enter the value of the solar constant (W m-2).
End of explanation
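Purely as an illustration of how a FLOAT property like this one is filled in (the number below is a typical total solar irradiance, not a value recorded for any particular model):
# Hypothetical illustration only -- substitute the value actually used by your model.
# DOC.set_value(1361.0)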
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 50.3. Transient Characteristics
Is Required: TRUE Type: STRING Cardinality: 1.1
Solar constant transient characteristics (W m-2)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
Explanation: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of orbital parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 51.2. Fixed Reference Date
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Reference date for fixed orbital parameters (yyyy)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 51.3. Transient Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Description of transient orbital parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 51.4. Computation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used for computing orbital parameters.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does top of atmosphere insolation impact on stratospheric ozone?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the implementation of volcanic effects in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How volcanic effects are modeled in the atmosphere.
End of explanation |
13,804 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to Hardware Design
This code works through the hardware design process with the
audience of software developers more in mind. We start with the simple
problem of designing a fibonacci sequence calculator (http://oeis.org/A000045).
Step1: A normal old python function to return the Nth fibonacci number.
Iterative implementation of fibonacci, just iteratively adds a and b to
calculate the nth number in the sequence.
>> [software_fibonacci(x) for x in range(10)]
[0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
Step2: Attempt 1
Let's convert this into some hardware that computes the same thing. Our first go will be to just replace the 0 and 1 with WireVectors to see
what happens.
Step3: The above looks really nice but does not really represent a hardware implementation
of fibonacci.
Let's reason through the code, line by line, to figure out what it would actually build.
a = pyrtl.Const(0)
This makes a wirevector of bitwidth=1 that is driven by a zero. Thus a is a wirevector. Seems good.
b = pyrtl.Const(1)
Just like above, b is a wirevector driven by 1
for i in range(n)
Step4: This is looking much better.
Two registers, a and b store the values from which we
can compute the series.
The line a.next <<= b means that the value of a in the next
cycle should simply be b from the current cycle.
The line b.next <<= a + b says
to build an adder, with inputs of a and b from the current cycle and assign the value
to b in the next cycle.
A visual representation of the hardware built is as such
Step5: Attempt 4
This is now far enough along that we can simulate the design and see what happens... | Python Code:
import pyrtl
Explanation: Introduction to Hardware Design
This code works through the hardware design process with the
audience of software developers more in mind. We start with the simple
problem of designing a fibonacci sequence calculator (http://oeis.org/A000045).
End of explanation
def software_fibonacci(n):
a, b = 0, 1
for i in range(n):
a, b = b, a + b
return a
Explanation: A normal old python function to return the Nth fibonacci number.
Iterative implementation of fibonacci, just iteratively adds a and b to
calculate the nth number in the sequence.
>> [software_fibonacci(x) for x in range(10)]
[0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
End of explanation
def attempt1_hardware_fibonacci(n, bitwidth):
a = pyrtl.Const(0)
b = pyrtl.Const(1)
for i in range(n):
a, b = b, a + b
return a
Explanation: Attempt 1
Let's convert this into some hardware that computes the same thing. Our first go will be to just replace the 0 and 1 with WireVectors to see
what happens.
End of explanation
def attempt2_hardware_fibonacci(n, bitwidth):
a = pyrtl.Register(bitwidth, 'a')
b = pyrtl.Register(bitwidth, 'b')
a.next <<= b
b.next <<= a + b
return a
Explanation: The above looks really nice but does not really represent a hardware implementation
of fibonacci.
Let's reason through the code, line by line, to figure out what it would actually build.
a = pyrtl.Const(0)
This makes a wirevector of bitwidth=1 that is driven by a zero. Thus a is a wirevector. Seems good.
b = pyrtl.Const(1)
Just like above, b is a wirevector driven by 1
for i in range(n):
Okay, here is where things start to go off the rails a bit. This says to perform the following code 'n' times, but the value 'n' is passed as an input and is not something that is evaluated in the hardware; it is evaluated when you run the PyRTL program, which generates (or more specifically elaborates) the hardware. Thus the hardware we are building will have the value of 'n' built into it, and 'n' won't actually be a run-time parameter. Loops are really useful for building large repetitive hardware structures, but they CAN'T be used to represent hardware that should do a computation iteratively. Instead we are going to need to use some registers to build a state machine.
a, b = b, a + b
Let's break this apart. In the first cycle b is Const(1) and (a + b) builds an adder with a (Const(0)) and b (Const(1)) as inputs. Thus (b, a + b) in the first iteration is ( Const(1), result_of_adding( Const(0), Const(1) ) ). At the end of the first iteration a and b refer to those two constant values. In each following iteration more adders are built and the names a and b are bound to larger and larger trees of adders, but all the inputs are constants!
return a
The final thing that is returned then is the last output from this tree of adders which all have Consts as inputs. Thus this hardware is hard-wired to find only and exactly the value of fibonacci of the value N specified at design time! Probably not what you are intending.
Attempt 2
Let's try a different approach. Let's specify two registers ("a" and "b") and then we can update those values as we iteratively compute fibonacci of N cycle by cycle.
End of explanation
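One optional way to see this for yourself (a small added sketch, not part of the original notebook) is to elaborate attempt 1 and print the working block; the netlist it builds is just constants feeding a tree of adders, with no registers and no inputs.
# Hedged sketch: inspect what attempt 1 actually elaborates. Assumes a fresh design.
pyrtl.reset_working_block()
result = attempt1_hardware_fibonacci(3, bitwidth=8)   # note: 'bitwidth' is unused by attempt 1
out = pyrtl.Output(8, 'out')
out <<= result
print(pyrtl.working_block())   # only constants and adders (plus output wiring) -- the value n=3 is baked in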
def attempt3_hardware_fibonacci(n, bitwidth):
a = pyrtl.Register(bitwidth, 'a')
b = pyrtl.Register(bitwidth, 'b')
i = pyrtl.Register(bitwidth, 'i')
i.next <<= i + 1
a.next <<= b
b.next <<= a + b
return a, i == n
Explanation: This is looking much better.
Two registers, a and b store the values from which we
can compute the series.
The line a.next <<= b means that the value of a in the next
cycle should simply be b from the current cycle.
The line b.next <<= a + b says
to build an adder, with inputs of a and b from the current cycle and assign the value
to b in the next cycle.
A visual representation of the hardware built is as such:
+-----+ +---------+
| | | |
+===V==+ | +===V==+ |
| | | | | |
| a | | | b | |
| | | | | |
+===V==+ | +==V===+ |
| | | |
| +-----+ |
| | |
+===V===========V==+ |
\ adder / |
+==============+ |
| |
+---------------+
Note that in the picture the registers a and b each have a wirevector which is
the current value (shown flowing out of the bottom of the register) and an input
which is giving the value that should be the value of the register in the following
cycle (shown flowing into the top of the register) which are a and a.next respectively.
When we say return a what we are returning is a reference to the register a in
the picture above.
Attempt 3
Of course one problem is that we don't know when we are done! How do we know we
reached the "nth" number in the sequence? Well, we need to add a register to
count up and see if we are done.
This is very similar to the example before, except that now we have a register "i"
which keeps track of the iteration that we are on (i.next <<= i + 1).
The function now returns two values, a reference to the register "a" and a reference to a single
bit that tells us if we are done. That bit is calculated by comparing "i"
to a wirevector "n" that is passed in to see if they are the same.
End of explanation
def attempt4_hardware_fibonacci(n, req, bitwidth):
a = pyrtl.Register(bitwidth, 'a')
b = pyrtl.Register(bitwidth, 'b')
i = pyrtl.Register(bitwidth, 'i')
local_n = pyrtl.Register(bitwidth, 'local_n')
done = pyrtl.WireVector(bitwidth=1, name='done')
with pyrtl.conditional_assignment:
with req:
local_n.next |= n
i.next |= 0
a.next |= 0
b.next |= 1
with pyrtl.otherwise:
i.next |= i + 1
a.next |= b
b.next |= a + b
done <<= i == local_n
return a, done
Explanation: Attempt 4
This is now far enough along that we can simulate the design and see what happens...
End of explanation |
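To follow up on "simulate the design and see what happens", here is a minimal simulation sketch (added here, not part of the original notebook). It assumes 'n' and 'req' are declared as pyrtl.Input wires before attempt 4 is elaborated, and the output names below are arbitrary choices.
# Hedged sketch: drive attempt 4 with PyRTL's simulator. All names below are illustrative.
pyrtl.reset_working_block()                      # start from a clean design
bitwidth = 8
n = pyrtl.Input(bitwidth, 'n')                   # which fibonacci number we want
req = pyrtl.Input(1, 'req')                      # pulse high for one cycle to (re)start
fib, done = attempt4_hardware_fibonacci(n, req, bitwidth)
fib_out = pyrtl.Output(bitwidth, 'fib_out')
done_out = pyrtl.Output(1, 'done_out')
fib_out <<= fib
done_out <<= done
sim_trace = pyrtl.SimulationTrace()
sim = pyrtl.Simulation(tracer=sim_trace)
sim.step({'n': 7, 'req': 1})                     # request fibonacci(7)
for _ in range(10):
    sim.step({'n': 0, 'req': 0})                 # let the state machine run
sim_trace.render_trace()                         # registers a, b, i and the done bit, cycle by cycle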
13,805 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sum of random variables
Step1: Sum of two discrete random variables
Step2: Different parameter $p$ (TODO)
TODO
Step3: Sum of two random variables following a Poisson distribution
If $X_1 \sim \mathcal{P}(\lambda_1)$ and $X_2 \sim \mathcal{P}(\lambda_2)$ are independent, then $X_1 + X_2 \sim \mathcal{P}(\lambda_1 + \lambda_2)$.
C.f. https
Step4: Sum of two random variables following a normal distribution (convolution of normal distributions)
The convolution (sum) of two independent normal distributions is itself a normal distribution.
Let
$X_1 \sim \mathcal{N}(\mu_1, \sigma_1)$
and
$X_2 \sim \mathcal{N}(\mu_2, \sigma_2)$ be two independent normally distributed random variables.
$X_1 + X_2 \sim \mathcal{N}\left(\mu_1 + \mu_2, \sqrt{\sigma_1^2 + \sigma_2^2}\right)$.
Step5: Sum of a Poisson random variable (discrete) and a normal random variable (continuous)
Let
$X \sim \mathcal{P}(\lambda)$,
$Y \sim \mathcal{N}(\mu, \sigma)$ be two independent random variables following a Poisson distribution and a normal distribution respectively,
and let $Z$ be the sum of these two random variables. | Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import scipy.stats
Explanation: Sum of random variables
End of explanation
p = 0.5
n1 = 5
n2 = 8
# Empirical distribution
num_samples = 1000000
x1 = np.random.binomial(n=n1, p=p, size=num_samples)
x2 = np.random.binomial(n=n2, p=p, size=num_samples)
x3 = np.random.binomial(n=n1+n2, p=p, size=num_samples)
x1x2 = x1 + x2
# Probability mass function
xmin = 0
xmax = n1 + n2
x = np.arange(xmin, xmax, 1)
dist1 = scipy.stats.binom(n=n1, p=p)
dist2 = scipy.stats.binom(n=n2, p=p)
dist3 = scipy.stats.binom(n=n1+n2, p=p)
y1 = dist1.pmf(x)
y2 = dist2.pmf(x)
y3 = dist3.pmf(x)
# Plots
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(12, 12))
ax.hist(x1, bins=list(range(n1+2)), label=r"$X1 \sim \mathcal{B}(n_1, p)$", alpha=0.5, normed=True, color="blue")
ax.hist(x2, bins=list(range(n2+2)), label=r"$X2 \sim \mathcal{B}(n_2, p)$", alpha=0.5, normed=True, color="red")
ax.hist(x3, bins=list(range(n1+n2+2)), label=r"$X3 \sim \mathcal{B}(n_1 + n_2, p)$", alpha=0.5, normed=True, color="green")
ax.hist(x1x2, bins=list(range(n1+n2+2)), label=r"$X1 + X2$", alpha=0.5, normed=True, color="black", histtype="step", linewidth=2)
ax.plot(x, y1, 'b.', label=r"PMF $\mathcal{B}(n_1, p)$")
ax.plot(x, y2, 'r.', label=r"PMF $\mathcal{B}(n_2, p)$")
ax.plot(x, y3, 'g.', label=r"PMF $\mathcal{B}(n_1+n_2, p)$")
ax.legend(prop={'size': 18}, loc='best', fancybox=True, framealpha=0.5);
Explanation: Sum of two discrete random variables: the general case
Let $X$ and $Y$ be two discrete random variables with respective distributions $\{(x_i,p_i),\ i \in I\}$ and $\{(y_j,q_j),\ j \in J\}$.
The random variable $Z=X+Y$ is also a discrete random variable; its distribution is defined by the set of possible values $\{x_i + y_j,\ i \in I, j \in J\}$ and by the associated probabilities:
$$
P(Z=z_k) = \sum_{\{(i,j)\,:\,x_i+y_j=z_k\}} P(X=x_i, Y=y_j)
$$
which requires knowing the joint distribution of the pair $(X,Y)$.
TODO: example
Sum of two independent discrete random variables
Let $X$ and $Y$ be two independent discrete random variables with respective distributions $\{(x_i,p_i),\ i \in I\}$ and $\{(y_j,q_j),\ j \in J\}$.
The random variable $Z=X+Y$ is also a discrete random variable; its distribution is defined by the set of possible values $\{x_i + y_j,\ i \in I, j \in J\}$ and by the associated probabilities:
$$
\begin{eqnarray}
P(Z=z_k) & = & \sum_{i \in I} P(X=x_i)\,P(Y=z_k-x_i) \\
& = & \sum_{j \in J} P(Y=y_j)\,P(X=z_k-y_j)
\end{eqnarray}
$$
TODO: example
Sum of two Bernoulli random variables
The sum of two Bernoulli random variables is just a special case of the next section (sum of two binomial random variables).
Let $X_1 \sim \mathcal{B}(1, p_1)$ and $X_2 \sim \mathcal{B}(1, p_2)$ be two independent Bernoulli random variables.
If $p_1 = p_2$ then the sum of these two random variables follows a binomial distribution with parameters $n = 1 + 1 = 2$ and $p$: $X_1 + X_2 \sim \mathcal{B}(2, p)$. Intuitive example: picture a Galton board with $1$ row to which a second row is added...
If $p_1 \neq p_2$ then the sum of these two random variables no longer follows a binomial distribution but a Poisson binomial distribution (the discrete distribution of a sum of independent Bernoulli trials with different parameters $p$).
C.f. https://math.stackexchange.com/questions/1153576/addition-of-two-binomial-distribution.
Sum of two binomial random variables
Let $X_1 \sim \mathcal{B}(n_1, p_1)$ and $X_2 \sim \mathcal{B}(n_2, p_2)$ be two independent binomial random variables.
If $p_1 = p_2$ then the sum of these two random variables follows a binomial distribution with parameters $n = n_1 + n_2$ and $p$: $X_1 + X_2 \sim \mathcal{B}(n_1 + n_2, p)$. Intuitive example: picture a Galton board with $n_1$ rows to which $n_2$ rows are added:
* $X_1$ gives the number of "successes" over $n_1$ Bernoulli trials $\mathcal{B}(p)$, e.g. the number of times the ball went right in a Galton board with $n_1$ rows.
* $X_2$ gives the number of "successes" over $n_2$ Bernoulli trials $\mathcal{B}(p)$, e.g. the number of times the ball went right in a Galton board with $n_2$ rows.
* Hence the sum of $X_1$ and $X_2$ gives the number of "successes" over $n_1 + n_2$ Bernoulli trials $\mathcal{B}(p)$, e.g. the number of times the ball went right in a Galton board with $n_1 + n_2$ rows.
If $p_1 \neq p_2$ then the sum of these two random variables no longer follows a binomial distribution but a Poisson binomial distribution (the discrete distribution of a sum of independent Bernoulli trials with different parameters $p$).
C.f. https://math.stackexchange.com/questions/1153576/addition-of-two-binomial-distribution.
Identical parameter $p$
$X_1 \sim \mathcal{B}(n_1, p)$
$X_2 \sim \mathcal{B}(n_2, p)$
$X_1 + X_2 \sim \mathcal{B}(n_1 + n_2, p)$ if $X_1$ and $X_2$ are independent.
End of explanation
p1 = 0.5
p2 = 0.8
n = 10
# Empirical distribution
num_samples = 1000000
x1 = np.random.binomial(n=n, p=p1, size=num_samples)
x2 = np.random.binomial(n=n, p=p2, size=num_samples)
x3 = np.random.binomial(n=2*n, p=p1*p2, size=num_samples)
x1x2 = x1 + x2
# Probability mass function
xmin = 0
xmax = 2 * n
x = np.arange(xmin, xmax, 1)
dist1 = scipy.stats.binom(n=n, p=p1)
dist2 = scipy.stats.binom(n=n, p=p2)
y1 = dist1.pmf(x)
y2 = dist2.pmf(x)
#y3 = (p1 * e^{it} + 1. − p1)**n * (p2 * e^{it} + 1. − p2)**n
# Plots
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(12, 12))
ax.hist(x1, bins=list(range(n+2)), label=r"$X1 \sim \mathcal{B}(n, p_1)$", alpha=0.5, normed=True, color="blue")
ax.hist(x2, bins=list(range(n+2)), label=r"$X2 \sim \mathcal{B}(n, p_2)$", alpha=0.5, normed=True, color="red")
ax.hist(x1x2, bins=list(range(2*n+2)), label=r"$X1 + X2$", alpha=0.5, normed=True, color="black", histtype="step", linewidth=2)
ax.plot(x, y1, 'b.', label=r"PMF $\mathcal{B}(n, p_1)$")
ax.plot(x, y2, 'r.', label=r"PMF $\mathcal{B}(n, p_2)$")
ax.legend(prop={'size': 18}, loc='best', fancybox=True, framealpha=0.5)
print("E(X_1 + X_2) = ", x1x2.mean())
print("n p_1 + n p_2 = ", n * p1 + n * p2)
print()
print("Var(X_1 + X_2) = ", x1x2.var())
print("n p_1 (1 − p_1) + n p_2 (1 − p_2) =", n * p1 * (1-p1) + n * p2 * (1-p2))
Explanation: Different parameter $p$ (TODO)
TODO:
$X_1 \sim \mathcal{B}(n, p_1)$
$X_2 \sim \mathcal{B}(n, p_2)$
$X_1 + X_2 \sim \dots$ (a Poisson binomial distribution, not a binomial one) if $X_1$ and $X_2$ are independent.
$E(X_1 + X_2) = n p_1 + n p_2$
$Var(X_1 + X_2) = n p_1 (1 - p_1) + n p_2 (1 - p_2)$
The characteristic function is $(p_1 e^{it} + 1 - p_1)^n (p_2 e^{it} + 1 - p_2)^n$
End of explanation
lambda1 = 3
lambda2 = 8
# Empirical distribution
num_samples = 1000000
x1 = np.random.poisson(lam=lambda1, size=num_samples)
x2 = np.random.poisson(lam=lambda2, size=num_samples)
x3 = np.random.poisson(lam=lambda1+lambda2, size=num_samples)
x1x2 = x1 + x2
# Probability mass function
xmin = 0
xmax = 30 # TODO
x = np.arange(xmin, xmax, 1)
dist1 = scipy.stats.poisson(lambda1)
dist2 = scipy.stats.poisson(lambda2)
dist3 = scipy.stats.poisson(lambda1+lambda2)
y1 = dist1.pmf(x)
y2 = dist2.pmf(x)
y3 = dist3.pmf(x)
# Plots
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(12, 12))
ax.hist(x1, bins=list(range(xmax+2)), label=r"$X1 \sim \mathcal{P}(\lambda_1)$", alpha=0.5, normed=True, color="blue")
ax.hist(x2, bins=list(range(xmax+2)), label=r"$X2 \sim \mathcal{P}(\lambda_2)$", alpha=0.5, normed=True, color="red")
ax.hist(x3, bins=list(range(xmax+2)), label=r"$X3 \sim \mathcal{P}(\lambda_1 + \lambda_2)$", alpha=0.5, normed=True, color="green")
ax.hist(x1x2, bins=list(range(xmax+2)), label=r"$X1 + X2$", alpha=0.5, normed=True, color="black", histtype="step", linewidth=2)
ax.plot(x, y1, 'b.', label=r"PMF $\mathcal{P}(\lambda_1)$")
ax.plot(x, y2, 'r.', label=r"PMF $\mathcal{P}(\lambda_2)$")
ax.plot(x, y3, 'g.', label=r"PMF $\mathcal{P}(\lambda_1 + \lambda_2)$")
ax.legend(prop={'size': 18}, loc='best', fancybox=True, framealpha=0.5);
Explanation: Sum of two random variables following a Poisson distribution
If $X_1 \sim \mathcal{P}(\lambda_1)$ and $X_2 \sim \mathcal{P}(\lambda_2)$ are independent, then $X_1 + X_2 \sim \mathcal{P}(\lambda_1 + \lambda_2)$.
C.f. https://fr.wikipedia.org/wiki/Loi_de_Poisson#Stabilit.C3.A9_de_la_loi_de_Poisson_par_la_somme.
End of explanation
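As a quick numerical cross-check (a sketch added here, not part of the original notebook), the two Poisson PMFs can be truncated and convolved; the result matches $\mathcal{P}(\lambda_1 + \lambda_2)$ up to negligible truncation error.
# Sketch: discrete convolution of truncated Poisson PMFs vs. Poisson(lambda1 + lambda2).
k = np.arange(60)   # generous truncation for lambda1 + lambda2 = 11
conv = np.convolve(scipy.stats.poisson(3).pmf(k), scipy.stats.poisson(8).pmf(k))[:60]
direct = scipy.stats.poisson(3 + 8).pmf(k)
print(np.allclose(conv, direct))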
mu_1 = 1
mu_2 = 15
sigma_1 =2.5
sigma_2 = 1.5
# Empirical distribution
num_samples = 1000000
x1 = np.random.normal(loc=mu_1, scale=sigma_1, size=num_samples)
x2 = np.random.normal(loc=mu_2, scale=sigma_2, size=num_samples)
x3 = np.random.normal(loc=mu_1+mu_2, scale=math.sqrt(sigma_1**2 + sigma_2**2), size=num_samples)
x1x2 = x1 + x2
# Probability mass function
xmin = -10
xmax = 30
x = np.arange(xmin, xmax, 0.01)
dist1 = scipy.stats.norm(loc=mu_1, scale=sigma_1)
dist2 = scipy.stats.norm(loc=mu_2, scale=sigma_2)
dist3 = scipy.stats.norm(loc=mu_1+mu_2, scale=math.sqrt(sigma_1**2 + sigma_2**2))
y1 = dist1.pdf(x)
y2 = dist2.pdf(x)
y3 = dist3.pdf(x)
# Plots
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(12, 12))
bins = np.arange(xmin, xmax, 0.5)
ax.hist(x1, bins=bins, label=r"$X1 \sim \mathcal{N}(\mu_1, \sigma_1)$", alpha=0.5, normed=True, color="blue")
ax.hist(x2, bins=bins, label=r"$X2 \sim \mathcal{N}(\mu_2, \sigma_2)$", alpha=0.5, normed=True, color="red")
ax.hist(x3, bins=bins, label=r"$X3 \sim \mathcal{N}(\mu_1 + \mu_2, \sqrt{\sigma_1^2 + \sigma_2^2})$", alpha=0.5, normed=True, color="green")
ax.hist(x1x2, bins=bins, label=r"$X1 + X2$", alpha=0.5, normed=True, color="black", histtype="step", linewidth=2)
ax.plot(x, y1, 'b', label=r"PDF $\mathcal{N}(\mu_1, \sigma_1)$")
ax.plot(x, y2, 'r', label=r"PDF $\mathcal{N}(\mu_2, \sigma_2)$")
ax.plot(x, y3, 'g', label=r"PDF $\mathcal{N}(\mu_1 + \mu_2, \sqrt{\sigma_1^2 + \sigma_2^2})$")
ax.legend(prop={'size': 18}, loc='best', fancybox=True, framealpha=0.5);
Explanation: Sum of two random variables following a normal distribution (convolution of normal distributions)
The convolution (sum) of two independent normal distributions is again a normal distribution.
Let
$X_1 \sim \mathcal{N}(\mu_1, \sigma_1)$
and
$X_2 \sim \mathcal{N}(\mu_2, \sigma_2)$ be two independent random variables following a normal distribution.
$X_1 + X_2 \sim \mathcal{N}\left(\mu_1 + \mu_2, \sqrt{\sigma_1^2 + \sigma_2^2}\right)$.
End of explanation
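A stronger check than eyeballing the histogram is a Kolmogorov-Smirnov test of the simulated sum against the analytic normal distribution. This cell is a sketch added to the original notebook; it reuses the variables defined in the cell above.
# Sketch: KS test of x1x2 against N(mu_1 + mu_2, sqrt(sigma_1^2 + sigma_2^2)).
dist_sum = scipy.stats.norm(loc=mu_1 + mu_2, scale=(sigma_1**2 + sigma_2**2) ** 0.5)
statistic, pvalue = scipy.stats.kstest(x1x2, dist_sum.cdf)
print(statistic, pvalue)   # the statistic should be tiny: no systematic deviation from the analytic normal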
k = 2 # lambda
mu = 5
sigma = 0.5
# Empirical distribution
num_samples = 10000000
x1 = np.random.poisson(lam=k, size=num_samples)
x2 = np.random.normal(loc=mu, scale=sigma, size=num_samples)
x1x2 = x1 + x2
# Probability mass function
dist1 = scipy.stats.poisson(k)
dist2 = scipy.stats.norm(loc=mu, scale=sigma)
xmin = -30
xmax = 30
x_poisson = np.arange(0, xmax, 1)
x_normal = np.arange(xmin, xmax, 0.01)
y1 = dist1.pmf(x_poisson)
y2 = dist2.pdf(x_normal)
# Plots
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(12, 12))
bins1 = np.arange(xmin, xmax, 1)
bins2 = np.arange(xmin, xmax, 0.1)
ax.hist(x1, bins=list(range(xmax+2)), label=r"$X1 \sim \mathcal{P}(\lambda)$", alpha=0.5, normed=True, color="blue")
ax.hist(x2, bins=bins2, label=r"$X2 \sim \mathcal{N}(\mu, \sigma)$", alpha=0.5, normed=True, color="red")
ax.hist(x1x2, bins=bins2, label=r"$X1 + X2$", alpha=0.5, normed=True, color="black", histtype="step", linewidth=2)
ax.plot(x_poisson, y1, 'b.', label=r"PMF $\mathcal{P}(\lambda)$")
ax.plot(x_normal, y2, 'r', label=r"PDF $\mathcal{N}(\mu, \sigma)$")
ax.legend(prop={'size': 18}, loc='best', fancybox=True, framealpha=0.5)
class PoissonGaussian:
def __init__(self, lambda_, mu, sigma):
self.lambda_ = lambda_
self.mu = mu
self.sigma = sigma
def pdf(self, x):
pdf = 0.
norm_dist = scipy.stats.norm(self.mu, self.sigma)
poisson_dist = scipy.stats.poisson(self.lambda_)
x_poisson = 0
while poisson_dist.cdf(x_poisson) < 0.999: # iterate over the X r.v. (Poisson distribution)
pdf += poisson_dist.pmf(x_poisson) * norm_dist.pdf(x-x_poisson)
x_poisson += 1
return pdf
dist = PoissonGaussian(k, mu, sigma)
y4 = np.array([dist.pdf(x) for x in x_normal])
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(12, 12))
ax.hist(x1x2, bins=bins2, label=r"$X1 + X2$", alpha=0.5, normed=True, color="black", histtype="step", linewidth=2)
#ax.plot(x_normal, y4_cum, 'g'); # Print the CDF
ax.plot(x_normal, y4, 'g'); # Print the PDF
# Approximation by a Normal distribution
print("X1+X2 mean =", x1x2.mean())
print("mu + lambda =", mu + k)
print()
print("X1+X2 std =", x1x2.std())
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(12, 12))
ax.hist(x1x2, bins=bins2, label=r"$X1 + X2$", alpha=0.5, normed=True, color="black", histtype="step", linewidth=2)
dist3 = scipy.stats.norm(loc=x1x2.mean(), scale=x1x2.std())
y3 = dist3.pdf(x_normal)
ax.plot(x_normal, y3, 'g');
#np.diff(y4_cum) / (x_normal[1] - x_normal[0])
Explanation: Sum of a Poisson (discrete) and a normal (continuous) random variable
Let
$X \sim \mathcal{P}(\lambda)$ and
$Y \sim \mathcal{N}(\mu, \sigma)$ be two independent random variables following a Poisson and a normal distribution respectively,
and let $Z$ be the sum of these two random variables:
$Z = X + Y$.
$Z$ does not follow any standard named distribution; its density is the convolution of the Poisson PMF with the normal PDF:
$$PDF_Z(z) = \sum_{x=0}^{\infty} PMF_X(x) \, PDF_Y(z-x)$$
$E(Z) = \lambda + \mu$
$V(Z) = \lambda + \sigma^2$
If X ~ Poisson(lambda), Y ~ N(mu, sigma^2), X, Y independent and Z = X + Y, then the cdf of Z is given by
$$P(Z \leq z) = \sum_{k=0}^{\infty} P(X=k) \, P(Y \leq z-k)$$
with
$$P(X=k) = \frac{\lambda^k}{k!} e^{-\lambda}$$
and
$$P(Y \leq z-k) = \frac{1}{2} \left(1 + \operatorname{erf}\left(\frac{z - k - \mu}{\sqrt{2 \sigma^2}}\right)\right)$$ (erf: error function)
If needed, you can get the pdf of Z by differentiating the sum with respect to z.
https://math.stackexchange.com/questions/455624/sum-of-poisson-and-gaussian-random-variable
https://fr.mathworks.com/matlabcentral/answers/287934-adding-two-different-distributions-example-gaussian-and-poisson-distribution?requestedDomain=www.mathworks.com
Off topic, but interesting nonetheless:
* https://stats.stackexchange.com/questions/86402/variance-of-the-sum-of-a-poisson-distributed-random-number-of-normally-distribu
* https://en.wikipedia.org/wiki/Compound_Poisson_distribution
End of explanation |
13,806 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Interact Exercise 6
Imports
Put the standard imports for Matplotlib, Numpy and the IPython widgets in the following cell.
Step1: Exploring the Fermi distribution
In quantum statistics, the Fermi-Dirac distribution is related to the probability that a particle will be in a quantum state with energy $\epsilon$. The equation for the distribution $F(\epsilon)$ is
Step3: In this equation
Step4: Write a function plot_fermidist(mu, kT) that plots the Fermi distribution $F(\epsilon)$ as a function of $\epsilon$ as a line plot for the parameters mu and kT.
Use energies over the range $[0,10.0]$ and a suitable number of points.
Choose an appropriate x and y limit for your visualization.
Label your x and y axis and the overall visualization.
Customize your plot in 3 other ways to make it effective and beautiful.
Step5: Use interact with plot_fermidist to explore the distribution | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from IPython.display import Image
from ipywidgets import interact, interactive, fixed  # IPython.html.widgets was removed; ipywidgets provides these now
Explanation: Interact Exercise 6
Imports
Put the standard imports for Matplotlib, Numpy and the IPython widgets in the following cell.
End of explanation
Image('fermidist.png')
Explanation: Exploring the Fermi distribution
In quantum statistics, the Fermi-Dirac distribution is related to the probability that a particle will be in a quantum state with energy $\epsilon$. The equation for the distribution $F(\epsilon)$ is:
End of explanation
def fermidist(energy, mu, kT):
    """Compute the Fermi distribution at energy, mu and kT."""
e=energy
m=mu
t=kT
f=1/(np.exp((e-m)/t)+1)
return f
assert np.allclose(fermidist(0.5, 1.0, 10.0), 0.51249739648421033)
assert np.allclose(fermidist(np.linspace(0.0,1.0,10), 1.0, 10.0),
np.array([ 0.52497919, 0.5222076 , 0.51943465, 0.5166605 , 0.51388532,
0.51110928, 0.50833256, 0.50555533, 0.50277775, 0.5 ]))
Explanation: In this equation:
$\epsilon$ is the single particle energy.
$\mu$ is the chemical potential, which is related to the total number of particles.
$k$ is the Boltzmann constant.
$T$ is the temperature in Kelvin.
In the cell below, typeset this equation using LaTeX:
\begin{equation}
F(\epsilon)=\frac{1}{e^{(\epsilon-\mu)/kT}+1}
\end{equation}
Define a function fermidist(energy, mu, kT) that computes the distribution function for a given value of energy, chemical potential mu and temperature kT. Note here, kT is a single variable with units of energy. Make sure your function works with an array and don't use any for or while loops in your code.
End of explanation
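A few limiting cases give a quick sanity check of fermidist (this cell is an addition, not part of the original exercise): the occupation tends to 1 far below the chemical potential, to 0 far above it, and equals 0.5 exactly at $\epsilon = \mu$.
print(fermidist(0.0, 5.0, 0.1))    # ~1.0 (energy well below mu)
print(fermidist(10.0, 5.0, 0.1))   # ~0.0 (energy well above mu)
print(fermidist(5.0, 5.0, 0.1))    # 0.5 exactly at epsilon = mu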
def plot_fermidist(mu, kT):
    energy = np.linspace(0.0, 10.0, 100)
    plt.plot(energy, fermidist(energy, mu, kT), 'k')
    plt.xlabel('Energy')
    plt.ylabel('F($\epsilon$)')
    #plt.tick_params #ran out of time
plot_fermidist(4.0, 1.0)
assert True # leave this for grading the plot_fermidist function
Explanation: Write a function plot_fermidist(mu, kT) that plots the Fermi distribution $F(\epsilon)$ as a function of $\epsilon$ as a line plot for the parameters mu and kT.
Use energies over the range $[0,10.0]$ and a suitable number of points.
Choose an appropriate x and y limit for your visualization.
Label your x and y axis and the overall visualization.
Customize your plot in 3 other ways to make it effective and beautiful.
End of explanation
interactive(plot_fermidist,mu=(0.0,5.0,.1),kT=(.1,10.0,.1))
Explanation: Use interact with plot_fermidist to explore the distribution:
For mu use a floating point slider over the range $[0.0,5.0]$.
For kT use a floating point slider over the range $[0.1,10.0]$.
End of explanation |
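If finer control over the widgets is wanted, the sliders can also be built explicitly. The snippet below is a sketch using ipywidgets.FloatSlider (an assumption about the widget API rather than part of the original solution); kT's lower bound stays at 0.1 to avoid dividing by zero.
from ipywidgets import FloatSlider, interact
interact(plot_fermidist,
         mu=FloatSlider(min=0.0, max=5.0, step=0.1, value=2.0),
         kT=FloatSlider(min=0.1, max=10.0, step=0.1, value=1.0));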
13,807 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Image Manipulation with skimage
This example builds a simple UI for performing basic image manipulation with scikit-image.
Step1: Let's load an image from scikit-image's collection, stored in the data module. These come back as regular numpy arrays
Step3: Let's make a little utility function for displaying Numpy arrays with the IPython display protocol
Step4: Now, let's create a simple "image editor" function, that allows us to blur the image or change its color balance
Step5: We can call this function manually and get a new image. For example, let's do a little blurring and remove all the red from the image
Step6: But it's a lot easier to explore what this function does by controlling each parameter interactively and getting immediate visual feedback. IPython's ipywidgets package lets us do that with a minimal amount of code
Step7: Browsing the scikit-image gallery, and editing grayscale and jpg images
The coffee cup isn't the only image that ships with scikit-image, the data module has others. Let's make a quick interactive explorer for this
Step8: And now, let's update our editor to cope correctly with grayscale and color images, since some images in the scikit-image collection are grayscale. For these, we ignore the red (R) and blue (B) channels, and treat 'G' as 'Grayscale' | Python Code:
# Stdlib imports
from io import BytesIO
# Third-party libraries
from IPython.display import Image
from ipywidgets import interact, interactive, fixed
import matplotlib as mpl
from skimage import data, filters, io, img_as_float
import numpy as np
Explanation: Image Manipulation with skimage
This example builds a simple UI for performing basic image manipulation with scikit-image.
End of explanation
i = img_as_float(data.coffee())
i.shape
Explanation: Let's load an image from scikit-image's collection, stored in the data module. These come back as regular numpy arrays:
End of explanation
def arr2img(arr):
Display a 2- or 3-d numpy array as an image.
if arr.ndim == 2:
format, cmap = 'png', mpl.cm.gray
elif arr.ndim == 3:
format, cmap = 'jpg', None
else:
raise ValueError("Only 2- or 3-d arrays can be displayed as images.")
# Don't let matplotlib autoscale the color range so we can control overall luminosity
vmax = 255 if arr.dtype == 'uint8' else 1.0
with BytesIO() as buffer:
mpl.image.imsave(buffer, arr, format=format, cmap=cmap, vmin=0, vmax=vmax)
out = buffer.getvalue()
return Image(out)
arr2img(i)
Explanation: Let's make a little utility function for displaying Numpy arrays with the IPython display protocol:
End of explanation
def edit_image(image, sigma=0.1, R=1.0, G=1.0, B=1.0):
new_image = filters.gaussian(image, sigma=sigma, multichannel=True)
new_image[:,:,0] = R*new_image[:,:,0]
new_image[:,:,1] = G*new_image[:,:,1]
new_image[:,:,2] = B*new_image[:,:,2]
return arr2img(new_image)
Explanation: Now, let's create a simple "image editor" function, that allows us to blur the image or change its color balance:
End of explanation
edit_image(i, sigma=5, R=0.1)
Explanation: We can call this function manually and get a new image. For example, let's do a little blurring and remove all the red from the image:
End of explanation
lims = (0.0,1.0,0.01)
interact(edit_image, image=fixed(i), sigma=(0.0,10.0,0.1), R=lims, G=lims, B=lims);
Explanation: But it's a lot easier to explore what this function does by controlling each parameter interactively and getting immediate visual feedback. IPython's ipywidgets package lets us do that with a minimal amount of code:
End of explanation
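A small variation (an added sketch, not in the original notebook): parameters that should not be adjustable can be pinned with fixed(), which keeps the widget UI minimal.
interact(edit_image, image=fixed(i), sigma=(0.0, 10.0, 0.1),
         R=fixed(1.0), G=fixed(1.0), B=fixed(1.0));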
def choose_img(name):
# Let's store the result in the global `img` that we can then use in our image editor below
global img
img = getattr(data, name)()
return arr2img(img)
# Skip 'load' and 'lena', two functions that don't actually return images
interact(choose_img, name=sorted(set(data.__all__)-{'lena', 'load'}));
Explanation: Browsing the scikit-image gallery, and editing grayscale and jpg images
The coffee cup isn't the only image that ships with scikit-image, the data module has others. Let's make a quick interactive explorer for this:
End of explanation
lims = (0.0, 1.0, 0.01)
def edit_image(image, sigma, R, G, B):
new_image = filters.gaussian(image, sigma=sigma, multichannel=True)
if new_image.ndim == 3:
new_image[:,:,0] = R*new_image[:,:,0]
new_image[:,:,1] = G*new_image[:,:,1]
new_image[:,:,2] = B*new_image[:,:,2]
else:
new_image = G*new_image
return arr2img(new_image)
interact(edit_image, image=fixed(img), sigma=(0.0, 10.0, 0.1),
R=lims, G=lims, B=lims);
Explanation: And now, let's update our editor to cope correctly with grayscale and color images, since some images in the scikit-image collection are grayscale. For these, we ignore the red (R) and blue (B) channels, and treat 'G' as 'Grayscale':
End of explanation |
13,808 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Table Visualization
This section demonstrates visualization of tabular data using the Styler
class. For information on visualization with charting please see Chart Visualization. This document is written as a Jupyter Notebook, and can be viewed or downloaded here.
Styler Object and HTML
Styling should be performed after the data in a DataFrame has been processed. The Styler creates an HTML <table> and leverages CSS styling language to manipulate many parameters including colors, fonts, borders, background, etc. See here for more information on styling HTML tables. This allows a lot of flexibility out of the box, and even enables web developers to integrate DataFrames into their existing user interface designs.
The DataFrame.style attribute is a property that returns a Styler object. It has a _repr_html_ method defined on it so they are rendered automatically in Jupyter Notebook.
Step1: The above output looks very similar to the standard DataFrame HTML representation. But the HTML here has already attached some CSS classes to each cell, even if we haven't yet created any styles. We can view these by calling the [.to_html()][to_html] method, which returns the raw HTML as string, which is useful for further processing or adding to a file - read on in More about CSS and HTML. Below we will show how we can use these to format the DataFrame to be more communicative. For example how we can build s
Step2: Formatting the Display
Formatting Values
Before adding styles it is useful to show that the Styler can distinguish the display value from the actual value, in both data values and index or columns headers. To control the display value, the text is printed in each cell as string, and we can use the .format() and .format_index() methods to manipulate this according to a format spec string or a callable that takes a single value and returns a string. It is possible to define this for the whole table, or index, or for individual columns, or MultiIndex levels.
Additionally, the format function has a precision argument to specifically help formatting floats, as well as decimal and thousands separators to support other locales, an na_rep argument to display missing data, and an escape argument to help displaying safe-HTML or safe-LaTeX. The default formatter is configured to adopt pandas' regular display.precision option, controllable using with pd.option_context('display.precision', 2)
Step3: Using Styler to manipulate the display is a useful feature because maintaining the indexing and datavalues for other purposes gives greater control. You do not have to overwrite your DataFrame to display it how you like. Here is an example of using the formatting functions whilst still relying on the underlying data for indexing and calculations.
Step4: Hiding Data
The index and column headers can be completely hidden, as well as subselecting rows or columns that one wishes to exclude. Both these options are performed using the same methods.
The index can be hidden from rendering by calling .hide_index() without any arguments, which might be useful if your index is integer based. Similarly column headers can be hidden by calling .hide_columns() without any arguments.
Specific rows or columns can be hidden from rendering by calling the same .hide_index() or .hide_columns() methods and passing in a row/column label, a list-like or a slice of row/column labels to for the subset argument.
Hiding does not change the integer arrangement of CSS classes, e.g. hiding the first two columns of a DataFrame means the column class indexing will start at col2, since col0 and col1 are simply ignored.
We can update our Styler object from before to hide some data and format the values.
Step5: Methods to Add Styles
There are 3 primary methods of adding custom CSS styles to Styler
Step6: Next we just add a couple more styling artifacts targeting specific parts of the table. Be careful here, since we are chaining methods we need to explicitly instruct the method not to overwrite the existing styles.
Step7: As a convenience method (since version 1.2.0) we can also pass a dict to .set_table_styles() which contains row or column keys. Behind the scenes Styler just indexes the keys and adds relevant .col<m> or .row<n> classes as necessary to the given CSS selectors.
Step8: Setting Classes and Linking to External CSS
If you have designed a website then it is likely you will already have an external CSS file that controls the styling of table and cell objects within it. You may want to use these native files rather than duplicate all the CSS in python (and duplicate any maintenance work).
Table Attributes
It is very easy to add a class to the main <table> using .set_table_attributes(). This method can also attach inline styles - read more in CSS Hierarchies.
Step9: Data Cell CSS Classes
New in version 1.2.0
The .set_td_classes() method accepts a DataFrame with matching indices and columns to the underlying Styler's DataFrame. That DataFrame will contain strings as css-classes to add to individual data cells
Step10: Styler Functions
Acting on Data
We use the following methods to pass your style functions. Both of those methods take a function (and some other keyword arguments) and apply it to the DataFrame in a certain way, rendering CSS styles.
.applymap() (elementwise)
Step11: For example we can build a function that colors text if it is negative, and chain this with a function that partially fades cells of negligible value. Since this looks at each element in turn we use applymap.
Step12: We can also build a function that highlights the maximum value across rows, cols, and the DataFrame all at once. In this case we use apply. Below we highlight the maximum in a column.
Step13: We can use the same function across the different axes, highlighting here the DataFrame maximum in purple, and row maximums in pink.
Step14: This last example shows how some styles have been overwritten by others. In general the most recent style applied is active but you can read more in the section on CSS hierarchies. You can also apply these styles to more granular parts of the DataFrame - read more in section on subset slicing.
It is possible to replicate some of this functionality using just classes but it can be more cumbersome. See item 3) of Optimization
<div class="alert alert-info">
*Debugging Tip*
Step15: Tooltips and Captions
Table captions can be added with the .set_caption() method. You can use table styles to control the CSS relevant to the caption.
Step16: Adding tooltips (since version 1.3.0) can be done using the .set_tooltips() method in the same way you can add CSS classes to data cells by providing a string based DataFrame with intersecting indices and columns. You don't have to specify a css_class name or any css props for the tooltips, since there are standard defaults, but the option is there if you want more visual control.
Step17: The only thing left to do for our table is to add the highlighting borders to draw the audience attention to the tooltips. We will create internal CSS classes as before using table styles. Setting classes always overwrites so we need to make sure we add the previous classes.
Step18: Finer Control with Slicing
The examples we have shown so far for the Styler.apply and Styler.applymap functions have not demonstrated the use of the subset argument. This is a useful argument which permits a lot of flexibility
Step19: We will use subset to highlight the maximum in the third and fourth columns with red text. We will highlight the subset sliced region in yellow.
Step20: If combined with the IndexSlice as suggested then it can index across both dimensions with greater flexibility.
Step21: This also provides the flexibility to sub select rows when used with the axis=1.
Step22: There is also scope to provide conditional filtering.
Suppose we want to highlight the maximum across columns 2 and 4 only in the case that the sum of columns 1 and 3 is less than -2.0 (essentially excluding rows (:,'r2')).
Step23: Only label-based slicing is supported right now, not positional, and not callables.
If your style function uses a subset or axis keyword argument, consider wrapping your function in a functools.partial, partialing out that keyword.
python
my_func2 = functools.partial(my_func, subset=42)
Optimization
Generally, for smaller tables and most cases, the rendered HTML does not need to be optimized, and we don't really recommend it. There are two cases where it is worth considering
Step24: <div class="alert alert-info">
<font color=green>This is better
Step25: 2. Use table styles
Use table styles where possible (e.g. for all cells or rows or columns at a time) since the CSS is nearly always more efficient than other formats.
<div class="alert alert-warning">
<font color=red>This is sub-optimal
Step26: <div class="alert alert-info">
<font color=green>This is better
Step27: 3. Set classes instead of using Styler functions
For large DataFrames where the same style is applied to many cells it can be more efficient to declare the styles as classes and then apply those classes to data cells, rather than directly applying styles to cells. It is, however, probably still easier to use the Styler function api when you are not concerned about optimization.
<div class="alert alert-warning">
<font color=red>This is sub-optimal
Step28: <div class="alert alert-info">
<font color=green>This is better
Step29: 4. Don't use tooltips
Tooltips require cell_ids to work and they generate extra HTML elements for every data cell.
5. If every byte counts use string replacement
You can remove unnecessary HTML, or shorten the default class names by replacing the default css dict. You can read a little more about CSS below.
Step30: Builtin Styles
Some styling functions are common enough that we've "built them in" to the Styler, so you don't have to write them and apply them yourself. The current list of such functions is
Step31: Highlight Min or Max
Step32: Highlight Between
This method accepts ranges as float, or NumPy arrays or Series provided the indexes match.
Step33: Highlight Quantile
Useful for detecting the highest or lowest percentile values
Step34: Background Gradient and Text Gradient
You can create "heatmaps" with the background_gradient and text_gradient methods. These require matplotlib, and we'll use Seaborn to get a nice colormap.
Step35: .background_gradient and .text_gradient have a number of keyword arguments to customise the gradients and colors. See the documentation.
Set properties
Use Styler.set_properties when the style doesn't actually depend on the values. This is just a simple wrapper for .applymap where the function returns the same properties for all cells.
Step36: Bar charts
You can include "bar charts" in your DataFrame.
Step37: Additional keyword arguments give more control on centering and positioning, and you can pass a list of [color_negative, color_positive] to highlight lower and higher values or a matplotlib colormap.
To showcase an example here's how you can change the above with the new align option, combined with setting vmin and vmax limits, the width of the figure, and underlying css props of cells, leaving space to display the text and the bars. We also use text_gradient to color the text the same as the bars using a matplotlib colormap (although in this case the visualization is probably better without this additional effect).
Step40: The following example aims to give a highlight of the behavior of the new align options
Step41: Sharing styles
Say you have a lovely style built up for a DataFrame, and now you want to apply the same style to a second DataFrame. Export the style with df1.style.export, and import it on the second DataFrame with df1.style.set
Step42: Notice that you're able to share the styles even though they're data aware. The styles are re-evaluated on the new DataFrame they've been used upon.
Limitations
DataFrame only (use Series.to_frame().style)
The index and columns must be unique
No large repr, and construction performance isn't great; although we have some HTML optimizations
You can only style the values, not the index or columns (except with table_styles above)
You can only apply styles, you can't insert new HTML entities
Some of these might be addressed in the future.
Other Fun and Useful Stuff
Here are a few interesting examples.
Widgets
Styler interacts pretty well with widgets. If you're viewing this online instead of running the notebook yourself, you're missing out on interactively adjusting the color palette.
Step43: Magnify
Step44: Sticky Headers
If you display a large matrix or DataFrame in a notebook, but you want to always see the column and row headers you can use the .set_sticky method which manipulates the table styles CSS.
Step45: It is also possible to stick MultiIndexes and even only specific levels.
Step46: HTML Escaping
Suppose you have to display HTML within HTML, that can be a bit of pain when the renderer can't distinguish. You can use the escape formatting option to handle this, and even use it within a formatter that contains HTML itself.
Step47: Export to Excel
Some support (since version 0.20.0) is available for exporting styled DataFrames to Excel worksheets using the OpenPyXL or XlsxWriter engines. CSS2.2 properties handled include
Step48: A screenshot of the output
Step49: CSS Hierarchies
The examples have shown that when CSS styles overlap, the one that comes last in the HTML render, takes precedence. So the following yield different results
Step50: This is only true for CSS rules that are equivalent in hierarchy, or importance. You can read more about CSS specificity here but for our purposes it suffices to summarize the key points
Step51: This text is red because the generated selector #T_a_ td is worth 101 (ID plus element), whereas #T_a_row0_col0 is only worth 100 (ID), so is considered inferior even though in the HTML it comes after the previous.
Step52: In the above case the text is blue because the selector #T_b_ .cls-1 is worth 110 (ID plus class), which takes precedence.
Step53: Now we have created another table style this time the selector T_c_ td.data (ID plus element plus class) gets bumped up to 111.
If your style fails to be applied, and it's really frustrating, try the !important trump card.
Step54: Finally got that green text after all!
Extensibility
The core of pandas is, and will remain, its "high-performance, easy-to-use data structures".
With that in mind, we hope that DataFrame.style accomplishes two goals
Provide an API that is pleasing to use interactively and is "good enough" for many tasks
Provide the foundations for dedicated libraries to build on
If you build a great library on top of this, let us know and we'll link to it.
Subclassing
If the default template doesn't quite suit your needs, you can subclass Styler and extend or override the template.
We'll show an example of extending the default template to insert a custom header before each table.
Step55: We'll use the following template
Step56: Now that we've created a template, we need to set up a subclass of Styler that
knows about it.
Step57: Notice that we include the original loader in our environment's loader.
That's because we extend the original template, so the Jinja environment needs
to be able to find it.
Now we can use that custom styler. Its __init__ takes a DataFrame.
Step58: Our custom template accepts a table_title keyword. We can provide the value in the .to_html method.
Step59: For convenience, we provide the Styler.from_custom_template method that does the same as the custom subclass.
Step60: Template Structure
Here's the template structure for both the style generation template and the table generation template
Step61: Table template
Step62: See the template in the GitHub repo for more details. | Python Code:
import matplotlib.pyplot
# We have this here to trigger matplotlib's font cache stuff.
# This cell is hidden from the output
import pandas as pd
import numpy as np
import matplotlib as mpl
df = pd.DataFrame([[38.0, 2.0, 18.0, 22.0, 21, np.nan],[19, 439, 6, 452, 226,232]],
index=pd.Index(['Tumour (Positive)', 'Non-Tumour (Negative)'], name='Actual Label:'),
columns=pd.MultiIndex.from_product([['Decision Tree', 'Regression', 'Random'],['Tumour', 'Non-Tumour']], names=['Model:', 'Predicted:']))
df.style
Explanation: Table Visualization
This section demonstrates visualization of tabular data using the Styler
class. For information on visualization with charting please see Chart Visualization. This document is written as a Jupyter Notebook, and can be viewed or downloaded here.
Styler Object and HTML
Styling should be performed after the data in a DataFrame has been processed. The Styler creates an HTML <table> and leverages CSS styling language to manipulate many parameters including colors, fonts, borders, background, etc. See here for more information on styling HTML tables. This allows a lot of flexibility out of the box, and even enables web developers to integrate DataFrames into their existing user interface designs.
The DataFrame.style attribute is a property that returns a Styler object. It has a _repr_html_ method defined on it so they are rendered automatically in Jupyter Notebook.
End of explanation
# Hidden cell to just create the below example: code is covered throughout the guide.
s = df.style\
.hide_columns([('Random', 'Tumour'), ('Random', 'Non-Tumour')])\
.format('{:.0f}')\
.set_table_styles([{
'selector': '',
'props': 'border-collapse: separate;'
},{
'selector': 'caption',
'props': 'caption-side: bottom; font-size:1.3em;'
},{
'selector': '.index_name',
'props': 'font-style: italic; color: darkgrey; font-weight:normal;'
},{
'selector': 'th:not(.index_name)',
'props': 'background-color: #000066; color: white;'
},{
'selector': 'th.col_heading',
'props': 'text-align: center;'
},{
'selector': 'th.col_heading.level0',
'props': 'font-size: 1.5em;'
},{
'selector': 'th.col2',
'props': 'border-left: 1px solid white;'
},{
'selector': '.col2',
'props': 'border-left: 1px solid #000066;'
},{
'selector': 'td',
'props': 'text-align: center; font-weight:bold;'
},{
'selector': '.true',
'props': 'background-color: #e6ffe6;'
},{
'selector': '.false',
'props': 'background-color: #ffe6e6;'
},{
'selector': '.border-red',
'props': 'border: 2px dashed red;'
},{
'selector': '.border-green',
'props': 'border: 2px dashed green;'
},{
'selector': 'td:hover',
'props': 'background-color: #ffffb3;'
}])\
.set_td_classes(pd.DataFrame([['true border-green', 'false', 'true', 'false border-red', '', ''],
['false', 'true', 'false', 'true', '', '']],
index=df.index, columns=df.columns))\
.set_caption("Confusion matrix for multiple cancer prediction models.")\
.set_tooltips(pd.DataFrame([['This model has a very strong true positive rate', '', '', "This model's total number of false negatives is too high", '', ''],
['', '', '', '', '', '']],
index=df.index, columns=df.columns),
css_class='pd-tt', props=
'visibility: hidden; position: absolute; z-index: 1; border: 1px solid #000066;'
'background-color: white; color: #000066; font-size: 0.8em;'
'transform: translate(0px, -24px); padding: 0.6em; border-radius: 0.5em;')
s
Explanation: The above output looks very similar to the standard DataFrame HTML representation. But the HTML here has already attached some CSS classes to each cell, even if we haven't yet created any styles. We can view these by calling the [.to_html()][to_html] method, which returns the raw HTML as string, which is useful for further processing or adding to a file - read on in More about CSS and HTML. Below we will show how we can use these to format the DataFrame to be more communicative. For example how we can build s:
End of explanation
df.style.format(precision=0, na_rep='MISSING', thousands=" ",
formatter={('Decision Tree', 'Tumour'): "{:.2f}",
('Regression', 'Non-Tumour'): lambda x: "$ {:,.1f}".format(x*-1e6)
})
Explanation: Formatting the Display
Formatting Values
Before adding styles it is useful to show that the Styler can distinguish the display value from the actual value, in both data values and index or columns headers. To control the display value, the text is printed in each cell as string, and we can use the .format() and .format_index() methods to manipulate this according to a format spec string or a callable that takes a single value and returns a string. It is possible to define this for the whole table, or index, or for individual columns, or MultiIndex levels.
Additionally, the format function has a precision argument to specifically help formatting floats, as well as decimal and thousands separators to support other locales, an na_rep argument to display missing data, and an escape argument to help displaying safe-HTML or safe-LaTeX. The default formatter is configured to adopt pandas' regular display.precision option, controllable using with pd.option_context('display.precision', 2):
End of explanation
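The .format_index() method mentioned above works the same way on the headers. A minimal sketch, assuming a pandas version where Styler.format_index is available (1.4+):
df.style.format(precision=0, na_rep='MISSING', thousands=" ")\
        .format_index(str.upper, axis=1)   # upper-case every column header label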
weather_df = pd.DataFrame(np.random.rand(10,2)*5,
index=pd.date_range(start="2021-01-01", periods=10),
columns=["Tokyo", "Beijing"])
def rain_condition(v):
if v < 1.75:
return "Dry"
elif v < 2.75:
return "Rain"
return "Heavy Rain"
def make_pretty(styler):
styler.set_caption("Weather Conditions")
styler.format(rain_condition)
styler.format_index(lambda v: v.strftime("%A"))
styler.background_gradient(axis=None, vmin=1, vmax=5, cmap="YlGnBu")
return styler
weather_df
weather_df.loc["2021-01-04":"2021-01-08"].style.pipe(make_pretty)
Explanation: Using Styler to manipulate the display is a useful feature because maintaining the indexing and datavalues for other purposes gives greater control. You do not have to overwrite your DataFrame to display it how you like. Here is an example of using the formatting functions whilst still relying on the underlying data for indexing and calculations.
End of explanation
s = df.style.format('{:.0f}').hide_columns([('Random', 'Tumour'), ('Random', 'Non-Tumour')])
s
# Hidden cell to avoid CSS clashes and latter code upcoding previous formatting
s.set_uuid('after_hide')
Explanation: Hiding Data
The index and column headers can be completely hidden, as well as subselecting rows or columns that one wishes to exclude. Both these options are performed using the same methods.
The index can be hidden from rendering by calling .hide_index() without any arguments, which might be useful if your index is integer based. Similarly column headers can be hidden by calling .hide_columns() without any arguments.
Specific rows or columns can be hidden from rendering by calling the same .hide_index() or .hide_columns() methods and passing in a row/column label, a list-like or a slice of row/column labels to for the subset argument.
Hiding does not change the integer arrangement of CSS classes, e.g. hiding the first two columns of a DataFrame means the column class indexing will start at col2, since col0 and col1 are simply ignored.
We can update our Styler object from before to hide some data and format the values.
End of explanation
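For completeness, a minimal sketch of the other hiding options described above (hide the whole index, or only a subset of rows); the row label used here is one from this guide's DataFrame:
df.style.format('{:.0f}').hide_index()                                   # no row labels at all
df.style.format('{:.0f}').hide_index(subset=['Non-Tumour (Negative)'])   # hide a single row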
cell_hover = { # for row hover use <tr> instead of <td>
'selector': 'td:hover',
'props': [('background-color', '#ffffb3')]
}
index_names = {
'selector': '.index_name',
'props': 'font-style: italic; color: darkgrey; font-weight:normal;'
}
headers = {
'selector': 'th:not(.index_name)',
'props': 'background-color: #000066; color: white;'
}
s.set_table_styles([cell_hover, index_names, headers])
# Hidden cell to avoid CSS clashes and latter code upcoding previous formatting
s.set_uuid('after_tab_styles1')
Explanation: Methods to Add Styles
There are 3 primary methods of adding custom CSS styles to Styler:
Using .set_table_styles() to control broader areas of the table with specified internal CSS. Although table styles allow the flexibility to add CSS selectors and properties controlling all individual parts of the table, they are unwieldy for individual cell specifications. Also, note that table styles cannot be exported to Excel.
Using .set_td_classes() to directly link either external CSS classes to your data cells or link the internal CSS classes created by .set_table_styles(). See here. These cannot be used on column header rows or indexes, and also won't export to Excel.
Using the .apply() and .applymap() functions to add direct internal CSS to specific data cells. See here. As of v1.4.0 there are also methods that work directly on column header rows or indexes; .apply_index() and .applymap_index(). Note that only these methods add styles that will export to Excel. These methods work in a similar way to DataFrame.apply() and DataFrame.applymap().
Table Styles
Table styles are flexible enough to control all individual parts of the table, including column headers and indexes.
However, they can be unwieldy to type for individual data cells or for any kind of conditional formatting, so we recommend that table styles are used for broad styling, such as entire rows or columns at a time.
Table styles are also used to control features which can apply to the whole table at once such as creating a generic hover functionality. The :hover pseudo-selector, as well as other pseudo-selectors, can only be used this way.
To replicate the normal format of CSS selectors and properties (attribute value pairs), e.g.
tr:hover {
background-color: #ffff99;
}
the necessary format to pass styles to .set_table_styles() is as a list of dicts, each with a CSS-selector tag and CSS-properties. Properties can either be a list of 2-tuples, or a regular CSS-string, for example:
End of explanation
s.set_table_styles([
{'selector': 'th.col_heading', 'props': 'text-align: center;'},
{'selector': 'th.col_heading.level0', 'props': 'font-size: 1.5em;'},
{'selector': 'td', 'props': 'text-align: center; font-weight: bold;'},
], overwrite=False)
# Hidden cell to avoid CSS clashes and latter code upcoding previous formatting
s.set_uuid('after_tab_styles2')
Explanation: Next we just add a couple more styling artifacts targeting specific parts of the table. Be careful here, since we are chaining methods we need to explicitly instruct the method not to overwrite the existing styles.
End of explanation
s.set_table_styles({
('Regression', 'Tumour'): [{'selector': 'th', 'props': 'border-left: 1px solid white'},
{'selector': 'td', 'props': 'border-left: 1px solid #000066'}]
}, overwrite=False, axis=0)
# Hidden cell to avoid CSS clashes and latter code upcoding previous formatting
s.set_uuid('xyz01')
Explanation: As a convenience method (since version 1.2.0) we can also pass a dict to .set_table_styles() which contains row or column keys. Behind the scenes Styler just indexes the keys and adds relevant .col<m> or .row<n> classes as necessary to the given CSS selectors.
End of explanation
out = s.set_table_attributes('class="my-table-cls"').to_html()
print(out[out.find('<table'):][:109])
Explanation: Setting Classes and Linking to External CSS
If you have designed a website then it is likely you will already have an external CSS file that controls the styling of table and cell objects within it. You may want to use these native files rather than duplicate all the CSS in python (and duplicate any maintenance work).
Table Attributes
It is very easy to add a class to the main <table> using .set_table_attributes(). This method can also attach inline styles - read more in CSS Hierarchies.
End of explanation
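As noted above, .set_table_attributes() can also carry inline styles alongside a class name; a minimal added sketch:
df.style.set_table_attributes('class="my-table-cls" style="border-collapse: collapse;"')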
s.set_table_styles([ # create internal CSS classes
{'selector': '.true', 'props': 'background-color: #e6ffe6;'},
{'selector': '.false', 'props': 'background-color: #ffe6e6;'},
], overwrite=False)
cell_color = pd.DataFrame([['true ', 'false ', 'true ', 'false '],
['false ', 'true ', 'false ', 'true ']],
index=df.index,
columns=df.columns[:4])
s.set_td_classes(cell_color)
# Hidden cell to avoid CSS clashes and latter code upcoding previous formatting
s.set_uuid('after_classes')
Explanation: Data Cell CSS Classes
New in version 1.2.0
The .set_td_classes() method accepts a DataFrame with matching indices and columns to the underlying Styler's DataFrame. That DataFrame will contain strings as css-classes to add to individual data cells: the <td> elements of the <table>. Rather than use external CSS we will create our classes internally and add them to table style. We will save adding the borders until the section on tooltips.
End of explanation
np.random.seed(0)
df2 = pd.DataFrame(np.random.randn(10,4), columns=['A','B','C','D'])
df2.style
Explanation: Styler Functions
Acting on Data
We use the following methods to pass your style functions. Both of those methods take a function (and some other keyword arguments) and apply it to the DataFrame in a certain way, rendering CSS styles.
.applymap() (elementwise): accepts a function that takes a single value and returns a string with the CSS attribute-value pair.
.apply() (column-/row-/table-wise): accepts a function that takes a Series or DataFrame and returns a Series, DataFrame, or numpy array with an identical shape where each element is a string with a CSS attribute-value pair. This method passes each column or row of your DataFrame one-at-a-time or the entire table at once, depending on the axis keyword argument. For columnwise use axis=0, rowwise use axis=1, and for the entire table at once use axis=None.
This method is powerful for applying multiple, complex logic to data cells. We create a new DataFrame to demonstrate this.
End of explanation
def style_negative(v, props=''):
return props if v < 0 else None
s2 = df2.style.applymap(style_negative, props='color:red;')\
.applymap(lambda v: 'opacity: 20%;' if (v < 0.3) and (v > -0.3) else None)
s2
# Hidden cell to avoid CSS clashes and latter code upcoding previous formatting
s2.set_uuid('after_applymap')
Explanation: For example we can build a function that colors text if it is negative, and chain this with a function that partially fades cells of negligible value. Since this looks at each element in turn we use applymap.
End of explanation
def highlight_max(s, props=''):
return np.where(s == np.nanmax(s.values), props, '')
s2.apply(highlight_max, props='color:white;background-color:darkblue', axis=0)
# Hidden cell to avoid CSS clashes and latter code upcoding previous formatting
s2.set_uuid('after_apply')
Explanation: We can also build a function that highlights the maximum value across rows, cols, and the DataFrame all at once. In this case we use apply. Below we highlight the maximum in a column.
End of explanation
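The same pattern works for any reduction, e.g. a row-wise minimum. This variant is an added sketch, not part of the original guide:
def highlight_min(s, props=''):
    return np.where(s == np.nanmin(s.values), props, '')
df2.style.apply(highlight_min, props='color:white;background-color:darkred', axis=1)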
s2.apply(highlight_max, props='color:white;background-color:pink;', axis=1)\
.apply(highlight_max, props='color:white;background-color:purple', axis=None)
# Hidden cell to avoid CSS clashes and latter code upcoding previous formatting
s2.set_uuid('after_apply_again')
Explanation: We can use the same function across the different axes, highlighting here the DataFrame maximum in purple, and row maximums in pink.
End of explanation
s2.applymap_index(lambda v: "color:pink;" if v>4 else "color:darkblue;", axis=0)
s2.apply_index(lambda s: np.where(s.isin(["A", "B"]), "color:pink;", "color:darkblue;"), axis=1)
Explanation: This last example shows how some styles have been overwritten by others. In general the most recent style applied is active but you can read more in the section on CSS hierarchies. You can also apply these styles to more granular parts of the DataFrame - read more in section on subset slicing.
It is possible to replicate some of this functionality using just classes but it can be more cumbersome. See item 3) of Optimization
<div class="alert alert-info">
*Debugging Tip*: If you're having trouble writing your style function, try just passing it into ``DataFrame.apply``. Internally, ``Styler.apply`` uses ``DataFrame.apply`` so the result should be the same, and with ``DataFrame.apply`` you will be able to inspect the CSS string output of your intended function in each cell.
</div>
Acting on the Index and Column Headers
Similar application is achieved for headers by using:
.applymap_index() (elementwise): accepts a function that takes a single value and returns a string with the CSS attribute-value pair.
.apply_index() (level-wise): accepts a function that takes a Series and returns a Series, or numpy array with an identical shape where each element is a string with a CSS attribute-value pair. This method passes each level of your Index one-at-a-time. To style the index use axis=0 and to style the column headers use axis=1.
You can select a level of a MultiIndex but currently no similar subset application is available for these methods.
End of explanation
s.set_caption("Confusion matrix for multiple cancer prediction models.")\
.set_table_styles([{
'selector': 'caption',
'props': 'caption-side: bottom; font-size:1.25em;'
}], overwrite=False)
# Hidden cell to avoid CSS clashes and latter code upcoding previous formatting
s.set_uuid('after_caption')
Explanation: Tooltips and Captions
Table captions can be added with the .set_caption() method. You can use table styles to control the CSS relevant to the caption.
End of explanation
tt = pd.DataFrame([['This model has a very strong true positive rate',
"This model's total number of false negatives is too high"]],
index=['Tumour (Positive)'], columns=df.columns[[0,3]])
s.set_tooltips(tt, props='visibility: hidden; position: absolute; z-index: 1; border: 1px solid #000066;'
'background-color: white; color: #000066; font-size: 0.8em;'
'transform: translate(0px, -24px); padding: 0.6em; border-radius: 0.5em;')
# Hidden cell to avoid CSS clashes and latter code upcoding previous formatting
s.set_uuid('after_tooltips')
Explanation: Adding tooltips (since version 1.3.0) can be done using the .set_tooltips() method in the same way you can add CSS classes to data cells by providing a string based DataFrame with intersecting indices and columns. You don't have to specify a css_class name or any css props for the tooltips, since there are standard defaults, but the option is there if you want more visual control.
End of explanation
s.set_table_styles([ # create internal CSS classes
{'selector': '.border-red', 'props': 'border: 2px dashed red;'},
{'selector': '.border-green', 'props': 'border: 2px dashed green;'},
], overwrite=False)
cell_border = pd.DataFrame([['border-green ', ' ', ' ', 'border-red '],
[' ', ' ', ' ', ' ']],
index=df.index,
columns=df.columns[:4])
s.set_td_classes(cell_color + cell_border)
# Hidden cell to avoid CSS clashes and latter code upcoding previous formatting
s.set_uuid('after_borders')
Explanation: The only thing left to do for our table is to add the highlighting borders to draw the audience attention to the tooltips. We will create internal CSS classes as before using table styles. Setting classes always overwrites so we need to make sure we add the previous classes.
End of explanation
df3 = pd.DataFrame(np.random.randn(4,4),
pd.MultiIndex.from_product([['A', 'B'], ['r1', 'r2']]),
columns=['c1','c2','c3','c4'])
df3
Explanation: Finer Control with Slicing
The examples we have shown so far for the Styler.apply and Styler.applymap functions have not demonstrated the use of the subset argument. This is a useful argument which permits a lot of flexibility: it allows you to apply styles to specific rows or columns, without having to code that logic into your style function.
The value passed to subset behaves similar to slicing a DataFrame;
A scalar is treated as a column label
A list (or Series or NumPy array) is treated as multiple column labels
A tuple is treated as (row_indexer, column_indexer)
Consider using pd.IndexSlice to construct the tuple for the last one. We will create a MultiIndexed DataFrame to demonstrate the functionality.
End of explanation
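As a minimal illustration of the first bullet above (a scalar subset is treated as a single column label), here is an added sketch:
df3.style.applymap(lambda v: 'color:red;' if v < 0 else None, subset='c1')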
slice_ = ['c3', 'c4']
df3.style.apply(highlight_max, props='color:red;', axis=0, subset=slice_)\
.set_properties(**{'background-color': '#ffffb3'}, subset=slice_)
Explanation: We will use subset to highlight the maximum in the third and fourth columns with red text. We will highlight the subset sliced region in yellow.
End of explanation
idx = pd.IndexSlice
slice_ = idx[idx[:,'r1'], idx['c2':'c4']]
df3.style.apply(highlight_max, props='color:red;', axis=0, subset=slice_)\
.set_properties(**{'background-color': '#ffffb3'}, subset=slice_)
Explanation: If combined with the IndexSlice as suggested then it can index across both dimensions with greater flexibility.
End of explanation
slice_ = idx[idx[:,'r2'], :]
df3.style.apply(highlight_max, props='color:red;', axis=1, subset=slice_)\
.set_properties(**{'background-color': '#ffffb3'}, subset=slice_)
Explanation: This also provides the flexibility to sub select rows when used with the axis=1.
End of explanation
slice_ = idx[idx[(df3['c1'] + df3['c3']) < -2.0], ['c2', 'c4']]
df3.style.apply(highlight_max, props='color:red;', axis=1, subset=slice_)\
.set_properties(**{'background-color': '#ffffb3'}, subset=slice_)
Explanation: There is also scope to provide conditional filtering.
Suppose we want to highlight the maximum across columns 2 and 4 only in the case that the sum of columns 1 and 3 is less than -2.0 (essentially excluding rows (:,'r2')).
End of explanation
df4 = pd.DataFrame([[1,2],[3,4]])
s4 = df4.style
Explanation: Only label-based slicing is supported right now, not positional, and not callables.
If your style function uses a subset or axis keyword argument, consider wrapping your function in a functools.partial, partialing out that keyword.
python
my_func2 = functools.partial(my_func, subset=42)
Optimization
Generally, for smaller tables and most cases, the rendered HTML does not need to be optimized, and we don't really recommend it. There are two cases where it is worth considering:
If you are rendering and styling a very large HTML table, certain browsers have performance issues.
If you are using Styler to dynamically create part of online user interfaces and want to improve network performance.
Here we recommend the following steps to implement:
1. Remove UUID and cell_ids
Ignore the uuid and set cell_ids to False. This will prevent unnecessary HTML.
<div class="alert alert-warning">
<font color=red>This is sub-optimal:</font>
</div>
End of explanation
from pandas.io.formats.style import Styler
s4 = Styler(df4, uuid_len=0, cell_ids=False)
Explanation: <div class="alert alert-info">
<font color=green>This is better:</font>
</div>
End of explanation
props = 'font-family: "Times New Roman", Times, serif; color: #e83e8c; font-size:1.3em;'
df4.style.applymap(lambda x: props, subset=[1])
Explanation: 2. Use table styles
Use table styles where possible (e.g. for all cells or rows or columns at a time) since the CSS is nearly always more efficient than other formats.
<div class="alert alert-warning">
<font color=red>This is sub-optimal:</font>
</div>
End of explanation
df4.style.set_table_styles([{'selector': 'td.col1', 'props': props}])
Explanation: <div class="alert alert-info">
<font color=green>This is better:</font>
</div>
End of explanation
df2.style.apply(highlight_max, props='color:white;background-color:darkblue;', axis=0)\
.apply(highlight_max, props='color:white;background-color:pink;', axis=1)\
.apply(highlight_max, props='color:white;background-color:purple', axis=None)
Explanation: 3. Set classes instead of using Styler functions
For large DataFrames where the same style is applied to many cells it can be more efficient to declare the styles as classes and then apply those classes to data cells, rather than directly applying styles to cells. It is, however, probably still easier to use the Styler function api when you are not concerned about optimization.
<div class="alert alert-warning">
<font color=red>This is sub-optimal:</font>
</div>
End of explanation
build = lambda x: pd.DataFrame(x, index=df2.index, columns=df2.columns)
cls1 = build(df2.apply(highlight_max, props='cls-1 ', axis=0))
cls2 = build(df2.apply(highlight_max, props='cls-2 ', axis=1, result_type='expand').values)
cls3 = build(highlight_max(df2, props='cls-3 '))
df2.style.set_table_styles([
{'selector': '.cls-1', 'props': 'color:white;background-color:darkblue;'},
{'selector': '.cls-2', 'props': 'color:white;background-color:pink;'},
{'selector': '.cls-3', 'props': 'color:white;background-color:purple;'}
]).set_td_classes(cls1 + cls2 + cls3)
Explanation: <div class="alert alert-info">
<font color=green>This is better:</font>
</div>
End of explanation
my_css = {
"row_heading": "",
"col_heading": "",
"index_name": "",
"col": "c",
"row": "r",
"col_trim": "",
"row_trim": "",
"level": "l",
"data": "",
"blank": "",
}
html = Styler(df4, uuid_len=0, cell_ids=False)
html.set_table_styles([{'selector': 'td', 'props': props},
{'selector': '.c1', 'props': 'color:green;'},
{'selector': '.l0', 'props': 'color:blue;'}],
css_class_names=my_css)
print(html.to_html())
html
Explanation: 4. Don't use tooltips
Tooltips require cell_ids to work and they generate extra HTML elements for every data cell.
5. If every byte counts use string replacement
You can remove unnecessary HTML, or shorten the default class names by replacing the default css dict. You can read a little more about CSS below.
End of explanation
df2.iloc[0,2] = np.nan
df2.iloc[4,3] = np.nan
df2.loc[:4].style.highlight_null(null_color='yellow')
Explanation: Builtin Styles
Some styling functions are common enough that we've "built them in" to the Styler, so you don't have to write them and apply them yourself. The current list of such functions is:
.highlight_null: for use with identifying missing data.
.highlight_min and .highlight_max: for use with identifying extremeties in data.
.highlight_between and .highlight_quantile: for use with identifying classes within data.
.background_gradient: a flexible method for highlighting cells based on their, or other, values on a numeric scale.
.text_gradient: similar method for highlighting text based on their, or other, values on a numeric scale.
.bar: to display mini-charts within cell backgrounds.
The individual documentation on each function often gives more examples of their arguments.
Highlight Null
End of explanation
df2.loc[:4].style.highlight_max(axis=1, props='color:white; font-weight:bold; background-color:darkblue;')
Explanation: Highlight Min or Max
End of explanation
left = pd.Series([1.0, 0.0, 1.0], index=["A", "B", "D"])
df2.loc[:4].style.highlight_between(left=left, right=1.5, axis=1, props='color:white; background-color:purple;')
Explanation: Highlight Between
This method accepts ranges as float, or NumPy arrays or Series provided the indexes match.
End of explanation
df2.loc[:4].style.highlight_quantile(q_left=0.85, axis=None, color='yellow')
Explanation: Highlight Quantile
Useful for detecting the highest or lowest percentile values
End of explanation
import seaborn as sns
cm = sns.light_palette("green", as_cmap=True)
df2.style.background_gradient(cmap=cm)
df2.style.text_gradient(cmap=cm)
Explanation: Background Gradient and Text Gradient
You can create "heatmaps" with the background_gradient and text_gradient methods. These require matplotlib, and we'll use Seaborn to get a nice colormap.
End of explanation
df2.loc[:4].style.set_properties(**{'background-color': 'black',
'color': 'lawngreen',
'border-color': 'white'})
Explanation: .background_gradient and .text_gradient have a number of keyword arguments to customise the gradients and colors. See the documentation.
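For instance, a small sketch of those keyword arguments, reusing the cm colormap defined above (vmin/vmax clip the color scale, and axis=None normalises over the whole frame rather than per column):
python
df2.style.background_gradient(cmap=cm, axis=None, vmin=-1.0, vmax=1.0)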
Set properties
Use Styler.set_properties when the style doesn't actually depend on the values. This is just a simple wrapper for .applymap where the function returns the same properties for all cells.
End of explanation
df2.style.bar(subset=['A', 'B'], color='#d65f5f')
Explanation: Bar charts
You can include "bar charts" in your DataFrame.
End of explanation
df2.style.format('{:.3f}', na_rep="")\
.bar(align=0, vmin=-2.5, vmax=2.5, cmap="bwr", height=50,
width=60, props="width: 120px; border-right: 1px solid black;")\
.text_gradient(cmap="bwr", vmin=-2.5, vmax=2.5)
Explanation: Additional keyword arguments give more control over centering and positioning, and you can pass a list of [color_negative, color_positive] to highlight lower and higher values, or a matplotlib colormap.
To showcase an example, here's how you can change the above with the new align option, combined with setting vmin and vmax limits, the width of the figure, and the underlying CSS props of the cells, leaving space to display the text and the bars. We also use text_gradient to color the text the same as the bars using a matplotlib colormap (although in this case the visualization is probably better without this additional effect).
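A simpler sketch of the two-color form mentioned above, using align='mid':
python
df2.style.bar(subset=['A', 'B'], align='mid', color=['#d65f5f', '#5fba7d'])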
End of explanation
# Hide the construction of the display chart from the user
import pandas as pd
from IPython.display import HTML
# Test series
test1 = pd.Series([-100,-60,-30,-20], name='All Negative')
test2 = pd.Series([-10,-5,0,90], name='Both Pos and Neg')
test3 = pd.Series([10,20,50,100], name='All Positive')
test4 = pd.Series([100, 103, 101, 102], name='Large Positive')
head = """
<table>
<thead>
<th>Align</th>
<th>All Negative</th>
<th>Both Neg and Pos</th>
<th>All Positive</th>
<th>Large Positive</th>
</thead>
<tbody>
"""
aligns = ['left', 'right', 'zero', 'mid', 'mean', 99]
for align in aligns:
row = "<tr><th>{}</th>".format(align)
for series in [test1,test2,test3, test4]:
s = series.copy()
s.name=''
row += "<td>{}</td>".format(s.to_frame().style.hide_index().bar(align=align,
color=['#d65f5f', '#5fba7d'],
width=100).to_html()) #testn['width']
row += '</tr>'
head += row
head += """
</tbody>
</table>"""
HTML(head)
Explanation: The following example aims to highlight the behavior of the new align options:
End of explanation
style1 = df2.style\
.applymap(style_negative, props='color:red;')\
.applymap(lambda v: 'opacity: 20%;' if (v < 0.3) and (v > -0.3) else None)\
.set_table_styles([{"selector": "th", "props": "color: blue;"}])\
.hide_index()
style1
style2 = df3.style
style2.use(style1.export())
style2
Explanation: Sharing styles
Say you have a lovely style built up for a DataFrame, and now you want to apply the same style to a second DataFrame. Export the style with df1.style.export, and apply it to the second DataFrame with df2.style.use.
End of explanation
from ipywidgets import widgets
@widgets.interact
def f(h_neg=(0, 359, 1), h_pos=(0, 359), s=(0., 99.9), l=(0., 99.9)):
return df2.style.background_gradient(
cmap=sns.palettes.diverging_palette(h_neg=h_neg, h_pos=h_pos, s=s, l=l,
as_cmap=True)
)
Explanation: Notice that you're able to share the styles even though they're data aware. The styles are re-evaluated on the new DataFrame they've been used upon.
Limitations
DataFrame only (use Series.to_frame().style)
The index and columns must be unique
No large repr, and construction performance isn't great; although we have some HTML optimizations
You can only style the values, not the index or columns (except with table_styles above)
You can only apply styles, you can't insert new HTML entities
Some of these might be addressed in the future.
Other Fun and Useful Stuff
Here are a few interesting examples.
Widgets
Styler interacts pretty well with widgets. If you're viewing this online instead of running the notebook yourself, you're missing out on interactively adjusting the color palette.
End of explanation
def magnify():
return [dict(selector="th",
props=[("font-size", "4pt")]),
dict(selector="td",
props=[('padding', "0em 0em")]),
dict(selector="th:hover",
props=[("font-size", "12pt")]),
dict(selector="tr:hover td:hover",
props=[('max-width', '200px'),
('font-size', '12pt')])
]
np.random.seed(25)
cmap = sns.diverging_palette(5, 250, as_cmap=True)
bigdf = pd.DataFrame(np.random.randn(20, 25)).cumsum()
bigdf.style.background_gradient(cmap, axis=1)\
.set_properties(**{'max-width': '80px', 'font-size': '1pt'})\
.set_caption("Hover to magnify")\
.format(precision=2)\
.set_table_styles(magnify())
Explanation: Magnify
End of explanation
bigdf = pd.DataFrame(np.random.randn(16, 100))
bigdf.style.set_sticky(axis="index")
Explanation: Sticky Headers
If you display a large matrix or DataFrame in a notebook, but you want to always see the column and row headers, you can use the .set_sticky method, which manipulates the table styles CSS.
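The column-header variant is the same call with axis="columns" (a sketch, assuming the same bigdf):
python
bigdf.style.set_sticky(axis="columns")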
End of explanation
bigdf.index = pd.MultiIndex.from_product([["A","B"],[0,1],[0,1,2,3]])
bigdf.style.set_sticky(axis="index", pixel_size=18, levels=[1,2])
Explanation: It is also possible to stick MultiIndexes and even only specific levels.
End of explanation
df4 = pd.DataFrame([['<div></div>', '"&other"', '<span></span>']])
df4.style
df4.style.format(escape="html")
df4.style.format('<a href="https://pandas.pydata.org" target="_blank">{}</a>', escape="html")
Explanation: HTML Escaping
Suppose you have to display HTML within HTML; that can be a bit of a pain when the renderer can't distinguish between the two. You can use the escape formatting option to handle this, and even use it within a formatter that contains HTML itself.
End of explanation
df2.style.\
applymap(style_negative, props='color:red;').\
highlight_max(axis=0).\
to_excel('styled.xlsx', engine='openpyxl')
Explanation: Export to Excel
Some support (since version 0.20.0) is available for exporting styled DataFrames to Excel worksheets using the OpenPyXL or XlsxWriter engines. CSS2.2 properties handled include:
background-color
color
font-family
font-style
font-weight
text-align
text-decoration
vertical-align
white-space: nowrap
Currently broken: border-style, border-width, border-color and their {top, right, bottom, left variants}
Only CSS2 named colors and hex colors of the form #rgb or #rrggbb are currently supported.
The following pseudo CSS properties are also available to set excel specific style properties:
number-format
Table-level styles and data cell CSS classes are not included in the export to Excel: individual cells must have their properties mapped by the Styler.apply and/or Styler.applymap methods.
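As a sketch of the number-format pseudo property mentioned above (not executed here; it is applied like any other CSS prop and only takes effect in the Excel export):
python
df2.style.applymap(lambda v: 'number-format: 0.00%;')\
    .to_excel('styled_pct.xlsx', engine='openpyxl')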
End of explanation
print(pd.DataFrame([[1,2],[3,4]], index=['i1', 'i2'], columns=['c1', 'c2']).style.to_html())
Explanation: A screenshot of the output:
Export to LaTeX
There is support (since version 1.3.0) to export Styler to LaTeX. The documentation for the .to_latex method gives further detail and numerous examples.
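A minimal sketch of that export (not executed here); convert_css=True asks Styler to translate CSS-based styles into LaTeX commands where it can:
python
print(df2.style.highlight_max(axis=0).to_latex(convert_css=True))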
More About CSS and HTML
The Cascading Style Sheet (CSS) language, which is designed to influence how a browser renders HTML elements, has its own peculiarities. It never reports errors: it just silently ignores them and doesn't render your objects how you intend, so it can sometimes be frustrating. Here is a very brief primer on how Styler creates HTML and interacts with CSS, with advice on common pitfalls to avoid.
CSS Classes and Ids
The precise structure of the CSS class attached to each cell is as follows.
Cells with Index and Column names include index_name and level<k> where k is its level in a MultiIndex
Index label cells include
row_heading
level<k> where k is the level in a MultiIndex
row<m> where m is the numeric position of the row
Column label cells include
col_heading
level<k> where k is the level in a MultiIndex
col<n> where n is the numeric position of the column
Data cells include
data
row<m>, where m is the numeric position of the cell.
col<n>, where n is the numeric position of the cell.
Blank cells include blank
Trimmed cells include col_trim or row_trim
The structure of the id is T_uuid_level<k>_row<m>_col<n> where level<k> is used only on headings, and headings will only have either row<m> or col<n> whichever is needed. By default we've also prepended each row/column identifier with a UUID unique to each DataFrame so that the style from one doesn't collide with the styling from another within the same notebook or page. You can read more about the use of UUIDs in Optimization.
We can see an example of the HTML by calling the .to_html() method.
End of explanation
df4 = pd.DataFrame([['text']])
df4.style.applymap(lambda x: 'color:green;')\
.applymap(lambda x: 'color:red;')
df4.style.applymap(lambda x: 'color:red;')\
.applymap(lambda x: 'color:green;')
Explanation: CSS Hierarchies
The examples have shown that when CSS styles overlap, the one that comes last in the HTML render, takes precedence. So the following yield different results:
End of explanation
df4.style.set_uuid('a_')\
.set_table_styles([{'selector': 'td', 'props': 'color:red;'}])\
.applymap(lambda x: 'color:green;')
Explanation: This is only true for CSS rules that are equivalent in hierarchy, or importance. You can read more about CSS specificity here but for our purposes it suffices to summarize the key points:
A CSS importance score for each HTML element is derived by starting at zero and adding:
1000 for an inline style attribute
100 for each ID
10 for each attribute, class or pseudo-class
1 for each element name or pseudo-element
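As a rough sketch of that arithmetic (a hypothetical helper, not part of pandas), the selectors discussed below score as follows:
python
def specificity(inline=False, n_ids=0, n_classes=0, n_elements=0):
    # weights from the list above: 1000 inline, 100 per ID, 10 per class, 1 per element
    return 1000 * inline + 100 * n_ids + 10 * n_classes + 1 * n_elements
specificity(n_ids=1, n_elements=1)   # '#T_a_ td'       -> 101
specificity(n_ids=1)                 # '#T_a_row0_col0' -> 100
specificity(n_ids=1, n_classes=1)    # '#T_b_ .cls-1'   -> 110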
Let's use this to describe the action of the following configurations
End of explanation
df4.style.set_uuid('b_')\
.set_table_styles([{'selector': 'td', 'props': 'color:red;'},
{'selector': '.cls-1', 'props': 'color:blue;'}])\
.applymap(lambda x: 'color:green;')\
.set_td_classes(pd.DataFrame([['cls-1']]))
Explanation: This text is red because the generated selector #T_a_ td is worth 101 (ID plus element), whereas #T_a_row0_col0 is only worth 100 (ID), so is considered inferior even though in the HTML it comes after the previous.
End of explanation
df4.style.set_uuid('c_')\
.set_table_styles([{'selector': 'td', 'props': 'color:red;'},
{'selector': '.cls-1', 'props': 'color:blue;'},
{'selector': 'td.data', 'props': 'color:yellow;'}])\
.applymap(lambda x: 'color:green;')\
.set_td_classes(pd.DataFrame([['cls-1']]))
Explanation: In the above case the text is blue because the selector #T_b_ .cls-1 is worth 110 (ID plus class), which takes precedence.
End of explanation
df4.style.set_uuid('d_')\
.set_table_styles([{'selector': 'td', 'props': 'color:red;'},
{'selector': '.cls-1', 'props': 'color:blue;'},
{'selector': 'td.data', 'props': 'color:yellow;'}])\
.applymap(lambda x: 'color:green !important;')\
.set_td_classes(pd.DataFrame([['cls-1']]))
Explanation: Now we have created another table style, and this time the selector #T_c_ td.data (ID plus element plus class) gets bumped up to 111.
If your style fails to be applied, and it's really frustrating, try the !important trump card.
End of explanation
from jinja2 import Environment, ChoiceLoader, FileSystemLoader
from IPython.display import HTML
from pandas.io.formats.style import Styler
Explanation: Finally got that green text after all!
Extensibility
The core of pandas is, and will remain, its "high-performance, easy-to-use data structures".
With that in mind, we hope that DataFrame.style accomplishes two goals
Provide an API that is pleasing to use interactively and is "good enough" for many tasks
Provide the foundations for dedicated libraries to build on
If you build a great library on top of this, let us know and we'll link to it.
Subclassing
If the default template doesn't quite suit your needs, you can subclass Styler and extend or override the template.
We'll show an example of extending the default template to insert a custom header before each table.
End of explanation
with open("templates/myhtml.tpl") as f:
print(f.read())
Explanation: We'll use the following template:
End of explanation
class MyStyler(Styler):
env = Environment(
loader=ChoiceLoader([
FileSystemLoader("templates"), # contains ours
Styler.loader, # the default
])
)
template_html_table = env.get_template("myhtml.tpl")
Explanation: Now that we've created a template, we need to set up a subclass of Styler that
knows about it.
End of explanation
MyStyler(df3)
Explanation: Notice that we include the original loader in our environment's loader.
That's because we extend the original template, so the Jinja environment needs
to be able to find it.
Now we can use that custom styler. Its __init__ takes a DataFrame.
End of explanation
HTML(MyStyler(df3).to_html(table_title="Extending Example"))
Explanation: Our custom template accepts a table_title keyword. We can provide the value in the .to_html method.
End of explanation
EasyStyler = Styler.from_custom_template("templates", "myhtml.tpl")
HTML(EasyStyler(df3).to_html(table_title="Another Title"))
Explanation: For convenience, we provide the Styler.from_custom_template method that does the same as the custom subclass.
End of explanation
with open("templates/html_style_structure.html") as f:
style_structure = f.read()
HTML(style_structure)
Explanation: Template Structure
Here's the template structure for both the style generation template and the table generation template:
Style template:
End of explanation
with open("templates/html_table_structure.html") as f:
table_structure = f.read()
HTML(table_structure)
Explanation: Table template:
End of explanation
# # Hack to get the same style in the notebook as the
# # main site. This is hidden in the docs.
# from IPython.display import HTML
# with open("themes/nature_with_gtoc/static/nature.css_t") as f:
# css = f.read()
# HTML('<style>{}</style>'.format(css))
Explanation: See the template in the GitHub repo for more details.
End of explanation |
13,809 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Generative Adversarial Network
In this notebook, we'll be building a generative adversarial network (GAN) trained on the MNIST dataset. From this, we'll be able to generate new handwritten digits!
GANs were first proposed in 2014 by Ian Goodfellow and others in Yoshua Bengio's lab. Since then, GANs have exploded in popularity. Here are a few examples to check out
Step1: Model Inputs
First we need to create the inputs for our graph. We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input inputs_real and the generator input inputs_z. We'll assign them the appropriate sizes for each of the networks.
Exercise
Step2: Generator network
Here we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values.
Variable Scope
Here we need to use tf.variable_scope for two reasons. Firstly, we're going to make sure all the variable names start with generator. Similarly, we'll prepend discriminator to the discriminator variables. This will help out later when we're training the separate networks.
We could just use tf.name_scope to set the names, but we also want to reuse these networks with different inputs. For the generator, we're going to train it, but also sample from it as we're training and after training. The discriminator will need to share variables between the fake and real input images. So, we can use the reuse keyword for tf.variable_scope to tell TensorFlow to reuse the variables instead of creating new ones if we build the graph again.
To use tf.variable_scope, you use a with statement
Step3: Discriminator
The discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer.
Exercise
Step4: Hyperparameters
Step5: Build network
Now we're building the network from the functions defined above.
First is to get our inputs, input_real, input_z from model_inputs using the sizes of the input and z.
Then, we'll create the generator, generator(input_z, input_size). This builds the generator with the appropriate input and output sizes.
Then the discriminators. We'll build two of them, one for real data and one for fake data. Since we want the weights to be the same for both real and fake data, we need to reuse the variables. For the fake data, we're getting it from the generator as g_model. So the real data discriminator is discriminator(input_real) while the fake discriminator is discriminator(g_model, reuse=True).
Exercise
Step6: Discriminator and Generator Losses
Now we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_loss_real + d_loss_fake. The losses will be sigmoid cross-entropies, which we can get with tf.nn.sigmoid_cross_entropy_with_logits. We'll also wrap that in tf.reduce_mean to get the mean for all the images in the batch. So the losses will look something like
python
tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
For the real image logits, we'll use d_logits_real which we got from the discriminator in the cell above. For the labels, we want them to be all ones, since these are all real images. To help the discriminator generalize better, the labels are reduced a bit from 1.0 to 0.9, for example, using the parameter smooth. This is known as label smoothing, typically used with classifiers to improve performance. In TensorFlow, it looks something like labels = tf.ones_like(tensor) * (1 - smooth)
The discriminator loss for the fake data is similar. The logits are d_logits_fake, which we got from passing the generator output to the discriminator. These fake logits are used with labels of all zeros. Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.
Finally, the generator losses are using d_logits_fake, the fake image logits. But, now the labels are all ones. The generator is trying to fool the discriminator, so it wants the discriminator to output ones for fake images.
Exercise
Step7: Optimizers
We want to update the generator and discriminator variables separately. So we need to get the variables for each part and build optimizers for the two parts. To get all the trainable variables, we use tf.trainable_variables(). This creates a list of all the variables we've defined in our graph.
For the generator optimizer, we only want the generator variables. Our past selves were nice and used a variable scope to start all of our generator variable names with generator. So, we just need to iterate through the list from tf.trainable_variables() and keep variables that start with generator. Each variable object has an attribute name which holds the name of the variable as a string (var.name == 'weights_0' for instance).
We can do something similar with the discriminator. All the variables in the discriminator start with discriminator.
Then, in the optimizer we pass the variable lists to the var_list keyword argument of the minimize method. This tells the optimizer to only update the listed variables. Something like tf.train.AdamOptimizer().minimize(loss, var_list=var_list) will only train the variables in var_list.
Exercise
Step8: Training
Step9: Training loss
Here we'll check out the training losses for the generator and discriminator.
Step10: Generator samples from training
Here we can view samples of images from the generator. First we'll look at images taken while training.
Step11: These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 5, 7, 3, 0, 9. Since this is just a sample, it isn't representative of the full range of images this generator can make.
Step12: Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion!
Step13: It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number like structures appear out of the noise. Looks like 1, 9, and 8 show up first. Then, it learns 5 and 3.
Sampling from the generator
We can also get completely new images from the generator by using the checkpoint we saved after training. We just need to pass in a new latent vector $z$ and we'll get new samples! | Python Code:
%matplotlib inline
import pickle as pkl
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data')
Explanation: Generative Adversarial Network
In this notebook, we'll be building a generative adversarial network (GAN) trained on the MNIST dataset. From this, we'll be able to generate new handwritten digits!
GANs were first proposed in 2014 by Ian Goodfellow and others in Yoshua Bengio's lab. Since then, GANs have exploded in popularity. Here are a few examples to check out:
Pix2Pix
CycleGAN
A whole list
The idea behind GANs is that you have two networks, a generator $G$ and a discriminator $D$, competing against each other. The generator makes fake data to pass to the discriminator. The discriminator also sees real data and predicts if the data it's received is real or fake. The generator is trained to fool the discriminator, it wants to output data that looks as close as possible to real data. And the discriminator is trained to figure out which data is real and which is fake. What ends up happening is that the generator learns to make data that is indistiguishable from real data to the discriminator.
The general structure of a GAN is shown in the diagram above, using MNIST images as data. The latent sample is a random vector the generator uses to construct its fake images. As the generator learns through training, it figures out how to map these random vectors to recognizable images that can fool the discriminator.
The output of the discriminator is a sigmoid function, where 0 indicates a fake image and 1 indicates a real image. If you're interested only in generating new images, you can throw out the discriminator after training. Now, let's see how we build this thing in TensorFlow.
End of explanation
def model_inputs(real_dim, z_dim):
inputs_real = tf.placeholder(tf.float32, shape=(None, real_dim), name="inputs_real")
inputs_z = tf.placeholder(tf.float32, shape=(None, z_dim), name="inputs_z")
return inputs_real, inputs_z
Explanation: Model Inputs
First we need to create the inputs for our graph. We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input inputs_real and the generator input inputs_z. We'll assign them the appropriate sizes for each of the networks.
Exercise: Finish the model_inputs function below. Create the placeholders for inputs_real and inputs_z using the input sizes real_dim and z_dim respectively.
End of explanation
from tensorflow.contrib.layers.python.layers import initializers
SEED = 218
def generator(z, out_dim, n_units=128, reuse=False, alpha=0.01):
''' Build the generator network.
Arguments
---------
z : Input tensor for the generator
out_dim : Shape of the generator output
n_units : Number of units in hidden layer
reuse : Reuse the variables with tf.variable_scope
alpha : leak parameter for leaky ReLU
Returns
-------
out:
'''
with tf.variable_scope('generator', reuse=reuse): # finish this
# Hidden layer
alpha = tf.constant(alpha, name='alpha')
h1 = tf.layers.dense(
z,
n_units,
activation=None,
kernel_initializer=initializers.xavier_initializer(seed=SEED)
)
# Leaky ReLU
h1 = tf.maximum(h1, tf.multiply(alpha, h1))
# Logits and tanh output
logits = tf.layers.dense(
h1,
out_dim,
activation=None,
kernel_initializer=initializers.xavier_initializer(seed=SEED)
)
out = tf.nn.tanh(logits)
return out
Explanation: Generator network
Here we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values.
Variable Scope
Here we need to use tf.variable_scope for two reasons. Firstly, we're going to make sure all the variable names start with generator. Similarly, we'll prepend discriminator to the discriminator variables. This will help out later when we're training the separate networks.
We could just use tf.name_scope to set the names, but we also want to reuse these networks with different inputs. For the generator, we're going to train it, but also sample from it as we're training and after training. The discriminator will need to share variables between the fake and real input images. So, we can use the reuse keyword for tf.variable_scope to tell TensorFlow to reuse the variables instead of creating new ones if we build the graph again.
To use tf.variable_scope, you use a with statement:
python
with tf.variable_scope('scope_name', reuse=False):
# code here
Here's more from the TensorFlow documentation to get another look at using tf.variable_scope.
Leaky ReLU
TensorFlow doesn't provide an operation for leaky ReLUs, so we'll need to make one. For this you can just take the outputs from a linear fully connected layer and pass them to tf.maximum. Typically, a parameter alpha sets the magnitude of the output for negative values. So, the output for negative input (x) values is alpha*x, and the output for positive x is x:
$$
f(x) = max(\alpha * x, x)
$$
Tanh Output
The generator has been found to perform the best with $tanh$ for the generator output. This means that we'll have to rescale the MNIST images to be between -1 and 1, instead of 0 and 1.
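As a quick sketch of those two points, a standalone leaky ReLU helper and the rescaling step might look like this (images is assumed to be any array of MNIST pixels in [0, 1]):
python
def leaky_relu(x, alpha=0.01):
    # returns x for positive inputs and alpha * x for negative ones
    return tf.maximum(alpha * x, x)
rescaled = images * 2 - 1  # map [0, 1] pixels to [-1, 1] for the tanh generator output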
Exercise: Implement the generator network in the function below. You'll need to return the tanh output. Make sure to wrap your code in a variable scope, with 'generator' as the scope name, and pass the reuse keyword argument from the function to tf.variable_scope.
End of explanation
def discriminator(x, n_units=128, reuse=False, alpha=0.01):
''' Build the discriminator network.
Arguments
---------
x : Input tensor for the discriminator
n_units: Number of units in hidden layer
reuse : Reuse the variables with tf.variable_scope
alpha : leak parameter for leaky ReLU
Returns
-------
out, logits:
'''
with tf.variable_scope('discriminator', reuse=reuse): # finish this
# Hidden layer
alpha = tf.constant(alpha, name='alpha')
h1 = tf.layers.dense(
x,
n_units,
activation=None,
kernel_initializer=initializers.xavier_initializer(seed=SEED)
)
# Leaky ReLU
h1 = tf.maximum(h1, tf.multiply(alpha, h1))
logits = tf.layers.dense(
h1,
1,
activation=None,
kernel_initializer=initializers.xavier_initializer(seed=SEED)
)
out = tf.nn.sigmoid(logits)
return out, logits
Explanation: Discriminator
The discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer.
Exercise: Implement the discriminator network in the function below. Same as above, you'll need to return both the logits and the sigmoid output. Make sure to wrap your code in a variable scope, with 'discriminator' as the scope name, and pass the reuse keyword argument from the function arguments to tf.variable_scope.
End of explanation
# Size of input image to discriminator
input_size = 784 # 28x28 MNIST images flattened
# Size of latent vector to generator
z_size = 100
# Sizes of hidden layers in generator and discriminator
g_hidden_size = 128
d_hidden_size = 128
# Leak factor for leaky ReLU
alpha = 0.01
# Label smoothing
smooth = 0.1
Explanation: Hyperparameters
End of explanation
tf.reset_default_graph()
# Create our input placeholders
input_real, input_z = model_inputs(real_dim=input_size, z_dim=z_size)
# Generator network here
g_model = generator(input_z, input_size, g_hidden_size, alpha=alpha)
# g_model is the generator output
# Discriminator network here
d_model_real, d_logits_real = discriminator(input_real, n_units=d_hidden_size, alpha=alpha)
d_model_fake, d_logits_fake = discriminator(g_model, n_units=d_hidden_size, alpha=alpha, reuse=True)
Explanation: Build network
Now we're building the network from the functions defined above.
First is to get our inputs, input_real, input_z from model_inputs using the sizes of the input and z.
Then, we'll create the generator, generator(input_z, input_size). This builds the generator with the appropriate input and output sizes.
Then the discriminators. We'll build two of them, one for real data and one for fake data. Since we want the weights to be the same for both real and fake data, we need to reuse the variables. For the fake data, we're getting it from the generator as g_model. So the real data discriminator is discriminator(input_real) while the fake discriminator is discriminator(g_model, reuse=True).
Exercise: Build the network from the functions you defined earlier.
End of explanation
# Calculate losses
real_labels = tf.ones_like(d_logits_real) * (1 - smooth)
d_loss_real = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real, labels=real_labels))
fake_labels = tf.zeros_like(d_logits_fake)
d_loss_fake = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=fake_labels))
d_loss = d_loss_real + d_loss_fake
g_loss = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.ones_like(d_logits_fake)))
Explanation: Discriminator and Generator Losses
Now we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_loss_real + d_loss_fake. The losses will be sigmoid cross-entropies, which we can get with tf.nn.sigmoid_cross_entropy_with_logits. We'll also wrap that in tf.reduce_mean to get the mean for all the images in the batch. So the losses will look something like
python
tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
For the real image logits, we'll use d_logits_real which we got from the discriminator in the cell above. For the labels, we want them to be all ones, since these are all real images. To help the discriminator generalize better, the labels are reduced a bit from 1.0 to 0.9, for example, using the parameter smooth. This is known as label smoothing, typically used with classifiers to improve performance. In TensorFlow, it looks something like labels = tf.ones_like(tensor) * (1 - smooth)
The discriminator loss for the fake data is similar. The logits are d_logits_fake, which we got from passing the generator output to the discriminator. These fake logits are used with labels of all zeros. Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.
Finally, the generator losses are using d_logits_fake, the fake image logits. But, now the labels are all ones. The generator is trying to fool the discriminator, so it wants the discriminator to output ones for fake images.
Exercise: Calculate the losses for the discriminator and the generator. There are two discriminator losses, one for real images and one for fake images. For the real image loss, use the real logits and (smoothed) labels of ones. For the fake image loss, use the fake logits with labels of all zeros. The total discriminator loss is the sum of those two losses. Finally, the generator loss again uses the fake logits from the discriminator, but this time the labels are all ones because the generator wants to fool the discriminator.
End of explanation
# Optimizers
learning_rate = 0.002
# Get the trainable_variables, split into G and D parts
t_vars = tf.trainable_variables()
g_vars = [var for var in t_vars if var.name.startswith('generator')]
d_vars = [var for var in t_vars if var.name.startswith('discriminator')]
d_train_opt = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(d_loss, var_list=d_vars)
g_train_opt = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(g_loss, var_list=g_vars)
Explanation: Optimizers
We want to update the generator and discriminator variables separately. So we need to get the variables for each part and build optimizers for the two parts. To get all the trainable variables, we use tf.trainable_variables(). This creates a list of all the variables we've defined in our graph.
For the generator optimizer, we only want the generator variables. Our past selves were nice and used a variable scope to start all of our generator variable names with generator. So, we just need to iterate through the list from tf.trainable_variables() and keep variables that start with generator. Each variable object has an attribute name which holds the name of the variable as a string (var.name == 'weights_0' for instance).
We can do something similar with the discriminator. All the variables in the discriminator start with discriminator.
Then, in the optimizer we pass the variable lists to the var_list keyword argument of the minimize method. This tells the optimizer to only update the listed variables. Something like tf.train.AdamOptimizer().minimize(loss, var_list=var_list) will only train the variables in var_list.
Exercise: Below, implement the optimizers for the generator and discriminator. First you'll need to get a list of trainable variables, then split that list into two lists, one for the generator variables and another for the discriminator variables. Finally, using AdamOptimizer, create an optimizer for each network that updates the network variables separately.
End of explanation
batch_size = 100
epochs = 100
samples = []
losses = []
saver = tf.train.Saver(var_list = g_vars)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images, reshape and rescale to pass to D
batch_images = batch[0].reshape((batch_size, 784))
batch_images = batch_images*2 - 1
# Sample random noise for G
batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))
# Run optimizers
_ = sess.run(d_train_opt, feed_dict={input_real: batch_images, input_z: batch_z})
_ = sess.run(g_train_opt, feed_dict={input_z: batch_z})
# At the end of each epoch, get the losses and print them out
train_loss_d = sess.run(d_loss, {input_z: batch_z, input_real: batch_images})
train_loss_g = g_loss.eval({input_z: batch_z})
print("Epoch {}/{}...".format(e+1, epochs),
"Discriminator Loss: {:.4f}...".format(train_loss_d),
"Generator Loss: {:.4f}".format(train_loss_g))
# Save losses to view after training
losses.append((train_loss_d, train_loss_g))
# Sample from generator as we're training for viewing afterwards
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, n_units=g_hidden_size, reuse=True, alpha=alpha),
feed_dict={input_z: sample_z})
samples.append(gen_samples)
saver.save(sess, './checkpoints/generator.ckpt')
# Save training generator samples
with open('train_samples.pkl', 'wb') as f:
pkl.dump(samples, f)
Explanation: Training
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator')
plt.plot(losses.T[1], label='Generator')
plt.title("Training Losses")
plt.legend()
Explanation: Training loss
Here we'll check out the training losses for the generator and discriminator.
End of explanation
def view_samples(epoch, samples):
fig, axes = plt.subplots(figsize=(7,7), nrows=4, ncols=4, sharey=True, sharex=True)
for ax, img in zip(axes.flatten(), samples[epoch]):
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
im = ax.imshow(img.reshape((28,28)), cmap='Greys_r')
return fig, axes
# Load samples from generator taken while training
with open('train_samples.pkl', 'rb') as f:
samples = pkl.load(f)
Explanation: Generator samples from training
Here we can view samples of images from the generator. First we'll look at images taken while training.
End of explanation
_ = view_samples(-1, samples)
Explanation: These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 5, 7, 3, 0, 9. Since this is just a sample, it isn't representative of the full range of images this generator can make.
End of explanation
rows, cols = 10, 6
fig, axes = plt.subplots(figsize=(7,12), nrows=rows, ncols=cols, sharex=True, sharey=True)
for sample, ax_row in zip(samples[::int(len(samples)/rows)], axes):
for img, ax in zip(sample[::int(len(sample)/cols)], ax_row):
ax.imshow(img.reshape((28,28)), cmap='Greys_r')
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
Explanation: Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion!
End of explanation
saver = tf.train.Saver(var_list=g_vars)
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, n_units=g_hidden_size, reuse=True, alpha=alpha),
feed_dict={input_z: sample_z})
view_samples(0, [gen_samples])
Explanation: It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number like structures appear out of the noise. Looks like 1, 9, and 8 show up first. Then, it learns 5 and 3.
Sampling from the generator
We can also get completely new images from the generator by using the checkpoint we saved after training. We just need to pass in a new latent vector $z$ and we'll get new samples!
End of explanation |
13,810 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Basic Sanity Check for PartialFlow Training
This notebook is a toy example for comparing the training of a network with and without PartialFlow involved. We define two small neural networks with the exact same architecture and train them on MNIST - the first one as usual in Tensorflow, the second one split into multiple sections. To make a comparison possible, the networks are small enough to be trained on a single GPU without any splits.
We compare the network's losses as well as the training times. The results shown here have been obtained from running the trainings on a NVIDIA GeForce GTX 1070 with 8GB memory.
MNIST Data
Step1: Define Network Architectures
As already mentioned, the network architectures are identical. If you want to compare a vanilla training to one with more splits, just add sections to the network.
Step2: Setup
Next, we set up both networks, prepare the training, and initialize the session.
Step3: Training
We run 500 training cycles for both networks and keep track of the loss as well as the duration of each training operation.
Step4: Evaluation
Step5: If PartialFlow works correctly, the losses should be very similar for both networks. The training processes mainly differ in the initializations and the order of the inputs.
Step6: PartialFlow trades additional computation time for lower memory consumption. The time overhead depends on the number and position of splits in the graph. Here, we compare the duration of update operations. | Python Code:
import tensorflow as tf
import numpy as np
# load MNIST data
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
train_images = np.reshape(mnist.train.images, [-1, 28, 28, 1])
train_labels = mnist.train.labels
test_images = np.reshape(mnist.test.images, [-1, 28, 28, 1])
test_labels = mnist.test.labels
Explanation: Basic Sanity Check for PartialFlow Training
This notebook is a toy example for comparing the training of a network with and without PartialFlow involved. We define two small neural networks with the exact same architecture and train them on MNIST - the first one as usual in Tensorflow, the second one split into multiple sections. To make a comparison possible, the networks are small enough to be trained on a single GPU without any splits.
We compare the network's losses as well as the training times. The results shown here have been obtained from running the trainings on a NVIDIA GeForce GTX 1070 with 8GB memory.
MNIST Data
End of explanation
from BasicNets import BatchnormNet
def buildSectionNet(sm):
batch_size = 250
image, label = tf.train.slice_input_producer([train_images, train_labels])
image_batch, label_batch = tf.train.batch([image, label], batch_size=batch_size)
# flag for batch normalization layers
is_training = tf.placeholder(name='is_training', shape=[], dtype=tf.bool)
net = BatchnormNet(is_training, image_batch)
# first network section with initial convolution and three residual blocks
with sm.new_section():
with tf.variable_scope('initial_conv'):
stream = net.add_conv(net._inputs, n_filters=16)
stream = net.add_bn(stream)
stream = tf.nn.relu(stream)
with tf.variable_scope('scale0'):
for i in range(3):
with tf.variable_scope('block_%d' % i):
stream = net.res_block(stream)
# second network section strided convolution to decrease the input resolution
#with sm.new_section():
with tf.variable_scope('scale1'):
stream = net.res_block(stream, filters_factor=2, first_stride=2)
for i in range(2):
with tf.variable_scope('block_%d' % i):
stream = net.res_block(stream)
# third network section
with sm.new_section():
with tf.variable_scope('scale2'):
stream = net.res_block(stream, filters_factor=2, first_stride=2)
for i in range(4):
with tf.variable_scope('block_%d' % i):
stream = net.res_block(stream)
# fourth network section with final pooling and cross-entropy loss
#with sm.new_section():
with tf.variable_scope('final_pool'):
# global average pooling over image dimensions
stream = tf.reduce_mean(stream, axis=2)
stream = tf.reduce_mean(stream, axis=1)
# final conv for classification
stream = net.add_fc(stream, out_dims=10)
with tf.variable_scope('loss'):
loss = tf.nn.softmax_cross_entropy_with_logits(stream, label_batch)
loss = tf.reduce_mean(loss)
return loss, is_training
def buildBasicNet():
batch_size = 250
image, label = tf.train.slice_input_producer([train_images, train_labels])
image_batch, label_batch = tf.train.batch([image, label], batch_size=batch_size)
# flag for batch normalization layers
is_training = tf.placeholder(name='is_training', shape=[], dtype=tf.bool)
net = BatchnormNet(is_training, image_batch)
# first network section with initial convolution and three residual blocks
with tf.variable_scope('initial_conv'):
stream = net.add_conv(net._inputs, n_filters=16)
stream = net.add_bn(stream)
stream = tf.nn.relu(stream)
with tf.variable_scope('scale0'):
for i in range(3):
with tf.variable_scope('block_%d' % i):
stream = net.res_block(stream)
# second network section strided convolution to decrease the input resolution
with tf.variable_scope('scale1'):
stream = net.res_block(stream, filters_factor=2, first_stride=2)
for i in range(2):
with tf.variable_scope('block_%d' % i):
stream = net.res_block(stream)
# third network section
with tf.variable_scope('scale2'):
stream = net.res_block(stream, filters_factor=2, first_stride=2)
for i in range(4):
with tf.variable_scope('block_%d' % i):
stream = net.res_block(stream)
# fourth network section with final pooling and cross-entropy loss
with tf.variable_scope('final_pool'):
# global average pooling over image dimensions
stream = tf.reduce_mean(stream, axis=2)
stream = tf.reduce_mean(stream, axis=1)
# final conv for classification
stream = net.add_fc(stream, out_dims=10)
with tf.variable_scope('loss'):
loss = tf.nn.softmax_cross_entropy_with_logits(stream, label_batch)
loss = tf.reduce_mean(loss)
return loss, is_training
Explanation: Define Network Architectures
As already mentioned, the network architectures are identical. If you want to compare a vanilla training to one with more splits, just add sections to the network.
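For example, re-enabling the split that is commented out around 'scale1' in the code above is just a matter of nesting that scope in a new section (a sketch; sm is the GraphSectionManager passed into the builder):
python
with sm.new_section():
    with tf.variable_scope('scale1'):
        stream = net.res_block(stream, filters_factor=2, first_stride=2)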
End of explanation
from partialflow import GraphSectionManager
# construct network with splits
sm = GraphSectionManager()
with tf.variable_scope('section_net'):
loss_sec, is_training_sec = buildSectionNet(sm)
opt = tf.train.AdamOptimizer(learning_rate=0.0001)
sm.add_training_ops(opt, loss_sec, var_list=tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES), verbose=False)
sm.prepare_training()
# construct same network without splits
with tf.variable_scope('basic_net'):
loss_basic, is_training_basic = buildBasicNet()
opt = tf.train.AdamOptimizer(learning_rate=0.0001)
grads = opt.compute_gradients(loss_basic, var_list=tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES))
train_op = opt.apply_gradients(grads)
# initialize the session
sess = tf.Session()
init = tf.global_variables_initializer()
sess.run(init)
_ = tf.train.start_queue_runners(sess=sess)
Explanation: Setup
Next, we set up both networks, prepare the training, and initialize the session.
End of explanation
from time import time
N = 500
losses = np.zeros([2,N], dtype=np.float32)
times = np.zeros([2,N], dtype=np.float32)
for i in range(N):
start = time()
losses[0, i] = sm.run_full_cycle(sess, fetches=loss_sec, basic_feed={is_training_sec:True})
times[0, i] = time() - start
start = time()
_, losses[1, i] = sess.run([train_op, loss_basic], feed_dict={is_training_basic:True})
times[1, i] = time() - start
if i%100 == 0:
print('Processed %d/%d batches' % (i,N))
Explanation: Training
We run 500 training cycles for both networks and keep track of the loss as well as the duration of each training operation.
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
Explanation: Evaluation
End of explanation
plt.plot(losses.T)
plt.xlabel('Batch')
plt.ylabel('Loss')
_ = plt.legend(['with PartialFlow', 'without PartialFlow'])
Explanation: If PartialFlow works correctly, the losses should be very similar for both networks. The training processes mainly differ in the initializations and the order of the inputs.
End of explanation
plt.plot(times.T)
plt.xlabel('Batch')
plt.ylabel('Duration of Batch [s]')
_ = plt.legend(['with PartialFlow', 'without PartialFlow'])
plt.plot(times[0]/times[1] - 1)
plt.xlabel('Batch')
_ = plt.ylabel('Relative Overhead')
print('Mean relative overhead: %.5f' % (np.mean(times[0]/times[1])-1))
Explanation: PartialFlow trades additional computation time for lower memory consumption. The time overhead depends on the number and position of splits in the graph. Here, we compare the duration of update operations.
End of explanation |
13,811 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tracking mutation frequencies
Step1: Run a simulation
Step2: Group mutation trajectories by position and effect size
Max mutation frequencies
Step3: The only fixation has an 'esize' $> 0$, which means that it was positively selected.
Frequency trajectory of fixations | Python Code:
%matplotlib inline
%pylab inline
import fwdpy as fp
import pandas as pd
import matplotlib
import matplotlib.pyplot as plt
import copy
Explanation: Tracking mutation frequencies
End of explanation
nregions = [fp.Region(0,1,1),fp.Region(2,3,1)]
sregions = [fp.ExpS(1,2,1,-0.1),fp.ExpS(1,2,0.01,0.001)]
rregions = [fp.Region(0,3,1)]
rng = fp.GSLrng(101)
popsizes = np.array([1000],dtype=np.uint32)
popsizes=np.tile(popsizes,10000)
#Initialize a vector with 1 population of size N = 1,000
pops=fp.SpopVec(1,1000)
#This sampler object will record selected mutation
#frequencies over time. A sampler gets the length
#of pops as a constructor argument because you
#need a different sampler object in memory for
#each population.
sampler=fp.FreqSampler(len(pops))
#Record mutation frequencies every generation
#The function evolve_regions sampler takes any
#of fwdpy's temporal samplers and applies them.
#For users familiar with C++, custom samplers will be written,
#and we plan to allow for custom samplers to be written primarily
#using Cython, but we are still experimenting with how best to do so.
rawTraj=fp.evolve_regions_sampler(rng,pops,sampler,
popsizes[0:],0.001,0.001,0.001,
nregions,sregions,rregions,
#The one means we sample every generation.
1)
rawTraj = [i for i in sampler]
#This example has only 1 set of trajectories, so let's make a variable for thet
#single replicate
traj=rawTraj[0]
print traj.head()
print traj.tail()
print traj.freq.max()
Explanation: Run a simulation
End of explanation
mfreq = traj.groupby(['pos','esize']).max().reset_index()
#Print out info for all mutations that hit a frequency of 1 (e.g., fixed)
mfreq[mfreq['freq']==1]
Explanation: Group mutation trajectories by position and effect size
Max mutation frequencies
End of explanation
#Get positions of mutations that hit q = 1
mpos=mfreq[mfreq['freq']==1]['pos']
#Frequency trajectories of fixations
fig = plt.figure()
ax = plt.subplot(111)
plt.xlabel("Time (generations)")
plt.ylabel("Mutation frequency")
ax.set_xlim(traj['generation'].min(),traj['generation'].max())
for i in mpos:
plt.plot(traj[traj['pos']==i]['generation'],traj[traj['pos']==i]['freq'])
#Let's get histogram of effect sizes for all mutations that did not fix
fig = plt.figure()
ax = plt.subplot(111)
plt.xlabel(r'$s$ (selection coefficient)')
plt.ylabel("Number of mutations")
mfreq[mfreq['freq']<1.0]['esize'].hist()
Explanation: The only fixation has an 'esize' $> 0$, which means that it was positively selected.
Frequency trajectory of fixations
End of explanation |
13,812 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Do some initial testing and analysis
Step1: Import all data (data was previously cleaning in other notebooks)
Step2: Now I have one dict with metro stations and one with bike stations
Step3: step 1) create list of bike stations along each line
Compare the distances of each bike station to each metro station of each line. If the bike station is "close" to a metro station, it should be added to a list.
* Build test example with RD line
Step5: Iterate through each metro line, calculating the distance between each station of that line to each bikeshare station.
Use set to drop duplicates
make the below code a function that can be used to generate lists of stations based on different distances
Step6: I now have a dictionary of a list of bike stations (values) within 0.25 miles of each metro line (key).
Save it with pickle for use in other notebooks
Step7: step 2)
how many bike stations are considered close to each line? I need to make sure the numbers are appropriate for doing statistical analysis
Step8: About 10% to 15% of bike stations are within 0.25 miles of each metro line
test my function | Python Code:
import pickle
from geopy.distance import vincenty
Explanation: Do some initial testing and analysis
End of explanation
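One caveat worth noting (not from the original notebook): newer geopy releases deprecate vincenty in favour of geodesic. On a recent install the equivalent call would look like the commented sketch below; the coordinates shown are arbitrary placeholders.
# Hedged sketch only -- geodesic is assumed to be available in newer geopy versions:
# from geopy.distance import geodesic
# print(geodesic((38.9072, -77.0369), (39.0840, -77.1528)).miles)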
station_data = pickle.load( open( "station_data.p", "rb" ) )
bike_location = pickle.load( open( "bike_location.p", "rb" ) )
Explanation: Import all data (data was previously cleaning in other notebooks)
End of explanation
print(station_data['RD']['Bethesda'])
print(bike_location['Silver Spring Metro/Colesville Rd & Wayne Ave'])
Explanation: Now I have one dict with metro stations and one with bike stations
End of explanation
vincenty(station_data['RD']['Silver Spring'], bike_location['11th & O St NW']).miles
for key_bike in bike_location:
dist = vincenty(station_data['RD']['Silver Spring'], bike_location[key_bike]).miles
if dist <= 0.3:
print([key_bike ,dist])
Explanation: step 1) create list of bike stations along each line
Compare the distances of each bike station to each metro station of each line. If the bike station is "close" to a metro station, it should be added to a list.
* Build test example with RD line
End of explanation
def close_stations(distance):
    """Return a dict of bikeshare stations close to each metro stop,
    based on the supplied distance in miles."""
lines = ['RD', 'YL', 'GR','BL', 'OR', 'SV']
bikes_close = dict()
for ii in range(len(lines)):
bikes_temp = []
for key_metro in station_data[lines[ii]]:
for key_bike in bike_location:
dist = vincenty(station_data[lines[ii]][key_metro], bike_location[key_bike]).miles
if dist <= distance:
bikes_temp.append(key_bike)
print([lines[ii], key_metro, key_bike, dist])
bikes_close[lines[ii]] = list(set(bikes_temp))
return bikes_close
lines = ['RD', 'YL', 'GR','BL', 'OR', 'SV']
bikes_close = dict()
for ii in range(len(lines)):
bikes_temp = []
for key_metro in station_data[lines[ii]]:
for key_bike in bike_location:
dist = vincenty(station_data[lines[ii]][key_metro], bike_location[key_bike]).miles
if dist <= 0.25:
bikes_temp.append(key_bike)
print([lines[ii], key_metro, key_bike, dist])
bikes_close[lines[ii]] = list(set(bikes_temp))
print(len(bikes_close['GR']))
print(bikes_close['GR'][:5])
Explanation: Iterate through each metro line, calculating the distance between each station of that line and each bikeshare station.
Use set to drop duplicates.
To do: turn the code below into a function that can generate station lists for different distance thresholds.
End of explanation
pickle.dump( bikes_close, open( "bikes_close.p", "wb" ) )
Explanation: I now have a dictionary of a list of bike stations (values) within 0.25 miles of each metro line (key).
Save it with pickle for use in other notebooks
End of explanation
for ii in bikes_close:
print(ii, len(bikes_close[ii]))
Explanation: step 2)
how many bike stations are considered close to each line? I need to make sure the numbers are appropriate for doing statistical analysis
End of explanation
fn_test = close_stations(0.1)
for ii in fn_test:
print(ii, len(fn_test[ii]))
fn_test
Explanation: About 10% to 15% of bike stations are within 0.25 miles of each metro line
test my function
End of explanation |
13,813 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: Filter
<table align="left" style="margin-right
Step2: Examples
In the following examples, we create a pipeline with a PCollection of produce with their icon, name, and duration.
Then, we apply Filter in multiple ways to filter out produce by their duration value.
Filter accepts a function that keeps elements that return True, and filters out the remaining elements.
Example 1
Step3: <table align="left" style="margin-right
Step4: <table align="left" style="margin-right
Step5: <table align="left" style="margin-right
Step6: <table align="left" style="margin-right
Step7: <table align="left" style="margin-right | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License")
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
Explanation: <a href="https://colab.research.google.com/github/apache/beam/blob/master/examples/notebooks/documentation/transforms/python/elementwise/filter-py.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab"/></a>
<table align="left"><td><a target="_blank" href="https://beam.apache.org/documentation/transforms/python/elementwise/filter"><img src="https://beam.apache.org/images/logos/full-color/name-bottom/beam-logo-full-color-name-bottom-100.png" width="32" height="32" />View the docs</a></td></table>
End of explanation
!pip install --quiet -U apache-beam
Explanation: Filter
<table align="left" style="margin-right:1em">
<td>
<a class="button" target="_blank" href="https://beam.apache.org/releases/pydoc/current/apache_beam.transforms.core.html#apache_beam.transforms.core.Filter"><img src="https://beam.apache.org/images/logos/sdks/python.png" width="32px" height="32px" alt="Pydoc"/> Pydoc</a>
</td>
</table>
<br/><br/><br/>
Given a predicate, filter out all elements that don't satisfy that predicate.
May also be used to filter based on an inequality with a given value based
on the comparison ordering of the element.
Setup
To run a code cell, you can click the Run cell button at the top left of the cell,
or select it and press Shift+Enter.
Try modifying a code cell and re-running it to see what happens.
To learn more about Colab, see
Welcome to Colaboratory!.
First, let's install the apache-beam module.
End of explanation
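Before the produce examples below, here is a minimal warm-up sketch (not part of the original notebook) showing Filter on plain integers; it keeps only the even numbers.
import apache_beam as beam

with beam.Pipeline() as pipeline:
  evens = (
      pipeline
      | beam.Create([1, 2, 3, 4, 5, 6])
      | 'Keep evens' >> beam.Filter(lambda x: x % 2 == 0)
      | beam.Map(print))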
import apache_beam as beam
def is_perennial(plant):
return plant['duration'] == 'perennial'
with beam.Pipeline() as pipeline:
perennials = (
pipeline
| 'Gardening plants' >> beam.Create([
{
'icon': '🍓', 'name': 'Strawberry', 'duration': 'perennial'
},
{
'icon': '🥕', 'name': 'Carrot', 'duration': 'biennial'
},
{
'icon': '🍆', 'name': 'Eggplant', 'duration': 'perennial'
},
{
'icon': '🍅', 'name': 'Tomato', 'duration': 'annual'
},
{
'icon': '🥔', 'name': 'Potato', 'duration': 'perennial'
},
])
| 'Filter perennials' >> beam.Filter(is_perennial)
| beam.Map(print))
Explanation: Examples
In the following examples, we create a pipeline with a PCollection of produce with their icon, name, and duration.
Then, we apply Filter in multiple ways to filter out produce by their duration value.
Filter accepts a function that keeps elements that return True, and filters out the remaining elements.
Example 1: Filtering with a function
We define a function is_perennial which returns True if the element's duration equals 'perennial', and False otherwise.
End of explanation
import apache_beam as beam
with beam.Pipeline() as pipeline:
perennials = (
pipeline
| 'Gardening plants' >> beam.Create([
{
'icon': '🍓', 'name': 'Strawberry', 'duration': 'perennial'
},
{
'icon': '🥕', 'name': 'Carrot', 'duration': 'biennial'
},
{
'icon': '🍆', 'name': 'Eggplant', 'duration': 'perennial'
},
{
'icon': '🍅', 'name': 'Tomato', 'duration': 'annual'
},
{
'icon': '🥔', 'name': 'Potato', 'duration': 'perennial'
},
])
| 'Filter perennials' >>
beam.Filter(lambda plant: plant['duration'] == 'perennial')
| beam.Map(print))
Explanation: <table align="left" style="margin-right:1em">
<td>
<a class="button" target="_blank" href="https://github.com/apache/beam/blob/master/sdks/python/apache_beam/examples/snippets/transforms/elementwise/filter.py"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" width="32px" height="32px" alt="View source code"/> View source code</a>
</td>
</table>
<br/><br/><br/>
Example 2: Filtering with a lambda function
We can also use lambda functions to simplify Example 1.
End of explanation
import apache_beam as beam
def has_duration(plant, duration):
return plant['duration'] == duration
with beam.Pipeline() as pipeline:
perennials = (
pipeline
| 'Gardening plants' >> beam.Create([
{
'icon': '🍓', 'name': 'Strawberry', 'duration': 'perennial'
},
{
'icon': '🥕', 'name': 'Carrot', 'duration': 'biennial'
},
{
'icon': '🍆', 'name': 'Eggplant', 'duration': 'perennial'
},
{
'icon': '🍅', 'name': 'Tomato', 'duration': 'annual'
},
{
'icon': '🥔', 'name': 'Potato', 'duration': 'perennial'
},
])
| 'Filter perennials' >> beam.Filter(has_duration, 'perennial')
| beam.Map(print))
Explanation: <table align="left" style="margin-right:1em">
<td>
<a class="button" target="_blank" href="https://github.com/apache/beam/blob/master/sdks/python/apache_beam/examples/snippets/transforms/elementwise/filter.py"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" width="32px" height="32px" alt="View source code"/> View source code</a>
</td>
</table>
<br/><br/><br/>
Example 3: Filtering with multiple arguments
You can pass functions with multiple arguments to Filter.
They are passed as additional positional arguments or keyword arguments to the function.
In this example, has_duration takes plant and duration as arguments.
End of explanation
import apache_beam as beam
with beam.Pipeline() as pipeline:
perennial = pipeline | 'Perennial' >> beam.Create(['perennial'])
perennials = (
pipeline
| 'Gardening plants' >> beam.Create([
{
'icon': '🍓', 'name': 'Strawberry', 'duration': 'perennial'
},
{
'icon': '🥕', 'name': 'Carrot', 'duration': 'biennial'
},
{
'icon': '🍆', 'name': 'Eggplant', 'duration': 'perennial'
},
{
'icon': '🍅', 'name': 'Tomato', 'duration': 'annual'
},
{
'icon': '🥔', 'name': 'Potato', 'duration': 'perennial'
},
])
| 'Filter perennials' >> beam.Filter(
lambda plant,
duration: plant['duration'] == duration,
duration=beam.pvalue.AsSingleton(perennial),
)
| beam.Map(print))
Explanation: <table align="left" style="margin-right:1em">
<td>
<a class="button" target="_blank" href="https://github.com/apache/beam/blob/master/sdks/python/apache_beam/examples/snippets/transforms/elementwise/filter.py"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" width="32px" height="32px" alt="View source code"/> View source code</a>
</td>
</table>
<br/><br/><br/>
Example 4: Filtering with side inputs as singletons
If the PCollection has a single value, such as the average from another computation,
passing the PCollection as a singleton accesses that value.
In this example, we pass a PCollection the value 'perennial' as a singleton.
We then use that value to filter out perennials.
End of explanation
import apache_beam as beam
with beam.Pipeline() as pipeline:
valid_durations = pipeline | 'Valid durations' >> beam.Create([
'annual',
'biennial',
'perennial',
])
valid_plants = (
pipeline
| 'Gardening plants' >> beam.Create([
{
'icon': '🍓', 'name': 'Strawberry', 'duration': 'perennial'
},
{
'icon': '🥕', 'name': 'Carrot', 'duration': 'biennial'
},
{
'icon': '🍆', 'name': 'Eggplant', 'duration': 'perennial'
},
{
'icon': '🍅', 'name': 'Tomato', 'duration': 'annual'
},
{
'icon': '🥔', 'name': 'Potato', 'duration': 'PERENNIAL'
},
])
| 'Filter valid plants' >> beam.Filter(
lambda plant,
valid_durations: plant['duration'] in valid_durations,
valid_durations=beam.pvalue.AsIter(valid_durations),
)
| beam.Map(print))
Explanation: <table align="left" style="margin-right:1em">
<td>
<a class="button" target="_blank" href="https://github.com/apache/beam/blob/master/sdks/python/apache_beam/examples/snippets/transforms/elementwise/filter.py"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" width="32px" height="32px" alt="View source code"/> View source code</a>
</td>
</table>
<br/><br/><br/>
Example 5: Filtering with side inputs as iterators
If the PCollection has multiple values, pass the PCollection as an iterator.
This accesses elements lazily as they are needed,
so it is possible to iterate over large PCollections that won't fit into memory.
End of explanation
import apache_beam as beam
with beam.Pipeline() as pipeline:
keep_duration = pipeline | 'Duration filters' >> beam.Create([
('annual', False),
('biennial', False),
('perennial', True),
])
perennials = (
pipeline
| 'Gardening plants' >> beam.Create([
{
'icon': '🍓', 'name': 'Strawberry', 'duration': 'perennial'
},
{
'icon': '🥕', 'name': 'Carrot', 'duration': 'biennial'
},
{
'icon': '🍆', 'name': 'Eggplant', 'duration': 'perennial'
},
{
'icon': '🍅', 'name': 'Tomato', 'duration': 'annual'
},
{
'icon': '🥔', 'name': 'Potato', 'duration': 'perennial'
},
])
| 'Filter plants by duration' >> beam.Filter(
lambda plant,
keep_duration: keep_duration[plant['duration']],
keep_duration=beam.pvalue.AsDict(keep_duration),
)
| beam.Map(print))
Explanation: <table align="left" style="margin-right:1em">
<td>
<a class="button" target="_blank" href="https://github.com/apache/beam/blob/master/sdks/python/apache_beam/examples/snippets/transforms/elementwise/filter.py"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" width="32px" height="32px" alt="View source code"/> View source code</a>
</td>
</table>
<br/><br/><br/>
Note: You can pass the PCollection as a list with beam.pvalue.AsList(pcollection),
but this requires that all the elements fit into memory.
Example 6: Filtering with side inputs as dictionaries
If a PCollection is small enough to fit into memory, then that PCollection can be passed as a dictionary.
Each element must be a (key, value) pair.
Note that all the elements of the PCollection must fit into memory for this.
If the PCollection won't fit into memory, use beam.pvalue.AsIter(pcollection) instead.
End of explanation |
13,814 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
In this notebook, I briefly demonstrate the use of ProximityCellMatch and BestProximityCellMatch tables in meso
Step1: Following demonstrates how to find matches between source scan
Step2: Designate the pairing as what needs to be matched
Step3: Now also specify which units from the source should be matched
Step4: Now we have specified scans to match and source scan units, we can populate ProximityCellMatch
Step5: Now find the best proximity match | Python Code:
from pipeline import meso
Explanation: In this notebook, I briefly demonstrate the use of ProximityCellMatch and BestProximityCellMatch tables in meso
End of explanation
source_scan = dict(animal_id=25133, session=3, scan_idx=11)
target_scan = dict(animal_id=25133, session=4, scan_idx=13)
Explanation: Following demonstrates how to find matches between source scan: 25133-3-11 and target scan 25133-4-13.
We also have a list of unit_ids from the source scan for which we want to find the match.
End of explanation
pairing = (meso.ScanInfo & source_scan).proj(src_session='session', src_scan_idx='scan_idx') * (meso.ScanInfo & target_scan).proj()
meso.ScansToMatch.insert(pairing)
Explanation: Designate the pairing as what needs to be matched:
End of explanation
# 150 units from the source scan
unit_ids = [ 46, 75, 117, 272, 342, 381, 395, 408, 414, 463, 537,
568, 581, 633, 670, 800, 801, 842, 873, 1042, 1078, 1085,
1175, 1193, 1246, 1420, 1440, 1443, 1451, 1464, 1719, 1755, 1823,
1863, 2107, 2128, 2161, 2199, 2231, 2371, 2438, 2522, 2572, 2585,
2644, 2764, 2809, 2810, 2873, 2924, 2973, 2989, 3028, 3035, 3083,
3107, 3129, 3131, 3139, 3189, 3192, 3214, 3318, 3513, 3551, 3613,
3618, 3671, 3680, 3742, 3810, 3945, 3973, 4065, 4069, 4085, 4123,
4131, 4134, 4184, 4221, 4353, 4369, 4426, 4490, 4512, 4532, 4865,
4971, 5140, 5171, 5227, 5276, 5694, 5746, 5810, 5817, 5856, 5910,
6013, 6061, 6078, 6108, 6216, 6254, 6273, 6292, 6301, 6368, 6486,
6497, 6558, 6569, 6618, 6620, 6825, 6887, 6911, 6984, 7091, 7199,
7205, 7242, 7331, 7372, 7415, 7429, 7433, 7659, 7715, 7927, 7946,
8085, 8096, 8181, 8317, 8391, 8392, 8395, 8396, 8415, 8472, 8478,
8572, 8580, 8610, 8663, 8681, 8683, 8700]
# create list of entries
src_units = [dict(source_scan, unit_id=unit) for unit in unit_ids]
meso.SourceUnitsToMatch.insert(meso.ScanSet.Unit.proj() & src_units)
Explanation: Now also specify which units from the source should be matched
End of explanation
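As an optional sanity check (not part of the original notebook, and assuming standard DataJoint restriction semantics), you can count how many source units were registered before populating:
# Optional check: number of source units registered for this scan.
print(len(meso.SourceUnitsToMatch & source_scan))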
meso.ProximityCellMatch.populate(display_progress=True)
Explanation: Now we have specified scans to match and source scan units, we can populate ProximityCellMatch
End of explanation
meso.BestProximityCellMatch().populate()
meso.BestProximityCellMatch()
Explanation: Now find the best proximity match
End of explanation |
13,815 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Color-color plots for LRG targets
The goal of this notebook is to compare the colors of LRG targets against those of the LRG templates.
Step2: Read the targets catalog
Step3: Read the templates and compute colors on a redshift grid.
Step5: Generate some plots | Python Code:
import os
import warnings
from time import time
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
import fitsio
import seaborn as sns
from speclite import filters
from desitarget import desi_mask
from desisim.io import read_basis_templates
%pylab inline
sns.set(style='white', font_scale=1.5, font='sans-serif', palette='Set2')
setcolors = sns.color_palette()
Explanation: Color-color plots for LRG targets
The goal of this notebook is to compare the colors of LRG targets against those of the LRG templates.
End of explanation
def flux2colors(cat):
    """Convert DECam/WISE fluxes to magnitudes and colors."""
colors = dict()
with warnings.catch_warnings(): # ignore missing fluxes (e.g., for QSOs)
warnings.simplefilter('ignore')
for ii, band in zip((1, 2, 4), ('g', 'r', 'z')):
colors[band] = 22.5 - 2.5 * np.log10(cat['DECAM_FLUX'][..., ii].data)
for ii, band in zip((0, 1), ('W1', 'W2')):
colors[band] = 22.5 - 2.5 * np.log10(cat['WISE_FLUX'][..., ii].data)
colors['gr'] = colors['g'] - colors['r']
colors['rz'] = colors['r'] - colors['z']
colors['rW1'] = colors['r'] - colors['W1']
colors['W1W2'] = colors['W1'] - colors['W2']
return colors
lrgfile = os.path.join( os.getenv('DESI_ROOT'), 'data', 'targets-dr3.1-EisDawLRG.fits' )
# Select just LRG targets.
print('Reading {}'.format(lrgfile))
cat = fitsio.read(lrgfile, ext=1, upper=True, columns=['DESI_TARGET'])
these = np.where( (cat['DESI_TARGET'] & desi_mask.LRG) != 0 )[0]
print('Number of LRG targets = {}'.format(len(these)))
cat = fitsio.read(lrgfile, ext=1, upper=True, rows=these)
data = flux2colors(cat)
Explanation: Read the targets catalog
End of explanation
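As a quick numerical aside (not in the original notebook), the 22.5 offset used in flux2colors is the nanomaggie zero point, so a flux of 10 nanomaggies corresponds to an AB magnitude of 20:
# 22.5 - 2.5*log10(flux in nanomaggies) gives the AB magnitude; 10 nanomaggies -> 20.0 mag.
print(22.5 - 2.5 * np.log10(10.0))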
filt = filters.load_filters('decam2014-g', 'decam2014-r', 'decam2014-z', 'wise2010-W1')
flux, wave, meta = read_basis_templates(objtype='LRG')
nt = len(meta)
print('Number of templates = {}'.format(nt))
zmin, zmax, dz = 0.0, 2.0, 0.1
nz = np.round( (zmax - zmin) / dz ).astype('i2')
print('Number of redshift points = {}'.format(nz))
cc = dict(
redshift = np.linspace(0.0, 2.0, nz),
gr = np.zeros( (nt, nz) ),
rz = np.zeros( (nt, nz) ),
rW1 = np.zeros( (nt, nz), )
)
t0 = time()
for iz, red in enumerate(cc['redshift']):
zwave = wave.astype('float') * (1 + red)
phot = filt.get_ab_maggies(flux, zwave, mask_invalid=False)
cc['gr'][:, iz] = -2.5 * np.log10( phot['decam2014-g'] / phot['decam2014-r'] )
cc['rz'][:, iz] = -2.5 * np.log10( phot['decam2014-r'] / phot['decam2014-z'] )
cc['rW1'][:, iz] = -2.5 * np.log10( phot['decam2014-r'] / phot['wise2010-W1'] )
print('Total time = {:.2f} sec.'.format(time() - t0))
Explanation: Read the templates and compute colors on a redshift grid.
End of explanation
figsize = (8, 6)
grrange = (0.0, 3.0)
rzrange = (0.0, 2.5)
rW1range = (-1, 5)
mzrange = (17.5, 20.5)
ntspace = 5 # spacing between model curves
def rzz(pngfile=None):
    """r-z color vs. apparent z-band magnitude."""
fig, ax = plt.subplots(figsize=figsize)
hb = ax.hexbin(data['z'], data['rz'], bins='log', cmap='Blues_r',
mincnt=100, extent=mzrange+rzrange)
ax.set_xlabel('z')
ax.set_ylabel('r - z')
ax.set_xlim(mzrange)
ax.set_ylim(rzrange)
cb = fig.colorbar(hb, ax=ax)
cb.set_label(r'log$_{10}$ (Number of Galaxies per Bin)')
if pngfile:
fig.savefig(pngfile)
def grz(models=False, pngfile=None):
fig, ax = plt.subplots(figsize=figsize)
hb = ax.hexbin(data['rz'], data['gr'], bins='log', cmap='Blues_r',
mincnt=100, extent=rzrange+grrange)
ax.set_xlabel('r - z')
ax.set_ylabel('g - r')
ax.set_xlim(rzrange)
ax.set_ylim(grrange)
cb = fig.colorbar(hb, ax=ax)
cb.set_label(r'log$_{10}$ (Number of Galaxies per Bin)')
if models:
for tt in np.arange(0, nt, ntspace):
ax.scatter(cc['rz'][tt, 0], cc['gr'][tt, 0], marker='o',
facecolors='none', s=80, edgecolors='k',
linewidth=1)
ax.plot(cc['rz'][tt, :], cc['gr'][tt, :], marker='s',
markersize=5, ls='-', alpha=0.5)
ax.text(0.1, 0.05, 'z=0', ha='left', va='bottom',
transform=ax.transAxes, fontsize=14)
if pngfile:
fig.savefig(pngfile)
def rzW1(models=False, pngfile=None):
fig, ax = plt.subplots(figsize=figsize)
hb = ax.hexbin(data['rz'], data['rW1'], bins='log', cmap='Blues_r',
mincnt=100, extent=rzrange+rW1range)
ax.set_xlabel('r - z')
ax.set_ylabel('r - W1')
ax.set_xlim(rzrange)
ax.set_ylim(rW1range)
cb = fig.colorbar(hb, ax=ax)
cb.set_label(r'log$_{10}$ (Number of Galaxies per Bin)')
if models:
for tt in np.arange(0, nt, ntspace):
ax.scatter(cc['rz'][tt, 0], cc['rW1'][tt, 0], marker='o',
facecolors='none', s=80, edgecolors='k',
linewidth=1)
ax.plot(cc['rz'][tt, :], cc['rW1'][tt, :], marker='s',
markersize=5, ls='-', alpha=0.5)
ax.text(0.1, 0.05, 'z=0', ha='left', va='bottom',
transform=ax.transAxes, fontsize=14)
if pngfile:
fig.savefig(pngfile)
grz(models=True)
rzW1(models=True)
rzz()
Explanation: Generate some plots
End of explanation |
13,816 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
I have a numpy array which contains time series data. I want to bin that array into equal partitions of a given length (it is fine to drop the last partition if it is not the same size) and then calculate the mean of each of those bins. | Problem:
import numpy as np
data = np.array([4, 2, 5, 6, 7, 5, 4, 3, 5, 7])
bin_size = 3
bin_data_mean = data[:(data.size // bin_size) * bin_size].reshape(-1, bin_size).mean(axis=1) |
13,817 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exercises
Write the code that solves the problems below in a file named W04-Exc.py and submit it.
Exercise 1
Write a function n_divide(n) that takes a positive integer n and returns the list of numbers
dividing the interval between 0 and 1 into n equal parts. (Hint: use the range function.)
Step1: Exercise 2
Define a function sen2word() that takes a sentence as an argument and prints the words used in the sentence, one per line.
Example
Step2: Exercise 3
Implement a function fibo() that takes a positive integer n greater than 2 and returns a list of the first n Fibonacci numbers.
The Fibonacci sequence is defined by
f(0) = f(1) = 1
f(n+2) = f(n) + f(n+1)
Example
Step3: Exercise 4
Write a function sort_notes() that takes a list of length-2 (name, score) tuples and returns it sorted by score.
The best score should come first.
Example
Step4: Exercise 5
Define a function num_sum that takes a list xs and adds up only those items of xs that are integers or floats.
The function must be written so that it causes no side effects.
Example | Python Code:
def n_divide(n):
L = []
for i in range(n+1):
L.append(i * 1.0/n)
return L
n_divide(10)
Explanation: Exercises
Write the code that solves the problems below in a file named W04-Exc.py and submit it.
Exercise 1
Write a function n_divide(n) that takes a positive integer n and returns the list of numbers dividing the interval between 0 and 1 into n equal parts. (Hint: use the range function.)
Example:
In [1]: n_divide(10)
out[1]: [0, 0.1, 0.2, ..., 0.9, 1]
End of explanation
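An equivalent one-line alternative (not part of the original solution) uses a list comprehension:
def n_divide_alt(n):
    # Same result as n_divide above, written as a list comprehension.
    return [i * 1.0 / n for i in range(n + 1)]
n_divide_alt(10)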
def sen2word(xs):
for i in xs.split():
print(i)
sen2word("I am learning Python. It's quite interesting.")
Explanation: Exercise 2
Define a function sen2word() that takes a sentence as an argument and prints the words used in the sentence, one per line.
Example:
In [1]: sen2word("I am learning Python.")
I
am
learning
Python.
End of explanation
def fibo(n):
F = [1, 1]
for i in range(2, n):
F.append(F[i-1] + F[i-2])
return F
fibo(5)
Explanation: Exercise 3
Implement a function fibo() that takes a positive integer n greater than 2 and returns a list of the first n Fibonacci numbers.
The Fibonacci sequence is defined by
f(0) = f(1) = 1
f(n+2) = f(n) + f(n+1)
Example:
In [13]: fibo(5)
out[13]: [1, 1, 2, 3, 5]
End of explanation
def second(t):
return t[1]
def sort_notes(xs):
xs.sort(key=second, reverse=True)
return xs
L = [("Lee", 45), ("Kim", 30), ("Kang", 70), ("Park", 99), ("Cho", 65)]
sort_notes(L)
Explanation: Exercise 4
Write a function sort_notes() that takes a list of length-2 (name, score) tuples and returns it sorted by score.
The best score should come first.
Example:
In [30]: sort_notes([("Lee", 45), ("Kim", 30), ("Kang", 70), ("Park", 99), ("Cho", 65)])
Out[30]: [("Park", 99), ("Kang", 70), ("Cho", 65), ("Lee", 45), ("Kim", 30)]
End of explanation
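An alternative (not in the original solution) is sorted() with a lambda key, which returns a new list instead of sorting in place:
# Returns a new list sorted by score, best score first.
sorted(L, key=lambda t: t[1], reverse=True)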
def num_sum(xs):
L = []
for i in xs:
if isinstance(i, int) or isinstance(i, float):
L.append(i)
else:
pass
return sum(L)
L = [5, 'abc', 2, [2,3]]
num_sum(L)
L
Explanation: Exercise 5
Define a function num_sum that takes a list xs and adds up only those items of xs that are integers or floats.
The function must be written so that it causes no side effects.
Example:
In [19]: L = [5, 'abc', 2, [2,3]]
num_sum(L)
Out[19]: 7
In [20]: L
Out[20]: [5, 'abc', 2, [2,3]]
End of explanation |
13,818 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
scikit-learn-random forest
Credits
Step1: Random Forest Classifier
Random forests are an example of an ensemble learner built on decision trees.
For this reason we'll start by discussing decision trees themselves.
Decision trees are extremely intuitive ways to classify or label objects
Step2: The binary splitting makes this extremely efficient.
As always, though, the trick is to ask the right questions.
This is where the algorithmic process comes in
Step3: Notice that at each increase in depth, every node is split in two except those nodes which contain only a single class.
The result is a very fast non-parametric classification, and can be extremely useful in practice.
Question
Step4: The details of the classifications are completely different! That is an indication of over-fitting
Step5: See how the details of the model change as a function of the sample, while the larger characteristics remain the same!
The random forest classifier will do something similar to this, but use a combined version of all these trees to arrive at a final answer
Step6: By averaging over 100 randomly perturbed models, we end up with an overall model which is a much better fit to our data!
(Note | Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import seaborn;
from sklearn.linear_model import LinearRegression
from scipy import stats
import pylab as pl
seaborn.set()
Explanation: scikit-learn-random forest
Credits: Forked from PyCon 2015 Scikit-learn Tutorial by Jake VanderPlas
End of explanation
import fig_code
fig_code.plot_example_decision_tree()
Explanation: Random Forest Classifier
Random forests are an example of an ensemble learner built on decision trees.
For this reason we'll start by discussing decision trees themselves.
Decision trees are extremely intuitive ways to classify or label objects: you simply ask a series of questions designed to zero-in on the classification:
End of explanation
from sklearn.datasets import make_blobs
X, y = make_blobs(n_samples=300, centers=4,
random_state=0, cluster_std=1.0)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='rainbow');
# We have some convenience functions in the repository that help
from fig_code import visualize_tree, plot_tree_interactive
# Now using IPython's ``interact`` (available in IPython 2.0+, and requires a live kernel) we can view the decision tree splits:
plot_tree_interactive(X, y);
Explanation: The binary splitting makes this extremely efficient.
As always, though, the trick is to ask the right questions.
This is where the algorithmic process comes in: in training a decision tree classifier, the algorithm looks at the features and decides which questions (or "splits") contain the most information.
Creating a Decision Tree
Here's an example of a decision tree classifier in scikit-learn. We'll start by defining some two-dimensional labeled data:
End of explanation
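To make "the most information" slightly more concrete, here is a rough sketch (not from the original notebook) of the Gini impurity score that DecisionTreeClassifier uses by default to compare candidate splits:
def gini(labels):
    # Gini impurity of a set of class labels: 1 - sum_k p_k**2
    _, counts = np.unique(labels, return_counts=True)
    p = counts / float(counts.sum())
    return 1.0 - np.sum(p ** 2)

print(gini([0, 0, 1, 1]))  # 0.5: maximally mixed two-class node
print(gini([0, 0, 0, 0]))  # 0.0: pure node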
from sklearn.tree import DecisionTreeClassifier
clf = DecisionTreeClassifier()
plt.figure()
visualize_tree(clf, X[:200], y[:200], boundaries=False)
plt.figure()
visualize_tree(clf, X[-200:], y[-200:], boundaries=False)
Explanation: Notice that at each increase in depth, every node is split in two except those nodes which contain only a single class.
The result is a very fast non-parametric classification, and can be extremely useful in practice.
Question: Do you see any problems with this?
Decision Trees and over-fitting
One issue with decision trees is that it is very easy to create trees which over-fit the data. That is, they are flexible enough that they can learn the structure of the noise in the data rather than the signal! For example, take a look at two trees built on two subsets of this dataset:
End of explanation
def fit_randomized_tree(random_state=0):
X, y = make_blobs(n_samples=300, centers=4,
random_state=0, cluster_std=2.0)
clf = DecisionTreeClassifier(max_depth=15)
rng = np.random.RandomState(random_state)
i = np.arange(len(y))
rng.shuffle(i)
visualize_tree(clf, X[i[:250]], y[i[:250]], boundaries=False,
xlim=(X[:, 0].min(), X[:, 0].max()),
ylim=(X[:, 1].min(), X[:, 1].max()))
from IPython.html.widgets import interact
interact(fit_randomized_tree, random_state=[0, 100]);
Explanation: The details of the classifications are completely different! That is an indication of over-fitting: when you predict the value for a new point, the result is more reflective of the noise in the model rather than the signal.
Ensembles of Estimators: Random Forests
One possible way to address over-fitting is to use an Ensemble Method: this is a meta-estimator which essentially averages the results of many individual estimators which over-fit the data. Somewhat surprisingly, the resulting estimates are much more robust and accurate than the individual estimates which make them up!
One of the most common ensemble methods is the Random Forest, in which the ensemble is made up of many decision trees which are in some way perturbed.
There are volumes of theory and precedent about how to randomize these trees, but as an example, let's imagine an ensemble of estimators fit on subsets of the data. We can get an idea of what these might look like as follows:
End of explanation
from sklearn.ensemble import RandomForestClassifier
clf = RandomForestClassifier(n_estimators=100, random_state=0, n_jobs=-1)
visualize_tree(clf, X, y, boundaries=False);
Explanation: See how the details of the model change as a function of the sample, while the larger characteristics remain the same!
The random forest classifier will do something similar to this, but use a combined version of all these trees to arrive at a final answer:
End of explanation
from sklearn.ensemble import RandomForestRegressor
x = 10 * np.random.rand(100)
def model(x, sigma=0.3):
fast_oscillation = np.sin(5 * x)
slow_oscillation = np.sin(0.5 * x)
noise = sigma * np.random.randn(len(x))
return slow_oscillation + fast_oscillation + noise
y = model(x)
plt.errorbar(x, y, 0.3, fmt='o');
xfit = np.linspace(0, 10, 1000)
yfit = RandomForestRegressor(100).fit(x[:, None], y).predict(xfit[:, None])
ytrue = model(xfit, 0)
plt.errorbar(x, y, 0.3, fmt='o')
plt.plot(xfit, yfit, '-r');
plt.plot(xfit, ytrue, '-k', alpha=0.5);
Explanation: By averaging over 100 randomly perturbed models, we end up with an overall model which is a much better fit to our data!
(Note: above we randomized the model through sub-sampling... Random Forests use more sophisticated means of randomization, which you can read about in, e.g. the scikit-learn documentation)
Random forests tend to be a poor fit when:
the labels are highly imbalanced (lots of 0s, few 1s);
the data are highly structured, like images, where a neural network might do better;
the dataset is small, where the forest might overfit;
the data are high dimensional, where a linear model might work better.
Random Forest Regressor
Above we were considering random forests within the context of classification.
Random forests can also be made to work in the case of regression (that is, continuous rather than categorical variables). The estimator to use for this is sklearn.ensemble.RandomForestRegressor.
Let's quickly demonstrate how this can be used:
End of explanation |
13,819 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The Forest Fire Model
A rapid introduction to Mesa
The Forest Fire Model is one of the simplest examples of a model that exhibits self-organized criticality.
Mesa is a new, Pythonic agent-based modeling framework. A big advantage of using Python is that it is a great language for interactive data analysis. Unlike some other ABM frameworks, with Mesa you can write a model, run it, and analyze it all in the same environment. (You don't have to, of course. But you can.)
In this notebook, we'll go over a rapid-fire (pun intended, sorry) introduction to building and analyzing a model with Mesa.
First, some imports. We'll go over what all the Mesa ones mean just below.
Step1: Building the model
Most models consist of basically two things
Step2: Now we need to define the model object itself. The main thing the model needs is the grid, which the trees are placed on. But since the model is dynamic, it also needs to include time -- it needs a schedule, to manage the trees activation as they spread the fire from one to the other.
The model also needs a few parameters
Step3: Running the model
Let's create a model with a 100 x 100 grid, and a tree density of 0.6. Remember, ForestFire takes the arguments height, width, density.
Step4: To run the model until it's done (that is, until it sets its running property to False) just use the run_model() method. This is implemented in the Model parent object, so we didn't need to implement it above.
Step5: That's all there is to it!
But... so what? This code doesn't include a visualization, after all.
TODO
Step6: And chart it, to see the dynamics.
Step7: In this case, the fire burned itself out after about 90 steps, with many trees left unburned.
You can try changing the density parameter and rerunning the code above, to see how different densities yield different dynamics. For example
Step8: ... But to really understand how the final outcome varies with density, we can't just tweak the parameter by hand over and over again. We need to do a batch run.
Batch runs
Batch runs, also called parameter sweeps, allow use to systemically vary the density parameter, run the model, and check the output. Mesa provides a BatchRunner object which takes a model class, a dictionary of parameters and the range of values they can take and runs the model at each combination of these values. We can also give it reporters, which collect some data on the model at the end of each run and store it, associated with the parameters that produced it.
For ease of typing and reading, we'll first create the parameters to vary and the reporter, and then assign them to a new BatchRunner.
Step9: Now the BatchRunner, which we've named param_run, is ready to go. To run the model at every combination of parameters (in this case, every density value), just use the run_all() method.
Step10: Like with the data collector, we can extract the data the batch runner collected into a dataframe
Step11: As you can see, each row here is a run of the model, identified by its parameter values (and given a unique index by the Run column). To view how the BurnedOut fraction varies with density, we can easily just plot them
Step12: And we see the very clear emergence of a critical value around 0.5, where the model quickly shifts from almost no trees being burned, to almost all of them.
In this case we ran the model only once at each value. However, it's easy to have the BatchRunner execute multiple runs at each parameter combination, in order to generate more statistically reliable results. We do this using the iteration argument.
Let's run the model 5 times at each parameter point, and export and plot the results as above. | Python Code:
import random
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from mesa import Model, Agent
from mesa.time import RandomActivation
from mesa.space import Grid
from mesa.datacollection import DataCollector
from mesa.batchrunner import BatchRunner
Explanation: The Forest Fire Model
A rapid introduction to Mesa
The Forest Fire Model is one of the simplest examples of a model that exhibits self-organized criticality.
Mesa is a new, Pythonic agent-based modeling framework. A big advantage of using Python is that it a great language for interactive data analysis. Unlike some other ABM frameworks, with Mesa you can write a model, run it, and analyze it all in the same environment. (You don't have to, of course. But you can).
In this notebook, we'll go over a rapid-fire (pun intended, sorry) introduction to building and analyzing a model with Mesa.
First, some imports. We'll go over what all the Mesa ones mean just below.
End of explanation
class TreeCell(Agent):
'''
A tree cell.
Attributes:
x, y: Grid coordinates
condition: Can be "Fine", "On Fire", or "Burned Out"
unique_id: (x,y) tuple.
unique_id isn't strictly necessary here, but it's good practice to give one to each
agent anyway.
'''
def __init__(self, model, pos):
'''
Create a new tree.
Args:
pos: The tree's coordinates on the grid. Used as the unique_id
'''
super().__init__(pos, model)
self.pos = pos
self.unique_id = pos
self.condition = "Fine"
def step(self):
'''
If the tree is on fire, spread it to fine trees nearby.
'''
if self.condition == "On Fire":
neighbors = self.model.grid.get_neighbors(self.pos, moore=False)
for neighbor in neighbors:
if neighbor.condition == "Fine":
neighbor.condition = "On Fire"
self.condition = "Burned Out"
Explanation: Building the model
Most models consist of basically two things: agents, and an world for the agents to be in. The Forest Fire model has only one kind of agent: a tree. A tree can either be unburned, on fire, or already burned. The environment is a grid, where each cell can either be empty or contain a tree.
First, let's define our tree agent. The agent needs to be assigned x and y coordinates on the grid, and that's about it. We could assign agents a condition to be in, but for now let's have them all start as being 'Fine'. Since the agent doesn't move, and there is only at most one tree per cell, we can use a tuple of its coordinates as a unique identifier.
Next, we define the agent's step method. This gets called whenever the agent needs to act in the world and takes the model object to which it belongs as an input. The tree's behavior is simple: If it is currently on fire, it spreads the fire to any trees above, below, to the left and the right of it that are not themselves burned out or on fire; then it burns itself out.
End of explanation
class ForestFire(Model):
'''
Simple Forest Fire model.
'''
def __init__(self, height, width, density):
'''
Create a new forest fire model.
Args:
height, width: The size of the grid to model
density: What fraction of grid cells have a tree in them.
'''
# Initialize model parameters
self.height = height
self.width = width
self.density = density
# Set up model objects
self.schedule = RandomActivation(self)
self.grid = Grid(height, width, torus=False)
self.dc = DataCollector({"Fine": lambda m: self.count_type(m, "Fine"),
"On Fire": lambda m: self.count_type(m, "On Fire"),
"Burned Out": lambda m: self.count_type(m, "Burned Out")})
# Place a tree in each cell with Prob = density
for x in range(self.width):
for y in range(self.height):
if random.random() < self.density:
# Create a tree
new_tree = TreeCell(self, (x, y))
# Set all trees in the first column on fire.
if x == 0:
new_tree.condition = "On Fire"
self.grid[y][x] = new_tree
self.schedule.add(new_tree)
self.running = True
def step(self):
'''
Advance the model by one step.
'''
self.schedule.step()
self.dc.collect(self)
# Halt if no more fire
if self.count_type(self, "On Fire") == 0:
self.running = False
@staticmethod
def count_type(model, tree_condition):
'''
Helper method to count trees in a given condition in a given model.
'''
count = 0
for tree in model.schedule.agents:
if tree.condition == tree_condition:
count += 1
return count
Explanation: Now we need to define the model object itself. The main thing the model needs is the grid, which the trees are placed on. But since the model is dynamic, it also needs to include time -- it needs a schedule, to manage the trees activation as they spread the fire from one to the other.
The model also needs a few parameters: how large the grid is and what the density of trees on it will be. Density will be the key parameter we'll explore below.
Finally, we'll give the model a data collector. This is a Mesa object which collects and stores data on the model as it runs for later analysis.
The constructor needs to do a few things. It instantiates all the model-level variables and objects; it randomly places trees on the grid, based on the density parameter; and it starts the fire by setting all the trees on one edge of the grid (x=0) as being On "Fire".
Next, the model needs a step method. Like at the agent level, this method defines what happens every step of the model. We want to activate all the trees, one at a time; then we run the data collector, to count how many trees are currently on fire, burned out, or still fine. If there are no trees left on fire, we stop the model by setting its running property to False.
End of explanation
fire = ForestFire(100, 100, 0.6)
Explanation: Running the model
Let's create a model with a 100 x 100 grid, and a tree density of 0.6. Remember, ForestFire takes the arguments height, width, density.
End of explanation
fire.run_model()
Explanation: To run the model until it's done (that is, until it sets its running property to False) just use the run_model() method. This is implemented in the Model parent object, so we didn't need to implement it above.
End of explanation
results = fire.dc.get_model_vars_dataframe()
Explanation: That's all there is to it!
But... so what? This code doesn't include a visualization, after all.
TODO: Add a MatPlotLib visualization
Remember the data collector? Now we can put the data it collected into a pandas DataFrame:
End of explanation
results.plot()
Explanation: And chart it, to see the dynamics.
End of explanation
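One small extra check (not in the original notebook): the last row of the collected data gives the final share of trees burned.
# Fraction of trees in the "Burned Out" state at the end of the run.
final_counts = results.iloc[-1]
print(final_counts["Burned Out"] / final_counts.sum())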
fire = ForestFire(100, 100, 0.8)
fire.run_model()
results = fire.dc.get_model_vars_dataframe()
results.plot()
Explanation: In this case, the fire burned itself out after about 90 steps, with many trees left unburned.
You can try changing the density parameter and rerunning the code above, to see how different densities yield different dynamics. For example:
End of explanation
param_set = dict(height=50, # Height and width are constant
width=50,
# Vary density from 0.01 to 1, in 0.01 increments:
density=np.linspace(0,1,101)[1:])
# At the end of each model run, calculate the fraction of trees which are Burned Out
model_reporter = {"BurnedOut": lambda m: (ForestFire.count_type(m, "Burned Out") /
m.schedule.get_agent_count()) }
# Create the batch runner
param_run = BatchRunner(ForestFire, param_set, model_reporters=model_reporter)
Explanation: ... But to really understand how the final outcome varies with density, we can't just tweak the parameter by hand over and over again. We need to do a batch run.
Batch runs
Batch runs, also called parameter sweeps, allow use to systemically vary the density parameter, run the model, and check the output. Mesa provides a BatchRunner object which takes a model class, a dictionary of parameters and the range of values they can take and runs the model at each combination of these values. We can also give it reporters, which collect some data on the model at the end of each run and store it, associated with the parameters that produced it.
For ease of typing and reading, we'll first create the parameters to vary and the reporter, and then assign them to a new BatchRunner.
End of explanation
param_run.run_all()
Explanation: Now the BatchRunner, which we've named param_run, is ready to go. To run the model at every combination of parameters (in this case, every density value), just use the run_all() method.
End of explanation
df = param_run.get_model_vars_dataframe()
df.head()
Explanation: Like with the data collector, we can extract the data the batch runner collected into a dataframe:
End of explanation
plt.scatter(df.density, df.BurnedOut)
plt.xlim(0,1)
Explanation: As you can see, each row here is a run of the model, identified by its parameter values (and given a unique index by the Run column). To view how the BurnedOut fraction varies with density, we can easily just plot them:
End of explanation
param_run = BatchRunner(ForestFire, param_set, iterations=5, model_reporters=model_reporter)
param_run.run_all()
df = param_run.get_model_vars_dataframe()
plt.scatter(df.density, df.BurnedOut)
plt.xlim(0,1)
Explanation: And we see the very clear emergence of a critical value around 0.5, where the model quickly shifts from almost no trees being burned, to almost all of them.
In this case we ran the model only once at each value. However, it's easy to have the BatchRunner execute multiple runs at each parameter combination, in order to generate more statistically reliable results. We do this using the iteration argument.
Let's run the model 5 times at each parameter point, and export and plot the results as above.
End of explanation |
13,820 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Processing cellpy batch - preparing the data
{{cookiecutter.project_name}}
Step1: Creating pages and initialise the cellpy batch object
If you need to create Journal Pages, please provide appropriate names for the project and the experiment to allow cellpy to build the pages.
Step2: Set optional parameters
You should set overall parameters before creating the journal and lodaing the data. The most common ones are given below (uncomment what you need).
Step3: Create the journal and appropriate folder structure
Step4: 2. Loading data
Step5: 3. Initial investigation of the batch experiment
Step6: 4. Packaging data
The notebooks are set up such that they read the journal from the current folder. This is not the default location of the journal files. To be able to load the journal easily, you should duplicate it (run the cell below).
If you also would like to share the experiment with others, you should run the duplicate_cellpy_files method so that all the cellpy-files will be copied to the data/interim folder. | Python Code:
import sys
import cellpy
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from cellpy import prms
from cellpy import prmreader
from cellpy.utils import batch, plotutils
%matplotlib inline
print(f"cellpy version: {cellpy.__version__}")
Explanation: Processing cellpy batch - preparing the data
{{cookiecutter.project_name}}::{{cookiecutter.session_id}}
Experimental-id: {{cookiecutter.notebook_name}}
Short-name: {{cookiecutter.session_id}}
Project: {{cookiecutter.project_name}}
By: {{cookiecutter.author_name}}
Date: {{cookiecutter.date}}
1. Setting up everything
Note! This template was made for cellpy version 0.4.1.a3
Imports
End of explanation
# Parameters for the batch
project = "{{cookiecutter.project_name}}"
name = "{{cookiecutter.session_id}}"
batch_col = "b01" # edit this if you are not using the standard batch column
# Create the batch object
b = batch.init(name, project, batch_col=batch_col)
Explanation: Creating pages and initialise the cellpy batch object
If you need to create Journal Pages, please provide appropriate names for the project and the experiment to allow cellpy to build the pages.
End of explanation
## Setting some prms if default values are not OK for you
b.experiment.export_raw = False
b.experiment.export_cycles = True
b.experiment.export_ica = False
Explanation: Set optional parameters
You should set overall parameters before creating the journal and lodaing the data. The most common ones are given below (uncomment what you need).
End of explanation
# load info from your db and write the journal pages
b.create_journal()
# Create the apropriate folders
b.paginate()
# Show the journal pages
b.pages
Explanation: Create the journal and appropriate folder structure
End of explanation
# load the data (and save .csv-files if you have set export_(raw/cycles/ica) = True)
# (this might take some time)
b.update()
Explanation: 2. Loading data
End of explanation
# Collect summary-data (e.g. charge capacity vs cycle number) from each cell and export to .csv-file(s).
b.combine_summaries()
# Plot the charge capacity and the C.E. (and resistance) vs. cycle number (standard plot)
b.plot_summaries()
Explanation: 3. Initial investigation of the batch experiment
End of explanation
## If you have made any changes to your journal pages, you should save it again.
# b.save_journal()
# Copy the journal to the notebook working folder
b.duplicate_journal()
## If you want to share the experiment notebooks, run this cell to also copy cellpy files to the data folder instead:
# b.duplicate_cellpy_files(location="standard")
Explanation: 4. Packaging data
The notebooks are set up such that they read the journal from the current folder. This is not the default location of the journal files. To be able to load the journal easily, you should duplicate it (run the cell below).
If you also would like to share the experiment with others, you should run the duplicate_cellpy_files method so that all the cellpy-files will be copied to the data/interim folder.
End of explanation |
13,821 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
CSE 6040, Fall 2015 [09]
Step1: sqlite maintains databases as files; in this example, the name of that file is example.db.
If the named file does not yet exist, connecting to it in this way will create it.
To issue commands to the database, you also need to create a cursor.
Step2: A cursor tracks the current state of the database, and you will mostly be using the cursor to manipulate or query the database.
Tables and Basic Queries
The main object of a relational database is a table.
Conceptually, your data consists of items and attributes. In a database table, the items are rows and the attributes are columns.
For instance, suppose we wish to maintain a database of Georgia Tech students, whose attributes are their names and GT IDs. You might start by creating a table named Students to hold this data. You can create the table using the command, create table.
Step3: Note
Step4: Given a table, the most common operation is a query. The simplest kind of query is called a select.
The following example selects all rows (items) from the Students table.
Step5: Conceptually, the database is now in a new state in which you can ask for results of the query. One way to do that is to call fetchone() on the cursor object, which will return a tuple corresponding to a row of the table.
This example calls fetchone() twice to get the first two query results.
Step6: An alternative to fetchone() is fetchall(), which will return a list of tuples for all rows, starting at the cursor.
Since the preceding code has already fetched the first two results, calling fetchall() at this point will return all remaining results.
Step7: Question. What will calling fetchone() at this point return?
Step8: Here is an alternative, an arguably more natural, idiom for executing a query and iterating over its results.
Step9: An insertion idiom
Another common operation is to perform a bunch of insertions into a table from a list of tuples. In this case, you can use executemany().
Step10: Exercise. Suppose we wish to maintain a second table, called Takes, which records classes that students have taken and the grades they earn.
In particular, each row of Takes stores a student by his/her GT ID, the course he/she took, and the grade he/she earned. More formally, suppose this table is defined as follows
Step11: Write a command to insert the following records into the Takes table.
Vuduc
Step12: Join queries
The "big idea" in a relational database is to build queries that combine information from multiple tables. A join query is one such operation.
There are many types of joins, but the simplest is one in which you use the where clause of a select statement to specify how to match rows from the tables being joined.
For example, recall that the Takes table stores classes taken by each student. However, these classes are recorded by a student's GT ID. Suppose we want a report where we want each student's name rather than his/her ID. We can get the matching name from the Students table. Here is a query to accomplish this matching
Step13: Exercise. Write a query to select only the names and grades of students who took CSE 6040.
Aggregations
Another common style of query is an aggregation, which is a summary of information across multiple records, rather than the raw records themselves.
For instance, suppose we want to compute the GPA for each unique GT ID from the Takes table. Here is a query that does it
Step14: Exercise. Compute the GPA of every student, but report the name (rather than GT ID) and GPA.
Cleanup
As one final bit of information, it's good practice to shutdown the cursor and connection, the same way you close files. | Python Code:
import sqlite3 as db
# Connect to a database (or create one if it doesn't exist)
conn = db.connect ('example.db')
Explanation: CSE 6040, Fall 2015 [09]: Relational Databases via SQL
Today's lab is a crash-course in relational databases, as well as SQL (Structured Query Language), which is the most popular language for managing relational databases.
There are many database management system ("DBMS") products that support SQL. The one we will consider in this class is the simplest, called sqlite3. It stores the database in a simple file and can be run in a "standalone" mode. However, we will consider invoking it from Python.
With a little luck, you might by the end of this class understand this xkcd comic on SQL injection attacks.
Getting started
In Python, you connect to an sqlite3 database by creating a connection object.
End of explanation
# Create a 'cursor' for executing commands
c = conn.cursor ()
Explanation: sqlite maintains databases as files; in this example, the name of that file is example.db.
If the named file does not yet exist, connecting to it in this way will create it.
To issue commands to the database, you also need to create a cursor.
End of explanation
c.execute ("create table Students (gtid integer, name text)")
Explanation: A cursor tracks the current state of the database, and you will mostly be using the cursor to manipulate or query the database.
Tables and Basic Queries
The main object of a relational database is a table.
Conceptually, your data consists of items and attributes. In a database table, the items are rows and the attributes are columns.
For instance, suppose we wish to maintain a database of Georgia Tech students, whose attributes are their names and GT IDs. You might start by creating a table named Students to hold this data. You can create the table using the command, create table.
End of explanation
c.execute ("insert into Students values (123, 'Vuduc')")
c.execute ("insert into Students values (456, 'Chau')")
c.execute ("insert into Students values (381, 'Bader')")
c.execute ("insert into Students values (991, 'Sokol')")
Explanation: Note: This command will fail if the table already exists. If you are trying to carry out these exercises from scratch, you may need to remove any existing example.db first.
To populate the table with items, you can use the command, insert into.
End of explanation
c.execute ("select * from Students")
Explanation: Given a table, the most common operation is a query. The simplest kind of query is called a select.
The following example selects all rows (items) from the Students table.
End of explanation
print (c.fetchone ())
print (c.fetchone ())
Explanation: Conceptually, the database is now in a new state in which you can ask for results of the query. One way to do that is to call fetchone() on the cursor object, which will return a tuple corresponding to a row of the table.
This example calls fetchone() twice to get the first two query results.
End of explanation
print (c.fetchall ())
Explanation: An alternative to fetchone() is fetchall(), which will return a list of tuples for all rows, starting at the cursor.
Since the preceding code has already fetched the first two results, calling fetchall() at this point will return all remaining results.
End of explanation
print (c.fetchone ())
Explanation: Question. What will calling fetchone() at this point return?
End of explanation
query = 'select * from Students'
for student in c.execute (query):
print (student)
Explanation: Here is an alternative, an arguably more natural, idiom for executing a query and iterating over its results.
End of explanation
# An important (and secure!) idiom
more_students = [(723, 'Rozga'),
(882, 'Zha'),
(401, 'Park'),
(377, 'Vetter'),
(904, 'Brown')]
c.executemany ('insert into Students values (?, ?)', more_students)
query = 'select * from Students'
for student in c.execute (query):
print (student)
Explanation: An insertion idiom
Another common operation is to perform a bunch of insertions into a table from a list of tuples. In this case, you can use executemany().
End of explanation
c.execute ('create table Takes (gtid integer, course text, grade real)')
Explanation: Exercise. Suppose we wish to maintain a second table, called Takes, which records classes that students have taken and the grades they earn.
In particular, each row of Takes stores a student by his/her GT ID, the course he/she took, and the grade he/she earned. More formally, suppose this table is defined as follows:
End of explanation
# Insert your solution here; use the next cell to test the output
taken_spring2015 = [(991, "CSE 6040", 4.0),
(456, "CSE 6040", 4.0),
(123, "CSE 6040", 2.0),
(123, "ISYE 6644", 3.0),
(123, "MGMT 8803", 1.0),
(991, "ISYE 6740", 4.0),
(456, "CSE 6740", 2.0),
(456, "MGMT 8803", 3.0)]
c.executemany ('insert into Takes values (?, ?, ?)', taken_spring2015)
# Displays the results of your code
for row in c.execute ('select * from Takes'):
print (row)
Explanation: Write a command to insert the following records into the Takes table.
Vuduc: CSE 6040 - A (4.0), ISYE 6644 - B (3.0), MGMT 8803 - D (1.0)
Sokol: CSE 6040 - A (4.0), ISYE 6740 - A (4.0)
Chau: CSE 6040 - C (2.0), CSE 6740 - C (2.0), MGMT 8803 - B (3.0)
End of explanation
# See all (name, course, grade) tuples
query = '''
select Students.name, Takes.course, Takes.grade
from Students, Takes
where Students.gtid=Takes.gtid
'''
for match in c.execute (query):
print (match)
Explanation: Join queries
The "big idea" in a relational database is to build queries that combine information from multiple tables. A join query is one such operation.
There are many types of joins, but the simplest is one in which you use the where clause of a select statement to specify how to match rows from the tables being joined.
For example, recall that the Takes table stores classes taken by each student. However, these classes are recorded by a student's GT ID. Suppose we want a report where we want each student's name rather than his/her ID. We can get the matching name from the Students table. Here is a query to accomplish this matching:
End of explanation
query = '''
select Students.name, avg (Takes.grade)
from Takes, Students
where Students.gtid=Takes.gtid
group by Takes.gtid
'''
for match in c.execute (query):
print (match)
Explanation: Exercise. Write a query to select only the names and grades of students who took CSE 6040.
Aggregations
Another common style of query is an aggregation, which is a summary of information across multiple records, rather than the raw records themselves.
For instance, suppose we want to compute the GPA for each unique GT ID from the Takes table. Here is a query that does it:
End of explanation
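For reference, here is one possible way to approach the CSE 6040 exercise above and the report-GPA-by-name exercise posed at the end of this lab. These are sketches only, reusing the open cursor c and the Students and Takes tables defined earlier; your SQL may differ.
# One possible solution to the exercise above: names and grades for CSE 6040 only.
query = '''
  select Students.name, Takes.grade
    from Students, Takes
   where Students.gtid = Takes.gtid
     and Takes.course = 'CSE 6040'
'''
for (name, grade) in c.execute (query):
    print (name, grade)
# And a sketch for the later exercise: GPA of every student, reported by name.
query = '''
  select Students.name, avg (Takes.grade)
    from Students, Takes
   where Students.gtid = Takes.gtid
   group by Students.gtid
'''
for (name, gpa) in c.execute (query):
    print (name, gpa)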
c.close()
conn.close()
Explanation: Exercise. Compute the GPA of every student, but report the name (rather than GT ID) and GPA.
Cleanup
As one final bit of information, it's good practice to shutdown the cursor and connection, the same way you close files.
End of explanation |
13,822 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This notebook has been tested with
Python 3.5
Keras 2.0.8
Tensorflow 1.3.0
Step1: Use VGG16 model with pre-trained weights
Keras documentation for detail
Step2: Use only convolutional part of VGG16 and add dense layer
Create your own input format
Create your own model
Step3: Fine tune VGG16 model with a designated layer | Python Code:
from keras.applications.vgg16 import VGG16
from keras.applications.vgg16 import preprocess_input
from keras.layers import Input, Flatten, Dense
from keras.models import Model
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Conv2D, MaxPooling2D, ZeroPadding2D
Explanation: This notebook has been tested with
Python 3.5
Keras 2.0.8
Tensorflow 1.3.0
End of explanation
# Use whole vgg16 model
# Input image format: (224 X 224 X 3)
vgg16_with_top = VGG16(include_top=True, weights='imagenet',
input_tensor=None, input_shape=None,
pooling=None,
classes=1000)
vgg16_with_top.summary()
Explanation: Use VGG16 model with pre-trained weights
Keras documentation for detail:
https://keras.io/applications/
End of explanation
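As a quick sanity check of the full model, one might run a single prediction; a sketch follows. The file name 'elephant.jpg' is only a placeholder -- the snippet assumes any RGB image you have on disk.
# Sketch: classify one image with the full VGG16 model.
# 'elephant.jpg' is a placeholder path -- substitute your own image file.
import numpy as np
from keras.preprocessing import image
from keras.applications.vgg16 import decode_predictions
img = image.load_img('elephant.jpg', target_size=(224, 224))  # resize to the VGG16 input size
x = image.img_to_array(img)       # shape (224, 224, 3)
x = np.expand_dims(x, axis=0)     # add a batch dimension
x = preprocess_input(x)           # apply the VGG16 preprocessing imported above
preds = vgg16_with_top.predict(x)
print(decode_predictions(preds, top=3)[0])  # top-3 ImageNet classes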
# Get back the convolutional part of a VGG network trained on ImageNet
model_vgg16_conv = VGG16(weights='imagenet', include_top=False)
model_vgg16_conv.summary()
# Stop training the weights of the convolutional layers; if you want to fit them, comment out the following two lines
for layer in model_vgg16_conv.layers:
layer.trainable = False
# Create your own input format (here 224 X 224 X 3)
inputs = Input(shape=(224,224,3),name = 'image_input')
# Use the generated model
output_vgg16_conv = model_vgg16_conv(inputs)
# Add the fully-connected layers
x = Flatten(name='flatten')(output_vgg16_conv)
#x = Dense(128, activation='relu', name='fc1')(x)
#x = Dense(128, activation='relu', name='fc2')(x)
x = Dense(1000, activation='softmax', name='predictions')(x)
# Create your own model
my_model = Model(inputs=inputs, outputs=x)
# In the summary, the weights and layers from the VGG part will be hidden, and they will not be fit during training
my_model.summary()
Explanation: Use only convolutional part of VGG16 and add dense layer
Create your own input format
Create your own model
End of explanation
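Before fine-tuning on real data, it can help to see the training calls end to end. The sketch below compiles my_model and fits it on random placeholder arrays purely to illustrate the API; in practice you would substitute real images (preprocessed with preprocess_input) and one-hot labels.
# Sketch only: compile and fit the new top on placeholder data.
import numpy as np
my_model.compile(optimizer='adam',
                 loss='categorical_crossentropy',
                 metrics=['accuracy'])
X_dummy = np.random.random((8, 224, 224, 3))                  # placeholder "images"
y_dummy = np.eye(1000)[np.random.randint(0, 1000, size=8)]    # placeholder one-hot labels
my_model.fit(X_dummy, y_dummy, epochs=1, batch_size=4)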
# Generate a model with all layers (with top)
my_vgg16_model = VGG16(weights='imagenet', include_top=True)
# Stop to train weights of VGG16 layers
for layer in my_vgg16_model.layers:
layer.trainable = False
# Add a layer where input is the output of the second last layer
#x = Flatten(name='flatten')(my_vgg16_model.layers[-2].output)
# Add a layer where input is the output of the fourth last layer
x = Dropout(0.9, noise_shape=None, seed=None)(my_vgg16_model.layers[-4].output)
#x = Dense(128, activation='relu', name='fc1')(x)
#x = Dense(128, activation='relu', name='fc2')(x)
x = Dense(1000, activation='softmax', name='predictions')(x)
# Then create the corresponding model
my_model = Model(inputs=my_vgg16_model.input, outputs=x)
my_model.summary()
Explanation: Fine tune VGG16 model with a designated layer
End of explanation |
13,823 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Predict the Sun Hours by Decision Tree
Import Modules
Step1: Import Data
Below is the Daily Weather Observations of Sydney, New South Wales between Aug 2015 and Aug 2016.
Step2: Column Meanings
| Heading | Meaning | Units |
|-----------------|----------------------------------------------------------|---------------------|
| Day | Day of the week | first two letters |
| Temps_min | Minimum temperature in the 24 hours to 9am. | degrees Celsius |
| Temps_max | Maximum temperature in the 24 hours from 9am. | degrees Celsius |
| Rain | Precipitation (rainfall) in the 24 hours to 9am. | millimetres |
| Evap | Class A pan evaporation in the 24 hours to 9am | millimetres |
| Sun_hours | Bright sunshine in the 24 hours to midnight | hours |
| Max_wind_dir | Direction of strongest gust in the 24 hours to midnight | 16 compass points |
| Max_wind_spd | Speed of strongest wind gust in the 24 hours to midnight | kilometres per hour |
| Max_wind_time | Time of strongest wind gust | local time hh:mm |
Step3: We first categorize Sun_hours into three levels
Step4: Preprocessing (Handling Missing Values)
From time to time, observations will not be available, for a variety of reasons. We need to handle missing values before training a classifier.
Step5: output the number of missing values in each column
pd.isnull(data).sum()
For CLD_at_9am, Max_wind_dir, Max_wind_spd and Max_wind_dir, we simply drop the rows with missing values
Step6: According to the Bureau of Meteorology (http
Step7: Select Feature(s)
Step8: If you have domain knowledge, you will know that the difference between max and min temperature might be highly correlated to sun hours. Let us add such column.
Step9: Let's try the features with higher correlations to Sun_level
Step10: Build Decision Tree
We preserve 80% of the data as training data, and the rest will be used to tune the classifier.
Step11: Now we generate features
Step12: And we use Sun_level as labels
Step13: With features and labels, we can then train our decision tree
Step14: Evaluations and Investigations of the Trained Decision Tree
Firstly let's see the accuracy of the decision tree on the training dataset
Step15: Let's see the accuracy on the testing dataset
Step16: We can also get the importance of each feature
Step17: Apparently the above tree is overfitting. One way to deal with it is to change the maximum height of the decision tree.
Step18: Predict the missing Sun_level value
Step19: Although we cannot remember what happened on these 7 days (can you?), it seems the predictions are probably correct. E.g.,
* the first instance is predicted as 'high' and it is a summer day, without rain or even cloud;
* the last instance is predicted as 'low' because there was heavy rainfall (Rain = 61.0 mm). Note that the model didn't use the rain feature at all!
Visualizing the Decision Tree
You need to install pydotplus (use pip install pydotplus) and graphviz on your computer to get the following code to run
Step20: You can generate a pdf file if you want | Python Code:
import pandas as pd
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_graphviz
from IPython.display import Image
from sklearn.externals.six import StringIO
from sklearn.cross_validation import train_test_split
import matplotlib.pyplot as plt
%matplotlib inline
RANDOM_SEED = 9
Explanation: Predict the Sun Hours by Decision Tree
Import Modules
End of explanation
data = pd.read_csv('./asset/Daily_Weather_Observations.csv', sep=',')
print(data.shape)
data.head()
Explanation: Import Data
Below is the Daily Weather Observations of Sydney, New South Wales between Aug 2015 and Aug 2016.
End of explanation
data_missing_sun_hours = data[pd.isnull(data['Sun_hours'])]
data_missing_sun_hours
data = data[pd.notnull(data['Sun_hours'])]
print(data.shape)
Explanation: Column Meanings
| Heading | Meaning | Units |
|-----------------|----------------------------------------------------------|---------------------|
| Day | Day of the week | first two letters |
| Temps_min | Minimum temperature in the 24 hours to 9am. | degrees Celsius |
| Temps_max | Maximum temperature in the 24 hours from 9am. | degrees Celsius |
| Rain | Precipitation (rainfall) in the 24 hours to 9am. | millimetres |
| Evap | Class A pan evaporation in the 24 hours to 9am | millimetres |
| Sun_hours | Bright sunshine in the 24 hours to midnight | hours |
| Max_wind_dir | Direction of strongest gust in the 24 hours to midnight | 16 compass points |
| Max_wind_spd | Speed of strongest wind gust in the 24 hours to midnight | kilometres per hour |
| Max_wind_time | Time of strongest wind gust | local time hh:mm |
| Temp_at_9am | Temperature at 9 am | degrees Celsius |
| RH_at_9am | Relative humidity at 9 am | percent |
| CLD_at_9am | Fraction of sky obscured by cloud at 9 am | eighths |
| Wind_dir_at_9am | Wind direction averaged over 10 minutes prior to 9 am | compass points |
| Wind_spd_at_9am | Wind speed averaged over 10 minutes prior to 9 am | kilometres per hour |
| MSLP_at_9am | Atmospheric pressure reduced to mean sea level at 9 am | hectopascals |
| Temp_at_3pm | Temperature at 3 pm | degrees Celsius |
| RH_at_3pm | Relative humidity at 3 pm | percent |
| CLD_at_3pm | Fraction of sky obscured by cloud at 3 pm | eighths |
| Wind_dir_at_3pm | Wind direction averaged over 10 minutes prior to 3 pm | compass points |
| Wind_spd_at_3pm | Wind speed averaged over 10 minutes prior to 3 pm | kilometres per hour |
| MSLP_at_3pm | Atmospheric pressure reduced to mean sea level at 3 pm | hectopascals |
We have noticed that the Sun_hours values are missing in some rows. We want to predict these missing Sun_hours values.
We first separate the rows with missing Sun_hours from the original dataset.
End of explanation
labels = ['Low','Med','High']
data['Sun_level'] = pd.cut(data.Sun_hours, [-1,5,10,25], labels=labels)
data[['Sun_hours','Sun_level']].head()
Explanation: We first categorize Sun_hours into three levels: High(>10), Med(>5 and <=10), and Low(<=5)
End of explanation
# output all rows with missing values
data[data.isnull().any(axis=1)]
Explanation: Preprocessing (Handling Missing Values)
From time to time, observations will not be available, for a variety of reasons. We need to handle missing values before training a classifier.
End of explanation
data = data.dropna(subset = ['CLD_at_9am', 'Max_wind_dir', 'Max_wind_spd', 'Max_wind_dir'])
print(data.shape)
Explanation: output the number of missing values in each column
pd.isnull(data).sum()
For CLD_at_9am, Max_wind_dir, Max_wind_spd and Max_wind_dir, we simply drop the rows with missing values
End of explanation
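The per-column count mentioned above is worth actually printing; a small sketch, run before and after the dropna call, makes it easy to confirm which columns still contain missing values.
# Sketch: per-column missing-value counts (only columns with at least one gap).
missing_per_column = pd.isnull(data).sum()
print(missing_per_column[missing_per_column > 0])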
bitmap1 = data.Evap.notnull()
bitmap2 = bitmap1.shift(1)
bitmap2[0] = True
data = data[bitmap1 & bitmap2]
print(data.shape)
Explanation: According to the Bureau of Meteorology (http://www.bom.gov.au/climate/dwo/IDCJDW0000.shtml), sometimes when the evaporation observations are missing, the next value given has been accumulated over several days rather than the normal one day. Therefore, we will not only drop the rows with missing Evap data, but also the rows below them.
End of explanation
def corr(data):
c = data.corr()
df = c.Sun_level_num.to_frame()
df['abs'] = abs(df['Sun_level_num'])
df.sort_values(by = 'abs', ascending=False)
print(df.sort_values(by = 'abs', ascending=False))
# we need to convert labels (string) into numeric to get the correlation
labels = [0,1,2]
data['Sun_level_num'] = pd.cut(data.Sun_hours, [-1,5,10,25], labels=labels)
data[['Sun_level_num']] = data[['Sun_level_num']].apply(pd.to_numeric)
corr(data)
# c = data.corr()
# c.sort_values(by='Sun_level_num', ascending=True)['Sun_level_num']
Explanation: Select Feature(s)
End of explanation
data['Temps_diff'] = data['Temps_max'] - data['Temps_min']
corr(data)
Explanation: If you have domain knowledge, you will know that the difference between max and min temperature might be highly correlated to sun hours. Let us add such column.
End of explanation
feature_list = ['CLD_at_9am', 'CLD_at_3pm', 'RH_at_3pm', 'RH_at_9am', 'Temps_min', 'Temps_diff', 'Month']
Explanation: Let's try the features with higher correlations to Sun_level
End of explanation
train, test = train_test_split(data, test_size = 0.2)
Explanation: Build Decision Tree
We preserve 80% of the data as training data, and the rest will be used to tune the classifier.
End of explanation
X_train = train[feature_list]
X_test = test[feature_list]
X_train.head()
Explanation: Now we generate features
End of explanation
y_train = train.Sun_level
y_test = test.Sun_level
Explanation: And we use Sun_level as labels
End of explanation
clf = DecisionTreeClassifier(criterion = "gini")
dtree = clf.fit(X_train, y_train)
Explanation: With features and labels, we can then train our decision tree
End of explanation
dtree.score(X_train,y_train) # output the accuracy of the trained decision tree
Explanation: Evaluations and Investigations of the Trained Decision Tree
Firstly let's see the accuracy of the decision tree on the training dataset
End of explanation
dtree.score(X_test,y_test)
Explanation: Let's see the accuracy on the testing dataset
End of explanation
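A single accuracy number hides which classes are confused with which. As an optional aside, one might also inspect the test-set predictions per class; a sketch:
# Sketch: per-class breakdown of the test-set predictions.
from sklearn.metrics import confusion_matrix, classification_report
y_pred = dtree.predict(X_test)
print(confusion_matrix(y_test, y_pred, labels=['Low', 'Med', 'High']))
print(classification_report(y_test, y_pred))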
feature = pd.DataFrame({'name': pd.Series(feature_list),
'importance': pd.Series(dtree.feature_importances_)})
feature.sort_values(by = 'importance', ascending = False)
Explanation: We can also get the importance of each feature
End of explanation
def experiment(train, test, features, depth=5):
X_train = train[features]
y_train = train.Sun_level
clf = DecisionTreeClassifier(criterion = "gini", max_depth = depth, random_state = RANDOM_SEED)
dtree = clf.fit(X_train, y_train)
err_training = dtree.score(X_train,y_train)
X_test = test[features]
y_test = test.Sun_level
err_testing = dtree.score(X_test,y_test)
err_diff = err_training - err_testing
print('{}, {}, {}'.format(err_training, err_testing, err_diff))
return err_training, err_testing
def evaluate(features, repeat_times = 10, depth = 5):
print('features: {}'.format(features))
print('max_depth: {}\n'.format(depth))
total_err_training = 0
total_err_testing = 0
for i in range(repeat_times):
train, test = train_test_split(data, test_size = 0.2, random_state = RANDOM_SEED + i)
err_training, err_testing = experiment(train, test, features, depth)
total_err_training += err_training
total_err_testing += err_testing
print('==============')
print('avg. training accuracy:\t{}'.format(total_err_training/repeat_times))
print('avg. testing accuracy:\t{}'.format(total_err_testing/repeat_times))
print('avg. difference:\t{}'.format((total_err_training - total_err_testing)/repeat_times))
print('============================')
feature_list = ['CLD_at_9am', 'CLD_at_3pm', 'RH_at_3pm', 'RH_at_9am', 'Temps_min', 'Temps_diff', 'Month']
evaluate(feature_list, 10, 6)
evaluate(feature_list, 10, 5)
evaluate(feature_list, 10, 4)
evaluate(feature_list, 10, 3)
Explanation: Apparently the above tree is overfitting. One way to deal with it is to change the maximum height of the decision tree.
End of explanation
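An alternative to the manual evaluate() loop above is cross-validated grid search over max_depth. The sketch below assumes the older scikit-learn API used elsewhere in this notebook, where GridSearchCV lives in sklearn.grid_search; in newer releases it is sklearn.model_selection.GridSearchCV.
# Sketch: tune max_depth with cross-validation instead of a manual loop.
from sklearn.grid_search import GridSearchCV  # sklearn.model_selection in newer versions
param_grid = {'max_depth': [2, 3, 4, 5, 6, 8, 10]}
grid = GridSearchCV(DecisionTreeClassifier(criterion='gini', random_state=RANDOM_SEED),
                    param_grid, cv=5)
grid.fit(data[feature_list], data.Sun_level)
print(grid.best_params_, grid.best_score_)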
X = data[feature_list]
y = data.Sun_level
clf = DecisionTreeClassifier(criterion = "gini", max_depth = 4, random_state = RANDOM_SEED)
dtree = clf.fit(X, y)
data_missing_sun_hours['Temps_diff'] = data_missing_sun_hours['Temps_max'] - data_missing_sun_hours['Temps_min']
X_pred = data_missing_sun_hours[feature_list]
data_missing_sun_hours['Sun_level_pred'] = dtree.predict(X_pred)
data_missing_sun_hours
Explanation: Predict the missing Sun_level value
End of explanation
import pydotplus
import sys
# sys.path.append("C:\\Program Files (x86)\\Graphviz2.38\bin")
dotfile = StringIO()
export_graphviz(dtree, out_file = dotfile, feature_names = X_train.columns)
graph = pydotplus.graph_from_dot_data(dotfile.getvalue())
Image(graph.create_png())
Explanation: Although we cannot remember what happened on these 7 days (can you?), it seems the predictions are probably correct. E.g.,
* the first instance is predicted as 'high' and it is a summer day, without rain or even cloud;
* the last instance is predicted as 'low' because there was heavy rainfall (Rain = 61.0 mm). Note that the model didn't use the rain feature at all!
Visualizing the Decision Tree
You need to install pydotplus (use pip install pydotplus) and graphviz on your computer to get the following code to run
End of explanation
graph.write_pdf("./asset/dtree.pdf")
Explanation: You can generate a pdf file if you want
End of explanation |
13,824 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: 语言翻译
在此项目中,你将了解神经网络机器翻译这一领域。你将用由英语和法语语句组成的数据集,训练一个序列到序列模型(sequence to sequence model),该模型能够将新的英语句子翻译成法语。
获取数据
因为将整个英语语言内容翻译成法语需要大量训练时间,所以我们提供了一小部分的英语语料库。
Step3: 探索数据
研究 view_sentence_range,查看并熟悉该数据的不同部分。
Step6: 实现预处理函数
文本到单词 id
和之前的 RNN 一样,你必须首先将文本转换为数字,这样计算机才能读懂。在函数 text_to_ids() 中,你需要将单词中的 source_text 和 target_text 转为 id。但是,你需要在 target_text 中每个句子的末尾,添加 <EOS> 单词 id。这样可以帮助神经网络预测句子应该在什么地方结束。
你可以通过以下代码获取 <EOS> 单词ID:
python
target_vocab_to_int['<EOS>']
你可以使用 source_vocab_to_int 和 target_vocab_to_int 获得其他单词 id。
Step8: Preprocess All the Data and Save It
Run the code cell below to preprocess all the data and save it to a file.
Step10: Checkpoint
This is your first checkpoint. If you ever decide to come back to this notebook or need to restart it, you can pick up from here. The preprocessed data has been saved to disk.
Step12: Check the Version of TensorFlow and Access to a GPU
This check makes sure you are using the correct version of TensorFlow and that a GPU is available.
Step15: Build the Neural Network
You will build the components needed for a sequence-to-sequence model by implementing the following functions:
model_inputs
process_decoding_input
encoding_layer
decoding_layer_train
decoding_layer_infer
decoding_layer
seq2seq_model
Input
Implement the model_inputs() function to create TF placeholders for the neural network. It should create the following placeholders:
An input text placeholder named "input", using the TF Placeholder name parameter (rank 2).
A targets placeholder (rank 2).
A learning-rate placeholder (rank 0).
A keep-probability placeholder named "keep_prob", using the TF Placeholder name parameter (rank 0).
Return the placeholders in the following tuple: (input, targets, learning rate, keep probability)
Step18: Process Decoding Input
Use TensorFlow to implement process_decoding_input: remove the last word id from each batch in target_data and prepend the GO id to the beginning of each batch.
Step21: Encoding
Implement encoding_layer() to create an encoder RNN layer using tf.nn.dynamic_rnn().
Step24: Decoding - Training
Create the training logits using tf.contrib.seq2seq.simple_decoder_fn_train() and tf.contrib.seq2seq.dynamic_rnn_decoder(). Apply output_fn to the tf.contrib.seq2seq.dynamic_rnn_decoder() outputs.
Step27: Decoding - Inference
Create the inference logits using tf.contrib.seq2seq.simple_decoder_fn_inference() and tf.contrib.seq2seq.dynamic_rnn_decoder().
Step30: Build the Decoding Layer
Implement decoding_layer() to create the decoder RNN layer.
Create a decoding RNN cell using rnn_size and num_layers.
Create an output function using a lambda that turns the input, i.e. the logits, into class logits.
Use the decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) function to get the training logits.
Use the decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob) function to get the inference logits.
Note: you will need tf.variable_scope to share variables between the training and inference logits.
Step33: Build the Neural Network
Apply the functions you implemented above to:
Apply embedding to the input data for the encoder.
Encode the input using encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob).
Process the target data with the process_decoding_input(target_data, target_vocab_to_int, batch_size) function.
Apply embedding to the target data for the decoder.
Decode the encoded input using decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob).
Step34: Train the Neural Network
Hyperparameters
Tune the following parameters:
Set epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set num_layers to the number of layers.
Set encoding_embedding_size to the size of the encoder embedding.
Set decoding_embedding_size to the size of the decoder embedding.
Set learning_rate to the learning rate.
Set keep_probability to the dropout keep probability.
Step36: Build the Graph
Build the graph using the neural network you implemented.
Step39: Train
Train the neural network on the preprocessed data. If you have a hard time getting a low loss, check the forums to see if anyone ran into the same problem.
Step41: Save Parameters
Save the batch_size and save_path parameters for inference.
Step43: Checkpoint
Step46: Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.
Convert the sentence to lowercase.
Convert words into ids using vocab_to_int.
Convert words not in the vocabulary to the <UNK> word id.
Step48: Translation
Translate translate_sentence from English to French. | Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import problem_unittests as tests
source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)
Explanation: 语言翻译
在此项目中,你将了解神经网络机器翻译这一领域。你将用由英语和法语语句组成的数据集,训练一个序列到序列模型(sequence to sequence model),该模型能够将新的英语句子翻译成法语。
获取数据
因为将整个英语语言内容翻译成法语需要大量训练时间,所以我们提供了一小部分的英语语料库。
End of explanation
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words in source: {}'.format(len({word: None for word in source_text.split()})))
print('Roughly the number of unique words in target: {}'.format(len({word: None for word in target_text.split()})))
sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))
print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
Explanation: 探索数据
研究 view_sentence_range,查看并熟悉该数据的不同部分。
End of explanation
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
Convert source and target text to proper word ids
:param source_text: String that contains all the source text.
:param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: A tuple of lists (source_id_text, target_id_text)
# TODO: Implement Function
source_sentences = source_text.split('\n')
source_id_text = []
for sentence in source_sentences:
source_id_text.append([source_vocab_to_int[word] for word in sentence.split()])
target_sentences = target_text.split('\n')
target_id_text = []
for sentence in target_sentences:
target_id_text.append([target_vocab_to_int[word] for word in sentence.split()]+[target_vocab_to_int['<EOS>']])
return source_id_text, target_id_text
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_text_to_ids(text_to_ids)
Explanation: 实现预处理函数
文本到单词 id
和之前的 RNN 一样,你必须首先将文本转换为数字,这样计算机才能读懂。在函数 text_to_ids() 中,你需要将单词中的 source_text 和 target_text 转为 id。但是,你需要在 target_text 中每个句子的末尾,添加 <EOS> 单词 id。这样可以帮助神经网络预测句子应该在什么地方结束。
你可以通过以下代码获取 <EOS> 单词ID:
python
target_vocab_to_int['<EOS>']
你可以使用 source_vocab_to_int 和 target_vocab_to_int 获得其他单词 id。
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)
Explanation: 预处理所有数据并保存
运行以下代码单元,预处理所有数据,并保存到文件中。
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
import helper
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
Explanation: 检查点
这是你的第一个检查点。如果你什么时候决定再回到该记事本,或需要重新启动该记事本,可以从这里继续。预处理的数据已保存到磁盘上。
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) in [LooseVersion('1.0.0'), LooseVersion('1.0.1')], 'This project requires TensorFlow version 1.0 You are using {}'.format(tf.__version__)
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
Explanation: 检查 TensorFlow 版本,确认可访问 GPU
这一检查步骤,可以确保你使用的是正确版本的 TensorFlow,并且能够访问 GPU。
End of explanation
def model_inputs():
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate, keep probability)
# TODO: Implement Function
input_ = tf.placeholder(tf.int32, [None, None], name='input')
targets = tf.placeholder(tf.int32, [None, None], name='targets')
learning_rate = tf.placeholder(tf.float32, name='learning_rate')
keep_prob = tf.placeholder(tf.float32, None, name='keep_prob')
return input_, targets, learning_rate, keep_prob
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_model_inputs(model_inputs)
Explanation: 构建神经网络
你将通过实现以下函数,构建出要构建一个序列到序列模型所需的组件:
model_inputs
process_decoding_input
encoding_layer
decoding_layer_train
decoding_layer_infer
decoding_layer
seq2seq_model
输入
实现 model_inputs() 函数,为神经网络创建 TF 占位符。该函数应该创建以下占位符:
名为 “input” 的输入文本占位符,并使用 TF Placeholder 名称参数(等级(Rank)为 2)。
目标占位符(等级为 2)。
学习速率占位符(等级为 0)。
名为 “keep_prob” 的保留率占位符,并使用 TF Placeholder 名称参数(等级为 0)。
在以下元祖(tuple)中返回占位符:(输入、目标、学习速率、保留率)
End of explanation
def process_decoding_input(target_data, target_vocab_to_int, batch_size):
Preprocess target data for decoding
:param target_data: Target Placeholder
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param batch_size: Batch Size
:return: Preprocessed target data
# TODO: Implement Function
del_last_datas = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1,1])
go_ids = tf.fill([batch_size, 1], target_vocab_to_int['<GO>'])
return tf.concat([go_ids, del_last_datas], axis=1)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_process_decoding_input(process_decoding_input)
Explanation: 处理解码输入
使用 TensorFlow 实现 process_decoding_input,以便删掉 target_data 中每个批次的最后一个单词 ID,并将 GO ID 放到每个批次的开头。
End of explanation
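A throwaway check can make the transformation above concrete: the last column of each batch is dropped and the GO id is prepended. The toy vocabulary below, with a GO id of 1, is purely illustrative and is not part of the graded cells.
# Sketch: run process_decoding_input on a tiny hand-made batch.
demo_graph = tf.Graph()
with demo_graph.as_default():
    demo_targets = tf.constant([[10, 11, 12], [20, 21, 22]], dtype=tf.int32)
    demo_processed = process_decoding_input(demo_targets, {'<GO>': 1}, 2)
with tf.Session(graph=demo_graph) as demo_sess:
    print(demo_sess.run(demo_processed))  # expected: [[ 1 10 11] [ 1 20 21]]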
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob):
Create encoding layer
:param rnn_inputs: Inputs for the RNN
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param keep_prob: Dropout keep probability
:return: RNN state
# TODO: Implement Function
def lstm_cell():
lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
return tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
cell = tf.contrib.rnn.MultiRNNCell([lstm_cell() for _ in range(num_layers)])
outputs, final_state = tf.nn.dynamic_rnn(cell, rnn_inputs, dtype=tf.float32)
return final_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_encoding_layer(encoding_layer)
Explanation: 编码
实现 encoding_layer(),以使用 tf.nn.dynamic_rnn() 创建编码器 RNN 层级。
End of explanation
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope,
output_fn, keep_prob):
Create a decoding layer for training
:param encoder_state: Encoder State
:param dec_cell: Decoder RNN Cell
:param dec_embed_input: Decoder embedded input
:param sequence_length: Sequence Length
:param decoding_scope: TensorFlow Variable Scope for decoding
:param output_fn: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: Train Logits
# TODO: Implement Function
# Training Decoder
train_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_train(encoder_state)
train_pred, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(
dec_cell, train_decoder_fn, dec_embed_input, sequence_length, scope=decoding_scope)
# Apply output function
train_logits = output_fn(tf.nn.dropout(train_pred, keep_prob))
return train_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_train(decoding_layer_train)
Explanation: 解码 - 训练
使用 tf.contrib.seq2seq.simple_decoder_fn_train() 和 tf.contrib.seq2seq.dynamic_rnn_decoder() 创建训练分对数(training logits)。将 output_fn 应用到 tf.contrib.seq2seq.dynamic_rnn_decoder() 输出上。
End of explanation
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id,
maximum_length, vocab_size, decoding_scope, output_fn, keep_prob):
Create a decoding layer for inference
:param encoder_state: Encoder state
:param dec_cell: Decoder RNN Cell
:param dec_embeddings: Decoder embeddings
:param start_of_sequence_id: GO ID
:param end_of_sequence_id: EOS Id
:param maximum_length: The maximum allowed time steps to decode
:param vocab_size: Size of vocabulary
:param decoding_scope: TensorFlow Variable Scope for decoding
:param output_fn: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: Inference Logits
# TODO: Implement Function
infer_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_inference(
output_fn, encoder_state, dec_embeddings, start_of_sequence_id, end_of_sequence_id,
maximum_length - 1, vocab_size)
inference_logits, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(dec_cell, infer_decoder_fn, scope=decoding_scope)
return inference_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_infer(decoding_layer_infer)
Explanation: 解码 - 推论
使用 tf.contrib.seq2seq.simple_decoder_fn_inference() 和 tf.contrib.seq2seq.dynamic_rnn_decoder() 创建推论分对数(inference logits)。
End of explanation
def decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size,
num_layers, target_vocab_to_int, keep_prob):
Create decoding layer
:param dec_embed_input: Decoder embedded input
:param dec_embeddings: Decoder embeddings
:param encoder_state: The encoded state
:param vocab_size: Size of vocabulary
:param sequence_length: Sequence Length
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param keep_prob: Dropout keep probability
:return: Tuple of (Training Logits, Inference Logits)
# TODO: Implement Function
# Decoder RNNs
def lstm_cell():
lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
return tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
dec_cell = tf.contrib.rnn.MultiRNNCell([lstm_cell() for _ in range(num_layers)])
with tf.variable_scope("decoding") as decoding_scope:
output_fn = lambda x: tf.contrib.layers.fully_connected(
x, vocab_size, activation_fn=None, scope=decoding_scope)
training_decoder_output = decoding_layer_train(
encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope,
output_fn, keep_prob)
with tf.variable_scope("decoding", reuse=True) as decoding_scope:
start_of_sequence_id = target_vocab_to_int["<GO>"]
end_of_sequence_id = target_vocab_to_int["<EOS>"]
inference_decoder_output = decoding_layer_infer(
encoder_state, dec_cell, dec_embeddings,start_of_sequence_id, end_of_sequence_id,
sequence_length, vocab_size, decoding_scope, output_fn, keep_prob)
return training_decoder_output, inference_decoder_output
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer(decoding_layer)
Explanation: 构建解码层级
实现 decoding_layer() 以创建解码器 RNN 层级。
使用 rnn_size 和 num_layers 创建解码 RNN 单元。
使用 lambda 创建输出函数,将输入,也就是分对数转换为类分对数(class logits)。
使用 decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) 函数获取训练分对数。
使用 decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob) 函数获取推论分对数。
注意:你将需要使用 tf.variable_scope 在训练和推论分对数间分享变量。
End of explanation
def seq2seq_model(input_data, target_data, keep_prob, batch_size, sequence_length, source_vocab_size, target_vocab_size,
enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int):
Build the Sequence-to-Sequence part of the neural network
:param input_data: Input placeholder
:param target_data: Target placeholder
:param keep_prob: Dropout keep probability placeholder
:param batch_size: Batch Size
:param sequence_length: Sequence Length
:param source_vocab_size: Source vocabulary size
:param target_vocab_size: Target vocabulary size
:param enc_embedding_size: Encoder embedding size
:param dec_embedding_size: Decoder embedding size
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: Tuple of (Training Logits, Inference Logits)
# TODO: Implement Function
# Encoder embedding
enc_embed_input = tf.contrib.layers.embed_sequence(input_data, source_vocab_size, enc_embedding_size)
# Encoder
enc_state = encoding_layer(enc_embed_input, rnn_size, num_layers, keep_prob)
# Process Decoding Input
dec_input = process_decoding_input(target_data, target_vocab_to_int, batch_size)
# Decoder Embedding
dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, dec_embedding_size]))
dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)
train_logits, refer_logits = decoding_layer(
dec_embed_input, dec_embeddings, enc_state, target_vocab_size,
sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob)
return train_logits, refer_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_seq2seq_model(seq2seq_model)
Explanation: 构建神经网络
应用你在上方实现的函数,以:
向编码器的输入数据应用嵌入。
使用 encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob) 编码输入。
使用 process_decoding_input(target_data, target_vocab_to_int, batch_size) 函数处理目标数据。
向解码器的目标数据应用嵌入。
使用 decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob) 解码编码的输入数据。
End of explanation
# Number of Epochs
epochs = 5
# Batch Size
batch_size = 256
# RNN Size
rnn_size = 256
# Number of Layers
num_layers = 2
# Embedding Size
encoding_embedding_size = 200
decoding_embedding_size = 300
# Learning Rate
learning_rate = 0.002
# Dropout Keep Probability
keep_probability = 0.8
Explanation: 训练神经网络
超参数
调试以下参数:
将 epochs 设为 epoch 次数。
将 batch_size 设为批次大小。
将 rnn_size 设为 RNN 的大小。
将 num_layers 设为层级数量。
将 encoding_embedding_size 设为编码器嵌入大小。
将 decoding_embedding_size 设为解码器嵌入大小
将 learning_rate 设为训练速率。
将 keep_probability 设为丢弃保留率(Dropout keep probability)。
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
save_path = 'checkpoints/dev'
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
max_source_sentence_length = max([len(sentence) for sentence in source_int_text])
train_graph = tf.Graph()
with train_graph.as_default():
input_data, targets, lr, keep_prob = model_inputs()
sequence_length = tf.placeholder_with_default(max_source_sentence_length, None, name='sequence_length')
input_shape = tf.shape(input_data)
train_logits, inference_logits = seq2seq_model(
tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, sequence_length, len(source_vocab_to_int), len(target_vocab_to_int),
encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int)
tf.identity(inference_logits, 'logits')
with tf.name_scope("optimization"):
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
train_logits,
targets,
tf.ones([input_shape[0], sequence_length]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
Explanation: 构建图表
使用你实现的神经网络构建图表。
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import time
def get_accuracy(target, logits):
Calculate accuracy
max_seq = max(target.shape[1], logits.shape[1])
if max_seq - target.shape[1]:
target = np.pad(
target,
[(0,0),(0,max_seq - target.shape[1])],
'constant')
if max_seq - logits.shape[1]:
logits = np.pad(
logits,
[(0,0),(0,max_seq - logits.shape[1]), (0,0)],
'constant')
return np.mean(np.equal(target, np.argmax(logits, 2)))
train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = helper.pad_sentence_batch(source_int_text[:batch_size])
valid_target = helper.pad_sentence_batch(target_int_text[:batch_size])
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epochs):
for batch_i, (source_batch, target_batch) in enumerate(
helper.batch_data(train_source, train_target, batch_size)):
start_time = time.time()
_, loss = sess.run(
[train_op, cost],
{input_data: source_batch,
targets: target_batch,
lr: learning_rate,
sequence_length: target_batch.shape[1],
keep_prob: keep_probability})
batch_train_logits = sess.run(
inference_logits,
{input_data: source_batch, keep_prob: 1.0})
batch_valid_logits = sess.run(
inference_logits,
{input_data: valid_source, keep_prob: 1.0})
train_acc = get_accuracy(target_batch, batch_train_logits)
valid_acc = get_accuracy(np.array(valid_target), batch_valid_logits)
end_time = time.time()
print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.3f}, Validation Accuracy: {:>6.3f}, Loss: {:>6.3f}'
.format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_path)
print('Model Trained and Saved')
Explanation: 训练
利用预处理的数据训练神经网络。如果很难获得低损失值,请访问我们的论坛,看看其他人是否遇到了相同的问题。
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params(save_path)
Explanation: 保存参数
保存 batch_size 和 save_path 参数以进行推论(for inference)。
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()
Explanation: 检查点
End of explanation
def sentence_to_seq(sentence, vocab_to_int):
Convert a sentence to a sequence of ids
:param sentence: String
:param vocab_to_int: Dictionary to go from the words to an id
:return: List of word ids
# TODO: Implement Function
return [vocab_to_int.get(word, vocab_to_int["<UNK>"]) for word in sentence.lower().split()]
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_sentence_to_seq(sentence_to_seq)
Explanation: 句子到序列
要向模型提供要翻译的句子,你首先需要预处理该句子。实现函数 sentence_to_seq() 以预处理新的句子。
将句子转换为小写形式
使用 vocab_to_int 将单词转换为 id
如果单词不在词汇表中,将其转换为<UNK> 单词 id
End of explanation
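A quick, optional sanity check: a word that is presumably not in the small vocabulary (here 'quixotic') should come back as the <UNK> id.
# Sketch: out-of-vocabulary words should map to the <UNK> id.
print(sentence_to_seq('he saw a quixotic yellow truck', source_vocab_to_int))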
translate_sentence = 'he saw a old yellow truck .'
DON'T MODIFY ANYTHING IN THIS CELL
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_path + '.meta')
loader.restore(sess, load_path)
input_data = loaded_graph.get_tensor_by_name('input:0')
logits = loaded_graph.get_tensor_by_name('logits:0')
keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
translate_logits = sess.run(logits, {input_data: [translate_sentence], keep_prob: 1.0})[0]
print('Input')
print(' Word Ids: {}'.format([i for i in translate_sentence]))
print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))
print('\nPrediction')
print(' Word Ids: {}'.format([i for i in np.argmax(translate_logits, 1)]))
print(' French Words: {}'.format([target_int_to_vocab[i] for i in np.argmax(translate_logits, 1)]))
Explanation: 翻译
将 translate_sentence 从英语翻译成法语。
End of explanation |
13,825 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
automaton.coaccessible
Create a new automaton from the coaccessible part of the input, i.e., the subautomaton whose states can reach a final state.
Preconditions
Step1: The following automaton has states that cannot reach any final state
Step2: Calling coaccessible returns the same automaton, but without its non-coaccessible states | Python Code:
import vcsn
Explanation: automaton.coaccessible
Create a new automaton from the coaccessible part of the input, i.e., the subautomaton whose states can reach a final state.
Preconditions:
- None
Postconditions:
- Result.is_coaccessible
See also:
- automaton.is_coaccessible
- automaton.accessible
- automaton.trim
Examples
End of explanation
%%automaton a
context = "lal_char(abc), b"
$ -> 0
0 -> 1 a
1 -> $
2 -> 0 a
1 -> 3 a
a.is_coaccessible()
Explanation: The following automaton has states that cannot reach any final state:
End of explanation
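The See also list above mentions accessible and trim; assuming the same automaton a, a short aside like the sketch below shows how they relate to coaccessible (trim keeps only states that are both accessible and coaccessible).
# Aside (sketch): the related operations from the "See also" list.
a.accessible()
a.trim().is_coaccessible()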
a.coaccessible()
a.coaccessible().is_coaccessible()
Explanation: Calling coaccessible returns the same automaton, but without its non-coaccessible states:
End of explanation |
13,826 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
I have the following kind of strings in my column seen below. I would like to parse out everything after the last _ of each string, and if there is no _ then leave the string as-is. (as my below try will just exclude strings with no _) | Problem:
import pandas as pd
strs = ['Stackoverflow_1234',
'Stack_Over_Flow_1234',
'Stackoverflow',
'Stack_Overflow_1234']
df = pd.DataFrame(data={'SOURCE_NAME': strs})
def g(df):
df['SOURCE_NAME'] = df['SOURCE_NAME'].str.rsplit('_', 1).str.get(0)
return df
df = g(df.copy()) |
13,827 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step2: version 1.0.1
+
Web Server Log Analysis with Apache Spark
This lab will demonstrate how easy it is to perform web server log analysis with Apache Spark.
Server log analysis is an ideal use case for Spark. It's a very large, common data source and contains a rich set of information. Spark allows you to store your logs in files on disk cheaply, while still providing a quick and simple way to perform data analysis on them. This homework will show you how to use Apache Spark on real-world text-based production logs and fully harness the power of that data. Log data comes from many sources, such as web, file, and compute servers, application logs, user-generated content, and can be used for monitoring servers, improving business and customer intelligence, building recommendation systems, fraud detection, and much more.
How to complete this assignment
This assignment is broken up into sections with bite-sized examples for demonstrating Spark functionality for log processing. For each problem, you should start by thinking about the algorithm that you will use to efficiently process the log in a parallel, distributed manner. This means using the various RDD operations along with lambda functions that are applied at each worker.
This assignment consists of 4 parts
Step4: (1b) Configuration and Initial RDD Creation
We are ready to specify the input log file and create an RDD containing the parsed log file data. The log file has already been downloaded for you.
To create the primary RDD that we'll use in the rest of this assignment, we first load the text file using sc.textfile(logFile) to convert each line of the file into an element in an RDD.
Next, we use map(parseApacheLogLine) to apply the parse function to each element (that is, a line from the log file) in the RDD and turn each line into a pair Row object.
Finally, we cache the RDD in memory since we'll use it throughout this notebook.
Step5: (1c) Data Cleaning
Notice that there are a large number of log lines that failed to parse. Examine the sample of invalid lines and compare them to the correctly parsed line, an example is included below. Based on your observations, alter the APACHE_ACCESS_LOG_PATTERN regular expression below so that the failed lines will correctly parse, and press Shift-Enter to rerun parseLogs().
127.0.0.1 - - [01/Aug/1995
Step6: Part 2
Step7: (2b) Example
Step9: (2c) Example
Step10: (2d) Example
Step11: (2e) Example
Step12: (2f) Example
Step13: Part 3
Step14: (3b) Exercise
Step15: (3c) Exercise
Step16: (3d) Exercise
Step17: (3e) Exercise
Step18: (3f) Exercise
Step19: Part 4
Step20: (4b) Exercise
Step21: (4c) Exercise
Step22: (4d) Exercise
Step23: (4e) Exercise
Step24: (4f) Exercise
Step25: (4g) Exercise
Step26: (4h) Exercise
Step27: (4i) Exercise | Python Code:
import re
import datetime
from pyspark.sql import Row
month_map = {'Jan': 1, 'Feb': 2, 'Mar':3, 'Apr':4, 'May':5, 'Jun':6, 'Jul':7,
'Aug':8, 'Sep': 9, 'Oct':10, 'Nov': 11, 'Dec': 12}
def parse_apache_time(s):
Convert Apache time format into a Python datetime object
Args:
s (str): date and time in Apache time format
Returns:
datetime: datetime object (ignore timezone for now)
return datetime.datetime(int(s[7:11]),
month_map[s[3:6]],
int(s[0:2]),
int(s[12:14]),
int(s[15:17]),
int(s[18:20]))
def parseApacheLogLine(logline):
Parse a line in the Apache Common Log format
Args:
logline (str): a line of text in the Apache Common Log format
Returns:
tuple: either a dictionary containing the parts of the Apache Access Log and 1,
or the original invalid log line and 0
match = re.search(APACHE_ACCESS_LOG_PATTERN, logline)
if match is None:
return (logline, 0)
size_field = match.group(9)
if size_field == '-':
size = long(0)
else:
size = long(match.group(9))
return (Row(
host = match.group(1),
client_identd = match.group(2),
user_id = match.group(3),
date_time = parse_apache_time(match.group(4)),
method = match.group(5),
endpoint = match.group(6),
protocol = match.group(7),
response_code = int(match.group(8)),
content_size = size
), 1)
# A regular expression pattern to extract fields from the log line
APACHE_ACCESS_LOG_PATTERN = '^(\S+) (\S+) (\S+) \[([\w:/]+\s[+\-]\d{4})\] "(\S+) (\S+)\s*(\S*)" (\d{3}) (\S+)'
Explanation: version 1.0.1
+
Web Server Log Analysis with Apache Spark
This lab will demonstrate how easy it is to perform web server log analysis with Apache Spark.
Server log analysis is an ideal use case for Spark. It's a very large, common data source and contains a rich set of information. Spark allows you to store your logs in files on disk cheaply, while still providing a quick and simple way to perform data analysis on them. This homework will show you how to use Apache Spark on real-world text-based production logs and fully harness the power of that data. Log data comes from many sources, such as web, file, and compute servers, application logs, user-generated content, and can be used for monitoring servers, improving business and customer intelligence, building recommendation systems, fraud detection, and much more.
How to complete this assignment
This assignment is broken up into sections with bite-sized examples for demonstrating Spark functionality for log processing. For each problem, you should start by thinking about the algorithm that you will use to efficiently process the log in a parallel, distributed manner. This means using the various RDD operations along with lambda functions that are applied at each worker.
This assignment consists of 4 parts:
Part 1: Apache Web Server Log file format
Part 2: Sample Analyses on the Web Server Log File
Part 3: Analyzing Web Server Log File
Part 4: Exploring 404 Response Codes
Part 1: Apache Web Server Log file format
The log files that we use for this assignment are in the Apache Common Log Format (CLF). The log file entries produced in CLF will look something like this:
127.0.0.1 - - [01/Aug/1995:00:00:01 -0400] "GET /images/launch-logo.gif HTTP/1.0" 200 1839
Each part of this log entry is described below.
127.0.0.1
This is the IP address (or host name, if available) of the client (remote host) which made the request to the server.
-
The "hyphen" in the output indicates that the requested piece of information (user identity from remote machine) is not available.
-
The "hyphen" in the output indicates that the requested piece of information (user identity from local logon) is not available.
[01/Aug/1995:00:00:01 -0400]
The time that the server finished processing the request. The format is:
[day/month/year:hour:minute:second timezone]
* ####day = 2 digits
* ####month = 3 letters
* ####year = 4 digits
* ####hour = 2 digits
* ####minute = 2 digits
* ####second = 2 digits
* ####zone = (+ | -) 4 digits
"GET /images/launch-logo.gif HTTP/1.0"
This is the first line of the request string from the client. It consists of a three components: the request method (e.g., GET, POST, etc.), the endpoint (a Uniform Resource Identifier), and the client protocol version.
200
This is the status code that the server sends back to the client. This information is very valuable, because it reveals whether the request resulted in a successful response (codes beginning in 2), a redirection (codes beginning in 3), an error caused by the client (codes beginning in 4), or an error in the server (codes beginning in 5). The full list of possible status codes can be found in the HTTP specification (RFC 2616 section 10).
1839
The last entry indicates the size of the object returned to the client, not including the response headers. If no content was returned to the client, this value will be "-" (or sometimes 0).
Note that log files contain information supplied directly by the client, without escaping. Therefore, it is possible for malicious clients to insert control-characters in the log files, so care must be taken in dealing with raw logs.
NASA-HTTP Web Server Log
For this assignment, we will use a data set from NASA Kennedy Space Center WWW server in Florida. The full data set is freely available (http://ita.ee.lbl.gov/html/contrib/NASA-HTTP.html) and contains two month's of all HTTP requests. We are using a subset that only contains several days worth of requests.
(1a) Parsing Each Log Line
Using the CLF as defined above, we create a regular expression pattern to extract the nine fields of the log line using the Python regular expression search function. The function returns a pair consisting of a Row object and 1. If the log line fails to match the regular expression, the function returns a pair consisting of the log line string and 0. A '-' value in the content size field is cleaned up by substituting it with 0. The function converts the log line's date string into a Python datetime object using the given parse_apache_time function.
End of explanation
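Before parsing the full file, it can be reassuring to try parseApacheLogLine on the sample CLF line quoted above; a small sketch follows (the line is the documentation example, not taken from the NASA log).
# Sketch: parse the sample CLF line shown above.
sample_line = '127.0.0.1 - - [01/Aug/1995:00:00:01 -0400] "GET /images/launch-logo.gif HTTP/1.0" 200 1839'
parsed, ok = parseApacheLogLine(sample_line)
print ok  # 1 means the regular expression matched
print parsed.host, parsed.endpoint, parsed.response_code, parsed.content_size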
import sys
import os
from test_helper import Test
baseDir = os.path.join('data')
inputPath = os.path.join('cs100', 'lab2', 'apache.access.log.PROJECT')
logFile = os.path.join(baseDir, inputPath)
def parseLogs():
Read and parse log file
parsed_logs = (sc
.textFile(logFile)
.map(parseApacheLogLine)
.cache())
access_logs = (parsed_logs
.filter(lambda s: s[1] == 1)
.map(lambda s: s[0])
.cache())
failed_logs = (parsed_logs
.filter(lambda s: s[1] == 0)
.map(lambda s: s[0]))
failed_logs_count = failed_logs.count()
if failed_logs_count > 0:
print 'Number of invalid logline: %d' % failed_logs.count()
for line in failed_logs.take(20):
print 'Invalid logline: %s' % line
print 'Read %d lines, successfully parsed %d lines, failed to parse %d lines' % (parsed_logs.count(), access_logs.count(), failed_logs.count())
return parsed_logs, access_logs, failed_logs
parsed_logs, access_logs, failed_logs = parseLogs()
Explanation: (1b) Configuration and Initial RDD Creation
We are ready to specify the input log file and create an RDD containing the parsed log file data. The log file has already been downloaded for you.
To create the primary RDD that we'll use in the rest of this assignment, we first load the text file using sc.textfile(logFile) to convert each line of the file into an element in an RDD.
Next, we use map(parseApacheLogLine) to apply the parse function to each element (that is, a line from the log file) in the RDD and turn each line into a pair Row object.
Finally, we cache the RDD in memory since we'll use it throughout this notebook.
End of explanation
# TODO: Replace <FILL IN> with appropriate code
# This was originally '^(\S+) (\S+) (\S+) \[([\w:/]+\s[+\-]\d{4})\] "(\S+) (\S+)\s*(\S*)" (\d{3}) (\S+)'
APACHE_ACCESS_LOG_PATTERN = '^(\S+) (\S+) (\S+) \[([\w:/]+\s[+\-]\d{4})\] "(\S+) (\S+)\s*(\S*)\s*" (\d{3}) (\S+)'
parsed_logs, access_logs, failed_logs = parseLogs()
# TEST Data cleaning (1c)
Test.assertEquals(failed_logs.count(), 0, 'incorrect failed_logs.count()')
Test.assertEquals(parsed_logs.count(), 1043177 , 'incorrect parsed_logs.count()')
Test.assertEquals(access_logs.count(), parsed_logs.count(), 'incorrect access_logs.count()')
Explanation: (1c) Data Cleaning
Notice that there are a large number of log lines that failed to parse. Examine the sample of invalid lines and compare them to the correctly parsed line, an example is included below. Based on your observations, alter the APACHE_ACCESS_LOG_PATTERN regular expression below so that the failed lines will correctly parse, and press Shift-Enter to rerun parseLogs().
127.0.0.1 - - [01/Aug/1995:00:00:01 -0400] "GET /images/launch-logo.gif HTTP/1.0" 200 1839
If you are not familiar with the Python regular expression search function, now would be a good time to check the documentation. One tip that might be useful is to use an online tester like http://pythex.org or http://www.pythonregex.com. To use it, copy and paste the regular expression string below (located between the single quotes ') and test it against one of the 'Invalid logline' entries above.
End of explanation
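To convince yourself the revised pattern handles the problematic lines, you can feed it a request field with a trailing space, which the original pattern rejected. The line below is constructed for illustration rather than copied from the log.
# Sketch: a request field with a trailing space now parses successfully.
test_line = '127.0.0.1 - - [01/Aug/1995:00:00:01 -0400] "GET /images/launch-logo.gif HTTP/1.0 " 200 1839'
print parseApacheLogLine(test_line)[1]  # expect 1 with the updated pattern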
# Calculate statistics based on the content size.
content_sizes = access_logs.map(lambda log: log.content_size).cache()
print 'Content Size Avg: %i, Min: %i, Max: %s' % (
content_sizes.reduce(lambda a, b : a + b) / content_sizes.count(),
content_sizes.min(),
content_sizes.max())
Explanation: Part 2: Sample Analyses on the Web Server Log File
Now that we have an RDD containing the log file as a set of Row objects, we can perform various analyses.
(2a) Example: Content Size Statistics
Let's compute some statistics about the sizes of content being returned by the web server. In particular, we'd like to know what are the average, minimum, and maximum content sizes.
We can compute the statistics by applying a map to the access_logs RDD. The lambda function we want for the map is to extract the content_size field from the RDD. The map produces a new RDD containing only the content_sizes (one element for each Row object in the access_logs RDD). To compute the minimum and maximum statistics, we can use min() and max() functions on the new RDD. We can compute the average statistic by using the reduce function with a lambda function that sums the two inputs, which represent two elements from the new RDD that are being reduced together. The result of the reduce() is the total content size from the log and it is to be divided by the number of requests as determined using the count() function on the new RDD.
End of explanation
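As an aside, PySpark can compute these summary statistics in a single pass with the stats() action on a numeric RDD; the sketch below reuses the content_sizes RDD defined above and is just an alternative way to cross-check the numbers.
size_stats = content_sizes.stats()  # returns a StatCounter computed in one pass over the data
print 'Count: %d, Mean: %.2f, Min: %d, Max: %d' % (size_stats.count(), size_stats.mean(), size_stats.min(), size_stats.max())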
# Response Code to Count
responseCodeToCount = (access_logs
.map(lambda log: (log.response_code, 1))
.reduceByKey(lambda a, b : a + b)
.cache())
responseCodeToCountList = responseCodeToCount.take(100)
print 'Found %d response codes' % len(responseCodeToCountList)
print 'Response Code Counts: %s' % responseCodeToCountList
assert len(responseCodeToCountList) == 7
assert sorted(responseCodeToCountList) == [(200, 940847), (302, 16244), (304, 79824), (403, 58), (404, 6185), (500, 2), (501, 17)]
Explanation: (2b) Example: Response Code Analysis
Next, let's look at the response codes that appear in the log. As with the content size analysis, first we create a new RDD by using a lambda function to extract the response_code field from the access_logs RDD. The difference here is that we will use a pair tuple instead of just the field itself. Using a pair tuple consisting of the response code and 1 will let us count how many records have a particular response code. Using the new RDD, we perform a reduceByKey function. reduceByKey performs a reduce on a per-key basis by applying the lambda function pairwise to the values that share the same key. We use the simple lambda function of adding the two values. Then, we cache the resulting RDD and create a list by using the take function.
End of explanation
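To see reduceByKey in isolation, here is a tiny toy example (it assumes the notebook's existing SparkContext sc); values that share a key are combined pairwise with the supplied function.
toy_pairs = sc.parallelize([(200, 1), (404, 1), (200, 1), (200, 1), (304, 1)])
toy_counts = toy_pairs.reduceByKey(lambda a, b: a + b)
print toy_counts.collect()  # e.g. [(200, 3), (304, 1), (404, 1)] -- ordering may vary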
labels = responseCodeToCount.map(lambda (x, y): x).collect()
print labels
count = access_logs.count()
fracs = responseCodeToCount.map(lambda (x, y): (float(y) / count)).collect()
print fracs
import matplotlib.pyplot as plt
def pie_pct_format(value):
Determine the appropriate format string for the pie chart percentage label
Args:
value: value of the pie slice
Returns:
str: formatted string label; if the slice is too small to fit, returns an empty string for the label
return '' if value < 7 else '%.0f%%' % value
fig = plt.figure(figsize=(4.5, 4.5), facecolor='white', edgecolor='white')
colors = ['yellowgreen', 'lightskyblue', 'gold', 'purple', 'lightcoral', 'yellow', 'black']
explode = (0.05, 0.05, 0.1, 0, 0, 0, 0)
patches, texts, autotexts = plt.pie(fracs, labels=labels, colors=colors,
explode=explode, autopct=pie_pct_format,
shadow=False, startangle=125)
for text, autotext in zip(texts, autotexts):
if autotext.get_text() == '':
text.set_text('') # If the slice is too small to fit, don't show a text label
plt.legend(labels, loc=(0.80, -0.1), shadow=True)
pass
Explanation: (2c) Example: Response Code Graphing with matplotlib
Now, let's visualize the results from the last example using matplotlib. First we need to extract the labels and fractions for the graph. We do this with two separate map functions, each with a lambda function. The first map function extracts a list of the response code values, and the second map function extracts a list of the per-response-code counts divided by the total size of the access logs. Next, we create a figure with the figure() constructor and use the pie() method to create the pie plot.
End of explanation
# Any hosts that have accessed the server more than 10 times.
hostCountPairTuple = access_logs.map(lambda log: (log.host, 1))
hostSum = hostCountPairTuple.reduceByKey(lambda a, b : a + b)
hostMoreThan10 = hostSum.filter(lambda s: s[1] > 10)
hostsPick20 = (hostMoreThan10
.map(lambda s: s[0])
.take(20))
print 'Any 20 hosts that have accessed the server more than 10 times: %s' % hostsPick20
# An example: [u'204.120.34.185', u'204.243.249.9', u'slip1-32.acs.ohio-state.edu', u'lapdog-14.baylor.edu', u'199.77.67.3', u'gs1.cs.ttu.edu', u'haskell.limbex.com', u'alfred.uib.no', u'146.129.66.31', u'manaus.bologna.maraut.it', u'dialup98-110.swipnet.se', u'slip-ppp02.feldspar.com', u'ad03-053.compuserve.com', u'srawlin.opsys.nwa.com', u'199.202.200.52', u'ix-den7-23.ix.netcom.com', u'151.99.247.114', u'w20-575-104.mit.edu', u'205.25.227.20', u'ns.rmc.com']
Explanation: (2d) Example: Frequent Hosts
Let's look at hosts that have accessed the server multiple times (e.g., more than ten times). As with the response code analysis in (2b), first we create a new RDD by using a lambda function to extract the host field from the access_logs RDD using a pair tuple consisting of the host and 1 which will let us count how many records were created by a particular host's request. Using the new RDD, we perform a reduceByKey function with a lambda function that adds the two values. We then filter the result based on the count of accesses by each host (the second element of each pair) being greater than ten. Next, we extract the host name by performing a map with a lambda function that returns the first element of each pair. Finally, we extract 20 elements from the resulting RDD - note that the choice of which elements are returned is not guaranteed to be deterministic.
End of explanation
endpoints = (access_logs
.map(lambda log: (log.endpoint, 1))
.reduceByKey(lambda a, b : a + b)
.cache())
ends = endpoints.map(lambda (x, y): x).collect()
counts = endpoints.map(lambda (x, y): y).collect()
fig = plt.figure(figsize=(8,4.2), facecolor='white', edgecolor='white')
plt.axis([0, len(ends), 0, max(counts)])
plt.grid(b=True, which='major', axis='y')
plt.xlabel('Endpoints')
plt.ylabel('Number of Hits')
plt.plot(counts)
pass
Explanation: (2e) Example: Visualizing Endpoints
Now, let's visualize the number of hits to endpoints (URIs) in the log. To perform this task, we first create a new RDD by using a lambda function to extract the endpoint field from the access_logs RDD as a pair tuple consisting of the endpoint and 1, which will let us count how many times each endpoint was requested. Using the new RDD, we perform a reduceByKey function with a lambda function that adds the two values. We then cache the results.
Next we visualize the results using matplotlib. We previously imported the matplotlib.pyplot library, so we do not need to import it again. We perform two separate map functions with lambda functions. The first map function extracts a list of endpoint values, and the second map function extracts a list of the visits per endpoint. Next, we create a figure with the figure() constructor, set various features of the plot (axis limits, grid lines, and labels), and use the plot() method to create the line plot.
End of explanation
# Top Endpoints
endpointCounts = (access_logs
.map(lambda log: (log.endpoint, 1))
.reduceByKey(lambda a, b : a + b))
topEndpoints = endpointCounts.takeOrdered(10, lambda s: -1 * s[1])
print 'Top Ten Endpoints: %s' % topEndpoints
assert topEndpoints == [(u'/images/NASA-logosmall.gif', 59737), (u'/images/KSC-logosmall.gif', 50452), (u'/images/MOSAIC-logosmall.gif', 43890), (u'/images/USA-logosmall.gif', 43664), (u'/images/WORLD-logosmall.gif', 43277), (u'/images/ksclogo-medium.gif', 41336), (u'/ksc.html', 28582), (u'/history/apollo/images/apollo-logo1.gif', 26778), (u'/images/launch-logo.gif', 24755), (u'/', 20292)], 'incorrect Top Ten Endpoints'
Explanation: (2f) Example: Top Endpoints
For the final example, we'll look at the top endpoints (URIs) in the log. To determine them, we first create a new RDD by using a lambda function to extract the endpoint field from the access_logs RDD as a pair tuple consisting of the endpoint and 1, which will let us count how many times each endpoint was requested. Using the new RDD, we perform a reduceByKey function with a lambda function that adds the two values. We then extract the top ten endpoints by performing a takeOrdered with a value of 10 and a lambda function that multiplies the count (the second element of each pair) by -1, which sorts in descending order so that the most frequently requested endpoints come first.
End of explanation
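The negation trick is easy to verify on a toy RDD: takeOrdered returns the smallest elements according to the key function, so negating the count puts the largest counts first. A small sketch (assuming the notebook's sc):
toy = sc.parallelize([('a', 5), ('b', 12), ('c', 1), ('d', 7)])
print toy.takeOrdered(2, lambda s: s[1])       # two smallest counts: [('c', 1), ('a', 5)]
print toy.takeOrdered(2, lambda s: -1 * s[1])  # two largest counts: [('b', 12), ('d', 7)]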
# TODO: Replace <FILL IN> with appropriate code
# HINT: Each of these <FILL IN> below could be completed with a single transformation or action.
# You are welcome to structure your solution in a different way, so long as
# you ensure the variables used in the next Test section are defined (ie. endpointSum, topTenErrURLs).
not200 = access_logs.filter(lambda log: log.response_code != 200)
endpointCountPairTuple = not200.map(lambda log: (log.endpoint, 1))
endpointSum = endpointCountPairTuple.reduceByKey(lambda a, b : a + b)
topTenErrURLs = endpointSum.takeOrdered(10, lambda s: -1 * s[1])
print 'Top Ten failed URLs: %s' % topTenErrURLs
# TEST Top ten error endpoints (3a)
Test.assertEquals(endpointSum.count(), 7689, 'incorrect count for endpointSum')
Test.assertEquals(topTenErrURLs, [(u'/images/NASA-logosmall.gif', 8761), (u'/images/KSC-logosmall.gif', 7236), (u'/images/MOSAIC-logosmall.gif', 5197), (u'/images/USA-logosmall.gif', 5157), (u'/images/WORLD-logosmall.gif', 5020), (u'/images/ksclogo-medium.gif', 4728), (u'/history/apollo/images/apollo-logo1.gif', 2907), (u'/images/launch-logo.gif', 2811), (u'/', 2199), (u'/images/ksclogosmall.gif', 1622)], 'incorrect Top Ten failed URLs (topTenErrURLs)')
Explanation: Part 3: Analyzing Web Server Log File
Now it is your turn to perform analyses on web server log files.
(3a) Exercise: Top Ten Error Endpoints
What are the top ten endpoints which did not have return code 200? Create a sorted list containing the top ten endpoints and the number of times that they were accessed with a non-200 return code.
Think about the steps that you need to perform to determine which endpoints did not have a 200 return code, how you will uniquely count those endpoints, and how you will sort the list.
You might want to refer back to the previous Lab (Lab 1 Word Count) for insights.
End of explanation
# TODO: Replace <FILL IN> with appropriate code
# HINT: Do you recall the tips from (3a)? Each of these <FILL IN> could be an transformation or action.
hosts = access_logs.map(lambda log: log.host)
uniqueHosts = hosts.distinct()
uniqueHostCount = uniqueHosts.count()
print 'Unique hosts: %d' % uniqueHostCount
# TEST Number of unique hosts (3b)
Test.assertEquals(uniqueHostCount, 54507, 'incorrect uniqueHostCount')
Explanation: (3b) Exercise: Number of Unique Hosts
How many unique hosts are there in the entire log?
Think about the steps that you need to perform to count the number of different hosts in the log.
End of explanation
# TODO: Replace <FILL IN> with appropriate code
dayToHostPairTuple = access_logs.map(lambda log: (log.date_time.day, log.host)).distinct()
dayGroupedHosts = dayToHostPairTuple.groupByKey()
dayHostCount = dayGroupedHosts.map(lambda (day, hosts): (day, len(hosts)))
dailyHosts = (dayHostCount
.sortByKey()
.cache())
dailyHostsList = dailyHosts.take(30)
print 'Unique hosts per day: %s' % dailyHostsList
# TEST Number of unique daily hosts (3c)
Test.assertEquals(dailyHosts.count(), 21, 'incorrect dailyHosts.count()')
Test.assertEquals(dailyHostsList, [(1, 2582), (3, 3222), (4, 4190), (5, 2502), (6, 2537), (7, 4106), (8, 4406), (9, 4317), (10, 4523), (11, 4346), (12, 2864), (13, 2650), (14, 4454), (15, 4214), (16, 4340), (17, 4385), (18, 4168), (19, 2550), (20, 2560), (21, 4134), (22, 4456)], 'incorrect dailyHostsList')
Test.assertTrue(dailyHosts.is_cached, 'incorrect dailyHosts.is_cached')
Explanation: (3c) Exercise: Number of Unique Daily Hosts
For an advanced exercise, let's determine the number of unique hosts in the entire log on a day-by-day basis. This computation will give us counts of the number of unique daily hosts. We'd like a list sorted by increasing day of the month which includes the day of the month and the associated number of unique hosts for that day. Make sure you cache the resulting RDD dailyHosts so that we can reuse it in the next exercise.
Think about the steps that you need to perform to count the number of different hosts that make requests each day.
Since the log only covers a single month, you can ignore the month.
End of explanation
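One design note: groupByKey materializes every host name for a day in memory, but only the count is needed here. An equivalent and usually cheaper variant, sketched below as an optional alternative, counts the distinct (day, host) pairs directly.
dailyHostsAlt = (access_logs
                 .map(lambda log: (log.date_time.day, log.host))
                 .distinct()
                 .map(lambda pair: (pair[0], 1))
                 .reduceByKey(lambda a, b: a + b)
                 .sortByKey())
print dailyHostsAlt.take(5)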
# TODO: Replace <FILL IN> with appropriate code
daysWithHosts = dailyHosts.map(lambda log: log[0]).collect()
hosts = dailyHosts.map(lambda log: log[1]).collect()
# TEST Visualizing unique daily hosts (3d)
test_days = range(1, 23)
test_days.remove(2)
Test.assertEquals(daysWithHosts, test_days, 'incorrect days')
Test.assertEquals(hosts, [2582, 3222, 4190, 2502, 2537, 4106, 4406, 4317, 4523, 4346, 2864, 2650, 4454, 4214, 4340, 4385, 4168, 2550, 2560, 4134, 4456], 'incorrect hosts')
fig = plt.figure(figsize=(8,4.5), facecolor='white', edgecolor='white')
plt.axis([min(daysWithHosts), max(daysWithHosts), 0, max(hosts)+500])
plt.grid(b=True, which='major', axis='y')
plt.xlabel('Day')
plt.ylabel('Hosts')
plt.plot(daysWithHosts, hosts)
pass
Explanation: (3d) Exercise: Visualizing the Number of Unique Daily Hosts
Using the results from the previous exercise, use matplotlib to plot a "Line" graph of the unique host requests by day.
daysWithHosts should be a list of days and hosts should be a list of the number of unique hosts for each corresponding day.
* How could you convert an RDD into a list? See the collect() method.*
End of explanation
# TODO: Replace <FILL IN> with appropriate code
dayAndHostTuple = access_logs.map(lambda log: (log.date_time.day, log.host))
groupedByDay = dayAndHostTuple.groupByKey()
sortedByDay = groupedByDay.sortByKey()
avgDailyReqPerHost = (sortedByDay
.map(lambda(day, requests): (day, len(requests)))
.join(dailyHosts)
.map(lambda(day, (totalRequests, numOfHosts)): (day, totalRequests/numOfHosts))
.sortByKey()
.cache())
avgDailyReqPerHostList = avgDailyReqPerHost.take(30)
print 'Average number of daily requests per Hosts is %s' % avgDailyReqPerHostList
# TEST Average number of daily requests per hosts (3e)
Test.assertEquals(avgDailyReqPerHostList, [(1, 13), (3, 12), (4, 14), (5, 12), (6, 12), (7, 13), (8, 13), (9, 14), (10, 13), (11, 14), (12, 13), (13, 13), (14, 13), (15, 13), (16, 13), (17, 13), (18, 13), (19, 12), (20, 12), (21, 13), (22, 12)], 'incorrect avgDailyReqPerHostList')
Test.assertTrue(avgDailyReqPerHost.is_cached, 'incorrect avgDailyReqPerHost.is_cache')
Explanation: (3e) Exercise: Average Number of Daily Requests per Hosts
Next, let's determine the average number of requests on a day-by-day basis. We'd like a list by increasing day of the month and the associated average number of requests per host for that day. Make sure you cache the resulting RDD avgDailyReqPerHost so that we can reuse it in the next exercise.
To compute the average number of requests per host, get the total number of requests across all hosts and divide that by the number of unique hosts.
Since the log only covers a single month, you can skip checking for the month.
Also, to keep it simple, when calculating the approximate average, use the integer value - you do not need to upcast to float.
End of explanation
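If the join step feels opaque, this toy example (assuming the notebook's sc) shows what join does with two pair RDDs: it matches elements by key and pairs up their values, which is exactly how total requests and unique hosts are lined up per day.
requests_per_day = sc.parallelize([(1, 26), (2, 39)])
hosts_per_day = sc.parallelize([(1, 2), (2, 3)])
joined = requests_per_day.join(hosts_per_day)  # elements look like (day, (total_requests, unique_hosts))
print joined.map(lambda (day, (total, hosts)): (day, total / hosts)).collect()  # [(1, 13), (2, 13)]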
# TODO: Replace <FILL IN> with appropriate code
daysWithAvg = avgDailyReqPerHost.map(lambda (day, avg): day).collect()
avgs = avgDailyReqPerHost.map(lambda (day, avg): avg).collect()
# TEST Average Daily Requests per Unique Host (3f)
Test.assertEquals(daysWithAvg, [1, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22], 'incorrect days')
Test.assertEquals(avgs, [13, 12, 14, 12, 12, 13, 13, 14, 13, 14, 13, 13, 13, 13, 13, 13, 13, 12, 12, 13, 12], 'incorrect avgs')
fig = plt.figure(figsize=(8,4.2), facecolor='white', edgecolor='white')
plt.axis([0, max(daysWithAvg), 0, max(avgs)+2])
plt.grid(b=True, which='major', axis='y')
plt.xlabel('Day')
plt.ylabel('Average')
plt.plot(daysWithAvg, avgs)
pass
Explanation: (3f) Exercise: Visualizing the Average Daily Requests per Unique Host
Using the result avgDailyReqPerHost from the previous exercise, use matplotlib to plot a "Line" graph of the average daily requests per unique host by day.
daysWithAvg should be a list of days and avgs should be a list of the average daily requests per unique host for each corresponding day.
End of explanation
# TODO: Replace <FILL IN> with appropriate code
badRecords = (access_logs
.filter(lambda log: log.response_code == 404)
.cache())
print 'Found %d 404 URLs' % badRecords.count()
# TEST Counting 404 (4a)
Test.assertEquals(badRecords.count(), 6185, 'incorrect badRecords.count()')
Test.assertTrue(badRecords.is_cached, 'incorrect badRecords.is_cached')
Explanation: Part 4: Exploring 404 Response Codes
Let's drill down and explore the error 404 response code records. 404 errors are returned when an endpoint is not found by the server (i.e., a missing page or object).
(4a) Exercise: Counting 404 Response Codes
Create an RDD containing only log records with a 404 response code. Make sure you cache() the RDD badRecords as we will use it in the rest of this exercise.
How many 404 records are in the log?
End of explanation
# TODO: Replace <FILL IN> with appropriate code
badEndpoints = badRecords.map(lambda log: log.endpoint)
badUniqueEndpoints = badEndpoints.distinct()
badUniqueEndpointsPick40 = badUniqueEndpoints.take(40)
print '404 URLS: %s' % badUniqueEndpointsPick40
# TEST Listing 404 records (4b)
badUniqueEndpointsSet40 = set(badUniqueEndpointsPick40)
Test.assertEquals(len(badUniqueEndpointsSet40), 40, 'badUniqueEndpointsPick40 not distinct')
Explanation: (4b) Exercise: Listing 404 Response Code Records
Using the RDD containing only log records with a 404 response code that you cached in part (4a), print out a list of up to 40 distinct endpoints that generate 404 errors - no endpoint should appear more than once in your list.
End of explanation
# TODO: Replace <FILL IN> with appropriate code
badEndpointsCountPairTuple = badRecords.map(lambda log: (log.endpoint, 1))
badEndpointsSum = badEndpointsCountPairTuple.reduceByKey(lambda a, b: a + b)
badEndpointsTop20 = badEndpointsSum.takeOrdered(20, lambda a: -a[1])
print 'Top Twenty 404 URLs: %s' % badEndpointsTop20
# TEST Top twenty 404 URLs (4c)
Test.assertEquals(badEndpointsTop20, [(u'/pub/winvn/readme.txt', 633), (u'/pub/winvn/release.txt', 494), (u'/shuttle/missions/STS-69/mission-STS-69.html', 431), (u'/images/nasa-logo.gif', 319), (u'/elv/DELTA/uncons.htm', 178), (u'/shuttle/missions/sts-68/ksc-upclose.gif', 156), (u'/history/apollo/sa-1/sa-1-patch-small.gif', 146), (u'/images/crawlerway-logo.gif', 120), (u'/://spacelink.msfc.nasa.gov', 117), (u'/history/apollo/pad-abort-test-1/pad-abort-test-1-patch-small.gif', 100), (u'/history/apollo/a-001/a-001-patch-small.gif', 97), (u'/images/Nasa-logo.gif', 85), (u'/shuttle/resources/orbiters/atlantis.gif', 64), (u'/history/apollo/images/little-joe.jpg', 62), (u'/images/lf-logo.gif', 59), (u'/shuttle/resources/orbiters/discovery.gif', 56), (u'/shuttle/resources/orbiters/challenger.gif', 54), (u'/robots.txt', 53), (u'/elv/new01.gif>', 43), (u'/history/apollo/pad-abort-test-2/pad-abort-test-2-patch-small.gif', 38)], 'incorrect badEndpointsTop20')
Explanation: (4c) Exercise: Listing the Top Twenty 404 Response Code Endpoints
Using the RDD containing only log records with a 404 response code that you cached in part (4a), print out a list of the top twenty endpoints that generate the most 404 errors.
Remember, top endpoints should be in sorted order
End of explanation
# TODO: Replace <FILL IN> with appropriate code
errHostsCountPairTuple = badRecords.map(lambda log: (log.host, 1))
errHostsSum = errHostsCountPairTuple.reduceByKey(lambda a, b: a + b)
errHostsTop25 = errHostsSum.takeOrdered(25, lambda a: -a[1])
print 'Top 25 hosts that generated errors: %s' % errHostsTop25
# TEST Top twenty-five 404 response code hosts (4d)
Test.assertEquals(len(errHostsTop25), 25, 'length of errHostsTop25 is not 25')
Test.assertEquals(len(set(errHostsTop25) - set([(u'maz3.maz.net', 39), (u'piweba3y.prodigy.com', 39), (u'gate.barr.com', 38), (u'm38-370-9.mit.edu', 37), (u'ts8-1.westwood.ts.ucla.edu', 37), (u'nexus.mlckew.edu.au', 37), (u'204.62.245.32', 33), (u'163.206.104.34', 27), (u'spica.sci.isas.ac.jp', 27), (u'www-d4.proxy.aol.com', 26), (u'www-c4.proxy.aol.com', 25), (u'203.13.168.24', 25), (u'203.13.168.17', 25), (u'internet-gw.watson.ibm.com', 24), (u'scooter.pa-x.dec.com', 23), (u'crl5.crl.com', 23), (u'piweba5y.prodigy.com', 23), (u'onramp2-9.onr.com', 22), (u'slip145-189.ut.nl.ibm.net', 22), (u'198.40.25.102.sap2.artic.edu', 21), (u'gn2.getnet.com', 20), (u'msp1-16.nas.mr.net', 20), (u'isou24.vilspa.esa.es', 19), (u'dial055.mbnet.mb.ca', 19), (u'tigger.nashscene.com', 19)])), 0, 'incorrect errHostsTop25')
Explanation: (4d) Exercise: Listing the Top Twenty-five 404 Response Code Hosts
Instead of looking at the endpoints that generated 404 errors, let's look at the hosts that encountered 404 errors. Using the RDD containing only log records with a 404 response code that you cached in part (4a), print out a list of the top twenty-five hosts that generate the most 404 errors.
End of explanation
# TODO: Replace <FILL IN> with appropriate code
errDateCountPairTuple = badRecords.map(lambda log: (log.date_time.day, 1))
errDateSum = errDateCountPairTuple.reduceByKey(lambda a, b: a + b)
errDateSorted = (errDateSum
.sortByKey()
.cache())
errByDate = errDateSorted.collect()
print '404 Errors by day: %s' % errByDate
# TEST 404 response codes per day (4e)
Test.assertEquals(errByDate, [(1, 243), (3, 303), (4, 346), (5, 234), (6, 372), (7, 532), (8, 381), (9, 279), (10, 314), (11, 263), (12, 195), (13, 216), (14, 287), (15, 326), (16, 258), (17, 269), (18, 255), (19, 207), (20, 312), (21, 305), (22, 288)], 'incorrect errByDate')
Test.assertTrue(errDateSorted.is_cached, 'incorrect errDateSorted.is_cached')
Explanation: (4e) Exercise: Listing 404 Response Codes per Day
Let's explore the 404 records temporally. Break down the 404 requests by day (cache() the RDD errDateSorted) and get the daily counts sorted by day as a list.
Since the log only covers a single month, you can ignore the month in your checks.
End of explanation
# TODO: Replace <FILL IN> with appropriate code
daysWithErrors404 = errDateSorted.map(lambda (day, total): day).collect()
errors404ByDay = errDateSorted.map(lambda (day, total): total).collect()
# TEST Visualizing the 404 Response Codes by Day (4f)
Test.assertEquals(daysWithErrors404, [1, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22], 'incorrect daysWithErrors404')
Test.assertEquals(errors404ByDay, [243, 303, 346, 234, 372, 532, 381, 279, 314, 263, 195, 216, 287, 326, 258, 269, 255, 207, 312, 305, 288], 'incorrect errors404ByDay')
fig = plt.figure(figsize=(8,4.2), facecolor='white', edgecolor='white')
plt.axis([0, max(daysWithErrors404), 0, max(errors404ByDay)])
plt.grid(b=True, which='major', axis='y')
plt.xlabel('Day')
plt.ylabel('404 Errors')
plt.plot(daysWithErrors404, errors404ByDay)
pass
Explanation: (4f) Exercise: Visualizing the 404 Response Codes by Day
Using the results from the previous exercise, use matplotlib to plot a "Line" or "Bar" graph of the 404 response codes by day.
End of explanation
# TODO: Replace <FILL IN> with appropriate code
topErrDate = errDateSorted.takeOrdered(5, lambda a: -a[1])
print 'Top Five dates for 404 requests: %s' % topErrDate
# TEST Five dates for 404 requests (4g)
Test.assertEquals(topErrDate, [(7, 532), (8, 381), (6, 372), (4, 346), (15, 326)], 'incorrect topErrDate')
Explanation: (4g) Exercise: Top Five Days for 404 Response Codes
Using the RDD errDateSorted you cached in part (4e), what are the top five days for 404 response codes, and what are the corresponding counts of 404 response codes?
End of explanation
# TODO: Replace <FILL IN> with appropriate code
hourCountPairTuple = badRecords.map(lambda log: (log.date_time.hour, 1))
hourRecordsSum = hourCountPairTuple.reduceByKey(lambda a, b: a + b)
hourRecordsSorted = (hourRecordsSum
.sortByKey()
.cache())
errHourList = hourRecordsSorted.collect()
print 'Top hours for 404 requests: %s' % errHourList
# TEST Hourly 404 response codes (4h)
Test.assertEquals(errHourList, [(0, 175), (1, 171), (2, 422), (3, 272), (4, 102), (5, 95), (6, 93), (7, 122), (8, 199), (9, 185), (10, 329), (11, 263), (12, 438), (13, 397), (14, 318), (15, 347), (16, 373), (17, 330), (18, 268), (19, 269), (20, 270), (21, 241), (22, 234), (23, 272)], 'incorrect errHourList')
Test.assertTrue(hourRecordsSorted.is_cached, 'incorrect hourRecordsSorted.is_cached')
Explanation: (4h) Exercise: Hourly 404 Response Codes
Using the RDD badRecords you cached in part (4a), create an RDD containing, for each hour of the day (midnight starts at 0) and in increasing hour order, how many requests had a 404 return code. Cache the resulting RDD hourRecordsSorted and print that as a list.
End of explanation
# TODO: Replace <FILL IN> with appropriate code
hoursWithErrors404 = hourRecordsSorted.map(lambda (hr, total): hr).collect()
errors404ByHours = hourRecordsSorted.map(lambda (hr, total): total).collect()
# TEST Visualizing the 404 Response Codes by Hour (4i)
Test.assertEquals(hoursWithErrors404, [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], 'incorrect hoursWithErrors404')
Test.assertEquals(errors404ByHours, [175, 171, 422, 272, 102, 95, 93, 122, 199, 185, 329, 263, 438, 397, 318, 347, 373, 330, 268, 269, 270, 241, 234, 272], 'incorrect errors404ByHours')
fig = plt.figure(figsize=(8,4.2), facecolor='white', edgecolor='white')
plt.axis([0, max(hoursWithErrors404), 0, max(errors404ByHours)])
plt.grid(b=True, which='major', axis='y')
plt.xlabel('Hour')
plt.ylabel('404 Errors')
plt.plot(hoursWithErrors404, errors404ByHours)
pass
Explanation: (4i) Exercise: Visualizing the 404 Response Codes by Hour
Using the results from the previous exercise, use matplotlib to plot a "Line" or "Bar" graph of the 404 response codes by hour.
End of explanation |
13,828 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Image Classification
In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build a convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.
Get the Data
Run the following cell to download the CIFAR-10 dataset for python.
Step2: Explore the Data
The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following
Step5: Implement Preprocess Functions
Normalize
In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
Step8: One-hot encode
Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, is a list of labels. Implement the function to return the list of labels as a One-Hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.
Hint
Step10: Randomize Data
As you saw from exploring the data above, the order of the samples are randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.
Preprocess all the data and save it
Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
Step12: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step17: Build the network
For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.
Note
Step20: Convolution and Max Pooling Layer
Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling
Step23: Flatten Layer
Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option
Step26: Fully-Connected Layer
Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option
Step29: Output Layer
Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option
Step32: Create Convolutional Model
Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model
Step35: Train the Neural Network
Single Optimization
Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following
Step37: Show Stats
Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
Step38: Hyperparameters
Tune the following parameters
Step40: Train on a Single CIFAR-10 Batch
Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.
Step42: Fully Train the Model
Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.
Step45: Checkpoint
The model has been saved to disk.
Test Model
Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters. | Python Code:
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile
cifar10_dataset_folder_path = 'cifar-10-batches-py'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile('cifar-10-python.tar.gz'):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
urlretrieve(
'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
'cifar-10-python.tar.gz',
pbar.hook)
if not isdir(cifar10_dataset_folder_path):
with tarfile.open('cifar-10-python.tar.gz') as tar:
tar.extractall()
tar.close()
tests.test_folder_path(cifar10_dataset_folder_path)
Explanation: Image Classification
In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build a convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.
Get the Data
Run the following cell to download the CIFAR-10 dataset for python.
End of explanation
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
import numpy as np
# Explore the dataset
batch_id = 1
sample_id = 5
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
Explanation: Explore the Data
The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following:
* airplane
* automobile
* bird
* cat
* deer
* dog
* frog
* horse
* ship
* truck
Understanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for an image and label pair in the batch.
Ask yourself "What are all possible labels?", "What is the range of values for the image data?", "Are the labels in order or random?". Answers to questions like these will help you preprocess the data and end up with better predictions.
End of explanation
def normalize(x):
Normalize a list of sample image data in the range of 0 to 1
: x: List of image data. The image shape is (32, 32, 3)
: return: Numpy array of normalize data
return x/255.0
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_normalize(normalize)
Explanation: Implement Preprocess Functions
Normalize
In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
End of explanation
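Dividing by 255 works because CIFAR-10 pixels are 8-bit values in [0, 255]. A more general min-max rescaling, sketched below with NumPy, produces the same result here and also covers data whose range is not known up front (it assumes the input range is not degenerate).
import numpy as np
def normalize_minmax(x):
    # Rescale to [0, 1] using the observed minimum and maximum
    x = x.astype(np.float32)
    return (x - x.min()) / (x.max() - x.min())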
def one_hot_encode(x):
One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
: x: List of sample Labels
: return: Numpy array of one-hot encoded labels
# TODO: Implement Function
one_hot = np.zeros(shape=(len(x), 10))
for i in range(len(x)):
for j in range(10):
one_hot[i][j] = (j == x[i])
return one_hot
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_one_hot_encode(one_hot_encode)
Explanation: One-hot encode
Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, is a list of labels. Implement the function to return the list of labels as a One-Hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.
Hint: Don't reinvent the wheel.
End of explanation
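The hint about not reinventing the wheel points at existing helpers such as scikit-learn's LabelBinarizer; an even smaller option is an identity-matrix lookup in NumPy. The sketch below is an alternative to the explicit loop above, not the required solution.
import numpy as np
def one_hot_encode_np(x):
    # Row i of the 10x10 identity matrix is the one-hot vector for label i
    return np.eye(10)[np.array(x)]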
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
Explanation: Randomize Data
As you saw from exploring the data above, the order of the samples are randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.
Preprocess all the data and save it
Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import pickle
import problem_unittests as tests
import helper
# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
import tensorflow as tf
def neural_net_image_input(image_shape):
Return a Tensor for a batch of image input
: image_shape: Shape of the images
: return: Tensor for image input.
# TODO: Implement Function
return tf.placeholder(tf.float32, shape=(None,image_shape[0], image_shape[1], image_shape[2]), name='x')
def neural_net_label_input(n_classes):
Return a Tensor for a batch of label input
: n_classes: Number of classes
: return: Tensor for label input.
# TODO: Implement Function
return tf.placeholder(tf.float32, shape=(None, n_classes), name='y')
def neural_net_keep_prob_input():
Return a Tensor for keep probability
: return: Tensor for keep probability.
# TODO: Implement Function
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
return keep_prob
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
Explanation: Build the network
For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.
Note: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except the layers you build in the "Convolutional and Max Pooling Layer" section. TF Layers is similar to Keras's and TFLearn's abstraction of layers, so it's easy to pick up.
However, if you would like to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you would want to use the TF Neural Network version of conv2d, tf.nn.conv2d.
Let's begin!
Input
The neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions
* Implement neural_net_image_input
* Return a TF Placeholder
* Set the shape using image_shape with batch size set to None.
* Name the TensorFlow placeholder "x" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_label_input
* Return a TF Placeholder
* Set the shape using n_classes with batch size set to None.
* Name the TensorFlow placeholder "y" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_keep_prob_input
* Return a TF Placeholder for dropout keep probability.
* Name the TensorFlow placeholder "keep_prob" using the TensorFlow name parameter in the TF Placeholder.
These names will be used at the end of the project to load your saved model.
Note: None for shapes in TensorFlow allows for a dynamic size.
End of explanation
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
Apply convolution then max pooling to x_tensor
:param x_tensor: TensorFlow Tensor
:param conv_num_outputs: Number of outputs for the convolutional layer
:param conv_ksize: kernel size 2-D Tuple for the convolutional layer
:param conv_strides: Stride 2-D Tuple for convolution
:param pool_ksize: kernel size 2-D Tuple for pool
:param pool_strides: Stride 2-D Tuple for pool
: return: A tensor that represents convolution and max pooling of x_tensor
# TODO: Implement Function
input_channel_depth = int(x_tensor.get_shape()[3])
# The shape of the filter weight is (height, width, input_depth, output_depth)
filter_weights = tf.Variable(tf.truncated_normal([*conv_ksize, input_channel_depth, conv_num_outputs], dtype=tf.float32))
# The shape of the biases is equal the the number of outputs of the conv layer
filter_biases = tf.Variable(tf.constant(0, shape=[conv_num_outputs], dtype=tf.float32))
layer = tf.nn.conv2d(input=x_tensor, filter=filter_weights, strides=[1, *conv_strides, 1], padding='SAME')
layer += filter_biases
layer = tf.nn.relu(layer)
layer = tf.nn.max_pool(layer, [1, *pool_ksize, 1], strides=[1, *pool_strides, 1], padding='SAME')
return layer
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_con_pool(conv2d_maxpool)
Explanation: Convolution and Max Pooling Layer
Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling:
* Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor.
* Apply a convolution to x_tensor using weight and conv_strides.
* We recommend you use same padding, but you're welcome to use any padding.
* Add bias
* Add a nonlinear activation to the convolution.
* Apply Max Pooling using pool_ksize and pool_strides.
* We recommend you use same padding, but you're welcome to use any padding.
Note: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. You may still use the shortcut option for all the other layers.
End of explanation
def flatten(x_tensor):
Flatten x_tensor to (Batch Size, Flattened Image Size)
: x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
: return: A tensor of size (Batch Size, Flattened Image Size).
# TODO: Implement Function
return tf.contrib.layers.flatten(x_tensor)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_flatten(flatten)
Explanation: Flatten Layer
Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
End of explanation
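For the 'more of a challenge' route that avoids the contrib package, the same flattening can be done with a plain tf.reshape by multiplying out the known image dimensions. A minimal sketch, assuming a 4-D input tensor:
import tensorflow as tf
def flatten_manual(x_tensor):
    shape = x_tensor.get_shape().as_list()      # [batch, height, width, depth]
    flat_size = shape[1] * shape[2] * shape[3]  # flattened image size
    return tf.reshape(x_tensor, [-1, flat_size])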
def fully_conn(x_tensor, num_outputs):
Apply a fully connected layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
# TODO: Implement Function
shape = x_tensor.get_shape().as_list()
weights = tf.Variable(tf.truncated_normal(
[shape[1], num_outputs],
mean=0.0, stddev=0.70
))
biases = tf.Variable(tf.zeros([num_outputs]))
result = tf.add(tf.matmul(x_tensor, weights), biases)
return tf.nn.relu(result)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_fully_conn(fully_conn)
Explanation: Fully-Connected Layer
Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
End of explanation
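The shortcut option mentioned above amounts to a one-liner; for example, the contrib layers package (already used for flatten) provides a fully connected layer that applies ReLU by default. This is only a sketch of that alternative, assuming TensorFlow 1.x:
import tensorflow as tf
def fully_conn_shortcut(x_tensor, num_outputs):
    # Pass activation_fn=None instead to get the plain linear layer needed for the output layer
    return tf.contrib.layers.fully_connected(x_tensor, num_outputs)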
def output(x_tensor, num_outputs):
Apply a output layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
# TODO: Implement Function
weights = tf.Variable(tf.truncated_normal([x_tensor.get_shape().as_list()[1],num_outputs],mean=0.0, stddev=0.08))
mul = tf.matmul(x_tensor,weights,name='mul')
bias = tf.Variable(tf.zeros(num_outputs))
y = tf.add(mul,bias)
return y
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_output(output)
Explanation: Output Layer
Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
Note: Activation, softmax, or cross entropy should not be applied to this.
End of explanation
def conv_net(x, keep_prob):
Create a convolutional neural network model
: x: Placeholder tensor that holds image data.
: keep_prob: Placeholder tensor that hold dropout keep probability.
: return: Tensor that represents logits
# TODO: Apply 1, 2, or 3 Convolution and Max Pool layers
# Play around with different number of outputs, kernel size and stride
# Function Definition from Above:
layer1 = conv2d_maxpool(x, 24, (3, 3), (1, 1), (2, 2), (2, 2))
layer2 = conv2d_maxpool(layer1, 48, (3, 3), (1, 1), (2, 2), (2, 2))
layer3 = conv2d_maxpool(layer2, 128, (3, 3), (1, 1), (2, 2), (2, 2))
#conv_num_outputs = 32
#num_outputs = 10
#conv_ksize = (3, 3)
#conv_strides = (3, 3)
#pool_ksize = (3, 3)
#pool_strides = (3, 3)
#x_tensor = conv2d_maxpool(x, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
# TODO: Apply a Flatten Layer
# Function Definition from Above:
flat1 = flatten(layer3)
#x_tensor = flatten(x_tensor)
# TODO: Apply 1, 2, or 3 Fully Connected Layers
# Play around with different number of outputs
# Function Definition from Above:
#x_tensor = fully_conn(x_tensor, conv_num_outputs)
fc1 = fully_conn(flat1, 512)
fc1 = tf.nn.dropout(fc1, keep_prob)
fc2 = fully_conn(fc1, 512)
fc2 = tf.nn.dropout(fc2, keep_prob)
#apply dropout
#x_tensor = tf.nn.dropout(x_tensor, keep_prob)
out = output(fc2, 10)
# TODO: Apply an Output Layer
# Set this to the number of classes
# Function Definition from Above:
#x_tensor = output(x_tensor, num_outputs)
# TODO: return output
#return x_tensor
return out
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
##############################
## Build the Neural Network ##
##############################
# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()
# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()
# Model
logits = conv_net(x, keep_prob)
# Name logits Tensor, so that is can be loaded from disk after training
logits = tf.identity(logits, name='logits')
# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)
# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')
tests.test_conv_net(conv_net)
Explanation: Create Convolutional Model
Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model:
Apply 1, 2, or 3 Convolution and Max Pool layers
Apply a Flatten Layer
Apply 1, 2, or 3 Fully Connected Layers
Apply an Output Layer
Return the output
Apply TensorFlow's Dropout to one or more layers in the model using keep_prob.
End of explanation
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
Optimize the session on a batch of images and labels
: session: Current TensorFlow session
: optimizer: TensorFlow optimizer function
: keep_probability: keep probability
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
# TODO: Implement Function
session.run(optimizer, feed_dict={x: feature_batch, y: label_batch, keep_prob: keep_probability})
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_train_nn(train_neural_network)
Explanation: Train the Neural Network
Single Optimization
Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following:
* x for image input
* y for labels
* keep_prob for keep probability for dropout
This function will be called for each batch, so tf.global_variables_initializer() has already been called.
Note: Nothing needs to be returned. This function is only optimizing the neural network.
End of explanation
def print_stats(session, feature_batch, label_batch, cost, accuracy):
Print information about loss and validation accuracy
: session: Current TensorFlow session
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
: cost: TensorFlow cost function
: accuracy: TensorFlow accuracy function
# TODO: Implement Function
loss = session.run(cost, feed_dict={x: feature_batch, y: label_batch, keep_prob: 1.})
valid_acc = session.run(accuracy, feed_dict={x: valid_features, y: valid_labels, keep_prob: 1.})
print('Loss: {:>10.4f} Validation Accuracy: {:.6f}'.format(loss, valid_acc))
Explanation: Show Stats
Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
End of explanation
# TODO: Tune Parameters
epochs = 25
batch_size = 128
keep_probability = .5
Explanation: Hyperparameters
Tune the following parameters:
* Set epochs to the number of iterations until the network stops learning or starts overfitting
* Set batch_size to the highest number that your machine has memory for. Most people set them to common sizes of memory:
* 64
* 128
* 256
* ...
* Set keep_probability to the probability of keeping a node using dropout
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
batch_i = 1
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
Explanation: Train on a Single CIFAR-10 Batch
Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
save_model_path = './image_classification'
print('Training...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
# Loop over all batches
n_batches = 5
for batch_i in range(1, n_batches + 1):
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
# Save Model
saver = tf.train.Saver()
save_path = saver.save(sess, save_model_path)
Explanation: Fully Train the Model
Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import tensorflow as tf
import pickle
import helper
import random
# Set batch size if not already set
try:
if batch_size:
pass
except NameError:
batch_size = 64
save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3
def test_model():
Test the saved model against the test dataset
test_features, test_labels = pickle.load(open('preprocess_training.p', mode='rb'))
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load model
loader = tf.train.import_meta_graph(save_model_path + '.meta')
loader.restore(sess, save_model_path)
# Get Tensors from loaded model
loaded_x = loaded_graph.get_tensor_by_name('x:0')
loaded_y = loaded_graph.get_tensor_by_name('y:0')
loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
# Get accuracy in batches for memory limitations
test_batch_acc_total = 0
test_batch_count = 0
for train_feature_batch, train_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
test_batch_acc_total += sess.run(
loaded_acc,
feed_dict={loaded_x: train_feature_batch, loaded_y: train_label_batch, loaded_keep_prob: 1.0})
test_batch_count += 1
print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))
# Print Random Samples
random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
random_test_predictions = sess.run(
tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)
test_model()
Explanation: Checkpoint
The model has been saved to disk.
Test Model
Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters.
End of explanation |
13,829 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Corrugated Shells
Init symbols for sympy
Step1: Corrugated cylindrical coordinates
Step2: Base Vectors $\vec{R}_1, \vec{R}_2, \vec{R}_3$
Step3: Base Vectors $\vec{R}^1, \vec{R}^2, \vec{R}^3$
Step4: Jacobi matrix
Step5: Metric tensor
${\displaystyle \hat{G}=\sum_{i,j} g^{ij}\vec{R}_i\vec{R}_j}$
Step6: ${\displaystyle \hat{G}=\sum_{i,j} g_{ij}\vec{R}^i\vec{R}^j}$
Step7: Derivatives of vectors
Derivative of base vectors
Step8: $ \frac { d\vec{R_1} } { d\alpha_1} = -\frac {1}{R} \left( 1+\frac{\alpha_3}{R} \right) \vec{R_3} $
Step9: $ \frac { d\vec{R_1} } { d\alpha_3} = \frac {1}{R} \frac {1}{1+\frac{\alpha_3}{R}} \vec{R_1} $
Step10: $ \frac { d\vec{R_3} } { d\alpha_1} = \frac {1}{R} \frac {1}{1+\frac{\alpha_3}{R}} \vec{R_1} $
Step11: $ \frac { d\vec{R_3} } { d\alpha_3} = \vec{0} $
Derivative of vectors
$ \vec{u} = u^1 \vec{R_1} + u^2\vec{R_2} + u^3\vec{R_3} $
$ \frac { d\vec{u} } { d\alpha_1} = \frac { d(u^1\vec{R_1}) } { d\alpha_1} + \frac { d(u^2\vec{R_2}) } { d\alpha_1}+ \frac { d(u^3\vec{R_3}) } { d\alpha_1} = \frac { du^1 } { d\alpha_1} \vec{R_1} + u^1 \frac { d\vec{R_1} } { d\alpha_1} + \frac { du^2 } { d\alpha_1} \vec{R_2} + u^2 \frac { d\vec{R_2} } { d\alpha_1} + \frac { du^3 } { d\alpha_1} \vec{R_3} + u^3 \frac { d\vec{R_3} } { d\alpha_1} = \frac { du^1 } { d\alpha_1} \vec{R_1} - u^1 \frac {1}{R} \left( 1+\frac{\alpha_3}{R} \right) \vec{R_3} + \frac { du^2 } { d\alpha_1} \vec{R_2}+ \frac { du^3 } { d\alpha_1} \vec{R_3} + u^3 \frac {1}{R} \frac {1}{1+\frac{\alpha_3}{R}} \vec{R_1}$
Then
$ \frac { d\vec{u} } { d\alpha_1} = \left( \frac { du^1 } { d\alpha_1} + u^3 \frac {1}{R} \frac {1}{1+\frac{\alpha_3}{R}} \right) \vec{R_1} + \frac { du^2 } { d\alpha_1} \vec{R_2} + \left( \frac { du^3 } { d\alpha_1} - u^1 \frac {1}{R} \left( 1+\frac{\alpha_3}{R} \right) \right) \vec{R_3}$
$ \frac { d\vec{u} } { d\alpha_2} = \frac { d(u^1\vec{R_1}) } { d\alpha_2} + \frac { d(u^2\vec{R_2}) } { d\alpha_2}+ \frac { d(u^3\vec{R_3}) } { d\alpha_2} = \frac { du^1 } { d\alpha_2} \vec{R_1} + \frac { du^2 } { d\alpha_2} \vec{R_2} + \frac { du^3 } { d\alpha_2} \vec{R_3} $
Then
$ \frac { d\vec{u} } { d\alpha_2} = \frac { du^1 } { d\alpha_2} \vec{R_1} + \frac { du^2 } { d\alpha_2} \vec{R_2} + \frac { du^3 } { d\alpha_2} \vec{R_3} $
$ \frac { d\vec{u} } { d\alpha_3} = \frac { d(u^1\vec{R_1}) } { d\alpha_3} + \frac { d(u^2\vec{R_2}) } { d\alpha_3}+ \frac { d(u^3\vec{R_3}) } { d\alpha_3} =
\frac { du^1 } { d\alpha_3} \vec{R_1} + u^1 \frac { d\vec{R_1} } { d\alpha_3} + \frac { du^2 } { d\alpha_3} \vec{R_2} + u^2 \frac { d\vec{R_2} } { d\alpha_3} + \frac { du^3 } { d\alpha_3} \vec{R_3} + u^3 \frac { d\vec{R_3} } { d\alpha_3} = \frac { du^1 } { d\alpha_3} \vec{R_1} + u^1 \frac {1}{R} \frac {1}{1+\frac{\alpha_3}{R}} \vec{R_1} + \frac { du^2 } { d\alpha_3} \vec{R_2}+ \frac { du^3 } { d\alpha_3} \vec{R_3} $
Then
$ \frac { d\vec{u} } { d\alpha_3} = \left( \frac { du^1 } { d\alpha_3} + u^1 \frac {1}{R} \frac {1}{1+\frac{\alpha_3}{R}} \right) \vec{R_1} + \frac { du^2 } { d\alpha_3} \vec{R_2}+ \frac { du^3 } { d\alpha_3} \vec{R_3}$
Gradient of vector
$\nabla_1 u^1 = \frac { \partial u^1 } { \partial \alpha_1} + u^3 \frac {1}{R} \frac {1}{1+\frac{\alpha_3}{R}}$
$\nabla_1 u^2 = \frac { \partial u^2 } { \partial \alpha_1} $
$\nabla_1 u^3 = \frac { \partial u^3 } { \partial \alpha_1} - u^1 \frac {1}{R} \left( 1+\frac{\alpha_3}{R} \right) $
$\nabla_2 u^1 = \frac { \partial u^1 } { \partial \alpha_2}$
$\nabla_2 u^2 = \frac { \partial u^2 } { \partial \alpha_2}$
$\nabla_2 u^3 = \frac { \partial u^3 } { \partial \alpha_2}$
$\nabla_3 u^1 = \frac { \partial u^1 } { \partial \alpha_3} + u^1 \frac {1}{R} \frac {1}{1+\frac{\alpha_3}{R}}$
$\nabla_3 u^2 = \frac { \partial u^2 } { \partial \alpha_3} $
$\nabla_3 u^3 = \frac { \partial u^3 } { \partial \alpha_3}$
$ \nabla \vec{u} = \left(
\begin{array}{ccc}
\nabla_1 u^1 & \nabla_1 u^2 & \nabla_1 u^3 \\
\nabla_2 u^1 & \nabla_2 u^2 & \nabla_2 u^3 \\
\nabla_3 u^1 & \nabla_3 u^2 & \nabla_3 u^3 \\
\end{array}
\right)$
Step12: $
\left(
\begin{array}{c}
\nabla_1 u_1 \ \nabla_2 u_1 \ \nabla_3 u_1 \
\nabla_1 u_2 \ \nabla_2 u_2 \ \nabla_3 u_2 \
\nabla_1 u_3 \ \nabla_2 u_3 \ \nabla_3 u_3 \
\end{array}
\right)
=
\left(
\begin{array}{c}
\left( 1+\frac{\alpha_2}{R} \right)^2 \frac { \partial u^1 } { \partial \alpha_1} + u^3 \frac {\left( 1+\frac{\alpha_3}{R} \right)}{R} \
\left( 1+\frac{\alpha_2}{R} \right)^2 \frac { \partial u^1 } { \partial \alpha_2} \
\left( 1+\frac{\alpha_3}{R} \right)^2 \frac { \partial u^1 } { \partial \alpha_3} + u^1 \frac {\left( 1+\frac{\alpha_3}{R} \right)}{R} \
\frac { \partial u^2 } { \partial \alpha_1} \
\frac { \partial u^2 } { \partial \alpha_2} \
\frac { \partial u^2 } { \partial \alpha_3} \
\frac { \partial u^3 } { \partial \alpha_1} - u^1 \frac {\left( 1+\frac{\alpha_3}{R} \right)}{R} \
\frac { \partial u^3 } { \partial \alpha_2} \
\frac { \partial u^3 } { \partial \alpha_3} \
\end{array}
\right)
$
$
\left(
\begin{array}{c}
\nabla_1 u_1 \ \nabla_2 u_1 \ \nabla_3 u_1 \
\nabla_1 u_2 \ \nabla_2 u_2 \ \nabla_3 u_2 \
\nabla_1 u_3 \ \nabla_2 u_3 \ \nabla_3 u_3 \
\end{array}
\right)
=
B \cdot
\left(
\begin{array}{c}
u^1 \
\frac { \partial u^1 } { \partial \alpha_1} \
\frac { \partial u^1 } { \partial \alpha_2} \
\frac { \partial u^1 } { \partial \alpha_3} \
u^2 \
\frac { \partial u^2 } { \partial \alpha_1} \
\frac { \partial u^2 } { \partial \alpha_2} \
\frac { \partial u^2 } { \partial \alpha_3} \
u^3 \
\frac { \partial u^3 } { \partial \alpha_1} \
\frac { \partial u^3 } { \partial \alpha_2} \
\frac { \partial u^3 } { \partial \alpha_3} \
\end{array}
\right)
$
Step13: Deformations tensor
Step14: Tymoshenko theory
$u^1 \left( \alpha_1, \alpha_2, \alpha_3 \right)=u\left( \alpha_1 \right)+\alpha_3\gamma \left( \alpha_1 \right) $
$u^2 \left( \alpha_1, \alpha_2, \alpha_3 \right)=0 $
$u^3 \left( \alpha_1, \alpha_2, \alpha_3 \right)=w\left( \alpha_1 \right) $
$ \left(
\begin{array}{c}
u^1 \
\frac { \partial u^1 } { \partial \alpha_1} \
\frac { \partial u^1 } { \partial \alpha_2} \
\frac { \partial u^1 } { \partial \alpha_3} \
u^2 \
\frac { \partial u^2 } { \partial \alpha_1} \
\frac { \partial u^2 } { \partial \alpha_2} \
\frac { \partial u^2 } { \partial \alpha_3} \
u^3 \
\frac { \partial u^3 } { \partial \alpha_1} \
\frac { \partial u^3 } { \partial \alpha_2} \
\frac { \partial u^3 } { \partial \alpha_3} \
\end{array}
\right) = T \cdot
\left(
\begin{array}{c}
u \
\frac { \partial u } { \partial \alpha_1} \
\gamma \
\frac { \partial \gamma } { \partial \alpha_1} \
w \
\frac { \partial w } { \partial \alpha_1} \
\end{array}
\right) $
Step15: Elasticity tensor(stiffness tensor)
General form
Step16: Include symmetry
Step17: Isotropic material
Step18: Orthotropic material
Step19: Orthotropic material in shell coordinates
Step20: Physical coordinates
$u^1=\frac{u_{[1]}}{1+\frac{\alpha_3}{R}}$
$\frac{\partial u^1} {\partial \alpha_3}=\frac{1}{1+\frac{\alpha_3}{R}} \frac{\partial u_{[1]}} {\partial \alpha_3} + u_{[1]} \frac{\partial} {\partial \alpha_3} \left( \frac{1}{1+\frac{\alpha_3}{R}} \right) = \frac{1}{1+\frac{\alpha_3}{R}} \frac{\partial u_{[1]}} {\partial \alpha_3} - u_{[1]} \frac{1}{R \left( 1+\frac{\alpha_3}{R} \right)^2} $
Step21: Stiffness tensor
Step22: Tymoshenko
Step23: Square of segment
$A=\frac {\theta}{2} \left( R + h_2 \right)^2-\frac {\theta}{2} \left( R + h_1 \right)^2$
Step24: ${\displaystyle A=\int_{0}^{L}\int_{h_1}^{h_2} \left( 1+\frac{\alpha_3}{R} \right) d \alpha_1 d \alpha_3}, L=R \theta$
Step25: Virtual work
Isotropic material physical coordinates
Step26: Isotropic material physical coordinates - Tymoshenko | Python Code:
from sympy import *
from sympy.vector import CoordSys3D
N = CoordSys3D('N')
x1, x2, x3 = symbols("x_1 x_2 x_3")
alpha1, alpha2, alpha3 = symbols("alpha_1 alpha_2 alpha_3")
R, L, ga, gv = symbols("R L g_a g_v")
init_printing()
Explanation: Corrugated Shells
Init symbols for sympy
End of explanation
a1 = pi / 2 + (L / 2 - alpha1)/R
x = (R + alpha3 + ga * cos(gv * a1)) * cos(a1)
y = alpha2
z = (R + alpha3 + ga * cos(gv * a1)) * sin(a1)
r = x*N.i + y*N.j + z*N.k
Explanation: Corrugated cylindrical coordinates
End of explanation
R1=r.diff(alpha1)
R2=r.diff(alpha2)
R3=r.diff(alpha3)
trigsimp(R1)
R2
R3
Explanation: Base Vectors $\vec{R}_1, \vec{R}_2, \vec{R}_3$
End of explanation
eps=trigsimp(R1.dot(R2.cross(R3)))
R_1=simplify(trigsimp(R2.cross(R3)/eps))
R_2=simplify(trigsimp(R3.cross(R1)/eps))
R_3=simplify(trigsimp(R1.cross(R2)/eps))
R_1
R_2
R_3
Explanation: Base Vectors $\vec{R}^1, \vec{R}^2, \vec{R}^3$
End of explanation
dx1da1=R1.dot(N.i)
dx1da2=R2.dot(N.i)
dx1da3=R3.dot(N.i)
dx2da1=R1.dot(N.j)
dx2da2=R2.dot(N.j)
dx2da3=R3.dot(N.j)
dx3da1=R1.dot(N.k)
dx3da2=R2.dot(N.k)
dx3da3=R3.dot(N.k)
A=Matrix([[dx1da1, dx1da2, dx1da3], [dx2da1, dx2da2, dx2da3], [dx3da1, dx3da2, dx3da3]])
simplify(A)
A_inv = A**-1
trigsimp(A_inv[0,0])
trigsimp(A.det())
Explanation: Jacobi matrix:
$ A = \left(
\begin{array}{ccc}
\frac{\partial x_1}{\partial \alpha_1} & \frac{\partial x_1}{\partial \alpha_2} & \frac{\partial x_1}{\partial \alpha_3} \
\frac{\partial x_2}{\partial \alpha_1} & \frac{\partial x_2}{\partial \alpha_2} & \frac{\partial x_2}{\partial \alpha_3} \
\frac{\partial x_3}{\partial \alpha_1} & \frac{\partial x_3}{\partial \alpha_2} & \frac{\partial x_3}{\partial \alpha_3} \
\end{array}
\right)$
$ \left[
\begin{array}{ccc}
\vec{R}_1 & \vec{R}_2 & \vec{R}_3
\end{array}
\right] = \left[
\begin{array}{ccc}
\vec{e}_1 & \vec{e}_2 & \vec{e}_3
\end{array}
\right] \cdot \left(
\begin{array}{ccc}
\frac{\partial x_1}{\partial \alpha_1} & \frac{\partial x_1}{\partial \alpha_2} & \frac{\partial x_1}{\partial \alpha_3} \
\frac{\partial x_2}{\partial \alpha_1} & \frac{\partial x_2}{\partial \alpha_2} & \frac{\partial x_2}{\partial \alpha_3} \
\frac{\partial x_3}{\partial \alpha_1} & \frac{\partial x_3}{\partial \alpha_2} & \frac{\partial x_3}{\partial \alpha_3} \
\end{array}
\right) = \left[
\begin{array}{ccc}
\vec{e}_1 & \vec{e}_2 & \vec{e}_3
\end{array}
\right] \cdot A$
$ \left[
\begin{array}{ccc}
\vec{e}_1 & \vec{e}_2 & \vec{e}_3
\end{array}
\right] =\left[
\begin{array}{ccc}
\vec{R}_1 & \vec{R}_2 & \vec{R}_3
\end{array}
\right] \cdot A^{-1}$
End of explanation
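# Optional sanity check (an addition, not in the original notebook): after substituting
# sample numeric values for the symbols, A * A_inv should collapse to the 3x3 identity.
sample_values = {R: 2, L: 3, ga: 0.2, gv: 7, alpha1: 0.4, alpha2: 0.1, alpha3: 0.05}
(A * A_inv).subs(sample_values).evalf(6)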
g11=R1.dot(R1)
g12=R1.dot(R2)
g13=R1.dot(R3)
g21=R2.dot(R1)
g22=R2.dot(R2)
g23=R2.dot(R3)
g31=R3.dot(R1)
g32=R3.dot(R2)
g33=R3.dot(R3)
G=Matrix([[g11, g12, g13],[g21, g22, g23], [g31, g32, g33]])
G=trigsimp(G)
G
Explanation: Metric tensor
${\displaystyle \hat{G}=\sum_{i,j} g^{ij}\vec{R}_i\vec{R}_j}$
End of explanation
g_11=R_1.dot(R_1)
g_12=R_1.dot(R_2)
g_13=R_1.dot(R_3)
g_21=R_2.dot(R_1)
g_22=R_2.dot(R_2)
g_23=R_2.dot(R_3)
g_31=R_3.dot(R_1)
g_32=R_3.dot(R_2)
g_33=R_3.dot(R_3)
G_con=Matrix([[g_11, g_12, g_13],[g_21, g_22, g_23], [g_31, g_32, g_33]])
G_con=trigsimp(G_con)
G_con
G_inv = G**-1
G_inv
Explanation: ${\displaystyle \hat{G}=\sum_{i,j} g_{ij}\vec{R}^i\vec{R}^j}$
End of explanation
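# Quick consistency check (added): the covariant and contravariant metric components
# are mutual inverses, so this product is expected to simplify to the identity matrix.
simplify(G * G_con)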
dR1dalpha1 = trigsimp(R1.diff(alpha1))
dR1dalpha1
Explanation: Derivatives of vectors
Derivative of base vectors
End of explanation
dR1dalpha2 = trigsimp(R1.diff(alpha2))
dR1dalpha2
dR1dalpha3 = trigsimp(R1.diff(alpha3))
dR1dalpha3
Explanation: $ \frac { d\vec{R_1} } { d\alpha_1} = -\frac {1}{R} \left( 1+\frac{\alpha_3}{R} \right) \vec{R_3} $
End of explanation
dR2dalpha1 = trigsimp(R2.diff(alpha1))
dR2dalpha1
dR2dalpha2 = trigsimp(R2.diff(alpha2))
dR2dalpha2
dR2dalpha3 = trigsimp(R2.diff(alpha3))
dR2dalpha3
dR3dalpha1 = trigsimp(R3.diff(alpha1))
dR3dalpha1
Explanation: $ \frac { d\vec{R_1} } { d\alpha_3} = \frac {1}{R} \frac {1}{1+\frac{\alpha_3}{R}} \vec{R_1} $
End of explanation
dR3dalpha2 = trigsimp(R3.diff(alpha2))
dR3dalpha2
dR3dalpha3 = trigsimp(R3.diff(alpha3))
dR3dalpha3
Explanation: $ \frac { d\vec{R_3} } { d\alpha_1} = \frac {1}{R} \frac {1}{1+\frac{\alpha_3}{R}} \vec{R_1} $
End of explanation
u1=Function('u^1')
u2=Function('u^2')
u3=Function('u^3')
q=Function('q') # q(alpha3) = 1+alpha3/R
K = Symbol('K') # K = 1/R
u1_nabla1 = u1(alpha1, alpha2, alpha3).diff(alpha1) + u3(alpha1, alpha2, alpha3) * K / q(alpha3)
u2_nabla1 = u2(alpha1, alpha2, alpha3).diff(alpha1)
u3_nabla1 = u3(alpha1, alpha2, alpha3).diff(alpha1) - u1(alpha1, alpha2, alpha3) * K * q(alpha3)
u1_nabla2 = u1(alpha1, alpha2, alpha3).diff(alpha2)
u2_nabla2 = u2(alpha1, alpha2, alpha3).diff(alpha2)
u3_nabla2 = u3(alpha1, alpha2, alpha3).diff(alpha2)
u1_nabla3 = u1(alpha1, alpha2, alpha3).diff(alpha3) + u1(alpha1, alpha2, alpha3) * K / q(alpha3)
u2_nabla3 = u2(alpha1, alpha2, alpha3).diff(alpha3)
u3_nabla3 = u3(alpha1, alpha2, alpha3).diff(alpha3)
# $\nabla_2 u^2 = \frac { \partial u^2 } { \partial \alpha_2}$
grad_u = Matrix([[u1_nabla1, u2_nabla1, u3_nabla1],[u1_nabla2, u2_nabla2, u3_nabla2], [u1_nabla3, u2_nabla3, u3_nabla3]])
grad_u
G_s = Matrix([[q(alpha3)**2, 0, 0],[0, 1, 0], [0, 0, 1]])
grad_u_down=grad_u*G_s
expand(simplify(grad_u_down))
Explanation: $ \frac { d\vec{R_3} } { d\alpha_3} = \vec{0} $
Derivative of vectors
$ \vec{u} = u^1 \vec{R_1} + u^2\vec{R_2} + u^3\vec{R_3} $
$ \frac { d\vec{u} } { d\alpha_1} = \frac { d(u^1\vec{R_1}) } { d\alpha_1} + \frac { d(u^2\vec{R_2}) } { d\alpha_1}+ \frac { d(u^3\vec{R_3}) } { d\alpha_1} = \frac { du^1 } { d\alpha_1} \vec{R_1} + u^1 \frac { d\vec{R_1} } { d\alpha_1} + \frac { du^2 } { d\alpha_1} \vec{R_2} + u^2 \frac { d\vec{R_2} } { d\alpha_1} + \frac { du^3 } { d\alpha_1} \vec{R_3} + u^3 \frac { d\vec{R_3} } { d\alpha_1} = \frac { du^1 } { d\alpha_1} \vec{R_1} - u^1 \frac {1}{R} \left( 1+\frac{\alpha_3}{R} \right) \vec{R_3} + \frac { du^2 } { d\alpha_1} \vec{R_2}+ \frac { du^3 } { d\alpha_1} \vec{R_3} + u^3 \frac {1}{R} \frac {1}{1+\frac{\alpha_3}{R}} \vec{R_1}$
Then
$ \frac { d\vec{u} } { d\alpha_1} = \left( \frac { du^1 } { d\alpha_1} + u^3 \frac {1}{R} \frac {1}{1+\frac{\alpha_3}{R}} \right) \vec{R_1} + \frac { du^2 } { d\alpha_1} \vec{R_2} + \left( \frac { du^3 } { d\alpha_1} - u^1 \frac {1}{R} \left( 1+\frac{\alpha_3}{R} \right) \right) \vec{R_3}$
$ \frac { d\vec{u} } { d\alpha_2} = \frac { d(u^1\vec{R_1}) } { d\alpha_2} + \frac { d(u^2\vec{R_2}) } { d\alpha_2}+ \frac { d(u^3\vec{R_3}) } { d\alpha_2} = \frac { du^1 } { d\alpha_2} \vec{R_1} + \frac { du^2 } { d\alpha_2} \vec{R_2} + \frac { du^3 } { d\alpha_2} \vec{R_3} $
Then
$ \frac { d\vec{u} } { d\alpha_2} = \frac { du^1 } { d\alpha_2} \vec{R_1} + \frac { du^2 } { d\alpha_2} \vec{R_2} + \frac { du^3 } { d\alpha_2} \vec{R_3} $
$ \frac { d\vec{u} } { d\alpha_3} = \frac { d(u^1\vec{R_1}) } { d\alpha_3} + \frac { d(u^2\vec{R_2}) } { d\alpha_3}+ \frac { d(u^3\vec{R_3}) } { d\alpha_3} =
\frac { du^1 } { d\alpha_3} \vec{R_1} + u^1 \frac { d\vec{R_1} } { d\alpha_3} + \frac { du^2 } { d\alpha_3} \vec{R_2} + u^2 \frac { d\vec{R_2} } { d\alpha_3} + \frac { du^3 } { d\alpha_3} \vec{R_3} + u^3 \frac { d\vec{R_3} } { d\alpha_3} = \frac { du^1 } { d\alpha_3} \vec{R_1} + u^1 \frac {1}{R} \frac {1}{1+\frac{\alpha_3}{R}} \vec{R_1} + \frac { du^2 } { d\alpha_3} \vec{R_2}+ \frac { du^3 } { d\alpha_3} \vec{R_3} $
Then
$ \frac { d\vec{u} } { d\alpha_3} = \left( \frac { du^1 } { d\alpha_3} + u^1 \frac {1}{R} \frac {1}{1+\frac{\alpha_3}{R}} \right) \vec{R_1} + \frac { du^2 } { d\alpha_3} \vec{R_2}+ \frac { du^3 } { d\alpha_3} \vec{R_3}$
Gradient of vector
$\nabla_1 u^1 = \frac { \partial u^1 } { \partial \alpha_1} + u^3 \frac {1}{R} \frac {1}{1+\frac{\alpha_3}{R}}$
$\nabla_1 u^2 = \frac { \partial u^2 } { \partial \alpha_1} $
$\nabla_1 u^3 = \frac { \partial u^3 } { \partial \alpha_1} - u^1 \frac {1}{R} \left( 1+\frac{\alpha_3}{R} \right) $
$\nabla_2 u^1 = \frac { \partial u^1 } { \partial \alpha_2}$
$\nabla_2 u^2 = \frac { \partial u^2 } { \partial \alpha_2}$
$\nabla_2 u^3 = \frac { \partial u^3 } { \partial \alpha_2}$
$\nabla_3 u^1 = \frac { \partial u^1 } { \partial \alpha_3} + u^1 \frac {1}{R} \frac {1}{1+\frac{\alpha_3}{R}}$
$\nabla_3 u^2 = \frac { \partial u^2 } { \partial \alpha_3} $
$\nabla_3 u^3 = \frac { \partial u^3 } { \partial \alpha_3}$
$ \nabla \vec{u} = \left(
\begin{array}{ccc}
\nabla_1 u^1 & \nabla_1 u^2 & \nabla_1 u^3 \\
\nabla_2 u^1 & \nabla_2 u^2 & \nabla_2 u^3 \\
\nabla_3 u^1 & \nabla_3 u^2 & \nabla_3 u^3 \\
\end{array}
\right)$
End of explanation
B = zeros(9, 12)
B[0,1] = (1+alpha3/R)**2
B[0,8] = (1+alpha3/R)/R
B[1,2] = (1+alpha3/R)**2
B[2,0] = (1+alpha3/R)/R
B[2,3] = (1+alpha3/R)**2
B[3,5] = S(1)
B[4,6] = S(1)
B[5,7] = S(1)
B[6,9] = S(1)
B[6,0] = -(1+alpha3/R)/R
B[7,10] = S(1)
B[8,11] = S(1)
B
Explanation: $
\left(
\begin{array}{c}
\nabla_1 u_1 \ \nabla_2 u_1 \ \nabla_3 u_1 \
\nabla_1 u_2 \ \nabla_2 u_2 \ \nabla_3 u_2 \
\nabla_1 u_3 \ \nabla_2 u_3 \ \nabla_3 u_3 \
\end{array}
\right)
=
\left(
\begin{array}{c}
\left( 1+\frac{\alpha_2}{R} \right)^2 \frac { \partial u^1 } { \partial \alpha_1} + u^3 \frac {\left( 1+\frac{\alpha_3}{R} \right)}{R} \
\left( 1+\frac{\alpha_2}{R} \right)^2 \frac { \partial u^1 } { \partial \alpha_2} \
\left( 1+\frac{\alpha_3}{R} \right)^2 \frac { \partial u^1 } { \partial \alpha_3} + u^1 \frac {\left( 1+\frac{\alpha_3}{R} \right)}{R} \
\frac { \partial u^2 } { \partial \alpha_1} \
\frac { \partial u^2 } { \partial \alpha_2} \
\frac { \partial u^2 } { \partial \alpha_3} \
\frac { \partial u^3 } { \partial \alpha_1} - u^1 \frac {\left( 1+\frac{\alpha_3}{R} \right)}{R} \
\frac { \partial u^3 } { \partial \alpha_2} \
\frac { \partial u^3 } { \partial \alpha_3} \
\end{array}
\right)
$
$
\left(
\begin{array}{c}
\nabla_1 u_1 \ \nabla_2 u_1 \ \nabla_3 u_1 \
\nabla_1 u_2 \ \nabla_2 u_2 \ \nabla_3 u_2 \
\nabla_1 u_3 \ \nabla_2 u_3 \ \nabla_3 u_3 \
\end{array}
\right)
=
B \cdot
\left(
\begin{array}{c}
u^1 \
\frac { \partial u^1 } { \partial \alpha_1} \
\frac { \partial u^1 } { \partial \alpha_2} \
\frac { \partial u^1 } { \partial \alpha_3} \
u^2 \
\frac { \partial u^2 } { \partial \alpha_1} \
\frac { \partial u^2 } { \partial \alpha_2} \
\frac { \partial u^2 } { \partial \alpha_3} \
u^3 \
\frac { \partial u^3 } { \partial \alpha_1} \
\frac { \partial u^3 } { \partial \alpha_2} \
\frac { \partial u^3 } { \partial \alpha_3} \
\end{array}
\right)
$
End of explanation
E=zeros(6,9)
E[0,0]=1
E[1,4]=1
E[2,8]=1
E[3,1]=1
E[3,3]=1
E[4,2]=1
E[4,6]=1
E[5,5]=1
E[5,7]=1
E
Q=E*B
Q=simplify(Q)
Q
Explanation: Deformations tensor
End of explanation
T=zeros(12,6)
T[0,0]=1
T[0,2]=alpha3
T[1,1]=1
T[1,3]=alpha3
T[3,2]=1
T[8,4]=1
T[9,5]=1
T
Q=E*B*T
Q=simplify(Q)
Q
Explanation: Tymoshenko theory
$u^1 \left( \alpha_1, \alpha_2, \alpha_3 \right)=u\left( \alpha_1 \right)+\alpha_3\gamma \left( \alpha_1 \right) $
$u^2 \left( \alpha_1, \alpha_2, \alpha_3 \right)=0 $
$u^3 \left( \alpha_1, \alpha_2, \alpha_3 \right)=w\left( \alpha_1 \right) $
$ \left(
\begin{array}{c}
u^1 \
\frac { \partial u^1 } { \partial \alpha_1} \
\frac { \partial u^1 } { \partial \alpha_2} \
\frac { \partial u^1 } { \partial \alpha_3} \
u^2 \
\frac { \partial u^2 } { \partial \alpha_1} \
\frac { \partial u^2 } { \partial \alpha_2} \
\frac { \partial u^2 } { \partial \alpha_3} \
u^3 \
\frac { \partial u^3 } { \partial \alpha_1} \
\frac { \partial u^3 } { \partial \alpha_2} \
\frac { \partial u^3 } { \partial \alpha_3} \
\end{array}
\right) = T \cdot
\left(
\begin{array}{c}
u \
\frac { \partial u } { \partial \alpha_1} \
\gamma \
\frac { \partial \gamma } { \partial \alpha_1} \
w \
\frac { \partial w } { \partial \alpha_1} \
\end{array}
\right) $
End of explanation
from sympy import MutableDenseNDimArray
C_x = MutableDenseNDimArray.zeros(3, 3, 3, 3)
for i in range(3):
for j in range(3):
for k in range(3):
for l in range(3):
elem_index = 'C^{{{}{}{}{}}}'.format(i+1, j+1, k+1, l+1)
el = Symbol(elem_index)
C_x[i,j,k,l] = el
C_x
Explanation: Elasticity tensor(stiffness tensor)
General form
End of explanation
C_x_symmetry = MutableDenseNDimArray.zeros(3, 3, 3, 3)
def getCIndecies(index):
if (index == 0):
return 0, 0
elif (index == 1):
return 1, 1
elif (index == 2):
return 2, 2
elif (index == 3):
return 0, 1
elif (index == 4):
return 0, 2
elif (index == 5):
return 1, 2
for s in range(6):
for t in range(s, 6):
i,j = getCIndecies(s)
k,l = getCIndecies(t)
elem_index = 'C^{{{}{}{}{}}}'.format(i+1, j+1, k+1, l+1)
el = Symbol(elem_index)
C_x_symmetry[i,j,k,l] = el
C_x_symmetry[i,j,l,k] = el
C_x_symmetry[j,i,k,l] = el
C_x_symmetry[j,i,l,k] = el
C_x_symmetry[k,l,i,j] = el
C_x_symmetry[k,l,j,i] = el
C_x_symmetry[l,k,i,j] = el
C_x_symmetry[l,k,j,i] = el
C_x_symmetry
Explanation: Include symmetry
End of explanation
C_isotropic = MutableDenseNDimArray.zeros(3, 3, 3, 3)
C_isotropic_matrix = zeros(6)
mu = Symbol('mu')
la = Symbol('lambda')
for s in range(6):
for t in range(s, 6):
if (s < 3 and t < 3):
if(t != s):
C_isotropic_matrix[s,t] = la
C_isotropic_matrix[t,s] = la
else:
C_isotropic_matrix[s,t] = 2*mu+la
C_isotropic_matrix[t,s] = 2*mu+la
elif (s == t):
C_isotropic_matrix[s,t] = mu
C_isotropic_matrix[t,s] = mu
for s in range(6):
for t in range(s, 6):
i,j = getCIndecies(s)
k,l = getCIndecies(t)
el = C_isotropic_matrix[s, t]
C_isotropic[i,j,k,l] = el
C_isotropic[i,j,l,k] = el
C_isotropic[j,i,k,l] = el
C_isotropic[j,i,l,k] = el
C_isotropic[k,l,i,j] = el
C_isotropic[k,l,j,i] = el
C_isotropic[l,k,i,j] = el
C_isotropic[l,k,j,i] = el
C_isotropic
def getCalpha(C, A, q, p, s, t):
res = S(0)
for i in range(3):
for j in range(3):
for k in range(3):
for l in range(3):
res += C[i,j,k,l]*A[q,i]*A[p,j]*A[s,k]*A[t,l]
return simplify(trigsimp(res))
C_isotropic_alpha = MutableDenseNDimArray.zeros(3, 3, 3, 3)
for i in range(3):
for j in range(3):
for k in range(3):
for l in range(3):
c = getCalpha(C_isotropic, A_inv, i, j, k, l)
C_isotropic_alpha[i,j,k,l] = c
C_isotropic_alpha[0,0,0,0]
C_isotropic_matrix_alpha = zeros(6)
for s in range(6):
for t in range(6):
i,j = getCIndecies(s)
k,l = getCIndecies(t)
C_isotropic_matrix_alpha[s,t] = C_isotropic_alpha[i,j,k,l]
C_isotropic_matrix_alpha
Explanation: Isotropic material
End of explanation
C_orthotropic = MutableDenseNDimArray.zeros(3, 3, 3, 3)
C_orthotropic_matrix = zeros(6)
for s in range(6):
for t in range(s, 6):
elem_index = 'C^{{{}{}}}'.format(s+1, t+1)
el = Symbol(elem_index)
if ((s < 3 and t < 3) or t == s):
C_orthotropic_matrix[s,t] = el
C_orthotropic_matrix[t,s] = el
for s in range(6):
for t in range(s, 6):
i,j = getCIndecies(s)
k,l = getCIndecies(t)
el = C_orthotropic_matrix[s, t]
C_orthotropic[i,j,k,l] = el
C_orthotropic[i,j,l,k] = el
C_orthotropic[j,i,k,l] = el
C_orthotropic[j,i,l,k] = el
C_orthotropic[k,l,i,j] = el
C_orthotropic[k,l,j,i] = el
C_orthotropic[l,k,i,j] = el
C_orthotropic[l,k,j,i] = el
C_orthotropic
Explanation: Orthotropic material
End of explanation
def getCalpha(C, A, q, p, s, t):
res = S(0)
for i in range(3):
for j in range(3):
for k in range(3):
for l in range(3):
res += C[i,j,k,l]*A[q,i]*A[p,j]*A[s,k]*A[t,l]
return simplify(trigsimp(res))
C_orthotropic_alpha = MutableDenseNDimArray.zeros(3, 3, 3, 3)
for i in range(3):
for j in range(3):
for k in range(3):
for l in range(3):
c = getCalpha(C_orthotropic, A_inv, i, j, k, l)
C_orthotropic_alpha[i,j,k,l] = c
C_orthotropic_alpha[0,0,0,0]
C_orthotropic_matrix_alpha = zeros(6)
for s in range(6):
for t in range(6):
i,j = getCIndecies(s)
k,l = getCIndecies(t)
C_orthotropic_matrix_alpha[s,t] = C_orthotropic_alpha[i,j,k,l]
C_orthotropic_matrix_alpha
Explanation: Orthotropic material in shell coordinates
End of explanation
P=eye(12,12)
P[0,0]=1/(1+alpha3/R)
P[1,1]=1/(1+alpha3/R)
P[2,2]=1/(1+alpha3/R)
P[3,0]=-1/(R*(1+alpha3/R)**2)
P[3,3]=1/(1+alpha3/R)
P
Def=simplify(E*B*P)
Def
rows, cols = Def.shape
D_p=zeros(rows, cols)
q = 1+alpha3/R
for i in range(rows):
ratio = 1
if (i==0):
ratio = q*q
elif (i==3 or i == 4):
ratio = q
for j in range(cols):
D_p[i,j] = Def[i,j] / ratio
D_p = simplify(D_p)
D_p
Explanation: Physical coordinates
$u^1=\frac{u_{[1]}}{1+\frac{\alpha_3}{R}}$
$\frac{\partial u^1} {\partial \alpha_3}=\frac{1}{1+\frac{\alpha_3}{R}} \frac{\partial u_{[1]}} {\partial \alpha_3} + u_{[1]} \frac{\partial} {\partial \alpha_3} \left( \frac{1}{1+\frac{\alpha_3}{R}} \right) = \frac{1}{1+\frac{\alpha_3}{R}} \frac{\partial u_{[1]}} {\partial \alpha_3} - u_{[1]} \frac{1}{R \left( 1+\frac{\alpha_3}{R} \right)^2} $
End of explanation
C_isotropic_alpha_p = MutableDenseNDimArray.zeros(3, 3, 3, 3)
q=1+alpha3/R
for i in range(3):
for j in range(3):
for k in range(3):
for l in range(3):
fact = 1
if (i==0):
fact = fact*q
if (j==0):
fact = fact*q
if (k==0):
fact = fact*q
if (l==0):
fact = fact*q
C_isotropic_alpha_p[i,j,k,l] = simplify(C_isotropic_alpha[i,j,k,l]*fact)
C_isotropic_matrix_alpha_p = zeros(6)
for s in range(6):
for t in range(6):
i,j = getCIndecies(s)
k,l = getCIndecies(t)
C_isotropic_matrix_alpha_p[s,t] = C_isotropic_alpha_p[i,j,k,l]
C_isotropic_matrix_alpha_p
C_orthotropic_alpha_p = MutableDenseNDimArray.zeros(3, 3, 3, 3)
q=1+alpha3/R
for i in range(3):
for j in range(3):
for k in range(3):
for l in range(3):
fact = 1
if (i==0):
fact = fact*q
if (j==0):
fact = fact*q
if (k==0):
fact = fact*q
if (l==0):
fact = fact*q
C_orthotropic_alpha_p[i,j,k,l] = simplify(C_orthotropic_alpha[i,j,k,l]*fact)
C_orthotropic_matrix_alpha_p = zeros(6)
for s in range(6):
for t in range(6):
i,j = getCIndecies(s)
k,l = getCIndecies(t)
C_orthotropic_matrix_alpha_p[s,t] = C_orthotropic_alpha_p[i,j,k,l]
C_orthotropic_matrix_alpha_p
Explanation: Stiffness tensor
End of explanation
D_p_T = D_p*T
K = Symbol('K')
D_p_T = D_p_T.subs(R, 1/K)
simplify(D_p_T)
Explanation: Tymoshenko
End of explanation
theta, h1, h2=symbols('theta h_1 h_2')
square_geom=theta/2*(R+h2)**2-theta/2*(R+h1)**2
expand(simplify(square_geom))
Explanation: Square of segment
$A=\frac {\theta}{2} \left( R + h_2 \right)^2-\frac {\theta}{2} \left( R + h_1 \right)^2$
End of explanation
square_int=integrate(integrate(1+alpha3/R, (alpha3, h1, h2)), (alpha1, 0, theta*R))
expand(simplify(square_int))
Explanation: ${\displaystyle A=\int_{0}^{L}\int_{h_1}^{h_2} \left( 1+\frac{\alpha_3}{R} \right) d \alpha_1 d \alpha_3}, L=R \theta$
End of explanation
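# The two area expressions should agree; as an added check, their difference
# is expected to simplify to zero.
simplify(square_int - square_geom)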
simplify(D_p.T*C_isotropic_matrix_alpha_p*D_p)
Explanation: Virtual work
Isotropic material physical coordinates
End of explanation
W = simplify(D_p_T.T*C_isotropic_matrix_alpha_p*D_p_T*(1+alpha3*K)**2)
W
h=Symbol('h')
E=Symbol('E')
v=Symbol('nu')
W_a3 = integrate(W, (alpha3, -h/2, h/2))
W_a3 = simplify(W_a3)
W_a3.subs(la, E*v/((1+v)*(1-2*v))).subs(mu, E/((1+v)*2))
A_M = zeros(3)
A_M[0,0] = E*h/(1-v**2)
A_M[1,1] = 5*E*h/(12*(1+v))
A_M[2,2] = E*h**3/(12*(1-v**2))
Q_M = zeros(3,6)
Q_M[0,1] = 1
Q_M[0,4] = K
Q_M[1,0] = -K
Q_M[1,2] = 1
Q_M[1,5] = 1
Q_M[2,3] = 1
W_M=Q_M.T*A_M*Q_M
W_M
Explanation: Isotropic material physical coordinates - Tymoshenko
End of explanation |
13,830 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step4: Exercises about Numpy
Author
Step5: This notebook reviews some of the Python modules that make it possible to work with data structures in an easy an efficient manner. We will review Numpy arrays and matrices, and some of the common operations which are needed when working with these data structures in Machine Learning.
1. Create numpy arrays of different types
The following code fragment defines variable x as a list of 4 integers, you can check that by printing the type of any element of x. Use python command map() to create a new list with the same elements as x, but where each element of the list is a float. Note that, since in Python 3 map() returns an iterable object, you need to call function list() to populate the list.
Step6: Numpy arrays can be defined directly using methods such as np.arange(), np.ones(), np.zeros(), as well as random number generators. Alternatively, you can easily generate them from python lists (or lists of lists) containing elements of numeric type.
You can easily check the shape of any numpy vector with the property .shape, and reshape it with the method reshape(). Note the difference between 1-D and N-D numpy arrays (ndarrays). You should also be aware of the existence of another numpy data type
Step7: Some other useful Numpy methods are
Step8: 2. Products and powers of numpy arrays and matrices
* and ** when used with Numpy arrays implement elementwise product and exponentiation
* and ** when used with Numpy matrices implement matrix product and exponentiation
Method np.dot() implements matrix multiplication, and can be used both with numpy arrays and matrices.
So you have to be careful about the types you are using for each variable
Step9: 3. Numpy methods that can be carried out along different dimensions
Compare the result of the following commands
Step10: Other numpy methods where you can specify the axis along with a certain operation should be carried out are
Step11: 5. Slicing
Particular elements of numpy arrays (both unidimensional and multidimensional) can be accessed using standard python slicing. When working with multidimensional arrays, slicing can be carried out along the different dimensions at once
Step12: Extracting columns and rows from multidimensional arrays
Something to be aware of when extracting rows or columns from numpy arrays is that if you specify just the index of the row or column you want to extract, the result will be a 1-D numpy array in any case. For instance, the following code prints the second column and third row of the numpy array X, and shows its dimensions. Notice that in both cases you get arrays with 1 dimension only.
Step13: If you wish that the extracted row or column is still a 2-D row or column vector, it is important to specify an interval instead of a single value, even if such interval consists of just one value.
Many numpy functions will also return 1-D vectors. It is important to be aware of such behavior to avoid and detect bugs in your code that may give place to undesired behaviors.
Step14: 6. Matrix inversion
Non singular matrices can be inverted with method np.linalg.inv(). Invert square matrices $X\cdot X^\top$ and $X^\top \cdot X$, and see what happens when trying to invert a singular matrix. The rank of a matrix can be studied with method numpy.linalg.matrix_rank().
Step15: 7. Exercises
In this section, you will complete three exercises where you will carry out some common operations when working with data structures. For this exercise you will work with the 2-D numpy array X, assuming that it contains the values of two different variables for 8 data patterns. A first column of ones has already been introduced in a previous exercise
Step16: 7.1. Non-linear transformations
Create a new matrix Z, where additional features are created by carrying out the following non-linear transformations
Step17: Repeat the previous exercise, this time using the map() method together with function log_transform(). This function needs to be defined in such a way that guarantees that variable Z_map is the same as the previously computed variable Z.
Step18: Repeat the previous exercise once more. This time, define a lambda function for the task.
Step19: 7.2. Polynomial transformations
Similarly to the previous exercise, now we are interested in obtaining another matrix that will be used to evaluate a polynomial model. In order to do so, compute matrix Z_poly as follows
Step20: 7.3. Model evaluation
Finally, we can use previous data matrices Z and Z_poly to efficiently compute the output of the corresponding non-linear models over all the patterns in the data set. In this exercise, we consider the two following linear-in-the-parameters models to be evaluated | Python Code:
# Import some libraries that will be necessary for working with data and displaying plots
import numpy as np
import hashlib
# Test functions
def hashstr(str1):
Implements the secure hash of a string
return hashlib.sha1(str1).hexdigest()
def test_arrayequal(x1, x2, err_msg, ok_msg='Test passed'):
Test if all elements in arrays x1 and x2 are the same item by item
:param x1: First array for the comparison
:param x2: Second array for the comparison
:param err_msg: Display message if both arrays are not the same
:param ok_msg: Display message if arrays are the same (optional)
try:
np.testing.assert_array_equal(x1, x2)
print(ok_msg)
except:
print(err_msg)
def test_strequal(str1, str2, err_msg, ok_msg='Test passed'):
Test if str1 and str2 are the same string
:param str1: First string for the comparison
:param str2: Second string for the comparison
:param err_msg: Display message if both strings are not the same
:param ok_msg: Display message if strings are the same (optional)
try:
np.testing.assert_string_equal(str1, str2)
print(ok_msg)
except:
print(err_msg)
def test_hashedequal(str1, str2, err_msg, ok_msg='Test passed'):
Test if hashed(str1) and str2 are the same string
:param str1: First string for the comparison
str1 will be hashed for the comparison
:param str2: Second string for the comparison
:param err_msg: Display message if both strings are not the same
:param ok_msg: Display message if strings are the same (optional)
try:
np.testing.assert_string_equal(hashstr(str1), str2)
print(ok_msg)
except:
print(err_msg)
Explanation: Exercises about Numpy
Author: Jerónimo Arenas García ([email protected])
Notebook version: 1.1 (Sep 20, 2017)
Changes: v.1.0 (Mar 15, 2016) - First version
v.1.1 (Sep 20, 2017) - Compatibility with python 2 and python 3
Display messages in English
Pending changes:
* Add a section 7.4. representing f_poly as a function of x
End of explanation
x = [5, 4, 3, 4]
print(type(x[0]))
# Create a list of floats containing the same elements as in x
# x_f = list(map(<FILL IN>))
x_f = list(map(float, x))
test_arrayequal(x, x_f, 'Elements of both lists are not the same')
if ((type(x[-2])==int) & (type(x_f[-2])==float)):
print('Test passed')
else:
print('Type conversion incorrect')
Explanation: This notebook reviews some of the Python modules that make it possible to work with data structures in an easy and efficient manner. We will review Numpy arrays and matrices, and some of the common operations which are needed when working with these data structures in Machine Learning.
1. Create numpy arrays of different types
The following code fragment defines variable x as a list of 4 integers, you can check that by printing the type of any element of x. Use python command map() to create a new list with the same elements as x, but where each element of the list is a float. Note that, since in Python 3 map() returns an iterable object, you need to call function list() to populate the list.
End of explanation
# Numpy arrays can be created from numeric lists or using different numpy methods
y = np.arange(8)+1
x = np.array(x_f)
# Check the different data types involved
print('Variable x_f is of type', type(x_f))
print('Variable x is of type ', type(x))
print('Variable y is of type', type(y))
# Print the shapes of the numpy arrays
print('Variable y has dimension', y.shape)
print('Variable x has dimension', x.shape)
#Complete the following exercises
# Convert x into a variable x_matrix, of type `numpy.matrixlib.defmatrix.matrix` using command
# np.matrix(). The resulting matrix should be of dimensions 4x1
# x_matrix = <FILL IN>
x_matrix = np.matrix(x).T
# Convert x into a variable x_array, of type `ndarray`, and shape (4,1)
# x_array = <FILL IN>
x_array = x[:,np.newaxis]
# Reshape array y into a numpy array of shape (4,2) using command np.reshape()
# y = <FILL IN>
y = y.reshape((4,2))
test_strequal(str(type(x_matrix)), "<class 'numpy.matrixlib.defmatrix.matrix'>", 'x_matrix is not defined as a matrix')
test_hashedequal(x_matrix.tostring(), '1215ced5d82501bf03e04b30f16c45a4bdcb8838', 'Incorrect variable x_matrix')
test_strequal(str(type(x_array)), "<class 'numpy.ndarray'>", 'x_array is not defined as numpy ndarray')
test_hashedequal(x_array.tostring(), '1215ced5d82501bf03e04b30f16c45a4bdcb8838', 'Incorrect variable x_array')
test_strequal(str(type(y)), "<class 'numpy.ndarray'>", 'y is not defined as a numpy ndarray')
test_hashedequal(y.tostring(), '0b61a85386775357e0710800497771a34fdc8ae5', 'Incorrect variable y')
Explanation: Numpy arrays can be defined directly using methods such as np.arange(), np.ones(), np.zeros(), as well as random number generators. Alternatively, you can easily generate them from python lists (or lists of lists) containing elements of numeric type.
You can easily check the shape of any numpy vector with the property .shape, and reshape it with the method reshape(). Note the difference between 1-D and N-D numpy arrays (ndarrays). You should also be aware of the existence of another numpy data type: Numpy matrices (http://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.matrix.html) are inherently 2-D structures where operators * and ** have the meaning of matrix multiplication and matrix power.
In the code below, you can check the types and shapes of different numpy arrays. Complete also the exercise where you are asked to convert a unidimensional array into a vector of size $4\times2$.
End of explanation
print('Applying flatten() to matrix x_matrix (of type matrix)')
print('x_matrix.flatten():', x_matrix.flatten())
print('Its type:', type(x_matrix.flatten()))
print('Its dimensions:', x_matrix.flatten().shape)
print('\nApplying flatten() to matrix y (of type ndarray)')
print('y.flatten():', y.flatten())
print('Its type:', type(y.flatten()))
print('Its dimensions:', y.flatten().shape)
print('\nApplying tolist() to x_matrix (of type matrix) and to the 2D vector y (of type ndarray)')
print('x_matrix.tolist():', x_matrix.tolist())
print('y.tolist():', y.tolist())
Explanation: Some other useful Numpy methods are:
np.flatten(): converts a numpy array or matrix into a vector by concatenating the elements in the different dimension. Note that the result of the method keeps the type of the original variable, so the result is a 1-D ndarray when invoked on a numpy array, and a numpy matrix (and necessarily 2-D) when invoked on a matrix.
np.tolist(): converts a numpy array or matrix into a python list.
These uses are illustrated in the code fragment below.
End of explanation
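# Related method worth knowing (added note): np.ravel() flattens like flatten(), but it
# returns a view of the original data when possible instead of always making a copy.
print('np.ravel(y): ', np.ravel(y))
print('y.flatten():', y.flatten())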
# Try to run the following command on variable x_matrix, and check what happens
print(x_array**2)
print('Remember that the shape of x_array is', x_array.shape)
print('Remember that the shape of y is', y.shape)
# Complete the following exercises. You can print the partial results to visualize them
# Multiply the 2-D array `y` by 2
# y_by2 = <FILL IN>
y_by2 = y * 2
# Multiply each of the columns in `y` by the column vector x_array
# z_4_2 = <FILL IN>
z_4_2 = x_array * y
# Obtain the matrix product of the transpose of x_array and y
# x_by_y = <FILL IN>
x_by_y = x_array.T.dot(y)
# Repeat the previous calculation, this time using x_matrix (of type numpy matrix) instead of x_array
# Note that in this case you do not need to use method dot()
# x_by_y2 = <FILL IN>
x_by_y2 = x_matrix.T * y
# Multiply vector x_array by its transpose to obtain a 4 x 4 matrix
#x_4_4 = <FILL IN>
x_4_4 = x_array.dot(x_array.T)
# Multiply the transpose of vector x_array by vector x_array. The result is the squared-norm of the vector
#x_norm2 = <FILL IN>
x_norm2 = x_array.T.dot(x_array)
test_hashedequal(y_by2.tostring(),'1b54af8620657d5b8da424ca6be8d58b6627bf9a','Incorrect result for variable y_by2')
test_hashedequal(z_4_2.tostring(),'0727ed01af0aa4175316d3916fd1c8fe2eb98f27','Incorrect result for variable z_4_2')
test_hashedequal(x_by_y.tostring(),'b33f700fec2b6bd66e76260d31948ce07b8c15d3','Incorrect result for variable x_by_y')
test_hashedequal(x_by_y2.tostring(),'b33f700fec2b6bd66e76260d31948ce07b8c15d3','Incorrect result for variable x_by_y2')
test_hashedequal(x_4_4.tostring(),'832c97cc2d69298287838350b0bae66deec58b03','Incorrect result for variable x_4_4')
test_hashedequal(x_norm2.tostring(),'33b80b953557002511474aa340441d5b0728bbaf','Incorrect result for variable x_norm2')
Explanation: 2. Products and powers of numpy arrays and matrices
* and ** when used with Numpy arrays implement elementwise product and exponentiation
* and ** when used with Numpy matrices implement matrix product and exponentiation
Method np.dot() implements matrix multiplication, and can be used both with numpy arrays and matrices.
So you have to be careful about the types you are using for each variable
End of explanation
print(z_4_2.shape)
print(np.mean(z_4_2))
print(np.mean(z_4_2,axis=0))
print(np.mean(z_4_2,axis=1))
Explanation: 3. Numpy methods that can be carried out along different dimensions
Compare the result of the following commands:
End of explanation
# Previous check that you are working with the right matrices
test_hashedequal(z_4_2.tostring(),'0727ed01af0aa4175316d3916fd1c8fe2eb98f27','Incorrect result for variable z_4_2')
test_hashedequal(x_array.tostring(), '1215ced5d82501bf03e04b30f16c45a4bdcb8838', 'Incorrect variable x_array')
# Vertically stack matrix z_4_2 with itself
# ex1_res = <FILL IN>
ex1_res = np.vstack((z_4_2,z_4_2))
# Horizontally stack matrix z_4_2 and vector x_array
# ex2_res = <FILL IN>
ex2_res = np.hstack((z_4_2,x_array))
# Horizontally stack a column vector of ones with the result of the first exercise (variable ex1_res)
# X = <FILL IN>
X = np.hstack((np.ones((8,1)),ex1_res))
test_hashedequal(ex1_res.tostring(),'e740ea91c885cdae95499eaf53ec6f1429943d9c','Wrong value for variable ex1_res')
test_hashedequal(ex2_res.tostring(),'d5f18a630b2380fcae912f449b2a87766528e0f2','Wrong value for variable ex2_res')
test_hashedequal(X.tostring(),'bdf94b49c2b7c6ae71a916beb647236918ead39f','Wrong value for variable X')
Explanation: Other numpy methods where you can specify the axis along with a certain operation should be carried out are:
np.median()
np.std()
np.var()
np.percentile()
np.sort()
np.argsort()
If the axis argument is not provided, the array is flattened before carriying out the corresponding operation.
4. Concatenating matrices and vectors
Provided that the necessary dimensions fit, horizontal and vertical stacking of matrices can be carried out with methods np.hstack() and np.vstack().
Complete the following exercises to practice with matrix concatenation:
End of explanation
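# Small added illustration of the axis argument with one of the methods listed above:
# sorting along columns, along rows, or over the flattened array.
print(np.sort(z_4_2, axis=0))     # sort each column
print(np.sort(z_4_2, axis=1))     # sort each row
print(np.sort(z_4_2, axis=None))  # flatten first, then sort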
# Keep last row of matrix X
# X_sub1 = <FILL IN>
X_sub1 = X[-1,]
# Keep first column of the three first rows of X
# X_sub2 = <FILL IN>
X_sub2 = X[:3,0]
# Keep first two columns of the three first rows of X
# X_sub3 = <FILL IN>
X_sub3 = X[:3,:2]
# Invert the order of the rows of X
# X_sub4 = <FILL IN>
X_sub4 = X[::-1,:]
test_hashedequal(X_sub1.tostring(),'51fb613567c9ef5fc33e7190c60ff37e0cd56706','Wrong value for variable X_sub1')
test_hashedequal(X_sub2.tostring(),'12a72e95677fc01de6b7bfb7f62d772d0bdb5b87','Wrong value for variable X_sub2')
test_hashedequal(X_sub3.tostring(),'f45247c6c31f9bcccfcb2a8dec9d288ea41e6acc','Wrong value for variable X_sub3')
test_hashedequal(X_sub4.tostring(),'1fd985c087ba518c6d040799e49a967e4b1d433a','Wrong value for variable X_sub4')
Explanation: 5. Slicing
Particular elements of numpy arrays (both unidimensional and multidimensional) can be accessed using standard python slicing. When working with multidimensional arrays, slicing can be carried out along the different dimensions at once
End of explanation
X_col2 = X[:,1]
X_row3 = X[2,]
print('Matrix X is\n', X)
print('Second column of matrix X:', X_col2, '; Dimensions:', X_col2.shape)
print('Third row of matrix X:', X_row3, '; Dimensions:', X_row3.shape)
Explanation: Extracting columns and rows from multidimensional arrays
Something to be aware of when extracting rows or columns from numpy arrays is that if you specify just the index of the row or column you want to extract, the result will be a 1-D numpy array in any case. For instance, the following code prints the second column and third row of the numpy array X, and shows its dimensions. Notice that in both cases you get arrays with 1 dimension only.
End of explanation
X_col2 = X[:,1:2]
X_row3 = X[2:3,]
print('Second column of matrix X:', X_col2, '; Dimensions:', X_col2.shape)
print('Third row of matrix X:', X_row3, '; Dimensions:', X_row3.shape)
Explanation: If you wish that the extracted row or column is still a 2-D row or column vector, it is important to specify an interval instead of a single value, even if such interval consists of just one value.
Many numpy functions will also return 1-D vectors. It is important to be aware of this behavior so that you can avoid and detect bugs in your code that may otherwise lead to undesired results.
End of explanation
print(X.shape)
print(X.dot(X.T))
print(X.T.dot(X))
print(np.linalg.inv(X.T.dot(X)))
#print np.linalg.inv(X.dot(X.T))
Explanation: 6. Matrix inversion
Non singular matrices can be inverted with method np.linalg.inv(). Invert square matrices $X\cdot X^\top$ and $X^\top \cdot X$, and see what happens when trying to invert a singular matrix. The rank of a matrix can be studied with method numpy.linalg.matrix_rank().
End of explanation
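# Added check: the ranks explain the behaviour above. X has 3 independent columns, so
# the 8x8 product X.dot(X.T) is rank-deficient (singular), while the 3x3 product is not.
print(np.linalg.matrix_rank(X.dot(X.T)))
print(np.linalg.matrix_rank(X.T.dot(X)))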
test_hashedequal(X.tostring(),'bdf94b49c2b7c6ae71a916beb647236918ead39f','Wrong value for variable X')
Explanation: 7. Exercises
In this section, you will complete three exercises where you will carry out some common operations when working with data structures. For this exercise you will work with the 2-D numpy array X, assuming that it contains the values of two different variables for 8 data patterns. A first column of ones has already been introduced in a previous exercise:
$${\bf X} = \left[ \begin{array}{ccc} 1 & x_1^{(1)} & x_2^{(1)} \\ 1 & x_1^{(2)} & x_2^{(2)} \\ \vdots & \vdots & \vdots \\ 1 & x_1^{(8)} & x_2^{(8)}\end{array}\right]$$
First of all, let us check that you are working with the right matrix
End of explanation
# Obtain matrix Z using concatenation functions
# Z = np.hstack(<FILL IN>)
Z = np.hstack((X,np.log(X[:,1:])))
test_hashedequal(Z.tostring(),'737dee4c168c5ce8fc53a5ec5cad43b5a53c7656','Incorrect matrix Z')
Explanation: 7.1. Non-linear transformations
Create a new matrix Z, where additional features are created by carrying out the following non-linear transformations:
$${\bf Z} = \left[ \begin{array}{ccccc} 1 & x_1^{(1)} & x_2^{(1)} & \log\left(x_1^{(1)}\right) & \log\left(x_2^{(1)}\right)\\ 1 & x_1^{(2)} & x_2^{(2)} & \log\left(x_1^{(2)}\right) & \log\left(x_2^{(2)}\right) \\ \vdots & \vdots & \vdots & \vdots & \vdots \\ 1 & x_1^{(8)} & x_2^{(8)} & \log\left(x_1^{(8)}\right) & \log\left(x_2^{(8)}\right)\end{array}\right] = \left[ \begin{array}{ccccc} 1 & z_1^{(1)} & z_2^{(1)} & z_3^{(1)} & z_4^{(1)}\\ 1 & z_1^{(2)} & z_2^{(2)} & z_3^{(2)} & z_4^{(2)} \\ \vdots & \vdots & \vdots & \vdots & \vdots \\ 1 & z_1^{(8)} & z_2^{(8)} & z_3^{(8)} & z_4^{(8)} \end{array}\right]$$
In other words, we are calculating the logarithmic values of the two original variables. From now on, any function involving linear transformations of the variables in Z will in fact be a non-linear function of the original variables.
End of explanation
def log_transform(x):
# return <FILL IN>
return np.hstack((x,np.log(x[1]),np.log(x[2])))
Z_map = np.array(list(map(log_transform,X)))
test_hashedequal(Z_map.tostring(),'737dee4c168c5ce8fc53a5ec5cad43b5a53c7656','Incorrect matrix Z')
Explanation: Repeat the previous exercise, this time using the map() method together with function log_transform(). This function needs to be defined in such a way that guarantees that variable Z_map is the same as the previously computed variable Z.
End of explanation
# Z_lambda = np.array(list(map(lambda x: <FILL IN>,X)))
Z_lambda = np.array(list(map(lambda x: np.hstack((x,np.log(x[1]),np.log(x[2]))),X)))
test_hashedequal(Z_lambda.tostring(),'737dee4c168c5ce8fc53a5ec5cad43b5a53c7656','Incorrect matrix Z')
Explanation: Repeat the previous exercise once more. This time, define a lambda function for the task.
End of explanation
# Calculate variable Z_poly, using any method that you want
# Z_poly = <FILL IN>
Z_poly = np.array(list(map(lambda x: np.array([x[1]**k for k in range(4)]),X)))
test_hashedequal(Z_poly.tostring(),'7e025512fcee1c1db317a1a30f01a0d4b5e46e67','Wrong variable Z_poly')
Explanation: 7.2. Polynomial transformations
Similarly to the previous exercise, now we are interested in obtaining another matrix that will be used to evaluate a polynomial model. In order to do so, compute matrix Z_poly as follows:
$$Z_\text{poly} = \left[ \begin{array}{cccc} 1 & x_1^{(1)} & (x_1^{(1)})^2 & (x_1^{(1)})^3 \\ 1 & x_1^{(2)} & (x_1^{(2)})^2 & (x_1^{(2)})^3 \\ \vdots & \vdots & \vdots & \vdots \\ 1 & x_1^{(8)} & (x_1^{(8)})^2 & (x_1^{(8)})^3 \end{array}\right]$$
Note that, in this case, only the first variable of each pattern is used.
End of explanation
w_log = np.array([3.3, 0.5, -2.4, 3.7, -2.9])
w_poly = np.array([3.2, 4.5, -3.2, 0.7])
# f_log = <FILL IN>
f_log = Z.dot(w_log)
# f_poly = <FILL IN>
f_poly = Z_poly.dot(w_poly)
test_hashedequal(f_log.tostring(),'d5801dfbd603f6db7010b9ef80fa48e351c0b38b','Incorrect evaluation of the logarithmic model')
test_hashedequal(f_poly.tostring(),'32abdcc0e32e76500947d0691cfa9917113d7019','Incorrect evaluation of the polynomial model')
Explanation: 7.3. Model evaluation
Finally, we can use previous data matrices Z and Z_poly to efficiently compute the output of the corresponding non-linear models over all the patterns in the data set. In this exercise, we consider the two following linear-in-the-parameters models to be evaluated:
$$f_\text{log}({\bf x}) = w_0 + w_1 \cdot x_1 + w_2 \cdot x_2 + w_3 \cdot \log(x_1) + w_4 \cdot \log(x_2)$$
$$f_\text{poly}({\bf x}) = w_0 + w_1 \cdot x_1 + w_2 \cdot x_1^2 + w_3 \cdot x_1^3$$
Compute the output of the two models for the particular weights that are defined in the code below. Your output variables f_log and f_poly should contain the outputs of the model for all eight patterns in the data set.
Note that for this task, you just need to implement the appropriate matrix products between the extended data matrices, Z and Z_poly, and the provided weight vectors.
End of explanation |
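# Optional cross-check (added): numpy's polyval evaluates the same cubic model when the
# weights are reversed into the highest-degree-first convention it expects.
f_poly_check = np.polyval(w_poly[::-1], X[:, 1])
print(np.allclose(f_poly, f_poly_check))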
13,831 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: Setup lightning
Step2: Iris
Step3: CIFAR | Python Code:
import matplotlib.pyplot as plt
import numpy as np
import os
assert os.environ["COLAB_TPU_ADDR"], "Make sure to select TPU from Edit > Notebook settings > Hardware accelerator"
#!pip install -q cloud-tpu-client==0.10 https://storage.googleapis.com/tpu-pytorch/wheels/torch_xla-1.8-cp37-cp37m-linux_x86_64.whl
!pip install -q cloud-tpu-client==0.10 https://storage.googleapis.com/tpu-pytorch/wheels/torch_xla-1.9-cp37-cp37m-linux_x86_64.whl
Explanation: <a href="https://colab.research.google.com/github/probml/pyprobml/blob/master/notebooks/logreg_tpu_pytorch_lightning_bolts.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Logistic regression on MNIST using TPUs and PyTorch Lightning
Code is from
https://lightning-bolts.readthedocs.io/en/latest/introduction_guide.html#logistic-regression
Setup TPU
Be sure to select Runtime=TPU in the drop-down menu!
See
https://colab.sandbox.google.com/github/pytorch/xla/blob/master/contrib/colab/getting-started.ipynb#scrollTo=3P6b3uqfzpDI
See also
https://pytorch-lightning.readthedocs.io/en/latest/notebooks/lightning_examples/cifar10-baseline.html#
End of explanation
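# Optional sanity check (assumes the torch_xla wheel installed above): this should
# report an XLA device such as xla:1 when a TPU runtime is selected.
import torch_xla.core.xla_model as xm
print(xm.xla_device())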
#!pip install -q lightning-bolts
!pip install --quiet torchmetrics lightning-bolts torchvision torch pytorch-lightning
from pl_bolts.models.regression import LogisticRegression
import pytorch_lightning as pl
from pl_bolts.datamodules import MNISTDataModule, FashionMNISTDataModule, CIFAR10DataModule, ImagenetDataModule
Explanation: Setup lightning
End of explanation
from sklearn.datasets import load_iris
from pl_bolts.datamodules import SklearnDataModule
import pytorch_lightning as pl
# use any numpy or sklearn dataset
X, y = load_iris(return_X_y=True)
dm = SklearnDataModule(X, y, batch_size=12)
# build model
model = LogisticRegression(input_dim=4, num_classes=3)
# fit
trainer = pl.Trainer(tpu_cores=8)
trainer.fit(model, train_dataloader=dm.train_dataloader(), val_dataloaders=dm.val_dataloader())
trainer.test(test_dataloaders=dm.test_dataloader())
Explanation: Iris
End of explanation
# create dataset
# dm = MNISTDataModule(num_workers=0, data_dir='data')
dm = CIFAR10DataModule(num_workers=0, data_dir="data")
dm.prepare_data() # force download now
print(dm.size())
print(dm.num_classes)
ndims = np.prod(dm.size())
nclasses = dm.num_classes
print([ndims, nclasses, ndims * nclasses])
model = LogisticRegression(input_dim=ndims, num_classes=nclasses, learning_rate=0.001)
print(model)
trainer = pl.Trainer(tpu_cores=8, max_epochs=2)
# trainer = pl.Trainer(max_epochs=2)
trainer.fit(model, datamodule=dm)
trainer.test(model, test_dataloaders=dm.val_dataloader())
Explanation: CIFAR
End of explanation |
13,832 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Python范儿:Coding Pythonically
1 数学定义:解析(Comprehensions,或称推导式)
1.1 让代码飞
找到0-9之间的偶数
Step1: 我们做的事情在数学定义上看来像是什么呢?
${x|x \in {0,1,2,....,9}, s.t. x\%2==0 }$
Step2: 这种代码形式称为Comprehensions,也就是解析(推导式)。
形式: {expr(item) for item in iterable if cond_expr(item)} 或者中括号、小括号包裹
第一部分:元素,对元素的操作(运算与函数都可以)
第二部分:遍历行为
第三部分:筛选条件(可选)
最后:用小括号,中括号,大括号包含住三部分,得到不同的数据结构或对象
Step3: 1.2 使用中括号进行列表解析
回忆enumerate
任务:把一个list里的元素和索引号找出来,更新回原来的list中去
循环操作可变类型
Step4: List Comprehension构造新列表
Step5: 老朋友Iterable,isinstance()检查
Step6: 字典也是Iterable
Step7: 多重解析
Step8: 1.3 使用小括号进行生成器
使用小括号做Comprehension返回生成器对象,占用O(1)内存空间:
Step9: 1.4 使用大括号解析得到集合或字典
使用小括号解析并不会返回一个不可变元组而是生成器,是一个需要强记的规则;然而大括号解析式就普通了许多。
Step10: 注意生成字典时使用key
Step11: 取回原来的列表:
Step12: 用*把Your_choice的内容而不是它本身作为参数传递
Step13: * 告诉Python即将传入的参数Your_choice不是单独一个序列,而是把Your_choice中的每一项作为参数
从下面的字典里按值来排序,取到值最大或者值最小的那条记录:
Step14: 花样传参
刚才已经知道能把列表加星号保持有序地作为一个个参数(argument)传给方法/函数:
Step15: 带默认值的参数叫keyword arguments(kargs)
Step16: 3 变量之变:深浅拷贝
深浅拷贝:关系到变量的正确修改与复制
变量的属性:
身份:就像身份证(或者内存地址)那样,id()
属性:表示变量的类型,type()或者isInstance()确认
值:这个地址存的数据,通过与名字绑定的方法来读取
3.1 浅拷贝
完全切片(Slicing)操作[
Step17: 假设一下,你的英雄打到了豪华套装(your_hero),或者游戏中的英雄(my_hero)有了一个嗜血的加成效果,所以数值会出现一些不一样的地方。
Step18: 为什么相互影响了呢?
Step19: 第一个对象是不可变的(字符串),而且还重新被赋值
而第二个对象是可变的(一个列表),而且修改的是第二个对象里面的内容,第二个对象本身指向的地址不变
3.2 资深玩家的选择:深拷贝
Step20: 用id( )验证一下
Step21: 3.3 不想Debug太烦?
规避修改复杂可变类型内容,维护唯一拷贝
规避使用可变类型
理解Clojure
4 异常处理:Try-Except-Else-Finally
会犯错的是人,能原谅人的是……
try
except
finally
Python允许程序在运行当中检测错误。
每检测到一个错误,Python解释器就引发一个异常并报出详细的错误信息:
Step22: 我们执行了一个除零操作(这显然是非法的),报出了"ZeroDivisionError
Step23: 4.2 笔下无错,心中有错
常见的python中的异常,举例来说:
* 有上文中已经出现的除零错误(ZeroDivisionError)
* 尝试访问未声明的变量(NameError)
* 语法错误 (Syntax Error)
* 请求索引超过索引范围 (IndexError,常见于切片操作中)
* 输入/输出错误 (IOError)
Step24: 4.3 异常处理:完全体
通过try-except-else-finally来感受异常处理的完全体吧!
try 下面是可能有异常的代码块
except 下面是对异常的处理
else 下面是在并没有异常的时候执行的代码块
finally 下面是收尾工作,无论是否有异常都执行
并用另一种方式记录异常。 | Python Code:
#number = range(10)
size = 10
even_numbers = []
n = 0
while n < size:
if n % 2==0:
even_numbers.append(n)
n += 1
print even_numbers
Explanation: Pythonic style: Coding Pythonically
1 A mathematical definition: comprehensions
1.1 Let the code fly
Find the even numbers between 0 and 9
End of explanation
{ x for x in range(10) if x % 2==0 }
Explanation: What does what we just did look like as a mathematical definition?
$\{x \mid x \in \{0,1,2,\ldots,9\}, \text{ s.t. } x\%2==0 \}$
End of explanation
print [ x**2 for x in xrange(10)]
Explanation: This form of code is called a comprehension.
Form: {expr(item) for item in iterable if cond_expr(item)}, or the same parts wrapped in square or round brackets
Part one: the element and the operation applied to it (an expression or a function both work)
Part two: the iteration
Part three: the filter condition (optional)
Finally: wrap the three parts in round, square or curly brackets to obtain different data structures or objects
End of explanation
Lord_of_ring = ['Ainur','Dragons','Dwarves','Elves','Ents','Hobbits','Men','Orcs']
print type(enumerate(Lord_of_ring)),enumerate(Lord_of_ring)
for idx,element in enumerate(Lord_of_ring):
Lord_of_ring[idx] ="{0}:{1}".format(idx,element)
print Lord_of_ring
Explanation: 1.2 List comprehensions with square brackets
Recall enumerate
Task: find each element of a list together with its index and write the result back into the original list
Looping over a mutable type
End of explanation
test =['Ainur','Dragons','Dwarves','Elves','Ents','Hobbits','Men','Orcs']
def _trans(idx,element):
return '{0}:{1}'.format(idx,element)
print [_trans(idx,element) for idx,element in enumerate(test)]
print ['{0}:{1}'.format(idx,element) for idx,element in enumerate(test) ]
Explanation: Building a new list with a List Comprehension
End of explanation
import collections
print isinstance("Hello,world", collections.Iterable)
print isinstance( test, collections.Iterable)
Explanation: Our old friend Iterable, checked with isinstance()
End of explanation
language={"Scala":"Martin Odersky",\
"Clojure":"Richy Hickey",\
"C":"Dennis Ritchie",\
"Standard ML":"Robin Milner"}
['{0:<12} created by {1:<15}'.format(la,ua)\
for la,ua in language.iteritems()]
Explanation: Dictionaries are Iterable too:
End of explanation
print [(x+1,y+1) for x in xrange(4) for y in xrange(4)]
print [(x+1,y+1) for x in xrange(4) for y in xrange(4) if y<x]
print [(x+1,y+1) for x in xrange(4) for y in xrange(x)]
Explanation: Nested comprehensions
End of explanation
num=range(0,20)
simple_generator=(x**2 for x in num if x > 0)
print simple_generator
for element in simple_generator:
print element,
Explanation: 1.3 Generator expressions with round brackets
A comprehension written with round brackets returns a generator object, which takes O(1) memory:
End of explanation
x = range(10)
print { i for i in x if i%2==0 }
print { idx:i**2 for idx,i in enumerate(x) if i%2==0 }
Explanation: 1.4 Set and dict comprehensions with curly brackets
That a round-bracket comprehension returns a generator rather than an immutable tuple is a rule you simply have to memorize; curly-bracket comprehensions are much more ordinary.
End of explanation
war3_char = ['Orc','Humans','Undead','Night Elves']
dota_hero = ['Blade Master','Archmage','Death King','Demon Hunter']
Your_choice=zip(war3_char,dota_hero)
print Your_choice
Explanation: Note that when building a dictionary you just use the key:value form.
2 Fancy argument passing: zip and the star operators
zip: the "zipper" function
*: often used together with zip, for passing positional arguments.
**: used for passing keyword arguments
2.1 Zip
enumerate: returns a generator that yields the index and the content of an Iterable at each step
sorted: returns a list whose elements have been put in order
zip: treats several equal-length lists as the columns of a data table and returns a list of tuples, each tuple being one row of the table
End of explanation
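# A closely related idiom (added for illustration): zip output can feed dict() directly,
# pairing each race with a hero.
hero_of = dict(zip(war3_char, dota_hero))
print hero_of['Orc']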
choice1,choice2,choice3,choice4 = Your_choice
print zip(choice1,choice2,choice3,choice4)
Explanation: Recovering the original lists:
End of explanation
print zip(*Your_choice)
Explanation: Use * to pass the contents of Your_choice, rather than Your_choice itself, as the arguments
End of explanation
Base_Damage={'Blade Master':48,'Death King':65,'Tauren Chieftain':51}
print zip(Base_Damage.itervalues(),Base_Damage.iterkeys())
max_Damage=max(zip(Base_Damage.itervalues(),Base_Damage.iterkeys()))
min_Damage=min(zip(Base_Damage.itervalues(),Base_Damage.iterkeys()))
print max_Damage,min_Damage
Explanation: * tells Python that the incoming argument Your_choice is not a single sequence; instead, every item inside Your_choice is passed as a separate argument
Sort the dictionary below by value and pick out the records with the largest and the smallest values:
End of explanation
def triplesum(a,b,c):
return a*100+b*10+c
print triplesum(*[1,2,3])
print triplesum(1,2,3)
Explanation: Fancy argument passing
We already know that a starred list can be passed, in order, as individual arguments to a method or function:
End of explanation
def triplesum_default(a=0,b=0,c=0,*args):
return a*100+b*10+c
def ntuplesum_default(*args):
sum = 0
for i in args:
sum*=10
sum+=i
return sum
print triplesum_default(*[1,2,3,4])
print ntuplesum_default(*[1,2,3,5])
print ntuplesum_default(1,2,3,5,6,7,8)
print triplesum_default(*[1,3])
print triplesum_default(**{'b':2,'c':3,'a':1})
print triplesum_default(**{'c':3,'a':1})
Explanation: Parameters with default values are called keyword arguments (kwargs):
End of explanation
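# Added sketch of the other direction: ** in a function signature collects extra keyword
# arguments into a dict. The function name here is purely illustrative.
def describe_hero(name, **stats):
    print name
    for key, value in stats.items():
        print ' ', key, '=', value

describe_hero('Blade Master', damage=48, armor=5)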
import copy
unit = ['name',['Base_Damage',65.00]]
my_hero = unit[:] # copy using a slice
your_hero = list(unit) # copy using the factory function
its_hero = copy.copy(unit) # copy using copy.copy (a shallow copy)
print [id(x) for x in unit,my_hero,your_hero,its_hero]
print [id(x[0]) for x in unit,my_hero,your_hero,its_hero]
print [id(x[1]) for x in unit,my_hero,your_hero,its_hero]
Explanation: 3 The changing nature of variables: shallow and deep copies
Shallow versus deep copies: this determines whether variables are modified and duplicated correctly
Properties of a variable:
Identity: like an ID card (or a memory address), inspected with id()
Type: the kind of the variable, confirmed with type() or isinstance()
Value: the data stored at that address, read through the name bound to it
3.1 Shallow copies
A full slicing operation [:]
Factory functions such as list() and tuple()
copy.copy from the copy module
End of explanation
my_hero[0] = 'Kel\'Thuzad'
your_hero[0] = 'Mirana'
its_hero[0] = 'Morphling'
print my_hero,your_hero,its_hero
my_hero[1][1]=100.00 # first change a value inside one object
print my_hero,your_hero,its_hero # the values are clearly off: the change made through my_hero shows up in the other copies as well
Explanation: Suppose your hero picked up a deluxe gear set (your_hero), or the in-game hero (my_hero) gained a bloodlust buff, so some of the numbers should now differ.
End of explanation
print [id(x[0]) for x in unit,my_hero,your_hero,its_hero]
print [id(x[1]) for x in unit,my_hero,your_hero,its_hero]
Explanation: Why did they affect each other?
End of explanation
import copy
unit = ['name',['Base_Damage',[65.00]]]
my_hero = copy.deepcopy(unit)
your_hero = copy.deepcopy(unit)
my_hero[0] = 'Kel\'Thuzad'
your_hero[0] = 'Mirana'
my_hero[1][1][0] = 100.00
print my_hero,your_hero
Explanation: The first object is immutable (a string), and on top of that it was rebound to a new value
The second object, however, is mutable (a list), and what we modified was the content inside it, while the address the second object itself points to stays the same
3.2 The veteran player's choice: deep copies
End of explanation
print [id(x) for x in my_hero]
print [id(x) for x in your_hero]
Explanation: Verify it with id()
End of explanation
y = 6
x = 5
x,y = y,x
1/0
Explanation: 3.3 Tired of painful debugging?
Avoid modifying the contents of complex mutable types; keep a single authoritative copy
Avoid using mutable types where you can (a small illustration follows below)
Understand Clojure-style immutability
4 Exception handling: Try-Except-Else-Finally
To err is human, to forgive is...
try
except
finally
Python allows a program to detect errors while it is running.
Each time an error is detected, the Python interpreter raises an exception and reports a detailed error message:
End of explanation
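The illustration promised in 3.3 (an added example): when data should never change, store it in an immutable container so accidental aliasing cannot corrupt it.
hero_template = ('name', ('Base_Damage', 65.00))  # a tuple of tuples is immutable
try:
    hero_template[1] = ('Base_Damage', 100.00)
except TypeError, err:
    print 'Cannot modify an immutable template:', err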
try:
x = 1/0
y = range(10)[10]
except Exception:
print "Wow, such cute error"
Explanation: We performed a division by zero (which is clearly illegal), and Python reported "ZeroDivisionError: integer division or modulo by zero"
When writing Python programs, read the error messages carefully
It is a good idea to use IPython notebooks for small blocks of code
4.1 Basic usage: try-except Exception
If you need to add error detection and exception handling, wrap the code you want to protect inside a try-except statement.
End of explanation
try:
x = 1/0
y = range(10)[10]
except ZeroDivisionError,e1:
print "Wow, such cute divisor"
except IndexError,e2:
print "Wow, such cute index"
print e1
try:
x = 1/2
y = range(10)[10]
except ZeroDivisionError,e1:
print "Wow, such cute divisor"
except IndexError,e2:
print "Wow, such cute index"
print e2
Explanation: 4.2 Write clean code, but keep the possible errors in mind
Common exceptions in Python include, for example:
* the division-by-zero error we already saw above (ZeroDivisionError)
* trying to access an undeclared variable (NameError)
* syntax errors (SyntaxError)
* requesting an index outside the valid range (IndexError, common in slicing and indexing)
* input/output errors (IOError)
End of explanation
import sys
try:
x = 1/0
y = range(10)[-1]
except Exception,error:
x = 0
y = 9
print 'X,Y is corrected.'
info = sys.exc_info()
else:
print 'Catch no exceptions. Great!'
finally:
z = x + y
print z**2 + x**2 + y**2
print 'Finished'
print '\n',error,'\n',info[0],'\n',info[1],'\n',info[2],info[2].tb_lineno
Explanation: 4.3 Exception handling: the complete form
Experience the complete form of exception handling with try-except-else-finally!
try: the block of code that may raise an exception
except: how the exception is handled
else: the block executed only when no exception occurred
finally: clean-up work, executed whether or not an exception occurred
It also records the exception in another way (through sys.exc_info).
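An added example beyond the original text: you can also raise exceptions yourself, including custom exception classes, and catch them with the same machinery.
class HeroNotFound(Exception):
    pass

def find_hero(name, heroes):
    if name not in heroes:
        raise HeroNotFound('no hero named %s' % name)
    return heroes[name]

try:
    find_hero('Lina', {'Blade Master': 48})
except HeroNotFound, err:
    print 'Caught:', err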
End of explanation |
13,833 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Viewing and manipulating FITS images
Authors
Lia Corrales, Kris Stern, Stephanie T. Douglas, Kelle Cruz
Learning Goals
Open FITS files and load image data
Make a 2D histogram with image data
Stack several images into a single image
Write image data to a FITS file
Keywords
FITS, file input/output, image manipulation, numpy, matplotlib, histogram, colorbar
Summary
This tutorial demonstrates the use of astropy.utils.data to download a data file, then uses astropy.io.fits to open the file, and lastly uses matplotlib to view the image with different color scales and stretches and to make histograms. In this tutorial we've also included a demonstration of simple image stacking.
Step1: Download the example FITS files for this tutorial.
Step2: Opening FITS files and loading the image data
Let's open the FITS file to find out what it contains.
Step3: Generally, the image information is located in the <code>PRIMARY</code> block. The blocks are numbered and can be accessed by indexing <code>hdu_list</code>.
Step4: Our data is now stored as a 2D numpy array. But how do we know the dimensions of the image? We can look at the shape of the array.
Step5: Great! At this point, we can close the FITS file because we've stored everything we wanted to a variable.
Step6: SHORTCUT
If you don't need to examine the FITS header, you can call fits.getdata to bypass the previous steps.
Step7: Viewing the image data and getting basic statistics
Step8: Let's get some basic statistics about our image
Step9: Plotting a histogram
To make a histogram with matplotlib.pyplot.hist(), we'll need to cast the data from a 2D array to something one dimensional.
In this case, let's use the ndarray.flatten() to return a 1D numpy array.
Step10: Displaying the image with a logarithmic scale
What if we want to use a logarithmic color scale? To do so, we'll need to load the LogNorm object from matplotlib.
Step11: Basic image math
Step12: Now we'll stack the images by summing the concatenated list.
Step13: We're going to show the image, but need to decide on the best stretch. To do so let's plot a histogram of the data.
Step14: We'll use the keywords vmin and vmax to set limits on the color scaling for imshow.
Step15: Writing image data to a FITS file
We can do this with the writeto() method.
Warning | Python Code:
import numpy as np
# Set up matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
from astropy.io import fits
Explanation: Viewing and manipulating FITS images
Authors
Lia Corrales, Kris Stern, Stephanie T. Douglas, Kelle Cruz
Learning Goals
Open FITS files and load image data
Make a 2D histogram with image data
Stack several images into a single image
Write image data to a FITS file
Keywords
FITS, file input/output, image manipulation, numpy, matplotlib, histogram, colorbar
Summary
This tutorial demonstrates the use of astropy.utils.data to download a data file, then uses astropy.io.fits to open the file, and lastly uses matplotlib to view the image with different color scales and stretches and to make histograms. In this tutorial we've also included a demonstration of simple image stacking.
End of explanation
from astropy.utils.data import download_file
image_file = download_file('http://data.astropy.org/tutorials/FITS-images/HorseHead.fits', cache=True )
Explanation: Download the example FITS files for this tutorial.
End of explanation
hdu_list = fits.open(image_file)
hdu_list.info()
Explanation: Opening FITS files and loading the image data
Let's open the FITS file to find out what it contains.
End of explanation
image_data = hdu_list[0].data
Explanation: Generally, the image information is located in the <code>PRIMARY</code> block. The blocks are numbered and can be accessed by indexing <code>hdu_list</code>.
End of explanation
print(type(image_data))
print(image_data.shape)
Explanation: Our data is now stored as a 2D numpy array. But how do we know the dimensions of the image? We can look at the shape of the array.
End of explanation
hdu_list.close()
Explanation: Great! At this point, we can close the FITS file because we've stored everything we wanted to a variable.
End of explanation
image_data = fits.getdata(image_file)
print(type(image_data))
print(image_data.shape)
Explanation: SHORTCUT
If you don't need to examine the FITS header, you can call fits.getdata to bypass the previous steps.
End of explanation
plt.imshow(image_data, cmap='gray')
plt.colorbar()
# To see more color maps
# http://wiki.scipy.org/Cookbook/Matplotlib/Show_colormaps
Explanation: Viewing the image data and getting basic statistics
End of explanation
print('Min:', np.min(image_data))
print('Max:', np.max(image_data))
print('Mean:', np.mean(image_data))
print('Stdev:', np.std(image_data))
Explanation: Let's get some basic statistics about our image:
End of explanation
print(type(image_data.flatten()))
histogram = plt.hist(image_data.flatten(), bins='auto')
Explanation: Plotting a histogram
To make a histogram with matplotlib.pyplot.hist(), we'll need to cast the data from a 2D array to something one dimensional.
In this case, let's use the ndarray.flatten() to return a 1D numpy array.
End of explanation
from matplotlib.colors import LogNorm
plt.imshow(image_data, cmap='gray', norm=LogNorm())
# I chose the tick marks based on the histogram above
cbar = plt.colorbar(ticks=[5.e3,1.e4,2.e4])
cbar.ax.set_yticklabels(['5,000','10,000','20,000'])
Explanation: Displaying the image with a logarithmic scale
What if we want to use a logarithmic color scale? To do so, we'll need to load the LogNorm object from matplotlib.
End of explanation
base_url = 'http://data.astropy.org/tutorials/FITS-images/M13_blue_{0:04d}.fits'
image_list = [download_file(base_url.format(n), cache=True)
for n in range(1, 5+1)]
image_concat = [fits.getdata(image) for image in image_list]
Explanation: Basic image math: image stacking
You can also perform math with the image data like any other numpy array. In this particular example, we'll stack several images of M13 taken with a ~10'' telescope.
Let's start by opening a series of FITS files and store the data in a list, which we've named image_concat.
End of explanation
# The long way
final_image = np.zeros(shape=image_concat[0].shape)
for image in image_concat:
final_image += image
# The short way
# final_image = np.sum(image_concat, axis=0)
Explanation: Now we'll stack the images by summing the concatenated list.
End of explanation
image_hist = plt.hist(final_image.flatten(), bins='auto')
Explanation: We're going to show the image, but need to decide on the best stretch. To do so let's plot a histogram of the data.
End of explanation
plt.imshow(final_image, cmap='gray', vmin=2E3, vmax=3E3)
plt.colorbar()
Explanation: We'll use the keywords vmin and vmax to set limits on the color scaling for imshow.
End of explanation
outfile = 'stacked_M13_blue.fits'
hdu = fits.PrimaryHDU(final_image)
hdu.writeto(outfile, overwrite=True)
Explanation: Writing image data to a FITS file
We can do this with the writeto() method.
Warning: you'll receive an error if the file you are trying to write already exists. That's why we've set overwrite=True.
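As a quick sanity check (an added sketch, not part of the original tutorial), we can read the file back in and confirm it matches the array we wrote.
reread = fits.getdata(outfile)
print(np.allclose(reread, final_image))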
End of explanation |
13,834 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Trying out features
Learning Objectives
Step1: Next, we'll load our data set.
Step2: Examine and split the data
It's a good idea to get to know your data a little bit before you work with it.
We'll print out a quick summary of a few useful statistics on each column.
This will include things like mean, standard deviation, max, min, and various quantiles.
Step3: Now, split the data into two parts -- training and evaluation.
Step4: Training and Evaluation
In this exercise, we'll be trying to predict median_house_value. It will be our label (sometimes also called a target).
We'll modify the feature_cols and input function to represent the features you want to use.
We divide total_rooms by households to get avg_rooms_per_house which we expect to positively correlate with median_house_value.
We also divide population by total_rooms to get avg_persons_per_room which we expect to negatively correlate with median_house_value. | Python Code:
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
# Ensure the right version of Tensorflow is installed.
!pip freeze | grep tensorflow==2.5
import math
import shutil
import numpy as np
import pandas as pd
import tensorflow as tf
print(tf.__version__)
tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.INFO)
pd.options.display.max_rows = 10
pd.options.display.float_format = '{:.1f}'.format
Explanation: Trying out features
Learning Objectives:
* Improve the accuracy of a model by adding new features with the appropriate representation
The data is based on 1990 census data from California. This data is at the city block level, so these features reflect the total number of rooms in that block, or the total number of people who live on that block, respectively.
Set Up
In this first cell, we'll load the necessary libraries.
End of explanation
df = pd.read_csv("https://storage.googleapis.com/ml_universities/california_housing_train.csv", sep=",")
Explanation: Next, we'll load our data set.
End of explanation
df.head()
df.describe()
Explanation: Examine and split the data
It's a good idea to get to know your data a little bit before you work with it.
We'll print out a quick summary of a few useful statistics on each column.
This will include things like mean, standard deviation, max, min, and various quantiles.
End of explanation
np.random.seed(seed=1) #makes result reproducible
msk = np.random.rand(len(df)) < 0.8
traindf = df[msk]
evaldf = df[~msk]
Explanation: Now, split the data into two parts -- training and evaluation.
End of explanation
def add_more_features(df):
df['avg_rooms_per_house'] = df['total_rooms'] / df['households'] #expect positive correlation
df['avg_persons_per_room'] = df['population'] / df['total_rooms'] #expect negative correlation
return df
# Create pandas input function
def make_input_fn(df, num_epochs):
return tf.compat.v1.estimator.inputs.pandas_input_fn(
x = add_more_features(df),
y = df['median_house_value'] / 100000, # will talk about why later in the course
batch_size = 128,
num_epochs = num_epochs,
shuffle = True,
queue_capacity = 1000,
num_threads = 1
)
# Define your feature columns
def create_feature_cols():
return [
tf.feature_column.numeric_column('housing_median_age'),
tf.feature_column.bucketized_column(tf.feature_column.numeric_column('latitude'), boundaries = np.arange(32.0, 42, 1).tolist()),
tf.feature_column.numeric_column('avg_rooms_per_house'),
tf.feature_column.numeric_column('avg_persons_per_room'),
tf.feature_column.numeric_column('median_income')
]
# Create estimator train and evaluate function
def train_and_evaluate(output_dir, num_train_steps):
estimator = tf.compat.v1.estimator.LinearRegressor(model_dir = output_dir, feature_columns = create_feature_cols())
train_spec = tf.estimator.TrainSpec(input_fn = make_input_fn(traindf, None),
max_steps = num_train_steps)
eval_spec = tf.estimator.EvalSpec(input_fn = make_input_fn(evaldf, 1),
steps = None,
start_delay_secs = 1, # start evaluating after N seconds,
throttle_secs = 5) # evaluate every N seconds
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
OUTDIR = './trained_model'
# Run the model
shutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time
tf.compat.v1.summary.FileWriterCache.clear()
train_and_evaluate(OUTDIR, 2000)
Explanation: Training and Evaluation
In this exercise, we'll be trying to predict median_house_value. It will be our label (sometimes also called a target).
We'll modify the feature_cols and input function to represent the features you want to use.
We divide total_rooms by households to get avg_rooms_per_house which we expect to positively correlate with median_house_value.
We also divide population by total_rooms to get avg_persons_per_room which we expect to negatively correlate with median_house_value.
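One way to check these expectations before training (an added sketch; it simply reuses the traindf DataFrame and the add_more_features function defined above) is to look at the correlation of the engineered features with the label.
check_df = add_more_features(traindf.copy())
print(check_df[['avg_rooms_per_house', 'avg_persons_per_room', 'median_income',
                'median_house_value']].corr()['median_house_value'])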
End of explanation |
13,835 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Translating French to English with Pytorch
Step1: Prepare corpus
The French-English parallel corpus can be downloaded from http
Step2: To make this problem a little simpler so we can train our model more quickly, we'll just learn to translate questions that begin with 'Wh' (e.g. what, why, where which). Here are our regexps that filter the sentences we want.
Step3: Because it takes a while to load the data, we save the results to make it easier to load in later.
Step4: Because we are translating at word level, we need to tokenize the text first. (Note that it is also possible to translate at character level, which doesn't require tokenizing.) There are many tokenizers available, but we found we got best results using these simple heuristics.
Step5: Special tokens used to pad the end of sentences, and to mark the start of a sentence.
Step6: Enumerate the unique words (vocab) in the corpus, and also create the reverse map (word->index). Then use this mapping to encode every sentence as a list of int indices.
Step7: Word vectors
Stanford's GloVe word vectors can be downloaded from https
Step8: For French word vectors, we're using those from http
Step9: We need to map each word index in our vocabs to their word vector. Not every word in our vocabs will be in our word vectors, since our tokenization approach won't be identical to the word vector creators - in these cases we simply create a random vector.
Step10: Prep data
Each sentence has to be of equal length. Keras has a convenient function pad_sequences to truncate and/or pad each sentence as required - even although we're not using keras for the neural net, we can still use any functions from it we need!
Step11: And of course we need to separate our training and test sets...
Step12: Here's an example of a French and English sentence, after encoding and padding.
Step13: Model
Basic encoder-decoder
Step14: Turning a sequence into a representation can be done using an RNN (called the 'encoder'. This approach is useful because RNN's are able to keep track of state and memory, which is obviously important in forming a complete understanding of a sentence.
* bidirectional=True passes the original sequence through an RNN, and the reversed sequence through a different RNN and concatenates the results. This allows us to look forward and backwards.
* We do this because in language things that happen later often influence what came before (i.e. in Spanish, "el chico, la chica" means the boy, the girl; the word for "the" is determined by the gender of the subject, which comes after).
Step15: Finally, we arrive at a vector representation of the sequence which captures everything we need to translate it. We feed this vector into more RNN's, which are trying to generate the labels. After this, we make a classification for what each word is in the output sequence.
Step16: This graph demonstrates the accuracy decay for a neural translation task. With an encoding/decoding technique, larger input sequences result in less accuracy.
<img src="https
Step17: But Pytorch doesn't support broadcasting. So let's add it to the basic operators, and to a general tensor dot product
Step18: Let's test!
Step19: Attentional model
Step20: Attention testing
Pytorch makes it easy to check intermediate results, when creating a custom architecture such as this one, since you can interactively run each function.
Step21: Train
Pytorch has limited functionality for training models automatically - you will generally have to write your own training loops. However, Pytorch makes it far easier to customize how this training is done, such as using teacher forcing.
Step22: Run
Step23: Testing | Python Code:
%matplotlib inline
import re, pickle, collections, bcolz, numpy as np, keras, sklearn, math, operator
from gensim.models import word2vec, KeyedVectors # - added KeyedVectors.load_word2vec_format
import torch, torch.nn as nn
from torch.autograd import Variable
from torch import optim
import torch.nn.functional as F
# path='/data/datasets/fr-en-109-corpus/'
# dpath = '/data/translate/'
path='data/translate/fr-en-109-corpus/'
dpath = 'data/translate/'
Explanation: Translating French to English with Pytorch
End of explanation
fname=path+'giga-fren.release2.fixed'
en_fname = fname+'.en'
fr_fname = fname+'.fr'
Explanation: Prepare corpus
The French-English parallel corpus can be downloaded from http://www.statmt.org/wmt10/training-giga-fren.tar. It was created by Chris Callison-Burch, who crawled millions of web pages and then used 'a set of simple heuristics to transform French URLs onto English URLs (i.e. replacing "fr" with "en" and about 40 other hand-written rules), and assume that these documents are translations of each other'.
End of explanation
re_eq = re.compile('^(Wh[^?.!]+\?)')
re_fq = re.compile('^([^?.!]+\?)')
lines = ((re_eq.search(eq), re_fq.search(fq))
for eq, fq in zip(open(en_fname), open(fr_fname)))
qs = [(e.group(), f.group()) for e,f in lines if e and f]; len(qs)
qs[:6]
Explanation: To make this problem a little simpler so we can train our model more quickly, we'll just learn to translate questions that begin with 'Wh' (e.g. what, why, where which). Here are our regexps that filter the sentences we want.
End of explanation
pickle.dump(qs, open(dpath+'fr-en-qs.pkl', 'wb'))
qs = pickle.load(open(dpath+'fr-en-qs.pkl', 'rb'))
en_qs, fr_qs = zip(*qs)
Explanation: Because it takes a while to load the data, we save the results to make it easier to load in later.
End of explanation
re_apos = re.compile(r"(\w)'s\b") # make 's a separate word
re_mw_punc = re.compile(r"(\w[’'])(\w)") # other ' in a word creates 2 words
re_punc = re.compile("([\"().,;:/_?!—])") # add spaces around punctuation
re_mult_space = re.compile(r" *") # replace multiple spaces with just one
def simple_toks(sent):
sent = re_apos.sub(r"\1 's", sent)
sent = re_mw_punc.sub(r"\1 \2", sent)
sent = re_punc.sub(r" \1 ", sent).replace('-', ' ')
sent = re_mult_space.sub(' ', sent)
return sent.lower().split()
fr_qtoks = list(map(simple_toks, fr_qs)); fr_qtoks[:4]
en_qtoks = list(map(simple_toks, en_qs)); en_qtoks[:4]
simple_toks("Rachel's baby is cuter than other's.")
Explanation: Because we are translating at word level, we need to tokenize the text first. (Note that it is also possible to translate at character level, which doesn't require tokenizing.) There are many tokenizers available, but we found we got best results using these simple heuristics.
End of explanation
PAD = 0; SOS = 1
Explanation: Special tokens used to pad the end of sentences, and to mark the start of a sentence.
End of explanation
def toks2ids(sents):
voc_cnt = collections.Counter(t for sent in sents for t in sent)
vocab = sorted(voc_cnt, key=voc_cnt.get, reverse=True)
vocab.insert(PAD, "<PAD>")
vocab.insert(SOS, "<SOS>")
w2id = {w:i for i,w in enumerate(vocab)}
ids = [[w2id[t] for t in sent] for sent in sents]
return ids, vocab, w2id, voc_cnt
fr_ids, fr_vocab, fr_w2id, fr_counts = toks2ids(fr_qtoks)
en_ids, en_vocab, en_w2id, en_counts = toks2ids(en_qtoks)
Explanation: Enumerate the unique words (vocab) in the corpus, and also create the reverse map (word->index). Then use this mapping to encode every sentence as a list of int indices.
End of explanation
def load_glove(loc):
return (bcolz.open(loc+'.dat')[:],
pickle.load(open(loc+'_words.pkl','rb'), encoding='latin1'),
pickle.load(open(loc+'_idx.pkl','rb'), encoding='latin1'))
en_vecs, en_wv_word, en_wv_idx = load_glove('data/glove/results/6B.100d')
en_w2v = {w: en_vecs[en_wv_idx[w]] for w in en_wv_word}
n_en_vec, dim_en_vec = en_vecs.shape
en_w2v['king']
Explanation: Word vectors
Stanford's GloVe word vectors can be downloaded from https://nlp.stanford.edu/projects/glove/ (in the code below we have preprocessed them into a bcolz array). We use these because each individual word has a single word vector, which is what we need for translation. Word2vec, on the other hand, often uses multi-word phrases.
End of explanation
# w2v_path='/data/datasets/nlp/frWac_non_lem_no_postag_no_phrase_200_skip_cut100.bin'
w2v_path='data/frwac/frWac_non_lem_no_postag_no_phrase_200_skip_cut100.bin'
# fr_model = word2vec.Word2Vec.load_word2vec_format(w2v_path, binary=True) # - Deprecated
fr_model = KeyedVectors.load_word2vec_format(w2v_path, binary=True)
fr_voc = fr_model.vocab
dim_fr_vec = 200
Explanation: For French word vectors, we're using those from http://fauconnier.github.io/index.html
End of explanation
def create_emb(w2v, targ_vocab, dim_vec):
vocab_size = len(targ_vocab)
emb = np.zeros((vocab_size, dim_vec))
found=0
for i, word in enumerate(targ_vocab):
try: emb[i] = w2v[word]; found+=1
except KeyError: emb[i] = np.random.normal(scale=0.6, size=(dim_vec,))
return emb, found
en_embs, found = create_emb(en_w2v, en_vocab, dim_en_vec); en_embs.shape, found
fr_embs, found = create_emb(fr_model, fr_vocab, dim_fr_vec); fr_embs.shape, found
Explanation: We need to map each word index in our vocabs to their word vector. Not every word in our vocabs will be in our word vectors, since our tokenization approach won't be identical to the word vector creators - in these cases we simply create a random vector.
End of explanation
from keras.preprocessing.sequence import pad_sequences
maxlen = 30
en_padded = pad_sequences(en_ids, maxlen, 'int64', "post", "post")
fr_padded = pad_sequences(fr_ids, maxlen, 'int64', "post", "post")
en_padded.shape, fr_padded.shape, en_embs.shape
Explanation: Prep data
Each sentence has to be of equal length. Keras has a convenient function pad_sequences to truncate and/or pad each sentence as required - even although we're not using keras for the neural net, we can still use any functions from it we need!
End of explanation
from sklearn import model_selection
fr_train, fr_test, en_train, en_test = model_selection.train_test_split(
fr_padded, en_padded, test_size=0.1)
[o.shape for o in (fr_train, fr_test, en_train, en_test)]
Explanation: And of course we need to separate our training and test sets...
End of explanation
fr_train[0], en_train[0]
Explanation: Here's an example of a French and English sentence, after encoding and padding.
End of explanation
def long_t(arr): return Variable(torch.LongTensor(arr)).cuda()
fr_emb_t = torch.FloatTensor(fr_embs).cuda()
en_emb_t = torch.FloatTensor(en_embs).cuda()
def create_emb(emb_mat, non_trainable=False):
output_size, emb_size = emb_mat.size()
emb = nn.Embedding(output_size, emb_size)
emb.load_state_dict({'weight': emb_mat})
if non_trainable:
for param in emb.parameters():
param.requires_grad = False
return emb, emb_size, output_size
Explanation: Model
Basic encoder-decoder
End of explanation
class EncoderRNN(nn.Module):
def __init__(self, embs, hidden_size, n_layers=2):
super(EncoderRNN, self).__init__()
self.emb, emb_size, output_size = create_emb(embs, True)
self.n_layers = n_layers
self.hidden_size = hidden_size
self.gru = nn.GRU(emb_size, hidden_size, batch_first=True, num_layers=n_layers)
# ,bidirectional=True)
def forward(self, input, hidden):
return self.gru(self.emb(input), hidden)
def initHidden(self, batch_size):
return Variable(torch.zeros(self.n_layers, batch_size, self.hidden_size))
def encode(inp, encoder):
batch_size, input_length = inp.size()
hidden = encoder.initHidden(batch_size).cuda()
enc_outputs, hidden = encoder(inp, hidden)
return long_t([SOS]*batch_size), enc_outputs, hidden
Explanation: Turning a sequence into a representation can be done using an RNN (called the 'encoder'. This approach is useful because RNN's are able to keep track of state and memory, which is obviously important in forming a complete understanding of a sentence.
* bidirectional=True passes the original sequence through an RNN, and the reversed sequence through a different RNN and concatenates the results. This allows us to look forward and backwards.
* We do this because in language things that happen later often influence what came before (i.e. in Spanish, "el chico, la chica" means the boy, the girl; the word for "the" is determined by the gender of the subject, which comes after).
End of explanation
class DecoderRNN(nn.Module):
def __init__(self, embs, hidden_size, n_layers=2):
super(DecoderRNN, self).__init__()
self.emb, emb_size, output_size = create_emb(embs)
self.gru = nn.GRU(emb_size, hidden_size, batch_first=True, num_layers=n_layers)
self.out = nn.Linear(hidden_size, output_size)
def forward(self, inp, hidden):
emb = self.emb(inp).unsqueeze(1)
res, hidden = self.gru(emb, hidden)
res = F.log_softmax(self.out(res[:,0]))
return res, hidden
Explanation: Finally, we arrive at a vector representation of the sequence which captures everything we need to translate it. We feed this vector into more RNN's, which are trying to generate the labels. After this, we make a classification for what each word is in the output sequence.
End of explanation
v=np.array([1,2,3]); v, v.shape
m=np.array([v,v*2,v*3]); m, m.shape
m+v
v1=np.expand_dims(v,-1); v1, v1.shape
m+v1
Explanation: This graph demonstrates the accuracy decay for a neural translation task. With an encoding/decoding technique, larger input sequences result in less accuracy.
<img src="https://smerity.com/media/images/articles/2016/bahdanau_attn.png" width="600">
This can be mitigated using an attentional model.
Adding broadcasting to Pytorch
Using broadcasting makes a lot of numerical programming far simpler. Here's a couple of examples, using numpy:
End of explanation
def unit_prefix(x, n=1):
for i in range(n): x = x.unsqueeze(0)
return x
def align(x, y, start_dim=2):
xd, yd = x.dim(), y.dim()
if xd > yd: y = unit_prefix(y, xd - yd)
elif yd > xd: x = unit_prefix(x, yd - xd)
xs, ys = list(x.size()), list(y.size())
nd = len(ys)
for i in range(start_dim, nd):
td = nd-i-1
if ys[td]==1: ys[td] = xs[td]
elif xs[td]==1: xs[td] = ys[td]
return x.expand(*xs), y.expand(*ys)
# def aligned_op(x,y,f): return f(*align(x,y,0))
# def add(x, y): return aligned_op(x, y, operator.add)
# def sub(x, y): return aligned_op(x, y, operator.sub)
# def mul(x, y): return aligned_op(x, y, operator.mul)
# def div(x, y): return aligned_op(x, y, operator.truediv)
# - Redefining the functions so that built-in Pytorch broadcasting will be used
def add(x, y): return x + y
def sub(x, y): return x - y
def mul(x, y): return x * y
def div(x, y): return x / y
def dot(x, y):
assert(1<y.dim()<5)
x, y = align(x, y)
if y.dim() == 2: return x.mm(y)
elif y.dim() == 3: return x.bmm(y)
else:
xs,ys = x.size(), y.size()
res = torch.zeros(*(xs[:-1] + (ys[-1],)))
for i in range(xs[0]): res[i].baddbmm_(x[i], (y[i]))
return res
Explanation: But Pytorch (at the time this notebook was written) didn't support broadcasting out of the box. So let's add it to the basic operators, and to a general tensor dot product:
End of explanation
def Arr(*sz): return torch.randn(sz)/math.sqrt(sz[0])
m = Arr(3, 2); m2 = Arr(4, 3)
v = Arr(2)
b = Arr(4,3,2); t = Arr(5,4,3,2)
mt,bt,tt = m.transpose(0,1), b.transpose(1,2), t.transpose(2,3)
def check_eq(x,y): assert(torch.equal(x,y))
check_eq(dot(m,mt),m.mm(mt))
check_eq(dot(v,mt), v.unsqueeze(0).mm(mt))
check_eq(dot(b,bt),b.bmm(bt))
check_eq(dot(b,mt),b.bmm(unit_prefix(mt).expand_as(bt)))
exp = t.view(-1,3,2).bmm(tt.contiguous().view(-1,2,3)).view(5,4,3,3)
check_eq(dot(t,tt),exp)
check_eq(add(m,v),m+unit_prefix(v).expand_as(m))
check_eq(add(v,m),m+unit_prefix(v).expand_as(m))
check_eq(add(m,t),t+unit_prefix(m,2).expand_as(t))
check_eq(sub(m,v),m-unit_prefix(v).expand_as(m))
check_eq(mul(m,v),m*unit_prefix(v).expand_as(m))
check_eq(div(m,v),m/unit_prefix(v).expand_as(m))
Explanation: Let's test!
End of explanation
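As an added note: recent PyTorch versions broadcast natively (which is why the arithmetic helpers above were reduced to plain operators), so the following also works directly.
a_bc = torch.randn(3, 2)
b_bc = torch.randn(2)
print((a_bc + b_bc).size())  # b_bc is broadcast across the rows of a_bc
print((a_bc * b_bc).size())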
def Var(*sz): return nn.Parameter(Arr(*sz)).cuda()
class AttnDecoderRNN(nn.Module):
def __init__(self, embs, hidden_size, n_layers=2, p=0.1):
super(AttnDecoderRNN, self).__init__()
self.emb, emb_size, output_size = create_emb(embs)
self.W1 = Var(hidden_size, hidden_size)
self.W2 = Var(hidden_size, hidden_size)
self.W3 = Var(emb_size+hidden_size, hidden_size)
self.b2 = Var(hidden_size)
self.b3 = Var(hidden_size)
self.V = Var(hidden_size)
self.gru = nn.GRU(hidden_size, hidden_size, num_layers=2)
self.out = nn.Linear(hidden_size, output_size)
def forward(self, inp, hidden, enc_outputs):
emb_inp = self.emb(inp)
w1e = dot(enc_outputs, self.W1)
w2h = add(dot(hidden[-1], self.W2), self.b2).unsqueeze(1)
u = F.tanh(add(w1e, w2h))
a = mul(self.V,u).sum(2).squeeze(1) # - replaced .squeeze(2) that generates a dimension error
a = F.softmax(a).unsqueeze(2)
Xa = mul(a, enc_outputs).sum(1)
res = dot(torch.cat([emb_inp, Xa.squeeze(1)], 1), self.W3)
res = add(res, self.b3).unsqueeze(0)
res, hidden = self.gru(res, hidden)
res = F.log_softmax(self.out(res.squeeze(0)))
return res, hidden
Explanation: Attentional model
End of explanation
def get_batch(x, y, batch_size=16):
idxs = np.random.permutation(len(x))[:batch_size]
return x[idxs], y[idxs]
hidden_size = 128
fra, eng = get_batch(fr_train, en_train, 4)
inp = long_t(fra)
targ = long_t(eng)
emb, emb_size, output_size = create_emb(en_emb_t)
emb.cuda()
inp.size()
W1 = Var(hidden_size, hidden_size)
W2 = Var(hidden_size, hidden_size)
W3 = Var(emb_size+hidden_size, hidden_size)
b2 = Var(1,hidden_size)
b3 = Var(1,hidden_size)
V = Var(1,1,hidden_size)
gru = nn.GRU(hidden_size, hidden_size, num_layers=2).cuda()
out = nn.Linear(hidden_size, output_size).cuda()
# - Added the encoder creation in this cell
encoder = EncoderRNN(fr_emb_t, hidden_size).cuda()
dec_inputs, enc_outputs, hidden = encode(inp, encoder)
enc_outputs.size(), hidden.size()
emb_inp = emb(dec_inputs); emb_inp.size()
w1e = dot(enc_outputs, W1); w1e.size()
w2h = dot(hidden[-1], W2)
w2h = (w2h+b2.expand_as(w2h)).unsqueeze(1); w2h.size()
u = F.tanh(w1e + w2h.expand_as(w1e))
a = (V.expand_as(u)*u).sum(2).squeeze(1) # - replaced .squeeze(2) that generates a dimension error
a = F.softmax(a).unsqueeze(2); a.size(),a.sum(1).squeeze(1)
Xa = (a.expand_as(enc_outputs) * enc_outputs).sum(1); Xa.size()
res = dot(torch.cat([emb_inp, Xa.squeeze(1)], 1), W3)
res = (res+b3.expand_as(res)).unsqueeze(0); res.size()
res, hidden = gru(res, hidden); res.size(), hidden.size()
res = F.log_softmax(out(res.squeeze(0))); res.size()
Explanation: Attention testing
Pytorch makes it easy to check intermediate results, when creating a custom architecture such as this one, since you can interactively run each function.
End of explanation
def train(inp, targ, encoder, decoder, enc_opt, dec_opt, crit):
decoder_input, encoder_outputs, hidden = encode(inp, encoder)
target_length = targ.size()[1]
enc_opt.zero_grad(); dec_opt.zero_grad()
loss = 0
for di in range(target_length):
decoder_output, hidden = decoder(decoder_input, hidden, encoder_outputs)
decoder_input = targ[:, di]
loss += crit(decoder_output, decoder_input)
loss.backward()
enc_opt.step(); dec_opt.step()
return loss.data[0] / target_length
def req_grad_params(o):
return (p for p in o.parameters() if p.requires_grad)
def trainEpochs(encoder, decoder, n_epochs, print_every=1000, lr=0.01):
loss_total = 0 # Reset every print_every
enc_opt = optim.RMSprop(req_grad_params(encoder), lr=lr)
dec_opt = optim.RMSprop(decoder.parameters(), lr=lr)
crit = nn.NLLLoss().cuda()
for epoch in range(n_epochs):
fra, eng = get_batch(fr_train, en_train, 64)
inp = long_t(fra)
targ = long_t(eng)
loss = train(inp, targ, encoder, decoder, enc_opt, dec_opt, crit)
loss_total += loss
if epoch % print_every == print_every-1:
print('%d %d%% %.4f' % (epoch, epoch / n_epochs * 100, loss_total / print_every))
loss_total = 0
Explanation: Train
Pytorch has limited functionality for training models automatically - you will generally have to write your own training loops. However, Pytorch makes it far easier to customize how this training is done, such as using teacher forcing.
End of explanation
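The train function above always feeds the ground-truth token back into the decoder (full teacher forcing). As an added, hypothetical sketch, here is one way to mix teacher forcing with the model's own predictions; the teacher_forcing_ratio parameter and the train_mixed name are made up for illustration and are not part of the original notebook.
import random

def train_mixed(inp, targ, encoder, decoder, enc_opt, dec_opt, crit, teacher_forcing_ratio=0.5):
    decoder_input, encoder_outputs, hidden = encode(inp, encoder)
    target_length = targ.size()[1]
    enc_opt.zero_grad(); dec_opt.zero_grad()
    loss = 0
    for di in range(target_length):
        decoder_output, hidden = decoder(decoder_input, hidden, encoder_outputs)
        loss += crit(decoder_output, targ[:, di])
        if random.random() < teacher_forcing_ratio:
            decoder_input = targ[:, di]  # feed the ground truth (teacher forcing)
        else:
            # feed the model's own most likely token instead
            decoder_input = Variable(decoder_output.data.topk(1)[1].squeeze(1))
    loss.backward()
    enc_opt.step(); dec_opt.step()
    return loss.data[0] / target_length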
hidden_size = 128
encoder = EncoderRNN(fr_emb_t, hidden_size).cuda()
decoder = AttnDecoderRNN(en_emb_t, hidden_size).cuda()
trainEpochs(encoder, decoder, 10000, print_every=500, lr=0.005)
Explanation: Run
End of explanation
def evaluate(inp):
decoder_input, encoder_outputs, hidden = encode(inp, encoder)
target_length = maxlen
decoded_words = []
for di in range(target_length):
decoder_output, hidden = decoder(decoder_input, hidden, encoder_outputs)
topv, topi = decoder_output.data.topk(1)
ni = topi[0][0]
if ni==PAD: break
decoded_words.append(en_vocab[ni])
decoder_input = long_t([ni])
return decoded_words
def sent2ids(sent):
ids = [fr_w2id[t] for t in simple_toks(sent)]
return pad_sequences([ids], maxlen, 'int64', "post", "post")
def fr2en(sent):
ids = long_t(sent2ids(sent))
trans = evaluate(ids)
return ' '.join(trans)
i=8
print(en_qs[i],fr_qs[i])
fr2en(fr_qs[i])
Explanation: Testing
End of explanation |
13,836 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
auto_arima
Pmdarima brings R's auto.arima functionality to Python by wrapping statsmodels ARIMA and SARIMAX models into a singular scikit-learn-esque estimator (pmdarima.arima.ARIMA) and adding several layers of degree and seasonal differencing tests to identify the optimal model parameters.
Pmdarima ARIMA models
Step1: We'll start by defining an array of data from an R time-series, wineind
Step2: Fitting an ARIMA
We will first fit a seasonal ARIMA. Note that you do not need to call auto_arima in order to fit a model—if you know the order and seasonality of your data, you can simply fit an ARIMA with the defined hyper-parameters
Step3: Also note that your data does not have to exhibit seasonality to work with an ARIMA. We could fit an ARIMA against the same data with no seasonal terms whatsoever (but it is unlikely that it will perform better; quite the opposite, likely).
Step4: Finding the optimal model hyper-parameters using auto_arima
Step5: Fitting a random search
If you don't want to use the stepwise search, auto_arima can fit a random search by enabling random=True. If your random search returns too many invalid (nan) models, you might try increasing n_fits or making it an exhaustive search (stepwise=False, random=False).
Step6: Inspecting goodness of fit
We can look at how well the model fits in-sample data
Step7: Predicting future values
After your model is fit, you can forecast future values using the predict function, just like in sci-kit learn
Step8: Updating your model
ARIMAs create forecasts by using the latest observations. Over time, your forecasts will drift, and you'll need to update the model with the observed values. There are several solutions to this problem | Python Code:
import numpy as np
import pmdarima as pm
print('numpy version: %r' % np.__version__)
print('pmdarima version: %r' % pm.__version__)
Explanation: auto_arima
Pmdarima brings R's auto.arima functionality to Python by wrapping statsmodels ARIMA and SARIMAX models into a singular scikit-learn-esque estimator (pmdarima.arima.ARIMA) and adding several layers of degree and seasonal differencing tests to identify the optimal model parameters.
Pmdarima ARIMA models:
Are fully picklable for easy persistence and model deployment
Can handle seasonal terms (unlike statsmodels ARIMAs)
Follow sklearn model fit/predict conventions
End of explanation
from pmdarima.datasets import load_wineind
# this is a dataset from R
wineind = load_wineind().astype(np.float64)
Explanation: We'll start by defining an array of data from an R time-series, wineind:
```r
forecast::wineind
Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec
1980 15136 16733 20016 17708 18019 19227 22893 23739 21133 22591 26786 29740
1981 15028 17977 20008 21354 19498 22125 25817 28779 20960 22254 27392 29945
1982 16933 17892 20533 23569 22417 22084 26580 27454 24081 23451 28991 31386
1983 16896 20045 23471 21747 25621 23859 25500 30998 24475 23145 29701 34365
1984 17556 22077 25702 22214 26886 23191 27831 35406 23195 25110 30009 36242
1985 18450 21845 26488 22394 28057 25451 24872 33424 24052 28449 33533 37351
1986 19969 21701 26249 24493 24603 26485 30723 34569 26689 26157 32064 38870
1987 21337 19419 23166 28286 24570 24001 33151 24878 26804 28967 33311 40226
1988 20504 23060 23562 27562 23940 24584 34303 25517 23494 29095 32903 34379
1989 16991 21109 23740 25552 21752 20294 29009 25500 24166 26960 31222 38641
1990 14672 17543 25453 32683 22449 22316 27595 25451 25421 25288 32568 35110
1991 16052 22146 21198 19543 22084 23816 29961 26773 26635 26972 30207 38687
1992 16974 21697 24179 23757 25013 24019 30345 24488 25156 25650 30923 37240
1993 17466 19463 24352 26805 25236 24735 29356 31234 22724 28496 32857 37198
1994 13652 22784 23565 26323 23779 27549 29660 23356
```
Note that the frequency of the data is 12:
```r
frequency(forecast::wineind)
[1] 12
```
End of explanation
from pmdarima.arima import ARIMA
fit = ARIMA(order=(1, 1, 1), seasonal_order=(0, 1, 1, 12)).fit(y=wineind)
Explanation: Fitting an ARIMA
We will first fit a seasonal ARIMA. Note that you do not need to call auto_arima in order to fit a model—if you know the order and seasonality of your data, you can simply fit an ARIMA with the defined hyper-parameters:
End of explanation
fit = ARIMA(order=(1, 1, 1), seasonal_order=None).fit(y=wineind)
Explanation: Also note that your data does not have to exhibit seasonality to work with an ARIMA. We could fit an ARIMA against the same data with no seasonal terms whatsoever (but it is unlikely that it will perform better; quite the opposite, likely).
End of explanation
# fitting a stepwise model:
stepwise_fit = pm.auto_arima(wineind, start_p=1, start_q=1, max_p=3, max_q=3, m=12,
start_P=0, seasonal=True, d=1, D=1, trace=True,
error_action='ignore', # don't want to know if an order does not work
suppress_warnings=True, # don't want convergence warnings
stepwise=True) # set to stepwise
stepwise_fit.summary()
Explanation: Finding the optimal model hyper-parameters using auto_arima:
If you are unsure (as is common) of the best parameters for your model, let auto_arima figure it out for you. auto_arima is similar to an ARIMA-specific grid search, but (by default) uses a more intelligent stepwise algorithm laid out in a paper by Hyndman and Khandakar (2008). If stepwise is False, the models will be fit in a manner similar to a grid search. Note that it is possible for auto_arima not to find a model that will converge; if this is the case, it will raise a ValueError.
Fitting a stepwise search:
End of explanation
rs_fit = pm.auto_arima(wineind, start_p=1, start_q=1, max_p=3, max_q=3, m=12,
start_P=0, seasonal=True, d=1, D=1, trace=True,
n_jobs=-1, # We can run this in parallel by controlling this option
error_action='ignore', # don't want to know if an order does not work
suppress_warnings=True, # don't want convergence warnings
stepwise=False, random=True, random_state=42, # we can fit a random search (not exhaustive)
n_fits=25)
rs_fit.summary()
Explanation: Fitting a random search
If you don't want to use the stepwise search, auto_arima can fit a random search by enabling random=True. If your random search returns too many invalid (nan) models, you might try increasing n_fits or making it an exhaustive search (stepwise=False, random=False).
End of explanation
from bokeh.plotting import figure, show, output_notebook
import pandas as pd
# init bokeh
output_notebook()
def plot_arima(truth, forecasts, title="ARIMA", xaxis_label='Time',
yaxis_label='Value', c1='#A6CEE3', c2='#B2DF8A',
forecast_start=None, **kwargs):
# make truth and forecasts into pandas series
n_truth = truth.shape[0]
n_forecasts = forecasts.shape[0]
# always plot truth the same
truth = pd.Series(truth, index=np.arange(truth.shape[0]))
# if no defined forecast start, start at the end
if forecast_start is None:
idx = np.arange(n_truth, n_truth + n_forecasts)
else:
idx = np.arange(forecast_start, n_forecasts)
forecasts = pd.Series(forecasts, index=idx)
# set up the plot
p = figure(title=title, plot_height=400, **kwargs)
p.grid.grid_line_alpha=0.3
p.xaxis.axis_label = xaxis_label
p.yaxis.axis_label = yaxis_label
# add the lines
p.line(truth.index, truth.values, color=c1, legend='Observed')
p.line(forecasts.index, forecasts.values, color=c2, legend='Forecasted')
return p
in_sample_preds = stepwise_fit.predict_in_sample()
in_sample_preds[:10]
show(plot_arima(wineind, in_sample_preds,
title="Original Series & In-sample Predictions",
c2='#FF0000', forecast_start=0))
Explanation: Inspecting goodness of fit
We can look at how well the model fits in-sample data:
End of explanation
next_25 = stepwise_fit.predict(n_periods=25)
next_25
# call the plotting func
show(plot_arima(wineind, next_25))
Explanation: Predicting future values
After your model is fit, you can forecast future values using the predict function, just like in sci-kit learn:
End of explanation
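If you also want uncertainty estimates (an added sketch; this assumes the installed pmdarima version supports the return_conf_int flag), predict can return confidence intervals alongside the point forecasts.
preds, conf_int = stepwise_fit.predict(n_periods=25, return_conf_int=True, alpha=0.05)
print(preds[:5])
print(conf_int[:5])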
stepwise_fit.update(next_25, maxiter=10) # take 10 more steps
stepwise_fit.summary()
updated_data = np.concatenate([wineind, next_25])
# visualize new forecasts
show(plot_arima(updated_data, stepwise_fit.predict(n_periods=10)))
Explanation: Updating your model
ARIMAs create forecasts by using the latest observations. Over time, your forecasts will drift, and you'll need to update the model with the observed values. There are several solutions to this problem:
Fit a new ARIMA with the new data added to your training sample
You can either re-use the order discovered in the auto_arima function, or re-run auto_arima altogether.
Use the update method (preferred). This will allow your model to update its parameters by taking several more MLE steps on new observations (controlled by the maxiter arg) starting from the parameters it's already discovered. This approach will help you avoid over-fitting.
For this example, let's update our existing model with the next_25 we just computed, as if they were actually observed values.
End of explanation |
13,837 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<center>
<img src="http
Step1: Warning
Step2: <div id='example2' />
Example 2
Back to toc
Considere el siguiente BVP
Step3: <div id='example3s' />
Example 3
Back to toc
Considere que tiene la siguiente colección de datos
Step4: <div id='example1s' />
Solution Example 1
Back to toc
Considere el siguiente BVP
Step5: The discrete equation at $x_i$ is the following
Step6: SM
Considere el siguiente BVP
Step7: Caso $a=0$
FD
Considere el siguiente BVP
Step8: SM
In this case the BVP becomes
Step9: Notice that in this particular case the solution for the case when $a\neq 0$ and $a=0$ are close to each other, but they are not the same.
In particular, the value at $x=0$ is different.
<div id='example2s' />
Solution Example 2
Back to toc
Considere el siguiente BVP
Step10: SM
Considere el siguiente BVP | Python Code:
import numpy as np
import scipy as sp
# To solve an IVP; notice this is different than odeint!
from scipy.integrate import solve_ivp
# To integrate use one of the followings:
from scipy.integrate import quad, quadrature, trapezoid, simpson
# For least-square problems
from scipy.sparse.linalg import lsqr
from scipy.linalg import qr
# For interpolation
from scipy.interpolate import BarycentricInterpolator
# The wonderful GMRes
from scipy.sparse.linalg import gmres
# The wonderful**2 Newton method coupled to GMRes by a matrix-free call!
from scipy.optimize import newton_krylov
from scipy.optimize import root
from scipy.linalg import toeplitz
import matplotlib.pyplot as plt
from ipywidgets import interact
from colorama import Fore, Back, Style
# https://pypi.org/project/colorama/
# Fore: BLACK, RED, GREEN, YELLOW, BLUE, MAGENTA, CYAN, WHITE, RESET.
# Back: BLACK, RED, GREEN, YELLOW, BLUE, MAGENTA, CYAN, WHITE, RESET.
# Style: DIM, NORMAL, BRIGHT, RESET_ALL
textBold = lambda x: Style.BRIGHT+x+Style.RESET_ALL
textBoldH = lambda x: Style.BRIGHT+Back.YELLOW+x+Style.RESET_ALL
Explanation: <center>
<img src="http://sct.inf.utfsm.cl/wp-content/uploads/2020/04/logo_di.png" style="width:60%">
<h1> INF285 - Computación Científica </h1>
<h2> BVP linear and nonlinear with Finite Difference and the Shooting Method</h2>
<h2> <a href="#acknowledgements"> [S]cientific [C]omputing [T]eam </a> </h2>
<h2> Version: 1.01</h2>
</center>
<div id='toc' />
Table of Contents
Example 1
Example 2
Example 3 and its solution
Solution Example 1
Solution Example 2
Acknowledgements
You must not use additional libraries.
End of explanation
'''
input:
a : (double) coefficient 'a'
b : (callable) function b(x)>0
c : (callable) function c(x)
f : (callable) function f(x)
y0 : (double) y_0
y1 : (double) y_1
N : (integer) number of points in the spatial discretization of x
output:
xi : (ndarray) equally spaced discretization of x with N points
yi : (ndarray) numerical approximation of y(x) at the points xi
'''
def find_y_FD(a, b, c, f, y0, y1, N):
# Your own code.
return xi, yi
'''
input:
a : (double) coefficient 'a'
b : (callable) function b(x)>0
c : (callable) function c(x)
f : (callable) function f(x)
y0 : (double) y_0
y1 : (double) y_1
N : (integer) number of points in the spatial discretization of x
output:
xi : (ndarray) equally spaced discretization of x with N points
yi : (ndarray) numerical approximation of y(x) at the points xi
'''
def find_y_SM(a, b, c, f, y0, y1, N):
# Your own code.
return xi, yi
Explanation: Warning:
The following numerical solutions give the 'core' to produce the required answers for the questions presented, you should still work on how to put the components together to generate the particular answers requested.
<div id='example1' />
Example 1
Back to toc
Consider the following BVP:
\begin{align}
a\,y''(x)+b(x)\,y'(x)+c(x)\,y(x) & = f(x), \quad \text{for $x\in]0,1[$}\
a\,(y(0) - y_0)& = 0,\
y(1) & = y_1,
\end{align}
where $a\in\mathbb{R}$, $b(x)>0$, and $x\in[0,1]$
1. Build a solver for the problem above, covering all possible cases, using finite differences.
Build a solver for the problem above, covering all possible cases, using the shooting method.
Hint: Do your solvers really consider all the cases? Even when $a=0$?
With each solver compute $\int_0^1 y(x)\,dx$, $\int_0^1 y'(x)\,dx$, and $\int_0^1 y''(x)\,dx$ using the following algorithms:
Trapezoidal rule
Midpoint rule
Simpson's rule
Gaussian quadrature
Algebraically
Hint2: Take a look at the additional parameters required in the definitions of the functions below.
End of explanation
'''
input:
yi : (ndarray) vector at which the Jacobian matrix is evaluated
v : (ndarray) vector that will be multiplied by the Jacobian matrix
output:
Jv : (ndarray) Jacobian matrix evaluated at yi and multiplied by v, that is, np.dot(J,v).
This is very useful for coupling it with GMRes.
'''
def build_jacobian_matrix_FD(yj):
# Your own code.
return Jv
'''
input:
y0 : (callable) initial guess y0(x)
N : (integer) number of points in the spatial discretization of x
output:
xi : (ndarray) equally spaced discretization of x with N points
yi : (ndarray) numerical approximation of y(x) at the points xi
'''
def solve_nonlinear_ode_FD(y0, N):
# Your own code.
return xi, yi
'''
input:
y0 : (callable) initial guess y0(x)
N : (integer) number of points in the spatial discretization of x
output:
xi : (ndarray) equally spaced discretization of x with N points
yi : (ndarray) numerical approximation of y(x) at the points xi
'''
def solve_nonlinear_ode_SM(y0, N):
# Your own code.
return xi, yi
Explanation: <div id='example2' />
Example 2
Back to toc
Consider the following BVP:
\begin{align}
y''(x)+3\,\exp(y(x)) & = 0, \quad \text{for $x \in]0,1[$}\
y(0) &= 0,\
y(1) &= 0,\
\end{align}
where $x\in[0,1]$. Note that $y(x)=0$ is not a solution.
1. Build an algorithm based on finite differences that obtains the numerical approximation $y(x)$ starting from an initial guess $y_0(x)$. Use Newton's method coupled with GMRes to solve the associated system of nonlinear equations, or use scipy's newton_krylov module; in the latter case it is not necessary to implement build_jacobian_matrix_FD.
2. Build an algorithm based on the shooting method that obtains the numerical approximation $y(x)$ starting from an initial guess $y_0(x)$.
3. Solve the BVP with the finite-difference algorithm using $y_0(x)=0$ and N=20.
4. Solve the BVP with the shooting-method algorithm using $y_0(x)=0$ and N=20.
5. Do you obtain approximately the same numerical approximation in questions 3 and 4?
6. Consider the following family of initial guesses $y_0^{[m]}(x)=m\,(x-x^2)$, for $m\in{-10,-9,\dots,10}$ and N=20.
1. Solve the BVP with the finite-difference algorithm for each $y_0^{[m]}(x)$.
2. Solve the BVP with the shooting-method algorithm for each $y_0^{[m]}(x)$.
3. Do you obtain the same solutions in each case?
4. How many distinct solutions do you obtain?
Hint: Take a look at the additional parameters required in the definitions of the functions below.
End of explanation
'''
input:
N : (integer) number of points in the spatial discretization of the interval [0,100]
Tw : (double) sliding window used in the summation [t-Tw,t]
gamma : (double) coefficient used together with tanh
Ti : (double) initial simulation time
Tf : (double) final simulation time
ti : (ndarray) input data ti
yi : (ndarray) input data yi
output:
t_out : (ndarray) equally spaced discretization of t with N points
y_out : (ndarray) numerical approximation of y(t) at the points ti
'''
def solve_almost_LS_IVP(N,Tw,gamma,Ti,Tf,ti,yi):
# Your own code.
return t_out, y_out
# Consider the following data
np.random.seed(0)
Ndata = 1000
ti = np.linspace(0,100,Ndata)
yi = 0.2*np.cos(2*ti)+np.sin(0.1*ti)+0.1*np.random.rand(Ndata)
'''
input:
N : (integer) number of points in the spatial discretization of the interval [0,100]
Tw : (double) sliding window used in the summation [t-Tw,t]
gamma : (double) coefficient used together with tanh
Ti : (double) initial simulation time
Tf : (double) final simulation time
ti : (ndarray) input data ti
yi : (ndarray) input data yi
output:
t_out : (ndarray) equally spaced discretization of t with N points
y_out : (ndarray) numerical approximation of y(t) at the points ti
'''
def solve_almost_LS_IVP(N,Tw,gamma,Ti,Tf,ti,yi):
def my_f(t,y):
ysample = yi[np.logical_and((t-Tw)<ti,ti<t)]
if len(ysample)>=1:
return gamma*np.tanh((np.sum(ysample)-len(ysample)*y)/gamma)
else:
return 0
sol = solve_ivp(my_f,(Ti,Tf),(yi[0],),t_eval=np.linspace(Ti,Tf,N))
t_out = sol.t
y_out = sol.y[0]
return t_out, y_out
def show_output_LS_IVP(Tw=3,gamma=1):
N = 10000
Ti = 0
Tf = 120
t_out, y_out = solve_almost_LS_IVP(N,Tw,gamma,Ti,Tf,ti,yi)
plt.figure(figsize=(16,8))
plt.plot(ti,yi,'.',label=r'$y_i$')
plt.plot(t_out,y_out,'r-', label=r'$y(t)$')
plt.xlabel('t')
plt.grid(True)
plt.legend(loc='best')
plt.show()
print(textBold("Suggestion: "),textBoldH("Evaluate the approximation using small values of 'Tw' and 'gamma'."))
interact(show_output_LS_IVP,Tw=(0.1,100,0.1), gamma=(0.01,10,0.01))
Explanation: <div id='example3s' />
Example 3
Back to toc
Consider that you have the following collection of data:
\begin{align}
{(t_1, y_1),(t_2,y_2),\dots,(t_n,y_n)},
\end{align}
where we know that $0 \leq t_i \leq 100$ for $i\in{1,2,\dots,n}$.
A traditional least-squares approach would require proposing a function $y(t)$, for instance a linear one $a+b\,t$, and minimizing the squared error $E=\sum_{i=1}^n \left(y(t_i)-y_i\right)^2$.
This would give us the coefficients of the structure proposed for $y(t)$.
The main drawback of this procedure is that we have to know the algebraic structure of $y(t)$ a priori.
An alternative is to build a numerical approximation of $y(t)$, for instance by replacing the minimization with an initial value problem, for which the following IVP is proposed,
\begin{align}
\dot{y}(t) &= \gamma\,\tanh\left(\dfrac{\displaystyle\sum_{t_i\in[t-T,t]} (y_i-y(t))}{\gamma}\right),\
y(0) &= y_1,
\end{align}
where $\gamma=1$.
1. Implement the solver.
2. Does it smooth the data?
3. How does the approximation depend on $T$?
4. How does the approximation depend on $\gamma$?
Note: If the set $t_i\in[t-T,t]$ is empty, the right-hand side of $\dot{y}(t)$ is taken to be $0$.
End of explanation
# This function builds the h-less differentiation matrices for
# the approximation of the first and second derivatives.
# h-less means that it still needs to add the corresponding
# h coefficient in the approximation.
def build_D_D2(M):
# First derivative - Central difference differentiation matrix
D = toeplitz(np.append(np.array([0, -1.]), np.zeros(M-2)),
np.append(np.array([0, 1.]), np.zeros(M-2)))
# Second derivative - differentiation matrix
D2 = toeplitz(np.append(np.array([-2, 1.]), np.zeros(M-2)))
return D, D2
D , D2 = build_D_D2(5)
print('D: \n', D)
print('D2: \n', D2)
Explanation: <div id='example1s' />
Solution Example 1
Back to toc
Consider the following BVP:
\begin{align}
a\,y''(x)+b(x)\,y'(x)+c(x)\,y(x) & = f(x), \quad \text{for $x\in]0,1[$}\
a\,(y(0) - y_0)& = 0,\
y(1) & = y_1,
\end{align}
where $a\in\mathbb{R}$, $b(x)>0$, and $x\in[0,1]$
Case $a\neq0$
FD:
Consider the following BVP:
\begin{align}
a\,y''(x)+b(x)\,y'(x)+c(x)\,y(x) & = f(x), \quad \text{for $x\in]0,1[$}\
a\,(y(0) - y_0)& = 0,\
y(1) & = y_1,
\end{align}
where $a\in\mathbb{R}$, $b(x)>0$, and $x\in[0,1]$
Answer: Consider that $x_i = \frac{i}{N-1}$ for $i\,{0,1,\dots,N-1}$, and $y(x_i) \approx w_i$, where we know that $w_{0}=y_0$ and $w_{N-1}=y_1$.
For simplicity we will consider $\mathbf{w}=[w_1,w_2,\dots,w_{N-2}]$.
The finite diference discretizations that we will use are the followings:
\begin{align}
y''(x_i) &\approx \dfrac{w_{i+1}-2\,w_i+w_{i-1}}{h^2},\
y'(x_i) &\approx \dfrac{w_{i+1}-w_{i-1}}{2\,h}.
\end{align}
Thus, the discrete version of the ode at $x_i$ will be the following:
\begin{align}
a\,y''(x_i) &\approx a\,\dfrac{w_{i+1}-2\,w_i+w_{i-1}}{h^2},\
b(x_i)\,y'(x_i) &\approx b(x_i)\,\dfrac{w_{i+1}-w_{i-1}}{2\,h},\
c(x_i)\,y(x_i) &\approx c(x_i)\,w_i,\
f(x_i) &\approx f(x_i).
\end{align}
By using the unknowns vector $\mathbf{w}$ and the know vector $\mathbf{x}=[x_1,x_2,\dots,x_{N-2}]$ we can define the following matrices:
\begin{align}
D_2 &=
\begin{bmatrix}
-2 & 1 & 0 & 0 & 0 & 0 & 0 \
1 & -2 & 1 & 0 & 0 & 0 & 0 \
0 & 1 & -2 & 1 & 0 & 0 & 0 \
\vdots & \ddots & \ddots & \ddots & \ddots & \ddots & \ddots \
0 & 0 & 0 & 0 & 1 & -2 & 1 \
0 & 0 & 0 & 0 & 0 & 1 & -2 \
\end{bmatrix},\
D &=
\begin{bmatrix}
0 & 1 & 0 & 0 & 0 & 0 & 0 \
-1 & 0 & 1 & 0 & 0 & 0 & 0 \
0 & -1 & 0 & 1 & 0 & 0 & 0 \
\vdots & \ddots & \ddots & \ddots & \ddots & \ddots & \ddots \
0 & 0 & 0 & 0 & -1 & 0 & 1 \
0 & 0 & 0 & 0 & 0 & -1 & 0 \
\end{bmatrix}.\
\end{align}
End of explanation
# Data definition
N=100
a = 1
b = lambda x: 10+x
c = lambda x: -10+x
f = lambda x: 1+20*np.sin(10*x)
x = np.linspace(0,1,N)
h = 1/(N-1)
y0 = 0
y1 = 1
def build_A_and_b(a,b,c,f,h,x,y0,y1,N):
D, D2 = build_D_D2(N-2)
x_interior = x[1:-1]
A_N=(a/(h**2))*D2+(1/(2*h))*np.dot(np.diag(b(x_interior)),D)+np.diag(c(x_interior))
b_N = f(x_interior)
b_N[0] = b_N[0]-(a/(h**2))*y0+(b(x_interior[0])/(2*h))*y0
b_N[-1] = b_N[-1]-(a/(h**2))*y1-(b(x_interior[-1])/(2*h))*y1
return A_N, b_N
A_N, b_N = build_A_and_b(a,b,c,f,h,x,y0,y1,N)
w = np.linalg.solve(A_N,b_N)
w = np.append(y0,w)
w = np.append(w,y1)
plt.figure(figsize=(16,8))
plt.plot(x,w,'.',label=r'$w_i$')
plt.xlabel(r'$x_i$')
plt.grid(True)
plt.legend(loc='best')
plt.show()
Explanation: The discrete equation at $x_i$ is the following:
\begin{equation}
\dfrac{a}{h^2} \left(w_{i+1}-2\,w_i+w_{i-1}\right)
+
\dfrac{b(x_i)}{2\,h} \left(w_{i+1}-w_{i-1}\right)
+
c(x_i)\,w_i
=
f(x_i).
\end{equation}
There are two special cases, for $x_1$ and $x_{N-2}$, they generate the following equations:
\begin{align}
\dfrac{a}{h^2} \left(w_{2}-2\,w_1+w_0\right)
+
\dfrac{b(x_1)}{2\,h} \left(w_{2}-w_{0}\right)
+
c(x_1)\,w_1
&=
f(x_1),\
\dfrac{a}{h^2} \left(w_{N-1}-2\,w_{N-2}+w_{N-3}\right)
+
\dfrac{b(x_{N-2})}{2\,h} \left(w_{N-1}-w_{N-3}\right)
+
c(x_{N-2})\,w_{N-2}
&=
f(x_{N-2}).
\end{align}
But, since we know $w_0$ and $w_{N-1}$, they become,
\begin{align}
\dfrac{a}{h^2} \left(w_{2}-2\,w_1\right)
+
\dfrac{b(x_1)}{2\,h} w_{2}
+
c(x_1)\,w_1
&=
f(x_1)-\dfrac{a}{h^2} w_0+\dfrac{b(x_1)}{2\,h}\,w_0,\
\dfrac{a}{h^2} \left(-2\,w_{N-2}+w_{N-3}\right)
+
\dfrac{b(x_{N-2})}{2\,h} \left(-w_{N-3}\right)
+
c(x_{N-2})\,w_{N-2}
&=
f(x_{N-2})-\dfrac{a}{h^2}\,w_{N-1}-\dfrac{b(x_{N-2})}{2\,h}\,w_{N-1}.
\end{align}
This analysis allows us to write the discrete equation in the following way for the unknown vector $\mathbf{w}=[w_1,w_2,\dots,w_{N-2}]$ and $\mathbf{x}=[x_1,x_2,\dots,x_{N-2}]$:
\begin{equation}
\dfrac{a}{h^2}\,D_2\,\mathbf{w}
+
\dfrac{1}{2\,h}\,\text{diag}(b(\mathbf{x}))\,D\,\mathbf{w}
+
\text{diag}(c(\mathbf{x}))\,\mathbf{w}
=
\begin{bmatrix}
f(x_1)-\dfrac{a}{h^2} w_0+\dfrac{b(x_1)}{2\,h}\,w_0\
f(x_2)\
\vdots\
f(x_{N-3})\
f(x_{N-2})-\dfrac{a}{h^2}\,w_{N-1}-\dfrac{b(x_{N-2})}{2\,h}\,w_{N-1}
\end{bmatrix},
\end{equation}
but since $w_0=y_0$ and $w_{N-1}=y_1$ we get,
\begin{align}
\dfrac{a}{h^2}\,D_2\,\mathbf{w}
+
\dfrac{1}{2\,h}\,\text{diag}(b(\mathbf{x}))\,D\,\mathbf{w}
+
\text{diag}(c(\mathbf{x}))\,\mathbf{w}
&=
\begin{bmatrix}
f(x_1)-\dfrac{a}{h^2} y_0+\dfrac{b(x_1)}{2\,h}\,y_0\
f(x_2)\
\vdots\
f(x_{N-3})\
f(x_{N-2})-\dfrac{a}{h^2}\,y_1-\dfrac{b(x_{N-2})}{2\,h}\,y_1
\end{bmatrix}\
&= \mathbf{b}_N.
\end{align}
Factoring out the unknown vector $\mathbf{w}$ we obtain,
\begin{align}
\underbrace{\left(\dfrac{a}{h^2}\,D_2
+
\dfrac{1}{2\,h}\,\text{diag}(b(\mathbf{x}))\,D
+
\text{diag}(c(\mathbf{x}))\right)}_{\displaystyle{A_N}}\,\mathbf{w}
&=\mathbf{b}_N.
\end{align}
Thus, we only need to now solve the linear system of equations $A_N\,\mathbf{w}=\mathbf{b}_N$ and we are done!
Notice that the sub-index in $A_N$ is just to indicate we have a discretization with $N$ points.
Notice that we moved from the discrete equations
$\dfrac{a}{h^2} \left(w_{i+1}-2\,w_i+w_{i-1}\right)+
\dfrac{b(x_i)}{2\,h} \left(w_{i+1}-w_{i-1}\right)
+
c(x_i)\,w_i
=
f(x_i)$ to the matrix equations $\dfrac{a}{h^2}\,D_2\,\mathbf{w}
+
\dfrac{1}{2\,h}\,\text{diag}(b(\mathbf{x}))\,D\,\mathbf{w}
+
\text{diag}(c(\mathbf{x}))\,\mathbf{w}
=\mathbf{b}_N$; however, it is recommended to perform this step manually, at least for a small problem, so it can be understood better. Thus we encourage you to do this, for instance for $N=7$; a sketch of this check follows below.
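As an illustration only (not part of the original derivation), here is one way such a manual check could be organized, reusing the `build_D_D2` helper from this notebook with simple made-up coefficients:

```python
import numpy as np

# Assemble A_N by hand for N = 7, i.e. M = N-2 = 5 interior unknowns, with the
# illustrative choices a = 1, b(x) = 1, c(x) = 0 (not the coefficients used later).
N7 = 7
h7 = 1 / (N7 - 1)
x7 = np.linspace(0, 1, N7)[1:-1]          # interior nodes x_1, ..., x_5
D7, D2_7 = build_D_D2(N7 - 2)
a7 = 1.0
b7 = lambda x: np.ones_like(x)
c7 = lambda x: np.zeros_like(x)
A7 = (a7 / h7**2) * D2_7 + (1 / (2 * h7)) * np.diag(b7(x7)) @ D7 + np.diag(c7(x7))
print(A7)
```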
End of explanation
# RHS of dynamical system
def my_f1(t,w,a,b,c,f):
w1 = w[0]
w2 = w[1]
w1dot = w2
w2dot = (f(t)-b(t)*w2-c(t)*w1)/a
return np.array([w1dot,w2dot])
# Function to be used to apply the Shooting Method
def F_SM_1(alpha,a,b,c,f,y0,y1,N):
t = np.linspace(0,1,N)
initial_condition = np.zeros(2)
initial_condition[0] = y0
initial_condition[1] = alpha
sol = solve_ivp(my_f1,(0,1),initial_condition,t_eval=t,args=(a,b,c,f))
return sol.y[0,-1]-y1
F_root_1 = lambda alpha: F_SM_1(alpha,a,b,c,f,y0,y1,N)
alpha_r = root(F_root_1, 0.).x[0]
sol = solve_ivp(my_f1,(0,1),np.array([y0,alpha_r]),t_eval=np.linspace(0,1,N),args=(a,b,c,f))
plt.figure(figsize=(16,8))
plt.plot(sol.t,sol.y[0,:],'rd',label='SM',alpha=0.5)
plt.plot(x,w,'.',label=r'$w_i$')
plt.legend(loc='best')
plt.xlabel(r'$x_i$')
plt.grid(True)
plt.show()
Explanation: SM
Consider the following BVP:
\begin{align}
a\,y''(x)+b(x)\,y'(x)+c(x)\,y(x) & = f(x), \quad \text{for $x\in]0,1[$}\
a\,(y(0) - y_0)& = 0,\
y(1) & = y_1,
\end{align}
where $a\in\mathbb{R}$, $b(x)>0$, and $x\in[0,1]$
We first need to rewrite the BVP as a dynamical system and consider $x\rightarrow t$:
\begin{align}
w_1(t) &= y(t),\
w_2(t) &= y'(t),\
\end{align}
so,
\begin{align}
\dot{w}_1 &= y'(t) = w_2,\
\dot{w}_2(t) &= y''(t)\
&= \dfrac{1}{a}\left(f(t) - b(t)\,w_2-c(t)\,w_1\right),\
w_1(0) &= y_0,\
w_2(0) &= \alpha.
\end{align}
End of explanation
def build_DF(M):
# First derivative - Forward difference differentiation matrix
DF = toeplitz(np.append(np.array([-1]), np.zeros(M-1)), np.append(np.array([0,1]), np.zeros(M-2)))
return DF
def build_A_hat_and_b_hat(b,c,f,h,x,y1,N):
DF = build_DF(N-1)
x_interior = x[:-1]
A_hat_N=(1/(h))*np.dot(np.diag(b(x_interior)),DF)+np.diag(c(x_interior))
b_hat_N = f(x_interior)
b_hat_N[-1] = b_hat_N[-1]-(b(x_interior[-1])/(h))*y1
return A_hat_N, b_hat_N
A_hat_N, b_hat_N = build_A_hat_and_b_hat(b,c,f,h,x,y1,N)
w = np.linalg.solve(A_hat_N, b_hat_N)
w = np.append(w,y1)
plt.figure(figsize=(16,8))
plt.plot(x,w,'.',label=r'$w_i$')
plt.xlabel(r'$x_i$')
plt.grid(True)
plt.legend(loc='best')
plt.ylim([0,1.1])
plt.show()
Explanation: Case $a=0$
FD
Consider the following BVP:
\begin{align}
a\,y''(x)+b(x)\,y'(x)+c(x)\,y(x) & = f(x), \quad \text{for $x\in]0,1[$}\
a\,(y(0) - y_0)& = 0,\
y(1) & = y_1,
\end{align}
where $a\in\mathbb{R}$, $b(x)>0$, and $x\in[0,1]$
Answer:
In this case the BVP becomes:
\begin{align}
b(x)\,y'(x)+c(x)\,y(x) & = f(x), \quad \text{for $x\in]0,1[$}\
y(1) & = y_1.
\end{align}
So, if we use forward difference we obtain:
\begin{equation}
\dfrac{b(x_i)}{h} \left(w_{i+1}-w_{i}\right)
+
c(x_i)\,w_i
=
f(x_i).
\end{equation}
There is only one special case now, for $x_{N-2}$; it generates the following equation:
\begin{equation}
\dfrac{b(x_{N-2})}{h} \left(w_{N-1}-w_{N-2}\right)
+
c(x_{N-2})\,w_{N-2}
=
f(x_{N-2}).
\end{equation}
But, since we know $w_{N-1}$, we obtain,
\begin{equation}
\dfrac{b(x_{N-2})}{h} \left(-w_{N-2}\right)
+
c(x_{N-2})\,w_{N-2}
=
f(x_{N-2})-\dfrac{b(x_{N-2})}{h}\,w_{N-1}.
\end{equation}
This analysis allows us to write the discrete equation in the following way for the unknown vector $\mathbf{w}=[w_1,w_2,\dots,w_{N-2}]$ and $\mathbf{x}=[x_1,x_2,\dots,x_{N-2}]$:
\begin{equation}
\dfrac{1}{h}\,\text{diag}(b(\mathbf{x}))\,D^{\text{F}}\,\mathbf{w}
+
\text{diag}(c(\mathbf{x}))\,\mathbf{w}
=
\begin{bmatrix}
f(x_1)\
f(x_2)\
\vdots\
f(x_{N-3})\
f(x_{N-2})-\dfrac{b(x_{N-2})}{h}\,w_{N-1}.
\end{bmatrix},
\end{equation}
but since $w_{N-1}=y_1$ we get,
\begin{equation}
\dfrac{1}{h}\,\text{diag}(b(\mathbf{x}))\,D^{\text{F}}\,\mathbf{w}
+
\text{diag}(c(\mathbf{x}))\,\mathbf{w}
=
\begin{bmatrix}
f(x_1)\
f(x_2)\
\vdots\
f(x_{N-3})\
f(x_{N-2})-\dfrac{b(x_{N-2})}{h}\,y_1.
\end{bmatrix},
\end{equation}
where
\begin{equation}
D^{\text{F}}
=
\begin{bmatrix}
-1 & 1 & 0 & 0 & 0 & 0 & 0 \
0 & -1 & 1 & 0 & 0 & 0 & 0 \
0 & 0 & -1 & 1 & 0 & 0 & 0 \
\vdots & \ddots & \ddots & \ddots & \ddots & \ddots & \ddots \
0 & 0 & 0 & 0 & 0 & -1 & 1 \
0 & 0 & 0 & 0 & 0 & 0 & -1 \
\end{bmatrix}.\
\end{equation}
Factoring out the unknown vector $\mathbf{w}$ we obtain,
\begin{align}
\underbrace{
\left(
\dfrac{1}{h}\,
\text{diag}(b(\mathbf{x}))\,D^{\text{F}}
+
\text{diag}(c(\mathbf{x}))\right)
}_{\displaystyle{\widehat{A}_N}}\,\mathbf{w}
&=\widehat{\mathbf{b}}_N.
\end{align}
Thus, we only need to now solve the linear system of equations $\widehat{A}_N\,\mathbf{w}=\widehat{\mathbf{b}}_N$ and we are done!
End of explanation
# RHS of IVP
def my_f2(t,w,b,c,f):
return (f(t)-c(t)*w)/b(t)
# Function to be used to apply the Shooting Method
def F_SM_2(alpha,b,c,f,y1,N):
sol = solve_ivp(my_f2,(0,1),alpha,t_eval=np.linspace(0,1,N),args=(b,c,f))
return sol.y[0][-1]-y1
F_root_2 = lambda alpha: F_SM_2(alpha,b,c,f,y1,N)
# Notice that the initial guess for the root must be chosen wisely
alpha_r = root(F_root_2, 1).x[0]
sol = solve_ivp(my_f2,(0,1),(alpha_r,),t_eval=np.linspace(0,1,N),args=(b,c,f))
plt.figure(figsize=(16,8))
plt.plot(sol.t,sol.y[0,:],'rd',label='SM',alpha=0.5)
plt.plot(x,w,'.',label=r'$w_i$')
plt.legend(loc='best')
plt.xlabel(r'$x_i$')
plt.grid(True)
plt.show()
Explanation: SM
In this case the BVP becomes:
\begin{align}
b(x)\,y'(x)+c(x)\,y(x) & = f(x), \quad \text{for $x\in]0,1[$}\
y(1) & = y_1.
\end{align}
This ODE can easily be transformed into an IVP as follows,
\begin{align}
\dot{y} &= \dfrac{f(t)-c(t)\,y}{b(t)},\
y(0) &= \alpha.
\end{align}
Notice that we used $\alpha$ since we don't know the initial condition, we only know a final condition, i.e. $y(1)=y_1$.
End of explanation
N = 100
x = np.linspace(0,1,N)
h = 1/(N-1)
_, D2 = build_D_D2(N-2)
def F(w):
return np.dot(D2,w)/(h**2)+3*np.exp(w)
w0 = lambda m: m*(x[1:-1]-np.power(x[1:-1],2))
# First solution
w = newton_krylov(F,w0(0))
w = np.append(0,w)
w = np.append(w,0)
# Second solution, notice that the initial guess is different
w2 = newton_krylov(F,w0(8))
w2 = np.append(0,w2)
w2 = np.append(w2,0)
plt.figure(figsize=(16,8))
plt.plot(x,w,'.',label=r'$w$')
plt.plot(x,w2,'.',label=r'$w2$')
plt.legend(loc='best')
plt.xlabel(r'$x_i$')
plt.grid(True)
plt.show()
Explanation: Notice that in this particular case the solutions for the cases $a\neq 0$ and $a=0$ are close to each other, but they are not the same.
In particular, the value at $x=0$ is different.
<div id='example2s' />
Solution Example 2
Back to toc
Consider the following BVP:
\begin{align}
y''(x)+3\,\exp(y(x)) & = 0, \quad \text{for $x \in]0,1[$}\
y(0) &= 0,\
y(1) &= 0,\
\end{align}
where $x\in[0,1]$. Note that $y(x)=0$ is not a solution.
1. Build an algorithm based on finite differences that obtains the numerical approximation $y(x)$ using $y_0(x)$ as the initial guess. Use Newton's method with GMRes to solve the associated system of nonlinear equations, or SciPy's newton_krylov module; in the latter case it is not necessary to implement build_jacobian_matrix_FD.
2. Build an algorithm based on the shooting method that obtains the numerical approximation $y(x)$ using $y_0(x)$ as the initial guess.
3. Solve the BVP with the finite-difference algorithm using $y_0(x)=0$ and N=20.
4. Solve the BVP with the shooting-method algorithm using $y_0(x)=0$ and N=20.
5. Do you obtain approximately the same numerical approximation in questions 3 and 4?
6. Consider the following family of initial guesses $y_0^{[m]}(x)=m\,(x-x^2)$, for $m\in\{-10,-9,\dots,10\}$ and N=20.
1. Solve the BVP with the finite-difference algorithm for each $y_0^{[m]}(x)$.
2. Solve the BVP with the shooting-method algorithm for each $y_0^{[m]}(x)$.
3. Do you obtain the same solutions in each case?
4. How many distinct solutions are obtained?
FD
In this case we will re-use $D_2$ from the previous analysis. The discrete equation at $x_i$ is the following:
\begin{equation}
\dfrac{1}{h^2} \left(w_{i+1}-2\,w_i+w_{i-1}\right)
+3\,\,\exp(w_i)
=
0.
\end{equation}
There are two special cases, for $x_1$ and $x_{N-2}$, they generate the following equations:
\begin{align}
\dfrac{1}{h^2} \left(w_{2}-2\,w_{1}+w_0\right)
+3\,\,\exp(w_1)
&=
0.\
\dfrac{1}{h^2} \left(w_{N-1}-2\,w_{N-2}+w_{N-3}\right)
+3\,\,\exp(w_{N-2})
&=
0.
\end{align}
But, since we know $w_0$ and $w_{N-1}$, they become,
\begin{align}
\dfrac{1}{h^2} \left(w_{2}-2\,w_{1}\right)
+3\,\,\exp(w_1)
&=
-\dfrac{1}{h^2}\,w_0.\
\dfrac{1}{h^2} \left(-2\,w_{N-2}+w_{N-3}\right)
+3\,\,\exp(w_{N-2})
&=
-\dfrac{1}{h^2}\,w_{N-1}.
\end{align}
This analysis allows us to write the discrete equation in the following way for the unknown vector $\mathbf{w}=[w_1,w_2,\dots,w_{N-2}]$ and $\mathbf{x}=[x_1,x_2,\dots,x_{N-2}]$:
\begin{equation}
\dfrac{1}{h^2}\,D_2\,\mathbf{w}
+
3\,\begin{bmatrix}
\exp(w_1)\
\exp(w_2)\
\vdots\
\exp(w_{N-3})\
\exp(w_{N-2})
\end{bmatrix}
=
\begin{bmatrix}
-\dfrac{1}{h^2}\,w_0\
0\
\vdots\
0\
-\dfrac{1}{h^2}\,w_{N-1}
\end{bmatrix},
\end{equation}
but since $w_0=0$ and $w_{N-1}=0$ we get,
\begin{equation}
\dfrac{1}{h^2}\,D_2\,\mathbf{w}
+
3\,\begin{bmatrix}
\exp(w_1)\
\exp(w_2)\
\vdots\
\exp(w_{N-3})\
\exp(w_{N-2})
\end{bmatrix}
=
\begin{bmatrix}
0\
0\
\vdots\
0\
0
\end{bmatrix},
\end{equation}
In this case we can't factor out the unknown vector $\mathbf{w}$ since it is not a linear problem.
We need to ask Sir Isaac Newton and Professor Aleksey Nikolaevich Krylov for help!
For simplicity we will build $\mathbf{F}(\mathbf{w})$, i.e. the high-dimensional function whose root we need to find.
\begin{equation}
\mathbf{F}(\mathbf{w})=\dfrac{1}{h^2}\,D_2\,\mathbf{w}
+
3\,\begin{bmatrix}
\exp(w_1)\
\exp(w_2)\
\vdots\
\exp(w_{N-3})\
\exp(w_{N-2})
\end{bmatrix}
\end{equation}
To solve this equation we will use newton_krylov!
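For question 6, here is a sketch of how the sweep over the initial guesses $y_0^{[m]}$ could be organized with this finite-difference formulation (it reuses the `F` and `w0` helpers used in this section; some initial guesses may simply fail to converge):

```python
solutions = []
for m in range(-10, 11):
    try:
        w_m = newton_krylov(F, w0(m))
    except Exception:
        continue                      # this initial guess did not converge
    if not any(np.allclose(w_m, s, atol=1e-6) for s in solutions):
        solutions.append(w_m)
print("Number of distinct solutions found:", len(solutions))
```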
End of explanation
# RHS of dynamical system
def my_f_NL(t,w):
w1 = w[0]
w2 = w[1]
w1dot = w2
w2dot = -3*np.exp(w1)
return np.array([w1dot,w2dot])
# Function to be used to apply the Shooting Method
def F_SM_NL(alpha,N):
initial_condition = np.zeros(2)
initial_condition[1] = alpha
sol = solve_ivp(my_f_NL,(0,1),initial_condition,t_eval=np.linspace(0,1,N))
return sol.y[0,-1]
F_root_NL = lambda alpha: F_SM_NL(alpha,N)
# First solution with initial guess for alpha=0
alpha_r = root(F_root_NL, 0.).x[0]
sol = solve_ivp(my_f_NL,(0,1),np.array([0,alpha_r]),t_eval=np.linspace(0,1,N))
# Second solution with initial guess for alpha=8
alpha_r = root(F_root_NL, 8).x[0]
sol2 = solve_ivp(my_f_NL,(0,1),np.array([0,alpha_r]),t_eval=np.linspace(0,1,N))
plt.figure(figsize=(16,8))
plt.plot(sol.t,sol.y[0,:],'md',label='SM1',alpha=0.5)
plt.plot(sol2.t,sol2.y[0,:],'gs',label='SM2',alpha=0.5)
plt.plot(x,w,'.',label=r'$w$')
plt.plot(x,w2,'.',label=r'$w2$')
plt.legend(loc='best')
plt.xlabel(r'$x_i$')
plt.grid(True)
plt.show()
Explanation: SM
Consider the following BVP:
\begin{align}
y''(x)+3\,\exp(y(x)) & = 0, \quad \text{for $x \in]0,1[$}\
y(0) &= 0,\
y(1) &= 0,\
\end{align}
where $x\in[0,1]$. Note that $y(x)=0$ is not a solution.
We first need to rewrite the BVP as a dynamical system and consider $x\rightarrow t$:
\begin{align}
w_1(t) &= y(t),\
w_2(t) &= y'(t),\
\end{align}
so,
\begin{align}
\dot{w}_1 &= y'(t) = w_2,\
\dot{w}_2(t) &= y''(t)\
&= -3\,\exp(y(t)) = -3\,\exp(w_1),\
w_1(0) &= 0,\
w_2(0) &= \alpha.
\end{align}
Notice that in this case the only degree of freedom we have is how we initialize $\alpha$ when we look for the root. In particular, the way we can use $y_0^{[m]}(x)=m\,(x-x^2)$ is by computing its slope at $x=0$; this will help us to define a convenient initial guess for $\alpha$.
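Since $\frac{d}{dx}\left[m\,(x-x^2)\right] = m$ at $x=0$, the slope $m$ is a natural initial guess for $\alpha$. A sketch of the question-6 sweep with the shooting method (reusing `F_root_NL` and `root` from this section):

```python
alphas_found = set()
for m in range(-10, 11):
    alpha_m = root(F_root_NL, float(m)).x[0]
    alphas_found.add(round(alpha_m, 6))
print("Distinct values of alpha (i.e. distinct solutions):", sorted(alphas_found))
```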
End of explanation |
13,838 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
$$ \LaTeX \text{ command declarations here.}
\newcommand{\N}{\mathcal{N}}
\newcommand{\R}{\mathbb{R}}
\renewcommand{\vec}[1]{\mathbf{#1}}
\newcommand{\norm}[1]{\|#1\|_2}
\newcommand{\d}{\mathop{}!\mathrm{d}}
\newcommand{\qed}{\qquad \mathbf{Q.E.D.}}
\newcommand{\vx}{\mathbf{x}}
\newcommand{\vy}{\mathbf{y}}
\newcommand{\vt}{\mathbf{t}}
\newcommand{\vb}{\mathbf{b}}
\newcommand{\vw}{\mathbf{w}}
\newcommand{\vm}{\mathbf{m}}
\newcommand{\I}{\mathbb{I}}
\newcommand{\th}{\text{th}}
$$
EECS 445
Step1: Problem
Step2: Problem
Step3: OSMH Dual Formulation
The previous objective function is referred to as the Primal
With $N$ datapoints in $d$ dimensions, the Primal optimizes over $d + 1$ variables ($\vw, b$).
But the Dual of this optimization problem has $N$ variables, one $\alpha_i$ for each example $i$!
$$
\begin{split}
\underset{\alpha, \beta}{\text{maximize}} \quad & -\frac12 \sum \nolimits_{i,j = 1}^n \alpha_i \alpha_j t_i t_j \vx_i^T \vx_j + \sum \nolimits_{i = 1}^n \alpha_i\
\text{subject to} \quad & 0 \leq \alpha_i \leq C/n \quad \forall i\ \
\quad & \sum \nolimits_{i=1}^n \alpha_i t_i = 0
\end{split}
$$
Often the Dual problem is easier to solve.
Once you solve the dual problem for $\alpha^*_1, \ldots, \alpha^*_N$, you get a primal solution as well!
Can you figure out a way (without using an optimization solver!) to determine the optimal dual parameters?
open ended, you can try different ideas
feel free to use the fact that you already know the support vectors
How do you know that you did indeed find the optimal $\alpha$'s?
How can you compute the primal variables $\vec{w}, b$ from these $\alpha$'s? | Python Code:
%pylab inline
import numpy as np
center1 = np.array([3.0,3.0])
center2 = np.array([-3.0,-3.0])
X = np.zeros((100,2)); Y = np.zeros((100,))
X[:50,:] = np.random.multivariate_normal(center1, np.eye(2),(50,))
Y[:50] = +1
X[50:,:] = np.random.multivariate_normal(center2, np.eye(2),(50,))
Y[50:] = -1
plt.scatter(X[:,0], X[:,1], c = Y)
Explanation: $$ \LaTeX \text{ command declarations here.}
\newcommand{\N}{\mathcal{N}}
\newcommand{\R}{\mathbb{R}}
\renewcommand{\vec}[1]{\mathbf{#1}}
\newcommand{\norm}[1]{\|#1\|_2}
\newcommand{\d}{\mathop{}!\mathrm{d}}
\newcommand{\qed}{\qquad \mathbf{Q.E.D.}}
\newcommand{\vx}{\mathbf{x}}
\newcommand{\vy}{\mathbf{y}}
\newcommand{\vt}{\mathbf{t}}
\newcommand{\vb}{\mathbf{b}}
\newcommand{\vw}{\mathbf{w}}
\newcommand{\vm}{\mathbf{m}}
\newcommand{\I}{\mathbb{I}}
\newcommand{\th}{\text{th}}
$$
EECS 445: Machine Learning
Hands On 09: Support Vector Machines
Instructors: Ben Bray, Chansoo Lee, Jia Deng, Jake Abernethy
Date: October 10, 2016
NEW: Finished Course website: http://eecs445-f16.github.io
Brute Force Search for Max-Margin SVM Solution
In the hard-margin support vector machine formulation, we want to find the hyperplane that maximizes the margin while correctly classifying the data.
Let's generate some data!
End of explanation
wvec = np.array([-4.0,7.0])
bval = -2.4
# Does this wvec and b correctly classify data within margin?
Explanation: Problem: Hard-margin SVM
First pick one vector and offset term $(\vec{w}, b)$ that correctly classifies the data
Determine the size of the margin for this $\vec{w}$
Challenging: Do a brute force search (over a grid) to find the max-margin $\vec{w}$!
> Note, this is not a good idea in general, since this algorithm has time complexity exponential in the dimension, but it's not so bad in 2d!
Find the support vectors and plot them
Modify the dataset above such that there is no feasible solution $\vec{w}$ (but just barely)
How do you know when there is no feasible $\vec{w}$?
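One possible starting point for the first two items (a sketch, not the official solution; it assumes the `X`, `Y`, `wvec`, and `bval` defined above):

```python
import numpy as np

def geometric_margin(w, b, X, Y):
    # (w, b) is feasible only if y_i (w . x_i + b) > 0 for every point;
    # its geometric margin is then min_i y_i (w . x_i + b) / ||w||.
    scores = Y * (X @ w + b)
    if np.any(scores <= 0):
        return None            # at least one point is misclassified
    return scores.min() / np.linalg.norm(w)

print(geometric_margin(wvec, bval, X, Y))
```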
End of explanation
# put some code in here!
Explanation: Problem: Soft-margin SVM objective
Recall that the original OSMH problem is
$$
\begin{split}
\underset{\vw, b, \xi}{\text{minimize}} \quad & \frac12 {\| \vw \|}^2 + \frac{C}{n} \sum \nolimits_{i = 1}^n \xi_i\
\text{subject to} \quad & y_i(\vw^T\vx_i + b) \geq 1 - \xi_i \quad \text{ and } \quad \xi_i \geq 0 \quad \forall i\
\end{split}
$$
Another way to write this is as follows:
$$
\begin{split}
\underset{\vw, b, \xi}{\text{minimize}} \quad & \frac12 {\| \vw \|}^2 + \frac{C}{n} \sum \nolimits_{i = 1}^n \max(0,1 - y_i(\vw^T\vx_i + b))
\end{split}
$$
You modified the dataset above to ensure there is no feasible $\vec{w}$. Now find the $\vec{w}$ that minimizes the OSMH objective using a brute force search.
Find two values of $C$ where the support vectors of the solution are different. Plot these in both cases.
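A sketch of one possible brute-force approach (not the official solution; assumes `X` and `Y` from the earlier cells):

```python
import numpy as np

def osmh_objective(w, b, X, Y, C=1.0):
    # 0.5 ||w||^2 + (C/n) * sum_i max(0, 1 - y_i (w . x_i + b))
    hinge = np.maximum(0.0, 1.0 - Y * (X @ w + b))
    return 0.5 * np.dot(w, w) + (C / len(Y)) * hinge.sum()

best = None
for w1 in np.linspace(-2, 2, 41):
    for w2 in np.linspace(-2, 2, 41):
        for b in np.linspace(-3, 3, 31):
            val = osmh_objective(np.array([w1, w2]), b, X, Y, C=1.0)
            if best is None or val < best[0]:
                best = (val, np.array([w1, w2]), b)
print(best)
```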
End of explanation
# let's find some alphas!
Explanation: OSMH Dual Formulation
The previous objective function is referred to as the Primal
With $N$ datapoints in $d$ dimensions, the Primal optimizes over $d + 1$ variables ($\vw, b$).
But the Dual of this optimization problem has $N$ variables, one $\alpha_i$ for each example $i$!
$$
\begin{split}
\underset{\alpha, \beta}{\text{maximize}} \quad & -\frac12 \sum \nolimits_{i,j = 1}^n \alpha_i \alpha_j t_i t_j \vx_i^T \vx_j + \sum \nolimits_{i = 1}^n \alpha_i\
\text{subject to} \quad & 0 \leq \alpha_i \leq C/n \quad \forall i\ \
\quad & \sum \nolimits_{i=1}^n \alpha_i t_i = 0
\end{split}
$$
Often the Dual problem is easier to solve.
Once you solve the dual problem for $\alpha^*_1, \ldots, \alpha^*_N$, you get a primal solution as well!
Can you figure out a way (without using an optimization solver!) to determine the optimal dual parameters?
open ended, you can try different ideas
feel free to use the fact that you already know the support vectors
How do you know that you did indeed find the optimal $\alpha$'s?
How can you compute the primal variables $\vec{w}, b$ from these $\alpha$'s?
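For the last item, a sketch of how the primal variables can be recovered once candidate $\alpha$'s are available (assumes arrays `alphas`, `X`, `Y`, a constant `C`, and at least one support vector strictly inside the box constraint):

```python
import numpy as np

def primal_from_dual(alphas, X, Y, C=1.0):
    w = (alphas * Y) @ X                       # w = sum_i alpha_i t_i x_i
    n = len(Y)
    on_margin = (alphas > 1e-8) & (alphas < C / n - 1e-8)
    i = np.flatnonzero(on_margin)[0]           # any on-margin support vector
    b = Y[i] - X[i] @ w                        # from y_i (w . x_i + b) = 1
    return w, b
```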
End of explanation |
13,839 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The Variability of the Sample Mean
By the Central Limit Theorem, the probability distribution of the mean of a large random sample is roughly normal. The bell curve is centered at the population mean. Some of the sample means are higher, and some lower, but the deviations from the population mean are roughly symmetric on either side, as we have seen repeatedly. Formally, probability theory shows that the sample mean is an unbiased estimate of the population mean.
In our simulations, we also noticed that the means of larger samples tend to be more tightly clustered around the population mean than means of smaller samples. In this section, we will quantify the variability of the sample mean and develop a relation between the variability and the sample size.
Let's start with our table of flight delays. The mean delay is about 16.7 minutes, and the distribution of delays is skewed to the right.
Step2: Now let's take random samples and look at the probability distribution of the sample mean. As usual, we will use simulation to get an empirical approximation to this distribution.
We will define a function simulate_sample_mean to do this, because we are going to vary the sample size later. The arguments are the name of the table, the label of the column containing the variable, the sample size, and the number of simulations.
Step3: Let us simulate the mean of a random sample of 100 delays, then of 400 delays, and finally of 625 delays. We will perform 1000 repetitions of each of these processes.
You can interact with the buttons below to show the distribution of sample means for different sample sizes.
Step4: You can see the Central Limit Theorem in action – the histograms of the sample means are roughly normal, even though the histogram of the delays themselves is far from normal.
You can also see that each of the three histograms of the sample means is centered very close to the population mean. In each case, the "average of sample means" is very close to 16.66 minutes, the population mean. Both values are provided in the printout above each histogram. As expected, the sample mean is an unbiased estimate of the population mean.
The SD of All the Sample Means
You can also see that the histograms get narrower, and hence taller, as the sample size increases. We have seen that before, but now we will pay closer attention to the measure of spread.
The SD of the population of all delays is about 40 minutes.
Step5: Take a look at the SDs in the sample mean histograms above. In all three of them, the SD of the population of delays is about 40 minutes, because all the samples were taken from the same population.
Now look at the SD of all 1,000 sample means, when the sample size is 100. That SD is about one-tenth of the population SD. When the sample size is 400, the SD of all the sample means is about one-twentieth of the population SD. When the sample size is 625, the SD of the sample means is about one-twenty-fifth of the population SD.
It seems like a good idea to compare the SD of the empirical distribution of the sample means to the quantity "population SD divided by the square root of the sample size."
Here are the numerical values. For each sample size in the first column, 10,000 random samples of that size were drawn, and the 10,000 sample means were calculated. The second column contains the SD of those 10,000 sample means. The third column contains the result of the calculation "population SD divided by the square root of the sample size."
Step6: The values in the second and third columns are very close. If we plot each of those columns with the sample size on the horizontal axis, the two graphs are essentially indistinguishable. | Python Code:
united = Table.read_table('http://inferentialthinking.com/notebooks/united_summer2015.csv')
delay = united.select('Delay')
pop_mean = np.mean(delay.column('Delay'))
pop_mean
delay_opts = {
'xlabel': 'Delay (minute)',
'ylabel': 'Percent per minute',
'xlim': (-20, 200),
'ylim': (0, 0.037),
'bins': 22,
}
nbi.hist(united.column('Delay'), options=delay_opts)
Explanation: The Variability of the Sample Mean
By the Central Limit Theorem, the probability distribution of the mean of a large random sample is roughly normal. The bell curve is centered at the population mean. Some of the sample means are higher, and some lower, but the deviations from the population mean are roughly symmetric on either side, as we have seen repeatedly. Formally, probability theory shows that the sample mean is an unbiased estimate of the population mean.
In our simulations, we also noticed that the means of larger samples tend to be more tightly clustered around the population mean than means of smaller samples. In this section, we will quantify the variability of the sample mean and develop a relation between the variability and the sample size.
Let's start with our table of flight delays. The mean delay is about 16.7 minutes, and the distribution of delays is skewed to the right.
End of explanation
# Empirical distribution of random sample means
def simulate_sample_mean(table, label, sample_size, repetitions=1000):
means = make_array()
for i in range(repetitions):
new_sample = table.sample(sample_size)
new_sample_mean = np.mean(new_sample.column(label))
means = np.append(means, new_sample_mean)
# Print all relevant quantities
print("Sample size: ", sample_size)
print("Population mean:", np.mean(table.column(label)))
print("Average of sample means: ", np.mean(means))
print("Population SD:", np.std(table.column(label)))
print("SD of sample means:", np.std(means))
return means
Explanation: Now let's take random samples and look at the probability distribution of the sample mean. As usual, we will use simulation to get an empirical approximation to this distribution.
We will define a function simulate_sample_mean to do this, because we are going to vary the sample size later. The arguments are the name of the table, the label of the column containing the variable, the sample size, and the number of simulations.
End of explanation
means_opts = {
'xlabel': 'Sample Means',
'ylabel': 'Percent per unit',
'xlim': (5, 35),
'ylim': (0, 0.25),
'bins': 30,
}
nbi.hist(simulate_sample_mean, table=fixed(delay), label=fixed('Delay'),
sample_size=widgets.ToggleButtons(options=[100, 400, 625]),
options=means_opts)
Explanation: Let us simulate the mean of a random sample of 100 delays, then of 400 delays, and finally of 625 delays. We will perform 1000 repetitions of each of these processes.
You can interact with the buttons below to show the distribution of sample means for different sample sizes.
End of explanation
pop_sd = np.std(delay.column('Delay'))
pop_sd
Explanation: You can see the Central Limit Theorem in action – the histograms of the sample means are roughly normal, even though the histogram of the delays themselves is far from normal.
You can also see that each of the three histograms of the sample means is centered very close to the population mean. In each case, the "average of sample means" is very close to 16.66 minutes, the population mean. Both values are provided in the printout above each histogram. As expected, the sample mean is an unbiased estimate of the population mean.
The SD of All the Sample Means
You can also see that the histograms get narrower, and hence taller, as the sample size increases. We have seen that before, but now we will pay closer attention to the measure of spread.
The SD of the population of all delays is about 40 minutes.
End of explanation
sd_comparison
Explanation: Take a look at the SDs in the sample mean histograms above. In all three of them, the SD of the population of delays is about 40 minutes, because all the samples were taken from the same population.
Now look at the SD of all 1,000 sample means, when the sample size is 100. That SD is about one-tenth of the population SD. When the sample size is 400, the SD of all the sample means is about one-twentieth of the population SD. When the sample size is 625, the SD of the sample means is about one-twenty-fifth of the population SD.
It seems like a good idea to compare the SD of the empirical distribution of the sample means to the quantity "population SD divided by the square root of the sample size."
Here are the numerical values. For each sample size in the first column, 10,000 random samples of that size were drawn, and the 10,000 sample means were calculated. The second column contains the SD of those 10,000 sample means. The third column contains the result of the calculation "population SD divided by the square root of the sample size."
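The sd_comparison table used in the next cells is not constructed in the cells shown here; below is a sketch of how such a table could be built with the helpers above (column names other than 'Sample Size n' are illustrative):

```python
sample_sizes = np.arange(100, 950, 50)
sd_of_means = make_array()
for n in sample_sizes:
    means = simulate_sample_mean(delay, 'Delay', n, 1000)
    sd_of_means = np.append(sd_of_means, np.std(means))
sd_comparison = Table().with_columns(
    'Sample Size n', sample_sizes,
    'SD of Sample Means', sd_of_means,
    'pop_sd / sqrt(n)', pop_sd / np.sqrt(sample_sizes)
)
```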
End of explanation
sd_comparison.plot('Sample Size n')
Explanation: The values in the second and third columns are very close. If we plot each of those columns with the sample size on the horizontal axis, the two graphs are essentially indistinguishable.
End of explanation |
13,840 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Layout in Python 3 (Markdown)
Step1: Heading with three "###"
Heading with five "#####"
Step2: text in italics
text in bold
asterisks and italics on a single line
~~strikethrough text~~
Step3: A number followed by a period defines an ordered list item
Second item of the list (2.)
Third item (3.)
Unnumbered item (using "*"; "-" and "+" are also valid)
Second-level item with numbering
Item 1
Item 2
Item 3
Sub-item 1
Sub-item 2
Step4: Heading via MARKDOWN
<h4>Heading via HTML</h4>
'<!--
Text can be commented out as if it were JAVA
-->'
<pre>
<code>
// Comments
code line 1
code line 2
code line 3
</code>
</pre>
Step5: Creating paragraphs
Paragraph 1
Paragraph 2
Paragraph 3
Step6: Example of commenting a block of code
js
grunt.initConfig({
assemble
Step7: | Option | Description |
|
Step8: Basic link
Enlace con información al realizar un mouseover | Python Code:
# HEADERS are defined with #; there are 6 levels, a single # being the largest and ###### the smallest
# Example:
Explanation: Layout in Python 3 (Markdown)
End of explanation
# Emphasis, i.e. text styles
# *italic*
# **bold**
# ~~strikethrough~~
# Examples:
Explanation: Heading with three "###"
Heading with five "#####"
End of explanation
# How to create ordered lists and unnumbered items
# Examples:
Explanation: text in italics
text in bold
asterisks and italics on a single line
~~strikethrough text~~
End of explanation
# In IPython you can write HTML code to lay out or present data;
# for example, a heading can be defined by writing <h4>h1 Heading</h4> or, as we have already seen, #### Heading 1
# Example:
Explanation: A number followed by a period defines an ordered list item
Second item of the list (2.)
Third item (3.)
Unnumbered item (using "*"; "-" and "+" are also valid)
Second-level item with numbering
Item 1
Item 2
Item 3
Sub-item 1
Sub-item 2
End of explanation
# Creating paragraphs with MARKDOWN
# > specifies the first paragraph (quote) level; successive > characters deepen the indentation
Explanation: Heading via MARKDOWN
<h4>Heading via HTML</h4>
'<!--
Text can be commented out as if it were JAVA
-->'
<pre>
<code>
// Comments
code line 1
code line 2
code line 3
</code>
</pre>
End of explanation
# Using ''' ''' you can preserve the structure of a comment,
# for example when writing code so that it stays readable.
Explanation: Creating paragraphs
Paragraph 1
Paragraph 2
Paragraph 3
End of explanation
# Creating tables in MARKDOWN
# | Option | Description |
# | ------ | ----------- |
# Using : in the previous line aligns the text left or right; with : on both sides it is centered
# | ------: | :----------- |
# | data: | path to data files to supply the data that will be passed into templates. |
# | engine | engine to be used for processing templates. Handlebars is the default. |
# | ext | extension to be used for dest files. |
Explanation: Example of commenting a block of code
js
grunt.initConfig({
assemble: {
options: {
assets: 'docs/assets',
data: 'src/data/*.{json,yml}',
helpers: 'src/custom-helpers.js',
partials: ['src/partials/**/*.{hbs,md}']
},
pages: {
options: {
layout: 'default.hbs'
},
files: {
'./': ['src/templates/pages/index.hbs']
}
}
}
};
End of explanation
# Embedding links with MARKDOWN
# [Text](http://web "mouseover comment")
Explanation: | Option | Description |
| :----: | :---------- |
| data 1 | text 1 |
| data 2 | text 2 |
| data 3 | text 3 |
End of explanation
# Embedding images
# 
Explanation: Basic link
Link with information shown on mouseover
End of explanation |
13,841 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step2: Custom Observer
This example defines a plane-observer in python.
Step3: As test, we propagate some particles in a random field with a sheet observer
Step4: and plot the final position of the particles in 3D
Step5: or as a histogram. Note the width of the X distribution, which is due to the particles being detected after crossing. | Python Code:
import crpropa
class ObserverPlane(crpropa.ObserverFeature):
Detects all particles after crossing the plane. Defined by position (any
point in the plane) and vectors v1 and v2.
def __init__(self, position, v1, v2):
crpropa.ObserverFeature.__init__(self)
# calculate three points of a plane
self.__v1 = v1
self.__v2 = v2
self.__x0 = position
def distanceToPlane(self, X):
Always positive for one side of plane and negative for the other side.
dX = np.asarray([X.x - self.__x0[0], X.y - self.__x0[1], X.z - self.__x0[2]])
V = np.linalg.det([self.__v1, self.__v2, dX])
return V
def checkDetection(self, candidate):
currentDistance = self.distanceToPlane(candidate.current.getPosition())
previousDistance = self.distanceToPlane(candidate.previous.getPosition())
candidate.limitNextStep(abs(currentDistance))
if np.sign(currentDistance) == np.sign(previousDistance):
return crpropa.NOTHING
else:
return crpropa.DETECTED
Explanation: Custom Observer
This example defines a plane-observer in python.
End of explanation
from crpropa import Mpc, nG, EeV
import numpy as np
turbSpectrum = crpropa.SimpleTurbulenceSpectrum(Brms=1*nG, lMin = 2*Mpc, lMax=5*Mpc, sIndex=5./3.)
gridprops = crpropa.GridProperties(crpropa.Vector3d(0), 128, 1 * Mpc)
BField = crpropa.SimpleGridTurbulence(turbSpectrum, gridprops)
m = crpropa.ModuleList()
m.add(crpropa.PropagationCK(BField, 1e-4, 0.1 * Mpc, 5 * Mpc))
m.add(crpropa.MaximumTrajectoryLength(25 * Mpc))
# Observer
out = crpropa.TextOutput("sheet.txt")
o = crpropa.Observer()
# The Observer feature has to be created outside of the class attribute
# o.add(ObserverPlane(...)) will not work for custom python modules
plo = ObserverPlane(np.asarray([0., 0, 0]) * Mpc, np.asarray([0., 1., 0.]) * Mpc, np.asarray([0., 0., 1.]) * Mpc)
o.add(plo)
o.setDeactivateOnDetection(False)
o.onDetection(out)
m.add(o)
# source setup
source = crpropa.Source()
source.add(crpropa.SourcePosition(crpropa.Vector3d(0, 0, 0) * Mpc))
source.add(crpropa.SourceIsotropicEmission())
source.add(crpropa.SourceParticleType(crpropa.nucleusId(1, 1)))
source.add(crpropa.SourceEnergy(1 * EeV))
m.run(source, 1000)
out.close()
Explanation: As a test, we propagate some particles in a random field with a sheet observer:
End of explanation
%matplotlib inline
from mpl_toolkits.mplot3d import Axes3D
import pylab as plt
ax = plt.subplot(111, projection='3d')
data = plt.loadtxt('sheet.txt')
ax.scatter(data[:,5], data[:,6], data[:,7] )
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_zlabel('Z')
ax.set_xlim(20,-20)
ax.set_ylim(20,-20)
ax.set_zlim(20,-20)
ax.view_init(25, 95)
Explanation: and plot the final position of the particles in 3D
End of explanation
bins = np.linspace(-20,20, 50)
plt.hist(data[:,5], bins=bins, label='X', histtype='step')
plt.hist(data[:,6], bins=bins, label='Y', histtype='step')
plt.hist(data[:,7], bins=bins, label='Z', histtype='step')
plt.legend()
plt.show()
Explanation: or as a histogram. Note the width of the X distribution, which is due to the particles being detected after crossing.
End of explanation |
13,842 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Rechunking
Rechunking lets us re-distribute how datasets are split between variables and chunks across a Beam PCollection.
To get started we'll recreate our dummy data from the data model tutorial
Step1: Choosing chunks
Chunking can be essential for some operations. Some operations are very hard or impossible to perform with certain chunking schemes. For example, to make a plot all the data needs to come together on a single machine. Other calculations such as calculating a median are possible to perform on distributed data, but require tricky algorithms and/or approximation.
More broadly, chunking can have critical performance implications, similar to those for Xarray and Dask. As a rule of thumb, chunk sizes of 10-100 MB work well. The optimal chunk size is a balance among a number of considerations, adapted here from Dask docs
Step2: Adjusting chunks
You can also adjust chunks in a dataset to distribute arrays of different sizes. Here you have two choices of API
Step3: Note that because these transformations only split or consolidate, they cannot necessarily fully rechunk a dataset in a single step if the new chunk sizes are not multiples of old chunks (with consolidate) or do not even divide the old chunks (with split), e.g.,
Step4: For such uneven cases, you'll need to use split followed by consolidate
Step5: High level rechunking
Alternatively, the high-level Rechunk() method applies multiple split and consolidate steps based on the Rechunker algorithm | Python Code:
import apache_beam as beam
import numpy as np
import xarray_beam as xbeam
import xarray
def create_records():
for offset in [0, 4]:
key = xbeam.Key({'x': offset, 'y': 0})
data = 2 * offset + np.arange(8).reshape(4, 2)
chunk = xarray.Dataset({
'foo': (('x', 'y'), data),
'bar': (('x', 'y'), 100 + data),
})
yield key, chunk
inputs = list(create_records())
Explanation: Rechunking
Rechunking lets us re-distribute how datasets are split between variables and chunks across a Beam PCollection.
To get started we'll recreate our dummy data from the data model tutorial:
End of explanation
inputs | xbeam.SplitVariables()
Explanation: Choosing chunks
Chunking can be essential for some operations. Some operations are very hard or impossible to perform with certain chunking schemes. For example, to make a plot all the data needs to come together on a single machine. Other calculations such as calculating a median are possible to perform on distributed data, but require tricky algorithms and/or approximation.
More broadly, chunking can have critical performance implications, similar to those for Xarray and Dask. As a rule of thumb, chunk sizes of 10-100 MB work well. The optimal chunk size is a balance among a number of considerations, adapted here from Dask docs:
Chunks should be small enough to fit comfortably into memory on a single machine. As an upper limit, chunks over roughly 2 GB in size will not fit into the protocol buffers Beam uses to pass data between workers.
There should be enough chunks for Beam runners (like Cloud Dataflow) to elastically shard work over many workers.
Chunks should be large enough to amortize the overhead of networking and the Python interpreter, which starts to become noticeable for arrays with fewer than 1 million elements.
The nbytes attribute on both NumPy arrays and xarray.Dataset objects is a good, easy way to figure out how large chunks are.
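For example, a quick check on one of the example chunks created above (a small sketch):

```python
key, chunk = inputs[0]
print(chunk.nbytes)              # total bytes in this chunk
print(chunk['foo'].data.nbytes)  # bytes in a single variable
```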
Adjusting variables
The simplest transformation is splitting (or consolidating) different variables in a Dataset with SplitVariables() and ConsolidateVariables(), e.g.,
End of explanation
inputs | xbeam.ConsolidateChunks({'x': -1})
Explanation: Adjusting chunks
You can also adjust chunks in a dataset to distribute arrays of different sizes. Here you have two choices of API:
The lower level {py:class}~xarray_beam.SplitChunks and {py:class}~xarray_beam.ConsolidateChunks. These transformations apply a single splitting (with indexing) or consolidation (with {py:function}xarray.concat) function to array elements.
The high level {py:class}~xarray_beam.Rechunk, which uses a pipeline of multiple split/consolidate steps (as needed) to efficiently rechunk a dataset.
Low level rechunking
For minor adjustments (e.g., mostly along a single dimension), the more explicit SplitChunks() and ConsolidateChunks() are good options. They take a dict of desired chunk sizes as a parameter, which can also be -1 to indicate "no chunking" along a dimension:
End of explanation
inputs | xbeam.SplitChunks({'x': 5}) # notice that the first two chunks are still separate!
Explanation: Note that because these transformations only split or consolidate, they cannot necessarily fully rechunk a dataset in a single step if the new chunk sizes are not multiples of old chunks (with consolidate) or do not even divide the old chunks (with split), e.g.,
End of explanation
inputs | xbeam.SplitChunks({'x': 5}) | xbeam.ConsolidateChunks({'x': 5})
Explanation: For such uneven cases, you'll need to use split followed by consolidate:
End of explanation
inputs | xbeam.Rechunk(dim_sizes={'x': 6}, source_chunks={'x': 3}, target_chunks={'x': 5}, itemsize=8)
Explanation: High level rechunking
Alternatively, the high-level Rechunk() method applies multiple split and consolidate steps based on the Rechunker algorithm:
End of explanation |
13,843 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Table of Contents
Preparation
Introduction
Data Collection
Data Preprocessing
Building and Training the Model
Qualitative Analysis of Player Vectors
t-SNE
PCA
ScatterPlot3D
Player Algebra
Nearest Neighbors
Opposite-handed Doppelgängers
Modeling Previously Unseen At-Bat Matchups
Preparation
I use Python 3, but everything should work with Python 2.
Install HDF5.
Install other packages
Step1: And now we'll prepare some variables for organizing the data.
Step2: Next, we'll read in the data. Unfortunately, this is going to be a bunch of spaghetti code. The goal is to collect the batter, pitcher, and outcome (e.g., strike out, home run) for every at-bat. By the end of the following code block, we'll have a Python list of dictionaries where each element has the format <code>{"batter"
Step3: Data Preprocessing
OK, now that we have our raw data, we're going to establish some cutoffs so that we're only analyzing players with a reasonable number of observations. Let's just focus on the most frequent batters and pitchers who were involved in 90% of the at-bats.
Step4: As you can see, only 32% of batters and 46% of pitchers were involved in 90% of at-bats. Let's use these new cutoff points to build the final data set.
Step5: As you can see, we still retain a large amount of data even after removing infrequent batters and pitchers. Next, we're going to associate an integer index with each of our batters, pitchers, and outcomes, respectively.
Step6: We'll then use these newly defined integer indices to build the appropriate NumPy arrays for our model.
Step7: Building and Training the Model
We're now ready to build our model with Keras. The model is similar in spirit to the <code>word2vec</code> model in that we're trying to learn the player vectors that best predict the outcome of an at-bat (the "target word" in <code>word2vec</code>) given a certain batter and pitcher (the "context" in <code>word2vec</code>). We'll learn separate embedding matrices for batters and pitchers.
Step8: And now we're ready to train our model. We'll save the weights at the end of training.
Step9: We'll also train a logistic regression model so that we have something to compare to <code>(batter|pitcher)2vec</code>.
Step10: Qualitative Analysis of Player Vectors
Having trained the model, let's go ahead and fetch the distributed representations for all players. To do so, we need to define some functions that return a vector when provided with a player's integer index.
Step11: Alright, let's find out if these representations are revealing anything interesting. First, let's collect some information about the players.
Step13: t-SNE
Next, we'll use the t-SNE algorithm to visualize the player vectors in two and three dimensions.
Step14: PCA
Let's also visualize the first few PCs of a principal component analysis (PCA) of the vectors and color them with various interesting properties.
Step16: As you can see, there are some interesting patterns emerging from the representations. For example, right-handed hitters are clearly separated from left-handed and switch hitters. Similarly, frequent singles hitters are far from infrequent singles hitters. So, the model is clearly learning something, but whether or not what it's learning is non-trivial remains to be seen. Let's go ahead and save the t-SNE map and PC scores to CSV files so that we can play around with them elsewhere.
Step18: Let's also save the raw player vectors.
Step20: ScatterPlot3D
To gain some additional intuition with the player representations, I recommend exploring them in my open source scatter plot visualization application, ScatterPlot3D. To run it
Step22: At a first glance, the nearest neighbors produced by the embedding do seem to support baseball intuition. Both Mike Trout and Paul Goldschmidt are known for their rare blend of speed and power. Like Dee Gordon, Ichiro Suzuki has a knack for being able to get on base.
Zack Greinke's presence among Clayton Kershaw's nearest neighbors is interesting as they are considered one of the best pitching duos of all time. The similarities between Craig Stammen and Kershaw are not obvious to my ignorant baseball eye, but we would expect a method like <code>(batter|pitcher)2vec</code> (if effective) to occasionally discover surprising neighbors or else it wouldn't be particularly useful.
Aroldis Chapman's nearest neighbors are fairly unsurprising with Craig Kimbrel and Andrew Miller both being elite relief pitchers.
When clustering players using common MLB stats (e.g., HRs, RBIs), Mike Trout's ten nearest neighbors for the 2015 season are
Step23: Bryce Harper's presence among Mike Trout's left-handed doppelgängers is particularly satisfying. As for Dee Gordon's right-handed doppelgängers, Tyler Saladino is known for "legging 'em out".
Modeling Previously Unseen At-Bat Matchups
Measuring how well the <code>(batter|pitcher)2vec</code> representations predict outcome distributions for unseen matchups is the ultimate test of whether the representations are capturing anything meaningful about players. To test the model, we'll look at matchups from the 2016 season that were not seen in the training set.
Step25: To determine the effectiveness of <code>(batter|pitcher)2vec</code>, we need something to first establish a baseline. We'll use a naïve prediction strategy to fill that role. For any given batter, we'll define their expected outcome distribution as
Step26: We can then calculate the log loss of this naïve approach on unseen matchups.
Step27: And we can now see how <code>(batter|pitcher)2vec</code> compares.
Step28: As you can see, <code>(batter|pitcher)2vec</code> is significantly better at modeling outcome distributions for unseen batter/pitcher matchups than the naïve baseline. But is an improvement of only 0.94% over the baseline particularly impressive? Let's see how our logistic regression model fares. | Python Code:
import urllib.request
import zipfile
from os import makedirs
from os.path import exists
project_directory = "/home/airalcorn2/Projects/batter_pitcher_2vec/batter-pitcher-2vec/" # Change this.
zip_name = "2010seve"
data_directory = project_directory + zip_name
if not exists(data_directory):
makedirs(project_directory, exist_ok = True)
zip_f = data_directory + ".zip"
urllib.request.urlretrieve("http://www.retrosheet.org/events/{0}.zip".format(zip_name), zip_f)
zip_ref = zipfile.ZipFile(zip_f, "r")
zip_ref.extractall(project_directory + zip_name)
zip_ref.close()
Explanation: Table of Contents
Preparation
Introduction
Data Collection
Data Preprocessing
Building and Training the Model
Qualitative Analysis of Player Vectors
t-SNE
PCA
ScatterPlot3D
Player Algebra
Nearest Neighbors
Opposite-handed Doppelgängers
Modeling Previously Unseen At-Bat Matchups
Preparation
I use Python 3, but everything should work with Python 2.
Install HDF5.
Install other packages:
<code>pip install h5py keras matplotlib numpy pyyaml scipy scikit-learn seaborn tensorflow theano urllib3</code>
Introduction
The goal of this project was to learn distributed representations of MLB players. Theoretically, meaningful representations (i.e., representations that capture real baseball qualities of players) could then be used for other types of analyses, such as simulating season outcomes following trades. <code>(batter|pitcher)2vec</code> was inspired by <code>word2vec</code> (hence the name), which is a model that learns distributed representations of words. These learned word vectors often have interesting properties; for example, Paris - France + Italy in the word vector space is very close to the vector for Rome (see here and here for more details). In this notebook, I'll show you how I built a model that simultaneously learns distributed representations of pitchers and batters from at-bat data.
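As a purely illustrative sketch of that analogy arithmetic (made-up vectors, not output from this notebook):

```python
import numpy as np

def closest(query, vocab):
    sims = {w: np.dot(query, v) / (np.linalg.norm(query) * np.linalg.norm(v))
            for w, v in vocab.items()}
    return max(sims, key=sims.get)

vocab = {w: np.random.randn(9) for w in ["paris", "france", "italy", "rome"]}
analogy = vocab["paris"] - vocab["france"] + vocab["italy"]
print(closest(analogy, vocab))   # with real word2vec vectors this lands near "rome"
```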
Data Collection
To start things off, let's download and extract some data from Retrosheet.org. We'll use play-by-play data from the 2013, 2014, 2015, and 2016 seasons.
End of explanation
import re
from os import listdir
from os.path import isfile, join
data_files = [f for f in listdir(data_directory) if isfile(join(data_directory, f))]
at_bats = {}
home_runs = {}
singles = {}
doubles = {}
counts = {"batter": {}, "pitcher": {}}
data = {}
train_years = ["2013", "2014", "2015"]
test_year = "2016"
year_match = r"201(3|4|5|6)"
for year in train_years + [test_year]:
data[year] = []
Explanation: And now we'll prepare some variables for organizing the data.
End of explanation
import string
for data_file in data_files:
year_re = re.search(year_match, data_file)
if year_re is None:
continue
year = year_re.group()
# Skip non-event files.
if not (".EVA" in data_file or ".EVN" in data_file):
continue
f = open(join(data_directory, data_file))
home_pitcher = None
away_pitcher = None
line = f.readline().strip()
while line != "":
parts = line.split(",")
# Get starting pitchers.
if parts[0] == "id":
while parts[0] != "play":
line = f.readline().strip()
parts = line.split(",")
if parts[0] == "start" and parts[-1] == "1":
if parts[3] == "0":
away_pitcher = parts[1]
else:
home_pitcher = parts[1]
# Get at-bat data.
if parts[0] == "play":
batter = parts[3]
pitcher = home_pitcher
if parts[2] == "1":
pitcher = away_pitcher
outcome = ""
# Handle balks, intentional, walks, hit by a pitch,
# strike outs, and walks..
if parts[-1][:2] in {"BK", "IW", "HP"}:
outcome = "p_" + parts[-1][:2]
elif parts[-1][0] in {"K", "I", "W"}:
outcome = "p_" + parts[-1][0]
# If the last pitch resulted in contact, figure out the pitch outcome.
# See "Events made by the batter at the plate" here: http://www.retrosheet.org/eventfile.htm#8.
pitches = parts[5]
if len(pitches) > 0 and pitches[-1] == "X":
play_parts = parts[6].split("/")
main_play = play_parts[0]
play = main_play.split(".")[0]
if play[0] == "H":
play = "HR"
elif play[0] in string.digits:
play = play[0]
elif play[0] in {"S", "D", "T"}:
play = play[:2]
# Try to get first ball handler.
if len(play) < 2:
try:
handlers = play_parts[1]
if handlers in string.digits:
play = play[0] + handlers[0]
except IndexError:
play = play[0] + "X"
elif play[:2] == "FC":
play = play[2]
outcome = "h_" + play
if play == "HR":
home_runs[batter] = home_runs.get(batter, 0) + 1
elif play[0] == "S":
singles[batter] = singles.get(batter, 0) + 1
elif play[0] == "D":
doubles[batter] = doubles.get(batter, 0) + 1
# Ignore catcher interference and ambiguous singles.
if outcome not in {"h_C", "h_S"} and outcome != "":
data[year].append({"batter": batter, "pitcher": pitcher, "outcome": outcome})
at_bats[batter] = at_bats.get(batter, 0) + 1
counts["batter"][batter] = counts["batter"].get(batter, 0) + 1
counts["pitcher"][pitcher] = counts["pitcher"].get(pitcher, 0) + 1
# Handle pitcher changes.
if parts[0] == "sub":
if parts[-1] == "1":
if parts[3] == "0":
away_pitcher = parts[1]
else:
home_pitcher = parts[1]
line = f.readline().strip()
f.close()
Explanation: Next, we'll read in the data. Unfortunately, this is going to be a bunch of spaghetti code. The goal is to collect the batter, pitcher, and outcome (e.g., strike out, home run) for every at-bat. By the end of the following code block, we'll have a Python list of dictionaries where each element has the format <code>{"batter": batter, "pitcher": pitcher, "outcome": outcome}</code>. To best understand what's going on in the code, you'll have to read through Retrosheet's game file documentation.
End of explanation
cutoffs = {}
percentile_cutoff = 0.9
for player_type in ["batter", "pitcher"]:
counts_list = list(counts[player_type].values())
counts_list.sort(reverse = True)
total_at_bats = sum(counts_list)
cumulative_percentage = [sum(counts_list[:i + 1]) / total_at_bats for i in range(len(counts_list))]
cutoff_index = sum([1 for total in cumulative_percentage if total <= percentile_cutoff])
cutoff = counts_list[cutoff_index]
cutoffs[player_type] = cutoff
print("Original: {0}\tNew: {1}\tProportion: {2:.2f}".format(
len(counts[player_type]), cutoff_index, cutoff_index / len(counts[player_type])))
Explanation: Data Preprocessing
OK, now that we have our raw data, we're going to establish some cutoffs so that we're only analyzing players with a reasonable number of observations. Let's just focus on the most frequent batters and pitchers who were involved in 90% of the at-bats.
End of explanation
final_data = []
original_data = 0
matchups = set()
for year in train_years:
original_data += len(data[year])
for sample in data[year]:
batter = sample["batter"]
pitcher = sample["pitcher"]
matchups.add("{0}_{1}".format(batter, pitcher))
if counts["batter"][batter] >= cutoffs["batter"] and counts["pitcher"][pitcher] >= cutoffs["pitcher"]:
final_data.append(sample)
print("Original: {0}\tReduced: {1}".format(original_data, len(final_data)))
print("{0:.2f}% of original data set.".format(len(final_data) / original_data))
Explanation: As you can see, only 32% of batters and 46% of pitchers were involved in 90% of at-bats. Let's use these new cutoff points to build the final data set.
End of explanation
import random
FAV_NUM = 2010
random.seed(FAV_NUM)
random.shuffle(final_data)
categories = {"batter": set(), "pitcher": set(), "outcome": set()}
for sample in final_data:
categories["batter"].add(sample["batter"])
categories["pitcher"].add(sample["pitcher"])
categories["outcome"].add(sample["outcome"])
for column in categories:
categories[column] = list(categories[column])
categories[column].sort()
NUM_OUTCOMES = len(categories["outcome"])
print("NUM_OUTCOMES: {0}".format(NUM_OUTCOMES))
print(" ".join(categories["outcome"]))
category_to_int = {}
for column in categories:
category_to_int[column] = {categories[column][i]: i for i in range(len(categories[column]))}
import matplotlib.pyplot as plt
import seaborn as sns
outcome_counts = {}
for year in train_years:
for sample in data[year]:
outcome = sample["outcome"]
outcome_counts[outcome] = outcome_counts.get(outcome, 0) + 1
outcome_counts = list(outcome_counts.items())
outcome_counts.sort(key = lambda x: x[1], reverse = True)
val = [x[1] for x in outcome_counts]
symbols = [x[0] for x in outcome_counts]
pos = range(len(outcome_counts))
fig, ax = plt.subplots()
fig.set_size_inches(30, 30)
ax = sns.barplot(x = val, y = symbols)
plt.show()
Explanation: As you can see, we still retain a large amount of data even after removing infrequent batters and pitchers. Next, we're going to associate an integer index with each of our batters, pitchers, and outcomes, respectively.
End of explanation
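As a quick sanity check on the mappings built above, every categorical value should map to a unique integer index, and the sorted lists let us invert the mapping.
example_outcome = categories["outcome"][0]
example_index = category_to_int["outcome"][example_outcome]
assert categories["outcome"][example_index] == example_outcome
print(example_outcome, "->", example_index)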
import numpy as np
np.random.seed(FAV_NUM)
from keras.utils import np_utils
data_sets = {"batter": [], "pitcher": [], "outcome": []}
for sample in final_data:
for column in sample:
value = sample[column]
value_index = category_to_int[column][value]
data_sets[column].append([value_index])
for column in ["batter", "pitcher"]:
data_sets[column] = np.array(data_sets[column])
data_sets["outcome"] = np_utils.to_categorical(np.array(data_sets["outcome"]), NUM_OUTCOMES)
Explanation: We'll then use these newly defined integer indices to build the appropriate NumPy arrays for our model.
End of explanation
from keras import optimizers
from keras.layers import Activation, concatenate, Dense, Dropout, Embedding, Input, Reshape
from keras.models import Model
NUM_BATTERS = len(categories["batter"])
NUM_PITCHERS = len(categories["pitcher"])
VEC_SIZE = 9
ACTIVATION = "sigmoid"
batter_idx = Input(shape = (1, ), dtype = "int32", name = "batter_idx")
batter_embed = Embedding(NUM_BATTERS, VEC_SIZE, input_length = 1)(batter_idx)
batter_embed = Reshape((VEC_SIZE, ), name = "batter_embed")(batter_embed)
batter_embed = Activation(ACTIVATION)(batter_embed)
pitcher_idx = Input(shape = (1, ), dtype = "int32", name = "pitcher_idx")
pitcher_embed = Embedding(NUM_PITCHERS, VEC_SIZE, input_length = 1)(pitcher_idx)
pitcher_embed = Reshape((VEC_SIZE, ), name = "pitcher_embed")(pitcher_embed)
pitcher_embed = Activation(ACTIVATION)(pitcher_embed)
batter_pitcher = concatenate([batter_embed, pitcher_embed], name = "batter_pitcher")
output = Dense(NUM_OUTCOMES, activation = "softmax")(batter_pitcher)
model = Model(inputs = [batter_idx, pitcher_idx], outputs = [output])
sgd = optimizers.SGD(lr = 0.01, decay = 1e-6, momentum = 0.9, nesterov = True)
model.compile(optimizer = sgd, loss = "categorical_crossentropy")
Explanation: Building and Training the Model
We're now ready to build our model with Keras. The model is similar in spirit to the <code>word2vec</code> model in that we're trying to learn the player vectors that best predict the outcome of an at-bat (the "target word" in <code>word2vec</code>) given a certain batter and pitcher (the "context" in <code>word2vec</code>). We'll learn separate embedding matrices for batters and pitchers.
End of explanation
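Before training it can be worth double-checking the wiring. The parameter count should come entirely from the two embedding matrices plus the softmax layer; the arithmetic below just writes that assumption out.
model.summary()
expected_params = (NUM_BATTERS + NUM_PITCHERS) * VEC_SIZE + (2 * VEC_SIZE + 1) * NUM_OUTCOMES
print("Expected parameters: {0}   Actual: {1}".format(expected_params, model.count_params()))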
BATCH_SIZE = 100
NUM_EPOCHS = 100
VALID = False
validation_split = 0.0
callbacks = None
if VALID:
from keras.callbacks import ModelCheckpoint
validation_split = 0.01
callbacks = [ModelCheckpoint("weights.h5", save_best_only = True, save_weights_only = True)]
X_list = [data_sets["batter"], data_sets["pitcher"]]
y = data_sets["outcome"]
history = model.fit(X_list, y, epochs = NUM_EPOCHS, batch_size = BATCH_SIZE,
verbose = 2, shuffle = True, callbacks = callbacks, validation_split = validation_split)
if not VALID:
model.save_weights("weights.h5")
model.load_weights("weights.h5")
if VALID:
plt.plot(history.history["loss"])
plt.plot(history.history["val_loss"])
plt.title("model loss")
plt.ylabel("loss")
plt.xlabel("epoch")
plt.legend(["train", "valid"], loc = "upper right")
plt.show()
Explanation: And now we're ready to train our model. We'll save the weights at the end of training.
End of explanation
TRAIN_ALT = True
alt_model = None
if TRAIN_ALT:
from scipy.sparse import csr_matrix, hstack
from sklearn.linear_model import LogisticRegression
X_batters = csr_matrix(np_utils.to_categorical(np.array(data_sets["batter"]), NUM_BATTERS))
X_pitchers = csr_matrix(np_utils.to_categorical(np.array(data_sets["pitcher"]), NUM_PITCHERS))
X = hstack([X_batters, X_pitchers])
y = np.argmax(data_sets["outcome"], axis = 1)
alt_model = LogisticRegression(n_jobs = -1)
results = alt_model.fit(X, y)
Explanation: We'll also train a logistic regression model so that we have something to compare to <code>(batter|pitcher)2vec</code>.
End of explanation
from keras import backend
get_batter_vec = backend.function([batter_idx], [batter_embed])
get_pitcher_vec = backend.function([pitcher_idx], [pitcher_embed])
# Retrieve distributed representation of players.
batter_vecs = get_batter_vec([np.array(range(NUM_BATTERS)).reshape((NUM_BATTERS, 1))])[0]
pitcher_vecs = get_pitcher_vec([np.array(range(NUM_PITCHERS)).reshape((NUM_PITCHERS, 1))])[0]
player_vecs = {"batter": batter_vecs, "pitcher": pitcher_vecs}
Explanation: Qualitative Analysis of Player Vectors
Having trained the model, let's go ahead and fetch the distributed representations for all players. To do so, we need to define some functions that return a vector when provided with a player's integer index.
End of explanation
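An equivalent way to extract the same matrices, assuming the batter_idx/batter_embed tensors created when the model was built are still in scope, is to wrap them in small read-only models and call predict; this sketch and the backend functions above should agree.
batter_vec_model = Model(inputs=[batter_idx], outputs=[batter_embed])
pitcher_vec_model = Model(inputs=[pitcher_idx], outputs=[pitcher_embed])
alt_batter_vecs = batter_vec_model.predict(np.arange(NUM_BATTERS, dtype="int32").reshape(-1, 1))
alt_pitcher_vecs = pitcher_vec_model.predict(np.arange(NUM_PITCHERS, dtype="int32").reshape(-1, 1))
assert np.allclose(alt_batter_vecs, batter_vecs) and np.allclose(alt_pitcher_vecs, pitcher_vecs)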
# Retrieve player data.
player_data = {}
for data_file in data_files:
if ".ROS" in data_file:
f = open(join(data_directory, data_file))
for line in f:
parts = line.strip().split(",")
player_id = parts[0]
last_name = parts[1]
first_name = parts[2]
name = first_name + " " + last_name
batting_hand = parts[3]
throwing_hand = parts[4]
position = parts[6]
player_data[player_id] = {"name": name, "batting_hand": batting_hand,
"throwing_hand": throwing_hand, "position": position}
Explanation: Alright, let's find out if these representations are revealing anything interesting. First, let's collect some information about the players.
End of explanation
from mpl_toolkits.mplot3d import Axes3D
from sklearn.manifold import TSNE
NUM_PLAYERS = {"batter": NUM_BATTERS, "pitcher": NUM_PITCHERS}
def run_tsne(player_type):
    """
    Run t-SNE on the player vectors.
    :param player_type:
    :return:
    """
params = {"batter": {"perplexity": 20, "learning_rate": 200, "init": "pca"},
"pitcher": {"perplexity": 20, "learning_rate": 200, "init": "random"}}
tsne = TSNE(n_components = 3, **params[player_type])
manifold_3d = tsne.fit_transform(player_vecs[player_type])
fig = plt.figure()
ax = fig.add_subplot(111, projection = "3d")
ax.scatter(manifold_3d[:, 0], manifold_3d[:, 1], manifold_3d[:, 2], color = "gray")
plt.show()
params = {"batter": {"perplexity": 20, "learning_rate": 550, "init": "pca"},
"pitcher": {"perplexity": 20, "learning_rate": 200, "init": "random"}}
tsne = TSNE(n_components = 2, **params[player_type])
manifold_2d = tsne.fit_transform(player_vecs[player_type])
(x, y) = (manifold_2d[:, 0], manifold_2d[:, 1])
plt.scatter(x, y, color = "gray")
interesting_batters = {"Mike Trout", "Paul Goldschmidt", "Dee Gordon", "Ichiro Suzuki",
"Bryce Harper"}
interesting_pitchers = {"Clayton Kershaw", "Felix Hernandez", "Madison Bumgarner",
"Aroldis Chapman", "Dellin Betances"}
interesting_players = {"batter": interesting_batters, "pitcher": interesting_pitchers}
for i in range(NUM_PLAYERS[player_type]):
player_id = categories[player_type][i]
player_name = player_data[player_id]["name"]
if player_name in interesting_players[player_type]:
plt.text(x[i], y[i], player_name, va = "top", family = "monospace")
plt.show()
return manifold_3d
tsne_batters = run_tsne("batter")
tsne_pitchers = run_tsne("pitcher")
Explanation: t-SNE
Next, we'll use the t-SNE algorithm to visualize the player vectors in two and three dimensions.
End of explanation
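One caveat worth noting: t-SNE depends on its random initialisation, so the maps above will move around between runs. If reproducibility matters, the seed can be fixed, for example:
from sklearn.manifold import TSNE
tsne_fixed = TSNE(n_components=2, perplexity=20, learning_rate=200, random_state=FAV_NUM)
batter_manifold_fixed = tsne_fixed.fit_transform(player_vecs["batter"])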
import csv
import pandas as pd
playerID_to_retroID = {}
reader = csv.DictReader(open("Master.csv"))
for row in reader:
playerID = row["playerID"]
retroID = row["retroID"]
playerID_to_retroID[playerID] = retroID
# Get player salaries.
reader = csv.DictReader(open("Salaries.csv"))
salaries = {}
for row in reader:
if row["yearID"] == "2015":
playerID = row["playerID"]
retroID = playerID_to_retroID[playerID]
log_salary = np.log2(int(row["salary"]))
salaries[retroID] = log_salary
# Set up other interesting data for coloring.
max_hr_rate = max([home_runs.get(batter_id, 0) / at_bats[batter_id] for batter_id in at_bats if batter_id in categories["batter"]])
max_single_rate = max([singles.get(batter_id, 0) / at_bats[batter_id] for batter_id in at_bats if batter_id in categories["batter"]])
max_double_rate = max([doubles.get(batter_id, 0) / at_bats[batter_id] for batter_id in at_bats if batter_id in categories["batter"]])
max_salary = max([salaries.get(batter_id, 0) for batter_id in at_bats if batter_id in categories["batter"]])
batter_colors = {"player_id": [], "hand": [], "Home Runs": [], "Singles": [], "Doubles": [], "salary": []}
for i in range(NUM_BATTERS):
batter_id = categories["batter"][i]
batting_hand = player_data[batter_id]["batting_hand"]
batter_colors["player_id"].append(batter_id)
batter_colors["hand"].append(batting_hand)
# batter_colors["Home Runs"].append(str((home_runs.get(batter_id, 0) / at_bats[batter_id]) / max_hr_rate))
batter_colors["Home Runs"].append(str(home_runs.get(batter_id, 0) / at_bats[batter_id]))
batter_colors["Singles"].append(str(singles.get(batter_id, 0) / at_bats[batter_id]))
batter_colors["Doubles"].append(str((doubles.get(batter_id, 0) / at_bats[batter_id]) / max_double_rate))
batter_colors["salary"].append(str((salaries.get(batter_id, 0) / max_salary)))
df = pd.DataFrame(batter_colors)
from sklearn import decomposition
# Run PCA.
pca = decomposition.PCA()
pca.fit(batter_vecs)
print(pca.explained_variance_ratio_)
projected_batters = pca.transform(batter_vecs)
pca.fit(pitcher_vecs)
print(pca.explained_variance_ratio_)
projected_pitchers = pca.transform(pitcher_vecs)
for i in range(3):
df["pc{0}".format(i + 1)] = projected_batters[:, i]
cmap = sns.cubehelix_palette(as_cmap = True)
# fig = plt.figure()
# ax = fig.add_subplot(111, projection = "3d")
# ax.scatter(projected_batters[:, 0], projected_batters[:, 1], projected_batters[:, 2], color = df["Home Runs"], cmap = cmap)
# ax.set_title("Batters")
# plt.show()
cs = sns.color_palette("hls", 8)
batting_hand_color = {"Left": cs[0], "Right": cs[3], "Both": cs[5]}
legend_data = []
legend_names = []
for (hand, color) in batting_hand_color.items():
batter_hands = df[df["hand"] == hand[0]]
legend_data.append(plt.scatter(batter_hands["pc1"], batter_hands["pc2"], s = 50, color = color))
legend_names.append(hand)
plt.title("Batting Hand")
plt.legend(legend_data, legend_names)
plt.show()
for batter_color in ["Singles", "Home Runs", "Doubles", "salary"]:
(f, ax) = plt.subplots()
points = ax.scatter(df["pc1"], df["pc2"], c = df[batter_color], s = 50, cmap = cmap)
f.colorbar(points)
ax.set_title(batter_color)
plt.show()
Explanation: PCA
Let's also visualize the first few PCs of a principal component analysis (PCA) of the vectors and color them with various interesting properties.
End of explanation
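A quick follow-up on the explained-variance ratios printed above: plotting their cumulative sum shows how many of the nine latent dimensions do most of the work (note that pca was last fitted on the pitcher vectors here).
cumulative_variance = np.cumsum(pca.explained_variance_ratio_)
plt.plot(range(1, len(cumulative_variance) + 1), cumulative_variance, marker="o")
plt.xlabel("Number of principal components")
plt.ylabel("Cumulative explained variance")
plt.show()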
import csv
def write_viz_data(player_type, projected, fieldnames, projection):
    """
    Write the visualization coordinates of the players to a file.
    :param player_type:
    :param projected:
    :param fieldnames:
    :return:
    """
out = open("{0}s_{1}.csv".format(player_type, projection), "w")
output = csv.DictWriter(out, fieldnames = fieldnames)
output.writeheader()
for i in range(NUM_PLAYERS[player_type]):
player_id = categories[player_type][i]
row = {}
for col in fieldnames:
if col in player_data[player_id]:
row[col] = player_data[player_id][col]
row["2015_salary"] = 2 ** salaries.get(player_id, 0)
xyz = ["x", "y", "z"]
for j in range(3):
if projection == "pca":
row["PC{0}".format(j + 1)] = projected[i][j]
else:
row[xyz[j]] = projected[i][j]
row["player_id"] = player_id
if player_type == "batter":
row["hr_rate"] = home_runs.get(player_id, 0) / at_bats[player_id]
nothing = output.writerow(row)
out.close()
fieldnames = ["player_id", "name", "2015_salary", "position", "batting_hand", "throwing_hand", "hr_rate", "PC1", "PC2", "PC3"]
write_viz_data("batter", projected_batters, fieldnames, "pca")
write_viz_data("batter", tsne_batters, fieldnames[:-3] + ["x", "y", "z"], "tsne")
fieldnames = ["player_id", "name", "2015_salary", "throwing_hand", "PC1", "PC2", "PC3"]
write_viz_data("pitcher", projected_pitchers, fieldnames, "pca")
write_viz_data("pitcher", tsne_pitchers, fieldnames[:-3] + ["x", "y", "z"], "tsne")
Explanation: As you can see, there are some interesting patterns emerging from the representations. For example, right-handed hitters are clearly separated from left-handed and switch hitters. Similarly, frequent singles hitters are far from infrequent singles hitters. So, the model is clearly learning something, but whether or not what it's learning is non-trivial remains to be seen. Let's go ahead and save the t-SNE map and PC scores to CSV files so that we can play around with them elsewhere.
End of explanation
def write_distributed_representations(player_type, player_vecs):
    """
    Write the player vectors to a file.
    :param player_type:
    :param player_vecs:
    :return:
    """
out = open("{0}s_latent.csv".format(player_type), "w")
fieldnames = ["player_id", "name"] + ["latent_{0}".format(i + 1) for i in range(VEC_SIZE)]
output = csv.DictWriter(out, fieldnames = fieldnames)
output.writeheader()
for i in range(NUM_PLAYERS[player_type]):
player_id = categories[player_type][i]
row = {"player_id": player_id,
"name": player_data[player_id]["name"]}
for j in range(VEC_SIZE):
row["latent_{0}".format(j + 1)] = player_vecs[i][j]
nothing = output.writerow(row)
out.close()
write_distributed_representations("batter", batter_vecs)
write_distributed_representations("pitcher", pitcher_vecs)
Explanation: Let's also save the raw player vectors.
End of explanation
import pandas as pd
def get_nearest_neighbors(name, data, latent_vecs, player_names, k = 5):
    """
    Print the k nearest neighbors (in the latent space) of a given player.
    :param name:
    :param data:
    :param latent_vecs:
    :param player_names:
    :param k:
    :return:
    """
player_index = np.where(data["name"] == name)[0]
player_latent = latent_vecs[player_index]
print(player_latent[0])
# distances = list(np.linalg.norm(latent_vecs - player_latent, axis = 1))
distances = 1 - np.dot(latent_vecs, player_latent.T).flatten() / (np.linalg.norm(latent_vecs, axis = 1) * np.linalg.norm(player_latent))
distances_and_ids = list(zip(player_names, distances))
distances_and_ids.sort(key = lambda x: x[1])
return distances_and_ids[1:1 + k]
data_files = ["batters_latent.csv", "pitchers_latent.csv"]
player_df = {}
player_names = {}
player_ids = {}
latent_vecs = {}
for player_type in ["batter", "pitcher"]:
data_file = "{0}s_latent.csv".format(player_type)
player_df[player_type] = pd.read_csv(data_file)
player_ids[player_type] = list(player_df[player_type]["player_id"])
player_names[player_type] = list(player_df[player_type]["name"])
latent_vecs[player_type] = np.array(player_df[player_type].iloc[:, 2:])
for batter in ["Mike Trout", "Dee Gordon"]:
print(batter)
print(get_nearest_neighbors(batter, player_df["batter"], latent_vecs["batter"], player_names["batter"]))
print()
for pitcher in ["Clayton Kershaw", "Aroldis Chapman", "Jake Arrieta", "Felix Hernandez"]:
print(pitcher)
print(get_nearest_neighbors(pitcher, player_df["pitcher"], latent_vecs["pitcher"], player_names["pitcher"]))
print()
Explanation: ScatterPlot3D
To gain some additional intuition with the player representations, I recommend exploring them in my open source scatter plot visualization application, ScatterPlot3D. To run it:
Download the appropriate build.
Run with <code>java -jar ScatterPlot3D-<version>.jar</code> on Linux systems or by double-clicking the JAR on Windows.
Load the data.
Put 5, 6, and 7 for x, y, and z for "pitchers_tsne.csv" or 8, 9, and 10 for "batters_tsne.csv".
Click "Submit".
You can then search, zoom, and rotate the data, and click on individual points for more details. For example:
<img src="batters_tsne_all.png" width="600">
<img src="trout_goldschmidt.png" width="600">
Documentation can be downloaded here and a gallery of application screenshots can be found here.
Player Algebra
Nearest Neighbors
So, do these vectors contain any non-obvious information? Maybe comparing nearest neighbors will provide some insight.
End of explanation
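The cosine-distance arithmetic inside get_nearest_neighbors can also be delegated to scikit-learn, which is a little harder to get wrong; a sketch of an equivalent helper using the data frames and vectors loaded above:
from sklearn.metrics.pairwise import cosine_distances
def nearest_by_cosine(name, df, vecs, names, k=5):
    i = int(np.where(df["name"] == name)[0][0])
    dists = cosine_distances(vecs[i].reshape(1, -1), vecs).flatten()
    return [(names[j], dists[j]) for j in np.argsort(dists)[1:1 + k]]
print(nearest_by_cosine("Mike Trout", player_df["batter"], latent_vecs["batter"], player_names["batter"]))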
def get_opposite_hand(name, batting_hand, df, latent_vecs, player_names, k = 10):
    """
    Find the player's opposite batting hand doppelgänger.
    :param name:
    :param batting_hand:
    :param df:
    :param latent_vecs:
    :param player_names:
    :param k:
    :return:
    """
player_index = np.where(df["name"] == name)[0]
player_latent = latent_vecs[player_index]
player_latent + average_batters["R"]
opposite_hand = None
if batting_hand == "R":
opposite_hand = player_latent - average_batters["R"] + average_batters["L"]
else:
opposite_hand = player_latent - average_batters["L"] + average_batters["R"]
# distances = list(np.linalg.norm(latent_vecs - opposite_hand, axis = 1))
distances = 1 - np.dot(latent_vecs, opposite_hand.T).flatten() / (np.linalg.norm(latent_vecs, axis = 1) * np.linalg.norm(opposite_hand))
distances_and_ids = list(zip(player_names, distances))
distances_and_ids.sort(key = lambda x: x[1])
return distances_and_ids[:k]
# Generate average vectors for each batting hand.
average_batters = {"R": [], "L": [], "B": []}
for player_id in player_data:
hand = player_data[player_id]["batting_hand"]
batter_index = np.where(player_df["batter"]["player_id"] == player_id)[0]
batter_latent = latent_vecs["batter"][batter_index]
if len(batter_latent) > 0:
average_batters[hand] += [batter_latent]
for batting_hand in average_batters:
average_batters[batting_hand] = np.array(average_batters[batting_hand]).mean(axis = 0)
# Get opposite-handed doppelgängers.
print("Mike Trout")
print(get_opposite_hand("Mike Trout", "R", player_df["batter"], latent_vecs["batter"], player_names["batter"]))
print()
print("Dee Gordon")
print(get_opposite_hand("Dee Gordon", "L", player_df["batter"], latent_vecs["batter"], player_names["batter"]))
print()
Explanation: At a first glance, the nearest neighbors produced by the embedding do seem to support baseball intuition. Both Mike Trout and Paul Goldschmidt are known for their rare blend of speed and power. Like Dee Gordon, Ichiro Suzuki has a knack for being able to get on base.
Zack Greinke's presence among Clayton Kershaw's nearest neighbors is interesting as they are considered one of the best pitching duos of all time. The similarities between Craig Stammen and Kershaw are not obvious to my ignorant baseball eye, but we would expect a method like <code>(batter|pitcher)2vec</code> (if effective) to occasionally discover surprising neighbors or else it wouldn't be particularly useful.
Aroldis Chapman's nearest neighbors are fairly unsurprising with Craig Kimbrel and Andrew Miller both being elite relief pitchers.
When clustering players using common MLB stats (e.g., HRs, RBIs), Mike Trout's ten nearest neighbors for the 2015 season are: Bryce Harper, Julio Daniel Martinez, Andrew McCutchen, Justin Upton, Matt Carpenter, Joey Votto, Curtis Granderson, Kris Bryant, Chris Davis, and Brian Dozier (R code here). So there is some overlap between the two neighborhood methods, but, intriguingly, the nearest neighbor from each method is not found in the neighborhood of the other method. Similarly, Ichiro isn't among Dee Gordon's ten nearest neighbors when clustering on standard MLB stats.
Opposite-handed Doppelgängers
Another fun thing to try is analogies. As I mentioned at the beginning of this notebook, word embeddings often contain interesting analogy properties. Erik Erlandson, a colleague of mine at Red Hat, suggested I use average vectors for right-handed and left-handed batters to generate opposite-handed doppelgängers for different players. Let's see what that looks like.
End of explanation
matchup_counts = {}
outcome_counts = {}
for sample in data[test_year]:
batter = sample["batter"]
pitcher = sample["pitcher"]
matchup = "{0}_{1}".format(batter, pitcher)
    if batter in categories["batter"] and pitcher in categories["pitcher"] and matchup not in matchups:
        outcome = sample["outcome"]  # record the outcome for this previously unseen matchup
        matchup_counts[matchup] = matchup_counts.get(matchup, 0) + 1
        if matchup not in outcome_counts:
            outcome_counts[matchup] = {}
        outcome_counts[matchup][outcome] = outcome_counts[matchup].get(outcome, 0) + 1
matchup_counts = list(matchup_counts.items())
matchup_counts.sort(key = lambda x: -x[1])
Explanation: Bryce Harper's presence among Mike Trout's left-handed doppelgängers is particularly satisfying. As for Dee Gordon's right-handed doppelgängers, Tyler Saladino is known for "legging 'em out".
Modeling Previously Unseen At-Bat Matchups
Measuring how well the <code>(batter|pitcher)2vec</code> representations predict outcome distributions for unseen matchups is the ultimate test of whether the representations are capturing anything meaningful about players. To test the model, we'll look at matchups from the 2016 season that were not seen in the training set.
End of explanation
def get_past_outcome_counts(train_years, data, test_players, player_type):
    """
    Retrieve past outcome counts for a given player in the training set.
    :param train_years:
    :param data:
    :param test_players:
    :param player_type:
    """
past_outcome_counts = {}
for year in train_years:
for sample in data[year]:
player = sample[player_type]
if player in test_players:
outcome = sample["outcome"]
if player not in past_outcome_counts:
past_outcome_counts[player] = {}
past_outcome_counts[player][outcome] = past_outcome_counts[player].get(outcome, 0) + 1
return past_outcome_counts
cutoff = 0
total_above = sum(1 for matchup_count in matchup_counts if matchup_count[1] >= cutoff)
TOP_MATCHUPS = total_above
print("Total Matchups: {0}".format(TOP_MATCHUPS))
test_batters = {matchup[0].split("_")[0] for matchup in matchup_counts[:TOP_MATCHUPS]}
test_pitchers = {matchup[0].split("_")[1] for matchup in matchup_counts[:TOP_MATCHUPS]}
test_matchups = {matchup[0] for matchup in matchup_counts[:TOP_MATCHUPS]}
past_batter_outcome_counts = get_past_outcome_counts(train_years, data, test_batters, "batter")
past_pitcher_outcome_counts = get_past_outcome_counts(train_years, data, test_pitchers, "pitcher")
# Get total outcome counts from training data.
train_outcome_counts = {}
for year in train_years:
for sample in data[year]:
outcome = sample["outcome"]
train_outcome_counts[outcome] = train_outcome_counts.get(outcome, 0) + 1
# Convert total outcome counts into a probability distribution.
total_outcomes = sum(train_outcome_counts.values())
for outcome in train_outcome_counts:
train_outcome_counts[outcome] /= total_outcomes
past_batter_probs = {}
for batter in test_batters:
past_batter_outcome_total = sum(past_batter_outcome_counts[batter].values())
past_batter_probs[batter] = {}
for outcome in train_outcome_counts:
past_batter_probs[batter][outcome] = (past_batter_outcome_counts[batter].get(outcome, 0) + train_outcome_counts[outcome]) / (past_batter_outcome_total + 1)
past_pitcher_probs = {}
for pitcher in test_pitchers:
past_pitcher_outcome_total = sum(past_pitcher_outcome_counts[pitcher].values())
past_pitcher_probs[pitcher] = {}
for outcome in train_outcome_counts:
past_pitcher_probs[pitcher][outcome] = (past_pitcher_outcome_counts[pitcher].get(outcome, 0) + train_outcome_counts[outcome]) / (past_pitcher_outcome_total + 1)
Explanation: To determine the effectiveness of <code>(batter|pitcher)2vec</code>, we need something to first establish a baseline. We'll use a naïve prediction strategy to fill that role. For any given batter, we'll define their expected outcome distribution as:
$$p(o_i|b_j)=\frac{c_{i,j} + r_i}{\sum_{k=1}^{K} c_{j,k} + 1}$$
where $o_i$ denotes the outcome indexed by $i$, $c_{i,j}$ is the number of times the player indexed by $j$ had an at-bat resulting in the outcome indexed by $i$ in the training data, $r_i$ is the proportion of all at-bats that resulted in the outcome indexed by $i$ in the training data, and $K$ is the number of possible outcomes. Essentially, the procedure adds one at-bat to each batter, but distributes the mass of that single at-bat across all outcomes based on data from all batters. You can think of $r_i$ as a type of "prior" or smoothing factor. $p(o_i|p_k)$ will be similarly defined. Finally, we'll define the expected outcome distribution for a given batter/pitcher matchup as:
$$p(o_i|b_j,p_k) = \frac{p(o_i|b_j) + p(o_i|p_k)}{2}$$
End of explanation
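To make the smoothing concrete, here is the formula applied by hand to a toy batter with nine career at-bats (six strike outs, three singles) in a league where 20% of all at-bats end in a single.
toy_counts = {"K": 6, "S": 3}
league_rates = {"K": 0.8, "S": 0.2}
total_at_bats = sum(toy_counts.values())
smoothed = {o: (toy_counts.get(o, 0) + league_rates[o]) / (total_at_bats + 1) for o in league_rates}
print(smoothed)   # {'K': 0.68, 'S': 0.32} -- the extra "at-bat" is spread according to the league rates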
from statsmodels.stats.weightstats import ttest_ind
test_data_sets = {"batter": [], "pitcher": [], "outcome": []}
naive_losses = []
for sample in data[test_year]:
batter = sample["batter"]
pitcher = sample["pitcher"]
matchup = "{0}_{1}".format(batter, pitcher)
if matchup not in test_matchups:
continue
outcome = sample["outcome"]
past_batter_prob = past_batter_probs[batter][outcome]
past_pitcher_prob = past_pitcher_probs[pitcher][outcome]
naive_prob = (past_batter_prob + past_pitcher_prob) / 2
naive_loss = -np.log(naive_prob)
naive_losses.append(naive_loss)
for column in sample:
value = sample[column]
value_index = category_to_int[column][value]
test_data_sets[column].append([value_index])
avg_naive_loss = sum(naive_losses) / len(naive_losses)
print("Naïve Loss: {0:.4f}".format(avg_naive_loss))
print(len(naive_losses))
Explanation: We can then calculate the log loss of this naïve approach on unseen matchups.
End of explanation
for column in ["batter", "pitcher"]:
test_data_sets[column] = np.array(test_data_sets[column])
X_list = [test_data_sets["batter"], test_data_sets["pitcher"]]
y = test_data_sets["outcome"]
preds = model.predict(X_list)
# result = model.evaluate(X_list, np_utils.to_categorical(np.array(test_data_sets["outcome"]), NUM_OUTCOMES), verbose = 0)
# print(result)
net_losses = []
for i in range(preds.shape[0]):
net_loss = -np.log(preds[i][y[i]][0])
net_losses.append(net_loss)
avg_net_loss = sum(net_losses) / len(net_losses)
print("(batter|pitcher)2vec: {0:.4f}".format(avg_net_loss))
print(len(net_losses))
print("{0:.2f}% fewer bits on average.".format(100 * (1 - avg_net_loss / avg_naive_loss)))
print(ttest_ind(net_losses, naive_losses, alternative = "smaller"))
Explanation: And we can now see how <code>(batter|pitcher)2vec</code> compares.
End of explanation
if TRAIN_ALT:
X_batters = csr_matrix(np_utils.to_categorical(np.array(test_data_sets["batter"]), NUM_BATTERS))
X_pitchers = csr_matrix(np_utils.to_categorical(np.array(test_data_sets["pitcher"]), NUM_PITCHERS))
X = hstack([X_batters, X_pitchers])
preds = alt_model.predict_proba(X)
lr_losses = []
for i in range(preds.shape[0]):
lr_loss = -np.log(preds[i][y[i]][0])
lr_losses.append(lr_loss)
avg_lr_loss = sum(lr_losses) / len(lr_losses)
print("Logistic Regression: {0:.4f}".format(avg_lr_loss))
print(len(lr_losses))
print("{0:.2f}% fewer bits on average.".format(100 * (1 - avg_lr_loss / avg_naive_loss)))
print(ttest_ind(lr_losses, naive_losses, alternative = "smaller"))
Explanation: As you can see, <code>(batter|pitcher)2vec</code> is significantly better at modeling outcome distributions for unseen batter/pitcher matchups than the naïve baseline. But is an improvement of only 0.94% over the baseline particularly impressive? Let's see how our logistic regression model fares.
End of explanation |
13,844 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Combine constructiveness and toxicity annotations from different batches
Step1: Write constructiveness and toxicity combined CSV | Python Code:
dfs = []
for batch in batches:
filename = aggregated_data_path + 'batch' + str(batch) + '_constructiveness_and_toxicity_combined.csv'
dfs.append(pd.read_csv(filename))
combined_annotations_df = pd.concat(dfs)
# Sort the merged dataframe on constructiveness and toxicity
combined_annotations_df.shape
# Relevant columns
cols = (['article_id', 'article_author', 'article_published_date',
'article_title', 'article_url', 'article_text',
'comment_author', 'comment_counter', 'comment_text',
'agree_constructiveness_expt', 'agree_toxicity_expt', 'constructive', 'constructive_internal_gold',
'crowd_toxicity_level', 'crowd_toxicity_level_internal_gold',
'has_content', 'crowd_discard',
'constructive_characteristics', 'non_constructive_characteristics',
'toxicity_characteristics',
'crowd_comments_constructiveness_expt',
'crowd_comments_toxicity_expt',
'other_con_chars', 'other_noncon_chars', 'other_toxic_chars'
])
Explanation: Combine constructiveness and toxicity annotations from different batches
End of explanation
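The comment in the cell above mentions sorting the merged dataframe, but the sort itself is not shown; a hedged sketch of what it could look like, assuming the constructive and crowd_toxicity_level columns listed in cols are the ones of interest:
sorted_df = combined_annotations_df.sort_values(by=['constructive', 'crowd_toxicity_level'], ascending=False)
sorted_df.head()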
output_dir = '../../CF_output/annotated_data/'
combined_annotations_df.to_csv( output_dir + 'constructiveness_and_toxicity_annotations.csv', columns = cols, index = False)
Explanation: Write constructiveness and toxicity combined CSV
End of explanation |
13,845 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Matplotlib Exercise 3
Imports
Step2: Contour plots of 2d wavefunctions
The wavefunction of a 2d quantum well is
Step3: The contour, contourf, pcolor and pcolormesh functions of Matplotlib can be used for effective visualizations of 2d scalar fields. Use the Matplotlib documentation to learn how to use these functions along with the numpy.meshgrid function to visualize the above wavefunction
Step4: Next make a visualization using one of the pcolor functions | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
Explanation: Matplotlib Exercise 3
Imports
End of explanation
def well2d(x, y, nx, ny, L=1.0):
    """Compute the 2d quantum well wave function."""
sci=2/L*np.sin((nx*np.pi*x)/L)*np.sin((ny*np.pi*y)/L)
return sci
psi = well2d(np.linspace(0,1,10), np.linspace(0,1,10), 1, 1)
assert len(psi)==10
assert psi.shape==(10,)
Explanation: Contour plots of 2d wavefunctions
The wavefunction of a 2d quantum well is:
$$ \psi_{n_x,n_y}(x,y) = \frac{2}{L}
\sin{\left( \frac{n_x \pi x}{L} \right)}
\sin{\left( \frac{n_y \pi y}{L} \right)} $$
This is a scalar field and $n_x$ and $n_y$ are quantum numbers that measure the level of excitation in the x and y directions. $L$ is the size of the well.
Define a function well2d that computes this wavefunction for values of x and y that are NumPy arrays.
End of explanation
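As a quick numerical check of the definition (reusing well2d from above), the squared wavefunction should integrate to 1 over the well for any quantum numbers; a crude trapezoidal estimate:
xg = np.linspace(0, 1, 200)
yg = np.linspace(0, 1, 200)
xxg, yyg = np.meshgrid(xg, yg)
norm = np.trapz(np.trapz(well2d(xxg, yyg, 3, 2) ** 2, yg, axis=0), xg)
print(norm)   # should be close to 1.0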
x=np.linspace(0.0,1.0,100)
y=np.linspace(0.0,1.0,100)
n,m=np.meshgrid(x,y)#makes the grid
plt.contour(well2d(n,m,3,2))
plt.title('Wave Function Visualization')#makes it pretty
plt.xlabel('x')
plt.ylabel('y')
assert True # use this cell for grading the contour plot
Explanation: The contour, contourf, pcolor and pcolormesh functions of Matplotlib can be used for effective visualizations of 2d scalar fields. Use the Matplotlib documentation to learn how to use these functions along with the numpy.meshgrid function to visualize the above wavefunction:
Use $n_x=3$, $n_y=2$ and $L=1$.
Use the limits $[0,1]$ for the x and y axis.
Customize your plot to make it effective and beautiful.
Use a non-default colormap.
Add a colorbar to your visualization.
First make a plot using one of the contour functions:
End of explanation
plt.pcolor(well2d(n,m,3,2))
plt.title('Wave Function Visualization')
plt.xlabel('x')
plt.ylabel('y')
assert True # use this cell for grading the pcolor plot
Explanation: Next make a visualization using one of the pcolor functions:
End of explanation |
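The exercise also asks for a non-default colormap and a colorbar; a minimal variant using pcolormesh with the n, m grid and well2d from the cells above:
plt.pcolormesh(n, m, well2d(n, m, 3, 2), cmap='viridis')
plt.colorbar()
plt.title('Wave Function Visualization')
plt.xlabel('x')
plt.ylabel('y')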
13,846 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
PDE
The acoustic wave equation for the square slowness m and a source q is given in 3D by
Step1: Time and space discretization as a Taylor expansion.
The time discretization is define as a second order ( $ O (dt^2)) $) centered finite difference to get an explicit Euler scheme easy to solve by steping in time.
$ \frac{d^2 u(x,t)}{dt^2} \simeq \frac{u(x,t+dt) - 2 u(x,t) + u(x,t-dt)}{dt^2} + O(dt^2) $
And we define the space discretization also as a Taylor serie, with oder chosen by the user. This can either be a direct expansion of the second derivative bulding the laplacian, or a combination of first oder space derivative. The second option can be a better choice in case you would want to extand the method to more complex wave equations involving first order derivatives in chain only.
$ \frac{d^2 u(x,t)}{dt^2} \simeq \frac{1}{dx^2} \sum_k \alpha_k (u(x+k dx,t)+u(x-k dx,t)) + O(dx^k) $
Step2: Solve forward in time
The wave equation with absorbing boundary conditions writes
$ \eta \frac{d u(x,t)}{dt} + \frac{d^2 u(x,t)}{dt^2} - \nabla^2 u(x,t) =q $
and the adjont wave equation
$ -\eta \frac{d u(x,t)}{dt} + \frac{d^2 u(x,t)}{dt^2} - \nabla^2 u(x,t) =q $
where $ \eta$ is a damping factor equal to zero inside the physical domain and decreasing inside the absorbing layer from the pysical domain to the border
Step3: Rewriting the discret PDE as part of an Inversion
Accuracy and rigourousness of the dicretization
The above axpression are good for modelling. However, if you want to include a wave equation solver into an Inversion workflow, a more rigourous study of the discretization must be done. We can rewrite a single time step as follows
$ A_3 u(x,t+dt) = A_1 u(x,t) + A_2 u(x,t-dt) +q(x,t)$
where $ A_1,A_2,A_3 $ are square, invertible matrices, and symetric without any boundary conditions. In more details we have
Step4: Define the discrete model
Step5: Create functions for the PDE
The Gradient/Born are here so that everything is at the correct place, it is described later
Step6: A Forward propagation example
Step7: Adjoint test
In ordr to guaranty we have the gradient we need to make sure that the solution of the adjoint wave equation is indeed the true adjoint. Tod os so one should check that
$ <Ax,y> - <x,A^Ty> = 0$
where $A$ is the wave_equation, $A^T$ is wave_equationA and $x,y$ are any random vectors in the range of each operator. This can however be expensive as this two vector would be of size $N * n_t$. To test our operator we will the relax test by
$ <P_r A P_s^T x,y> - <x,P_SA^TP_r^Ty> = 0$
where $P_r , P_s^T$ are the source and recevier projection operator mapping the source and receiver locations and times onto the full domain. This allow to have only a random source of size $n_t$ at a random postion.
Step8: Least square objective Gradient
We will consider here the least square objective, as this is the one in need of an adjoint. The test that will follow are however necessary for any objective and associated gradient in a optimization framework. The objective function can be written
$ min_m \Phi(m)
Step9: Adjoint test for the gradient
The adjoint of the FWI Gradient is the Born modelling operator, implementing a double propagation forward in time with a wavefield scaled by the model perturbation for the second propagation
$ J dm = - A^{-1}(\frac{d A^{-1}q}{dt^2}) dm $
Step10: Jacobian test
The last part is to check that the operators are consistent with the problem. There is then two properties to be satisfied
$ U(m + hdm) = U(m) + \mathcal{O} (h) \
U(m + h dm) = U(m) + h J[m]dm + \mathcal{O} (h^2) $
which are the linearization conditions for the objective. This is a bit slow to run here but here is the way to test it.
1 - Genrate data for the true model m
2 - Define a smooth initial model $m_0$ and comput the data $d_0$ for this model
3 - You now have $U(m_0)$
4 - Define $ dm = m-m_0$ and $ h = {1,.1,.01,.001,...}$
5 - For each $h$ compute $U(m_0 + h dm)$ by generating data for $m_0 + h dm$ and compute $(J[m_0 + h dm]^T\delta |d) $
6 - Plot in Loglog the two lines of equation above
Step11: Gradient test
The last part is to check that the operators are consistent with the problem. There is then two properties to be satisfied
$ \Phi(m + hdm) = \Phi(m) + \mathcal{O} (h) \
\Phi(m + h dm) = \Phi(m) + h (J[m]^T\delta |d)dm + \mathcal{O} (h^2) $
which are the linearization conditions for the objective. This is a bit slow to run here but here is the way to test it.
1 - Genrate data for the true model m
2 - Define a smooth initial model $m_0$ and comput the data $d_0$ for this model
3 - You now have $\Phi(m_0)$
4 - Define $ dm = m-m_0$ and $ h = {1,.1,.01,.001,...}$
5 - For each $h$ compute $\Phi(m_0 + h dm)$ by generating data for $m_0 + h dm$ and compute $(J[m_0 + h dm]^T\delta |d) $
6 - Plot in Loglog the two lines of equation above | Python Code:
p=Function('p')
m,s,h = symbols('m s h')
m=M(x,y,z)
q=Q(x,y,t)
d=D(x,y,t)
e=E(x,y)
Explanation: PDE
The acoustic wave equation for the square slowness m and a source q is given in 3D by :
\begin{cases}
&m \frac{d^2 u(x,t)}{dt^2} - \nabla^2 u(x,t) =q \
&u(.,0) = 0 \
&\frac{d u(x,t)}{dt}|_{t=0} = 0
\end{cases}
with zero initial conditions to guarantee uniqueness of the solution
End of explanation
dtt=as_finite_diff(p(x,y,z,t).diff(t,t), [t-s,t, t+s])
dt=as_finite_diff(p(x,y,t).diff(t), [t-s, t+s])
# Spacial finite differences can easily be extended to higher order by increasing the list of sampling point in the next expression.
# Be sure to keep this stencil symmetric and everything else in the notebook will follow.
dxx=as_finite_diff(p(x,y,z,t).diff(x,x), [x-h,x, x+h])
dyy=as_finite_diff(p(x,y,z,t).diff(y,y), [y-h,y, y+h])
dzz=as_finite_diff(p(x,y,z,t).diff(z,z), [z-h,z, z+h])
dtt,dxx,dyy,dt
Explanation: Time and space discretization as a Taylor expansion.
The time discretization is defined as a second order ($ O(dt^2) $) centered finite difference to get an explicit Euler scheme that is easy to solve by stepping in time.
$ \frac{d^2 u(x,t)}{dt^2} \simeq \frac{u(x,t+dt) - 2 u(x,t) + u(x,t-dt)}{dt^2} + O(dt^2) $
And we define the space discretization also as a Taylor series, with order chosen by the user. This can either be a direct expansion of the second derivative building the Laplacian, or a combination of first order space derivatives. The second option can be a better choice in case you would want to extend the method to more complex wave equations involving first order derivatives in chain only.
$ \frac{d^2 u(x,t)}{dt^2} \simeq \frac{1}{dx^2} \sum_k \alpha_k (u(x+k dx,t)+u(x-k dx,t)) + O(dx^k) $
End of explanation
# Forward wave equation
wave_equation = m*dtt- (dxx+dyy+dzz)
stencil = solve(wave_equation,p(x,y,z,t+s))[0]
ts=lambdify((p(x,y,t-s),p(x-h,y,t), p(x,y,t), p(x+h,y,t),p(x,y-h,t), p(x,y+h,t), q , m, s, h,e),stencil,"numpy")
eq=Eq(p(x,y,z,t+s),stencil)
eq
Explanation: Solve forward in time
The wave equation with absorbing boundary conditions writes
$ \eta \frac{d u(x,t)}{dt} + \frac{d^2 u(x,t)}{dt^2} - \nabla^2 u(x,t) =q $
and the adjoint wave equation
$ -\eta \frac{d u(x,t)}{dt} + \frac{d^2 u(x,t)}{dt^2} - \nabla^2 u(x,t) =q $
where $ \eta$ is a damping factor equal to zero inside the physical domain and decreasing inside the absorbing layer from the physical domain to the border
End of explanation
# Adjoint wave equation
wave_equationA = m*dtt- (dxx+dyy) - D(x,y,t) - e*dt
stencilA = solve(wave_equationA,p(x,y,t-s))[0]
tsA=lambdify((p(x,y,t+s),p(x-h,y,t), p(x,y,t), p(x+h,y,t),p(x,y-h,t), p(x,y+h,t), d , m, s, h,e),stencilA,"numpy")
stencilA
Explanation: Rewriting the discrete PDE as part of an Inversion
Accuracy and rigorousness of the discretization
The above expressions are good for modelling. However, if you want to include a wave equation solver into an inversion workflow, a more rigorous study of the discretization must be done. We can rewrite a single time step as follows
$ A_3 u(x,t+dt) = A_1 u(x,t) + A_2 u(x,t-dt) +q(x,t)$
where $ A_1,A_2,A_3 $ are square, invertible matrices, and symmetric without any boundary conditions. In more detail we have:
\begin{align}
& A_1 = \frac{2}{dt^2 m} + \Delta \
& A_2 = \frac{-1}{dt^2 m} \
& A_3 = \frac{1}{dt^2 m}
\end{align}
We can then write the action of the adjoint wave equation operator. The adjoint wave equation is defined by
\begin{cases}
&m \frac{d^2 v(x,t)}{dt^2} - \nabla^2 v(x,t) = \delta d \
&v(.,T) = 0 \
&\frac{d v(x,t)}{dt}|_{t=T} = 0
\end{cases}
but by choosing to discretize first we will not discretize this equation. Instead we will take the adjoint of the forward wave equation operator and, by testing that the operator is the true adjoint, we will guarantee that we solve the adjoint wave equation. We then have the single time step for the adjoint wavefield going backward in time in order to keep an explicit Euler scheme
$ A_2^T v(x,t-dt) = A_1^T v(x,t) + A_3^T v(x,t+dt) + \delta d(x,t)$
and as $A_2$ and $A_3$ are diagonal matrices there is no issue in inverting them. We can also see that choosing an asymmetric stencil for the spatial derivative may lead to errors, as the Laplacian would no longer be self-adjoint, and the actual adjoint finite difference scheme should be implemented.
End of explanation
import matplotlib.pyplot as plt
from matplotlib import animation
hstep=25 #space increment d = minv/(10*f0);
tstep=2 #time increment dt < .5 * hstep /maxv;
tmin=0.0 #initial time
tmax=300 #simulate until
xmin=-875.0 #left bound
xmax=875.0 #right bound...assume packet never reaches boundary
ymin=-875.0 #left bound
ymax=875.0 #right bound...assume packet never reaches boundary
f0=.010
t0=1/.010
nbpml=10
nx = int((xmax-xmin)/hstep) + 1 #number of points on x grid
ny = int((ymax-ymin)/hstep) + 1 #number of points on x grid
nt = int((tmax-tmin)/tstep) + 2 #number of points on t grid
xsrc=-400
ysrc=0.0
xrec = nbpml+4
#set source as Ricker wavelet for f0
def source(x,y,t):
r = (np.pi*f0*(t-t0))
val = (1-2.*r**2)*np.exp(-r**2)
if abs(x-xsrc)<hstep/2 and abs(y-ysrc)<hstep/2:
return val
else:
return 0.0
def dampx(x):
dampcoeff=1.5*np.log(1.0/0.001)/(5.0*hstep);
if x<nbpml:
return dampcoeff*((nbpml-x)/nbpml)**2
elif x>nx-nbpml-1:
return dampcoeff*((x-nx+nbpml)/nbpml)**2
else:
return 0.0
def dampy(y):
dampcoeff=1.5*np.log(1.0/0.001)/(5.0*hstep);
if y<nbpml:
return dampcoeff*((nbpml-y)/nbpml)**2
elif y>ny-nbpml-1:
return dampcoeff*((y-ny+nbpml)/nbpml)**2
else:
return 0.0
# Velocity models
def smooth10(vel,nx,ny):
out=np.ones((nx,ny))
out[:,:]=vel[:,:]
for a in range(5,nx-6):
out[a,:]=np.sum(vel[a-5:a+5,:], axis=0) /10
return out
# True velocity
vel=np.ones((nx,ny)) + 2.0
vel[floor(nx/2):nx,:]=4.5
mt=vel**-2
# Smooth velocity
v0=smooth10(vel,nx,ny)
m0=v0**-2
dm=m0-mt
Explanation: Define the discrete model
End of explanation
def Forward(nt,nx,ny,m):
u=np.zeros((nt,nx,ny))
rec=np.zeros((nt,ny-2))
for ti in range(0,nt):
for a in range(1,nx-1):
for b in range(1,ny-1):
src = source(xmin+a*hstep,ymin+b*hstep,tstep*ti)
damp=dampx(a)+dampy(b)
if ti==0:
u[ti,a,b]=ts(0,0,0,0,0,0,src,m[a,b],tstep,hstep,damp)
elif ti==1:
u[ti,a,b]=ts(0,u[ti-1,a-1,b],u[ti-1,a,b],u[ti-1,a+1,b],u[ti-1,a,b-1],u[ti-1,a,b+1],src,m[a,b],tstep,hstep,damp)
else:
u[ti,a,b]=ts(u[ti-2,a,b],u[ti-1,a-1,b],u[ti-1,a,b],u[ti-1,a+1,b],u[ti-1,a,b-1],u[ti-1,a,b+1],src,m[a,b],tstep,hstep,damp)
if a==xrec :
rec[ti,b-1]=u[ti,a,b]
return rec,u
def Adjoint(nt,nx,ny,m,rec):
v=np.zeros((nt,nx,ny))
srca=np.zeros((nt))
for ti in range(nt-1, -1, -1):
for a in range(1,nx-1):
for b in range(1,ny-1):
if a==xrec:
resid=rec[ti,b-1]
else:
resid=0
damp=dampx(a)+dampy(b)
if ti==nt-1:
v[ti,a,b]=tsA(0,0,0,0,0,0,resid,m[a,b],tstep,hstep,damp)
elif ti==nt-2:
v[ti,a,b]=tsA(0,v[ti+1,a-1,b],v[ti+1,a,b],v[ti+1,a+1,b],v[ti+1,a,b-1],v[ti+1,a,b+1],resid,m[a,b],tstep,hstep,damp)
else:
v[ti,a,b]=tsA(v[ti+2,a,b],v[ti+1,a-1,b],v[ti+1,a,b],v[ti+1,a+1,b],v[ti+1,a,b-1],v[ti+1,a,b+1],resid,m[a,b],tstep,hstep,damp)
if abs(xmin+a*hstep-xsrc)<hstep/2 and abs(ymin+b*hstep-ysrc)<hstep/2:
srca[ti]=v[ti,a,b]
return srca,v
def Gradient(nt,nx,ny,m,rec,u):
v1=np.zeros((nx,ny))
v2=np.zeros((nx,ny))
v3=np.zeros((nx,ny))
grad=np.zeros((nx,ny))
for ti in range(nt-1,-1,-1):
for a in range(1,nx-1):
for b in range(1,ny-1):
if a==xrec:
resid=rec[ti,b-1]
else:
resid=0
damp=dampx(a)+dampy(b)
v3[a,b]=tsA(v1[a,b],v2[a-1,b],v2[a,b],v2[a+1,b],v2[a,b-1],v2[a,b+1],resid,m[a,b],tstep,hstep,damp)
grad[a,b]=grad[a,b]-(v3[a,b]-2*v2[a,b]+v1[a,b])*(u[ti,a,b])
v1,v2,v3=v2,v3,v1
return tstep**-2*grad
def Born(nt,nx,ny,m,dm):
u1=np.zeros((nx,ny))
U1=np.zeros((nx,ny))
u2=np.zeros((nx,ny))
U2=np.zeros((nx,ny))
u3=np.zeros((nx,ny))
U3=np.zeros((nx,ny))
rec=np.zeros((nt,ny-2))
src2=0
for ti in range(0,nt):
for a in range(1,nx-1):
for b in range(1,ny-1):
damp=dampx(a)+dampy(b)
src = source(xmin+a*hstep,ymin+b*hstep,tstep*ti)
u3[a,b]=ts(u1[a,b],u2[a-1,b],u2[a,b],u2[a+1,b],u2[a,b-1],u2[a,b+1],src,m[a,b],tstep,hstep,damp)
src2 = -tstep**-2*(u3[a,b]-2*u2[a,b]+u1[a,b])*dm[a,b]
U3[a,b]=ts(U1[a,b],U2[a-1,b],U2[a,b],U2[a+1,b],U2[a,b-1],U2[a,b+1],src2,m[a,b],tstep,hstep,damp)
if a==xrec :
rec[ti,b-1]=U3[a,b]
u1,u2,u3=u2,u3,u1
U1,U2,U3=U2,U3,U1
return rec
Explanation: Create functions for the PDE
The Gradient/Born are here so that everything is at the correct place, it is described later
End of explanation
(rect,ut)=Forward(nt,nx,ny,mt)
fig = plt.figure()
plts = [] # get ready to populate this list the Line artists to be plotted
plt.hold("off")
for i in range(nt):
r = plt.imshow(ut[i,:,:]) # this is how you'd plot a single line...
plts.append( [r] )
ani = animation.ArtistAnimation(fig, plts, interval=50, repeat = False) # run the animation
plt.show()
fig2 = plt.figure()
plt.hold("off")
shotrec = plt.imshow(rect) # this is how you'd plot a single line...
#plt.show()
Explanation: A Forward propagation example
End of explanation
(rec0,u0)=Forward(nt,nx,ny,m0)
(srca,v)=Adjoint(nt,nx,ny,m0,rec0)
plts = [] # get ready to populate this list the Line artists to be plotted
plt.hold("off")
for i in range(0,nt):
r = plt.imshow(v[i,:,:],vmin=-100, vmax=100) # this is how you'd plot a single line...
plts.append( [r] )
ani = animation.ArtistAnimation(fig, plts, interval=50, repeat = False) # run the animation
plt.show()
shotrec = plt.plot(srca) # this is how you'd plot a single line...
#plt.show()
# Actual adjoint test
term1=0
for ti in range(0,nt):
term1=term1+srca[ti]*source(xsrc,ysrc,(ti)*tstep)
term2=LA.norm(rec0)**2
term1,term2,term1-term2,term1/term2
Explanation: Adjoint test
In order to guarantee that we have the gradient, we need to make sure that the solution of the adjoint wave equation is indeed the true adjoint. To do so one should check that
$ <Ax,y> - <x,A^Ty> = 0$
where $A$ is the wave_equation, $A^T$ is wave_equationA and $x,y$ are any random vectors in the range of each operator. This can however be expensive as these two vectors would be of size $N * n_t$. To test our operator we will relax the test to
$ <P_r A P_s^T x,y> - <x,P_s A^T P_r^T y> = 0$
where $P_r , P_s^T$ are the source and receiver projection operators mapping the source and receiver locations and times onto the full domain. This allows us to use only a random source of size $n_t$ at a random position.
End of explanation
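For reference, the same dot-product test on a small dense matrix shows what "passing" looks like: the two inner products agree to round-off, so the difference printed for the wave-equation operators should likewise be small relative to the terms themselves.
import numpy as np
A_test = np.random.randn(50, 40)
x_test = np.random.randn(40)
y_test = np.random.randn(50)
print(np.dot(A_test.dot(x_test), y_test) - np.dot(x_test, A_test.T.dot(y_test)))   # ~1e-14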
# Misfit
F0=.5*LA.norm(rec0-rect)**2
F0
Im1=Gradient(nt,nx,ny,m0,rec0-rect,u0)
shotrec = plt.imshow(rect,vmin=-1,vmax=1) # this is how you'd plot a single line...
shotrec = plt.imshow(rec0,vmin=-1,vmax=1) # this is how you'd plot a single line...
shotrec = plt.imshow(rec0-rect,vmin=-.1,vmax=.1) # this is how you'd plot a single line...
shotrec = plt.imshow(Im1,vmin=-1,vmax=1) # this is how you'd plot a single line...
#plt.show()
Explanation: Least square objective Gradient
We will consider here the least square objective, as this is the one in need of an adjoint. The tests that follow are however necessary for any objective and associated gradient in an optimization framework. The objective function can be written
$ min_m \Phi(m) := \frac{1}{2} \| P_r A^{-1}(m) q - d\|_2^2$
And its gradient becomes
$ \nabla_m \Phi(m) = - (\frac{dA(m)u}{dm})^T v $
where v is the solution of the adjoint wave equation. For the simple acoustic case the gradient can be rewritten as
$ \nabla_m \Phi(m) = - \sum_{t=1}^{nt} \frac{d^2u(t)}{dt^2} v(t) $
End of explanation
Im2=Gradient(nt,nx,ny,m0,rec0,u0)
du1=Born(nt,nx,ny,m0,dm)
term11=np.dot((rec0).reshape(-1),du1.reshape(-1))
term21=np.dot(Im2.reshape(-1),dm.reshape(-1))
term11,term21,term11-term21,term11/term21
Explanation: Adjoint test for the gradient
The adjoint of the FWI Gradient is the Born modelling operator, implementing a double propagation forward in time with a wavefield scaled by the model perturbation for the second propagation
$ J dm = - A^{-1}(\frac{d A^{-1}q}{dt^2}) dm $
End of explanation
H=[1,0.1,0.01,.001,0.0001,0.00001,0.000001]
(D1,u0)=Forward(nt,nx,ny,m0)
dub=Born(nt,nx,ny,m0,dm)
error1=np.zeros((7))
error2=np.zeros((7))
for i in range(0,7):
mloc=m0+H[i]*dm
(d,u)=Forward(nt,nx,ny,mloc)
error1[i] = LA.norm(d - D1,ord=1)
error2[i] = LA.norm(d - D1 - H[i]*dub,ord=1)
hh=np.zeros((7))
for i in range(0,7):
hh[i]=H[i]*H[i]
shotrec = plt.loglog(H,error1,H,H) # this is how you'd plot a single line...
plt.show()
shotrec = plt.loglog(H,error2,H,hh) # this is howyou'd plot a single line...
plt.show()
Explanation: Jacobian test
The last part is to check that the operators are consistent with the problem. There are then two properties to be satisfied
$ U(m + hdm) = U(m) + \mathcal{O} (h) \
U(m + h dm) = U(m) + h J[m]dm + \mathcal{O} (h^2) $
which are the linearization conditions for the objective. This is a bit slow to run here but here is the way to test it.
1 - Generate data for the true model m
2 - Define a smooth initial model $m_0$ and compute the data $d_0$ for this model
3 - You now have $U(m_0)$
4 - Define $ dm = m-m_0$ and $ h = {1,.1,.01,.001,...}$
5 - For each $h$ compute $U(m_0 + h dm)$ by generating data for $m_0 + h dm$ and compute $(J[m_0 + h dm]^T\delta |d) $
6 - Plot in Loglog the two lines of equation above
End of explanation
(DT,uT)=Forward(nt,nx,ny,mt)
(D1,u0)=Forward(nt,nx,ny,m0)
F0=.5*LA.norm(D1-DT)**2
g=Gradient(nt,nx,ny,m0,D1-DT,u0)
G=np.dot(g.reshape(-1),dm.reshape(-1));
error21=np.zeros((7))
error22=np.zeros((7))
for i in range(0,7):
mloc=m0+H[i]*dm
(D,u)=Forward(nt,nx,ny,mloc)
error21[i] = .5*LA.norm(D-DT)**2 -F0
error22[i] = .5*LA.norm(D-DT)**2 -F0 - H[i]*G
shotrec = plt.loglog(H,error21,H,H) # this is how you'd plot a single line...
plt.show()
shotrec = plt.loglog(H,error22,H,hh) # this is how you'd plot a single line...
plt.show()
Explanation: Gradient test
The last part is to check that the operators are consistent with the problem. There are then two properties to be satisfied
$ \Phi(m + hdm) = \Phi(m) + \mathcal{O} (h) \
\Phi(m + h dm) = \Phi(m) + h (J[m]^T\delta |d)dm + \mathcal{O} (h^2) $
which are the linearization conditions for the objective. This is a bit slow to run here but here is the way to test it.
1 - Generate data for the true model m
2 - Define a smooth initial model $m_0$ and compute the data $d_0$ for this model
3 - You now have $\Phi(m_0)$
4 - Define $ dm = m-m_0$ and $ h = {1,.1,.01,.001,...}$
5 - For each $h$ compute $\Phi(m_0 + h dm)$ by generating data for $m_0 + h dm$ and compute $(J[m_0 + h dm]^T\delta |d) $
6 - Plot in Loglog the two lines of equation above
End of explanation |
13,847 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Python for fun and profit
Juan Luis Cano Rodríguez
Madrid, 2016-05-13 @ ETS Asset Management Factory
Outline
Introduction
Python for Data Science
Python for IT
General advice
Conclusions
Outline
Introduction
Python for Data Science
Interactive computation with Jupyter
Numerical analysis with NumPy, SciPy
Visualization with matplotlib and others
Data manipulation with pandas
Machine Learning with scikit-learn
Python for IT
Data gathering with Requests and Scrapy
Information extraction with lxml, BeautifulSoup and others
User interfaces with PyQt, xlwings and others
Other
Step2: It's highly extensible!
Some extensions https
Step3: NumPy is much more
Step4: There are many alternatives to matplotlib, each one with its use cases, design decisions, and tradeoffs. Here are some of them | Python Code:
from ipywidgets import interact, fixed
from sympy import init_printing, Symbol, Eq, factor
init_printing(use_latex=True)
x = Symbol('x')
def factorit(n):
return Eq(x**n-1, factor(x**n-1))
interact(factorit, n=(2,40))
# Import matplotlib (plotting), skimage (image processing) and interact (user interfaces)
# This enables their use in the Notebook.
%matplotlib inline
from matplotlib import pyplot as plt
from skimage import data
from skimage.feature import blob_doh
from skimage.color import rgb2gray
# Extract the first 500px square of the Hubble Deep Field.
image = data.hubble_deep_field()[0:500, 0:500]
image_gray = rgb2gray(image)
def plot_blobs(max_sigma=30, threshold=0.1, gray=False):
Plot the image and the blobs that have been found.
blobs = blob_doh(image_gray, max_sigma=max_sigma, threshold=threshold)
fig, ax = plt.subplots(figsize=(8,8))
ax.set_title('Galaxies in the Hubble Deep Field')
if gray:
ax.imshow(image_gray, interpolation='nearest', cmap='gray_r')
circle_color = 'red'
else:
ax.imshow(image, interpolation='nearest')
circle_color = 'yellow'
for blob in blobs:
y, x, r = blob
c = plt.Circle((x, y), r, color=circle_color, linewidth=2, fill=False)
ax.add_patch(c)
interact(plot_blobs, max_sigma=(10, 40, 2), threshold=(0.005, 0.02, 0.001))
Explanation: Python for fun and profit
Juan Luis Cano Rodríguez
Madrid, 2016-05-13 @ ETS Asset Management Factory
Outline
Introduction
Python for Data Science
Python for IT
General advice
Conclusions
Outline
Introduction
Python for Data Science
Interactive computation with Jupyter
Numerical analysis with NumPy, SciPy
Visualization with matplotlib and others
Data manipulation with pandas
Machine Learning with scikit-learn
Python for IT
Data gathering with Requests and Scrapy
Information extraction with lxml, BeautifulSoup and others
User interfaces with PyQt, xlwings and others
Other: memcached, SOA
General advice
Python packaging
The future of Python
Conclusions
>>> print(self)
<img src="static/pyconse.jpg" width="350px" style="float: right" />
Almost Aerospace Engineer
Quant Developer for BBVA at Indizen
Writer and furious tweeter at Pybonacci
Chair ~~and BDFL~~ of Python España non-profit
Co-creator and charismatic leader of AeroPython (*not the Lorena Barba course)
When time permits (rare) writes some open source Python code
Python for Data Science
<img src="static/scipy_eco.png" width="350px" style="float: right" />
Python is a dynamic, interpreted* language that is easy to learn
Very popular in science, research
Rich ecosystem of packages that interoperate
Multiple languages are used (FORTRAN, C/C++) and wrapped from Python for a convenient interface
Jupyter
Interactive computation environment in a browser
Traces its roots to IPython, created in 2001
Nowadays it's language-agnostic (40 languages)
Jupyter
Notebook
Exporting
Interactive Widgets
Slides
Extensions https://github.com/ipython-contrib/IPython-notebook-extensions
It's a notebook!
Code is computed in cells
These can contain text, code, images, videos...
All resulting plots can be integrated in the interface
We can export it to different formats using nbconvert or from the UI
It's interactive!
End of explanation
import numpy as np
my_list = list(range(0,100000))
res1 = %timeit -o sum(my_list)
array = np.arange(0, 100000)
res2 = %timeit -o np.sum(array)
res1.best / res2.best
Explanation: It's highly extensible!
Some extensions https://github.com/ipython-contrib/IPython-notebook-extensions
A thorough guide http://mindtrove.info/4-ways-to-extend-jupyter-notebook/
NumPy
<img src="static/numpy.png" width="350px" style="float: right" />
N-dimensional data structure.
Homogeneously typed.
Efficient!
A universal function (or ufunc for short) is a function that operates on ndarrays. It is a “vectorized function".
End of explanation
# This line integrates matplotlib with the notebook
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
x = np.linspace(-2, 10)
plt.plot(x, np.sin(x) / x)
def g(x, y):
return np.cos(x) + np.sin(y) ** 2
x = np.linspace(-2, 3, 1000)
y = np.linspace(-2, 3, 1000)
xx, yy = np.meshgrid(x, y)
zz = g(xx, yy)
fig = plt.figure(figsize=(6, 6))
cs = plt.contourf(xx, yy, zz, np.linspace(-1, 2, 13), cmap=plt.cm.viridis)
plt.colorbar()
cs = plt.contour(xx, yy, zz, np.linspace(-1, 2, 13), colors='k')
plt.clabel(cs)
plt.xlabel("x")
plt.ylabel("y")
plt.title(r"Function $g(x, y) = \cos{x} + \sin^2{y}$")
plt.close()
fig
Explanation: NumPy is much more:
<img src="static/broadcast_visual.png" width="350px" style="float: right" />
Advanced manipulation tricks: broadcasting, fancy indexing
Functions: generalized linear algebra, Fast Fourier transforms
Use case:
In-memory, fits-in-my-computer, homogeneous data
Easily vectorized operations
SciPy
<img src="static/scipy2016.png" width="350px" style="float: right" />
General purpose scientific computing library
scipy.linalg: ATLAS LAPACK and BLAS libraries
scipy.stats: distributions, statistical functions...
scipy.integrate: integration of functions and ODEs
scipy.optimize: local and global optimization, fitting, root finding...
scipy.interpolate: interpolation, splines...
scipy.fftpack: Fourier transforms
scipy.signal: Signal processing
scipy.special: Special functions
scipy.io: Reading/Writing scientific formats
matplotlib
<img src="static/matplotlib.png" width="350px" style="float: right" />
The father of all Python visualization packages
Modeled after MATLAB API
Powerful and versatile, but often complex and not so well documented
Undergoing a deep default style change
End of explanation
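A two-line taste of the broadcasting and fancy indexing mentioned above:
import numpy as np
a = np.arange(12).reshape(3, 4)
print(a + np.array([[10], [20], [30]]))   # the column vector is broadcast across all four columns
print(a[a % 2 == 0])                      # boolean (fancy) indexing keeps only the even entries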
import numpy as np
import pandas as pd
dates = pd.date_range('20130101', periods=6)
df = pd.DataFrame(np.random.randn(6,4), index=dates, columns=list('ABCD'))
df
Explanation: There are many alternatives to matplotlib, each one with its use cases, design decisions, and tradeoffs. Here are some of them:
<img src="static/encrucijada.jpg" width="350px" style="float: right" />
seaborn: High level layer on top of matplotlib, easier API and beautiful defaults for common visualizations
ggplot: For those who prefer R-like plotting (API and appearance)
plotly: 2D and 3D interactive plots in the browser as a web service
Bokeh: targets modern web browsers and big data
pyqtgraph: Qt embedding, realtime plots
Others: pygal, mpld3, bqplot...
Use the best tool for the job! And in case of doubt, just get matplotlib :)
pandas
<img src="static/pydata_cover.jpg" width="350px" style="float: right" />
High-performance, easy-to-use data structures and data analysis
Inspired by R DataFrames
Not just NumPy on steroids
Input/Output functions for a variety of formats
SQL and query-like operations (a short example follows below)
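For instance, reusing the df created above (an illustrative example added here):
df[df['A'] > 0]               # boolean filtering, similar to a SQL WHERE clause
df.sort_values('B').head(3)   # order by a column and take the first rows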
End of explanation |
13,848 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a id="coastline_classifier_top"></a>
Coastline Classifier
This coastal boundary algorithm is used to classify a given pixel as either coastline or not coastline using a simple binary format, as in the table below.
<br>
$\begin{array}{|c|c|}
\hline
1 & \text{Coastline} \\ \hline
0 & \text{Not Coastline} \\ \hline
\end{array}$
<br>
The algorithm makes a classification by examining the surrounding pixels and making a determination based on how many pixels around it are water.
<br>
<br>
If the count of water pixels surrounding a pixel exceeds 5, then it is likely not coastline.
If the count of water pixels surrounding a pixel is less than 2, then it is likely not coastline either.
<br>
$$
Classification(pixel) = \begin{cases}
1 & 2 \le \mathrm{count\_water\_surrounding}(pixel) \le 5 \\
0 & \text{otherwise}
\end{cases}
$$
<br>
Counting by applying a convolutional kernel
A convolution applies a kernel to a point and its surrounding pixels, then maps the result onto a new grid.
In the case of coastal boundary classification, a convolution with the following kernel is applied to a grid of water / not-water pixels.
<br>
$$
Kernel =
\begin{bmatrix}
1 & 1 & 1 \\
1 & 0 & 1 \\
1 & 1 & 1 \\
\end{bmatrix}
$$
<br>
There exist more complicated differential kernels that would also work (see the Sobel operator).
The one used in this notebook operates on binary variables and is easier to work with and to debug.
<hr>
Index
Import Dependencies and Connect to the Data Cube
Choose Platform and Product
Define the Extents of the Analysis
Load Data from the Data Cube and Create a Composite
Obtain Water Classifications and Coastal Change
<span id="coastline_classifier_import">Import Dependencies and Connect to the Data Cube ▴</span>
Step1: <span id="coastline_classifier_plat_prod">Choose Platform and Product ▴</span>
Step2: <span id="coastline_classifier_define_extents">Define the Extents of the Analysis ▴</span>
West Africa is subject to considerable coastal erosion in some areas. The links listed below are references regarding coastal erosion in West Africa and coastal erosion in general.
World Bank WACA program brochure (2015) - link
USAID - Adapting to Coastal Climate Change (2009) - link
Step3: Visualize the selected area
Step4: <span id="coastline_classifier_retrieve_data">Load Data from the Data Cube and Create a Composite ▴</span>
Step5: Obtain the clean mask
Step6: Create a composite
Step7: Visualize Composited imagery
Step8: <span id="coastline_classifier_water_cls_and_coastal_change">Obtain Water Classifications and Coastal Change ▴</span>
Step9: <br> | Python Code:
import scipy.ndimage.filters as conv
import numpy as np
def _coastline_classification(dataset, water_band='wofs'):
kern = np.array([[1, 1, 1], [1, 0.001, 1], [1, 1, 1]])
convolved = conv.convolve(dataset[water_band], kern, mode='constant') // 1
ds = dataset.where(convolved > 0)
ds = ds.where(convolved < 6)
ds.wofs.values[~np.isnan(ds.wofs.values)] = 1
ds.wofs.values[np.isnan(ds.wofs.values)] = 0
return ds.rename({"wofs": "coastline"})
from datacube.utils.aws import configure_s3_access
configure_s3_access(requester_pays=True)
import datacube
dc = datacube.Datacube(app = "Coastline classification")
Explanation: <a id="coastline_classifier_top"></a>
Coastline Classifier
This coastal boundary algorithm is used to classify a given pixel as either coastline or not coastline using a simple binary format, as in the table below.
<br>
$\begin{array}{|c|c|}
\hline
1 & \text{Coastline} \\ \hline
0 & \text{Not Coastline} \\ \hline
\end{array}$
<br>
The algorithm makes a classification by examining the surrounding pixels and making a determination based on how many pixels around it are water.
<br>
<br>
If the count of water pixels surrounding a pixel exceeds 5, then it is likely not coastline.
If the count of water pixels surrounding a pixel is less than 2, then it is likely not coastline either.
<br>
$$
Classification(pixel) = \begin{cases}
1 & 2 \le \mathrm{count\_water\_surrounding}(pixel) \le 5 \\
0 & \text{otherwise}
\end{cases}
$$
<br>
Counting by applying a convolutional kernel
A convolution applies a kernel to a point and its surrounding pixels, then maps the result onto a new grid.
In the case of coastal boundary classification, a convolution with the following kernel is applied to a grid of water / not-water pixels.
<br>
$$
Kernel =
\begin{bmatrix}
1 & 1 & 1 \\
1 & 0 & 1 \\
1 & 1 & 1 \\
\end{bmatrix}
$$
<br>
There exist more complicated differential kernels that would also work (see the Sobel operator).
The one used in this notebook operates on binary variables and is easier to work with and to debug.
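To make the neighbour-counting step concrete, here is a tiny standalone sketch (added for illustration; not part of the notebook's own workflow) that applies the kernel to a toy water/not-water grid and keeps the same 1–5 water-neighbour range used by _coastline_classification above:
import numpy as np
import scipy.ndimage as ndi

water = np.array([[0, 0, 0, 0],
                  [0, 0, 1, 1],
                  [1, 1, 1, 1],
                  [1, 1, 1, 1]])                      # 1 = water, 0 = land
kern = np.array([[1, 1, 1], [1, 0, 1], [1, 1, 1]])
counts = ndi.convolve(water, kern, mode='constant')   # water neighbours of each pixel
coastline = ((counts >= 1) & (counts <= 5)).astype(int)
print(counts)
print(coastline)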
<hr>
Index
Import Dependencies and Connect to the Data Cube
Choose Platform and Product
Define the Extents of the Analysis
Load Data from the Data Cube and Create a Composite
Obtain Water Classifications and Coastal Change
<span id="coastline_classifier_import">Import Dependencies and Connect to the Data Cube ▴</span>
End of explanation
platform = 'LANDSAT_8'
product = 'ls8_usgs_sr_scene'
collection = 'c1'
level = 'l2'
Explanation: <span id="coastline_classifier_plat_prod">Choose Platform and Product ▴</span>
End of explanation
# Ghana
lon = (0.0520, 0.3458)
lat = (5.6581, 5.8113)
Explanation: <span id="coastline_classifier_define_extents">Define the Extents of the Analysis ▴</span>
West Africa is subject to considerable coastal erosion in some areas. The links listed below are references regarding coastal erosion in West Africa and coastal erosion in general.
World Bank WACA program brochure (2015) - link
USAID - Adapting to Coastal Climate Change (2009) - link
End of explanation
from utils.data_cube_utilities.dc_display_map import display_map
display_map(lat, lon)
Explanation: Visualize the selected area
End of explanation
from datetime import datetime
params = dict(platform=platform,
product=product,
time=(datetime(2013,1,1), datetime(2013,12,31)) ,
lon= lon,
lat= lat,
measurements = ['red', 'green', 'blue', 'nir', 'swir1', 'swir2', 'pixel_qa'],
dask_chunks={'time':1, 'latitude':1000, 'longitude':1000})
dataset = dc.load(**params).persist()
Explanation: <span id="coastline_classifier_retrieve_data">Load Data from the Data Cube and Create a Composite ▴</span>
End of explanation
from utils.data_cube_utilities.clean_mask import landsat_clean_mask_full
clean_mask = landsat_clean_mask_full(dc, dataset, product=product, platform=platform,
collection=collection, level=level).persist()
Explanation: Obtain the clean mask
End of explanation
from utils.data_cube_utilities.dc_mosaic import create_median_mosaic
from utils.data_cube_utilities.dc_utilities import ignore_warnings
composited_dataset = ignore_warnings(create_median_mosaic, dataset, clean_mask).persist()
Explanation: Create a composite
End of explanation
from utils.data_cube_utilities.plotter_utils import figure_ratio
composited_dataset.swir1.plot(cmap = "Greys", figsize = figure_ratio(dataset, fixed_width = 20))
Explanation: Visualize Composited imagery
End of explanation
from utils.data_cube_utilities.dc_water_classifier import wofs_classify
water_classification = ignore_warnings(wofs_classify, composited_dataset, mosaic = True).persist()
water_classification.wofs.plot(cmap = "Blues", figsize = figure_ratio(dataset, fixed_width = 20))
Explanation: <span id="coastline_classifier_water_cls_and_coastal_change">Obtain Water Classifications and Coastal Change ▴</span>
End of explanation
coast = _coastline_classification(water_classification, water_band='wofs').persist()
coast.coastline.plot(cmap = "Blues", figsize = figure_ratio(dataset, fixed_width = 20))
Explanation: <br>
End of explanation |
13,849 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
OpenMC includes a few convenience functions for generating TRISO particle locations and placing them in a lattice. To be clear, this capability is not a stochastic geometry capability like that included in MCNP. It's also important to note that OpenMC does not use delta tracking, which would normally speed up calculations in geometries with tons of surfaces and cells. However, the computational burden can be eased by placing TRISO particles in a lattice.
Step1: Let's first start by creating materials that will be used in our TRISO particles and the background material.
Step2: To actually create individual TRISO particles, we first need to create a universe that will be used within each particle. The reason we use the same universe for each TRISO particle is to reduce the total number of cells/surfaces needed which can substantially improve performance over using unique cells/surfaces in each.
Step3: Next, we need a region to pack the TRISO particles in. We will use a 1 cm x 1 cm x 1 cm box centered at the origin.
Step4: Now we need to randomly select locations for the TRISO particles. In this example, we will select locations at random within the box with a packing fraction of 30%. Note that pack_spheres can handle up to the theoretical maximum of 60% (it will just be slow).
Step5: Now that we have the locations of the TRISO particles determined and a universe that can be used for each particle, we can create the TRISO particles.
Step6: Each TRISO object is, in fact, a Cell; we can look at the properties of the TRISO just as we would a cell
Step7: Let's confirm that all our TRISO particles are within the box.
Step8: We can also look at what the actual packing fraction turned out to be
Step9: Now that we have our TRISO particles created, we need to place them in a lattice to provide optimal tracking performance in OpenMC. We can use the box we created above to place the lattice in. Actually creating a lattice containing TRISO particles can be done with the model.create_triso_lattice() function. This function requires that we give it a list of TRISO particles, the lower-left coordinates of the lattice, the pitch of each lattice cell, the overall shape of the lattice (number of cells in each direction), and a background material.
Step10: Now we can set the fill of our box cell to be the lattice
Step11: Finally, let's take a look at our geometry by putting the box in a universe and plotting it. We're going to use the Fortran-side plotter since it's much faster.
Step12: If we plot the universe by material rather than by cell, we can see that the entire background is just graphite. | Python Code:
%matplotlib inline
from math import pi
import numpy as np
import matplotlib.pyplot as plt
import openmc
import openmc.model
Explanation: OpenMC includes a few convenience functions for generating TRISO particle locations and placing them in a lattice. To be clear, this capability is not a stochastic geometry capability like that included in MCNP. It's also important to note that OpenMC does not use delta tracking, which would normally speed up calculations in geometries with tons of surfaces and cells. However, the computational burden can be eased by placing TRISO particles in a lattice.
End of explanation
fuel = openmc.Material(name='Fuel')
fuel.set_density('g/cm3', 10.5)
fuel.add_nuclide('U235', 4.6716e-02)
fuel.add_nuclide('U238', 2.8697e-01)
fuel.add_nuclide('O16', 5.0000e-01)
fuel.add_element('C', 1.6667e-01)
buff = openmc.Material(name='Buffer')
buff.set_density('g/cm3', 1.0)
buff.add_element('C', 1.0)
buff.add_s_alpha_beta('c_Graphite')
PyC1 = openmc.Material(name='PyC1')
PyC1.set_density('g/cm3', 1.9)
PyC1.add_element('C', 1.0)
PyC1.add_s_alpha_beta('c_Graphite')
PyC2 = openmc.Material(name='PyC2')
PyC2.set_density('g/cm3', 1.87)
PyC2.add_element('C', 1.0)
PyC2.add_s_alpha_beta('c_Graphite')
SiC = openmc.Material(name='SiC')
SiC.set_density('g/cm3', 3.2)
SiC.add_element('C', 0.5)
SiC.add_element('Si', 0.5)
graphite = openmc.Material()
graphite.set_density('g/cm3', 1.1995)
graphite.add_element('C', 1.0)
graphite.add_s_alpha_beta('c_Graphite')
Explanation: Let's first start by creating materials that will be used in our TRISO particles and the background material.
End of explanation
# Create TRISO universe
spheres = [openmc.Sphere(r=1e-4*r)
for r in [215., 315., 350., 385.]]
cells = [openmc.Cell(fill=fuel, region=-spheres[0]),
openmc.Cell(fill=buff, region=+spheres[0] & -spheres[1]),
openmc.Cell(fill=PyC1, region=+spheres[1] & -spheres[2]),
openmc.Cell(fill=SiC, region=+spheres[2] & -spheres[3]),
openmc.Cell(fill=PyC2, region=+spheres[3])]
triso_univ = openmc.Universe(cells=cells)
Explanation: To actually create individual TRISO particles, we first need to create a universe that will be used within each particle. The reason we use the same universe for each TRISO particle is to reduce the total number of cells/surfaces needed which can substantially improve performance over using unique cells/surfaces in each.
End of explanation
min_x = openmc.XPlane(x0=-0.5, boundary_type='reflective')
max_x = openmc.XPlane(x0=0.5, boundary_type='reflective')
min_y = openmc.YPlane(y0=-0.5, boundary_type='reflective')
max_y = openmc.YPlane(y0=0.5, boundary_type='reflective')
min_z = openmc.ZPlane(z0=-0.5, boundary_type='reflective')
max_z = openmc.ZPlane(z0=0.5, boundary_type='reflective')
region = +min_x & -max_x & +min_y & -max_y & +min_z & -max_z
Explanation: Next, we need a region to pack the TRISO particles in. We will use a 1 cm x 1 cm x 1 cm box centered at the origin.
End of explanation
outer_radius = 425.*1e-4
centers = openmc.model.pack_spheres(radius=outer_radius, region=region, pf=0.3)
Explanation: Now we need to randomly select locations for the TRISO particles. In this example, we will select locations at random within the box with a packing fraction of 30%. Note that pack_spheres can handle up to the theoretical maximum of 60% (it will just be slow).
End of explanation
trisos = [openmc.model.TRISO(outer_radius, triso_univ, c) for c in centers]
Explanation: Now that we have the locations of the TRISO particles determined and a universe that can be used for each particle, we can create the TRISO particles.
End of explanation
print(trisos[0])
Explanation: Each TRISO object is, in fact, a Cell; we can look at the properties of the TRISO just as we would a cell:
End of explanation
centers = np.vstack([t.center for t in trisos])
print(centers.min(axis=0))
print(centers.max(axis=0))
Explanation: Let's confirm that all our TRISO particles are within the box.
End of explanation
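# Total volume of all TRISO particles; since the box defined above is 1 cm x 1 cm x 1 cm
# (a volume of 1 cm^3), this number is also the achieved packing fraction.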
len(trisos)*4/3*pi*outer_radius**3
Explanation: We can also look at what the actual packing fraction turned out to be:
End of explanation
box = openmc.Cell(region=region)
lower_left, upper_right = box.region.bounding_box
shape = (3, 3, 3)
pitch = (upper_right - lower_left)/shape
lattice = openmc.model.create_triso_lattice(
trisos, lower_left, pitch, shape, graphite)
Explanation: Now that we have our TRISO particles created, we need to place them in a lattice to provide optimal tracking performance in OpenMC. We can use the box we created above to place the lattice in. Actually creating a lattice containing TRISO particles can be done with the model.create_triso_lattice() function. This function requires that we give it a list of TRISO particles, the lower-left coordinates of the lattice, the pitch of each lattice cell, the overall shape of the lattice (number of cells in each direction), and a background material.
End of explanation
box.fill = lattice
Explanation: Now we can set the fill of our box cell to be the lattice:
End of explanation
univ = openmc.Universe(cells=[box])
geom = openmc.Geometry(univ)
geom.export_to_xml()
mats = list(geom.get_all_materials().values())
openmc.Materials(mats).export_to_xml()
settings = openmc.Settings()
settings.run_mode = 'plot'
settings.export_to_xml()
p = openmc.Plot.from_geometry(geom)
p.to_ipython_image()
Explanation: Finally, let's take a look at our geometry by putting the box in a universe and plotting it. We're going to use the Fortran-side plotter since it's much faster.
End of explanation
p.color_by = 'material'
p.colors = {graphite: 'gray'}
p.to_ipython_image()
Explanation: If we plot the universe by material rather than by cell, we can see that the entire background is just graphite.
End of explanation |
13,850 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Connexion
Connexion is a python framework based on Flask.
It streamlines the creation of contract-first REST APIs.
Once you have your OAS3 spec, connexion uses it to dispatch requests, serve mock responses on unimplemented methods, validate input and output, apply authentication policies, and provide a Swagger UI to browse the API.
Step1: Now run the spec in a terminal using
connexion run /code/notebooks/oas3/ex-01-info-ok.yaml
Remember
Step2: Defining endpoints in OAS3
Now that we have added our metadata, we can provide information about the endpoints.
OAS3 allows multiple endpoints because good APIs have many.
Every endpoint can start with a prefix path (eg. /datetime/v1).
```
One or more server
You can add production, staging and test environments.
We
sandbox instances
servers
Step3: Solution on the unimplemented method
$ curl http
Step4: Exercise
Edit ex-03-02-path.yaml so that every /status response uses
the Problem schema.
Look at simple.yaml to
see a complete implementation. | Python Code:
# At first ensure connexion is installed
# together with the swagger module used to render the OAS3 spec
# in the web-ui
!pip install connexion[swagger-ui] connexion
Explanation: Connexion
Connexion is a python framework based on Flask.
It streamlines the creation of contract-first REST APIs.
Once you have your OAS3 spec, connexion uses it to:
dispatch requests
serve mock responses on unimplemented methods
validate input and output of the called methods
apply authentication policies
provide an API Documentation UI (Swagger UI) where we can browse our API.
End of explanation
# A request on a generic PATH on the server returns a
# nicely formatted and explicative error.
# Remember that we haven't already defined an operation.
!curl http://0.0.0.0:5000 -kv
render_markdown(f'''
Open the [documentation URL]({api_server_url('ui')}) and check the outcome!
Play a bit with Swagger UI.''')
Explanation: Now run the spec in a terminal using
connexion run /code/notebooks/oas3/ex-01-info-ok.yaml
Remember:
default port is :5000
the Swagger GUI is at the /ui path.
End of explanation
# Exercise: what's the expected output of the following command?
!curl http://0.0.0.0:5000/datetime/v1/status
# Exercise: what happens if you GET an unexisting path?
!curl http://0.0.0.0:5000/datetime/v1/MISSING
Explanation: Defining endpoints in OAS3
Now that we have added our metadata, we can provide information about the endpoints.
OAS3 allows multiple endpoints because good APIs have many.
Every endpoint can start with a prefix path (eg. /datetime/v1).
```
One or more server
You can add production, staging and test environments.
We
sandbox instances
servers:
- description: |
An interoperable API has many endpoints.
One for development...
url: https://localhost:8443/datetime/v1
description:
One for testing in a sandboxed environment. This
is especially important to prevent clients from
testing in production.
We are using the custom x-sandbox attribute to identify sandbox instances.
url: https://api.example.com/datetime/v1
x-sandbox: true
description: |
Then we have our production endpoint.
The custom x-healthCheck parameter
can be used to declare how to check the API.
url: https://api.example.com/datetime/v1/status
x-healthCheck:
url: https://api.example.com/datetime/v1/status
interval: 300
timeout: 15
```
Exercise: the servers parameter
Edit the servers attribute so that it points to your actual endpoint URL (eg. your IP/port).
Now check the outcome.
connexion run /code/notebooks/oas3/ex-02-servers-ok.yaml
Defining paths
Now we can define our first path that is the /status one.
An interoperable API should declare a URL for checking its status.
This allows implementers to plan a suitable method for testing it (eg. it could be
a simple OK/KO method, or it can execute basic checks such as: databases are reachable, smoke tests of other components, ...)
Caveats on /status
NB: the /status path is not a replacement for properly monitoring your APIs, but a way to communicate to your peers that you're online.
Paths anatomy
An OAS3 path references:
the associated METHOD (eg. get|post|..)
a summary and a description of the operation
/status:
get:
summary: Returns the application status.
description: |
This path can randomly return an error
for testing purposes. The returned object
is always a problem+json.
a reference to the python object to call when the path is requested, via the
operationId: get_status
the http statuses of the possible responses, each with its description,
content-type and examples
```
responses:
'200':
description: |
The application is working properly.
content:
application/problem+json:
example:
status: 200
title: OK
detail: API is working properly.
default:
description: |
If none of the above statuses is returned, then this applies
content:
application/problem+json:
example:
status: 500
title: Internal Server Error
detail: API is not responding correctly
```
Exercise
open the ex-03-02-path.yaml
optionally copy/paste the code from/to the Swagger editor.
complete the get /status path
We haven't implemented the function get_status() referenced by operationId yet,
so to run the spec in a terminal we tell the server
to ignore this with --stub (a possible implementation is sketched after the run command below).
connexion run /code/notebooks/oas3/ex-03-02-path.yaml --stub
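For reference, one minimal way the get_status handler referenced by operationId could look is sketched below (an illustrative assumption, not code shipped with the exercise files):
```
import random

def get_status():
    # connexion routes GET /status to this function via operationId.
    # Returning a (body, status_code) tuple lets us emit a problem+json style payload.
    if random.random() < 0.9:
        return {"status": 200, "title": "OK", "detail": "API is working properly."}, 200
    return {"status": 503, "title": "Service Unavailable",
            "detail": "API is not responding correctly."}, 503
```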
Exercise
1- What happens if I get the /status resource of my API now?
2- And if I invoke another path which is not mentioned in the spec?
3- Restart the server via
connexion run /code/notebooks/oas3/ex-03-02-path.yaml --mock notimplemented
End of explanation
print(show_component('https://teamdigitale.github.io/openapi/0.0.5/definitions.yaml#/schemas/Problem'))
# Exercise: use the yaml and requests libraries
# to download the Problem schema
from requests import get
ret = get('https://teamdigitale.github.io/openapi/0.0.5/definitions.yaml')
# Yaml parse the definitions
definitions = yaml.safe_load(ret.content)
# Nicely print the Problem schema
print(yaml.dump(definitions['schemas']['Problem']))
### Exercise
# Read the definitions above
# - https://teamdigitale.github.io/openapi/0.0.5/definitions.yaml
#
# Then use this cell to list all the structures present in definitions
for sections, v in definitions.items():
for items, vv in v.items():
print(f'{sections}.{items}')
Explanation: Solution on the unimplemented method
$ curl http://0.0.0.0:8889/datetime/v1/status
{
"detail": "Empty module name",
"status": 501,
"title": "Not Implemented",
"type": "about:blank"
}
Solution on other paths
$ curl http://0.0.0.0:8889/datetime/v1/missing
{
"detail": "The requested URL was not found on the server. If you entered the URL manually please check your spelling and try again.",
"status": 404,
"title": "Not Found",
"type": "about:blank"
}
Schemas
OAS3 allows defining, using and reusing schemas.
They can be defined inline, in the component section or referenced from another file, like below.
The URL fragment part can be used to navigate inside the yaml (eg. #/schemas/Problem).
```
components:
schemas:
Problem:
$ref: 'https://teamdigitale.github.io/openapi/0.0.5/definitions.yaml#/schemas/Problem'
```
End of explanation
## Exercise
#Test the new setup
Explanation: Exercise
Edit ex-03-02-path.yaml so that every /status response uses
the Problem schema.
Look at simple.yaml to
see a complete implementation.
End of explanation |
13,851 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: Sample data
Step3: A function to calculate a three-parameter transformation based on common points. The point coordinates are stored in dictionaries: the key is the point ID/name and each dictionary item stores a list of coordinates [x, y, z].
Step5: Function to select two points from all points.
Step7: Apply the transformation parameters to points.
Step8: Iterating the transformation using two points in all combinations. | Python Code:
import numpy as np
from math import atan2, sqrt, sin, cos, pi
import re
X, Y, Z, MX, MY, MZ = 0, 1, 2, 3, 4, 5 # indices in coord dictionary items
RO = 180 * 3600 / pi
Explanation: <a href="https://colab.research.google.com/github/OSGeoLabBp/tutorials/blob/master/english/data_processing/lessons/trans.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Robust 2D transformation
Sometimes we have to check the stability of the horizontal reference system, which is established by the control points marked on the field.
The proposed method to check the movement of the control points:
Initially make observations and adjust the horizontal network as a free network with blunder elimination
Checking the stability of the control points, make observations and adjust the network again as a free network with blunder elimination
Calculate transformation parameters between the two free networks using a robust method. If we suppose some control points moved between the two measurements, the moving points cannot be used to calculate the transformation parameters.
The robust method for the calculation of the transformation parameters can be the L1 norm, where we minimize the sum of the absolute values of the corrections. It can be solved by the simplex method of linear programming or by iterating the LSM solution using only two points for the parameter calculation. As no scale change between the two determinations of the points is expected, a 3-parameter orthogonal transformation (offset and rotation) is used.
Advantages of the proposed method:
It is not necessary to have the same points; there may be new and destroyed points
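For reference, the 3-parameter (offset + rotation) transformation estimated by tr3 and applied by coo_tr below maps a point (x, y) as
$$x' = x_0 + x \cos\alpha - y \sin\alpha$$
$$y' = y_0 + x \sin\alpha + y \cos\alpha$$
where x0, y0 and alpha are the parameters calculated from the common points.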
End of explanation
# initial coordinates
coo1 = {'K1': [ 0.0, 5.9427, 0.9950],
'K2': [ 6.0242, 0.0, 1.3998],
'K3': [ 9.7954, 5.3061, 1.8230],
'K4': [17.9716, 5.2726, 1.8389],
'K5': [31.6363, 5.5274, 1.0126],
'K6': [33.2002, 7.0923, 1.1090],
'K7': [35.9246, 14.5219, 1.3326],
'K8': [40.6884, 21.0337, 1.4709],
'K9': [32.501, 22.8658, 1.6797]
}
# coordinates from the second adjustment
coo2 = {'K1': [ 0.0002, 5.9422, 0.9948],
'K2': [ 6.0252, -0.0006, 1.3997],
'K3': [ 9.7959, 5.3061, 1.8230],
'K4': [17.9716, 5.2729, 1.8389],
'K5': [31.6366, 5.5280, 1.0129],
'K6': [33.1994, 7.0916, 1.1091],
'K7': [35.9235, 14.5207, 1.3327],
'K8': [40.6888, 21.0319, 1.4711],
'K9': [32.2494, 22.8644, 1.6799]
}
Explanation: Sample data
End of explanation
def tr3(src, dst, x0):
Three parameter orthogonal transformation
:param src: dictionary of source points and coordinates
:param dst: dictionary of target points and coordinates
:param x0: preliminary transformation parameter values
:returns: x_offset y_offset rotation
# find common points
s = set(src.keys())
d = set(dst.keys())
common = s.intersection(d)
n = len(common)
A = np.zeros((2*n, 3))
l = np.zeros(2*n)
i = 0
# set up equations
for key in common:
A[i] = np.array([1.0, 0.0, -src[key][X] * sin(x0[2]) -
src[key][Y] * cos(x0[2])])
l[i] = dst[key][X] - (x0[0] + src[key][X] * cos(x0[2]) -
src[key][Y] * sin(x0[2]))
i += 1
A[i] = np.array([0.0, 1.0, src[key][X] * cos(x0[2]) -
src[key][Y] * sin(x0[2])])
l[i] = dst[key][1] - (x0[1] + src[key][X] * sin(x0[2]) +
src[key][Y] * cos(x0[2]))
i += 1
# solve equation
ATA = np.dot(A.transpose(), A)
ATl = np.dot(A.transpose(), l)
param = np.linalg.solve(ATA, ATl) # x0, y0, rotation
v = np.dot(A, param+x0) - l # corrections
return param + x0, v
Explanation: A function to calculate a three-parameter transformation based on common points. The point coordinates are stored in dictionaries: the key is the point ID/name and each dictionary item stores a list of coordinates [x, y, z].
End of explanation
def sel(coo, keys):
select points from coordinate list based on point IDs or regexp
:param coo: dictionary with coordinates
:param keys: dictionary keys/point IDS to select or a regexp for point ids
if isinstance(keys, str):
r = re.compile(keys)
w = list(filter(r.search, coo.keys()))
else:
w = keys
return {k : coo[k] for k in w if k in coo}
Explanation: Function to select two points from all points.
End of explanation
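For example (a short usage illustration added here, not part of the original notebook):
print(sel(coo1, ['K1', 'K2']))   # select points by an explicit list of IDs
print(sel(coo1, 'K[1-3]'))       # select points whose ID matches a regular expression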
def coo_tr(coo, param):
transform coordinates in coo using transformation parameters
:param coo: dictionary of coordinates to transform
:param param: transformation parameters x0, y0, alfa, scale
if len(param) == 4:
x0, y0, alpha, scale = param
else:
x0, y0, alpha = param
scale = 1.0
return {k: [x0 + coo[k][X] * scale * cos(alpha) - coo[k][Y] * scale * sin(alpha),
y0 + coo[k][X] * scale * sin(alpha) + coo[k][Y] * scale * cos(alpha),
coo[k][Z]] for k in coo}
Explanation: Apply the transformation parameters to points.
End of explanation
key_list = list(coo1.keys())
n_key = len(key_list)
min_v = 1e38
print('P1 P2 X0 Y0 Alpha" sum(|v|)')
print('----------------------------------------------')
for i in range(n_key):
k1 = key_list[i]
for j in range(i+1, n_key):
k2 = key_list[j]
p, v = tr3(sel(coo1, [k1, k2]), sel(coo2, [k1, k2]), [0.0, 0.0, 0.0])
coo1_tr = coo_tr(coo1, p)
sum_v = 0
# calculate sum of absolute value of corrections
for k in coo1:
sum_v += abs(coo1_tr[k][X] - coo2[k][X]) + \
abs(coo1_tr[k][Y] - coo2[k][Y])
if sum_v < min_v:
opt = [k1, k2, p, sum_v]
min_v = sum_v
print(f'{k1:4s} {k2:4s} {p[0]:8.3f} {p[1]:8.3f} {p[2] * RO:6.1f} {sum_v:8.3f}')
print('optimal:')
print(f'{opt[0]:4s} {opt[1]:4s} {opt[2][0]:8.3f} {opt[2][1]:8.3f} {opt[2][2] * RO:6.1f} {opt[3]:8.3f}')
Explanation: Iterating the transformation using two points in all combinations.
End of explanation |
13,852 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
These notes follow the official python tutorial pretty closely
Step1: Lists
Lists group together data. Many languages have arrays (we'll look at those in a bit in python). But unlike arrays in most languages, lists can hold data of all different types -- they don't need to be homogeneous. The data can be a mix of integers, floating point or complex #s, strings, or other objects (including other lists).
A list is defined using square brackets
Step2: We can index a list to get a single element -- remember that python starts counting at 0
Step3: Like with strings, mathematical operators are defined on lists
Step4: The len() function returns the length of a list
Step5: Unlike strings, lists are mutable -- you can change elements in a list easily
Step6: Note that lists can even contain other lists
Step7: Just like everything else in python, a list is an object that is the instance of a class. Classes have methods (functions) that know how to operate on an object of that class.
There are lots of methods that work on lists. Two of the most useful are append, to add to the end of a list, and pop, to remove the last element
Step8: <div style="background-color
Step9: copying may seem a little counterintuitive at first. The best way to think about this is that your list lives in memory somewhere and when you do
a = [1, 2, 3, 4]
then the variable a is set to point to that location in memory, so it refers to the list.
If we then do
b = a
then b will also point to that same location in memory -- the exact same list object.
Since these are both pointing to the same location in memory, if we change the list through a, the change is reflected in b as well
Step10: if you want to create a new object in memory that is a copy of another, then you can either index the list, using
Step11: Things get a little complicated when a list contains another mutable object, like another list. Then the copy we looked at above is only a shallow copy. Look at this example—the list within the list here is still the same object in memory for our two copies
Step12: Now we are going to change an element of that list [2, 3] inside of our main list. We need to index f once to get that list, and then a second time to index that list
Step13: Note that the change occured in both—since that inner list is shared in memory between the two. Note that we can still change one of the other values without it being reflected in the other list—this was made distinct by our shallow copy
Step14: Note
Step15: There are lots of other methods that work on lists (remember, ask for help)
Step16: joining two lists is simple. Like with strings, the + operator concatenates
Step17: Dictionaries
A dictionary stores data as a key
Step18: you can add a new key
Step19: Note that a dictionary is unordered.
You can also easily get the list of keys that are defined in a dictionary
Step20: and check easily whether a key exists in the dictionary using the in operator
Step21: List Comprehensions
list comprehensions provide a compact way to initialize lists. Some examples from the tutorial
Step22: here we use another python type, the tuple, to combine numbers from two lists into a pair
Step23: <div style="background-color
Step24: We can unpack a tuple
Step25: Since a tuple is immutable, we cannot change an element
Step26: But we can turn it into a list, and then we can change it
Step27: Control Flow
To write a program, we need the ability to iterate and take action based on the values of a variable. This includes if-tests and loops.
Python uses whitespace to denote a block of code.
While loop
A simple while loop—notice the indentation to denote the block that is part of the loop.
Here we also use the compact += operator
Step28: This was a very simple example. But often we'll use the range() function in this situation. Note that range() can take a stride.
Step29: if statements
if allows for branching. python does not have a select/case statement like some other languages, but if, elif, and else can reproduce any branching functionality you might need.
Step30: Iterating over elements
it's easy to loop over items in a list or any iterable object. The in operator is the key here.
Step31: We can combine loops and if-tests to do more complex logic, like break out of the loop when you find what you're looking for
Step32: (for that example, however, there is a simpler way)
Step33: for dictionaries, you can also loop over the elements
Step34: sometimes we want to loop over a list element and know its index -- enumerate() helps here | Python Code:
from __future__ import print_function
Explanation: These notes follow the official python tutorial pretty closely: http://docs.python.org/3/tutorial/
End of explanation
a = [1, 2.0, "my list", 4]
print(a)
Explanation: Lists
Lists group together data. Many languages have arrays (we'll look at those in a bit in python). But unlike arrays in most languages, lists can hold data of all different types -- they don't need to be homogeneous. The data can be a mix of integers, floating point or complex #s, strings, or other objects (including other lists).
A list is defined using square brackets:
End of explanation
print(a[2])
Explanation: We can index a list to get a single element -- remember that python starts counting at 0:
End of explanation
print(a*2)
Explanation: Like with strings, mathematical operators are defined on lists:
End of explanation
print(len(a))
Explanation: The len() function returns the length of a list
End of explanation
a[1] = -2.0
a
a[0:1] = [-1, -2.1] # this will put two items in the spot where 1 existed before
a
Explanation: Unlike strings, lists are mutable -- you can change elements in a list easily
End of explanation
a[1] = ["other list", 3]
a
Explanation: Note that lists can even contain other lists:
End of explanation
a.append(6)
a
a.pop()
a
Explanation: Just like everything else in python, a list is an object that is the instance of a class. Classes have methods (functions) that know how to operate on an object of that class.
There are lots of methods that work on lists. Two of the most useful are append, to add to the end of a list, and pop, to remove the last element:
End of explanation
a = []
a.append(1)
a.append(2)
a.append(3)
a.append(4)
a.append(5)
a
a.pop()
a.pop()
a.pop()
a.pop()
a.pop()
a.pop()
Explanation: <div style="background-color:yellow; padding: 10px"><h3><span class="fa fa-flash"></span> Quick Exercise:</h3></div>
An operation we'll see a lot is to begin with an empty list and add elements to it. An empty list is created as:
a = []
Create an empty list
Append the integers 1 through 10 to it.
Now pop them out of the list one by one.
<hr>
End of explanation
a = [1, 2, 3, 4]
b = a # both a and b refer to the same list object in memory
print(a)
a[0] = "changed"
print(b)
Explanation: copying may seem a little counterintuitive at first. The best way to think about this is that your list lives in memory somewhere and when you do
a = [1, 2, 3, 4]
then the variable a is set to point to that location in memory, so it refers to the list.
If we then do
b = a
then b will also point to that same location in memory -- the exact same list object.
Since these are both pointing to the same location in memory, if we change the list through a, the change is reflected in b as well:
End of explanation
c = list(a) # you can also do c = a[:], which basically slices the entire list
a[1] = "two"
print(a)
print(c)
Explanation: if you want to create a new object in memory that is a copy of another, then you can either index the list, using : to get all the elements, or use the list() function:
End of explanation
f = [1, [2, 3], 4]
print(f)
g = list(f)
print(g)
Explanation: Things get a little complicated when a list contains another mutable object, like another list. Then the copy we looked at above is only a shallow copy. Look at this example—the list within the list here is still the same object in memory for our two copies:
End of explanation
f[1][0] = "a"
print(f)
print(g)
Explanation: Now we are going to change an element of that list [2, 3] inside of our main list. We need to index f once to get that list, and then a second time to index that list:
End of explanation
f[0] = -1
print(g)
print(f)
Explanation: Note that the change occured in both—since that inner list is shared in memory between the two. Note that we can still change one of the other values without it being reflected in the other list—this was made distinct by our shallow copy:
End of explanation
print(id(a), id(b), id(c))
Explanation: Note: this is what is referred to as a shallow copy. If the original list had any special objects in it (like another list), then the new copy and the old copy will still point to that same object. There is a deep copy method when you really want everything to be unique in memory.
When in doubt, use the id() function to figure out where in memory an object lives (you shouldn't worry about what the values you get from id() mean, just whether they are the same as those for another object)
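The deep copy mentioned above is provided by the standard library's copy module; a short added example (using fresh variable names so it does not disturb the lists above):
import copy
nested = [1, [2, 3], 4]
nested_copy = copy.deepcopy(nested)   # the inner list is duplicated as well
nested[1][0] = "changed"
print(nested)        # the original shows the change
print(nested_copy)   # the deep copy does not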
End of explanation
my_list = [10, -1, 5, 24, 2, 9]
my_list.sort()
print(my_list)
print(my_list.count(-1))
my_list
help(a.insert)
a.insert(3, "my inserted element")
a
Explanation: There are lots of other methods that work on lists (remember, ask for help)
End of explanation
b = [1, 2, 3]
c = [4, 5, 6]
d = b + c
print(d)
Explanation: joining two lists is simple. Like with strings, the + operator concatenates:
End of explanation
my_dict = {"key1":1, "key2":2, "key3":3}
print(my_dict["key1"])
Explanation: Dictionaries
A dictionary stores data as a key:value pair. Unlike a list where you have a particular order, the keys in a dictionary allow you to access information anywhere easily:
End of explanation
my_dict["newkey"] = "new"
print(my_dict)
Explanation: you can add a new key:pair easily, and it can be of any type
End of explanation
keys = list(my_dict.keys())
print(keys)
Explanation: Note that a dictionary is unordered.
You can also easily get the list of keys that are defined in a dictionary
End of explanation
print("key1" in keys)
print("invalidKey" in keys)
Explanation: and check easily whether a key exists in the dictionary using the in operator
End of explanation
squares = [x**2 for x in range(10)]
squares
Explanation: List Comprehensions
list comprehensions provide a compact way to initialize lists. Some examples from the tutorial
End of explanation
[(x, y) for x in [1,2,3] for y in [3,1,4] if x != y]
Explanation: here we use another python type, the tuple, to combine numbers from two lists into a pair
End of explanation
a = (1, 2, 3, 4)
print(a)
Explanation: <div style="background-color:yellow; padding: 10px"><h3><span class="fa fa-flash"></span> Quick Exercise:</h3></div>
Use a list comprehension to create a new list from squares containing only the even numbers. It might be helpful to use the modulus operator, %
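One possible solution, shown here for reference:
evens = [x for x in squares if x % 2 == 0]
print(evens)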
<hr>
Tuples
tuples are immutable -- they cannot be changed, but they are useful for organizing data in some situations. We use () to indicate a tuple:
End of explanation
w, x, y, z = a
print(w)
print(w, x, y, z)
Explanation: We can unpack a tuple:
End of explanation
a[0] = 2
Explanation: Since a tuple is immutable, we cannot change an element:
End of explanation
z = list(a)
z[0] = "new"
print(z)
Explanation: But we can turn it into a list, and then we can change it
End of explanation
n = 0
while n < 10:
print(n)
n += 1
Explanation: Control Flow
To write a program, we need the ability to iterate and take action based on the values of a variable. This includes if-tests and loops.
Python uses whitespace to denote a block of code.
While loop
A simple while loop—notice the indentation to denote the block that is part of the loop.
Here we also use the compact += operator: n += 1 is the same as n = n + 1
End of explanation
for n in range(2, 10, 2):
print(n)
print(list(range(10)))
Explanation: This was a very simple example. But often we'll use the range() function in this situation. Note that range() can take a stride.
End of explanation
x = 0
if x < 0:
print("negative")
elif x == 0:
print("zero")
else:
print("positive")
Explanation: if statements
if allows for branching. python does not have a select/case statement like some other languages, but if, elif, and else can reproduce any branching functionality you might need.
End of explanation
alist = [1, 2.0, "three", 4]
for a in alist:
print(a)
for c in "this is a string":
print(c)
Explanation: Iterating over elements
it's easy to loop over items in a list or any iterable object. The in operator is the key here.
End of explanation
n = 0
for a in alist:
if a == "three":
break
else:
n += 1
print(n)
Explanation: We can combine loops and if-tests to do more complex logic, like break out of the loop when you find what you're looking for
End of explanation
print(alist.index("three"))
Explanation: (for that example, however, there is a simpler way)
End of explanation
my_dict = {"key1":1, "key2":2, "key3":3}
for k, v in my_dict.items():
print("key = {}, value = {}".format(k, v)) # notice how we do the formatting here
for k in sorted(my_dict):
print(k, my_dict[k])
Explanation: for dictionaries, you can also loop over the elements
End of explanation
for n, a in enumerate(alist):
print(n, a)
Explanation: sometimes we want to loop over a list element and know its index -- enumerate() helps here:
End of explanation |
13,853 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Conditions in Python
Python has a very natural looking syntax for conditionals and boolean operations
if statement in Python
if True
Step1: if else statement
Step2: if - else if statement
Python has the elif statement to represent an else if condition | Python Code:
import random
toss = random.random() # returns a random value between 0 and 1
if toss > 0.5:
print('I won')
Explanation: Conditions in Python
Python has a very natural looking syntax for conditionals and boolean operations
if statement in Python
if True:
do something
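Boolean operators combine naturally with if; a small added illustration:
x, y = 3, 4
if x > 0 and y > 0:
    print('both are positive')
if not (x > 5 or y > 5):
    print('neither is greater than 5')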
End of explanation
toss = random.random()
if toss > 0.5:
print('I won')
else:
print('You won')
Explanation: if else statement
End of explanation
fruits = ['apple', 'orange', 'banana', 'water melon']
fruit_index = random.randint(0, 3) # get a random index between 0 and len(fruits) - 1
fruit = fruits[fruit_index] # use fruit_index as an index to randomly select a fruit
if fruit == 'apple':
print('red')
elif fruit == 'orange':
print('orange')
elif fruit == 'banana':
print('yellow')
else:
print('green')
Explanation: if - else if statement
Python has the elif statement to represent an else if condition
End of explanation |
13,854 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2021 The TF-Agents Authors.
Step1: Environments
<table class="tfo-notebook-buttons" align="left">
<td> <a target="_blank" href="https
Step9: Python environments
A Python environment's step(action) -> next_time_step method applies the action to the environment and returns the following information about the next step:
observation: the part of the environment state that the agent can observe to choose its action at the next step.
reward: the agent learns to maximize the sum of these rewards over multiple steps.
step_type: interactions with the environment are usually part of a sequence/episode, e.g. multiple moves in a game of chess. step_type can be FIRST, MID or LAST, indicating whether this time step is the first, an intermediate or the last step in the sequence.
discount: a float representing how much to weight the reward at the next time step relative to the reward at the current time step.
These are grouped into a named tuple TimeStep(step_type, reward, discount, observation).
environments/py_environment.PyEnvironment contains the interface that all python environments must implement. The main methods are:
Step10: Besides the step() method, the environment also provides a reset() method that starts a new sequence and provides an initial TimeStep. It is not necessary to call reset explicitly; we assume the environment resets automatically when an episode ends or when step() is called for the first time.
Note that subclasses do not implement step() or reset() directly. Instead, they override the _step() and _reset() methods. The time steps returned by these methods are cached and exposed through current_time_step().
The observation_spec and action_spec methods return a nest of (Bounded)ArraySpecs describing the name, shape, data type and ranges of the observations and actions respectively.
In TF-Agents we repeatedly refer to nests, which are defined as any tree-like structure made of lists, tuples, named tuples or dictionaries. These can be composed arbitrarily to keep the structure of observations and actions. We have found this very useful for more complex environments with many observations and actions.
Using standard environments
TF-Agents has built-in wrappers for many standard environments such as OpenAI Gym, DeepMind-control and Atari, so that they support our py_environment.PyEnvironment interface. These wrapped environments can easily be loaded with our environment suites. Let's load the CartPole environment from OpenAI Gym and look at its action and time_step_spec.
Step11: We can see that the environment expects actions of type int64 in [0, 1] and returns TimeSteps where the observations are float32 vectors of length 4 and the discount factor is a float32 in [0.0, 1.0]. Now, let's try taking the fixed action (1,) for a whole episode.
Step12: Creating your own Python environment
For many clients, a common use case is to apply one of the standard agents in TF-Agents (see agents/) to their problem. To do this, the client has to frame their problem as an environment. So let us look at how to implement an environment in Python.
Let's say we want to train an agent to play the following card game (inspired by Blackjack):
The game is played with an infinite deck of cards numbered 1 to 10.
Every round, the agent can do two things: draw a new random card, or stop the current round.
The goal is to get the sum of your cards as close to 21 as possible at the end of the round, without going over.
An environment that represents the game could look like this:
Actions: there are 2 actions. Action 0 draws a new card; action 1 terminates the current round.
Observations: the sum of the cards in the current round.
Reward: the goal is to get as close to 21 as possible without going over, so we can achieve this with the following reward at the end of the round: sum_of_cards - 21 if sum_of_cards <= 21, else -21
Step13: Let's make sure the environment above is defined correctly. When you create your own environment, you must make sure the observations and time_steps it produces follow the correct shapes and types defined in your specs. These are used to generate the TensorFlow graph, and mistakes here can create problems that are hard to debug.
To validate our environment, we will use a random policy to generate actions and iterate over 5 episodes to make sure things work as intended. An error is raised if we receive a time_step that does not follow the environment specs.
Step14: Now that we know the environment works as intended, let's run it with a fixed policy: draw 3 cards and then end the round.
Step15: Environment wrappers
An environment wrapper takes a python environment and returns a modified version of it. Both the original and the modified environment are instances of py_environment.PyEnvironment, and multiple wrappers can be chained together.
Some common wrappers can be found in environments/wrappers.py. For example:
ActionDiscretizeWrapper: converts a continuous action space into a discrete one.
RunStats: captures run statistics of the environment, such as the number of steps taken, the number of episodes completed, etc.
TimeLimit: terminates the episode after a fixed number of steps.
Example 1: Action discretization wrapper
InvertedPendulum is a PyBullet environment that accepts continuous actions in the range [-2, 2]. If we want to train a discrete-action agent such as DQN on this environment, we have to discretize (quantize) the action space. This is exactly what the ActionDiscretizeWrapper does. Compare the action_spec before and after wrapping:
Step25: The wrapped discrete_action_env is an instance of py_environment.PyEnvironment and can be treated like a regular python environment.
TensorFlow environments
The interface for TF environments is defined in environments/tf_environment.TFEnvironment and looks very similar to the Python environments. TF environments differ from python environments in a couple of ways:
They generate tensor objects instead of arrays
TF environments add a batch dimension to the tensors generated, compared to the specs.
Converting python environments into TF environments allows tensorflow to parallelize operations. For example, one could define a collect_experience_op that collects data from the environment and adds it to a replay_buffer, and a train_op that reads from the replay_buffer and trains the agent, and run the two in parallel naturally in TensorFlow.
Step26: The current_time_step() method returns the current time_step and initializes the environment if needed.
The reset() method forces a reset in the environment and returns the current_step.
If the action doesn't depend on the previous time_step, a tf.control_dependency is needed in Graph mode.
Now, let us look at how TFEnvironments are created.
Creating your own TensorFlow environment
This is more complicated than creating environments in Python, so we will not cover it in this colab. An example is available here. The more common use case is to implement your environment in Python and wrap it into TensorFlow using our TFPyEnvironment wrapper (see below).
Wrapping a Python environment into TensorFlow
We can easily wrap any Python environment into a TensorFlow environment using the TFPyEnvironment wrapper.
Step27: Note that the specs are now of type (Bounded)TensorSpec.
Usage examples
Simple example
Step28: Whole episodes | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2021 The TF-Agents Authors.
End of explanation
!pip install "gym>=0.21.0"
!pip install tf-agents
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import abc
import tensorflow as tf
import numpy as np
from tf_agents.environments import py_environment
from tf_agents.environments import tf_environment
from tf_agents.environments import tf_py_environment
from tf_agents.environments import utils
from tf_agents.specs import array_spec
from tf_agents.environments import wrappers
from tf_agents.environments import suite_gym
from tf_agents.trajectories import time_step as ts
Explanation: Environments
<table class="tfo-notebook-buttons" align="left">
<td> <a target="_blank" href="https://tensorflow.google.cn/agents/tutorials/2_environments_tutorial"><img src="https://tensorflow.google.cn/images/tf_logo_32px.png">View on tensorflow.google.cn</a>
</td>
<td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/agents/tutorials/2_environments_tutorial.ipynb"><img src="https://tensorflow.google.cn/images/colab_logo_32px.png">Run in Google Colab</a>
</td>
<td> <a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/agents/tutorials/2_environments_tutorial.ipynb"><img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png">View source on GitHub</a>
</td>
<td> <a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/agents/tutorials/2_environments_tutorial.ipynb"><img src="https://tensorflow.google.cn/images/download_logo_32px.png">Download notebook</a> </td>
</table>
Introduction
The goal of Reinforcement Learning (RL) is to design agents that learn by interacting with an environment. In the standard RL setting, the agent receives an observation at every time step and chooses an action. The action is applied to the environment, and the environment returns a reward and a new observation. The agent trains a policy to choose actions that maximize the sum of rewards, also known as the return.
In TF-Agents, environments can be implemented either in Python or TensorFlow. Python environments are usually easier to implement, understand, and debug, but TensorFlow environments are more efficient and allow natural parallelization. The most common workflow is to implement an environment in Python and use one of our wrappers to automatically convert it into TensorFlow.
Let us look at Python environments first. TensorFlow environments follow a very similar API.
Setup
If you haven't installed TF-Agents or Gym yet, run:
End of explanation
class PyEnvironment(object):
def reset(self):
Return initial_time_step.
self._current_time_step = self._reset()
return self._current_time_step
def step(self, action):
Apply action and return new time_step.
if self._current_time_step is None:
return self.reset()
self._current_time_step = self._step(action)
return self._current_time_step
def current_time_step(self):
return self._current_time_step
def time_step_spec(self):
Return time_step_spec.
@abc.abstractmethod
def observation_spec(self):
Return observation_spec.
@abc.abstractmethod
def action_spec(self):
Return action_spec.
@abc.abstractmethod
def _reset(self):
Return initial_time_step.
@abc.abstractmethod
def _step(self, action):
Apply action and return new time_step.
Explanation: Python environments
A Python environment's step(action) -> next_time_step method applies the action to the environment and returns the following information about the next step:
observation: the part of the environment state that the agent can observe to choose its action at the next step.
reward: the agent learns to maximize the sum of these rewards over multiple steps.
step_type: interactions with the environment are usually part of a sequence/episode, e.g. multiple moves in a game of chess. step_type can be FIRST, MID or LAST, indicating whether this time step is the first, an intermediate or the last step in the sequence.
discount: a float representing how much to weight the reward at the next time step relative to the reward at the current time step.
These are grouped into a named tuple TimeStep(step_type, reward, discount, observation).
environments/py_environment.PyEnvironment contains the interface that all python environments must implement. The main methods are:
End of explanation
environment = suite_gym.load('CartPole-v0')
print('action_spec:', environment.action_spec())
print('time_step_spec.observation:', environment.time_step_spec().observation)
print('time_step_spec.step_type:', environment.time_step_spec().step_type)
print('time_step_spec.discount:', environment.time_step_spec().discount)
print('time_step_spec.reward:', environment.time_step_spec().reward)
Explanation: Besides the step() method, the environment also provides a reset() method that starts a new sequence and provides an initial TimeStep. It is not necessary to call reset explicitly; we assume the environment resets automatically when an episode ends or when step() is called for the first time.
Note that subclasses do not implement step() or reset() directly. Instead, they override the _step() and _reset() methods. The time steps returned by these methods are cached and exposed through current_time_step().
The observation_spec and action_spec methods return a nest of (Bounded)ArraySpecs describing the name, shape, data type and ranges of the observations and actions respectively.
In TF-Agents we repeatedly refer to nests, which are defined as any tree-like structure made of lists, tuples, named tuples or dictionaries. These can be composed arbitrarily to keep the structure of observations and actions. We have found this very useful for more complex environments with many observations and actions.
Using standard environments
TF-Agents has built-in wrappers for many standard environments such as OpenAI Gym, DeepMind-control and Atari, so that they support our py_environment.PyEnvironment interface. These wrapped environments can easily be loaded with our environment suites. Let's load the CartPole environment from OpenAI Gym and look at its action and time_step_spec.
End of explanation
action = np.array(1, dtype=np.int32)
time_step = environment.reset()
print(time_step)
while not time_step.is_last():
time_step = environment.step(action)
print(time_step)
Explanation: We can see that the environment expects actions of type int64 in [0, 1] and returns TimeSteps where the observations are float32 vectors of length 4 and the discount factor is a float32 in [0.0, 1.0]. Now, let's try taking the fixed action (1,) for a whole episode.
End of explanation
class CardGameEnv(py_environment.PyEnvironment):
def __init__(self):
self._action_spec = array_spec.BoundedArraySpec(
shape=(), dtype=np.int32, minimum=0, maximum=1, name='action')
self._observation_spec = array_spec.BoundedArraySpec(
shape=(1,), dtype=np.int32, minimum=0, name='observation')
self._state = 0
self._episode_ended = False
def action_spec(self):
return self._action_spec
def observation_spec(self):
return self._observation_spec
def _reset(self):
self._state = 0
self._episode_ended = False
return ts.restart(np.array([self._state], dtype=np.int32))
def _step(self, action):
if self._episode_ended:
# The last action ended the episode. Ignore the current action and start
# a new episode.
return self.reset()
# Make sure episodes don't go on forever.
if action == 1:
self._episode_ended = True
elif action == 0:
new_card = np.random.randint(1, 11)
self._state += new_card
else:
raise ValueError('`action` should be 0 or 1.')
if self._episode_ended or self._state >= 21:
reward = self._state - 21 if self._state <= 21 else -21
return ts.termination(np.array([self._state], dtype=np.int32), reward)
else:
return ts.transition(
np.array([self._state], dtype=np.int32), reward=0.0, discount=1.0)
Explanation: Creating your own Python environment
For many clients, a common use case is to apply one of the standard agents in TF-Agents (see agents/) to their problem. To do this, the client has to frame their problem as an environment. So let us look at how to implement an environment in Python.
Let's say we want to train an agent to play the following card game (inspired by Blackjack):
The game is played with an infinite deck of cards numbered 1 to 10.
Every round, the agent can do two things: draw a new random card, or stop the current round.
The goal is to get the sum of your cards as close to 21 as possible at the end of the round, without going over.
An environment that represents the game could look like this:
Actions: there are 2 actions. Action 0 draws a new card; action 1 terminates the current round.
Observations: the sum of the cards in the current round.
Reward: the goal is to get as close to 21 as possible without going over, so we can achieve this with the following reward at the end of the round: sum_of_cards - 21 if sum_of_cards <= 21, else -21
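As a quick check of that reward rule: ending a round with cards summing to 18 gives a reward of 18 - 21 = -3, stopping exactly on 21 gives 0, and going over (for example reaching 25) gives -21.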
End of explanation
environment = CardGameEnv()
utils.validate_py_environment(environment, episodes=5)
Explanation: Let's make sure the environment above is defined correctly. When you create your own environment, you must make sure the observations and time_steps it produces follow the correct shapes and types defined in your specs. These are used to generate the TensorFlow graph, and mistakes here can create problems that are hard to debug.
To validate our environment, we will use a random policy to generate actions and iterate over 5 episodes to make sure things work as intended. An error is raised if we receive a time_step that does not follow the environment specs.
End of explanation
get_new_card_action = np.array(0, dtype=np.int32)
end_round_action = np.array(1, dtype=np.int32)
environment = CardGameEnv()
time_step = environment.reset()
print(time_step)
cumulative_reward = time_step.reward
for _ in range(3):
time_step = environment.step(get_new_card_action)
print(time_step)
cumulative_reward += time_step.reward
time_step = environment.step(end_round_action)
print(time_step)
cumulative_reward += time_step.reward
print('Final Reward = ', cumulative_reward)
Explanation: Now that we know the environment works as intended, let's run it with a fixed policy: draw 3 cards and then end the round.
End of explanation
env = suite_gym.load('Pendulum-v1')
print('Action Spec:', env.action_spec())
discrete_action_env = wrappers.ActionDiscretizeWrapper(env, num_actions=5)
print('Discretized Action Spec:', discrete_action_env.action_spec())
Explanation: Environment wrappers
An environment wrapper takes a python environment and returns a modified version of it. Both the original and the modified environment are instances of py_environment.PyEnvironment, and multiple wrappers can be chained together.
Some common wrappers can be found in environments/wrappers.py. For example:
ActionDiscretizeWrapper: converts a continuous action space into a discrete one.
RunStats: captures run statistics of the environment, such as the number of steps taken, the number of episodes completed, etc.
TimeLimit: terminates the episode after a fixed number of steps.
Example 1: Action discretization wrapper
InvertedPendulum is a PyBullet environment that accepts continuous actions in the range [-2, 2]. If we want to train a discrete-action agent such as DQN on this environment, we have to discretize (quantize) the action space. This is exactly what the ActionDiscretizeWrapper does. Compare the action_spec before and after wrapping:
End of explanation
class TFEnvironment(object):
def time_step_spec(self):
Describes the `TimeStep` tensors returned by `step()`.
def observation_spec(self):
Defines the `TensorSpec` of observations provided by the environment.
def action_spec(self):
"""Describes the TensorSpecs of the action expected by `step(action)`."""
def reset(self):
"""Returns the current `TimeStep` after resetting the Environment."""
return self._reset()
def current_time_step(self):
"""Returns the current `TimeStep`."""
return self._current_time_step()
def step(self, action):
"""Applies the action and returns the new `TimeStep`."""
return self._step(action)
@abc.abstractmethod
def _reset(self):
"""Returns the current `TimeStep` after resetting the Environment."""
@abc.abstractmethod
def _current_time_step(self):
"""Returns the current `TimeStep`."""
@abc.abstractmethod
def _step(self, action):
"""Applies the action and returns the new `TimeStep`."""
Explanation: The wrapped discrete_action_env is an instance of py_environment.PyEnvironment and can be treated like any regular python environment.
TensorFlow environments
The interface for TF environments is defined in environments/tf_environment.TFEnvironment and looks very similar to the Python one. TF environments differ from python environments in two ways:
TF environments generate tensor objects instead of arrays
TF environments add a batch dimension to the generated tensors when compared to the specs.
Converting python environments into TF environments lets tensorflow parallelize operations. For example, one could define a collect_experience_op that collects data from the environment and adds it to a replay_buffer, and a train_op that reads from the replay_buffer and trains the agent, and then run the two naturally in parallel in TensorFlow.
End of explanation
env = suite_gym.load('CartPole-v0')
tf_env = tf_py_environment.TFPyEnvironment(env)
print(isinstance(tf_env, tf_environment.TFEnvironment))
print("TimeStep Specs:", tf_env.time_step_spec())
print("Action Specs:", tf_env.action_spec())
Explanation: The current_time_step() method returns the current time_step and initializes the environment if needed.
The reset() method forces a reset of the environment and returns the current_step.
If the action does not depend on the previous time_step, a tf.control_dependency is needed in Graph mode.
For now, let's look at how TFEnvironments are created.
Creating your own TensorFlow environment
This is more complicated than creating environments in Python, so we will not cover it here. An example is available elsewhere. The more common use case is to implement your environment in Python and wrap it as a TensorFlow environment with our TFPyEnvironment wrapper (see below).
Wrapping a Python environment in TensorFlow
We can easily wrap any Python environment into a TensorFlow environment with the TFPyEnvironment wrapper. (A small sketch inspecting the wrapped environment follows below.)
End of explanation
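As a small illustrative sketch (not in the original notebook), the batch dimension and the current_time_step()/reset() behavior described above can be inspected directly on the wrapped environment:
env = suite_gym.load('CartPole-v0')
tf_env = tf_py_environment.TFPyEnvironment(env)

# current_time_step() initializes the environment if needed and returns batched tensors.
time_step = tf_env.current_time_step()
print('Observation shape (note the leading batch dimension):', time_step.observation.shape)

# reset() forces a reset and returns the new current time_step.
time_step = tf_env.reset()
print('Reward after reset:', time_step.reward.numpy())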
env = suite_gym.load('CartPole-v0')
tf_env = tf_py_environment.TFPyEnvironment(env)
# reset() creates the initial time_step after resetting the environment.
time_step = tf_env.reset()
num_steps = 3
transitions = []
reward = 0
for i in range(num_steps):
action = tf.constant([i % 2])
# applies the action and returns the new TimeStep.
next_time_step = tf_env.step(action)
transitions.append([time_step, action, next_time_step])
reward += next_time_step.reward
time_step = next_time_step
np_transitions = tf.nest.map_structure(lambda x: x.numpy(), transitions)
print('\n'.join(map(str, np_transitions)))
print('Total reward:', reward.numpy())
Explanation: Note that the specs are now of type: (Bounded)TensorSpec.
Usage examples
Simple example
End of explanation
env = suite_gym.load('CartPole-v0')
tf_env = tf_py_environment.TFPyEnvironment(env)
time_step = tf_env.reset()
rewards = []
steps = []
num_episodes = 5
for _ in range(num_episodes):
episode_reward = 0
episode_steps = 0
while not time_step.is_last():
action = tf.random.uniform([1], 0, 2, dtype=tf.int32)
time_step = tf_env.step(action)
episode_steps += 1
episode_reward += time_step.reward.numpy()
rewards.append(episode_reward)
steps.append(episode_steps)
time_step = tf_env.reset()
num_steps = np.sum(steps)
avg_length = np.mean(steps)
avg_reward = np.mean(rewards)
print('num_episodes:', num_episodes, 'num_steps:', num_steps)
print('avg_length', avg_length, 'avg_reward:', avg_reward)
Explanation: Whole episodes
End of explanation |
13,855 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Example analysis of Spark metrics collected with sparkMeasure
This is an example analysis of workload metrics collected with sparkMeasure https
Step1: Read data from storage and register as Spark temporary view
Step3: Print aggregated metrics
Step5: Comments
Step7: Plot number of concurrent running tasks vs. time
Step9: Comments
Step12: Comment
Step14: Comment
Step15: Comment | Python Code:
# This is the file path and name where the metrics are stored
metrics_filename = "<path>/myPerfTaskMetrics1"
# This defines the time window for analysis
# when using metrics coming from taskMetrics.runAndMeasure,
# get the info from: taskMetrics.beginSnapshot and taskMetrics.endSnapshot
# if you don't have the details, set begin_time and end_time to 0
begin_time = 1490554321913
end_time = 1490554663808
# Initialize libraries used later for plotting
import matplotlib.pyplot as plt
import seaborn as sns; sns.set() # cosmetics
%matplotlib inline
Explanation: Example analysis of Spark metrics collected with sparkMeasure
This is an example analysis of workload metrics collected with sparkMeasure https://github.com/LucaCanali/sparkMeasure
Workload data is produced as described in Example 2 of the blog entry http://db-blog.web.cern.ch/blog/luca-canali/2017-03-measuring-apache-spark-workload-metrics-performance-troubleshooting
The details of how to generate the load and measurements are also reported at the end of this notebook.
This Jupyter notebook was generated running pyspark/Spark version 2.1.0
Author: [email protected], March 2017
Configuration
End of explanation
# Read the metrics from metrics_filename
# it assumes the file is in json format, if a different format is used, update the command
df = spark.read.json(metrics_filename)
# Register data into a temporary view: PerfTaskMetrics
# with some data manipulation:
# filter data to limit the time window for analysis
from pyspark.sql import functions as F
if (end_time == 0):
end_time = df.agg(F.max(df.finishTime)).collect()[0][0]
if (begin_time == 0):
begin_time = df.agg(F.min(df.launchTime)).collect()[0][0]
df.filter("launchTime >= {0} and finishTime <= {1}".format(begin_time, end_time)).createOrReplaceTempView("PerfTaskMetrics")
Explanation: Read data from storage and register as Spark temporary view
End of explanation
# Prints the aggregated values of the metrics using Pandas to display as HTML table
# this notebook was tested using Anaconda, so among others Pandas are imported by default
# Note that the metrics referring to time measurements are in millisecond
report = spark.sql("""
select count(*) numtasks, max(finishTime) - min(launchTime) as elapsedTime, sum(duration), sum(schedulerDelay),
sum(executorRunTime), sum(executorCpuTime), sum(executorDeserializeTime), sum(executorDeserializeCpuTime),
sum(resultSerializationTime), sum(jvmGCTime), sum(shuffleFetchWaitTime), sum(shuffleWriteTime), sum(gettingResultTime),
max(resultSize), sum(numUpdatedBlockStatuses), sum(diskBytesSpilled), sum(memoryBytesSpilled),
max(peakExecutionMemory), sum(recordsRead), sum(bytesRead), sum(recordsWritten), sum(bytesWritten),
sum(shuffleTotalBytesRead), sum(shuffleTotalBlocksFetched), sum(shuffleLocalBlocksFetched),
sum(shuffleRemoteBlocksFetched), sum(shuffleBytesWritten), sum(shuffleRecordsWritten)
from PerfTaskMetrics
""").toPandas().transpose()
report.columns=['Metric value']
report
Explanation: Print aggregated metrics
End of explanation
# Define the reference time range samples, as equispaced time intervals from begin_time and end_time
# define a temporary view hich will be used in the following SQL
# currently the time interval is hardcoded to 1 sec (= 1000 ms = 10^3 ms)
spark.sql("select id as time, int((id - {0})/1000) as time_normalized from range({0}, {1}, 1000)".
format(round(begin_time,-3), round(end_time,-3))).createOrReplaceTempView("TimeRange")
# For each reference time value taken from TimeRange, list the number of running tasks
# the output is a temporary view ConcurrentRunningTasks
spark.sql("""
select TimeRange.time_normalized as time, PTM.*
from TimeRange left outer join PerfTaskMetrics PTM
where TimeRange.time between PTM.launchTime and PTM.finishTime
order by TimeRange.time_normalized
).createOrReplaceTempView("ConcurrentRunningTasks")
Explanation: Comments: the report shows that the workload is CPU-bound. The execution time is dominated by work executing on CPU.
Finding: the job allocates 56 cores/tasks. However, the average amount of CPU used over the duration of the job can be calculated from the metrics as sum(executorCpuTime) / elapsedTime = 10190371 / 341393 ~ 30 (the short snippet below reproduces this arithmetic).
The additional drill-down in the following cells shows in more detail why the average CPU utilization is considerably lower than the available CPU.
Prepare data to compute number of concurrent running tasks vs. time
End of explanation
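To make the arithmetic above explicit, here is a small helper of my own; the two numbers are copied from the aggregated report printed earlier:
# Average CPU cores effectively used = total executor CPU time / wall-clock elapsed time.
sum_executor_cpu_time_ms = 10190371   # sum(executorCpuTime) from the report above
elapsed_time_ms = 341393              # elapsedTime from the report above

avg_cpu_used = sum_executor_cpu_time_ms / elapsed_time_ms
print("Average number of CPU cores effectively used: {:.1f} of the 56 allocated".format(avg_cpu_used))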
plot_num_running_tasks = spark.sql("""
select time, count(*) num_running_tasks
from ConcurrentRunningTasks
group by time
order by time
""").toPandas()
ax = plot_num_running_tasks.plot(x='time', y='num_running_tasks', linestyle='solid', linewidth=4, figsize=(12, 8))
ax.set_title("Number of running Spark tasks vs. Time", fontsize=24)
ax.set_xlabel("Time (sec)", fontsize=18)
ax.set_ylabel("Number of tasks running concurrently at a given time", fontsize=18)
plt.savefig("/home/luca/Spark/test/blog_image_orig.png")
Explanation: Plot number of concurrent running tasks vs. time
End of explanation
# load into Pandas the values of number of concurrent running tasks per host and time sample
# see also the heatmap visualization in the next cell
plot_heatmap_running_tasks_per_host = spark.sql("""
select time, host, count(*) num_running_tasks
from ConcurrentRunningTasks
group by time, host
order by time, host
""").toPandas()
pivoted_heatmapPandas = plot_heatmap_running_tasks_per_host.pivot(index='time', columns='host', values='num_running_tasks')
# plot heatmap
plt.figure(figsize=(16, 10))
ax = sns.heatmap(pivoted_heatmapPandas.T, cmap="YlGnBu", xticklabels=10)
ax.set_title("Heatmap: Number of concurrent tasks vs. Host name and Time", fontsize=24)
ax.set_xlabel("Time (sec)", fontsize=18)
ax.set_ylabel("Host name", fontsize=18)
plt.show()
Explanation: Comments: the graph of the number of active tasks as a function of time shows that the execution has a long tail and stragglers where the CPU utilization is low. For the first 150 seconds of the execution all available CPU is used (56 cores), then a slow ramp-down phase is seen, finally ending in a "long tail" with stragglers.
Heatmap of number of concurrent tasks as a function of host name and time
End of explanation
spark.sql("""
select host, min(duration), round(avg(duration),0), max(duration), sum(duration), count(*) num_tasks
from PerfTaskMetrics
group by host
order by 3 desc
""").show()
spark.sql("""
select host, avg(duration) avg_duration, max(duration) max_duration
from PerfTaskMetrics
group by host""").toPandas().plot(x='host', kind='bar', figsize=(12, 8))
Explanation: Comment: The heatmap shows that the execution suffers from a small number of stragglers. In particular, server13 is the last one to finish; second to last is server05.
Investigate task duration by host.
End of explanation
spark.sql("""
select host, avg(duration) avg_duration, avg(executorCpuTime) avg_CPU
from PerfTaskMetrics
group by host""").toPandas().plot(x='host', kind='bar', figsize=(12, 8))
Explanation: Comment: server13 and server14 are the slowest on average to execute the tasks for this workload. Additional investigations not reported here, reveal that server13 and server14 are of lower specs than the rest of the servers in the cluster.
End of explanation
spark.sql("desc PerfTaskMetrics").show(50,False)
Explanation: Comment: This graph reiterates the point that the workload is fully CPU-bound.
For reference, these are the metrics available. Continue your exploration from here!
End of explanation |
13,856 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<table width="100%" border="0">
<tr>
<td><img src="./images/ing.png" alt="" align="left" /></td>
<td><img src="./images/ucv.png" alt="" align="center" height="100" width="100" /></td>
<td><img src="./images/mec.png" alt="" align="right"/></td>
</tr>
</table>
<br>
<h1 style="text-align
Step1: Basic use of interact
At the most basic level, interact autogenerates UI controls for function arguments, and then calls the function with those arguments when you manipulate the controls interactively. To use interact, you need to define a function that you want to explore. Here is a function that prints its only argument x.
Step2: When you pass this function as the first argument to interact along with an integer keyword argument (x=10), a slider is generated and bound to the function.
Step3: When you move the slider, the function is called and the current value of x is printed.
If we pass True or False, interact generates a checkbox
Step4: If we pass a text string, interact generates a text field
Step5: Fixing arguments with fixed
There are times when we may want to explore a function using interact but keep one or more of its arguments fixed at specific values. This can be done with the fixed function.
Step6: When we call interact, we pass fixed(20) to keep q fixed at the value 20.
Step7: Note that the slider only responds to p and that the value of q is fixed at 20.
Widget abbreviation
When we pass an integer argument (x=10) to interact, an integer-valued slider is generated with the range $[-10,+3\times10]$. In this case, 10 is an abbreviation for a slider widget of the form
Step8: This example clarifies how interact processes its keyword arguments
Step9: If a tuple of three integers (min,max,step) is passed, the step size is also set
Step10: For the widget to return floating-point values, we must pass a tuple of floats. In the following example, the minimum is 0.0, the maximum is 10.0 and the step is 0.1 (by default).
Step11: Again, the step size can be set with a third element in the tuple
Step12: For both integer and float sliders, the initial value of the widget can be chosen by passing a default keyword argument to the underlying Python function. Here we set the initial value of a float slider to 5.5.
Step13: We can also create dropdown menus by passing a tuple of strings. In this case, the strings are used as the names in the dropdown menu UI and are passed to the underlying Python function.
Step14: If you want a dropdown menu that passes non-string values to the Python function, you can pass a dictionary. The dictionary keys are used as the names in the dropdown menu UI, and the values are the arguments passed to the underlying Python function. This is easier to understand with an example
Step15: Interactive widgets and Matplotlib
As you already know, IPython works wonderfully with Matplotlib. Let's see how we can combine interactive widgets with this graphics library.
Review
Step16: Making an interactive plot
In the following example, we combine a function that plots the sum of two sine waves with interact.
Step17: References
IPython examples | Python Code:
from __future__ import print_function
from IPython.html.widgets import interact, interactive, fixed
from IPython.html import widgets
Explanation: <table width="100%" border="0">
<tr>
<td><img src="./images/ing.png" alt="" align="left" /></td>
<td><img src="./images/ucv.png" alt="" align="center" height="100" width="100" /></td>
<td><img src="./images/mec.png" alt="" align="right"/></td>
</tr>
</table>
<br>
<h1 style="text-align: center;"> Curso de Python para Ingenieros Mecánicos </h1>
<h3 style="text-align: center;"> Por: Eduardo Vieira</h3>
<br>
<br>
<h1 style="text-align: center;"> Uso de widgets interactivos en Jupyter </h1>
<br>
Having learned how to use the key scientific Python libraries (NumPy, matplotlib and SymPy), with the interactive modules of IPython / Jupyter we can obtain very high-quality, highly customizable interactive results.
The interact function (IPython.html.widgets.interact) automatically creates, with a single command, a graphical user interface (GUI) for exploring code and data interactively. It is the easiest way to start using IPython widgets.
_ This notebook is a partial translation of a Jupyter tutorial given by its developers at Strata Silicon Valley 2015. Let's start with the simplest part._
First, we import the modules we are going to use:
End of explanation
def f(x):
print(x)
Explanation: Basic use of interact
At the most basic level, interact autogenerates UI controls for function arguments, and then calls the function with those arguments when you manipulate the controls interactively. To use interact, you need to define a function that you want to explore. Here is a function that prints its only argument x:
End of explanation
interact(f, x=10);
Explanation: When you pass this function as the first argument to interact along with an integer keyword argument (x=10), a slider is generated and bound to the function.
End of explanation
interact(f, x=True);
Explanation: When you move the slider, the function is called and the current value of x is printed.
If we pass True or False, interact generates a checkbox:
End of explanation
interact(f, x=u'¡Hola!');
Explanation: If we pass a text string, interact generates a text field:
End of explanation
def h(p, q):
print(p, q)
Explanation: Fixing arguments with fixed
There are times when we may want to explore a function using interact but keep one or more of its arguments fixed at specific values. This can be done with the fixed function.
End of explanation
interact(h, p=5, q=fixed(20));
Explanation: When we call interact, we pass fixed(20) to keep q fixed at the value 20.
End of explanation
interact(f, x=widgets.IntSlider(min=-10,max=30,step=1,value=10));
Explanation: Note that the slider only responds to p and that the value of q is fixed at 20.
Widget abbreviation
When we pass an integer argument (x=10) to interact, an integer-valued slider is generated with the range $[-10,+3\times10]$. In this case, 10 is an abbreviation for a slider widget of the form:
python
IntSliderWidget(min=-10,max=30,step=1,value=10)
In fact, we can get the same result by passing this IntSliderWidget as the keyword argument for x:
End of explanation
interact(f, x=(0,4));
Explanation: This example clarifies how interact processes its keyword arguments:
If the keyword argument is a widget with a value attribute, that widget is used; any widget with a value attribute can be used, even custom ones.
Otherwise, the value is treated as a widget abbreviation that is converted into a widget before it is used.
The following table gives an overview of the different widget abbreviations:
<table class="table table-condensed table-bordered">
<tr><td><strong>Keyword argument</strong></td><td><strong>Widget</strong></td></tr>
<tr><td>`True` or `False`</td><td>CheckboxWidget</td></tr>
<tr><td>`'Hola'`</td><td>TextareaWidget</td></tr>
<tr><td>`value` or `(min,max)` or `(min,max,step)` if integers are passed</td><td>IntSliderWidget</td></tr>
<tr><td>`value` or `(min,max)` or `(min,max,step)` if floats are passed</td><td>FloatSliderWidget</td></tr>
<tr><td>`('naranja','manzana')` or `{'uno':1,'dos':2}`</td><td>Dropdown</td></tr>
</table>
We saw above how the checkbox and textarea widgets work. Now let's look in more detail at the different abbreviations for sliders and dropdown menus.
If a tuple of two integers (min, max) is passed, an integer-valued slider is created using them as the minimum and maximum. In this case, the default step size of 1 is used.
End of explanation
interact(f, x=(0,8,2));
Explanation: If a tuple of three integers (min,max,step) is passed, the step size is also set:
End of explanation
interact(f, x=(0.0,10.0));
Explanation: For the widget to return floating-point values, we must pass a tuple of floats. In the following example, the minimum is 0.0, the maximum is 10.0 and the step is 0.1 (by default).
End of explanation
interact(f, x=(0.0,10.0,0.01));
Explanation: Again, the step size can be set with a third element in the tuple:
End of explanation
def h(x=5.5):
print(x)
interact(h, x=(0.0,20.0,0.5));
Explanation: For both integer and float sliders, the initial value of the widget can be chosen by passing a default keyword argument to the underlying Python function. Here we set the initial value of a float slider to 5.5.
End of explanation
interact(f, x=['manzanas','naranjas']);
Explanation: We can also create dropdown menus by passing a tuple of strings. In this case, the strings are used as the names in the dropdown menu UI and are passed to the underlying Python function.
End of explanation
interact(f, x={'uno': 10, 'dos': 20});
Explanation: If you want a dropdown menu that passes non-string values to the Python function, you can pass a dictionary. The dictionary keys are used as the names in the dropdown menu UI, and the values are the arguments passed to the underlying Python function. This is easier to understand with this example:
End of explanation
from __future__ import print_function
from IPython.html.widgets import interact, interactive, fixed
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
# Change the default plot style (optional)
plt.style.use('ggplot')
# Create an array of x values
x = np.linspace(0, 3*np.pi, 500)
y = np.sin(x**2)
plt.plot(x, y)
plt.title(u'Un gráfico simple');
Explanation: Interactive widgets and Matplotlib
As you already know, IPython works wonderfully with Matplotlib. Let's see how we can combine interactive widgets with this graphics library.
Review: creating a plot
As a refresher, let's see how we created a simple plot. First we need to load the libraries:
End of explanation
def dibuja_ondas(frequencia1, frequencia2):
x = np.linspace(0, 3*np.pi, 500)
y = np.sin(x*frequencia1) + np.sin(x*frequencia2)
plt.plot(x,y)
plt.title(u'¡Mira mamá, gráficos interactivos!')
interact( dibuja_ondas, frequencia1=20., frequencia2=21.)
Explanation: Making an interactive plot
In the following example, we combine a function that plots the sum of two sine waves with interact. (A short sketch of the related interactive function follows below.)
End of explanation
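The interactive function is imported at the top of this notebook but never used. As a hedged aside (not part of the original tutorial), it works like interact but returns the widget instead of displaying it immediately, so the last arguments and result can be read back later:
from IPython.display import display

def add_numbers(a, b):
    return a + b

# interactive() returns a container widget; display it explicitly.
w = interactive(add_numbers, a=10, b=20)
display(w)

# After moving the sliders, the last keyword arguments and return value are kept on the widget.
print(w.kwargs)
print(w.result)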
# This cell applies the notebook's CSS style
from IPython.core.display import HTML
css_file = './css/aeropython.css'
HTML(open(css_file, "r").read())
Explanation: References
IPython examples
End of explanation |
13,857 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Visual Diagnosis of Text Analysis with Baleen
This notebook has been created as part of the Yellowbrick user study. I hope to explore how visual methods might improve the workflow of text classification on a small to medium sized corpus.
Dataset
The dataset used in this study is a sample of the Baleen Corpus. The Baleen corpus has been ingesting RSS feeds on the hour from a variety of topical feeds since March 2016, including news, hobbies, and political documents and currently has over 1.2M posts from 373 feeds. Baleen (an open source system) has a sister library called Minke that provides multiprocessing support for dealing with Gigabytes worth of text.
The dataset I'll use in this study is a sample of the larger data set that contains 68,052 or roughly 6% of the total corpus. For this test, I've chosen to use the preprocessed corpus, which means I won't have to do any tokenization, but can still apply normalization techniques. The corpus is described as follows
Step4: Loading Data
In order to load data, I'd typically use a CorpusReader. However, for the sake of simplicity, I'll load data using some simple Python generator functions. I need to create two primary methods, the first loads the documents using pickle, and the second returns the vector of targets for supervised learning.
Step8: Feature Extraction and Normalization
In order to conduct analyses with Scikit-Learn, I'll need some helper transformers to modify the loaded data into a form that can be used by the sklearn.feature_extraction text transformers. I'll be mostly using the CountVectorizer and TfidfVectorizer, so these normalizer transformers and identity functions help a lot.
Step9: Corpus Analysis
At this stage, I'd like to get a feel for what was in my corpus, so that I can start thinking about how to best vectorize the text and do different types of counting. With the Yellowbrick 0.3.3 release, support has been added for two text visualizers, which I think I will test out at scale using this corpus.
Step10: Classification
The primary task for this kind of corpus is classification - sentiment analysis, etc. | Python Code:
%matplotlib inline
import os
import sys
import nltk
import pickle
# To import yellowbrick
sys.path.append("../..")
Explanation: Visual Diagnosis of Text Analysis with Baleen
This notebook has been created as part of the Yellowbrick user study. I hope to explore how visual methods might improve the workflow of text classification on a small to medium sized corpus.
Dataset
The dataset used in this study is a sample of the Baleen Corpus. The Baleen corpus has been ingesting RSS feeds on the hour from a variety of topical feeds since March 2016, including news, hobbies, and political documents and currently has over 1.2M posts from 373 feeds. Baleen (an open source system) has a sister library called Minke that provides multiprocessing support for dealing with Gigabytes worth of text.
The dataset I'll use in this study is a sample of the larger data set that contains 68,052 or roughly 6% of the total corpus. For this test, I've chosen to use the preprocessed corpus, which means I won't have to do any tokenization, but can still apply normalization techniques. The corpus is described as follows:
Baleen corpus contains 68,052 files in 12 categories.
Structured as:
1,200,378 paragraphs (17.639 mean paragraphs per file)
2,058,635 sentences (1.715 mean sentences per paragraph).
Word count of 44,821,870 with a vocabulary of 303,034 (147.910 lexical diversity).
Category Counts:
books: 1,700 docs
business: 9,248 docs
cinema: 2,072 docs
cooking: 733 docs
data science: 692 docs
design: 1,259 docs
do it yourself: 2,620 docs
gaming: 2,884 docs
news: 33,253 docs
politics: 3,793 docs
sports: 4,710 docs
tech: 5,088 docs
This is quite a lot of data, so for now we'll simply create a classifier for the "hobbies" categories: e.g. books, cinema, cooking, diy, gaming, and sports.
Note: this data set is not currently publicly available, but I am happy to provide it on request.
End of explanation
CORPUS_ROOT = os.path.join(os.getcwd(), "data")
CATEGORIES = ["books", "cinema", "cooking", "diy", "gaming", "sports"]
def fileids(root=CORPUS_ROOT, categories=CATEGORIES):
"""Fetch the paths, filtering on categories (pass None for all)."""
for name in os.listdir(root):
dpath = os.path.join(root, name)
if not os.path.isdir(dpath):
continue
if categories and name in categories:
for fname in os.listdir(dpath):
yield os.path.join(dpath, fname)
def documents(root=CORPUS_ROOT, categories=CATEGORIES):
"""Load the pickled documents and yield one at a time."""
for path in fileids(root, categories):
with open(path, 'rb') as f:
yield pickle.load(f)
def labels(root=CORPUS_ROOT, categories=CATEGORIES):
"""Return a list of the labels associated with each document."""
for path in fileids(root, categories):
dpath = os.path.dirname(path)
yield dpath.split(os.path.sep)[-1]
Explanation: Loading Data
In order to load data, I'd typically use a CorpusReader. However, for the sake of simplicity, I'll load data using some simple Python generator functions. I need to create two primary methods, the first loads the documents using pickle, and the second returns the vector of targets for supervised learning.
End of explanation
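As a quick sanity check of my own (not part of the original analysis), the generators above can be used to count documents per hobby category before any feature extraction:
from collections import Counter

# Count documents per category using the labels() generator defined above.
category_counts = Counter(labels())
for category, count in sorted(category_counts.items()):
    print("{:>8}: {:,} docs".format(category, count))
print("   Total: {:,} docs".format(sum(category_counts.values())))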
from nltk.corpus import wordnet as wn
from nltk.stem import WordNetLemmatizer
from unicodedata import category as ucat
from nltk.corpus import stopwords as swcorpus
from sklearn.base import BaseEstimator, TransformerMixin
def identity(args):
"""The identity function is used as the "tokenizer" for
pre-tokenized text. It just passes back its arguments."""
return args
def is_punctuation(token):
"""Returns true if all characters in the token are
unicode punctuation (works for most punct)."""
return all(
ucat(c).startswith('P')
for c in token
)
def wnpos(tag):
"""Returns the wn part of speech tag from the penn treebank tag."""
return {
"N": wn.NOUN,
"V": wn.VERB,
"J": wn.ADJ,
"R": wn.ADV,
}.get(tag[0], wn.NOUN)
class TextNormalizer(BaseEstimator, TransformerMixin):
def __init__(self, stopwords='english', lowercase=True, lemmatize=True, depunct=True):
self.stopwords = frozenset(swcorpus.words(stopwords)) if stopwords else frozenset()
self.lowercase = lowercase
self.depunct = depunct
self.lemmatizer = WordNetLemmatizer() if lemmatize else None
def fit(self, docs, labels=None):
return self
def transform(self, docs):
for doc in docs:
yield list(self.normalize(doc))
def normalize(self, doc):
for paragraph in doc:
for sentence in paragraph:
for token, tag in sentence:
if token.lower() in self.stopwords:
continue
if self.depunct and is_punctuation(token):
continue
if self.lowercase:
token = token.lower()
if self.lemmatizer:
token = self.lemmatizer.lemmatize(token, wnpos(tag))
yield token
Explanation: Feature Extraction and Normalization
In order to conduct analyses with Scikit-Learn, I'll need some helper transformers to modify the loaded data into a form that can be used by the sklearn.feature_extraction text transformers. I'll be mostly using the CountVectorizer and TfidfVectorizer, so these normalizer transformers and identity functions help a lot.
End of explanation
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer
from yellowbrick.text import FreqDistVisualizer
visualizer = Pipeline([
('norm', TextNormalizer()),
('count', CountVectorizer(tokenizer=lambda x: x, preprocessor=None, lowercase=False)),
('viz', FreqDistVisualizer())
])
visualizer.fit_transform(documents(), labels())
visualizer.named_steps['viz'].poof()
vect = Pipeline([
('norm', TextNormalizer()),
('count', CountVectorizer(tokenizer=lambda x: x, preprocessor=None, lowercase=False)),
])
docs = vect.fit_transform(documents(), labels())
viz = FreqDistVisualizer()
viz.fit(docs, vect.named_steps['count'].get_feature_names())
viz.poof()
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from yellowbrick.text import TSNEVisualizer
vect = Pipeline([
('norm', TextNormalizer()),
('tfidf', TfidfVectorizer(tokenizer=lambda x: x, preprocessor=None, lowercase=False)),
])
docs = vect.fit_transform(documents(), labels())
viz = TSNEVisualizer()
viz.fit(docs, labels())
viz.poof()
Explanation: Corpus Analysis
At this stage, I'd like to get a feel for what was in my corpus, so that I can start thinking about how to best vectorize the text and do different types of counting. With the Yellowbrick 0.3.3 release, support has been added for two text visualizers, which I think I will test out at scale using this corpus.
End of explanation
from sklearn.model_selection import train_test_split as tts
docs_train, docs_test, labels_train, labels_test = tts(docs, list(labels()), test_size=0.2)
from sklearn.linear_model import LogisticRegression
from yellowbrick.classifier import ClassBalance, ClassificationReport, ROCAUC
logit = LogisticRegression()
logit.fit(docs_train, labels_train)
logit_balance = ClassBalance(logit, classes=set(labels_test))
logit_balance.score(docs_test, labels_test)
logit_balance.poof()
logit_balance = ClassificationReport(logit, classes=set(labels_test))
logit_balance.score(docs_test, labels_test)
logit_balance.poof()
logit_balance = ClassificationReport(LogisticRegression())
logit_balance.fit(docs_train, labels_train)
logit_balance.score(docs_test, labels_test)
logit_balance.poof()
logit_balance = ROCAUC(logit)
logit_balance.score(docs_test, labels_test)
logit_balance.poof()
Explanation: Classification
The primary task for this kind of corpus is classification - sentiment analysis, etc.
End of explanation |
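As a possible next step (a hedged sketch of mine, assuming a Yellowbrick release that ships ConfusionMatrix), a per-class confusion matrix shows where the classifier mixes up the hobby categories:
from yellowbrick.classifier import ConfusionMatrix

# Visualize per-class confusion for a logistic regression model.
cm = ConfusionMatrix(LogisticRegression(), classes=list(set(labels_test)))
cm.fit(docs_train, labels_train)
cm.score(docs_test, labels_test)
cm.poof()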
13,858 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Kaggle San Francisco Crime Classification
Berkeley MIDS W207 Final Project
Step1: Local, individual load of updated data set (with weather data integrated) into training, development, and test subsets.
Step2: Logistic Regression
Hyperparameter tuning
Step3: LR with L1-Penalty Hyperparameter Tuning
Step4: Dataframe for Coefficients
Step5: Plot for Coefficients
Step6: LR with L2-Penalty Hyperparameter Tuning
Step7: Dataframe for Coefficients
Step8: Plot of Coefficients | Python Code:
# Additional Libraries
%matplotlib inline
import matplotlib.pyplot as plt
# Import relevant libraries:
import time
import numpy as np
import pandas as pd
from sklearn.neighbors import KNeighborsClassifier
from sklearn import preprocessing
from sklearn.preprocessing import MinMaxScaler
from sklearn.preprocessing import StandardScaler
from sklearn.naive_bayes import BernoulliNB
from sklearn.naive_bayes import MultinomialNB
from sklearn.naive_bayes import GaussianNB
from sklearn.grid_search import GridSearchCV
from sklearn.metrics import classification_report
from sklearn.metrics import log_loss
from sklearn.linear_model import LogisticRegression
from sklearn import svm
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
# Import Meta-estimators
from sklearn.ensemble import AdaBoostClassifier
from sklearn.ensemble import BaggingClassifier
from sklearn.ensemble import GradientBoostingClassifier
# Import Calibration tools
from sklearn.calibration import CalibratedClassifierCV
# Set random seed and format print output:
np.random.seed(0)
np.set_printoptions(precision=3)
Explanation: Kaggle San Francisco Crime Classification
Berkeley MIDS W207 Final Project: Sam Goodgame, Sarah Cha, Kalvin Kao, Bryan Moore
Environment and Data
End of explanation
# Data path to your local copy of Kalvin's "x_data.csv", which was produced by the negated cell above
data_path = "./data/x_data_3.csv"
df = pd.read_csv(data_path, header=0)
x_data = df.drop('category', 1)
y = df.category.as_matrix()
# Impute missing values with mean values:
#x_complete = df.fillna(df.mean())
x_complete = x_data.fillna(x_data.mean())
X_raw = x_complete.as_matrix()
# Scale the data between 0 and 1:
X = MinMaxScaler().fit_transform(X_raw)
# Shuffle data to remove any underlying pattern that may exist. Must re-run random seed step each time:
np.random.seed(0)
shuffle = np.random.permutation(np.arange(X.shape[0]))
X, y = X[shuffle], y[shuffle]
print(np.where(y == 'TREA'))
print(np.where(y == 'PORNOGRAPHY/OBSCENE MAT'))
## Due to difficulties with log loss and set(y_pred) needing to match set(labels), we will remove the extremely rare
## crimes from the data for quality issues.
#X_minus_trea = X[np.where(y != 'TREA')]
#y_minus_trea = y[np.where(y != 'TREA')]
#X_final = X_minus_trea[np.where(y_minus_trea != 'PORNOGRAPHY/OBSCENE MAT')]
#y_final = y_minus_trea[np.where(y_minus_trea != 'PORNOGRAPHY/OBSCENE MAT')]
## Separate training, dev, and test data:
#test_data, test_labels = X_final[800000:], y_final[800000:]
#dev_data, dev_labels = X_final[700000:800000], y_final[700000:800000]
#train_data, train_labels = X_final[100000:700000], y_final[100000:700000]
#calibrate_data, calibrate_labels = X_final[:100000], y_final[:100000]
test_data, test_labels = X[800000:], y[800000:]
dev_data, dev_labels = X[700000:800000], y[700000:800000]
#train_data, train_labels = X[100000:700000], y[100000:700000]
train_data, train_labels = X[:700000], y[:700000]
#calibrate_data, calibrate_labels = X[:100000], y[:100000]
# Create mini versions of the above sets
#mini_train_data, mini_train_labels = X_final[:20000], y_final[:20000]
#mini_calibrate_data, mini_calibrate_labels = X_final[19000:28000], y_final[19000:28000]
#mini_dev_data, mini_dev_labels = X_final[49000:60000], y_final[49000:60000]
#mini_train_data, mini_train_labels = X[:20000], y[:20000]
mini_train_data, mini_train_labels = X[:200000], y[:200000]
#mini_calibrate_data, mini_calibrate_labels = X[19000:28000], y[19000:28000]
mini_dev_data, mini_dev_labels = X[430000:480000], y[430000:480000]
## Create list of the crime type labels. This will act as the "labels" parameter for the log loss functions that follow
#crime_labels = list(set(y_final))
#crime_labels_mini_train = list(set(mini_train_labels))
#crime_labels_mini_dev = list(set(mini_dev_labels))
#crime_labels_mini_calibrate = list(set(mini_calibrate_labels))
#print(len(crime_labels), len(crime_labels_mini_train), len(crime_labels_mini_dev),len(crime_labels_mini_calibrate))
crime_labels = list(set(y))
crime_labels_mini_train = list(set(mini_train_labels))
crime_labels_mini_dev = list(set(mini_dev_labels))
#crime_labels_mini_calibrate = list(set(mini_calibrate_labels))
#print(len(crime_labels), len(crime_labels_mini_train), len(crime_labels_mini_dev),len(crime_labels_mini_calibrate))
print(len(crime_labels), len(crime_labels_mini_train), len(crime_labels_mini_dev))
print(len(train_data),len(train_labels))
print(len(dev_data),len(dev_labels))
print(len(mini_train_data),len(mini_train_labels))
print(len(mini_dev_data),len(mini_dev_labels))
print(len(test_data),len(test_labels))
#print(len(mini_calibrate_data),len(mini_calibrate_labels))
#print(len(calibrate_data),len(calibrate_labels))
Explanation: Local, individual load of updated data set (with weather data integrated) into training, development, and test subsets.
End of explanation
#log_reg = LogisticRegression(penalty='l1').fit(mini_train_data, mini_train_labels)
#log_reg = LogisticRegression().fit(mini_train_data, mini_train_labels)
#eval_prediction_probabilities = log_reg.predict_proba(mini_dev_data)
#eval_predictions = log_reg.predict(mini_dev_data)
#print("Multi-class Log Loss:", log_loss(y_true = mini_dev_labels, y_pred = eval_prediction_probabilities, labels = crime_labels_mini_dev), "\n\n")
#columns = ['hour_of_day','dayofweek',\
# 'x','y','bayview','ingleside','northern',\
# 'central','mission','southern','tenderloin',\
# 'park','richmond','taraval','HOURLYDRYBULBTEMPF',\
# 'HOURLYRelativeHumidity','HOURLYWindSpeed',\
# 'HOURLYSeaLevelPressure','HOURLYVISIBILITY',\
# 'Daylight']
##print(len(columns))
#allCoefs = pd.DataFrame(index=columns)
#for a in range(len(log_reg.coef_)):
# #print(crime_labels_mini_dev[a])
# #print(pd.DataFrame(log_reg.coef_[a], index=columns))
# allCoefs[crime_labels_mini_dev[a]] = log_reg.coef_[a]
# #print()
#allCoefs
#%matplotlib inline
#import matplotlib.pyplot as plt
#
#f = plt.figure(figsize=(15,8))
#allCoefs.plot(kind='bar', figsize=(15,8))
#plt.legend(loc='center left', bbox_to_anchor=(1.0,0.5))
#plt.show()
Explanation: Logistic Regression
Hyperparameter tuning:
For the Logistic Regression classifier, we can seek to optimize the following classifier parameters: penalty (l1 or l2), C (inverse of regularization strength), solver ('newton-cg', 'lbfgs', 'liblinear', or 'sag')
Model calibration:
See above
End of explanation
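The calibration step mentioned above is not shown in this section. The following is a minimal hedged sketch using the CalibratedClassifierCV imported earlier; the choice of splits (dev data for calibration, test data for evaluation) is an assumption for illustration only:
# Calibrate the probabilities of an already-fitted model ('prefit') on a held-out split,
# then evaluate multi-class log loss on another split.
base_lr = LogisticRegression(penalty='l2', C=1.0)
base_lr.fit(train_data, train_labels)

calibrated_lr = CalibratedClassifierCV(base_lr, method='sigmoid', cv='prefit')
calibrated_lr.fit(dev_data, dev_labels)          # stand-in calibration split for this sketch

calibrated_probs = calibrated_lr.predict_proba(test_data)
print("Calibrated Multi-class Log Loss:",
      log_loss(y_true=test_labels, y_pred=calibrated_probs, labels=crime_labels))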
# C must be strictly positive for LogisticRegression, so 0 is dropped from the grid.
lr_param_grid_1 = {'C': [0.0001, 0.001, 0.01, 0.1, 0.5, 1.0, 5.0, 10.0]}
#lr_param_grid_1 = {'C': [0.0001, 0.01, 0.5, 5.0, 10.0]}
LR_l1 = GridSearchCV(LogisticRegression(penalty='l1'), param_grid=lr_param_grid_1, scoring='neg_log_loss')
LR_l1.fit(train_data, train_labels)
print('L1: best C value:', str(LR_l1.best_params_['C']))
LR_l1_prediction_probabilities = LR_l1.predict_proba(dev_data)
LR_l1_predictions = LR_l1.predict(dev_data)
print("L1 Multi-class Log Loss:", log_loss(y_true = dev_labels, y_pred = LR_l1_prediction_probabilities, labels = crime_labels), "\n\n")
Explanation: LR with L1-Penalty Hyperparameter Tuning
End of explanation
columns = ['hour_of_day','dayofweek',\
'x','y','bayview','ingleside','northern',\
'central','mission','southern','tenderloin',\
'park','richmond','taraval','HOURLYDRYBULBTEMPF',\
'HOURLYRelativeHumidity','HOURLYWindSpeed',\
'HOURLYSeaLevelPressure','HOURLYVISIBILITY',\
'Daylight']
allCoefsL1 = pd.DataFrame(index=columns)
# GridSearchCV does not expose coef_ directly; use the refit best estimator.
for a in range(len(LR_l1.best_estimator_.coef_)):
allCoefsL1[crime_labels[a]] = LR_l1.best_estimator_.coef_[a]
allCoefsL1
Explanation: Dataframe for Coefficients
End of explanation
f = plt.figure(figsize=(15,8))
allCoefsL1.plot(kind='bar', figsize=(15,8))
plt.legend(loc='center left', bbox_to_anchor=(1.0,0.5))
plt.show()
Explanation: Plot for Coefficients
End of explanation
# C must be strictly positive for LogisticRegression, so 0 is dropped from the grid.
lr_param_grid_2 = {'C': [0.0001, 0.001, 0.01, 0.1, 0.5, 1.0, 5.0, 10.0], \
'solver':['liblinear','newton-cg','lbfgs', 'sag']}
LR_l2 = GridSearchCV(LogisticRegression(penalty='l2'), param_grid=lr_param_grid_2, scoring='neg_log_loss')
LR_l2.fit(train_data, train_labels)
print('L2: best C value:', str(LR_l2.best_params_['C']))
print('L2: best solver:', str(LR_l2.best_params_['solver']))
LR_l2_prediction_probabilities = LR_l2.predict_proba(dev_data)
LR_l2_predictions = LR_l2.predict(dev_data)
print("L2 Multi-class Log Loss:", log_loss(y_true = dev_labels, y_pred = LR_l2_prediction_probabilities, labels = crime_labels), "\n\n")
Explanation: LR with L2-Penalty Hyperparameter Tuning
End of explanation
columns = ['hour_of_day','dayofweek',\
'x','y','bayview','ingleside','northern',\
'central','mission','southern','tenderloin',\
'park','richmond','taraval','HOURLYDRYBULBTEMPF',\
'HOURLYRelativeHumidity','HOURLYWindSpeed',\
'HOURLYSeaLevelPressure','HOURLYVISIBILITY',\
'Daylight']
allCoefsL2 = pd.DataFrame(index=columns)
# GridSearchCV does not expose coef_ directly; use the refit best estimator.
for a in range(len(LR_l2.best_estimator_.coef_)):
allCoefsL2[crime_labels[a]] = LR_l2.best_estimator_.coef_[a]
allCoefsL2
Explanation: Dataframe for Coefficients
End of explanation
f = plt.figure(figsize=(15,8))
allCoefsL2.plot(kind='bar', figsize=(15,8))
plt.legend(loc='center left', bbox_to_anchor=(1.0,0.5))
plt.show()
Explanation: Plot of Coefficients
End of explanation |
13,859 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Convolutional Autoencoder
Sticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data.
Step1: Network Architecture
The encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.
<img src='assets/convolutional_autoencoder.png' width=500px>
Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data.
What's going on with the decoder
Okay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers aren't. Usually, you'll see transposed convolution layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the transposed convolution layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 path in a transposed convolution layer. The TensorFlow API provides us with an easy way to create the layers, tf.nn.conv2d_transpose.
However, transposed convolution layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In this Distill article from Augustus Odena, et al, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with tf.image.resize_images, followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.
Exercise
Step2: Training
As before, here we'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays.
Step3: Denoising
As I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images.
Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.
Exercise
Step4: Checking out the performance
Here I'm adding noise to the test images and passing them through the autoencoder. It does a suprisingly great job of removing the noise, even though it's sometimes difficult to tell what the original number is. | Python Code:
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
Explanation: Convolutional Autoencoder
Sticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data.
End of explanation
learning_rate = 0.001
# Input and target placeholders
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1))
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1))
### Encoder
conv1 = tf.layers.conv2d(inputs_, 8, (5, 5), padding='same', activation=tf.nn.relu)
# Now 28x28x8
maxpool1 = tf.layers.max_pooling2d(conv1, (2, 2), (2, 2), padding='same')
# Now 14x14x8
conv2 = tf.layers.conv2d(maxpool1, 16, (5, 5), padding='same', activation=tf.nn.relu)
# Now 14x14x16
maxpool2 = tf.layers.max_pooling2d(conv2, (2, 2), (2, 2), padding='same')
# Now 7x7x16
conv3 = tf.layers.conv2d(maxpool2, 32, (5, 5), padding='same', activation=tf.nn.relu)
# Now 7x7x32
encoded = tf.layers.max_pooling2d(conv3, (2, 2), (2, 2), padding='same')
# Now 4x4x32
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7, 7))
# Now 7x7x32
conv4 = tf.layers.conv2d(upsample1, 8, (5, 5), padding='same', activation=tf.nn.relu)
# Now 7x7x8
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14, 14))
# Now 14x14x8
conv5 = tf.layers.conv2d(upsample2, 16, (5, 5), padding='same', activation=tf.nn.relu)
# Now 14x14x16
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28, 28))
# Now 28x28x16
conv6 = tf.layers.conv2d(upsample3, 32, (5, 5), padding='same', activation=tf.nn.relu)
# Now 28x28x32
logits = tf.layers.conv2d(conv6, 1, (3,3), padding='same', activation=None)
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits)
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
Explanation: Network Architecture
The encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.
<img src='assets/convolutional_autoencoder.png' width=500px>
Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data.
What's going on with the decoder
Okay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers aren't. Usually, you'll see transposed convolution layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the transposed convolution layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 path in a transposed convolution layer. The TensorFlow API provides us with an easy way to create the layers, tf.nn.conv2d_transpose.
However, transposed convolution layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In this Distill article from Augustus Odena, et al, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with tf.image.resize_images, followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.
Exercise: Build the network shown above. Remember that a convolutional layer with strides of 1 and 'same' padding won't reduce the height and width. That is, if the input is 28x28 and the convolution layer has stride = 1 and 'same' padding, the convolutional layer will also be 28x28. The max-pool layers are used the reduce the width and height. A stride of 2 will reduce the size by a factor of 2. Odena et al claim that nearest neighbor interpolation works best for the upsampling, so make sure to include that as a parameter in tf.image.resize_images or use tf.image.resize_nearest_neighbor. For convolutional layers, use tf.layers.conv2d. For example, you would write conv1 = tf.layers.conv2d(inputs, 32, (5,5), padding='same', activation=tf.nn.relu) for a layer with a depth of 32, a 5x5 kernel, stride of (1,1), padding is 'same', and a ReLU activation. Similarly, for the max-pool layers, use tf.layers.max_pooling2d.
End of explanation
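For comparison with the upsample-then-convolve decoder used above, a single stage built from the transposed convolutions discussed earlier could look like the hedged sketch below; it is not part of the network trained in this notebook:
# A stride-2 transposed convolution doubles height/width, e.g. 7x7x16 -> 14x14x16.
# Keeping kernel size equal to the stride avoids kernel overlap, one way to reduce
# the checkerboard artifacts mentioned in the Distill article.
deconv_example = tf.layers.conv2d_transpose(
    tf.zeros((1, 7, 7, 16)),      # dummy input just to show the shape change
    filters=16,
    kernel_size=(2, 2),
    strides=(2, 2),
    padding='same',
    activation=tf.nn.relu)
print(deconv_example.shape)       # expected: (1, 14, 14, 16)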
sess = tf.Session()
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
imgs = batch[0].reshape((-1, 28, 28, 1))
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed = sess.run(decoded, feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
Explanation: Training
As before, here we'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays.
End of explanation
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
conv1 = tf.layers.conv2d(inputs_, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x32
maxpool1 = tf.layers.max_pooling2d(conv1, (2,2), (2,2), padding='same')
# Now 14x14x32
conv2 = tf.layers.conv2d(maxpool1, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x32
maxpool2 = tf.layers.max_pooling2d(conv2, (2,2), (2,2), padding='same')
# Now 7x7x32
conv3 = tf.layers.conv2d(maxpool2, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x16
encoded = tf.layers.max_pooling2d(conv3, (2,2), (2,2), padding='same')
# Now 4x4x16
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7,7))
# Now 7x7x16
conv4 = tf.layers.conv2d(upsample1, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x16
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14,14))
# Now 14x14x16
conv5 = tf.layers.conv2d(upsample2, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x32
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28,28))
# Now 28x28x32
conv6 = tf.layers.conv2d(upsample3, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x32
logits = tf.layers.conv2d(conv6, 1, (3,3), padding='same', activation=None)
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded =tf.nn.sigmoid(logits, name='decoded')
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(0.001).minimize(cost)
sess = tf.Session()
epochs = 100
batch_size = 200
# Set's how much noise we're adding to the MNIST images
noise_factor = 0.5
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images from the batch
imgs = batch[0].reshape((-1, 28, 28, 1))
# Add random noise to the input images
noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape)
# Clip the images to be between 0 and 1
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
# Noisy images as inputs, original images as targets
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
Explanation: Denoising
As I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images.
Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.
Exercise: Build the network for the denoising autoencoder. It's the same as before, but with deeper layers. I suggest 32-32-16 for the depths, but you can play with these numbers, or add more layers.
End of explanation
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
noisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape)
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
reconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([noisy_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
Explanation: Checking out the performance
Here I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly great job of removing the noise, even though it's sometimes difficult to tell what the original number is.
End of explanation |
13,860 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Built-in sorting
Python has a sort method for lists.
Step1: Sorting in reverse order
To sort in reverse order, you can pass the reverse parameter.
Step2: Sorting by key
Sorting by key lets you sort a list not by the value of each element itself, but by something else.
Step3: The key parameter accepts not only built-in functions but also ones you define yourself. Such a function must take one argument, a list element, and return the value to sort by. | Python Code:
a = [5, 3, -2, 9, 1]
# The sort method changes the existing list in place
a.sort()
print(a)
Explanation: Built-in sorting
Python has a sort method for lists.
End of explanation
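A small aside that is not in the original notebook: if you want a sorted copy rather than in-place sorting, the built-in sorted() returns a new list and leaves the original unchanged.
a = [5, 3, -2, 9, 1]
b = sorted(a)   # returns a new sorted list
print(a)        # the original list is untouched
print(b)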
a = [5, 3, -2, 9, 1]
a.sort(reverse=True)
print(a)
Explanation: Sorting in reverse order
To sort in reverse order, you can pass the reverse parameter.
End of explanation
# By default, strings are sorted in alphabetical order
a = ["bee", "all", "accessibility", "zen", "treasure"]
a.sort()
print(a)
# Using key-based sorting you can sort, for example, by length
a = ["bee", "all", "accessibility", "zen", "treasure"]
a.sort(key=len)
print(a)
Explanation: Sorting by key
Sorting by key lets you sort a list not by the value of each element itself, but by something else.
End of explanation
# Sort by the remainder of division by 10
def mod(x):
return x % 10
a = [1, 15, 143, 8, 0, 5, 17, 48]
a.sort(key=mod)
print(a)
# By default, lists are sorted first by the first element, then by the second, and so on
a = [[4, 3], [1, 5], [2, 15], [1, 6], [2, 9], [4, 1]]
a.sort()
print(a)
# And this way you can sort by the first element ascending and, on ties, by the second element descending
def my_key(x):
return x[0], -x[1]
a = [[4, 3], [1, 5], [2, 15], [1, 6], [2, 9], [4, 1]]
a.sort(key=my_key)
print(a)
Explanation: The key parameter accepts not only built-in functions but also ones you define yourself. Such a function must take one argument, a list element, and return the value to sort by.
End of explanation |
13,861 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: One solution
Below is one way to use a class to do this
I'm not a software developer in python, so there may be much nicer ways, but this ought to be good enough. There are many variants and extensions you can do, and of course, if you wanted to record some error function (like if you knew the true answer $x^\star$ and wanted to record $\|x-x^\star\|$ every iteration), you could easily incorporate that too.
Step2: ... so we see that mySimpleSolver didn't have to do anything. Now to get the information, we just ask for the value of objective.history as follows | Python Code:
import numpy as np
def mySimpleSolver(f,x0,maxIters=13):
x = np.asarray(x0,dtype='float64').copy()
for k in range(maxIters):
fx = f(x)
x -= .001*x # some weird update rule, just to make something interesting happen
return x
# Let's solve this in 1D
f = lambda x : x**2
x = mySimpleSolver( f, 1 )
print(x)
# ... of course this isn't a real solver, so it won't converge to the right answer
# But anyhow, how can we see the history of function values?
Explanation: <a href="https://colab.research.google.com/github/stephenbeckr/convex-optimization-class/blob/master/Homeworks/APPM5630_HW8_helper.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
How to get the history of function values?
For HW 8, you'll make a function and give it to a scipy or other standard solver, and then you want to plot the history of all function evaluations. Sometimes solvers save this for you, sometimes they don't.
Here's one way to save that information in case the solver doesn't save it for you. We'll assume that mySimpleSolver is some builtin solver to a package, and that you cannot modify it easily. So we need to find another way to save the information.
The trick is essentially to use a global variable, but we can make it a bit nicer by at least hiding that variable inside a class.
End of explanation
f = lambda x : x**2
class fcn:
def __init__(self):
self.history = []
def evaluate(self,x):
# Whatever objective function you're implementing
# This also sees objective in the parent workspace,
# so you can just call those
fx = f(x)
self.history.append(fx)
return fx
def reset(self):
self.history = []
objective = fcn()
F = lambda x : objective.evaluate(x) # alternatively, have your class return a function
x = mySimpleSolver( F, 1 )
print(x)
Explanation: One solution
Below is one way to use a class to do this
I'm not a software developer in python, so there may be much nicer ways, but this ought to be good enough. There are many variants and extensions you can do, and of course, if you wanted to record some error function (like if you knew the true answer $x^\star$ and wanted to record $\|x-x^\star\|$ every iteration), you could easily incorporate that too.
End of explanation
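As a hedged extension of the idea above (not part of the original helper), the same class trick can also record an error history when a reference point is known; x_star below is a made-up value used only for illustration.
import numpy as np

class fcn_with_error:
    def __init__(self, x_star):
        self.history = []                    # objective values, one per evaluation
        self.errors = []                     # distance to the reference point x_star
        self.x_star = np.asarray(x_star, dtype='float64')
    def evaluate(self, x):
        fx = f(x)                            # uses the same f defined earlier in the notebook
        self.history.append(fx)
        self.errors.append(np.linalg.norm(np.asarray(x, dtype='float64') - self.x_star))
        return fx
    def reset(self):
        self.history, self.errors = [], []

tracked = fcn_with_error(x_star=0.0)         # hypothetical known minimizer of x**2
x = mySimpleSolver(lambda x: tracked.evaluate(x), 1)
print(tracked.errors)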
objective.history
Explanation: ... so we see that mySimpleSolver didn't have to do anything. Now to get the information, we just ask for the value of objective.history as follows:
End of explanation |
13,862 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Collaborative Filtering
This is a starter notebook forked from last year's competition. This is an implementation of Collaborative filtering starter with Keras. Uses only the win(1) and loss(0) label of each match and categorical encoding of team Ids as training data.
Essentially the formula used is shown below
Step1: Preparing the training data
Simple win to 1 loss to 0 encoding
Step2: Display
number of unique elements in "team1"
Step3: Shuffle the training to include some randomness
Step4: Let's start to create our CL network with Keras
Start by creating the embeddings and bias for the dot products of the 2 teams
Step5: Now that we have defined our network, it's time to determine the correct set of numbers that will make our predictions close to the actual outputs of a match
Let's learn these numbers by minimising the loss using the Adam optimisation algorithm. | Python Code:
# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python docker image: https://github.com/kaggle/docker-python
# For example, here's several helpful packages to load in
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the "../input/" directory.
# For example, running this (by clicking run or pressing Shift+Enter) will list the files in the input directory
from subprocess import check_output
print(check_output(["ls", "../input"]).decode("utf8"))
# Any results you write to the current directory are saved as output.
import matplotlib.pyplot as plt
%matplotlib inline
from keras.layers import Input, Dense, Dropout, Flatten, Embedding, merge
from keras.regularizers import l2
from keras.optimizers import Adam
from keras.models import Model
dr = pd.read_csv("../input/RegularSeasonDetailedResults.csv")
dr.tail(n=30)
Explanation: Collaborative Filtering
This is a starter notebook forked from last year's competition. This is an implementation of Collaborative filtering starter with Keras. Uses only the win(1) and loss(0) label of each match and categorical encoding of team Ids as training data.
Essentially the formula used is shown below:
Model prediction = Dot product of the 2 teams in each match (embedding vectors) + (Team1 bias) + (Team2 bias)
End of explanation
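As a rough numeric illustration of that formula (toy numbers, not taken from this notebook), one match score is just the dot product of the two team embeddings plus the two scalar biases, squashed to a probability:
import numpy as np

team1_vec = np.array([0.2, -0.1, 0.4])    # hypothetical embedding for team 1
team2_vec = np.array([0.3, 0.0, -0.2])    # hypothetical embedding for team 2
team1_bias, team2_bias = 0.05, -0.02      # hypothetical per-team biases

raw_score = team1_vec @ team2_vec + team1_bias + team2_bias
prob_team1_wins = 1.0 / (1.0 + np.exp(-raw_score))   # logistic squash to (0, 1)
print(prob_team1_wins)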
simple_df_1 = pd.DataFrame()
simple_df_1[["team1", "team2"]] =dr[["WTeamID", "LTeamID"]].copy()
simple_df_1["pred"] = 1
simple_df_2 = pd.DataFrame()
simple_df_2[["team1", "team2"]] =dr[["LTeamID", "WTeamID"]]
simple_df_2["pred"] = 0
simple_df = pd.concat((simple_df_1, simple_df_2), axis=0)
simple_df.head()
Explanation: Preparing the training data
Simple win to 1 loss to 0 encoding
End of explanation
n = simple_df.team1.nunique()
n
trans_dict = {t: i for i, t in enumerate(simple_df.team1.unique())}
simple_df["team1"] = simple_df["team1"].apply(lambda x: trans_dict[x])
simple_df["team2"] = simple_df["team2"].apply(lambda x: trans_dict[x])
simple_df.head()
Explanation: Display
number of unique elements in "team1"
End of explanation
train = simple_df.values
np.random.shuffle(train)
def embedding_input(name, n_in, n_out, reg):
inp = Input(shape=(1,), dtype="int64", name=name)
return inp, Embedding(n_in, n_out, input_length=1, W_regularizer=l2(reg))(inp)
def create_bias(inp, n_in):
x = Embedding(n_in, 1, input_length=1)(inp)
return Flatten()(x)
Explanation: Shuffle the training to include some randomness
End of explanation
n_factors = 50
team1_in, t1 = embedding_input("team1_in", n, n_factors, 1e-4)
team2_in, t2 = embedding_input("team2_in", n, n_factors, 1e-4)
b1 = create_bias(team1_in, n)
b2 = create_bias(team2_in, n)
x = merge([t1, t2], mode="dot")
x = Flatten()(x)
x = merge([x, b1], mode="sum")
x = merge([x, b2], mode="sum")
x = Dense(1, activation="sigmoid")(x)  # sigmoid for a single-unit binary output; softmax over one unit would always output 1
model = Model([team1_in, team2_in], x)
model.compile(Adam(0.001), loss="binary_crossentropy")
model.summary()
Explanation: Let's start to create our CL network with Keras
Start by creating the embeddings and bias for the dot products of the 2 teams
End of explanation
train.shape
#print(train.head())
history = model.fit([train[:, 0], train[:, 1]], train[:, 2],validation_split=0.33, batch_size=64, nb_epoch=5, verbose=2)
plt.plot(history.history["loss"])
plt.show()
# list all data in history
print(history.history.keys())
# summarize history for accuracy
#plt.plot(history.history['acc'])
#plt.plot(history.history['val_acc'])
#plt.title('model accuracy')
#plt.ylabel('accuracy')
#plt.xlabel('epoch')
#plt.legend(['train', 'test'], loc='upper left')
#plt.show()
# summarize history for loss
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
sub = pd.read_csv('../input/SampleSubmissionStage2.csv')
sub["team1"] = sub["ID"].apply(lambda x: trans_dict[int(x.split("_")[1])])
sub["team2"] = sub["ID"].apply(lambda x: trans_dict[int(x.split("_")[2])])
sub.head()
sub["pred"] = model.predict([sub.team1, sub.team2])
sub = sub[["ID", "pred"]]
sub.head()
sub.to_csv("CF_sm.csv", index=False)
Explanation: Now that we have defined our network, it's time to determine the correct set of numbers that will make our predictions close to the actual outputs of a match
Let's learn these numbers by minimising the loss using the Adam optimisation algorithm.
End of explanation |
13,863 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Constraint Satisfaction Problems Lab
Introduction
Constraint Satisfaction is a technique for solving problems by expressing limits on the values of each variable in the solution with mathematical constraints. We've used constraints before -- constraints in the Sudoku project are enforced implicitly by filtering the legal values for each box, and the planning project represents constraints as arcs connecting nodes in the planning graph -- but in this lab exercise we will use a symbolic math library to explicitly construct binary constraints and then use Backtracking to solve the N-queens problem (which is a generalization of the 8-queens problem). Using symbolic constraints should make it easier to visualize and reason about the constraints (especially for debugging), but comes with a performance penalty.
Briefly, the 8-queens problem asks you to place 8 queens on a standard 8x8 chessboard such that none of the queens are in "check" (i.e., no two queens occupy the same row, column, or diagonal). The N-queens problem generalizes the puzzle to any size square board.
I. Lab Overview
Students should read through the code and the wikipedia page (or other resources) to understand the N-queens problem, then
Step1: II. Representing the N-Queens Problem
There are many acceptable ways to represent the N-queens problem, but one convenient way is to recognize that one of the constraints (either the row or column constraint) can be enforced implicitly by the encoding. If we represent a solution as an array with N elements, then each position in the array can represent a column of the board, and the value at each position can represent which row the queen is placed on.
In this encoding, we only need a constraint to make sure that no two queens occupy the same row, and one to make sure that no two queens occupy the same diagonal.
Define Symbolic Expressions for the Problem Constraints
Before implementing the board class, we need to construct the symbolic constraints that will be used in the CSP. Declare any symbolic terms required, and then declare two generic constraint generators
Step8: The N-Queens CSP Class
Implement the CSP class as described above, with constraints to make sure each queen is on a different row and different diagonal than every other queen, and a variable for each column defining the row that contains a queen in that column.
Step13: III. Backtracking Search
Implement the backtracking search algorithm (required) and helper functions (optional) from the AIMA text.
Step14: Solve the CSP
With backtracking implemented, now you can use it to solve instances of the problem. We've started with the classical 8-queen version, but you can try other sizes as well. Boards larger than 12x12 may take some time to solve because sympy is slow in the way it's being used here, and because the selection and value ordering methods haven't been implemented. See if you can implement any of the techniques in the AIMA text to speed up the solver! | Python Code:
import copy
import timeit
import matplotlib as mpl
import matplotlib.pyplot as plt
from util import constraint, displayBoard
from sympy import *
from IPython.display import display
init_printing()
%matplotlib inline
Explanation: Constraint Satisfaction Problems Lab
Introduction
Constraint Satisfaction is a technique for solving problems by expressing limits on the values of each variable in the solution with mathematical constraints. We've used constraints before -- constraints in the Sudoku project are enforced implicitly by filtering the legal values for each box, and the planning project represents constraints as arcs connecting nodes in the planning graph -- but in this lab exercise we will use a symbolic math library to explicitly construct binary constraints and then use Backtracking to solve the N-queens problem (which is a generalization of the 8-queens problem). Using symbolic constraints should make it easier to visualize and reason about the constraints (especially for debugging), but comes with a performance penalty.
Briefly, the 8-queens problem asks you to place 8 queens on a standard 8x8 chessboard such that none of the queens are in "check" (i.e., no two queens occupy the same row, column, or diagonal). The N-queens problem generalizes the puzzle to any size square board.
I. Lab Overview
Students should read through the code and the wikipedia page (or other resources) to understand the N-queens problem, then:
Complete the warmup exercises in the Sympy_Intro notebook to become familiar with they sympy library and symbolic representation for constraints
Implement the NQueensCSP class to develop an efficient encoding of the N-queens problem and explicitly generate the constraints bounding the solution
Write the search functions for recursive backtracking, and use them to solve the N-queens problem
(Optional) Conduct additional experiments with CSPs and various modifications to the search order (minimum remaining values, least constraining value, etc.)
End of explanation
# Declare any required symbolic variables
r1, r2 = symbols(['r1', 'r2'])
c1, c2 = symbols(['c1', 'c2'])
# Define diffRow and diffDiag constraints
diffRow = constraint('DiffRow', ~Eq(r1, r2))
diffDiag = constraint('DiffDiag', ~Eq(abs(r1 - r2), abs(c1 - c2)))
# Test diffRow and diffDiag
_x = symbols('x:3')
# generate a diffRow instance for testing
diffRow_test = diffRow.subs({r1: _x[0], r2: _x[1]})
assert(len(diffRow_test.free_symbols) == 2)
assert(diffRow_test.subs({_x[0]: 0, _x[1]: 1}) == True)
assert(diffRow_test.subs({_x[0]: 0, _x[1]: 0}) == False)
assert(diffRow_test.subs({_x[0]: 0}) != False) # partial assignment is not false
print("Passed all diffRow tests.")
# generate a diffDiag instance for testing
diffDiag_test = diffDiag.subs({r1: _x[0], r2: _x[2], c1:0, c2:2})
assert(len(diffDiag_test.free_symbols) == 2)
assert(diffDiag_test.subs({_x[0]: 0, _x[2]: 2}) == False)
assert(diffDiag_test.subs({_x[0]: 0, _x[2]: 0}) == True)
assert(diffDiag_test.subs({_x[0]: 0}) != False) # partial assignment is not false
print("Passed all diffDiag tests.")
Explanation: II. Representing the N-Queens Problem
There are many acceptable ways to represent the N-queens problem, but one convenient way is to recognize that one of the constraints (either the row or column constraint) can be enforced implicitly by the encoding. If we represent a solution as an array with N elements, then each position in the array can represent a column of the board, and the value at each position can represent which row the queen is placed on.
In this encoding, we only need a constraint to make sure that no two queens occupy the same row, and one to make sure that no two queens occupy the same diagonal.
Define Symbolic Expressions for the Problem Constraints
Before implementing the board class, we need to construct the symbolic constraints that will be used in the CSP. Declare any symbolic terms required, and then declare two generic constraint generators:
- diffRow - generate constraints that return True if the two arguments do not match
- diffDiag - generate constraints that return True if two arguments are not on the same diagonal (Hint: you can easily test whether queens in two columns are on the same diagonal by testing if the difference in the number of rows and the number of columns match)
Both generators should produce binary constraints (i.e., each should have two free symbols) once they're bound to specific variables in the CSP. For example, Eq((a + b), (b + c)) is not a binary constraint, but Eq((a + b), (b + c)).subs(b, 1) is a binary constraint because one of the terms has been bound to a constant, so there are only two free variables remaining.
End of explanation
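A quick sanity check of the binary-constraint point made above (the Eq((a + b), (b + c)) example), assuming only standard sympy behaviour:
from sympy import symbols, Eq

a, b, c = symbols('a b c')
expr = Eq(a + b, b + c)
print(len(expr.free_symbols))             # 3 free symbols, so not yet a binary constraint
print(len(expr.subs(b, 1).free_symbols))  # 2 free symbols once b is bound to a constant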
class NQueensCSP:
CSP representation of the N-queens problem
Parameters
----------
N : Integer
The side length of a square chess board to use for the problem, and
the number of queens that must be placed on the board
def __init__(self, N):
_vars = symbols(f'A0:{N}')
_domain = set(range(N))
self.size = N
self.variables = _vars
self.domains = {v: _domain for v in _vars}
self._constraints = {x: set() for x in _vars}
# add constraints - for each pair of variables xi and xj, create
# a diffRow(xi, xj) and a diffDiag(xi, xj) instance, and add them
# to the self._constraints dictionary keyed to both xi and xj;
# (i.e., add them to both self._constraints[xi] and self._constraints[xj])
for i in range(N):
for j in range(i + 1, N):
diffRowConstraint = diffRow.subs({r1: _vars[i], r2: _vars[j]})
diffDiagConstraint = diffDiag.subs({r1: _vars[i], r2: _vars[j], c1:i, c2:j})
self._constraints[_vars[i]].add(diffRowConstraint)
self._constraints[_vars[i]].add(diffDiagConstraint)
self._constraints[_vars[j]].add(diffRowConstraint)
self._constraints[_vars[j]].add(diffDiagConstraint)
@property
def constraints(self):
Read-only list of constraints -- cannot be used for evaluation
constraints = set()
for _cons in self._constraints.values():
constraints |= _cons
return list(constraints)
def is_complete(self, assignment):
An assignment is complete if it is consistent, and all constraints
are satisfied.
Hint: Backtracking search checks consistency of each assignment, so checking
for completeness can be done very efficiently
Parameters
----------
assignment : dict(sympy.Symbol: Integer)
An assignment of values to variables that have previously been checked
for consistency with the CSP constraints
return len(assignment) == self.size
def is_consistent(self, var, value, assignment):
Check consistency of a proposed variable assignment
self._constraints[x] returns a set of constraints that involve variable `x`.
An assignment is consistent unless the assignment it causes a constraint to
return False (partial assignments are always consistent).
Parameters
----------
var : sympy.Symbol
One of the symbolic variables in the CSP
value : Numeric
A valid value (i.e., in the domain of) the variable `var` for assignment
assignment : dict(sympy.Symbol: Integer)
A dictionary mapping CSP variables to row assignment of each queen
assignment[var] = value
constraints = list(self._constraints[var])
for constraint in constraints:
for arg in constraint.args:
if arg in assignment.keys():
constraint = constraint.subs({arg: assignment[arg]})
if not constraint:
return False
return True
def inference(self, var, value):
Perform logical inference based on proposed variable assignment
Returns an empty dictionary by default; function can be overridden to
check arc-, path-, or k-consistency; returning None signals "failure".
Parameters
----------
var : sympy.Symbol
One of the symbolic variables in the CSP
value : Integer
A valid value (i.e., in the domain of) the variable `var` for assignment
Returns
-------
dict(sympy.Symbol: Integer) or None
A partial set of values mapped to variables in the CSP based on inferred
constraints from previous mappings, or None to indicate failure
# TODO (Optional): Implement this function based on AIMA discussion
return {}
def show(self, assignment):
Display a chessboard with queens drawn in the locations specified by an
assignment
Parameters
----------
assignment : dict(sympy.Symbol: Integer)
A dictionary mapping CSP variables to row assignment of each queen
locations = [(i, assignment[j]) for i, j in enumerate(self.variables)
if assignment.get(j, None) is not None]
displayBoard(locations, self.size)
Explanation: The N-Queens CSP Class
Implement the CSP class as described above, with constraints to make sure each queen is on a different row and different diagonal than every other queen, and a variable for each column defining the row that contains a queen in that column.
End of explanation
def select(csp, assignment):
Choose an unassigned variable in a constraint satisfaction problem
# TODO (Optional): Implement a more sophisticated selection routine from AIMA
for var in csp.variables:
if var not in assignment:
return var
return None
def order_values(var, assignment, csp):
Select the order of the values in the domain of a variable for checking during search;
the default is lexicographically.
# TODO (Optional): Implement a more sophisticated search ordering routine from AIMA
return csp.domains[var]
def backtracking_search(csp):
Helper function used to initiate backtracking search
return backtrack({}, csp)
def backtrack(assignment, csp):
Perform backtracking search for a valid assignment to a CSP
Parameters
----------
assignment : dict(sympy.Symbol: Integer)
An partial set of values mapped to variables in the CSP
csp : CSP
A problem encoded as a CSP. Interface should include csp.variables, csp.domains,
csp.inference(), csp.is_consistent(), and csp.is_complete().
Returns
-------
dict(sympy.Symbol: Integer) or None
A partial set of values mapped to variables in the CSP, or None to indicate failure
if csp.is_complete(assignment):
return assignment
var = select(csp, assignment)
for value in order_values(var, assignment, csp):
if csp.is_consistent(var, value, assignment):
assignment[var] = value
assignment_copy = copy.deepcopy(assignment)
result = backtrack(assignment_copy, csp)
if result is not None:
return result
Explanation: III. Backtracking Search
Implement the backtracking search algorithm (required) and helper functions (optional) from the AIMA text.
End of explanation
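One of the optional AIMA speed-ups mentioned above is minimum remaining values (MRV) ordering. A minimal sketch of a smarter selection routine for this CSP interface (assuming csp.domains and csp.is_consistent behave as defined earlier) could look like this:
def select_mrv(csp, assignment):
    """Pick the unassigned variable with the fewest consistent values remaining."""
    best_var, best_count = None, None
    for var in csp.variables:
        if var in assignment:
            continue
        count = sum(1 for value in csp.domains[var]
                    if csp.is_consistent(var, value, dict(assignment)))
        if best_count is None or count < best_count:
            best_var, best_count = var, count
    return best_var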
start = timeit.default_timer()
num_queens = 12
csp = NQueensCSP(num_queens)
var = csp.variables[0]
print("CSP problems have variables, each variable has a domain, and the problem has a list of constraints.")
print("Showing the variables for the N-Queens CSP:")
display(csp.variables)
print("Showing domain for {}:".format(var))
display(csp.domains[var])
print("And showing the constraints for {}:".format(var))
display(csp._constraints[var])
print("Solving N-Queens CSP...")
assn = backtracking_search(csp)
if assn is not None:
csp.show(assn)
print("Solution found:\n{!s}".format(assn))
else:
print("No solution found.")
end = timeit.default_timer() - start
print(f'N-Queens size {num_queens} solved in {end} seconds')
Explanation: Solve the CSP
With backtracking implemented, now you can use it to solve instances of the problem. We've started with the classical 8-queen version, but you can try other sizes as well. Boards larger than 12x12 may take some time to solve because sympy is slow in the way it's being used here, and because the selection and value ordering methods haven't been implemented. See if you can implement any of the techniques in the AIMA text to speed up the solver!
End of explanation |
13,864 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Matplotlib Exercise 2
Imports
Step1: Exoplanet properties
Over the past few decades, astronomers have discovered thousands of extrasolar planets. The following paper describes the properties of some of these planets.
http
Step2: Use np.genfromtxt with a delimiter of ',' to read the data into a NumPy array called data
Step3: Make a histogram of the distribution of planetary masses. This will reproduce Figure 2 in the original paper.
Customize your plot to follow Tufte's principles of visualizations.
Customize the box, grid, spines and ticks to match the requirements of this data.
Pick the number of bins for the histogram appropriately.
Step4: Make a scatter plot of the orbital eccentricity (y) versus the semimajor axis. This will reproduce Figure 4 of the original paper. Use a log scale on the x axis.
Customize your plot to follow Tufte's principles of visualizations.
Customize the box, grid, spines and ticks to match the requirements of this data. | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
Explanation: Matplotlib Exercise 2
Imports
End of explanation
!head -n 30 open_exoplanet_catalogue.txt
Explanation: Exoplanet properties
Over the past few decades, astronomers have discovered thousands of extrasolar planets. The following paper describes the properties of some of these planets.
http://iopscience.iop.org/1402-4896/2008/T130/014001
Your job is to reproduce Figures 2 and 4 from this paper using an up-to-date dataset of extrasolar planets found on this GitHub repo:
https://github.com/OpenExoplanetCatalogue/open_exoplanet_catalogue
A text version of the dataset has already been put into this directory. The top of the file has documentation about each column of data:
End of explanation
# YOUR CODE HERE
f = np.genfromtxt('open_exoplanet_catalogue.txt', delimiter=',')
data = np.array(f)
data
assert data.shape==(1993,24)
Explanation: Use np.genfromtxt with a delimiter of ',' to read the data into a NumPy array called data:
End of explanation
# YOUR CODE HERE
flat = np.ravel(data)
mass = flat[2::24]
k = [i for i in mass if str(i) != 'nan'] #I used the user's Lego Stormtrooper advice on Stackoverflow to filter out the nan's
type(k[1])
# print(k)
f = plt.figure(figsize=(15,6))
plt.hist(k, bins=len(k))
ax = plt.gca()
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.get_xaxis().tick_bottom()
ax.get_yaxis().tick_left()
plt.xlim(0.0, 14.0)
plt.ylim(0.0, 300.0)
plt.xlabel('Masses of Planets (M_jup)')
plt.ylabel('Number of Planets')
assert True # leave for grading
Explanation: Make a histogram of the distribution of planetary masses. This will reproduce Figure 2 in the original paper.
Customize your plot to follow Tufte's principles of visualizations.
Customize the box, grid, spines and ticks to match the requirements of this data.
Pick the number of bins for the histogram appropriately.
End of explanation
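A hedged alternative to the list-comprehension NaN filter above: boolean masking on the mass column (column index 2, the same column the slice flat[2::24] picks out) is usually shorter.
mass_col = data[:, 2]
mass_clean = mass_col[~np.isnan(mass_col)]   # keep only rows with a measured mass
print(mass_clean.shape)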
# YOUR CODE HERE
orbit = flat[6::24]
y = [i for i in orbit]
print(len(y))
axis = flat[5::24]
x = [i for i in axis]
print(len(x))
f = plt.figure(figsize=(11,5))
plt.scatter(x, y, color='r', s=20, alpha=0.5)
plt.xscale('log')
plt.xlabel('Semimajor Axis(log)')
plt.ylabel('Orbital Eccentricity')
plt.ylim(0.0, 1.0)
plt.xlim(0.01, 100)
assert True # leave for grading
Explanation: Make a scatter plot of the orbital eccentricity (y) versus the semimajor axis. This will reproduce Figure 4 of the original paper. Use a log scale on the x axis.
Customize your plot to follow Tufte's principles of visualizations.
Customize the box, grid, spines and ticks to match the requirements of this data.
End of explanation |
13,865 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Let us first explore an example that falls under novelty detection. Here, we train a model on data with some distribution and no outliers. The test data has some "novel" subset of data that does not follow that distribution.
Step1: Use the np.random module to generate a normal distribution of 1,000 data points in two dimensions (e.g. x, y) - choose whatever mean and sigma^2 you like. Generate another 1,000 data points with a normal distribution in two dimensions that are well separated from the first set. You now have two "clusters". Concatenate them so you have 2,000 data points in two dimensions. Plot the points. This will be the training set.
Step2: Plot the points.
Step3: Generate 100 data points with the same distribution as your first random normal 2-d set, and 100 data points with the same distribution as your second random normal 2-d set. This will be the test set labeled X_test_normal.
Step4: Generate 100 data points with a random uniform distribution. This will be the test set labeled X_test_uniform.
Step5: Define a model classifier with the svm.OneClassSVM
Step6: Fit the model to the training data.
Step7: Use the trained model to predict whether the X_test_normal data points are in the same distribution. Calculate the fraction of "false" predictions.
Step8: Use the trained model to predict whether X_test_uniform is in the same distribution. Calculate the fraction of "false" predictions.
Step9: Use the trained model to see how well it recovers the training data. (Predict on the training data, and calculate the fraction of "false" predictions.)
Step10: Create another instance of the model classifier, but change the kwarg value for nu. Hint
Step11: Redo the prediction on the training set, the prediction on X_test_uniform, and the prediction on X_test_normal.
Step12: Plot in scatter points the X_train in blue, X_test_normal in red, and X_test_uniform in black. Overplot the trained model decision function boundary for the first instance of the model classifier.
Step13: Do the same for the second instance of the model classifier.
Step14: Test how well EllipticEnvelope predicts the outliers when you concatenate the training data with the X_test_uniform data.
Step15: Compute and plot the Mahalanobis distances of X_test, X_train_normal, X_train_uniform | Python Code:
import numpy as np
import matplotlib.pyplot as plt
from sklearn import svm
%matplotlib inline
Explanation: Let us first explore an example that falls under novelty detection. Here, we train a model on data with some distribution and no outliers. The test data has some "novel" subset of data that does not follow that distribution.
End of explanation
mu,sigma=3,0.1
x=np.random.normal(mu,sigma,1000)
y=np.random.normal(mu,sigma,1000)
x_0=np.random.normal(2,sigma,1000)
y_0=np.random.normal(2,sigma,1000)
X_train_normal=np.ndarray(shape=(2000,2))
for i in range(0,2000):
if (i<1000):
X_train_normal[i]=[x[i],y[i]]
else:
X_train_normal[i]=[x_0[i-1000],y_0[i-1000]]  # indices 1000..1999 map onto the second cluster
print(X_train_normal)
Explanation: Use the np.random module to generate a normal distribution of 1,000 data points in two dimensions (e.g. x, y) - choose whatever mean and sigma^2 you like. Generate another 1,000 data points with a normal distribution in two dimensions that are well separated from the first set. You now have two "clusters". Concatenate them so you have 2,000 data points in two dimensions. Plot the points. This will be the training set.
End of explanation
p_x=X_train_normal[:,0]
p_y=X_train_normal[:,1]
print(p_y)
plt.scatter(p_x,p_y)
plt.show()
Explanation: Plot the points.
End of explanation
X_test_normal=np.concatenate((0.1*np.random.randn(100,2)+2,0.1*np.random.randn(100,2)+3))
plt.scatter(X_test_normal[:,0],X_test_normal[:,1])
Explanation: Generate 100 data points with the same distribution as your first random normal 2-d set, and 100 data points with the same distribution as your second random normal 2-d set. This will be the test set labeled X_test_normal.
End of explanation
X_test_uniform=np.random.rand(100,2)+3
plt.scatter(X_test_uniform[:,0],X_test_uniform[:,1])
Explanation: Generate 100 data points with a random uniform distribution. This will be the test set labeled X_test_uniform.
End of explanation
model = svm.OneClassSVM()
Explanation: Define a model classifier with the svm.OneClassSVM
End of explanation
model.fit(X_train_normal)
Explanation: Fit the model to the training data.
End of explanation
predicted=model.predict(X_test_normal)-1
print(np.count_nonzero(predicted))
Explanation: Use the trained model to predict whether the X_test_normal data points are in the same distribution. Calculate the fraction of "false" predictions.
End of explanation
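The count above can be turned into the requested fraction directly; OneClassSVM.predict returns +1 for inliers and -1 for outliers, so a minimal check is:
false_fraction = np.mean(model.predict(X_test_normal) == -1)
print(false_fraction)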
uniform=model.predict(X_test_uniform)-1
print(np.count_nonzero(uniform))
Explanation: Use the trained model to predict whether X_test_uniform is in the same distribution. Calculate the fraction of "false" predictions.
End of explanation
trained=model.predict(X_train_normal)-1
print(np.count_nonzero(trained))
Explanation: Use the trained model to see how well it recovers the training data. (Predict on the training data, and calculate the fraction of "false" predictions.)
End of explanation
new_model=svm.OneClassSVM(nu=0.1)
new_model.fit(X_train_normal)
Explanation: Create another instance of the model classifier, but change the kwarg value for nu. Hint: Use help to figure out what the kwargs are.
End of explanation
new_predicted=new_model.predict(X_test_normal)-1
new_uniform=new_model.predict(X_test_uniform)-1
new_trained=new_model.predict(X_train_normal)-1
print(np.count_nonzero(new_trained))
print(np.count_nonzero(new_predicted))
print(np.count_nonzero(new_uniform))
Explanation: Redo the prediction on the training set, the prediction on X_test_uniform, and the prediction on X_test_normal.
End of explanation
plt.scatter(X_train_normal[:,0],X_train_normal[:,1],color='blue')
plt.scatter(X_test_uniform[:,0],X_test_uniform[:,1],color='black')
plt.scatter(X_test_normal[:,0],X_test_normal[:,1],color='red')
xx1, yy1 = np.meshgrid(np.linspace(1.5, 4, 1000), np.linspace(1.5, 4,1000))
Z1 =model.decision_function(np.c_[xx1.ravel(), yy1.ravel()])
Z1 = Z1.reshape(xx1.shape)
plt.contour(xx1, yy1, Z1, levels=[0],
linewidths=2)
Explanation: Plot in scatter points the X_train in blue, X_test_normal in red, and X_test_uniform in black. Overplot the trained model decision function boundary for the first instance of the model classifier.
End of explanation
plt.scatter(X_train_normal[:,0],X_train_normal[:,1],color='blue')
plt.scatter(X_test_uniform[:,0],X_test_uniform[:,1],color='black')
plt.scatter(X_test_normal[:,0],X_test_normal[:,1],color='red')
xx1, yy1 = np.meshgrid(np.linspace(1.5, 4, 1000), np.linspace(1.5, 4,1000))
Z1 =new_model.decision_function(np.c_[xx1.ravel(), yy1.ravel()])
Z1 = Z1.reshape(xx1.shape)
plt.contour(xx1, yy1, Z1, levels=[0],
linewidths=2)
from sklearn.covariance import EllipticEnvelope
Explanation: Do the same for the second instance of the model classifier.
End of explanation
train_uniform=np.concatenate((X_train_normal,X_test_uniform))
envelope=EllipticEnvelope()
envelope.fit(train_uniform)
envelope.predict(train_uniform)
Explanation: Test how well EllipticEnvelope predicts the outliers when you concatenate the training data with the X_test_uniform data.
End of explanation
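To quantify how many points the envelope flags (EllipticEnvelope.predict uses the same +1 inlier / -1 outlier convention), a short follow-up is:
flagged_fraction = np.mean(envelope.predict(train_uniform) == -1)
print(flagged_fraction)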
print(range(100))
plt.scatter(range(100),envelope.mahalanobis(X_test_uniform),color='black')
plt.scatter(range(2000),envelope.mahalanobis(X_train_normal),color='blue')
plt.scatter(range(200),envelope.mahalanobis(X_test_normal),color='red')
plt.show()
Explanation: Compute and plot the Mahalanobis distances of X_test, X_train_normal, X_train_uniform
End of explanation |
13,866 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Upper Air Analysis using Declarative Syntax
The MetPy declarative syntax allows for a simplified interface to creating common
meteorological analyses including upper air observation plots.
Step1: Getting the data
In this example, data is originally from the Iowa State Upper-air archive
(https
Step2: Plotting the data
Use the declarative plotting interface to create a CONUS upper-air map for 500 hPa | Python Code:
from datetime import datetime
import pandas as pd
from metpy.cbook import get_test_data
import metpy.plots as mpplots
from metpy.units import units
Explanation: Upper Air Analysis using Declarative Syntax
The MetPy declarative syntax allows for a simplified interface to creating common
meteorological analyses including upper air observation plots.
End of explanation
data = pd.read_csv(get_test_data('UPA_obs.csv', as_file_obj=False))
# In a real-world case, you could obtain and preprocess the data with code such as
# from siphon.simplewebservice.iastate import IAStateUpperAir
# from metpy.io import add_station_lat_lon
# data = IAStateUpperAir().request_all_data(datetime(2021, 8, 25, 12))
# data = add_station_lat_lon(data)
Explanation: Getting the data
In this example, data is originally from the Iowa State Upper-air archive
(https://mesonet.agron.iastate.edu/archive/raob/) available through a Siphon method.
The data are pre-processed to attach latitude/longitude locations for each RAOB site.
End of explanation
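Before plotting, it can help to confirm which columns the test CSV actually provides; a quick pandas check (not in the original example) is:
print(data.columns.tolist())
print(data.head())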
# Plotting the Observations
obs = mpplots.PlotObs()
obs.data = data
obs.time = datetime(1993, 3, 14, 0)
obs.level = 500 * units.hPa
obs.fields = ['temperature', 'dewpoint', 'height']
obs.locations = ['NW', 'SW', 'NE']
obs.formats = [None, None, lambda v: format(v, '.0f')[:3]]
obs.vector_field = ('u_wind', 'v_wind')
obs.reduce_points = 0
# Add map features for the particular panel
panel = mpplots.MapPanel()
panel.layout = (1, 1, 1)
panel.area = (-124, -72, 20, 53)
panel.projection = 'lcc'
panel.layers = ['coastline', 'borders', 'states', 'land', 'ocean']
panel.plots = [obs]
# Collecting panels for complete figure
pc = mpplots.PanelContainer()
pc.size = (15, 10)
pc.panels = [panel]
# Showing the results
pc.show()
Explanation: Plotting the data
Use the declarative plotting interface to create a CONUS upper-air map for 500 hPa
End of explanation |
13,867 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
notes
Now I know I should think in column vectors, and Tensorflow is very picky about the shape of data. But in numpy, the normal 1D ndarray is treated as a column vector already. If I reshape $\mathbb{R}^n$ as $\mathbb{R}^{n\times1}$, it's not the same as a column vector anymore; it's a matrix with 1 column, and I got into trouble with the scipy optimizer.
So I should just treat tensorflow's data as special case. Keep using the convention of numpy world.
Step1: sigmoid function
Step2: cost function
$max(\ell(\theta)) = min(-\ell(\theta))$
choose $-\ell(\theta)$ as the cost function
<img style="float
Step3: looking good, be careful of the data shape
gradient
this is batch gradient
translate this into vector computation $\frac{1}{m} X^T( Sigmoid(X\theta) - y )$
<img style="float
Step4: fit the parameter
here I'm using scipy.optimize.minimize to find the parameters
and I use this model without understanding.... what is Jacobian ...
Step5: predict and validate from training set
now we are using the training set to evaluate the model, which is not the best practice, but the course has just begun; I guess Andrew will cover how to do model validation properly later
Step6: find the decision boundary
http
Step7: you know the intercept would be around 125 for both x and y | Python Code:
%reload_ext autoreload
%autoreload 2
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
plt.style.use('fivethirtyeight')
import sys
sys.path.append('..')
from helper import logistic_regression as lr # my own module
from helper import general as general
from sklearn.metrics import classification_report
# prepare data
data = pd.read_csv('ex2data1.txt', names=['exam1', 'exam2', 'admitted'])
data.head()
X = general.get_X(data)
print(X.shape)
y = general.get_y(data)
print(y.shape)
Explanation: notes
Now I know I should think in column vectors, and Tensorflow is very picky about the shape of data. But in numpy, the normal 1D ndarray is treated as a column vector already. If I reshape $\mathbb{R}^n$ as $\mathbb{R}^{n\times1}$, it's not the same as a column vector anymore; it's a matrix with 1 column, and I got into trouble with the scipy optimizer.
So I should just treat tensorflow's data as special case. Keep using the convention of numpy world.
End of explanation
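A two-line check of the shape point made in the notes above (the (n, 1) shape is the one that caused trouble with the scipy optimizer, per the notes):
v = np.zeros(3)          # 1-D ndarray, the plain numpy "column vector" convention
m = v.reshape(3, 1)      # a 2-D matrix with one column
print(v.shape, m.shape)  # (3,) versus (3, 1)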
fig, ax = plt.subplots(figsize=(8, 6))
ax.plot(np.arange(-10, 10, step=0.01),
lr.sigmoid(np.arange(-10, 10, step=0.01)))
ax.set_ylim((-0.1,1.1))
ax.set_xlabel('z', fontsize=18)
ax.set_ylabel('g(z)', fontsize=18)
ax.set_title('sigmoid function', fontsize=18)
Explanation: sigmoid function
End of explanation
theta = np.zeros(3)  # X is (m, n), so theta is a length-n vector (here n = 3: intercept + 2 exam scores)
theta
lr.cost(theta, X, y)
Explanation: cost function
$max(\ell(\theta)) = min(-\ell(\theta))$
choose $-\ell(\theta)$ as the cost function
<img style="float: left;" src="../img/logistic_cost.png">
End of explanation
lr.gradient(theta, X, y)
Explanation: looking good, be careful of the data shape
gradient
this is batch gradient
translate this into vector computation $\frac{1}{m} X^T( Sigmoid(X\theta) - y )$
<img style="float: left;" src="../img/logistic_gradient.png">
End of explanation
import scipy.optimize as opt
res = opt.minimize(fun=lr.cost, x0=theta, args=(X, y), method='Newton-CG', jac=lr.gradient)
print(res)
Explanation: fit the parameter
here I'm using scipy.optimize.minimize to find the parameters
the `jac` argument is the Jacobian of the objective — for a scalar cost function that is simply its gradient vector — which is why we pass `lr.gradient` here; the Newton-CG method needs the gradient to run
End of explanation
final_theta = res.x
y_pred = lr.predict(X, final_theta)
print(classification_report(y, y_pred))
Explanation: predict and validate from training set
now we are using the training set to evaluate the model, which is not best practice, but the course has just begun; I guess Andrew will cover how to do model validation properly later
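For reference, a proper held-out evaluation would look roughly like this sketch (using only objects already defined in this notebook):

```python
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
res_split = opt.minimize(fun=lr.cost, x0=np.zeros(X_train.shape[1]),
                         args=(X_train, y_train), method='Newton-CG', jac=lr.gradient)
print(classification_report(y_test, lr.predict(X_test, res_split.x)))
```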
End of explanation
print(res.x) # this is final theta
coef = -(res.x / res.x[2]) # find the equation
print(coef)
x = np.arange(130, step=0.1)
y = coef[0] + coef[1]*x
data.describe() # find the range of x and y
Explanation: find the decision boundary
http://stats.stackexchange.com/questions/93569/why-is-logistic-regression-a-linear-classifier
$X \times \theta = 0$ (this is the line)
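Concretely, with $\theta = (\theta_0, \theta_1, \theta_2)$ and features $(1, x_1, x_2)$, the boundary $\theta_0 + \theta_1 x_1 + \theta_2 x_2 = 0$ can be solved for $x_2$:
$$
x_2 = -\frac{\theta_0}{\theta_2} - \frac{\theta_1}{\theta_2}\,x_1,
$$
which matches the `coef = -(res.x / res.x[2])` computation in the accompanying code.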
End of explanation
sns.set(context="notebook", style="ticks", font_scale=1.5)
sns.lmplot('exam1', 'exam2', hue='admitted', data=data,
size=6,
fit_reg=False,
scatter_kws={"s": 25}
)
plt.plot(x, y, 'grey')
plt.xlim(0, 130)
plt.ylim(0, 130)
plt.title('Decision Boundary')
Explanation: you know the intercept would be around 125 for both x and y
End of explanation |
13,868 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
¿Qué es programar?
Un programa de ordenador es una serie de instrucciones que le dicen a la máquina qué tiene que hacer. Las máquinas no entienden nuestro lenguaje, por lo que tenemos que aprender un lenguaje para poder comunicarnos con ellas y darles órdenes. Hay muchísimos lenguajes de programación hoy en día, cada lenguaje se usa para hacer un tipo de programa. Por ejemplo, si quieres hacer una página web puedes usar HTML, CSS y Javascript, si quieres hacer un programa para consultar una base de datos puedes usar SQL. En el caso de Python, es un lenguaje usado para muchas cosas
Step1: Ahora imagina que quieres hacer un programa para saludar a tu amiga Marta, ¿cómo lo harías? De primeras, podrías hacer lo siguiente
Step2: Así, tendríamos un programa para saludar a cualquier Marta del mundo, pero, ¿y si quisiéramos un programa para poder saludar a cualquier persona? Para poder llevar esto a cabo necesitamos introducir un concepto de programación básico
Step3: Operando con variables
En Python podemos hacer operaciones sobre las variables que tenemos de forma muy sencilla. A modo de resumen, las principales operaciones en python son
Step4: También podemos multiplicar una palabra por un número usando el operador *.
Step5: Y por último, también podemos hacer operaciones numéricas. En las operaciones numéricas Python respeta el orden de los operadores
Step6: En el primer caso hemos realizado la operación
$$5 + (6 * 2) = 5 + 12 = 17$$
y en el segundo, en cambio
$$ (5 + 6) * 2 = 11 * 2 = 22$$
Ejercicio
Step7: Usando sólo la instrucción import debemos preceder la instrucción que queremos del nombre de la librería. Si este nombre es muy largo, podemos importar la librería usando un alias
Step8: Ahora bien, si sólo vamos a usar unas operaciones concretas de una librería, podemos especificar cuáles son y así no tener que usar el nombre de la librería para poder utilizarlas.
Step9: Esta librería tiene muchas más operaciones que puedes consultar en la documentación oficial
Estructuras de control
Hasta ahora, nuestros programas se basan en una serie de instrucciones que se ejecutan una detrás de otra. Esto limita mucho los programas que podemos hacer, ya que no nos permite controlar el flujo de ejecución (control flow) de nuestro programa. A continuación vamos a ver una serie de instrucciones especiales que nos permiten hacer precisamente eso.
if
Imagina que estás operando con raíces cuadradas, como sabrás la raíz cuadrada de un número es negativa, y quieres evitar hacer la raíz cuadrada si el número introducido por el usuario es negativo.
Step10: ¿Qué podemos hacer para que no ocurra esto? Controlar con un if la condición de que el número sea positivo para hacer la raíz cuadrada y avisar al usuario en caso contrario
Step11: Si quisiéramos controlar una condición más, usaríamos la instrucción elif, que en otros lenguajes como C es conocida como else if
Step12: Ejercicio
Haz un programa que le pida al usuario un número (de ninjas). Si dicho número es menor que 50 y es par, el programa imprimirá "puedo con ellos!", en caso contrario imprimirá "no me vendría mal una ayudita..."
Nota
Step13: Ejercicio
Haz un bucle while que imprima todos los números desde el 0 hasta un número que introduzca el usuario. Si el número que introduce es negativo puedes tomar dos decisiones
Step14: ¿Qué es eso de la función range()? Sirve para generar una secuencia de números. Puedes consultar más sobre esta función en la documentación de Python. En Python 2, existían tanto range como xrange aunque en Python 3, range hace lo mismo que hacía xrange en Python 2.
Ejercicio
Genera con range los números pares del 0 al 10, ambos inclusive. ¿Qué cambiarías para generar del 2 al 10?
break
La sentencia break sirve para detener un bucle antes de que llegue al final (en un bucle for) o antes de que la condición sea falsa (en un bucle while).
Los bucles (for y while) pueden tener una sentencia else como los if. Esta sentencia else se ejecuta si el bucle no ha terminado por un break y nos puede servir para controlar cuando un bucle termina o no debido a un break de forma sencilla.
El código siguiente refleja muy bien esto
Step15: Cuando $n \% x = 0$, dejamos de hacer módulos con $n$ pues ya sabemos que no es primo. Por tanto, nos salimos del bucle con un break. Al salirnos con el break, no entramos en el else sino que volvemos al bucle inicial. En cambio, no hemos encontrado ningún $x$ tal que $n \% x = 0$, ejecutamos la condición else para decir que $n$ es primo.
Ejercicio
¿Cuál es la diferencia entre la sentencia break y la sentencia continue?
Pista
Step16: Al imprimir la lista vemos los diferentes elementos que contiene. Pero, ¿y si queremos acceder a sólo uno de los elementos? En ese caso, necesitarás acceder mediante índices. Cada elemento de la lista tiene un número asociado con su posición dentro de la misma
Step17: Si intentamos acceder a un índice superior al último de todos, 4, obtendremos un error
Step18: Entonces, uno podría pensar que está obligado a conocer la longitud de la lista si quiere acceder al último elemento pero nada más lejos de la realidad! En python, también existen los índices inversos que nos permiten acceder a los elementos de la lista al revés
Step19: Otra cosa que nos podemos hacer usando índices es quedarnos con una sublista. Por ejemplo, si quisiéramos quedarnos únicamente con un top 3 de amigos guays podríamos hacerlo usando el operador
Step20: Pero si queremos quedarnos con los tres primeros, podemos simplemente hacerlo de la siguiente forma
Step21: ¿Y si queremos saber el resto? Simplemente, lo hacemos al revés!
Step22: Ejercicio
Step23: Ahora bien, en Python existe una forma mucho más cómoda de iterar sobre los valores de una lista sin tener que estar pendiente de un índice i
Step24: En Python también existe lo que se llama list comprehesions, que son una forma mucho más sencilla y fácil de leer para crear listas. Por ejemplo, si queremos hacer una lista con las potencias de 2, podríamos hacerlo de la siguiente forma
Step25: Tendríamos nuestra lista de potencias de 2 en tres líneas, pero con los list comprehesions podemos hacerlo en una única línea
Step26: Ejercicio
Crea una lista con todos los números pares del 0 al 10 en una única línea.
Listas anidadas
Seguramente te habrás preguntado si se puede hacer una lista cuyos elementos sean listas, y la respuesta es ¡sí!. Esta representación de listas anidadas se suele usar para representar matrices. Por ejemplo, si queremos representar la siguiente matriz en Python
Step27: Ejercicio
Crea la siguiente matriz en una línea
Step28: Por tanto, para guardar en una lista tanto el nombre como la edad de nuestros amigos podríamos hacerlo de la siguiente forma
Step29: Los valores de las tuplas también tienen índices, por tanto, en nuestro caso el nombre tendría el índice 0 y la edad el índice 1. Por tanto, si queremos acceder a la edad de Paloma tenemos que usar el operador [] dos veces
Step30: Si queremos crear dos variables separadas para guardar el nombre y la edad de nuestro mejor amigo, podemos hacerlo en una sola línea!
Step31: Cuidado! Si por casualidad un amigo cumpliese un año más no podríamos ponerlo en la tupla, debido a que las tuplas no pueden modificarse.
Step32: Ejercicio
Step33: Además, podemos hacer las operaciones típicas sobre conjuntos.
Step34: Elementos de A que no están en B (diferencia)
Step35: Elementos que están o bien en A o bien en B (unión)
Step36: Elementos que están tanto en A como en B (intersección)
Step37: Elementos que están en A o en B pero no en ambos (diferencia simétrica)
Step38: ¿Podríamos hacer un conjunto de listas?
Step39: ¡No! Esto es debido a que los elementos de los conjuntos deben ser hashables. Tal y como se explica en la documentación, un objeto es hashable si su clase define la función __hash__(), que calcula un entero para caracterizar a un objeto, por lo que objetos con el mismo valor deben tener el mismo entero hash, y la función __eq__(), que sirve para comparar objetos.
Las estructuras de datos que pueden cambiar su valor, como las listas o los diccionarios, no son hashables y, por lo tanto, no pueden ser elementos de un conjunto.
Iterando sobre conjuntos
Al igual que con las listas, también es posible iterar sobre conjuntos y hacer set comprehesions
Step40: Ejercicio
¿Es buena idea usar la función set para eliminar los elementos repetidos de una lista?
Un último detalle sobre conjuntos
Step41: Para obtener la edad de Paloma, antes teníamos que estar mirando qué índice tenía en la lista, en cambio, ahora lo tenemos mucho más sencillo
Step42: Si queremos saber los nombres de cada uno de nuestros amigos podemos listar las claves
Step43: Y también podemos ver si hemos incluido a un amigo o no en nuestro diccionario
Step44: La función dict nos permite hacer diccionarios directamente desde Tuplas de la siguiente forma
Step45: Y cuando las keys son simples string también es posible definir un diccionario de la siguiente forma
Step46: Iterando sobre diccionarios
Es posible crear diccionarios en una sola línea, usando las dict comprehesions. En el siguiente ejemplo, cada clave almacena su valor al cuadrado
Step47: Aunque también es posible iterar sobre diccionarios usando bucles for. Para ello, usamos la función items
Step48: Cuando iteramos sobre una lista o un conjunto, podemos usar la función enumerate para obtener la posición y el elemento de la misma forma
Step49: Por último, si queremos iterar sobre dos listas o conjuntos a la vez (del mismo tamaño), podemos hacerlo usando la función zip | Python Code:
print("Hola mundo!")
Explanation: ¿Qué es programar?
Un programa de ordenador es una serie de instrucciones que le dicen a la máquina qué tiene que hacer. Las máquinas no entienden nuestro lenguaje, por lo que tenemos que aprender un lenguaje para poder comunicarnos con ellas y darles órdenes. Hay muchísimos lenguajes de programación hoy en día, cada lenguaje se usa para hacer un tipo de programa. Por ejemplo, si quieres hacer una página web puedes usar HTML, CSS y Javascript, si quieres hacer un programa para consultar una base de datos puedes usar SQL. En el caso de Python, es un lenguaje usado para muchas cosas: desde hacer cálculos científicos hasta programación web, pasando por robótica, seguridad, ciencia de datos, física y muchísimas cosas más. Además, Python es muy sencillo de entender y programar, ya que simplifica muchas tareas que en otros lenguajes como C son muy tediosas. Por eso, es ideal para entender los conceptos básicos de programación y para iniciarse en este mundillo.
Instalación
Linux
Python viene instalado por defecto en todas las distribuciones Linux. En Ubuntu, la versión que viene instalada es python2 y en otras distribuciones como Arch Linux, la versión es python3. Lo recomendable si vas a iniciarte en Python es que empieces directamente por python3. Para comprobar la versión que tenemos instalada ejecutamos el siguiente comando en consola:
[marta@marta-PC ~]$ python -V
Python 3.6.0
En mi caso, tengo la versión 3.6 instalada. En el caso de que tengas una versión de python2, instala la versión 3 usando el siguiente comando:
Ubuntu: sudo apt-get install python3
CentOS: sudo yum install python3
Windows y Mac OS
Para instalar Python en otros sistemas operativos, descarga el instalador de la página oficial de python.
La consola de Python
Python es un lenguaje de programación interpretado, ¿qué quiere decir esto? que interpreta cada instrucción que hemos escrito en nuestro programa a la hora de ejecutar. Otros lenguajes, como C o Java son compilados, por lo que necesitamos generar un ejecutable (los famosos .exe, para que nos entendamos) para poder ejecutarlos. El hecho de que Python sea interpretado nos permite tener una consola donde introducir comandos sin tener que crear un programa y compilarlo. Podemos ejecutar esta consola simplemente ejecutando el siguiente comando:
```
[marta@marta-PC ~]$ python
Python 3.6.0 (default, Dec 24 2016, 08:03:08)
[GCC 6.2.1 20160830] on linux
Type "help", "copyright", "credits" or "license" for more information.
```
También tenemos disponible la consola ipython, una consola de Python que resalta el código que escribimos y autocompleta si pulsamos el tabulador. Para ejecutarla simplemente tenemos que poner el siguiente comando en consola:
```
[marta@marta-PC ~]$ ipython
Python 3.6.0 (default, Dec 24 2016, 08:03:08)
Type "copyright", "credits" or "license" for more information.
IPython 5.1.0 -- An enhanced Interactive Python.
? -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help -> Python's own help system.
object? -> Details about 'object', use 'object??' for extra details.
In [1]:
```
Tanto el >>> de la consola de Python como el In [1]: de la consola de iPython son conocidos como el prompt, donde podemos escribir instrucciones para que el intérprete las ejecute.
Jupyter
Un notebook de Jupyter es una especie de consola de Python que nos permite poder escribir comentarios en Markdown. Es muy útil para poder hacer tutoriales como éste, ya que dispones de la explicación y el código para ejecutar y probar en el mismo sitio. Vamos a empezar a usar la consola de python incluida en jupyter para hacer las primeras pruebas. Para instalar Jupyter podemos usar tanto pip como anaconda:
[marta@marta-PC ~]$ sudo pip3 install --upgrade pip && sudo pip3 install jupyter
También puedes seguir las instrucciones de la página oficial de Jupyter
```
Hola mundo!
Una vez tenemos Python instalado y hemos elegido nuestro intérprete favorito, vamos a empezar a programar. El primer programa que se suele hacer en todos los lenguajes de programación es conocido como el hola mundo. En otros lenguajes como C, tendríamos que hacer varias líneas de código y, después, compilar nuestro programa para poder ejecutarlo, en cambio, en Python lo podemos ejecutar en una sola línea!
End of explanation
print("Hola Marta!")
Explanation: Ahora imagina que quieres hacer un programa para saludar a tu amiga Marta, ¿cómo lo harías? De primeras, podrías hacer lo siguiente:
End of explanation
nombre = input("¿Cómo te llamas? ")
print("Hola ", nombre,"!")
Explanation: Así, tendríamos un programa para saludar a cualquier Marta del mundo, pero, ¿y si quisiéramos un programa para poder saludar a cualquier persona? Para poder llevar esto a cabo necesitamos introducir un concepto de programación básico: las variables.
Variables
Una variable en programación describe un lugar donde guardar información, que puede ser diferente en cada ejecución. En nuestro caso, queremos saludar al usuario que ejecute nuestro programa, pero no sabemos su nombre a priori y no lo sabremos hasta la ejecución, por tanto, necesitamos reservar un espacio para poder guardar ahí el nombre del usuario durante la ejecución de nuestro programa.
Hay muchos tipos de variables:
números enteros (int)
números reales (float o double)
caracteres (char) y cadenas de caracteres (string)
booleanos (bool), estos últimos sólo pueden tomar dos valores: verdadero (True) o falso (False).
En otros lenguajes de programación, como C, necesitamos indicar el tipo que tiene cada variable que declaramos. En cambio, en python no es necesario ya que el propio intérprete lo infiere por sí mismo.
Una vez hemos aprendido el concepto de variable, ya sí podemos hacer nuestro programa para saludar a cualquier persona.
End of explanation
nombre1 = "Marta"
nombre2 = "María"
suma_nombres = nombre1 + " y " + nombre2
print(suma_nombres)
Explanation: Operando con variables
En Python podemos hacer operaciones sobre las variables que tenemos de forma muy sencilla. A modo de resumen, las principales operaciones en python son:
| Símbolo | Operación |
|:-------:|:-------------:|
|+ | suma |
|- | resta |
|* | multiplicación|
|/ | división |
Estos operadores se pueden usar con cualquier tipo de variable, tanto números como letras. A continuación, te muestro varios ejemplos.
Podemos concatenar dos palabras usando el operador +.
End of explanation
num1 = 5
mult_letra_num = nombre1 * 5
print(mult_letra_num)
Explanation: También podemos multiplicar una palabra por un número usando el operador *.
End of explanation
num2 = 6
operacion1 = num1 + num2 * 2
print(operacion1)
operacion2 = (num1 + num2) * 2
print(operacion2)
Explanation: Y por último, también podemos hacer operaciones numéricas. En las operaciones numéricas Python respeta el orden de los operadores: primero se realizan las multiplicaciones y divisiones y, después, las sumas y restas. Si queremos cambiar este orden simplemente tenemos que usar paréntesis.
End of explanation
import math
print(math.sqrt(4))
Explanation: En el primer caso hemos realizado la operación
$$5 + (6 * 2) = 5 + 12 = 17$$
y en el segundo, en cambio
$$ (5 + 6) * 2 = 11 * 2 = 22$$
Ejercicio:
Haz un pequeño programa que le pida al usuario introducir dos números ($x_1$ y $x_2$), calcule la siguiente operación y muestre el resultado de la misma ($x$):
$$ x = \frac{20 * x_1 - x_2}{x_2 + 3} $$
Si intentas operar con el resultado de la función input obtendrás un error que te informa que no se pueden restar dos datos de tipo str. Usa la función int para convertir los datos introducidos por teclado a datos numéricos.
Librería math
Una librería es un conjunto de operaciones relacionadas entre sí, guardadas en una especie de "paquete". En este caso, vamos a hablar de la librería math que tiene operaciones matemáticas más avanzadas tales como la raíz cuadrada.
Para poder usar esta librería debemos importarla a nuestro programa. Esto se hace usando la instrucción import:
End of explanation
import math as m
print(m.sqrt(4))
Explanation: Usando sólo la instrucción import debemos preceder la instrucción que queremos del nombre de la librería. Si este nombre es muy largo, podemos importar la librería usando un alias:
End of explanation
from math import sqrt
print(m.sqrt(4))
Explanation: Ahora bien, si sólo vamos a usar unas operaciones concretas de una librería, podemos especificar cuáles son y así no tener que usar el nombre de la librería para poder utilizarlas.
End of explanation
num = int(input("Introduce un número: "))
raiz = sqrt(num)
Explanation: Esta librería tiene muchas más operaciones que puedes consultar en la documentación oficial
Estructuras de control
Hasta ahora, nuestros programas se basan en una serie de instrucciones que se ejecutan una detrás de otra. Esto limita mucho los programas que podemos hacer, ya que no nos permite controlar el flujo de ejecución (control flow) de nuestro programa. A continuación vamos a ver una serie de instrucciones especiales que nos permiten hacer precisamente eso.
if
Imagina que estás operando con raíces cuadradas, como sabrás la raíz cuadrada de un número es negativa, y quieres evitar hacer la raíz cuadrada si el número introducido por el usuario es negativo.
End of explanation
num = int(input("Introduce un número: "))
if num > 0:
raiz = sqrt(num)
else:
print("No puedo hacer la raíz de un número negativo!")
Explanation: ¿Qué podemos hacer para que no ocurra esto? Controlar con un if la condición de que el número sea positivo para hacer la raíz cuadrada y avisar al usuario en caso contrario:
End of explanation
if num > 0:
raiz = sqrt(num)
elif num == 0:
print("Para qué quieres saber eso jaja saludos")
else:
print("No puedo hacer la raíz de un número negativo!")
Explanation: Si quisiéramos controlar una condición más, usaríamos la instrucción elif, que en otros lenguajes como C es conocida como else if:
End of explanation
num = int(input("Introduce un número: "))
while (num < 0):
num = int(input("Introduce un número: "))
raiz = sqrt(num)
print(raiz)
Explanation: Ejercicio
Haz un programa que le pida al usuario un número (de ninjas). Si dicho número es menor que 50 y es par, el programa imprimirá "puedo con ellos!", en caso contrario imprimirá "no me vendría mal una ayudita..."
Nota: para saber si un número es par o no debes usar el operador $\%$ y para saber si dos condiciones se cuplen a la vez, el operador lógico and
while
En el ejemplo anterior le decíamos al usuario que no podíamos hacer la raíz negativa de un número pero, ¿cómo haríamos para, en vez de darle este mensaje, volver a pedirle un número una y otra vez hasta que fuese negativo? Necesitamos ejecutar el mismo código hasta que se dé la condición que buscamos: que el usuario introduzca un número positivo.
Esto podemos hacerlo usando un bucle while!
End of explanation
for i in range(num):
print(i)
Explanation: Ejercicio
Haz un bucle while que imprima todos los números desde el 0 hasta un número que introduzca el usuario. Si el número que introduce es negativo puedes tomar dos decisiones: pedirle que introduzca un número positivo o contar hacia atrás, tú eliges!
for
En lenguajes de programación como C o Java, un bucle for sirve para recorrer una secuencia de números, de la siguiente forma:
for (int i=0; i<maximo_numero; i++)
Donde maximo_numero es una variable previamente definida.
Hay también otro tipo de bucles for denominados forEach en Java que sirven para iterar sobre los elementos de cualquier estructura de datos (más adelante veremos lo que es). En Python, los bucles for tienen esta función: iterar sobre una serie de elementos.
Para iterar sobre una serie de números, debemos generar dicha serie usando la función range(). Así, el ejercicio anteriormente pleanteado para resolverse con un bucle while sería:
End of explanation
for n in range(2,10):
for x in range(2, n):
if n % x == 0:
print(n, " = ", x, " * ", n//x)
break
else:
print(n, " es primo!")
Explanation: ¿Qué es eso de la función range()? Sirve para generar una secuencia de números. Puedes consultar más sobre esta función en la documentación de Python. En Python 2, existían tanto range como xrange aunque en Python 3, range hace lo mismo que hacía xrange en Python 2.
Ejercicio
Genera con range los números pares del 0 al 10, ambos inclusive. ¿Qué cambiarías para generar del 2 al 10?
break
La sentencia break sirve para detener un bucle antes de que llegue al final (en un bucle for) o antes de que la condición sea falsa (en un bucle while).
Los bucles (for y while) pueden tener una sentencia else como los if. Esta sentencia else se ejecuta si el bucle no ha terminado por un break y nos puede servir para controlar cuando un bucle termina o no debido a un break de forma sencilla.
El código siguiente refleja muy bien esto: para saber si un número $n$ es primo calculamos su módulo entre todos los números en el intervalo $[2,n)$ y, en el momento en el que uno de estos módulos sea igual a $0$, sabremos que $n$ no es primo.
End of explanation
mis_amigos = ['Paloma', 'Paula', 'Teresa', 'Marina', 'Braulio']
print(mis_amigos)
Explanation: Cuando $n \% x = 0$, dejamos de hacer módulos con $n$ pues ya sabemos que no es primo. Por tanto, nos salimos del bucle con un break. Al salirnos con el break, no entramos en el else sino que volvemos al bucle inicial. En cambio, no hemos encontrado ningún $x$ tal que $n \% x = 0$, ejecutamos la condición else para decir que $n$ es primo.
Ejercicio
¿Cuál es la diferencia entre la sentencia break y la sentencia continue?
Pista: consúltalo en la documentación de Python.
Estructuras de datos
Por ahora sólo hemos estudiado variables en las que podemos guardar un único valor: un número, una letra, una frase... ¿No te da la impresión de que esto se queda algo corto? Sí, y probablemente no eres la única persona que lo ha pensado. Las estructuras de datos son variables compuestas, esto quiere decir que en ellas podemos almacenar muchos datos. Hay estructuras de datos de todo tipo, en python tenemos las siguientes:
Listas
Tuplas
Conjuntos
Diccionarios
Listas
Imagina que quieres guardar en una variable los nombres de tus mejores amigos. Una muy buena opción para hacerlo es una lista. Al igual que en la vida real hacemos listas como lista de cosas por hacer, lista para la compra, lista de propósitos de año nuevo, etc. en Python también podemos hacerlas usando esta estructura de datos.
Con las listas, podemos guardar en un mismo sitio variables relacionadas entre sí. Esto nos permite poder aplicar operaciones sobre todas ellas sin tener que repetir código.
Para declarar una lista, usaremos los [].
End of explanation
mis_amigos[0]
Explanation: Al imprimir la lista vemos los diferentes elementos que contiene. Pero, ¿y si queremos acceder a sólo uno de los elementos? En ese caso, necesitarás acceder mediante índices. Cada elemento de la lista tiene un número asociado con su posición dentro de la misma:
| Elemento | Posición |
|:--------:|:--------:|
|Paloma | 0 |
|Paula | 1 |
|Teresa | 2 |
|Marina | 3 |
|Braulio | 4 |
así, si por ejemplo queremos únicamente mostrar a nuestro mejor amigo, accederemos a él mediante el índice 0:
End of explanation
mis_amigos[5]
Explanation: Si intentamos acceder a un índice superior al último de todos, 4, obtendremos un error:
End of explanation
mis_amigos[-1]
Explanation: Entonces, uno podría pensar que está obligado a conocer la longitud de la lista si quiere acceder al último elemento pero nada más lejos de la realidad! En python, también existen los índices inversos que nos permiten acceder a los elementos de la lista al revés:
| Elemento | Posición |
|:--------:|:--------:|
|Paloma | -5 |
|Paula | -4 |
|Teresa | -3 |
|Marina | -2 |
|Braulio | -1 |
Por lo que para acceder al último elemento de nuestra lista sólo tendríamos que usar el índice -1:
End of explanation
mis_amigos[0:3]
Explanation: Otra cosa que nos podemos hacer usando índices es quedarnos con una sublista. Por ejemplo, si quisiéramos quedarnos únicamente con un top 3 de amigos guays podríamos hacerlo usando el operador :
End of explanation
mis_amigos[:3]
Explanation: Pero si queremos quedarnos con los tres primeros, podemos simplemente hacerlo de la siguiente forma:
End of explanation
mis_amigos[3:]
Explanation: ¿Y si queremos saber el resto? Simplemente, lo hacemos al revés!
End of explanation
for i in range(len(mis_amigos)):
print(mis_amigos[i])
Explanation: Ejercicio:
Haz una lista de la compra e imprime los siguientes elementos:
Penúltimo elemento
Del segundo al cuarto elemento
Los tres últimos
Todos!
Por último, elimina el tercer elemento de la lista usando la sentencia del
Iterando sobre una lista
Con los bucles podemos iterar sobre los valores de una lista. Típicamente, un programador C iteraría sobre una lista de la siguiente forma:
End of explanation
for amigo in mis_amigos:
print(amigo)
Explanation: Ahora bien, en Python existe una forma mucho más cómoda de iterar sobre los valores de una lista sin tener que estar pendiente de un índice i:
End of explanation
potencias = []
for x in range(10):
potencias.append(2**x)
print(potencias)
Explanation: En Python también existe lo que se llama list comprehesions, que son una forma mucho más sencilla y fácil de leer para crear listas. Por ejemplo, si queremos hacer una lista con las potencias de 2, podríamos hacerlo de la siguiente forma:
End of explanation
potencias = [2**x for x in range(10)]
print(potencias)
Explanation: Tendríamos nuestra lista de potencias de 2 en tres líneas, pero con los list comprehesions podemos hacerlo en una única línea:
End of explanation
M = [[j for j in range(i, i+3)] for i in range(1,3)]
M
Explanation: Ejercicio
Crea una lista con todos los números pares del 0 al 10 en una única línea.
Listas anidadas
Seguramente te habrás preguntado si se puede hacer una lista cuyos elementos sean listas, y la respuesta es ¡sí!. Esta representación de listas anidadas se suele usar para representar matrices. Por ejemplo, si queremos representar la siguiente matriz en Python:
$$ M_{2 \times 3} = \left ( \begin{matrix}
1 & 2 & 3 \
2 & 3 & 4
\end{matrix} \right)$$
Lo haríamos de la siguiente forma:
End of explanation
tupla_ejemplo = 5, 'perro', 3.6
print(tupla_ejemplo)
Explanation: Ejercicio
Crea la siguiente matriz en una línea:
$$ M_{2 \times 3} = \left ( \begin{matrix}
1 & 2 & 3 \
4 & 5 & 6
\end{matrix} \right)$$
Tuplas
Imagina que deseas guardar tanto los nombres de tus mejores amigos como su edad. De primeras podrías pensar en hacer dos listas, de la siguiente forma:
| índice | 0 | 1 | 2 | 3 | 4 |
|:------:|:-:|:-:|:-:|:-:|:-:|
| nombres| Paloma | Paula | Teresa | Marina | Braulio |
| edades | 25 | 20 | 19 | 19 | 21 |
De tal forma que para saber la edad de Paloma (primer elemento de la lista nombres) tendríamos que mirar el primer elemento de la lista edades. Pero, ¿y si te dijera que en Python podríamos guardar en una misma variable la edad y el nombre de una persona? Se puede! Con las llamadas tuplas.
Una tupla es una serie de valores separados por comas.
End of explanation
amigos_edades = [('Paloma', 25), ('Paula', 20), ('Teresa', 19), ('Marina', 19), ('Braulio', 21)]
print(amigos_edades)
Explanation: Por tanto, para guardar en una lista tanto el nombre como la edad de nuestros amigos podríamos hacerlo de la siguiente forma:
End of explanation
amigos_edades[0][1]
Explanation: Los valores de las tuplas también tienen índices, por tanto, en nuestro caso el nombre tendría el índice 0 y la edad el índice 1. Por tanto, si queremos acceder a la edad de Paloma tenemos que usar el operador [] dos veces: una para acceder al elemento de la lista que queremos y otra para acceder al elemento de la tupla que queremos.
End of explanation
nombre, edad = amigos_edades[0]
print(nombre)
print(edad)
Explanation: Si queremos crear dos variables separadas para guardar el nombre y la edad de nuestro mejor amigo, podemos hacerlo en una sola línea!
End of explanation
amigos_edades[0][1] += 1
Explanation: Cuidado! Si por casualidad un amigo cumpliese un año más no podríamos ponerlo en la tupla, debido a que las tuplas no pueden modificarse.
End of explanation
mi_lista = [5,4,6,3,7,5,1,9,3]
print(mi_lista)
mi_conjunto = set(mi_lista)
print(mi_conjunto)
Explanation: Ejercicio:
Vuelve a hacer la lista de la compra que hiciste en el último ejercicio, pero esta vez guarda cada elemento de la lista de la compra junto con su precio. Después, imprime los siguientes elementos:
El precio del tercer elemento.
El nombre del último elemento.
Tanto el nombre como el precio del primer elemento.
Cojuntos
Un conjunto es una lista de elementos ordenados y en la que no hay elementos repetidos. Se definen con el operador {. Sus operadores básicos son eliminar elementos repetidos, consultar si un elemento está en el conjunto o no y, por supuesto, operaciones típicas de los conjuntos como la unión, la intersección, la diferencia...
¿Qué ventajas puede darte usar un conjunto en lugar de una lista? Al estar ordenados, es mucho más rápido encontrar un elemento aunque esto también hace que insertar nuevos elementos sea más costoso.
Podemos crear un conjunto a partir de una lista:
End of explanation
A = {1,2,4,5,6,7}
B = {2,3,5,6,8,9}
Explanation: Además, podemos hacer las operaciones típicas sobre conjuntos.
End of explanation
A - B
Explanation: Elementos de A que no están en B (diferencia):
End of explanation
A | B
Explanation: Elementos que están o bien en A o bien en B (unión):
End of explanation
A & B
Explanation: Elementos que están tanto en A como en B (intersección):
End of explanation
A ^ B
Explanation: Elementos que están en A o en B pero no en ambos (diferencia simétrica):
End of explanation
{[1,2,3],[4,5,6]}
Explanation: ¿Podríamos hacer un conjunto de listas?
End of explanation
{x for x in 'abracadabra' if x not in 'abc'}
for c in mi_conjunto:
print(c)
Explanation: ¡No! Esto es debido a que los elementos de los conjuntos deben ser hashables. Tal y como se explica en la documentación, un objeto es hashable si su clase define la función __hash__(), que calcula un entero para caracterizar a un objeto, por lo que objetos con el mismo valor deben tener el mismo entero hash, y la función __eq__(), que sirve para comparar objetos.
Las estructuras de datos que pueden cambiar su valor, como las listas o los diccionarios, no son hashables y, por lo tanto, no pueden ser elementos de un conjunto.
Iterando sobre conjuntos
Al igual que con las listas, también es posible iterar sobre conjuntos y hacer set comprehesions:
End of explanation
mis_amigos = {'Paloma':25, 'Paula':20, 'Teresa':19, 'Marina':19,
'Braulio':21}
print(mis_amigos)
Explanation: Ejercicio
¿Es buena idea usar la función set para eliminar los elementos repetidos de una lista?
Un último detalle sobre conjuntos: para crear un conjunto vacío usamos set() ya que usar {} creará un diccionario vacío.
Diccionarios
A diferencia de las estructuras de datos que hemos visto hasta ahora, los diccionarios no se indexan por números sino por claves (keys). Cada entrada de nuestro diccionario está formada por dos valores distintos: la clave y el valor. La clave nos sirve para acceder al elemento (valor) de forma rápida. Debido a que las claves sirven para identificar a cada elemento,deben ser únicas: si introduces un nuevo elemento en el diccionario con clave repetida, se sobreescribirá el elemento anterior así que ¡mucho cuidado!
Vamos a ver un pequeño ejemplo de uso de los diccionarios para que nos quede más claro. Volvamos al ejemplo anterior sobre nuestros amigos y su edad.
End of explanation
print(mis_amigos['Paloma'])
Explanation: Para obtener la edad de Paloma, antes teníamos que estar mirando qué índice tenía en la lista, en cambio, ahora lo tenemos mucho más sencillo:
End of explanation
list(mis_amigos.keys())
Explanation: Si queremos saber los nombres de cada uno de nuestros amigos podemos listar las claves:
End of explanation
'Marta' in mis_amigos
Explanation: Y también podemos ver si hemos incluido a un amigo o no en nuestro diccionario:
End of explanation
dict([('Paloma',25), ('Paula',20), ('Teresa',19), ('Marina', 19),
('Braulio', 21)])
Explanation: La función dict nos permite hacer diccionarios directamente desde Tuplas de la siguiente forma:
End of explanation
dict(Paloma=25, Paula=20, Teresa=19, Marina=19, Braulio=21)
Explanation: Y cuando las keys son simples string también es posible definir un diccionario de la siguiente forma:
End of explanation
{x: x**2 for x in (2,4,6)}
Explanation: Iterando sobre diccionarios
Es posible crear diccionarios en una sola línea, usando las dict comprehesions. En el siguiente ejemplo, cada clave almacena su valor al cuadrado:
End of explanation
for nombre, edad in mis_amigos.items():
print(nombre, edad)
Explanation: Aunque también es posible iterar sobre diccionarios usando bucles for. Para ello, usamos la función items:
End of explanation
for posicion, elemento in enumerate(['tic', 'tac', 'toe']):
print(posicion, elemento)
Explanation: Cuando iteramos sobre una lista o un conjunto, podemos usar la función enumerate para obtener la posición y el elemento de la misma forma:
End of explanation
questions = ['name', 'quest', 'favourite color']
answers = ['lancelot', 'the holy grail', 'blue']
for q, a in zip(questions, answers):
print("What is your {}? It is {}.".format(q,a))
Explanation: Por último, si queremos iterar sobre dos listas o conjuntos a la vez (del mismo tamaño), podemos hacerlo usando la función zip:
End of explanation |
13,869 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Recursive Algorithm
Step1: This lets us derive a recursive form.
$Y = \Theta X$
$Y X^T = \Theta X X^T$
We accumulate $Y X^T$ and $X X^T$ since they are of fixed size
and there is a recursion relation as shown below.
At the end we perform the inverse. This is the same as a pseudo-inverse
solution but done recursively.
$\Theta = Y X^T (X X^T)^{-1}$ | Python Code:
import sympy
sympy.init_printing()
Theta = sympy.Matrix(sympy.symbols(
'theta_0:3_0:4')).reshape(3,4)
def Y(n):
return sympy.Matrix(sympy.symbols(
'G_x:z_0:{:d}'.format(n+1))).T.reshape(3, n+1)
def C(n):
return sympy.ones(n+1, 1)
def T(n):
return sympy.Matrix(sympy.symbols('T_0:{:d}'.format(n+1)))
def T2(n):
return T(n).multiply_elementwise(T(n))
def T3(n):
return T2(n).multiply_elementwise(T(n))
def X(n):
return C(n).row_join(T(n)).row_join(T2(n)).row_join(T3(n)).T
Explanation: Recursive Algorithm
End of explanation
X(0)*X(0).T
X(1)*X(1).T
dX = X(1)*X(1).T - X(0)*X(0).T
dX
Y(0)*X(0).T
Y(1)*X(1).T
dYXT = Y(1)*X(1).T - Y(0)*X(0).T
dYXT
Explanation: This lets us derive a recursive form.
$Y = \Theta X$
$Y X^T = \Theta X X^T$
We accumulate $Y X^T$ and $X X^T$ since they are of fixed size
and there is a recursion relation as shown below.
At the end we perform the inverse. This is the same as a pseudo-inverse
solution but done recursively.
$\Theta = Y X^T (X X^T)^{-1}$
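As a minimal numpy sketch of the same recursion (added for illustration; the original derivation above is symbolic), the two fixed-size matrices are accumulated sample by sample and inverted once at the end:

```python
import numpy as np

def fit_recursive(samples):
    """samples: iterable of (x, y) pairs with x of shape (4,) and y of shape (3,)."""
    YXT = np.zeros((3, 4))  # running sum of y x^T
    XXT = np.zeros((4, 4))  # running sum of x x^T
    for x, y in samples:
        YXT += np.outer(y, x)
        XXT += np.outer(x, x)
    return YXT @ np.linalg.inv(XXT)  # Theta = (Y X^T)(X X^T)^{-1}
```

In practice `np.linalg.solve(XXT.T, YXT.T).T` would be preferred over forming the explicit inverse.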
End of explanation |
13,870 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
COMP 3314 Assignment 1
<br><br>
Nian Xiaodong (3035087112)
Python + ipynb
The goal of this assignment is to learn/review python and ipynb.
Python is a popular programming language, and also interfaced for several machine learning libraries, such as scikit-learn, Theano, and TensorFlow.
Ipynb is a digital notebook format that allows flexible incorporation of a variety of information, such as code (e.g. python), data, text (e.g. markdown, html, and Latex), images (common raster/vector graphics formats such as jpg and svg), and video (e.g. youtube).
We can also run code and experiments directly inside ipynbs.
Thus, we will use ipynb for all assignments in this class.
Sorting
As a starting exercise, let's try to implement a sorting function via python.
The input to the function is a python array consisting of an arbitrary sequence of numbers.
The output is a sorted sequence with numbers ranging from small to large.
The code stub, along with the test driver, are shown below.
There are various algorithms for sorting with different time complexities with respect to the array size $N$, e.g. $O(N^2)$ for bubble sort and $O(Nlog(N))$ for quick sort.
You can choose any algorithm to implement, as long as it produces correct results with reasonable run-time.
Please submit a single ipynb file, consisting of python code in a code cell and descriptions (including algorithm and analysis of complexity) in a markdown cell.
You can use this ipynb as a start, or create your own.
Code (20 points)
Please implement your algorithm via the function below.
Step1: Line fitting
<img src="./images/01_04.png" width=50%>
Given a set of data points $\left(\mathbf{X}, \mathbf{Y}\right)$, fit a model curve to describe their relationship.
This is actually a regression problem, but we have all seen this in prior math/coding classes to serve as a good example for machine learning.
Recall $\mathbf{Y} = f(\mathbf{X}, \Theta)$ is our model.
For 2D linear curve fitting, the model is a straight line | Python Code:
# the function
def sort(values):
    # bubble sort: each outer pass bubbles the largest remaining value up to index j,
    # so the tail of the array becomes sorted pass by pass (O(N^2) comparisons overall)
for j in range(len(values)-1,0,-1):
for i in range(0, j):
if values[i] > values[i+1]:
values[i], values[i+1] = values[i+1], values[i]
return values
# main
import numpy as np
# different random seed
np.random.seed()
# generate numbers
N = 10
# the TA will vary the input array size and content during testing
values = np.random.random([N])
sort(values)
correct = True
for index in range(1, len(values)):
if(values[index-1] > values[index]):
correct = False
print('Correct? ' + str(correct))
Explanation: COMP 3314 Assignment 1
<br><br>
Nian Xiaodong (3035087112)
Python + ipynb
The goal of this assignment is to learn/review python and ipynb.
Python is a popular programming language, and also interfaced for several machine learning libraries, such as scikit-learn, Theano, and TensorFlow.
Ipynb is a digital notebook format that allows flexible incorporation of a variety of information, such as code (e.g. python), data, text (e.g. markdown, html, and Latex), images (common raster/vector graphics formats such as jpg and svg), and video (e.g. youtube).
We can also run code and experiments directly inside ipynbs.
Thus, we will use ipynb for all assignments in this class.
Sorting
As a starting exercise, let's try to implement a sorting function via python.
The input to the function is a python array consisting of an arbitrary sequence of numbers.
The output is a sorted sequence with numbers ranging from small to large.
The code stub, along with the test driver, are shown below.
There are various algorithms for sorting with different time complexities with respect to the array size $N$, e.g. $O(N^2)$ for bubble sort and $O(N\log N)$ (on average) for quick sort.
You can choose any algorithm to implement, as long as it produces correct results with reasonable run-time.
Please submit a single ipynb file, consisting of python code in a code cell and descriptions (including algorithm and analysis of complexity) in a markdown cell.
You can use this ipynb as a start, or create your own.
Code (20 points)
Please implement your algorithm via the function below.
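(Analysis for the write-up: the submission below uses bubble sort, whose nested sweeps perform at most
$$
\sum_{j=1}^{N-1} j = \frac{N(N-1)}{2} = O(N^2)
$$
comparisons and swaps, with $O(1)$ extra memory — slower than $O(N\log N)$ sorts, but more than adequate for the tested array sizes.)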
End of explanation
# line model
import numpy as np
class Line(object):
def __init__(self, w0, w1):
self.w0 = w0
self.w1 = w1
def predict(self, x, noise=0):
return (x*self.w1 + self.w0 + noise*np.random.normal())
# Input: data, a 2D array with each (x, t) pair on a row
# Return: w0 and w1, the intercept and slope of the fitted line
def learn(self, data):
# math equations derived above
N = len(data)
sumX = sum(r[0] for r in data)
sumT = sum(r[1] for r in data)
sumX2 = sum(pow(r[0],2) for r in data)
sumXT = sum((r[0]*r[1]) for r in data)
w1 = (N*sumXT - sumX*sumT) / (N*sumX2 - pow(sumX, 2))
w0 = (sumT - w1*sumX) / N
return w0, w1
# test
np.random.seed()
w0 = np.asscalar(np.random.random(1))*2-1
w1 = np.asscalar(np.random.random(1))*2-1
line = Line(w0, w1)
N = 20
noise = 0.05
X = np.random.random([N])
T = []
for x in X:
T.append(np.sum(line.predict(x, noise)))
T = np.array(T)
#data = np.vstack((X, T)).transpose()
data = np.array([X, T]).transpose()
w0_fit, w1_fit = line.learn(data)
line_fit = Line(w0_fit, w1_fit)
print('truth: ' + str(w0) + ' ' + str(w1))
print('predict: ' + str(w0_fit) + ' ' + str(w1_fit))
# plot
%matplotlib inline
import matplotlib.pyplot as plt
plt.scatter(data[:, 0], data[:, 1], color='black', marker='o')
X_endpoints = [0, 1]
Y_truth, Y_fit = [], []
for x in X_endpoints:
Y_truth.append(line.predict(x))
Y_fit.append(line_fit.predict(x))
plt.plot(X_endpoints, Y_truth, color='blue', label='truth')
plt.plot(X_endpoints, Y_fit, color='red', label='predict')
plt.legend(loc='best')
plt.tight_layout()
plt.show()
Explanation: Line fitting
<img src="./images/01_04.png" width=50%>
Given a set of data points $\left(\mathbf{X}, \mathbf{Y}\right)$, fit a model curve to describe their relationship.
This is actually a regression problem, but we have all seen this in prior math/coding classes to serve as a good example for machine learning.
Recall $\mathbf{Y} = f(\mathbf{X}, \Theta)$ is our model.
For 2D linear curve fitting, the model is a straight line:
$y = w_1 x + w_0$, so the parameters $\Theta = {w_0, w_1}$.
The loss function is $L\left(\mathbf{X}, \mathbf{T}, \mathbf{Y}\right) = \sum_i \left( T^{(i)} - Y^{(i)}\right)^2 = \sum_i \left( T^{(i)} - w_1 X^{(i)} - w_0 \right)^2$.
<br>
($\mathbf{X}$ is a matrix/tensor, and each data sample is a row. We denote the ith sample/row as $\mathbf{X}^{(i)}$.)
For this simple example we don't care about regularization, thus $P(\Theta) = 0$.
The goal is to optimize $\Theta = {w_0, w_1 }$ with given $\left(\mathbf{X}, \mathbf{Y}\right)$ to minimize $L$.
For simple cases like this, we can directly optimize via calculus:
$$
\begin{align}
\frac{\partial L}{\partial w_0} & = 0 \\
\frac{\partial L}{\partial w_1} & = 0
\end{align}
$$
Math (30 points)
Write down explicit formulas for $w_0$ and $w_1$ in terms of $\mathbf{X}$ and $\mathbf{T}$.
To minimize $L$,
$$
\left\{\begin{matrix}
\frac{\partial L}{\partial w_0} = \frac{\partial}{\partial w_0}\sum_i\left( T^{(i)} - w_1 X^{(i)} - w_0 \right)^2 = 0\\
\frac{\partial L}{\partial w_1} = \frac{\partial}{\partial w_1}\sum_i\left( T^{(i)} - w_1 X^{(i)} - w_0 \right)^2 = 0
\end{matrix}\right.
$$
Thus, we get
$$
\left\{\begin{matrix}
w_1 = \frac{\left(\sum_i 1\right)\left(\sum_i X^{(i)}T^{(i)}\right)-\left(\sum_i X^{(i)}\right)\left(\sum_i T^{(i)}\right)}{\left(\sum_i 1\right)\left(\sum_i \left(X^{(i)}\right)^2\right)-\left(\sum_i X^{(i)}\right)^2}\\
w_0 = \frac{\sum_i T^{(i)}-w_1\left(\sum_i X^{(i)}\right)}{\sum_i 1}
\end{matrix}\right.
$$
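As a quick, optional sanity check (not required by the assignment), these closed-form expressions agree with numpy's built-in least-squares line fit:

```python
import numpy as np

x = np.random.random(50)
t = 0.7 * x - 0.2 + 0.01 * np.random.normal(size=50)

N = len(x)
w1 = (N * np.sum(x * t) - x.sum() * t.sum()) / (N * np.sum(x**2) - x.sum()**2)
w0 = (t.sum() - w1 * x.sum()) / N

slope, intercept = np.polyfit(x, t, deg=1)        # coefficients, highest degree first
print(np.allclose([w1, w0], [slope, intercept]))  # True, up to floating point error
```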
Code (50 points)
Implement your math above in the code below.
End of explanation |
13,871 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Photonic design in dphox
At a glance
In this tutorial, the goal is to demonstrate how practical photonic devices can be designed efficiently in dphox.
Along the way, the following advantages will be highlighted
Step1: Waveguide crossing
In this tutorial, we will design waveguide crossings while also understanding how geometries can be manipulated.
First let's define a waveguide. Our goal is to rotate that same waveguide at the center to form a 90-degree crossing with four-way symmetry.
Step2: The ports of the taper waveguide are accessed as follows. Ports can be thought of as "reference poses," where a pose includes a position ($x, y$) and orientation (angle $a$), and also contains information about the width. These ports are incredibly important in any design flow, especially for routing, and they also play a critical role in simulating waveguide-based devices since they define the mode-based source (the port can store height $h$ and position $z$ for a 3D application).
Step3: One of dphox's advantages is that it provides a convenient shapely interface. We can use shapely's notebook __repr__ to quickly view any pattern by just accessing it
Step4: The shapely pattern is red because there are intersections in the pattern, namely shared boundaries. This makes it hard to do things like shapely boolean operations on the pattern due to self-intersections. To remedy this, we can apply a union to get rid of the shared patterns, resulting in a green preview. Note that now we have a
Step5: Now let's plot the 90-degree rotated waveguide about the origin. Note that we haven't rotated the pattern about its center so it's misaligned.
Step6: Clearly, the green taper is the correct one, but now we need to combine the two waveguides and assign the right ports to it.
Step7: As you can see, we have succeeded in designing a crossing with the appropriate ports.
Polarization insensitive grating
Let's try another related challenge
Step8: Instead of manually calculating where the grating should go, let's use some functionality in dphox to place the grating in the appropriate location. Let's start by doing this for a box. We use the method align which aligns the centers of two patterns.
Step9: We've aligned the box to the center of the pattern but now we need to turn the box into a grating. Thankfully, there are methods for this already built into the Box class.
Step10: Patterns in dphox support boolean operations such as subtraction and addition, which allows us to create our final grating.
Step11: But what if we want holes rather than pillars in the center for this grating? Just use an extra boolean operation!
Step12: We can also look at this in 3D!
Step13: Photonic MZI mesh
In dphox, we provide several prefabbed devices. Here, we demonstrate how to construct a mesh of active MZI devices using either MEMS-based or thermo-optic-based phase shifters. These photonic meshes are useful in quantum computing, machine learning, and optical cryptography applications.
Define phase shifters and couplers
Step14: Define optical interconnects and interposers
We need to have a way to get light on the chip. One way to do this is to use a fiber array. Since the pitch of the interposer is not the same as the pitch above (the interport_distance is given to be 50 $\mu$m), we need an interposer from the standard fiber pitch of 127 $\mu$m to 50 $\mu$m. The interposer includes trombones that perform path length matching, which may be desirable in some applications of the mesh.
The actual optical interconnect can be an edge coupler or a grating. Here in dphox, we provide a focusing grating prefab as below, which might work in SOI, though this is untested.
Step15: Here, we place the interposer at the appropriate ports. The outputs are small but once we plot it, holoviews allows us to zoom using the scroll tool.
Step16: Let's take a look at one of our gratings up close using trimesh
Step17: Save the overall device to a GDS file (supported in Python 3.8 and above only; this isn't supported in Colab yet and so should be run locally).
Step18: Use another type of phase shifter
We can also change the phase shifter to be a NEMS-based phase shifter using the code below
Step19: Here's another view!
Step20: Once we are satisfied with a phase shifter design, we can save to a gds.
Step21: We can also plot the mesh with the new phase shifter, but this takes much longer than a GDS export since we leverage cell references in the GDS for computational efficiency. | Python Code:
import dphox as dp
import numpy as np
import holoviews as hv
from trimesh.transformations import rotation_matrix
hv.extension('bokeh')
import warnings
warnings.filterwarnings('ignore') # ignore shapely warnings
Explanation: Photonic design in dphox
At a glance
In this tutorial, the goal is to demonstrate how practical photonic devices can be designed efficiently in dphox.
Along the way, the following advantages will be highlighted:
Efficient raw numpy implementations for polygon and curve transformations
Dependence on shapely rather than pyclipper (which is less actively maintained).
dphox.Curve ~ shapely.geometry.MultiLineString
dphox.Pattern ~ shapely.geometry.MultiPolygon
A simple implementation of GDS I/O
Uses trimesh for 3D viewing/export, blender figures at your fingertips!
Plotting using holoviews and bokeh,
allowing zoom in/out in a notebook.
Prefabbed passive and active components and circuits such as gratings, interposers, MZIs and MZI meshes.
Future tutorials will cover the following:
More intuitive representation of GDS cell hierarchy (via Device).
Interface to photonic simulation (see our simphox and MEEP examples).
Inverse-designed devices may be incorporated via a replace function.
Read and interface with foundry PDKs automatically, even if provided via GDS.
Imports
End of explanation
taper = dp.cubic_taper(1, 1, 12.5, 5)
taper.hvplot()
Explanation: Waveguide crossing
In this tutorial, we will design waveguide crossings while also understanding how geometries can be manipulated.
First let's define a waveguide. Our goal is to rotate that same waveguide at the center to form a 90-degree crossing with four-way symmetry.
End of explanation
taper.port
Explanation: The ports of the taper waveguide are accessed as follows. Ports can be thought of as "reference poses," where a pose includes a position ($x, y$) and orientation (angle $a$), and also contains information about the width. These ports are incredibly important in any design flow, especially for routing, and they also play a critical role in simulating waveguide-based devices since they define the mode-based source (the port can store height $h$ and position $z$ for a 3D application).
End of explanation
taper.shapely
Explanation: One of dphox's advantages is that it provides a convenient shapely interface. We can use shapely's notebook __repr__ to quickly view any pattern by just accessing it:
End of explanation
taper.shapely_union
Explanation: The shapely pattern is red because there are intersections in the pattern, namely shared boundaries. This makes it hard to do things like shapely boolean operations on the pattern due to self-intersections. To remedy this, we can apply a union to get rid of the shared boundaries, resulting in a green preview. Note that now we have a single, valid polygon rather than several sub-polygons that merely touch.
End of explanation
misaligned_rotated_taper = taper.copy.rotate(90)
(misaligned_rotated_taper.hvplot(color='blue') * taper.hvplot()).opts(xlim=(-2, 14), ylim=(-2, 14))
aligned_rotated_taper = taper.copy.rotate(90, taper.center)
(misaligned_rotated_taper.hvplot('blue') * aligned_rotated_taper.hvplot(color='green') * taper.hvplot()).opts(xlim=(-2, 14), ylim=(-8, 8))
Explanation: Now let's plot the 90-degree rotated waveguide about the origin. Note that we haven't rotated the pattern about its center so it's misaligned.
End of explanation
crossing = dp.Pattern(aligned_rotated_taper, taper)
crossing.hvplot()
crossing.port['a0'] = taper.port['a0'].copy
crossing.port['b0'] = taper.port['b0'].copy
crossing.port['a1'] = aligned_rotated_taper.port['a0'].copy
crossing.port['b1'] = aligned_rotated_taper.port['b0'].copy
crossing.hvplot()
Explanation: Clearly, the green taper is the correct one, but now we need to combine the two waveguides and assign the right ports to it.
End of explanation
taper = dp.cubic_taper(0.5, 9.5, 150, 70)
crossing = dp.Cross(taper)
crossing_plot = crossing.hvplot()
crossing_plot
Explanation: As you can see, we have succeeded in designing a crossing with the appropriate ports.
Polarization insensitive grating
Let's try another related challenge: building a polarization insensitive grating coupler. This requires a cross like before with a much bigger taper, with a grating in the intersection box.
End of explanation
box = dp.Box((10, 10))
aligned_box = box.copy.align(crossing)
crossing_plot * box.hvplot('blue', plot_ports=False) * aligned_box.hvplot('green', plot_ports=False)
Explanation: Instead of manually calculating where the grating should go, let's use some functionality in dphox to place the grating in the appropriate location. Let's start by doing this for a box. We use the method align which aligns the centers of two patterns.
End of explanation
grating = box.striped(stripe_w=0.3, include_boundary=False)
grating.hvplot()
aligned_grating = grating.align(crossing)
crossing_plot * aligned_grating.hvplot('green', plot_ports=False)
Explanation: We've aligned the box to the center of the pattern but now we need to turn the box into a grating. Thankfully, there are methods for this already built into the Box class.
End of explanation
pol_insensitive_grating = crossing - aligned_grating
pol_insensitive_grating.port = crossing.port
pol_insensitive_grating.hvplot()
Explanation: Patterns in dphox support boolean operations such as subtraction and addition, which allows us to create our final grating.
End of explanation
pol_insensitive_grating = crossing - aligned_box + aligned_grating
pol_insensitive_grating.port = crossing.port
pol_insensitive_grating.hvplot()
Explanation: But what if we want holes rather than pillars in the center for this grating? Just use an extra boolean operation!
End of explanation
scene = pol_insensitive_grating.trimesh()
# apply some settings to the scene to make the default view more palatable
scene.apply_transform(rotation_matrix(-np.pi / 4, (1, 0, 0)))
scene.camera.fov = (10, 10)
scene.show()
Explanation: We can also look at this in 3D!
End of explanation
ps = dp.ThermalPS(dp.straight(80).path(0.5), ps_w=4, via=dp.Via((2, 2), 0.1))
dc = dp.DC(waveguide_w=0.5, interaction_l=30, radius=10, interport_distance=50, gap_w=0.3)
mzi = dp.MZI(dc, top_internal=[ps.copy], bottom_internal=[ps.copy], top_external=[ps.copy], bottom_external=[ps.copy])
mesh = dp.LocalMesh(mzi, n=6, triangular=False)
mesh.hvplot()
Explanation: Photonic MZI mesh
In dphox, we provide several prefabbed devices. Here, we demonstrate how to construct a mesh of active MZI devices using either MEMS-based or thermo-optic-based phase shifters. These photonic meshes are useful in quantum computing, machine learning, and optical cryptography applications.
Define phase shifters and couplers
End of explanation
grating = dp.FocusingGrating(
n_env=dp.AIR.n,
n_core=dp.SILICON.n,
min_period=40,
num_periods=30,
wavelength=1.55,
fiber_angle=82,
duty_cycle=0.5
)
interposer = dp.Interposer(
waveguide_w=0.5,
n=6,
init_pitch=50,
final_pitch=127,
self_coupling_extension=50
).with_gratings(grating)
Explanation: Define optical interconnects and interposers
We need to have a way to get light onto the chip. One way to do this is to use a fiber array. Since the pitch of the fiber array is not the same as the pitch of the mesh above (the interport_distance is given to be 50 $\mu$m), we need an interposer from the standard fiber pitch of 127 $\mu$m to 50 $\mu$m. The interposer includes trombones that perform path length matching, which may be desirable in some applications of the mesh.
The actual optical interconnect can be an edge coupler or a grating. Here in dphox, we provide a focusing grating prefab as below, which might work in SOI, though this is untested.
End of explanation
mesh.clear(interposer) # in case this cell is run more than once, this avoids duplicating the placement of the interposer.
mesh.place(interposer, mesh.port['b0'], from_port=interposer.port['a0'])
mesh.place(interposer, mesh.port['a5'], from_port=interposer.port['a0'])
mesh.hvplot()
Explanation: Here, we place the interposer at the appropriate ports. The outputs are small but once we plot it, holoviews allows us to zoom using the scroll tool.
End of explanation
scene = grating.trimesh()
# apply some settings to the scene to make the default view more palatable
scene.apply_transform(np.diag((1, 1, 5, 1))) # make it easier to see the grating lines by scaling up the z-axis by 5x
scene.apply_transform(rotation_matrix(-np.pi / 2.5, (1, 0, 0)))
scene.show()
Explanation: Let's take a look at one of our gratings up close using trimesh:
End of explanation
# mesh.to_gds('mesh.gds')
Explanation: Save the overall device to a GDS file (supported in Python 3.8 and above only; this isn't supported in Colab yet and so should be run locally).
End of explanation
from dphox.demo import lateral_nems_ps
nems_ps = lateral_nems_ps()
nems_mzi = dp.MZI(dc, top_internal=[nems_ps.copy], bottom_internal=[nems_ps.copy], top_external=[nems_ps.copy], bottom_external=[nems_ps.copy])
nems_mesh = dp.LocalMesh(nems_mzi, 6, triangular=False)
scene = nems_ps.trimesh(exclude_layer=[dp.CommonLayer.CLEAROUT, dp.CommonLayer.ALUMINA])
scene.apply_transform(rotation_matrix(-np.pi / 8, (1, 0, 0)))
scene.camera.fov = (20, 20)
scene.show()
Explanation: Use another type of phase shifter
We can also change the phase shifter to be a NEMS-based phase shifter using the code below:
End of explanation
scene = nems_ps.trimesh(exclude_layer=[dp.CommonLayer.CLEAROUT, dp.CommonLayer.ALUMINA])
scene.apply_transform(rotation_matrix(-np.pi / 2, (1, 0, 0)) @ rotation_matrix(np.pi / 2, (0, 0, 1), point=(*nems_ps.port['b0'].xy, 0)))
scene.camera.fov = (20, 20)
scene.show()
Explanation: Here's another view!
End of explanation
# nems_mesh.to_gds('nems_mesh.gds')
Explanation: Once we are satisfied with a phase shifter design, we can save to a gds.
End of explanation
nems_mesh.hvplot()
Explanation: We can also plot the mesh with the new phase shifter, but this takes much longer than a GDS export since we leverage cell references in the GDS for computational efficiency.
End of explanation |
13,872 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Language Translation
In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.
Get the Data
Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.
Step3: Explore the Data
Play around with view_sentence_range to view different parts of the data.
Step6: Implement Preprocessing Function
Text to Word Ids
As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of target_text. This will help the neural network predict when the sentence should end.
You can get the <EOS> word id by doing
Step8: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
Step10: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step12: Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
Step15: Build the Neural Network
You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below
Step18: Process Decoder Input
Implement process_decoder_input by removing the last word id from each batch in target_data and concat the GO ID to the begining of each batch.
Step21: Encoding
Implement encoding_layer() to create a Encoder RNN layer
Step24: Decoding - Training
Create a training decoding layer
Step27: Decoding - Inference
Create inference decoder
Step30: Build the Decoding Layer
Implement decoding_layer() to create a Decoder RNN layer.
Embed the target sequences
Construct the decoder LSTM cell (just like you constructed the encoder cell above)
Create an output layer to map the outputs of the decoder to the elements of our vocabulary
Use the your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob) function to get the training logits.
Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob) function to get the inference logits.
Note
Step33: Build the Neural Network
Apply the functions you implemented above to
Step34: Neural Network Training
Hyperparameters
Tune the following parameters
Step36: Build the Graph
Build the graph using the neural network you implemented.
Step40: Batch and pad the source and target sequences
Step43: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forms to see if anyone is having the same problem.
Step45: Save Parameters
Save the batch_size and save_path parameters for inference.
Step47: Checkpoint
Step50: Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.
Convert the sentence to lowercase
Convert words into ids using vocab_to_int
Convert words not in the vocabulary, to the <UNK> word id.
Step52: Translate
This will translate translate_sentence from English to French. | Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import problem_unittests as tests
source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)
Explanation: Language Translation
In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.
Get the Data
Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.
End of explanation
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))
sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))
print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
Convert source and target text to proper word ids
:param source_text: String that contains all the source text.
:param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: A tuple of lists (source_id_text, target_id_text)
def convert_source_sentence(sentence):
return [source_vocab_to_int[w] for w in sentence.split(" ") if w!=""]
def convert_target_sentence(sentence):
return [target_vocab_to_int[w] for w in sentence.split(" ") if w!=""]+[target_vocab_to_int['<EOS>']]
return [convert_source_sentence(sentence) for sentence in source_text.split("\n")],\
[convert_target_sentence(sentence) for sentence in target_text.split("\n")]
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_text_to_ids(text_to_ids)
Explanation: Implement Preprocessing Function
Text to Word Ids
As you did with other RNNs, you must turn the text into numbers so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of target_text. This will help the neural network predict when the sentence should end.
You can get the <EOS> word id by doing:
python
target_vocab_to_int['<EOS>']
You can get other word ids using source_vocab_to_int and target_vocab_to_int.
End of explanation
helper.preprocess_and_save_data
DON'T MODIFY ANYTHING IN THIS CELL
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
import helper
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
from tensorflow.python.layers.core import Dense
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
Explanation: Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
End of explanation
def model_inputs():
Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences.
:return: Tuple (input, targets, learning rate, keep probability, target sequence length,
max target sequence length, source sequence length)
input = tf.placeholder(tf.int32,[None,None],name="input")
targets = tf.placeholder(tf.int32,[None,None],name="targets")
learning_rate = tf.placeholder(tf.float32,name="learning_rate")
keep_probability = tf.placeholder(tf.float32,name="keep_prob")
target_sequence_length = tf.placeholder(tf.int32,[None],name="target_sequence_length")
max_target_len = tf.reduce_max(target_sequence_length)
source_sequence_len = tf.placeholder(tf.int32,[None],name="source_sequence_length")
# TODO: Implement Function
return input, targets, learning_rate, keep_probability, target_sequence_length, max_target_len, source_sequence_len
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_model_inputs(model_inputs)
Explanation: Build the Neural Network
You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:
- model_inputs
- process_decoder_input
- encoding_layer
- decoding_layer_train
- decoding_layer_infer
- decoding_layer
- seq2seq_model
Input
Implement the model_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.
Targets placeholder with rank 2.
Learning rate placeholder with rank 0.
Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.
Target sequence length placeholder named "target_sequence_length" with rank 1
Max target sequence length tensor named "max_target_len" getting its value from applying tf.reduce_max on the target_sequence_length placeholder. Rank 0.
Source sequence length placeholder named "source_sequence_length" with rank 1
Return the placeholders in the following the tuple (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length)
End of explanation
import inspect
inspect.getsourcelines(tests.test_process_encoding_input)
def process_decoder_input(target_data, target_vocab_to_int, batch_size):
Preprocess target data for encoding
:param target_data: Target Placehoder
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param batch_size: Batch Size
:return: Preprocessed target data
return(tf.concat([tf.constant([[target_vocab_to_int["<GO>"]]]*batch_size),\
tf.strided_slice(target_data,[0,0],[batch_size,-1],[1,1])],1))
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_process_encoding_input(process_decoder_input)
Explanation: Process Decoder Input
Implement process_decoder_input by removing the last word id from each batch in target_data and concatenating the GO ID to the beginning of each batch.
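As a rough illustration of the transformation in plain Python (the id values below are made up and this is not part of the TensorFlow graph):
# Suppose <GO> has id 1 and each row of the target batch ends with the <EOS> id 3:
target_batch = [[4, 7, 9, 3], [5, 8, 2, 3]]
go_id = 1
decoder_input = [[go_id] + row[:-1] for row in target_batch]
print(decoder_input)  # [[1, 4, 7, 9], [1, 5, 8, 2]] - last id removed, <GO> prepended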
End of explanation
from imp import reload
reload(tests)
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob,
source_sequence_length, source_vocab_size,
encoding_embedding_size):
Create encoding layer
:param rnn_inputs: Inputs for the RNN
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param keep_prob: Dropout keep probability
:param source_sequence_length: a list of the lengths of each sequence in the batch
:param source_vocab_size: vocabulary size of source data
:param encoding_embedding_size: embedding size of source data
:return: tuple (RNN output, RNN state)
# TODO: Implement Function
embed = tf.contrib.layers.embed_sequence(rnn_inputs, source_vocab_size, encoding_embedding_size)
lstm_cell = tf.contrib.rnn.LSTMCell(rnn_size)
lstm_stack = tf.contrib.rnn.DropoutWrapper(\
tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.LSTMCell(rnn_size) for _ in range(num_layers)]),
output_keep_prob=keep_prob)
return tf.nn.dynamic_rnn(lstm_stack,embed,source_sequence_length,dtype=tf.float32)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_encoding_layer(encoding_layer)
Explanation: Encoding
Implement encoding_layer() to create a Encoder RNN layer:
* Embed the encoder input using tf.contrib.layers.embed_sequence
* Construct a stacked tf.contrib.rnn.LSTMCell wrapped in a tf.contrib.rnn.DropoutWrapper
* Pass cell and embedded input to tf.nn.dynamic_rnn()
End of explanation
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input,
target_sequence_length, max_summary_length,
output_layer, keep_prob):
Create a decoding layer for training
:param encoder_state: Encoder State
:param dec_cell: Decoder RNN Cell
:param dec_embed_input: Decoder embedded input
:param target_sequence_length: The lengths of each sequence in the target batch
:param max_summary_length: The length of the longest sequence in the batch
:param output_layer: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: BasicDecoderOutput containing training logits and sample_id
training_helper = tf.contrib.seq2seq.TrainingHelper(dec_embed_input,
sequence_length=target_sequence_length)
decoder = tf.contrib.seq2seq.BasicDecoder(\
tf.contrib.rnn.DropoutWrapper(dec_cell,output_keep_prob=keep_prob),\
training_helper,encoder_state,output_layer)
final_outputs, final_state = tf.contrib.seq2seq.dynamic_decode(decoder,maximum_iterations=max_summary_length)
return(final_outputs)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_train(decoding_layer_train)
Explanation: Decoding - Training
Create a training decoding layer:
* Create a tf.contrib.seq2seq.TrainingHelper
* Create a tf.contrib.seq2seq.BasicDecoder
* Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode
End of explanation
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id,
end_of_sequence_id, max_target_sequence_length,
vocab_size, output_layer, batch_size, keep_prob):
Create a decoding layer for inference
:param encoder_state: Encoder state
:param dec_cell: Decoder RNN Cell
:param dec_embeddings: Decoder embeddings
:param start_of_sequence_id: GO ID
:param end_of_sequence_id: EOS Id
:param max_target_sequence_length: Maximum length of target sequences
:param vocab_size: Size of decoder/target vocabulary
:param decoding_scope: TenorFlow Variable Scope for decoding
:param output_layer: Function to apply the output layer
:param batch_size: Batch size
:param keep_prob: Dropout keep probability
:return: BasicDecoderOutput containing inference logits and sample_id
start_tokens = tf.tile(tf.constant([start_of_sequence_id], dtype=tf.int32), [batch_size], name='start_tokens')
embed_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(dec_embeddings,start_tokens,end_of_sequence_id)
decoder = tf.contrib.seq2seq.BasicDecoder(\
dec_cell,\
embed_helper,encoder_state,output_layer)
final_outputs, final_state = tf.contrib.seq2seq.dynamic_decode(decoder,maximum_iterations=max_target_sequence_length,impute_finished=True)
return final_outputs
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_infer(decoding_layer_infer)
Explanation: Decoding - Inference
Create inference decoder:
* Create a tf.contrib.seq2seq.GreedyEmbeddingHelper
* Create a tf.contrib.seq2seq.BasicDecoder
* Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode
End of explanation
def decoding_layer(dec_input, encoder_state,
target_sequence_length, max_target_sequence_length,
rnn_size,
num_layers, target_vocab_to_int, target_vocab_size,
batch_size, keep_prob, decoding_embedding_size):
Create decoding layer
:param dec_input: Decoder input
:param encoder_state: Encoder state
:param target_sequence_length: The lengths of each sequence in the target batch
:param max_target_sequence_length: Maximum length of target sequences
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param target_vocab_size: Size of target vocabulary
:param batch_size: The size of the batch
:param keep_prob: Dropout keep probability
:return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size]))
dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)
lstm_cell = tf.contrib.rnn.LSTMCell(rnn_size)
lstm_stack = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.LSTMCell(rnn_size) for _ in range(num_layers)])
output_layer = Dense(target_vocab_size,
kernel_initializer = tf.truncated_normal_initializer(mean = 0.0, stddev=0.1))
with tf.variable_scope("decode") as scope:
train_output = decoding_layer_train(encoder_state, lstm_stack, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob)
scope.reuse_variables()
infer_output = decoding_layer_infer(encoder_state, lstm_stack, dec_embeddings, target_vocab_to_int['<GO>'], target_vocab_to_int['<EOS>'],\
max_target_sequence_length, target_vocab_size, output_layer, batch_size, keep_prob)
return train_output, infer_output
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer(decoding_layer)
Explanation: Build the Decoding Layer
Implement decoding_layer() to create a Decoder RNN layer.
Embed the target sequences
Construct the decoder LSTM cell (just like you constructed the encoder cell above)
Create an output layer to map the outputs of the decoder to the elements of our vocabulary
Use your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob) function to get the training logits.
Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob) function to get the inference logits.
Note: You'll need to use tf.variable_scope to share variables between training and inference.
End of explanation
def seq2seq_model(input_data, target_data, keep_prob, batch_size,
source_sequence_length, target_sequence_length,
max_target_sentence_length,
source_vocab_size, target_vocab_size,
enc_embedding_size, dec_embedding_size,
rnn_size, num_layers, target_vocab_to_int):
Build the Sequence-to-Sequence part of the neural network
:param input_data: Input placeholder
:param target_data: Target placeholder
:param keep_prob: Dropout keep probability placeholder
:param batch_size: Batch Size
:param source_sequence_length: Sequence Lengths of source sequences in the batch
:param target_sequence_length: Sequence Lengths of target sequences in the batch
:param source_vocab_size: Source vocabulary size
:param target_vocab_size: Target vocabulary size
:param enc_embedding_size: Encoder embedding size
:param dec_embedding_size: Decoder embedding size
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
_, encoding_state = encoding_layer(input_data, rnn_size, num_layers, keep_prob,
source_sequence_length, source_vocab_size,
enc_embedding_size)
decoder_input = process_decoder_input(target_data, target_vocab_to_int, batch_size)
return(decoding_layer(decoder_input, encoding_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size))
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_seq2seq_model(seq2seq_model)
Explanation: Build the Neural Network
Apply the functions you implemented above to:
Apply embedding to the input data for the encoder.
Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size).
Process target data using your process_decoder_input(target_data, target_vocab_to_int, batch_size) function.
Apply embedding to the target data for the decoder.
Decode the encoded input using your decoding_layer(dec_input, enc_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size) function.
End of explanation
# Number of Epochs
epochs = 50
# Batch Size
batch_size = 512
# RNN Size
rnn_size = 256
# Number of Layers
num_layers = 2
# Embedding Size
encoding_embedding_size = 50
decoding_embedding_size = 50
# Learning Rate
learning_rate = 0.003
# Dropout Keep Probability
keep_probability = 0.5
display_step = 10
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set num_layers to the number of layers.
Set encoding_embedding_size to the size of the embedding for the encoder.
Set decoding_embedding_size to the size of the embedding for the decoder.
Set learning_rate to the learning rate.
Set keep_probability to the Dropout keep probability
Set display_step to state how many steps between each debug output statement
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
save_path = 'checkpoints/dev'
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
max_target_sentence_length = max([len(sentence) for sentence in source_int_text])
train_graph = tf.Graph()
with train_graph.as_default():
input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length = model_inputs()
#sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length')
input_shape = tf.shape(input_data)
train_logits, inference_logits = seq2seq_model(tf.reverse(input_data, [-1]),
targets,
keep_prob,
batch_size,
source_sequence_length,
target_sequence_length,
max_target_sequence_length,
len(source_vocab_to_int),
len(target_vocab_to_int),
encoding_embedding_size,
decoding_embedding_size,
rnn_size,
num_layers,
target_vocab_to_int)
training_logits = tf.identity(train_logits.rnn_output, name='logits')
inference_logits = tf.identity(inference_logits.sample_id, name='predictions')
masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks')
with tf.name_scope("optimization"):
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
training_logits,
targets,
masks)
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
def pad_sentence_batch(sentence_batch, pad_int):
Pad sentences with <PAD> so that each sentence of a batch has the same length
max_sentence = max([len(sentence) for sentence in sentence_batch])
return [sentence + [pad_int] * (max_sentence - len(sentence)) for sentence in sentence_batch]
def get_batches(sources, targets, batch_size, source_pad_int, target_pad_int):
Batch targets, sources, and the lengths of their sentences together
for batch_i in range(0, len(sources)//batch_size):
start_i = batch_i * batch_size
# Slice the right amount for the batch
sources_batch = sources[start_i:start_i + batch_size]
targets_batch = targets[start_i:start_i + batch_size]
# Pad
pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int))
pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int))
# Need the lengths for the _lengths parameters
pad_targets_lengths = []
for target in pad_targets_batch:
pad_targets_lengths.append(len(target))
pad_source_lengths = []
for source in pad_sources_batch:
pad_source_lengths.append(len(source))
yield pad_sources_batch, pad_targets_batch, pad_source_lengths, pad_targets_lengths
Explanation: Batch and pad the source and target sequences
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
def get_accuracy(target, logits):
Calculate accuracy
max_seq = max(target.shape[1], logits.shape[1])
if max_seq - target.shape[1]:
target = np.pad(
target,
[(0,0),(0,max_seq - target.shape[1])],
'constant')
if max_seq - logits.shape[1]:
logits = np.pad(
logits,
[(0,0),(0,max_seq - logits.shape[1])],
'constant')
return np.mean(np.equal(target, logits))
# Split data to training and validation sets
train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = source_int_text[:batch_size]
valid_target = target_int_text[:batch_size]
(valid_sources_batch, valid_targets_batch, valid_sources_lengths, valid_targets_lengths ) = next(get_batches(valid_source,
valid_target,
batch_size,
source_vocab_to_int['<PAD>'],
target_vocab_to_int['<PAD>']))
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epochs):
for batch_i, (source_batch, target_batch, sources_lengths, targets_lengths) in enumerate(
get_batches(train_source, train_target, batch_size,
source_vocab_to_int['<PAD>'],
target_vocab_to_int['<PAD>'])):
_, loss = sess.run(
[train_op, cost],
{input_data: source_batch,
targets: target_batch,
lr: learning_rate,
target_sequence_length: targets_lengths,
source_sequence_length: sources_lengths,
keep_prob: keep_probability})
if batch_i % display_step == 0 and batch_i > 0:
batch_train_logits = sess.run(
inference_logits,
{input_data: source_batch,
source_sequence_length: sources_lengths,
target_sequence_length: targets_lengths,
keep_prob: 1.0})
batch_valid_logits = sess.run(
inference_logits,
{input_data: valid_sources_batch,
source_sequence_length: valid_sources_lengths,
target_sequence_length: valid_targets_lengths,
keep_prob: 1.0})
train_acc = get_accuracy(target_batch, batch_train_logits)
valid_acc = get_accuracy(valid_targets_batch, batch_valid_logits)
print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.4f}, Validation Accuracy: {:>6.4f}, Loss: {:>6.4f}'
.format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_path)
print('Model Trained and Saved')
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params(save_path)
Explanation: Save Parameters
Save the batch_size and save_path parameters for inference.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()
Explanation: Checkpoint
End of explanation
def sentence_to_seq(sentence, vocab_to_int):
Convert a sentence to a sequence of ids
:param sentence: String
:param vocab_to_int: Dictionary to go from the words to an id
:return: List of word ids
return [vocab_to_int.get(w,vocab_to_int["<UNK>"]) for w in sentence.lower().split(" ")]
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_sentence_to_seq(sentence_to_seq)
Explanation: Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.
Convert the sentence to lowercase
Convert words into ids using vocab_to_int
Convert words not in the vocabulary, to the <UNK> word id.
End of explanation
translate_sentence = 'he saw a old yellow truck .'
DON'T MODIFY ANYTHING IN THIS CELL
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_path + '.meta')
loader.restore(sess, load_path)
input_data = loaded_graph.get_tensor_by_name('input:0')
logits = loaded_graph.get_tensor_by_name('predictions:0')
target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0')
source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0')
keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
translate_logits = sess.run(logits, {input_data: [translate_sentence]*batch_size,
target_sequence_length: [len(translate_sentence)*2]*batch_size,
source_sequence_length: [len(translate_sentence)]*batch_size,
keep_prob: 1.0})[0]
print('Input')
print(' Word Ids: {}'.format([i for i in translate_sentence]))
print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))
print('\nPrediction')
print(' Word Ids: {}'.format([i for i in translate_logits]))
print(' French Words: {}'.format(" ".join([target_int_to_vocab[i] for i in translate_logits])))
Explanation: Translate
This will translate translate_sentence from English to French.
End of explanation |
13,873 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Learners
In this section, we will introduce several pre-defined learners to learning the datasets by updating their weights to minimize the loss function. when using a learner to deal with machine learning problems, there are several standard steps
Step1: Perceptron Learner
Overview
The Perceptron is a linear classifier. It works the same way as a neural network with no hidden layers (just input and output). First, it trains its weights given a dataset and then it can classify a new item by running it through the network.
Its input layer consists of the item features, while the output layer consists of nodes (also called neurons). Each node in the output layer has n synapses (for every item feature), each with its own weight. Then, the nodes find the dot product of the item features and the synapse weights. These values then pass through an activation function (usually a sigmoid). Finally, we pick the largest of the values and we return its index.
Note that in classification problems each node represents a class. The final classification is the class/node with the max output value.
Below you can see a single node/neuron in the outer layer. With f we denote the item features, with w the synapse weights, then inside the node we have the dot product and the activation function, g.
Implementation
Perceptron learner is actually a neural network learner with only one hidden layer which is pre-defined in the algorithm of perceptron_learner
Step2: Where input_size and output_size are calculated from dataset examples. In the perceptron learner, the gradient descent optimizer is used to update the weights of the network. we return a function predict which we will use in the future to classify a new item. The function computes the (algebraic) dot product of the item with the calculated weights for each node in the outer layer. Then it picks the greatest value and classifies the item in the corresponding class.
Example
Let's try the perceptron learner with the iris dataset examples, first let's regulate the dataset classes
Step3: We can see from the printed lines that the final total loss is converged to around 10.50. If we check the error ratio of perceptron learner on the dataset after training, we will see it is much higher than randomly guess
Step4: If we test the trained learner with some test cases
Step5: It seems the learner is correct on all the test examples.
Now let's try perceptron learner on a more complicated dataset
Step6: Now let's train the perceptron learner on the first 1000 examples of the dataset
Step7: It looks like we have a near 90% error ratio on training data after the network is trained on it. Then we can investigate the model's performance on the test dataset which it never has seen before
Step8: It seems a single layer perceptron learner cannot simulate the structure of the MNIST dataset. To improve accuracy, we may not only increase training epochs but also consider changing to a more complicated network structure.
Neural Network Learner
Although there are many different types of neural networks, the dense neural network we implemented can be treated as a stacked perceptron learner. Adding more layers to the perceptron network could add to the non-linearity to the network thus model will be more flexible when fitting complex data-target relations. Whereas it also adds to the risk of overfitting as the side effect of flexibility.
By default we use dense networks with two hidden layers, which has the architecture as the following
Step9: Where hidden_layer_sizes are the sizes of each hidden layer in a list which can be specified by user. Neural network learner uses gradient descent as default optimizer but user can specify any optimizer when calling neural_net_learner. The other special attribute that can be changed in neural_net_learner is batch_size which controls the number of examples used in each round of update. neural_net_learner also returns a predict function which calculates prediction by multiplying weight to inputs and applying activation functions.
Example
Let's also try neural_net_learner on the iris dataset
Step10: Similarly we check the model's accuracy on both training and test dataset
Step11: We can see that the error ratio on the training set is smaller than the perceptron learner. As the error ratio is relatively small, let's try the model on the MNIST dataset to see whether there will be a larger difference. | Python Code:
import os, sys
sys.path = [os.path.abspath("../../")] + sys.path
from deep_learning4e import *
from notebook4e import *
from learning4e import *
Explanation: Learners
In this section, we will introduce several pre-defined learners that learn from datasets by updating their weights to minimize the loss function. When using a learner to deal with machine learning problems, there are several standard steps:
Learner initialization: Before training the network, it usually should be initialized first. There are several choices for initializing the weights: random initialization, initializing the weights to zeros, or using a Gaussian distribution to initialize the weights.
Optimizer specification: This means specifying the update rules for the learnable parameters of the network. Usually, we can choose the Adam optimizer as the default.
Applying back-propagation: In neural networks, we commonly use back-propagation to pass and calculate the gradient information of each layer. Back-propagation needs to be integrated with the chosen optimizer in order to update the weights of the network properly in each epoch.
Iterations: Iterating over the forward and back-propagation process for a given number of epochs. Sometimes the iteration process will have to be stopped by early stopping in case of overfitting. A rough sketch of this loop is shown right after this list.
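As a loose, framework-agnostic sketch of these four steps (the names network, dataset, optimizer and loss_fn are placeholders for illustration, not actual aima-python APIs):
# Illustrative pseudo-Python only; assumes network, dataset, optimizer and loss_fn objects exist.
def train(network, dataset, optimizer, loss_fn, epochs=100):
    network.initialize_weights()                  # step 1: learner initialization
    for epoch in range(epochs):                   # step 4: iterate for a number of epochs
        for inputs, targets in dataset:
            predictions = network.forward(inputs)
            loss = loss_fn(predictions, targets)
            gradients = network.backward(loss)    # step 3: back-propagation
            optimizer.update(network, gradients)  # step 2: the optimizer's update rule
    return network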
We will introduce several learners with different structures. We will import all necessary packages before that:
End of explanation
raw_net = [InputLayer(input_size), DenseLayer(input_size, output_size)]
Explanation: Perceptron Learner
Overview
The Perceptron is a linear classifier. It works the same way as a neural network with no hidden layers (just input and output). First, it trains its weights given a dataset and then it can classify a new item by running it through the network.
Its input layer consists of the item features, while the output layer consists of nodes (also called neurons). Each node in the output layer has n synapses (for every item feature), each with its own weight. Then, the nodes find the dot product of the item features and the synapse weights. These values then pass through an activation function (usually a sigmoid). Finally, we pick the largest of the values and we return its index.
Note that in classification problems each node represents a class. The final classification is the class/node with the max output value.
Below you can see a single node/neuron in the outer layer. With f we denote the item features, with w the synapse weights, then inside the node we have the dot product and the activation function, g.
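A minimal NumPy sketch of this forward pass (illustrative only; the weights, features and sigmoid below are stand-ins, not the aima-python implementation):
import numpy as np
def sigmoid(x):
    return 1 / (1 + np.exp(-x))
f = np.array([5.1, 3.5, 1.4, 0.2])         # item features
w = np.random.rand(3, 4)                   # one weight row per output node/class
outputs = sigmoid(w @ f)                   # dot product per node, then the activation g
predicted_class = int(np.argmax(outputs))  # pick the node with the largest output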
Implementation
The perceptron learner is actually a neural network learner with no hidden layers, just a single dense output layer, which is pre-defined in the algorithm of perceptron_learner:
End of explanation
iris = DataSet(name="iris")
classes = ["setosa", "versicolor", "virginica"]
iris.classes_to_numbers(classes)
pl = perceptron_learner(iris, epochs=500, learning_rate=0.01, verbose=50)
Explanation: Where input_size and output_size are calculated from the dataset examples. In the perceptron learner, the gradient descent optimizer is used to update the weights of the network. We return a function predict which we will use later to classify a new item. The function computes the (algebraic) dot product of the item with the calculated weights for each node in the outer layer. Then it picks the greatest value and classifies the item in the corresponding class.
Example
Let's try the perceptron learner with the iris dataset examples, first let's regulate the dataset classes:
End of explanation
print(err_ratio(pl, iris))
Explanation: We can see from the printed lines that the final total loss has converged to around 10.50. If we check the error ratio of the perceptron learner on the dataset after training, we will see that it does much better than random guessing:
End of explanation
tests = [([5.0, 3.1, 0.9, 0.1], 0),
([5.1, 3.5, 1.0, 0.0], 0),
([4.9, 3.3, 1.1, 0.1], 0),
([6.0, 3.0, 4.0, 1.1], 1),
([6.1, 2.2, 3.5, 1.0], 1),
([5.9, 2.5, 3.3, 1.1], 1),
([7.5, 4.1, 6.2, 2.3], 2),
([7.3, 4.0, 6.1, 2.4], 2),
([7.0, 3.3, 6.1, 2.5], 2)]
print(grade_learner(pl, tests))
Explanation: If we test the trained learner with some test cases:
End of explanation
train_img, train_lbl, test_img, test_lbl = load_MNIST(path="../../aima-data/MNIST/Digits")
import numpy as np
import matplotlib.pyplot as plt
train_examples = [np.append(train_img[i], train_lbl[i]) for i in range(len(train_img))]
test_examples = [np.append(test_img[i], test_lbl[i]) for i in range(len(test_img))]
print("length of training dataset:", len(train_examples))
print("length of test dataset:", len(test_examples))
Explanation: It seems the learner is correct on all the test examples.
Now let's try perceptron learner on a more complicated dataset: the MNIST dataset, to see what the result will be. First, we import the dataset to make the examples a Dataset object:
End of explanation
mnist = DataSet(examples=train_examples[:1000])
pl = perceptron_learner(mnist, epochs=10, verbose=1)
print(err_ratio(pl, mnist))
Explanation: Now let's train the perceptron learner on the first 1000 examples of the dataset:
End of explanation
test_mnist = DataSet(examples=test_examples[:100])
print(err_ratio(pl, test_mnist))
Explanation: It looks like we have a near 90% error ratio on the training data after the network is trained on it. Then we can investigate the model's performance on the test dataset, which it has never seen before:
End of explanation
# initialize the network
raw_net = [InputLayer(input_size)]
# add hidden layers
hidden_input_size = input_size
for h_size in hidden_layer_sizes:
raw_net.append(DenseLayer(hidden_input_size, h_size))
hidden_input_size = h_size
raw_net.append(DenseLayer(hidden_input_size, output_size))
Explanation: It seems a single layer perceptron learner cannot simulate the structure of the MNIST dataset. To improve accuracy, we may not only increase training epochs but also consider changing to a more complicated network structure.
Neural Network Learner
Although there are many different types of neural networks, the dense neural network we implemented can be treated as a stacked perceptron learner. Adding more layers to the perceptron network adds non-linearity to the network, and thus the model becomes more flexible when fitting complex data-target relations. However, it also increases the risk of overfitting as a side effect of this flexibility.
By default we use dense networks with two hidden layers, which have the following architecture:
<img src="images/nn.png" width="500"/>
In our code, we implemented it as:
End of explanation
nn = neural_net_learner(iris, epochs=100, learning_rate=0.15, optimizer=gradient_descent, verbose=10)
Explanation: Where hidden_layer_sizes are the sizes of each hidden layer in a list, which can be specified by the user. The neural network learner uses gradient descent as the default optimizer, but the user can specify any optimizer when calling neural_net_learner. The other special attribute that can be changed in neural_net_learner is batch_size, which controls the number of examples used in each round of updates. neural_net_learner also returns a predict function, which calculates predictions by multiplying the weights with the inputs and applying the activation functions.
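For example, a call with a custom architecture and batch size might look like the sketch below (based on the attributes described above; double-check the keyword names against the deep_learning4e signature before relying on them):
# Assumed keyword names: hidden_layer_sizes, batch_size, optimizer, as described above.
nn_custom = neural_net_learner(iris, hidden_layer_sizes=[8, 8], batch_size=16, epochs=200,
                               learning_rate=0.1, optimizer=gradient_descent, verbose=50)
print(err_ratio(nn_custom, iris))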
Example
Let's also try neural_net_learner on the iris dataset:
End of explanation
print("error ration on training set:",err_ratio(nn, iris))
tests = [([5.0, 3.1, 0.9, 0.1], 0),
([5.1, 3.5, 1.0, 0.0], 0),
([4.9, 3.3, 1.1, 0.1], 0),
([6.0, 3.0, 4.0, 1.1], 1),
([6.1, 2.2, 3.5, 1.0], 1),
([5.9, 2.5, 3.3, 1.1], 1),
([7.5, 4.1, 6.2, 2.3], 2),
([7.3, 4.0, 6.1, 2.4], 2),
([7.0, 3.3, 6.1, 2.5], 2)]
print("accuracy on test set:",grade_learner(nn, tests))
Explanation: Similarly, we check the model's accuracy on both the training and test datasets:
End of explanation
nn = neural_net_learner(mnist, epochs=100, verbose=10)
print(err_ratio(nn, mnist))
Explanation: We can see that the error ratio on the training set is smaller than that of the perceptron learner. As the error ratio is relatively small, let's try the model on the MNIST dataset to see whether there will be a larger difference.
End of explanation |
13,874 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Learning a Reward Function using Preference Comparisons
The preference comparisons algorithm learns a reward function by comparing trajectory segments to each other.
To set up the preference comparisons algorithm, we first need to set up a lot of its internals beforehand
Step1: Then we can start training the reward model. Note that we need to specify the total timesteps that the agent should be trained and how many fragment comparisons should be made.
Step2: After we trained the reward network using the preference comparisons algorithm, we can wrap our environment with that learned reward.
Step3: Now we can train an agent, that only sees those learned reward.
Step4: Then we can evaluate it using the original reward. | Python Code:
from imitation.algorithms import preference_comparisons
from imitation.rewards.reward_nets import BasicRewardNet
from imitation.util.networks import RunningNorm
from imitation.policies.base import FeedForward32Policy, NormalizeFeaturesExtractor
import seals
import gym
from stable_baselines3.common.vec_env import DummyVecEnv
from stable_baselines3 import PPO
venv = DummyVecEnv([lambda: gym.make("seals/CartPole-v0")] * 8)
reward_net = BasicRewardNet(
venv.observation_space, venv.action_space, normalize_input_layer=RunningNorm
)
fragmenter = preference_comparisons.RandomFragmenter(warning_threshold=0, seed=0)
gatherer = preference_comparisons.SyntheticGatherer(seed=0)
reward_trainer = preference_comparisons.CrossEntropyRewardTrainer(
model=reward_net,
epochs=3,
)
agent = PPO(
policy=FeedForward32Policy,
policy_kwargs=dict(
features_extractor_class=NormalizeFeaturesExtractor,
features_extractor_kwargs=dict(normalize_class=RunningNorm),
),
env=venv,
seed=0,
n_steps=2048 // venv.num_envs,
batch_size=64,
ent_coef=0.0,
learning_rate=0.0003,
n_epochs=10,
)
trajectory_generator = preference_comparisons.AgentTrainer(
algorithm=agent,
reward_fn=reward_net,
exploration_frac=0.0,
seed=0,
)
pref_comparisons = preference_comparisons.PreferenceComparisons(
trajectory_generator,
reward_net,
fragmenter=fragmenter,
preference_gatherer=gatherer,
reward_trainer=reward_trainer,
comparisons_per_iteration=100,
fragment_length=100,
transition_oversampling=1,
initial_comparison_frac=0.1,
allow_variable_horizon=False,
seed=0,
initial_epoch_multiplier=2, # Note: set to 200 to achieve sensible results
)
Explanation: Learning a Reward Function using Preference Comparisons
The preference comparisons algorithm learns a reward function by comparing trajectory segments to each other.
To set up the preference comparisons algorithm, we first need to set up a lot of its internals beforehand:
End of explanation
pref_comparisons.train(
total_timesteps=1000, # Note: set to 40000 to achieve sensible results
total_comparisons=120, # Note: set to 4000 to achieve sensible results
)
Explanation: Then we can start training the reward model. Note that we need to specify the total number of timesteps for which the agent should be trained and how many fragment comparisons should be made.
End of explanation
from imitation.rewards.reward_wrapper import RewardVecEnvWrapper
learned_reward_venv = RewardVecEnvWrapper(venv, reward_net.predict)
Explanation: After we trained the reward network using the preference comparisons algorithm, we can wrap our environment with that learned reward.
End of explanation
from stable_baselines3 import PPO
from stable_baselines3.ppo import MlpPolicy
learner = PPO(
policy=MlpPolicy,
env=learned_reward_venv,
seed=0,
batch_size=64,
ent_coef=0.0,
learning_rate=0.0003,
n_epochs=10,
n_steps=64,
)
learner.learn(1000) # Note: set to 100000 to train a proficient expert
Explanation: Now we can train an agent that only sees the learned reward.
End of explanation
from stable_baselines3.common.evaluation import evaluate_policy
reward, _ = evaluate_policy(learner.policy, venv, 10)  # evaluate the agent trained on the learned reward
print(reward)
Explanation: Then we can evaluate it using the original reward.
End of explanation |
13,875 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Wrangle Volume Data
We wrangle the traffic volume data into a workable format, and extract just the site and detector we are interested in.
Import data
Step1: Filter data
Filter to site 2433 (mid-way along segment of Princes freeway monitored by bluetooth detector sites). Detectors 4-6 are in the outbound/westbound lanes.
Step2: Date range
Extract date from CSV data
Step3: Transform data
Transpose table. Label by time rather than interval. Use detector number as headers.
Step4: Export data
Extract just detector 6 (the rightmost lane)
Step5: Plots | Python Code:
import pandas as pd
f = pd.read_csv('../data/VSDATA_20150819.csv')
Explanation: Wrangle Volume Data
We wrangle the traffic volume data into a workable format, and extract just the site and detector we are interested in.
Import data
End of explanation
vols = f[(f["NB_SCATS_SITE"] == 2433) & f["NB_DETECTOR"].between(4,6)]
vols
Explanation: Filter data
Filter to site 2433 (mid-way along segment of Princes freeway monitored by bluetooth detector sites). Detectors 4-6 are in the outbound/westbound lanes.
End of explanation
import datetime
start_date = vols["QT_INTERVAL_COUNT"].iloc[0]
start_datetime = datetime.datetime.strptime(start_date, '%Y-%m-%d 00:00:00')
date_range = pd.date_range(start_datetime, periods=96, freq='15T')
date_range[:10] # show first 10 rows
Explanation: Date range
Extract date from CSV data
End of explanation
dets = vols.T
dets.columns = dets.loc["NB_DETECTOR"].values
dets = dets.loc['V00':'V95']
dets.index=date_range
dets.head()
Explanation: Transform data
Transpose table. Label by time rather than interval. Use detector number as headers.
End of explanation
d6 = dets[6]
d6.head()
Explanation: Export data
Extract just detector 6 (the rightmost lane)
End of explanation
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
bins = np.linspace(0, max(d6), 51)
plt.hist(d6, bins=bins)
plt.show()
plt.figure(figsize=(16,8))
plt.scatter(np.arange(len(d6)), d6.values)
plt.title("Volume Site 2433 Detector 6 (Outbound along Princes Highway). Wed 19 Aug 2015.")
plt.ylabel("Travel Time (seconds)")
plt.xlabel("Time Leave (15 min offset)")
plt.xlim([0,95])
plt.ylim([0,None])
plt.show()
Explanation: Plots
End of explanation |
13,876 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1. Settings
Step1: 4. Looking at the data
Summaries
Step2: Cycles
Step3: Selecting specific cells and investigating them
Step4: Let's see how the smoothing (interpolation) method works
Step5: Using hvplot for plotting summaries
You can for example use hvplot for looking more at your summary data
Step6: Looking more in-depth and utilising advanced features
OCV relaxation points
Picking out 5 points on each OCV relaxation curve (distributed by last, last/2, last/2/2, ..., first).
Step7: Looking closer at some summary-plots
Step8: 5. Checking for more details per cycle
A. pick the CellpyData object for one of the cells
Step9: B. Get some voltage curves for some cycles and plot them
The method get_cap can be used to extract voltage curves.
Step10: Looking at some dqdv data
Get capacity cycles and make dqdv using the ica module
Step11: Put it in a for-loop for plotting many ica plots
Step12: Get all the dqdv data in one go | Python Code:
%load_ext autoreload
%autoreload 2
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import cellpy
from cellpy import log
from cellpy import cellreader
from cellpy import prms
from cellpy import prmreader
from cellpy.utils import batch
# import holoviews as hv
%matplotlib inline
# hv.extension('bokeh')
log.setup_logging(default_level="DEBUG")
## Uncomment this and run for checking your cellpy parameters.
# prmreader.info()
filepath = r"C:\Scripting\MyFiles\development_cellpy\dev_data\arbin\2019_types.res"
filepath = [
r"C:\Scripting\MyFiles\development_cellpy\dev_data\arbin\20181126_cen41_02_cc_01.res",
r"C:\Scripting\MyFiles\development_cellpy\dev_data\arbin\20181126_cen41_02_cc_02.res",
r"C:\Scripting\MyFiles\development_cellpy\dev_data\arbin\20181126_cen41_02_cc_03.res",
r"C:\Scripting\MyFiles\development_cellpy\dev_data\arbin\20181126_cen41_02_cc_04.res",
r"C:\Scripting\MyFiles\development_cellpy\dev_data\arbin\20181126_cen41_02_cc_05.res",
r"C:\Scripting\MyFiles\development_cellpy\dev_data\arbin\20181126_cen41_02_cc_06.res",
]
filepath2 = filepath[0]
m = 0.374433
outfilepath = r"C:\Scripting\MyFiles\development_cellpy\dev_data\arbin\2019_types.h5"
prms.Paths.rawdatadir = r"C:\ExperimentalData\BatteryTestData\Arbin\RAW"
cell = cellreader.CellpyData()
cell.from_raw(filepath)
cell2 = cellreader.CellpyData()
cell2.from_raw(filepath2)
cell.set_mass(m)
cell2.set_mass(m)
cell.make_step_table()
cell2.make_step_table()
cell.make_summary()
cell2.make_summary()
dataset = cell.dataset
dataset2 = cell2.dataset
dataset.summary
dataset.steps
dataset.raw
dataset.raw.describe()
dataset.raw.dtypes
dataset2.raw.dtypes
dataset.raw.Step_Index.unique()
dataset2.raw.Step_Index.unique()
dataset.summary.dtypes
dataset.steps.dtypes
cell.save(outfilepath)
Explanation: 1. Settings
End of explanation
# Plot the charge capacity and the C.E. (and resistance) vs. cycle number (standard plot)
b.plot_summaries()
# Show the journal pages
# b.experiment.journal.pages.head()
# Show the most important part of the journal pages
b.view
# b.experiment.status()
# b.summaries.head()
Explanation: 4. Looking at the data
Summaries
End of explanation
%%opts Curve (color=hv.Palette('Magma'))
voltage_curves = dict()
for label in b.experiment.cell_names:
d = b.experiment.data[label]
curves = d.get_cap(label_cycle_number=True, interpolated=True, number_of_points=100)
curve = hv.Curve(curves, kdims=["capacity", "cycle"], vdims="voltage").groupby("cycle").overlay().opts(show_legend=False)
voltage_curves[label] = curve
NdLayout = hv.NdLayout(voltage_curves, kdims='label').cols(3)
NdLayout
%%opts Curve (color=hv.Palette('Magma'))
ocv_curves = dict()
for label in b.experiment.cell_names:
d = b.experiment.data[label]
ocv_data = d.get_ocv(direction="up", number_of_points=40)
ocv_curve = hv.Curve(ocv_data, kdims=["Step_Time", "Cycle_Index"], vdims="Voltage").groupby("Cycle_Index").overlay().opts(show_legend=False)
ocv_curves[label] = ocv_curve
NdLayout = hv.NdLayout(ocv_curves, kdims='label').cols(3)
NdLayout
Explanation: Cycles
End of explanation
# This will show you all your cell names
cell_labels = b.experiment.cell_names
cell_labels
# This is how to select the data (CellpyData-objects)
data1 = b.experiment.data["20160805_test001_45_cc"]
data2 = b.experiment.data["20160805_test001_47_cc"]
Explanation: Selecting specific cells and investigating them
End of explanation
# get voltage curves
df_cycles1 = data1.get_cap(
method="back-and-forth",
categorical_column=True,
label_cycle_number=True,
interpolated=False,
)
# get interpolated voltage curves
df_cycles2 = data1.get_cap(
method="back-and-forth",
categorical_column=True,
label_cycle_number=True,
interpolated=True,
dx=0.1,
number_of_points=100,
)
%%opts Scatter [width=600] (color="red", alpha=0.9, size=12)
single_curve = hv.Curve(df_cycles1, kdims=["capacity", "cycle"], vdims="voltage", label="not-smoothed").groupby("cycle")
single_scatter = hv.Scatter(df_cycles2, kdims=["capacity", "cycle"], vdims="voltage", label="smoothed").groupby("cycle")
single_scatter * single_curve
Explanation: Let's see how the smoothing (interpolation) method works
End of explanation
import hvplot.pandas
# hvplot does not like infinities
s = b.summaries.replace([np.inf, -np.inf], np.nan)
layout = (
s["coulombic_efficiency"].hvplot()
+ s["discharge_capacity"].hvplot() * s["charge_capacity"].hvplot()
)
layout.cols(1)
s["cumulated_coulombic_efficiency"].hvplot()
Explanation: Using hvplot for plotting summaries
You can, for example, use hvplot to look more closely at your summary data
End of explanation
from cellpy.utils.batch_tools.batch_analyzers import OCVRelaxationAnalyzer
print(" analyzing ocv relaxation data ".center(80, "-"))
analyzer = OCVRelaxationAnalyzer()
analyzer.assign(b.experiment)
analyzer.direction = "down"
analyzer.do()
dfs = analyzer.last
df_file_one, _df_file_two = dfs
# keeping only the columns with voltages
ycols = [col for col in df_file_one.columns if col.find("point") >= 0]
# removing the first ocv rlx (relaxation before starting cycling)
df = df_file_one.iloc[1:, :]
# tidy format
df = df.melt(id_vars="cycle", var_name="point", value_vars=ycols, value_name="voltage")
curve = (
hv.Curve(df, kdims=["cycle", "point"], vdims="voltage")
.groupby("point")
.overlay()
.opts(xlim=(1, 10), width=800)
)
scatter = (
hv.Scatter(df, kdims=["cycle", "point"], vdims="voltage")
.groupby("point")
.overlay()
.opts(xlim=(1, 10), ylim=(0.7, 1))
)
layout = hv.Layout(curve * scatter)
layout.cols(1)
Explanation: Looking more in-depth and utilising advanced features
OCV relaxation points
Picking out 5 points on each OCV relaxation curve (distributed by last, last/2, last/2/2, ..., first).
End of explanation
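To make the point-selection scheme concrete, here is a minimal sketch of how five indices distributed as last, last/2, last/2/2, ..., first could be chosen from a relaxation curve of n samples. This is only an illustration of the distribution, not the OCVRelaxationAnalyzer's actual implementation.
def ocv_point_indices(n_samples, n_points=5):
    # count from the end: last, last/2, last/4, ..., then always include the first sample
    idx = [n_samples - 1]
    for _ in range(n_points - 2):
        idx.append(max(idx[-1] // 2, 1))
    idx.append(0)
    return sorted(set(idx))
print(ocv_point_indices(200))  # e.g. [0, 24, 49, 99, 199]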
b.summary_columns
discharge_capacity = b.summaries.discharge_capacity
charge_capacity = b.summaries.charge_capacity
coulombic_efficiency = b.summaries.coulombic_efficiency
ir_charge = b.summaries.ir_charge
fig, (ax1, ax2) = plt.subplots(2, 1)
ax1.plot(discharge_capacity)
ax1.set_ylabel("capacity ")
ax2.plot(ir_charge)
ax2.set_xlabel("cycle")
ax2.set_ylabel("resistance")
Explanation: Looking closer at some summary-plots
End of explanation
# Lets check what cells we have
cell_labels = b.experiment.cell_names
cell_labels
# OK, then I choose one of them
data = b.experiment.data["20160805_test001_45_cc"]
Explanation: 5. Checking for more details per cycle
A. pick the CellpyData object for one of the cells
End of explanation
cap = data.get_cap(categorical_column=True)
cap.head()
fig, ax = plt.subplots()
ax.plot(cap.capacity, cap.voltage)
ax.set_xlabel("capacity")
ax.set_ylabel("voltage")
cv = data.get_cap(method="forth")
fig, ax = plt.subplots()
ax.set_xlabel("capacity")
ax.set_ylabel("voltage")
ax.plot(cv.capacity, cv.voltage)
c4 = data.get_cap(cycle=4, method="forth-and-forth")
c10 = data.get_cap(cycle=10, method="forth-and-forth")
fig, ax = plt.subplots()
ax.set_xlabel("capacity")
ax.set_ylabel("voltage")
ax.plot(c4.capacity, c4.voltage, "ro", label="cycle 4")
ax.plot(c10.capacity, c10.voltage, "bs", label="cycle 10")
ax.legend();
Explanation: B. Get some voltage curves for some cycles and plot them
The method get_cap can be used to extract voltage curves.
End of explanation
from cellpy.utils import ica
v4, dqdv4 = ica.dqdv_cycle(
data.get_cap(4, categorical_column=True, method="forth-and-forth")
)
v10, dqdv10 = ica.dqdv_cycle(
data.get_cap(10, categorical_column=True, method="forth-and-forth")
)
plt.plot(v4, dqdv4, label="cycle 4")
plt.plot(v10, dqdv10, label="cycle 10")
plt.legend();
Explanation: Looking at some dqdv data
Get capacity cycles and make dqdv using the ica module
End of explanation
fig, ax = plt.subplots()
for cycle in data.get_cycle_numbers():
d = data.get_cap(cycle, categorical_column=True, method="forth-and-forth")
if not d.empty:
v, dqdv = ica.dqdv_cycle(d)
ax.plot(v, dqdv)
else:
print(f"cycle {cycle} seems to be missing or corrupted")
Explanation: Put it in a for-loop for plotting many ica plots
End of explanation
hv.extension("bokeh")
tidy_ica = ica.dqdv_frames(data)
cycles = list(range(1, 3)) + [10, 11, 12, 15]
tidy_ica = tidy_ica.loc[tidy_ica.cycle.isin(cycles), :]
%%opts Curve [xlim=(0,1)] (color=hv.Palette('Magma'), alpha=0.9) NdOverlay [legend_position='right', width=800, height=500]
curve4 = (hv.Curve(tidy_ica, kdims=['voltage'], vdims=['dq', 'cycle'], label="Incremental capacity plot")
.groupby("cycle")
.overlay()
)
curve4
Explanation: Get all the dqdv data in one go
End of explanation |
13,877 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Auto-provision a new user
Create a home directory
Create NFS export
Create SMB share
Create quota
Set up daily snapshots
Prerequisites
Install the qumulo api via pip install qumulo_api, or download it from your Qumulo cluster on the API & Tools page
set up all the variables in the cell below
Step1: Create directory
Step2: Create 20GB Quota
Step3: Create NFS export
Step4: Create SMB share
Step5: Set up snapshot policy
Step6: Clean up everything | Python Code:
cluster = 'XXXXX' # Qumulo cluster hostname or IP where you're setting up users
api_user = 'XXXXX' # Qumulo api user name
api_password = 'XXXXX' # Qumulo api password
base_dir = 'XXXXX' # the parent path where the users will be created.
user_name = 'XXXXX' # the new "user" to set up.
import os
import sys
import traceback
from qumulo.rest_client import RestClient
from qumulo.rest.nfs import NFSRestriction
full_path = '/'+ base_dir + '/' + user_name
rc = RestClient(cluster, 8000)
rc.login(api_user, api_password)
def create_dir(rc, name, dir_path='/'):
try:
rc.fs.create_directory(name = name, dir_path = dir_path)
except:
exc_type, exc_value, exc_traceback = sys.exc_info()
print("Exception: %s" % exc_value)
# Create base user directory, if it doesn't already exist
create_dir(rc, name=base_dir, dir_path='/')
Explanation: Auto-provision a new user
Create a home directory
Create NFS export
Create SMB share
Create quota
Set up daily snapshots
Prerequisites
Install the qumulo api via pip install qumulo_api, or download it from your Qumulo cluster on the API & Tools page
set up all the variables in the cell below
End of explanation
dir_res = rc.fs.create_directory(name=user_name, dir_path='/'+ base_dir)
print("Directory '%s' created with id: %s" % (full_path, dir_res['file_number']))
dir_id = dir_res['file_number']
Explanation: Create directory
End of explanation
quota_res = rc.quota.create_quota(id_ = dir_id, limit_in_bytes = 20000000000)
Explanation: Create 20GB Quota
End of explanation
nfs_res = rc.nfs.nfs_add_share(export_path = '/' + user_name,
fs_path = full_path,
description = "%s home directory" % user_name,
restrictions = [NFSRestriction({
'read_only': False,
'host_restrictions': [],
'user_mapping': 'NFS_MAP_NONE',
'map_to_user_id': '0'})]
)
print("NFS export created: %s with id %s" % (full_path, nfs_res['id']))
Explanation: Create NFS export
End of explanation
smb_res = rc.smb.smb_add_share(share_name = user_name,
fs_path = full_path,
description = "%s home directory" % user_name
)
print("SMB share created: %s with id %s" % (full_path, smb_res['id']))
Explanation: Create SMB share
End of explanation
snap_res = rc.snapshot.create_policy(name = "User %s" % user_name,
schedule_info = {"creation_schedule":
{"frequency":"SCHEDULE_DAILY_OR_WEEKLY",
"hour":2,"minute":15,
"on_days":["MON","TUE","WED","THU","FRI","SAT","SUN"],
"timezone":"America/Los_Angeles"},
"expiration_time_to_live":"7days"
},
directory_id = str(dir_id),
enabled = True)
print("Snapshot policy created with id %s" % snap_res['id'])
Explanation: Set up snapshot policy
End of explanation
rc.quota.delete_quota(id_ = quota_res['id'])
rc.snapshot.delete_policy(policy_id = snap_res['id'])
rc.smb.smb_delete_share(id_ = smb_res['id'])
rc.nfs.nfs_delete_share(id_ = nfs_res['id'])
if full_path != '/': # small sanity check since tree delete is rather powerful.
rc.fs.delete_tree(path = full_path)
print("Everything is cleaned up!")
Explanation: Clean up everything
End of explanation |
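The steps above can also be wrapped into a single helper so a new user is provisioned in one call. This is only a sketch built from the API calls already used in this notebook; the error handling is deliberately simple and should be adapted to your environment.
def provision_user(rc, base_dir, user_name, quota_bytes=20000000000):
    full_path = '/' + base_dir + '/' + user_name
    created = {}
    try:
        # home directory
        created['dir'] = rc.fs.create_directory(name=user_name, dir_path='/' + base_dir)
        dir_id = created['dir']['file_number']
        # quota
        created['quota'] = rc.quota.create_quota(id_=dir_id, limit_in_bytes=quota_bytes)
        # NFS export
        created['nfs'] = rc.nfs.nfs_add_share(
            export_path='/' + user_name,
            fs_path=full_path,
            description="%s home directory" % user_name,
            restrictions=[NFSRestriction({'read_only': False,
                                          'host_restrictions': [],
                                          'user_mapping': 'NFS_MAP_NONE',
                                          'map_to_user_id': '0'})])
        # SMB share
        created['smb'] = rc.smb.smb_add_share(share_name=user_name,
                                              fs_path=full_path,
                                              description="%s home directory" % user_name)
    except Exception as exc:
        print("Provisioning failed for %s: %s" % (user_name, exc))
    return created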
13,878 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Table of Contents
<p><div class="lev1 toc-item"><a href="#Quality-control" data-toc-modified-id="Quality-control-1"><span class="toc-item-num">1 </span>Quality control</a></div><div class="lev2 toc-item"><a href="#Plot-showing-normal-nhr-57-expression-patterns-in-hypoxia-mutants" data-toc-modified-id="Plot-showing-normal-nhr-57-expression-patterns-in-hypoxia-mutants-1.1"><span class="toc-item-num">1.1 </span>Plot showing normal <em>nhr-57</em> expression patterns in hypoxia mutants</a></div><div class="lev1 toc-item"><a href="#Quality-Control-on-the-hypoxia-response-and-the-hif-1-direct-target-predictions" data-toc-modified-id="Quality-Control-on-the-hypoxia-response-and-the-hif-1-direct-target-predictions-2"><span class="toc-item-num">2 </span>Quality Control on the hypoxia response and the hif-1 direct target predictions</a></div>
In this notebook, we present some basic sanity checks that our RNA-seq worked and that the data is picking up on the right signals. It's a fairly short notebook.
Step1: Quality control
egl-9, rhy-1 and nhr-57 are known to be HIF-1 responsive. Let's see if our RNA-seq experiment can recapitulate these known interactions. For ease of viewing, we will plot these results as bar-charts, as if they were qPCR results. To do this, we must select what genes we will use for our quality check. I would like to take a look at nhr-57, since this gene is known to be incredibly up-regulated during hypoxia. If N2 worms became hypoxic during treatment for a period long enough to induce transcriptional changes, then nhr-57 should appear to be significantly down-regulated in the hif-1 and egl-9 hif-1 genotypes.
Step2: Plot showing normal nhr-57 expression patterns in hypoxia mutants
Step3: It looks like we are able to recapitulate most of the known interactions between these reporters and HIF-1 levels. There are no contradicting results, although the egl-9 levels don't all quite reach statistical significance. For completeness, below I show ALL the egl-9 isoforms.
Step4: Quality Control on the hypoxia response and the hif-1 direct target predictions
That's one way to check the quality of our RNA-seq. Another way is to look for what genes are D.E. in our hypoxia dataset. We will test the most conservative guess for the hypoxia response, and the predicted hypoxia targets using a hypergeometric test.
Step5: Hypoxia response (conservative guess) | Python Code:
# important stuff:
import os
import pandas as pd
import numpy as np
# morgan
import morgan as morgan
import gvars
import genpy
# stats
from scipy import stats as sts
# Graphics
import matplotlib as mpl
import matplotlib.ticker as plticker
import matplotlib.pyplot as plt
import seaborn as sns
import matplotlib.patheffects as path_effects
from matplotlib import rc
rc('text', usetex=True)
rc('text.latex', preamble=r'\usepackage{cmbright}')
rc('font', **{'family': 'sans-serif', 'sans-serif': ['Helvetica']})
# Magic function to make matplotlib inline;
%matplotlib inline
# This enables SVG graphics inline.
# There is a bug, so uncomment if it works.
%config InlineBackend.figure_formats = {'png', 'retina'}
# JB's favorite Seaborn settings for notebooks
rc = {'lines.linewidth': 2,
'axes.labelsize': 18,
'axes.titlesize': 18,
'axes.facecolor': 'DFDFE5'}
sns.set(style='dark', context='notebook', font='sans-serif')
mpl.rcParams['xtick.labelsize'] = 16
mpl.rcParams['ytick.labelsize'] = 16
mpl.rcParams['legend.fontsize'] = 14
# import the code <--> genotype mapping and other useful variables
genvar = gvars.genvars()
tf_df = pd.read_csv('../input/tf_list.csv')
hypoxia_gold = pd.read_csv('../input/hypoxia_gold_standard.csv', sep=',')
hypoxia_response = pd.read_csv('../output/temp_files/hypoxia_response.csv')
# Specify the genotypes to refer to:
single_mutants = ['b', 'c', 'd', 'e', 'g']
double_mutants = {'a' : 'bd', 'f':'bc'}
tidy = pd.read_csv('../output/temp_files/DE_genes.csv')
tidy.sort_values('target_id', inplace=True)
tidy.dropna(subset=['ens_gene'], inplace=True)
# drop the fog-2 dataset
tidy = tidy[tidy.code != 'g']
tidy['fancy genotype'] = tidy.code.map(genvar.fancy_mapping)
Explanation: Table of Contents
<p><div class="lev1 toc-item"><a href="#Quality-control" data-toc-modified-id="Quality-control-1"><span class="toc-item-num">1 </span>Quality control</a></div><div class="lev2 toc-item"><a href="#Plot-showing-normal-nhr-57-expression-patterns-in-hypoxia-mutants" data-toc-modified-id="Plot-showing-normal-nhr-57-expression-patterns-in-hypoxia-mutants-1.1"><span class="toc-item-num">1.1 </span>Plot showing normal <em>nhr-57</em> expression patterns in hypoxia mutants</a></div><div class="lev1 toc-item"><a href="#Quality-Control-on-the-hypoxia-response-and-the-hif-1-direct-target-predictions" data-toc-modified-id="Quality-Control-on-the-hypoxia-response-and-the-hif-1-direct-target-predictions-2"><span class="toc-item-num">2 </span>Quality Control on the hypoxia response and the hif-1 direct target predictions</a></div>
In this notebook, we present some basic sanity checks that our RNA-seq worked and that the data is picking up on the right signals. It's a fairly short notebook.
End of explanation
x = ['WBGene00012324', 'F22E12.4a.1',
'WBGene00003647', 'WBGene00002248']
find_x = ((tidy.ens_gene.isin(x)) | (tidy.target_id.isin(x)))
plot_df = tidy[find_x].copy()
x_sort = {'WBGene00012324': 1, 'WBGene00001178': 2,
'WBGene00003647': 3, 'WBGene00002248': 4}
plot_df['order'] = plot_df.ens_gene.map(x_sort)
plot_df.sort_values('order', inplace=True)
plot_df.reset_index(inplace=True)
Explanation: Quality control
egl-9, rhy-1 and nhr-57 are known to be HIF-1 responsive. Let's see if our RNA-seq experiment can recapitulate these known interactions. For ease of viewing, we will plot these results as bar-charts, as if they were qPCR results. To do this, we must select what genes we will use for our quality check. I would like to take a look at nhr-57, since this gene is known to be incredibly up-regulated during hypoxia. If N2 worms became hypoxic during treatment for a period long enough to induce transcriptional changes, then nhr-57 should appear to be significantly down-regulated in the hif-1 and egl-9 hif-1 genotypes.
End of explanation
genpy.qPCR_plot(plot_df, genvar.plot_order, genvar.plot_color,
clustering='fancy genotype', plotting_group='ens_gene', rotation=45)
plt.xlabel(r'Genes selected for measurement', fontsize=20)
save = '../output/supp_figures/supplementary_figure_1.svg'
plt.savefig(save, bbox_inches='tight')
Explanation: Plot showing normal nhr-57 expression patterns in hypoxia mutants
End of explanation
x = ['WBGene00001178']
find_x = tidy.ens_gene.isin(x)
plot_df = tidy[find_x].copy()
x_sort = {}
for i, target in enumerate(plot_df.target_id.unique()):
x_sort[target] = i + 1
plot_df['order'] = plot_df.target_id.map(x_sort)
plot_df.sort_values('order', inplace=True)
plot_df.reset_index(inplace=True)
genpy.qPCR_plot(plot_df, genvar.plot_order, genvar.plot_color,
clustering='fancy genotype', plotting_group='target_id', rotation=45)
plt.xlabel(r'\emph{egl-9} isoforms', fontsize=20)
Explanation: It looks like we are able to recapitulate most of the known interactions between these reporters and HIF-1 levels. There are no contradicting results, although the egl-9 levels don't all quite reach statistical significance. For completeness, below I show ALL the egl-9 isoforms.
End of explanation
q = 0.1
def test_significance(df, gold=hypoxia_gold):
    # use the `gold` argument throughout so the function works with any gold standard
    ind = df.ens_gene.isin(gold.WBIDS)
    found = df[ind].ens_gene.unique()
    sig = len(df.ens_gene.unique())  # number of genes that we picked
    ntotal = len(tidy.ens_gene.unique())  # total genes measured
    pval = sts.hypergeom.sf(len(found), ntotal,
                            len(gold), sig)
if pval < 10**-3:
print('This result is statistically significant' +\
' with a p-value of {0:.2g} using a\n hypergeometric test. '.format(pval) +\
'You found {0} gold standard genes!'.format(len(found)))
else:
print(pval)
Explanation: Quality Control on the hypoxia response and the hif-1 direct target predictions
That's one way to check the quality of our RNA-seq. Another way is to look for what genes are D.E. in our hypoxia dataset. We will test the most conservative guess for the hypoxia response, and the predicted hypoxia targets using a hypergeometric test.
End of explanation
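As a quick reminder of the parameter order used in test_significance, scipy's survival function is called as hypergeom.sf(k, M, n, N): M is the number of genes measured, n the number of gold-standard genes, N the size of the gene list being tested, and k the observed overlap. A toy example with made-up numbers:
# illustrative numbers only: 20,000 measured genes, 100 gold-standard genes,
# and a 500-gene list that recovers 30 of them
toy_p = sts.hypergeom.sf(30, 20000, 100, 500)
print('probability of an overlap this large by chance: {0:.2g}'.format(toy_p))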
test_significance(hypoxia_response)
Explanation: Hypoxia response (conservative guess):
End of explanation |
13,879 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The TensorFlow Authors.
Step1: オートエンコーダの基礎
<table class="tfo-notebook-buttons" align="left">
<td> <a target="_blank" href="https
Step2: データセットを読み込む
まず、Fashion MNIST データセットを使用して基本的なオートエンコーダーをトレーニングします。このデータセットの各画像は 28x28 ピクセルです。
Step3: 最初の例:オートエンコーダの基本
次の2つの高密度レイヤーでオートエンコーダーを定義します。encoder は、画像を 64 次元の潜在ベクトルに圧縮します。decoder は、潜在空間から元の画像を再構築します。
モデルを定義するには、Keras Model Subclassing API を使用します。
Step4: 入力とターゲットの両方として x_train を使用してモデルをトレーニングします。encoder は、データセットを 784 次元から潜在空間に圧縮することを学習し、decoder は元の画像を再構築することを学習します。
Step5: モデルがトレーニングされたので、テストセットから画像をエンコードおよびデコードしてモデルをテストします。
Step6: 2番目の例:画像のノイズ除去
オートエンコーダは、画像からノイズを除去するようにトレーニングすることもできます。 次のセクションでは、各画像にランダムノイズを適用して、ノイズの多いバージョンの FashionMNIST データセットを作成します。次に、ノイズの多い画像を入力として使用し、元の画像をターゲットとして使用して、オートエンコーダーをトレーニングします。
データセットを再インポートして、以前に行った変更を省略しましょう。
Step7: 画像にランダムノイズを追加します
Step8: ノイズの多い画像をプロットします。
Step9: 畳み込みオートエンコーダーを定義します。
この例では、encoder の Conv2D レイヤーと、decoder の Conv2DTranspose レイヤーを使用して畳み込みオートエンコーダーをトレーニングします。
Step10: エンコーダーの概要を見てみましょう。画像が 28x28 から 7x7 にダウンサンプリングされていることに注目してください。
Step11: デコーダーは画像を 7x7 から 28x28 にアップサンプリングします。
Step12: オートエンコーダにより生成されたノイズの多い画像とノイズ除去された画像の両方をプロットします。
Step13: 3番目の例:異常検出
概要
この例では、オートエンコーダーをトレーニングして、ECG5000 データセットの異常を検出します。このデータセットには、5,000 の心電図が含まれ、それぞれに 140 のデータポイントがあります。データセットの簡略化されたバージョンを使用します。各例には、0(異常なリズムに対応)または1(正常なリズムに対応)のいずれかのラベルが付けられています。ここでは異常なリズムを特定することに興味があります。
注意:これはラベル付きのデータセットであるため、教師あり学習の問題と見なせます。この例の目的は、ラベルが使用できない、より大きなデータセットに適用できる異常検出の概念を説明することです(たとえば、数千の正常なリズムがあり、異常なリズムが少数しかない場合)。
オートエンコーダーを使用すると、どのようにして異常を検出できるのでしょうか?オートエンコーダは、再構築エラーを最小限に抑えるようにトレーニングされていることを思い出してください。オートエンコーダーは通常のリズムでのみトレーニングし、それを使用してすべてのデータを再構築します。私たちの仮説は、異常なリズムはより高い再構成エラーを持つだろうということです。次に、再構成エラーが固定しきい値を超えた場合、リズムを異常として分類します。
ECG データを読み込む
使用するデータセットは、timeseriesclassification.com のデータセットに基づいています。
Step14: データを [0,1] に正規化します。
Step15: このデータセットで 1 としてラベル付けされている通常のリズムのみを使用して、オートエンコーダーをトレーニングします。正常なリズムを異常なリズムから分離します。
Step16: 正常な ECG をプロットします。
Step17: 異常な ECG をプロットします。
Step18: モデルを構築する
Step19: オートエンコーダは通常の ECG のみを使用してトレーニングされますが、完全なテストセットを使用して評価されることに注意してください。
Step20: 再構成エラーが正常なトレーニング例からの1標準偏差より大きい場合、ECG を異常として分類します。まず、トレーニングセットからの正常な ECG、オートエンコーダーによりエンコードおよびデコードされた後の再構成、および再構成エラーをプロットします。
Step21: 異常なテストサンプルで同様にプロットを作成します。
Step22: 異常を検出します
再構成損失が指定してしきい値より大きいかどうかを計算することにより、異常を検出します。このチュートリアルでは、トレーニングセットから正常なサンプルの平均平均誤差を計算し、再構成誤差がトレーニングセットからの1標準偏差よりも大きい場合、以降のサンプルを異常として分類します。
トレーニングセットからの通常の ECG に再構成エラーをプロットします
Step23: 平均より1標準偏差上のしきい値を選択します。
Step24: 注意
Step25: 再構成エラーがしきい値よりも大きい場合は、ECG を異常として分類します。 | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import tensorflow as tf
from sklearn.metrics import accuracy_score, precision_score, recall_score
from sklearn.model_selection import train_test_split
from tensorflow.keras import layers, losses
from tensorflow.keras.datasets import fashion_mnist
from tensorflow.keras.models import Model
Explanation: Intro to Autoencoders
<table class="tfo-notebook-buttons" align="left">
  <td> <a target="_blank" href="https://www.tensorflow.org/tutorials/generative/autoencoder"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a> </td>
  <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/tutorials/generative/autoencoder.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a> </td>
  <td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/tutorials/generative/autoencoder.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png"> View source on GitHub</a></td>
  <td> <a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/tutorials/generative/autoencoder.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a> </td>
</table>
This tutorial introduces autoencoders with three examples: the basics, image denoising, and anomaly detection.
An autoencoder is a special type of neural network that is trained to copy its input to its output. For example, given an image of a handwritten digit, an autoencoder first encodes the image into a lower-dimensional latent representation, then decodes the latent representation back to an image. An autoencoder learns to compress the data while minimizing the reconstruction error.
To learn more about autoencoders, please consider reading chapter 14 of Deep Learning by Ian Goodfellow, Yoshua Bengio, and Aaron Courville.
Import TensorFlow and other libraries
End of explanation
(x_train, _), (x_test, _) = fashion_mnist.load_data()
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
print (x_train.shape)
print (x_test.shape)
Explanation: Load the dataset
To start, you will train a basic autoencoder using the Fashion MNIST dataset. Each image in this dataset is 28x28 pixels.
End of explanation
latent_dim = 64
class Autoencoder(Model):
def __init__(self, latent_dim):
super(Autoencoder, self).__init__()
self.latent_dim = latent_dim
self.encoder = tf.keras.Sequential([
layers.Flatten(),
layers.Dense(latent_dim, activation='relu'),
])
self.decoder = tf.keras.Sequential([
layers.Dense(784, activation='sigmoid'),
layers.Reshape((28, 28))
])
def call(self, x):
encoded = self.encoder(x)
decoded = self.decoder(encoded)
return decoded
autoencoder = Autoencoder(latent_dim)
autoencoder.compile(optimizer='adam', loss=losses.MeanSquaredError())
Explanation: First example: Basic autoencoder
Define an autoencoder with two Dense layers: an encoder, which compresses the images into a 64-dimensional latent vector, and a decoder, which reconstructs the original image from the latent space.
To define your model, use the Keras Model Subclassing API.
End of explanation
autoencoder.fit(x_train, x_train,
epochs=10,
shuffle=True,
validation_data=(x_test, x_test))
Explanation: Train the model using x_train as both the input and the target. The encoder will learn to compress the dataset from 784 dimensions to the latent space, and the decoder will learn to reconstruct the original images.
End of explanation
encoded_imgs = autoencoder.encoder(x_test).numpy()
decoded_imgs = autoencoder.decoder(encoded_imgs).numpy()
n = 10
plt.figure(figsize=(20, 4))
for i in range(n):
# display original
ax = plt.subplot(2, n, i + 1)
plt.imshow(x_test[i])
plt.title("original")
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
# display reconstruction
ax = plt.subplot(2, n, i + 1 + n)
plt.imshow(decoded_imgs[i])
plt.title("reconstructed")
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
Explanation: Now that the model is trained, let's test it by encoding and decoding images from the test set.
End of explanation
(x_train, _), (x_test, _) = fashion_mnist.load_data()
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
x_train = x_train[..., tf.newaxis]
x_test = x_test[..., tf.newaxis]
print(x_train.shape)
Explanation: Second example: Image denoising
An autoencoder can also be trained to remove noise from images. In the following section, you will create a noisy version of the Fashion MNIST dataset by applying random noise to each image. You will then train an autoencoder using the noisy image as input, and the original image as the target.
Let's reimport the dataset to omit the modifications made earlier.
End of explanation
noise_factor = 0.2
x_train_noisy = x_train + noise_factor * tf.random.normal(shape=x_train.shape)
x_test_noisy = x_test + noise_factor * tf.random.normal(shape=x_test.shape)
x_train_noisy = tf.clip_by_value(x_train_noisy, clip_value_min=0., clip_value_max=1.)
x_test_noisy = tf.clip_by_value(x_test_noisy, clip_value_min=0., clip_value_max=1.)
Explanation: Adding random noise to the images
End of explanation
n = 10
plt.figure(figsize=(20, 2))
for i in range(n):
ax = plt.subplot(1, n, i + 1)
plt.title("original + noise")
plt.imshow(tf.squeeze(x_test_noisy[i]))
plt.gray()
plt.show()
Explanation: Plot the noisy images.
End of explanation
class Denoise(Model):
def __init__(self):
super(Denoise, self).__init__()
self.encoder = tf.keras.Sequential([
layers.Input(shape=(28, 28, 1)),
layers.Conv2D(16, (3, 3), activation='relu', padding='same', strides=2),
layers.Conv2D(8, (3, 3), activation='relu', padding='same', strides=2)])
self.decoder = tf.keras.Sequential([
layers.Conv2DTranspose(8, kernel_size=3, strides=2, activation='relu', padding='same'),
layers.Conv2DTranspose(16, kernel_size=3, strides=2, activation='relu', padding='same'),
layers.Conv2D(1, kernel_size=(3, 3), activation='sigmoid', padding='same')])
def call(self, x):
encoded = self.encoder(x)
decoded = self.decoder(encoded)
return decoded
autoencoder = Denoise()
autoencoder.compile(optimizer='adam', loss=losses.MeanSquaredError())
autoencoder.fit(x_train_noisy, x_train,
epochs=10,
shuffle=True,
validation_data=(x_test_noisy, x_test))
Explanation: Define a convolutional autoencoder.
In this example, you will train a convolutional autoencoder using Conv2D layers in the encoder, and Conv2DTranspose layers in the decoder.
End of explanation
autoencoder.encoder.summary()
Explanation: Let's take a look at a summary of the encoder. Notice how the images are downsampled from 28x28 to 7x7.
End of explanation
autoencoder.decoder.summary()
Explanation: The decoder upsamples the images back from 7x7 to 28x28.
End of explanation
encoded_imgs = autoencoder.encoder(x_test_noisy).numpy()
decoded_imgs = autoencoder.decoder(encoded_imgs).numpy()
n = 10
plt.figure(figsize=(20, 4))
for i in range(n):
# display original + noise
ax = plt.subplot(2, n, i + 1)
plt.title("original + noise")
plt.imshow(tf.squeeze(x_test_noisy[i]))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
# display reconstruction
bx = plt.subplot(2, n, i + n + 1)
plt.title("reconstructed")
plt.imshow(tf.squeeze(decoded_imgs[i]))
plt.gray()
bx.get_xaxis().set_visible(False)
bx.get_yaxis().set_visible(False)
plt.show()
Explanation: Plot both the noisy images and the denoised images produced by the autoencoder.
End of explanation
# Download the dataset
dataframe = pd.read_csv('http://storage.googleapis.com/download.tensorflow.org/data/ecg.csv', header=None)
raw_data = dataframe.values
dataframe.head()
# The last element contains the labels
labels = raw_data[:, -1]
# The other data points are the electrocardiogram data
data = raw_data[:, 0:-1]
train_data, test_data, train_labels, test_labels = train_test_split(
data, labels, test_size=0.2, random_state=21
)
Explanation: Third example: Anomaly detection
Overview
In this example, you will train an autoencoder to detect anomalies on the ECG5000 dataset. This dataset contains 5,000 electrocardiograms, each with 140 data points. You will use a simplified version of the dataset, where each example has been labeled either 0 (corresponding to an abnormal rhythm) or 1 (corresponding to a normal rhythm). You are interested in identifying the abnormal rhythms.
Note: This is a labeled dataset, so you could phrase this as a supervised learning problem. The goal of this example is to illustrate anomaly detection concepts you can apply to larger datasets, where you do not have labels available (for example, if you had many thousands of normal rhythms, and only a small number of abnormal rhythms).
How will you detect anomalies using an autoencoder? Recall that an autoencoder is trained to minimize reconstruction error. You will train an autoencoder on the normal rhythms only, then use it to reconstruct all the data. Our hypothesis is that the abnormal rhythms will have higher reconstruction error. You will then classify a rhythm as an anomaly if the reconstruction error surpasses a fixed threshold.
Load ECG data
The dataset you will use is based on one from timeseriesclassification.com.
End of explanation
min_val = tf.reduce_min(train_data)
max_val = tf.reduce_max(train_data)
train_data = (train_data - min_val) / (max_val - min_val)
test_data = (test_data - min_val) / (max_val - min_val)
train_data = tf.cast(train_data, tf.float32)
test_data = tf.cast(test_data, tf.float32)
Explanation: Normalize the data to [0,1].
End of explanation
train_labels = train_labels.astype(bool)
test_labels = test_labels.astype(bool)
normal_train_data = train_data[train_labels]
normal_test_data = test_data[test_labels]
anomalous_train_data = train_data[~train_labels]
anomalous_test_data = test_data[~test_labels]
Explanation: You will train the autoencoder using only the normal rhythms, which are labeled in this dataset as 1. Separate the normal rhythms from the abnormal rhythms.
End of explanation
plt.grid()
plt.plot(np.arange(140), normal_train_data[0])
plt.title("A Normal ECG")
plt.show()
Explanation: Plot a normal ECG.
End of explanation
plt.grid()
plt.plot(np.arange(140), anomalous_train_data[0])
plt.title("An Anomalous ECG")
plt.show()
Explanation: Plot an anomalous ECG.
End of explanation
class AnomalyDetector(Model):
def __init__(self):
super(AnomalyDetector, self).__init__()
self.encoder = tf.keras.Sequential([
layers.Dense(32, activation="relu"),
layers.Dense(16, activation="relu"),
layers.Dense(8, activation="relu")])
self.decoder = tf.keras.Sequential([
layers.Dense(16, activation="relu"),
layers.Dense(32, activation="relu"),
layers.Dense(140, activation="sigmoid")])
def call(self, x):
encoded = self.encoder(x)
decoded = self.decoder(encoded)
return decoded
autoencoder = AnomalyDetector()
autoencoder.compile(optimizer='adam', loss='mae')
Explanation: Build the model
End of explanation
history = autoencoder.fit(normal_train_data, normal_train_data,
epochs=20,
batch_size=512,
validation_data=(test_data, test_data),
shuffle=True)
plt.plot(history.history["loss"], label="Training Loss")
plt.plot(history.history["val_loss"], label="Validation Loss")
plt.legend()
Explanation: Notice that the autoencoder is trained using only the normal ECGs, but is evaluated using the full test set.
End of explanation
encoded_data = autoencoder.encoder(normal_test_data).numpy()
decoded_data = autoencoder.decoder(encoded_data).numpy()
plt.plot(normal_test_data[0], 'b')
plt.plot(decoded_data[0], 'r')
plt.fill_between(np.arange(140), decoded_data[0], normal_test_data[0], color='lightcoral')
plt.legend(labels=["Input", "Reconstruction", "Error"])
plt.show()
Explanation: You will soon classify an ECG as anomalous if the reconstruction error is greater than one standard deviation from the normal training examples. First, let's plot a normal ECG from the training set, the reconstruction after it's encoded and decoded by the autoencoder, and the reconstruction error.
End of explanation
encoded_data = autoencoder.encoder(anomalous_test_data).numpy()
decoded_data = autoencoder.decoder(encoded_data).numpy()
plt.plot(anomalous_test_data[0], 'b')
plt.plot(decoded_data[0], 'r')
plt.fill_between(np.arange(140), decoded_data[0], anomalous_test_data[0], color='lightcoral')
plt.legend(labels=["Input", "Reconstruction", "Error"])
plt.show()
Explanation: Create a similar plot, this time for an anomalous test example.
End of explanation
reconstructions = autoencoder.predict(normal_train_data)
train_loss = tf.keras.losses.mae(reconstructions, normal_train_data)
plt.hist(train_loss[None,:], bins=50)
plt.xlabel("Train loss")
plt.ylabel("No of examples")
plt.show()
Explanation: Detect anomalies
Detect anomalies by calculating whether the reconstruction loss is greater than a fixed threshold. In this tutorial, you will calculate the mean absolute error for normal examples from the training set, then classify future examples as anomalous if the reconstruction error is higher than one standard deviation from the training set.
Plot the reconstruction error on normal ECGs from the training set
End of explanation
threshold = np.mean(train_loss) + np.std(train_loss)
print("Threshold: ", threshold)
Explanation: Choose a threshold value that is one standard deviation above the mean.
End of explanation
reconstructions = autoencoder.predict(anomalous_test_data)
test_loss = tf.keras.losses.mae(reconstructions, anomalous_test_data)
plt.hist(test_loss[None, :], bins=50)
plt.xlabel("Test loss")
plt.ylabel("No of examples")
plt.show()
Explanation: Note: There are other strategies you could use to select a threshold value above which test examples should be classified as anomalous; the correct approach will depend on your dataset. You can learn more with the links at the end of this tutorial (a small sketch of one such alternative follows below).
If you examine the reconstruction error for the anomalous examples in the test set, you'll notice most have greater reconstruction error than the threshold. By varying the threshold, you can adjust the precision and recall of your classifier.
End of explanation
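For example, one common alternative (a small sketch, not part of the original tutorial) is to set the threshold at a high percentile of the training-set reconstruction loss rather than at mean + one standard deviation:
# threshold at the 99th percentile of the reconstruction loss on normal training data
percentile_threshold = np.percentile(train_loss.numpy(), 99)
print("99th-percentile threshold: ", percentile_threshold)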
def predict(model, data, threshold):
reconstructions = model(data)
loss = tf.keras.losses.mae(reconstructions, data)
return tf.math.less(loss, threshold)
def print_stats(predictions, labels):
print("Accuracy = {}".format(accuracy_score(labels, predictions)))
print("Precision = {}".format(precision_score(labels, predictions)))
print("Recall = {}".format(recall_score(labels, predictions)))
preds = predict(autoencoder, test_data, threshold)
print_stats(preds, test_labels)
Explanation: Classify an ECG as an anomaly if the reconstruction error is greater than the threshold.
End of explanation |
13,880 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Interpolation Exercise 2
Step1: Sparse 2d interpolation
In this example the values of a scalar field $f(x,y)$ are known at a very limited set of points in a square domain
Step2: The following plot should show the points on the boundary and the single point in the interior
Step3: Use meshgrid and griddata to interpolate the function $f(x,y)$ on the entire square domain
Step4: Plot the values of the interpolated scalar field using a contour plot. Customize your plot to make it effective and beautiful. | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
sns.set_style('white')
from scipy.interpolate import griddata
from scipy.interpolate import interp1d
from scipy.interpolate import interp2d
Explanation: Interpolation Exercise 2
End of explanation
xb=np.array([-5,-4,-3,-2,-1,0,1,2,3,4,5])
yb=np.array([-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5])
yt=np.array([5]*11)
yc=np.array(0)
x=np.hstack((xb,xb,yb[1:10],yt[1:10],yc))
y=np.hstack((yb,yt,xb[1:10],xb[1:10],yc))
f1=np.array([0]*40)
f2=[1]
f=np.hstack((f1,f2))
Explanation: Sparse 2d interpolation
In this example the values of a scalar field $f(x,y)$ are known at a very limited set of points in a square domain:
The square domain covers the region $x\in[-5,5]$ and $y\in[-5,5]$.
The values of $f(x,y)$ are zero on the boundary of the square at integer spaced points.
The value of $f$ is known at a single interior point: $f(0,0)=1.0$.
The function $f$ is not known at any other points.
Create arrays x, y, f:
x should be a 1d array of the x coordinates on the boundary and the 1 interior point.
y should be a 1d array of the y coordinates on the boundary and the 1 interior point.
f should be a 1d array of the values of f at the corresponding x and y coordinates.
You might find that np.hstack is helpful.
End of explanation
plt.scatter(x,y);
assert x.shape==(41,)
assert y.shape==(41,)
assert f.shape==(41,)
assert np.count_nonzero(f)==1
Explanation: The following plot should show the points on the boundary and the single point in the interior:
End of explanation
# F=np.meshgrid(f,y)
xnew=np.linspace(-5,5,100)
ynew=xnew
Xnew,Ynew=np.meshgrid(xnew,ynew)
Fnew=griddata((x,y),f,(Xnew,Ynew),method='cubic') # worked with Jessi Pilgram
assert xnew.shape==(100,)
assert ynew.shape==(100,)
assert Xnew.shape==(100,100)
assert Ynew.shape==(100,100)
assert Fnew.shape==(100,100)
Explanation: Use meshgrid and griddata to interpolate the function $f(x,y)$ on the entire square domain:
xnew and ynew should be 1d arrays with 100 points between $[-5,5]$.
Xnew and Ynew should be 2d versions of xnew and ynew created by meshgrid.
Fnew should be a 2d array with the interpolated values of $f(x,y)$ at the points (Xnew,Ynew).
Use cubic spline interpolation.
End of explanation
plt.figure(figsize=(12,8))
plt.contourf(Xnew,Ynew,Fnew)
plt.xlabel('x')
plt.ylabel('y')
plt.title('Sparse 2d Interpolation')
plt.colorbar(shrink=.9)
plt.figure(figsize=(12,8))
plt.pcolor(Xnew,Ynew,Fnew)
plt.xlabel('x')
plt.ylabel('y')
plt.title('2d Sparse Interpolation')
plt.colorbar(shrink=.9)
assert True # leave this to grade the plot
Explanation: Plot the values of the interpolated scalar field using a contour plot. Customize your plot to make it effective and beautiful.
End of explanation |
13,881 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Fourier Conduction
This example shows how OpenPNM can be used to simulate thermal conduction on a generic grid of nodes. The result obtained from OpenPNM is compared to the analytical result.
As usual, start by importing OpenPNM, and the SciPy library.
Step1: Generating the Network object
Next, a 2D Network is generated with dimensions of 10x50 elements. The lattice spacing is given by Lc. Boundaries are added all around the edges of the Network object using the add_boundary_pores method.
Step2: Creating a Phase object
All simulations require a phase object which possesses the thermophysical properties of the system. In this case, we'll create a generic phase object, call it copper, though it has no properties; we'll add these by hand later.
Step3: Assigning Thermal Conductance to Copper
In a proper OpenPNM model we would create a Geometry object to manage all the geometrical properties, and a Physics object to calculate the thermal conductance based on the geometric information and the thermophysical properties of copper. In the present case, however, we'll just calculate the conductance manually and assign it to Cu.
Step4: Generating the algorithm objects and running the simulation
The last step in the OpenPNM simulation involves the generation of an Algorithm object and running the simulation.
Step5: This is the last step usually required in an OpenPNM simulation. The algorithm was run, and now the simulation data obtained can be analyzed. For illustrative purposes, the results obtained using OpenPNM shall be compared to an analytical solution of the problem in the following.
First let's reshape the 'pore.temperature' array into the shape of the network while also extracting only the internal pores to avoid showing the boundaries.
Step6: Also, let's take a look at the average temperature
Step7: The analytical solution is computed as well, and the result is the same shape as the network (including the boundary pores).
Step8: Also, let's take a look at the average temperature
Step9: Both the analytical solution and OpenPNM simulation can be subtracted from each other to yield the difference in both values. | Python Code:
%matplotlib inline
import numpy as np
import scipy as sp
import openpnm as op
%config InlineBackend.figure_formats = ['svg']
np.random.seed(10)
ws = op.Workspace()
ws.settings["loglevel"] = 40
np.set_printoptions(precision=5)
Explanation: Fourier Conduction
This example shows how OpenPNM can be used to simulate thermal conduction on a generic grid of nodes. The result obtained from OpenPNM is compared to the analytical result.
As usual, start by importing OpenPNM, and the SciPy library.
End of explanation
divs = [10, 50]
Lc = 0.1 # cm
pn = op.network.Cubic(shape=divs, spacing=Lc)
pn.add_boundary_pores(['left', 'right', 'front', 'back'])
Explanation: Generating the Network object
Next, a 2D Network is generated with dimensions of 10x50 elements. The lattice spacing is given by Lc. Boundaries are added all around the edges of the Network object using the add_boundary_pores method.
End of explanation
# Create Phase object and associate with a Physics object
Cu = op.phases.GenericPhase(network=pn)
Explanation: Creating a Phase object
All simulations require a phase object which possesses the thermophysical properties of the system. In this case, we'll create a generic phase object, call it copper, though it has no properties; we'll add these by hand later.
End of explanation
# Add a unit conductance to all connections
Cu['throat.thermal_conductance'] = 1
# Overwrite boundary conductances since those connections are half as long
Ps = pn.pores('*boundary')
Ts = pn.find_neighbor_throats(pores=Ps)
Cu['throat.thermal_conductance'][Ts] = 2
Explanation: Assigning Thermal Conductance to Copper
In a proper OpenPNM model we would create a Geometry object to manage all the geometrical properties, and a Physics object to calculate the thermal conductance based on the geometric information and the thermophysical properties of copper. In the present case, however, we'll just calculate the conductance manually and assign it to Cu.
End of explanation
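The factor of 2 assigned to the boundary conduits above follows directly from the series-conductance expression for a conduit of thermal conductivity k, cross-section A and length L (a simple sketch of the reasoning, using nominal conduit geometry):
$$g = \frac{k A}{L}, \qquad g_{\mathrm{boundary}} = \frac{k A}{L_c / 2} = 2\,\frac{k A}{L_c}$$
Halving the conduit length doubles its conductance, which is why the boundary throats get a value of 2 when the internal ones are set to 1.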
# Setup Algorithm object
alg = op.algorithms.FourierConduction(network=pn, phase=Cu)
inlets = pn.pores('right_boundary')
outlets = pn.pores(['front_boundary', 'back_boundary', 'left_boundary'])  # the three boundaries held at 50
T_in = 30*np.sin(np.pi*pn['pore.coords'][inlets, 1]/5)+50
alg.set_value_BC(values=T_in, pores=inlets)
alg.set_value_BC(values=50, pores=outlets)
alg.run()
Explanation: Generating the algorithm objects and running the simulation
The last step in the OpenPNM simulation involves the generation of an Algorithm object and running the simulation.
End of explanation
import matplotlib.pyplot as plt
sim = alg['pore.temperature'][pn.pores('internal')]
temp_map = np.reshape(a=sim, newshape=divs)
plt.subplots(1, 1, figsize=(10, 5))
plt.imshow(temp_map, cmap=plt.cm.plasma);
plt.colorbar();
Explanation: This is the last step usually required in an OpenPNM simulation. The algorithm was run, and now the simulation data obtained can be analyzed. For illustrative purposes, the results obtained using OpenPNM shall be compared to an analytical solution of the problem in the following.
First let's reshape the 'pore.temperature' array into the shape of the network while also extracting only the internal pores to avoid showing the boundaries.
End of explanation
print(f"T_average (numerical): {alg['pore.temperature'][pn.pores('internal')].mean():.5f}")
Explanation: Also, let's take a look at the average temperature:
End of explanation
# Calculate analytical solution over the same domain spacing
X = pn['pore.coords'][:, 0]
Y = pn['pore.coords'][:, 1]
soln = 30*np.sinh(np.pi*X/5)/np.sinh(np.pi/5)*np.sin(np.pi*Y/5) + 50
soln = soln[pn.pores('internal')]
soln = np.reshape(soln, (divs[0], divs[1]))
plt.subplots(1, 1, figsize=(10, 5))
plt.imshow(soln, cmap=plt.cm.plasma);
plt.colorbar();
Explanation: The analytical solution is computed as well, and the result is the same shape as the network (including the boundary pores).
End of explanation
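For reference, the closed-form expression evaluated in the cell above (the steady-state solution of the Laplace equation for this set of boundary conditions) is:
$$T(x, y) = 50 + 30\,\frac{\sinh\!\left(\pi x / 5\right)}{\sinh\!\left(\pi / 5\right)}\,\sin\!\left(\pi y / 5\right)$$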
print(f"T_average (analytical): {soln.mean():.5f}")
Explanation: Also, let's take a look at the average temperature:
End of explanation
diff = soln - temp_map
plt.subplots(1, 1, figsize=(10, 5))
plt.imshow(diff, cmap=plt.cm.plasma);
plt.colorbar();
print(f"Minimum error: {diff.min():.5f}, maximum error: {diff.max():.5f}")
Explanation: Both the analytical solution and OpenPNM simulation can be subtracted from each other to yield the difference in both values.
End of explanation |
13,882 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to Matplotlib
matplotlib is probably the single most used Python package for 2D-graphics. It provides both a very quick way to visualize data from Python and publication-quality figures in many formats. We are going to explore matplotlib in interactive mode covering most common cases.
pyplot
... provides a convenient interface to the matplotlib object-oriented plotting library. It is modeled closely after matlab. Therefore, the majority of plotting commands in pyplot have matlab analogs with similar arguments. Important commands are explained with interactive examples.
How to use it in a interactive notebook
Step1: The simplest type of plot
Let's draw a sine versus a cosine function on the same plot.
Step2: This is a quick and easy way to plot stuff but not necessarily the right way
A quick anatomy of the matplolib "plot"
The figure is the top-level container in this hierarchy. It is the overall window/page that everything is drawn on. You can have multiple independent figures and figures can contain multiple axes.
Most plotting ocurs on an axes. The axes is effectively the area that we plot data on and any ticks/labels/etc associated with it. Usually we'll set up an axes with a call to subplot (which places Axes on a regular grid), so in most cases, axes and subplot are synonymous.
Each axes has an x-axis and a y-axis. These contain the ticks, tick locations, labels, etc. We mostly control ticks, tick labels, and data limits through other mechanisms, so we won't touch the individual axis part of things all that much. It's worth mentioning to explain where the term axes comes from.
Creating our first figure ...
Step3: Great, we have an empty figure
NOTE
Step4: Axes
All plotting is done with respect to an axes. An axes is made up of axis objects and many other things. An axes object must belong to a figure (and only one figure). Most commands you will ever issue will be with respect to this axes object.
Typically, you'll set up a figure, and then add an axes to it.
You can use fig.add_axes(), but in most cases, you'll find that adding a subplot will fit your needs perfectly. (Again a "subplot" is just an axes on a grid system.)
Step5: Notice the call to set. Matplotlib's objects typically have lots of "explicit setters" -- in other words, functions that start with set_<something> and control a particular option.
To demonstrate this (and as an example of IPython's tab-completion), try typing ax.set_ in a code cell, then hit the <Tab> key. You'll see a long list of Axes methods that start with set.
For example, we could have written the figure from before as
Step6: Clearly this can get repitive quickly. Therefore, Matplotlib's set method can be very handy. It takes each kwarg you pass it and tries to call the corresponding "setter". For example, foo.set(bar='blah') would call foo.set_bar('blah').
Note that the set method doesn't just apply to axes; it applies to more-or-less all matplotlib objects.
However, there are cases where you'll want to use things like ax.set_xlabel('Some Label', size=25) to control other options for a particular function.
Basic Plotting
Most plotting happens on an axes. Therefore, if you're plotting something on an axes, then you'll use one of its methods.
We'll talk about different plotting methods in more depth later. For now, let's focus on two methods
Step7: Axes methods vs. pyplot
Interestingly, just about all methods of an axes object exist as a function in the pyplot module (and vice-versa).
For example, when calling plt.xlim(1, 10), pyplot calls ax.set_xlim(1, 10) on whichever axes is "current". Here is an equivalent version of the above example using just pyplot.
Step8: "Explicit is better than implicit"
While very simple plots, with short scripts would benefit from the conciseness of the pyplot implicit approach, when doing more complicated plots, or working within larger scripts, you will want to explicitly pass around the axes and/or figure object to operate upon.
The advantage of keeping which axes we're working with very clear in our code will become more obvious when we start to have multiple axes in one figure.
Multiple Axes
We've mentioned before that a figure can have more than one axes on it. If you want your axes to be on a regular grid system, then it's easiest to use plt.subplots(...) to create a figure and add the axes to it automatically.
Step9: plt.subplots(...) created a new figure and added 4 subplots to it. The axes object that was returned is a 2D numpy object array. Each item in the array is one of the subplots. They're laid out as you see them on the figure.
Therefore, when we want to work with one of these axes, we can index the axes array and use that item's methods.
Step10: One really nice thing about plt.subplots() is that when it's called with no arguments, it creates a new figure with a single subplot.
Any time you see something like
fig = plt.figure()
ax = fig.add_subplot(111)
You can replace it with | Python Code:
%matplotlib notebook
import matplotlib.pyplot as plt
Explanation: Introduction to Matplotlib
matplotlib is probably the single most used Python package for 2D-graphics. It provides both a very quick way to visualize data from Python and publication-quality figures in many formats. We are going to explore matplotlib in interactive mode covering most common cases.
pyplot
... provides a convenient interface to the matplotlib object-oriented plotting library. It is modeled closely after matlab. Therefore, the majority of plotting commands in pyplot have matlab analogs with similar arguments. Important commands are explained with interactive examples.
How to use it in an interactive notebook
End of explanation
import numpy as np
# numpy array with 256 values ranging between -pi and pi.
x = np.linspace(-np.pi, np.pi, 256, endpoint=True)
cos = np.cos(x)
sin = np.sin(x)
plt.plot(x, sin)
plt.plot(x, cos)
_ = plt.show() # to clean the output
Explanation: The simplest type of plot
Let's draw a sine versus a cosine function on the same plot.
End of explanation
fig = plt.figure()
Explanation: This is a quick and easy way to plot stuff but not necessarily the right way
A quick anatomy of the matplotlib "plot"
The figure is the top-level container in this hierarchy. It is the overall window/page that everything is drawn on. You can have multiple independent figures and figures can contain multiple axes.
Most plotting occurs on an axes. The axes is effectively the area that we plot data on and any ticks/labels/etc associated with it. Usually we'll set up an axes with a call to subplot (which places Axes on a regular grid), so in most cases, axes and subplot are synonymous.
Each axes has an x-axis and a y-axis. These contain the ticks, tick locations, labels, etc. We mostly control ticks, tick labels, and data limits through other mechanisms, so we won't touch the individual axis part of things all that much. It's worth mentioning to explain where the term axes comes from.
Creating our first figure ...
End of explanation
# the new figure will be twice as tall as it is wide
w, h = plt.figaspect(2.0)
fig = plt.figure(figsize=(w, h))
Explanation: Great, we have an empty figure
NOTE: if the figure does not display, please use the .show() method on fig: fig.show(). In the case you have a single plot, use plt.show()
Using the figsize argument to control the figure size
figsize expects a tuple of (width, height) in inches.
... but we can control the aspect ratio using the .figaspect() method
End of explanation
fig = plt.figure()
# 1row, 1column, 1position on the subplot grid
ax = fig.add_subplot(1, 1, 1)
ax.set(xlim=[0.5, 4.5], # set x axis limits
ylim=[-2, 8], # set y axis limits
title='An Example Axes',
ylabel='Y-Axis',
xlabel='X-Axis'); # this is used to keep things clean
Explanation: Axes
All plotting is done with respect to an axes. An axes is made up of axis objects and many other things. An axes object must belong to a figure (and only one figure). Most commands you will ever issue will be with respect to this axes object.
Typically, you'll set up a figure, and then add an axes to it.
You can use fig.add_axes(), but in most cases, you'll find that adding a subplot will fit your needs perfectly. (Again a "subplot" is just an axes on a grid system.)
End of explanation
fig = plt.figure()
# 1row, 1column, 1position on the subplot grid
ax = fig.add_subplot(1, 1, 1)
ax.set_xlim([0.5, 4.5]) # x axis limits
ax.set_ylim([-2, 8]) # y axis limits
ax.set_title('An Example Axes')
ax.set_ylabel('Y-Axis')
ax.set_xlabel('X-Axis')
_ = plt.show() # to keep things clean
Explanation: Notice the call to set. Matplotlib's objects typically have lots of "explicit setters" -- in other words, functions that start with set_<something> and control a particular option.
To demonstrate this (and as an example of IPython's tab-completion), try typing ax.set_ in a code cell, then hit the <Tab> key. You'll see a long list of Axes methods that start with set.
For example, we could have written the figure from before as:
End of explanation
fig = plt.figure()
# we can use 111 without commas, it is the same as 1, 1, 1
ax = fig.add_subplot(111)
ax.plot([1, 2, 3, 4], [10, 20, 25, 30], color='lightblue', linewidth=3)
ax.scatter([0.3, 3.8, 1.2, 2.5], [11, 25, 9, 26], color='darkgreen', marker='^')
ax.set_xlim(0.5, 4.5)
_ = plt.show()
Explanation: Clearly this can get repetitive quickly. Therefore, Matplotlib's set method can be very handy. It takes each kwarg you pass it and tries to call the corresponding "setter". For example, foo.set(bar='blah') would call foo.set_bar('blah').
Note that the set method doesn't just apply to axes; it applies to more-or-less all matplotlib objects (a short sketch after this cell shows this with a Line2D artist).
However, there are cases where you'll want to use things like ax.set_xlabel('Some Label', size=25) to control other options for a particular function.
Basic Plotting
Most plotting happens on an axes. Therefore, if you're plotting something on an axes, then you'll use one of its methods.
We'll talk about different plotting methods in more depth later. For now, let's focus on two methods: plot and scatter.
plot draws points with lines connecting them.
scatter draws unconnected points, optionally scaled or colored by additional variables.
End of explanation
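As a quick aside to the note above that set is not limited to axes, here is a minimal sketch applying it to the Line2D artist returned by plot:
fig = plt.figure()
ax = fig.add_subplot(111)
line, = ax.plot([1, 2, 3], [4, 5, 6])
# Line2D has its own explicit setters, so `set` works on it as well
line.set(linewidth=3, linestyle='--', color='darkred')
ax.set(title='set() on a Line2D artist')
plt.show()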
fig = plt.figure() # so it will create a new figure
plt.plot([1, 2, 3, 4], [10, 20, 25, 30], color='lightblue', linewidth=3)
plt.scatter([0.3, 3.8, 1.2, 2.5], [11, 25, 9, 26], color='darkgreen', marker='^')
plt.xlim(0.5, 4.5)
plt.show()
Explanation: Axes methods vs. pyplot
Interestingly, just about all methods of an axes object exist as a function in the pyplot module (and vice-versa).
For example, when calling plt.xlim(1, 10), pyplot calls ax.set_xlim(1, 10) on whichever axes is "current". Here is an equivalent version of the above example using just pyplot.
End of explanation
fig, axes = plt.subplots(nrows=2, ncols=2)
Explanation: "Explicit is better than implicit"
While very simple plots with short scripts would benefit from the conciseness of the pyplot implicit approach, when doing more complicated plots, or working within larger scripts, you will want to explicitly pass around the axes and/or figure object to operate upon.
The advantage of keeping which axes we're working with very clear in our code will become more obvious when we start to have multiple axes in one figure.
Multiple Axes
We've mentioned before that a figure can have more than one axes on it. If you want your axes to be on a regular grid system, then it's easiest to use plt.subplots(...) to create a figure and add the axes to it automatically.
End of explanation
fig, axes = plt.subplots(nrows=2, ncols=2)
axes[0,0].set(title='Upper Left')
axes[0,1].set(title='Upper Right')
axes[1,0].set(title='Lower Left')
axes[1,1].set(title='Lower Right')
# To iterate over all items in a multidimensional numpy array, use the `flat` attribute
for ax in axes.flat:
# Remove all xticks and yticks...
ax.set(xticks=[0, 1, 2, 3], yticks=[0, 1, 2, 3])
Explanation: plt.subplots(...) created a new figure and added 4 subplots to it. The axes object that was returned is a 2D numpy object array. Each item in the array is one of the subplots. They're laid out as you see them on the figure.
Therefore, when we want to work with one of these axes, we can index the axes array and use that item's methods.
End of explanation
# %load exercises/1.1-subplots_and_basic_plotting.py
import numpy as np
import matplotlib.pyplot as plt
# Try to reproduce the figure shown in images/exercise_1-1.png
# Our data...
x = np.linspace(0, 10, 100)
y1, y2, y3 = np.cos(x), np.cos(x + 1), np.cos(x + 2)
names = ['Signal 1', 'Signal 2', 'Signal 3']
# Can you figure out what to do next to plot x vs y1, y2, and y3 on one figure?
Explanation: One really nice thing about plt.subplots() is that when it's called with no arguments, it creates a new figure with a single subplot.
Any time you see something like
fig = plt.figure()
ax = fig.add_subplot(111)
You can replace it with:
fig, ax = plt.subplots()
We'll be using that approach for the rest of the examples. It's much cleaner.
However, keep in mind that we're still creating a figure and adding axes to it. If we start making plot layouts that can't be described by subplots, we'll have to go back to creating the figure first and then adding axes to it one-by-one.
Quick Exercise: Exercise 1.1
Let's use some of what we've been talking about. Can you reproduce this figure?
Here's the data and some code to get you started.
End of explanation |
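If you get stuck on the exercise, here is one possible completion. It is only a sketch; the target figure in images/exercise_1-1.png may differ in details such as colors or tick formatting.
fig, axes = plt.subplots(nrows=3)
for ax, y, name in zip(axes, [y1, y2, y3], names):
    # plot each signal in its own subplot and label it with its name
    ax.plot(x, y, color='black')
    ax.set(title=name, xticks=[], yticks=[])
plt.show()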
13,883 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Example Assessment
After installing PyGauss you should be able to open this IPython Notebook from;
https
Step1: The test folder has a number of example Gaussian outputs to play around with.
Step2: Note
Step3: Geometric Analysis
Molecules can be viewed statically or interactively.
Step4: Energetics and Frequency Analysis
Step5: Potential Energy Scan analysis of geometric conformers...
Step6: Partial Charge Analysis
using Natural Bond Orbital (NBO) analysis
Step7: Density of States Analysis
Step8: Bonding Analysis
Using Second Order Perturbation Theory.
Step9: Multiple Computations Analysis
Multiple computations, for instance of different starting conformations, can be grouped into an Analysis class and anlaysed collectively.
Step10: Molecular Comparison
Step11: Data Comparison
Step12: The methods mentioned for indivdiual molecules can be applied to all or a subset of these computations.
Step13: There is also an option (requiring pdflatex and ghostscript+imagemagik) to output the tables as a latex formatted image.
Step14: Multi-Variate Analysis
RadViz is a way of visualizing multi-variate data.
Step15: The KMeans algorithm clusters data by trying to separate samples into n groups of equal variance.
Step16: Documentation (MS Word)
After analysing the computations, it would be reasonable to want to document some of our findings. This can be achieved by outputting individual figure or table images via the folder object.
Step19: But you may also want to produce a more full record of your analysis, and this is where python-docx steps in. Building on this package the pygauss MSDocument class can produce a full document of your analysis. | Python Code:
from IPython.display import display, Image
%matplotlib inline
import pygauss as pg
print 'pygauss version: {}'.format(pg.__version__)
Explanation: Example Assessment
After installing PyGauss you should be able to open this IPython Notebook from;
https://github.com/chrisjsewell/PyGauss/blob/master/Example_Assessment.ipynb, and run the following...
End of explanation
folder = pg.get_test_folder()
len(folder.list_files())
Explanation: The test folder has a number of example Gaussian outputs to play around with.
End of explanation
mol = pg.molecule.Molecule(folder_obj=folder,
init_fname='CJS1_emim-cl_B_init.com',
opt_fname=['CJS1_emim-cl_B_6-311+g-d-p-_gd3bj_opt-modredundant_difrz.log',
'CJS1_emim-cl_B_6-311+g-d-p-_gd3bj_opt-modredundant_difrz_err.log',
'CJS1_emim-cl_B_6-311+g-d-p-_gd3bj_opt-modredundant_unfrz.log'],
freq_fname='CJS1_emim-cl_B_6-311+g-d-p-_gd3bj_freq_unfrz.log',
nbo_fname='CJS1_emim-cl_B_6-311+g-d-p-_gd3bj_pop-nbo-full-_unfrz.log',
atom_groups={'emim':range(20), 'cl':[20]},
alignto=[3,2,1])
Explanation: Note: the folder object will act identically whether using a local path or one on a server over ssh (using paramiko):
folder = pg.Folder('/path/to/folder',
ssh_server='login.server.com',
ssh_username='username')
Single Molecule Analysis
A molecule can be created containing data about the initial geometry, optimisation process and analysis of the final configuration.
End of explanation
#mol.show_initial(active=True)
vdw = mol.show_initial(represent='vdw', rotations=[[0,0,90], [-90, 90, 0]])
ball_stick = mol.show_optimisation(represent='ball_stick', rotations=[[0,0,90], [-90, 90, 0]])
display(vdw, ball_stick)
print 'Cl optimised polar coords from aromatic ring : ({0}, {1},{2})'.format(
*[round(i, 2) for i in mol.calc_polar_coords_from_plane(20,3,2,1)])
ax = mol.plot_opt_trajectory(20, [3,2,1])
ax.set_title('Cl optimisation path')
ax.get_figure().set_size_inches(4, 3)
Explanation: Geometric Analysis
Molecules can be viewed statically or interactively.
End of explanation
print('Optimised? {0}, Conformer? {1}, Energy = {2} a.u.'.format(
mol.is_optimised(), mol.is_conformer(),
round(mol.get_opt_energy(units='hartree'),3)))
ax = mol.plot_opt_energy(units='hartree')
ax.get_figure().set_size_inches(3, 2)
ax = mol.plot_freq_analysis()
ax.get_figure().set_size_inches(4, 2)
Explanation: Energetics and Frequency Analysis
End of explanation
mol2 = pg.molecule.Molecule(folder_obj=folder, alignto=[3,2,1],
pes_fname=['CJS_emim_6311_plus_d3_scan.log',
'CJS_emim_6311_plus_d3_scan_bck.log'])
ax, data = mol2.plot_pes_scans([1,4,9,10], rotation=[0,0,90], img_pos='local_maxs', zoom=0.5)
ax.set_title('Ethyl chain rotational conformer analysis')
ax.get_figure().set_size_inches(7, 3)
Explanation: Potential Energy Scan analysis of geometric conformers...
End of explanation
print '+ve charge centre polar coords from aromatic ring: ({0} {1},{2})'.format(
*[round(i, 2) for i in mol.calc_nbo_charge_center(3, 2, 1)])
display(mol.show_nbo_charges(represent='ball_stick', axis_length=0.4,
rotations=[[0,0,90], [-90, 90, 0]]))
Explanation: Partial Charge Analysis
using Natural Bond Orbital (NBO) analysis
End of explanation
print 'Number of Orbitals: {}'.format(mol.get_orbital_count())
homo, lumo = mol.get_orbital_homo_lumo()
homoe, lumoe = mol.get_orbital_energies([homo, lumo])
print 'HOMO at {} eV'.format(homoe)
print 'LUMO at {} eV'.format(lumoe)
ax = mol.plot_dos(per_energy=1, lbound=-20, ubound=10, legend_size=12)
Explanation: Density of States Analysis
End of explanation
print 'H inter-bond energy = {} kJmol-1'.format(
mol.calc_hbond_energy(eunits='kJmol-1', atom_groups=['emim', 'cl']))
print 'Other inter-bond energy = {} kJmol-1'.format(
mol.calc_sopt_energy(eunits='kJmol-1', no_hbonds=True, atom_groups=['emim', 'cl']))
display(mol.show_sopt_bonds(min_energy=1, eunits='kJmol-1',
atom_groups=['emim', 'cl'],
no_hbonds=True,
rotations=[[0, 0, 90]]))
display(mol.show_hbond_analysis(cutoff_energy=5.,alpha=0.6,
atom_groups=['emim', 'cl'],
rotations=[[0, 0, 90], [90, 0, 0]]))
Explanation: Bonding Analysis
Using Second Order Perturbation Theory.
End of explanation
analysis = pg.Analysis(folder_obj=folder)
errors = analysis.add_runs(headers=['Cation', 'Anion', 'Initial'],
values=[['emim'], ['cl'],
['B', 'BE', 'BM', 'F', 'FE']],
init_pattern='*{0}-{1}_{2}_init.com',
opt_pattern='*{0}-{1}_{2}_6-311+g-d-p-_gd3bj_opt*unfrz.log',
freq_pattern='*{0}-{1}_{2}_6-311+g-d-p-_gd3bj_freq*.log',
nbo_pattern='*{0}-{1}_{2}_6-311+g-d-p-_gd3bj_pop-nbo-full-*.log',
alignto=[3,2,1], atom_groups={'emim':range(1,20), 'cl':[20]},
ipython_print=True)
Explanation: Multiple Computations Analysis
Multiple computations, for instance of different starting conformations, can be grouped into an Analysis class and analysed collectively.
End of explanation
fig, caption = analysis.plot_mol_images(mtype='optimised', max_cols=3,
info_columns=['Cation', 'Anion', 'Initial'],
rotations=[[0,0,90]])
print caption
Explanation: Molecular Comparison
End of explanation
fig, caption = analysis.plot_mol_graphs(gtype='dos', max_cols=3,
lbound=-20, ubound=10, legend_size=0,
band_gap_value=False,
info_columns=['Cation', 'Anion', 'Initial'])
print caption
Explanation: Data Comparison
End of explanation
analysis.add_mol_property_subset('Opt', 'is_optimised', rows=[2,3])
analysis.add_mol_property('Energy (au)', 'get_opt_energy', units='hartree')
analysis.add_mol_property('Cation chain, $\\psi$', 'calc_dihedral_angle', [1, 4, 9, 10])
analysis.add_mol_property('Cation Charge', 'calc_nbo_charge', 'emim')
analysis.add_mol_property('Anion Charge', 'calc_nbo_charge', 'cl')
analysis.add_mol_property(['Anion-Cation, $r$', 'Anion-Cation, $\\theta$', 'Anion-Cation, $\\phi$'],
'calc_polar_coords_from_plane', 3, 2, 1, 20)
analysis.add_mol_property('Anion-Cation h-bond', 'calc_hbond_energy',
eunits='kJmol-1', atom_groups=['emim', 'cl'])
analysis.get_table(row_index=['Anion', 'Cation', 'Initial'],
column_index=['Cation', 'Anion', 'Anion-Cation'])
Explanation: The methods mentioned for individual molecules can be applied to all or a subset of these computations.
End of explanation
analysis.get_table(row_index=['Anion', 'Cation', 'Initial'],
column_index=['Cation', 'Anion', 'Anion-Cation'],
as_image=True, font_size=12)
Explanation: There is also an option (requiring pdflatex and ghostscript+imagemagik) to output the tables as a latex formatted image.
End of explanation
ax = analysis.plot_radviz_comparison('Anion', columns=range(4, 10))
Explanation: Multi-Variate Analysis
RadViz is a way of visualizing multi-variate data.
End of explanation
pg.utils.imgplot_kmean_groups(
analysis, 'Anion', 'cl', 4, range(4, 10),
output=['Initial'], mtype='optimised',
rotations=[[0, 0, 90], [-90, 90, 0]],
axis_length=0.3)
Explanation: The KMeans algorithm clusters data by trying to separate samples into n groups of equal variance.
End of explanation
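For reference, the clustering idea described above can be reproduced with scikit-learn directly, independently of pygauss; the array below is only a stand-in for the selected property columns, so treat this as an illustrative sketch.
# Generic KMeans sketch (illustrative data, not the pygauss pipeline)
from sklearn.cluster import KMeans
import numpy as np
X = np.random.rand(20, 3)                      # stand-in for the chosen property columns
labels = KMeans(n_clusters=4, n_init=10).fit_predict(X)
labels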
file_path = folder.save_ipyimg(vdw, 'image_of_molecule')
Image(file_path)
Explanation: Documentation (MS Word)
After analysing the computations, it would be reasonable to want to document some of our findings. This can be achieved by outputting individual figure or table images via the folder object.
End of explanation
import matplotlib.pyplot as plt
d = pg.MSDocument()
d.add_heading('A Pygauss Example Assessment', level=0)
d.add_docstring("""
# Introduction
We have looked at the following aspects
of [EMIM]^{+}[Cl]^{-} (C_{6}H_{11}ClN_{2});
- Geometric conformers
- Electronic structure
# Geometric Conformers
""")
fig, caption = analysis.plot_mol_images(max_cols=2,
rotations=[[90,0,0], [0,0,90]],
info_columns=['Anion', 'Cation', 'Initial'])
d.add_mpl(fig, dpi=96, height=9, caption=caption)
plt.close()
d.add_paragraph()
df = analysis.get_table(
columns=['Anion Charge', 'Cation Charge'],
row_index=['Anion', 'Cation', 'Initial'])
d.add_dataframe(df, incl_indx=True, style='Medium Shading 1 Accent 1',
caption='Analysis of Conformer Charge')
d.add_docstring("""
# Molecular Orbital Analysis
## Density of States
It is **important** to *emphasise* that the
computations have only been run in the gas phase.
""")
fig, caption = analysis.plot_mol_graphs(gtype='dos', max_cols=3,
lbound=-20, ubound=10, legend_size=0,
band_gap_value=False,
info_columns=['Cation', 'Anion', 'Initial'])
d.add_mpl(fig, dpi=96, height=9, caption=caption)
plt.close()
d.save('exmpl_assess.docx')
Explanation: But you may also want to produce a fuller record of your analysis, and this is where python-docx steps in. Building on this package, the pygauss MSDocument class can produce a full document of your analysis.
End of explanation |
13,884 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Interact Exercise 3
Imports
Step2: Using interact for animation with data
A soliton is a constant velocity wave that maintains its shape as it propagates. They arise from non-linear wave equations, such has the Korteweg–de Vries equation, which has the following analytical solution
Step3: To create an animation of a soliton propagating in time, we are going to precompute the soliton data and store it in a 2d array. To set this up, we create the following variables and arrays
Step4: Compute a 2d NumPy array called phi
Step6: Write a plot_soliton_data(i) function that plots the soliton wave $\phi(x, t[i])$. Customize your plot to make it effective and beautiful.
Step7: Use interact to animate the plot_soliton_data function versus time. | Python Code:
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
from IPython.html.widgets import interact, interactive, fixed
from IPython.display import display
Explanation: Interact Exercise 3
Imports
End of explanation
np.sech?   # NumPy has no sech function, so the formula below uses cosh**-2 instead
def soliton(x, t, c, a):
    """Return phi(x, t) for a soliton wave with constants c and a."""
phi=.5*c*(np.cosh(.5*c**.5*(x-c*t-a)))**-2
return phi
assert np.allclose(soliton(np.array([0]),0.0,1.0,0.0), np.array([0.5]))
Explanation: Using interact for animation with data
A soliton is a constant velocity wave that maintains its shape as it propagates. They arise from non-linear wave equations, such has the Korteweg–de Vries equation, which has the following analytical solution:
$$
\phi(x,t) = \frac{1}{2} c \mathrm{sech}^2 \left[ \frac{\sqrt{c}}{2} \left(x - ct - a \right) \right]
$$
The constant c is the velocity and the constant a is the initial location of the soliton.
Define a soliton(x, t, c, a) function that computes the value of the soliton wave for the given arguments. Your function should work when the position x or t are NumPy arrays, in which case it should return a NumPy array itself.
End of explanation
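A quick check (a sketch, not part of the original notebook) that the implementation broadcasts over NumPy arrays as required:
xs = np.linspace(-5, 5, 7)
soliton(xs, 0.0, 1.0, 0.0)   # returns an array with the same shape as xs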
tmin = 0.0
tmax = 10.0
tpoints = 100
t = np.linspace(tmin, tmax, tpoints)
xmin = 0.0
xmax = 10.0
xpoints = 200
x = np.linspace(xmin, xmax, xpoints)
c = 1.0
a = 0.0
Explanation: To create an animation of a soliton propagating in time, we are going to precompute the soliton data and store it in a 2d array. To set this up, we create the following variables and arrays:
End of explanation
q=[1,2,3,4]
b=[4,5,6]
B,A=np.meshgrid(b,q)
phi=soliton(A,B,c,a)
np.shape(phi)
T,X=np.meshgrid(t,x)
phi=soliton(X,T,c,a)
np.shape(phi)
assert phi.shape==(xpoints, tpoints)
assert phi.ndim==2
assert phi.dtype==np.dtype(float)
assert phi[0,0]==soliton(x[0],t[0],c,a)
Explanation: Compute a 2d NumPy array called phi:
It should have a dtype of float.
It should have a shape of (xpoints, tpoints).
phi[i,j] should contain the value $\phi(x[i],t[j])$.
End of explanation
def plot_soliton_data(i=0):
    """Plot the soliton data at t[i] versus x."""
plt.plot(x,phi[::1,i])
plt.tick_params(direction='out')
plt.xlabel('x')
plt.ylabel('phi')
plt.ylim(0,.6)
plt.title('Soliton Data')
print('t=',i)
plot_soliton_data(0)
assert True # leave this for grading the plot_soliton_data function
Explanation: Write a plot_soliton_data(i) function that plots the soliton wave $\phi(x, t[i])$. Customize your plot to make it effective and beautiful.
End of explanation
interact(plot_soliton_data, i=(0, tpoints - 1))  # valid column indices run 0..tpoints-1
assert True # leave this for grading the interact with plot_soliton_data cell
Explanation: Use interact to animate the plot_soliton_data function versus time.
End of explanation |
13,885 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Word Frequency in Literary Text
Click on the play icon above to "run" each box of code.
This program generates a table of how often words appear in a file and sorts them to show the ones the author used most frequently. This example uses Jane Eyre, but there are tons of books to choose from here with lots of books in .txt format.
Step1: Word frequency list
Step2: Filtering the results
This next part removes some of the less interesting words from the list. | Python Code:
import re
import pandas as pd
import urllib.request
frequency = {}
document_text = urllib.request.urlopen \
('http://www.textfiles.com/etext/FICTION/bronte-jane-178.txt') \
.read().decode('utf-8')
text_string = document_text.lower()
match_pattern = re.findall(r'\b[a-z]{3,15}\b', text_string)
for word in match_pattern:
count = frequency.get(word,0)
frequency[word] = count + 1
frequency_list = frequency.keys()
d = []
for word in frequency_list:
var = word + "," + str(frequency[word]) + "\r"
d.append({'word':word, 'Frequency': frequency[word]})
df = pd.DataFrame(d)
Explanation: Word Frequency in Literary Text
Click on the play icon above to "run" each box of code.
This program generates a table of how often words appear in a file and sorts them to show the ones the author used most frequently. This example uses Jane Eyre, but there are tons of books to choose from here with lots of books in .txt format.
End of explanation
df1 = df.sort_values(by="Frequency", ascending=False)
# the next line displays the first number of rows you select
df1.head(10)
Explanation: Word frequency list
End of explanation
df2 = df1.query('word not in \
("the","and","it","was","for","but","that") \
')
df2.head(10)
Explanation: Filtering the results
This next part removes some of the less interesting words from the list.
End of explanation |
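An equivalent filter can be written with a reusable stop-word list and isin, which scales better than editing the query string; this is an alternative sketch, not part of the original notebook.
# Alternative filtering with a stop-word list (sketch)
stop_words = ["the", "and", "it", "was", "for", "but", "that"]
df3 = df1[~df1['word'].isin(stop_words)]
df3.head(10)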
13,886 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Norm approximation on restricted quantized domain
Fast approximation of the norm over the value of a 10bit unsigned int
using batch gradient descent to minimize the squared error
Approximation formula
Step1: inizialization of the variable
Step2: Guessing the speed gain if this work
Step3: So if this approximation work we can stimate the norm over x7 time faster
hyper parameters initialization
Step4: Initial Error
Error formula
Step5: the partial derivative of the error
Partial derivative formula
Step6: Results
Step7: Percentual Error Plot | Python Code:
#numerical library
import numpy as np
#plot library
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
from matplotlib.ticker import LinearLocator, FormatStrFormatter
import matplotlib.pyplot as plt
from pprint import pprint
Explanation: Norm approximation on restricted quantized domain
Fast approximation of the norm over the value of a 10bit unsigned int
using batch gradient descent to minimize the squared error
Approximation formula:
$$\gamma (x+y) \approx \sqrt{x^2+y^2} \ x,y \in [0,1023] \subseteq \mathbb{N}$$
where $$\gamma = 0.7531854654594905$$
import needed modules
End of explanation
x,y = np.indices([1024,1024])
Explanation: initialization of the variables
End of explanation
%timeit (x**2+y**2)**0.5
%timeit 0.7531854654594905*(x+y)
Explanation: Guessing the speed gain if this works
End of explanation
epoch_of_training = 1000
learning_rate = 1e-8
gamma = 1
Explanation: So if this approximation works we can estimate the norm roughly 7x faster
hyper parameters initialization
End of explanation
init_sq_err = np.sum(0.5*((gamma*(x+y) - (x**2+y**2)**0.5)**2))
init_sq_err
Explanation: Initial Error
Error formula:
$$E = \frac{1}{2}\sum_{1}^{n} (\gamma (x+y)-\sqrt{x^2+y^2})^2$$
End of explanation
for i in range(epoch_of_training):
gamma -= learning_rate * np.mean((gamma*(x+y)-(x**2+y**2)**0.5)*(x+y))
Explanation: the partial derivative of the error
Partial derivative formula:
$$\frac{\partial }{\partial \gamma}E =\sum_{1}^{n} (\gamma (x+y)-\sqrt{x^2+y^2})(x+y)$$
The gradient descent update
$$\gamma^{i} = \gamma^{i-1} - \eta \frac{\partial }{\partial \gamma}E$$
End of explanation
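Because the model is linear in gamma, this least-squares problem also has a closed-form solution, which makes a handy sanity check on the iterative result; the lines below are a sketch added here, not part of the original notebook.
# Closed-form least-squares gamma: minimises 0.5*sum((gamma*s - r)**2) with s = x+y, r = sqrt(x^2+y^2)
s = x + y
r = (x**2 + y**2)**0.5
gamma_closed = np.sum(r * s) / np.sum(s**2)
gamma_closed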
gamma
fin_sq_err = np.sum(0.5*((gamma*(x+y) - (x**2+y**2)**0.5)**2))
fin_sq_err
delta_sq_err = init_sq_err - fin_sq_err
delta_sq_err
Error = abs(gamma*(x+y) - (x**2+y**2)**0.5)
print(np.max(Error))
print(np.mean(Error))
print(np.min(Error))
Explanation: Results
End of explanation
fig = plt.figure(figsize=[12,12])
ax = fig.gca(projection='3d')
X = np.arange(1, 1024, 8)
Y = np.arange(1, 1024, 8)
X, Y = np.meshgrid(X, Y)
Z = np.sqrt(X**2 + Y**2)
F = abs(gamma*(X+Y)-Z)/(Z)
surf = ax.plot_surface(X, Y, F, rstride=2, cstride=2, cmap=cm.jet,linewidth=1)
ax.set_xlabel('X')
ax.set_xlim(-10, 1034)
ax.set_ylabel('Y')
ax.set_ylim(-10, 1034)
ax.set_zlabel('Z')
ax.set_zlim(0, 0.30)
ax.invert_yaxis()
ax.zaxis.set_major_locator(LinearLocator(10))
ax.zaxis.set_major_formatter(FormatStrFormatter('%.02f'))
fig.colorbar(surf)
plt.show()
Explanation: Percentual Error Plot
End of explanation |
13,887 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Edit this next cell to choose a different country / year report
Step1: These next few conversions don't really work. The PPP data field seems wrong.
Step2: But this one only works if you use the PPP given applied to sample mean in LCU, which is odd.
Step3: Minimum and maximum can't be checked, but load them anyway in case we use them later.
Step4: Gini is calculate directly from from $L(p)$, or perhaps unit record data underlying.
Step5: Estimating tail statistics, like headcount poverty, is a little harder. Povcalnet likely uses the underlying unit record data, and the 100 point Lorenz curve is likely too coarse to get a comparable result. So at this step we fit a model of the Lorenz curve using splines.
We use weights to gently encourage the optimization to fit better at the minimum and maximum incomes, and to the first 10% of Lorenz points (to upweight the left tail fit). Unfortunately very large weights tend to produce spurious peaks in the PDF as the algorithm tries to fit the tails, so we sacrifice the tails to avoid this. It's more of a suggestion than a constraint
Step6: Although the sample extrema only say a little about the population extrema, it's interesting to see how they compare with those implied by the spline fit. It is technically disqualifying, but not surprising, if the sample extrema lie outside those from the fit.
If the natural computed maximum is less than the given, we want to use the upper weights. If not, it's better not to as this will force down the maximum which might result in an ill-formed distribution. A good strategy would be to fit once without weights, then choose the weights depending on how the extrema look.
Step7: The FGT2 index is very sensitive to distribution, and neither method reproduces the Povcalnet statistic well.
Step8: Errors here usually mean the distributions are not well specified. | Python Code:
# BGR_3_2001.json
# BRA_3_2001.json
# MWI_3_2010.23.json
# ECU_3_2014.json
# ARM_3_2010.json
# NGA_3_2009.83.json
# IDN_1_2014.json quite pointed / triangular
# PHL_3_2009.json
# ZAR_3_2012.4.json
# TZA_3_2011.77.json
# VNM_3_2008.json
# MOZ_3_2008.67.json quite rounded
# UZB_3_2003.json
# KIR_3_2006.json needs 1e-4
# PNG_3_2009.67.json needs False and 1e-5
# PAK_3_2013.5.json
# BGD_3_2010.json not super good need 1e-5
# ARG_2_1991.json needs False currency scales weird
with open("../jsoncache/MOZ_3_2008.67.json","r") as f:
d = json.loads(f.read())
print("Sample size".ljust(20),d['sample']['N'])
for k in d['dataset']:
print(k.ljust(20),d['dataset'][k])
Explanation: Edit this next cell to choose a different country / year report:
End of explanation
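The imports and the helper functions myassert, inverse and derivative used below are defined in earlier cells of the original notebook that are not shown here. The following is only a minimal sketch of plausible stand-ins (assumptions, not the author's definitions), so that the later cells can be read as runnable code.
import json
import numpy as np
import pandas as pd
import scipy.interpolate
import scipy.optimize
import matplotlib.pyplot as plt

def myassert(label, calc, given, rtol=1e-2):
    # print the computed and reported values side by side with a rough agreement check
    ok = abs(calc - given) <= rtol * max(abs(given), 1e-12)
    print(label, calc, given, "OK" if ok else "MISMATCH")

def inverse(f, lower=0.0, upper=1.0):
    # numerical inverse of an increasing function of p on [lower, upper]
    def g(y):
        y = min(max(y, f(lower)), f(upper))   # clamp to the attainable range
        return scipy.optimize.brentq(lambda p: f(p) - y, lower, upper)
    return np.vectorize(g)

def derivative(f, eps=1e-6):
    # simple central-difference derivative
    return lambda y: (f(np.asarray(y) + eps) - f(np.asarray(y) - eps)) / (2 * eps)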
# Check poverty line conversion
DAYS_PER_MONTH = 30.4167
line_month_ppp_calc = d['inputs']['line_day_ppp'] * DAYS_PER_MONTH
line_month_ppp_given = d['inputs']['line_month_ppp']
myassert("Poverty line (PPP):", line_month_ppp_calc, line_month_ppp_given)
ppp = d['inputs']['ppp']
line_month_lcu_calc = line_month_ppp_calc * ppp
line_month_lcu_given = d['inputs']['line_month_lcu']
myassert("Poverty line (LCU):", line_month_lcu_calc, line_month_lcu_given)
# Check data mean
sample_mean_ppp_calc = d['sample']['mean_month_lcu'] / ppp
sample_mean_ppp_given = d['sample']['mean_month_ppp']
myassert("Data mean (PPP):", sample_mean_ppp_calc, sample_mean_ppp_given)
implied_ppp = d['sample']['mean_month_lcu'] / d['sample']['mean_month_ppp']
myassert("Implied PPP:", implied_ppp, ppp)
Explanation: These next few conversions don't really work. The PPP data field seems wrong.
End of explanation
pop_N = d['sample']['effective_pop_N']
total_wealth_calc = pop_N * sample_mean_ppp_calc
total_wealth_given = d['sample']['effective_pop_wealth']
myassert("Total wealth:", total_wealth_calc, total_wealth_given)
Explanation: But this one only works if you use the PPP given applied to sample mean in LCU, which is odd.
End of explanation
# Load the min and max in case we use them to fit the Lorenz curve
sample_max_ppp_given = d['sample']['month_max']
sample_min_ppp_given = d['sample']['month_min']
Explanation: Minimum and maximum can't be checked, but load them anyway in case we use them later.
End of explanation
# Load the Lorenz curve
L = d['lorenz']['L']
p = d['lorenz']['p']
# We need to add the origin, by definition
p = [0.0] + p
L = [0.0] + L
# We can, if we want, use the sample min and max to add a point to the curve
if True:
dp = 1 / d['sample']['N']
dlorenz_at_0 = sample_min_ppp_given/sample_mean_ppp_given
dlorenz_at_1 = sample_max_ppp_given/sample_mean_ppp_given
p_second = 0 + dp
p_penultimate = 1 - dp
L_second = 0 + dlorenz_at_0 * dp
L_penultimate = 1 - dlorenz_at_1 * dp
p = [0.0, p_second] + p[1:-1] + [p_penultimate, 1.0]
L = [0.0, L_second] + L[1:-1] + [L_penultimate, 1.0]
lorenz = pd.DataFrame({'p': p, 'L': L})
lorenz['dp'] = lorenz.p.shift(-1)[:-1] - lorenz.p[:-1]
lorenz['dL'] = lorenz.L.shift(-1)[:-1] - lorenz.L[:-1]
lorenz['dLdp'] = lorenz.dL / lorenz.dp
# Now, F(y) = inverse of Q(p)
lorenz['y'] = lorenz.dLdp * sample_mean_ppp_given
# Calc and compare Ginis
G_calc = 1 - sum(0.5 * lorenz.dp[:-1] * (lorenz.L.shift(-1)[:-1] + lorenz.L[:-1])) / 0.5
G_given = d['dist']['Gini']
myassert("Gini:",G_calc, G_given)
Explanation: Gini is calculate directly from from $L(p)$, or perhaps unit record data underlying.
End of explanation
##########################################
plt.rcParams["figure.figsize"] = (12,2.5)
fig, ax = plt.subplots(1, 4)
##########################################
thehead = int(len(lorenz)*0.1)
themiddle = len(lorenz) - thehead - 2 - 2
lorenz.w = ([100, 100] + [10] * thehead) + ([1] * themiddle) + [1, 1]
#lorenz.w = [10]*thehead + [1]*(len(lorenz)-thehead)
lorenz_interp = scipy.interpolate.UnivariateSpline(lorenz.p,lorenz.L,w=lorenz.w,k=5,s=1e-7)
quantile = lambda p: sample_mean_ppp_given * lorenz_interp.derivative()(p)
cdf = inverse(quantile)
pdf = derivative(cdf)
pgrid = np.linspace(0, 1, 1000)
ax[0].plot(pgrid, lorenz_interp(pgrid))
ax[1].plot(pgrid, quantile(pgrid))
ygrid = np.linspace(0, quantile(0.97), 1000)
ax[2].plot(ygrid, cdf(ygrid))
ax[3].plot(ygrid, pdf(ygrid));
Explanation: Estimating tail statistics, like headcount poverty, is a little harder. Povcalnet likely uses the underlying unit record data, and the 100 point Lorenz curve is likely too coarse to get a comparable result. So at this step we fit a model of the Lorenz curve using splines.
We use weights to gently encourage the optimization to fit better at the minimum and maximum incomes, and to the first 10% of Lorenz points (to upweight the left tail fit). Unfortunately very large weights tend to produce spurious peaks in the PDF as the algorithm tries to fit the tails, so we sacrifice the tails to avoid this. It's more of a suggestion than a constraint :-)
End of explanation
myassert("Minimum",quantile(0),sample_min_ppp_given)
myassert("Maximum",quantile(1),sample_max_ppp_given)
myassert("Minimum / mean",quantile(0)/sample_mean_ppp_given,sample_min_ppp_given/sample_mean_ppp_given)
HC_calc = float(cdf(line_month_ppp_given))
HC_given = float(d['dist']['HC'])
myassert("HC",HC_calc,HC_given)
Explanation: Although the sample extrema only say a little about the population extrema, it's interesting to see how they compare with those implied by the spline fit. It is technically disqualifying, but not surprising, if the sample extrema lie outside those from the fit.
If the natural computed maximum is less than the given, we want to use the upper weights. If not, it's better not to as this will force down the maximum which might result in an ill-formed distribution. A good strategy would be to fit once without weights, then choose the weights depending on how the extrema look.
End of explanation
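One way to implement that two-pass strategy, using only names already defined above (a sketch):
# Fit once without weights, then decide whether the upper weights are needed
trial = scipy.interpolate.UnivariateSpline(lorenz.p, lorenz.L, k=5, s=1e-7)
max_from_unweighted_fit = sample_mean_ppp_given * trial.derivative()(1)
use_upper_weights = max_from_unweighted_fit < sample_max_ppp_given
use_upper_weights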
# Poverty gap
lorenz['PG'] = (line_month_ppp_given - lorenz.y) / line_month_ppp_given
lorenz.PG[lorenz.PG < 0] = 0
PG_direct = sum(lorenz.PG[:-1] * lorenz.dp[:-1])
PG_f = lambda y: pdf(y) * (line_month_ppp_given - y) # PL * Q(PL) - mu * L(Q(PL))
PG_model = (line_month_ppp_given * cdf(line_month_ppp_given) - sample_mean_ppp_given * lorenz_interp(cdf(line_month_ppp_given)) ) / line_month_ppp_given
PG_given = d['dist']['PG']
myassert("PG direct",PG_direct,PG_given)
myassert("PG model",PG_model,PG_given)
# Poverty gap squared (FGT2)
lorenz.FGT2 = lorenz.PG * lorenz.PG
FGT2_direct = sum(lorenz.FGT2[:-1] * lorenz.dp[:-1])
# Numerical integration doesn't work great for second moments so we simulate
M = 100000
FGT2_sim = 0
Watts_sim = 0
#bottom = cdf(sample_min_ppp_given)
bottom = 0.0
top = cdf(line_month_ppp_given)
for m in range(M):
sim_y = quantile(np.random.uniform(bottom, top))
FGT2_sim += (line_month_ppp_given - sim_y)**2 / line_month_ppp_given**2
Watts_sim += np.log(line_month_ppp_given / sim_y)
FGT2_sim /= (M / cdf(line_month_ppp_given))
Watts_sim /= (M / cdf(line_month_ppp_given))
FGT2_given = d['dist']['FGT2']
myassert("FGT2 direct",FGT2_direct,FGT2_given)
myassert("FGT2 model simulated",FGT2_sim,FGT2_given)
# Median
median_calc = lorenz.y[(lorenz.p - 0.5).abs().argmin()]
median_interp_calc = quantile(0.5)
median_given = d['dist']['median_ppp']
myassert("Median direct",median_calc,median_given)
myassert("Median model",median_interp_calc,median_given)
Explanation: The FGT2 index is very sensitive to distribution, and neither method reproduces the Povcalnet statistic well.
End of explanation
# Mean log deviation (MLD)
lorenz.LD = np.log(sample_mean_ppp_given) - np.log(lorenz.y)
MLD_calc = sum(lorenz.LD[:-1] * lorenz.dp[:-1])
# Numerical integration doesn't work great for weird things so we simulate
M = 100000
MLD_sim = 0
for m in range(M):
sim_y = quantile(np.random.uniform(0, 1))
increment = np.log(sample_mean_ppp_given / sim_y)
MLD_sim += increment
MLD_sim /= M
MLD_given = d['dist']['MLD']
myassert("MLD direct",MLD_calc,MLD_given)
myassert("MLD model simulated",MLD_sim,MLD_given)
# Watts index
lorenz.Watts = np.log(line_month_ppp_given) - np.log(lorenz.y)
lorenz.Watts[lorenz.Watts < 0] = 0
Watts_calc = sum(lorenz.Watts[:-1] * lorenz.dp[:-1])
# Watts_sim simulated above with FGT2
Watts_given = d['dist']['Watt']
myassert("Watts direct",Watts_calc,Watts_given)
myassert("Watts model simulated",Watts_sim,Watts_given)
Explanation: Errors here usually mean the distributions are not well specified.
End of explanation |
13,888 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Numpy - multidimensional data arrays
J.R. Johansson (jrjohansson at gmail.com)
The latest version of this IPython notebook lecture is available at http
Step1: Introduction
The numpy package (module) is used in almost all numerical computation using Python. It is a package that provide high-performance vector, matrix and higher-dimensional data structures for Python. It is implemented in C and Fortran so when calculations are vectorized (formulated with vectors and matrices), performance is very good.
To use numpy you need to import the module, using for example
Step2: In the numpy package the terminology used for vectors, matrices and higher-dimensional data sets is array.
Creating numpy arrays
There are a number of ways to initialize new numpy arrays, for example from
a Python list or tuples
using functions that are dedicated to generating numpy arrays, such as arange, linspace, etc.
reading data from files
From lists
For example, to create new vector and matrix arrays from Python lists we can use the numpy.array function.
Step3: The v and M objects are both of the type ndarray that the numpy module provides.
Step4: The difference between the v and M arrays is only their shapes. We can get information about the shape of an array by using the ndarray.shape property.
Step5: The number of elements in the array is available through the ndarray.size property
Step6: Equivalently, we could use the function numpy.shape and numpy.size
Step7: So far the numpy.ndarray looks awefully much like a Python list (or nested list). Why not simply use Python lists for computations instead of creating a new array type?
There are several reasons
Step8: We get an error if we try to assign a value of the wrong type to an element in a numpy array
Step9: If we want, we can explicitly define the type of the array data when we create it, using the dtype keyword argument
Step10: Common data types that can be used with dtype are
Step11: linspace and logspace
Step12: mgrid
Step13: random data
Step14: diag
Step15: zeros and ones
Step16: File I/O
Comma-separated values (CSV)
A very common file format for data files is comma-separated values (CSV), or related formats such as TSV (tab-separated values). To read data from such files into Numpy arrays we can use the numpy.genfromtxt function. For example,
Step17: Using numpy.savetxt we can store a Numpy array to a file in CSV format
Step18: Numpy's native file format
Useful when storing and reading back numpy array data. Use the functions numpy.save and numpy.load
Step19: More properties of the numpy arrays
Step20: Manipulating arrays
Indexing
We can index elements in an array using square brackets and indices
Step21: If we omit an index of a multidimensional array it returns the whole row (or, in general, a N-1 dimensional array)
Step22: The same thing can be achieved with using
Step23: We can assign new values to elements in an array using indexing
Step24: Index slicing
Index slicing is the technical name for the syntax M[lower
Step25: Array slices are mutable
Step26: We can omit any of the three parameters in M[lower
Step27: Negative indices counts from the end of the array (positive index from the begining)
Step28: Index slicing works exactly the same way for multidimensional arrays
Step29: Fancy indexing
Fancy indexing is the name for when an array or list is used in-place of an index
Step30: We can also use index masks
Step31: This feature is very useful to conditionally select elements from an array, using for example comparison operators
Step32: Functions for extracting data from arrays and creating arrays
where
The index mask can be converted to position index using the where function
Step33: diag
With the diag function we can also extract the diagonal and subdiagonals of an array
Step34: take
The take function is similar to fancy indexing described above
Step35: But take also works on lists and other objects
Step36: choose
Constructs an array by picking elements from several arrays
Step37: Linear algebra
Vectorizing code is the key to writing efficient numerical calculation with Python/Numpy. That means that as much as possible of a program should be formulated in terms of matrix and vector operations, like matrix-matrix multiplication.
Scalar-array operations
We can use the usual arithmetic operators to multiply, add, subtract, and divide arrays with scalar numbers.
Step38: Element-wise array-array operations
When we add, subtract, multiply and divide arrays with each other, the default behaviour is element-wise operations
Step39: If we multiply arrays with compatible shapes, we get an element-wise multiplication of each row
Step40: Matrix algebra
What about matrix mutiplication? There are two ways. We can either use the dot function, which applies a matrix-matrix, matrix-vector, or inner vector multiplication to its two arguments
Step41: Alternatively, we can cast the array objects to the type matrix. This changes the behavior of the standard arithmetic operators +, -, * to use matrix algebra.
Step42: If we try to add, subtract or multiply objects with incomplatible shapes we get an error
Step43: See also the related functions
Step44: Hermitian conjugate
Step45: We can extract the real and imaginary parts of complex-valued arrays using real and imag
Step46: Or the complex argument and absolute value
Step47: Matrix computations
Inverse
Step48: Determinant
Step49: Data processing
Often it is useful to store datasets in Numpy arrays. Numpy provides a number of functions to calculate statistics of datasets in arrays.
For example, let's calculate some properties from the Stockholm temperature dataset used above.
Step50: mean
Step51: The daily mean temperature in Stockholm over the last 200 years has been about 6.2 C.
standard deviations and variance
Step52: min and max
Step53: sum, prod, and trace
Step54: Computations on subsets of arrays
We can compute with subsets of the data in an array using indexing, fancy indexing, and the other methods of extracting data from an array (described above).
For example, let's go back to the temperature dataset
Step55: The dataformat is
Step56: With these tools we have very powerful data processing capabilities at our disposal. For example, to extract the average monthly average temperatures for each month of the year only takes a few lines of code
Step57: Calculations with higher-dimensional data
When functions such as min, max, etc. are applied to a multidimensional arrays, it is sometimes useful to apply the calculation to the entire array, and sometimes only on a row or column basis. Using the axis argument we can specify how these functions should behave
Step58: Many other functions and methods in the array and matrix classes accept the same (optional) axis keyword argument.
Reshaping, resizing and stacking arrays
The shape of an Numpy array can be modified without copying the underlaying data, which makes it a fast operation even for large arrays.
Step59: We can also use the function flatten to make a higher-dimensional array into a vector. But this function create a copy of the data.
Step60: Adding a new dimension
Step61: Stacking and repeating arrays
Using function repeat, tile, vstack, hstack, and concatenate we can create larger vectors and matrices from smaller ones
Step62: concatenate
Step63: hstack and vstack
Step64: Copy and "deep copy"
To achieve high performance, assignments in Python usually do not copy the underlaying objects. This is important for example when objects are passed between functions, to avoid an excessive amount of memory copying when it is not necessary (technical term
Step65: If we want to avoid this behavior, so that when we get a new completely independent object B copied from A, then we need to do a so-called "deep copy" using the function copy
Step66: Iterating over array elements
Generally, we want to avoid iterating over the elements of arrays whenever we can (at all costs). The reason is that in a interpreted language like Python (or MATLAB), iterations are really slow compared to vectorized operations.
However, sometimes iterations are unavoidable. For such cases, the Python for loop is the most convenient way to iterate over an array
Step67: When we need to iterate over each element of an array and modify its elements, it is convenient to use the enumerate function to obtain both the element and its index in the for loop
Step69: Vectorizing functions
As mentioned several times by now, to get good performance we should try to avoid looping over elements in our vectors and matrices, and instead use vectorized algorithms. The first step in converting a scalar algorithm to a vectorized algorithm is to make sure that the functions we write work with vector inputs.
Step70: OK, that didn't work because we didn't write the Theta function so that it can handle a vector input...
To get a vectorized version of Theta we can use the Numpy function vectorize. In many cases it can automatically vectorize a function
Step72: We can also implement the function to accept a vector input from the beginning (requires more effort but might give better performance)
Step73: Using arrays in conditions
When using arrays in conditions,for example if statements and other boolean expressions, one needs to use any or all, which requires that any or all elements in the array evalutes to True
Step74: Type casting
Since Numpy arrays are statically typed, the type of an array does not change once created. But we can explicitly cast an array of some type to another using the astype functions (see also the similar asarray function). This always create a new array of new type
Step75: Further reading
http | Python Code:
# what is this line all about?!? Answer in lecture 4
%matplotlib inline
import matplotlib.pyplot as plt
Explanation: Numpy - multidimensional data arrays
J.R. Johansson (jrjohansson at gmail.com)
The latest version of this IPython notebook lecture is available at http://github.com/jrjohansson/scientific-python-lectures.
The other notebooks in this lecture series are indexed at http://jrjohansson.github.io.
End of explanation
from numpy import *
Explanation: Introduction
The numpy package (module) is used in almost all numerical computation using Python. It is a package that provide high-performance vector, matrix and higher-dimensional data structures for Python. It is implemented in C and Fortran so when calculations are vectorized (formulated with vectors and matrices), performance is very good.
To use numpy you need to import the module, using for example:
End of explanation
# a vector: the argument to the array function is a Python list
v = array([1,2,3,4])
v
# a matrix: the argument to the array function is a nested Python list
M = array([[1, 2], [3, 4]])
M
Explanation: In the numpy package the terminology used for vectors, matrices and higher-dimensional data sets is array.
Creating numpy arrays
There are a number of ways to initialize new numpy arrays, for example from
a Python list or tuples
using functions that are dedicated to generating numpy arrays, such as arange, linspace, etc.
reading data from files
From lists
For example, to create new vector and matrix arrays from Python lists we can use the numpy.array function.
End of explanation
type(v), type(M)
Explanation: The v and M objects are both of the type ndarray that the numpy module provides.
End of explanation
v.shape
M.shape
Explanation: The difference between the v and M arrays is only their shapes. We can get information about the shape of an array by using the ndarray.shape property.
End of explanation
M.size
Explanation: The number of elements in the array is available through the ndarray.size property:
End of explanation
shape(M)
size(M)
Explanation: Equivalently, we could use the function numpy.shape and numpy.size
End of explanation
M.dtype
Explanation: So far the numpy.ndarray looks awfully much like a Python list (or nested list). Why not simply use Python lists for computations instead of creating a new array type?
There are several reasons:
Python lists are very general. They can contain any kind of object. They are dynamically typed. They do not support mathematical functions such as matrix and dot multiplications, etc. Implementing such functions for Python lists would not be very efficient because of the dynamic typing.
Numpy arrays are statically typed and homogeneous. The type of the elements is determined when the array is created.
Numpy arrays are memory efficient.
Because of the static typing, fast implementation of mathematical functions such as multiplication and addition of numpy arrays can be implemented in a compiled language (C and Fortran is used).
Using the dtype (data type) property of an ndarray, we can see what type the data of an array has:
End of explanation
M[0,0] = "hello"
Explanation: We get an error if we try to assign a value of the wrong type to an element in a numpy array:
End of explanation
M = array([[1, 2], [3, 4]], dtype=complex)
M
Explanation: If we want, we can explicitly define the type of the array data when we create it, using the dtype keyword argument:
End of explanation
# create a range
x = arange(0, 10, 1) # arguments: start, stop, step
x
x = arange(-1, 1, 0.1)
x
Explanation: Common data types that can be used with dtype are: int, float, complex, bool, object, etc.
We can also explicitly define the bit size of the data types, for example: int64, int16, float128, complex128.
Using array-generating functions
For larger arrays it is impractical to initialize the data manually using explicit Python lists. Instead we can use one of the many functions in numpy that generate arrays of different forms. Some of the more common are:
arange
End of explanation
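As a quick illustration of the explicit bit sizes mentioned above (a sketch):
a16 = array([1, 2, 3], dtype=int16)
a16.dtype, a16.itemsize    # dtype('int16'), 2 bytes per element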
# using linspace, both end points ARE included
linspace(0, 10, 25)
logspace(0, 10, 10, base=e)
Explanation: linspace and logspace
End of explanation
x, y = mgrid[0:5, 0:5] # similar to meshgrid in MATLAB
x
y
Explanation: mgrid
End of explanation
from numpy import random
# uniform random numbers in [0,1]
random.rand(5,5)
# standard normal distributed random numbers
random.randn(5,5)
Explanation: random data
End of explanation
# a diagonal matrix
diag([1,2,3])
# diagonal with offset from the main diagonal
diag([1,2,3], k=1)
Explanation: diag
End of explanation
zeros((3,3))
ones((3,3))
Explanation: zeros and ones
End of explanation
!head stockholm_td_adj.dat
data = genfromtxt('stockholm_td_adj.dat')
data.shape
fig, ax = plt.subplots(figsize=(14,4))
ax.plot(data[:,0]+data[:,1]/12.0+data[:,2]/365, data[:,5])
ax.axis('tight')
ax.set_title('tempeatures in Stockholm')
ax.set_xlabel('year')
ax.set_ylabel('temperature (C)');
Explanation: File I/O
Comma-separated values (CSV)
A very common file format for data files is comma-separated values (CSV), or related formats such as TSV (tab-separated values). To read data from such files into Numpy arrays we can use the numpy.genfromtxt function. For example,
End of explanation
M = random.rand(3,3)
M
savetxt("random-matrix.csv", M)
!cat random-matrix.csv
savetxt("random-matrix.csv", M, fmt='%.5f') # fmt specifies the format
!cat random-matrix.csv
Explanation: Using numpy.savetxt we can store a Numpy array to a file in CSV format:
End of explanation
save("random-matrix.npy", M)
!file random-matrix.npy
load("random-matrix.npy")
Explanation: Numpy's native file format
Useful when storing and reading back numpy array data. Use the functions numpy.save and numpy.load:
End of explanation
M.itemsize # bytes per element
M.nbytes # number of bytes
M.ndim # number of dimensions
Explanation: More properties of the numpy arrays
End of explanation
# v is a vector, and has only one dimension, taking one index
v[0]
# M is a matrix, or a 2 dimensional array, taking two indices
M[1,1]
Explanation: Manipulating arrays
Indexing
We can index elements in an array using square brackets and indices:
End of explanation
M
M[1]
Explanation: If we omit an index of a multidimensional array it returns the whole row (or, in general, a N-1 dimensional array)
End of explanation
M[1,:] # row 1
M[:,1] # column 1
Explanation: The same thing can be achieved with using : instead of an index:
End of explanation
M[0,0] = 1
M
# also works for rows and columns
M[1,:] = 0
M[:,2] = -1
M
Explanation: We can assign new values to elements in an array using indexing:
End of explanation
A = array([1,2,3,4,5])
A
A[1:3]
Explanation: Index slicing
Index slicing is the technical name for the syntax M[lower:upper:step] to extract part of an array:
End of explanation
A[1:3] = [-2,-3]
A
Explanation: Array slices are mutable: if they are assigned a new value the original array from which the slice was extracted is modified:
End of explanation
A[::] # lower, upper, step all take the default values
A[::2] # step is 2, lower and upper defaults to the beginning and end of the array
A[:3] # first three elements
A[3:] # elements from index 3
Explanation: We can omit any of the three parameters in M[lower:upper:step]:
End of explanation
A = array([1,2,3,4,5])
A[-1] # the last element in the array
A[-3:] # the last three elements
Explanation: Negative indices count from the end of the array (positive indices count from the beginning):
End of explanation
A = array([[n+m*10 for n in range(5)] for m in range(5)])
A
# a block from the original array
A[1:4, 1:4]
# strides
A[::2, ::2]
Explanation: Index slicing works exactly the same way for multidimensional arrays:
End of explanation
row_indices = [1, 2, 3]
A[row_indices]
col_indices = [1, 2, -1] # remember, index -1 means the last element
A[row_indices, col_indices]
Explanation: Fancy indexing
Fancy indexing is the name for when an array or list is used in-place of an index:
End of explanation
B = array([n for n in range(5)])
B
row_mask = array([True, False, True, False, False])
B[row_mask]
# same thing
row_mask = array([1,0,1,0,0], dtype=bool)
B[row_mask]
Explanation: We can also use index masks: If the index mask is an Numpy array of data type bool, then an element is selected (True) or not (False) depending on the value of the index mask at the position of each element:
End of explanation
x = arange(0, 10, 0.5)
x
mask = (5 < x) * (x < 7.5)
mask
x[mask]
Explanation: This feature is very useful to conditionally select elements from an array, using for example comparison operators:
End of explanation
indices = where(mask)
indices
x[indices] # this indexing is equivalent to the fancy indexing x[mask]
Explanation: Functions for extracting data from arrays and creating arrays
where
The index mask can be converted to position index using the where function
End of explanation
diag(A)
diag(A, -1)
Explanation: diag
With the diag function we can also extract the diagonal and subdiagonals of an array:
End of explanation
v2 = arange(-3,3)
v2
row_indices = [1, 3, 5]
v2[row_indices] # fancy indexing
v2.take(row_indices)
Explanation: take
The take function is similar to fancy indexing described above:
End of explanation
take([-3, -2, -1, 0, 1, 2], row_indices)
Explanation: But take also works on lists and other objects:
End of explanation
which = [1, 0, 1, 0]
choices = [[-2,-2,-2,-2], [5,5,5,5]]
choose(which, choices)
Explanation: choose
Constructs an array by picking elements from several arrays:
End of explanation
v1 = arange(0, 5)
v1 * 2
v1 + 2
A * 2, A + 2
Explanation: Linear algebra
Vectorizing code is the key to writing efficient numerical calculation with Python/Numpy. That means that as much as possible of a program should be formulated in terms of matrix and vector operations, like matrix-matrix multiplication.
Scalar-array operations
We can use the usual arithmetic operators to multiply, add, subtract, and divide arrays with scalar numbers.
End of explanation
A * A # element-wise multiplication
v1 * v1
Explanation: Element-wise array-array operations
When we add, subtract, multiply and divide arrays with each other, the default behaviour is element-wise operations:
End of explanation
A.shape, v1.shape
A * v1
Explanation: If we multiply arrays with compatible shapes, we get an element-wise multiplication of each row:
End of explanation
dot(A, A)
dot(A, v1)
dot(v1, v1)
Explanation: Matrix algebra
What about matrix multiplication? There are two ways. We can either use the dot function, which applies a matrix-matrix, matrix-vector, or inner vector multiplication to its two arguments:
End of explanation
M = matrix(A)
v = matrix(v1).T # make it a column vector
v
M * M
M * v
# inner product
v.T * v
# with matrix objects, standard matrix algebra applies
v + M*v
Explanation: Alternatively, we can cast the array objects to the type matrix. This changes the behavior of the standard arithmetic operators +, -, * to use matrix algebra.
End of explanation
v = matrix([1,2,3,4,5,6]).T
shape(M), shape(v)
M * v
Explanation: If we try to add, subtract or multiply objects with incompatible shapes we get an error:
End of explanation
C = matrix([[1j, 2j], [3j, 4j]])
C
conjugate(C)
Explanation: See also the related functions: inner, outer, cross, kron, tensordot. Try for example help(kron).
Array/Matrix transformations
Above we have used the .T to transpose the matrix object v. We could also have used the transpose function to accomplish the same thing.
Other mathematical functions that transform matrix objects are:
End of explanation
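For a quick look at two of the related functions mentioned above, outer and kron (a small sketch):
a = array([1, 2])
b = array([10, 20])
outer(a, b)   # 2x2 outer product: [[10, 20], [20, 40]]
kron(a, b)    # Kronecker product: [10, 20, 20, 40]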
C.H
Explanation: Hermitian conjugate: transpose + conjugate
End of explanation
real(C) # same as: C.real
imag(C) # same as: C.imag
Explanation: We can extract the real and imaginary parts of complex-valued arrays using real and imag:
End of explanation
angle(C+1) # heads up MATLAB Users, angle is used instead of arg
abs(C)
Explanation: Or the complex argument and absolute value
End of explanation
linalg.inv(C) # equivalent to C.I
C.I * C
Explanation: Matrix computations
Inverse
End of explanation
linalg.det(C)
linalg.det(C.I)
Explanation: Determinant
End of explanation
# reminder, the tempeature dataset is stored in the data variable:
shape(data)
Explanation: Data processing
Often it is useful to store datasets in Numpy arrays. Numpy provides a number of functions to calculate statistics of datasets in arrays.
For example, let's calculate some properties from the Stockholm temperature dataset used above.
End of explanation
# the temperature data is in column 3
mean(data[:,3])
Explanation: mean
End of explanation
std(data[:,3]), var(data[:,3])
Explanation: The daily mean temperature in Stockholm over the last 200 years has been about 6.2 C.
standard deviations and variance
End of explanation
# lowest daily average temperature
data[:,3].min()
# highest daily average temperature
data[:,3].max()
Explanation: min and max
End of explanation
d = arange(0, 10)
d
# sum up all elements
sum(d)
# product of all elements
prod(d+1)
# cumulative sum
cumsum(d)
# cumulative product
cumprod(d+1)
# same as: diag(A).sum()
trace(A)
Explanation: sum, prod, and trace
End of explanation
!head -n 3 stockholm_td_adj.dat
Explanation: Computations on subsets of arrays
We can compute with subsets of the data in an array using indexing, fancy indexing, and the other methods of extracting data from an array (described above).
For example, let's go back to the temperature dataset:
End of explanation
unique(data[:,1]) # the month column takes values from 1 to 12
mask_feb = data[:,1] == 2
# the temperature data is in column 3
mean(data[mask_feb,3])
Explanation: The data format is: year, month, day, daily average temperature, low, high, location.
If we are interested in the average temperature only in a particular month, say February, then we can create a index mask and use it to select only the data for that month using:
End of explanation
months = arange(1,13)
monthly_mean = [mean(data[data[:,1] == month, 3]) for month in months]
fig, ax = plt.subplots()
ax.bar(months, monthly_mean)
ax.set_xlabel("Month")
ax.set_ylabel("Monthly avg. temp.");
Explanation: With these tools we have very powerful data processing capabilities at our disposal. For example, to extract the average monthly average temperatures for each month of the year only takes a few lines of code:
End of explanation
m = random.rand(3,3)
m
# global max
m.max()
# max in each column
m.max(axis=0)
# max in each row
m.max(axis=1)
Explanation: Calculations with higher-dimensional data
When functions such as min, max, etc. are applied to multidimensional arrays, it is sometimes useful to apply the calculation to the entire array, and sometimes only on a row or column basis. Using the axis argument we can specify how these functions should behave:
End of explanation
A
n, m = A.shape
B = A.reshape((1,n*m))
B
B[0,0:5] = 5 # modify the array
B
A # and the original variable is also changed. B is only a different view of the same data
Explanation: Many other functions and methods in the array and matrix classes accept the same (optional) axis keyword argument.
Reshaping, resizing and stacking arrays
The shape of a Numpy array can be modified without copying the underlying data, which makes it a fast operation even for large arrays.
End of explanation
B = A.flatten()
B
B[0:5] = 10
B
A # now A has not changed, because B's data is a copy of A's, not refering to the same data
Explanation: We can also use the function flatten to make a higher-dimensional array into a vector. But this function create a copy of the data.
End of explanation
v = array([1,2,3])
shape(v)
# make a column matrix of the vector v
v[:, newaxis]
# column matrix
v[:,newaxis].shape
# row matrix
v[newaxis,:].shape
Explanation: Adding a new dimension: newaxis
With newaxis, we can insert new dimensions in an array, for example converting a vector to a column or row matrix:
End of explanation
a = array([[1, 2], [3, 4]])
# repeat each element 3 times
repeat(a, 3)
# tile the matrix 3 times
tile(a, 3)
Explanation: Stacking and repeating arrays
Using function repeat, tile, vstack, hstack, and concatenate we can create larger vectors and matrices from smaller ones:
tile and repeat
End of explanation
b = array([[5, 6]])
concatenate((a, b), axis=0)
concatenate((a, b.T), axis=1)
Explanation: concatenate
End of explanation
vstack((a,b))
hstack((a,b.T))
Explanation: hstack and vstack
End of explanation
from numpy import array
A = array([[1, 2], [3, 4]])
A
# now B is referring to the same array data as A
B = A
# changing B affects A
B[0,0] = 10
B
A
Explanation: Copy and "deep copy"
To achieve high performance, assignments in Python usually do not copy the underlying objects. This is important for example when objects are passed between functions, to avoid an excessive amount of memory copying when it is not necessary (technical term: pass by reference).
End of explanation
B = copy(A)
# now, if we modify B, A is not affected
B[0,0] = -5
B
A
Explanation: If we want to avoid this behavior, so that when we get a new completely independent object B copied from A, then we need to do a so-called "deep copy" using the function copy:
End of explanation
v = array([1,2,3,4])
for element in v:
print(element)
M = array([[1,2], [3,4]])
for row in M:
print("row", row)
for element in row:
print(element)
Explanation: Iterating over array elements
Generally, we want to avoid iterating over the elements of arrays whenever we can (at all costs). The reason is that in an interpreted language like Python (or MATLAB), iterations are really slow compared to vectorized operations.
However, sometimes iterations are unavoidable. For such cases, the Python for loop is the most convenient way to iterate over an array:
End of explanation
for row_idx, row in enumerate(M):
print("row_idx", row_idx, "row", row)
for col_idx, element in enumerate(row):
print("col_idx", col_idx, "element", element)
# update the matrix M: square each element
M[row_idx, col_idx] = element ** 2
# each element in M is now squared
M
Explanation: When we need to iterate over each element of an array and modify its elements, it is convenient to use the enumerate function to obtain both the element and its index in the for loop:
End of explanation
def Theta(x):
    """Scalar implementation of the Heaviside step function."""
    if x >= 0:
        return 1
    else:
        return 0
Theta(array([-3,-2,-1,0,1,2,3]))
Explanation: Vectorizing functions
As mentioned several times by now, to get good performance we should try to avoid looping over elements in our vectors and matrices, and instead use vectorized algorithms. The first step in converting a scalar algorithm to a vectorized algorithm is to make sure that the functions we write work with vector inputs.
End of explanation
Theta_vec = vectorize(Theta)
Theta_vec(array([-3,-2,-1,0,1,2,3]))
Explanation: OK, that didn't work because we didn't write the Theta function so that it can handle a vector input...
To get a vectorized version of Theta we can use the Numpy function vectorize. In many cases it can automatically vectorize a function:
End of explanation
def Theta(x):
    """Vector-aware implementation of the Heaviside step function."""
    return 1 * (x >= 0)
Theta(array([-3,-2,-1,0,1,2,3]))
# still works for scalars as well
Theta(-1.2), Theta(2.6)
Explanation: We can also implement the function to accept a vector input from the beginning (requires more effort but might give better performance):
End of explanation
M
if (M > 5).any():
print("at least one element in M is larger than 5")
else:
print("no element in M is larger than 5")
if (M > 5).all():
print("all elements in M are larger than 5")
else:
print("all elements in M are not larger than 5")
Explanation: Using arrays in conditions
When using arrays in conditions, for example in if statements and other boolean expressions, one needs to use any or all, which requires that any or all elements in the array evaluate to True:
End of explanation
M.dtype
M2 = M.astype(float)
M2
M2.dtype
M3 = M.astype(bool)
M3
Explanation: Type casting
Since Numpy arrays are statically typed, the type of an array does not change once created. But we can explicitly cast an array of some type to another using the astype function (see also the similar asarray function). This always creates a new array of the new type:
End of explanation
%reload_ext version_information
%version_information numpy
Explanation: Further reading
http://numpy.scipy.org
http://scipy.org/Tentative_NumPy_Tutorial
http://scipy.org/NumPy_for_Matlab_Users - A Numpy guide for MATLAB users.
Versions
End of explanation |
13,889 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Local Development and Validation
This notebook will cover the core parts of the machine learning workflow, running locally within the Google Cloud Datalab environment. Local development and validation, along with using a sample of the full dataset, is recommended as a starting point. This allows for a shorter development-validation iteration cycle.
Workspace Setup
The first step is to setup the workspace that we will use within this notebook - the python libraries, and the local directory containing the data inputs and outputs produced over the course of the steps.
Step1: The local development workspace will be in /content/datalab/workspace/census by default.
Note that the /content/datalab directory is physically located within the data disk mounted into the Datalab instance, but outside of the git repository containing notebooks, which makes it suitable for storing data files and generated files that are useful to keep around while you are working on a project, but do not belong in the source repository.
Step2: NOTE
Step3: To get started, we will copy the data into this workspace. Generally, in your own work, you will need to create a representative sample dataset to use for local development, while leaving the full dataset to use when running on the service. For purposes of the sample, which uses a relatively small dataset, we'll copy it down in entirety.
Step4: Data Exploration
It's a good idea to load data and inspect it to build an understanding of the structure, as well as preparation steps that will be needed.
Step5: The census data contains a large number of columns. Only a few are needed.
Data Cleaning and Transformations
The raw census data requires a number of transformations before it is usable for machine learning
Step6: Creating DataSets
Once the data is ready, the next step is to split data into training and evaluation datasets. In this sample, rows are split randomly in an 80/20 manner. Additionally, the schema is also saved for later use.
Step7: We'll create DataSet objects which are reference to one or more files identified by a path (or path pattern) along with associated schema.
Step8: Analyzing Data
When building a model, a number of pieces of information about the training data are required - for example, the list of entries or vocabulary of a categorical/discrete column, or aggregate statistics like min and max for numerical columns. These require a full pass over the training data, and is usually done once, and needs to be repeated once if you change the schema in a future iteration.
Step9: The output of analysis is a stats file that contains analysis from the numerical columns, and a vocab file from each categorical column.
Step10: Training
All the data is in place to start training. A model learns to predict the target value (the income, 'WAGP'), based on the different pieces of input data (the various columns or features). The target and inputs are defined as features derived from the input data by applying a set of transforms to the columns.
Additionally there is a special key column - this is any column in the data that can be used to uniquely identify instances. The value of this column is ignored during training, but this value is quite useful when using the resulting model during prediction as discussed later.
Step11: TensorBoard
A training job produces various summary events containing values of metrics (eg. throughput and loss) over the course of its execution. These events can be observed in TensorBoard while the job executes or after it is executed.
In this sample, training was short, and has completed. In the general case, especially for longer cloud training jobs, it is more interesting to launch TensorBoard while the training job continues to execute.
Step12: The Trained Model
It is interesting to get a sense of all the outputs produced during training, in addition to the summary event files, visualized in the previous step. In particular, note that the model is produced in a model subdirectory.
Step13: Evaluation
Once a model has been trained, it is necessary to evaluate it and understand how well it is performing. In order to evaluate a model, batch prediction jobs can be run against the one or more evaluation datasets that you have.
Step14: The root mean squared error and distribution of errors indicates how the model is performing at an aggregate level as well as indicative of the span of error values.
Prediction
Now that a model has been trained, and saved on-disk, it can be reloaded using TensorFlow, and be used to produce predictions, i.e. produce the income value given a set of new instances, or features that were not previously present in the training data. This mechanism can also help validate the model - it can be used to produce predictions for one or more evaluation datasets.
Note that prediction data must be of the same type (input format, and order of columns) as the data that was used for training. The only difference is the first column, the target income value, is absent.
Since the model is a regression model, a single value is the output of the prediction.
Also note that second column in our schema was specified as a key column. This value of the key will accompany the output values, so they can be joined with the input instances. | Python Code:
import google.datalab.ml as ml
import json
import math
import matplotlib.pyplot as plot
import mltoolbox.regression.dnn as regression
import numpy as np
import pandas as pd
import os
import seaborn as sns
import sklearn.metrics as metrics
Explanation: Local Development and Validation
This notebook will cover the core parts of the machine learning workflow, running locally within the Google Cloud Datalab environment. Local development and validation, along with using a sample of the full dataset, is recommended as a starting point. This allows for a shorter development-validation iteration cycle.
Workspace Setup
The first step is to setup the workspace that we will use within this notebook - the python libraries, and the local directory containing the data inputs and outputs produced over the course of the steps.
End of explanation
workspace_path = '/content/datalab/workspace/census'
!mkdir -p {workspace_path}
Explanation: The local development workspace will be in /content/datalab/workspace/census by default.
Note that the /content/datalab directory is physically located within the data disk mounted into the Datalab instance, but outside of the git repository containing notebooks, which makes it suitable for storing data files and generated files that are useful to keep around while you are working on a project, but do not belong in the source repository.
End of explanation
!rm -rf {workspace_path} && mkdir {workspace_path}
Explanation: NOTE: If you have previously run this notebook, and want to start from scratch, then run the next cell to delete and create the workspace directory.
End of explanation
!gsutil -q cp gs://cloud-datalab-samples/census/ss14psd.csv {workspace_path}/data/census.csv
!ls -l {workspace_path}/data
Explanation: To get started, we will copy the data into this workspace. Generally, in your own work, you will need to create a representative sample dataset to use for local development, while leaving the full dataset to use when running on the service. For purposes of the sample, which uses a relatively small dataset, we'll copy it down in entirety.
End of explanation
df_data = pd.read_csv(os.path.join(workspace_path, 'data/census.csv'), dtype=str)
print '%d rows' % len(df_data)
df_data.head()
Explanation: Data Exploration
It's a good idea to load data and inspect it to build an understanding of the structure, as well as preparation steps that will be needed.
End of explanation
# This code is packaged as a function that can be reused if you need to apply to future
# datasets, esp. to prediction data, to ensure consistent transformations are applied.
def transform_data(df):
interesting_columns = ['WAGP','SERIALNO','AGEP','COW','ESP','ESR','FOD1P','HINS4','INDP',
'JWMNP', 'JWTR', 'MAR', 'POWPUMA', 'PUMA', 'RAC1P', 'SCHL',
'SCIENGRLP', 'SEX', 'WKW']
df = df[interesting_columns]
# Replace whitespace with NaN, and NaNs with empty string
df = df.replace('\s+', np.nan, regex=True).fillna('')
# Filter out the rows without an income, i.e. there is no target value to learn from
df = df[df.WAGP != '']
# Convert the wage value into units of 1000. So someone making an income from wages
# of $23200 will have it encoded as 23.2
df['WAGP'] = df.WAGP.astype(np.int64) / 1000.0
# Filter out rows with income values we don't care about, i.e. outliers
# Filter out rows with less than 10K and more than 150K
df = df[(df.WAGP >= 10.0) & (df.WAGP < 150.0)]
return df
df_data = transform_data(df_data)
print '%d rows' % len(df_data)
df_data.head()
Explanation: The census data contains a large number of columns. Only a few are needed.
Data Cleaning and Transformations
The raw census data requires a number of transformations before it is usable for machine learning:
Apply understanding of the domain and the problem to determine which data to include or join, as well as which data to filter out if it is adding noise. In the case of census, we'll pick just a few of the many columns present in the dataset.
Handle missing values, or variations in formatting.
Apply other transformations in support of the problem.
End of explanation
def create_schema(df):
fields = []
for name, dtype in zip(df.columns, df.dtypes):
if dtype in (np.str, np.object):
# Categorical columns should have type 'STRING'
fields.append({'name': name, 'type': 'STRING'})
elif dtype in (np.int32, np.int64, np.float32, np.float64):
# Numerical columns have type 'FLOAT'
fields.append({'name': name, 'type': 'FLOAT'})
else:
raise ValueError('Unsupported column type "%s" in column "%s"' % (str(dtype), name))
return fields
def create_datasets(df):
# Numbers in the range of [0, 1)
random_values = np.random.rand(len(df))
# Split data into %80, 20% partitions
df_train = df[random_values < 0.8]
df_eval = df[random_values >= 0.8]
return df_train, df_eval
df_train, df_eval = create_datasets(df_data)
schema = create_schema(df_data)
training_data_path = os.path.join(workspace_path, 'data/train.csv')
eval_data_path = os.path.join(workspace_path, 'data/eval.csv')
schema_path = os.path.join(workspace_path, 'data/schema.json')
df_train.to_csv(training_data_path, header=False, index=False)
df_eval.to_csv(eval_data_path, header=False, index=False)
with open(schema_path, 'w') as f:
f.write(json.dumps(schema, indent=2))
!ls -l {workspace_path}/data
Explanation: Creating DataSets
Once the data is ready, the next step is to split data into training and evaluation datasets. In this sample, rows are split randomly in an 80/20 manner. Additionally, the schema is also saved for later use.
End of explanation
train_data = ml.CsvDataSet(file_pattern=training_data_path, schema_file=schema_path)
eval_data = ml.CsvDataSet(file_pattern=eval_data_path, schema_file=schema_path)
Explanation: We'll create DataSet objects which are references to one or more files identified by a path (or path pattern) along with associated schema.
End of explanation
analysis_path = os.path.join(workspace_path, 'analysis')
regression.analyze(dataset=train_data, output_dir=analysis_path)
Explanation: Analyzing Data
When building a model, a number of pieces of information about the training data are required - for example, the list of entries or vocabulary of a categorical/discrete column, or aggregate statistics like min and max for numerical columns. These require a full pass over the training data; this is usually done once, and needs to be repeated if you change the schema in a future iteration.
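Conceptually (a rough pandas sketch of what such an analysis pass computes, not the exact files that the mltoolbox package writes to disk), this amounts to something like:
vocab_cow = df_train['COW'].value_counts()         # vocabulary of a categorical column
wagp_stats = df_train['WAGP'].agg(['min', 'max'])  # aggregate statistics of a numerical column
print(vocab_cow.head())
print(wagp_stats)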
End of explanation
!ls {analysis_path}
Explanation: The output of analysis is a stats file that contains analysis from the numerical columns, and a vocab file from each categorical column.
End of explanation
features = {
"WAGP": {"transform": "target"},
"SERIALNO": {"transform": "key"},
"AGEP": {"transform": "embedding", "embedding_dim": 2}, # Age
"COW": {"transform": "one_hot"}, # Class of worker
"ESP": {"transform": "embedding", "embedding_dim": 2}, # Employment status of parents
"ESR": {"transform": "one_hot"}, # Employment status
"FOD1P": {"transform": "embedding", "embedding_dim": 3}, # Field of degree
"HINS4": {"transform": "one_hot"}, # Medicaid
"INDP": {"transform": "embedding", "embedding_dim": 5}, # Industry
"JWMNP": {"transform": "embedding", "embedding_dim": 2}, # Travel time to work
"JWTR": {"transform": "one_hot"}, # Transportation
"MAR": {"transform": "one_hot"}, # Marital status
"POWPUMA": {"transform": "one_hot"}, # Place of work
"PUMA": {"transform": "one_hot"}, # Area code
"RAC1P": {"transform": "one_hot"}, # Race
"SCHL": {"transform": "one_hot"}, # School
"SCIENGRLP": {"transform": "one_hot"}, # Science
"SEX": {"transform": "one_hot"},
"WKW": {"transform": "one_hot"} # Weeks worked
}
training_path = os.path.join(workspace_path, 'training')
regression.train(train_dataset=train_data, eval_dataset=eval_data,
output_dir=training_path,
analysis_dir=analysis_path,
features=features,
max_steps=2000,
layer_sizes=[5, 5, 5])
Explanation: Training
All the data is in place to start training. A model learns to predict the target value (the income, 'WAGP'), based on the different pieces of input data (the various columns or features). The target and inputs are defined as features derived from the input data by applying a set of transforms to the columns.
Additionally there is a special key column - this is any column in the data that can be used to uniquely identify instances. The value of this column is ignored during training, but this value is quite useful when using the resulting model during prediction as discussed later.
End of explanation
tensorboard_pid = ml.TensorBoard.start(training_path)
ml.TensorBoard.stop(tensorboard_pid)
Explanation: TensorBoard
A training job produces various summary events containing values of metrics (eg. throughput and loss) over the course of its execution. These events can be observed in TensorBoard while the job executes or after it is executed.
In this sample, training was short, and has completed. In the general case, especially for longer cloud training jobs, it is more interesting to launch TensorBoard while the training job continues to execute.
End of explanation
!ls -R {training_path}/model
Explanation: The Trained Model
It is interesting to get a sense of all the outputs produced during training, in addition to the summary event files, visualized in the previous step. In particular, note that the model is produced in a model subdirectory.
End of explanation
evaluation_path = os.path.join(workspace_path, 'evaluation')
# Note the use of evaluation mode (as opposed to prediction mode). This is used to indicate the data being
# predicted on contains a target value column (prediction data is missing that column).
regression.batch_predict(training_dir=training_path,
prediction_input_file=eval_data_path,
output_dir=evaluation_path,
output_format='json',
mode='evaluation')
!ls -l {evaluation_path}
df_eval = pd.read_json(os.path.join(evaluation_path, 'predictions-00000-of-00001.json'), lines=True)
df_eval.head()
mse = metrics.mean_squared_error(df_eval['target'], df_eval['predicted'])
rmse = math.sqrt(mse)
print 'Root Mean Squared Error: %.3f' % rmse
df_eval['error'] = df_eval['predicted'] - df_eval['target']
_ = plot.hist(df_eval['error'], bins=20)
Explanation: Evaluation
Once a model has been trained, it is necessary to evaluate it and understand how well it is performing. In order to evaluate a model, batch prediction jobs can be run against one or more evaluation datasets that you have.
End of explanation
%file {workspace_path}/data/prediction.csv
SERIALNO,AGEP,COW,ESP,ESR,FOD1P,HINS4,INDP,JWMNP,JWTR,MAR,POWPUMA,PUMA,RAC1P,SCHL,SCIENGRLP,SEX,WKW
490,64,2,0,1,0,2,8090,015,01,1,00590,00500,1,18,0,2,1
1225,32,5,0,4,5301,2,9680,015,01,1,00100,00100,1,21,2,1,1
1226,30,1,0,1,0,2,8680,020,01,1,00100,00100,1,16,0,2,1
df_instances = pd.read_csv(os.path.join(workspace_path, 'data/prediction.csv'))
df_instances
df_predictions = regression.predict(training_dir=training_path, data=df_instances)
df_predictions
# Index the instances DataFrame using the SERIALNO column, and join the predictions
# DataFrame using the same column.
df_instances.set_index(keys=['SERIALNO'], inplace=True)
df_predictions.set_index(keys=['SERIALNO'], inplace=True)
df_data = df_predictions.join(other=df_instances)
df_data
Explanation: The root mean squared error and the distribution of errors indicate how the model is performing at an aggregate level, and are indicative of the span of error values.
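For reference, the metric computed above is
$$\mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(\mathrm{predicted}_i - \mathrm{target}_i\right)^2}$$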
Prediction
Now that a model has been trained, and saved on-disk, it can be reloaded using TensorFlow, and be used to produce predictions, i.e. produce the income value given a set of new instances, or features that were not previously present in the training data. This mechanism can also help validate the model - it can be used to produce predictions for one or more evaluation datasets.
Note that prediction data must be of the same type (input format, and order of columns) as the data that was used for training. The only difference is the first column, the target income value, is absent.
Since the model is a regression model, a single value is the output of the prediction.
Also note that second column in our schema was specified as a key column. This value of the key will accompany the output values, so they can be joined with the input instances.
End of explanation |
13,890 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Explauto, an open-source Python library to study autonomous exploration in developmental robotics
Explauto is an open-source Python library providing a unified API to design and compare various exploration strategies driving various sensorimotor learning algorithms in various simulated or robotics systems. Explauto aims at being collaborative and pedagogic, providing a platform to developmental roboticists where they can publish and compare their algorithmic contributions related to autonomous exploration and learning, as well as a platform for teaching and scientific diffusion. It is available on GitHub.
The library is organized in three main packages, each one containing a collection of interchangeable modules
Step1: According to your installation, you will see at least two available environments
Step2: For example, the 'mid_dimensional' configuration corresponds to
Step3: One can use this method with every registered environments. For example the available configurations for the pendulum are
Step4: Let's instantiate a mid-dimensional simple arm
Step5: Each particular environment has to implement its own compute_sensori_effect method, which takes as argument a motor command vector $m$ (here the position of the joints, 7-dimensional). It returns the corresponding sensory effect vector $s$ (here the coordinate of the hand, $2$-dimensional).
Step6: Environments can implement specific methods for, e.g., drawing
Step7: The base of the arm is fixed at (0, 0) (circle). The first angle position m[0] corresponds to the angle between a horizontal line and the segment attached to the base, anticlock-wise. Each following angle position is measured with respect to their respective previous segment.
The Environment base class provides several useful methods in order to, e.g., sample random motor commands
Step8: Let's for example plot 10 random arm configurations
Step9: Dynamical environments are also available, though their integration with the rest of the library is not yet completly clear (to discuss later). E.g., a circular pendulum
Step10: The compute_sensori_effect method is also defined (using a motor primitive)
Step11: But let's continue this tutorial using a mid-dimensional simple arm
Step12: Learning sensorimotor models
In Explauto, a sensorimotor model implements both the iterative learning process from sensorimotor experience, i.e. from the iterative collection of $(m, s)$ pairs by interaction with the environment, and the use of the resulting internal model to perform forward and inverse predictions (or any kind of general prediction between sensorimotor subspaces).
Learning sensorimotor mappings involves machine learning algorithms, for which Explauto provides a unified interface through the SensorimotorModel abstract class.
Using the simple arm environment above, it allows to iteratively learn a sensorimotor model which will be able to
Step13: Here we will use the 'nearest neighbor' model. This sensorimotor model simply stores sensorimotor experience, ie. $(m, s)$ pairs where $m$ is a motor command (here arm joint positions) and $s$ the corresponding sensory effect (here end-effector positions). When asked for a forward prediction for a given motor command $m$, it returns the associated sensory effect $s$ of the nearest neighbor of $m$ in the stored sensorimotor experience. When asked for an inverse prediction to reach a sensory goal $s$, it returns the associated motor command $m$ of the nearest neighbor of $s$ in the stored sensorimotor experience, possibly pertubated with a bit gaussian noise.
Step14: We will use the 'exact' configuration, which perform forward and inverse prediction as explained above, without any noise added (ie., it just looks for the nearest neighbor).
Now we can instantiate the sensorimotor model by using
Step15: Note that in addition to the names of the model and its configuration, one also has to pass environment.conf. This a Configuration object which is instantiated during the environment creation and provides information about the motor and sensorimotor ranges used by the environment. It is useful for the sensorimotor model to be properly configured. When using the 'default' configuration for example, the added noise when performing inverse prediction depends on the motor ranges. Passing environment.conf thus allows to define sensorimotor model configurations independently of particular environment settings.
Now let's train the model from the execution of random motor commands (i.e. random motor babbling)
Step16: Note that sensorimotor model training in Explauto is an iterative process. They incorporate new sensorimotor experience on the fly instead of using batch training. This is a requirement for autonomous exploration where the internal model has to be refined online.
Once the sensorimodel has been trained, one can perform forward and inverse prediction with it. Let's predict the sensori effect of a new random motor command (which is not in the training set we just used) using the forward_prediction method
Step17: and compare the predicted effect with the real effect observed from executing $m$ through the environment
Step18: We observe that the predicted end-effector position is quite close to the observed position when executing the motor command. Using the 'NN' model, it simply corresponds to the sensory effect of the nearest neighbor of $m$ in the stored sensorimotor experience.
Sensorimotor models can also be used for inverse prediction using the inverse_prediction method, allowing the inference of an appropriate motor comand $m$ in order to reach a given sensory goal $s_g$
Step19: We can check if the inferred motor command is actually appropriate to reach the goal $s_g$
Step20: We observe that the inferred motor command results in an end-effector position which is quite close to the goal. Using the 'exact' configuration of the 'nearest_neighbor' model, it is simply the motor command which resulted in the sensory effect which is the closest to $s_g$ in the stored experience.
Here is a bit more complex example where the arm attempt to follow a vertical straight line with the end-effector
Step21: Using another sensorimotor model in Explauto simply consists of changing the model name and configuration above. For example, you can try to execute the exact same code, just replacing the model instanciation by
Step22: Motor and goal babbling using interest models
In Explauto, the role of interest models is to provide sensorimotor predictions (forward or inverse) to be performed by the sensorimotor model. An interest model implements the active exploration process, where sensorimotor experiments are chosen to improve the forward or inverse predictions of the sensorimotor model. It explores in a given interest space resulting in motor babbling strategies when it corresponds to the motor space and in goal babbling strategies when it corresponds to the sensory space.
An interest model has to implement a sampling procedure in the interest space. Explauto provides several sampling procedures
Step23: and the available configurations of a given model by
Step24: Using an environment, a sensorimotor and an interest model, one can run a motor babbling strategy by
Step25: Then running the following simulation loop and (optionally) plotting the reached sensory effects
Step26: (The plots are quite hugly here, we will present Explauto visualization tools in the following.)
Random goal babbling corresponds to
Step27: We observe that goal babbling allow a more uniform covering of the sensory space.
And finally, here is the code for curiosity-driven goal babbling (maximization of the learning progress)
Step28: The reached point obtained above do not well cover the sensory space. This is due to the fact that we did not re-initialize the sensorimotor model (therefore this latter was already trained) to avoid some bootstrapping issues. The next section shows how to encapsulate a sensorimotor and an interest models into an agent to, among other things, take care of those bootstrapping issues.
Encapsulating a sensorimotor and an interest models into an agent
Encapsulating a sensorimotor and an interest models into an agent allows to generalize and simplify the simulation loop whatever the exploration strategy involved, ie whatever the type of babbling, the sensorimotor and the interest models. In Explauto, an agent is intantiated using a configuration (generally from an environment), a sensorimotor and an interest models
Step29: An agent is provided with two methods. One for producing a motor command
Step30: The produce() method calls the sample() method of the interest model, which returns either a motor command or a sensory goal according to the interest space (i.e. the type of babbling). Then it uses the sensorimotor model to complete the obtained value into a full sensorimotor vector (using forward prediction in case of motor babbling and inverse prediction in case of goal babbling). Finally it returns the motor part of this full sensorimotor vector. Agents also take care of model bootstrapping issues.
The second main agent method is perceive(), which informs the agent with the sensorimotor consequence of its action in order to update both the sensorimotor and the interest models
Step31: Hence the entire simulation loop can now be rewritten as
Step32: This loop is valid whatever the exploration strategy involved. The corresponding formal framework is defined in
Step33: and run it using the exact same loop
Step34: Of course lack a way to visualize the result of our simulations here, this is why we introduce Explauto's Experiment in the next section.
Encapsulating an environment and an agent into an experiment
Encapsulating an environment and an agent into an experiment allows to evaluate agent learning and offers plotting facilities. Once an environment and an agent have been constructed, one can set an experiment using
Step35: An experiment offers the management of the simulation loop with evaluation, logging and plotting capabilities. Instead of seperately constructing the environment and the agent (containing the sensorimotor and the interest models), one can simply use
Step36: This is the compact way to construct the environment (here a mid-dimensional 'simple_arm'), the sensorimotor model (here, 'NN') and the interest model (here curiosity-driven goal babbling) and encapsulate them into an experiment.
An experiment allows to insert an evaluation phase at given time steps
Step37: Now let's run the experiment
Step38: This executes the same simulation loop as above, inserting an evaluation phase at each specified time step and logging the flow of interest model choices, sensorimotor model inferences and sensorimotor observations. This allows to, e.g., visualize the chosen goals and reached hand positions during the experiment using the scatter_plot method
Step39: or to vizualize the learning curve
Step40: Parallel comparison of exploration strategies
Various exploration strategies can be launched in parallel and compared by using an experiment pool
Step41: running it
Step42: comparing learning curves
Step43: or vizualize the iterative choice of goals and the reached effects | Python Code:
from explauto.environment import environments
environments.keys()
Explanation: Explauto, an open-source Python library to study autonomous exploration in developmental robotics
Explauto is an open-source Python library providing a unified API to design and compare various exploration strategies driving various sensorimotor learning algorithms in various simulated or robotics systems. Explauto aims at being collaborative and pedagogic, providing a platform to developmental roboticists where they can publish and compare their algorithmic contributions related to autonomous exploration and learning, as well as a platform for teaching and scientific diffusion. It is available on GitHub.
The library is organized in three main packages, each one containing a collection of interchangeable modules:
* The environment package provides a unified interface to real and simulated robots.
* The sensorimotor_model package provides a unified interface to online machine learning algorithm.
* The interest_model package provides a unified interface for the active choice of sensorimotor experiments.
The library is easily extendable by forking the GitHub repository and proposing new modules for each package (tutorial to come; do not hesitate to contact us if you want to get involved).
This tutorial shows how to use modules contained in these three packages, how to integrate them in simulation loops and how to analyse the results.
Setting environments
In Explauto, an environment implements the physical properties of the interaction between the robot body and the environment in which it evolves. Explauto comes with several sensorimotor systems available from the environment package:
End of explanation
from explauto.environment import available_configurations
available_configurations('simple_arm').keys()
Explanation: According to your installation, you will see at least two available environments:
* a multi-joint arm acting on a plane ('simple_arm')
* an under-actuated torque-controlled circular pendulum ('pendulum').
These environments are simulated. Explauto also provides an interface to real robots based on Dynamixel actuators by providing bindings to the Pypot library (this tutorial shows how to use it on a Poppy robot).
We will use the simple arm for this tutorial. It consists of the simulation of an $n$ degrees-of-freedom (DoF) arm with movements limited to a 2D plane. Each available environment comes with a set of predefined configurations. A default configuration will always be defined. For the simple arm they are:
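For intuition only (this is a generic planar-arm forward-kinematics sketch, not Explauto's actual implementation), the hand position of such an arm is obtained by summing the link vectors rotated by the cumulative joint angles:
import numpy as np
def planar_hand_position(angles, lengths):
    phi = np.cumsum(angles)              # absolute orientation of each segment
    x = np.sum(lengths * np.cos(phi))    # hand x coordinate
    y = np.sum(lengths * np.sin(phi))    # hand y coordinate
    return x, y
print(planar_hand_position(np.array([0.1, 0.2, 0.3]), np.array([0.5, 0.3, 0.2])))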
End of explanation
available_configurations('simple_arm')['mid_dimensional']
Explanation: For example, the 'mid_dimensional' configuration corresponds to:
End of explanation
available_configurations('pendulum').keys()
Explanation: One can use this method with every registered environments. For example the available configurations for the pendulum are:
End of explanation
from explauto import Environment
environment = Environment.from_configuration('simple_arm', 'mid_dimensional')
Explanation: Let's instantiate a mid-dimensional simple arm:
End of explanation
from numpy import pi
m = [-pi/6., pi/3., pi/4., pi/5., 0., pi/3., pi/6.]
environment.compute_sensori_effect(m)
Explanation: Each particular environment has to implement its own compute_sensori_effect method, which takes as argument a motor command vector $m$ (here the position of the joints, 7-dimensional). It returns the corresponding sensory effect vector $s$ (here the coordinate of the hand, $2$-dimensional).
End of explanation
# Create the axes for plotting::
%pylab inline
ax = axes()
# plot the arm:
environment.plot_arm(ax, m)
Explanation: Environments can implement specific methods for, e.g., drawing:
End of explanation
motor_configurations = environment.random_motors(n=10)
Explanation: The base of the arm is fixed at (0, 0) (circle). The first angle position m[0] corresponds to the angle between a horizontal line and the segment attached to the base, measured anti-clockwise. Each following angle position is measured with respect to its previous segment.
The Environment base class provides several useful methods in order to, e.g., sample random motor commands:
End of explanation
# Create the axes for plotting::
%pylab inline
ax = axes()
# Plotting 10 random motor configurations:
for m in motor_configurations:
environment.plot_arm(ax, m)
Explanation: Let's for example plot 10 random arm configurations:
End of explanation
environment = Environment.from_configuration('pendulum', 'default')
%pylab
ax = axes()
# Sequence of torques at each time step:
U = [0.25] * 15 + [-0.25] * 15 + [0.25] * 19
# reset to lower position:
environment.reset()
# apply torque and plot:
for u in U:
ax.cla()
environment.apply_torque(u)
environment.plot_current_state(ax)
draw()
Explanation: Dynamical environments are also available, though their integration with the rest of the library is not yet completely clear (to discuss later). E.g., a circular pendulum:
End of explanation
environment.compute_sensori_effect(environment.random_motors())
Explanation: The compute_sensori_effect method is also defined (using a motor primitive):
End of explanation
environment = Environment.from_configuration('simple_arm', 'mid_dimensional')
Explanation: But let's continue this tutorial using a mid-dimensional simple arm:
End of explanation
from explauto.sensorimotor_model import sensorimotor_models
sensorimotor_models.keys()
Explanation: Learning sensorimotor models
In Explauto, a sensorimotor model implements both the iterative learning process from sensorimotor experience, i.e. from the iterative collection of $(m, s)$ pairs by interaction with the environment, and the use of the resulting internal model to perform forward and inverse predictions (or any kind of general prediction between sensorimotor subspaces).
Learning sensorimotor mappings involves machine learning algorithms, for which Explauto provides a unified interface through the SensorimotorModel abstract class.
Using the simple arm environment above, it allows to iteratively learn a sensorimotor model which will be able to:
* infer the position of the end-effector from a given motor command, what is called forward prediction,
* infer the motor command allowing to reach a particular end-effector position, what is called inverse prediction.
* update online from sensorimotor experience
Several sensorimotor models are provided: simple nearest-neighbor look-up, non-parametric models combining classical regressions and optimization algorithms, online local mixtures of Gaussians (beta).
Similarly to environments, available sensorimotor models in Explauto can be accessed using:
End of explanation
from explauto.sensorimotor_model import available_configurations
available_configurations('nearest_neighbor')
Explanation: Here we will use the 'nearest neighbor' model. This sensorimotor model simply stores sensorimotor experience, i.e. $(m, s)$ pairs where $m$ is a motor command (here arm joint positions) and $s$ the corresponding sensory effect (here end-effector positions). When asked for a forward prediction for a given motor command $m$, it returns the associated sensory effect $s$ of the nearest neighbor of $m$ in the stored sensorimotor experience. When asked for an inverse prediction to reach a sensory goal $s$, it returns the associated motor command $m$ of the nearest neighbor of $s$ in the stored sensorimotor experience, possibly perturbed with a bit of Gaussian noise.
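As a schematic illustration of that idea only (a minimal NumPy sketch, not Explauto's actual implementation), nearest-neighbor forward and inverse lookups could be written as:
import numpy as np
M = np.random.rand(100, 7)   # stored motor commands
S = np.random.rand(100, 2)   # corresponding stored sensory effects
def nn_forward(m):
    # effect associated with the stored command closest to m
    return S[np.argmin(np.linalg.norm(M - m, axis=1))]
def nn_inverse(s_goal):
    # command whose stored effect is closest to the goal
    return M[np.argmin(np.linalg.norm(S - s_goal, axis=1))]
print(nn_forward(np.random.rand(7)), nn_inverse(np.random.rand(2)))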
End of explanation
from explauto import SensorimotorModel
sm_model = SensorimotorModel.from_configuration(environment.conf, 'nearest_neighbor', 'exact')
Explanation: We will use the 'exact' configuration, which performs forward and inverse prediction as explained above, without any noise added (i.e., it just looks for the nearest neighbor).
Now we can instantiate the sensorimotor model by using:
End of explanation
for m in environment.random_motors(n=1000):
# compute the sensori effect s of the motor command m through the environment:
s = environment.compute_sensori_effect(m)
# update the model according to this experience:
sm_model.update(m, s)
Explanation: Note that in addition to the names of the model and its configuration, one also has to pass environment.conf. This is a Configuration object which is instantiated during the environment creation and provides information about the motor and sensorimotor ranges used by the environment. It is useful for the sensorimotor model to be properly configured. When using the 'default' configuration for example, the added noise when performing inverse prediction depends on the motor ranges. Passing environment.conf thus allows defining sensorimotor model configurations independently of particular environment settings.
Now let's train the model from the execution of random motor commands (i.e. random motor babbling):
End of explanation
# random motor command:
m = environment.random_motors(n=1)[0]
# predicted sensory effect:
s_pred = sm_model.forward_prediction(m)
print 'random motor command: ', m
print 'predicted effect: ', s_pred
Explanation: Note that sensorimotor model training in Explauto is an iterative process. The model incorporates new sensorimotor experience on the fly instead of using batch training. This is a requirement for autonomous exploration where the internal model has to be refined online.
Once the sensorimotor model has been trained, one can perform forward and inverse prediction with it. Let's predict the sensory effect of a new random motor command (which is not in the training set we just used) using the forward_prediction method:
End of explanation
%pylab inline
ax = axes()
environment.plot_arm(ax, m)
ax.plot(*s_pred, marker='o', color='red')
Explanation: and compare the predicted effect with the real effect observed from executing $m$ through the environment:
End of explanation
s_g = [0.7, 0.5]
m = sm_model.inverse_prediction(s_g)
print 'Inferred motor command to reach the position ', s_g, ': ', m
Explanation: We observe that the predicted end-effector position is quite close to the observed position when executing the motor command. Using the 'NN' model, it simply corresponds to the sensory effect of the nearest neighbor of $m$ in the stored sensorimotor experience.
Sensorimotor models can also be used for inverse prediction using the inverse_prediction method, allowing the inference of an appropriate motor command $m$ in order to reach a given sensory goal $s_g$:
End of explanation
ax = axes()
environment.plot_arm(ax, m)
ax.plot(*s_g, marker='o', color='red')
Explanation: We can check if the inferred motor command is actually appropriate to reach the goal $s_g$:
End of explanation
ax = axes()
# Define the line and plot it:
x = 0.8
y_a = 0.5
y_b = -0.5
ax.plot([x, x], [y_a, y_b], color='red')
# for 10 points equidistantly spaced on the line, perform inverse prediction and plot:
for y in linspace(-0.5, 0.5, 10):
m = sm_model.inverse_prediction([x, y])
environment.plot_arm(ax, m)
Explanation: We observe that the inferred motor command results in an end-effector position which is quite close to the goal. Using the 'exact' configuration of the 'nearest_neighbor' model, it is simply the motor command which resulted in the sensory effect which is the closest to $s_g$ in the stored experience.
Here is a slightly more complex example where the arm attempts to follow a vertical straight line with the end-effector:
End of explanation
sm_model = SensorimotorModel.from_configuration(environment.conf, 'LWLR-BFGS', 'default')
Explanation: Using another sensorimotor model in Explauto simply consists of changing the model name and configuration above. For example, you can try to execute the exact same code, just replacing the model instanciation by:
End of explanation
from explauto.interest_model import interest_models, available_configurations
interest_models.keys()
Explanation: Motor and goal babbling using interest models
In Explauto, the role of interest models is to provide sensorimotor predictions (forward or inverse) to be performed by the sensorimotor model. An interest model implements the active exploration process, where sensorimotor experiments are chosen to improve the forward or inverse predictions of the sensorimotor model. It explores in a given interest space resulting in motor babbling strategies when it corresponds to the motor space and in goal babbling strategies when it corresponds to the sensory space.
An interest model has to implement a sampling procedure in the interest space. Explauto provides several sampling procedures:
* random sampling
* learning progress maximization in forward or inverse predictions.
* In development:
* social interaction (e.g. using a mouse pointer to interactively provide sensory goals)
* optimization toward a specific goal
Similarly to environments and sensorimotor models, available interest models in Explauto can be accessed using:
End of explanation
available_configurations('discretized_progress')
Explanation: and the available configurations of a given model by:
End of explanation
from explauto import InterestModel
im_model = InterestModel.from_configuration(environment.conf, environment.conf.m_dims, 'random')
Explanation: Using an environment, a sensorimotor and an interest model, one can run a motor babbling strategy by:
* first instantiate a random motor interest model:
End of explanation
# re-instantiate the sensorimotor model (to forget what was previously learnt in the previous section
sm_model = SensorimotorModel.from_configuration(environment.conf, 'nearest_neighbor', 'default')
# run the simulation loop
for _ in range(100):
# sample a random motor command using the interest model:
m = im_model.sample()
# execute this command and observe the corresponding sensory effect:
s = environment.compute_sensori_effect(m)
# update the sensorimotor model:
sm_model.update(m, s)
im_model.update(hstack((m, s)), hstack((m, s_g)))
# plot the observed sensory effect:
plot(s[0], s[1], 'ok')
Explanation: Then running the following simulation loop and (optionally) plotting the reached sensory effects:
End of explanation
# Instantiate a random goal interest model:
im_model = InterestModel.from_configuration(environment.conf, environment.conf.s_dims, 'random')
for _ in range(100):
# sample a random sensory goal using the interest model:
s_g = im_model.sample()
# infer a motor command to reach that goal using the sensorimotor model:
m = sm_model.inverse_prediction(s_g)
# execute this command and observe the corresponding sensory effect:
s = environment.compute_sensori_effect(m)
# update the sensorimotor model:
sm_model.update(m, s)
# plot the observed sensory effect:
plot(s[0], s[1], 'ok')
Explanation: (The plots are quite ugly here; we will present Explauto's visualization tools in the following.)
Random goal babbling corresponds to:
End of explanation
# Instantiate an active goal interest model:
im_model = InterestModel.from_configuration(environment.conf, environment.conf.s_dims, 'discretized_progress')
for _ in range(100):
# sample a sensory goal maximizing learning progress using the interest model:
s_g = im_model.sample()
# infer a motor command to reach that goal using the sensorimotor model:
m = sm_model.inverse_prediction(s_g)
# execute this command and observe the corresponding sensory effect:
s = environment.compute_sensori_effect(m)
# update the sensorimotor model:
sm_model.update(m, s)
# update the interest model:
im_model.update(hstack((m, s)), hstack((m, s_g)))
# plot the observed sensory effect:
plot(s[0], s[1], 'ok')
Explanation: We observe that goal babbling allows a more uniform covering of the sensory space.
And finally, here is the code for curiosity-driven goal babbling (maximization of the learning progress):
End of explanation
from explauto import Agent
sm_model = SensorimotorModel.from_configuration(environment.conf, 'nearest_neighbor', 'default')
im_model = InterestModel.from_configuration(environment.conf, environment.conf.m_dims, 'random')
agent = Agent(environment.conf, sm_model, im_model)
Explanation: The reached points obtained above do not cover the sensory space well. This is due to the fact that we did not re-initialize the sensorimotor model (so it was already trained), in order to avoid some bootstrapping issues. The next section shows how to encapsulate a sensorimotor and an interest model into an agent to, among other things, take care of those bootstrapping issues.
Encapsulating a sensorimotor and an interest models into an agent
Encapsulating a sensorimotor and an interest model into an agent allows us to generalize and simplify the simulation loop whatever the exploration strategy involved, i.e. whatever the type of babbling, the sensorimotor and the interest models. In Explauto, an agent is instantiated using a configuration (generally from an environment), a sensorimotor and an interest model:
End of explanation
m = agent.produce()
print m
Explanation: An agent is provided with two methods. One for producing a motor command:
End of explanation
s = environment.update(m)
agent.perceive(s)
Explanation: The produce() method calls the sample() method of the interest model, which returns either a motor command or a sensory goal according to the interest space (i.e. the type of babbling). Then it uses the sensorimotor model to complete the obtained value into a full sensorimotor vector (using forward prediction in case of motor babbling and inverse prediction in case of goal babbling). Finally it returns the motor part of this full sensorimotor vector. Agents also take care of model bootstrapping issues.
The second main agent method is perceive(), which informs the agent with the sensorimotor consequence of its action in order to update both the sensorimotor and the interest models:
End of explanation
for _ in range(100):
m = agent.produce()
s = environment.update(m)
agent.perceive(s)
Explanation: Hence the entire simulation loop can now be rewritten as:
End of explanation
sm_model = SensorimotorModel.from_configuration(environment.conf, 'nearest_neighbor', 'default')
im_model = InterestModel.from_configuration(environment.conf, environment.conf.s_dims, 'discretized_progress')
agent = Agent(environment.conf, sm_model, im_model)
Explanation: This loop is valid whatever the exploration strategy involved. The corresponding formal framework is defined in:
C. Moulin-Frier and P.-Y. Oudeyer, Exploration strategies in developmental robotics: A unified probabilistic framework, ICDL/Epirob, Osaka, Japan, 2013, pp. 1–6.
Let's for example create a curiosity-driven goal babbler:
End of explanation
for _ in range(100):
m = agent.produce()
s = environment.update(m)
agent.perceive(s)
Explanation: and run it using the exact same loop:
End of explanation
from explauto import Experiment
expe = Experiment(environment, agent)
Explanation: Of course lack a way to visualize the result of our simulations here, this is why we introduce Explauto's Experiment in the next section.
Encapsulating an environment and an agent into an experiment
Encapsulating an environment and an agent into an experiment allows to evaluate agent learning and offers plotting facilities. Once an environment and an agent have been constructed, one can set an experiment using:
End of explanation
from explauto.experiment import make_settings
random_goal_babbling = make_settings(environment='simple_arm', environment_config = 'mid_dimensional',
babbling_mode='goal',
interest_model='random',
sensorimotor_model='nearest_neighbor')
expe = Experiment.from_settings(random_goal_babbling)
Explanation: An experiment offers the management of the simulation loop with evaluation, logging and plotting capabilities. Instead of separately constructing the environment and the agent (containing the sensorimotor and the interest models), one can simply use:
End of explanation
expe.evaluate_at([100, 200, 400, 1000], random_goal_babbling.default_testcases)
Explanation: This is the compact way to construct the environment (here a mid-dimensional 'simple_arm'), the sensorimotor model (here, 'NN') and the interest model (here curiosity-driven goal babbling) and encapsulate them into an experiment.
An experiment allows to insert an evaluation phase at given time steps:
End of explanation
expe.run()
Explanation: Now let's run the experiment:
End of explanation
%pylab inline
ax = axes()
title(('Random goal babbling'))
expe.log.scatter_plot(ax, (('sensori', [0, 1]),))
expe.log.scatter_plot(ax, (('choice', [0, 1]),), marker='.', color='red')
#expe.log.scatter_plot(ax, (('testcases', [0, 1]),), marker='o', color='green')
legend(['reached hand positions', 'chosen goals'])
Explanation: This executes the same simulation loop as above, inserting an evaluation phase at each specified time step and logging the flow of interest model choices, sensorimotor model inferences and sensorimotor observations. This allows to, e.g., visualize the chosen goals and reached hand positions during the experiment using the scatter_plot method:
End of explanation
ax = axes()
expe.log.plot_learning_curve(ax)
Explanation: or to vizualize the learning curve:
End of explanation
from explauto import ExperimentPool
xps = ExperimentPool.from_settings_product(environments=[('simple_arm', 'high_dim_high_s_range')],
babblings=['goal'],
interest_models=[('random', 'default'), ('discretized_progress', 'default')],
sensorimotor_models=[('nearest_neighbor', 'default')],
evaluate_at=[200, 500, 900, 1400],
same_testcases=True)
Explanation: Parallel comparison of exploration strategies
Various exploration strategies can be launched in parallel and compared by using an experiment pool:
End of explanation
xps.run()
Explanation: running it:
End of explanation
ax = axes()
for log in xps.logs:
log.plot_learning_curve(ax)
legend([s.interest_model for s in xps.settings])
Explanation: comparing learning curves:
End of explanation
%pylab
clf()
last_t = 0
for t in linspace(100, xps.logs[0].eval_at[-1], 40):
t = int(t)
for i, (config, log) in enumerate(zip(xps.settings, xps.logs)):
ax = subplot(1, 2, i+1)
log.scatter_plot(ax, (('sensori', [0, 1]),), range(0, t), marker='.', markersize=0.3, color = 'white')
log.density_plot(ax, (('choice', [0, 1]),), range(last_t, t))
title(config.interest_model + ' ' + config.babbling_mode)
draw()
last_t = t
Explanation: or visualize the iterative choice of goals and the reached effects:
End of explanation |
13,891 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
https
Step2: IP ClockDivider
Non SoC Test | Python Code:
#This notebook also uses the `(some) LaTeX environments for Jupyter`
#https://github.com/ProfFan/latex_envs wich is part of the
#jupyter_contrib_nbextensions package
from myhdl import *
from myhdlpeek import Peeker
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
from sympy import *
init_printing()
import itertools
from IPython.display import clear_output
#https://github.com/jrjohansson/version_information
%load_ext version_information
%version_information myhdl, myhdlpeek, numpy, pandas, matplotlib, sympy, itertools, IPython
#helper functions to read in the .v and .vhd generated files into python
def VerilogTextReader(loc, printresult=True):
with open(f'{loc}.v', 'r') as vText:
VerilogText=vText.read()
if printresult:
print(f'***Verilog modual from {loc}.v***\n\n', VerilogText)
return VerilogText
def VHDLTextReader(loc, printresult=True):
with open(f'{loc}.vhd', 'r') as vText:
VerilogText=vText.read()
if printresult:
print(f'***VHDL modual from {loc}.vhd***\n\n', VerilogText)
return VerilogText
def ConstraintXDCTextReader(loc, printresult=True):
with open(f'{loc}.xdc', 'r') as xdcText:
ConstraintText=xdcText.read()
if printresult:
print(f'***Constraint file from {loc}.xdc***\n\n', ConstraintText)
return ConstraintText
Explanation: https://yangtavares.com/2017/07/31/creating-a-simple-overlay-for-pynq-z1-board-from-vivado-hlx/
End of explanation
@block
def ClockDivider(Divisor, clkOut, clk,rst):
    """
    Simple clock divider based on the Digilent clock divider
    https://learn.digilentinc.com/Documents/262
    Input:
        Divisor (32 bit): the clock frequency divide-by value
        clk (bool): the input clock
        rst (bool): clock divider reset
    Output:
        clkOut (bool): the divided clock output
        count (32 bit): the value of the internal counter
    """
count_i=Signal(modbv(0)[32:])
@always(clk.posedge, rst.posedge)
def counter():
if rst:
count_i.next=0
elif count_i==(Divisor-1):
count_i.next=0
else:
count_i.next=count_i+1
clkOut_i=Signal(bool(0))
@always(clk.posedge, rst.posedge)
def clockTick():
if rst:
clkOut_i.next=0
elif count_i==(Divisor-1):
clkOut_i.next=not clkOut_i
else:
clkOut_i.next=clkOut_i
@always_comb
def OuputBuffer():
clkOut.next=clkOut_i
return instances()
RefClkFreq=125e6
TargetClkFreq=40
DivsionFactor=int(RefClkFreq/TargetClkFreq)
DivsionFactor
Peeker.clear()
clk=Signal(bool(0)); Peeker(clk, 'clk')
Divisor=Signal(intbv(DivsionFactor)[32:]); Peeker(Divisor, 'Divisor')
clkOut=Signal(bool(0)); Peeker(clkOut, 'clkOut')
rst=Signal(bool(0)); Peeker(rst, 'rst')
DUT=ClockDivider(Divisor, clkOut, clk,rst)
DUT.convert()
VerilogTextReader('ClockDivider');
ConstraintXDCTextReader('ClockAXI');
Explanation: IP ClockDivider
Non SoC Test
End of explanation |
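#A quick behavioral simulation sketch for the divider (an added example, not part
#of the original notebook). It assumes MyHDL's block.run_sim() is available and
#uses a small Divisor value so the divided clock toggles within a short run.
@block
def ClockDivider_TB():
    clkTB = Signal(bool(0))
    rstTB = Signal(bool(0))
    clkOutTB = Signal(bool(0))
    DivisorTB = Signal(intbv(4)[32:])
    DUT_TB = ClockDivider(DivisorTB, clkOutTB, clkTB, rstTB)

    @always(delay(1))
    def clkGen():
        clkTB.next = not clkTB

    @instance
    def stimulus():
        for _ in range(32):
            yield clkTB.posedge
            print(f'clkOut={int(clkOutTB)}')
        raise StopSimulation
    return instances()

TB = ClockDivider_TB()
TB.run_sim()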
13,892 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Interpolate bad channels for MEG/EEG channels
This example shows how to interpolate bad MEG/EEG channels
Using spherical splines from
Step1: Compute interpolation (also works with Raw and Epochs objects)
Step2: You can also use minimum-norm for EEG as well as MEG | Python Code:
# Authors: Denis A. Engemann <[email protected]>
# Mainak Jas <[email protected]>
#
# License: BSD-3-Clause
import mne
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
meg_path = data_path / 'MEG' / 'sample'
fname = meg_path / 'sample_audvis-ave.fif'
evoked = mne.read_evokeds(fname, condition='Left Auditory',
baseline=(None, 0))
# plot with bads
evoked.plot(exclude=[], picks=('grad', 'eeg'))
Explanation: Interpolate bad channels for MEG/EEG channels
This example shows how to interpolate bad MEG/EEG channels
Using spherical splines from :footcite:PerrinEtAl1989 for EEG data.
Using field interpolation for MEG and EEG data.
In this example, the bad channels will still be marked as bad.
Only the data in those channels is replaced.
End of explanation
evoked_interp = evoked.copy().interpolate_bads(reset_bads=False)
evoked_interp.plot(exclude=[], picks=('grad', 'eeg'))
Explanation: Compute interpolation (also works with Raw and Epochs objects)
End of explanation
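# The same call works on Raw and Epochs objects as well. A brief sketch (not part
# of the original example), assuming the raw sample recording is available and
# using the channels usually marked bad in the MNE tutorials:
raw = mne.io.read_raw_fif(meg_path / 'sample_audvis_raw.fif', preload=True)
raw.info['bads'] = ['MEG 2443', 'EEG 053']
raw_interp = raw.copy().interpolate_bads(reset_bads=False)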
evoked_interp_mne = evoked.copy().interpolate_bads(
reset_bads=False, method=dict(eeg='MNE'), verbose=True)
evoked_interp_mne.plot(exclude=[], picks=('grad', 'eeg'))
Explanation: You can also use minimum-norm for EEG as well as MEG
End of explanation |
13,893 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
BSTRINGS COMPOUNDING
Step1: Atomic String as an Integral of Atomic Function (introduced in 2017 by Prof S.Eremenko)
Step2: Atomic String, Atomic Function and Atomic Function Derivative plotted together
Step3: Properties of atomic function Up(x)
1) Remarkably, the Atomic Function derivative can be expressed via the Atomic Function itself - up'(x) = 2up(2x+1) - 2up(2x-1) - meaning the pulse shape of the derivative is built from shifted and stretched copies of the Atomic Function itself - a remarkable property
2) The Atomic Function pulses superposition set at points -2, -1, 0, +1, +2... can exactly represent a Unity (number 1)
Step4: Properties of BUP atomic function BUp(x)
Step5: Atomic String is a generalisation of an Atomic Function
1) AString is a swing-like function - the integral of the Atomic Function (AF) - which can be expressed via AF itself
Step6: 3) All derivatives of AString can be represented via AString itself
Step7: Representing of flat Spacetime Fabric by joining of Atomic Strings Quanta (Metriants)
Step8: Schematic model of Gravitation explaining General Relativity effects where spacetime Shape, Density and Curvature are deeply related being expressed via the same AString or Atomic Function
Step9: Apart from standard Python code, this script and material is the intellectual property of Professor Sergei Yu. Eremenko (https | Python Code:
import numpy as np
import pylab as pl
pl.rcParams["figure.figsize"] = 9,6
def BString1(x: float) -> float:
res = 0.5 * np.sin(2.* np.pi * x/2) ###x
if x > 0.5:
res = 0.5
if x < -0.5:
res = -0.5
return res
############### One String Pulse with width, shift and scale #############
def StringPulse(String1, t: float, a = 1., b = 0., c = 1., d = 0.) -> float:
x = (t - b)/a
if (x < -1):
res = -0.5
elif (x > 1):
res = 0.5
else:
res = String1(x)
res = d + res * c
return res
###### Atomic String Applied to list with width, shift and scale #############
def String(String1, x: list, a = 1., b = 0., c = 1., d = 0.) -> list:
res = []
for i in range(len(x)):
res.append(StringPulse(String1, x[i], a, b, c, d))
return res
###### Summation of two lists #############
def Sum(x1: list, x2: list) -> list:
res = []
for i in range(len(x1)):
res.append(x1[i] + x2[i])
return res
##########################################################
##This script introduces Atomic Function
################### One Pulse of atomic function
def up1(x: float) -> float:
#Atomic function table
up_y = [0.5, 0.48, 0.460000017,0.440000421,0.420003478,0.400016184, 0.380053256, 0.360139056, 0.340308139, 0.320605107,
0.301083436, 0.281802850, 0.262826445, 0.244218000, 0.226041554, 0.208361009, 0.191239338, 0.174736305,
0.158905389, 0.143991189, 0.129427260, 0.115840866, 0.103044024, 0.9110444278e-01, 0.798444445e-01, 0.694444445e-01,
0.598444445e-01, 0.510444877e-01, 0.430440239e-01, 0.358409663e-01, 0.294282603e-01, 0.237911889e-01, 0.189053889e-01,
0.147363055e-01, 0.112393379e-01, 0.836100883e-02, 0.604155412e-02, 0.421800000e-02, 0.282644445e-02, 0.180999032e-02,
0.108343562e-02, 0.605106267e-03, 0.308138660e-03, 0.139055523e-03, 0.532555251e-04, 0.161841328e-04, 0.347816874e-05,
0.420576116e-05, 0.167693347e-07, 0.354008603e-10, 0]
up_x = np.arange(0.5, 1.01, 0.01)
res = 0.
if ((x >= 0.5) and (x <= 1)):
for i in range(len(up_x) - 1):
if (up_x[i] >= x) and (x < up_x[i+1]):
N1 = 1 - (x - up_x[i])/0.01
res = N1 * up_y[i] + (1 - N1) * up_y[i+1]
return res
return res
############### Atomic Function Pulse with width, shift and scale #############
def pulse(up1, t: float, a = 1., b = 0., c = 1., d = 0.) -> float:
x = (t - b)/a
res = 0.
if (x >= 0.5) and (x <= 1):
res = up1(x)
elif (x >= 0.0) and (x < 0.5):
res = 1 - up1(1 - x)
elif (x >= -1 and x <= -0.5):
res = up1(-x)
elif (x > -0.5) and (x < 0):
res = 1 - up1(1 + x)
res = d + res * c
return res
############### Atomic Function Applied to list with width, shift and scale #############
def up(up1, x: list, a = 1., b = 0., c = 1., d = 0.) -> list:
res = []
for i in range(len(x)):
res.append(pulse(up1, x[i], a, b, c, d))
return res
x = np.arange(-2.0, 2.0, 0.01)
pl.title('Atomic Function')
pl.plot(x, up(up1, x), label='Atomic Function')
pl.grid(True)
pl.show()
Explanation: BSTRINGS COMPOUNDING
End of explanation
############### Atomic String #############
def AString1(x: float) -> float:
res = 1 * (pulse(up1, x/2.0 - 0.5) - 0.5)
return res
############### Atomic String Pulse with width, shift and scale #############
def AStringPulse(t: float, a = 1., b = 0., c = 1., d = 0.) -> float:
x = (t - b)/a
if (x < -1):
res = -0.5
elif (x > 1):
res = 0.5
else:
res = AString1(x)
res = d + res * c
return res
###### Atomic String Applied to list with width, shift and scale #############
def AString(x: list, a = 1., b = 0., c = 1., d = 0.) -> list:
res = []
for i in range(len(x)):
res.append(AStringPulse(x[i], a, b, c, d))
return res
pl.title('Atomic String')
pl.plot(x, String(AString1, x, 1.0, 0, 1, 0), label='Atomic String')
pl.plot(x, x, label='y = x')
pl.grid(True)
pl.show()
pl.title('B String')
pl.plot(x, String(BString1, x, 1.0, 0, 1, 0), label='B String')
pl.plot(x, x, label='y = x')
pl.grid(True)
pl.show()
Explanation: Atomic String as an Integral of Atomic Function (introduced in 2017 by Prof S.Eremenko)
End of explanation
pl.plot(x, String(AString1, x, 1.0, 0, 1, 0), label='Atomic String')
pl.plot(x, String(BString1, x, 1.0, 0, 1, 0), label='B String')
pl.title('Atomic and B String')
pl.legend(loc='best', numpoints=1)
pl.grid(True)
pl.show()
#This Calculates Derivative
dx = x[1] - x[0]
dydx = np.gradient(up(up1, x), dx)
dydx1 = np.gradient(String(AString1, x), dx)
pl.plot(x, up(up1, x), label='Atomic Function')
pl.plot(x, String(AString1, x, 1.0, 0, 1, 0), label='Atomic String')
pl.plot(x, dydx, label='A-Function Derivative')
pl.plot(x, dydx1, label='AString Derivative')
pl.title('Atomic Function and String')
pl.legend(loc='best', numpoints=1)
pl.grid(True)
pl.show()
# BUP Function = BString(2x+1) - BString(2x-1)
def bup1(x : float) -> float:
#res = String(BString1, x, 0.5, -0.5, 1, 0) - String(BString1, x, 0.5, +0.5, 1, 0)
res = BString1(2.*x+1) - BString1(2.*x-1)
return res
bup1(0.5)
#This Calculates Derivative
dx = x[1] - x[0]
dydx = np.gradient(up(bup1, x), dx)
dydx1 = np.gradient(String(BString1, x), dx)
pl.plot(x, up(bup1, x), label='BUP Atomic Function')
pl.plot(x, String(BString1, x, 1.0, 0, 1, 0), label='BUP Atomic String')
pl.plot(x, dydx, label='BUP-Function Derivative')
pl.plot(x, dydx1, label='BUP-String Derivative')
pl.title('BUP Atomic Function and String')
pl.legend(loc='best', numpoints=1)
pl.grid(True)
pl.show()
Explanation: Atomic String, Atomic Function and Atomic Function Derivative plotted together
End of explanation
pl.plot(x, up(up1, x, 1, -1), linewidth=1, label='Atomic Function at x=-1')
pl.plot(x, up(up1, x, 1, +0), linewidth=1, label='Atomic Function at x=0')
pl.plot(x, up(up1, x, 1, -1), linewidth=1, label='Atomic Function at x=-1')
pl.plot(x, Sum(up(up1, x, 1, -1), Sum(up(up1, x), up(up1, x, 1, 1))), linewidth=2, label='Atomic Function Compounding')
pl.title('Atomic Function Compounding represent 1')
pl.legend(loc='best', numpoints=1)
pl.grid(True)
pl.show()
Explanation: Properties of atomic function Up(x)
1) Remarkably, the Atomic Function derivative can be expressed via the Atomic Function itself - up'(x) = 2up(2x+1) - 2up(2x-1) - meaning the pulse shape of the derivative is built from shifted and stretched copies of the Atomic Function itself - a remarkable property
2) The Atomic Function pulses superposition set at points -2, -1, 0, +1, +2... can exactly represent a Unity (number 1):
1 = ... up(x-3) + up(x-2) + up(x-1) + up(x-0) + up(x+1) + up(x+2) + up(x+3) + ...
End of explanation
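# A quick numeric spot-check of property 2 (an added sketch, not in the original
# notebook): at any x in [-1, 1] the three overlapping pulses sum to exactly 1.
for xv in (-0.75, -0.3, 0.0, 0.45, 0.9):
    total = sum(pulse(up1, xv, 1, k) for k in (-1, 0, 1))
    print(f'x={xv:+.2f}  up(x+1) + up(x) + up(x-1) = {total:.4f}')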
pl.plot(x, up(bup1, x, 1, -1), linewidth=1, label='B-Atomic Function at x=-1')
pl.plot(x, up(bup1, x, 1, +0), linewidth=1, label='B-Atomic Function at x=0')
pl.plot(x, up(bup1, x, 1, -1), linewidth=1, label='b-Atomic Function at x=-1')
pl.plot(x, Sum(up(bup1, x, 1, -1), Sum(up(bup1, x), up(bup1, x, 1, 1))), linewidth=2, label='Atomic Function Compounding')
pl.title('Atomic Function Compounding represent 1')
pl.legend(loc='best', numpoints=1)
pl.grid(True)
pl.show()
### Integration of BUP
CS6 = Sum(up(bup1, x, 1, -1), Sum(up(bup1, x), up(bup1, x, 1, 1)))
pl.plot(x, CS6, label='BUP Spacetime Density')
pl.legend(loc='best', numpoints=1)
pl.grid(True)
IntC6 = np.cumsum(CS6)*dx/50
pl.plot(x, IntC6, label='Spacetime Shape')
pl.legend(loc='best', numpoints=1)
pl.grid(True)
#pl.plot(x, ABline (x, 1, 0), label='ABLine 1*x')
pl.plot(x, String(BString1, x, 1.0, -1.5, 1, -1.5), '--', linewidth=1, label='BString 1')
pl.plot(x, String(BString1, x, 1.0, -0.5, 1, -0.5), '--', linewidth=1, label='BString 2')
pl.plot(x, String(BString1, x, 1.0, +0.5, 1, +0.5), '--', linewidth=1, label='BString 3')
pl.plot(x, String(BString1, x, 1.0, +1.5, 1, +1.5), '--', linewidth=1, label='BString 4')
AS2 = Sum(String(BString1, x, 1.0, -1.5, 1, -1.5), String(BString1, x, 1.0, -0.5, 1.0, -0.5))
AS3 = Sum(AS2, String(BString1, x, 1, 0.5, 1, +0.5))
AS4 = Sum(AS3, String(BString1, x, 1,+1.5, 1, +1.5))
pl.plot(x, AS4, label='BStrings Joins', linewidth=2)
pl.title('BUP Atomic Strings Combinations')
pl.legend(loc='best', numpoints=1)
pl.grid(True)
pl.show()
Explanation: Properties of BUP atomic function BUp(x)
End of explanation
######### Presentation of Atomic Function via Atomic Strings ##########
x = np.arange(-2.0, 2.0, 0.01)
pl.plot(x, String(AString1, x, 1, 0, 1, 0), '--', linewidth=1, label='AString(x)')
pl.plot(x, String(AString1, x, 0.5, -0.5, +1, 0), '--', linewidth=2, label='+AString(2x+1)')
pl.plot(x, String(AString1, x, 0.5, +0.5, -1, 0), '--', linewidth=2, label='-AString(2x-1)')
#pl.plot(x, up(x, 1.0, 0, 1, 0), '--', linewidth=1, label='Atomic Function')
AS2 = Sum(String(AString1, x, 0.5, -0.5, +1, 0), String(AString1, x, 0.5, +0.5, -1, 0))
pl.plot(x, AS2, linewidth=3, label='Up(x) via Strings')
pl.title('Atomic Function as a Combination of AStrings')
pl.legend(loc='center left', numpoints=1)
pl.grid(True)
pl.show()
######### Presentation of BUP Atomic Function via Atomic Strings ##########
x = np.arange(-2.0, 2.0, 0.01)
pl.plot(x, String(BString1, x, 1, 0, 1, 0), '--', linewidth=1, label='BString(x)')
pl.plot(x, String(BString1, x, 0.5, -0.5, +1, 0), '--', linewidth=2, label='+BString(2x+1)')
pl.plot(x, String(BString1, x, 0.5, +0.5, -1, 0), '--', linewidth=2, label='-BString(2x-1)')
#pl.plot(x, up(x, 1.0, 0, 1, 0), '--', linewidth=1, label='Atomic Function')
AS2 = Sum(String(BString1, x, 0.5, -0.5, +1, 0), String(BString1, x, 0.5, +0.5, -1, 0))
pl.plot(x, AS2, linewidth=3, label='BUp(x) via Strings')
pl.title('B-Atomic Function as a Combination of B-Strings')
pl.legend(loc='center left', numpoints=1)
pl.grid(True)
pl.show()
Explanation: Atomic String is a generalisation of an Atomic Function
1) AString is a swing-like function - the integral of the Atomic Function (AF) - which can be expressed via AF itself:
AString(x) = Integral(0,x)(Up(x)) = Up(x/2 - 1/2) - 1/2
2) Atomic Function can be represented via simple superposition of Atomic Strings:
up(x) = AString(2x + 1) - AString(2x - 1)
End of explanation
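# Numeric spot-check of the identity up(x) = AString(2x+1) - AString(2x-1)
# (an added sketch, not in the original notebook), using the same pulse
# parameters as the combination plotted in the next cell:
for xv in (-0.75, -0.25, 0.25, 0.75):
    lhs = pulse(up1, xv)
    rhs = AStringPulse(xv, 0.5, -0.5, 1, 0) + AStringPulse(xv, 0.5, 0.5, -1, 0)
    print(f'x={xv:+.2f}  up(x)={lhs:.4f}  AString(2x+1) - AString(2x-1)={rhs:.4f}')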
x = np.arange(-40.0, 40.0, 0.01)
#pl.plot(x, ABline (x, 1, 0), label='ABLine 1*x')
pl.plot(x, AString(x, 10.0,-15, 10, -15), '--', linewidth=1, label='AString 1')
pl.plot(x, AString(x, 10.0, -5, 10, -5), '--', linewidth=1, label='AString 2')
pl.plot(x, AString(x, 10.0, +5, 10, +5), '--', linewidth=1, label='AString 3')
pl.plot(x, AString(x, 10.0,+15, 10, +15), '--', linewidth=1, label='AString 4')
AS2 = Sum(AString(x, 10.0, -15, 10, -15), AString(x, 10., -5, 10, -5))
AS3 = Sum(AS2, AString(x, 10, +5, 10, +5))
AS4 = Sum(AS3, AString(x, 10,+15, 10, +15))
pl.plot(x, AS4, label='AStrings Joins', linewidth=2)
pl.title('Atomic Strings Combinations')
pl.legend(loc='best', numpoints=1)
pl.grid(True)
pl.show()
x = np.arange(-40.0, 40.0, 0.01)
#pl.plot(x, ABline (x, 1, 0), label='ABLine 1*x')
pl.plot(x, String(BString1, x, 10.0,-15, 10, -15), '--', linewidth=1, label='BString 1')
pl.plot(x, String(BString1, x, 10.0, -5, 10, -5), '--', linewidth=1, label='BString 2')
pl.plot(x, String(BString1, x, 10.0, +5, 10, +5), '--', linewidth=1, label='BString 3')
pl.plot(x, String(BString1, x, 10.0,+15, 10, +15), '--', linewidth=1, label='BString 4')
AS2 = Sum(String(BString1, x, 10.0, -15, 10, -15), String(BString1, x, 10., -5, 10, -5))
AS3 = Sum(AS2, String(BString1, x, 10, +5, 10, +5))
AS4 = Sum(AS3, String(BString1, x, 10,+15, 10, +15))
pl.plot(x, AS4, label='BStrings Joins', linewidth=2)
pl.title('BUP Atomic Strings Combinations')
pl.legend(loc='best', numpoints=1)
pl.grid(True)
pl.show()
Explanation: 3) All derivatives of AString can be represented via AString itself:
AString'(x) = AString(2x + 1) - AString(2x - 1)
4) Combination of Atomic Strings can exactly represent a straight line:
x = AString(x) + AString(x+1) + AString(x+2)...
End of explanation
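# A quick numeric check of property 4 (an added sketch, not in the original text):
# summing unit-width AString pulses shifted to the integer points reproduces y = x
# on the interval covered by the pulses.
for xv in np.arange(-1.0, 1.01, 0.25):
    total = sum(AStringPulse(xv, 1, b, 1, b) for b in range(-3, 4))
    print(f'x={xv:+.2f}  sum of shifted AStrings={total:+.3f}')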
x = np.arange(-30.0, 30.0, 0.01)
#pl.plot(x, ABline (x, 1, 0), label='ABLine 1*x')
pl.plot(x, AString(x, 10.0,-15, 10, -15), '--', linewidth=1, label='AString Quantum 1')
pl.plot(x, AString(x, 10.0, -5, 10, -5), '--', linewidth=1, label='AString Quantum 2')
pl.plot(x, AString(x, 10.0, +5, 10, +5), '--', linewidth=1, label='AString Quantum 3')
pl.plot(x, AString(x, 10.0,+15, 10, +15), '--', linewidth=1, label='AString Quantum 4')
AS2 = Sum(AString(x, 10.0, -15, 10, -15), AString(x, 10., -5, 10, -5))
AS3 = Sum(AS2, AString(x, 10, +5, 10, +5))
AS4 = Sum(AS3, AString(x, 10,+15, 10, +15))
pl.plot(x, AS4, label='Spacetime Dimension', linewidth=2)
pl.title('Representing Spacetime by joining of Atomic Strings')
pl.legend(loc='best', numpoints=1)
pl.grid(True)
pl.show()
Explanation: Representing of flat Spacetime Fabric by joining of Atomic Strings Quanta (Metriants)
End of explanation
x = np.arange(-50.0, 50.0, 0.1)
dx = x[1] - x[0]
CS6 = Sum(up(up1, x, 5, -30, 5, 5), up(up1, x, 15, 0, 15, 5))
CS6 = Sum(CS6, up(up1, x, 10, +30, 10, 5))
pl.plot(x, CS6, label='Spacetime Density')
IntC6 = np.cumsum(CS6)*dx/50
pl.plot(x, IntC6, label='Spacetime Shape')
DerC6 = np.gradient(CS6, dx)
pl.plot(x, DerC6, label='Spacetime Curvature')
LightTrajectory = -10 -IntC6/5
pl.plot(x, LightTrajectory, label='Light Trajectory')
pl.title('Fabric of Curved Spacetime')
pl.legend(loc='best', numpoints=1)
pl.grid(True)
pl.show()
x = np.arange(-50.0, 50.0, 0.1)
dx = x[1] - x[0]
CS6 = Sum(up(bup1, x, 5, -30, 5, 5), up(bup1, x, 15, 0, 15, 5))
CS6 = Sum(CS6, up(bup1, x, 10, +30, 10, 5))
pl.plot(x, CS6, label='Spacetime Density')
IntC6 = np.cumsum(CS6)*dx/50
pl.plot(x, IntC6, label='Spacetime Shape')
DerC6 = np.gradient(CS6, dx)
pl.plot(x, DerC6, label='Spacetime Curvature')
LightTrajectory = -10 -IntC6/5
pl.plot(x, LightTrajectory, label='Light Trajectory')
pl.title('Fabric of Curved Spacetime')
pl.legend(loc='best', numpoints=1)
pl.grid(True)
pl.show()
Explanation: Schematic model of Gravitation explaining General Relativity effects where spacetime Shape, Density and Curvature are deeply related being expressed via the same AString or Atomic Function
End of explanation
def integrate(a, b, N = 100):
t = np.linspace(a + (b-a)/(2*N), b- (b-a)/(2*N), N) # Central Points of subintervals
# fx = f(x)
fx = []
for i in range(len(t)):
# fx.append(pulse(up1, t[i]))
fx.append(pulse(bup1, t[i]))
#print (i, t[i], fx[i])
area = np.sum(fx) * (b-a)/N
return area
integrate(-1, 1)
Explanation: Apart from standard Python code, this script and material is the intellectual property of Professor Sergei Yu. Eremenko (https://au.linkedin.com/in/sergei-eremenko-3862079). You may not reproduce, edit, translate, distribute, publish or host this document in any way without the permission of Professor Eremenko.
Integration of BStrings
End of explanation |
13,894 |
Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
how to implement this loss function in TensorFlow using the Keras losses module
| Python Code::
import tensorflow as tf
from tensorflow.keras.losses import MeanAbsoluteError
y_true = [1., 0.]
y_pred = [2., 3.]
mae_loss = MeanAbsoluteError()
loss = mae_loss(y_true, y_pred).numpy()
|
13,895 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
for Loops
A for loop acts as an iterator in Python: it goes through the items that are in a sequence or any other iterable object. Objects that we've learned about that we can iterate over include strings, lists, tuples, and even built-in iterables for dictionaries, such as the keys or values.
We've already seen the for statement a little bit in past lectures but now lets formalize our understanding.
Here's the general format for a for loop in Python
Step1: Great! Hopefully this makes sense. Now let's add an if statement to check for even numbers. We'll first introduce a new concept here--the modulo.
Modulo
The modulo allows us to get the remainder in a division and uses the % symbol. For example
Step2: This makes sense since 17 divided by 5 is 3 remainder 2. Let's see a few more quick examples
Step3: Notice that if a number is fully divisible with no remainder, the result of the modulo call is 0. We can use this to test for even numbers, since if a number modulo 2 is equal to 0, that means it is an even number!
Back to the for loops!
Example 2
Let's print only the even numbers from that list!
Step4: We could also have put an else statement in there
Step5: Example 3
Another common idea during a for loop is keeping some sort of running tally during the multiple loops. For example, let's create a for loop that sums up the list
Step6: Great! Read over the above cell and make sure you understand fully what is going on. Also, we could have used += to do the addition toward the sum. For example
Step7: Example 4
We've used for loops with lists, how about with strings? Remember strings are a sequence so when we iterate through them we will be accessing each item in that string.
Step8: Example 5
Let's now look at how a for loop can be used with a tuple
Step9: Example 6
Tuples have a special quality when it comes to for loops. If you are iterating through a seqeunce that contains tuples, the item can actually be the tuple itself, this is an example of tuple unpacking. During the for loop we will be unpacking the tuple inside of a sequence and we can access the individual items inside that tuple!
Step10: Cool! With tuples in a sequence we can access the items inside of them through unpacking! The reason this is important is beacause many object will deliver their iterables through tuples. Let's start exploring iterating through Dictionaries to explore this further!
Example 7
Step11: Notice how this produces only the keys. So how can we get the values? Or both the keys and the values?
Here is where we are going to have a Python 3 Alert!
<font color='red'>Python 3 Alert!</font>
Python 2
Step12: Calling the items() method returns a list of tuples. Now we can iterate through them just as we did in the previous examples.
Step13: Python 3 | Python Code:
# We'll learn how to automate this sort of list in the next lecture
l = [1,2,3,4,5,6,7,8,9,10]
for num in l:
print num
Explanation: for Loops
A for loop acts as an iterator in Python: it goes through the items that are in a sequence or any other iterable object. Objects that we've learned about that we can iterate over include strings, lists, tuples, and even built-in iterables for dictionaries, such as the keys or values.
We've already seen the for statement a little bit in past lectures, but now let's formalize our understanding.
Here's the general format for a for loop in Python:
for item in object:
statements to do stuff
The variable name used for the item is completely up to the coder, so use your best judgement and choose a name that makes sense and that you will be able to understand when revisiting your code. This item name can then be referenced inside your loop, for example if you wanted to use if statements to perform checks.
Let's go ahead and work through several examples of for loops using a variety of data object types. We'll start simple and build more complexity later on.
Example 1
Iterating through a list.
End of explanation
17 % 5
Explanation: Great! Hopefully this makes sense. Now let's add an if statement to check for even numbers. We'll first introduce a new concept here--the modulo.
Modulo
The modulo allows us to get the remainder in a division and uses the % symbol. For example:
End of explanation
# 3 Remainder 1
10 % 3
# 2 Remainder 4
18 % 7
# 2 no remainder
4 % 2
Explanation: This makes sense since 17 divided by 5 is 3 remainder 2. Let's see a few more quick examples:
End of explanation
for num in l:
if num % 2 == 0:
print num
Explanation: Notice that if a number is fully divisible with no remainder, the result of the modulo call is 0. We can use this to test for even numbers, since if a number modulo 2 is equal to 0, that means it is an even number!
Back to the for loops!
Example 2
Let's print only the even numbers from that list!
End of explanation
for num in l:
if num % 2 == 0:
print num
else:
print 'Odd number'
Explanation: We could also have put an else statement in there:
End of explanation
# Start sum at zero
list_sum = 0
for num in l:
list_sum = list_sum + num
print list_sum
Explanation: Example 3
Another common idea during a for loop is keeping some sort of running tally during the multiple loops. For example, let's create a for loop that sums up the list:
End of explanation
# Start sum at zero
list_sum = 0
for num in l:
list_sum += num
print list_sum
Explanation: Great! Read over the above cell and make sure you understand fully what is going on. Also, we could have used += to do the addition toward the sum. For example:
End of explanation
for letter in 'This is a string.':
print letter
Explanation: Example 4
We've used for loops with lists; how about with strings? Remember, strings are a sequence, so when we iterate through them we will be accessing each item in that string.
End of explanation
tup = (1,2,3,4,5)
for t in tup:
print t
Explanation: Example 5
Let's now look at how a for loop can be used with a tuple:
End of explanation
l = [(2,4),(6,8),(10,12)]
for tup in l:
print tup
# Now with unpacking!
for (t1,t2) in l:
print t1
Explanation: Example 6
Tuples have a special quality when it comes to for loops. If you are iterating through a sequence that contains tuples, the item can actually be the tuple itself; this is an example of tuple unpacking. During the for loop we will be unpacking the tuple inside of a sequence and we can access the individual items inside that tuple!
End of explanation
d = {'k1':1,'k2':2,'k3':3}
for item in d:
print item
Explanation: Cool! With tuples in a sequence we can access the items inside of them through unpacking! The reason this is important is because many objects will deliver their iterables through tuples. Let's start exploring iterating through dictionaries to explore this further!
Example 7
End of explanation
# Creates a generator
d.iteritems()
Explanation: Notice how this produces only the keys. So how can we get the values? Or both the keys and the values?
Here is where we are going to have a Python 3 Alert!
<font color='red'>Python 3 Alert!</font>
Python 2: Use .iteritems() to iterate through
In Python 2 you should use .iteritems() to iterate through the keys and values of a dictionary. This basically creates a generator (we will get into generators later on in the course) that will generate the keys and values of your dictionary. Let's see it in action:
End of explanation
# Create a generator
for k,v in d.iteritems():
print k
print v
Explanation: Calling the items() method returns a list of tuples. Now we can iterate through them just as we did in the previous examples.
End of explanation
# For Python 3
for k,v in d.items():
print(k)
print(v)
Explanation: Python 3: items()
In Python 3 you should use .items() to iterate through the keys and values of a dictionary. For example:
End of explanation |
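# A small addition (not in the original lecture): you can also loop over just the
# values with .values(), which works the same way in Python 2 and Python 3.
for v in d.values():
    print(v)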
13,896 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: In this exercise we will decode orientation using data collected for the Cognitive Neuroscience module in 2017. The subject performed a task that manipulated whether attention was placed towards the left or right visual field, or with no attentional focus (control condition). The stimulus was two gabor patches left and right of fixation flickering at 5 Hz, with the following timing
Step4: Classification analysis
Now let's fit a classifier using balanced 8-fold crossvalidation. For now we only include attention trials. We will fit the classifier at each time point along the trial timecourse. We use a nested crossvalidation loop to determine the classifier parameters for each dataset.
Step5: Plot the results
Step6: Now let's run it with the labels shuffled 100 times to see how good these results are compared to chance. This will take a little while to complete. For a real analysis one would want to do this many more times (up to ~5000) in order for the distribution of extreme values to stabilize.
Step7: Now we plot those results alongside the true classification results, adding an asterisk at the timepoints where the observed accuracy is greater than the 99th percentile of the random accuracies. | Python Code:
import os,json,glob,pickle
import numpy,pandas
import nibabel
import sklearn.multiclass
from sklearn.svm import SVC
import sklearn.metrics
import sklearn.model_selection
import sklearn.preprocessing
import scipy.stats,scipy.io
import random
import seaborn
%matplotlib inline
import matplotlib.pyplot as plt
datadir='data'
print('using data from %s'%datadir)
lv1_ts=scipy.io.loadmat(os.path.join(datadir,'lv1_tseries.mat'))['lv1']
rv1_ts=scipy.io.loadmat(os.path.join(datadir,'rv1_tseries.mat'))['rv1']
# scale the data so that we don't need to bother with intercept in the model
lv1_ts=sklearn.preprocessing.scale(lv1_ts.T)
rv1_ts=sklearn.preprocessing.scale(rv1_ts.T)
tsdata={'leftV1':lv1_ts,'rightV1':rv1_ts}
desmtx=scipy.io.loadmat(os.path.join(datadir,'design.mat'))['design']
labels=desmtx[:,0]
print(labels)
ntrials=desmtx.shape[0]
ntp,nvox=lv1_ts.shape
print(ntrials,'trials')
print(nvox,'voxels')
print(ntp,'timepoints')
lv1_ts.shape
# Reproduce the deconvolution analysis using an FIR model
# the onset times are in volumes, so we just use tr=1
# use 20-second window
def make_fir_model(onsets,tslength,hrflength=48,tr=1):
generate an FIR model design matrix
this only works for a single condition
X=numpy.zeros((tslength,int(hrflength/tr)))
for i in range(hrflength):
for o in onsets:
try:
X[o+i,i]=1
except:
pass
return X
desmtx_df=pandas.DataFrame(desmtx,columns=['condition','onset'])
onsets={}
onsets['neutral']=desmtx_df.query('condition==0').onset.values
onsets['attendleft']=desmtx_df.query('condition==1').onset.values
onsets['attendright']=desmtx_df.query('condition==2').onset.values
left_fir=make_fir_model(onsets['attendleft'],ntp)
right_fir=make_fir_model(onsets['attendright'],ntp)
neutral_fir=make_fir_model(onsets['neutral'],ntp)
fir=numpy.hstack((left_fir,right_fir,neutral_fir))
# show the design matrix
plt.imshow(fir[:400,:])
plt.axis('auto')
print(fir.shape)
# estimate the model
beta_hat_left=numpy.linalg.inv(fir.T.dot(fir)).dot(fir.T).dot(lv1_ts)
beta_hat_right=numpy.linalg.inv(fir.T.dot(fir)).dot(fir.T).dot(rv1_ts)
plt.figure(figsize=(12,6))
plt.subplot(1,2,1)
plt.plot(beta_hat_left[:48].mean(1))
plt.plot(beta_hat_left[48:96].mean(1))
plt.plot(beta_hat_left[96:144].mean(1))
plt.legend(['attend left','attend right','neutral'])
plt.title('Left V1')
plt.subplot(1,2,2)
plt.plot(beta_hat_right[:48].mean(1))
plt.plot(beta_hat_right[48:96].mean(1))
plt.plot(beta_hat_right[96:144].mean(1))
plt.legend(['attend left','attend right','neutral'])
plt.title('Right V1')
pred_left=fir.dot(beta_hat_left)
# check fit of the model over first 500 timepoints
plt.figure(figsize=(14,4))
plt.plot(sklearn.preprocessing.scale(lv1_ts.mean(1)[:500]))
plt.plot(sklearn.preprocessing.scale(rv1_ts.mean(1)[:500]))
meanpred=sklearn.preprocessing.scale(pred_left.mean(1))
plt.plot(meanpred[:500])
pred_left.mean(1).shape
Explanation: In this exercise we will decode orientation using data collected for the Cognitive Neuroscience module in 2017. The subject performed a task that manipulated whether attention was placed towards the left or right visual field, or with no attentional focus (control condition). The stimulus was two gabor patches left and right of fixation flickering at 5 Hz, with the following timing:
fixate: 500 ms
task cue: 500 ms
ISI: 1000 ms
stimulus: 4000 ms
change+resp: 1500 ms
var ITI: uniform distribution between 2500 and 9500 ms
Notes about the data files (from Dan):
v1_tseries are the time series files, as voxel * volume matrices
v1_r2 are the variance explained per voxel by the FIR model with three conditions for task=0/1/2
design is a long form matrix (rows are individual events, first column are volumes and second column trial type) indicating the volume at which the different trial types occurred, 0 = neutral task (press button when stimulus cross changes color), 1 = attend left side and detect the direction of rotation, 2 = attend right side and detect the direction of rotation
Load data
First we load the data files.
End of explanation
def run_classifier(data,labels, shuffle=False,nfolds=8,scale=True,
clf=None):
run classifier for a single dataset
features=data
if scale:
features=sklearn.preprocessing.scale(features)
if shuffle:
numpy.random.shuffle(labels)
if not clf:
clf=sklearn.svm.SVC(C=1.0)  # default regularization strength
skf = sklearn.model_selection.StratifiedKFold(5,shuffle=True)
pred=numpy.zeros(labels.shape[0])
for train, test in skf.split(features,labels):
clf.fit(features[train,:],labels[train])
pred[test]=clf.predict(features[test,:])
acc=sklearn.metrics.accuracy_score(labels, pred)
return acc
def get_accuracy_timeseries(tsdata,labels_attend,onsets,shuffle=False,clf=None,window=40,
voxels=None):
iterate over timepoints
acc=numpy.zeros(window)
for tp in range(window):
# pull out data for each trial/timepoint
if voxels is None:
data=numpy.zeros((len(labels_attend),tsdata['leftV1'].shape[1] + tsdata['rightV1'].shape[1]))
else:
data=numpy.zeros((len(labels_attend),tsdata[voxels+'V1'].shape[1]))
ctr=0
for cond in ['attendleft','attendright']:
for ons in onsets[cond]:
if voxels is None:
data[ctr,:]=numpy.hstack((tsdata['leftV1'][ons+tp,:],tsdata['rightV1'][ons+tp,:]))
else:
data[ctr,:]=tsdata[voxels+'V1'][ons+tp,:]
ctr+=1
acc[tp]=run_classifier(data,labels_attend,clf=clf,shuffle=shuffle)
return acc
labels_attend=numpy.array([i for i in labels if i > 0])
#clf=sklearn.linear_model.LogisticRegressionCV(penalty='l1',solver='liblinear')
#clf=sklearn.svm.SVC(C=1)
tuned_parameters = [{'C': [0.0005,0.001,0.005,0.01,0.05, 0.1]}]
clf = sklearn.model_selection.GridSearchCV(sklearn.svm.LinearSVC(C=1), tuned_parameters, cv=5)
acc_all=get_accuracy_timeseries(tsdata,labels_attend,onsets,clf=clf)
acc_left=get_accuracy_timeseries(tsdata,labels_attend,onsets,voxels='left',clf=clf)
acc_right=get_accuracy_timeseries(tsdata,labels_attend,onsets,voxels='right',clf=clf)
Explanation: Classification analysis
Now let's fit a classifier using balanced 8-fold crossvalidation. For now we only include attention trials. We will fit the classifier at each time point along the trial timecourse. We use a nested crossvalidation loop to determine the classifier parameters for each dataset.
End of explanation
plt.figure(figsize=(14,5))
plt.subplot(1,3,1)
plt.plot(numpy.arange(0,20,0.5),acc_all)
plt.axis([0,20,0,1])
plt.plot([0,20],[0.5,0.5],'k--')
plt.title('All voxels')
plt.xlabel('Time (seconds)')
plt.ylabel('Percent classification accuracy')
plt.subplot(1,3,2)
plt.plot(numpy.arange(0,20,0.5),acc_left)
plt.axis([0,20,0,1])
plt.plot([0,20],[0.5,0.5],'k--')
plt.title('Left V1')
plt.xlabel('Time (seconds)')
plt.ylabel('Percent classification accuracy')
plt.subplot(1,3,3)
plt.plot(numpy.arange(0,20,0.5),acc_right)
plt.axis([0,20,0,1])
plt.plot([0,20],[0.5,0.5],'k--')
plt.title('Right V1')
plt.xlabel('Time (seconds)')
plt.ylabel('Percent classification accuracy')
Explanation: Plot the results
End of explanation
# if the saved results already exist then just reload them, to save time
if os.path.exists('shuffled_accuracy.pkl'):
print('loading existing shuffled data')
acc_all_rand,acc_left_rand,acc_right_rand,clf=pickle.load(open('shuffled_accuracy.pkl','rb'))
else:
acc_all_rand=numpy.zeros((100,40))
acc_left_rand=numpy.zeros((100,40))
acc_right_rand=numpy.zeros((100,40))
for i in range(100):
print(i)
acc_all_rand[i,:]=get_accuracy_timeseries(tsdata,labels_attend,onsets,shuffle=True,clf=clf)
acc_left_rand[i,:]=get_accuracy_timeseries(tsdata,labels_attend,onsets,voxels='left',shuffle=True,clf=clf)
acc_right_rand[i,:]=get_accuracy_timeseries(tsdata,labels_attend,onsets,voxels='right',shuffle=True,clf=clf)
pickle.dump((acc_all_rand,acc_left_rand,acc_right_rand,clf),open('shuffled_accuracy.pkl','wb'))
Explanation: Now let's run it with the labels shuffled 100 times to see how good these results are compared to chance. This will take a little while to complete. For a real analysis one would want to do this many more times (up to ~5000) in order for the distribution of extreme values to stabilize.
End of explanation
rand_percentile=(1 - 0.05/40)*100 # percent cutoff for randomization, bonferroni corrected
nrand=acc_all_rand.shape[0]
plt.figure(figsize=(14,5))
plt.subplot(1,3,1)
plt.plot(numpy.arange(0,20,0.5),acc_all)
plt.axis([0,20,0,1])
plt.plot([0,20],[0.5,0.5],'k--')
plt.title('All voxels')
plt.xlabel('Time (seconds)')
plt.ylabel('Percent classification accuracy')
for i in range(nrand):
plt.plot(numpy.arange(0,20,0.5),acc_all_rand[i,:],'r',linewidth=0.01)
cutoff=numpy.zeros(40)
for i in range(40):
cutoff[i]=scipy.stats.scoreatpercentile(acc_all_rand[:,i],rand_percentile)
if acc_all[i]>cutoff[i]:
plt.text(i/2,0.9,'*')
plt.plot(numpy.arange(0,20,0.5),cutoff,'g--')
plt.subplot(1,3,2)
plt.plot(numpy.arange(0,20,0.5),acc_left)
plt.axis([0,20,0,1])
plt.plot([0,20],[0.5,0.5],'k--')
plt.title('Left V1')
plt.xlabel('Time (seconds)')
plt.ylabel('Percent classification accuracy')
for i in range(nrand):
plt.plot(numpy.arange(0,20,0.5),acc_left_rand[i,:],'r',linewidth=0.01)
cutoff=numpy.zeros(40)
for i in range(40):
cutoff[i]=scipy.stats.scoreatpercentile(acc_left_rand[:,i],rand_percentile)
if acc_left[i]>cutoff[i]:
plt.text(i/2,0.9,'*')
plt.plot(numpy.arange(0,20,0.5),cutoff,'g--')
plt.subplot(1,3,3)
plt.plot(numpy.arange(0,20,0.5),acc_right)
plt.axis([0,20,0,1])
plt.plot([0,20],[0.5,0.5],'k--')
plt.title('Right V1')
plt.xlabel('Time (seconds)')
plt.ylabel('Percent classification accuracy')
for i in range(nrand):
plt.plot(numpy.arange(0,20,0.5),acc_right_rand[i,:],'r',linewidth=0.01)
cutoff=numpy.zeros(40)
for i in range(40):
cutoff[i]=scipy.stats.scoreatpercentile(acc_right_rand[:,i],rand_percentile)
if acc_right[i]>cutoff[i]:
plt.text(i/2,0.9,'*')
plt.plot(numpy.arange(0,20,0.5),cutoff,'g--')
Explanation: Now we plot those results alongside the true classification results, adding an asterisk at the timepoints where the observed accuracy exceeds the cutoff from the permutation distribution (the Bonferroni-corrected percentile used in the code, (1 - 0.05/40)*100, i.e. roughly the 99.9th percentile of the random accuracies).
End of explanation |
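# An optional follow-up sketch (not part of the original analysis): turn the
# permutation distribution into per-timepoint p-values for the all-voxel model.
pvals = numpy.array([(numpy.sum(acc_all_rand[:, i] >= acc_all[i]) + 1) /
                     (acc_all_rand.shape[0] + 1) for i in range(acc_all.shape[0])])
print('minimum permutation p-value:', pvals.min())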
13,897 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<img src="../Pierian-Data-Logo.PNG">
<br>
<strong><center>Copyright 2019. Created by Jose Marcial Portilla.</center></strong>
Full Artificial Neural Network Code Along - CLASSIFICATION
In the last section we took in four continuous variables (lengths) to perform a classification. In this section we'll combine continuous and categorical data to perform a similar classification. The goal is to estimate the relative cost of a New York City cab ride from several inputs. The inspiration behind this code along is a recent <a href='https
Step1: Load the NYC Taxi Fares dataset
The <a href='https
Step3: Conveniently, 2/3 of the data have fares under \$10, and 1/3 have fares \$10 and above.
Fare classes correspond to fare amounts as follows
Step4: Add a datetime column and derive useful statistics
By creating a datetime object, we can extract information like "day of the week", "am vs. pm" etc.
Note that the data was saved in UTC time. Our data falls in April of 2010 which occurred during Daylight Savings Time in New York. For that reason, we'll make an adjustment to EDT using UTC-4 (subtracting four hours).
Step5: Separate categorical from continuous columns
Step6: <div class="alert alert-info"><strong>NOTE
Step7: We can see that <tt>df['Hour']</tt> is a categorical feature by displaying some of the rows
Step8: Here our categorical names are the integers 0 through 23, for a total of 24 unique categories. These values <em>also</em> correspond to the codes assigned to each name.
We can access the category names with <tt>Series.cat.categories</tt> or just the codes with <tt>Series.cat.codes</tt>. This will make more sense if we look at <tt>df['AMorPM']</tt>
Step9: <div class="alert alert-info"><strong>NOTE
Step10: <div class="alert alert-info"><strong>NOTE
Step11: We can feed all of our continuous variables into the model as a tensor. We're not normalizing the values here; we'll let the model perform this step.
<div class="alert alert-info"><strong>NOTE
Step12: Note
Step13: Set an embedding size
The rule of thumb for determining the embedding size is to divide the number of unique entries in each column by 2, but not to exceed 50.
Step14: Define a TabularModel
This somewhat follows the <a href='https
Step15: <div class="alert alert-danger"><strong>This is how the categorical embeddings are passed into the layers.</strong></div>
Step16: Define loss function & optimizer
For our classification we'll replace the MSE loss function with <a href='https
Step17: Perform train/test splits
At this point our batch size is the entire dataset of 120,000 records. To save time we'll use the first 60,000. Recall that our tensors are already randomly shuffled.
Step18: Train the model
Expect this to take 30 minutes or more! We've added code to tell us the duration at the end.
Step19: Plot the loss function
Step20: Validate the model
Step21: Now let's look at the first 50 predicted values
Step22: Save the model
Save the trained model to a file in case you want to come back later and feed new data through it.
Step23: Loading a saved model (starting from scratch)
We can load the trained weights and biases from a saved model. If we've just opened the notebook, we'll have to run standard imports and function definitions. To demonstrate, restart the kernel before proceeding.
Step24: Now define the model. Before we can load the saved settings, we need to instantiate our TabularModel with the parameters we used before (embedding sizes, number of continuous columns, output size, layer sizes, and dropout layer p-value).
Step25: Once the model is set up, loading the saved settings is a snap.
Step26: Next we'll define a function that takes in new parameters from the user, performs all of the preprocessing steps above, and passes the new data through our trained model.
Step27: Feed new data through the trained model
For convenience, here are the max and min values for each of the variables | Python Code:
import torch
import torch.nn as nn
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: <img src="../Pierian-Data-Logo.PNG">
<br>
<strong><center>Copyright 2019. Created by Jose Marcial Portilla.</center></strong>
Full Artificial Neural Network Code Along - CLASSIFICATION
In the last section we took in four continuous variables (lengths) to perform a classification. In this section we'll combine continuous and categorical data to perform a similar classification. The goal is to estimate the relative cost of a New York City cab ride from several inputs. The inspiration behind this code along is a recent <a href='https://www.kaggle.com/c/new-york-city-taxi-fare-prediction'>Kaggle competition</a>.
<div class="alert alert-success"><strong>NOTE:</strong> This notebook differs from the previous regression notebook in that it uses <tt>'fare_class'</tt> for the <tt><strong>y</strong></tt> set, and the output contains two values instead of one. In this exercise we're training our model to perform a binary classification, and predict whether a fare is greater or less than $10.00.</div>
Working with tabular data
Deep learning with neural networks is often associated with sophisticated image recognition, and in upcoming sections we'll train models based on properties like pixel patterns and colors.
Here we're working with tabular data (spreadsheets, SQL tables, etc.) with columns of values that may or may not be relevant. As it happens, neural networks can learn to make connections we probably wouldn't have developed on our own. However, to do this we have to handle categorical values separately from continuous ones. Make sure to watch the theory lectures! You'll want to be comfortable with:
* continuous vs. categorical values
* embeddings
* batch normalization
* dropout layers
Perform standard imports
End of explanation
df = pd.read_csv('../Data/NYCTaxiFares.csv')
df.head()
df['fare_class'].value_counts()
Explanation: Load the NYC Taxi Fares dataset
The <a href='https://www.kaggle.com/c/new-york-city-taxi-fare-prediction'>Kaggle competition</a> provides a dataset with about 55 million records. The data contains only the pickup date & time, the latitude & longitude (GPS coordinates) of the pickup and dropoff locations, and the number of passengers. It is up to the contest participant to extract any further information. For instance, does the time of day matter? The day of the week? How do we determine the distance traveled from pairs of GPS coordinates?
For this exercise we've whittled the dataset down to just 120,000 records from April 11 to April 24, 2010. The records are randomly sorted. We'll show how to calculate distance from GPS coordinates, and how to create a pandas datatime object from a text column. This will let us quickly get information like day of the week, am vs. pm, etc.
Let's get started!
End of explanation
def haversine_distance(df, lat1, long1, lat2, long2):
Calculates the haversine distance between 2 sets of GPS coordinates in df
r = 6371 # average radius of Earth in kilometers
phi1 = np.radians(df[lat1])
phi2 = np.radians(df[lat2])
delta_phi = np.radians(df[lat2]-df[lat1])
delta_lambda = np.radians(df[long2]-df[long1])
a = np.sin(delta_phi/2)**2 + np.cos(phi1) * np.cos(phi2) * np.sin(delta_lambda/2)**2
c = 2 * np.arctan2(np.sqrt(a), np.sqrt(1-a))
d = (r * c) # in kilometers
return d
df['dist_km'] = haversine_distance(df,'pickup_latitude', 'pickup_longitude', 'dropoff_latitude', 'dropoff_longitude')
df.head()
Explanation: Conveniently, 2/3 of the data have fares under \$10, and 1/3 have fares \$10 and above.
Fare classes correspond to fare amounts as follows:
<table style="display: inline-block">
<tr><th>Class</th><th>Values</th></tr>
<tr><td>0</td><td>< \$10.00</td></tr>
<tr><td>1</td><td>>= \$10.00</td></tr>
</table>
Calculate the distance traveled
The <a href='https://en.wikipedia.org/wiki/Haversine_formula'>haversine formula</a> calculates the distance on a sphere between two sets of GPS coordinates.<br>
Here we assign latitude values with $\varphi$ (phi) and longitude with $\lambda$ (lambda).
The distance formula works out to
${\displaystyle d=2r\arcsin \left({\sqrt {\sin ^{2}\left({\frac {\varphi _{2}-\varphi _{1}}{2}}\right)+\cos(\varphi _{1})\:\cos(\varphi _{2})\:\sin ^{2}\left({\frac {\lambda _{2}-\lambda _{1}}{2}}\right)}}\right)}$
where
$\begin{split} r&: \textrm {radius of the sphere (Earth's radius averages 6371 km)}\\
\varphi_1, \varphi_2&: \textrm {latitudes of point 1 and point 2}\\
\lambda_1, \lambda_2&: \textrm {longitudes of point 1 and point 2}\end{split}$
End of explanation
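# Quick sanity check of the helper (an added sketch, not part of the original
# lesson): one degree of latitude is roughly 111 km.
check = pd.DataFrame({'lat1': [40.0], 'lon1': [-74.0], 'lat2': [41.0], 'lon2': [-74.0]})
haversine_distance(check, 'lat1', 'lon1', 'lat2', 'lon2')  # ~111.19 km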
df['EDTdate'] = pd.to_datetime(df['pickup_datetime'].str[:19]) - pd.Timedelta(hours=4)
df['Hour'] = df['EDTdate'].dt.hour
df['AMorPM'] = np.where(df['Hour']<12,'am','pm')
df['Weekday'] = df['EDTdate'].dt.strftime("%a")
df.head()
df['EDTdate'].min()
df['EDTdate'].max()
Explanation: Add a datetime column and derive useful statistics
By creating a datetime object, we can extract information like "day of the week", "am vs. pm" etc.
Note that the data was saved in UTC time. Our data falls in April of 2010 which occurred during Daylight Savings Time in New York. For that reason, we'll make an adjustment to EDT using UTC-4 (subtracting four hours).
End of explanation
df.columns
cat_cols = ['Hour', 'AMorPM', 'Weekday']
cont_cols = ['pickup_latitude', 'pickup_longitude', 'dropoff_latitude', 'dropoff_longitude', 'passenger_count', 'dist_km']
y_col = ['fare_class'] # this column contains the labels
Explanation: Separate categorical from continuous columns
End of explanation
# Convert our three categorical columns to category dtypes.
for cat in cat_cols:
df[cat] = df[cat].astype('category')
df.dtypes
Explanation: <div class="alert alert-info"><strong>NOTE:</strong> If you plan to use all of the columns in the data table, there's a shortcut to grab the remaining continuous columns:<br>
<pre style='background-color:rgb(217,237,247)'>cont_cols = [col for col in df.columns if col not in cat_cols + y_col]</pre>
Here we entered the continuous columns explicitly because there are columns we're not running through the model (fare_amount and EDTdate)</div>
Categorify
Pandas offers a <a href='https://pandas.pydata.org/pandas-docs/stable/user_guide/categorical.html'><strong>category dtype</strong></a> for converting categorical values to numerical codes. A dataset containing months of the year will be assigned 12 codes, one for each month. These will usually be the integers 0 to 11. Pandas replaces the column values with codes, and retains an index list of category values. In the steps ahead we'll call the categorical values "names" and the encodings "codes".
End of explanation
df['Hour'].head()
Explanation: We can see that <tt>df['Hour']</tt> is a categorical feature by displaying some of the rows:
End of explanation
df['AMorPM'].head()
df['AMorPM'].cat.categories
df['AMorPM'].head().cat.codes
df['Weekday'].cat.categories
df['Weekday'].head().cat.codes
Explanation: Here our categorical names are the integers 0 through 23, for a total of 24 unique categories. These values <em>also</em> correspond to the codes assigned to each name.
We can access the category names with <tt>Series.cat.categories</tt> or just the codes with <tt>Series.cat.codes</tt>. This will make more sense if we look at <tt>df['AMorPM']</tt>:
End of explanation
hr = df['Hour'].cat.codes.values
ampm = df['AMorPM'].cat.codes.values
wkdy = df['Weekday'].cat.codes.values
cats = np.stack([hr, ampm, wkdy], 1)
cats[:5]
Explanation: <div class="alert alert-info"><strong>NOTE: </strong>NaN values in categorical data are assigned a code of -1. We don't have any in this particular dataset.</div>
Now we want to combine the three categorical columns into one input array using <a href='https://docs.scipy.org/doc/numpy/reference/generated/numpy.stack.html'><tt>numpy.stack</tt></a> We don't want the Series index, just the values.
End of explanation
# Convert categorical variables to a tensor
cats = torch.tensor(cats, dtype=torch.int64)
# this syntax is ok, since the source data is an array, not an existing tensor
cats[:5]
Explanation: <div class="alert alert-info"><strong>NOTE:</strong> This can be done in one line of code using a list comprehension:
<pre style='background-color:rgb(217,237,247)'>cats = np.stack([df[col].cat.codes.values for col in cat_cols], 1)</pre>
Don't worry about the dtype for now, we can make it int64 when we convert it to a tensor.</div>
Convert numpy arrays to tensors
End of explanation
# Convert continuous variables to a tensor
conts = np.stack([df[col].values for col in cont_cols], 1)
conts = torch.tensor(conts, dtype=torch.float)
conts[:5]
conts.type()
Explanation: We can feed all of our continuous variables into the model as a tensor. We're not normalizing the values here; we'll let the model perform this step.
<div class="alert alert-info"><strong>NOTE:</strong> We have to store <tt>conts</tt> and <tt>y</tt> as Float (float32) tensors, not Double (float64) in order for batch normalization to work properly.</div>
End of explanation
# Convert labels to a tensor
y = torch.tensor(df[y_col].values).flatten()
y[:5]
cats.shape
conts.shape
y.shape
Explanation: Note: the CrossEntropyLoss function we'll use below expects a 1d y-tensor, so we'll replace <tt>.reshape(-1,1)</tt> with <tt>.flatten()</tt> this time.
End of explanation
# This will set embedding sizes for Hours, AMvsPM and Weekdays
cat_szs = [len(df[col].cat.categories) for col in cat_cols]
emb_szs = [(size, min(50, (size+1)//2)) for size in cat_szs]
emb_szs
Explanation: Set an embedding size
The rule of thumb for determining the embedding size is to divide the number of unique entries in each column by 2, but not to exceed 50.
End of explanation
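# Worked example of the rule of thumb (an added illustration, not in the original):
# 24 hour categories -> 12, 2 AM/PM categories -> 1, 7 weekday categories -> 4.
[(size, min(50, (size+1)//2)) for size in (24, 2, 7)]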
# This is our source data
catz = cats[:4]
catz
# This is passed in when the model is instantiated
emb_szs
# This is assigned inside the __init__() method
selfembeds = nn.ModuleList([nn.Embedding(ni, nf) for ni,nf in emb_szs])
selfembeds
list(enumerate(selfembeds))
# This happens inside the forward() method
embeddingz = []
for i,e in enumerate(selfembeds):
embeddingz.append(e(catz[:,i]))
embeddingz
# We concatenate the embedding sections (12,1,4) into one (17)
z = torch.cat(embeddingz, 1)
z
# This was assigned under the __init__() method
selfembdrop = nn.Dropout(.4)
z = selfembdrop(z)
z
Explanation: Define a TabularModel
This somewhat follows the <a href='https://docs.fast.ai/tabular.models.html'>fast.ai library</a> The goal is to define a model based on the number of continuous columns (given by <tt>conts.shape[1]</tt>) plus the number of categorical columns and their embeddings (given by <tt>len(emb_szs)</tt> and <tt>emb_szs</tt> respectively). The output would either be a regression (a single float value), or a classification (a group of bins and their softmax values). For this exercise our output will be a single regression value. Note that we'll assume our data contains both categorical and continuous data. You can add boolean parameters to your own model class to handle a variety of datasets.
<div class="alert alert-info"><strong>Let's walk through the steps we're about to take. See below for more detailed illustrations of the steps.</strong><br>
1. Extend the base Module class, set up the following parameters:
* <tt>emb_szs: </tt>list of tuples: each categorical variable size is paired with an embedding size
* <tt>n_cont: </tt>int: number of continuous variables
* <tt>out_sz: </tt>int: output size
* <tt>layers: </tt>list of ints: layer sizes
* <tt>p: </tt>float: dropout probability for each layer (for simplicity we'll use the same value throughout)
<tt><font color=black>class TabularModel(nn.Module):<br>
def \_\_init\_\_(self, emb_szs, n_cont, out_sz, layers, p=0.5):<br>
super().\_\_init\_\_()</font></tt><br>
2. Set up the embedded layers with <a href='https://pytorch.org/docs/stable/nn.html#modulelist'><tt><strong>torch.nn.ModuleList()</strong></tt></a> and <a href='https://pytorch.org/docs/stable/nn.html#embedding'><tt><strong>torch.nn.Embedding()</strong></tt></a><br>Categorical data will be filtered through these Embeddings in the forward section.<br>
<tt><font color=black> self.embeds = nn.ModuleList([nn.Embedding(ni, nf) for ni,nf in emb_szs])</font></tt><br><br>
3. Set up a dropout function for the embeddings with <a href='https://pytorch.org/docs/stable/nn.html#dropout'><tt><strong>torch.nn.Dropout()</strong></tt></a> The default p-value=0.5<br>
<tt><font color=black> self.emb_drop = nn.Dropout(emb_drop)</font></tt><br><br>
4. Set up a normalization function for the continuous variables with <a href='https://pytorch.org/docs/stable/nn.html#batchnorm1d'><tt><strong>torch.nn.BatchNorm1d()</strong></tt></a><br>
<tt><font color=black> self.bn_cont = nn.BatchNorm1d(n_cont)</font></tt><br><br>
5. Set up a sequence of neural network layers where each level includes a Linear function, an activation function (we'll use <a href='https://pytorch.org/docs/stable/nn.html#relu'><strong>ReLU</strong></a>), a normalization step, and a dropout layer. We'll combine the list of layers with <a href='https://pytorch.org/docs/stable/nn.html#sequential'><tt><strong>torch.nn.Sequential()</strong></tt></a><br>
<tt><font color=black> self.bn_cont = nn.BatchNorm1d(n_cont)<br>
layerlist = []<br>
n_emb = sum((nf for ni,nf in emb_szs))<br>
n_in = n_emb + n_cont<br>
<br>
for i in layers:<br>
layerlist.append(nn.Linear(n_in,i)) <br>
layerlist.append(nn.ReLU(inplace=True))<br>
layerlist.append(nn.BatchNorm1d(i))<br>
layerlist.append(nn.Dropout(p))<br>
n_in = i<br>
layerlist.append(nn.Linear(layers[-1],out_sz))<br>
<br>
self.layers = nn.Sequential(*layerlist)</font></tt><br><br>
6. Define the forward method. Preprocess the embeddings and normalize the continuous variables before passing them through the layers.<br>Use <a href='https://pytorch.org/docs/stable/torch.html#torch.cat'><tt><strong>torch.cat()</strong></tt></a> to combine multiple tensors into one.<br>
<tt><font color=black>def forward(self, x_cat, x_cont):<br>
embeddings = []<br>
for i,e in enumerate(self.embeds):<br>
embeddings.append(e(x_cat[:,i]))<br>
x = torch.cat(embeddings, 1)<br>
x = self.emb_drop(x)<br>
<br>
x_cont = self.bn_cont(x_cont)<br>
x = torch.cat([x, x_cont], 1)<br>
x = self.layers(x)<br>
return x</font></tt>
</div>
<div class="alert alert-danger"><strong>Breaking down the embeddings steps</strong> (this code is for illustration purposes only.)</div>
End of explanation
class TabularModel(nn.Module):
def __init__(self, emb_szs, n_cont, out_sz, layers, p=0.5):
super().__init__()
self.embeds = nn.ModuleList([nn.Embedding(ni, nf) for ni,nf in emb_szs])
self.emb_drop = nn.Dropout(p)
self.bn_cont = nn.BatchNorm1d(n_cont)
layerlist = []
n_emb = sum((nf for ni,nf in emb_szs))
n_in = n_emb + n_cont
for i in layers:
layerlist.append(nn.Linear(n_in,i))
layerlist.append(nn.ReLU(inplace=True))
layerlist.append(nn.BatchNorm1d(i))
layerlist.append(nn.Dropout(p))
n_in = i
layerlist.append(nn.Linear(layers[-1],out_sz))
self.layers = nn.Sequential(*layerlist)
def forward(self, x_cat, x_cont):
embeddings = []
for i,e in enumerate(self.embeds):
embeddings.append(e(x_cat[:,i]))
x = torch.cat(embeddings, 1)
x = self.emb_drop(x)
x_cont = self.bn_cont(x_cont)
x = torch.cat([x, x_cont], 1)
x = self.layers(x)
return x
torch.manual_seed(33)
model = TabularModel(emb_szs, conts.shape[1], 2, [200,100], p=0.4) # out_sz = 2
model
Explanation: <div class="alert alert-danger"><strong>This is how the categorical embeddings are passed into the layers.</strong></div>
End of explanation
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
Explanation: Define loss function & optimizer
For our classification we'll replace the MSE loss function with <a href='https://pytorch.org/docs/stable/nn.html#crossentropyloss'><strong><tt>torch.nn.CrossEntropyLoss()</tt></strong></a><br>
For the optimizer, we'll continue to use <a href='https://pytorch.org/docs/stable/optim.html#torch.optim.Adam'><strong><tt>torch.optim.Adam()</tt></strong></a>
End of explanation
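Before training, a tiny sanity check (values made up) can make the expected shapes concrete: CrossEntropyLoss takes raw logits of shape [batch, n_classes] and integer class labels of shape [batch].
import torch
import torch.nn as nn

logits = torch.tensor([[2.0, -1.0],
                       [0.5,  0.5]])      # model outputs for 2 rows, 2 fare classes
labels = torch.tensor([0, 1])             # true class index per row
print(nn.CrossEntropyLoss()(logits, labels).item())   # a single scalar loss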
batch_size = 60000
test_size = 12000
cat_train = cats[:batch_size-test_size]
cat_test = cats[batch_size-test_size:batch_size]
con_train = conts[:batch_size-test_size]
con_test = conts[batch_size-test_size:batch_size]
y_train = y[:batch_size-test_size]
y_test = y[batch_size-test_size:batch_size]
len(cat_train)
len(cat_test)
Explanation: Perform train/test splits
At this point our batch size is the entire dataset of 120,000 records. To save time we'll use the first 60,000. Recall that our tensors are already randomly shuffled.
End of explanation
import time
start_time = time.time()
epochs = 300
losses = []
for i in range(epochs):
i+=1
y_pred = model(cat_train, con_train)
loss = criterion(y_pred, y_train)
    losses.append(loss.item())  # store plain floats so the list is easy to plot later
# a neat trick to save screen space:
if i%25 == 1:
print(f'epoch: {i:3} loss: {loss.item():10.8f}')
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f'epoch: {i:3} loss: {loss.item():10.8f}') # print the last line
print(f'\nDuration: {time.time() - start_time:.0f} seconds') # print the time elapsed
Explanation: Train the model
Expect this to take 30 minutes or more! We've added code to tell us the duration at the end.
End of explanation
plt.plot(range(epochs), losses)
plt.ylabel('Cross Entropy Loss')
plt.xlabel('epoch');
Explanation: Plot the loss function
End of explanation
# TO EVALUATE THE ENTIRE TEST SET
with torch.no_grad():
y_val = model(cat_test, con_test)
loss = criterion(y_val, y_test)
print(f'CE Loss: {loss:.8f}')
Explanation: Validate the model
End of explanation
rows = 50
correct = 0
print(f'{"MODEL OUTPUT":26} ARGMAX Y_TEST')
for i in range(rows):
print(f'{str(y_val[i]):26} {y_val[i].argmax():^7}{y_test[i]:^7}')
if y_val[i].argmax().item() == y_test[i]:
correct += 1
print(f'\n{correct} out of {rows} = {100*correct/rows:.2f}% correct')
Explanation: Now let's look at the first 50 predicted values
End of explanation
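To summarize performance over the whole test set rather than just the first 50 rows, the same argmax comparison can be vectorized (this assumes y_val and y_test from the cells above are still in memory):
correct_all = (y_val.argmax(dim=1) == y_test).sum().item()
print(f'{correct_all} out of {len(y_test)} = {100*correct_all/len(y_test):.2f}% correct')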
# Make sure to save the model only after the training has happened!
if len(losses) == epochs:
torch.save(model.state_dict(), 'TaxiFareClssModel.pt')
else:
print('Model has not been trained. Consider loading a trained model instead.')
Explanation: Save the model
Save the trained model to a file in case you want to come back later and feed new data through it.
End of explanation
import torch
import torch.nn as nn
import numpy as np
import pandas as pd
def haversine_distance(df, lat1, long1, lat2, long2):
r = 6371
phi1 = np.radians(df[lat1])
phi2 = np.radians(df[lat2])
delta_phi = np.radians(df[lat2]-df[lat1])
delta_lambda = np.radians(df[long2]-df[long1])
a = np.sin(delta_phi/2)**2 + np.cos(phi1) * np.cos(phi2) * np.sin(delta_lambda/2)**2
c = 2 * np.arctan2(np.sqrt(a), np.sqrt(1-a))
return r * c
class TabularModel(nn.Module):
def __init__(self, emb_szs, n_cont, out_sz, layers, p=0.5):
super().__init__()
self.embeds = nn.ModuleList([nn.Embedding(ni, nf) for ni,nf in emb_szs])
self.emb_drop = nn.Dropout(p)
self.bn_cont = nn.BatchNorm1d(n_cont)
layerlist = []
n_emb = sum((nf for ni,nf in emb_szs))
n_in = n_emb + n_cont
for i in layers:
layerlist.append(nn.Linear(n_in,i))
layerlist.append(nn.ReLU(inplace=True))
layerlist.append(nn.BatchNorm1d(i))
layerlist.append(nn.Dropout(p))
n_in = i
layerlist.append(nn.Linear(layers[-1],out_sz))
self.layers = nn.Sequential(*layerlist)
def forward(self, x_cat, x_cont):
embeddings = []
for i,e in enumerate(self.embeds):
embeddings.append(e(x_cat[:,i]))
x = torch.cat(embeddings, 1)
x = self.emb_drop(x)
x_cont = self.bn_cont(x_cont)
x = torch.cat([x, x_cont], 1)
return self.layers(x)
Explanation: Loading a saved model (starting from scratch)
We can load the trained weights and biases from a saved model. If we've just opened the notebook, we'll have to run standard imports and function definitions. To demonstrate, restart the kernel before proceeding.
End of explanation
emb_szs = [(24, 12), (2, 1), (7, 4)]
model2 = TabularModel(emb_szs, 6, 2, [200,100], p=0.4)
Explanation: Now define the model. Before we can load the saved settings, we need to instantiate our TabularModel with the parameters we used before (embedding sizes, number of continuous columns, output size, layer sizes, and dropout layer p-value).
End of explanation
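For reference, these sizes match a common rule of thumb — embedding dimension = min(50, (category count + 1) // 2) for 24 hours, 2 AM/PM values, and 7 weekdays. Whether the original notebook derived them exactly this way is an assumption; the numbers just happen to agree:
cat_szs = [24, 2, 7]
print([(size, min(50, (size + 1) // 2)) for size in cat_szs])   # [(24, 12), (2, 1), (7, 4)]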
model2.load_state_dict(torch.load('TaxiFareClssModel.pt'));
model2.eval() # be sure to run this step!
Explanation: Once the model is set up, loading the saved settings is a snap.
End of explanation
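If the weights were saved on a GPU machine and you are loading them on CPU (or vice versa), torch.load accepts a map_location argument; a minimal sketch of the CPU case:
state = torch.load('TaxiFareClssModel.pt', map_location=torch.device('cpu'))
model2.load_state_dict(state)
model2.eval()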
def test_data(mdl): # pass in the name of the new model
# INPUT NEW DATA
plat = float(input('What is the pickup latitude? '))
plong = float(input('What is the pickup longitude? '))
dlat = float(input('What is the dropoff latitude? '))
dlong = float(input('What is the dropoff longitude? '))
psngr = int(input('How many passengers? '))
dt = input('What is the pickup date and time?\nFormat as YYYY-MM-DD HH:MM:SS ')
# PREPROCESS THE DATA
dfx_dict = {'pickup_latitude':plat,'pickup_longitude':plong,'dropoff_latitude':dlat,
'dropoff_longitude':dlong,'passenger_count':psngr,'EDTdate':dt}
dfx = pd.DataFrame(dfx_dict, index=[0])
dfx['dist_km'] = haversine_distance(dfx,'pickup_latitude', 'pickup_longitude',
'dropoff_latitude', 'dropoff_longitude')
dfx['EDTdate'] = pd.to_datetime(dfx['EDTdate'])
# We can skip the .astype(category) step since our fields are small,
# and encode them right away
dfx['Hour'] = dfx['EDTdate'].dt.hour
dfx['AMorPM'] = np.where(dfx['Hour']<12,0,1)
dfx['Weekday'] = dfx['EDTdate'].dt.strftime("%a")
dfx['Weekday'] = dfx['Weekday'].replace(['Fri','Mon','Sat','Sun','Thu','Tue','Wed'],
[0,1,2,3,4,5,6]).astype('int64')
# CREATE CAT AND CONT TENSORS
cat_cols = ['Hour', 'AMorPM', 'Weekday']
cont_cols = ['pickup_latitude', 'pickup_longitude', 'dropoff_latitude',
'dropoff_longitude', 'passenger_count', 'dist_km']
xcats = np.stack([dfx[col].values for col in cat_cols], 1)
xcats = torch.tensor(xcats, dtype=torch.int64)
xconts = np.stack([dfx[col].values for col in cont_cols], 1)
xconts = torch.tensor(xconts, dtype=torch.float)
# PASS NEW DATA THROUGH THE MODEL WITHOUT PERFORMING A BACKPROP
with torch.no_grad():
z = mdl(xcats, xconts).argmax().item()
print(f'\nThe predicted fare class is {z}')
Explanation: Next we'll define a function that takes in new parameters from the user, performs all of the preprocessing steps above, and passes the new data through our trained model.
End of explanation
test_data(model2)
Explanation: Feed new data through the trained model
For convenience, here are the max and min values for each of the variables:
<table style="display: inline-block">
<tr><th>Column</th><th>Minimum</th><th>Maximum</th></tr>
<tr><td>pickup_latitude</td><td>40</td><td>41</td></tr>
<tr><td>pickup_longitude</td><td>-74.5</td><td>-73.3</td></tr>
<tr><td>dropoff_latitude</td><td>40</td><td>41</td></tr>
<tr><td>dropoff_longitude</td><td>-74.5</td><td>-73.3</td></tr>
<tr><td>passenger_count</td><td>1</td><td>5</td></tr>
<tr><td>EDTdate</td><td>2010-04-11 00:00:00</td><td>2010-04-24 23:59:42</td></tr>
</table>
<strong>Use caution!</strong> The distance spanned by 1 degree of latitude (from 40 to 41) is about 111 km (69 mi), and by 1 degree of longitude (from -73 to -74) about 85 km (53 mi). The longest cab ride in the dataset spanned a difference of only 0.243 degrees latitude and 0.284 degrees longitude; the mean difference for both was about 0.02. To get a fair prediction, use pickup and dropoff values that fall close to one another.
End of explanation |
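A quick sanity check of the scale, using made-up coordinates near the dataset's typical values and the haversine_distance helper defined above — a trip spanning about 0.02 degrees in each direction works out to only a couple of kilometres:
check = pd.DataFrame({'pickup_latitude': [40.75], 'pickup_longitude': [-73.99],
                      'dropoff_latitude': [40.77], 'dropoff_longitude': [-73.97]})
print(haversine_distance(check, 'pickup_latitude', 'pickup_longitude',
                         'dropoff_latitude', 'dropoff_longitude').iloc[0])   # roughly 2.8 km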
13,898 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2018 The TensorFlow Authors.
Step1: Title
Notebook originally contributed by
Step2: {Put all your imports and installs up into a setup section.}
Notes
For general instructions on how to write docs for Tensorflow see Writing TensorFlow Documentation.
The tips below are specific to notebooks for tensorflow.
General
Include the collapsed license at the top (this uses Colab's "Form" mode to hide the cells).
Only include a single H1 title.
Include the button-bar immediately under the H1.
Include an overview section before any code.
Put all your installs and imports in a setup section.
Always include the three __future__ imports.
Save the notebook with the Table of Contents open.
Write python3 compatible code.
Keep cells small (~max 20 lines).
Working in GitHub
Be consistent about how you save your notebooks, otherwise the JSON-diffs will be a mess.
This notebook has the "Omit code cell output when saving this notebook" option set. GitHub refuses to diff notebooks with large diffs (inline images).
reviewnb.com may help. You can access it using this bookmarklet | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2018 The TensorFlow Authors.
End of explanation
import tensorflow as tf
import numpy as np
Explanation: Title
Notebook originally contributed by: {link to you}
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/examples/blob/master/template/notebook.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/examples/blob/master/template/notebook.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
{Fix these links}
Overview
{Include a paragraph or two explaining what this example demonstrates, who should be interested in it, and what you need to know before you get started.}
Setup
End of explanation
#Build the model
model = tf.keras.Sequential([
tf.keras.layers.Dense(10, activation='relu', input_shape=(None, 5)),
tf.keras.layers.Dense(3)
])
# Run the model on a single batch of data, and inspect the output.
result = model(tf.constant(np.random.randn(10,5), dtype = tf.float32)).numpy()
print("min:", result.min())
print("max:", result.max())
print("mean:", result.mean())
print("shape:", result.shape)
# Compile the model for training
model.compile(optimizer=tf.keras.optimizers.Adam(),
loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True))
Explanation: {Put all your imports and installs up into a setup section.}
Notes
For general instructions on how to write docs for Tensorflow see Writing TensorFlow Documentation.
The tips below are specific to notebooks for tensorflow.
General
Include the collapsed license at the top (this uses Colab's "Form" mode to hide the cells).
Only include a single H1 title.
Include the button-bar immediately under the H1.
Include an overview section before any code.
Put all your installs and imports in a setup section.
Always include the three __future__ imports.
Save the notebook with the Table of Contents open.
Write python3 compatible code.
Keep cells small (~max 20 lines).
Working in GitHub
Be consistent about how you save your notebooks, otherwise the JSON-diffs will be a mess.
This notebook has the "Omit code cell output when saving this notebook" option set. GitHub refuses to diff notebooks with large diffs (inline images).
reviewnb.com may help. You can access it using this bookmarklet:
javascript:(function(){ window.open(window.location.toString().replace(/github\.com/, 'app.reviewnb.com').replace(/files$/,"")); })()
To open a GitHub notebook in Colab use the Open in Colab extension (or make a bookmarklet).
The easiest way to edit a notebook in GitHub is to open it with Colab from the branch you want to edit. Then use File --> Save a copy in GitHub, which will save it back to the branch you opened it from.
For PRs it's helpful to post a direct Colab link to the PR head: https://colab.research.google.com/github/{user}/{repo}/blob/{branch}/{path}.ipynb
Code Style
Notebooks are for people. Write code optimized for clarity.
Demonstrate small parts before combining them into something more complex. Like below:
End of explanation |
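In the same spirit of small, verifiable cells, one more tiny check might evaluate the compiled loss on the batch output above (the one-hot targets here are random and purely illustrative):
targets = tf.one_hot(np.random.randint(0, 3, size=10), depth=3)
loss_fn = tf.keras.losses.CategoricalCrossentropy(from_logits=True)
print("loss:", loss_fn(targets, result).numpy())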
13,899 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Chapter 10
Examples and Exercises from Think Stats, 2nd Edition
http://thinkstats2.com
Step1: Least squares
One more time, let's load up the NSFG data.
Step2: The following function computes the intercept and slope of the least squares fit.
Step3: Here's the least squares fit to birth weight as a function of mother's age.
Step4: The intercept is often easier to interpret if we evaluate it at the mean of the independent variable.
Step5: And the slope is easier to interpret if we express it in pounds per decade (or ounces per year).
Step6: The following function evaluates the fitted line at the given xs.
Step7: And here's an example.
Step8: Here's a scatterplot of the data with the fitted line.
Step9: Residuals
The following function computes the residuals.
Step10: Now we can add the residuals as a column in the DataFrame.
Step11: To visualize the residuals, I'll split the respondents into groups by age, then plot the percentiles of the residuals versus the average age in each group.
First I'll make the groups and compute the average age in each group.
Step12: Next I'll compute the CDF of the residuals in each group.
Step13: The following function plots percentiles of the residuals against the average age in each group.
Step14: The following figure shows the 25th, 50th, and 75th percentiles.
Curvature in the residuals suggests a non-linear relationship.
Step17: Sampling distribution
To estimate the sampling distribution of inter and slope, I'll use resampling.
Step18: The following function resamples the given dataframe and returns lists of estimates for inter and slope.
Step19: Here's an example.
Step20: The following function takes a list of estimates and prints the mean, standard error, and 90% confidence interval.
Step21: Here's the summary for inter.
Step22: And for slope.
Step23: Exercise
Step24: Or we can make a neater (and more efficient) plot by computing fitted lines and finding percentiles of the fits at each value of the independent variable.
Step25: This example shows the confidence interval for the fitted values at each mother's age.
Step26: Coefficient of determination
The coefficient compares the variance of the residuals to the variance of the dependent variable.
Step27: For birth weight and mother's age $R^2$ is very small, indicating that the mother's age predicts a small part of the variance in birth weight.
Step28: We can confirm that $R^2 = \rho^2$
Step29: To express predictive power, I think it's useful to compare the standard deviation of the residuals to the standard deviation of the dependent variable, as a measure of the RMSE if you try to guess birth weight with and without taking mother's age into account.
Step30: As another example of the same idea, here's how much we can improve guesses about IQ if we know someone's SAT scores.
Step31: Hypothesis testing with slopes
Here's a HypothesisTest that uses permutation to test whether the observed slope is statistically significant.
Step32: And it is.
Step33: Under the null hypothesis, the largest slope we observe after 1000 tries is substantially less than the observed value.
Step34: We can also use resampling to estimate the sampling distribution of the slope.
Step35: The distribution of slopes under the null hypothesis, and the sampling distribution of the slope under resampling, have the same shape, but one has mean at 0 and the other has mean at the observed slope.
To compute a p-value, we can count how often the estimated slope under the null hypothesis exceeds the observed slope, or how often the estimated slope under resampling falls below 0.
Step36: Here's how to get a p-value from the sampling distribution.
Step37: Resampling with weights
Resampling provides a convenient way to take into account the sampling weights associated with respondents in a stratified survey design.
The following function resamples rows with probabilities proportional to weights.
Step38: We can use it to estimate the mean birthweight and compute SE and CI.
Step39: And here's what the same calculation looks like if we ignore the weights.
Step40: The difference is non-negligible, which suggests that there are differences in birth weight between the strata in the survey.
Exercises
Exercise | Python Code:
from os.path import basename, exists
def download(url):
filename = basename(url)
if not exists(filename):
from urllib.request import urlretrieve
local, _ = urlretrieve(url, filename)
print("Downloaded " + local)
download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/thinkstats2.py")
download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/thinkplot.py")
import numpy as np
import random
import thinkstats2
import thinkplot
Explanation: Chapter 10
Examples and Exercises from Think Stats, 2nd Edition
http://thinkstats2.com
Copyright 2016 Allen B. Downey
MIT License: https://opensource.org/licenses/MIT
End of explanation
download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/nsfg.py")
download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/first.py")
download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/2002FemPreg.dct")
download(
"https://github.com/AllenDowney/ThinkStats2/raw/master/code/2002FemPreg.dat.gz"
)
import first
live, firsts, others = first.MakeFrames()
live = live.dropna(subset=['agepreg', 'totalwgt_lb'])
ages = live.agepreg
weights = live.totalwgt_lb
Explanation: Least squares
One more time, let's load up the NSFG data.
End of explanation
from thinkstats2 import Mean, MeanVar, Var, Std, Cov
def LeastSquares(xs, ys):
meanx, varx = MeanVar(xs)
meany = Mean(ys)
slope = Cov(xs, ys, meanx, meany) / varx
inter = meany - slope * meanx
return inter, slope
Explanation: The following function computes the intercept and slope of the least squares fit.
End of explanation
inter, slope = LeastSquares(ages, weights)
inter, slope
Explanation: Here's the least squares fit to birth weight as a function of mother's age.
End of explanation
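As an optional cross-check (not part of the book's code), np.polyfit should agree with LeastSquares up to floating-point noise; note that polyfit returns the highest-degree coefficient first:
slope_np, inter_np = np.polyfit(ages, weights, 1)
print(inter_np, slope_np)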
inter + slope * 25
Explanation: The intercept is often easier to interpret if we evaluate it at the mean of the independent variable.
End of explanation
slope * 10
Explanation: And the slope is easier to interpret if we express it in pounds per decade (or ounces per year).
End of explanation
def FitLine(xs, inter, slope):
fit_xs = np.sort(xs)
fit_ys = inter + slope * fit_xs
return fit_xs, fit_ys
Explanation: The following function evaluates the fitted line at the given xs.
End of explanation
fit_xs, fit_ys = FitLine(ages, inter, slope)
Explanation: And here's an example.
End of explanation
thinkplot.Scatter(ages, weights, color='blue', alpha=0.1, s=10)
thinkplot.Plot(fit_xs, fit_ys, color='white', linewidth=3)
thinkplot.Plot(fit_xs, fit_ys, color='red', linewidth=2)
thinkplot.Config(xlabel="Mother's age (years)",
ylabel='Birth weight (lbs)',
axis=[10, 45, 0, 15],
legend=False)
Explanation: Here's a scatterplot of the data with the fitted line.
End of explanation
def Residuals(xs, ys, inter, slope):
xs = np.asarray(xs)
ys = np.asarray(ys)
res = ys - (inter + slope * xs)
return res
Explanation: Residuals
The following function computes the residuals.
End of explanation
live['residual'] = Residuals(ages, weights, inter, slope)
Explanation: Now we can add the residuals as a column in the DataFrame.
End of explanation
bins = np.arange(10, 48, 3)
indices = np.digitize(live.agepreg, bins)
groups = live.groupby(indices)
age_means = [group.agepreg.mean() for _, group in groups][1:-1]
age_means
Explanation: To visualize the residuals, I'll split the respondents into groups by age, then plot the percentiles of the residuals versus the average age in each group.
First I'll make the groups and compute the average age in each group.
End of explanation
cdfs = [thinkstats2.Cdf(group.residual) for _, group in groups][1:-1]
Explanation: Next I'll compute the CDF of the residuals in each group.
End of explanation
def PlotPercentiles(age_means, cdfs):
thinkplot.PrePlot(3)
for percent in [75, 50, 25]:
weight_percentiles = [cdf.Percentile(percent) for cdf in cdfs]
label = '%dth' % percent
thinkplot.Plot(age_means, weight_percentiles, label=label)
Explanation: The following function plots percentiles of the residuals against the average age in each group.
End of explanation
PlotPercentiles(age_means, cdfs)
thinkplot.Config(xlabel="Mother's age (years)",
ylabel='Residual (lbs)',
xlim=[10, 45])
Explanation: The following figure shows the 25th, 50th, and 75th percentiles.
Curvature in the residuals suggests a non-linear relationship.
End of explanation
def SampleRows(df, nrows, replace=False):
    """Choose a sample of rows from a DataFrame.

    df: DataFrame
    nrows: number of rows
    replace: whether to sample with replacement

    returns: DataFrame
    """
    indices = np.random.choice(df.index, nrows, replace=replace)
    sample = df.loc[indices]
    return sample

def ResampleRows(df):
    """Resamples rows from a DataFrame.

    df: DataFrame

    returns: DataFrame
    """
    return SampleRows(df, len(df), replace=True)
Explanation: Sampling distribution
To estimate the sampling distribution of inter and slope, I'll use resampling.
End of explanation
def SamplingDistributions(live, iters=101):
t = []
for _ in range(iters):
sample = ResampleRows(live)
ages = sample.agepreg
weights = sample.totalwgt_lb
estimates = LeastSquares(ages, weights)
t.append(estimates)
inters, slopes = zip(*t)
return inters, slopes
Explanation: The following function resamples the given dataframe and returns lists of estimates for inter and slope.
End of explanation
inters, slopes = SamplingDistributions(live, iters=1001)
Explanation: Here's an example.
End of explanation
def Summarize(estimates, actual=None):
mean = Mean(estimates)
stderr = Std(estimates, mu=actual)
cdf = thinkstats2.Cdf(estimates)
ci = cdf.ConfidenceInterval(90)
print('mean, SE, CI', mean, stderr, ci)
Explanation: The following function takes a list of estimates and prints the mean, standard error, and 90% confidence interval.
End of explanation
Summarize(inters)
Explanation: Here's the summary for inter.
End of explanation
Summarize(slopes)
Explanation: And for slope.
End of explanation
for slope, inter in zip(slopes, inters):
fxs, fys = FitLine(age_means, inter, slope)
thinkplot.Plot(fxs, fys, color='gray', alpha=0.01)
thinkplot.Config(xlabel="Mother's age (years)",
ylabel='Residual (lbs)',
xlim=[10, 45])
Explanation: Exercise: Use ResampleRows and generate a list of estimates for the mean birth weight. Use Summarize to compute the SE and CI for these estimates.
Visualizing uncertainty
To show the uncertainty of the estimated slope and intercept, we can generate a fitted line for each resampled estimate and plot them on top of each other.
End of explanation
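A possible sketch of the exercise above (the number of iterations is arbitrary):
mean_estimates = [ResampleRows(live).totalwgt_lb.mean() for _ in range(101)]
Summarize(mean_estimates)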
def PlotConfidenceIntervals(xs, inters, slopes, percent=90, **options):
fys_seq = []
for inter, slope in zip(inters, slopes):
fxs, fys = FitLine(xs, inter, slope)
fys_seq.append(fys)
p = (100 - percent) / 2
percents = p, 100 - p
low, high = thinkstats2.PercentileRows(fys_seq, percents)
thinkplot.FillBetween(fxs, low, high, **options)
Explanation: Or we can make a neater (and more efficient) plot by computing fitted lines and finding percentiles of the fits at each value of the independent variable.
End of explanation
PlotConfidenceIntervals(age_means, inters, slopes, percent=90,
color='gray', alpha=0.3, label='90% CI')
PlotConfidenceIntervals(age_means, inters, slopes, percent=50,
color='gray', alpha=0.5, label='50% CI')
thinkplot.Config(xlabel="Mother's age (years)",
ylabel='Residual (lbs)',
xlim=[10, 45])
Explanation: This example shows the confidence interval for the fitted values at each mother's age.
End of explanation
def CoefDetermination(ys, res):
return 1 - Var(res) / Var(ys)
Explanation: Coefficient of determination
The coefficient compares the variance of the residuals to the variance of the dependent variable.
End of explanation
inter, slope = LeastSquares(ages, weights)
res = Residuals(ages, weights, inter, slope)
r2 = CoefDetermination(weights, res)
r2
Explanation: For birth weight and mother's age $R^2$ is very small, indicating that the mother's age predicts a small part of the variance in birth weight.
End of explanation
print('rho', thinkstats2.Corr(ages, weights))
print('R', np.sqrt(r2))
Explanation: We can confirm that $R^2 = \rho^2$:
End of explanation
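A quick numeric check (mine, not the book's) makes the identity explicit:
print(np.allclose(thinkstats2.Corr(ages, weights)**2, r2))   # True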
print('Std(ys)', Std(weights))
print('Std(res)', Std(res))
Explanation: To express predictive power, I think it's useful to compare the standard deviation of the residuals to the standard deviation of the dependent variable, as a measure of the RMSE if you try to guess birth weight with and without taking mother's age into account.
End of explanation
var_ys = 15**2
rho = 0.72
r2 = rho**2
var_res = (1 - r2) * var_ys
std_res = np.sqrt(var_res)
std_res
Explanation: As another example of the same idea, here's how much we can improve guesses about IQ if we know someone's SAT scores.
End of explanation
class SlopeTest(thinkstats2.HypothesisTest):
def TestStatistic(self, data):
ages, weights = data
_, slope = thinkstats2.LeastSquares(ages, weights)
return slope
def MakeModel(self):
_, weights = self.data
self.ybar = weights.mean()
self.res = weights - self.ybar
def RunModel(self):
ages, _ = self.data
weights = self.ybar + np.random.permutation(self.res)
return ages, weights
Explanation: Hypothesis testing with slopes
Here's a HypothesisTest that uses permutation to test whether the observed slope is statistically significant.
End of explanation
ht = SlopeTest((ages, weights))
pvalue = ht.PValue()
pvalue
Explanation: And it is.
End of explanation
ht.actual, ht.MaxTestStat()
Explanation: Under the null hypothesis, the largest slope we observe after 1000 tries is substantially less than the observed value.
End of explanation
sampling_cdf = thinkstats2.Cdf(slopes)
Explanation: We can also use resampling to estimate the sampling distribution of the slope.
End of explanation
thinkplot.PrePlot(2)
thinkplot.Plot([0, 0], [0, 1], color='0.8')
ht.PlotCdf(label='null hypothesis')
thinkplot.Cdf(sampling_cdf, label='sampling distribution')
thinkplot.Config(xlabel='slope (lbs / year)',
ylabel='CDF',
xlim=[-0.03, 0.03],
legend=True, loc='upper left')
Explanation: The distribution of slopes under the null hypothesis, and the sampling distribution of the slope under resampling, have the same shape, but one has mean at 0 and the other has mean at the observed slope.
To compute a p-value, we can count how often the estimated slope under the null hypothesis exceeds the observed slope, or how often the estimated slope under resampling falls below 0.
End of explanation
pvalue = sampling_cdf[0]
pvalue
Explanation: Here's how to get a p-value from the sampling distribution.
End of explanation
def ResampleRowsWeighted(df, column='finalwgt'):
weights = df[column]
cdf = thinkstats2.Cdf(dict(weights))
indices = cdf.Sample(len(weights))
sample = df.loc[indices]
return sample
Explanation: Resampling with weights
Resampling provides a convenient way to take into account the sampling weights associated with respondents in a stratified survey design.
The following function resamples rows with probabilities proportional to weights.
End of explanation
iters = 100
estimates = [ResampleRowsWeighted(live).totalwgt_lb.mean()
for _ in range(iters)]
Summarize(estimates)
Explanation: We can use it to estimate the mean birthweight and compute SE and CI.
End of explanation
estimates = [thinkstats2.ResampleRows(live).totalwgt_lb.mean()
for _ in range(iters)]
Summarize(estimates)
Explanation: And here's what the same calculation looks like if we ignore the weights.
End of explanation
download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/brfss.py")
download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/CDBRFS08.ASC.gz")
import brfss
df = brfss.ReadBrfss(nrows=None)
df = df.dropna(subset=['htm3', 'wtkg2'])
heights, weights = df.htm3, df.wtkg2
log_weights = np.log10(weights)
Explanation: The difference is non-negligible, which suggests that there are differences in birth weight between the strata in the survey.
Exercises
Exercise: Using the data from the BRFSS, compute the linear least squares fit for log(weight) versus height. How would you best present the estimated parameters for a model like this where one of the variables is log-transformed? If you were trying to guess someone’s weight, how much would it help to know their height?
Like the NSFG, the BRFSS oversamples some groups and provides a sampling weight for each respondent. In the BRFSS data, the variable name for these weights is totalwt. Use resampling, with and without weights, to estimate the mean height of respondents in the BRFSS, the standard error of the mean, and a 90% confidence interval. How much does correct weighting affect the estimates?
Read the BRFSS data and extract heights and log weights.
End of explanation |
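One possible sketch of the first part of this exercise, using only the functions defined above (the multiplicative interpretation of the slope on the log10 scale is my gloss, not the book's). For the weighted-resampling part, use ResampleRowsWeighted with the BRFSS weight column — check df.columns for its exact name, since the exercise text calls it totalwt:
inter, slope = LeastSquares(heights, log_weights)
print('intercept:', inter, 'slope:', slope)
print('factor per cm of height:', 10**slope)   # each extra cm multiplies predicted weight by 10**slope

res = Residuals(heights, log_weights, inter, slope)
print('R^2:', CoefDetermination(log_weights, res))
print('Std(log_weights):', Std(log_weights))
print('Std(res):        ', Std(res))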